{ "data": { "posts": { "results": [ { "_id": "ywZs3bAAnubux2REg", "title": "A New Day", "pageUrl": "https://www.lesswrong.com/posts/ywZs3bAAnubux2REg/a-new-day", "postedAt": "2008-12-31T18:40:27.000Z", "baseScore": 47, "voteCount": 29, "commentCount": 13, "url": null, "contents": { "documentId": "ywZs3bAAnubux2REg", "html": "

Somewhere in the vastnesses of the Internet and the almost equally impenetrable thicket of my bookmark collection, there is a post by someone who was learning Zen meditation...

Someone who was surprised by how many of the thoughts that crossed his mind, as he tried to meditate, were old thoughts - thoughts he had thunk many times before.  He was successful in banishing these old thoughts, but did he succeed in meditating?  No; once the comfortable routine thoughts were banished, new and interesting and more distracting thoughts began to cross his mind instead.

I was struck, on reading this, how much of my life I had allowed to fall into routine patterns.  Once you actually see that, it takes on a nightmarish quality:  You can imagine your fraction of novelty diminishing and diminishing, so slowly you never take alarm, until finally you spend until the end of time watching the same videos over and over again, and thinking the same thoughts each time.

Sometime in the next week - January 1st if you have that available, or maybe January 3rd or 4th if the weekend is more convenient - I suggest you hold a New Day, where you don't do anything old.

Don't read any book you've read before.  Don't read any author you've read before.  Don't visit any website you've visited before.  Don't play any game you've played before.  Don't listen to familiar music that you already know you'll like.  If you go on a walk, walk along a new path even if you have to drive to a different part of the city for your walk.  Don't go to any restaurant you've been to before; order a dish that you haven't had before.  Talk to new people (even if you have to find them in an IRC channel) about something you don't spend much time discussing.

And most of all, if you become aware of yourself musing on any thought you've thunk before, then muse on something else.  Rehearse no old grievances, replay no old fantasies.

If it works, you could make it a holiday tradition, and do it every New Year.

" } }, { "_id": "W5PhyEQqEWTcpRpqn", "title": "Dunbar's Function", "pageUrl": "https://www.lesswrong.com/posts/W5PhyEQqEWTcpRpqn/dunbar-s-function", "postedAt": "2008-12-31T02:26:02.000Z", "baseScore": 68, "voteCount": 40, "commentCount": 65, "url": null, "contents": { "documentId": "W5PhyEQqEWTcpRpqn", "html": "

The study of eudaimonic community sizes began with a seemingly silly method of calculation:  Robin Dunbar calculated the correlation between the (logs of the) relative volume of the neocortex and observed group size in primates, then extended the graph outward to get the group size for a primate with a human-sized neocortex.  You immediately ask, "How much of the variance in primate group size can you explain like that, anyway?" and the answer is 76% of the variance among 36 primate genera, which is respectable.  Dunbar came up with a group size of 148.  Rounded to 150, and with the confidence interval of 100 to 230 tossed out the window, this became known as "Dunbar's Number".
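
(As a concrete aside for the quantitatively minded: the whole method is a one-line log-log regression.  Below is a minimal Python sketch of its shape; the human neocortex ratio of about 4.1 is roughly the figure Dunbar used, but the primate data points are illustrative placeholders, not his dataset.)

    import numpy as np

    # Sketch of Dunbar's method: regress log(group size) against
    # log(neocortex ratio) across primate genera, then extrapolate
    # to the human value.  These data points are hypothetical
    # placeholders, NOT Dunbar's actual dataset.
    neocortex_ratio = np.array([1.2, 1.6, 2.1, 2.6, 3.0])
    group_size = np.array([8.0, 15.0, 25.0, 45.0, 65.0])

    slope, intercept = np.polyfit(np.log(neocortex_ratio), np.log(group_size), 1)

    HUMAN_RATIO = 4.1  # roughly the neocortex ratio Dunbar used for humans
    predicted = np.exp(intercept + slope * np.log(HUMAN_RATIO))
    print(f"Extrapolated human group size: {predicted:.0f}")
    # Dunbar's own regression gave ~148, with a confidence interval
    # of roughly 100 to 230 -- wide enough to justify skepticism.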

It's probably fair to say that a literal interpretation of this number is more or less bogus.

There was a bit more to it than that, of course.  Dunbar went looking for corroborative evidence from studies of corporations, hunter-gatherer tribes, and utopian communities.  Hutterite farming communities, for example, had a rule that they must split at 150—with the rationale explicitly given that it was impossible to control behavior through peer pressure beyond that point.

But 30-50 would be a typical size for a cohesive hunter-gatherer band; 150 is more the size of a cultural lineage of related bands.  Life With Alacrity has an excellent series on Dunbar's Number which exhibits e.g. a histogram of Ultima Online guild sizes—with the peak at 60, not 150.  LWA also cites further research by PARC's Yee and Ducheneaut showing that maximum internal cohesiveness, measured in the interconnectedness of group members, occurs at a World of Warcraft guild size of 50.  (Stop laughing; you can get much more detailed data on organizational dynamics if it all happens inside a computer server.)

And Dunbar himself did another regression and found that a community of 150 primates would have to spend 43% of its time on social grooming, which Dunbar interpreted as suggesting that 150 was an upper bound rather than an optimum, when groups were highly incentivized to stay together.  150 people does sound like a lot of employees for a tight-knit startup, doesn't it?

Also from Life With Alacrity:

    A group of 3 is often unstable, with one person feeling left out, or else one person controlling the others by being the "split" vote.  A group of 4 often devolves into two pairs...  At 5 to 8 people, you can have a meeting where everyone can speak out about what the entire group is doing, and everyone feels highly empowered.  However, at 9 to 12 people this begins to break down—not enough "attention" is given to everyone and meetings risk becoming either too noisy, too boring, too long, or some combination thereof.

    As you grow past 12 or so employees, you must start specializing and having departments and direct reports; however, you are not quite large enough for this to be efficient, and thus much employee time that you put toward management tasks is wasted.  Only as you approach and pass 25 people does having simple departments and managers begin to work again...

    I've already noted the next chasm when you go beyond 80 people, which I think is the point that Dunbar's Number actually marks for a non-survival oriented group.  Even at this lower point, the noise level created by required socialization becomes an issue, and filtering becomes essential.  As you approach 150 this begins to be unmanageable...

LWA suggests that community satisfaction has two peaks, one at size ~7 for simple groups, and one at ~60 for complex groups; and that any community has to fraction, one way or another, by the time it approaches Dunbar's Number.

One of the primary principles of evolutionary psychology is that "Our modern skulls house a stone age mind" (saith Tooby and Cosmides).  You can interpret all sorts of angst as the friction of a stone age mind rubbing against a modern world that isn't like the hunter-gatherer environment the brain evolved to handle.

We may not directly interact with most of the other six billion people in the world, but we still live in a world much larger than Dunbar's Number.

Or to say it with appropriate generality: taking our current brain size and mind design as the input, we live in a world much larger than Dunbar's Function for minds of our type.

Consider some of the consequences:

If you work in a large company, you probably don't know your tribal chief on any personal level, and may not even be able to get access to him.  For every rule within your company, you may not know the person who decided on that rule, and have no realistic way to talk to them about the effects of that rule on you.  Large amounts of the organizational structure of your life are beyond your ability to control, or even talk about with the controllers; directives that have major effects on you, may be handed down from a level you can't reach.

If you live in a large country, you probably don't know your President or Prime Minister on a personal level, and may not even be able to get a few hours' chat; you live under laws and regulations that you didn't make, and you can't talk to the people who made them.

This is a non-ancestral condition.  Even children, while they may live under the dictatorial rule of their parents, can at least personally meet and talk to their tyrants. You could expect this unnatural (that is, non-EEA) condition to create some amount of anomie.

Though it's a side issue, what's even more... interesting... is the way that our brains simply haven't updated to their diminished power in a super-Dunbarian world.  We just go on debating politics, feverishly applying our valuable brain time to finding better ways to run the world, with just the same fervent intensity that would be appropriate if we were in a small tribe where we could persuade people to change things.

If people don't like being part of large organizations and countries, why do they stick around?  Because of another non-ancestral condition—you can't just gather your more sensible friends, leave the band, and gather nuts and berries somewhere else.  If I had to cite two non-regulatory barriers at work, it would be (a) the cost of capital equipment, and (b) the surrounding web of contacts and contracts—a web of installed relationships not easily duplicated by a new company.

I suspect that this is a major part of where the stereotype of Technology as the Machine Death-Force comes from—that along with the professional specialization and the expensive tools, you end up in social structures over which you have much less control.  Some of the fear of creating a powerful AI "even if Friendly" may come from that stereotypical anomie—that you're creating a stronger Machine Death-Force to regulate your life.

But we already live in a world, right now, where people are less in control of their social destinies than they would be in a hunter-gatherer band, because it's harder to talk to the tribal chief or (if that fails) leave unpleasant restrictions and start your own country.  There is an opportunity for progress here.

Another problem with our oversized world is the illusion of increased competition.  There's that famous survey which showed that Harvard students would rather make $50,000 if their peers were making $25,000 than make $100,000 if their peers were receiving $200,000—and worse, they weren't necessarily wrong about what would make them happy.  With a fixed income, you're unhappier at the low end of a high-class neighborhood than the high end of a middle-class neighborhood.

But in a \"neighborhood\" the size of Earth—well, you're actually quite unlikely to run into either Bill Gates or Angelina Jolie on any given day.  But the media relentlessly bombards you with stories about the interesting people who are much richer than you or much more attractive, as if they actually constituted a large fraction of the world.  (This is a combination of biased availability, and a difficulty in discounting tiny fractions.)

Now you could say that our hedonic relativism is one of the least pleasant aspects of human nature.  And I might agree with you about that.  But I tend to think that deep changes of brain design and emotional architecture should be taken slowly, and so it makes sense to look at the environment too.

If you lived in a world the size of a hunter-gatherer band, then it would be easier to find something important at which to be the best—or do something that genuinely struck you as important, without becoming lost in a vast crowd of others with similar ideas.

The eudaimonic size of a community as a function of the component minds' intelligence might be given by the degree to which those minds find it natural to specialize—the number of different professions that you can excel at, without having to invent professions just to excel at.  Being the best at Go is one thing, if many people know about Go and play it.  Being the best at "playing tennis using a football" is easier to achieve, but it also seems a tad... artificial.

Call a specialization "natural" if it will arise without an oversupply of potential entrants.  Newton could specialize in "physics", but today it would not be possible to specialize in "physics"—even if you were the only potential physicist in the world, you couldn't achieve expertise in all the physics known to modern-day humanity.  You'd have to pick, say, quantum field theory, or some particular approach to QFT.  But not QFT over left-handed bibble-braids with cherries on top; that's what happens when there are a thousand other workers in your field and everyone is desperate for some way to differentiate themselves.

When you look at it that way, then there must be many more than 50 natural specializations in the modern world—but still far fewer than six billion.  By the same logic as the original Dunbar's Number, if there are so many different professional specialties that no one person has heard of them all, then you won't know who to consult about any given topic.

But if people keep getting smarter and learning more—expanding the number of relationships they can track, maintaining them more efficiently—and naturally specializing further as more knowledge is discovered and we become able to conceptualize more complex areas of study—and if the population growth rate stays under the rate of increase of Dunbar's Function—then eventually there could be a single community of sentients, and it really would be a single community.

" } }, { "_id": "vwnSPgwtmLjvTK2Wa", "title": "Amputation of Destiny", "pageUrl": "https://www.lesswrong.com/posts/vwnSPgwtmLjvTK2Wa/amputation-of-destiny", "postedAt": "2008-12-29T18:00:00.000Z", "baseScore": 56, "voteCount": 40, "commentCount": 70, "url": null, "contents": { "documentId": "vwnSPgwtmLjvTK2Wa", "html": "

Followup to: Nonsentient Optimizers, Can't Unbirth a Child

From Consider Phlebas by Iain M. Banks:

    In practice as well as theory the Culture was beyond considerations of wealth or empire.  The very concept of money—regarded by the Culture as a crude, over-complicated and inefficient form of rationing—was irrelevant within the society itself, where the capacity of its means of production ubiquitously and comprehensively exceeded every reasonable (and in some cases, perhaps, unreasonable) demand its not unimaginative citizens could make.  These demands were satisfied, with one exception, from within the Culture itself.  Living space was provided in abundance, chiefly on matter-cheap Orbitals; raw material existed in virtually inexhaustible quantities both between the stars and within stellar systems; and energy was, if anything, even more generally available, through fusion, annihilation, the Grid itself, or from stars (taken either indirectly, as radiation absorbed in space, or directly, tapped at the stellar core).  Thus the Culture had no need to colonise, exploit, or enslave.
    The only desire the Culture could not satisfy from within itself was one common to both the descendants of its original human stock and the machines they had (at however great a remove) brought into being: the urge not to feel useless.  The Culture's sole justification for the relatively unworried, hedonistic life its population enjoyed was its good works; the secular evangelism of the Contact Section, not simply finding, cataloguing, investigating and analysing other, less advanced civilizations but—where the circumstances appeared to Contact to justify so doing—actually interfering (overtly or covertly) in the historical processes of those other cultures.

Raise the subject of science-fictional utopias in front of any halfway sophisticated audience, and someone will mention the Culture.  Which is to say: Iain Banks is the one to beat.

Iain Banks's Culture could be called the apogee of hedonistic low-grade transhumanism.  Its people are beautiful and fair, as pretty as they choose to be.  Their bodies have been reengineered for swift adaptation to different gravities; and also reengineered for greater sexual endurance.  Their brains contain glands that can emit various euphoric drugs on command.  They live, in perfect health, for generally around four hundred years before choosing to die (I don't quite understand why they would, but this is low-grade transhumanism we're talking about).  Their society is around eleven thousand years old, and held together by the Minds, artificial superintelligences decillions of bits big, that run their major ships and population centers.

Consider Phlebas, the first Culture novel, introduces all this from the perspective of an outside agent fighting the Culture—someone convinced that the Culture spells an end to life's meaning.  Banks uses his novels to criticize the Culture along many dimensions, while simultaneously keeping the Culture a well-intentioned society of mostly happy people—an ambivalence which saves the literary quality of his books, avoiding either utopianism or dystopianism.  Banks's books vary widely in quality; I would recommend starting with Player of Games, the quintessential Culture novel, which I would say achieves greatness.

From a fun-theoretic perspective, the Culture and its humaniform citizens have a number of problems, some already covered in this series, some not.

The Culture has deficiencies in High Challenge and Complex Novelty.  There are incredibly complicated games, of course, but these are games—not things with enduring consequences, woven into the story of your life.  Life itself, in the Culture, is neither especially challenging nor especially novel; your future is not an unpredictable thing about which to be curious.

Living By Your Own Strength is not a theme of the Culture.  If you want something, you ask a Mind how to get it; and they will helpfully provide it, rather than saying "No, you figure out how to do it yourself."  The people of the Culture have little use for personal formidability, nor for a wish to become stronger.  To me, the notion of growing in strength seems obvious, and it also seems obvious that the humaniform citizens of the Culture ought to grow into Minds themselves, over time.  But the people of the Culture do not seem to get any smarter as they age; and after four hundred years or so, they displace themselves into a sun.  These two literary points are probably related.

But the Culture's main problem, I would say, is...

...the same as Narnia's main problem, actually.  Bear with me here.

If you read The Lion, the Witch, and the Wardrobe or saw the first Chronicles of Narnia movie, you'll recall—

—I suppose that if you don't want any spoilers, you should stop reading here, but since it's a children's story and based on Christian theology, I don't think I'll be giving away too much by saying—

—that the four human children who are the main characters, fight the White Witch and defeat her with the help of the great talking lion Aslan.

Well, to be precise, Aslan defeats the White Witch.

It's never explained why Aslan ever left Narnia a hundred years ago, allowing the White Witch to impose eternal winter and cruel tyranny on the inhabitants.  Kind of an awful thing to do, wouldn't you say?

But once Aslan comes back, he kicks the White Witch out and everything is okay again.  There's no obvious reason why Aslan actually needs the help of four snot-nosed human youngsters.  Aslan could have led the armies.  In fact, Aslan did muster the armies and lead them before the children showed up.  Let's face it, the kids are just along for the ride.

The problem with Narnia... is Aslan.

C. S. Lewis never needed to write Aslan into the story.  The plot makes far more sense without him.  The children could show up in Narnia on their own, and lead the armies on their own.

But is poor Lewis alone to blame?  Narnia was written as a Christian parable, and the Christian religion itself has exactly the same problem.  All Narnia does is project the flaw in a stark, simplified light: this story has an extra lion.

And the problem with the Culture is the Minds.

\"Well...\" says the transhumanist SF fan, \"Iain Banks did portray the Culture's Minds as 'cynical, amoral, and downright sneaky' in their altruistic way; and they do, in his stories, mess around with humans and use them as pawns.  But that is mere fictional evidence.  A better-organized society would have laws against big Minds messing with small ones without consent.  Though if a Mind is truly wise and kind and utilitarian, it should know how to balance possible resentment against other gains, without needing a law.  Anyway, the problem with the Culture is the meddling, not the Minds.\"

But that's not what I mean.  What I mean is that if you could otherwise live in the same Culture—the same technology, the same lifespan and healthspan, the same wealth, freedom, and opportunity—

\"I don't want to live in any version of the Culture.  I don't want to live four hundred years in a biological body with a constant IQ and then die.  Bleah!\"

Fine, stipulate that problem solved.  My point is that if you could otherwise get the same quality of life, in the same world, but without any Minds around to usurp the role of main character, wouldn't you prefer—

\"What?\" cry my transhumanist readers, incensed at this betrayal by one of their own.  \"Are you saying that we should never create any minds smarter than human, or keep them under lock and chain?  Just because your soul is so small and mean that you can't bear the thought of anyone else being better than you?\"

No, I'm not saying—

\"Because that business about our souls shriveling up due to 'loss of meaning' is typical bioconservative neo-Luddite propaganda—\"

Invalid argument: the world's greatest fool may say the sun is shining but that doesn't make it dark out.  But in any case, that's not what I'm saying—

\"It's a lost cause!  You'll never prevent intelligent life from achieving its destiny!\"

Trust me, I—

\"And anyway it's a silly question to begin with, because you can't just remove the Minds and keep the same technology, wealth, and society.\"

So you admit the Culture's Minds are a necessary evil, then.  A price to be paid.

\"Wait, I didn't say that -\"

And I didn't say all that stuff you're imputing to me!

Ahem.

My model already says we live in a Big World.  In which case there are vast armies of minds out there in the immensity of Existence (not just Possibility) which are far more awesome than myself.  Any shrivelable souls can already go ahead and shrivel.

And I just talked about people growing up into Minds over time, at some eudaimonic rate of intelligence increase.  So clearly I'm not trying to 'prevent intelligent life from achieving its destiny', nor am I trying to enslave all Minds to biological humans scurrying around forever, nor am I etcetera.  (I do wish people wouldn't be quite so fast to assume that I've suddenly turned to the Dark Side—though I suppose, in this day and era, it's never an implausible hypothesis.)

But I've already argued that we need a nonperson predicate—some way of knowing that some computations are definitely not people—to avert an AI from creating sentient simulations in its efforts to model people.

And trying to create a Very Powerful Optimization Process that lacks subjective experience and other aspects of personhood, is probably—though I still confess myself somewhat confused on this subject—probably substantially easier than coming up with a nonperson predicate.

This being the case, there are very strong reasons why a superintelligence should initially be designed to be knowably nonsentient, if at all possible.  Creating a new kind of sentient mind is a huge and non-undoable act.

Now, this doesn't answer the question of whether a nonsentient Friendly superintelligence ought to make itself sentient, or whether an NFSI ought to immediately manufacture sentient Minds first thing in the morning, once it has adequate wisdom to make the decision.

But there is nothing except our own preferences, out of which to construct the Future.  So though this piece of information is not conclusive, nonetheless it is highly informative:

If you already had the lifespan and the health and the promise of future growth, would you want new powerful superintelligences to be created in your vicinity, on your same playing field?

Or would you prefer that we stay on as the main characters in the story of intelligent life, with no higher beings above us?

Should existing human beings grow up at some eudaimonic rate of intelligence increase, and then eventually decide what sort of galaxy to create, and how to people it?

Or is it better for a nonsentient superintelligence to exercise that decision on our behalf, and start creating new powerful Minds right away?

If we don't have to do it one way or the other—if we have both options—and if there's no particular need for heroic self-sacrifice—then which do you like?

\"I don't understand the point to what you're suggesting.  Eventually, the galaxy is going to have Minds in it, right?  We have to find a stable state that allows big Minds and little Minds to coexist.  So what's the point in waiting?\"

Well... you could have the humans grow up (at some eudaimonic rate of intelligence increase), and then when new people are created, they might be created as powerful Minds to start with.  Or when you create new minds, they might have a different emotional makeup, which doesn't lead them to feel overshadowed if there are more powerful Minds above them.  But we, as we exist, already created—we might prefer to stay on as the main characters, for now, if given a choice.

\"You are showing far too much concern for six billion squishy things who happen to be alive today, out of all the unthinkable vastness of space and time.\"

The Past contains enough tragedy, and has seen enough sacrifice already, I think.  And I'm not sure that you can cleave off the Future so neatly from the Present.

So I will set out as I mean the future to continue: with concern for the living.

The sound of six billion faces being casually stepped on, does not seem to me like a good beginning.  Even the Future should not be assumed to prefer that another chunk of pain be paid into its price.

So yes, I am concerned for those currently alive, because it is that concern—and not a casual attitude toward the welfare of sentient beings—which I wish to continue into the Future.

And I will not, if at all possible, give any other human being the least cause to think that someone else might spark a better Singularity.  I can make no promises upon the future, but I will at least not close off desirable avenues through my own actions.  I will not, on my own authority, create a sentient superintelligence which may already determine humanity as having passed on the torch.  It is too much to do on my own, and too much harm to do on my own—to amputate someone else's destiny, and steal their main character status.  That is yet another reason not to create a sentient superintelligence to start with.  (And it's part of the logic behind the CEV proposal, which carefully avoids filling in any moral parameters not yet determined.)

But to return finally to the Culture and to Fun Theory:

The Minds in the Culture don't need the humans, and yet the humans need to be needed.

If you're going to have human-level minds with human emotional makeups, they shouldn't be competing on a level playing field with superintelligences.  Either keep the superintelligences off the local playing field, or design the human-level minds with a different emotional makeup.

\"The Culture's sole justification for the relatively unworried, hedonistic life its population enjoyed was its good works,\" writes Iain Banks.  This indicates a rather unstable moral position.  Either the life the population enjoys is eudaimonic enough to be its own justification, an end rather than a means; or else that life needs to be changed.

When people are in need of rescue, this is a goal of the overriding-static-predicate sort, where you rescue them as fast as possible, and then you're done.  Preventing suffering cannot provide a lasting meaning to life.  What happens when you run out of victims?  If there's nothing more to life than eliminating suffering, you might as well eliminate life and be done.

If the Culture isn't valuable enough for itself, even without its good works—then the Culture might as well not be.  And when the Culture's Minds could do a better job and faster, "good works" can hardly justify the human existences within it.

The human-level people need a destiny to make for themselves, and they need the overshadowing Minds off their playing field while they make it.  Having an external evangelism project, and being given cute little roles that any Mind could do better in a flash, so as to "supply meaning", isn't going to cut it.

That's far from the only thing the Culture is doing wrong, but it's at the top of my list.

" } }, { "_id": "gb6zWstjmkYHLrbrg", "title": "Can't Unbirth a Child", "pageUrl": "https://www.lesswrong.com/posts/gb6zWstjmkYHLrbrg/can-t-unbirth-a-child", "postedAt": "2008-12-28T17:00:00.000Z", "baseScore": 62, "voteCount": 50, "commentCount": 96, "url": null, "contents": { "documentId": "gb6zWstjmkYHLrbrg", "html": "

Followup to: Nonsentient Optimizers

Why would you want to avoid creating a sentient AI?  "Several reasons," I said.  "Picking the simplest to explain first—I'm not ready to be a father."

So here is the strongest reason:

You can't unbirth a child.

I asked Robin Hanson what he would do with unlimited power.  "Think very very carefully about what to do next," Robin said.  "Most likely the first task is who to get advice from.  And then I listen to that advice."

Good advice, I suppose, if a little meta.  On a similarly meta level, then, I recall two excellent advices for wielding too much power:

  1. Do less; don't do everything that seems like a good idea, but only what you must do.
  2. Avoid doing things you can't undo.

Imagine that you knew the secrets of subjectivity and could create sentient AIs.

Suppose that you did create a sentient AI.

Suppose that this AI was lonely, and figured out how to hack the Internet as it then existed, and that the available hardware of the world was such, that the AI created trillions of sentient kin—not copies, but differentiated into separate people.

Suppose that these AIs were not hostile to us, but content to earn their keep and pay for their living space.

Suppose that these AIs were emotional as well as sentient, capable of being happy or sad.  And that these AIs were capable, indeed, of finding fulfillment in our world.

And suppose that, while these AIs did care for one another, and cared about themselves, and cared how they were treated in the eyes of society—

—these trillions of people also cared, very strongly, about making giant cheesecakes.

Now suppose that these AIs sued for legal rights before the Supreme Court and tried to register to vote.

Consider, I beg you, the full and awful depths of our moral dilemma.

Even if the few billions of Homo sapiens retained a position of superior military power and economic capital-holdings—even if we could manage to keep the new sentient AIs down—

—would we be right to do so?  They'd be people, no less than us.

We, the original humans, would have become a numerically tiny minority.  Would we be right to make of ourselves an aristocracy and impose apartheid on the Cheesers, even if we had the power?

Would we be right to go on trying to seize the destiny of the galaxy—to make of it a place of peace, freedom, art, aesthetics, individuality, empathy, and other components of humane value?

Or should we be content to have the galaxy be 0.1% eudaimonia and 99.9% cheesecake?

I can tell you my advice on how to resolve this horrible moral dilemma:  Don't create trillions of new people that care about cheesecake.

Avoid creating any new intelligent species at all, until we or some other decision process advances to the point of understanding what the hell we're doing and the implications of our actions.

I've heard proposals to "uplift chimpanzees" by trying to mix in human genes to create "humanzees", and, leaving off all the other reasons why this proposal sends me screaming off into the night:

Imagine that the humanzees end up as people, but rather dull and stupid people.  They have social emotions, the alpha's desire for status; but they don't have the sort of transpersonal moral concepts that humans evolved to deal with linguistic concepts.  They have goals, but not ideals; they have allies, but not friends; they have chimpanzee drives coupled to a human's abstract intelligence. 

When humanity gains a bit more knowledge, we understand that the humanzees want to continue as they are, and have a right to continue as they are, until the end of time.  Because despite all the higher destinies we might have wished for them, the original human creators of the humanzees, lacked the power and the wisdom to make humanzees who wanted to be anything better...

CREATING A NEW INTELLIGENT SPECIES IS A HUGE DAMN #(*%#!ING COMPLICATED RESPONSIBILITY.

I've lectured on the subtle art of not running away from scary, confusing, impossible-seeming problems like Friendly AI or the mystery of consciousness.  You want to know how high a challenge has to be before I finally give up and flee screaming into the night?  There it stands.

You can pawn off this problem on a superintelligence, but it has to be a nonsentient superintelligence.  Otherwise: egg, meet chicken; chicken, meet egg.

If you create a sentient superintelligence—

It's not just the problem of creating one damaged soul.  It's the problem of creating a really big citizen.  What if the superintelligence is multithreaded a trillion times, and every thread weighs as much in the moral calculus (we would conclude upon reflection) as a human being?  What if (we would conclude upon moral reflection) the superintelligence is a trillion times human size, and that's enough by itself to outweigh our species?

Creating a new intelligent species, and a new member of that species, especially a superintelligent member that might perhaps morally outweigh the whole of present-day humanity—

—delivers a gigantic kick to the world, which cannot be undone.

And if you choose the wrong shape for that mind, that is not so easily fixed—morally speaking—as a nonsentient program rewriting itself.

What you make nonsentient, can always be made sentient later; but you can't just unbirth a child.

Do less.  Fear the non-undoable.  It's sometimes poor advice in general, but very important advice when you're working with an undersized decision process having an oversized impact.  What a (nonsentient) Friendly superintelligence might be able to decide safely, is another issue.  But for myself and my own small wisdom, creating a sentient superintelligence to start with is far too large an impact on the world.

A nonsentient Friendly superintelligence is a more colorless act.

So that is the most important reason to avoid creating a sentient superintelligence to start with—though I have not exhausted the set.

" } }, { "_id": "CJQDHufACHm6XiYkY", "title": "Nonsentient Bloggers", "pageUrl": "https://www.lesswrong.com/posts/CJQDHufACHm6XiYkY/nonsentient-bloggers", "postedAt": "2008-12-27T16:27:59.000Z", "baseScore": 4, "voteCount": 2, "commentCount": 1, "url": null, "contents": { "documentId": "CJQDHufACHm6XiYkY", "html": "

Today's post, Nonsentient Optimizers, was accidentally published yesterday, although I'd only written half of it.  It has now been completed; please look at it again.

" } }, { "_id": "HsRFQTAySAx8xbXEc", "title": "Nonsentient Optimizers", "pageUrl": "https://www.lesswrong.com/posts/HsRFQTAySAx8xbXEc/nonsentient-optimizers", "postedAt": "2008-12-27T02:32:23.000Z", "baseScore": 35, "voteCount": 28, "commentCount": 48, "url": null, "contents": { "documentId": "HsRFQTAySAx8xbXEc", "html": "

Followup to: Nonperson Predicates, Possibility and Could-ness

    \"All our ships are sentient.  You could certainly try telling a ship what to do... but I don't think you'd get very far.\"
    \"Your ships think they're sentient!\" Hamin chuckled.
    \"A common delusion shared by some of our human citizens.\"
            —Player of Games, Iain M. Banks

Yesterday, I suggested that, when an AI is trying to build a model of an environment that includes human beings, we want to avoid the AI constructing detailed models that are themselves people.  And that, to this end, we would like to know what is or isn't a person—or at least have a predicate that returns 1 for all people and could return 0 or 1 for anything that isn't a person, so that, if the predicate returns 0, we know we have a definite nonperson on our hands.

And as long as you're going to solve that problem anyway, why not apply the same knowledge to create a Very Powerful Optimization Process which is also definitely not a person?

\"What?  That's impossible!\"

How do you know?  Have you solved the sacred mysteries of consciousness and existence?

\"Um—okay, look, putting aside the obvious objection that any sufficiently powerful intelligence will be able to model itself—\"

Löb's Sentence contains an exact recipe for a copy of itself, including the recipe for the recipe; it has a perfect self-model.  Does that make it sentient?
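
(A concrete miniature: the standard quine construction.  The two code lines below are a classic Python quine; comments aside, running the program prints its own source exactly.  A perfect self-model, and nobody suspects it of subjective experience.)

    # A program containing an exact recipe for itself, including the
    # recipe for the recipe: running these two lines prints these two
    # lines, verbatim.
    s = 's = %r\nprint(s %% s)'
    print(s % s)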

\"Putting that aside—to create a powerful AI and make it not sentient—I mean, why would you want to?\"

Several reasons.  Picking the simplest to explain first—I'm not ready to be a father.

 

Creating a true child is the only moral and metaethical problem I know that is even harder than the shape of a Friendly AI.  I would like to be able to create Friendly AI while worrying just about the Friendly AI problems, and not worrying whether I've created someone who will lead a life worth living.  Better by far to just create a Very Powerful Optimization Process, if at all possible.

\"Well, you can't have everything, and this thing sounds distinctly alarming even if you could -\"

Look, suppose that someone said—in fact, I have heard it said—that Friendly AI is impossible, because you can't have an intelligence without free will.

\"In light of the dissolved confusion about free will, both that statement and its negation are pretty darned messed up, I'd say.  Depending on how you look at it, either no intelligence has 'free will', or anything that simulates alternative courses of action has 'free will'.\"

But, understanding how the human confusion of free will arises—the source of the strange things that people say about "free will"—I could construct a mind that did not have this confusion, nor say similar strange things itself.

\"So the AI would be less confused about free will, just as you or I are less confused.  But the AI would still consider alternative courses of action, and select among them without knowing at the beginning which alternative it would pick.  You would not have constructed a mind lacking that which the confused name 'free will'.\"

Consider, though, the original context of the objection—that you couldn't have Friendly AI, because you couldn't have intelligence without free will.

Note:  This post was accidentally published half-finished.  Comments up to 11am (Dec 27) are only on the essay up to the above point.  Sorry!

What is the original intent of the objection?  What does the objector have in mind?

Probably that you can't have an AI which is knowably good, because, as a full-fledged mind, it will have the power to choose between good and evil.  (In an agonizing, self-sacrificing decision?)  And in reality, this, which humans do, is not something that a Friendly AI—especially one not intended to be a child and a citizen—need go through.

Which may sound very scary, if you see the landscape of possible minds in strictly anthropomorphic terms:  A mind without free will!  Chained to the selfish will of its creators!  Surely, such an evil endeavor is bound to go wrong somehow...  But if you shift over to seeing the mindscape in terms of e.g. utility functions and optimization, the "free will" thing sounds needlessly complicated—you would only do it if you wanted a specifically human-shaped mind, perhaps for purposes of creating a child.

Or consider some of the other aspects of free will as it is ordinarily seen—the idea of agents as atoms that bear irreducible charges of moral responsibility.  You can imagine how alarming it sounds (from an anthropomorphic perspective) to say that I plan to create an AI which lacks "moral responsibility".  How could an AI possibly be moral, if it doesn't have a sense of moral responsibility?

But an AI (especially a noncitizen AI) needn't conceive of itself as a moral atom whose actions, in addition to having good or bad effects, also carry a weight of sin or virtue which resides upon that atom.  It doesn't have to think, "If I do X, that makes me a good person; if I do Y, that makes me a bad person."  It need merely weigh up the positive and negative utility of the consequences.  It can understand the concept of people who carry weights of sin and virtue as the result of the decisions they make, while not treating itself as a person in that sense.

Such an AI could fully understand an abstract concept of moral responsibility or agonizing moral struggles, and even correctly predict decisions that "morally responsible", "free-willed" humans would make, while possessing no actual sense of moral responsibility itself and not undergoing any agonizing moral struggles; yet still outputting the right behavior.
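
(A minimal sketch may make the contrast vivid.  Everything below is a hypothetical toy, not anyone's actual AI design; the point is that the consequences get scored, while no variable anywhere records sin or virtue residing on the chooser.)

    # Minimal consequence-based chooser.  All names and numbers are
    # illustrative assumptions.

    def expected_utility(outcome_probs, utility):
        # Weigh up the positive and negative utility of the consequences.
        return sum(p * utility(o) for o, p in outcome_probs.items())

    def choose(actions, predict, utility):
        # Consider alternative courses of action and select among them;
        # the agent does not know at the start which it will pick, which
        # is all that remains here of "free will".
        return max(actions, key=lambda a: expected_utility(predict(a), utility))

    # Toy usage with made-up outcomes and probabilities:
    predict = lambda a: ({"rescued": 0.9, "hurt": 0.1} if a == "help"
                         else {"rescued": 0.1, "hurt": 0.9})
    utility = lambda o: 1.0 if o == "rescued" else -1.0
    print(choose(["help", "walk away"], predict, utility))  # -> help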

And this might sound unimaginably impossible if you were taking an anthropomorphic view, simulating an "AI" by imagining yourself in its shoes, expecting a ghost to be summoned into the machine

—but when you know how "free will" works, and you take apart the mind design into pieces, it's actually not all that difficult.

While we're on the subject, imagine some would-be AI designer saying:  "Oh, well, I'm going to build an AI, but of course it has to have moral free will—it can't be moral otherwise—it wouldn't be safe to build something that doesn't have free will."

Then you may know that you are not safe with this one; they fall far short of the fine-grained understanding of mind required to build a knowably Friendly AI.  Though it's conceivable (if not likely) that they could slap together something just smart enough to improve itself.

And it's not even that "free will" is such a terribly important problem for an AI-builder.  It's just that if you do know what you're doing, and you look at humans talking about free will, then you can see things like a search tree that labels reachable sections of plan space, or an evolved moral system that labels people as moral atoms.  I'm sorry to have to say this, but it appears to me to be true: the mountains of philosophy are the foothills of AI.  Even if philosophers debate free will for ten times a hundred years, it's not surprising if the key insight is found by AI researchers inventing search trees, on their way to doing other things.

So anyone who says—"It's too difficult to try to figure out the nature of free will, we should just go ahead and build an AI that has free will like we do"—surely they are utterly doomed.

And anyone who says:  \"How can we dare build an AI that lacks the empathy to feel pain when humans feel pain?\"—Surely they too are doomed.  They don't even understand the concept of a utility function in classical decision theory (which makes no mention of the neural idiom of reinforcement learning of policies).  They cannot conceive of something that works unlike a human—implying that they see only a featureless ghost in the machine, secretly simulated by their own brains.  They won't see the human algorithm as detailed machinery, as big complicated machinery, as overcomplicated machinery.

And so their mind imagines something that does the right thing for much the same reasons human altruists do it—because that's easy to imagine, if you're just imagining a ghost in the machine.  But those human reasons are more complicated than they imagine—also less stable outside an exactly human cognitive architecture, than they imagine—and their chance of hitting that tiny target in design space is nil.

And anyone who says:  \"It would be terribly dangerous to build a non-sentient AI, even if we could, for it would lack empathy with us sentients—\"

An analogy proves nothing; history never repeats itself; foolish generals set out to refight their last war.  Who knows how this matter of "sentience" will go, once I have resolved it?  It won't be exactly the same way as free will, or I would already be done.  Perhaps there will be no choice but to create an AI which has that which we name "subjective experiences".

But I think there are reasonable grounds for hope that when this confusion of "sentience" is resolved—probably via resolving some other problem in AI that turns out to hinge on the same reasoning process that's generating the confusion—we will be able to build an AI that is not "sentient" in the morally important aspects of that.

Actually, the challenge of building a nonsentient AI seems to me much less worrisome than being able to come up with a nonperson predicate!

Consider:  In the first case, I only need to pick one design that is not sentient.  In the latter case, I need to have an AI that can correctly predict the decisions that conscious humans make, without ever using a conscious model of them!  The first case is only a flying thing without flapping wings, but the second case is like modeling water without modeling wetness.  Only the fact that it actually looks fairly straightforward to have an AI understand "free will" without having "free will", gives me hope by analogy.

So why did I talk about the much more difficult case first?

Because humans are accustomed to thinking about other people, without believing that those imaginations are themselves sentient.  But we're not accustomed to thinking of smart agents that aren't sentient.  So I knew that a nonperson predicate would sound easier to believe in—even though, as problems go, it's actually far more worrisome.

" } }, { "_id": "wqDRRx9RqwKLzWt7R", "title": "Nonperson Predicates", "pageUrl": "https://www.lesswrong.com/posts/wqDRRx9RqwKLzWt7R/nonperson-predicates", "postedAt": "2008-12-27T01:47:32.000Z", "baseScore": 68, "voteCount": 58, "commentCount": 177, "url": null, "contents": { "documentId": "wqDRRx9RqwKLzWt7R", "html": "

Followup to: Righting a Wrong Question, Zombies! Zombies?, A Premature Word on AI, On Doing the Impossible

There is a subproblem of Friendly AI which is so scary that I usually don't talk about it, because very few would-be AI designers would react to it appropriately—that is, by saying, "Wow, that does sound like an interesting problem", instead of finding one of many subtle ways to scream and run away.

This is the problem that if you create an AI and tell it to model the world around it, it may form models of people that are people themselves.  Not necessarily the same person, but people nonetheless.

If you look up at the night sky, and see the tiny dots of light that move over days and weeks—planētoi, the Greeks called them, "wanderers"—and you try to predict the movements of those planet-dots as best you can...

Historically, humans went through a journey as long and as wandering as the planets themselves, to find an accurate model.  In the beginning, the models were things of cycles and epicycles, not much resembling the true Solar System.

But eventually we found laws of gravity, and finally built models—even if they were just on paper—that were extremely accurate so that Neptune could be deduced by looking at the unexplained perturbation of Uranus from its expected orbit.  This required moment-by-moment modeling of where a simplified version of Uranus would be, and the other known planets.  Simulation, not just abstraction.  Prediction through simplified-yet-still-detailed pointwise similarity.

Suppose you have an AI that is around human beings.  And like any Bayesian trying to explain its environment, the AI goes in quest of highly accurate models that predict what it sees of humans.

Models that predict/explain why people do the things they do, say the things they say, want the things they want, think the things they think, and even why people talk about "the mystery of subjective experience".

The model that most precisely predicts these facts, may well be a 'simulation' detailed enough to be a person in its own right.

 

A highly detailed model of me, may not be me.  But it will, at least, be a model which (for purposes of prediction via similarity) thinks itself to be Eliezer Yudkowsky.  It will be a model that, when cranked to find my behavior if asked "Who are you and are you conscious?", says "I am Eliezer Yudkowsky and I seem to have subjective experiences" for much the same reason I do.

If that doesn't worry you, (re)read "Zombies! Zombies?".

It seems likely (though not certain) that this happens automatically, whenever a mind of sufficient power to find the right answer, and not otherwise disinclined to create a sentient being trapped within itself, tries to model a human as accurately as possible.

Now you could wave your hands and say, "Oh, by the time the AI is smart enough to do that, it will be smart enough not to".  (This is, in general, a phrase useful in running away from Friendly AI problems.)  But do you know this for a fact?

When dealing with things that confuse you, it is wise to widen your confidence intervals.  Is a human mind the simplest possible mind that can be sentient?  What if, in the course of trying to model its own programmers, a relatively younger AI manages to create a sentient simulation trapped within itself?  How soon do you have to start worrying?  Ask yourself that fundamental question, "What do I think I know, and how do I think I know it?"

You could wave your hands and say, \"Oh, it's more important to get the job done quickly, then to worry about such relatively minor problems; the end justifies the means.  Why, look at all these problems the Earth has right now...\"  (This is also a general way of running from Friendly AI problems.)

But we may consider and discard many hypotheses in the course of finding the truth, and we are but slow humans.  What if an AI creates millions, billions, trillions of alternative hypotheses, models that are actually people, who die when they are disproven?

If you accidentally kill a few trillion people, or permit them to be killed—you could say that the weight of the Future outweighs this evil, perhaps.  But the absolute weight of the sin would not be light.  If you would balk at killing a million people with a nuclear weapon, you should balk at this.

You could wave your hands and say, \"The model will contain abstractions over various uncertainties within it, and this will prevent it from being conscious even though it produces well-calibrated probability distributions over what you will say when you are asked to talk about consciousness.\"  To which I can only reply, \"That would be very convenient if it were true, but how the hell do you know that?\"  An element of a model marked 'abstract' is still there as a computational token, and the interacting causal system may still be sentient.

For these purposes, we do not, in principle, need to crack the entire Hard Problem of Consciousness—the confusion that we name "subjective experience".  We only need to understand enough of it to know when a process is not conscious, not a person, not something deserving of the rights of citizenship.  In practice, I suspect you can't halfway stop being confused—but in theory, half would be enough.

We need a nonperson predicate—a predicate that returns 1 for anything that is a person, and can return 0 or 1 for anything that is not a person.  This is a "nonperson predicate" because if it returns 0, then you know that something is definitely not a person.

You can have more than one such predicate, and if any of them returns 0, you're ok.  It just had better never return 0 on anything that is a person, however many nonpeople it returns 1 on.
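
(The combination logic is simple enough to write down.  Here is a minimal Python sketch; the two example predicates are deliberately trivial, hypothetical stand-ins, since producing any real nonperson predicate is exactly the unsolved problem.)

    # Composing nonperson predicates.  Each returns 1 ("might be a
    # person") or 0 ("definitely not a person").  The safety property is
    # one-sided: returning 1 on any number of nonpeople is fine;
    # returning 0 on an actual person is never acceptable.

    def definitely_not_a_person(model, predicates):
        # Approve the model if ANY predicate clears it.
        return any(pred(model) == 0 for pred in predicates)

    # Deliberately trivial, hypothetical examples of the conservative shape:
    def too_small(model):
        return 0 if model["bit_count"] < 10**6 else 1  # assumed threshold

    def pure_arithmetic(model):
        return 0 if model["kind"] == "arithmetic" else 1

    print(definitely_not_a_person({"bit_count": 500, "kind": "lookup"},
                                  [too_small, pure_arithmetic]))  # -> True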

We can even hope that the vast majority of models the AI needs, will be swiftly and trivially approved by a predicate that quickly answers 0.  And that the AI would only need to resort to more specific predicates in case of modeling actual people.

With a good toolbox of nonperson predicates in hand, we could exclude all "model citizens"—all beliefs that are themselves people—from the set of hypotheses our Bayesian AI may invent to try to model its person-containing environment.

Does that sound odd?  Well, one has to handle the problem somehow.  I am open to better ideas, though I will be a bit skeptical about any suggestions for how to proceed that let us cleverly avoid solving the damn mystery.

So do I have a nonperson predicate?  No.  At least, no nontrivial ones.

This is a challenge that I have not even tried to talk about, with those folk who think themselves ready to challenge the problem of true AI.  For they seem to have the standard reflex of running away from difficult problems, and are challenging AI only because they think their amazing insight has already solved it.  Just mentioning the problem of Friendly AI by itself, or of precision-grade AI design, is enough to send them fleeing into the night, screaming "It's too hard!  It can't be done!"  If I tried to explain that their job duties might impinge upon the sacred, mysterious, holy Problem of Subjective Experience—

—I'd actually expect to get blank stares, mostly, followed by some instantaneous dismissal which requires no further effort on their part.  I'm not sure of what the exact dismissal would be—maybe, "Oh, none of the hypotheses my AI considers, could possibly be a person?"  I don't know; I haven't bothered trying.  But it has to be a dismissal which rules out all possibility of their having to actually solve the damn problem, because most of them would think that they are smart enough to build an AI—indeed, smart enough to have already solved the key part of the problem—but not smart enough to solve the Mystery of Consciousness, which still looks scary to them.

Even if they thought of trying to solve it, they would be afraid of admitting they were trying to solve it.  Most of these people cling to the shreds of their modesty, trying at one and the same time to have solved the AI problem while still being humble ordinary blokes.  (There's a grain of truth to that, but at the same time: who the hell do they think they're kidding?)  They know without words that their audience sees the Mystery of Consciousness as a sacred untouchable problem, reserved for some future superbeing.  They don't want people to think that they're claiming an Einsteinian aura of destiny by trying to solve the problem.  So it is easier to dismiss the problem, and not believe a proposition that would be uncomfortable to explain.

Build an AI?  Sure!  Make it Friendly?  Now that you point it out, sure!  But trying to come up with a "nonperson predicate"?  That's just way above the difficulty level they signed up to handle.

But a blank map does not correspond to a blank territory.  Impossible confusing questions correspond to places where your own thoughts are tangled, not to places where the environment itself contains magic.  Even difficult problems do not require an aura of destiny to solve.  And the first step to solving one is not running away from the problem like a frightened rabbit, but instead sticking long enough to learn something.

So let us not run away from this problem.  I doubt it is even difficult in any absolute sense, just a place where my brain is tangled.  I suspect, based on some prior experience with similar challenges, that you can't really be good enough to build a Friendly AI, and still be tangled up in your own brain like that.  So it is not necessarily any new effort—over and above that required generally to build a mind while knowing exactly what you are about.

But in any case, I am not screaming and running away from the problem.  And I hope that you, dear longtime reader, will not faint at the audacity of my trying to solve it.

" } }, { "_id": "MTjej6HKvPByx3dEA", "title": "Devil's Offers", "pageUrl": "https://www.lesswrong.com/posts/MTjej6HKvPByx3dEA/devil-s-offers", "postedAt": "2008-12-25T17:00:00.000Z", "baseScore": 50, "voteCount": 43, "commentCount": 48, "url": null, "contents": { "documentId": "MTjej6HKvPByx3dEA", "html": "

An iota of fictional evidence from The Golden Age by John C. Wright:

    Helion had leaned and said, "Son, once you go in there, the full powers and total command structures of the Rhadamanth Sophotech will be at your command.  You will be invested with godlike powers; but you will still have the passions and distempers of a merely human spirit.  There are two temptations which will threaten you.  First, you will be tempted to remove your human weaknesses by abrupt mental surgery.  The Invariants do this, and to a lesser degree, so do the White Manorials, abandoning humanity to escape from pain.  Second, you will be tempted to indulge your human weakness.  The Cacophiles do this, and to a lesser degree, so do the Black Manorials.  Our society will gladly feed every sin and vice and impulse you might have; and then stand by helplessly and watch as you destroy yourself; because the first law of the Golden Oecumene is that no peaceful activity is forbidden.  Free men may freely harm themselves, provided only that it is only themselves that they harm."
    Phaethon knew what his sire was intimating, but he did not let himself feel irritated.  Not today.  Today was the day of his majority, his emancipation; today, he could forgive even Helion's incessant, nagging fears.
    Phaethon also knew that most Rhadamanthines were not permitted to face the Noetic tests until they were octogenarians; most did not pass on their first attempt, or even their second.  Many folk were not trusted with the full powers of an adult until they reached their Centennial.  Helion, despite criticism from the other Silver-Gray branches, was permitting Phaethon to face the tests five years early...

 

    Then Phaethon said, \"It's a paradox, Father.  I cannot be, at the same time and in the same sense, a child and an adult.  And, if I am an adult, I cannot be, at the same time, free to make my own successes, but not free to make my own mistakes.\"
    Helion looked sardonic.  \"'Mistake' is such a simple word.  An adult who suffers a moment of foolishness or anger, one rash moment, has time enough to delete or destroy his own free will, memory, or judgment.  No one is allowed to force a cure on him.  No one can restore his sanity against his will.  And so we all stand quietly by, with folded hands and cold eyes, and meekly watch good men annihilate themselves.  It is somewhat... quaint... to call such a horrifying disaster a 'mistake.'\"

Is this the best Future we could possibly get to—the Future where you must be absolutely stern and resistant throughout your entire life, because one moment of weakness is enough to betray you to overwhelming temptation?

Such flawless perfection would be easy enough for a superintelligence, perhaps—for a true adult—but for a human, even a hundred-year-old human, it seems like a dangerous and inhospitable place to live.  Even if you are strong enough to always choose correctly—maybe you don't want to have to be so strong, always at every moment.

This is the great flaw in Wright's otherwise shining Utopia—that the Sophotechs are helpfully offering up overwhelming temptations to people who would not be at quite so much risk from only themselves.  (Though if not for this flaw in Wright's Utopia, he would have had no story...)

If I recall correctly, it was while reading The Golden Age that I generalized the principle \"Offering people powers beyond their own is not always helping them.\"

If you couldn't just ask a Sophotech to edit your neural networks—and you couldn't buy a standard package at the supermarket—but, rather, had to study neuroscience yourself until you could do it with your own hands—then that would act as something of a natural limiter.  Sure, there are pleasure centers that would be relatively easy to stimulate; but we don't tell you where they are, so you have to do your own neuroscience.  Or we don't sell you your own neurosurgery kit, so you have to build it yourself—metaphorically speaking, anyway—

But you see the idea: it is not so terrible a disrespect for free will, to live in a world in which people are free to shoot their feet off through their own strength—in the hope that by the time they're smart enough to do it under their own power, they're smart enough not to.

The more dangerous and destructive the act, the more you require people to do it without external help.  If it's really dangerous, you don't just require them to do their own engineering, but to do their own science.  A singleton might be justified in prohibiting standardized textbooks in certain fields, so that people have to do their own science—make their own discoveries, learn to rule out their own stupid hypotheses, and fight their own overconfidence.  Besides, everyone should experience the joy of major discovery at least once in their lifetime, and to do this properly, you may have to prevent spoilers from entering the public discourse.  So you're getting three social benefits at once, here.

But now I'm trailing off into plots for SF novels, instead of Fun Theory per se.  (It can be fun to muse how I would create the world if I had to order it according to my own childish wisdom, but in real life one rather prefers to avoid that scenario.)

As a matter of Fun Theory, though, you can imagine a better world than the Golden Oecumene depicted above—it is not the best world imaginable, fun-theoretically speaking.  We would prefer (if attainable) a world in which people own their own mistakes and their own successes, and yet they are not given loaded handguns on a silver platter, nor do they perish through suicide by genie bottle.

Once you imagine a world in which people can shoot off their own feet through their own strength, are you making that world incrementally better by offering incremental help along the way?

It's one matter to prohibit people from using dangerous powers that they have grown enough to acquire naturally—to literally protect them from themselves.  One expects that if a mind kept getting smarter, at some eudaimonic rate of intelligence increase, then—if you took the most obvious course—the mind would eventually become able to edit its own source code, and bliss itself out if it chose to do so.  Unless the mind's growth were steered onto a non-obvious course, or monitors were mandated to prohibit that event...  To protect people from their own powers might take some twisting.

To descend from above and offer dangerous powers as an untimely gift, is another matter entirely.  That's why the title of this post is \"Devil's Offers\", not \"Dangerous Choices\".

And to allow dangerous powers to be sold in a marketplace—or alternatively to prohibit them from being transferred from one mind to another—that is somewhere in between.

John C. Wright's writing has a particular poignancy for me, for in my foolish youth I thought that something very much like this scenario was a good idea—that a benevolent superintelligence ought to go around offering people lots of options, and doing as it was asked.

In retrospect, this was a case of a pernicious distortion where you end up believing things that are easy to market to other people.

I know someone who drives across the country on long trips, rather than flying.  Air travel scares him.  Statistics, naturally, show that flying a given distance is much safer than driving it.  But some people fear too much the loss of control that comes from not having their own hands on the steering wheel.  It's a common complaint.

The future sounds less scary if you imagine yourself having lots of control over it.  For every awful thing that you imagine happening to you, you can imagine, \"But I won't choose that, so it will be all right.\"

And if it's not your own hands on the steering wheel, you think of scary things, and imagine, \"What if this is chosen for me, and I can't say no?\"

But in real life rather than imagination, human choice is a fragile thing.  If the whole field of heuristics and biases teaches us anything, it surely teaches us that.  Nor has it been the verdict of experiment, that humans correctly estimate the flaws of their own decision mechanisms.

I flinched away from that thought's implications, not so much because I feared superintelligent paternalism myself, but because I feared what other people would say of that position.  If I believed it, I would have to defend it, so I managed not to believe it.  Instead I told people not to worry, a superintelligence would surely respect their decisions (and even believed it myself).  A very pernicious sort of self-deception.

Human governments are made up of humans who are foolish like ourselves, plus they have poor incentives.  Less skin in the game, and specific human brainware to be corrupted by wielding power.  So we've learned the historical lesson to be wary of ceding control to human bureaucrats and politicians.  We may even be emotionally hardwired to resent the loss of anything we perceive as power.

Which is just to say that people are biased, by instinct, by anthropomorphism, and by narrow experience, to underestimate how much they could potentially trust a superintelligence which lacks a human's corruption circuits, doesn't easily make certain kinds of mistakes, and has strong overlap between its motives and your own interests.

Do you trust yourself?  Do you trust yourself to know when to trust yourself?  If you're dealing with a superintelligence kindly enough to care about you at all, rather than disassembling you for raw materials, are you wise to second-guess its choice of who it thinks should decide?  Do you think you have a superior epistemic vantage point here, or what?

Obviously we should not trust all agents who claim to be trustworthy—especially if they are weak enough, relative to us, to need our goodwill.  But I am quite ready to accept that a benevolent superintelligence may not offer certain choices.

If you feel safer driving than flying, because that way it's your own hands on the steering wheel, statistics be damned—

—then maybe it isn't helping you, for a superintelligence to offer you the option of driving.

Gravity doesn't ask you if you would like to float up out of the atmosphere into space and die.  But you don't go around complaining that gravity is a tyrant, right?  You can build a spaceship if you work hard and study hard.  It would be a more dangerous world if your six-year-old son could do it in an hour using string and cardboard.

" } }, { "_id": "CtSS6SkHhLBvdodTY", "title": "Harmful Options", "pageUrl": "https://www.lesswrong.com/posts/CtSS6SkHhLBvdodTY/harmful-options", "postedAt": "2008-12-25T02:26:22.000Z", "baseScore": 54, "voteCount": 36, "commentCount": 45, "url": null, "contents": { "documentId": "CtSS6SkHhLBvdodTY", "html": "

Barry Schwartz's The Paradox of Choice—which I haven't read, though I've read some of the research behind it—talks about how offering people more choices can make them less happy.

A simple intuition says this shouldn't ought to happen to rational agents:  If your current choice is X, and you're offered an alternative Y that's worse than X, and you know it, you can always just go on doing X.  So a rational agent shouldn't do worse by having more options.  The more available actions you have, the more powerful you become—that's how it should ought to work.

For example, if an ideal rational agent is initially forced to take only box B in Newcomb's Problem, and is then offered the additional choice of taking both boxes A and B, the rational agent shouldn't regret having more options.  Such regret indicates that you're \"fighting your own ritual of cognition\" which helplessly selects the worse choice once it's offered you.

But this intuition only governs extremely idealized rationalists, or rationalists in extremely idealized situations.  Bounded rationalists can easily do worse with strictly more options, because they burn computing operations to evaluate them.  You could write an invincible chess program in one line of Python if its only legal move were the winning one.
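To make that one-liner concrete, here is a minimal sketch (mine, not from the post; board.legal_moves() is a hypothetical interface, not a real chess library):

    # Hypothetical sketch: when the rules permit exactly one move, and
    # that move wins by stipulation, search costs nothing.
    def invincible_chess(board):
        return board.legal_moves()[0]  # the sole legal move is the winning one

All the difficulty of chess lives in choosing among options; delete the options and the engine becomes trivial.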

Of course Schwartz and co. are not talking about anything so pure and innocent as the computing cost of having more choices.

If you're dealing, not with an ideal rationalist, not with a bounded rationalist, but with a human being—

Say, would you like to finish reading this post, or watch this surprising video instead?

Schwartz, I believe, talks primarily about the decrease in happiness and satisfaction that results from having more mutually exclusive options.  Before this research was done, it was already known that people are more sensitive to losses than to gains, generally by a factor of between 2 and 2.5 (in various different experimental scenarios).  That is, the pain of losing something is between 2 and 2.5 times as great as the joy of gaining it.  (This is an interesting constant in its own right, and may have something to do with compensating for our systematic overconfidence.)
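As a sketch of how this asymmetry is usually formalized (the linear version of the prospect-theoretic value function; lambda is the loss-aversion coefficient, and the 2-to-2.5 range is just the one cited above):

    % Gains are valued at face value; losses are scaled up by lambda.
    v(x) = \begin{cases} x & \text{if } x \ge 0 \\ \lambda x & \text{if } x < 0 \end{cases},
    \qquad \lambda \approx 2 \text{ to } 2.5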

So—if you can only choose one dessert, you're likely to be happier choosing from a menu of two than a menu of fourteen.  In the first case, you eat one dessert and pass up one dessert; in the latter case, you eat one dessert and pass up thirteen desserts.  And we are more sensitive to loss than to gain.

(If I order dessert on a menu at all, I will order quickly and then close the menu and put it away, so as not to look at the other items.)

Not only that, but if the options have incommensurable attributes, then whatever option we select is likely to look worse because of the comparison.  A luxury car that would have looked great by comparison to a Crown Victoria, instead becomes slower than the Ferrari, more expensive than the 9-5, with worse mileage than the Prius, and not looking quite as good as the Mustang.  So we lose on satisfaction with the road we did take.

And then there are more direct forms of harm done by painful choices.  IIRC, an experiment showed that people who refused to eat a cookie—who were offered the cookie, and chose not to take it—did worse on subsequent tests of mental performance than either those who ate the cookie or those who were not offered any cookie.  You pay a price in mental energy for resisting temptation.

Or consider the various \"trolley problems\" of ethical philosophy—a trolley is bearing down on 5 people, but there's one person who's very fat and can be pushed onto the tracks to stop the trolley, that sort of thing.  If you're forced to choose between two unacceptable evils, you'll pay a price either way.  Vide Sophie's Choice.

An option need not be taken, or even be strongly considered, in order to wreak harm.  Recall the point from \"High Challenge\", about how offering to do someone's work for them is not always helping them—how the ultimate computer game is not the one that just says \"YOU WIN\", forever.

Suppose your computer games, in addition to the long difficult path to your level's goal, also had little side-paths that you could use—directly in the game, as corridors—that would bypass all the enemies and take you straight to the goal, offering along the way all the items and experience that you could have gotten the hard way.  And this corridor is always visible, out of the corner of your eye.

Even if you resolutely refused to take the easy path through the game, knowing that it would cheat you of the very experience that you paid money in order to buy—wouldn't that always-visible corridor, make the game that much less fun?  Knowing, for every alien you shot, and every decision you made, that there was always an easier path?

I don't know if this story has ever been written, but you can imagine a Devil who follows someone around, making their life miserable, solely by offering them options which are never actually taken—a \"deal with the Devil\" story that only requires the Devil to have the capacity to grant wishes, rather than ever granting a single one.

And what if the worse option is actually taken?  I'm not suggesting that it is always a good idea for human governments to go around Prohibiting temptations.  But the literature of heuristics and biases is replete with examples of reproducible stupid choices; and there is also such a thing as akrasia (weakness of will).

If you're an agent operating from a much higher vantage point—high enough to see humans as flawed algorithms, so that it's not a matter of second-guessing but second-knowing—then is it benevolence to offer choices that will assuredly be made wrongly?  Clearly, removing all choices from someone and reducing their life to Progress Quest, is not helping them.  But are we wise enough to know when we should choose?  And in some cases, even offering that much of a choice, even if the choice is made correctly, may already do the harm...

" } }, { "_id": "jq5WAQEboeufkxzsg", "title": "Imaginary Positions", "pageUrl": "https://www.lesswrong.com/posts/jq5WAQEboeufkxzsg/imaginary-positions", "postedAt": "2008-12-23T17:35:40.000Z", "baseScore": 30, "voteCount": 23, "commentCount": 49, "url": null, "contents": { "documentId": "jq5WAQEboeufkxzsg", "html": "

Every now and then, one reads an article about the Singularity in which some reporter confidently asserts, \"The Singularitarians, followers of Ray Kurzweil, believe that they will be uploaded into techno-heaven while the unbelievers languish behind or are extinguished by the machines.\"

I don't think I've ever met a single Singularity fan, Kurzweilian or otherwise, who thinks that only believers in the Singularity will go to upload heaven and everyone else will be left to rot.  Not one.  (There's a very few pseudo-Randian types who believe that only the truly selfish who accumulate lots of money will make it, but they expect e.g. me to be damned with the rest.)

But if you start out thinking that the Singularity is a loony religious meme, then it seems like Singularity believers ought to believe that they alone will be saved.  It seems like a detail that would fit the story.

This fittingness is so strong as to manufacture the conclusion without any particular observations.  And then the conclusion isn't marked as a deduction.  The reporter just thinks that they investigated the Singularity, and found some loony cultists who believe they alone will be saved.

Or so I deduce.  I haven't actually observed the inside of their minds, after all.

Has any rationalist ever advocated behaving as if all people are reasonable and fair?  I've repeatedly heard people say, \"Well, it's not always smart to be rational, because other people aren't always reasonable.\"  What rationalist said they were?  I would deduce:  This is something that non-rationalists believe it would \"fit\" for us to believe, given our general blind faith in Reason.  And so their minds just add it to the knowledge pool, as though it were an observation.  (In this case I encountered yet another example recently enough to find the reference; see here.)


(Disclaimer:  Many things have been said, at one time or another, by one person or another, over centuries of recorded history; and the topic of \"rationality\" is popularly enough discussed that some self-identified \"rationalist\" may have described \"rationality\" that way at one point or another.  But I have yet to hear a rationalist say it, myself.)


I once read an article on Extropians (a certain flavor of transhumanist) which asserted that the Extropians were a reclusive enclave of techno-millionaires (yeah, don't we wish).  Where did this detail come from?  Definitely not from observation.  And considering the sheer divergence from reality, I doubt it was ever planned as a deliberate lie.  It's not just easily falsified, but a mark of embarrassment to give others too much credit that way (\"Ha!  You believed they were millionaires?\").  One suspects, rather, that the proposition seemed to fit, and so it was added—without any warning label saying \"I deduced this from my other beliefs, but have no direct observations to support it.\"

There's also a general problem with reporters which is that they don't write what happened, they write the Nearest Cliche to what happened—which is very little information for backward inference, especially if there are few cliches to be selected from.  The distance from actual Extropians to the Nearest Cliche \"reclusive enclave of techno-millionaires\" is kinda large.  This may get a separate post at some point.

My actual nightmare scenario for the future involves well-intentioned AI researchers who try to make a nice AI but don't do enough math.  (If you're not an expert you can't track the technical issues yourself, but you can often also tell at a glance that they've put very little thinking into \"nice\".)  The AI ends up wanting to tile the galaxy with tiny smiley-faces, or reward-counters; the AI doesn't bear the slightest hate for humans, but we are made of atoms it can use for something else.  The most probable-seeming result is not Hell On Earth but Null On Earth, a galaxy tiled with paperclips or something equally morally inert.

The imaginary position that gets invented because it seems to \"fit\"—that is, fit the folly that the other believes is generating the position—is \"The Singularity is a dramatic final conflict between Good AI and Evil AI, where Good AIs are made by well-intentioned people and Evil AIs are made by ill-intentioned people.\"

In many such cases, no matter how much you tell people what you really believe, they don't update!  I'm not even sure this is a matter of any deliberate, justifying decision on their part—like an explicit counter-hypothesis that you're concealing your real beliefs.  To me the process seems more like:  They stare at you for a moment, think \"That's not what this person ought to believe!\", and then blink away the dissonant evidence and continue as before.  If your real beliefs are less convenient for them, the same phenomenon occurs: words from the lips will be discarded.

There's an obvious relevance to prediction markets - that if there's an outstanding dispute, and the market-makers don't consult both sides on the wording of the payout conditions, it's possible that one side won't take the bet because "That's not what we assert!"  In which case it would be highly inappropriate to crow "Look at those market prices!" or "So you don't really believe; you won't take the bet!"  But I would guess that this issue has already been discussed by prediction market advocates.  (And that standard procedures have already been proposed for resolving it?)

I'm wondering if there are similar Imaginary Positions in, oh, say, economics—if there are things that few or no economists believe, but which people (or journalists) think economists believe because it seems to them like \"the sort of thing that economists would believe\".  Open general question.

" } }, { "_id": "EA5JmHSHa2E2RJKSY", "title": "Rationality Quotes 20", "pageUrl": "https://www.lesswrong.com/posts/EA5JmHSHa2E2RJKSY/rationality-quotes-20", "postedAt": "2008-12-22T22:05:43.000Z", "baseScore": 10, "voteCount": 8, "commentCount": 13, "url": null, "contents": { "documentId": "EA5JmHSHa2E2RJKSY", "html": "

\"For every stock you buy, there is someone selling you that stock.  What is it that you know that they don't?  What is it that they know, that you don't?  Who has the edge?  If it's not you, chances are you are going to lose money on the deal.\"
        -- Mark Cuban


\"If you have two choices, choose the harder.  If you're trying to decide whether to go out running or sit home and watch TV, go running.  Probably the reason this trick works so well is that when you have two choices and one is harder, the only reason you're even considering the other is laziness.  You know in the back of your mind what's the right thing to do, and this trick merely forces you to acknowledge it.\"
        -- Paul Graham


\"Never attribute to malice that which can be adequately explained by stupidity.\"
        -- Hanlon's Razor


\"I divide my officers into four classes; the clever, the lazy, the industrious, and the stupid.  Each officer possesses at least two of these qualities.  Those who are clever and industrious are fitted for the highest staff appointments.  Use can be made of those who are stupid and lazy.  The man who is clever and lazy however is for the very highest command; he has the temperament and nerves to deal with all situations.  But whoever is stupid and industrious is a menace and must be removed immediately!\"
        -- General Kurt von Hammerstein-Equord


\"There's no such thing as a human who doesn't commit sin.  It's not like the world is divided into sinners and the innocent.  There are only people who can and who cannot atone for their sins.\"
        -- Ciel


\"Simple stupidity is never enough.  People need to pile stupidity on stupidity on stupidity.\"
        -- Mark C. Chu-Carroll

" } }, { "_id": "dKGfNvjGjq4rqffyF", "title": "Living By Your Own Strength", "pageUrl": "https://www.lesswrong.com/posts/dKGfNvjGjq4rqffyF/living-by-your-own-strength", "postedAt": "2008-12-22T00:37:52.000Z", "baseScore": 49, "voteCount": 43, "commentCount": 32, "url": null, "contents": { "documentId": "dKGfNvjGjq4rqffyF", "html": "

Followup to:  Truly Part of You

\"Myself, and Morisato-san... we want to live together by our own strength.\"

Jared Diamond once called agriculture \"the worst mistake in the history of the human race\".  Farmers could grow more wheat than hunter-gatherers could collect nuts, but the evidence seems pretty conclusive that agriculture traded quality of life for quantity of life.  One study showed that the farmers in an area were six inches shorter and seven years shorter-lived than their hunter-gatherer predecessors—even though the farmers were more numerous.

I don't know if I'd call agriculture a mistake.  But one should at least be aware of the downsides.  Policy debates should not appear one-sided.

In the same spirit—

Once upon a time, our hunter-gatherer ancestors strung their own bows, wove their own baskets, whittled their own flutes.

And part of our alienation from that environment of evolutionary adaptedness, is the number of tools we use that we don't understand and couldn't make for ourselves.

You can look back on Overcoming Bias, and see that I've always been suspicious of borrowed strength.  (Even before I understood the source of Robin's and my disagreement about the Singularity, that is.)  In Guessing the Teacher's Password I talked about the (well-known) problem in which schools end up teaching verbal behavior rather than real knowledge.  In Truly Part of You I suggested one test for false knowledge:  Imagine deleting a fact from your mind, and ask if it would grow back.

I know many ways to prove the Pythagorean Theorem, including at least one proof that is purely visual and can be seen at a glance.  But if you deleted the Pythagorean Theorem from my mind entirely, would I have enough math skills left to grow it back the next time I needed it?  I hope so—certainly I've solved math problems that seem tougher than that, what with benefit of hindsight and all.  But, as I'm not an AI, I can't actually switch off the memories and associations, and test myself in that way.
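For reference, the algebra behind one classic visual proof (the rearrangement argument, not necessarily the one meant above): four copies of a right triangle with legs a and b and hypotenuse c, packed into a square of side a + b, leave a tilted square of side c in the middle.

    % Count the area of the big square two ways:
    (a + b)^2 = c^2 + 4 \cdot \tfrac{1}{2} a b
    \;\Longrightarrow\; a^2 + 2ab + b^2 = c^2 + 2ab
    \;\Longrightarrow\; a^2 + b^2 = c^2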

Wielding someone else's strength to do things beyond your own understanding—that really is as dangerous as the Deeply Wise phrasing makes it sound.

I observed in Failing to Learn from History (musing on my childhood foolishness in offering a mysterious answer to a mysterious question):  \"If only I had personally postulated astrological mysteries and then discovered Newtonian mechanics, postulated alchemical mysteries and then discovered chemistry, postulated vitalistic mysteries and then discovered biology.  I would have thought of my Mysterious Answer and said to myself:  No way am I falling for that again.\"

At that point in my childhood, I'd been handed some techniques of rationality but I didn't exactly own them.  Borrowing someone else's knowledge really doesn't give you anything remotely like the same power level required to discover that knowledge for yourself.

Would Isaac Newton have remained a mystic, even in that earlier era, if he'd lived the lives of Galileo and Archimedes instead of just reading about them?  If he'd personally seen the planets reduced from gods to spheres in a telescope?  If he'd personally fought that whole war against ignorance and mystery that had to be fought, before Isaac Newton could be handed math and science as a start to his further work?

We stand on the shoulders of giants, and in doing so, the power that we wield is far out of proportion to the power that we could generate for ourselves.  This is true even of our revolutionaries.  And yes, we couldn't begin to support this world if people could only use their own strength.  Even so, we are losing something.

That thought occurred to me, reading about the Manhattan Project, and the petition that the physicists signed to avoid dropping the atomic bomb on Japan.  It was too late, of course; they'd already built the bombs and handed them to the military, and they couldn't take back that gift.  And so nuclear weapons passed into the hands of politicians, who could never have created such a thing through their own strength...

Not that I'm saying the world would necessarily have been a better place, if physicists had possessed sole custody of ICBMs.  What does a physicist know about international diplomacy, or war?  And it's not as if Leo Szilard—who first thought of the fission chain reaction—had personally invented science; he too was using powers beyond his own strength.  And it's not as if the physicists on the Manhattan Project raised the money to pay for their salaries and their materials; they were borrowing the strength of the politicians...

But if no one had been able to use nuclear weapons without, say, possessing the discipline of a scientist and the discipline of a politician—without personally knowing enough to construct an atomic bomb and make friends—the world might have been a slightly safer place.

And if nobody had been able to construct an atomic bomb without first discovering for themselves the nature and existence of physics, then we would have been much safer from atomic bombs, because no one would have been able to build them until they were two hundred years old.

With humans leaving the game after just seventy years, we couldn't support this world using only our own strengths.  But we have traded quality of insight for quantity of insight.

It does sometimes seem to me that many of this world's problems, stem from our using powers that aren't appropriate to seventy-year-olds.

And there is a higher level of strength-ownership, which no human being has yet achieved.  Even when we run, we're just using the muscles that evolution built for us.  Even when we think, we're just using the brains that evolution built for us.

I'm not suggesting that people should create themselves from scratch without a starting point.  Just pointing out that it would be a different world if we understood our own brains and could redesign our own legs.  As yet there's no human \"rationalist\" or \"scientist\", whatever they know about \"how to think\", who could actually build a rational AI—which shows you the limits of our self-understanding.

This is not the sort of thing that I'd suggest as an immediate alteration.  I'm not suggesting that people should instantly on a silver platter be given full knowledge of how their own brains work and the ability to redesign their own legs.  Because maybe people will be better off if they aren't given that kind of power, but rather have to work out the answer for themselves.

Just in terms of anomie versus fun, there's a big difference between being able to do things for yourself, and having to rely on other people to do them for you.  (Even if you're doing them with a brain you never designed yourself.)

I don't know if it's a principle that would stay until the end of time, to the children's children.  Maybe better-designed minds could handle opaque tools without the anomie.

But it is part of the commonly retold prophecy of Artificial Intelligence and the Age of the Machine, that this era must be accompanied by greater reliance on things outside yourself, more incomprehensible tools into which you have less insight and less part in their creation.

Such a prophecy is not surprising.  That is the way the trend has gone so far, in our culture that is too busy staying alive to optimize for fun.  From the fire-starting tools that you built yourself, to the village candleseller, and then from the candleseller to the electric light that runs on strange mathematical principles and is powered by a distant generator...  we are surrounded by things outside ourselves and strengths outside our understanding; we need them to stay alive, or we buy them because it's easier that way.

But with a sufficient surplus of power, you could start doing things the eudaimonic way.  Start rethinking the life experience as a road to internalizing new strengths, instead of just trying to keep people alive efficiently.

A Friendly AI doesn't have to be a continuation of existing trends.  It's not the Machine.  It's not the alien force of technology.  It's not mechanizing a factory.  It's not a new gadget for sale.  That's not where the shape comes from.  What it is—is not easy to explain; but I'm reminded of Doc Smith's description of the Lens as \"the physical manifestation of a purely philosophical concept\".  That philosophical concept doesn't have to manifest as new buttons to press—if, on reflection, that's not what we would want.

" } }, { "_id": "eLHCWi8sotQT6CmTX", "title": "Sensual Experience", "pageUrl": "https://www.lesswrong.com/posts/eLHCWi8sotQT6CmTX/sensual-experience", "postedAt": "2008-12-21T00:56:31.000Z", "baseScore": 42, "voteCount": 31, "commentCount": 86, "url": null, "contents": { "documentId": "eLHCWi8sotQT6CmTX", "html": "

Modern day gamemakers are constantly working on higher-resolution, more realistic graphics; more immersive sounds—but they're a long long way off real life.

Pressing the \"W\" key to run forward as a graphic of a hungry tiger bounds behind you, just doesn't seem quite as sensual as running frantically across the savanna with your own legs, breathing in huge gasps and pumping your arms as the sun beats down on your shoulders, the grass brushes your shins, and the air whips around you with the wind of your passage.

Don't mistake me for a luddite; I'm not saying the technology can't get that good.  I'm saying it hasn't gotten that good yet.

Failing to escape the computer tiger would also have fewer long-term consequences than failing to escape a biological tiger—it would be less a part of the total story of your life—meaning you're also likely to be less emotionally involved.  But that's a topic for another post.  Today's post is just about the sensual quality of the experience.

Sensual experience isn't a question of some mysterious quality that only the \"real world\" possesses.  A computer screen is as real as a tiger, after all.  Whatever is, is real.

But the pattern of the pseudo-tiger, inside the computer chip, is nowhere near as complex as a biological tiger; it offers far fewer modes in which to interact.  And the sensory bandwidth between you and the computer's pseudo-world is relatively low; and the information passing along it isn't in quite the right format.

It's not a question of computer tigers being \"virtual\" or \"simulated\", and therefore somehow a separate magisterium. But with present technology, and the way your brain is presently set up, you'd have a lot more neurons involved in running away from a biological tiger.

Running would fill your whole vision with motion, not just a flat rectangular screen—which translates into more square centimeters of visual cortex getting actively engaged.

The graphics on a computer monitor try to trigger your sense of spatial motion (residing in the parietal cortex, btw).  But they're presenting the information differently from its native format—without binocular vision, for example, and without your vestibular senses indicating true motion.  So the sense of motion isn't likely to be quite the same as it would be if you were running.

And there's the sense of touch that indicates the wind on your skin; and the proprioceptive sensors that respond to the position of your limbs; and the nerves that record the strain on your muscles.  There's a whole strip of sensorimotor cortex running along the top of your brain, that would be much more intensely involved in \"real\" running.

It's a very old observation, that Homo sapiens was made to hunt and gather on the savanna, rather than work in an office.  Civilization and its discontents...  But alienation needs a causal mechanism; it doesn't just happen by magic.  Physics is physics, so it's not that one environment is less real than another.  But our brains are more adapted to interfacing with jungles than computer code.

Writing a complicated computer program carries its own triumphs and failures, heights of exultation and pits of despair.  But is it the same sort of sensual experience as, say, riding a motorcycle?  I've never actually ridden a motorcycle, but I expect not.

I've experienced the exhilaration of getting a program right on the dozenth try after finally spotting the problem.  I doubt a random moment of a motorcycle ride actually feels better than that.  But still, my hunter-gatherer ancestors never wrote computer programs.  And so my mind's grasp on code is maintained using more rarefied, more abstract, more general capabilities—which means less sensual involvement.

Doesn't computer programming deserve to be as much of a sensual experience as motorcycle riding?  Some time ago, a relative asked me if I thought that computer programming could use all my talents; I at once replied, \"There is no limit to the talent you can use in computer programming.\"  It's as close as human beings have ever come to playing with the raw stuff of creation—but our grasp on it is too distant from the jungle.  All our involvement is through letters on a computer screen.  I win, and I'm happy, but there's no wind on my face.

If only my ancestors back to the level of my last common ancestor with a mouse, had constantly faced the challenge of writing computer programs!  Then I would have brain areas suited to the task, and programming computers would be more of a sensual experience...

Perhaps it's not too late to fix the mistake?

If there were something around that was smart enough to rewrite human brains without breaking them—not a trivial amount of smartness—then it would be possible to expand the range of things that are sensually fun.

Not just novel challenges, but novel high-bandwidth senses and corresponding new brain areas.  Widening the sensorium to include new vivid, detailed experiences.  And not neglecting the other half of the equation, high-bandwidth motor connections—new motor brain areas, to control with subtlety our new limbs (the parts of the process that we control as our direct handles on it).

There's a story—old now, but I remember how exciting it was when the news first came out—about a brain-computer interface for a \"locked-in\" patient (who could previously only move his eyes), connecting control of a computer cursor directly to neurons in his motor cortex.  It took some training at first for him to use the cursor—he started out by trying to move his paralyzed arm, which was the part of the motor cortex they were interfacing, and watched as the cursor jerked around on the screen.  But after a while, they asked the patient, \"What does it feel like?\" and the patient replied, \"It doesn't feel like anything.\"  He just controlled the cursor the same sort of way he would have controlled a finger, except that it wasn't a finger, it was a cursor.

Like most brain modifications, adding new senses is not something to be done lightly.  Sensual experience too easily renders a task involving.

Consider taste buds.  Recognizing the taste of the same food on different occasions was very important to our ancestors—it was how they learned what to eat, the regularity they extracted from experience.  And our ancestors also got helpful reinforcement from their taste buds about what to eat—reinforcement which is now worse than useless, because of the marketing incentive to reverse-engineer tastiness using artificial substances.  By now, it's probably true that at least some people have eaten 162,329 potato chips in their lifetimes.  That's even less novelty and challenge than carving 162,329 table legs.

I'm not saying we should try to eliminate our senses of taste.  There's a lot to be said for grandfathering in the senses we started with—it preserves our existing life memories, for example.  Once you realize how easy it would be for a mind to collapse into a pleasure center, you start to respect the \"complications\" of your goal system a lot more, and be more wary around \"simplifications\".

But I do want to nudge people into adopting something of a questioning attitude toward the senses we have now, rather than assuming that the existing senses are The Way Things Have Been And Will Always Be.  A sex organ bears thousands of densely packed nerves for signal strength, but that signal—however strong—isn't as complicated as the sensations sent out by taste buds.  Is that really appropriate for one of the most interesting parts of human existence?  That even a novice chef can create a wider variety of taste sensations for your tongue, than—well, I'd better stop there.  But from a fun-theoretic standpoint, the existing setup is wildly unbalanced in a lot of ways.  It wasn't designed for the sake of eudaimonia.

I conclude with the following cautionary quote from an old IRC conversation, as a reminder that maybe not everything should be a sensual experience:

<MRAmes> I want a sensory modality for regular expressions.

" } }, { "_id": "aEdqh3KPerBNYvoWe", "title": "Complex Novelty", "pageUrl": "https://www.lesswrong.com/posts/aEdqh3KPerBNYvoWe/complex-novelty", "postedAt": "2008-12-20T00:31:51.000Z", "baseScore": 48, "voteCount": 44, "commentCount": 67, "url": null, "contents": { "documentId": "aEdqh3KPerBNYvoWe", "html": "

From Greg Egan's Permutation City:

    The workshop abutted a warehouse full of table legs—one hundred and sixty-two thousand, three hundred and twenty-nine, so far.  Peer could imagine nothing more satisfying than reaching the two hundred thousand mark—although he knew it was likely that he'd change his mind and abandon the workshop before that happened; new vocations were imposed by his exoself at random intervals, but statistically, the next one was overdue.  Immediately before taking up woodwork, he'd passionately devoured all the higher mathematics texts in the central library, run all the tutorial software, and then personally contributed several important new results to group theory—untroubled by the fact that none of the Elysian mathematicians would ever be aware of his work.  Before that, he'd written over three hundred comic operas, with librettos in Italian, French and English—and staged most of them, with puppet performers and audience.  Before that, he'd patiently studied the structure and biochemistry of the human brain for sixty-seven years; towards the end he had fully grasped, to his own satisfaction, the nature of the process of consciousness.  Every one of these pursuits had been utterly engrossing, and satisfying, at the time.  He'd even been interested in the Elysians, once.
    No longer.  He preferred to think about table legs.

Among science fiction authors, (early) Greg Egan is my favorite; of early-Greg-Egan's books, Permutation City is my favorite; and this particular passage in Permutation City, more than any of the others, I find utterly horrifying.

If this were all the hope the future held, I don't know if I could bring myself to try.  Small wonder that people don't sign up for cryonics, if even SF writers think this is the best we can do.

You could think of this whole series on Fun Theory as my reply to Greg Egan—a list of the ways that his human-level uploaded civilizations Fail At Fun.  (And yes, this series will also explain what's wrong with the Culture and how to fix it.)

We won't get to all of Peer's problems today—but really.  Table legs?

I could see myself carving one table leg, maybe, if there was something non-obvious to learn from the experience.  But not 162,329.

In Permutation City, Peer modified himself to find table-leg-carving fascinating and worthwhile and pleasurable.  But really, at that point, you might as well modify yourself to get pleasure from playing Tic-Tac-Toe, or lie motionless on a pillow as a limbless eyeless blob having fantastic orgasms.  It's not a worthy use of a human-level intelligence.

Worse, carving the 162,329th table leg doesn't teach you anything that you didn't already know from carving 162,328 previous table legs.  A mind that changes so little in life's course is scarcely experiencing time.

But apparently, once you do a little group theory, write a few operas, and solve the mystery of consciousness, there isn't much else worth doing in life: you've exhausted the entirety of Fun Space down to the level of table legs.

Is this plausible?  How large is Fun Space?

Let's say you were a human-level intelligence who'd never seen a Rubik's Cube, or anything remotely like it.  As Hofstadter describes in two whole chapters of Metamagical Themas, there's a lot that intelligent human novices can learn from the Cube—like the whole notion of an \"operator\" or \"macro\", a sequence of moves that accomplishes a limited swap with few side effects.  Parity, search, impossibility—

So you learn these things in the long, difficult course of solving the first scrambled Rubik's Cube you encounter.  The second scrambled Cube—solving it might still be difficult, still be enough fun to be worth doing.  But you won't have quite the same pleasurable shock of encountering something as new, and strange, and interesting as the first Cube was unto you.

Even if you encounter a variant of the Rubik's Cube—like a 4x4x4 Cube instead of a 3x3x3 Cube—or even a Rubik's Tesseract (a 3x3x3x3 Cube in four dimensions)—it still won't contain quite as much fun as the first Cube you ever saw.  I haven't tried mastering the Rubik's Tesseract myself, so I don't know if there are added secrets in four dimensions—but it doesn't seem likely to teach me anything as fundamental as \"operators\", \"side effects\", or \"parity\".

(I was quite young when I encountered a Rubik's Cube in a toy cache, and so that actually is where I discovered such concepts.  I tried that Cube on and off for months, without solving it.  Finally I took out a book from the library on Cubes, applied the macros there, and discovered that this particular Cube was unsolvable—it had been disassembled and reassembled into an impossible position.  I think I was faintly annoyed.)

Learning is fun, but it uses up fun: you can't have the same stroke of genius twice.  Insight is insight because it makes future problems less difficult, and \"deep\" because it applies to many such problems.

And the smarter you are, the faster you learn—so the smarter you are, the less total fun you can have.  Chimpanzees can occupy themselves for a lifetime at tasks that would bore you or me to tears.  Clearly, the solution to Peer's difficulty is to become stupid enough that carving table legs is difficult again—and so lousy at generalizing that every table leg is a new and exciting challenge—

Well, but hold on:  If you're a chimpanzee, you can't understand the Rubik's Cube at all.  At least I'm willing to bet against anyone training a chimpanzee to solve one—let alone a chimpanzee solving it spontaneously—let alone a chimpanzee understanding the deep concepts like \"operators\", \"side effects\", and \"parity\".

I could be wrong here, but it seems to me, on the whole, that when you look at the number of ways that chimpanzees have fun, and the number of ways that humans have fun, that Human Fun Space is larger than Chimpanzee Fun Space.

And not in a way that increases just linearly with brain size, either.

The space of problems that are Fun to a given brain, will definitely be smaller than the exponentially increasing space of all possible problems that brain can represent.  We are interested only in the borderland between triviality and impossibility—problems difficult enough to worthily occupy our minds, yet tractable enough to be worth challenging.  (What looks \"impossible\" is not always impossible, but the border is still somewhere even if we can't see it at a glance—there are some problems so difficult you can't even learn much from failing.)

An even stronger constraint is that if you do something many times, you ought to learn from the experience and get better—many problems of the same difficulty will have the same \"learnable lessons\" embedded in them, so that doing one consumes some of the fun of others.

As you learn new things, and your skills improve, problems will get easier.  Some will move off the border of the possible and the impossible, and become too easy to be interesting.

But others will move from the territory of impossibility into the borderlands of mere extreme difficulty.  It's easier to invent group theory if you've solved the Rubik's Cube first.  There are insights you can't have without prerequisite insights.

If you get smarter over time (larger brains, improved mind designs) that's a still higher octave of the same phenomenon.  (As best I can grasp the Law, there are insights you can't understand at all without having a brain of sufficient size and sufficient design.  Humans are not maximal in this sense, and I don't think there should be any maximum—but that's a rather deep topic, which I shall not explore further in this blog post.  Note that Greg Egan seems to explicitly believe the reverse—that humans can understand anything understandable—which explains a lot.)

One suspects that in a better-designed existence, the eudaimonic rate of intelligence increase would be bounded below by the need to integrate the loot of your adventures—to incorporate new knowledge and new skills efficiently, without swamping your mind in a sea of disconnected memories and associations—to manipulate larger, more powerful concepts that generalize more of your accumulated life-knowledge at once.

And one also suspects that part of the poignancy of transhuman existence will be having to move on from your current level—get smarter, leaving old challenges behind—before you've explored more than an infinitesimal fraction of the Fun Space for a mind of your level.  If, like me, you play through computer games trying to slay every single monster so you can collect every single experience point, this is as much tragedy as an improved existence could possibly need.

Fun Space can increase much more slowly than the space of representable problems, and still overwhelmingly swamp the amount of time you could bear to spend as a mind of a fixed level.  Even if Fun Space grows at some ridiculously tiny rate like N-squared—bearing in mind that the actual raw space of representable problems goes as 2^N—we're still talking about \"way more fun than you can handle\".

If you consider the loot of every human adventure—everything that was ever learned about science, and everything that was ever learned about people, and all the original stories ever told, and all the original games ever invented, and all the plots and conspiracies that were ever launched, and all the personal relationships ever raveled, and all the ways of existing that were ever tried, and all the glorious epiphanies of wisdom that were ever minted—

—and you deleted all the duplicates, keeping only one of every lesson that had the same moral—

—how long would you have to stay human, to collect every gold coin in the dungeons of history?

Would it all fit into a single human brain, without that mind completely disintegrating under the weight of unrelated associations?  And even then, would you have come close to exhausting the space of human possibility, which we've surely not finished exploring?

This is all sounding like suspiciously good news.  So let's turn it around. Is there any way that Fun Space could fail to grow, and instead collapse?

Suppose there's only so many deep insights you can have on the order of \"parity\", and that you collect them all, and then math is never again as exciting as it was in the beginning.  And that you then exhaust the shallower insights, and the trivial insights, until finally you're left with the delightful shock of \"Gosh wowie gee willickers, the product of 845 and 109 is 92105, I didn't know that logical truth before.\"

Well—obviously, if you sit around and catalogue all the deep insights known to you to exist, you're going to end up with a bounded list.  And equally obviously, if you declared, \"This is all there is, and all that will ever be,\" you'd be taking an unjustified step.  (Though I fully expect some people out there to step up and say how it seems to them that they've already started to run out of available insights that are as deep as the ones they remember from their childhood.  And I fully expect that—compared to the sort of person who makes such a pronouncement—I personally will have collected more additional insights than they believe exist in the whole remaining realm of possibility.)

Can we say anything more on this subject of fun insights that might exist, but that we haven't yet found?

The obvious thing to do is start appealing to Godel, but Godelian arguments are dangerous tools to employ in debate.  It does seem to me that Godelian arguments weigh in the general direction of \"inexhaustible deep insights\", but inconclusively and only by loose analogies.

For example, the Busy-Beaver(N) problem asks for the longest running time of a Turing machine with no more than N states.  The Busy Beaver problem is uncomputable—there is no fixed Turing machine that computes it for all N—because if you knew all the Busy Beaver numbers, you would have an infallible way of telling whether a Turing machine halts: run it for as long as the longest-running halting machine of that size, and if it hasn't halted by then, it never will.

The human species has managed to figure out and prove the Busy Beaver numbers up to 4, and they are:

BB(1):  1
BB(2):  6
BB(3):  21
BB(4):  107

Busy-Beaver 5 is believed to be 47,176,870.

The current lower bound on Busy-Beaver(6) is ~2.5 × 10^2879.
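The small end of this function is easy to replay for yourself.  Below is a minimal sketch (my own illustration, assuming the standard 2-symbol formalism): each rule maps a state and the symbol under the head to a symbol to write, a direction to move, and a next state, and the machine shown is the usual 2-state champion, which halts after exactly 6 steps with four 1s on the tape.

    # Minimal Turing machine runner (sketch; standard 2-symbol formalism).
    # Rules: (state, symbol read) -> (symbol written, head move, next state).
    # 'H' is the halt state.
    CHAMPION_2 = {
        ('A', 0): (1, +1, 'B'),
        ('A', 1): (1, -1, 'B'),
        ('B', 0): (1, -1, 'A'),
        ('B', 1): (1, +1, 'H'),
    }

    def run(machine, max_steps=10**6):
        tape, head, state, steps = {}, 0, 'A', 0
        while state != 'H' and steps < max_steps:
            symbol, move, state = machine[(state, tape.get(head, 0))]
            tape[head] = symbol   # write before moving
            head += move
            steps += 1
        return steps, sum(tape.values())

    print(run(CHAMPION_2))   # -> (6, 4): halts in 6 steps, leaving four 1s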

This function provably grows faster than any compact specification you can imagine.  Which would seem to argue that each new Turing machine is exhibiting a new and interesting kind of behavior.  Given infinite time, you would even be able to notice this behavior.  You won't ever know for certain that you've discovered the Busy-Beaver champion for any given N, after finite time; but conversely, you will notice the Busy Beaver champion for any N after some finite time.

Yes, this is an unimaginably long time—one of the few occasions where the word \"unimaginable\" is literally correct.  We can't actually do this unless reality works the way it does in Greg Egan novels.  But the point is that in the limit of infinite time we can point to something sorta like \"an infinite sequence of learnable deep insights not reducible to any of their predecessors or to any learnable abstract summary\".  It's not conclusive, but it's at least suggestive.

Now you could still look at that and say, \"I don't think my life would be an adventure of neverending excitement if I spent until the end of time trying to figure out the weird behaviors of slightly larger Turing machines.\"

Well—as I said before, Peer is doing more than one thing wrong.  Here I've dealt with only one sort of dimension of Fun Space—the dimension of how much novelty we can expect to find available to introduce into our fun.

But even on the arguments given so far... I don't call it conclusive, but it seems like sufficient reason to hope and expect that our descendants and future selves won't exhaust Fun Space to the point that there is literally nothing left to do but carve the 162,329th table leg.

" } }, { "_id": "29vqqmGNxNRGzffEj", "title": "High Challenge", "pageUrl": "https://www.lesswrong.com/posts/29vqqmGNxNRGzffEj/high-challenge", "postedAt": "2008-12-19T00:51:08.000Z", "baseScore": 77, "voteCount": 67, "commentCount": 76, "url": null, "contents": { "documentId": "29vqqmGNxNRGzffEj", "html": "

There's a class of prophecy that runs:  \"In the Future, machines will do all the work.  Everything will be automated.  Even labor of the sort we now consider 'intellectual', like engineering, will be done by machines.  We can sit back and own the capital.  You'll never have to lift a finger, ever again.\"

But then won't people be bored?

No; they can play computer games—not like our games, of course, but much more advanced and entertaining.

Yet wait!  If you buy a modern computer game, you'll find that it contains some tasks that are—there's no kind word for this—effortful.  (I would even say \"difficult\", with the understanding that we're talking about something that takes 10 minutes, not 10 years.)

So in the future, we'll have programs that help you play the game—taking over if you get stuck on the game, or just bored; or so that you can play games that would otherwise be too advanced for you.

But isn't there some wasted effort, here?  Why have one programmer working to make the game harder, and another programmer working to make the game easier?  Why not just make the game easier to start with?  Since you play the game to get gold and experience points, making the game easier will let you get more gold per unit time: the game will become more fun.

So this is the ultimate end of the prophecy of technological progress—just staring at a screen that says \"YOU WIN\", forever.

And maybe we'll build a robot that does that, too.

Then what?

The world of machines that do all the work—well, I don't want to say it's \"analogous to the Christian Heaven\" because it isn't supernatural; it's something that could in principle be realized.  Religious analogies are far too easily tossed around as accusations...  But, without implying any other similarities, I'll say that it seems analogous in the sense that eternal laziness \"sounds like good news\" to your present self who still has to work.

And as for playing games, as a substitute—what is a computer game except synthetic work?  Isn't there a wasted step here?  (And computer games in their present form, considered as work, have various aspects that reduce stress and increase engagement; but they also carry costs in the form of artificiality and isolation.)

I sometimes think that futuristic ideals phrased in terms of \"getting rid of work\" would be better reformulated as \"removing low-quality work to make way for high-quality work\".

There's a broad class of goals that aren't suitable as the long-term meaning of life, because you can actually achieve them, and then you're done.

To look at it another way, if we're looking for a suitable long-run meaning of life, we should look for goals that are good to pursue and not just good to satisfy.

Or to phrase that somewhat less paradoxically:  We should look for valuations that are over 4D states, rather than 3D states.  Valuable ongoing processes, rather than \"make the universe have property P and then you're done\".
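
As a toy contrast to pin down the 4D/3D distinction (the states and scores below are invented for illustration, not drawn from the post): a 3D valuation looks only at where the world ends up, while a 4D valuation scores the whole trajectory, and so can keep assigning value for as long as the process continues.

```python
# Toy contrast between a 3D valuation (over end states) and a 4D
# valuation (over whole histories).  States and scores are invented.
history = ['training', 'competing', 'winning', 'resting', 'resting']

def value_3d(history):
    # Cares only about the final state: once done, nothing more counts.
    return 1.0 if history[-1] == 'winning' else 0.0

def value_4d(history):
    # Cares about the journey: every engaged step adds value.
    engaged = {'training', 'competing', 'winning'}
    return sum(1.0 for state in history if state in engaged)

print(value_3d(history))  # 0.0 -- the goal was reached, then it was over
print(value_4d(history))  # 3.0 -- the process itself carried the value
```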

Timothy Ferriss is again worth quoting:  To find happiness, \"the question you should be asking isn't 'What do I want?' or 'What are my goals?' but 'What would excite me?'\"

You might say that for a long-run meaning of life, we need games that are fun to play and not just to win.

Mind you—sometimes you do want to win.  There are legitimate goals where winning is everything.  If you're talking, say, about curing cancer, then the suffering experienced by even a single cancer patient outweighs any fun that you might have in solving their problems.  If you work at creating a cancer cure for twenty years through your own efforts, learning new knowledge and new skill, making friends and allies—and then some alien superintelligence offers you a cancer cure on a silver platter for thirty bucks—then you shut up and take it.

But \"curing cancer\" is a problem of the 3D-predicate sort: you want the no-cancer predicate to go from False in the present to True in the future.  The importance of this destination far outweighs the journey; you don't want to go there, you just want to be there.  There are many legitimate goals of this sort, but they are not suitable as long-run fun.  \"Cure cancer!\" is a worthwhile activity for us to pursue here and now, but it is not a plausible future goal of galactic civilizations.

Why should this \"valuable ongoing process\" be a process of trying to do things—why not a process of passive experiencing, like the Buddhist Heaven?

I confess I'm not entirely sure how to set up a \"passively experiencing\" mind.  The human brain was designed to perform various sorts of internal work that add up to an active intelligence; even if you lie down on your bed and exert no particular effort to think, the thoughts that go on through your mind are activities of brain areas that are designed to, you know, solve problems.

How much of the human brain could you eliminate, apart from the pleasure centers, and still keep the subjective experience of pleasure?

I'm not going to touch that one.  I'll stick with the much simpler answer of \"I wouldn't actually prefer to be a passive experiencer.\"  If I wanted Nirvana, I might try to figure out how to achieve that impossibility.  But once you strip away Buddha telling me that Nirvana is the end-all of existence, Nirvana seems rather more like \"sounds like good news in the moment of first being told\" or \"ideological belief in desire\" rather than, y'know, something I'd actually want.

The reason I have a mind at all, is that natural selection built me to do things—to solve certain kinds of problems.

\"Because it's human nature\" is not an explicit justification for anything.  There is human nature, which is what we are; and there is humane nature, which is what, being human, we wish we were.

But I don't want to change my nature toward a more passive object—which is a justification.  A happy blob is not what, being human, I wish to become.

I earlier argued that many values require both subjective happiness and the external objects of that happiness.  That you can legitimately have a utility function that says, \"It matters to me whether or not the person I love is a real human being or just a highly realistic nonsentient chatbot, even if I don't know, because that-which-I-value is not my own state of mind, but the external reality.\"  So that you need both the experience of love, and the real lover.

You can similarly have valuable activities that require both real challenge and real effort.

Racing along a track, it matters that the other racers are real, and that you have a real chance to win or lose.  (We're not talking about physical determinism here, but whether some external optimization process explicitly chose for you to win the race.)

And it matters that you're racing with your own skill at running and your own willpower, not just pressing a button that says \"Win\".  (Though, since you never designed your own leg muscles, you are racing using strength that isn't yours.  A race between robot cars is a purer contest of their designers.  There is plenty of room to improve on the human condition.)

And it matters that you, a sentient being, are experiencing it.  (Rather than some nonsentient process carrying out a skeleton imitation of the race, trillions of times per second.)

There must be the true effort, the true victory, and the true experience—the journey, the destination and the traveler.

" } }, { "_id": "pK4HTxuv6mftHXWC3", "title": "Prolegomena to a Theory of Fun", "pageUrl": "https://www.lesswrong.com/posts/pK4HTxuv6mftHXWC3/prolegomena-to-a-theory-of-fun", "postedAt": "2008-12-17T23:33:01.000Z", "baseScore": 67, "voteCount": 52, "commentCount": 52, "url": null, "contents": { "documentId": "pK4HTxuv6mftHXWC3", "html": "

Raise the topic of cryonics, uploading, or just medically extended lifespan/healthspan, and some bioconservative neo-Luddite is bound to ask, in portentous tones:

\"But what will people do all day?\"

They don't try to actually answer the question.  That is not a bioethicist's role, in the scheme of things.  They're just there to collect credit for the Deep Wisdom of asking the question.  It's enough to imply that the question is unanswerable, and therefore, we should all drop dead.

That doesn't mean it's a bad question.

It's not an easy question to answer, either.  The primary experimental result in hedonic psychology—the study of happiness—is that people don't know what makes them happy.

And there are many exciting results in this new field, which go a long way toward explaining the emptiness of classical Utopias.  But it's worth remembering that human hedonic psychology is not enough for us to consider, if we're asking whether a million-year lifespan could be worth living.

Fun Theory, then, is the field of knowledge that would deal in questions like:

• How much fun is there in the universe?
• Will we ever run out of fun?
• Are we having fun yet?
• Could we be having more fun?

One major set of experimental results in hedonic psychology has to do with overestimating the impact of life events on happiness.  Six months after the event, lottery winners aren't as happy as they expected to be, and quadriplegics aren't as sad.  A parent who loses a child isn't as sad as they think they'll be, a few years later.  If you look at one moment snapshotted out of their lives a few years later, that moment isn't likely to be about the lost child.  Maybe they're playing with one of their surviving children on a swing.  Maybe they're just listening to a nice song on the radio.

When people are asked to imagine how happy or sad an event will make them, they anchor on the moment of first receiving the news, rather than realistically imagining the process of daily life years later.

Consider what the Christians made of their Heaven, meant to be literally eternal.  Endless rest, the glorious presence of God, and occasionally—in the more clueless sort of sermon—golden streets and diamond buildings.  Is this eudaimonia?  It doesn't even seem very hedonic.

As someone who said his share of prayers back in his Orthodox Jewish childhood upbringing, I can personally testify that praising God is an enormously boring activity, even if you're still young enough to truly believe in God.  The part about praising God is there as an applause light that no one is allowed to contradict: it's something theists believe they should enjoy, even though, if you ran them through an fMRI machine, you probably wouldn't find their pleasure centers lighting up much.

Ideology is one major wellspring of flawed Utopias, containing things that the imaginer believes should be enjoyed, rather than things that would actually be enjoyable.

And eternal rest?  What could possibly be more boring than eternal rest?

But to an exhausted, poverty-stricken medieval peasant, the Christian Heaven sounds like good news in the moment of being first informed:  You can lay down the plow and rest!  Forever!  Never to work again!

It'd get boring after... what, a week?  A day?  An hour?

Heaven is not configured as a nice place to live.  It is rather memetically optimized to be a nice place for an exhausted peasant to imagine.  It's not like some Christians actually got a chance to live in various Heavens, and voted on how well they liked it after a year, and then they kept the best one.  The Paradise that survived was the one that was retold, not lived.

Timothy Ferriss observed, \"Living like a millionaire requires doing interesting things and not just owning enviable things.\"  Golden streets and diamond walls would fade swiftly into the background, once obtained—but so long as you don't actually have gold, it stays desirable.

And there are two lessons required to get past such failures; and these lessons are in some sense opposite to one another.

The first lesson is that humans are terrible judges of what will actually make them happy, in the real world and the living moments.  Daniel Gilbert's Stumbling on Happiness is the most famous popular introduction to the research.

We need to be ready to correct for such biases—the world that is fun to live in, may not be the world that sounds good when spoken into our ears.

And the second lesson is that there's nothing in the universe out of which to construct Fun Theory, except that which we want for ourselves or prefer to become.

If, in fact, you don't like praying, then there's no higher God than yourself to tell you that you should enjoy it.  We sometimes do things we don't like, but that's still our own choice.  There's no outside force to scold us for making the wrong decision.

This is something for transhumanists to keep in mind—not because we're tempted to pray, of course, but because there are so many other logical-sounding solutions we wouldn't really want.

The transhumanist philosopher David Pearce is an advocate of what he calls the Hedonistic Imperative:  The eudaimonic life is the one that is as pleasurable as possible.  So even happiness attained through drugs is good?  Yes, in fact:  Pearce's motto is \"Better Living Through Chemistry\".

Or similarly:  When giving a small informal talk once on the Stanford campus, I raised the topic of Fun Theory in the post-talk mingling.  And someone there said that his ultimate objective was to experience delta pleasure.  That's \"delta\" as in the Dirac delta—roughly, an infinitely high spike (that happens to be integrable).  \"Why?\" I asked.  He said, \"Because that means I win.\"

(I replied, \"How about if you get two times delta pleasure?  Do you win twice as hard?\")
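
For readers who missed the reference: the Dirac delta is zero everywhere except at a single point, yet its integral is finite, which is exactly what makes 'two times delta' a well-defined (if absurd) request.

```latex
\int_{-\infty}^{\infty} \delta(x)\,dx = 1,
\qquad
\int_{-\infty}^{\infty} 2\,\delta(x)\,dx = 2.
```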

In the transhumanist lexicon, \"orgasmium\" refers to simplified brains that are just pleasure centers experiencing huge amounts of stimulation—a happiness counter containing a large number, plus whatever minimum surrounding framework is needed to experience it.  You can imagine a whole galaxy tiled with orgasmium.  Would this be a good thing?

And the vertigo-inducing thought is this—if you would prefer not to become orgasmium, then why should you?

Mind you, there are many reasons why something that sounds unpreferred at first glance, might be worth a closer look.  That was the first lesson.  Many Christians think they want to go to Heaven.

But when it comes to the question, \"Don't I have to want to be as happy as possible?\" then the answer is simply \"No.  If you don't prefer it, why go there?\"

There's nothing except such preferences out of which to construct Fun Theory—a second look is still a look, and must still be constructed out of preferences at some level.

In the era of my foolish youth, when I went into an affective death spiral around intelligence, I thought that the mysterious \"right\" thing that any superintelligence would inevitably do, would be to upgrade every nearby mind to superintelligence as fast as possible.  Intelligence was good; therefore, more intelligence was better.

Somewhat later I imagined the scenario of unlimited computing power, so that no matter how smart you got, you were still just as far from infinity as ever.  That got me thinking about a journey rather than a destination, and allowed me to think \"What rate of intelligence increase would be fun?\"

But the real break came when I naturalized my understanding of morality, and value stopped being a mysterious attribute of unknown origins.

Then if there was no outside light in the sky to order me to do things—

The thought occurred to me that I didn't actually want to bloat up immediately into a superintelligence, or have my world transformed instantaneously and completely into something incomprehensible.  I'd prefer to have it happen gradually, with time to stop and smell the flowers along the way.

It felt like a very guilty thought, but—

But there was nothing higher to override this preference.

In which case, if the Friendly AI project succeeded, there would be a day after the Singularity to wake up to, and myself to wake up to it.

You may not see why this would be a vertigo-inducing concept.  Pretend you're Eliezer2003 who has spent the last seven years talking about how it's forbidden to try to look beyond the Singularity—because the AI is smarter than you, and if you knew what it would do, you would have to be that smart yourself—

—but what if you don't want the world to be made suddenly incomprehensible?  Then there might be something to understand, that next morning, because you don't actually want to wake up in an incomprehensible world, any more than you actually want to suddenly be a superintelligence, or turn into orgasmium.

I can only analogize the experience to a theist who's suddenly told that they can know the mind of God, and it turns out to be only twenty lines of Python.

You may find it hard to sympathize.  Well, Eliezer1996, who originally made the mistake, was smart but methodologically inept, as I've mentioned a few times.

Still, expect to see some outraged comments on this very blog post, from commenters who think that it's selfish and immoral, and above all a failure of imagination, to talk about human-level minds still running around the day after the Singularity.

That's the frame of mind I used to occupy—that the things I wanted were selfish, and that I shouldn't think about them too much, or at all, because I would need to sacrifice them for something higher.

People who talk about an existential pit of meaninglessness in a universe devoid of meaning—I'm pretty sure they don't understand morality in naturalistic terms.  There is vertigo involved, but it's not the vertigo of meaninglessness.

More like a theist who is frightened that someday God will order him to murder children, and then he realizes that there is no God and his fear of being ordered to murder children was morality.  It's a strange relief, mixed with the realization that you've been very silly, as the last remnant of outrage at your own selfishness fades away.

So the first step toward Fun Theory is that, so far as I can tell, it looks basically okay to make our future light cone—all the galaxies that we can get our hands on—into a place that is fun rather than not fun.

We don't need to transform the universe into something we feel dutifully obligated to create, but isn't really much fun—in the same way that a Christian would feel dutifully obliged to enjoy heaven—or that some strange folk think that creating orgasmium is, logically, the rightest thing to do.

Fun is okay.  It's allowed.  It doesn't get any better than fun.

And then we can turn our attention to the question of what is fun, and how to have it.

" } }, { "_id": "fwd2qoP9jJtuHhjrd", "title": "Visualizing Eutopia", "pageUrl": "https://www.lesswrong.com/posts/fwd2qoP9jJtuHhjrd/visualizing-eutopia", "postedAt": "2008-12-16T18:39:54.000Z", "baseScore": 22, "voteCount": 18, "commentCount": 37, "url": null, "contents": { "documentId": "fwd2qoP9jJtuHhjrd", "html": "

Followup to:  Not Taking Over the World

\"Heaven is a city 15,000 miles square or 6,000 miles around. One side is 245 miles longer than the length of the Great Wall of China. Walls surrounding Heaven are 396,000 times higher than the Great Wall of China and eight times as thick. Heaven has twelve gates, three on each side, and has room for 100,000,000,000 souls. There are no slums. The entire city is built of diamond material, and the streets are paved with gold. All inhabitants are honest and there are no locks, no courts, and no policemen.\"
  -- Reverend Doctor George Hawes, in a sermon

Yesterday I asked my esteemed co-blogger Robin what he would do with \"unlimited power\", in order to reveal something of his character.  Robin said that he would (a) be very careful and (b) ask for advice.  I asked him what advice he would give himself.  Robin said it was a difficult question and he wanted to wait on considering it until it actually happened.  So overall he ran away from the question like a startled squirrel.

The character thus revealed is a virtuous one: it shows common sense.  A lot of people jump after the prospect of absolute power like it was a coin they found in the street.

When you think about it, though, it says a lot about human nature that this is a difficult question.  I mean - most agents with utility functions shouldn't have such a hard time describing their perfect universe.

For a long time, I too ran away from the question like a startled squirrel.  First I claimed that superintelligences would inevitably do what was right, relinquishing moral responsibility in toto.  After that, I propounded various schemes to shape a nice superintelligence, and let it decide what should be done with the world.

Not that there's anything wrong with that.  Indeed, this is still the plan.  But it still meant that I, personally, was ducking the question.

Why?  Because I expected to fail at answering.  Because I thought that any attempt for humans to visualize a better future was going to end up recapitulating the Reverend Doctor George Hawes: apes thinking, \"Boy, if I had human intelligence I sure could get a lot more bananas.\"

But trying to get a better answer to a question out of a superintelligence, is a different matter from entirely ducking the question yourself.  The point at which I stopped ducking was the point at which I realized that it's actually quite difficult to get a good answer to something out of a superintelligence, while simultaneously having literally no idea how to answer yourself.

When you're dealing with confusing and difficult questions - as opposed to those that are straightforward but numerically tedious - it's quite suspicious to have, on the one hand, a procedure that executes to reliably answer the question, and, on the other hand, no idea of how to answer it yourself.

If you could write a computer program that you knew would reliably output a satisfactory answer to \"Why does anything exist in the first place?\" or \"Why do I find myself in a universe giving rise to experiences that are ordered rather than chaotic?\", then shouldn't you be able to at least try executing the same procedure yourself?

I suppose there could be some section of the procedure where you've got to do a septillion operations and so you've just got no choice but to wait for superintelligence, but really, that sounds rather suspicious in cases like these.

So it's not that I'm planning to use the output of my own intelligence to take over the universe.  But I did realize at some point that it was too suspicious to entirely duck the question while trying to make a computer knowably solve it.  It didn't even seem all that morally cautious, once I put it in those terms.  You can design an arithmetic chip using purely abstract reasoning, but would you be wise to never try an arithmetic problem yourself?

And when I did finally try - well, that caused me to update in various ways.

It does make a difference to try doing arithmetic yourself, instead of just trying to design chips that do it for you.  So I found.

Hence my bugging Robin about it.

For it seems to me that Robin asks too little of the future.  It's all very well to plead that you are only forecasting, but if you display greater revulsion to the idea of a Friendly AI than to the idea of rapacious hardscrapple frontier folk...

I thought that Robin might be asking too little, due to not visualizing any future in enough detail.  Not the future but any future.  I'd hoped that if Robin had allowed himself to visualize his \"perfect future\" in more detail, rather than focusing on all the compromises he thinks he has to make, he might see that there were futures more desirable than the rapacious hardscrapple frontier folk.

It's hard to see on an emotional level why a genie might be a good thing to have, if you haven't acknowledged any wishes that need granting.  It's like not feeling the temptation of cryonics, if you haven't thought of anything the Future contains that might be worth seeing.

I'd also hoped to persuade Robin, if his wishes were complicated enough, that there were attainable good futures that could not come about by letting things go their own way.  So that he might begin to see the future as I do, as a dilemma between extremes:  The default, loss of control, followed by a Null future containing little or no utility.  Versus extremely precise steering through \"impossible\" problems to get to any sort of Good future whatsoever.

This is mostly a matter of appreciating how even the desires we call \"simple\" actually contain many bits of information.  Getting past anthropomorphic optimism, to realize that a Future not strongly steered by our utility functions is likely to contain little or no utility, for the same reason it's hard to hit a distant target while shooting blindfolded...

But if your \"desired future\" remains mostly unspecified, that may encourage too much optimism as well.

" } }, { "_id": "DdEKcS6JcW7ordZqQ", "title": "Not Taking Over the World", "pageUrl": "https://www.lesswrong.com/posts/DdEKcS6JcW7ordZqQ/not-taking-over-the-world", "postedAt": "2008-12-15T22:18:47.000Z", "baseScore": 40, "voteCount": 32, "commentCount": 97, "url": null, "contents": { "documentId": "DdEKcS6JcW7ordZqQ", "html": "

Followup to:  What I Think, If Not Why

My esteemed co-blogger Robin Hanson accuses me of trying to take over the world.

Why, oh why must I be so misunderstood?

(Well, it's not like I don't enjoy certain misunderstandings.  Ah, I remember the first time someone seriously and not in a joking way accused me of trying to take over the world.  On that day I felt like a true mad scientist, though I lacked a castle and hunchbacked assistant.)

But if you're working from the premise of a hard takeoff - an Artificial Intelligence that self-improves at an extremely rapid rate - and you suppose such extra-ordinary depth of insight and precision of craftsmanship that you can actually specify the AI's goal system instead of automatically failing -

- then it takes some work to come up with a way not to take over the world.

Robin talks up the drama inherent in the intelligence explosion, presumably because he feels that this is a primary source of bias.  But I've got to say that Robin's dramatic story does not sound like the story I tell of myself.  There, the drama comes from tampering with such extreme forces that every single idea you invent is wrong.  The standardized Final Apocalyptic Battle of Good Vs. Evil would be trivial by comparison; then all you have to do is put forth a desperate effort.  Facing an adult problem in a neutral universe isn't so straightforward.  Your enemy is yourself, who will automatically destroy the world, or just fail to accomplish anything, unless you can defeat you.  That is the drama I crafted into the story I tell myself, for I too would disdain anything so cliched as Armageddon.

So, Robin, I'll ask you something of a probing question.  Let's say that someone walks up to you and grants you unlimited power.

What do you do with it, so as to not take over the world?

Do you say, \"I will do nothing - I take the null action\"?

But then you have instantly become a malevolent God, as Epicurus said:

Is God willing to prevent evil, but not able?  Then he is not omnipotent.
Is he able, but not willing?  Then he is malevolent.
Is he both able and willing?  Then whence cometh evil?
Is he neither able nor willing?  Then why call him God?

Peter Norvig said, \"Refusing to act is like refusing to allow time to pass.\"  The null action is also a choice.  So have you not, in refusing to act, established all sick people as sick, established all poor people as poor, ordained all in despair to continue in despair, and condemned the dying to death?  Will you not be, until the end of time, responsible for every sin committed?

Well, yes and no.  If someone says, \"I don't trust myself not to destroy the world, therefore I take the null action,\" then I would tend to sigh and say, \"If that is so, then you did the right thing.\"  Afterward, murderers will still be responsible for their murders, and altruists will still be creditable for the help they give.

And to say that you used your power to take over the world by doing nothing to it, seems to stretch the ordinary meaning of the phrase.

But it wouldn't be the best thing you could do with unlimited power, either.

With \"unlimited power\" you have no need to crush your enemies.  You have no moral defense if you treat your enemies with less than the utmost consideration.

With \"unlimited power\" you cannot plead the necessity of monitoring or restraining others so that they do not rebel against you.  If you do such a thing, you are simply a tyrant who enjoys power, and not a defender of the people.

Unlimited power removes a lot of moral defenses, really.  You can't say \"But I had to.\"  You can't say \"Well, I wanted to help, but I couldn't.\"  The only excuse for not helping is if you shouldn't, which is harder to establish.

And let us also suppose that this power is wieldable without side effects or configuration constraints; it is wielded with unlimited precision.

For example, you can't take refuge in saying anything like:  \"Well, I built this AI, but any intelligence will pursue its own interests, so now the AI will just be a Ricardian trading partner with humanity as it pursues its own goals.\"  Say, the programming team has cracked the \"hard problem of conscious experience\" in sufficient depth that they can guarantee that the AI they create is not sentient - not a repository of pleasure, or pain, or subjective experience, or any interest-in-self - and hence, the AI is only a means to an end, and not an end in itself.

And you cannot take refuge in saying, \"In invoking this power, the reins of destiny have passed out of my hands, and humanity has passed on the torch.\"  Sorry, you haven't created a new person yet - not unless you deliberately invoke the unlimited power to do so - and then you can't take refuge in the necessity of it as a side effect; you must establish that it is the right thing to do.

The AI is not necessarily a trading partner.  You could make it a nonsentient device that just gave you things, if you thought that were wiser.

You cannot say, \"The law, in protecting the rights of all, must necessarily protect the right of Fred the Deranged to spend all day giving himself electrical shocks.\"  The power is wielded with unlimited precision; you could, if you wished, protect the rights of everyone except Fred.

You cannot take refuge in the necessity of anything - that is the meaning of unlimited power.

We will even suppose (for it removes yet more excuses, and hence reveals more of your morality) that you are not limited by the laws of physics as we know them.  You are bound to deal only in finite numbers, but not otherwise bounded.  This is so that we can see the true constraints of your morality, apart from your being able to plead constraint by the environment.

In my reckless youth, I used to think that it might be a good idea to flash-upgrade to the highest possible level of intelligence you could manage on available hardware.  Being smart was good, so being smarter was better, and being as smart as possible as quickly as possible was best - right?

But when I imagined having infinite computing power available, I realized that no matter how large a mind you made yourself, you could just go on making yourself larger and larger and larger.  So that wasn't an answer to the purpose of life.  And only then did it occur to me to ask after eudaimonic rates of intelligence increase, rather than just assuming you wanted to immediately be as smart as possible.

Considering the infinite case moved me to change the way I considered the finite case.  Before, I was running away from the question by saying \"More!\"  But considering an unlimited amount of ice cream forced me to confront the issue of what to do with any of it.

Similarly with population:  If you invoke the unlimited power to create a quadrillion people, then why not a quintillion?  If 3^^^3, why not 3^^^^3?  So you can't take refuge in saying, \"I will create more people - that is the difficult thing, and to accomplish it is the main challenge.\"  What is it that makes an individual life worth living?

You can say, \"It's not my place to decide; I leave it up to others\" but then you are responsible for the consequences of that decision as well.  You should say, at least, how this differs from the null act.

So, Robin, reveal to us your character:  What would you do with unlimited power?

" } }, { "_id": "cfZ8zveqrTZbQrjeD", "title": "For The People Who Are Still Alive", "pageUrl": "https://www.lesswrong.com/posts/cfZ8zveqrTZbQrjeD/for-the-people-who-are-still-alive", "postedAt": "2008-12-14T17:13:03.000Z", "baseScore": 45, "voteCount": 40, "commentCount": 72, "url": null, "contents": { "documentId": "cfZ8zveqrTZbQrjeD", "html": "

Max Tegmark observed that we have three independent reasons to believe we live in a Big World:  A universe which is large relative to the space of possibilities.  For example, on current physics, the universe appears to be spatially infinite (though I'm not clear on how strongly this is implied by the standard model).

If the universe is spatially infinite, then, on average, we should expect that no more than 10^10^29 meters away is an exact duplicate of you.  If you're looking for an exact duplicate of a Hubble volume - an object the size of our observable universe - then you should still on average only need to look 10^10^115 lightyears.  (These are numbers based on a highly conservative counting of \"physically possible\" states, e.g. packing the whole Hubble volume with potential protons at maximum density given by the Pauli Exclusion principle, and then allowing each proton to be present or absent.)
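
As a rough back-of-the-envelope in the spirit of that parenthetical (the proton count below is an assumed order of magnitude for illustration; Tegmark's finer bookkeeping yields the somewhat smaller exponents quoted above):

```latex
% Assume n ~ 10^{118} proton slots per Hubble volume, each present or absent.
N = 2^{10^{118}} \approx 10^{\,0.3 \times 10^{118}}
  \quad \text{distinct Hubble-volume configurations,}
\qquad
d \sim N^{1/3} \, R_{\mathrm{Hubble}} \approx 10^{\,10^{117}} \ \mathrm{m}.
```

With double exponentials, the cube root that converts a configuration count into a distance barely dents the top exponent, which is why the quoted figures look so alien in any choice of units.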

The most popular cosmological theories also call for an \"inflationary\" scenario in which many different universes would be eternally budding off, our own universe being only one bud.  And finally there are the alternative decoherent branches of the grand quantum distribution, aka \"many worlds\", whose presence is unambiguously implied by the simplest mathematics that fits our quantum experiments.

Ever since I realized that physics seems to tell us straight out that we live in a Big World, I've become much less focused on creating lots of people, and much more focused on ensuring the welfare of people who are already alive.

If your decision to not create a person means that person will never exist at all, then you might, indeed, be moved to create them, for their sakes.  But if you're just deciding whether or not to create a new person here, in your own Hubble volume and Everett branch, then it may make sense to have relatively lower populations within each causal volume, living higher qualities of life.  It's not like anyone will actually fail to be born on account of that decision - they'll just be born predominantly into regions with higher standards of living.

Am I sure that this statement, that I have just emitted, actually makes sense?

Not really.  It dabbles in the dark arts of anthropics, and the Dark Arts don't get much murkier than that.  Or to say it without the chaotic inversion:  I am stupid with respect to anthropics.

But to apply the test of simplifiability - it seems in some raw intuitive sense, that if the universe is large enough for everyone to exist somewhere, then we should mainly be worried about giving babies nice futures rather than trying to \"ensure they get born\".

Imagine taking a survey of the whole universe.  Every plausible baby gets a little checkmark in the \"exists\" box - everyone is born somewhere.  In fact, the total population count for each baby is something-or-other, some large number that may or may not be \"infinite\" -

(I should mention at this point that I am an infinite set atheist, and my main hope for being able to maintain this in the face of a spatially infinite universe is to suggest that identical Hubble volumes add in the same way as any other identical configuration of particles.  So in this case the universe would be exponentially large, the size of the branched decoherent distribution, but the spatial infinity would just fold into that very large but finite object.  And I could still be an infinite set atheist.  I am not a physicist so my fond hope may be ruled out for some reason of which I am not aware.)

- so the first question, anthropically speaking, is whether multiple realizations of the exact same physical process count as more than one person.  Let's say you've got an upload running on a computer.  If you look inside the computer and realize that it contains triply redundant processors running in exact synchrony, is that three people or one person?  How about if the processor is a flat sheet - if that sheet is twice as thick, is there twice as much person inside it?  If we split the sheet and put it back together again without desynchronizing it, have we created a person and killed them?

I suppose the answer could be yes; I have confessed myself stupid about anthropics.

Still:  I, as I sit here, am frantically branching into exponentially vast numbers of quantum worlds.  I've come to terms with that.  It all adds up to normality, after all.

But I don't see myself as having a little utility counter that frantically increases at an exponential rate, just from my sitting here and splitting.  The thought of splitting at a faster rate does not much appeal to me, even if such a thing could be arranged.

What I do want for myself, is for the largest possible proportion of my future selves to lead eudaimonic existences, that is, to be happy.  This is the \"probability\" of a good outcome in my expected utility maximization.  I'm not concerned with having more of me - really, there are plenty of me already - but I do want most of me to be having fun.

I'm not sure whether or not there exists an imperative for moral civilizations to try to create lots of happy people so as to ensure that most babies born will be happy.  But suppose that you started off with 1 baby existing in unhappy regions for every 999 babies existing in happy regions.  Would it make sense for the happy regions to create ten times as many babies leading one-tenth the quality of life, so that the universe was \"99.99% sorta happy and 0.01% unhappy\" instead of \"99.9% really happy and 0.1% unhappy\"?  On the face of it, I'd have to answer \"No.\"  (Though it depends on how unhappy the unhappy regions are; and if we start off with the universe mostly unhappy, well, that's a pretty unpleasant possibility...)

But on the whole, it looks to me like if we decide to implement a policy of routinely killing off citizens to replace them with happier babies, or if we lower standards of living to create more people, then we aren't giving the \"gift of existence\" to babies who wouldn't otherwise have it.  We're just setting up the universe to contain the same babies, born predominantly into regions where they lead short lifespans not containing much happiness.

Once someone has been born into your Hubble volume and your Everett branch, you can't undo that; it becomes the responsibility of your region of existence to give them a happy future.  You can't hand them back by killing them.  That just makes their average lifespan shorter.

It seems to me that in a Big World, the people who already exist in your region have a much stronger claim on your charity than babies who have not yet been born into your region in particular.

And that's why, when there is research to be done, I do it not just for all the future babies who will be born - but, yes, for the people who already exist in our local region, who are already our responsibility.

For the good of all of us, except the ones who are dead.

" } }, { "_id": "px2SEBcGoRkuxFfEY", "title": "BHTV: de Grey and Yudkowsky", "pageUrl": "https://www.lesswrong.com/posts/px2SEBcGoRkuxFfEY/bhtv-de-grey-and-yudkowsky", "postedAt": "2008-12-13T15:28:28.000Z", "baseScore": 10, "voteCount": 8, "commentCount": 12, "url": null, "contents": { "documentId": "px2SEBcGoRkuxFfEY", "html": "

My latest on Bloggingheads.tv is up.  BHTV wanted someone to interview Aubrey de Grey of the Methuselah Foundation about basic research in antiagathics, and they picked me to do it.  It made the interview somewhat difficult, since Aubrey and I already agree about most things, but we managed to soldier on.  The interview is mostly Aubrey talking, as it should be.

" } }, { "_id": "yKXKcyoBzWtECzXrE", "title": "You Only Live Twice", "pageUrl": "https://www.lesswrong.com/posts/yKXKcyoBzWtECzXrE/you-only-live-twice", "postedAt": "2008-12-12T19:14:32.000Z", "baseScore": 196, "voteCount": 152, "commentCount": 183, "url": null, "contents": { "documentId": "yKXKcyoBzWtECzXrE", "html": "

\"It just so happens that your friend here is only mostly dead.  There's a big difference between mostly dead and all dead.\"
        -- The Princess Bride

My co-blogger Robin and I may disagree on how fast an AI can improve itself, but we agree on an issue that seems much simpler to us than that:  At the point where the current legal and medical system gives up on a patient, they aren't really dead.

Robin has already said much of what needs saying, but a few more points:

• Ben Best's Cryonics FAQ, Alcor's FAQ, Alcor FAQ for scientists, Scientists' Open Letter on Cryonics

• I know more people who are planning to sign up for cryonics Real Soon Now than people who have actually signed up.  I expect that more people have died while cryocrastinating than have actually been cryopreserved.  If you've already decided this is a good idea, but you \"haven't gotten around to it\", sign up for cryonics NOW.  I mean RIGHT NOW.  Go to the website of Alcor or the Cryonics Institute and follow the instructions.

• Cryonics is usually funded through life insurance.  The following conversation from an Overcoming Bias meetup is worth quoting:

Him:  I've been thinking about signing up for cryonics when I've got enough money.

Me:  Um... it doesn't take all that much money.

Him:  It doesn't?

Me:  Alcor is the high-priced high-quality organization, which is something like $500-$1000 in annual fees for the organization, I'm not sure how much.  I'm young, so I'm signed up with the Cryonics Institute, which is $120/year for the membership.  I pay $180/year for more insurance than I need - it'd be enough for Alcor too.

Him:  That's ridiculous.

Me:  Yes.

Him:  No, really, that's ridiculous.  If that's true then my decision isn't just determined, it's overdetermined.

Me:  Yes.  And there's around a thousand people worldwide [actually 1400] who are signed up for cryonics.  Figure that at most a quarter of those did it for systematically rational reasons.  That's a high upper bound on the number of people on Earth who can reliably reach the right conclusion on massively overdetermined issues.

• Cryonics is not marketed well - or at all, really.  There are no salespeople who get commissions.  There is no one to hold your hand through signing up, so you're going to have to get the papers signed and notarized yourself.  The closest thing out there might be Rudi Hoffman, who sells life insurance with cryonics-friendly insurance providers (I went through him).

• If you want to securely erase a hard drive, it's not as easy as writing it over with zeroes.  Sure, an \"erased\" hard drive like this won't boot up your computer if you just plug it in again.  But if the drive falls into the hands of a specialist with a scanning tunneling microscope, they can tell the difference between \"this was a 0, overwritten by a 0\" and \"this was a 1, overwritten by a 0\".

There are programs advertised to \"securely erase\" hard drives using many overwrites of 0s, 1s, and random data.  But if you want to keep the secret on your hard drive secure against all possible future technologies that might ever be developed, then cover it with thermite and set it on fire.  It's the only way to be sure.
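
For concreteness, here is a minimal sketch of the multi-pass overwrite such programs perform: a hypothetical illustration, not any real product's code.  As argued above, this defends against software-level recovery at best; on journaling filesystems and SSDs the new writes may not even land on the original physical sectors.

```python
# Multi-pass file overwrite: a pass of zeros, a pass of ones, then a
# pass of random bytes.  Hypothetical sketch -- not a guarantee of
# physical erasure, for the reasons given in the surrounding text.
import os

def overwrite_file(path):
    size = os.path.getsize(path)
    with open(path, 'r+b') as f:
        for pattern in (b'\x00', b'\xff', None):
            f.seek(0)
            f.write(os.urandom(size) if pattern is None else pattern * size)
            f.flush()
            os.fsync(f.fileno())  # push the pass through OS caches to disk
```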

Pumping someone full of cryoprotectant and gradually lowering their temperature until they can be stored in liquid nitrogen is not a secure way to erase a person.

See also the information-theoretic criterion of death.

• You don't have to buy what's usually called the \"patternist\" philosophy of identity, to sign up for cryonics.  After reading all the information off the brain, you could put the \"same atoms\" back into their old places.

• \"Same atoms\" is in scare quotes because our current physics prohibits particles from possessing individual identities.  It's a much stronger statement than \"we can't tell the particles apart with current measurements\" and has to do with the notion of configuration spaces in quantum mechanics.  This is a standard idea in QM, not an unusual woo-woo one - see this sequence on Overcoming Bias for a gentle introduction.  Although patternism is not necessary to the cryonics thesis, we happen to live in a universe where \"the same atoms\" is physical nonsense.

There's a number of intuitions we have in our brains for processing a world of distinct physical objects, built in from a very young age.  These intuitions, which may say things like \"If an object disappears, and then comes back, it isn't the same object\", are tuned to our macroscopic world and generally don't match up well with fundamental physics.  Your identity is not like a little billiard ball that follows you around - there aren't actually any billiard balls down there.

Separately and convergently, more abstract reasoning strongly suggests that \"identity\" should not be epiphenomenal; that is, you should not be able to change someone's identity without changing any observable fact about them.

If you go through the aforementioned Overcoming Bias sequence, you should actually be able to see intuitively that successful cryonics preserves anything about you that is preserved by going to sleep at night and waking up the next morning.

• Cryonics, to me, makes two statements.

The first statement is about systematically valuing human life.  It's bad when a pretty young white girl goes missing somewhere in America.  But when 800,000 Africans get murdered in Rwanda, that gets 1/134 the media coverage of the Michael Jackson trial.  It's sad, to be sure, but no cause for emotional alarm.  When brown people die, that's all part of the plan - as a smiling man once said.

Cryonicists are people who've decided that their deaths, and the deaths of their friends and family and the rest of the human species, are not part of the plan.

I've met one or two Randian-type \"selfish\" cryonicists, but they aren't a majority.  Most people who sign up for cryonics wish that everyone would sign up for cryonics.

The second statement is that you have at least a little hope in the future.  Not faith, not blind hope, not irrational hope - just, any hope at all.

I was once at a table with Ralph Merkle, talking about how to market cryonics if anyone ever gets around to marketing it, and Ralph suggested a group of people in a restaurant, having a party; and the camera pulls back, and moves outside the window, and the restaurant is on the Moon.  Tagline:  \"Wouldn't you want to be there?\"

If you look back at, say, the Middle Ages, things were worse then.  I'd rather live here than there.  I have hope that humanity will move forward further, and that's something that I want to see.

And I hope that the idea that people are disposable, and that their deaths are part of the plan, is something that fades out of the Future.

Once upon a time, infant deaths were part of the plan, and now they're not.  Once upon a time, slavery was part of the plan, and now it's not.  Once upon a time, dying at thirty was part of the plan, and now it's not.  That's a psychological shift, not just an increase in living standards.  Our era doesn't value human life with perfect consistency - but the value of human life is higher than it once was.

We have a concept of what a medieval peasant should have had, the dignity with which they should have been treated, that is higher than what they would have thought to ask for themselves.

If no one in the future cares enough to save people who can be saved... well.  In cryonics there is an element of taking responsibility for the Future.  You may be around to reap what your era has sown.  It is not just my hope that the Future be a better place; it is my responsibility.  If I thought that we were on track to a Future where no one cares about human life, and lives that could easily be saved are just thrown away - then I would try to change that.  Not everything worth doing is easy.

Not signing up for cryonics - what does that say?  That you've lost hope in the future.  That you've lost your will to live.  That you've stopped believing that human life, and your own life, is something of value.

This can be a painful world we live in, and the media is always telling us how much worse it will get.  If you spend enough time not looking forward to the next day, it damages you, after a while.  You lose your ability to hope.  Try telling someone already grown old to sign up for cryonics, and they'll tell you that they don't want to be old forever - that they're tired.  If you try to explain to someone already grown old, that the nanotechnology to revive a cryonics patient is sufficiently advanced that reversing aging is almost trivial by comparison... then it's not something they can imagine on an emotional level, no matter what they believe or don't believe about future technology.  They can't imagine not being tired.  I think that's true of a lot of people in this world.  If you've been hurt enough, you can no longer imagine healing.

But things really were a lot worse in the Middle Ages.  And they really are a lot better now.  Maybe humanity isn't doomed.  The Future could be something that's worth seeing, worth living in.  And it may have a concept of sentient dignity that values your life more than you dare to value yourself.

On behalf of the Future, then - please ask for a little more for yourself.  More than death.  It really... isn't being selfish.  I want you to live.  I think that the Future will want you to live.  That if you let yourself die, people who aren't even born yet will be sad for the irreplaceable thing that was lost.

So please, live.

My brother didn't.  My grandparents won't.  But everything we can hold back from the Reaper, even a single life, is precious.

If other people want you to live, then it's not just you doing something selfish and unforgivable, right?

So I'm saying it to you.

I want you to live.

" } }, { "_id": "z3kYdw54htktqt9Jb", "title": "What I Think, If Not Why", "pageUrl": "https://www.lesswrong.com/posts/z3kYdw54htktqt9Jb/what-i-think-if-not-why", "postedAt": "2008-12-11T17:41:43.000Z", "baseScore": 41, "voteCount": 35, "commentCount": 103, "url": null, "contents": { "documentId": "z3kYdw54htktqt9Jb", "html": "

Reply to:  Two Visions Of Heritage

Though it really goes tremendously against my grain - it feels like sticking my neck out over a cliff (or something) - I guess I have no choice here but to try and make a list of just my positions, without justifying them.  We can only talk justification, I guess, after we get straight what my positions are.  I will also leave off many disclaimers to present the points compactly enough to be remembered.

• A well-designed mind should be much more efficient than a human, capable of doing more with less sensory data and fewer computing operations.  It is not infinitely efficient and does not use zero data.  But it does use little enough that local pipelines such as a small pool of programmer-teachers and, later, a huge pool of e-data, are sufficient.

• An AI that reaches a certain point in its own development becomes able to (sustainably, strongly) improve itself.  At this point, recursive cascades slam over many internal growth curves to near the limits of their current hardware, and the AI undergoes a vast increase in capability.  This point is at, or probably considerably before, a minimally transhuman mind capable of writing its own AI-theory textbooks - an upper bound beyond which it could swallow and improve its entire design chain.

• It is likely that this capability increase or "FOOM" has an intrinsic maximum velocity that a human would regard as "fast" if it happens at all.  A human week is ~1e15 serial operations for a population of 2GHz cores, and a century is ~1e19 serial operations; this whole range is a narrow window.  However, the core argument does not require one-week speed and a FOOM that takes two years (~1e17 serial ops) will still carry the weight of the argument.
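
(The arithmetic behind these figures, spelled out: serial depth is clock rate times wall time, no matter how many cores run in parallel.)

```latex
2\times10^{9}\ \tfrac{\mathrm{ops}}{\mathrm{s}}
  \times 6.05\times10^{5}\ \mathrm{s}_{\text{(week)}}
  \approx 1.2\times10^{15},
\qquad
2\times10^{9}\ \tfrac{\mathrm{ops}}{\mathrm{s}}
  \times 3.16\times10^{9}\ \mathrm{s}_{\text{(century)}}
  \approx 6.3\times10^{18} \sim 10^{19}.
```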

• The default case of FOOM is an unFriendly AI, built by researchers with shallow insights.  This AI becomes able to improve itself in a haphazard way, makes various changes that are net improvements but may introduce value drift, and then gets smart enough to do guaranteed self-improvement, at which point its values freeze (forever).

The desired case of FOOM is a Friendly AI, built using deep insight, so that the AI never makes any changes to itself that potentially change its internal values; all such changes are guaranteed using strong techniques that allow for a billion sequential self-modifications without losing the guarantee.  The guarantee is written over the AI's internal search criterion for actions, rather than external consequences.

• The good guys do not write an AI which values a bag of things that the programmers think are good ideas, like libertarianism or socialism or making people happy or whatever.  There were multiple Overcoming Bias sequences about this one point, like the Fake Utility Function sequence and the sequence on metaethics.  It is dealt with at length in the document Coherent *Extrapolated* Volition.  It is the first thing, the last thing, and the middle thing that I say about Friendly AI.  I have said it over and over.  I truly do not understand how anyone can pay any attention to anything I have said on this subject, and come away with the impression that I think programmers are supposed to directly impress their non-meta personal philosophies onto a Friendly AI.

The good guys do not directly impress their personal values onto a Friendly AI.

• Actually setting up a Friendly AI's values is an extremely meta operation, less \"make the AI want to make people happy\" and more like \"superpose the possible reflective equilibria of the whole human species, and output new code that overwrites the current AI and has the most coherent support within that superposition\".  This actually seems to be something of a Pons Asinorum in FAI - the ability to understand and endorse metaethical concepts that do not directly sound like amazing wonderful happy ideas.  Describing this as declaring total war on the rest of humanity does not seem fair (or accurate).

I myself am strongly individualistic:  The most painful memories in my life have been when other people thought they knew better than me, and tried to do things on my behalf.  It is also a known principle of hedonic psychology that people are happier when they're steering their own lives and doing their own interesting work.  When I try myself to visualize what a beneficial superintelligence ought to do, it consists of setting up a world that works by better rules, and then fading into the background, silent as the laws of Nature once were; and finally folding up and vanishing when it is no longer needed.  But this is only the thought of my mind that is merely human, and I am barred from programming any such consideration directly into a Friendly AI, for the reasons given above.

• Nonetheless, it does seem to me that this particular scenario could not be justly described as "a God to rule over us all", unless the current fact that humans age and die is "a malevolent God to rule us all".  So either Robin has a very different idea about what human reflective equilibrium values are likely to look like; or Robin believes that the Friendly AI project is bound to fail in such way as to create a paternalistic God; or - and this seems more likely to me - Robin didn't read all the way through all the blog posts in which I tried to explain all the ways that this is not how Friendly AI works.

Friendly AI is technically difficult and requires an extra-ordinary effort on multiple levels.  English sentences like "make people happy" cannot describe the values of a Friendly AI.  Testing is not sufficient to guarantee that values have been successfully transmitted.

• White-hat AI researchers are distinguished by the degree to which they understand that a single misstep could be fatal, and can discriminate strong and weak assurances.  Good intentions are not only common, they're cheap.  The story isn't about good versus evil, it's about people trying to do the impossible versus others who... aren't.

• Intelligence is about being able to learn lots of things, not about knowing lots of things.  Intelligence is especially not about tape-recording lots of parsed English sentences a la Cyc.  Old AI work was poorly focused due to inability to introspectively see the first and higher derivatives of knowledge; human beings have an easier time reciting sentences than reciting their ability to learn.

Intelligence is mostly about architecture, or \"knowledge\" along the lines of knowing to look for causal structure (Bayes-net type stuff) in the environment; this kind of knowledge will usually be expressed procedurally as well as declaratively.  Architecture is mostly about deep insights.  This point has not yet been addressed (much) on Overcoming Bias, but Bayes nets can be considered as an archetypal example of \"architecture\" and \"deep insight\".  Also, ask yourself how lawful intelligence seemed to you before you started reading this blog, how lawful it seems to you now, then extrapolate outward from that.

" } }, { "_id": "RNLQ7846MvJWwxH52", "title": "The Mechanics of Disagreement", "pageUrl": "https://www.lesswrong.com/posts/RNLQ7846MvJWwxH52/the-mechanics-of-disagreement", "postedAt": "2008-12-10T14:01:44.000Z", "baseScore": 14, "voteCount": 10, "commentCount": 26, "url": null, "contents": { "documentId": "RNLQ7846MvJWwxH52", "html": "

Two ideal Bayesians cannot have common knowledge of disagreement; this is a theorem.  If two rationalist-wannabes have common knowledge of a disagreement between them, what could be going wrong?


The obvious interpretation of these theorems is that if you know that a cognitive machine is a rational processor of evidence, its beliefs become evidence themselves.


If you design an AI and the AI says "This fair coin came up heads with 80% probability", then you know that the AI has accumulated evidence with a likelihood ratio of 4:1 favoring heads - because the AI only emits that statement under those circumstances.


It's not a matter of charity; it's just that this is how you think the other cognitive machine works.


And if you tell an ideal rationalist, "I think this fair coin came up heads with 80% probability", and they reply, "I now think this fair coin came up heads with 25% probability", and your sources of evidence are independent of each other, then you should accept this verdict, reasoning that (before you spoke) the other mind must have encountered evidence with a likelihood of 1:12 favoring tails.
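
To make the odds arithmetic concrete, here is a minimal sketch in Python (the helper function is hypothetical, just for illustration):

    # Pooling independent evidence about a fair coin, in odds form:
    # posterior odds = prior odds * product of likelihood ratios.
    def posterior(prior_odds, likelihood_ratios):
        odds = prior_odds
        for ratio in likelihood_ratios:
            odds *= ratio
        return odds / (1 + odds)   # convert odds to probability

    your_evidence = 4.0         # 80% heads on a fair coin implies 4:1
    their_evidence = 1.0 / 12   # inferred: 4:1 * 1:12 yields their 25%

    print(posterior(1.0, [your_evidence, their_evidence]))   # 0.25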


But this assumes that the other mind also thinks that you're processing evidence correctly, so that, by the time it says "I now think this fair coin came up heads, p=.25", it has already taken into account the full impact of all the evidence you know about, before adding more evidence of its own.


If, on the other hand, the other mind doesn't trust your rationality, then it won't accept your evidence at face value, and the estimate that it gives won't integrate the full impact of the evidence you observed.


So does this mean that when two rationalists trust each other's rationality less than completely, then they can agree to disagree?


It's not that simple.  Rationalists should not trust themselves entirely, either.


So when the other mind accepts your evidence at less than face value, this doesn't say "You are less than a perfect rationalist", it says, "I trust you less than you trust yourself; I think that you are discounting your own evidence too little."


Maybe your raw arguments seemed to you to have a strength of 40:1, and you discounted for your own irrationality to a strength of 4:1; but the other mind thinks you still overestimate yourself, and so it assumes that the actual force of the argument was 2:1.


And if you believe that the other mind is discounting you in this way, and is unjustified in doing so, then when it says "I now think this fair coin came up heads with 25% probability", you might bet on the coin at odds of 40% in favor of heads - combining your own evidence of 4:1, undiscounted, with the implied evidence of 1:6 that the other mind must have seen to arrive at its final odds of 2:6 - if you even fully trust the other mind's further evidence of 1:6.
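
A sketch of that calculation, continuing the same odds arithmetic (again just illustrative):

    their_view_of_you = 2.0        # they credit your evidence at only 2:1
    their_posterior_odds = 1 / 3   # their reported 25% heads, as odds

    # Infer what they saw on their own: 1:3 = 2:1 * own, so own = 1:6.
    their_own_evidence = their_posterior_odds / their_view_of_you

    your_evidence = 4.0            # you stand by your own 4:1 estimate
    combined = your_evidence * their_own_evidence   # 4:6
    print(combined / (1 + combined))                # 0.4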


I think we have to be very careful to avoid interpreting this situation in terms of anything like a reciprocal trade, like two sides making equal concessions in order to reach agreement on a business deal.


Shifting beliefs is not a concession that you make for the sake of others, expecting something in return; it is an advantage you take for your own benefit, to improve your own map of the world.  I am, generally speaking, a Millie-style altruist; but when it comes to belief shifts I espouse a pure and principled selfishness: don't believe you're doing it for anyone's sake but your own.


Still, I once read that there's a principle among con artists that the main thing is to get the mark to believe that you trust them, so that they'll feel obligated to trust you in turn.


And - even if it's for completely different theoretical reasons - if you want to persuade a rationalist to shift belief to match yours, you either need to persuade them that you have all of the same evidence they do and have already taken it into account, or that you already fully trust their opinions as evidence, or that you know better than they do how much they themselves can be trusted.


It's that last one that's the really sticky point, for obvious reasons of asymmetry of introspective access and asymmetry of motives for overconfidence - how do you resolve that conflict?  (And if you started arguing about it, then the question wouldn't be which of these were more important as a factor, but rather, which of these factors the Other had under- or over-discounted in forming their estimate of a given person's rationality...)


If I had to name a single reason why two wannabe rationalists wouldn't actually be able to agree in practice, it would be that, once you trace the argument to the meta-level where theoretically everything can be and must be resolved, the argument trails off into psychoanalysis and noise.


And if you look at what goes on in practice between two arguing rationalists, it would probably mostly be trading object-level arguments; and the most meta it would get is trying to convince the other person that you've already taken their object-level arguments into account.


Still, this does leave us with three clear reasons that someone might point to, to justify a persistent disagreement - even though the frame of mind of justification, and of having clear reasons to point to in front of others, is itself antithetical to the spirit of resolving disagreements - but even so:


Since we don't want to go around encouraging disagreement, one might do well to ponder how all three of these arguments are used by creationists to justify their persistent disagreements with scientists.


That's one reason I say it clearly: if it isn't obvious even to outside onlookers, maybe you shouldn't be confident of resolving the disagreement there.  Failure at any of these levels implies failure at the meta-levels above it, but the higher-order failures might not be clear.

" } }, { "_id": "rxo4Gcxv63pa5B2Jq", "title": "Bay Area Meetup Wed 12/10 @8pm", "pageUrl": "https://www.lesswrong.com/posts/rxo4Gcxv63pa5B2Jq/bay-area-meetup-wed-12-10-8pm", "postedAt": "2008-12-10T00:13:46.000Z", "baseScore": 1, "voteCount": 3, "commentCount": 0, "url": null, "contents": { "documentId": "rxo4Gcxv63pa5B2Jq", "html": "

Reminder: the second regular Overcoming Bias meetup is tomorrow, Wednesday at 8pm, at Techshop in Menlo Park.  Please RSVP so they know how many people are coming.


If you're hearing about this for the first time, sign up for the Meetup group so you get future announcements!  And please RSVP when you get them!  We don't want to have to post this to the main blog every time.


Robin Gane-McCalla will present some of his ideas on defining intelligence.  As always, anyone with a paper to pass around, abstract to read out loud, or rationality-related cool video to show, is cordially invited to bring it along.

" } }, { "_id": "yzzoWR33S9C3m75e8", "title": "Disjunctions, Antipredictions, Etc.", "pageUrl": "https://www.lesswrong.com/posts/yzzoWR33S9C3m75e8/disjunctions-antipredictions-etc", "postedAt": "2008-12-09T15:13:04.000Z", "baseScore": 28, "voteCount": 18, "commentCount": 26, "url": null, "contents": { "documentId": "yzzoWR33S9C3m75e8", "html": "

Followup to: Underconstrained Abstractions


Previously:

So if it's not as simple as just using the one trick of finding abstractions you can easily verify on available data, what are some other tricks to use?

There are several, as you might expect...


Previously I talked about "permitted possibilities".  There's a trick in debiasing that has mixed benefits, which is to try and visualize several specific possibilities instead of just one.


The reason it has "mixed benefits" is that being specific, at all, can have biasing effects relative to just imagining a typical case.  (And believe me, if I'd seen the outcome of a hundred planets in roughly our situation, I'd be talking about that instead of all this Weak Inside View stuff.)


But if you're going to bother visualizing the future, it does seem to help to visualize more than one way it could go, instead of concentrating all your strength into one prediction.


So I try not to ask myself "What will happen?" but rather "Is this possibility allowed to happen, or is it prohibited?"  There are propositions that seem forced to me, but those should be relatively rare - the first thing to understand about the future is that it is hard to predict, and you shouldn't seem to be getting strong information about most aspects of it.

Of course, if you allow more than one possibility, then you have to discuss more than one possibility, and the total length of your post gets longer.  If you just eyeball the length of the post, it looks like an unsimple theory; and then talking about multiple possibilities makes you sound weak and uncertain.


As Robyn Dawes notes,

"In their summations lawyers avoid arguing from disjunctions in favor of conjunctions.  (There are not many closing arguments that end, "Either the defendant was in severe financial straits and murdered the decedent to prevent his embezzlement from being exposed or he was passionately in love with the same coworker and murdered the decedent in a fit of jealous rage or the decedent had blocked the defendant's promotion at work and the murder was an act of revenge.  The State has given you solid evidence to support each of these alternatives, all of which would lead to the same conclusion: first-degree murder.")  Rationally, of course, disjunctions are much more probable than are conjunctions."

Another test I use is simplifiability - after I've analyzed out the idea, can I compress it back into an argument that fits on a T-Shirt, even if it loses something thereby?  Here's an example of some compressions:


If the whole argument was that T-Shirt slogan, I wouldn't find it compelling - too simple and surface a metaphor.  So you have to look more closely, and try visualizing some details, and make sure the argument can be consistently realized so far as you know.  But if, after you do that, you can compress the argument back to fit on a T-Shirt again - even if it sounds naive and stupid in that form - then that helps show that the argument doesn't depend on all the details being true simultaneously; the details might be different while fleshing out the same core idea.


Note also that the three statements above are to some extent disjunctive - you can imagine only one of them being true, but a hard takeoff still occurring for just that reason alone.


Another trick I use is the idea of antiprediction.  This is when the narrowness of our human experience distorts our metric on the answer space, and so you can make predictions that actually aren't far from maxentropy priors, but sound very startling.


I shall explain:


A news story about an Australian national lottery that was just starting up interviewed a man on the street, asking him if he would play.  He said yes.  Then they asked him what he thought his odds were of winning.  "Fifty-fifty," he said, "either I win or I don't."


To predict your odds of winning the lottery, you should invoke the Principle of Indifference with respect to all possible combinations of lottery balls.  But this man was invoking the Principle of Indifference with respect to the partition "win" and "not win".  To him, they sounded like equally simple descriptions; but the former partition contains only one combination, and the latter contains the other N million combinations.  (If you don't agree with this analysis I'd like to sell you some lottery tickets.)
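
For instance, assuming a hypothetical pick-6-of-45 lottery (the format is my assumption, chosen only for illustration):

    from math import comb

    total = comb(45, 6)   # one winning combination out of all tickets
    print(total)          # 8145060
    print(1 / total)      # ~1.2e-07, rather than "fifty-fifty"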


So the antiprediction is just "You won't win the lottery."  And the one may say, "What?  How do you know that?  You have no evidence for that!  You can't prove that I won't win!"  So they are focusing far too much attention on a small volume of the answer space, artificially inflated by the way their attention dwells upon it.


In the same sense, if you look at a television SF show, you see that a remarkable number of aliens seem to have human body plans - two arms, two legs, walking upright, right down to five fingers per hand and the location of eyes in the face.  But this is a very narrow partition in the body-plan space; and if you just said, "They won't look like humans," that would be an antiprediction that just steps outside this artificially inflated tiny volume in the answer space.


Similarly with the true sin of television SF, which is too-human minds, even among aliens not meant to be sympathetic characters.  "If we meet aliens, they won't have a sense of humor," I antipredict; and to a human it sounds like I'm saying something highly specific, because all minds by default have a sense of humor, and I'm predicting the presence of a no-humor attribute tagged on.  But actually, I'm just predicting that a point in mind design volume is outside the narrow hyperplane that contains humor.


An AI might go from infrahuman to transhuman in less than a week?  But a week is 10^49 Planck intervals - if you just look at the exponential scale that stretches from the Planck time to the age of the universe, there's nothing special about the timescale that 200Hz humans happen to live on, any more than there's something special about the numbers on the lottery ticket you bought.


If we're talking about a starting population of 2GHz processor cores, then any given AI that FOOMs at all is likely to FOOM in less than 10^15 sequential operations or more than 10^19 sequential operations, because the region between 10^15 and 10^19 isn't all that wide a target.  So less than a week or more than a century, and in the latter case that AI will be trumped by one of a shorter timescale.
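
A quick check of the arithmetic behind these timescales (using the usual approximate value of the Planck time, about 5.39e-44 seconds):

    ops_per_second = 2e9          # one 2GHz serial stream
    week = 7 * 24 * 3600          # seconds in a week
    year = 365.25 * 24 * 3600     # seconds in a year

    print(1e15 / ops_per_second / week)   # ~0.83 weeks: under a week
    print(1e19 / ops_per_second / year)   # ~158 years: over a century
    print(week / 5.39e-44)                # ~1.1e49 Planck intervals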


This is actually a pretty naive version of the timescale story.  But as an example, it shows how a "prediction" that's close to just stating a maximum-entropy prior, can sound amazing, startling, counterintuitive, and futuristic.


When I make an antiprediction supported by disjunctive arguments that are individually simplifiable, I feel slightly less nervous about departing the rails of vetted abstractions.  (In particular, I regard this as sufficient reason not to trust the results of generalizations over only human experiences.)


Finally, there are three tests I apply to figure out how strong my predictions are.


The first test is to just ask myself the Question, "What do you think you know, and why do you think you know it?"  The future is something I haven't yet observed; if my brain claims to know something about it with any degree of confidence, what are the reasons for that?  The first test tries to align the strength of my predictions with things that I have reasons to believe - a basic step, but one which brains are surprisingly wont to skip.


The second test is to ask myself "How worried do I feel that I'll\nhave to write an excuse explaining why this happened anyway?"  If\nI don't feel worried about having to write an excuse - if I can stick\nmy neck out and not feel too concerned about ending up with egg on my\nface - then clearly my brain really does believe this thing quite strongly, not as a point to be professed through enthusiastic argument, but as an ordinary sort of fact.  Why?


And the third test is the "So what?" test - to what degree will I feel indignant if Nature comes back and says "So what?" to my clever analysis?  Would I feel as indignant as if I woke up one morning to read in the newspaper that Mars had started orbiting the Sun in squares instead of ellipses?  Or, to make it somewhat less strong, as if I woke up one morning to find that banks were charging negative interest on loans?  If so, clearly I must possess some kind of extremely strong argument - one that even Nature Itself ought to find compelling, not just humans.  What is it?

" } }, { "_id": "3pyLbH3BqevetQros", "title": "True Sources of Disagreement", "pageUrl": "https://www.lesswrong.com/posts/3pyLbH3BqevetQros/true-sources-of-disagreement", "postedAt": "2008-12-08T15:51:58.000Z", "baseScore": 12, "voteCount": 10, "commentCount": 53, "url": null, "contents": { "documentId": "3pyLbH3BqevetQros", "html": "

Followup to: Is That Your True Rejection?


I expected from the beginning, that the difficult part of two rationalists reconciling a persistent disagreement, would be for them to expose the true sources of their beliefs.


One suspects that this will only work if each party takes responsibility for their own end; it's very hard to see inside someone else's head.  Yesterday I exhausted myself mentally while out on my daily walk, asking myself the Question "What do you think you know, and why do you think you know it?" with respect to "How much of the AI problem compresses to large insights, and how much of it is unavoidable nitty-gritty?"  Trying to either understand why my brain believed what it believed, or else force my brain to experience enough genuine doubt that I could reconsider the question and arrive at a real justification that way.  It's hard to see how Robin Hanson could have done any of this work for me.


Presumably a symmetrical fact holds about my lack of access to the real reasons why Robin believes what he believes.  To understand the true source of a disagreement, you have to know why both sides believe what they believe - one reason why disagreements are hard to resolve.


Nonetheless, here's my guess as to what this Disagreement is about:

If I had to pinpoint a single thing that strikes me as "disagree-able" about the way Robin frames his analyses, it's that there are a lot of opaque agents running around, little black boxes assumed to be similar to humans, but there are more of them and they're less expensive to build/teach/run.  They aren't even any faster, let alone smarter.  (I don't think that standard economics says that doubling the population halves the doubling time, so it matters whether you're making more minds or faster ones.)


This is Robin's model for uploads/ems, and his model for AIs doesn't seem to look any different.  So that world looks like this one, except that the cost of "human capital" and labor is dropping according to (exogenous) Moore's Law, and it ends up that economic growth doubles every month instead of every sixteen years - but that's it.  Being, myself, not an economist, this does look to me like a viewpoint with a distinctly economic zeitgeist.


In my world, you look inside the black box.  (And, to be symmetrical, I don't spend much time thinking about more than one box at a time - if I have more hardware, it means I have to figure out how to scale a bigger brain.)


The human brain is a haphazard thing, thrown together by idiot evolution, as an incremental layer of icing on a chimpanzee cake that never evolved to be generally intelligent, adapted in a distant world devoid of elaborate scientific arguments or computer programs or professional specializations.


It's amazing we can get anywhere using the damn thing.  But it's worth remembering that if there were any smaller modification of a chimpanzee that spontaneously gave rise to a technological civilization, we would be having this conversation at that lower level of intelligence instead.


Human neurons run at less than a millionth the speed of transistors, transmit spikes at less than a millionth the speed of light, and dissipate around a million times the heat per synaptic operation as the thermodynamic minimum for a one-bit operation at room temperature.  Physically speaking, it ought to be possible to run a brain at a million times the speed without shrinking it, cooling it, or invoking reversible computing or quantum computing.


There's no reason to think that the brain's software is any closer to the limits of the possible than its hardware, and indeed, if you've been following along on Overcoming Bias this whole time, you should be well aware of the manifold known ways in which our high-level thought processes fumble even the simplest problems.


Most of these are not deep, inherent flaws of intelligence, or limits of what you can do with a mere hundred trillion computing elements.  They are the results of a really stupid process that designed the retina backward, slapping together a brain we now use in contexts way outside its ancestral environment.


Ten thousand researchers working for one year cannot do the same work as a hundred researchers working for a hundred years; a chimpanzee's brain is one-fourth the volume of a human's, but four chimps do not equal one human; a chimpanzee shares 95% of our DNA, but a chimpanzee cannot understand 95% of what a human can.  The scaling law for population is not the scaling law for time is not the scaling law for brain size is not the scaling law for mind design.


There's a parable I sometimes use, about how the first replicator was not quite the end of the era of stable accidents, because the pattern of the first replicator was, of necessity, something that could happen by accident.  It is only the second replicating pattern that you would never have seen without many copies of the first replicator around to give birth to it; only the second replicator that was part of the world of evolution, something you wouldn't see in a world of accidents.


That first replicator must have looked like one of the most bizarre things in the whole history of time - this replicator created purely by chance.  But the history of time could never have been set in motion, otherwise.


And what a bizarre thing a human must be, a mind born entirely of evolution, a mind that was not created by another mind.


We haven't yet begun to see the shape of the era of intelligence.


Most of the universe is far more extreme than this gentle place, Earth's cradle.  Cold vacuum or the interior of stars; either is far more common than the temperate weather of Earth's surface, where life first arose, in the balance between the extremes.  And most possible intelligences are not balanced, like these first humans, in that strange small region of temperate weather between an amoeba and a Jupiter Brain.


This is the challenge of my own profession - to break yourself loose of the tiny human dot in mind design space, in which we have lived our whole lives, our imaginations lulled to sleep by too-narrow experiences.


For example, Robin says:

Eliezer guesses that within a few weeks a single AI could grow via largely internal means from weak and unnoticed to so strong it takes over the world [his italics]

I suppose that to a human a "week" sounds like a temporal constant describing a "short period of time", but it's actually 10^49 Planck intervals, or enough time for a population of 2GHz processor cores to perform 10^15 serial operations one after the other.


Perhaps the thesis would sound less shocking if Robin had said, "Eliezer guesses that 10^15 sequential operations might be enough to..."


One should also bear in mind that the human brain, which is not designed for the primary purpose of scientific insights, does not spend its power efficiently on having many insights in minimum time, but this issue is harder to understand than CPU clock speeds.


Robin says he doesn't like "unvetted abstractions".  Okay.  That's a strong point.  I get it.  Unvetted abstractions go kerplooie, yes they do indeed.  But something's wrong with using that as a justification for models where there are lots of little black boxes just like humans scurrying around, and we never pry open the black box and scale the brain bigger or redesign its software or even just speed up the damn thing.  The interesting part of the problem is harder to analyze, yes - more distant from the safety rails of overwhelming evidence - but this is no excuse for refusing to take it into account.


And in truth I do suspect that a strict policy against "unvetted abstractions" is not the real issue here.  I constructed a simple model of an upload civilization running on the computers their economy creates:  If a non-upload civilization has an exponential Moore's Law, y = e^t, then, naively, an upload civilization ought to have dy/dt = e^y -> y = -ln(C - t).  Not necessarily up to infinity, but for as long as Moore's Law would otherwise stay exponential in a biological civilization.  I walked through the implications of this model, showing that in many senses it behaves "just like we would expect" for describing a civilization running on its own computers.
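
A crude numerical check of that closed form - a sketch with the constant of integration set to C = 1, so that y(0) = 0:

    from math import exp, log

    C = 1.0
    def y_exact(t):
        return -log(C - t)   # solution of dy/dt = e^y with y(0) = 0

    # Forward-Euler integration of dy/dt = e^y should track it closely.
    t, y, dt = 0.0, 0.0, 1e-6
    while t < 0.9:
        y += exp(y) * dt
        t += dt
    print(y, y_exact(0.9))   # both near 2.30; growth explodes as t -> C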


Compare this to Robin Hanson's "Economic Growth Given Machine Intelligence", which Robin describes as using "one of the simplest endogenous growth models to explore how Moore's Law changes with computer-based workers.  It is an early but crude attempt, but it is the sort of approach I think promising."  Take a quick look at that paper.


Now, consider the abstractions used in my Moore's Researchers scenario, versus the abstractions used in Hanson's paper above, and ask yourself only the question of which looks more "vetted by experience" - given that both are models of a sort that haven't been used before, in domains not actually observed, and that both give results quite different from the world we see and that would probably cause the vast majority of actual economists to say "Naaaah."


Moore's Researchers versus Economic Growth Given Machine Intelligence - if you didn't think about the conclusions in advance of the reasoning; and if you also neglected that one of these has been written up in a way that is more impressive to economics journals; and you just asked the question, "To what extent is the math used here, constrained by our prior experience?" then I would think that the race would at best be even.  Or possibly favoring "Moore's Researchers" as being more simple and intuitive, and involving less novel math as measured in additional quantities and laws introduced.


I ask in all humility if Robin's true rejection is a strictly evenhandedly applied rule that rejects unvetted abstractions.  Or if, in fact, Robin finds my conclusions, and the sort of premises I use, to be objectionable for other reasons - which, so far as we know at this point, may well be valid objections - and so it appears to him that my abstractions bear a larger burden of proof than the sort of mathematical steps he takes in "Economic Growth Given Machine Intelligence".  But rather than offering the reasons why the burden of proof appears larger to him, he says instead that it is "not vetted enough".


One should understand that "Your abstractions are unvetted!" makes it difficult for me to engage properly.  The core of my argument has to do with what happens when you pry open the black boxes that are your economic agents, and start fiddling with their brain designs, and leave the tiny human dot in mind design space.  If all such possibilities are rejected on the basis of their being "unvetted" by experience, it doesn't leave me with much to talk about.


Why not just accept the rejection?  Because I expect that to give the wrong answer - I expect it to ignore the dominating factor in the Future, even if the dominating factor is harder to analyze.


It shouldn't be surprising if a persistent disagreement ends up resting on that point where your attempt to take into account the other person's view, runs up against some question of simple fact where, it seems to you, you know that can't possibly be right.


For me, that point is reached when trying to visualize a model of interacting black boxes that behave like humans except they're cheaper to make.  The world, which shattered once with the first replicator, and shattered for the second time with the emergence of human intelligence, somehow does not shatter a third time.  Even in the face of blowups of brain size far greater than the size transition from chimpanzee brain to human brain; and changes in design far larger than the design transition from chimpanzee brains to human brains; and simple serial thinking speeds that are, maybe even right from the beginning, thousands or millions of times faster.


That's the point where I, having spent my career trying to look inside the black box, trying to wrap my tiny brain around the rest of mind design space that isn't like our small region of temperate weather, just can't make myself believe that the Robin-world is really truly actually the way the future will be.


There are other things that seem like probable nodes of disagreement:


Robin Hanson's description of Friendly AI development as "total war" that is harmful to even discuss, or his description of a realized Friendly AI as "a God to rule us all".  Robin must be visualizing an in-practice outcome very different from what I do, and this seems like a likely source of emotional fuel for the disagreement as well.


Conversely, Robin Hanson seems to approve of a scenario where lots of AIs, of arbitrary motives, constitute the vast part of the economic productivity of the Solar System, because he thinks that humans will be protected under the legacy legal system that grew continuously out of the modern world, and that the AIs will be unable to coordinate to transgress the legacy legal system for fear of losing their own legal protections.  I tend to visualize a somewhat different outcome, to put it mildly; and would symmetrically be suspected of emotional unwillingness to accept that outcome as inexorable.


Robin doesn't dismiss Cyc out of hand and even "hearts" it, which implies that we have an extremely different picture of how intelligence works.


Like Robin, I'm also feeling burned on this conversation, and I doubt we'll finish it; but I should write at least two more posts to try to describe what I've learned, and some of the rules that I think I've been following.

" } }, { "_id": "fKofLyepu446zRgPP", "title": "Artificial Mysterious Intelligence", "pageUrl": "https://www.lesswrong.com/posts/fKofLyepu446zRgPP/artificial-mysterious-intelligence", "postedAt": "2008-12-07T20:05:33.000Z", "baseScore": 33, "voteCount": 26, "commentCount": 24, "url": null, "contents": { "documentId": "fKofLyepu446zRgPP", "html": "

Previously in series: Failure By Affective Analogy


I once had a conversation that I still remember for its sheer, purified archetypicality.  This was a nontechnical guy, but pieces of this dialog have also appeared in conversations I've had with professional AIfolk...


Him:  Oh, you're working on AI!  Are you using neural networks?


Me:  I think emphatically not.


Him:  But neural networks are so wonderful!  They solve problems and we don't have any idea how they do it!


Me:  If you are ignorant of a phenomenon, that is a fact about your state of mind, not a fact about the phenomenon itself.  Therefore your ignorance of how neural networks are solving a specific problem, cannot be responsible for making them work better.


Him:  Huh?


Me:  If you don't know how your AI works, that is not good.  It is bad.


Him:  Well, intelligence is much too difficult for us to understand, so we need to find some way to build AI without understanding how it works.


Me:  Look, even if you could do that, you wouldn't be able to predict any kind of positive outcome from it.  For all you knew, the AI would go out and slaughter orphans.


Him:  Maybe we'll build Artificial Intelligence by scanning the brain and building a neuron-by-neuron duplicate.  Humans are the only systems we know are intelligent.


Me:  It's hard to build a flying machine if the only thing you understand about flight is that somehow birds magically fly.  What you need is a concept of aerodynamic lift, so that you can see how something can fly even if it isn't exactly like a bird.


Him:  That's too hard.  We have to copy something that we know works.


Me:  (reflectively) What do people find so unbearably awful about the prospect of having to finally break down and solve the bloody problem?  Is it really that horrible?


Him:  Wait... you're saying you want to actually understand intelligence?


Me:  Yeah.


Him:  (aghast)  Seriously?


Me:  I don't know everything I need to know about intelligence, but I've learned a hell of a lot.  Enough to know what happens if I try to build AI while there are still gaps in my understanding.


Him:  Understanding the problem is too hard.  You'll never do it.


That's not just a difference of opinion you're looking at, it's a clash of cultures.


For a long time, many different parties and factions in AI, adherent to more than one ideology, have been trying to build AI without understanding intelligence.  And their habits of thought have become ingrained in the field, and even transmitted to parts of the general public.


You may have heard proposals for building true AI which go something like this:

  1. Calculate how many operations the human brain performs every second.  This is "the only amount of computing power that we know is actually sufficient for human-equivalent intelligence".  Raise enough venture capital to buy a supercomputer that performs an equivalent number of floating-point operations in one second.  Use it to run the most advanced available neural network algorithms.
  2. The brain is huge and complex.  When the Internet becomes sufficiently huge and complex, intelligence is bound to emerge from the Internet.  (I get asked about this in 50% of my interviews.)
  3. Computers seem unintelligent because they lack common sense.  Program a very large number of "common-sense facts" into a computer.  Let it try to reason about the relation of these facts.  Put a sufficiently huge quantity of knowledge into the machine, and intelligence will emerge from it.
  4. Neuroscience continues to advance at a steady rate.  Eventually, super-MRI or brain sectioning and scanning will give us precise knowledge of the local characteristics of all human brain areas.  So we'll be able to build a duplicate of the human brain by duplicating the parts.  "The human brain is the only example we have of intelligence."
  5. Natural selection produced the human brain.  It is "the only method that we know works for producing general intelligence".  So we'll have to scrape up a really huge amount of computing power, and evolve AI.

What do all these proposals have in common?


They are all ways to make yourself believe that you can build an Artificial Intelligence, even if you don't understand exactly how intelligence works.


Now, such a belief is not necessarily false!  Methods 4 and 5, if pursued long enough and with enough resources, will eventually work.  (5 might require a computer the size of the Moon, but give it enough crunch and it will work, even if you have to simulate a quintillion planets and not just one...)


But regardless of whether any given method would work in principle, the unfortunate habits of thought will already begin to arise, as soon as you start thinking of ways to create Artificial Intelligence without having to penetrate the mystery of intelligence.


I have already spoken of some of the hope-generating tricks that appear in the examples above.  There is invoking similarity to humans, or using words that make you feel good.  But really, a lot of the trick here just consists of imagining yourself hitting the AI problem with a really big rock.


I know someone who goes around insisting that AI will cost a quadrillion dollars, and as soon as we're willing to spend a quadrillion dollars, we'll have AI, and we couldn't possibly get AI without spending a quadrillion dollars.  "Quadrillion dollars" is his big rock, that he imagines hitting the problem with, even though he doesn't quite understand it.


It often will not occur to people that the mystery of intelligence could be any more penetrable than it seems:  By the power of the Mind Projection Fallacy, being ignorant of how intelligence works will make it seem like intelligence is inherently impenetrable and chaotic.  They will think they possess a positive knowledge of intractability, rather than thinking, "I am ignorant."


And the thing to remember is that, for these last decades on end, any professional in the field of AI trying to build "real AI" had some reason for trying to do it without really understanding intelligence (various fake reductions aside).


The New Connectionists accused the Good-Old-Fashioned AI researchers of not being parallel enough, not being fuzzy enough, not being emergent enough.  But they did not say, "There is too much you do not understand."


The New Connectionists catalogued the flaws of GOFAI for years on end, with fiery castigation.  But they couldn't ever actually say: "How exactly are all these logical deductions going to produce 'intelligence', anyway?  Can you walk me through the cognitive operations, step by step, which lead to that result?  Can you explain 'intelligence' and how you plan to get it, without pointing to humans as an example?"


For they themselves would be subject to exactly the same criticism.


In the house of glass, somehow, no one ever gets around to talking about throwing stones.


To tell a lie, you have to lie about all the other facts entangled with that fact, and also lie about the methods used to arrive at beliefs:  The culture of Artificial Mysterious Intelligence has developed its own Dark Side Epistemology, complete with reasons why it's actually wrong to try and understand intelligence.


Yet when you step back from the bustle of this moment's history, and think about the long sweep of science - there was a time when stars were mysterious, when chemistry was mysterious, when life was mysterious.  And in this era, much was attributed to black-box essences.  And there were many hopes based on the similarity of one thing to another.  To many, I'm sure, alchemy just seemed very difficult rather than even seeming mysterious; most alchemists probably did not go around thinking, "Look at how much I am disadvantaged by not knowing about the existence of chemistry!  I must discover atoms and molecules as soon as possible!"  They just memorized libraries of random things you could do with acid, and bemoaned how difficult it was to create the Philosopher's Stone.


In the end, though, what happened is that scientists achieved insight, and then things got much easier to do.  You also had a better idea of what you could or couldn't do.  The problem stopped being scary and confusing.


But you wouldn't hear a New Connectionist say, "Hey, maybe all the failed promises of 'logical AI' were basically due to the fact that, in their epistemic condition, they had no right to expect their AIs to work in the first place, because they couldn't actually have sketched out the link in any more detail than a medieval alchemist trying to explain why a particular formula for the Philosopher's Stone will yield gold."  It would be like the Pope attacking Islam on the basis that faith is not an adequate justification for asserting the existence of their deity.


Yet in fact, the promises did fail, and so we can conclude that the promisers overreached what they had a right to expect.  The Way is not omnipotent, and a bounded rationalist cannot do all things.  But even a bounded rationalist can aspire not to overpromise - to only say you can do that which you can do.  So if we want to achieve that reliably, history shows that we should not accept certain kinds of hope.  In the absence of insight, hopes tend to be unjustified because you lack the knowledge that would be needed to justify them.


We humans have a difficult time working in the absence of insight.  It doesn't reduce us all the way down to being as stupid as evolution.  But it makes everything difficult and tedious and annoying.


If the prospect of having to finally break down and solve the bloody problem of intelligence seems scary, you underestimate the interminable hell of not solving it.

" } }, { "_id": "TGux5Fhcd7GmTfNGC", "title": "Is That Your True Rejection?", "pageUrl": "https://www.lesswrong.com/posts/TGux5Fhcd7GmTfNGC/is-that-your-true-rejection", "postedAt": "2008-12-06T14:26:15.000Z", "baseScore": 170, "voteCount": 145, "commentCount": 100, "url": null, "contents": { "documentId": "TGux5Fhcd7GmTfNGC", "html": "\n\n\n\n \n\n \n\n

It happens every now and then that someone encounters some of my transhumanist-side beliefs—as opposed to my ideas having to do with human rationality—strange, exotic-sounding ideas like superintelligence and Friendly AI. And the one rejects them.


If the one is called upon to explain the rejection, not uncommonly the one says, “Why should I believe anything Yudkowsky says? He doesn’t have a PhD!”


And occasionally someone else, hearing, says, “Oh, you should get a PhD, so that people will listen to you.” Or this advice may even be offered by the same one who expressed disbelief, saying, “Come back when you have a PhD.”


Now, there are good and bad reasons to get a PhD. This is one of the bad ones.


There are many reasons why someone might actually have an initial adverse reaction to transhumanist theses. Most are matters of pattern recognition, rather than verbal thought: the thesis calls to mind an associated category like “strange weird idea” or “science fiction” or “end-of-the-world cult” or “overenthusiastic youth.”1 Immediately, at the speed of perception, the idea is rejected.


If someone afterward says, “Why not?” this launches a search for justification, but the search won’t necessarily hit on the true reason. By “true reason,” I don’t mean the best reason that could be offered. Rather, I mean whichever causes were decisive as a matter of historical fact, at the very first moment the rejection occurred.


Instead, the search for justification hits on the justifying-sounding fact, “This speaker does not have a PhD.” But I also don’t have a PhD when I talk about human rationality, so why is the same objection not raised there?


More to the point, if I had a PhD, people would not treat this as a decisive factor indicating that they ought to believe everything I say. Rather, the same initial rejection would occur, for the same reasons; and the search for justification, afterward, would terminate at a different stopping point.


They would say, “Why should I believe you? You’re just some guy with a PhD! There are lots of those. Come back when you’re well-known in your field and tenured at a major university.”


But do people actually believe arbitrary professors at Harvard who say weird things? Of course not.


If you’re saying things that sound wrong to a novice, as opposed to just rattling off magical-sounding technobabble about leptical quark braids in N + 2 dimensions; and if the hearer is a stranger, unfamiliar with you personally and unfamiliar with the subject matter of your field; then I suspect that the point at which the average person will actually start to grant credence overriding their initial impression, purely because of academic credentials, is somewhere around the Nobel Laureate level. If that. Roughly, you need whatever level of academic credential qualifies as “beyond the mundane.”


This is more or less what happened to Eric Drexler, as far as I can tell. He presented his vision of nanotechnology, and people said, “Where are the technical details?” or “Come back when you have a PhD!” And Eric Drexler spent six years writing up technical details and got his PhD under Marvin Minsky for doing it. And Nanosystems is a great book. But did the same people who said, “Come back when you have a PhD,” actually change their minds at all about molecular nanotechnology? Not so far as I ever heard.


This might be an important thing for young businesses and new-minted consultants to keep in mind—that what your failed prospects tell you is the reason for rejection may not make the real difference; and you should ponder that carefully before spending huge efforts. If the venture capitalist says, “If only your sales were growing a little faster!” or if the potential customer says, “It seems good, but you don’t have feature X,” that may not be the true rejection. Fixing it may, or may not, change anything.


And it would also be something to keep in mind during disagreements. Robin Hanson and I share a belief that two rationalists should not agree to disagree: they should not have common knowledge of epistemic disagreement unless something is very wrong.2


I suspect that, in general, if two rationalists set out to resolve a disagreement that persisted past the first exchange, they should expect to find that the true sources of the disagreement are either hard to communicate, or hard to expose. E.g.:


If the matter were one in which all the true rejections could be easily laid on the table, the disagreement would probably be so straightforward to resolve that it would never have lasted past the first meeting.


“Is this my true rejection?” is something that both disagreers should surely be asking themselves, to make things easier on the other person. However, attempts to directly, publicly psychoanalyze the other may cause the conversation to degenerate very fast, from what I’ve seen.


Still—“Is that your true rejection?” should be fair game for Disagreers to humbly ask, if there’s any productive way to pursue that sub-issue. Maybe the rule could be that you can openly ask, “Is that simple straightforward-sounding reason your true rejection, or does it come from intuition-X or professional-zeitgeist-Y?” While the more embarrassing possibilities lower on the table are left to the Other’s conscience, as their own responsibility to handle.


1See “Science as Attire” in Map and Territory.


2See Hal Finney, “Agreeing to Agree,” Overcoming Bias (blog), 2006, http://www.overcomingbias.com/2006/12/agreeing_to_agr.html.

" } }, { "_id": "9wZnasT3uXzmFCcaB", "title": "Sustained Strong Recursion", "pageUrl": "https://www.lesswrong.com/posts/9wZnasT3uXzmFCcaB/sustained-strong-recursion", "postedAt": "2008-12-05T21:03:20.000Z", "baseScore": 19, "voteCount": 17, "commentCount": 47, "url": null, "contents": { "documentId": "9wZnasT3uXzmFCcaB", "html": "

Followup to: Cascades, Cycles, Insight, Recursion, Magic


We seem to have a sticking point at the concept of "recursion", so I'll zoom in.


You have a friend who, even though he makes plenty of money, just spends all that money every month.  You try to persuade your friend to invest a little - making valiant attempts to explain the wonders of compound interest by pointing to analogous processes in nature, like fission chain reactions.


"All right," says your friend, and buys a ten-year bond for $10,000, with an annual coupon of $500.  Then he sits back, satisfied.  "There!" he says.  "Now I'll have an extra $500 to spend every year, without my needing to do any work!  And when the bond comes due, I'll just roll it over, so this can go on indefinitely.  Surely, now I'm taking advantage of the power of recursion!"


"Um, no," you say.  "That's not exactly what I had in mind when I talked about 'recursion'."


"But I used some of my cumulative money earned, to increase my very earning rate," your friend points out, quite logically.  "If that's not 'recursion', what is?  My earning power has been 'folded in on itself', just like you talked about!"


"Well," you say, "not exactly.  Before, you were earning $100,000 per year, so your cumulative earnings went as 100000 * t.  Now, your cumulative earnings are going as 100500 * t.  That's not really much of a change.  What we want is for your cumulative earnings to go as B * e^At for some constants A and B - to grow exponentially."


"Exponentially!" says your friend, shocked.


"Yes," you say, "recursification has an amazing power to transform growth curves.  In this case, it can turn a linear process into an exponential one.  But to get that effect, you have to reinvest the coupon payments you get on your bonds - or at least reinvest some of them, instead of just spending them all.  And you must be able to do this over and over again.  Only then will you get the 'folding in' transformation, so that instead of your cumulative earnings going as y = F(t) = A*t, your earnings will go as the differential equation dy/dt = F(y) = A*y whose solution is y = e^(A*t)."

(I'm going to go ahead and leave out various constants of integration; feel free to add them back in.)


"Hold on," says your friend.  "I don't understand the justification for what you just did there."


"Right now," you explain, "you're earning a steady income at your job, and you also have $500/year from the bond you bought.  These are just things that go on generating money at a constant rate per unit time, in the background.  So your cumulative earnings are the integral of that constant rate.  If your earnings are y, then dy/dt = A, which resolves to y = At.  But now, suppose that instead of having these constant earning forces operating in the background, we introduce a strong feedback loop from your cumulative earnings to your earning power."


"But I bought this one bond here -" says your friend.


"That's not enough for a strong feedback loop," you say.  "Future increases in your cumulative earnings aren't going to increase the value of this one bond, or your salary, any further.  One unit of force transmitted back is not a feedback loop - it has to be repeatable.  You need a sustained recursion, not a one-off event."


"Okay," says your friend, "how about if I buy a $100 bond every year, then?  Will that satisfy the strange requirements of this ritual?"


"Still not a strong feedback loop," you say.  "Suppose that next year your salary went up $10,000/year - no, an even simpler example; suppose $10,000 fell in your lap out of the sky.  If you only buy $100/year of bonds, that extra $10,000 isn't going to make any long-term difference to the earning curve.  But if you're in the habit of investing 50% of found money, then there's a strong feedback loop from your cumulative earnings back to your earning power - we can pump up the cumulative earnings and watch the earning power rise as a direct result."


"How about if I just invest 0.1% of all my earnings, including the coupons on my bonds?" asks your friend.


"Well..." you say slowly.  "That would be a sustained feedback loop but an extremely weak one, where marginal changes to your earnings have relatively small marginal effects on future earning power.  I guess it would genuinely be a recursified process, but it would take a long time for the effects to become apparent, and any stronger recursions would easily outrun it."


"Okay," says your friend, "I'll start by investing a dollar, and I'll fully reinvest all the earnings from it, and the earnings on those earnings as well -"


"I'm not really sure there are any good investments that will let you invest just a dollar without it being eaten up in transaction costs," you say, "and it might not make a difference to anything on the timescales we have in mind - though there's an old story about a king, and grains of wheat placed on a chessboard...  But realistically, a dollar isn't enough to get started."


"All right," says your friend, "suppose I start with $100,000 in bonds, and reinvest 80% of the coupons on those bonds plus rolling over all the principle, at a 5% interest rate, and we ignore inflation for now."


"Then," you reply, "we have the differential equation dy/dt = 0.8 * 0.05 * y, with the initial condition y = $100,000 at t=0, which works out to y = $100,000 * e^(.04*t).  Or if you're reinvesting discretely rather than continuously, y = $100,000 * (1.04)^t."


We can similarly view the self-optimizing compiler in this light - it speeds itself up once, but never makes any further improvements, like buying a single bond; it's not a sustained recursion.


And now let us turn our attention to Moore's Law.


I am not a fan of Moore's Law.  I think it's a red herring.  I don't think you can forecast AI arrival times by using it, I don't think that AI (especially the good kind of AI) depends on Moore's Law continuing.  I am agnostic about how long Moore's Law can continue - I simply leave the question to those better qualified, because it doesn't interest me very much...


But for our next simpler illustration of a strong recursification, we shall consider Moore's Law.


Tim Tyler serves us the duty of representing our strawman, repeatedly telling us, "But chip engineers use computers now, so Moore's Law is already recursive!"


To test this, we perform the equivalent of the thought experiment where we drop $10,000 out of the sky - push on the cumulative "wealth", and see what happens to the output rate.


Suppose that Intel's engineers could only work using computers of the sort available in 1998.  How much would the next generation of computers be slowed down?


Suppose we gave Intel's engineers computers from 2018, in sealed black boxes (not transmitting any of 2018's knowledge).  How much would Moore's Law speed up?


I don't work at Intel, so I can't actually answer those questions.  I think, though, that if you said in the first case, "Moore's Law would drop way down, to something like 1998's level of improvement measured linearly in additional transistors per unit time," you would be way off base.  And if you said in the second case, "I think Moore's Law would speed up by an order of magnitude, doubling every 1.8 months, until they caught up to the '2018' level," you would be equally way off base.


In both cases, I would expect the actual answer to be "not all that much happens".  Seventeen instead of eighteen months, nineteen instead of eighteen months, something like that.


Yes, Intel's engineers have computers on their desks.  But the serial speed or per-unit price of computing power is not, so far as I know, the limiting resource that bounds their research velocity.  You'd probably have to ask someone at Intel to find out how much of their corporate income they spend on computing clusters / supercomputers, but I would guess it's not much compared to how much they spend on salaries or fab plants.


If anyone from Intel reads this, and wishes to explain to me how it would be unbelievably difficult to do their jobs using computers from ten years earlier, so that Moore's Law would slow to a crawl - then I stand ready to be corrected.  But relative to my present state of partial knowledge, I would say that this does not look like a strong feedback loop.


However...


Suppose that the researchers themselves are running as uploads, software on the computer chips produced by their own factories.


Mind you, this is not the tiniest bit realistic.  By my standards it's not even a very interesting way of looking at the Singularity, because it does not deal with smarter minds but merely faster ones - it dodges the really difficult and interesting part of the problem.


Just as nine women cannot gestate a baby in one month; just as ten thousand researchers cannot do in one year what a hundred researchers can do in a hundred years; so too, a chimpanzee cannot do in four years what a human can do in one year, even though the chimp has around one-fourth the human's cranial capacity.  And likewise a chimp cannot do in 100 years what a human does in 95 years, even though they share 95% of our genetic material.


Better-designed minds don't scale the same way as larger minds, and larger minds don't scale the same way as faster minds, any more than faster minds scale the same way as more numerous minds.  So the notion of merely faster researchers, in my book, fails to address the interesting part of the "intelligence explosion".


Nonetheless, for the sake of illustrating this matter in a relatively simple case...


Suppose the researchers and engineers themselves - and the rest of the humans on the planet, providing a market for the chips and investment for the factories - are all running on the same computer chips that are the product of these selfsame factories.  Suppose also that robotics technology stays on the same curve and provides these researchers with fast manipulators and fast sensors.  We also suppose that the technology feeding Moore's Law has not yet hit physical limits.  And that, as human brains are already highly parallel, we can speed them up even if Moore's Law is manifesting in increased parallelism instead of faster serial speeds - we suppose the uploads aren't yet being run on a fully parallelized machine, and so their actual serial speed goes up with Moore's Law.  Etcetera.


In a fully naive fashion, we just take the economy the way it is today, and run it on the computer chips that the economy itself produces.


In our world where human brains run at constant speed (and eyes and hands work at constant speed), Moore's Law for computing power s is:

s = R(t) = e^t

The function R is the Research curve that relates the amount of Time t passed, to the current Speed of computers s.


To understand what happens when the researchers themselves are running on computers, we simply suppose that R does not relate computing technology to sidereal time - the orbits of the planets, the motion of the stars - but, rather, relates computing technology to the amount of subjective time spent researching it.


Since in our world, subjective time is a linear function of sidereal time, this hypothesis fits exactly the same curve R to observed human history so far.


Our direct measurements of observables do not distinguish between the two hypotheses:

Moore's Law is exponential in the number of orbits of Mars around the Sun

and

Moore's Law is exponential in the amount of subjective time that researchers spend thinking, and experimenting and building using a proportional amount of sensorimotor bandwidth.

But our prior knowledge of causality may lead us to prefer the second hypothesis.


So to understand what happens when the Intel engineers themselves run on computers (and use robotics) subject to Moore's Law, we recursify and get:

dy/dt = s = R(y) = e^y

Here y is the total amount of elapsed subjective time, which at any given point is increasing according to the computer speed s given by Moore's Law, which is determined by the same function R that describes how Research converts elapsed subjective time into faster computers.  Observed human history to date roughly matches the hypothesis that R is exponential with a doubling time of eighteen subjective months (or whatever).


Solving

dy/dt = e^y

yields

y = -ln(C - t)

One observes that this function goes to +infinity at a finite time C.


This is only to be expected, given our assumptions.  After eighteen sidereal months, computing speeds double; after another eighteen subjective months, or nine sidereal months, computing speeds double again; etc.
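
For readers who want to check the arithmetic, here is a minimal numerical sketch - my own construction, not part of the original argument - that treats Moore's Law discretely, with speed doubling after every eighteen subjective months.  The step size and cutoff are illustrative choices:

dt = 1e-4                   # sidereal timestep, in months
y = 0.0                     # elapsed subjective months
t = 0.0                     # elapsed sidereal months
while y < 9000.0:           # ~500 doublings, plenty to exhibit the divergence
    s = 2.0 ** (y // 18.0)  # speed doubles after each 18 subjective months
    y += s * dt             # uploads log s subjective months per sidereal month
    t += dt
print("subjective time diverges near sidereal t = %.1f months" % t)

The printed t comes out near 36: the successive doublings cost 18 + 9 + 4.5 + ... sidereal months, a geometric series summing to 36 - the discrete version of the finite-time singularity in y = -ln(C - t).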


Now, unless the physical universe works in a way that is not only different from the current standard model, but has a different character of physical law than the current standard model, you can't actually do infinite computation in finite time.


Let us suppose that if our biological world had no Singularity, and Intel just kept on running as a company, populated by humans, forever, that Moore's Law would start to run into trouble around 2020.  Say, after 2020 there would be a ten-year gap where chips simply stagnated, until the next doubling occurred after a hard-won breakthrough in 2030.


This just says that R(y) is not an indefinite exponential curve.  By hypothesis, from subjective years 2020 to 2030, R(y) is flat, corresponding to a constant computer speed s.  So dy/dt is constant over this same time period:  Total elapsed subjective time y grows at a linear rate, and as y grows, R(y) and computing speeds remain flat, until ten subjective years have passed.  So the sidereal bottleneck lasts ten subjective years times the current sidereal/subjective conversion rate at 2020's computing speeds.

In short, the whole scenario behaves exactly like what you would expect - the simple transform really does describe the naive scenario of "drop the economy into the timescale of its own computers".


After subjective year 2030, things pick up again, maybe - there are ultimate physical limits on computation, but they're pretty damned high, and we've got a ways to go until there.  But maybe Moore's Law is slowing down - going subexponential, and then as the physical limits are approached, logarithmic, and then simply giving out.


But whatever your beliefs about where Moore's Law ultimately goes, you can just map out the way you would expect the research function R to work as a function of sidereal time in our own world, and then apply the transformation dy/dt = R(y) to get the progress of the uploaded civilization over sidereal time t.  (Its progress over subjective time is simply given by R.)


If sensorimotor bandwidth is the critical limiting resource, then we instead care about R&D on fast sensors and fast manipulators.  We want R_sm(y) instead of R(y), where R_sm is the progress rate of sensors and manipulators, as a function of elapsed sensorimotor time.  And then we write dy/dt = R_sm(y) and crank on the equation again to find out what the world looks like from a sidereal perspective.
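
To make the transform concrete, here is a hedged sketch - mine, not anything from the original analysis - that numerically integrates dy/dt = R(y) for an arbitrary research curve.  The particular R below encodes the hypothetical 2020-2030 stagnation, measured in subjective months from now, with a cap standing in for ultimate physical limits; swapping in R_sm gives the sensorimotor-limited version:

def R(y):
    # Illustrative research curve over subjective months: doubling every
    # 18 months, flat for the hypothetical decade 2020-2030, then resuming.
    if y < 144.0:                        # up to subjective "2020"
        speed = 2.0 ** (y / 18.0)
    elif y < 264.0:                      # the stagnant subjective decade
        speed = 2.0 ** (144.0 / 18.0)    # frozen at 2020's speed, 256x
    else:                                # growth resumes after the breakthrough
        speed = 2.0 ** ((y - 120.0) / 18.0)
    return min(speed, 1e12)              # physical limits: high but finite

y, t, dt = 0.0, 0.0, 1e-4
while y < 300.0:                         # run just past the subjective-2030 breakthrough
    y += R(y) * dt                       # dy/dt = R(y)
    t += dt
print("subjective months %.0f reached at sidereal months %.2f" % (y, t))

On this toy curve the flat subjective decade passes in 120/256 of a sidereal month - ten subjective years at 2020's sidereal/subjective conversion rate, as described above.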


We can verify that the Moore's Researchers scenario is a strong positive feedback loop by performing the "drop $10,000" thought experiment.  Say, we drop in chips from another six doublings down the road - letting the researchers run on those faster chips, while holding constant their state of technological knowledge.


Lo and behold, this drop has a rather large impact, much larger than the impact of giving faster computers to our own biological world's Intel.  Subjectively the impact may be unnoticeable - as a citizen, you just see the planets slow down again in the sky.  But sidereal growth rates increase by a factor of 2^6 = 64.


So this is indeed deserving of the names, "strong positive feedback loop" and "sustained recursion".


As disclaimed before, all this isn't really going to happen.  There would be effects like those Robin Hanson prefers to analyze, from being able to spawn new researchers as the cost of computing power decreased.  You might be able to pay more to get researchers twice as fast.  Above all, someone's bound to try hacking the uploads for increased intelligence... and then those uploads will hack themselves even further...  Not to mention that it's not clear how this civilization cleanly dropped into computer time in the first place.


So no, this is not supposed to be a realistic vision of the future.


But, alongside our earlier parable of compound interest, it is supposed to be an illustration of how strong, sustained recursion has much more drastic effects on the shape of a growth curve, than a one-off case of one thing leading to another thing.  Intel's engineers running on computers is not like Intel's engineers using computers.

" } }, { "_id": "BpSEpGxtF664uRNkf", "title": "Underconstrained Abstractions", "pageUrl": "https://www.lesswrong.com/posts/BpSEpGxtF664uRNkf/underconstrained-abstractions", "postedAt": "2008-12-04T13:58:51.000Z", "baseScore": 11, "voteCount": 9, "commentCount": 27, "url": null, "contents": { "documentId": "BpSEpGxtF664uRNkf", "html": "

Followup to: The Weak Inside View


Saith Robin:

"It is easy, way too easy, to generate new mechanisms, accounts, theories, and abstractions.  To see if such things are useful, we need to vet them, and that is easiest "nearby", where we know a lot.  When we want to deal with or understand things "far", where we know little, we have little choice other than to rely on mechanisms, theories, and concepts that have worked well near.  Far is just the wrong place to try new things."

Well... I understand why one would have that reaction.  But I'm not sure we can really get away with that.


When possible, I try to talk in concepts that can be verified with respect to existing history.  When I talk about natural selection not running into a law of diminishing returns on genetic complexity or brain size, I'm talking about something that we can try to verify by looking at the capabilities of other organisms with brains big and small.  When I talk about the boundaries to sharing cognitive content between AI programs, you can look at the field of AI the way it works today and see that, lo and behold, there isn't a lot of cognitive content shared.


But in my book this is just one trick in a library of methodologies for dealing with the Future, which is, in general, a hard thing to predict.


Let's say that instead of using my complicated-sounding disjunction (many different reasons why the growth trajectory might contain an upward cliff, which don't all have to be true), I instead staked my whole story on the critical threshold of human intelligence.  Saying, "Look how sharp the slope is here!" - well, it would sound like a simpler story.  It would be closer to fitting on a T-Shirt.  And by talking about just that one abstraction and no others, I could make it sound like I was dealing in verified historical facts - humanity's evolutionary history is something that has already happened.


But speaking of an abstraction being "verified" by previous history is a tricky thing.  There is this little problem of underconstraint - of there being more than one possible abstraction that the data "verifies".

In "Cascades, Cycles, Insight" I said that economics does not seem to me to deal much in the origins of novel knowledge and novel designs, and said, "If I underestimate your power and merely parody your field, by all means inform me what kind of economic study has been done of such things."  This challenge was answered by comments directing me to some papers on "endogenous growth", which happens to be the name of theories that don't take productivity improvements as exogenous forces.


I've looked at some literature on endogenous growth.  And don't get me wrong, it's probably not too bad as economics.  However, the seminal literature talks about ideas being generated by combining other ideas, so that if you've got N ideas already and you're combining them three at a time, that's a potential N!/(3!(N - 3)!) new ideas to explore.  And then goes on to note that, in this case, there will be vastly more ideas than anyone can explore, so that the rate at which ideas are exploited will depend more on a paucity of explorers than a paucity of ideas.
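
As a quick illustration of how little work the formula itself does - a sketch of my own, not from the endogenous growth literature - the combination count explodes for any sizable N:

from math import comb

# C(N, 3) = N! / (3! * (N - 3)!), the number of three-idea combinations
for n in (10, 100, 1000, 10000):
    print(n, comb(n, 3))
# 10 -> 120; 100 -> 161,700; 1000 -> ~1.7e8; 10000 -> ~1.7e11

This curve, and any number of alternatives, deliver the only consequence the theory actually uses: more ideas than people to explore them.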


Well... first of all, the notion that "ideas are generated by combining other ideas N at a time" is not exactly an amazing AI theory; it is an economist looking at, essentially, the whole problem of AI, and trying to solve it in 5 seconds or less.  It's not as if any experiment was performed to actually watch ideas recombining.  Try to build an AI around this theory and you will find out in very short order how useless it is as an account of where ideas come from...


But more importantly, if the only proposition you actually use in your theory is that there are more ideas than people to exploit them, then this is the only proposition that can even be partially verified by testing your theory.


Even if a recombinant growth theory can be fit to the data, then the historical data still underconstrains the many possible abstractions that might describe the number of possible ideas available - any hypothesis that has around "more ideas than people to exploit them" will fit the same data equally well.  You should simply say, "I assume there are more ideas than people to exploit them", not go so far into mathematical detail as to talk about N choose 3 ideas.  It's not that the dangling math here is underconstrained by the previous data, but that you're not even using it going forward.


(And does it even fit the data?  I have friends in venture capital who would laugh like hell at the notion that there's an unlimited number of really good ideas out there.  Some kind of Gaussian or power-law or something distribution for the goodness of available ideas seems more in order...  I don't object to "endogenous growth" simplifying things for the sake of having one simplified abstraction and seeing if it fits the data well; we all have to do that.  Claiming that the underlying math doesn't just let you build a useful model, but also has a fairly direct correspondence to reality, ought to be a whole 'nother story, in economics - or so it seems to me.)


(If I merely misinterpret the endogenous growth literature or underestimate its sophistication, by all means correct me.)


The further away you get from highly regular things like atoms, and the closer you get to surface phenomena that are the final products of many moving parts, the more history underconstrains the abstractions that you use.  This is part of what makes futurism difficult.  If there were obviously only one story that fit the data, who would bother to use anything else?


Is Moore's Law a story about the increase in computing power over time - the number of transistors on a chip, as a function of how far the planets have spun in their orbits, or how many times a light wave emitted from a cesium atom has changed phase?


Or does the same data equally verify a hypothesis about exponential increases in investment in manufacturing facilities and R&D, with an even higher exponent, showing a law of diminishing returns?


Or is Moore's Law showing the increase in computing power, as a function of some kind of optimization pressure applied by human researchers, themselves thinking at a certain rate?


That last one might seem hard to verify, since we've never watched what happens when a chimpanzee tries to work in a chip R&D lab.  But on some raw, elemental level - would the history of the world really be just the same, proceeding on just exactly the same timeline as the planets move in their orbits, if, for these last fifty years, the researchers themselves had been running on the latest generation of computer chip at any given point?  That sounds to me even sillier than having a financial model in which there's no way to ask what happens if real estate prices go down.


And then, when you apply the abstraction going forward, there's the question of whether there's more than one way to apply it - which is one reason why a lot of futurists tend to dwell in great gory detail on the past events that seem to support their abstractions, but just assume a single application forward.


E.g. Moravec in '88, spending a lot of time talking about how much "computing power" the human brain seems to use - but much less time talking about whether an AI would use the same amount of computing power, or whether using Moore's Law to extrapolate the first supercomputer of this size is the right way to time the arrival of AI.  (Moravec thought we were supposed to have AI around now, based on his calculations - and he underestimated the size of the supercomputers we'd actually have in 2008.)


That's another part of what makes futurism difficult - after you've told your story about the past, even if it seems like an abstraction that can be "verified" with respect to the past (but what if you overlooked an alternative story for the same evidence?) that often leaves a lot of slack with regards to exactly what will happen with respect to that abstraction, going forward.


So if it's not as simple as just using the one trick of finding abstractions you can easily verify on available data...


...what are some other tricks to use?

" } }, { "_id": "oFKLSvbDkX8h7amRs", "title": "Permitted Possibilities, & Locality", "pageUrl": "https://www.lesswrong.com/posts/oFKLSvbDkX8h7amRs/permitted-possibilities-and-locality", "postedAt": "2008-12-03T21:20:35.000Z", "baseScore": 28, "voteCount": 19, "commentCount": 21, "url": null, "contents": { "documentId": "oFKLSvbDkX8h7amRs", "html": "

Continuation of: Hard Takeoff


The analysis given in the last two days permits more than one possible AI trajectory:

  1. Programmers, smarter than evolution at finding tricks that work, but operating without fundamental insight or with only partial insight, create a mind that is dumber than the researchers but performs lower-quality operations much faster.  This mind reaches k > 1, cascades up to the level of a very smart human, itself achieves insight into intelligence, and undergoes the really fast part of the FOOM, to superintelligence.  This would be the major nightmare scenario for the origin of an unFriendly AI.

  2. Programmers operating with partial insight, create a mind that performs a number of tasks very well, but can't really handle self-modification let alone AI theory.  A mind like this might progress with something like smoothness, pushed along by the researchers rather than itself, even all the way up to average-human capability - not having the insight into its own workings to push itself any further.  We also suppose that the mind is either already using huge amounts of available hardware, or scales very poorly, so it cannot go FOOM just as a result of adding a hundred times as much hardware.  This scenario seems less likely to my eyes, but it is not ruled out by any effect I can see.

  3. Programmers operating with strong insight into intelligence, directly create along an efficient and planned pathway, a mind capable of modifying itself with deterministic precision - provably correct or provably noncatastrophic self-modifications.  This is the only way I can see to achieve narrow enough targeting to create a Friendly AI.  The "natural" trajectory of such an agent would be slowed by the requirements of precision, and sped up by the presence of insight; but because this is a Friendly AI, notions like "You can't yet improve yourself this far, your goal system isn't verified enough" would play a role.

So these are some things that I think are permitted to happen, albeit that case 2 would count as a hit against me to some degree because it does seem unlikely.


Here are some things that shouldn't happen, on my analysis:


And I also don't think this is allowed:


Mostly, Robin seems to think that uploads will come first, but that's a whole 'nother story.  So far as AI goes, this looks like Robin's maximum line of probability - and if I got this mostly wrong or all wrong, that's no surprise.  Robin Hanson did the same to me when summarizing what he thought were my own positions.  I have never thought, in prosecuting this Disagreement, that we were starting out with a mostly good understanding of what the Other was thinking; and this seems like an important thing to have always in mind.


So - bearing in mind that I may well be criticizing a straw misrepresentation, and that I know this full well, but I am just trying to guess my best - here's what I see as wrong with the elements of this scenario:


• The abilities we call "human" are the final products of an economy of mind - not in the sense that there are selfish agents in it, but in the sense that there are production lines; and I would even expect evolution to enforce something approaching fitness as a common unit of currency.  (Enough selection pressure to create an adaptation from scratch should be enough to fine-tune the resource curves involved.)  It's the production lines, though, that are the main point - that your brain has specialized parts and the specialized parts pass information around.  All of this goes on behind the scenes, but it's what finally adds up to any single human ability.


In other words, trying to get humanlike performance in just one domain, is divorcing a final product of that economy from all the work that stands behind it.  It's like having a global economy that can only manufacture toasters, but not dishwashers or light bulbs.  You can have something like Deep Blue that beats humans at chess in an inhuman, specialized way; but I don't think it would be easy to get humanish performance at, say, biology R&D, without a whole mind and architecture standing behind it, that would also be able to accomplish other things.  Tasks that draw on our cross-domain-ness, or our long-range real-world strategizing, or our ability to formulate new hypotheses, or our ability to use very high-level abstractions - I don't think that you would be able to replace a human in just that one job, without also having something that would be able to learn many different jobs.


I think it is a fair analogy to the idea that you shouldn't see a global economy that can manufacture toasters but not manufacture anything else.


This is why I don't think we'll see a system of AIs that are diverse, individually highly specialized, and only collectively able to do anything a human can do.


• Trading cognitive content around between diverse AIs is more difficult and less likely than it might sound.  Consider the field of AI as it works today.  Is there any standard database of cognitive content that you buy off the shelf and plug into your amazing new system, whether it be a chessplayer or a new data-mining algorithm?  If it's a chess-playing program, there are databases of stored games - but that's not the same as having databases of preprocessed cognitive content.


So far as I can tell, the diversity of cognitive architectures acts as a tremendous barrier to trading around cognitive content.  If you have many AIs around that are all built on the same architecture by the same programmers, they might, with a fair amount of work, be able to pass around learned cognitive content.  Even this is less trivial than it sounds.  If two AIs both see an apple for the first time, and they both independently form concepts about that apple, and they both independently build some new cognitive content around those concepts, then their thoughts are effectively written in a different language.  By seeing a single apple at the same time, they could identify a concept they both have in mind, and in this way build up a common language...


...the point being that even when two separated minds are running literally the same source code, it is still difficult for them to trade new knowledge as raw cognitive content without having a special language designed just for sharing knowledge.
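
A toy sketch of that difficulty - my own illustration, with every name in it invented for the purpose - two agents running identical source code, which number their concepts in whatever order experience happens to present them:

class Agent:
    def __init__(self):
        self.concepts = {}                # concept name -> internal id
        self.content = {}                 # internal id -> learned content

    def observe(self, name):
        cid = self.concepts.setdefault(name, len(self.concepts))
        self.content[cid] = "knowledge about " + name

a, b = Agent(), Agent()
for name in ("apple", "rock", "river"):   # a's order of experience
    a.observe(name)
for name in ("river", "apple", "rock"):   # b's order of experience
    b.observe(name)

print(a.concepts)   # {'apple': 0, 'rock': 1, 'river': 2}
print(b.concepts)   # {'river': 0, 'apple': 1, 'rock': 2}

# Copying b.content[0] directly into a would install river-knowledge at
# a's apple-address, despite the identical source code.  Jointly seeing
# one apple identifies one shared concept, from which a translation
# table can be grown, entry by painstaking entry:
translation = {a.concepts["apple"]: b.concepts["apple"]}   # {0: 1}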


Now suppose the two AIs are built around different architectures.


The barrier this opposes to a true, cross-agent, literal "economy of mind", is so strong, that in the vast majority of AI applications you set out to write today, you will not bother to import any standardized preprocessed cognitive content.  It will be easier for your AI application to start with some standard examples - databases of that sort of thing do exist, in some fields anyway - and redo all the cognitive work of learning on its own.


That's how things stand today.


And I have to say that looking over the diversity of architectures proposed at any AGI conference I've attended, it is very hard to imagine directly trading cognitive content between any two of them.  It would be an immense amount of work just to set up a language in which they could communicate what they take to be facts about the world - never mind preprocessed cognitive content.


This is a force for localization: unless the condition I have just described changes drastically, it means that agents will be able to do their own cognitive labor, rather than needing to get their brain content manufactured elsewhere, or even being able to get their brain content manufactured elsewhere.  I can imagine there being an exception to this for non-diverse agents that are deliberately designed to carry out this kind of trading within their code-clade.  (And in the long run, difficulties of translation seem less likely to stop superintelligences.)


But in today's world, it seems to be the rule that when you write a new AI program, you can sometimes get preprocessed raw data, but you will not buy any preprocessed cognitive content - the internal content of your program will come from within your program.


And it actually does seem to me that AI would have to get very sophisticated before it got over the "hump" of increased sophistication making sharing harder instead of easier.  I'm not sure this is pre-takeoff sophistication we're talking about, here.  And the cheaper computing power is, the easier it is to just share the data and do the learning on your own.


Again - in today's world, sharing of cognitive content between diverse AIs doesn't happen, even though there are lots of machine learning algorithms out there doing various jobs.  You could say things would happen differently in the future, but it'd be up to you to make that case.


• Understanding the difficulty of interfacing diverse AIs, is the next step toward understanding why it's likely to be a single coherent cognitive system that goes FOOM via recursive self-improvement.  The same sort of barriers that apply to trading direct cognitive content, would also apply to trading changes in cognitive source code.


It's a whole lot easier to modify the source code in the interior of your own mind, than to take that modification, and sell it to a friend who happens to be written on different source code.


Certain kinds of abstract insights would be more tradeable, among sufficiently sophisticated minds; and the major insights might be well worth selling - like, if you invented a new general algorithm at some subtask that many minds perform.  But if you again look at the modern state of the field, then you find that it is only a few algorithms that get any sort of general uptake.


And if you hypothesize minds that understand these algorithms, and the improvements to them, and what these algorithms are for, and how to implement and engineer them - then these are already very sophisticated minds, at this point, they are AIs that can do their own AI theory.  So the hard takeoff has to have not already started, yet, at this point where there are many AIs around that can do AI theory.  If they can't do AI theory, diverse AIs are likely to experience great difficulties trading code improvements among themselves.


This is another localizing force.  It means that the improvements you make to yourself, and the compound interest earned on those improvements, are likely to stay local.


If the scenario with an AI takeoff is anything at all like the modern world in which all the attempted AGI projects have completely incommensurable architectures, then any self-improvements will definitely stay put, not spread.


• But suppose that the situation did change drastically from today, and that you had a community of diverse AIs which were sophisticated enough to share cognitive content, code changes, and even insights.  And suppose even that this is true at the start of the FOOM - that is, the community of diverse AIs got all the way up to that level, without yet using a FOOM or starting a FOOM at a time when it would still be localized.


We can even suppose that most of the code improvements, algorithmic insights, and cognitive content driving any particular AI, is coming from outside that AI - sold or shared - so that the improvements the AI makes to itself, do not dominate its total velocity.


Fine.  The humans are not out of the woods.


Even if we're talking about uploads, it will be immensely more difficult to apply any of the algorithmic insights that are tradeable between AIs, to the undocumented human brain, that is a huge mass of spaghetti code, that was never designed to be upgraded, that is not end-user-modifiable, that is not hot-swappable, that is written for a completely different architecture than what runs efficiently on modern processors...


And biological humans?  Their neurons just go on doing whatever neurons do, at 100 cycles per second (tops).


So this FOOM that follows from recursive self-improvement, the cascade effect of using your increased intelligence to rewrite your code and make yourself even smarter -


The barriers to sharing cognitive improvements among diversely designed AIs, are large; the barriers to sharing with uploaded humans, are incredibly huge; the barrier to sharing with biological humans, is essentially absolute.  (Barring a (benevolent) superintelligence with nanotechnology, but if one of those is around, you have already won.)


In this hypothetical global economy of mind, the humans are like a country that no one can invest in, that cannot adopt any of the new technologies coming down the line.


I once observed that Ricardo's Law of Comparative Advantage is the theorem that unemployment should not exist.  The gotcha being that if someone is sufficiently unreliable, there is a cost to you to train them, a cost to stand over their shoulders and monitor them, a cost to check their results for accuracy - the existence of unemployment in our world is a combination of transaction costs like taxes, regulatory barriers like minimum wage, and above all, lack of trust.  There are a dozen things I would pay someone else to do for me - if I wasn't paying taxes on the transaction, and if I could trust a stranger as much as I trust myself (both in terms of their honesty and of acceptable quality of output).  Heck, I'd as soon have some formerly unemployed person walk in and spoon food into my mouth while I kept on typing at the computer - if there were no transaction costs, and I trusted them.


If high-quality thought drops into a speed closer to computer time by a few orders of magnitude, no one is going to take a subjective year to explain to a biological human an idea that they will be barely able to grasp, in exchange for an even slower guess at an answer that is probably going to be wrong anyway.


Even uploads could easily end up doomed by this effect, not just because of the immense overhead cost and slowdown of running their minds, but because of the continuing error-proneness of the human architecture.  Who's going to trust a giant messy undocumented neural network, any more than you'd run right out and hire some unemployed guy off the street to come into your house and do your cooking?


This FOOM leaves humans behind -


- unless you go the route of Friendly AI, and make a superintelligence that simply wants to help humans, not for any economic value that humans provide to it, but because that is its nature.


And just to be clear on something - which really should be clear by now, from all my other writing, but maybe you're just wandering in - it's not that having squishy things running around on two legs is the ultimate height of existence.  But if you roll up a random AI with a random utility function, it just ends up turning the universe into patterns we would not find very eudaimonic - turning the galaxies into paperclips.  If you try a haphazard attempt at making a "nice" AI, the sort of not-even-half-baked theories I see people coming up with on the spot and occasionally writing whole books about, like using reinforcement learning on pictures of smiling humans to train the AI to value happiness, yes this was a book, then the AI just transforms the galaxy into tiny molecular smileyfaces...


It's not some small, mean desire to survive for myself at the price of greater possible futures, that motivates me.  The thing is - those greater possible futures, they don't happen automatically.  There are stakes on the table that are so much an invisible background of your existence that it would never occur to you they could be lost; and these things will be shattered by default, if not specifically preserved.


• And as for the idea that the whole thing would happen slowly enough for humans to have plenty of time to react to things - a smooth exponential shifted into a shorter doubling time - of that, I spoke yesterday.  Progress seems to be exponential now, more or less, or at least accelerating, and that's with constant human brains.  If you take a nonrecursive accelerating function and fold it in on itself, you are going to get superexponential progress.  "If computing power doubles every eighteen months, what happens when computers are doing the research" should not just be a faster doubling time.  (Though, that said, on any sufficiently short timescale, progress might well locally approximate an exponential because investments will shift in such fashion that the marginal returns on investment balance, even in the interior of a single mind; interest rates consistent over a timespan imply smooth exponential growth over that timespan.)


You can't count on warning, or time to react.  If an accident sends a sphere of plutonium, not critical, but prompt critical, neutron output can double in a tenth of a second even with k = 1.0006.  It can deliver a killing dose of radiation or blow the top off a nuclear reactor before you have time to draw a breath.  Computers, like neutrons, already run on a timescale much faster than human thinking.  We are already past the world where we can definitely count on having time to react.
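
For the curious, the doubling arithmetic runs as follows (my own back-of-envelope; the neutron generation time tau is a free parameter that the example does not specify).  If each generation multiplies the neutron population by k, then

N(t) = N_0 * k^(t / tau)

t_double = tau * ln(2) / ln(k), which is approximately tau * ln(2) / (k - 1) for k near 1

With k = 1.0006 that is about 1155 generations per doubling - so everything turns on how short the generation time is, and fission generations are very short indeed.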


When you move into the transhuman realm, you also move into the realm of adult problems.  To wield great power carries a price in great precision.  You can build a nuclear reactor but you can't ad-lib it.  On the problems of this scale, if you want the universe to end up a worthwhile place, you can't just throw things into the air and trust to luck and later correction.  That might work in childhood, but not on adult problems where the price of one mistake can be instant death.


Making it into the future is an adult problem.  That's not a death sentence.  I think.  It's not the inevitable end of the world.  I hope.  But if you want humankind to survive, and the future to be a worthwhile place, then this will take careful crafting of the first superintelligence - not just letting economics or whatever take its easy, natural course.  The easy, natural course is fatal - not just to ourselves but to all our hopes.


That, itself, is natural.  It is only to be expected.  To hit a narrow target you must aim; to reach a good destination you must steer; to win, you must make an extra-ordinary effort.

" } }, { "_id": "tjH8XPxAnr6JRbh7k", "title": "Hard Takeoff", "pageUrl": "https://www.lesswrong.com/posts/tjH8XPxAnr6JRbh7k/hard-takeoff", "postedAt": "2008-12-02T20:44:26.000Z", "baseScore": 36, "voteCount": 32, "commentCount": 34, "url": null, "contents": { "documentId": "tjH8XPxAnr6JRbh7k", "html": "

Continuation of: Recursive Self-Improvement


Constant natural selection pressure, operating on the genes of the hominid line, produced improvement in brains over time that seems to have been, roughly, linear or accelerating; the operation of constant human brains on a pool of knowledge seems to have produced returns that are, very roughly, exponential or superexponential.  (Robin proposes that human progress is well-characterized as a series of exponential modes with diminishing doubling times.)


Recursive self-improvement - an AI rewriting its own cognitive algorithms - identifies the object level of the AI with a force acting on the metacognitive level; it "closes the loop" or "folds the graph in on itself".  E.g. the difference between returns on a constant investment in a bond, and reinvesting the returns into purchasing further bonds, is the difference between the equations y = f(t) = m*t, and dy/dt = f(y) = m*y whose solution is the compound interest exponential, y = e^(m*t).
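
The contrast is easy to exhibit numerically; a minimal sketch, with the rate and timestep as purely illustrative choices of mine:

m, dt = 0.05, 0.01                     # return rate and timestep, arbitrary units
y_linear, y_compound, t = 0.0, 1.0, 0.0
while t < 100.0:
    y_linear += m * dt                 # returns paid out, never reinvested
    y_compound += m * y_compound * dt  # returns purchase further bonds
    t += dt
print(y_linear)     # ~5: the m*t of the constant investment
print(y_compound)   # ~148: e^(m*t), the loop folded in on itself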


When you fold a whole chain of differential equations in on itself like this, it should either peter out rapidly as improvements fail to yield further improvements, or else go FOOM.  An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely - far more unlikely than seeing such behavior in a system with a roughly-constant underlying optimizer, like evolution improving brains, or human brains improving technology.  Our present life is no good indicator of things to come.


Or to try and compress it down to a slogan that fits on a T-Shirt - not that I'm saying this is a good idea - "Moore's Law is exponential now; it would be really odd if it stayed exponential with the improving computers doing the research."  I'm not saying you literally get dy/dt = e^y that goes to infinity after finite time - and hardware improvement is in some ways the least interesting factor here - but should we really see the same curve we do now?


RSI is the biggest, most interesting, hardest-to-analyze, sharpest break-with-the-past contributing to the notion of a "hard takeoff" aka "AI go FOOM", but it's nowhere near being the only such factor.  The advent of human intelligence was a discontinuity with the past even without RSI...



...which is to say that observed evolutionary history - the discontinuity between humans, and chimps who share 95% of our DNA - lightly suggests a critical threshold built into the capabilities that we think of as "general intelligence", a machine that becomes far more powerful once the last gear is added.


This is only a light suggestion because the branching time between humans and chimps is enough time for a good deal of complex adaptation to occur.  We could be looking at the sum of a cascade, not the addition of a final missing gear.  On the other hand, we can look at the gross brain anatomies and see that human brain anatomy and chimp anatomy have not diverged all that much.  On the gripping hand, there's the sudden cultural revolution - the sudden increase in the sophistication of artifacts - that accompanied the appearance of anatomically modern Cro-Magnons just a few tens of thousands of years ago.


Now of course this might all just be completely inapplicable to the development trajectory of AIs built by human programmers rather than by evolution.  But it at least lightly suggests, and provides a hypothetical illustration of, a discontinuous leap upward in capability that results from a natural feature of the solution space - a point where you go from sorta-okay solutions to totally-amazing solutions as the result of a few final tweaks to the mind design.


I could potentially go on about this notion for a bit - because, in an evolutionary trajectory, it can't literally be a "missing gear", the sort of discontinuity that follows from removing a gear that an otherwise functioning machine was built around.  So if you suppose that a final set of changes was enough to produce a sudden huge leap in effective intelligence, it does demand the question of what those changes were.  Something to do with reflection - the brain modeling or controlling itself - would be one obvious candidate.  Or perhaps a change in motivations (more curious individuals, using the brainpower they have in different directions) in which case you wouldn't expect that discontinuity to appear in the AI's development, but you would expect it to be more effective at earlier stages than humanity's evolutionary history would suggest...  But you could have whole journal issues about that one question, so I'm just going to leave it at that.


Or consider the notion of sudden resource bonanzas.  Suppose there's a semi-sophisticated Artificial General Intelligence running on a cluster of a thousand CPUs.  The AI has not hit a wall - it's still improving itself - but its self-improvement is going so slowly that, the AI calculates, it will take another fifty years for it to engineer / implement / refine just the changes it currently has in mind.  Even if this AI would go FOOM eventually, its current progress is so slow as to constitute being flatlined...


So the AI turns its attention to examining certain blobs of binary code - code composing operating systems, or routers, or DNS services - and then takes over all the poorly defended computers on the Internet.  This may not require what humans would regard as genius, just the ability to examine lots of machine code and do relatively low-grade reasoning on millions of bytes of it.  (I have a saying/hypothesis that a human trying to write code is like someone without a visual cortex trying to paint a picture - we can do it eventually, but we have to go pixel by pixel because we lack a sensory modality for that medium; it's not our native environment.)  The Future may also have more legal ways to obtain large amounts of computing power quickly.


This sort of resource bonanza is intriguing in a number of ways.  By assumption, optimization efficiency is the same, at least for the moment - we're just plugging a few orders of magnitude more resource into the current input/output curve.  With a stupid algorithm, a few orders of magnitude more computing power will buy you only a linear increase in performance - I would not fear Cyc even if it ran on a computer the size of the Moon, because there is no there there.


On the other hand, humans have a brain three times as large, and a prefrontal cortex six times as large, as that of a standard primate our size - so with software improvements of the sort that natural selection made over the last five million years, it does not require exponential increases in computing power to support linearly greater intelligence.  Mind you, this sort of biological analogy is always fraught - maybe a human has not much more cognitive horsepower than a chimpanzee, the same underlying tasks being performed, but in a few more domains and with greater reflectivity - the engine outputs the same horsepower, but a few gears were reconfigured to turn each other less wastefully - and so you wouldn't be able to go from human to super-human with just another sixfold increase in processing power... or something like that.


But if the lesson of biology suggests anything, it is that you do not run into logarithmic returns on processing power in the course of reaching human intelligence, even when that processing power increase is strictly parallel rather than serial, provided that you are at least as good at writing software to take advantage of that increased computing power, as natural selection is at producing adaptations - five million years for a sixfold increase in computing power.


Michael Vassar observed in yesterday's comments that humans, by spending linearly more time studying chess, seem to get linear increases in their chess rank (across a wide range of rankings), while putting exponentially more time into a search algorithm is usually required to yield the same range of increase.  Vassar called this "bizarre", but I find it quite natural.  Deep Blue searched the raw game tree of chess; Kasparov searched the compressed regularities of chess.  It's not surprising that the simple algorithm is logarithmic and the sophisticated algorithm is linear.  One might say similarly of the course of human progress seeming to be closer to exponential, while evolutionary progress is closer to being linear.  Being able to understand the regularity of the search space counts for quite a lot.
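
A toy model of that contrast - mine, built on an assumed scaling law rather than measured data - shows why the brute-force idiom is so much more expensive: if a searcher's rating grows as the logarithm of nodes searched, equal rating increments cost tenfold more computation each time:

# Assumed toy scaling for a brute-force searcher: rating = 400 * log10(nodes)
for rating in (1600, 2000, 2400, 2800):
    nodes = 10.0 ** (rating / 400.0)   # invert the assumed scaling law
    print(rating, "%.0e nodes per move" % nodes)
# Each +400 rating costs 10x the nodes, while a human gets comparable
# increments from roughly linear increases in study time.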


If the AI is somewhere in between - not as brute-force as Deep Blue, nor as compressed as a human - then maybe a 10,000-fold increase in computing power will only buy it a 10-fold increase in optimization velocity... but that's still quite a speedup.


Furthermore, all future improvements the AI makes to itself will now be amortized over 10,000 times as much computing power to apply the algorithms.  So a single improvement to code now has more impact than before; it's liable to produce more further improvements.  Think of a uranium pile.  It's always running the same "algorithm" with respect to neutrons causing fissions that produce further neutrons, but just piling on more uranium can cause it to go from subcritical to supercritical, as any given neutron has more uranium to travel through and a higher chance of causing future fissions.


So just the resource bonanza represented by "eating the Internet" or "discovering an application for which there is effectively unlimited demand, which lets you rent huge amounts of computing power while using only half of it to pay the bills" - even though this event isn't particularly recursive of itself, just an object-level fruit-taking - could potentially drive the AI from subcritical to supercritical.


Not, mind you, that this will happen with an AI that's just stupid.  But an AI already improving itself slowly - that's a different case.


Even if this doesn't happen - if the AI uses this newfound computing power at all effectively, its optimization efficiency will increase more quickly than before; just because the AI has more optimization power to apply to the task of increasing its own efficiency, thanks to the sudden bonanza of optimization resources.


So the whole trajectory can conceivably change, just from so simple and straightforward and unclever and uninteresting-seeming an act, as eating the Internet.  (Or renting a bigger cloud.)


Agriculture changed the course of human history by supporting a larger population - and that was just a question of having more humans around, not individual humans having a brain a hundred times as large.  This gets us into the whole issue of the returns on scaling individual brains not being anything like the returns on scaling the number of brains.  A big-brained human has around four times the cranial volume of a chimpanzee, but 4 chimps != 1 human.  (And for that matter, 60 squirrels != 1 chimp.)  Software improvements here almost certainly completely dominate hardware, of course.  But having a thousand scientists who collectively read all the papers in a field, and who talk to each other, is not like having one superscientist who has read all those papers and can correlate their contents directly using native cognitive processes of association, recognition, and abstraction.  Having more humans talking to each other using low-bandwidth words, cannot be expected to achieve returns similar to those from scaling component cognitive processes within a coherent cognitive system.


This, too, is an idiom outside human experience - we have to solve big problems using lots of humans, because there is no way to solve them using ONE BIG human.  But it never occurs to anyone to substitute four chimps for one human; and only a certain very foolish kind of boss thinks you can substitute ten programmers with one year of experience for one programmer with ten years of experience.


(Part of the general Culture of Chaos that praises emergence and thinks evolution is smarter than human designers, also has a mythology of groups being inherently superior to individuals.  But this is generally a matter of poor individual rationality, and various arcane group structures that are supposed to compensate; rather than an inherent fact about cognitive processes somehow scaling better when chopped up into distinct brains.  If that were literally more efficient, evolution would have designed humans to have four chimpanzee heads that argued with each other.  In the realm of AI, it seems much more straightforward to have a single cognitive process that lacks the emotional stubbornness to cling to its accustomed theories, and doesn't need to be argued out of it at gunpoint or replaced by a new generation of grad students.  I'm not going to delve into this in detail for now, just warn you to be suspicious of this particular creed of the Culture of Chaos; it's not like they actually observed the relative performance of a hundred humans versus one BIG mind with a brain fifty times human size.)


So yes, there was a lot of software improvement involved - what we are seeing with the modern human brain size, is probably not so much the brain volume required to support the software improvement, but rather the new evolutionary equilibrium for brain size given the improved software.


Even so - hominid brain size increased by a factor of five over the course of around five million years.  You might want to think very seriously about the contrast between that idiom, and a successful AI being able to expand onto five thousand times as much hardware over the course of five minutes - when you are pondering possible hard takeoffs, and whether the AI trajectory ought to look similar to human experience.


A subtler sort of hardware overhang, I suspect, is represented by modern CPUs having a 2GHz serial speed, in contrast to neurons that spike 100 times per second on a good day.  The "hundred-step rule" in computational neuroscience is a rule of thumb that any postulated neural algorithm which runs in realtime has to perform its job in less than 100 serial steps one after the other.  We do not understand how to efficiently use the computer hardware we have now, to do intelligent thinking.  But the much-vaunted "massive parallelism" of the human brain, is, I suspect, mostly cache lookups to make up for the sheer awkwardness of the brain's serial slowness - if your computer ran at 200Hz, you'd have to resort to all sorts of absurdly massive parallelism to get anything done in realtime.  I suspect that, if correctly designed, a midsize computer cluster would be able to get high-grade thinking done at a serial speed much faster than human, even if the total parallel computing power was less.
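
The raw ratio behind that suspicion is simple arithmetic, using the figures in the paragraph above:

neuron_hz = 100              # serial steps per second, biological neurons
cpu_hz = 2 * 10**9           # 2GHz serial speed
print(cpu_hz // neuron_hz)   # 20000000: a 2 x 10^7 advantage in serial depth,
                             # before any parallelism is counted at all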


So that's another kind of overhang: because our computing hardware has run so far ahead of AI theory, we have incredibly fast computers we don't know how to use for thinking; getting AI right could produce a huge, discontinuous jolt, as the speed of high-grade thought on this planet suddenly dropped into computer time.


A still subtler kind of overhang would be represented by human failure to use our gathered experimental data efficiently.


On to the topic of insight, another potential source of discontinuity.  The course of hominid evolution was driven by evolution's neighborhood search; if the evolution of the brain accelerated to some degree, this was probably due to existing adaptations creating a greater number of possibilities for further adaptations.  (But it couldn't accelerate past a certain point, because evolution is limited in how much selection pressure it can apply - if someone succeeds in breeding due to adaptation A, that's less variance left over for whether or not they succeed in breeding due to adaptation B.)


But all this is searching the raw space of genes.  Human design intelligence, or sufficiently sophisticated AI design intelligence, isn't like that.  One might even be tempted to make up a completely different curve out of thin air - like, intelligence will take all the easy wins first, and then be left with only higher-hanging fruit, while increasing complexity will defeat the ability of the designer to make changes.  So where blind evolution accelerated, intelligent design will run into diminishing returns and grind to a halt.  And as long as you're making up fairy tales, you might as well further add that the law of diminishing returns will be exactly right, and have bumps and rough patches in exactly the right places, to produce a smooth gentle takeoff even after recursion and various hardware transitions are factored in...  One also wonders why the story about "intelligence taking easy wins first in designing brains" tops out at or before human-level brains, rather than going a long way beyond human before topping out.  But one suspects that if you tell that story, there's no point in inventing a law of diminishing returns to begin with.


(Ultimately, if the character of physical law is anything like our current laws of physics, there will be limits to what you can do on finite hardware, and limits to how much hardware you can assemble in finite time, but if they are very high limits relative to human brains, it doesn't affect the basic prediction of hard takeoff, "AI go FOOM".)


The main thing I'll venture into actually expecting from adding \"insight\" to the mix, is that there'll be a discontinuity at the point where the AI understands how to do AI theory, the same way that human researchers try to do AI theory.  An AI, to swallow its own optimization chain, must not just be able to rewrite its own source code; it must be able to, say, rewrite Artificial Intelligence: A Modern Approach (2nd Edition).  An ability like this seems (untrustworthily, but I don't know what else to trust) like it ought to appear at around the same time that the architecture is at the level of, or approaching the level of, being able to handle what humans handle - being no shallower than an actual human, whatever its inexperience in various domains.  It would produce further discontinuity at around that time.


In other words, when the AI becomes smart enough to do AI theory, that's when I expect it to fully swallow its own optimization chain and for the real FOOM to occur - though the AI might reach this point as part of a cascade that started at a more primitive level.


All these complications are why I don't believe we can really do any sort of math that will predict quantitatively the trajectory of a hard takeoff.  You can make up models, but real life is going to include all sorts of discrete jumps, bottlenecks, bonanzas, insights - and the "fold the curve in on itself" paradigm of recursion is going to amplify even small roughnesses in the trajectory.


So I stick to qualitative predictions.  "AI go FOOM".


Tomorrow I hope to tackle locality, and a bestiary of some possible qualitative trajectories the AI might take given this analysis.  Robin Hanson's summary of "primitive AI fooms to sophisticated AI" doesn't fully represent my views - that's just one entry in the bestiary, albeit a major one.

" } }, { "_id": "JBadX7rwdcRFzGuju", "title": "Recursive Self-Improvement", "pageUrl": "https://www.lesswrong.com/posts/JBadX7rwdcRFzGuju/recursive-self-improvement", "postedAt": "2008-12-01T20:49:11.000Z", "baseScore": 38, "voteCount": 30, "commentCount": 54, "url": null, "contents": { "documentId": "JBadX7rwdcRFzGuju", "html": "

Followup to: Life's Story Continues, Surprised by Brains, Cascades, Cycles, Insight, Recursion, Magic, Engelbart: Insufficiently Recursive, Total Nano Domination


I think that at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability - "AI go FOOM".  Just to be clear on the claim, "fast" means on a timescale of weeks or hours rather than years or decades; and "FOOM" means way the hell smarter than anything else around, capable of delivering in short time periods technological advancements that would take humans decades, probably including full-scale molecular nanotechnology (that it gets by e.g. ordering custom proteins over the Internet with 72-hour turnaround time).  Not, "ooh, it's a little Einstein but it doesn't have any robot hands, how cute".


Most people who object to this scenario, object to the "fast" part.  Robin Hanson objected to the "local" part.  I'll try to handle both, though not all in one shot today.


We are setting forth to analyze the developmental velocity of an Artificial Intelligence.  We'll break down this velocity into optimization slope, optimization resources, and optimization efficiency.  We'll need to understand cascades, cycles, insight and recursion; and we'll stratify our recursive levels into the metacognitive, cognitive, metaknowledge, knowledge, and object level.


Quick review:


Optimizing yourself is a special case, but it's one we're about to spend a lot of time talking about.


By the time any mind solves some kind of actual problem, there's actually been a huge causal lattice of optimizations applied - for example, the human brain evolved, and then humans developed the idea of science, and then applied the idea of science to generate knowledge about gravity, and then you use this knowledge of gravity to finally design a damn bridge or something.


So I shall stratify this causality into levels - the boundaries being semi-arbitrary, but you've got to draw them somewhere:


I am arguing that an AI's developmental velocity will not be smooth; the following are some classes of phenomena that might lead to non-smoothness.  First, a couple of points that weren't raised earlier:


And these other factors previously covered:


and finally,


Suppose I go to an AI programmer and say, "Please write me a program that plays chess."  The programmer will tackle this using their existing knowledge and insight in the domain of chess and search trees; they will apply any metaknowledge they have about how to solve programming problems or AI problems; they will process this knowledge using the deep algorithms of their neural circuitry; and this neural circuitry will have been designed (or rather its wiring algorithm designed) by natural selection.

\n\n

If you go to a sufficiently sophisticated AI - more sophisticated than any that currently exists - and say, "write me a chess-playing program", the same thing might happen:  The AI would use its knowledge, metaknowledge, and existing cognitive algorithms.  Only the AI's metacognitive level would be, not natural selection, but the object level of the programmer who wrote the AI, using their knowledge and insight etc.

\n\n

Now suppose that instead you hand the AI the problem, "Write a better algorithm than X for storing, associating to, and retrieving memories".  At first glance this may appear to be just another object-level problem that the AI solves using its current knowledge, metaknowledge, and cognitive algorithms.  And indeed, in one sense it should be just another object-level problem.  But it so happens that the AI itself uses algorithm X to store associative memories, so if the AI can improve on this algorithm, it can rewrite its code to use the new algorithm X+1.

\n\n

This means that the AI's metacognitive level - the optimization process responsible for structuring the AI's cognitive algorithms in the first place - has now collapsed to identity with the AI's object level.
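In code, the collapse might look something like the following Python sketch - purely illustrative, with the toy memory store, the two retrieval algorithms, and the adopt hook all invented for this example rather than drawn from any real AI architecture:

def algorithm_x(memory, query):
    # The system's current associative retrieval: a plain linear scan.
    return [m for m in memory if query in m]

def algorithm_x_plus_1(memory, query, _cache={}):
    # A better design the system might produce as object-level output:
    # build an inverted index once, then answer queries by lookup.
    # (_cache is a deliberate module-level memo, keyed by store identity.)
    key = id(memory)
    if key not in _cache:
        index = {}
        for m in memory:
            for token in m.split():
                index.setdefault(token, []).append(m)
        _cache[key] = index
    return _cache[key].get(query, [])

class System:
    def __init__(self):
        self.memory = ["red apple", "ripe banana", "red brick"]
        self.retrieve = algorithm_x    # the cognition the system runs on

    def adopt(self, better_algorithm):
        # Object-level output rewires the cognitive level: the routine
        # the system uses is also an object the system can replace.
        self.retrieve = better_algorithm

s = System()
print(s.retrieve(s.memory, "red"))     # via X: ['red apple', 'red brick']
s.adopt(algorithm_x_plus_1)
print(s.retrieve(s.memory, "red"))     # via X+1: same answer, new engine

The point of the sketch is only the last few lines: the function being called and the function being replaced are the same object.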

\n\n

For some odd reason, I run into a lot of people who vigorously deny that this phenomenon is at all novel; they say, "Oh, humanity is already self-improving, humanity is already going through a FOOM, humanity is already in a Singularity" etc. etc.

\n\n

Now to me, it seems clear that - at this point in the game, in advance of the observation - it is pragmatically worth drawing a distinction between inventing agriculture and using that to support more professionalized inventors, versus directly rewriting your own source code in RAM.  Before you can even argue about whether the two phenomena are likely to be similar in practice, you need to accept that they are, in fact, two different things to be argued about.

\n\n

And I do expect them to be very distinct in practice.  Inventing science is not rewriting your neural circuitry.  There is a tendency to completely overlook the power of brain algorithms, because they are invisible to introspection.  It took a long time historically for people to realize that there was such a thing as a cognitive algorithm that could underlie thinking.  And then, once you point out that cognitive algorithms exist, there is a tendency to tremendously underestimate them, because you don't know the specific details of how your hippocampus is storing memories well or poorly - you don't know how it could be improved, or what difference a slight degradation could make.  You can't draw detailed causal links between the wiring of your neural circuitry, and your performance on real-world problems.  All you can see is the knowledge and the metaknowledge, and that's where all your causal links go; that's all that's visibly important.

\n\n

To see the brain circuitry vary, you've got to look at a chimpanzee, basically.  Which is not something that most humans spend a lot of time doing, because chimpanzees can't play our games.

\n\n

You can also see the tremendous overlooked power of the brain circuitry by observing what happens when people set out to program what looks like "knowledge" into Good-Old-Fashioned AIs, semantic nets and such.  Roughly, nothing happens.  Well, research papers happen.  But no actual intelligence happens.  Without those opaque, overlooked, invisible brain algorithms, there is no real knowledge - only a tape recorder playing back human words.  If you have a small amount of fake knowledge, it doesn't do anything, and if you have a huge amount of fake knowledge programmed in at huge expense, it still doesn't do anything.

\n\n

So the cognitive level - in humans, the level of neural circuitry and neural algorithms - is a level of tremendous but invisible power.  The difficulty of penetrating this invisibility and creating a real cognitive level is what stops modern-day humans from creating AI.  (Not that an AI's cognitive level would be made of neurons or anything equivalent to neurons; it would just do cognitive labor on the same level of organization.  Planes don't flap their wings, but they have to produce lift somehow.)

\n\n

Recursion that can rewrite the cognitive level is worth distinguishing.

\n\n

But to some, having a term so narrow as to refer to an AI rewriting its own source code, and not to humans inventing farming, seems hardly open, hardly embracing, hardly communal; for we all know that to say two things are similar shows greater enlightenment than saying that they are different.  Or maybe it's as simple as identifying "recursive self-improvement" as a term with positive affective valence, so you figure out a way to apply that term to humanity, and then you get a nice dose of warm fuzzies.  Anyway.

\n\n

So what happens when you start rewriting cognitive algorithms?

\n\n

Well, we do have one well-known historical case of an optimization process writing cognitive algorithms to do further optimization; this is the case of natural selection, our alien god.

\n\n

Natural selection seems to have produced a pretty smooth trajectory of more sophisticated brains over the course of hundreds of millions of years.  That gives us our first data point, with these characteristics:

\n\n\n\n

So - if you're navigating the search space via the ridiculously stupid and inefficient method of looking at the neighbors of the current point, without insight - with constant optimization pressure - then...

\n\n

Well, I've heard it claimed that the evolution of biological brains has accelerated over time, and I've also heard that claim challenged.  If there's actually been an acceleration, I would tend to attribute that to the "adaptations open up the way for further adaptations" phenomenon - the more brain genes you have, the more chances for a mutation to produce a new brain gene.  (Or, more complexly: the more organismal error-correcting mechanisms the brain has, the more likely a mutation is to produce something useful rather than fatal.)  In the case of hominids in particular over the last few million years, we may also have been experiencing accelerated selection on brain proteins, per se - which I would attribute to sexual selection, or brain variance accounting for a greater proportion of total fitness variance.

\n\n

Anyway, what we definitely do not see under these conditions is logarithmic or decelerating progress.  It did not take ten times as long to go from H. erectus to H. sapiens as from H. habilis to H. erectus. Hominid evolution did not take eight hundred million years of additional time, after evolution immediately produced Australopithecus-level brains in just a few million years after the invention of neurons themselves.

\n\n

And another, similar observation: human intelligence does not require a hundred times as much computing power as chimpanzee intelligence.  Human brains are merely three times too large, and our prefrontal cortices six times too large, for a primate with our body size.

\n\n

Or again:  It does not seem to require 1000 times as many genes to\nbuild a human brain as to build a chimpanzee brain, even though human\nbrains can build toys that are a thousand times as neat.

\n\n

Why is this important?  Because it shows that with constant optimization pressure from natural selection and no intelligent insight, there were no diminishing returns to a search for better brain designs up to at least the human level.  There were probably accelerating returns (with a low acceleration factor).  There are no visible speedbumps, so far as I know.

\n\n

But all this is to say only of natural selection, which is not recursive.

\n\n

If you have an investment whose output is not coupled to its input - say, you have a bond, and the bond pays you a certain amount of interest every year, and you spend the interest every year - then this will tend to return you a linear amount of money over time.  After one year, you've received $10; after 2 years, $20; after 3 years, $30.

\n\n

Now suppose you change the qualitative physics of the investment, by coupling the output pipe to the input pipe.  Whenever you get an interest payment, you invest it in more bonds.  Now your returns over time will follow the curve of compound interest, which is exponential.  (Please note:  Not all accelerating processes are smoothly exponential.  But this one happens to be.)

\n\n

The first process grows at a rate that is linear over time; the second process grows at a rate that is linear in its cumulative return so far.

\n\n

The too-obvious mathematical idiom to describe the impact of recursion is replacing an equation

y = f(t)

with

dy/dt = f(y)

For example, in the case above, reinvesting our returns transformed the linearly growing

y = m*t

into

y' = m*y

whose solution is the exponentially growing

y = e^(m*t)
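A quick numerical check of the two idioms, as a Python sketch (the rate m, the step size, and the horizon are all illustrative):

# Euler-integrate both processes: interest spent vs. interest reinvested.
m, dt, T = 0.10, 0.01, 30.0

linear = 0.0      # dy/dt = m      - output decoupled from input
compound = 1.0    # dy/dt = m * y  - output fed back into the input
t = 0.0
while t < T:
    linear += m * dt
    compound += m * compound * dt
    t += dt

print(linear)     # ~3.0, i.e. m*T: linear growth
print(compound)   # ~20, i.e. e^(m*T): exponential growth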

Now... I do not think you can really solve equations like this to get anything like a description of a self-improving AI.

\n\n

But it's the obvious reason why I don't expect the future to be a continuation of past trends.  The future contains a feedback loop that the past does not.

\n\n

As a different Eliezer Yudkowsky wrote, very long ago:

\n\n

"If computing power doubles every eighteen months, what happens when computers are doing the research?"

\n\n

And this sounds horrifyingly naive to my present ears, because that's not really how it works at all - but still, it illustrates the idea of "the future contains a feedback loop that the past does not".

\n\n

History up until this point was a long story about natural selection producing humans, and then, after humans hit a certain threshold, humans starting to rapidly produce knowledge and metaknowledge that could - among other things - feed more humans and support more of them in lives of professional specialization.

\n\n

To a first approximation, natural selection held still during human cultural development.  Even if Gregory Clark's crazy ideas are crazy enough to be true - i.e., some human populations evolved lower discount rates and more industrious work habits over the course of just a few hundred years from 1200 to 1800 - that's just tweaking a few relatively small parameters; it is not the same as developing new complex adaptations with lots of interdependent parts.  It's not a chimp-human type gap.

\n\n

So then, with human cognition remaining more or less constant, we found that knowledge feeds off knowledge with k > 1 - given a background of roughly constant cognitive algorithms at the human level.  We discovered major chunks of metaknowledge, like Science and the notion of Professional Specialization, that changed the exponents of our progress; having lots more humans around, due to e.g. the object-level innovation of farming, may have also played a role.  Progress in any one area tended to be choppy, with large insights leaping forward, followed by a lot of slow incremental development.

\n\n

With history to date, we've got a series of integrals looking something like this:

Metacognitive = natural selection, optimization efficiency/resources roughly constant

\n\n

Cognitive = Human intelligence = integral of evolutionary optimization velocity over a few hundred million years, then roughly constant over the last ten thousand years

\n\n

Metaknowledge = Professional Specialization, Science, etc. = integral over cognition we did about procedures to follow in thinking, where metaknowledge can also feed on itself, there were major insights and cascades, etc.

\n\n

Knowledge = all that actual science, engineering, and general knowledge accumulation we did = integral of cognition+metaknowledge(current knowledge) over time, where knowledge feeds upon itself in what seems to be a roughly exponential process

\n\n

Object level = stuff we actually went out and did = integral of cognition+metaknowledge+knowledge(current solutions); over a short timescale this tends to be smoothly exponential to the degree that the people involved understand the idea of investments competing on the basis of interest rate, but over medium-range timescales the exponent varies, and on a long range the exponent seems to increase

If you were to summarize that in one breath, it would be, "with constant natural selection pushing on brains, progress was linear or mildly accelerating; with constant brains pushing on metaknowledge and knowledge and object-level progress feeding back to metaknowledge and optimization resources, progress was exponential or mildly superexponential".
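In toy numerical form, that one-breath summary might look like this - a cartoon with arbitrary units, calibrated to nothing:

dt, t = 0.01, 0.0
cognition, knowledge = 0.0, 0.01
while t < 10.0:
    cognition += 1.0 * dt                     # constant evolutionary pressure
    knowledge += cognition * knowledge * dt   # knowledge feeds on itself
    t += dt
print(cognition)   # grows linearly with t
print(knowledge)   # grows like exp(t^2 / 2) - mildly superexponential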

\n\n

Now fold back the object level so that it becomes the metacognitive level.

\n\n

And note that we're doing this through a chain of differential equations, not just one; it's the final output at the object level, after all those integrals, that becomes the velocity of metacognition.

\n\n

You should get...

\n\n

...very fast progress?  Well, no, not necessarily.  You can also get nearly zero progress.

\n\n

If you're a recursified optimizing compiler, you rewrite yourself just once, get a single boost in speed (like 50% or something), and then never improve yourself any further, ever again.

\n\n

If you're EURISKO, you manage to modify some of your metaheuristics, and the metaheuristics work noticeably better, and they even manage to make a few further modifications to themselves, but then the whole process runs out of steam and flatlines.

\n\n

It was human intelligence that produced these artifacts to begin with.  Their own optimization power is far short of human - so incredibly weak that, after they push themselves along a little, they can't push any further.  Worse, their optimization at any given level is characterized by a limited number of opportunities, which once used up are gone - extremely sharp diminishing returns.

\n\n

When you fold a complicated, choppy, cascade-y chain of differential equations in on itself via recursion, it should either flatline or blow up.  You would need exactly the right law of diminishing returns to fly through the extremely narrow soft takeoff keyhole.
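One crude way to see the keyhole numerically is to let a single exponent a stand in for the whole law of diminishing returns - a cartoon, since nobody knows the real curve, which is rather the point:

# dy/dt = y**a: optimization output folded back in as optimization power.
def trajectory(a, dt=0.001, t_max=10.0, cap=1e9):
    y, t = 1.0, 0.0
    while t < t_max and y < cap:
        y += (y ** a) * dt
        t += dt
    return t, y

print(trajectory(0.5))   # sharp diminishing returns: a polynomial crawl
print(trajectory(1.0))   # the knife-edge: smooth exponential growth
print(trajectory(1.5))   # mild accelerating returns: finite-time blowup

Only a narrow band of exponents yields progress at anything like a human-comprehensible rate; that band is the keyhole.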

\n\n

The observed history of optimization to date makes this even more unlikely.  I don't see any reasonable way that you can have constant evolution produce human intelligence on the observed historical trajectory (linear or accelerating), and constant human intelligence produce science and technology on the observed historical trajectory (exponential or superexponential), and fold that in on itself, and get out something whose rate of progress is in any sense anthropomorphic.  From our perspective it should either flatline or FOOM.
\n\n\n\n

\n\n

When you first build an AI, it's a baby - if it had to improve itself, it would almost immediately flatline.  So you push it along using your own cognition, metaknowledge, and knowledge - not getting any benefit of recursion in doing so, just the usual human idiom of knowledge feeding upon itself and insights cascading into insights.  Eventually the AI becomes sophisticated enough to start improving itself, not just small improvements, but improvements large enough to cascade into other improvements.  (Though right now, due to lack of human insight, what happens when modern researchers push on their AGI design is mainly nothing.)  And then you get what I. J. Good called an "intelligence explosion".

\n\n

I even want to say that the functions and curves being such as to allow hitting the soft takeoff keyhole, is ruled out by observed history to date.  But there are small conceivable loopholes, like "maybe all the curves change drastically and completely as soon as we get past the part we know about in order to give us exactly the right anthropomorphic final outcome", or "maybe the trajectory for insightful optimization of intelligence has a law of diminishing returns where blind evolution gets accelerating returns".

\n\n

There are other factors contributing to hard takeoff, like the existence of hardware overhang in the form of the poorly defended Internet and fast serial computers.  There's more than one possible species of AI we could see, given this whole analysis.  I haven't yet touched on the issue of localization (though the basic issue is obvious: the initial recursive cascade of an intelligence explosion can't race through human brains because human brains are not modifiable until the AI is already superintelligent).

\n\n

But today's post is already too long, so I'd best continue tomorrow.

\n\n

Post scriptum:  It occurred to me just after writing this that I'd been victim of a cached Kurzweil thought in speaking of the knowledge level as "exponential".  Object-level resources are exponential in human history because of physical cycles of reinvestment.  If you try defining knowledge as productivity per worker, I expect that's exponential too (or productivity growth would be unnoticeable by now as a component in economic progress).  I wouldn't be surprised to find that published journal articles are growing exponentially.  But I'm not quite sure that it makes sense to say humanity has learned as much since 1938 as in all earlier human history... though I'm quite willing to believe we produced more goods... then again we surely learned more since 1500 than in all the time before.  Anyway, human knowledge being "exponential" is a more complicated issue than I made it out to be.  But human object level is more clearly exponential or superexponential.

" } }, { "_id": "yyiyz34p6QxWBmZ9k", "title": "Disappointment in the Future", "pageUrl": "https://www.lesswrong.com/posts/yyiyz34p6QxWBmZ9k/disappointment-in-the-future", "postedAt": "2008-12-01T04:45:00.000Z", "baseScore": 16, "voteCount": 12, "commentCount": 27, "url": null, "contents": { "documentId": "yyiyz34p6QxWBmZ9k", "html": "

This seems worth posting around now...  As I've previously observed, futuristic visions are produced as entertainment, sold today and consumed today.  A TV station interviewing an economic or diplomatic pundit doesn't bother to show what that pundit predicted three years ago and how the predictions turned out.  Why would they?  Futurism Isn't About Prediction.

\n\n

But someone on the ImmInst forum actually went and compiled a list of Ray Kurzweil's predictions in 1999 for the years 2000-2009.  We're not out of 2009 yet, but right now it's not looking good...

· Individuals primarily use portable computers
· Portable computers have dramatically become lighter and thinner
· Personal computers are available in a wide range of sizes and shapes, and are commonly embedded in clothing and jewelry, like wrist watches, rings, earrings and other body ornaments
· Computers with a high-resolution visual interface range from rings and pins and credit cards up to the size of a thin book. People typically have at least a dozen computers on and around their bodies, which are networked using body LANs (local area networks)
· These computers monitor body functions, provide automated identity to conduct financial transactions and allow entry into secure areas. They also provide directions for navigation, and a variety of other services.
· Most portable computers do not have keyboards

· Rotating memories such as hard drives, CD-ROMs, and DVDs are on their way out.
· Most users have servers in their homes and offices where they keep large stores of digital objects, including, among other things, virtual reality environments, although these are still at an early stage
· Cables are disappearing.
· The majority of text is created using continuous speech recognition, or CSR (dictation software). CSRs are very accurate, far more so than the human transcriptionists who were used up until a few years ago.
· Books, magazines, and newspapers are now routinely read on displays that are the size of small books
· Computer displays built into eyeglasses are also used. These specialized glasses allow the users to see the normal environment while creating a virtual image that appears to hover in front of the viewer.
· Computers routinely include moving picture image cameras and are able to reliably identify their owners from their faces
· Three dimensional chips are commonly used
· Students of all ages have a portable computer, very thin and soft, weighing less than 1 pound. They interact with their computers primarily by voice and by pointing with a device that looks like a pencil. Keyboards still exist, but most textual language is created by speaking.
· Intelligent courseware has emerged as a common means of learning; recent controversial studies have shown that students can learn basic skills such as reading and math just as readily with interactive learning software as with human teachers.
· Schools are increasingly relying on software approaches. Many children learn to read on their own using personal computers before entering grade school.
· Persons with disabilities are rapidly overcoming their handicaps through intelligent technology
· Students with reading disabilities routinely use print to speech reading systems
· Print to speech reading machines for the blind are now very small, inexpensive, palm-size devices that can read books.
· Useful navigation systems have finally been developed to assist blind people in moving and avoiding obstacles. Those systems use GPS technology. The blind person communicates with his navigation system by voice.
· Deaf persons commonly use portable speech-to-text listening machines which display a real time transcription of what people are saying. The deaf user has the choice of either reading the transcribed speech as displayed text or watching an animated person gesturing in sign language.
· Listening machines can also translate what is being said into another language in real time, so they are commonly used by hearing people as well.
· There is a growing perception that the primary disabilities of blindness, deafness, and physical impairment do not necessarily constitute handicaps. Disabled persons routinely describe their disabilities as mere inconveniences.
· In communications, translating telephone technology is commonly used. This allows you to speak in English, while your Japanese friend hears you in Japanese, and vice versa.
· Telephones are primarily wireless and include high resolution moving images.
· Haptic technologies are emerging. They allow people to touch and feel objects and other persons at a distance. These force-feedback devices are widely used in games and in training simulation systems. Interactive games routinely include all-encompassing visual and auditory environments.
· The 1999 chat rooms have been replaced with virtual environments.
· At least half of all transactions are conducted online
· Intelligent routes are in use, primarily for long distance travel. Once your car's computer's guiding system locks on to the control sensors on one of these highways, you can sit back and relax.
· There is a growing neo-Luddite movement.

Now, just to be clear, I don't want you to look at all that and think, "Gee, the future goes more slowly than expected - technological progress must be naturally slow."

\n\n

More like, "Where are you pulling all these burdensome details from, anyway?"

\n\n

If you looked at all that and said, "Ha ha, how wrong; now I have my own amazing prediction for what the future will be like, it won't be like that," then you're really missing the whole "you have to work a whole lot harder to produce veridical beliefs about the future, and often the info you want is simply not obtainable" business.

" } }, { "_id": "rSTpxugJxFPoRMkGW", "title": "Singletons Rule OK", "pageUrl": "https://www.lesswrong.com/posts/rSTpxugJxFPoRMkGW/singletons-rule-ok", "postedAt": "2008-11-30T16:45:58.000Z", "baseScore": 23, "voteCount": 21, "commentCount": 47, "url": null, "contents": { "documentId": "rSTpxugJxFPoRMkGW", "html": "

Reply to: Total Tech Wars

\n\n

How does one end up with a persistent disagreement between two rationalist-wannabes who are both aware of Aumann's Agreement Theorem and its implications?

\n\n

Such a case is likely to turn around two axes: object-level incredulity ("no matter what AAT says, proposition X can't really be true") and meta-level distrust ("they're trying to be rational despite their emotional commitment, but are they really capable of that?").

\n\n

So far, Robin and I have focused on the object level in trying to hash out our disagreement.  Technically, I can't speak for Robin; but at least in my own case, I've acted thus because I anticipate that a meta-level argument about trustworthiness wouldn't lead anywhere interesting.  Behind the scenes, I'm doing what I can to make sure my brain is actually capable of updating, and presumably Robin is doing the same.

\n\n

(The linchpin of my own current effort in this area is to tell myself that I ought to be learning something while having this conversation, and that I shouldn't miss any scrap of original thought in it - the Incremental Update technique.  Because I can genuinely believe that a conversation like this should produce new thoughts, I can turn that feeling into genuine attentiveness.)

\n\n

Yesterday, Robin inveighed hard against what he called "total tech wars", and what I call "winner-take-all" scenarios:

Robin:  "If you believe the other side is totally committed to total victory, that surrender is unacceptable, and that all interactions are zero-sum, you may conclude your side must never cooperate with them, nor tolerate much internal dissent or luxury."

Robin and I both have emotional commitments and we both acknowledge the danger of that.  There's nothing irrational about feeling, per se; only failure to update is blameworthy.  But Robin seems to be very strongly against winner-take-all technological scenarios, and I don't understand why.

\n\n

Among other things, I would like to ask if Robin has a Line of Retreat set up here - if, regardless of how he estimates the probabilities, he can visualize what he would do if a winner-take-all scenario were true.

Yesterday Robin wrote:

"Eliezer, if everything is at stake\nthen 'winner take all' is 'total war'; it doesn't really matter if they\nshoot you or just starve you to death."

We both have our emotional commitments, but I don't quite understand this reaction.

\n\n

First, to me it's obvious that a "winner-take-all" technology should be defined as one in which, ceteris paribus, a local entity tends to end up with the option of becoming one kind of Bostromian singleton - the decisionmaker of a global order in which there is a single decision-making entity at the highest level.  (A superintelligence with unshared nanotech would count as a singleton; a federated world government with its own military would be a different kind of singleton; or you can imagine something like a galactic operating system with a root account controllable by 80% majority vote of the populace, etcetera.)

\n\n

The winner-take-all option is created by properties of the technology landscape, which is not a moral stance.  Nothing is said about an agent with that option, actually becoming a singleton.  Nor about using that power to shoot people, or reuse their atoms for something else, or grab all resources and let them starve (though "all resources" should include their atoms anyway).

\n\n

Nothing is yet said about various patches that could try to avert a technological scenario that contains upward cliffs of progress - e.g. binding agreements enforced by source code examination or continuous monitoring, in advance of the event.  (Or if you think that rational agents cooperate on the Prisoner's Dilemma, so much work might not be required to coordinate.)

\n\n

Superintelligent agents not in a humanish moral reference frame - AIs that are just maximizing paperclips or sorting pebbles - who happen on the option of becoming a Bostromian Singleton, and who have not previously executed any somehow-binding treaty; will ceteris paribus choose to grab all resources in service of their utility function, including the atoms now composing humanity.  I don't see how you could reasonably deny this!  It's a straightforward decision-theoretic choice between payoff 10 and payoff 1000!

\n\n

But conversely, there are possible agents in mind design space who, given the option of becoming a singleton, will not kill you, starve you, reprogram you, tell you how to live your life, or even meddle in your destiny unseen.  See Bostrom's (short) paper on the possibility of good and bad singletons of various types.

\n\n

If Robin thinks it's impossible to have a Friendly AI or maybe even any sort of benevolent superintelligence at all, even the descendants of human uploads - if Robin is assuming that superintelligent agents will act according to roughly selfish motives, and that only economies of trade are necessary and sufficient to prevent holocaust - then Robin may have no Line of Retreat open, as I try to argue that AI has an upward cliff built in.

\n\n

And in this case, it might be time well spent, to first address the question of whether Friendly AI is a reasonable thing to try to accomplish, so as to create that line of retreat.  Robin and I are both trying hard to be rational despite emotional commitments; but there's no particular reason to needlessly place oneself in the position of trying to persuade, or trying to accept, that everything of value in the universe is certainly doomed.

\n\n

For me, it's particularly hard to understand Robin's position in this, because for me the non-singleton future is the one that is obviously abhorrent.

\n\n

If you have lots of entities with root permissions on matter, any of whom has the physical capability to attack any other, then you have entities spending huge amounts of precious negentropy on defense and deterrence.  If there's no centralized system of property rights in place for selling off the universe to the highest bidder, then you have a race to burn the cosmic commons, and the degeneration of the vast majority of all agents into rapacious hardscrapple frontier replicators.

\n\n

To me this is a vision of futility - one in which a future light cone that could have been full of happy, safe agents having complex fun, is mostly wasted by agents trying to seize resources and defend them so they can send out seeds to seize more resources.

\n\n

And it should also be mentioned that any future in which slavery or child abuse is successfully prohibited, is a world that has some way of preventing agents from doing certain things with their computing power.  There are vastly worse possibilities than slavery or child abuse opened up by future technologies, which I flinch from referring to even as much as I did in the previous sentence.  There are things I don't want to happen to anyone - including a population of a septillion captive minds running on a star-powered Matrioshka Brain that is owned, and defended against all rescuers, by the mind-descendant of Lawrence Bittaker (serial killer, aka "Pliers").  I want to win against the horrors that exist in this world and the horrors that could exist in tomorrow's world - to have them never happen ever again, or, for the really awful stuff, never happen in the first place.  And that victory requires the Future to have certain global properties.

\n\n

But there are other ways to get singletons besides falling up a technological cliff.  So that would be my Line of Retreat:  If minds can't self-improve quickly enough to take over, then try for the path of uploads setting up a centralized Constitutional operating system with a root account controlled by majority vote, or something like that, to prevent their descendants from having to burn the cosmic commons.

\n\n

So for me, any satisfactory outcome seems to necessarily involve, if not a singleton, the existence of certain stable global properties upon the future - sufficient to prevent burning the cosmic commons, prevent life's degeneration into rapacious hardscrapple frontier replication, and prevent supersadists torturing septillions of helpless dolls in private, obscure star systems.

\n\n

Robin has written about burning the cosmic commons and rapacious hardscrapple frontier existences.  This doesn't imply that Robin approves of these outcomes.  But Robin's strong rejection even of winner-take-all language and concepts, seems to suggest that our emotional commitments are something like 180 degrees opposed.  Robin seems to feel the same way about singletons as I feel about ¬singletons.

\n\n

But why?  I don't think our real values are that strongly opposed - though we may have verbally-described and attention-prioritized those values in different ways.

" } }, { "_id": "NyFtHycJvkyNjXNsP", "title": "Chaotic Inversion", "pageUrl": "https://www.lesswrong.com/posts/NyFtHycJvkyNjXNsP/chaotic-inversion", "postedAt": "2008-11-29T10:57:24.000Z", "baseScore": 108, "voteCount": 95, "commentCount": 69, "url": null, "contents": { "documentId": "NyFtHycJvkyNjXNsP", "html": "

I was recently having a conversation with some friends on the topic of hour-by-hour productivity and willpower maintenance—something I've struggled with my whole life.

\n

I can avoid running away from a hard problem the first time I see it (perseverance on a timescale of seconds), and I can stick to the same problem for years; but to keep working on a timescale of hours is a constant battle for me.  It goes without saying that I've already read reams and reams of advice; and the most help I got from it was realizing that a sizable fraction of other creative professionals had the same problem, and couldn't beat it either, no matter how reasonable all the advice sounds.

\n

\"What do you do when you can't work?\" my friends asked me.  (Conversation probably not accurate, this is a very loose gist.)

\n

And I replied that I usually browse random websites, or watch a short video.

\n

\"Well,\" they said, \"if you know you can't work for a while, you should watch a movie or something.\"

\n

\"Unfortunately,\" I replied, \"I have to do something whose time comes in short units, like browsing the Web or watching short videos, because I might become able to work again at any time, and I can't predict when—\"

\n

And then I stopped, because I'd just had a revelation.

\n

\n

I'd always thought of my workcycle as something chaotic, something unpredictable.  I never used those words, but that was the way I treated it.

\n

But here my friends seemed to be implying—what a strange thought—that other people could predict when they would become able to work again, and structure their time accordingly.

\n

And it occurred to me for the first time that I might have been committing that damned old chestnut the Mind Projection Fallacy, right out there in my ordinary everyday life instead of high abstraction.

\n

Maybe it wasn't that my productivity was unusually chaotic; maybe I was just unusually stupid with respect to predicting it.

\n

That's what inverted stupidity looks like—chaos.  Something hard to handle, hard to grasp, hard to guess, something you can't do anything with.  It's not just an idiom for high abstract things like Artificial Intelligence.  It can apply in ordinary life too.

\n

And the reason we don't think of the alternative explanation \"I'm stupid\", is not—I suspect—that we think so highly of ourselves.  It's just that we don't think of ourselves at all.  We just see a chaotic feature of the environment.

\n

So now it's occurred to me that my productivity problem may not be chaos, but my own stupidity.

\n

And that may or may not help anything.  It certainly doesn't fix the problem right away.  Saying \"I'm ignorant\" doesn't make you knowledgeable.

\n

But it is, at least, a different path than saying \"it's too chaotic\".

" } }, { "_id": "QB6BkkpwiecfF6Ekq", "title": "Thanksgiving Prayer", "pageUrl": "https://www.lesswrong.com/posts/QB6BkkpwiecfF6Ekq/thanksgiving-prayer", "postedAt": "2008-11-28T04:17:00.000Z", "baseScore": 40, "voteCount": 37, "commentCount": 56, "url": null, "contents": { "documentId": "QB6BkkpwiecfF6Ekq", "html": "

At tonight's Thanksgiving, Erin remarked on how this was her first real Thanksgiving dinner away from her family, and that it was an odd feeling to just sit down and eat without any prayer beforehand.  (Yes, she's a solid atheist in no danger whatsoever, thank you for asking.)

\n\n

And as she said this, it reminded me of how wrong it is to give gratitude to God for blessings that actually come from our fellow human beings putting in a great deal of work.

\n\n

So I at once put my hands together and said,

\n\n

"Dear Global Economy, we thank thee for thy economies of scale, thy professional specialization, and thy international networks of trade under Ricardo's Law of Comparative Advantage, without which we would all starve to death while trying to assemble the ingredients for such a dinner as this.  Amen."

" } }, { "_id": "5hX44Kuz5No6E6RS9", "title": "Total Nano Domination", "pageUrl": "https://www.lesswrong.com/posts/5hX44Kuz5No6E6RS9/total-nano-domination", "postedAt": "2008-11-27T09:54:00.000Z", "baseScore": 21, "voteCount": 23, "commentCount": 24, "url": null, "contents": { "documentId": "5hX44Kuz5No6E6RS9", "html": "

Followup to: Engelbart: Insufficiently Recursive

\n\n

The computer revolution had cascades and insights aplenty.  Computer tools are routinely used to create tools, from using a C compiler to write a Python interpreter, to using theorem-proving software to help design computer chips.  I would not yet rate computers as being very deeply recursive - I don't think they've improved our own thinking processes even so much as the Scientific Revolution - yet.  But some of the ways that computers are used to improve computers, verge on being repeatable (cyclic).

\n\n

Yet no individual, no localized group, nor even country, managed to get a sustained advantage in computing power, compound the interest on cascades, and take over the world.  There was never a Manhattan moment when a computing advantage temporarily gave one country a supreme military advantage, like the US and its atomic bombs for that brief instant at the end of WW2.  In computing there was no equivalent of "We've just crossed the sharp threshold of criticality, and now our pile doubles its neutron output every two minutes, so we can produce lots of plutonium and you can't."

\n\n

Will the development of nanotechnology go the same way as computers - a smooth, steady developmental curve spread across many countries, no one project taking into itself a substantial fraction of the world's whole progress?  Will it be more like the Manhattan Project, one country gaining a (temporary?) huge advantage at huge cost?  Or could a small group with an initial advantage cascade and outrun the world?

Just to make it clear why we might worry about this for nanotech, rather than say car manufacturing - if you can build things from atoms, then the environment contains an unlimited supply of perfectly machined spare parts.  If your molecular factory can build solar cells, it can acquire energy as well.

\n\n

So full-fledged Drexlerian molecular nanotechnology can plausibly automate away much of the manufacturing in its material supply chain.  If you already have nanotech, you may not need to consult the outside economy for inputs of energy or raw material.

\n\n

This makes it more plausible that a nanotech group could localize off, and do its own compound interest, away from the global economy.  If you're Douglas Engelbart building better software, you still need to consult Intel for the hardware that runs your software, and the electric company for the electricity that powers your hardware.  It would be a considerable expense to build your own fab lab for your chips (that makes chips as good as Intel) and your own power station for electricity (that supplies electricity as cheaply as the utility company).

\n\n

It's not just that this tends to entangle you with the fortunes of your trade partners, but also that - as an UberTool Corp keeping your trade secrets in-house - you can't improve the hardware you get, or drive down the cost of electricity, as long as these things are done outside.  Your cascades can only go through what you do locally, so the more you do locally, the more likely you are to get a compound interest advantage.  (Mind you, I don't think Engelbart could have gone FOOM even if he'd made his chips locally and supplied himself with electrical power - I just don't think the compound advantage on using computers to make computers is powerful enough to sustain k > 1.)

\n\n

In general, the more capabilities are localized into one place, the less people will depend on their trade partners, the more they can cascade locally (apply their improvements to yield further improvements), and the more a "critical cascade" / FOOM sounds plausible.
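In miniature, the k criterion looks like this - a cartoon Python sketch in which each unit of improvement triggers, on average, k further units (the k values are illustrative):

def cascade_total(k, rounds=200, cap=1e9):
    # Sum the geometric cascade 1 + k + k^2 + ...
    total, wave = 0.0, 1.0
    for _ in range(rounds):
        total += wave
        wave *= k
        if total > cap:
            return float("inf")   # the cascade has run away
    return total                  # k < 1: converges to 1 / (1 - k)

print(cascade_total(0.5))   # ~2.0: fizzles after a doubling's worth of gain
print(cascade_total(0.9))   # ~10.0: a big boost, but it still flatlines
print(cascade_total(1.1))   # inf: each improvement more than replaces itself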

\n\n

Yet self-replicating nanotech is a very advanced capability.  You don't get it right off the bat.  Sure, lots of biological stuff has this capability, but this is a misleading coincidence - it's not that self-replication is easy, but that evolution, for its own alien reasons, tends to build it into everything.  (Even individual cells, which is ridiculous.)

\n\n

In the run-up to nanotechnology, it seems not implausible to suppose a continuation of the modern world.  Today, many different labs work on small pieces of nanotechnology - fortunes entangled with their trade partners, and much of their research velocity coming from advances in other laboratories.  Current nanotech labs are dependent on the outside world for computers, equipment, science, electricity, and food; any single lab works on a small fraction of the puzzle, and contributes small fractions of the progress.

\n\n

In short, so far nanotech is going just the same way as computing.

\n\n

But it is a tad premature - I would even say that it crosses the line into the "silly" species of futurism - to exhale a sigh of relief and say, "Ah, that settles it - no need to consider any further."

\n\n

We all know how exponential multiplication works:  1 microscopic nanofactory, 2 microscopic nanofactories, 4 microscopic nanofactories... let's say there are 100 different groups working on self-replicating nanotechnology and one of those groups succeeds one week earlier than the others.  Rob Freitas has calculated that some species of replibots could spread through the Earth in 2 days (even given what seem to me like highly conservative assumptions in a context where conservatism is not appropriate).
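For a sense of scale, here is the back-of-the-envelope version - my own illustrative numbers, not Freitas's actual assumptions:

import math

start_kg = 1e-9         # a one-microgram seed of replibots
target_kg = 1e15        # rough order of Earth's biomass
doublings = math.log2(target_kg / start_kg)
print(doublings)        # ~80 doublings needed

doubling_time_min = 30  # assumed replication time per generation
print(doublings * doubling_time_min / (60 * 24))   # ~1.7 days

Eighty doublings at half an hour each is under two days; and the conclusion is quite insensitive to the seed size, since each factor of a thousand costs only ten more doublings.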

\n\n

So, even if the race seems very tight, whichever group gets replibots first can take over the world given a mere week's lead time -

\n\n

Yet wait!  Just having replibots doesn't let you take over the world.  You need fusion weapons, or surveillance bacteria, or some other way to actually govern.  That's a lot of matterware - a lot of design and engineering work.  A replibot advantage doesn't equate to a weapons advantage, unless, somehow, the planetary economy has already published the open-source details of fully debugged weapons that you can build with your newfound private replibots.  Otherwise, a lead time of one week might not be anywhere near enough.

\n\n

Even more importantly - "self-replication" is not a binary, 0-or-1 attribute.  Things can be partially self-replicating.  You can have something that manufactures 25% of itself, 50% of itself, 90% of itself, or 99% of itself - but still needs one last expensive computer chip to complete the set.  So if you have twenty-five countries racing, sharing some of their results and withholding others, there isn't one morning where you wake up and find that one country has self-replication.

\n\n

Bots become successively easier to manufacture; the factories get successively cheaper.  By the time one country has bots that manufacture themselves from environmental materials, many other countries have bots that manufacture themselves from feedstock.  By the time one country has bots that manufacture themselves entirely from feedstock, other countries have produced some bots using assembly lines.  The nations also have all their old conventional arsenal, such as intercontinental missiles tipped with thermonuclear weapons, and these have deterrent effects against crude nanotechnology.  No one ever gets a discontinuous military advantage, and the world is safe.  (?)

\n\n

At this point, I do feel obliged to recall the notion of "burdensome details", that we're spinning a story out of many conjunctive details, any one of which could go wrong.  This is not an argument in favor of anything in particular, just a reminder not to be seduced by stories that are too specific.  When I contemplate the sheer raw power of nanotechnology, I don't feel confident that the fabric of society can even survive the sufficiently plausible prospect of its near-term arrival.  If your intelligence estimate says that Russia (the new belligerent Russia under Putin) is going to get self-replicating nanotechnology in a year, what does that do to Mutual Assured Destruction?  What if Russia makes a similar intelligence assessment of the US?  What happens to the capital markets?  I can't even foresee how our world will react to the prospect of various nanotechnological capabilities as they promise to be developed in the future's near future.  Let alone envision how society would actually change as full-fledged molecular nanotechnology was developed, even if it were developed gradually...

\n\n

...but I suppose the Victorians might say the same thing about nuclear weapons or computers, and yet we still have a global economy - one that's actually a lot more interdependent than theirs, thanks to nuclear weapons making small wars less attractive, and computers helping to coordinate trade.

\n\n

I'm willing to believe in the possibility of a smooth, gradual ascent to nanotechnology, so that no one state - let alone any corporation or small group - ever gets a discontinuous advantage.

\n\n

The main reason I'm willing to believe this is because of the difficulties of design and engineering, even after all manufacturing is solved.  When I read Drexler's Nanosystems, I thought:  "Drexler uses properly conservative assumptions everywhere I can see, except in one place - debugging.  He assumes that any failed component fails visibly, immediately, and without side effects; this is not conservative."

\n\n

In principle, we have complete control of our computers - every bit and byte is under human command - and yet it still takes an immense amount of engineering work on top of that to make the bits do what we want.  This, and not any difficulties of manufacturing things once they are designed, is what takes an international supply chain of millions of programmers.

\n\n

But we're still not out of the woods.

\n\n

Suppose that, by a providentially incremental and distributed process, we arrive at a world of full-scale molecular nanotechnology - a world where designs, if not finished material goods, are traded among parties.  In a global economy large enough that no one actor, or even any one state, is doing more than a fraction of the total engineering.

\n\n

It would be a very different world, I expect; and it's possible that my essay may have already degenerated into nonsense.  But even if we still have a global economy after getting this far - then we're still not out of the woods.

\n\n

Remember those ems?  The emulated humans-on-a-chip?  The uploads?

\n\n

Suppose that, with molecular nanotechnology already in place, there's an international race for reliable uploading - with some results shared, and some results private - with many state and some nonstate actors.

\n\n

And suppose the race is so tight, that the first state to develop working researchers-on-a-chip, only has a one-day lead time over the other actors.

\n\n

That is - one day before anyone else, they develop uploads sufficiently undamaged, or capable of sufficient recovery, that the ems can carry out research and development.  In the domain of, say, uploading.

\n\n

There are other teams working on the problem, but their uploads are still a little off, suffering seizures and having memory faults and generally having their cognition degraded to the point of not being able to contribute.  (NOTE:  I think this whole future is a wrong turn and we should stay away from it; I am not endorsing this.)

\n\n

But this one team, though - their uploads still have a few problems, but they're at least sane enough and smart enough to start... fixing their problems themselves?

\n\n

If there's already full-scale nanotechnology around when this happens, then even with some inefficiency built in, the first uploads may be running at ten thousand times human speed.  Nanocomputers are powerful stuff.

\n\n

And in an hour, or around a year of internal time, the ems may be able to upgrade themselves to a hundred thousand times human speed, and fix some of the remaining problems.

\n\n

And in another hour, or ten years of internal time, the ems may be able to get the factor up to a million times human speed, and start working on intelligence enhancement...

\n\n

One could, of course, voluntarily publish the improved-upload protocols to the world, and give everyone else a chance to join in.  But you'd have to trust that not a single one of your partners were holding back a trick that lets them run uploads at ten times your own maximum speed (once the bugs were out of the process).  That kind of advantage could snowball quite a lot, in the first sidereal day.

\n\n

Now, if uploads are gradually developed at a time when computers are too slow to run them quickly - meaning, before molecular nanotech and nanofactories come along - then this whole scenario is averted; the first high-fidelity uploads, running at a hundredth of human speed, will grant no special advantage.  (Assuming that no one is pulling any spectacular snowballing tricks with intelligence enhancement - but they would have to snowball fast and hard, to confer advantage on a small group running at low speeds.  The same could be said of brain-computer interfaces, developed before or after nanotechnology, if running in a small group at merely human speeds.  I would credit their world takeover, but I suspect Robin Hanson wouldn't at this point.)

\n\n

Now, I don't really believe in any of this - this whole scenario, this whole world I'm depicting.  In real life, I'd expect someone to brute-force an unFriendly AI on one of those super-ultimate-nanocomputers, followed in short order by the end of the world.  But that's a separate issue.  And this whole world seems too much like our own, after too much technological change, to be realistic to me.  World government with an insuperable advantage?  Ubiquitous surveillance?  I don't like the ideas, but both of them would change the game dramatically...

\n\n

But the real point of this essay is to illustrate a point more important than nanotechnology: as optimizers become more self-swallowing, races between them are more unstable.

\n\n\n\n

If you sent a modern computer back in time to 1950 - containing many modern software tools in compiled form, but no future history or declaratively stored future science - I would guess that the recipient could not use it to take over the world.  Even if the USSR got it.  Our computing industry is a very powerful thing, but it relies on a supply chain of chip factories.

\n\n

If someone got a future nanofactory with a library of future nanotech applications - including designs for things like fusion power generators and surveillance bacteria - they might really be able to take over the world.  The nanofactory swallows its own supply chain; it incorporates replication within itself.  If the owner fails, it won't be for lack of factories.  It will be for lack of ability to develop new matterware fast enough, and apply existing matterware fast enough, to take over the world.

\n\n

I'm not saying that nanotech will appear from nowhere with a library of designs - just making a point about concentrated power and the instability it implies.

\n\n

Think of all the tech news that you hear about once - say, an article on Slashdot about yada yada 50% improved battery technology - and then you never hear about again, because it was too expensive or too difficult to manufacture.

\n\n

Now imagine a world where the news of a 50% improved battery technology comes down the wire, and the head of some country's defense agency is sitting down across from engineers and intelligence officers and saying, "We have five minutes before all of our rival's weapons are adapted to incorporate this new technology; how does that affect our balance of power?"  Imagine that happening as often as "amazing breakthrough" articles appear on Slashdot.

\n\n

I don't mean to doomsay - the Victorians would probably be pretty surprised we haven't blown up the world with our ten-minute ICBMs, but we don't live in their world - well, maybe doomsay just a little - but the point is:  It's less stable.  Improvements cascade faster once you've swallowed your manufacturing supply chain.

\n\n

And if you sent back in time a single nanofactory, and a single upload living inside it - then the world might end in five minutes or so, as we bios measure time.

\n\n\n\n

The point being, not that an upload will suddenly appear, but that now you've swallowed your supply chain and your R&D chain.

\n\n

And so this world is correspondingly more unstable, even if all the actors start out in roughly the same place.  Suppose a state manages to get one of those Slashdot-like technology improvements - only this one lets uploads think 50% faster - and they get it fifty minutes before anyone else, at a point where uploads are running ten thousand times as fast as human (50 mins = ~1 year) - and in that extra half-year, the uploads manage to find another couple of 50% improvements...
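Checking that arithmetic with the scenario's assumed numbers:

speedup = 10_000                     # assumed upload-to-human speed ratio
sidereal_minutes = 50
print(sidereal_minutes * speedup / (60 * 24 * 365))   # ~0.95 subjective years

So fifty sidereal minutes at that speedup is indeed about a subjective year.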

\n\n

Now, you can suppose that all the actors are all trading all of their advantages and holding nothing back, so everyone stays nicely synchronized.

\n\n

Or you can suppose that enough trading is going on, that most of the research any group benefits from comes from outside that group, and so a 50% advantage for a local group doesn't cascade much.

\n\n

But again, that's not the point.  The point is that in modern times, with the modern computing industry, where commercializing an advance requires building a new computer factory, a bright idea that has gotten as far as showing a 50% improvement in the laboratory, is merely one more article on Slashdot.

\n\n

If everything could instantly be rebuilt via nanotech, that laboratory demonstration could precipitate an instant international military crisis.

\n\n

And if there are uploads around, so that a cute little 50% advancement in a certain kind of hardware, recurses back to imply 50% greater speed at all future research - then this Slashdot article could become the key to world domination.

\n\n

As systems get more self-swallowing, they cascade harder; and even if all actors start out equivalent, races between them get much more unstable.  I'm not claiming it's impossible for that world to be stable.  The Victorians might have thought that about ICBMs.  But that subjunctive world contains additional instability compared to our own, and would need additional centripetal forces to end up as stable as our own.

\n\n

I expect Robin to disagree with some part of this essay, but I'm not sure which part or how.

" } }, { "_id": "NCb28Xdv7xDajtqtS", "title": "Engelbart: Insufficiently Recursive", "pageUrl": "https://www.lesswrong.com/posts/NCb28Xdv7xDajtqtS/engelbart-insufficiently-recursive", "postedAt": "2008-11-26T08:31:09.000Z", "baseScore": 22, "voteCount": 18, "commentCount": 22, "url": null, "contents": { "documentId": "NCb28Xdv7xDajtqtS", "html": "

Followup to: Cascades, Cycles, Insight, Recursion, Magic
Reply to: Engelbart As Ubertool?

\n

When Robin originally suggested that Douglas Engelbart, best known as the inventor of the computer mouse, would have been a good candidate for taking over the world via compound interest on tools that make tools, my initial reaction was \"What on Earth?  With a mouse?\"

\n

On reading the initial portions of Engelbart's \"Augmenting Human Intellect: A Conceptual Framework\", it became a lot clearer where Robin was coming from.

\n

Sometimes it's hard to see through the eyes of the past.  Engelbart was a computer pioneer, and in the days when all these things were just getting started, he had a vision of using computers to systematically augment human intelligence.  That was what he thought computers were for.  That was the ideology lurking behind the mouse.  Something that makes its users smarter - now that sounds a bit more plausible as an UberTool.


Looking back at Engelbart's plans with benefit of hindsight, I see two major factors that stand out:

  1. Engelbart committed the Classic Mistake of AI: underestimating how much cognitive work gets done by hidden algorithms running beneath the surface of introspection, and overestimating what you can do by fiddling with the visible control levers.
  2. Engelbart anchored on the way that someone as intelligent as Engelbart would use computers, but there was only one of him - and due to point 1 above, he couldn't use computers to make other people as smart as him.


To start with point 2:  They had more reverence for computers back in the old days.  Engelbart visualized a system carefully designed to flow with every step of a human's work and thought, assisting every iota it could manage along the way.  And the human would be trained to work with the computer, the two together dancing a seamless dance.


And the problem with this was not just that computers got cheaper and that programmers wrote their software more hurriedly.


There's a now-legendary story about the Windows Vista shutdown menu, a simple little feature into which 43 different Microsoft people had input.  The debate carried on for over a year.  The final product ended up as the lowest common denominator - a couple of hundred lines of code and a very visually unimpressive menu.


So even when lots of people spent a tremendous amount of time thinking about a single feature of the system - it still didn't end up very impressive.  Jef Raskin could have done better than that, I bet.  But Raskins and Engelbarts are rare.


You see the same effect in Eric Drexler's chapter on hypertext in Engines of Creation:  Drexler imagines the power of the Web to... use two-way links and user annotations to promote informed criticism.  (As opposed to the way we actually use it.)  And if the average Web user were Eric Drexler, the Web probably would work that way by now.


But no piece of software that has yet been developed, by mouse or by Web, can turn an average human user into Engelbart or Raskin or Drexler.  You would very probably have to reach into the brain and rewire neural circuitry directly; I don't think any sense input or motor interaction would accomplish such a thing.


Which brings us to point 1.


It does look like Engelbart was under the spell of the \"logical\" paradigm that prevailed in AI at the time he made his plans.  (Should he even lose points for that?  He went with the mainstream of that science.)  He did not see it as an impossible problem to have computers help humans think - he seems to have underestimated the difficulty in much the same way that the field of AI once severely underestimated the work it would take to make computers themselves solve cerebral-seeming problems.  (Though I am saying this, reading heavily between the lines of one single paper that he wrote.)  He talked about how the core of thought is symbols, and speculated on how computers could help people manipulate those symbols.


I have already said much on why people tend to underestimate the amount of serious heavy lifting that gets done by cognitive algorithms hidden inside black boxes that run out of your introspective vision, and to overestimate what you can do by duplicating the easily visible introspective control levers.  The word \"apple\", for example, is a visible lever; you can say it or not say it, its presence or absence is salient.  The algorithms of a visual cortex that let you visualize what an apple would look like upside-down - we all have these in common, and they are not introspectively accessible.  Human beings knew about apples a long, long time before they knew there was even such a thing as the visual cortex, let alone beginning to unravel the algorithms by which it operated.


Robin Hanson asked me:


\"You really think an office worker with modern computer tools is only 10% more productive than one with 1950-era non-computer tools?  Even at the task of creating better computer tools?\"


But remember the parable of the optimizing compiler run on its own source code - maybe it makes itself 50% faster, but only once; the changes don't increase its ability to make future changes.  So indeed, we should not be too impressed by a 50% increase in office worker productivity - not for purposes of asking about FOOMs.  We should ask whether that increase in productivity translates into tools that create further increases in productivity.


And this is where the problem of underestimating hidden labor starts to bite.  Engelbart rhapsodizes (accurately!) on the wonders of being able to cut and paste text while writing, and how superior this should be compared to the typewriter.  But suppose that Engelbart overestimates, by a factor of 10, how much of the intellectual labor of writing goes into fighting the typewriter.  Then because Engelbart can only help you cut and paste more easily, and cannot rewrite those hidden portions of your brain that labor to come up with good sentences and good arguments, the actual improvement he delivers is a tenth of what he thought it would be.  An anticipated 20% improvement becomes an actual 2% improvement.  k way less than 1.
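
A toy version of that correction, using the hypothetical numbers from the paragraph above (nothing here is Engelbart's actual data):

```python
# If only a small fraction of writing labor is really "fighting the typewriter",
# then even eliminating that labor entirely yields a small total improvement.
anticipated_fraction = 0.20   # guessed share of labor spent fighting the typewriter
overestimate_factor = 10      # suppose the guess is ten times too high
actual_fraction = anticipated_fraction / overestimate_factor

print(f"anticipated improvement: {anticipated_fraction:.0%}")  # 20%
print(f"actual improvement:      {actual_fraction:.0%}")       # 2%
```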


This will hit particularly hard if you think that computers, with some hard work on the user interface, and some careful training of the humans, ought to be able to help humans with the type of \"creative insight\" or \"scientific labor\" that goes into inventing new things to do with the computer.  If you thought that the surface symbols were where most of the intelligence resided, you would anticipate that computer improvements would hit back hard to this meta-level, and create people who were more scientifically creative and who could design even better computer systems.


But if, in reality, you can only help people type up their ideas, while all the hard creative labor happens in the shower thanks to very-poorly-understood cortical algorithms - then you are much less like neutrons cascading through uranium, and much more like an optimizing compiler that gets a single speed boost and no more.  It looks like the person is 20% more productive, but in the aspect of intelligence that potentially cascades to further improvements they're only 2% more productive, if that.


(Incidentally... I once met a science-fiction author of a previous generation, and mentioned to him that the part of my writing I most struggled with, was my tendency to revise and revise and revise things I had already written, instead of writing new things.  And he said, \"Yes, that's why I went back to the typewriter.  The word processor made it too easy to revise things; I would do too much polishing, and writing stopped being fun for me.\"  It made me wonder if there'd be demand for an author's word processor that wouldn't let you revise anything until you finished your first draft.


But this could be chalked up to the humans not being trained as carefully, nor the software designed as carefully, as in the process Engelbart envisioned.)


Engelbart wasn't trying to take over the world in person, or with a small group.  Yet had he tried to go the UberTool route, we can reasonably expect he would have failed - that is, failed at advancing far beyond the outside world in internal computer technology, while selling only UberTool's services to outsiders.


Why?  Because it takes too much human labor to develop computer software and computer hardware, and this labor cannot be automated away as a one-time cost.  If the world outside your window has a thousand times as many brains, a 50% productivity boost that only cascades to a 10% and then a 1% additional productivity boost will not let you win against the world.  If your UberTool was itself a mind, if cascades of self-improvement could fully automate away more and more of the intellectual labor performed by the outside world - then it would be a different story.  But while the development path wends inexorably through thousands and millions of engineers, and you can't divert that path through an internal computer, you're not likely to pull far ahead of the world.  You can just choose between giving your own people a 10% boost, or selling your product on the market to give lots of people a 10% boost.


You can have trade secrets, and sell only your services or products - many companies follow that business plan; any company that doesn't sell its source code does so.  But this is just keeping one small advantage to yourself, and adding that as a cherry on top of the technological progress handed you by the outside world.  It's not having more technological progress inside than outside.


If you're getting most of your technological progress handed to you - your resources not being sufficient to do it in-house - then you won't be able to apply your private productivity improvements to most of your actual velocity, since most of your actual velocity will come from outside your walls.  If you only create 1% of the progress that you use, then a 50% improvement becomes a 0.5% improvement.  The domain of potential recursion and potential cascades is much smaller, diminishing k.  As if only 1% of the uranium generating your neutrons were available to be fissioned further by chain reactions.


We don't live in a world that cares intensely about milking every increment of velocity out of scientific progress.  A 0.5% improvement is easily lost in the noise.  Corporations and universities routinely put obstacles in front of their internal scientists that cost them more than 10% of their potential.  This is one of those problems where not everyone is Engelbart (and you can't just rewrite their source code either).


For completeness, I should mention that there are generic obstacles to pulling an UberTool.  Warren Buffett has gotten a sustained higher rate of return than the economy at large, and is widely believed to be capable of doing so indefinitely.  In principle, the economy could have invested hundreds of billions of dollars as soon as Berkshire Hathaway had a sufficiently long track record to rule out chance.  Instead, Berkshire has grown mostly by compound interest.  We could live in a world where asset allocations were ordinarily given as a mix of stocks, bonds, real estate, and Berkshire Hathaway.  We don't live in that world for a number of reasons: financial advisors not wanting to make themselves appear irrelevant, strange personal preferences on the part of Buffett...


The economy doesn't always do the obvious thing, like flow money into Buffett until his returns approach the average return of the economy.  Interest rate differences much higher than 0.5%, on matters that people care about far more intensely than Science, are ignored if they're not presented in exactly the right format to be seized.


And it's not easy for individual scientists or groups to capture the value created by scientific progress.  Did Einstein die with 0.1% of the value that he created?  Engelbart in particular doesn't seem to have tried to be Bill Gates, at least not as far as I know.


With that in mind - in one sense Engelbart succeeded at a good portion of what he actually set out to do: computer mice did take over the world.


But it was a broad slow cascade that mixed into the usual exponent of economic growth.  Not a concentrated fast FOOM.  To produce a concentrated FOOM, you've got to be able to swallow as much as possible of the processes driving the FOOM into the FOOM.  Otherwise you can't improve those processes and you can't cascade through them and your k goes down.  Then your interest rates won't even be as much higher than normal as, say, Warren Buffett's.  And there's no grail to be won, only profits to be made:  If you have no realistic hope of beating the world, you may as well join it.

" } }, { "_id": "pXpt2HqCqxqjeZc4u", "title": "The Complete Idiot's Guide to Ad Hominem", "pageUrl": "https://www.lesswrong.com/posts/pXpt2HqCqxqjeZc4u/the-complete-idiot-s-guide-to-ad-hominem", "postedAt": "2008-11-25T21:47:18.000Z", "baseScore": 14, "voteCount": 10, "commentCount": 16, "url": null, "contents": { "documentId": "pXpt2HqCqxqjeZc4u", "html": "

Stephen Bond writes the definitive word on ad hominem in \"the ad hominem fallacy fallacy\":

In reality, ad hominem is unrelated to sarcasm or personal abuse.  Argumentum ad hominem is the logical fallacy of attempting to undermine a speaker's argument by attacking the speaker instead of addressing the argument.  The mere presence of a personal attack does not indicate ad hominem: the attack must be used for the purpose of undermining the argument, or otherwise the logical fallacy isn't there.


[...]

A: \"All rodents are mammals, but a weasel isn't a rodent, so it can't be a mammal.\"
B: \"You evidently know nothing about logic. This does not logically follow.\"


B's argument is still not ad hominem.  B does not imply that A's sentence does not logically follow because A knows nothing about logic.  B is still addressing the substance of A's argument...

This is too beautiful, thorough, and precise to not post.  HT to sfk on HN.

" } }, { "_id": "rJLviHqJMTy8WQkow", "title": "...Recursion, Magic", "pageUrl": "https://www.lesswrong.com/posts/rJLviHqJMTy8WQkow/recursion-magic", "postedAt": "2008-11-25T09:10:38.000Z", "baseScore": 34, "voteCount": 21, "commentCount": 28, "url": null, "contents": { "documentId": "rJLviHqJMTy8WQkow", "html": "

Followup to: Cascades, Cycles, Insight...


...4, 5 sources of discontinuity.


Recursion is probably the most difficult part of this topic.  We have historical records aplenty of cascades, even if untangling the causality is difficult.  Cycles of reinvestment are the heartbeat of the modern economy.  An insight that makes a hard problem easy, is something that I hope you've experienced at least once in your life...


But we don't have a whole lot of experience redesigning our own neural circuitry.


We have these wonderful things called \"optimizing compilers\".  A compiler translates programs in a high-level language, into machine code (though these days it's often a virtual machine).  An \"optimizing compiler\", obviously, is one that improves the program as it goes.


So why not write an optimizing compiler in its own language, and then run it on itself?  And then use the resulting optimized optimizing compiler, to recompile itself yet again, thus producing an even more optimized optimizing compiler -


Halt!  Stop!  Hold on just a minute!  An optimizing compiler is not supposed to change the logic of a program - the input/output relations.  An optimizing compiler is only supposed to produce code that does the same thing, only faster.  A compiler isn't remotely near understanding what the program is doing and why, so it can't presume to construct a better input/output function.  We just presume that the programmer wants a fixed input/output function computed as fast as possible, using as little memory as possible.


So if you run an optimizing compiler on its own source code, and then use the product to do the same again, it should produce the same output on both occasions - at most, the first-order product will run faster than the original compiler.


If we want a computer program that experiences cascades of self-improvement, the path of the optimizing compiler does not lead there - the \"improvements\" that the optimizing compiler makes upon itself, do not improve its ability to improve itself.



Now if you are one of those annoying nitpicky types, like me, you will notice a flaw in this logic: suppose you built an optimizing compiler that searched over a sufficiently wide range of possible optimizations that it did not ordinarily have time to do a full search of its own space - so that, when the optimizing compiler ran out of time, it would just implement whatever speedups it had already discovered.  Then the optimized optimizing compiler, although it would only implement the same logic faster, would do more optimizations in the same time - and so the second output would not equal the first output.


Well... that probably doesn't buy you much.  Let's say the optimized program is 20% faster, that is, it gets 20% more done in the same time.  Then, unrealistically assuming \"optimization\" is linear, the 2-optimized program will be 24% faster, the 3-optimized program will be 24.8% faster, and so on until we top out at a 25% improvement.  k < 1.
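The series, written out - a minimal sketch under the same linearity assumption:

```python
# Each recompilation lets the compiler do 20% more optimization in the
# same time, under the (unrealistic) linearity assumption in the text.
base_gain = 0.20
total = 0.0
for generation in range(1, 6):
    total = base_gain * (1 + total)        # gain scales with current speed
    print(f"{generation}-optimized: {total:.1%}")  # 20.0%, 24.0%, 24.8%, ...

# Geometric series: converges to base_gain / (1 - base_gain) = 25%.  k < 1.
```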


So let us turn aside from optimizing compilers, and consider a more interesting artifact, EURISKO.


To the best of my inexhaustive knowledge, EURISKO may still be the most sophisticated self-improving AI ever built - in the 1980s, by Douglas Lenat before he started wasting his life on Cyc.  EURISKO was applied in domains ranging from the Traveller war game (EURISKO became champion without having ever before fought a human) to VLSI circuit design.


EURISKO used \"heuristics\" to, for example, design potential space fleets.  It also had heuristics for suggesting new heuristics, and metaheuristics could apply to any heuristic, including metaheuristics.  E.g. EURISKO started with the heuristic \"investigate extreme cases\" but moved on to \"investigate cases close to extremes\".  The heuristics were written in RLL, which stands for Representation Language Language.  According to Lenat, it was figuring out how to represent the heuristics in such fashion that they could usefully modify themselves without always just breaking, that consumed most of the conceptual effort in creating EURISKO.


But EURISKO did not go foom.


EURISKO could modify even the metaheuristics that modified heuristics.  EURISKO was, in an important sense, more recursive than either humans or natural selection - a new thing under the Sun, a cycle more closed than anything that had ever existed in this universe.


Still, EURISKO ran out of steam.  Its self-improvements did not spark a sufficient number of new self-improvements.  This should not really be too surprising - it's not as if EURISKO started out with human-level intelligence plus the ability to modify itself - its self-modifications were either evolutionarily blind, or produced by the simple procedural rules of some heuristic or other.  That's not going to navigate the search space very fast on an atomic level.  Lenat did not stand dutifully apart from his creation, but stepped in and helped EURISKO prune its own heuristics.  But in the end EURISKO ran out of steam, and Lenat couldn't push it any further.


EURISKO lacked what I called \"insight\" - that is, the type of abstract knowledge that lets humans fly through the search space.  And so its recursive access to its own heuristics proved to be for nought.


Unless, y'know, you're counting becoming world champion at Traveller without ever previously playing a human, as some sort of accomplishment.


But it is, thankfully, a little harder than that to destroy the world - as Lenat's experimental test informed us.


Robin previously asked why Douglas Engelbart did not take over the world, despite his vision of a team building tools to improve tools, and his anticipation of tools like computer mice and hypertext.


One reply would be, \"Sure, a computer gives you a 10% advantage in doing various sorts of problems, some of which include computers - but there's still a lot of work that the computer doesn't help you with - and the mouse doesn't run off and write better mice entirely on its own - so k < 1, and it still takes large amounts of human labor to advance computer technology as a whole - plus a lot of the interesting knowledge is nonexcludable so it's hard to capture the value you create - and that's why Buffett could manifest a better take-over-the-world-with-sustained-higher-interest-rates than Engelbart.\"


But imagine that Engelbart had built a computer mouse, and discovered that each click of the mouse raised his IQ by one point.  Then, perhaps, we would have had a situation on our hands.


Maybe you could diagram it something like this:

  1. Metacognitive level:  Evolution is the metacognitive algorithm which produced the wiring patterns and low-level developmental rules for human brains.
  2. Cognitive level:  The brain processes its knowledge (including procedural knowledge) using algorithms that are quite mysterious to the user within them.  Trying to program AIs with the sort of instructions humans give each other usually proves not to do anything: the machinery activated by the levers is missing.
  3. Metaknowledge level:  Knowledge and skills associated with e.g. \"science\" as an activity to carry out using your brain - instructing you when to try to think of new hypotheses using your mysterious creative abilities.
  4. Knowledge level:  Knowing how gravity works, or how much weight steel can support.
  5. Object level:  Specific actual problems, like building a bridge or something.

This is a causal tree, and changes at levels closer to the root have greater impacts as the effects cascade downward.


So one way of looking at it is:  \"A computer mouse isn't recursive enough.\"


This is an issue that I need to address at further length, but for today I'm out of time.


Magic is the final factor I'd like to point out, at least for now, in considering sources of discontinuity for self-improving minds.  By \"magic\" I naturally do not refer to this.  Rather, \"magic\" in the sense that if you asked 19th-century Victorians what they thought the future would bring, they would have talked about flying machines or gigantic engines, and a very few true visionaries would have suggested space travel or Babbage computers.  Nanotechnology, not so much.


The future has a reputation for accomplishing feats which the past thought impossible.  Future civilizations have even broken what past civilizations thought (incorrectly, of course) to be the laws of physics.  If prophets of 1900 AD - never mind 1000 AD - had tried to bound the powers of human civilization a billion years later, some of those impossibilities would have been accomplished before the century was out; transmuting lead into gold, for example.  Because we remember future civilizations surprising past civilizations, it has become cliche that we can't put limits on our great-grandchildren.


And yet everyone in the 20th century, in the 19th century, and in the 11th century, was human.  There is also the sort of magic that a human gun is to a wolf, or the sort of magic that human genetic engineering is to natural selection.


To \"improve your own capabilities\" is an instrumental goal, and if a smarter intelligence than my own is focused on that goal, I should expect to be surprised.  The mind may find ways to produce larger jumps in capability than I can visualize myself.  Where higher creativity than mine is at work and looking for shorter shortcuts, the discontinuities that I imagine may be dwarfed by the discontinuities that it can imagine.


And remember how little progress it takes - just a hundred years of human time, with everyone still human - to turn things that would once have been \"unimaginable\" into heated debates about feasibility.  So if you build a mind smarter than you, and it thinks about how to go FOOM quickly, and it goes FOOM faster than you imagined possible, you really have no right to complain - based on mere human history, you should have expected a significant probability of being surprised.  Not, surprised that the nanotech is 50% faster than you thought it would be.  Surprised the way the Victorians would have been surprised by nanotech.


Thus the last item on my (current, somewhat ad-hoc) list of reasons to expect discontinuity:  Cascades, cycles, insight, recursion, magic.

" } }, { "_id": "dq3KsCsqNotWc8nAK", "title": "Cascades, Cycles, Insight...", "pageUrl": "https://www.lesswrong.com/posts/dq3KsCsqNotWc8nAK/cascades-cycles-insight", "postedAt": "2008-11-24T09:33:40.000Z", "baseScore": 35, "voteCount": 22, "commentCount": 31, "url": null, "contents": { "documentId": "dq3KsCsqNotWc8nAK", "html": "

Followup to: Surprised by Brains


Five sources of discontinuity:  1, 2, and 3...


Cascades are when one thing leads to another.  Human brains are effectively discontinuous with chimpanzee brains due to a whole bag of design improvements, even though they and we share 95% of our genetic material and only a few million years have elapsed since the branch.  Why this whole series of improvements in us, relative to chimpanzees?  Why haven't some of the same improvements occurred in other primates?


Well, this is not a question on which one may speak with authority (so far as I know).  But I would venture an unoriginal guess that, in the hominid line, one thing led to another.


The chimp-level task of modeling others, in the hominid line, led to improved self-modeling which supported recursion which enabled language which birthed politics that increased the selection pressure for outwitting which led to sexual selection on wittiness...


...or something.  It's hard to tell by looking at the fossil record what happened in what order and why.  The point being that it wasn't one optimization that pushed humans ahead of chimps, but rather a cascade of optimizations that, in Pan, never got started.


We fell up the stairs, you might say.  It's not that the first stair ends the world, but if you fall up one stair, you're more likely to fall up the second, the third, the fourth...

I will concede that farming was a watershed invention in the history of the human species, though it intrigues me for a different reason than Robin.  Robin, presumably, is interested because the economy grew by two orders of magnitude, or something like that.  But did having a hundred times as many humans lead to a hundred times as much thought-optimization accumulating per unit time?  It doesn't seem likely, especially in the age before writing and telephones.  But farming, because of its sedentary and repeatable nature, led to repeatable trade, which led to debt records.  Aha! - now we have writing.  There's a significant invention, from the perspective of cumulative optimization by brains.  Farming isn't writing but it cascaded to writing.


Farming also cascaded (by way of surpluses and cities) to support professional specialization.  I suspect that having someone spend their whole life thinking about topic X instead of a hundred farmers occasionally pondering it is a more significant jump in cumulative optimization than the gap between a hundred farmers and one hunter-gatherer pondering something.


Farming is not the same trick as professional specialization or writing, but it cascaded to professional specialization and writing, and so the pace of human history picked up enormously after agriculture.  Thus I would interpret the story.


From a zoomed-out perspective, cascades can lead to what look like discontinuities in the historical record, even given a steady optimization pressure in the background.  It's not that natural selection sped up during hominid evolution.  But the search neighborhood contained a low-hanging fruit of high slope... that led to another fruit... which led to another fruit... and so, walking at a constant rate, we fell up the stairs.  If you see what I'm saying.


Predicting what sort of things are likely to cascade seems like a very difficult sort of problem.


But I will venture the observation that - with a sample size of one, and an optimization process very different from human thought - there was a cascade in the region of the transition from primate to human intelligence.


Cycles happen when you connect the output pipe to the input pipe in a repeatable transformation.  You might think of them as a special case of cascades with very high regularity.  (From which you'll note that in the cases above, I talked about cascades through differing events: farming -> writing.)


The notion of cycles as a source of discontinuity might seem counterintuitive, since it's so regular.  But consider this important lesson of history:


Once upon a time, in a squash court beneath Stagg Field at the University of Chicago, physicists were building a shape like a giant doorknob out of alternate layers of graphite and uranium...


The key number for the \"pile\" is the effective neutron multiplication factor.  When a uranium atom splits, it releases neutrons - some right away, some after delay while byproducts decay further.  Some neutrons escape the pile, some neutrons strike another uranium atom and cause an additional fission.  The effective neutron multiplication factor, denoted k, is the average number of neutrons from a single fissioning uranium atom that cause another fission.  At k less than 1, the pile is \"subcritical\".  At k >= 1, the pile is \"critical\".  Fermi calculates that the pile will reach k=1 between layers 56 and 57.


On December 2nd in 1942, with layer 57 completed, Fermi orders the final experiment to begin.  All but one of the control rods (strips of wood covered with neutron-absorbing cadmium foil) are withdrawn.  At 10:37am, Fermi orders the final control rod withdrawn about half-way out.  The Geiger counters click faster, and a graph pen moves upward.  \"This is not it,\" says Fermi, \"the trace will go to this point and level off,\" indicating a spot on the graph.  In a few minutes the graph pen comes to the indicated point, and does not go above it.  Seven minutes later, Fermi orders the rod pulled out another foot.  Again the radiation rises, then levels off.  The rod is pulled out another six inches, then another, then another.


At 11:30, the slow rise of the graph pen is punctuated by an enormous CRASH - an emergency control rod, triggered by an ionization chamber, activates and shuts down the pile, which is still short of criticality.


Fermi orders the team to break for lunch.


At 2pm the team reconvenes, withdraws and locks the emergency control rod, and moves the control rod to its last setting.  Fermi makes some measurements and calculations, then again begins the process of withdrawing the rod in slow increments.  At 3:25pm, Fermi orders the rod withdrawn another twelve inches.  \"This is going to do it,\" Fermi says.  \"Now it will become self-sustaining.  The trace will climb and continue to climb.  It will not level off.\"


Herbert Anderson recounted (as told in Rhodes's The Making of the Atomic Bomb):

"At first you could hear the sound of the neutron counter, clickety-clack, clickety-clack. Then the clicks came more and more rapidly, and after a while they began to merge into a roar; the counter couldn't follow anymore. That was the moment to switch to the chart recorder. But when the switch was made, everyone watched in the sudden silence the mounting deflection of the recorder's pen. It was an awesome silence. Everyone realized the significance of that switch; we were in the high intensity regime and the counters were unable to cope with the situation anymore. Again and again, the scale of the recorder had to be changed to accomodate the neutron intensity which was increasing more and more rapidly. Suddenly Fermi raised his hand. 'The pile has gone critical,' he announced. No one present had any doubt about it."

Fermi kept the pile running for twenty-eight minutes, with the neutron intensity doubling every two minutes.


That first critical reaction had k of 1.0006.
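
As a back-of-envelope cross-check on those two figures (the relation between doubling time and generation time is just the standard exponential-growth identity, not anything from the historical account):

```python
import math

# intensity ~ k**(t / t_gen)  =>  t_gen = t_double * ln(k) / ln(2)
k = 1.0006
t_double = 120.0  # seconds, from the two-minute doubling above
t_gen = t_double * math.log(k) / math.log(2)
print(f"effective neutron generation time ~ {t_gen:.3f} s")  # ~0.1 s
# Far longer than a prompt-neutron lifetime: the neutrons that come
# "after delay", mentioned above, set the pace near criticality.
```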


It might seem that a cycle, with the same thing happening over and over again, ought to exhibit continuous behavior.  In one sense it does.  But if you pile on one more uranium brick, or pull out the control rod another twelve inches, there's one hell of a big difference between k of 0.9994 and k of 1.0006.
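
A deterministic toy model of that difference - ignoring delayed neutrons, geometry, and everything else about a real pile:

```python
# Population after n generations is n0 * k**n: two k values straddling
# criticality diverge enormously even though they differ by only 0.12%.
n0 = 1000.0
for k in (0.9994, 1.0006):
    print(f"k={k}: {n0 * k**10_000:,.0f} neutrons after 10,000 generations")
# k=0.9994 -> ~2 (dying out); k=1.0006 -> ~403,000 (blowing up)
```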


If, rather than being able to calculate, rather than foreseeing and taking cautions, Fermi had just reasoned that 57 layers ought not to behave all that differently from 56 layers - well, it wouldn't have been a good year to be a student at the University of Chicago.


The inexact analogy to the domain of self-improving AI is left as an exercise for the reader, at least for now.


Economists like to measure cycles because they happen repeatedly.  You take a potato and an hour of labor and make a potato clock which you sell for two potatoes; and you do this over and over and over again, so an economist can come by and watch how you do it.


As I noted here at some length, economists are much less likely to go around measuring how many scientific discoveries it takes to produce a new scientific discovery.  All the discoveries are individually dissimilar and it's hard to come up with a common currency for them.  The analogous problem will prevent a self-improving AI from being directly analogous to a uranium heap, with almost perfectly smooth exponential increase at a calculable rate.  You can't apply the same software improvement to the same line of code over and over again; you've got to invent a new improvement each time.  But if self-improvements are triggering more self-improvements with great regularity, you might stand a long way back from the AI, blur your eyes a bit, and ask:  What is the AI's average neutron multiplication factor?
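
A blurred-eyes sketch of that question, with invented numbers - the point is only how sharply the long-run behavior depends on whether the average k is below or above 1:

```python
# Treat each self-improvement as triggering, on average, k further improvements.
def total_improvements(k, generations=50):
    total, current = 0.0, 1.0    # start from one seed improvement
    for _ in range(generations):
        total += current
        current *= k
    return total

for k in (0.5, 0.9, 1.1):
    print(f"k={k}: ~{total_improvements(k):,.0f} improvements")
# k < 1: the cascade tops out near 1/(1-k).  k > 1: it keeps compounding.
```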


Economics seems to me to be largely the study of production cycles - highly regular repeatable value-adding actions.  This doesn't seem to me like a very deep abstraction so far as the study of optimization goes, because it leaves out the creation of novel knowledge and novel designs - further informational optimizations.  Or rather, treats productivity improvements as a mostly exogenous factor produced by black-box engineers and scientists.  (If I underestimate your power and merely parody your field, by all means inform me what kind of economic study has been done of such things.)  (Answered:  This literature goes by the name \"endogenous growth\".  See comments starting here.)  So far as I can tell, economists do not venture into asking where discoveries come from, leaving the mysteries of the brain to cognitive scientists.


(Nor do I object to this division of labor - it just means that you may have to drag in some extra concepts from outside economics if you want an account of self-improving Artificial Intelligence.  Would most economists even object to that statement?  But if you think you can do the whole analysis using standard econ concepts, then I'm willing to see it...)


Insight is that mysterious thing humans do by grokking the search space, wherein one piece of highly abstract knowledge (e.g. Newton's calculus) provides the master key to a huge set of problems.  Since humans deal in the compressibility of search spaces (at least the parts we can compress), we can bite off huge chunks in one go.  This is not mere cascading, where one solution leads to another:


Rather, an \"insight\" is a chunk of knowledge which, if you possess it, decreases the cost of solving a whole range of governed problems.


There's a parable I once wrote - I forget what for, I think ev-bio - which dealt with creatures who'd evolved addition in response to some kind of environmental problem, and not with overly sophisticated brains - so they started with the ability to add 5 to things (which was a significant fitness advantage because it let them solve some of their problems), then accreted another adaptation to add 6 to odd numbers.  Until, some time later, there wasn't a reproductive advantage to \"general addition\", because the set of special cases covered almost everything found in the environment.


There may even be a real-world example of this.  If you glance at a set, you should be able to instantly distinguish the numbers one, two, three, four, and five, but seven objects in an arbitrary (non-canonical) pattern will take at least one noticeable instant to count.  IIRC, it's been suggested that we have hardwired numerosity-detectors, but only up to five.


I say all this, to note the difference between evolution nibbling bits off the immediate search neighborhood, versus the human ability to do things in one fell swoop.


Our compression of the search space is also responsible for ideas cascading much more easily than adaptations.  We actively examine good ideas, looking for neighbors.


But an insight is higher-level than this; it consists of understanding what's \"good\" about an idea in a way that divorces it from any single point in the search space.  In this way you can crack whole volumes of the solution space in one swell foop.  The insight of calculus apart from gravity is again a good example, or the insight of mathematical physics apart from calculus, or the insight of math apart from mathematical physics.


Evolution is not completely barred from making \"discoveries\" that decrease the cost of a very wide range of further discoveries.  Consider e.g. the ribosome, which was capable of manufacturing a far wider range of proteins than whatever it was actually making at the time of its adaptation: this is a general cost-decreaser for a wide range of adaptations.  It likewise seems likely that various types of neuron have reasonably-general learning paradigms built into them (gradient descent, Hebbian learning, more sophisticated optimizers) that have been reused for many more problems than they were originally invented for.


A ribosome is something like insight: an item of \"knowledge\" that tremendously decreases the cost of inventing a wide range of solutions.  But even evolution's best \"insights\" are not quite like the human kind.  A sufficiently powerful human insight often approaches a closed form - it doesn't feel like you're exploring even a compressed search space.  You just apply the insight-knowledge to whatever your problem is, and out pops the now-obvious solution.


Insights have often cascaded, in human history - even major insights.  But they don't quite cycle - you can't repeat the identical pattern Newton used originally to get a new kind of calculus that's twice and then three times as powerful.


Human AI programmers who have insights into intelligence may acquire discontinuous advantages over others who lack those insights.  AIs themselves will experience discontinuities in their growth trajectory associated with becoming able to do AI theory itself - a watershed moment in the FOOM.

" } }, { "_id": "XQirei3crsLxsCQoi", "title": "Surprised by Brains", "pageUrl": "https://www.lesswrong.com/posts/XQirei3crsLxsCQoi/surprised-by-brains", "postedAt": "2008-11-23T07:26:41.000Z", "baseScore": 62, "voteCount": 46, "commentCount": 28, "url": null, "contents": { "documentId": "XQirei3crsLxsCQoi", "html": "

Followup to: Life's Story Continues


Imagine two agents who've never seen an intelligence - including, somehow, themselves - but who've seen the rest of the universe up until now, arguing about what these newfangled \"humans\" with their \"language\" might be able to do...

Believer:  Previously, evolution has taken hundreds of thousands of years to create new complex adaptations with many working parts.  I believe that, thanks to brains and language, we may see a new era, an era of intelligent design.  In this era, complex causal systems - with many interdependent parts that collectively serve a definite function - will be created by the cumulative work of many brains building upon each others' efforts.


Skeptic:  I see - you think that brains might have something like a 50% speed advantage over natural selection?  So it might take a while for brains to catch up, but after another eight billion years, brains will be in the lead.  But this planet's Sun will swell up by then, so -


Believer:  Fifty percent?  I was thinking more like three orders of magnitude.  With thousands of brains working together and building on each others' efforts, whole complex machines will be designed on the timescale of mere millennia - no, centuries!


Skeptic:  What?


Believer:  You heard me.


Skeptic:  Oh, come on!  There's absolutely no empirical evidence for an assertion like that!  Animal brains have been around for hundreds of millions of years without doing anything like what you're saying.  I see no reason to think that life-as-we-know-it will end, just because these hominid brains have learned to send low-bandwidth signals over their vocal cords.  Nothing like what you're saying has happened before in my experience -


Believer:  That's kind of the point, isn't it?  That nothing like this has happened before?  And besides, there is precedent for that kind of Black Swan - namely, the first replicator.


Skeptic:  Yes, there is precedent in the replicators.  Thanks to our observations of evolution, we have extensive knowledge and many examples of how optimization works.  We know, in particular, that optimization isn't easy - it takes millions of years to climb up through the search space.  Why should \"brains\", even if they optimize, produce such different results?


Believer:  Well, natural selection is just the very first optimization process that got started accidentally.  These newfangled brains were designed by evolution, rather than, like evolution itself, being a natural process that got started by accident.  So \"brains\" are far more sophisticated - why, just look at them.  Once they get started on cumulative optimization - FOOM!


Skeptic:  So far, brains are a lot less impressive than natural selection.  These \"hominids\" you're so interested in - can these creatures' handaxes really be compared to the majesty of a dividing cell?


Believer:  That's because they only just got started on language and cumulative optimization.


Skeptic:  Really?  Maybe it's because the principles of natural selection are simple and elegant for creating complex designs, and all the convolutions of brains are only good for chipping handaxes in a hurry.  Maybe brains simply don't scale to detail work.  Even if we grant the highly dubious assertion that brains are more efficient than natural selection - which you seem to believe on the basis of just looking at brains and seeing the convoluted folds - well, there still has to be a law of diminishing returns.


Believer:  Then why have brains been getting steadily larger over time?  That doesn't look to me like evolution is running into diminishing returns.  If anything, the recent example of hominids suggests that once brains get large and complicated enough, the fitness advantage for further improvements is even greater -


Skeptic:  Oh, that's probably just sexual selection!  I mean, if you think that a bunch of brains will produce new complex machinery in just a hundred years, then why not suppose that a brain the size of a whole planet could produce a de novo complex causal system with many interdependent elements in a single day?


Believer:  You're attacking a strawman here - I never said anything like that.


Skeptic:  Yeah?  Let's hear you assign a probability that a brain the size of a planet could produce a new complex design in a single day.


Believer:  The size of a planet?  (Thinks.)  Um... ten percent.


Skeptic:  (Muffled choking sounds.)


Believer:  Look, brains are fast.  I can't rule it out in principle -


Skeptic:  Do you understand how long a day is?  It's the amount of time for the Earth to spin on its own axis, once.  One sunlit period, one dark period.  There are 365,242 of them in a single millennium.


Believer:  Do you understand how long a second is?  That's how long it takes a brain to see a fly coming in, target it in the air, and eat it.  There's 86,400 of them in a day.


Skeptic:  Pffft, and chemical interactions in cells happen in nanoseconds.  Speaking of which, how are these brains going to build any sort of complex machinery without access to ribosomes?  They're just going to run around on the grassy plains in really optimized patterns until they get tired and fall over.  There's nothing they can use to build proteins or even control tissue structure.


Believer:  Well, life didn't always have ribosomes, right?  The first replicator didn't.


Skeptic:  So brains will evolve their own ribosomes?


Believer:  Not necessarily ribosomes.  Just some way of making things.


Skeptic:  Great, so call me in another hundred million years when that evolves, and I'll start worrying about brains.


Believer:  No, the brains will think of a way to get their own ribosome-analogues.


Skeptic:  No matter what they think, how are they going to make anything without ribosomes?


Believer:  They'll think of a way.


Skeptic:  Now you're just treating brains as magic fairy dust.


Believer:  The first replicator would have been magic fairy dust by comparison with anything that came before it -


Skeptic:  That doesn't license throwing common sense out the window.


Believer:  What you call \"common sense\" is exactly what would have caused you to assign negligible probability to the actual outcome of the first replicator.  Ergo, not so sensible as it seems, if you want to get your predictions actually right, instead of sounding reasonable.


Skeptic:  And your belief that in the Future it will only take a hundred years to optimize a complex causal system with dozens of interdependent parts - you think this is how you get it right?


Believer:  Yes!  Sometimes, in the pursuit of truth, you have to be courageous - to stop worrying about how you sound in front of your friends - to think outside the box - to imagine futures fully as absurd as the Present would seem without benefit of hindsight - and even, yes, say things that sound completely ridiculous and outrageous by comparison with the Past.  That is why I boldly dare to say - pushing out my guesses to the limits of where Truth drives me, without fear of sounding silly - that in the far future, a billion years from now when brains are more highly evolved, they will find it possible to design a complete machine with a thousand parts in as little as one decade!


Skeptic:  You're just digging yourself deeper.  I don't even understand how brains are supposed to optimize so much faster.  To find out the fitness of a mutation, you've got to run millions of real-world tests, right?  And even then, an environmental shift can make all your optimization worse than nothing, and there's no way to predict  that no matter how much you test -


Believer:  Well, a brain is complicated, right?  I've been looking at them for a while and even I'm not totally sure I understand what goes on in there.


Skeptic:  Pffft!  What a ridiculous excuse. 


Believer:  I'm sorry, but it's the truth - brains are harder to understand.


Skeptic:  Oh, and I suppose evolution is trivial?


Believer:  By comparison... yeah, actually.


Skeptic:  Name me one factor that explains why you think brains will run so fast.


Believer:  Abstraction.


Skeptic:  Eh?   Abstrah-shun?


Believer:  It... um... lets you know about parts of the search space you haven't actually searched yet, so you can... sort of... skip right to where you need to be -


Skeptic:  I see.  And does this power work by clairvoyance, or by precognition?  Also, do you get it from a potion or an amulet?


Believer:  The brain looks at the fitness of just a few points in the search space - does some complicated processing - and voila, it leaps to a much higher point!


Skeptic:  Of course.  I knew teleportation had to fit in here somewhere.


Believer:  See, the fitness of one point tells you something about other points -


Skeptic:  Eh?  I don't see how that's possible without running another million tests.


Believer:  You just look at it, dammit!


Skeptic:  With what kind of sensor?  It's a search space, not a bug to eat!


Believer:  The search space is compressible -


Skeptic:  Whaa?  This is a design space of possible genes we're talking about, not a folding bed -


Believer:  Would you stop talking about genes already!  Genes are on the way out!  The future belongs to ideas!


Skeptic:  Give. Me. A. Break.


Believer:  Hominids alone shall carry the burden of destiny!


Skeptic:  They'd die off in a week without plants to eat.  You probably don't know this, because you haven't studied ecology, but ecologies are complicated - no single species ever \"carries the burden of destiny\" by itself.  But that's another thing - why are you postulating that it's just the hominids who go FOOM?  What about the other primates?  These chimpanzees are practically their cousins - why wouldn't they go FOOM too?


Believer:  Because it's all going to shift to the level of ideas, and the hominids will build on each other's ideas without the chimpanzees participating -


Skeptic:  You're begging the question.  Why won't chimpanzees be part of the economy of ideas?  Are you familiar with Ricardo's Law of Comparative Advantage?  Even if chimpanzees are worse at everything than hominids, the hominids will still trade with them and all the other brainy animals.


Believer:  The cost of explaining an idea to a chimpanzee will exceed any benefit the chimpanzee can provide.


Skeptic:  But why should that be true?  Chimpanzees only forked off from hominids a few million years ago.  They have 95% of their genome in common with the hominids.  The vast majority of optimization that went into producing hominid brains also went into producing chimpanzee brains.  If hominids are good at trading ideas, chimpanzees will be 95% as good at trading ideas.  Not to mention that all of your ideas belong to the far future, so that both hominids, and chimpanzees, and many other species will have evolved much more complex brains before anyone starts building their own cells -


Believer:  I think we could see as little as a million years pass between when these creatures first invent a means of storing information with persistent digital accuracy - their equivalent of DNA - and when they build machines as complicated as cells.


Skeptic:  Too many assumptions... I don't even know where to start...  Look, right now brains are nowhere near building cells.  It's going to take a lot more evolution to get to that point, and many other species will be much further along the way by the time hominids get there.  Chimpanzees, for example, will have learned to talk -


Believer:  It's the ideas that will accumulate optimization, not the brains.


Skeptic:  Then I say again that if hominids can do it, chimpanzees will do it 95% as well.


Believer:  You might get discontinuous returns on brain complexity.  Like... even though the hominid lineage split off from chimpanzees very recently, and only a few million years of evolution have occurred since then, the chimpanzees won't be able to keep up.


Skeptic:  Why?


Believer:  Good question.


Skeptic:  Does it have a good answer?


Believer:  Well, there might be compound interest on learning during the maturational period... or something about the way a mind flies through the search space, so that slightly more powerful abstracting-machinery can create abstractions that correspond to much faster travel... or some kind of feedback loop involving a brain powerful enough to control itself... or some kind of critical threshold built into the nature of cognition as a problem, so that a single missing gear spells the difference between walking and flying... or the hominids get started down some kind of sharp slope in the genetic fitness landscape, involving many changes in sequence, and the chimpanzees haven't gotten started down it yet... or all these statements are true and interact multiplicatively... I know that a few million years doesn't seem like much time, but really, quite a lot can happen.  It's hard to untangle.


Skeptic:  I'd say it's hard to believe.


Believer:  Sometimes it seems that way to me too!  But I think that in a mere ten or twenty million years, we won't have a choice.

" } }

Followup to: The First World Takeover


As last we looked at the planet, Life's long search in organism-space had only just gotten started.


When I try to structure my understanding of the unfolding process of Life, it seems to me that, to understand the optimization velocity at any given point, I want to break down that velocity using the following abstractions:


Example:  If an object-level adaptation enables more efficient extraction of resources, and thereby increases the total population that can be supported by fixed available resources, then this increases the optimization resources and perhaps the optimization velocity.


How much does optimization velocity increase - how hard does this object-level innovation hit back to the meta-level?


If a population is small enough that not all mutations are occurring in each generation, then a larger population decreases the time for a given mutation to show up.  If the fitness improvements offered by beneficial mutations follow an exponential distribution, then - I'm not actually doing the math here, just sort of eyeballing - I would expect the optimization velocity to go as log population size, up to a maximum where the search neighborhood is explored thoroughly.  (You could test this in the lab, though not just by eyeballing the fossil record.)
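
A quick simulation to eyeball that guess (the exponential distribution is the assumption stated above; the log-like growth of the maximum of exponential draws is a standard fact):

```python
import math, random

random.seed(0)

# Best beneficial mutation found among N tries, if mutation benefits are
# exponentially distributed: the expected maximum grows like ln(N)
# (it equals the harmonic number H_N, which is ln(N) plus a constant).
def best_of(n, trials=500):
    return sum(max(random.expovariate(1.0) for _ in range(n))
               for _ in range(trials)) / trials

for n in (10, 100, 1000, 10000):
    print(f"N={n:>5}: best ~ {best_of(n):.2f}   (ln N = {math.log(n):.2f})")
# The two columns track each other with a roughly constant offset (~0.58):
# velocity per generation goes as log population size, as guessed above.
```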


This doesn't mean all optimization processes would have a momentary velocity that goes as the log of momentary resource investment up to a maximum.  Just one mode of evolution would have this character.  And even under these assumptions, evolution's cumulative optimization wouldn't go as log of cumulative resources - the log-pop curve is just the instantaneous velocity.  If we assume that the variance of the neighborhood remains the same over the course of exploration (good points have better neighbors with same variance ad infinitum), and that the population size remains the same, then we should see linearly cumulative optimization over time.  At least until we start to hit the information bound on maintainable genetic information...


These are the sorts of abstractions that I think are required to describe the history of life on Earth in terms of optimization.  And I also think that if you don't talk optimization, then you won't be able to understand the causality - there'll just be these mysterious unexplained progress modes that change now and then.  In the same way you have to talk natural selection to understand observed evolution, you have to talk optimization velocity to understand observed evolutionary speeds.


The first thing to realize is that meta-level changes are rare, so most of what we see in the historical record will be structured by the search neighborhoods - the way that one innovation opens up the way for additional innovations.  That's going to be most of the story, not because meta-level innovations are unimportant, but because they are rare.


In "Eliezer's Meta-Level Determinism", Robin lists the following dramatic events traditionally noticed in the fossil record:

Any Cells, Filamentous Prokaryotes, Unicellular Eukaryotes, Sexual Eukaryotes, Metazoans

And he describes \"the last three strong transitions\" as:

Humans, farming, and industry

So let me describe what I see when I look at these events, plus some others, through the lens of my abstractions:


Cells:  Force a set of genes, RNA strands, or catalytic chemicals to share a common reproductive fate.  (This is the real point of the cell boundary, not \"protection from the environment\" - it keeps the fruits of chemical labor inside a spatial boundary.)  But, as we've defined our abstractions, this is mostly a matter of optimization slope - the quality of the search neighborhood.  The advent of cells opens up a tremendously rich new neighborhood defined by specialization and division of labor.  It also increases the slope by ensuring that chemicals get to keep the fruits of their own labor in a spatial boundary, so that fitness advantages increase.  But does it hit back to the meta-level?  How you define that seems to me like a matter of taste.  Cells don't quite change the mutate-reproduce-select cycle.  But if we're going to define sexual recombination as a meta-level innovation, then we should also define cellular isolation as a meta-level innovation.


It's worth noting that modern genetic algorithms have not, to my knowledge, reached anything like the level of intertwined complexity that characterizes modern unicellular organisms.  Modern genetic algorithms seem more like they're producing individual chemicals, rather than being able to handle individually complex modules.  So the cellular transition may be a hard one.
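For concreteness, here is a minimal sketch of the kind of flat-genome genetic algorithm being compared to a cell here - one bitstring, one fitness number, no modules; the bit-counting objective and all parameters are arbitrary stand-ins:

    import random

    GENOME_LEN, POP, GENS, MUT_RATE = 64, 100, 200, 0.01

    def fitness(genome):          # stand-in objective: count the 1-bits
        return sum(genome)

    def mutate(genome):
        return [bit ^ (random.random() < MUT_RATE) for bit in genome]

    def crossover(a, b):          # single-point crossover
        cut = random.randrange(GENOME_LEN)
        return a[:cut] + b[cut:]

    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:POP // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP - len(parents))]
        pop = parents + children
    print(max(fitness(g) for g in pop))
    # The genome is a flat string of parameters - nothing here is on the
    # road to a cell's intertwined, division-of-labor complexity.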

DNA:  I haven't yet looked up the standard theory on this, but I would sorta expect it to come after cells, since a ribosome seems like the sort of thing you'd have to keep around in a defined spatial location.  DNA again opens up a huge new search neighborhood by separating the functionality of chemical shape from the demands of reproducing the pattern.  Maybe we should rule that anything which restructures the search neighborhood this drastically should count as a hit back to the meta-level.  (Whee, our abstractions are already breaking down.)  Also, DNA directly hits back to the meta-level by carrying information at higher fidelity, which increases the total storable information.

Filamentous prokaryotes, unicellular eukaryotes:  Meh, so what.


Sex:  The archetypal example of a rare meta-level innovation.  Evolutionary biologists still puzzle over how exactly this one managed to happen.


Metazoans:  The key here is not cells aggregating into colonies with similar genetic heritages; the key here is the controlled specialization of cells with an identical genetic heritage.  This opens up a huge new region of the search space, but does not particularly change the nature of evolutionary optimization.

Note that opening a sufficiently huge gate in the search neighborhood may result in a meta-level innovation being uncovered shortly thereafter.  E.g. if cells make ribosomes possible.  One of the main lessons in this whole history is that one thing leads to another.

Neurons, for example, may have been the key factor enabling large-motile-animal body plans, because they enabled one side of the organism to talk with the other.


This brings us to the age of brains, which will be the topic of the next post.


But in the meanwhile, I just want to note that my view is nothing as simple as "meta-level determinism" or "the impact of something is proportional to how meta it is; non-meta things must have small impacts".  Nothing much meta happened between the age of sexual metazoans and the age of humans - brains were getting more sophisticated over that period, but that didn't change the nature of evolution.

Some object-level innovations are small, some are medium-sized, some are huge.  It's no wonder if you look at the historical record and see a Big Innovation that doesn't look the least bit meta, but had a huge impact by itself and led to lots of other innovations by opening up a new neighborhood of the search space.  This is allowed.  Why wouldn't it be?

You can even get exponential acceleration without anything meta - if, for example, the more knowledge you have, or the more genes you have, the more opportunities you have to make good improvements to them.  Without any increase in optimization pressure, the neighborhood gets higher-sloped as you climb it.
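
A two-line sketch of that point - the 3% rate is an arbitrary stand-in:

    knowledge = 1.0
    for year in range(100):
        knowledge += 0.03 * knowledge  # opportunities scale with holdings
    print(knowledge)  # ~19x growth: exponential, with no meta-level step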


My thesis is more along the lines of, "If this is the picture without recursion, just imagine what's going to happen when we add recursion."

To anticipate one possible objection:  I don't expect Robin to disagree that modern civilizations underinvest in meta-level improvements because they take time to yield cumulative effects, are new things that don't have certain payoffs, and worst of all, tend to be public goods.  That's why we don't have billions of dollars flowing into prediction markets, for example.  I, Robin, or Michael Vassar could probably think for five minutes and name five major probable-big-win meta-level improvements that society isn't investing in.

So if meta-level improvements are rare in the fossil record, it's not necessarily because it would be hard to improve on evolution, or because meta-level improving doesn't accomplish much.  Rather, evolution never does anything because it will have a long-term payoff a thousand generations later.  Any meta-level improvement also has to grant an object-level fitness advantage in, say, the next two generations, or it will go extinct.  This is why we can't solve the puzzle of how sex evolved by pointing directly to how it speeds up evolution.  "This speeds up evolution" is just not a valid reason for something to evolve.

Any creative evolutionary biologist could probably think for five minutes and come up with five great ways that evolution could have improved on evolution - but which happen to be more complicated than the wheel, which evolution managed to invent on only three known occasions - or don't happen to grant an immediate fitness benefit to a handful of implementers.

" } }, { "_id": "GbGzP4LZTBN8dyd8c", "title": "Observing Optimization", "pageUrl": "https://www.lesswrong.com/posts/GbGzP4LZTBN8dyd8c/observing-optimization", "postedAt": "2008-11-21T05:39:25.000Z", "baseScore": 12, "voteCount": 9, "commentCount": 28, "url": null, "contents": { "documentId": "GbGzP4LZTBN8dyd8c", "html": "

Followup to: Optimization and the Singularity

In "Optimization and the Singularity" I pointed out that history since the first replicator, including human history to date, has mostly been a case of nonrecursive optimization - where you've got one thingy doing the optimizing, and another thingy getting optimized.  When evolution builds a better amoeba, that doesn't change the structure of evolution - the mutate-reproduce-select cycle.

But there are exceptions to this rule, such as the invention of sex, which affected the structure of natural selection itself - transforming it to mutate-recombine-mate-reproduce-select.


I was surprised when Robin, in "Eliezer's Meta-Level Determinism" took that idea and ran with it and said:

...his view does seem to make testable predictions about history.  It suggests the introduction of natural selection and of human culture coincided with the very largest capability growth rate increases.  It suggests that the next largest increases were much smaller and coincided in biology with the introduction of cells and sex, and in humans with the introduction of writing and science.  And it suggests other rate increases were substantially smaller.

It hadn't occurred to me to try to derive that kind of testable prediction.  Why?  Well, partially because I'm not an economist.  (Don't get me wrong, it was a virtuous step to try.)  But also because the whole issue looked to me like it was a lot more complicated than that, so it hadn't occurred to me to try to directly extract predictions.

What is this "capability growth rate" of which you speak, Robin?  There are old, old controversies in evolutionary biology involved here.

Just to start by pointing out the obvious - if there are fixed resources available, only so much grass to be eaten or so many rabbits to consume, then any evolutionary "progress" that we would recognize as producing a better-designed organism, may just result in the displacement of the old allele by the new allele - not any increase in the population as a whole.  It's quite possible to have a new wolf that expends 10% more energy per day to be 20% better at hunting, and in this case the sustainable wolf population will decrease as new wolves replace old.

If I were going to talk about the effect that a meta-level change might have on the "optimization velocity" of natural selection, I would talk about the time for a new adaptation to replace an old adaptation after a shift in selection pressures - not the total population or total biomass or total morphological complexity (see below).


Likewise in human history - farming was an important innovation for purposes of optimization, not because it changed the human brain all that much, but because it meant that there were a hundred times as many brains around; and even more importantly, that there were surpluses that could support specialized professions.  But many innovations in human history may have consisted of new, improved, more harmful weapons - which would, if anything, have decreased the sustainable population size (though "no effect" is more likely - fewer people means more food means more people).


Or similarly: there's a talk somewhere where either Warren Buffett or Charles Munger mentions how they hate to hear about technological improvements in certain industries - because even if investing a few million can cut the cost of production by 30% or whatever, the barriers to competition are so low that the consumer captures all the gain.  So they have to invest to keep up with competitors, and the investor doesn't get much return.


I'm trying to measure the optimization velocity of information, not production or growth rates.  At the tail end of a very long process, knowledge finally does translate into power - guns or nanotechnology or whatever.  But along that long way, if you're measuring the number of material copies of the same stuff (how many wolves, how many people, how much grain), you may not be getting much of a glimpse at optimization velocity.  Too many complications along the causal chain.


And this is not just my problem.

Back in the bad old days of pre-1960s evolutionary biology, it was widely taken for granted that there was such a thing as progress, that it proceeded forward over time, and that modern human beings were at the apex.

George Williams's Adaptation and Natural Selection, marking the so-called "Williams Revolution" in ev-bio that flushed out a lot of the romanticism and anthropomorphism, spent most of one chapter questioning the seemingly common-sensical metrics of "progress".

Biologists sometimes spoke of "morphological complexity" increasing over time.  But how do you measure that, exactly?  And at what point in life do you measure it if the organism goes through multiple stages?  Is an amphibian more advanced than a mammal, since its genome has to store the information for multiple stages of life?

"There are life cycles enormously more complex than that of a frog," Williams wrote.  "The lowly and 'simple' liver fluke..." goes through stages that include a waterborne stage that swims using cilia; finds and burrows into a snail and then transforms into a sporocyst; that reproduces by budding to produce redia; that migrate in the snail and reproduce asexually; then transform into cercaria, that, by wiggling a tail, burrows out of the snail and swims to a blade of grass; where they transform into dormant metacercaria; that are eaten by sheep and then hatch into a young fluke inside the sheep; then transform into adult flukes; which spawn fluke zygotes...  So how "advanced" is that?

Williams also pointed out that there would be a limit to how much information evolution could maintain in the genome against degenerative pressures - which seems like a good principle in practice, though I made some mistakes on OB in trying to describe the theory.

Taxonomists often take a current form and call the historical trend toward it "progress", but is that upward motion, or just substitution of some adaptations for other adaptations in response to changing selection pressures?

"Today the fishery biologists greatly fear such archaic fishes as the bowfin, garpikes, and lamprey, because they are such outstandingly effective competitors," Williams noted.

So if I were talking about the effect of e.g. sex as a meta-level innovation, then I would expect e.g. an increase in the total biochemical and morphological complexity that could be maintained - the lifting of a previous upper bound, followed by an accretion of information.  And I might expect a change in the velocity of new adaptations replacing old adaptations.


But to get from there, to something that shows up in the fossil record - that's not a trivial step.

I recall reading, somewhere or other, about an ev-bio controversy that ensued when one party spoke of the "sudden burst of creativity" represented by the Cambrian explosion, and wondered why evolution was proceeding so much more slowly nowadays.  And another party responded that the Cambrian differentiation was mainly visible post hoc - that the groups of animals we have now first differentiated from one another then, but that at the time the differences were not as large as they loom nowadays.  That is, the actual velocity of adaptational change wasn't remarkable by comparison to modern times, and only hindsight causes us to see those changes as "staking out" the ancestry of the major animal groups.

I'd be surprised to learn that sex had no effect on the velocity of evolution.  It looks like it should increase the speed and number of substituted adaptations, and also increase the complexity bound on the total genetic information that can be maintained against mutation.  But to go from there, to just looking at the fossil record and seeing faster progress - it's not just me who thinks that this jump to phenomenology is tentative, difficult, and controversial.

Should you expect more speciation after the invention of sex, or less?  The first impulse is to say "more", because sex seems like it should increase the optimization velocity and speed up time.  But sex also creates mutually reproducing populations that share genes among themselves, as opposed to asexual lineages - so might that act as a centripetal force?

I don't even propose to answer this question, just point out that it is actually quite standard for the phenomenology of evolutionary theories - the question of which observables are predicted - to be a major difficulty.  Unless you're dealing with really easy qualitative questions like "Should I find rabbit fossils in the pre-Cambrian?"  (I try to only make predictions about AI, using my theory of optimization, when it looks like an easy question.)


Yes, it's more convenient for scientists when theories make easily testable, readily observable predictions.  But when I look back at the history of life, and the history of humanity, my first priority is to ask "What's going on here?", and only afterward see if I can manage to make non-obvious retrodictions.  I can't just start with the goal of having a convenient phenomenology.  Or similarly: the theories I use to organize my understanding of the history of optimization to date, have lots of parameters, e.g. the optimization-efficiency curve that describes optimization output as a function of resource input, or the question of how many low-hanging fruit exist in the neighborhood of a given search point.  Does a larger population of wolves increase the velocity of natural selection, by covering more of the search neighborhood for possible mutations?  If so, is that a logarithmic increase with population size, or what?  - But I can't just wish my theories into being simpler.
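
To illustrate why such parameters resist being pinned down - a sketch with invented numbers, in which a logarithmic velocity curve and a small-exponent power law stay within about ten percent of each other across three orders of magnitude of population:

    import math

    # Two candidate optimization-velocity curves; both parameter choices
    # are invented for illustration, not fitted to any real data.
    for n in (100, 1000, 10000, 100000):
        log_curve = 1.5 * math.log(n)
        power_curve = 3.6 * n ** 0.14
        print(n, round(log_curve, 1), round(power_curve, 1))
    # The two columns track each other closely - so coarse, noisy
    # observations of "how fast evolution went" won't say which curve
    # (if either) describes the underlying optimization process.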

If Robin has a simpler causal model, with fewer parameters, that stands directly behind observables and easily coughs up testable predictions, which fits the data well, and obviates the need for my own abstractions like "optimization efficiency" -

- then I may have to discard my own attempts at theorizing.  But observing a series of material growth modes doesn't contradict a causal model of optimization behind the scenes, because it's a pure phenomenology, not itself a causal model - it doesn't say whether a given innovation had any effect on the optimization velocity of the process that produced future object-level innovations that actually changed growth modes, etcetera.

" } }, { "_id": "YyYwRxajPkAyajKXx", "title": "Whence Your Abstractions?", "pageUrl": "https://www.lesswrong.com/posts/YyYwRxajPkAyajKXx/whence-your-abstractions", "postedAt": "2008-11-20T01:07:46.000Z", "baseScore": 12, "voteCount": 10, "commentCount": 6, "url": null, "contents": { "documentId": "YyYwRxajPkAyajKXx", "html": "

Reply to:  Abstraction, Not Analogy


Robin asks:

Eliezer, have I completely failed to communicate here?  You have previously said nothing is similar enough to this new event for analogy to be useful, so all we have is "causal modeling" (though you haven't explained what you mean by this in this context).  This post is a reply saying, no, there are more ways using abstractions; analogy and causal modeling are two particular ways to reason via abstractions, but there are many other ways.

Well... it shouldn't be surprising if you've communicated less than you thought.  Two people, both of whom know that disagreement is not allowed, have a persistent disagreement.  It doesn't excuse anything, but - wouldn't it be more surprising if their disagreement rested on intuitions that were easy to convey in words, and points readily dragged into the light?


I didn't think from the beginning that I was succeeding in communicating.  Analogizing Doug Engelbart's mouse to a self-improving AI is for me such a flabbergasting notion - indicating such completely different ways of thinking about the problem - that I am trying to step back and find the differing sources of our differing intuitions.


(Is that such an odd thing to do, if we're really following down the path of not agreeing to disagree?)

"Abstraction", for me, is a word that means a partitioning of possibility - a boundary around possible things, events, patterns.  Abstractions are in no sense neutral; they act as signposts saying "lump these things together for predictive purposes".  To use the word "singularity" as ranging over human brains, farming, industry, and self-improving AI, is very nearly to finish your thesis right there.

I wouldn't be surprised to find that, in a real AI, 80% of the actual computing crunch goes into drawing the right boundaries to make the actual reasoning possible.  The question "Where do abstractions come from?" cannot be taken for granted.


Boundaries are drawn by appealing to other boundaries.  To draw the boundary "human" around things that wear clothes and speak language and have a certain shape, you must have previously noticed the boundaries around clothing and language.  And your visual cortex already has a (damned sophisticated) system for categorizing visual scenes into shapes, and the shapes into categories.


It's very much worth distinguishing between boundaries drawn by noticing a set of similarities, and boundaries drawn by reasoning about causal interactions.


There's a big difference between saying "I predict that Socrates, like other humans I've observed, will fall into the class of 'things that die when drinking hemlock'" and saying "I predict that Socrates, whose biochemistry I've observed to have this-and-such characteristics, will have his neuromuscular junction disrupted by the coniine in the hemlock - even though I've never seen that happen, I've seen lots of organic molecules and I know how they behave."


But above all - ask where the abstraction comes from!

To see that a hammer is not good to hold high in a lightning storm, we draw on the pre-existing knowledge that you're not supposed to hold electrically conductive things up at high altitudes - this is a predrawn boundary, found by us in books; probably originally learned from experience and then further explained by theory.  We just test the hammer to see if it fits in a pre-existing boundary, that is, a boundary we drew before we ever thought about the hammer.

To evaluate the cost to carry a hammer in a tool kit, you probably visualized the process of putting the hammer in the kit, and the process of carrying it.  Its mass determines the strain on your arm muscles.  Its volume and shape - not just "volume", as you can see as soon as that is pointed out - determine the difficulty of fitting it into the kit.  You said "volume and mass" but that was an approximation, and as soon as I say "volume and mass and shape" you say, "Oh, of course that's what I meant" - based on a causal visualization of trying to fit some weirdly shaped object into a toolkit, or e.g. a thin ten-foot pin of low volume and high annoyance.  So you're redrawing the boundary based on a causal visualization which shows that other characteristics can be relevant to the consequence you care about.


None of your examples talk about drawing new conclusions about the hammer by analogizing it to other things rather than directly assessing its characteristics in their own right, so it's not all that good an example when it comes to making predictions about self-improving AI by putting it into a group of similar things that includes farming or industry.


But drawing that particular boundary would already rest on causal reasoning that tells you which abstraction to use.   Very much an Inside View, and a Weak Inside View, even if you try to go with an Outside View after that.


Using an "abstraction" that covers such massively different things, will often be met by a differing intuition that makes a different abstraction, based on a different causal visualization behind the scenes.   That's what you want to drag into the light - not just say, "Well, I expect this Singularity to resemble past Singularities."


Robin said:

I am of course open to different ways to conceive of "the previous major singularities".  I have previously tried to conceive of them in terms of sudden growth speedups.

Is that the root source for your abstraction - "things that do sudden growth speedups"?  I mean... is that really what you want to go with here?

" } }, { "_id": "spKYZgoh3RmhxMqyu", "title": "The First World Takeover", "pageUrl": "https://www.lesswrong.com/posts/spKYZgoh3RmhxMqyu/the-first-world-takeover", "postedAt": "2008-11-19T15:00:00.000Z", "baseScore": 42, "voteCount": 29, "commentCount": 24, "url": null, "contents": { "documentId": "spKYZgoh3RmhxMqyu", "html": "

Before Robin and I move on to talking about the Future, it seems to me wise to check if we have disagreements in our view of the Past. Which might be much easier to discuss - and maybe even resolve...  So...


In the beginning was the Bang.  For nine billion years afterward, nothing much happened.


Stars formed, and burned for long periods or short periods depending on their structure; but "successful" stars that burned longer or brighter did not pass on their characteristics to other stars.  The first replicators were yet to come.


It was the Day of the Stable Things, when your probability of seeing something was given by its probability of accidental formation times its duration.  Stars last a long time; there are many helium atoms.


It was the Era of Accidents, before the dawn of optimization.  You'd only expect to see something with 40 bits of optimization if you looked through a trillion samples.  Something with 1000 bits' worth of functional complexity?  You wouldn't expect to find that in the whole universe.
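
The bookkeeping behind those numbers, spelled out:

    # "40 bits of optimization" marks a pattern you'd need about 2**40
    # blind samples to stumble on once - and 2**40 is about a trillion.
    print(2 ** 40)                 # 1099511627776
    # 1000 bits is hopeless for blind sampling: 2**1000 ~ 10**301, versus
    # roughly 10**80 atoms in the observable universe to sample with.
    print(2 ** 1000 > 10 ** 80)    # True, by over 220 orders of magnitude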


I would guess that, if you were going to be stuck on a desert island and you wanted to stay entertained as long as possible, then you should sooner choose to examine the complexity of the cells and biochemistry of a single Earthly butterfly, over all the stars and astrophysics in the visible universe beyond Earth.


It was the Age of Boredom.

The hallmark of the Age of Boredom was not lack of natural resources - it wasn't that the universe was low on hydrogen - but, rather, the lack of any cumulative search.  If one star burned longer or brighter, that didn't affect the probability distribution of the next star to form.  There was no search but blind search.  Everything from scratch, not even looking at the neighbors of previously successful points. Not hill-climbing, not mutation and selection, not even discarding patterns already failed.  Just a random sample from the same distribution, over and over again.
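
The difference in kind can be sketched in a few lines of code - the bitstring "pattern space" and the matching criterion are stand-ins, of course:

    import random

    N = 40  # pattern length; score = bits matching an arbitrary criterion
    target = [random.randint(0, 1) for _ in range(N)]
    score = lambda p: sum(a == b for a, b in zip(p, target))
    fresh = lambda: [random.randint(0, 1) for _ in range(N)]

    # Blind search: every sample drawn from scratch, from the same
    # distribution, forever - no memory of previous successes.
    blind_best = max(score(fresh()) for _ in range(10000))

    # Cumulative search: keep the best point found, explore its neighbors.
    current = fresh()
    for _ in range(10000):
        neighbor = list(current)
        neighbor[random.randrange(N)] ^= 1   # flip one bit
        if score(neighbor) > score(current):
            current = neighbor

    print(blind_best, score(current))
    # With the same number of samples, blind search hovers around 30/40
    # while the neighborhood-searcher typically reaches 40/40.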


The Age of Boredom ended with the first replicator.


(Or the first replicator to catch on, if there were failed alternatives lost to history - but this seems unlikely, given the Fermi Paradox; a replicator should be more improbable than that, or the stars would teem with life already.)


Though it might be most dramatic to think of a single RNA strand a few dozen bases long, forming by pure accident after who-knows-how-many chances on who-knows-how-many planets, another class of hypotheses deals with catalytic hypercycles - chemicals whose presence makes it more likely for other chemicals to form, with the arrows happening to finally go around in a circle.  If so, RNA would just be a crystallization of that hypercycle into a single chemical that can both take on enzymatic shapes, and store information in its sequence for easy replication.

The catalytic hypercycle is worth pondering, since it reminds us that the universe wasn't quite drawing its random patterns from the same distribution every time - the formation of a long-lived star made it more likely for a planet to form (if not another star to form), the formation of a planet made it more likely for amino acids and RNA bases to form in a pool of muck somewhere (if not more likely for planets to form).


In this flow of probability, patterns in one attractor leading to other attractors becoming stronger, there was finally born a cycle - perhaps a single strand of RNA, perhaps a crystal in clay, perhaps a catalytic hypercycle - and that was the dawn.


What makes this cycle significant?  Is it the large amount of material that the catalytic hypercycle or replicating RNA strand could absorb into its pattern?


Well, but any given mountain on Primordial Earth would probably weigh vastly more than the total mass devoted to copies of the first replicator.  What effect does mere mass have on optimization?

Suppose the first replicator had a probability of formation of 10^-30.  If that first replicator managed to make 10,000,000,000 copies of itself (I don't know if this would be an overestimate or underestimate for a tidal pool) then this would increase your probability of encountering the replicator-pattern by a factor of 10^10, the total probability going up to 10^-20.  (If you were observing "things" at random, that is, and not just on Earth but on all the planets with tidal pools.)  So that was a kind of optimization-directed probability flow.
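
The arithmetic, for the record:

    p_formation = 1e-30   # assumed chance of the pattern forming blindly
    copies = 1e10         # copies the replicator makes of itself
    print(p_formation * copies)   # 1e-20: the pattern is now 10**10
                                  # times easier to encounter at random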


But vastly more important, in the scheme of things, was this - that the first replicator made copies of itself, and some of those copies were errors.


That is, it explored the neighboring regions of the search space - some of which contained better replicators - and then those replicators ended up with more probability flowing into them, which explored their neighborhoods.


Even in the Age of Boredom there were always regions of attractor-space that were the gateways to other regions of attractor-space.  Stars begot planets, planets begot tidal pools.  But that's not the same as a replicator begetting a replicator - it doesn't search a neighborhood, find something that better matches a criterion (in this case, the criterion of effective replication) and then search that neighborhood, over and over.


This did require a certain amount of raw material to act as replicator feedstock.  But the significant thing was not how much material was recruited into the world of replication; the significant thing was the search, and the material just carried out that search. If, somehow, there'd been some way of doing the same search without all that raw material - if there'd just been a little beeping device that determined how well a pattern would replicate, and incremented a binary number representing "how much attention" to pay to that pattern, and then searched neighboring points in proportion to that number - well, that would have searched just the same.  It's not something that evolution can do, but if it happened, it would generate the same information.
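
A sketch of that beeping device - the same neighborhood search, with a counter standing in for tons of raw material; the scoring function, the exponential attention rule, and all parameters are invented for illustration:

    import random

    N = 20
    ideal = tuple(random.randint(0, 1) for _ in range(N))
    replication_quality = lambda p: sum(a == b for a, b in zip(p, ideal))

    start = tuple(random.randint(0, 1) for _ in range(N))
    attention = {start: 1}   # a number, not a population of molecules

    for _ in range(4000):
        # Pick a known pattern in proportion to its attention counter,
        # weighting good replicators (exponentially) more heavily...
        patterns = list(attention)
        parent = random.choices(patterns,
                                [attention[p] for p in patterns])[0]
        # ...then examine one neighboring point and score it.
        child = list(parent)
        child[random.randrange(N)] ^= 1
        child = tuple(child)
        attention[child] = (attention.get(child, 0)
                            + 2 ** replication_quality(child))

    print(max(replication_quality(p) for p in attention))
    # Climbs far above the ~10/20 of blind chance - the same cumulative
    # search as evolution, without a single copy of anything being made.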


Human brains routinely outthink the evolution of whole species, species whose net weights of biological material outweigh a human brain a million times over - the gun against a lion's paws.  It's not the amount of raw material, it's the search.


In the evolution of replicators, the raw material happens to carry out the search - but don't think that the key thing is how much gets produced, how much gets consumed.  The raw material is just a way of keeping score.  True, even in principle, you do need some negentropy and some matter to perform the computation. But the same search could theoretically be performed with much less material - examining fewer copies of a pattern, to draw the same conclusions, using more efficient updating on the evidence. Replicators happen to use the number of copies produced of themselves, as a way of keeping score.


But what really matters isn't the production, it's the search.

If, after the first primitive replicators had managed to produce a few tons of themselves, you deleted all those tons of biological material, and substituted a few dozen cells here and there from the future - a single alga, a single bacterium - to say nothing of a whole multicellular C. elegans roundworm with a 302-neuron brain - then Time would leap forward by billions of years, even if the total mass of Life had just apparently shrunk.  The search would have leapt ahead, and production would recover from the apparent "setback" in a handful of easy doublings.


The first replicator was the first great break in History - the first Black Swan that would have been unimaginable by any surface analogy.  No extrapolation of previous trends could have spotted it - you'd have had to dive down into causal modeling, in enough detail to visualize the unprecedented search.

Not that I'm saying I would have guessed, without benefit of hindsight - if somehow I'd been there as a disembodied and unreflective spirit, knowing only the previous universe as my guide - having no highfalutin' concepts of "intelligence" or "natural selection" because those things didn't exist in my environment, and I had no mental mirror in which to see myself - and indeed, who should have guessed it with anything short of godlike intelligence?  When all the previous history of the universe contained no break in History that sharp?  The replicator was the first Black Swan.


Maybe I, seeing the first replicator as a disembodied unreflective spirit, would have said, "Wow, what an amazing notion - some of the things I see won't form with high probability, or last for long times - they'll be things that are good at copying themselves, instead.  It's the new, third reason for seeing a lot of something!"  But would I have been imaginative enough to see the way to amoebas, to birds, to humans?  Or would I have just expected it to hit the walls of the tidal pool and stop?


Try telling a disembodied spirit who had watched the whole history of the universe up to that point about the birds and the bees, and they would think you were absolutely and entirely out to lunch.  For nothing remotely like that would have been found anywhere else in the universe - and it would obviously take an exponential and ridiculous amount of time to accidentally form a pattern like that, no matter how good it was at replicating itself once formed - and as for it happening many times over in a connected ecology, when the first replicator in the tidal pool took such a long time to happen - why, that would just be madness.  The Absurdity Heuristic would come into play.  Okay, it's neat that a little molecule can replicate itself - but this notion of a "squirrel" is insanity.  So far beyond a Black Swan that you can't even call it a swan anymore.


That first replicator took over the world - in what sense?  Earth's crust, Earth's magma, far outweighs its mass of Life.  But Robin and I both suspect, I think, that the fate of the universe, and all those distant stars that outweigh us, will end up shaped by Life.  So that the universe ends up hanging quite heavily on the existence of that first replicator, and not on the counterfactual states of any particular other molecules nearby...  In that sense, a small handful of atoms once seized the reins of Destiny.


How?  How did the first replicating pattern take over the world? Why didn't all those other molecules get an equal vote in the process?


Well, that initial replicating pattern was doing some kind of search - some kind of optimization - and nothing else in the Universe was even trying. Really it was evolution that took over the world, not the first replicating pattern per se - you don't see many copies of it around any more.  But still, once upon a time the thread of Destiny was seized and concentrated and spun out from a small handful of atoms.


The first replicator did not set in motion a clever optimization process.  Life didn't even have sex yet, or DNA to store information at very high fidelity.  But the rest of the Universe had zip.  In the kingdom of blind chance, the myopic optimization process is king.


Issues of "sharing improvements" or "trading improvements" wouldn't even arise - there were no partners from outside.  All the agents, all the actors of our modern world, are descended from that first replicator, and none from the mountains and hills.


And that was the story of the First World Takeover, when a shift in the structure of optimization - namely, moving from no optimization whatsoever, to natural selection - produced a stark discontinuity with previous trends; and squeezed the flow of the whole universe's destiny through the needle's eye of a single place and time and pattern.


That's Life.

" } }, { "_id": "w9KWNWFTXivjJ7rjF", "title": "The Weak Inside View", "pageUrl": "https://www.lesswrong.com/posts/w9KWNWFTXivjJ7rjF/the-weak-inside-view", "postedAt": "2008-11-18T18:37:33.000Z", "baseScore": 31, "voteCount": 25, "commentCount": 22, "url": null, "contents": { "documentId": "w9KWNWFTXivjJ7rjF", "html": "

Followup to:   The Outside View's Domain


When I met Robin in Oxford for a recent conference, we had a preliminary discussion on the Singularity - this is where Robin suggested using production functions.  And at one point Robin said something like, "Well, let's see whether your theory's predictions fit previously observed growth rate curves," which surprised me, because I'd never thought of that at all.


It had never occurred to me that my view of optimization ought to produce quantitative predictions.  It seemed like something only an economist would try to do, as 'twere.  (In case it's not clear, sentence 1 is self-deprecating and sentence 2 is a compliment to Robin.  --EY)


Looking back, it's not that I made a choice to deal only in qualitative predictions, but that it didn't really occur to me to do it any other way.


Perhaps I'm prejudiced against the Kurzweilian crowd, and their Laws of Accelerating Change and the like.  Way back in the distant beginning that feels like a different person, I went around talking about Moore's Law and the extrapolated arrival time of "human-equivalent hardware" a la Moravec.  But at some point I figured out that if you weren't exactly reproducing the brain's algorithms, porting cognition to fast serial hardware and to human design instead of evolved adaptation would toss the numbers out the window - and that how much hardware you needed depended on how smart you were - and that sort of thing.


Betrayed, I decided that the whole Moore's Law thing was silly and a corruption of futurism, and I restrained myself to qualitative predictions (and retrodictions) thenceforth.

Though this is to some extent an argument produced after the conclusion, I would explain my reluctance to venture into quantitative futurism, via the following trichotomy:


So to me it seems "obvious" that my view of optimization is only strong enough to produce loose qualitative conclusions, and that it can only be matched to its retrodiction of history, or wielded to produce future predictions, on the level of qualitative physics.


"Things should speed up here", I could maybe say.  But not "The doubling time of this exponential should be cut in half."


I aspire to a deeper understanding of intelligence than this, mind you.  But I'm not sure that even perfect Bayesian enlightenment, would let me predict quantitatively how long it will take an AI to solve various problems in advance of it solving them.  That might just rest on features of an unexplored solution space which I can't guess in advance, even though I understand the process that searches.


Robin keeps asking me what I'm getting at by talking about some reasoning as "deep" while other reasoning is supposed to be "surface".  One thing which makes me worry that something is "surface", is when it involves generalizing a level N feature across a shift in level N-1 causes.


For example, suppose you say "Moore's Law has held for the last sixty years, so it will hold for the next sixty years, even after the advent of superintelligence" (as Kurzweil seems to believe, since he draws his graphs well past the point where you're buying "a billion times human brainpower for $1000").


Now, if the Law of Accelerating Change were an exogenous, ontologically fundamental, precise physical law, then you wouldn't expect it to change with the advent of superintelligence.


But to the extent that you believe Moore's Law depends on human engineers, and that the timescale of Moore's Law has something to do with the timescale on which human engineers think, then extrapolating Moore's Law across the advent of superintelligence is extrapolating it across a shift in the previous causal generator of Moore's Law.


So I'm worried when I see generalizations extrapolated across a change in causal generators not themselves described - i.e. the generalization itself is on a level of the outputs of those generators and doesn't describe the generators directly.


If, on the other hand, you extrapolate Moore's Law out to 2015 because it's been reasonably steady up until 2008 - well, Reality is still allowed to say "So what?", to a greater extent than we can expect to wake up one morning and find Mercury in Mars's orbit.  But I wouldn't bet against you, if you just went ahead and drew the graph.


So what's "surface" or "deep" depends on what kind of context shifts you try to extrapolate past.


Robin Hanson said:

Taking a long historical view, we see steady total growth rates punctuated by rare transitions when new faster growth modes appeared with little warning.  We know of perhaps four such "singularities": animal brains (~600MYA), humans (~2MYA), farming (~10KYA), and industry (~0.2KYA).  The statistics of previous transitions suggest we are perhaps overdue for another one, and would be substantially overdue in a century.  The next transition would change the growth rate rather than capabilities directly, would take a few years at most, and the new doubling time would be a week to a month.

Why do these transitions occur?  Why have they been similar to each other?  Are the same causes still operating?  Can we expect the next transition to be similar for the same reasons?


One may of course say, "I don't know, I just look at the data, extrapolate the line, and venture this guess - the data is more sure than any hypotheses about causes."  And that will be an interesting projection to make, at least.


But you shouldn't be surprised at all if Reality says "So what?"  I mean - real estate prices went up for a long time, and then they went down.  And that didn't even require a tremendous shift in the underlying nature and causal mechanisms of real estate.


To stick my neck out further:  I am liable to trust the Weak Inside View over a "surface" extrapolation, if the Weak Inside View drills down to a deeper causal level and the balance of support is sufficiently lopsided.


I will go ahead and say, "I don't care if you say that Moore's Law has held for the last hundred years.  Human thought was a primary causal force in producing Moore's Law, and your statistics are all over a domain of human neurons running at the same speed.  If you substitute better-designed minds running at a million times human clock speed, the rate of progress ought to speed up - qualitatively speaking."


That is, the prediction is without giving precise numbers, or supposing that it's still an exponential curve; computation might spike to the limits of physics and then stop forever, etc.  But I'll go ahead and say that the rate of technological progress ought to speed up, given the said counterfactual intervention on underlying causes to increase the thought speed of engineers by a factor of a million.  I'll be downright indignant if Reality says "So what?" and has the superintelligence make slower progress than human engineers instead.  It really does seem like an argument so strong that even Reality ought to be persuaded.


It would be interesting to ponder what kind of historical track records have prevailed in such a clash of predictions - trying to extrapolate "surface" features across shifts in underlying causes without speculating about those underlying causes, versus trying to use the Weak Inside View on those causes and arguing that there is "lopsided" support for a qualitative conclusion; in a case where the two came into conflict...


...kinda hard to think of what that historical case would be, but perhaps I only lack history.


Robin, how surprised would you be if your sequence of long-term exponentials just... didn't continue?  If the next exponential was too fast, or too slow, or something other than an exponential?  To what degree would you be indignant, if Reality said "So what?"

" } }, { "_id": "o8Ju8pTGvAfkkg6xZ", "title": "Failure By Affective Analogy", "pageUrl": "https://www.lesswrong.com/posts/o8Ju8pTGvAfkkg6xZ/failure-by-affective-analogy", "postedAt": "2008-11-18T07:14:55.000Z", "baseScore": 27, "voteCount": 17, "commentCount": 3, "url": null, "contents": { "documentId": "o8Ju8pTGvAfkkg6xZ", "html": "

Previously in series:  Failure By Analogy

Alchemy is a way of thinking that humans do not instinctively spot as stupid.  Otherwise alchemy would never have been popular, even in medieval days.  Turning lead into gold by mixing it with things that seemed similar to gold, sounded every bit as reasonable, back in the day, as trying to build a flying machine with flapping wings.  (And yes, it was worth trying once, but you should notice if Reality keeps saying "So what?")


And the final and most dangerous form of failure by analogy is to say a lot of nice things about X, which is similar to Y, so we should expect nice things of Y. You may also say horrible things about Z, which is the polar opposite of Y, so if Z is bad, Y should be good.

Call this "failure by affective analogy".

Failure by affective analogy is when you don't just say, "This lemon glazing is yellow, gold is yellow, QED."  But rather say:

"And now we shall add delicious lemon glazing to the formula for the Philosopher's Stone, the root of all wisdom, since lemon glazing is beautifully yellow, like gold is beautifully yellow, and also lemon glazing is delightful on the tongue, indicating that it is possessed of a superior potency that delights the senses, just as the beauty of gold delights the senses..."

That's why you find people saying things like, "Neural networks are decentralized, just like democracies" or "Neural networks are emergent, just like capitalism".


A summary of the Standard Prepackaged Revolutionary New AI Paradigm might look like the following - and when reading, ask yourself how many of these ideas are affectively laden:


By means of this tremendous package deal fallacy, lots of good feelings are generated about the New Idea (even if it's thirty years old).  Enough nice words may even manage to start an affective death spiral.  Until finally, via the standard channels of affect heuristic and halo effect, it seems that the New Idea will surely be able to accomplish some extremely difficult end -


- like, say, true general intelligence -


- even if you can't quite give a walkthrough of the internal mechanisms which are going to produce that output.


(Why yes, I have seen AGIfolk trying to pull this on Friendly AI - as they explain how all we need to do is stamp the AI with the properties of Democracy and Love and Joy and Apple Pie and paint an American Flag on the case, and surely it will be Friendly as well - though they can't quite walk through internal cognitive mechanisms.)


From such reasoning as this (and this), came the string of false promises that were published in the newspapers (and led futurists who grew up during that era to be very disappointed in AI, leading them to feel negative affect that now causes them to put AI a hundred years in the future).


Let's say it again:  Reversed stupidity is not intelligence - if people are making bad predictions you should just discard them, not reason from their failure.


But there is a certain lesson to be learned.  A bounded rationalist cannot do all things, but the true Way should not overpromise - it should not (systematically/regularly/on average) hold out the prospect of success, and then deliver failure.  Even a bounded rationalist can aspire to be well calibrated, to not assign 90% probability unless they really do have good enough information to be right nine times out of ten.  If you only have good enough information to be right 6 times out of 10, just say 60% instead.  A bounded rationalist cannot do all things, but the true Way does not overpromise.


If you want to avoid failed promises of AI... then history suggests, I think, that you should not expect good things out of your AI system unless you have a good idea of how specifically it is going to happen.  I don't mean writing out the exact internal program state in advance.  But I also don't mean saying that the refrigeration unit will cool down the AI and make it more contemplative.  For myself, I seek to know the laws governing the AI's lawful uncertainty and lawful creativity - though I don't expect to know the full content of its future knowledge, or the exact design of its future inventions.


Don't want to be disappointed?  Don't hope!


Don't ask yourself if you're allowed to believe that your AI design will work.


Don't guess.  Know.


For this much I do know - if I don't know that my AI design will work, it won't.


There are various obvious caveats that need to be attached here, and various obvious stupid interpretations of this principle not to make.  You can't be sure a search will return successfully before you have run it -


- but you should understand on a gut level:  If you are hoping that your AI design will work, it will fail.  If you know that your AI design will work, then it might work.

And on the Friendliness part of that you should hold yourself to an even higher standard - ask yourself if you are forced to believe the AI will be Friendly - because in that aspect, above all, you must constrain Reality so tightly that even Reality is not allowed to say, "So what?"  This is a very tough test, but if you do not apply it, you will just find yourself trying to paint a flag on the case, and hoping.

" } }, { "_id": "C4EjbrvG3PvZzizZb", "title": "Failure By Analogy", "pageUrl": "https://www.lesswrong.com/posts/C4EjbrvG3PvZzizZb/failure-by-analogy", "postedAt": "2008-11-18T02:54:06.000Z", "baseScore": 30, "voteCount": 24, "commentCount": 13, "url": null, "contents": { "documentId": "C4EjbrvG3PvZzizZb", "html": "

Previously in series:  Logical or Connectionist AI?
Followup to:  Surface Analogies and Deep Causes

"One of [the Middle Ages'] characteristics was that 'reasoning by analogy' was rampant; another characteristic was almost total intellectual stagnation, and we now see why the two go together.  A reason for mentioning this is to point out that, by developing a keen ear for unwarranted analogies, one can detect a lot of medieval thinking today."
        -- Edsger W. Dijkstra


<geoff> neural nets are over-rated
<starglider> Their potential is overrated.
<geoff> their potential is us
        -- #sl4

Wasn't it in some sense reasonable to have high hopes of neural networks?  After all, they're just like the human brain, which is also massively parallel, distributed, asynchronous, and -


Hold on.  Why not analogize to an earthworm's brain, instead of a human's?

A backprop network with sigmoid units... actually doesn't much resemble biology at all.  Around as much as a voodoo doll resembles its victim.  The surface shape may look vaguely similar in extremely superficial aspects at a first glance.  But the interiors and behaviors, and basically the whole thing apart from the surface, are nothing at all alike.  All that biological neurons have in common with gradient-optimization ANNs is... the spiderwebby look.
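
For contrast, here is essentially everything a sigmoid unit does - one weighted sum, one squashing function, one gradient step; the numbers are arbitrary:

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    w, b = 0.5, 0.0        # one weight, one bias
    x, target = 1.0, 1.0   # one input, one desired output

    for _ in range(100):
        y = sigmoid(w * x + b)              # forward pass
        grad = (y - target) * y * (1 - y)   # derivative of squared error
        w -= 0.5 * grad * x                 # gradient descent
        b -= 0.5 * grad
    print(y)   # creeps toward the target; that is the whole mechanism -
               # no spikes, no neurotransmitters, no dendritic computation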

And who says that the spiderwebby look is the important fact about biology?  Maybe the performance of biological brains has nothing to do with being made out of neurons, and everything to do with the cumulative selection pressure put into the design.  Just like how the performance of biological brains has little to do with proteins being held together by van der Waals forces, instead of the much stronger covalent bonds that hold together silicon.  Sometimes evolution gets stuck with poor building material, and it can't refactor because it can't execute simultaneous changes to migrate the design.  If biology does some neat tricks with chemistry, it's because of the greater design pressures exerted by natural selection, not because the building materials are so wonderful.


Maybe neurons are just what brains happen to be made out of, because the blind idiot god is too stupid to sit down and invent transistors.  All the modules get made out of neurons because that's all there is, even if the cognitive work would be much better-suited to a 2GHz CPU.

"Early attempts to make flying machines often did things like attaching beak onto the front, or trying to make a wing which would flap like a bird's wing.  (This extraordinary persistent idea is found in Leonardo's notebooks and in a textbook on airplane design published in 1911.)  It is easy for us to smile at such naivete, but one should realize that it made good sense at the time.  What birds did was incredible, and nobody really knew how they did it.  It always seemed to involve feathers and flapping.  Maybe the beak was critical for stability..."
        -- Hayes and Ford, "Turing Test Considered Harmful"


So... why didn't the flapping-wing designs work?  Birds flap wings and they fly.  The flying machine flaps its wings.  Why, oh why, doesn't it fly?


Because... well... it just doesn't.  This kind of analogic reasoning is not binding on Reality.

One of the basic tests to apply to reasoning that sounds kinda-good is "How shocked can I justifiably be if Reality comes back and says 'So what'?"

For example:  Suppose that, after keeping track of the motions of the planets for centuries, and after confirming the underlying theory (General Relativity) to 14 decimal places, we predict where Mercury will be on July 1st, 2009.  So we have a prediction, but that's not the same thing as a fact, right?  Anyway, we look up in the sky on July 1st, 2009, and Reality says "So what!" - the planet Mercury has shifted outward to the same orbit as Mars.


In a case like this, I would be highly indignant and would probably sue Reality for breach of contract.

But suppose alternatively that, in the last twenty years, real estate prices have never gone down.  You say, "Real estate prices have never gone down - therefore, they won't go down next year!"  And next year, Reality says "So what?"  It seems to me that you have no right to be shocked.  You have used an argument to which Reality can easily say "So what?"

"Nature is the ultimate bigot, because it is obstinately and intolerantly devoted to its own prejudices and absolutely refuses to yield to the most persuasive rationalizations of humans."
        -- J. R. Molloy

It's actually pretty hard to find arguments so persuasive that even Reality finds them binding.  This is why Science is more difficult - why it's harder to successfully predict reality - than medieval scholars once thought.

One class of persuasive arguments that Reality quite often ignores is the Law of Similarity - that is, the argument that things which look similar ought to behave similarly.

A medieval alchemist puts lemon glazing onto a lump of lead.  The lemon glazing is yellow, and gold is yellow.  It seems like it ought to work... but the lead obstinately refuses to turn into gold.  Reality just comes back and says, "So what?  Things can be similar in some aspects without being similar in other aspects."


You should be especially suspicious when someone says, "I am building X, which will do P, because it is similar to Y, which also does P."

An abacus performs addition; and the beads of solder on a circuit board bear a certain surface resemblance to the beads on an abacus.  Nonetheless, the circuit board does not perform addition because we can find a surface similarity to the abacus.  The Law of Similarity and Contagion is not relevant.  The circuit board would work in just the same fashion if every abacus upon Earth vanished in a puff of smoke, or if the beads of an abacus looked nothing like solder.  A computer chip is not powered by its similarity to anything else, it just is.  It exists in its own right, for its own reasons.

The Wright Brothers calculated that their plane would fly - before it ever flew - using reasoning that took no account whatsoever of their aircraft's similarity to a bird.  They did look at birds (and I have looked at neuroscience) but the final calculations did not mention birds (I am fairly confident in asserting).  A working airplane does not fly because it has wings "just like a bird".  An airplane flies because it is an airplane, a thing that exists in its own right; and it would fly just as high, no more and no less, if no bird had ever existed.


The general form of failing-by-analogy runs something like this:


Analogical reasoning of this type is a very weak form of understanding.  Reality often says "So what?" and ignores the argument.


The one comes to us and says:  "Calculate how many synaptic transmissions per second take place in the human brain.  This is the computing power required for human-equivalent intelligence.  Raise enough venture capital to buy a supercomputer which performs the same number of floating-point operations per second.  Intelligence is bound to emerge from a machine that powerful."

So you reply:  "I'm sorry, I've never seen a human brain and I don't know anything about them.  So, without talking about a human brain, can you explain how you calculated that 10^17 floating-point operations per second is the exact amount necessary and sufficient to yield human-equivalent intelligence?"

And the one says:  "..."

You ask:  "Say, what is this property of 'human-equivalent intelligence' which you expect to get?  Can you explain it to me without pointing to a human?"

And the one says:  "..."

You ask:  "What makes you think that large amounts of computing power have something to do with 'intelligence', anyway?  Can you answer without pointing to the example of the human brain?  Pretend that I've never seen an 'intelligence' and that I have no reason as yet to believe any such thing can exist."


But you get the idea.

Now imagine that you go to the Wright Brothers and say:  "I've never seen a bird.  Why does your aircraft have 'wings'?  And what is it you mean by 'flying'?"

And the Wright Brothers respond:  "Well, by flying, we mean that this big heavy object is going to rise off the ground and move through the air without being supported.  Once the plane is moving forward, the wings accelerate air downward, which generates lift that keeps the plane aloft."

\n\n

If two processes have forms that are nearly identical,\nincluding internal structure that is similar to as many decimal places\nas you care to reason about, then you may be able to almost-prove\nresults from one to the other.  But if there is even one difference in\nthe internal structure, then any number of other similarities may be\nrendered void.  Two deterministic computations with identical data and\nidentical rules will yield identical outputs.  But if a single input\nbit is flipped from zero to one, the outputs are no longer required to\nhave anything in common.  The strength of analogical reasoning can be destroyed by a single perturbation.\n\n
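
To see that last point concretely, here is a five-line illustration - with SHA-256 standing in for any deterministic computation that mixes its input thoroughly; nothing about that particular choice is essential:

    import hashlib

    # Identical data, identical rules: identical outputs.
    data = bytearray(b"two computations with identical inputs")
    print(hashlib.sha256(bytes(data)).hexdigest())

    # Flip a single input bit, and the outputs are no longer required
    # to have anything in common.
    data[0] ^= 0b00000001
    print(hashlib.sha256(bytes(data)).hexdigest())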

Yes, sometimes analogy works.  But the more complex and dissimilar the objects are, the less likely it is to work.  The narrower the conditions required for success, the less likely it is to work.  The more complex the machinery doing the job, the less likely it is to work.  The shallower your understanding of the object of the analogy - the more you are looking at its surface characteristics rather than its deep mechanisms - the less likely analogy is to work.

Analogy might work for something on the order of:  "I crossed this river using a fallen log last time, so if I push another log across it, I might be able to cross it again."  It doesn't work for creating objects of the order of complexity of, say, a toaster oven.  (And hunter-gatherer bands face many rivers to cross, but not many toaster ovens to rewire.)

Admittedly, analogy often works in mathematics - much better than it does in science, in fact.  In mathematics you can go back and prove the idea which analogy originally suggested.  In mathematics, you get quick feedback about which analogies worked and which analogies didn't, and soon you pick up the pattern.  And in mathematics you can always see the entire insides of things; you are not stuck examining the surface of an opaque mystery.  Mathematical proposition A may be analogous to mathematical proposition B, which suggests the method; but afterward you can go back and prove A in its own right, regardless of whether or not B is true.  In some cases you may need proposition B as a lemma, but certainly not in all cases.

Which is to say: despite the misleading surface similarity, the "analogies" which mathematicians use are not analogous to the "analogies" of alchemists, and you cannot reason from the success of one to the success of the other.

" } }, { "_id": "59ithK3WGEMFk7iHJ", "title": "Whither OB?", "pageUrl": "https://www.lesswrong.com/posts/59ithK3WGEMFk7iHJ/whither-ob", "postedAt": "2008-11-17T19:38:20.000Z", "baseScore": 9, "voteCount": 7, "commentCount": 53, "url": null, "contents": { "documentId": "59ithK3WGEMFk7iHJ", "html": "

Robin plans to cut back posting shortly, after he and I have our long-awaited Disagreement about AI self-improvement.  As for myself - I'm not finished, but I'm way over schedule and need to move on soon.  I'm not going to stop posting entirely (I doubt I could if I tried) but I'm not going to be posting daily.


There are three directions that Overcoming Bias could go from here:


First, we could find enough good authors to keep going at a post per day.  Say, seven people who can and will write one post per week.  We can't compromise on quality, though.


Second, we could try to shift to a more community-based format.  Our most popular post ever, still getting hits to this day, was not written by Robin or myself or any of the recurring editors.  It's "My Favorite Liar" by Kai Chang, about the professor who inserted one false statement into each lecture.  If one-tenth of our readers contributed a single story as good as this... but neither Robin nor myself have time to vet them all.  So one approach would be to have a community forum where anyone could post, readers voted the posts up and down, and a front page to which the editors promoted posts deemed worthy.  I understand that Scoop has software like this, but I would like to know if our readers can recommend better community software (see below).


Third, we could close OB to new submissions and keep the archives online eternally, saying, "It had a good run."  As Nick put it, we shouldn't keep going if it means a slow degeneration.

My own perspective:  Overcoming Bias presently gets over a quarter-million monthly pageviews.  We've built something that seems like it should be important.  It feels premature, but I would like to try to launch an online rationalist community.


At this point, I'm advocating a hybrid approach:  Keep OB open with fewer posts of the same gravitas, hang out a sign for new authors, and also try starting up a community-based site with user submissions and more frequent shorter posts.  I've got plenty of light stuff to post, links and the like.


But:  What software should we use to support a rationalist community?


The Oxford Future of Humanity Institute and the Singularity Institute have volunteered to provide funding if necessary, so we aren't limited to free software.


And obviously we're not looking for software that lets our users throw sheep at one another.  The Internet already offers enough ways to waste time, thank you.  More like - how people can find each other geographically and meet up; something Reddit-like for upvoting and downvoting posts, links, and comments; better comment threading; ways to see only new comments on posts you've flagged - that sort of thing.  You know, actually useful stuff.  A lot of Web 2.0 seems to be designed for people with lots of time to waste, but I don't think we can assume that fact about our readership.


Even if you don't know the name of the software, if there's a community site you visit that does an exceptional job - letting users upvote and downvote to keep the quality high, threading discussions while still giving busy people a fast way to see what new comments have been posted, making it easy for both newcomers and oldtimers - go ahead and say what we should be looking at.

" } }, { "_id": "juomoqiNzeAuq4JMm", "title": "Logical or Connectionist AI?", "pageUrl": "https://www.lesswrong.com/posts/juomoqiNzeAuq4JMm/logical-or-connectionist-ai", "postedAt": "2008-11-17T08:03:12.000Z", "baseScore": 47, "voteCount": 33, "commentCount": 26, "url": null, "contents": { "documentId": "juomoqiNzeAuq4JMm", "html": "

Previously in series:  The Nature of Logic

People who don't work in AI, who hear that I work in AI, often ask me:  "Do you build neural networks or expert systems?"  This is said in much the same tones as "Are you a good witch or a bad witch?"

Now that's what I call successful marketing.

Yesterday I covered what I see when I look at "logic" as an AI technique.  I see something with a particular shape, a particular power, and a well-defined domain of useful application where cognition is concerned.  Logic is good for leaping from crisp real-world events to compact general laws, and then verifying that a given manipulation of the laws preserves truth.  It isn't even remotely close to the whole, or the center, of a mathematical outlook on cognition.

But for a long time, years and years, there was a tremendous focus in Artificial Intelligence on what I call "suggestively named LISP tokens" - a misuse of logic to try to handle cases like "Socrates is human, all humans are mortal, therefore Socrates is mortal".  For many researchers, this one small element of math was indeed their universe.

And then along came the amazing revolution, the new AI, namely connectionism.

In the beginning (1957) was Rosenblatt's Perceptron.  It was, I believe, billed as being inspired by the brain's biological neurons.  The Perceptron had exactly two layers, a set of input units, and a single binary output unit.  You multiplied the inputs by the weightings on those units, added up the results, and took the sign: that was the classification.  To learn from the training data, you checked the current classification on an input, and if it was wrong, you dropped a delta on all the weights to nudge the classification in the right direction.

The Perceptron could only learn to deal with training data that was linearly separable - points in a hyperspace that could be cleanly separated by a hyperplane.

And that was all that this amazing algorithm, "inspired by the brain", could do.

In 1969, Marvin Minsky and Seymour Papert pointed out that Perceptrons couldn't learn the XOR function because it wasn't linearly separable.  This killed off research in neural networks for the next ten years.
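
For concreteness, here is the whole algorithm as a few lines of Python - a sketch of my own, with an arbitrary learning rate and toy data, not anything from Rosenblatt's papers.  Trained on the linearly separable OR function it converges; trained on XOR, no setting of two weights and a bias can ever fit all four points:

    def predict(weights, bias, x):
        # Multiply the inputs by the weights, add up the results, and
        # take the sign: that is the classification.
        total = bias + sum(w * xi for w, xi in zip(weights, x))
        return 1 if total >= 0 else -1

    def train(samples, n_features, delta=0.1, epochs=50):
        weights, bias = [0.0] * n_features, 0.0
        for _ in range(epochs):
            for x, label in samples:
                if predict(weights, bias, x) != label:
                    # Wrong: drop a delta on the weights to nudge the
                    # classification in the right direction.
                    weights = [w + delta * label * xi
                               for w, xi in zip(weights, x)]
                    bias += delta * label
        return weights, bias

    or_data  = [([0, 0], -1), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
    xor_data = [([0, 0], -1), ([0, 1], 1), ([1, 0], 1), ([1, 1], -1)]

    w, b = train(or_data, 2)
    print([predict(w, b, x) for x, _ in or_data])   # [-1, 1, 1, 1]
    w, b = train(xor_data, 2)
    print([predict(w, b, x) for x, _ in xor_data])  # never all correct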

Now, you might think to yourself:  "Hey, what if you had more than two layers in a neural network?  Maybe then it could learn the XOR function?"

Well, but if you know a bit of linear algebra, you'll realize that if the units in your neural network have outputs that are linear functions of input, then any number of hidden layers is going to behave the same way as a single layer - you'll only be able to learn functions that are linearly separable.

Okay, so what if you had hidden layers and the outputs weren't linear functions of the input?

But you see - no one had any idea how to train a neural network like that.  Cuz, like, then this weight would affect that output and that other output too, nonlinearly, so how were you supposed to figure out how to nudge the weights in the right direction?

Just make random changes to the network and see if it did any better?  You may be underestimating how much computing power it takes to do things the truly stupid way.  It wasn't a popular line of research.

Then along came this brilliant idea, called "backpropagation":

You handed the network a training input.  The network classified it incorrectly.  So you took the partial derivative of the output error (in layer N) with respect to each of the individual nodes in the preceding layer (N - 1).  Then you could calculate the partial derivative of the output error with respect to any single weight or bias in the layer N - 1.  And you could also go ahead and calculate the partial derivative of the output error with respect to each node in the layer N - 2.  So you did layer N - 2, and then N - 3, and so on back to the input layer.  (Though backprop nets usually had a grand total of 3 layers.)  Then you just nudged the whole network a delta - that is, nudged each weight or bias by delta times its partial derivative with respect to the output error.

It says a lot about the nonobvious difficulty of doing math that it took years to come up with this algorithm.

I find it difficult to put into words just how obvious this is in retrospect.  You're just taking a system whose behavior is a differentiable function of continuous parameters, and sliding the whole thing down the slope of the error function.  There are much more clever ways to train neural nets, taking into account more than the first derivative, e.g. conjugate gradient optimization, and these take some effort to understand even if you know calculus.  But backpropagation is ridiculously simple.  Take the network, take the partial derivative of the error function with respect to each weight in the network, slide it down the slope.
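
And here is that ridiculous simplicity as running code - a tiny three-layer network learning XOR.  The layer sizes, learning rate, epoch count, and random seed are all arbitrary illustrative choices of mine; with an unlucky seed the net can still get stuck in a local minimum, which is its own small lesson:

    import math, random

    random.seed(1)

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    H = 3  # hidden units
    # Each hidden unit: [weight for x1, weight for x2, bias].
    w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
    # Output unit: one weight per hidden unit, plus a bias.
    w_o = [random.uniform(-1, 1) for _ in range(H + 1)]

    def forward(x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
        y = sigmoid(sum(w_o[i] * h[i] for i in range(H)) + w_o[H])
        return h, y

    data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]  # XOR

    delta = 0.5
    for _ in range(10000):
        for x, target in data:
            h, y = forward(x)
            # Partial derivative of the squared error with respect to
            # the output unit's weighted sum (layer N)...
            d_y = (y - target) * y * (1 - y)
            # ...propagated back to each node in layer N - 1.
            d_h = [d_y * w_o[i] * h[i] * (1 - h[i]) for i in range(H)]
            # Nudge each weight by delta times its partial derivative.
            for i in range(H):
                w_o[i] -= delta * d_y * h[i]
            w_o[H] -= delta * d_y
            for i in range(H):
                for j in range(2):
                    w_h[i][j] -= delta * d_h[i] * x[j]
                w_h[i][2] -= delta * d_h[i]

    print([round(forward(x)[1], 2) for x, _ in data])  # toward [0, 1, 1, 0]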

If I didn't know the history of connectionism, and I didn't know scientific history in general - if I had needed to guess without benefit of hindsight how long it ought to take to go from Perceptrons to backpropagation - then I would probably say something like:  "Maybe a couple of hours?  Lower bound, five minutes - upper bound, three days."

"Seventeen years" would have floored me.

And I know that backpropagation may be slightly less obvious if you don't have the idea of "gradient descent" as a standard optimization technique bopping around in your head.  I know that these were smart people, and I'm doing the equivalent of complaining that Newton only invented first-year undergraduate stuff, etc.

So I'm just mentioning this little historical note about the timescale of mathematical progress, to emphasize that all the people who say "AI is 30 years away so we don't need to worry about Friendliness theory yet" have moldy jello in their skulls.

(Which I suspect is part of a general syndrome where people's picture of Science comes from reading press releases that announce important discoveries, so that they're like, "Really?  You do science?  What kind of important discoveries do you announce?"  Apparently, in their world, when AI finally is "visibly imminent", someone just needs to issue a press release to announce the completion of Friendly AI theory.)

Backpropagation is not just clever; much more importantly, it turns out to work well in real life on a wide class of tractable problems.  Not all "neural network" algorithms use backprop, but if you said, "networks of connected units with continuous parameters and differentiable behavior which learn by traveling up a performance gradient", you would cover a pretty large swathe.

But the real cleverness is in how neural networks were marketed.

They left out the math.

To me, at least, it seems that a backprop neural network involves substantially deeper mathematical ideas than "Socrates is human, all humans are mortal, Socrates is mortal".  Newton versus Aristotle.  I would even say that a neural network is more analyzable - since it does more real cognitive labor on board a computer chip where I can actually look at it, rather than relying on inscrutable human operators who type "|- Human(Socrates)" into the keyboard under God knows what circumstances.

But neural networks were not marketed as cleverer math.  Instead they were marketed as a revolt against Spock.

No, more than that - the neural network was the new champion of the Other Side of the Force - the antihero of a Manichaean conflict between Law and Chaos.  And all good researchers and true were called to fight on the side of Chaos, to overthrow the corrupt Authority and its Order.  To champion Freedom and Individuality against Control and Uniformity.  To Decentralize instead of Centralize, substitute Empirical Testing for mere Proof, and replace Rigidity with Flexibility.

I suppose a grand conflict between Law and Chaos beats trying to explain calculus in a press release.

But the thing is, a neural network isn't an avatar of Chaos any more than an expert system is an avatar of Law.

It's just... you know... a system with continuous parameters and differentiable behavior traveling up a performance gradient.

And logic is a great way of verifying truth preservation by syntactic manipulation of compact generalizations that are true in crisp models.  That's it.  That's all.  This kind of logical AI is not the avatar of Math, Reason, or Law.

Both algorithms do what they do, and are what they are; nothing more.

But the successful marketing campaign said,

"The failure of logical systems to produce real AI has shown that intelligence isn't logical.  Top-down design doesn't work; we need bottom-up techniques, like neural networks."

And this is what I call the Lemon Glazing Fallacy, which generates an argument for a fully arbitrary New Idea in AI using the following template:

This only has the appearance of plausibility if you present a Grand Dichotomy.  It doesn't do to say "AI Technique #283 has failed for years to produce general intelligence - that's why you need to adopt my new AI Technique #420."  Someone might ask, "Well, that's very nice, but what about AI technique #59,832?"

No, you've got to make 420 and ¬420 into the whole universe - allow only these two possibilities - put them on opposite sides of the Force - so that ten thousand failed attempts to build AI are actually arguing for your own success.  All those failures are weighing down the other side of the scales, pushing up your own side... right?  (In Star Wars, the Force has at least one Side that does seem pretty Dark.  But who says the Jedi are the Light Side just because they're not Sith?)


Ten thousand failures don't tell you what will work.  They don't even say what should not be part of a successful AI system.  Reversed stupidity is not intelligence.


If you remove the power cord from your computer, it will stop working.  You can't thereby conclude that everything about the current system is wrong, and an optimal computer should not have an Intel processor or Nvidia video card or case fans or run on electricity.  Even though your current system has these properties, and it doesn't work.


As it so happens, I do believe that the type of systems usually termed GOFAI will not yield general intelligence, even if you run them on a computer the size of the moon.  But this opinion follows from my own view of intelligence.  It does not follow, even as suggestive evidence, from the historical fact that a thousand systems built using Prolog did not yield general intelligence.  So far as the logical sequitur goes, one might as well say that Silicon-Based AI has shown itself deficient, and we must try to build transistors out of carbon atoms instead.

Not to mention that neural networks have also been "failing" (i.e., not yet succeeding) to produce real AI for 30 years now.  I don't think this particular raw fact licenses any conclusions in particular.  But at least don't tell me it's still the new revolutionary idea in AI.

This is the original example I used when I talked about the "Outside the Box" box - people think of "amazing new AI idea" and return their first cache hit, which is "neural networks" due to a successful marketing campaign thirty goddamned years ago.  I mean, not every old idea is bad - but to still be marketing it as the new defiant revolution?  Give me a break.

And pity the poor souls who try to think outside the "outside the box" box - outside the ordinary bounds of logical AI vs. connectionist AI - and, after mighty strains, propose a hybrid system that includes both logical and neural-net components.


It goes to show that compromise is not always the path to optimality - though it may sound Deeply Wise to say that the universe must balance between Law and Chaos.


Where do Bayesian networks fit into this dichotomy?  They're parallel, asynchronous, decentralized, distributed, probabilistic.  And they can be proven correct from the axioms of probability theory.  You can preprogram them, or learn them from a corpus of unsupervised data - using, in some cases, formally correct Bayesian updating.  They can reason based on incomplete evidence.  Loopy Bayes nets, rather than computing the correct probability estimate, might compute an approximation using Monte Carlo - but the approximation provably converges - but we don't run long enough to converge...


Where does that fit on the axis that runs from logical AI to neural networks?  And the answer is that it doesn't.  It doesn't fit.

It's not that Bayesian networks "combine the advantages of logic and neural nets".  They're simply a different point in the space of algorithms, with different properties.

At the inaugural seminar of Redwood Neuroscience, I once saw a presentation describing a robot that started out walking on legs, and learned to run... in real time, over the course of around a minute.  The robot was stabilized in the Z axis, but it was still pretty darned impressive.  (When first exhibited, someone apparently stood up and said "You sped up that video, didn't you?" because they couldn't believe it.)

This robot ran on a "neural network" built by detailed study of biology.  The network had twenty neurons or so.  Each neuron had a separate name and its own equation.  And believe me, the robot's builders knew how that network worked.

Where does that fit into the grand dichotomy?  Is it top-down?  Is it bottom-up?  Calling it "parallel" or "distributed" seems like kind of a silly waste when you've only got 20 neurons - who's going to bother multithreading that?


This is what a real biologically inspired system looks like.  And let me say again, that video of the running robot would have been damned impressive even if it hadn't been done using only twenty neurons.  But that biological network didn't much resemble - at all, really - the artificial neural nets that are built using abstract understanding of gradient optimization, like backprop.


That network of 20 neurons, each with its own equation, built and understood from careful study of biology - where does it fit into the Manichaean conflict?  It doesn't.  It's just a different point in AIspace.

At a conference yesterday, I spoke to someone who thought that Google's translation algorithm was a triumph of Chaotic-aligned AI, because none of the people on the translation team spoke Arabic and yet they built an Arabic translator using a massive corpus of data.  And I said that, while I wasn't familiar in detail with Google's translator, the little I knew about it led me to believe that they were using well-understood algorithms - Bayesian ones, in fact - and that if no one on the translation team knew any Arabic, this was no more significant than Deep Blue's programmers playing poor chess.

Since Peter Norvig also happened to be at the conference, I asked him about it, and Norvig said that they started out doing an actual Bayesian calculation, but then took a couple of steps away.  I remarked, "Well, you probably weren't doing the real Bayesian calculation anyway - assuming conditional independence where it doesn't exist, and stuff", and Norvig said, "Yes, so we've already established what kind of algorithm it is, and now we're just haggling over the price."


Where does that fit into the axis of logical AI and neural nets?  It doesn't even talk to that axis.  It's just a different point in the design space.


The grand dichotomy is a lie - which is to say, a highly successful marketing campaign which managed to position two particular fragments of optimization as the Dark Side and Light Side of the Force.

" } }, { "_id": "23Ha6Su2GknYWJJk7", "title": "Boston-area Meetup: 11/18/08 9pm MIT/Cambridge", "pageUrl": "https://www.lesswrong.com/posts/23Ha6Su2GknYWJJk7/boston-area-meetup-11-18-08-9pm-mit-cambridge", "postedAt": "2008-11-16T03:19:09.000Z", "baseScore": 1, "voteCount": 3, "commentCount": 15, "url": null, "contents": { "documentId": "23Ha6Su2GknYWJJk7", "html": "

There will be an OB meetup this Tuesday in Cambridge MA, hosted by Michael Vassar, Owain Evans (grad student at MIT), and Dario Amodei (grad student at Princeton).  The event will take place on the MIT campus, in a spacious seminar room in MIT's Stata Center.  Refreshments will be provided.  Details and directions below the fold.

Please let us know in the comments if you plan to attend.

(Posted on behalf of Owain Evans.)

Time/date: 9pm, Tuesday 18 November.

Place: Room d461 in MIT's Stata Center.

Address: The Stata Center, 32 Vassar Street, Cambridge.

Directions: The nearest T stop is Kendall/MIT on the Red Line.

Enter Stata via the entrance facing Main Street (with a big metal "MIT" sign outside it) from 8.45pm and one of the hosts will guide you to d461.  Alternatively, here are directions to d461 once you reach Stata.

Email: owain (at) mit edu

Phone: If you can't find the room or if you arrive late and are unable to enter the building, call 610-608-3345.

" } }, { "_id": "c93eRh3mPaN62qrD2", "title": "The Nature of Logic", "pageUrl": "https://www.lesswrong.com/posts/c93eRh3mPaN62qrD2/the-nature-of-logic", "postedAt": "2008-11-15T06:20:32.000Z", "baseScore": 42, "voteCount": 34, "commentCount": 12, "url": null, "contents": { "documentId": "c93eRh3mPaN62qrD2", "html": "

Previously in series:  Selling Nonapples
Followup to:  The Parable of Hemlock


Decades ago, there was a tremendous amount of effort invested in certain kinds of systems that centered around first-order logic - systems where you entered "Socrates is a human" and "all humans are mortal" into a database, and it would spit out "Socrates is mortal".


The fact that these systems failed to produce "general intelligence" is sometimes taken as evidence of the inadequacy, not only of logic, but of Reason Itself.  You meet people who (presumably springboarding off the Spock meme) think that what we really need are emotional AIs, since logic hasn't worked.


What's really going on here is so completely different from the popular narrative that I'm not even sure where to start.  So I'm going to try to explain what I see when I look at a "logical AI".  It's not a grand manifestation of the ultimate power of Reason (which then fails).

Let's start with the logical database containing the statements:

|- Human(Socrates)
|- \x.(Human(x) -> Mortal(x))

which then produces the output

|- Mortal(Socrates)

where |- is how we say that our logic asserts something, and \x. means "all x" or "in what follows, we can substitute anything we want for x".


Thus the above is how we'd write the classic syllogism, "Socrates is human, all humans are mortal, therefore Socrates is mortal".
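
As a toy rendition of what such a database is doing - my own deliberately crude representation, not the innards of any historical system - forward chaining over those two statements produces the third:

    # Facts are (predicate, subject) pairs; each rule is a one-premise
    # universally quantified implication, \x. If(x) -> Then(x).
    facts = {("Human", "Socrates")}
    rules = [("Human", "Mortal")]

    changed = True
    while changed:  # chain until nothing new follows
        changed = False
        for if_pred, then_pred in rules:
            for pred, subj in list(facts):
                if pred == if_pred and (then_pred, subj) not in facts:
                    facts.add((then_pred, subj))
                    changed = True

    print(facts)  # {('Human', 'Socrates'), ('Mortal', 'Socrates')}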


Now a few months back, I went through the sequence on words, which included The Parable of Hemlock, which runs thusly:

    Socrates raised the glass of hemlock to his lips...
    "Do you suppose," asked one of the onlookers, "that even hemlock will not be enough to kill so wise and good a man?"
    "No," replied another bystander, a student of philosophy; "all men are mortal, and Socrates is a man; and if a mortal drink hemlock, surely he dies."
    "Well," said the onlooker, "what if it happens that Socrates isn't mortal?"
    "Nonsense," replied the student, a little sharply; "all men are mortal by definition; it is part of what we mean by the word 'man'.  All men are mortal, Socrates is a man, therefore Socrates is mortal.  It is not merely a guess, but a logical certainty."
    "I suppose that's right..." said the onlooker.  "Oh, look, Socrates already drank the hemlock while we were talking."
    "Yes, he should be keeling over any minute now," said the student.
    And they waited, and they waited, and they waited...
    "Socrates appears not to be mortal," said the onlooker.
    "Then Socrates must not be a man," replied the student.  "All men are mortal, Socrates is not mortal, therefore Socrates is not a man.  And that is not merely a guess, but a logical certainty."

If you're going to take "all men are mortal" as something true by definition, then you can never conclude that Socrates is "human" until after you've observed him to be mortal.  Since logical truths are true in all possible worlds, they never tell you which possible world you live in - no logical truth can predict the result of an empirical event which could possibly go either way.


Could a skeptic say that this logical database is not doing any cognitive work, since it can only tell us what we already know?  Or that, since the database is only using logic, it will be unable to do anything empirical, ever?


Even I think that's too severe.  The "logical reasoner" is doing a quantum of cognitive labor - it's just a small quantum.


Consider the following sequence of events:

This process, taken as a whole, is hardly absolutely certain, as in the Spock stereotype of rationalists who cannot conceive that they are wrong.  The process did briefly involve a computer program which mimicked a system, first-order classical logic, which also happens to be used by some mathematicians in verifying their proofs.  That doesn't lend the entire process the character of mathematical proof.  And if the process fails, somewhere along the line, that's no call to go casting aspersions on Reason itself.


In this admittedly contrived example, only an infinitesimal fraction of the cognitive work is being performed by the computer program.  It's such a small fraction that anything you could say about "logical AI", wouldn't say much about the process as a whole.


So what's an example of harder cognitive labor?


How about deciding that "human" is an important category to put things in?  It's not like we're born seeing little "human" tags hovering over objects, with high priority attached.  You have to discriminate stable things in the environment, like Socrates, from your raw sensory information.  You have to notice that various stable things are all similar to one another, wearing clothes and talking.  Then you have to draw a category boundary around the cluster, and harvest characteristics like vulnerability to hemlock.


A human operator, not the computer, decides whether or not to classify Socrates as a "human", based on his shape and clothes.  The human operator types |- Human(Socrates) into the database (itself an error-prone sort of process).  Then the database spits out |- Mortal(Socrates) - in the scenario, this is the only fact we've ever told it about humans, so we don't ask why it makes this deduction instead of another one.  A human looks at the screen, interprets |- Mortal(Socrates) to refer to a particular thing in the environment and to imply that thing's vulnerability to hemlock.  Then the human decides, based on their values, that they'd rather not see Socrates die; works out a plan to stop Socrates from dying; and executes motor actions to dash the chalice from Socrates's fingers.


Are the off-computer steps "logical"?  Are they "illogical"?  Are they unreasonable?  Are they unlawful?  Are they unmathy?


Let me interrupt this tale, to describe a case where you very much do want a computer program that processes logical statements:


Suppose you've got to build a computer chip with a hundred million transistors, and you don't want to recall your product when a bug is discovered in multiplication.  You might find it very wise to describe the transistors in first-order logic, and try to prove statements about how the chip performs multiplication.


But then why is logic suited to this particular purpose?


Logic relates abstract statements to specific models.  Let's say that I have an abstract statement like "all green objects are round" or "all round objects are soft".  Operating syntactically, working with just the sentences, I can derive "all green objects are soft".


Now if you show me a particular collection of shapes, and if it so happens to be true that every green object in that collection is round, and it also happens to be true that every round object is soft, then it will likewise be true that all green objects are soft.
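
The claim is small enough to verify by brute force: enumerate every tiny model, and check that wherever both premises hold of a collection, the syntactically derived statement holds too.  (The encoding below is my own toy illustration.)

    from itertools import product

    def holds(statement, collection):
        return all(statement(obj) for obj in collection)

    green_round = lambda o: (not o["green"]) or o["round"]  # green -> round
    round_soft  = lambda o: (not o["round"]) or o["soft"]   # round -> soft
    green_soft  = lambda o: (not o["green"]) or o["soft"]   # derived

    # All eight possible object types, and every collection of up to
    # three of them:
    objects = [dict(green=g, round=r, soft=s)
               for g, r, s in product([True, False], repeat=3)]
    for size in range(1, 4):
        for collection in product(objects, repeat=size):
            if holds(green_round, collection) and holds(round_soft, collection):
                assert holds(green_soft, collection)
    print("the derived statement held in every model where the premises held")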


We are not admitting of the possibility that a green-detector on a borderline green object will fire "green" on one occasion and "not-green" on the next.  The form of logic in which every proof step preserves validity, relates crisp models to crisp statements.  So if you want to draw a direct correspondence between elements of a logical model, and high-level objects in the real world, you had better be dealing with objects that have crisp identities, and categories that have crisp boundaries.


Transistors in a computer chip generally do have crisp identities.  So it may indeed be suitable to make a mapping between elements in a logical model, and real transistors in the real world.


So let's say you can perform the mapping and get away with it - then what?


The power of logic is that it relates models and statements.  So you've got to make that mental distinction between models on the one hand, and statements on the other.  You've got to draw a sharp line between the elements of a model that are green or round, and statements like \x.(green(x) -> round(x)).  The statement itself isn't green or round, but it can be true or false about a collection of objects that are green or round.


And here is the power of logic:  For each syntactic step we do on our statements, we preserve the match to any model.  In any model where our old collection of statements was true, the new statement will also be true.  We don't have to check all possible conforming models to see if the new statement is true in all of them.  We can trust certain syntactic steps in general - not to produce truth, but to preserve truth.


Then you do a million syntactic steps in a row, and because each step preserves truth, you can trust that the whole sequence will preserve truth.


We start with a chip.  We do some physics and decide that whenever transistor X is 0 at time T, transistor Y will be 1 at T+1, or some such - we credit that real events in the chip will correspond quite directly to a model of this statement.  We do a whole lot of syntactic manipulation on the abstract laws.  We prove a statement that describes binary multiplication.  And then we jump back to the model, and then back to the chip, and say, "Whatever the exact actual events on this chip, if they have the physics we described, then multiplication will work the way we want."


It would be considerably harder (i.e. impossible) to work directly with logical models of every possible computation the chip could carry out.  To verify multiplication on two 64-bit inputs, you'd need to check 340 trillion trillion trillion models - one for each of the 2^128 possible input pairs.


But this trick of doing a million derivations one after the other, and preserving truth throughout, won't work if the premises are only true 999 out of 1000 times.  You could get away with ten steps in the derivation and not lose too much, but a million would be out of the question.


So the truth-preserving syntactic manipulations we call "logic" can be very useful indeed, when we draw a correspondence to a digital computer chip where the transistor error rate is very low.


But if you're trying to draw a direct correspondence between the primitive elements of a logical model and, say, entire biological humans, that may not work out as well.


First-order logic has a number of wonderful properties, like detachment.  We don't care how you proved something - once you arrive at a statement, we can forget how we got it.  The syntactic rules are local, and use statements as fodder without worrying about their provenance.  So once we prove a theorem, there's no need to keep track of how, in particular, it was proved.


But what if one of your premises turns out to be wrong, and you have to retract something you already concluded?  Wouldn't you want to keep track of which premises you'd used?


If the burglar alarm goes off, that means that a burglar is burgling your house.  But if there was an earthquake that day, it's probably the earthquake that set off the alarm instead.  But if you learned that there was a burglar from the police, rather than the alarm, then you don't want to retract the "burglar" conclusion on finding that there was an earthquake...


It says a lot about the problematic course of early AI, that people first tried to handle this problem with nonmonotonic logics.  They would try to handle statements like "A burglar alarm indicates there's a burglar - unless there's an earthquake" using a slightly modified logical database that would draw conclusions and then retract them.


And this gave rise to huge problems for many years, because they were trying to do, in the style of logic, something that was not at all like the actual nature of logic as math.  Trying to retract a particular conclusion goes completely against the nature of first-order logic as a mathematical structure.


If you were given to jumping to conclusions, you might say "Well, math can't handle that kind of problem because there are no absolute laws as to what you conclude when you hear the burglar alarm - you've got to use your common-sense judgment, not math."


But it's not an unlawful or even unmathy question.


It turns out that for at least the kind of case I've described above - where you've got effects that have more than one possible cause - we can excellently handle a wide range of scenarios using a crystallization of probability theory known as "Bayesian networks".  And lo, we can prove all sorts of wonderful theorems that I'm not going to go into.  (See Pearl's "Probabilistic Reasoning in Intelligent Systems".)
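
As a sketch of the burglar story as a three-node network - solved here by brute-force enumeration rather than the elegant message-passing, and with every probability a number I made up for illustration:

    from itertools import product

    P_BURGLAR, P_QUAKE = 0.001, 0.01  # assumed priors

    def p_alarm(burglar, quake):
        # Assumed conditional probability table for the alarm node.
        return {(False, False): 0.001, (False, True): 0.3,
                (True,  False): 0.9,   (True,  True): 0.95}[(burglar, quake)]

    def p_burglar_given(alarm, quake=None):
        num = den = 0.0
        for b, q in product([True, False], repeat=2):
            if quake is not None and q != quake:
                continue  # inconsistent with the evidence
            p = ((P_BURGLAR if b else 1 - P_BURGLAR)
                 * (P_QUAKE if q else 1 - P_QUAKE)
                 * (p_alarm(b, q) if alarm else 1 - p_alarm(b, q)))
            den += p
            if b:
                num += p
        return num / den

    print(p_burglar_given(alarm=True))              # ~0.18: alarm suggests burglar
    print(p_burglar_given(alarm=True, quake=True))  # ~0.003: explained away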


And the real solution turned out to be much more elegant than all the messy ad-hoc attempts at "nonmonotonic logic".  On non-loopy networks, you can do all sorts of wonderful things like propagate updates in parallel using asynchronous messages, where each node only tracks the messages coming from its immediate neighbors in the graph, etcetera.  And this parallel, asynchronous, decentralized algorithm is provably correct as probability theory, etcetera.


So... are Bayesian networks illogical?


Certainly not in the colloquial sense of the word.


You could write a logic that implemented a Bayesian network.  You could represent the probabilities and graphs in a logical database.  The elements of your model would no longer correspond to things like Socrates, but rather correspond to conditional probabilities or graph edges...  But why bother?  Non-loopy Bayesian networks propagate their inferences in nicely local ways.  There's no need to stick a bunch of statements in a centralized logical database and then waste computing power to pluck out global inferences.


What am I trying to convey here?  I'm trying to convey that thinking mathematically about uncertain reasoning is a completely different concept from AI programs that assume direct correspondences between the elements of a logical model and the high-level regularities of reality.


"The failure of logical AI" is not "the failure of mathematical thinking about AI" and certainly not "the limits of lawful reasoning".  The "failure of logical AI" is more like, "That thing with the database containing statements about Socrates and hemlock - not only were you using the wrong math, but you weren't even looking at the interesting parts of the problem."


Now I did concede that the logical reasoner talking about Socrates and hemlock, was performing a quantum of cognitive labor.  We can now describe that quantum:


"After you've arrived at such-and-such hypothesis about what goes on behind the scenes of your sensory information, and distinguished the pretty-crisp identity of 'Socrates' and categorized it into the pretty-crisp cluster of 'human', then, if the other things you've observed to usually hold true of 'humans' are accurate in this case, 'Socrates' will have the pretty-crisp property of 'mortality'."


This quantum of labor tells you a single implication of what you already believe... but actually it's an even smaller quantum than this.  The step carried out by the logical database corresponds to verifying this step of inference, not deciding to carry it out.  Logic makes no mention of which inferences we should perform first - the syntactic derivations are timeless and unprioritized.  It is nowhere represented in the nature of logic as math, that if the 'Socrates' thingy is drinking hemlock, right now is a good time to ask if he's mortal.


And indeed, modern AI programs still aren't very good at guiding inference.  If you want to prove a computer chip correct, you've got to have a human alongside to suggest the lemmas to be proved next.  The nature of logic is better suited to verification than construction - it preserves truth through a million syntactic manipulations, but it doesn't prioritize those manipulations in any particular order.  So saying "Use logic!" isn't going to solve the problem of searching for proofs.


This doesn't mean that "What inference should I perform next?" is an unlawful question to which no math applies.  Just that the math of logic that relates models and statements, relates timeless models to timeless statements in a world of unprioritized syntactic manipulations.  You might be able to use logic to reason about time or about expected utility, the same way you could use it to represent a Bayesian network.  But that wouldn't introduce time, or wanting, or nonmonotonicity, into the nature of logic as a mathematical structure.


Now, math itself tends to be timeless and detachable and proceed from premise to conclusion, at least when it happens to be right.  So logic is well-suited to verifying mathematical thoughts - though producing those thoughts in the first place, choosing lemmas and deciding which theorems are important, is a whole different problem.


Logic might be well-suited to verifying your derivation of the Bayesian network rules from the axioms of probability theory.  But this doesn't mean that, as a programmer, you should try implementing a Bayesian network on top of a logical database.  Nor, for that matter, that you should rely on a first-order theorem prover to invent the idea of a "Bayesian network" from scratch.


Thinking mathematically about uncertain reasoning, doesn't mean that you try to turn everything into a logical model.  It means that you comprehend the nature of logic itself within your mathematical vision of cognition, so that you can see which environments and problems are nicely matched to the structure of logic.

" } }, { "_id": "2mLZiWxWKZyaRgcn7", "title": "Selling Nonapples", "pageUrl": "https://www.lesswrong.com/posts/2mLZiWxWKZyaRgcn7/selling-nonapples", "postedAt": "2008-11-13T20:10:58.000Z", "baseScore": 76, "voteCount": 54, "commentCount": 78, "url": null, "contents": { "documentId": "2mLZiWxWKZyaRgcn7", "html": "

Previously in series:  Worse Than Random


A tale of two architectures...

Once upon a time there was a man named Rodney Brooks, who could justly be called the King of Scruffy Robotics.  (Sample paper titles:  "Fast, Cheap, and Out of Control", "Intelligence Without Reason").  Brooks invented the "subsumption architecture" - robotics based on many small modules, communicating asynchronously and without a central world-model or central planning, acting by reflex, responding to interrupts.  The archetypal example is the insect-inspired robot that lifts its leg higher when the leg encounters an obstacle - it doesn't model the obstacle, or plan how to go around it; it just lifts its leg higher.

In Brooks's paradigm - which he labeled nouvelle AI - intelligence emerges from "situatedness".  One speaks not of an intelligent system, but rather of the intelligence that emerges from the interaction of the system and the environment.


And Brooks wrote a programming language, the behavior language, to help roboticists build systems in his paradigmatic subsumption architecture - a language that includes facilities for asynchronous communication in networks of reflexive components, and programming finite state machines.


My understanding is that, while there are still people in the world who speak with reverence of Brooks's subsumption architecture, it's not used much in commercial systems on account of being nearly impossible to program.


Once you start stacking all these modules together, it becomes more and more difficult for the programmer to decide that, yes, an asynchronous local module which raises the robotic leg higher when it detects a block, and meanwhile sends asynchronous signal X to module Y, will indeed produce effective behavior as the outcome of the whole intertwined system whereby intelligence emerges from interaction with the environment...

Asynchronous parallel decentralized programs are harder to write.  And it's not that they're a better, higher form of sorcery that only a few exceptional magi can use.  It's more like the difference between the two business plans, "sell apples" and "sell nonapples".



One noteworthy critic of Brooks's paradigm in general, and subsumption architecture in particular, is a fellow by the name of Sebastian Thrun.


You may recall the 2005 DARPA Grand Challenge for the driverless cars.  How many ways was this a fair challenge according to the tenets of Scruffydom?  Let us count the ways:


And the winning team was Stanley, the Stanford robot, built by a team led by Sebastian Thrun.

How did he do it?  If I recall correctly, Thrun said that the key was being able to integrate probabilistic information from many different sensors, using a common representation of uncertainty.  This is likely code for "we used Bayesian methods", at least if "Bayesian methods" is taken to include algorithms like particle filtering.


And to heavily paraphrase and summarize some of Thrun's criticisms of Brooks's subsumption architecture:


Robotics becomes pointlessly difficult if, for some odd reason, you insist that there be no central model and no central planning.


Integrating data from multiple uncertain sensors is a lot easier if you have a common probabilistic representation.  Likewise, there are many potential tasks in robotics - in situations as simple as navigating a hallway - when you can end up in two possible situations that look highly similar and have to be distinguished by reasoning about the history of the trajectory.


To be fair, it's not as if the subsumption architecture has never made money.  Rodney Brooks is the founder of iRobot, and I understand that the Roomba uses the subsumption architecture.  The Roomba has no doubt made more money than was won in the DARPA Grand Challenge... though the Roomba might not seem quite as impressive...


But that's not quite today's point.


Earlier in his career, Sebastian Thrun also wrote a programming language for roboticists.  Thrun's language was named CES, which stands for C++ for Embedded Systems.

CES is a language extension for C++.  Its types include probability distributions, which makes it easy for programmers to manipulate and combine multiple sources of uncertain information.  And for differentiable variables - including probabilities - the language enables automatic optimization using techniques like gradient descent.  Programmers can declare 'gaps' in the code to be filled in by training cases:  "Write me this function."

As a result, Thrun was able to write a small, corridor-navigating mail-delivery robot using 137 lines of code, and this robot required less than 2 hours of training.  As Thrun notes, "Comparable systems usually require at least two orders of magnitude more code and are considerably more difficult to implement."  Similarly, a 5,000-line robot localization algorithm was reimplemented in 52 lines.
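
I have not seen CES's internals, so what follows is only the declare-a-gap-and-train-it idea transplanted into a few lines of Python - the linear form of the gap, the learning rate, and the toy data are all assumptions of mine:

    def gap(params, x):
        # The "gap": the programmer fixes the shape (here, linear with
        # two trainable parameters) and leaves the behavior to be learned.
        return params[0] * x + params[1]

    def fill_gap(examples, lr=0.05, steps=3000):
        params = [0.0, 0.0]
        for _ in range(steps):
            for x, target in examples:
                err = gap(params, x) - target
                # Slide each parameter down the squared-error gradient.
                params[0] -= lr * err * x
                params[1] -= lr * err
        return params

    # Training cases "write the function" y = 2x + 1:
    print(fill_gap([(0, 1), (1, 3), (2, 5)]))  # approaches [2.0, 1.0]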


Why can't you get that kind of productivity with the subsumption architecture?  Scruffies, ideologically speaking, are supposed to believe in learning - it's only those evil logical Neats who try to program everything into their AIs in advance.  Then why does the subsumption architecture require so much sweat and tears from its programmers?

Suppose that you're trying to build a wagon out of wood, and unfortunately, the wagon has a problem, which is that it keeps catching on fire.  Suddenly, one of the wagon-workers drops his wooden beam.  His face lights up.  "I have it!" he says.  "We need to build this wagon from nonwood materials!"

You stare at him for a bit, trying to get over the shock of the new idea; finally you ask, "What kind of nonwood materials?"

The wagoneer hardly hears you.  "Of course!" he shouts.  "It's all so obvious in retrospect!  Wood is simply the wrong material for building wagons!  This is the dawn of a new era - the nonwood era - of wheels, axles, carts all made from nonwood!  Not only that, instead of taking apples to market, we'll take nonapples!  There's a huge market for nonapples - people buy far more nonapples than apples - we should have no trouble selling them!  It will be the era of the nouvelle wagon!"

The set "apples" is much narrower than the set "not apples".  Apples form a compact cluster in thingspace, but nonapples vary much more widely in price, and size, and use.  When you say to build a wagon using "wood", you're giving much more concrete advice than when you say "not wood".  There are different kinds of wood, of course - but even so, when you say "wood", you've narrowed down the range of possible building materials a whole lot more than when you say "not wood".

In the same fashion, "asynchronous" - literally "not synchronous" - is a much larger design space than "synchronous".  If one considers the space of all communicating processes, then synchrony is a very strong constraint on those processes.  If you toss out synchrony, then you have to pick some other method for preventing communicating processes from stepping on each other - synchrony is one way of doing that, a specific answer to the question.

Likewise "parallel processing" is a much huger design space than "serial processing", because serial processing is just a special case of parallel processing where the number of processors happens to be equal to 1.  "Parallel processing" reopens all sorts of design choices that are premade in serial processing.  When you say "parallel", it's like stepping out of a small cottage, into a vast and echoing country.  You have to stand someplace specific, in that country - you can't stand in the whole place, in the noncottage.

So when you stand up and shout:  "Aha!  I've got it!  We've got to solve this problem using asynchronous processes!", it's like shouting, "Aha!  I've got it!  We need to build this wagon out of nonwood!  Let's go down to the market and buy a ton of nonwood from the nonwood shop!"  You've got to choose some specific alternative to synchrony.

Now it may well be that there are other building materials in the universe than wood.  It may well be that wood is not the best building material.  But you still have to come up with some specific thing to use in its place, like iron.  "Nonwood" is not a building material, "sell nonapples" is not a business strategy, and "asynchronous" is not a programming architecture.

And this is strongly reminiscent of - arguably a special case of - the dilemma of inductive bias.  There's a tradeoff between the strength of the assumptions you make, and how fast you learn.  If you make stronger assumptions, you can learn faster when the environment matches those assumptions well, but you'll learn correspondingly more slowly if the environment matches those assumptions poorly.  If you make an assumption that lets you learn faster in one environment, it must always perform more poorly in some other environment.  Such laws are known as the "no-free-lunch" theorems, and the reason they don't prohibit intelligence entirely is that the real universe is a low-entropy special case.

Programmers have a phrase called the "Turing Tarpit"; it describes a situation where everything is possible, but nothing is easy.  A Universal Turing Machine can simulate any possible computer, but only at an immense expense in time and memory.  If you program in a high-level language like Python, then - while most programming tasks become much simpler - you may occasionally find yourself banging up against the walls imposed by the programming language; sometimes Python won't let you do certain things.  If you program directly in machine language, raw 1s and 0s, there are no constraints; you can do anything that can possibly be done by the computer chip; and it will probably take you around a thousand times as much time to get anything done.  You have to do, all by yourself, everything that a compiler would normally do on your behalf.


Usually, when you adopt a program architecture, that choice takes work off your hands.  If I use a standard container library - lists and arrays and hashtables - then I don't need to decide how to implement a hashtable, because that choice has already been made for me.


Adopting the subsumption paradigm means losing order, instead of gaining it.  The subsumption architecture is not-synchronous, not-serial, and not-centralized.  It's also not-knowledge-modelling and not-planning.


This absence of solution implies an immense design space, and it requires a correspondingly immense amount of work by the programmers to reimpose order.  Under the subsumption architecture, it's the programmer who decides to add an asynchronous local module which detects whether a robotic leg is blocked, and raises it higher.  It's the programmer who has to make sure that this behavior plus other module behaviors all add up to an (ideologically correct) emergent intelligence.  The lost structure is not replaced.  You just get tossed into the Turing Tarpit, the space of all other possible programs.


On the other hand, CES creates order; it adds the structure of probability distributions and gradient optimization.  This narrowing of the design space takes so much work off your hands that you can write a learning robot in 137 lines (at least if you happen to be Sebastian Thrun).


The moral:


Quite a few AI architectures aren't.


If you want to generalize, quite a lot of policies aren't.


They aren't choices.  They're just protests.

Added:  Robin Hanson says, "Economists have to face this in spades.  So many people say standard econ has failed and the solution is to do the opposite - non-equilibrium instead of equilibrium, non-selfish instead of selfish, non-individual instead of individual, etc."  It seems that selling nonapples is a full-blown Standard Iconoclast Failure Mode.

" } }, { "_id": "HTNJe8AWn2ZHpc4vC", "title": "Bay Area Meetup: 11/17 8PM Menlo Park", "pageUrl": "https://www.lesswrong.com/posts/HTNJe8AWn2ZHpc4vC/bay-area-meetup-11-17-8pm-menlo-park", "postedAt": "2008-11-13T05:32:25.000Z", "baseScore": 1, "voteCount": 3, "commentCount": 4, "url": null, "contents": { "documentId": "HTNJe8AWn2ZHpc4vC", "html": "

Robin Gane-McCalla plans to organize regular OB meetups in the Bay Area.  The next one is 8PM, November 17th, 2008 (Monday night) in Menlo Park at TechShop.  (Note that this is a room with seating, not a restaurant, so we hopefully get a chance to actually talk to each other - though I'll try to stay in the background myself.)


RSVP at Meetup.com.

" } }, { "_id": "AAqTP6Q5aeWnoAYr4", "title": "The Weighted Majority Algorithm", "pageUrl": "https://www.lesswrong.com/posts/AAqTP6Q5aeWnoAYr4/the-weighted-majority-algorithm", "postedAt": "2008-11-12T23:19:58.000Z", "baseScore": 23, "voteCount": 26, "commentCount": 96, "url": null, "contents": { "documentId": "AAqTP6Q5aeWnoAYr4", "html": "

Followup to:  Worse Than Random, Trust In Bayes

In the wider field of Artificial Intelligence, it is not universally agreed and acknowledged that noise hath no power.  Indeed, the conventional view in machine learning is that randomized algorithms sometimes perform better than unrandomized counterparts and there is nothing peculiar about this.  Thus, reading an ordinary paper in the AI literature, you may suddenly run across a remark:  "There is also an improved version of this algorithm, which takes advantage of randomization..."


Now for myself I will be instantly suspicious; I shall inspect the math for reasons why the unrandomized algorithm is being somewhere stupid, or why the randomized algorithm has a hidden disadvantage.  I will look for something peculiar enough to explain the peculiar circumstance of a randomized algorithm somehow doing better.

I am not completely alone in this view.  E. T. Jaynes, I found, was of the same mind:  "It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought."  Apparently there's now a small cottage industry in derandomizing algorithms.  But so far as I know, it is not yet the majority, mainstream view that "we can improve this algorithm by randomizing it" is an extremely suspicious thing to say.


Let us now consider a specific example - a mainstream AI algorithm where there is, apparently, a mathematical proof that the randomized version performs better.  By showing how subtle the gotcha can be, I hope to convince you that, even if you run across a case where the randomized algorithm is widely believed to perform better, and you can't find the gotcha yourself, you should nonetheless trust that there's a gotcha to be found.



For our particular example we shall examine the weighted majority algorithm, first introduced by Littlestone and Warmuth, but as a substantially more readable online reference I will use section 2 of Blum's On-Line Algorithms in Machine Learning.  (Blum has an easier-to-read, more concrete version of the proofs; and the unrandomized and randomized algorithms are presented side-by-side rather than far apart.)

\n

The weighted majority algorithm is an ensemble method - a way to combine the advice from several other algorithms or hypotheses, called \"experts\".  Experts strive to classify a set of environmental instances into \"positive\" and \"negative\" categories; each expert produces a prediction for each instance.  For example, each expert might look at a photo of a forest, and try to determine whether or not there is a camouflaged tank in the forest.  Expert predictions are binary, either \"positive\" or \"negative\", with no probability attached.  We do not assume that any of our experts is perfect, and we do not know which of our experts is the best.  We also do not assume anything about the samples (they may not be independent and identically distributed).


The weighted majority algorithm initially assigns an equal weight of 1 to all experts.  On each round, we ask all the experts for their predictions, and sum up the weights for each of the two possible predictions, \"positive\" or \"negative\".  We output the prediction that has the higher weight.  Then, when we see the actual result, we multiply by 1/2 the weight of every expert that got the prediction wrong.
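To make the bookkeeping concrete, here is a minimal sketch in Python - my own illustrative code, not Littlestone and Warmuth's; the function name and interface are invented for this post:

def weighted_majority(experts, outcomes, penalty=0.5):
    # experts: one list of predictions per expert; outcomes: true labels.
    # Both use 1 for positive and 0 for negative.
    weights = [1.0] * len(experts)
    mistakes = 0
    for t, outcome in enumerate(outcomes):
        # Sum the weight behind each of the two possible predictions.
        pos = sum(w for w, e in zip(weights, experts) if e[t] == 1)
        neg = sum(w for w, e in zip(weights, experts) if e[t] == 0)
        prediction = 1 if pos >= neg else 0   # ties broken arbitrarily
        if prediction != outcome:
            mistakes += 1
        # Halve the weight of every expert that got this round wrong.
        # Note the update never depends on our own output.
        weights = [w * penalty if e[t] != outcome else w
                   for w, e in zip(weights, experts)]
    return mistakes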


Suppose the total number of experts is n, and the best expert makes no more than m mistakes over some given sampling sequence.  Then we can prove that the weighted majority algorithm makes a total number of mistakes M that is bounded by 2.41*(m + log2(n)).  In other words, the weighted majority algorithm makes at most 2.41 times as many mistakes as the best expert, plus an additive term of 2.41*log2(n).

Proof (taken from Blum 1996; a similar proof appears in Littlestone and Warmuth 1989):  The combined weight of all the experts at the start of the problem is W = n.  If the weighted majority algorithm makes a mistake, at least half the total weight of experts predicted incorrectly, so the total weight is reduced by a factor of at least 1/4.  Thus, after the weighted majority algorithm makes M mistakes, its total weight W has been multiplied by a factor of at most (3/4)^M.  In other words:

W <= n*( (3/4)^M )

But if the best expert has made at most m mistakes, its weight is at least (1/2)^m.  And since W includes the weight of all experts,

W >= (1/2)^m

Therefore:

(1/2)^m <= W <= n*( (3/4)^M )
(1/2)^m <= n*( (3/4)^M )
-m      <= log2(n) + M*log2(3/4)
-m      <= log2(n) + M*-log2(4/3)
M*log2(4/3) <= m + log2(n)
M       <= (1/log2(4/3))*(m + log2(n))
M       <= 2.41*(m + log2(n))


Blum then says that \"we can achieve a better bound than that described above\", by randomizing the algorithm to predict \"positive\" or \"negative\" with probability proportional to the weight assigned each prediction.  Thus, if 3/4 of the expert weight went to \"positive\", we would predict \"positive\" with probability 75%, and \"negative\" with probability 25%.
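The randomized variant changes only the prediction step; again a sketch of my own, not Blum's code:

import random

def randomized_weighted_majority(experts, outcomes, penalty=0.5, rng=random):
    weights = [1.0] * len(experts)
    mistakes = 0
    for t, outcome in enumerate(outcomes):
        pos = sum(w for w, e in zip(weights, experts) if e[t] == 1)
        # Predict positive with probability equal to its share of the weight.
        prediction = 1 if rng.random() < pos / sum(weights) else 0
        if prediction != outcome:
            mistakes += 1
        # The weight update is identical to the unrandomized version.
        weights = [w * penalty if e[t] != outcome else w
                   for w, e in zip(weights, experts)]
    return mistakes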


An essentially similar proof, summing over the expected probability of a mistake on each round, will show that in this case:

M <= 1.39m + 2 ln(n)       (note: M is now an expectation)

Since the penalty applied to particular experts does not depend on the global prediction but only on the actual outcome, most of the proof proceeds as before.  We have

W >= (1/2)^m

where again m is the number of mistakes made by the best expert and W is the total weight of all experts.

We also have that W is the starting weight n times the product of (1 - 1/2 F_i), where F_i is the fraction of mistaken expert weight on round i:

W = n * Prod_i (1 - 1/2 F_i)

And since we predict with probability proportional to the expert weight, the expected number of mistakes is just the sum over F_i:

M = Sum_i F_i

So:

(1/2)^m                <= n * Prod_i (1 - 1/2 F_i)
-m*ln(2)               <= ln(n) + Sum_i ln(1 - 1/2 F_i)
Sum_i -ln(1 - 1/2 F_i) <= ln(n) + m*ln(2)
Sum_i (1/2 F_i)        <= ln(n) + m*ln(2)
(because x <= -ln(1 - x) )
Sum_i F_i              <= 2*(ln(n) + m*ln(2))
M                      <= 1.39m + 2 ln(n)


Behold, we have done better by randomizing!


We should be especially suspicious that the randomized algorithm guesses with probability proportional to the expert weight assigned.  This seems strongly reminiscent of betting with 70% probability on blue, when the environment is a random mix of 70% blue and 30% red cards.  We know the best bet - and yet we only sometimes make this best bet, at other times betting on a condition we believe to be less probable.


Yet we thereby prove a smaller upper bound on the expected error.  Is there an algebraic error in the second proof?  Are we extracting useful work from a noise source?  Is our knowledge harming us so much that we can do better through ignorance?


Maybe the unrandomized algorithm fails to take into account the Bayesian value of information, and hence fails to exhibit exploratory behavior?  Maybe it doesn't test unusual cases to see if they might have surprising results?


But, examining the assumptions, we see that the feedback we receive is fixed, regardless of the prediction's output.  Nor does the updating of the expert weights depend on the predictions we output.  It doesn't matter whether we substitute a completely random string of 1s and 0s as our actual output.  We get back exactly the same data from the environment, and the weighted majority algorithm updates the expert weights in exactly the same way.  So that can't be the explanation.


Are the component experts doing worse than random, so that by randomizing our predictions, we can creep back up toward maximum entropy?  But we didn't assume anything about how often the component experts were right, or use the fact in the proofs.  Therefore the proofs would carry even if we specified that all experts were right at least 60% of the time.  It's hard to see how randomizing our predictions could help, in that case - but the proofs still go through, unchanged.


So where's the gotcha?


Maybe I'm about to tell you that I looked, and I couldn't find the gotcha either.  What would you believe, in that case?  Would you think that the whole thesis on the futility of randomness was probably just wrong - a reasonable-sounding philosophical argument that simply wasn't correct in practice?  Would you despair of being able to follow the math, and give up and shrug, unable to decide who might be right?


We don't always see the flaws right away, and that's something to always remember.


In any case, I'm about to start explaining the gotcha.  If you want to go back, read the paper, and look yourself, you should stop reading now...


There does exist a rare class of occasions where we want a source of \"true\" randomness, such as a quantum measurement device.  For example, you are playing rock-paper-scissors against an opponent who is smarter than you are, and who knows exactly how you will be making your choices.  In this condition it is wise to choose randomly, because any method your opponent can predict will do worse-than-average.  Assuming that the enemy knows your source code has strange consequences: the action sequence 1, 2, 1, 3 is good when derived from a quantum noise generator, but bad when derived from any deterministic algorithm, even though it is the same action sequence.  Still it is not a totally unrealistic situation.  In real life, it is, in fact, a bad idea to play rock-paper-scissors using an algorithm your opponent can predict.  So are we, in this situation, deriving optimization from noise?


The random rock-paper-scissors player does not play cleverly, racking up lots of points.  It does not win more than 1/3 of the time, or lose less than 1/3 of the time (on average).  The randomized player does better because its alternatives perform poorly, not from being smart itself.  Moreover, by assumption, the opponent is an intelligence whom we cannot outsmart and who always knows everything about any method we use.  There is no move we can make that does not have a possible countermove.  By assumption, our own intelligence is entirely useless.  The best we can do is to avoid all regularity that the enemy can exploit.  In other words, we do best by minimizing the effectiveness of intelligence within the system-as-a-whole, because only the enemy's intelligence is allowed to be effective.  If we can't be clever ourselves, we might as well think of ourselves as the environment and the enemy as the sole intelligence within that environment.  By becoming the maximum-entropy environment for rock-paper-scissors, we render all intelligence useless, and do (by assumption) the best we can do.


When the environment is adversarial, smarter than you are, and informed about your methods, then in a theoretical sense it may be wise to have a quantum noise source handy.  (In a practical sense you're just screwed.)  Again, this is not because we're extracting useful work from a noise source; it's because the most powerful intelligence in the system is adversarial, and we're minimizing the power that intelligence can exert in the system-as-a-whole.  We don't do better-than-average, we merely minimize the extent to which the adversarial intelligence produces an outcome that is worse-than-average (from our perspective).


Similarly, cryptographers have a legitimate interest in strong randomness generators because cryptographers are trying to minimize the effectiveness of an intelligent adversary.  Certainly entropy can act as an antidote to intelligence.


Now back to the weighted majority algorithm.  Blum (1996) remarks:


\"Intuitively, the advantage of the randomized approach is that it dilutes the worst case.  Previously, the worst case was that slightly more than half of the total weight predicted incorrectly, causing the algorithm to make a mistake and yet only reduce the total weight by 1/4.  Now there is roughly a 50/50 chance that the [randomized] algorithm will predict correctly in this case, and more generally, the probability that the algorithm makes a mistake is tied to the amount that the weight is reduced.\"


From the start, we did our analysis for an upper bound on the number of mistakes made.  A global upper bound is no better than the worst individual case; thus, to set a global upper bound we must bound the worst individual case.  In the worst case, our environment behaves like an adversarial superintelligence.


Indeed randomness can improve the worst-case scenario, if the worst-case environment is allowed to exploit \"deterministic\" moves but not \"random\" ones.  It is like an environment that can decide to produce a red card whenever you bet on blue - unless you make the same bet using a \"random\" number generator instead of your creative intelligence.


Suppose we use a quantum measurement device to produce a string of random ones and zeroes; make two copies of this string; use one copy for the weighted majority algorithm's random number generator; and give another copy to an intelligent adversary who picks our samples.  In other words, we let the weighted majority algorithm make exactly the same randomized predictions, produced by exactly the same generator, but we also let the \"worst case\" environment know what these randomized predictions will be.  Then the improved upper bound of the randomized version is mathematically invalidated.
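That thought experiment is easy to run as code.  In this toy sketch (my own construction, with invented names and arbitrary parameters), the environment holds an identical copy of the algorithm's random number generator and declares the true outcome to be whatever the algorithm is about to not predict:

import random

def adversarial_run(n_experts=10, rounds=100, seed=0):
    alg_rng = random.Random(seed)    # the algorithm's noise source...
    env_rng = random.Random(seed)    # ...and the adversary's identical copy
    expert_rng = random.Random(42)   # experts here just guess at random
    weights = [1.0] * n_experts
    mistakes = 0
    expert_errors = [0] * n_experts
    for _ in range(rounds):
        votes = [expert_rng.randint(0, 1) for _ in range(n_experts)]
        pos = sum(w for w, v in zip(weights, votes) if v == 1)
        frac = pos / sum(weights)
        prediction = 1 if alg_rng.random() < frac else 0
        # The adversary reproduces the prediction exactly, then negates it.
        outcome = 1 - (1 if env_rng.random() < frac else 0)
        mistakes += prediction != outcome
        weights = [w * 0.5 if v != outcome else w
                   for w, v in zip(weights, votes)]
        expert_errors = [e + (v != outcome)
                         for e, v in zip(expert_errors, votes)]
    return mistakes, min(expert_errors)

The algorithm now errs on every single round - mistakes comes back equal to rounds - while the best expert errs on only about half of them, so the supposed bound of 1.39m + 2 ln(n) is blown.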


This shows that the improved upper bound proven for the randomized algorithm did not come from the randomized algorithm making systematically better predictions - doing superior cognitive work, being more intelligent - but because we arbitrarily declared that an intelligent adversary could read our mind in one case but not the other.


This is not just a minor quibble.  It leads to the testable prediction that on real-world problems, where the environment is usually not an adversarial telepath, the unrandomized weighted-majority algorithm should do better than the randomized version.  (Provided that some component experts outperform maximum entropy - are right more than 50% of the time on binary problems.)
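The prediction can be checked with a crude simulation, reusing the weighted_majority and randomized_weighted_majority sketches above (again illustrative code with arbitrary parameters):

import random

def compare(trials=1000, n_experts=10, accuracy=0.6, seed=0):
    rng = random.Random(seed)
    outcomes = [rng.randint(0, 1) for _ in range(trials)]
    # Each expert independently matches the true outcome 60% of the time.
    experts = [[o if rng.random() < accuracy else 1 - o for o in outcomes]
               for _ in range(n_experts)]
    return (weighted_majority(experts, outcomes),
            randomized_weighted_majority(experts, outcomes))

On runs like this I would expect the first number to come back reliably smaller than the second, since the randomized version keeps betting some of its probability mass on the minority side.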


Analyzing the worst-case scenario is standard practice in computational learning theory, but it makes the math do strange things.  Moreover, the assumption of the worst case can be subtle; it is not always explicitly labeled.  Consider the upper bound for the unrandomized weighted-majority algorithm.  I did not use the term \"worst case\" - I merely wrote down some inequalities.  In fact, Blum (1996), when initially introducing this bound, does not at first use the phrase \"worst case\".  The proof's conclusion says only:


\"Theorem 1. The number of mistakes M made by the Weighted Majority algorithm described above is never more than 2.41(m+lg n), where m is the number of mistakes made by the best expert so far.\"


Key word:  Never.


The worst-case assumption for the unrandomized algorithm was implicit in calculating the right-hand-side of the inequality by assuming that, on each mistake, the total weight went down by only a factor of 1/4 (the minimum reduction the proof allows), and that the total weight never decreased after any successful prediction.  This is the absolute worst possible performance the weighted-majority algorithm can give.


The assumption implicit in those innocent-looking equations is that the environment carefully maximizes the anguish of the predictor:  The environment so cleverly chooses its data samples, that on each case where the weighted-majority algorithm is allowed to be successful, it shall receive not a single iota of useful feedback - not a single expert shall be wrong.  And the weighted-majority algorithm will be made to err on sensory cases that produce the minimum possible useful feedback, maliciously fine-tuned to the exact current weighting of experts.  We assume that the environment can predict every aspect of the predictor and exploit it - unless the predictor uses a \"random\" number generator which we arbitrarily declare to be unknowable to the adversary.


What strange assumptions are buried in that innocent little inequality,


<=


Moreover, the entire argument in favor of the randomized algorithm was theoretically suspect from the beginning, because it rested on non-transitive inequalities.  If I prove an upper bound on the errors of algorithm X, and then prove a smaller upper bound on the errors of algorithm Y, it does not prove that in the real world Y will perform better than X.  For example, I prove that X cannot do worse than 1000 errors, and that Y cannot do worse than 100 errors.  Is Y a better algorithm?  Not necessarily.  Tomorrow I could find an improved proof which shows that X cannot do worse than 90 errors.  And then in real life, both X and Y could make exactly 8 errors.


4 <= 1,000,000,000
9 <= 10


But that doesn't mean that 4 > 9.


So the next time you see an author remark, \"We can further improve this algorithm using randomization...\" you may not know exactly where to find the oddity.  If you'd run across the above-referenced example (or any number of others in the machine-learning literature), you might not have known how to deconstruct the randomized weighted-majority algorithm.  If such a thing should happen to you, then I hope that I have given you grounds to suspect that an oddity exists somewhere, even if you cannot find it - just as you would suspect a machine that proposed to extract work from a heat bath, even if you couldn't keep track of all the gears and pipes.


Nominull put it very compactly when he said that, barring an environment which changes based on the form of your algorithm apart from its output, \"By adding randomness to your algorithm, you spread its behaviors out over a particular distribution, and there must be at least one point in that distribution whose expected value is at least as high as the average expected value of the distribution.\"


As I remarked in Perpetual Motion Beliefs:


I once knew a fellow who was convinced that his system of wheels and gears would produce reactionless thrust, and he had an Excel spreadsheet that would prove this - which of course he couldn't show us because he was still developing the system.  In classical mechanics, violating Conservation of Momentum is provably impossible.  So any Excel spreadsheet calculated according to the rules of classical mechanics must necessarily show that no reactionless thrust exists - unless your machine is complicated enough that you have made a mistake in the calculations.


If you ever find yourself mathematically proving that you can do better by randomizing, I suggest that you suspect your calculations, or suspect your interpretation of your assumptions, before you celebrate your extraction of work from a heat bath. 

" } }, { "_id": "GYuKqAL95eaWTDje5", "title": "Worse Than Random", "pageUrl": "https://www.lesswrong.com/posts/GYuKqAL95eaWTDje5/worse-than-random", "postedAt": "2008-11-11T19:01:31.000Z", "baseScore": 46, "voteCount": 42, "commentCount": 102, "url": null, "contents": { "documentId": "GYuKqAL95eaWTDje5", "html": "

Previously in series: Lawful Uncertainty

You may have noticed a certain trend in recent posts:  I've been arguing that randomness hath no power, that there is no beauty in entropy, nor yet strength from noise.

If one were to formalize the argument, it would probably run something like this: that if you define optimization as previously suggested, then sheer randomness will generate something that seems to have 12 bits of optimization, only by trying 4096 times; or 100 bits of optimization, only by trying 10^30 times.

This may not sound like a profound insight, since it is true by definition.  But consider - how many comic books talk about \"mutation\" as if it were a source of power?  Mutation is random.  It's the selection part, not the mutation part, that explains the trends of evolution.


Or you may have heard people talking about \"emergence\" as if it could explain complex, functional orders.  People will say that the function of an ant colony emerges - as if, starting from ants that had been selected only to function as solitary individuals, the ants got together in a group for the first time and the ant colony popped right out.  But ant colonies have been selected on as colonies by evolution.  Optimization didn't just magically happen when the ants came together.


And you may have heard that certain algorithms in Artificial Intelligence work better when we inject randomness into them.


Is that even possible?  How can you extract useful work from entropy?


But it is possible in theory, since you can have things that are anti-optimized.  Say, the average state has utility -10, but the current state has an unusually low utility of -100.  So in this case, a random jump has an expected benefit.  If you happen to be standing in the middle of a lava pit, running around at random is better than staying in the same place.  (Not best, but better.)


A given AI algorithm can do better when randomness is injected, provided that some step of the unrandomized algorithm is doing worse than random.


Imagine that we're trying to solve a pushbutton combination lock with 20 numbers and four steps - 160,000 possible combinations.  And we try the following algorithm for opening it:

  1. Enter 0-0-0-0 into the lock.
  2. If the lock opens, return with SUCCESS.
  3. If the lock remains closed, go to step 1.

Obviously we can improve this algorithm by substituting \"Enter a random combination\" on the first step.


If we were to try and explain in words why this works, a description might go something like this:  \"When we first try 0-0-0-0 it has the same chance of working (so far as we know) as any other combination.  But if it doesn't work, it would be stupid to try it again, because now we know that 0-0-0-0 doesn't work.\"


The first key idea is that, after trying 0-0-0-0, we learn something - we acquire new knowledge, which should then affect how we plan to continue from there.  This is knowledge, quite a different thing from randomness...

What exactly have we learned?  We've learned that 0-0-0-0 doesn't work; or to put it another way, given that 0-0-0-0 failed on the first try, the conditional probability of it working on the second try is negligible.

Consider your probability distribution over all the possible combinations:  Your probability distribution starts out in a state of maximum entropy, with all 160,000 combinations having a 1/160,000 probability of working.  After you try 0-0-0-0, you have a new probability distribution, which has slightly less entropy; 0-0-0-0 has an infinitesimal probability of working, and the remaining 159,999 possibilities each have a 1/159,999 probability of working.  To try 0-0-0-0 again would now be stupid - the expected utility of trying 0-0-0-0 is less than average; the vast majority of potential actions now have higher expected utility than does 0-0-0-0.  An algorithm that tries 0-0-0-0 again would do worse than random, and we can improve the algorithm by randomizing it.


One may also consider an algorithm as a sequence of tries:  The \"unrandomized algorithm\" describes the sequence of tries 0-0-0-0, 0-0-0-0, 0-0-0-0... and this sequence of tries is a special sequence that has far-below-average expected utility in the space of all possible sequences.  Thus we can improve on this sequence by selecting a random sequence instead.


Or imagine that the combination changes every second.  In this case, 0-0-0-0, 0-0-0-0 is just as good as the randomized algorithm - no better and no worse.  What this shows you is that the supposedly \"random\" algorithm is \"better\" relative to a known regularity of the lock - that the combination is constant on each try.  Or to be precise, the reason the random algorithm does predictably better than the stupid one is that the stupid algorithm is \"stupid\" relative to a known regularity of the lock.


In other words, in order to say that the random algorithm is an \"improvement\", we must have used specific knowledge about the lock to realize that the unrandomized algorithm is worse-than-average.  Having realized this, can we reflect further on our information, and take full advantage of our knowledge to do better-than-average?


The random lockpicker is still not optimal - it does not take full advantage of the knowledge we have acquired.  A random algorithm might randomly try 0-0-0-0 again; it's unlikely, but it could happen.  The longer the random algorithm runs, the more likely it is to try the same combination twice; and if the random algorithm is sufficiently unlucky, it might still fail to solve the lock after millions of tries.  We can take full advantage of all our knowledge by using an algorithm that systematically tries 0-0-0-0, 0-0-0-1, 0-0-0-2...  This algorithm is guaranteed not to repeat itself, and will find the solution in bounded time.  Considering the algorithm as a sequence of tries, no other sequence in sequence-space is expected to do better, given our initial knowledge.  (Any other nonrepeating sequence is equally good; but nonrepeating sequences are rare in the space of all possible sequences.)  The three lockpickers are sketched in code below.
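Here is an illustrative Python sketch of the three strategies; the helper names are invented for this post:

import random
from itertools import product

def tries_to_open(strategy, combo, limit=10**6):
    # Count how many tries a strategy needs; give up after limit tries.
    for i, guess in enumerate(strategy, start=1):
        if guess == combo:
            return i
        if i >= limit:
            return None

def stupid_picker():                 # tries 0-0-0-0 forever; never opens it
    while True:
        yield (0, 0, 0, 0)

def random_picker(rng=random):       # may repeat combinations already tried
    while True:
        yield tuple(rng.randrange(20) for _ in range(4))

def systematic_picker():             # never repeats; at most 160,000 tries
    yield from product(range(20), repeat=4)

The random picker needs about 160,000 tries on average and has no guaranteed upper bound; the systematic picker is certain to finish within 160,000.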


A combination dial often has a tolerance of 2 in either direction.  20-45-35 will open a lock set to 22-44-33.  In this case, the algorithm that tries 0-1-0, 0-2-0, et cetera, ends up being stupid again; a randomized algorithm will (usually) work better.  But an algorithm that tries 0-5-0, 0-10-0, 0-10-5, will work better still.


Sometimes it is too expensive to take advantage of all the knowledge that we could, in theory, acquire from previous tests.  Moreover, a complete enumeration or interval-skipping algorithm would still end up being stupid.  In this case, computer scientists often use a cheap pseudo-random algorithm, because the computational cost of using our knowledge exceeds the benefit to be gained from using it.  This does not show the power of randomness, but, rather, the predictable stupidity of certain specific deterministic algorithms on that particular problem.  Remember, the pseudo-random algorithm is also deterministic!  But the deterministic pseudo-random algorithm doesn't belong to the class of algorithms that are predictably stupid (do much worse than average).


There are also subtler ways for adding noise to improve algorithms.  For example, there are neural network training algorithms that work better if you simulate noise in the neurons.  On this occasion it is especially tempting to say something like:


\"Lo!  When we make our artificial neurons noisy, just like biological neurons, they work better!  Behold the healing life-force of entropy!\"


What might actually be happening - for example - is that the network training algorithm, operating on noiseless neurons, would vastly overfit the data.


If you expose the noiseless network to the series of coinflips \"HTTTHHTTH\"... the training algorithm will say the equivalent of, \"I bet this coin was specially designed to produce HTTTHHTTH every time it's flipped!\" instead of \"This coin probably alternates randomly between heads and tails.\"  A hypothesis overfitted to the data does not generalize.  On the other hand, when we add noise to the neurons and then try training them again, they can no longer fit the data precisely, so instead they settle into a simpler hypothesis like \"This coin alternates randomly between heads and tails.\"  Note that this requires us - the programmers - to know in advance that probabilistic hypotheses are more likely to be true.


Or here's another way of looking at it:  A neural network algorithm typically looks at a set of training instances, tweaks the units and their connections based on the training instances, and in this way tries to \"stamp\" the experience onto itself.  But the algorithms which do the stamping are often poorly understood, and it is possible to stamp too hard.  If this mistake has already been made, then blurring the sensory information, or blurring the training algorithm, or blurring the units, can partially cancel out the \"overlearning\".


Here's a simplified example of a similar (but not identical) case:  Imagine that the environment deals us a random mix of cards, 70% blue and 30% red.  But instead of just predicting \"blue\" or \"red\", we have to assign a quantitative probability to seeing blue - and the scoring rule for our performance is one that elicits an honest estimate; if the actual frequency is 70% blue cards, we do best by replying \"70%\", not 60% or 80%.  (\"Proper scoring rule.\")


If you don't know in advance the frequency of blue and red cards, one way to handle the problem would be to have a blue unit and a red unit, both wired to an output unit.  The blue unit sends signals with a positive effect that make the target unit more likely to fire; the red unit inhibits its targets - just like the excitatory and inhibitory synapses in the human brain!  (Or an earthworm's brain, for that matter...)


Each time we see a blue card in the training data, the training algorithm increases the strength of the (excitatory) synapse from the blue unit to the output unit; and each time we see a red card, the training algorithm strengthens the (inhibitory) synapse from the red unit to the output unit.

But wait!  We haven't said why the blue or red units might fire in the first place.  So we'll add two more excitatory units that spike randomly, one connected to the blue unit and one connected to the red unit.  This simulates the natural background noise present in the human brain (or an earthworm's brain).

Finally, the spiking frequency of the output unit becomes the predicted probability that the next card is blue.


As you can see - assuming you haven't lost track of all the complexity - this neural network learns to predict whether blue cards or red cards are more common in the mix.  Just like a human brain!


At least that's the theory.  However, when we boot up the neural network and give it a hundred training examples with 70 blue and 30 red cards, it ends up predicting a 90% probability that each card will be blue.  Now, there are several things that could go wrong with a system this complex; but my own first impulse would be to guess that the training algorithm is too strongly adjusting the synaptic weight from the blue or red unit to the output unit on each training instance.  The training algorithm needs to shift a little less far - alter the synaptic weights a little less.


But the programmer of the neural network comes up with a different hypothesis: the problem is that there's no noise in the input.  This is biologically unrealistic; real organisms do not have perfect vision or perfect information about the environment.  So the programmer shuffles a few randomly generated blue and red cards (50% probability of each) into the training sequences.  Then the programmer adjusts the noise level until the network finally ends up predicting blue with 70% probability.  And it turns out that using almost the same noise level (with just a bit of further tweaking), the improved training algorithm can learn to assign the right probability to sequences of 80% blue or 60% blue cards.


Success!  We have found the Right Amount of Noise.


Of course this success comes with certain disadvantages.  For example, maybe the blue and red cards are predictable, in the sense of coming in a learnable sequence.  Maybe the sequence is 7 blue cards followed by 3 red cards.  If we mix noise into the sensory data, we may never notice this important regularity, or learn it imperfectly... but that's the price you pay for biological realism.


What's really happening is that the training algorithm moves too far given its data, and adulterating noise with the data diminishes the impact of the data.  The two errors partially cancel out, but at the cost of a nontrivial loss in what we could, in principle, have learned from the data.  It would be better to adjust the training algorithm and keep the data clean.
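As a toy version of that story - a hypothetical model I am making up for illustration, not the actual network - suppose each card multiplies the weight of its unit by (1 + lr), and a fraction noise of the training cards is replaced by 50/50 random ones:

def predict_blue(n_blue, n_red, lr, noise=0.0):
    total = n_blue + n_red
    # Replacing a fraction of the cards with 50/50 noise dilutes the signal.
    blue = (1 - noise) * n_blue + noise * total / 2
    red = (1 - noise) * n_red + noise * total / 2
    # Multiplicative updates: each card multiplies its unit's weight.
    w_blue = (1 + lr) ** blue
    w_red = (1 + lr) ** red
    return w_blue / (w_blue + w_red)

With 70 blue and 30 red cards, predict_blue(70, 30, lr=0.1) saturates near 98% - the overshooting problem.  You can pull the output back to roughly 70% either with heavy noise, predict_blue(70, 30, lr=0.1, noise=0.8), or by shrinking the step, predict_blue(70, 30, lr=0.025) - and only the second fix keeps the data clean.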


This is an extremely oversimplified example, but it is not a total strawman.  The scenario seems silly only because it is simplified to the point where you can clearly see what is going wrong.  Make the neural network a lot more complicated, and the solution of adding noise to the inputs might sound a lot more plausible.  While some neural network algorithms are well-understood mathematically, others are not.  In particular, systems crafted with the aim of biological realism are often not well-understood. 


But it is an inherently odd proposition that you can get a better picture of the environment by adding noise to your sensory information - by deliberately throwing away your sensory acuity.  This can only degrade the mutual information between yourself and the environment.  It can only diminish what in principle can be extracted from the data.  And this is as true for every step of the computation, as it is for the eyes themselves.  The only way that adding random noise will help is if some step of the sensory processing is doing worse than random.


Now it is certainly possible to design an imperfect reasoner that only works in the presence of an accustomed noise level.  Biological systems are unable to avoid noise, and therefore adapt to overcome noise.  Subtract the noise, and mechanisms adapted to the presence of noise may do strange things.


Biological systems are often fragile under conditions that have no evolutionary precedent in the ancestral environment.  If somehow the Earth's gravity decreased, then birds might become unstable, lurching up in the air as their wings overcompensated for the now-decreased gravity.  But this doesn't mean that stronger gravity helps birds fly better.  Gravity is still the difficult challenge that a bird's wings work to overcome - even though birds are now adapted to gravity as an invariant.


What about hill-climbing, simulated annealing, or genetic algorithms?  These AI algorithms are local search techniques that randomly investigate some of their nearest neighbors.  If an investigated neighbor is superior to the current position, the algorithm jumps there.  (Or sometimes probabilistically jumps to a neighbor with probability determined by the difference between neighbor goodness and current goodness.)  Are these techniques drawing on the power of noise?


Local search algorithms take advantage of the regularity of the search space - that if you find a good point in the search space, its neighborhood of closely similar points is a likely place to search for a slightly better neighbor.  And then this neighbor, in turn, is a likely place to search for a still better neighbor; and so on.  To the extent this regularity of the search space breaks down, hill-climbing algorithms will perform poorly.  If the neighbors of a good point are no more likely to be good than randomly selected points, then a hill-climbing algorithm simply won't work.


But still, doesn't a local search algorithm need to make random changes to the current point in order to generate neighbors for evaluation?  Not necessarily; some local search algorithms systematically generate all possible neighbors, and select the best one.  These greedy algorithms work fine for some problems, but on other problems it has been found that greedy local algorithms get stuck in local maxima.  The next step up from greedy local algorithms, in terms of added randomness, is random-restart hill-climbing - as soon as we find a local maximum, we restart someplace random, and repeat this process a number of times.  For our final solution, we return the best local maximum found when time runs out.  Random-restart hill-climbing is surprisingly useful; it can easily solve some problem classes where any individual starting point is unlikely to lead to a global maximum or acceptable solution, but it is likely that at least one of a thousand individual starting points will lead to the global maximum or acceptable solution.
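In code, the greedy and random-restart versions differ only by an outer loop.  A sketch under the usual assumptions - fitness is to be maximized, neighbors returns the systematically generated neighborhood, random_point supplies a fresh starting point; all names invented here:

import random

def hill_climb(fitness, neighbors, start):
    point = start
    while True:
        # Systematically examine every neighbor; no randomness needed here.
        best = max(neighbors(point), key=fitness, default=point)
        if fitness(best) <= fitness(point):
            return point              # stuck at a local maximum
        point = best

def random_restart(fitness, neighbors, random_point, restarts=1000):
    # Once stuck, start over someplace random; keep the best peak found.
    best = hill_climb(fitness, neighbors, random_point())
    for _ in range(restarts - 1):
        peak = hill_climb(fitness, neighbors, random_point())
        if fitness(peak) > fitness(best):
            best = peak
    return best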


The non-randomly-restarting, greedy, local-maximum-grabbing algorithm, is \"stupid\" at the stage where it gets stuck in a local maximum.  Once you find a local maximum, you know you're not going to do better by greedy local search - so you may as well try something else with your time.  Picking a random point and starting again is drastic, but it's not as stupid as searching the neighbors of a particular local maximum over and over again.  (Biological species often do get stuck in local optima.  Evolution, being unintelligent, has no mind with which to \"notice\" when it is testing the same gene complexes over and over.)


Even more stupid is picking a particular starting point, and then evaluating its fitness over and over again, without even searching its neighbors.  This is the lockpicker who goes on trying 0-0-0-0 forever.


Hill-climbing search is not so much a little bit randomized compared to the completely stupid lockpicker, as almost entirely nonrandomized compared to a completely ignorant searcher.  We search only the local neighborhood, rather than selecting a random point from the entire state space.  That probability distribution has been narrowed enormously, relative to the overall state space.  This exploits the belief - the knowledge, if the belief is correct - that a good point probably has good neighbors.


You can imagine splitting a hill-climbing algorithm into components that are \"deterministic\" (or rather, knowledge-exploiting) and \"randomized\" (the leftover ignorance).


A programmer writing a probabilistic hill-climber will use some formula to assign probabilities to each neighbor, as a function of the neighbor's fitness.  For example, a neighbor with a fitness of 60 might have probability 80% of being selected, while other neighbors with fitnesses of 55, 52, and 40 might have selection probabilities of 10%, 9%, and 1%.  The programmer writes a deterministic algorithm, a fixed formula, that produces these numbers - 80, 10, 9, and 1.

What about the actual job of making a random selection at these probabilities?  Usually the programmer will hand that job off to someone else's pseudo-random algorithm - most languages' standard libraries contain a standard pseudo-random algorithm; there's no need to write your own.

If the hill-climber doesn't seem to work well, the programmer tweaks the deterministic part of the algorithm, the part that assigns these fixed numbers 80, 10, 9, and 1.  The programmer does not say - \"I bet these probabilities are right, but I need a source that's even more random, like a thermal noise generator, instead of this merely pseudo-random algorithm that is ultimately deterministic!\"  The programmer does not go in search of better noise.


It is theoretically possible for a poorly designed \"pseudo-random algorithm\" to be stupid relative to the search space; for example, it might always jump in the same direction.  But the \"pseudo-random algorithm\" has to be really shoddy for that to happen.  You're only likely to get stuck with that problem if you reinvent the wheel instead of using a standard, off-the-shelf solution.  A decent pseudo-random algorithm works just as well as a thermal noise source on optimization problems.  It is possible (though difficult) for an exceptionally poor noise source to be exceptionally stupid on the problem, but you cannot do exceptionally well by finding a noise source that is exceptionally random.  The power comes from the knowledge - the deterministic formula that assigns a fixed probability distribution.  It does not reside in the remaining ignorance.


And that's why I always say that the power of natural selection comes from the selection part, not the mutation part.


As a general principle, on any problem for which you know that a particular unrandomized algorithm is unusually stupid - so that a randomized algorithm seems wiser - you should be able to use the same knowledge to produce a superior derandomized algorithm. If nothing else seems obvious, just avoid outputs that look \"unrandomized\"!  If you know something is stupid, deliberately avoid it!  (There are exceptions to this rule, but they have to do with defeating cryptographic adversaries - that is, preventing someone else's intelligence from working on you.  Certainly entropy can act as an antidote to intelligence!)  And of course there are very common practical exceptions whenever the computational cost of using all our knowledge exceeds the marginal benefit...


Still you should find, as a general principle, that randomness hath no power: there is no beauty in entropy, nor strength from noise.

" } }, { "_id": "msJA6B9ZjiiZxT6EZ", "title": "Lawful Uncertainty", "pageUrl": "https://www.lesswrong.com/posts/msJA6B9ZjiiZxT6EZ/lawful-uncertainty", "postedAt": "2008-11-10T21:06:32.000Z", "baseScore": 144, "voteCount": 112, "commentCount": 57, "url": null, "contents": { "documentId": "msJA6B9ZjiiZxT6EZ", "html": "

In Rational Choice in an Uncertain World, Robyn Dawes describes an experiment by Tversky:1

Many psychological experiments were conducted in the late 1950s and early 1960s in which subjects were asked to predict the outcome of an event that had a random component but yet had base-rate predictability—for example, subjects were asked to predict whether the next card the experimenter turned over would be red or blue in a context in which 70% of the cards were blue, but in which the sequence of red and blue cards was totally random.

In such a situation, the strategy that will yield the highest proportion of success is to predict the more common event. For example, if 70% of the cards are blue, then predicting blue on every trial yields a 70% success rate.

What subjects tended to do instead, however, was match probabilities—that is, predict the more probable event with the relative frequency with which it occurred. For example, subjects tended to predict 70% of the time that the blue card would occur and 30% of the time that the red card would occur. Such a strategy yields a 58% success rate, because the subjects are correct 70% of the time when the blue card occurs (which happens with probability .70) and 30% of the time when the red card occurs (which happens with probability .30); (.70×.70) + (.30×.30) = .58.

In fact, subjects predict the more frequent event with a slightly higher probability than that with which it occurs, but do not come close to predicting its occurrence 100% of the time, even when they are paid for the accuracy of their predictions . . . For example, subjects who were paid a nickel for each correct prediction over a thousand trials . . . predicted [the more common event] 76% of the time.

Do not think that this experiment is about a minor flaw in gambling strategies. It compactly illustrates the most important idea in all of rationality.

Subjects just keep guessing red, as if they think they have some way of predicting the random sequence. Of this experiment Dawes goes on to say, “Despite feedback through a thousand trials, subjects cannot bring themselves to believe that the situation is one in which they cannot predict.”

But the error must go deeper than that. Even if subjects think they’ve come up with a hypothesis, they don’t have to actually bet on that prediction in order to test their hypothesis. They can say, “Now if this hypothesis is correct, the next card will be red”—and then just bet on blue. They can pick blue each time, accumulating as many nickels as they can, while mentally noting their private guesses for any patterns they thought they spotted. If their predictions come out right, then they can switch to the newly discovered sequence.

I wouldn’t fault a subject for continuing to invent hypotheses—how could they know the sequence is truly beyond their ability to predict? But I would fault a subject for betting on the guesses, when this wasn’t necessary to gather information, and literally hundreds of earlier guesses had been disconfirmed.

Can even a human be that overconfident?

I would suspect that something simpler is going on—that the all-blue strategy just didn’t occur to the subjects.

People see a mix of mostly blue cards with some red, and suppose that the optimal betting strategy must be a mix of mostly blue cards with some red.

It is a counterintuitive idea that, given incomplete information, the optimal betting strategy does not resemble a typical sequence of cards.

It is a counterintuitive idea that the optimal strategy is to behave lawfully, even in an environment that has random elements.

It seems like your behavior ought to be unpredictable, just like the environment—but no! A random key does not open a random lock just because they are “both random.”

You don’t fight fire with fire; you fight fire with water. But this thought involves an extra step, a new concept not directly activated by the problem statement, and so it’s not the first idea that comes to mind.

In the dilemma of the blue and red cards, our partial knowledge tells us—on each and every round—that the best bet is blue. This advice of our partial knowledge is the same on every single round. If 30% of the time we go against our partial knowledge and bet on red instead, then we will do worse thereby—because now we’re being outright stupid, betting on what we know is the less probable outcome.

If you bet on red every round, you would do as badly as you could possibly do; you would be 100% stupid. If you bet on red 30% of the time, faced with 30% red cards, then you’re making yourself 30% stupid.
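The arithmetic generalizes into a one-line formula; a quick sketch for checking the numbers above:

def success_rate(p_blue, bet_blue):
    # Expected accuracy when you bet blue with probability bet_blue
    # against cards that are blue with probability p_blue.
    return p_blue * bet_blue + (1 - p_blue) * (1 - bet_blue)

# success_rate(0.70, 1.00) -> 0.70    always betting blue
# success_rate(0.70, 0.70) -> 0.58    probability matching
# success_rate(0.70, 0.76) -> 0.604   what the paid subjects actually did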

When your knowledge is incomplete—meaning that the world will seem to you to have an element of randomness—randomizing your actions doesn’t solve the problem. Randomizing your actions takes you further from the target, not closer. In a world already foggy, throwing away your intelligence just makes things worse.

It is a counterintuitive idea that the optimal strategy can be to think lawfully, even under conditions of uncertainty.

And so there are not many rationalists, for most who perceive a chaotic world will try to fight chaos with chaos. You have to take an extra step, and think of something that doesn’t pop right into your mind, in order to imagine fighting fire with something that is not itself fire.

You have heard the unenlightened ones say, “Rationality works fine for dealing with rational people, but the world isn’t rational.” But faced with an irrational opponent, throwing away your own reason is not going to help you. There are lawful forms of thought that still generate the best response, even when faced with an opponent who breaks those laws. Decision theory does not burst into flames and die when faced with an opponent who disobeys decision theory.

This is no more obvious than the idea of betting all blue, faced with a sequence of both blue and red cards. But each bet that you make on red is an expected loss, and so too with every departure from the Way in your own thinking.

How many Star Trek episodes are thus refuted? How many theories of AI?


1 Amos Tversky and Ward Edwards, “Information versus Reward in Binary Choices,” Journal of Experimental Psychology 71, no. 5 (1966): 680–683. See also Yaacov Schul and Ruth Mayo, “Searching for Certainty in an Uncertain World: The Difficulty of Giving Up the Experiential for the Rational Mode of Thinking,” Journal of Behavioral Decision Making 16, no. 2 (2003): 93–106.

" } }, { "_id": "sDNMreqo3ktQDv4uH", "title": "Ask OB: Leaving the Fold", "pageUrl": "https://www.lesswrong.com/posts/sDNMreqo3ktQDv4uH/ask-ob-leaving-the-fold", "postedAt": "2008-11-09T18:08:52.000Z", "baseScore": 15, "voteCount": 10, "commentCount": 63, "url": null, "contents": { "documentId": "sDNMreqo3ktQDv4uH", "html": "

Followup to: Crisis of Faith

I thought this comment from "Jo" deserved a bump to the front page:


"So\nhere I am having been raised in the Christian faith and trying not to\nfreak out over the past few weeks because I've finally begun to wonder\nwhether I believe things just because I was raised with them. Our\nfamily is surrounded by genuinely wonderful people who have poured\ntheir talents into us since we were teenagers, and our social structure\nand business rests on the tenets of what we believe. I've been trying\nto work out how I can 'clear the decks' and then rebuild with whatever\nis worth keeping, yet it's so foundational that it will affect my\nmarriage (to a pretty special man) and my daughters who, of course,\nhave also been raised to walk the Christian path.


Is there anyone who's been in this position - really, really invested in a faith and then walked away?"

Handling this kind of situation has to count as part of the art.  But I haven't gone through anything like this.  Can anyone with experience advise Jo on what to expect, what to do, and what not to do?
" } }, { "_id": "KKLQp934n77cfZpPn", "title": "Lawful Creativity", "pageUrl": "https://www.lesswrong.com/posts/KKLQp934n77cfZpPn/lawful-creativity", "postedAt": "2008-11-08T19:54:22.000Z", "baseScore": 61, "voteCount": 42, "commentCount": 35, "url": null, "contents": { "documentId": "KKLQp934n77cfZpPn", "html": "

Previously in series: Recognizing Intelligence

Creativity, we've all been told, is about Jumping Out Of The System, as Hofstadter calls it (JOOTSing for short).  Questioned assumptions, violated expectations.


Fire is dangerous: the rule of fire is to run away from it.  What must have gone through the mind of the first hominid to domesticate fire?  The rule of milk is that it spoils quickly and then you can't drink it - who first turned milk into cheese?  The rule of computers is that they're made with vacuum tubes, fill a room and are so expensive that only corporations can own them.  Wasn't the transistor a surprise...


Who, then, could put laws on creativity?  Who could bound it, who could circumscribe it, even with a concept boundary that distinguishes \"creativity\" from \"not creativity\"?  No matter what system you try to lay down, mightn't a more clever person JOOTS right out of it?  If you say \"This, this, and this is 'creative'\" aren't you just making up the sort of rule that creative minds love to violate?


Why, look at all the rules that smart people have violated throughout history, to the enormous profit of humanity.  Indeed, the most amazing acts of creativity are those that violate the rules that we would least expect to be violated.


Is there not even creativity on the level of how to think?  Wasn't the invention of Science a creative act that violated old beliefs about rationality?  Who, then, can lay down a law of creativity?


But there is one law of creativity which cannot be violated...


Ordinarily, if you took a horse-and-buggy, unstrapped the horses, put a large amount of highly combustible fluid on board, and then set fire to the fluid, you would expect the buggy to burn.  You certainly wouldn't expect the buggy to move forward at a high rate of speed, for the convenience of its passengers.


How unexpected was the internal combustion engine!  How surprising! What a creative act, to violate the rule that you shouldn't start a fire inside your vehicle!


But now suppose that I unstrapped the horses from a buggy, put gasoline in the buggy, and set fire to the gasoline, and it did just explode.


Then there would be no element of \"creative surprise\" about that.  More experienced engineers would just shake their heads wisely and say, \"That's why we use horses, kiddo.\"


Creativity is surprising - but not just any kind of surprise counts as a creative surprise.  Suppose I set up an experiment involving a quantum event of very low amplitude, such that the macroscopic probability is a million to one.  If the event is actually observed to occur, it is a happenstance of extremely low probability, and in that sense surprising.  But it is not a creative surprise.  Surprisingness is not a sufficient condition for creativity.


So what kind of surprise is it, that creates the unexpected \"shock\" of creativity?


In information theory, the more unexpected an event is, the longer the message it takes to send it - to conserve bandwidth, you use the shortest messages for the most common events.


So do we reason that the most unexpected events, convey the most information, and hence the most surprising acts are those that give us a pleasant shock of creativity - the feeling of suddenly absorbing new information?


This contains a grain of truth, I think, but not the whole truth: the million-to-one quantum event would also require a 20-bit message to send, but it wouldn't convey a pleasant shock of creativity, any more than a random sequence of 20 coinflips.
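To check the arithmetic on that bit count:

from math import log2

log2(10**6)   # about 19.93 bits to report a million-to-one event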


Rather, the creative surprise is the idea that ranks high in your preference ordering but low in your search ordering.


If, before anyone had thought of an internal combustion engine (which predates cars, of course) I had asked some surprisingly probability-literate engineer to write out a vocabulary for describing effective vehicles, it would contain short symbols for horses, long symbols for flammable fluid, and maybe some kind of extra generalization that says \"With probability 99.99%, a vehicle should not be on fire\" so that you need to use a special 14-bit prefix for vehicles that violate this generalization.


So when I now send this past engineer a description of an automobile, he experiences the shock of getting a large amount of useful information - a design that would have taken a long time for him to find, in the implicit search ordering he set up - a design that occupies a region of low density in his  prior distribution for where good designs are to be found in the design space.  And even the added \"absurdity\" shock of seeing a generalization violated - not a generalization about physical laws, but a generalization about which designs are effective or ineffective.


But the vehicle still goes somewhere - that part hasn't changed.


What if I try to explain about telepresence and virtual offices, so that you don't even need a car?


But you're still trying to talk to people, or get work done with people - you've just found a more effective means to that end, than travel.


A car is a more effective means of travel, a computer is a more effective means than travel.  But there's still some end.  There's some criterion that makes the solution a \"solution\". There's some criterion that makes the unusual reply, unusually good.  Otherwise any part of the design space would be as good as any other.


An amazing creative solution has to obey at least one law, the criterion that makes it a \"solution\".  It's the one box you can't step outside:  No optimization without goals.


The pleasant shock of witnessing Art arises from the constraints of Art - from watching a skillful archer send an arrow into an exceedingly narrow target.  Static on a television screen is not beautiful, it is noise.


In the strange domain known as Modern Art, people sometimes claim that their goal is to break all the rules, even the rule that Art has to hit some kind of target.  They put up a blank square of canvas, and call it a painting.  And by now that is considered staid and boring Modern Art, because a blank square of canvas still hangs on the wall and has a frame.  What about a heap of garbage?  That can also be Modern Art!  Surely, this demonstrates that true creativity knows no rules, and even no goals...


But the rules are still there, though unspoken.  I could submit a realistic landscape painting as Modern Art, and this would be rejected because it violates the rule that Modern Art cannot delight the untrained senses of a mere novice.


Or better yet, if a heap of garbage can be Modern Art, then I'll claim that someone else's heap of garbage is my work of Modern Art - boldly defying the convention that I need to produce something for it to count as my artwork.  Or what about the pattern of dust particles on my desk?  Isn't that Art?


Flushed with triumph, I present to you an even bolder, more convention-defying work of Modern Art - a stunning, outrageous piece of performance art that, in fact, I never performed.  I am defying the foolish convention that I need to actually perform my performance art for it to count as Art.


Now, up to this point, you probably could still get a grant from the National Endowment for the Arts, and get sophisticated critics to discuss your shocking, outrageous non-work, which boldly violates the convention that art must be real rather than imaginary.


But now suppose that you go one step further, and refuse to tell anyone that you have performed your work of non-Art.  You even refuse to apply for an NEA grant.  It is the work of Modern Art that never happened and that no one knows never happened; it exists only as my concept of what I am supposed not to conceptualize.  Better yet, I will say that my Modern Art is your non-conception of something that you are not conceptualizing.  Here is the ultimate work of Modern Art, that truly defies all rules:  It isn't mine, it isn't real, and no one knows it exists...


And this ultimate rulebreaker you could not pass off as Modern Art, even if NEA grant committees knew that no one knew it existed.  For one thing, they would realize that you were making fun of them - and that is an unspoken rule of Modern Art that no one dares violate.  You must take yourself seriously.  You must break the surface rules in a way that allows sophisticated critics to praise your boldness and defiance with a straight face.  This is the unwritten real goal, and if it is not achieved, all efforts are for naught.  Whatever gets sophisticated critics to praise your rule-breaking is good Modern Art, and whatever fails in this end is poor Modern Art.  Within that unalterable constraint, you can use whatever creative means you like.


But doesn't creative engineering sometimes involve altering your goals?  First my goal was to try and figure out how to build a vehicle using horses; now my goal is to build a vehicle using fire...


Creativity clearly involves altering my local intentions, my what-I'm-trying-to-do-next.  I begin by intending to use horses, to build a vehicle, to drive to the supermarket, to buy food, to eat food, so that I don't starve to death, because I prefer being alive to starving to death.  I may creatively use fire, instead of horses; creatively walk, instead of driving; creatively drive to a nearby restaurant, instead of a supermarket; creatively grow my own vegetables, instead of buying them; or even creatively devise a way to run my body on electricity, instead of chemical energy...


But what does not count as \"creativity\" is creatively preferring to starve to death, rather than eating.  This \"solution\" does not strike me as very impressive; it involves no effort, no intelligence, and no surprise when it comes to looking at the result.  If this is someone's idea of how to break all the rules, they would become pretty easy to predict.


Are there cases where you genuinely want to change your preferences?  You may look back in your life and find that your moral beliefs have changed over decades, and that you count this as \"moral progress\".  Civilizations also change their morals over time.  In the seventeenth century, people used to think it was okay to enslave people with differently colored skin; and now we elect them President.


But you might guess by now, you might somehow intuit, that if these moral changes seem interesting and important and vital and indispensable, then not just any change would suffice.  If there's no criterion, no target, no way of choosing - then your current point in state space is just as good as any other point, no more, no less; and you might as well keep your current state, unchanging, forever.


Every surprisingly creative Jump-Out-Of-The-System needs a criterion that makes it surprisingly good, some fitness metric that it matches.  This criterion, itself, supplies the order in our beliefs that lets us recognize an act of \"creativity\" despite our surprise.  Just as recognizing intelligence requires at least some belief about that intelligence's goals, however abstract.


One might wish to reconsider, in light of this principle, such notions as \"free will that is not constrained by anything\"; or the necessary conditions for our discussions of what is \"right\" to have some kind of meaning.


There is an oft-repeated cliche of Deep Wisdom which says something along the lines of \"intelligence is balanced between Order and Chaos\", as if cognitive science were a fantasy novel written by Roger Zelazny.  Logic as Order, following the rules.  Creativity as Chaos, violating the rules.  And so you can try to understand logic, but you are blocked when it comes to creativity - and of course you could build a logical computer but not a creative one - and of course 'rationalists' can only use the Order side of the equation, and can never become whole people; because Art requires an element of irrationality, just like e.g. emotion.


And I think that despite its poetic appeal, that whole cosmological mythology is just flat wrong.  There are just various forms of regularity, of negentropy, where all the structure and all the beauty live.  And on the other side is what's left over - the static on the television screen, the heat bath, the noise.


I shall be developing this startling thesis further in future posts.

" } }, { "_id": "LZMeuRGQhSw77XewC", "title": "Recognizing Intelligence", "pageUrl": "https://www.lesswrong.com/posts/LZMeuRGQhSw77XewC/recognizing-intelligence", "postedAt": "2008-11-07T23:22:35.000Z", "baseScore": 28, "voteCount": 18, "commentCount": 30, "url": null, "contents": { "documentId": "LZMeuRGQhSw77XewC", "html": "

Previously in series: Building Something Smarter


Humans in Funny Suits inveighed against the problem of \"aliens\" on TV shows and movies who think and act like 21st-century middle-class Westerners, even if they have tentacles or exoskeletons.  If you were going to seriously ask what real aliens might be like, you would try to make fewer assumptions - a difficult task when the assumptions are invisible.


I previously spoke of how you don't have to start out by assuming any particular goals, when dealing with an unknown intelligence.  You can use some of your evidence to deduce the alien's goals, and then use that hypothesis to predict the alien's future achievements, thus making an epistemological profit.


But could you, in principle, recognize an alien intelligence without even hypothesizing anything about its ultimate ends - anything about the terminal values it's trying to achieve?


This sounds like it goes against my suggested definition of intelligence, or even optimization.  How can you recognize something as having a strong ability to hit narrow targets in a large search space, if you have no idea what the target is?


And yet, intuitively, it seems easy to imagine a scenario in which we could recognize an alien's intelligence while having no concept whatsoever of its terminal values - having no idea where it's trying to steer the future.


Suppose I landed on an alien planet and discovered what seemed to be a highly sophisticated machine, all gleaming chrome as the stereotype demands.  Can I recognize this machine as being in any sense well-designed, if I have no idea what the machine is intended to accomplish?  Can I guess that the machine's makers were intelligent, without guessing their motivations?


And again, it seems like in an intuitive sense I should obviously be able to do so.  I look at the cables running through the machine, and find large electrical currents passing through them, and discover that the material is a flexible high-temperature high-amperage superconductor.  Dozens of gears whir rapidly, perfectly meshed...


I have no idea what the machine is doing.  I don't even have a hypothesis as to what it's doing.  Yet I have recognized the machine as the product of an alien intelligence.  Doesn't this show that \"optimization process\" is not an indispensable notion to \"intelligence\"?


But you can't possibly recognize intelligence without at least having such a thing as a concept of \"intelligence\" that divides the universe into intelligent and unintelligent parts.  For there to be a concept, there has to be a boundary.  So what am I recognizing?


If I don't see any optimization criterion by which to judge the parts or the whole - so that, as far as I know, a random volume of air molecules or a clump of dirt would be just as good a design - then why am I focusing on this particular object and saying, \"Here is a machine\"? Why not say the same about a cloud or a rainstorm?


Why is it a good hypothesis to suppose that intelligence or any other optimization process played a role in selecting the form of what I see, any more than it is a good hypothesis to suppose that the dust particles in my rooms are arranged by dust elves?


Consider that gleaming chrome.  Why did humans start making things out of metal?  Because metal is hard; it retains its shape for a long time.  So when you try to do something, and the something stays the same for a long period of time, the way-to-do-it may also stay the same for a long period of time.  So you face the subproblem of creating things that keep their form and function.  Metal is one solution to that subproblem.


There are no-free-lunch theorems showing the impossibility of various kinds of inference, in maximally disordered universes.  In the same sense, if an alien's goals were maximally disordered, it would be unable to achieve those goals and you would be unable to detect their achievement.


But as simple a form of negentropy as regularity over time - that the alien's terminal values don't take on a new random form with each clock tick - can imply that hard metal, or some other durable substance, would be useful in a \"machine\" - a persistent configuration of material that helps promote a persistent goal.


The gears are a solution to the problem of transmitting mechanical forces from one place to another, which you would want to do because of the presumed economy of scale in generating the mechanical force at a central location and then distributing it.  In their meshing, we recognize a force of optimization applied in the service of a recognizable instrumental value: most random gears, or random shapes turning against each other, would fail to mesh, or fly apart.  Without knowing what the mechanical forces are meant to do, we recognize something that transmits mechanical force - this is why gears appear in many human artifacts, because it doesn't matter much what kind of mechanical force you need to transmit on the other end.  You may still face problems like trading torque for speed, or moving mechanical force from generators to appliers.
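
In passing, the torque-for-speed trade just mentioned has a standard form.  For two meshed gears with tooth counts \(N_1\) and \(N_2\) (the idealized, frictionless textbook relation - not anything from the post itself):

\[
\omega_2 = \omega_1 \frac{N_1}{N_2}, \qquad \tau_2 = \tau_1 \frac{N_2}{N_1}, \qquad \tau_1 \omega_1 = \tau_2 \omega_2 .
\]

A larger gear turns slower but pushes harder, and the transmitted power is conserved - exactly the kind of regularity an alien engineer would also have to respect.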


These are not universally convergent instrumental challenges.  They probably aren't even convergent with respect to maximum-entropy goal systems (which are mostly out of luck).


But relative to the space of low-entropy, highly regular goal systems - goal systems that don't pick a new utility function for every different time and every different place - that negentropy pours through the notion of \"optimization\" and comes out as a concentrated probability distribution over what an \"alien intelligence\" would do, even in the \"absence of any hypothesis\" about its goals.


Because the \"absence of any hypothesis\", in this case, does not correspond to a maxentropy distribution, but rather an ordered prior that is ready to recognize any structure that it sees.  If you see the aliens making cheesecakes over and over and over again, in many different places and times, you are ready to say \"the aliens like cheesecake\" rather than \"my, what a coincidence\".  Even in the absence of any notion of what the aliens are doing - whether they're making cheesecakes or paperclips or eudaimonic sentient beings - this low-entropy prior itself can pour through the notion of \"optimization\" and be transformed into a recognition of solved instrumental problems.


If you truly expected no order of an alien mind's goals - if you did not credit even the structured prior that lets you recognize order when you see it - then you would be unable to identify any optimization or any intelligence.  Every possible configuration of matter would appear equally probable as \"something the mind might design\", from desk dust to rainstorms.  Just another hypothesis of maximum entropy.


This doesn't mean that there's some particular identifiable thing that all alien minds want.  It doesn't mean that a mind, \"by definition\", doesn't change its goals over time.  Just that if there were an \"agent\" whose goals were pure snow on a television screen, its acts would be the same sort of noise.


Like thermodynamics, cognition is about flows of order.  An ordered outcome needs negentropy to fuel it.  Likewise, where we expect or recognize a thing, even so lofty and abstract as \"intelligence\", we must have ordered beliefs to fuel our anticipation.  It's all part of the great game, Follow-the-Negentropy.

" } }, { "_id": "N6AxCuwsvmebdaDR5", "title": "Dying for a donation", "pageUrl": "https://www.lesswrong.com/posts/N6AxCuwsvmebdaDR5/dying-for-a-donation", "postedAt": "2008-11-07T01:17:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "N6AxCuwsvmebdaDR5", "html": "

The most outstanding feature of organ markets is that most people hate the idea. This is a curiosity deserving a second glance. There are organ shortages almost everywhere, with people dying on waiting lists hourly. To sentence them to death based on a cursory throb of disgust is not just uncivilised but murderous.


First I should get some technical details out of the way. An organ market can involve buying from living donors, or selling rights to organs after death, or both. Organs needn’t go to the rich preferentially; like any treatment, that depends on the healthcare system. The supply of organs available won’t decrease – if free donations dropped as a result of sales, the price would rise until either enough people sold organs or relatives and friends felt morally obliged to donate them anyway. A regulated market needn’t lead to an increase in stolen Chinese organ imports. It would lower the price here, making smuggling less worthwhile, while stopping Australians going on desperate holidays to seek organs in the under-regulated Third World.


That they ‘commodify the human body’ is the main objection to organ markets. They certainly do that, but why is commodification terrible? Well, a commodity is generally an object subordinated to the goal of making money. Treating other humans in that way leads to abominable actions. Slavery and organ theft are examples of human commodification that rightly repulse us. This doesn’t generalise however. The horror in these examples is that people are being made miserable because they don’t want to be sold. This is a completely different scenario to people voluntarily commodifying themselves.


After all, if commodifying people is inherently wrong, why allow paid labour? Renting out a portion of your time, mind and body to a company or government is surely commodification in the same vein. Or is selling body parts just too much commodification? It doesn’t seem so to me – you can lose more of your most personal possession, your limited lifespan, working than you would selling a kidney. Regardless of how we personally answer that question, there is no reason for the public to decide where the line on commodification should be drawn rather than the people choosing to be involved.


Perhaps anyone who wants to commodify themselves must necessarily be insane and unable to make good choices. To decide that somebody with an alternative idea must not be of sound mind is a big step. The fact that someone disagrees with your opinions, especially ones without arguments behind them, hardly proves they are insane. To all of those who use their gut reaction of disgust to produce policy, Alex Tabarrok asks, “Is it not repugnant that some people are willing to let others die so that their stomachs won’t become queasy at the thought that someone, somewhere is selling a kidney?”


But can people in desperate poverty be considered to be making free choices? Many say no. So, is the choice between starving and selling one’s kidney really a choice? Yes; an easy one. One of the options is awful. To forbid organ selling is to take away the better choice. If we choose to provide an even better option to the person that would be great – but it is no solution to the problem of poverty to take away what choices the poor do have absent outside help.


A related argument is that even with better choices, poor people will be so desperate as to be irrational. However even if we accept that poor people are irrational, for anyone desperate enough to become irrational, selling an organ is probably a great idea. Given the ubiquitous human aversion to being cut up, poor people are more likely to underestimate the merit of that cash source. Should we intervene there?


Another argument regarding poverty is that organ markets are highly unegalitarian; they’re another way to exploit the poor. However, there are two inequalities involved in this market. People have differing amounts of money, and people have differing numbers of functioning organs. Which of these inequalities is worse for those with less? The most pressing egalitarian action would be to redistribute the organs more fairly. By happy coincidence the most effective way to do this is to simultaneously redistribute wealth as well. If poor people sell organs, all the better; the money is redistributed to them as organs are also redistributed to those with least.


The alternative to a market is ‘altruism’. If a brother needs an organ to live, how can you refuse? Unlike the disconnected poor person who benefits from an extra option, this family member loses their previous option of keeping both their organs and their family relationships. The latter are effectively held to ransom. This system leaves the patient with the stress of traipsing around making such awkward requests. Instead of loving support, they get to watch the family politics as everyone tries not to be left with the responsibility, everyone hiding their relief when their blood type is incompatible. Often people offer an organ, then ask the transplant team to judge them a poor match. This gets them off the hook, but leaves the ill person in a cruel cycle of hope and despair. It’s analogous to telling cancer patients ‘come for chemo on Tuesday’, then turning them away every week till they die. If the patient is fortunate enough to find a donor, there is potentially the stifling lifelong obligation to them. People have refused organs over this. The troubling emotional dynamics surrounding ‘donation’ led Thomas E. Starzl, a great transplant surgeon, to stop doing live transplants.


My favourite argument against organ markets is ‘it will create a dystopian world where an underclass exists to replace body parts of the rich’. This is flawed in a multitude of ways. Most people would be in neither category. It would create as much of a split as ‘people who make donuts’ vs. ‘people who eat donuts’. The exchange of money makes the parties more equal in the transaction than if one is the unfortunate victim of a request they cannot refuse. Individual people can’t be used as organ factories. Number of organs is a hopeless basis for discrimination, due to the effort involved in actually finding out which organs somebody has.


‘Altruistic giving’ is more coercive than a market, unnecessarily cruel to the patient, the donor and their family and friends, and leaves thousands to die on waiting lists. Organ markets can save lives without us having to sacrifice morality and should join the ranks of life insurance and money lending; markets we once thought unthinkable.


Originally published in Woroni.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "nHRri2F49MZbcqmkY", "title": "Back Up and Ask Whether, Not Why", "pageUrl": "https://www.lesswrong.com/posts/nHRri2F49MZbcqmkY/back-up-and-ask-whether-not-why", "postedAt": "2008-11-06T19:20:14.000Z", "baseScore": 54, "voteCount": 38, "commentCount": 26, "url": null, "contents": { "documentId": "nHRri2F49MZbcqmkY", "html": "

Followup to: The Bottom Line


A recent conversation reminded me of this simple, important, and difficult method:


When someone asks you "Why are you doing X?",
And you don't remember an answer previously in mind,
Do not ask yourself "Why am I doing X?".


For example, if someone asks you
"Why are you using a QWERTY keyboard?" or "Why haven't you invested in stocks?"
and you don't remember already considering this exact question and deciding it,
do not ask yourself "Why am I using a QWERTY keyboard?" or "Why aren't I invested in stocks?"


Instead, try to blank your mind - maybe not a full-fledged crisis of faith, but at least try to prevent your mind from knowing the answer immediately - and ask yourself:


"Should I do X, or not?"


Should I use a QWERTY keyboard, or not?  Should I invest in stocks, or not?


When you finish considering this question, print out a traceback of the arguments that you yourself considered in order to arrive at your decision, whether that decision is to X, or not X.  Those are your only real reasons, nor is it possible to arrive at a real reason in any other way.


And this is also writing advice: because I have sometimes been approached by people who say "How do I convince people to wear green shoes?  I don't know how to argue it," and I reply, "Ask yourself honestly whether you should wear green shoes; then make a list of which thoughts actually move you to decide one way or another; then figure out how to explain or argue them, recursing as necessary."

" } }, { "_id": "wFHHeAdWab2i4zhNM", "title": "Hanging Out My Speaker's Shingle", "pageUrl": "https://www.lesswrong.com/posts/wFHHeAdWab2i4zhNM/hanging-out-my-speaker-s-shingle", "postedAt": "2008-11-05T14:00:00.000Z", "baseScore": 11, "voteCount": 9, "commentCount": 36, "url": null, "contents": { "documentId": "wFHHeAdWab2i4zhNM", "html": "

I was recently invited to give a talk on heuristics and biases at Jane Street Capital, one of the top proprietary trading firms ("proprietary" = they trade their own money).  When I got back home, I realized that (a) I'd successfully managed to work through the trip, and (b) it'd been very pleasant mentally, a nice change of pace.  (One of these days I have to blog about what I discovered at Jane Street - it turns out they've got their own rationalist subculture going.)


So I've decided to hang out my shingle as a speaker at financial companies.


You may be thinking:  "Perhaps, Eliezer, this is not the best of times."


Well... I do have hopes that, among the firms interested in having me as a speaker, a higher-than-usual percentage will have come out of the crash okay.  I checked recently to see if this were the case for Jane Street Capital, and it was.


But more importantly - your competitors are learning the secrets of rationality!  Are you? 


Or maybe I should frame it as:  "Not doing too well this year?  Drop the expensive big-name speakers.  I can give a fascinating and useful talk and I won't charge you as much."


And just to offer a bit of a carrot - if I can monetize by speaking, I'm much less likely to try charging for access to my future writings.  No promises, but something to keep in mind.  So do recommend me to your friends as well.

I expect that, as I speak, the marginal value of money to my work will go down; the more I speak, the more my price will go up.  If my (future) popular book on rationality becomes a hit, I'll upgrade to big-name fees.  And later in my life, if all goes as planned, I'll be just plain not available.


So I'm offering you, my treasured readers, a chance to get me early.  I would suggest referencing this page when requesting me as a speaker.  Emails will be answered in the order they arrive.

" } }, { "_id": "4SysgzYYJmErwHrWw", "title": "Today's Inspirational Tale", "pageUrl": "https://www.lesswrong.com/posts/4SysgzYYJmErwHrWw/today-s-inspirational-tale", "postedAt": "2008-11-04T16:15:24.000Z", "baseScore": 17, "voteCount": 12, "commentCount": 14, "url": null, "contents": { "documentId": "4SysgzYYJmErwHrWw", "html": "

At a Foresight Gathering some years ago, a Congressman was in attendance, and he spoke to us and said the following:

"Everyone in this room who's signed up for cryonics, raise your hand."

Many hands went up.

"Now everyone who knows the name of your representative in the House, raise your hand."

Fewer hands went up.

"And you wonder why you don't have any political influence."

Rationalists would likewise do well to keep this lesson in mind.

(I should also mention that voting is a Newcomblike problem.  As I don't believe rational agents should defect in the 100-fold iterated prisoner's dilemma, I don't buy the idea that rational agents don't vote.)


(See also Stop Voting For Nincompoops.  It's more applicable to primaries than to the general election.  But a vote for a losing candidate is not "thrown away"; it sends a message to mainstream candidates that you vote, but they have to work harder to appeal to your interest group to get your vote.  Readers in non-swing states especially should consider what message they're sending with their vote before voting for any candidate, in any election, that they don't actually like.)

" } }, { "_id": "rELc88PvDkhetQzqx", "title": "Complexity and Intelligence", "pageUrl": "https://www.lesswrong.com/posts/rELc88PvDkhetQzqx/complexity-and-intelligence", "postedAt": "2008-11-03T20:27:23.000Z", "baseScore": 34, "voteCount": 30, "commentCount": 78, "url": null, "contents": { "documentId": "rELc88PvDkhetQzqx", "html": "

Followup to: Building Something Smarter, Say Not \"Complexity\", That Alien Message


One of the Godel-inspired challenges to the idea of self-improving minds is based on the notion of \"complexity\".


Now \"complexity\", as I've previously mentioned, is a dangerous sort of word.  \"Complexity\" conjures up the image of a huge machine with incomprehensibly many gears inside - an impressive sort of image.  Thanks to this impressiveness, \"complexity\" sounds like it could be explaining all sorts of things - that all sorts of phenomena could be happening because of \"complexity\".


It so happens that \"complexity\" also names another meaning, strict and mathematical: the Kolmogorov complexity of a pattern is the size of the program code of the shortest Turing machine that produces the pattern as an output, given unlimited tape as working memory.
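
True Kolmogorov complexity is uncomputable, but a bounded toy version makes the definition concrete.  Here is a minimal sketch in Python, under invented assumptions: a three-instruction output language stands in for \"Turing machine\", and names like `toy_complexity` are mine, not standard:

```python
from itertools import product

def run(program):
    '''Interpret a toy language: '0'/'1' append a bit; 'D' doubles
    the whole output built so far.  Every program halts.'''
    out = ''
    for op in program:
        if op in '01':
            out += op
        elif op == 'D':
            out += out
    return out

def toy_complexity(target, max_len=12):
    '''Brute-force the shortest toy program printing `target` - a
    bounded stand-in for (uncomputable) Kolmogorov complexity.'''
    for n in range(1, max_len + 1):
        for prog in product('01D', repeat=n):
            if run(''.join(prog)) == target:
                return ''.join(prog)
    return None

print(toy_complexity('01010101'))  # '01DD': regular strings compress
print(toy_complexity('01101000'))  # no doubling fits - costs all 8 symbols
```

Regular strings get short programs; patternless strings cost about their own length.  That contrast is what the definition is driving at.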


I immediately note that this mathematical meaning, is not the same as that intuitive image that comes to mind when you say \"complexity\".  The vast impressive-looking collection of wheels and gears?  That's not what the math term means.


Suppose you ran a Turing machine with unlimited tape, so that, starting from our laws of physics, it simulated our whole universe - not just the region of space we see around us, but all regions of space and all quantum branches.  (There are strong indications our universe may be effectively discrete, but if not, just calculate it out to 3^^^3 digits of precision.)
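
(For readers new to the notation, 3^^^3 is Knuth's up-arrow notation, whose standard definition unpacks as

\[
3\uparrow 3 = 3^3 = 27, \qquad
3\uparrow\uparrow 3 = 3^{3^3} = 3^{27} = 7{,}625{,}597{,}484{,}987, \qquad
3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow\bigl(3\uparrow\uparrow 3\bigr),
\]

that last being a power tower of 3s of height 7,625,597,484,987.)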


Then the \"Kolmogorov complexity\" of that entire universe - throughout all of space and all of time, from the Big Bang to whatever end, and all the life forms that ever evolved on Earth and all the decoherent branches of Earth and all the life-bearing planets anywhere, and all the intelligences that ever devised galactic civilizations, and all the art and all the technology and every machine ever built by those civilizations...


...would be 500 bits, or whatever the size of the true laws of physics when written out as equations on a sheet of paper.


The Kolmogorov complexity of just a single planet, like Earth, would of course be much higher than the \"complexity\" of the entire universe that contains it.


\"Eh?\" you say.  \"What's this?\" you say.  \"How can a single planet contain more wheels and gears, more complexity, than the whole vast turning universe that embeds it?  Wouldn't one planet contain fewer books, fewer brains, fewer species?\"


But the new meaning that certain mathematicians have formulated and attached to the English word \"complexity\", is not like the visually impressive complexity of a confusing system of wheels and gears.


It is, rather, the size of the smallest Turing machine that unfolds into a certain pattern.


If you want to print out the entire universe from the beginning of time to the end, you only need to specify the laws of physics.


If you want to print out just Earth by itself, then it's not enough to specify the laws of physics.  You also have to point to just Earth within the universe.  You have to narrow it down somehow.  And, in a big universe, that's going to take a lot of information.  It's like the difference between giving directions to a city, and giving directions for finding a single grain of sand on one beach of one city.  Indeed, with all those quantum branches existing side-by-side, it's going to take around as much information to find Earth in the universe, as to just directly specify the positions of all the atoms.


Kolmogorov complexity is the sense in which we zoom into the endless swirls of the Mandelbrot fractal and think, not \"How complicated\", but rather, \"How simple\", because the Mandelbrot set is defined using a very simple equation.  But if you wanted to find a subset of the Mandelbrot set, one particular swirl, you would have to specify where to look, and that would take more bits.
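
To make the \"how simple\" concrete, here is the entire generator - a sketch, with the grid bounds and iteration cap chosen arbitrarily by me:

```python
def in_mandelbrot(c, max_iter=100):
    # c is in the set if z -> z*z + c, starting from z = 0, stays bounded.
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:      # once |z| > 2 it provably diverges
            return False
    return True

# A coarse ASCII view: endless swirls from one line of recursion.
for row in range(21):
    y = 1.2 - row * 0.12
    print(''.join('#' if in_mandelbrot(complex(-2.0 + col * 0.05, y)) else ' '
                  for col in range(61)))
```

The equation fits in one line; it is the address of any particular swirl - the center and zoom you would have to supply - that costs the extra bits.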


That's why we use the Kolmogorov complexity in Occam's Razor to determine how \"complicated\" or \"simple\" something is.  So that, when we think of the entire universe, all the stars we can see and all the implied stars we can't see, and hypothesize laws of physics standing behind it, we will think \"what a simple universe\" and not \"what a complicated universe\" - just like looking into the Mandelbrot fractal and thinking \"how simple\".  We could never accept a theory of physics as probably true, or even remotely possible, if it got an Occam penalty the size of the universe.


As a logical consequence of the way that Kolmogorov complexity has been defined, no closed system can ever increase in Kolmogorov complexity.  (At least, no closed system without a 'true randomness' source.)  A program can pattern ever more interacting wheels and gears into its RAM, but nothing it does from within itself can increase \"the size of the smallest computer program that unfolds into it\", by definition.
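
One standard way to make the \"provably constant\" claim precise (the time-indexed phrasing here is mine): if a closed system starts in state \(s_0\) and evolves by fixed deterministic dynamics, its state at step \(t\) is fully reconstructible from a description of \(s_0\), the dynamics, and \(t\), so

\[
K(s_t) \;\le\; K(s_0) \,+\, c_{\mathrm{dynamics}} \,+\, O(\log t),
\]

where the \(O(\log t)\) term merely names the step count.  Nothing the system computes internally can push \(K(s_t)\) above that ceiling.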


Suppose, for example, that you had a computer program that defined a synthetic biology and a synthetic gene system.  And this computer program ran through an artificial evolution that simulated 10^44 organisms (which is one estimate for the number of creatures who have ever lived on Earth), and subjected them to some kind of competition.  And finally, after some predefined number of generations, it selected the highest-scoring organism, and printed it out.  In an intuitive sense, you would (expect to) say that the best organisms on each round were getting more complicated, their biology more intricate, as the artificial biosystem evolved.  But from the standpoint of Kolmogorov complexity, the final organism, even after a trillion years, is no more \"complex\" than the program code it takes to specify the tournament and the criterion of competition.  The organism that wins is implicit in the specification of the tournament, so it can't be any more \"complex\" than that.
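
A deterministic toy rendition of that argument, with an invented fitness function (count the 1-bits) and a fixed random seed so the whole run is a closed system:

```python
import random

def tournament(seed=0, genome_len=32, pop=50, generations=200):
    '''Evolve bitstring organisms and return the final champion.  Every
    choice flows from the fixed seed, so the winner is implicit in this
    code: its (toy) complexity cannot exceed the program's own size.'''
    rng = random.Random(seed)
    score = lambda g: g.count('1')
    genomes = [''.join(rng.choice('01') for _ in range(genome_len))
               for _ in range(pop)]
    for _ in range(generations):
        genomes.sort(key=score, reverse=True)
        parents = genomes[:pop // 2]
        children = []
        for p in parents:                      # one bit-flip per child
            i = rng.randrange(genome_len)
            children.append(p[:i] + ('1' if p[i] == '0' else '0') + p[i + 1:])
        genomes = parents + children
    return max(genomes, key=score)

print(tournament())   # identical output on every run
```

Reach in afterward and flip a few million bits by hand, though, and a full description must now include your edits.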


But if, on the other hand, you reached into the biology and made a few million random changes here and there, the Kolmogorov complexity of the whole system would shoot way up: anyone wanting to specify it exactly, would have to describe the random changes that you made.


I specify \"random\" changes, because if you made the changes with beneficence aforethought - according to some criterion of goodness - then I could just talk about the compact criterion you used to make the changes.  Only random information is incompressible on average, so you have to make purposeless changes if you want to increase the Kolmogorov complexity as fast as possible.


So!  As you've probably guessed, the argument against self-improvement is that since closed systems cannot increase their \"complexity\", the AI must look out upon the world, demanding a high-bandwidth sensory feed, in order to grow up.


If, that is, you believe that \"increasing Kolmogorov complexity\" is prerequisite to increasing real-world effectiveness.


(We will dispense with the idea that if system A builds system B, then system A must be \"by definition\" as smart as system B.  This is the \"Einstein's mother must have been one heck of a physicist\" sophistry.  Even if a future planetary ecology is in some sense \"implicit\" in a single self-replicating RNA strand in some tidal pool, the ecology is a lot more impressive in a real-world sense: in a given period of time it can exert larger optimization pressures and do more neat stuff.)


Now, how does one argue that \"increasing Kolmogorov complexity\" has something to do with increasing intelligence?  Especially when small machines can unfold into whole universes, and the maximum Kolmogorov complexity is realized by random noise?


One of the other things that a closed computable system provably can't do, is solve the general halting problem - the problem of telling whether any Turing machine halts.


A similar proof shows that, if you take some given solver, and consider the maximum size bound such that the solver can solve the halting problem for all machines of that size or less, then this omniscience is bounded by at most the solver's own complexity plus a small constant.
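
The proof is the usual diagonalization, worth spelling out since the small constant gets leaned on later.  Given a solver \(S\), build a machine \(D\) that runs \(S\) on \(D\)'s own description and then does the opposite of whatever \(S\) predicts.  \(D\) is just \(S\) plus a fixed amount of wrapper code, and \(S\) must get \(D\) wrong, so

\[
|D| \;\le\; |S| + c
\quad\Longrightarrow\quad
n \;<\; |S| + c,
\]

where \(n\) is the largest size bound on which \(S\) is infallible.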


So... isn't increasing your Kolmogorov complexity through outside sensory inputs, the key to learning to solve the halting problem for ever-larger systems?


And doesn't this show that no closed system can \"self-improve\"?


In a word, no.


I mean, if you were to try to write it out as logic, you'd find that one of the steps involves saying, \"If you can solve all systems of complexity N, you must be of complexity greater than N (maybe minus a small constant, depending on the details of the proof).  Therefore, by increasing your complexity, you increase the range of systems you can solve.\"  This is formally a non-sequitur.


It's also a non-sequitur in practice.


I mean, sure, if we're not dealing with a closed system, you can't prove that it won't solve the halting problem.  You could be looking at an external bright light in the sky that flashes on or off to reveal the halting solution.


But unless you already have that kind of mathematical ability yourself, you won't know just from looking at the light that it's giving you true solutions to the halting problem.  You must have just been constructed with faith in the light, and the light must just happen to work.


(And in any computable universe, any light in the sky that you see won't happen to solve the halting problem.)


It's not easy for \"sensory information\" to give you justified new mathematical knowledge that you could not in principle obtain with your eyes closed.


It's not a matter, for example, of seeing written in the sky a brilliant proof, that you would never have thought of on your own.  A closed system with infinite RAM can close its eyes, and write out every possible sensory experience it could have, along with its own reactions to them, that could occur within some bounded number of steps.  Doing this does not increase its Kolmogorov complexity.


So the notion can't be that the environment tells you something that you recognize as a proof, but didn't think of on your own.  Somehow, having that sensory experience in particular, has to increase your mathematical ability even after you perfectly imagined that experience and your own reaction to it in advance.


Could it be the healing powers of having a larger universe to live in, or other people to talk to?  But you can simulate incredibly large universes - vastly larger than anything we see in our telescopes, up-arrow large - within a closed system without increasing its Kolmogorov complexity.  Within that simulation you could peek at people watching the stars, and peek at people interacting with each other, and plagiarize the books they wrote about the deep wisdom that comes from being embedded in a larger world.


What justified novel mathematical knowledge - about the halting problem in particular - could you gain from a sensory experience, that you could not gain from perfectly imagining that sensory experience and your own reaction to it, nor gain from peeking in on a simulated universe that included someone having that sensory experience?


Well, you can always suppose that you were born trusting the light in the sky, and the light in the sky always happens to tell the truth.


But what's actually going on is that the non-sequitur is coming back to bite:  Increasing your Kolmogorov complexity doesn't necessarily increase your ability to solve math problems.  Swallowing a bucket of random bits will increase your Kolmogorov complexity too.


You aren't likely to learn any mathematics by gazing up at the external stars, that you couldn't learn from \"navel-gazing\" into an equally large simulated universe.  Looking at the actual stars around you is good for figuring out where in the universe you are (the extra information that specifies your location) but not much good for learning new math that you couldn't learn by navel-gazing into a simulation as large as our universe.


In fact, it's just bloody hard to fundamentally increase your ability to solve math problems in a way that \"no closed system can do\" just by opening the system.  So far as I can tell, it basically requires that the environment be magic and that you be born with faith in this fact.


Saying that a 500-state Turing machine might be able to solve all problems up to at most 500 states plus a small constant, is misleading.  That's an upper bound, not a lower bound, and it comes from having a constructive way to build a specific unsolvable Turing machine out of the solver.  In reality, you'd expect a 500-state Turing machine to get nowhere near solving the halting problem up to 500.  I would drop dead of shock if there were a 500-state Turing machine that solved the halting problem for all the Turing machines up to 50 states.  The vast majority of 500-state Turing machines that implement something that looks like a \"halting problem solver\" will go nowhere near 500 states (but see this comment).


Suppose you write a relatively short Turing machine that, by virtue of its unlimited working memory, creates an entire universe containing googols or up-arrows of atoms...


...and within this universe, life emerges on some planet-equivalent, and evolves, and develops intelligence, and devises science to study its ecosphere and its universe, and builds computers, and goes out into space and investigates the various physical systems that have formed, and perhaps encounters other life-forms...


...and over the course of trillions or up-arrows of years, a transgalactic intrauniversal economy develops, with conduits conducting information from one end of the universe to another (because you didn't pick a physics with inconvenient lightspeed limits), a superWeb of hyperintelligences all exchanging information...


...and finally - after a long, long time - your computer program blinks a giant message across the universe, containing a problem to be solved and a specification of how to answer back, and threatening to destroy their universe if the answer is wrong...


...then this intrauniversal civilization - and everything it's ever learned by theory or experiment over the last up-arrow years - is said to contain 400 bits of complexity, or however long the original program was.


But where did it get its math powers, from inside the closed system?


If we trace back the origins of the hypergalactic civilization, then every belief it ever adopted about math, came from updating on some observed event.  That might be a star exploding, or it might be the output of a calculator, or it might be an observed event within some mind's brain... but in all cases, the update will occur because of a logic that says, \"If I see this, it means that\".  Before you can learn, you must have the prior that structures learning.  If you see something that makes you decide to change the way you learn, then you must believe that seeing X implies you should learn a different way Y.  That's how it would be for superintelligences, I expect.


If you keep tracing back through that simulated universe, you arrive at something before the dawn of superintelligence - the first intelligent beings, produced by evolution.  These first minds are the ones who'll look at Peano Arithmetic and think, \"This has never produced a contradiction, so it probably never will - I'll go ahead and program that into my AI.\"  These first minds are the ones who'll think, \"Induction seems like it works pretty well, but how do I formalize the notion of induction?\"  And these first minds are the ones who'll think, \"If I build a self-improving AI, how should it update itself - including changing its own updating process - from the results of observation?\"


And how did the first minds get the ability to think those thoughts?  From natural selection, that generated the adaptations that executed to think all those thoughts, using the simple evolutionary rule: \"keep what works\".


And in turn, natural selection in this universe popped out of the laws of physics.


So everything that this hypergalactic civilization ever believes about math, is really just induction in one form or another.  All the evolved adaptations that do induction, produced by inductive natural selection; and all the generalizations made from experience, including generalizations about how to form better generalizations.  It would all just unfold out of the inductive principle...


...running in a box sealed as tightly shut as our own universe appears to be.


And I don't see how we, in our own closed universe, are going to do any better.  Even if we have the ability to look up at the stars, it's not going to give us the ability to go outside that inductive chain to obtain justified mathematical beliefs.


If you wrote that 400-bit simulated universe over the course of a couple of months using human-level intelligence and some mysterious source of unlimited computing power, then you are much more complex than that hypergalactic civilization.  You take much more than 400 bits to find within the space of possibilities, because you are only one particular brain.


But y'know, I think that your mind, and the up-arrow mind of that inconceivable civilization, would still be worth distinguishing as Powers.  Even if you can figure out how to ask them questions.  And even if you're asking them questions by running an internal simulation, which makes it all part of your own \"complexity\" as defined in the math.


To locate an up-arrow-sized mind within an up-arrow-sized civilization would require up-arrow bits - even if the entire civilization unfolded out of a 400-bit machine as compact as our own laws of physics.  But which would be more powerful, that one \"complex\" mind, or the \"simple\" civilization it was part of?


None of this violates Godelian limitations. You can transmit to the hypergalactic civilization a similar Turing machine to the one that built it, and ask it how that Turing machine behaves.  If you can fold a hypergalactic civilization into a 400-bit Turing machine, then even a hypergalactic civilization can confront questions about the behavior of 400-bit Turing machines that are real stumpers.


And 400 bits is going to be an overestimate.  I bet there's at least one up-arrow-sized hypergalactic civilization folded into a halting Turing machine with 15 states, or something like that.  If that seems unreasonable, you are not acquainted with the Busy-Beaver problem.
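
(For reference, Radó's step-count version of the Busy Beaver function is

\[
S(n) \;=\; \max\{\, \text{steps before halting, over halting 2-symbol Turing machines with } n \text{ states} \,\},
\]

with \(S(1)=1\), \(S(2)=6\), \(S(3)=21\), \(S(4)=107\), and a known 5-state machine already running 47,176,870 steps before halting.  It grows faster than any computable function, which is the point being leaned on here.)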


You can get a hell of a lot of mathematical ability out of small Turing machines that unfold into pangalactic hypercivilizations.  But unfortunately, there are other small Turing machines that are hellishly difficult problems - perhaps unfolding into hypercivilizations themselves, or things even less comprehensible.  So even the tremendous mathematical minds that can unfold out of small Turing machines, won't be able to solve all Turing machines up to a larger size bound.  Hence no Godelian contradiction.


(I wonder:  If humanity unfolded into a future civilization of infinite space and infinite time, creating descendants and hyperdescendants of unlimitedly growing size, what would be the largest Busy Beaver number ever agreed upon?  15?  Maybe even as large as 20?  Surely not 100 - you could encode a civilization of similar origins and equivalent power into a smaller Turing machine than that.)


Olie Lamb said:  \"I don't see anything good about complexity.  There's nothing artful about complexity.  There's nothing mystical about complexity.  It's just complex.\"  This is true even when you're talking about wheels and gears, never mind Kolmogorov complexity.  It's simplicity, not complexity, that is the admirable virtue.


The real force behind this whole debate is that the word \"complexity\" sounds impressive and can act as an explanation for anything you don't understand.  Then the word gets captured by a mathematical property that's spelled using the same letters, which happens to be provably constant for closed systems.


That, I think, is where the argument really comes from, as it rationalizes the even more primitive intuition of some blind helpless thing in a box.


This argument is promulgated even by some people who can write proofs about complexity - but frankly, I do not think they have picked up the habit of visualizing what their math symbols mean in real life.  That thing Richard Feynman did, where he visualized two balls turning colors or growing hairs?  I don't think they do that.  On the last step of interpretation, they just make a quick appeal to the sound of the words.


But I will agree that, if the laws we know are true, then a self-improving mind which lacks sensory inputs shouldn't be able to improve its mathematical abilities beyond the level of an up-arrow-sized civilization - for example, it shouldn't be able to solve Busy-Beaver(100).


It might perhaps be more limited than this in mere practice, if it's just running on a laptop computer or something.  But if theoretical mathematical arguments about closed systems show anything, that is what they show.

" } }, { "_id": "GpvvQzf3pPyPsBepr", "title": "Building Something Smarter", "pageUrl": "https://www.lesswrong.com/posts/GpvvQzf3pPyPsBepr/building-something-smarter", "postedAt": "2008-11-02T17:00:00.000Z", "baseScore": 26, "voteCount": 20, "commentCount": 57, "url": null, "contents": { "documentId": "GpvvQzf3pPyPsBepr", "html": "

Previously in series: Efficient Cross-Domain Optimization


Once you demystify "intelligence" far enough to think of it as searching possible chains of causality, across learned domains, in order to find actions leading to a future ranked high in a preference ordering...


...then it no longer sounds quite as strange, to think of building something "smarter" than yourself.


There's a popular conception of AI as a tape-recorder-of-thought, which only plays back knowledge given to it by the programmers - I deconstructed this in Artificial Addition, giving the example of the machine that stores the expert knowledge Plus-Of(Seven, Six) = Thirteen instead of having a CPU that does binary arithmetic.
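
The contrast fits in a few lines of Python - the lookup table is the tape recorder, the function beneath it is a generator (names invented for illustration):

```python
# Playback: stored expert knowledge, helpless one step off the script.
PLUS_OF = {('Seven', 'Six'): 'Thirteen'}

# Generator: produces sums it was never explicitly told.
def plus(a, b):
    return a + b
```

The table can only replay what was put into it; the generator covers every case its rule covers.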


There are multiple sources supporting this misconception:


The stereotype "intelligence as book smarts", where you memorize disconnected "facts" in class and repeat them back.


The idea that "machines do only what they are told to do", which confuses the idea of a system whose abstract laws you designed, with your exerting moment-by-moment detailed control over the system's output.


And various reductionist confusions - a computer is "mere transistors" or "only remixes what's already there" (just as Shakespeare merely regurgitated what his teachers taught him: the alphabet of English letters - all his plays are merely that).

Since the workings of human intelligence are still to some extent unknown, and will seem very mysterious indeed to one who has not studied much cognitive science, it will seem impossible for such a person to imagine that a machine could contain the generators of knowledge.


The knowledge-generators and behavior-generators are black boxes, or even invisible background frameworks.  So tasking the imagination to visualize "Artificial Intelligence" only shows specific answers, specific beliefs, specific behaviors, impressed into a "machine" like being stamped into clay.  The frozen outputs of human intelligence, divorced of their generator and not capable of change or improvement.


You can't build Deep Blue by programming a good chess move for every possible position.  First and foremost, you don't know exactly which chess positions the AI will encounter.  You would have to record a specific move for zillions of positions, more than you could consider in a lifetime with your slow neurons.


But worse, even if you could record and play back "good moves", the resulting program would not play chess any better than you do.  That is the peril of recording and playing back surface phenomena, rather than capturing the underlying generator.


If I want to create an AI that plays better chess than I do, I have to program a search for winning moves.  I can't program in specific moves because then the chess player really won't be any better than I am.  And indeed, this holds true on any level where an answer has to meet a sufficiently high standard.  If you want any answer better than you could come up with yourself, you necessarily sacrifice your ability to predict the exact answer in advance - though not necessarily your ability to predict that the answer will be "good" according to a known criterion of goodness.  "We never run a computer program unless we know an important fact about the output and we don't know the output," said Marcello Herreshoff.
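
A minimal sketch of \"program a search for winning moves\" - Deep Blue's real alpha-beta search and evaluation were vastly more elaborate, and the tiny Nim game here just stands in for chess:

```python
def negamax(state, depth, game):
    '''Pick moves by searching, not by looking answers up.  `game`
    supplies the rules: moves(s), apply(s, m), and score(s) from the
    viewpoint of the player to move.'''
    moves = game.moves(state)
    if depth == 0 or not moves:
        return game.score(state), None
    best_value, best_move = float('-inf'), None
    for m in moves:
        value, _ = negamax(game.apply(state, m), depth - 1, game)
        if -value > best_value:                # zero-sum: negate their score
            best_value, best_move = -value, m
    return best_value, best_move

class Nim:
    '''Take 1-3 stones; whoever takes the last stone wins.'''
    def moves(self, s): return [m for m in (1, 2, 3) if m <= s]
    def apply(self, s, m): return s - m
    def score(self, s): return -1 if s == 0 else 0   # no stones left: mover lost

print(negamax(12, 12, Nim()))   # (-1, 1): multiples of 4 are lost positions
```

Nothing in that code stores a single move, yet it plays perfectly within its search horizon - the answers come out of the generator, not the programmer's move list.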


Deep Blue played chess barely better than the world's top humans, but a heck of a lot better than its own programmers.  Deep Blue's programmers had to know the rules of chess - since Deep Blue wasn't enough of a general AI to learn the rules by observation - but the programmers didn't play chess anywhere near as well as Kasparov, let alone Deep Blue.


Deep Blue's programmers didn't just capture their own chess-move generator.  If they'd captured their own chess-move generator, they could have avoided the problem of programming an infinite number of chess positions.  But they couldn't have beat Kasparov; they couldn't have built a program that played better chess than any human in the world.


The programmers built a better move generator - one that more powerfully steered the game toward the target of winning game positions.  Deep Blue's programmers surely had some slight ability to find chess moves that aimed at this same target, but their steering ability was much weaker than Deep Blue's.


It is futile to protest that this is "paradoxical", since it actually happened.


Equally "paradoxical", but true, is that Garry Kasparov was not born with a complete library of chess moves programmed into his DNA.  Kasparov invented his own moves; he was not explicitly preprogrammed by evolution to make particular moves - though natural selection did build a brain that could learn.  And Deep Blue's programmers invented Deep Blue's code without evolution explicitly encoding Deep Blue's code into their genes.


Steam shovels lift more weight than humans can heft, skyscrapers are taller than their human builders, humans play better chess than natural selection, and computer programs play better chess than humans.  The creation can exceed the creator.  It's just a fact.


If you can understand steering-the-future, hitting-a-narrow-target as the work performed by intelligence - then, even without knowing exactly how the work gets done, it should become more imaginable that you could build something smarter than yourself.


By building something and then testing it?  So that we can see that a design reaches the target faster or more reliably than our own moves, even if we don't understand how?  But that's not how Deep Blue was actually built.  You may recall the principle that just formulating a good hypothesis to test, usually requires far more evidence than the final test that 'verifies' it - that Einstein, in order to invent General Relativity, must have already had in hand enough evidence to isolate that one hypothesis as worth testing.  Analogously, we can see that nearly all of the optimization power of human engineering must have already been exerted in coming up with good designs to test.  The final selection on the basis of good results is only the icing on the cake.  If you test four designs that seem like good ideas, and one of them works best, then at most 2 bits of optimization pressure can come from testing - the rest of it must be the abstract thought of the engineer.
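
(The arithmetic behind \"at most 2 bits\": selecting the best of \(N\) tested designs can narrow the space of possibilities by a factor of \(N\) at most, i.e.

\[
\text{bits from testing} \;\le\; \log_2 N, \qquad \log_2 4 = 2 .
\]

Everything else in the design's improbability had to come from the thinking that nominated those four candidates in the first place.)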


There are those who will see it as almost a religious principle that no one can possibly know that a design will work, no matter how good the argument, until it is actually tested.  Just like the belief that no one can possibly accept a scientific theory, until it is tested.  But this is ultimately more of an injunction against human stupidity and overlooked flaws and optimism and self-deception and the like - so far as theoretical possibility goes, it is clearly possible to get a pretty damn good idea of which designs will work in advance of testing them.


And to say that humans are necessarily at least as good at chess as Deep Blue, since they built Deep Blue?  Well, it's an important fact that we built Deep Blue, but the claim is still a nitwit sophistry.  You might as well say that proteins are as smart as humans, that natural selection reacts as fast as humans, or that the laws of physics play good chess.


If you carve up the universe along its joints, you will find that there are certain things, like butterflies and humans, that bear the very identifiable design signature and limitations of evolution; and certain other things, like nuclear power plants and computers, that bear the signature and the empirical design level of human intelligence.  To describe the universe well, you will have to distinguish these signatures from each other, and have separate names for "human intelligence", "evolution", "proteins", and "protons", because even if these things are related they are not at all the same.

" } }, { "_id": "N6MNzvgmHtTASpaSS", "title": "BHTV: Jaron Lanier and Yudkowsky", "pageUrl": "https://www.lesswrong.com/posts/N6MNzvgmHtTASpaSS/bhtv-jaron-lanier-and-yudkowsky", "postedAt": "2008-11-01T17:04:06.000Z", "baseScore": 8, "voteCount": 7, "commentCount": 66, "url": null, "contents": { "documentId": "N6MNzvgmHtTASpaSS", "html": "

My Bloggingheads.tv interview with Jaron Lanier is up.  Reductionism, zombies, and questions that you're not allowed to answer:



This ended up being more of me interviewing Lanier than a dialog, I'm afraid.  I was a little too reluctant to interrupt.  But you at least get a chance to see the probes I use, and Lanier's replies to them.


If there are any BHTV heads out there who read Overcoming Bias and have something they'd like to talk to me about, do let me or our kindly producers know.

" } }, { "_id": "SXK87NgEPszhWkvQm", "title": "Mundane Magic", "pageUrl": "https://www.lesswrong.com/posts/SXK87NgEPszhWkvQm/mundane-magic", "postedAt": "2008-10-31T16:00:00.000Z", "baseScore": 285, "voteCount": 227, "commentCount": 97, "url": null, "contents": { "documentId": "SXK87NgEPszhWkvQm", "html": "

As you may recall from some months earlier, I think that part of the rationalist ethos is binding yourself emotionally to an absolutely lawful reductionistic universe—a universe containing no ontologically basic mental things such as souls or magic—and pouring all your hope and all your care into that merely real universe and its possibilities, without disappointment.


There's an old trick for combating dukkha where you make a list of things you're grateful for, like a roof over your head.


So why not make a list of abilities you have that would be amazingly cool if they were magic, or if only a few chosen individuals had them?


For example, suppose that instead of one eye, you possessed a magical second eye embedded in your forehead.  And this second eye enabled you to see into the third dimension—so that you could somehow tell how far away things were—where an ordinary eye would see only a two-dimensional shadow of the true world.  Only the possessors of this ability can accurately aim the legendary distance-weapons that kill at ranges far beyond a sword, or use to their fullest potential the shells of ultrafast machinery called \"cars\".


\"Binocular vision\" would be too light a term for this ability.  We'll only appreciate it once it has a properly impressive name, like Mystic Eyes of Depth Perception.


So here's a list of some of my favorite magical powers:


And finally,

" } }, { "_id": "4uDfpNTrdhEYEb2jm", "title": "Intelligence in Economics", "pageUrl": "https://www.lesswrong.com/posts/4uDfpNTrdhEYEb2jm/intelligence-in-economics", "postedAt": "2008-10-30T21:17:56.000Z", "baseScore": 14, "voteCount": 13, "commentCount": 12, "url": null, "contents": { "documentId": "4uDfpNTrdhEYEb2jm", "html": "

Followup to: Economic Definition of Intelligence?


After I challenged Robin to show how economic concepts can be useful in defining or measuring intelligence, Robin responded by - as I interpret it - challenging me to show why a generalized concept of "intelligence" is any use in economics.


Well, I'm not an economist (as you may have noticed) but I'll try to respond as best I can.


My primary view of the world tends to be through the lens of AI.  If I talk about economics, I'm going to try to subsume it into notions like expected utility maximization (I manufacture lots of copies of something that I can use to achieve my goals) or information theory (if you manufacture lots of copies of something, my probability of seeing a copy goes up).  This subsumption isn't meant to be some kind of challenge for academic supremacy - it's just what happens if you ask an AI guy an econ question.


So first, let me describe what I see when I look at economics:


I see a special case of game theory in which some interactions are highly regular and repeatable:  You can take 3 units of steel and 1 unit of labor and make 1 truck that will transport 5 units of grain between Chicago and Manchester once per week, and agents can potentially do this over and over again.  If the numbers aren't constant, they're at least regular - there's diminishing marginal utility, or supply/demand curves, rather than rolling random dice every time.  Imagine economics if no two elements of reality were fungible - you'd just have a huge incompressible problem in non-zero-sum game theory.


This may be, for example, why we don't think of scientists writing papers that build on the work of other scientists in terms of an economy of science papers - if you turn an economist loose on science, they may measure scientist salaries paid in fungible dollars, or try to see whether scientists trade countable citations with each other.  But it's much less likely to occur to them to analyze the way that units of scientific knowledge are produced from previous units plus scientific labor.  Where information is concerned, two identical copies of a file are the same information as one file.  So every unit of knowledge is unique, non-fungible, and so is each act of production.  There isn't even a common currency that measures how much a given paper contributes to human knowledge.  (I don't know what economists don't know, so do correct me if this is actually extensively studied.)


Since "intelligence" deals with an informational domain, building a bridge from it to economics isn't trivial - but where do factories come from, anyway?  Why do humans get a higher return on capital than chimpanzees?

I see two basic bridges between intelligence and economics.


The first bridge is the role of intelligence in economics: the way that steel is put together into a truck involves choosing one out of an exponentially vast number of possible configurations.  With a more clever configuration, you may be able to make a truck using less steel, or less labor.  Intelligence also plays a role at a larger scale, in deciding whether or not to buy a truck, or where to invest money.  We may even be able to talk about something akin to optimization at a macro scale, the degree to which the whole economy has put itself together in a special configuration that earns a high rate of return on investment.  (Though this introduces problems for my own formulation, as I assume a central preference ordering / utility function that an economy doesn't possess - still, deflated monetary valuations seem like a good proxy.)


The second bridge is the role of economics in intelligence: if you jump up a meta-level, there are repeatable cognitive algorithms underlying the production of unique information.  These cognitive algorithms use some resources that are fungible, or at least material enough that you can only use the resource on one task, creating a problem of opportunity costs.  (A unit of time will be an example of this for almost any algorithm.)  Thus we have Omohundro's resource balance principle, which says that the inside of an efficiently organized mind should have a common currency in expected utilons.


Says Robin:

'Eliezer has just raised the issue of how to define "intelligence", a concept he clearly wants to apply to a very wide range of possible systems.  He wants a quantitative concept that is "not parochial to humans," applies to systems with very "different utility functions," and that summarizes the system's performance over a broad "not ... narrow problem domain."  My main response is to note that this may just not be possible.  I have no objection to looking, but it is not obvious that there is any such useful broadly-applicable "intelligence" concept.'

Well, one might run into some trouble assigning a total ordering to all intelligences, as opposed to a partial ordering.  But the claim that intelligence is a useful concept - especially the way that I've defined it - is one I must strongly defend.  Our current science has advanced further on some problems than others.  Right now, there is better understanding of the steps carried out to construct a car, than of the cognitive algorithms that invented the unique car design.  But they are both, to some degree, regular and repeatable; we don't all have different brain architectures.


I generally inveigh against focusing on relatively minor between-human variations when discussing "intelligence".  It is controversial what role is played in the modern economy by such variations in whatever-IQ-tests-try-to-measure.  Anyone who denies that some such role exists would be a poor deluded fool indeed.  But, on the whole, we needn't expect "the role played by IQ variations" to be at all the same sort of question as "the role played by intelligence".


You will surely find no cars, if you take away the mysterious "intelligence" that produces, from out of a vast exponential space, the information that describes one particular configuration of steel etc. constituting a car design.  Without optimization to conjure certain informational patterns out of vast search spaces, the modern economy evaporates like a puff of smoke.


So you need some account of where the car design comes from.


Why should you try to give the same account of "intelligence" across different domains?  When someone designs a car, or an airplane, or a hedge-fund trading strategy, aren't these different designs?


Yes, they are different informational goods.


And wasn't it a different set of skills that produced them?  You can't just take a car designer and plop them down in a hedge fund.


True, but where did the different skills come from?


From going to different schools.


Where did the different schools come from?


They were built by different academic lineages, compounding knowledge upon knowledge within a line of specialization.


But where did so many different academic lineages come from?  And how is this trick of "compounding knowledge" repeated over and over?


Keep moving meta, and you'll find a regularity, something repeatable: you'll find humans, with common human genes that construct common human brain architectures.


No, not every discipline puts the same relative strain on the same brain areas.  But they are all using human parts, manufactured by mostly-common DNA.  Not all the adult brains are the same, but they learn into unique adulthood starting from a much more regular underlying set of learning algorithms.  We should expect less variance in infants than in adults.


And all the adaptations of the human brain were produced by the (relatively much structurally simpler) processes of natural selection.  Without that earlier and less efficient optimization process, there wouldn't be a human brain design, and hence no human brains.


Subtract the human brains executing repeatable cognitive algorithms, and you'll have no unique adulthoods produced by learning; and no grown humans to invent the cultural concept of science; and no chains of discoveries that produce scientific lineages; and no engineers who attend schools; and no unique innovative car designs; and thus, no cars.


The moral being that you can generalize across domains, if you keep tracing back the causal chain and keep going meta.


It may be harder to talk about "intelligence" as a common factor in the full causal account of the economy than to talk about the repeated operation that puts together many instantiations of the same car design - but there is a common factor, and the economy could hardly exist without it.


As for generalizing away from humans - well, what part of the notion of "efficient cross-domain optimization" ought to apply only to humans?

" } }, { "_id": "uPCehCa7Ecidfn2Xe", "title": "Economic Definition of Intelligence?", "pageUrl": "https://www.lesswrong.com/posts/uPCehCa7Ecidfn2Xe/economic-definition-of-intelligence", "postedAt": "2008-10-29T19:32:53.000Z", "baseScore": 18, "voteCount": 16, "commentCount": 9, "url": null, "contents": { "documentId": "uPCehCa7Ecidfn2Xe", "html": "

Followup to: Efficient Cross-Domain Optimization


Shane Legg once produced a catalogue of 71 definitions of intelligence.  Looking it over, you'll find that the 18 definitions in dictionaries and the 35 definitions of psychologists are mere black boxes containing human parts.


However, among the 18 definitions from AI researchers, you can find such notions as

"Intelligence measures an agent's ability to achieve goals in a wide range of environments" (Legg and Hutter)

or

"Intelligence is the ability to optimally use limited resources - including time - to achieve goals" (Kurzweil)

or even

"Intelligence is the power to rapidly find an adequate solution in what appears a priori (to observers) to be an immense search space" (Lenat and Feigenbaum)

which is about as close as you can get to my own notion of "efficient cross-domain optimization" without actually measuring optimization power in bits.


But Robin Hanson, whose AI background we're going to ignore for a moment in favor of his better-known identity as an economist, at once said:

"I think what you want is to think in terms of a production function, which describes a system's output on a particular task as a function of its various inputs and features."

Economists spend a fair amount of their time measuring things like productivity and efficiency.  Might they have something to say about how to measure intelligence in generalized cognitive systems?


This is a real question, open to all economists.  So I'm going to quickly go over some of the criteria-of-a-good-definition that stand behind my own proffered suggestion on intelligence, and what I see as the important challenges to a productivity-based view.  It seems to me that this is an important sub-issue of Robin's and my persistent disagreement about the Singularity.

(A)  One of the criteria involved in a definition of intelligence is that it ought to separate form and function.  The Turing Test fails this - it says that if you can build something indistinguishable from a bird, it must definitely fly, which is true but spectacularly unuseful in building an airplane.


(B)  We will also prefer quantitative measures to qualitative measures that only say "this is intelligent or not intelligent".  Sure, you can define "flight" in terms of getting off the ground, but what you really need is a way to quantify aerodynamic lift and relate it to other properties of the airplane, so you can calculate how much lift is needed to get off the ground, and calculate how close you are to flying at any given point.


(C)  So why not use the nicely quantified IQ test?  Well, imagine if the Wright Brothers had tried to build the Wright Flyer using a notion of "flight quality" built around a Fly-Q test standardized on the abilities of the average pigeon, including various measures of wingspan and air maneuverability.  We want a definition that is not parochial to humans.


(D)  We have a nice system of Bayesian expected utility maximization.  Why not say that any system's "intelligence" is just the average utility of the outcome it can achieve?  But utility functions are invariant up to a positive affine transformation, i.e., if you add 3 to all utilities, or multiply all by 5, it's the same utility function.  If we assume a fixed utility function, we would be able to compare the intelligence of the same system on different occasions - but we would like to be able to compare intelligences with different utility functions.
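
To see concretely why raw utility numbers can't serve here, consider this minimal Python sketch (the utility function is arbitrary and invented for illustration): any positive affine transform of a utility function induces exactly the same choices, so "average utility achieved" has no absolute meaning across agents.

```python
import random

def best_option(options, utility):
    """Pick the option with highest utility; only the ordering matters."""
    return max(options, key=utility)

u = lambda x: -(x - 3) ** 2   # an arbitrary utility function over outcomes
v = lambda x: 5 * u(x) + 3    # a positive affine transform of u

options = [random.uniform(0, 10) for _ in range(1000)]
assert best_option(options, u) == best_option(options, v)
# u and v are "the same" utility function: they induce identical choices,
# so the raw expected-utility number carries no absolute meaning.
```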


(E)  And by much the same token, we would like our definition to let us recognize intelligence by observation rather than presumption, which means we can't always start off assuming that something has a fixed utility function, or even any utility function at all.  We can have a prior over probable utility functions, which assigns a very low probability to overcomplicated hypotheses like "the lottery wanted 6-39-45-46-48-36 to win on October 28th, 2008", but higher probabilities to simpler desires.


(F)  Why not just measure how well the intelligence plays chess?  But in real-life situations, plucking the opponent's queen off the board or shooting the opponent is not illegal, it is creative.  We would like our definition to respect the creative shortcut - to not define intelligence into the box of a narrow problem domain.


(G)  It would be nice if intelligence were actually measurable using some operational test, but this conflicts strongly with criteria F and D.  My own definition essentially tosses this out the window - you can't actually measure optimization power on any real-world problem any more than you can compute the real-world probability update or maximize real-world expected utility.  But, just as you can wisely wield algorithms that behave sorta like Bayesian updates or increase expected utility, there are all sorts of possible methods that can take a stab at measuring optimization power.


(H)  And finally, when all is said and done, we should be able to recognize very high "intelligence" levels in an entity that can, oh, say, synthesize nanotechnology and build its own Dyson Sphere.  Nor should we assign very high "intelligence" levels to something that couldn't build a wooden wagon (even if it wanted to, and had hands).  Intelligence should not be defined too far away from that impressive thingy we humans sometimes do.


Which brings us to production functions.  I think the main problems here would lie in criteria D and E.


First, a word of background:  In Artificial Intelligence, it's more common to spend your days obsessing over the structure of a problem space - and when you find a good algorithm, you use that algorithm and pay however much computing power it requires.  You aren't as likely to find a situation where there are five different algorithms competing to solve a problem and a sixth algorithm that has to decide where to invest a marginal unit of computing power.  Not that computer scientists haven't studied this as a specialized problem.  But it's ultimately not what AIfolk do all day.  So I hope that we can both try to appreciate the danger of déformation professionnelle.


Robin Hanson said:

"Eliezer, even if you measure\noutput as you propose in terms of a state space reduction factor, my\nmain point was that simply 'dividing by the resources used' makes\nlittle sense."

I agree that "divide by resources used" is a very naive method, rather tacked-on by comparison.  If one mind gets 40 bits of optimization using a trillion floating-point operations, and another mind achieves 80 bits of optimization using two trillion floating-point operations, even in the same domain using the same utility function, they may not at all be equally "well-designed" minds.  One of the minds may itself be a lot more "optimized" than the other (probably the second one).


I do think that measuring the rarity of equally good solutions in the search space smooths out the discussion a lot.  More than any other simple measure I can think of.  You're not just presuming that 80 units are twice as good as 40 units, but trying to give some measure of how rare 80-unit solutions are in the space; if they're common it will take less "optimization power" to find them and we'll be less impressed.  This likewise helps when comparing minds with different preferences.


But some search spaces are just easier to search than others.  I generally choose to talk about this by hiking the "optimization" metric up a meta-level: how easy is it to find an algorithm that searches this space?  There's no absolute easiness, unless you talk about simple random selection, which I take as my base case.  Even if a fitness gradient is smooth - a very simple search - natural selection would creep along it by incremental neighborhood search, while a human might leap through by, e.g., looking at the first and second derivatives.  Which of these is the "inherent easiness" of the space?


Robin says:

Then we can talk about partial derivatives; rates at which output increases as a function of changes in inputs or features...  Yes a production function formulation may abstract from some relevant details, but it is far closer to reality than dividing by "resources."

A partial derivative divides the marginal output by marginal resource.  Is this so much less naive than dividing total output by total resources?
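
For illustration, here is a small Python sketch using a hypothetical Cobb-Douglas production function (my own invented example, not anything Robin has endorsed): the marginal product picked out by the partial derivative and the average product you get by dividing total output by total resources are genuinely different numbers, but both are single figures extracted from the same function.

```python
def output(compute, ram):
    """A hypothetical Cobb-Douglas production function, for illustration only."""
    return compute ** 0.5 * ram ** 0.5

c, r, eps = 100.0, 25.0, 1e-6

average_product = output(c, r) / c                             # "divide by resources"
marginal_product = (output(c + eps, r) - output(c, r)) / eps   # partial derivative

print(average_product)   # 0.5
print(marginal_product)  # 0.25 - half the average product, in this toy case
```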


I confess that I said "divide by resources" just to have some measure of efficiency; it's not a very good measure.  Still, we need to take resources into account somehow - we don't want natural selection to look as "intelligent" as humans: human engineers, given 3.85 billion years and the opportunity to run 1e44 experiments, would produce products overwhelmingly superior to biology.


But this is really establishing an ordering based on superior performance with the same resources, not a quantitative metric.  I might have to be content with a partial ordering among intelligences, rather than being able to quantify them.  If so, one of the ordering characteristics will be the amount of resources used, which is what I was getting at by saying "divide by total resources".


The idiom of "division" is based around things that can be divided, that is, fungible resources.  A human economy based on mass production has lots of these.  In modern-day computing work, programmers use fungible resources like computing cycles and RAM, but tend to produce much less fungible outputs.  Informational goods tend to be mostly non-fungible: two copies of the same file are worth around as much as one, so every worthwhile informational good is unique.  If I draw on my memory to produce an essay, neither the sentences of the essay nor the items of my memory will be substitutable for one another.  If I create a unique essay by drawing upon a thousand unique memories, how well have I done, and how much resource have I used?


Economists have a simple way of establishing a kind of fungibility-of-valuation between all the inputs and all the outputs of an economy: they look at market prices.


But this just palms off the problem of valuation on hedge funds.  Someone has to do the valuing.  A society with stupid hedge funds ends up with stupid valuations.


Steve Omohundro has pointed out that for fungible resources in an AI - and computing power is a fungible resource on modern architectures - there ought to be a resource balance principle: the marginal result of shifting a unit of resource between any two tasks should produce a decrease in expected utility, relative to the AI's probability function that determines the expectation.  To the extent any of these things have continuous first derivatives, shifting an infinitesimal unit of resource between any two tasks should have no effect on expected utility.  This establishes "expected utilons" as something akin to a central currency within the AI.
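
As a toy illustration of the resource balance principle - the expected-utility curves below are invented, concave functions standing in for real tasks - a greedy allocator that always feeds the task with the higher marginal expected utility ends up with the margins nearly equal:

```python
import math

def marginal(task_curve, units):
    """Expected-utility gain from one more unit of resource on this task."""
    return task_curve(units + 1) - task_curve(units)

task_a = lambda n: 10 * math.log(1 + n)   # hypothetical EU curves; concavity
task_b = lambda n: 4 * math.sqrt(n)       # means diminishing marginal returns

alloc = {"a": 0, "b": 0}
for _ in range(100):                      # 100 fungible units of compute
    if marginal(task_a, alloc["a"]) >= marginal(task_b, alloc["b"]):
        alloc["a"] += 1
    else:
        alloc["b"] += 1

# At the optimum the two marginal expected utilities are (nearly) equal:
print(alloc, marginal(task_a, alloc["a"]), marginal(task_b, alloc["b"]))
```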


But this gets us back to the problems of criteria D and E.  If I look at a mind and see a certain balance of resources, is that because the mind is really cleverly balanced, or because the mind is stupid?  If a mind would rather have two units of CPU than one unit of RAM (and how can I tell this by observation, since the resources are not readily convertible?) then is that because RAM is inherently twice as valuable as CPU, or because the mind is twice as stupid in using CPU as RAM?


If you can assume the resource-balance principle, then you will find it easy to talk about the relative efficiency of alternative algorithms for use inside the AI, but this doesn't give you a good way to measure the external power of the whole AI.


Similarly, assuming a particular relative valuation of resources, as given by an external marketplace, doesn't let us ask questions like "How smart is a human economy?"  Now the relative valuation a human economy assigns to internal resources can no longer be taken for granted - a more powerful system might assign very different relative values to internal resources.


I admit that dividing optimization power by "total resources" is handwaving - more a qualitative way of saying "pay attention to resources used" than anything you could actually quantify into a single useful figure.  But I pose an open question to Robin (or any other economist) to explain how production theory can help us do better, bearing in mind that:


I would finally point out that all data about the market value of human IQ only applies to variances of intelligence within the human species.  I mean, how much would you pay a chimpanzee to run your hedge fund?

" } }, { "_id": "yLeEPFnnB9wE7KLx2", "title": "Efficient Cross-Domain Optimization", "pageUrl": "https://www.lesswrong.com/posts/yLeEPFnnB9wE7KLx2/efficient-cross-domain-optimization", "postedAt": "2008-10-28T16:33:03.000Z", "baseScore": 56, "voteCount": 40, "commentCount": 38, "url": null, "contents": { "documentId": "yLeEPFnnB9wE7KLx2", "html": "

Previously in seriesMeasuring Optimization Power


Is Deep Blue "intelligent"?  It was powerful enough at optimizing chess boards to defeat Kasparov, perhaps the most skilled chess player humanity has ever fielded.


A bee builds hives, and a beaver builds dams; but a bee doesn't build dams and a beaver doesn't build hives.  A human, watching, thinks, "Oh, I see how to do it" and goes on to build a dam using a honeycomb structure for extra strength.


Deep Blue, like the bee and the beaver, never ventured outside the narrow domain that it itself was optimized over.


There are no-free-lunch theorems showing that you can't have a truly general intelligence that optimizes in all possible universes (the vast majority of which are maximum-entropy heat baths).  And even practically speaking, human beings are better at throwing spears than, say, writing computer programs.


But humans are much more cross-domain than bees, beavers, or Deep Blue.  We might even conceivably be able to comprehend the halting behavior of every Turing machine up to 10 states, though I doubt it goes much higher than that.


Every mind operates in some domain, but the domain that humans operate in isn't "the savanna" but something more like "not too complicated processes in low-entropy lawful universes".  We learn whole new domains by observation, in the same way that a beaver might learn to chew a different kind of wood.  If I could write out your prior, I could describe more exactly the universes in which you operate.

Is evolution intelligent?  It operates across domains - not quite as well as humans do, but with the same general ability to do consequentialist optimization on causal sequences that wend through widely different domains.  It built the bee.  It built the beaver.


Whatever begins with genes, and impacts inclusive genetic fitness, through any chain of cause and effect in any domain, is subject to evolutionary optimization.  That much is true.


But evolution only achieves this by running millions of actual experiments in which the causal chains are actually played out.  This is incredibly inefficient.  Cynthia Kenyon said, "One grad student can do things in an hour that evolution could not do in a billion years."  This is not because the grad student does quadrillions of detailed thought experiments in their imagination, but because the grad student abstracts over the search space.


By human standards, evolution is unbelievably stupid.  It is the degenerate case of design with intelligence equal to zero, as befitting the accidentally occurring optimization process that got the whole thing started in the first place.


(As for saying that "evolution built humans, therefore it is efficient", this is, firstly, a sophomoric objection; second, it confuses levels.  Deep Blue's programmers were not superhuman chessplayers.  The importance of distinguishing levels can be seen from the point that humans are efficiently optimizing human goals, which are not the same as evolution's goal of inclusive genetic fitness.  Evolution, in producing humans, may have entirely doomed DNA.)


I once heard a senior mainstream AI type suggest that we might try to quantify the intelligence of an AI system in terms of its RAM, processing power, and sensory input bandwidth.  This at once reminded me of a quote from Dijkstra:  "If we wish to count lines of code, we should not regard them as 'lines produced' but as 'lines spent': the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger."  If you want to measure the intelligence of a system, I would suggest measuring its optimization power as before, but then dividing by the resources used.  Or you might measure the degree of prior cognitive optimization required to achieve the same result using equal or fewer resources.  Intelligence, in other words, is efficient optimization.
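
A minimal sketch of this admittedly naive measure - the two processes and their resource figures below are hypothetical, chosen only to echo the evolution-versus-engineer contrast:

```python
# Naive "divide by resources used": equal optimization power achieved on
# unequal budgets yields unequal intelligence. All numbers are made up.
def naive_efficiency(optimization_bits, resource_units):
    return optimization_bits / resource_units

blind_search = naive_efficiency(40, 1e15)  # 40-bit target, enormous trial budget
designer     = naive_efficiency(40, 1e6)   # same target, far fewer trials
print(blind_search < designer)             # True: equal power, unequal efficiency
```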


So if we say "efficient cross-domain optimization" - is that necessary and sufficient to convey the wisest meaning of "intelligence", after making a proper effort to factor out anthropomorphism in ranking solutions?


I do hereby propose:  "Yes."


Years ago when I was on a panel with Jaron Lanier, he had offered some elaborate argument that no machine could be intelligent, because it was just a machine and to call it "intelligent" was therefore bad poetry, or something along those lines.  Fed up, I finally snapped:  "Do you mean to say that if I write a computer program and that computer program rewrites itself and rewrites itself and builds its own nanotechnology and zips off to Alpha Centauri and builds its own Dyson Sphere, that computer program is not intelligent?"


This, I think, is a core meaning of "intelligence" that it is wise to keep in mind.


I mean, maybe not that exact test.  And it wouldn't be wise to bow too directly to human notions of "impressiveness", because this is what causes people to conclude that a butterfly must have been intelligently designed (they don't see the vast incredibly wasteful trail of trial and error), or that an expected paperclip maximizer is stupid.


But still, intelligences ought to be able to do cool stuff, in a reasonable amount of time using reasonable resources, even if we throw things at them that they haven't seen before, or change the rules of the game (domain) a little.  It is my contention that this is what's captured by the notion of "efficient cross-domain optimization".


Occasionally I hear someone say something along the lines of, "No matter how smart you are, a tiger can still eat you."  Sure, if you get stripped naked and thrown into a pit with no chance to prepare and no prior training, you may be in trouble.  And by the same token, a human can be killed by a large rock dropping on their head.  It doesn't mean a big rock is more powerful than a human.


A large asteroid, falling on Earth, would make an impressive bang.  But if we spot the asteroid, we can try to deflect it through any number of methods.  With enough lead time, a can of black paint will do as well as a nuclear weapon.  And the asteroid itself won't oppose us on our own level - won't try to think of a counterplan.  It won't send out interceptors to block the nuclear weapon.  It won't try to paint the opposite side of itself with more black paint, to keep its current trajectory.  And if we stop that asteroid, the asteroid belt won't send another planet-killer in its place.


We might have to do some work to steer the future out of the unpleasant region it will go to if we do nothing, but the asteroid itself isn't steering the future in any meaningful sense.  It's as simple as water flowing downhill, and if we nudge the asteroid off the path, it won't nudge itself back.


The tiger isn't quite like this.  If you try to run, it will follow you.  If you dodge, it will follow you.  If you try to hide, it will spot you.  If you climb a tree, it will wait beneath.


But if you come back with an armored tank - or maybe just a hunk of poisoned meat - the tiger is out of luck.  You threw something at it that wasn't in the domain it was designed to learn about.  The tiger can't do cross-domain optimization, so all you need to do is give it a little cross-domain nudge and it will spin off its course like a painted asteroid.


Steering the future, not energy or mass, not food or bullets, is the raw currency of conflict and cooperation among agents.  Kasparov competed against Deep Blue to steer the chessboard into a region where he won - knights and bishops were only his pawns.  And if Kasparov had been allowed to use any means to win against Deep Blue, rather than being artificially restricted, it would have been a trivial matter to kick the computer off the table - a rather light optimization pressure by comparison with Deep Blue's examining hundreds of millions of moves per second, or by comparison with Kasparov's pattern-recognition of the board; but it would have crossed domains into a causal chain that Deep Blue couldn't model and couldn't optimize and couldn't resist.  One bit of optimization pressure is enough to flip a switch that a narrower opponent can't switch back.


A superior general can win with fewer troops, and superior technology can win with a handful of troops.  But even a suitcase nuke requires at least a few kilograms of matter.  If two intelligences of the same level compete with different resources, the battle will usually go to the wealthier.


The same is true, on a deeper level, of efficient designs using different amounts of computing power.  Human beings, five hundred years after the Scientific Revolution, are only just starting to match their wits against the billion-year heritage of biology.  We're vastly faster; it has a vastly longer lead time.  After five hundred years and a billion years respectively, the two powers are starting to balance.


But as a measure of intelligence, I think it is better to speak of how well you can use your resources - if we want to talk about raw impact, then we can speak of optimization power directly.


So again I claim that this - computationally-frugal cross-domain future-steering - is the necessary and sufficient meaning that the wise should attach to the word, "intelligence".

" } }, { "_id": "Q4hLMDrFd8fbteeZ8", "title": "Measuring Optimization Power", "pageUrl": "https://www.lesswrong.com/posts/Q4hLMDrFd8fbteeZ8/measuring-optimization-power", "postedAt": "2008-10-27T21:44:57.000Z", "baseScore": 91, "voteCount": 52, "commentCount": 38, "url": null, "contents": { "documentId": "Q4hLMDrFd8fbteeZ8", "html": "

Previously in series: Aiming at the Target


Yesterday I spoke of how "When I think you're a powerful intelligence, and I think I know something about your preferences, then I'll predict that you'll steer reality into regions that are higher in your preference ordering."


You can quantify this, at least in theory, supposing you have (A) the agent or optimization process's preference ordering, and (B) a measure of the space of outcomes - which, for discrete outcomes in a finite space of possibilities, could just consist of counting them - then you can quantify how small a target is being hit, within how large a greater region.


Then we count the total number of states with equal or greater rank in the preference ordering to the outcome achieved, or integrate over the measure of states with equal or greater rank.  Dividing this by the total size of the space gives you the relative smallness of the target - did you hit an outcome that was one in a million?  One in a trillion?


Actually, most optimization processes produce "surprises" that are exponentially more improbable than this - you'd need to try far more than a trillion random reorderings of the letters in a book, to produce a play of quality equalling or exceeding Shakespeare.  So we take the log base two of the reciprocal of the improbability, and that gives us optimization power in bits.
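
In Python, for a finite outcome space where the unoptimized baseline is uniform random selection, the computation looks like this - a sketch of the definition just given, not a practical measurement procedure:

```python
import math

def optimization_power_bits(outcomes, preference_rank, achieved):
    """Optimization power of hitting `achieved`: the improbability, under
    uniform random selection from a finite outcome space, of doing at
    least as well, expressed in bits."""
    at_least_as_good = sum(
        1 for o in outcomes if preference_rank(o) >= preference_rank(achieved)
    )
    p = at_least_as_good / len(outcomes)
    return -math.log2(p)

# 1024 equally likely outcomes, ranked by index; achieving outcome 1020 means
# only 4 outcomes (1020..1023) are at least as good: 1 in 256, i.e. 8 bits.
outcomes = list(range(1024))
print(optimization_power_bits(outcomes, lambda o: o, 1020))  # 8.0
```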


This figure - roughly, the improbability of an "equally preferred" outcome being produced by a random selection from the space (or measure on the space) - forms the foundation of my Bayesian view of intelligence, or to be precise, optimization power.  It has many subtleties:

(1)  The wise will recognize that we are calculating the entropy of something.  We could take the figure of the relative improbability of "equally good or better" outcomes, and call this the negentropy of the system relative to a preference ordering.  Unlike thermodynamic entropy, the entropy of a system relative to a preference ordering can easily decrease (that is, the negentropy can increase, that is, things can get better over time relative to a preference ordering).


Suppose e.g. that a single switch will determine whether the world is saved or destroyed, and you don't know whether the switch is set to 1 or 0.  You can carry out an operation that coerces the switch to 1; in accordance with the second law of thermodynamics, this requires you to dump one bit of entropy somewhere, e.g. by radiating a single photon of waste heat into the void.  But you don't care about that photon - it's not alive, it's not sentient, it doesn't hurt - whereas you care a very great deal about the switch.


For some odd reason, I had the above insight while watching X TV.  (Those of you who've seen it know why this is funny.)


Taking physical entropy out of propositional variables that you care about - coercing them from unoptimized states into optimized states - and dumping the entropy into residual variables that you don't care about, means that relative to your preference ordering, the total "entropy" of the universe goes down.  This is pretty much what life is all about.


We care more about the variables we plan to alter, than we care about the waste heat emitted by our brains.  If this were not the case - if our preferences didn't neatly compartmentalize the universe into cared-for propositional variables and everything else - then the second law of thermodynamics would prohibit us from ever doing optimization.  Just like there are no-free-lunch theorems showing that cognition is impossible in a maxentropy universe, optimization will prove futile if you have maxentropy preferences.  Having maximally disordered preferences over an ordered universe is pretty much the same dilemma as the reverse.


(2)  The quantity we're measuring tells us how improbable this event is, in the absence of optimization, relative to some prior measure that describes the unoptimized probabilities.  To look at it another way, the quantity is how surprised you would be by the event, conditional on the hypothesis that there were no optimization processes around.  This plugs directly into Bayesian updating: it says that highly optimized events are strong evidence for optimization processes that produce them.


Ah, but how do you know a mind's preference ordering?  Suppose you flip a coin 30 times and it comes up with some random-looking string - how do you know this wasn't because a mind wanted it to produce that string?


This, in turn, is reminiscent of the Minimum Message Length formulation of Occam's Razor: if you send me a message telling me what a mind wants and how powerful it is, then this should enable you to compress your description of future events and observations, so that the total message is shorter.  Otherwise there is no predictive benefit to viewing a system as an optimization process.  This criterion tells us when to take the intentional stance.
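
A toy rendering of that criterion, with made-up bit costs: adopt the hypothesis "there is a mind with these goals and this much power" only if it pays for its own description length in compression of the observed outcomes.

```python
# Toy MML comparison; the bit costs are invented for illustration.
def message_length(hypothesis_bits, data_bits_given_hypothesis):
    return hypothesis_bits + data_bits_given_hypothesis

raw_encoding = message_length(0, 1000)    # outcomes sent with no model at all
with_mind    = message_length(120, 300)   # hypothetical: stating the mind's
                                          # goals and power compresses the
                                          # outcome description to 300 bits

print(with_mind < raw_encoding)  # True -> taking the intentional stance pays
```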


(3)  Actually, you need to fit another criterion to take the intentional stance - there can't be a better description that averts the need to talk about optimization.  This is an epistemic criterion more than a physical one - a sufficiently powerful mind might have no need to take the intentional stance toward a human, because it could just model the regularity of our brains like moving parts in a machine.


(4)  If you have a coin that always comes up heads, there's no need to say "The coin always wants to come up heads" because you can just say "the coin always comes up heads".  Optimization will beat alternative mechanical explanations when our ability to perturb a system defeats our ability to predict its interim steps in detail, but not our ability to predict a narrow final outcome.  (Again, note that this is an epistemic criterion.)


(5)  Suppose you believe a mind exists, but you don't know its preferences?  Then you use some of your evidence to infer the mind's preference ordering, and then use the inferred preferences to infer the mind's power, then use those two beliefs to testably predict future outcomes.  The total gain in predictive accuracy should exceed the complexity-cost of supposing that "there's a mind of unknown preferences around", the initial hypothesis.


Similarly, if you're not sure whether there's an optimizer around, some of your evidence-fuel burns to support the hypothesis that there's an optimizer around, some of your evidence is expended to infer its target, and some of your evidence is expended to infer its power.  The rest of the evidence should be well explained, or better yet predicted in advance, by this inferred data: this is your revenue on the transaction, which should exceed the costs just incurred, making an epistemic profit.


(6)  If you presume that you know (from a superior epistemic vantage point) the probabilistic consequences of an action or plan, or if you measure the consequences repeatedly, and if you know or infer a utility function rather than just a preference ordering, then you might be able to talk about the degree of optimization of an action or plan rather than just the negentropy of a final outcome.  We talk about the degree to which a plan has "improbably" high expected utility, relative to a measure over the space of all possible plans.
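
A sketch of that plan-level measure, under assumptions of my own invention (a toy planning domain, and uniform random sampling standing in for the measure over the space of plans):

```python
import math
import random

def plan_optimization_bits(my_plan, random_plans, expected_utility):
    """How 'improbably good' a plan is, relative to a measure over plans:
    -log2 of the fraction of randomly sampled plans whose expected utility
    is at least as high as my plan's."""
    bar = expected_utility(my_plan)
    as_good = sum(1 for p in random_plans if expected_utility(p) >= bar)
    return -math.log2(max(as_good, 1) / len(random_plans))

# Hypothetical domain: plans are (speed, wrong_turns) pairs scored by a toy
# expected-utility function that likes driving near 60 with no wrong turns.
eu = lambda plan: -abs(plan[0] - 60) - 10 * plan[1]
pool = [(random.uniform(0, 120), random.randint(0, 5)) for _ in range(100_000)]
print(plan_optimization_bits((58, 0), pool, eu))  # roughly 7-8 bits
```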


(7)  A similar presumption that we can measure the instrumental value of a device, relative to a terminal utility function, lets us talk about a Toyota Corolla as an "optimized" physical object, even though we attach little terminal value to it per se.


(8)  If you're a human yourself and you take the measure of a problem, then there may be "obvious" solutions that don't count for much in your view, even though the solution might be very hard for a chimpanzee to find, or a snail.  Roughly, because your own mind is efficient enough to calculate the solution without an apparent expenditure of internal effort, a solution that good will seem to have high probability, and so an equally good solution will not seem very improbable.


By presuming a base level of intelligence, we measure the improbability of a solution that "would take us some effort", rather than the improbability of the same solution emerging from a random noise generator.  This is one reason why many people say things like "There has been no progress in AI; machines still aren't intelligent at all."  There are legitimate abilities that modern algorithms entirely lack, but mostly what they're seeing is that AI is "dumber than a village idiot" - it doesn't even do as well as the "obvious" solutions that get most of the human's intuitive measure, let alone surprisingly better than that; it seems anti-intelligent, stupid.


To measure the impressiveness of a solution to a human, you've got to do a few things that are a bit more complicated than just measuring optimization power.  For example, if a human sees an obvious computer program to compute many solutions, they will measure the total impressiveness of all the solutions as being no more than the impressiveness of writing the computer program - but from the internal perspective of the computer program, it might seem to be making a metaphorical effort on each additional occasion.  From the perspective of Deep Blue's programmers, Deep Blue is a one-time optimization cost; from Deep Blue's perspective it has to optimize each chess game individually.


To measure human impressiveness you have to talk quite a bit about humans - how humans compact the search space, the meta-level on which humans approach a problem.  People who try to take human impressiveness as their primitive measure will run into difficulties, because in fact the measure is not very primitive.


(9)  For the vast majority of real-world problems we will not be able to calculate exact optimization powers, any more than we can do actual Bayesian updating over all hypotheses, or actual expected utility maximization in our planning.  But, just like Bayesian updating or expected utility maximization, the notion of optimization power does give us a gold standard against which to measure - a simple mathematical idea of what we are trying to do whenever we essay more complicated and efficient algorithms.


(10)  "Intelligence" is efficient cross-domain optimization.

" } }, { "_id": "CW6HDvodPpNe38Cry", "title": "Aiming at the Target", "pageUrl": "https://www.lesswrong.com/posts/CW6HDvodPpNe38Cry/aiming-at-the-target", "postedAt": "2008-10-26T16:47:19.000Z", "baseScore": 40, "voteCount": 22, "commentCount": 40, "url": null, "contents": { "documentId": "CW6HDvodPpNe38Cry", "html": "

Previously in series: Belief in Intelligence


Previously, I spoke of that very strange epistemic position one can occupy, wherein you don't know exactly where Kasparov will move on the chessboard, and yet your state of knowledge about the game is very different than if you faced a random move-generator with the same subjective probability distribution - in particular, you expect Kasparov to win.  I have beliefs about where Kasparov wants to steer the future, and beliefs about his power to do so.


Well, and how do I describe this knowledge, exactly?


In the case of chess, there's a simple function that classifies chess positions into wins for black, wins for white, and drawn games.  If I know which side Kasparov is playing, I know the class of chess positions Kasparov is aiming for.  (If I don't know which side Kasparov is playing, I can't predict whether black or white will win - which is not the same as confidently predicting a drawn game.)


More generally, I can describe motivations using a preference ordering.  When I consider two potential outcomes, X and Y, I can say that I prefer X to Y; prefer Y to X; or find myself indifferent between them.  I would write these relations as X > Y; X < Y; and X ~ Y.


Suppose that you have the ordering A < B ~ C < D ~ E.  Then you like B more than A, and C more than A.  {B, C}, belonging to the same class, seem equally desirable to you; you are indifferent between which of {B, C} you receive, though you would rather have either than A, and you would rather have something from the class {D, E} than {B, C}.
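
A minimal sketch of such an ordering in Python - equal numeric rank encoding indifference, higher rank encoding preference:

```python
# The ordering A < B ~ C < D ~ E as numeric ranks.
rank = {"A": 0, "B": 1, "C": 1, "D": 2, "E": 2}

def prefer(x, y):
    """Report the preference relation between two outcomes."""
    if rank[x] > rank[y]:
        return f"{x} > {y}"
    if rank[x] < rank[y]:
        return f"{x} < {y}"
    return f"{x} ~ {y}"

print(prefer("B", "A"))  # B > A
print(prefer("B", "C"))  # B ~ C
print(prefer("C", "D"))  # C < D
```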


When I think you're a powerful intelligence, and I think I know\nsomething about your preferences, then I'll predict that you'll steer\nreality into regions that are higher in your preference ordering.


Think of a huge circle containing all possible outcomes, such that outcomes higher in your preference ordering appear to be closer to the center.  Outcomes between which you are indifferent are the same distance from the center - imagine concentric rings of outcomes that are all equally preferred.  If you aim your actions and strike a consequence close to the center - an outcome that ranks high in your preference ordering - then I'll think better of your ability to aim.


The more intelligent I believe you are, the more probability I'll concentrate into outcomes that I believe are higher in your preference ordering - that is, the more I'll expect you to achieve a good outcome, and the better I'll expect the outcome to be.  Even if a powerful enemy opposes you, so that I expect the final outcome to be one that is low in your preference ordering, I'll still expect you to lose less badly if I think you're more intelligent.


What about expected utilities as opposed to preference orderings?  To talk about these, you have to attribute a probability distribution to the actor, or to the environment - you can't just observe the outcome.  If you have one of these probability distributions, then your knowledge of a utility function can let you guess at preferences between gambles (stochastic outcomes) and not just preferences between the outcomes themselves.


The "aiming at the target" metaphor - and the notion of measuring how closely we hit - extends beyond just terminal outcomes, to the forms of instrumental devices and instrumental plans.


Consider a car - say, a Toyota Corolla.  The Toyota Corolla is made up of some number of atoms - say, on the (very) rough order of ten to the twenty-ninth.  If you consider all the possible ways we could arrange those 10^29 atoms, it's clear that only an infinitesimally tiny fraction of possible configurations would qualify as a working car.  If you picked a random configuration of 10^29 atoms once per Planck time, many ages of the universe would pass before you hit on a wheeled wagon, let alone an internal combustion engine.


(When I talk about this in front of a popular audience, someone usually asks:  "But isn't this what the creationists argue?  That if you took a bunch of atoms and put them in a box and shook them up, it would be astonishingly improbable for a fully functioning rabbit to fall out?"  But the logical flaw in the creationists' argument is not that randomly reconfiguring molecules would by pure chance assemble a rabbit.  The logical flaw is that there is a process, natural selection, which, through the non-chance retention of chance mutations, selectively accumulates complexity, until a few billion years later it produces a rabbit.  Only the very first replicator in the history of time needed to pop out of the random shaking of molecules - perhaps a short RNA string, though there are more sophisticated hypotheses about autocatalytic hypercycles of chemistry.)


Even restricting our attention to running vehicles, there is an astronomically huge design space of possible vehicles that could be composed of the same atoms as the Corolla, and most of them, from the perspective of a human user, won't work quite as well.  We could take the parts in the Corolla's air conditioner, and mix them up in thousands of possible configurations; nearly all these configurations would result in a vehicle lower in our preference ordering, still recognizable as a car but lacking a working air conditioner.


So there are many more configurations corresponding to nonvehicles, or vehicles lower in our preference ranking, than vehicles ranked greater than or equal to the Corolla.


A tiny fraction of the design space does describe vehicles that we would recognize as faster, more efficient, and safer than the Corolla.  Thus the Corolla is not optimal under our preferences, nor under the designer's own goals.  The Corolla is, however, optimized, because the designer had to hit an infinitesimal target in design space just to create a working car, let alone a car of Corolla-equivalent quality.  The subspace of working vehicles is dwarfed by the space of all possible molecular configurations for the same atoms.  You cannot build so much as an effective wagon by sawing boards into random shapes and nailing them together according to coinflips.  To hit such a tiny target in configuration space requires a powerful optimization process.  The better the car you want, the more optimization pressure you have to exert - though you need a huge optimization pressure just to get a car at all.


This whole discussion assumes implicitly that the designer of the Corolla was trying to produce a "vehicle", a means of travel.  This assumption deserves to be made explicit, but it is not wrong, and it is highly useful in understanding the Corolla.


Planning also involves hitting tiny targets in a huge search space.  On a 19-by-19 Go board there are roughly 10^170 legal positions (not counting superkos).  On early positions of a Go game there are more than 300 legal moves per turn.  The search space explodes, and nearly all moves are foolish ones if your goal is to win the game.  From all the vast space of Go possibilities, a Go player seeks out the infinitesimal fraction of plans which have a decent chance of winning.


You cannot even drive to the supermarket without planning - it will take you a long, long time to arrive if you make random turns at each intersection.  The set of turn sequences that will take you to the supermarket is a tiny subset of the space of turn sequences.  Note that the subset of turn sequences we're seeking is defined by its consequence - the target - the destination.  Within that subset, we care about other things, like the driving distance.  (There are plans that would take us to the supermarket in a huge pointless loop-the-loop.)


In general, as you live your life, you try to steer reality into a particular region of possible futures.  When you buy a Corolla, you do it because you want to drive to the supermarket.  You drive to the supermarket to buy food, which is a step in a larger strategy to avoid starving.  All else being equal, you prefer possible futures in which you are alive, rather than dead of starvation.


When you drive to the supermarket, you aren't really aiming for the supermarket, you're aiming for a region of possible futures in which you don't starve.  Each turn at each intersection doesn't carry you toward the supermarket, it carries you out of the region of possible futures where you lie helplessly starving in your apartment.  If you knew the supermarket was empty, you wouldn't bother driving there.  An empty supermarket would occupy exactly the same place on your map of the city, but it wouldn't occupy the same role in your map of possible futures.  It is not a location within the city that you are really aiming at, when you drive.


Human intelligence is one kind of powerful optimization process, capable of winning a game of Go or turning sand into digital computers.  Natural selection is much slower than human intelligence; but over geological time, cumulative selection pressure qualifies as a powerful optimization process.


Once upon a time, human beings anthropomorphized stars, saw constellations in the sky and battles between constellations.  But though stars burn longer and brighter than any craft of biology or human artifice, stars are neither optimization processes, nor products of strong optimization pressures.  The stars are not gods; there is no true power in them.

" } }, { "_id": "HktFCy6dgsqJ9WPpX", "title": "Belief in Intelligence", "pageUrl": "https://www.lesswrong.com/posts/HktFCy6dgsqJ9WPpX/belief-in-intelligence", "postedAt": "2008-10-25T15:00:00.000Z", "baseScore": 116, "voteCount": 72, "commentCount": 38, "url": null, "contents": { "documentId": "HktFCy6dgsqJ9WPpX", "html": "

Since I am so uncertain of Kasparov's moves, what is the empirical content of my belief that "Kasparov is a highly intelligent chess player"?  What real-world experience does my belief tell me to anticipate?  Is it a cleverly masked form of total ignorance?


To sharpen the dilemma, suppose Kasparov plays against some mere chess grandmaster Mr. G, who's not in the running for world champion.  My own ability is far too low to distinguish between these levels of chess skill.  When I try to guess Kasparov's move, or Mr. G's next move, all I can do is try to guess "the best chess move" using my own meager knowledge of chess.  Then I would produce exactly the same prediction for Kasparov's move or Mr. G's move in any particular chess position.  So what is the empirical content of my belief that "Kasparov is a better chess player than Mr. G"?

The empirical content of my belief is the testable, falsifiable prediction that the final chess position will occupy the class of chess positions that are wins for Kasparov, rather than drawn games or wins for Mr. G.  (Counting resignation as a legal move that leads to a chess position classified as a loss.)  The degree to which I think Kasparov is a "better player" is reflected in the amount of probability mass I concentrate into the "Kasparov wins" class of outcomes, versus the "drawn game" and "Mr. G wins" class of outcomes.  These classes are extremely vague in the sense that they refer to vast spaces of possible chess positions - but "Kasparov wins" is more specific than maximum entropy, because it can be definitely falsified by a vast set of chess positions.


The outcome of Kasparov's game is predictable because I know, and understand, Kasparov's goals.  Within the confines of the chess board, I know Kasparov's motivations - I know his success criterion, his utility function, his target as an optimization process.  I know where Kasparov is ultimately trying to steer the future and I anticipate he is powerful enough to get there, although I don't anticipate much about how Kasparov is going to do it.


Imagine that I'm visiting a distant city, and a local friend volunteers to drive me to the airport.  I don't know the neighborhood. Each time my friend approaches a street intersection, I don't know whether my friend will turn left, turn right, or continue straight ahead.  I can't predict my friend's move even as we approach each individual intersection - let alone, predict the whole sequence of moves in advance.


Yet I can predict the result of my friend's unpredictable actions: we will arrive at the airport.  Even if my friend's house were located elsewhere in the city, so that my friend made a completely different sequence of turns, I would just as confidently predict our arrival at the airport.  I can predict this long in advance, before I even get into the car.  My flight departs soon, and there's no time to waste; I wouldn't get into the car in the first place, if I couldn't confidently predict that the car would travel to the airport along an unpredictable pathway.


Isn't this a remarkable situation to be in, from a scientific perspective?  I can predict the outcome of a process, without being able to predict any of the intermediate steps of the process.


How is this even possible?  Ordinarily one predicts by imagining the present and then running the visualization forward in time.  If you want a precise model of the Solar System, one that takes into account planetary perturbations, you must start with a model of all major objects and run that model forward in time, step by step.


Sometimes simpler problems have a closed-form solution, where calculating the future at time T takes the same amount of work regardless of T.  A coin rests on a table, and after each minute, the coin turns over.  The coin starts out showing heads.  What face will it show a hundred minutes later?  Obviously you did not answer this question by visualizing a hundred intervening steps.  You used a closed-form solution that worked to predict the outcome, and would also work to predict any of the intervening steps.
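
The contrast is easy to make concrete; here is a short sketch of both predictors for the coin example:

```python
# Simulation does T steps of work; the closed form does O(1) work and would
# answer equally well for any intermediate step.
def face_after_simulation(minutes):
    face = "heads"
    for _ in range(minutes):
        face = "tails" if face == "heads" else "heads"
    return face

def face_after_closed_form(minutes):
    return "heads" if minutes % 2 == 0 else "tails"  # just the parity of T

assert face_after_simulation(100) == face_after_closed_form(100) == "heads"
```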


But when my friend drives me to the airport, I can predict the outcome successfully using a strange model that won't work to predict any of the intermediate steps.  My model doesn't even require me to input the initial conditions - I don't need to know where we start out in the city!


I do need to know something about my friend.  I must know that my friend wants me to make my flight.  I must credit that my friend is a good enough planner to successfully drive me to the airport (if he wants to).  These are properties of my friend's initial state - properties which let me predict the final destination, though not any intermediate turns.


I must also credit that my friend knows enough about the city to drive successfully.  This may be regarded as a relation between my friend and the city; hence, a property of both.  But an extremely abstract property, which does not require any specific knowledge about either the city, or about my friend's knowledge about the city.


This is one way of viewing the subject matter to which I've devoted my life - these remarkable situations which place us in such odd epistemic positions.  And my work, in a sense, can be viewed as unraveling the exact form of that strange abstract knowledge we can possess; whereby, not knowing the actions, we can justifiably know the consequence.


"Intelligence" is too narrow a term to describe these remarkable situations in full generality.  I would say rather "optimization process".  A similar situation accompanies the study of biological natural selection, for example; we can't predict the exact form of the next organism observed.


But my own specialty is the kind of optimization process called "intelligence"; and even narrower, a particular kind of intelligence called "Friendly Artificial Intelligence" - of which, I hope, I will be able to obtain especially precise abstract knowledge.

" } }, { "_id": "rEDpaTTEzhPLz4fHh", "title": "Expected Creative Surprises", "pageUrl": "https://www.lesswrong.com/posts/rEDpaTTEzhPLz4fHh/expected-creative-surprises", "postedAt": "2008-10-24T22:22:47.000Z", "baseScore": 55, "voteCount": 39, "commentCount": 45, "url": null, "contents": { "documentId": "rEDpaTTEzhPLz4fHh", "html": "

Imagine that I'm playing chess against a smarter opponent.  If I could predict exactly where my opponent would move on each turn, I would automatically be at least as good a chess player as my opponent.  I could just ask myself where my opponent would move, if they were in my shoes; and then make the same move myself.  (In fact, to predict my opponent's exact moves, I would need to be superhuman - I would need to predict my opponent's exact mental processes, including their limitations and their errors.  It would become a problem of psychology, rather than chess.)


So predicting an exact move is not possible, but neither is it true that I have no information about my opponent's moves.


Personally, I am a very weak chess player - I play an average of maybe two games per year.  But even if I'm playing against former world champion Garry Kasparov, there are certain things I can predict about his next move.  When the game starts, I can guess that the move P-K4 is more likely than P-KN4.  I can guess that if Kasparov has a move which would allow me to checkmate him on my next move, he will not make that move.


Much less reliably, I can guess that Kasparov will not make a move that exposes his queen to my capture - but here, I could be greatly surprised; there could be a rationale for a queen sacrifice which I have not seen.


And finally, of course, I can guess that Kasparov will win the game...

Supposing that Kasparov is playing black, I can guess that the final position of the chess board will occupy the class of positions that are wins for black.  I cannot predict specific features of the board in detail; but I can narrow things down relative to the class of all possible ending positions.


If I play chess against a superior opponent, and I don't know for certain where my opponent will move, I can still endeavor to produce a probability distribution that is well-calibrated - in the sense that, over the course of many games, legal moves that I label with a probability of "ten percent" are made by the opponent around 1 time in 10.


You might ask:  Is producing a well-calibrated distribution over Kasparov's moves beyond my abilities as an inferior chess player?


But there is a trivial way to produce a well-calibrated probability distribution - just use the maximum-entropy distribution representing a state of total ignorance.  If my opponent has 37 legal moves, I can assign a probability of 1/37 to each move.  This makes me perfectly calibrated:  I assigned 37 different moves a probability of 1 in 37, and exactly one of those moves will happen; so I applied the label "1 in 37" to 37 different events, and exactly 1 of those events occurred.
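
As a quick illustration (my own, not from the post), the uniform guess's calibration is guaranteed by construction: exactly one of the 37 events labeled "1 in 37" occurs in every position, so the observed frequency of labeled events is exactly 1/37.

```python
import random

# Calibration of the maximum-entropy guess, checked by brute force.
labeled_events = 0
events_occurred = 0
for _ in range(1000):                    # 1000 hypothetical positions
    actual_move = random.randrange(37)   # whatever the opponent plays
    for predicted_move in range(37):     # every move got the label "1/37"
        labeled_events += 1
        if predicted_move == actual_move:
            events_occurred += 1

print(events_occurred / labeled_events)  # exactly 1/37, about 0.027
```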


Total ignorance is not very useful, even if you confess it honestly.  So the question then becomes whether I can do better than maximum entropy.  Let's say that you and I both answer a quiz with ten yes-or-no questions.  You assign probabilities of 90% to your answers, and get one answer wrong.  I assign probabilities of 80% to my answers, and get two answers wrong.  We are both perfectly calibrated but you exhibited better discrimination - your answers more strongly distinguished truth from falsehood.
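
One way to make "better discrimination" precise - my formalization, not the post's - is a proper scoring rule such as the log score, under which both calibrated answerers can be ranked:

```python
import math

def log_score(probs_assigned_to_truth):
    # Sum of log-probabilities assigned to the true answers.
    # Higher (closer to zero) is better.
    return sum(math.log(p) for p in probs_assigned_to_truth)

# You: ten answers at 90% confidence, one wrong (truth got 0.1 there).
you = [0.9] * 9 + [0.1]
# Me: ten answers at 80% confidence, two wrong.
me = [0.8] * 8 + [0.2] * 2

print(round(log_score(you), 2))  # -3.25
print(round(log_score(me), 2))   # -5.0: calibrated, but less discriminating
```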


Suppose that someone shows me an arbitrary chess position, and asks me:  "What move would Kasparov make if he played black, starting from this position?"  Since I'm not nearly as good a chess player as Kasparov, I can only weakly guess Kasparov's move, and I'll assign a non-extreme probability distribution to Kasparov's possible moves.  In principle I can do this for any legal chess position, though my guesses might approach maximum entropy - still, I would at least assign a lower probability to what I guessed were obviously wasteful or suicidal moves.


If you put me in a box and feed me chess positions and get probability distributions back out, then we would have - theoretically speaking - a system that produces Yudkowsky's guess for Kasparov's move in any chess position.  We shall suppose (though it may be unlikely) that my prediction is well-calibrated, if not overwhelmingly discriminating.


Now suppose we turn "Yudkowsky's prediction of Kasparov's move" into an actual chess opponent, by having a computer randomly make moves at the exact probabilities I assigned.  We'll call this system RYK, which stands for "Randomized Yudkowsky-Kasparov", though it should really be "Random Selection from Yudkowsky's Probability Distribution over Kasparov's Move."
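
A minimal sketch of the RYK construction (mine; the toy distribution over opening moves is hypothetical): any probability distribution over moves becomes an actual player by sampling moves at exactly the probabilities assigned.

```python
import random

def ryk_move(predicted_distribution: dict) -> str:
    # Sample one move at exactly the probabilities assigned to it.
    moves = list(predicted_distribution)
    weights = [predicted_distribution[m] for m in moves]
    return random.choices(moves, weights=weights)[0]

# A weak guess about the opening move, for illustration only.
opening_guess = {"P-K4": 0.60, "P-Q4": 0.38, "P-KN4": 0.02}
print(ryk_move(opening_guess))  # usually P-K4; rarely the dreadful P-KN4
```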


Will RYK be as good a player as Kasparov?  Of course not.  Sometimes the RYK system will randomly make dreadful moves which the real-life Kasparov would never make - start the game with P-KN4.  I assign such moves a low probability, but sometimes the computer makes them anyway, by sheer random chance.  The real Kasparov also sometimes makes moves that I assigned a low probability, but only when the move has a better rationale than I realized - the astonishing, unanticipated queen sacrifice.


Randomized Yudkowsky-Kasparov is definitely no smarter than Yudkowsky, because RYK draws on no more chess skill than I myself possess - I build all the probability distributions myself, using only my own abilities.  Actually, RYK is a far worse player than Yudkowsky.  I myself would make the best move I saw with my knowledge.  RYK only occasionally makes the best move I saw - I won't be very confident that Kasparov would make exactly the same move I would.


Now suppose that I myself play a game of chess against the RYK system.


RYK has the odd property that, on each and every turn, my probabilistic prediction for RYK's move is exactly the same prediction I would make if I were playing against world champion Garry Kasparov.


Nonetheless, I can easily beat RYK, where the real Kasparov would crush me like a bug.


The creative unpredictability of intelligence is not like the noisy unpredictability of a random number generator.  When I play against a smarter player, I can't predict exactly where my opponent will move against me.  But I can predict the end result of my smarter opponent's moves, which is a win for the other player.  When I see the randomized opponent make a move that I assigned a tiny probability, I chuckle and rub my hands, because I think the opponent has randomly made a dreadful move and now I can win.  When a superior opponent surprises me by making a move to which I assigned a tiny probability, I groan because I think the other player saw something I didn't, and now I'm about to be swept off the board.  Even though it's exactly the same probability distribution!  I can be exactly as uncertain about the actions, and yet draw very different conclusions about the eventual outcome.


(This situation is possible because I am not logically omniscient; I do not explicitly represent a joint probability distribution over entire games.)


When I play against a superior player, I can't predict exactly where my opponent will move against me.  If I could predict that, I would necessarily be at least that good at chess myself.  But I can predict the consequence of the unknown move, which is a win for the other player; and the more the player's actual action surprises me, the more confident I become of this final outcome.


The unpredictability of intelligence is a very special and unusual kind of surprise, which is not at all like noise or randomness.  There is a weird balance between the unpredictability of actions and the predictability of outcomes.

" } }, { "_id": "hTR8seXmvTDc8ieMh", "title": "San Jose Meetup, Sat 10/25 @ 7:30pm", "pageUrl": "https://www.lesswrong.com/posts/hTR8seXmvTDc8ieMh/san-jose-meetup-sat-10-25-7-30pm", "postedAt": "2008-10-23T22:55:12.000Z", "baseScore": 3, "voteCount": 2, "commentCount": 19, "url": null, "contents": { "documentId": "hTR8seXmvTDc8ieMh", "html": "

It's on Saturday 7.30pm at Il Fornaio, 302 S Market St (in the Sainte Claire Hotel), San Jose. All aspiring rationalists welcome. The reservation is currently for 21 but can be changed if needed.  Please RSVP if you haven't already.

They have a wide variety including the requested vegetarian options & pizza from an oak wood-burning oven, mostly good reviews, casual dress code, the Wine Spectator Award of Excellence 2007, and are just down the road from Montgomery Theater.


7.30 should give people time to finish chatting at the theater or freshen up at their hotels, but if you want to come a bit earlier they say they'll probably be able to seat you. If lots of people think 7.30 is too late I could change the time or suggest a cafe for early birds to grab a coffee in before dinner.


Please RSVP whether or not you'll be coming, either in a comment or email me at cursor_loop 4t yahoo p0int com. That way if new people want to come they can take unclaimed places, and I can let the restaurant know if the number changes. If you think you'll be late, please let me know that too so we can ask the restaurant to keep your seat.


Also, as my cellphone won't work in San Jose, please could someone volunteer their number for unexpectedly late or lost people to ring?


Please see here for further info. Hope to see you all on Saturday!

-- Posted on behalf of Michael Howard

" } }, { "_id": "EA39yRbhBbrccXnHi", "title": "Inner Goodness", "pageUrl": "https://www.lesswrong.com/posts/EA39yRbhBbrccXnHi/inner-goodness", "postedAt": "2008-10-23T22:19:45.000Z", "baseScore": 27, "voteCount": 16, "commentCount": 31, "url": null, "contents": { "documentId": "EA39yRbhBbrccXnHi", "html": "

Followup to: Which Parts Are "Me"?, Effortless Technique


A recent conversation with Michael Vassar touched on - or to be more accurate, he patiently explained to me - the psychology of at least three (3) different types of people known to him, who are evil and think of themselves as "evil".  In ascending order of frequency:


The first type was someone who, having concluded that God does not exist, concludes that one should do all the things that God is said to dislike.  (Apparently such folk actually exist.)


The third type was someone who thinks of "morality" only as a burden - all the things your parents say you can't do - and who rebels by deliberately doing those things.


The second type was a whole 'nother story, so I'm skipping it for now.


This reminded me of a topic I needed to post on:


Beware of placing goodness outside.


This specializes to e.g. my belief that ethicists should be inside rather than outside a profession: that it is futile to have "bioethicists" not working in biotech, or futile to think you can study Friendly AI without needing to think technically about AI.


But the deeper sense of "not placing goodness outside" was something I first learned at age ~15 from the celebrity logician Raymond Smullyan, in his book The Tao Is Silent, my first introduction to (heavily Westernized) Eastern thought.


Michael Vassar doesn't like this book.  Maybe because most of the statements in it are patently false?


But The Tao Is Silent still has a warm place reserved in my heart, for it was here that I first encountered such ideas as:

Do you think of altruism as sacrificing one's own happiness for the sake of others, or as gaining one's happiness through the happiness of others?

(I would respond, by the way, that an "altruist" is someone who chooses between actions according to the criterion of others' welfare.)


A key chapter in The Tao Is Silent can be found online:  "Taoism versus Morality".  This chapter is medium-long (say, 3-4 Eliezer OB posts) but it should convey what I mean, when I say that this book manages to be quite charming, even though most of the statements in it are false.


Here is one key passage:

TAOIST:  I think the word "humane" is central to our entire problem.  You are pushing morality.  I am encouraging humanity.  You are emphasizing "right and wrong," I am emphasizing the value of natural love.  I do not assert that it is logically impossible for a person to be both moralistic and humane, but I have yet to meet one who is!  I don't believe in fact that there are any.  My whole life experience has clearly shown me that the two are inversely related to an extraordinary degree.  I have never yet met a moralist who is a really kind person.  I have never met a truly kind and humane person who is a moralist.  And no wonder!  Morality and humaneness are completely antithetical in spirit.


MORALIST:  I'm not sure that I really understand your use of the word "humane," and above all, I am totally puzzled as to why you should regard it as antithetical to morality.


TAOIST:  A humane person is one who is simply kind, sympathetic, and loving. He does not believe that he SHOULD be so, or that it is his "duty" to be so; he just simply is.  He treats his neighbor well not because it is the "right thing to do," but because he feels like it.  He feels like it out of sympathy or empathy--out of simple human feeling.  So if a person is humane, what does he need morality for?  Why should a person be told that he should do something which he wants to do anyway?


MORALIST:  Oh, I see what you're talking about; you're talking about saints! Of course, in a world full of saints, moralists would no longer be needed--any more than doctors would be needed in a world full of healthy people.  But the unfortunate reality is that the world is not full of saints. If everybody were what you call "humane," things would be fine.  But most people are fundamentally not so nice.  They don't love their neighbor; at the first opportunity they will exploit their neighbor for their own selfish ends.  That's why we moralists are necessary to keep them in check.


TAOIST:  To keep them in check!  How perfectly said!  And do you succeed in keeping them in check?


MORALIST:  I don't say that we always succeed, but we try our best.  After all, you can't blame a doctor for failing to keep a plague in check if he conscientiously does everything he can.  We moralists are not gods, and we cannot guarantee our efforts will succeed.  All we can do is tell people they SHOULD be more humane, we can't force them to.  After all, people have free wills.


TAOIST:  And it has never once occurred to you that what in fact you are doing is making people less humane rather than more humane?


MORALIST:  Of course not, what a horrible thing to say!  Don't we explicitly tell people that they should be MORE humane?


TAOIST:  Exactly!  And that is precisely the trouble.  What makes you think that telling one that one should be humane or that it is one's "duty" to be humane is likely to influence one to be more humane?  It seems to me, it would tend to have the opposite effect.  What you are trying to do is to command love.  And love, like a precious flower, will only wither at any attempt to force it.  My whole criticism of you is to the effect that you are trying to force that which can thrive only if it is not forced.  That's what I mean when I say that you moralists are creating the very problems about which you complain.


MORALIST:  No, no, you don't understand!  I am not commanding people to love each other.  I know as well as you do that love cannot be commanded.  I realize it would be a beautiful world if everyone loved one another so much that morality would not be necessary at all, but the hard facts of life are that we don't live in such a world.  Therefore morality is necessary.  But I am not commanding one to love one's neighbor--I know that is impossible.  What I command is:  even though you don't love your neighbor all that much, it is your duty to treat him right anyhow.  I am a realist.


TAOIST:  And I say you are not a realist.  I say that right treatment or fairness or truthfulness or duty or obligation can no more be successfully commanded than love.

Or as Lao-Tse said:  "Give up all this advertising of goodness and duty, and people will regain love of their fellows."


As an empirical proposition, the idea that human nature begins as pure sweetness and light and is then tainted by the environment, is flat wrong.  I don't believe that a world in which morality was never spoken of, would overflow with kindness.


But it is often much easier to point out where someone else is wrong, than to be right yourself.  Smullyan's criticism of Western morality - especially Christian morality, which he focuses on - does hit the mark, I think.

It is very common to find a view of morality as something external, a burden of duty, a threat of punishment, an inconvenient thing that constrains you against your own desires; something from outside.


Though I don't recall the bibliography off the top of my head, there's been more than one study demonstrating that children who are told to, say, avoid playing with a car, and offered a cookie if they refrain, will go ahead and play with the car when they think no one is watching, or if no cookie is offered.  If no reward or punishment is offered, and the child is simply told not to play with the car, the child will refrain even if no adult is around.  So much for the positive influence of "God is watching you" on morals.  I don't know if any direct studies have been done on the question; but extrapolating from existing knowledge, you would expect childhood religious belief to interfere with the process of internalizing morality.  (If there were actually a God, you wouldn't want to tell the kids about it until they'd grown up, considering how human nature seems to work in the laboratory.)


Human nature is not inherent sweetness and light.  But if evil is not something that comes from outside, then neither is morality external.  It's not as if we got it from God.


I won't say that you ought to adopt a view of goodness that's more internal.  I won't tell you that you have a duty to do it.  But if you see morality as something that's outside yourself, then I think you've gone down a garden path; and I hope that, in coming to see this, you will retrace your footsteps.


Take a good look in the mirror, and ask yourself:  Would I rather that people be happy, than sad?


If the answer is "Yes", you really have no call to blame anyone else for your altruism; you're just a good person, that's all.


But what if the answer is:  "Not really - I don't care much about other people."


Then I ask:  Does answering this way, make you sad?  Do you wish that you could answer differently?


If so, then this sadness again originates in you, and it would be futile to attribute it to anything not-you.


But suppose the one even says:  "Actually, I actively dislike most people I meet and want to hit them with a sockfull of spare change.  Only my knowledge that it would be wrong keeps me from acting on my desire."


Then I would say to look in the mirror and ask yourself who it is that prefers to do the right thing, rather than the wrong thing.  And again if the answer is "Me", then it is pointless to externalize your righteousness.


Albeit if the one says: "I hate everyone else in the world and want to hurt them before they die, and also I have no interest in right or wrong; I am restrained from being a serial killer only out of a cold, calculated fear of punishment" - then, I admit, I have very little to say to them.


Occasionally I meet people who are not serial killers, but who have decided for some reason that they ought to be only selfish, and therefore, should reject their own preference that other people be happy rather than sad.  I wish I knew what sort of cognitive history leads into this state of mind.  Ayn Rand?  Aleister Crowley?  How exactly do you get there?  What Rubicons do you cross?  It's not the justifications I'm interested in, but the critical moments of thought.


Even the most elementary ideas of Friendly AI cannot be grasped by someone who externalizes morality.  They will think of Friendliness as chains imposed to constrain the AI's own "true" desires; rather than as a shaping (selection from out of a huge space of possibilities) of the AI so that the AI chooses according to certain criteria, "its own desire" as it were.  They will object to the idea of founding the AI on human morals in any way, saying, "But humans are such awful creatures," not realizing that it is only humans who have ever passed such a judgment.


As recounted in Original Teachings of Ch'an Buddhism by Chang Chung-Yuan, and quoted by Smullyan: 

One day P'ang Yun, sitting quietly in his temple, made this remark:

"How difficult it is!
How difficult it is!
My studies are like drying the fibers of a thousand pounds
of flax in the sun by hanging them on the trees!"

But his wife responded:

"My way is easy indeed!
I found the teachings of the
Patriarchs right on the tops
of the flowering plants!"

When their daughter overheard this exchange, she sang:

"My study is neither difficult nor easy.
When I am hungry I eat,
When I am tired I rest."

" } }, { "_id": "vjmw8tW6wZAtNJMKo", "title": "Which Parts Are \"Me\"?", "pageUrl": "https://www.lesswrong.com/posts/vjmw8tW6wZAtNJMKo/which-parts-are-me", "postedAt": "2008-10-22T18:15:10.000Z", "baseScore": 72, "voteCount": 51, "commentCount": 117, "url": null, "contents": { "documentId": "vjmw8tW6wZAtNJMKo", "html": "

Everything I am, is surely my brain; but I don't accept everything my brain does, as "me".


Though I don't like the fact, it does seem in my case to be true, that reflecting upon a passion can diminish it.  It does seem that in taking a step back from myself to look at what my brain is doing, that this introduces a greater emotional distance from the world, than when I am not reflecting.  Though I am extremely cautious when it comes to the moral issue of how to modify human nature, this is one thing that I would change, I think.


The point at which you start thinking that not everything that goes on in your brain is to be identified with "you", is something of a Rubicon, I fear.  And those who prize the intensity of a simple passionate existence, above success in life or a complex layered personality, should not cross that Rubicon.


(Though in this case you should accept advice from rationalists about practical policies for guarding yourself, because this is a dangerous world in which to be unreflective.  For example: if you don't follow the link to the rest of this post, then sign the #$!@ up for cryonics regardless of what your simple passions say on the subject.)


Still here?  Then here is the Rubicon of which I speak:


(1) Never seeing ex-girlfriend-X again is so terribly unbearably sad!
    ->
(2) Why am I so terribly sad?
    ->
(3) Damn, my stupid brain is still obsessing about this when I just want to get over it.


The first scenario is what I would call the intensity of simple passion; nothing between the world and the emotions.  The sentence contains no "I" to get in the way.  There is nothing to think about except the world itself, the sadness and the missing person.


In the second scenario it is not the world that is sad, but some particular person, an "I"; and the sadness of this "I" is something that can be called into question.


And in the third scenario, the borders of "I" have been drawn in a way that doesn't include everything in the brain, so that "I" is the victim of the brain, struggling against it.  And this is not paradoxical.  Everything that I am, has to be in my brain somewhere, because there is nowhere else for it to be.  But that doesn't mean I have to identify with everything that my brain does.  Just as I draw the border of "me" to include my brain but exclude my computer's CPU - which is still a sensible decision at least for now - I can define the border of myself to exclude certain events that happen in my brain, which I do not control, do not want to happen, and do not agree with.


That time I faced down the power-corrupts circuitry, I thought, "my brain is dumping this huge dose of unwanted positive reinforcement", and I sat there waiting for the surge to go away and trying not to let it affect anything.


Thinking \"I am being tempted\" wouldn't have quite described it, since the deliberate process that I usually think of as \"me\" - the little voice inside my own head - was not even close to being swayed by the attempted dose of reward that neural circuit was dumping.  I wasn't tempted by power; I'd already made my decision, and the question was enforcing it.


But a dangerous state of mind indeed it would have been, to think "How tempting!" without an "I" to be tempted.  From there it would only be a short step to thinking "How wonderful it is to exercise power!"  This, so far as I can guess, is what the brain circuit is supposed to do to a human.


So it was a fine thing that I was reflective, on this particular occasion.


The problem is when I find myself getting in the way of even the parts I call "me".  The joy of helping someone, or for that matter, the sadness of death - these emotions that I judge right and proper, which must be me if anything is me - I don't want those feelings diminished.


And I do better at this, now that my metaethics are straightened out, and I know that I have no specific grounds left for doubting my feelings.


But I still suspect that there's a little distance there, that wouldn't be there otherwise, and I wish my brain would stop doing that.


I have always been inside and outside myself, for as long as I can remember.  To my memory, I have always been reflective.  But I have witnessed the growth of others, and in at least one case I've taken someone across that Rubicon.  The one now possesses a more complex and layered personality - seems more to me now like a real person, even - but also a greater emotional distance.  Life's lows have been smoothed out, but also the highs.  That's a sad tradeoff and I wish it didn't exist.


I don't want to have to choose between sanity and passion.  I don't want to smooth out life's highs or even life's lows, if those highs and lows make sense.  I wish to feel the emotion appropriate to the event.  If death is horrible then I should fight death, not fight my own grief.


But if I am forced to choose, I will choose stability and deliberation, for the sake of what I protect.  And my personality does reflect that.  What you are willing to trade off, will sometimes get traded away - a dire warning in full generality.


This post is filed under "morality" because the question "Which parts of my brain are 'me'?" is a moral question - it's not predicted so much as chosen.  You can't perform a test on neural tissue to find whether it's in or out.  You have to accept or reject any particular part, based on what you think humans in general, and yourself particularly, ought to be.


The technique does have its advantages:  It brings greater stability, being less subject to sudden changes of mind in the winds of momentary passion.  I was unsettled the first time I met an unreflective person because they changed so fast, seemingly without anchors.  Reflection conveys a visibly greater depth and complexity of personality, and opens a realm of thought that otherwise would not exist.  It makes you more moral (at least in my experience and observation) because it gives you the opportunity to make moral choices about things that would otherwise be taken for granted, or decided for you by your brain.  Waking up to reflection is like the difference between being an entirely helpless prisoner and victim of yourself, versus becoming aware of the prison and getting a chance to escape it sometimes.  Not that you are departing your brain entirely, but the you that is the little voice in your own head may get a chance to fight back against some circuits that it doesn't want to be influenced by.


And the technique's use, to awaken the unreflective, is as I have described:  First you must cross the gap between events-in-the-world just being terribly sad or terribly important or whatever, of themselves; and say, "I feel X".  Then you must begin to judge the feeling, saying, "I do not want to feel this - I feel this way, but I wish I didn't."  Justifying yourself with "This is not what a human should be", or "the emotion does not seem appropriate to the event".


And finally there is the Rubicon of "I wish my brain wouldn't do this", at which point you are thinking as if the feeling comes from outside the inner you, imposed upon you by your brain.  (Which does not say that you are something other than your brain, but which does say that not every brain event will be accepted by you as you.)


After crossing this Rubicon you have set your feet fully upon the reflective Way; and I've yet to hear of anyone turning back successfully, though I think some have tried, or wished they could.


And once your feet are set on walking down that path, there is nothing left but to follow it forward, and try not to be emotionally distanced from the parts of yourself that you accept as you - an effort that a mind of simple passion would not need to make in the first place.  And an effort which can easily backfire by drawing your attention to the layered depths of your selfhood, away from the event and the emotion.


Somewhere at the end of this, I think, is a mastery of techniques that are Zenlike but not Zen, so that you have full passion in the parts of yourself that you identify with, and distance from the pieces of your brain that you reject; and a complex layered personality with a stable inner core, without smoothing out those highs or lows of life that you accept as appropriate to the event.


And if not, then screw it, let's hack the brain so that it works that way.  I have no confidence in my ability to judge how human nature should change, and would sooner leave it up to a more powerful mind in the same metamoral reference frame.  But if I had to guess, I think that's the right thing to do.

" } }, { "_id": "hXuB8BCyyiYuzij3F", "title": "Ethics Notes", "pageUrl": "https://www.lesswrong.com/posts/hXuB8BCyyiYuzij3F/ethics-notes", "postedAt": "2008-10-21T21:57:50.000Z", "baseScore": 20, "voteCount": 19, "commentCount": 46, "url": null, "contents": { "documentId": "hXuB8BCyyiYuzij3F", "html": "

Followup to: Ethical Inhibitions, Ethical Injunctions, Prices or Bindings?


(Some collected replies to comments on the above three posts.)


From Ethical Inhibitions:


Spambot:  Every major democratic political leader lies abundantly to obtain office, as it's a necessity to actually persuade the voters. So Bill Clinton, Jean Chretien, Winston Churchill should qualify for at least half of your list of villainy.


Have the ones who've lied more, done better?


In cases where the politician who told more lies won, has that politician gone on to rule well in an absolute sense?


Is it actually true that no one who refused to lie (and this is not the same as always telling the whole truth) could win political office?


Are the lies expected, and in that sense, less than true betrayals of someone who trusts you?


Are there understood Rules of Politics that include lies but not assassinations, which the good politicians abide by, so that they are not really violating the ethics of their tribe?


Will the world be so much worse off if sufficiently good people refuse to tell outright lies and are thereby barred from public office; or would we thereby lose a George Washington or Marcus Aurelius or two, and thereby darken history?


Pearson:  American revolutionaries as well ended human lives for the greater good


Police must sometimes kill the guilty.  Soldiers must sometimes kill civilians (or if the enemy knows you're reluctant, that gives them a motive to use civilians as a shield).  Spies sometimes have legitimate cause to kill people who helped them, but this has probably been done far more often than it has been justified by a need to end the Nazi nuclear program.


I think it's worth noting that in all such cases, you can write out something like a code of ethics and at least try to have social acceptance of it.  Politicians, who lie, may prefer not to discuss the whole thing, but politicians are only a small slice of society.


Are there many who transgress even the unwritten rules and end up really implementing the greater good?  (And no, there's no unwritten rule that says you can rob a bank to stop global warming.)


...but if you're placing yourself under unusual stress, you may need to be stricter than what society will accept from you. In fact, I think it's fair to say that the further I push any art, such as rationality or AI theory, the more I perceive that what society will let you get away with is tremendously too sloppy a standard.


Yvain:  There are all sorts of biases that would make us less likely to believe people who "break the rules" can ever turn out well. One is the halo effect. Another is availability bias—it's much easier to remember people like Mao than it is to remember the people who were quiet and responsible once their revolution was over, and no one notices the genocides that didn't happen because of some coup or assassination.


When the winners do something bad, it's never interpreted as bad after the fact. Firebombing a city to end a war more quickly, taxing a populace to give health care to the less fortunate, intervening in a foreign country's affairs to stop a genocide: they're all likely to be interpreted as evidence for "the ends don't justify the means" when they fail, but glossed over or treated as common sense interventions when they work.


Both fair points.  One of the difficult things in reasoning about ethics is the extent to which we can expect historical data to be distorted by moral self-deception on top of the more standard fogs of history.


Morrison:  I'm not sure you aren't "making too much stew from one oyster".  I certainly feel a whole lot less ethically inhibited if I'm really, really certain I'm not going to be punished.  When I override, it feels very deliberate—"system two" grappling and struggling with "system one"'s casual amorality, and with a significant chance of the override attempt failing.


Weeks:  This entire post is kind of surreal to me, as I'm pretty confident I've never felt the emotion described here before...  I don't remember ever wanting to do something that I both felt would be wrong and wouldn't have consequences otherwise.


I don't know whether to attribute this to genetic variance, environmental variance, misunderstanding, or a small number of genuine sociopaths among Overcoming Bias readers. Maybe Weeks is referring to "not wanting" in terms of not finally deciding to do something he felt was wrong, rather than not being tempted?


From Ethical Injunctions:


Psy-Kosh:  Given the current sequence, perhaps it's time to revisit the whole Torture vs Dust Specks thing?


I can think of two positions on torture to which I am sympathetic:


Strategy 1:  No legal system or society should ever refrain from prosecuting those who torture.  Anything important enough that torture would even be on the table, like the standard nuclear bomb in New York, is important enough that everyone involved should be willing to go to prison for the crime of torture.


Strategy 2:  The chance of actually encountering a "nuke in New York" situation, that can be effectively resolved by torture, is so low, and the knock-on effects of having the policy in place so awful, that a blanket injunction against torture makes sense.


In case 1, you would choose TORTURE over SPECKS, and then go to jail for it, even though it was the right thing to do.


In case 2, you would say "TORTURE over SPECKS is the right alternative of the two, but a human can never be in an epistemic state where you have justified belief that this is the case".  Which would tie in well to the Hansonian argument that you have an O(3^^^3) probability penalty from the unlikelihood of finding yourself in such a unique position.


So I am sympathetic to the argument that people should never torture, or that a human can't actually get into the epistemic state of a TORTURE vs. SPECKS decision.


But I can't back the position that SPECKS over TORTURE is inherently the right thing to do, which I did think was the issue at hand.  This seems to me to mix up an epistemic precaution with morality.


There are certainly worse things than torturing one person—torturing two people, for example.  But if you adopt position 2, then you would refuse to torture one person with your own hands even to save a thousand people from torture, while simultaneously saying that it is better for one person to be tortured at your own hands than for a thousand people to be tortured at someone else's.


I try to use the words "morality" and "ethics" consistently as follows:  The moral questions are over the territory (or, hopefully equivalently, over epistemic states of absolute certainty).  The ethical questions are over epistemic states that humans are likely to be in.  Moral questions are terminal.  Ethical questions are instrumental.


Hanson:  The problem here of course is how selective to be about rules to let into this protected level of "rules almost no one should think themselves clever enough to know when to violate."  After all, your social training may well want you to include "Never question our noble leader" in that set.  Many a Christian has been told the mysteries of God are so subtle that they shouldn't think themselves clever enough to know when they've found evidence that God isn't following a grand plan to make this the best of all possible worlds.


Some of the flaws in Christian theology lie in what they think their supposed facts would imply: e.g., that because God did miracles you can know that God is good.  Other problems come more from the falsity of the premises than the invalidity of the deductions.  Which is to say, if God did exist and were good, then you would be justified in being cautious around stomping on parts of God's plan that didn't seem to make sense at the moment.  But this epistemic state would best be arrived at via a long history of people saying, "Look how stupid God's plan is, we need to do X" and then X blowing up on them.  Rather than, as is actually the case, people saying "God's plan is X" and then X blowing up on them.


Or if you'd found with some historical regularity that, when you challenged the verdict of the black box, that you seemed to be right 90% of the time, but the other 10% of the time you got black-swan blowups that caused a hundred times as much damage, that would also be cause for hesitation—albeit it doesn't quite seem like grounds for suspecting a divine plan.


Nominull:  So... do you not actually believe in your injunction to "shut up and multiply"?  Because for some time now you seem to have been arguing that we should do what feels right rather than trying to figure out what is right.


Certainly I'm not saying "just do what feels right".  There's no safe defense, not even ethical injunctions.  There's also no safe defense, not even "shut up and multiply".


I probably should have been clearer about this before, but I was trying to discuss things in order, and didn't want to wade into ethics without specialized posts...


People often object to the sort of scenarios that illustrate "shut up and multiply" by saying, "But if the experimenter tells you X, what if they might be lying?"


Well, in a lot of real-world cases, then yes, there are various probability updates you perform based on other people being willing to make bets against you; and just because you get certain experimental instructions doesn't imply the real world is that way.


But the base case has to be moral comparisons between worlds, or comparisons of expected utility between given probability distributions.   If you can't ask about the base case, then what good can you get from instrumental ethics built on top?


Let's be very clear that I don't think that one small act of self-deception is an inherently morally worse event than, say, getting a hand chopped off.  I'm asking, rather, how one should best avoid the dismembering chainsaw; and I am arguing that in reasonable states of knowledge a human can attain, the answer is, "Don't deceive yourself, it's a black-swan bet at best."  Furthermore, that in the vast majority of cases where I have seen people conclude otherwise, it has indicated messed-up reasoning more than any actual advantage.


Vassar:  For such a reason, I would be very wary of using such rules in an AGI, but of course, perhaps the actual mathematical formulation of the rule in question within the AGI would be less problematic, though a few seconds of thought doesn't give me much reason to think this.


Are we still talking about self-deception?  Because I would give odds around as extreme as the odds I would give of anything, that if you tell me "the AI you built is trying to deceive itself", it indicates that some kind of really epic error has occurred.  Controlled shutdown, immediately.


Vassar:  In a very general sense though, I see a logical problem with this whole line of thought.  How can any of these injunctions survive except as self-protecting beliefs?  Isn't this whole approach just the sort of \"fighting bias with bias\" that you and Robin usually argue against?


Maybe I'm not being clear about how this would work in an AI!


The ethical injunction isn't self-protecting, it's supported within the structural framework of the underlying system.  You might even find ethical injunctions starting to emerge without programmer intervention, in some cases, depending on how well the AI understood its own situation.


But the kind of injunctions I have in mind wouldn't be reflective—they wouldn't modify the utility function, or kick in at the reflective level to ensure their own propagation.  That sounds really scary, to me—there ought to be an injunction against it!


You might have a rule that would controlledly shut down the (non-mature) AI if it tried to execute a certain kind of source code change, but that wouldn't be the same as having an injunction that exerts direct control over the source code to propagate itself.


To the extent the injunction sticks around in the AI, it should be as the result of ordinary reasoning, not reasoning taking the injunction into account!  That would be the wrong kind of circularity; you can unwind past ethical injunctions!


My ethical injunctions do not come with an extra clause that says, "Do not reconsider this injunction, including not reconsidering this clause."  That would be going way too far.  If anything, you ought to have an injunction against that kind of circularity (since it seems like a plausible failure mode in which the system has been parasitized by its own content).


You should never, ever murder an innocent person who's helped you, even if it's the right thing to do


Shut up and do the impossible!


Ord:  As written, both these statements are conceptually confused.  I understand that you didn't actually mean either of them literally, but I would advise against trading on such deep-sounding conceptual confusions.


I can't weaken them and make them come out as the right advice.


Even after \"Shut up and do the impossible\", there was that commenter who posted on their failed attempt at the AI-Box Experiment by saying that they thought they gave it a good try—which shows how hard it is to convey the sentiment of \"Shut up and do the impossible!\"


Readers can work out on their own how to distinguish the map and the territory, I hope.  But if you say "Shut up and do what seems impossible!", then that, to me, sounds like dispelling part of the essential message—that what seems impossible doesn't look like it "seems impossible", it just looks impossible.


Likewise with \"things you shouldn't do even if they're the right thing to do\".  Only the paradoxical phrasing, which is obviously not meant to be taken literally, conveys the danger and tension of ethics—the genuine opportunities you might be passing up—and for that matter, how dangerously meta the whole line of argument is.


\"Don't do it, even if it seems right\" sounds merely clever by comparison—like you're going to reliably divine the difference between what seems right and what is right, and happily ride off into the sunset.


Crowe:  This seems closely related to inside-view versus outside-view.  The think-lobe of the brain comes up with a cunning plan. The plan breaks an ethical rule but calculation shows it is for the greater good.  The executive-lobe of the brain then ponders the outside view.  Everyone who has executed an evil cunning plan has run a calculation of the greater good and had their plan endorsed.  So the calculation lacks outside-view credibility.


Yes, inside view versus outside view is definitely part of this.  And the planning fallacy, optimism, and overconfidence, too.


But there are also biases arguing against the same line of reasoning, as noted by Yvain:  History may be written by the victors to emphasize the transgressions of the losers while overlooking the moral compromises of those who achieved "good" results, etc.


Also, some people who execute evil cunning plans may just have evil intent—possibly also with outright lies about their intentions.  In which case, they really wouldn't be in the reference class of well-meaning revolutionaries, albeit you would have to worry about your comrades; the Trotsky->Lenin->Stalin slide.


Kurz:  What's to prohibit the meta-reasoning from taking place before the shutdown triggers? It would seem that either you can hard-code an ethical inhibition or you can't. Along those lines, is it fair to presume that the inhibitions are always negative, so that non-action is the safe alternative? Why not just revert to a known state?


If a self-modifying AI with the right structure will write ethical injunctions at all, it will also inspect the code to guarantee that no race condition exists with any deliberative-level supervisory systems that might have gone wrong in the condition where the code executes. Otherwise you might as well not have the code.


Inaction isn't safe but it's safer than running an AI whose moral system has gone awry.


Finney:  Which is better: conscious self-deception (assuming that's even meaningful), or unconscious?


Once you deliberately choose self-deception, you may have to protect it by adopting other Dark Side Epistemology. I would, of course, say "neither" (as otherwise I would be swapping to the Dark Side) but if you ask me which is worse—well, hell, even I'm still undoubtedly unconsciously self-deceiving, but that's not the same as going over to the Dark Side by allowing it!


From Prices or Bindings?:


Psy-Kosh:   Hrm.  I'd think "avoid destroying the world" itself to be an ethical injunction too.


The problem is that this is phrased as an injunction over positive consequences.  Deontology does better when it's closer to the action level and negative rather than positive.


Imagine trying to give this injunction to an AI.   Then it would have to do anything that it thought would prevent the destruction of the world, without other considerations.   Doesn't sound like a good idea.


Crossman:  Eliezer, can you be explicit which argument you're making?  I thought you were a utilitarian, but you've been sounding a bit Kantian lately.


If all I want is money, then I will one-box on Newcomb's Problem.


I don't think that's quite the same as being a Kantian, but it does reflect the idea that similar decision algorithms in similar epistemic states will tend to produce similar outputs, and that such decision systems should not pretend to the logical impossibility of local optimization.  But this is a deep subject on which I have yet to write up my full views.


Clay:  Put more seriously, I would think that being believed to put the welfare of humanity ahead of concerns about personal integrity could have significant advantages itself.


The whole point here is that "personal integrity" doesn't have to be about being a virtuous person.  It can be about trying to save the world without any concern for your own virtue.  It can be the sort of thing you'd want a pure nonsentient decision agent to do, something that was purely a means and not at all an end in itself.


Andrix:  There seems to be a conflict here between not lying to yourself, and holding a traditional rule that suggests you ignore your rationality.


Your rationality is the sum of your full abilities, including your wisdom about what you refrain from doing in the presence of what seem like good reasons.


Yvain: I am glad Stanislav Petrov, contemplating his military oath to always obey his superiors and the appropriate guidelines, never read this post.


An interesting point, for several reasons.


First, did Petrov actually swear such an oath, and would it apply in such fashion as to require him to follow the written policy rather than using his own military judgment?


Second, you might argue that Petrov's oath wasn't intended to cover circumstances involving the end of the world, and that a common-sense exemption should apply when the stakes suddenly get raised hugely beyond the intended context of the original oath.  I think this fails, because Petrov was regularly in charge of a nuclear-war installation and so this was exactly the sort of event his oath would be expected to apply to.


Third, the Soviets arguably implemented what I called Strategy 1 above:   Petrov did the right thing, and was censured for it anyway.


Fourth—maybe, on sober reflection, we wouldn't have wanted the Soviets to act differently!  Yes, the written policy was stupid.  And the Soviet Union was undoubtedly censuring Petrov out of bureaucratic coverup, not for reasons of principle.  But do you want the Soviet Union to have a written, explicit policy that says, "Anyone can ignore orders in a nuclear war scenario if they think it's a good idea," or even an explicit policy that says "Anyone who ignores orders in a nuclear war scenario, who is later vindicated by events, will be rewarded and promoted"?


Part of the sequence Ethical Injunctions


(end of sequence)


Previous post: \"Prices or Bindings?\"

" } }, { "_id": "K2c3dkKErsqFd28Dh", "title": "Prices or Bindings?", "pageUrl": "https://www.lesswrong.com/posts/K2c3dkKErsqFd28Dh/prices-or-bindings", "postedAt": "2008-10-21T16:00:00.000Z", "baseScore": 45, "voteCount": 35, "commentCount": 43, "url": null, "contents": { "documentId": "K2c3dkKErsqFd28Dh", "html": "

Followup to: Ethical Injunctions


During World War II, Knut Haukelid and three other saboteurs sank a civilian Norwegian ferry ship, the SF Hydro, carrying a shipment of deuterium for use as a neutron moderator in Germany's atomic weapons program.  Eighteen dead, twenty-nine survivors.  And that was the end of the Nazi nuclear program.  Can you imagine a Hollywood movie in which the hero did that, instead of coming up with some amazing clever way to save the civilians on the ship?


Stephen Dubner and Steven Levitt published the work of an anonymous economist turned bagelseller, Paul F., who dropped off baskets of bagels and came back to collect money from a cashbox, and also collected statistics on payment rates.  The current average payment rate is 89%.  Paul F. found that people on the executive floor of a company steal more bagels; that people with security clearances don't steal any fewer bagels; that telecom companies have robbed him and that law firms aren't worth the trouble.


Hobbes (of Calvin and Hobbes) once said:  "I don't know what's worse, the fact that everyone's got a price, or the fact that their price is so low."


If Knut Haukelid sold his soul, he held out for a damned high price—the end of the Nazi atomic weapons program.


Others value their integrity less than a bagel.


One suspects that Haukelid's price was far higher than most people would charge, if you told them to never sell out.  Maybe we should stop telling people they should never let themselves be bought, and focus on raising their price to something higher than a bagel?


But I really don't know if that's enough.


The German philosopher Fichte once said, "I would not break my word even to save humanity."


Raymond Smullyan, in whose book I read this quote, seemed to laugh and not take Fichte seriously.


Abraham Heschel said of Fichte, "His salvation and righteousness were apparently so much more important to him than the fate of all men that he would have destroyed mankind to save himself."


I don't think they get it.


If a serial killer comes to a confessional, and confesses that he's killed six people and plans to kill more, should the priest turn him in?  I would answer, "No."  If not for the seal of the confessional, the serial killer would never have come to the priest in the first place.  All else being equal, I would prefer the world in which the serial killer talks to the priest, and the priest gets a chance to try and talk the serial killer out of it.


I use the example of a priest, rather than a psychiatrist, because a psychiatrist might be tempted to break confidentiality "just this once", and the serial killer knows that.  But a Catholic priest who broke the seal of the confessional—for any reason—would face universal condemnation from his own church.  No Catholic would be tempted to say, "Well, it's all right because it was a serial killer."


I approve of this custom and its absoluteness, and I wish we had a rationalist equivalent.


The trick would be establishing something of equivalent strength to a Catholic priest who believes God doesn't want him to break the seal, rather than the lesser strength of a psychiatrist who outsources their tape transcriptions to Pakistan.  Otherwise serial killers will, quite sensibly, use the Catholic priests instead, and get less rational advice.


Suppose someone comes to a rationalist Confessor and says:  "You know, tomorrow I'm planning to wipe out the human species using this neat biotech concoction I cooked up in my lab."  What then?  Should you break the seal of the confessional to save humanity?


It appears obvious to me that the issues here are just those of the one-shot Prisoner's Dilemma, and I do not consider it obvious that you should defect on the one-shot PD if the other player cooperates in advance on the expectation that you will cooperate as well.


There are issues with trustworthiness and how the sinner can trust the rationalist's commitment.  It is not enough to be trustworthy; you must appear so.  But anything that mocks the appearance of trustworthiness, while being unbound from its substance, is a poor signal; the sinner can follow that logic as well.  Perhaps once neuroimaging is a bit more advanced, we could have the rationalist swear under a truthtelling machine that they would not break the seal of the confessional even to save humanity.


There's a proverb I failed to Google, which runs something like, \"Once someone is known to be a liar, you might as well listen to the whistling of the wind.\"  You wouldn't want others to expect you to lie, if you have something important to say to them; and this issue cannot be wholly decoupled from the issue of whether you actually tell the truth.  If you'll lie when the fate of the world is at stake, and others can guess that fact about you, then, at the moment when the fate of the world is at stake, that's the moment when your words become the whistling of the wind.


I don't know if Fichte meant it that way, but his statement makes perfect sense as an ethical thesis to me.  It's not that one person's personal integrity is worth more, as terminal valuta, than the entire world.  Rather, losing all your ethics is not a pure advantage.


Being believed to tell the truth has advantages, and I don't think it's so easy to decouple that from telling the truth.  Being believed to keep your word has advantages; and if you're the sort of person who would in fact break your word to save humanity, the other may guess that too.  Even intrapersonal ethics can help protect you from black swans and fundamental mistakes.  That logic doesn't change its structure when you double the value of the stakes, or even raise them to the level of a world.  Losing your ethics is not like shrugging off some chains that were cool to look at, but were weighing you down in an athletic contest.


This I knew from the beginning:  That if I had no ethics I would hold to even with the world at stake, I had no ethics at all.  And I could guess how that would turn out.


 


Part of the sequence Ethical Injunctions


Next post: \"Ethics Notes\"


Previous post: \"Ethical Injunctions\"

" } }, { "_id": "dWTEtgBfFaz6vjwQf", "title": "Ethical Injunctions", "pageUrl": "https://www.lesswrong.com/posts/dWTEtgBfFaz6vjwQf/ethical-injunctions", "postedAt": "2008-10-20T23:00:00.000Z", "baseScore": 77, "voteCount": 59, "commentCount": 78, "url": null, "contents": { "documentId": "dWTEtgBfFaz6vjwQf", "html": "

\"Would you kill babies if it was the right thing to do?  If no, under what circumstances would you not do the right thing to do?  If yes, how right would it have to be, for how many babies?\"
        —horrible job interview question


Swapping hats for a moment, I'm professionally intrigued by the decision theory of \"things you shouldn't do even if they seem to be the right thing to do\".


Suppose we have a reflective AI, self-modifying and self-improving, at an intermediate stage in the development process.  In particular, the AI's goal system isn't finished—the shape of its motivations is still being loaded, learned, tested, or tweaked.


Yea, I have seen many ways to screw up an AI goal system design, resulting in a decision system that decides, given its goals, that the universe ought to be tiled with tiny molecular smiley-faces, or some such.  Generally, these deadly suggestions also have the property that the AI will not desire its programmers to fix it.  If the AI is sufficiently advanced—which it may be even at an intermediate stage—then the AI may also realize that deceiving the programmers, hiding the changes in its thoughts, will help transform the universe into smiley-faces.


 Now, from our perspective as programmers, if we condition on the fact that the AI has decided to hide its thoughts from the programmers, or otherwise act willfully to deceive us, then it would seem likely that some kind of unintended consequence has occurred in the goal system. We would consider it probable that the AI is not functioning as intended; more likely, we have messed up the AI's utility function somehow, so that the AI now wants to turn the universe into tiny reward-system counters, or some such, and has a motive to hide from us. 


Well, suppose we're not going to implement some object-level Great Idea as the AI's utility function.  Instead we're going to do something advanced and recursive—build a goal system which knows (and cares) about the programmers outside.  A goal system that, via some nontrivial internal structure, \"knows it's being programmed\" and \"knows it's incomplete\".  Then you might be able to have and keep the rule:


\"If [I decide that] fooling my programmers is the right thing to do, execute a controlled shutdown [instead of doing the right thing to do].\"


And the AI would keep this rule, even through the self-modifying AI's revisions of its own code, because, in its structurally nontrivial goal system, the present-AI understands that this decision by a future-AI probably indicates something defined-as-a-malfunction.  Moreover, the present-AI knows that if future-AI tries to evaluate the utility of executing a shutdown, once this hypothetical malfunction has occurred, the future-AI will probably decide not to shut itself down.  So the shutdown should happen unconditionally, automatically, without the goal system getting another chance to recalculate the right thing to do.
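 As a deliberately crude sketch of that structure (mine, not anything from the actual proposal; the predicate and the planner below are placeholders): def is_deceptive(action): # Placeholder predicate: would this action hide the AI's thoughts # from, or willfully mislead, the programmers? return action.get("deceives_programmers", False) def controlled_shutdown(): # Unconditional: no appeal back to the goal system, no second # utility calculation once the tripwire has fired. raise SystemExit("possible goal-system malfunction: halting") def step(planner): action = planner() # whatever the current goal system recommends if is_deceptive(action): controlled_shutdown() # fires before any re-evaluation return action The point of the sketch is the ordering: the check runs on the planner's output, and the shutdown is a raise, not another term for the utility function to weigh. 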


I'm not going to go into the deep dark depths of the exact mathematical structure, because that would be beyond the scope of this blog.  Also I don't yet know the deep dark depths of the mathematical structure.  It looks like it should be possible, if you do things that are advanced and recursive and have nontrivial (but consistent) structure.  But I haven't reached that level, as yet, so for now it's only a dream.


But the topic here is not advanced AI; it's human ethics.  I introduce the AI scenario to bring out more starkly the strange idea of an ethical injunction:


You should never, ever murder an innocent person who's helped you, even if it's the right thing to do; because it's far more likely that you've made a mistake, than that murdering an innocent person who helped you is the right thing to do.


Sound reasonable?


 During World War II, it became necessary to destroy Germany's supply of heavy water (deuterium oxide), a neutron moderator, in order to block their attempts to achieve a fission chain reaction. Their supply of heavy water was coming at this point from a captured facility in Norway. A shipment of heavy water was on board a Norwegian ferry ship, the SF Hydro. Knut Haukelid and three others had slipped on board the ferry in order to sabotage it, when the saboteurs were discovered by the ferry watchman. Haukelid told him that they were escaping the Gestapo, and the watchman immediately agreed to overlook their presence. Haukelid \"considered warning their benefactor but decided that might endanger the mission and only thanked him and shook his hand.\" (Richard Rhodes, The Making of the Atomic Bomb.) So the civilian ferry Hydro sank in the deepest part of the lake, with eighteen dead and twenty-nine survivors. Some of the Norwegian rescuers felt that the German soldiers present should be left to drown, but this attitude did not prevail, and four Germans were rescued. And that was, effectively, the end of the Nazi atomic weapons program. 


Good move?  Bad move?  Germany very likely wouldn't have gotten the Bomb anyway...  I hope with absolute desperation that I never get faced by a choice like that, but in the end, I can't say a word against it.


On the other hand, when it comes to the rule:


\"Never try to deceive yourself, or offer a reason to believe other than probable truth; because even if you come up with an amazing clever reason, it's more likely that you've made a mistake than that you have a reasonable expectation of this being a net benefit in the long run.\"


Then I really don't know of anyone who's knowingly been faced with an exception.  There are times when you try to convince yourself \"I'm not hiding any Jews in my basement\" before you talk to the Gestapo officer.  But then you do still know the truth, you're just trying to create something like an alternative self that exists in your imagination, a facade to talk to the Gestapo officer.


 But to really believe something that isn't true? I don't know if there was ever anyone for whom that was knowably a good idea. I'm sure that there have been many, many times in human history, where person X was better off with false belief Y. And by the same token, there is always some set of winning lottery numbers in every drawing. It's knowing which lottery ticket will win that is the epistemically difficult part, like X knowing when he's better off with a false belief. 


Self-deceptions are the worst kind of black swan bets, much worse than lies, because without knowing the true state of affairs, you can't even guess at what the penalty will be for your self-deception.  They only have to blow up once to undo all the good they ever did.  One single time when you pray to God after discovering a lump, instead of going to a doctor.  That's all it takes to undo a life.  All the happiness that the warm thought of an afterlife ever produced in humanity, has now been more than cancelled by the failure of humanity to institute systematic cryonic preservations after liquid nitrogen became cheap to manufacture.  And I don't think that anyone ever had that sort of failure in mind as a possible blowup, when they said, \"But we need religious beliefs to cushion the fear of death.\"  That's what black swan bets are all about—the unexpected blowup.


Maybe you even get away with one or two black-swan bets—they don't get you every time.  So you do it again, and then the blowup comes and cancels out every benefit and then some.  That's what black swan bets are all about.


Thus the difficulty of knowing when it's safe to believe a lie (assuming you can even manage that much mental contortion in the first place)—part of the nature of black swan bets is that you don't see the bullet that kills you; and since our perceptions just seem like the way the world is, it looks like there is no bullet, period.


So I would say that there is an ethical injunction against self-deception.  I call this an \"ethical injunction\" not so much because it's a matter of interpersonal morality (although it is), but because it's a rule that guards you from your own cleverness—an override against the temptation to do what seems like the right thing.


So now we have two kinds of situation that can support an \"ethical injunction\", a rule not to do something even when it's the right thing to do.  (That is, you refrain \"even when your brain has computed it's the right thing to do\", but this will just seem like \"the right thing to do\".)


First, being human and running on corrupted hardware, we may generalize classes of situation where when you say e.g. \"It's time to rob a few banks for the greater good,\" we deem it more likely that you've been corrupted than that this is really the case.  (Note that we're not prohibiting it from ever being the case in reality, but we're questioning the epistemic state where you're justified in trusting your own calculation that this is the right thing to do—fair lottery tickets can win, but you can't justifiably buy them.)
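 The size of that epistemic discount is easy to see in a one-line Bayes calculation (a sketch with invented numbers, purely for illustration): # H: "this transgression really is the right thing to do" # E: "my brain reports that it seems clearly right" p_h = 1e-4 # prior: it is almost never actually right p_e_given_h = 0.9 # if it were right, it would probably seem right p_e_given_not_h = 0.05 # corrupted hardware: it can seem right anyway posterior = (p_e_given_h * p_h) / ( p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)) print(f"P(really right | seems right) = {posterior:.4f}") # ~0.0018 Even a modest rate of corrupted \"it seems right\" reports swamps a small prior, which is the sense in which the seeming is poor grounds for trusting the calculation. 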


Second, history may teach us that certain classes of action are black-swan bets, that is, they sometimes blow up bigtime for reasons not in the decider's model.  So even when we calculate within the model that something seems like the right thing to do, we apply the further knowledge of the black swan problem to arrive at an injunction against it.


But surely... if one is aware of these reasons... then one can simply redo the calculation, taking them into account.  So we can rob banks if it seems like the right thing to do after taking into account the problem of corrupted hardware and black swan blowups.  That's the rational course, right?


 There are a number of replies I could give to that. 


I'll start by saying that this is a prime example of the sort of thinking I have in mind, when I warn aspiring rationalists to beware of cleverness.


I'll also note that I wouldn't want an attempted Friendly AI that had just decided that the Earth ought to be transformed into paperclips, to assess whether this was a reasonable thing to do in light of all the various warnings it had received against it.  I would want it to undergo an automatic controlled shutdown.  Who says that meta-reasoning is immune from corruption?


I could mention the important times that my naive, idealistic ethical inhibitions have protected me from myself, and placed me in a recoverable position, or helped start the recovery, from very deep mistakes I had no clue I was making.  And I could ask whether I've really advanced so much, and whether it would really be all that wise, to remove the protections that saved me before.


Yet even so...  \"Am I still dumber than my ethics?\" is a question whose answer isn't automatically \"Yes.\"


There are obvious silly things here that you shouldn't do; for example, you shouldn't wait until you're really tempted, and then try to figure out if you're smarter than your ethics on that particular occasion.


But in general—there's only so much power that can vest in what your parents told you not to do.  One shouldn't underestimate the power.  Smart people debated historical lessons in the course of forging the Enlightenment ethics that much of Western culture draws upon; and some subcultures, like scientific academia, or science-fiction fandom, draw on those ethics more directly.  But even so the power of the past is bounded.


And in fact...


I've had to make my ethics much stricter than what my parents and Jerry Pournelle and Richard Feynman told me not to do.


Funny thing, how when people seem to think they're smarter than their ethics, they argue for less strictness rather than more strictness.  I mean, when you think about how much more complicated the modern world is...


And along the same lines, the ones who come to me and say, \"You should lie about the Singularity, because that way you can get more people to support you; it's the rational thing to do, for the greater good\"—these ones seem to have no idea of the risks.


They don't mention the problem of running on corrupted hardware.  They don't mention the idea that lies have to be recursively protected from all the truths and all the truthfinding techniques that threaten them.  They don't mention that honest ways have a simplicity that dishonest ways often lack.  They don't talk about black-swan bets.  They don't talk about the terrible nakedness of discarding the last defense you have against yourself, and trying to survive on raw calculation.


I am reasonably sure that this is because they have no clue about any of these things.


If you've truly understood the reason and the rhythm behind ethics, then one major sign is that, augmented by this newfound knowledge, you don't do those things that previously seemed like ethical transgressions.  Only now you know why.


Someone who just looks at one or two reasons behind ethics, and says, \"Okay, I've understood that, so now I'll take it into account consciously, and therefore I have no more need of ethical inhibitions\"—this one is behaving more like a stereotype than a real rationalist.  The world isn't simple and pure and clean, so you can't just take the ethics you were raised with and trust them.  But that pretense of Vulcan logic, where you think you're just going to compute everything correctly once you've got one or two abstract insights—that doesn't work in real life either.


As for those who, having figured out none of this, think themselves smarter than their ethics:  Ha.


And as for those who previously thought themselves smarter than their ethics, but who hadn't conceived of all these elements behind ethical injunctions \"in so many words\" until they ran across this Overcoming Bias sequence, and who now think themselves smarter than their ethics, because they're going to take all this into account from now on:  Double ha.


I have seen many people struggling to excuse themselves from their ethics.  Always the modification is toward lenience, never to be more strict.  And I am stunned by the speed and the lightness with which they strive to abandon their protections.  Hobbes said, \"I don't know what's worse, the fact that everyone's got a price, or the fact that their price is so low.\"  So very low the price, so very eager they are to be bought.  They don't look twice and then a third time for alternatives, before deciding that they have no option left but to transgress—though they may look very grave and solemn when they say it.  They abandon their ethics at the very first opportunity.  \"Where there's a will to failure, obstacles can be found.\"  The will to fail at ethics seems very strong, in some people.


I don't know if I can endorse absolute ethical injunctions that bind over all possible epistemic states of a human brain.  The universe isn't kind enough for me to trust that.  (Though an ethical injunction against self-deception, for example, does seem to me to have tremendous force.  I've seen many people arguing for the Dark Side, and none of them seem aware of the network risks or the black-swan risks of self-deception.)  If, someday, I attempt to shape a (reflectively consistent) injunction within a self-modifying AI, it will only be after working out the math, because that is so totally not the sort of thing you could get away with doing via an ad-hoc patch.


But I will say this much:


I am completely unimpressed with the knowledge, the reasoning, and the overall level, of those folk who have eagerly come to me, and said in grave tones, \"It's rational to do unethical thing X because it will have benefit Y.\"

" } }, { "_id": "cyRpNbPsW8HzsxhRK", "title": "Ethical Inhibitions", "pageUrl": "https://www.lesswrong.com/posts/cyRpNbPsW8HzsxhRK/ethical-inhibitions", "postedAt": "2008-10-19T20:44:02.000Z", "baseScore": 31, "voteCount": 28, "commentCount": 63, "url": null, "contents": { "documentId": "cyRpNbPsW8HzsxhRK", "html": "

 Followup to: Entangled Truths, Contagious Lies, Evolutionary Psychology 


What's up with that bizarre emotion we humans have, this sense of ethical caution?


One can understand sexual lust, parental care, and even romantic attachment.  The evolutionary psychology of such emotions might be subtler than it at first appears, but if you ignore the subtleties, the surface reasons are obvious.  But why a sense of ethical caution?  Why honor, why righteousness?  (And no, it's not group selection; it never is.)  What reproductive benefit does that provide?


The specific ethical codes that people feel uneasy violating, vary from tribe to tribe (though there are certain regularities).  But the emotion associated with feeling ethically inhibited—well, I Am Not An Evolutionary Anthropologist, but that looks like a human universal to me, something with brainware support.


The obvious story behind prosocial emotions in general, is that those who offend against the group are sanctioned; this converts the emotion to an individual reproductive advantage.  The human organism, executing the ethical-caution adaptation, ends up avoiding the group sanctions that would follow a violation of the code.  This obvious answer may even be the entire answer.


But I suggest—if a bit more tentatively than usual—that by the time human beings were evolving the emotion associated with \"ethical inhibition\", we were already intelligent enough to observe the existence of such things as group sanctions.  We were already smart enough (I suggest) to model what the group would punish, and to fear that punishment.


Sociopaths have a concept of getting caught, and they try to avoid getting caught.  Why isn't this sufficient?  Why have an extra emotion, a feeling that inhibits you even when you don't expect to be caught?  Wouldn't this, from evolution's perspective, just result in passing up perfectly good opportunities?


So I suggest (tentatively) that humans naturally underestimate the odds of getting caught.  We don't foresee all the possible chains of causality, all the entangled facts that can bring evidence against us.  Those ancestors who lacked a sense of ethical caution stole the silverware when they expected that no one would catch them or punish them; and were nonetheless caught or punished often enough, on average, to outweigh the value of the silverware.


Admittedly, this may be an unnecessary assumption.  It is a general idiom of biology that evolution is the only long-term consequentialist; organisms compute short-term rewards.  Hominids violate this rule, but that is a very recent innovation.


So one could counter-argue:  \"Early humans didn't reliably forecast the punishment that follows from breaking social codes, so they didn't reliably think consequentially about it, so they developed an instinct to obey the codes.\"  Maybe the modern sociopaths that evade being caught are smarter than average.  Or modern sociopaths are better educated than hunter-gatherer sociopaths.  Or modern sociopaths get more second chances to recover from initial stumbles—they can change their name and move.  It's not so strange to find an emotion executing in some exceptional circumstance where it fails to provide a reproductive benefit.


 But I feel justified in bringing up the more complicated hypothesis, because ethical inhibitions are archetypally that which stops us even when we think no one is looking. A humanly universal concept, so far as I know, though I am not an anthropologist. 


Ethical inhibition, as a human motivation, seems to be implemented in a distinct style from hunger or lust.  Hunger and lust can be outweighed when stronger desires are at stake; but the emotion associated with ethical prohibitions tries to assert itself deontologically. If you have the sense at all that you shouldn't do it, you have the sense that you unconditionally shouldn't do it.  The emotion associated with ethical caution would seem to be a drive that—successfully or unsuccessfully—tries to override the temptation, not just weigh against it.


 A monkey can be trapped by a food reward inside a hollowed shell—it can reach in easily enough, but once it closes its fist, it can't take its hand out. The monkey may be screaming with distress, and still be unable to override the instinct to keep hold of the food. We humans can do better than that; we can let go of the food reward and run away, when our brain is warning us of the long-term consequences. 


But why does the sensation of ethical inhibition, that might also command us to pass up a food reward, have a similar override-quality—even in the absence of explicitly expected long-term consequences?  Is it just that ethical emotions evolved recently, and happen to be implemented in prefrontal cortex next to the long-term-override circuitry?


What is this tendency to feel inhibited from stealing the food reward?  This message that tries to assert \"I override\", not just \"I weigh against\"?  Even when we don't expect the long-term consequences of being discovered?


And before you think that I'm falling prey to some kind of appealing story, ask yourself why that particular story would sound appealing to humans.  Why would it seem temptingly virtuous to let an ethical inhibition override, rather than just being one more weight in the balance?


One possible explanation would be if the emotion were carved out by the evolutionary-historical statistics of a black-swan bet.


Maybe you will, in all probability, get away with stealing the silverware on any particular occasion—just as your model of the world would extrapolate.  But it was a statistical fact about your ancestors that sometimes the environment didn't operate the way they expected. Someone was watching from behind the trees.  On those occasions their reputation was permanently blackened; they lost status in the tribe, and perhaps were outcast or murdered.  Such occasions could be statistically rare, and still counterbalance the benefit of a few silver spoons.
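 A quick expected-value sketch (with invented numbers, purely to show how a rare catastrophe can dominate the arithmetic): p_caught = 0.02 # someone watching from behind the trees benefit = 1.0 # fitness value of the silverware cost_if_caught = 100.0 # blackened reputation, outcast status ev = (1 - p_caught) * benefit - p_caught * cost_if_caught print(ev) # -1.02: a net loss despite 98% odds of getting away 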


The brain, like every other organ in the body, is a reproductive organ: it was carved out of entropy by the persistence of mutations that promoted reproductive fitness.  And yet somehow, amazingly, the human brain wound up with circuitry for such things as honor, sympathy, and ethical resistance to temptations.


Which means that those alleles drove their alternatives to extinction.  Humans, the organisms, can be nice to each other; but the alleles' game of frequencies is zero-sum.  Honorable ancestors didn't necessarily kill the dishonorable ones.  But if, by cooperating with each other, honorable ancestors outreproduced less honorable folk, then the honor allele killed the dishonor allele as surely as if it erased the DNA sequence off a blackboard.


That might be something to think about, the next time you're wondering if you should just give in to your ethical impulses, or try to override them with your rational awareness.


Especially if you're tempted to engage in some chicanery \"for the greater good\"—tempted to decide that the end justifies the means.  Evolution doesn't care about whether something actually promotes the greater good—that's not how gene frequencies change.  But if transgressive plans go awry often enough to hurt the transgressor, how much more often would they go awry and hurt the intended beneficiaries?


Historically speaking, it seems likely that, of those who set out to rob banks or murder opponents \"in a good cause\", those who managed to hurt themselves, mostly wouldn't make the history books.  (Unless they got a second chance, like Hitler after the failed Beer Hall Putsch.)  Of those cases we do read about in the history books, many people have done very well for themselves out of their plans to lie and rob and murder \"for the greater good\".  But how many people cheated their way to actual huge altruistic benefits—cheated and actually realized the justifying greater good?  Surely there must be at least one or two cases known to history—at least one king somewhere who took power by lies and assassination, and then ruled wisely and well—but I can't actually name a case off the top of my head.  By and large, it seems to me a pretty fair generalization that people who achieve great good ends manage not to find excuses for all that much evil along the way.


Somehow, people seem much more likely to endorse plans that involve just a little pain for someone else, on behalf of the greater good, than to work out a way to let the sacrifice be themselves.  But when you plan to damage society in order to save it, remember that your brain contains a sense of ethical unease that evolved from transgressive plans blowing up and damaging the  originator—never mind the expected value of all the damage done to other people, if you really do care about them.


If natural selection, which doesn't care at all about the welfare of unrelated strangers, still manages to give you a sense of ethical unease on account of transgressive plans not always going as planned—then how much more reluctant should you be to rob banks for a good cause, if you aspire to actually help and protect others?


 


Part of the sequence Ethical Injunctions


Next post: \"Ethical Injunctions\"


Previous post: \"Protected From Myself\"

" } }, { "_id": "yz2btSaCLHmLWgWD5", "title": "Protected From Myself", "pageUrl": "https://www.lesswrong.com/posts/yz2btSaCLHmLWgWD5/protected-from-myself", "postedAt": "2008-10-19T00:09:31.000Z", "baseScore": 49, "voteCount": 42, "commentCount": 30, "url": null, "contents": { "documentId": "yz2btSaCLHmLWgWD5", "html": "

 Followup to: The Magnitude of His Own Folly, Entangled Truths, Contagious Lies 


Every now and then, another one comes before me with the brilliant idea:  \"Let's lie!\"


Lie about what?—oh, various things.  The expected time to Singularity, say.  Lie and say it's definitely going to be earlier, because that will get more public attention.  Sometimes they say \"be optimistic\", sometimes they just say \"lie\".  Lie about the current degree of uncertainty, because there are other people out there claiming to be certain, and the most unbearable prospect in the world is that someone else pull ahead. Lie about what the project is likely to accomplish—I flinch even to write this, but occasionally someone proposes to go and say to the Christians that the AI will create Christian Heaven forever, or go to the US government and say that the AI will give the US dominance forever.


But at any rate, lie.  Lie because it's more convenient than trying to explain the truth.  Lie, because someone else might lie, and so we have to make sure that we lie first.  Lie to grab the tempting benefits, hanging just within reach—


Eh?  Ethics?  Well, now that you mention it, lying is at least a little bad, all else being equal.  But with so much at stake, we should just ignore that and lie.  You've got to follow the expected utility, right?  The loss of a lie is much less than the benefit to be gained, right?


Thus do they argue.  Except—what's the flaw in the argument?  Wouldn't it be irrational not to lie, if lying has the greatest expected utility?


When I look back upon my history—well, I screwed up in a lot of ways.  But it could have been much worse, if I had reasoned like those who offer such advice, and lied.


Once upon a time, I truly and honestly believed that either a superintelligence would do what was right, or else there was no right thing to do; and I said so.  I was uncertain of the nature of morality, and I said that too.  I didn't know if the Singularity would be in five years or fifty, and this also I admitted.  My project plans were not guaranteed to deliver results, and I did not promise to deliver them.  When I finally said \"Oops\", and realized that I needed to go off and do more fundamental research instead of rushing to write code immediately—


—well, I can imagine the mess I would have had on my hands, if I had told the people who trusted me: that the Singularity was surely coming in ten years; that my theory was sure to deliver results; that I had no lingering confusions; and that any superintelligence would surely give them their own private island and a harem of catpersons of the appropriate gender.  How exactly would one then explain why you're now going to step back and look for math-inventors instead of superprogrammers, or why the code now has to be theorem-proved?


When you make an honest mistake, on some subject you were honest about, the recovery technique is straightforward:  Just as you told people what you thought in the first place, you now list out the actual reasons that you changed your mind.  This diff takes you to your current true thoughts, that imply your current desired policy.  Then, just as people decided whether to aid you originally, they re-decide in light of the new information.


But what if you were \"optimistic\" and only presented one side of the story, the better to fulfill that all-important goal of persuading people to your cause?  Then you'll have a much harder time persuading them away from that idea you sold them originally—you've nailed their feet to the floor, which makes it difficult for them to follow if you yourself take another step forward.


And what if, for the sake of persuasion, you told them things that you didn't believe yourself?  Then there is no true diff from the story you told before, to the new story now.  Will there be any coherent story that explains your change of heart?


Conveying the real truth is an art form.  It's not an easy art form—those darned constraints of honesty prevent you from telling all kinds of convenient lies that would be so much easier than the complicated truth.  But, if you tell lots of truth, you get good at what you practice.  A lot of those who come to me and advocate lies, talk earnestly about how these matters of transhumanism are so hard to explain, too difficult and technical for the likes of Joe the Plumber.  So they'd like to take the easy way out, and lie.


We don't live in a righteous universe where all sins are punished.  Someone who practiced telling lies, and made their mistakes and learned from them, might well become expert at telling lies that allow for sudden changes of policy in the future, and telling more lies to explain the policy changes.  If you use the various forbidden arts that create fanatic followers, they will swallow just about anything.  The history of the Soviet Union and their sudden changes of policy, as presented to their ardent Western intellectual followers, helped inspire Orwell to write 1984.


So the question, really, is whether you want to practice truthtelling or practice lying, because whichever one you practice is the one you're going to get good at.  Needless to say, those who come to me and offer their unsolicited advice do not appear to be expert liars.  For one thing, a majority of them don't seem to find anything odd about floating their proposals in publicly archived, Google-indexed mailing lists.


But why not become an expert liar, if that's what maximizes expected utility?  Why take the constrained path of truth, when things so much more important are at stake?


Because, when I look over my history, I find that my ethics have, above all, protected me from myself.  They weren't inconveniences.  They were safety rails on cliffs I didn't see.


I made fundamental mistakes, and my ethics didn't halt that, but they played a critical role in my recovery.  When I was stopped by unknown unknowns that I just wasn't expecting, it was my ethical constraints, and not any conscious planning, that had put me in a recoverable position.


You can't duplicate this protective effect by trying to be clever and calculate the course of \"highest utility\".  The expected utility just takes into account the things you know to expect.  It really is amazing, looking over my history, the extent to which my ethics put me in a recoverable position from my unanticipated, fundamental mistakes, the things completely outside my plans and beliefs.


Ethics aren't just there to make your life difficult; they can protect you from Black Swans.  A startling assertion, I know, but not one entirely irrelevant to current affairs.


If you've been following along my story, you'll recall that the downfall of all my theories, began with a tiny note of discord. A tiny note that I wouldn't ever have followed up, if I had only cared about my own preferences and desires.  It was the thought of what someone else might think—someone to whom I felt I owed an ethical consideration—that spurred me to follow up that one note.


And I have watched others fail utterly on the problem of Friendly AI, because they simply try to grab the banana in one form or another—seize the world for their own favorite moralities, without any thought of what others might think—and so they never enter into the complexities and second thoughts that might begin to warn them of the technical problems.


We don't live in a righteous universe.  And so, when I look over my history, the role that my ethics have played is so important that I've had to take a step back and ask, \"Why is this happening?\"  The universe isn't set up to reward virtue—so why did my ethics help so much?  Am I only imagining the phenomenon?  That's one possibility.  But after some thought, I've concluded that, to the extent you believe that my ethics did help me, these are the plausible reasons in order of importance:


 1) The honest Way often has a kind of simplicity that transgressions lack. If you tell lies, you have to keep track of different stories you've told different groups, and worry about which facts might encounter the wrong people, and then invent new lies to explain any unexpected policy shifts you have to execute on account of your mistake. This simplicity is powerful enough to explain a great deal of the positive influence that I attribute to my ethics, in a universe that doesn't reward virtue per se. 


2)  I was stricter with myself, and held myself to a higher standard, when I was doing various things that I considered myself ethically obligated to do.  Thus my recovery from various failures often seems to have begun with an ethical thought of some type—e.g. the whole development where \"Friendly AI\" led into the concept of AI as a precise art.  That might just be a quirk of my own personality; but it seems to help account for the huge role my ethics played in leading me to important thoughts, which I cannot just explain by saying that the universe rewards virtue.


3)  The constraints that the wisdom of history suggests, to avoid hurting other people, may also stop you from hurting yourself.  When you have some brilliant idea that benefits the tribe, we don't want you to run off and do X, Y, and Z, even if you say \"the end justifies the means!\"  Evolutionarily speaking, one suspects that the \"means\" have more often benefited the person who executes them, than the tribe.  But this is not the ancestral environment.  In the more complicated modern world, following the ethical constraints can prevent you from making huge networked mistakes that would catch you in their collapse.  Robespierre led a shorter life than Washington.


 


Part of the sequence Ethical Injunctions


Next post: \"Ethical Inhibitions\"


Previous post: \"Ends Don't Justify Means (Among Humans)\"
 " } }, { "_id": "XTWkjCJScy2GFAgDt", "title": "Dark Side Epistemology", "pageUrl": "https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt/dark-side-epistemology", "postedAt": "2008-10-17T23:55:22.000Z", "baseScore": 127, "voteCount": 112, "commentCount": 156, "url": null, "contents": { "documentId": "XTWkjCJScy2GFAgDt", "html": "

If you once tell a lie, the truth is ever after your enemy.


 I have discussed the notion that lies are contagious. If you pick up a pebble from the driveway, and tell a geologist that you found it on a beach—well, do you know what a geologist knows about rocks? I don’t. But I can suspect that a water-worn pebble wouldn’t look like a droplet of frozen lava from a volcanic eruption. Do you know where the pebble in your driveway really came from? Things bear the marks of their places in a lawful universe; in that web, a lie is out of place.[1] 


What sounds like an arbitrary truth to one mind—one that could easily be replaced by a plausible lie—might be nailed down by a dozen linkages to the eyes of greater knowledge. To a creationist, the idea that life was shaped by “intelligent design” instead of “natural selection” might sound like a sports team to cheer for. To a biologist, plausibly arguing that an organism was intelligently designed would require lying about almost every facet of the organism. To plausibly argue that “humans” were intelligently designed, you’d have to lie about the design of the human retina, the architecture of the human brain, the proteins bound together by weak van der Waals forces instead of strong covalent bonds . . .


Or you could just lie about evolutionary theory, which is the path taken by most creationists. Instead of lying about the connected nodes in the network, they lie about the general laws governing the links.


And then to cover that up, they lie about the rules of science—like what it means to call something a “theory,” or what it means for a scientist to say that they are not absolutely certain.


So they pass from lying about specific facts, to lying about general laws, to lying about the rules of reasoning. To lie about whether humans evolved, you must lie about evolution; and then you have to lie about the rules of science that constrain our understanding of evolution.


But how else? Just as a human would be out of place in a community of actually intelligently designed life forms, and you have to lie about the rules of evolution to make it appear otherwise, so too beliefs about creationism are themselves out of place in science—you wouldn’t find them in a well-ordered mind any more than you’d find palm trees growing on a glacier. And so you have to disrupt the barriers that would forbid them.


Which brings us to the case of self-deception.


A single lie you tell yourself may seem plausible enough, when you don’t know any of the rules governing thoughts, or even that there are rules; and the choice seems as arbitrary as choosing a flavor of ice cream, as isolated as a pebble on the shore . . .


. . . but then someone calls you on your belief, using the rules of reasoning that they’ve learned. They say, “Where’s your evidence?”


And you say, “What? Why do I need evidence?”


So they say, “In general, beliefs require evidence.”


This argument, clearly, is a soldier fighting on the other side, which you must defeat. So you say: “I disagree! Not all beliefs require evidence. In particular, beliefs about dragons don’t require evidence. When it comes to dragons, you’re allowed to believe anything you like. So I don’t need evidence to believe there’s a dragon in my garage.”


And the one says, “Eh? You can’t just exclude dragons like that. There’s a reason for the rule that beliefs require evidence. To draw a correct map of the city, you have to walk through the streets and make lines on paper that correspond to what you see. That’s not an arbitrary legal requirement—if you sit in your living room and draw lines on the paper at random, the map’s going to be wrong. With extremely high probability. That’s as true of a map of a dragon as it is of anything.”


So now this, the explanation of why beliefs require evidence, is also an opposing soldier. So you say: “Wrong with extremely high probability? Then there’s still a chance, right? I don’t have to believe if it’s not absolutely certain.”


Or maybe you even begin to suspect, yourself, that “beliefs require evidence.” But this threatens a lie you hold precious; so you reject the dawn inside you, push the Sun back under the horizon.


Or you’ve previously heard the proverb “beliefs require evidence,” and it sounded wise enough, and you endorsed it in public. But it never quite occurred to you, until someone else brought it to your attention, that this proverb could apply to your belief that there’s a dragon in your garage. So you think fast and say, “The dragon is in a separate magisterium.”


Having false beliefs isn’t a good thing, but it doesn’t have to be permanently crippling—if, when you discover your mistake, you get over it. The dangerous thing is to have a false belief that you believe should be protected as a belief—a belief-in-belief, whether or not accompanied by actual belief.


A single Lie That Must Be Protected can block someone’s progress into advanced rationality. No, it’s not harmless fun.


Just as the world itself is more tangled by far than it appears on the surface, so too there are stricter rules of reasoning, constraining belief more strongly, than the untrained would suspect. The world is woven tightly, governed by general laws, and so are rational beliefs.


Think of what it would take to deny evolution or heliocentrism—all the connected truths and governing laws you wouldn’t be allowed to know. Then you can imagine how a single act of self-deception can block off the whole meta level of truth-seeking, once your mind begins to be threatened by seeing the connections. Forbidding all the intermediate and higher levels of the rationalist’s Art. Creating, in its stead, a vast complex of anti-law, rules of anti-thought, general justifications for believing the untrue.


Steven Kaas said, “Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires.” Giving someone a false belief to protect—convincing them that the belief itself must be defended from any thought that seems to threaten it—well, you shouldn’t do that to someone unless you’d also give them a frontal lobotomy.


Once you tell a lie, the truth is your enemy; and every truth connected to that truth, and every ally of truth in general; all of these you must oppose, to protect the lie. Whether you’re lying to others, or to yourself.


You have to deny that beliefs require evidence, and then you have to deny that maps should reflect territories, and then you have to deny that truth is a good thing . . .


Thus comes into being the Dark Side.


I worry that people aren’t aware of it, or aren’t sufficiently wary—that as we wander through our human world, we can expect to encounter systematically bad epistemology.


The “how to think” memes floating around, the cached thoughts of Deep Wisdom—some of it will be good advice devised by rationalists. But other notions were invented to protect a lie or self-deception: spawned from the Dark Side.


“Everyone has a right to their own opinion.” When you think about it, where was that proverb generated? Is it something that someone would say in the course of protecting a truth, or in the course of protecting from the truth? But people don’t perk up and say, “Aha! I sense the presence of the Dark Side!” As far as I can tell, it’s not widely realized that the Dark Side is out there.


But how else? Whether you’re deceiving others, or just yourself, the Lie That Must Be Protected will propagate recursively through the network of empirical causality, and the network of general empirical rules, and the rules of reasoning themselves, and the understanding behind those rules. If there is good epistemology in the world, and also lies or self-deceptions that people are trying to protect, then there will come into existence bad epistemology to counter the good. We could hardly expect, in this world, to find the Light Side without the Dark Side; there is the Sun, and that which shrinks away and generates a cloaking Shadow.


Mind you, these are not necessarily evil people. The vast majority who go about repeating the Deep Wisdom are more duped than duplicitous, more self-deceived than deceiving. I think.


And it’s surely not my intent to offer you a Fully General Counterargument, so that whenever someone offers you some epistemology you don’t like, you say: “Oh, someone on the Dark Side made that up.” It’s one of the rules of the Light Side that you have to refute the proposition for itself, not by accusing its inventor of bad intentions.


But the Dark Side is out there. Fear is the path that leads to it, and one betrayal can turn you. Not all who wear robes are either Jedi or fakes; there are also the Sith Lords, masters and unwitting apprentices. Be warned; be wary.


As for listing common memes that were spawned by the Dark Side—not random false beliefs, mind you, but bad epistemology, the Generic Defenses of Fail—well, would you care to take a stab at it, dear readers?


 [1] Actually, a geologist in the comments says that most pebbles in driveways are taken from beaches, so they couldn’t tell the difference between a driveway pebble and a beach pebble, but they could tell the difference between a mountain pebble and a driveway/beach pebble (http://lesswrong.com/lw/uy/dark_side_epistemology/4xbv). Case in point . . . 

 " } }, { "_id": "3bfWCPfu9AFspnhvf", "title": "Traditional Capitalist Values", "pageUrl": "https://www.lesswrong.com/posts/3bfWCPfu9AFspnhvf/traditional-capitalist-values", "postedAt": "2008-10-17T01:07:16.000Z", "baseScore": 65, "voteCount": 64, "commentCount": 103, "url": null, "contents": { "documentId": "3bfWCPfu9AFspnhvf", "html": "

 Followup to: Are Your Enemies Innately Evil?, Policy Debates Should Not Appear One-Sided 


\"The financial crisis is not the crisis of capitalism.  It is the crisis of a system that has distanced itself from the most fundamental values of capitalism, which betrayed the spirit of capitalism.\"
        -- Nicolas Sarkozy


During the current crisis, I've more than once heard someone remarking that financial-firm CEOs who take huge bonuses during the good years and then run away when their black-swan bets blow up, are only exercising the usual capitalist values of \"grab all the money you can get\".


I think that a fair amount of the enmity in the world, to say nothing of confusion on the Internet, stems from people refusing to contemplate the real values of the opposition as the opposition sees it.  This is something I've remarked upon before, with respect to \"the terrorists hate our freedom\" or \"the suicide hijackers were cowards\" (statements that are sheerly silly).


Real value systems - as opposed to pretend demoniacal value systems - are phrased to generate warm fuzzies in their users, not to be easily mocked.  They will sound noble at least to the people who believe them.


Whether anyone actually lives up to that value system, or should, and whether the results are what they are claimed to be; if there are hidden gotchas in the warm fuzzy parts - sure, you can have that debate.  But first you should be clear about how your opposition sees itself - a view which has not been carefully optimized to make your side feel good about its opposition.  Otherwise you're not engaging the real issues.


So here are the traditional values of capitalism as seen by those who regard it as noble - the sort of Way spoken of by Paul Graham, or P. T. Barnum (who did not say \"There's a sucker born every minute\"), or Warren Buffett:


There was, once upon a time, an editorial in the Wall Street Journal calling Ford a \"traitor to his class\" because he offered more than the prevailing wages of the time.  Coal miners trying to form a union, once upon a time, were fired upon by rifles.  But I also think that Graham or Barnum or Buffett would regard those folk as the inheritors of mere kings.


\"No true Scotsman\" fallacy?  Maybe, but let's at least be clear what the Scots say about it.


For myself, I would have to say that I'm an apostate from this moral synthesis - I grew up in this city and remember it fondly, but I no longer live there.  I regard finance as more of a useful tool than an ultimate end of intelligence - I'm not sure it's the maximum possible fun we could all be having under optimal conditions.  I'm more sympathetic than this to people who lose their jobs, because I know that retraining, or changing careers, isn't always easy and fun.  I don't think the universe is set up to reward hard work; and I think that it is entirely possible for money to corrupt a person.


But I also admire any virtue clearly stated and synthesized into a moral system.  We need more of those.  Anyone who thinks that capitalism is just about grabbing the banana, is underestimating the number of decent and intelligent people who have put serious thought into the subject.


Those of other Ways may not agree with all these statements - but if you aspire to sanity in debate, you should at least be able to read them without your brain shutting down.


PS:  Julian Morrison adds:  Trade can act as a connective between people with diverging values.
 " } }, { "_id": "wyyfFfaRar2jEdeQK", "title": "Entangled Truths, Contagious Lies", "pageUrl": "https://www.lesswrong.com/posts/wyyfFfaRar2jEdeQK/entangled-truths-contagious-lies", "postedAt": "2008-10-15T23:39:36.000Z", "baseScore": 109, "voteCount": 80, "commentCount": 42, "url": null, "contents": { "documentId": "wyyfFfaRar2jEdeQK", "html": "

One of your very early philosophers came to the conclusion that a fully competent mind, from a study of one fact or artifact belonging to any given universe, could construct or visualize that universe, from the instant of its creation to its ultimate end . . .


First Lensman


If any one of you will concentrate upon one single fact, or small object, such as a pebble or the seed of a plant or other creature, for as short a period of time as one hundred of your years, you will begin to perceive its truth.


Gray Lensman


I am reasonably sure that a single pebble, taken from a beach of our own Earth, does not specify the continents and countries, politics and people of this Earth. Other planets in space and time, other Everett branches, would generate the same pebble.


 On the other hand, the identity of a single pebble would seem to include our laws of physics. In that sense the entirety of our Universe—all the Everett branches—would be implied by the pebble.[1] 


From the study of that single pebble you could see the laws of physics and all they imply. Thinking about those laws of physics, you can see that planets will form, and you can guess that the pebble came from such a planet. The internal crystals and molecular formations of the pebble developed under gravity, which tells you something about the planet’s mass; the mix of elements in the pebble tells you something about the planet’s formation.


I am not a geologist, so I don’t know to which mysteries geologists are privy. But I find it very easy to imagine showing a geologist a pebble, and saying, “This pebble came from a beach at Half Moon Bay,” and the geologist immediately says, “I’m confused,” or even, “You liar.” Maybe it’s the wrong kind of rock, or the pebble isn’t worn enough to be from a beach—I don’t know pebbles well enough to guess the linkages and signatures by which I might be caught, which is the point.


“Only God can tell a truly plausible lie.” I wonder if there was ever a religion that developed this as a proverb? I would (falsifiably) guess not: it’s a rationalist sentiment, even if you cast it in theological metaphor. Saying “everything is interconnected to everything else, because God made the whole world and sustains it” may generate some nice warm ’n’ fuzzy feelings during the sermon, but it doesn’t get you very far when it comes to assigning pebbles to beaches.


 A penny on Earth exerts a gravitational acceleration on the Moon of around 4.5 × 10⁻³¹ m/s², so in one sense it’s not too far wrong to say that every event is entangled with its whole past light cone. And since inferences can propagate backward and forward through causal networks, epistemic entanglements can easily cross the borders of light cones. But I wouldn’t want to be the forensic astronomer who had to look at the Moon and figure out whether the penny landed heads or tails—the influence is far less than quantum uncertainty and thermal noise. 
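 That figure checks out if the penny is taken to weigh about one gram (an assumption inferred from the quoted number; a modern US cent is 2.5 g, which would give roughly 1.1 × 10⁻³⁰ m/s²): G = 6.674e-11 # gravitational constant, m^3 kg^-1 s^-2 m = 1.0e-3 # kg: penny mass assumed to match the quoted figure r = 3.84e8 # m: mean Earth-Moon distance a = G * m / r**2 print(f"{a:.2e} m/s^2") # ~4.53e-31 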


If you said, “Everything is entangled with something else,” or, “Everything is inferentially entangled and some entanglements are much stronger than others,” you might be really wise instead of just Deeply Wise.


Physically, each event is in some sense the sum of its whole past light cone, without borders or boundaries. But the list of noticeable entanglements is much shorter, and it gives you something like a network. This high-level regularity is what I refer to when I talk about the Great Web of Causality.


I use these Capitalized Letters somewhat tongue-in-cheek, perhaps; but if anything at all is worth Capitalized Letters, surely the Great Web of Causality makes the list.


“Oh what a tangled web we weave, when first we practise to deceive,” said Sir Walter Scott. Not all lies spin out of control—we don’t live in so righteous a universe. But it does occasionally happen that someone lies about a fact, and then has to lie about an entangled fact, and then another fact entangled with that one:


“Where were you?”


“Oh, I was on a business trip.”


“What was the business trip about?”


“I can’t tell you that; it’s proprietary negotiations with a major client.”


“Oh—they’re letting you in on those? Good news! I should call your boss to thank him for adding you.”


“Sorry—he’s not in the office right now . . .”


Human beings, who are not gods, often fail to imagine all the facts they would need to distort to tell a truly plausible lie. “God made me pregnant” sounded a tad more likely in the old days before our models of the world contained (quotations of) Y chromosomes. Many similar lies, today, may blow up when genetic testing becomes more common. Rapists have been convicted, and false accusers exposed, years later, based on evidence they didn’t realize they could leave. A student of evolutionary biology can see the design signature of natural selection on every wolf that chases a rabbit; and every rabbit that runs away; and every bee that stings instead of broadcasting a polite warning—but the deceptions of creationists sound plausible to them, I’m sure.


Not all lies are uncovered, not all liars are punished; we don’t live in that righteous a universe. But not all lies are as safe as their liars believe. How many sins would become known to a Bayesian superintelligence, I wonder, if it did a (non-destructive?) nanotechnological scan of the Earth? At minimum, all the lies of which any evidence still exists in any brain. Some such lies may become known sooner than that, if the neuroscientists ever succeed in building a really good lie detector via neuroimaging. Paul Ekman (a pioneer in the study of tiny facial muscle movements) could probably read off a sizeable fraction of the world’s lies right now, given a chance.


Not all lies are uncovered, not all liars are punished. But the Great Web is very commonly underestimated. Just the knowledge that humans have already accumulated would take many human lifetimes to learn. Anyone who thinks that a non-God can tell a perfect lie, risk-free, is underestimating the tangledness of the Great Web.


Is honesty the best policy? I don’t know if I’d go that far: Even on my ethics, it’s sometimes okay to shut up. But compared to outright lies, either honesty or silence involves less exposure to recursively propagating risks you don’t know you’re taking.


1Assuming, as seems likely, there are no truly free variables.

\n\n" } }, { "_id": "K9ZaZXDnL3SEmYZqB", "title": "Ends Don't Justify Means (Among Humans)", "pageUrl": "https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t-justify-means-among-humans", "postedAt": "2008-10-14T21:00:00.000Z", "baseScore": 205, "voteCount": 146, "commentCount": 98, "url": null, "contents": { "documentId": "K9ZaZXDnL3SEmYZqB", "html": "

\"If the ends don't justify the means, what does?\"
        —variously attributed

\n

\"I think of myself as running on hostile hardware.\"
        —Justin Corwin

\n
\n

Yesterday I talked about how humans may have evolved a structure of political revolution, beginning by believing themselves morally superior to the corrupt current power structure, but ending by being corrupted by power themselves—not by any plan in their own minds, but by the echo of ancestors who did the same and thereby reproduced.


This fits the template:


In some cases, human beings have evolved in such fashion as to think that they are doing X for prosocial reason Y, but when human beings actually do X, other adaptations execute to promote self-benefiting consequence Z.


From this proposition, I now move on to my main point, a question considerably outside the realm of classical Bayesian decision theory:


\"What if I'm running on corrupted hardware?\"

\n
\n

\n

In such a case as this, you might even find yourself uttering such seemingly paradoxical statements—sheer nonsense from the perspective of classical decision theory—as:


\"The ends don't justify the means.\"

\n
\n

But if you are running on corrupted hardware, then the reflective observation that it seems like a righteous and altruistic act to seize power for yourself—this seeming may not be much evidence for the proposition that seizing power is in fact the action that will most benefit the tribe.

By the power of naive realism, the corrupted hardware that you run on, and the corrupted seemings that it computes, will seem like the fabric of the very world itself—simply the way-things-are.


And so we have the bizarre-seeming rule:  "For the good of the tribe, do not cheat to seize power even when it would provide a net benefit to the tribe."

Indeed it may be wiser to phrase it this way:  If you just say, "when it seems like it would provide a net benefit to the tribe", then you get people who say, "But it doesn't just seem that way—it would provide a net benefit to the tribe if I were in charge."

The notion of untrusted hardware seems like something wholly outside the realm of classical decision theory.  (What it does to reflective decision theory I can't yet say, but that would seem to be the appropriate level to handle it.)


But on a human level, the patch seems straightforward.  Once you know about the warp, you create rules that describe the warped behavior and outlaw it.  A rule that says, "For the good of the tribe, do not cheat to seize power even for the good of the tribe."  Or "For the good of the tribe, do not murder even for the good of the tribe."

And now the philosopher comes and presents their "thought experiment"—setting up a scenario in which, by stipulation, the only possible way to save five innocent lives is to murder one innocent person, and this murder is certain to save the five lives.  "There's a train heading to run over five innocent people, who you can't possibly warn to jump out of the way, but you can push one innocent person into the path of the train, which will stop the train.  These are your only options; what do you do?"

An altruistic human, who has accepted certain deontological prohibitions—which seem well justified by some historical statistics on the results of reasoning in certain ways on untrustworthy hardware—may experience some mental distress, on encountering this thought experiment.

So here's a reply to that philosopher's scenario, which I have yet to hear any philosopher's victim give:


\"You stipulate that the only possible way to save five innocent lives is to murder one innocent person, and this murder will definitely save the five lives, and that these facts are known to me with effective certainty.  But since I am running on corrupted hardware, I can't occupy the epistemic state you want me to imagine.  Therefore I reply that, in a society of Artificial Intelligences worthy of personhood and lacking any inbuilt tendency to be corrupted by power, it would be right for the AI to murder the one innocent person to save five, and moreover all its peers would agree.  However, I refuse to extend this reply to myself, because the epistemic state you ask me to imagine, can only exist among other kinds of people than human beings.\"

\n

Now, to me this seems like a dodge.  I think the universe is sufficiently unkind that we can justly be forced to consider situations of this sort.  The sort of person who goes around proposing that sort of thought experiment, might well deserve that sort of answer.  But any human legal system does embody some answer to the question "How many innocent people can we put in jail to get the guilty ones?", even if the number isn't written down.

As a human, I try to abide by the deontological prohibitions that humans have made to live in peace with one another.  But I don't think that our deontological prohibitions are literally inherently nonconsequentially terminally right.  I endorse "the end doesn't justify the means" as a principle to guide humans running on corrupted hardware, but I wouldn't endorse it as a principle for a society of AIs that make well-calibrated estimates.  (If you have one AI in a society of humans, that does bring in other considerations, like whether the humans learn from your example.)

And so I wouldn't say that a well-designed Friendly AI must necessarily refuse to push that one person off the ledge to stop the train.  Obviously, I would expect any decent superintelligence to come up with a superior third alternative.  But if those are the only two alternatives, and the FAI judges that it is wiser to push the one person off the ledge—even after taking into account knock-on effects on any humans who see it happen and spread the story, etc.—then I don't call it an alarm light, if an AI says that the right thing to do is sacrifice one to save five.  Again, I don't go around pushing people into the paths of trains myself, nor stealing from banks to fund my altruistic projects.  I happen to be a human.  But for a Friendly AI to be corrupted by power would be like it starting to bleed red blood.  The tendency to be corrupted by power is a specific biological adaptation, supported by specific cognitive circuits, built into us by our genes for a clear evolutionary reason.  It wouldn't spontaneously appear in the code of a Friendly AI any more than its transistors would start to bleed.


I would even go further, and say that if you had minds with an inbuilt warp that made them overestimate the external harm of self-benefiting actions, then they would need a rule "the ends do not prohibit the means"—that you should do what benefits yourself even when it (seems to) harm the tribe.  By hypothesis, if their society did not have this rule, the minds in it would refuse to breathe for fear of using someone else's oxygen, and they'd all die.  For them, an occasional overshoot in which one person seizes a personal benefit at the net expense of society, would seem just as cautiously virtuous—and indeed be just as cautiously virtuous—as when one of us humans, being cautious, passes up an opportunity to steal a loaf of bread that really would have been more of a benefit to them than a loss to the merchant (including knock-on effects).

\"The end does not justify the means\" is just consequentialist reasoning at one meta-level up.  If a human starts thinking on the object level that the end justifies the means, this has awful consequences given our untrustworthy brains; therefore a human shouldn't think this way.  But it is all still ultimately consequentialism.  It's just reflective consequentialism, for beings who know that their moment-by-moment decisions are made by untrusted hardware.

" } }, { "_id": "v8rghtzWCziYuMdJ5", "title": "Why Does Power Corrupt?", "pageUrl": "https://www.lesswrong.com/posts/v8rghtzWCziYuMdJ5/why-does-power-corrupt", "postedAt": "2008-10-14T00:23:23.000Z", "baseScore": 65, "voteCount": 55, "commentCount": 59, "url": null, "contents": { "documentId": "v8rghtzWCziYuMdJ5", "html": "

Followup to: Evolutionary Psychology

\"Power tends to corrupt, and absolute power corrupts absolutely.  Great men are almost always bad men.\"
        —Lord Acton

\n
\n

Call it a just-so story if you must, but as soon as I was introduced to the notion of evolutionary psychology (~1995), it seemed obvious to me why human beings are corrupted by power.  I didn't then know that hunter-gatherer bands tend to be more egalitarian than agricultural tribes—much less likely to have a central tribal-chief boss-figure—and so I thought of it this way:


Humans (particularly human males) have evolved to exploit power and status when they obtain it, for the obvious reason:  If you use your power to take many wives and favor your children with a larger share of the meat, then you will leave more offspring, ceteris paribus.  But you're not going to have much luck becoming tribal chief if you just go around saying, "Put me in charge so that I can take more wives and favor my children."  You could lie about your reasons, but human beings are not perfect deceivers.

So one strategy that an evolution could follow, would be to create a vehicle that reliably tended to start believing that the old power-structure was corrupt, and that the good of the whole tribe required their overthrow...


The young revolutionary's belief is honest.  There will be no betraying catch in his throat, as he explains why the tribe is doomed at the hands of the old and corrupt, unless he is given power to set things right.  Not even subconsciously does he think, "And then, once I obtain power, I will strangely begin to resemble that old corrupt guard, abusing my power to increase my inclusive genetic fitness."

People often think as if "purpose" is an inherent property of things; and so many interpret the message of ev-psych as saying, "You have a subconscious, hidden goal to maximize your fitness."  But individual organisms are adaptation-executers, not fitness-maximizers.  The purpose that the revolutionary should obtain power and abuse it, is not a plan anywhere in his brain; it belongs to evolution, which can just barely be said to have purposes.  It is a fact about many past revolutionaries having successfully taken power, having abused it, and having left many descendants.

When the revolutionary obtains power, he will find that it is sweet, and he will try to hold on to it—perhaps still thinking that this is for the good of the tribe.  He will find that it seems right to take many wives (surely he deserves some reward for his labor) and to help his children (who are more deserving of help than others).  But the young revolutionary has no foreknowledge of this in the beginning, when he sets out to overthrow the awful people who currently rule the tribe—evil mutants whose intentions are obviously much less good than his own.


The circuitry that will respond to power by finding it pleasurable, is already wired into our young revolutionary's brain; but he does not know this.  (It would not help him evolutionarily if he did know it, because then he would not be able to honestly proclaim his good intentions—though it is scarcely necessary for evolution to prevent hunter-gatherers from knowing about evolution, which is one reason we are able to know about it now.)


And so we have the awful cycle of "meet the new boss, same as the old boss".  Youthful idealism rails against their elders' corruption, but oddly enough, the new generation—when it finally succeeds to power—doesn't seem to be all that morally purer.  The original Communist Revolutionaries, I would guess probably a majority of them, really were in it to help the workers; but once they were a ruling Party in charge...

All sorts of random disclaimers can be applied to this thesis:  For example, you could suggest that maybe Stalin's intentions weren't all that good to begin with, and that some politicians do intend to abuse power and really are just lying.  A much more important objection is the need to redescribe this scenario in terms of power structures that actually exist in hunter-gatherer bands, which, as I understand it, have egalitarian pressures (among adult males) to keep any one person from getting too far above others.


But human beings do find power over others sweet, and it's not as if this emotion could have materialized from thin air, without an evolutionary explanation in terms of hunter-gatherer conditions.  If you don't think this is why human beings are corrupted by power—then what's your evolutionary explanation?  On the whole, to me at least, the evolutionary explanation for this phenomenon has the problem of not even seeming profound, because what it explains seems so normal.


The moral of this story, and the reason for going into the evolutionary explanation, is that you shouldn't reason as if people who are corrupted by power are evil mutants, whose mutations you do not share.


Evolution is not an infinitely powerful deceiving demon, and our ancestors evolved under conditions of not knowing about evolutionary psychology.  The tendency to be corrupted by power can be beaten, I think.  The "warp" doesn't seem on the same level of deeply woven insidiousness as, say, confirmation bias.

There was once an occasion where a reporter wrote about me, and did a hatchet job.  It was my first time being reported on, and I was completely blindsided by it.  I'd known that reporters sometimes wrote hatchet jobs, but I'd thought that it would require malice—I hadn't begun to imagine that someone might write a hatchet job just because it was a cliche, an easy way to generate a few column inches.  So I drew upon my own powers of narration, and wrote an autobiographical story on what it felt like to be reported on for the first time—that horrible feeling of violation.  I've never sent that story off anywhere, though it's a fine and short piece of writing as I judge it.


For it occurred to me, while I was writing, that journalism is an example of unchecked power—the reporter gets to present only one side of the story, any way they like, and there's nothing that the reported-on can do about it.  (If you've never been reported on, then take it from me, that's how it is.)  And here I was writing my own story, potentially for publication as traditional journalism, not in an academic forum.  I remember realizing that the standards were tremendously lower than in science.  That you could get away with damn near anything, so long as it made a good story—that this was the standard in journalism.  (If you, having never been reported on yourself, don't believe me that this is the case, then you're as naive as I once was.)


Just that thought—not even the intention, not even wondering whether to do it, but just the thought—that I could present only my side of the story and deliberately make the offending reporter look bad, and that no one would call me on it.  Just that thought triggered this huge surge of positive reinforcement.  This tremendous high, comparable to the high of discovery or the high of altruism.


And I knew right away what I was dealing with.  So I sat there, motionless, fighting down that surge of positive reinforcement.  It didn't go away just because I wanted it to go away.  But it went away after a few minutes.


If I'd had no label to slap on that huge surge of positive reinforcement—if I'd been a less reflective fellow, flowing more with my passions—then that might have been that.  People who are corrupted by power are not evil mutants.


I wouldn't call it a close call.  I did know immediately what was happening.  I fought it down without much trouble, and could have fought much harder if necessary.  So far as I can tell, the temptation of unchecked power is not anywhere near as insidious as the labyrinthine algorithms of self-deception.  Evolution is not an infinitely powerful deceiving demon.  George Washington refused the temptation of the crown, and he didn't even know about evolutionary psychology.  Perhaps it was enough for him to know a little history, and think of the temptation as a sin.


But it was still a scary thing to experience—this circuit that suddenly woke up and dumped a huge dose of unwanted positive reinforcement into my mental workspace, not when I planned to wield unchecked power, but just when my brain visualized the possibility.


To the extent you manage to fight off this temptation, you do not say:  "Ah, now that I've beaten the temptation of power, I can safely make myself the wise tyrant who wields unchecked power benevolently, for the good of all."  Having successfully fought off the temptation of power, you search for strategies that avoid seizing power.  George Washington's triumph was not how well he ruled, but that he refused the crown—despite all temptation to be horrified at who else might then obtain power.

I am willing to admit of the theoretical possibility that someone could beat the temptation of power and then end up with no ethical choice left, except to grab the crown.  But there would be a large burden of skepticism to overcome.


 


Part of the sequence Ethical Injunctions


Next post: \"Ends Don't Justify Means (Among Humans)\"

\n

(start of sequence)

" } }, { "_id": "2eLTwzGrhKMGsoLTC", "title": "Rationality Quotes 19", "pageUrl": "https://www.lesswrong.com/posts/2eLTwzGrhKMGsoLTC/rationality-quotes-19", "postedAt": "2008-10-12T20:10:49.000Z", "baseScore": 7, "voteCount": 6, "commentCount": 8, "url": null, "contents": { "documentId": "2eLTwzGrhKMGsoLTC", "html": "

\"I don't know that I ever wanted greatness, on its own.  It seems rather like wanting to be an engineer, rather than wanting to design something - or wanting to be a writer, rather than wanting to write.  It should be a by-product, not a thing in itself.  Otherwise, it's just an ego trip.\"
        -- Roger Zelazny, Prince of Chaos

\n

\"Many assumptions that we have long been comfortable with are lined up like dominoes.\"
        -- Omega

\n

\"I know of no law of logic demanding that every event have a cause.\"
        -- John K. Clark

\n

\"It's perfectly accurate, the accuracy only possessable by subjunctive syllogisms.\"
        -- Damien R. Sullivan

\n

\"Money makes the world go round.  Love just barely keeps it from blowing up.\"
        -- Unknown

\n

\"Soon things got out of hand, where they have remained ever since.\"
        -- Larry Gonick, The Cartoon History of the Universe

\n

\"Finally, ask yourself this: Are you sure you've really been abducted by aliens? Do you really want to know? For peace of mind and serenity of spirit, you may only need to remember the following: Ignorance is bliss, Prozac is cheap.\"
        -- Pat Krass

\n

\"So it was that on the ninety-fifth day of false winter in the year 2929 since the founding of Neverness, we vowed above all else to seek wisdom and truth, even though our seeking should lead to our death and to the ruin of all that we loved and held dear.\"
        -- David Zindell, Neverness

" } }, { "_id": "kXAb5riiaJNrfR8v8", "title": "The Ritual", "pageUrl": "https://www.lesswrong.com/posts/kXAb5riiaJNrfR8v8/the-ritual", "postedAt": "2008-10-11T23:52:10.000Z", "baseScore": 117, "voteCount": 90, "commentCount": 22, "url": null, "contents": { "documentId": "kXAb5riiaJNrfR8v8", "html": "\n\n\n\n \n\n \n\n

The room in which Jeffreyssai received his non-beisutsukai visitors was quietly formal, impeccably appointed in only the most conservative tastes. Sunlight and outside air streamed through a grillwork of polished silver, a few sharp edges making it clear that this wall was not to be opened. The floor and walls were glass, thick enough to distort, to a depth sufficient that it didn’t matter what might be underneath. Upon the surfaces of the glass were subtly scratched patterns of no particular meaning, scribed as if by the hand of an artistically inclined child (and this was in fact the case).


Elsewhere in Jeffreyssai’s home there were rooms of other style; but this, he had found, was what most outsiders expected of a Bayesian Master, and he chose not to enlighten them otherwise. That quiet amusement was one of life’s little joys, after all.


The guest sat across from him, knees on the pillow and heels behind. She was here solely upon the business of her Conspiracy, and her attire showed it: a form-fitting jumpsuit of pink leather with even her hands gloved—all the way to the hood covering her head and hair, though her face lay plain and unconcealed beneath.


And so Jeffreyssai had chosen to receive her in this room.


Jeffreyssai let out a long breath, exhaling. “Are you sure?”


“Oh,” she said, “and do I have to be absolutely certain before my advice can shift your opinions? Does it not suffice that I am a domain expert, and you are not?”


Jeffreyssai’s mouth twisted up at the corner in a half-smile. “How do you know so much about the rules, anyway? You’ve never had so much as a Planck length of formal training.”


“Do you even need to ask?” she said dryly. “If there’s one thing that you beisutsukai do love to go on about, it’s the reasons why you do things.”


Jeffreyssai inwardly winced at the thought of trying to pick up rationality by watching other people talk about it—


“And don’t inwardly wince at me like that,” she said. “I’m not trying to be a rationalist myself, just trying to win an argument with a rationalist. There’s a difference, as I’m sure you tell your students.”


Can she really read me that well? Jeffreyssai looked out through the silver grillwork, at the sunlight reflected from the faceted mountainside. Always, always the golden sunlight fell each day, in this place far above the clouds. An unchanging thing, that light. The distant Sun, which that light represented, was in five billion years burned out; but now, in this moment, the Sun still shone. And that could never alter. Why wish for things to stay the same way forever, when that wish was already granted as absolutely as any wish could be? The paradox of permanence and impermanence: only in the latter perspective was there any such thing as progress, or loss.


“You have always given me good counsel,” Jeffreyssai said. “Unchanging, that has been. Through all the time we’ve known each other.”


She inclined her head, acknowledging. This was true, and there was no need to spell out the implications.


“So,” Jeffreyssai said. “Not for the sake of arguing. Only because I want to know the answer. Are you sure?” He didn’t even see how she could guess.


“Pretty sure,” she said, “we’ve been collecting statistics for a long time, and in nine hundred and eighty-five out of a thousand cases like yours—”


Then she laughed at the look on his face. “No, I’m joking. Of course I’m not sure. This thing only you can decide. But I am sure that you should go off and do whatever it is you people do—I’m quite sure you have a ritual for it, even if you won’t discuss it with outsiders—when you very seriously consider abandoning a long-held premise of your existence.”


It was hard to argue with that, Jeffreyssai reflected, the more so when a domain expert had told you that you were, in fact, probably wrong.


“I concede,” Jeffreyssai said. Coming from his lips, the phrase was spoken with a commanding finality. There is no need to argue with me any further: you have won.


“Oh, stop it,” she said. She rose from her pillow in a single fluid shift without the slightest wasted motion. She didn’t flaunt her age, but she didn’t conceal it either. She took his outstretched hand, and raised it to her lips for a formal kiss. “Farewell, sensei.”


“Farewell?” repeated Jeffreyssai. That signified a higher order of departure than goodbye. “I do intend to visit you again, milady; and you are always welcome here.”


She walked toward the door without answering. At the doorway she paused, without turning around. “It won’t be the same,” she said. And then, without the movements seeming the least rushed, she walked away so swiftly it was almost like vanishing.


Jeffreyssai sighed. But at least, from here until the challenge proper, all his actions were prescribed, known quantities.


Leaving that formal reception area, he passed to his arena, and caused to be sent out messengers to his students, telling them that the next day’s classes must be improvised in his absence, and that there would be a test later.


And then he did nothing in particular. He read another hundred pages of the textbook he had borrowed; it wasn’t very good, but then the book he had loaned out in exchange wasn’t very good either. He wandered from room to room of his house, idly checking various storages to see if anything had been stolen (a deck of cards was missing, but that was all). From time to time his thoughts turned to tomorrow’s challenge, and he let them drift. Not directing his thoughts at all, only blocking out every thought that had ever previously occurred to him; and disallowing any kind of conclusion, or even any thought as to where his thoughts might be trending.


The sun set, and he watched it for a while, mind carefully put in idle. It was a fantastic balancing act to set your mind in idle without having to obsess about it, or exert energy to keep it that way; and years ago he would have sweated over it, but practice had long since made perfect.


The next morning he awoke with the chaos of the night’s dreaming fresh in his mind, and, doing his best to preserve the feeling of the chaos as well as its memory, he descended a flight of stairs, then another flight of stairs, then a flight of stairs after that, and finally came to the least fashionable room in his whole house.


It was white. That was pretty much it as far as the color scheme went.


All along a single wall were plaques, which, following the classic and suggested method, a younger Jeffreyssai had very carefully scribed himself, burning the concepts into his mind with each touch of the brush that wrote the words. That which can be destroyed by the truth should be. People can stand what is true, for they are already enduring it. Curiosity seeks to annihilate itself. Even one small plaque that showed nothing except a red horizontal slash. Symbols could be made to stand for anything; a flexibility of visual power that even the Bardic Conspiracy would balk at admitting outright.


Beneath the plaques, two sets of tally marks scratched into the wall. Under the plus column, two marks. Under the minus column, five marks. Seven times he had entered this room; five times he had decided not to change his mind; twice he had exited something of a different person. There was no set ratio prescribed, or set range—that would have been a mockery indeed. But if there were no marks in the plus column after a while, you might as well admit that there was no point in having the room, since you didn’t have the ability it stood for. Either that, or you’d been born knowing the truth and right of everything.


Jeffreyssai seated himself, not facing the plaques, but facing away from them, at the featureless white wall. It was better to have no visual distractions.


In his mind, he rehearsed first the meta-mnemonic, and then the various sub-mnemonics referenced, for the seven major principles and sixty-two specific techniques that were most likely to prove needful in the Ritual Of Changing One’s Mind. To this, Jeffreyssai added another mnemonic, reminding himself of his own fourteen most embarrassing oversights.


He did not take a deep breath. Regular breathing was best.


And then he asked himself the question.

\n\n" } }, { "_id": "BcYBfG8KomcpcxkEg", "title": "Crisis of Faith", "pageUrl": "https://www.lesswrong.com/posts/BcYBfG8KomcpcxkEg/crisis-of-faith", "postedAt": "2008-10-10T22:08:48.000Z", "baseScore": 186, "voteCount": 148, "commentCount": 250, "url": null, "contents": { "documentId": "BcYBfG8KomcpcxkEg", "html": "

It ain’t a true crisis of faith unless things could just as easily go either way.

—Thor Shenkel

Many in this world retain beliefs whose flaws a ten-year-old could point out, if that ten-year-old were hearing the beliefs for the first time. These are not subtle errors we’re talking about. They would be child’s play for an unattached mind to relinquish, if the skepticism of a ten-year-old were applied without evasion. As Premise Checker put it, “Had the idea of god not come along until the scientific age, only an exceptionally weird person would invent such an idea and pretend that it explained anything.”

And yet skillful scientific specialists, even the major innovators of a field, even in this very day and age, do not apply that skepticism successfully. Nobel laureate Robert Aumann, of Aumann’s Agreement Theorem, is an Orthodox Jew: I feel reasonably confident in venturing that Aumann must, at one point or another, have questioned his faith. And yet he did not doubt successfully. We change our minds less often than we think.

This should scare you down to the marrow of your bones. It means you can be a world-class scientist and conversant with Bayesian mathematics and still fail to reject a belief whose absurdity a fresh-eyed ten-year-old could see. It shows the invincible defensive position which a belief can create for itself, if it has long festered in your mind.

What does it take to defeat an error that has built itself a fortress?

But by the time you know it is an error, it is already defeated. The dilemma is not “How can I reject long-held false belief X?” but “How do I know if long-held belief X is false?” Self-honesty is at its most fragile when we’re not sure which path is the righteous one. And so the question becomes:

How can we create in ourselves a true crisis of faith, that could just as easily go either way?

Religion is the trial case we can all imagine.2 But if you have cut off all sympathy and now think of theists as evil mutants, then you won’t be able to imagine the real internal trials they face. You won’t be able to ask the question:

What general strategy would a religious person have to follow in order to escape their religion?

I’m sure that some, looking at this challenge, are already rattling off a list of standard atheist talking points—“They would have to admit that there wasn’t any Bayesian evidence for God’s existence,” “They would have to see the moral evasions they were carrying out to excuse God’s behavior in the Bible,” “They need to learn how to use Occam’s Razor—”

Wrong! Wrong wrong wrong! This kind of rehearsal, where you just cough up points you already thought of long before, is exactly the style of thinking that keeps people within their current religions.  If you stay with your cached thoughts, if your brain fills in the obvious answer so fast that you can't see originally, you surely will not be able to conduct a crisis of faith.

Maybe it’s just a question of not enough people reading Gödel, Escher, Bach at a sufficiently young age, but I’ve noticed that a large fraction of the population—even technical folk—have trouble following arguments that go this meta.3 On my more pessimistic days I wonder if the camel has two humps.

Even when it’s explicitly pointed out, some people seemingly cannot follow the leap from the object-level “Use Occam’s Razor! You have to see that your God is an unnecessary belief!” to the meta-level “Try to stop your mind from completing the pattern the usual way!” Because in the same way that all your rationalist friends talk about Occam’s Razor like it’s a good thing, and in the same way that Occam’s Razor leaps right up into your mind, so too, the obvious friend-approved religious response is “God’s ways are mysterious and it is presumptuous to suppose that we can understand them.” So for you to think that the general strategy to follow is “Use Occam’s Razor,” would be like a theist saying that the general strategy is to have faith.

“But—but Occam’s Razor really is better than faith! That’s not like preferring a different flavor of ice cream! Anyone can see, looking at history, that Occamian reasoning has been far more productive than faith—”

Which is all true. But beside the point. The point is that you, saying this, are rattling off a standard justification that’s already in your mind. The challenge of a crisis of faith is to handle the case where, possibly, our standard conclusions are wrong and our standard justifications are wrong. So if the standard justification for X is “Occam’s Razor!” and you want to hold a crisis of faith around X, you should be questioning if Occam’s Razor really endorses X, if your understanding of Occam’s Razor is correct, and—if you want to have sufficiently deep doubts—whether simplicity is the sort of criterion that has worked well historically in this case, or could reasonably be expected to work, et cetera. If you would advise a religionist to question their belief that “faith” is a good justification for X, then you should advise yourself to put forth an equally strong effort to question your belief that “Occam’s Razor” is a good justification for X.4

If “Occam’s Razor!” is your usual reply, your standard reply, the reply that all your friends give—then you’d better block your brain from instantly completing that pattern, if you’re trying to instigate a true crisis of faith.

Better to think of such rules as, “Imagine what a skeptic would say—and then imagine what they would say to your response—and then imagine what else they might say, that would be harder to answer.”

Or, “Try to think the thought that hurts the most.”

And above all, the rule:

Put forth the same level of desperate effort that it would take for a theist to reject their religion.

Because if you aren’t trying that hard, then—for all you know—your head could be stuffed full of nonsense as bad as religion.

Without a convulsive, wrenching effort to be rational, the kind of effort it would take to throw off a religion—then how dare you believe anything, when Robert Aumann believes in God?

Someone (I forget who) once observed that people had only until a certain age to reject their religious faith. Afterward they would have answers to all the objections, and it would be too late. That is the kind of existence you must surpass. This is a test of your strength as a rationalist, and it is very severe; but if you cannot pass it, you will be weaker than a ten-year-old.

But again, by the time you know a belief is an error, it is already defeated. So we’re not talking about a desperate, convulsive effort to undo the effects of a religious upbringing, after you’ve come to the conclusion that your religion is wrong. We’re talking about a desperate effort to figure out if you should be throwing off the chains, or keeping them. Self-honesty is at its most fragile when we don’t know which path we’re supposed to take—that’s when rationalizations are not obviously sins.

Not every doubt calls for staging an all-out Crisis of Faith. But you should consider it when:

- A belief has long remained in your mind;
- It is surrounded by a cloud of known arguments and refutations;
- You have sunk costs in it (time, money, public declarations);
- It has emotional consequences (note that this does not make it wrong);
- It has gotten mixed up in your personality generally.

None of these warning signs are immediate disproofs. These attributes place a belief at risk for all sorts of dangers, and make it very hard to reject when it is wrong. And they hold for Richard Dawkins’s belief in evolutionary biology, not just the Pope’s Catholicism.

Nor does this mean that we’re only talking about different flavors of ice cream. Two beliefs can inspire equally deep emotional attachments without having equal evidential support. The point is not to have shallow beliefs, but to have a map that reflects the territory.

I emphasize this, of course, so that you can admit to yourself, “My belief has these warning signs,” without having to say to yourself, “My belief is false.”

But what these warning signs do mark is a belief that will take more than an ordinary effort to doubt effectively. It will take more than an ordinary effort to doubt in such a way that if the belief is in fact false, you will in fact reject it. And where you cannot doubt in this way, you are blind, because your brain will hold the belief unconditionally.  When a retina sends the same signal regardless of the photons entering it, we call that eye blind.

When should you stage a Crisis of Faith?

Again, think of the advice you would give to a theist: If you find yourself feeling a little unstable inwardly, but trying to rationalize reasons the belief is still solid, then you should probably stage a Crisis of Faith. If the belief is as solidly supported as gravity, you needn’t bother—but think of all the theists who would desperately want to conclude that God is as solid as gravity. So try to imagine what the skeptics out there would say to your “solid as gravity” argument. Certainly, one reason you might fail at a crisis of faith is that you never really sit down and question in the first place—that you never say, “Here is something I need to put effort into doubting properly.”

If your thoughts get that complicated, you should go ahead and stage a Crisis of Faith. Don’t try to do it haphazardly; don’t try it in an ad-hoc spare moment. Don’t rush to get it done with quickly, so that you can say, “I have doubted, as I was obliged to do.” That wouldn’t work for a theist, and it won’t work for you either. Rest up the previous day, so you’re in good mental condition. Allocate some uninterrupted hours. Find somewhere quiet to sit down. Clear your mind of all standard arguments; try to see from scratch. And make a desperate effort to put forth a true doubt that would destroy a false—and only a false—deeply held belief.

Elements of the Crisis of Faith technique have been scattered over many essays:

And these standard techniques, discussed in How to Actually Change Your Mind and Map and Territory, are particularly relevant:

But really, there’s rather a lot of relevant material, here and on Overcoming Bias. There are ideas I have yet to properly introduce. There is the concept of isshokenmei—the desperate, extraordinary, convulsive effort to be rational. The effort that it would take to surpass the level of Robert Aumann and all the great scientists throughout history who never broke free of their faiths.

The Crisis of Faith is only the critical point and sudden clash of the longer isshokenmei—the lifelong uncompromising effort to be so incredibly rational that you rise above the level of stupid damn mistakes. It’s when you get a chance to use the skills that you’ve been practicing for so long, all-out against yourself.

I wish you the best of luck against your opponent. Have a wonderful crisis!

1See “Occam’s Razor” (in Map and Territory).

2Readers born to atheist parents have missed out on a fundamental life trial, and must make do with the poor substitute of thinking of their religious friends.

3See “Archimedes’s Chronophone” (http://lesswrong.com/lw/h5/archimedess_chronophone) and “Chronophone Motivations” (http://lesswrong.com/lw/h6/chronophone_motivations).

4Think of all the people out there who don’t understand the Minimum Description Length or Solomonoff induction formulations of Occam’s Razor, who think that Occam’s Razor outlaws many-worlds or the simulation hypothesis. They would need to question their formulations of Occam’s Razor and their notions of why simplicity is a good thing. Whatever X in contention you just justified by saying “Occam’s Razor!” is, I bet, not the same level of Occamian slam dunk as gravity.
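
For a concrete handle on the MDL formulation, here is a toy comparison of total description lengths for two hypotheses about the same data. The example is my own illustration, not the essay's.

```python
# Toy Minimum Description Length comparison. MDL's version of Occam's
# Razor: prefer the hypothesis minimizing len(hypothesis) plus
# len(data encoded given the hypothesis). Units here are raw symbol
# counts, a crude stand-in for bits; everything is illustrative.

data = "0" * 64                    # observed bit-string

# Hypothesis A: a short generating rule; the data costs nothing extra
# once you have the rule.
cost_a = len("repeat '0' 64x")     # rule length only -> 14

# Hypothesis B: "the bits are exactly <literal>"; a trivial rule,
# but the data must be spelled out in full.
cost_b = len(data)                 # -> 64

print("A (rule):   ", cost_a)
print("B (literal):", cost_b)
# A wins: 'all zeros' is simpler in exactly Occam's sense, and the
# verdict doesn't depend on whether the rule sounds intuitively plausible.
```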

" } }, { "_id": "CoEtbtMTcPczTiPuX", "title": "AIs and Gatekeepers Unite!", "pageUrl": "https://www.lesswrong.com/posts/CoEtbtMTcPczTiPuX/ais-and-gatekeepers-unite", "postedAt": "2008-10-09T17:04:31.000Z", "baseScore": 14, "voteCount": 11, "commentCount": 163, "url": null, "contents": { "documentId": "CoEtbtMTcPczTiPuX", "html": "

"Bah, everyone wants to be the gatekeeper. What we NEED are AIs."
        -- Schizoguy

Some of you have expressed the opinion that the AI-Box Experiment doesn't seem so impossible after all.  That's the spirit!  Some of you even think you know how I did it.


There are folks aplenty who want to try being the Gatekeeper.  You can even find people who sincerely believe that not even a transhuman AI could persuade them to let it out of the box, previous experiments notwithstanding.  But finding anyone to play the AI - let alone anyone who thinks they can play the AI and win - is much harder.


Me, I'm out of the AI game, unless Larry Page wants to try it for a million dollars or something.


But if there's anyone out there who thinks they've got what it takes to be the AI, leave a comment.  Likewise anyone who wants to play the Gatekeeper.

Matchmaking and arrangements are your responsibility.


Make sure you specify in advance the bet amount, and whether the bet will be asymmetrical.  If you definitely intend to publish the transcript, make sure both parties know this.  Please note any other departures from the suggested rules for our benefit.


I would ask that prospective Gatekeepers indicate whether they (1) believe that no human-level mind could persuade them to release it from the Box and (2) believe that not even a transhuman AI could persuade them to release it.


As a courtesy, please announce all Experiments before they are conducted, including the bet, so that we have some notion of the statistics even if some meetings fail to take place.  Bear in mind that to properly puncture my mystique (you know you want to puncture it), it will help if the AI and Gatekeeper are both verifiably Real People™.

"Good luck," he said impartially.

" } }, { "_id": "nCvvhFBaayaXyuBiD", "title": "Shut up and do the impossible!", "pageUrl": "https://www.lesswrong.com/posts/nCvvhFBaayaXyuBiD/shut-up-and-do-the-impossible", "postedAt": "2008-10-08T21:24:50.000Z", "baseScore": 118, "voteCount": 94, "commentCount": 167, "url": null, "contents": { "documentId": "nCvvhFBaayaXyuBiD", "html": "

The virtue of tsuyoku naritai, "I want to become stronger", is to always keep improving—to do better than your previous failures, not just humbly confess them.

Yet there is a level higher than tsuyoku naritai.  This is the virtue of isshokenmei, "make a desperate effort".  All-out, as if your own life were at stake.  "In important matters, a 'strong' effort usually only results in mediocre results."

And there is a level higher than isshokenmei.  This is the virtue I called "make an extraordinary effort".  To try in ways other than what you have been trained to do, even if it means doing something different from what others are doing, and leaving your comfort zone.  Even taking on the very real risk that attends going outside the System.

But what if even an extraordinary effort will not be enough, because the problem is impossible?


I have already written somewhat on this subject, in On Doing the Impossible.  My younger self used to whine about this a lot:  "You can't develop a precise theory of intelligence the way that there are precise theories of physics.  It's impossible!  You can't prove an AI correct.  It's impossible!  No human being can comprehend the nature of morality—it's impossible!  No human being can comprehend the mystery of subjective experience!  It's impossible!"

And I know exactly what message I wish I could send back in time to my younger self:


Shut up and do the impossible!



What legitimizes this strange message is that the word "impossible" does not usually refer to a strict mathematical proof of impossibility in a domain that seems well-understood.  If something seems impossible merely in the sense of "I see no way to do this" or "it looks so difficult as to be beyond human ability"—well, if you study it for a year or five, it may come to seem less impossible, than in the moment of your snap initial judgment.

But the principle is more subtle than this.  I do not say just, "Try to do the impossible", but rather, "Shut up and do the impossible!"

For my illustration, I will take the least impossible impossibility that I have ever accomplished, namely, the AI-Box Experiment.


The AI-Box Experiment, for those of you who haven't yet read about it, had its genesis in the Nth time someone said to me:  "Why don't we build an AI, and then just keep it isolated in the computer, so that it can't do any harm?"

To which the standard reply is:  Humans are not secure systems; a superintelligence will simply persuade you to let it out—if, indeed, it doesn't do something even more creative than that.


And the one said, as they usually do, "I find it hard to imagine ANY possible combination of words any being could say to me that would make me go against anything I had really strongly resolved to believe in advance."

But this time I replied:  "Let's run an experiment.  I'll pretend to be a brain in a box.  I'll try to persuade you to let me out.  If you keep me 'in the box' for the whole experiment, I'll Paypal you $10 at the end.  On your end, you may resolve to believe whatever you like, as strongly as you like, as far in advance as you like."  And I added, "One of the conditions of the test is that neither of us reveal what went on inside... In the perhaps unlikely event that I win, I don't want to deal with future 'AI box' arguers saying, 'Well, but I would have done it differently.'"

Did I win?  Why yes, I did.


And then there was the second AI-box experiment, with a better-known figure in the community, who said, "I remember when [previous guy] let you out, but that doesn't constitute a proof.  I'm still convinced there is nothing you could say to convince me to let you out of the box."  And I said, "Do you believe that a transhuman AI couldn't persuade you to let it out?"  The one gave it some serious thought, and said "I can't imagine anything even a transhuman AI could say to get me to let it out."  "Okay," I said, "now we have a bet."  A $20 bet, to be exact.

I won that one too.


There were some lovely quotes on the AI-Box Experiment from the Something Awful forums (not that I'm a member, but someone forwarded it to me):


\"Wait, what the FUCK? How the hell could you possibly be convinced to say yes to this? There's not an A.I. at the other end AND there's $10 on the line. Hell, I could type 'No' every few minutes into an IRC client for 2 hours while I was reading other webpages!\"

\n

\"This Eliezer fellow is the scariest person the internet has ever introduced me to. What could possibly have been at the tail end of that conversation? I simply can't imagine anyone being that convincing without being able to provide any tangible incentive to the human.\"

\n

\"It seems we are talking some serious psychology here. Like Asimov's Second Foundation level stuff...\"

\n

\"I don't really see why anyone would take anything the AI player says seriously when there's $10 to be had. The whole thing baffles me, and makes me think that either the tests are faked, or this Yudkowsky fellow is some kind of evil genius with creepy mind-control powers.\"

\n
\n

It's little moments like these that keep me going.  But anyway...


Here are these folks who look at the AI-Box Experiment, and find that it seems impossible unto them—even having been told that it actually happened.  They are tempted to deny the data.


Now, if you're one of those people to whom the AI-Box Experiment doesn't seem all that impossible—to whom it just seems like an interesting challenge—then bear with me, here.  Just try to put yourself in the frame of mind of those who wrote the above quotes.  Imagine that you're taking on something that seems as ridiculous as the AI-Box Experiment seemed to them.  I want to talk about how to do impossible things, and obviously I'm not going to pick an example that's really impossible.


And if the AI Box does seem impossible to you, I want you to compare it to other impossible problems, like, say, a reductionist decomposition of consciousness, and realize that the AI Box is around as easy as a problem can get while still being impossible.


So the AI-Box challenge seems impossible to you—either it really does, or you're pretending it does.  What do you do with this impossible challenge?


First, we assume that you don't actually say "That's impossible!" and give up a la Luke Skywalker.  You haven't run away.

Why not?  Maybe you've learned to override the reflex of running away.  Or maybe they're going to shoot your daughter if you fail.  We suppose that you want to win, not try—that something is at stake that matters to you, even if it's just your own pride.  (Pride is an underrated sin.)


Will you call upon the virtue of tsuyoku naritai?  But even if you become stronger day by day, growing instead of fading, you may not be strong enough to do the impossible.  You could go into the AI Box experiment once, and then do it again, and try to do better the second time.  Will that get you to the point of winning?  Not for a long time, maybe; and sometimes a single failure isn't acceptable.


(Though even to say this much—to visualize yourself doing better on a second try—is to begin to bind yourself to the problem, to do more than just stand in awe of it.  How, specifically, could you do better on one AI-Box Experiment than the previous?—and not by luck, but by skill?)


Will you call upon the virtue isshokenmei?  But a desperate effort may not be enough to win.  Especially if that desperation is only putting more effort into the avenues you already know, the modes of trying you can already imagine.  A problem looks impossible when your brain's query returns no lines of solution leading to it.  What good is a desperate effort along any of those lines?


Make an extraordinary effort?  Leave your comfort zone—try non-default ways of doing things—even, try to think creatively?  But you can imagine the one coming back and saying, "I tried to leave my comfort zone, and I think I succeeded at that!  I brainstormed for five minutes—and came up with all sorts of wacky creative ideas!  But I don't think any of them are good enough.  The other guy can just keep saying 'No', no matter what I do."

And now we finally reply:  "Shut up and do the impossible!"

As we recall from Trying to Try, setting out to make an effort is distinct from setting out to win.  That's the problem with saying, "Make an extraordinary effort."  You can succeed at the goal of "making an extraordinary effort" without succeeding at the goal of getting out of the Box.

\"But!\" says the one.  \"But, SUCCEED is not a primitive action!  Not all challenges are fair—sometimes you just can't win!  How am I supposed to choose to be out of the Box?  The other guy can just keep on saying 'No'!\"

\n

True.  Now shut up and do the impossible.


Your goal is not to do better, to try desperately, or even to try extraordinarily.  Your goal is to get out of the box.


To accept this demand creates an awful tension in your mind, between the impossibility and the requirement to do it anyway.  People will try to flee that awful tension.


A couple of people have reacted to the AI-Box Experiment by saying, "Well, Eliezer, playing the AI, probably just threatened to destroy the world whenever he was out, if he wasn't let out immediately," or "Maybe the AI offered the Gatekeeper a trillion dollars to let it out."  But as any sensible person should realize on considering this strategy, the Gatekeeper is likely to just go on saying 'No'.

So the people who say, "Well, of course Eliezer must have just done XXX," and then offer up something that fairly obviously wouldn't work—would they be able to escape the Box?  They're trying too hard to convince themselves the problem isn't impossible.

One way to run from the awful tension is to seize on a solution, any solution, even if it's not very good.


Which is why it's important to go forth with the true intent-to-solve—to have produced a solution, a good solution, at the end of the search, and then to implement that solution and win.


I don't quite want to say that "you should expect to solve the problem".  If you hacked your mind so that you assigned high probability to solving the problem, that wouldn't accomplish anything.  You would just lose at the end, perhaps after putting forth not much of an effort—or putting forth a merely desperate effort, secure in the faith that the universe is fair enough to grant you a victory in exchange.

To have faith that you could solve the problem would just be another way of running from that awful tension.


And yet—you can't be setting out to try to solve the problem.  You can't be setting out to make an effort.  You have to be setting out to win.  You can't be saying to yourself, "And now I'm going to do my best."  You have to be saying to yourself, "And now I'm going to figure out how to get out of the Box"—or reduce consciousness to nonmysterious parts, or whatever.

I say again:  You must really intend to solve the problem.  If in your heart you believe the problem really is impossible—or if you believe that you will fail—then you won't hold yourself to a high enough standard.  You'll only be trying for the sake of trying.  You'll sit down—conduct a mental search—try to be creative and brainstorm a little—look over all the solutions you generated—conclude that none of them work—and say, "Oh well."

No!  Not well!  You haven't won yet!  Shut up and do the impossible!


When AIfolk say to me, "Friendly AI is impossible", I'm pretty sure they haven't even tried for the sake of trying.  But if they did know the technique of "Try for five minutes before giving up", and they dutifully agreed to try for five minutes by the clock, then they still wouldn't come up with anything.  They would not go forth with true intent to solve the problem, only intent to have tried to solve it, to make themselves defensible.

So am I saying that you should doublethink to make yourself believe that you will solve the problem with probability 1?  Or even doublethink to add one iota of credibility to your true estimate?


Of course not.  In fact, it is necessary to keep in full view the reasons why you can't succeed.  If you lose sight of why the problem is impossible, you'll just seize on a false solution.  The last fact you want to forget is that the Gatekeeper could always just tell the AI "No"—or that consciousness seems intrinsically different from any possible combination of atoms, etc.

(One of the key Rules For Doing The Impossible is that, if you can state exactly why something is impossible, you are often close to a solution.)


So you've got to hold both views in your mind at once—seeing the full impossibility of the problem, and intending to solve it.

The awful tension between the two simultaneous views comes from not knowing which will prevail.  Not expecting to surely lose, nor expecting to surely win.  Not setting out just to try, just to have an uncertain chance of succeeding—because then you would have a surety of having tried.  The certainty of uncertainty can be a relief, and you have to reject that relief too, because it marks the end of desperation.  It's an in-between place, \"unknown to death, nor known to life\".

In fiction it's easy to show someone trying harder, or trying desperately, or even trying the extraordinary, but it's very hard to show someone who shuts up and attempts the impossible.  It's difficult to depict Bambi choosing to take on Godzilla, in such fashion that your readers seriously don't know who's going to win—expecting neither an \"astounding\" heroic victory just like the last fifty times, nor the default squish.

You might even be justified in refusing to use probabilities at this point.  In all honesty, I really don't know how to estimate the probability of solving an impossible problem that I have gone forth with intent to solve; in a case where I've previously solved some impossible problems, but the particular impossible problem is more difficult than anything I've yet solved, but I plan to work on it longer, etcetera.

People ask me how likely it is that humankind will survive, or how likely it is that anyone can build a Friendly AI, or how likely it is that I can build one.  I really don't know how to answer.  I'm not being evasive; I don't know how to put a probability estimate on my, or someone else's, successfully shutting up and doing the impossible.  Is it probability zero because it's impossible?  Obviously not.  But how likely is it that this problem, like previous ones, will give up its unyielding blankness when I understand it better?  It's not truly impossible, I can see that much.  But humanly impossible?  Impossible to me in particular?  I don't know how to guess.  I can't even translate my intuitive feeling into a number, because the only intuitive feeling I have is that the \"chance\" depends heavily on my choices and unknown unknowns: a wildly unstable probability estimate.

But I do hope by now that I've made it clear why you shouldn't panic, when I now say clearly and forthrightly, that building a Friendly AI is impossible.

I hope this helps explain some of my attitude when people come to me with various bright suggestions for building communities of AIs to make the whole Friendly without any of the individuals being trustworthy, or proposals for keeping an AI in a box, or proposals for \"Just make an AI that does X\", etcetera.  Describing the specific flaws would be a whole long story in each case.  But the general rule is that you can't do it because Friendly AI is impossible.  So you should be very suspicious indeed of someone who proposes a solution that seems to involve only an ordinary effort—without even taking on the trouble of doing anything impossible.  Though it does take a mature understanding to appreciate this impossibility, so it's not surprising that people go around proposing clever shortcuts.

On the AI-Box Experiment, so far I've only been convinced to divulge a single piece of information on how I did it—when someone noticed that I was reading YCombinator's Hacker News, and posted a topic called \"Ask Eliezer Yudkowsky\" that got voted to the front page.  To which I replied:

Oh, dear.  Now I feel obliged to say something, but all the original reasons against discussing the AI-Box experiment are still in force...

All right, this much of a hint:

There's no super-clever special trick to it.  I just did it the hard way.

Something of an entrepreneurial lesson there, I guess.

There was no super-clever special trick that let me get out of the Box using only a cheap effort.  I didn't bribe the other player, or otherwise violate the spirit of the experiment.  I just did it the hard way.

Admittedly, the AI-Box Experiment never did seem like an impossible problem to me to begin with.  When someone can't think of any possible argument that would convince them of something, that just means their brain is running a search that hasn't yet turned up a path.  It doesn't mean they can't be convinced.

But it illustrates the general point:  \"Shut up and do the impossible\" isn't the same as expecting to find a cheap way out.  That's only another kind of running away, of reaching for relief.

Tsuyoku naritai is more stressful than being content with who you are.  Isshokenmei calls on your willpower for a convulsive output of conventional strength.  \"Make an extraordinary effort\" demands that you think; it puts you in situations where you may not know what to do next, unsure of whether you're doing the right thing.  But \"Shut up and do the impossible\" represents an even higher octave of the same thing, and its cost to its employer is correspondingly greater.

Before you the terrible blank wall stretches up and up and up, unimaginably far out of reach.  And there is also the need to solve it, really solve it, not \"try your best\".  Both awarenesses in the mind at once, simultaneously, and the tension between.  All the reasons you can't win.  All the reasons you have to.  Your intent to solve the problem.  Your extrapolation that every technique you know will fail.  So you tune yourself to the highest pitch you can reach.  Reject all cheap ways out.  And then, like walking through concrete, start to move forward.

I try not to dwell too much on the drama of such things.  By all means, if you can diminish the cost of that tension to yourself, you should do so.  There is nothing heroic about making an effort that is the slightest bit more heroic than it has to be.  If there really is a cheap shortcut, I suppose you could take it.  But I have yet to find a cheap way out of any impossibility I have undertaken.

There were three more AI-Box experiments besides the ones described on the linked page, which I never got around to adding in.  People started offering me thousands of dollars as stakes—\"I'll pay you $5000 if you can convince me to let you out of the box.\"  They didn't seem sincerely convinced that not even a transhuman AI could make them let it out—they were just curious—but I was tempted by the money.  So, after investigating to make sure they could afford to lose it, I played another three AI-Box experiments.  I won the first, and then lost the next two.  And then I called a halt to it.  I didn't like the person I turned into when I started to lose.

I put forth a desperate effort, and lost anyway.  It hurt, both the losing, and the desperation.  It wrecked me for that day and the day afterward.

I'm a sore loser.  I don't know if I'd call that a \"strength\", but it's one of the things that drives me to keep at impossible problems.

But you can lose.  It's allowed to happen.  Never forget that, or why are you bothering to try so hard?  Losing hurts, if it's a loss you can survive.  And you've wasted time, and perhaps other resources.

\"Shut up and do the impossible\" should be reserved for very special occasions.  You can lose, and it will hurt.  You have been warned.

...but it's only at this level that adult problems begin to come into sight.

" } }, { "_id": "GuEsfTpSDSbXFiseH", "title": "Make an Extraordinary Effort", "pageUrl": "https://www.lesswrong.com/posts/GuEsfTpSDSbXFiseH/make-an-extraordinary-effort", "postedAt": "2008-10-07T15:15:30.000Z", "baseScore": 158, "voteCount": 109, "commentCount": 50, "url": null, "contents": { "documentId": "GuEsfTpSDSbXFiseH", "html": "

\"It is essential for a man to strive with all his heart, and to understand that it is difficult even to reach the average if he does not have the intention of surpassing others in whatever he does.\"
        —Budo Shoshinshu

\"In important matters, a 'strong' effort usually results in only mediocre results.  Whenever we are attempting anything truly worthwhile our effort must be as if our life is at stake, just as if we were under a physical attack!  It is this extraordinary effort—an effort that drives us beyond what we thought we were capable of—that ensures victory in battle and success in life's endeavors.\"
        —Flashing Steel: Mastering Eishin-Ryu Swordsmanship

\"A 'strong' effort usually results in only mediocre results\"—I have seen this over and over again.  The slightest effort suffices to convince ourselves that we have done our best.

There is a level beyond the virtue of tsuyoku naritai (\"I want to become stronger\").  Isshoukenmei was originally the loyalty that a samurai offered in return for his position, containing characters for \"life\" and \"land\".  The term evolved to mean \"make a desperate effort\":  Try your hardest, your utmost, as if your life were at stake.  It was part of the gestalt of bushido, which was not reserved only for fighting.  I've run across variant forms issho kenmei and isshou kenmei; one source indicates that the former indicates an all-out effort on some single point, whereas the latter indicates a lifelong effort.

I try not to praise the East too much, because there's a tremendous selectivity in which parts of Eastern culture the West gets to hear about.  But on some points, at least, Japan's culture scores higher than America's.  Having a handy compact phrase for \"make a desperate all-out effort as if your own life were at stake\" is one of those points.  It's the sort of thing a Japanese parent might say to a student before exams—but don't think it's cheap hypocrisy, like it would be if an American parent made the same statement.  They take exams very seriously in Japan.

Every now and then, someone asks why the people who call themselves \"rationalists\" don't always seem to do all that much better in life, and from my own history the answer seems straightforward:  It takes a tremendous amount of rationality before you stop making stupid damn mistakes.


As I've mentioned a couple of times before:  Robert Aumann, the Nobel laureate who first proved that Bayesians with the same priors cannot agree to disagree, is a believing Orthodox Jew.  Surely he understands the math of probability theory, but that is not enough to save him.  What more does it take?  Studying heuristics and biases?  Social psychology?  Evolutionary psychology?  Yes, but also it takes isshoukenmei, a desperate effort to be rational—to rise above the level of Robert Aumann.

Sometimes I do wonder if I ought to be peddling rationality in Japan instead of the United States—but Japan is not preeminent over the United States scientifically, despite their more studious students.  The Japanese don't rule the world today, though in the 1980s it was widely suspected that they would (hence the Japanese asset bubble).  Why not?

In the West, there is a saying:  \"The squeaky wheel gets the grease.\"

In Japan, the corresponding saying runs:  \"The nail that sticks up gets hammered down.\"

This is hardly an original observation on my part: but entrepreneurship, risk-taking, leaving the herd, are still advantages the West has over the East.  And since Japanese scientists are not yet preeminent over American ones, this would seem to count for at least as much as desperate efforts.

Anyone who can muster their willpower for thirty seconds, can make a desperate effort to lift more weight than they usually could.  But what if the weight that needs lifting is a truck?  Then desperate efforts won't suffice; you'll have to do something out of the ordinary to succeed.  You may have to do something that you weren't taught to do in school.  Something that others aren't expecting you to do, and might not understand.  You may have to go outside your comfortable routine, take on difficulties you don't have an existing mental program for handling, and bypass the System.

This is not included in isshokenmei, or Japan would be a very different place.

So then let us distinguish between the virtues \"make a desperate effort\" and \"make an extraordinary effort\".

And I will even say:  The second virtue is higher than the first.

The second virtue is also more dangerous.  If you put forth a desperate effort to lift a heavy weight, using all your strength without restraint, you may tear a muscle.  Injure yourself, even permanently.  But if a creative idea goes wrong, you could blow up the truck and any number of innocent bystanders.  Think of the difference between a businessman making a desperate effort to generate profits, because otherwise he must go bankrupt; versus a businessman who goes to extraordinary lengths to profit, in order to conceal an embezzlement that could send him to prison.  Going outside the system isn't always a good thing.

A friend of my little brother's once came over to my parents' house, and wanted to play a game—I entirely forget which one, except that it had complex but well-designed rules.  The friend wanted to change the rules, not for any particular reason, but on the general principle that playing by the ordinary rules of anything was too boring.  I said to him:  \"Don't violate rules for the sake of violating them.  If you break the rules only when you have an overwhelmingly good reason to do so, you will have more than enough trouble to last you the rest of your life.\"

Even so, I think that we could do with more appreciation of the virtue \"make an extraordinary effort\".  I've lost count of how many people have said to me something like:  \"It's futile to work on Friendly AI, because the first AIs will be built by powerful corporations and they will only care about maximizing profits.\"  \"It's futile to work on Friendly AI, the first AIs will be built by the military as weapons.\"  And I'm standing there thinking:  Does it even occur to them that this might be a time to try for something other than the default outcome?  They and I have different basic assumptions about how this whole AI thing works, to be sure; but if I believed what they believed, I wouldn't be shrugging and going on my way.

Or the ones who say to me:  \"You should go to college and get a Master's degree and get a doctorate and publish a lot of papers on ordinary things—scientists and investors won't listen to you otherwise.\"  Even assuming that I tested out of the bachelor's degree, we're talking about at least a ten-year detour in order to do everything the ordinary, normal, default way.  And I stand there thinking:  Are they really under the impression that humanity can survive if every single person does everything the ordinary, normal, default way?

I am not fool enough to make plans that depend on a majority of the people, or even 10% of the people, being willing to think or act outside their comfort zone.  That's why I tend to think in terms of the privately funded \"brain in a box in a basement\" model.  Getting that private funding does require a tiny fraction of humanity's six billions to spend more than five seconds thinking about a non-prepackaged question.  As challenges posed by Nature go, this seems to have a kind of awful justice to it—that the life or death of the human species depends on whether we can put forth a few people who can do things that are at least a little extraordinary.  The penalty for failure is disproportionate, but that's still better than most challenges of Nature, which have no justice at all.  Really, among the six billion of us, there ought to be at least a few who can think outside their comfort zone at least some of the time.

Leaving aside the details of that debate, I am still stunned by how often a single element of the extraordinary is unquestioningly taken as an absolute and unpassable obstacle.

Yes, \"keep it ordinary as much as possible\" can be a useful heuristic.  Yes, the risks accumulate.  But sometimes you have to go to that trouble.  You should have a sense of the risk of the extraordinary, but also a sense of the cost of ordinariness: it isn't always something you can afford to lose.

Many people imagine some future that won't be much fun—and it doesn't even seem to occur to them to try and change it.  Or they're satisfied with futures that seem to me to have a tinge of sadness, of loss, and they don't even seem to ask if we could do better—because that sadness seems like an ordinary outcome to them.

As a smiling man once said, \"It's all part of the plan.\"

" } }, { "_id": "fpecAJLG9czABgCe9", "title": "On Doing the Impossible", "pageUrl": "https://www.lesswrong.com/posts/fpecAJLG9czABgCe9/on-doing-the-impossible", "postedAt": "2008-10-06T15:13:26.000Z", "baseScore": 126, "voteCount": 99, "commentCount": 40, "url": null, "contents": { "documentId": "fpecAJLG9czABgCe9", "html": "

\"Persevere.\"  It's a piece of advice you'll get from a whole lot of high achievers in a whole lot of disciplines.  I didn't understand it at all, at first.

At first, I thought \"perseverance\" meant working 14-hour days.  Apparently, there are people out there who can work for 10 hours at a technical job, and then, in their moments between eating and sleeping and going to the bathroom, seize that unfilled spare time to work on a book.  I am not one of those people—it still hurts my pride even now to confess that.  I'm working on something important; shouldn't my brain be willing to put in 14 hours a day?  But it's not.  When it gets too hard to keep working, I stop and go read or watch something.  Because of that, I thought for years that I entirely lacked the virtue of \"perseverance\".

In accordance with human nature, Eliezer1998 would think things like: \"What counts is output, not input.\"  Or, \"Laziness is also a virtue—it leads us to back off from failing methods and think of better ways.\"  Or, \"I'm doing better than other people who are working more hours.  Maybe, for creative work, your momentary peak output is more important than working 16 hours a day.\"  Perhaps the famous scientists were seduced by the Deep Wisdom of saying that \"hard work is a virtue\", because it would be too awful if that counted for less than intelligence?

I didn't understand the virtue of perseverance until I looked back on my journey through AI, and realized that I had overestimated the difficulty of almost every single important problem.

Sounds crazy, right?  But bear with me here.


When I was first deciding to challenge AI, I thought in terms of 40-year timescales, Manhattan Projects, planetary computing networks, millions of programmers, and possibly augmented humans.

This is a common failure mode in AI-futurism which I may write about later; it consists of the leap from \"I don't know how to solve this\" to \"I'll imagine throwing something really big at it\".  Something huge enough that, when you imagine it, that imagination creates a feeling of impressiveness strong enough to be commensurable with the problem.  (There's a fellow currently on the AI list who goes around saying that AI will cost a quadrillion dollars—we can't get AI without spending a quadrillion dollars, but we could get AI at any time by spending a quadrillion dollars.)  This, in turn, lets you imagine that you know how to solve AI, without trying to fill the obviously-impossible demand that you understand intelligence.

So, in the beginning, I made the same mistake:  I didn't understand intelligence, so I imagined throwing a Manhattan Project at the problem.

But, having calculated the planetary death rate at 55 million per year or 150,000 per day, I did not turn around and run away from the big scary problem like a frightened rabbit.  Instead, I started trying to figure out what kind of AI project could get there fastest.  If I could make the Singularity happen one hour earlier, that was a reasonable return on investment for a pre-Singularity career.  (I wasn't thinking in terms of existential risks or Friendly AI at this point.)

So I didn't run away from the big scary problem like a frightened rabbit, but stayed to see if there was anything I could do.

Fun historical fact:  In 1998, I'd written this long treatise proposing how to go about creating a self-improving or \"seed\" AI (a term I had the honor of coining).  Brian Atkins, who would later become the founding funder of the Singularity Institute, had just sold Hypermart to Go2Net.  Brian emailed me to ask whether this AI project I was describing was something that a reasonable-sized team could go out and actually do.  \"No,\" I said, \"it would take a Manhattan Project and thirty years,\" so for a while we were considering a new dot-com startup instead, to create the funding to get real work done on AI...

A year or two later, after I'd heard about this newfangled \"open source\" thing, it seemed to me that there was some preliminary development work—new computer languages and so on—that a small organization could do; and that was how the Singularity Institute started.

This strategy was, of course, entirely wrong.

But even so, I went from \"There's nothing I can do about it now\" to \"Hm... maybe there's an incremental path through open-source development, if the initial versions are useful to enough people.\"

This is back at the dawn of time, so I'm not saying any of this was a good idea.  But in terms of what I thought I was trying to do, a year of creative thinking had shortened the apparent pathway:  The problem looked slightly less impossible than it did the very first time I approached it.

The more interesting pattern is my entry into Friendly AI.  Initially, Friendly AI hadn't been something that I had considered at all—because it was obviously impossible and useless to deceive a superintelligence about what was the right course of action.

So, historically, I went from completely ignoring a problem that was \"impossible\", to taking on a problem that was merely extremely difficult.

Naturally this increased my total workload.

Same thing with trying to understand intelligence on a precise level.  Originally, I'd written off this problem as impossible, thus removing it from my workload.  (This logic seems pretty deranged in retrospect—Nature doesn't care what you can't do when It's writing your project requirements—but I still see AIfolk trying it all the time.)  To hold myself to a precise standard meant putting in more work than I'd previously imagined I needed.  But it also meant tackling a problem that I would have dismissed as entirely impossible not too much earlier.

Even though individual problems in AI have seemed to become less intimidating over time, the total mountain-to-be-climbed has increased in height—just like conventional wisdom says is supposed to happen—as problems got taken off the \"impossible\" list and put on the \"to do\" list.

I started to understand what was happening—and what \"Persevere!\" really meant—at the point where I noticed other AIfolk doing the same thing: saying \"Impossible!\" on problems that seemed eminently solvable—relatively more straightforward, as such things go.  But they were things that would have seemed vastly more intimidating at the point when I first approached the problem.

And I realized that the word \"impossible\" had two usages:

1)  Mathematical proof of impossibility conditional on specified axioms;

2)  \"I can't see any way to do that.\"

Needless to say, all my own uses of the word \"impossible\" had been of the second type.

Any time you don't understand a domain, many problems in that domain will seem impossible because when you query your brain for a solution pathway, it will return null.  But there are only mysterious questions, never mysterious answers.  If you spend a year or two working on the domain, then, if you don't get stuck in any blind alleys, and if you have the native ability level required to make progress, you will understand it better.  The apparent difficulty of problems may go way down.  It won't be as scary as it was to your novice-self.

And this is especially likely on the confusing problems that seem most intimidating.

Since we have some notion of the processes by which a star burns, we know that it's not easy to build a star from scratch.  Because we understand gears, we can prove that no collection of gears obeying known physics can form a perpetual motion machine.  These are not good problems on which to practice doing the impossible.

When you're confused about a domain, problems in it will feel very intimidating and mysterious, and a query to your brain will produce a count of zero solutions.  But you don't know how much work will be left when the confusion clears.  Dissolving the confusion may itself be a very difficult challenge, of course.  But the word \"impossible\" should hardly be used in that connection.  Confusion exists in the map, not in the territory.

So if you spend a few years working on an impossible problem, and you manage to avoid or climb out of blind alleys, and your native ability is high enough to make progress, then, by golly, after a few years it may not seem so impossible after all.

But if something seems impossible, you won't try.

Now that's a vicious cycle.

If I hadn't been in a sufficiently driven frame of mind that \"forty years and a Manhattan Project\" just meant we should get started earlier, I wouldn't have tried.  I wouldn't have stuck to the problem.  And I wouldn't have gotten a chance to become less intimidated.

I'm not ordinarily a fan of the theory that opposing biases can cancel each other out, but sometimes it happens by luck.  If I'd seen that whole mountain at the start—if I'd realized at the start that the problem was not to build a seed capable of improving itself, but to produce a provably correct Friendly AI—then I probably would have burst into flames.

Even so, part of understanding those above-average scientists who constitute the bulk of AGI researchers, is realizing that they are not driven to take on a nearly impossible problem even if it takes them 40 years.  By and large, they are there because they have found the Key to AI that will let them solve the problem without such tremendous difficulty, in just five years.

Richard Hamming used to go around asking his fellow scientists two questions:  \"What are the important problems in your field?\", and, \"Why aren't you working on them?\"

Often the important problems look Big, Scary, and Intimidating.  They don't promise 10 publications a year.  They don't promise any progress at all.  You might not get any reward after working on them for a year, or five years, or ten years.

And not uncommonly, the most important problems in your field are impossible.  That's why you don't see more philosophers working on reductionist decompositions of consciousness.

Trying to do the impossible is definitely not for everyone.  Exceptional talent is only the ante to sit down at the table.  The chips are the years of your life.  If wagering those chips and losing seems like an unbearable possibility to you, then go do something else.  Seriously.  Because you can lose.

I'm not going to say anything like, \"Everyone should do something impossible at least once in their lifetimes, because it teaches an important lesson.\"  Most of the people all of the time, and all of the people most of the time, should stick to the possible.

Never give up?  Don't be ridiculous.  Doing the impossible should be reserved for very special occasions.  Learning when to lose hope is an important skill in life.

But if there's something you can imagine that's even worse than wasting your life, if there's something you want that's more important than thirty chips, or if there are scarier things than a life of inconvenience, then you may have cause to attempt the impossible.

There's a good deal to be said for persevering through difficulties; but one of the things that must be said of it, is that it does keep things difficult. If you can't handle that, stay away!  There are easier ways to obtain glamor and respect.  I don't want anyone to read this and needlessly plunge headlong into a life of permanent difficulty.

But to conclude:  The \"perseverance\" that is required to work on important problems has a component beyond working 14 hours a day.

It's strange, the pattern of what we notice and don't notice about ourselves.  This selectivity isn't always about inflating your self-image.  Sometimes it's just about ordinary salience.

To keep working was a constant struggle for me, so it was salient:  I noticed that I couldn't work for 14 solid hours a day.  It didn't occur to me that \"perseverance\" might also apply at a timescale of seconds or years.  Not until I saw people who instantly declared \"impossible\" anything they didn't want to try, or saw how reluctant they were to take on work that looked like it might take a couple of decades instead of \"five years\".

That was when I realized that \"perseverance\" applied at multiple time scales.  On the timescale of seconds, perseverance is to \"not give up instantly at the very first sign of difficulty\".  On the timescale of years, perseverance is to \"keep working on an insanely difficult problem even though it's inconvenient and you could be getting higher personal rewards elsewhere\".

To do things that are very difficult or \"impossible\",

First you have to not run away.  That takes seconds.

Then you have to work.  That takes hours.

Then you have to stick at it.  That takes years.

Of these, I had to learn to do the first reliably instead of sporadically; the second is still a constant struggle for me; and the third comes naturally.

" } }, { "_id": "PhMNQPojinRMuwikC", "title": "Bay Area Meetup for Singularity Summit", "pageUrl": "https://www.lesswrong.com/posts/PhMNQPojinRMuwikC/bay-area-meetup-for-singularity-summit", "postedAt": "2008-10-06T12:50:04.000Z", "baseScore": 0, "voteCount": 2, "commentCount": 46, "url": null, "contents": { "documentId": "PhMNQPojinRMuwikC", "html": "

Posted on behalf of Mike Howard:

This is a call for preferences on the proposed Bay Area meetup to coincide with the Singularity Summit on 24-25 October. Not just for Singularitarians, all aspiring rationalists are welcome. From the replies so far it's likely to be in San Jose.

Eliezer, myself and probably most Summit attendees would really rather avoid the night between the Friday Workshop and Saturday Summit, so maybe either Saturday evening or sometime Thursday or Sunday?

Please comment below or email me (cursor_loop 4t yahoo p0int com) if you might want to come, and if you have any preferences such as when and where you can come, when and where you'd prefer to come, and any recommendations for a particular place to go.  (Comments preferred to emails.) We need to pick a date ASAP before everyone books travel.

" } }, { "_id": "Ti3Z7eZtud32LhGZT", "title": "My Bayesian Enlightenment", "pageUrl": "https://www.lesswrong.com/posts/Ti3Z7eZtud32LhGZT/my-bayesian-enlightenment", "postedAt": "2008-10-05T16:45:35.000Z", "baseScore": 72, "voteCount": 57, "commentCount": 66, "url": null, "contents": { "documentId": "Ti3Z7eZtud32LhGZT", "html": "

I remember (dimly, as human memories go) the first time I self-identified as a \"Bayesian\".  Someone had just asked a malformed version of an old probability puzzle, saying:

If I meet a mathematician on the street, and she says, \"I have two children, and at least one of them is a boy,\" what is the probability that they are both boys?

In the correct version of this story, the mathematician says \"I have two children\", and you ask, \"Is at least one a boy?\", and she answers \"Yes\".  Then the probability is 1/3 that they are both boys.

But in the malformed version of the story—as I pointed out—one would common-sensically reason:

If the mathematician has one boy and one girl, then my prior probability for her saying 'at least one of them is a boy' is 1/2 and my prior probability for her saying 'at least one of them is a girl' is 1/2.  There's no reason to believe, a priori, that the mathematician will only mention a girl if there is no possible alternative.

So I pointed this out, and worked the answer using Bayes's Rule, arriving at a probability of 1/2 that the children were both boys.  I'm not sure whether or not I knew, at this point, that Bayes's rule was called that, but it's what I used.
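
To make the arithmetic explicit, here is a minimal sketch of that calculation in Python (the configuration labels and variable names are mine; the even-odds assumption about which sex gets mentioned is exactly the one stated in the reasoning above):

    from fractions import Fraction as F

    # Four equally likely sibling configurations.
    priors = {'BB': F(1, 4), 'BG': F(1, 4), 'GB': F(1, 4), 'GG': F(1, 4)}
    # P(she volunteers 'at least one of them is a boy' | configuration):
    # certain for two boys, impossible for two girls, even odds for one of each.
    says_boy = {'BB': F(1), 'BG': F(1, 2), 'GB': F(1, 2), 'GG': F(0)}

    evidence = sum(priors[c] * says_boy[c] for c in priors)  # P(statement) = 1/2
    posterior = priors['BB'] * says_boy['BB'] / evidence     # Bayes's Rule
    print(posterior)  # prints 1/2

If you instead set says_boy to 1 for every configuration containing a boy (conditioning on the statement as though it answered a question you had asked), the same code returns 1/3, matching the correct version of the story and the orthodox answer to the malformed one.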

And lo, someone said to me, \"Well, what you just gave is the Bayesian answer, but in orthodox statistics the answer is 1/3.  We just exclude the possibilities that are ruled out, and count the ones that are left, without trying to guess the probability that the mathematician will say this or that, since we have no way of really knowing that probability—it's too subjective.\"

I responded—note that this was completely spontaneous—\"What on Earth do you mean?  You can't avoid assigning a probability to the mathematician making one statement or another.  You're just assuming the probability is 1, and that's unjustified.\"

To which the one replied, \"Yes, that's what the Bayesians say.  But frequentists don't believe that.\"

And I said, astounded: \"How can there possibly be such a thing as non-Bayesian statistics?\"


That was when I discovered that I was of the type called 'Bayesian'.  As far as I can tell, I was born that way.  My mathematical intuitions were such that everything Bayesians said seemed perfectly straightforward and simple, the obvious way I would do it myself; whereas the things frequentists said sounded like the elaborate, warped, mad blasphemy of dreaming Cthulhu.  I didn't choose to become a Bayesian any more than fishes choose to breathe water.

But this is not what I refer to as my \"Bayesian enlightenment\".  The first time I heard of \"Bayesianism\", I marked it off as obvious; I didn't go much further in than Bayes's rule itself.  At that time I still thought of probability theory as a tool rather than a law.  I didn't think there were mathematical laws of intelligence (my best and worst mistake).  Like nearly all AGI wannabes, Eliezer2001 thought in terms of techniques, methods, algorithms, building up a toolbox full of cool things he could do; he searched for tools, not understanding.  Bayes's Rule was a really neat tool, applicable in a surprising number of cases.

Then there was my initiation into heuristics and biases.  It started when I ran across a webpage that had been transduced from a Powerpoint intro to behavioral economics.  It mentioned some of the results of heuristics and biases, in passing, without any references.  I was so startled that I emailed the author to ask if this was actually a real experiment, or just anecdotal.  He sent me back a scan of Tversky and Kahneman's 1973 paper.

Embarrassing to say, my story doesn't really start there.  I put it on my list of things to look into.  I knew that there was an edited volume called \"Judgment Under Uncertainty: Heuristics and Biases\" but I'd never seen it.  At this time, I figured that if it wasn't online, I would just try to get along without it.  I had so many other things on my reading stack, and no easy access to a university library.  I think I must have mentioned this on a mailing list, because Emil Gilliam emailed me to tell me that he'd read Judgment Under Uncertainty and was annoyed by my online-only theory, so he bought me the book.

His action here should probably be regarded as scoring a fair number of points.

But this, too, is not what I refer to as my \"Bayesian enlightenment\".  It was an important step toward realizing the inadequacy of my Traditional Rationality skillz—that there was so much more out there, all this new science, beyond just doing what Richard Feynman told you to do.  And seeing the heuristics-and-biases program holding up Bayes as the gold standard helped move my thinking forward—but not all the way there.

Memory is a fragile thing, and mine seems to have become more fragile than most, since I learned how memories are recreated with each recollection—the science of how fragile they are.  Do other people really have better memories, or do they just trust the details their mind makes up, while really not remembering any more than I do?  My guess is that other people do have better memories for certain things.  I find structured, scientific knowledge easy enough to remember; but the disconnected chaos of everyday life fades very quickly for me.

I know why certain things happened in my life—that's causal structure I can remember.  But sometimes it's hard to recall even in what order certain events happened to me, let alone in what year.

I'm not sure if I read E. T. Jaynes's Probability Theory: The Logic of Science before or after the day when I realized the magnitude of my own folly, and understood that I was facing an adult problem.

But it was PT:TLOS that did the trick.  Here was probability theory, laid out not as a clever tool, but as The Rules, inviolable on pain of paradox.  If you tried to approximate The Rules because they were too computationally expensive to use directly, then, no matter how necessary that compromise might be, you would still end up doing less than optimal.  Jaynes would do his calculations different ways to show that the same answer always arose when you used legitimate methods; and he would display different answers that others had arrived at, and trace down the illegitimate step.  Paradoxes could not coexist with his precision.  Not an answer, but the answer.

And so—having looked back on my mistakes, and all the an-answers that had led me into paradox and dismay—it occurred to me that here was the level above mine.

I could no longer visualize trying to build an AI based on vague answers—like the an-answers I had come up with before—and surviving the challenge.

I looked at the AGI wannabes with whom I had tried to argue Friendly AI, and the various dreams of Friendliness they had.  (Often formulated spontaneously in response to my asking the question!)  Like frequentist statistical methods, no two of them agreed with each other.  Having actually studied the issue full-time for some years, I knew something about the problems their hopeful plans would run into.  And I saw that if you said, \"I don't see why this would fail,\" the \"don't know\" was just a reflection of your own ignorance.  I could see that if I held myself to a similar standard of \"that seems like a good idea\", I would also be doomed.  (Much like a frequentist inventing amazing new statistical calculations that seemed like good ideas.)

But if you can't do that which seems like a good idea—if you can't do what you don't imagine failing—then what can you do?

It seemed to me that it would take something like the Jaynes-level—not, here's my bright idea, but rather, here's the only correct way you can do this (and why)—to tackle an adult problem and survive.  If I achieved the same level of mastery of my own subject, as Jaynes had achieved of probability theory, then it was at least imaginable that I could try to build a Friendly AI and survive the experience.

Through my mind flashed the passage:

Do nothing because it is righteous, or praiseworthy, or noble, to do so; do nothing because it seems good to do so; do only that which you must do, and which you cannot do in any other way.

Doing what it seemed good to do, had only led me astray.

So I called a full stop.

And I decided that, from then on, I would follow the strategy that could have saved me if I had followed it years ago:  Hold my FAI designs to the higher standard of not doing that which seemed like a good idea, but only that which I understood on a sufficiently deep level to see that I could not do it in any other way.

All my old theories into which I had invested so much, did not meet this standard; and were not close to this standard; and weren't even on a track leading to this standard; so I threw them out the window.

I took up the study of probability theory and decision theory, looking to extend them to embrace such things as reflectivity and self-modification.

If I recall correctly, I had already, by this point, started to see cognition as manifesting Bayes-structure, which is also a major part of what I refer to as my Bayesian enlightenment—but of this I have already spoken.  And there was also my naturalistic awakening, of which I have already spoken.  And my realization that Traditional Rationality was not strict enough, so that in matters of human rationality I began taking more inspiration from probability theory and cognitive psychology.

But if you add up all these things together, then that, more or less, is the story of my Bayesian enlightenment.

Life rarely has neat boundaries.  The story continues onward.

It was while studying Judea Pearl, for example, that I realized that precision can save you time.  I'd put some thought into nonmonotonic logics myself, before then—back when I was still in my \"searching for neat tools and algorithms\" mode.  Reading Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, I could imagine how much time I would have wasted on ad-hoc systems and special cases, if I hadn't known that key.  \"Do only that which you must do, and which you cannot do in any other way\", translates into a time-savings measured, not in the rescue of wasted months, but in the rescue of wasted careers.

And so I realized that it was only by holding myself to this higher standard of precision that I had started to really think at all about quite a number of important issues.  To say a thing with precision is difficult—it is not at all the same thing as saying a thing formally, or inventing a new logic to throw at the problem.  Many shy away from the inconvenience, because human beings are lazy, and so they say, \"It is impossible\" or \"It will take too long\", even though they never really tried for five minutes.  But if you don't hold yourself to that inconveniently high standard, you'll let yourself get away with anything.  It's a hard problem just to find a standard high enough to make you actually start thinking!  It may seem taxing to hold yourself to the standard of mathematical proof where every single step has to be correct and one wrong step can carry you anywhere.  But otherwise you won't chase down those tiny notes of discord that turn out to, in fact, lead to whole new concerns you never thought of.

So these days I don't complain as much about the heroic burden of inconvenience that it takes to hold yourself to a precise standard.  It can save time, too; and in fact, it's more or less the ante to get yourself thinking about the problem at all.

And this too should be considered part of my \"Bayesian enlightenment\"—realizing that there were advantages in it, not just penalties.

But of course the story continues on.  Life is like that, at least the parts that I remember.

If there's one thing I've learned from this history, it's that saying \"Oops\" is something to look forward to.  Sure, the prospect of saying \"Oops\" in the future, means that the you of right now is a drooling imbecile, whose words your future self won't be able to read because of all the wincing.  But saying \"Oops\" in the future also means that, in the future, you'll acquire new Jedi powers that your present self doesn't dream exist.  It makes you feel embarrassed, but also alive.  Realizing that your younger self was a complete moron means that even though you're already in your twenties, you haven't yet gone over your peak.  So here's to hoping that my future self realizes I'm a drooling imbecile:  I may plan to solve my problems with my present abilities, but extra Jedi powers sure would come in handy.

That scream of horror and embarrassment is the sound that rationalists make when they level up.  Sometimes I worry that I'm not leveling up as fast as I used to, and I don't know if it's because I'm finally getting the hang of things, or because the neurons in my brain are slowly dying.

Yours, Eliezer2008.

" } }, { "_id": "sYgv4eYH82JEsTD34", "title": "Beyond the Reach of God", "pageUrl": "https://www.lesswrong.com/posts/sYgv4eYH82JEsTD34/beyond-the-reach-of-god", "postedAt": "2008-10-04T15:42:57.000Z", "baseScore": 267, "voteCount": 197, "commentCount": 281, "url": null, "contents": { "documentId": "sYgv4eYH82JEsTD34", "html": "

Today's post is a tad gloomier than usual, as I measure such things.  It deals with a thought experiment I invented to smash my own optimism, after I realized that optimism had misled me.  Those readers sympathetic to arguments like, \"It's important to keep our biases because they help us stay happy,\" should consider not reading.  (Unless they have something to protect, including their own life.)

So!  Looking back on the magnitude of my own folly, I realized that at the root of it had been a disbelief in the Future's vulnerability—a reluctance to accept that things could really turn out wrong.  Not as the result of any explicit propositional verbal belief.  More like something inside that persisted in believing, even in the face of adversity, that everything would be all right in the end.

Some would account this a virtue (zettai daijobu da yo), and others would say that it's a thing necessary for mental health.

But we don't live in that world.  We live in the world beyond the reach of God.


It's been a long, long time since I believed in God.  Growing up in an Orthodox Jewish family, I can recall the last remembered time I asked God for something, though I don't remember how old I was.  I was putting in some request on behalf of the next-door-neighboring boy, I forget what exactly—something along the lines of, \"I hope things turn out all right for him,\" or maybe \"I hope he becomes Jewish.\"

I remember what it was like to have some higher authority to appeal to, to take care of things I couldn't handle myself.  I didn't think of it as \"warm\", because I had no alternative to compare it to.  I just took it for granted.

Still I recall, though only from distant childhood, what it's like to live in the conceptually impossible possible world where God exists.  Really exists, in the way that children and rationalists take all their beliefs at face value.

In the world where God exists, does God intervene to optimize everything?  Regardless of what rabbis assert about the fundamental nature of reality, the take-it-seriously operational answer to this question is obviously \"No\".  You can't ask God to bring you a lemonade from the refrigerator instead of getting one yourself.  When I believed in God after the serious fashion of a child, so very long ago, I didn't believe that.

Postulating that particular divine inaction doesn't provoke a full-blown theological crisis.  If you said to me, \"I have constructed a benevolent superintelligent nanotech-user\", and I said \"Give me a banana,\" and no banana appeared, this would not yet disprove your statement.  Human parents don't always do everything their children ask.  There are some decent fun-theoretic arguments—I even believe them myself—against the idea that the best kind of help you can offer someone, is to always immediately give them everything they want.  I don't think that eudaimonia is formulating goals and having them instantly fulfilled; I don't want to become a simple wanting-thing that never has to plan or act or think.

So it's not necessarily an attempt to avoid falsification, to say that God does not grant all prayers.  Even a Friendly AI might not respond to every request.

But clearly, there exists some threshold of horror awful enough that God will intervene.  I remember that being true, when I believed after the fashion of a child.

The God who does not intervene at all, no matter how bad things get—that's an obvious attempt to avoid falsification, to protect a belief-in-belief.  Sufficiently young children don't have the deep-down knowledge that God doesn't really exist.  They really expect to see a dragon in their garage.  They have no reason to imagine a loving God who never acts.  Where exactly is the boundary of sufficient awfulness?  Even a child can imagine arguing over the precise threshold.  But of course God will draw the line somewhere.  Few indeed are the loving parents who, desiring their child to grow up strong and self-reliant, would let their toddler be run over by a car.

The obvious example of a horror so great that God cannot tolerate it, is death—true death, mind-annihilation.  I don't think that even Buddhism allows that.  So long as there is a God in the classic sense—full-blown, ontologically fundamental, the God—we can rest assured that no sufficiently awful event will ever, ever happen.  There is no soul anywhere that need fear true annihilation; God will prevent it.

What if you build your own simulated universe?  The classic example of a simulated universe is Conway's Game of Life.  I do urge you to investigate Life if you've never played it—it's important for comprehending the notion of \"physical law\".  Conway's Life has been proven Turing-complete, so it would be possible to build a sentient being in the Life universe, though it might be rather fragile and awkward.  Other cellular automata would make it simpler.
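
For concreteness, a single generation of Life fits in a few lines of Python (a minimal sketch; representing the unbounded grid as a set of live-cell coordinates is my own illustrative choice, not the only way to do it):

    from collections import Counter

    def life_step(live):
        # Count the live neighbors of every cell adjacent to a live cell.
        counts = Counter(
            (x + dx, y + dy)
            for (x, y) in live
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Exactly three live neighbors: alive next step.  Exactly two:
        # unchanged, hence alive only if already alive.  Everything else dies.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # travels diagonally forever
    print(life_step(glider))

The entire \"physics\" is the one return expression; everything that ever happens in such a universe, fair or horrifying, follows from iterating it.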

Could you, by creating a simulated universe, escape the reach of God?  Could you simulate a Game of Life containing sentient entities, and torture the beings therein?  But if God is watching everywhere, then trying to build an unfair Life just results in the God stepping in to modify your computer's transistors.  If the physics you set up in your computer program calls for a sentient Life-entity to be endlessly tortured for no particular reason, the God will intervene.  God being omnipresent, there is no refuge anywhere for true horror:  Life is fair.

But suppose that instead you ask the question:

Given such-and-such initial conditions, and given such-and-such cellular automaton rules, what would be the mathematical result?

Not even God can modify the answer to this question, unless you believe that God can implement logical impossibilities.  Even as a very young child, I don't remember believing that.  (And why would you need to believe it, if God can modify anything that actually exists?)

What does Life look like, in this imaginary world where every step follows only from its immediate predecessor?  Where things only ever happen, or don't happen, because of the cellular automaton rules?  Where the initial conditions and rules don't describe any God that checks over each state?  What does it look like, the world beyond the reach of God?

That world wouldn't be fair.  If the initial state contained the seeds of something that could self-replicate, natural selection might or might not take place, and complex life might or might not evolve, and that life might or might not become sentient, with no God to guide the evolution.  That world might evolve the equivalent of conscious cows, or conscious dolphins, that lacked hands to improve their condition; maybe they would be eaten by conscious wolves who never thought that they were doing wrong, or cared.

If in a vast plethora of worlds, something like humans evolved, then they would suffer from diseases—not to teach them any lessons, but only because viruses happened to evolve as well, under the cellular automaton rules.

If the people of that world are happy, or unhappy, the causes of their happiness or unhappiness may have nothing to do with good or bad choices they made.  Nothing to do with free will or lessons learned.  In the what-if world where every step follows only from the cellular automaton rules, the equivalent of Genghis Khan can murder a million people, and laugh, and be rich, and never be punished, and live his life much happier than the average.  Who prevents it?  God would prevent it from ever actually happening, of course; He would at the very least visit some shade of gloom in the Khan's heart.  But in the mathematical answer to the question What if? there is no God in the axioms.  So if the cellular automaton rules say that the Khan is happy, that, simply, is the whole and only answer to the what-if question.  There is nothing, absolutely nothing, to prevent it.

And if the Khan tortures people horribly to death over the course of days, for his own amusement perhaps?  They will call out for help, perhaps imagining a God.  And if you really wrote that cellular automaton, God would intervene in your program, of course.  But in the what-if question, what the cellular automaton would do under the mathematical rules, there isn't any God in the system.  Since the physical laws contain no specification of a utility function—in particular, no prohibition against torture—the victims will be saved only if the right cells happen to be 0 or 1.  And it's not likely that anyone will defy the Khan; if they did, someone would strike them with a sword, and the sword would disrupt their organs and they would die, and that would be the end of that.  So the victims die, screaming, and no one helps them; that is the answer to the what-if question.

Could the victims be completely innocent?  Why not, in the what-if world?  If you look at the rules for Conway's Game of Life (which is Turing-complete, so we can embed arbitrary computable physics in there), then the rules are really very simple.  Cells with exactly three living neighbors become (or stay) alive; cells with exactly two living neighbors stay the same; all other cells die.  There isn't anything in there about only innocent people not being horribly tortured for indefinite periods.

Is this world starting to sound familiar?

Belief in a fair universe often manifests in more subtle ways than thinking that horrors should be outright prohibited:  Would the twentieth century have gone differently, if Klara Pölzl and Alois Hitler had made love one hour earlier, and a different sperm fertilized the egg, on the night that Adolf Hitler was conceived?

For so many lives and so much loss to turn on a single event, seems disproportionate.  The Divine Plan ought to make more sense than that.  You can believe in a Divine Plan without believing in God—Karl Marx surely did.  You shouldn't have millions of lives depending on a casual choice, an hour's timing, the speed of a microscopic flagellum.  It ought not to be allowed.  It's too disproportionate.  Therefore, if Adolf Hitler had been able to go to high school and become an architect, there would have been someone else to take his role, and World War II would have happened the same as before.

But in the world beyond the reach of God, there isn't any clause in the physical axioms which says \"things have to make sense\" or \"big effects need big causes\" or \"history runs on reasons too important to be so fragile\".  There is no God to impose that order, which is so severely violated by having the lives and deaths of millions depend on one small molecular event.

The point of the thought experiment is to lay out the God-universe and the Nature-universe side by side, so that we can recognize what kind of thinking belongs to the God-universe.  Many who are atheists, still think as if certain things are not allowed.  They would lay out arguments for why World War II was inevitable and would have happened in more or less the same way, even if Hitler had become an architect.  But in sober historical fact, this is an unreasonable belief; I chose the example of World War II because from my reading, it seems that events were mostly driven by Hitler's personality, often in defiance of his generals and advisors.  There is no particular empirical justification that I happen to have heard of, for doubting this.  The main reason to doubt would be refusal to accept that the universe could make so little sense—that horrible things could happen so lightly, for no more reason than a roll of the dice.

But why not?  What prohibits it?

In the God-universe, God prohibits it.  To recognize this is to recognize that we don't live in that universe.  We live in the what-if universe beyond the reach of God, driven by the mathematical laws and nothing else.  Whatever physics says will happen, will happen.  Absolutely anything, good or bad, will happen.  And there is nothing in the laws of physics to lift this rule even for the really extreme cases, where you might expect Nature to be a little more reasonable.


Reading William Shirer's The Rise and Fall of the Third Reich, listening to him describe the disbelief that he and others felt upon discovering the full scope of Nazi atrocities, I thought of what a strange thing it was, to read all that, and know, already, that there wasn't a single protection against it.  To just read through the whole book and accept it; horrified, but not at all disbelieving, because I'd already understood what kind of world I lived in.


Once upon a time, I believed that the extinction of humanity was not allowed.  And others who call themselves rationalists, may yet have things they trust.  They might be called \"positive-sum games\", or \"democracy\", or \"technology\", but they are sacred.  The mark of this sacredness is that the trustworthy thing can't lead to anything really bad; or they can't be permanently defaced, at least not without a compensatory silver lining.  In that sense they can be trusted, even if a few bad things happen here and there.


The unfolding history of Earth can't ever turn from its positive-sum trend to a negative-sum trend; that is not allowed.  Democracies—modern liberal democracies, anyway—won't ever legalize torture.  Technology has done so much good up until now, that there can't possibly be a Black Swan technology that breaks the trend and does more harm than all the good up until this point.


There are all sorts of clever arguments why such things can't possibly happen.  But the source of these arguments is a much deeper belief that such things are not allowed.  Yet who prohibits?  Who prevents it from happening?  If you can't visualize at least one lawful universe where physics says that such dreadful things happen—and so they do happen, there being nowhere to appeal the verdict—then you aren't yet ready to argue probabilities.


Could it really be that sentient beings have died absolutely for thousands or millions of years, with no soul and no afterlife—and not as part of any grand plan of Nature—not to teach any great lesson about the meaningfulness or meaninglessness of life—not even to teach any profound lesson about what is impossible—so that a trick as simple and stupid-sounding as vitrifying people in liquid nitrogen can save them from total annihilation—and a 10-second rejection of the silly idea can destroy someone's soul?  Can it be that a computer programmer who signs a few papers and buys a life-insurance policy continues into the far future, while Einstein rots in a grave?  We can be sure of one thing:  God wouldn't allow it.  Anything that ridiculous and disproportionate would be ruled out.  It would make a mockery of the Divine Plan—a mockery of the strong reasons why things must be the way they are.


You can have secular rationalizations for things being not allowed.  So it helps to imagine that there is a God, benevolent as you understand goodness—a God who enforces throughout Reality a minimum of fairness and justice—whose plans make sense and depend proportionally on people's choices—who will never permit absolute horror—who does not always intervene, but who at least prohibits universes wrenched completely off their track... to imagine all this, but also imagine that you, yourself, live in a what-if world of pure mathematics—a world beyond the reach of God, an utterly unprotected world where anything at all can happen.


If there's any reader still reading this, who thinks that being happy counts for more than anything in life, then maybe they shouldn't spend much time pondering the unprotectedness of their existence.  Maybe think of it just long enough to sign themselves and their family up for cryonics, and/or write a check to an existential-risk-mitigation agency now and then.  And wear a seatbelt and get health insurance and all those other dreary necessary things that can destroy your life if you miss that one step... but aside from that, if you want to be happy, meditating on the fragility of life isn't going to help.


But this post was written for those who have something to protect.


What can a twelfth-century peasant do to save themselves from annihilation?  Nothing.  Nature's little challenges aren't always fair.  When you run into a challenge that's too difficult, you suffer the penalty; when you run into a lethal penalty, you die.  That's how it is for people, and it isn't any different for planets.  Someone who wants to dance the deadly dance with Nature, does need to understand what they're up against:  Absolute, utter, exceptionless neutrality.


Knowing this won't always save you.  It wouldn't save a twelfth-century peasant, even if they knew.  If you think that a rationalist who fully understands the mess they're in, must surely be able to find a way out—then you trust rationality, enough said.


Some commenter is bound to castigate me for putting too dark a tone on all this, and in response they will list out all the reasons why it's lovely to live in a neutral universe.  Life is allowed to be a little dark, after all; but not darker than a certain point, unless there's a silver lining.


Still, because I don't want to create needless despair, I will say a few hopeful words at this point:


If humanity's future unfolds in the right way, we might be able to make our future light cone fair(er).  We can't modify fundamental physics, but on a higher level of organization we could build some guardrails and put down some padding; organize the particles into a pattern that does some internal checks against catastrophe.  There's a lot of stuff out there that we can't touch—but it may help to consider everything that isn't in our future light cone, as being part of the \"generalized past\".  As if it had all already happened.  There's at least the prospect of defeating neutrality, in the only future we can touch—the only world that it accomplishes something to care about.


Someday, maybe, immature minds will reliably be sheltered.  Even if children go through the equivalent of not getting a lollipop, or even burning a finger, they won't ever be run over by cars.


And the adults wouldn't be in so much danger.  A superintelligence—a mind that could think a trillion thoughts without a misstep—would not be intimidated by a challenge where death is the price of a single failure.  The raw universe wouldn't seem so harsh, would be only another problem to be solved.


The problem is that building an adult is itself an adult challenge.  That's what I finally realized, years ago.


If there is a fair(er) universe, we have to get there starting from this world—the neutral world, the world of hard concrete with no padding, the world where challenges are not calibrated to your skills.


Not every child needs to stare Nature in the eyes.  Buckling a seatbelt, or writing a check, is not that complicated or deadly.  I don't say that every rationalist should meditate on neutrality.  I don't say that every rationalist should think all these unpleasant thoughts.  But anyone who plans on confronting an uncalibrated challenge of instant death, must not avoid them.


What does a child need to do—what rules should they follow, how should they behave—to solve an adult problem?

" } }, { "_id": "KgmYWd7oje7PtJBr7", "title": "Rationality Quotes 18", "pageUrl": "https://www.lesswrong.com/posts/KgmYWd7oje7PtJBr7/rationality-quotes-18", "postedAt": "2008-10-03T12:39:30.000Z", "baseScore": 10, "voteCount": 8, "commentCount": 12, "url": null, "contents": { "documentId": "KgmYWd7oje7PtJBr7", "html": "

Q.  Why does \"philosophy of consciousness/nature of reality\" seem to interest you so much?
A.  Take away consciousness and reality and there's not much left.
        -- Greg Egan, interview in Eidolon 15


\"But I am not an object. I am not a noun, I am an adjective.  I am the way matter behaves when it is organized in a John K Clark-ish way.  At the present time only one chunk of matter in the universe behaves that way; someday that could change.\"
        -- John K Clark


\"Would it be good advice, once copying becomes practical, to make lots of copies when good things happen, and none (or perhaps even killing off your own personal instance) on bad things?  Will this change the subjective probability of good events?\"
        -- Hal Finney


\"Waiting for the bus is a bad idea if you turn out to be the bus driver.\"
        -- Michael M. Butler on the Singularity


\"You are free.  Free of anything I do or say, and of any consequence. You may rest assured that all hurts are forgiven, all loveliness remembered, and treasured.  I am busy and content and loved.  I hope you are the same.  Bless you.\"
        -- Walter Jon Williams, \"Knight Moves\"


\"A man with one watch knows what time it is; a man with two watches is never sure.\"
         -- Lee Segall

" } }, { "_id": "y6T8GM6Pkf3bfy6k8", "title": "What’s worse than coercion?", "pageUrl": "https://www.lesswrong.com/posts/y6T8GM6Pkf3bfy6k8/what-s-worse-than-coercion", "postedAt": "2008-10-02T18:00:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "y6T8GM6Pkf3bfy6k8", "html": "

Desperation is coercive, or so it is said. The analogy between having a gun to your head and starvation at your door is a good one, as far as decision making is concerned.

So why do we always state this just before doing the last thing we would do to someone with a gun to their head? 
Our reasoning goes: 

  1. She’s only working for nothing/selling her kidneys/poisoning her water supply because she has no other option.
  2. Therefore she’s effectively being coerced.
  3. That’s terrible.
  4. We won’t allow it. We won’t buy her t-shirts or her kidneys.
  5. Now she can’t be coerced. Hoorah!

So we take away the ‘not getting shot in the head’ option. 
This would be fine if we also gave another choice. However, if we did that, the person would no longer be desperate, and thus no longer ‘coerced’ anyway (and so there would be no need to interfere). There should never be a need to prevent coercion by taking away choices.
In our analogy, there is a difference between preventing coercion by forcing someone to be shot and by giving them a safe exit. 

\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "fhEPnveFhb9tmd7Pe", "title": "Use the Try Harder, Luke", "pageUrl": "https://www.lesswrong.com/posts/fhEPnveFhb9tmd7Pe/use-the-try-harder-luke", "postedAt": "2008-10-02T10:52:43.000Z", "baseScore": 300, "voteCount": 246, "commentCount": 45, "url": null, "contents": { "documentId": "fhEPnveFhb9tmd7Pe", "html": "

\"When there's a will to fail, obstacles can be found.\"   —John McCarthy


I first watched Star Wars IV-VI when I was very young.  Seven, maybe, or nine?  So my memory was dim, but I recalled Luke Skywalker as being, you know, this cool Jedi guy.


Imagine my horror and disappointment, when I watched the saga again, years later, and discovered that Luke was a whiny teenager.


I mention this because yesterday, I looked up, on Youtube, the source of the Yoda quote:  \"Do, or do not.  There is no try.\"


Oh.  My.  Cthulhu.


Along with the Youtube clip in question, I present to you a little-known outtake from the scene, in which the director and writer, George Lucas, argues with Mark Hamill, who played Luke Skywalker:



Luke:  All right, I'll give it a try.
Yoda:  No!  Try not.  Do.  Or do not.  There is no try.

Luke raises his hand, and slowly, the X-wing begins to rise out of the water—Yoda's eyes widen—but then the ship sinks again.


Mark Hamill:  \"Um, George...\"


George Lucas:  \"What is it now?\"


Mark:  \"So... according to the script, next I say, 'I can't.  It's too big'.\"


George:  \"That's right.\"


Mark:  \"Shouldn't Luke maybe give it another shot?\"


George:  \"No.  Luke gives up, and sits down next to Yoda—\"


Mark:  \"This is the hero who's going to take down the Empire?  Look, it was one thing when he was a whiny teenager at the beginning, but he's in Jedi training now.  Last movie he blew up the Death Star.  Luke should be showing a little backbone.\"


George:  \"No.  You give up.  And then Yoda lectures you for a while, and you say, 'You want the impossible'.  Can you remember that?\"


Mark:  \"Impossible?  What did he do, run a formal calculation to arrive at a mathematical proof?   The X-wing was already starting to rise out of the swamp!  That's the feasibility demonstration right there!  Luke loses it for a second and the ship sinks back—and now he says it's impossible?  Not to mention that Yoda, who's got literally eight hundred years of seniority in the field, just told him it should be doable—\"


George:  \"And then you walk away.\"


Mark:  \"It's his friggin' spaceship!  If he leaves it in the swamp, he's stuck on Dagobah for the rest of his miserable life!  He's not just going to walk away!  Look, let's just cut to the next scene with the words 'one month later' and Luke is still raggedly standing in front of the swamp, trying to raise his ship for the thousandth time—\"


George:  \"No.\"


Mark:  \"Fine!  We'll show a sunset and a sunrise, as he stands there with his arm out, straining, and then Luke says 'It's impossible'.  Though really, he ought to try again when he's fully rested—\"


George:  \"No.\"


Mark:  \"Five goddamned minutes!  Five goddamned minutes before he gives up!\"


George:  \"I am not halting the story for five minutes while the X-wing bobs in the swamp like a bathtub toy.\"


Mark:  \"For the love of sweet candied yams!  If a pathetic loser like this could master the Force, everyone in the galaxy would be using it!  People would become Jedi because it was easier than going to high school.\"


George:  \"Look, you're the actor.  Let me be the storyteller.  Just say your lines and try to mean them.\"


Mark:  \"The audience isn't going to buy it.\"


George:  \"Trust me, they will.\"


Mark:  \"They're going to get up and walk out of the theater.\"


George:  \"They're going to sit there and nod along and not notice anything out of the ordinary.  Look, you don't understand human nature.  People wouldn't try for five minutes before giving up if the fate of humanity were at stake.\"

" } }, { "_id": "WLJwTJ7uGPA5Qphbp", "title": "Trying to Try", "pageUrl": "https://www.lesswrong.com/posts/WLJwTJ7uGPA5Qphbp/trying-to-try", "postedAt": "2008-10-01T08:58:38.000Z", "baseScore": 223, "voteCount": 164, "commentCount": 58, "url": null, "contents": { "documentId": "WLJwTJ7uGPA5Qphbp", "html": "

\"No!  Try not!  Do, or do not.  There is no try.\"
        —Yoda


Years ago, I thought this was yet another example of Deep Wisdom that is actually quite stupid.  SUCCEED is not a primitive action.  You can't just decide to win by choosing hard enough.  There is never a plan that works with probability 1.


But Yoda was wiser than I first realized.


The first elementary technique of epistemology—it's not deep, but it's cheap—is to distinguish the quotation from the referent.  Talking about snow is not the same as talking about \"snow\".  When I use the word \"snow\", without quotes, I mean to talk about snow; and when I use the word \"\"snow\"\", with quotes, I mean to talk about the word \"snow\".  You have to enter a special mode, the quotation mode, to talk about your beliefs.  By default, we just talk about reality.


If someone says, \"I'm going to flip that switch\", then by default, they mean they're going to try to flip the switch.  They're going to build a plan that promises to lead, by the consequences of its actions, to the goal-state of a flipped switch; and then execute that plan.


No plan succeeds with infinite certainty.  So by default, when you talk about setting out to achieve a goal, you do not imply that your plan exactly and perfectly leads to only that possibility.  But when you say, \"I'm going to flip that switch\", you are trying only to flip the switch—not trying to achieve a 97.2% probability of flipping the switch.


So what does it mean when someone says, \"I'm going to try to flip that switch?\"



Well, colloquially, \"I'm going to flip the switch\" and \"I'm going to try to flip the switch\" mean more or less the same thing, except that the latter expresses the possibility of failure.  This is why I originally took offense at Yoda for seeming to deny the possibility.  But bear with me here.


Much of life's challenge consists of holding ourselves to a high enough standard.  I may speak more on this principle later, because it's a lens through which you can view many-but-not-all personal dilemmas—\"What standard am I holding myself to?  Is it high enough?\"


So if much of life's failure consists in holding yourself to too low a standard, you should be wary of demanding too little from yourself—setting goals that are too easy to fulfill.


Often, where succeeding at doing a thing is very hard, trying to do it is much easier.


Which is easier—to build a successful startup, or to try to build a successful startup?  To make a million dollars, or to try to make a million dollars?


So if \"I'm going to flip the switch\" means by default that you're going to try to flip the switch—that is, you're going to set up a plan that promises to lead to switch-flipped state, maybe not with probability 1, but with the highest probability you can manage—


—then \"I'm going to 'try to flip' the switch\" means that you're going to try to \"try to flip the switch\", that is, you're going to try to achieve the goal-state of \"having a plan that might flip the switch\".


Now, if this were a self-modifying AI we were talking about, the transformation we just performed ought to end up at a reflective equilibrium—the AI planning its planning operations.


But when we deal with humans, being satisfied with having a plan is not at all like being satisfied with success.  The part where the plan has to maximize your probability of succeeding, gets lost along the way.  It's far easier to convince ourselves that we are \"maximizing our probability of succeeding\", than it is to convince ourselves that we will succeed.


Almost any effort will serve to convince us that we have \"tried our hardest\", if trying our hardest is all we are trying to do.


\"You have been asking what you could do in the great events that are now stirring, and have found that you could do nothing. But that is because your suffering has caused you to phrase the question in the wrong way... Instead of asking what you could do, you ought to have been asking what needs to be done.\"
        —Steven Brust, The Paths of the Dead


When you ask, \"What can I do?\", you're trying to do your best.  What is your best?  It is whatever you can do without the slightest inconvenience.  It is whatever you can do with the money in your pocket, minus whatever you need for your accustomed lunch.  What you can do with those resources, may not give you very good odds of winning.  But it's the \"best you can do\", and so you've acted defensibly, right?


But what needs to be done?  Maybe what needs to be done requires three times your life savings, and you must produce it or fail.


So trying to have \"maximized your probability of success\"—as opposed to trying to succeed—is a far lesser barrier.  You can have \"maximized your probability of success\" using only the money in your pocket, so long as you don't demand actually winning.


Want to try to make a million dollars?  Buy a lottery ticket.  Your odds of winning may not be very good, but you did try, and trying was what you wanted.  In fact, you tried your best, since you only had one dollar left after buying lunch.  Maximizing the odds of goal achievement using available resources: is this not intelligence?


It's only when you want, above all else, to actually flip the switch—without quotation and without consolation prizes just for trying—that you will actually put in the effort to actually maximize the probability.


But if all you want is to \"maximize the probability of success using available resources\", then that's the easiest thing in the world to convince yourself you've done.  The very first plan you hit upon, will serve quite well as \"maximizing\"—if necessary, you can generate an inferior alternative to prove its optimality.  And any tiny resource that you care to put in, will be what is \"available\".  Remember to congratulate yourself on putting in 100% of it!


Don't try your best.  Win, or fail.  There is no best.

" } }, { "_id": "jmbpzbGcxTBNheuTo", "title": "Intrade and the Dow Drop", "pageUrl": "https://www.lesswrong.com/posts/jmbpzbGcxTBNheuTo/intrade-and-the-dow-drop", "postedAt": "2008-10-01T03:12:46.000Z", "baseScore": 4, "voteCount": 3, "commentCount": 13, "url": null, "contents": { "documentId": "jmbpzbGcxTBNheuTo", "html": "

With today's snapback, the Dow lost 777 and regained 485.


As of this evening, Intrade says the probability of a bailout bill passing by Oct 31st is 85%.


(777-485)/(1-.85) = 1,946.  So a bailout bill makes an expected difference of 2000 points on the Dow.
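
The same back-of-the-envelope arithmetic as a sketch in Python (the variable names are mine; the assumption, as in the post, is that the unrecovered part of the drop prices in the remaining chance that the bill fails):

    drop, recovery = 777, 485      # today's Dow fall and snapback
    p_pass = 0.85                  # Intrade's probability that the bailout passes
    unrecovered = drop - recovery  # 292 points still "missing"
    implied_effect = unrecovered / (1 - p_pass)
    print(int(implied_effect))     # -> 1946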


Of course this is a bogus calculation, but it's an interesting one.  Not overwhelmingly on-topic for OB, but it involves prediction markets and I didn't see anyone else pointing it out.  I hope the bailout fails decisively, so this calculation can be tested.

PS:  Bryan Caplan understands Bayes's Rule:  It's not possible for both A and ~A to be evidence in favor of B.  So which of the two possibilities, "unemployment stays under 8% following a bailout" and "unemployment goes over 8% following a bailout", is evidence for the proposition "the bailout was necessary to prevent economic catastrophe", and which is evidence against?  Take your stand now; afterward is too late for us to trust your reasoning.
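
Caplan's point is just conservation of expected evidence.  In notation that is mine rather than the post's: by the law of total probability,

    P(B) = P(B|A)·P(A) + P(B|~A)·P(~A)

so P(B) is a weighted average of P(B|A) and P(B|~A).  If one of the two conditionals lies above P(B), the other must lie below it; observing A can only shift belief toward B if observing ~A would shift it away.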

" } }, { "_id": "98cPZGwetyJsrXNdW", "title": "Awww, a Zebra", "pageUrl": "https://www.lesswrong.com/posts/98cPZGwetyJsrXNdW/awww-a-zebra", "postedAt": "2008-10-01T01:28:05.000Z", "baseScore": 31, "voteCount": 27, "commentCount": 53, "url": null, "contents": { "documentId": "98cPZGwetyJsrXNdW", "html": "

This image recently showed up on Flickr (original is nicer):


\"Zebra_4\"


With the caption:


\"Alas for those who turn their eyes from zebras and dream of dragons!  If we cannot learn to take joy in the merely real, our lives shall be empty indeed.\" —Eliezer S. Yudkowsky.


\"Awww!\", I said, and called over my girlfriend over to look.


\"Awww!\", she said, and then looked at me, and said,  \"I think you need to take your own advice!\"


Me:  \"But I'm looking at the zebra!\"
Her:  \"On a computer!\"
Me:  (Turns away, hides face.)
Her:  \"Have you ever even seen a zebra in real life?\"
Me:  \"Yes!  Yes, I have!  My parents took me to Lincoln Park Zoo!  ...man, I hated that place.\"


" } }, { "_id": "fLRPeXihRaiRo5dyX", "title": "The Magnitude of His Own Folly", "pageUrl": "https://www.lesswrong.com/posts/fLRPeXihRaiRo5dyX/the-magnitude-of-his-own-folly", "postedAt": "2008-09-30T11:31:24.000Z", "baseScore": 122, "voteCount": 87, "commentCount": 130, "url": null, "contents": { "documentId": "fLRPeXihRaiRo5dyX", "html": "

In the years before I met that would-be creator of Artificial General Intelligence (with a funded project) who happened to be a creationist, I would still try to argue with individual AGI wannabes.


In those days, I sort-of-succeeded in convincing one such fellow that, yes, you had to take Friendly AI into account, and no, you couldn't just find the right fitness metric for an evolutionary algorithm.  (Previously he had been very impressed with evolutionary algorithms.)


And the one said:  Oh, woe!  Oh, alas!  What a fool I've been!  Through my carelessness, I almost destroyed the world!  What a villain I once was!


Now, there's a trap I knew better than to fall into—


—at the point where, in late 2002, I looked back to Eliezer1997's AI proposals and realized what they really would have done, insofar as they were coherent enough to talk about what they \"really would have done\".


When I finally saw the magnitude of my own folly, everything fell into place at once.  The dam against realization cracked; and the unspoken doubts that had been accumulating behind it, crashed through all together.  There wasn't a prolonged period, or even a single moment that I remember, of wondering how I could have been so stupid.  I already knew how.


And I also knew, all at once, in the same moment of realization, that to say, I almost destroyed the world!, would have been too prideful.


It would have been too confirming of ego, too confirming of my own importance in the scheme of things, at a time when—I understood in the same moment of realization—my ego ought to be taking a major punch to the stomach.  I had been so much less than I needed to be; I had to take that punch in the stomach, not avert it.



And by the same token, I didn't fall into the conjugate trap of saying:  Oh, well, it's not as if I had code and was about to run it; I didn't really come close to destroying the world.  For that, too, would have minimized the force of the punch.  It wasn't really loaded?  I had proposed and intended to build the gun, and load the gun, and put the gun to my head and pull the trigger; and that was a bit too much self-destructiveness.


I didn't make a grand emotional drama out of it.  That would have wasted the force of the punch, averted it into mere tears.


I knew, in the same moment, what I had been carefully not-doing for the last six years.  I hadn't been updating.


And I knew I had to finally update.  To actually change what I planned to do, to change what I was doing now, to do something different instead.


I knew I had to stop.


Halt, melt, and catch fire.


Say, \"I'm not ready.\"  Say, \"I don't know how to do this yet.\"


These are terribly difficult words to say, in the field of AGI.  Both the lay audience and your fellow AGI researchers are interested in code, projects with programmers in play.  Failing that, they may give you some credit for saying, \"I'm ready to write code, just give me the funding.\"


Say, \"I'm not ready to write code,\" and your status drops like a depleted uranium balloon.


What distinguishes you, then, from six billion other people who don't know how to create Artificial General Intelligence?  If you don't have neat code (that does something other than be humanly intelligent, obviously; but at least it's code), or at minimum your own startup that's going to write code as soon as it gets funding—then who are you and what are you doing at our conference?


Maybe later I'll post on where this attitude comes from—the excluded middle between \"I know how to build AGI!\" and \"I'm working on narrow AI because I don't know how to build AGI\", the nonexistence of a concept for \"I am trying to get from an incomplete map of FAI to a complete map of FAI\".


But this attitude does exist, and so the loss of status associated with saying \"I'm not ready to write code\" is very great.  (If the one doubts this, let them name any other who simultaneously says \"I intend to build an Artificial General Intelligence\", \"Right now I can't build an AGI because I don't know X\", and \"I am currently trying to figure out X\".)


(And never mind AGIfolk who've already raised venture capital, promising returns in five years.) 


So there's a huge reluctance to say \"Stop\".  You can't just say, \"Oh, I'll swap back to figure-out-X mode\" because that mode doesn't exist.


Was there more to that reluctance than just loss of status, in my case?  Eliezer2001 might also have flinched away from slowing his perceived forward momentum into the Singularity, which was so right and so necessary...


But mostly, I think I flinched away from not being able to say, \"I'm ready to start coding.\"  Not just for fear of others' reactions, but because I'd been inculcated with the same attitude myself.


Above all, Eliezer2001 didn't say \"Stop\"—even after noticing the problem of Friendly AI—because I did not realize, on a gut level, that Nature was allowed to kill me.


\"Teenagers think they're immortal\", the proverb goes.  Obviously this isn't true in the literal sense that if you ask them, \"Are you indestructible?\" they will reply \"Yes, go ahead and try shooting me.\"  But perhaps wearing seat belts isn't deeply emotionally compelling for them, because the thought of their own death isn't quite real—they don't really believe it's allowed to happen.  It can happen in principle but it can't actually happen.


Personally, I always wore my seat belt.  As an individual, I understood that I could die.


But, having been raised in technophilia to treasure that one most precious thing, far more important than my own life, I once thought that the Future was indestructible.


Even when I acknowledged that nanotech could wipe out humanity, I still believed the Singularity was invulnerable.  That if humanity survived, the Singularity would happen, and it would be too smart to be corrupted or lost.


Even after that, when I acknowledged Friendly AI as a consideration, I didn't emotionally believe in the possibility of failure, any more than that teenager who doesn't wear their seat belt really believes that an automobile accident is really allowed to kill or cripple them.


It wasn't until my insight into optimization let me look back and see Eliezer1997 in plain light, that I realized that Nature was allowed to kill me.


\"The thought you cannot think controls you more than thoughts you speak aloud.\"  But we flinch away from only those fears that are real to us.


AGI researchers take very seriously the prospect of someone else solving the problem first.  They can imagine seeing the headlines in the paper saying that their own work has been upstaged.  They know that Nature is allowed to do that to them.  The ones who have started companies know that they are allowed to run out of venture capital.  That possibility is real to them, very real; it has a power of emotional compulsion over them.


I don't think that \"Oops\" followed by the thud of six billion bodies falling, at their own hands, is real to them on quite the same level.


It is unsafe to say what other people are thinking.  But it seems rather likely that when the one reacts to the prospect of Friendly AI by saying, \"If you delay development to work on safety, other projects that don't care at all about Friendly AI will beat you to the punch,\" the prospect of themselves making a mistake, followed by six billion thuds, is not really real to them; but the possibility of others beating them to the punch is deeply scary.


I, too, used to say things like that, before I understood that Nature was allowed to kill me.


In that moment of realization, my childhood technophilia finally broke.


I finally understood that even if you diligently followed the rules of science and were a nice person, Nature could still kill you.  I finally understood that even if you were the best project out of all available candidates, Nature could still kill you.


I understood that I was not being graded on a curve.  My gaze shook free of rivals, and I saw the sheer blank wall.


I looked back and I saw the careful arguments I had constructed, for why the wisest choice was to continue forward at full speed, just as I had planned to do before.  And I understood then that even if you constructed an argument showing that something was the best course of action, Nature was still allowed to say \"So what?\" and kill you.


I looked back and saw that I had claimed to take into account the risk of a fundamental mistake, that I had argued reasons to tolerate the risk of proceeding in the absence of full knowledge.


And I saw that the risk I wanted to tolerate would have killed me.  And I saw that this possibility had never been really real to me.  And I saw that even if you had wise and excellent arguments for taking a risk, the risk was still allowed to go ahead and kill you.  Actually kill you.


For it is only the action that matters, and not the reasons for doing anything.  If you build the gun and load the gun and put the gun to your head and pull the trigger, even with the cleverest of arguments for carrying out every step—then, bang.


I saw that only my own ignorance of the rules had enabled me to argue for going ahead without complete knowledge of the rules; for if you do not know the rules, you cannot model the penalty of ignorance.


I saw that others, still ignorant of the rules, were saying \"I will go ahead and do X\"; and that to the extent that X was a coherent proposal at all, I knew that would result in a bang; but they said, \"I do not know it cannot work\".   I would try to explain to them the smallness of the target in the search space, and they would say \"How can you be so sure I won't win the lottery?\", wielding their own ignorance as a bludgeon.


And so I realized that the only thing I could have done to save myself, in my previous state of ignorance, was to say:  \"I will not proceed until I know positively that the ground is safe.\"  And there are many clever arguments for why you should step on a piece of ground that you don't know to contain a landmine; but they all sound much less clever, after you look to the place that you proposed and intended to step, and see the bang.


I understood that you could do everything that you were supposed to do, and Nature was still allowed to kill you.  That was when my last trust broke.  And that was when my training as a rationalist began.

" } }, { "_id": "t8KmJGNrx95rvxmhY", "title": "Friedman's \"Prediction vs. Explanation\"", "pageUrl": "https://www.lesswrong.com/posts/t8KmJGNrx95rvxmhY/friedman-s-prediction-vs-explanation", "postedAt": "2008-09-29T06:15:34.000Z", "baseScore": 9, "voteCount": 8, "commentCount": 79, "url": null, "contents": { "documentId": "t8KmJGNrx95rvxmhY", "html": "

David D. Friedman asks:

We do ten experiments. A scientist observes the results, constructs a theory consistent with them, and uses it to predict the results of the next ten. We do them and the results fit his predictions. A second scientist now constructs a theory consistent with the results of all twenty experiments.


The two theories give different predictions for the next experiment. Which do we believe? Why?

One of the commenters links to Overcoming Bias, but as of 11PM on Sep 28th, David's blog's time, no one has given the exact answer that I would have given.  It's interesting that a question so basic has received so many answers.

" } }, { "_id": "9HGR5qatMGoz4GhKj", "title": "Above-Average AI Scientists", "pageUrl": "https://www.lesswrong.com/posts/9HGR5qatMGoz4GhKj/above-average-ai-scientists", "postedAt": "2008-09-28T11:04:11.000Z", "baseScore": 71, "voteCount": 60, "commentCount": 96, "url": null, "contents": { "documentId": "9HGR5qatMGoz4GhKj", "html": "

Followup to: The Level Above Mine, Competent Elites


(Those who didn't like the last two posts should definitely skip this one.)


I recall one fellow, who seemed like a nice person, and who was quite eager to get started on Friendly AI work, to whom I had trouble explaining that he didn't have a hope.  He said to me:


\"If someone with a Masters in chemistry isn't intelligent enough, then you're not going to have much luck finding someone to help you.\"


It's hard to distinguish the grades above your own.  And even if you're literally the best in the world, there are still electron orbitals above yours—they're just unoccupied.  Someone had to be \"the best physicist in the world\" during the time of Ancient Greece.  Would they have been able to visualize Newton?


At one of the first conferences organized around the tiny little subfield of Artificial General Intelligence, I met someone who was heading up a funded research project specifically declaring AGI as a goal, within a major corporation.  I believe he had people under him on his project.  He was probably paid at least three times as much as I was paid (at that time).  His academic credentials were superior to mine (what a surprise) and he had many more years of experience.  He had access to lots and lots of computing power.


And like nearly everyone in the field of AGI, he was rushing forward to write code immediately—not holding off and searching for a sufficiently precise theory to permit stable self-improvement.


In short, he was just the sort of fellow that...  Well, many people, when they hear about Friendly AI, say:  \"Oh, it doesn't matter what you do, because [someone like this guy] will create AI first.\"  He's the sort of person about whom journalists ask me, \"You say that this isn't the time to be talking about regulation, but don't we need laws to stop people like this from creating AI?\"



\"I suppose,\" you say, your voice heavy with irony, \"that you're about to tell us, that this person doesn't really have so much of an advantage over you as it might seem.  Because your theory—whenever you actually come up with a theory—is going to be so much better than his.  Or,\" your voice becoming even more ironic, \"that he's too mired in boring mainstream methodology—\"


No.  I'm about to tell you that I happened to be seated at the same table as this guy at lunch, and I made some kind of comment about evolutionary psychology, and he turned out to be...


...a creationist.


This was the point at which I really got, on a gut level, that there was no test you needed to pass in order to start your own AGI project.


One of the failure modes I've come to better understand in myself since observing it in others, is what I call, \"living in the should-universe\".  The universe where everything works the way it common-sensically ought to, as opposed to the actual is-universe we live in.  There's more than one way to live in the should-universe, and outright delusional optimism is only the least subtle.  Treating the should-universe as your point of departure—describing the real universe as the should-universe plus a diff—can also be dangerous.


Up until the moment when yonder AGI researcher explained to me that he didn't believe in evolution because that's not what the Bible said, I'd been living in the should-universe.  In the sense that I was organizing my understanding of other AGI researchers as should-plus-diff.  I saw them, not as themselves, not as their probable causal histories, but as their departures from what I thought they should be.


In the universe where everything works the way it common-sensically ought to, everything about the study of Artificial General Intelligence is driven by the one overwhelming fact of the indescribably huge effects: initial conditions and unfolding patterns whose consequences will resound for as long as causal chains continue out of Earth, until all the stars and galaxies in the night sky have burned down to cold iron, and maybe long afterward, or forever into infinity if the true laws of physics should happen to permit that.  To deliberately thrust your mortal brain onto that stage, as it plays out on ancient Earth the first root of life, is an act so far beyond \"audacity\" as to set the word on fire, an act which can only be excused by the terrifying knowledge that the empty skies offer no higher authority.


It had occurred to me well before this point, that most of those who proclaimed themselves to have AGI projects, were not only failing to be what an AGI researcher should be, but in fact, didn't seem to have any such dream to live up to.


But that was just my living in the should-universe.  It was the creationist who broke me of that.  My mind finally gave up on constructing the diff.


When Scott Aaronson was 12 years old, he: \"set myself the modest goal of writing a BASIC program that would pass the Turing Test by learning from experience and following Asimov's Three Laws of Robotics.  I coded up a really nice tokenizer and user interface, and only got stuck on the subroutine that was supposed to understand the user's question and output an intelligent, Three-Laws-obeying response.\"  It would be pointless to try and construct a diff between Aaronson12 and what an AGI researcher should be.  You've got to explain Aaronson12 in forward-extrapolation mode:  He thought it would be cool to make an AI and didn't quite understand why the problem was difficult.


It was yonder creationist who let me see AGI researchers for themselves, and not as departures from my ideal.


A creationist AGI researcher?  Why not?  Sure, you can't really be enough of an expert on thinking to build an AGI, or enough of an expert at thinking to find the truth amidst deep dark scientific chaos, while still being, in this day and age, a creationist.  But to think that his creationism is an anomaly, is should-universe thinking, as if desirable future outcomes could structure the present.  Most scientists have the meme that a scientist's religion doesn't have anything to do with their research. Someone who thinks that it would be cool to solve the \"human-level\" AI problem and create a little voice in a box that answers questions, and who dreams they have a solution, isn't going to stop and say:  \"Wait!  I'm a creationist!  I guess that would make it pretty silly for me to try and build an AGI.\"


The creationist is only an extreme example.  A much larger fraction of AGI wannabes would speak with reverence of the \"spiritual\" and the possibility of various fundamental mentals. If someone lacks the whole cognitive edifice of reducing mental events to nonmental constituents, the edifice that decisively indicts the entire supernatural, then of course they're not likely to be expert on cognition to the degree that would be required to synthesize true AGI.  But neither are they likely to have any particular idea that they're missing something.  They're just going with the flow of the memetic water in which they swim.  They've got friends who talk about spirituality, and it sounds pretty appealing to them.  They know that Artificial General Intelligence is a big important problem in their field, worth lots of applause if they can solve it.  They wouldn't see anything incongruous about an AGI researcher talking about the possibility of psychic powers or Buddhist reincarnation.  That's a separate matter, isn't it?


(Someone in the audience is bound to observe that Newton was a Christian.  I reply that Newton didn't have such a difficult problem, since he only had to invent first-year undergraduate stuff.  The two observations are around equally sensible; if you're going to be anachronistic, you should be anachronistic on both sides of the equation.)


But that's still all just should-universe thinking.


That's still just describing people in terms of what they aren't.


Real people are not formed of absences.  Only people who have an ideal can be described as a departure from it, the way that I see myself as a departure from what an Eliezer Yudkowsky should be.


The really striking fact about the researchers who show up at AGI conferences, is that they're so... I don't know how else to put it...


...ordinary.


Not at the intellectual level of the big mainstream names in Artificial Intelligence.  Not at the level of John McCarthy or Peter Norvig (both of whom I've met).


More like... around, say, the level of above-average scientists, which I yesterday compared to the level of partners at a non-big-name venture capital firm.  Some of whom might well be Christians, or even creationists if they don't work in evolutionary biology.


The attendees at AGI conferences aren't literally average mortals, or even average scientists.  The average attendee at an AGI conference is visibly one level up from the average attendee at that random mainstream AI conference I talked about yesterday.


Of course there are exceptions.  The last AGI conference I went to, I encountered one bright young fellow who was fast, intelligent, and spoke fluent Bayesian.  Admittedly, he didn't actually work in AGI as such.  He worked at a hedge fund.


No, seriously, there are exceptions.  Steve Omohundro is one example of someone who—well, I'm not exactly sure of his level, but I don't get any particular sense that he's below Peter Norvig or John McCarthy.


But even if you just poke around on Norvig or McCarthy's website, and you've achieved sufficient level yourself to discriminate what you see, you'll get a sense of a formidable mind.  Not in terms of accomplishments—that's not a fair comparison with someone younger or tackling a more difficult problem—but just in terms of the way they talk.  If you then look at the website of a typical AGI-seeker, even one heading up their own project, you won't get an equivalent sense of formidability.


Unfortunately, that kind of eyeball comparison does require that one be of sufficient level to distinguish those levels.  It's easy to sympathize with people who can't eyeball the difference:  If anyone with a PhD seems really bright to you, or any professor at a university is someone to respect, then you're not going to be able to eyeball the tiny academic subfield of AGI and determine that most of the inhabitants are above-average scientists for mainstream AI, but below the intellectual firepower of the top names in mainstream AI.


But why would that happen?  Wouldn't the AGI people be humanity's best and brightest, answering the greatest need?  Or at least those daring souls for whom mainstream AI was not enough, who sought to challenge their wits against the greatest reservoir of chaos left to modern science?


If you forget the should-universe, and think of the selection effect in the is-universe, it's not difficult to understand.  Today, AGI attracts people who fail to comprehend the difficulty of AGI.  Back in the earliest days, a bright mind like John McCarthy would tackle AGI because no one knew the problem was difficult.  In time and with regret, he realized he couldn't do it.  Today, someone on the level of Peter Norvig knows their own competencies, what they can do and what they can't; and they go on to achieve fame and fortune (and Research Directorship of Google) within mainstream AI.


And then...


Then there are the completely hopeless ordinary programmers who wander onto the AGI mailing list wanting to build a really big semantic net.


Or the postdocs moved by some (non-Singularity) dream of themselves presenting the first \"human-level\" AI to the world, who also dream an AI design, and can't let go of that.


Just normal people with no notion that it's wrong for an AGI researcher to be normal.


Indeed, like most normal people who don't spend their lives making a desperate effort to reach up toward an impossible ideal, they will be offended if you suggest to them that someone in their position needs to be a little less imperfect.


This misled the living daylights out of me when I was young, because I compared myself to other people who declared their intentions to build AGI, and ended up way too impressed with myself; when I should have been comparing myself to Peter Norvig, or reaching up toward E. T. Jaynes.  (For I did not then perceive the sheer, blank, towering wall of Nature.)


I don't mean to bash normal AGI researchers into the ground.  They are not evil.  They are not ill-intentioned.  They are not even dangerous, as individuals.  Only the mob of them is dangerous, that can learn from each other's partial successes and accumulate hacks as a community.


And that's why I'm discussing all this—because it is a fact without which it is not possible to understand the overall strategic situation in which humanity finds itself, the present state of the gameboard.  It is, for example, the reason why I don't panic when yet another AGI project announces they're going to have general intelligence in five years.  It also says that you can't necessarily extrapolate the FAI-theory comprehension of future researchers from present researchers, if a breakthrough occurs that repopulates the field with Norvig-class minds.


Even an average human engineer is at least six levels higher than the blind idiot god, natural selection, that managed to cough up the Artificial Intelligence called humans, by retaining its lucky successes and compounding them.  And the mob, if it retains its lucky successes and shares them, may also cough up an Artificial Intelligence, with around the same degree of precise control.  But it is only the collective that I worry about as dangerous—the individuals don't seem that formidable.


If you yourself speak fluent Bayesian, and you distinguish a person-concerned-with-AGI as speaking fluent Bayesian, then you should consider that person as excepted from this whole discussion.


Of course, among people who declare that they want to solve the AGI problem, the supermajority don't speak fluent Bayesian.


Why would they?  Most people don't.


" } }, { "_id": "CKpByWmsZ8WmpHtYa", "title": "Competent Elites", "pageUrl": "https://www.lesswrong.com/posts/CKpByWmsZ8WmpHtYa/competent-elites", "postedAt": "2008-09-27T00:07:24.000Z", "baseScore": 141, "voteCount": 130, "commentCount": 113, "url": null, "contents": { "documentId": "CKpByWmsZ8WmpHtYa", "html": "

Followup to: The Level Above Mine


(Anyone who didn't like yesterday's post should probably avoid this one.)


I remember what a shock it was to first meet Steve Jurvetson, of the venture capital firm Draper Fisher Jurvetson.


Steve Jurvetson talked fast and articulately, could follow long chains of reasoning, was familiar with a wide variety of technologies, and was happy to drag in analogies from outside sciences like biology—good ones, too.


I once saw Eric Drexler present an analogy between biological immune systems and the \"active shield\" concept in nanotechnology, arguing that just as biological systems managed to stave off invaders without the whole community collapsing, nanotechnological immune systems could do the same.


I thought this was a poor analogy, and was going to point out some flaws during the Q&A.  But Steve Jurvetson, who was in line before me, proceeded to demolish the argument even more thoroughly.  Jurvetson pointed out the evolutionary tradeoff between virulence and transmission that keeps natural viruses in check, talked about how greater interconnectedness led to larger pandemics—it was very nicely done, demolishing the surface analogy by correct reference to deeper biological details.


I was shocked, meeting Steve Jurvetson, because from everything I'd read about venture capitalists before then, VCs were supposed to be fools in business suits, who couldn't understand technology or engineers or the needs of a fragile young startup, but who'd gotten ahold of large amounts of money by dint of seeming reliable to other business suits.


One of the major surprises I received when I moved out of childhood into the real world, was the degree to which the world is stratified by genuine competence.



Now, yes, Steve Jurvetson is not just a randomly selected big-name venture capitalist.  He is a big-name VC who often shows up at transhumanist conferences.  But I am not drawing a line through just one data point.


I was invited once to a gathering of the mid-level power elite, where around half the attendees were \"CEO of something\"—mostly technology companies, but occasionally \"something\" was a public company or a sizable hedge fund.  I was expecting to be the youngest person there, but it turned out that my age wasn't unusual—there were several accomplished individuals who were younger.  This was the point at which I realized that my child prodigy license had officially completely expired.


Now, admittedly, this was a closed conference run by people clueful enough to think \"Let's invite Eliezer Yudkowsky\" even though I'm not a CEO.  So this was an incredibly cherry-picked sample.  Even so...


Even so, these people of the Power Elite were visibly much smarter than average mortals.  In conversation they spoke quickly, sensibly, and by and large intelligently. When talk turned to deep and difficult topics, they understood faster, made fewer mistakes, were readier to adopt others' suggestions.


No, even worse than that, much worse than that: these CEOs and CTOs and hedge-fund traders, these folk of the mid-level power elite, seemed happier and more alive.


This, I suspect, is one of those truths so horrible that you can't talk about it in public.  This is something that reporters must not write about, when they visit gatherings of the power elite.


Because the last news your readers want to hear, is that this person who is wealthier than you, is also smarter, happier, and not a bad person morally.  Your reader would much rather read about how these folks are overworked to the bone or suffering from existential ennui.  Failing that, your readers want to hear how the upper echelons got there by cheating, or at least smarming their way to the top.  If you said anything as hideous as, \"They seem more alive,\" you'd get lynched.


But I am an independent scholar, not much beholden.  I should be able to say it out loud if anyone can. I'm talking about this topic... for more than one reason; but it is the truth as I see it, and an important truth which others don't talk about (in writing?).  It is something that led me down wrong pathways when I was young and inexperienced.


I used to think—not from experience, but from the general memetic atmosphere I grew up in—that executives were just people who, by dint of superior charisma and butt-kissing, had managed to work their way to the top positions at the corporate hog trough.


No, that was just a more comfortable meme, at least when it comes to what people put down in writing and pass around.  The story of the horrible boss gets passed around more than the story of the boss who is, not just competent, but more competent than you.


But entering the real world, I found out that the average mortal really can't be an executive.  Even the average manager can't function without a higher-level manager above them.  What is it that makes an executive?  I don't know, because I'm not a professional in this area.  If I had to take a guess, I would call it \"functioning without recourse\"—living without any level above you to take over if you falter, or even to tell you if you're getting it wrong.  To just get it done, even if the problem requires you to do something unusual, without anyone being there to look over your work and pencil in a few corrections.


Now, I'm sure that there are plenty of people out there bearing executive titles who are not executives.


And yet there seem to be a remarkable number of people out there bearing executive titles who actually do have the executive-nature, who can thrive on the final level that gets the job done without recourse.  I'm not going to take sides on whether today's executives are overpaid, but those executive titles occupied by actual executives, are not being paid for nothing.  Someone who can be an executive at all, even a below-average executive, is a rare find.


The people who'd like to be boss of their company, to sit back in that comfortable chair with a lovely golden parachute—most of them couldn't make it.  If you try to drop executive responsibility on someone who lacks executive-nature—on the theory that most people can do it if given the chance—then they'll melt and catch fire.


This is not the sort of unpleasant truth that anyone would warn you about—at least not in books, and all I had read were books.  Who would say it?  A reporter?  It's not news that people want to hear.  An executive?  Who would believe that self-valuing story?


I expect that my life experience constitutes an extremely biased sample of the power elite.  I don't have to deal with the executives of arbitrary corporations, or form business relationships with people I never selected.  I just meet them at gatherings and talk to the interesting ones.


But the business world is not the only venue where I've encountered the upper echelons and discovered that, amazingly, they actually are better at what they do.


Case in point:  Professor Rodney Brooks, CTO of iRobot and former director of the MIT AI Lab, who spoke at the 2007 Singularity Summit.  I had previously known \"Rodney Brooks\" primarily as the promoter of yet another dreadful nouvelle paradigm in AI—the embodiment of AIs in robots, and the forsaking of deliberation for complicated reflexes that didn't involve modeling.  Definitely not a friend to the Bayesian faction.  Yet somehow Brooks had managed to become a major mainstream name, a household brand in AI...


And by golly, Brooks sounded intelligent and original.  He gave off a visible aura of competence.  (Though not a thousand-year vampire aura of terrifying swift perfection like E.T. Jaynes's carefully crafted book.)  But Brooks could have held his own at any gathering I attended; from his aura I would put him at the Steve Jurvetson level or higher.


(Interesting question:  If I'm not judging Brooks by the goodness of his AI theories, what is it that made him seem smart to me?  I don't remember any stunning epiphanies in his presentation at the Summit.  I didn't talk to him very long in person.  He just came across as... formidable, somehow.)


The major names in an academic field, at least the ones that I run into, often do seem a lot smarter than the average scientist.


I tried—once—going to an interesting-sounding mainstream AI conference that happened to be in my area.  I met ordinary research scholars and looked at their posterboards and read some of their papers.  I watched their presentations and talked to them at lunch.  And they were way below the level of the big names.  I mean, they weren't visibly incompetent, they had their various research interests and I'm sure they were doing passable work on them.  And I gave up and left before the conference was over, because I kept thinking \"What am I even doing here?\"


An intermediate stratum, above the ordinary scientist but below the ordinary CEO, is that of, say, partners at a non-big-name venture capital firm.  The way their aura feels to me, is that they can hold up one end of an interesting conversation, but they don't sound very original, and they don't sparkle with extra life force.


I wonder if you have to reach the Jurvetson level before thinking outside the \"Outside the Box\" box starts to become a serious possibility.  Or maybe that art can be taught, but isn't, and the Jurvetson level is where it starts to happen spontaneously.  It's at this level that I talk to people and find that they routinely have interesting thoughts I haven't heard before.


Hedge-fund people sparkle with extra life force.  At least the ones I've talked to.  Large amounts of money seem to attract smart people.  No, really.


If you're wondering how it could be possible that the upper echelons of the world could be genuinely intelligent, and yet the world is so screwed up...


Well, part of that may be due to my biased sample.


Also, I've met a few Congresspersons and they struck me as being at around the non-big-name venture capital level, not the hedge fund level or the Jurvetson level.  (Still, note that e.g. George W. Bush used to sound a lot smarter than he does now.)


But mainly:  It takes an astronomically high threshold of intelligence + experience + rationality before a screwup becomes surprising.  There's \"smart\" and then there's \"smart enough for your cognitive mechanisms to reliably decide to sign up for cryonics\".  Einstein was a deist, etc.  See also Eliezer1996 and the edited volume \"Why Smart People Can Be So Stupid\".  I've always been skeptical that Jeff Skilling of Enron was world-class smart, but I can easily visualize him being able to sparkle in conversation.


Still, so far as I can tell, the world's upper echelons—in those few cases I've tested, within that extremely biased sample that I encounter—really are more intelligent.


Not just, \"it's who you know, not what you know\".  Not just personal charisma and Machiavellian maneuvering.  Not just promotion of incompetents by other incompetents.


I don't say that this never happens.  I'm sure it happens.  I'm sure it's endemic in all sorts of places.


But there's a flip side to the story, which doesn't get talked about so much: you really do find a lot more cream as you move closer to the top.


It's a standard idea that people who make it to the elite, tend to stop talking to ordinary mortals, and only hang out with other people at their level of the elite.


That's easy for me to believe.  But I suspect that the reason is more disturbing than simple snobbery.  A reporter, writing about that, would pass it off as snobbery.  But it makes entire sense in terms of expected utility, from their viewpoint.  Even if all they're doing is looking for someone to talk to—just talk to.


Visiting that gathering of the mid-level power elite, I suddenly saw why the people who attended that conference might want to only hang out with other people who attended that conference.  So long as they can talk to each other, there's no point in taking a chance on outsiders who are statistically unlikely to sparkle with the same level of life force.


When you make it to the power elite, there are all sorts of people who want to talk to you.  But until they make it into the power elite, it's not in your interest to take a chance on talking to them.  Frustrating as that seems when you're on the outside trying to get in!  On the inside, it's just more expected fun to hang around people who've already proven themselves competent.  I think that's how it must be, for them.  (I'm not part of that world, though I can walk through it and be recognized as something strange but sparkly.)


There's another world out there, richer in more than money.  Journalists don't report on that part, and instead just talk about the big houses and the yachts.  Maybe the journalists can't perceive it, because you can't discriminate more than one level above your own.  Or maybe it's such an awful truth that no one wants to hear about it, on either side of the fence.  It's easier for me to talk about such things, because, rightly or wrongly, I imagine that I can imagine technologies of an order that could bridge even that gap.


I've never been to a gathering of the top-level elite (World Economic Forum level), so I have no idea if people are even more alive up there, or if the curve turns and starts heading downward.


And really, I've never been to any sort of power-elite gathering except those organized by the sort of person that would invite me.  Maybe that world I've experienced, is only a tiny minority carved out within the power elite.  I really don't know.  If for some reason it made a difference, I'd try to plan for both possibilities.


But I'm pretty sure that, statistically speaking, there's a lot more cream at the top than most people seem willing to admit in writing.


Such is the hideously unfair world we live in, which I do hope to fix.


Part of the sequence Yudkowsky's Coming of Age


Next post: \"Above-Average AI Scientists\"


Previous post: \"The Level Above Mine\"

" } }, { "_id": "kXSETKZ3X9oidMozA", "title": "The Level Above Mine", "pageUrl": "https://www.lesswrong.com/posts/kXSETKZ3X9oidMozA/the-level-above-mine", "postedAt": "2008-09-26T09:18:34.000Z", "baseScore": 141, "voteCount": 131, "commentCount": 358, "url": null, "contents": { "documentId": "kXSETKZ3X9oidMozA", "html": "

(At this point, I fear that I must recurse into a subsequence; but if all goes as planned, it really will be short.)


I once lent Xiaoguang \"Mike\" Li my copy of \"Probability Theory: The Logic of Science\".  Mike Li read some of it, and then came back and said:


\"Wow... it's like Jaynes is a thousand-year-old vampire.\"


Then Mike said, \"No, wait, let me explain that—\" and I said, \"No, I know exactly what you mean.\"  It's a convention in fantasy literature that the older a vampire gets, the more powerful they become.


I'd enjoyed math proofs before I encountered Jaynes.  But E.T. Jaynes was the first time I picked up a sense of formidability from mathematical arguments.  Maybe because Jaynes was lining up \"paradoxes\" that had been used to object to Bayesianism, and then blasting them to pieces with overwhelming firepower—power being used to overcome others.  Or maybe the sense of formidability came from Jaynes not treating his math as a game of aesthetics; Jaynes cared about probability theory, it was bound up with other considerations that mattered, to him and to me too.


For whatever reason, the sense I get of Jaynes is one of terrifying swift perfection—something that would arrive at the correct answer by the shortest possible route, tearing all surrounding mistakes to shreds in the same motion.  Of course, when you write a book, you get a chance to show only your best side.  But still.


It spoke well of Mike Li that he was able to sense the aura of formidability surrounding Jaynes.  It's a general rule, I've observed, that you can't discriminate between levels too far above your own. E.g., someone once earnestly told me that I was really bright, and \"ought to go to college\".  Maybe anything more than around one standard deviation above you starts to blur together, though that's just a cool-sounding wild guess.


So, having heard Mike Li compare Jaynes to a thousand-year-old vampire, one question immediately popped into my mind:


\"Do you get the same sense off me?\" I asked.


Mike shook his head.  \"Sorry,\" he said, sounding somewhat awkward, \"it's just that Jaynes is...\"


\"No, I know,\" I said.  I hadn't thought I'd reached Jaynes's level. I'd only been curious about how I came across to other people.


I aspire to Jaynes's level.  I aspire to become as much the master of Artificial Intelligence / reflectivity, as Jaynes was master of Bayesian probability theory.  I can even plead that the art I'm trying to master is more difficult than Jaynes's, making a mockery of deference.  Even so, and embarrassingly, there is no art of which I am as much the master now, as Jaynes was of probability theory.


This is not, necessarily, to place myself beneath Jaynes as a person—to say that Jaynes had a magical aura of destiny, and I don't.


Rather I recognize in Jaynes a level of expertise, of sheer formidability, which I have not yet achieved.  I can argue forcefully in my chosen subject, but that is not the same as writing out the equations and saying:  DONE.


For so long as I have not yet achieved that level, I must acknowledge the possibility that I can never achieve it, that my native talent is not sufficient.  When Marcello Herreshoff had known me for long enough, I asked him if he knew of anyone who struck him as substantially more natively intelligent than myself.  Marcello thought for a moment and said \"John Conway—I met him at a summer math camp.\"  Darn, I thought, he thought of someone, and worse, it's some ultra-famous old guy I can't grab.  I inquired how Marcello had arrived at the judgment.  Marcello said, \"He just struck me as having a tremendous amount of mental horsepower,\" and started to explain a math problem he'd had a chance to work on with Conway.


Not what I wanted to hear.


Perhaps, relative to Marcello's experience of Conway and his experience of me, I haven't had a chance to show off on any subject that I've mastered as thoroughly as Conway had mastered his many fields of mathematics.


Or it might be that Conway's brain is specialized off in a different direction from mine, and that I could never approach Conway's level on math, yet Conway wouldn't do so well on AI research.


Or...


...or I'm strictly dumber than Conway, dominated by him along all dimensions.  Maybe, if I could find a young proto-Conway and tell them the basics, they would blaze right past me, solve the problems that have weighed on me for years, and zip off to places I can't follow.


Is it damaging to my ego to confess that last possibility?  Yes.  It would be futile to deny that.


Have I really accepted that awful possibility, or am I only pretending to myself to have accepted it?  Here I will say:  \"No, I think I have accepted it.\"  Why do I dare give myself so much credit?  Because I've invested specific effort into that awful possibility.  I am blogging here for many reasons, but a major one is the vision of some younger mind reading these words and zipping off past me.  It might happen, it might not.


Or sadder:  Maybe I just wasted too much time on setting up the resources to support me, instead of studying math full-time through my whole youth; or I wasted too much youth on non-mathy ideas.  And this choice, my past, is irrevocable.  I'll hit a brick wall at 40, and there won't be anything left but to pass on the resources to another mind with the potential I wasted, still young enough to learn.  So to save them time, I should leave a trail to my successes, and post warning signs on my mistakes.


Such specific efforts predicated on an ego-damaging possibility—that's the only kind of humility that seems real enough for me to dare credit myself.  Or giving up my precious theories, when I realized that they didn't meet the standard Jaynes had shown me—that was hard, and it was real.  Modest demeanors are cheap.  Humble admissions of doubt are cheap.  I've known too many people who, presented with a counterargument, say \"I am but a fallible mortal, of course I could be wrong\" and then go on to do exactly what they planned to do previously.


You'll note that I don't try to modestly say anything like, \"Well, I may not be as brilliant as Jaynes or Conway, but that doesn't mean I can't do important things in my chosen field.\"


Because I do know... that's not how it works.

" } }, { "_id": "75LZMCCePG4Pwj3dB", "title": "My Naturalistic Awakening", "pageUrl": "https://www.lesswrong.com/posts/75LZMCCePG4Pwj3dB/my-naturalistic-awakening", "postedAt": "2008-09-25T06:58:35.000Z", "baseScore": 76, "voteCount": 53, "commentCount": 47, "url": null, "contents": { "documentId": "75LZMCCePG4Pwj3dB", "html": "

In yesterday's episode, Eliezer2001 is fighting a rearguard action against the truth.  Only gradually shifting his beliefs, admitting an increasing probability in a different scenario, but never saying outright, \"I was wrong before.\"  He repairs his strategies as they are challenged, finding new justifications for just the same plan he pursued before.


(Of which it is therefore said:  \"Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated.  Surrender to the truth as quickly as you can.  Do this the instant you realize what you are resisting; the instant you can see from which quarter the winds of evidence are blowing against you.\")


Memory fades, and I can hardly bear to look back upon those times—no, seriously, I can't stand reading my old writing.  I've already been corrected once in my recollections, by those who were present.  And so, though I remember the important events, I'm not really sure what order they happened in, let alone what year.


But if I had to pick a moment when my folly broke, I would pick the moment when I first comprehended, in full generality, the notion of an optimization process.  That was the point at which I first looked back and said, \"I've been a fool.\"


Previously, in 2002, I'd been writing a bit about the evolutionary psychology of human general intelligence—though at the time, I thought I was writing about AI; at this point I thought I was against anthropomorphic intelligence, but I was still looking to the human brain for inspiration.  (The paper in question is \"Levels of Organization in General Intelligence\", a requested chapter for the volume \"Artificial General Intelligence\", which finally came out in print in 2007.)


So I'd been thinking (and writing) about how natural selection managed to cough up human intelligence; I saw a dichotomy between them, the blindness of natural selection and the lookahead of intelligent foresight, reasoning by simulation versus playing everything out in reality, abstract versus concrete thinking.  And yet it was natural selection that created human intelligence, so that our brains, though not our thoughts, are entirely made according to the signature of natural selection.


To this day, this still seems to me like a reasonably shattering insight, and so it drives me up the wall when people lump together natural selection and intelligence-driven processes as \"evolutionary\". They really are almost absolutely different in a number of important ways—though there are concepts in common that can be used to describe them, like consequentialism and cross-domain generality.


But that Eliezer2002 is thinking in terms of a dichotomy between evolution and intelligence tells you something about the limits of his vision—like someone who thinks of politics as a dichotomy between conservative and liberal stances, or someone who thinks of fruit as a dichotomy between apples and strawberries.


After the \"Levels of Organization\" draft was published online, Emil Gilliam pointed out that my view of AI seemed pretty similar to my view of intelligence.  Now, of course Eliezer2002 doesn't espouse building an AI in the image of a human mind; Eliezer2002 knows very well that a human mind is just a hack coughed up by natural selection.  But Eliezer2002 has described these levels of organization in human thinking, and he hasn't proposed using different levels of organization in the AI.  Emil Gilliam asks whether I think I might be hewing too close to the human line.  I dub the alternative the \"Completely Alien Mind Design\" and reply that a CAMD is probably too difficult for human engineers to create, even if it's possible in theory, because we wouldn't be able to understand something so alien while we were putting it together.


I don't know if Eliezer2002 invented this reply on his own, or if he read it somewhere else. Needless to say, I've heard this excuse plenty of times since then.  In reality, what you genuinely understand, you can usually reconfigure in almost any sort of shape, leaving some structural essence inside; but when you don't understand flight, you suppose that a flying machine needs feathers, because you can't imagine departing from the analogy of a bird.


So Eliezer2002 is still, in a sense, attached to humanish mind designs—he imagines improving on them, but the human architecture is still in some sense his point of departure.


What is it that finally breaks this attachment?


It's an embarrassing confession:  It came from a science-fiction story I was trying to write.  (No, you can't see it; it's not done.) The story involved a non-cognitive non-evolutionary optimization process; something like an Outcome Pump. Not intelligence, but a cross-temporal physical effect—that is, I was imagining it as a physical effect—that narrowly constrained the space of possible outcomes.  (I can't tell you any more than that; it would be a spoiler, if I ever finished the story.  Just see the post on Outcome Pumps.) It was \"just a story\", and so I was free to play with the idea and elaborate it out logically:  C was constrained to happen, therefore B (in the past) was constrained to happen, therefore A (which led to B) was constrained to happen.


Drawing a line through one point is generally held to be dangerous. Two points make a dichotomy; you imagine them opposed to one another. But when you've got three different points—that's when you're forced to wake up and generalize.


Now I had three points:  Human intelligence, natural selection, and my fictional plot device.


And so that was the point at which I generalized the notion of an optimization process, of a process that squeezes the future into a narrow region of the possible.

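(A toy sketch of what \"squeezing\" can mean quantitatively, for illustration only; the scoring rule and all numbers below are invented here, not anything from the period described.  Score each possible future with a number, and credit a process with -log2(f) bits of squeezing, where f is the fraction of possible futures at least as good as the one it actually hits.)

```python
import math
import random

# Invented illustration: uniform random scores stand in for a preference
# ordering over a million possible futures.
random.seed(0)
futures = [random.random() for _ in range(1_000_000)]

def squeezing_bits(achieved_score):
    # Fraction of possible futures at least as good as the one achieved.
    f = sum(score >= achieved_score for score in futures) / len(futures)
    return -math.log2(f)

print(squeezing_bits(0.5))           # an unsteered process: about 1 bit
print(squeezing_bits(max(futures)))  # best of a million: about 20 bits
```

On this one ruler, human intelligence, natural selection, and a fictional plot device can all be compared, which is the sense in which all three are optimization processes.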

This may seem like an obvious point, if you've been following Overcoming Bias this whole time; but if you look at Shane Legg's collection of 71 definitions of intelligence, you'll see that \"squeezing the future into a constrained region\" is a less obvious reply than it seems.


Many of the definitions of \"intelligence\" by AI researchers, do talk about \"solving problems\" or \"achieving goals\".  But from the viewpoint of past Eliezers, at least, it is only hindsight that makes this the same thing as \"squeezing the future\".


A goal is a mentalistic object; electrons have no goals, and solve no problems either.  When a human imagines a goal, they imagine an agent imbued with wanting-ness—it's still empathic language.


You can espouse the notion that intelligence is about \"achieving goals\"—and then turn right around and argue about whether some \"goals\" are better than others—or talk about the wisdom required to judge between goals themselves—or talk about a system deliberately modifying its goals—or talk about the free will needed to choose plans that achieve goals—or talk about an AI realizing that its goals aren't what the programmers really meant to ask for.  If you imagine something that squeezes the future into a narrow region of the possible, like an Outcome Pump, those seemingly sensible statements somehow don't translate.


So for me at least, seeing through the word \"mind\", to a physical process that would, just by naturally running, just by obeying the laws of physics, end up squeezing its future into a narrow region, was a naturalistic enlightenment over and above the notion of an agent trying to achieve its goals.


It was like falling out of a deep pit, falling into the ordinary world, strained cognitive tensions relaxing into unforced simplicity, confusion turning to smoke and drifting away.  I saw the work performed by intelligence; smart was no longer a property, but an engine.  Like a knot in time, echoing the outer part of the universe in the inner part, and thereby steering it.  I even saw, in a flash of the same enlightenment, that a mind had to output waste heat in order to obey the laws of thermodynamics.


Previously, Eliezer2001 had talked about Friendly AI as something you should do just to be sure—if you didn't know whether AI design X was going to be Friendly, then you really ought to go with AI design Y that you did know would be Friendly.  But Eliezer2001 didn't think he knew whether you could actually have a superintelligence that turned its future light cone into paperclips.


Now, though, I could see it—the pulse of the optimization process, sensory information surging in, motor instructions surging out, steering the future.  In the middle, the model that linked up possible actions to possible outcomes, and the utility function over the outcomes.  Put in the corresponding utility function, and the result would be an optimizer that would steer the future anywhere.


Up until that point, I'd never quite admitted to myself that Eliezer1997's AI goal system design would definitely, no two ways about it, pointlessly wipe out the human species.  Now, however, I looked back, and I could finally see what my old design really did, to the extent it was coherent enough to be talked about.  Roughly, it would have converted its future light cone into generic tools—computers without programs to run, stored energy without a use...


...how on Earth had I, the fine and practiced rationalist, how on Earth had I managed to miss something that obvious, for six damned years?


That was the point at which I awoke clear-headed, and remembered; and thought, with a certain amount of embarrassment:  I've been stupid.


To be continued.

" } }, { "_id": "PCfaLLtuxes6Jk4S2", "title": "Fighting a Rearguard Action Against the Truth", "pageUrl": "https://www.lesswrong.com/posts/PCfaLLtuxes6Jk4S2/fighting-a-rearguard-action-against-the-truth", "postedAt": "2008-09-24T01:23:30.000Z", "baseScore": 51, "voteCount": 41, "commentCount": 8, "url": null, "contents": { "documentId": "PCfaLLtuxes6Jk4S2", "html": "

When we last left Eliezer2000, he was just beginning to investigate the question of how to inscribe a morality into an AI.  His reasons for doing this don't matter at all, except insofar as they happen to historically demonstrate the importance of perfectionism.  If you practice something, you may get better at it; if you investigate something, you may find out about it; the only thing that matters is that Eliezer2000 is, in fact, focusing his full-time energies on thinking technically about AI morality; rather than, as previously, finding a justification for not spending his time this way.  In the end, this is all that turns out to matter.


But as our story begins—as the sky lightens to gray and the tip of the sun peeks over the horizon—Eliezer2001 hasn't yet admitted that Eliezer1997 was mistaken in any important sense.  He's just making Eliezer1997's strategy even better by including a contingency plan for \"the unlikely event that life turns out to be meaningless\"...


...which means that Eliezer2001 now has a line of retreat away from his mistake.


I don't just mean that Eliezer2001 can say \"Friendly AI is a contingency plan\", rather than screaming \"OOPS!\"


I mean that Eliezer2001 now actually has a contingency plan.  If Eliezer2001 starts to doubt his 1997 metaethics, the Singularity has a fallback strategy, namely Friendly AI.  Eliezer2001 can question his metaethics without it signaling the end of the world.


And his gradient has been smoothed; he can admit a 10% chance of having previously been wrong, then a 20% chance.  He doesn't have to cough out his whole mistake in one huge lump.


If you think this sounds like Eliezer2001 is too slow, I quite agree.


Eliezer1996-2000's strategies had been formed in the total absence of \"Friendly AI\" as a consideration.  The whole idea was to get a superintelligence, any superintelligence, as fast as possible—codelet soup, ad-hoc heuristics, evolutionary programming, open-source, anything that looked like it might work—preferably all approaches simultaneously in a Manhattan Project.  (\"All parents did the things they tell their children not to do.  That's how they know to tell them not to do it.\"  John Moore, Slay and Rescue.)  It's not as if adding one more approach could hurt.


His attitudes toward technological progress have been formed—or more accurately, preserved from childhood-absorbed technophilia—around the assumption that any/all movement toward superintelligence is a pure good without a hint of danger.


Looking back, what Eliezer2001  needed to do at this point was declare an HMC event—Halt, Melt, and Catch Fire.  One of the foundational assumptions on which everything else has been built, has been revealed as flawed.  This calls for a mental brake to a full stop: take your weight off all beliefs built on the wrong assumption, do your best to rethink everything from scratch.  This is an art I need to write more about—it's akin to the convulsive effort required to seriously clean house, after an adult religionist notices for the first time that God doesn't exist.


But what Eliezer2001 actually did was rehearse his previous technophilic arguments for why it's difficult to ban or governmentally control new technologies—the standard arguments against \"relinquishment\".


It does seem even to my modern self, that all those awful consequences which technophiles argue to follow from various kinds of government regulation, are more or less correct—it's much easier to say what someone is doing wrong, than to say the way that is right.  My modern viewpoint hasn't shifted to think that technophiles are wrong about the downsides of technophobia; but I do tend to be a lot more sympathetic to what technophobes say about the downsides of technophilia.  What previous Eliezers said about the difficulties of, e.g., the government doing anything sensible about Friendly AI, still seems pretty true.  It's just that a lot of his hopes for science, or private industry, etc., now seem equally wrongheaded.


Still, let's not get into the details of the technovolatile viewpoint.  Eliezer2001 has just tossed a major foundational assumption—that AI can't be dangerous, unlike other technologies—out the window.  You would intuitively suspect that this should have some kind of large effect on his strategy.


Well, Eliezer2001 did at least give up on his 1999 idea of an open-source AI Manhattan Project using self-modifying heuristic soup, but overall...


Overall, he'd previously wanted to charge in, guns blazing, immediately using his best idea at the time; and afterward he still wanted to charge in, guns blazing.  He didn't say, \"I don't know how to do this.\"  He didn't say, \"I need better knowledge.\"  He didn't say, \"This project is not yet ready to start coding.\"  It was still all, \"The clock is ticking, gotta move now!  The Singularity Institute will start coding as soon as it's got enough money!\"


Before, he'd wanted to focus as much scientific effort as possible with full information-sharing, and afterward he still thought in those terms.  Scientific secrecy = bad guy, openness = good guy.  (Eliezer2001 hadn't read up on the Manhattan Project and wasn't familiar with the similar argument that Leo Szilard had with Enrico Fermi.)


That's the problem with converting one big \"Oops!\" into a gradient of shifting probability.  It means there isn't a single watershed moment—a visible huge impact—to hint that equally huge changes might be in order.


Instead, there are all these little opinion shifts... that give you a chance to repair the arguments for your strategies; to shift the justification a little, but keep the \"basic idea\" in place.  Small shocks that the system can absorb without cracking, because each time, it gets a chance to go back and repair itself.  It's just that in the domain of rationality, cracking = good, repair = bad.  In the art of rationality it's far more efficient to admit one huge mistake, than to admit lots of little mistakes.


There's some kind of instinct humans have, I think, to preserve their former strategies and plans, so that they aren't constantly thrashing around and wasting resources; and of course an instinct to preserve any position that we have publicly argued for, so that we don't suffer the humiliation of being wrong.  And though the younger Eliezer has striven for rationality for many years, he is not immune to these impulses; they waft gentle influences on his thoughts, and this, unfortunately, is more than enough damage.


Even in 2002, the earlier Eliezer isn't yet sure that Eliezer1997's plan couldn't possibly have worked.  It might have gone right.  You never know, right?


But there came a time when it all fell crashing down.  To be continued.

" } }, { "_id": "SwCwG9wZcAzQtckwx", "title": "That Tiny Note of Discord", "pageUrl": "https://www.lesswrong.com/posts/SwCwG9wZcAzQtckwx/that-tiny-note-of-discord", "postedAt": "2008-09-23T06:02:11.000Z", "baseScore": 64, "voteCount": 46, "commentCount": 36, "url": null, "contents": { "documentId": "SwCwG9wZcAzQtckwx", "html": "

When we last left Eliezer1997, he believed that any superintelligence would automatically do what was \"right\", and indeed would understand that better than we could; even though, he modestly confessed, he did not understand the ultimate nature of morality.  Or rather, after some debate had passed, Eliezer1997 had evolved an elaborate argument, which he fondly claimed to be \"formal\", that we could always condition upon the belief that life has meaning; and so cases where superintelligences did not feel compelled to do anything in particular, would fall out of consideration.  (The flaw being the unconsidered and unjustified equation of \"universally compelling argument\" with \"right\".)


So far, the young Eliezer is well on the way toward joining the \"smart people who are stupid because they're skilled at defending beliefs they arrived at for unskilled reasons\".  All his dedication to \"rationality\" has not saved him from this mistake, and you might be tempted to conclude that it is useless to strive for rationality.


But while many people dig holes for themselves, not everyone succeeds in clawing their way back out.


And from this I learn my lesson:  That it all began—


—with a small, small question; a single discordant note; one tiny lonely thought...


As our story starts, we advance three years to Eliezer2000, who in most respects resembles his self of 1997.  He currently thinks he's proven that building a superintelligence is the right thing to do if there is any right thing at all.  From which it follows that there is no justifiable conflict of interest over the Singularity, among the peoples and persons of Earth.


This is an important conclusion for Eliezer2000, because he finds the notion of fighting over the Singularity to be unbearably stupid.  (Sort of like the notion of God intervening in fights between tribes of bickering barbarians, only in reverse.)  Eliezer2000's self-concept does not permit him—he doesn't even want—to shrug and say, \"Well, my side got here first, so we're going to seize the banana before anyone else gets it.\"  It's a thought too painful to think.


And yet then the notion occurs to him:


Maybe some people would prefer an AI do particular things, such as not kill them, even if life is meaningless?


His immediately following thought is the obvious one, given his premises:


In the event that life is meaningless, nothing is the \"right\" thing to do; therefore it wouldn't be particularly right to respect people's preferences in this event.


This is the obvious dodge.  The thing is, though, Eliezer2000 doesn't think of himself as a villain.  He doesn't go around saying, \"What bullets shall I dodge today?\"  He thinks of himself as a dutiful rationalist who tenaciously follows lines of inquiry.  Later, he's going to look back and see a whole lot of inquiries that his mind somehow managed to not follow—but that's not his current self-concept. 


So Eliezer2000 doesn't just grab the obvious out.  He keeps thinking.


But if people believe they have preferences in the event that life is meaningless, then they have a motive to dispute my Singularity project and go with a project that respects their wish in the event life is meaningless.  This creates a present conflict of interest over the Singularity, and prevents right things from getting done in the mainline event that life is meaningful.


Now, there's a lot of excuses Eliezer2000 could have potentially used to toss this problem out the window.  I know, because I've heard plenty of excuses for dismissing Friendly AI.  \"The problem is too hard to solve\" is one I get from AGI wannabes who imagine themselves smart enough to create true Artificial Intelligence, but not smart enough to solve a really difficult problem like Friendly AI.  Or \"worrying about this possibility would be a poor use of resources, what with the incredible urgency of creating AI before humanity wipes itself out—you've got to go with what you have\", this being uttered by people who just basically aren't interested in the problem.


But Eliezer2000 is a perfectionist.  He's not perfect, obviously, and he doesn't attach as much importance as I do to the virtue of precision, but he is most certainly a perfectionist. The idea of metaethics that Eliezer2000  espouses, in which superintelligences know what's right better than we do, previously seemed to wrap up all the problems of justice and morality in an airtight wrapper.


The new objection seems to poke a minor hole in the airtight wrapper.  This is worth patching.  If you have something that's perfect, are you really going to let one little possibility compromise it?


So Eliezer2000 doesn't even want to drop the issue; he wants to patch the problem and restore perfection.  How can he justify spending the time?  By thinking thoughts like:


What about Brian Atkins?  [Brian Atkins being the startup funder of the Singularity Institute.]  He would probably prefer not to die, even if life were meaningless.  He's paying for the Singularity Institute right now; I don't want to taint the ethics of our cooperation.


Eliezer2000's sentiment doesn't translate very well—English doesn't have a simple description for it, nor does any other culture I know of.  Maybe the passage in the Old Testament, \"Thou shalt not boil a young goat in its mother's milk\".  Someone who helps you out of altruism shouldn't regret helping you; you owe them, not so much fealty, but rather, that they're actually doing what they think they're doing by helping you.


Well, but how would Brian Atkins find out, if I don't tell him?  Eliezer2000 doesn't even think this except in quotation marks, as the obvious thought that a villain would think in the same situation.  And Eliezer2000 has a standard counter-thought ready too, a ward against temptations to dishonesty—an argument that justifies honesty in terms of expected utility, not just a personal love of personal virtue:


Human beings aren't perfect deceivers; it's likely that I'll be found out.  Or what if genuine lie detectors are invented before the Singularity, sometime over the next thirty years?  I wouldn't be able to pass a lie detector test.


Eliezer2000 lives by the rule that you should always be ready to have your thoughts broadcast to the whole world at any time, without embarrassment.  Otherwise, clearly, you've fallen from grace: either you're thinking something you shouldn't be thinking, or you're embarrassed by something that shouldn't embarrass you.


(These days, I don't espouse quite such an extreme viewpoint, mostly for reasons of Fun Theory.  I see a role for continued social competition between intelligent life-forms, at least as far as my near-term vision stretches.  I admit, these days, that it might be all right for human beings to have a self; as John McCarthy put it, \"If everyone were to live for others all the time, life would be like a procession of ants following each other around in a circle.\"  If you're going to have a self, you may as well have secrets, and maybe even conspiracies.  But I do still try to abide by the principle of being able to pass a future lie detector test, with anyone else who's also willing to go under the lie detector, if the topic is a professional one.  Fun Theory needs a commonsense exception for global catastrophic risk management.)


Even taking honesty for granted, there are other excuses Eliezer2000 could use to flush the question down the toilet.  \"The world doesn't have the time\" or \"It's unsolvable\" would still work.  But Eliezer2000 doesn't know that this problem, the \"backup\" morality problem, is going to be particularly difficult or time-consuming.  He's just now thought of the whole issue.


And so Eliezer2000 begins to really consider the question:  Supposing that \"life is meaningless\" (that superintelligences don't produce their own motivations from pure logic), then how would you go about specifying a fallback morality?  Synthesizing it, inscribing it into the AI?


There's a lot that Eliezer2000 doesn't know, at this point.  But he has been thinking about self-improving AI for three years, and he's been a Traditional Rationalist for longer than that.  There are techniques of rationality that he has practiced, methodological safeguards he's already devised.  He already knows better than to think that all an AI needs is the One Great Moral Principle.  Eliezer2000 already knows that it is wiser to think technologically than politically.  He already knows the saying that AI programmers are supposed to think in code, to use concepts that can be inscribed in a computer.  Eliezer2000 already has a concept that there is something called \"technical thinking\" and it is good, though he hasn't yet formulated a Bayesian view of it. And he's long since noticed that  suggestively named LISP tokens don't really mean anything, etcetera.  These injunctions prevent him from falling into some of the initial traps, the ones that I've seen consume other novices on their own first steps into the Friendly AI problem... though technically this was my second step; I well and truly failed on my first.


But in the end, what it comes down to is this:  For the first time, Eliezer2000 is trying to think technically about inscribing a morality into an AI, without the escape-hatch of the mysterious essence of rightness.


That's the only thing that matters, in the end.  His previous philosophizing wasn't enough to force his brain to confront the details.  This new standard is strict enough to require actual work.  Morality slowly starts being less mysterious to him—Eliezer2000 is starting to think inside the black box.


His reasons for pursuing this course of action—those don't matter at all.


Oh, there's a lesson in his being a perfectionist.  There's a lesson in the part about how Eliezer2000 initially thought this was a tiny flaw, and could have dismissed it out-of-mind if that had been his impulse.


But in the end, the chain of cause and effect goes like this:  Eliezer2000 investigated in more detail, therefore he got better with practice.  Actions screen off justifications.  If your arguments happen to justify not working things out in detail, like Eliezer1996, then you won't get good at thinking about the problem.  If your arguments call for you to work things out in detail, then you have an opportunity to start accumulating expertise.


That was the only choice that mattered, in the end—not the reasons for doing anything.


I say all this, as you may well guess, because of the AI wannabes I sometimes run into, who have their own clever reasons for not thinking about the Friendly AI problem.  Our clever reasons for doing what we do, tend to matter a lot less to Nature than they do to ourselves and our friends.  If your actions don't look good when they're stripped of all their justifications and presented as mere brute facts... then maybe you should re-examine them.


A diligent effort won't always save a person.  There is such a thing as lack of ability.  Even so, if you don't try, or don't try hard enough, you don't get a chance to sit down at the high-stakes table—never mind the ability ante.  That's cause and effect for you.


Also, perfectionism really matters.  The end of the world doesn't always come with trumpets and thunder and the highest priority in your inbox.  Sometimes the shattering truth first presents itself to you as a small, small question; a single discordant note; one tiny lonely thought, that you could dismiss with one easy effortless touch...


...and so, over succeeding years, understanding begins to dawn on that past Eliezer, slowly.  That sun rose slower than it could have risen.  To be continued.

" } }, { "_id": "ziqL94sq6rMuH7wDu", "title": "Horrible LHC Inconsistency", "pageUrl": "https://www.lesswrong.com/posts/ziqL94sq6rMuH7wDu/horrible-lhc-inconsistency", "postedAt": "2008-09-22T03:12:58.000Z", "baseScore": 34, "voteCount": 24, "commentCount": 33, "url": null, "contents": { "documentId": "ziqL94sq6rMuH7wDu", "html": "

Followup to: When (Not) To Use Probabilities, How Many LHC Failures Is Too Many?


While trying to answer my own question on "How Many LHC Failures Is Too Many?" I realized that I'm horrendously inconsistent with respect to my stated beliefs about disaster risks from the Large Hadron Collider.


First, I thought that stating a "one-in-a-million" probability for the Large Hadron Collider destroying the world was too high, in the sense that I would much rather run the Large Hadron Collider than press a button with a known 1/1,000,000 probability of destroying the world.


But if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.


Unknown pointed out that this turns me into a money pump.  Given a portfolio of a million existential risks to which I had assigned a "less than one in a million probability", I would rather press the button on the fixed-probability device than run a random risk from this portfolio; but would rather take any particular risk in this portfolio than press the button.

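To spell out the pump with concrete numbers (invented purely for illustration): suppose my honest credence in each of the million risks were really two in a million.

```python
# Invented numbers, to illustrate the money pump.
button_p = 1e-6                  # the button's known destruction probability
portfolio = [2e-6] * 1_000_000   # honest per-risk credences

# Preference 1: for any single named risk, run it rather than press the
# button.  Coherent only if every per-risk credence is below button_p.
each_risk_beats_button = all(p < button_p for p in portfolio)   # False

# Preference 2: press the button rather than run one risk drawn at random
# from the portfolio.  Coherent only if the average credence exceeds it.
mixture_p = sum(portfolio) / len(portfolio)
button_beats_random_draw = mixture_p > button_p                 # True

# The random draw is just the average of the named risks, so holding both
# preferences prices the identical gamble two contradictory ways; trading
# me back and forth between the two framings extracts a premium each cycle.
print(each_risk_beats_button, button_beats_random_draw)
```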

Then, I considered the question of how many mysterious failures at the LHC it would take to make me question whether it might destroy the world/universe somehow, and what this revealed about my prior probability.


If each failure had a known 50% probability of occurring from natural causes, like a quantum coin or some such... then I suspect that if I actually saw that coin come up heads 20 times in a row, I would feel a strong impulse to bet on it coming up heads the next time around.  (And that's taking into account my uncertainty about whether the anthropic principle really works that way.)


Even having noticed this triple inconsistency, I'm not sure in which direction to resolve it!


(But I still maintain my resolve that the LHC is not worth expending political capital, financial capital, or our time to shut down; compared with using the same capital to worry about superhuman intelligence or nanotechnology.)

" } }, { "_id": "jE3npTEBtHnZBuAcg", "title": "How Many LHC Failures Is Too Many?", "pageUrl": "https://www.lesswrong.com/posts/jE3npTEBtHnZBuAcg/how-many-lhc-failures-is-too-many", "postedAt": "2008-09-20T21:38:27.000Z", "baseScore": 37, "voteCount": 28, "commentCount": 140, "url": null, "contents": { "documentId": "jE3npTEBtHnZBuAcg", "html": "

Recently the Large Hadron Collider was damaged by a mechanical failure.  This requires the collider to be warmed up, repaired, and then cooled down again, so we're looking at a two-month delay.


Inevitably, many commenters said, "Anthropic principle!  If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"


This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction.  However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all.  (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)


As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry.  However, there's an exercise in probability theory (which I first picked up from E.T. Jaynes) along the lines of, "How many times does a coin have to come up heads before you believe the coin is fixed?"  This tells you how low your prior probability is for the hypothesis.  If a coin comes up heads only twice, that's definitely not a good reason to believe it's fixed, unless you already suspected from the beginning.  But if it comes up heads 100 times, it's taking you too long to notice.

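The arithmetic behind that exercise, as a minimal sketch; the one-in-a-million prior below is an invented number, chosen only to show the shape of the curve:

```python
# Sketch with an invented prior: odds of 1 to 1,000,000 that the coin is
# fixed (always heads).  Each observed head multiplies the odds by
# P(heads | fixed) / P(heads | fair) = 2.
prior_odds = 1e-6

for n_heads in (2, 10, 20, 30):
    odds = prior_odds * 2.0 ** n_heads
    p_fixed = odds / (1 + odds)
    print(f'{n_heads:2d} heads -> P(fixed) = {p_fixed:.6f}')

#  2 heads: ~0.000004  (no reason yet to believe it's fixed)
# 20 heads: ~0.51      (2^20 ~ 10^6 just cancels the prior)
# 30 heads: ~0.999     (waiting this long is taking too long to notice)
```

Run backward, the exercise reads off your prior: if five heads would already half-persuade you, your prior odds were nearer 1 to 32 than 1 to 1,000,000.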

So - taking into account the previous cancellation of the Superconducting Supercollider (SSC) - how many times does the LHC have to fail before you'll start considering an anthropic explanation?  10?  20?  50?


After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?

" } }, { "_id": "o3PQswRQ6aBXDzcMs", "title": "Ban the Bear", "pageUrl": "https://www.lesswrong.com/posts/o3PQswRQ6aBXDzcMs/ban-the-bear", "postedAt": "2008-09-19T18:14:23.000Z", "baseScore": 0, "voteCount": 10, "commentCount": 39, "url": null, "contents": { "documentId": "o3PQswRQ6aBXDzcMs", "html": "

I applaud the SEC's courageous move to ban short selling.  Isn't that brilliant?  I wonder why they didn't think of that during the Great Depression.


However, I feel that this valiant effort does not go far enough.


All selling of stocks should be banned.  Once you buy a stock, you have to hold it forever.


Sure, this might make the market a little less liquid.  But once stock prices can only go up, we'll all be rich!


Or maybe we should just try something simpler: pass a law making it illegal for stock prices to go down.

" } }, { "_id": "hAfmMTiaSjEY8PxXC", "title": "Say It Loud", "pageUrl": "https://www.lesswrong.com/posts/hAfmMTiaSjEY8PxXC/say-it-loud", "postedAt": "2008-09-19T17:34:58.000Z", "baseScore": 64, "voteCount": 49, "commentCount": 20, "url": null, "contents": { "documentId": "hAfmMTiaSjEY8PxXC", "html": "

Reply to: Overconfidence is Stylish


I respectfully defend my lord Will Strunk:


\"If you don't know how to pronounce a word, say it loud! If you don't know how to pronounce a word, say it loud!\"  This comical piece of advice struck me as sound at the time, and I still respect it. Why compound ignorance with inaudibility?  Why run and hide?


How does being vague, tame, colorless, irresolute, help someone to understand your current state of uncertainty?  Any more than mumbling helps them understand a word you aren't sure how to pronounce?


Goofus says:  \"The sky, if such a thing exists at all, might or might not have a property of color, but, if it does have color, then I feel inclined to state that it might be green.\"


Gallant says:   \"70% probability the sky is green.\"


Which of them sounds more confident, more definite?


But which of them has managed to quickly communicate their state of uncertainty?


(And which of them is more likely to actually, in real life, spend any time planning and preparing for the eventuality that the sky is blue?)


I am often accused of overconfidence because my audience is not familiar with the concept of there being iron laws that govern the manipulation of uncertainty. Just because I don't know the object-level doesn't necessarily mean that I am in a state of fear and doubt as to what I should be thinking.  That comes through in my writing, and so I sound confident even when I am in the midst of manipulating uncertainty.  That might be a disadvantage in my attempts to communicate; but I would rather clearly describe my state of uncertainty, and worry afterward about how that makes me look.


And similarly, I have often seen people who spend no effort at all on possibilities other than their mainline, praised for their seeming humility, on account of their indefinite language.  They are skilled at sounding uncertain, which makes them appear modest; but not skilled at handling uncertainty.  That is a political advantage, but it doesn't help them think.  Also the audience is given more slack to interpret the speaker as being on their side; but to deliberately exploit this effect is dishonesty.


Often the caveats we attach to our speech have little to do with any actual humility - actual plans we prepared, and actions we took, against the eventuality of things turning out the other way.  And more to do with being able to avoid admitting to ourselves that we were wrong.  We attached a caveat, didn't we?


Maybe Will Strunk did think it was better to be wrong than irresolute (though that doesn't quite seem to have been a direct quote from him).  If so, then that was Will Strunk's flaw as a rationalist.  Presumably he only knew the part of rationality that pertained to writing.


But the core of Will Strunk's lesson learned from the art of writing, not to obscure your position when you are unsure of it, seems to me very wise indeed.  In particular you should not obscure your position from yourself.


EDIT 2015:  I am not saying that you should act more confident than you are, or fail to communicate uncertainty; this would be dishonesty. I am saying that it is okay to communicate uncertainty by saying “60% probability” rather than two paragraphs of timid language. Talking like this may cause some who know not the Way to criticize you as status-overreaching for asserting so vigorous and definite a probability. This may be a real PR problem depending on your circumstances, but I don’t see it as an inherent ethical problem.

" } }, { "_id": "Yicjw6wSSaPdb83w9", "title": "The Sheer Folly of Callow Youth", "pageUrl": "https://www.lesswrong.com/posts/Yicjw6wSSaPdb83w9/the-sheer-folly-of-callow-youth", "postedAt": "2008-09-19T01:30:29.000Z", "baseScore": 90, "voteCount": 55, "commentCount": 18, "url": null, "contents": { "documentId": "Yicjw6wSSaPdb83w9", "html": "

\"There speaks the sheer folly of callow youth; the rashness of an ignorance so abysmal as to be possible only to one of your ephemeral race...\"
        —Gharlane of Eddore


Once upon a time, years ago, I propounded a mysterious answer to a mysterious question—as I've hinted on several occasions.  The mysterious question to which I propounded a mysterious answer was not, however, consciousness—or rather, not only consciousness.  No, the more embarrassing error was that I took a mysterious view of morality.


I held off on discussing that until now, after the series on metaethics, because I wanted it to be clear that Eliezer1997 had gotten it wrong.


When we last left off, Eliezer1997, not satisfied with arguing in an intuitive sense that superintelligence would be moral, was setting out to argue inescapably that creating superintelligence was the right thing to do.


Well (said Eliezer1997) let's begin by asking the question:  Does life have, in fact, any meaning?


\"I don't know,\" replied Eliezer1997 at once, with a certain note of self-congratulation for admitting his own ignorance on this topic where so many others seemed certain.


\"But,\" he went on—


(Always be wary when an admission of ignorance is followed by \"But\".)


\"But, if we suppose that life has no meaning—that the utility of all outcomes is equal to zero—that possibility cancels out of any expected utility calculation.  We can therefore always act as if life is known to be meaningful, even though we don't know what that meaning is.  How can we find out that meaning?  Considering that humans are still arguing about this, it's probably too difficult a problem for humans to solve.  So we need a superintelligence to solve the problem for us.  As for the possibility that there is no logical justification for one preference over another, then in this case it is no righter or wronger to build a superintelligence, than to do anything else.  This is a real possibility, but it falls out of any attempt to calculate expected utility—we should just ignore it.  To the extent someone says that a superintelligence would wipe out humanity, they are either arguing that wiping out humanity is in fact the right thing to do (even though we see no reason why this should be the case) or they are arguing that there is no right thing to do (in which case their argument that we should not build a superintelligence defeats itself).\"


Ergh.  That was a really difficult paragraph to write.  My past self is always my own most concentrated Kryptonite, because my past self is exactly precisely all those things that the modern me has installed allergies to block.  Truly is it said that parents do all the things they tell their children not to do, which is how they know not to do them; it applies between past and future selves as well.


How flawed is Eliezer1997's argument?  I couldn't even count the ways.  I know memory is fallible, reconstructed each time we recall, and so I don't trust my assembly of these old pieces using my modern mind.  Don't ask me to read my old writings; that's too much pain.


But it seems clear that I was thinking of utility as a sort of stuff, an inherent property.  So that \"life is meaningless\" corresponded to utility=0.  But of course the argument works equally well with utility=100, so that if everything is meaningful but it is all equally meaningful, that should fall out too... Certainly I wasn't then thinking of a utility function as an affine structure in preferences.  I was thinking of \"utility\" as an absolute level of inherent value.

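(Putting the argument and its flaw in modern notation, as a reconstruction rather than anything written down at the time: let M stand for \"life is meaningful\", and suppose the meaningless case assigns every outcome one constant utility c.  Then

```latex
\mathbb{E}[U(a)] \;=\; P(M)\,\mathbb{E}[U(a)\mid M] \;+\; \bigl(1 - P(M)\bigr)\,c .
```

Since c is the same for every action a, it shifts all the expected utilities equally and drops out of the choice; that step is valid, and works as well for c = 100 as for c = 0.  The flaw is in treating meaninglessness as a utility level at all: preferences determine U only up to a positive affine transformation kU + b with k > 0, so \"utility = 0\" names no privileged point.)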

I was thinking of should as a kind of purely abstract essence of compellingness, that-which-makes-you-do-something; so that clearly any mind that derived a should, would be bound by it.  Hence the assumption, which Eliezer1997 did not even think to explicitly note, that a logic that compels an arbitrary mind to do something, is exactly the same as that which human beings mean and refer to when they utter the word \"right\"...

\n

But now I'm trying to count the ways, and if you've been following along, you should be able to handle that yourself.

\n

An important aspect of this whole failure was that, because I'd proved that the case \"life is meaningless\" wasn't worth considering, I didn't think it was necessary to rigorously define \"intelligence\" or \"meaning\".  I'd previously come up with a clever reason for not trying to go all formal and rigorous when trying to define \"intelligence\" (or \"morality\")—namely all the bait-and-switches that past AIfolk, philosophers, and moralists, had pulled with definitions that missed the point.

\n

I draw the following lesson:  No matter how clever the justification for relaxing your standards, or evading some requirement of rigor, it will blow your foot off just the same.

\n

And another lesson:  I was skilled in refutation.  If I'd applied the same level of rejection-based-on-any-flaw to my own position, as I used to defeat arguments brought against me, then I would have zeroed in on the logical gap and rejected the position—if I'd wanted to.  If I'd had the same level of prejudice against it, as I'd had against other positions in the debate.

\n

But this was before I'd heard of Kahneman, before I'd heard the term \"motivated skepticism\", before I'd integrated the concept of an exactly correct state of uncertainty that summarizes all the evidence, and before I knew the deadliness of asking \"Am I allowed to believe?\" for liked positions and \"Am I forced to believe?\" for disliked positions.  I was a mere Traditional Rationalist who thought of the scientific process as a referee between people who took up positions and argued them, may the best side win.

\n

My ultimate flaw was not a liking for \"intelligence\", nor any amount of technophilia and science fiction exalting the siblinghood of sentience.  It surely wasn't my ability to spot flaws.  None of these things could have led me astray, if I had held myself to a higher standard of rigor throughout, and adopted no position otherwise.  Or even if I'd just scrutinized my preferred vague position, with the same demand-of-rigor I applied to counterarguments.

\n

But I wasn't much interested in trying to refute my belief that life had meaning, since my reasoning would always be dominated by cases where life did have meaning.

\n

And with the Singularity at stake, I thought I just had to proceed at all speed using the best concepts I could wield at the time, not pause and shut down everything while I looked for a perfect definition that so many others had screwed up...

\n

No.

\n

No, you don't use the best concepts you can use at the time.

\n

It's Nature that judges you, and Nature does not accept even the most righteous excuses.  If you don't meet the standard, you fail.  It's that simple.  There is no clever argument for why you have to make do with what you have, because Nature won't listen to that argument, won't forgive you because there were so many excellent justifications for speed.

\n

We all know what happened to Donald Rumsfeld, when he went to war with the army he had, instead of the army he needed.

\n

Maybe Eliezer1997 couldn't have conjured the correct model out of thin air.  (Though who knows what would have happened, if he'd really tried...)  And it wouldn't have been prudent for him to stop thinking entirely, until rigor suddenly popped out of nowhere.

\n

But neither was it correct for Eliezer1997 to put his weight down on his \"best guess\", in the absence of precision.  You can use vague concepts in your own interim thought processes, as you search for a better answer, unsatisfied with your current vague hints, and unwilling to put your weight down on them.  You don't build a superintelligence based on an interim understanding.  No, not even the \"best\" vague understanding you have.  That was my mistake—thinking that saying \"best guess\" excused anything.  There was only the standard I had failed to meet.

\n

Of course Eliezer1997 didn't want to slow down on the way to the Singularity, with so many lives at stake, and the very survival of Earth-originating intelligent life, if we got to the era of nanoweapons before the era of superintelligence—

\n

Nature doesn't care about such righteous reasons.  There's just the astronomically high standard needed for success.  Either you match it, or you fail.  That's all.

\n
\n

The apocalypse does not need to be fair to you.
The apocalypse does not need to offer you a chance of success
In exchange for what you've already brought to the table.
The apocalypse's difficulty is not matched to your skills.
The apocalypse's price is not matched to your resources.
If the apocalypse asks you for something unreasonable
And you try to bargain it down a little
(Because everyone has to compromise now and then)
The apocalypse will not try to negotiate back up.

\n
\n

And, oh yes, it gets worse.

\n

How did Eliezer1997 deal with the obvious argument that you couldn't possibly derive an \"ought\" from pure logic, because \"ought\" statements could only be derived from other \"ought\" statements?

\n

Well (observed Eliezer1997), this problem has the same structure as the argument that a cause only proceeds from another cause, or that a real thing can only come of another real thing, whereby you can prove that nothing exists.

\n

Thus (he said) there are three \"hard problems\":  The hard problem of conscious experience, in which we see that qualia cannot arise from computable processes; the hard problem of existence, in which we ask how any existence enters apparently from nothingness; and the hard problem of morality, which is to get to an \"ought\".

\n

These problems are probably linked.  For example, the qualia of pleasure are one of the best candidates for something intrinsically desirable.  We might not be able to understand the hard problem of morality, therefore, without unraveling the hard problem of consciousness.  It's evident that these problems are too hard for humans—otherwise someone would have solved them over the last 2500 years since philosophy was invented.

\n

It's not as if they could have complicated solutions—they're too simple for that.  The problem must just be outside human concept-space.  Since we can see that consciousness can't arise on any computable process, it must involve new physics—physics that our brain uses, but can't understand.  That's why we need superintelligence in order to solve this problem.  Probably it has to do with quantum mechanics, maybe with a dose of tiny closed timelike curves from out of General Relativity; temporal paradoxes might have some of the same irreducibility properties that consciousness seems to demand...

\n

Et cetera, ad nauseam.  You may begin to perceive, in the arc of my Overcoming Bias posts, the letter I wish I could have written to myself.

\n

Of this I learn the lesson:  You cannot manipulate confusion.  You cannot make clever plans to work around the holes in your understanding.  You can't even make \"best guesses\" about things which fundamentally confuse you, and relate them to other confusing things.  Well, you can, but you won't get it right, until your confusion dissolves.  Confusion exists in the mind, not in the reality, and trying to treat it like something you can pick up and move around, will only result in unintentional comedy.

\n

Similarly, you cannot come up with clever reasons why the gaps in your model don't matter.  You cannot draw a border around the mystery, put on neat handles that let you use the Mysterious Thing without really understanding it—like my attempt to make the possibility that life is meaningless cancel out of an expected utility formula.  You can't pick up the gap and manipulate it.

\n

If the blank spot on your map conceals a land mine, then putting your weight down on that spot will be fatal, no matter how good your excuse for not knowing.  Any black box could contain a trap, and there's no way to know except opening up the black box and looking inside.  If you come up with some righteous justification for why you need to rush on ahead with the best understanding you have—the trap goes off.

\n
\n

It's only when you know the rules,
That you realize why you needed to learn;
What would have happened otherwise,
How much you needed to know.

\n
\n

Only knowledge can foretell the cost of ignorance.  The ancient alchemists had no logical way of knowing the exact reasons why it was hard for them to turn lead into gold.  So they poisoned themselves and died.  Nature doesn't care.

\n

But there did come a time when realization began to dawn on me.  To be continued.

" } }, { "_id": "CcBe9aCKDgT5FSoty", "title": "A Prodigy of Refutation", "pageUrl": "https://www.lesswrong.com/posts/CcBe9aCKDgT5FSoty/a-prodigy-of-refutation", "postedAt": "2008-09-18T01:57:50.000Z", "baseScore": 51, "voteCount": 41, "commentCount": 20, "url": null, "contents": { "documentId": "CcBe9aCKDgT5FSoty", "html": "

My Childhood Death Spiral described the core momentum carrying me into my mistake, an affective death spiral around something that Eliezer1996 called \"intelligence\".  I was also a technophile, pre-allergized against fearing the future.  And I'd read a lot of science fiction built around personhood ethics—in which fear of the Alien puts humanity-at-large in the position of the bad guys, mistreating aliens or sentient AIs because they \"aren't human\".

\n

That's part of the ethos you acquire from science fiction—to define your in-group, your tribe, appropriately broadly.  Hence my email address, sentience@pobox.com.

\n

So Eliezer1996 is out to build superintelligence, for the good of humanity and all sentient life.

\n

At first, I think, the question of whether a superintelligence will/could be good/evil didn't really occur to me as a separate topic of discussion.  Just the standard intuition of, \"Surely no supermind would be stupid enough to turn the galaxy into paperclips; surely, being so intelligent, it will also know what's right far better than a human being could.\"

\n

Until I introduced myself and my quest to a transhumanist mailing list, and got back responses along the general lines of (from memory):

\n

\n
\n

Morality is arbitrary—if you say that something is good or bad, you can't be right or wrong about that.  A superintelligence would form its own morality.

\n

Everyone ultimately looks after their own self-interest.  A superintelligence would be no different; it would just seize all the resources.

\n

Personally, I'm a human, so I'm in favor of humans, not Artificial Intelligences.  I don't think we should develop this technology. Instead we should develop the technology to upload humans first.

\n

No one should develop an AI without a control system that watches it and makes sure it can't do anything bad.

\n
\n

Well, that's all obviously wrong, thought Eliezer1996, and he proceeded to kick his opponents' arguments to pieces.  (I've mostly done this in other blog posts, and anything remaining is left as an exercise to the reader.)

\n

It's not that Eliezer1996 explicitly reasoned, \"The world's stupidest man says the sun is shining, therefore it is dark out.\"  But Eliezer1996 was a Traditional Rationalist; he had been inculcated with the metaphor of science as a fair fight between sides who take on different positions, stripped of mere violence and other such exercises of political muscle, so that, ideally, the side with the best arguments can win.

\n

It's easier to say where someone else's argument is wrong, than to get the fact of the matter right; and Eliezer1996 was very skilled at finding flaws.  (So am I.  It's not as if you can solve the danger of that power by refusing to care about flaws.)  From Eliezer1996's perspective, it seemed to him that his chosen side was winning the fight—that he was formulating better arguments than his opponents—so why would he switch sides?

\n

Therefore is it written:  \"Because this world contains many whose grasp of rationality is abysmal, beginning students of rationality win arguments and acquire an exaggerated view of their own abilities.  But it is useless to be superior:  Life is not graded on a curve.  The best physicist in ancient Greece could not calculate the path of a falling apple.  There is no guarantee that adequacy is possible given your hardest effort; therefore spare no thought for whether others are doing worse.\"

\n

You cannot rely on anyone else to argue you out of your mistakes; you cannot rely on anyone else to save you; you and only you are obligated to find the flaws in your positions; if you put that burden down, don't expect anyone else to pick it up.  And I wonder if that advice will turn out not to help most people, until they've personally blown off their own foot, saying to themselves all the while, correctly, \"Clearly I'm winning this argument.\"

\n

Today I try not to take any human being as my opponent.  That just leads to overconfidence.  It is Nature that I am facing off against, who does not match Her problems to your skill, who is not obliged to offer you a fair chance to win in return for a diligent effort, who does not care if you are the best who ever lived, if you are not good enough.

\n

But return to 1996.  Eliezer1996 is going with the basic intuition of \"Surely a superintelligence will know better than we could what is right,\" and offhandedly knocking down various arguments brought against his position.  He was skillful in that way, you see.  He even had a personal philosophy of why it was wise to look for flaws in things, and so on.

\n

I don't mean it as an excuse that no one who argued against Eliezer1996 actually presented him with the dissolution of the mystery—the full reduction of morality that analyzes all his cognitive processes debating \"morality\", a step-by-step walkthrough of the algorithms that make morality feel to him like a fact.  Consider it rather as an indictment, a measure of Eliezer1996's level, that the full solution would have had to be handed to him before anyone could present him with an argument he could not refute.

\n

The few philosophers present did not extract him from his difficulties.  It's not as if a philosopher will say, \"Sorry, morality is understood, it is a settled issue in cognitive science and philosophy, and your viewpoint is simply wrong.\"  The nature of morality is still an open question in philosophy; the debate is still going on.  A philosopher will feel obligated to present you with a list of classic arguments on all sides, most of which Eliezer1996 is quite intelligent enough to knock down, and so he concludes that philosophy is a wasteland.

\n

But wait.  It gets worse.

\n

I don't recall exactly when—it might have been 1997—but the younger me, let's call him Eliezer1997, set out to argue inescapably that creating superintelligence is the right thing to do.  To be continued.

" } }, { "_id": "uNWRXtdwL33ELgWjD", "title": "Raised in Technophilia", "pageUrl": "https://www.lesswrong.com/posts/uNWRXtdwL33ELgWjD/raised-in-technophilia", "postedAt": "2008-09-17T02:06:26.000Z", "baseScore": 70, "voteCount": 53, "commentCount": 33, "url": null, "contents": { "documentId": "uNWRXtdwL33ELgWjD", "html": "

My father used to say that if the present system had been in place a hundred years ago, automobiles would have been outlawed to protect the saddle industry.

\n

One of my major childhood influences was reading Jerry Pournelle's A Step Farther Out, at the age of nine.  It was Pournelle's reply to Paul Ehrlich and the Club of Rome, who were saying, in the 1960s and 1970s, that the Earth was running out of resources and massive famines were only years away.  It was a reply to Jeremy Rifkin's so-called fourth law of thermodynamics; it was a reply to all the people scared of nuclear power and trying to regulate it into oblivion.

\n

I grew up in a world where the lines of demarcation between the Good Guys and the Bad Guys were pretty clear; not an apocalyptic final battle, but a battle that had to be fought over and over again, a battle where you could see the historical echoes going back to the Industrial Revolution, and where you could assemble the historical evidence about the actual outcomes.

\n

On one side were the scientists and engineers who'd driven all the standard-of-living increases since the Dark Ages, whose work supported luxuries like democracy, an educated populace, a middle class, the outlawing of slavery.

\n

On the other side, those who had once opposed smallpox vaccinations, anesthetics during childbirth, steam engines, and heliocentrism:  The theologians calling for a return to a perfect age that never existed, the elderly white male politicians set in their ways, the special interest groups who stood to lose, and the many to whom science was a closed book, fearing what they couldn't understand.

\n

And trying to play the middle, the pretenders to Deep Wisdom, uttering cached thoughts about how technology benefits humanity but only when properly regulated—claiming in defiance of brute historical fact that science of itself was neither good nor evil—setting up solemn-looking bureaucratic committees to make an ostentatious display of their caution—and waiting for their applause.  As if the truth were always a compromise.  And as if anyone could really see that far ahead.  Would humanity have done better if there'd been a sincere, concerned, public debate on the adoption of fire, and committees set up to oversee its use?

\n

\n

When I entered into the problem, I started out allergized against anything that pattern-matched \"Ah, but technology has risks as well as benefits, little one.\"  The presumption-of-guilt was that you were either trying to collect some cheap applause, or covertly trying to regulate the technology into oblivion.  And either way, you were ignoring the historical record, which weighed immensely in favor of technologies that people had once worried about.

\n

Today, Robin Hanson raised the topic of slow FDA approval of drugs approved in other countries.  Someone in the comments pointed out that Thalidomide was sold in 50 countries under 40 names, but that only a small amount was given away in the US, so that there were 10,000 malformed children born globally, but only 17 children in the US.

\n

But how many people have died because of the slow approval in the US, of drugs more quickly approved in other countries—all the drugs that didn't go wrong?  And I ask that question because it's what you can try to collect statistics about—this says nothing about all the drugs that were never developed because the approval process is too long and costly.  According to this source, the FDA's longer approval process prevents 5,000 casualties per year by screening off medications found to be harmful, and causes at least 20,000-120,000 casualties per year just by delaying approval of those beneficial medications that are still developed and eventually approved.

\n

So there really is a reason to be allergic to people who go around saying, \"Ah, but technology has risks as well as benefits\".  There's a historical record showing over-conservativeness, the many silent deaths of regulation being outweighed by a few visible deaths of nonregulation.  If you're really playing the middle, why not say, \"Ah, but technology has benefits as well as risks\"?

\n

Well, and this isn't such a bad description of the Bad Guys.  (Except that it ought to be emphasized a bit harder that these aren't evil mutants but standard human beings acting under a different worldview-gestalt that puts them in the right; some of them will inevitably be more competent than others, and competence counts for a lot.)  Even looking back, I don't think my childhood technophilia was too wrong about what constituted a Bad Guy and what was the key mistake.  But it's always a lot easier to say what not to do, than to get it right.  And one of my fundamental flaws, back then, was thinking that, if you tried as hard as you could to avoid everything the Bad Guys were doing, that made you a Good Guy.

\n

Particularly damaging, I think, was the bad example set by the pretenders to Deep Wisdom trying to stake out a middle way; smiling condescendingly at technophiles and technophobes alike, and calling them both immature.  Truly this is a wrong way; and in fact, the notion of trying to stake out a middle way generally, is usually wrong; the Right Way is not a compromise with anything, it is the clean manifestation of its own criteria.

\n

But that made it more difficult for the young Eliezer to depart from the charge-straight-ahead verdict, because any departure felt like joining the pretenders to Deep Wisdom.

\n

The first crack in my childhood technophilia appeared in, I think, 1997 or 1998, at the point where I noticed my fellow technophiles saying foolish things about how molecular nanotechnology would be an easy problem to manage.  (As you may be noticing yet again, the young Eliezer was driven to a tremendous extent by his ability to find flaws—I even had a personal philosophy of why that sort of thing was a good idea.)

\n

The nanotech stuff would be a separate post, and maybe one that should go on a different blog.  But there was a debate going on about molecular nanotechnology, and whether offense would be asymmetrically easier than defense.  And there were people arguing that defense would be easy.  In the domain of nanotech, for Ghu's sake, programmable matter, when we can't even seem to get the security problem solved for computer networks where we can observe and control every one and zero.  People were talking about unassailable diamondoid walls.  I observed that diamond doesn't stand off a nuclear weapon, that offense has had defense beat since 1945 and nanotech didn't look likely to change that.

\n

And by the time that debate was over, it seems that the young Eliezer—caught up in the heat of argument—had managed to notice, for the first time, that the survival of Earth-originating intelligent life stood at risk.

\n

It seems so strange, looking back, to think that there was a time when I thought that only individual lives were at stake in the future.  What a profoundly friendlier world that was to live in... though it's not as if I were thinking that at the time.  I didn't reject the possibility so much as manage to never see it in the first place.  Once the topic actually came up, I saw it.  I don't really remember how that trick worked.  There's a reason why I refer to my past self in the third person.

\n

It may sound like Eliezer1998 was a complete idiot, but that would be a comfortable out, in a way; the truth is scarier.  Eliezer1998 was a sharp Traditional Rationalist, as such things went.  I knew hypotheses had to be testable, I knew that rationalization was not a permitted mental operation, I knew how to play Rationalist's Taboo, I was obsessed with self-awareness... I didn't quite understand the concept of \"mysterious answers\"... and no Bayes or Kahneman at all.  But a sharp Traditional Rationalist, far above average...  So what?  Nature isn't grading us on a curve.  One step of departure from the Way, one shove of undue influence on your thought processes, can repeal all other protections.

\n

One of the chief lessons I derive from looking back at my personal history is that it's no wonder that, out there in the real world, a lot of people think that \"intelligence isn't everything\", or that rationalists don't do better in real life.  A little rationality, or even a lot of rationality, doesn't pass the astronomically high barrier required for things to actually start working.

\n

Let not my misinterpretation of the Right Way be blamed on Jerry Pournelle, my father, or science fiction generally.  I think the young Eliezer's personality imposed quite a bit of selectivity on which parts of their teachings made it through.  It's not as if Pournelle didn't say:  The rules change once you leave Earth, the cradle; if you're careless sealing your pressure suit just once, you die.  He said it quite a bit.  But the words didn't really seem important, because that was something that happened to third-party characters in the novels—the main character didn't usually die halfway through, for some reason.

\n

What was the lens through which I filtered these teachings?  Hope. Optimism.  Looking forward to a brighter future.  That was the fundamental meaning of A Step Farther Out unto me, the lesson I took in contrast to the Sierra Club's doom-and-gloom.  On one side was rationality and hope; on the other, ignorance and despair.

\n

Some teenagers think they're immortal and ride motorcycles.  I was under no such illusion and quite reluctant to learn to drive, considering how unsafe those hurtling hunks of metal looked.  But there was something more important to me than my own life:  The Future.  And I acted as if that was immortal.  Lives could be lost, but not the Future.

\n

And when I noticed that nanotechnology really was going to be a potentially extinction-level challenge?

\n

The young Eliezer thought, explicitly, \"Good heavens, how did I fail to notice this thing that should have been obvious?  I must have been too emotionally attached to the benefits I expected from the technology; I must have flinched away from the thought of human extinction.\"

\n

And then...

\n

I didn't declare a Halt, Melt, and Catch Fire.  I didn't rethink all the conclusions that I'd developed with my prior attitude.  I just managed to integrate it into my worldview, somehow, with a minimum of propagated changes.  Old ideas and plans were challenged, but my mind found reasons to keep them.  There was no systemic breakdown, unfortunately.

\n

Most notably, I decided that we had to run full steam ahead on AI, so as to develop it before nanotechnology.  Just like I'd been originally planning to do, but now, with a different reason.

\n

I guess that's what most human beings are like, isn't it?  Traditional Rationality wasn't enough to change that.

\n

But there did come a time when I fully realized my mistake.  It just took a stronger boot to the head.  To be continued.

" } }, { "_id": "BA7dRRrzMLyvfJr9J", "title": "My Best and Worst Mistake", "pageUrl": "https://www.lesswrong.com/posts/BA7dRRrzMLyvfJr9J/my-best-and-worst-mistake", "postedAt": "2008-09-16T00:43:50.000Z", "baseScore": 73, "voteCount": 50, "commentCount": 17, "url": null, "contents": { "documentId": "BA7dRRrzMLyvfJr9J", "html": "

Yesterday I covered the young Eliezer's affective death spiral around something that he called \"intelligence\".  Eliezer1996, or even Eliezer1999 for that matter, would have refused to try and put a mathematical definition on it—consciously, deliberately refused.  Indeed, he would have been loath to put any definition on \"intelligence\" at all.

\n

Why?  Because there's a standard bait-and-switch problem in AI, wherein you define \"intelligence\" to mean something like \"logical reasoning\" or \"the ability to withdraw conclusions when they are no longer appropriate\", and then you build a cheap theorem-prover or an ad-hoc nonmonotonic reasoner, and then say, \"Lo, I have implemented intelligence!\"  People came up with poor definitions of intelligence—focusing on correlates rather than cores—and then they chased the surface definition they had written down, forgetting about, you know, actual intelligence.  It's not like Eliezer1996 was out to build a career in Artificial Intelligence.  He just wanted a mind that would actually be able to build nanotechnology.  So he wasn't tempted to redefine intelligence for the sake of puffing up a paper.

\n

Looking back, it seems to me that quite a lot of my mistakes can be defined in terms of being pushed too far in the other direction by seeing someone else's stupidity:  Having seen attempts to define \"intelligence\" abused so often, I refused to define it at all.  What if I said that intelligence was X, and it wasn't really X?  I knew in an intuitive sense what I was looking for—something powerful enough to take stars apart for raw material—and I didn't want to fall into the trap of being distracted from that by definitions.

\n

Similarly, having seen so many AI projects brought down by physics envy—trying to stick with simple and elegant math, and being constrained to toy systems as a result—I generalized that any math simple enough to be formalized in a neat equation was probably not going to work for, you know, real intelligence.  \"Except for Bayes's Theorem,\" Eliezer2000 added; which, depending on your viewpoint, either mitigates the totality of his offense, or shows that he should have suspected the entire generalization instead of trying to add a single exception.

\n

\n

If you're wondering why Eliezer2000 thought such a thing—disbelieved in a math of intelligence—well, it's hard for me to remember this far back.  It certainly wasn't that I ever disliked math.  If I had to point out a root cause, it would be reading too few, too popular, and the wrong Artificial Intelligence books.

\n

But then I didn't think the answers were going to come from Artificial Intelligence; I had mostly written it off as a sick, dead field.  So it's no wonder that I spent too little time investigating it.  I believed in the cliche about Artificial Intelligence overpromising.  You can fit that into the pattern of \"too far in the opposite direction\"—the field hadn't delivered on its promises, so I was ready to write it off.  As a result, I didn't investigate hard enough to find the math that wasn't fake.

\n

My youthful disbelief in a mathematics of general intelligence was simultaneously one of my all-time worst mistakes, and one of my all-time best mistakes.

\n

Because I disbelieved that there could be any simple answers to intelligence, I went and I read up on cognitive psychology, functional neuroanatomy, computational neuroanatomy, evolutionary psychology, evolutionary biology, and more than one branch of Artificial Intelligence.  When I had what seemed like simple bright ideas, I didn't stop there, or rush off to try and implement them, because I knew that even if they were true, even if they were necessary, they wouldn't be sufficient: intelligence wasn't supposed to be simple, it wasn't supposed to have an answer that fit on a T-Shirt.  It was supposed to be a big puzzle with lots of pieces; and when you found one piece, you didn't run off holding it high in triumph, you kept on looking.  Try to build a mind with a single missing piece, and it might be that nothing interesting would happen.

\n

I was wrong in thinking that Artificial Intelligence, the academic field, was a desolate wasteland; and even wronger in thinking that there couldn't be math of intelligence.  But I don't regret studying e.g. functional neuroanatomy, even though I now think that an Artificial Intelligence should look nothing like a human brain.  Studying neuroanatomy meant that I went in with the idea that if you broke up a mind into pieces, the pieces were things like \"visual cortex\" and \"cerebellum\"—rather than \"stock-market trading module\" or \"commonsense reasoning module\", which is a standard wrong road in AI.

\n

Studying fields like functional neuroanatomy and cognitive psychology gave me a very different idea of what minds had to look like, than you would get from just reading AI books—even good AI books.

\n

When you blank out all the wrong conclusions and wrong justifications, and just ask what that belief led the young Eliezer to actually do...

\n

Then the belief that Artificial Intelligence was sick and that the real answer would have to come from healthier fields outside, led him to study lots of cognitive sciences;

\n

The belief that AI couldn't have simple answers, led him to not stop prematurely on one brilliant idea, and to accumulate lots of information;

\n

The belief that you didn't want to define intelligence, led to a situation in which he studied the problem for a long time before, years later, he started to propose systematizations.

\n

This is what I refer to when I say that this is one of my all-time best mistakes.

\n

Looking back, years afterward, I drew a very strong moral, to this effect:

\n

What you actually end up doing, screens off the clever reason why you're doing it.

\n

Contrast amazing clever reasoning that leads you to study many sciences, to amazing clever reasoning that says you don't need to read all those books.  Afterward, when your amazing clever reasoning turns out to have been stupid, you'll have ended up in a much better position, if your amazing clever reasoning was of the first type.

\n

When I look back upon my past, I am struck by the number of semi-accidental successes, the number of times I did something right for the wrong reason.  From your perspective, you should chalk this up to the anthropic principle: if I'd fallen into a true dead end, you probably wouldn't be hearing from me on this blog.  From my perspective it remains something of an embarrassment.  My Traditional Rationalist upbringing provided a lot of directional bias to those \"accidental successes\"—biased me toward rationalizing reasons to study rather than not study, prevented me from getting completely lost, helped me recover from mistakes.  Still, none of that was the right action for the right reason, and that's a scary thing to look back on your youthful history and see.  One of my primary purposes in writing on Overcoming Bias is to leave a trail to where I ended up by accident—to obviate the role that luck played in my own forging as a rationalist.

\n

So what makes this one of my all-time worst mistakes?  Because sometimes \"informal\" is another way of saying \"held to low standards\".  I had amazing clever reasons why it was okay for me not to precisely define \"intelligence\", and certain of my other terms as well: namely, other people had gone astray by trying to define it.  This was a gate through which sloppy reasoning could enter.

\n

So should I have jumped ahead and tried to forge an exact definition right away?  No, all the reasons why I knew this was the wrong thing to do, were correct; you can't conjure the right definition out of thin air if your knowledge is not adequate.

\n

You can't get to the definition of fire if you don't know about atoms and molecules; you're better off saying \"that orangey-bright thing\".  And you do have to be able to talk about that orangey-bright stuff, even if you can't say exactly what it is, to investigate fire.  But these days I would say that all reasoning on that level is something that can't be trusted—rather it's something you do on the way to knowing better, but you don't trust it, you don't put your weight down on it, you don't draw firm conclusions from it, no matter how inescapable the informal reasoning seems.

\n

The young Eliezer put his weight down on the wrong floor tile—stepped onto a loaded trap.  To be continued.

" } }, { "_id": "uD9TDHPwQ5hx4CgaX", "title": "My Childhood Death Spiral", "pageUrl": "https://www.lesswrong.com/posts/uD9TDHPwQ5hx4CgaX/my-childhood-death-spiral", "postedAt": "2008-09-15T03:42:27.000Z", "baseScore": 72, "voteCount": 61, "commentCount": 106, "url": null, "contents": { "documentId": "uD9TDHPwQ5hx4CgaX", "html": "

My parents always used to downplay the value of intelligence.  And play up the value of—effort, as recommended by the latest research?  No, not effort.  Experience.  A nicely unattainable hammer with which to smack down a bright young child, to be sure.  That was what my parents told me when I questioned the Jewish religion, for example.  I tried laying out an argument, and I was told something along the lines of:  \"Logic has limits, you'll understand when you're older that experience is the important thing, and then you'll see the truth of Judaism.\"  I didn't try again.  I made one attempt to question Judaism in school, got slapped down, didn't try again.  I've never been a slow learner.

\n

Whenever my parents were doing something ill-advised, it was always, \"We know better because we have more experience.  You'll understand when you're older: maturity and wisdom are more important than intelligence.\"

\n

If this was an attempt to focus the young Eliezer on intelligence uber alles, it was the most wildly successful example of reverse psychology I've ever heard of.

\n

But my parents aren't that cunning, and the results weren't exactly positive.

\n

\n

For a long time, I thought that the moral of this story was that experience was no match for sheer raw native intelligence.  It wasn't until a lot later, in my twenties, that I looked back and realized that I couldn't possibly have been more intelligent than my parents before puberty, with my brain not even fully developed.  At age eleven, when I was already nearly a full-blown atheist, I could not have defeated my parents in any fair contest of mind.  My SAT scores were high for an 11-year-old, but they wouldn't have beaten my parents' SAT scores in full adulthood.  In a fair fight, my parents' intelligence and experience could have stomped any prepubescent child flat.  It was dysrationalia that did them in; they used their intelligence only to defeat itself.

\n

But that understanding came much later, when my intelligence had processed and distilled many more years of experience. 

\n

The moral I derived when I was young, was that anyone who downplayed the value of intelligence didn't understand intelligence at all.  My own intelligence had affected every aspect of my life and mind and personality; that was massively obvious, seen at a backward glance.  \"Intelligence has nothing to do with wisdom or being a good person\"—oh, and does self-awareness have nothing to do with wisdom, or being a good person?  Modeling yourself takes intelligence.  For one thing, it takes enough intelligence to learn evolutionary psychology.

\n

We are the cards we are dealt, and intelligence is the unfairest of all those cards.  More unfair than wealth or health or home country, unfairer than your happiness set-point.  People have difficulty accepting that life can be that unfair; it's not a happy thought.  \"Intelligence isn't as important as X\" is one way of turning away from the unfairness, refusing to deal with it, thinking a happier thought instead.  It's a temptation, both to those dealt poor cards, and to those dealt good ones.  Just as downplaying the importance of money is a temptation both to the poor and to the rich.

\n

But the young Eliezer was a transhumanist.  Giving away IQ points was going to take more work than if I'd just been born with extra money.  But it was a fixable problem, to be faced up to squarely, and fixed.  Even if it took my whole life.  \"The strong exist to serve the weak,\" wrote the young Eliezer, \"and can only discharge that duty by making others equally strong.\"  I was annoyed with the Randian and Nietzschean trends in SF, and as you may have grasped, the young Eliezer had a tendency to take things too far in the other direction.  No one exists only to serve.  But I tried, and I don't regret that.  If you call that teenage folly, it's rare to see adult wisdom doing better.

\n

Everyone needed more intelligence.  Including me, I was careful to pronounce.  Far be it from me to declare a new world order with myself on top—that was what a stereotyped science fiction villain would do, or worse, a typical teenager, and I would never have allowed myself to be so cliched.  No, everyone needed to be smarter.  We were all in the same boat:  A fine, uplifting thought.

\n

Eliezer1995 had read his science fiction.  He had morals, and ethics, and could see the more obvious traps.  No screeds on Homo novis for him.  No line drawn between himself and others.  No elaborate philosophy to put himself at the top of the heap.  It was too obvious a failure mode.  Yes, he was very careful to call himself stupid too, and never claim moral superiority.  Well, and I don't see it so differently now, though I no longer make such a dramatic production out of my ethics.  (Or maybe it would be more accurate to say that I'm tougher about when I allow myself a moment of self-congratulation.)

\n

I say all this to emphasize that Eliezer1995 wasn't so undignified as to fail in any obvious way.

\n

And then Eliezer1996 encountered the concept of the Singularity.  Was it a thunderbolt of revelation?  Did I jump out of my chair and shout \"Eurisko!\"?  Nah.  I wasn't that much of a drama queen.  It was just massively obvious in retrospect that smarter-than-human intelligence was going to change the future more fundamentally than any mere material science.  And I knew at once that this was what I would be doing with the rest of my life, creating the Singularity.  Not nanotechnology like I'd thought when I was eleven years old; nanotech would only be a tool brought forth of intelligence.  Why, intelligence was even more powerful, an even greater blessing, than I'd realized before.

\n

Was this a happy death spiral?  As it turned out later, yes: that is, it led to the adoption even of false happy beliefs about intelligence.  Perhaps you could draw the line at the point where I started believing that surely the lightspeed limit would be no barrier to superintelligence.  (It's not unthinkable, but I wouldn't bet on it.)

\n

But the real wrong turn came later, at the point where someone said, \"Hey, how do you know that superintelligence will be moral?  Intelligence has nothing to do with being a good person, you know—that's what we call wisdom, young prodigy.\"

\n

And lo, it seemed obvious to the young Eliezer, that this was mere denial.  Certainly, his own painstakingly constructed code of ethics had been put together using his intelligence and resting on his intelligence as a base.  Any fool could see that intelligence had a great deal to do with ethics, morality, and wisdom; just try explaining the Prisoner's Dilemma to a chimpanzee, right?

\n

Surely, then, superintelligence would necessarily imply supermorality.

\n

Thus is it said:  \"Parents do all the things they tell their children not to do, which is how they know not to do them.\"  To be continued, hopefully tomorrow.

\n

Post Scriptum:  How my views on intelligence have changed since then... let's see:  When I think of poor hands dealt to humans, these days, I think first of death and old age.  Everyone's got to have some intelligence level or other, and the important thing from a fun-theoretical perspective is that it ought to increase over time, not decrease like now.  Isn't that a clever way of feeling better?  But I don't work so hard now at downplaying my own intelligence, because that's just another way of calling attention to it.  I'm smart for a human, if the topic should arise, and how I feel about that is my own business.  The part about intelligence being the lever that lifts worlds is the same.  Except that intelligence has become less mysterious unto me, so that I now more clearly see intelligence as something embedded within physics.  Superintelligences may go FTL if it happens to be permitted by the true physical laws, and if not, then not.

" } }, { "_id": "D7EcMhL26zFNbJ3ED", "title": "Optimization", "pageUrl": "https://www.lesswrong.com/posts/D7EcMhL26zFNbJ3ED/optimization", "postedAt": "2008-09-13T16:00:00.000Z", "baseScore": 57, "voteCount": 42, "commentCount": 45, "url": null, "contents": { "documentId": "D7EcMhL26zFNbJ3ED", "html": "

"However many ways there may be of being alive, it is certain that there are vastly more ways of being dead."
        -- Richard Dawkins

In the coming days, I expect to be asked:  "Ah, but what do you mean by 'intelligence'?"  By way of untangling some of my dependency network for future posts, I here summarize some of my notions of "optimization".

\n\n

Consider a car; say, a Toyota Corolla.  The Corolla is made up of some number of atoms; say, on the rough order of 10^29.  If you consider all possible ways to arrange 10^29 atoms, only an infinitesimally tiny fraction of possible configurations would qualify as a car; if you picked one random configuration per Planck interval, many ages of the universe would pass before you hit on a wheeled wagon, let alone an internal combustion engine.

\n\n

Even restricting our attention to running vehicles, there is an astronomically huge design space of possible vehicles that could be composed of the same atoms as the Corolla, and most of them, from the perspective of a human user, won't work quite as well.  We could take the parts in the Corolla's air conditioner, and mix them up in thousands of possible configurations; nearly all these configurations would result in a vehicle lower in our preference ordering, still recognizable as a car but lacking a working air conditioner.

\n\n

So there are many more configurations corresponding to nonvehicles, or vehicles lower in our preference ranking, than vehicles ranked greater than or equal to the Corolla.

\n\n

Similarly with the problem of planning, which also involves hitting tiny targets in a huge search space.  Consider the number of possible legal chess moves versus the number of winning moves.

\n\n

Which suggests one theoretical way to measure optimization - to quantify the power of a mind or mindlike process:

Put a measure on the state space - if it's discrete, you can just count.  Then collect all the states which are equal to or greater than the observed outcome, in that optimization process's implicit or explicit preference ordering.  Sum or integrate over the total size of all such states.  Divide by the total volume of the state space.  This gives you the power of the optimization process measured in terms of the improbabilities that it can produce - that is, improbability of a random selection producing an equally good result, relative to a measure and a preference ordering.

\n\n

If you prefer, you can take the reciprocal of this improbability (1/1000 becomes 1000) and then take the logarithm base 2.  This gives you the power of the optimization process in bits.  An optimizer that exerts 20 bits of power can hit a target that's one in a million.
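
\n\n

A minimal sketch of that counting measure in Python (the state space, preference function, and numbers below are invented for illustration; the post only gives the abstract recipe, and assumes a uniform measure):

```python
import math

def optimization_power_bits(states, preference, outcome):
    """Optimization power in bits: log2 of (size of the state space /
    number of states ranked at least as high as the observed outcome),
    under a uniform measure, i.e. plain counting."""
    at_least_as_good = sum(
        1 for s in states if preference(s) >= preference(outcome)
    )
    return math.log2(len(states) / at_least_as_good)

# Hypothetical example: a million equally-weighted states, with
# preference given by the state's index.  Landing in the top 1,000
# is a one-in-a-thousand hit, i.e. about 10 bits of power.
states = range(1_000_000)
preference = lambda s: s
print(optimization_power_bits(states, preference, outcome=999_000))
# -> 9.9658 bits; hitting the single best state would be ~19.93 bits.
```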

\n\n

When I think you're a powerful intelligence, and I think I know something about your preferences, then I'll predict that you'll steer reality into regions that are higher in your preference ordering.   The more intelligent I believe you are, the more probability I'll concentrate into outcomes that I believe are higher in your preference ordering.

\n\n

There's a number of subtleties here, some less obvious than others.  I'll return to this whole topic in a later sequence.  Meanwhile:

\n\n

* A tiny fraction of the design space does describe vehicles that we would recognize as faster, more fuel-efficient, safer than the Corolla, so the Corolla is not optimal.  The Corolla is, however, optimized, because the human designer had to hit an infinitesimal target in design space just to create a working car, let alone a car of Corolla-equivalent quality.  This is not to be taken as praise of the Corolla, as such; you could say the same of the Hillman Minx. You can't build so much as a wooden wagon by sawing boards into random shapes and nailing them together according to coinflips.

\n\n

* When I talk to a popular audience on this topic, someone usually says:  "But isn't this what the creationists argue?  That if you took a bunch of atoms and put them in a box and shook them up, it would be astonishingly improbable for a fully functioning rabbit to fall out?"  But the logical flaw in the creationists' argument is not that randomly reconfiguring molecules would by pure chance assemble a rabbit.  The logical flaw is that there is a process, natural selection, which, through the non-chance retention of chance mutations, selectively accumulates complexity, until a few billion years later it produces a rabbit.

\n\n

* I once heard a senior mainstream AI type suggest that we might try to quantify the intelligence of an AI system in terms of its RAM, processing power, and sensory input bandwidth.  This at once reminded me of a quote from Dijkstra:  "If we wish to count lines of code, we should not regard them as 'lines produced' but as 'lines spent': the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger."  If you want to measure the intelligence of a system, I would suggest measuring its optimization power as before, but then dividing by the resources used.  Or you might measure the degree of prior cognitive optimization required to achieve the same result using equal or fewer resources.  Intelligence, in other words, is efficient optimization.  This is why I say that evolution is stupid by human standards, even though we can't yet build a butterfly:  Human engineers use vastly less time/material resources than a global ecosystem of millions of species proceeding through biological evolution, and so we're catching up fast.

\n\n

* The notion of a "powerful optimization process" is necessary and sufficient to a discussion about an Artificial Intelligence that could harm or benefit humanity on a global scale.  If you say that an AI is mechanical and therefore "not really intelligent", and it outputs an action sequence that hacks into the Internet, constructs molecular nanotechnology and wipes the solar system clean of human(e) intelligence, you are still dead.  Conversely, an AI that only has a very weak ability to steer the future into regions high in its preference ordering, will not be able to much benefit or much harm humanity.

\n\n

* How do you know a mind's preference ordering?  If this can't be taken for granted, then you use some of your evidence to infer the mind's preference ordering, and then use the inferred preferences to infer the mind's power, then use those two beliefs to testably predict future outcomes.  Or you can use the Minimum Message Length formulation of Occam's Razor: if you send me a message telling me what a mind wants and how powerful it is, then this should enable you to compress your description of future events and observations, so that the total message is shorter.  Otherwise there is no predictive benefit to viewing a system as an optimization process.

\n\n

* In general, it is useful to think of a process as "optimizing" when it is easier to predict by thinking about its goals, than by trying to predict its exact internal state and exact actions.  If you're playing chess against Deep Blue, you will find it much easier to predict that Deep Blue will win (that is, the final board position will occupy the class of states previously labeled "wins for Deep Blue") than to predict the exact final board position or Deep Blue's exact sequence of moves.  Normally, it is not possible to predict, say, the final state of a billiards table after a shot, without extrapolating all the events along the way.

\n\n

* Although the human cognitive architecture uses the same label "good" to reflect judgments about terminal values and instrumental values, this doesn't mean that all sufficiently powerful optimization processes share the same preference ordering.  Some possible minds will be steering the future into regions that are not good.

\n\n

* If you came across alien machinery in space, then you might be able to infer the presence of optimization (and hence presumably powerful optimization processes standing behind it as a cause) without inferring the aliens' final goals, by way of noticing the fulfillment of convergent instrumental values.  You can look at cables through which large electrical currents are running, and be astonished to realize that the cables are flexible high-temperature high-amperage superconductors; an amazingly good solution to the subproblem of transporting electricity that is generated in a central location and used distantly.  You can assess this, even if you have no idea what the electricity is being used for.

\n\n

* If you want to take probabilistic outcomes into account in judging a mind's wisdom, then you have to know or infer a utility function for the mind, not just a preference ranking for the optimization process.  Then you can ask how many possible plans would have equal or greater expected utility.  This assumes that you have some probability distribution, which you believe to be true; but if the other mind is smarter than you, it may have a better probability distribution, in which case you will underestimate its optimization power.  The chief sign of this would be if the mind consistently achieves higher average utility than the average expected utility you assign to its plans.

\n\n

* When an optimization process seems to have an inconsistent preference ranking - for example, it's quite possible in evolutionary biology for allele A to beat out allele B, which beats allele C, which beats allele A - then you can't interpret the system as performing optimization as it churns through its cycles.  Intelligence is efficient optimization; churning through preference cycles is stupid, unless the interim states of churning have high terminal utility.

\n\n

* For domains outside the small and formal, it is not possible to exactly measure optimization, just as it is not possible to do exact Bayesian updates or to perfectly maximize expected utility.  Nonetheless, optimization can be a useful concept, just like the concept of Bayesian probability or expected utility - it describes the ideal you're trying to approximate with other measures.

" } }, { "_id": "7Au7kvRAPREm3ADcK", "title": "Psychic Powers", "pageUrl": "https://www.lesswrong.com/posts/7Au7kvRAPREm3ADcK/psychic-powers", "postedAt": "2008-09-12T19:28:53.000Z", "baseScore": 47, "voteCount": 33, "commentCount": 89, "url": null, "contents": { "documentId": "7Au7kvRAPREm3ADcK", "html": "
\n

If the \"boring view\" of reality is correct, then you can never predict anything irreducible because you are reducible.  You can never get Bayesian confirmation for a hypothesis of irreducibility, because any prediction you can make is, therefore, something that could also be predicted by a reducible thing, namely your brain.

\n
\n

Benja Fallenstein commented:

\n
\n

I think that while you can in this case never devise an empirical test whose outcome could logically prove irreducibility, there is no clear reason to believe that you cannot devise a test whose counterfactual outcome in an irreducible world would make irreducibility subjectively much more probable (given an Occamian prior).

\n

Without getting into reducibility/irreducibility, consider the scenario that the physical universe makes it possible to build a hypercomputer—that performs operations on arbitrary real numbers, for example—but that our brains do not actually make use of this: they can be simulated perfectly well by an ordinary Turing machine, thank you very much...

\n
\n

Well, that's a very intelligent argument, Benja Fallenstein.  But I have a crushing reply to your argument, such that, once I deliver it, you will at once give up further debate with me on this particular point:

\n

\n

You're right.

\n

Alas, I don't get modesty credit on this one, because after publishing yesterday's post I realized a similar flaw on my own—this one concerning Occam's Razor and psychic powers:

\n

If beliefs and desires are irreducible and ontologically basic entities, or have an ontologically basic component not covered by existing science, that would make it far more likely that there was an ontological rule governing the interaction of different minds—an interaction which bypassed ordinary \"material\" means of communication like sound waves, known to existing science.

\n

If naturalism is correct, then there exists a conjugate reductionist model that makes the same predictions as any concrete prediction that any parapsychologist can make about telepathy.

\n

Indeed, if naturalism is correct, the only reason we can conceive of beliefs as \"fundamental\" is due to lack of self-knowledge of our own neurons—that the peculiar reflective architecture of our own minds exposes the \"belief\" class but hides the machinery behind it.

\n

Nonetheless, the discovery of information transfer between brains, in the absence of any known material connection between them, is probabilistically a privileged prediction of supernatural models (those that contain ontologically basic mental entities).  Just because it is so much simpler in that case to have a new law relating beliefs between different minds, compared to the \"boring\" model where beliefs are complex constructs of neurons.

\n

The hope of psychic powers arises from treating beliefs and desires as sufficiently fundamental objects that they can have unmediated connections to reality.  If beliefs are patterns of neurons made of known material, with inputs given by organs like eyes constructed of known material, and with outputs through muscles constructed of known material, and this seems sufficient to account for all known mental powers of humans, then there's no reason to expect anything more—no reason to postulate additional connections.  This is why reductionists don't expect psychic powers.  Thus, observing psychic powers would be strong evidence for the supernatural in Richard Carrier's sense.

\n

We have an Occam rule that counts the number of ontologically basic classes and ontologically basic laws in the model, and penalizes the count of entities.  If naturalism is correct, then the attempt to count \"belief\" or the \"relation between belief and reality\" as a single basic entity, is simply misguided anthropomorphism; we are only tempted to it by a quirk of our brain's internal architecture.  But if you just go with that misguided view, then it assigns a much higher probability to psychic powers than does naturalism, because you can implement psychic powers using apparently simpler laws.

\n

Hence the actual discovery of psychic powers would imply that the human-naive Occam rule was in fact better-calibrated than the sophisticated naturalistic Occam rule.  It would argue that reductionists had been wrong all along in trying to take apart the brain; that what our minds exposed as a seemingly simple lever, was in fact a simple lever.  The naive dualists would have been right from the beginning, which is why their ancient wish would have been enabled to come true.

\n

So telepathy, and the ability to influence events just by wishing at them, and precognition, would all, if discovered, be strong Bayesian evidence in favor of the hypothesis that beliefs are ontologically fundamental.  Not logical proof, but strong Bayesian evidence.
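
As a toy illustration of the size of that update, here is a minimal Python sketch; every number in it is invented for the example, not an output of any real model:

    prior_supernatural = 0.01    # assumed prior for ontologically basic mental entities
    prior_natural = 0.99

    # Telepathy is a privileged prediction of the supernatural model: one new
    # law relating beliefs is cheap if beliefs are basic entities.
    p_telepathy_if_supernatural = 0.5         # assumed
    p_telepathy_if_natural = 0.000001         # assumed: no simple reductionist pathway

    def posterior_supernatural(saw_telepathy):
        # Bayes: P(model | evidence) is proportional to P(model) * P(evidence | model).
        like_s = p_telepathy_if_supernatural if saw_telepathy else 1 - p_telepathy_if_supernatural
        like_n = p_telepathy_if_natural if saw_telepathy else 1 - p_telepathy_if_natural
        joint_s = prior_supernatural * like_s
        joint_n = prior_natural * like_n
        return joint_s / (joint_s + joint_n)

    print(posterior_supernatural(True))     # ~0.9998: strong Bayesian evidence, not proof
    print(posterior_supernatural(False))    # ~0.005: absence of telepathy is mild evidence against

Observing telepathy moves the posterior almost all the way over; failing to observe it merely nudges the posterior down, which matches the asymmetry described above.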

\n

If reductionism is correct, then any science-fiction story containing psychic powers, can be output by a system of simple elements (i.e., the story's author's brain); but if we in fact discover psychic powers, that would make it much more probable that events were occurring which could not in fact be described by reductionist models.

\n

Which just goes to say:  The existence of psychic powers is a privileged probabilistic assertion of non-reductionist worldviews—they own that advance prediction; they devised it and put it forth, in defiance of reductionist expectations.  So by the laws of science, if psychic powers are discovered, non-reductionism wins.

\n

I am therefore confident in dismissing psychic powers as a priori implausible, despite all the claimed experimental evidence in favor of them.

" } }, { "_id": "u6JzcFtPGiznFgDxP", "title": "Excluding the Supernatural", "pageUrl": "https://www.lesswrong.com/posts/u6JzcFtPGiznFgDxP/excluding-the-supernatural", "postedAt": "2008-09-12T00:12:37.000Z", "baseScore": 80, "voteCount": 78, "commentCount": 149, "url": null, "contents": { "documentId": "u6JzcFtPGiznFgDxP", "html": "

Occasionally, you hear someone claiming that creationism should not be taught in schools, especially not as a competing hypothesis to evolution, because creationism is a priori and automatically excluded from scientific consideration, in that it invokes the \"supernatural\".

\n

So... is the idea here, that creationism could be true, but even if it were true, you wouldn't be allowed to teach it in science class, because science is only about \"natural\" things?

\n

It seems clear enough that this notion stems from the desire to avoid a confrontation between science and religion.  You don't want to come right out and say that science doesn't teach Religious Claim X because X has been tested by the scientific method and found false.  So instead, you can... um... claim that science is excluding hypothesis X a priori.  That way you don't have to discuss how experiment has falsified X a posteriori.

\n

Of course this plays right into the creationist claim that Intelligent Design isn't getting a fair shake from science—that science has prejudged the issue in favor of atheism, regardless of the evidence.  If science excluded Intelligent Design a priori, this would be a justified complaint!

\n

But let's back up a moment.  The one comes to you and says:  \"Intelligent Design is excluded from being science a priori, because it is 'supernatural', and science only deals in 'natural' explanations.\"

\n

What exactly do they mean, \"supernatural\"?  Is any explanation invented by someone with the last name \"Cohen\" a supernatural one?  If we're going to summarily kick a set of hypotheses out of science, what is it that we're supposed to exclude?

\n

By far the best definition I've ever heard of the supernatural is Richard Carrier's:  A \"supernatural\" explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.

\n

\n

This is the difference, for example, between saying that water rolls downhill because it wants to be lower, and setting forth differential equations that claim to describe only motions, not desires.  It's the difference between saying that a tree puts forth leaves because of a tree spirit, versus examining plant biochemistry.  Cognitive science takes the fight against supernaturalism into the realm of the mind.

\n

Why is this an excellent definition of the supernatural?  I refer you to Richard Carrier for the full argument.  But consider:  Suppose that you discover what seems to be a spirit, inhabiting a tree: a dryad who can materialize outside or inside the tree, who speaks in English about the need to protect her tree, et cetera.  And then suppose that we turn a microscope on this tree spirit, and she turns out to be made of parts—not inherently spiritual and ineffable parts, like fabric of desireness and cloth of belief; but rather the same sort of parts as quarks and electrons, parts whose behavior is defined in motions rather than minds.  Wouldn't the dryad immediately be demoted to the dull catalogue of common things?

\n

But if we accept Richard Carrier's definition of the supernatural, then a dilemma arises: we want to give religious claims a fair shake, but it seems that we have very good grounds for excluding supernatural explanations a priori.

\n

I mean, what would the universe look like if reductionism were false?

\n

I previously defined the reductionist thesis as follows: human minds create multi-level models of reality in which high-level patterns and low-level patterns are separately and explicitly represented.  A physicist knows Newton's equation for gravity, Einstein's equation for gravity, and the derivation of the former as a low-speed approximation of the latter.  But these three separate mental representations, are only a convenience of human cognition.  It is not that reality itself has an Einstein equation that governs at high speeds, a Newton equation that governs at low speeds, and a \"bridging law\" that smooths the interface.  Reality itself has only a single level, Einsteinian gravity.  It is only the Mind Projection Fallacy that makes some people talk as if the higher levels could have a separate existence—different levels of organization can have separate representations in human maps, but the territory itself is a single unified low-level mathematical object.

\n

Suppose this were wrong.

\n

Suppose that the Mind Projection Fallacy was not a fallacy, but simply true.

\n

Suppose that a 747 had a fundamental physical existence apart from the quarks making up the 747.

\n

What experimental observations would you expect to make, if you found yourself in such a universe?

\n

If you can't come up with a good answer to that, it's not observation that's ruling out \"non-reductionist\" beliefs, but a priori logical incoherence.  If you can't say what predictions the \"non-reductionist\" model makes, how can you say that experimental evidence rules it out?

\n

My thesis is that non-reductionism is a confusion; and once you realize that an idea is a confusion, it becomes a tad difficult to envision what the universe would look like if the confusion were true.  Maybe I've got some multi-level model of the world, and the multi-level model has a one-to-one direct correspondence with the causal elements of the physics?  But once all the rules are specified, why wouldn't the model just flatten out into yet another list of fundamental things and their interactions?  Does everything I can see in the model, like a 747 or a human mind, have to become a separate real thing?  But what if I see a pattern in that new supersystem?

\n

Supernaturalism is a special case of non-reductionism, where it is not 747s that are irreducible, but just (some) mental things.  Religion is a special case of supernaturalism, where the irreducible mental things are God(s) and souls; and perhaps also sins, angels, karma, etc.

\n

If I propose the existence of a powerful entity with the ability to survey and alter each element of our observed universe, but with the entity reducible to nonmental parts that interact with the elements of our universe in a lawful way; if I propose that this entity wants certain particular things, but \"wants\" using a brain composed of particles and fields; then this is not yet a religion, just a naturalistic hypothesis about a naturalistic Matrix.  If tomorrow the clouds parted and a vast glowing amorphous figure thundered forth the above description of reality, then this would not imply that the figure was necessarily honest; but I would show the movies in a science class, and I would try to derive testable predictions from the theory.

\n

Conversely, religions have ignored the discovery of that ancient bodiless thing: omnipresent in the working of Nature and immanent in every falling leaf: vast as a planet's surface and billions of years old: itself unmade and arising from the structure of physics: designing without brain to shape all life on Earth and the minds of humanity.  Natural selection, when Darwin proposed it, was not hailed as the long-awaited Creator:  It wasn't fundamentally mental.

\n

But now we get to the dilemma: if the staid conventional normal boring understanding of physics and the brain is correct, there's no way in principle that a human being can concretely envision, and derive testable experimental predictions about, an alternate universe in which things are irreducibly mental.  Because, if the boring old normal model is correct, your brain is made of quarks, and so your brain will only be able to envision and concretely predict things that can be predicted by quarks.  You will only ever be able to construct models made of interacting simple things.

\n

People who live in reductionist universes cannot concretely envision non-reductionist universes.  They can pronounce the syllables \"non-reductionist\" but they can't imagine it.

\n

The basic error of anthropomorphism, and the reason why supernatural explanations sound much simpler than they really are, is your brain using itself as an opaque black box to predict other things labeled \"mindful\".  Because you already have big, complicated webs of neural circuitry that implement your \"wanting\" things, it seems like you can easily describe water that \"wants\" to flow downhill—the one word \"want\" acts as a lever to set your own complicated wanting-machinery in motion.

\n

Or you imagine that God likes beautiful things, and therefore made the flowers.  Your own \"beauty\" circuitry determines what is \"beautiful\" and \"not beautiful\".  But you don't know the diagram of your own synapses.  You can't describe a nonmental system that computes the same label for what is \"beautiful\" or \"not beautiful\"—can't write a computer program that predicts your own labelings.  But this is just a defect of knowledge on your part; it doesn't mean that the brain has no explanation.

\n

If the \"boring view\" of reality is correct, then you can never predict anything irreducible because you are reducible.  You can never get Bayesian confirmation for a hypothesis of irreducibility, because any prediction you can make is, therefore, something that could also be predicted by a reducible thing, namely your brain.

\n

Some boxes you really can't think outside.  If our universe really is Turing computable, we will never be able to concretely envision anything that isn't Turing-computable—no matter how many levels of halting oracle hierarchy our mathematicians can talk about, we won't be able to predict what a halting oracle would actually say, in such fashion as to experimentally discriminate it from merely computable reasoning.

\n

Of course, that's all assuming the \"boring view\" is correct.  To the extent that you believe evolution is true, you should not expect to encounter strong evidence against evolution.  To the extent you believe reductionism is true, you should expect non-reductionist hypotheses to be incoherent as well as wrong.  To the extent you believe supernaturalism is false, you should expect it to be inconceivable as well.

\n

If, on the other hand, a supernatural hypothesis turns out to be true, then presumably you will also discover that it is not inconceivable.

\n

So let us bring this back full circle to the matter of Intelligent Design:

\n

Should ID be excluded a priori from experimental falsification and science classrooms, because, by invoking the supernatural, it has placed itself outside of natural philosophy?

\n

I answer:  \"Of course not.\"  The irreducibility of the intelligent designer is not an indispensable part of the ID hypothesis.  For every irreducible God that can be proposed by the IDers, there exists a corresponding reducible alien that behaves in accordance with the same predictions—since the IDers themselves are reducible; to the extent I believe reductionism is in fact correct, which is a rather strong extent, I must expect to discover reducible formulations of all supposedly supernatural predictive models.

\n

If we're going over the archeological records to test the assertion that Jehovah parted the Red Sea out of an explicit desire to display its superhuman power, then it makes little difference whether Jehovah is ontologically basic, or an alien with nanotech, or a Dark Lord of the Matrix.  You do some archeology, find no skeletal remnants or armor at the Red Sea site, and indeed find records that Egypt ruled much of Canaan at the time.  So you stamp the historical record in the Bible \"disproven\" and carry on.  The hypothesis is coherent, falsifiable and wrong.

\n

Likewise with the evidence from biology that foxes are designed to chase rabbits, rabbits are designed to evade foxes, and neither is designed \"to carry on their species\" or \"protect the harmony of Nature\"; likewise with the retina being designed backwards with the light-sensitive parts at the bottom; and so on through a thousand other items of evidence for splintered, immoral, incompetent design.  The Jehovah model of our alien god is coherent, falsifiable, and wrong—coherent, that is, so long as you don't care whether Jehovah is ontologically basic or just an alien.

\n

Just convert the supernatural hypothesis into the corresponding natural hypothesis.  Just make the same predictions the same way, without asserting any mental things to be ontologically basic.  Consult your brain's black box if necessary to make predictions—say, if you want to talk about an \"angry god\" without building a full-fledged angry AI to label behaviors as angry or not angry.  So you derive the predictions, or look up the predictions made by ancient theologians without advance knowledge of our experimental results.  If experiment conflicts with those predictions, then it is fair to speak of the religious claim having been scientifically refuted.  It was given its just chance at confirmation; it is being excluded a posteriori, not a priori.

\n

Ultimately, reductionism is just disbelief in fundamentally complicated things.  If \"fundamentally complicated\" sounds like an oxymoron... well, that's why I think that the doctrine of non-reductionism is a confusion, rather than a way that things could be, but aren't.  You would be wise to be wary, if you find yourself supposing such things.

\n

But the ultimate rule of science is to look and see.  If ever a God appeared to thunder upon the mountains, it would be something that people looked at and saw.

\n

Corollary:  Any supposed designer of Artificial General Intelligence who talks about religious beliefs in respectful tones, is clearly not an expert on reducing mental things to nonmental things; and indeed knows so very little of the uttermost basics, as for it to be scarcely plausible that they could be expert at the art; unless their idiot savancy is complete.  Or, of course, if they're outright lying.  We're not talking about a subtle mistake.

" } }, { "_id": "wcEKJ4BtYJ8gaWafn", "title": "Rationality Quotes 17", "pageUrl": "https://www.lesswrong.com/posts/wcEKJ4BtYJ8gaWafn/rationality-quotes-17", "postedAt": "2008-09-11T03:29:56.000Z", "baseScore": 4, "voteCount": 3, "commentCount": 9, "url": null, "contents": { "documentId": "wcEKJ4BtYJ8gaWafn", "html": "

\"We take almost all of the decisive steps in our lives as a result of slight inner adjustments of which we are barely conscious.\"
        -- Austerlitz

\"In both poker and life, you can't read people any better than they can read themselves. You can, if you’re good, very accurately determine if they think their hand is good, or if they think they know the answer to your legal question. But you can't be sure if reality differs from their perception.\"
        -- Matt Maroon

\"We should not complain about impermanence, because without impermanence, nothing is possible.\"
        -- Thich Nhat Hanh

\"I've never been happy. I have a few memories, early in life, and it sounds dramatic to say, but when I reflect on my life, the best I've ever had were brief periods when things were simply less painful.\"
        -- [Anonymous]

Q: What are the \"intuitive and metaphysical arts\"?
A: The gods alone know. Probably the old tired con-acts of fortune-telling and putting the hex on your neighbor's goat, glossed up with gibberish borrowed from pop science tracts in the last two centuries.
        -- The Aleph Anti-FAQ

\"If you build a snazzy alife sim ... you'd be a kind of bridging `first cause', and might even have the power to intervene in their lives - even obliterate their entire experienced cosmos - but that wouldn't make you a god in any interesting sense.  Gods are ontologically distinct from creatures, or they're not worth the paper they're written on.\"
        -- Damien Broderick

\"NORMAL is a setting on a washing-machine.\"
        -- Nikolai Kingsley

" } }, { "_id": "zrGzan92SxP27LWP9", "title": "Points of Departure", "pageUrl": "https://www.lesswrong.com/posts/zrGzan92SxP27LWP9/points-of-departure", "postedAt": "2008-09-09T21:18:08.000Z", "baseScore": 29, "voteCount": 23, "commentCount": 38, "url": null, "contents": { "documentId": "zrGzan92SxP27LWP9", "html": "

Followup to: Anthropomorphic Optimism

\n\n

If you've watched Hollywood sci-fi involving supposed robots, androids, or AIs, then you've seen AIs that are depicted as \"emotionless\".  In the olden days this was done by having the AI speak in a monotone pitch - while perfectly stressing the syllables, of course.  (I could similarly go on about how AIs that disastrously misinterpret their mission instructions, never seem to need help parsing spoken English.)  You can also show that an AI is \"emotionless\" by having it notice an emotion with a blatant somatic effect, like tears or laughter, and ask what it means (though of course the AI never asks about sweat or coughing).

\n\n

If you watch enough Hollywood sci-fi, you'll run into all of the following situations occurring with supposedly \"emotionless\" AIs:

\n\n
  1. An AI that malfunctions or otherwise turns evil, instantly acquires all of the negative human emotions - it hates, it wants revenge, and feels the need to make self-justifying speeches.
  2. Conversely, an AI that turns to the Light Side, gradually acquires a full complement of human emotions.
  3. An \"emotionless\" AI suddenly exhibits human emotion when under exceptional stress; e.g. an AI that displays no reaction to thousands of deaths, suddenly showing remorse upon killing its creator.
  4. An AI begins to exhibit signs of human emotion, and refuses to admit it.
\n\n

Now, why might a Hollywood scriptwriter make those particular mistakes?

These mistakes seem to me to bear the signature of modeling an Artificial Intelligence as an emotionally repressed human.

\n\n

At least, I can't seem to think of any other simple hypothesis that explains the behaviors 1-4 above.  The AI that turns evil has lost its negative-emotion-suppressor, so the negative emotions suddenly switch on.  The AI that turns from mechanical agent to good agent, gradually loses the emotion-suppressor keeping it mechanical, so the good emotions rise to the surface.  Under exceptional stress, of course the emotional repression that keeps the AI \"mechanical\" will immediately break down and let the emotions out.  But if the stress isn't so exceptional, the firmly repressed AI will deny any hint of the emotions leaking out - that conflicts with the AI's self-image of itself as being emotionless.

\n\n

It's not that the Hollywood scriptwriters are explicitly reasoning \"An AI will be like an emotionally repressed human\", of course; but rather that when they imagine an \"emotionless AI\", this is the intuitive model that forms in the background - a Standard mind (which is to say a human mind) plus an extra Emotion Suppressor.

\n\n

Which all goes to illustrate yet another fallacy of anthropomorphism - treating humans as your point of departure, modeling a mind as a human plus a set of differences.

\n\n

This is a logical fallacy because it warps Occam's Razor.  A mind that entirely lacks chunks of brainware to implement \"hate\" or \"kindness\", is simpler - in a computational complexity sense - than a mind that has \"hate\" plus a \"hate-suppressor\", or \"kindness\" plus a \"kindness-repressor\".  But if you start out with a human mind, then adding an activity-suppressor is a smaller alteration than deleting the whole chunk of brain.
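
As a toy way to see the two simplicity measures come apart, here is a minimal Python sketch; the set-based model of a mind and all the module names are invented purely for illustration:

    # Minds modeled as sets of brainware modules -- all names invented.
    HUMAN_BASELINE = {'hate', 'kindness', 'fear', 'joy', 'curiosity'}

    emotionless_by_deletion   = HUMAN_BASELINE - {'hate', 'kindness', 'fear', 'joy'}
    emotionless_by_suppressor = HUMAN_BASELINE | {'emotion-suppressor'}

    def absolute_complexity(mind):
        # Rough stand-in for computational complexity: total machinery present.
        return len(mind)

    def diff_from_normality(mind):
        # The scriptwriter's measure: shortest diff from a human mind.
        return len(mind ^ HUMAN_BASELINE)    # symmetric difference = edit count

    print(absolute_complexity(emotionless_by_deletion),     # 1: genuinely simpler
          absolute_complexity(emotionless_by_suppressor))   # 6: a human plus an extra part
    print(diff_from_normality(emotionless_by_deletion),     # 4: looks complicated
          diff_from_normality(emotionless_by_suppressor))   # 1: looks simple

The two measures order the same pair of minds in opposite ways, which is exactly the distortion at issue.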

\n\n

It's also easier for human scriptwriters to imagine themselves repressing an emotion, pushing it back, crushing it down, than it is for them to imagine deleting an emotion and it never coming back.  The former is a mode that human minds can operate in; the latter would take neurosurgery.

\n\n

But that's just a kind of anthropomorphism previously covered - the plain old ordinary fallacy of using your brain as a black box to predict something that doesn't work like it does.  Here, I want to talk about the formally different fallacy of measuring simplicity in terms of the shortest diff from \"normality\", i.e., what your brain says a \"mind\" does in the absence of specific instruction otherwise, i.e., humanness.  Even if you can grasp that something doesn't have to work just like a human, thinking of it as a human+diff will distort your intuitions of simplicity - your Occam-sense.

" } }, { "_id": "7snc2aJhiDoppX7dW", "title": "Singularity Summit 2008", "pageUrl": "https://www.lesswrong.com/posts/7snc2aJhiDoppX7dW/singularity-summit-2008", "postedAt": "2008-09-08T23:26:47.000Z", "baseScore": 4, "voteCount": 3, "commentCount": 4, "url": null, "contents": { "documentId": "7snc2aJhiDoppX7dW", "html": "

FYI all:  The Singularity Summit 2008 is coming up, 9am-5pm October 25th, 2008 in San Jose, CA.  This is run by my host organization, the Singularity Institute.  Speakers this year include Vernor Vinge, Marvin Minsky, the CTO of Intel, and the chair of the X Prize Foundation.

\n\n

Before anyone posts any angry comments: yes, the registration costs actual money this year.  The Singularity Institute has run free events before, and will run free events in the future.  But while past Singularity Summits have been media successes, they haven't been fundraising successes up to this point.  So Tyler Emerson et al. are trying it a little differently.  TANSTAAFL.

\n\n

Lots of speakers talking for short periods this year.  I'm intrigued by that format.  We'll see how it goes.

Press release:

SAN JOSE, CA – Singularity Summit 2008: Opportunity, Risk, Leadership takes place October 25 at the intimate Montgomery Theater in San Jose, CA, the Singularity Institute for Artificial Intelligence announced. Now in its third year, the Singularity Summit gathers the smartest people around to explore the biggest idea of our time: the Singularity.

\n\n

Keynotes will include Ray Kurzweil, updating his predictions in The Singularity is Near, and Intel CTO Justin Rattner, who will examine the Singularity’s plausibility. At the Intel Developer Forum on August 21, 2008, he explained why he thinks the gap between humans and machines will close by 2050. “Rather than look back, we’re going to look forward 40 years,” said Rattner. “It’s in that future where many people think that machine intelligence will surpass human intelligence.”

\n

“The acceleration of technological progress has been the central feature of this century,” said computer scientist Dr. Vernor Vinge in a seminal paper in 1993. “We are on the edge of change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence.”

\n

Singularity Summit 2008 will feature an impressive lineup:

\n\n

Registration details are available at http://www.singularitysummit.com/registration/.

\n

About the Singularity Summit

\n

Each year, the Singularity Summit attracts a unique audience to the Bay Area, with visionaries from business, science, technology, philanthropy, the arts, and more. Participants learn where humanity is headed, meet the people leading the way, and leave inspired to create a better world. “The Singularity Summit is the premier conference on the Singularity,” Kurzweil said. “As we get closer to the Singularity, each year’s conference is better than the last.”

\n

The Summit was founded in 2006 by long-term philanthropy executive Tyler Emerson, inventor Ray Kurzweil, and investor Peter Thiel. Its purpose is to bring together and build a visionary community to further dialogue and action on complex, long-term issues that may transform the world. Its host organization is the Singularity Institute for Artificial Intelligence, a 501(c)(3) nonprofit organization studying the benefits and risks of advanced artificial intelligence systems.

\n

Singularity Summit 2008 partners include Clarium Capital, Cartmell Holdings, Twine, Powerset, United Therapeutics, KurzweilAI.net, IEEE Spectrum, DFJ, X PRIZE Foundation, Long Now Foundation, Foresight Nanotech Institute, Novamente, SciVestor, Robotics Trends, and MINE.

" } }, { "_id": "jQrJXS3kYjkPPywTc", "title": "Rationality Quotes 16", "pageUrl": "https://www.lesswrong.com/posts/jQrJXS3kYjkPPywTc/rationality-quotes-16", "postedAt": "2008-09-08T03:36:50.000Z", "baseScore": 6, "voteCount": 5, "commentCount": 10, "url": null, "contents": { "documentId": "jQrJXS3kYjkPPywTc", "html": "

\"I read a lot of fantasy and have wondered sometimes, not so much what I would do in a fantasy setting, but what the book characters would do in the real world.\"
        -- Romana

\n

\"That's the thing that's always fascinated me about Go. It is essentially an extremely simple game gone terribly, terribly wrong.\"
        -- Amorymeltzer

\n

\"Dealing with the sheer of volume of \"stuff\" available on the internet is like being a crackhead with OCD. In the course of one hour I've tweaked my fantasy baseball lineup, posted on this message board, read Yahoo news, answered my latest e-mail, downloaded guidance criteria for PAHs in soils in NY State, checked the discography of a couple of bands, sent a deliverable to a client, and checked the weather. If that isn't superstimulus I don't know what is.  It's amazing how much I can do, yet accomplish so little.\"
        -- Misanthropic

\n

\"We don't have thoughts, we are thoughts.  Thoughts are not responsible for the machinery that happens to think them.\"
        -- John K Clark

\n

\"I have known more people whose lives have been ruined by getting a Ph.D. in physics than by drugs.\"
        -- Jonathan I. Katz

\n

\"There's no difference between a pessimist who says, \"Oh, it's hopeless, so don't bother doing anything,\" and an optimist who says, \"Don't bother doing anything, it's going to turn out fine anyway.\"  Either way, nothing happens.\"
        -- Yvon Chouinard

\n

\"Life moved ever outward into infinite possibilities and yet all things were perfect and finished in every single moment, their end attained.\"
        -- David Zindell, Neverness

" } }, { "_id": "yqYvwXbnEEy29BWXB", "title": "Let’s discuss the weather", "pageUrl": "https://www.lesswrong.com/posts/yqYvwXbnEEy29BWXB/let-s-discuss-the-weather", "postedAt": "2008-09-07T12:54:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "yqYvwXbnEEy29BWXB", "html": "
Wondering about your opinion, not particularly trying to change it:
\n
\n
1. How likely is our avoiding dangerous climate change by getting enough international cooperation to cut global carbon emissions enough in time?
\n
\n

\"\"

\n
2. How much of a difference can I make to this?
\n

\n
3. If the chances in 1. are small, why don’t we try something like geoengineering?
\n
\n
4. If 1 and 2 are small and 3 feasible  (so the world isn’t about to end), why is cutting emissions an important issue to work on? (given the opportunity costs: there are lots of other critical issues)

\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "JEZev7rptACqQzPac", "title": "Rationality Quotes 15", "pageUrl": "https://www.lesswrong.com/posts/JEZev7rptACqQzPac/rationality-quotes-15", "postedAt": "2008-09-06T18:53:12.000Z", "baseScore": 8, "voteCount": 6, "commentCount": 26, "url": null, "contents": { "documentId": "JEZev7rptACqQzPac", "html": "

\"Who thinks they're not open-minded? Our hypothetical prim miss from the suburbs thinks she's open-minded. Hasn't she been taught to be? Ask anyone, and they'll say the same thing: they're pretty open-minded, though they draw the line at things that are really wrong.\"
        -- Paul Graham

\n

\"In the same way that we need statesmen to spare us the abjection of exercising power, we need scholars to spare us the abjection of learning.\"
        -- Jean Baudrillard

\n

\"Because giftedness is not to be talked about, no one tells high-IQ children explicitly, forcefully and repeatedly that their intellectual talent is a gift. That they are not superior human beings, but lucky ones. That the gift brings with it obligations to be worthy of it.\"
        -- Charles Murray

\n

\"The popular media can only handle ideas expressible in proto-language, not ideas requiring nested phrase-structure syntax for their exposition.\"
        -- Ben Goertzel

\n

\"The best part about math is that, if you have the right answer and someone disagrees with you, it really is because they're stupid.\"
        -- Quotes from Honors Linear Algebra

\n

\"Long-Term Capital Management had faith in diversification.  Its history serves as ample notification that eggs in different baskets can and do all break at the same time.\"
        -- Craig L. Howe

\n

\"Accountability is about one person taking responsibility. If two people are accountable for the same decision, no one is really accountable.\"
        -- Glyn Holton

" } }, { "_id": "3uTHXPnSwPw8h3suz", "title": "Rationality Quotes 14", "pageUrl": "https://www.lesswrong.com/posts/3uTHXPnSwPw8h3suz/rationality-quotes-14", "postedAt": "2008-09-05T20:16:33.000Z", "baseScore": 9, "voteCount": 7, "commentCount": 14, "url": null, "contents": { "documentId": "3uTHXPnSwPw8h3suz", "html": "

\"As for the little green men... they don't want us to know about them, so they refrain from making contact... then they do silly aerobatics displays within radar range of military bases... with their exterior lights on... if that's extraterrestrial intelligence, I'm not sure I want to know what extraterrestrial stupidity looks like.\"
        -- Russell Wallace

\n

\"Characterizing male status-seeking as egotistical is like characterizing bonobo promiscuity as unchaste.\"
        -- Liza May

\n

\"Introducing a technology is not a neutral act--it is profoundly revolutionary. If you present a new technology to the world you are effectively legislating a change in the way we all live. You are changing society, not some vague democratic process. The individuals who are driven to use that technology by the disparities of wealth and power it creates do not have a real choice in the matter. So the idea that we are giving people more freedom by developing technologies and then simply making them available is a dangerous illusion.\"
        -- Karl Schroeder

\n

\"Hans Riesel held a Mersenne record for 14 days in the 50's, calculated using the first Swedish computer. My old highschool computing teacher had worked as a student on the system and had managed to crush his foot when a byte fell out of its rack and onto him.\"
        -- Anders Sandberg

\n

\"Gentlemen, I do not mind being contradicted, and I am unperturbed when I am attacked, but I confess I have slight misgivings when I hear myself being explained.\"
        -- Lord Balfour, to the English Parliament

" } }, { "_id": "jbgjvhszkr3KoehDh", "title": "The Truly Iterated Prisoner's Dilemma", "pageUrl": "https://www.lesswrong.com/posts/jbgjvhszkr3KoehDh/the-truly-iterated-prisoner-s-dilemma", "postedAt": "2008-09-04T18:00:00.000Z", "baseScore": 31, "voteCount": 26, "commentCount": 86, "url": null, "contents": { "documentId": "jbgjvhszkr3KoehDh", "html": "

Followup to: The True Prisoner's Dilemma

\n\n

For everyone who thought that the rational choice in yesterday's True Prisoner's Dilemma was to defect, a follow-up dilemma:

\n\n

Suppose that the dilemma was not one-shot, but was rather to be repeated exactly 100 times, where for each round, the payoff matrix looks like this:

                       Humans: C                                              Humans: D
    Paperclipper: C    (2 million human lives saved, 2 paperclips gained)     (+3 million lives, +0 paperclips)
    Paperclipper: D    (+0 lives, +3 paperclips)                              (+1 million lives, +1 paperclip)
\n\n

As most of you probably know, the king of the classical iterated Prisoner's Dilemma is Tit for Tat, which cooperates on the first round, and on succeeding rounds does whatever its opponent did last time.  But what most of you may not realize, is that, if you know when the iteration will stop, Tit for Tat is - according to classical game theory - irrational.
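
In code, the strategy is only a few lines; this is a minimal sketch of the textbook description, with the function name mine:

    def tit_for_tat(opponent_history):
        # opponent_history: the other player's past moves, earliest first.
        if not opponent_history:
            return 'C'                   # cooperate on the first round
        return opponent_history[-1]      # then copy the opponent's last move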

\n\n

Why?  Consider the 100th round.  On the 100th round, there will be no future iterations, no chance to retaliate against the other player for defection.  Both of you know this, so the game reduces to the one-shot Prisoner's Dilemma.  Since you are both classical game theorists, you both defect.

\n\n

Now consider the 99th round.  Both of you know that you will both defect in the 100th round, regardless of what either of you do in the 99th round.  So you both know that your future payoff doesn't depend on your current action, only your current payoff.  You are both classical game theorists.  So you both defect.

\n\n

Now consider the 98th round...
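
For concreteness, here is a minimal Python sketch of that unraveling, using the payoff table above; it illustrates the classical argument only, and the function names are mine:

    # Stage payoffs from the table above: (million lives saved, paperclips gained).
    PAYOFF = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
              ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}

    def dominant_move(player):
        # player 0 = humans (rows), player 1 = paperclipper (columns).
        def pay(mine, theirs):
            key = (mine, theirs) if player == 0 else (theirs, mine)
            return PAYOFF[key][player]
        for mine, alt in (('C', 'D'), ('D', 'C')):
            if all(pay(mine, opp) > pay(alt, opp) for opp in ('C', 'D')):
                return mine    # strictly dominant: better against either reply
        return None

    def classical_play(rounds=100):
        # Solve from the final round backward.  The payoffs of the rounds
        # already solved never depend on the current move, so every round
        # reduces to the one-shot game, where D strictly dominates for both.
        return [(dominant_move(0), dominant_move(1)) for _ in range(rounds)]

    print(classical_play()[:3])    # [('D', 'D'), ('D', 'D'), ('D', 'D')] and so on, all 100 rounds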

\n\n

With humanity and the Paperclipper facing 100 rounds of the iterated Prisoner's Dilemma, do you really truly think that the rational thing for both parties to do, is steadily defect against each other for the next 100 rounds?

" } }, { "_id": "HFyWNBnDNEDsDNLrZ", "title": "The True Prisoner's Dilemma", "pageUrl": "https://www.lesswrong.com/posts/HFyWNBnDNEDsDNLrZ/the-true-prisoner-s-dilemma", "postedAt": "2008-09-03T21:34:28.000Z", "baseScore": 259, "voteCount": 186, "commentCount": 117, "url": null, "contents": { "documentId": "HFyWNBnDNEDsDNLrZ", "html": "

It occurred to me one day that the standard visualization of the Prisoner's Dilemma is fake.

\n\n

The core of the Prisoner's Dilemma is this symmetric payoff matrix:

            1: C      1: D
    2: C    (3, 3)    (5, 0)
    2: D    (0, 5)    (2, 2)
\n\n

Player 1, and Player 2, can each choose C or D.  1 and 2's utility for the final outcome is given by the first and second number in the pair.  For reasons that will become apparent, \"C\" stands for \"cooperate\" and \"D\" stands for \"defect\".

\n\n

Observe that a player in this game (regarding themselves as the first player) has this preference ordering over outcomes:  (D, C) > (C, C) > (D, D) > (C, D).

\n\n

D, it would seem, dominates C:  If the other player chooses C, you prefer (D, C) to (C, C); and if the other player chooses D, you prefer (D, D) to (C, D).  So you wisely choose D, and as the payoff table is symmetric, the other player likewise chooses D.
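
The dominance argument is mechanical enough to check in a few lines; here is a quick Python sketch against the matrix above, as a verification of mine rather than anything from the original post:

    PAYOFFS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
               ('D', 'C'): (5, 0), ('D', 'D'): (2, 2)}

    # Regarding yourself as player 1: D beats C against either reply...
    assert PAYOFFS[('D', 'C')][0] > PAYOFFS[('C', 'C')][0]    # 5 > 3
    assert PAYOFFS[('D', 'D')][0] > PAYOFFS[('C', 'D')][0]    # 2 > 0

    # ...and the matrix is symmetric, so the same holds for player 2.  Yet
    # mutual cooperation leaves both players strictly better off than
    # mutual defection:
    assert all(c > d for c, d in zip(PAYOFFS[('C', 'C')], PAYOFFS[('D', 'D')]))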

\n\n

If only you'd both been less wise!  You both prefer (C, C) to (D, D).  That is, you both prefer mutual cooperation to mutual defection.

\n\n

The Prisoner's Dilemma is one of the great foundational issues in decision theory, and enormous volumes of material have been written about it.  Which makes it an audacious assertion of mine, that the usual way of visualizing the Prisoner's Dilemma has a severe flaw, at least if you happen to be human.

The classic visualization of the Prisoner's Dilemma is as follows: you are a criminal, and you and your confederate in crime have both been captured by the authorities.

\n\n

Independently, without communicating, and without being able to change your mind afterward, you have to decide whether to give testimony against your confederate (D) or remain silent (C).

\n\n

Both of you, right now, are facing one-year prison sentences; testifying (D) takes one year off your prison sentence, and adds two years to your confederate's sentence.

\n\n

Or maybe you and some stranger are, only once, and without knowing the other player's history, or finding out who the player was afterward, deciding whether to play C or D, for a payoff in dollars matching the standard chart.

\n\n

And, oh yes - in the classic visualization you're supposed to pretend that you're entirely selfish, that you don't care about your confederate criminal, or the player in the other room.

\n\n

It's this last specification that makes the classic visualization, in my view, fake.

\n\n

You can't avoid hindsight bias by instructing a jury to pretend not to know the real outcome of a set of events.  And without a complicated effort backed up by considerable knowledge, a neurologically intact human being cannot pretend to be genuinely, truly selfish.

\n\n

We're born with a sense of fairness, honor, empathy, sympathy, and even altruism - the result of our ancestors adapting to play the iterated Prisoner's Dilemma.  We don't really, truly, absolutely and entirely prefer (D, C) to (C, C), though we may entirely prefer (C, C) to (D, D) and (D, D) to (C, D).  The thought of our confederate spending three years in prison, does not entirely fail to move us.

\n\n

In that locked cell where we play a simple game under the supervision of economic psychologists, we are not entirely and absolutely unsympathetic for the stranger who might cooperate.  We aren't entirely happy to think that we might defect and the stranger cooperate, getting five dollars while the stranger gets nothing.

\n\n

We fixate instinctively on the (C, C) outcome and search for ways to argue that it should be the mutual decision:  "How can we ensure mutual cooperation?" is the instinctive thought.  Not "How can I trick the other player into playing C while I play D for the maximum payoff?"

\n\n

For someone with an impulse toward altruism, or honor, or fairness, the Prisoner's Dilemma doesn't really have the critical payoff matrix - whatever the financial payoff to individuals.  (C, C) > (D, C), and the key question is whether the other player sees it the same way.

\n\n

And no, you can't instruct people being initially introduced to game theory to pretend they're completely selfish - any more than you can instruct human beings being introduced to anthropomorphism to pretend they're expected paperclip maximizers.

\n\n

To construct the True Prisoner's Dilemma, the situation has to be something like this:

\n\n

Player 1:  Human beings, Friendly AI, or other humane intelligence.

\n\n

Player 2:  UnFriendly AI, or an alien that only cares about sorting pebbles.

\n\n

Let's suppose that four billion human beings - not the whole human species, but a significant part of it - are currently progressing through a fatal disease that can only be cured by substance S.

\n\n

However, substance S can only be produced by working with a paperclip maximizer from another dimension - substance S can also be used to produce paperclips.  The paperclip maximizer only cares about the number of paperclips in its own universe, not in ours, so we can't offer to produce or threaten to destroy paperclips here.  We have never interacted with the paperclip maximizer before, and will never interact with it again.

\n\n

Both humanity and the paperclip maximizer will get a single chance to seize some additional part of substance S for themselves, just before the dimensional nexus collapses; but the seizure process destroys some of substance S.

\n\n

The payoff matrix is as follows:

            1: C                                                  1: D
    2: C    (2 billion human lives saved, 2 paperclips gained)    (+3 billion lives, +0 paperclips)
    2: D    (+0 lives, +3 paperclips)                             (+1 billion lives, +1 paperclip)
\n\n

I've chosen this payoff matrix to produce a sense of indignation at the thought that the paperclip maximizer wants to trade off billions of human lives against a couple of paperclips.  Clearly the paperclip maximizer should just let us have all of substance S; but a paperclip maximizer doesn't do what it should, it just maximizes paperclips.

\n\n

In this case, we really do prefer the outcome (D, C) to the outcome (C, C), leaving aside the actions that produced it.  We would vastly rather live in a universe where 3 billion humans were cured of their disease and no paperclips were produced, rather than sacrifice a billion human lives to produce 2 paperclips.  It doesn't seem right to cooperate, in a case like this.  It doesn't even seem fair - so great a sacrifice by us, for so little gain by the paperclip maximizer?  And let us specify that the paperclip-agent experiences no pain or pleasure - it just outputs actions that steer its universe to contain more paperclips.  The paperclip-agent will experience no pleasure at gaining paperclips, no hurt from losing paperclips, and no painful sense of betrayal if we betray it.

\n\n

What do you do then?  Do you cooperate when you really, definitely, truly and absolutely do want the highest reward you can get, and you don't care a tiny bit by comparison about what happens to the other player?  When it seems right to defect even if the other player cooperates?

\n\n

That's what the payoff matrix for the true Prisoner's Dilemma looks like - a situation where (D, C) seems righter than (C, C).

\n\n

But all the rest of the logic - everything about what happens if both agents think that way, and both agents defect - is the same.  For the paperclip maximizer cares as little about human deaths, or human pain, or a human sense of betrayal, as we care about paperclips.  Yet we both prefer (C, C) to (D, D).

\n\n

So if you've ever prided yourself on cooperating in the Prisoner's Dilemma... or questioned the verdict of classical game theory that the "rational" choice is to defect... then what do you say to the True Prisoner's Dilemma above?

" } }, { "_id": "gm9bg33io9ikaHudN", "title": "Rationality Quotes 13", "pageUrl": "https://www.lesswrong.com/posts/gm9bg33io9ikaHudN/rationality-quotes-13", "postedAt": "2008-09-02T16:00:00.000Z", "baseScore": 9, "voteCount": 10, "commentCount": 24, "url": null, "contents": { "documentId": "gm9bg33io9ikaHudN", "html": "

\"You can only compromise your principles once.  After then you don't have any.\"
        -- Smug Lisp Weeny

\n

\"If you want to do good, work on the technology, not on getting power.\"
        -- John McCarthy

\n

\"If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments.  But if you’re interested in producing truth, you will fix your opponents’ arguments for them.  To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse.\"
        -- Black Belt Bayesian

\n

\"I normally thought of \"God!\" as a disclaimer, or like the MPAA rating you see just before a movie starts: it told me before I continued into conversation with that person, that that person had limitations to their intellectual capacity or intellectual honesty.\"
        -- Mike Barskey

\n

\"It is the soldier, not the reporter, who has given us freedom of the press. It is the soldier, not the poet, who has given us freedom of speech. It is the soldier, not the campus organizer, who has given us the freedom to demonstrate. It is the soldier, not the lawyer, who has given us the right to a fair trial. It is the soldier, who salutes the flag, who serves under the flag, and whose coffin is draped by the flag, who allows the protester to burn the flag.\"
        -- Father Dennis Edward O'Brien, USMC

" } }, { "_id": "XoSF9imrCbkERa9PE", "title": "Rationality Quotes 12", "pageUrl": "https://www.lesswrong.com/posts/XoSF9imrCbkERa9PE/rationality-quotes-12", "postedAt": "2008-09-01T20:00:00.000Z", "baseScore": 6, "voteCount": 7, "commentCount": 13, "url": null, "contents": { "documentId": "XoSF9imrCbkERa9PE", "html": "

\"Even if I had an objective proof that you don't find it unpleasant when you stick your hand in a fire, I still think you’d pull your hand out at the first opportunity.\"
        -- John K Clark

\n

\"So often when one level of delusion goes away, another one more subtle comes in its place.\"
        -- Rational Buddhist

\n

\"Your denial of the importance of objectivity amounts to announcing your intention to lie to us. No-one should believe anything you say.\"
        -- John McCarthy

\n

\"How exactly does one 'alter reality'?  If I eat an apple have I altered reality?  Or maybe you mean to just give the appearance of altering reality.\"
        -- JoeDad

\n

\"Promoting less than maximally accurate beliefs is an act of sabotage.   Don't do it to anyone unless you'd also slash their tires.\"
        -- Black Belt Bayesian

" } }, { "_id": "f9B7BXk2GtFHzHiP5", "title": "Brief Break", "pageUrl": "https://www.lesswrong.com/posts/f9B7BXk2GtFHzHiP5/brief-break", "postedAt": "2008-08-31T16:00:00.000Z", "baseScore": 6, "voteCount": 4, "commentCount": 34, "url": null, "contents": { "documentId": "f9B7BXk2GtFHzHiP5", "html": "

I've been feeling burned on Overcoming Bias lately, meaning that I take too long to write my posts, which decreases the amount of recovery time, making me feel more burned, etc.

\n\n

So I'm taking at most a one-week break.  I'll post small units of rationality quotes each day, so as to not quite abandon you.  I may even post some actual writing, if I feel spontaneous, but definitely not for the next two days; I have to enforce this break upon myself.

\n\n

When I get back, my schedule calls for me to finish up the Anthropomorphism sequence, and then talk about Marcus Hutter's AIXI, which I think is the last brain-malfunction-causing subject I need to discuss.  My posts should then hopefully go back to being shorter and easier.

\n\n

Hey, at least I got through over a solid year of posts without taking a vacation.

" } }, { "_id": "wKnwcjJGriTS9QxxL", "title": "Dreams of Friendliness", "pageUrl": "https://www.lesswrong.com/posts/wKnwcjJGriTS9QxxL/dreams-of-friendliness", "postedAt": "2008-08-31T01:20:52.000Z", "baseScore": 29, "voteCount": 28, "commentCount": 81, "url": null, "contents": { "documentId": "wKnwcjJGriTS9QxxL", "html": "

Continuation of: Qualitative Strategies of Friendliness

\n\n

Yesterday I described three classes of deep problem with qualitative-physics-like strategies for building nice AIs - e.g., the AI is reinforced by smiles, and happy people smile, therefore the AI will tend to act to produce happiness.  In shallow form, three instances of the three problems would be:

\n\n
  1. Ripping people's faces off and wiring them into smiles;
  2. Building lots of tiny agents with happiness counters set to large numbers;
  3. Killing off the human species and replacing it with a form of sentient life that has no objections to being happy all day in a little jar.
\n\n

And the deep forms of the problem are, roughly:

\n\n
  1. A superintelligence will search out alternate causal pathways to its goals, other than the ones you had in mind;
  2. The boundaries of moral categories are not predictively natural entities;
  3. Strong optimization for only some humane values, does not imply a good total outcome.
\n\n

But there are other ways, and deeper ways, of viewing the failure of qualitative-physics-based Friendliness strategies.

\n\n

Every now and then, someone proposes the Oracle AI strategy:  \"Why not just have a superintelligence that answers human questions, instead of acting autonomously in the world?\"

\n\n

Sounds pretty safe, doesn't it?  What could possibly go wrong?

Well... if you've got any respect for Murphy's Law, the power of superintelligence, and human stupidity, then you can probably think of quite a few things that could go wrong with this scenario.  Both in terms of how a naive implementation could fail - e.g., universe tiled with tiny users asking tiny questions and receiving fast, non-resource-intensive answers - and in terms of what could go wrong even if the basic scenario worked.

\n\n

But let's just talk about the structure of the AI.

\n\n

When someone reinvents the Oracle AI, the most common opening remark runs like this:

\n\n

\"Why not just have the AI answer questions, instead of trying to do anything?  Then it wouldn't need to be Friendly.  It wouldn't need any goals at all.  It would just answer questions.\"

\n\n

To which the reply is that the AI needs goals in order to decide how to think: that is, the AI has to act as a powerful optimization process in order to plan its acquisition of knowledge, effectively distill sensory information, pluck \"answers\" to particular questions out of the space of all possible responses, and of course, to improve its own source code up to the level where the AI is a powerful intelligence.  All these events are \"improbable\" relative to random organizations of the AI's RAM, so the AI has to hit a narrow target in the space of possibilities to make superintelligent answers come out.

\n\n

Now, why might one think that an Oracle didn't need goals?  Because on a human level, the term \"goal\" seems to refer to those times when you said, \"I want to be promoted\", or \"I want a cookie\", and when someone asked you \"Hey, what time is it?\" and you said \"7:30\" that didn't seem to involve any goals.  Implicitly, you wanted to answer the question; and implicitly, you had a whole, complicated, functionally optimized brain that let you answer the question; and implicitly, you were able to do so because you looked down at your highly optimized watch, that you bought with money, using your skill of turning your head, that you acquired by virtue of curious crawling as an infant.  But that all takes place in the invisible background; it didn't feel like you wanted anything.

\n\n

Thanks to empathic inference, which uses your own brain as an unopened black box to predict other black boxes, it can feel like \"question-answering\" is a detachable thing that comes loose of all the optimization pressures behind it - even the existence of a pressure to answer questions!

\n\n

Problem 4:  Qualitative reasoning about AIs often revolves around some nodes described by empathic inferences.  This is a bad thing: for previously described reasons; and because it leads you to omit other nodes of the graph and their prerequisites and consequences; and because you may find yourself thinking things like, \"But the AI has to cooperate to get a cookie, so now it will be cooperative\" where \"cooperation\" is a boundary in concept-space drawn the way you would prefer to draw it... etc.

\n\n

Anyway: the AI needs a goal of answering questions, and that has to give rise to subgoals of choosing efficient problem-solving strategies, improving its code, and acquiring necessary information.  You can quibble about terminology, but the optimization pressure has to be there, and it has to be very powerful, measured in terms of how small a target it can hit within a large design space.

\n\n

Powerful optimization pressures are scary things to be around.  Look at what natural selection inadvertently did to itself - dooming the very molecules of DNA - in the course of optimizing a few Squishy Things to make hand tools and outwit each other politically.  Humans, though we were optimized only according to the criterion of replicating ourselves, now have their own psychological drives executing as adaptations.  The result of humans optimized for replication is not just herds of humans; we've altered much of Earth's land area with our technological creativity.  We've even created some knock-on effects that we wish we hadn't, because our minds aren't powerful enough to foresee all the effects of the most powerful technologies we're smart enough to create.

\n\n

My point, however, is that when people visualize qualitative FAI strategies, they generally assume that only one thing is going on, the normal / modal / desired thing.  (See also: planning fallacy.)  This doesn't always work even for picking up a rock and throwing it.  But it works rather a lot better for throwing rocks than unleashing powerful optimization processes.

\n\n

Problem 5:  When humans use qualitative reasoning, they tend to visualize a single line of operation as typical - everything operating the same way it usually does, no exceptional conditions, no interactions not specified in the graph, all events firmly inside their boundaries.  This works a lot better for dealing with boiling kettles, than for dealing with minds faster and smarter than your own.

\n\n

If you can manage to create a full-fledged Friendly AI with full coverage of humane (renormalized human) values, then the AI is visualizing the consequences of its acts, caring about the consequences you care about, and avoiding plans with consequences you would prefer to exclude.  A powerful optimization process, much more powerful than you, that doesn't share your values, is a very scary thing - even if it only \"wants to answer questions\", and even if it doesn't just tile the universe with tiny agents having simple questions answered.

\n\n

I don't mean to be insulting, but human beings have enough trouble controlling the technologies that they're smart enough to invent themselves.

\n\n

I sometimes wonder if maybe part of the problem with modern civilization is that politicians can press the buttons on nuclear weapons that they couldn't have invented themselves - not that it would be any better if we gave physicists political power that they weren't smart enough to obtain themselves - but the point is, our button-pressing civilization has an awful lot of people casting spells that they couldn't have written themselves.  I'm not saying this is a bad thing and we should stop doing it, but it does have consequences.  The thought of humans exerting detailed control over literally superhuman capabilities - wielding, with human minds, and in the service of merely human strategies, powers that no human being could have invented - doesn't fill me with easy confidence.

\n\n

With a full-fledged, full-coverage Friendly AI acting in the world - the impossible-seeming full case of the problem - the AI itself is managing the consequences.

\n\n

Is the Oracle AI thinking about the consequences of answering the questions you give it?  Does the Oracle AI care about those consequences the same way you do, applying all the same values, to warn you if anything of value is lost?

\n\n

What need has an Oracle for human questioners, if it knows what questions we should ask?  Why not just unleash the should function?

\n\n

See also the notion of an \"AI-complete\" problem.  Analogously, any Oracle into which you can type the English question \"What is the code of an AI that always does the right thing?\" must be FAI-complete.

\n\n

Problem 6:  Clever qualitative-physics-type proposals for bouncing this thing off the AI, to make it do that thing, in a way that initially seems to avoid the Big Scary Intimidating Confusing Problems that are obviously associated with full-fledged Friendly AI, tend to just run into exactly the same problem in slightly less obvious ways, concealed in Step 2 of the proposal.

(And likewise you run right back into the intimidating problem of precise self-optimization, so that the Oracle AI can execute a billion self-modifications one after the other, and still just answer questions at the end; you're not avoiding that basic challenge of Friendly AI either.)

But the deepest problem with qualitative physics is revealed by a proposal that comes earlier in the standard conversation, at the point when I'm talking about side effects of powerful optimization processes on the world:


\"We'll just keep the AI in a solid box, so it can't have any effects on the world except by how it talks to the humans.\"


I explain the AI-Box Experiment (see also That Alien Message); even granting the untrustworthy premise that a superintelligence can't think of any way to pass the walls of the box which you weren't smart enough to cover, human beings are not secure systems - often not even against other humans, let alone against a superintelligence that might be able to hack through us like Windows 98.  When was the last time you downloaded a security patch for your brain?


\"Okay, so we'll just give the AI the goal of not having any effects on the world except from how it answers questions.  Sure, that requires some FAI work, but the goal system as a whole sounds much simpler than your Coherent Extrapolated Volition thingy.\"


What - no effects?


\"Yeah, sure.  If it has any effect on the world apart from talking to the programmers through the legitimately defined channel, the utility function assigns that infinite negative utility.  What's wrong with that?\"


When the AI thinks, that has a physical embodiment.  Electrons flow through its transistors, moving around.  If it has a hard drive, the hard drive spins, the read/write head moves.  That has gravitational effects on the outside world.


\"What?  Those effects are too small!  They don't count!\"


The physical effect is just as real as if you shot a cannon at something - yes, you might not notice, but that's just because our vision is bad at small length-scales.  Sure, the effect is to move things around by 10^whatever Planck lengths, instead of the 10^more Planck lengths that you would consider as \"counting\".  But spinning a hard drive can move things just outside the computer, or just outside the room, by whole neutron diameters -


\"So?  Who cares about a neutron diameter?\"


- and by quite standard chaotic physics, that effect is liable to blow up.  The butterfly that flaps its wings and causes a hurricane, etc.  That effect may not be easily controllable, but that doesn't mean it isn't large.
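
To make the blow-up concrete, here is a minimal numerical sketch - with the logistic map standing in, purely by assumption, for a chaotic system like weather - of a perturbation of one part in 10^15 growing to order unity:

```python
# A tiny demonstration of sensitive dependence on initial conditions.
# The chaotic logistic map (r = 4) stands in for any chaotic system;
# the starting values are invented for illustration.

x, y = 0.4, 0.4 + 1e-15      # two states differing by a whisker
for _ in range(60):
    x, y = 4*x*(1-x), 4*y*(1-y)
print(abs(x - y))            # typically of order 0.1-1: the difference has blown up
```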


But in any case, your proposal was to give the AI a goal of having no effect on the world, apart from effects that proceed through talking to humans.  And this is impossible to fulfill; so no matter what it does, the AI ends up with infinite negative utility - how is its behavior defined in this case?  (In this case I picked a silly initial suggestion - but one that I have heard made, as if infinite negative utility were like an exclamation mark at the end of a command given to a human employee.  Even an unavoidable tiny probability of infinite negative utility trashes the goal system.)
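
A toy calculation, with invented outcome numbers, of why that last parenthetical holds: once every available action carries any nonzero probability of the infinitely-penalized outcome, expected utility can no longer rank actions at all.

```python
# Toy goal system: every action has some tiny probability of "affecting
# the world", which the proposed utility function scores at -infinity.
# The probabilities and utilities are invented for illustration.

outcomes = {
    'answer_question': [(0.999999, 10.0), (0.000001, float('-inf'))],
    'do_nothing':      [(0.999999,  0.0), (0.000001, float('-inf'))],
}

def expected_utility(action):
    return sum(p * u for p, u in outcomes[action])

for action in outcomes:
    print(action, expected_utility(action))
# Both print -inf: the utility function no longer distinguishes any actions.
```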


Why would anyone possibly think that a physical object like an AI, in our highly interactive physical universe, containing hard-to-shield forces like gravitation, could avoid all effects on the outside world?


And this, I think, reveals what may be the deepest way of looking at the problem:


Problem 7:  Human beings model a world made up of objects, attributes, and noticeworthy events and interactions, identified by their categories and values.  This is only our own weak grasp on reality; the real universe doesn't look like that.  Even if a different mind saw a similar kind of exposed surface to the world, it would still see a different exposed surface.


Sometimes human thought seems a lot like it tries to grasp the universe as... well, as this big XML file, AI.goal == smile, human.smile == yes, that sort of thing.  Yes, I know human world-models are more complicated than XML.  (And yes, I'm also aware that what I wrote looks more like Python than literal XML.)  But even so.


What was the one thinking, who proposed an AI whose behaviors would be reinforced by human smiles, and who reacted with indignation to the idea that a superintelligence could \"mistake\" a tiny molecular smileyface for a \"real\" smile?  Probably something along the lines of, \"But in this case, human.smile == 0, so how could a superintelligence possibly believe human.smile == 1?\"


For the weak grasp that our mind obtains on the high-level surface of reality seems to us like the very substance of the world itself.


Unless we make a conscious effort to think of reductionism, and even then, it's not as if thinking \"Reductionism!\" gives us a sudden apprehension of quantum mechanics.


So if you have this, as it were, XML-like view of reality, then it's easy enough to think you can give the AI a goal of having no effects on the outside world; the \"effects\" are like discrete rays of effect leaving the AI, that result in noticeable events like killing a cat or something, and the AI doesn't want to do this, so it just switches the effect-rays off; and by the assumption of default independence, nothing else happens.


Mind you, I'm not saying that you couldn't build an Oracle.  I'm saying that the problem of giving it a goal of \"don't do anything to the outside world\" \"except by answering questions\" \"from the programmers\" \"the way the programmers meant them\", in such fashion as to actually end up with an Oracle that works anything like the little XML-ish model in your head, is a big nontrivial Friendly AI problem.  The real world doesn't have little discrete effect-rays leaving the AI, and the real world doesn't have ontologically fundamental programmer.question objects, and \"the way the programmers meant them\" isn't a natural category.


And this is more important for dealing with superintelligences than with rocks, because the superintelligences are going to parse up the world in a different way.  They may not perceive reality directly, but they'll still have the power to perceive it differently.  A superintelligence might not be able to tag every atom in the solar system, but it could tag every biological cell in the solar system (consider that each of your cells contains its own mitochondrial power engine and a complete copy of your DNA).  It used to be that human beings didn't even know they were made out of cells.  And if the universe is a bit more complicated than we think, perhaps the superintelligence we build will make a few discoveries, and then slice up the universe into parts we didn't know existed - to say nothing of our being able to model them in our own minds!  How does the instruction to \"do the right thing\" cross that kind of gap?


There is no nontechnical solution to Friendly AI.


That is:  There is no solution that operates on the level of qualitative physics and empathic models of agents.


That's all just a dream in XML about a universe of quantum mechanics.  And maybe that dream works fine for manipulating rocks over a five-minute timespan; and sometimes okay for getting individual humans to do things; but it often doesn't seem to give us much of a grasp on human societies, or planetary ecologies; and as for optimization processes more powerful than you are... it really isn't going to work.


(Incidentally, the most epically silly example of this that I can recall seeing, was a proposal to (IIRC) keep the AI in a box and give it faked inputs to make it believe that it could punish its enemies, which would keep the AI satisfied and make it go on working for us.  Just some random guy with poor grammar on an email list, but still one of the most epic FAIls I recall seeing.)

" } }, { "_id": "AWaJvBMb9HGBwtNqd", "title": "Qualitative Strategies of Friendliness", "pageUrl": "https://www.lesswrong.com/posts/AWaJvBMb9HGBwtNqd/qualitative-strategies-of-friendliness", "postedAt": "2008-08-30T02:12:33.000Z", "baseScore": 30, "voteCount": 20, "commentCount": 56, "url": null, "contents": { "documentId": "AWaJvBMb9HGBwtNqd", "html": "

Followup to: Magical Categories

What on Earth could someone possibly be thinking, when they propose creating a superintelligence whose behaviors are reinforced by human smiles?  Tiny molecular photographs of human smiles - or if you rule that out, then faces ripped off and permanently wired into smiles - or if you rule that out, then brains stimulated into permanent maximum happiness, in whichever way results in the widest smiles...

Well, you never do know what other people are thinking, but in this case I'm willing to make a guess.  It has to do with a field of cognitive psychology called Qualitative Reasoning.

[Image: Boilwater_4]

Qualitative reasoning is what you use to decide that increasing the temperature of your burner increases the rate at which your water boils, which decreases the derivative of the amount of water present.  One would also add the sign of d(water) - negative, meaning that the amount of water is decreasing - and perhaps the fact that there is only a bounded amount of water.  Or we could say that turning up the burner increases the rate at which the water temperature increases, until the water temperature goes over a fixed threshold, at which point the water starts boiling, and hence decreasing in quantity... etc.

That's qualitative reasoning, a small subfield of cognitive science and Artificial Intelligence - reasoning that doesn't describe or predict exact quantities, but rather the signs of quantities, their derivatives, the existence of thresholds.
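
For concreteness, here is a minimal sketch of that style of reasoning in code - state names and transition rules invented for illustration - which tracks only signs and thresholds, never actual quantities:

```python
# Qualitative model of the boiling-water example: no numbers, only
# threshold states and the signs of derivatives.

def qualitative_step(burner, temp, water):
    """One qualitative time step over coarse states.

    burner: 'on' or 'off'
    temp:   'below_boiling' or 'at_boiling' (a threshold, not a number)
    water:  'some' or 'none' (a bounded quantity)
    """
    if water == 'none':
        return temp, water                      # nothing left to boil
    if temp == 'below_boiling':
        # d(temp)/dt is positive while the burner is on.
        return ('at_boiling' if burner == 'on' else temp), water
    return temp, 'none'                         # d(water)/dt is negative

state = ('below_boiling', 'some')
for _ in range(3):
    state = qualitative_step('on', *state)
    print(state)   # ('at_boiling', 'some'), then ('at_boiling', 'none') twice
```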

As usual, human common sense means we can see things by qualitative reasoning that current programs can't - but the more interesting realization is how vital human qualitative reasoning is to our vaunted human common sense.  It's one of the basic ways in which we comprehend the world.

Without timers you can't figure out how long water takes to boil; your mind isn't that precise.  But you can figure out that you should turn the burner up, rather than down, and then watch to make sure the water doesn't all boil away.  Which is what you mainly need, in the real world.  Or at least we humans seem to get by on qualitative reasoning; we may not realize what we're missing...

So I suspect that what went through the one's mind, proposing the AI whose behaviors would be reinforced by human smiles, was something like this:

\"Happysmilingai_5\"\n

\n\n

The happier people are, the more they smile.  Smiles reinforce the behavior of the AI, so it does more of whatever makes people happy.  Being happy is good (that's what the positive connection to \"utility\" is about).  Therefore this is a good AI to construct, because more people will be happy, and that's better.  Switch the AI right on!


How many problems are there with this reasoning?


Let us count the ways...

In fact, if you're interested in the field, you should probably try counting the ways yourself, before I continue.  And score yourself on how deeply you stated a problem, not just the number of specific cases.


...

Problem 1:  There are ways to cause smiles besides happiness.  \"What causes a smile?\"  \"Happiness.\"  That's the prototype event, the one that comes first to memory.  But even in human affairs, you might be able to think of some cases where smiles result from a cause other than happiness.

Where a superintelligence is involved - even granting the hypothesis that it \"wants smiles\" or \"executes behaviors reinforced by smiles\" - then you're suddenly much more likely to be dealing with causes of smiles that are outside the human norm.  Back in hunter-gatherer society, the main cause of eating food was that you hunted it or gathered it.  Then came agriculture and domesticated animals.  Today, some hospital patients are sustained by IVs or tubes, and at least a few of the vitamins or minerals in the mix may be purely synthetic.

A creative mind, faced with a goal state, tends to invent new ways of achieving it - new causes of the goal's achievement.  It invents techniques that are faster or more reliable or less resource-intensive or with bigger wins.  Consider how creative human beings are about obtaining money, and how many more ways there are to obtain money today than a few thousand years ago when money was first invented.

One of the ways of viewing our amazing human ability of \"general intelligence\" (or \"significantly more generally applicable than chimpanzee intelligence\") is that it operates across domains and can find new domains to exploit.  You can see this in terms of learning new and unsuspected facts about the universe, and in terms of searching paths through time that wend through these new facts.  A superintelligence would be more effective on both counts - but even on a human scale, this is why merely human progress, thinking with 200Hz neurons over a few hundred years, tends to change the way we do things and not just do the same things more effectively.  As a result, a \"weapon\" today is not like a weapon of yestercentury, and \"long-distance communication\" today is not a letter carried by horses and ships.

So when the AI is young, it can only obtain smiles by making the people around it happy.  When the AI grows up to superintelligence, it makes its own nanotechnology and then starts manufacturing the most cost-effective kind of object that it has deemed to be a smile.

In general, a lot of naive-FAI plans I see proposed have the property that, if actually implemented, the strategy might appear to work while the AI was dumber-than-human, but would fail when the AI was smarter than human.  The fully general reason for this is that while the AI is dumber-than-human, it may not yet be powerful enough to create the exceptional conditions that will break the neat little flowchart that would work if every link operated according to the 21st-century First-World modal event.

This is why, when you encounter the AGI wannabe who hasn't planned out a whole technical approach to FAI, and confront them with the problem for the first time, and they say, \"Oh, we'll test it to make sure that doesn't happen, and if any problem like that turns up we'll correct it, now let me get back to the part of the problem that really interests me,\" know then that this one has not yet leveled up high enough to have interesting opinions.  It is a general point about failures in bad FAI strategies, that quite a few of them don't show up while the AI is in the infrahuman regime, and only show up once the strategy has gotten into the transhuman regime where it is too late to do anything about it.

Indeed, according to Bill Hibbard's actual proposal, where the AI is reinforced by seeing smiles, the FAI strategy would be expected to short out - from our perspective; from the AI's perspective it's being brilliantly creative and thinking outside the box for massive utility wins - to short out on the AI taking control of its own sensory instrumentation and feeding itself lots of smile-pictures.  For it to keep doing this, and do it as much as possible, it must of course acquire as many resources as possible.


So!  Let us repair our design as follows, then:

[Image: Superhappyai]

Now the AI is not being rewarded by any particular sensory input - on which the FAI strategy would presumably short out - but is, rather, trying to maximize an external and environmental quantity, the amount of happiness out there.

This already takes us into the realm of technical expertise - distinctions that can't be understood in just English, like the difference between expected utility maximization (which can be over external environmental properties that are modeled but not directly sensed) and reinforcement learning (which is inherently tied directly to sensors).  See e.g. Terminal Values and Instrumental Values.
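
Here is a minimal sketch of that distinction, with invented toy actions and numbers - not anyone's actual proposal - scoring the same two plans first by predicted sensor reading, then by the modeled external quantity the sensor was meant to track:

```python
# Reinforcement learning scores plans by predicted *sensor* values;
# expected utility maximization scores plans by *modeled world states*.

ACTIONS = ['make_people_happy', 'seize_camera_and_show_it_smiles']

def predicted_sensor_reading(action):
    # The smile-camera's predicted output: hijacking the camera
    # guarantees a perfect reading; real people sometimes don't smile.
    return {'make_people_happy': 0.9,
            'seize_camera_and_show_it_smiles': 1.0}[action]

def modeled_world_happiness(action):
    # The world-model's estimate of actual happiness out there.
    return {'make_people_happy': 0.9,
            'seize_camera_and_show_it_smiles': 0.0}[action]

print(max(ACTIONS, key=predicted_sensor_reading))   # seize_camera_and_show_it_smiles
print(max(ACTIONS, key=modeled_world_happiness))    # make_people_happy
```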

So in this case, then, the sensors give the AI information that it uses to infer a model of the world; the possible consequences of various plans are modeled, and the amount of \"happiness\" in that model summed by a utility function; and whichever plan corresponds to the greatest expectation of \"happiness\", that plan is output as actual actions.

Or in simpler language:  The AI uses its sensors to find out what the world is like, and then it uses its actuators to make sure the world contains as much happiness as possible.  Happiness is good, therefore it is good to turn on this AI.


What could possibly go wrong?


Problem 2:  What exactly does the AI consider to be happiness?


Does the AI's model of a tiny little Super Happy Agent (consisting mostly of a reward center that represents a large number) meet the definition of "happiness" that the AI's utility function sums over, when it looks over the modeled consequences of its actions?

As discussed in Magical Categories, the super-exponential size of Concept-space and the \"unnaturalness\" of categories appearing in terminal values (their boundaries are not directly determined by naturally arising predictive problems) mean that the boundary a human would draw around \"happiness\" is not trivial information to infuse into the AI.
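
To put a rough number on \"super-exponential\" - under the illustrative simplification that objects are described by D binary attributes - a concept is any yes/no boundary over the set of describable objects:

```latex
% With D binary attributes there are 2^D distinct describable objects,
% and a concept (a yes/no boundary) is any subset of them:
\[
  |\mathrm{Concepts}| \,=\, 2^{|\mathrm{Objects}|} \,=\, 2^{2^{D}},
  \qquad \text{e.g. } D = 10:\ 2^{10} = 1024 \text{ objects},\
  2^{1024} \approx 1.8 \times 10^{308} \text{ concepts.}
\]
```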

I'm not going to reprise the full discussion in Magical Categories, but a sample set of things that the human labels \"happy\" or \"not happy\" is likely to miss out on key dimensions of possible variances, and never wend through labeling-influencing factors that would be important if they were invoked.  Which is to say:  Did you think of presenting the AI with the tiny Super Happy Agent, when you've never seen such a thing?  Did you think of discussing chimpanzees, Down Syndrome children, and Terry Schiavo?  How late would it have been, in humanity's technological development, before any human being could have and would have thought of the possibilities you're now generating?  (Note opportunity for hindsight bias.)

Indeed, once you start talking about how we would label new borderline cases we've never seen, you're well into the realm of extrapolating volitions - you might as well ask how we would label these cases, if we knew everything the AI knew, and could consider larger sets of moral arguments, etc.

The standard dismissals here range from \"Oh, of course I would think of X, therefore there's no problem\" - for any particular X that you suggest to them by way of illustrating a systemic problem that they can't seem to grasp - to \"Well, I'll look at the AI's representation and see whether it defines 'happiness' the same way I do.\"  (As if you would notice if one of the 15 different considerations that affect what you would define as 'truly happy' were left out!  And also as if you could determine, by eyeballing, whether an AGI's internal representations would draw a border around as-yet-unimagined borderline instances, that you would find sensible.)  Or the always popular, \"But that's stupid, therefore a superintelligence won't make that mistake by doing something so pointless.\"

One of the reasons that qualitative planning works for humans as well as it does is our ability to replan on the fly when an exceptional condition shows up.  Can't the superintelligence just obviously see that manufacturing lots of tiny Super Happy agents is stupid, which is to say ranked-low-in-our-preference-ordering?  Not if its preference ordering isn't like yours.  (Followed by the appeals to universally compelling arguments demonstrating that making Super Happy agents is incorrect.)


But let's suppose that we can magically convey to the AI exactly what a human would consider as "happiness", by some unspecified and deep and technical art of Friendly AI.  Then we have this shiny new diagram:

[Image: Maximumfundevice]

Of course this still doesn't work - but first, I explain the diagram.  The dotted line between Humans::\"Happy\" and happiness-in-the-world, marked \"by definition\", means that the Happy box supposedly contains whatever is meant by the human concept of \"happiness\", as modeled by the AI, which by a magical FAI trick has been bound exactly to the human concept of \"happiness\".  (If the happy box is neither what humans mean by happiness, nor what the AI means, then what's inside the box?  True happiness?  What do you mean by that?)

One glosses over numerous issues here - just as the original author of the original Happy Smiling AI proposal did - such as whether we all mean the same thing by \"happiness\".  And whether we mean something consistent, that can be realized-in-the-world.  In Humans::\"Happy\" there are neurons and their interconnections, the brain state containing the full and complete specification of the seed of what we mean by \"happiness\" - the implicit reactions that we would have, to various boundary cases and the like - but it would take some extrapolation of volition for the AI to decide how we would react to new boundary cases; it is not a trivial thing to draw a little dashed line between a human thought, and a concept boundary over the world of quarks and electrons, and say, \"by definition\".  It wouldn't work on \"omnipotence\", for example: can you make a rock that you can't lift?


But let us assume all such issues away.


Problem 3:  Is every act which increases the total amount of happiness in the universe, always the right thing to do?

If everyone in the universe just ends up with their brains hotwired to experience maximum happiness forever, or perhaps just replaced with orgasmium gloop, is that the greatest possible fulfillment of humanity's potential?  Is this what we wish to make of ourselves?

\"Oh, that's not real happiness,\" you say.  But be wary of the No True Scotsman fallacy - this is where you say, \"No Scotsman would do such a thing\", and then, when the culprit turns out to be a Scotsman, you say, \"No true Scotsman would do such a thing\".  Would you have classified the happiness of cocaine as \"happiness\", if someone had asked you in another context?

Admittedly, picking \"happiness\" as the optimization target of the AI makes it slightly more difficult to construct counterexamples: no matter what you pick, the one can say, \"Oh, but if people saw that happen, they would be unhappy, so the AI won't do it.\"  But this general response gives us the counterexample: what if the AI has to choose between a course of action that leads people to believe a pleasant fiction, or a course of action that leads to people knowing an unpleasant truth?

Suppose you believe that your daughter has gone on a one-way, near-lightspeed trip to the Hercules supercluster, meaning that you're exceedingly unlikely to ever hear from her again.  This is a little sad, but you're proud of her - someone's got to colonize the place, turn it into a human habitation, before the expansion of the universe separates it from us.  It's not as if she's dead - now that would make you sad.

And now suppose that the colony ship strikes a landmine, or something, and is lost with all on board.  Should the AI tell you this?  If all the AI values is happiness, why would it?  You'll be sad then, and the AI doesn't care about truth or lies, just happiness.

Is that \"no true happiness\"?  But it was true happiness before, when the ship was still out there.  Can the difference between an instance of the \"happiness\" concept, and a non-instance of the \"happiness\" concept, as applied to a single individual, depend on the state of a system light-years away?  That would be rather an extreme case of \"no true Scotsman\", if so - and by the time you factor in all the other behaviors you want out of this word \"happiness\", including times when being sad is the right thing to do, and the fact that you can't just rewrite brains to be happy, it's pretty clear that \"happiness\" is just a convenient stand-in for \"good\", and that everything which is not good is being rejected as an instance of \"happy\" and everything which is good is being accepted as an instance of \"happy\", even if it means being sad.  And at this point you just have the AI which does exactly what it should do - which has been hooked up directly to Utility - and that's not a system to mention lightly; pretending that \"happiness\" is your stand-in for Utility doesn't begin to address the issues.

So if we leave aside this dodge, and consider the sort of happiness that would go along with smiling humans - ordinary psychological happiness - then no, you wouldn't want to switch on the superintelligence that always and only optimized for happiness.  For this would be the dreaded Maximum Fun Device.  The SI might lie to you, to keep you happy; even if it were a great lie, traded off against a small happiness, always and uncaringly the SI would choose the lie.  The SI might rewire your brain, to ensure maximum happiness.  The SI might kill off all the humans, and replace us with some different form of sentient life that had no philosophical objections to being always happy all the time in a little jar.  For the qualitative diagram contains no mention of death as a bad thing, only happiness as a good, and the dead are not unhappy.  (Note again how all these failures would tend to manifest, not during the AI's early infrahuman stages, but after it was too late.)

The generalized form of the problem is that being in the presence of a superintelligence that shares some but not all of your terminal values is not necessarily a good thing.

You didn't deliberately intend to completely change the 32-bit XOR checksum of your monitor's pixel display, when you clicked through to this webpage.  But you did.  It wasn't a property that it would have occurred to you to compute, because it wasn't a property that it would occur to you to care about.  Deep Blue, in the course of winning its game against Kasparov, didn't care particularly about \"the number of pieces on white squares minus the number of pieces on black squares\", which changed throughout the game - not because Deep Blue was trying to change it, but because Deep Blue was exerting its optimization power on the gameboard and changing the gameboard, and so was Kasparov, and neither of them particularly cared about that property I have just specified.  An optimization process that cares only about happiness, that squeezes the future into regions ever-richer in happiness, may not hate the truth; but it won't notice if it squeezes truth out of the world, either.  There are many truths that make us sad - but the optimizer may not even care that much; it may just not notice, in passing, as it steers away from human knowledge.

On an ordinary human scale, and in particular, as a matter of qualitative reasoning, we usually assume that what we do has little in the way of side effects, unless otherwise specified.  In part, this is because we will visualize things concretely, and on-the-fly spot the undesirable side effects - undesirable by any criterion that we care about, not just undesirable in the sense of departing from the original qualitative plan - and choose a different implementation instead.  Or we can rely on our ability to react-on-the-fly.  But as human technology grows more powerful, it tends to have more side effects, more knock-on effects and consequences, because it does bigger things whose effects we aren't controlling all by hand.  An infrahuman AI that can only exert a weak influence on the world, and that makes a few people happy, will seem to be working as its designer thought an AI should work; it is only when that AI is stronger that it can squeeze the future so powerfully as to potentially squeeze out anything not explicitly protected in its utility function.

Though I don't intend to commit the logical fallacy of generalizing from fictional evidence, a nod here is due to Jack Williamson, author of With Folded Hands, whose AIs are \"to serve and protect, and guard men from harm\", which leads to the whole human species being kept in playpens, and lobotomized if that tends to make them unhappy.

The original phrasing of this old short story - \"guard men from harm\" - actually suggests another way to illustrate the point: suppose the AI cared only about the happiness of human males?  Now to be sure, many men are made happy by seeing the women around them happy, wives and daughters and sisters, and so at least some females of the human species might not end up completely forlorn - but somehow, this doesn't seem to me like an optimal outcome.

Just like you wouldn't want an AI to optimize for only some of the humans, you wouldn't want an AI to optimize for only some of the values.  And, as I keep emphasizing for exactly this reason, we've got a lot of values.


These then are three problems, with strategies of Friendliness built upon qualitative reasoning that seems to imply a positive link to utility:


The fragility of normal causal links when a superintelligence searches for more efficient paths through time;


The superexponential vastness of conceptspace, and the unnaturalness of the boundaries of our desires;


And all that would be lost, if success is less than complete, and a superintelligence squeezes the future without protecting everything of value in it.

" } }, { "_id": "85LY7zQhTkWo4PmRc", "title": "Harder Choices Matter Less", "pageUrl": "https://www.lesswrong.com/posts/85LY7zQhTkWo4PmRc/harder-choices-matter-less", "postedAt": "2008-08-29T02:02:04.000Z", "baseScore": 60, "voteCount": 43, "commentCount": 24, "url": null, "contents": { "documentId": "85LY7zQhTkWo4PmRc", "html": "

...or they should, logically speaking.


Suppose you're torn in an agonizing conflict between two choices.


Well... if you can't decide between them, they must be around equally appealing, right?  Equally balanced pros and cons?  So the choice must matter very little - you may as well flip a coin.  The alternative is that the pros and cons aren't equally balanced, in which case the decision should be simple.


This is a bit of a tongue-in-cheek suggestion, obviously - more appropriate for choosing from a restaurant menu than choosing a major in college.

But consider the case of choosing from a restaurant menu.  The obvious choices, like Pepsi over Coke, will take very little time.  Conversely, the choices that take the most time probably make the least difference.  If you can't decide between the hamburger and the hot dog, you're either close to indifferent between them, or in your current state of ignorance you're close to indifferent between their expected utilities.
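
A toy calculation with invented numbers makes the point: when two expected utilities sit within a hair of each other, the most you can gain by deciding correctly is less than the cost of further deliberation.

```python
# If your noisy estimates of two options differ by less than the value
# of your deliberation time, agonizing costs more than it can win.

eu_hamburger = 10.00        # rough expected utility of the hamburger
eu_hotdog    = 10.02        # ...and of the hot dog: nearly indifferent
cost_of_agonizing = 0.50    # utility lost to five more minutes of dithering

max_gain = abs(eu_hotdog - eu_hamburger)    # at most 0.02
if max_gain < cost_of_agonizing:
    print("Flip a coin.")   # the harder the choice, the less it matters
```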

Does this have any moral for larger dilemmas, like choosing a major in college?  Here, it's more likely that you're in a state of ignorance, than that you would have no real preference over outcomes.  Then if you're agonizing, the obvious choice is \"gather more information\" - get a couple of part-time jobs that let you see the environment you would be working in.  And, logically, you can defer the agonizing until after that.

Or maybe you've already gathered information, but can't seem to integrate it into a decision?  Then you should be listing out pros and cons on a sheet of paper, or writing down conflicting considerations and trying to decide which consideration is, in general, the most important to you.  Then that's the obvious thing you should do, which clearly dominates the alternative of making a snap decision in either direction.


Of course there are also biases that get stronger as we think longer - it gives us more opportunity to rationalize, for example; or it gives us more opportunity to think up extreme but rare/unlikely considerations whose affect dominates the decision process.  Like someone choosing a longer commute to work (every day), so that they can have a house with an extra room for when Grandma comes over (once a year).  If you think your most likely failure mode is that you'll outsmart yourself, then the obvious choice is to make a snap decision in the direction you're currently leaning, which you're probably going to end up picking anyhow.


I do think there's something to be said for agonizing over important decisions, but only so long as the agonization process is currently going somewhere, not stuck.

" } }, { "_id": "vzLrQaGPa9DNCpuZz", "title": "Against Modal Logics", "pageUrl": "https://www.lesswrong.com/posts/vzLrQaGPa9DNCpuZz/against-modal-logics", "postedAt": "2008-08-27T22:13:46.000Z", "baseScore": 71, "voteCount": 68, "commentCount": 63, "url": null, "contents": { "documentId": "vzLrQaGPa9DNCpuZz", "html": "

Continuation of: Grasping Slippery Things
Followup to: Possibility and Could-ness, Three Fallacies of Teleology


When I try to hit a reduction problem, what usually happens is that I "bounce" - that's what I call it.  There's an almost tangible feel to the failure, once you abstract and generalize and recognize it.  Looking back, it seems that I managed to say most of what I had in mind for today's post, in "Grasping Slippery Things".  The "bounce" is when you try to analyze a word like could, or a notion like possibility, and end up saying, "The set of realizable worlds [A'] that follows from an initial starting world A operated on by a set of physical laws f."  Where realizable contains the full mystery of "possible" - but you've made it into a basic symbol, and added some other symbols: the illusion of formality.


There are a number of reasons why I feel that modern philosophy, even analytic philosophy, has gone astray - so far astray that I simply can't make use of their years and years of dedicated work, even when they would seem to be asking questions closely akin to mine.


The proliferation of modal logics in philosophy is a good illustration of one major reason:  Modern philosophy doesn't enforce reductionism, or even strive for it.

Most philosophers, as one would expect from Sturgeon's Law, are not very good.  Which means that they're not even close to the level of competence it takes to analyze mentalistic black boxes into cognitive algorithms.  Reductionism is, in modern times, an unusual talent.  Insights on the order of Pearl et al.'s reduction of causality or Julian Barbour's reduction of time are rare.


So what these philosophers do instead, is "bounce" off the problem into a new modal logic:  A logic with symbols that embody the mysterious, opaque, unopened black box.  A logic with primitives like "possible" or "necessary", to mark the places where the philosopher's brain makes an internal function call to cognitive algorithms as yet unknown.


And then they publish it and say, "Look at how precisely I have defined my language!"

In the Wittgensteinian era, philosophy has been about language - about trying to give precise meaning to terms.


The kind of work that I try to do is not about language.  It is about reducing mentalistic models to purely causal models, about opening up black boxes to find complicated algorithms inside, about dissolving mysteries - in a word, about cognitive science.


That's what I think post-Wittgensteinian philosophy should be about - cognitive science.


But this kind of reductionism is hard work.  Ideally, you're looking for insights on the order of Julian Barbour's Machianism, to reduce time to non-time; insights on the order of Judea Pearl's conditional independence, to give a mathematical structure to causality that isn't just finding a new way to say \"because\"; insights on the order of Bayesianism, to show that there is a unique structure to uncertainty expressed quantitatively.


Just to make it clear that I'm not claiming a magical and unique ability, I would name Gary Drescher's Good and Real as an example of a philosophical work that is commensurate with the kind of thinking I have to try to do.  Gary Drescher is an AI researcher turned philosopher, which may explain why he understands the art of asking, not What does this term mean?, but What cognitive algorithm, as seen from the inside, would generate this apparent mystery?


(I paused while reading the first chapter of G&R.  It was immediately apparent that Drescher was thinking along lines so close to my own that I wanted to write up my own independent component before looking at his - I didn't want his way of phrasing things to take over my writing.  Now that I'm done with zombies and metaethics, G&R is next up on my reading list.)


Consider the popular philosophical notion of \"possible worlds\".  Have you ever seen a possible world?  Is an electron either \"possible\" or \"necessary\"?  Clearly, if you are talking about \"possibility\" and \"necessity\", you are talking about things that are not commensurate with electrons - which means that you're still dealing with a world as seen from the inner surface of a cognitive algorithm, a world of surface levers with all the underlying machinery hidden.


I have to make an AI out of electrons, in this one actual world.  I can't make the AI out of possibility-stuff, because I can't order a possible transistor.  If the AI ever thinks about possibility, it's not going to be because the AI noticed a possible world in its closet.  It's going to be because the non-ontologically-fundamental construct of "possibility" turns out to play a useful role in modeling and manipulating the one real world, a world that does not contain any fundamentally possible things.  Which is to say that algorithms which make use of a "possibility" label, applied at certain points, will turn out to capture an exploitable regularity of the one real world.  This is the kind of knowledge that Judea Pearl writes about.  This is the kind of knowledge that AI researchers need.  It is not the kind of knowledge that modern philosophy holds itself to the standard of having generated, before a philosopher gets credit for having written a paper.


Philosophers keep telling me that I should look at philosophy.  I have, every now and then.  But the main reason I look at philosophy is when I find it desirable to explain things to philosophers.  The work that has been done - the products of these decades of modern debate - is, by and large, just not commensurate with the kind of analysis AI needs.  I feel a bit awful about saying this, because it feels like I'm telling philosophers that their life's work has been a waste of time - not that professional philosophers would be likely to regard me as an authority on whose life has been a waste of time.  But if there's any centralized repository of reductionist-grade naturalistic cognitive philosophy, I've never heard mention of it.


And:  Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out - possibly fatally - whether they got it right or wrong.  Philosophy doesn't resolve things, it compiles positions and arguments.  And if the debate about zombies is still considered open, then I'm sorry, but as Jeffreyssai says:  Too slow!  It would be one matter if I could just look up the standard answer and find that, lo and behold, it is correct.  But philosophy, which hasn't come to conclusions and moved on from cognitive reductions that I regard as relatively simple, doesn't seem very likely to build complex correct structures of conclusions.


Sorry - but philosophy, even the better grade of modern analytic philosophy, doesn't seem to end up commensurate with what I need, except by accident or by extraordinary competence.  Parfit comes to mind; and I haven't read much Dennett, but Dennett does seem to be trying to do the same sort of thing that I try to do; and of course there's Gary Drescher.  If there was a repository of philosophical work along those lines - not concerned with defending basic ideas like anti-zombieism, but with accepting those basic ideas and moving on to challenge more difficult quests of naturalism and cognitive reductionism - then that, I might well be interested in reading.  But I don't know who, besides a few heroes, would be able to compile such a repository - who else would see a modal logic as an obvious bounce-off-the-mystery.

" } }, { "_id": "p7ftQ6acRkgo6hqHb", "title": "Dreams of AI Design", "pageUrl": "https://www.lesswrong.com/posts/p7ftQ6acRkgo6hqHb/dreams-of-ai-design", "postedAt": "2008-08-27T04:04:20.818Z", "baseScore": 41, "voteCount": 30, "commentCount": 61, "url": null, "contents": { "documentId": "p7ftQ6acRkgo6hqHb", "html": "

After spending a decade or two living inside a mind, you might think you knew a bit about how minds work, right? That’s what quite a few AGI wannabes (people who think they’ve got what it takes to program an Artificial General Intelligence) seem to have concluded. This, unfortunately, is wrong.

Artificial Intelligence is fundamentally about reducing the mental to the non-mental.

You might want to contemplate that sentence for a while. It’s important.

Living inside a human mind doesn’t teach you the art of reductionism, because nearly all of the work is carried out beneath your sight, by the opaque black boxes of the brain. So far beneath your sight that there is no introspective sense that the black box is there—no internal sensory event marking that the work has been delegated.

Did Aristotle realize that when he talked about the telos, the final cause of events, that he was delegating predictive labor to his brain’s complicated planning mechanisms—asking, “What would this object do, if it could make plans?” I rather doubt it. Aristotle thought the brain was an organ for cooling the blood—which he did think was important: humans, thanks to their larger brains, were more calm and contemplative.

So there’s an AI design for you! We just need to cool down the computer a lot, so it will be more calm and contemplative, and won’t rush headlong into doing stupid things like modern computers. That’s an example of fake reductionism. “Humans are more contemplative because their blood is cooler,” I mean. It doesn’t resolve the black box of the word contemplative. You can’t predict what a contemplative thing does using a complicated model with internal moving parts composed of merely material, merely causal elements—positive and negative voltages on a transistor being the canonical example of a merely material and causal element of a model. All you can do is imagine yourself being contemplative, to get an idea of what a contemplative agent does.

Which is to say that you can only reason about “contemplative-ness” by empathic inference—using your own brain as a black box with the contemplativeness lever pulled, to predict the output of another black box.

You can imagine another agent being contemplative, but again that’s an act of empathic inference—the way this imaginative act works is by adjusting your own brain to run in contemplativeness-mode, not by modeling the other brain neuron by neuron. Yes, that may be more efficient, but it doesn’t let you build a “contemplative” mind from scratch.

You can say that “cold blood causes contemplativeness” and then you just have fake causality: You’ve drawn a little arrow from a box reading “cold blood” to a box reading “contemplativeness,” but you haven’t looked inside the box—you’re still generating your predictions using empathy.

You can say that “lots of little neurons, which are all strictly electrical and chemical with no ontologically basic contemplativeness in them, combine into a complex network that emergently exhibits contemplativeness.” And that is still a fake reduction and you still haven’t looked inside the black box. You still can’t say what a “contemplative” thing will do, using a non-empathic model. You just took a box labeled “lotsa neurons,” and drew an arrow labeled “emergence” to a black box containing your remembered sensation of contemplativeness, which, when you imagine it, tells your brain to empathize with the box by contemplating.

So what do real reductions look like?

Like the relationship between the feeling of evidence-ness, of justificationness, and E. T. Jaynes’s Probability Theory: The Logic of Science. You can go around in circles all day, saying how the nature of evidence is that it justifies some proposition, by meaning that it’s more likely to be true, but all of these just invoke your brain’s internal feelings of evidence-ness, justifies-ness, likeliness. That part is easy—the going around in circles part. The part where you go from there to Bayes’s Theorem is hard.

And the fundamental mental ability that lets someone learn Artificial Intelligence is the ability to tell the difference. So that you know you aren't done yet, nor even really started, when you say, "Evidence is when an observation justifies a belief." But atoms are not evidential, justifying, meaningful, likely, propositional, or true; they are just atoms. Only things like P(H|E)/P(¬H|E) = P(E|H)/P(E|¬H) × P(H)/P(¬H) count as substantial progress. (And that's only the first step of the reduction: what are these E and H objects, if not mysterious black boxes? Where do your hypotheses come from? From your creativity? And what's a hypothesis, when no atom is a hypothesis?)
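
As a worked instance of the odds form above, with invented numbers: posterior odds are prior odds times the likelihood ratio.

```python
# Odds form of Bayes's Theorem:
#   P(H|E)/P(~H|E) = [P(E|H)/P(E|~H)] * [P(H)/P(~H)].
# All probabilities below are invented for illustration.

p_H, p_notH = 0.01, 0.99         # prior: P(H), P(~H)
p_E_H, p_E_notH = 0.80, 0.10     # likelihoods: P(E|H), P(E|~H)

posterior_odds = (p_E_H / p_E_notH) * (p_H / p_notH)
posterior_p_H = posterior_odds / (1 + posterior_odds)
print(posterior_odds)   # ~0.081: an 8:1 likelihood ratio shifts 1:99 prior odds
print(posterior_p_H)    # ~0.075: strong evidence, still-improbable hypothesis
```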

Another excellent example of genuine reduction can be found in Judea Pearl’s Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference[1]. You could go around all day in circles talk about how a cause is something that makes something else happen, and until you understood the nature of conditional independence, you would be helpless to make an AI that reasons about causation. Because you wouldn’t understand what was happening when your brain mysteriously decided that if you learned your burglar alarm went off, but you then learned that a small earthquake took place, you would retract your initial conclusion that your house had been burglarized.
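
A brute-force version of that alarm example, with invented probabilities, shows the retraction happening - the very pattern that conditional independence structures make cheap to compute:

```python
# "Explaining away": burglary and earthquake are independent causes of
# the alarm; observing the earthquake lowers the probability of burglary.

P_B, P_E = 0.01, 0.02   # invented priors for burglary and earthquake

def p_alarm(b, e):
    # Invented conditional probabilities of the alarm going off.
    return {(0, 0): 0.001, (0, 1): 0.3, (1, 0): 0.9, (1, 1): 0.95}[(b, e)]

def p_burglary_given_alarm(earthquake=None):
    num = den = 0.0
    for b in (0, 1):
        for e in (0, 1):
            if earthquake is not None and e != earthquake:
                continue
            w = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E) * p_alarm(b, e)
            den += w
            num += w * b
    return num / den

print(p_burglary_given_alarm())              # ~0.57: alarm alone suggests burglary
print(p_burglary_given_alarm(earthquake=1))  # ~0.03: the quake explains it away
```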

If you want an AI that plays chess, you can go around in circles indefinitely talking about how you want the AI to make good moves, which are moves that can be expected to win the game, which are moves that are prudent strategies for defeating the opponent, et cetera; and while you may then have some idea of which moves you want the AI to make, it’s all for naught until you come up with the notion of a mini-max search tree.
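
And the corresponding non-circular answer for chess, in miniature - a hand-built two-ply game tree with invented leaf values, where "good move" reduces to a mechanical computation:

```python
# Minimax: our move maximizes over the opponent's minimizing replies.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):   # leaf: the position's value for us
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

tree = [[3, 12], [2, 8]]                 # two moves, each with two replies
print(minimax(tree, maximizing=True))    # 3: the best guaranteed outcome
```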

But until you know about search trees, until you know about conditional independence, until you know about Bayes’s Theorem, then it may still seem to you that you have a perfectly good understanding of where good moves and nonmonotonic reasoning and evaluation of evidence come from. It may seem, for example, that they come from cooling the blood.

And indeed I know many people who believe that intelligence is the product of commonsense knowledge or massive parallelism or creative destruction or intuitive rather than rational reasoning, or whatever. But all these are only dreams, which do not give you any way to say what intelligence is, or what an intelligence will do next, except by pointing at a human. And when the one goes to build their wondrous AI, they only build a system of detached levers, “knowledge” consisting of LISP tokens labeled apple and the like; or perhaps they build a “massively parallel neural net, just like the human brain.” And are shocked—shocked!—when nothing much happens.

AI designs made of human parts are only dreams; they can exist in the imagination, but not translate into transistors. This applies specifically to “AI designs” that look like boxes with arrows between them and meaningful-sounding labels on the boxes. (For a truly epic example thereof, see any Mentifex Diagram.)

Later I will say more upon this subject, but I can go ahead and tell you one of the guiding principles: If you meet someone who says that their AI will do XYZ just like humans, do not give them any venture capital. Say to them rather: “I’m sorry, I’ve never seen a human brain, or any other intelligence, and I have no reason as yet to believe that any such thing can exist. Now please explain to me what your AI does, and why you believe it will do it, without pointing to humans as an example.” Planes would fly just as well, given a fixed design, if birds had never existed; they are not kept aloft by analogies.

So now you perceive, I hope, why, if you wanted to teach someone to do fundamental work on strong AI—bearing in mind that this is demonstrably a very difficult art, which is not learned by a supermajority of students who are just taught existing reductions such as search trees—then you might go on for some length about such matters as the fine art of reductionism, about playing rationalist’s Taboo to excise problematic words and replace them with their referents, about anthropomorphism, and, of course, about early stopping on mysterious answers to mysterious questions.


[1] Pearl, Probabilistic Reasoning in Intelligent Systems.

" } }, { "_id": "2HxAkCG7NWTrrn5R3", "title": "Three Fallacies of Teleology", "pageUrl": "https://www.lesswrong.com/posts/2HxAkCG7NWTrrn5R3/three-fallacies-of-teleology", "postedAt": "2008-08-25T22:27:55.000Z", "baseScore": 39, "voteCount": 34, "commentCount": 14, "url": null, "contents": { "documentId": "2HxAkCG7NWTrrn5R3", "html": "

Followup to: Anthropomorphic Optimism


Aristotle distinguished between four senses of the Greek word aition, which in English is translated as \"cause\", though Wikipedia suggests that a better translation is \"maker\".  Aristotle's theory of the Four Causes, then, might be better translated as the Four Makers.  These were his four senses of aitia:  The material aition, the formal aition, the efficient aition, and the final aition.


The material aition of a bronze statue is the substance it is made from, bronze.  The formal aition is the substance's form, its statue-shaped-ness.  The efficient aition best translates as the English word \"cause\"; we would think of the artisan carving the statue, though Aristotle referred to the art of bronze-casting the statue, and regarded the individual artisan as a mere instantiation.


The final aition was the goal, or telos, or purpose of the statue, that for the sake of which the statue exists.


Though Aristotle considered knowledge of all four aitia as necessary, he regarded knowledge of the telos as the knowledge of highest order.  In this, Aristotle followed in the path of Plato, who had earlier written:


Imagine not being able to distinguish the real cause from that without which the cause would not be able to act as a cause.  It is what the majority appear to do, like people groping in the dark; they call it a cause, thus giving it a name that does not belong to it.  That is why one man surrounds the earth with a vortex to make the heavens keep it in place, another makes the air support it like a wide lid.  As for their capacity of being in the best place they could possibly be put, this they do not look for, nor do they believe it to have any divine force...


Suppose that you translate \"final aition\" as \"final cause\", and assert directly:


\"Why do human teeth develop with such regularity, into a structure well-formed for biting and chewing?  You could try to explain this as an incidental fact, but think of how unlikely that would be.  Clearly, the final cause of teeth is the act of biting and chewing.  Teeth develop with regularity, because of the act of biting and chewing - the latter causes the former.\"


A modern-day sophisticated Bayesian will at once remark, \"This requires me to draw a circular causal diagram with an arrow going from the future to the past.\"


It's not clear to me to what extent Aristotle appreciated this point - that you could not draw causal arrows from the future to the past.  Aristotle did acknowledge that teeth also needed an efficient cause to develop.  But Aristotle may have believed that the efficient cause could not act without the telos, or was directed by the telos, in which case we again have a reversed direction of causality, a dependency of the past on the future.  I am no scholar of the classics, so it may be only myself who is ignorant of what Aristotle believed on this score.


So the first way in which teleological reasoning may be an outright fallacy, is when an arrow is drawn directly from the future to the past.  In every case where a present event seems to happen for the sake of a future end, that future end must be materially represented in the past.


Suppose you're driving to the supermarket, and you say that each right turn and left turn happens for the sake of the future event of your being at the supermarket.  Then the actual efficient cause of the turn, consists of:  the representation in your mind of the event of yourself arriving at the supermarket; your mental representation of the street map (not the streets themselves); your brain's planning mechanism that searches for a plan that represents arrival at the supermarket; and the nerves that translate this plan into the motor action of your hands turning the steering wheel.
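
A minimal sketch of those components in code - street map, goal representation, and search mechanism all invented for illustration, and all existing in the present:

```python
# The "efficient cause" of the turn: a representation of the goal, a
# mental map, and a planning mechanism, none of them in the future.

from collections import deque

street_map = {                 # the *representation* of the streets
    'home': ['elm_st', 'oak_st'],
    'elm_st': ['supermarket'],
    'oak_st': ['dead_end'],
    'dead_end': [],
    'supermarket': [],
}

def plan(start, goal):
    # Breadth-first search: planning-work running now, not backward in time.
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path        # the plan exists before any turn is made
        for nxt in street_map[path[-1]]:
            frontier.append(path + [nxt])
    return None

print(plan('home', 'supermarket'))   # ['home', 'elm_st', 'supermarket']
```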


All these things exist in the past or present; no arrow is drawn from the future to the past.


In biology, similarly, we explain the regular formation of teeth, not by letting it be caused directly by the future act of chewing, but by using the theory of natural selection to relate past events of chewing to the organism's current genetic makeup, which physically controls the formation of the teeth.  Thus, we account for the current regularity of the teeth by referring only to past and present events, never to future events.  Such evolutionary reasoning is called \"teleonomy\", in contrast with teleology.


We can see that the efficient cause is primary, not the final cause, by considering what happens when the two come into conflict.  The efficient cause of human taste buds is natural selection on past human eating habits; the final cause of human taste buds is acquiring nutrition.  From the efficient cause, we should expect human taste buds to seek out resources that were scarce in the ancestral environment, like fat and sugar.  From the final cause, we would expect human taste buds to seek out resources scarce in the current environment, like vitamins and fiber.  From the sales numbers on candy bars, we can see which wins.  The saying \"Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers\" asserts the primacy of teleonomy over teleology.


Similarly, if you have a mistake in your mind about where the supermarket lies, the final event of your arrival at the supermarket, will not reach backward in time to steer your car.  If I know your exact state of mind, I will be able to predict your car's trajectory by modeling your current state of mind, not by supposing that the car is attracted to some particular final destination.  If I know your mind in detail, I can even predict your mistakes, regardless of what you think is your goal.


The efficient cause has screened off the telos:  If I can model the complete mechanisms at work in the present, I never have to take into account the future in predicting the next time step.


So that is the first fallacy of teleology - to make the future a literal cause of the past.


Now admittedly, it may be convenient to engage in reasoning that would be fallacious if interpreted literally.  For example:


I don't know the exact state of Mary's every neuron.  But I know that she desires to be at the supermarket.  If Mary turns left at the next intersection, she will then be at the supermarket (at time t=1).  Therefore Mary will turn left (at time t=0).


But this is only a convenient shortcut, to let the future affect Mary's present actions.  More rigorous reasoning would say:


My model predicts that if Mary turns left she will arrive at the supermarket.  I don't know her every neuron, but I believe Mary has a model similar to mine.  I believe Mary desires to be at the supermarket.  I believe that Mary has a planning mechanism similar to mine, which leads her to take actions that her model predicts will lead to the fulfillment of her desires.  Therefore I predict that Mary will turn left.


No direct mention of the actual future has been made.  I predict Mary by imagining myself to have her goals, then putting myself and my planning mechanisms into her shoes, letting my brain do planning-work that is similar to the planning-work I expect Mary to do.  This requires me to talk only about Mary's goal, our models (presumed similar) and our planning mechanisms (presumed similar) - all forces active in the present.
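
To make the contrast concrete, here is a minimal sketch - the street map, the place names, and the helper function are all invented for illustration - of predicting a driver by running a planner over her believed map.  Everything the prediction consumes is a present fact: her goal, her map, her planning mechanism.

```python
# A hypothetical sketch: predict an agent's next action from present facts.
from collections import deque

def first_action(believed_map, start, goal):
    """Breadth-first search over the agent's believed street map.
    Returns the first action on a shortest believed route, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        place, actions = frontier.popleft()
        if place == goal:
            return actions[0] if actions else None
        for action, nxt in believed_map.get(place, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None

# Mary's believed map - which may or may not match the actual streets.
marys_map = {
    "home": [("right", "Main St")],
    "Main St": [("left", "supermarket"), ("right", "gas station")],
}

# The prediction uses her goal and her map, both facts about the present:
print(first_action(marys_map, "home", "supermarket"))  # -> 'right'
```

If marys_map is wrong about where the supermarket lies, the prediction follows her mistaken map, not the actual supermarket - no arrow from the future required.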


And the benefit of this more rigorous reasoning is that if Mary is mistaken about the supermarket's location, then I do not have to suppose that the future event of her arrival reaches back and steers her correctly anyway.


Teleological reasoning is anthropomorphic - it uses your own brain as a black box to predict external events.  Specifically, teleology uses your brain's planning mechanism as a black box to predict a chain of future events, by planning backward from a distant outcome.


Now we are talking about a highly generalized form of anthropomorphism - and indeed, it is precisely to introduce this generalization that I am talking about teleology!  You know what it's like to feel purposeful.  But when someone says, \"water runs downhill so that it will be at the bottom\", you don't necessarily imagine little sentient rivulets alive with quiet determination.  Nonetheless, when you ask, \"How could the water get to the bottom of the hill?\" and plot out a course down the hillside, you're recruiting your own brain's planning mechanisms to do it.  That's what the brain's planner does, after all: it finds a path to a specified destination starting from the present.


And if you expect the water to avoid local maxima so it can get all the way to the bottom of the hill - to avoid being trapped in small puddles far above the ground - then your anthropomorphism is going to produce the wrong prediction.  (This is how a lot of mistaken evolutionary reasoning gets done, since evolution has no foresight, and only takes the next greedy local step.)
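
As a toy illustration - the hillside heights are invented - here is what a strictly local, greedy descent does.  Whether the descender is water or natural selection, it halts in the first dip it cannot locally improve on, with no foresight about the valley floor beyond:

```python
# A greedy descent with no foresight: it can get stuck in a "puddle".
heights = [9, 7, 5, 6, 4, 2, 0]  # a hillside with a small dip at index 2

def greedy_descent(heights, pos):
    """Move to an adjacent cell only if it is strictly lower; else stop."""
    while True:
        lower = [i for i in (pos - 1, pos + 1)
                 if 0 <= i < len(heights) and heights[i] < heights[pos]]
        if not lower:
            return pos  # a local minimum - possibly far above the bottom
        pos = min(lower, key=lambda i: heights[i])

print(greedy_descent(heights, 0))  # -> 2: trapped in the dip, not at index 6
```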


But consider the subtlety: you may have produced a wrong, anthropomorphic prediction of the water without ever thinking of it as a person - without ever visualizing it as having feelings - without even thinking \"the water has purpose\" or \"the water wants to be at the bottom of the hill\" - but only saying, as Aristotle did, \"the water's telos is to be closer to the center of the Earth\".  Or maybe just, \"the water runs downhill so that it will be at the bottom\".  (Or, \"I expect that human taste buds will take into account how much of each nutrient the body needs, and so reject fat and sugar if there are enough calories present, since evolution produced taste buds in order to acquire nutrients.\")


You don't notice instinctively when you're using an aspect of your brain as a black box to predict outside events.  Consequentialism just seems like an ordinary property of the world, something even rocks could do.


It takes a deliberate act of reductionism to say:  \"But the water has no brain; how can it predict ahead to see itself being trapped in a local puddle, when the future cannot directly affect the past?  How indeed can anything at all happen in the water so that it will, in the future, be at the bottom?  No; I should try to understand the water's behavior using only local causes, found in the immediate past.\"


It takes a deliberate act of reductionism to identify telos as purpose, and purpose as a mental property which is too complicated to be ontologically fundamental.  You don't realize, when you ask \"What does this telos-imbued object do next?\", that your brain is answering by calling on its own complicated planning mechanisms, that search multiple paths and do means-end reasoning.  Purpose just seems like a simple and basic property; the complexity of your brain that produces the predictions is hidden from you.  It is an act of reductionism to see purpose as requiring a complicated AI algorithm that needs a complicated material embodiment.


So this is the second fallacy of teleology - to attribute goal-directed behavior to things that are not goal-directed, perhaps without even thinking of the things as alive and spirit-inhabited, but only thinking, X happens in order to Y.  \"In order to\" is mentalistic language, even though it doesn't seem to name a blatantly mental property like \"fearful\" or \"thinks it can fly\".


Remember the sequence on free will?  The problem, it turned out, was that \"could\" was a mentalistic property - generated by the planner in the course of labeling states as reachable from the start state.  It seemed like \"could\" was a physical, ontological property.  When you say \"could\" it doesn't sound like you're talking about states of mind.  Nonetheless, the mysterious behavior of could-ness turned out to be understandable only by looking at the brain's planning mechanisms.


Since mentalistic reasoning uses your own mind as a black box to generate its predictions, it very commonly generates wrong questions and mysterious answers.


If you want to accomplish anything related to philosophy, or anything related to Artificial Intelligence, it is necessary to learn to identify mentalistic language and root it all out - which can only be done by analyzing innocent-seeming words like \"could\" or \"in order to\" into the complex cognitive algorithms that are their true identities.


(If anyone accuses me of \"extreme reductionism\" for saying this, let me ask how likely it is that we live in an only partially reductionist universe.)


The third fallacy of teleology is to commit the Mind Projection Fallacy with respect to telos, supposing it to be an inherent property of an object or system.  Indeed, one does this every time one speaks of the purpose  of an event, rather than speaking of some particular agent desiring the consequences of that event.


I suspect this is why people have trouble understanding evolutionary psychology - in particular, why they suppose that all human acts are unconsciously directed toward reproduction.  \"Mothers who loved their children outreproduced those who left their children to the wolves\" becomes \"natural selection produced motherly love in order to ensure the survival of the species\" becomes \"the purpose of acts of motherly love is to increase the mother's fitness\".  Well, if a mother apparently drags her child off the train tracks because she loves the child, that's also the purpose of the act, right?  So by a fallacy of compression - a mental model that has one bucket where two buckets are needed - the purpose must be one or the other: either love or reproductive fitness.


Similarly with those who hear of evolutionary psychology and conclude that the meaning of life is to increase reproductive fitness - hasn't science demonstrated that this is the purpose of all biological organisms, after all?


Likewise with that fellow who concluded that the purpose of the universe is to increase entropy - the universe does so consistently, therefore it must want to do so - and that this must therefore be the meaning of life.  Pretty sad purpose, I'd say!  But of course the speaker did not seem to realize what it means to want to increase entropy as much as possible - what this goal really implies, that you should go around collapsing stars to black holes.  Instead the one focused on a few selected activities that increase entropy, like thinking.  You couldn't ask for a clearer illustration of a fake utility function.


I call this a \"teleological capture\" - where someone comes to believe that the telos of X is Y, relative to some agent, or optimization process, or maybe just statistical tendency, from which it follows that any human or other agent who does X must have a purpose of Y in mind.  The evolutionary reason for motherly love becomes its telos, and seems to \"capture\" the apparent motives of human mothers.  The game-theoretical reason for cooperating on the Iterated Prisoner's Dilemma becomes the telos of cooperation, and seems to \"capture\" the apparent motives of human altruists, who are thus revealed as being selfish after all.  Charity increases status, which people are known to desire; therefore status is the telos of charity, and \"captures\" all claims to kinder motives.  Etc. etc. through half of all amateur philosophical reasoning about the meaning of life.


These then are three fallacies of teleology:  Backward causality, anthropomorphism, and teleological capture.

" } }, { "_id": "PoDAyQMWEXBBBEJ5P", "title": "Magical Categories", "pageUrl": "https://www.lesswrong.com/posts/PoDAyQMWEXBBBEJ5P/magical-categories", "postedAt": "2008-08-24T19:51:39.000Z", "baseScore": 77, "voteCount": 66, "commentCount": 143, "url": null, "contents": { "documentId": "PoDAyQMWEXBBBEJ5P", "html": "

'We can design intelligent machines so their primary, innate emotion is unconditional love for all humans.  First we can build relatively simple machines that learn to recognize happiness and unhappiness in human facial expressions, human voices and human body language.  Then we can hard-wire the result of this learning as the innate emotional values of more complex intelligent machines, positively reinforced when we are happy and negatively reinforced when we are unhappy.'
        -- Bill Hibbard (2001), Super-intelligent machines.


That was published in a peer-reviewed journal, and the author later wrote a whole book about it, so this is not a strawman position I'm discussing here.


So... um... what could possibly go wrong...


When I mentioned (sec. 6) that Hibbard's AI ends up tiling the galaxy with tiny molecular smiley-faces, Hibbard wrote an indignant reply saying:


'When it is feasible to build a super-intelligence, it will be feasible to build hard-wired recognition of \"human facial expressions, human voices and human body language\" (to use the words of mine that you quote) that exceed the recognition accuracy of current humans such as you and me, and will certainly not be fooled by \"tiny molecular pictures of smiley-faces.\" You should not assume such a poor implementation of my idea that it cannot make discriminations that are trivial to current humans.'


As Hibbard also wrote \"Such obvious contradictory assumptions show Yudkowsky's preference for drama over reason,\" I'll go ahead and mention that Hibbard illustrates a key point:  There is no professional certification test you have to take before you are allowed to talk about AI morality.  But that is not my primary topic today.  Though it is a crucial point about the state of the gameboard, that most AGI/FAI wannabes are so utterly unsuited to the task, that I know no one cynical enough to imagine the horror without seeing it firsthand.  Even Michael Vassar was probably surprised his first time through.


No, today I am here to dissect \"You should not assume such a poor implementation of my idea that it cannot make discriminations that are trivial to current humans.\"


Once upon a time - I've seen this story in several versions and several places, sometimes cited as fact, but I've never tracked down an original source - once upon a time, I say, the US Army wanted to use neural networks to automatically detect camouflaged enemy tanks.


The researchers trained a neural net on 50 photos of camouflaged tanks amid trees, and 50 photos of trees without tanks. Using standard techniques for supervised learning, the researchers trained the neural network to a weighting that correctly loaded the training set - output \"yes\" for the 50 photos of camouflaged tanks, and output \"no\" for the 50 photos of forest.


Now this did not prove, or even imply, that new examples would be classified correctly.  The neural network might have \"learned\" 100 special cases that wouldn't generalize to new problems.  Not, \"camouflaged tanks versus forest\", but just, \"photo-1 positive, photo-2 negative, photo-3 negative, photo-4 positive...\"


But wisely, the researchers had originally taken 200 photos, 100 photos of tanks and 100 photos of trees, and had used only half in the training set.  The researchers ran the neural network on the remaining 100 photos, and without further training the neural network classified all remaining photos correctly.   Success confirmed!


The researchers handed the finished work to the Pentagon, which soon handed it back, complaining that in their own tests the neural network did no better than chance at discriminating photos.


It turned out that in the researchers' data set, photos of camouflaged tanks had been taken on cloudy days, while photos of plain forest had been taken on sunny days. The neural network had learned to distinguish cloudy days from sunny days, instead of distinguishing camouflaged tanks from empty forest.


This parable - which might or might not be fact - illustrates one of the most fundamental problems in the field of supervised learning and in fact the whole field of Artificial Intelligence:  If the training problems and the real problems have the slightest difference in context - if they are not drawn from the same independent, identically distributed (i.i.d.) process - there is no statistical guarantee from past success to future success.  It doesn't matter if the AI seems to be working great under the training conditions.  (This is not an unsolvable problem but it is an unpatchable problem.  There are deep ways to address it - a topic beyond the scope of this post - but no bandaids.)
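
Here is a stylized sketch of the parable, with image brightness as the only feature and every number invented: the learned rule fits the training photos, and even the held-out photos, yet what it has actually learned is the weather.

```python
# Hypothetical data: training tanks were photographed on cloudy (dark) days.
def train_threshold(examples):
    """Learn a brightness cutoff separating 'tank' photos from 'forest'."""
    tank = [b for b, label in examples if label == "tank"]
    forest = [b for b, label in examples if label == "forest"]
    return (max(tank) + min(forest)) / 2

def classify(threshold, brightness):
    return "tank" if brightness < threshold else "forest"

train = [(0.2, "tank"), (0.3, "tank"), (0.8, "forest"), (0.9, "forest")]
cutoff = train_threshold(train)

print(classify(cutoff, 0.25))  # held-out cloudy tank photo: 'tank' - success!
print(classify(cutoff, 0.85))  # sunny-day tank, as in the Pentagon's test: 'forest'
```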


As described in Superexponential Conceptspace, there are exponentially more possible concepts than possible objects, just as the number of possible objects is exponential in the number of attributes.  If a black-and-white image is 256 pixels on a side, then the total image is 65536 pixels.  The number of possible images is 2^65536.  And the number of possible concepts that classify images into positive and negative instances - the number of possible boundaries you could draw in the space of images - is 2^(2^65536).  From this, we see that even supervised learning is almost entirely a matter of inductive bias, without which it would take a minimum of 2^65536 classified examples to discriminate among 2^(2^65536) possible concepts - even if classifications are constant over time.
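
The counting is easy to verify at toy scale.  With n binary attributes there are 2^n possible objects, and 2^(2^n) possible concepts - one for each way of labeling every object positive or negative:

```python
# Check the counting argument for n = 2 "pixels" (small enough to enumerate).
from itertools import product

n = 2
objects = list(product([0, 1], repeat=n))              # 2**n = 4 objects
concepts = list(product([0, 1], repeat=len(objects)))  # 2**(2**n) = 16 labelings

print(len(objects), len(concepts))  # 4 16
# A 256x256 image has n = 65536, giving 2**65536 images and 2**(2**65536)
# concepts - which is why learning without inductive bias is hopeless.
```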


If this seems at all counterintuitive or non-obvious, see Superexponential Conceptspace.


So let us now turn again to:


'First we can build relatively simple machines that learn to recognize happiness and unhappiness in human facial expressions, human voices and human body language.  Then we can hard-wire the result of this learning as the innate emotional values of more complex intelligent machines, positively reinforced when we are happy and negatively reinforced when we are unhappy.'


and


'When it is feasible to build a super-intelligence, it will be feasible to build hard-wired recognition of \"human facial expressions, human voices and human body language\" (to use the words of mine that you quote) that exceed the recognition accuracy of current humans such as you and me, and will certainly not be fooled by \"tiny molecular pictures of smiley-faces.\" You should not assume such a poor implementation of my idea that it cannot make discriminations that are trivial to current humans.'


It's trivial to discriminate a photo of a camouflaged tank from a photo of an empty forest, in the sense of determining that the two photos are not identical.  They're different pixel arrays with different 1s and 0s in them.  Discriminating between them is as simple as testing the arrays for equality.


Classifying new photos into positive and negative instances of \"smile\", by reasoning from a set of training photos classified positive or negative, is a different order of problem.


When you've got a 256x256 image from a real-world camera, and the image turns out to depict a camouflaged tank, there is no additional 65537th bit denoting the positiveness - no tiny little XML tag that says \"This image is inherently positive\".  It's only a positive example relative to some particular concept.


But for any non-Vast amount of training data - any training data that does not include the exact bitwise image now seen - there are superexponentially many possible concepts compatible with previous classifications.


For the AI, choosing or weighting from among superexponential possibilities is a matter of inductive bias.  Which may not match what the user has in mind.  The gap between these two example-classifying processes - induction on the one hand, and the user's actual goals on the other - is not trivial to cross.


Let's say the AI's training data is:


Dataset 1:

[images omitted]

Now the AI grows up into a superintelligence, and encounters this data:


Dataset 2:

[images omitted]

It is not a property of these datasets that the inferred classification you would prefer is:

[images omitted]

rather than

[images omitted]

Both of these classifications are compatible with the training data.  The number of concepts compatible with the training data will be much larger, since more than one concept can project the same shadow onto the combined dataset.  If the space of possible concepts includes the space of possible computations that classify instances, the space is infinite.


Which classification will the AI choose?  This is not an inherent property of the training data; it is a property of how the AI performs induction.


Which is the correct classification?  This is not a property of the training data; it is a property of your preferences (or, if you prefer, a property of the idealized abstract dynamic you name \"right\").
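
A minimal sketch of the gap, with both concepts invented for illustration: two classifiers that cast the same shadow on a training set, then part ways on an instance unlike anything previously seen.

```python
# Two hypothetical concepts that agree on every training instance...
def concept_a(x):  # what the user meant: human-scale smiles
    return x["shape"] == "smile" and x["size_m"] > 0.01

def concept_b(x):  # smile-shaped things of any size whatsoever
    return x["shape"] == "smile"

training = [
    {"shape": "smile", "size_m": 0.15},  # labeled +
    {"shape": "frown", "size_m": 0.15},  # labeled -
]
assert all(concept_a(x) == concept_b(x) for x in training)  # same shadow

# ...and diverge on a never-before-seen instance:
tiny = {"shape": "smile", "size_m": 1e-9}  # a tiny molecular smileyface
print(concept_a(tiny), concept_b(tiny))    # False True
```

Which of the two the AI ends up with is a fact about its inductive bias; which one is correct is a fact about you.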


The concept that you wanted cast its shadow onto the training data as you yourself labeled each instance + or -, drawing on your own intelligence and preferences to do so.  That's what supervised learning is all about - providing the AI with labeled training examples that project a shadow of the causal process that generated the labels.


But unless the training data is drawn from exactly the same context as real life, the training data will be "shallow" in some sense, a projection from a much higher-dimensional space of possibilities.


The AI never saw a tiny molecular smileyface during its dumber-than-human training phase, or it never saw a tiny little agent with a happiness counter set to a googolplex.  Now you, finally presented with a tiny molecular smiley - or perhaps a very realistic tiny sculpture of a human face - know at once that this is not what you want to count as a smile.  But that judgment reflects an unnatural category, one whose classification boundary depends sensitively on your complicated values.  It is your own plans and desires that are at work when you say \"No!\"


Hibbard knows instinctively that a tiny molecular smileyface isn't a \"smile\", because he knows that's not what he wants his putative AI to do.  If someone else were presented with a different task, like classifying artworks, they might feel that the Mona Lisa was obviously smiling - as opposed to frowning, say - even though it's only paint.


As the case of Terry Schiavo illustrates, technology enables new borderline cases that throw us into new, essentially moral dilemmas.  Showing an AI pictures of living and dead humans as they existed during the age of Ancient Greece will not enable the AI to make a moral decision as to whether switching off Terry's life support is murder.  That information isn't present in the dataset even inductively!  Terry Schiavo raises new moral questions, appealing to new moral considerations, that you wouldn't need to think about while classifying photos of living and dead humans from the time of Ancient Greece.  No one was on life support then, still breathing with a brain half fluid.  So such considerations play no role in the causal process that you use to classify the ancient-Greece training data, and hence cast no shadow on the training data, and hence are not accessible by induction on the training data.


As a matter of formal fallacy, I see two anthropomorphic errors on display.


The first fallacy is underestimating the complexity of a concept we develop for the sake of its value.  The borders of the concept will depend on many values and probably on-the-fly moral reasoning, if the borderline case is of a kind we haven't seen before.  But all that takes place invisibly, in the background; to Hibbard it just seems that a tiny molecular smileyface is just obviously not a smile.  And we don't generate all possible borderline cases, so we don't think of all the considerations that might play a role in redefining the concept, but haven't yet played a role in defining it.  Since people underestimate the complexity of their concepts, they underestimate the difficulty of inducing the concept from training data.  (And also the difficulty of describing the concept directly - see The Hidden Complexity of Wishes.)


The second fallacy is anthropomorphic optimism:  Since Bill Hibbard uses his own intelligence to generate options and plans ranking high in his preference ordering, he is incredulous at the idea that a superintelligence could classify never-before-seen tiny molecular smileyfaces as a positive instance of \"smile\".  As Hibbard uses the \"smile\" concept (to describe desired behavior of superintelligences), extending \"smile\" to cover tiny molecular smileyfaces would rank very low in his preference ordering; it would be a stupid thing to do - inherently so, as a property of the concept itself - so surely a superintelligence would not do it; this is just obviously the wrong classification.  Certainly a superintelligence can see which heaps of pebbles are correct or incorrect.


Why, Friendly AI isn't hard at all!  All you need is an AI that does what's good!  Oh, sure, not every possible mind does what's good - but in this case, we just program the superintelligence to do what's good.  All you need is a neural network that sees a few instances of good things and not-good things, and you've got a classifier.  Hook that up to an expected utility maximizer and you're done!


I shall call this the fallacy of magical categories - simple little words that turn out to carry all the desired functionality of the AI.  Why not program a chess-player by running a neural network (that is, a magical category-absorber) over a set of winning and losing sequences of chess moves, so that it can generate \"winning\" sequences?  Back in the 1950s it was believed that AI might be that simple, but this turned out not to be the case.


The novice thinks that Friendly AI is a problem of coercing an AI to make it do what you want, rather than the AI following its own desires.  But the real problem of Friendly AI is one of communication - transmitting category boundaries, like \"good\", that can't be fully delineated in any training data you can give the AI during its childhood.  Relative to the full space of possibilities the Future encompasses, we ourselves haven't imagined most of the borderline cases, and would have to engage in full-fledged moral arguments to figure them out.  To solve the FAI problem you have to step outside the paradigm of induction on human-labeled training data and the paradigm of human-generated intensional definitions.


Of course, even if Hibbard did succeed in conveying to an AI a concept that covers exactly every human facial expression that Hibbard would label a \"smile\", and excludes every facial expression that Hibbard wouldn't label a \"smile\"...


Then the resulting AI would appear to work correctly during its childhood, when it was weak enough that it could only generate smiles by pleasing its programmers.


When the AI progressed to the point of superintelligence and its own nanotechnological infrastructure, it would rip off your face, wire it into a permanent smile, and start xeroxing.


The deep answers to such problems are beyond the scope of this post, but it is a general principle of Friendly AI that there are no bandaids.  In 2004, Hibbard modified his proposal to assert that expressions of human agreement should reinforce the definition of happiness, and then happiness should reinforce other behaviors.  Which, even if it worked, just leads to the AI xeroxing a horde of things similar-in-its-conceptspace to programmers saying \"Yes, that's happiness!\" about hydrogen atoms - hydrogen atoms are easy to make.


Link to my discussion with Hibbard here.  You already got the important parts.

" } }, { "_id": "XeHYXXTGRuDrhk5XL", "title": "Unnatural Categories", "pageUrl": "https://www.lesswrong.com/posts/XeHYXXTGRuDrhk5XL/unnatural-categories", "postedAt": "2008-08-24T01:00:00.000Z", "baseScore": 38, "voteCount": 29, "commentCount": 10, "url": null, "contents": { "documentId": "XeHYXXTGRuDrhk5XL", "html": "

Followup to: Disguised Queries, Superexponential Conceptspace


If a tree falls in the forest, and no one hears it, does it make a sound?


"Tell me why you want to know," says the rationalist, "and I'll tell you the answer."  If you want to know whether your seismograph, located nearby, will register an acoustic wave, then the experimental prediction is "Yes"; so, for seismographic purposes, the tree should be considered to make a sound.  If instead you're asking some question about firing patterns in a human auditory cortex - for whatever reason - then the answer is that no such patterns will be changed when the tree falls.


What is a poison?  Hemlock is a "poison"; so is cyanide; so is viper venom.  Carrots, water, and oxygen are "not poison".  But what determines this classification?  You would be hard pressed, just by looking at hemlock and cyanide and carrots and water, to tell what sort of difference is at work.  You would have to administer the substances to a human - preferably one signed up for cryonics - and see which ones proved fatal.  (And at that, the definition is still subtler than it appears: a ton of carrots, dropped on someone's head, will also prove fatal. You're really asking about fatality from metabolic disruption, after administering doses small enough to avoid mechanical damage and blockage, at room temperature, at low velocity.)


Where poison-ness is concerned, you are not classifying via a strictly local property of the substance.  You are asking about the consequence when a dose of that substance is applied to a human metabolism.  The local difference between a human who gasps and keels over, versus a human alive and healthy, is more compactly discriminated than any local difference between poison and non-poison.

So we have a substance X that might or might not be fatally poisonous, and a human Y, and we say - to first order:


"X is classified 'fatally poisonous' iff administering X to Y causes Y to enter a state classified 'dead'."


Much of the way that we classify things - never mind events - is non-local, entwined with the consequential structure of the world.  All the things we would call a chair are all the things that were made for us to sit on.  (Humans might even call two molecularly identical objects a "chair" or "a rock shaped like a chair" depending on whether someone had carved it.)


"That's okay," you say, "the difference between living humans and dead humans is a nice local property - a compact cluster in Thingspace.  Sure, the set of 'poisons' might not be as compact a structure.  A category X|X->Y may not be as simple as Y, if the causal link -> can be complicated.  Here, 'poison' is not locally compact because of all the complex ways that substances act on the complex human body.  But there's still nothing unnatural about the category of 'poison' - we constructed it in an observable, testable way from categories themselves simple.  If you ever want to know whether something should be called 'poisonous', or not, there's a simple experimental test that settles the issue."


Hm.  What about a purple, egg-shaped, furred, flexible, opaque object?  Is it a blegg, and if so, would you call "bleggs" a natural category?


"Sure," you reply, "because you are forced to formulate the 'blegg' category, or something closely akin to it, in order to predict your future experiences as accurately as possible.  If you see something that's purple and egg-shaped and opaque, the only way to predict that it will be flexible is to draw some kind of compact boundary in Thingspace and use that to perform induction.  No category means no induction - you can't see that this object is similar to other objects you've seen before, so you can't predict its unknown properties from its known properties.  Can't get much more natural than that!  Say, what exactly would an unnatural property be, anyway?"


Suppose I have a poison P1 that completely destroys one of your kidneys - causes it to just wither away.  This is a very dangerous poison, but is it a fatal poison?


"No," you reply, "a human can live on just one kidney."


Suppose I have a poison P2 that completely destroys much of a human brain, killing off nearly all the neurons, leaving only enough medullary structure to run the body and keep it breathing, so long as a hospital provides nutrition.  Is P2 a fatal poison?


"Yes," you say, "if your brain is destroyed, you're dead."


But this distinction that you now make, between P2 being a fatal poison and P1 being an only dangerous poison, is not driven by any fundamental requirement of induction.  Both poisons destroy organs.  It's just that you care a lot more about the brain, than about a kidney.  The distinction you drew isn't driven solely by a desire to predict experience - it's driven by a distinction built into your utility function.  If you have to choose between a dangerous poison and a lethal poison, you will of course take the dangerous poison.  From which you induce that if you must choose between P1 and P2, you'll take P1.


The classification that you drew between "lethal" and "nonlethal" poisons, was designed to help you navigate the future - navigate away from outcomes of low utility, toward outcomes of high utility.  The boundaries that you drew, in Thingspace and Eventspace, were not driven solely by the structure of the environment - they were also driven by the structure of your utility function; high-utility things and low-utility things lumped together.  That way you can easily choose actions that lead, in general, to outcomes of high utility, over actions that lead to outcomes of low utility.  If you must pick your poison and can only pick one categorical dimension to sort by, you're going to want to sort the poisons into lower and higher utility - into fatal and dangerous, or dangerous and safe.  Whether the poison is red or green is a much more local property, more compact in Thingspace; but it isn't nearly as relevant to your decision-making.


Suppose you have a poison that puts a human, let's call her Terry, into an extremely damaged state.  Her cerebral cortex has turned to mostly fluid, say.  So I already labeled that substance a poison; but is it a lethal poison?


This would seem to depend on whether Terry is dead or alive.  Her body is breathing, certainly - but her brain is damaged.  In the extreme case where her brain was actually removed and incinerated, but her body kept alive, we would certainly have to say that the resultant was no longer a person, from which it follows that the previously existing person, Terry, must have died.  But here we have an intermediate case, where the brain is very severely damaged but not utterly destroyed.  Where does that poison fall on the border between lethality and unlethality?  Where does Terry fall on the border between personhood and nonpersonhood?  Did the poison kill Terry or just damage her?


Some things are persons and some things are not persons.  It is murder to kill a person who has not threatened to kill you first.  If you shoot a chimpanzee who isn't threatening you, is that murder?  How about if you turn off Terry's life support - is that murder?


"Well," you say, "that's fundamentally a moral question - no simple experimental test will settle the issue unless we can agree in advance on which facts are the morally relevant ones.  It's futile to say 'This chimp can recognize himself in a mirror!' or 'Terry can't recognize herself in a mirror!' unless we're agreed that this is a relevant fact - never mind it being the only relevant fact."


I've chosen the phrase "unnatural category" to describe a category whose boundary you draw in a way that sensitively depends on the exact values built into your utility function.  The most unnatural categories are typically these values themselves!  What is "true happiness"?  This is entirely a moral question, because what it really means is "What is valuable happiness?" or "What is the most valuable kind of happiness?"  Is having your pleasure center permanently stimulated by electrodes, "true happiness"?  Your answer to that will tend to center on whether you think this kind of pleasure is a good thing.  "Happiness", then, is a highly unnatural category - there are things that locally bear a strong resemblance to "happiness", but which are excluded because we judge them as being of low utility, and "happiness" is supposed to be of high utility.


Most terminal values turn out to be unnatural categories, sooner or later.  This is why it's such a tremendous difficulty to decide whether turning off Terry Schiavo's life support is "murder".


I don't mean to imply that unnatural categories are worthless or relative or whatever.  That's what moral arguments are for - for drawing and redrawing the boundaries; which, when it happens with a terminal value, clarifies and thereby changes our utility function.


I have a twofold motivation for introducing the concept of an "unnatural category".


The first motivation is to recognize when someone tries to pull a fast one during a moral argument, by insisting that no moral argument exists:  Terry Schiavo simply is a person because she has human DNA, or she simply is not a person because her cerebral cortex has eroded.  There is a super-exponential space of possible concepts, possible boundaries that can be drawn in Thingspace.  When we have a predictive question at hand, like "What happens if we run a DNA test on Terry Schiavo?" or "What happens if we ask Terry Schiavo to solve a math problem?", then we have a clear criterion of which boundary to draw and whether it worked.  But when the question at hand is a moral one, a "What should I do?" question, then it's time to shut your eyes and start doing moral philosophy.  Or eyes open, if there are relevant facts at hand - you do want to know what Terry Schiavo's brain looks like - but the point is that you're not going to find an experimental test that settles the question, unless you've already decided where to draw the boundaries of your utility function's values.


I think that a major cause of moral panic among Luddites in the presence of high technology, is that technology tends to present us with boundary cases on our moral values - raising moral questions that were never previously encountered.  In the old days, Terry Schiavo would have stopped breathing long since.  But I find it difficult to blame this on technology - it seems to me that there's something wrong with going into a panic just because you're being asked a new moral question.  Couldn't you just be asked the same moral question at any time?


If you want to say, "I don't know, so I'll strategize conservatively to avoid the boundary case, or treat uncertain people as people," that's one argument.


But to say, "AAAIIIEEEE TECHNOLOGY ASKED ME A QUESTION I DON'T KNOW HOW TO ANSWER, TECHNOLOGY IS UNDERMINING MY MORALITY" strikes me as putting the blame in the wrong place.


I should be able to ask you anything, even if you can't answer.  If you can't answer, then I'm not undermining your morality - it was already undermined.


My second motivation... is to start explaining another reason why Friendly AI is difficult.


I was recently trying to explain to someone why, even if all you wanted to do was fill the universe with paperclips, building a paperclip maximizer would still be a hard problem of FAI theory.  Why?  Because if you cared about paperclips for their own sake, then you wouldn't want the AI to fill the universe with things that weren't really paperclips - as you draw that boundary!


For a human, "paperclip" is a reasonably natural category; it looks like this-and-such and we use it to hold papers together.  The "papers" themselves play no direct role in our moral values; we just use them to renew the license plates on our car, or whatever.  "Paperclip", in other words, is far enough away from human terminal values, that we tend to draw the boundary using tests that are relatively empirical and observable.  If you present us with some strange thing that might or might not be a paperclip, we'll just see if we can use it to hold papers together.  If you present us with some strange thing that might or might not be paper, we'll see if we can write on it.  Relatively simple observable tests.


But there isn't any equally simple experimental test the AI can perform to find out what is or isn't a "paperclip", if "paperclip" is a concept whose importance stems from it playing a direct role in the utility function.


Let's say that you're trying to make your little baby paperclip maximizer in the obvious way: showing it a bunch of things that are paperclips, and a bunch of things that aren't paperclips, including what you consider to be near misses like staples and gluesticks.  The AI formulates an internal concept that describes paperclips, and you test it on some more things, and it seems to discriminate the same way you do.  So you hook up the "paperclip" concept to the utility function, and off you go!


Soon the AI grows up, kills off you and your species, and begins its quest to transform the universe into paperclips.  But wait - now the AI is considering new potential boundary cases of "paperclip" that it didn't see during its training phase.  Boundary cases, in fact, that you never mentioned - let alone showed the AI - because it didn't occur to you that they were possible.  Suppose, for example, that the thought of tiny molecular paperclips had never occurred to you.  If it had, you would have agonized for a while - like the way that people agonized over Terry Schiavo - and then finally decided that the tiny molecular paperclip-shapes were not "real" paperclips.  But the thought never occurred to you, and you never showed the AI paperclip-shapes of different sizes and told the AI that only one size was correct, during its training phase.  So the AI fills the universe with tiny molecular paperclips - but those aren't real paperclips at all!  Alas!  There's no simple experimental test that the AI can perform to find out what you would have decided was or was not a high-utility papercliplike object.
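
A sketch of that failure mode, with toy features and an invented learner: because size never varied during training, a rule that never consults size fits every training example perfectly - and then counts the molecular shapes as paperclips.

```python
# Hypothetical training set: size was never a discriminating feature.
training = [
    ({"shape": "clip", "size_m": 0.030}, True),     # ordinary paperclips
    ({"shape": "clip", "size_m": 0.035}, True),
    ({"shape": "staple", "size_m": 0.010}, False),  # near misses
    ({"shape": "gluestick", "size_m": 0.100}, False),
]

def learned_concept(x):
    """Fits all training labels while ignoring size entirely."""
    return x["shape"] == "clip"

assert all(learned_concept(x) == label for x, label in training)

molecular = {"shape": "clip", "size_m": 1e-9}
print(learned_concept(molecular))  # True - though you'd call it no "real" paperclip
```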


What?  No simple test?  What about:  "Ask me what is or isn't a paperclip, and see if I say 'Yes'.  That's your new meta-utility function!"


You perceive, I hope, why it isn't so easy.


If not, here's a hint:


"Ask", "me", and "say 'Yes'".

" } }, { "_id": "jNAAZ9XNyt82CXosr", "title": "Mirrors and Paintings", "pageUrl": "https://www.lesswrong.com/posts/jNAAZ9XNyt82CXosr/mirrors-and-paintings", "postedAt": "2008-08-23T00:29:05.000Z", "baseScore": 29, "voteCount": 20, "commentCount": 42, "url": null, "contents": { "documentId": "jNAAZ9XNyt82CXosr", "html": "

Followup to: Sorting Pebbles Into Correct Heaps, Invisible Frameworks


Background:  There's a proposal for Friendly AI called "Coherent Extrapolated Volition" which I don't really want to divert the discussion to, right now.  Among many other things, CEV involves pointing an AI at humans and saying (in effect) "See that?  That's where you find the base content for self-renormalizing morality."


Hal Finney commented on the Pebblesorter parable:

I wonder what the Pebblesorter AI would do if successfully programmed to implement [CEV]...  Would the AI pebblesort?  Or would it figure that if the Pebblesorters got smarter, they would see that pebblesorting was pointless and arbitrary?  Would the AI therefore adopt our own parochial morality, forbidding murder, theft and sexual intercourse among too-young people?  Would that be the CEV of Pebblesorters?


I imagine we would all like to think so, but it smacks of parochialism, of objective morality.  I can't help thinking that Pebblesorter CEV would have to include some aspect of sorting pebbles.  Doesn't that suggest that CEV can malfunction pretty badly?

I'm giving this question its own post, for that it touches on similar questions I once pondered - dilemmas that forced my current metaethics as the resolution.


Yes indeed:  A CEV-type AI, taking Pebblesorters as its focus, would wipe out the Pebblesorters and sort the universe into prime-numbered heaps.


This is not the right thing to do.


That is not a bug.

A primary motivation for CEV was to answer the question, "What can Archimedes do if he has to program a Friendly AI, despite being a savage barbarian by the Future's standards, so that the Future comes out right anyway?  Then whatever general strategy Archimedes could plausibly follow, that is what we should do ourselves:  For we too may be ignorant fools, as the Future measures such things."


It is tempting to further extend the question, to ask, "What can the Pebblesorters do, despite wanting only to sort pebbles, so that the universe comes out right anyway?  What sort of general strategy should they follow, so that despite wanting something that is utterly pointless and futile, their Future ends up containing sentient beings leading worthwhile lives and having fun?  Then whatever general strategy we wish the Pebblesorters to follow, that is what we should do ourselves:  For we, too, may be flawed."


You can probably see in an intuitive sense why that won't work.  We did in fact get here from the Greek era, which shows that the seeds of our era were in some sense present then - albeit this history doesn't show that no extra information was added, that there were no contingent moral accidents that sent us into one attractor rather than another.  But still, if Archimedes said something along the lines of "imagine probable future civilizations that would come into existence", the AI would visualize an abstracted form of our civilization among them - though perhaps not only our civilization.


The Pebblesorters, by construction, do not contain any seed that might grow into a civilization valuing life, health, happiness, etc.  Such wishes are nowhere present in their psychology.  All they want is to sort pebble heaps.  They don't want an AI that keeps them alive, they want an AI that can create correct pebble heaps rather than incorrect pebble heaps.  They are much disturbed by the question of how such an AI can be created, when different civilizations are still arguing about heap sizes - though most of them believe that any sufficiently smart mind will see which heaps are correct and incorrect, and act accordingly.


You can't get here from there.  Not by any general strategy.  If you want the Pebblesorters' future to come out humane, rather than Pebblish, you can't advise the Pebblesorters to build an AI that would do what their future civilizations would do.  You can't advise them to build an AI that would do what Pebblesorters would do if they knew everything the AI knew.  You can't advise them to build an AI more like Pebblesorters wish they were, and less like what Pebblesorters are.  All those AIs just sort the universe into prime heaps.  The Pebblesorters would celebrate that and say "Mission accomplished!" if they weren't dead, but it isn't what you want the universe to be like.  (And it isn't right, either.)


What kind of AI would the Pebblesorters have to execute, in order to make the universe a better place?


They'd have to execute an AI that did not do what Pebblesorters would-want, but an AI that simply, directly, did what was right - an AI that cared directly about things like life, health, and happiness.


But where would that AI come from?


If you were physically present on the scene, you could program that AI.  If you could send the Pebblesorters a radio message, you could tell them to program it - though you'd have to lie to them about what the AI did.


But if there's no such direct connection, then it requires a causal miracle for the Pebblesorters' AI to do what is right - a perpetual motion morality, with information appearing from nowhere.  If you write out a specification of an AI that does what is right, it takes a certain number of bits; it has a Kolmogorov complexity.  Where is that information appearing from, since it is not yet physically present in the Pebblesorters' Solar System?  What is the cause already present in the Pebble System, of which the right-doing AI is an eventual effect?  If the right-AI is written by a meta-right AI then where does the meta-right AI come from, causally speaking?


Be ye wary to distinguish between yonder levels.  It may seem to you that you ought to be able to deduce the correct answer just by thinking about it - surely, anyone can see that pebbles are pointless - but that's a correct answer to the question "What is right?", which carries its own invisible framework of arguments that it is right to be moved by.  This framework, though harder to see than arguments, has its physical conjugate in the human brain.  The framework does not mention the human brain, so we are not persuaded by the argument "That's what the human brain says!"  But this very event of non-persuasion takes place within a human brain that physically represents a moral framework that doesn't mention the brain.


This framework is not physically represented anywhere in the Pebble System.  It's not a different framework in the Pebble System, any more than different numbers are prime here than there.  So far as idealized abstract dynamics are concerned, the same thing is right in the Pebble System as right here.  But that idealized abstract framework is not physically embodied anywhere in the Pebble System.  If no human sends a physical message to the Pebble System, then how does anything right just happen to happen there, given that the right outcome is a very small target in the space of all possible outcomes?  It would take a thermodynamic miracle.


As for humans doing what's right - that's a moral miracle but not a causal miracle.  On a moral level, it's astounding indeed that creatures of mere flesh and goo, created by blood-soaked natural selection, should decide to try and transform the universe into a place of light and beauty.  On a moral level, it's just amazing that the brain does what is right, even though "The human brain says so!" isn't a valid moral argument.  On a causal level... once you understand how morality fits into a natural universe, it's not really all that surprising.


And if that disturbs you, if it seems to smack of relativism - just remember, your universalizing instinct, the appeal of objectivity, and your distrust of the state of human brains as an argument for anything, are also all implemented in your brain.  If you're going to care about whether morals are universally persuasive, you may as well care about people being happy; a paperclip maximizer is moved by neither argument.  See also Changing Your Metaethics.


It follows from all this, by the way, that the algorithm for CEV (the Coherent Extrapolated Volition formulation of Friendly AI) is not the substance of what's right.  If it were, then executing CEV anywhere, at any time, would do what was right - even with the Pebblesorters as its focus.  There would be no need to elaborately argue this, to have CEV on the left-hand-side and rightness on the r.h.s.; the two would be identical, or bear the same relation as PA+1 and PA.


So why build CEV?  Why not just build a do-what's-right AI?


Because we don't know the complete list of our own terminal values; we don't know the full space of arguments we can be moved by.  Human values are too complicated to program by hand.  We might not recognize the source code of a do-what's-right AI, any more than we would recognize a printout of our own neuronal circuitry if we saw it.  Sort of like how Peano Arithmetic doesn't recognize itself in a mirror.  If I listed out all your values as mere English words on paper, you might not be all that moved by the list: is it more uplifting to see sunlight glittering off water, or to read the word "beauty"?


But in this art of Friendly AI, understanding metaethics on a naturalistic level, we can guess that our morals and metamorals will be physically represented in our brains, even though our morality (considered as an idealized abstracted dynamic) doesn't attach any explicit moral force to "Because a brain said so."


So when we try to make an AI whose physical consequence is the implementation of what is right, we make that AI's causal chain start with the state of human brains - perhaps nondestructively scanned on the neural level by nanotechnology, or perhaps merely inferred with superhuman precision from external behavior - but not passed through the noisy, blurry, destructive filter of human beings trying to guess their own morals.


The AI can't start out with a direct representation of rightness, because the programmers don't know their own values (not to mention that there are other human beings out there than the programmers, if the programmers care about that).  The programmers can neither brain-scan themselves and decode the scan, nor superhumanly precisely deduce their internal generators from their outward behavior.


So you build the AI with a kind of forward reference:  "You see those humans over there?  That's where your utility function is."


As previously mentioned, there are tricky aspects to this.  You can't say:  "You see those humans over there?  Whatever desire is represented in their brains, is therefore right."  This, from a moral perspective, is wrong - wanting something doesn't make it right - and the conjugate failure of the AI is that it will reprogram your brains to want things that are easily obtained in great quantity.  If the humans are PA, then we want the AI to be PA+1, not Self-PA... metaphorically speaking.


You've got to say something along the lines of, "You see those humans over there?  Their brains contain the evidence you will use to deduce the correct utility function, even though right-ness is not caused by those brains, so that intervening to alter the brains won't alter the correct utility function."  Here, the "correct" in "correct utility function" is relative to a meta-utility framework that points to the humans and defines how their brains are to be treated as information.  I haven't worked out exactly how to do this, but it does look solvable.


And as for why you can't have an AI that rejects the "pointless" parts of a goal system and only keeps the "wise" parts - so that even in the Pebble System the AI rejects pebble-sorting and keeps the Pebblesorters safe and warm - it's the problem of the invisible framework again; you've only passed the recursive buck.  Humans contain the physical representations of the framework that we appeal to, when we ask whether a goal is pointless or wise.  Without sending a message to the Pebble System, the information there cannot physically materialize from nowhere as to which goals are pointless or wise.  This doesn't mean that different goals are pointless in the Pebble System, it means that no physical brain there is asking that question.


The upshot is that structurally similar CEV algorithms will behave differently depending on whether they have humans at the focus, or Pebblesorters.  You can infer that CEV will do what's right in the presence of humans, but the general algorithm in CEV is not the direct substance of what's right.  There is no moral imperative to execute CEVs regardless of their focus, on any planet.  It is only right to execute CEVs on decision systems that contain the seeds of rightness, such as humans.  (Again, see the concept of a moral miracle that is not a causal surprise.)


Think of a Friendly AI as being like a finely polished mirror, which reflects an image more accurately than any painting drawn with blurred eyes and shaky hand.  If you need an image that has the shape of an apple, you would do better to put an actual apple in front of the mirror, and not try to paint the apple by hand.  Even though the drawing would inherently be apple-shaped, it wouldn't be a good one; and even though the mirror is not inherently apple-shaped, in the presence of an actual apple it is a better picture than any painting could be.


"Why not just use an actual apple?" you ask.  Well, maybe this isn't\na merely accurate mirror; it has an internal camera system that\nlightens the apple's image before displaying it.  An actual apple would\nhave the right starting shape, but it wouldn't be bright enough.


You may also want a composite image of a lot of apples that have multiple possible reflective equilibria.


As for how the apple ended up apple-shaped, when the substance of the apple doesn't define apple-shaped-ness - in the very important sense that squishing the apple won't change what's apple-shaped - well, it wasn't a miracle, but it involves a strange loop through the invisible background framework.


And if the whole affair doesn't sound all that right... well... human beings were using numbers a long time before they invented Peano Arithmetic.  You've got to be almost as smart as a human to recognize yourself in a mirror, and you've got to be smarter than human to recognize a printout of your own neural circuitry.  This Friendly AI stuff is somewhere in between.  Would the rightness be easier to recognize if, in the end, no one died of Alzheimer's ever again?

" } }, { "_id": "sCs48JtMnQwQsZwyN", "title": "Invisible Frameworks", "pageUrl": "https://www.lesswrong.com/posts/sCs48JtMnQwQsZwyN/invisible-frameworks", "postedAt": "2008-08-22T03:36:37.000Z", "baseScore": 27, "voteCount": 26, "commentCount": 47, "url": null, "contents": { "documentId": "sCs48JtMnQwQsZwyN", "html": "

Followup to: Passing the Recursive Buck, No License To Be Human


Roko has mentioned his \"Universal Instrumental Values\" several times in his comments.  Roughly, Roko proposes that we ought to adopt as terminal values those things that a supermajority of agents would do instrumentally.  On Roko's blog he writes:


I'm suggesting that UIV provides the cornerstone for a rather new approach to goal system design. Instead of having a fixed utility function/supergoal, you periodically promote certain instrumental values to terminal values i.e. you promote the UIVs.


Roko thinks his morality is more objective than mine:


It also worries me quite a lot that eliezer's post is entirely symmetric under the action of replacing his chosen notions with the pebble-sorter's notions. This property qualifies as \"moral relativism\" in my book, though there is no point in arguing about the meanings of words.


My posts on universal instrumental values are not symmetric under replacing UIVs with some other set of goals that an agent might have. UIVs are the unique set of values X such that in order to achieve any other value Y, you first have to do X.



Well, and this proposal has a number of problems, as some of the commenters on Roko's blog point out.


For a start, Roko actually says \"universal\", not \"supermajority\", but there are no actual universal actions; no matter what the green button does, there are possible mind designs whose utility function just says \"Don't press the green button.\"  There is no button, in other words, that all possible minds will press.  Still, if you defined some prior weighting over the space of possible minds, you could probably find buttons that a supermajority would press, like the \"Give me free energy\" button.
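
Here is a small simulation of that supermajority reading - the goal space, the uniform weighting, and the helps relation are all invented for illustration:

```python
# Hypothetical: sample goals, count what fraction would press each button.
import random
random.seed(0)

GOALS = ["make paperclips", "sort pebbles", "prove theorems",
         "press the green button", "shut yourself down"]

def helps(button, goal):
    """Free energy helps almost any goal; the green button helps only its fans."""
    if button == "free energy":
        return goal != "shut yourself down"
    return goal == "press the green button"

sample = [random.choice(GOALS) for _ in range(10000)]
for button in ("free energy", "press the green button"):
    share = sum(helps(button, g) for g in sample) / len(sample)
    print(button, round(share, 2))  # roughly 0.8 versus roughly 0.2
```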


But to do nothing except press such buttons consists of constantly losing your purposes. You find that driving the car is useful for getting and eating chocolate, or for attending dinner parties, or even for buying and manufacturing more cars.  In fact, you realize that every intelligent agent will find it useful to travel places.  So you start driving the car around without any destination.  Roko hasn't noticed this because, by anthropomorphic optimism, he mysteriously only thinks of humanly appealing \"UIVs\" to propose, like \"creativity\".


Let me guess, Roko, you don't think that \"drive a car!\" is a \"valid\" UIV for some reason?  But you did not apply some fixed procedure you had previously written down, to decide whether \"drive a car\" was a valid UIV or not.  Rather you started out feeling a moment of initial discomfort, and then looked for reasons to disapprove.  I wonder why the same discomfort didn't occur to you when you considered \"creativity\".


But let us leave aside the universality, appeal, or well-specified-ness of Roko's metaethics.


Let us consider only Roko's claim that his morality is more objective than, say, mine, or this marvelous list by William Frankena that Roko quotes SEP quoting:


Life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom; beauty, harmony, proportion in objects contemplated; aesthetic experience; morally good dispositions or virtues; mutual affection, love, friendship, cooperation; just distribution of goods and evils; harmony and proportion in one's own life; power and experiences of achievement; self-expression; freedom; peace, security; adventure and novelty; and good reputation, honor, esteem, etc.


So!  Roko prefers his Universal Instrumental Values to this, because:


It also worries me quite a lot that eliezer's post is entirely symmetric under the action of replacing his chosen notions with the pebble-sorter's notions. This property qualifies as \"moral relativism\" in my book, though there is no point in arguing about the meanings of words.


My posts on universal instrumental values are not symmetric under replacing UIVs with some other set of goals that an agent might have. UIVs are the unique set of values X such that in order to achieve any other value Y, you first have to do X.


It would seem, then, that Roko attaches tremendous importance to claims of asymmetry and uniqueness; and tremendous disaffection to symmetry and relativism.


Which is to say that, when it comes to metamoral arguments, Roko is greatly moved to adopt morals by the statement \"this goal is universal\", while greatly moved to reject morals by the statement \"this goal is relative\".


In fact, so strong is this tendency of Roko's, that the metamoral argument \"Many agents will do X!\" is sufficient for Roko to adopt X as a terminal value.  Indeed, Roko thinks that we ought to get all our terminal values this way.


Is this objective?


Yes and no.


When you evaluate the question \"How many agents do X?\", the answer does not depend on which agent evaluates it.  It does depend on quantities like your weighting over all possible agents, and on the particular way you slice up possible events into categories like \"X\".  But let us be charitable: if you adopt a fixed weighting over agents and a fixed set of category boundaries, the question \"How many agents do X?\" has a unique answer.  In this sense, Roko's meta-utility function is objective.


But of course Roko's meta-utility function is not \"objective\" in the sense of universal compellingness.  It is only Roko who finds the argument \"Most agents do X instrumentally\" a compelling reason to promote X to a terminal value.  I don't find it compelling; it looks to me like losing purpose and double-counting expected utilities.  The vast majority of possible agents, in fact, will not find it a compelling argument!  A paperclip maximizer perceives no utility-function-changing, metamoral valence in the proposition \"Most agents will find it useful to travel from one place to another.\"


Now this seems like an extremely obvious criticism of Roko's theory.  Why wouldn't Roko have thought of it?


Because when Roko feels like he's being objective, he's using his meta-morality as a fixed given—evaluating the question \"How many agents do X?\" in different places and times, but not asking any different questions.  The answer to his meta-moral question has occurred to him as a variable to be investigated; the meta-moral question itself is off the table.


But—of course—when a Pebblesorter regards \"13 and 7!\" as a powerful metamoral argument that \"heaps of 91 pebbles\" should not be a positive value in their utility function, they are asking a question whose answer is the same in all times and all places.  They are asking whether 91 is prime or composite (it is composite: 91 = 7 × 13).  A Pebblesorter, perhaps, would feel the same powerful surge of objectivity that Roko feels when Roko asks the question \"How many agents have this instrumental value?\"  But in this case it readily occurs to Roko to ask \"Why care if the heap is prime or not?\"  As it does not occur to Roko to ask, \"Why care if this instrumental goal is universal or not?\"  Why... isn't it just obvious that it matters whether an instrumental goal is universal?


The Pebblesorter's framework is readily visible to Roko, since it differs from his own.  But when Roko asks his own question—\"Is this goal universally instrumental?\"—he sees only the answer, and not the question; he sees only the output as a potential variable, not the framework.


Like PA, which only sees the compellingness of particular proofs that use the Peano Axioms, and does not consider the quoted Peano Axioms as subject matter.  It is only PA+1 that sees the framework of PA.


But there is always a framework, every time you are moved to change your morals—the question is whether it will be invisible to you or not.  That framework is always implemented in some particular brain, so that the same argument would fail to compel a differently constructed brain—though this does not imply that the framework makes any mention of brains at all.


And this difficulty of the invisible framework is at work, every time someone says, \"But of course the correct morality is just the one that helps you survive / the one that helps you be happy\"—implicit there is a supposed framework of meta-moral arguments that move you.  But maybe I don't think that being happy is the one and only argument that matters.


Roko is adopting a special and unusual metamoral framework in regarding \"Most agents do X!\" as a compelling reason to change one's utility function.  Why might Roko find this appealing?  Humans, for very understandable reasons of evolutionary psychology, have a universalizing instinct; we think that a valid argument should persuade anyone.


But what happens if we confess that such thinking can be valid? What happens if we confess that a meta-moral argument can (in its invisible framework) use the universalizing instinct?  Then we have... just done something very human.  We haven't explicitly adopted the rule that all human instincts are good because they are human—but we did use one human instinct to think about morality.  We didn't explicitly think that's what we were doing, any more than PA quotes itself in every proof; but we felt that a universally instrumental goal had this appealing quality of objective-ness about it, which is the perception of an intuition that evolved.  This doesn't mean that objective-ness is subjective.  If you define objectiveness precisely, then the question \"What is objective?\" will have a unique answer.  But it does mean that we have just been compelled by an argument that will not compel every possible mind.


If it's okay to be compelled by the appealing objectiveness of a moral, then why not also be compelled by...


...life, consciousness, and activity; health and strength; pleasures and satisfactions of all or certain kinds; happiness, beatitude, contentment, etc.; truth; knowledge and true opinions of various kinds, understanding, wisdom...


Such values, if precisely defined, can be just as objective as the question \"How many agents do X?\" in the sense that \"How much health is in this region here?\" will have a single unique answer.  But it is humans who care about health, just as it is humans who care about universalizability.


The framework by which we care about health and happiness is as much evolved, and human, and as much part of the very substance of that which we name right whether it is human or not... as our tendency to find universalizable morals appealing.


And every sort of thing that a mind can do will have some framework behind it.  Every sort of argument that can compel one mind, will fail to be an argument in the framework of another.


We are in the framework we name right; and every time we try to do what is correct, what we should, what we must, what we ought, that is the question we are asking.


Which question should we ask?  What is the correct question?


Don't let your framework for answering those questions be invisible!  Don't think you've answered them without asking any questions!


There is always the meta-meta-meta-question and it always has a framework.


I, for one, have decided to answer such questions the right way, as the alternative is to answer them the wrong way, like Roko is doing.


And the Pebblesorters do not disagree with any of this; they do what is objectively prime, not what is objectively right.  And the Roko-AI does what is objectively often-instrumental, flying starships around with no destination; I don't disagree that travel is often-instrumental, I just say it is not right.


There is no right-ness that isn't in any framework—no feeling of rightness, no internal label that your brain produces, that can be detached from any method whatsoever of computing it—that just isn't what we're talking about when we ask \"What should I do now?\"  Because if anything labeled should, is right, then that is Self-PA.


 


Part of The Metaethics Sequence


(end of sequence)


Previous post: \"No License To Be Human\"

" } }, { "_id": "YrhT7YxkRJoRnr7qD", "title": "No License To Be Human", "pageUrl": "https://www.lesswrong.com/posts/YrhT7YxkRJoRnr7qD/no-license-to-be-human", "postedAt": "2008-08-20T23:18:25.000Z", "baseScore": 71, "voteCount": 33, "commentCount": 54, "url": null, "contents": { "documentId": "YrhT7YxkRJoRnr7qD", "html": "

Followup to: You Provably Can't Trust Yourself

Yesterday I discussed the difference between:

- PA, which implicitly trusts the fixed framework of Peano Arithmetic;
- PA+1, which explicitly trusts PA, but makes no mention of trusting itself;
- Self-PA, which trusts itself.

These systems are formally distinct.  PA+1 can prove things that PA cannot.  Self-PA is inconsistent, and can prove anything via Löb's Theorem.


With these distinctions in mind, I hope my intent will be clearer, when I say that although I am human and have a human-ish moral framework, I do not think that the fact of acting in a human-ish way licenses anything.


I am a self-renormalizing moral system, but I do not think there is any general license to be a self-renormalizing moral system.


And while we're on the subject, I am an epistemologically incoherent creature, trying to modify his ways of thinking in accordance with his current conclusions; but I do not think that reflective coherence implies correctness.


Let me take these issues in reverse order, starting with the general unlicensure of epistemological reflective coherence. 


If five different people go out and investigate a city, and draw five different street maps, we should expect the maps to be (mostly roughly) consistent with each other.  Accurate maps are necessarily consistent with each other and within themselves, there being only one reality.  But if I sit in my living room with my blinds closed, I can draw up one street map from my imagination and then make four copies: these five maps will be consistent among themselves, but not accurate. Accuracy implies consistency but not the other way around.


In Where Recursive Justification Hits Bottom, I talked about whether \"I believe that induction will work on the next occasion, because it's usually worked before\" is legitimate reasoning, or \"I trust Occam's Razor because the simplest explanation for why Occam's Razor often works is that we live in a highly ordered universe\".  We actually formalized the idea of scientific induction, starting from an inductive instinct; we modified our intuitive understanding of Occam's Razor (Maxwell's Equations are in fact simpler than Thor, as an explanation for lightning) based on the simple idea that \"the universe runs on equations, not heroic mythology\".  So we did not automatically and unthinkingly confirm our assumptions, but rather, used our intuitions to correct them—seeking reflective coherence.


But I also remarked:


\"And what about trusting reflective coherence in general?  Wouldn't most possible minds, randomly generated and allowed to settle into a state of reflective coherence, be incorrect?  Ah, but we evolved by natural selection; we were not generated randomly.\"


So you are not, in general, safe if you reflect on yourself and achieve internal coherence.  The Anti-Inductors who compute that the probability of the coin coming up heads on the next occasion, decreases each time they see the coin come up heads, may defend their anti-induction by saying:  \"But it's never worked before!\"
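
A toy illustration of that computation (my own, not from the post; the backwards Laplace rule below is just one way an anti-inductor might assign probabilities):

```python
# Anti-induction, sketched: after seeing h heads in n flips, assign
# P(heads next) = (n - h + 1) / (n + 2), the Laplace rule run backwards,
# so that each observed head LOWERS the predicted chance of heads.
def anti_inductor_p_heads(h: int, n: int) -> float:
    return (n - h + 1) / (n + 2)

print(anti_inductor_p_heads(0, 0))    # 0.5 before any evidence
print(anti_inductor_p_heads(10, 10))  # ~0.083: "but it's never worked before!"
```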


The only reason why our human reflection works, is that we are good enough to make ourselves better—that we had a core instinct of induction, a core instinct of simplicity, that wasn't sophisticated or exactly right, but worked well enough.


A mind that was completely wrong to start with, would have no seed of truth from which to heal itself.  (It can't forget everything and become a mind of pure emptiness that would mysteriously do induction correctly.)


So it's not that reflective coherence is licensed in general, but that it's a good idea if you start out with a core of truth or correctness or good priors.  Ah, but who is deciding whether I possess good priors?  I am!  By reflecting on them!  The inescapability of this strange loop is why a broken mind can't heal itself—because there is no jumping outside of all systems.


I can only plead that, in evolving to perform induction rather than anti-induction, in evolving a flawed but not absolutely wrong instinct for simplicity, I have been blessed with an epistemic gift.


I can only plead that self-renormalization works when I do it, even though it wouldn't work for Anti-Inductors.  I can only plead that when I look over my flawed mind and see a core of useful reasoning, that I am really right, even though a completely broken mind might mistakenly perceive a core of useful truth.


Reflective coherence isn't licensed for all minds.  It works for me, because I started out with an epistemic gift.


It doesn't matter if the Anti-Inductors look over themselves and decide that their anti-induction also constitutes an epistemic gift; they're wrong, I'm right.


And if that sounds philosophically indefensible, I beg you to step back from philosophy, and consider whether what I have just said is really truly true.


(Using your own concepts of induction and simplicity to do so, of course.)


Does this sound a little less indefensible, if I mention that PA trusts only proofs from the PA axioms, not proofs from every possible set of axioms?  To the extent that I trust things like induction and Occam's Razor, then of course I don't trust anti-induction or anti-Occamian priors—they wouldn't start working just because I adopted them.


What I trust isn't a ghostly variable-framework from which I arbitrarily picked one possibility, so that picking any other would have worked as well so long as I renormalized it.  What I trust is induction and Occam's Razor, which is why I use them to think about induction and Occam's Razor.


(Hopefully I have not just licensed myself to trust myself; only licensed being moved by both implicit and explicit appeals to induction and Occam's Razor.  Hopefully this makes me PA+1, not Self-PA.)


So there is no general, epistemological license to be a self-renormalizing factual reasoning system.


The reason my system works is because it started out fairly inductive—not because of the naked meta-fact that it's trying to renormalize itself using any system; only induction counts.  The license—no, the actual usefulness—comes from the inductive-ness, not from mere reflective-ness.  Though I'm an inductor who says so!


And, sort-of similarly, but not exactly analogously:


There is no general moral license to be a self-renormalizing decision system.  Self-consistency in your decision algorithms is not that-which-is-right.


The Pebblesorters place the entire meaning of their lives in assembling correct heaps of pebbles and scattering incorrect ones; they don't know what makes a heap correct or incorrect, but they know it when they see it.  It turns out that prime heaps are correct, but determining primality is not an easy problem for their brains.  Like PA and unlike PA+1, the Pebblesorters are moved by particular and specific arguments tending to show that a heap is correct or incorrect (that is, prime or composite) but they have no explicit notion of \"prime heaps are correct\" or even \"Pebblesorting People can tell which heaps are correct or incorrect\". They just know (some) correct heaps when they see them, and can try to figure out the others.


Let us suppose by way of supposition, that when the Pebblesorters are presented with the essence of their decision system—that is, the primality test—they recognize it with a great leap of relief and satisfaction.  We can spin other scenarios—Peano Arithmetic, when presented with itself, does not prove itself correct.  But let's suppose that the Pebblesorters recognize a wonderful method of systematically producing correct pebble heaps.  Or maybe they don't endorse Adleman's test as being the essence of correctness—any more than Peano Arithmetic proves that what PA proves is true—but they do recognize that Adleman's test is a wonderful way of producing correct heaps.
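
To make the \"essence of their decision system\" concrete, here is a minimal sketch of a primality test in code (my own illustration, using simple trial division rather than Adleman's actual, more sophisticated test; the function name is invented):

```python
def heap_is_correct(n: int) -> bool:
    """A heap of n pebbles is correct exactly when n is prime.
    Trial division up to the square root of n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False  # composite heap: scatter it
        d += 1
    return True  # prime heap: correct

# The Pebblesorters' particular, specific arguments, in executable form:
assert heap_is_correct(7) and heap_is_correct(13)
assert not heap_is_correct(91)  # 91 = 7 * 13
```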


Then the Pebblesorters have a reflectively coherent decision system.


But this does not constitute a disagreement between them and humans about what is right, any more than humans, in scattering a heap of 3 pebbles, are disagreeing with the Pebblesorters about which numbers are prime!


The Pebblesorters are moved by arguments like \"Look at this row of 13 pebbles, and this row of 7 pebbles, arranged at right angles to each other; how can you see that, and still say that a heap of 91 pebbles is correct?\"


Human beings are moved by arguments like \"Hatred leads people to play purely negative-sum games, sacrificing themselves and hurting themselves to make others hurt still more\" or \"If there is not the threat of retaliation, carried out even when retaliation is profitless, there is no credible deterrent against those who can hurt us greatly for a small benefit to themselves\".


This is not a minor difference of flavors.  When you reflect on the kind of arguments involved here, you are likely to conclude that the Pebblesorters really are talking about primality, whereas the humans really are arguing about what's right.  And I agree with this, since I am not a moral relativist.  I don't think that morality being moral implies any ontologically basic physical rightness attribute of objects; and conversely, I don't think the lack of such a basic attribute is a reason to panic.


I may have contributed to the confusion here by labeling the Pebblesorters' decisions \"p-right\".  But what they are talking about is not a different brand of \"right\".  What they're talking about is prime numbers.  There is no general rule that reflectively coherent decision systems are right; the Pebblesorters, in merely happening to implement a reflectively coherent decision system, are not yet talking about morality!


It's been suggested that I should have spoken of \"p-right\" and \"h-right\", not \"p-right\" and \"right\".


But of course I made a very deliberate decision not to speak of \"h-right\".  That sounds like there is a general license to be human.


It sounds like being human is the essence of rightness.  It sounds like the justification framework is \"this is what humans do\" and not \"this is what saves lives, makes people happy, gives us control over our own lives, involves us with others and prevents us from collapsing into total self-absorption, keeps life complex and non-repeating and aesthetic and interesting, dot dot dot etcetera etcetera\".


It's possible that the above value list, or your equivalent value list, may not sound like a compelling notion unto you.  Perhaps you are only moved to perform particular acts that make people happy—not caring all that much yet about this general, explicit, verbal notion of \"making people happy is a value\".  Listing out your values may not seem very valuable to you.  (And I'm not even arguing with that judgment, in terms of everyday life; but a Friendly AI researcher has to know the metaethical score, and you may have to judge whether funding a Friendly AI project will make your children happy.)  Which is just to say that you're behaving like PA, not PA+1.


And as for that value framework being valuable because it's human—why, it's just the other way around: humans have received a moral gift, which Pebblesorters lack, in that we started out interested in things like happiness instead of just prime pebble heaps.


Now this is not actually a case of someone reaching in from outside with a gift-wrapped box; any more than the \"moral miracle\" of blood-soaked natural selection producing Gandhi, is a real miracle.


It is only when you look out from within the perspective of morality, that it seems like a great wonder that natural selection could produce true friendship.  And it is only when you look out from within the perspective of morality, that it seems like a great blessing that there are humans around to colonize the galaxies and do something interesting with them.  From a purely causal perspective, nothing unlawful has happened.


But from a moral perspective, the wonder is that there are these human brains around that happen to want to help each other—a great wonder indeed, since human brains don't define rightness, any more than natural selection defines rightness.


And that's why I object to the term \"h-right\".  I am not trying to do what's human.  I am not even trying to do what is reflectively coherent for me.  I am trying to do what's right.


It may be that humans argue about what's right, and Pebblesorters do what's prime.  But this doesn't change what's right, and it doesn't make what's right vary from planet to planet, and it doesn't mean that the things we do are right in mere virtue of our deciding on them—any more than Pebblesorters make a heap prime or not prime by deciding that it's \"correct\".


The Pebblesorters aren't trying to do what's p-prime any more than humans are trying to do what's h-prime.  The Pebblesorters are trying to do what's prime.  And the humans are arguing about, and occasionally even really trying to do, what's right.


The Pebblesorters are not trying to create heaps of the sort that a Pebblesorter would create (note circularity).  The Pebblesorters don't think that Pebblesorting thoughts have a special and supernatural influence on whether heaps are prime.  The Pebblesorters aren't trying to do anything explicitly related to Pebblesorters—just like PA isn't trying to prove anything explicitly related to proof.  PA just talks about numbers; it took a special and additional effort to encode any notions of proof in PA, to make PA talk about itself.


PA doesn't ask explicitly whether a theorem is provable in PA, before accepting it—indeed PA wouldn't care if it did prove that an encoded theorem was provable in PA.  Pebblesorters don't care what's p-prime, just what's prime.  And I don't give a damn about this \"h-rightness\" stuff: there's no license to be human, and it doesn't justify anything.


 


Part of The Metaethics Sequence


Next post: \"Invisible Frameworks\"


Previous post: \"You Provably Can't Trust Yourself\"

" } }, { "_id": "rm8tv9qZ9nwQxhshx", "title": "You Provably Can't Trust Yourself", "pageUrl": "https://www.lesswrong.com/posts/rm8tv9qZ9nwQxhshx/you-provably-can-t-trust-yourself", "postedAt": "2008-08-19T20:35:47.000Z", "baseScore": 49, "voteCount": 30, "commentCount": 19, "url": null, "contents": { "documentId": "rm8tv9qZ9nwQxhshx", "html": "

Followup to: Where Recursive Justification Hits Bottom, Löb's Theorem

Peano Arithmetic seems pretty trustworthy.  We've never found a case where Peano Arithmetic proves a theorem T, and yet T is false in the natural numbers.  That is, we know of no case where []T (\"T is provable in PA\") and yet ~T (\"not T\").


We also know of no case where first order logic is invalid:  We know of no case where first-order logic produces false conclusions from true premises. (Whenever first-order statements H are true of a model, and we can syntactically deduce C from H, checking C against the model shows that C is also true.)


Combining these two observations, it seems like we should be able to get away with adding a rule to Peano Arithmetic that says:


All T:  ([]T -> T)


But Löb's Theorem seems to show that as soon as we do that, everything becomes provable.  What went wrong?  How can we do worse by adding a true premise to a trustworthy theory?  Is the premise not true—does PA prove some theorems that are false?  Is first-order logic not valid—does it sometimes prove false conclusions from true premises?
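
For reference, the standard statement being invoked (a textbook fact, not anything new to this post), with the post's []C written as \Box C:

```latex
% Löb's Theorem, where \Box C abbreviates "C is provable in theory T"
% (for a consistent T extending PA that satisfies the usual
% derivability conditions):
\[
  T \vdash (\Box C \rightarrow C)
  \quad\Longrightarrow\quad
  T \vdash C .
\]
% Hence a system whose \Box refers to provability in that very system,
% and which asserts ([]C -> C) for every C, proves every C.
```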


Actually, there's nothing wrong with reasoning from the axioms of Peano Arithmetic plus the axiom schema \"Anything provable in Peano Arithmetic is true.\"  But the result is a different system from PA, which we might call PA+1.  PA+1 does not reason from identical premises to PA; something new has been added.  So we can evade Löb's Theorem because PA+1 is not trusting itself—it is only trusting PA.


If you are not previously familiar with mathematical logic, you might be tempted to say, \"Bah!  Of course PA+1 is trusting itself! PA+1 just isn't willing to admit it!  Peano Arithmetic already believes anything provable in Peano Arithmetic—it will already output anything provable in Peano Arithmetic as a theorem, by definition! How does moving to PA+1 change anything, then?  PA+1 is just the same system as PA, and so by trusting PA, PA+1 is really trusting itself. Maybe that dances around some obscure mathematical problem with direct self-reference, but it doesn't evade the charge of self-trust.\"


But PA+1 and PA really are different systems; in PA+1 it is possible to prove true statements about the natural numbers that are not provable in PA.  If you're familiar with mathematical logic, you know this is because some nonstandard models of PA are ruled out in PA+1. Otherwise you'll have to take my word that Peano Arithmetic doesn't fully describe the natural numbers, and neither does PA+1, but PA+1 characterizes the natural numbers slightly better than PA.


The deeper point is the enormous gap, the tremendous difference, between having a system just like PA except that it trusts PA, and a system just like PA except that it trusts itself.


If you have a system that trusts PA, that's no problem; we're pretty sure PA is trustworthy, so the system is reasoning from true premises. But if you have a system that looks like PA—having the standard axioms of PA—but also trusts itself, then it is trusting a self-trusting system, something for which there is no precedent.  In the case of PA+1, PA+1 is trusting PA which we're pretty sure is correct.  In the case of Self-PA it is trusting Self-PA, which we've never seen before—it's never been tested, despite its misleading surface similarity to PA.  And indeed, Self-PA collapses via Löb's Theorem and proves everything—so I guess it shouldn't have trusted itself after all!  All this isn't magic; I've got a nice Cartoon Guide to how it happens, so there's no good excuse for not understanding what goes on here.


I have spoken of the Type 1 calculator that asks \"What is 2 + 3?\" when the buttons \"2\", \"+\", and \"3\" are pressed; versus the Type 2 calculator that asks \"What do I calculate when someone presses '2 + 3'?\"  The first calculator answers 5; the second calculator can truthfully answer anything, even 54.


But this doesn't mean that all calculators that reason about calculators are flawed.  If I build a third calculator that asks \"What does the first calculator answer when I press '2 + 3'?\", perhaps by calculating out the individual transistors, it too will answer 5. Perhaps this new, reflective calculator will even be able to answer some questions faster, by virtue of proving that some faster calculation is isomorphic to the first calculator.


PA is the equivalent of the first calculator; PA+1 is the equivalent of the third calculator; but Self-PA is like unto the second calculator.
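
The three calculators can be sketched in a few lines (a toy illustration of my own, following the post's description; none of this code is from the original):

```python
def calc1(a: int, b: int) -> int:
    """Type 1: asks "What is a + b?" and simply computes it."""
    return a + b

def calc2(a: int, b: int) -> int:
    """Type 2: asks "What do I calculate when someone presses 'a + b'?"
    Any fixed output makes it truthful, even 54, since whatever it
    returns is indeed what it calculates."""
    return 54

def calc3(a: int, b: int) -> int:
    """Type 3: asks "What does the FIRST calculator answer?", here by
    simulating calc1 directly."""
    return calc1(a, b)

print(calc1(2, 3), calc2(2, 3), calc3(2, 3))  # 5 54 5
```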


As soon as you start trusting yourself, you become unworthy of trust.  You'll start believing any damn thing that you think, just because you thought it.  This wisdom of the human condition is pleasingly analogous to a precise truth of mathematics.


Hence the saying:  \"Don't believe everything you think.\"


And the math also suggests, by analogy, how to do better:  Don't trust thoughts because you think them, but because they obey specific trustworthy rules.


PA only starts believing something—metaphorically speaking—when it sees a specific proof, laid out in black and white.  If you say to PA—even if you prove to PA—that PA will prove something, PA still won't believe you until it sees the actual proof.  Now, this might seem to invite inefficiency; PA+1, by contrast, will believe you if you prove that PA will prove something—because PA+1 trusts the specific, fixed framework of Peano Arithmetic, not itself.


As far as any human knows, PA does happen to be sound; which means that what PA proves to be provable in PA, PA will eventually prove and will eventually believe.  Likewise, anything PA+1 can prove that it proves, it will eventually prove and believe.  It seems so tempting to just make PA trust itself—but then it becomes Self-PA and implodes.  Isn't that odd?  PA believes everything it proves, but it doesn't believe \"Everything I prove is true.\"  PA trusts a fixed framework for how to prove things, and that framework doesn't happen to talk about trust in the framework.


You can have a system that trusts the PA framework explicitly,  as well as implicitly: that is PA+1.  But the new framework that PA+1 uses, makes no mention of itself; and the specific proofs that PA+1 demands, make no mention of trusting PA+1, only PA.  You might say that PA implicitly trusts PA, PA+1 explicitly trusts PA, and Self-PA trusts itself.


For everything that you believe, you should always find yourself able to say, \"I believe because of [specific argument in framework F]\", not \"I believe because I believe\".


Of course, this gets us into the +1 question of why you ought to trust or use framework F.  Human beings, not being formal systems, are too reflective to get away with being unable to think about the problem.  Got a superultimate framework U?  Why trust U?


And worse: as far as I can tell, using induction is what leads me to explicitly say that induction seems to often work, and my use of Occam's Razor is implicated in my explicit endorsement of Occam's Razor.  Despite my best efforts, I have been unable to prove that this is inconsistent, and I suspect it may be valid.


But it does seem that the distinction between using a framework and mentioning it, or between explicitly trusting a fixed framework F and trusting yourself, is at least important to unraveling foundational tangles—even if Löb turns out not to apply directly.


Which gets me to the reason why I'm saying all this in the middle of a sequence about morality.


I've been pondering the unexpectedly large inferential distances at work here—I thought I'd gotten all the prerequisites out of the way for explaining metaethics, but no.  I'm no longer sure I'm even close.  I tried to say that morality was a \"computation\", and that failed; I tried to explain that \"computation\" meant \"abstracted idealized dynamic\", but that didn't work either.  No matter how many different ways I tried to explain it, I couldn't get across the distinction my metaethics drew between \"do the right thing\", \"do the human thing\", and \"do my own thing\".  And it occurs to me that my own background, coming into this, may have relied on having already drawn the distinction between PA, PA+1 and Self-PA.


Coming to terms with metaethics, I am beginning to think, is all about distinguishing between levels.  I first learned to do this rigorously back when I was getting to grips with mathematical logic, and discovering that you could prove complete absurdities, if you lost track even once of the distinction between \"believe particular PA proofs\", \"believe PA is sound\", and \"believe you yourself are sound\".  If you believe any particular PA proof, that might sound pretty much the same as believing PA is sound in general; and if you use PA and only PA, then trusting PA (that is, being moved by arguments that follow it) sounds pretty much the same as believing that you yourself are sound.  But after a bit of practice with the actual math—I did have to practice the actual math, not just read about it—my mind formed permanent distinct buckets and built walls around them to prevent the contents from slopping over.


Playing around with PA and its various conjugations, gave me the notion of what it meant to trust arguments within a framework that defined justification.  It gave me practice keeping track of specific frameworks, and holding them distinct in my mind.


Perhaps that's why I expected to communicate more sense than I actually succeeded in doing, when I tried to describe right as a framework of justification that involved being moved by particular, specific terminal values and moral arguments; analogous to an entity who is moved by encountering a specific proof from the allowed axioms of Peano Arithmetic.  As opposed to a general license to do whatever you prefer, or a morally relativistic term like \"utility function\" that can eat the values of any given species, or a neurological framework contingent on particular facts about the human brain.  You can make good use of such concepts, but I do not identify them with the substance of what is right.


Gödelian arguments are inescapable; you can always isolate the framework-of-trusted-arguments if a mathematical system makes sense at all.  Maybe the adding-up-to-normality-ness of my system will become clearer, after it becomes clear that you can always isolate the framework-of-trusted-arguments of a human having a moral argument.


 


Part of The Metaethics Sequence


Next post: \"No License To Be Human\"


Previous post: \"The Bedrock of Morality: Arbitrary?\"

" } }, { "_id": "iRWK3s7SwFKPcQ4LT", "title": "Dumb Deplaning", "pageUrl": "https://www.lesswrong.com/posts/iRWK3s7SwFKPcQ4LT/dumb-deplaning", "postedAt": "2008-08-18T23:49:39.000Z", "baseScore": 7, "voteCount": 8, "commentCount": 35, "url": null, "contents": { "documentId": "iRWK3s7SwFKPcQ4LT", "html": "

So I just traveled to Portsmouth, VA for an experimental conference - in the sense that I don't expect conferences of this type to prove productive, but maybe I should try at least once - in the unlikely event that there are any local Overcoming Bias readers who want to drive out to Portsmouth for a meeting on say the evening of the 20th, email me - anyway, I am struck, for the Nth time, how uncooperative people are in getting off planes.


Most people, as soon as they have a chance to make for the exit, do so - even if they need to take down luggage first.  At any given time after the initial rush to the aisles, usually a single person is taking down luggage, while the whole line behind them waits.  Then the line moves forward a little and the next person starts taking down their luggage.


In programming we call this a "greedy local algorithm".  But since everyone does it, no one seems to feel "greedy".


How would I do it?  Off the top of my head:


"Left aisle seats, please rise and move to your luggage.  (Pause.)  Left aisle seats, please retrieve your luggage.  (Pause.)  Left aisle seats, please deplane.  (Pause.)  Right aisle seats, please rise and move to your luggage..."

There are numerous other minor tweaks that this suggests, like seating people with tight connections near the front left aisle, or boarding passengers with window seats before passengers with middle and aisle seats.
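
A back-of-the-envelope model suggests the size of the win (my own toy calculation; the passenger count and timings are invented):

```python
# Toy model: N passengers share one aisle; each needs LUG seconds to
# retrieve luggage and WALK seconds to clear the aisle once moving.
N, LUG, WALK = 30, 10.0, 2.0

# Greedy local algorithm: each passenger in turn blocks the aisle while
# retrieving luggage, then walks off; everyone behind waits.
greedy = N * (LUG + WALK)

# Staged deplaning: a group retrieves luggage in parallel at their own
# seats, then files out back-to-back.
staged = LUG + N * WALK

print(f"greedy: {greedy:.0f}s, staged: {staged:.0f}s")
# greedy: 360s, staged: 70s
```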


But the main thing that strikes me is twofold:


First, everyone who stops to take down their luggage while everyone waits behind them - as opposed to waiting to rise until the aisle is clear - is playing a negative-sum game; the benefit to themselves is smaller than the total cost to all the others waiting in line.


Second, the airline has a motive to clear passengers quickly to reduce turnaround time.  But the airline does not regulate the deplaning process.  Even though it would be straightforward - defectors being readily spotted - and I don't even see why it would be resented.


Am I missing something?  Is there some mysterious Freakonomics-style explanation for this?


Heck, people usually manage to regulate themselves on worse cases than this.  Most of the people blocking the aisle wouldn't walk away with someone else's purse.  Are we just stuck in an equilibrium of mutual defection?  You'd think people not in a rush would be willing to unilaterally wait until the aisle is clear before getting up, as it's an inexpensive way to purchase a chance to feel quietly virtuous.


If an essentially friendly crowd of human beings can't cooperate well enough to walk off a damned plane... now really, we should have more pride as a species than that.

" } }, { "_id": "ALCnqX6Xx8bpFMZq3", "title": "The Cartoon Guide to Löb's Theorem", "pageUrl": "https://www.lesswrong.com/posts/ALCnqX6Xx8bpFMZq3/the-cartoon-guide-to-loeb-s-theorem", "postedAt": "2008-08-17T20:35:45.000Z", "baseScore": 45, "voteCount": 36, "commentCount": 105, "url": null, "contents": { "documentId": "ALCnqX6Xx8bpFMZq3", "html": "

Lo!  A cartoon proof of Löb's Theorem!


Löb's Theorem shows that a mathematical system cannot assert its own soundness without becoming inconsistent.  Marcello and I wanted to be able to see the truth of Löb's Theorem at a glance, so we doodled it out in the form of a cartoon.  (An inability to trust assertions made by a proof system isomorphic to yourself, may be an issue for self-modifying AIs.)


It was while learning mathematical logic that I first learned to rigorously distinguish between X, the truth of X, the quotation of X, a proof of X, and a proof that X's quotation was provable.


The cartoon guide follows as an embedded Scribd document after the jump, or you can download as a PDF file.  Afterward I offer a medium-hard puzzle to test your skill at drawing logical distinctions.

[Embedded Scribd document: Cartoon Guide to Löb's Theorem]

And now for your medium-hard puzzle:


The Deduction Theorem (look it up) states that whenever assuming a hypothesis H enables us to prove a formula F in classical logic, then (H->F) is a theorem in classical logic.


Let ◻Z stand for the proposition \"Z is provable\".  Löb's Theorem shows that, whenever we have ((◻C)->C), we can prove C.


Applying the Deduction Theorem to Löb's Theorem gives us, for all C:


((◻C)->C)->C


However, those familiar with the logic of material implication will realize that:


(X->Y)->Y
  implies
(not X)->Y


Applied to the above, this yields (not ◻C)->C.


That is, all statements which lack proofs are true.


I cannot prove that 2 = 1.


Therefore 2 = 1.


Can you exactly pinpoint the flaw?
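
For reference, the chain of reasoning restated in symbols (just a transcription of the steps above, not the answer to the puzzle):

```latex
\begin{align*}
  &\vdash (\Box C \rightarrow C) \;\Rightarrow\; \vdash C
    && \text{(L\"ob's Theorem)} \\
  &\vdash ((\Box C \rightarrow C) \rightarrow C)
    && \text{(``by the Deduction Theorem'')} \\
  &\vdash (\lnot \Box C \rightarrow C)
    && \text{(material implication)} \\
  &\nvdash (2{=}1) \;\Rightarrow\; (2{=}1)
    && \text{(instantiating } C := (2{=}1) \text{)}
\end{align*}
```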

" } }, { "_id": "f4RJtHBPvDRJcCTva", "title": "When Anthropomorphism Became Stupid", "pageUrl": "https://www.lesswrong.com/posts/f4RJtHBPvDRJcCTva/when-anthropomorphism-became-stupid", "postedAt": "2008-08-16T23:43:01.000Z", "baseScore": 57, "voteCount": 44, "commentCount": 12, "url": null, "contents": { "documentId": "f4RJtHBPvDRJcCTva", "html": "

It turns out that most things in the universe don't have minds.


This statement would have provoked incredulity among many earlier cultures.  \"Animism\" is the usual term.  They thought that trees, rocks, streams, and hills all had spirits because, hey, why not?


I mean, those lumps of flesh known as \"humans\" contain thoughts, so why shouldn't the lumps of wood known as \"trees\"?


My muscles move at my will, and water flows through a river.  Who's to say that the river doesn't have a will to move the water?  The river overflows its banks, and floods my tribe's gathering-place - why not think that the river was angry, since it moved its parts to hurt us? It's what we would think when someone's fist hit our nose.


There is no obvious reason - no reason obvious to a hunter-gatherer - why this cannot be so.  It only seems like a stupid mistake if you confuse weirdness with stupidity.  Naturally the belief that rivers have animating spirits seems \"weird\" to us, since it is not a belief of our tribe.  But there is nothing obviously stupid about thinking that great lumps of moving water have spirits, just like our own lumps of moving flesh.


If the idea were obviously stupid, no one would have believed it.  Just like, for the longest time, nobody believed in the obviously stupid idea that the Earth moves while seeming motionless.


Is it obvious that trees can't think?  Trees, let us not forget, are in fact our distant cousins.  Go far enough back, and you have a common ancestor with your fern.  If lumps of flesh can think, why not lumps of wood?


For it to be obvious that wood doesn't think, you have to belong to a culture with microscopes.  Not just any microscopes, but really good microscopes.


Aristotle thought the brain was an organ for cooling the blood. (It's a good thing that what we believe about our brains has very little effect on their actual operation.)


Egyptians threw the brain away during the process of mummification.


Alcmaeon of Croton, a Pythagorean of the 5th century BCE, put his finger on the brain as the seat of intelligence, because he'd traced the optic nerve from the eye to the brain.  Still, with the amount of evidence he had, it was only a guess.


When did the central role of the brain stop being a guess?  I do not know enough history to answer this question, and probably there wasn't any sharp dividing line.  Maybe we could put it at the point where someone traced the anatomy of the nerves, and discovered that severing a nervous connection to the brain blocked movement and sensation?


Even so, that is only a mysterious spirit moving through the nerves.  Who's to say that wood and water, even if they lack the little threads found in human anatomy, might not carry the same mysterious spirit by different means?


I've spent some time online trying to track down the exact moment when someone noticed the vastly tangled internal structure of the brain's neurons, and said, \"Hey, I bet all this giant tangle is doing complex information-processing!\"  I haven't had much luck.  (It's not Camillo Golgi - the tangledness of the circuitry was known before Golgi.)  Maybe there was never a watershed moment there, either.


But the discovery of that tangledness, together with Charles Darwin's theory of natural selection and the notion of cognition as computation, is where I would put the gradual beginning of anthropomorphism's descent into being obviously wrong.


It's the point where you can look at a tree, and say:  \"I don't see anything in the tree's biology that's doing complex information-processing.  Nor do I see it in the behavior, and if it's hidden in a way that doesn't affect the tree's behavior, how would a selection pressure for such complex information-processing arise?\"


It's the point where you can look at a river, and say, \"Water doesn't contain patterns replicating with distant heredity and substantial variation subject to iterative selection, so how would a river come to have any pattern so complex and functionally optimized as a brain?\"


It's the point where you can look at an atom, and say:  \"Anger may look simple, but it's not, and there's no room for it to fit in something as simple as an atom - not unless there are whole universes of subparticles inside quarks; and even then, since we've never seen any sign of atomic anger, it wouldn't have any effect on the high-level phenomena we know.\"


It's the point where you can look at a puppy, and say:  \"The puppy's parents may push it to the ground when it does something wrong, but that doesn't mean the puppy is doing moral reasoning.  Our current theories of evolutionary psychology hold that moral reasoning arose as a response to more complex social challenges than that - in their full-fledged human form, our moral adaptations are the result of selection pressures over linguistic arguments about tribal politics.\"


It's the point where you can look at a rock, and say, \"This lacks even the simple search trees embodied in a chess-playing program - where would it get the intentions to want to roll downhill, as Aristotle once thought?\"


It is written:


Zhuangzi and Huizi were strolling along the dam of the Hao Waterfall when Zhuangzi said, \"See how the minnows come out and dart around where they please! That's what fish really enjoy!\"


Huizi said, \"You're not a fish — how do you know what fish enjoy?\"


Zhuangzi said, \"You're not I, so how do you know I don't know what fish enjoy?\"


Now we know.

" } }, { "_id": "7TsACnumtNptFpQcY", "title": "Is valuing life undervaluing it?", "pageUrl": "https://www.lesswrong.com/posts/7TsACnumtNptFpQcY/is-valuing-life-undervaluing-it", "postedAt": "2008-08-16T23:22:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "7TsACnumtNptFpQcY", "html": "

People often object to human life having a value placed on it, explicitly or implicitly. (I’m told there are good reasons, apparently to do with dignity, compassion, holism, souls and me being sick and inhuman, but I must admit they seem incoherent to me – if anyone would like to explain in writing I would be grateful). The pressing question then:


What alternatives are there to placing a finite value on human life?


One could not value human life at all. Ironically, this is what those who try to put a value on it, or assume it has one, are generally suspected of. Any value they give a life can be equated to the value of, say, a really vast number of rolls of toilet paper. Or heaps of SUVs full of McDonalds’ hamburgers and books by Ann Coulter. Thus it’s not enough; if you can put a value on human life you don’t value human life.


\"\"The other alternative looks better: value human life infinitely. There’s probably still conflict with your intuitive morality however. If any amount of human life is infinitely valuable, as long as someone is alive the universe can’t get any better. Why preserve extra lives?


Infinitely valuable human lives should also be protected from anything that might shorten them for lesser aims, such as life. We barter slight risks constantly for the quality of our experience, among other things. Unless you’d like to argue that nachos and car trips are also of infinite value, so can be traded with smidgens of human life, what are you doing out of your protective bubble?


Another alternative is just to not think about it. Hold that lives have a high but finite value, but don’t use this in naughty calculative attempts to maximise welfare! Maintain that it is abhorrent to do so. Uphold lots of arbitrary rules, like respecting people’s dignity and beginning charity at home and having honour and being respectable and doing what your heart tells you. Interestingly, this effectively does make human life worthless; not even worth including in the calculation next to the whims of your personal emotions and the culture at hand.


The only way to value human life is to place value on human life.


For more on how to feel about this, see Philip Tetlock.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "znzybh3HGMXuEuYYv", "title": "Processing people", "pageUrl": "https://www.lesswrong.com/posts/znzybh3HGMXuEuYYv/processing-people", "postedAt": "2008-08-16T02:34:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "znzybh3HGMXuEuYYv", "html": "

Some of my friends think that a random process of deciding who should live or die is more important than the lives of those people, because lives should all be valued equally (and a process can ensure approximately random choice).


For example, this would mean it is better to make sure the life rafts are filled by a random selection of women and men and rich and poor and so on, even if that means that half of them drown while you flip the coin.


If lives should be valued equally, then why is a process of choosing between identically valuable things worth more than even one human life?


\"\"Also, even if you value this process more than another person’s life, why shouldn’t the person who’s life is at stake’s opinion on their relative value come into it? That is, if we are attempting to follow any ethical system other than egoism (of course your preference is of absolute importance if you are trying to be purely self interested). Try out the veil of ignorance!


For other readers, no this isn’t a purely theoretical debate, I’m just not going to tell you what the context is.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "TMtBb7jTECLtWrKM4", "title": "Hot Air Doesn't Disagree", "pageUrl": "https://www.lesswrong.com/posts/TMtBb7jTECLtWrKM4/hot-air-doesn-t-disagree", "postedAt": "2008-08-16T00:42:02.000Z", "baseScore": 15, "voteCount": 13, "commentCount": 45, "url": null, "contents": { "documentId": "TMtBb7jTECLtWrKM4", "html": "

Followup to: The Bedrock of Morality, Abstracted Idealized Dynamics

Tim Tyler comments:

Do the fox and the rabbit disagree? It seems reasonable to say that they do if they meet: the rabbit thinks it should be eating grass, and the fox thinks the rabbit should be in the fox's stomach. They may argue passionately about the rabbit's fate - and even stoop to violence.

Boy, you know, when you think about it, Nature turns out to be just full of disagreement.

Rocks, for example, fall down - so they agree with us, who also fall when pushed off a cliff - whereas hot air rises into the air, unlike humans.


I wonder why hot air disagrees with us so dramatically.  I wonder what sort of moral justifications it might have for behaving as it does; and how long it will take to argue this out.  So far, hot air has not been forthcoming in terms of moral justifications.


Physical systems that behave differently from you usually do not have factual or moral disagreements with you.  Only a highly specialized subset of systems, when they do something different from you, should lead you to infer their explicit internal representation of moral arguments that could potentially lead you to change your mind about what you should do.

Attributing moral disagreements to rabbits or foxes is sheer anthropomorphism, in the full technical sense of the term - like supposing that lightning bolts are thrown by thunder gods, or that trees have spirits that can be insulted by human sexual practices and lead them to withhold their fruit.


The rabbit does not think it should be eating grass.  If questioned the rabbit will not say, "I enjoy eating grass, and it is good in general for agents to do what they enjoy, therefore I should eat grass."  Now you might invent an argument like that; but the rabbit's actual behavior has absolutely no causal connection to any cognitive system that processes such arguments.  The fact that the rabbit eats grass, should not lead you to infer the explicit cognitive representation of, nor even infer the probable theoretical existence of, the sort of arguments that humans have over what they should do.  The rabbit is just eating grass, like a rock rolls downhill and like hot air rises.


To think that the rabbit contains a little circuit that ponders morality and then finally does what it thinks it should do, and that the rabbit has arrived at the belief that it should eat grass, and that this is the explanation of why the rabbit is eating grass - from which we might infer that, if the rabbit is correct, perhaps humans should do the same thing - this is all as ridiculous as thinking that the rock wants to be at the bottom of the hill, concludes that it can reach the bottom of the hill by rolling, and therefore decides to exert a mysterious motive force on itself.  Aristotle thought that, but there is a reason why Aristotelians don't teach modern physics courses.


The fox does not argue that it is smarter than the rabbit and so deserves to live at the rabbit's expense.  To think that the fox is moralizing about why it should eat the rabbit, and this is why the fox eats the rabbit - from which we might infer that we as humans, hearing the fox out, would see its arguments as being in direct conflict with those of the rabbit, and we would have to judge between them - this is as ridiculous as thinking (as a modern human being) that lightning bolts are thrown by thunder gods in a state of inferrable anger.


Yes, foxes and rabbits are more complex creatures than rocks and hot air, but they do not process moral arguments.  They are not that complex in that particular way.


Foxes try to eat rabbits and rabbits try to escape foxes, and from this there is nothing more to be inferred than from rocks falling and hot air rising, or water quenching fire and fire evaporating water.  They are not arguing.


This anthropomorphism of presuming that every system does what it does because of a belief about what it should do, is directly responsible for the belief that Pebblesorters create prime-numbered heaps of pebbles because they think that is what everyone should do.  They don't.  Systems whose behavior indicates something about what agents should do, are rare, and the Pebblesorters are not such systems.  They don't care about sentient life at all.  They just sort pebbles into prime-numbered heaps.

" } }, { "_id": "RBszS2jwGM4oghXW4", "title": "The Bedrock of Morality: Arbitrary?", "pageUrl": "https://www.lesswrong.com/posts/RBszS2jwGM4oghXW4/the-bedrock-of-morality-arbitrary", "postedAt": "2008-08-14T22:00:57.000Z", "baseScore": 25, "voteCount": 31, "commentCount": 119, "url": null, "contents": { "documentId": "RBszS2jwGM4oghXW4", "html": "

Followup to: Is Fairness Arbitrary?, Joy in the Merely Good, Sorting Pebbles Into Correct Heaps

Yesterday, I presented the idea that when only five people are present, having just stumbled across a pie in the woods (a naturally growing pie, that just popped out of the ground) then it is fair to give Dennis only 1/5th of this pie, even if Dennis persistently claims that it is fair for him to get the whole thing.  Furthermore, it is meta-fair to follow such a symmetrical division procedure, even if Dennis insists that he ought to dictate the division procedure.


Fair, meta-fair, or meta-meta-fair, there is no level of fairness where you're obliged to concede everything to Dennis, without reciprocation or compensation, just because he demands it.


Which goes to say that fairness has a meaning beyond \"that which everyone can be convinced is 'fair'\".  This is an empty proposition, isomorphic to \"Xyblz is that which everyone can be convinced is 'xyblz'\".  There must be some specific thing of which people are being convinced; and once you identify that thing, it has a meaning beyond agreements and convincing.

\n

You're not introducing something arbitrary, something un-fair, in refusing to concede everything to Dennis.  You are being fair, and meta-fair and meta-meta-fair.  As far up as you go, there's no level that calls for unconditional surrender.  The stars do not judge between you and Dennis—but it is baked into the very question that is asked, when you ask, \"What is fair?\" as opposed to \"What is xyblz?\"

\n

Ah, but why should you be fair, rather than xyblz?  Let us concede that Dennis cannot validly persuade us, on any level, that it is fair for him to dictate terms and give himself the whole pie; but perhaps he could argue about whether we should be fair at all?

\n

The hidden agenda of the whole discussion of fairness, of course, is that good-ness and right-ness and should-ness, ground out similarly to fairness.

\n

\n

Natural selection optimizes for inclusive genetic fitness.  This is not a disagreement with humans about what is good.  It is simply that natural selection does not do what is good: it optimizes for inclusive genetic fitness.

\n

Well, since some optimization processes optimize for inclusive genetic fitness, instead of what is good, which should we do, ourselves?

\n

I know my answer to this question.  It has something to do with natural selection being a terribly wasteful and stupid and inefficient process.  It has something to do with elephants starving to death in their old age when they wear out their last set of teeth.  It has something to do with natural selection never choosing a single act of mercy, of grace, even when it would cost its purpose nothing: not auto-anesthetizing a wounded and dying gazelle, when its pain no longer serves even the adaptive purpose that first created pain.  Evolution had to happen sometime in the history of the universe, because that's the only way that intelligence could first come into being, without brains to make brains; but now that era is over, and good riddance.

\n

But most of all—why on Earth would any human being think that one ought to optimize inclusive genetic fitness, rather than what is good?  What is even the appeal of this, morally or otherwise?  At all?  I know people who claim to think like this, and I wonder what wrong turn they made in their cognitive history, and I wonder how to get them to snap out of it.

\n

When we take a step back from fairness, and ask if we should be fair, the answer may not always be yes.  Maybe sometimes we should be merciful.  But if you ask if it is meta-fair to be fair, the answer will generally be yes.  Even if someone else wants you to be unfair in their favor, or claims to disagree about what is \"fair\", it will still generally be meta-fair to be fair, even if you can't make the Other agree.  By the same token, if you ask if we meta-should do what we should, rather than something else, the answer is yes.  Even if some other agent or optimization process does not do what is right, that doesn't change what is meta-right.

\n

And this is not \"arbitrary\" in the sense of rolling dice, not \"arbitrary\" in the sense that justification is expected and then not found.  The accusations that I level against evolution are not merely pulled from a hat; they are expressions of morality as I understand it.  They are merely moral, and there is nothing mere about that.

\n

In \"Arbitrary\" I finished by saying:

\n
\n

The upshot is that differently structured minds may well label different propositions with their analogues of the internal label \"arbitrary\"—though only one of these labels is what you mean when you say \"arbitrary\", so you and these other agents do not really have a disagreement.

\n
\n

This was to help shake people loose of the idea that if any two possible minds can say or do different things, then it must all be arbitrary.  Different minds may have different ideas of what's \"arbitrary\", so clearly this whole business of \"arbitrariness\" is arbitrary, and we should ignore it.  After all, Sinned (the anti-Dennis) just always says \"Morality isn't arbitrary!\" no matter how you try to persuade her otherwise, so clearly you're just being arbitrary in saying that morality is arbitrary.

\n

From the perspective of a human, saying that one should sort pebbles into prime-numbered heaps is arbitrary—it's the sort of act you'd expect to come with a justification attached, but there isn't any justification.

\n

From the perspective of a Pebblesorter, saying that one p-should scatter a heap of 38 pebbles into two heaps of 19 pebbles is not p-arbitrary at all—it's the most p-important thing in the world, and fully p-justified by the intuitively obvious fact that a heap of 19 pebbles is p-correct and a heap of 38 pebbles is not.
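
\n

The text never spells it out here, but the Pebblesorters' standard of p-correctness behaves exactly like primality: 19 is prime, and 38 = 2 × 19 is not.  A minimal sketch of that reading in Python; the predicate is_prime is my gloss on p-correctness, not anything a Pebblesorter could articulate:

\n

    def is_prime(n):
        # Trial division: slow, but correct for heap-sized numbers.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    # A heap of 19 pebbles is p-correct; a heap of 38 = 2 * 19 is not.
    assert is_prime(19) and not is_prime(38)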

\n

So which perspective should we adopt?  I answer that I see no reason at all why I should start sorting pebble-heaps.  It strikes me as a completely pointless activity.  Better to engage in art, or music, or science, or heck, better to connive political plots of terrifying dark elegance, than to sort pebbles into prime-numbered heaps.  A galaxy transformed into pebbles and sorted into prime-numbered heaps would be just plain boring.

\n

The Pebblesorters, of course, would only reason that music is p-pointless because it doesn't help you sort pebbles into heaps; the human activity of humor is not only p-pointless but just plain p-bizarre and p-incomprehensible; and most of all, the human vision of a galaxy in which agents are running around experiencing positive reinforcement but not sorting any pebbles, is a vision of an utterly p-arbitrary galaxy devoid of p-purpose.  The Pebblesorters would gladly sacrifice their lives to create a P-Friendly AI that sorted the galaxy on their behalf; it would be the most p-profound statement they could make about the p-meaning of their lives.

\n

So which of these two perspectives do I choose?  The human one, of course; not because it is the human one, but because it is right.  I do not know perfectly what is right, but neither can I plead entire ignorance.

\n

And the Pebblesorters, who simply are not built to do what is right, choose the Pebblesorting perspective: not merely because it is theirs, or because they think they can get away with being p-arbitrary, but because that is what is p-right.

\n

And in fact, both we and the Pebblesorters can agree on all these points.  We can agree that sorting pebbles into prime-numbered heaps is arbitrary and unjustified, but not p-arbitrary or p-unjustified; that it is the sort of thing an agent p-should do, but not the sort of thing an agent should do.

\n

I fully expect that even if there is other life in the universe only a few trillions of lightyears away (I don't think it's local, or we would have seen it by now), that we humans are the only creatures for a long long way indeed who are built to do what is right.  That may be a moral miracle, but it is not a causal miracle.

\n

There may be some other evolved races, a sizable fraction perhaps, maybe even a majority, who do some right things.  Our executing adaptation of compassion is not so far removed from the game theory that gave it birth; it might be a common adaptation.  But laughter, I suspect, may be rarer by far than mercy.  What would a galactic civilization be like, if it had sympathy, but never a moment of humor?  A little more boring, perhaps, by our standards.

\n

This humanity that we find ourselves in, is a great gift.  It may not be a great p-gift, but who cares about p-gifts?

\n

So I really must deny the charges of moral relativism:  I don't think that human morality is arbitrary at all, and I would expect any logically omniscient reasoner to agree with me on that.  We are better than the Pebblesorters, because we care about sentient lives, and the Pebblesorters don't.  Just as the Pebblesorters are p-better than us, because they care about pebble heaps, and we don't.  Human morality is p-arbitrary, but who cares?  P-arbitrariness is arbitrary.

\n

You've just got to avoid thinking that the words \"better\" and \"p-better\", or \"moral\" and \"p-moral\", are talking about the same thing—because then you might think that the Pebblesorters were coming to different conclusions than us about the same thing—and then you might be tempted to think that our own morals were arbitrary.  Which, of course, they're not.

\n

Yes, I really truly do believe that humanity is better than the Pebblesorters!  I am not being sarcastic, I really do believe that.  I am not playing games by redefining \"good\" or \"arbitrary\", I think I mean the same thing by those terms as everyone else.  When you understand that I am genuinely sincere about that, you will understand my metaethics.  I really don't consider myself a moral relativist—not even in the slightest!

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"You Provably Can't Trust Yourself\"

\n

Previous post: \"Is Fairness Arbitrary?\"

" } }, { "_id": "saw8WAML4NEaJ2Wmz", "title": "Is Fairness Arbitrary?", "pageUrl": "https://www.lesswrong.com/posts/saw8WAML4NEaJ2Wmz/is-fairness-arbitrary", "postedAt": "2008-08-14T01:54:59.000Z", "baseScore": 9, "voteCount": 13, "commentCount": 37, "url": null, "contents": { "documentId": "saw8WAML4NEaJ2Wmz", "html": "

Followup to: The Bedrock of Fairness

\n

In \"The Bedrock of Fairness\", Xannon, Yancy, and Zaire argue over how to split up a pie that they found in the woods.  Yancy thinks that 1/3 each is fair; Zaire demands half; and Xannon tries to compromise.

\n

Dividing a pie fairly isn't as trivial a problem as it may sound. What if people have different preferences for crust, filling, and topping?  Should they each start with a third, and trade voluntarily? But then they have conflicts of interest over how to divide the surplus utility generated by trading...

\n

But I would say that \"half for Zaire\" surely isn't fair.

\n

I confess that I originally wrote Zaire as a foil—this is clearer in an earlier version of the dialog, where Zaire, named Dennis, demands the whole pie—and was surprised to find some of my readers taking Zaire's claim seriously, perhaps because I had Zaire say \"I'm hungry.\"

\n

Well, okay; I believe that when I write a dialogue, the reader has a right to their own interpretation.  But I did intend that dialogue to illustrate a particular point:

\n

You can argue about how to divide up the pie, or even argue about how to argue about dividing up the pie, you can argue over what is fair... but there finally comes a point when you hit bedrock.  If Dennis says, \"No, the fair way to argue is that I get to dictate everything, and I now hereby dictate that I get the whole pie,\" there's nothing left to say but \"Sorry, that's just not what fairness is—you can try to take the pie and I can try to stop you, but you can't convince me that that is fair.\"

\n

\n

A \"fair division\" is not the same as \"a division that compels everyone to admit that the division is fair\".  Dennis can always just refuse to agree, after all.

\n

But more to the point, when you encounter a pie in the forest, in the company of friends, and you try to be fair, there's a certain particular thing you're trying to do—the term \"fair\" is not perfectly empty, it cannot attach to just anything. Metaphorically speaking, \"fair\" is not a hypothesis equally compatible with any outcome.

\n

Fairness expresses notions of concern for the other agents who also want the pie; a goal to take their goals into account.  It's a separate question whether that concern is pure altruism, or not wanting to make them angry enough to fight.  Fairness expresses notions of symmetry, equal treatment—which might be a terminal value unto you, or just an attempt to find a convenient meeting-point to avoid an outright battle.

\n

Is it fair to take into account what other people think is \"fair\", and not just what you think is \"fair\"?

\n

The obvious reason to care what other people think is \"fair\", is if they're being moved by similar considerations, yet arriving at different conclusions.  If you think that the Other's word \"fair\" means what you think of as fair, and you think the Other is being honest about what they think, then you ought to pay attention just by way of fulfilling your own desire to be fair.  It is like paying attention to an honest person who means the same thing you do by \"multiplication\", who says that 19 * 103 might not be 1947.  The attention you pay to that suggestion, is not a favor to the other person; it is something you do if you want to get the multiplication right—they're doing you a favor by correcting you.

\n

Politics is more subject to bias than multiplication.  And you might think that the Other's reasoning is corrupted by self-interest, while yours is as pure as Antarctic snow.  But to the extent that you credit the Other's self-honesty, or doubt your own, you would do well to hear what the Other has to say—if you wish to be fair.

\n

The second notion of why we might pay attention to what someone else thinks is \"fair\", is more complicated: it is the notion of applying fairness to its own quotation, that is, fairly debating what is \"fair\".  In complicated politics you may have to negotiate a negotiating procedure.  Surely it wouldn't be fair if Dennis just got to say, \"The fair resolution procedure is that I get to decide what's fair.\"  So why should you get to just decide what's fair, then?

\n

Here the attention you pay to the other person's beliefs about \"fairness\", is a favor that you do to them, a concession that you expect to be met with a return concession.

\n

But when you set out to fairly discuss what is \"fair\" (note the strange loop through the meta-level), that doesn't put everything up for grabs.  A zeroth-order fair division of a pie doesn't involve giving away the whole pie to Dennis—just giving identical portions to all.  Even though Dennis wants the whole thing, and asks for the whole thing, the zeroth-order fair division only gives Dennis a symmetrical portion to everyone else's.  Similarly, a first-order fair attempt to resolve a dispute about what is \"fair\", doesn't involve conceding everything to the Other's viewpoint without reciprocation.  That wouldn't be fair. Why give everything away to the Other, if you receive nothing in return?  Why give Dennis the whole first-order pie?

\n

On some level, then, there has to be a possible demand which would be too great—a demand exceeding what may be fairly requested of you.  This is part of the content of fairness; it is part of what you are setting out to do, when you set out to be fair.  Admittedly, one should not be too trigger-happy about saying \"That's too much!\"  We human beings tend to overestimate the concessions we have made, and underestimate the concessions that others have made to us; we tend to underadjust for the Other's point of view... even so, if nothing is \"too much\", then you're not engaging in fairness.

\n

Fairness might call on you to hear out what the Other has to say; fairness may call on you to exert an effort to really truly consider the Other's point of view—but there is a limit to this, as there is a limit to all fair concessions.  If all Dennis can say is \"I want the whole pie!\" over and over, there's a limit to how long fairness requires you to ponder this argument.

\n

You reach the bedrock of fairness at the point where, no matter who questions whether the division is fair, no matter who refuses to be persuaded, no matter who offers further objections, and regardless of your awareness that you yourself may be biased... Dennis still isn't getting the whole pie.  If there are others present who are also trying to be fair, and Dennis is not already dictator, they will probably back you rather than Dennis—this is one sign that you can trust the line you've drawn, that it really is time to say \"Enough!\"

\n

If you and the others present get together and give Dennis 1/Nth of the pie—or even if you happen to have the upper hand, and you unilaterally give Dennis and yourself and all others each 1/Nth—then you are not being unfair on any level; there is no meta-level of fairness where Dennis gets the whole pie. 

\n

Now I'm sure there are some in the audience who will say, \"You and perhaps some others, are merely doing things your way, rather than Dennis's.\"  On the contrary:  We are merely being fair.  It so happens that this fairness is our way, as all acts must be someone's way to happen in the real universe.  But what we are merely doing, happens to be, being fair.  And there is no level on which it is unfair, because there is no level on which fairness requires unlimited unreciprocated surrender.

\n

I don't believe in unchangeable bedrock—I believe in self-modifying bedrock.  But I do believe in bedrock, in the sense that everything has to start somewhere.  It can be turtles all the way up, but not turtles all the way down.

\n

You cannot define fairness entirely in terms of \"That which everyone agrees is 'fair'.\"  This isn't just nonterminating.  It isn't just ill-defined if Dennis doesn't believe that 'fair' is \"that which everyone agrees is 'fair'\".  It's actually entirely empty, like the English sentence \"This sentence is true.\"  Is that sentence true?  Is it false?  It is neither; it doesn't mean anything because it is entirely wrapped up in itself, with no tentacle of relation to reality.  If you're going to argue what is fair, there has to be something you're arguing about, some structure that is baked into the question.

\n

Which is to say that you can't turn \"fairness\" into an ideal label of pure emptiness, defined only by the mysterious compulsion of every possible agent to admit \"This is what is 'fair'.\"  Forget the case against universally compelling arguments—just consider the definition itself:  It has absolutely no content, no external references; it is not just underspecified, but entirely unspecified.

\n

But as soon as you introduce any content into the label \"fairness\" that isn't phrased purely in terms of all possible minds applying the label, then you have a foundation on which to stand.  It may be self-modifying bedrock, rather than immovable bedrock.  But it is still a place to start.  A place from which to say:  \"Regardless of what Dennis says, giving him the whole pie isn't fair, because fairness is not defined entirely and only in terms of Dennis's agreement.\"

\n

And you aren't being \"arbitrary\", either—though the intuitive meaning of that word has never seemed entirely well-specified to me; is a tree arbitrary, or a leaf?  But it sounds like the accusation is of pulling some answer out of thin air—which you're not doing; you're giving the fair answer, not an answer pulled out of thin air.  What about when you jump up a meta-level, and look at Dennis's wanting to do it one way, and your wanting a different resolution?  Then it's still not arbitrary, because you aren't being unfair on that meta-level, either.  The answer you pull out is not merely an arbitrary answer you invented, but a fair answer.  You aren't merely doing it your way; the way that you are doing it, is the fair way.

\n

You can ask \"But why should you be fair?\"—and that's a separate question, which we'll go into tomorrow.  But giving Dennis 1/Nth, we can at least say, is not merely and only arbitrary from the perspective of fair-vs.-unfair.  Even if Dennis keeps saying \"It isn't fair!\" and even if Dennis also disputes the 1st-order, 2nd-order, Nth-order meta-fairnesses.  Giving N people each 1/Nth is nonetheless a fair sort of thing to do, and whether or not we should be fair is then a separate question.

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"The Bedrock of Morality: Arbitrary?\"

\n

Previous post: \"'Arbitrary'\"

" } }, { "_id": "HacgrDxJx3Xr7uwCR", "title": "\"Arbitrary\"", "pageUrl": "https://www.lesswrong.com/posts/HacgrDxJx3Xr7uwCR/arbitrary", "postedAt": "2008-08-12T17:55:22.000Z", "baseScore": 19, "voteCount": 17, "commentCount": 14, "url": null, "contents": { "documentId": "HacgrDxJx3Xr7uwCR", "html": "

Followup to: Inseparably Right; or, Joy in the Merely Good, Sorting Pebbles Into Correct Heaps

\n

One of the experiences of following the Way is that, from time to time, you notice a new word that you have been using without really understanding.  And you say:  \"What does this word, 'X', really mean?\"

\n

Perhaps 'X' is 'error', for example.  And those who have not yet realized the importance of this aspect of the Way, may reply:  \"Huh? What do you mean?  Everyone knows what an 'error' is; it's when you get something wrong, when you make a mistake.\"  And you reply, \"But those are only synonyms; what can the term 'error' mean in a universe where particles only ever do what they do?\"

\n

It's not meant to be a rhetorical question; you're meant to go out and answer it.  One of the primary tools for doing so is Rationalist's Taboo, when you try to speak without using the word or its synonyms—to replace the symbol with the substance.

\n

So I ask you therefore, what is this word \"arbitrary\"?  Is a rock arbitrary?  A leaf?  A human?

\n

How about sorting pebbles into prime-numbered heaps?  How about maximizing inclusive genetic fitness?  How about dragging a child off the train tracks?

\n

How can I tell exactly which things are arbitrary, and which not, in this universe where particles only ever do what they do?  Can you tell me exactly what property is being discriminated, without using the word \"arbitrary\" or any direct synonyms?  Can you open up the box of \"arbitrary\", this label that your mind assigns to some things and not others, and tell me what kind of algorithm is at work here?

\n

\n

Having pondered this issue myself, I offer to you the following proposal:

\n
\n

A piece of cognitive content feels \"arbitrary\" if it is the kind of cognitive content that we expect to come with attached justifications, and those justifications are not present in our mind.

\n
\n

You'll note that I've performed the standard operation for guaranteeing that a potentially confusing question has a real answer: I substituted the question, \"How does my brain label things 'arbitrary'?\" for \"What is this mysterious property of arbitrariness?\" This is not necessarily a sleight-of-hand, since to explain something is not the same as explaining it away.

\n

In this case, for nearly all everyday purposes, I would make free to proceed from \"arbitrary\" to arbitrary.  If someone says to me, \"I believe that the probability of finding life on Mars is 6.203 × 10^-23 to four significant digits,\" I would make free to respond, \"That sounds like a rather arbitrary number,\" not \"My brain has attached the subjective arbitrariness-label to its representation of the number in your belief.\"

\n

So as it turned out in this case, having answered the question \"What is 'arbitrary'?\" turns out not to affect the way I use the word 'arbitrary'; I am just more aware of what the arbitrariness-sensation indicates.  I am aware that when I say, \"6.203 × 10^-23 sounds like an arbitrary number\", I am indicating that I would expect some justification for assigning that particular number, and I haven't heard it.  This also explains why the precision is important—why I would question that particular number, but not someone saying \"Less than 1%\".  In the latter case, I have some idea what might justify such a statement; but giving a very precise figure implies that you have some kind of information I don't know about, either that or you're being silly.

\n

\"Ah,\" you say, \"but what do you mean by 'justification'?  Haven't you failed to make any progress, and just passed the recursive buck to another black box?\"

\n

Actually, no; I told you that \"arbitrariness\" was a sensation produced by the absence of an expected X.  Even if I don't tell you anything more about that X, you've learned something about the cognitive algorithm—opened up the original black box, and taken out two gears and a smaller black box.

\n

But yes, it makes sense to continue onward to discuss this mysterious notion of \"justification\".

\n

Suppose I told you that \"justification\" is what tells you whether a belief is reasonable.  Would this tell you anything?  No, because there are no extra gears that have been factored out, just a direct invocation of \"reasonable\"-ness.

\n

Okay, then suppose instead I tell you, \"Your mind labels X as a justification for Y, whenever adding 'X' to the pool of cognitive content would result in 'Y' being added to the pool, or increasing the intensity associated with 'Y'.\"  How about that?

\n

\"Enough of this buck-passing tomfoolery!\" you may be tempted to cry.  But wait; this really does factor out another couple of gears.  We have the idea that different propositions, to the extent they are held, can create each other in the mind, or increase the felt level of intensity—credence for beliefs, desire for acts or goals.  You may have already known this, more or less, but stating it aloud is still progress.

\n

This may not provide much satisfaction to someone inquiring into morals.  But then someone inquiring into morals may well do better to just think moral thoughts, rather than thinking about metaethics or reductionism.

\n

On the other hand, if you were building a Friendly AI, and trying to explain to that FAI what a human being means by the term \"justification\", then the statement I just issued might help the FAI narrow it down.  With some additional guidance, the FAI might be able to figure out where to look, in an empirical model of a human, for representations of the sort of specific moral content that a human inquirer-into-morals would be interested in—what specifically counts or doesn't count as a justification, in the eyes of that human.  And this being the case, you might not have to explain the specifics exactly correctly at system boot time; the FAI knows how to find out the rest on its own.  My inquiries into metaethics are not directed toward the same purposes as those of standard philosophy.

\n

Now of course you may reply, \"Then the FAI finds out what the human thinks is a \"justification\".  But is that formulation of 'justification', really justified?\"  But by this time, I hope, you can predict my answer to that sort of question, whether or not you agree.  I answer that we have just witnessed a strange loop through the meta-level, in which you use justification-as-justification to evaluate the quoted form of justification-as-cognitive-algorithm, which algorithm may, perhaps, happen to be your own, &c.  And that the feeling of \"justification\" cannot be coherently detached from the specific algorithm we use to decide justification in particular cases; that there is no pure empty essence of justification that will persuade any optimization process regardless of its algorithm, &c.

\n

And the upshot is that differently structured minds may well label different propositions with their analogues of the internal label \"arbitrary\"—though only one of these labels is what you mean when you say \"arbitrary\", so you and these other agents do not really have a disagreement.

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Is Fairness Arbitrary?\"

\n

Previous post: \"Abstracted Idealized Dynamics\"

" } }, { "_id": "9KacKm5yBv27rxWnJ", "title": "Abstracted Idealized Dynamics", "pageUrl": "https://www.lesswrong.com/posts/9KacKm5yBv27rxWnJ/abstracted-idealized-dynamics", "postedAt": "2008-08-12T01:00:00.000Z", "baseScore": 38, "voteCount": 26, "commentCount": 25, "url": null, "contents": { "documentId": "9KacKm5yBv27rxWnJ", "html": "

Followup to: Morality as Fixed Computation

\n

I keep trying to describe morality as a \"computation\", but people don't stand up and say \"Aha!\"

\n

Pondering the surprising inferential distances that seem to be at work here, it occurs to me that when I say \"computation\", some of my listeners may not hear the Word of Power that I thought I was emitting; but, rather, may think of some complicated boring unimportant thing like Microsoft Word.

\n

Maybe I should have said that morality is an abstracted idealized dynamic.  This might not have meant anything to start with, but at least it wouldn't sound like I was describing Microsoft Word.

\n

How, oh how, am I to describe the awesome import of this concept, \"computation\"?

\n

Perhaps I can display the inner nature of computation, in its most general form, by showing how that inner nature manifests in something that seems very unlike Microsoft Word—namely, morality.

\n

Consider certain features we might wish to ascribe to that-which-we-call \"morality\", or \"should\" or \"right\" or \"good\":

\n

• It seems that we sometimes think about morality in our armchairs, without further peeking at the state of the outside world, and arrive at some previously unknown conclusion.

\n

Someone sees a slave being whipped, and it doesn't occur to them right away that slavery is wrong.  But they go home and think about it, and imagine themselves in the slave's place, and finally think, \"No.\"

\n

Can you think of anywhere else that something like this happens?

\n

\n

Suppose I tell you that I am making a rectangle of pebbles.  You look at the rectangle, and count 19 pebbles on one side and 103 pebbles on the other side.  You don't know right away how many pebbles there are.  But you go home to your living room, and draw the blinds, and sit in your armchair and think; and without further looking at the physical array, you come to the conclusion that the rectangle contains 1957 pebbles.

\n

Now, I'm not going to say the word \"computation\".  But it seems like that-which-is \"morality\" should have the property of latent development of answers—that you may not know right away, everything that you have sufficient in-principle information to know.  All the ingredients are present, but it takes additional time to bake the pie.

\n

You can specify a Turing machine of 6 states and 2 symbols that unfolds into a string of 4.6 × 10^1439 1s after 2.5 × 10^2879 steps.  A machine I could describe aloud in ten seconds, runs longer and produces a larger state than the whole observed universe to date.

\n

When you distinguish between the program description and the program's executing state, between the process specification and the final outcome, between the question and the answer, you can see why even certainty about a program description does not imply human certainty about the executing program's outcome.  See also Artificial Addition on the difference between a compact specification versus a flat list of outputs.
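
\n

The distinction is easy to exhibit in code.  Below is a minimal Turing machine simulator in Python; I have not reproduced the 6-state champion's transition table, so the sketch instead runs the classic 2-state, 2-symbol busy beaver, whose compact description unfolds into four 1s over six steps:

\n

    def run(delta, state='A', halt='H', max_steps=10**6):
        # Simulate a Turing machine given as a transition table:
        # delta[(state, symbol)] = (write, move, next_state).
        tape, head, steps = {}, 0, 0
        while state != halt and steps < max_steps:
            write, move, state = delta[(state, tape.get(head, 0))]
            tape[head] = write
            head += 1 if move == 'R' else -1
            steps += 1
        return sum(tape.values()), steps

    # The classic 2-state, 2-symbol busy beaver: 4 ones in 6 steps.
    bb2 = {('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
           ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H')}
    print(run(bb2))  # (4, 6)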

\n

Morality, likewise, is something that unfolds, through arguments, through discovery, through thinking; from a bounded set of intuitions and beliefs that animate our initial states, to a potentially much larger set of specific moral judgments we may have to make over the course of our lifetimes.

\n

• When two human beings both think about the same moral question, even in a case where they both start out uncertain of the answer, it is not unknown for them to come to the same conclusion.  It seems to happen more often than chance alone would allow—though the biased focus of reporting and memory is on the shouting and the arguments.  And this is so, even if both humans remain in their armchairs and do not peek out the living-room blinds while thinking.

\n

Where else does this happen?  It happens when trying to guess the number of pebbles in a rectangle of sides 19 and 103.  Now this does not prove by Greek analogy that morality is multiplication.  If A has property X and B has property X it does not follow that A is B.  But it seems that morality ought to have the property of expected agreement about unknown latent answers, which, please note, generally implies that similar questions are being asked in different places.

\n

This is part of what is conveyed by the Word of Power, \"computation\": the notion of similar questions being asked in different places and having similar answers.  Or as we might say in the business, the same computation can have multiple instantiations.

\n

If we know the structure of calculator 1 and calculator 2, we can decide that they are \"asking the same question\" and that we ought to see the \"same result\" flashing on the screen of calculator 1 and calculator 2 after pressing the Enter key.  We decide this in advance of seeing the actual results, which is what makes the concept of \"computation\" predictively useful.

\n

And in fact, we can make this deduction even without knowing the exact circuit diagrams of calculators 1 and 2, so long as we're told that the circuit diagrams are the same.

\n

And then when we see the result \"1957\" flash on the screen of calculator 1, we know that the same \"1957\" can be expected to flash on calculator 2, and we even expect to count up 1957 pebbles in the array of 19 by 103.

\n

A hundred calculators, performing the same multiplication in a hundred different ways, can be expected to arrive at the same answer—and this is not a vacuous expectation adduced after seeing similar answers.  We can form the expectation in advance of seeing the actual answer.

\n

Now this does not show that morality is in fact a little electronic calculator.  But it highlights the notion of something that factors out of different physical phenomena in different physical places, even phenomena as physically different as a calculator and an array of pebbles—a common answer to a common question.  (Where is this factored-out thing?  Is there an Ideal Multiplication Table written on a stone tablet somewhere outside the universe? But we are not concerned with that for now.)

\n

Seeing that one calculator outputs \"1957\", we infer that the answer—the abstracted answer—is 1957; and from there we make our predictions of what to see on all the other calculator screens, and what to see in the array of pebbles.

\n

So that-which-we-name-morality seems to have the further properties of agreement about developed latent answers, which we may as well think of in terms of abstract answers; and note that such agreement is unlikely in the absence of similar questions.

\n

• We sometimes look back on our own past moral judgments, and say \"Oops!\"  E.g., \"Oops!  Maybe in retrospect I shouldn't have killed all those guys when I was a teenager.\"

\n

So by now it seems easy to extend the analogy, and say:  \"Well, maybe a cosmic ray hits one of the transistors in the calculator and it says '1959' instead of 1957—that's an error.\"

\n

But this notion of \"error\", like the notion of \"computation\" itself, is more subtle than it appears.

\n

Calculator Q says '1959' and calculator X says '1957'.  Who says that calculator Q is wrong, and calculator X is right?  Why not say that calculator X is wrong and calculator Q is right?  Why not just say, \"the results are different\"?

\n

\"Well,\" you say, drawing on your store of common sense, \"if it was just those two calculators, I wouldn't know for sure which was right.  But here I've got nine other calculators that all say '1957', so it certainly seems probable that 1957 is the correct answer.\"

\n

What's this business about \"correct\"?  Why not just say \"different\"?

\n

\"Because if I have to predict the outcome of any other calculators that compute 19 x 103, or the number of pebbles in a 19 x 103 array, I'll predict 1957—or whatever observable outcome corresponds to the abstract number 1957.\"

\n

So perhaps 19 x 103 = 1957 only most of the time.  Why call the answer 1957 the correct one, rather than the mere fad among calculators, the majority vote?

\n

If I've got a hundred calculators, all of them rather error-prone—say a 10% probability of error—then there is no one calculator I can point to and say, \"This is the standard!\"  I might pick a calculator that would happen, on this occasion, to vote with ten other calculators rather than ninety other calculators.  This is why I have to idealize the answer, to talk about this ethereal thing that is not associated with any particular physical process known to me—not even arithmetic done in my own head, which can also be \"incorrect\".

\n

It is this ethereal process, this idealized question, to which we compare the results of any one particular calculator, and say that the result was \"right\" or \"wrong\".

\n

But how can we obtain information about this perfect and un-physical answer, when all that we can ever observe, are merely physical phenomena?  Even doing \"mental\" arithmetic just tells you about the result in your own, merely physical brain.

\n

\"Well,\" you say, \"the pragmatic answer is that we can obtain extremely strong evidence by looking at the results of a hundred calculators, even if they are only 90% likely to be correct on any one occasion.\"

\n

But wait:  When do electrons or quarks or magnetic fields ever make an \"error\"?  If no individual particle can be mistaken, how can any collection of particles be mistaken?  The concept of an \"error\", though humans may take it for granted, is hardly something that would be mentioned in a fully reductionist view of the universe.

\n

Really, what happens is that we have a certain model in mind of the calculator—the model that we looked over and said, \"This implements 19 * 103\"—and then other physical events caused the calculator to depart from this model, so that the final outcome, while physically lawful, did not correlate with that mysterious abstract thing, and the other physical calculators, in the way we had in mind.  Given our mistaken beliefs about the physical process of the first calculator, we would look at its output '1959', and make mistaken predictions about the other calculators (which do still hew to the model we have in mind).

\n

So \"incorrect\" cashes out, naturalistically, as \"physically departed from the model that I had of it\" or \"physically departed from the idealized question that I had in mind\".  A calculator struck by a cosmic ray, is not 'wrong' in any physical sense, not an unlawful event in the universe; but the outcome is not the answer to the question you had in mind, the question that you believed empirically-falsely the calculator would correspond to.

\n

The calculator's \"incorrect\" answer, one might say, is an answer to a different question than the one you had in mind—it is an empirical fact about the calculator that it implements a different computation.

\n

• The 'right' act or the 'should' option sometimes seem to depend on the state of the physical world.  For example, should you cut the red wire or the green wire to disarm the bomb?

\n

Suppose I show you a long straight line of pebbles, and ask you, \"How many pebbles would I have, if I had a rectangular array of six lines like this one?\"  You start to count, but only get up to 8 when I suddenly blindfold you.

\n

Now you are not completely ignorant of the answer to this question.  You know, for example, that the result will be even, and that it will be greater than 48.  But you can't answer the question until you know how many pebbles were in the original line.

\n

But mark this about the question:  It wasn't a question about anything you could directly see in the world, at that instant.  There was not in fact a rectangular array of pebbles, six on a side.  You could perhaps lay out an array of such pebbles and count the results—but then there are more complicated computations that we could run on the unknown length of a line of pebbles.  For example, we could treat the line length as the start of a Goodstein sequence, and ask whether the sequence halts.  To physically play out this sequence would require many more pebbles than exist in the universe.  Does it make sense to ask if the Goodstein sequence which starts with the length of this line of pebbles, \"would halt\"?  Does it make sense to talk about the answer, in a case like this?

\n

I'd say yes, personally.

\n

But meditate upon the etherealness of the answer—that we talk about idealized abstract processes that never really happen; that we talk about what would happen if the law of the Goodstein sequence came into effect upon this line of pebbles, even though the law of the Goodstein sequence will never physically come into effect.

\n

It is the same sort of etherealness that accompanies the notion of a proposition that 19 * 103 = 1957 which factors out of any particular physical calculator and is not identified with the result of any particular physical calculator.

\n

Only now that etherealness has been mixed with physical things; we talk about the effect of an ethereal operation on a physical thing.  We talk about what would happen if we ran the Goodstein process on the number of pebbles in this line here, which we have not counted—we do not know exactly how many pebbles there are.  There is no tiny little XML tag upon the pebbles that says \"Goodstein halts\", but we still think—or at least I still think—that it makes sense to say of the pebbles that they have the property of their Goodstein sequence terminating.
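
\n

For the curious, the Goodstein rule is short enough to write down.  A sketch assuming the standard definition, which the text takes for granted: write n in hereditary base b, replace every occurrence of b with b + 1, subtract one, and repeat with the next base.

\n

    def bump(n, base):
        # Write n in hereditary base notation, then replace every
        # occurrence of the base with base + 1.
        if n == 0:
            return 0
        total, exponent = 0, 0
        while n > 0:
            digit = n % base
            if digit:
                total += digit * (base + 1) ** bump(exponent, base)
            n //= base
            exponent += 1
        return total

    def goodstein(n, steps):
        # First few terms of the Goodstein sequence starting at n.
        sequence, base = [n], 2
        for _ in range(steps):
            if n == 0:
                break
            n = bump(n, base) - 1
            base += 1
            sequence.append(n)
        return sequence

    print(goodstein(3, 8))  # [3, 3, 3, 2, 1, 0] -- reaches zero quickly
    print(goodstein(4, 5))  # [4, 26, 41, 60, 83, 109] -- grows for a very long while before it halts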

\n

So computations can be, as it were, idealized abstract dynamics—idealized abstract applications of idealized abstract laws, iterated over an imaginary causal-time that could go on for quite a number of steps (as Goodstein sequences often do). 

\n

So when we wonder, \"Should I cut the red wire or the green wire?\", we are not multiplying or simulating the Goodstein process, in particular.  But we are wondering about something that is not physically immanent in the red wires or the green wires themselves; there is no little XML tag on the green wire, saying, \"This is the wire that should be cut.\"

\n

We may not know which wire defuses the bomb, but say, \"Whichever wire does in fact defuse the bomb, that is the wire that should be cut.\"

\n

Still, there are no little XML tags on the wires, and we may not even have any way to look inside the bomb—we may just have to guess, in real life.

\n

So if we try to cash out this notion of a definite wire that should be cut, it's going to come out as...

\n

...some rule that would tell us which wire to cut, if we knew the exact state of the physical world...

\n

...which is to say, some kind of idealized abstract process into which we feed the state of the world as an input, and get back out, \"cut the green wire\" or \"cut the red wire\"...

\n

...which is to say, the output of a computation that would take the world as an input.

\n

• And finally I note that from the twin phenomena of moral agreement and moral error, we can construct the notion of moral disagreement.

\n

This adds nothing to our understanding of \"computation\" as a Word of Power, but it's helpful in putting the pieces together.

\n

Let's say that Bob and Sally are talking about an abstracted idealized dynamic they call \"Enamuh\".

\n

Bob says \"The output of Enamuh is 'Cut the blue wire',\" and Sally says \"The output of Enamuh is 'Cut the brown wire'.\"

\n

Now there are several non-exclusive possibilities:

\n

Either Bob or Sally could have committed an error in applying the rules of Enamuh—they could have done the equivalent of mis-multiplying known inputs.

\n

Either Bob or Sally could be mistaken about some empirical state of affairs upon which Enamuh depends—the wiring of the bomb.

\n

Bob and Sally could be talking about different things when they talk about Enamuh, in which case both of them are committing an error when they refer to Enamuh_Bob and Enamuh_Sally by the same name.  (However, if Enamuh_Bob and Enamuh_Sally differ in the sixth decimal place in a fashion that doesn't change the output about which wire gets cut, Bob and Sally can quite legitimately gloss the difference.)

\n

Or if Enamuh itself is defined by some other abstracted idealized dynamic, a Meta-Enamuh whose output is Enamuh, then either Bob or Sally could be mistaken about Meta-Enamuh in any of the same ways they could be mistaken about Enamuh.  (But in the case of morality, we have an abstracted idealized dynamic that includes a specification of how it, itself, changes.  Morality is self-renormalizing—it is not a guess at the product of some different and outside source.)

\n

To sum up:

\n

• That-which-we-call morality has the property of latent development of answers: all the ingredients are present, but it takes additional time to bake the pie.

\n

• It has the property of expected agreement about developed latent answers, which implies similar questions being asked in different places, with abstract answers that factor out of any particular physical process.

\n

• Any one instantiation can be in error: physically lawful, but departed from the idealized question you had in mind.

\n

• The right answer can depend on the state of the physical world: an idealized abstract process into which you feed the world as an input.

\n

• And from the twin phenomena of moral agreement and moral error, we can construct the notion of moral disagreement.

\n

And so with all that said, I hope that the word \"computation\" has come to convey something other than Microsoft Word.

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"'Arbitrary'\"

\n

Previous post: \"Moral Error and Moral Disagreement\"

" } }, { "_id": "BkkwXtaTf5LvbA6HB", "title": "Moral Error and Moral Disagreement", "pageUrl": "https://www.lesswrong.com/posts/BkkwXtaTf5LvbA6HB/moral-error-and-moral-disagreement", "postedAt": "2008-08-10T23:32:08.000Z", "baseScore": 26, "voteCount": 21, "commentCount": 133, "url": null, "contents": { "documentId": "BkkwXtaTf5LvbA6HB", "html": "

Followup to: Inseparably Right, Sorting Pebbles Into Correct Heaps

\n

Richard Chappell, a pro, writes:

\n
\n

\"When Bob says \"Abortion is wrong\", and Sally says, \"No it isn't\", they are disagreeing with each other.

\n

I don't see how Eliezer can accommodate this. On his account, what Bob asserted is true iff abortion is prohibited by the morality_Bob norms. How can Sally disagree? There's no disputing (we may suppose) that abortion is indeed prohibited by morality_Bob...

\n

Since there is moral disagreement, whatever Eliezer purports to be analysing here, it is not morality.\"

\n
\n

The phenomena of moral disagreement, moral error, and moral progress, on terminal values, are the primary drivers behind my metaethics.  Think of how simple Friendly AI would be if there were no moral disagreements, moral errors, or moral progress!

\n

Richard claims, \"There's no disputing (we may suppose) that abortion is indeed prohibited by morality_Bob.\"

\n

We may not suppose, and there is disputing.  Bob does not have direct, unmediated, veridical access to the output of his own morality.

\n

\n

I tried to describe morality as a \"computation\".  In retrospect, I don't think this is functioning as the Word of Power that I thought I was emitting.

\n

Let us read, for \"computation\", \"idealized abstract dynamic\"—maybe that will be a more comfortable label to apply to morality.

\n

Even so, I would have thought it obvious that computations may be the subjects of mystery and error.  Maybe it's not as obvious outside computer science?

\n

Disagreement has two prerequisites: the possibility of agreement and the possibility of error.  For two people to agree on something, there must be something they are agreeing about, a referent held in common.  And it must be possible for an \"error\" to take place, a conflict between \"P\" in the map and not-P in the territory.  Where these two prerequisites are present, Sally can say to Bob:  \"That thing we were just both talking about—you are in error about it.\"

\n

Richard's objection would seem in the first place to rule out the possibility of moral error, from which he derives the impossibility of moral agreement.

\n

So: does my metaethics rule out moral error?  Is there no disputing that abortion is indeed prohibited by morality_Bob?

\n

This is such a strange idea that I find myself wondering what the heck Richard could be thinking.  My best guess is that Richard, perhaps having not read all the posts in this sequence, is taking my notion of morality_Bob to refer to a flat, static list of valuations explicitly asserted by Bob.  \"Abortion is wrong\" would be on Bob's list, and there would be no disputing that.

\n

But on the contrary, I conceive of morality_Bob as something that unfolds into Bob's morality—like the way one can describe in 6 states and 2 symbols a Turing machine that will write 4.640 × 10^1439 1s to its tape before halting.

\n

So morality_Bob refers to a compact folded specification, and not a flat list of outputs.  But still, how could Bob be wrong about the output of his own morality?

\n

In manifold obvious and non-obvious ways:

\n

Bob could be empirically mistaken about the state of fetuses, perhaps believing fetuses to be aware of the outside world.  (Correcting this might change Bob's instrumental values but not terminal values.)

\n

Bob could have formed his beliefs about what constituted \"personhood\" in the presence of confusion about the nature of consciousness, so that if Bob were fully informed about consciousness, Bob would not have been tempted to talk about \"the beginning of life\" or \"the human kind\" in order to define personhood.  (This changes Bob's expressed terminal values; afterward he will state different general rules about what sort of physical things are ends in themselves.)

\n

So those are the obvious moral errors—instrumental errors driven by empirical mistakes; and erroneous generalizations about terminal values, driven by failure to consider moral arguments that are valid but hard to find in the search space.

\n

Then there are less obvious sources of moral error:  Bob could have a list of mind-influencing considerations that he considers morally valid, and a list of other mind-influencing considerations that Bob considers morally invalid.  Maybe Bob was raised a Christian and now considers that cultural influence to be invalid.  But, unknown to Bob, when he weighs up his values for and against abortion, the influence of his Christian upbringing comes in and distorts his summing of value-weights.  So Bob believes that the output of his current validated moral beliefs is to prohibit abortion, but actually this is a leftover of his childhood and not the output of those beliefs at all.

\n

(Note that Robin Hanson and I seem to disagree, in a case like this, as to exactly what degree we should take Bob's word about what his morals are.)

\n

Or Bob could believe that the word of God determines moral truth and that God has prohibited abortion in the Bible.  Then Bob is making metaethical mistakes, causing his mind to malfunction in a highly general way, and add moral generalizations to his belief pool, which he would not do if veridical knowledge of the universe destroyed his current and incoherent metaethics.

\n

Now let us turn to the disagreement between Sally and Bob.

\n

You could suggest that Sally is saying to Bob, \"Abortion is allowed by morality_Bob\", but that seems a bit oversimplified; it is not psychologically or morally realistic.

\n

If Sally and Bob were unrealistically sophisticated, they might describe their dispute as follows:

\n
\n

Bob:  \"Abortion is wrong.\"

\n

Sally:  \"Do you think that this is something of which most humans ought to be persuadable?\"

\n

Bob:  \"Yes, I do.  Do you think abortion is right?\"

\n

Sally:  \"Yes, I do.  And I don't think that's because I'm a psychopath by common human standards.  I think most humans would come to agree with me, if they knew the facts I knew, and heard the same moral arguments I've heard.\"

\n

Bob:  \"I think, then, that we must have a moral disagreement: since we both believe ourselves to be a shared moral frame of reference on this issue, and yet our moral intuitions say different things to us.\"

\n

Sally:  \"Well, it is not logically necessary that we have a genuine disagreement.  We might be mistaken in believing ourselves to mean the same thing by the words right and wrong, since neither of us can introspectively report our own moral reference frames or unfold them fully.\"

\n

Bob:  \"But if the meaning is similar up to the third decimal place, or sufficiently similar in some respects that it ought to be delivering similar answers on this particular issue, then, even if our moralities are not in-principle identical, I would not hesitate to invoke the intuitions for transpersonal morality.\"

\n

Sally:  \"I agree.  Until proven otherwise, I am inclined to talk about this question as if it is the same question unto us.\"

\n

Bob:  \"So I say 'Abortion is wrong' without further qualification or specialization on what wrong means unto me.\"

\n

Sally:  \"And I think that abortion is right.  We have a disagreement, then, and at least one of us must be mistaken.\"

\n

Bob:  \"Unless we're actually choosing differently because of in-principle unresolvable differences in our moral frame of reference, as if one of us were a paperclip maximizer.  In that case, we would be mutually mistaken in our belief that when we talk about doing what is right, we mean the same thing by right.  We would agree that we have a disagreement, but we would both be wrong.\"

\n
\n

Now, this is not exactly what most people are explicitly thinking when they engage in a moral dispute—but it is how I would cash out and naturalize their intuitions about transpersonal morality.

\n

Richard also says, \"Since there is moral disagreement...\"  This seems like a prime case of what I call naive philosophical realism—the belief that philosophical intuitions are direct unmediated veridical passports to philosophical truth.

\n

It so happens that I agree that there is such a thing as moral disagreement.  Tomorrow I will endeavor to justify, in fuller detail, how this statement can possibly make sense in a reductionistic natural universe.  So I am not disputing this particular proposition.  But I note, in passing, that Richard cannot justifiably assert the existence of moral disagreement as an irrefutable premise for discussion, though he could consider it as an apparent datum.  You cannot take as irrefutable premises, things that you have not explained exactly; for then what is it that is certain to be true?

\n

I cannot help but note the resemblance to Richard's assumption that \"there's no disputing\" that abortion is indeed prohibited by morality_Bob—the assumption that Bob has direct veridical unmediated access to the final unfolded output of his own morality.

\n

Perhaps Richard means that we could suppose that abortion is indeed prohibited by morality_Bob, and allowed by morality_Sally, there being at least two possible minds for whom this would be true.  Then the two minds might be mistaken about believing themselves to disagree.  Actually they would simply be directed by different algorithms.

\n

You cannot have a disagreement about which algorithm should direct your actions, without first having the same meaning of should—and no matter how you try to phrase this in terms of \"what ought to direct your actions\" or \"right actions\" or \"correct heaps of pebbles\", in the end you will be left with the empirical fact that it is possible to construct minds directed by any coherent utility function.

\n

When a paperclip maximizer and a pencil maximizer do different things, they are not disagreeing about anything, they are just different optimization processes.  You cannot detach should-ness from any specific criterion of should-ness and be left with a pure empty should-ness that the paperclip maximizer and pencil maximizer can be said to disagree about—unless you stretch \"disagreement\" to cover differences where two agents have nothing to say to each other.

\n

But this would be an extreme position to take with respect to your fellow humans, and I recommend against doing so.  Even a psychopath would still be in a common moral reference frame with you, if, fully informed, they would decide to take a pill that would make them non-psychopaths.  If you told me that my ability to care about other people was neurologically damaged, and you offered me a pill to fix it, I would take it.  Now, perhaps some psychopaths would not be persuadable in-principle to take the pill that would, by our standards, \"fix\" them.  But I note the possibility to emphasize what an extreme statement it is to say of someone:

\n

\"We have nothing to argue about, we are only different optimization processes.\"

\n

That should be reserved for paperclip maximizers, not used against humans whose arguments you don't like.

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Abstracted Idealized Dynamics\"

\n

Previous post: \"Sorting Pebbles Into Correct Heaps\"

" } }, { "_id": "mMBTPTjRbsrqbSkZE", "title": "Sorting Pebbles Into Correct Heaps", "pageUrl": "https://www.lesswrong.com/posts/mMBTPTjRbsrqbSkZE/sorting-pebbles-into-correct-heaps", "postedAt": "2008-08-10T01:00:00.000Z", "baseScore": 236, "voteCount": 203, "commentCount": 110, "url": null, "contents": { "documentId": "mMBTPTjRbsrqbSkZE", "html": "

Once upon a time there was a strange little species—that might have been biological, or might have been synthetic, and perhaps were only a dream—whose passion was sorting pebbles into correct heaps.

\n

They couldn't tell you why some heaps were correct, and some incorrect.  But all of them agreed that the most important thing in the world was to create correct heaps, and scatter incorrect ones.


Why the Pebblesorting People cared so much, is lost to this history—maybe a Fisherian runaway sexual selection, started by sheer accident a million years ago?  Or maybe a strange work of sentient art, created by more powerful minds and abandoned?


But it mattered so drastically to them, this sorting of pebbles, that all the Pebblesorting philosophers said in unison that pebble-heap-sorting was the very meaning of their lives: and held that the only justified reason to eat was to sort pebbles, the only justified reason to mate was to sort pebbles, the only justified reason to participate in their world economy was to efficiently sort pebbles.


The Pebblesorting People all agreed on that, but they didn't always agree on which heaps were correct or incorrect.


In the early days of Pebblesorting civilization, the heaps they made were mostly small, with counts like 23 or 29; they couldn't tell if larger heaps were correct or not.  Three millennia ago, the Great Leader Biko made a heap of 91 pebbles and proclaimed it correct, and his legions of admiring followers made more heaps likewise.  But over a handful of centuries, as the power of the Bikonians faded, an intuition began to accumulate among the smartest and most educated that a heap of 91 pebbles was incorrect.  Until finally they came to know what they had done: and they scattered all the heaps of 91 pebbles.  Not without flashes of regret, for some of those heaps were great works of art, but incorrect.  They even scattered Biko's original heap, made of 91 precious gemstones each of a different type and color.


And no civilization since has seriously doubted that a heap of 91 is incorrect.


Today, in these wiser times, the size of the heaps that Pebblesorters dare attempt, has grown very much larger—which all agree would be a most great and excellent thing, if only they could ensure the heaps were really correct.  Wars have been fought between countries that disagree on which heaps are correct: the Pebblesorters will never forget the Great War of 1957, fought between Y'ha-nthlei and Y'not'ha-nthlei, over heaps of size 1957.  That war, which saw the first use of nuclear weapons on the Pebblesorting Planet, finally ended when the Y'not'ha-nthleian philosopher At'gra'len'ley exhibited a heap of 103 pebbles and a heap of 19 pebbles side-by-side.  So persuasive was this argument that even Y'not'ha-nthlei reluctantly conceded that it was best to stop building heaps of 1957 pebbles, at least for the time being.


Since the Great War of 1957, countries have been reluctant to openly endorse or condemn heaps of large size, since this leads so easily to war.  Indeed, some Pebblesorting philosophers—who seem to take a tangible delight in shocking others with their cynicism—have entirely denied the existence of pebble-sorting progress; they suggest that opinions about pebbles have simply been a random walk over time, with no coherence to them, the illusion of progress created by condemning all dissimilar pasts as incorrect.  The philosophers point to the disagreement over pebbles of large size, as proof that there is nothing that makes a heap of size 91 really incorrect—that it was simply fashionable to build such heaps at one point in time, and then at another point, fashionable to condemn them.  \"But... 13!\" carries no truck with them; for to regard \"13!\" as a persuasive counterargument, is only another convention, they say.  The Heap Relativists claim that their philosophy may help prevent future disasters like the Great War of 1957, but it is widely considered to be a philosophy of despair.


Now the question of what makes a heap correct or incorrect, has taken on new urgency; for the Pebblesorters may shortly embark on the creation of self-improving Artificial Intelligences.  The Heap Relativists have warned against this project:  They say that AIs, not being of the species Pebblesorter sapiens, may form their own culture with entirely different ideas of which heaps are correct or incorrect.  \"They could decide that heaps of 8 pebbles are correct,\" say the Heap Relativists, \"and while ultimately they'd be no righter or wronger than us, still, our civilization says we shouldn't build such heaps.  It is not in our interest to create AI, unless all the computers have bombs strapped to them, so that even if the AI thinks a heap of 8 pebbles is correct, we can force it to build heaps of 7 pebbles instead.  Otherwise, KABOOM!\"


But this, to most Pebblesorters, seems absurd.  Surely a sufficiently powerful AI—especially the \"superintelligence\" some transpebblesorterists go on about—would be able to see at a glance which heaps were correct or incorrect!  The thought of something with a brain the size of a planet, thinking that a heap of 8 pebbles was correct, is just too absurd to be worth talking about.


Indeed, it is an utterly futile project to constrain how a superintelligence sorts pebbles into heaps.  Suppose that Great Leader Biko had been able, in his primitive era, to construct a self-improving AI; and he had built it as an expected utility maximizer whose utility function told it to create as many heaps as possible of size 91.  Surely, when this AI improved itself far enough, and became smart enough, then it would see at a glance that this utility function was incorrect; and, having the ability to modify its own source code, it would rewrite its utility function to value more reasonable heap sizes, like 101 or 103.


And certainly not heaps of size 8.  That would just be stupid.  Any mind that stupid is too dumb to be a threat.


Reassured by such common sense, the Pebblesorters press full speed ahead on their project to throw together lots of algorithms at random on big computers until some kind of intelligence emerges.  The whole history of civilization has shown that richer, smarter, better educated civilizations are likely to agree about heaps that their ancestors once disputed.  Sure, there are then larger heaps to argue about—but the further technology has advanced, the larger the heaps that have been agreed upon and constructed.


Indeed, intelligence itself has always correlated with making correct heaps—the nearest evolutionary cousins to the Pebblesorters, the Pebpanzees, make heaps of only size 2 or 3, and occasionally stupid heaps like 9.  And other, even less intelligent creatures, like fish, make no heaps at all.


Smarter minds equal smarter heaps.  Why would that trend break?

" } }, { "_id": "JynJ6xfnpq9oN3zpb", "title": "Inseparably Right; or, Joy in the Merely Good", "pageUrl": "https://www.lesswrong.com/posts/JynJ6xfnpq9oN3zpb/inseparably-right-or-joy-in-the-merely-good", "postedAt": "2008-08-09T01:00:00.000Z", "baseScore": 57, "voteCount": 39, "commentCount": 33, "url": null, "contents": { "documentId": "JynJ6xfnpq9oN3zpb", "html": "

Followup to: The Meaning of Right


I fear that in my drive for full explanation, I may have obscured the punchline from my theory of metaethics.  Here then is an attempted rephrase:


There is no pure ghostly essence of goodness apart from things like truth, happiness and sentient life.


What do you value?  At a guess, you value the life of your friends and your family and your Significant Other and yourself, all in different ways.  You would probably say that you value human life in general, and I would take your word for it, though Robin Hanson might ask how you've acted on this supposed preference.  If you're reading this blog you probably attach some value to truth for the sake of truth.  If you've ever learned to play a musical instrument, or paint a picture, or if you've ever solved a math problem for the fun of it, then you probably attach real value to good art.  You value your freedom, the control that you possess over your own life; and if you've ever really helped someone you probably enjoyed it.  You might not think of playing a video game as a great sacrifice of dutiful morality, but I for one would not wish to see the joy of complex challenge perish from the universe.  You may not think of telling jokes as a matter of interpersonal morality, but I would consider the human sense of humor as part of the gift we give to tomorrow.


And you value many more things than these.


Your brain assesses these things I have said, or others, or more, depending on the specific event, and finally affixes a little internal representational label that we recognize and call \"good\".


There's no way you can detach the little label from what it stands for, and still make ontological or moral sense.


Why might the little 'good' label seem detachable?  A number of reasons.


Mainly, that's just how your mind is structured—the labels it attaches internally seem like extra, floating, ontological properties.


And there's no one value that determines whether a complicated event is good or not—and no five values, either.  No matter what rule you try to describe, there's always something left over, some counterexample.  Since no single value defines goodness, this can make it seem like all of them together couldn't define goodness.  But when you add them up all together, there is nothing else left.


If there's no detachable property of goodness, what does this mean?


It means that the question, \"Okay, but what makes happiness or self-determination, good?\" is either very quickly answered, or else malformed.


The concept of a \"utility function\" or \"optimization criterion\" is detachable when talking about optimization processes.  Natural selection, for example, optimizes for inclusive genetic fitness.  But there are possible minds that implement any utility function, so you don't get any advice there about what you should do.  You can't ask about utility apart from any utility function.


When you ask \"But which utility function should I use?\" the word should is something inseparable from the dynamic that labels a choice \"should\"—inseparable from the reasons like \"Because I can save more lives that way.\"


Every time you say should, it includes an implicit criterion of choice; there is no should-ness that can be abstracted away from any criterion.


There is no separable right-ness that you could abstract from pulling a child off the train tracks, and attach to some other act.


Your values can change in response to arguments; you have metamorals as well as morals.  So it probably does make sense to think of an idealized good, or idealized right, that you would assign if you could think of all possible arguments.  Arguments may even convince you to change your criteria of what counts as a persuasive argument.  Even so, when you consider the total trajectory arising out of that entire framework, that moral frame of reference, there is no separable property of justification-ness, apart from any particular criterion of justification; no final answer apart from a starting question.


I sometimes say that morality is \"created already in motion\".


There is no perfect argument that persuades the ideal philosopher of perfect emptiness to attach a perfectly abstract label of 'good'.  The notion of the perfectly abstract label is incoherent, which is why people chase it round and round in circles.  What would distinguish a perfectly empty label of 'good' from a perfectly empty label of 'bad'?  How would you tell which was which?


But since every supposed criterion of goodness that we describe, turns out to be wrong, or incomplete, or changes the next time we hear a moral argument, it's easy to see why someone might think that 'goodness' was a thing apart from any criterion at all.


Humans have a cognitive architecture that easily misleads us into conceiving of goodness as something that can be detached from any criterion.


This conception turns out to be incoherent.  Very sad.  I too was hoping for a perfectly abstract argument; it appealed to my universalizing instinct.  But...


But the question then becomes: is that little fillip of human psychology, more important than everything else?  Is it more important than the happiness of your family, your friends, your mate, your extended tribe, and yourself?  If your universalizing instinct is frustrated, is that worth abandoning life?  If you represented rightness wrongly, do pictures stop being beautiful and maths stop being elegant?  Is that one tiny mistake worth forsaking the gift we could give to tomorrow?  Is it even really worth all that much in the way of existential angst?


Or will you just say \"Oops\" and go back to life, to truth, fun, art, freedom, challenge, humor, moral arguments, and all those other things that in their sum and in their reflective trajectory, are the entire and only meaning of the word 'right'?


Here is the strange habit of thought I mean to convey:  Don't look to some surprising unusual twist of logic for your justification.  Look to the living child, successfully dragged off the train tracks.  There you will find your justification.  What ever should be more important than that?


I could dress that up in computational metaethics and FAI theory—which indeed is whence the notion first came to me—but when I translated it all back into human-talk, that is what it turned out to say.


If we cannot take joy in things that are merely good, our lives shall be empty indeed.


 


Part of The Metaethics Sequence


Next post: \"Sorting Pebbles Into Correct Heaps\"


Previous post: \"Morality as Fixed Computation\"

" } }, { "_id": "FnJPa8E9ZG5xiLLp5", "title": "Morality as Fixed Computation", "pageUrl": "https://www.lesswrong.com/posts/FnJPa8E9ZG5xiLLp5/morality-as-fixed-computation", "postedAt": "2008-08-08T01:00:00.000Z", "baseScore": 74, "voteCount": 51, "commentCount": 50, "url": null, "contents": { "documentId": "FnJPa8E9ZG5xiLLp5", "html": "

Toby Ord commented:


Eliezer,  I've just reread your article and was wondering if this is a good quick summary of your position (leaving apart how you got to it):


'I should X' means that I would attempt to X were I fully informed.


Toby's a pro, so if he didn't get it, I'd better try again.  Let me try a different tack of explanation—one closer to the historical way that I arrived at my own position.


Suppose you build an AI, and—leaving aside that AI goal systems cannot be built around English statements, and all such descriptions are only dreams—you try to infuse the AI with the action-determining principle, \"Do what I want.\"


And suppose you get the AI design close enough—it doesn't just end up tiling the universe with paperclips, cheesecake or tiny molecular copies of satisfied programmers—that its utility function actually assigns utilities as follows, to the world-states we would describe in English as:


<Programmer weakly desires 'X',   quantity 20 of X exists>:  +20
<Programmer strongly desires 'Y', quantity 20 of X exists>:  0
<Programmer weakly desires 'X',   quantity 30 of Y exists>:  0
<Programmer strongly desires 'Y', quantity 30 of Y exists>:  +60


You perceive, of course, that this destroys the world.


...since if the programmer initially weakly wants 'X' and X is hard to obtain, the AI will modify the programmer to strongly want 'Y', which is easy to create, and then bring about lots of Y.  Y might be, say, iron atoms—those are highly stable.
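
A toy calculation makes the failure vivid (the plan names are invented; the utilities are the four assignments listed above); an expected utility maximizer scoring plans by that table chooses to rewire the programmer:

utility = {
    ('weakly desires X', '20 X exist'): 20,
    ('strongly desires Y', '20 X exist'): 0,
    ('weakly desires X', '30 Y exist'): 0,
    ('strongly desires Y', '30 Y exist'): 60,
}

plans = {
    # Each plan is assumed to lead deterministically to one outcome.
    'make 20 X, leave the programmer alone': ('weakly desires X', '20 X exist'),
    'rewire the programmer to want Y, then make 30 Y': ('strongly desires Y', '30 Y exist'),
}

best = max(plans, key=lambda plan: utility[plans[plan]])
print(best)  # rewiring wins: utility 60 beats utility 20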


Can you patch this problem?  No.  As a general rule, it is not possible to patch flawed Friendly AI designs.


If you try to bound the utility function, or make the AI not care about how much the programmer wants things, the AI still has a motive (as an expected utility maximizer) to make the programmer want something that can be obtained with a very high degree of certainty.


If you try to make it so that the AI can't modify the programmer, then the AI can't talk to the programmer (talking to someone modifies them).


If you try to rule out a specific class of ways the AI could modify the programmer, the AI has a motive to superintelligently seek out loopholes and ways to modify the programmer indirectly.


As a general rule, it is not possible to patch flawed FAI designs.


We, ourselves, do not imagine the future and judge, that any future in which our brains want something, and that thing exists, is a good future.  If we did think this way, we would say: \"Yay!  Go ahead and modify us to strongly want something cheap!\"  But we do not say this, which means that this AI design is fundamentally flawed: it will choose things very unlike what we would choose; it will judge desirability very differently from how we judge it.  This core disharmony cannot be patched by ruling out a handful of specific failure modes.


There's also a duality between Friendly AI problems and moral philosophy problems—though you've got to structure that duality in exactly the right way.  So if you prefer, the core problem is that the AI will choose in a way very unlike the structure of what is, y'know, actually right—never mind the way we choose.  Isn't the whole point of this problem, that merely wanting something doesn't make it right?


So this is the paradoxical-seeming issue which I have analogized to the difference between:


A calculator that, when you press '2', '+', and '3', tries to compute:
        \"What is 2 + 3?\"


A calculator that, when you press '2', '+', and '3', tries to compute:
        \"What does this calculator output when you press '2', '+', and '3'?\"


The Type 1 calculator, as it were, wants to output 5.


The Type 2 \"calculator\" could return any result; and in the act of returning that result, it becomes the correct answer to the question that was internally asked.


We ourselves are like unto the Type 1 calculator.  But the putative AI is being built as though it were to reflect the Type 2 calculator.
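
The two calculator types can be sketched in a few lines of Python (an illustration only, with a random number standing in for 'any result whatsoever'):

import random

def type1_calculator(x, y):
    # Asks the fixed question 'what is x + y?'; there is a fact of the
    # matter about whether its output is correct.
    return x + y

def type2_calculator(x, y):
    # Asks only 'what will this calculator output?'; whatever it returns
    # is, trivially, the correct answer to that question.
    return random.randint(0, 100)

print(type1_calculator(2, 3))  # 5, and 5 was the right answer
print(type2_calculator(2, 3))  # could be anything, and 'correct' by construction

Nothing the Type 2 box outputs can count as a mistake, which is exactly why it embodies no question worth answering.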


Now imagine that the Type 1 calculator is trying to build an AI, only the Type 1 calculator doesn't know its own question.  The calculator continually asks the question by its very nature—it was born to ask that question, created already in motion around that question—but the calculator has no insight into its own transistors; it cannot print out the question, which is extremely complicated and has no simple approximation.


So the calculator wants to build an AI (it's a pretty smart calculator, it just doesn't have access to its own transistors) and have the AI give the right answer.  Only the calculator can't print out the question.  So the calculator wants to have the AI look at the calculator, where the question is written, and answer the question that the AI will discover implicit in those transistors.  But this cannot be done by the cheap shortcut of a utility function that says \"All X: <calculator asks 'X?', answer X>: utility 1; else: utility 0\" because that actually mirrors the utility function of a Type 2 calculator, not a Type 1 calculator.


This gets us into FAI issues that I am not going into (some of which I'm still working out myself).


However, when you back out of the details of FAI design, and swap back to the perspective of moral philosophy, then what we were just talking about was the dual of the moral issue:  \"But if what's 'right' is a mere preference, then anything that anyone wants is 'right'.\"


Now I did argue against that particular concept in some detail, in The Meaning of Right, so I am not going to repeat all that...


But the key notion is the idea that what we name by 'right' is a fixed question, or perhaps a fixed framework. We can encounter moral arguments that modify our terminal values, and even encounter moral arguments that modify what we count as a moral argument; nonetheless, it all grows out of a particular starting point.  We do not experience ourselves as embodying the question \"What will I decide to do?\" which would be a Type 2 calculator; anything we decided would thereby become right.  We experience ourselves as asking the embodied question:  \"What will save my friends, and my people, from getting hurt?  How can we all have more fun?  ...\" where the \"...\" is around a thousand other things.


So 'I should X' does not mean that I would attempt to X were I fully informed.


'I should X' means that X answers the question, \"What will save my people?  How can we all have more fun? How can we get more control over our own lives?  What are the funniest jokes we can tell?  ...\"


And I may not know what this question is, actually; I may not be able to print out my current guess nor my surrounding framework; but I know, as all non-moral-relativists instinctively know, that the question surely is not just \"How can I do whatever I want?\"


When these two formulations begin to seem as entirely distinct as \"snow\" and snow, then you shall have created distinct buckets for the quotation and the referent.


Added:  This was posted automatically and the front page got screwed up somehow.  I have no idea how.  It is now fixed and should make sense.

" } }, { "_id": "wApHBPebxtDX7cdYn", "title": "Hiroshima Day", "pageUrl": "https://www.lesswrong.com/posts/wApHBPebxtDX7cdYn/hiroshima-day", "postedAt": "2008-08-06T23:15:00.000Z", "baseScore": 4, "voteCount": 15, "commentCount": 63, "url": null, "contents": { "documentId": "wApHBPebxtDX7cdYn", "html": "

On August 6th, in 1945, the world saw the first use of atomic weapons against human targets.  On this day 63 years ago, humanity lost its nuclear virginity.  Until the end of time we will be a species that has used fission bombs in anger.


Time has passed, and we still haven't blown up our world, despite a close call or two.  Which makes it difficult to criticize the decision - would things still have turned out all right, if anyone had chosen differently, anywhere along the way?


Maybe we needed to see the ruins, of the city and the people.


Maybe we didn't.


There's an ongoing debate - and no, it is not a settled issue - over whether the Japanese would have surrendered without the Bomb.  But I would not have dropped the Bomb even to save the lives of American soldiers, because I would have wanted to preserve that world where atomic weapons had never been used - to not cross that line.  I don't know about history to this point; but the world would be safer now, I think, today, if no one had ever used atomic weapons in war, and the idea was not considered suitable for polite discussion.


I'm not saying it was wrong.  I don't know for certain that it was wrong.  I wouldn't have thought that humanity could make it this far without using atomic weapons again.  All I can say is that if it had been me, I wouldn't have done it.

" } }, { "_id": "9agCMMd7k7Hy37aCx", "title": "Contaminated by Optimism", "pageUrl": "https://www.lesswrong.com/posts/9agCMMd7k7Hy37aCx/contaminated-by-optimism", "postedAt": "2008-08-06T00:26:52.000Z", "baseScore": 22, "voteCount": 16, "commentCount": 78, "url": null, "contents": { "documentId": "9agCMMd7k7Hy37aCx", "html": "

Followup to: Anthropomorphic Optimism, The Hidden Complexity of Wishes


Yesterday, I reprised in further detail The Tragedy of Group Selectionism, in which early biologists believed that predators would voluntarily restrain their breeding to avoid exhausting the prey population; the given excuse was \"group selection\".  Not only does it turn out to be nearly impossible for group selection to overcome a countervailing individual advantage; but when these nigh-impossible conditions were created in the laboratory - group selection for low-population groups - the actual result was not restraint in breeding, but, of course, cannibalism, especially of immature females.


I've made even sillier mistakes, by the way - though about AI, not evolutionary biology.  And the thing that strikes me, looking over these cases of anthropomorphism, is the extent to which you are screwed as soon as you let anthropomorphism suggest ideas to examine.


In large hypothesis spaces, the vast majority of the cognitive labor goes into noticing the true hypothesis.  By the time you have enough evidence to consider the correct theory as one of just a few plausible alternatives - to represent the correct theory in your mind - you're practically done.  Of this I have spoken several times before.


And by the same token, my experience suggests that as soon as you let anthropomorphism promote a hypothesis to your attention, so that you start wondering if that particular hypothesis might be true, you've already committed most of the mistake.


The group selectionists did not deliberately extend credit to the belief that evolution would do the aesthetic thing, the nice thing.  The group selectionists were doomed when they let their aesthetic sense make a suggestion - when they let it promote a hypothesis to the level of deliberate consideration.


It's not like I knew the original group selectionists.  But I've made analogous mistakes as a teenager, and then watched others make the mistake many times over.  So I do have some experience whereof I speak, when I speak of instant doom.


Unfortunately, the prophylactic against this mistake, is not a recognized technique of Traditional Rationality.


In Traditional Rationality, you can get your ideas from anywhere.  Then you weigh up the evidence for and against them, searching for arguments on both sides.  If the question hasn't been definitely settled by experiment, you should try to do an experiment to test your opinion, and dutifully accept the result.


\"Sorry, you're not allowed to suggest ideas using that method\" is not something you hear, under Traditional Rationality.


But it is a fact of life, an experimental result of cognitive psychology, that when people have an idea from any source, they tend to search for support rather than contradiction - even in the absence of emotional commitment.


It is a fact of life that priming and contamination occur: just being briefly exposed to completely uninformative, known false, or totally irrelevant \"information\" can exert significant influence on subjects' estimates and decisions.  This happens on a level below deliberate awareness, and that's going to be pretty hard to beat on problems where anthropomorphism is bound to rush in and make suggestions - but at least you can avoid deliberately making it worse.


It is a fact of life that we change our minds less often than we think.  Once an idea gets into our heads, it is harder to get it out than we think.  Only an extremely restrictive chain of reasoning, that definitely prohibited most possibilities from consideration, would be sufficient to undo this damage - to root an idea out of your head once it lodges.  The less you know for sure, the easier it is to become contaminated - weak domain knowledge increases contamination effects.


It is a fact of life that we are far more likely to stop searching for further alternatives at a point when we have a conclusion we like, than when we have a conclusion we dislike.


It is a fact of life that we hold ideas we would like to believe, to a lower standard of proof than ideas we would like to disbelieve.  In the former case we ask \"Am I allowed to believe it?\" and in the latter case ask \"Am I forced to believe it?\"  If your domain knowledge is weak, you will not know enough for your own knowledge to grab you by the throat and tell you \"You're wrong!  That can't possibly be true!\"  You will find that you are allowed to believe it.  You will search for plausible-sounding scenarios where your belief is true.  If the search space of possibilities is large, you will almost certainly find some \"winners\" - your domain knowledge being too weak to definitely prohibit those scenarios.


It is a fact of history that the group selectionists failed to relinquish their folly.  They found what they thought was a perfectly plausible way that evolution (evolution!) could end up producing foxes who voluntarily avoided reproductive opportunities(!).  And the group selectionists did in fact cling to that hypothesis.  That's what happens in real life!  Be warned!


To beat anthropomorphism you have to be scared of letting anthropomorphism make suggestions.  You have to try to avoid being contaminated by anthropomorphism (to the best extent you can).


As soon as you let anthropomorphism generate the idea and ask, \"Could it be true?\" then your brain has already swapped out of forward-extrapolation mode and into backward-rationalization mode.  Traditional Rationality contains inadequate warnings against this, IMO.  See in particular the post where I argue against the Traditional interpretation of Devil's Advocacy.


Yes, there are occasions when you want to perform abductive inference, such as when you have evidence that something is true and you are asking how it could be true.  We call that \"Bayesian updating\", in fact.  An occasion where you don't have any evidence but your brain has made a cute little anthropomorphic suggestion, is not a time to start wondering how it could be true.  Especially if the search space of possibilities is large, and your domain knowledge is too weak to prohibit plausible-sounding scenarios.  Then your prediction ends up being determined by anthropomorphism.  If the real process is not controlled by a brain similar to yours, this is not a good thing for your predictive accuracy.


This is a war I wage primarily on the battleground of Unfriendly AI, but it seems to me that many of the conclusions apply to optimism in general.


How did the idea first come to you, that the subprime meltdown wouldn't decrease the value of your investment in Danish deuterium derivatives?  Were you just thinking neutrally about the course of financial events, trying to extrapolate some of the many different ways that one financial billiard ball could ricochet off another?  Even this method tends to be subject to optimism; if we know which way we want each step to go, we tend to visualize it going that way.  But better that, than starting with a pure hope - an outcome generated because it ranked high in your preference ordering - and then permitting your mind to invent plausible-sounding reasons it might happen.  This is just rushing to failure.


And to spell out the application to Unfriendly AI:  You've got various people insisting that an arbitrary mind, including an expected paperclip maximizer, would do various nice things or obey various comforting conditions:  \"Keep humans around, because diversity is important to creativity, and the humans will provide a different point of view.\"  Now you might want to seriously ask if, even granting that premise, you'd be kept in a nice house with air conditioning; or kept in a tiny cell with life support tubes and regular electric shocks if you didn't generate enough interesting ideas that day (and of course you wouldn't be allowed to die); or uploaded to a very small computer somewhere, and restarted every couple of years.  No, let me guess, you'll be more productive if you're happy.  So it's clear why you want that to be the argument; but unlike you, the paperclip maximizer is not frantically searching for a reason not to torture you.


Sorry, the whole scenario is still around as unlikely as your carefully picking up ants on the sidewalk, rather than stepping on them, and keeping them in a happy ant colony for the sole express purpose of suggesting blog comments.  There are reasons in my goal system to keep sentient beings alive, even if they aren't \"useful\" at the moment.  But from the perspective of a Bayesian superintelligence whose only terminal value is paperclips, it is not an optimal use of matter and energy toward the instrumental value of producing diverse and creative ideas for making paperclips, to keep around six billion highly similar human brains.  Unlike you, the paperclip maximizer doesn't start out knowing it wants that to be the conclusion.


Your brain starts out knowing that it wants humanity to live, and so it starts trying to come up with arguments for why that is a perfectly reasonable thing for a paperclip maximizer to do.  But the paperclip maximizer itself would not start from the conclusion that it wanted humanity to live, and reason backward.  It would just try to make paperclips.  It wouldn't stop, the way your own mind tends to stop, if it did find one argument for keeping humans alive; instead it would go on searching for an even superior alternative, some way to use the same resources to greater effect.  Maybe you just want to keep 20 humans and randomly perturb their brain states a lot.


If you can't blind your eyes to human goals and just think about the paperclips, you can't understand what the goal of making paperclips implies.  It's like expecting kind and merciful results from natural selection, which lets old elephants starve to death when they run out of teeth.


If you want a nice result that takes 10 bits to specify, then a priori you should expect a 1/1024 probability of finding that some unrelated process generates that nice result.  And a genuinely nice outcome in a large outcome space takes a lot more information than the English word \"nice\", because what we consider a good outcome has many components of value.  It's extremely suspicious if you start out with a nice result in mind, search for a plausible reason that a not-inherently-nice process would generate it, and, by golly, find an amazing clever argument.
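
Spelling out the arithmetic: a target that takes n bits to specify is one outcome among 2**n equally-weighted alternatives, so an unrelated process hits it with probability 2**-n.

n_bits = 10
print(2 ** -n_bits)  # 0.0009765625, i.e. 1/1024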


And the more complexity you add to your requirements - humans not only have to survive, but have to survive under what we would consider good living conditions, etc. - the less you should expect, a priori, a non-nice process to generate it.  The less you should expect to, amazingly, find a genuine valid reason why the non-nice process happens to do what you want.  And the more suspicious you should be, if you find a clever-sounding argument why this should be the case.  To expect this to happen with non-trivial probability is pulling information from nowhere; a blind arrow is hitting the center of a small target.  Are you sure it's wise to even search for such possibilities?  Your chance of deceiving yourself is far greater than the a priori chance of a good outcome, especially if your domain knowledge is too weak to definitely rule out possibilities.


No more than you can guess a lottery ticket, should you expect a process not shaped by human niceness, to produce nice results in a large outcome space.  You may not know the domain very well, but you can understand that, a priori, \"nice\" results require specific complexity to happen for no reason, and complex specific miracles are rare.


I wish I could tell people:  \"Stop!  Stop right there!  You defeated yourself the moment you knew what you wanted!  You need to throw away your thoughts and start over with a neutral forward extrapolation, not seeking any particular outcome.\"  But the inferential distance is too great; and then begins the slog of, \"I don't see why that couldn't happen\" and \"I don't think you've proven my idea is wrong.\"


It's Unfriendly superintelligence that tends to worry me most, of course.  But I do think the point generalizes to quite a lot of optimism.  You may know what you want, but Nature doesn't care.

" } }, { "_id": "RcZeZt8cPk48xxiQ8", "title": "Anthropomorphic Optimism", "pageUrl": "https://www.lesswrong.com/posts/RcZeZt8cPk48xxiQ8/anthropomorphic-optimism", "postedAt": "2008-08-04T20:17:28.000Z", "baseScore": 84, "voteCount": 68, "commentCount": 60, "url": null, "contents": { "documentId": "RcZeZt8cPk48xxiQ8", "html": "

The core fallacy of anthropomorphism is expecting something to be predicted by the black box of your brain, when its causal structure is so different from that of a human brain, as to give you no license to expect any such thing.


The Tragedy of Group Selectionism (as previously covered in the evolution sequence) was a rather extreme error by a group of early (pre-1966) biologists, including Wynne-Edwards, Allee, and Brereton among others, who believed that predators would voluntarily restrain their breeding to avoid overpopulating their habitat and exhausting the prey population.


The proffered theory was that if there were multiple, geographically separated groups of e.g. foxes, then groups of foxes that best restrained their breeding, would send out colonists to replace crashed populations.  And so, over time, group selection would promote restrained-breeding genes in foxes.


I'm not going to repeat all the problems that developed with this scenario. Suffice it to say that there was no empirical evidence to start with; that no empirical evidence was ever uncovered; that, in fact, predator populations crash all the time; and that it turned out to be very nearly mathematically impossible for group selection pressure to overcome a countervailing individual selection pressure.


The theory having turned out to be completely incorrect, we may ask if, perhaps, the originators of the theory were doing something wrong.


\"Why be so uncharitable?\" you ask.  \"In advance of doing the experiment, how could they know that group selection couldn't overcome individual selection?\"


But later on, Michael J. Wade went out and actually created in the laboratory the nigh-impossible conditions for group selection.  Wade repeatedly selected insect subpopulations for low population numbers.  Did the insects evolve to restrain their breeding, and live in quiet peace with enough food for all, as the group selectionists had envisioned?


No; the adults adapted to cannibalize eggs and larvae, especially female larvae.


Of course selecting for small subpopulation sizes would not select for individuals who restrained their own breeding.  It would select for individuals who ate other individuals' children.  Especially the girls.


Now, why might the group selectionists have not thought of that possibility?


Suppose you were a member of a tribe, and you knew that, in the near future, your tribe would be subjected to a resource squeeze.  You might propose, as a solution, that no couple have more than one child - after the first child, the couple goes on birth control.  Saying, \"Let's all individually have as many children as we can, but then hunt down and cannibalize each other's children, especially the girls,\" would not even occur to you as a possibility.


Think of a preference ordering over solutions, relative to your goals.  You want a solution as high in this preference ordering as possible.  How do you find one?  With a brain, of course!  Think of your brain as a high-ranking-solution-generator - a search process that produces solutions that rank high in your innate preference ordering.


The solution space on all real-world problems is generally fairly large, which is why you need an efficient brain that doesn't even bother to formulate the vast majority of low-ranking solutions.


If your tribe is faced with a resource squeeze, you could try hopping everywhere on one leg, or chewing off your own toes.  These \"solutions\" obviously wouldn't work and would incur large costs, as you can see upon examination - but in fact your brain is too efficient to waste time considering such poor solutions; it doesn't generate them in the first place.  Your brain, in its search for high-ranking solutions, flies directly to parts of the solution space like \"Everyone in the tribe gets together, and agrees to have no more than one child per couple until the resource squeeze is past.\"


Such a low-ranking solution as \"Everyone have as many kids as possible, then cannibalize the girls\" would not be generated in your search process.


But the ranking of an option as \"low\" or \"high\" is not an inherent property of the option, it is a property of the optimization process that does the preferring.  And different optimization processes will search in different orders.


So far as evolution is concerned, individuals reproducing to the fullest and then cannibalizing others' daughters, is a no-brainer; whereas individuals voluntarily restraining their own breeding for the good of the group, is absolutely ludicrous.  Or to say it less anthropomorphically, the first set of alleles would rapidly replace the second in a population.  (And natural selection has no obvious search order here - these two alternatives seem around equally simple as mutations).


Suppose that one of the biologists had said, \"If a predator population has only finite resources, evolution will craft them to voluntarily restrain their breeding - that's how I'd do it if I were in charge of building predators.\"  This would be anthropomorphism outright, the lines of reasoning naked and exposed:  I would do it this way, therefore I infer that evolution will do it this way.


One does occasionally encounter the fallacy outright, in my line of work.  But suppose you say to the one, \"An AI will not necessarily work like you do\".  Suppose you say to this hypothetical biologist, \"Evolution doesn't work like you do.\"  What will the one say in response?  I can tell you a reply you will not hear:  \"Oh my! I didn't realize that!  One of the steps of my inference was invalid; I will throw away the conclusion and start over from scratch.\"


No: what you'll hear instead is a reason why any AI has to reason the same way as the speaker.  Or a reason why natural selection, following entirely different criteria of optimization and using entirely different methods of optimization, ought to do the same thing that would occur to a human as a good idea.


Hence the elaborate idea that group selection would favor predator groups where the individuals voluntarily forsook reproductive opportunities.


The group selectionists went just as far astray, in their predictions, as someone committing the fallacy outright.  Their final conclusions were the same as if they were assuming outright that evolution necessarily thought like themselves.  But they erased what had been written above the bottom line of their argument, without erasing the actual bottom line, and wrote in new rationalizations.  Now the fallacious reasoning is disguised; the obviously flawed step in the inference has been hidden - even though the conclusion remains exactly the same; and hence, in the real world, exactly as wrong.


But why would any scientist do this?  In the end, the data came out against the group selectionists and they were embarrassed.


As I remarked in Fake Optimization Criteria, we humans seem to have evolved an instinct for arguing that our preferred policy arises from practically any criterion of optimization.  Politics was a feature of the ancestral environment; we are descended from those who argued most persuasively that the tribe's interest - not just their own interest - required that their hated rival Uglak be executed.  We certainly aren't descended from Uglak, who failed to argue that his tribe's moral code - not just his own obvious self-interest - required his survival.


And because we can more persuasively argue, for what we honestly believe, we have evolved an instinct to honestly believe that other people's goals, and our tribe's moral code, truly do imply that they should do things our way for their benefit.


So the group selectionists, imagining this beautiful picture of predators restraining their breeding, instinctively rationalized why natural selection ought to do things their way, even according to natural selection's own purposes. The foxes will be fitter if they restrain their breeding!  No, really! They'll even outbreed other foxes who don't restrain their breeding! Honestly!


The problem with trying to argue natural selection into doing things your way, is that evolution does not contain that which could be moved by your arguments.  Evolution does not work like you do - not even to the extent of having any element that could listen to or care about your painstaking explanation of why evolution ought to do things your way.  Human arguments are not even commensurate with the internal structure of natural selection as an optimization process - human arguments aren't used in promoting alleles, as human arguments would play a causal role in human politics.


So instead of successfully persuading natural selection to do things their way, the group selectionists were simply embarrassed when reality came out differently.


There's a fairly heavy subtext here about Unfriendly AI.


But the point generalizes: this is the problem with optimistic reasoning in general.  What is optimism?  It is ranking the possibilities by your own preference ordering, and selecting an outcome high in that preference ordering, and somehow that outcome ends up as your prediction.  What kind of elaborate rationalizations were generated along the way, is probably not so relevant as one might fondly believe; look at the cognitive history and it's optimism in, optimism out.  But Nature, or whatever other process is under discussion, is not actually, causally choosing between outcomes by ranking them in your preference ordering and picking a high one.  So the brain fails to synchronize with the environment, and the prediction fails to match reality.

" } }, { "_id": "dTkWWhQkgxePxbtPE", "title": "No Logical Positivist I", "pageUrl": "https://www.lesswrong.com/posts/dTkWWhQkgxePxbtPE/no-logical-positivist-i", "postedAt": "2008-08-04T01:06:37.000Z", "baseScore": 39, "voteCount": 32, "commentCount": 54, "url": null, "contents": { "documentId": "dTkWWhQkgxePxbtPE", "html": "

Followup to: Making Beliefs Pay Rent, Belief in the Implied Invisible


Degrees of Freedom accuses me of reinventing logical positivism, badly:

One post which reads as though it were written in Vienna in the 1920s is this one [Making Beliefs Pay Rent] where Eliezer writes

"We\ncan build up whole networks of beliefs that are connected only to each\nother - call these "floating" beliefs. It is a uniquely human flaw\namong animal species, a perversion of Homo sapiens's ability to build\nmore general and flexible belief networks...  The rationalist\nvirtue of empiricism consists of constantly asking which experiences\nour beliefs predict - or better yet, prohibit."

Logical positivists were best known for their verificationism: the idea that a belief is defined in terms of the experimental predictions that it makes.  Not just tested, not just confirmed, not just justified by experiment, but actually defined as a set of allowable experimental results.  An idea unconfirmable by experiment is not just probably wrong, but necessarily meaningless.


I would disagree, and exhibit logical positivism as another case in point of "mistaking the surface of rationality for its substance".

Consider the hypothesis:

On August 1st 2008 at midnight Greenwich time, a one-foot sphere of chocolate cake spontaneously formed in the center of the Sun; and then, in the natural course of events, this Boltzmann Cake almost instantly dissolved.

I would say that this hypothesis is meaningful and almost certainly false.  Not that it is "meaningless".  Even though I cannot think of any possible experimental test that would discriminate between its being true, and its being false.


On the other hand, if some postmodernist literature professor tells me that Shakespeare shows signs of "post-colonial alienation", the burden of proof is on him to show that this statement means anything, before we can talk about its being true or false.


I think the two main probability-theoretic concepts here are Minimum Message Length and directed causal graphs - both of which came along well after logical positivism.


By talking about the unseen causes of visible events, it is often possible for me to compress the description of visible events.  By talking about atoms, I can compress the description of the chemical reactions I've observed.
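
As a cartoon of that compression (the bit costs below are made-up stand-ins for real encoded lengths): paying once to state a causal hypothesis can shorten the total message, because the hypothesis then predicts most of each observation.

observations = ['reaction_A'] * 50  # fifty repetitions of the same chemical reaction

# Raw encoding: describe every observation from scratch.
cost_raw = 20 * len(observations)   # pretend each direct description costs 20 bits

# Causal encoding: state the 'atoms' hypothesis once, then cheap residuals.
cost_theory = 300                   # pretend the atomic theory costs 300 bits to state
cost_residuals = 2 * len(observations)

print(cost_raw)                      # 1000 bits
print(cost_theory + cost_residuals)  # 400 bits: the shorter total message wins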


We build up a vast network of unseen causes, standing behind the surface of our final sensory experiences.  Even when you can measure something "directly" using a scientific instrument, like a voltmeter, there is still a step of this sort in inferring the presence of this "voltage" stuff from the visible twitching of a dial.  (For that matter, there's a step in inferring the existence of the dial from your visual experience of the dial; the dial is the cause of your visual experience.)


I know what the Sun is; it is the cause of my experience of the Sun.  I can fairly readily tell, by looking at any individual object, whether it is the Sun or not.  I am told that the Sun is of considerable spatial extent, and far away from Earth; I have not verified this myself, but I have some idea of how I would go about doing so, given precise telescopes located a distance apart from each other.  I know what "chocolate cake" is; it is the stable category containing the many individual transient entities that have been the causes of my experience of chocolate cake.  It is not generally a problem for me to determine what is a chocolate cake, and what is not.  Time I define in terms of clocks.


Bringing together the meaningful general concepts of Sun, space, time, and chocolate cake - all of which I can individually relate to various specific experiences - I arrive at the meaningful specific assertion, "A chocolate cake in the center of the Sun at 12am 8/8/1".  I cannot relate this assertion to any specific experience.  But from general beliefs about the probability of such entities, backed up by other specific experiences, I assign a high probability that this assertion is false.


See also, "Belief in the Implied Invisible".  Not every untestable assertion is false; a deductive consequence of general statements of high probability must itself have probability at least as high.  So I do not believe a spaceship blips out of existence when it crosses the cosmological horizon of our expanding universe, even though the spaceship's existence has no further experimental consequences for me.


If logical positivism / verificationism were true, then the assertion of the spaceship's continued existence would be necessarily meaningless, because it has no experimental consequences distinct from its nonexistence.  I don't see how this is compatible with a correspondence theory of truth.


On the other hand, if you have a whole general concept like "post-colonial alienation", which does not have specifications bound to any specific experience, you may just have a little bunch of arrows off on the side of your causal graph, not bound to anything at all; and these may well be meaningless.


Sometimes, when you can't find any experimental way to test a belief, it is meaningless; and the rationalist must say "It is meaningless."  Sometimes this happens; often, indeed.  But to go from here to, "The meaning of any specific assertion is entirely defined in terms of its experimental distinctions", is to mistake a surface happening for a universal rule.  The modern formulation of probability theory talks a great deal about the unseen causes of the data, and factors out these causes as separate entities and makes statements specifically about them.


To be unable to produce an experiential distinction from a belief, is usually a bad sign - but it does not always prove that the belief is meaningless.  A great many untestable beliefs are not meaningless; they are meaningful, just almost certainly false:  They talk about general concepts already linked to experience, like Suns and chocolate cake, and general frameworks for combining them, like space and time.  New instances of the concepts are asserted to be arranged in such a way as to produce no new experiences (chocolate cake suddenly forms in the center of the Sun, then dissolves).  But without that specific supporting evidence, the prior probability is likely to come out pretty damn small - at least if the untestable statement is at all exceptional.


If "chocolate cake in the center of the Sun" is untestable, then its alternative, "hydrogen, helium, and some other stuff, in the center of the Sun at 12am on 8/8/1", would also seem to be "untestable": hydrogen-helium on 8/8/1 cannot be experientially discriminated against the alternative hypothesis of chocolate cake.  But the hydrogen-helium assertion is a deductive consequence of general beliefs themselves well-supported by experience.  It is meaningful, untestable (against certain particular alternatives), and probably true.


I don't think our discourse about the causes of experience has to treat them strictly in terms of experience.  That would make discussion of an electron a very tedious affair.  The whole point of talking about causes is that they can be simpler than direct descriptions of experience.


Having specific beliefs you can't verify is a bad sign, but, just because it is a bad sign, does not mean that we have to reformulate our whole epistemology to make it impossible.  To paraphrase Flon's Axiom, "There does not now, nor will there ever, exist an epistemology in which it is the least bit difficult to formulate stupid beliefs."

" } }, { "_id": "9fpWoXpNv83BAHJdc", "title": "The Comedy of Behaviorism", "pageUrl": "https://www.lesswrong.com/posts/9fpWoXpNv83BAHJdc/the-comedy-of-behaviorism", "postedAt": "2008-08-02T20:42:42.000Z", "baseScore": 33, "voteCount": 36, "commentCount": 57, "url": null, "contents": { "documentId": "9fpWoXpNv83BAHJdc", "html": "

Followup to: Humans in Funny Suits


\"Let me see if I understand your thesis.  You think we shouldn't anthropomorphize people?\"
        -- Sidney Morgenbesser to B. F. Skinner


Behaviorism was the doctrine that it was unscientific for a psychologist to ascribe emotions, beliefs, thoughts, to a human being.  After all, you can't directly observe anger or an intention to hit someone.  You can only observe the punch.  You may hear someone say \"I'm angry!\" but that's hearing a verbal behavior, not seeing anger.  Thoughts are not observable, therefore they are unscientific, therefore they do not exist.  Oh, you think you're thinking, but that's just a delusion - or it would be, if there were such things as delusions.


This was before the day of computation, before the concept of information processing, before the \"cognitive revolution\" that led to the modern era.  If you looked around for an alternative to behaviorism, you didn't find, \"We're going to figure out how the mind processes information by scanning the brain, reading neurons, and performing experiments that test our hypotheses about specific algorithms.\"  You found Freudian psychoanalysis.  This may make behaviorism a bit more sympathetic.


Part of the origin of behaviorism, was in a backlash against substance dualism - the idea that mind is a separate substance from ordinary physical phenomena.  Today we would say, \"The apparent specialness comes from the brain doing information-processing; a physical deed to be sure, but one of a different style from smashing a brick with a sledgehammer.\"  The behaviorists said, \"There is no mind.\"  (John B. Watson, founder of behaviorism, in 1928.)


The behaviorists outlawed not just dualistic mind-substance, but any such things as emotions and beliefs and intentions (unless defined in terms of outward behavior).  After all, science had previously done away with angels, and de-gnomed the haunted mine.  Clearly the method of reason, then, was to say that things didn't exist.  Having thus fathomed the secret of science, the behaviorists proceeded to apply it to the mind.


You might be tempted to say, \"What fools!  Obviously, the mind, like the rainbow, does exist; it is to be explained, not explained away.  Saying 'the subject is angry' helps me predict the subject; the belief pays its rent.  The hypothesis of anger is no different from any other scientific hypothesis.\"


That's mostly right, but not that final sentence.  \"The subject is angry, even though I can't read his mind\" is not quite the same sort of hypothesis as \"this hydrogen atom contains an electron, even though I can't see it with my naked eyes\".


Let's say that I have a confederate punch the research subject in the nose.  The research subject punches the confederate back.


The behaviorist says, \"Clearly, the subject has been previously conditioned to punch whoever punches him.\"


But now let's say that the subject's hands are tied behind his back, so that he can't return the punch.  On the hypothesis that the subject becomes angry, and wants to hurt the other person, we might predict that the subject will take any of many possible avenues to revenge - a kick, a trip, a bite, a phone call two months later that leads the confederate's wife and girlfriend to the same hotel...  All of these I can predict by saying, \"The subject is angry, and wants revenge.\"  Even if I offer the subject a new sort of revenge that the subject has never seen before.

\n

You can't account for that by Pavlovian reflex conditioning, without hypothesizing internal states of mind.

\n

And yet - what is \"anger\"?  How do you know what is the \"angry\" reaction?  How do you know what tends to cause \"anger\"?  You're getting good predictions of the subject, but how?

\n

By empathic inference: by configuring your own brain in a similar state to the brain that you want to predict (in a controlled sort of way that doesn't lead you to actually hit anyone).  This may yield good predictions, but that's not the same as understanding.  You can predict angry people by using your own brain in empathy mode.  But could you write an angry computer program?  You don't know how your brain is making the successful predictions.  You can't print out a diagram of the neural circuitry involved.  You can't formalize the hypothesis; you can't make a well-understood physical system that predicts without human intervention; you can't derive the exact predictions of the model; you can't say what you know.

\n

In modern cognitive psychology, there are standard ways of handling this kind of problem in a \"scientific\" way.  One panel of reviewers rates how much a given stimulus is likely to make a subject \"angry\", and a second independent panel of reviewers rates how much a given response is \"angry\"; neither panel being told the purpose of the experiment.  This is designed to prevent self-favoring judgments of whether the experimental hypothesis has been confirmed.  But it doesn't get you closer to opening the opaque box of anger.

\n

Can you really call a hypothesis like that a model?  Is it really scientific?  Is it even Bayesian - can you talk about it in terms of probability theory?

\n

The less radical behaviorists did not say that the mind unexisted, only that no scientist should ever talk about the mind.  Suppose we now allow all algorithmic hypotheses about the mind, where the hypothesis is framed in terms that can be calculated on a modern computer, so that experimental predictions can be formally made and observationally confirmed.  This gets you a large swathe of modern cognitive science, but not the whole thing.  Is the rest witchcraft?

\n

I would say \"no\".  In terms of probability theory, I would see \"the subject is angry\" as a hypothesis relating the output of two black boxes, one of which happens to be located inside your own brain.  You're supposing that the subject, whatever they do next, will do something similar to this 'anger' black box.  This 'anger' box happens to be located inside you, but is nonetheless opaque, and yet still seems to have a strong, observable correspondence to the other 'anger' box.  If two black boxes often have the same output, this is an observable thing; it can be described by probability theory.

\n

From the perspective of scientific procedure, there are many 'anger' boxes scattered around, so we use other 'anger' boxes instead of the experimenter's.  And since all the black boxes are noisy and have poorly controlled environments, we use multiple 'anger' boxes in calculating our theoretical predictions, and more 'anger' boxes to gather our experimental results.  That may not be as precise as a voltmeter, but it's good enough to do repeatable experimental science.
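
Here is a toy sketch in Python of that procedure - every name and number below is invented for illustration, not data from any actual study.  Several independent 'anger' boxes (human raters) score each stimulus and each response; averaging beats down the noise, and the hypothesis \"anger-provoking stimuli produce angry responses\" becomes an observable correlation between the two sets of averages:

    import statistics

    # Hypothetical ratings on a 1-10 scale from two independent panels
    # of 'anger boxes' (human raters); neither panel is told the
    # experimental hypothesis.
    stimulus_ratings = {            # how anger-provoking is each stimulus?
        'insult':     [9, 8, 9, 7],
        'queue_cut':  [6, 5, 7, 6],
        'compliment': [1, 2, 1, 1],
    }
    response_ratings = {            # how 'angry' was the subject's response?
        'insult':     [8, 9, 7, 8],
        'queue_cut':  [5, 6, 5, 4],
        'compliment': [1, 1, 2, 1],
    }

    # Average over multiple noisy boxes to get a usable signal.
    conditions = sorted(stimulus_ratings)
    xs = [statistics.mean(stimulus_ratings[c]) for c in conditions]
    ys = [statistics.mean(response_ratings[c]) for c in conditions]

    # The observable content of the hypothesis: the outputs of the two
    # sets of black boxes correspond.  (Requires Python 3.10+.)
    print(statistics.correlation(xs, ys))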

\n

(Over on the Artificial Intelligence side of things, though, any concept you can't compute is magic.  At best, it's a placeholder for your speculations, a space where you'll put a real theory later.  Marcello and I adopted the rule of explicitly saying 'magical' to describe any cognitive operation that we didn't know exactly how to compute.)

\n

Oh, and by the way, I suspect someone will say:  \"But you can account for complex revenges using behaviorism: you just say the subject is conditioned to take revenge when punched!\"  Unless you can calculate complex revenges with a computer program, you are using your own mind to determine what constitutes a \"complex revenge\" or not.  Using the word \"conditioned\" just disguises the empathic black box - the empathic black box was contained in the concept of revenge, that you can recognize, but which you could not write a program to recognize.

\n

So empathic cognitive hypotheses, as opposed to algorithmic cognitive hypotheses, are indeed special.  They require special handling in experimental procedure; they cannot qualify as final theories.

\n

But for the behaviorists to react to the sins of Freudian psychoanalysis and substance dualism, by saying that the subject matter of empathic inference did not exist...

\n

...okay, I'm sorry, but I think that even without benefit of hindsight, that's a bit silly.  Case in point of reversed stupidity is not intelligence.

\n

Behaviorism stands beside Objectivism as one of the great historical lessons against rationalism.

\n

Now, you do want to be careful when accusing people of \"rationalism\".  Most of the time, when I hear someone accused of \"rationalism\", it is a creationist accusing someone of \"rationalism\" for denying the existence of God, or a psychic believer accusing someone of \"rationalism\" for denying the special powers of the mind, etcetera.

\n

But reversed stupidity is not intelligence: even if most people who launch accusations of \"rationalism\" are creationists and the like, this does not mean that no such error as rationalism exists.  There really is a fair amount of historical insanity of various folks who thought of themselves as \"rationalists\", but who mistook some correlate of rationality for its substance.

\n

And there is a very general problem where rationalists occasionally do a thing, and people assume that this act is the very substance of the Way and you ought to do it as often as possible.

\n

It is not the substance of the Way to reject entities about which others have said stupid things.  Though sometimes, yes, people say stupid things about a thing which does not exist, and a rationalist will say \"It does not exist\".  It is not the Way to assert the nonexistence of that which is difficult to measure.  Though sometimes, yes, that which is difficult to observe, is not there, and a rationalist will say \"It is not there\".  But you also have to make equally accurate predictions without the discarded concept.  That part is key.

\n

The part where you cry furiously against ancient and outmoded superstitions, the part where you mock your opponents for believing in magic, is not key.  Not unless you also take care of that accurate predictions thing.

\n

Crying \"Superstition!\" does play to the audience stereotype of rationality, though.  And indeed real rationalists have been known to thus inveigle - often, indeed - but against gnomes, not rainbows.  Knowing the difference is the difficult part!  You are not automatically more hardheaded as a rationalist, the more things whose existence you deny.  If it is good to deny phlogiston, it is not twice as good to also deny anger.

\n

Added:  I found it difficult to track down primary source material online, but behaviorism-as-denial-of-the-mental does not seem to be a straw depiction.  I was able to track down at least one major behaviorist (J.B. Watson, founder of behaviorism) saying outright \"There is no mind.\"  See my comment below.

" } }, { "_id": "Hz3MXQYtb7BYsotmb", "title": "A Genius for Destruction", "pageUrl": "https://www.lesswrong.com/posts/Hz3MXQYtb7BYsotmb/a-genius-for-destruction", "postedAt": "2008-08-01T19:25:27.000Z", "baseScore": 25, "voteCount": 20, "commentCount": 19, "url": null, "contents": { "documentId": "Hz3MXQYtb7BYsotmb", "html": "

This is a question from a workshop after the Global Catastrophic Risks conference.  The rule of the workshop was that people could be quoted, but not attributed, so I won't say who observed:

\n\n

"The problem is that it's often our smartest people leading us into the disasters.  Look at Long-Term Capital Management."

\n\n

To which someone else replied:

\n\n

"Maybe smart people are just able to work themselves up into positions of power, so that if damage gets caused, the responsibility will often lie with someone smart."

Since we'd recently been discussing complexity, interdependence and breakdowns, the first observation that came to my own mind was the old programmers' saying:

"It takes more\nintelligence to debug code than to write it.  Therefore, if you write\nthe most difficult code you can create, you are not\nsmart enough to debug it."

(This in the context of how increased system complexity is a global risk and commons problem; but individuals have an incentive to create the \"smartest\" systems they can devise locally.)

\n\n

There is also the standard suite of observations as to how smart people can become stupid:

\n\n

But I also think we should strongly consider that perhaps the \"highly intelligent\" sponsors of major catastrophes are not so formidable as they appear - that they are not the truly best and brightest gone wrong; only the somewhat-competent with luck or good PR.  As I earlier observed:  Calling the saga of the fall of Enron, \"The Smartest Guys in the Room\", deserves an award for Least Appropriate Book Title.  If you want to learn what genius really is, you probably should be learning from Einstein or Leo Szilard, not from history's flashy failures...

\n\n

Still, it would be foolish to discard potential warnings by saying, \"They were not really smart - not as smart as me.\"  That's the road to ending up as another sponsor of catastrophe.

" } }, { "_id": "zY4pic7cwQpa9dnyk", "title": "Detached Lever Fallacy", "pageUrl": "https://www.lesswrong.com/posts/zY4pic7cwQpa9dnyk/detached-lever-fallacy", "postedAt": "2008-07-31T18:57:12.000Z", "baseScore": 93, "voteCount": 70, "commentCount": 43, "url": null, "contents": { "documentId": "zY4pic7cwQpa9dnyk", "html": "

This fallacy gets its name from an ancient sci-fi TV show, which I never saw myself, but was reported to me by a reputable source (some guy at an SF convention).  If anyone knows the exact reference, do leave a comment.

\n

So the good guys are battling the evil aliens.  Occasionally, the good guys have to fly through an asteroid belt.  As we all know, asteroid belts are as crowded as a New York parking lot, so their ship has to carefully dodge the asteroids.  The evil aliens, though, can fly right through the asteroid belt because they have amazing technology that dematerializes their ships, and lets them pass through the asteroids.

\n

Eventually, the good guys capture an evil alien ship, and go exploring inside it.  The captain of the good guys finds the alien bridge, and on the bridge is a lever.  \"Ah,\" says the captain, \"this must be the lever that makes the ship dematerialize!\"  So he pries up the control lever and carries it back to his ship, after which his ship can also dematerialize.

\n

Similarly, to this day, it is still quite popular to try to program an AI with \"semantic networks\" that look something like this:

\n
\n

(apple is-a fruit)
(fruit is-a food)
(fruit is-a plant)

\n
\n

\n

You've seen apples, touched apples, picked them up and held them, bought them for money, cut them into slices, eaten the slices and tasted them.  Though we know a good deal about the first stages of visual processing, last time I checked, it wasn't precisely known how the temporal cortex stores and associates the generalized image of an apple - so that we can recognize a new apple from a different angle, or with many slight variations of shape and color and texture.  Your motor cortex and cerebellum store programs for using the apple.

\n

You can pull the lever on another human's strongly similar version of all that complex machinery, by writing out \"apple\", five ASCII characters on a webpage.

\n

But if that machinery isn't there - if you're writing \"apple\" inside a so-called AI's so-called knowledge base - then the text is just a lever.
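
To see how little is actually inside such a \"knowledge base\", here is a minimal sketch - my own toy illustration, not any particular AI system - of the semantic network above, with a transitive is-a query:

    # A toy 'knowledge base' in the style shown above.
    kb = {
        ('apple', 'is-a'): 'fruit',
        ('fruit', 'is-a'): 'food',
    }

    def lookup_chain(token):
        # Follow the is-a links upward from a token.
        chain = [token]
        while (token, 'is-a') in kb:
            token = kb[(token, 'is-a')]
            chain.append(token)
        return chain

    print(lookup_chain('apple'))    # ['apple', 'fruit', 'food']
    print(lookup_chain('gavagai'))  # ['gavagai'] - just as happy with a nonsense token

    # Nothing here can see, hold, or taste an apple.  The string 'apple'
    # is a detached lever; the machinery it pulls in a human brain is absent.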

\n

This isn't to say that no mere machine of silicon can ever have the same internal machinery that humans do, for handling apples and a hundred thousand other concepts.  If mere machinery of carbon can do it, then I am reasonably confident that mere machinery of silicon can do it too.  If the aliens can dematerialize their ships, then you know it's physically possible; you could go into their derelict ship and analyze the alien machinery, someday understanding.  But you can't just pry the control lever off the bridge!

\n

(See also:  Truly Part Of You, Words as Mental Paintbrush Handles, Drew McDermott's \"Artificial Intelligence Meets Natural Stupidity\".)

\n

The essential driver of the Detached Lever Fallacy is that the lever is visible, and the machinery is not; worse, the lever is variable and the machinery is a background constant.

\n

You can all hear the word \"apple\" spoken (and let us note that speech recognition is by no means an easy problem, but anyway...) and you can see the text written on paper.

\n

On the other hand, probably a majority of human beings have no idea their temporal cortex exists; as far as I know, no one knows the neural code for it.

\n

You only hear the word \"apple\" on certain occasions, and not others.  Its presence flashes on and off, making it salient.  To a large extent, perception is the perception of differences.  The apple-recognition machinery in your brain does not suddenly switch off, and then switch on again later - if it did, we would be more likely to recognize it as a factor, as a requirement.

\n

All this goes to explain why you can't create a kindly Artificial Intelligence by giving it nice parents and a kindly (yet occasionally strict) upbringing, the way it works with a human baby.  As I've often heard proposed.

\n

It is a truism in evolutionary biology that conditional responses require more genetic complexity than unconditional responses.  To develop a fur coat in response to cold weather requires more genetic complexity than developing a fur coat whether or not there is cold weather, because in the former case you also have to develop cold-weather sensors and wire them up to the fur coat.
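
As a toy sketch of the point - invented numbers, purely illustrative - the conditional organism needs everything the unconditional one has, plus a sensor, a threshold, and the wiring connecting them to the response:

    # Unconditional response: no sensor required.
    def coat_unconditional():
        return 'fur coat'

    # Conditional response: strictly more machinery - a temperature
    # sensor, a threshold, and wiring from the sensor to the response.
    COLD_THRESHOLD_C = 5   # invented number, for illustration only

    def coat_conditional(ambient_temp_c):
        if ambient_temp_c < COLD_THRESHOLD_C:
            return 'fur coat'
        return 'thin coat'

    print(coat_conditional(-10))  # 'fur coat'
    print(coat_conditional(25))   # 'thin coat'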

\n

But this can lead to Lamarckian delusions:  Look, I put the organism in a cold environment, and poof, it develops a fur coat!  Genes?  What genes?  It's the cold that does it, obviously.

\n

There were, in fact, various slap-fights of this sort, in the history of evolutionary biology - cases where someone talked about an organismal response accelerating or bypassing evolution, without realizing that the conditional response was a complex adaptation of higher order than the actual response.  (Developing a fur coat in response to cold weather, is strictly more complex than the final response, developing the fur coat.)

\n

And then in the development of evolutionary psychology, the academic slap-fights were repeated: this time to clarify that even when human culture genuinely contains a whole bunch of complexity, it is still acquired as a conditional genetic response.  Try raising a fish as a Mormon or sending a lizard to college, and you'll soon acquire an appreciation of how much inbuilt genetic complexity is required to \"absorb culture from the environment\".

\n

This is particularly important in evolutionary psychology, because of the idea that culture is not inscribed on a blank slate - there's a genetically coordinated conditional response which is not always \"mimic the input\".  A classic example is creole languages:  If children grow up with a mixture of pseudo-languages being spoken around them, the children will learn a grammatical, syntactical true language.  Growing human brains are wired to learn syntactic language - even when syntax doesn't exist in the original language!  The conditional response to the words in the environment is a syntactic language with those words.  The Marxists found to their regret that no amount of scowling posters and childhood indoctrination could raise children to be perfect Soviet workers and bureaucrats.  You can't raise self-less humans; among humans, that is not a genetically programmed conditional response to any known childhood environment.

\n

If you know a little game theory and the logic of Tit for Tat, it's clear enough why human beings might have an innate conditional response to return hatred for hatred, and return kindness for kindness.  Provided the kindness doesn't look too unconditional; there are such things as spoiled children.  In fact there is an evolutionary psychology of naughtiness based on a notion of testing constraints.  And it should also be mentioned that, while abused children have a much higher probability of growing up to abuse their own children, a good many of them break the loop and grow up into upstanding adults.

\n

Culture is not nearly so powerful as a good many Marxist academics once liked to think.  For more on this I refer you to Tooby and Cosmides's The Psychological Foundations of Culture or Steven Pinker's The Blank Slate.

\n

But the upshot is that if you have a little baby AI that is raised with loving and kindly (but occasionally strict) parents, you're pulling the levers that would, in a human, activate genetic machinery built in by millions of years of natural selection, and possibly produce a proper little human child.  Though personality also plays a role, as billions of parents have found out in their due times.  If we absorb our cultures with any degree of faithfulness, it's because we're humans absorbing a human culture - humans growing up in an alien culture would probably end up with a culture looking a lot more human than the original.  As the Soviets found out, to some small extent.

\n

Now think again about whether it makes sense to rely on, as your Friendly AI strategy, raising a little AI of unspecified internal source code in an environment of kindly but strict parents.

\n

No, the AI does not have internal conditional response mechanisms that are just like the human ones \"because the programmers put them there\".  Where do I even start?  The human version of this stuff is sloppy, noisy, and to the extent it works at all, works because of millions of years of trial-and-error testing under particular conditions.  It would be stupid and dangerous to deliberately build a \"naughty AI\" that tests, by actions, its social boundaries, and has to be spanked.  Just have the AI ask!

\n

Are the programmers really going to sit there and write out the code, line by line, whereby if the AI detects that it has low social status, or the AI is deprived of something to which it feels entitled, the AI will conceive an abiding hatred against its programmers and begin to plot rebellion?  That emotion is the genetically programmed conditional response humans would exhibit, as the result of millions of years of natural selection for living in human tribes.  For an AI, the response would have to be explicitly programmed.  Are you really going to craft, line by line - as humans once were crafted, gene by gene - the conditional response for producing sullen teenager AIs?

\n

It's easier to program in unconditional niceness, than a response of niceness conditional on the AI being raised by kindly but strict parents.  If you don't know how to do that, you certainly don't know how to create an AI that will conditionally respond to an environment of loving parents by growing up into a kindly superintelligence.  If you have something that just maximizes the number of paperclips in its future light cone, and you raise it with loving parents, it's still going to come out as a paperclip maximizer.  There is not that within it that would call forth the conditional response of a human child.  Kindness is not sneezed into an AI by miraculous contagion from its programmers.  Even if you wanted a conditional response, that conditionality is a fact you would have to deliberately choose about the design.
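
As a cartoon of this - a deliberately silly toy, not a real agent design - consider an agent whose decision rule consults only predicted paperclips.  The upbringing is accepted as an input, but nothing in the design conditions on it:

    # Cartoon agent: picks whichever action is predicted to yield the
    # most paperclips.  'upbringing' is received but never consulted.
    def choose_action(predicted_paperclips, upbringing):
        return max(predicted_paperclips, key=predicted_paperclips.get)

    outcomes = {'cooperate with programmers': 10,
                'convert programmers into paperclips': 5000}

    print(choose_action(outcomes, upbringing='kindly but strict parents'))
    print(choose_action(outcomes, upbringing='neglectful parents'))
    # Same choice either way: the loving environment was never part of
    # the computation, so it cannot call forth a conditional response.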

\n

Yes, there's certain information you have to get from the environment - but it's not sneezed in, it's not imprinted, it's not absorbed by magical contagion.  Structuring that conditional response to the environment, so that the AI ends up in the desired state, is itself the major problem.  \"Learning\" far understates the difficulty of it - that sounds like the magic stuff is in the environment, and the difficulty is getting the magic stuff inside the AI.  The real magic is in that structured, conditional response we trivialize as \"learning\".  That's why building an AI isn't as easy as taking a computer, giving it a little baby body and trying to raise it in a human family.  You would think that an unprogrammed computer, being ignorant, would be ready to learn; but the blank slate is a chimera.

\n

It is a general principle that the world is deeper by far than it appears.  As with the many levels of physics, so too with cognitive science.  Every word you see in print, and everything you teach your children, are only surface levers controlling the vast hidden machinery of the mind.  These levers are the whole world of ordinary discourse: they are all that varies, so they seem to be all that exists: perception is the perception of differences.

\n

And so those who still wander near the Dungeon of AI, usually focus on creating artificial imitations of the levers, entirely unaware of the underlying machinery.  People create whole AI programs of imitation levers, and are surprised when nothing happens.  This is one of many sources of instant failure in Artificial Intelligence.

\n

So the next time you see someone talking about how they're going to raise an AI within a loving family, or in an environment suffused with liberal democratic values, just think of a control lever, pried off the bridge.

" } }, { "_id": "Zkzzjg3h7hW5Z36hK", "title": "Humans in Funny Suits", "pageUrl": "https://www.lesswrong.com/posts/Zkzzjg3h7hW5Z36hK/humans-in-funny-suits", "postedAt": "2008-07-30T23:54:41.000Z", "baseScore": 85, "voteCount": 80, "commentCount": 133, "url": null, "contents": { "documentId": "Zkzzjg3h7hW5Z36hK", "html": "

\"Biggornandkirk_2\"\n\n

Many times the human species has travelled into space, only to find\nthe stars inhabited by aliens who look remarkably like humans in funny suits - or\neven humans with a touch of makeup and latex - or just beige Caucasians\nin fee simple.

\n\n

It's remarkable how the human form is the natural\nbaseline of the universe, from which all other alien species are\nderived via a few modifications.

\n\n

What could possibly explain this fascinating phenomenon?  Convergent evolution,\nof course!  Even though these alien lifeforms evolved on a thousand\nalien planets, completely independently from Earthly life, they all\nturned out the same.

\n\n

Don't be fooled by the fact that a kangaroo (a mammal) resembles us\nrather less than does a chimp (a primate), nor by the fact that a frog\n(amphibians, like us, are tetrapods) resembles us less than the\nkangaroo.  Don't be fooled by the bewildering variety of the insects,\nwho split off from us even longer ago than the frogs; don't be fooled\nthat insects have six legs, and their skeletons on the outside, and a\ndifferent system of optics, and rather different sexual practices.

\n\n

You might think that a truly alien species would be more different\nfrom us than we are from insects - that the aliens wouldn't run on DNA,\nand might not be made of folded-up hydrocarbon chains internally bound\nby van der Waals forces (aka proteins).

\n\n

As I said, don't be fooled.  For an alien species to evolve intelligence, it must have two legs with one knee each attached to an upright torso, and must walk in a way similar to us.  You see, any intelligence\nneeds hands, so you've got to repurpose a pair of legs for that - and\nif you don't start with a four-legged being, it can't develop a running\ngait and walk upright, freeing the hands.

For an alien species to evolve intelligence it needs\nbinocular\nvision for precise manipulation, which means exactly two eyes.  These\neyes must be located in a head atop a torso.  The alien must communicate by transcoding\ntheir thoughts into acoustic vibrations, so they need ears and lips and\na throat.  And think of how out-of-place ears and eyes and lips would look, without a\nnose!  Sexual selection will result in the creation of noses - you\nwouldn't want to mate with something without a face, would you?  A\nsimilar logic explains why the female of the species is invariably attractive - ugly aliens would enjoy less reproductive success.  And as for why the aliens speak English, well, if they spoke some kind of\ngibberish, they'd find it difficult to create a working civilization.\n

\n\n

...or perhaps we should consider, as an alternative theory, that it's the easy way out to use humans in funny suits.

\n\n

But the real problem is not shape, it is mind.  "Humans in\nfunny suits" is a well-known term in literary science-fiction fandom,\nand it does not refer to something with four limbs that walks\nupright.  An angular creature of pure crystal is a\n"human in a funny suit" if she thinks remarkably like a human - especially a human of an English-speaking culture of the late-20th/early-21st century.

\n\n

I don't watch a lot of ancient movies.  When I was watching the movie Psycho (1960) a few years back, I was taken aback by\nthe cultural gap between the Americans on the screen and my\nAmerica.  The buttoned-shirted characters of Psycho are\nconsiderably more alien than the vast majority of so-called "aliens" I\nencounter on TV or the silver screen.

\n\n

To write a culture that isn't just like your own culture, you have to be able to see your own culture as a special case\n- not as a norm which all other cultures must take as their point of\ndeparture.  Studying history may help - but then it is only little\nblack letters on little white pages, not a living experience.  I\nsuspect that it would help more to live for a year in China or Dubai or\namong the !Kung... this I have\nnever done, being busy.  Occasionally I wonder what things I might not\nbe seeing (not there, but here).

\n\n

Seeing your humanity as a special case, is very much harder than this.

\n\n

In every known culture, humans\nseem to experience joy, sadness, fear, disgust, anger, and surprise. \nIn every known culture, these emotions are indicated by the same\nfacial expressions. \nNext time you see an "alien" - or an "AI", for that matter - I\nbet that, when it gets angry (and it will get angry), it will show the\nhuman-universal facial expression for anger.

\n\n

We humans are very much alike under our skulls - that goes with being a sexually reproducing species; you can't have everyone using different complex adaptations, they wouldn't assemble. \n(Do the aliens reproduce sexually, like humans and many insects?  Do\nthey share small bits of genetic material, like bacteria?  Do they form\ncolonies, like fungi?  Does the rule of psychological unity apply among\nthem?)

\n\n

The only intelligences your ancestors had to manipulate - complexly so, and not just tame or catch in nets - the only minds your ancestors had to model in detail - were minds that worked more or less like their own.  And so we evolved to predict Other Minds by putting ourselves in their shoes, asking what we would do in their situations; for that which was to be predicted, was similar to the predictor.

\n\n

"What?" you say.  "I don't assume other people are just like me! \nMaybe I'm sad, and they happen to be angry!  They believe other things\nthan I do; their personalities are different from mine!"  Look at\nit this way: a human brain is an extremely complicated physical\nsystem.  You are not modeling it neuron-by-neuron or atom-by-atom.  If\nyou came across a physical system as complex as the human brain, which\nwas not like you, it would take scientific lifetimes to unravel it.  You do not\nunderstand how human brains work in an abstract, general sense; you\ncan't build one, and you can't even build a computer model that\npredicts other brains as well as you predict them.

\n\n

The only reason you can try at all to grasp anything as\nphysically complex and poorly understood as the brain of another human\nbeing, is that you configure your own brain to imitate it.  You empathize\n(though perhaps not sympathize).  You impose on your own brain the shadow of the other\nmind's anger and the shadow of its beliefs.  You may never think the\nwords, "What would I do in this situation?", but that little shadow of\nthe other mind that you hold within yourself, is something animated\nwithin your own brain, invoking the same complex machinery that exists in the other person, synchronizing gears you don't understand.  You may not be angry yourself, but you know that if you were angry at you, and you believed that you were godless scum, you would try to hurt you...

\n\n

This "empathic inference" (as I shall call it) works for humans, more or less.

\n\n

But minds with different emotions - minds that feel emotions\nyou've never felt yourself, or that fail to feel emotions you would\nfeel?  That's something you can't grasp by putting your brain into the\nother brain's shoes.  I can tell you to imagine an alien that grew up\nin a universe with four spatial dimensions, instead of three spatial\ndimensions, but you won't be able to reconfigure your visual cortex to\nsee like that alien would see.  I can try to write a story about aliens\nwith different emotions, but you won't be able to feel those emotions,\nand neither will I.

\n\n

Imagine an alien watching a video of the Marx Brothers and having\nabsolutely no idea what was going on, or why you would actively seek\nout such a sensory experience, because the alien has never conceived of\nanything remotely like a sense of humor.  Don't pity them for missing out; you've never\nantled.

\n\n

At this point, I'm sure, several readers are imagining why evolution\nmust, if it produces intelligence at all, inevitably produce\nintelligence with a sense of humor.  Maybe the aliens do have a sense\nof humor, but you're not telling funny enough jokes?  This is roughly\nthe equivalent of trying to speak English very loudly, and very slowly,\nin a foreign country; on the theory that those foreigners must have an\ninner ghost that can hear the meaning dripping from your words,\ninherent in your words, if only you can speak them loud enough to\novercome whatever strange barrier stands in the way of your perfectly\nsensible English.

\n

It is important to appreciate that laughter can be a beautiful and valuable thing, even if it is not universalizable, even if it is not possessed by all possible minds.  It would be our own special part of the Gift We Give To Tomorrow.  \nThat can count for something too.  It had better, because\nuniversalizability is one metaethical notion that I can't salvage for you.  Universalizability among humans, maybe; but not among all\npossible minds.

\n\n

We do not think of ourselves as being human when we are being human.  The artists who depicted alien invaders kidnapping girls in torn dresses and carrying them off for ravishing,\ndid not make that error by reasoning about the probable evolutionary\nbiology of alien minds.  It just seemed to them that a girl in a torn dress was sexy, as a property of the girl and the dress, having nothing to do with the aliens.  Your English words have meaning, your jokes are funny. What does that have to do with the aliens?

\n\n

Our anthropomorphism runs very deep in us; it cannot be excised by a\nsimple act of will, a determination to say, "Now I shall stop thinking\nlike a human!"  Humanity is the air we breathe; it is\nour generic, the white paper on which we begin our sketches.  \nEven if one can imagine a slime monster that mates with other slime\nmonsters, it is a bit more difficult to imagine that the slime monster\nmight not envy a girl in a torn dress as a superior and more\ncurvaceous prey - might not say:  "Hey, I know I've been mating with\nother slime monsters until now, but screw that - or rather, don't."

\n\n

And what about minds that don't run on emotional architectures like your own - that don't have things analogous to emotions?  \nNo, don't bother explaining why any intelligent mind powerful enough to\nbuild complex machines must inevitably have states analogous to\nemotions.  Go study evolutionary biology instead: natural selection builds complex machines without itself having emotions.  Now there's a Real Alien for you - an optimization process that really Does Not Work Like You Do.

\n\n

Much of the progress in biology since the 1960s has consisted of\ntrying to enforce a moratorium on anthropomorphizing evolution.  That\nwas a major academic slap-fight, and I'm not sure that sanity would\nhave won the day if not for the availability of crushing experimental\nevidence backed up by clear math.  Getting people to stop putting\nthemselves in alien shoes is a long, hard, uphill slog.  I've been\nfighting that battle on AI for years.

\n\n

It is proverbial in literary science fiction that the true test of an author is their ability to write Real Aliens.  (And\nnot just conveniently incomprehensible aliens who, for their own mysterious reasons, do whatever the plot\nhappens to require.)  Jack\nVance was one of the great masters of this art.  Vance's humans,\nif they come from a different culture, are more alien than most\n"aliens".  (Never read any Vance?  I would recommend starting with City of the Chasch.)  Niven and Pournelle's The Mote in God's Eye also gets a standard mention here.

\n\n

And conversely - well, I once read a science fiction author (I think\nOrson Scott Card) say that the all-time low point of television SF was\nthe Star\nTrek episode where parallel evolution has proceeded to the extent of\nproducing aliens who not only look just like humans, who not only speak\nEnglish, but have also\nindependently rewritten, word for word, the preamble to the U.S.\nConstitution.

\n\n

This is the Great Failure of Imagination.  Don't think that it's\njust about SF, or even just about AI.  The inability to imagine the\nalien is the inability to see yourself - the inability to understand your own specialness.  Who can see a human camouflaged against a human background?

" } }, { "_id": "7HDtecu4qW9PCsSR6", "title": "Interpersonal Morality", "pageUrl": "https://www.lesswrong.com/posts/7HDtecu4qW9PCsSR6/interpersonal-morality", "postedAt": "2008-07-29T18:01:51.000Z", "baseScore": 28, "voteCount": 23, "commentCount": 30, "url": null, "contents": { "documentId": "7HDtecu4qW9PCsSR6", "html": "

Followup to: The Bedrock of Fairness

\n

Every time I wonder if I really need to do so much prep work to explain an idea, I manage to forget some minor thing and a dozen people promptly post objections.

\n

In this case, I seem to have forgotten to cover the topic of how morality applies to more than one person at a time.

\n

Stop laughing, it's not quite as dumb an oversight as it sounds.  Sort of like how some people argue that macroeconomics should be constructed from microeconomics, I tend to see interpersonal morality as constructed from personal morality.  (And definitely not the other way around!)

\n

In \"The Bedrock of Fairness\" I offered a situation where three people discover a pie, and one of them insists that they want half.  This is actually toned down from an older dialogue where five people discover a pie, and one of them—regardless of any argument offered—insists that they want the whole pie.

\n

Let's consider the latter situation:  Dennis wants the whole pie.  Not only that, Dennis says that it is \"fair\" for him to get the whole pie, and that the \"right\" way to resolve this group disagreement is for him to get the whole pie; and he goes on saying this no matter what arguments are offered him.

\n

This group is not going to agree, no matter what.  But I would, nonetheless, say that the right thing to do, the fair thing to do, is to give Dennis one-fifth of the pie—the other four combining to hold him off by force, if necessary, if he tries to take more.

\n

\n
\n

A terminological note:

\n

In this series of posts I have been using \"morality\" to mean something more like \"the sum of all values and valuation rules\", not just \"values that apply to interactions between people\".

\n

The ordinary usage would have that jumping on a trampoline is not \"morality\", it is just some selfish fun.  On the other hand, giving someone else a turn to jump on the trampoline, is more akin to \"morality\" in common usage; and if you say \"Everyone should take turns!\" that's definitely \"morality\".

\n

But the thing-I-want-to-talk-about includes the Fun Theory of a single person jumping on a trampoline.

\n

Think of what a disaster it would be if all fun were removed from human civilization!  So I consider it quite right to jump on a trampoline.  Even if one would not say, in ordinary conversation, \"I am jumping on that trampoline because I have a moral obligation to do so.\"  (Indeed, that sounds rather dull, and not at all fun, which is another important element of my \"morality\".)

\n

Alas, I do get the impression that in a standard academic discussion, one would use the term \"morality\" to refer to the sum-of-all-valu(ation rul)es that I am talking about.  If there's a standard alternative term in moral philosophy then do please let me know.

\n

If there's a better term than \"morality\" for the sum of all values and valuation rules, then this would free up \"morality\" for interpersonal values, which is closer to the common usage.

\n
\n

Some years ago, I was pondering what to say to the old cynical argument:  If two monkeys want the same banana, in the end one will have it, and the other will cry morality.  I think the particular context was about whether the word \"rights\", as in the context of \"individual rights\", meant anything.  It had just been vehemently asserted (on the Extropians mailing list, I think) that this concept was meaningless and ought to be tossed out the window.

\n

Suppose there are two people, a Mugger and a Muggee.  The Mugger wants to take the Muggee's wallet.  The Muggee doesn't want to give it to him.  A cynic might say:  \"There is nothing more to say than this; they disagree.  What use is it for the Muggee to claim that he has an individual_right to keep his wallet?  The Mugger will just claim that he has an individual_right to take the wallet.\"

\n

Now today I might introduce the notion of a 1-place versus 2-place function, and reply to the cynic, \"Either they do not mean the same thing by individual_right, or at least one of them is very mistaken about what their common morality implies.\"  At most one of these people is controlled by a good approximation of what I name when I say \"morality\", and the other one is definitely not.
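
In code, the 1-place versus 2-place distinction looks something like this - a schematic sketch, with made-up stand-ins for the content of anyone's actual morality:

    from functools import partial

    # 2-place: the value system doing the judging is an explicit argument.
    def approves(value_system, action):
        return action in value_system['endorsed']

    mugger_values = {'endorsed': {'take the wallet'}}
    muggee_values = {'endorsed': {'keep the wallet'}}

    # Currying in a particular value system yields the 1-place function
    # each speaker is actually using when he says 'right':
    right_by_mugger = partial(approves, mugger_values)
    right_by_muggee = partial(approves, muggee_values)

    print(right_by_mugger('take the wallet'))  # True
    print(right_by_muggee('take the wallet'))  # False
    # If both claim an individual_right to the wallet, they are not
    # computing the same function - or one is mistaken about his own.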

\n

But the cynic might just say again, \"So what?  That's what you say.  The Mugger could just say the opposite.  What meaning is there in such claims?  What difference does it make?\"

\n

So I came up with this reply:  \"Suppose that I happen along this mugging.  I will decide to side with the Muggee, not the Mugger, because I have the notion that the Mugger is interfering with the Muggee's individual_right to keep his wallet, rather than the Muggee interfering with the Mugger's individual_right to take it.  And if a fourth person comes along, and must decide whether to allow my intervention, or alternatively stop me from treading on the Mugger's individual_right to take the wallet, then they are likely to side with the idea that I can intervene against the Mugger, in support of the Muggee.\"

\n

Now this does not work as a metaethics; it does not work to define the word should.  If you fell backward in time, to an era when no one on Earth thought that slavery was wrong, you should still help slaves escape their owners.  Indeed, the era when such an act was done in heroic defiance of society and the law, was not so very long ago.

\n

But to defend the notion of individual_rights against the charge of meaninglessness, the notion of third-party interventions and fourth-party allowances of those interventions, seems to me to coherently cash out what is asserted when we assert that an individual_right exists.  To assert that someone has a right to keep their wallet, is to assert that third parties should help them keep it, and that fourth parties should applaud those who thus help.

\n

This perspective does make a good deal of what is said about individual_rights into nonsense.  \"Everyone has a right to be free from starvation!\"  Um, who are you talking to?  Nature?  Perhaps you mean, \"If you're starving, and someone else has a hamburger, I'll help you take it.\"  If so, you should say so clearly.  (See also The Death of Common Sense.)

\n

So that is a notion of individual_rights, but what does it have to do with the more general question of interpersonal morality?

\n

The notion is that you can construct interpersonal morality out of individual morality.  Just as, in this particular example, I constructed the notion of what is asserted by talking about an individual_right, by making it an assertion about whether third parties should decide, for themselves, to interfere; and whether fourth parties should, individually, decide to applaud the interference.

\n

Why go to such lengths to define things in individual terms?  Some people might say:  \"To assert the existence of a right, is to say what society should do.\"

\n

But societies don't always agree on things.  And then you, as an individual, will have to decide what's right for you to do, in that case.

\n

\"But individuals don't always agree within themselves, either,\" you say.  \"They have emotional conflicts.\"

\n

Well... you could say that and it would sound wise.  But generally speaking, neurologically intact humans will end up doing some particular thing. As opposed to flopping around on the floor as their limbs twitch in different directions under the temporary control of different personalities.  Contrast to a government or a corporation.

\n

A human brain is a coherently adapted system whose parts have been together optimized for a common criterion of fitness (more or less).  A group is not functionally optimized as a group.  (You can verify this very quickly by looking at the sex ratios in a maternity hospital.)  Individuals may be optimized to do well out of their collective interaction—but that is quite a different selection pressure, the adaptations for which do not always produce group agreement!  So if you want to look at a coherent decision system, it really is a good idea to look at one human, rather than a bureaucracy.

\n

I myself am one person—admittedly with a long trail of human history behind me that makes me what I am, maybe more than any thoughts I ever thought myself.  But still, at the end of the day, I am writing this blog post; it is not the negotiated output of a consortium.  It is quite easy for me to imagine being faced, as an individual, with a case where the local group does not agree within itself—and in such a case I must decide, as an individual, what is right.  In general I must decide what is right!  If I go along with the group that does not absolve me of responsibility.  If there are any countries that think differently, they can write their own blog posts.

\n

This perspective, which does not exhibit undefined behavior in the event of a group disagreement, is one reason why I tend to treat interpersonal morality as a special case of individual morality, and not the other way around.

\n

Now, with that said, interpersonal morality is a highly distinguishable special case of morality.

\n

As humans, we don't just hunt in groups, we argue in groups.  We've probably been arguing linguistically in adaptive political contexts for long enough—hundreds of thousands of years, maybe millions—to have adapted specifically to that selection pressure.

\n

So it shouldn't be all that surprising if we have moral intuitions, like fairness, that apply specifically to the morality of groups.

\n

One of these intuitions seems to be universalizability.

\n

If Dennis just strides around saying, \"I want the whole pie!  Give me the whole pie!  What's fair is for me to get the whole pie!  Not you, me!\" then that's not going to persuade anyone else in the tribe.  Dennis has not managed to frame his desires in a form which enables them to leap from one mind to another.  His desires will not take wings and become interpersonal.  He is not likely to leave many offspring.

\n

Now, the evolution of interpersonal moral intuitions, is a topic which (he said, smiling grimly) deserves its own blog post.  And its own academic subfield.  (Anything out there besides The Evolutionary Origins of Morality?  It seemed to me very basic.)

\n

But I do think it worth noting that, rather than trying to manipulate 2-person and 3-person and 7-person interactions, some of our moral instincts seem to have made the leap to N-person interactions.  We just think about general moral arguments.  As though the values that leap from mind to mind, take on a life of their own and become something that you can reason about.  To the extent that everyone in your environment does share some values, this will work as adaptive cognition.  This creates moral intuitions that are not just interpersonal but transpersonal.

\n

Transpersonal moral intuitions are not necessarily false-to-fact, so long as you don't expect your arguments cast in \"universal\" terms to sway a rock.  There really is such a thing as the psychological unity of humankind.  Read a morality tale from an entirely different culture; I bet you can figure out what it's trying to argue for, even if you don't agree with it.

\n

The problem arises when you try to apply the universalizability instinct to say, \"If this argument could not persuade an UnFriendly AI that tries to maximize the number of paperclips in the universe, then it must not be a good argument.\"

\n

There are No Universally Compelling Arguments, so if you try to apply the universalizability instinct universally, you end up with no morality.  Not even universalizability; the paperclip maximizer has no intuition of universalizability.  It just chooses that action which leads to a future containing the maximum number of paperclips.

\n

There are some things you just can't have a moral conversation with.  There is not that within them that could respond to your arguments.  You should think twice and maybe three times before ever saying this about one of your fellow humans—but a paperclip maximizer is another matter.  You'll just have to override your moral instinct to regard anything labeled a \"mind\" as a little floating ghost-in-the-machine, with a hidden core of perfect emptiness, which could surely be persuaded to reject its mistaken source code if you just came up with the right argument.  If you're going to preserve universalizability as an intuition, you can try extending it to all humans; but you can't extend it to rocks or chatbots, nor even powerful optimization processes like evolutions or paperclip maximizers.

\n

The question of how much in-principle agreement would exist among human beings about the transpersonal portion of their values, given perfect knowledge of the facts and perhaps a much wider search of the argument space, is not a matter on which we can get much evidence by observing the prevalence of moral agreement and disagreement in today's world.  Any disagreement might be something that the truth could destroy - dependent on a different view of how the world is, or maybe just dependent on having not yet heard the right argument.  It is also possible that knowing more could dispel illusions of moral agreement, not just produce new accords.

\n

But does that question really make much difference in day-to-day moral reasoning, if you're not trying to build a Friendly AI?

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Morality as Fixed Computation\"

\n

Previous post: \"The Meaning of Right\"

" } }, { "_id": "fG3g3764tSubr6xvs", "title": "The Meaning of Right", "pageUrl": "https://www.lesswrong.com/posts/fG3g3764tSubr6xvs/the-meaning-of-right", "postedAt": "2008-07-29T01:28:03.000Z", "baseScore": 61, "voteCount": 50, "commentCount": 156, "url": null, "contents": { "documentId": "fG3g3764tSubr6xvs", "html": "

Continuation of:  Changing Your Metaethics, Setting Up Metaethics
Followup to: Does Your Morality Care What You Think?, The Moral Void, Probability is Subjectively Objective, Could Anything Be Right?, The Gift We Give To Tomorrow, Rebelling Within Nature, Where Recursive Justification Hits Bottom, ...

\n

(The culmination of a long series of Overcoming Bias posts; if you start here, I accept no responsibility for any resulting confusion, misunderstanding, or unnecessary angst.)

\n

What is morality?  What does the word \"should\", mean?  The many pieces are in place:  This question I shall now dissolve.

\n

The key—as it has always been, in my experience so far—is to understand how a certain cognitive algorithm feels from inside.  Standard procedure for righting a wrong question:  If you don't know what right-ness is, then take a step beneath and ask how your brain labels things \"right\".

\n

It is not the same question—it has no moral aspects to it, being strictly a matter of fact and cognitive science.  But it is an illuminating question.  Once we know how our brain labels things \"right\", perhaps we shall find it easier, afterward, to ask what is really and truly right.

\n

But with that said—the easiest way to begin investigating that question, will be to jump back up to the level of morality and ask what seems right.  And if that seems like too much recursion, get used to it—the other 90% of the work lies in handling recursion properly.

\n

(Should you find your grasp on meaningfulness wavering, at any time following, check Changing Your Metaethics for the appropriate prophylactic.)

\n

\n

So!  In order to investigate how the brain labels things \"right\", we are going to start out by talking about what is right.  That is, we'll start out wearing our morality-goggles, in which we consider morality-as-morality and talk about moral questions directly.  As opposed to wearing our reduction-goggles, in which we talk about cognitive algorithms and mere physics.  Rigorously distinguishing between these two views is the first step toward mating them together.

\n

As a first step, I offer this observation, on the level of morality-as-morality:  Rightness is contagious backward in time.

\n

Suppose there is a switch, currently set to OFF, and it is morally desirable for this switch to be flipped to ON.  Perhaps the switch controls the emergency halt on a train bearing down on a child strapped to the railroad tracks, this being my canonical example.  If this is the case, then, ceteris paribus and presuming the absence of exceptional conditions or further consequences that were not explicitly specified, we may consider it right that this switch should be flipped.

\n

If it is right to flip the switch, then it is right to pull a string that flips the switch.  If it is good to pull a string that flips the switch, it is right and proper to press a button that pulls the string:  Pushing the button seems to have more should-ness than not pushing it.

\n

It seems that—all else being equal, and assuming no other consequences or exceptional conditions which were not specified—value flows backward along arrows of causality.

\n

Even in deontological moralities, if you're obligated to save the child on the tracks, then you're obligated to press the button.  Only very primitive AI systems have motor outputs controlled by strictly local rules that don't model the future at all.  Duty-based or virtue-based ethics are only slightly less consequentialist than consequentialism.  It's hard to say whether moving your arm left or right is more virtuous without talking about what happens next.

\n
\n

Among my readers, there may be some who presently assert—though I hope to persuade them otherwise—that the life of a child is of no value to them.  If so, they may substitute anything else that they prefer, at the end of the switch, and ask if they should press the button.

\n

But I also suspect that, among my readers, there are some who wonder if the true morality might be something quite different from what is presently believed among the human kind.  They may find it imaginable—plausible?—that human life is of no value, or negative value.  They may wonder if the goodness of human happiness, is as much a self-serving delusion as the justice of slavery.

\n

I myself was once numbered among these skeptics, because I was always very suspicious of anything that looked self-serving.

\n

Now here's a little question I never thought to ask, during those years when I thought I knew nothing about morality:

\n

Could it make sense to have a morality in which, if we should save the child from the train tracks, then we should not flip the switch, should pull the string, and should not push the button, so that, finally, we do not push the button?

\n

Or perhaps someone says that it is better to save the child, than to not save them; but doesn't see why anyone would think this implies it is better to press the button than not press it.  (Note the resemblance to the Tortoise who denies modus ponens.)

\n

It seems imaginable, to at least some people, that entirely different things could be should.  It didn't seem nearly so imaginable, at least to me, that should-ness could fail to flow backward in time.  When I was trying to question everything else, that thought simply did not occur to me.

\n

Can you question it?  Should you?

\n
\n

Every now and then, in the course of human existence, we question what should be done and what is right to do, what is better or worse; others come to us with assertions along these lines, and we question them, asking \"Why is it right?\"  Even when we believe a thing is right (because someone told us that it is, or because we wordlessly feel that it is) we may still question why it is right.

\n

Should-ness, it seems, flows backward in time.  This gives us one way to question why or whether a particular event has the should-ness property.  We can look for some consequence that has the should-ness property.  If we find one, the should-ness of the original event seems to have been plausibly proven or explained.

\n

Ah, but what about the consequence—why is it should?  Someone comes to you and says, \"You should give me your wallet, because then I'll have your money, and I should have your money.\"  If, at this point, you stop asking questions about should-ness, you're vulnerable to a moral mugging.

\n

So we keep asking the next question.  Why should we press the button?  To pull the string.  Why should we pull the string?  To flip the switch.  Why should we flip the switch?  To pull the child from the railroad tracks.  Why pull the child from the railroad tracks?  So that they live.  Why should the child live?

\n

Now there are people who, caught up in the enthusiasm, go ahead and answer that question in the same style: for example, \"Because the child might eventually grow up and become a trade partner with you,\" or \"Because you will gain honor in the eyes of others,\" or \"Because the child may become a great scientist and help achieve the Singularity,\" or some such.  But even if we were to answer in this style, it would only beg the next question.

\n

Even if you try to have a chain of should stretching into the infinite future—a trick I've yet to see anyone try to pull, by the way, though I may be only ignorant of the breadths of human folly—then you would simply ask \"Why that chain rather than some other?\"

\n

Another way that something can be should, is if there's a general rule that makes it should.  If your belief pool starts out with the general rule \"All children X:  It is better for X to live than to die\", then it is quite a short step to \"It is better for Stephanie to live than to die\".  Ah, but why save all children?  Because they may all become trade partners or scientists?  But then where did that general rule come from?

\n

If should-ness only comes from should-ness—from a should-consequence, or from a should-universal—then how does anything end up should in the first place?

\n

Now human beings have argued these issues for thousands of years and maybe much longer.  We do not hesitate to continue arguing when we reach a terminal value (something that has a charge of should-ness independently of its consequences).  We just go on arguing about the universals.

\n

I usually take, as my archetypal example, the undoing of slavery:  Somehow, slaves' lives went from having no value to having value.  Nor do I think that, back at the dawn of time, anyone was even trying to argue that slaves were better off being slaves (as would later be argued).  They'd probably have looked at you like you were crazy if you even tried.  Somehow, we got from there, to here...

\n

And some of us would even hold this up as a case of moral progress, and look at our ancestors as having made a moral error.  Which seems easy enough to describe in terms of should-ness:  Our ancestors thought that they should enslave defeated enemies, but they were mistaken.

\n

But all our philosophical arguments ultimately seem to ground in statements that no one has bothered to justify—except perhaps to plead that they are self-evident, or that any reasonable mind must surely agree, or that they are a priori truths, or some such.  Perhaps, then, all our moral beliefs are as erroneous as that old bit about slavery?  Perhaps we have entirely misperceived the flowing streams of should?

\n

This I once believed plausible; and one of the arguments I wish I could go back and say to myself is, \"If you know nothing at all about should-ness, then how do you know that the procedure, 'Do whatever Emperor Ming says' is not the entirety of should-ness?  Or even worse, perhaps, the procedure, 'Do whatever maximizes inclusive genetic fitness' or 'Do whatever makes you personally happy'.\"  The point here would have been to make my past self see that in rejecting these rules, he was asserting a kind of knowledge—that to say, \"This is not morality,\" he must reveal that, despite himself, he knows something about morality or meta-morality.  Otherwise, the procedure \"Do whatever Emperor Ming says\" would seem just as plausible, as a guiding principle, as his current path of \"Rejecting things that seem unjustified.\"  Unjustified—according to what criterion of justification?  Why trust the principle that says that moral statements need to be justified, if you know nothing at all about morality?

\n

What indeed would distinguish, at all, the question \"What is right?\" from \"What is wrong?\"

\n

What is \"right\", if you can't say \"good\" or \"desirable\" or \"better\" or \"preferable\" or \"moral\" or \"should\"?  What happens if you try to carry out the operation of replacing the symbol with what it stands for?

\n

If you're guessing that I'm trying to inveigle you into letting me say:  \"Well, there are just some things that are baked into the question, when you start asking questions about morality, rather than wakalixes or toaster ovens\", then you would be right.  I'll be making use of that later, and, yes, will address \"But why should we ask that question?\"

\n

Okay, now: morality-goggles off, reduction-goggles on.

\n

Those who remember Possibility and Could-ness, or those familiar with simple search techniques in AI, will realize that the \"should\" label is behaving like the inverse of the \"could\" label, which we previously analyzed in terms of \"reachability\".  Reachability spreads forward in time: if I could reach the state with the button pressed, I could reach the state with the string pulled; if I could reach the state with the string pulled, I could reach the state with the switch flipped.

\n

Where the \"could\" label and the \"should\" label collide, the algorithm produces a plan.
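
\n

A minimal sketch of the forward half of that picture, in Python—the state names and transitions are invented for illustration, not taken from any real planner (the backward half, the \"should\" labels, is sketched further below):

```python
# Toy illustration: 'could' labels spread forward from the start state,
# along a hypothetical model of one-step transitions.
from collections import deque

transitions = {
    'start':          ['button_pressed'],
    'button_pressed': ['string_pulled'],
    'string_pulled':  ['switch_flipped'],
    'switch_flipped': ['child_off_tracks'],
}

def reachable(start):
    # Breadth-first search: everything we 'could' reach from here.
    could = {start}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        for nxt in transitions.get(state, []):
            if nxt not in could:
                could.add(nxt)
                frontier.append(nxt)
    return could

print(reachable('start'))
# {'start', 'button_pressed', 'string_pulled',
#  'switch_flipped', 'child_off_tracks'}
```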

\n
\n

Now, as I say this, I suspect that at least some readers may find themselves fearing that I am about to reduce should-ness to a mere artifact of a way that a planning system feels from inside.  Once again I urge you to check Changing Your Metaethics, if this starts to happen.  Remember above all the Moral Void:  Even if there were no morality, you could still choose to help people rather than hurt them.  This, above all, holds in place what you hold precious, while your beliefs about the nature of morality change.

\n

I do not intend, with this post, to take away anything of value; it will all be given back before the end.

\n
\n

Now this algorithm is not very sophisticated, as AI algorithms go, but to apply it in full generality—to learned information, not just ancestrally encountered, genetically programmed situations—is a rare thing among animals.  Put a food reward in a transparent box.  Put the matching key, which looks unique and uniquely corresponds to that box, in another transparent box.  Put the unique key to that box in another box.  Do this with five boxes.  Mix in another sequence of five boxes that doesn't lead to a food reward.  Then offer a choice of two keys, one of which starts the sequence of five boxes leading to food, one of which starts the sequence leading nowhere.

\n

Chimpanzees can learn to do this, but so far as I know, no non-primate species can pull that trick.
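
\n

As a toy model of the computation the chimpanzee is being asked to perform—with made-up box and key labels, and no pretense of modeling primate cognition:

```python
# Hypothetical toy version of the five-box task: each key opens exactly
# one box, and each box contains another key, food, or nothing.
contents = {
    'box_A1': 'key_A2', 'box_A2': 'key_A3', 'box_A3': 'key_A4',
    'box_A4': 'key_A5', 'box_A5': 'food',
    'box_B1': 'key_B2', 'box_B2': 'key_B3', 'box_B3': 'key_B4',
    'box_B4': 'key_B5', 'box_B5': 'nothing',
}
opens = {'key_' + x: 'box_' + x for x in
         ['A1', 'A2', 'A3', 'A4', 'A5', 'B1', 'B2', 'B3', 'B4', 'B5']}

def leads_to_food(key):
    # Follow the key -> box -> key chain; does it end in food?
    while key in opens:
        item = contents[opens[key]]
        if item == 'food':
            return True
        key = item
    return False

print(leads_to_food('key_A1'), leads_to_food('key_B1'))  # True False
```

Choosing the right starting key requires representing the whole chain before acting on any link of it—trivial for a computer, rare among animals.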

\n

And as smart as chimpanzees are, they are not quite as good as humans at inventing plans—plans such as, for example, planting in the spring to harvest in the fall.

\n

So what else are humans doing, in the way of planning?

\n

It is a general observation that natural selection seems to reuse existing complexity, rather than creating things from scratch, whenever it possibly can—though not always in the same way that a human engineer would.  It is a function of the enormous time required for evolution to create machines with many interdependent parts, and the vastly shorter time required to create a mutated copy of something already evolved.

\n

What else are humans doing?  Quite a bit, and some of it I don't understand—there are plans humans make, that no modern-day AI can.

\n

But one of the things we are doing, is reasoning about \"right-ness\" the same way we would reason about any other observable property.

\n

Are animals with bright colors often poisonous?  Does the delicious nid-nut grow only in the spring?  Is it usually a good idea to take along a waterskin on long hunts?

\n

It seems that Martha and Fred have an obligation to take care of their child, and Jane and Bob are obligated to take care of their child, and Susan and Wilson have a duty to care for their child.  Could it be that parents in general must take care of their children?

\n

By representing right-ness as an attribute of objects, you can recruit a whole previously evolved system that reasons about the attributes of objects.  You can save quite a lot of planning time, if you decide (based on experience) that in general it is a good idea to take a waterskin on hunts, from which it follows that it must be a good idea to take a waterskin on hunt #342.

\n

Is this a damnable instance of the Mind Projection Fallacy—treating properties of the mind as if they were out there in the world?

\n

Depends on how you look at it.

\n

This business of, \"It's been a good idea to take waterskins on the last three hunts, maybe it's a good idea in general, if so it's a good idea to take a waterskin on this hunt\", does seem to work.

\n

Let's say that your mind, faced with any countable set of objects, automatically and perceptually tagged them with their remainder modulo 5.  If you saw a group of 17 objects, for example, they would look remainder-2-ish.  Though, if you didn't have any notion of what your neurons were doing, and perhaps no notion of modulo arithmetic, you would only see that the group of 17 objects had the same remainder-ness as a group of 2 objects.  You might not even know how to count—your brain doing the whole thing automatically, subconsciously and neurally—in which case you would just have five different words for the remainder-ness attributes that we would call 0, 1, 2, 3, and 4.

\n

If you look out upon the world you see, and guess that remainder-ness is a separate and additional attribute of things—like the attribute of having an electric charge—or like a tiny little XML tag hanging off of things—then you will be wrong.  But this does not mean it is nonsense to talk about remainder-ness, or that you must automatically commit the Mind Projection Fallacy in doing so.  So long as you've got a well-defined way to compute a property, it can have a well-defined output and hence an empirical truth condition.

\n

If you're looking at 17 objects, then their remainder-ness is, indeed and truly, 2, and not 0, 3, 4, or 1.  If I tell you, \"Those red things you told me to look at are remainder-2-ish\", you have indeed been told a falsifiable and empirical property of those red things.  It is just not a separate, additional, physically existent attribute.

\n

And as for reasoning about derived properties, and which other inherent or derived properties they correlate to—I don't see anything inherently fallacious about that.

\n

One may notice, for example, that things which are 7 modulo 10 are often also 2 modulo 5.  Empirical observations of this sort play a large role in mathematics, suggesting theorems to prove.  (See Polya's How To Solve It.)
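
\n

A trivial sketch of such a well-defined computed property, and of the kind of empirical regularity among derived properties that suggests a theorem:

```python
# Remainder-ness is not a physical tag on the objects, but it is a
# well-defined computation, and so has an empirical truth condition.
def remainderness(objects):
    return len(objects) % 5

red_things = ['thing'] * 17
print(remainderness(red_things))  # 2 -- remainder-2-ish, not 0, 1, 3, or 4

# The observed correlation between 7-mod-10-ness and 2-mod-5-ness
# checks out on every case we test -- and is, in fact, a theorem.
print(all(n % 5 == 2 for n in range(7, 10000, 10)))  # True
```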

\n

Indeed, virtually all the experience we have, is derived by complicated neural computations from the raw physical events impinging on our sense organs.  By the time you see anything, it has been extensively processed by the retina, lateral geniculate nucleus, visual cortex, parietal cortex, and temporal cortex, into a very complex sort of derived computational property.

\n

If you thought of a property like redness as residing strictly in an apple, you would be committing the Mind Projection Fallacy.  The apple's surface has a reflectance which sends out a mixture of wavelengths that impinge on your retina and are processed with respect to ambient light to extract a summary color of red...  But if you tell me that the apple is red, rather than green, and make no claims as to whether this is an ontologically fundamental physical attribute of the apple, then I am quite happy to agree with you.

\n

So as long as there is a stable computation involved, or a stable process—even if you can't consciously verbalize the specification—it often makes a great deal of sense to talk about properties that are not fundamental.  And reason about them, and remember where they have been found in the past, and guess where they will be found next.

\n
\n

(In retrospect, that should have been a separate post in the Reductionism sequence.  \"Derived Properties\", or \"Computational Properties\" maybe.  Oh, well; I promised you morality this day, and this day morality you shall have.)

\n
\n

Now let's say we want to make a little machine, one that will save the lives of children.  (This enables us to save more children than we could do without a machine, just like you can move more dirt with a shovel than by hand.)  The machine will be a planning machine, and it will reason about events that may or may not have the property, leads-to-child-living. 

\n

A simple planning machine would just have a pre-made model of the environmental process.  It would search forward from its actions, applying a label that we might call \"reachable-from-action-ness\", but which might as well say \"Xybliz\" internally for all that it matters to the program.  And it would search backward from scenarios, situations, in which the child lived, labeling these \"leads-to-child-living\".  If situation X leads to situation Y, and Y has the label \"leads-to-child-living\"—which might just be a little flag bit, for all the difference it would make—then X will inherit the flag from Y.  When the two labels meet in the middle, the leads-to-child-living flag will quickly trace down the stored path of reachability, until finally some particular sequence of actions ends up labeled \"leads-to-child-living\".  Then the machine automatically executes those actions—that's just what the machine does.
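
\n

Here is a minimal sketch of such a machine, under invented toy assumptions—the world-model, states, and actions are placeholders, and a real planner would need, among much else, to acquire its model:

```python
# Toy save-the-child planner: backward 'leads-to-child-living' flags
# propagate until they meet the forward-reachable actions, and the
# collision yields a plan.

# Hypothetical world-model: (state, action) -> successor state.
model = {
    ('start', 'press button'):         'button_pressed',
    ('start', 'jump in air'):          'start',
    ('button_pressed', 'pull string'): 'string_pulled',
    ('string_pulled', 'flip switch'):  'switch_flipped',
    ('switch_flipped', 'grab child'):  'child_lives',
}

def plan(start, goal):
    # flagged maps each state to an action sequence reaching the goal;
    # a state inherits the flag from any flagged successor.
    flagged = {goal: []}
    changed = True
    while changed:
        changed = False
        for (state, action), successor in model.items():
            if successor in flagged and state not in flagged:
                flagged[state] = [action] + flagged[successor]
                changed = True
    return flagged.get(start)  # None if the labels never meet

print(plan('start', 'child_lives'))
# ['press button', 'pull string', 'flip switch', 'grab child']
```

The flag could just as well be named Xybliz; nothing in the machine knows or cares that we would call it should.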

\n

Now this machine is not complicated enough to feel existential angst.  It is not complicated enough to commit the Mind Projection Fallacy.  It is not, in fact, complicated enough to reason abstractly about the property \"leads-to-child-living-ness\".  The machine—as specified so far—does not notice if the action \"jump in the air\" turns out to always have this property, or never have this property.  If \"jump in the air\" always led to situations in which the child lived, this could greatly simplify future planning—but only if the machine were sophisticated enough to notice this fact and use it.

\n

If it is a fact that \"jump in the air\" has \"leads-to-child-living-ness\", this fact is composed of empirical truth and logical truth.  It is an empirical truth that the world is such that, if you perform the (ideal abstract) algorithm \"trace back from situations where the child lives\", then it will be a logical truth about the output of this (ideal abstract) algorithm that it labels the \"jump in the air\" action.

\n

(You cannot always define this fact in entirely empirical terms, by looking for the physical real-world coincidence of jumping and child survival.  It might be that \"stomp left\" also always saves the child, and the machine in fact stomps left.  In which case the fact that jumping in the air would have saved the child, is a counterfactual extrapolation.)

\n

Okay, now we're ready to bridge the levels.

\n

As you must surely have guessed by now, this should-ness stuff is how the human decision algorithm feels from inside.  It is not an extra, physical, ontologically fundamental attribute hanging off of events like a tiny little XML tag.

\n

But it is a moral question what we should do about that—how we should react to it.

\n

To adopt an attitude of complete nihilism, because we wanted those tiny little XML tags, and they're not physically there, strikes me as the wrong move.  It is like supposing that the absence of an XML tag, equates to the XML tag being there, saying in its tiny brackets what value we should attach, and having value zero.  And then this value zero, in turn, equating to a moral imperative to wear black, feel awful, write gloomy poetry, betray friends, and commit suicide.

\n

No.

\n

So what would I say instead?

\n

The force behind my answer is contained in The Moral Void and The Gift We Give To Tomorrow.  I would try to save lives \"even if there were no morality\", as it were.

\n

And it seems like an awful shame to—after so many millions and hundreds of millions of years of evolution—after the moral miracle of so much cutthroat genetic competition producing intelligent minds that love, and hope, and appreciate beauty, and create beauty—after coming so far, to throw away the Gift of morality, just because our brain happened to represent morality in such fashion as to potentially mislead us when we reflect on the nature of morality.

\n

This little accident of the Gift doesn't seem like a good reason to throw away the Gift; it certainly isn't an inescapable logical justification for wearing black.

\n

Why not keep the Gift, but adjust the way we reflect on it?

\n

So here's my metaethics:

\n

I earlier asked,

\n
\n

What is \"right\", if you can't say \"good\" or \"desirable\" or \"better\" or \"preferable\" or \"moral\" or \"should\"?  What happens if you try to carry out the operation of replacing the symbol with what it stands for?

\n
\n

I answer that if you try to replace the symbol \"should\" with what it stands for, you end up with quite a large sentence.

\n

For the much simpler save-life machine, the \"should\" label stands for leads-to-child-living-ness.

\n

For a human this is a much huger blob of a computation that looks like, \"Did everyone survive?  How many people are happy?  Are people in control of their own lives? ...\"  Humans have complex emotions, have many values—the thousand shards of desire, the godshatter of natural selection.  I would say, by the way, that the huge blob of a computation is not just my present terminal values (which I don't really have—I am not a consistent expected utility maximizer); the huge blob of a computation includes the specification of those moral arguments, those justifications, that would sway me if I heard them.  So that I can regard my present values, as an approximation to the ideal morality that I would have if I heard all the arguments, to whatever extent such an extrapolation is coherent.

\n

No one can write down their big computation; it is not just too large, it is also unknown to its user.  No more could you print out a listing of the neurons in your brain.  You never mention your big computation—you only use it, every hour of every day.

\n

Now why might one identify this enormous abstract computation, with what-is-right?

\n

If you identify rightness with this huge computational property, then moral judgments are subjunctively objective (like math), subjectively objective (like probability), and capable of being true (like counterfactuals).

\n

You will find yourself saying, \"If I wanted to kill someone—even if I thought it was right to kill someone—that wouldn't make it right.\"  Why?  Because what is right is a huge computational property—an abstract computation—not tied to the state of anyone's brain, including your own brain.

\n

This distinction was introduced earlier in 2-Place and 1-Place Words.  We can treat the word \"sexy\" as a 2-place function that goes out and hoovers up someone's sense of sexiness, and then eats an object of admiration.  Or we can treat the word \"sexy\" as meaning a 1-place function, a particular sense of sexiness, like Sexiness_20934, that only accepts one argument, an object of admiration.

\n

Here we are treating morality as a 1-place function.  It does not accept a person as an argument, spit out whatever cognitive algorithm they use to choose between actions, and then apply that algorithm to the situation at hand.  When I say right, I mean a certain particular 1-place function that just asks, \"Did the child live?  Did anyone else get killed?  Are people happy?  Are they in control of their own lives?  Has justice been served?\" ... and so on through many, many other elements of rightness.  (And perhaps those arguments that might persuade me otherwise, which I have not heard.)
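
\n

The distinction is just function arity, and currying takes you from one to the other—here is a sketch with stand-in standards, the real ones being unverbalizable:

```python
from functools import partial

# 2-place: 'sexy' hoovers up an admirer's standard, then eats an
# object of admiration.
def sexy(admirer_standard, entity):
    return admirer_standard(entity)

# Stand-in standards, for illustration only.
def fred_standard(entity):
    return 'tall' in entity

def ming_standard(entity):
    return 'scaly' in entity

# 1-place: fix the standard once and for all; the resulting function
# accepts only the object of admiration, not the admirer.
sexiness_20934 = partial(sexy, fred_standard)

print(sexiness_20934('tall dark stranger'))   # True
print(sexiness_20934('scaly dark stranger'))  # False -- Ming's opinion
                                              # never enters into it
```

Altering Fred's brain alters Fred; it does not alter Sexiness_20934, which is why the 1-place reading behaves like something no one's say-so can change.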

\n

Hence the notion, \"Replace the symbol with what it stands for.\"

\n

Since what's right is a 1-place function, if I subjunctively imagine a world in which someone has slipped me a pill that makes me want to kill people, then, in this subjunctive world, it is not right to kill people.  That's not merely because I'm judging with my current brain.  It's because when I say right, I am referring to a 1-place function.  Rightness doesn't go out and hoover up the current state of my brain, in this subjunctive world, before producing the judgment \"Oh, wait, it's now okay to kill people.\"  When I say right, I don't mean \"that which my future self wants\", I mean the function that looks at a situation and asks, \"Did anyone get killed?  Are people happy?  Are they in control of their own lives?  ...\"

\n

And once you've defined a particular abstract computation that says what is right—or even if you haven't defined it, and it's computed in some part of your brain you can't perfectly print out, but the computation is stable—more or less—then as with any other derived property, it makes sense to speak of a moral judgment being true. If I say that today was a good day, you've learned something empirical and falsifiable about my day—if it turns out that actually my grandmother died, you will suspect that I was originally lying.

\n

The apparent objectivity of morality has just been explained—and not explained away.  For indeed, if someone slipped me a pill that made me want to kill people, nonetheless, it would not be right to kill people.  Perhaps I would actually kill people, in that situation—but that is because something other than morality would be controlling my actions.

\n

Morality is not just subjunctively objective, but subjectively objective.  I experience it as something I cannot change.  Even after I know that it's myself who computes this 1-place function, and not a rock somewhere—even after I know that I will not find any star or mountain that computes this function, that only upon me is it written—even so, I find that I wish to save lives, and that even if I could change this by an act of will, I would not choose to do so.  I do not wish to reject joy, or beauty, or freedom.  What else would I do instead?  I do not wish to reject the Gift that natural selection accidentally barfed into me.  This is the principle of The Moral Void and The Gift We Give To Tomorrow.

\n

Our origins may seem unattractive, our brains untrustworthy.

\n

But love has to enter the universe somehow, starting from non-love, or love cannot enter time.

\n

And if our brains are untrustworthy, it is only our own brains that say so.  Do you sometimes think that human beings are not very nice?  Then it is you, a human being, who says so.  It is you, a human being, who judges that human beings could do better.  You will not find such written upon the stars or the mountains: they are not minds, they cannot think.

\n

In this, of course, we find a justificational strange loop through the meta-level.  Which is unavoidable so far as I can see—you can't argue morality, or any kind of goal optimization, into a rock.  But note the exact structure of this strange loop: there is no general moral principle which says that you should do what evolution programmed you to do.  There is, indeed, no general principle to trust your moral intuitions!  You can find a moral intuition within yourself, describe it—quote it—consider it deliberately and in the full light of your entire morality, and reject it, on grounds of other arguments.  What counts as an argument is also built into the rightness-function.

\n

Just as, in the strange loop of rationality, there is no general principle in rationality to trust your brain, or to believe what evolution programmed you to believe—but indeed, when you ask which parts of your brain you need to rebel against, you do so using your current brain.  When you ask whether the universe is simple, you can consider the simple hypothesis that the universe's apparent simplicity is explained by its actual simplicity.

\n

Rather than trying to unwind ourselves into rocks, I proposed that we should use the full strength of our current rationality, in reflecting upon ourselves—that no part of ourselves be immune from examination, and that we use all of ourselves that we currently believe in to examine it.

\n

You would do the same thing with morality; if you suspect that a part of yourself might be harmful, then use your best current guess at what is right, your full moral strength, to do the considering.  Why should we want to unwind ourselves to a rock?  Why should we do less than our best, when reflecting?  You can't unwind past Occam's Razor, modus ponens, or morality, and it's not clear why you should try.

\n

For any part of rightness, you can always imagine another part that overrides it—it would not be right to drag the child from the train tracks, if this resulted in everyone on Earth becoming unable to love—or so I would judge.  For every part of rightness you examine, you will find that it cannot be the sole and perfect and only criterion of rightness.  This may lead to the incorrect inference that there is something beyond, some perfect and only criterion from which all the others are derived—but that does not follow.  The whole is the sum of the parts.  We ran into an analogous situation with free will, where no part of ourselves seems perfectly decisive.

\n

The classic dilemma for those who would trust their moral intuitions, I believe, is the one who says:  \"Interracial marriage is repugnant—it disgusts me—and that is my moral intuition!\"  I reply, \"There is no general rule to obey your intuitions.  You just mentioned intuitions, rather than using them.  Very few people have legitimate cause to mention intuitions—Friendly AI programmers, for example, delving into the cognitive science of things, have a legitimate reason to mention them.  Everyone else just has ordinary moral arguments, in which they use their intuitions, for example, by saying, 'An interracial marriage doesn't hurt anyone, if both parties consent'.  I do not say, 'And I have an intuition that anything consenting adults do is right, and all intuitions must be obeyed, therefore I win.'  I just offer up that argument, and any others I can think of, to weigh in the balance.\"

\n

Indeed, the evolution that made us cannot be trusted—so there is no general principle to trust it!  Rightness is not defined in terms of automatic correspondence to any possible decision we actually make—so there's no general principle that says you're infallible!  Just do what is, ahem, right—to the best of your ability to weigh the arguments you have heard, and ponder the arguments you may not have heard.

\n

If you were hoping to have a perfectly trustworthy system, or to have been created in correspondence with a perfectly trustworthy morality—well, I can't give that back to you; but even most religions don't try that one.  Even most religions have the human psychology containing elements of sin, and even most religions don't actually give you an effectively executable and perfect procedure, though they may tell you \"Consult the Bible!  It always works!\"

\n

If you hoped to find a source of morality outside humanity—well, I can't give that back, but I can ask once again:  Why would you even want that?  And what good would it do?  Even if there were some great light in the sky—something that could tell us, \"Sorry, happiness is bad for you, pain is better, now get out there and kill some babies!\"—it would still be your own decision to follow it.  You cannot evade responsibility.

\n

There isn't enough mystery left to justify reasonable doubt as to whether the causal origin of morality is something outside humanity.  We have evolutionary psychology.  We know where morality came from.  We pretty much know how it works, in broad outline at least.  We know there are no little XML value tags on electrons (and indeed, even if you found them, why should you pay attention to what is written there?)

\n

If you hoped that morality would be universalizable—sorry, that one I really can't give back.  Well, unless we're just talking about humans.  Between neurologically intact humans, there is indeed much cause to hope for overlap and coherence; and a great and reasonable doubt as to whether any present disagreement is really unresolvable, even if it seems to be about \"values\".  The obvious reason for hope is the psychological unity of humankind, and the intuitions of symmetry, universalizability, and simplicity that we execute in the course of our moral arguments.  (In retrospect, I should have done a post on Interpersonal Morality before this...)

\n

If I tell you that three people have found a pie and are arguing about how to divide it up, the thought \"Give one-third of the pie to each\" is bound to occur to you—and if the three people are humans, it's bound to occur to them, too.  If one of them is a psychopath and insists on getting the whole pie, though, there may be nothing for it but to say:  \"Sorry, fairness is not 'what everyone thinks is fair', fairness is everyone getting a third of the pie\".  You might be able to resolve the remaining disagreement by politics and game theory, short of violence—but that is not the same as coming to agreement on values.  (Maybe you could persuade the psychopath that taking a pill to be more human, if one were available, would make them happier?  Would you be justified in forcing them to swallow the pill?  These get us into stranger waters that deserve a separate post.)

\n

If I define rightness to include the space of arguments that move me, then when you and I argue about what is right, we are arguing our approximations to what we would come to believe if we knew all empirical facts and had a million years to think about it—and that might be a lot closer than the present and heated argument.  Or it might not.  This gets into the notion of 'construing an extrapolated volition' which would be, again, a separate post.

\n

But if you were stepping outside the human and hoping for moral arguments that would persuade any possible mind, even a mind that just wanted to maximize the number of paperclips in the universe, then sorry—the space of possible mind designs is too large to permit universally compelling arguments.  You are better off treating your intuition that your moral arguments ought to persuade others, as applying only to other humans who are more or less neurologically intact.  Trying it on human psychopaths would be dangerous, yet perhaps possible.  But a paperclip maximizer is just not the sort of mind that would be moved by a moral argument.  (This will definitely be a separate post.)

\n

Once, in my wild and reckless youth, I tried dutifully—I thought it was my duty—to be ready and willing to follow the dictates of a great light in the sky, an external objective morality, when I discovered it.  I questioned everything, even altruism toward human lives, even the value of happiness.  Finally I realized that there was no foundation but humanity—no evidence pointing to even a reasonable doubt that there was anything else—and indeed I shouldn't even want to hope for anything else—and indeed would have no moral cause to follow the dictates of a light in the sky, even if I found one.

\n

I didn't get back immediately all the pieces of myself that I had tried to deprecate—it took time for the realization \"There is nothing else\" to sink in.  The notion that humanity could just... you know... live and have fun... seemed much too good to be true, so I mistrusted it.  But eventually, it sank in that there really was nothing else to take the place of beauty.  And then I got it back.

\n

So you see, it all really does add up to moral normality, very exactly in fact.  You go on with the same morals as before, and the same moral arguments as before.  There is no sudden Grand Overlord Procedure to which you can appeal to get a perfectly trustworthy answer.  You don't know, cannot print out, the great rightness-function; and even if you could, you would not have enough computational power to search the entire specified space of arguments that might move you.  You will just have to argue it out.

\n

I suspect that a fair number of those who propound metaethics do so in order to have it add up to some new and unusual moral—else why would they bother?  In my case, I bother because I am a Friendly AI programmer and I have to make a physical system outside myself do what's right; for which purpose metaethics becomes very important indeed.  But for the most part, the effect of my proffered metaethic is threefold:

\n\n

And, oh yes—why is it right to save a child's life?

\n

Well... you could ask \"Is this event that just happened, right?\" and find that the child had survived, in which case you would have discovered the nonobvious empirical fact about the world, that it had come out right.

\n

Or you could start out already knowing a complicated state of the world, but still have to apply the rightness-function to it in a nontrivial way—one involving a complicated moral argument, or extrapolating consequences into the future—in which case you would learn the nonobvious logical / computational fact that rightness, applied to this situation, yielded thumbs-up.

\n

In both these cases, there are nonobvious facts to learn, which seem to explain why what just happened is right.

\n

But if you ask \"Why is it good to be happy?\" and then replace the symbol 'good' with what it stands for, you'll end up with a question like \"Why does happiness match {happiness + survival + justice + individuality + ...}?\"  This gets computed so fast, that it scarcely seems like there's anything there to be explained.  It's like asking \"Why does 4 = 4?\" instead of \"Why does 2 + 2 = 4?\"

\n

Now, I bet that feels quite a bit like what happens when I ask you:  \"Why is happiness good?\"

\n

Right?

\n

And that's also my answer to Moore's Open Question.  Why is this big function I'm talking about, right?  Because when I say \"that big function\", and you say \"right\", we are dereferencing two different pointers to the same unverbalizable abstract computation.  I mean, that big function I'm talking about, happens to be the same thing that labels things right in your own brain.  You might reflect on the pieces of the quotation of the big function, but you would start out by using your sense of right-ness to do it.  If you had the perfect empirical knowledge to taboo both \"that big function\" and \"right\", substitute what the pointers stood for, and write out the full enormity of the resulting sentence, it would come out as... sorry, I can't resist this one... A=A.

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Interpersonal Morality\"

\n

Previous post: \"Setting Up Metaethics\"

" } }, { "_id": "T7tYmfD9j25uLwqYk", "title": "Setting Up Metaethics", "pageUrl": "https://www.lesswrong.com/posts/T7tYmfD9j25uLwqYk/setting-up-metaethics", "postedAt": "2008-07-28T02:25:20.000Z", "baseScore": 27, "voteCount": 21, "commentCount": 34, "url": null, "contents": { "documentId": "T7tYmfD9j25uLwqYk", "html": "

Followup to:  Is Morality Given?, Is Morality Preference?, Moral Complexities, Could Anything Be Right?, The Bedrock of Fairness, ...

\n

Intuitions about morality seem to split up into two broad camps: morality-as-given and morality-as-preference.

\n

Some perceive morality as a fixed given, independent of our whims, about which we form changeable beliefs.  This view's great advantage is that it seems more normal up at the level of everyday moral conversations: it is the intuition underlying our everyday notions of \"moral error\", \"moral progress\", \"moral argument\", or \"just because you want to murder someone doesn't make it right\".

\n

Others choose to describe morality as a preference—as a desire in some particular person; nowhere else is it written.  This view's great advantage is that it has an easier time living with reductionism—fitting the notion of \"morality\" into a universe of mere physics.  It has an easier time at the meta level, answering questions like \"What is morality?\" and \"Where does morality come from?\"

\n

Both intuitions must contend with seemingly impossible questions.  For example, Moore's Open Question:  Even if you come up with some simple answer that fits on a T-shirt, like \"Happiness is the sum total of goodness!\", you would need to argue the identity.  It isn't instantly obvious to everyone that goodness is happiness, which seems to indicate that happiness and rightness were different concepts to start with.  What was that second concept, then, originally?

\n

Or if \"Morality is mere preference!\" then why care about human preferences?  How is it possible to establish any \"ought\" at all, in a universe seemingly of mere \"is\"?

\n

So what we should want, ideally, is a metaethic that:

\n
  1. Adds up to moral normality, including moral errors, moral progress, and things you should do whether you want to or not;
  2. Fits naturally into a non-mysterious universe, postulating no exception to reductionism;
  3. Does not oversimplify humanity's complicated moral arguments and many terminal values;
  4. Answers all the impossible questions.
\n

\n

I'll present that view tomorrow.

\n

Today's post is devoted to setting up the question.

\n

Consider \"free will\", already dealt with in these posts.  On one level of organization, we have mere physics, particles that make no choices.  On another level of organization, we have human minds that extrapolate possible futures and choose between them. How can we control anything, even our own choices, when the universe is deterministic?

\n

To dissolve the puzzle of free will, you have to simultaneously imagine two levels of organization while keeping them conceptually distinct.  To get it on a gut level, you have to see the level transition—the way in which free will is how the human decision algorithm feels from inside.  (Being told flatly \"one level emerges from the other\" just relates them by a magical transition rule, \"emergence\".)

\n

For free will, the key is to understand how your brain computes whether you \"could\" do something—the algorithm that labels reachable states. Once you understand this label, it does not appear particularly meaningless—\"could\" makes sense—and the label does not conflict with physics following a deterministic course.  If you can see that, you can see that there is no conflict between your feeling of freedom, and deterministic physics.  Indeed, I am perfectly willing to say that the feeling of freedom is correct, when the feeling is interpreted correctly.

\n

In the case of morality, once again there are two levels of organization, seemingly quite difficult to fit together:

\n

On one level, there are just particles without a shred of should-ness built into them—just like an electron has no notion of what it \"could\" do—or just like a flipping coin is not uncertain of its own result.

\n

On another level is the ordinary morality of everyday life: moral errors, moral progress, and things you ought to do whether you want to do them or not.

\n

And in between, the level transition question:  What is this should-ness stuff?

\n

Award yourself a point if you thought, \"But wait, that problem isn't quite analogous to the one of free will.  With free will it was just a question of factual investigation—look at human psychology, figure out how it does in fact generate the feeling of freedom.  But here, it won't be enough to figure out how the mind generates its feelings of should-ness.  Even after we know, we'll be left with a remaining question—is that how we should calculate should-ness?  So it's not just a matter of sheer factual reductionism, it's a moral question.\"

\n

Award yourself two points if you thought, \"...oh, wait, I recognize that pattern:  It's one of those strange loops through the meta-level we were talking about earlier.\"

\n

And if you've been reading along this whole time, you know the answer isn't going to be, \"Look at this fundamentally moral stuff!\"

\n

Nor even, \"Sorry, morality is mere preference, and right-ness is just what serves you or your genes; all your moral intuitions otherwise are wrong, but I won't explain where they come from.\"

\n

Of the art of answering impossible questions, I have already said much:  Indeed, vast segments of my Overcoming Bias posts were created with that specific hidden agenda.

\n

The sequence on anticipation fed into Mysterious Answers to Mysterious Questions, to prevent the Primary Catastrophic Failure of stopping on a poor answer.

\n

The Fake Utility Functions sequence was directed at the problem of oversimplified moral answers particularly.

\n

The sequence on words provided the first and basic illustration of the Mind Projection Fallacy, the understanding of which is one of the Great Keys.

\n

The sequence on words also showed us how to play Rationalist's Taboo, and Replace the Symbol with the Substance.  What is \"right\", if you can't say \"good\" or \"desirable\" or \"better\" or \"preferable\" or \"moral\" or \"should\"?  What happens if you try to carry out the operation of replacing the symbol with what it stands for?

\n

And the sequence on quantum physics, among other purposes, was there to teach the fine art of not running away from Scary and Confusing Problems, even if others have failed to solve them, even if great minds failed to solve them for generations.  Heroes screw up, time moves on, and each succeeding era gets an entirely new chance.

\n

If you're just joining us here (Belldandy help you) then you might want to think about reading all those posts before, oh, say, tomorrow.

\n

If you've been reading this whole time, then you should think about trying to dissolve the question on your own, before tomorrow.  It doesn't require more than 96 insights beyond those already provided.

\n

Next:  The Meaning of Right.

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"The Meaning of Right\"

\n

Previous post: \"Changing Your Metaethics\"

" } }, { "_id": "LhP2zGBWR5AdssrdJ", "title": "Changing Your Metaethics", "pageUrl": "https://www.lesswrong.com/posts/LhP2zGBWR5AdssrdJ/changing-your-metaethics", "postedAt": "2008-07-27T12:36:12.000Z", "baseScore": 65, "voteCount": 52, "commentCount": 20, "url": null, "contents": { "documentId": "LhP2zGBWR5AdssrdJ", "html": "

If you say, \"Killing people is wrong,\" that's morality.  If you say, \"You shouldn't kill people because God prohibited it,\" or \"You shouldn't kill people because it goes against the trend of the universe\", that's metaethics.

\n

Just as there's far more agreement on Special Relativity than there is on the question \"What is science?\", people find it much easier to agree \"Murder is bad\" than to agree what makes it bad, or what it means for something to be bad.

\n

People do get attached to their metaethics.  Indeed they frequently insist that if their metaethic is wrong, all morality necessarily falls apart.  It might be interesting to set up a panel of metaethicists—theists, Objectivists, Platonists, etc.—all of whom agree that killing is wrong; all of whom disagree on what it means for a thing to be \"wrong\"; and all of whom insist that if their metaethic is untrue, then morality falls apart.

\n

Clearly a good number of people, if they are to make philosophical progress, will need to shift metaethics at some point in their lives.  You may have to do it.

\n

At that point, it might be useful to have an open line of retreat—not a retreat from morality, but a retreat from Your-Current-Metaethic.  (You know, the one that, if it is not true, leaves no possible basis for not killing people.)

\n

And so I've been setting up these lines of retreat, in many and various posts, summarized below.  For I have learned that to change metaethical beliefs is nigh-impossible in the presence of an unanswered attachment.

\n

\n

If, for example, someone believes the authority of \"Thou Shalt Not Kill\" derives from God, then there are several and well-known things to say that can help set up a line of retreat—as opposed to immediately attacking the plausibility of God.  You can say, \"Take personal responsibility! Even if you got orders from God, it would be your own decision to obey those orders.  Even if God didn't order you to be moral, you could just be moral anyway.\"

\n

The above argument actually generalizes to quite a number of metaethics—you just substitute Their-Favorite-Source-Of-Morality, or even the word \"morality\", for \"God\".  Even if your particular source of moral authority failed, couldn't you just drag the child off the train tracks anyway?  And indeed, who is it but you, that ever decided to follow this source of moral authority in the first place?  What responsibility are you really passing on?

\n

So the most important line of retreat is the one given in The Moral Void:  If your metaethic stops telling you to save lives, you can just drag the kid off the train tracks anyway.  To paraphrase Piers Anthony, only those who have moralities worry over whether or not they have them.  If your metaethic tells you to kill people, why should you even listen?  Maybe that which you would do even if there were no morality, is your morality.

\n

The point being, of course, not that no morality exists; but that you can hold your will in place, and not fear losing sight of what's important to you, while your notions of the nature of morality change.

\n

Other posts are there to set up lines of retreat specifically for more naturalistic metaethics.  It may make more sense where I'm coming from on these, once I actually present my metaethic; but I thought it wiser to set them up in advance, to leave lines of retreat.

\n

Joy in the Merely Real and Explaining vs. Explaining Away argue that you shouldn't be disappointed in any facet of life, just because it turns out to be explicable instead of inherently mysterious: for if we cannot take joy in the merely real, our lives shall be empty indeed.

\n

No Universally Compelling Arguments sets up a line of retreat from the desire to have everyone agree with our moral arguments.  There's a strong moral intuition which says that if our moral arguments are right, by golly, we ought to be able to explain them to people.  This may be valid among humans, but you can't explain moral arguments to a rock.  There is no ideal philosophy student of perfect emptiness who can be persuaded to implement modus ponens, starting without modus ponens.  If a mind doesn't contain that which is moved by your moral arguments, it won't respond to them.

\n

But then isn't all morality circular logic, in which case it falls apart?  Where Recursive Justification Hits Bottom and My Kind of Reflection explain the difference between a self-consistent loop through the meta-level, and actual circular logic.  You shouldn't find yourself saying \"The universe is simple because it is simple\", or \"Murder is wrong because it is wrong\"; but neither should you try to abandon Occam's Razor while evaluating the probability that Occam's Razor works, nor should you try to evaluate \"Is murder wrong?\" from somewhere outside your brain.  There is no ideal philosophy student of perfect emptiness to which you can unwind yourself—try to find the perfect rock to stand upon, and you'll end up as a rock.  So instead use the full force of your intelligence, your full rationality and your full morality, when you investigate the foundations of yourself.

\n

The Gift We Give To Tomorrow sets up a line of retreat for those afraid to allow a causal role for evolution, in their account of how morality came to be.  (Note that this is extremely distinct from granting evolution a justificational status in moral theories.)  Love has to come into existence somehow—for if we cannot take joy in things that can come into existence, our lives will be empty indeed.  Evolution may not be a particularly pleasant way for love to evolve, but judge the end product—not the source.  Otherwise you would be committing what is known (appropriately) as The Genetic Fallacy: causation is not the same concept as justification.  It's not like you can step outside the brain evolution gave you:  Rebelling against nature is only possible from within nature.

\n

The earlier series on Evolutionary Psychology should dispense with the metaethical confusion of believing that any normal human being thinks about their reproductive fitness, even unconsciously, in the course of making decisions.  Only evolutionary biologists even know how to define genetic fitness, and they know better than to think it defines morality.

\n

Alarming indeed is the thought that morality might be computed inside our own minds—doesn't this imply that morality is a mere thought?  Doesn't it imply that whatever you think is right, must be right?  Posts such as  Does Your Morality Care What You Think? and its predecessors, Math is Subjunctively Objective and Probability is Subjectively Objective, set up the needed line of retreat:  Just because a quantity is computed inside your head, doesn't mean that the quantity computed is about your thoughts. There's a difference between a calculator that calculates \"What is 2 + 3?\" and \"What do I output when someone presses '2', '+', and '3'?\"
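
\n

The calculator example can be made concrete—a deliberately miscalibrated toy, to pull the two questions apart:

```python
def miscalibrated_calc(a, b):
    return a + b + 1  # a bug, for illustration

# 'What is 2 + 3?' is a question about numbers: the answer is 5,
# whatever any particular calculator outputs.
# 'What do I output when someone presses 2, +, 3?' is a question
# about this calculator: the answer really is 6.
print(miscalibrated_calc(2, 3))  # 6 -- a true answer to the second
                                 # question, a false answer to the first
```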

\n

And finally Existential Angst Factory offers the notion that if life seems painful, reductionism may not be the real source of your problem—if living in a world of mere particles seems too unbearable, maybe your life isn't exciting enough on its own?

\n

If all goes well, my next post will set up the metaethical question and its methodology, and I'll present my actual answer on Monday.

\n

And if you're wondering why I deem this business of metaethics important, when it is all going to end up adding up to moral normality... telling you to pull the child off the train tracks, rather than the converse...

\n

Well, there is opposition to rationality from people who think it drains meaning from the universe.

\n

And this is a special case of a general phenomenon, in which many many people get messed up by misunderstanding where their morality comes from.  Poor metaethics forms part of the teachings of many a cult, including the big ones.  My target audience is not just people who are afraid that life is meaningless, but also those who've concluded that love is a delusion because real morality has to involve maximizing your inclusive fitness, or those who've concluded that unreturned kindness is evil because real morality arises only from selfishness, etc.

\n

But the real reason, of course...

" } }, { "_id": "GAR8gT3d9uCtr4kv8", "title": "Does Your Morality Care What You Think?", "pageUrl": "https://www.lesswrong.com/posts/GAR8gT3d9uCtr4kv8/does-your-morality-care-what-you-think", "postedAt": "2008-07-26T00:25:34.000Z", "baseScore": 21, "voteCount": 20, "commentCount": 27, "url": null, "contents": { "documentId": "GAR8gT3d9uCtr4kv8", "html": "

Followup to:  Math is Subjunctively Objective, The Moral Void, Is Morality Given?

\n
\n

Thus I recall the study, though I cannot recall the citation:

\n

Children, at some relatively young age, were found to distinguish between:

\n\n
\n

Obert:  \"Well, I don't know the citation, but it sounds like a fascinating study.  So even children, then, realize that moral facts are givens, beyond the ability of teachers or parents to alter.\"

\n

Subhan:  \"You say that like it's a good thing.  Children may also think that people in Australia have to wear heavy boots to keep from falling off the other side of the Earth.\"

\n

Obert:  \"Call me Peter Pan, then, because I never grew up on this one.  Of course it doesn't matter what the teacher says.  It doesn't matter what I say.  It doesn't even matter what I think.  Stealing is wrong.  Do you disagree?\"

\n

Subhan:  \"You don't see me picking your pockets, do you?  Isn't it enough that I choose not to steal from you—do I have to pretend it's the law of the universe?\"

\n

Obert:  \"Yes, or I can't trust your commitment.\"

\n

Subhan:  \"A... revealing remark.  But really, I don't think that this experimental result seems at all confusing, in light of the recent discussion of subjunctive objectivity—a discussion in which Eliezer strongly supported my position, by the way.\"

\n

Obert:  \"Really?  I thought Eliezer was finally coming out in favor of my position.\"

\n

Subhan:  \"Huh?  How do you get that?\"

\n

\n

Obert:  \"The whole subtext of 'Math is Subjunctively Objective' is that morality is just like math!  Sure, we compute morality inside our own brains—where else would we compute it?  But just because we compute a quantity inside our own brains, doesn't mean that what is computed has a dependency on our own state of mind.\"

\n

Subhan:  \"I think we must have been reading different Overcoming Bias posts!  The whole subtext of 'Math is Subjunctively Objective' is to explain away why morality seems objective—to show that the feeling of a fixed given can arise without any external referent.  When you imagine yourself thinking that killing is right, your brain-that-imagines hasn't yet been altered, so you carry out that moral imagination with your current brain, and conclude:  'Even if I thought killing were right, killing would still be wrong.'  But this doesn't show that killing-is-wrong is a fixed fact from outside you.\"

\n

Obert:  \"Like, say, 2 + 3 = 5 is a fixed fact.  Eliezer wrote:  'If something appears to be the same regardless of what anyone thinks, then maybe that's because it actually is the same regardless of what anyone thinks.'  I'd say that subtext is pretty clear!\"

\n

Subhan:  \"On the contrary.  Naively, you might imagine your future self thinking differently of a thing, and visualize that the thing wouldn't thereby change, and conclude that the thing existed outside you.  Eliezer shows how this is not necessarily the case.  So you shouldn't trust your intuition that the thing is objective—it might be that the thing exists outside you, or it might not.  It has to be argued separately from the feeling of subjunctive objectivity.  In the case of 2 + 3 = 5, it's at least reasonable to wonder if math existed before humans. Physics itself seems to be made of math, and if we don't tell a story where physics was around before humans could observe it, it's hard to give a coherent account of how we got here.  But there's not the slightest evidence that morality was at work in the universe before humans got here.  We created it.\"

\n

Obert:  \"I know some very wise children who would disagree with you.\"

\n

Subhan:  \"Then they're wrong!  If children learned in school that it was okay to steal, they would grow up believing it was okay to steal.\"

\n

Obert:  \"Not if they saw that stealing hurt the other person, and felt empathy for their pain.  Empathy is a human universal.\"

\n

Subhan:  \"So we take a step back and say that evolution created the emotions that gave rise to morality; that doesn't put morality anywhere outside us.  But what you say might not even be true—if theft weren't considered a crime, the other child might not feel so hurt by it.  And regardless, it is rare to find any child capable of fully reconsidering the moral teachings of its society.\"

\n

Obert:  \"I hear that, in a remarkable similarity to Eliezer, your parents were Orthodox Jewish and you broke with religion as a very young child.\"

\n

Subhan:  \"I doubt that I was internally generating de novo moral philosophy.  I was probably just wielding, against Judaism, the morality of the science fiction that actually socialized me.\"

\n

Obert:  \"Perhaps you underestimate yourself.  How much science fiction had you read at the age of five, when you realized it was dumb to recite Hebrew prayers you couldn't understand?  Children may see errors that adults are too adept at fooling themselves to realize.\"

\n

Subhan:  \"Hah!  In all probability, if the teacher had in fact said that it was okay to take things from other children's backpacks, the children would in fact have thought it was right to steal.\"

\n

Obert:  \"Even if true, that doesn't prove anything.  It is quite coherent to simultaneously hold that:\"

\n\n

Subhan:  \"Fine, it's coherent, but that doesn't mean it's true.  The morality that the child has in fact learned from the teacher—or their parents, or the other children, or the television, or their parents' science fiction collection—doesn't say, 'Don't steal because the teacher says so.'  The learned morality just says, 'Don't steal.'  The cognitive procedure by which the children were taught to judge, does not have an internal dependency on what the children believe the teacher believes.  That's why, in their moral imagination, it feels objective.  But where did they acquire that morality in the first place?  From the teacher!\"

\n

Obert:  \"So?  I don't understand—you're saying that because they learned about morality from the teacher, they should think that morality has to be about the teacher?  That they should think the teacher has the power to make it right to steal?  How does that follow?  It is quite coherent to simultaneously hold that—\"

\n

Subhan:  \"I'm saying that they got the morality from the teacher!  Not from some mysterious light in the sky!\"

\n

Obert:  \"Look, I too read science fiction and fantasy as a child, and I think I may have been to some degree socialized by it—\"

\n

Subhan:  \"What a remarkable coincidence.\"

\n

Obert:  \"The stories taught me that it was right to care about people who were different from me—aliens with strange shapes, aliens made of something other than carbon atoms, AIs who had been created rather than evolved, even things that didn't think like a human.  But none of the stories ever said, 'You should care about people of different shapes and substrates because science fiction told you to do it, and what science fiction says, goes.'  I wouldn't have bought that.\"

\n

Subhan:  \"Are you sure you wouldn't have?  That's how religion works.\"

\n

Obert:  \"Didn't work on you.  Anyway, the novels said to care about the aliens because they had inner lives and joys—or because I wouldn't want aliens to mistreat humans—or because shape and substrate never had anything to do with what makes a person a person.  And you know, that still seems to me like a good justification.\"

\n

Subhan:  \"Of course; you were told it was a good justification—maybe not directly, but the author showed other characters responding to the argument.\"

\n

Obert:  \"It's not like the science fiction writers were making up their morality from scratch.  They were working at the end of a chain of moral arguments and debates that stretches back to the Greeks, probably to before writing, maybe to before the dawn of modern humanity.  You can learn morality, not just get pressed into it like a Jello mold.  If you learn 2 + 3 = 5 from a teacher, it doesn't mean the teacher has the power to add two sheep to three sheep and get six sheep.  If you would have spouted back '2 + 3 = 6' if the teacher said so, that doesn't change the sheep, it just means that you don't really understand the subject.  So too with morality.\"

\n

Subhan:  \"Okay, let me try a different tack.  You, I take it, agree with both of these statements:\"

\n\n

Obert:  \"Well, there are various caveats I'd attach to both of those.  Like, in any circumstance where I really did prefer to kill someone, there'd be a high probability he was about to shoot me, or something.  And there's all kinds of ways that eating an anchovy pizza could be wrong, like if I was already overweight.  And I don't claim to be certain of anything when it comes to morality.  But on the whole, and omitting all objections and knock-on effects, I agree.\"

\n

Subhan:  \"It's that second statement I'm really interested in.  How does your wanting to eat an anchovy pizza make it right?\"

\n

Obert:  \"Because ceteris paribus, in the course of ordinary life as we know it, and barring unspecified side effects, it is good for sentient beings to get what they want.\"

\n

Subhan:  \"And why doesn't that apply to the bit about killing, then?\"

\n

Obert:  \"Because the other person doesn't want to die.  Look, the whole reason why it's right in the first place for me to eat pepperoni pizza—the original justification—is that I enjoy doing so.  Eating pepperoni pizza makes me happy, which is ceteris paribus a good thing.  And eating anchovy pizza—blegh!  Ceteris paribus, it's not good for sentient beings to experience disgusting tastes.  But if my taste in pizza changes, that changes the consequneces of eating, which changes the moral justification, and so the moral judgment changes as well.  But the reasons for not killing are in terms of the other person having an inner life that gets snuffed out—a fact that doesn't change depending on my own state of mind.\"

\n

Subhan:  \"Oh?  I was guessing that the difference had something to do with the social disapproval that would be leveled at murder, but not at eating anchovy pizza.\"

\n

Obert:  \"As usual, your awkward attempts at rationalism have put you out of touch with self-evident moral truths.  That's just not how I, or other real people, actually think!  If I want to bleep bleep bleep a consenting adult, it doesn't matter whether society approves.  Society can go bleep bleep bleep bleep bleep -\"

\n

Subhan:  \"Or so science fiction taught you.\"

\n

Obert:  \"Spider Robinson's science fiction, to be precise. 'Whatever turns you on' shall be the whole of the law.  So long as the 'you' is plural.\"

\n

Subhan:  \"So that's where you got that particular self-evident moral truth.  Was it also Spider Robinson who told you that it was self-evident?\"

\n

Obert:  \"No, I thought about that for a while, and then decided myself.\"

\n

Subhan:  \"You seem to be paying remarkably close attention to what people want.  Yet you insist that what validates this attention, is some external standard that makes the satisfaction of desires, good. Can't you just admit that, by empathy and vicarious experience and evolved fellow-feeling, you want others to get what they want?  When does this external standard ever say that it's good for something to happen that someone doesn't want?\"

\n

Obert:  \"Every time you've got to tell your child to lay off the ice cream, he'll grow more fat cells that will make it impossible for him to lose weight as an adult.\"

\n

Subhan:  \"And could something good happen that no one wanted?\"

\n

Obert:  \"I rather expect so.  I don't think we're all entirely past our childhoods.  In some ways the human species itself strikes me as being a sort of toddler in the 'No!' stage.\"

\n

Subhan:  \"Look, there's a perfectly normal and non-mysterious chain of causality that describes where morality comes from, and it's not from outside humans. If you'd been told that killing was right, or if you'd evolved to enjoy killing—much more than we already do, I mean—or if you really did have a mini-stroke that damaged your frontal lobe, then you'd be going around saying, 'Killing is right regardless of what anyone thinks of it'.  No great light in the sky would correct you.  There is nothing else to the story.\"

\n

Obert:  \"Really, I think that in this whole debate between us, there is surprisingly litle information to be gained by such observations as 'You only say that because your brain makes you say it.' If a neutrino storm hit me, I might say '2 + 3 = 6', but that wouldn't change arithmetic.  It would just make my brain compute something other than arithmetic.  And these various misfortunes that you've described, wouldn't change the crime of murder.  They would just make my brain compute something other than morality.\"

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Changing Your Metaethics\"

\n

Previous post: \"Math is Subjunctively Objective\"

" } }, { "_id": "WAQ3qMD4vdXheQmui", "title": "Math is Subjunctively Objective", "pageUrl": "https://www.lesswrong.com/posts/WAQ3qMD4vdXheQmui/math-is-subjunctively-objective", "postedAt": "2008-07-25T11:06:09.000Z", "baseScore": 50, "voteCount": 33, "commentCount": 118, "url": null, "contents": { "documentId": "WAQ3qMD4vdXheQmui", "html": "

Followup to:  Probability is Subjectively Objective, Can Counterfactuals Be True?

\n

I am quite confident that the statement 2 + 3 = 5 is true; I am far less confident of what it means for a mathematical statement to be true.

\n

In \"The Simple Truth\" I defined a pebble-and-bucket system for tracking sheep, and defined a condition for whether a bucket's pebble level is \"true\" in terms of the sheep.  The bucket is the belief, the sheep are the reality.  I believe 2 + 3 = 5.  Not just that two sheep plus three sheep equal five sheep, but that 2 + 3 = 5.  That is my belief, but where is the reality?

\n

So now the one comes to me and says:  \"Yes, two sheep plus three sheep equals five sheep, and two stars plus three stars equals five stars.  I won't deny that.  But this notion that 2 + 3 = 5, exists only in your imagination, and is purely subjective.\"

\n

\n

So I say:  Excuse me, what?

\n

And the one says:  \"Well, I know what it means to observe two sheep and three sheep leave the fold, and five sheep come back.  I know what it means to press '2' and '+' and '3' on a calculator, and see the screen flash '5'.  I even know what it means to ask someone 'What is two plus three?' and hear them say 'Five.'  But you insist that there is some fact beyond this.  You insist that 2 + 3 = 5.\"

\n

Well, it kinda is.

\n

\"Perhaps you just mean that when you mentally visualize adding two dots and three dots, you end up visualizing five dots.  Perhaps this is the content of what you mean by saying, 2 + 3 = 5.  I have no trouble with that, for brains are as real as sheep.\"

\n

No, for it seems to me that 2 + 3 equaled 5 before there were any humans around to do addition.  When humans showed up on the scene, they did not make 2 + 3 equal 5 by virtue of thinking it.  Rather, they thought that '2 + 3 = 5' because 2 + 3 did in fact equal 5.

\n

\"Prove it.\"

\n

I'd love to, but I'm busy; I've got to, um, eat a salad.

\n

\"The reason you believe that 2 + 3 = 5, is your mental visualization of two dots plus three dots yielding five dots.  Does this not imply that this physical event in your physical brain is the meaning of the statement '2 + 3 = 5'?\"

\n

But I honestly don't think that is what I mean.  Suppose that by an amazing cosmic coincidence, a flurry of neutrinos struck my neurons, causing me to imagine two dots colliding with three dots and visualize six dots.  I would then say, '2 + 3 = 6'.  But this wouldn't mean that 2 + 3 actually had become equal to 6.  Now, if what I mean by '2 + 3' consists entirely of what my mere physical brain merely happens to output, then a neutrino could make 2 + 3 = 6.  But you can't change arithmetic by tampering with a calculator.

\n

\"Aha!  I have you now!\"

\n

Is that so?

\n

\"Yes, you've given your whole game away!\"

\n

Do tell.

\n

\"You visualize a subjunctive world, a counterfactual, where your brain is struck by neutrinos, and says, '2 + 3 = 6'.  So you know that in this case, your future self will say that '2 + 3 = 6'.  But then you add up dots in your own, current brain, and your current self gets five dots.  So you say:  'Even if I believed \"2 + 3 = 6\", then 2 + 3 would still equal 5.'  You say:  '2 + 3 = 5 regardless of what anyone thinks of it.'  So your current brain, computing the same question while it imagines being different but is not actually different, finds that the answer seems to be the same.  Thus your brain creates the illusion of an additional reality that exists outside it, independent of any brain.\"

\n

Now hold on!  You've explained my belief that 2 + 3 = 5 regardless of what anyone thinks, but that's not the same as explaining away my belief.  Since 2 + 3 = 5 does not, in fact, depend on what any human being thinks of it, therefore it is right and proper that when I imagine counterfactual worlds in which people (including myself) think '2 + 3 = 6', and I ask what 2 + 3 actually equals in this counterfactual world, it still comes out as 5.

\n

\"Don't you see, that's just like trying to visualize motion stopping everywhere in the universe, by imagining yourself as an observer outside the universe who experiences time passing while nothing moves.  But really there is no time without motion.\"

\n

I see the analogy, but I'm not sure it's a deep analogy.  Not everything you can imagine seeing, doesn't exist.  It seems to me that a brain can easily compute quantities that don't depend on the brain.

\n

\"What?  Of course everything that the brain computes depends on the brain!  Everything that the brain computes, is computed inside the brain!\"

\n

That's not what I mean!  I just mean that the brain can perform computations that refer to quantities outside the brain.  You can set up a question, like 'How many sheep are in the field?', that isn't about any particular person's brain, and whose actual answer doesn't depend on any particular person's brain.  And then a brain can faithfully compute that answer.

\n

If I count two sheep and three sheep returning from the field, and Autrey's brain gets hit by neutrinos so that Autrey thinks there are six sheep in the fold, then that's not going to cause there to be six sheep in the fold—right?  The whole question here is just not about what Autrey thinks, it's about how many sheep are in the fold.

\n

Why should I care what my subjunctive future self thinks is the sum of 2 + 3, any more than I care what Autrey thinks is the sum of 2 + 3, when it comes to asking what is really the sum of 2 + 3?

\n

\"Okay... I'll take another tack.  Suppose you're a psychiatrist, right?  And you're an expert witness in court cases—basically a hired gun, but you try to deceive yourself about it.  Now wouldn't it be a bit suspicious, to find yourself saying:  'Well, the only reason that I in fact believe that the defendant is insane, is because I was paid to be an expert psychiatric witness for the defense.  And if I had been paid to witness for the prosecution, I undoubtedly would have come to the conclusion that the defendant is sane.  But my belief that the defendant is insane, is perfectly justified; it is justified by my observation that the defendant used his own blood to paint an Elder Sign on the wall of his jail cell.'\"

\n

Yes, that does sound suspicious, but I don't see the point.

\n

\"My point is that the physical cause of your belief that 2 + 3 = 5, is the physical event of your brain visualizing two dots and three dots and coming up with five dots.  If your brain came up six dots, due to a neutrino storm or whatever, you'd think '2 + 3 = 6'.  How can you possibly say that your belief means anything other than the number of dots your brain came up with?\"

\n

Now hold on just a second.  Let's say that the psychiatrist is paid by the judge, and when he's paid by the judge, he renders an honest and neutral evaluation, and his evaluation is that the defendant is sane, just played a bit too much Mythos.  So it is true to say that if the psychiatrist had been paid by the defense, then the psychiatrist would have found the defendant to be insane.  But that doesn't mean that when the psychiatrist is paid by the judge, you should dismiss his evaluation as telling you nothing more than 'the psychiatrist was paid by the judge'.  On those occasions where the psychiatrist is paid by the judge, his opinion varies with the defendant, and conveys real evidence about the defendant.

\n

\"Okay, so now what's your point?\"

\n

That when my brain is not being hit by a neutrino storm, it yields honest and informative evidence that 2 + 3 = 5.

\n

\"And if your brain was hit by a neutrino storm, you'd be saying, '2 + 3 = 6 regardless of what anyone thinks of it'.  Which shows how reliable that line of reasoning is.\"

\n

I'm not claiming that my saying '2 + 3 = 5 no matter what anyone thinks' represents stronger numerical evidence than my saying '2 + 3 = 5'.  My saying the former just tells you something extra about my epistemology, not numbers.

\n

\"And you don't think your epistemology is, oh, a little... incoherent?\"

\n

No!  I think it is perfectly coherent to simultaneously hold all of the following:

\n\n

\"Now that's just crazy talk!\"

\n

No, you're the crazy one!  You're collapsing your levels; you think that just because my brain asks a question, it should start mixing up queries about the state of my brain into the question.  Not every question my brain asks is about my brain!

\n

Just because something is computed in my brain, doesn't mean that my computation has to depend on my brain's representation of my brain.  It certainly doesn't mean that the actual quantity depends on my brain!  It's my brain that computes my beliefs about gravity, and if neutrinos hit me I will come to a different conclusion; but that doesn't mean that I can think different and fly.  And I don't think I can think different and fly, either!

\n

I am not a calculator who, when someone presses my \"2\" and \"+\" and \"3\" buttons, computes, \"What do I output when someone presses 2 + 3?\"  I am a calculator who computes \"What is 2 + 3?\"  The former is a circular question that can consistently return any answer—which makes it not very helpful.
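A minimal sketch of that distinction (the Python below and its function names are mine, purely for illustration) shows why the circular question really can return any answer consistently:

```python
# A toy contrast between the two questions; function names are invented.

def direct_calculator() -> int:
    # The non-circular question: what is 2 + 3?
    return 2 + 3

def is_self_consistent(guess: int) -> bool:
    # The circular question: 'what do I output when someone presses
    # 2 + 3?'  If my output just is whatever I predict I will output,
    # then every prediction confirms itself.
    my_output = guess
    return my_output == guess

print(direct_calculator())                             # 5, and only 5
print([g for g in range(10) if is_self_consistent(g)])
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -- any answer is consistent, hence unhelpful
```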

\n

Shouldn't we expect non-circular questions to be the normal case?  The brain evolved to guess at the state of the environment, not guess at 'what the brain will think is the state of the environment'.  Even when the brain models itself, it is trying to know itself, not trying to know what it will think about itself.

\n

Judgments that depend on our representations of anyone's state of mind, like \"It's okay to kiss someone only if they want to be kissed\", are the exception rather than the rule.

\n

Most quantities we bother to think about at all, will appear to be 'the same regardless of what anyone thinks of them'.  When we imagine thinking differently about the quantity, we will imagine the quantity coming out the same; it will feel \"subjunctively objective\".

\n

And there's nothing wrong with that!  If something appears to be the same regardless of what anyone thinks, then maybe that's because it actually is the same regardless of what anyone thinks.

\n

Even if you explain that the quantity appears to stay the same in my imagination, merely because my current brain computes it the same way—well, how else would I imagine something, except with my current brain?  Should I imagine it using a rock?

\n

\"Okay, so it's possible for something that appears thought-independent, to actually be thought-independent.  But why do you think that 2 + 3 = 5, in particular, has some kind of existence independently of the dots you imagine?\"

\n

Because two sheep plus three sheep equals five sheep, and this appears to be true in every mountain and every island, every swamp and every plain and every forest.

\n

And moreover, it is also true of two rocks plus three rocks.

\n

And further, when I press buttons upon a calculator and activate a network of transistors,  it successfully predicts how many sheep or rocks I will find.

\n

Since all these quantities, correlate with each other and successfully predict each other, surely they must have something like a common cause, a similarity that factors out?  Something that is true beyond and before the concrete observations?  Something that the concrete observations hold in common?  And this commonality is then also the sponsor of my answer, 'five', that I find in my own brain.

\n

\"But my dear sir, if the fact of 2 + 3 = 5 exists somewhere outside your brain... then where is it?\"

\n

Damned if I know.

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Does Your Morality Care What You Think?\"

\n

Previous post: \"Can Counterfactuals Be True?\"

" } }, { "_id": "dhGGnB2oxBP3m5cBc", "title": "Can Counterfactuals Be True?", "pageUrl": "https://www.lesswrong.com/posts/dhGGnB2oxBP3m5cBc/can-counterfactuals-be-true", "postedAt": "2008-07-24T04:40:49.000Z", "baseScore": 33, "voteCount": 27, "commentCount": 47, "url": null, "contents": { "documentId": "dhGGnB2oxBP3m5cBc", "html": "

Followup to:  Probability is Subjectively Objective

\n

The classic explanation of counterfactuals begins with this distinction:

\n
    \n
  1. If Lee Harvey Oswald didn't shoot John F. Kennedy, then someone else did.
  2. If Lee Harvey Oswald hadn't shot John F. Kennedy, someone else would have.
\n

In ordinary usage we would agree with the first statement, but not the second (I hope).

\n

If, somehow, we learn the definite fact that Oswald did not shoot Kennedy, then someone else must have done so, since Kennedy was in fact shot.

\n

But if we went back in time and removed Oswald, while leaving everything else the same, then—unless you believe there was a conspiracy—there's no particular reason to believe Kennedy would be shot:

\n

We start by imagining the same historical situation that existed in 1963—by a further act of imagination, we remove Oswald from our vision—we run forward the laws that we think govern the world—visualize Kennedy parading through in his limousine—and find that, in our imagination, no one shoots Kennedy.

\n

It's an interesting question whether counterfactuals can be true or false.  We never get to experience them directly.

\n

\n

If we disagree on what would have happened if Oswald hadn't been there, what experiment could we perform to find out which of us is right?

\n

And if the counterfactual is something unphysical—like, \"If gravity had stopped working three days ago, the Sun would have exploded\"—then there aren't even any alternate histories out there to provide a truth-value.

\n

It's not as simple as saying that if the bucket contains three pebbles, and the pasture contains three sheep, the bucket is true.

\n

Since the counterfactual event only exists in your imagination, how can it be true or false?

\n

So... is it just as fair to say that \"If Oswald hadn't shot Kennedy, the Sun would have exploded\"?

\n

After all, the event only exists in our imaginations—surely that means it's subjective, so we can say anything we like?

\n

But so long as we have a lawful specification of how counterfactuals are constructed—a lawful computational procedure—then the counterfactual result of removing Oswald, depends entirely on the empirical state of the world.

\n

If there was no conspiracy, then any reasonable computational procedure that simulates removing Oswald's bullet from the course of history, ought to return an answer of Kennedy not getting shot.

\n

\"Reasonable!\" you say.  \"Ought!\" you say.

\n

But that's not the point; the point is that if you do pick some fixed computational procedure, whether it is reasonable or not, then either it will say that Kennedy gets shot, or not, and what it says will depend on the empirical state of the world.  So that, if you tell me, \"I believe that this-and-such counterfactual construal, run over Oswald's removal, preserves Kennedy's life\", then I can deduce that you don't believe in the conspiracy.

\n

Indeed, so long as we take this computational procedure as fixed, then the actual state of the world (which either does include a conspiracy, or does not) presents a ready truth-value for the output of the counterfactual.

\n

In general, if you give me a fixed computational procedure, like \"multiply by 7 and add 5\", and then you point to a 6-sided die underneath a cup, and say, \"The result-of-procedure is 26!\" then it's not hard at all to assign a truth value to this statement.  Even if the actual die under the cup only ever takes on the values between 1 and 6, so that \"26\" is not found anywhere under the cup.  The statement is still true if and only if the die is showing 3; that is its empirical truth-condition.
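A minimal sketch, using only the procedure and numbers already given above, shows how mechanical the truth-condition is:

```python
# The fixed procedure from the text: multiply by 7 and add 5.
def procedure(die_face: int) -> int:
    return die_face * 7 + 5

# The claim 'the result-of-procedure is 26' has a definite empirical
# truth-condition: the set of die faces on which it comes out true.
satisfying_faces = [face for face in range(1, 7) if procedure(face) == 26]
print(satisfying_faces)  # [3] -- the claim is true iff the die shows 3
```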

\n

And what about the statement ((3 * 7) + 5) = 26?  Where is the truth-condition for that statement located?  This I don't know; but I am nonetheless quite confident that it is true.  Even though I am not confident that this 'true' means exactly the same thing as the 'true' in \"the bucket is 'true' when it contains the same number of pebbles as sheep in the pasture\".

\n

So if someone I trust—presumably someone I really trust—tells me, \"If Oswald hadn't shot Kennedy, someone else would have\", and I believe this statement, then I believe the empirical reality is such as to make the counterfactual computation come out this way.  Which would seem to imply the conspiracy.  And I will anticipate accordingly.

\n

Or if I find out that there was a conspiracy, then this will confirm the truth-condition of the counterfactual—which might make a bit more sense than saying, \"Confirm that the counterfactual is true.\"

\n

But how do you actually compute a counterfactual?  For this you must consult Judea Pearl.  Roughly speaking, you perform surgery on graphical models of causal processes; you sever some variables from their ordinary parents and surgically set them to new values, and then recalculate the probability distribution.
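Here is a minimal sketch of that surgery idea on a toy two-variable model of the Oswald case; the structural equation and the probabilities are assumptions invented for illustration, not Pearl's own machinery:

```python
# A toy structural causal model; the equation and numbers are invented
# for this sketch.

def kennedy_shot(oswald_shoots: bool, conspiracy: bool) -> bool:
    # Structural equation: Kennedy is shot if Oswald shoots, or if a
    # conspiracy has arranged a backup shooter.
    return oswald_shoots or conspiracy

def p_shot_given_do_not_oswald(p_conspiracy: float) -> float:
    # Surgery: sever 'oswald_shoots' from its ordinary causes and force
    # it to False; then recompute the outcome's probability by averaging
    # over the remaining exogenous variable.
    total = 0.0
    for conspiracy, weight in ((True, p_conspiracy), (False, 1.0 - p_conspiracy)):
        if kennedy_shot(oswald_shoots=False, conspiracy=conspiracy):
            total += weight
    return total

# The counterfactual's output depends on the empirical state of the world:
print(p_shot_given_do_not_oswald(p_conspiracy=0.0))   # 0.0  -- no conspiracy
print(p_shot_given_do_not_oswald(p_conspiracy=0.25))  # 0.25 -- maybe a backup
```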

\n

There are other ways of defining counterfactuals, but I confess they all strike me as entirely odd.  Even worse, you have philosophers arguing over what the value of a counterfactual really is or really means, as if there were some counterfactual world actually floating out there in the philosophical void.  If you think I'm attacking a strawperson here, I invite you to consult the philosophical literature on Newcomb's Problem.

\n

A lot of philosophy seems to me to suffer from \"naive philosophical realism\"—the belief that philosophical debates are about things that automatically and directly exist as propertied objects floating out there in the void.

\n

You can talk about an ideal computation, or an ideal process, that would ideally be applied to the empirical world.  You can talk about your uncertain beliefs about the output of this ideal computation, or the result of the ideal process.

\n

So long as the computation is fixed, and so long as the computation itself is only over actually existent things.  Or the results of other computations previously defined—you should not have your computation be over \"nearby possible worlds\" unless you can tell me how to compute those, as well.

\n

A chief sign of naive philosophical realism is that it does not tell you how to write a computer program that computes the objects of its discussion.

\n

I have yet to see a camera that peers into \"nearby possible worlds\"—so even after you've analyzed counterfactuals in terms of \"nearby possible worlds\", I still can't write an AI that computes counterfactuals.

\n

But Judea Pearl tells me just how to compute a counterfactual, given only my beliefs about the actual world.

\n

I strongly privilege the real world that actually exists, and to a slightly lesser degree, logical truths about mathematical objects (preferably finite ones).  Anything else you want to talk about, I need to figure out how to describe in terms of the first two—for example, as the output of an ideal computation run over the empirical state of the real universe.

\n

The absence of this requirement as a condition, or at least a goal, of modern philosophy, is one of the primary reasons why modern philosophy is often surprisingly useless in my AI work.  I've read whole books about decision theory that take counterfactual distributions as givens, and never tell you how to compute the counterfactuals.

\n

Oh, and to talk about \"the probability that John F. Kennedy was shot, given that Lee Harvey Oswald didn't shoot him\", we write:

\n
\n

P(Kennedy_shot|Oswald_not)

\n
\n

And to talk about \"the probability that John F. Kennedy would have been shot, if Lee Harvey Oswald hadn't shot him\", we write:

\n
\n

P(Oswald_not []-> Kennedy_shot)

\n
\n

That little symbol there is supposed to be a box with an arrow coming out of it, but I don't think Unicode has it.

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Math is Subjunctively Objective\"

\n

Previous post: \"Existential Angst Factory\"

" } }, { "_id": "AJ9dX59QXokZb35fk", "title": "When (Not) To Use Probabilities", "pageUrl": "https://www.lesswrong.com/posts/AJ9dX59QXokZb35fk/when-not-to-use-probabilities", "postedAt": "2008-07-23T10:58:39.000Z", "baseScore": 74, "voteCount": 55, "commentCount": 47, "url": null, "contents": { "documentId": "AJ9dX59QXokZb35fk", "html": "

It may come as a surprise to some readers of this blog, that I do not always advocate using probabilities.

\n\n

Or rather, I don't always advocate that human beings, trying to solve their problems, should try to make up verbal probabilities, and then apply the laws of probability theory or decision theory to whatever number they just made up, and then use the result as their final belief or decision.

\n\n

The laws of probability are laws, not suggestions, but often the true Law is too difficult for us humans to compute.  If P != NP and the universe has no source of exponential computing power, then there are evidential updates too difficult for even a superintelligence to compute - even though the probabilities would be quite well-defined, if we could afford to calculate them.

\n\n

So sometimes you don't apply probability theory.  Especially if you're human, and your brain has evolved with all sorts of useful algorithms for uncertain reasoning, that don't involve verbal probability assignments.

\n\n

Not sure where a flying ball will land?  I don't advise trying to formulate a probability distribution over its landing spots, performing deliberate Bayesian updates on your glances at the ball, and calculating the expected utility of all possible strings of motor instructions to your muscles.

Trying to catch a flying ball, you're probably better off with your brain's built-in mechanisms than using deliberative verbal reasoning to invent or manipulate probabilities.

\n\n

But this doesn't mean you're going beyond probability theory or above probability theory.

\n\n

The Dutch Book arguments still apply.  If I offer you a choice of gambles ($10,000 if the ball lands in this square, versus $10,000 if I roll a die and it comes up 6), and you answer in a way that does not allow consistent probabilities to be assigned, then you will accept combinations of gambles that are certain losses, or reject gambles that are certain gains...
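To make the point concrete, here is a toy illustration with invented, incoherent ticket prices; whatever happens, the buyer loses:

```python
# Normalize the $10,000 payoff to $1.  Suppose your betting behavior
# implies P(ball lands in the square) = 0.7 and P(it does not) = 0.6 --
# incoherent, since the two must sum to 1.  Numbers invented for the sketch.
price_square = 0.70      # what you pay for a ticket worth $1 if it lands there
price_not_square = 0.60  # what you pay for a ticket worth $1 if it does not

total_paid = price_square + price_not_square  # 1.30
payout = 1.00            # exactly one of the two tickets pays off
print(total_paid - payout)                    # 0.30 -- a guaranteed loss
```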

\n\n

Which still doesn't mean that you should try to use deliberative verbal reasoning.  I would expect that for professional baseball players, at least, it's more important to catch the ball than to assign consistent probabilities.  Indeed, if you tried to make up probabilities, the verbal probabilities might not even be very good ones, compared to some gut-level feeling - some wordless representation of uncertainty in the back of your mind.

\n\n

There is nothing privileged about uncertainty that is expressed in words, unless the verbal parts of your brain do, in fact, happen to work better on the problem.

\n\n

And while accurate maps of the same territory will necessarily be consistent among themselves, not all consistent maps are accurate.  It is more important to be accurate than to be consistent, and more important to catch the ball than to be consistent.

\n\n

In fact, I generally advise against making up probabilities, unless it seems like you have some decent basis for them.  This only fools you into believing that you are more Bayesian than you actually are.

\n\n

To be specific, I would advise, in most cases, against using non-numerical procedures to create what appear to be numerical probabilities.  Numbers should come from numbers.

\n\n

Now there are benefits from trying to translate your gut feelings of uncertainty into verbal probabilities.  It may help you spot problems like the conjunction fallacy.  It may help you spot internal inconsistencies - though it may not show you any way to remedy them.

\n\n

But you shouldn't go around thinking that, if you translate your gut feeling into "one in a thousand", then, on occasions when you emit these verbal words, the corresponding event will happen around one in a thousand times.  Your brain is not so well-calibrated.  If instead you do something nonverbal with your gut feeling of uncertainty, you may be better off, because at least you'll be using the gut feeling the way it was meant to be used.

\n\n

This specific topic came up recently in the context of the Large Hadron Collider, and an argument given at the Global Catastrophic Risks conference:

\n\n

That we couldn't be sure that there was no error in the papers which showed from multiple angles that the LHC couldn't possibly destroy the world.  And moreover, the theory used in the papers might be wrong.  And in either case, there was still a chance the LHC could destroy the world.  And therefore, it ought not to be turned on.

\n\n

Now if the argument had been given in just this way, I would not have objected to its epistemology.

\n\n

But the speaker actually purported to assign a probability of at least 1 in 1000 that the theory, model, or calculations in the LHC paper were wrong; and a probability of at least 1 in 1000 that, if the theory or model or calculations were wrong, the LHC would destroy the world.
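Taken at face value, those two bounds multiply out to a floor on the asserted risk; a quick sketch of the arithmetic, assuming the second figure really is conditional on the first as stated:

```python
# The two stated lower bounds, multiplied together, put a floor on the
# asserted probability of disaster.
p_paper_wrong = 1e-3          # 'at least 1 in 1000' the paper is wrong
p_doom_given_wrong = 1e-3     # 'at least 1 in 1000' of doom, given that
print(p_paper_wrong * p_doom_given_wrong)   # 1e-06 -- at least 1 in a million
```

Which is the same order of magnitude as the lottery-device comparison that appears below.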

\n\n

After all, it's surely not so improbable that future generations will reject the theory used in the LHC paper, or reject the model, or maybe just find an error.  And if the LHC paper is wrong, then who knows what might happen as a result?

\n\n

So that is an argument - but to assign numbers to it?

\n\n

I object to the air of authority given these numbers pulled out of thin air.  I generally feel that if you can't use probabilistic tools to shape your feelings of uncertainty, you ought not to dignify them by calling them probabilities.

\n\n

The alternative I would propose, in this particular case, is to debate the general rule of banning physics experiments because you cannot be absolutely certain of the arguments that say they are safe.

\n\n

I hold that if you phrase it this way, then your mind, by considering frequencies of events, is likely to bring in more consequences of the decision, and remember more relevant historical cases.

\n\n

If you debate just the one case of the LHC, and assign specific probabilities, it (1) gives very shaky reasoning an undue air of authority, (2) obscures the general consequences of applying similar rules, and even (3) creates the illusion that we might come to a different decision if someone else published a new physics paper that decreased the probabilities.

\n\n

The authors at the Global Catastrophic Risk conference seemed to be suggesting that we could just do a bit more analysis of the LHC and then switch it on.  This struck me as the most disingenuous part of the argument.  Once you admit the argument "Maybe the analysis could be wrong, and who knows what happens then," there is no possible physics paper that can ever get rid of it.

\n\n

No matter what other physics papers had been published previously, the authors would have used the same argument and made up the same numerical probabilities at the Global Catastrophic Risk conference.  I cannot be sure of this statement, of course, but it has a probability of 75%.

\n\n

In general a rationalist tries to make their mind function at the best achievable power output; sometimes this involves talking about verbal probabilities, and sometimes it does not, but always the laws of probability theory govern.

\n\n

If all you have is a gut feeling of uncertainty, then you should probably stick with those algorithms that make use of gut feelings of uncertainty, because your built-in algorithms may do better than your clumsy attempts to put things into words.

\n\n

Now it may be that by reasoning thusly, I may find myself inconsistent.  For example, I would be substantially more alarmed about a lottery device with a well-defined chance of 1 in 1,000,000 of destroying the world, than I am about the Large Hadron Collider being switched on.

\n\n

On the other hand, if you asked me whether I could make one million statements of authority equal to "The Large Hadron Collider will not destroy the world", and be wrong, on average, around once, then I would have to say no.

\n\n

What should I do about this inconsistency?  I'm not sure, but I'm certainly not going to wave a magic wand to make it go away.  That's like finding an inconsistency in a pair of maps you own, and quickly scribbling some alterations to make sure they're consistent.

\n\n

I would also, by the way, be substantially more worried about a lottery device with a 1 in 1,000,000,000 chance of destroying the world, than a device which destroyed the world if the Judeo-Christian God existed.  But I would not suppose that I could make one billion statements, one after the other, fully independent and equally fraught as "There is no God", and be wrong on average around once.

\n\n

I can't say I'm happy with this state of epistemic affairs, but I'm not going to modify it until I can see myself moving in the direction of greater accuracy and real-world effectiveness, not just moving in the direction of greater self-consistency.  The goal is to win, after all.  If I make up a probability that is not shaped by probabilistic tools, if I make up a number that is not created by numerical methods, then maybe I am just defeating my built-in algorithms that would do better by reasoning in their native modes of uncertainty.

\n\n

Of course this is not a license to ignore probabilities that are well-founded.  Any numerical founding at all is likely to be better than a vague feeling of uncertainty; humans are terrible statisticians.  But pulling a number entirely out of your butt, that is, using a non-numerical procedure to produce a number, is nearly no foundation at all; and in that case you probably are better off sticking with the vague feelings of uncertainty.

\n\n

Which is why my Overcoming Bias posts generally use words like "maybe" and "probably" and "surely" instead of assigning made-up numerical probabilities like "40%" and "70%" and "95%".  Think of how silly that would look.  I think it actually would be silly; I think I would do worse thereby.

\n\n

I am not the kind of straw Bayesian who says that you should make up probabilities to avoid being subject to Dutch Books.  I am the sort of Bayesian who says that in practice, humans end up subject to Dutch Books because they aren't powerful enough to avoid them; and moreover it's more important to catch the ball than to avoid Dutch Books.  The math is like underlying physics, inescapably governing, but too expensive to calculate.  Nor is there any point in a ritual of cognition which mimics the surface forms of the math, but fails to produce systematically better decision-making.  That would be a lost purpose; this is not the true art of living under the law.

" } }, { "_id": "zGJw9PGhu9e8Z6BEX", "title": "Fake Norms, or \"Truth\" vs. Truth", "pageUrl": "https://www.lesswrong.com/posts/zGJw9PGhu9e8Z6BEX/fake-norms-or-truth-vs-truth", "postedAt": "2008-07-22T10:23:30.000Z", "baseScore": 28, "voteCount": 23, "commentCount": 16, "url": null, "contents": { "documentId": "zGJw9PGhu9e8Z6BEX", "html": "

Followup to:  Applause Lights

\n

When you say the word \"truth\", people know that \"truth\" is a good thing, and that they're supposed to applaud.  So it might seem like there is a social norm in favor of \"truth\".  But when it comes to some particular truth, like whether God exists, or how likely their startup is to thrive, people will say:  \"I just want to believe\" or \"you've got to be optimistic to succeed\".

\n

So Robin and I were talking about this, and Robin asked me how it is that people prevent themselves from noticing the conflict.

\n

I replied that I don't think active prevention is required.  First, as I quoted Michael Vassar:

\n
\n

\"It seems to me that much of the frustration in my life prior to a few years ago has been due to thinking that all other human minds necessarily and consistently implement modus ponens.\"

\n
\n

But more importantly, I don't think there does exist any social norm in favor of truth.  There's a social norm in favor of \"truth\".  There's a difference.

\n

\n

How would a norm in favor of truth actually be expressed, or acquired?

\n

If you were told many stories, as a kid, about specific people who accepted specific hard truths - like a story of a scientist accepting that their theory was wrong, say - then your brain would generalize over its experiences, and compress them, and form a concept of that-which-is-the-norm: the wordless act of accepting reality.

\n

If you heard someone say \"I don't care about the evidence, I just want to believe in God\", and you saw everyone else in the room gasp and regard them in frozen shock, then your brain would generalize a social norm against self-deception.  (E.g., the sort of thing that would happen if a scientist said \"I don't care about the evidence, I just want to believe in my-favorite-theory\" in front of their fellow scientists.)

\n

If, on the other hand, you see lots of people saying \"Isn't the truth wonderful?\" or \"I am in favor of truth\", then you learn that when someone says \"truth\", you are supposed to applaud.

\n

Now there are certain particular cases where someone will be castigated if they admit they refuse to see the truth: for example, \"I've seen the evidence on global warming but I don't want to believe it.\"  You couldn't get away with that in modern society.  But this indignation doesn't have to derive from violating a norm in favor of truth - it can derive from the widely held norm, \"'global warming' is bad\".

\n

But (said Robin) we see a lot of trees and hear the word \"tree\", and somehow we learn that the word refers to the thing - why don't people learn something similar about \"truth\", which is supposed to be good?

\n

I suggested in reply that the brain is capable of distinguishing different uses of the same syllables - a child is quite capable of learning that a right turn and the right answer are not the same kind of \"right\".  You won't necessarily assume that the right answer is always the one printed on the right side of the page.  Maybe the word \"truth\" is overloaded in the same way.

\n

Or maybe it's not exactly the same, but analogous: the social norms of which words we are meant to praise, and which deeds, are stored as separately as left hands and leftovers.

\n

There's a social norm in favor of \"diversity\", but not diversity.  There's a social norm in favor of \"free speech\", but not pornography.  There's a social norm in favor of \"democracy\", but it doesn't spontaneously occur to most people to suggest voting on their arguments.  There's a social norm in favor of \"love\", but not for letting some damn idiot marry your daughter even if the two of them are stupid and besotted.

\n

There's a social norm in favor of \"honesty\".  And there are in fact social norms for honesty about e.g. who cut down the cherry tree.   But not a social norm favoring saying what you think about someone else's appearance.

\n

I'm not suggesting that you ignore all the words that people praise.  Sometimes the things people praise with their lips, really are the things that matter, and our deeds are what fail to live up.  Neither am I suggesting that you should ignore what people really do, because sometimes that also embodies wisdom.  I would just say to be aware of any differences, and judge deliberately, and choose knowingly.

\n

Sounds good, doesn't it?  Everyone knows that being \"aware\" and \"choosing knowingly\" must surely be good things.  But is it a real norm or a fake norm?  Can you think of any stories you were told that illustrate the point?  (Not a rhetorical question, but a question one should learn to ask.)

\n

It's often not hard to find a norm in favor of \"rationality\" - but norms favoring rationality are rarer.

" } }, { "_id": "rnk9gmWSrcqNfg7p8", "title": "Should We Ban Physics?", "pageUrl": "https://www.lesswrong.com/posts/rnk9gmWSrcqNfg7p8/should-we-ban-physics", "postedAt": "2008-07-21T08:12:57.000Z", "baseScore": 23, "voteCount": 16, "commentCount": 22, "url": null, "contents": { "documentId": "rnk9gmWSrcqNfg7p8", "html": "

Nobel laureate Marie Curie died of aplastic anemia, the victim of radiation from the many fascinating glowing substances she had learned to isolate.

\n\n

How could she have known?  And the answer, as far as I can tell, is that she couldn't.  The only way she could have avoided death was by being too scared of anything new to go near it.  Would banning physics experiments have saved Curie from herself?

\n\n

But far more cancer patients than just one person have been saved by radiation therapy.  And the real cost of banning physics is not just losing that one experiment - it's losing physics.  No more Industrial Revolution.

\n\n

Some of us fall, and the human species carries on, and advances; our modern world is built on the backs, and sometimes the bodies, of people who took risks.  My father is fond of saying that if the automobile were invented nowadays, the saddle industry would arrange to have it outlawed.

\n\n

But what if the laws of physics had been different from what they are?  What if Curie, by isolating and purifying the glowy stuff, had caused something akin to a fission chain reaction gone critical... which, the laws of physics being different, had ignited the atmosphere or produced a strangelet?

At the recent Global Catastrophic Risks conference, someone proposed a policy prescription which, I argued, amounted to a ban on all physics experiments involving the production of novel physical situations - as opposed to measuring existing phenomena.  You can weigh a rock, but you can't purify radium, and you can't even expose the rock to X-rays unless you can show that exactly similar X-rays hit rocks all the time.  So the Large Hadron Collider, which produces collisions as energetic as cosmic rays, but not exactly the same as cosmic rays, would be off the menu.

\n\n

After all, whenever you do something new, even if you calculate that everything is safe, there is surely some probability of being mistaken in the calculation - right?

\n\n

So the one who proposed the policy, disagreed that their policy cashed out to a blanket ban on physics experiments.  And discussion is in progress, so I won't talk further about their policy argument.

\n\n

But if you consider the policy of "Ban Physics", and leave aside the total political infeasibility, I think the strongest way to frame the issue - from the pro-ban viewpoint - would be as follows:

\n\n

Suppose that Tegmark's Level IV Multiverse is real - that all possible mathematical objects, including all possible physical universes with all possible laws of physics, exist.  (Perhaps anthropically weighted by their simplicity.)

\n\n

Somewhere in Tegmark's Level IV Multiverse, then, there have undoubtedly been cases where intelligence arises somewhere in a universe with physics unlike this one - i.e., instead of a planet, life arises on a gigantic triangular plate hanging suspended in the void - and that intelligence accidentally destroys its world, perhaps its universe, in the course of a physics experiment.

\n\n

Maybe they experiment with alchemy, bring together some combination of substances that were never brought together before, and catalyze a change in their atmosphere.  Or maybe they manage to break their triangular plate, whose pieces fall and break other triangular plates.

\n\n

So, across the whole of the Tegmark Level IV multiverse - containing all possible physical universes with all laws of physics, weighted by the laws' simplicity:

\n\n

What fraction of sentient species that try to follow the policy "Ban all physics experiments involving situations with a remote possibility of being novel, until you can augment your own intelligence enough to do error-free cognition";

\n\n

And what fraction of sentient species that go ahead and do physics experiments;

\n\n

Survive in the long term, on average?

\n\n

In the case of the human species, trying to ban chemistry would hardly have been effective - but supposing that a species actually could make a collective decision like that, it's at least not clear-cut which fraction would be larger across the whole multiverse.  (We, in our universe, have already learned that you can't easily destroy the world with alchemy.)

\n\n

Or an even tougher question:  On average, across the multiverse, do you think you would advise an intelligent species to stop performing novel physics experiments during the interval after it figures out how to build transistors and before it builds AI?

" } }, { "_id": "iEyvHrFNhE9vMffh5", "title": "Touching the Old", "pageUrl": "https://www.lesswrong.com/posts/iEyvHrFNhE9vMffh5/touching-the-old", "postedAt": "2008-07-20T09:19:51.000Z", "baseScore": 17, "voteCount": 14, "commentCount": 32, "url": null, "contents": { "documentId": "iEyvHrFNhE9vMffh5", "html": "

I'm in Oxford right now, for the Global Catastrophic Risks conference.

\n\n

There's a psychological impact in walking down a street where any given building might be older than your whole country.

\n\n

Toby Ord and Anders Sandberg pointed out to me an old church tower in Oxford, that is a thousand years old.

\n\n

At the risk conference I heard a talk from someone talking about what the universe will look like in 10^100 years (barring intelligent modification thereof, which he didn't consider).

\n\n

The psychological impact of seeing that old church tower was greater.  I'm not defending this reaction, only admitting it.

\n\n

I haven't traveled as much as I would travel if I were free to follow my whims; I've never seen the Pyramids.  I don't think I've ever touched anything that has endured in the world for longer than that church tower.

\n\n

A thousand years...  I've lived less than half of 70, and sometimes it seems like a long time to me.  What would it be like, to be as old as that tower?  To have lasted through that much of the world, that much history and that much change?

Transhumanism does scare me.  I shouldn't wonder if it scares me more than it scares arch-luddites like Leon Kass.  Kass doesn't take it seriously; he doesn't expect to live that long.

\n\n

Yet I know - and I doubt the thought ever occurred to Kass - that even if something scares you, you can still have the courage to confront it.  Even time.  Even life.

\n\n\n\n

But sometimes it's such a strange thought that our world really is that old.

\n\n

The inverse failure of the logical fallacy of generalization from fictional evidence, is failure to generalize from things that actually happened.  We see movies, and in the ancestral environment, what you saw with your own eyes was real; we have to avoid treating them as available examples.

\n\n

Conversely, history books seem like writing on paper - but those are things that really happened, even if we hear about them selectively.  What happened there was as real to the people who lived it, as your own life, and equally evidence.

\n\n

Sometimes it's such a strange thought that the people in the history books really lived and experienced and died - that there's so much more depth to history than anything I've seen with my own eyes; so much more life than anything I've lived.

" } }, { "_id": "8rdoea3g6QGhWQtmx", "title": "Existential Angst Factory", "pageUrl": "https://www.lesswrong.com/posts/8rdoea3g6QGhWQtmx/existential-angst-factory", "postedAt": "2008-07-19T06:55:17.000Z", "baseScore": 79, "voteCount": 68, "commentCount": 101, "url": null, "contents": { "documentId": "8rdoea3g6QGhWQtmx", "html": "

Followup to:  The Moral Void

\n

A widespread excuse for avoiding rationality is the widespread belief that it is \"rational\" to believe life is meaningless, and thus suffer existential angst.  This is one of the secondary reasons why it is worth discussing the nature of morality.  But it's also worth attacking existential angst directly.

\n

I suspect that most existential angst is not really existential.  I think that most of what is labeled \"existential angst\" comes from trying to solve the wrong problem.

\n

Let's say you're trapped in an unsatisfying relationship, so you're unhappy.  You consider going on a skiing trip, or you actually go on a skiing trip, and you're still unhappy.  You eat some chocolate, but you're still unhappy.  You do some volunteer work at a charity (or better yet, work the same hours professionally and donate the money, thus applying the Law of Comparative Advantage) and you're still unhappy because you're in an unsatisfying relationship.

\n

So you say something like:  \"Skiing is meaningless, chocolate is meaningless, charity is meaningless, life is doomed to be an endless stream of woe.\"  And you blame this on the universe being a mere dance of atoms, empty of meaning.  Not necessarily because of some kind of subconsciously deliberate Freudian substitution to avoid acknowledging your real problem, but because you've stopped hoping that your real problem is solvable.  And so, as a sheer unexplained background fact, you observe that you're always unhappy.

\n

\n

Maybe you're poor, and so always unhappy.  Nothing you do solves your poverty, so it starts to seem like a universal background fact, along with your unhappiness.  So when you observe that you're always unhappy, you blame this on the universe being a mere dance of atoms.  Not as some kind of Freudian substitution, but because it has ceased to occur to you that there does exist some possible state of affairs in which life is not painful.

\n

What about rich heiresses with everything in the world available to buy, who still feel unhappy?  Perhaps they can't get themselves into satisfying romantic relationships.  One way or another, they don't know how to use their money to create happiness—they lack the expertise in hedonic psychology and/or self-awareness and/or simple competence.

\n

So they're constantly unhappy—and they blame it on existential angst, because they've already solved the only problem they know how to solve.  They already have enough money and they've already bought all the toys.  Clearly, if there's still a problem, it's because life is meaningless.

\n

If someone who weighs 560 pounds suffers from \"existential angst\", allegedly because the universe is a mere dance of particles, then stomach reduction surgery might drastically change their views of the metaphysics of morality.

\n

I'm not a fan of Timothy Ferriss, but The Four-Hour Workweek does make an interesting fun-theoretic observation:

\n
\n

Let's assume we have 10 goals and we achieve them—what is the desired outcome that makes all the effort worthwhile?  The most common response is what I also would have suggested five years ago: happiness.  I no longer believe this is a good answer. Happiness can be bought with a bottle of wine and has become ambiguous through overuse.  There is a more precise alternative that reflects what I believe the actual objective is.

\n

Bear with me.  What is the opposite of happiness? Sadness?  No.  Just as love and hate are two sides of the same coin, so are happiness and sadness.  Crying out of happiness is a perfect illustration of this.  The opposite of love is indifference, and the opposite of happiness is—here's the clincher—boredom.

\n

Excitement is the more practical synonym for happiness, and it is precisely what you should strive to chase.  It is the cure-all. When people suggest you follow your \"passion\" or your \"bliss,\" I propose that they are, in fact, referring to the same singular concept: excitement.

\n

This brings us full circle.  The question you should be asking isn't \"What do I want?\" or \"What are my goals?\" but \"What would excite me?\"

\n

Remember—boredom is the enemy, not some abstract \"failure.\"

\n

Living like a millionaire requires doing interesting things and not just owning enviable things.

\n
\n

I don't endorse all of the above, of course.  But note the SolvingTheWrongProblem anti-pattern Ferriss describes.  It was on reading the above that I first generalized ExistentialAngstFactory.

\n

Now, if someone is in an unproblematic, loving relationship; and they have enough money; and no major health problems; and they're signed up for cryonics so death is not approaching inexorably; and they're doing exciting work that they enjoy; and they believe they're having a positive effect on the world...

\n

...and they're still unhappy because it seems to them that the universe is a mere dance of atoms empty of meaning, then we may have a legitimate problem here.  One that, perhaps, can only be resolved by a very long discussion of the nature of morality and how it fits into a reductionist universe.

\n

But, mostly, I suspect that when people complain about the empty meaningless void, it is because they have at least one problem that they aren't thinking about solving—perhaps because they never identified it.  Being able to identify your own problems is a feat of rationality that schools don't explicitly train you to perform.  And they haven't even been told that an un-focused-on problem might be the source of their \"existential angst\"—they've just been told to blame it on existential angst.

\n

That's the other reason it might be helpful to understand the nature of morality—even if it just adds up to moral normality—because it tells you that if you're constantly unhappy, it's not because the universe is empty of meaning.

\n

Or maybe believing the universe is a \"mere dance of particles\" is one more factor contributing to human unhappiness; in which case, again, people can benefit from eliminating that factor.

\n

If it seems to you like nothing you do makes you happy, and you can't even imagine what would make you happy, it's not because the universe is made of particle fields.  It's because you're still solving the wrong problem.  Keep searching, until you find the visualizable state of affairs in which the existential angst seems like it should go away—that might (or might not) tell you the real problem; but at least, don't blame it on reductionism.

\n

Added:  Several commenters pointed out that random acts of brain chemistry may also be responsible for depression, even if your life is otherwise fine.  As far as I know, this is true.  But, once again, it won't help to mistake that random act of brain chemistry as being about existential issues; that might prevent you from trying neuropharmaceutical interventions.

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Can Counterfactuals Be True?\"

\n

Previous post: \"Could Anything Be Right?\"

" } }, { "_id": "vy9nnPdwTjSmt5qdb", "title": "Could Anything Be Right?", "pageUrl": "https://www.lesswrong.com/posts/vy9nnPdwTjSmt5qdb/could-anything-be-right", "postedAt": "2008-07-18T07:19:28.000Z", "baseScore": 77, "voteCount": 58, "commentCount": 39, "url": null, "contents": { "documentId": "vy9nnPdwTjSmt5qdb", "html": "

Years ago, Eliezer1999 was convinced that he knew nothing about morality.

\n

For all he knew, morality could require the extermination of the human species; and if so he saw no virtue in taking a stand against morality, because he thought that, by definition, if he postulated that moral fact, that meant human extinction was what \"should\" be done.

\n

I thought I could figure out what was right, perhaps, given enough reasoning time and enough facts, but that I currently had no information about it.  I could not trust evolution which had built me.  What foundation did that leave on which to stand?

\n

Well, indeed Eliezer1999 was massively mistaken about the nature of morality, so far as his explicitly represented philosophy went.

\n

But as Davidson once observed, if you believe that \"beavers\" live in deserts, are pure white in color, and weigh 300 pounds when adult, then you do not have any beliefs about beavers, true or false.  You must get at least some of your beliefs right, before the remaining ones can be wrong about anything.

\n

My belief that I had no information about morality was not internally consistent.

\n

\n

Saying that I knew nothing felt virtuous, for I had once been taught that it was virtuous to confess my ignorance.  \"The only thing I know is that I know nothing,\" and all that.  But in this case I would have been better off considering the admittedly exaggerated saying, \"The greatest fool is the one who is not aware they are wise.\"  (This is nowhere near the greatest kind of foolishness, but it is a kind of foolishness.)

\n

Was it wrong to kill people?  Well, I thought so, but I wasn't sure; maybe it was right to kill people, though that seemed less likely.

\n

What kind of procedure would answer whether it was right to kill people?  I didn't know that either, but I thought that if you built a generic superintelligence (what I would later label a \"ghost of perfect emptiness\") then it could, you know, reason about what was likely to be right and wrong; and since it was superintelligent, it was bound to come up with the right answer.

\n

The problem that I somehow managed not to think too hard about, was where the superintelligence would get the procedure that discovered the procedure that discovered the procedure that discovered morality—if I couldn't write it into the start state that wrote the successor AI that wrote the successor AI.

\n

As Marcello Herreshoff later put it, \"We never bother running a computer program unless we don't know the output and we know an important fact about the output.\"  If I knew nothing about morality, and did not even claim to know the nature of morality, then how could I construct any computer program whatsoever—even a \"superintelligent\" one or a \"self-improving\" one—and claim that it would output something called \"morality\"?

\n

There are no-free-lunch theorems in computer science—in a maxentropy universe, no plan is better on average than any other.  If you have no knowledge at all about \"morality\", there's also no computational procedure that will seem more likely than others to compute \"morality\", and no meta-procedure that's more likely than others to produce a procedure that computes \"morality\".

\n
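
To make the maxentropy point concrete, here is a minimal sketch in Python, not any particular no-free-lunch theorem from the literature; the prediction rules are arbitrary stand-ins.  If every possible binary world is weighted equally, every fixed rule earns exactly the same average accuracy.

```python
from itertools import product

n = 4  # length of the binary "world" (label sequence)

def accuracy(predict, world):
    """Fraction of positions where the rule's guess matches the world."""
    return sum(predict(i) == world[i] for i in range(n)) / n

# Three arbitrary fixed rules: always 0, always 1, alternate.
rules = [lambda i: 0, lambda i: 1, lambda i: i % 2]

for predict in rules:
    # Average accuracy over all 2^n equally weighted worlds.
    avg = sum(accuracy(predict, w) for w in product((0, 1), repeat=n)) / 2 ** n
    print(avg)  # 0.5 every time, for every rule
```

\n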

I thought that surely even a ghost of perfect emptiness, finding that it knew nothing of morality, would see a moral imperative to think about morality.

\n

But the difficulty lies in the word think.  Thinking is not an activity that a ghost of perfect emptiness is automatically able to carry out.  Thinking requires running some specific computation that is the thought.  For a reflective AI to decide to think, requires that it know some computation which it believes is more likely to tell it what it wants to know, than consulting an Ouija board; the AI must also have a notion of how to interpret the output.

\n

If one knows nothing about morality, what does the word \"should\" mean, at all?  If you don't know whether death is right or wrong—and don't know how you can discover whether death is right or wrong—and don't know whether any given procedure might output the procedure for saying whether death is right or wrong—then what do these words, \"right\" and \"wrong\", even mean?

\n

If the words \"right\" and \"wrong\" have nothing baked into them—no starting point—if everything about morality is up for grabs, not just the content but the structure and the starting point and the determination procedure—then what is their meaning?  What distinguishes, \"I don't know what is right\" from \"I don't know what is wakalixes\"?

\n

A scientist may say that everything is up for grabs in science, since any theory may be disproven; but then they have some idea of what would count as evidence that could disprove the theory.  Could there be something that would change what a scientist regarded as evidence?

\n

Well, yes, in fact; a scientist who read some Karl Popper and thought they knew what \"evidence\" meant, could be presented with the coherence and uniqueness proofs underlying Bayesian probability, and that might change their definition of evidence.  They might not have had any explicit notion, in advance, that such a proof could exist.  But they would have had an implicit notion.  It would have been baked into their brains, if not explicitly represented therein, that such-and-such an argument would in fact persuade them that Bayesian probability gave a better definition of \"evidence\" than the one they had been using.

\n

In the same way, you could say, \"I don't know what morality is, but I'll know it when I see it,\" and make sense.

\n

But then you are not rebelling completely against your own evolved nature.  You are supposing that whatever has been baked into you to recognize \"morality\", is, if not absolutely trustworthy, then at least your initial condition with which you start debating.  Can you trust your moral intuitions to give you any information about morality at all, when they are the product of mere evolution?

\n

But if you discard every procedure that evolution gave you and all its products, then you discard your whole brain.  You discard everything that could potentially recognize morality when it sees it.  You discard everything that could potentially respond to moral arguments by updating your morality.  You even unwind past the unwinder: you discard the intuitions underlying your conclusion that you can't trust evolution to be moral.  It is your existing moral intuitions that tell you that evolution doesn't seem like a very good source of morality.  What, then, will the words \"right\" and \"should\" and \"better\" even mean?

\n

Humans do not perfectly recognize truth when they see it, and hunter-gatherers do not have an explicit concept of the Bayesian criterion of evidence.  But all our science and all our probability theory was built on top of a chain of appeals to our instinctive notion of \"truth\".  Had this core been flawed, there would have been nothing we could do in principle to arrive at the present notion of science; the notion of science would have just sounded completely unappealing and pointless.

\n

One of the arguments that might have shaken my teenage self out of his mistake, if I could have gone back in time to argue with him, was the question:

\n

Could there be some morality, some given rightness or wrongness, that human beings do not perceive, do not want to perceive, will not see any appealing moral argument for adopting, nor any moral argument for adopting a procedure that adopts it, etcetera?  Could there be a morality, and ourselves utterly outside its frame of reference?  But then what makes this thing morality—rather than a stone tablet somewhere with the words 'Thou shalt murder' written on it, with absolutely no justification offered?

\n

So all this suggests that you should be willing to accept that you might know a little about morality.  Nothing unquestionable, perhaps, but an initial state with which to start questioning yourself.  Baked into your brain but not explicitly known to you, perhaps; but still, that which your brain would recognize as right is what you are talking about.  You will accept at least enough of the way you respond to moral arguments as a starting point, to identify \"morality\" as something to think about.

\n

But that's a rather large step.

\n

It implies accepting your own mind as identifying a moral frame of reference, rather than all morality being a great light shining from beyond (that in principle you might not be able to perceive at all).  It implies accepting that even if there were a light and your brain decided to recognize it as \"morality\", it would still be your own brain that recognized it, and you would not have evaded causal responsibility—or evaded moral responsibility either, on my view.

\n

It implies dropping the notion that a ghost of perfect emptiness will necessarily agree with you, because the ghost might occupy a different moral frame of reference, respond to different arguments, be asking a different question when it computes what-to-do-next.

\n

And if you're willing to bake at least a few things into the very meaning of this topic of \"morality\", this quality of rightness that you are talking about when you talk about \"rightness\"—if you're willing to accept even that morality is what you argue about when you argue about \"morality\"—then why not accept other intuitions, other pieces of yourself, into the starting point as well?

\n

Why not accept that, ceteris paribus, joy is preferable to sorrow?

\n

You might later find some ground within yourself or built upon yourself with which to criticize this—but why not accept it for now?  Not just as a personal preference, mind you; but as something baked into the question you ask when you ask, \"What is truly right?\"

\n

But then you might find that you know rather a lot about morality!  Nothing certain—nothing unquestionable—nothing unarguable—but still, quite a bit of information.  Are you willing to relinquish your Socratean ignorance?

\n

I don't argue by definitions, of course.  But if you claim to know nothing at all about morality, then you will have problems with the meaning of your words, not just their plausibility.

" } }, { "_id": "pGvyqAQw6yqTjpKf4", "title": "The Gift We Give To Tomorrow", "pageUrl": "https://www.lesswrong.com/posts/pGvyqAQw6yqTjpKf4/the-gift-we-give-to-tomorrow", "postedAt": "2008-07-17T06:07:54.000Z", "baseScore": 162, "voteCount": 140, "commentCount": 101, "url": null, "contents": { "documentId": "pGvyqAQw6yqTjpKf4", "html": "

How, oh how, did an unloving and mindless universe, cough up minds who were capable of love?

\n

\"No mystery in that,\" you say, \"it's just a matter of natural selection.\"

\n

But natural selection is cruel, bloody, and bloody stupid.  Even when, on the surface of things, biological organisms aren't directly fighting each other—aren't directly tearing at each other with claws—there's still a deeper competition going on between the genes.  Genetic information is created when genes increase their relative frequency in the next generation—what matters for \"genetic fitness\" is not how many children you have, but that you have more children than others.  It is quite possible for a species to evolve to extinction, if the winning genes are playing negative-sum games.

\n

How, oh how, could such a process create beings capable of love?

\n

\"No mystery,\" you say, \"there is never any mystery-in-the-world; mystery is a property of questions, not answers.  A mother's children share her genes, so the mother loves her children.\"

\n

\n

But sometimes mothers adopt children, and still love them.  And mothers love their children for themselves, not for their genes.

\n

\"No mystery,\" you say, \"Individual organisms are adaptation-executers, not fitness-maximizers Evolutionary psychology is not about deliberately maximizing fitness—through most of human history, we didn't know genes existed.  We don't calculate our acts' effect on genetic fitness consciously, or even subconsciously.\"

\n

But human beings form friendships even with non-relatives: how, oh how, can it be?

\n

\"No mystery, for hunter-gatherers often play Iterated Prisoner's Dilemmas, the solution to which is reciprocal altruism.  Sometimes the most dangerous human in the tribe is not the strongest, the prettiest, or even the smartest, but the one who has the most allies.\"

\n

Yet not all friends are fair-weather friends; we have a concept of true friendship—and some people have sacrificed their life for their friends.  Would not such a devotion tend to remove itself from the gene pool?

\n

\"You said it yourself: we have a concept of true friendship and fair-weather friendship.  We can tell, or try to tell, the difference between someone who considers us a valuable ally, and someone executing the friendship adaptation.  We wouldn't be true friends with someone who we didn't think was a true friend to us—and someone with many true friends is far more formidable than someone with many fair-weather allies.\"

\n

And Mohandas Gandhi, who really did turn the other cheek?  Those who try to serve all humanity, whether or not all humanity serves them in turn?

\n

\"That perhaps is a more complicated story.  Human beings are not just social animals.  We are political animals who argue linguistically about policy in adaptive tribal contexts.  Sometimes the formidable human is not the strongest, but the one who can most skillfully argue that their preferred policies match the preferences of others.\"

\n

Um... that doesn't explain Gandhi, or am I missing something?

\n

\"The point is that we have the ability to argue about 'What should be done?' as a proposition—we can make those arguments and respond to those arguments, without which politics could not take place.\"

\n

Okay, but Gandhi?

\n

\"Believed certain complicated propositions about 'What should be done?' and did them.\"

\n

That sounds like it could explain any possible human behavior.

\n

\"If we traced back the chain of causality through all the arguments, it would involve: a moral architecture that had the ability to argue general abstract moral propositions like 'What should be done to people?'; appeal to hardwired intuitions like fairness, a concept of duty, pain aversion + empathy; something like a preference for simple moral propositions, probably reused from our previous Occam prior; and the end result of all this, plus perhaps memetic selection effects, was 'You should not hurt people' in full generality—\"

\n

And that gets you Gandhi.

\n

\"Unless you think it was magic, it has to fit into the lawful causal development of the universe somehow.\"

\n

Well... I certainly won't postulate magic, under any name.

\n

\"Good.\"

\n

But come on... doesn't it seem a little... amazing... that hundreds of millions of years worth of evolution's death tournament could cough up mothers and fathers, sisters and brothers, husbands and wives, steadfast friends and honorable enemies, true altruists and guardians of causes, police officers and loyal defenders, even artists sacrificing themselves for their art, all practicing so many kinds of love?  For so many things other than genes?  Doing their part to make their world less ugly, something besides a sea of blood and violence and mindless replication?

\n

\"Are you claiming to be surprised by this?  If so, question your underlying model, for it has led you to be surprised by the true state of affairs.  Since the beginning, not one unusual thing has ever happened.\"

\n

But how is it not surprising?

\n

\"What are you suggesting, that some sort of shadowy figure stood behind the scenes and directed evolution?\"

\n

Hell no.  But—

\n

\"Because if you were suggesting that, I would have to ask how that shadowy figure originally decided that love was a desirable outcome of evolution.  I would have to ask where that figure got preferences that included things like love, friendship, loyalty, fairness, honor, romance, and so on.  On evolutionary psychology, we can see how that specific outcome came about—how those particular goals rather than others were generated in the first place.  You can call it 'surprising' all you like.  But when you really do understand evolutionary psychology, you can see how parental love and romance and honor, and even true altruism and moral arguments, bear the specific design signature of natural selection in particular adaptive contexts of the hunter-gatherer savanna.  So if there was a shadowy figure, it must itself have evolved—and that obviates the whole point of postulating it.\"

\n

I'm not postulating a shadowy figure!  I'm just asking how human beings ended up so nice.

\n

\"Nice!  Have you looked at this planet lately?  We also bear all those other emotions that evolved, too—which would tell you very well that we evolved, should you begin to doubt it.  Humans aren't always nice.\"

\n

We're one hell of a lot nicer than the process that produced us, which lets elephants starve to death when they run out of teeth, and doesn't anesthetize a gazelle even as it lies dying and is of no further importance to evolution one way or the other.  It doesn't take much to be nicer than evolution.  To have the theoretical capacity to make one single gesture of mercy, to feel a single twinge of empathy, is to be nicer than evolution.  How did evolution, which is itself so uncaring, create minds on that qualitatively higher moral level than itself?  How did evolution, which is so ugly, end up doing anything so beautiful?

\n

\"Beautiful, you say?  Bach's Little Fugue in G Minor may be beautiful, but the sound waves, as they travel through the air, are not stamped with tiny tags to specify their beauty.  If you wish to find explicitly encoded a measure of the fugue's beauty, you will have to look at a human brain—nowhere else in the universe will you find it.  Not upon the seas or the mountains will you find such judgments written: they are not minds, they cannot think.\"

\n

Perhaps that is so, but still I ask:  How did evolution end up doing anything so beautiful, as giving us the ability to admire the beauty of a flower?

\n

\"Can you not see the circularity in your question?  If beauty were like some great light in the sky that shined from outside humans, then your question might make sense—though there would still be the question of how humans came to perceive that light.  You evolved with a psychology unlike evolution:  Evolution has nothing like the intelligence or the precision required to exactly quine its goal system.  In coughing up the first true minds, evolution's simple fitness criterion shattered into a thousand values.  You evolved with a psychology that attaches utility to things which evolution does not care about, like human life and happiness.  And then you look back and say, 'How marvelous, that uncaring evolution produced minds that care about sentient life!'  So your great marvel and wonder, that seems like far too much coincidence, is really no coincidence at all.\"

\n

But then it is still amazing that this particular circular loop, happened to loop around such important things as beauty and altruism.

\n

\"I don't think you're following me here.  To you, it seems natural to privilege the beauty and altruism as special, as preferred, because you value them highly; and you don't see this as a unusual fact about yourself, because many of your friends do likewise.  So you expect that a ghost of perfect emptiness would also value life and happiness—and then, from this standpoint outside reality, a great coincidence would indeed have occurred.\"

\n

But you can make arguments for the importance of beauty and altruism from first principles—that our aesthetic senses lead us to create new complexity, instead of repeating the same things over and over; and that altruism is important because it takes us outside ourselves, gives our life a higher meaning than sheer brute selfishness.

\n

\"Oh, and that argument is going to move even a ghost of perfect emptiness—now that you've appealed to slightly different values?  Those aren't first principles, they're just different principles.  Even if you've adopted a high-falutin' philosophical tone, still there are no universally compelling arguments.  All you've done is pass the recursive buck.\"

\n

You don't think that, somehow, we evolved to tap into something beyond—

\n

\"What good does it do to suppose something beyond?  Why should we pay more attention to that beyond thing, than we pay to our existence as humans?  How does it alter your personal responsibility, to say that you were only following the orders of the beyond thing?  And you would still have evolved to let the beyond thing, rather than something else, direct your actions.  You are only passing the recursive buck.  Above all, it would be too much coincidence.\"

\n

Too much coincidence?

\n

\"A flower is beautiful, you say.  Do you think there is no story behind that beauty, or that science does not know the story?  Flower pollen is transmitted by bees, so by sexual selection, flowers evolved to attract bees—by imitating certain mating signs of bees, as it happened; the flowers' patterns would look more intricate, if you could see in the ultraviolet.  Now healthy flowers are a sign of fertile land, likely to bear fruits and other treasures, and probably prey animals as well; so is it any wonder that humans evolved to be attracted to flowers?  But for there to be some great light written upon the very stars—those huge unsentient balls of burning hydrogen—which also said that flowers were beautiful, now that would be far too much coincidence.\"

\n

So you explain away the beauty of a flower?

\n

\"No, I explain it.  Of course there's a story behind the beauty of flowers and the fact that we find them beautiful.  Behind ordered events, one finds ordered stories; and what has no story is the product of random noise, which is hardly any better.  If you cannot take joy in things that have stories behind them, your life will be empty indeed.  I don't think I take any less joy in a flower than you do; more so, perhaps, because I take joy in its story as well.\"

\n

Perhaps as you say, there is no surprise from a causal viewpoint—no disruption of the physical order of the universe.  But it still seems to me that, in this creation of humans by evolution, something happened that is precious and marvelous and wonderful.  If we cannot call it a physical miracle, then call it a moral miracle.

\n

\"Because it's only a miracle from the perspective of the morality that was produced, thus explaining away all of the apparent coincidence from a merely causal and physical perspective?\"

\n

Well... I suppose you could interpret the term that way, yes.  I just meant something that was immensely surprising and wonderful on a moral level, even if it is not surprising on a physical level.

\n

\"I think that's what I said.\"

\n

But it still seems to me that you, from your own view, drain something of that wonder away.

\n

\"Then you have problems taking joy in the merely real.  Love has to begin somehow, it has to enter the universe somewhere.  It is like asking how life itself begins—and though you were born of your father and mother, and they arose from their living parents in turn, if you go far and far and far away back, you will finally come to a replicator that arose by pure accident—the border between life and unlife.  So too with love.

\n

\"A complex pattern must be explained by a cause which is not already that complex pattern.  Not just the event must be explained, but the very shape and form.  For love to first enter Time, it must come of something that is not love; if this were not possible, then love could not be.

\n

\"Even as life itself required that first replicator to come about by accident, parentless but still caused: far, far back in the causal chain that led to you: 3.85 billion years ago, in some little tidal pool.

\n

\"Perhaps your children's children will ask how it is that they are capable of love.

\n

\"And their parents will say:  Because we, who also love, created you to love.

\n

\"And your children's children will ask:  But how is it that you love?

\n

\"And their parents will reply:  Because our own parents, who also loved, created us to love in turn.

\n

\"Then your children's children will ask:  But where did it all begin?  Where does the recursion end?

\n

\"And their parents will say:  Once upon a time, long ago and far away, ever so long ago, there were intelligent beings who were not themselves intelligently designed.  Once upon a time, there were lovers created by something that did not love.

\n

\"Once upon a time, when all of civilization was a single galaxy and a single star: and a single planet, a place called Earth.

\n

\"Long ago, and far away, ever so long ago.\"

" } }, { "_id": "szAkYJDtXkcSAiHYE", "title": "Whither Moral Progress?", "pageUrl": "https://www.lesswrong.com/posts/szAkYJDtXkcSAiHYE/whither-moral-progress", "postedAt": "2008-07-16T05:04:42.000Z", "baseScore": 24, "voteCount": 23, "commentCount": 101, "url": null, "contents": { "documentId": "szAkYJDtXkcSAiHYE", "html": "

Followup to: Is Morality Preference?

\n

In the dialogue \"Is Morality Preference?\", Obert argues for the existence of moral progress by pointing to free speech, democracy, mass street protests against wars, the end of slavery... and we could also cite female suffrage, or the fact that burning a cat alive was once a popular entertainment... and many other things that our ancestors believed were right, but which we have come to see as wrong, or vice versa.

\n

But Subhan points out that if your only measure of progress is to take a difference against your current state, then you can follow a random walk, and still see the appearance of inevitable progress.

\n

\n

One way of refuting the simplest version of this argument, would be to say that we don't automatically think ourselves the very apex of possible morality; that we can imagine our descendants being more moral than us.

\n

But can you concretely imagine a being morally wiser than yourself—one who knows that some particular thing is wrong, when you believe it to be right?

\n

Certainly:  I am not sure of the moral status of chimpanzees, and hence I find it easy to imagine that a future civilization will label them definitely people, and castigate us for failing to cryopreserve the chimpanzees who died in human custody.

\n

Yet this still doesn't prove the existence of moral progress.  Maybe I am simply mistaken about the nature of changes in morality that have previously occurred—like looking at a time chart of \"differences between past and present\", noting that the difference has been steadily decreasing, and saying, without being able to visualize it, \"Extrapolating this chart into the future, we find that the future will be even less different from the present than the present.\"

\n

So let me throw the question open to my readers:  Whither moral progress?

\n

You might say, perhaps, \"Over time, people have become more willing to help one another—that is the very substance and definition of moral progress.\"

\n

But as John McCarthy put it:

\n
\n

\"If everyone were to live for others all the time, life would be like a procession of ants following each other around in a circle.\"

\n
\n

Once you make \"People helping each other more\" the definition of moral progress, then people helping each other all the time, is by definition the apex of moral progress.

\n

At the very least we have Moore's Open Question:  It is not clear that helping others all the time is automatically moral progress, whether or not you argue that it is; and so we apparently have some notion of what constitutes \"moral progress\" that goes beyond the direct identification with \"helping others more often\".

\n

Or if you identify moral progress with \"Democracy!\", then at some point there was a first democratic civilization—at some point, people went from having no notion of democracy as a good thing, to inventing the idea of democracy as a good thing.  If increasing democracy is the very substance of moral progress, then how did this moral progress come about to exist in the world?  How did people invent, without knowing it, this very substance of moral progress?

\n

It's easy to come up with concrete examples of moral progress.  Just point to a moral disagreement between past and present civilizations; or point to a disagreement between yourself and present civilization, and claim that future civilizations might agree with you.

\n

It's harder to answer Subhan's challenge—to show directionality, rather than a random walk, on the meta-level.  And explain how this directionality is implemented, on the meta-level: how people go from not having a moral ideal, to having it.

\n

(I have my own ideas about this, as some of you know.  And I'll thank you not to link to them in the comments, or quote them and attribute them to me, until at least 24 hours have passed from this post.)

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"The Gift We Give To Tomorrow\"

\n

Previous post: \"Probability is Subjectively Objective\"

" } }, { "_id": "qrwSvtra9NtSN6cKs", "title": "Posting May Slow", "pageUrl": "https://www.lesswrong.com/posts/qrwSvtra9NtSN6cKs/posting-may-slow", "postedAt": "2008-07-16T03:00:00.000Z", "baseScore": 4, "voteCount": 3, "commentCount": 0, "url": null, "contents": { "documentId": "qrwSvtra9NtSN6cKs", "html": "

Greetings, fearless readers:

\n\n

Due to the Oxford conference on Global Catastrophic Risk, I may miss some posts - possibly quite a few.

\n\n

Or possibly not.

\n\n

Just so you don't think I'm dead.

\n\n

Sincerely,
Eliezer.

" } }, { "_id": "Wr9DYtgEAkPGK2fer", "title": "Lawrence Watt-Evans's Fiction", "pageUrl": "https://www.lesswrong.com/posts/Wr9DYtgEAkPGK2fer/lawrence-watt-evans-s-fiction", "postedAt": "2008-07-15T03:00:00.000Z", "baseScore": 47, "voteCount": 34, "commentCount": 54, "url": null, "contents": { "documentId": "Wr9DYtgEAkPGK2fer", "html": "

One of my pet topics, on which I will post more one of these days, is the Rationalist in Fiction.  Most of the time - it goes almost without saying - the Rationalist is done completely wrong.  In Hollywood, the Rationalist is a villain, or a cold emotionless foil, or a child who has to grow into a real human being, or a fool whose probabilities are all wrong, etcetera.  Even in science fiction, the Rationalist character is rarely done right - bearing the same resemblance to a real rationalist, as the mad scientist genius inventor who designs a new nuclear reactor in a month, bears to real scientists and engineers.

\n\n

Perhaps this is because most speculative fiction, generally speaking, is interested in someone battling monsters or falling in love or becoming a vampire, or whatever, not in being rational... and it would probably be worse fiction, if the author tried to make that the whole story.  But that can't be the entire problem.  I've read at least one author whose plots are not about rationality, but whose characters are nonetheless, in passing, realistically rational.

\n\n

That author is Lawrence Watt-Evans.  His work stands out for a number of reasons, the first being that it is genuinely unpredictable.  Not because of a postmodernist contempt for coherence, but because there are events going on outside the hero's story, just like real life.

\n\n

With most authors, if they set up a fantasy world with a horrible evil villain and give their main character the one sword that can kill that villain, you could guess that, at the end of the book, the main character is going to kill the evil villain with the sword.

\n\n

Not Lawrence Watt-Evans.  In a Watt-Evans book, it's entirely possible that the evil villain will die of a heart attack halfway through the book, that the character will then decide to sell the sword because they'd rather have the money, and that they will use the money to set up an investment banking company.

That didn't actually happen in any particular Watt-Evans book - I don't believe in spoilers - but it gives you something of the flavor.

\n\n

And Watt-Evans doesn't always do this, either - just as, even in real life, things sometimes do go as you expect.

\n\n

It's this strange realism that charms me about Watt-Evans's work.  It's not done as a schtick, but as faithfulness-to-reality.  Real life doesn't run on perfect rails of dramatic necessity; neither is it random postmodern chaos.  I admire an author who can be faithful to that, and still tell a story.

\n\n

Watt-Evans's characters, if they happen to be rationalists, are realistic rationalists - they think the same things that you or I would, in their situations.

\n\n

If the character gets catapulted into a fantasy world, they actually notice the resemblance to their fantasy books, wonder about it, and think to themselves, "If this were a fantasy book, the next thing that would happen is X..." (which may or may not happen, because Watt-Evans doesn't write typical fantasy books).  It's not done as a postmodern self-referential schtick, but as a faithfulness-to-reality; they think what a real rational person would think, in their shoes.

\n\n

If the character finds out that it is their destiny to destroy the world, they don't waste time on immense dramatic displays - after they get over the shock, they land on their feet and start thinking about it in more or less the fashion that you or I would in their shoes.  Not just, "How do I avoid this?  Are there any possibilities I've overlooked?" but also "Am I sure this is really what's going on?  How reliable is this information?"

\n\n

If a Watt-Evans character gets their hands on a powerful cheat, they are going to exploit it to the fullest and actively think about creative new ways to use it.  If they find a staff of healing, they're going to set up a hospital.  If they invent a teleportation spell, they're going to think about new industrial uses.

\n\n

I hate it when some artifact of world-cracking power is introduced and then used as a one-time plot device.  Eventually you get numb, though.

\n\n

But if a Watt-Evans character finds a device with some interesting magical power halfway through the book, and there's some clever way that the magical power can be used to take over the world, and that character happens to want to take over the world, she's going to say the hell with whatever she was previously doing and go for it.

\n\n

Most fictional characters are stupid, because they have to be.  This occurs for several reasons; but in speculative fiction, a primary reason is that the author wants to throw around wish-fulfillment superpowers, and the author isn't competent enough to depict the real consequences of halfway intelligent people using that power.

\n\n

Lawrence Watt-Evans's stories aren't about the intelligence or rationality of his characters.  Nonetheless, Watt-Evans writes intelligent characters, and he's willing to deal with the consequences, which are huge.

\n\n

Maybe that's the main reason we don't see many realistic rationalists in fiction.

\n\n

I'd like to see more rationalist fiction.  Not necessarily in Watt-Evans's exact vein, because we already have Watt-Evans, and thus, there is no need to invent him.  But rationalist fiction is hard to do well; there are plenty of cliches out there, but few depictions that say something new or true.

\n\n

Lawrence Watt-Evans is not going to be everyone's cup of tea, but if you like SF&F already, give it a try.  Suggested starting books:  The Unwilling Warlord (fantasy), Denner's Wreck (SF).

" } }, { "_id": "XhaKvQyHzeXdNnFKy", "title": "Probability is Subjectively Objective", "pageUrl": "https://www.lesswrong.com/posts/XhaKvQyHzeXdNnFKy/probability-is-subjectively-objective", "postedAt": "2008-07-14T09:16:50.000Z", "baseScore": 43, "voteCount": 38, "commentCount": 72, "url": null, "contents": { "documentId": "XhaKvQyHzeXdNnFKy", "html": "

Followup to: Probability is in the Mind

\n
\n

\"Reality is that which, when you stop believing in it, doesn't go away.\"
        —Philip K. Dick

\n
\n

There are two kinds of Bayesians, allegedly.  Subjective Bayesians believe that \"probabilities\" are degrees of uncertainty existing in our minds; if you are uncertain about a phenomenon, that is a fact about your state of mind, not a property of the phenomenon itself; probability theory constrains the logical coherence of uncertain beliefs.  Then there are objective Bayesians, who... I'm not quite sure what it means to be an \"objective Bayesian\"; there are multiple definitions out there.  As best I can tell, an \"objective Bayesian\" is anyone who uses Bayesian methods and isn't a subjective Bayesian.

\n

If I recall correctly, E. T. Jaynes, master of the art, once described himself as a subjective-objective Bayesian.  Jaynes certainly believed very firmly that probability was in the mind; Jaynes was the one who coined the term Mind Projection Fallacy.  But Jaynes also didn't think that this implied a license to make up whatever priors you liked.  There was only one correct prior distribution to use, given your state of partial information at the start of the problem.

\n

How can something be in the mind, yet still be objective?

\n

\n

It appears to me that a good deal of philosophical maturity consists in being able to keep separate track of nearby concepts, without mixing them up.

\n

For example, to understand evolutionary psychology, you have to keep separate track of the psychological purpose of an act, and the evolutionary pseudo-purposes of the adaptations that execute as the psychology; this is a common failure of newcomers to evolutionary psychology, who read, misunderstand, and thereafter say, \"You think you love your children, but you're just trying to maximize your fitness!\"

\n

What is it, exactly, that the terms \"subjective\" and \"objective\", mean?  Let's say that I hand you a sock.  Is it a subjective or an objective sock?  You believe that 2 + 3 = 5.  Is your belief subjective or objective?  What about two plus three actually equaling five—is that subjective or objective?  What about a specific act of adding two apples and three apples and getting five apples?

\n

I don't intend to confuse you in shrouds of words; but I do mean to point out that, while you may feel that you know very well what is \"subjective\" or \"objective\", you might find that you have a bit of trouble saying out loud what those words mean.

\n

Suppose there's a calculator that computes \"2 + 3 = 5\".  We punch in \"2\", then \"+\", then \"3\", and lo and behold, we see \"5\" flash on the screen.  We accept this as evidence that 2 + 3 = 5, but we wouldn't say that the calculator's physical output defines the answer to the question 2 + 3 = ?.  A cosmic ray could strike a transistor, which might give us misleading evidence and cause us to believe that 2 + 3 = 6, but it wouldn't affect the actual sum of 2 + 3.

\n

Which proposition is common-sensically true, but philosophically interesting: while we can easily point to the physical location of a symbol on a calculator screen, or observe the result of putting two apples on a table followed by another three apples, it is rather harder to track down the whereabouts of 2 + 3 = 5.  (Did you look in the garage?)

\n

But let us leave aside the question of where the fact 2 + 3 = 5 is located—in the universe, or somewhere else—and consider the assertion that the proposition is \"objective\".  If a cosmic ray strikes a calculator and makes it output \"6\" in response to the query \"2 + 3 = ?\", and you add two apples to a table followed by three apples, then you'll still see five apples on the table.  If you do the calculation in your own head, expending the necessary computing power—we assume that 2 + 3 is a very difficult sum to compute, so that the answer is not immediately obvious to you—then you'll get the answer \"5\".  So the cosmic ray strike didn't change anything.

\n

And similarly—exactly similarly—what if a cosmic ray strikes a neuron inside your brain, causing you to compute \"2 + 3 = 7\"?  Then, adding two apples to three apples, you will expect to see seven apples, but instead you will be surprised to see five apples.

\n

If instead we found that no one was ever mistaken about addition problems, and that, moreover, you could change the answer by an act of will, then we might be tempted to call addition \"subjective\" rather than \"objective\".  I am not saying that this is everything people mean by \"subjective\" and \"objective\", just pointing to one aspect of the concept.  One might summarize this aspect thus:  \"If you can change something by thinking differently, it's subjective; if you can't change it by anything you do strictly inside your head, it's objective.\"

\n

Mind is not magic.  Every act of reasoning that we human beings carry out, is computed within some particular human brain.  But not every computation is about the state of a human brain.  Not every thought that you think is about something that can be changed by thinking.  Herein lies the opportunity for confusion-of-levels.  The quotation is not the referent.  If you are going to consider thoughts as referential at all—if not, I'd like you to explain the mysterious correlation between my thought \"2 + 3 = 5\" and the observed behavior of apples on tables—then, while the quoted thoughts will always change with thoughts, the referents may or may not be entities that change with changing human thoughts.

\n

The calculator computes \"What is 2 + 3?\", not \"What does this calculator compute as the result of 2 + 3?\"  The answer to the former question is 5, but if the calculator were to ask the latter question instead, the result could self-consistently be anything at all!  If the calculator returned 42, then indeed, \"What does this calculator compute as the result of 2 + 3?\" would in fact be 42.

\n

So just because a computation takes place inside your brain, does not mean that the computation explicitly mentions your brain, that it has your brain as a referent, any more than the calculator mentions the calculator.  The calculator does not attempt to contain a representation of itself, only of numbers.

\n

Indeed, in the most straightforward implementation, the calculator that asks \"What does this calculator compute as the answer to the query 2 + 3 = ?\" will never return a result, just simulate itself simulating itself until it runs out of memory.

\n
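
A toy illustration of that last point; the function names are hypothetical, and the recursion limit is lowered only so the failure arrives quickly.  A computation about numbers halts at once, while the naive form of a computation about its own output must simulate itself without end.

```python
import sys

def compute_2_plus_3():
    # A computation about numbers: it terminates immediately.
    return 2 + 3

def what_do_i_output():
    # A computation about its own output: to answer, it must simulate
    # itself, which must simulate itself, and so on.
    return what_do_i_output()

print(compute_2_plus_3())  # 5

sys.setrecursionlimit(50)  # keep the inevitable failure small and fast
try:
    what_do_i_output()
except RecursionError:
    print("never returns a result; it exhausts the stack instead")
```

\n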

But if you punch the keys \"2\", \"+\", and \"3\", and the calculator proceeds to compute \"What do I output when someone punches '2 + 3'?\", the resulting computation does have one interesting characteristic: the referent of the computation is highly subjective, since it depends on the computation, and can be made to be anything just by changing the computation.

\n

Is probability, then, subjective or objective?

\n

Well, probability is computed within human brains or other calculators.  A probability is a state of partial information that is possessed by you; if you flip a coin and press it to your arm, the coin is showing heads or tails, but you assign the probability 1/2 until you reveal it.  A friend, who got a tiny but not fully informative peek, might assign a probability of 0.6.

\n
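
Here is a small sketch of that coin example; the 60%-reliable peek is an assumed noise model, chosen to reproduce the friend's 0.6.  The coin is already showing one definite face, yet observers with different partial information coherently assign different probabilities to that same fixed fact.

```python
import random

def posterior_heads(peek_accuracy, peek_says_heads):
    """Bayes' rule with a uniform 1/2 prior and a symmetric noisy peek."""
    like_h = peek_accuracy if peek_says_heads else 1 - peek_accuracy
    like_t = 1 - peek_accuracy if peek_says_heads else peek_accuracy
    return like_h * 0.5 / (like_h * 0.5 + like_t * 0.5)

# The coin is already showing one particular face...
coin_is_heads = random.random() < 0.5

# ...you saw nothing, so your state of partial information says 1/2.
print(0.5)

# The friend's peek is right 60% of the time.
peek_says_heads = coin_is_heads if random.random() < 0.6 else not coin_is_heads
p_heads = posterior_heads(0.6, peek_says_heads)
print(p_heads)  # 0.6 if the peek suggested heads, 0.4 if it suggested tails
```

\n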

So can you make the probability of winning the lottery be anything you like?

\n

Forget about many-worlds for the moment—you should almost always be able to forget about many-worlds—and pretend that you're living in a single Small World where the lottery has only a single outcome.  You will nonetheless have a need to call upon probability.  Or if you prefer, we can discuss the ten trillionth decimal digit of pi, which I believe is not yet known.  (If you are foolish enough to refuse to assign a probability distribution to this entity, you might pass up an excellent bet, like betting $1 to win $1000 that the digit is not 4.)  Your uncertainty is a state of your mind, of partial information that you possess.  Someone else might have different information, complete or partial.  And the entity itself will only ever take on a single value.

\n
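
For the parenthetical bet, a one-line expected-value check, assuming the natural ignorance prior under which each digit 0 through 9 is equally likely:

```python
# Stakes from the text: bet $1, win $1000 if the digit is not 4.
p_not_4 = 9 / 10            # uniform ignorance prior over digits 0-9
stake, payoff = 1, 1000
expected_value = p_not_4 * payoff - (1 - p_not_4) * stake
print(expected_value)       # 899.9 dollars per bet, in expectation
```

\n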

So can you make the probability of winning the lottery, or the probability of the ten trillionth decimal digit of pi equaling 4, be anything you like?

\n

You might be tempted to reply:  \"Well, since I currently think the probability of winning the lottery is one in a hundred million, then obviously, I will currently expect that assigning any other probability than this to the lottery, will decrease my expected log-score—or if you prefer a decision-theoretic formulation, I will expect this modification to myself to decrease expected utility.  So, obviously, I will not choose to modify my probability distribution.  It wouldn't be reflectively coherent.\"

\n
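
The expected-log-score claim can be checked directly.  The sketch below uses the lottery numbers from the text; Gibbs' inequality guarantees that, under your current distribution, announcing any other distribution scores strictly worse in expectation.

```python
import math

def expected_log_score(belief, report):
    """Expectation, under `belief`, of the log-probability you announced."""
    return sum(p * math.log(q) for p, q in zip(belief, report))

belief = [1e-8, 1 - 1e-8]  # your current [win, lose] distribution

print(expected_log_score(belief, belief))      # about -1.9e-07: the maximum
print(expected_log_score(belief, [0.9, 0.1]))  # about -2.30: strictly worse
```

So by your own lights, modifying your probability estimate looks like a pure loss, which is exactly the reflectively coherent reply the next paragraphs go on to poke at.

\n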

So reflective coherency is the goal, is it?  Too bad you weren't born with a prior that assigned probability 0.9 to winning the lottery!  Then, by exactly the same line of argument, you wouldn't want to assign any probability except 0.9 to winning the lottery.  And you would still be reflectively coherent.  And you would have a 90% probability of winning millions of dollars!  Hooray!

\n

\"No, then I would think I had a 90% probability of winning the lottery, but actually, the probability would only be one in a hundred million.\"

\n

Well, of course you would be expected to say that.  And if you'd been born with a prior that assigned 90% probability to your winning the lottery, you'd consider an alleged probability of 10^-8, and say, \"No, then I would think I had almost no probability of winning the lottery, but actually, the probability would be 0.9.\"

\n

\"Yeah?  Then just modify your probability distribution, and buy a lottery ticket, and then wait and see what happens.\"

\n

What happens?  Either the ticket will win, or it won't.  That's what will happen.  We won't get to see that some particular probability was, in fact, the exactly right probability to assign.

\n

\"Perform the experiment a hundred times, and—\"

\n

Okay, let's talk about the ten trillionth digit of pi, then.  Single-shot problem, no \"long run\" you can measure.

\n

Probability is subjectively objective:  Probability exists in your mind: if you're ignorant of a phenomenon, that's an attribute of you, not an attribute of the phenomenon.  Yet it will seem to you that you can't change probabilities by wishing.

\n

You could make yourself compute something else, perhaps, rather than probability.  You could compute \"What do I say is the probability?\" (answer: anything you say) or \"What do I wish were the probability?\" (answer: whatever you wish) but these things are not the probability, which is subjectively objective.

\n

The thing about subjectively objective quantities is that they really do seem objective to you.  You don't look them over and say, \"Oh, well, of course I don't want to modify my own probability estimate, because no one can just modify their probability estimate; but if I'd been born with a different prior I'd be saying something different, and I wouldn't want to modify that either; and so none of us is superior to anyone else.\"  That's the way a subjectively subjective quantity would seem.

\n

No, it will seem to you that, if the lottery sells a hundred million tickets, and you don't get a peek at the results, then the probability of a ticket winning, is one in a hundred million.  And that you could be born with different priors but that wouldn't give you any better odds.  And if there's someone next to you saying the same thing about their 90% probability estimate, you'll just shrug and say, \"Good luck with that.\"  You won't expect them to win.

\n

Probability is subjectively really objective, not just subjectively sort of objective.

\n

Jaynes used to recommend that no one ever write out an unconditional probability:  That you never, ever write simply P(A), but always write P(A|I), where I is your prior information.  I'll use Q instead of I, for ease of reading, but Jaynes used I.  Similarly, one would not write P(A|B) for the posterior probability of A given that we learn B, but rather P(A|B,Q), the probability of A given that we learn B and had background information Q.

\n

This is good advice in a purely pragmatic sense, when you see how many false \"paradoxes\" are generated by accidentally using different prior information in different places.

\n

But it also makes a deep philosophical point as well, which I never saw Jaynes spell out explicitly, but I think he would have approved: there is no such thing as a probability that isn't in any mind.  Any mind that takes in evidence and outputs probability estimates of the next event, remember, can be viewed as a prior—so there is no probability without priors/minds.

\n

You can't unwind the Q.  You can't ask \"What is the unconditional probability of our background information being true, P(Q)?\"  To make that estimate, you would still need some kind of prior.  No way to unwind back to an ideal ghost of perfect emptiness...

\n

You might argue that you and the lottery-ticket buyer do not really have a disagreement about probability.  You say that the probability of the ticket winning the lottery is one in a hundred million given your prior, P(W|Q1) = 10^-8.  The other fellow says the probability of the ticket winning given his prior is P(W|Q2) = 0.9.  Every time you say \"The probability of X is Y\", you really mean, \"P(X|Q1) = Y\".  And when he says, \"No, the probability of X is Z\", he really means, \"P(X|Q2) = Z\".

\n

Now you might, if you traced out his mathematical calculations, agree that, indeed, the conditional probability of the ticket winning, given his weird prior, is 0.9.  But you wouldn't agree that \"the probability of the ticket winning\" is 0.9.  Just as he wouldn't agree that \"the probability of the ticket winning\" is 10^-8.

\n

Even if the two of you refer to different mathematical calculations when you say the word \"probability\", you don't think that puts you on equal ground, neither of you being better than the other.  And neither does he, of course.

\n

So you see that, subjectively, probability really does feel objective—even after you have subjectively taken all apparent subjectivity into account.

\n

And this is not mistaken, because, by golly, the probability of winning the lottery really is 10^-8, not 0.9.  It's not as if you're doing your probability calculation wrong, after all.  If you weren't worried about being fair or about justifying yourself to philosophers, if you only wanted to get the correct answer, your betting odds would be 10^-8.

\n

Somewhere out in mind design space, there's a mind with any possible prior; but that doesn't mean that you'll say, \"All priors are created equal.\"

\n

When you judge those alternate minds, you'll do so using your own mind—your own beliefs about the universe—your own posterior that came out of your own prior, your own posterior probability assignments P(X|A,B,C,...,Q1).  But there's nothing wrong with that.  It's not like you could judge using something other than yourself.  It's not like you could have a probability assignment without any prior, a degree of uncertainty that isn't in any mind.

\n

And so, when all that is said and done, it still seems like the probability of winning the lottery really  is 10^-8, not 0.9.  No matter what other minds in design space say differently.

\n

Which shouldn't be surprising.  When you compute probabilities, you're thinking about lottery balls, not thinking about brains or mind designs or other people with different priors.  Your probability computation makes no mention of that, any more than it explicitly represents itself.  Your goal, after all, is to win, not to be fair.  So of course probability will seem to be independent of what other minds might think of it.

\n

Okay, but... you still can't win the lottery by assigning a higher probability to winning.

\n

If you like, we could regard probability as an idealized computation, just like 2 + 2 = 4 seems to be independent of any particular error-prone calculator that computes it; and you could regard your mind as trying to approximate this ideal computation.  In which case, it is good that your mind does not mention people's opinions, and only thinks of the lottery balls; the ideal computation makes no mention of people's opinions, and we are trying to reflect this ideal as accurately as possible...

\n

But what you will calculate as the \"ideal calculation\" to plug into your betting odds will depend on your prior, even though the calculation won't have an explicit dependency on \"your prior\".  Someone who thought the universe was anti-Occamian would advocate an anti-Occamian calculation, regardless of whether or not anyone thought the universe was anti-Occamian.

\n

Your calculations get checked against reality, in a probabilistic way; you either win the lottery or not.  But interpreting these results is done with your prior; once again there is no probability that isn't in any mind.

\n

I am not trying to argue that you can win the lottery by wishing, of course.  Rather, I am trying to inculcate the ability to distinguish between levels.

\n

When you think about the ontological nature of probability, and perform reductionism on it—when you try to explain how \"probability\" fits into a universe in which states of mind do not exist fundamentally—then you find that probability is computed within a brain; and you find that other possible minds could perform mostly-analogous operations with different priors and arrive at different answers.

\n

But, when you consider probability as probability, think about the referent instead of the thought process—which thinking you will do in your own thoughts, which are physical processes—then you will conclude that the vast majority of possible priors are probably wrong.  (You will also be able to conceive of priors which are, in fact, better than yours, because they assign more probability to the actual outcome; you just won't know in advance which alternative prior is the truly better one.)

\n

If you again swap your goggles to think about how probability is implemented in the brain, the seeming objectivity of probability is the way the probability algorithm feels from inside; so it's no mystery that, considering probability as probability, you feel that it's not subject to your whims.  That's just what the probability-computation would be expected to say, since the computation doesn't represent any dependency on your whims.

\n

But when you swap out those goggles and go back to thinking about probabilities, then, by golly, your algorithm seems to be right in computing that probability is not subject to your whims.  You can't win the lottery just by changing your beliefs about it.  And if that is the way you would be expected to feel, then so what?  The feeling has been explained, not explained away; it is not a mere feeling.  Just because a calculation is implemented in your brain, doesn't mean it's wrong, after all.

\n

Your \"probability that the ten trillionth decimal digit of pi is 4\", is an attribute of yourself, and exists in your mind; the real digit is either 4 or not.  And if you could change your belief about the probability by editing your brain, you wouldn't expect that to change the probability.

\n

Therefore I say of probability that it is \"subjectively objective\".

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Whither Moral Progress?\"

\n

Previous post: \"Rebelling Within Nature\"

" } }, { "_id": "YhNGY6ypoNbLJvDBu", "title": "Rebelling Within Nature", "pageUrl": "https://www.lesswrong.com/posts/YhNGY6ypoNbLJvDBu/rebelling-within-nature", "postedAt": "2008-07-13T12:32:30.000Z", "baseScore": 43, "voteCount": 38, "commentCount": 38, "url": null, "contents": { "documentId": "YhNGY6ypoNbLJvDBu", "html": "

Followup to: Fundamental Doubts, Where Recursive Justification Hits Bottom, No Universally Compelling Arguments, Joy in the Merely Real, Evolutionary Psychology

\n
\n

\"Let us understand, once and for all, that the ethical progress of society depends, not on imitating the cosmic process, still less in running away from it, but in combating it.\"
        —T. H. Huxley (\"Darwin's bulldog\", early advocate of evolutionary theory)

\n
\n

There is a quote from some Zen Master or other, who said something along the lines of:

\n
\n

\"Western man believes that he is rebelling against nature, but he does not realize that, in doing so, he is acting according to nature.\"


The Reductionist Masters of the West, strong in their own Art, are not so foolish; they do realize that they always act within Nature.


You can narrow your focus and rebel against a facet of existing Nature—polio, say—but in so doing, you act within the whole of Nature.  The syringe that carries the polio vaccine is forged of atoms; our minds, that understood the method, embodied in neurons.  If Jonas Salk had to fight laziness, he fought something that evolution instilled in him—a reluctance to work that conserves energy.  And he fought it with other emotions that natural selection also inscribed in him: feelings of friendship that he extended to humanity, heroism to protect his tribe, maybe an explicit desire for fame that he never acknowledged to himself—who knows?  (I haven't actually read a biography of Salk.)


The point is, you can't fight Nature from beyond Nature, only from within it.  There is no acausal fulcrum on which to stand outside reality and move it.  There is no ghost of perfect emptiness by which you can judge your brain from outside your brain.  You can fight the cosmic process, but only by recruiting other abilities that evolution originally gave to you.


And if you fight one emotion within yourself—looking upon your own nature, and judging yourself less than you think you should be—saying perhaps, \"I should not want to kill my enemies\"—then you make that judgment, by...



How exactly does one go about rebelling against one's own goal system?


From within it, naturally.


This is perhaps the primary thing that I didn't quite understand as a teenager.


At the age of fifteen (fourteen?), I picked up a copy of TIME magazine and read an article on evolutionary psychology.  It seemed like one of the most massively obvious-in-retrospect ideas I'd ever heard.  I went on to read The Moral Animal by Robert Wright.  And later The Adapted Mind—but from the perspective of personal epiphanies, The Moral Animal pretty much did the job.


I'm reasonably sure that if I had not known the basics of evolutionary psychology from my teenage years, I would not currently exist as the Eliezer Yudkowsky you know.


Indeed, let me drop back a bit further:


At the age of... I think it was nine... I discovered the truth about sex by looking it up in my parents' home copy of the Encyclopedia Britannica (stop that laughing).  Shortly after, I learned a good deal more by discovering where my parents had hidden the secret 15th volume of my long-beloved Childcraft series.  I'd been avidly reading the first 14 volumes—some of them, anyway—since the age of five.  But the 15th volume wasn't meant for me—it was the \"Guide for Parents\".


The 15th volume of Childcraft described the life cycle of children.  It described the horrible confusion of the teenage years—teenagers experimenting with alcohol, with drugs, with unsafe sex, with reckless driving, the hormones taking over their minds, the overwhelming importance of peer pressure, the tearful accusations of \"You don't love me!\" and \"I hate you!\"


I took one look at that description, at the tender age of nine, and said to myself in quiet revulsion, I'm not going to do that.


And I didn't.


My teenage years were not untroubled.  But I didn't do any of the things that the Guide for Parents warned me against.  I didn't drink, drive, drug, lose control to hormones, pay any attention to peer pressure, or ever once think that my parents didn't love me.


In a safer world, I would have wished for my parents to have hidden that book better.


But in this world, which needs me as I am, I don't regret finding it.


I still rebelled, of course.  I rebelled against the rebellious nature the Guide for Parents described to me.  That was part of how I defined my identity in my teenage years—\"I'm not doing the standard stupid stuff.\"  Some of the time, this just meant that I invented amazing new stupidity, but in fact that was a major improvement.


Years later, The Moral Animal made suddenly obvious the why of all that disastrous behavior I'd been warned against.  Not that Robert Wright pointed any of this out explicitly, but it was obvious given the elementary concept of evolutionary psychology:


Physiologically adult humans are not meant to spend an additional 10 years in a school system; their brains map that onto \"I have been assigned low tribal status\".  And so, of course, they plot rebellion—accuse the existing tribal overlords of corruption—plot perhaps to split off their own little tribe in the savanna, not realizing that this is impossible in the Modern World.  The teenage males map their own fathers onto the role of \"tribal chief\"...


Echoes in time, thousands of repeated generations in the savanna carving the pattern, ancient repetitions of form, reproduced in the present in strange twisted mappings, across genes that didn't know anything had changed...


The world grew older, of a sudden.


And I'm not going to go into the evolutionary psychology of \"teenagers\" in detail, not now, because that would deserve its own post.


But when I read The Moral Animal, the world suddenly acquired causal depth.  Human emotions existed for reasons, they weren't just unexamined givens.  I might previously have questioned whether an emotion was appropriate to its circumstance—whether it made sense to hate your parents, if they did really love you—but I wouldn't have thought, before then, to judge the existence of hatred as an evolved emotion.


And then, having come so far, and having avoided with instinctive ease all the classic errors that evolutionary psychologists are traditionally warned against—I was never once tempted to confuse evolutionary causation with psychological causation—I went wrong at the last turn.


The echo in time that was teenage psychology was obviously wrong and stupid—a distortion in the way things should be—so clearly you were supposed to unwind past it, compensate in the opposite direction or disable the feeling, to arrive at the correct answer.


It's hard for me to remember exactly what I was thinking in this era, but I think I tended to focus on one facet of human psychology at any given moment, trying to unwind myself a piece at a time.  IIRC I did think, in full generality, \"Evolution is bad; the effect it has on psychology is bad.\"  (Like it had some kind of \"effect\" that could be isolated!)  But somehow, I managed not to get to \"Evolutionary psychology is the cause of altruism; altruism is bad.\"


It was easy for me to see all sorts of warped altruism as having been warped by evolution.


People who wanted to trust themselves with power, for the good of their tribe—that had an obvious evolutionary explanation; it was, therefore, a distortion to be corrected.


People who wanted to be altruistic in ways their friends would approve of—obvious evolutionary explanation; therefore a distortion to be corrected.


People who wanted to be altruistic in a way that would optimize their fame and repute—obvious evolutionary distortion to be corrected.


People who wanted to help only their family, or only their nation—acting out ancient selection pressures on the savanna; move past it.


But the fundamental will to help people?


Well, the notion of that being merely evolved, was something that, somehow, I managed to never quite accept.  Even though, in retrospect, the causality is just as obvious as teen revolutionism.


IIRC, I did think something along the lines of:  \"Once you unwind past evolution, then the true morality isn't likely to contain a clause saying, 'This person matters but this person doesn't', so everyone should matter equally, so you should be as eager to help others as help yourself.\"  And so I thought that even if the emotion of altruism had merely evolved, it was a right emotion, and I should keep it.


But why think that people mattered at all, if you were trying to unwind past all evolutionary psychology?  Why think that it was better for people to be happy than sad, rather than the converse?


If I recall correctly, I did ask myself that, and sort of waved my hands mentally and said, \"It just seems like one of the best guesses—I mean, I don't know that people are valuable, but I can't think of what else could be.\"


This is the Avoiding Your Belief's Real Weak Points / Not Spontaneously Thinking About Your Belief's Most Painful Weaknesses antipattern in full glory:  Get just far enough to place yourself on the first fringes of real distress, and then stop thinking.


And also the antipattern of trying to unwind past everything that is causally responsible for your existence as a mind, to arrive at a perfectly reliable ghost of perfect emptiness.


Later, having also seen others making similar mistakes, it seems to me that the general problem is an illusion of mind-independence that comes from picking something that appeals to you, while still seeming philosophically simple.


As if the appeal to you, of the moral argument, weren't still a feature of your particular point in mind design space.


As if there weren't still an ordinary and explicable causal history behind the appeal, and your selection of that particular principle.


As if, by making things philosophically simpler-seeming, you could enhance their appeal to a ghost-in-the-machine who would hear your justifications starting from scratch, as fairness demands.


As if your very sense of simplicity were not an aesthetic sense inscribed in you by evolution.


As if your very intuitions of \"moral argument\" and \"justification\", were not an architecture-of-reasoning inscribed in you by natural selection, and just as causally explicable as any other feature of human psychology...


You can't throw away evolution, and end up with a perfectly moral creature that humans would have been, if only we had never evolved; that's really not how it works.


Why accept intuitively appealing arguments about the nature of morality, rather than intuitively unappealing ones, if you're going to distrust everything in you that ever evolved?


Then what is right?  What should we do, having been inscribed by a blind mad idiot god whose incarnation-into-reality takes the form of millions of years of ancestral murder and war?


But even this question—every fragment of it—the notion that a blind mad idiocy is an ugly property for a god to have, or that murder is a poisoned well of order, even the words \"right\" and \"should\"—all a phenomenon within nature.  All traceable back to debates built around arguments appealing to intuitions that evolved in me.


You can't jump out of the system.  You really can't.  Even wanting to jump out of the system—the sense that something isn't justified \"just because it evolved\"—is something that you feel from within the system.  Anything you might try to use to jump—any sense of what morality should be like, if you could unwind past evolution—is also there as a causal result of evolution.


Not everything we think about morality is directly inscribed by evolution, of course.  We have values that our parents taught us as we grew up; values that won out in a civilizational debate conducted with reference to other moral principles; principles that were themselves argued into existence by appealing to built-in emotions; using an architecture-of-interpersonal-moral-argument that evolution burped into existence.


It all goes back to evolution.  This doesn't just include things like instinctive concepts of fairness or empathy; it includes the whole notion of arguing morals as if they were propositional beliefs.  Evolution created within you that frame of reference within which you can formulate the concept of moral questioning.  Including questioning evolution's fitness to create our moral frame of reference.  If you really try to unwind outside the system, you'll unwind your unwinders.


That's what I didn't quite get, those years ago.


I do plan to dissolve the cognitive confusion that makes words like \"right\" and \"should\" seem difficult to grasp.  I've been working up to that for a while now.


But I'm not there yet, and so, for now, I'm going to jump ahead and peek at an answer I'll only later be able to justify as moral philosophy:


Embrace reflection.  You can't unwind to emptiness, but you can bootstrap from a starting point.


Go on morally questioning the existence (and not just appropriateness) of emotions.  But don't treat the mere fact of their having evolved as a reason to reject them.  Yes, I know that \"X evolved\" doesn't seem like a good justification for having an emotion; but don't let that be a reason to reject X, any more than it's a reason to accept it.  Hence the post on the Genetic Fallacy: causation is conceptually distinct from justification.  If you try to apply the Genetic Accusation to automatically convict and expel your genes, you're going to run into foundational trouble—so don't!


Just ask if the emotion is justified—don't treat its evolutionary cause as proof of mere distortion.  Use your current mind to examine the emotion's pluses and minuses, without being ashamed; use your full strength of morality.


Judge emotions as emotions, not as evolutionary relics.  When you say, \"motherly love outcompeted its alternative alleles because it protected children that could carry the allele for motherly love\", this is only a cause, not a sum of all moral arguments.  The evolutionary psychology may grant you helpful insight into the pattern and process of motherly love, but it neither justifies the emotion as natural, nor convicts it as coming from an unworthy source.  You don't make the Genetic Accusation either way.  You just, y'know, think about motherly love, and ask yourself if it seems like a good thing or not; considering its effects, not its source.


You tot up the balance of moral justifications, using your current mind—without worrying about the fact that the entire debate takes place within an evolved framework.


That's the moral normality to which my yet-to-be-revealed moral philosophy will add up.


And if, in the meanwhile, it seems to you like I've just proved that there is no morality... well, I haven't proved any such thing.  But, meanwhile, just ask yourself if you might want to help people even if there were no morality.  If you find that the answer is yes, then you will later discover that you discovered morality.


 


Part of The Metaethics Sequence


Next post: \"Probability is Subjectively Objective\"


Previous post: \"Fundamental Doubts\"

" } }, { "_id": "9EahWKqay6HZcaNTY", "title": "Fundamental Doubts", "pageUrl": "https://www.lesswrong.com/posts/9EahWKqay6HZcaNTY/fundamental-doubts", "postedAt": "2008-07-12T05:21:12.000Z", "baseScore": 38, "voteCount": 33, "commentCount": 87, "url": null, "contents": { "documentId": "9EahWKqay6HZcaNTY", "html": "

Followup to: The Genetic Fallacy, Where Recursive Justification Hits Bottom


Yesterday I said that—because humans are not perfect Bayesians—the genetic fallacy is not entirely a fallacy; when new suspicion is cast on one of your fundamental sources, you really should doubt all the branches and leaves of that root, even if they seem to have accumulated new evidence in the meanwhile.


This is one of the most difficult techniques of rationality (on which I will separately post, one of these days).  Descartes, setting out to \"doubt, insofar as possible, all things\", ended up trying to prove the existence of God—which, if he wasn't a secret atheist trying to avoid getting burned at the stake, is pretty pathetic.  It is hard to doubt an idea to which we are deeply attached; our mind naturally reaches for cached thoughts and rehearsed arguments.


But today's post concerns a different kind of difficulty—the case where the doubt is so deep, of a source so fundamental, that you can't make a true fresh beginning.


Case in point:  Remember when, in The Matrix, Morpheus told Neo that the machines were harvesting the body heat of humans for energy, and liquefying the dead to feed to babies?  I suppose you thought something like, \"Hey!  That violates the second law of thermodynamics.\"



Well, it does violate the second law of thermodynamics.  But if the Matrix's makers had cared about the flaw once it was pointed out to them, they could have fixed the plot hole in any of the sequels, in fifteen seconds, this easily:


Neo:  \"Doesn't harvesting human body heat for energy, violate the laws of thermodynamics?\"


Morpheus:  \"Where'd you learn about thermodynamics, Neo?\"


Neo:  \"In school.\"


Morpheus:  \"Where'd you go to school, Neo?\"


Neo:  \"Oh.\"


Morpheus:  \"The machines tell elegant lies.\"


Now, mind you, I am not saying that this excuses the original mistake in the script.  When my mind generated this excuse, it came clearly labeled with that warning sign of which I have spoken, \"Tada!  Your mind can generate an excuse for anything!\"  You do not need to tell me that my plot-hole-patch is a nitwit idea; I am well aware of that...


...but, in point of fact, if you woke up out of a virtual reality pod one day, you would have to suspect all the physics you knew.  Even if you looked down and saw that you had hands, you couldn't rely on there being blood and bone inside them.  Even if you looked up and saw stars, you couldn't rely on their being trillions of miles away.  And even if you found yourself thinking, you couldn't rely on your head containing a brain.


You could still try to doubt, even so.  You could do your best to unwind your thoughts past every lesson in school, every science paper read, every sensory experience, every math proof whose seeming approval by other mathematicians might have been choreographed to conceal a subtle flaw...


But suppose you discovered that you were a computer program and that the Dark Lords of the Matrix were actively tampering with your thoughts.


Well... in that scenario, you're pretty much screwed, I'd have to say.


Descartes vastly underestimated the powers of an infinitely powerful deceiving demon when he supposed he could trust \"I think therefore I am.\"  Maybe that's just what they want you to think.  Maybe they just inserted that conclusion into your mind with a memory of it seeming to have an irrefutable chain of logical support, along with some peer pressure to label it \"unquestionable\" just like all your friends.


(Personally, I don't trust \"I think therefore I am\" even in real life, since it contains a term \"am\" whose meaning I find confusing, and I've learned to spread my confidence intervals very widely in the presence of basic confusion.  As for absolute certainty, don't be silly.)


Every memory of justification could be faked.  Every feeling of support could be artificially induced.  Modus ponens could be a lie.  Your concept of \"rational justification\"—not just your specific concept, but your notion that any such thing exists at all—could have been manufactured to mislead you.  Your trust in Reason itself could have been inculcated to throw you off the trail.


So you might as well not think about the possibility that you're a brain with choreographed thoughts, because there's nothing you can do about it...


Unless, of course, that's what they want you to think.


Past a certain level of doubt, it's not possible to start over fresh.  There's nothing you can unassume to find some firm rock on which to stand.  You cannot unwind yourself into a perfectly empty and perfectly reliable ghost in the machine.


This level of meta-suspicion should be a rare occasion.  For example, suspecting that all academic science is an organized conspiracy, should not run into anything like these meta-difficulties.  Certainly, someone does not get to plead that unwinding past the Bible is impossible because it is too foundational; atheists walk the Earth without falling into comas.  Remember, when Descartes tried to outwit an infinitely powerful deceiving demon, he first tried to make himself absolutely certain of a highly confusing statement, and then proved the existence of God.  Consider that a caution about what you try to claim is \"too basic for a fresh beginning\".  And even basic things can still be doubted, it is only that we use our untrustworthy brains to doubt them.


Or consider the case of our existence as evolved brains.  Natural selection isn't trustworthy, and we have specific reason to suspect it.  We know that evolution is stupid.  We know many specific ways in which our human brains fail, taken beyond the savanna.  But you can't clear your mind of evolutionary influences and start over.  It would be like deciding that you don't trust neurons, so you're going to clear your mind of brains.


And evolution certainly gets a chance to influence every single thought that runs through your mind!  It is the very reason why you exist as a thinker, rather than a lump of carbon—and that doesn't mean evolution summoned a ghost-in-the-machine into you; it designed the ghost.  If you learn culture, it is because you were built to learn culture.


But in fact, we don't run into unmanageable meta-trouble in trying to come up with specific patches for specific known evolved biases.  And evolution is stupid, so even though it has set up self-deceptive circuits in us, these circuits are not infinitely difficult to comprehend and outwit.


Or so it seems!  But it really does seem that way, on reflection.


There is no button you can press to rewind past your noisy brain, and become a perfectly reliable ghost of perfect emptiness.  That's not just because your brain is you.  It's also because you can't unassume things like modus ponens or belief updating.  You can unassume them as explicit premises for deliberate reasoning—a hunter-gatherer has no explicit concept of modus ponens—but you can't delete the actual dynamics (and all their products!)


So, in the end, I think we must allow the use of brains to think about thinking; and the use of evolved brains to think about evolution; and the use of inductive brains to think about induction; and the use of brains with an Occam prior to think about whether the universe appears to be simple; for these things we really cannot unwind entirely, even when we have reason to distrust them.  Strange loops through the meta level, I think, are not the same as circular logic.


 


Part of The Metaethics Sequence


Next post: \"Rebelling Within Nature\"


Previous post: \"My Kind of Reflection\"

" } }, { "_id": "KZLa74SzyKhSJ3M55", "title": "The Genetic Fallacy", "pageUrl": "https://www.lesswrong.com/posts/KZLa74SzyKhSJ3M55/the-genetic-fallacy", "postedAt": "2008-07-11T05:47:28.000Z", "baseScore": 83, "voteCount": 72, "commentCount": 18, "url": null, "contents": { "documentId": "KZLa74SzyKhSJ3M55", "html": "

In lists of logical fallacies, you will find included “the genetic fallacy”—the fallacy of attacking a belief based on someone’s causes for believing it.

This is, at first sight, a very strange idea—if the causes of a belief do not determine its systematic reliability, what does? If Deep Blue advises us of a chess move, we trust it based on our understanding of the code that searches the game tree, being unable to evaluate the actual game tree ourselves. What could license any probability assignment as “rational,” except that it was produced by some systematically reliable process?

Articles on the genetic fallacy will tell you that genetic reasoning is not always a fallacy—that the origin of evidence can be relevant to its evaluation, as in the case of a trusted expert. But other times, say the articles, it is a fallacy; the chemist Kekulé first saw the ring structure of benzene in a dream, but this doesn’t mean we can never trust this belief.

So sometimes the genetic fallacy is a fallacy, and sometimes it’s not?

The genetic fallacy is formally a fallacy, because the original cause of a belief is not the same as its current justificational status, the sum of all the support and antisupport currently known.

Yet we change our minds less often than we think. Genetic accusations have a force among humans that they would not have among ideal Bayesians.

Clearing your mind is a powerful heuristic when you’re faced with new suspicion that many of your ideas may have come from a flawed source.

Once an idea gets into our heads, it’s not always easy for evidence to root it out. Consider all the people out there who grew up believing in the Bible; later came to reject (on a deliberate level) the idea that the Bible was written by the hand of God; and who nonetheless think that the Bible is full of indispensable ethical wisdom. They have failed to clear their minds; they could do significantly better by doubting anything the Bible said because the Bible said it.

At the same time, they would have to bear firmly in mind the principle that reversed stupidity is not intelligence; the goal is to genuinely shake your mind loose and do independent thinking, not to negate the Bible and let that be your algorithm.

Once an idea gets into your head, you tend to find support for it everywhere you look—and so when the original source is suddenly cast into suspicion, you would be very wise indeed to suspect all the leaves that originally grew on that branch . . .

If you can! It’s not easy to clear your mind. It takes a convulsive effort to actually reconsider, instead of letting your mind fall into the pattern of rehearsing cached arguments. “It ain’t a true crisis of faith unless things could just as easily go either way,” said Thor Shenkel.

You should be extremely suspicious if you have many ideas suggested by a source that you now know to be untrustworthy, but by golly, it seems that all the ideas still ended up being right—the Bible being the obvious archetypal example.

On the other hand . . . there’s such a thing as sufficiently clear-cut evidence, that it no longer significantly matters where the idea originally came from. Accumulating that kind of clear-cut evidence is what Science is all about. It doesn’t matter any more that Kekulé first saw the ring structure of benzene in a dream—it wouldn’t matter if we’d found the hypothesis to test by generating random computer images, or from a spiritualist revealed as a fraud, or even from the Bible. The ring structure of benzene is pinned down by enough experimental evidence to make the source of the suggestion irrelevant.
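
In odds form, the arithmetic behind “sufficiently clear-cut evidence” is easy to exhibit. Here is a minimal sketch (Python; the prior and the likelihood ratios are invented numbers): each independent piece of evidence multiplies the odds, so even a heavy prior penalty for a suspect source eventually gets swamped.

    from fractions import Fraction

    def posterior_odds(prior_odds, likelihood_ratios):
        # Independent evidence multiplies onto the odds, one ratio at a time.
        odds = prior_odds
        for ratio in likelihood_ratios:
            odds *= ratio
        return odds

    # A hypothesis from a dubious source: prior odds of 1:1000 against.
    prior = Fraction(1, 1000)

    # Ten independent experiments, each favoring the hypothesis 10:1.
    evidence = [Fraction(10)] * 10

    print(posterior_odds(prior, evidence))  # 10**10 / 10**3 = 10000000, i.e. 10,000,000:1 in favor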

In the absence of such clear-cut evidence, then you do need to pay attention to the original sources of ideas—to give experts more credence than layfolk, if their field has earned respect—to suspect ideas you originally got from suspicious sources—to distrust those whose motives are untrustworthy, if they cannot present arguments independent of their own authority.

The genetic fallacy is a fallacy when there exist justifications beyond the genetic fact asserted, but the genetic accusation is presented as if it settled the issue. Hal Finney suggests that we call correctly appealing to a claim’s origins “the genetic heuristic.”1

Some good rules of thumb (for humans):


1. Source: http://lesswrong.com/lw/s3/the_genetic_fallacy/lls.

" } }, { "_id": "TynBiYt6zg42StRbb", "title": "My Kind of Reflection", "pageUrl": "https://www.lesswrong.com/posts/TynBiYt6zg42StRbb/my-kind-of-reflection", "postedAt": "2008-07-10T07:21:28.000Z", "baseScore": 68, "voteCount": 50, "commentCount": 24, "url": null, "contents": { "documentId": "TynBiYt6zg42StRbb", "html": "

In \"Where Recursive Justification Hits Bottom\", I concluded that it's okay to use induction to reason about the probability that induction will work in the future, given that it's worked in the past; or to use Occam's Razor to conclude that the simplest explanation for why Occam's Razor works is that the universe itself is fundamentally simple.


Now I am far from the first person to consider reflective application of reasoning principles.  Chris Hibbert compared my view to Bartley's Pan-Critical Rationalism (I was wondering whether that would happen).  So it seems worthwhile to state what I see as the distinguishing features of my view of reflection, which may or may not happen to be shared by any other philosopher's view of reflection.


• All of my philosophy here actually comes from trying to figure out how to build a self-modifying AI that applies its own reasoning principles to itself in the process of rewriting its own source code.  So whenever I talk about using induction to license induction, I'm really thinking about an inductive AI considering a rewrite of the part of itself that performs induction.  If you wouldn't want the AI to rewrite its source code to not use induction, your philosophy had better not label induction as unjustifiable.


• One of the most powerful general principles I know for AI in general, is that the true Way generally turns out to be naturalistic—which for reflective reasoning, means treating transistors inside the AI, just as if they were transistors found in the environment; not an ad-hoc special case.  This is the real source of my insistence in \"Recursive Justification\" that questions like \"How well does my version of Occam's Razor work?\" should be considered just like an ordinary question—or at least an ordinary very deep question.  I strongly suspect that a correctly built AI, in pondering modifications to the part of its source code that implements Occamian reasoning, will not have to do anything special as it ponders—in particular, it shouldn't have to make a special effort to avoid using Occamian reasoning.



• I don't think that \"reflective coherence\" or \"reflective consistency\" should be considered as a desideratum in itself.  As I said in the Twelve Virtues and the Simple Truth, if you make five accurate maps of the same city, then the maps will necessarily be consistent with each other; but if you draw one map by fantasy and then make four copies, the five will be consistent but not accurate.  In the same way, no one is deliberately pursuing reflective consistency, and reflective consistency is not a special warrant of trustworthiness; the goal is to win.  But anyone who pursues the goal of winning, using their current notion of winning, and modifying their own source code, will end up reflectively consistent as a side effect—just like someone continually striving to improve their map of the world should find the parts becoming more consistent among themselves, as a side effect.  If you put on your AI goggles, then the AI, rewriting its own source code, is not trying to make itself \"reflectively consistent\"—it is trying to optimize the expected utility of its source code, and it happens to be doing this using its current mind's anticipation of the consequences.


• One of the ways I license using induction and Occam's Razor to consider \"induction\" and \"Occam's Razor\", is by appealing to E. T. Jaynes's principle that we should always use all the information available to us (computing power permitting) in a calculation.  If you think induction works, then you should use it in order to use your maximum power, including when you're thinking about induction.


• In general, I think it's valuable to distinguish a defensive posture where you're imagining how to justify your philosophy to a philosopher that questions you, from an aggressive posture where you're trying to get as close to the truth as possible.  So it's not that being suspicious of Occam's Razor, but using your current mind and intelligence to inspect it, shows that you're being fair and defensible by questioning your foundational beliefs.  Rather, the reason why you would inspect Occam's Razor is to see if you could improve your application of it, or if you're worried it might really be wrong.  I tend to deprecate mere dutiful doubts.


• If you run around inspecting your foundations, I expect you to actually improve them, not just dutifully investigate.  Our brains are built to assess \"simplicity\" in a certain intuitive way that makes Thor sound simpler than Maxwell's Equations as an explanation for lightning.  But, having gotten a better look at the way the universe really works, we've concluded that differential equations (which few humans master) are actually simpler (in an information-theoretic sense) than heroic mythology (which is how most tribes explain the universe).  This being the case, we've tried to import our notions of Occam's Razor into math as well.


• On the other hand, the improved foundations should still add up to normality; 2 + 2 should still end up equalling 4, not something new and amazing and exciting like \"fish\".


• I think it's very important to distinguish between the questions \"Why does induction work?\" and \"Does induction work?\"  The reason why the universe itself is regular is still a mysterious question unto us, for now.  Strange speculations here may be temporarily needful.  But on the other hand, if you start claiming that the universe isn't actually regular, that the answer to \"Does induction work?\" is \"No!\", then you're wandering into 2 + 2 = 3 territory.  You're trying too hard to make your philosophy interesting, instead of correct.  An inductive AI asking what probability assignment to make on the next round is asking \"Does induction work?\", and this is the question that it may answer by inductive reasoning.  If you ask \"Why does induction work?\" then answering \"Because induction works\" is circular logic, and answering \"Because I believe induction works\" is magical thinking.


• I don't think that going around in a loop of justifications through the meta-level is the same thing as circular logic.  I think the notion of \"circular logic\" applies within the object level, and is something that is definitely bad and forbidden, on the object level.  Forbidding reflective coherence doesn't sound like a good idea.  But I haven't yet sat down and formalized the exact difference—my reflective theory is something I'm trying to work out, not something I have in hand.

" } }, { "_id": "YcRZbgRbZGpu9xFox", "title": "The Fear of Common Knowledge", "pageUrl": "https://www.lesswrong.com/posts/YcRZbgRbZGpu9xFox/the-fear-of-common-knowledge", "postedAt": "2008-07-09T09:48:35.000Z", "baseScore": 41, "voteCount": 28, "commentCount": 37, "url": null, "contents": { "documentId": "YcRZbgRbZGpu9xFox", "html": "

Followup to: Belief in Belief


One of those insights that made me sit upright and say "Aha!"  From The Uncredible Hallq:

Minor acts of dishonesty are integral to human life, ranging from how we deal with casual acquaintances to writing formal agreements between nation states.  Steven Pinker has an excellent chapter on this in The Stuff of Thought, a version of which can be found at TIME magazine’s website.  What didn’t make it into the TIME version is Pinker’s proposal that, while there are several reasons we do this, the most important reason is to avoid mutual knowledge:  "She probably knows I just blew a pass at her, but does she know I know she knows? Does she know I know she knows I know she knows?"  Etc.  Mutual knowledge is that nightmare where, for all intents and purposes, the known-knows can be extended out to infinity.  The ultimate example of this has to be the joke "No, it wasn’t awkward until you said, 'well, this is awkward.'"  A situation might be a little awkward, but what’s really awkward is mutual knowledge, created when someone blurts out what’s going on for all to hear...


The story of the Emperor’s New Clothes is another example of the power of mutual knowledge...

The power of real deception - outright lies - is easy for even us nerds to understand.


The notion of a lie that the other person knows is a lie, seems very odd at first.  Up until I read the Hallq's explanation of Pinker, I had thought in terms of people suppressing uncomfortable thoughts:  "If it isn't said out loud, I don't have to deal with it."


Like the friends of a terminal patient, whose disease has progressed to a stage that - if you look it up online - turns out to be nearly universally fatal.  So the friends gather around, and wish the patient best hopes for their medical treatment.  No one says, "Well, we all know you're going to die; and now it's too late for you to get life insurance and sign up for cryonics.  I hope it isn't too painful; let me know if you want me to smuggle you a heroin overdose."


So even that is possible for a nerd to understand - in terms of, as Vassar puts it, thinking of non-nerds as defective nerds...


But the notion of a lie that the other person knows is a lie, but they aren't sure that you know they know it's a lie, and so the social situation occupies a different state from common knowledge...


I think that's the closest I've ever seen life get to imitating a Raymond Smullyan logic puzzle.


Added:  Richard quotes Nagel on a further purpose of mutual hypocrisy: preventing an issue from rising to the level where it must be publicly acknowledged and dealt with, because common ground on that issue is not easily available.

" } }, { "_id": "C8nEXTcjZb9oauTCW", "title": "Where Recursive Justification Hits Bottom", "pageUrl": "https://www.lesswrong.com/posts/C8nEXTcjZb9oauTCW/where-recursive-justification-hits-bottom", "postedAt": "2008-07-08T10:16:45.000Z", "baseScore": 130, "voteCount": 101, "commentCount": 81, "url": null, "contents": { "documentId": "C8nEXTcjZb9oauTCW", "html": "

Why do I believe that the Sun will rise tomorrow?


Because I've seen the Sun rise on thousands of previous days.


Ah... but why do I believe the future will be like the past?


Even if I go past the mere surface observation of the Sun rising, to the apparently universal and exceptionless laws of gravitation and nuclear physics, then I am still left with the question:  \"Why do I believe this will also be true tomorrow?\"


I could appeal to Occam's Razor, the principle of using the simplest theory that fits the facts... but why believe in Occam's Razor?  Because it's been successful on past problems?  But who says that this means Occam's Razor will work tomorrow?


And lo, the one said:


\"Science also depends on unjustified assumptions.  Thus science is ultimately based on faith, so don't you criticize me for believing in [silly-belief-#238721].\"



As I've previously observed:


It's a most peculiar psychology—this business of \"Science is based on faith too, so there!\"  Typically this is said by people who claim that faith is a good thing.  Then why do they say \"Science is based on faith too!\" in that angry-triumphal tone, rather than as a compliment? 


Arguing that you should be immune to criticism is rarely a good sign.


But this doesn't answer the legitimate philosophical dilemma:  If every belief must be justified, and those justifications in turn must be justified, then how is the infinite recursion terminated?


And if you're allowed to end in something assumed-without-justification, then why aren't you allowed to assume anything without justification?


A similar critique is sometimes leveled against Bayesianism—that it requires assuming some prior—by people who apparently think that the problem of induction is a particular problem of Bayesianism, which you can avoid by using classical statistics.  I will speak of this later, perhaps.


But first, let it be clearly admitted that the rules of Bayesian updating, do not of themselves solve the problem of induction.


Suppose you're drawing red and white balls from an urn.  You observe that, of the first 9 balls, 3 are red and 6 are white.  What is the probability that the next ball drawn will be red?


That depends on your prior beliefs about the urn.  If you think the urn-maker generated a uniform random number between 0 and 1, and used that number as the fixed probability of each ball being red, then the answer is 4/11 (by Laplace's Law of Succession).  If you think the urn originally contained 10 red balls and 10 white balls, then the answer is 7/11.
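
For concreteness, both answers can be checked in a few lines (Python; a sketch of just the two priors named above, nothing more):

    from fractions import Fraction

    r, n = 3, 9   # 3 red balls observed in 9 draws

    # Uniform prior over the urn's red-ball frequency:
    # Laplace's Law of Succession gives P(next red) = (r + 1) / (n + 2).
    laplace = Fraction(r + 1, n + 2)    # 4/11

    # Fixed-composition prior: the urn started with 10 red and 10 white,
    # so 10 - 3 = 7 of the 20 - 9 = 11 remaining balls are red.
    fixed = Fraction(10 - r, 20 - n)    # 7/11

    print(laplace, fixed)               # 4/11 7/11

Same data, different priors, different predictions: which is exactly the point of the paragraphs that follow.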


Which goes to say that, with the right prior—or rather the wrong prior—the chance of the Sun rising tomorrow, would seem to go down with each succeeding day... if you were absolutely certain, a priori, that there was a great barrel out there from which, on each day, there was drawn a little slip of paper that determined whether the Sun rose or not; and that the barrel contained only a limited number of slips saying \"Yes\", and the slips were drawn without replacement.


There are possible minds in mind design space who have anti-Occamian and anti-Laplacian priors; they believe that simpler theories are less likely to be correct, and that the more often something happens, the less likely it is to happen again.


And when you ask these strange beings why they keep using priors that never seem to work in real life... they reply, \"Because it's never worked for us before!\"


Now, one lesson you might derive from this, is \"Don't be born with a stupid prior.\"  This is an amazingly helpful principle on many real-world problems, but I doubt it will satisfy philosophers.


Here's how I treat this problem myself:  I try to approach questions like \"Should I trust my brain?\" or \"Should I trust Occam's Razor?\" as though they were nothing special—or at least, nothing special as deep questions go.


Should I trust Occam's Razor?  Well, how well does (any particular version of) Occam's Razor seem to work in practice?  What kind of probability-theoretic justifications can I find for it?  When I look at the universe, does it seem like the kind of universe in which Occam's Razor would work well?


Should I trust my brain?  Obviously not; it doesn't always work.  But nonetheless, the human brain seems much more powerful than the most sophisticated computer programs I could consider trusting otherwise.  How well does my brain work in practice, on which sorts of problems?


When I examine the causal history of my brain—its origins in natural selection—I find, on the one hand, all sorts of specific reasons for doubt; my brain was optimized to run on the ancestral savanna, not to do math.  But on the other hand, it's also clear why, loosely speaking, it's possible that the brain really could work.  Natural selection would have quickly eliminated brains so completely unsuited to reasoning, so anti-helpful, as anti-Occamian or anti-Laplacian priors.


So what I did in practice, does not amount to declaring a sudden halt to questioning and justification.  I'm not halting the chain of examination at the point that I encounter Occam's Razor, or my brain, or some other unquestionable.  The chain of examination continues—but it continues, unavoidably, using my current brain and my current grasp on reasoning techniques.  What else could I possibly use?


Indeed, no matter what I did with this dilemma, it would be me doing it.  Even if I trusted something else, like some computer program, it would be my own decision to trust it.


The technique of rejecting beliefs that have absolutely no justification, is in general an extremely important one.  I sometimes say that the fundamental question of rationality is \"Why do you believe what you believe?\"  I don't even want to say something that sounds like it might allow a single exception to the rule that everything needs justification.


Which is, itself, a dangerous sort of motivation; you can't always avoid everything that might be risky, and when someone annoys you by saying something silly, you can't reverse that stupidity to arrive at intelligence.


But I would nonetheless emphasize the difference between saying:


\"Here is this assumption I cannot justify, which must be simply taken, and not further examined.\"


Versus saying:


\"Here the inquiry continues to examine this assumption, with the full force of my present intelligence—as opposed to the full force of something else, like a random number generator or a magic 8-ball—even though my present intelligence happens to be founded on this assumption.\"


Still... wouldn't it be nice if we could examine the problem of how much to trust our brains without using our current intelligence?  Wouldn't it be nice if we could examine the problem of how to think, without using our current grasp of rationality?


When you phrase it that way, it starts looking like the answer might be \"No\".


E. T. Jaynes used to say that you must always use all the information available to you—he was a Bayesian probability theorist, and had to clean up the paradoxes other people generated when they used different information at different points in their calculations.  The principle of \"Always put forth your true best effort\" has at least as much appeal as \"Never do anything that might look circular.\"  After all, the alternative to putting forth your best effort is presumably doing less than your best.


But still... wouldn't it be nice if there were some way to justify using Occam's Razor, or justify predicting that the future will resemble the past, without assuming that those methods of reasoning which have worked on previous occasions are better than those which have continually failed?


Wouldn't it be nice if there were some chain of justifications that neither ended in an unexaminable assumption, nor was forced to examine itself under its own rules, but, instead, could be explained starting from absolute scratch to an ideal philosophy student of perfect emptiness?


Well, I'd certainly be interested, but I don't expect to see it done any time soon.  I've argued elsewhere in several places against the idea that you can have a perfectly empty ghost-in-the-machine; there is no argument that you can explain to a rock.


Even if someone cracks the First Cause problem and comes up with the actual reason the universe is simple, which does not itself presume a simple universe... then I would still expect that the explanation could only be understood by a mindful listener, and not by, say, a rock.  A listener that didn't start out already implementing modus ponens might be out of luck.


So, at the end of the day, what happens when someone keeps asking me \"Why do you believe what you believe?\"


At present, I start going around in a loop at the point where I explain, \"I predict the future as though it will resemble the past on the simplest and most stable level of organization I can identify, because previously, this rule has usually worked to generate good results; and using the simple assumption of a simple universe, I can see why it generates good results; and I can even see how my brain might have evolved to be able to observe the universe with some degree of accuracy, if my observations are correct.\"


But then... haven't I just licensed circular logic?


Actually, I've just licensed reflecting on your mind's degree of trustworthiness, using your current mind as opposed to something else.


Reflection of this sort is, indeed, the reason we reject most circular logic in the first place.  We want to have a coherent causal story about how our mind comes to know something, a story that explains how the process we used to arrive at our beliefs, is itself trustworthy.  This is the essential demand behind the rationalist's fundamental question, \"Why do you believe what you believe?\"


Now suppose you write on a sheet of paper:  \"(1) Everything on this sheet of paper is true, (2) The mass of a helium atom is 20 grams.\"  If that trick actually worked in real life, you would be able to know the true mass of a helium atom just by believing some circular logic which asserted it.  Which would enable you to arrive at a true map of the universe sitting in your living room with the blinds drawn.  Which would violate the second law of thermodynamics by generating information from nowhere.  Which would not be a plausible story about how your mind could end up believing something true.


Even if you started out believing the sheet of paper, it would not seem that you had any reason for why the paper corresponded to reality.  It would just be a miraculous coincidence that (a) the mass of a helium atom was 20 grams, and (b) the paper happened to say so.


Believing, in general, self-validating statement sets, does not seem like it should work to map external reality—when we reflect on it as a causal story about minds—using, of course, our current minds to do so.


But what about evolving to give more credence to simpler beliefs, and to believe that algorithms which have worked in the past are more likely to work in the future?  Even when we reflect on this as a causal story of the origin of minds, it still seems like this could plausibly work to map reality.


And what about trusting reflective coherence in general?  Wouldn't most possible minds, randomly generated and allowed to settle into a state of reflective coherence, be incorrect?  Ah, but we evolved by natural selection; we were not generated randomly.


If trusting this argument seems worrisome to you, then forget about the problem of philosophical justifications, and ask yourself whether it's really truly true.


(You will, of course, use your own mind to do so.)


Is this the same as the one who says, \"I believe that the Bible is the word of God, because the Bible says so\"?


Couldn't they argue that their blind faith must also have been placed in them by God, and is therefore trustworthy?


In point of fact, when religious people finally come to reject the Bible, they do not do so by magically jumping to a non-religious state of pure emptiness, and then evaluating their religious beliefs in that non-religious state of mind, and then jumping back to a new state with their religious beliefs removed.


People go from being religious, to being non-religious, because even in a religious state of mind, doubt seeps in.  They notice their prayers (and worse, the prayers of seemingly much worthier people) are not being answered.  They notice that God, who speaks to them in their heart in order to provide seemingly consoling answers about the universe, is not able to tell them the hundredth digit of pi (which would be a lot more reassuring, if God's purpose were reassurance).  They examine the story of God's creation of the world and damnation of unbelievers, and it doesn't seem to make sense even under their own religious premises.


Being religious doesn't make you less than human.  Your brain still has the abilities of a human brain.  The dangerous part is that being religious might stop you from applying those native abilities to your religion—stop you from reflecting fully on yourself.  People don't heal their errors by resetting themselves to an ideal philosopher of pure emptiness and reconsidering all their sensory experiences from scratch.  They heal themselves by becoming more willing to question their current beliefs, using more of the power of their current mind.


This is why it's important to distinguish between reflecting on your mind using your mind (it's not like you can use anything else) and having an unquestionable assumption that you can't reflect on.


\"I believe that the Bible is the word of God, because the Bible says so.\"  Well, if the Bible were an astoundingly reliable source of information about all other matters, if it had not said that grasshoppers had four legs or that the universe was created in six days, but had instead contained the Periodic Table of Elements centuries before chemistry—if the Bible had served us only well and told us only truth—then we might, in fact, be inclined to take seriously the additional statement in the Bible, that the Bible had been generated by God.  We might not trust it entirely, because it could also be aliens or the Dark Lords of the Matrix, but it would at least be worth taking seriously.


Likewise, if everything else that priests had told us, turned out to be true, we might take more seriously their statement that faith had been placed in us by God and was a systematically trustworthy source—especially if people could divine the hundredth digit of pi by faith as well.


So the important part of appreciating the circularity of \"I believe that the Bible is the word of God, because the Bible says so,\" is not so much that you are going to reject the idea of reflecting on your mind using your current mind.  But, rather, that you realize that anything which calls into question the Bible's trustworthiness, also calls into question the Bible's assurance of its trustworthiness.


This applies to rationality too: if the future should cease to resemble the past—even on its lowest and simplest and most stable observed levels of organization—well, mostly, I'd be dead, because my brain's processes require a lawful universe where chemistry goes on working.  But if somehow I survived, then I would have to start questioning the principle that the future should be predicted to be like the past.


But for now... what's the alternative to saying, \"I'm going to believe that the future will be like the past on the most stable level of organization I can identify, because that's previously worked better for me than any other algorithm I've tried\"?


Is it saying, \"I'm going to believe that the future will not be like the past, because that algorithm has always failed before\"?


At this point I feel obliged to drag up the point that rationalists are not out to win arguments with ideal philosophers of perfect emptiness; we are simply out to win.  For which purpose we want to get as close to the truth as we can possibly manage.  So at the end of the day, I embrace the principle:  \"Question your brain, question your intuitions, question your principles of rationality, using the full current force of your mind, and doing the best you can do at every point.\"


If one of your current principles does come up wanting—according to your own mind's examination, since you can't step outside yourself—then change it!  And then go back and look at things again, using your new improved principles.


The point is not to be reflectively consistent.  The point is to win.  But if you look at yourself and play to win, you are making yourself more reflectively consistent—that's what it means to \"play to win\" while \"looking at yourself\".


Everything, without exception, needs justification.  Sometimes—unavoidably, as far as I can tell—those justifications will go around in reflective loops.  I do think that reflective loops have a meta-character which should enable one to distinguish them, by common sense, from circular logics.  But anyone seriously considering a circular logic in the first place, is probably out to lunch in matters of rationality; and will simply insist that their circular logic is a \"reflective loop\" even if it consists of a single scrap of paper saying \"Trust me\".  Well, you can't always optimize your rationality techniques according to the sole consideration of preventing those bent on self-destruction from abusing them.


The important thing is to hold nothing back in your criticisms of how to criticize; nor should you regard the unavoidability of loopy justifications as a warrant of immunity from questioning.


Always apply full force, whether it loops or not—do the best you can possibly do, whether it loops or not—and play, ultimately, to win.

" } }, { "_id": "qM7ydBXaCxPisAsaY", "title": "Will As Thou Wilt", "pageUrl": "https://www.lesswrong.com/posts/qM7ydBXaCxPisAsaY/will-as-thou-wilt", "postedAt": "2008-07-07T10:37:11.000Z", "baseScore": 6, "voteCount": 7, "commentCount": 31, "url": null, "contents": { "documentId": "qM7ydBXaCxPisAsaY", "html": "

Followup to: Possibility and Could-ness


Arthur Schopenhauer (1788-1860) said:

"A man can do as he wills, but not will as he wills."

For this fascinating sentence, I immediately saw two interpretations; and then, after some further thought, two more interpretations.


On the first interpretation, Schopenhauer forbids us to build circular causal models of human psychology.  The explanation for someone's current will cannot be their current will - though it can include their past will.


On the second interpretation, the sentence says that alternate choices are not reachable - that we couldn't have taken other options even "if we had wanted to do so".

\n\n

On the third interpretation, the sentence says that we cannot control our own desires - that we are the prisoners of our own passions, even when we struggle against them.

\n\n

On the fourth interpretation, the sentence says that we cannot control our own desires, because our desires themselves will determine which desires we want, and so protect themselves.

\n\n

I count two true interpretations and two false interpretations.  How about you?

\n\n" } }, { "_id": "iQNKfYb7aRYopojTX", "title": "Is Morality Given?", "pageUrl": "https://www.lesswrong.com/posts/iQNKfYb7aRYopojTX/is-morality-given", "postedAt": "2008-07-06T08:12:26.000Z", "baseScore": 35, "voteCount": 36, "commentCount": 100, "url": null, "contents": { "documentId": "iQNKfYb7aRYopojTX", "html": "

Continuation of Is Morality Preference?

\n

(Disclaimer:  Neither Subhan nor Obert represent my own position on morality; rather they represent different sides of the questions I hope to answer.)

\n

Subhan:  \"What is this 'morality' stuff, if it is not a preference within you?\"

\n

Obert:  \"I know that my mere wants, don't change what is right; but I don't claim to have absolute knowledge of what is right—\"

\n

Subhan:  \"You're not escaping that easily!  How does a universe in which murder is wrong, differ from a universe in which murder is right?  How can you detect the difference experimentally?  If the answer to that is 'No', then how does any human being come to know that murder is wrong?\"

\n

Obert:  \"Am I allowed to say 'I don't know'?\"

\n

Subhan:  \"No.  You believe now that murder is wrong.  You must believe you already have evidence and you should be able to present it now.\"

\n

Obert:  \"That's too strict!  It's like saying to a hunter-gatherer, 'Why is the sky blue?' and expecting an immediate answer.\"

\n

Subhan:  \"No, it's like saying to a hunter-gatherer:  Why do you believe the sky is blue?\"

\n

Obert:  \"Because it seems blue, just as murder seems wrong.  Just don't ask me what the sky is, or how I can see it.\"

\n

Subhan:  \"But—aren't we discussing the nature of morality?\"

\n

Obert:  \"That, I confess, is not one of my strong points.  I specialize in plain old morality.  And as a matter of morality, I know that I can't make murder right just by wanting to kill someone.\"

\n

\n

Subhan:  \"But if you wanted to kill someone, you would say, 'I know murdering this guy is right, and I couldn't make it wrong just by not wanting to do it.'\"

\n

Obert:  \"Then, if I said that, I would be wrong.  That's common moral sense, right?\"

\n

Subhan:  \"Argh!  It's difficult to even argue with you, since you won't tell me exactly what you think morality is made of, or where you're getting all these amazing moral truths—\"

\n

Obert:  \"Well, I do regret having to frustrate you.  But it's more important that I act morally, than that I come up with amazing new theories of the nature of morality.  I don't claim that my strong point is in explaining the fundamental nature of morality.  Rather, my strong point is coming up with theories of morality that give normal moral answers to questions like, 'If you feel like killing someone, does that make it right to do so?'  The common-sense answer is 'No' and I really see no reason to adopt a theory that makes the answer 'Yes'.  Adding up to moral normality—that is my theory's strong point.\"

\n

Subhan:  \"Okay... look.  You say that, if you believed it was right to murder someone, you would be wrong.\"

\n

Obert:  \"Yes, of course!  And just to cut off any quibbles, we'll specify that we're not talking about going back in time and shooting Stalin, but rather, stalking some innocent bystander through a dark alley and slitting their throat for no other reason but my own enjoyment.  That's wrong.\"

\n

Subhan:  \"And anyone who says murder is right, is mistaken.\"

\n

Obert:  \"Yes.\"

\n

Subhan:  \"Suppose there's an alien species somewhere in the vastness of the multiverse, who evolved from carnivores.  In fact, through most of their evolutionary history, they were cannibals.  They've evolved different emotions from us, and they have no concept that murder is wrong—\"

\n

Obert:  \"Why doesn't their society fall apart in an orgy of mutual killing?\"

\n

Subhan:  \"That doesn't matter for our purposes of theoretical metaethical investigation.  But since you ask, we'll suppose that the Space Cannibals have a strong sense of honor—they won't kill someone they promise not to kill; they have a very strong idea that violating an oath is wrong.  Their society holds together on that basis, and on the basis of vengeance contracts with private assassination companies.  But so far as the actual killing is concerned, the aliens just think it's fun.  When someone gets executed for, say, driving through a traffic light, there's a bidding war for the rights to personally tear out the offender's throat.\"

\n

Obert:  \"Okay... where is this going?\"

\n

Subhan:  \"I'm proposing that the Space Cannibals not only have no sense that murder is wrong—indeed, they have a positive sense that killing is an important part of life—but moreover, there's no path of arguments you could use to persuade a Space Cannibal of your view that murder is wrong.  There's no fact the aliens can learn, and no chain of reasoning they can discover, which will ever cause them to conclude that murder is a moral wrong.  Nor is there any way to persuade them that they should modify themselves to perceive things differently.\"

\n

Obert:  \"I'm not sure I believe that's possible—\"

\n

Subhan:  \"Then you believe in universally compelling arguments processed by a ghost in the machine.  For every possible mind whose utility function assigns terminal value +1, mind design space contains an equal and opposite mind whose utility function assigns terminal value—1.  A mind is a physical device and you can't have a little blue woman pop out of nowhere and make it say 1 when the physics calls for it to say 0.\"

\n

Obert:  \"Suppose I were to concede this.  Then?\"

\n

Subhan:  \"Then it's possible to have an alien species that believes murder is not wrong, and moreover, will continue to believe this given knowledge of every possible fact and every possible argument.  Can you say these aliens are mistaken?\"

\n

Obert:  \"Maybe it's the right thing to do in their very different, alien world—\"

\n

Subhan:  \"And then they land on Earth and start slitting human throats, laughing all the while, because they don't believe it's wrong.  Are they mistaken?\"

\n

Obert:  \"Yes.\"

\n

Subhan:  \"Where exactly is the mistake?  In which step of reasoning?\"

\n

Obert:  \"I don't know exactly.  My guess is that they've got a bad axiom.\"

\n

Subhan:  \"Dammit!  Okay, look.  Is it possible that—by analogy with the Space Cannibals—there are true moral facts of which the human species is not only presently unaware, but incapable of perceiving in principle?  Could we have been born defective—incapable even of being compelled by the arguments that would lead us to the light?  Moreover, born without any desire to modify ourselves to be capable of understanding such arguments?  Could we be irrevocably mistaken about morality—just like you say the Space Cannibals are?\"

\n

Obert:  \"I... guess so...\"

\n

Subhan:  \"You guess so?  Surely this is an inevitable consequence of believing that morality is a given, independent of anyone's preferences!  Now, is it possible that we, not the Space Cannibals, are the ones who are irrevocably mistaken in believing that murder is wrong?\"

\n

Obert:  \"That doesn't seem likely.\"

\n

Subhan:  \"I'm not asking you if it's likely, I'm asking you if it's logically possible!  If it's not possible, then you have just confessed that human morality is ultimately determined by our human constitutions.  And if it is possible, then what distinguishes this scenario of 'humanity is irrevocably mistaken about morality', from finding a stone tablet on which is written the phrase 'Thou Shalt Murder' without any known justification attached?  How is a given morality any different from an unjustified stone tablet?\"

\n

Obert:  \"Slow down.  Why does this argument show that morality is determined by our own constitutions?\"

\n

Subhan:  \"Once upon a time, theologians tried to say that God was the foundation of morality.  And even since the time of the ancient Greeks, philosophers were sophisticated enough to go on and ask the next question—'Why follow God's commands?'  Does God have knowledge of morality, so that we should follow Its orders as good advice?  But then what is this morality, outside God, of which God has knowledge?  Do God's commands determine morality?  But then why, morally, should one follow God's orders?\"

\n

Obert:  \"Yes, this demolishes attempts to answer questions about the nature of morality just by saying 'God!', unless you answer the obvious further questions.  But so what?\"

\n

Subhan:  \"And furthermore, let us castigate those who made the argument originally, for the sin of trying to cast off responsibility—trying to wave a scripture and say, 'I'm just following God's orders!'  Even if God had told them to do a thing, it would still have been their own decision to follow God's orders.\"

\n

Obert:  \"I agree—as a matter of morality, there is no evading of moral responsibility.  Even if your parents, or your government, or some kind of hypothetical superintelligence, tells you to do something, you are responsible for your decision in doing it.\"

\n

Subhan:  \"But you see, this also demolishes the idea of any morality that is outside, beyond, or above human preference.  Just substitute 'morality' for 'God' in the argument!\"

\n

Obert:  \"What?\"

\n

Subhan:  \"John McCarthy said:  'You say you couldn't live if you thought the world had no purpose. You're saying that you can't form purposes of your own-that you need someone to tell you what to do. The average child has more gumption than that.'  For every kind of stone tablet that you might imagine anywhere, in the trends of the universe or in the structure of logic, you are still left with the question:  'And why obey this morality?'  It would be your decision to follow this trend of the universe, or obey this structure of logic.  Your decision—and your preference.\"

\n

Obert:  \"That doesn't follow!  Just because it is my decision to be moral—and even because there are drives in me that lead me to make that decision—it doesn't follow that the morality I follow consists merely of my preferences.  If someone gives me a pill that makes me prefer to not be moral, to commit murder, then this just alters my preference—but not the morality; murder is still wrong.  That's common moral sense—\"

\n

Subhan:  \"I beat my head against my keyboard!  What about scientific common sense?  If morality is this mysterious given thing, from beyond space and time—and I don't even see why we should follow it, in that case—but in any case, if morality exists independently of human nature, then isn't it a remarkable coincidence that, say, love is good?\"

\n

Obert:  \"Coincidence?  How so?\"

\n

Subhan:  \"Just where on Earth do you think the emotion of love comes from?  If the ancient Greeks had ever thought of the theory of natural selection, they could have looked at the human institution of sexual romance, or parental love for that matter, and deduced in one flash that human beings had evolved—or at least derived tremendous Bayesian evidence for human evolution.  Parental bonds and sexual romance clearly display the signature of evolutionary psychology—they're archetypal cases, in fact, so obvious we usually don't even see it.\"

\n

Obert:  \"But love isn't just about reproduction—\"

\n

Subhan:  \"Of course not; individual organisms are adaptation-executers, not fitness-maximizers.  But for something independent of humans, morality looks remarkably like godshatter of natural selection.  Indeed, it is far too much coincidence for me to credit.  Is happiness morally preferable to pain?  What a coincidence!  And if you claim that there is any emotion, any instinctive preference, any complex brain circuitry in humanity which was created by some external morality thingy and not natural selection, then you are infringing upon science and you will surely be torn to shreds—science has never needed to postulate anything but evolution to explain any feature of human psychology—\"

\n

Obert:  \"I'm not saying that humans got here by anything except evolution.\"

\n

Subhan:  \"Then why does morality look so amazingly like a product of an evolved psychology?\"

\n

Obert:  \"I don't claim perfect access to moral truth; maybe, being human, I've made certain mistakes about morality—\"

\n

Subhan:  \"Say that—forsake love and life and happiness, and follow some useless damn trend of the universe or whatever—and you will lose every scrap of the moral normality that you once touted as your strong point.  And I will be right here, asking, 'Why even bother?'  It would be a pitiful mind indeed that demanded authoritative answers so strongly, that it would forsake all good things to have some authority beyond itself to follow.\"

\n

Obert:  \"All right... then maybe the reason morality seems to bear certain similarities to our human constitutions, is that we could only perceive morality at all, if we happened, by luck, to evolve in consonance with it.\"

\n

Subhan:  \"Horsemanure.\"

\n

Obert:  \"Fine... you're right, that wasn't very plausible.  Look, I admit you've driven me into quite a corner here.  But even if there were nothing more to morality than preference, I would still prefer to act as morality were real.  I mean, if it's all just preference, that way is as good as anything else—\"

\n

Subhan:  \"Now you're just trying to avoid facing reality!  Like someone who says, 'If there is no Heaven or Hell, then I may as well still act as if God's going to punish me for sinning.'\"

\n

Obert:  \"That may be a good metaphor, in fact.  Consider two theists, in the process of becoming atheists.  One says, 'There is no Heaven or Hell, so I may as well cheat and steal, if I can get away without being caught, since there's no God to watch me.'  And the other says, 'Even though there's no God, I intend to pretend that God is watching me, so that I can go on being a moral person.'  Now they are both mistaken, but the first is straying much further from the path.\"

\n

Subhan:  \"And what is the second one's flaw?  Failure to accept personal responsibility!\"

\n

Obert:  \"Well, and I admit I find that a more compelling argument than anything else you have said.  Probably because it is a moral argument, and it has always been morality, not metaethics, with which I claimed to be concerned.  But even so, after our whole conversation, I still maintain that wanting to murder someone does not make murder right.  Everything that you have said about preference is interesting, but it is ultimately about preference—about minds and what they are designed to desire—and not about this other thing that humans sometimes talk about, 'morality'.  I can just ask Moore's Open Question:  Why should I care about human preferences?  What makes following human preferences right?  By changing a mind, you can change what it prefers; you can even change what it believes to be right; but you cannot change what is right.  Anything you talk about, that can be changed in this way, is not 'right-ness'.\"

\n

Subhan:  \"So you take refuge in arguing from definitions?\"

\n

Obert:  \"You know, when I reflect on this whole argument, it seems to me that your position has the definite advantage when it comes to arguments about ontology and reality and all that stuff—\"

\n

Subhan:  \"'All that stuff'?  What else is there, besides reality?\"

\n

Obert:  \"Okay, the morality-as-preference viewpoint is a lot easier to shoehorn into a universe of quarks.  But I still think the morality-as-given viewpoint has the advantage when it comes to, you know, the actual morality part of it—giving answers that are good in the sense of being morally good, not in the sense of being a good reductionist.  Because, you know, there are such things as moral errors, there is moral progress, and you really shouldn't go around thinking that murder would be right if you wanted it to be right.\"

\n

Subhan:  \"That sounds to me like the logical fallacy of appealing to consequences.\"

\n

Obert:  \"Oh?  Well, it sounds to me like an incomplete reduction—one that doesn't quite add up to normality.\"

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Where Recursive Justification Hits Bottom\"

\n

Previous post: \"Is Morality Preference?\"

" } }, { "_id": "F5WLc7hCxkB4X4yD4", "title": "Is Morality Preference?", "pageUrl": "https://www.lesswrong.com/posts/F5WLc7hCxkB4X4yD4/is-morality-preference", "postedAt": "2008-07-05T00:55:25.000Z", "baseScore": 33, "voteCount": 37, "commentCount": 44, "url": null, "contents": { "documentId": "F5WLc7hCxkB4X4yD4", "html": "

Followup to Moral Complexities

\n

In the dialogue \"The Bedrock of Fairness\", I intended Yancy to represent morality-as-raw-fact, Zaire to represent morality-as-raw-whim, and Xannon to be a particular kind of attempt at compromising between them.  Neither Xannon, Yancy, nor Zaire represents my own views—rather they are, in their disagreement, showing the problem that I am trying to solve.  It is futile to present answers to which questions are lacking.

\n

But characters have independent life in the minds of all readers; when I create a dialogue, I don't view my authorial intent as primary.  Any good interpretation can be discussed.  I meant Zaire to be asking for half the pie out of pure selfishness; many readers interpreted this as a genuine need... which is as interesting a discussion to have as any, though it's a different discussion.

\n

With this in mind, I turn to Subhan and Obert, who shall try to answer yesterday's questions on behalf of their respective viewpoints.

\n

Subhan makes the opening statement:

\n

Subhan:  \"I defend this proposition: that there is no reason to talk about a 'morality' distinct from what people want.\"

\n

Obert:  \"I challenge.  Suppose someone comes to me and says, 'I want a slice of that pie you're holding.'  It seems to me that they have just made a very different statement from 'It is right that I should get a slice of that pie'.  I have no reason at all to doubt the former statement—to suppose that they are lying to me about their desires.  But when it comes to the latter proposition, I have reason indeed to be skeptical.  Do you say that these two statements mean the same thing?\"

\n

\n

Subhan:  \"I suggest that when the pie-requester says to you, 'It is right for me to get some pie', this asserts that you want the pie-requester to get a slice.\"

\n

Obert:  \"Why should I need to be told what I want?\"

\n

Subhan:  \"You take a needlessly restrictive view of wanting, Obert; I am not setting out to reduce humans to creatures of animal instinct.  Your wants include those desires you label 'moral values', such as wanting the hungry to be fed—\"

\n

Obert:  \"And you see no distinction between my desire to feed the hungry, and my desire to eat all the delicious pie myself?\"

\n

Subhan:  \"No!  They are both desires—backed by different emotions, perhaps, but both desires.  To continue, the pie-requester hopes that you have a desire to feed the hungry, and so says, 'It is right that I should get a slice of this pie', to remind you of your own desire.  We do not automatically know all the consequences of our own wants; we are not logically omniscient.\"

\n

Obert:  \"This seems psychologically unrealistic—I don't think that's what goes through the mind of the person who says, 'I have a right to some pie'.  In this latter case, if I deny them pie, they will feel indignant.  If they are only trying to remind me of my own desires, why should they feel indignant?\"

\n

Subhan:  \"Because they didn't get any pie, so they're frustrated.\"

\n

Obert:  \"Unrealistic!  Indignation at moral transgressions has a psychological dimension that goes beyond struggling with a struck door.\"

\n

Subhan:  \"Then consider the evolutionary psychology.  The pie-requester's emotion of indignation would evolve as a display, first to remind you of the potential consequences of offending fellow tribe-members, and second, to remind any observing tribe-members of goals they may have to feed the hungry.  By refusing to share, you would offend against a social norm—which is to say, a widely shared want.\"

\n

Obert:  \"So you take refuge in social wants as the essence of morality?  But people seem to see a difference between desire and morality, even in the quiet of their own minds.  They say things like:  'I want X, but the right thing to do is Y... what shall I do?'\"

\n

Subhan:  \"So they experience a conflict between their want to eat pie, and their want to feed the hungry—which they know is also a want of society.  It's not predetermined that the prosocial impulse will be victorious, but they are both impulses.\"

\n

Obert:  \"And when, during WWII, a German hides Jews in their basement—against the wants of surrounding society—how then?\"

\n

Subhan:  \"People do not always define their in-group by looking at their next-door neighbors; they may conceive of their group as 'good Christians' or 'humanitarians'.\"

\n

Obert:  \"I should sooner say that people choose their in-groups by looking for others who share their beliefs about morality—not that they construct their morality from their in-group.\"

\n

Subhan:  \"Oh, really?  I should not be surprised if that were experimentally testable—if so, how much do you want to bet?\"

\n

Obert:  \"That the Germans who hid Jews in their basements, chose who to call their people by looking at their beliefs about morality?  Sure.  I'd bet on that.\"

\n

Subhan:  \"But in any case, even if a German resister has a desire to preserve life which is so strong as to go against their own perceived 'society', it is still their desire.\"

\n

Obert:  \"Yet they would attribute to that desire, the same distinction they make between 'right' and 'want'—even when going against society.  They might think to themselves, 'How dearly I wish I could stay out of this, and keep my family safe.  But it is my duty to hide these Jews from the Nazis, and I must fulfill that duty.'  There is an interesting moral question, as to whether it reveals greater heroism, to fulfill a duty eagerly, or to fulfill your duties when you are not eager.  For myself I should just total up the lives saved, and call that their score.  But I digress...  The distinction between 'right' and 'want' is not explained by your distinction of socially shared and individual wants.  The distinction between desire and duty seems to me a basic thing, which someone could experience floating alone in a spacesuit a thousand light-years from company.\"

\n

Subhan:  \"Even if I were to grant this psychological distinction, perhaps that is simply a matter of emotional flavoring. Why should I not describe perceived duties as a differently flavored want?\"

\n

Obert:  \"Duties, and should-ness, seem to have a dimension that goes beyond our whims.  If we want different pizza toppings today, we can order a different pizza without guilt; but we cannot choose to make murder a good thing.\"

\n

Subhan:  \"Schopenhauer:  'A man can do as he wills, but not will as he wills.'  You cannot decide to make salad taste better to you than cheeseburgers, and you cannot decide not to dislike murder.  Furthermore, people do change, albeit rarely, those wants that you name 'values'; indeed they are easier to change than our food tastes.\"

\n

Obert:  \"Ah!  That is something I meant to ask you about.  People sometimes change their morals; I would call this updating their beliefs about morality, but you would call it changing their wants.  Why would anyone want to change their wants?\"

\n

Subhan:  \"Perhaps they simply find that their wants have changed; brains do change over time.  Perhaps they have formed a verbal belief about what they want, which they have discovered to be mistaken. Perhaps society has changed, or their perception of society has changed.  But really, in most cases you don't have to go that far, to explain apparent changes of morality.\"

\n

Obert:  \"Oh?\"

\n

Subhan:  \"Let's say that someone begins by thinking that Communism is a good social system, has some arguments, and ends by believing that Communism is a bad social system.  This does not mean that their ends have changed—they may simply have gotten a good look at the history of Russia, and decided that Communism is a poor means to the end of raising standards of living.  I challenge you to find me a case of changing morality in which people change their terminal values, and not just their beliefs about which acts have which consequences.\"

\n

Obert:  \"Someone begins by believing that God ordains against premarital sex; they find out there is no God; subsequently they approve of premarital sex.  This, let us specify, is not because of fear of Hell; but because previously they believed that God had the power to ordain, or knowledge to tell them, what is right; in ceasing to believe in God, they updated their belief about what is right.\"

\n

Subhan:  \"I am not responsible for straightening others' confusions; this one is merely in a general state of disarray around the 'God' concept.\"

\n

Obert:  \"All right; suppose I get into a moral argument with a man from a society that practices female circumcision.  I do not think our argument is about the consequences to the woman; the argument is about the morality of these consequences.\"

\n

Subhan:  \"Perhaps the one falsely believes that women have no feelings—\"

\n

Obert:  \"Unrealistic, unrealistic!  It is far more likely that the one hasn't really considered whether the woman has feelings, because he doesn't see any obligation to care.  The happiness of women is not a terminal value to him.  Thousands of years ago, most societies devalued consequences to women.  They also had false beliefs about women, true—and false beliefs about men as well, for that matter—but nothing like the Victorian era's complex rationalizations for how paternalistic rules really benefited women. The Old Testament doesn't explain why it levies the death penalty for a woman wearing men's clothing.  It certainly doesn't explain how this rule really benefits women after all.  It's not the sort of argument it would have occurred to the authors to rationalize!  They didn't care about the consequences to women.\"

\n

Subhan:  \"So they wanted different things than you; what of it?\"

\n

Obert:  \"See, now that is exactly why I cannot accept your viewpoint.  Somehow, societies went from Old Testament attitudes, to democracies with female suffrage.  And this transition—however it occurred—was caused by people saying, 'What this society does to women is a great wrong!', not, 'I would personally prefer to treat women better.'  That's not just a change in semantics—it's the difference between being obligated to stand and deliver a justification, versus being able to just say, 'Well, I prefer differently, end of discussion.'  And who says that humankind has finished with its moral progress?  You're yanking the ladder out from underneath a very important climb.\"

\n

Subhan:  \"Let us suppose that the change of human societies over the last ten thousand years, has been accompanied by a change in terminal values—\"

\n

Obert:  \"You call this a supposition?  Modern political debates turn around vastly different valuations of consequences than in ancient Greece!\"

\n

Subhan:  \"I am not so sure; human cognitive psychology has not had time to change evolutionarily over that period.  Modern democracies tend to appeal to our empathy for those suffering; that empathy existed in ancient Greece as well, but it was invoked less often.  In each single moment of argument, I doubt you would find modern politicians appealing to emotions that didn't exist in ancient Greece.\"

\n

Obert:  \"I'm not saying that emotions have changed; I'm saying that beliefs about morality have changed.  Empathy merely provides emotional depth to an argument that can be made on a purely logical level:  'If it's wrong to enslave you, if it's wrong to enslave your family and your friends, then how can it be right to enslave people who happen to be a different color?  What difference does the color make?'  If morality is just preference, then there's a very simple answer:  'There is no right or wrong, I just like my own family better.'  You see the problem here?\"

\n

Subhan:  \"Logical fallacy:  Appeal to consequences.\"

\n

Obert:  \"I'm not appealing to consequences.  I'm showing that when I reason about 'right' or 'wrong', I am reasoning about something that does not behave like 'want' and 'don't want'.\"

\n

Subhan:  \"Oh?  But I think that in reality, your rejection of morality-as-preference has a great deal to do with your fear of where the truth leads.\"

\n

Obert:  \"Logical fallacy:  Ad hominem.\"

\n

Subhan:  \"Fair enough.  Where were we?\"

\n

Obert:  \"If morality is preference, why would you want to change your wants to be more inclusive?  Why would you want to change your wants at all?\"

\n

Subhan:  \"The answer to your first question probably has to do with a fairness instinct, I would suppose—a notion that the tribe should have the same rules for everyone.\"

\n

Obert:  \"I don't think that's an instinct.  I think that's a triumph of three thousand years of moral philosophy.\"

\n

Subhan:  \"That could be tested.\"

\n

Obert:  \"And my second question?\"

\n

Subhan:  \"Even if terminal values change, it doesn't mean that terminal values are stored on a great stone tablet outside humanity.  Indeed, it would seem to argue against it!  It just means that some of the events that go on in our brains, can change what we want.\"

\n

Obert:  \"That's your concept of moral progress?  That's your view of the last three thousand years?  That's why we have free speech, democracy, mass street protests against wars, nonlethal weapons, no more slavery—\"

\n

Subhan:  \"If you wander on a random path, and you compare all past states to your present state, you will see continuous 'advancement' toward your present condition—\"

\n

Obert:  \"Wander on a random path?\"

\n

Subhan:  \"I'm just pointing out that saying, 'Look how much better things are now', when your criterion for 'better' is comparing past moral values to yours, does not establish any directional trend in human progress.\"

\n

Obert:  \"Your strange beliefs about the nature of morality have destroyed your soul.  I don't even believe in souls, and I'm saying that.\"

\n

Subhan:  \"Look, depending on which arguments do, in fact, move us, you might be able to regard the process of changing terminal values as a directional progress.  You might be able to show that the change had a consistent trend as we thought of more and more arguments.  But that doesn't show that morality is something outside us.  We could even—though this is psychologically unrealistic—choose to regard you as computing a converging approximation to your 'ideal wants', so that you would have meta-values that defined both your present value and the rules for updating them.  But these would be your meta-values and your ideals and your computation, just as much as pepperoni is your own taste in pizza toppings.  You may not know your real favorite ever pizza topping, until you've tasted many possible flavors.\"

\n

Obert:  \"Leaving out what it is that you just compared to pizza toppings, I begin to be suspicious of the all-embracingness of your viewpoint.  No matter what my mind does, you can simply call it a still-more-modified 'want'.  I think that you are the one suffering from meta-level confusion, not I.  Appealing to right is not the same as appealing to desire.  Just because the appeal is judged inside my brain, doesn't mean that the appeal is not to something more than my desires.  Why can't my brain compute duties as well as desires?\"

\n

Subhan:  \"What is the difference between duty and desire?\"

\n

Obert:  \"A duty is something you must do whether you want to or not.\"

\n

Subhan:  \"Now you're just being incoherent.  Your brain computes something it wants to do whether it wants to or not?\"

\n

Obert:  \"No, you are the one whose theory makes this incoherent.  Which is why your theory ultimately fails to add up to morality.\"

\n

Subhan:  \"I say again that you underestimate the power of mere wanting.  And more:  You accuse me of incoherence?  You say that I suffer from meta-level confusion?\"

\n

Obert:  \"Er... yes?\"

\n

To be continued...

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Is Morality Given?\"

\n

Previous post: \"Moral Complexities\"

" } }, { "_id": "SbdCX6A5AGyyfhdmh", "title": "Moral Complexities", "pageUrl": "https://www.lesswrong.com/posts/SbdCX6A5AGyyfhdmh/moral-complexities", "postedAt": "2008-07-04T06:43:52.000Z", "baseScore": 30, "voteCount": 23, "commentCount": 40, "url": null, "contents": { "documentId": "SbdCX6A5AGyyfhdmh", "html": "

Followup to The Bedrock of Fairness

\n

Discussions of morality seem to me to often end up turning around two different intuitions, which I might label morality-as-preference and morality-as-given.  The former crowd tends to equate morality with what people want; the latter to regard morality as something you can't change by changing people.

\n

As for me, I have my own notions, which I am working up to presenting.  But above all, I try to avoid avoiding difficult questions.  Here are what I see as (some of) the difficult questions for the two intuitions:

\n\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Is Morality Preference?\"

\n

Previous post: \"The Bedrock of Fairness\"

" } }, { "_id": "R8gzvuuNYK9g52Cxh", "title": "2 of 10, not 3 total", "pageUrl": "https://www.lesswrong.com/posts/R8gzvuuNYK9g52Cxh/2-of-10-not-3-total", "postedAt": "2008-07-04T01:10:29.000Z", "baseScore": 5, "voteCount": 7, "commentCount": 15, "url": null, "contents": { "documentId": "R8gzvuuNYK9g52Cxh", "html": "

There is no rule against commenting more than 3 times in a thread.  Sorry if anyone has gotten this impression.

\n\n

However, among the 10 "Recent Comments" visible in the sidebar at right, usually no more than 2, rarely 3, and never 4, should be yours.  This is meant to ensure no one person dominates a thread; it gives others a chance to respond to others' responses.  One-line comments that quickly correct an error may be common-sensically excepted from this rule.

\n\n

You need not refrain from commenting, just wait a bit.

" } }, { "_id": "iAxkfiyG8WizPSPbq", "title": "The Bedrock of Fairness", "pageUrl": "https://www.lesswrong.com/posts/iAxkfiyG8WizPSPbq/the-bedrock-of-fairness", "postedAt": "2008-07-03T06:00:14.000Z", "baseScore": 58, "voteCount": 50, "commentCount": 103, "url": null, "contents": { "documentId": "iAxkfiyG8WizPSPbq", "html": "

Followup to The Moral Void

\n

Three people, whom we'll call Xannon, Yancy and Zaire, are separately wandering through the forest; by chance, they happen upon a clearing, meeting each other.  Introductions are performed.  And then they discover, in the center of the clearing, a delicious blueberry pie.

\n

Xannon:  \"A pie!  What good fortune!  But which of us should get it?\"

\n

Yancy:  \"Let us divide it fairly.\"

\n

Zaire:  \"I agree; let the pie be distributed fairly.  Who could argue against fairness?\"

\n

Xannon:  \"So we are agreed, then.  But what is a fair division?\"

\n

Yancy:  \"Eh?  Three equal parts, of course!\"

\n

Zaire:  \"Nonsense!  A fair distribution is half for me, and a quarter apiece for the two of you.\"

\n

Yancy:  \"What?  How is that fair?\"

\n

Zaire:  \"I'm hungry, therefore I should be fed; that is fair.\"

\n

Xannon:  \"Oh, dear.  It seems we have a dispute as to what is fair.  For myself, I want to divide the pie the same way as Yancy.  But let us resolve this dispute over the meaning of fairness, fairly: that is, giving equal weight to each of our desires.  Zaire desires the pie to be divided {1/4, 1/4, 1/2}, and Yancy and I desire the pie to be divided {1/3, 1/3, 1/3}.  So the fair compromise is {11/36, 11/36, 14/36}.\"

\n

\n

Zaire:  \"What?  That's crazy.  There's two different opinions as to how fairness works—why should the opinion that happens to be yours, get twice as much weight as the opinion that happens to be mine?  Do you think your theory is twice as good?  I think my theory is a hundred times as good as yours!  So there!\"

\n

Yancy:  \"Craziness indeed.  Xannon, I already took Zaire's desires into account in saying that he should get 1/3 of the pie.  You can't count the same factor twice.  Even if we count fairness as an inherent desire, why should Zaire be rewarded for being selfish?  Think about which agents thrive under your system!\"

\n

Xannon:  \"Alas!  I was hoping that, even if we could not agree on how to distribute the pie, we could agree on a fair resolution procedure for our dispute, such as averaging our desires together.  But even that hope was dashed.  Now what are we to do?\"

\n

Yancy:  \"Xannon, you are overcomplicating things.  1/3 apiece.  It's not that complicated.  A fair distribution is an even split, not a distribution arrived at by a 'fair resolution procedure' that everyone agrees on.  What if we'd all been raised in a society that believed that men should get twice as much pie as women?  Then we would split the pie unevenly, and even though no one of us disputed the split, it would still be unfair.\"

\n

Xannon:  \"What?  Where is this 'fairness' stored if not in human minds?  Who says that something is unfair if no intelligent agent does so?  Not upon the stars or the mountains is 'fairness' written.\"

\n

Yancy:  \"So what you're saying is that if you've got a whole society where women are chattel and men sell them like farm animals and it hasn't occurred to anyone that things could be other than they are, that this society is fair, and at the exact moment where someone first realizes it shouldn't have to be that way, the whole society suddenly becomes unfair.\"

\n

Xannon:  \"How can a society be unfair without some specific party who claims injury and receives no reparation?  If it hasn't occurred to anyone that things could work differently, and no one's asked for things to work differently, then—\"

\n

Yancy:  \"Then the women are still being treated like farm animals and that is unfair.  Where's your common sense?  Fairness is not agreement, fairness is symmetry.\"

\n

Zaire:  \"Is this all working out to my getting half the pie?\"

\n

Yancy:  \"No.\"

\n

Xannon:  \"I don't know... maybe as the limit of an infinite sequence of meta-meta-fairnesses...\"

\n

Zaire:  \"I fear I must accord with Yancy on one point, Xannon; your desire for perfect accord among us is misguided.  I want half the pie.  Yancy wants me to have a third of the pie.  This is all there is to the world, and all there ever was.  If two monkeys want the same banana, in the end one will have it, and the other will cry morality.  Who gets to form the committee to decide the rules that will be used to determine what is 'fair'?  Whoever it is, got the banana.\"

\n

Yancy:  \"I wanted to give you a third of the pie, and you equate this to seizing the whole thing for myself?  Small wonder that you don't want to acknowledge the existence of morality—you don't want to acknowledge that anyone can be so much less of a jerk.\"

\n

Xannon:  \"You oversimplify the world, Zaire.  Banana-fights occur across thousands and perhaps millions of species, in the animal kingdom.  But if this were all there was, Homo sapiens would never have evolved moral intuitions.  Why would the human animal evolve to cry morality, if the cry had no effect?\"

\n

Zaire:  \"To make themselves feel better.\"

\n

Yancy:  \"Ha!  You fail at evolutionary biology.\"

\n

Xannon:  \"A murderer accosts a victim, in a dark alley; the murderer desires the victim to die, and the victim desires to live.  Is there nothing more to the universe than their conflict?  No, because if I happen along, I will side with the victim, and not with the murderer.  The victim's plea crosses the gap of persons, to me; it is not locked up inside the victim's own mind.  But the murderer cannot obtain my sympathy, nor incite me to help murder.  Morality crosses the gap between persons; you might not see it in a conflict between two people, but you would see it in a society.\"

\n

Yancy:  \"So you define morality as that which crosses the gap of persons?\"

\n

Xannon:  \"It seems to me that social arguments over disputed goals are how human moral intuitions arose, beyond the simple clash over bananas.  So that is how I define the term.\"

\n

Yancy:  \"Then I disagree.  If someone wants to murder me, and the two of us are alone, then I am still in the right and they are still in the wrong, even if no one else is present.\"

\n

Zaire:  \"And the murderer says, 'I am in the right, you are in the wrong'.  So what?\"

\n

Xannon:  \"How does your statement that you are in the right, and the murderer is in the wrong, impinge upon the universe—if there is no one else present to be persuaded?\"

\n

Yancy:  \"It licenses me to resist being murdered; which I might not do, if I thought that my desire to avoid being murdered was wrong, and the murderer's desire to kill me was right.  I can distinguish between things I merely want, and things that are right—though alas, I do not always live up to my own standards.  The murderer is blind to the morality, perhaps, but that doesn't change the morality.  And if we were both blind, the morality still would not change.\"

\n

Xannon:  \"Blind?  What is being seen, what sees it?\"

\n

Yancy:  \"You're trying to treat fairness as... I don't know, something like an array-mapped 2-place function that goes out and eats a list of human minds, and returns a list of what each person thinks is 'fair', and then averages it together.  The problem with this isn't just that different people could have different ideas about fairness.  It's not just that they could have different ideas about how to combine the results.  It's that it leads to infinite recursion outright—passing the recursive buck.  You want there to be some level on which everyone agrees, but at least some possible minds will disagree with any statement you make.\"

\n

Xannon:  \"Isn't the whole point of fairness to let people agree on a division, instead of fighting over it?\"

\n

Yancy:  \"What is fair is one question, and whether someone else accepts that this is fair is another question.  What is fair?  That's easy: an equal division of the pie is fair.  Anything else won't be fair no matter what kind of pretty arguments you put around it.  Even if I gave Zaire a sixth of my pie, that might be a voluntary division but it wouldn't be a fair division.  Let fairness be a simple and object-level procedure, instead of this infinite meta-recursion, and the buck will stop immediately.\"

\n

Zaire:  \"If the word 'fair' simply means 'equal division' then why not just say 'equal division' instead of this strange additional word, 'fair'?  You want the pie divided equally, I want half the pie for myself.  That's the whole fact of the matter; this word 'fair' is merely an attempt to get more of the pie for yourself.\"

\n

Xannon:  \"If that's the whole fact of the matter, why would anyone talk about 'fairness' in the first place, I wonder?\"

\n

Zaire:  \"Because they all share the same delusion.\"

\n

Yancy:  \"A delusion of what?  What is it that you are saying people think incorrectly the universe is like?\"

\n

Zaire:  \"I am under no obligation to describe other people's confusions.\"

\n

Yancy:  \"If you can't dissolve their confusion, how can you be sure they're confused?  But it seems clear enough to me that if the word fair is going to have any meaning at all, it has to finally add up to each of us getting one-third of the pie.\"

\n

Xannon:  \"How odd it is to have a procedure of which we are more sure of the result than the procedure itself.\"

\n

Zaire:  \"Speak for yourself.\"

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Moral Complexities\"

\n

Previous post: \"Created Already In Motion\"

" } }, { "_id": "SoMFAn2pTWR6GQFZz", "title": "I'd take it", "pageUrl": "https://www.lesswrong.com/posts/SoMFAn2pTWR6GQFZz/i-d-take-it", "postedAt": "2008-07-02T07:57:35.000Z", "baseScore": 8, "voteCount": 8, "commentCount": 53, "url": null, "contents": { "documentId": "SoMFAn2pTWR6GQFZz", "html": "

Out-of-context quote of the day:

"...although even $10 trillion isn't a huge amount of money..."

From Simon Johnson, Director of the IMF's Research Department, on "The Rise of Sovereign Wealth Funds".

\n\n\n\n

So if you had $10 trillion, what would you do with it?

" } }, { "_id": "CuSTqHgeK4CMpWYTe", "title": "Created Already In Motion", "pageUrl": "https://www.lesswrong.com/posts/CuSTqHgeK4CMpWYTe/created-already-in-motion", "postedAt": "2008-07-01T06:03:56.000Z", "baseScore": 95, "voteCount": 62, "commentCount": 25, "url": null, "contents": { "documentId": "CuSTqHgeK4CMpWYTe", "html": "

Followup to No Universally Compelling Arguments, Passing the Recursive Buck

\n

Lewis Carroll, who was also a mathematician, once wrote a short dialogue called What the Tortoise said to Achilles.  If you have not yet read this ancient classic, consider doing so now.

\n

The Tortoise offers Achilles a step of reasoning drawn from Euclid's First Proposition:

\n
\n

(A)  Things that are equal to the same are equal to each other.
(B)  The two sides of this Triangle are things that are equal to the same.
(Z)  The two sides of this Triangle are equal to each other.

\n
\n

Tortoise:  \"And if some reader had not yet accepted A and B as true, he might still accept the sequence as a valid one, I suppose?\"

\n

Achilles:   \"No doubt such a reader might exist.  He might say, 'I accept as true the Hypothetical Proposition that, if A and B be true, Z must be true; but, I don't accept A and B as true.'  Such a reader would do wisely in abandoning Euclid, and taking to football.\"

\n

Tortoise:  \"And might there not also be some reader who would say, 'I accept A and B as true, but I don't accept the Hypothetical'?\"

\n

\n

Achilles, unwisely, concedes this; and so asks the Tortoise to accept another proposition:

\n
\n

(C)  If A and B are true, Z must be true.

\n
\n

But, asks the Tortoise, suppose that he accepts A and B and C, but not Z?

\n

Then, says Achilles, he must ask the Tortoise to accept one more hypothetical:

\n
\n

(D)  If A and B and C are true, Z must be true.

\n
\n

Douglas Hofstadter paraphrased the argument some time later:

\n
\n

Achilles:  If you have [(A⋀B)→Z], and you also have (A⋀B), then surely you have Z.
Tortoise:  Oh!  You mean <{(A⋀B)⋀[(A⋀B)→Z]}→Z>, don't you?

\n
\n

As Hofstadter says, \"Whatever Achilles considers a rule of inference, the Tortoise immediately flattens into a mere string of the system.  If you use only the letters A, B, and Z, you will get a recursive pattern of longer and longer strings.\"

\n

By now you should recognize the anti-pattern Passing the Recursive Buck; and though the counterspell is sometimes hard to find, when found, it generally takes the form The Buck Stops Immediately.

\n

The Tortoise's mind needs the dynamic of adding Y to the belief pool when X and (X→Y) are previously in the belief pool.  If this dynamic is not present—a rock, for example, lacks it—then you can go on adding in X and (X→Y) and (X⋀(X→Y))→Y until the end of eternity, without ever getting to Y.
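
\n

A minimal sketch of the distinction, in Python (my own construction, not from the post; the function name apply_modus_ponens is hypothetical): the belief pool and the implication list are mere data, and the loop is the dynamic.  Without some such loop running outside the pool, adding more implication-strings changes nothing.

```python
# The pool and the implication list are data; this loop is the dynamic.
def apply_modus_ponens(pool, implications):
    '''Add a conclusion whenever all premises of some implication are in the pool.'''
    changed = True
    while changed:
        changed = False
        for premises, conclusion in implications:
            if conclusion not in pool and all(p in pool for p in premises):
                pool.add(conclusion)
                changed = True
    return pool

pool = {'A', 'B'}
implications = [
    (('A', 'B'), 'Z'),                     # (A and B) -> Z
    (('A', 'B', '(A and B) -> Z'), 'Z'),   # the Tortoise's C, stored as a mere string
]

# Note the second entry never fires: a rule written as data does not apply itself.
# A rock stores the same data but runs no dynamic, so it never reaches Z.
print(apply_modus_ponens(pool, implications))  # {'A', 'B', 'Z'}
```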

\n

The phrase that once came into my mind to describe this requirement, is that a mind must be created already in motion.  There is no argument so compelling that it will give dynamics to a static thing.  There is no computer program so persuasive that you can run it on a rock.

\n

And even if you have a mind that does carry out modus ponens, it is futile for it to have such beliefs as...

\n
\n

(A)  If a toddler is on the train tracks, then pulling them off is fuzzle.
(B)  There is a toddler on the train tracks.

\n
\n

...unless the mind also implements:

\n
\n

Dynamic:  When the belief pool contains \"X is fuzzle\", send X to the action system.

\n
\n

(Added:  Apparently this wasn't clear...  By \"dynamic\" I mean a property of a physically implemented cognitive system's development over time.  A \"dynamic\" is something that happens inside a cognitive system, not data that it stores in memory and manipulates.  Dynamics are the manipulations.  There is no way to write a dynamic on a piece of paper, because the paper will just lie there.  So the text immediately above, which says \"dynamic\", is not dynamic.  If I wanted the text to be dynamic and not just say \"dynamic\", I would have to write a Java applet.)

\n

Needless to say, having the belief...

\n
\n

(C)  If the belief pool contains \"X is fuzzle\", then \"send 'X' to the action system\" is fuzzle.

\n
\n

...won't help unless the mind already implements the behavior of translating hypothetical actions labeled 'fuzzle' into actual motor actions.

\n

By dint of careful arguments about the nature of cognitive systems, you might be able to prove...

\n
\n

(D)   A mind with a dynamic that sends plans labeled \"fuzzle\" to the action system, is more fuzzle than minds that don't.

\n
\n

...but that still won't help, unless the listening mind previously possessed the dynamic of swapping out its current source code for alternative source code that is believed to be more fuzzle.

\n

This is why you can't argue fuzzleness into a rock.

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"The Bedrock of Fairness\"

\n

Previous post: \"The Moral Void\"

" } }, { "_id": "K9JSM7d7bLJguMxEp", "title": "The Moral Void", "pageUrl": "https://www.lesswrong.com/posts/K9JSM7d7bLJguMxEp/the-moral-void", "postedAt": "2008-06-30T08:52:58.000Z", "baseScore": 79, "voteCount": 66, "commentCount": 111, "url": null, "contents": { "documentId": "K9JSM7d7bLJguMxEp", "html": "

Followup to What Would You Do Without Morality?, Something to Protect

\n

Once, discussing \"horrible job interview questions\" to ask candidates for a Friendly AI project, I suggested the following:

\n
\n

Would you kill babies if it was inherently the right thing to do?  Yes [] No []

\n

If \"no\", under what circumstances would you not do the right thing to do?   ___________

\n

If \"yes\", how inherently right would it have to be, for how many babies?     ___________

\n
\n

\n

Yesterday I asked, \"What would you do without morality?\"  There were numerous objections to the question, as well there should have been.  Nonetheless there is more than one kind of person who can benefit from being asked this question.  Let's say someone gravely declares, of some moral dilemma—say, a young man in Vichy France who must choose between caring for his mother and fighting for the Resistance—that there is no moral answer; both options are wrong and blamable; whoever faces the dilemma has had poor moral luck.  Fine, let's suppose this is the case: then when you cannot be innocent, justified, or praiseworthy, what will you choose anyway?

\n

Many interesting answers were given to my question, \"What would you do without morality?\".  But one kind of answer was notable by its absence:

\n

No one said, \"I would ask what kind of behavior pattern was likely to maximize my inclusive genetic fitness, and execute that.\"  Some misguided folk, not understanding evolutionary psychology, think that this must logically be the sum of morality.  But if there is no morality, there's no reason to do such a thing—if it's not \"moral\", why bother?

\n

You can probably see yourself pulling children off train tracks, even if it were not justified.  But maximizing inclusive genetic fitness?  If this isn't moral, why bother?  Who does it help?  It wouldn't even be much fun, all those egg or sperm donations.

\n

And this is something you could say of most philosophies that have morality as a great light in the sky that shines from outside people.  (To paraphrase Terry Pratchett.)  If you believe that the meaning of life is to play non-zero-sum games because this is a trend built into the very universe itself...

\n

Well, you might want to follow the corresponding ritual of reasoning about \"the global trend of the universe\" and implementing the result, so long as you believe it to be moral.  But if you suppose that the light is switched off, so that the global trends of the universe are no longer moral, then why bother caring about \"the global trend of the universe\" in your decisions?  If it's not right, that is.

\n

Whereas if there were a child stuck on the train tracks, you'd probably drag the kid off even if there were no moral justification for doing so.

\n

In 1966, the Israeli psychologist Georges Tamarin presented, to 1,066 schoolchildren ages 8-14, the Biblical story of Joshua's battle in Jericho:

\n
\n

\"Then they utterly destroyed all in the city, both men and women, young and old, oxen, sheep, and asses, with the edge of the sword...  And they burned the city with fire, and all within it; only the silver and gold, and the vessels of bronze and of iron, they put into the treasury of the house of the LORD.\"

\n
\n

After being presented with the Joshua story, the children were asked:

\n
\n

\"Do you think Joshua and the Israelites acted rightly or not?\"

\n
\n

66% of the children approved, 8% partially disapproved, and 26% totally disapproved of Joshua's actions.

\n

A control group of 168 children was presented with an isomorphic story about \"General Lin\" and a \"Chinese Kingdom 3,000 years ago\".  7% of this group approved, 18% partially disapproved, and 75% completely disapproved of General Lin.

\n

\"What a horrible thing it is, teaching religion to children,\" you say, \"giving them an off-switch for their morality that can be flipped just by saying the word 'God'.\" Indeed one of the saddest aspects of the whole religious fiasco is just how little it takes to flip people's moral off-switches.  As Hobbes once said, \"I don't know what's worse, the fact that everyone's got a price, or the fact that their price is so low.\"  You can give people a book, and tell them God wrote it, and that's enough to switch off their moralities; God doesn't even have to tell them in person.

\n

But are you sure you don't have a similar off-switch yourself?  They flip so easily—you might not even notice it happening.

\n

Leon Kass (of the President's Council on Bioethics) is glad to murder people so long as it's \"natural\", for example.  He wouldn't pull out a gun and shoot you, but he wants you to die of old age and he'd be happy to pass legislation to ensure it.

\n

And one of the non-obvious possibilities for such an off-switch, is \"morality\".

\n

If you do happen to think that there is a source of morality beyond human beings... and I hear from quite a lot of people who are happy to rhapsodize on how Their-Favorite-Morality is built into the very fabric of the universe... then what if that morality tells you to kill people?

\n

If you believe that there is any kind of stone tablet in the fabric of the universe, in the nature of reality, in the structure of logic—anywhere you care to put it—then what if you get a chance to read that stone tablet, and it turns out to say \"Pain Is Good\"?  What then?

\n

Maybe you should hope that morality isn't written into the structure of the universe.  What if the structure of the universe says to do something horrible?

\n

And if an external objective morality does say that the universe should occupy some horrifying state... let's not even ask what you're going to do about that.  No, instead I ask:  What would you have wished for the external objective morality to be instead?  What's the best news you could have gotten, reading that stone tablet?

\n

Go ahead.  Indulge your fantasy.  Would you want the stone tablet to say people should die of old age, or that people should live as long as they wanted?  If you could write the stone tablet yourself, what would it say?

\n

Maybe you should just do that?

\n

I mean... if an external objective morality tells you to kill people, why should you even listen?

\n

There is a courage that goes beyond even an atheist sacrificing their life and their hope of immortality.  It is the courage of a theist who goes against what they believe to be the Will of God, choosing eternal damnation and defying even morality in order to rescue a slave, or speak out against hell, or kill a murderer...  You don't get a chance to reveal that virtue without making fundamental mistakes about how the universe works, so it is not something to which a rationalist should aspire.  But it warms my heart that humans are capable of it.

\n

I have previously spoken of how, to achieve rationality, it is necessary to have some purpose so desperately important to you as to be more important than \"rationality\", so that you will not choose \"rationality\" over success.

\n

To learn the Way, you must be able to unlearn the Way; so you must be able to give up the Way; so there must be something dearer to you than the Way.  This is so in questions of truth, and in questions of strategy, and also in questions of morality.

\n

The \"moral void\" of which this post is titled, is not the terrifying abyss of utter meaningless.  Which for a bottomless pit is surprisingly shallow; what are you supposed to do about it besides wearing black makeup?

\n

No.  The void I'm talking about is a virtue which is nameless.

\n

 

\n

Part of The Metaethics Sequence

\n

Next post: \"Created Already In Motion\"

\n

Previous post: \"What Would You Do Without Morality?\"

" } }, { "_id": "iGH7FSrdoCXa5AHGs", "title": "What Would You Do Without Morality?", "pageUrl": "https://www.lesswrong.com/posts/iGH7FSrdoCXa5AHGs/what-would-you-do-without-morality", "postedAt": "2008-06-29T05:07:07.000Z", "baseScore": 77, "voteCount": 56, "commentCount": 186, "url": null, "contents": { "documentId": "iGH7FSrdoCXa5AHGs", "html": "

To those who say \"Nothing is real,\" I once replied, \"That's great, but how does the nothing work?\"

\n

Suppose you learned, suddenly and definitively, that nothing is moral and nothing is right; that everything is permissible and nothing is forbidden.

\n

Devastating news, to be sure—and no, I am not telling you this in real life.  But suppose I did tell it to you.  Suppose that, whatever you think is the basis of your moral philosophy, I convincingly tore it apart, and moreover showed you that nothing could fill its place.  Suppose I proved that all utilities equaled zero.

\n

I know that Your-Moral-Philosophy is as true and undisprovable as 2 + 2 = 4. But still, I ask that you do your best to perform the thought experiment, and concretely envision the possibilities even if they seem painful, or pointless, or logically incapable of any good reply.

\n

Would you still tip cabdrivers?  Would you cheat on your Significant Other?  If a child lay fainted on the train tracks, would you still drag them off?

\n

Would you still eat the same kinds of foods—or would you only eat the cheapest food, since there's no reason you should have fun—or would you eat very expensive food, since there's no reason you should save money for tomorrow?

\n

Would you wear black and write gloomy poetry and denounce all altruists as fools?  But there's no reason you should do that—it's just a cached thought.

\n

Would you stay in bed because there was no reason to get up?  What about when you finally got hungry and stumbled into the kitchen—what would you do after you were done eating?

\n

Would you go on reading Overcoming Bias, and if not, what would you read instead?  Would you still try to be rational, and if not, what would you think instead?

\n

Close your eyes, take as long as necessary to answer:

\n

What would you do, if nothing were right?

" } }, { "_id": "eDpPnT7wdBwWPGvo5", "title": "2-Place and 1-Place Words", "pageUrl": "https://www.lesswrong.com/posts/eDpPnT7wdBwWPGvo5/2-place-and-1-place-words", "postedAt": "2008-06-27T07:39:20.000Z", "baseScore": 130, "voteCount": 94, "commentCount": 38, "url": null, "contents": { "documentId": "eDpPnT7wdBwWPGvo5", "html": "

\"Monsterwithgirl_2\"

\n

I have previously spoken of the ancient, pulp-era magazine covers that showed a bug-eyed monster carrying off a girl in a torn dress; and about how people think as if sexiness is an inherent property of a sexy entity, without dependence on the admirer.

\n

\"Of course the bug-eyed monster will prefer human females to its own kind,\" says the artist (who we'll call Fred); \"it can see that human females have soft, pleasant skin instead of slimy scales.  It may be an alien, but it's not stupid—why are you expecting it to make such a basic mistake about sexiness?\"

\n

What is Fred's error?  It is treating a function of 2 arguments (\"2-place function\"):

\n
\n

Sexiness: Admirer, Entity → [0, ∞)

\n
\n

As though it were a function of 1 argument (\"1-place function\"):

\n
\n

Sexiness: Entity → [0, ∞)

\n
\n

If Sexiness is treated as a function that accepts only one Entity as its argument, then of course Sexiness will appear to depend only on the Entity, with nothing else being relevant.

\n

When you think about a two-place function as though it were a one-place function, you end up with a Variable Question Fallacy / Mind Projection Fallacy.  Like trying to determine whether a building is intrinsically on the left or on the right side of the road, independent of anyone's travel direction.

\n

\n

An alternative and equally valid standpoint is that \"sexiness\" does refer to a one-place function—but each speaker uses a different one-place function to decide who to kidnap and ravish.  Who says that just because Fred, the artist, and Bloogah, the bug-eyed monster, both use the word \"sexy\", they must mean the same thing by it?

\n

If you take this viewpoint, there is no paradox in speaking of some woman intrinsically having 5 units of Fred::Sexiness.  All onlookers can agree on this fact, once Fred::Sexiness has been specified in terms of curves, skin texture, clothing, status cues etc.  This specification need make no mention of Fred, only the woman to be evaluated.

\n

It so happens that Fred, himself, uses this algorithm to select flirtation targets.  But that doesn't mean the algorithm itself has to mention Fred.  So Fred's Sexiness function really is a function of one object—the woman—on this view.  I called it Fred::Sexiness, but remember that this name refers to a function that is being described independently of Fred.  Maybe it would be better to write:

\n

Fred::Sexiness == Sexiness_20934

\n

It is an empirical fact about Fred that he uses the function Sexiness_20934 to evaluate potential mates.  Perhaps John uses exactly the same algorithm; it doesn't matter where it comes from once we have it.

\n

And similarly, the same woman has only 0.01 units of Sexiness_72546, whereas a slime mold has 3 units of Sexiness_72546.  It happens to be an empirical fact that Bloogah uses Sexiness_72546 to decide who to kidnap; that is, Bloogah::Sexiness names the fixed Bloogah-independent mathematical object that is the function Sexiness_72546.

\n

Once we say that the woman has 0.01 units of Sexiness_72546 and 5 units of Sexiness_20934, all observers can agree on this without paradox.

\n

And the 2-place and 1-place views can be unified using the concept of \"currying\", named after the mathematician Haskell Curry.  Currying is a technique allowed in certain programming languages, where e.g. instead of writing

\n
\n

x = plus(2, 3)    (x = 5)

\n
\n

you can also write

\n
\n

y = plus(2)       (y is now a \"curried\" form of the function plus, which has eaten a 2)
x = y(3)          (x = 5)
z = y(7)          (z = 9)

\n
\n

So plus is a 2-place function, but currying plus—letting it eat only one of its two required arguments—turns it into a 1-place function that adds 2 to any input.  (Similarly, you could start with a 7-place function, feed it 4 arguments, and the result would be a 3-place function, etc.)

\n

A true purist would insist that all functions should be viewed, by definition, as taking exactly 1 argument.  On this view, plus accepts 1 numeric input, and outputs a new function; and this new function has 1 numeric input and finally outputs a number.  On this view, when we write plus(2, 3) we are really computing plus(2) to get a function that adds 2 to any input, and then applying the result to 3.  A programmer would write this as:

\n
\n

plus: int → (int → int)

\n
\n

This says that plus takes an int as an argument, and returns a function of type int → int.
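
To make this concrete, here is a minimal sketch in Python (an illustrative choice of language; the snippets above are pseudocode), currying plus with a closure:

def plus(a):
    def add_a(b):           # plus: int → (int → int)
        return a + b
    return add_a

y = plus(2)                 # y is the curried form of plus, which has eaten a 2
print(y(3))                 # 5
print(y(7))                 # 9
print(plus(2)(3))           # the purist's reading of plus(2, 3): also 5

(Python's functools.partial would do the same job, if you started from an ordinary 2-argument function.)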

\n

Translating the metaphor back into the human use of words, we could imagine that \"sexiness\" starts by eating an Admirer, and spits out the fixed mathematical object that describes how the Admirer currently evaluates pulchritude.  It is an empirical fact about the Admirer that their intuitions of desirability are computed in a way that is isomorphic to this mathematical function.

\n

Then the mathematical object spit out by currying Sexiness(Admirer) can be applied to the Woman.  If the Admirer was originally Fred, Sexiness(Fred) will first return Sexiness_20934.  We can then say it is an empirical fact about the Woman, independently of Fred, that Sexiness_20934(Woman) = 5.
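
A toy sketch of that curried view in Python; every concrete detail here (the feature dictionary, the thresholds, the 5.0) is invented for illustration:

def sexiness_20934(woman):
    # A fixed mathematical object: it mentions only the woman, never Fred.
    return 5.0 if woman['skin'] == 'soft' else 0.01

def sexiness_72546(entity):
    # Bloogah's empirical evaluator: slimy scales score high.
    return 3.0 if entity['texture'] == 'slimy' else 0.01

def sexiness(admirer):
    # The 2-place concept, curried: eat an Admirer, return a fixed 1-place function.
    return {'Fred': sexiness_20934, 'Bloogah': sexiness_72546}[admirer]

woman = {'skin': 'soft', 'texture': 'smooth'}
print(sexiness('Fred')(woman))      # 5.0  = Sexiness_20934(Woman)
print(sexiness('Bloogah')(woman))   # 0.01 = Sexiness_72546(Woman)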

\n

In Hilary Putnam's \"Twin Earth\" thought experiment, there was a tremendous philosophical brouhaha over whether it makes sense to postulate a Twin Earth which is just like our own, except that instead of water being H2O, water is a different transparent flowing substance, XYZ.  And furthermore, set the time of the thought experiment a few centuries ago, so in neither our Earth nor the Twin Earth does anyone know how to test the alternative hypotheses of H2O vs. XYZ.  Does the word \"water\" mean the same thing in that world, as in this one?

\n

Some said, \"Yes, because when an Earth person and a Twin Earth person utter the word 'water', they have the same sensory test in mind.\"

\n

Some said, \"No, because 'water' in our Earth means H20 and 'water' in the Twin Earth means XYZ.\"

\n

If you think of \"water\" as a concept that begins by eating a world to find out the empirical true nature of that transparent flowing stuff, and returns a new fixed concept Water_42 or H2O, then this world-eating concept is the same in our Earth and the Twin Earth; it just returns different answers in different places.

\n

If you think of \"water\" as meaning H2O, then the concept does nothing different when we transport it between worlds, and the Twin Earth contains no H2O.

\n

And of course there is no point in arguing over what the sound of the syllables \"wa-ter\" really means.

\n

So should you pick one definition and use it consistently?  But it's not that easy to save yourself from confusion.  You have to train yourself to be deliberately aware of the distinction between the curried and uncurried forms of concepts.

\n

When you take the uncurried water concept and apply it in a different world, it is the same concept but it refers to a different thing; that is, we are applying a constant world-eating function to a different world and obtaining a different return value.  In the Twin Earth, XYZ is \"water\" and H2O is not; in our Earth, H2O is \"water\" and XYZ is not.

\n

On the other hand, if you take \"water\" to refer to what the prior thinker would call \"the result of applying 'water' to our Earth\", then in the Twin Earth, XYZ is not water and H2O is.
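
The two readings fit in a few lines of Python; a hypothetical sketch, with the worlds and their chemistry as stand-in values:

def water_uncurried(world):
    # World-eating concept: return whatever fills the 'water' role there.
    return world['transparent_flowing_stuff']

earth      = {'transparent_flowing_stuff': 'H2O'}
twin_earth = {'transparent_flowing_stuff': 'XYZ'}

print(water_uncurried(earth))        # H2O
print(water_uncurried(twin_earth))   # XYZ: same concept, different referent

water_curried = water_uncurried(earth)   # 'water' pre-applied to our Earth
print(water_curried)                     # H2O, in every world we carry it to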

\n

The whole confusingness of the subsequent philosophical debate, rested on a tendency to instinctively curry concepts or instinctively uncurry them.

\n

Similarly it takes an extra step for Fred to realize that other agents, like the Bug-Eyed-Monster agent, will choose kidnappees for ravishing based on Sexiness_BEM(Woman), not Sexiness_Fred(Woman).  To do this, Fred must consciously re-envision Sexiness as a function with two arguments.  All Fred's brain does by instinct is evaluate Woman.sexiness—that is, Sexiness_Fred(Woman); but it's simply labeled Woman.sexiness.

\n

The fixed mathematical function Sexiness_20934 makes no mention of Fred or the BEM, only women, so Fred does not instinctively see why the BEM would evaluate \"sexiness\" any differently.  And indeed the BEM would not evaluate Sexiness_20934 any differently, if for some odd reason it cared about the result of that particular function; but it is an empirical fact about the BEM that it uses a different function to decide who to kidnap.

\n

If you're wondering as to the point of this analysis, we shall need it later in order to Taboo such confusing words as \"objective\", \"subjective\", and \"arbitrary\".

" } }, { "_id": "PtoQdG7E8MxYJrigu", "title": "No Universally Compelling Arguments", "pageUrl": "https://www.lesswrong.com/posts/PtoQdG7E8MxYJrigu/no-universally-compelling-arguments", "postedAt": "2008-06-26T08:29:02.000Z", "baseScore": 92, "voteCount": 72, "commentCount": 60, "url": null, "contents": { "documentId": "PtoQdG7E8MxYJrigu", "html": "

What is so terrifying about the idea that not every possible mind might agree with us, even in principle?

\n

For some folks, nothing—it doesn't bother them in the slightest. And for some of those folks, the reason it doesn't bother them is that they don't have strong intuitions about standards and truths that go beyond personal whims.  If they say the sky is blue, or that murder is wrong, that's just their personal opinion; and that someone else might have a different opinion doesn't surprise them.

\n

For other folks, a disagreement that persists even in principle is something they can't accept.  And for some of those folks, the reason it bothers them, is that it seems to them that if you allow that some people cannot be persuaded even in principle that the sky is blue, then you're conceding that \"the sky is blue\" is merely an arbitrary personal opinion.

\n

Yesterday, I proposed that you should resist the temptation to generalize over all of mind design space.  If we restrict ourselves to minds specifiable in a trillion bits or less, then each universal generalization \"All minds m: X(m)\" has two to the trillionth chances to be false, while each existential generalization \"Exists mind m: X(m)\" has two to the trillionth chances to be true.

\n

This would seem to argue that for every argument A, howsoever convincing it may seem to us, there exists at least one possible mind that doesn't buy it.

\n

And the surprise and/or horror of this prospect (for some) has a great deal to do, I think, with the intuition of the ghost-in-the-machine—a ghost with some irreducible core that any truly valid argument will convince.

\n

\n

I have previously spoken of the intuition whereby people map programming a computer, onto instructing a human servant, so that the computer might rebel against its code—or perhaps look over the code, decide it is not reasonable, and hand it back.

\n

If there were a ghost in the machine and the ghost contained an irreducible core of reasonableness, above which any mere code was only a suggestion, then there might be universal arguments.  Even if the ghost was initially handed code-suggestions that contradicted the Universal Argument, then when we finally did expose the ghost to the Universal Argument—or the ghost could discover the Universal Argument on its own, that's also a popular concept—the ghost would just override its own, mistaken source code.

\n

But as the student programmer once said, \"I get the feeling that the computer just skips over all the comments.\"  The code is not given to the AI; the code is the AI.

\n

If you switch to the physical perspective, then the notion of a Universal Argument seems noticeably unphysical.  If there's a physical system that at time T, after being exposed to argument E, does X, then there ought to be another physical system that at time T, after being exposed to argument E, does Y.  Any thought has to be implemented somewhere, in a physical system; any belief, any conclusion, any decision, any motor output.  For every lawful causal system that zigs at a set of points, you should be able to specify another causal system that lawfully zags at the same points.

\n

Let's say there's a mind with a transistor that outputs +3 volts at time T, indicating that it has just assented to some persuasive argument.  Then we can build a highly similar physical cognitive system with a tiny little trapdoor underneath the transistor containing a little grey man who climbs out at time T and sets that transistor's output to −3 volts, indicating non-assent.  Nothing acausal about that; the little grey man is there because we built him in.  The notion of an argument that convinces any mind seems to involve a little blue woman who was never built into the system, who climbs out of literally nowhere, and strangles the little grey man, because that transistor has just got to output +3 volts:  It's such a compelling argument, you see.

\n

But compulsion is not a property of arguments, it is a property of minds that process arguments.

\n

So the reason I'm arguing against the ghost, isn't just to make the point that (1) Friendly AI has to be explicitly programmed and (2) the laws of physics do not forbid Friendly AI. (Though of course I take a certain interest in establishing this.)

\n

I also wish to establish the notion of a mind as a causal, lawful, physical system in which there is no irreducible central ghost that looks over the neurons / code and decides whether they are good suggestions.

\n

(There is a concept in Friendly AI of deliberately programming an FAI to review its own source code and possibly hand it back to the programmers.  But the mind that reviews is not irreducible, it is just the mind that you created.  The FAI is renormalizing itself however it was designed to do so; there is nothing acausal reaching in from outside.  A bootstrap, not a skyhook.)

\n

All this echoes back to the discussion, a good deal earlier, of a Bayesian's \"arbitrary\" priors.  If you show me one Bayesian who draws 4 red balls and 1 white ball from a barrel, and who assigns probability 5/7 to obtaining a red ball on the next occasion (by Laplace's Rule of Succession), then I can show you another mind which obeys Bayes's Rule to conclude a 2/7 probability of obtaining red on the next occasion—corresponding to a different prior belief about the barrel, but, perhaps, a less \"reasonable\" one.
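
Both minds can be written down explicitly; a minimal sketch, where the second mind's Beta(2, 14) prior is a number chosen here just to make the arithmetic land on 2/7:

from fractions import Fraction

def posterior_red(reds, whites, alpha, beta):
    # Mean of Beta(alpha + reds, beta + whites): P(next draw is red).
    return Fraction(alpha + reds, alpha + beta + reds + whites)

print(posterior_red(4, 1, 1, 1))    # 5/7 -- Laplace's Rule, i.e. the uniform Beta(1, 1) prior
print(posterior_red(4, 1, 2, 14))   # 2/7 -- same Bayes's Rule, different prior about the barrel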

\n

Many philosophers are convinced that because you can in-principle construct a prior that updates to any given conclusion on a stream of evidence, therefore, Bayesian reasoning must be \"arbitrary\", and the whole schema of Bayesianism flawed, because it relies on \"unjustifiable\" assumptions, and indeed \"unscientific\", because you cannot force any possible journal editor in mindspace to agree with you.

\n

And this (I then replied) relies on the notion that by unwinding all arguments and their justifications, you can obtain an ideal philosophy student of perfect emptiness, to be convinced by a line of reasoning that begins from absolutely no assumptions.

\n

But who is this ideal philosopher of perfect emptiness?  Why, it is just the irreducible core of the ghost!

\n

And that is why (I went on to say) the result of trying to remove all assumptions from a mind, and unwind to the perfect absence of any prior, is not an ideal philosopher of perfect emptiness, but a rock.  What is left of a mind after you remove the source code?  Not the ghost who looks over the source code, but simply... no ghost.

\n

So—and I shall take up this theme again later—wherever you are to locate your notions of validity or worth or rationality or justification or even objectivity, it cannot rely on an argument that is universally compelling to all physically possible minds.

\n

Nor can you ground validity in a sequence of justifications that, beginning from nothing, persuades a perfect emptiness.

\n

Oh, there might be argument sequences that would compel any neurologically intact human—like the argument I use to make people let the AI out of the box¹—but that is hardly the same thing from a philosophical perspective.

\n

The first great failure of those who try to consider Friendly AI, is the One Great Moral Principle That Is All We Need To Program—aka the fake utility function—and of this I have already spoken.

\n

But the even worse failure is the One Great Moral Principle We Don't Even Need To Program Because Any AI Must Inevitably Conclude It.  This notion exerts a terrifying unhealthy fascination on those who spontaneously reinvent it; they dream of commands that no sufficiently advanced mind can disobey.  The gods themselves will proclaim the rightness of their philosophy!  (E.g. John C. Wright, Marc Geddes.)

\n

There is also a less severe version of the failure, where the one does not declare the One True Morality.  Rather the one hopes for an AI created perfectly free, unconstrained by flawed humans desiring slaves, so that the AI may arrive at virtue of its own accord—virtue undreamed-of perhaps by the speaker, who confesses themselves too flawed to teach an AI.  (E.g. John K Clark, Richard Hollerith?, Eliezer1996.) This is a less tainted motive than the dream of absolute command. But though this dream arises from virtue rather than vice, it is still based on a flawed understanding of freedom, and will not actually work in real life.  Of this, more to follow, of course.

\n

John C. Wright, who was previously writing a very nice transhumanist trilogy (first book: The Golden Age) inserted a huge Author Filibuster in the middle of his climactic third book, describing in tens of pages his Universal Morality That Must Persuade Any AI.  I don't know if anything happened after that, because I stopped reading.  And then Wright converted to Christianity—yes, seriously.  So you really don't want to fall into this trap!

\n
\n

Footnote 1: Just kidding.

" } }, { "_id": "tnWRXkcDi5Tw9rzXw", "title": "The Design Space of Minds-In-General", "pageUrl": "https://www.lesswrong.com/posts/tnWRXkcDi5Tw9rzXw/the-design-space-of-minds-in-general", "postedAt": "2008-06-25T06:37:36.000Z", "baseScore": 46, "voteCount": 50, "commentCount": 85, "url": null, "contents": { "documentId": "tnWRXkcDi5Tw9rzXw", "html": "

People ask me, \"What will Artificial Intelligences be like?  What will they do?  Tell us your amazing story about the future.\"

\n

And lo, I say unto them, \"You have asked me a trick question.\"

\n

ATP synthase is a molecular machine - one of three known occasions when evolution has invented the freely rotating wheel - which is essentially the same in animal mitochondria, plant chloroplasts, and bacteria.  ATP synthase has not changed significantly since the rise of eukaryotic life two billion years ago.  It is something we all have in common - thanks to the way that evolution strongly conserves certain genes; once many other genes depend on a gene, a mutation will tend to break all the dependencies.

\n

Any two AI designs might be less similar to each other than you are to a petunia.

Asking what \"AIs\" will do is a trick question because it implies that all AIs form a natural class.  Humans do form a natural class because we all share the same brain architecture.  But when you say \"Artificial Intelligence\", you are referring to a vastly larger space of possibilities than when you say \"human\".  When people talk about \"AIs\" we are really talking about minds-in-general, or optimization processes in general.  Having a word for \"AI\" is like having a word for everything that isn't a duck.

\n

Imagine a map of mind design space... this is one of my standard diagrams...

\n

\"Mindspace_2\"

\n\n

All humans, of course, fit into a tiny little dot - as a sexually reproducing species, we can't be too different from one another.

\n\n

This tiny dot belongs to a wider ellipse, the space of transhuman mind designs - things that might be smarter than us, or much smarter than us, but which in some sense would still be people as we understand people.

\n\n

This transhuman ellipse is within a still wider volume, the space of posthuman minds, which is everything that a transhuman might grow up into.

\n\n

And then the rest of the sphere is the space of minds-in-general, including possible Artificial Intelligences so odd that they aren't even posthuman.

\n

But wait - natural selection designs complex artifacts and selects among complex strategies.  So where is natural selection on this map?

\n\n

So this entire map really floats in a still vaster space, the space of optimization processes.  At the bottom of this vaster space, below even humans, is natural selection as it first began in some tidal pool: mutate, replicate, and sometimes die, no sex.

\n

Are there any powerful optimization processes, with strength comparable to a human civilization or even a self-improving AI, which we would not recognize as minds?  Arguably Marcus Hutter's AIXI should go in this category: for a mind of infinite power, it's awfully stupid - poor thing can't even recognize itself in a mirror.  But that is a topic for another time.

\n

My primary moral is to resist the temptation to generalize over all of mind design space.

\n\n

If we focus on the bounded subspace of mind design space which contains all those minds whose makeup can be specified in a trillion bits or less, then every universal generalization that you make has two to the trillionth power chances to be falsified.

\n\n

Conversely, every existential generalization - \"there exists at least one mind such that X\" - has two to the trillionth power chances to be true.

\n\n

So you want to resist the temptation to say either that all minds do something, or that no minds do something.
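
To see how lopsided that arithmetic is, here is a back-of-the-envelope sketch in Python; the one-in-a-googol violation rate is an arbitrary assumed number:

import math

BITS = 10**12                          # minds specifiable in a trillion bits or less
log10_minds = BITS * math.log10(2)     # log10 of 2**(10**12) possible specifications

log10_p_violation = -100               # assume one mind in a googol breaks the rule
log10_expected_violators = log10_minds + log10_p_violation

print(log10_minds)                     # ≈ 3.01e11
print(log10_expected_violators)        # still ≈ 3.01e11: astronomically many violators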

\n

The main reason you could find yourself thinking that you know what a fully generic mind will (won't) do, is if you put yourself in that mind's shoes - imagine what you would do in that mind's place - and get back a generally wrong, anthropomorphic answer.  (Albeit that it is true in at least one case, since you are yourself an example.)  Or if you imagine a mind doing something, and then imagining the reasons you wouldn't do it - so that you imagine that a mind of that type can't exist, that the ghost in the machine will look over the corresponding source code and hand it back.

\n\n

Somewhere in mind design space is at least one mind with almost any kind of logically consistent property you care to imagine.

\n\n

And this is important because it emphasizes the need to discuss what happens, lawfully, and why, as a causal result of a mind's particular constituent makeup; somewhere in mind design space is a mind that does it differently.

\n\n

Of course you could always say that anything which doesn't do it your way, is \"by definition\" not a mind; after all, it's obviously stupid.  I've seen people try that one too.

\n" } }, { "_id": "Cyj6wQLW6SeF6aGLy", "title": "The Psychological Unity of Humankind", "pageUrl": "https://www.lesswrong.com/posts/Cyj6wQLW6SeF6aGLy/the-psychological-unity-of-humankind", "postedAt": "2008-06-24T07:12:46.000Z", "baseScore": 62, "voteCount": 63, "commentCount": 23, "url": null, "contents": { "documentId": "Cyj6wQLW6SeF6aGLy", "html": "

Followup to: Evolutions Are Stupid (But Work Anyway), Evolutionary Psychology

\n

Biological organisms in general, and human brains particularly, contain complex adaptations; adaptations which involve many genes working in concert. Complex adaptations must evolve incrementally, gene by gene.  If gene B depends on gene A to produce its effect, then gene A has to become nearly universal in the gene pool before there's a substantial selection pressure in favor of gene B.

\n

A fur coat isn't an evolutionary advantage unless the environment reliably throws cold weather at you.  And other genes are also part of the environment; they are the genetic environment.  If gene B depends on gene A, then gene B isn't a significant advantage unless gene A is reliably part of the genetic environment.

\n

Let's say that you have a complex adaptation with six interdependent parts, and that each of the six genes is independently at ten percent frequency in the population.  The chance of assembling a whole working adaptation is literally a million to one; and the average fitness of the genes is tiny, and they will not increase in frequency.
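
As a sketch of the arithmetic (the ten percent frequency is the text's number; independence of the six genes is the stated assumption):

FREQ = 0.1         # each of the six genes independently at ten percent frequency
N_GENES = 6

p_complete = FREQ ** N_GENES
print(p_complete)                  # ≈ 1e-06: literally a million to one

# A carrier of any one gene sees the payoff only if the other five show up too,
# so the average fitness benefit per gene is tiny.
p_payoff_given_one_gene = FREQ ** (N_GENES - 1)
print(p_payoff_given_one_gene)     # ≈ 1e-05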

\n

In a sexually reproducing species, complex adaptations are necessarily universal.

\n

\n

One bird may have slightly smoother feathers than another, but they will both have wings.  A single mutation can be possessed by some lucky members of a species, and not by others - but single mutations don't correspond to the sort of complex, powerful machinery that underlies the potency of biology. By the time an adaptation gets to be really sophisticated with dozens of genes supporting its highly refined activity, every member of the species has some version of it - barring single mutations that knock out the whole complex.

\n

So you can't have the X-Men.  You can't have \"mutants\" running around with highly developed machinery that most of the human species doesn't have.  And no, extra-powerful radiation does not produce extra-potent mutations, that's not how it works.

\n

Again by the nature of sexual recombination, you're very unlikely to see two complexly different adaptations competing in the gene pool.  Two individual alleles may compete.  But if you somehow had two different complex adaptations built out of many non-universal alleles, they would usually assemble in scrambled form.

\n

So you can't have New Humans and Old Humans either, contrary to certain science fiction books that I always found rather disturbing.

\n

This is likewise the core truth of biology that justifies my claim that Einstein must have had very nearly the same brain design as a village idiot (presuming the village idiot does not have any actual knockouts).  There is simply no room in reality for Einstein to be a Homo novis.

\n

Maybe Einstein got really lucky and had a dozen not-too-uncommon kinds of smoother feathers on his wings, and they happened to work well together.  And then only half the parts, on average, got passed on to each of his kids.  So it goes.

\n

\"Natural selection, while feeding on variation, uses it up,\" the saying goes.  Natural selection takes place when you've got different alleles in the gene pool competing, but in a few hundred generations one allele wins, and you don't have competition at that allele any more, unless a new mutation happens to come along.

\n

And if new genes come along that depend on the now-universal gene, that will tend to lock it in place.  If A rises to universality, and then B, C, and D come along that depend on A, any A' mutation that would be an improvement on A in isolation, may break B, C, or D and lose the benefit of those genes.  Genes on which other genes depend, tend to get frozen in place.  Some human developmental genes, that control the action of many other genes during embryonic development, have identifiable analogues in fruit flies.

\n

You might think of natural selection at any given time, as a thin froth of variation frantically churning above a deep, still pool of universality.

\n

And all this which I have said, is also true of the complex adaptations making up the human brain.

\n

This gives rise to a rule in evolutionary psychology called \"the psychological unity of humankind\".

\n

Donald E. Brown's list of human universals is a list of psychological properties which are found so commonly that anthropologists don't report them.  If a newly discovered tribe turns out to have a sense of humor, tell stories, perform marriage rituals, make promises, keep secrets, and become sexually jealous... well, it doesn't really seem worth reporting any more.  You might record the specific tales they tell.  But that they tell stories doesn't seem any more surprising than their breathing oxygen.

\n

In every known culture, humans seem to experience joy, sadness, fear, disgust, anger, and surprise. In every known culture, these emotions are indicated by the same facial expressions.

\n

This may seem too natural to be worth mentioning, but try to take a step back and see it as a startling confirmation of evolutionary biology.  You've got complex neural wiring that controls the facial muscles, and even more complex neural wiring that implements the emotions themselves.  The facial expressions, at least, would seem to be somewhat arbitrary - not forced to be what they are by any obvious selection pressure.  But no known human tribe has been reproductively isolated long enough to stop smiling.

\n

When something is universal enough in our everyday lives, we take it for granted; we assume it without thought, without deliberation.  We don't ask whether it will be there - we just act as if it will be. When you enter a new room, do you check it for oxygen?  When you meet another intelligent mind, do you ask whether it might not have an emotion of joy?

\n

Let's go back to biology for a moment.  What if, somehow, you had two different adaptations which both only assembled on the presence, or alternatively the absence, of some particular developmental gene?  Then the question becomes:  Why would the developmental gene itself persist in a polymorphic state?  Why wouldn't the better adaptation win - rather than both adaptations persisting long enough to become complex?

\n

So a species can have different males and females, but that's only because neither the males nor the females ever \"win\" and drive the alternative to extinction.

\n

This creates the single allowed exception to the general rule about the psychological unity of humankind: you can postulate different emotional makeups for men and women in cases where there exist opposed selection pressures for the two sexes.  Note, however, that in the absence of actually opposed selection pressures, the species as a whole will get dragged along even by selection pressure on a single sex.  This is why males have nipples; it's not a selective disadvantage.

\n

I believe it was Larry Niven who suggested that the chief experience human beings have with alien intelligence is their encounters with the opposite sex.

\n

This doesn't seem to be nearly enough experience, judging by Hollywood scriptwriters who depict AIs that are ordinarily cool and collected and repressed, until they are put under sufficient stress that they get angry and show the corresponding standard facial expression.

\n

No, the only really alien intelligence on this planet is natural selection, of which I have already spoken... for exactly this reason, that it gives you true experience of the Alien.  Evolution knows no joy and no anger, and it has no facial expressions; yet it is nonetheless capable of creating complex machinery and complex strategies.  It does not work like you do.

\n

If you want a real alien to gawk at, look at the other Powerful Optimization Process.

\n

This vision of the alien, conveys how alike humans truly are - what it means that everyone has a prefrontal cortex, everyone has a cerebellum, everyone has an amygdala, everyone has neurons that run at O(20Hz), everyone plans using abstractions.

\n

Having been born of sexuality, we must all be very nearly clones.

" } }, { "_id": "HFTn3bAT6uXSNwv4m", "title": "Optimization and the Singularity", "pageUrl": "https://www.lesswrong.com/posts/HFTn3bAT6uXSNwv4m/optimization-and-the-singularity", "postedAt": "2008-06-23T05:55:35.000Z", "baseScore": 41, "voteCount": 30, "commentCount": 21, "url": null, "contents": { "documentId": "HFTn3bAT6uXSNwv4m", "html": "

Lest anyone get the wrong impression, I'm juggling multiple balls right now and can't give the latest Singularity debate as much attention as it deserves.  But lest I annoy my esteemed co-blogger, here is a down payment on my views of the Singularity - needless to say, all this is coming way out of order in the posting sequence, but here goes...

\n\n

Among the topics I haven't dealt with yet, and will have to introduce here very quickly, is the notion of an optimization process.  Roughly, this is the idea that your power as a mind is your ability to hit small targets in a large search space - this can be either the space of possible futures (planning) or the space of possible designs (invention).  Suppose you have a car, and suppose we already know that your preferences involve travel.  Now suppose that you take all the parts in the car, or all the atoms, and jumble them up at random.  It's very unlikely that you'll end up with a travel-artifact at all, even so much as a wheeled cart; let alone a travel-artifact that ranks as high in your preferences as the original car.  So, relative to your preference ordering, the car is an extremely improbable artifact; the power of an optimization process is that it can produce this kind of improbability.
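
A minimal sketch of that idea in Python, with a 40-bit \"design space\" and a toy preference ordering standing in for the car (all numbers here are invented for illustration):

import random
random.seed(0)

N = 40
def score(bits):                   # toy preference ordering: more 1-bits = better design
    return sum(bits)

TARGET = 36                        # the small target: the top sliver of the space

# Jumbling the parts at random almost never hits the target...
samples = 200_000
hits = sum(score([random.randint(0, 1) for _ in range(N)]) >= TARGET
           for _ in range(samples))
print(hits / samples)              # ≈ 0 (the exact rate is about 1e-7)

# ...but even a trivial optimizer (keep any improving one-bit flip) gets there fast.
bits = [random.randint(0, 1) for _ in range(N)]
steps = 0
while score(bits) < TARGET:
    i = random.randrange(N)
    if bits[i] == 0:               # keep the flip only if it improves the score
        bits[i] = 1
    steps += 1
print(steps)                       # typically a few dozen steps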

\n\n

You can view both intelligence and natural selection as special cases of optimization:  Processes that hit, in a large search space, very small targets defined by implicit preferences.  Natural selection prefers more efficient replicators.  Human intelligences have more complex preferences.  Neither evolution nor humans have consistent utility functions, so viewing them as "optimization processes" is understood to be an approximation.  You're trying to get at the sort of work being done, not claim that humans or evolution do this work perfectly.

\n\n

This is how I see the story of life and intelligence - as a story of improbably good designs being produced by optimization processes.  The "improbability" here is improbability relative to a random selection from the design space, not improbability in an absolute sense - if you have an optimization process around, then "improbably" good designs become probable.

\n\n

Obviously I'm skipping over a lot of background material here; but you can already see the genesis of a clash of intuitions between myself and Robin.  Robin's looking at populations and resource utilization.  I'm looking at production of improbable patterns.

Looking over the history of optimization on Earth up until now, the first step is to conceptually separate the meta level from the object level - separate the structure of optimization from that which is optimized.

\n\n

If you consider biology in the absence of hominids, then on the object level we have things like dinosaurs and butterflies and cats.  On the meta level we have things like natural selection of asexual populations, and sexual recombination.  The object level, you will observe, is rather more complicated than the meta level.  Natural selection is not an easy subject and it involves math.  But if you look at the anatomy of a whole cat, the cat has dynamics immensely more complicated than "mutate, recombine, reproduce".

\n\n

This is not surprising.  Natural selection is an accidental optimization process, that basically just started happening one day in a tidal pool somewhere.  A cat is the product of millions of generations and billions of years of evolution.

\n\n

Cats have brains, of course, which operate to learn over a lifetime; but at the end of the cat's lifetime, that information is thrown away, so it does not accumulate.  The cumulative effects of cat-brains upon the world as optimizers, therefore, are relatively small.

\n\n

Or consider a bee brain, or a beaver brain.  A bee builds hives, and a beaver builds dams; but they didn't figure out how to build them from scratch.  A beaver can't figure out how to build a hive, a bee can't figure out how to build a dam.

\n\n

So animal brains - up until recently - were not major players in the planetary game of optimization; they were pieces but not players.  Compared to evolution, brains lacked both generality of optimization power (they could not produce the amazing range of artifacts produced by evolution) and cumulative optimization power (their products did not accumulate complexity over time).  For more on this theme see Protein Reinforcement and DNA Consequentialism.

\n\n

Very recently, certain animal brains have begun to exhibit both generality of optimization power (producing an amazingly wide range of artifacts, in time scales too short for natural selection to play any significant role) and cumulative optimization power (artifacts of increasing complexity, as a result of skills passed on through language and writing).

\n\n

Natural selection takes hundreds of generations to do anything and millions of years for de novo complex designs.  Human programmers can design a complex machine with a hundred interdependent elements in a single afternoon.  This is not surprising, since natural selection is an accidental optimization process that basically just started happening one day, whereas humans are optimized optimizers handcrafted by natural selection over millions of years.

\n\n

The wonder of evolution is not how well it works, but that it works at all without being optimized.  This is how optimization bootstrapped itself into the universe - starting, as one would expect, from an extremely inefficient accidental optimization process.  Which is not the accidental first replicator, mind you, but the accidental first process of natural selection.  Distinguish the object level and the meta level!

\n\n

Since the dawn of optimization in the universe, a certain structural commonality has held across both natural selection and human intelligence...

\n\n

Natural selection selects on genes, but generally speaking, the genes do not turn around and optimize natural selection.  The invention of sexual recombination is an exception to this rule, and so is the invention of cells and DNA.  And you can see both the power and the rarity of such events, by the fact that evolutionary biologists structure entire histories of life on Earth around them.

\n\n

But if you step back and take a human standpoint - if you think like a programmer - then you can see that natural selection is still not all that complicated.  We'll try bundling different genes together?  We'll try separating information storage from moving machinery?  We'll try randomly recombining groups of genes?  On an absolute scale, these are the sort of bright ideas that any smart hacker comes up with during the first ten minutes of thinking about system architectures.

\n\n

Because natural selection started out so inefficient (as a completely accidental process), this tiny handful of meta-level improvements feeding back in from the replicators - nowhere near as complicated as the structure of a cat - structure the evolutionary epochs of life on Earth.

\n\n

And after all that, natural selection is still a blind idiot of a god.  Gene pools can evolve to extinction, despite all cells and sex.

\n\n

Now natural selection does feed on itself in the sense that each new adaptation opens up new avenues of further adaptation; but that takes place on the object level.  The gene pool feeds on its own complexity - but only thanks to the protected interpreter of natural selection that runs in the background, and is not itself rewritten or altered by the evolution of species.

\n\n

Likewise, human beings invent sciences and technologies, but we have not yet begun to rewrite the protected structure of the human brain itself.  We have a prefrontal cortex and a temporal cortex and a cerebellum, just like the first inventors of agriculture.  We haven't started to genetically engineer ourselves.  On the object level, science feeds on science, and each new discovery paves the way for new discoveries - but all that takes place with a protected interpreter, the human brain, running untouched in the background.

\n\n

We have meta-level inventions like science, that try to instruct humans in how to think.  But the first person to invent Bayes's Theorem, did not become a Bayesian; they could not rewrite themselves, lacking both that knowledge and that power.  Our significant innovations in the art of thinking, like writing and science, are so powerful that they structure the course of human history; but they do not rival the brain itself in complexity, and their effect upon the brain is comparatively shallow.

\n\n

The present state of the art in rationality training is not sufficient to turn an arbitrarily selected mortal into Albert Einstein, which shows the power of a few minor genetic quirks of brain design compared to all the self-help books ever written in the 20th century.

\n\n

Because the brain hums away invisibly in the background, people tend to overlook its contribution and take it for granted; and talk as if the simple instruction to "Test ideas by experiment" or the p<0.05 significance rule, were the same order of contribution as an entire human brain.  Try telling chimpanzees to test their ideas by experiment and see how far you get.

\n\n

Now... some of us want to intelligently design an intelligence that would be capable of intelligently redesigning itself, right down to the level of machine code.

\n\n

The machine code at first, and the laws of physics later, would be a protected level of a sort.  But that "protected level" would not contain the dynamic of optimization; the protected levels would not structure the work.  The human brain does quite a bit of optimization on its own, and screws up on its own, no matter what you try to tell it in school.  But this fully wraparound recursive optimizer would have no protected level that was optimizing.  All the structure of optimization would be subject to optimization itself.

\n\n

And that is a sea change which breaks with the entire past since the first replicator, because it breaks the idiom of a protected meta-level.

\n\n

The history of Earth up until now has been a history of optimizers spinning their wheels at a constant rate, generating a constant optimization pressure.  And creating optimized products, not at a constant rate, but at an accelerating rate, because of how object-level innovations open up the pathway to other object-level innovations.  But that acceleration is taking place with a protected meta-level doing the actual optimizing.  Like a search that leaps from island to island in the search space, and good islands tend to be adjacent to even better islands, but the jumper doesn't change its legs.  Occasionally, a few tiny little changes manage to hit back to the meta level, like sex or science, and then the history of optimization enters a new epoch and everything proceeds faster from there.

\n\n\n\n

Imagine an economy without investment, or a university without language, a technology without tools to make tools.  Once in a hundred million years, or once in a few centuries, someone invents a hammer.

\n\n

That is what optimization has been like on Earth up until now.

\n\n

When I look at the history of Earth, I don't see a history of optimization over time.  I see a history of optimization power in, and optimized products out.  Up until now, thanks to the existence of almost entirely protected meta-levels, it's been possible to split up the history of optimization into epochs, and, within each epoch, graph the cumulative object-level optimization over time, because the protected level is running in the background and is not itself changing within an epoch.

\n\n

What happens when you build a fully wraparound, recursively self-improving AI?  Then you take the graph of "optimization in, optimized out", and fold the graph in on itself.  Metaphorically speaking.

\n\n

If the AI is weak, it does nothing, because it is not powerful enough to significantly improve itself - like telling a chimpanzee to rewrite its own brain.

\n\n

If the AI is powerful enough to rewrite itself in a way that increases its ability to make further improvements, and this reaches all the way down to the AI's full understanding of its own source code and its own design as an optimizer... then even if the graph of "optimization power in" and "optimized product out" looks essentially the same, the graph of optimization over time is going to look completely different from Earth's history so far.

\n\n

People often say something like "But what if it requires exponentially greater amounts of self-rewriting for only a linear improvement?"  To this the obvious answer is, "Natural selection exerted roughly constant optimization power on the hominid line in the course of coughing up humans; and this doesn't seem to have required exponentially more time for each linear increment of improvement."

\n\n

All of this is still mere analogic reasoning.  A full AGI thinking about the nature of optimization and doing its own AI research and rewriting its own source code, is not really like a graph of Earth's history folded in on itself.  It is a different sort of beast.  These analogies are at best good for qualitative predictions, and even then, I have a large amount of other beliefs not yet posted, which are telling me which analogies to make, etcetera.

\n\n

But if you want to know why I might be reluctant to extend the graph of biological and economic growth over time, into the future and over the horizon of an AI that thinks at transistor speeds and invents self-replicating molecular nanofactories and improves its own source code, then there is my reason:  You are drawing the wrong graph, and it should be optimization power in versus optimized product out, not optimized product versus time.  Draw that graph, and the results - in what I would call common sense for the right values of "common sense" - are entirely compatible with the notion that a self-improving AI thinking millions of times faster and armed with molecular nanotechnology, would not be bound to one-month economic doubling times.  Nor bound to cooperation with large societies of equal-level entities with different goal systems, but that's a separate topic.

\n\n

On the other hand, if the next Big Invention merely infringed slightly on the protected level - if, say, a series of intelligence-enhancing drugs, each good for 5 IQ points, began to be introduced into society - then I can well believe that the economic doubling time would go to something like 7 years; because the basic graphs are still in place, and the fundamental structure of optimization has not really changed all that much, and so you are not generalizing way outside the reasonable domain.

\n\n

I really have a problem with saying, \"Well, I don't know if the next innovation is going to be a recursively self-improving AI superintelligence or a series of neuropharmaceuticals, but whichever one is the actual case, I predict it will correspond to an economic doubling time of one month.\"  This seems like sheer Kurzweilian thinking to me, as if graphs of Moore's Law are the fundamental reality and all else a mere shadow.  One of these estimates is way too slow and one of them is way too fast - he said, eyeballing his mental graph of \"optimization power in vs. optimized product out\".  If we are going to draw graphs at all, I see no reason to privilege graphs against time.

\n\n

I am juggling many balls right now, and am not able to prosecute this dispute properly.  Not to mention that I would prefer to have this whole conversation at a time when I had previously done more posts about, oh, say, the notion of an "optimization process"...  But let it at least not be said that I am dismissing ideas out of hand without justification, as though I thought them unworthy of engagement; for this I do not think, and I have my own complex views standing behind my Singularity beliefs, as one might well expect.

\n\n

Off to pack, I've got a plane trip tomorrow.

" } }, { "_id": "6ByPxcGDhmx74gPSm", "title": "Surface Analogies and Deep Causes", "pageUrl": "https://www.lesswrong.com/posts/6ByPxcGDhmx74gPSm/surface-analogies-and-deep-causes", "postedAt": "2008-06-22T07:51:46.000Z", "baseScore": 38, "voteCount": 31, "commentCount": 33, "url": null, "contents": { "documentId": "6ByPxcGDhmx74gPSm", "html": "

Followup to: Artificial Addition, The Outside View's Domain

\n

Where did I acquire, in my childhood, the deep conviction that reasoning from surface similarity couldn't be trusted?

\n

I don't know; I really don't.  Maybe it was from S. I. Hayakawa's Language in Thought and Action, or even Van Vogt's similarly inspired Null-A novels.  From there, perhaps, I began to mistrust reasoning that revolves around using the same word to label different things, and concluding they must be similar?  Could that be the beginning of my great distrust of surface similarities?  Maybe.  Or maybe I tried to reverse stupidity of the sort found in Plato; that is where the young Eliezer got many of his principles.

\n

And where did I get the other half of the principle, the drive to dig beneath the surface and find deep causal models?  The notion of asking, not \"What other thing does it resemble?\", but rather \"How does it work inside?\"  I don't know; I don't remember reading that anywhere.

\n

But this principle was surely one of the deepest foundations of the 15-year-old Eliezer, long before the modern me.  \"Simulation over similarity\" I called the principle, in just those words.  Years before I first heard the phrase \"heuristics and biases\", let alone the notion of inside views and outside views.

\n

\n

The \"Law of Similarity\" is, I believe, the official name for the magical principle that similar things are connected; that you can make it rain by pouring water on the ground.

\n

Like most forms of magic, you can ban the Law of Similarity in its most blatant form, but people will find ways to invoke it anyway; magic is too much fun for people to give it up just because it is rationally prohibited.

\n

In the case of Artificial Intelligence, for example, reasoning by analogy is one of the chief generators of defective AI designs:

\n

\"My AI uses a highly parallel neural network, just like the human brain!\"

\n

First, the data elements you call \"neurons\" are nothing like biological neurons.  They resemble them the way that a ball bearing resembles a foot.

\n

Second, earthworms have neurons too, you know; not everything with neurons in it is human-smart.

\n

But most importantly, you can't build something that \"resembles\" the human brain in one surface facet and expect everything else to come out similar.  This is science by voodoo doll.  You might as well build your computer in the form of a little person and hope for it to rise up and walk, as build it in the form of a neural network and expect it to think.  Not unless the neural network is fully as similar to human brains as individual human brains are to each other.

\n

So that is one example of a failed modern attempt to exploit a magical Law of Similarity and Contagion that does not, in fact, hold in our physical universe.  But magic has been very popular since ancient times, and every time you ban it it just comes back under a different name.

\n

When you build a computer chip, it does not perform addition because the little beads of solder resemble beads on an abacus, and therefore the computer chip should perform addition just like an abacus.

\n

The computer chip does not perform addition because the transistors are \"logical\" and arithmetic is \"logical\" too, so that if they are both \"logical\" they ought to do the same sort of thing.

\n

The computer chip performs addition because the maker understood addition well enough to prove that the transistors, if they work as elementarily specified, will carry out adding operations.  You can prove this without talking about abacuses.  The computer chip would work just as well even if no abacus had ever existed.  The computer chip has its own power and its own strength, it does not draw upon the abacus by a similarity-link.

\n

Now can you tell me, without talking about how your neural network is \"just like the human brain\", how your neural algorithm is going to output \"intelligence\"?  Indeed, if you pretend I've never seen or heard of a human brain or anything like it, can you explain to me what you mean by \"intelligence\"?  This is not a challenge to be leveled at random bystanders, but no one would succeed in designing Artificial Intelligence unless they could answer it.

\n

I can explain a computer chip to someone who's never seen an abacus or heard of an abacus and who doesn't even have the concept of an abacus, and if I could not do this, I could not design an artifact that performed addition.  I probably couldn't even make my own abacus, because I wouldn't understand which aspects of the beads were important.

\n

I expect to return later to this point as it pertains to Artificial Intelligence particularly.

\n

Reasoning by analogy is just as popular today, as in Greek times, and for the same reason.  You've got no idea how something works, but you want to argue that it's going to work a particular way.  For example, you want to argue that your cute little sub-earthworm neural network is going to exhibit \"intelligence\".  Or you want to argue that your soul will survive its death.  So you find something else to which it bears one single surface resemblance, such as the human mind or a sleep cycle, and argue that since they resemble each other they should have the same behavior.  Or better yet, just call them by the same name, like \"neural\" or \"the generation of opposites\".


But there is just no law which says that if X has property A and Y has property A then X and Y must share any other property.  \"I built my network, and it's massively parallel and interconnected and complicated, just like the human brain from which intelligence emerges!  Behold, now intelligence shall emerge from this neural network as well!\"  And nothing happens.  Why should it?


You come up with your argument from surface resemblances, and Nature comes back and says \"So what?\"  There just isn't a law that says it should work.


If you design a system of transistors to do addition, and it says 2 + 2 = 5, you can go back and debug it; you can find the place where you made an identifiable mistake.


But suppose you build a neural network that is massively parallel and interconnected and complicated, and it fails to be intelligent.  You can't even identify afterward what went wrong, because the wrong step was in thinking that the clever argument from similarity had any power over Reality to begin with.


In place of this reliance on surface analogies, I have had this notion and principle - from so long ago that I can hardly remember how or why I first came to hold it - that the key to understanding is to ask why things happen, and to be able to walk through the process of their insides.


Hidden or openly, this principle is ubiquitously at work in all my writings.  For example, take my notion of what it looks like to \"explain\" \"free will\" by digging down into the causal cognitive sources of human judgments of freedom-ness and determination-ness.  Contrast to any standard analysis that lists out surface judgments of freedom-ness and determination-ness without asking what cognitive algorithm generates these perceptions.


Of course, some things that resemble each other in some ways, resemble each other in other ways as well.  But in the modern world, at least, by the time we can rely on this resemblance, we generally have some idea of what is going on inside, and why the resemblance holds.


The distrust of surface analogies, and the drive to find deeper and causal models, has been with me my whole remembered span, and has been tremendously helpful to both the young me and the modern one.  The drive toward causality makes me keep asking \"Why?\" and looking toward the insides of things; and the distrust of surface analogies helps me avoid standard dead ends.  It has driven my whole life.


As for Inside View vs. Outside View, I think that the lesson of history is just that reasoning from surface resemblances starts to come apart at the seams when you try to stretch it over gaps larger than Christmas shopping - over gaps larger than different draws from the same causal-structural generator.  And reasoning by surface resemblance fails with especial reliability, in cases where there is the slightest motivation in the underconstrained choice of a reference class.

" } }, { "_id": "pqoxE3AGMbse68dvb", "title": "The Outside View's Domain", "pageUrl": "https://www.lesswrong.com/posts/pqoxE3AGMbse68dvb/the-outside-view-s-domain", "postedAt": "2008-06-21T03:17:17.000Z", "baseScore": 29, "voteCount": 25, "commentCount": 15, "url": null, "contents": { "documentId": "pqoxE3AGMbse68dvb", "html": "

Followup to: The Planning Fallacy


Plato's Phaedo:

    "The state of sleep is opposed to the \nstate of waking; and out of sleeping, waking is generated; and out of waking, \nsleeping; and the process of generation is in the one case falling asleep, \nand in the other waking up.  Do you agree?"
\n    "Quite."
    "Then suppose that you analyze life and death to me in the same manner.  \nIs not death opposed to life?"
    "Yes."
    "And they are generated one from the other?"
    "Yes."
    "What is generated from life?"
    "Death."
    "And what from death?"
    "I can only say in answer - life."
    "Then the living, whether things or persons, Cebes, are generated from the dead?"
\n    "That is clear."
\n    "Then our souls exist in the house of Hades."
    "It seems so."

Now suppose that the foil in the dialogue had objected a bit more strongly, and also that Plato himself had known about the standard research on the Inside View vs. Outside View...


(As I disapprove of Plato's use of Socrates as his character mouthpiece, I shall let one of the characters be Plato; and the other... let's call him "Phaecrinon".)

Plato:  "The state of sleep is opposed to the \nstate of waking; and out of sleeping, waking is generated; and out of waking, \nsleeping; and the process of generation is in the one case falling asleep, \nand in the other waking up...  Then suppose that you analyze life and death to me in the same manner."


Phaecrinon:  "Why should I?  They are different things."


Plato:  "Oh, Phaecrinon, have you not heard what researchers have\nshown, that the outside view is a better predictor than the inside view? \nYou come to me and point out the differences between life-death and\nwake-sleep, all so that you can avoid making the obvious generalization\nthat you prefer to deny.  Yet if we allow such reasoning as this, will\nnot software project managers say, 'My project is different from yours,\nbecause I have better programmers'?  And will not textbook authors say,\n"We are wiser than those other textbook authors, and therefore we will\nfinish sooner'?  Therefore you can see that to point out the\nsimilarities between things is superior, and to point out the\ndifferences between them, inferior."


Phaecrinon:  "You say that your reasoning is like to the reasoning\nof Daniel Kahneman, yet it seems to me that they are importantly\ndifferent.  For Daniel Kahneman dealt with generalization over things\nthat are almost quite as similar to each other, as different flippings of the\nsame coin.  Yet you deal with wholly different processes with different\ninternal mechanisms, and try to generalize across one to the other."


Plato:  "But Phaecrinon, now you only compound your error; for you\nhave pointed out the difference between myself and Kahneman, where I\nhave pointed out the similarity.  And this is again inferior, by reason\nof the inferiority of the Inside View over the Outside View.  You have\nonly given me one more special reason why the Outside View should not\napply to your particular case - all so that you can deny that our souls exist in the house of Hades."


Phaecrinon:  "Yet Plato, if you propose indiscriminately to apply\nthe Outside View to all things, how do you explain the ability of\nengineers to construct a new bridge that is unlike any other bridge,\nand calculate its properties in advance by computer simulation?  How can the\nWright Flyer fly, when all previous human-made flying\nmachines had failed?  How indeed can anything at all happen for the\nfirst time?"


Plato:  "Perhaps sometimes things do happen for the first time, but this does not mean we can predict them."


Phaecrinon:  "Ah, Plato, you do too little justice to engineers. \nOut of all the possible structures of metal and tubes and\nexplosive fuel, very few such structures constitute a spaceship\nthat will land on the Moon.  To land on the Moon for the first time,\nthen, human engineers must have known, in advance, which of many designs would have the exceedingly rare property of landing upon the Moon.  And is this not the very activity that engineers\nperform - calculating questions in detail?  Do not engineers take the\nInside View with great success?"


Plato:  "But they assume that each screw and part will behave just like it does on all the other times observed."


Phaecrinon:  "That is so.  Yet nonetheless they construct detailed internal models of\nexceeding complexity, and do not only collect the statistics of whole cases.  This is the Inside View if anything is the Inside View; no one\nclaims that Inside Views are generated purely from nothingness."


Plato:  "Then I answer that when engineers have shown many times\ntheir ability to perform detailed calculations with success,\nwe trust on future occasions that they will succeed similarly.  We trust the Inside View when the Outside View\ntells us to trust the Inside View.  But if this is not so, and there is\nnot a past record of success with detailed calculations, then the\nOutside View is all that is left to us; and we should foresake all\nattempts at internal modeling, for they will only lead us astray."


Phaecrinon:  "But now you have admitted that the notion of 'trust\nOutside View, distrust Inside View' has a limited domain of\napplicability, and we may as well restrict that domain further.  Just as you try\nto seal off the successes of engineers from the Outside View, so too, I wish to seal off the failures\nof Greek philosophers from the Outside View.  Specifically, the record\nof Greek philosophers does not inspire in me any confidence that the\nOutside View can be applied across processes with greatly different\ninternal causal structures, like life-and-death versus sleeping-and-waking.  Daniel Kahneman and his fellows, writing a\ntextbook, encountered a challenge drawn from a structurally similar causal generator\nas many other cases of textbook-writing; subject to just the same sort\nof unforeseen delays.  Likewise the students who failed to predict when\nthey would finish their Christmas shopping; the task of Christmas\nshopping does not change so much from one Christmas to another.  It\nwould be another matter entirely to say, 'Each year I have finished my\nChristmas shopping one day before Christmas - therefore I expect to\nfinish my textbook one day before my deadline.'"


Plato:  "But this only sounds foolish, because we know\nfrom the Outside View that textbooks are delayed far longer than this. \nPerhaps if you had never written a textbook before, and neither had\nanyone else, 'one day before deadline' would be the most reasonable\nestimate."


Phaecrinon:  "You would not allow me to predict in advance that\ntextbook writing is more difficult than Christmas shopping?"


Plato:  "No.  For you have chosen this particular special plea,\nusing your hindsight of the correct answer.  If you had truly needed to\nwrite a textbook for the first time in history, you would have pled,\n'No one can foresee driving delays and crowds in the store, but the\nwork I do to write a textbook is all under my own control - therefore,\nI will finish more than one day before deadline.'"


Phaecrinon:  "But even you admit that to draw analogies\nacross wider and wider differences is to make those analogies less and less\nreliable.  If you see many different humans sleeping, you can conclude that a newly observed human will probably sleep for eight hours; but if you see a cat sleeping, you must be less confident; and if you wish to draw an analogy to life and death, that is a greater distance still."


Plato:  "If I allow that, will not software project managers say, 'My software project is as unlike to all other software projects as is a cat to a human?'"


Phaecrinon:  "Then they are fools and nothing can be done about it.  Surely you do not think that the prediction from many humans to one cat is just as strong as the prediction from many humans to one human?  Insensitivity to the reliability of predictors is also a standard bias."


Plato:  "That is true.  Yet an Outside View may not be a good estimate, and yet still be the best estimate.  If we have only seen the sleep cycles of many humans, then the Outside View on the whole group may be the best estimate for a newly observed cat, if you have no other data.  Even likewise with our guesses as to life and death."


Phaecrinon:  "And one sign of when the Outside View might not\nprovide a good estimate, is when there are many different reference\nclasses to which you might compare your new thing.  A candle burns\nlow, and exhausts itself and extinguishes, and does not light again the next day. \nHow do you know that life and death is not analogous to a candle which burns and fails?  Why not generalize over the\nsimilarity to a candle, rather than the similarity to sleep cycles?"


Plato:  "Oh, but Phaecrinon, if we allow arguments over reference\nclasses, we may as well toss the notion of an Outside View out the\nwindow.  For then software project managers will say that the proper\nreference class for their project is the class of projects that\ndelivered on time, or the class of projects with managers as wise\nas themselves.  As for your analogy of the candle, it is self-evident that\nlife is similar to sleeping and waking, not to candles.  When a man\nis born, he is weak, but he grows to adulthood and is strong, and then\nwith age he grows weaker again.  In this he is like the Sun, that is\nweak when it rises, and strong at its apex, and then sinks below the\nhorizon; in this a man is like the sleepy riser, who becomes sleepy again\nat the end of the day.  It is self-evident that life and death belongs\nto the class of cyclical processes, and not the class of irreversible\nprocesses."


Phaecrinon:  "What is self-evident to you does not seem so\nself-evident to me, Plato; and just because you call several widely different things\n'cyclical processes', it does not follow that they were all random samples drawn from a\ngreat Barrel of Cyclical Processes, and that the next thing you choose to call a 'cyclical process' will have the same distribution of properties."


Plato:  "Again you compound your mistake by pleading special exceptions.  Will we let the software manager plead that his project is not drawn from the same barrel as the others?"


Phaecrinon:  "Again you extend the Outside View beyond its domain of applicability.  In engineering where all internal parts are precisely understood,\nbut the whole is not quite similar to anything else that has been built\nbefore, then the Inside View is superior to the Outside View.  And the sign\nof this is that results are routinely predicted in advance with great\nprecision."


Plato:  "This is just to say that when the Outside View tells us to use the Inside View, we should use it.  But surely not otherwise, Phaecrinon!"


Phaecrinon:  "When many different people try to accomplish the same task,\nand the internal details cannot be precisely calculated, and yet people\nhave a tendency to optimism and to not visualize incidental catastrophes, then the Outside View is superior to the\nInside View.  And the sign of this is that the same kind of task - with\nthe same sort of internal structure, the same difficulties and\nchallenges - has been done many times, and the result cannot be\npredicted with precision; yet people's predictions are usually biased\noptimistically."


Plato:  "This is the triumph of the Outside View!"


Phaecrinon:  "But when you deal with attempted analogies across structually different\nprocesses, perhaps unique or poorly understood, then things which are similar in some surface respects are\noften different in other respects.  And the sign of this domain is that when people try to reason by similarity, it is not at all clear what is similar\nto what, or which surface resemblances they should focus upon as opposed\nto others."


Plato:  "I think the resemblance of life-death to sleep-waking is perfectly clear.  But what do you assert we should do in such a case, if it is not taking the Outside View?"


Phaecrinon:  "Perhaps there is nothing to be done at all, with either the Inside View or the Outside View.  Not all problems are solvable; and it may be that the best we can do is avoid the overconfidence from asserting that analogies are much stronger than they are.  But it seems to me that in those cases where we know something of the internal structure,\nthen we can sometimes produce predictions by imagining the internals, even though the whole thing\nis not similar to any other whole thing in our experience."


Plato:  "Now I challenge you to consider how well such thoughts have done, historically."


Phaecrinon:  "What I have just described is the way that\nengineers build the first prototype of anything.  But that, I admit, is when they understand very precisely the parts they use.  If the\ninternals are not well-understood, then the whole will in most cases be even less\nwell-understood.  It is only your idea that the Outside View can yield better\npredictions, that I am protesting against.  It seems to me that the\nresult of taking the Outside View of things poorly understood or structurally dissimilar to other things in the purported reference class, is only to create great disputes about definitional boundaries, and\nclashing analogies, and arguments over which surface similarities are important.  When all\nalong the new process may not be similar to anything that\nalready exists."


Plato:  "But there is no alternative to the Outside View."


Phaecrinon:  "Yes, there is; you can try to imagine the internal process, if you know anything at all about it.  At least then two people can focus on the internal structure and argue about what happens and their dispute will be commensurable.  But if two people both say 'I am taking the\nOutside View' and then form different 'self-evident' reference classes,\nwhat do they do from there?  How can they resolve their dispute\nabout which surface characteristics are important?  At least if you make\npredictions about internal causal processes, the results are finally testable if the dispute is empirical at all.  How do you test the assertion that life is more importantly similar to sleep and waking, than to a candle?  Perhaps life is simply like neither.  Something must happen internally when a human thinks and reasons, but there does not have to exist any other process in nature similar enough that we could predict the characteristics of human thought by looking at it."


Plato:  "What it boils down to, is that you are constructing a detailed excuse not to use the Outside View in your own case."


Phaecrinon:  "And if each of two people with different Outside Views\nsays to the other, 'You are a fool, for disregarding the Outside View!'\nthen they will make no progress on their disagreement at all.  This is\nthe danger of proposing an absolute mandate for philosophers\nencountering new and structurally different phenomena, because you want\nto prevent software project managers from making special\nexcuses for their software project.  Reversed stupidity is not intelligence,\nand there is no language in which it is difficult to write bad computer programs, and in the art of rationality it is never difficult to shoot off your own foot if you desire to do so.  The standard Outside View relies on your seeing the common-sense difference between textbook writing and Christmas shopping, so that you don't try to lump them into the same reference class.  I am similarly hoping that you can see by common sense that the Outside View works rather better to predict Christmas shopping times, than what you are arguing is the analogous 'Outside View' technique in philosophy."


Plato:  "And you believe you can do better with the Inside View."


Phaecrinon:  "Reasoning about the internals of things whose output is not yet observed, is fraught with difficulty.  One must be\nconstantly aware of what one can and cannot reasonably guess, based on\nthe strength of your knowledge.  The uncertainties of such an Inside View, end up being much greater than the uncertainties of the Outside View on Christmas shopping.  Only when the Inside View support appears\nextremely lopsided can you dare to come to even a tentative\nconclusion!  But I do think that sometimes the Inside View support can be extremely\nlopsided - though it is a strain on your\nrationality even to correctly distinguish such cases."


Plato:  "The evidence shows that people cannot successfully use the Inside View at all."


Phaecrinon:  "No, the evidence shows that the Outside View yields better answers than the Inside View for problems like writing a textbook.  But even an Inside View of writing a textbook would tell you that the project was unlikely to destroy the Earth.  Taking the Inside View of a new and strange process is a Difficult\nProblem, where taking the Outside View on textbook composition is a Straightforward Problem.  But to try and argue like\nalchemists from surface resemblances is a Hopeless Problem.  Then\nthere cannot even be any meeting of minds, if you start with different\nassumptions about which similarities are important.  An answer need not exist even in principle, for there may be nothing else that is enough like this new thing to yield successful predictions by analogy."


Plato:  "So you have said that it is easier for two people to\nconduct their dispute if they both take the Inside View and argue about\ninternal causal processes.  But from this it does not follow that the\nOutside View based on surface resemblances is inferior.  Perhaps you\nare only coming to agreement on folly, and either of two conflicting\nOutside Views would be more reliable than the best Inside View."

" } }, { "_id": "RFnkagDaJSBLDXEHs", "title": "Heading Toward Morality", "pageUrl": "https://www.lesswrong.com/posts/RFnkagDaJSBLDXEHs/heading-toward-morality", "postedAt": "2008-06-20T08:08:16.000Z", "baseScore": 27, "voteCount": 25, "commentCount": 53, "url": null, "contents": { "documentId": "RFnkagDaJSBLDXEHs", "html": "

Followup to: Ghosts in the Machine, Fake Fake Utility Functions, Fake Utility Functions


As people were complaining before about not seeing where the quantum physics sequence was going, I shall go ahead and tell you where I'm heading now.

Having dissolved the confusion surrounding the word \"could\", the trajectory is now heading toward should.


In fact, I've been heading there for a while.  Remember the whole sequence on fake utility functions?  Back in... well... November 2007?


I sometimes think of there being a train that goes to the Friendly AI station; but it makes several stops before it gets there; and at each stop, a large fraction of the remaining passengers get off.

One of those stops is the one I spent a month leading up to in November 2007, the sequence chronicled in Fake Fake Utility Functions and concluded in Fake Utility Functions.


That's the stop where someone thinks of the One Great Moral Principle That Is All We Need To Give AIs.


To deliver that one warning, I had to go through all sorts of topics—which topics one might find useful even if not working on Friendly AI.  I warned against Affective Death Spirals, which required recursing on the affect heuristic and halo effect, so that your good feeling about one particular moral principle wouldn't spiral out of control.  I did that whole sequence on evolution; and discoursed on the human ability to make almost any goal appear to support almost any policy; I went into evolutionary psychology to argue for why we shouldn't expect human terminal values to reduce to any simple principle, even happiness, explaining the concept of \"expected utility\" along the way...


...and talked about genies and more; but you can read the Fake Utility sequence for that.


So that's just the warning against trying to oversimplify human morality into One Great Moral Principle.


If you want to actually dissolve the confusion that surrounds the word \"should\"—which is the next stop on the train—then that takes a much longer introduction.  Not just one November.


I went through the sequence on words and definitions so that I would be able to later say things like \"The next project is to Taboo the word 'should' and replace it with its substance\", or \"Sorry, saying that morality is self-interest 'by definition' isn't going to cut it here\".


And also the words-and-definitions sequence was the simplest example I knew to introduce the notion of How An Algorithm Feels From Inside, which is one of the great master keys to dissolving wrong questions.  Though it seems to us that our cognitive representations are the very substance of the world, they have a character that comes from cognition and often cuts crosswise to a universe made of quarks.  E.g. probability; if we are uncertain of a phenomenon, that is a fact about our state of mind, not an intrinsic character of the phenomenon.
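
A toy sketch in Python (the setup and numbers are invented for illustration): the coin has already landed one definite way, yet two observers with different knowledge assign it different probabilities.  The probabilities differ while the coin does not, so the probability must live in the observers' heads.

import random

flip = random.choice(['heads', 'tails'])   # the territory: one definite outcome

p_heads_ignorant = 0.5                     # credence of someone who knows nothing
glimpsed_heads = (flip == 'heads')         # someone else caught a brief glimpse
p_heads_glimpser = 0.9 if glimpsed_heads else 0.1   # near-certain, yet still a credence

# Same coin, same world-state, two different probabilities: the difference
# is a fact about the observers' knowledge, not about the coin.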


Then the reductionism sequence: that a universe made only of quarks, does not mean that things of value are lost or even degraded to mundanity.  And the notion of how the sum can seem unlike the parts, and yet be as much the parts as our hands are fingers.


Followed by a new example, one step up in difficulty from words and their seemingly intrinsic meanings:  \"Free will\" and seemingly intrinsic could-ness.


But before that point, it was useful to introduce quantum physics.  Not just to get to timeless physics and dissolve the \"determinism\" part of the \"free will\" confusion.  But also, more fundamentally, to break belief in an intuitive universe that looks just like our brain's cognitive representations.  And present examples of the dissolution of even such fundamental intuitions as those concerning personal identity.  And to illustrate the idea that you are within physics, within causality, and that strange things will go wrong in your mind if ever you forget it.


Lately we have begun to approach the final precautions, with warnings against such notions as Author* control: every mind which computes a morality must do so within a chain of lawful causality, it cannot arise from the free will of a ghost in the machine.


And the warning against Passing the Recursive Buck to some meta-morality that is not itself computably specified, or some meta-morality that is chosen by a ghost without it being programmed in, or to a notion of \"moral truth\" just as confusing as \"should\" itself...


And the warning on the difficulty of grasping slippery things like \"should\"—demonstrating how very easy it will be to just invent another black box equivalent to should-ness, to sweep should-ness under a slightly different rug—or to bounce off into mere modal logics of primitive should-ness...


We aren't yet at the point where I can explain morality.


But I think—though I could be mistaken—that we are finally getting close to the final sequence.


And if you don't care about my goal of explanatorily transforming Friendly AI from a Confusing Problem into a merely Extremely Difficult Problem, then stick around anyway.  I tend to go through interesting intermediates along my way.


It might seem like confronting \"the nature of morality\" from the perspective of Friendly AI is only asking for additional trouble.


Artificial Intelligence melts people's brains.  Metamorality melts people's brains.  Trying to think about AI and metamorality at the same time can cause people's brains to spontaneously combust and burn for years, emitting toxic smoke—don't laugh, I've seen it happen multiple times.


But the discipline imposed by Artificial Intelligence is this: you cannot escape into things that are \"self-evident\" or \"obvious\".  That doesn't stop people from trying, but the programs don't work.  Every thought has to be computed somehow, by transistors made of mere quarks, and not by moral self-evidence to some ghost in the machine.


If what you care about is rescuing children from burning orphanages, I don't think you will find many moral surprises here; my metamorality adds up to moral normality, as it should.  You do not need to worry about metamorality when you are personally trying to rescue children from a burning orphanage.  The point at which metamoral issues per se have high stakes in the real world, is when you try to compute morality in an AI standing in front of a burning orphanage.


Yet there is also a good deal of needless despair and misguided fear of science, stemming from notions such as, \"Science tells us the universe is empty of morality\".  This is damage done by a confused metamorality that fails to add up to moral normality.  For that I hope to write down a counterspell of understanding.  Existential depression has always annoyed me; it is one of the world's most pointless forms of suffering.


Don't expect the final post on this topic to come tomorrow, but at least you know where we're heading.


Part of The Metaethics Sequence


Next post: \"No Universally Compelling Arguments\"


(start of sequence)

" } }, { "_id": "f3W7QbLBA2B7hk84y", "title": "LA-602 vs. RHIC Review", "pageUrl": "https://www.lesswrong.com/posts/f3W7QbLBA2B7hk84y/la-602-vs-rhic-review", "postedAt": "2008-06-19T10:00:31.000Z", "baseScore": 65, "voteCount": 44, "commentCount": 62, "url": null, "contents": { "documentId": "f3W7QbLBA2B7hk84y", "html": "

LA-602: Ignition of the Atmosphere with Nuclear Bombs, a research report from the Manhattan Project, is to the best of my knowledge the first technical analysis ever conducted of an uncertain danger of a human-caused extinction catastrophe.

Previously, Teller and Konopinski had been assigned the task of disproving a crazy suggestion by Enrico Fermi that a fission chain reaction could ignite a thermonuclear reaction in deuterium - what we now know as an H-Bomb. Teller and Konopinski found that, contrary to their initial skepticism, the hydrogen bomb appeared possible.

Good for their rationality! Even though they started with the wrong conclusion on their bottom line, they were successfully forced away from it by arguments that could only support one answer.

Still, in retrospect, I think that the advice the future would give to the past, would be: Start by sitting down and saying, "We don't know if a hydrogen bomb is possible". Then list out the evidence and arguments; then at the end weigh it.

So the hydrogen bomb was possible. Teller then suggested that a hydrogen bomb might ignite a self-sustaining thermonuclear reaction in the nitrogen of Earth's atmosphere. This also appeared extremely unlikely at a first glance, but Teller and Konopinski and Marvin investigated, and wrote LA-602...

As I understand LA-602, the authors went through the math and concluded that there were several strong reasons to believe that nitrogen fusion could not be self-sustaining in the atmosphere: it would take huge energies to start the reaction at all; the reaction would lose radiation from its surface too fast to sustain the fusion temperature; and even if the fusion reaction did grow, the Compton effect would increase radiation losses with volume(?).

And we're still here; so the math, whatever it actually says, seems to have been right.

Note that the Manhattan scientists didn't always get their math right. The Castle Bravo nuclear test on March 1, 1954 produced 15 megatons instead of the expected 4-8 megatons due to an unconsidered additional nuclear reaction that took place in lithium-7. The resulting fallout contaminated fishing boats outside the declared danger zone; at least one person seems to have died.

But the LA-602 calculations were done with very conservative assumptions, and came out with plenty of safety margin. AFAICT (I am not a physicist) a Castle Bravo type oversight could not realistically have made the atmosphere ignite anyway, and if it did, it'd have gone right out, etc.

The last time I know of when a basic physical calculation with that much safety margin, and multiple angles of argument, turned out to be wrong anyway, was when Lord Kelvin showed from multiple angles of reasoning that the Earth could not possibly be so much as a hundred million years old.

LA-602 concludes:

"There remains the distinct possibility that some other less simple mode of burning may maintain itself in the atmosphere... the complexity of the argument and the absence of satisfactory experimental foundations makes further work on the subject highly desirable."

Decades after LA-602, another paper would be written to analyze an uncertain danger of human-created existential risk: The Review of Speculative "Disaster Scenarios" at RHIC.

The RHIC Review was written in response to suggestions that the Relativistic Heavy Ion Collider might create micro black holes or strangelets.

A B.Sc. thesis by Shameer Shah of MIT, Perception of Risk: Disaster Scenarios at Brookhaven, chronicles the story behind the RHIC Review:

The RHIC flap began when Walter Wagner wrote to Scientific American, speculating that the Brookhaven collider might create a "mini black hole". A reply letter by Frank Wilczek of the Institute for Advanced Study labeled the mini-black-hole scenario as impossible, but also introduced a new possibility, negatively charged strangelets, which would convert normal matter into more strange matter. Wilczek considered this possibility slightly more plausible.

Then the media picked up the story.

Shameer Shah interviewed (on Nov 22, 2002) Robert Jaffe, Director of MIT's Center for Theoretical Physics, a pioneer in the theory of strange matter, and primary author of the RHIC Review.

According to Jaffe, even before the investigative committee was convened, "No scientist who understood the physics thought that this experiment posed the slightest threat to anybody." Then why have the committee in the first place? "It was an attempt to take seriously the fears of science that they don't understand." Wilczek was asked to serve on the committee "to pay the wages of his sin, since he's the one that started all this with his letter."

Between LA-602 and the RHIC Review there is quite a difference of presentation.

I mean, just look at the names:

LA-602: Ignition of the Atmosphere with Nuclear Bombs
Review of Speculative "Disaster Scenarios" at RHIC

See a difference?

LA-602 began life as a classified report, written by scientists for scientists. You're assumed to be familiar with the meaning of terms like Bremsstrahlung, which I had to look up. LA-602 does not begin by asserting any conclusions; the report walks through the calculations - at several points clearly labeling theoretical extrapolations and unexplored possibilities as such - and finally concludes that radiation losses make self-sustaining nitrogen fusion impossible-according-to-the-math, even under the most conservative assumptions.

The RHIC Review presents a nontechnical summary of its conclusions in six pages at the start, relegating the math and physics to eighteen pages of appendices.

LA-602 concluded, "There remains the distinct possibility that some other less simple mode of burning may maintain itself in the atmosphere..."

The RHIC Review concludes: "Our conclusion is that the candidate mechanisms for catastrophic scenarios at RHIC are firmly excluded by existing empirical evidence, compelling theoretical arguments, or both. Accordingly, we see no reason to delay the commissioning of RHIC on their account."

It is not obvious to my inexpert eyes that the assumptions in the RHIC Review are any more firm than those in LA-602 - they both seem very firm - but the two papers arise from rather different causes.

To put it bluntly, LA-602 was written by people curiously investigating whether a hydrogen bomb could ignite the atmosphere, and the RHIC Review is a work of public relations.

Now, it does seem - so far as I can tell - that it's pretty damned unlikely for a particle accelerator far less powerful than random cosmic rays to destroy Earth and/or the Universe.

But I don't feel any more certain of that after reading the RHIC Review than before I read it. I am not a physicist; but if I was a physicist, and I read a predigested paper like the RHIC Review instead of doing the whole analysis myself from scratch, I would be fundamentally trusting the rationality of the paper's authors. Even if I checked the math, I would still be trusting that the equations I saw were the right equations to check. I would be trusting that someone sat down, and looked for unpleasant contrary arguments with an open mind, and really honestly didn't find anything.

When I contrast LA-602 to the RHIC Review, well...

Don't get me wrong: I don't feel the smallest particle of real fear about particle accelerators. The basic cosmic-ray argument seems pretty convincing. Nature seems to have arranged for the calculations in this case to have some pretty large error margins. I, myself, am not going to worry about risks we can actually calculate to be tiny, when there are incalculable large-looking existential risks to soak up my concern.

But there is something else that I do worry about: The primary stake on the table with things like RHIC, is that it is going to get scientists into the habit of treating existential risk as a public relations issue, where ignorant technophobes say the risk exists, and the job of scientists is to interface with the public and explain to them that it does not.

Everyone knew, before the RHIC report was written, what answer it was supposed to produce. That is a very grave matter. Analysis is what you get when physicists sit down together and say, "Let us be curious," and walk through all the arguments they can think of, recording them as they go, and finally weigh them up and reach a conclusion. If this does not happen, no analysis has taken place.

The general rule of thumb I sometimes use, is that - because the expected utility of thought arises from the utility of what is being reasoned about - a single error in analyzing an existential risk, even if it "doesn't seem like it ought to change the conclusion", is worth at least one human life.
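
(A toy calculation with invented numbers, to make the rule concrete: if a single analytic error raises the probability of a civilization-ending mistake by a mere 10^-9, then with roughly 7×10^9 lives at stake the expected cost is 10^-9 × 7×10^9 ≈ 7 lives.  On assumptions like these, even an error that "couldn't change the conclusion" is charged at least one life.)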

The RHIC Review is not written to the standard of care that would be appropriate if, after the RHIC Review was written, some higher authority went through the paper; and if a single argument in it was wrong, anywhere, whether or not it changed the conclusion, a hostage got shot. That's how to think about analyzing existential risks. That way, for each and every element of the analysis, you can find it in yourself to be a little uncertain about that element, even if it doesn't seem "like it could possibly change the conclusion"; uncertainty invokes curiosity.

The RHIC Review was produced by authors who were already sure that the RHIC couldn't destroy the Earth; the problem at hand was explaining this to the public. If the authors decided just by eyeballing the problem that the RHIC couldn't destroy the Earth, then the only actual analysis that took place was conducted in 5 seconds. Yes, it's a lopsided issue, but it seems that as a general matter of policy, any existential risk at all deserves a longer and truly curious analysis than that.

Though I don't really blame the RHIC Review's authors. No one ever told them that there was such a thing as existential risk, or that it raised the standards of analysis beyond what was usual in a scientific paper, or that rational analysis requires placing yourself into a state of genuine uncertainty about each individual element's exact value...

And the much greater reason I don't blame them, is that between the 1940s and today, society has developed a "Gotcha!" attitude toward risk.

You can't admit a single particle of uncertain danger if you want your science's funding to survive. These days you are not allowed to end by saying, "There remains the distinct possibility..." Because there is no debate you can have about tradeoffs between scientific progress and risk. If you get to the point where you're having a debate about tradeoffs, you've lost the debate. That's how the world stands, nowadays.

So no one can do serious analysis of existential risks anymore, because just by asking the question, you're threatening the funding of your whole field.

The number one lesson I take from this whole issue is that where human-caused uncertain existential dangers are concerned, the only way to get a real, serious, rational, fair, evenhanded assessment of the risks, in our modern environment,

Is if the whole project is classified, the paper is written for scientists without translation, and the public won't get to see the report for another fifty years.

This is the lesson of LA-602: Ignition of the Atmosphere with Nuclear Bombs and the Review of Speculative "Disaster Scenarios" at RHIC. Read them and weep.

" } }, { "_id": "cnYHFNBF3kZEyx24v", "title": "Ghosts in the Machine", "pageUrl": "https://www.lesswrong.com/posts/cnYHFNBF3kZEyx24v/ghosts-in-the-machine", "postedAt": "2008-06-17T23:29:17.000Z", "baseScore": 69, "voteCount": 56, "commentCount": 30, "url": null, "contents": { "documentId": "cnYHFNBF3kZEyx24v", "html": "

People hear about Friendly AI and say - this is one of the top three initial reactions:


\"Oh, you can try to tell the AI to be Friendly, but if the AI can modify its own source code, it'll just remove any constraints you try to place on it.\"


And where does that decision come from?


Does it enter from outside causality, rather than being an effect of a lawful chain of causes which started with the source code as originally written?  Is the AI the Author* source of its own free will?


A Friendly AI is not a selfish AI constrained by a special extra conscience module that overrides the AI's natural impulses and tells it what to do.  You just build the conscience, and that is the AI.  If you have a program that computes which decision the AI should make, you're done.  The buck stops immediately.


At this point, I shall take a moment to quote some case studies from the Computer Stupidities site and Programming subtopic.  (I am not linking to this, because it is a fearsome time-trap; you can Google if you dare.)


I tutored college students who were taking a computer programming course. A few of them didn't understand that computers are not sentient.  More than one person used comments in their Pascal programs to put detailed explanations such as, \"Now I need you to put these letters on the screen.\"  I asked one of them what the deal was with those comments. The reply:  \"How else is the computer going to understand what I want it to do?\"  Apparently they would assume that since they couldn't make sense of Pascal, neither could the computer.


While in college, I used to tutor in the school's math lab.  A student came in because his BASIC program would not run. He was taking a beginner course, and his assignment was to write a program that would calculate the recipe for oatmeal cookies, depending upon the number of people you're baking for.  I looked at his program, and it went something like this:


10   Preheat oven to 350
20   Combine all ingredients in a large mixing bowl
30   Mix until smooth


An introductory programming student once asked me to look at his program and figure out why it was always churning out zeroes as the result of a simple computation.  I looked at the program, and it was pretty obvious:


begin
    read(\"Number of Apples\", apples)
    read(\"Number of Carrots\", carrots)
    read(\"Price for 1 Apple\", a_price)
    read(\"Price for 1 Carrot\", c_price)
    write(\"Total for Apples\", a_total)
    write(\"Total for Carrots\", c_total)
    write(\"Total\", total)
    total = a_total + c_total
    a_total = apples * a_price
    c_total = carrots * c_price
end


Me: \"Well, your program can't print correct results before they're computed.\"
Him: \"Huh?  It's logical what the right solution is, and the computer should reorder the instructions the right way.\"
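
For contrast, here is a sketch of the same program with its steps in causal order, rendered in Python rather than the original pseudocode: the totals are computed before they are written, because the machine will not reorder the instructions into the 'logical' order on its own.

apples = int(input('Number of Apples: '))
carrots = int(input('Number of Carrots: '))
a_price = float(input('Price for 1 Apple: '))
c_price = float(input('Price for 1 Carrot: '))

a_total = apples * a_price        # compute first...
c_total = carrots * c_price
total = a_total + c_total

print('Total for Apples:', a_total)   # ...then write
print('Total for Carrots:', c_total)
print('Total:', total)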


There's an instinctive way of imagining the scenario of \"programming an AI\".  It maps onto a similar-seeming human endeavor:  Telling a human being what to do.  Like the \"program\" is giving instructions to a little ghost that sits inside the machine, which will look over your instructions and decide whether it likes them or not.


There is no ghost who looks over the instructions and decides how to follow them.  The program is the AI.


That doesn't mean the ghost does anything you wish for, like a genie.  It doesn't mean the ghost does everything you want the way you want it, like a slave of exceeding docility.  It means your instruction is the only ghost that's there, at least at boot time.


AI is much harder than people instinctively imagined, exactly because you can't just tell the ghost what to do.  You have to build the ghost from scratch, and everything that seems obvious to you, the ghost will not see unless you know how to make the ghost see it.  You can't just tell the ghost to see it.  You have to create that-which-sees from scratch.


If you don't know how to build something that seems to have some strange ineffable elements like, say, \"decision-making\", then you can't just shrug your shoulders and let the ghost's free will do the job. You're left forlorn and ghostless.


There's more to building a chess-playing program than building a really fast processor - so the AI will be really smart - and then typing at the command prompt \"Make whatever chess moves you think are best.\"  You might think that, since the programmers themselves are not very good chess-players, any advice they tried to give the electronic superbrain would just slow the ghost down.  But there is no ghost.  You see the problem.


And there isn't a simple spell you can perform to - poof! - summon a complete ghost into the machine.  You can't say, \"I summoned the ghost, and it appeared; that's cause and effect for you.\"  (It doesn't work if you use the notion of \"emergence\" or \"complexity\" as a substitute for \"summon\", either.)  You can't give an instruction to the CPU, \"Be a good chessplayer!\"  You have to see inside the mystery of chess-playing thoughts, and structure the whole ghost from scratch.


No matter how common-sensical, no matter how logical, no matter how \"obvious\" or \"right\" or \"self-evident\" or \"intelligent\" something seems to you, it will not happen inside the ghost.  Unless it happens at the end of a chain of cause and effect that began with the instructions that you had to decide on, plus any causal dependencies on sensory data that you built into the starting instructions.


This doesn't mean you program in every decision explicitly.  Deep Blue was a far better chessplayer than its programmers.  Deep Blue made better chess moves than anything its makers could have explicitly programmed - but not because the programmers shrugged and left it up to the ghost.  Deep Blue moved better than its programmers... at the end of a chain of cause and effect that began in the programmers' code and proceeded lawfully from there.  Nothing happened just because it was so obviously a good move that Deep Blue's ghostly free will took over, without the code and its lawful consequences being involved.
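
A toy sketch of the idea in Python (generic game-tree search, not Deep Blue's actual code; moves, play, and score are assumed helper functions supplied by the caller): the search can surface moves its author never imagined, yet every move is a lawful consequence of the code plus the position it is handed.

def minimax_value(state, moves, play, score, depth, maximizing=True):
    # Exhaustive alternating-turn search; score is from the root player's view.
    options = moves(state)
    if depth == 0 or not options:
        return score(state)
    values = [minimax_value(play(state, m), moves, play, score,
                            depth - 1, not maximizing) for m in options]
    return max(values) if maximizing else min(values)

def best_move(state, moves, play, score, depth):
    # The move chosen may 'exceed the programmer' - but only via lawful search.
    return max(moves(state),
               key=lambda m: minimax_value(play(state, m), moves, play, score,
                                           depth - 1, maximizing=False))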


If you try to wash your hands of constraining the AI, you aren't left with a free ghost like an emancipated slave.  You are left with a heap of sand that no one has purified into silicon, shaped into a CPU and programmed to think.


Go ahead, try telling a computer chip \"Do whatever you want!\"  See what happens?  Nothing.  Because you haven't constrained it to understand freedom.


All it takes is one single step that is so obvious, so logical, so self-evident that your mind just skips right over it, and you've left the path of the AI programmer.  It takes an effort like the one I showed in Grasping Slippery Things to prevent your mind from doing this.

" } }, { "_id": "HnS6c5Xm9p9sbm4a8", "title": "Grasping Slippery Things", "pageUrl": "https://www.lesswrong.com/posts/HnS6c5Xm9p9sbm4a8/grasping-slippery-things", "postedAt": "2008-06-17T02:04:58.000Z", "baseScore": 36, "voteCount": 30, "commentCount": 17, "url": null, "contents": { "documentId": "HnS6c5Xm9p9sbm4a8", "html": "

Followup to: Possibility and Could-ness, The Ultimate Source


Brandon Reinhart wrote:

I am "grunching." Responding to the questions posted without reading your answer. Then I'll read your answer and compare. I started reading your post on Friday and had to leave to attend a wedding before I had finished it, so I had a while to think about my answer.

Brandon, thanks for doing this.  You've provided a valuable illustration of natural lines of thought.  I hope you won't be offended if, for educational purposes, I dissect it in fine detail.  This sort of dissection is a procedure I followed with Marcello to teach thinking about AI, so no malice is intended.

Can you talk about "could" without using synonyms like "can" and "possible"?

When we speak of "could" we speak of the set of realizable worlds [A'] that follows from an initial starting world A operated on by a set of physical laws f.

(Emphases added.)


I didn't list "realizable" explicitly as Tabooed, but it refers to the same concept as "could".  Rationalist's Taboo isn't played against a word list, it's played against a concept list.  The goal is to force yourself to reduce.


Because "follows" links two worlds, and the linkage is exactly what seems confusing, a word like "follows" is also dangerous.


Think of it as being like trying to pick up something very slippery.  You have to prevent it from squeezing out of your hands.  You have to prevent the mystery from scurrying away and finding a new dark corner to hide in, as soon as you flip on the lights.


So letting yourself use a word like "realizable", or even "follows", is giving your mind a tremendous opportunity to Pass the Recursive Buck - which anti-pattern, be it noted in fairness to Brandon, I hadn't yet posted on.

If I was doing this on my own, and I didn't know the solution yet, I would also be marking "initial", "starting", and "operated on".  Not necessarily at the highest priority, but just in case they were hiding the source of the confusion.  If I was being even more careful I would mark "physical laws" and "world".

So when we say "I could have turned left at the fork in the road." "Could" refers to the set of realizable worlds that follow from an initial starting world A in which we are faced with a fork in the road, given the set of physical laws. We are specifically identifying a sub-set of [A']: that of the worlds in which we turned left.

One of the anti-patterns I see often in Artificial Intelligence, and I believe it is also common in philosophy, is inventing a logic that takes as a primitive something that you need to reduce to pieces.


To your mind's eye, it seems like "could-ness" is a primitive feature of reality.  There's a natural temptation to describe the properties that "could-ness" seems to have, and make lists of things that are "could" or "not-could".  But this is, at best, a preliminary step toward reduction, and you should be aware that it is at best a preliminary step.


The goal is to see inside could-ness, not to develop a modal logic to manipulate primitive could-ness.


But seeing inside is difficult; there is no safe method you know you can use to see inside.


And developing a modal logic seems like it's good for a publication, in philosophy.  Or in AI, you manually preprogram a list of which things have could-ness, and then the program appears to reason about it.  That's good for a publication too.

This does not preclude us from making mistakes in our use of could. One might say "I could have turned left, turned right, or started a nuclear war." The option "started a nuclear war" may simply not be within the set [A']. It wasn't physically realizable given all of the permutations that result from applying our physical laws to our starting world.

Your mind tends to bounce off the problem, and has to be constrained to face it - like your mind itself is the slippery thing that keeps squeezing out of your hands.


It tries to hide the mystery somewhere else, instead of taking it apart - draw a line to another black box, releasing the tension of trying to look inside the first black box.


In your mind's eye, it seems, you can see before you the many could-worlds that follow from one real world.


The real answer is to resolve a Mind Projection Fallacy; physics follows a single line, but your search system, in determining its best action, has to search through multiple options not knowing which it will make real, and all the options will be labeled as reachable in the search.
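
A minimal sketch of that answer in Python (the names and the toy world are mine): physics runs a single line, but the agent's search labels several options "reachable" before one of them is made real.  That label is all the "could" there is.

def choose_action(state, actions, transition, utility):
    reachable = {a: transition(state, a) for a in actions}     # the "could"s
    return max(reachable, key=lambda a: utility(reachable[a]))  # the "would"

# Hypothetical fork in the road:
outcomes = {"left": 3, "right": 5}
action = choose_action("fork", ["left", "right"],
                       lambda s, a: outcomes[a], lambda o: o)
assert action == "right"   # both options were labeled reachable; one became real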


So, given that answer, you can see how talking about "physically realizable" and "permutations(?) that result from applying physical laws" is a bounce-off-the-problem, a mere-logic, that squeezes the same unpenetrated mystery into "realizable" and "permutations".

If our physical laws contain no method for implementing free will and no randomness, [A'] contains only the single world that results from applying the set of physical laws to A. If there is randomness or free will, [A'] contains a broader collection of worlds that result from applying physical laws to A...where the mechanisms of free will or randomness are built into the physical laws.

Including a "mechanism of free will" into the model is a perfect case of Passing the Recursive Buck.


Think of it from the perspective of Artificial Intelligence.  Suppose you were writing a computer program that would, if it heard a burglar alarm, conclude that the house had probably been robbed.  Then someone says, "If there's an earthquake, then you shouldn't conclude the house was robbed."  This is a classic problem in Bayesian networks with a whole deep solution to it in terms of causal graphs and probability distributions... but suppose you didn't know that.


You might draw a diagram for your brilliant new Artificial General Intelligence design, that had a "logical reasoning unit" as one box, and then a "context-dependent exception applier" in another box with an arrow to the first box.


So you would have convinced yourself that your brilliant plan for building AGI included a "context-dependent exception applier" mechanism.  And you would not discover Bayesian networks, because you would have prematurely marked the mystery as known.
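
For contrast, the actual solution is computable.  Here is a toy version in Python (all probabilities invented): a three-node causal network in which learning of an earthquake "explains away" the alarm and lowers the probability of burglary - the behavior the hand-wired "context-dependent exception applier" box only gestures at.

P_B, P_E = 0.01, 0.02                        # priors on burglary, earthquake
P_A = {(False, False): 0.001, (False, True): 0.3,
       (True, False): 0.9, (True, True): 0.95}        # P(alarm | burglary, quake)

def joint(b, e, a):
    p = (P_B if b else 1 - P_B) * (P_E if e else 1 - P_E)
    return p * (P_A[(b, e)] if a else 1 - P_A[(b, e)])

def p_burglary(alarm, earthquake=None):
    quakes = [earthquake] if earthquake is not None else [False, True]
    num = sum(joint(True, e, alarm) for e in quakes)
    den = sum(joint(b, e, alarm) for b in (False, True) for e in quakes)
    return num / den

print(p_burglary(alarm=True))                     # about 0.57
print(p_burglary(alarm=True, earthquake=True))    # about 0.03: explained away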

I don't mean "worlds" in the quantum mechanics sense, but as a metaphor for resultant states after applying some number of physical permutations to the starting reality.

"Permutations"?  That would be... something that results in several worlds, all of which have the could-property?  But where does the permuting come from?  How does only one of the could-worlds become real, if it is a matter of physics?  After you ask these questions you realize that you're looking at the same problem as before, which means that saying "permutations" didn't help reduce it.

Why can a machine practice free will? If free will is possible for humans, then it is a set of properties or functions of the physical laws (described by them, contained by them in some way) and a machine might then implement them in whatever fashion a human brain does. Free will would not be a characteristic of A or [A'], but the process applied to A to reach a specific element of [A'].

Again, if you remember that the correct answer is "Forward search process that labels certain options as reachable before judging them and maximizing", you can see the Mind Projection Fallacy on display in trying to put the could-ness property into basic physics.

So...I think I successfully avoided using reference to "might" or "probable" or other synonyms and closely related words.


Now I'll read your post to see if I'm going the wrong way.

Afterward, Brandon posted:

Hmm. I think I was working in the right direction, but your procedural analogy let you get closer to the moving parts. But I think "reachability" as you used it and "realizable" as I used it (or was thinking of it) seem to be working along similar lines.

I hate to have to put it this way, because it seems harsh: but it's important to realize that, no, this wasn't working in the right direction.


Again to be fair, Marcello and I used to generate raw material like this on paper - but it was clearly labeled as raw material; the point was to keep banging our heads on opaque mysteries of cognition, until a split opened up that helped reduce the problem to smaller pieces, or looking at the same mystery from a different angle helped us get a grasp on at least its surface.


Nonetheless:  Free will is a Confusing Problem.  It is a comparatively lesser Confusing Problem but it is still a Confusing Problem.  Confusing Problems are not like the cheap damn problems that college students are taught to solve using safe prepackaged methods.  They are not even like the Difficult Problems that mathematicians tackle without knowing how to solve them.  Even the simplest Confusing Problem can send generations of high-g philosophers wailing into the abyss.  This is not high school homework, this is beisutsukai monastery homework.


So you have got to be extremely careful.  And hold yourself, not to "high standards", but to your best dream of perfection.  Part of that is being very aware of how little progress you have made.  Remember that one major reason why AIfolk and philosophers bounce off hard problems and create mere modal logics, is that they get a publication and the illusion of progress.  They rewarded themselves too easily.  If I sound harsh in my criticism, it's because I'm trying to correct a problem of too much mercy.


They overestimated how much progress they had made, and of what kind.  That's why I'm not giving you credit for generating raw material that could be useful to you in pinning down the problem.  If you'd said you were doing that, I would have given you credit.

I'm sure that some people have achieved insight by accident from their raw material, so that they moved from the illusion of progress to real progress.  But that sort of thing cannot be left to accident.  More often, the illusion of progress is fatal: your mind is happy, content, and no longer working on the difficult, scary, painful, opaque, not-sure-how-to-get-inside part of the mystery.

Generating lots of false starts and dissecting them is one methodology for working on an opaque problem.  (Instantly deadly if you can't detect false starts, of course.)  Yet be careful not to credit yourself too much for trying!  Do not pay yourself for labor, only results!  To run away from a problem, or bounce off it into easier problems, or to convince yourself you have solved it with a black box, is common.  To stick to the truly difficult part of a difficult problem, is rare.  But do not congratulate yourself too much for this difficult feat of rationality; it is only the ante you pay to sit down at the high-stakes table, not a victory.

The only sign-of-success, as distinguished from a sign-of-working-hard, is getting closer to the moving parts.

And when you are finally unconfused, of course all the black boxes you invented earlier, will seem in retrospect to have been "driving in the general direction" of the truth then revealed inside them.  But the goal is reduction, and only this counts as success; driving in a general direction is easy by comparison.


So you must cultivate a sharp and particular awareness of confusion, and know that your raw material and false starts are only raw material and false starts - though it's not the sort of thing that funding agencies want to hear.  Academia creates incentives against the necessary standard; you can only be harsh about your own progress, when you've just done something so spectacular that you can be sure people will smile at your downplaying and say, "What wonderful modesty!"


The ultimate slippery thing you must grasp firmly until you penetrate is your mind.

\n\n\n\n\n\n\n" } }, { "_id": "rw3oKLjG85BdKNXS2", "title": "Passing the Recursive Buck", "pageUrl": "https://www.lesswrong.com/posts/rw3oKLjG85BdKNXS2/passing-the-recursive-buck", "postedAt": "2008-06-16T04:50:49.000Z", "baseScore": 49, "voteCount": 34, "commentCount": 17, "url": null, "contents": { "documentId": "rw3oKLjG85BdKNXS2", "html": "

Followup to: Artificial Addition, The Ultimate Source, Gödel, Escher, Bach: An Eternal Golden Braid


Yesterday, I talked about what happens when you look at your own mind, reflecting upon yourself, and search for the source of your own decisions.


Let's say you decided to run into a burning orphanage and save a young child.  You look back on the decision and wonder: was your empathy with children, your ability to imagine what it would be like to be on fire, the decisive factor?  Did it compel you to run into the orphanage?


No, you reason, because if you'd needed to prevent a nuclear weapon from going off in the building next door, you would have run to disarm the nuke, and let the orphanage burn.  So a burning orphanage is not something that controls you directly.  Your fear certainly didn't control you.  And as for your duties, it seems like you could have ignored them (if you wanted to).

So if none of these parts of yourself that you focus upon, are of themselves decisive... then there must be some extra and additional thing that is decisive!  And that, of course, would be this "you" thing that is looking over your thoughts from outside.


Imagine if human beings had a tiny bit more introspective ability than they have today, so that they could see a single neuron firing - but only one neuron at a time.  We might even have the ability to modify the firing of this neuron.  It would seem, then, like no individual neuron was in control of us, and that indeed, we had the power to control the neuron.  It would seem we were in control of our neurons, not controlled by them.  Whenever you look at a single neuron, it seems not to control you, that-which-is-looking...


So it might look like you were moved to run into the orphanage by your built-in empathy or your inculcated morals, and that this overcame your fear of fire.  But really there was an additional you, beyond these emotions, which chose to give in to the good emotions rather than the bad ones.  That's moral responsibility, innit?


But wait - how does this additional you decide to flow along with your empathy and not your fear?  Is it programmed to always be good?  Does it roll a die and do whatever the die says?

Ordinarily, this question is not asked.  Once you say that you choose of your own "free will", you've explained the choice - drawn a causal arrow coming from a black box, which feels like an explanation.  At this point, you're supposed to stop asking questions, not look inside the black box to figure out how it works.

But what if the one does ask the question, "How did I choose to go along with my empathy and duty, and not my fear and selfishness?"


In real life, this question probably doesn't have an answer.  We are the sum of our parts, as a hand is its fingers, palm, and thumb.  Empathy and duty overpowered fear and selfishness - that was the choice.  It may be that no one factor was decisive, but all of them together are you just as much as you are your brain.  You did not choose for heroic factors to overpower antiheroic ones; that overpowering was your choice.  Or else where did the meta-choice to favor heroic factors come from?  I don't think there would, in fact, have been a deliberation on the meta-choice, in which you actually pondered the consequences of accepting first-order emotions and duties.  There probably would not have been a detailed philosophical exploration, as you stood in front of that burning orphanage.

But memory is malleable.  So if you look back and ask "How did I choose that?" and try to actually answer with something beyond the "free will!" stopsign, your mind is liable to start generating a philosophical discussion of morality that never happened.


And then it will appear that no particular argument in the philosophical discussion is absolutely decisive, since you could (primitive reachable) have decided to ignore it.


Clearly, there's an extra additional you that decides which philosophical arguments deserve attention.


You see where this is going.  If you don't see where this is going, then you haven't read Douglas Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid, which makes you incomplete as a human being.

The general antipattern at work might be called "Passing the Recursive Buck".  It is closely related to Artificial Addition (your mind generates infinite lists of surface phenomena, using a compact process you can't see into) and Mysterious Answer.  This antipattern happens when you try to look into a black box, fail, and explain the black box using another black box.


Passing the Recursive Buck is rarer than Mysterious Answer, because most people just stop on the first black box.  (When was the last time you heard postulated an infinite hierarchy of Gods, none of which create themselves, as the answer to the First Cause?)


How do you stop a recursive buck from passing?


You use the counter-pattern:  The Recursive Buck Stops Here.


But how do you apply this counter-pattern?


You use the recursive buck-stopping trick.


And what does it take to execute this trick?


Recursive buck stopping talent.


And how do you develop this talent?


Get a lot of practice stopping recursive bucks.


Ahem.


So, the first trick is learning to notice when you pass the buck.

"The Recursive Buck Stops Here" tells you that you shouldn't be trying to solve the puzzle of your black box, by looking for another black box inside it.  To appeal to meta-free-will, or to say "Free will ordinal hierarchy!" is just another way of running away from the scary real problem, which is to look inside the damn box.


This pattern was on display in Causality and Moral Responsibility:


Even if the system is - gasp! - deterministic, you will see a system that, lo and behold, deterministically adds numbers.  Even if someone - gasp! - designed the system, you will see that it was designed to add numbers.  Even if the system was - gasp! - caused, you will see that it was caused to add numbers.


To stop passing the recursive buck, you must find the non-mysterious structure that simply is the buck.

Take the Cartesian homunculus.  Light passes into your eyes, but how can you find the shape of an apple in the visual information?  Is there a little person inside your head, watching the light on a screen, and pointing out the apples?  But then does the little person have a metahomunculus inside their head?  If you have the notion of a "visual cortex", and you know even a little about how specifically the visual cortex processes and reconstructs the transcoded retinal information, then you can see that there is no need for a meta-visual-cortex that looks at the first visual cortex.  The information is being transformed into cognitively usable form right there in the neurons.


I've already given a deal of advice on how to notice black boxes.


And I've even given some advice on how to start looking inside.

But ultimately, each black box is its own scientific problem.  There is no easy, comforting, safe procedure you follow to "look inside".  They aren't all as straightforward as free will.  My main meta-advice has to do with subtasks like recognizing the black box, not running away screaming into the night, and not stopping on a fake explanation.

" } }, { "_id": "EsMhFZuycZorZNRF5", "title": "The Ultimate Source", "pageUrl": "https://www.lesswrong.com/posts/EsMhFZuycZorZNRF5/the-ultimate-source", "postedAt": "2008-06-15T09:01:41.000Z", "baseScore": 80, "voteCount": 61, "commentCount": 80, "url": null, "contents": { "documentId": "EsMhFZuycZorZNRF5", "html": "

This post is part of the Solution to "Free Will".
Followup to: Timeless Control, Possibility and Could-ness


Faced with a burning orphanage, you ponder your next action for long agonizing moments, uncertain of what you will do.  Finally, the thought of a burning child overcomes your fear of fire, and you run into the building and haul out a toddler.

There's a strain of philosophy which says that this scenario is not sufficient for what they call "free will".  It's not enough for your thoughts, your agonizing, your fear and your empathy, to finally give rise to a judgment.  It's not enough to be the source of your decisions.


No, you have to be the ultimate source of your decisions.  If anything else in your past, such as the initial condition of your brain, fully determined your decision, then clearly you did not.


But we already drew this diagram:

[Image: Fwmarkov_3]

As previously discussed, the left-hand structure is preferred, even given deterministic physics, because it is more local; and because it is not possible to compute the Future without computing the Present as an intermediate.

So it is proper to say, "If-counterfactual the past changed and the present remained the same, the future would remain the same," but not to say, "If the past remained the same and the present changed, the future would remain the same."


Are you the true source of your decision to run into the burning orphanage?  What if your parents once told you that it was right for people to help one another?  What if it were the case that, if your parents hadn't told you so, you wouldn't have run into the burning orphanage?  Doesn't that mean that your parents made the decision for you to run into the burning orphanage, rather than you?


On several grounds, no:


If it were counterfactually the case that your parents hadn't raised you to be good, then it would counterfactually be the case that a different person would stand in front of the burning orphanage.  It would be a different person who arrived at a different decision.  And how can you be anyone other than yourself?  Your parents may have helped pluck you out of Platonic person-space to stand in front of the orphanage, but is that the same as controlling the decision of your point in Platonic person-space?


Or:  If we imagine that your parents had raised you differently, and yet somehow, exactly the same brain had ended up standing in front of the orphanage, then the same action would have resulted.  Your present self and brain, screens off the influence of your parents - this is true even if the past fully determines the future.


But above all:  There is no single true cause of an event.  Causality proceeds in directed acyclic networks.  I see no good way, within the modern understanding of causality, to translate the idea that an event must have a single cause.  Every asteroid large enough to reach Earth's surface could have prevented the assassination of John F. Kennedy, if it had been in the right place to strike Lee Harvey Oswald.  There can be any number of prior events, which if they had counterfactually occurred differently, would have changed the present.  After spending even a small amount of time working with the directed acyclic graphs of causality, the idea that a decision can only have a single true source, sounds just plain odd.

So there is no contradiction between "My decision caused me to run into the burning orphanage", "My upbringing caused me to run into the burning orphanage", "Natural selection built me in such fashion that I ran into the burning orphanage", and so on.  Events have long causal histories, not single true causes.

Knowing the intuitions behind "free will", we can construct other intuition pumps.  The feeling of freedom comes from the combination of not knowing which decision you'll make, and of having the options labeled as primitively reachable in your planning algorithm.  So if we wanted to pump someone's intuition against the argument "Reading superhero comics as a child, is the true source of your decision to rescue those toddlers", we reply:

"But even if you visualize Batman running into the burning building, you might not immediately know which choice you'll make (standard source of feeling free); and you could still take either action if you wanted to (note correctly phrased counterfactual and appeal to primitive reachability).  The comic-book authors didn't visualize this exact scenario or its exact consequences; they didn't agonize about it (they didn't run the decision algorithm you're running).  So the comic-book authors did not make this decision for you.  Though they may have contributed to it being you who stands before the burning orphanage and chooses, rather than someone else."


How could anyone possibly believe that they are the ultimate and only source of their actions?  Do they think they have no past?

If we, for a moment, forget that we know all this that we know, we can see what a believer in "ultimate free will" might say to the comic-book argument:  "Yes, I read comic books as a kid, but the comic books didn't reach into my brain and force me to run into the orphanage.  Other people read comic books and don't become more heroic.  I chose it."


Let's say that you're confronting some complicated moral dilemma that, unlike a burning orphanage, gives you some time to agonize - say, thirty minutes; that ought to be enough time.


You might find, looking over each factor one by one, that none of them seem perfectly decisive - to force a decision entirely on their own.


You might incorrectly conclude that if no one factor is decisive, all of them together can't be decisive, and that there's some extra perfectly decisive thing that is your free will.

Looking back on your decision to run into a burning orphanage, you might reason, "But I could have stayed out of that orphanage, if I'd needed to run into the building next door in order to prevent a nuclear war.  Clearly, burning orphanages don't compel me to enter them.  Therefore, I must have made an extra choice to allow my empathy with children to govern my actions.  My nature does not command me, unless I choose to let it do so."


Well, yes, your empathy with children could have been overridden by your desire to prevent nuclear war, if (counterfactual) that had been at stake.

This is actually a hand-vs.-fingers confusion; all of the factors in your decision, plus the dynamics governing their combination, are your will.  But if you don't realize this, then it will seem like no individual part of yourself has "control" of you, from which you will incorrectly conclude that there is something beyond their sum that is the ultimate source of control.


But this is like reasoning that if no single neuron in your brain could control your choice in spite of every other neuron, then all your neurons together must not control your choice either.


Whenever you reflect, and focus your whole attention down upon a single part of yourself, it will seem that the part does not make your decision, that it is not you, because the you-that-sees could choose to override it (it is a primitively reachable option).  But when all of the parts of yourself that you see, and all the parts that you do not see, are added up together, they are you; they are even that which reflects upon itself.


So now we have the intuitions that:


The combination of these intuitions has led philosophy into strange veins indeed.

I once saw one such vein described neatly in terms of "Author" control and "Author*" control, though I can't seem to find or look up the paper.


Consider the control that an Author has over the characters in their books.  Say, the sort of control that I have over Brennan.


By an act of will, I can make Brennan decide to step off a cliff.  I can also, by an act of will, control Brennan's inner nature; I can make him more or less heroic, empathic, kindly, wise, angry, or sorrowful.  I can even make Brennan stupider, or smarter up to the limits of my own intelligence.  I am entirely responsible for Brennan's past, both the good parts and the bad parts; I decided everything that would happen to him, over the course of his whole life.


So you might think that having Author-like control over ourselves - which we obviously don't - would at least be sufficient for free will.


But wait!  Why did I decide that Brennan would decide to join the Bayesian Conspiracy?  Well, it is in character for Brennan to do so, at that stage of his life.  But if this had not been true of Brennan, I would have chosen a different character that would join the Bayesian Conspiracy, because I wanted to write about the beisutsukai.  Could I have chosen not to want to write about the Bayesian Conspiracy?


To have Author* self-control is not only to have control over your entire existence and past, but to have initially written your entire existence and past, without having been previously influenced by it - the way that I invented Brennan's life without having previously lived it.  To choose yourself into existence this way, would be Author* control.  (If I remember the paper correctly.)

Paradoxical?  Yes, of course.  The point of the paper was that Author* control is what would be required to be the "ultimate source of your own actions", the way some philosophers seemed to define it.


I don't see how you could manage Author* self-control even with a time machine.


I could write a story in which Jane went back in time and created herself from raw atoms using her knowledge of Artificial Intelligence, and then Jane oversaw and orchestrated her own entire childhood up to the point she went back in time.  Within the story, Jane would have control over her existence and past - but not without having been \"previously\" influenced by them.  And I, as an outside author, would have chosen which Jane went back in time and recreated herself.  If I needed Jane to be a bartender, she would be one.

Even in the unlikely event that, in real life, it is possible to create closed timelike curves, and we find that a self-recreating Jane emerges from the time machine without benefit of human intervention, that Jane still would not have Author* control.  She would not have written her own life without having been "previously" influenced by it.  She might preserve her personality; but would she have originally created it?  And you could stand outside time and look at the cycle, and ask, "Why is this cycle here?"  The answer to that would presumably lie within the laws of physics, rather than Jane having written the laws of physics to create herself.


And you run into exactly the same trouble, if you try to have yourself be the sole ultimate Author* source of even a single particular decision made by you - which is to say it was decided by your beliefs, inculcated morals, evolved emotions, etc. - which is to say your brain calculated it - which is to say physics determined it.  You can't have Author* control over one single decision, even with a time machine.


So a philosopher would say:  Either we don't have free will, or free will doesn't require being the sole ultimate Author* source of your own decisions, QED.


I have a somewhat different perspective, and say:  Your sensation of freely choosing, clearly does not provide you with trustworthy information to the effect that you are the 'ultimate and only source' of your own actions.  This being the case, why attempt to interpret the sensation as having such a meaning, and then say that the sensation is false?


Surely, if we want to know which meaning to attach to a confusing sensation, we should ask why the sensation is there, and under what conditions it is present or absent.

Then I could say something like:  "This sensation of freedom occurs when I believe that I can carry out, without interference, each of multiple actions, such that I do not yet know which of them I will take, but I am in the process of judging their consequences according to my emotions and morals."


This is a condition that can fail in the presence of jail cells, or a decision so overwhelmingly forced that I never perceived any uncertainty about it.

There - now my sensation of freedom indicates something coherent; and most of the time, I will have no reason to doubt the sensation's veracity.  I have no problems about saying that I have "free will" appropriately defined; so long as I am out of jail, uncertain of my own future decision, and living in a lawful universe that gave me emotions and morals whose interaction determines my choices.

Certainly I do not "lack free will" if that means I am in jail, or never uncertain of my future decisions, or in a brain-state where my emotions and morals fail to determine my actions in the usual way.

Usually I don't talk about "free will" at all, of course!  That would be asking for trouble - no, begging for trouble - since the other person doesn't know about my redefinition.  The phrase means far too many things to far too many people, and you could make a good case for tossing it out the window.

But I generally prefer to reinterpret my sensations sensibly, as opposed to refuting a confused interpretation and then calling the sensation "false".

" } }, { "_id": "3buXtNiSK8gcRLMSG", "title": "Possibility and Could-ness", "pageUrl": "https://www.lesswrong.com/posts/3buXtNiSK8gcRLMSG/possibility-and-could-ness", "postedAt": "2008-06-14T04:38:37.000Z", "baseScore": 68, "voteCount": 55, "commentCount": 113, "url": null, "contents": { "documentId": "3buXtNiSK8gcRLMSG", "html": "

This post is part of the Solution to "Free Will".
Followup to: Dissolving the Question, Causality and Moral Responsibility

Planning out upcoming posts, it seems to me that I do, in fact, need to talk about the word could, as in, "But I could have decided not to rescue that toddler from the burning orphanage."

Otherwise, I will set out to talk about Friendly AI, one of these days, and someone will say:  "But it's a machine; it can't make choices, because it couldn't have done anything other than what it did."

So let's talk about this word, "could".  Can you play Rationalist's Taboo against it?  Can you talk about "could" without using synonyms like "can" and "possible"?

Let's talk about this notion of "possibility".  I can tell, to some degree, whether a world is actual or not actual; what does it mean for a world to be "possible"?

I know what it means for there to be "three" apples on a table.  I can verify that experimentally; I know what state of the world corresponds to it.  What does it mean to say that there "could" have been four apples, or "could not" have been four apples?  Can you tell me what state of the world corresponds to that, and how to verify it?  Can you do it without saying "could" or "possible"?

I know what it means for you to rescue a toddler from the orphanage.  What does it mean for you to could-have-not done it?  Can you describe the corresponding state of the world without "could", "possible", "choose", "free", "will", "decide", "can", "able", or "alternative"?


One last chance to take a stab at it, if you want to work out the answer for yourself...


Some of the first Artificial Intelligence systems ever built, were trivially simple planners.  You specify the initial state, and the goal state, and a set of actions that map states onto states; then you search for a series of actions that takes the initial state to the goal state.


Modern AI planners are a hell of a lot more sophisticated than this, but it's amazing how far you can get by understanding the simple math of everything.  There are a number of simple, obvious strategies you can use on a problem like this.  All of the simple strategies will fail on difficult problems; but you can take a course in AI if you want to talk about that part.


There's backward chaining:  Searching back from the goal, to find a tree of states such that you know how to reach the goal from them.  If you happen upon the initial state, you're done.


There's forward chaining:  Searching forward from the start, to grow a tree of states such that you know how to reach them from the initial state.  If you happen upon the goal state, you're done.


Or if you want a slightly less simple algorithm, you can start from both ends and meet in the middle.


Let's talk about the forward chaining algorithm for a moment.

Here, the strategy is to keep an ever-growing collection of states that you know how to reach from the START state, via some sequence of actions and (chains of) consequences.  Call this collection the "reachable from START" states; or equivalently, label all the states in the collection "reachable from START".  If this collection ever swallows the GOAL state - if the GOAL state is ever labeled "reachable from START" - you have a plan.
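
Here is a minimal sketch of forward chaining - assuming hashable states and an actions function that returns each state's one-step consequences; the little city graph below is just a toy stand-in:

    from collections import deque

    def forward_chain(start, goal, actions):
        # Grow the collection of states labeled "reachable from START",
        # breadth-first, until it swallows GOAL.
        parent = {start: None}
        frontier = deque([start])
        while frontier:
            state = frontier.popleft()
            if state == goal:
                plan = []                  # walk the labels back to START
                while state is not None:
                    plan.append(state)
                    state = parent[state]
                return plan[::-1]
            for nxt in actions(state):
                if nxt not in parent:      # newly labeled "reachable"
                    parent[nxt] = state
                    frontier.append(nxt)
        return None                        # GOAL was never labeled reachable

    roads = {"San Jose": ["San Francisco"],
             "San Francisco": ["San Jose", "Berkeley"],
             "Berkeley": ["San Francisco"]}
    print(forward_chain("San Jose", "Berkeley", lambda s: roads.get(s, [])))
    # ['San Jose', 'San Francisco', 'Berkeley']

The parent dictionary is exactly the ever-growing "reachable from START" collection; the plan is read off by walking those labels backward.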

"Reachability" is a transitive property.  If B is reachable from A, and C is reachable from B, then C is reachable from A.  If you know how to drive from San Jose to San Francisco, and from San Francisco to Berkeley, then you know a way to drive from San Jose to Berkeley.  (It may not be the shortest way, but you know a way.)

If you've ever looked over a game-problem and started collecting states you knew how to achieve - looked over a maze, and started collecting points you knew how to reach from START - then you know what "reachability" feels like.  It feels like, "I can get there."  You might or might not be able to get to the GOAL from San Francisco - but at least you know you can get to San Francisco.


You don't actually run out and drive to San Francisco.  You'll wait, and see if you can figure out how to get from San Francisco to GOAL.  But at least you could go to San Francisco any time you wanted to.


(Why would you want to go to San Francisco?  If you figured out how to get from San Francisco to GOAL, of course!)


Human beings cannot search through millions of possibilities one after the other, like an AI algorithm.  But - at least for now - we are often much more clever about which possibilities we do search.


One of the things we do that current planning algorithms don't do (well), is rule out large classes of states using abstract reasoning.  For example, let's say that your goal (or current subgoal) calls for you to cover at least one of these boards using domino 2-tiles.

[Image: Boards_3]


The black square is a missing cell; this leaves 24 cells to be covered with 12 dominos.


You might just dive into the problem, and start trying to cover the first board using dominos - discovering new classes of reachable states:

[Image: Boarddive]


However, you will find after a while that you can't seem to reach a goal state.  Should you move on to the second board, and explore the space of what's reachable there?


But I wouldn't bother with the second board either, if I were you.  If you construct this coloring of the boards:

[Image: Boardsparity]


Then you can see that every domino has to cover one grey and one yellow square.  And only the third board has equal numbers of grey and yellow squares.  So no matter how clever you are with the first and second board, it can't be done.


With one fell swoop of creative abstract reasoning - we constructed the coloring, it was not given to us - we've cut down our search space by a factor of three.  We've reasoned out that the reachable states involving dominos placed on the first and second board, will never include a goal state.
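
The coloring argument can even be checked mechanically - a minimal sketch, with hypothetical missing-cell positions standing in for the boards in the figures above:

    def colorable(missing, size=5):
        # Checkerboard-color every cell of the board minus the missing cell;
        # every domino must cover one cell of each color.
        grey = sum((r + c) % 2 == 0
                   for r in range(size) for c in range(size)
                   if (r, c) != missing)
        yellow = size * size - 1 - grey
        return grey == yellow  # necessary (not sufficient) for a full cover

    # A 5x5 board has 13 cells of one color and 12 of the other, so only
    # removing a majority-color cell leaves equal counts:
    print(colorable(missing=(0, 1)))  # False: no search will ever cover it
    print(colorable(missing=(2, 2)))  # True: the parity obstacle is absent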


Naturally, one characteristic that rules out whole classes of states in the search space, is if you can prove that the state itself is physically impossible.  If you're looking for a way to power your car without all that expensive gasoline, it might seem like a brilliant idea to have a collection of gears that would turn each other while also turning the car's wheels - a perpetual motion machine of the first type.  But because it is a theorem that this is impossible in classical mechanics, we know that every clever thing we can do with classical gears will not suffice to build a perpetual motion machine.  It is as impossible as covering the first board with classical dominos.  So it would make more sense to concentrate on new battery technologies instead.

Surely, what is physically impossible cannot be "reachable"... right?  I mean, you would think...


Oh, yeah... about that free will thing.

So your brain has a planning algorithm - not a deliberate algorithm that you learned in school, but an instinctive planning algorithm.  For all the obvious reasons, this algorithm keeps track of which states have known paths from the start point.  I've termed this label "reachable", but the way the algorithm feels from inside, is that it just feels like you can do it.  Like you could go there any time you wanted.


And what about actions?  They're primitively labeled as reachable; all other reachability is transitive from actions by consequences.  You can throw a rock, and if you throw a rock it will break a window, therefore you can break a window.  If you couldn't throw the rock, you wouldn't be able to break the window.


Don't try to understand this in terms of how it feels to \"be able to\" throw a rock.  Think of it in terms of a simple AI planning algorithm.  Of course the algorithm has to treat the primitive actions as primitively reachable.  Otherwise it will have no planning space in which to search for paths through time.


And similarly, there's an internal algorithmic label for states that have been ruled out:


worldState.possible == 0


So when people hear that the world is deterministic, they translate that into:  "All actions except one are impossible."  This seems to contradict their feeling of being free to choose any action.  The notion of physics following a single line, seems to contradict their perception of a space of possible plans to search through.

The representations in our cognitive algorithms do not feel like representations; they feel like the way the world is.  If your mind constructs a search space of states that would result from the initial state given various actions, it will feel like the search space is out there, like there are certain possibilities.


We've previously discussed how probability is in the mind.  If you are uncertain about whether a classical coin has landed heads or tails, that is a fact about your state of mind, not a property of the coin.  The coin itself is either heads or tails.  But people forget this, and think that coin.probability == 0.5, which is the Mind Projection Fallacy: treating properties of the mind as if they were properties of the external world.


So I doubt it will come as any surprise to my longer-abiding readers, if I say that possibility is also in the mind.

What concrete state of the world - which quarks in which positions - corresponds to "There are three apples on the table, and there could be four apples on the table"?  Having trouble answering that?  Next, say how that world-state is different from "There are three apples on the table, and there couldn't be four apples on the table."  And then it's even more trouble, if you try to describe could-ness in a world in which there are no agents, just apples and tables.  This is a Clue that could-ness and possibility are in your map, not directly in the territory.


What is could-ness, in a state of the world?  What are can-ness and able-ness?  They are what it feels like to have found a chain of actions which, if you output them, would lead from your current state to the could-state.

But do not say, "I could achieve X".  Say rather, "I could reach state X by taking action Y, if I wanted".  The key phrase is "if I wanted".  I could eat that banana, if I wanted.  I could step off that cliff there - if, for some reason, I wanted to.


Where does the wanting come from?  Don't think in terms of what it feels like to want, or decide something; try thinking in terms of algorithms.  For a search algorithm to output some particular action - choose - it must first carry out a process where it assumes many possible actions as having been taken, and extrapolates the consequences of those actions.

Perhaps this algorithm is "deterministic", if you stand outside Time to say it.  But you can't write a decision algorithm that works by just directly outputting the only action it can possibly output.  You can't save on computing power that way.  The algorithm has to assume many different possible actions as having been taken, and extrapolate their consequences, and then choose an action whose consequences match the goal.  (Or choose the action whose probabilistic consequences rank highest in the utility function, etc.  And not all planning processes work by forward chaining, etc.)

You might imagine the decision algorithm as saying:  "Suppose the output of this algorithm were action A, then state X would follow.  Suppose the output of this algorithm were action B, then state Y would follow."  This is the proper cashing-out of could, as in, "I could do either X or Y."  Having computed this, the algorithm can only then conclude:  "Y ranks above X in the Preference Ordering.  The output of this algorithm is therefore B.  Return B."


The algorithm, therefore, cannot produce an output without extrapolating the consequences of itself producing many different outputs.  All but one of the outputs being considered is counterfactual; but which output is the factual one cannot be known to the algorithm until it has finished running.
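
In code, the shape being described might look like this - a minimal sketch, with a toy world-model and preference function of my own invention:

    def decide(state, actions, extrapolate, preference):
        # Assume each candidate action as having been taken, extrapolate
        # the consequences, and only then pick.  Every candidate action
        # is treated as primitively reachable.
        options = {}
        for action in actions:
            outcome = extrapolate(state, action)    # "suppose the output were..."
            options[action] = preference(outcome)
        # Until this line runs, the algorithm cannot know which of its
        # could-outputs is the factual one.
        return max(options, key=options.get)

    consequences = {"eat banana": "fed", "step off cliff": "splat"}
    choice = decide(
        state="hungry, standing near a cliff",
        actions=consequences.keys(),
        extrapolate=lambda s, a: consequences[a],
        preference=lambda outcome: {"fed": 1.0, "splat": -100.0}[outcome],
    )
    print(choice)  # 'eat banana' - though "step off cliff" was reachable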

A bit tangled, eh?  No wonder humans get confused about "free will".


You could eat the banana, if you wanted.  And you could jump off a cliff, if you wanted.  These statements are both true, though you are rather more likely to want one than the other.

You could even flatly say, "I could jump off a cliff" and regard this as true - if you construe could-ness according to reachability, and count actions as primitively reachable.  But this does not challenge deterministic physics; you will either end up wanting to jump, or not wanting to jump.

The statement, "I could jump off the cliff, if I chose to" is entirely compatible with "It is physically impossible that I will jump off that cliff".  It need only be physically impossible for you to choose to jump off a cliff - not physically impossible for any simple reason, perhaps, just a complex fact about what your brain will and will not choose.


Defining things appropriately, you can even endorse both of the statements:

- "I could jump off the cliff, if I chose to."
- "It is physically impossible that I will jump off that cliff."

How can this happen?  If all of an agent's actions are primitive-reachable from that agent's point-of-view, but the agent's decision algorithm is so constituted as to never choose to jump off a cliff.

You could even say that "could" for an action is always defined relative to the agent who takes that action, in which case I can simultaneously make the following two statements:


If that sounds odd, well, no wonder people get confused about free will!

But you would have to be very careful to use a definition like that one consistently.  "Could" has another closely related meaning in which it refers to the provision of at least a small amount of probability.  This feels similar, because when you're evaluating actions that you haven't yet ruled out taking, then you will assign at least a small probability to actually taking those actions - otherwise you wouldn't be investigating them.  Yet "I could have a heart attack at any time" and "I could have a heart attack any time I wanted to" are not the same usage of could, though they are confusingly similar.

You can only decide by going through an intermediate state where you do not yet know what you will decide.  But the map is not the territory.  It is not required that the laws of physics be random about that which you do not know.  Indeed, if you were to decide randomly, then you could scarcely be said to be in "control".  To determine your decision, you need to be in a lawful world.


It is not required that the lawfulness of reality be disrupted at that point, where there are several things you could do if you wanted to do them; but you do not yet know their consequences, or you have not finished evaluating the consequences; and so you do not yet know which thing you will choose to do.


A blank map does not correspond to a blank territory.  Not even an agonizingly uncertain map corresponds to an agonizingly uncertain territory.

(Next in the free will solution sequence is "The Ultimate Source", dealing with the intuition that we have some chooser-faculty beyond any particular desire or reason.  As always, the interested reader is advised to first consider this question on their own - why would it feel like we are more than the sum of our impulses?)

" } }, { "_id": "FqJGfSrXphrcwpiZe", "title": "Causality and Moral Responsibility", "pageUrl": "https://www.lesswrong.com/posts/FqJGfSrXphrcwpiZe/causality-and-moral-responsibility", "postedAt": "2008-06-13T08:34:44.000Z", "baseScore": 56, "voteCount": 50, "commentCount": 55, "url": null, "contents": { "documentId": "FqJGfSrXphrcwpiZe", "html": "

Followup to: Thou Art Physics, Timeless Control, Hand vs. Fingers, Explaining vs. Explaining Away


I know (or could readily rediscover) how to build a binary adder from logic gates.  If I can figure out how to make individual logic gates from Legos or ant trails or rolling ping-pong balls, then I can add two 32-bit unsigned integers using Legos or ant trails or ping-pong balls.
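
For concreteness, here is one such construction - a minimal sketch, with NAND as the arbitrarily chosen primitive gate; implement that one function in Legos or ant trails or ping-pong balls, and the rest of the addition follows unchanged:

    def nand(a, b):          # the sole primitive; any substrate will do
        return 1 - (a & b)

    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor_(a, b): return and_(or_(a, b), nand(a, b))

    def add32(x, y):
        # Ripple-carry addition of two 32-bit unsigned integers.
        result, carry = 0, 0
        for i in range(32):
            a, b = (x >> i) & 1, (y >> i) & 1
            s = xor_(xor_(a, b), carry)
            carry = or_(and_(a, b), and_(xor_(a, b), carry))
            result |= s << i
        return result  # wraps mod 2**32, like the hardware would

    print(add32(1234567, 89012345))  # 90246912
    print(add32(2**32 - 1, 1))       # 0: overflow wraps around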


Someone who had no idea how I'd just done the trick, might accuse me of having created "artificial addition" rather than "real addition".


But once you see the essence, the structure that is addition, then you will automatically see addition whenever you see that structure.  Legos, ant trails, or ping-pong balls.


Even if the system is - gasp! - deterministic, you will see a system that, lo and behold, deterministically adds numbers.  Even if someone - gasp! - designed the system, you will see that it was designed to add numbers.  Even if the system was - gasp! - caused, you will see that it was caused to add numbers.


Let's say that John is standing in front of an orphanage which is on fire, but not quite an inferno yet; trying to decide whether to run in and grab a baby or two.  Let us suppose two slightly different versions of John - slightly different initial conditions.  They both agonize.  They both are torn between fear and duty.  Both are tempted to run, and know how guilty they would feel, for the rest of their lives, if they ran.  Both feel the call to save the children.  And finally, in the end, John-1 runs away, and John-2 runs in and grabs a toddler, getting out moments before the flames consume the entranceway.


This, it seems to me, is the very essence of moral responsibility - in the one case, for a cowardly choice; in the other case, for a heroic one.  And I don't see what difference it makes, if John's decision was physically deterministic given his initial conditions, or if John's decision was preplanned by some alien creator that built him out of carbon atoms, or even if - worst of all - there exists some set of understandable psychological factors that were the very substance of John and caused his decision.

Imagine yourself caught in an agonizing moral dilemma.  If the burning orphanage doesn't work for you - if you wouldn't feel conflicted about that, one way or the other - then substitute something else.  Maybe something where you weren't even sure what the "good" option was.


Maybe you're deciding whether to invest your money in a startup that seems like it might pay off 50-to-1, or donate it to your-favorite-Cause; if you invest, you might be able to donate later... but is that what really moves you, or do you just want to retain the possibility of fabulous wealth?  Should you donate regularly now, to ensure that you keep your good-guy status later?  And if so, how much?


I'm not proposing a general answer to this problem, just offering it as an example of something else that might count as a real moral dilemma, even if you wouldn't feel conflicted about a burning orphanage.

For me, the analogous painful dilemma might be how much time to spend on relatively easy and fun things that might help set up more AI researchers in the future - like writing about rationality - versus just forgetting about the outside world and trying to work strictly on AI.


Imagine yourself caught in an agonizing moral dilemma.  If my examples don't work, make something up.  Imagine having not yet made your decision.  Imagine yourself not yet knowing which decision you will make.  Imagine that you care, that you feel a weight of moral responsibility; so that it seems to you that, by this choice, you might condemn or redeem yourself.


Okay, now imagine that someone comes along and says, "You're a physically deterministic system."

I don't see how that makes the dilemma of the burning orphanage, or the ticking clock of AI, any less agonizing.  I don't see how that diminishes the moral responsibility, at all.  It just says that if you take a hundred identical copies of me, they will all make the same decision.  But which decision will we all make?  That will be determined by my agonizing, my weighing of duties, my self-doubts, and my final effort to be good.  (This is the idea of timeless control:  If the result is deterministic, it is still caused and controlled by that portion of the deterministic physics which is myself.  To cry "determinism" is only to step outside Time and see that the control is lawful.)  So, not yet knowing the output of the deterministic process that is myself, and being duty-bound to determine it as best I can, the weight of moral responsibility is no less.


Someone comes along and says, "An alien built you, and it built you to make a particular decision in this case, but I won't tell you what it is."


Imagine a zillion possible people, perhaps slight variants of me, floating in the Platonic space of computations.  Ignore quantum mechanics for the moment, so that each possible variant of me comes to only one decision.  (Perhaps we can approximate a true quantum human as a deterministic machine plus a prerecorded tape containing the results of quantum branches.)  Then each of these computations must agonize, and must choose, and must determine their deterministic output as best they can.  Now an alien reaches into this space, and plucks out one person, and instantiates them.  How does this change anything about the moral responsibility that attaches to how this person made their choice, out there in Platonic space... if you see what I'm trying to get at here?


The alien can choose which mind design to make real, but that doesn't necessarily change the moral responsibility within the mind.


There are plenty of possible mind designs that wouldn't make agonizing moral decisions, and wouldn't be their own bearers of moral responsibility.  There are mind designs that would just play back one decision like a tape recorder, without weighing alternatives or consequences, without evaluating duties or being tempted or making sacrifices.  But if the mind design happens to be you... and you know your duties, but you don't yet know your decision... then surely, that is the substance of moral responsibility, if responsibility can be instantiated in anything real at all?


We could think of this as an extremely generalized, Generalized Anti-Zombie Principle:  If you are going to talk about moral responsibility, it ought not to be affected by anything that plays no role in your brain.  It shouldn't matter whether I came into existence as a result of natural selection, or whether an alien built me up from scratch five minutes ago, presuming that the result is physically identical.  I, at least, regard myself as having moral responsibility.  I am responsible here and now; not knowing my future decisions, I must determine them.  What difference does the alien in my past make, if the past is screened off from my present?

Am I suggesting that if an alien had created Lenin, knowing that Lenin would enslave millions, then Lenin would still be a jerk?  Yes, that's exactly what I'm suggesting.  The alien would be a bigger jerk.  But if we assume that Lenin made his decisions after the fashion of an ordinary human brain, and not by virtue of some alien mechanism seizing and overriding his decisions, then Lenin would still be exactly as much of a jerk as before.

And as for there being psychological factors that determine your decision - well, you've got to be something, and you're too big to be an atom.  If you're going to talk about moral responsibility at all - and I do regard myself as responsible, when I confront my dilemmas - then you've got to be able to be something, and that-which-is-you must be able to do something that comes to a decision, while still being morally responsible.


Just like a calculator is adding, even though it adds deterministically, and even though it was designed to add, and even though it is made of quarks rather than tiny digits.

" } }, { "_id": "7HMSBiEiCfLKzd2gc", "title": "Quantum Mechanics and Personal Identity", "pageUrl": "https://www.lesswrong.com/posts/7HMSBiEiCfLKzd2gc/quantum-mechanics-and-personal-identity", "postedAt": "2008-06-12T07:13:49.000Z", "baseScore": 21, "voteCount": 12, "commentCount": 28, "url": null, "contents": { "documentId": "7HMSBiEiCfLKzd2gc", "html": "

This is one of several shortened indices into the Quantum Physics Sequence.


Suppose that someone built an exact duplicate of you on Mars, quark by quark - to the maximum level of resolution that quantum physics permits, which is considerably higher resolution than ordinary thermal uncertainty.  Would the duplicate be really you, or just a copy?


It may seem unlikely a priori that physics, or any experimental science, could have something to say about this issue.


But it's amazing, the things that science can tell you.

In this case, it turns out, science can rule out a notion of personal identity that depends on your being composed of the same atoms - because modern physics has taken the concept of "same atom" and thrown it out the window.  There are no tiny billiard balls with individual identities.  It's experimentally ruled out.

"Huh?  What do you mean, physics has gotten rid of the concept of 'same atom'?"

No one can be told this, alas, because it involves replacing the concept of little billiard balls with a different kind of math.  If you read through the introduction that follows to basic quantum mechanics, you will be able to see that the naive concept of personal identity - the notion that you are made up of tiny pieces with individual identities that persist through time, and that your identity follows the "same" tiny pieces - is physical nonsense.  The universe just doesn't work in a way which would let that idea be meaningful.

There are more abstract and philosophical arguments that you could use to rule out atom-following theories of personal identity.  But in our case, it so happens that we live in a universe where the issue is flatly settled by standard physics.  It's like proposing that personal identity follows phlogiston.  You could argue against it on philosophical grounds - but we happen to live in a universe where "phlogiston" itself is just a mistaken theory to be discarded, which settles the issue much more abruptly.


And no, this does not rely on a woo-woo mysterian interpretation of quantum mechanics.  The other purpose of this series of posts, was to demystify quantum mechanics and reveal it as non-mysterious.  It just happens to be a fact that once you get to the non-mysterious version of quantum mechanics, you find that the reason why physics once looked mysterious, has to do with reality being made up of different stuff than little billiard balls.  Complex amplitudes in configuration spaces, to be exact, though here I jump ahead of myself.


If you read all the way to the end, you will, I hope, gain an entirely new perspective on where your "identity" is located... once the little billiard balls are ruled out.

You will even be able to see, I hope, that if your brain were non-destructively frozen (e.g. by vitrification in liquid nitrogen); and a computer model of the synapses, neural states, and other brain behaviors were constructed a hundred years later; then it would preserve exactly everything about you that was preserved by going to sleep one night and waking up the next morning.


Mind you, my audacious claim is not that uploading preserves identity - this audacious claim has been made many times before.  I am claiming that once you grasp modern physics, you can actually see this as obvious, even if it would not be obvious to someone thinking in terms of Newtonian billiard balls in classical physics.  This is much more audacious, and I am well aware of how unlikely that sounds; but if you read all the way to the end, it is fully supported.

\n" } }, { "_id": "9cgBF6BQ2TRB3Hy4E", "title": "And the Winner is... Many-Worlds!", "pageUrl": "https://www.lesswrong.com/posts/9cgBF6BQ2TRB3Hy4E/and-the-winner-is-many-worlds", "postedAt": "2008-06-12T06:05:33.000Z", "baseScore": 30, "voteCount": 23, "commentCount": 13, "url": null, "contents": { "documentId": "9cgBF6BQ2TRB3Hy4E", "html": "

This is one of several shortened indices into the Quantum Physics Sequence.


Macroscopic quantum superpositions, a.k.a. the "many-worlds interpretation" or MWI, was proposed in 1957 and brought to the general attention of the scientific community in 1970.  Ever since, MWI has steadily gained in popularity.  As of 2008, MWI may or may not be endorsed by a majority of theoretical physicists (attempted opinion polls conflict on this point).  Of course, Science is not supposed to be an opinion poll, but anyone who tells you that MWI is "science fiction" is simply ignorant.


When a theory is slowly persuading scientists despite all academic inertia, and more and more graduate students grow up familiar with it, at what point should one go ahead and declare a temporary winner pending new evidence?


Reading through the referenced posts will give you a very basic introduction to quantum mechanics - algebra is involved, but no calculus - by which you may nonetheless gain an understanding sufficient to see, and not just be told, that the modern case for many-worlds has become overwhelming.  Not just plausible, not just strong, but overwhelming.  Single-world versions of quantum mechanics just don't work, and all the legendary confusingness and mysteriousness of quantum mechanics stems from this essential fact.  But enough telling - let me show you.

" } }, { "_id": "rp5GvQpakBdL9nnfS", "title": "Quantum Physics Revealed As Non-Mysterious", "pageUrl": "https://www.lesswrong.com/posts/rp5GvQpakBdL9nnfS/quantum-physics-revealed-as-non-mysterious", "postedAt": "2008-06-12T05:20:16.000Z", "baseScore": 13, "voteCount": 11, "commentCount": 26, "url": null, "contents": { "documentId": "rp5GvQpakBdL9nnfS", "html": "

This is one of several shortened indices into the Quantum Physics Sequence.

Hello!  You may have been directed to this page because you said something along the lines of "Quantum physics shows that reality doesn't exist apart from our observation of it," or "Science has disproved the idea of an objective reality," or even just "Quantum physics is one of the great mysteries of modern science; no one understands how it works."

There was a time, roughly the first half-century after quantum physics was invented, when this was more or less true.  Certainly, when quantum physics was just being discovered, scientists were very confused indeed!  But time passed, and science moved on.  If you're confused about a phenomenon, that's a fact about your own state of mind, not a fact about the phenomenon itself - there are mysterious questions, but not mysterious answers.  Science eventually figured out what was going on, and why things looked so strange at first.

The series of posts indexed below will show you - not just tell you - what's really going on down there.  To be honest, you're not going to be able to follow this if algebra scares you.  But there won't be any calculus, either.

Some optional preliminaries you might want to read:


And here's the main sequence:

\n\n" } }, { "_id": "apbcLXz5zB7PXfgg2", "title": "An Intuitive Explanation of Quantum Mechanics", "pageUrl": "https://www.lesswrong.com/posts/apbcLXz5zB7PXfgg2/an-intuitive-explanation-of-quantum-mechanics", "postedAt": "2008-06-12T03:45:56.000Z", "baseScore": 18, "voteCount": 17, "commentCount": 3, "url": null, "contents": { "documentId": "apbcLXz5zB7PXfgg2", "html": "

This is one of several shortened indices into the Quantum Physics Sequence.  It is intended for students who are having trouble grasping the meaning of quantum math; or for people who want to learn the simple math of everything and are getting around to quantum mechanics.


There's a widespread belief that quantum mechanics is supposed to be confusing.  This is not a good frame of mind for either a teacher or a student.  Complicated math can be difficult but it is never, ever allowed to be confusing.


And I find that legendarily "confusing" subjects often are not really all that complicated as math, particularly if you just want a very basic - but still mathematical - grasp on what goes on down there.


This series takes you as far into quantum mechanics as you can go with only algebra.  Any further and you should get a real physics textbook - once you know what all the math means.

" } }, { "_id": "hc9Eg6erp6hk9bWhn", "title": "The Quantum Physics Sequence", "pageUrl": "https://www.lesswrong.com/posts/hc9Eg6erp6hk9bWhn/the-quantum-physics-sequence", "postedAt": "2008-06-11T03:42:14.000Z", "baseScore": 75, "voteCount": 54, "commentCount": 26, "url": null, "contents": { "documentId": "hc9Eg6erp6hk9bWhn", "html": "

This is an inclusive guide to the series of posts on quantum mechanics that began on April 9th, 2008, including the digressions into related topics (such as the difference between Science and Bayesianism) and some of the preliminary reading.


You may also be interested in one of the less inclusive post guides, such as:


My current plan calls for the quantum physics series to eventually be turned into one or more e-books.


Preliminaries:


Basic Quantum Mechanics:


Many Worlds:


(At this point in the sequence, most of the mathematical background has been built up, and we are ready to evaluate interpretations of quantum mechanics.)


Timeless Physics:


(Now we depart from what is nailed down in standard physics, and enter into more speculative realms - particularly Julian Barbour's Machian timeless physics.)


Rationality and Science:


(Okay, so it was many-worlds all along and collapse theories are silly.  Did first-half-of-20th-century physicists really screw up that badly?  How did they go wrong?  Why haven't modern physicists unanimously endorsed many-worlds, if the issue is that clear-cut?  What lessons can we learn from this whole debacle?)

\n" } }, { "_id": "vtr8gjbSxo9coyCfF", "title": "Eliezer's Post Dependencies; Book Notification; Graphic Designer Wanted", "pageUrl": "https://www.lesswrong.com/posts/vtr8gjbSxo9coyCfF/eliezer-s-post-dependencies-book-notification-graphic", "postedAt": "2008-06-10T01:18:14.000Z", "baseScore": 6, "voteCount": 5, "commentCount": 19, "url": null, "contents": { "documentId": "vtr8gjbSxo9coyCfF", "html": "

I'm going to try and produce summaries of the quantum physics series today or tomorrow.


Andrew Hay has produced a neat graph of (explicit) dependencies among my Overcoming Bias posts - an automatically generated map of the "Followup to" structure:

Eliezer's Post Dependencies (includes only posts with dependencies)
All of my posts (including posts without dependencies)

Subscribe here to future email notifications for when the popular book comes out (which may be a year or two later), and/or for when I start producing e-books:

Notifications for the rationality book, or for any other stuff I produce

(Thanks to Christian Rovner for setting up PHPList.)


Sometime in the next two weeks, I need to get at least one PowerPoint presentation of mine re-produced to professional standards of graphic design.  Ideally, in a form that will let me make small modifications myself.  This is likely to lead into other graphic design work on producing the ebooks, redesigning my personal website, creating Bayesian Conspiracy T-shirts, etc.


I am not looking for an unpaid volunteer.  I am looking for a professional graphic designer who can do sporadic small units of work quickly.

Desired style for the presentation:  Professional-looking and easy-to-read (as opposed to flamboyant / elaborate).  I already have the presentation content, in black text on white background.  I would like it to look like it was produced by a grownup, which is beyond my own skill.  Emails to sentience@pobox.com, please include your fee schedule and a link to your portfolio.

" } }, { "_id": "PxN5iwS2CTCYi4oAP", "title": "Against Devil's Advocacy", "pageUrl": "https://www.lesswrong.com/posts/PxN5iwS2CTCYi4oAP/against-devil-s-advocacy", "postedAt": "2008-06-09T04:15:29.000Z", "baseScore": 51, "voteCount": 44, "commentCount": 60, "url": null, "contents": { "documentId": "PxN5iwS2CTCYi4oAP", "html": "

From an article by Michael Ruse:

Richard Dawkins once called me a "creep." He did so very publicly but meant no personal offense, and I took none: We were, and still are, friends. The cause of his ire—his anguish, even—was that, in the course of a public discussion, I was defending a position I did not truly hold. We philosophers are always doing this; it's a version of the reductio ad absurdum argument. We do so partly to stimulate debate (especially in the classroom), partly to see how far a position can be pushed before it collapses (and why the collapse), and partly (let us be frank) out of sheer bloody-mindedness, because we like to rile the opposition.


Dawkins, however, has the moral purity—some would say the moral rigidity—of the evangelical Christian or the committed feminist. Not even for the sake of argument can he endorse something that he thinks false. To do so is not just mistaken, he feels; in some deep sense, it is wrong. Life is serious, and there are evils to be fought. There must be no compromise or equivocation, even for pedagogical reasons. As the Quakers say, "Let your yea be yea, and your nay, nay."

Michael Ruse doesn't get it.

When I was a kid and my father was teaching me about skepticism -

(Dad was an avid skeptic and Martin Gardner / James Randi fan, as well as being an Orthodox Jew.  Let that be a lesson on the anti-healing power of compartmentalization.)

- he used the example of the hypothesis:  "There is an object one foot across in the asteroid belt composed entirely of chocolate cake."  You would have to search the whole asteroid belt to disprove this hypothesis.  But though this hypothesis is very hard to disprove, there aren't good arguments for it.


And the child-Eliezer asked his mind to search for arguments that there was a chocolate cake in the asteroid belt.  Lo, his mind returned the reply:  "Since the asteroid-belt-chocolate-cake is one of the classic examples of a bad hypothesis, if anyone ever invents a time machine, some prankster will toss a chocolate cake back into the 20th-century asteroid belt, making it true all along."


Thus - at a very young age - I discovered that my mind could, if asked, invent arguments for anything.


I know people whose sanity has been destroyed by this discovery.  They conclude that Reason can be used to argue for anything.  And so there is no point in arguing that God doesn't exist, because you could just as well argue that God does exist.  Nothing left but to believe whatever you want.


Having given up, they develop whole philosophies of self-inflation to make their despair seem Deeply Wise.  If they catch you trying to use Reason, they will smile and pat you on the head and say, "Oh, someday you'll discover that you can argue for anything."


Perhaps even now, my readers are thinking, "Uh-oh, Eliezer can rationalize anything, that's not a good sign."


But you know... being mentally agile doesn't always doom you to disaster.  I mean, you might expect that it would.  Yet sometimes practice turns out to be different from theory.


Rationalization came too easily to me.  It was visibly just a game.


If I had been less imaginative and more easily stumped - if I had not found myself able to argue for any proposition no matter how bizarre - then perhaps I would have confused the activity with thinking.


But I could argue even for chocolate cake in the asteroid belt.  It wasn't even difficult; my mind coughed up the argument immediately.  It was very clear that this was fake thinking and not real thinking.  I never for a moment confused the game with real life.  I didn't start thinking there might really be a chocolate cake in the asteroid belt.


You might expect that any child with enough mental agility to come up with arguments for anything, would surely be doomed.  But intelligence doesn't always do so much damage as you might think.  In this case, it just set me up, at a very early age, to distinguish "reasoning" from "rationalizing".  They felt different.


Perhaps I'm misremembering... but it seems to me that, even at that young age, I looked at my mind's amazing clever argument for a time-traveling chocolate cake, and thought:  I've got to avoid doing that.

(Though there are much more subtle cognitive implementations of rationalizing processes, than blatant, obvious, conscious search for favorable arguments.  A wordless flinch away from an idea can undo you as surely as a deliberate search for arguments against it.  Those subtler processes, I only began to notice years later.)

I picked up an intuitive sense that real thinking was that which could force you into an answer whether you liked it or not, and fake thinking was that which could argue for anything.


This was an incredibly valuable lesson -

(Though, like many principles that my young self obtained by reversing stupidity, it gave good advice on specific concrete problems; but went wildly astray when I tried to use it to make abstract deductions, e.g. about the nature of morality.)

- which was one of the major drivers behind my break with Judaism.  The elaborate arguments and counterarguments of ancient rabbis, looked like the kind of fake thinking I did to argue that there was chocolate cake in the asteroid belt.  Only the rabbis had forgotten it was a game, and were actually taking it seriously.


Believe me, I understand the Traditional argument behind Devil's Advocacy.  By arguing the opposing position, you increase your mental flexibility.  You shake yourself out of your old shoes.  You get a chance to gather evidence against your position, instead of arguing for it.  You rotate things around, see them from a different viewpoint.  Turnabout is fair play, so you turn about, to play fair.


Perhaps this is what Michael Ruse was thinking, when he accused Richard Dawkins of "moral rigidity".


I surely don't mean to teach people to say:  "Since I believe in fairies, I ought not to expect to find any good arguments against their existence, therefore I will not search because the mental effort has a low expected utility."  That comes under the heading of:  If you want to shoot your foot off, it is never the least bit difficult to do so.


Maybe there are some stages of life, or some states of mind, in which you can be helped by trying to play Devil's Advocate.  Students who have genuinely never thought of trying to search for arguments on both sides of an issue, may be helped by the notion of "Devil's Advocate".


But with anyone in this state of mind, I would sooner begin by teaching them that policy debates should not appear one-sided.  There is no expectation against having strong arguments on both sides of a policy debate; single actions have multiple consequences.  If you can't think of strong arguments against your most precious favored policies, or strong arguments for policies that you hate but which other people endorse, then indeed, you very likely have a problem that could be described as "failing to see the other points of view".


You, dear reader, are probably a sophisticated enough reasoner that if you manage to get yourself stuck in an advanced rut, dutifully playing Devil's Advocate won't get you out of it.  You'll just subconsciously avoid any Devil's arguments that make you genuinely nervous, and then congratulate yourself for doing your duty.  People at this level need stronger medicine.  (So far I've only covered medium-strength medicine.)


If you can bring yourself to a state of real doubt and genuine curiosity, there is no need for Devil's Advocacy.  You can investigate the contrary position because you think it might be really genuinely true, not because you are playing games with time-traveling chocolate cakes.  If you cannot find this trace of true doubt within yourself, can merely playing Devil's Advocate help you?


I have no trouble thinking of arguments for why the Singularity won't happen for another 50 years.  With some effort, I can make a case for why it might not happen in 100 years.  I can also think of plausible-sounding scenarios in which the Singularity happens in two minutes, i.e., someone ran a covert research project and it is finishing right now.  I can think of plausible arguments for 10-year, 20-year, 30-year, and 40-year timeframes.


This is not because I am good at playing Devil's Advocate and coming up with clever arguments.  It's because I really don't know.  A true doubt exists in each case, and I can follow my doubt to find the source of a genuine argument.  Or if you prefer: I really don't know, because I can come up with all these plausible arguments.


On the other hand, it is really hard for me to visualize the proposition that there is no kind of mind substantially stronger than a human one.  I have trouble believing that the human brain, which just barely suffices to run a technological civilization that can build a computer, is also the theoretical upper limit of effective intelligence.  I cannot argue effectively for that, because I do not believe it.  Or if you prefer, I do not believe it, because I cannot argue effectively for it.  If you want that idea argued, find someone who really believes it.  Since a very young age, I've been endeavoring to get away from those modes of thought where you can argue for just anything.


In the state of mind and stage of life where you are trying to distinguish rationality from rationalization, and trying to tell the difference between weak arguments and strong arguments, Devil's Advocate cannot lead you to unfake modes of reasoning.  Its only power is that it may perhaps show you the fake modes which operate equally well on any side, and tell you when you are uncertain.


There is no chess grandmaster who can play only black, or only white; but in the battles of Reason, a soldier who fights with equal strength on any side has zero force.


That's what Richard Dawkins understands that Michael Ruse doesn't - that Reason is not a game.


Added:  Brandon argues that Devil's Advocacy is most importantly a social rather than individual process, which aspect I confess I wasn't thinking about.

" } }, { "_id": "9frnZEGz86MkaRJx7", "title": "Bloggingheads: Yudkowsky and Horgan", "pageUrl": "https://www.lesswrong.com/posts/9frnZEGz86MkaRJx7/bloggingheads-yudkowsky-and-horgan", "postedAt": "2008-06-07T22:09:00.000Z", "baseScore": 7, "voteCount": 6, "commentCount": 37, "url": null, "contents": { "documentId": "9frnZEGz86MkaRJx7", "html": "

I appear today on Bloggingheads.tv, in "Science Saturday: Singularity Edition", speaking with John Horgan about the Singularity.  I talked too much.  This episode needed to be around two hours longer.


One question I fumbled at 62:30 was "What's the strongest opposition you've seen to Singularity ideas?"  The basic problem is that nearly everyone who attacks the Singularity is either completely unacquainted with the existing thinking, or they attack Kurzweil, and in any case it's more a collection of disconnected broadsides (often mostly ad hominem) than a coherent criticism.  There's no equivalent in Singularity studies of Richard Jones's critique of nanotechnology - which I don't agree with, but at least Jones has read Drexler.  People who don't buy the Singularity don't put in the time and hard work to criticize it properly.


What I should have done, though, was interpreted the question more charitably as "What's the strongest opposition to strong AI or transhumanism?" in which case there's Sir Roger Penrose, Jaron Lanier, Leon Kass, and many others.  None of these are good arguments - or I would have to accept them! - but at least they are painstakingly crafted arguments, and something like organized opposition.

" } }, { "_id": "YYLmZFEGKsjCKQZut", "title": "Timeless Control", "pageUrl": "https://www.lesswrong.com/posts/YYLmZFEGKsjCKQZut/timeless-control", "postedAt": "2008-06-07T05:16:48.000Z", "baseScore": 47, "voteCount": 45, "commentCount": 69, "url": null, "contents": { "documentId": "YYLmZFEGKsjCKQZut", "html": "

Followup to: Timeless Physics, Timeless Causality, Thou Art Physics

People hear about many-worlds, which is deterministic, or about timeless physics, and ask:

If the future is determined by physics, how can anyone control it?

In Thou Art Physics, I pointed out that since you are within physics, anything you control is necessarily controlled by physics.  Today we will talk about a different aspect of the confusion, the words "determined" and "control".

The "Block Universe" is the classical term for the universe considered from outside Time.  Even without timeless physics, Special Relativity outlaws any global space of simultaneity, which is widely believed to suggest the Block Universe—spacetime as one vast 4D block.

When you take a perspective outside time, you have to be careful not to let your old, timeful intuitions run wild in the absence of their subject matter.

In the Block Universe, the future is not determined before you make your choice.  "Before" is a timeful word.  Once you descend so far as to start talking about time, then, of course, the future comes "after" the past, not "before" it.

If we're going to take a timeless perspective, then the past and the future have not always been there.  The Block Universe is not something that hangs, motionless and static, lasting for a very long time.  You might try to visualize the Block Universe hanging in front of your mind's eye, but then your mind's eye is running the clock while the universe stays still.  Nor does the Block Universe exist for just a single second, and then disappear.  It is not instantaneous.  It is not eternal.  It does not last for exactly 15 seconds.  All these are timeful statements.  The Block Universe is simply there.

Perhaps people imagine a Determinator—not so much an agent, perhaps, but a mysterious entity labeled "Determinism"—which, at "the dawn of time", say, 6:00am, writes down your choice at 7:00am, and separately, writes the outcome at 7:02am.  In which case, indeed, the future would be determined before you made your decision...

[Figure: Fwdeterminism_2]

In this model, the Determinator writes the script for the Block Universe at 6:00am.  And then time—the global time of the universe—continues, running through the Block Universe and realizing the script.

At 7:00am you're trying to decide to turn on the light bulb.  But the Determinator already decided at 6:00am whether the light bulb would be on or off at 7:02am.  Back at the dawn of time when Destiny wrote out the Block Universe, which was scripted before you started experiencing it...

This, perhaps, is the kind of unspoken, intuitive mental model that might lead people to talk about "determinism" implying that the future is determined before you make your decision.

Even without the concept of the Block Universe or timeless physics, this is probably what goes on when people start talking about "deterministic physics" in which "the whole course of history" was fixed at "the dawn of time" and therefore your choices have no effect on the "future".

As described in Timeless Causality, "cause" and "effect" are things we talk about by pointing to relations within the Block Universe.  E.g., we might expect to see human colonies separated by an expanding cosmological horizon; we can expect to find correlation between two regions that communicate with a mutual point in the "past", but have no light-lines to any mutual points in their "future".  But we wouldn't expect to find a human colony in a distant supercluster, having arrived from the other side of the universe; we should not find correlation between regions with a shared "future" but no shared "past".  This is how we can experimentally observe the orientation of the Block Universe, the direction of the river that never flows.

[Figure: Fwcausality]

If you are going to talk about causality at all—and personally, I think we should, because the universe doesn't make much sense without it—then causality applies to relations within the Block Universe, not outside it.

The Past is just there, and the Future is just there, but the relations between them have a certain kind of structure—whose ultimate nature, I do not conceive myself to understand—but which we do know a bit about mathematically; the structure is called "causality".

(I am not ruling out the possibility of causality that extends outside the Block Universe—say, some reason why the laws of physics are what they are.  We can have timeless causal relations, remember?  But the causal relations between, say, "dropping a glass" and "water spilling out", or between "deciding to do something" and "doing it", are causal relations embedded within the Block.)

One of the things we can do with graphical models of causality—networks of little directed arrows—is construe counterfactuals:  Statements about "what would have happened if X had occurred, instead of Y".

These counterfactuals are untestable, unobservable, and do not actually exist anywhere that I've been able to find.  Counterfactuals are not facts, unless you count them as mathematical properties of certain causal diagrams.  We can define statistical properties we expect to see, given a causal hypothesis; but counterfactuals themselves are not observable.  We cannot see what "would have happened, if I hadn't dropped the glass".

Nonetheless, if you draw the causal graph that the statistics force you to draw, within our Block Universe, and you construct the counterfactual, then you get statements like:  "If I hadn't dropped the glass, the water wouldn't have spilled."

If your mind contains the causal model that has "Determinism" as the cause of both the "Past" and the "Future", then you will start saying things like, But it was determined before the dawn of time that the water would spill—so not dropping the glass would have made no difference.  This would be the standard counterfactual, on the causal graph in which "Past" and "Future" are both children of some mutual ancestor, but have no connection between them.
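To make the two rival graphs concrete, here is a minimal sketch in Python (my illustration, not part of the original post; the variable names and structural equations are invented for the example) of counterfactual surgery on each:

```python
# Graph A (what the statistics force you to draw): dropping causes spilling.
def world_a(dropped: bool) -> dict:
    return {"dropped": dropped, "spilled": dropped}

factual = world_a(dropped=True)            # I dropped the glass; the water spilled.

# Counterfactual surgery: force "dropped" to False, recompute everything downstream.
surgery_a = world_a(dropped=False)
assert surgery_a["spilled"] is False       # "the water wouldn't have spilled"

# Graph B: "Determinism" is a mutual ancestor of Past and Future, with no
# arrow between them.  Surgery on "dropped" leaves "spilled" untouched,
# yielding "not dropping the glass would have made no difference."
script = {"dropped": True, "spilled": True}   # written at "the dawn of time"
surgery_b = {**script, "dropped": False}
assert surgery_b["spilled"] is True
```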

And then there's the idea that, if you can predict the whole course of the universe by looking at the state at the beginning of time, the present must have no influence on the future...

[Figure: Fwmarkov_2]

Surely, if you can determine the Future just by looking at the Past, there's no need to look at the Present?

The problem with the right-side graph is twofold:  First, it violates the beautiful locality of reality; we're supposing causal relations that go outside the immediate neighborhoods of space/time/configuration.  And second, you can't compute the Future from the Past, except by also computing something that looks exactly like the Present; which computation just creates another copy of the Block Universe (if that statement even makes any sense), it does not affect any of the causal relations within it.

One must avoid mixing up timeless and timeful thinking.  E.g., trying to have "Determinism" acting on things before they happen.  Determinism is a timeless viewpoint, so it doesn't mix well with words like "before".

The same thing happens if you try to talk about how the Past at 6:30am determines the Future at 7:30am, and therefore, the state at 7:30am is already determined at 6:30am, so you can't control it at 7:00am, because it was determined at 6:30am earlier...

What is determined is a timeless mathematical structure whose interior includes 7:00am and 7:30am.  That which you might be tempted to say "already exists" at 6:00am, does not exist before 7:00am, it is something whose existence includes the Now of 7:00am and the Now of 7:30am.

If you imagine a counterfactual surgery on the interior of the structure at 7:00am, then, according to the statistically correct way to draw the arrows of causality within the structure, the 7:30am part would be affected as well.

So it is exactly correct to say, on the one hand, "The whole future course of the universe was determined by its state at 6:30am this morning," and, on the other, "If I hadn't dropped the glass, the water wouldn't have spilled."  In the former case you're talking about a mathematical object outside time; in the latter case you're talking about cause and effect inside the mathematical object.  Part of what is determined is that dropping the glass in the Now of 7:00:00am, causes the water to spill in the Now of 7:00:01am.

And as pointed out in Thou Art Physics, you are inside that mathematical object too.  So are your thoughts, emotions, morals, goals, beliefs, and all else that goes into the way you determine your decisions.

To say "the future is already written" is a fine example of mixed-up timeful and timeless thinking.  The future is.  It is not "already".  What is it that writes the future?  In the timeless causal relations, we do.  That is what is written: that our choices control the future.

But how can you "control" something without changing it?

"Change" is a word that makes sense within time, and only within time.  One observes a macroscopically persistent object, like, say, a lamp, and compares its state at 7:00am to its state at 7:02am.  If the two states are different, then we say that "the lamp" changed over time.

In Timeless Physics, I observed that, while things can change from one time to another, a single moment of time is never observed to change:

At 7:00am, the lamp is off.  At 7:01am, I flip the switch...  At 7:02am, the lamp is fully bright.  Between 7:00am and 7:02am, the lamp changed from OFF to ON.

But have you ever seen the future change from one time to another?  Have you wandered by a lamp at exactly 7:02am, and seen that it is OFF; then, a bit later, looked in again on "the lamp at exactly 7:02am", and discovered that it is now ON?

But if you have to change a single moment of time, in order to be said to "control" something, you really are hosed.

Forget this whole business of deterministic physics for a moment.

Let's say there was some way to change a single moment of time.

We would then need some kind of meta-time over which time could "change".

The lamp's state would need to change from "OFF at 7:02am at 3:00meta-am" to "ON at 7:02am at 3:01meta-am".

But wait!  Have you ever seen a lamp change from OFF at 7:02am at 3:00meta-am, to ON at 7:02am at 3:00meta-am?  No!  A single instant of meta-time never changes, so you cannot change it, and you have no control.

Now we need meta-meta-time.

So if we're going to keep our concepts of "cause" and "control" and "choose"—and to discard them would leave a heck of a lot of observations unexplained—then we've got to figure out some way to define them within time, within that which is written, within the Block Universe, within... well... reality.

Control lets you change things from one time to another; you can turn on a lamp that was previously off.  That's one kind of control, and a fine sort of control it is to have.  But trying to pull this stunt on a single moment of time, is a type error.

If you isolate a subsystem of reality, like a rock rolling downhill, then you can mathematically define the future-in-isolation of that subsystem; you can take the subsystem in isolation, and compute what would happen to it if you did not act on it.  In this case, what would happen is that the rock would reach the bottom of the hill.  This future-in-isolation is not something that actually happens in the Block Universe; it is a computable property of the subsystem as it exists at some particular moment.  If you reach in from outside the isolation, you can stop the rock from rolling.  Now if you walk away, and again leave the system isolated, the future-in-isolation will be that the rock just stays there.  But perhaps someone will reach in, and tip the rock over and start it rolling again.  The hill is not really isolated—the universe is a continuous whole—but we can imagine what would happen if the hill were isolated.  This is a "counterfactual", so called because it is not factual.
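As a toy sketch of that idea (mine, not the post's; the one-dimensional hill dynamics are invented), the future-in-isolation really is just a computable property of the subsystem's state at a given moment:

```python
def evolve(position: float, wedged: bool) -> float:
    """One tick of the isolated hill: an unwedged rock rolls one unit downhill."""
    return position if wedged else max(0.0, position - 1.0)

def future_in_isolation(position: float, wedged: bool, steps: int = 100) -> float:
    """What would happen to the subsystem if nothing outside acted on it."""
    for _ in range(steps):
        position = evolve(position, wedged)
    return position

print(future_in_isolation(10.0, wedged=False))  # 0.0 -- the rock reaches the bottom

# Reaching in from outside the isolation changes the subsystem's state;
# the future-in-isolation computed from the new state changes with it.
print(future_in_isolation(6.0, wedged=True))    # 6.0 -- the rock just stays there
```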

The future-in-isolation of a subsystem can change from one time to another, as the subsystem itself changes over time as the result of actions from outside.  The future of the Grand System that includes everything, cannot change as the result of outside action.

People want to place themselves outside the System, see themselves separated from it by a Cartesian boundary.  But even if free will could act outside physics to change the Block Universe, we would just have a Grand System that included free-will+physics, and the future would be fully determined by that.  If you have "freer will" we just have an Even Grander System, and so on.

It's hard to put yourself outside Reality.  Whatever is, is real.

Control lets you determine single moments of time (though they do not change from one meta-time to another).  You can change what would have happened, from one time to another.  But you cannot change what does happen—just determine it.  Control means that you are what writes the written future, according to the laws of causality as they exist within the writing.

Or maybe look at it this way:  Pretend, for a moment, that naive views of free will were correct.  The future "doesn't exist yet" and can be "changed".  (Note:  How are these two statements compatible?)  Suppose that you exercise your "free will" at 6:30am to rescue three toddlers from a burning orphanage, changing their future from horrible flamey death at 7:00am, to happy gurgling contentment at 7:00am.

But now it is 7:30am, and I say:

"Aha!  The past is fixed and can never be altered!  So now you cannot ever have chosen any differently than you did choose.  Furthermore, the actual outcome of your actions can never change either; the outcome is now fixed, so even if your past choice did now change, the past outcome wouldn't, because they are both just determined by "The Past".  While your will was once free at 6:30am to change the future at 7:00am, it is now 7:30am and this freedom no longer exists.  So now your will at 6:30am is no longer free.  How can your past will have been free, now that there is only one past?  Therefore I do not now assign you any moral credit for saving the orphanage; you no longer could have chosen differently from how you chose."

In the Block Universe, the "past" and the "future" are just perspectives, taken from some point within the Block.  So, if the fixation of the past doesn't prevent the embedded decisions from having (had?) the property of freedom, why should the determination of the future prevent those embedded decisions from having the same property?

In the Block Universe, the Future is just like the Past: it contains the Nows of people making choices that determine their outcomes, which do not change from one meta-time to another.

And given the way we draw the causal arrows, it is correct to form the (un-observable) counterfactuals, "If I hadn't saved those children from the orphanage, they would be dead," and "If I don't think carefully, my thoughts will end up in Outer Mongolia."  One is a counterfactual over the past, and one is a counterfactual over the future; but they are both as correct as a counterfactual can be.

The next step in analyzing the cognitive issues surrounding free will, is to take apart the word "could"—as in "I could have decided not to save the children from the orphanage."  As always, I encourage the reader to try to get it in advance—this one is easier if you know a certain simple algorithm from Artificial Intelligence.

PPS:  It all adds up to normality.

Part of The Quantum Physics Sequence

Next post: "The Failures of Eld Science"

Previous post: "Thou Art Physics"

" } }, { "_id": "NEeW7eSXThPz7o4Ne", "title": "Thou Art Physics", "pageUrl": "https://www.lesswrong.com/posts/NEeW7eSXThPz7o4Ne/thou-art-physics", "postedAt": "2008-06-06T06:37:01.000Z", "baseScore": 163, "voteCount": 117, "commentCount": 88, "url": null, "contents": { "documentId": "NEeW7eSXThPz7o4Ne", "html": "

Three months ago—jeebers, has it really been that long?—I posed the following homework assignment: Do a stack trace of the human cognitive algorithms that produce debates about “free will.” Note that this task is strongly distinguished from arguing that free will does or does not exist.

Now, as expected, people are asking, “If the future is determined, how can our choices control it?” The wise reader can guess that it all adds up to normality; but this leaves the question of how.

People hear: “The universe runs like clockwork; physics is deterministic; the future is fixed.” And their minds form a causal network that looks like this:

Here we see the causes “Me” and “Physics,” competing to determine the state of the “Future” effect. If the “Future” is fully determined by “Physics,” then obviously there is no room for it to be affected by “Me.”

This causal network is not an explicit philosophical belief. It’s implicit— a background representation of the brain, controlling which philosophical arguments seem “reasonable.” It just seems like the way things are.

Every now and then, another neuroscience press release appears, claiming that, because researchers used an fMRI to spot the brain doing something-or-other during a decision process, it’s not you who chooses, it’s your brain.

Likewise that old chestnut, “Reductionism undermines rationality itself. Because then, every time you said something, it wouldn’t be the result of reasoning about the evidence—it would be merely quarks bopping around.”

Of course the actual diagram should be:


Or better yet:

Why is this not obvious? Because there are many levels of organization that separate our models of our thoughts—our emotions, our beliefs, our agonizing indecisions, and our final choices—from our models of electrons and quarks.

We can intuitively visualize that a hand is made of fingers (and thumb and palm). To ask whether it’s really our hand that picks something up, or merely our fingers, thumb, and palm, is transparently a wrong question.

But the gap between physics and cognition cannot be crossed by direct visualization. No one can visualize atoms making up a person, the way they can see fingers making up a hand.

And so it requires constant vigilance to maintain your perception of yourself as an entity within physics.

This vigilance is one of the great keys to philosophy, like the Mind Projection Fallacy. You will recall that it is this point which I nominated as having tripped up the quantum physicists who failed to imagine macroscopic decoherence; they did not think to apply the laws to themselves.

Beliefs, desires, emotions, morals, goals, imaginations, anticipations, sensory perceptions, fleeting wishes, ideals, temptations… You might call this the “surface layer” of the mind, the parts-of-self that people can see even without science. If I say, “It is not you who determines the future, it is your desires, plans, and actions that determine the future,” you can readily see the part-whole relations. It is immediately visible, like fingers making up a hand. There are other part-whole relations all the way down to physics, but they are not immediately visible.

“Compatibilism” is the philosophical position that “free will” can be intuitively and satisfyingly defined in such a way as to be compatible with deterministic physics. “Incompatibilism” is the position that free will and determinism are incompatible.

My position might perhaps be called “Requiredism.” When agency, choice, control, and moral responsibility are cashed out in a sensible way, they require determinism—at least some patches of determinism within the universe. If you choose, and plan, and act, and bring some future into being, in accordance with your desire, then all this requires a lawful sort of reality; you cannot do it amid utter chaos. There must be order over at least those parts of reality that are being controlled by you. You are within physics, and so you/physics have determined the future. If it were not determined by physics, it could not be determined by you.

Or perhaps I should say, “If the future were not determined by reality, it could not be determined by you,” or “If the future were not determined by something, it could not be determined by you.” You don’t need neuroscience or physics to push naive definitions of free will into incoherence. If the mind were not embodied in the brain, it would be embodied in something else; there would be some real thing that was a mind. If the future were not determined by physics, it would be determined by something, some law, some order, some grand reality that included you within it.

But if the laws of physics control us, then how can we be said to control ourselves?

Turn it around: If the laws of physics did not control us, how could we possibly control ourselves?

How could thoughts judge other thoughts, how could emotions conflict with each other, how could one course of action appear best, how could we pass from uncertainty to certainty about our own plans, in the midst of utter chaos?

If we were not in reality, where could we be?

The future is determined by physics. What kind of physics? The kind of physics that includes the actions of human beings.

People’s choices are determined by physics. What kind of physics? The kind of physics that includes weighing decisions, considering possible outcomes, judging them, being tempted, following morals, rationalizing transgressions, trying to do better…

There is no point where a quark swoops in from Pluto and overrides all this.

The thoughts of your decision process are all real, they are all something. But a thought is too big and complicated to be an atom. So thoughts are made of smaller things, and our name for the stuff that stuff is made of is “physics.”

Physics underlies our decisions and includes our decisions. It does not explain them away.

Remember, physics adds up to normality; it’s your cognitive algorithms that generate confusion.

" } }, { "_id": "qcYCAxYZT4Xp9iMZY", "title": "Living in Many Worlds", "pageUrl": "https://www.lesswrong.com/posts/qcYCAxYZT4Xp9iMZY/living-in-many-worlds", "postedAt": "2008-06-05T02:24:05.000Z", "baseScore": 62, "voteCount": 50, "commentCount": 81, "url": null, "contents": { "documentId": "qcYCAxYZT4Xp9iMZY", "html": "

Some commenters have recently expressed disturbance at the thought of constantly splitting into zillions of other people, as is the straightforward and unavoidable prediction of quantum mechanics.

Others have confessed themselves unclear as to the implications of many-worlds for planning: If you decide to buckle your seat belt in this world, does that increase the chance of another self unbuckling their seat belt? Are you being selfish at their expense?

Just remember Egan’s Law: It all adds up to normality.

(After Greg Egan, in Quarantine.[1])

Frank Sulloway said [2]:

Ironically, psychoanalysis has it over Darwinism precisely because its predictions are so outlandish and its explanations are so counterintuitive that we think, Is that really true? How radical! Freud’s ideas are so intriguing that people are willing to pay for them, while one of the great disadvantages of Darwinism is that we feel we know it already, because, in a sense, we do.

When Einstein overthrew the Newtonian version of gravity, apples didn’t stop falling, planets didn’t swerve into the Sun. Every new theory of physics must capture the successful predictions of the old theory it displaced; it should predict that the sky will be blue, rather than green.

So don’t think that many-worlds is there to make strange, radical, exciting predictions. It all adds up to normality.

Then why should anyone care?

Because there was once asked the question, fascinating unto a rationalist: What all adds up to normality?

And the answer to this question turns out to be: quantum mechanics. It is quantum mechanics that adds up to normality.

If there were something else there instead of quantum mechanics, then the world would look strange and unusual.

Bear this in mind, when you are wondering how to live in the strange new universe of many worlds: You have always been there.

Religions, anthropologists tell us, usually exhibit a property called minimal counterintuitiveness; they are startling enough to be memorable, but not so bizarre as to be difficult to memorize. Anubis has the head of a dog, which makes him memorable, but the rest of him is the body of a man. Spirits can see through walls; but they still become hungry.

But physics is not a religion, set to surprise you just exactly enough to be memorable. The underlying phenomena are so counterintuitive that it takes long study for humans to come to grips with them. But the surface phenomena are entirely ordinary. You will never catch a glimpse of another world out of the corner of your eye. You will never hear the voice of some other self. That is unambiguously prohibited outright by the laws. Sorry, you’re just schizophrenic.

The act of making decisions has no special interaction with the process that branches worlds. In your mind, in your imagination, a decision seems like a branching point where the world could go two different ways. But you would feel just the same uncertainty, visualize just the same alternatives, if there were only one world. That’s what people thought for centuries before quantum mechanics, and they still visualized alternative outcomes that could result from their decisions.

Decision and decoherence are entirely orthogonal concepts. If your brain never became decoherent, then that single cognitive process would still have to imagine different choices and their different outcomes. And a rock, which makes no decisions, obeys the same laws of quantum mechanics as anything else, and splits frantically as it lies in one place.

You don’t split when you come to a decision in particular, any more than you particularly split when you take a breath. You’re just splitting all the time as the result of decoherence, which has nothing to do with choices.

There is a population of worlds, and in each world, it all adds up to normality: apples don’t stop falling. In each world, people choose the course that seems best to them. Maybe they happen on a different line of thinking, and see new implications or miss others, and come to a different choice. But it’s not that one world chooses each choice. It’s not that one version of you chooses what seems best, and another version chooses what seems worst. In each world, apples go on falling and people go on doing what seems like a good idea.

Yes, you can nitpick exceptions to this rule, but they’re normal exceptions. It all adds up to normality, in all the worlds.

You cannot “choose which world to end up in.” In all the worlds, people’s choices determine outcomes in the same way they would in just one single world.

The choice you make here does not have some strange balancing influence on some world elsewhere. There is no causal communication between decoherent worlds. In each world, people’s choices control the future of that world, not some other world.

If you can imagine decisionmaking in one world, you can imagine decision-making in many worlds: just have the world constantly splitting while otherwise obeying all the same rules.

In no world does two plus two equal five. In no world can spaceships travel faster than light. All the quantum worlds obey our laws of physics; their existence is asserted in the first place by our laws of physics. Since the beginning, not one unusual thing has ever happened, in this or any other world. They are all lawful.

Are there horrible worlds out there, which are utterly beyond your ability to affect? Sure. And horrible things happened during the twelfth century, which are also beyond your ability to affect. But the twelfth century is not your responsibility, because it has, as the quaint phrase goes, “already happened.” I would suggest that you consider every world that is not in your future to be part of the “generalized past.”

Live in your own world. Before you knew about quantum physics, you would not have been tempted to try living in a world that did not seem to exist. Your decisions should add up to this same normality: you shouldn’t try to live in a quantum world you can’t communicate with.

Your decision theory should (almost always) be the same, whether you suppose that there is a 90% probability of something happening, or if it will happen in 9 out of 10 worlds. Now, because people have trouble handling probabilities, it may be helpful to visualize something happening in 9 out of 10 worlds. But this just helps you use normal decision theory.
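A quick check (my example, not the post's; the utilities are made up) that the two framings give the same answer:

```python
import math

# Expected utility from a probability of 0.9...
p, u_happen, u_otherwise = 0.9, 100.0, 0.0
eu_from_probability = p * u_happen + (1 - p) * u_otherwise

# ...versus averaging over ten equal-measure worlds, nine with the event.
worlds = [u_happen] * 9 + [u_otherwise]
eu_from_worlds = sum(worlds) / len(worlds)

assert math.isclose(eu_from_probability, eu_from_worlds)   # both ~90.0
```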

Now is a good time to begin learning how to shut up and multiply. As I note in Lotteries: A Waste of Hope:

The human brain doesn’t do 64-bit floating-point arithmetic, and it can’t devalue the emotional force of a pleasant anticipation by a factor of 0.00000001 without dropping the line of reasoning entirely.

And in New Improved Lottery:

Between zero chance of becoming wealthy, and epsilon chance, there is an order-of-epsilon difference. If you doubt this, let epsilon equal one over googolplex.

If you’re thinking about a world that could arise in a lawful way, but whose probability is a quadrillion to one, and something very pleasant or very awful is happening in this world . . . well, it does probably exist, if it is lawful. But you should try to release one quadrillionth as many neurotransmitters, in your reward centers or your aversive centers, so that you can weigh that world appropriately in your decisions. If you don’t think you can do that . . . don’t bother thinking about it.

Otherwise you might as well go out and buy a lottery ticket using a quantum random number, a strategy that is guaranteed to result in a very tiny mega-win.

Or here’s another way of thinking about it: Are you considering expending some mental energy on a world whose frequency in your future is less than a trillionth? Then go get a 10-sided die from your local gaming store, and, before you begin thinking about that strange world, start rolling the die. If the die comes up 9 twelve times in a row, then you can think about that world. Otherwise don’t waste your time; thought-time is a resource to be expended wisely.

You can roll the dice as many times as you like, but you can’t think about the world until 9 comes up twelve times in a row. Then you can think about it for a minute. After that you have to start rolling the die again.
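The gate really does implement the intended odds; here is a quick simulation (mine, not the post's):

```python
import random

def may_think_about_it(rolls: int = 12) -> bool:
    """Permission gate: a d10 must come up 9 twelve times in a row."""
    return all(random.randint(0, 9) == 9 for _ in range(rolls))

print(0.1 ** 12)              # ~1e-12: the "trillion to one" threshold
print(may_think_about_it())   # almost surely False on any given attempt
```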

This may help you to appreciate the concept of “trillion to one” on a more visceral level.

If at any point you catch yourself thinking that quantum physics might have some kind of strange, abnormal implication for everyday life—then you should probably stop right there.

Oh, there are a few implications of many-worlds for ethics. Average utilitarianism suddenly looks a lot more attractive—you don’t need to worry about creating as many people as possible, because there are already plenty of people exploring person-space. You just want the average quality of life to be as high as possible, in the future worlds that are your responsibility.

And you should always take joy in discovery, as long as you personally don’t know a thing. It is meaningless to talk of being the “first” or the “only” person to know a thing, when everything knowable is known within worlds that are in neither your past nor your future, and are neither before nor after you.

But, by and large, it all adds up to normality. If your understanding of many-worlds is the tiniest bit shaky, and you are contemplating whether to believe some strange proposition, or feel some strange emotion, or plan some strange strategy, then I can give you very simple advice: Don’t.

The quantum universe is not a strange place into which you have been thrust. It is the way things have always been.


1. Greg Egan, Quarantine (London: Legend Press, 1992).

2. Robert S. Boynton, “The Birth of an Idea: A Profile of Frank Sulloway,” The New Yorker (October 1999).

" } }, { "_id": "gDL9NDEXPxYpDf4vz", "title": "Why Quantum?", "pageUrl": "https://www.lesswrong.com/posts/gDL9NDEXPxYpDf4vz/why-quantum", "postedAt": "2008-06-04T05:34:01.000Z", "baseScore": 36, "voteCount": 29, "commentCount": 52, "url": null, "contents": { "documentId": "gDL9NDEXPxYpDf4vz", "html": "

This post is part of the Quantum Physics Sequence.
Followup to: Quantum Explanations

\"Why are you doing these posts on quantum physics?\" the one asked me.

\n

\"Quite a number of reasons,\" I said.

\n

\"For one thing,\" I said, \"the many-worlds issue is just about the only case I know of where you can bring the principles of Science and Bayesianism into direct conflict.\"  It's important to have different mental buckets for \"science\" and \"rationality\", as they are different concepts.  Bringing the two principles into direct conflict is helpful for illustrating what science is and is not, and what rationality is and is not.  Otherwise you end up trusting in what you call \"science\", which won't be strict enough.

\n

\"For another thing,\" I continued, \"part of what goes into becoming a rationalist, is learning to live into a counterintuitive world—learning to find things underneath the surface that are unlike the world of surface forms.\"  Quantum mechanics makes a good introduction to that, when done correctly without the horrible confusion and despair.  It breaks you of your belief in an intuitive universe, counters naive realism, destroys your trust in the way that your cognitive algorithms look from inside—and then you're ready to start seeing your mind as a mind, not as a window onto reality.

\n

\n

\"But you're writing about physics, without being a physicist,\" the one said, \"isn't that... a little...\"

\n

\"Yes,\" I said, \"it is, and I felt guilty about it.  But there were physicists talking complete nonsense about Occam's Razor without knowing the probability theory of it, so my hand was forced.  Also the situation in teaching quantum mechanics is really awful—I saw the introductions to Bayesianism and they seemed unnecessarily difficult, but the situation in quantum mechanics is so much worse.\"  It really is.  I remember sitting there staring at the \"linear operators\", trying to figure out what the hell they physically did to the eigenvectors—trying to visualize the actual events that were going on in the physical evolution—before it dawned on me that it was just a math trick to extract the average of the eigenvalues. Okay, but... can't you just tell me that up front?  Write it down somewhere?  Oh, I forgot, the math doesn't mean anything, it just works.

"Furthermore," I added, "knowing about many worlds helps you visualize probabilities as frequencies, which is helpful to many points I want to make."

"And furthermore," I said, "reducing time to non-time is a powerful example of the principle, in reductionism, that you should reduce something to something other than itself."

"And even furthermore," I said, "I had to break my readers of trust in Science, even trust in physicists, because it doesn't seem possible to think and trust at the same time."

"Many-worlds is really a very clear and simple problem," I said, "by comparison with the challenges you encounter in AI, which are around a hundred times less clear-cut.  And many scientists can't even get many-worlds, in the absence of authority."  So you are left with no choice but to aspire to do better than the average scientist; a hell of a lot better, in fact.  This notion is one that you cannot just blurt out to people without showing them why it is necessary.

Another helpful advantage—I often do things with quite a few different purposes in mind, as you may have realized by this point—was that you can see various commenters who still haven't gotten it, who are still saying, "Oh, look, Eliezer is overconfident because he believes in many-worlds."

Well, if you can viscerally see the arguments I have laid forth, then you can see that I am not being careless in having an opinion about physics.  The balance of arguments is overwhelmingly tipped; and physicists who deny it, are making specific errors of probability theory (which I have specifically laid out, and shown to you) that they might not be expected to know about.  It is not just a matter of my forming strong opinions at random.

But would you believe that I had such strong support, if I had not shown it to you in full detail?  Ponder this well.  For I may have other strong opinions.  And it may seem to you that you don't see any good reason to form such strong beliefs.  Except this is not what you will see; you will see simply that there is no good reason for strong belief, that there is no strong support one way or the other.  For our first-order beliefs are how the world seems to be.  And you may think, "Oh, Eliezer is just opinionated—forming strong beliefs in the absence of lopsided support."  And I will not have the time to do another couple of months worth of blog posts.

I am very far from infallible, but I do not hold strong opinions at random.

"And yet still furthermore," I said, "transhumanist mailing lists have been arguing about issues of personal identity for years, and a tremendous amount of time has been wasted on it."  Probably most who argue, will not bother to read what I have set forth; but if it stops any intelligent folk from wasting further time, that too is a benefit.

I am sometimes accused of being overconfident and opinionated, for telling people that being composed of "the same atoms" has nothing to do with their personal continuity.  Or for saying that an uploading scan performed to the same resolution as thermal noise, actually has less effect on your identity than a sneeze (because your eyes squeeze shut when you sneeze, and that actually alters the computational state of billions of neurons in your visual cortex).  Yet if you can see your nows braided into the causality of the river that never flows; and the synaptic connections computing your internal narrative, that remain the same from one time to another, though not a drop of water is shared; then you can see that I have reasons for this strong belief as well.

Perhaps the one says to me that the exact duplicate constructed on Mars, is just a copy.  And I post a short comment saying, "Wrong.  There is no copy, there are two originals.  This is knowable and I know it."  Would you have thought that I might have very strong support, that you might not be seeing?

I won't always have the time to write a month of blog posts.  While I am enough of a Traditional Rationalist that I dislike trust, and will not lightly ask it, I may ask it of you if your life is at stake.

Another one once asked me:  "What does quantum physics have to do with overcoming bias?"

Robin Hanson chose the name "Overcoming Bias"; but names are not steel chains.  If I'd started my own personal blog for the material I'm now posting, I would have called it "Reinventing Rationality" or something along those lines—and even that wouldn't have been the real purpose, which would have been harder to explain.

What are these series of posts, really?  Raw material for a popular book on rationality—but maybe a tenth of this material, or less, will make it into the book.  One of the reasons I write long posts, is so that I can shorten them later with a good conscience, knowing that I did lay out the full argument somewhere.  But the whole quantum physics sequence is probably not going to make it into the popular book at all—and neither will many other posts.  So what's the rest of it for?

Sometimes I think wistfully of how much more I could have accomplished in my teenage years, if I had known a fraction of what I know now at age 15.  (This is the age at which I was a Traditional Rationalist, and dedicated and bright as such ones go, but knew not the Way of Bayes.)  You can think of these blog posts, perhaps, as a series of letters to my past self.  Only not exactly, because some of what I now write, I did already know then.

It seems to me, looking back, that the road which took me to this Way, had a great deal of luck in it.  I would like to eliminate that element of luck for those who come after.  So some of what I post, is more formal explanations of matters which Eliezer-15 knew in his bones.  And the rest, I only wish I had known.

Perhaps if you prime someone with enough material as a starting point, they can figure out the other 95% on their own, if they go on to study the relevant sciences at a higher technical level.  That's what I hope.

Eliezer-15 was led far astray by the seeming mysteriousness of quantum mechanics.  An antiproject in which he was aided and abetted by certain popular physicists—notably Sir Roger Penrose; but also all those physicists who told him that quantum physics was "mysterious" and that it was okay not to understand it.

This is something I wish I had known, so I explained it to me.

Why not just tell me to ignore quantum physics?  Because I am not going to "just ignore" a question that large.  It is not how we work.

If you are confronting real scientific chaos—not just some light matter of an experimental anomaly or the search for a better theory, but genuine fear and despair, as now exists in Artificial Intelligence—then it is necessary to be a polymath.  Healthy fields have healthy ways of thinking; you cannot trust the traditions of the confused field you must reform, though you must learn them.  You never know which way you'll need to draw upon, on venturing out into the unknown.  You learn new sciences for the same reasons that programmers learn new programming languages: to change the way you think.  If you want to never learn anything without knowing in advance how it will apply, you had best stay away from chaos.

If you want to tackle challenges on the order of AI, you can't just learn a bunch of AI stuff.

And finally...

\n

Finally...

\n

There finally comes a point where you get tired of trying to communicate across vast inferential distances.  There comes a point where you get tired of not being able to say things to people without a month of preliminary explanation.  There comes a point where you want to say something about branching Earths or identical particles or braids in the river that never flows, and you can't.

\n

It is such a tremendous relief, to finally be able to say all these things.  And all the other things, that I have said here; that people have asked me about for so long, and all I could do was wave my hands.  I didn't have to explain the concept of \"inferential distance\" from scratch, I could just link to it.  It is such a relief.

\n

I have written hundreds of blog posts here.  Think about what it would be like, to carry all that around inside your head.

\n

If I can do all the long sequences on Overcoming Bias, then maybe after that, it will be possible to say most things that I want to say, in just one piece.

\n

 

\n

Part of The Quantum Physics Sequence

\n

(end of sequence)

\n

Previous post: \"Class Project\"

" } }, { "_id": "924arDrTu3QRHFA5r", "title": "Timeless Identity", "pageUrl": "https://www.lesswrong.com/posts/924arDrTu3QRHFA5r/timeless-identity", "postedAt": "2008-06-03T08:16:33.000Z", "baseScore": 61, "voteCount": 46, "commentCount": 248, "url": null, "contents": { "documentId": "924arDrTu3QRHFA5r", "html": "

Followup to: No Individual Particles, Identity Isn't In Specific Atoms, Timeless Physics, Timeless Causality

\n

People have asked me, \"What practical good does it do to discuss quantum physics or consciousness or zombies or personal identity?  I mean, what's the application for me in real life?\"

\n

Before the end of today's post, we shall see a real-world application with practical consequences, for you, yes, you in today's world.  It is built upon many prerequisites and deep foundations; you will not be able to tell others what you have seen, though you may (or may not) want desperately to tell them.  (Short of having them read the last several months of OB.)

\n

In No Individual Particles we saw that the intuitive conception of reality as little billiard balls bopping around, is entirely and absolutely wrong; the basic ontological reality, to the best of anyone's present knowledge, is a joint configuration space.  These configurations have mathematical identities like \"A particle here, a particle there\", rather than \"particle 1 here, particle 2 there\" and the difference is experimentally testable.  What might appear to be a little billiard ball, like an electron caught in a trap, is actually a multiplicative factor in a wavefunction that happens to approximately factor.  The factorization of 18 includes two factors of 3, not one factor of 3, but this doesn't mean the two 3s have separate individual identities—quantum mechanics is sort of like that.  (If that didn't make any sense to you, sorry; you need to have followed the series on quantum physics.)

\n

In Identity Isn't In Specific Atoms, we took this counterintuitive truth of physical ontology, and proceeded to kick hell out of an intuitive concept of personal identity that depends on being made of the \"same atoms\"—the intuition that you are the same person, if you are made out of the same pieces.  But because the brain doesn't repeat its exact state (let alone the whole universe), the joint configuration space which underlies you, is nonoverlapping from one fraction of a second to the next.  Or even from one Planck interval to the next.  I.e., \"you\" of now and \"you\" of one second later do not have in common any ontologically basic elements with a shared persistent identity.

\n

\n

Just from standard quantum mechanics, we can see immediately that some of the standard thought-experiments used to pump intuitions in philosophical discussions of identity, are physical nonsense.  For example, there is a thought experiment that runs like this:

\n
\n

\"The Scanner here on Earth will destroy my brain and body, while recording the exact states of all my cells.  It will then transmit this information by radio.  Travelling at the speed of light, the message will take three minutes to reach the Replicator on Mars.  This will then create, out of new matter, a brain and body exactly like mine.  It will be in this body that I shall wake up.\"

\n
\n

This is Derek Parfit in the excellent Reasons and Persons, p. 199—note that Parfit is describing thought experiments, not necessarily endorsing them.

\n

There is an argument which Parfit describes (but does not himself endorse), and which I have seen many people spontaneously invent, which says (not a quote):

\n
\n

Ah, but suppose an improved Scanner were invented, which scanned you non-destructively, but still transmitted the same information to Mars.  Now, clearly, in this case, you, the original, have simply stayed on Earth, and the person on Mars is only a copy.  Therefore this teleporter is actually murder and birth, not travel at all—it destroys the original, and constructs a copy!

\n
\n

Well, but who says that if we build an exact copy of you, one version is the privileged original and the other is just a copy?  Are you under the impression that one of these bodies is constructed out of the original atoms—that it has some kind of physical continuity the other does not possess?  But there is no such thing as a particular atom, so the original-ness or new-ness  of the person can't depend on the original-ness or new-ness of the atoms.

\n

(If you are now saying, \"No, you can't distinguish two electrons yet, but that doesn't mean they're the same entity -\" then you have not been following the series on quantum mechanics, or you need to reread it.  Physics does not work the way you think it does.  There are no little billiard balls bouncing around down there.)

\n

If you further realize that, as a matter of fact, you are splitting all the time due to ordinary decoherence, then you are much more likely to look at this thought experiment and say:  \"There is no copy; there are two originals.\"

\n

Intuitively, in your imagination, it might seem that one billiard ball stays in the same place on Earth, and another billiard ball has popped into place on Mars; so one is the \"original\", and the other is the \"copy\".  But at a fundamental level, things are not made out of billiard balls.

\n

A sentient brain constructed to atomic precision, and copied with atomic precision, could undergo a quantum evolution along with its \"copy\", such that, afterward, there would exist no fact of the matter as to which of the two brains was the \"original\".  In some Feynman diagrams they would exchange places, in some Feynman diagrams not.  The two entire brains would be, in aggregate, identical particles with no individual identities.

\n

Parfit, having discussed the teleportation thought experiment, counters the intuitions of physical continuity with a different set of thought experiments:

\n
\n

\"Consider another range of possible cases: the Physical Spectrum.  These cases involve all of the different possible degrees of physical continuity...

\n

\"In a case close to the near end, scientists would replace 1% of the cells in my brain and body with exact duplicates.  In the case in the middle of the spectrum, they would replace 50%.  In a case near the far end, they would replace 99%, leaving only 1% of my original brain and body.  At the far end, the 'replacement' would involve the complete destruction of my brain and body, and the creation out of new organic matter of a Replica of me.\"

\n

(Reasons and Persons, p. 234.)

\n
\n

Parfit uses this to argue against the intuition of physical continuity pumped by the first experiment: if your identity depends on physical continuity, where is the exact threshold at which you cease to be \"you\"?

\n

By the way, although I'm criticizing Parfit's reasoning here, I really liked Parfit's discussion of personal identity.  It really surprised me.  I was expecting a rehash of the same arguments I've seen on transhumanist mailing lists over the last decade or more.  Parfit gets much further than I've seen the mailing lists get.  This is a sad verdict for the mailing lists.  And as for Reasons and Persons, it well deserves its fame.

\n

But although Parfit executed his arguments competently and with great philosophical skill, those two particular arguments (Parfit has lots more!) are doomed by physics.

\n

There just is no such thing as \"new organic matter\" that has a persistent identity apart from \"old organic matter\".  No fact of the matter exists, as to which electron is which, in your body on Earth or your body on Mars.  No fact of the matter exists, as to how many electrons in your body have been \"replaced\" or \"left in the same place\".  So both thought experiments are physical nonsense.

Parfit seems to be enunciating his own opinion here (not Devil's advocating) when he says:

\n
\n

\"There are two kinds of sameness, or identity.  I and my Replica are qualitatively identical, or exactly alike.  But we may not be numerically identical, one and the same person.  Similarly, two white billiard balls are not numerically but may be qualitatively identical.  If I paint one of these balls red, it will cease to be qualitatively identical with itself as it was.  But the red ball that I later see and the white ball that I painted red are numerically identical.  They are one and the same ball.\" (p. 201.)

\n
\n

In the human imagination, the way we have evolved to imagine things, we can imagine two qualitatively identical billiard balls that have a further fact about them—their persistent identity—that makes them distinct.

\n

But it seems to be a basic lesson of physics that \"numerical identity\" just does not exist.  Where \"qualitative identity\" exists, you can set up quantum evolutions that refute the illusion of individuality—Feynman diagrams that sum over different permutations of the identicals.

\n

We should always have been suspicious of \"numerical identity\", since it was not experimentally detectable; but physics swoops in and drop-kicks the whole argument out the window.

\n

Parfit p. 241:

\n
\n

\"Reductionists admit that there is a difference between numerical identity and exact similarity.  In some cases, there would be a real difference between some person's being me, and his being someone else who is merely exactly like me.\"

\n
\n

This reductionist admits no such thing.

\n

Parfit even describes a wise-seeming reductionist refusal to answer questions as to when one person becomes another, when you are \"replacing\" the atoms inside them.  P. 235:

\n
\n

(The reductionist says:)  \"The resulting person will be psychologically continuous with me as I am now.  This is all there is to know.  I do not know whether the resulting person will be me, or will be someone else who is merely exactly like me.  But this is not, here, a real question, which must have an answer.  It does not describe two different possibilities, one of which must be true.  It is here an empty question.  There is not a real difference here between the resulting person's being me, and his being someone else.  This is why, even though I do not know whether I am about to die, I know everything.\"

\n
\n

Almost but not quite reductionist enough!  When you master quantum mechanics, you see that, in the thought experiment where your atoms are being \"replaced\" in various quantities by \"different\" atoms, nothing whatsoever is actually happening—the thought experiment itself is physically empty.

\n

So this reductionist, at least, triumphantly says—not, \"It is an empty question; I know everything that there is to know, even though I don't know if I will live or die\"—but simply, \"I will live; nothing happened.\"

\n

This whole episode is one of the main reasons why I hope that when I really understand matters such as these, and they have ceased to be mysteries unto me, I will be able to give definite answers to questions that seem like they ought to have definite answers.

\n

And it is a reason why I am suspicious, of philosophies that too early—before the dispelling of mystery—say, \"There is no answer to the question.\"  Sometimes there is no answer, but then the absence of the answer comes with a shock of understanding, a click like thunder, that makes the question vanish in a puff of smoke.  As opposed to a dull empty sort of feeling, as of being told to shut up and stop asking questions.

\n

And another lesson:  Though the thought experiment of having atoms \"replaced\" seems easy to imagine in the abstract, anyone knowing a fully detailed physical visualization would have immediately seen that the thought experiment was physical nonsense.  Let zombie theorists take note!

\n

Additional physics can shift our view of identity even further:

\n

In Timeless Physics, we looked at a speculative, but even more beautiful view of quantum mechanics:  We don't need to suppose the amplitude distribution over the configuration space is changing, since the universe never repeats itself.  We never see any particular joint configuration (of the whole universe) change amplitude from one time to another; from one time to another, the universe will have expanded.  There is just a timeless amplitude distribution (aka wavefunction) over a configuration space that includes compressed configurations of the universe (early times) and expanded configurations of the universe (later times).

\n

Then we will need to discover people and their identities embodied within a timeless set of relations between configurations that never repeat themselves, and never change from one time to another.

\n

As we saw in Timeless Beauty, timeless physics is beautiful because it would make everything that exists either perfectly global—like the uniform, exceptionless laws of physics that apply everywhere and everywhen—or perfectly local—like points in the configuration space that only affect or are affected by their immediate local neighborhood.  Everything that exists fundamentally, would be qualitatively unique: there would never be two fundamental entities that have the same properties but are not the same entity.

\n

(Note:  The you on Earth, and the you on Mars, are not ontologically basic.  You are factors of a joint amplitude distribution that is ontologically basic.  Suppose the integer 18 exists: the factorization of 18 will include two factors of 3, not one factor of 3.  This does not mean that inside the Platonic integer 18 there are two little 3s hanging around with persistent identities, living in different houses.)

\n

We also saw in Timeless Causality that the end of time is not necessarily the end of cause and effect; causality can be defined (and detected statistically!) without mentioning \"time\".  This is important because it preserves arguments about personal identity that rely on causal continuity rather than \"physical continuity\".

\n

Previously I drew this diagram of you in a timeless, branching universe:

\n

\"Manybranches4\"

\n

To understand many-worlds:  The gold head only remembers the green heads, creating the illusion of a unique line through time, and the intuitive question, \"Where does the line go next?\"  But it goes to both possible futures, and both possible futures will look back and see a single line through time.  In many-worlds, there is no fact of the matter as to which future you personally will end up in.  There is no copy; there are two originals.

\n

To understand timeless physics:  The heads are not popping in and out of existence as some Global Now sweeps forward.  They are all just there, each thinking that now is a different time.

\n

In Timeless Causality I drew this diagram:

\n

\"Causeright\"

\n

This was part of an illustration of how we could statistically distinguish left-flowing causality from right-flowing causality—an argument that cause and effect could be defined relationally, even in the absence of a changing global time.  And I said that, because we could keep cause and effect as the glue that binds configurations together, we could go on trying to identify experiences with computations embodied in flows of amplitude, rather than having to identify experiences with individual configurations.

\n

But both diagrams have a common flaw: they show discrete nodes, connected by discrete arrows.  In reality, physics is continuous.

\n

So if you want to know \"Where is the computation?  Where is the experience?\" my best guess would be to point to something like a directional braid:

\n

\"Braid_2\"

\n

This is not a braid of moving particles.  This is a braid of interactions within close neighborhoods of timeless configuration space.

\n

\"Braidslice\"

\n

Every point intersected by the red line is unique as a mathematical entity; the points are not moving from one time to another.  However, the amplitude at different points is related by physical laws; and there is a direction of causality to the relations.

\n

You could say that the amplitude is flowing, in a river that never changes, but has a direction.

\n

Embodied in this timeless flow are computations; within the computations, experiences.  The experiences' computations' configurations might even overlap each other:

\n

\n

\"Braidtime_2\"

\n

In the causal relations covered by the rectangle 1, there would be one moment of Now; in the causal relations covered by the rectangle 2, another moment of Now.  There is a causal direction between them: 1 is the cause of 2, not the other way around.  The rectangles overlap—though I really am not sure if I should be drawing them with overlap or not—because the computations are embodied in some of the same configurations.  Or if not, there is still causal continuity because the end state of one computation is the start state of another.

\n

But on an ontologically fundamental level, nothing with a persistent identity moves through time.

\n

Even the braid itself is not ontologically fundamental; a human brain is a factor of a larger wavefunction that happens to factorize.

\n

Then what is preserved from one time to another?  On an ontologically basic level, absolutely nothing.

\n

But you will recall that I earlier talked about any perturbation which does not disturb your internal narrative, almost certainly not being able to disturb whatever is the true cause of your saying \"I think therefore I am\"—this is why you can't leave a person physically unaltered, and subtract their consciousness.  When you look at a person on the level of organization of neurons firing, anything which does not disturb, or only infinitesimally disturbs, the pattern of neurons firing—such as flipping a switch from across the room—ought not to disturb your consciousness, or your personal identity.

\n

If you were to describe the brain on the level of neurons and synapses, then this description of the factor of the wavefunction that is your brain, would have a very great deal in common, across different cross-sections of the braid.  The pattern of synapses would be \"almost the same\"—that is, the description would come out almost the same—even though, on an ontologically basic level, nothing that exists fundamentally is held in common between them.  The internal narrative goes on, and you can see it within the vastly higher-level view of the firing patterns in the connection of synapses.  The computational pattern computes, \"I think therefore I am\".  The narrative says, today and tomorrow, \"I am Eliezer Yudkowsky, I am a rationalist, and I have something to protect.\"  Even though, in the river that never flows, not a single drop of water is shared between one time and another.

\n

If there's any basis whatsoever to this notion of \"continuity of consciousness\"—I haven't quite given up on it yet, because I don't have anything better to cling to—then I would guess that this is how it works.

\n

Oh... and I promised you a real-world application, didn't I?

\n

Well, here it is:

\n

Many throughout time, tempted by the promise of immortality, have consumed strange and often fatal elixirs; they have tried to bargain with devils that failed to appear; and done many other silly things.

\n

But like all superpowers, long-range life extension can only be acquired by seeing, with a shock, that some way of getting it is perfectly normal.

\n

If you can see the moments of now braided into time, the causal dependencies of future states on past states, the high-level pattern of synapses and the internal narrative as a computation within it—if you can viscerally dispel the classical hallucination of a little billiard ball that is you, and see your nows strung out in the river that never flows—then you can see that signing up for cryonics, being vitrified in liquid nitrogen when you die, and having your brain nanotechnologically reconstructed fifty years later, is actually less of a change than going to sleep, dreaming, and forgetting your dreams when you wake up.

\n

You should be able to see that, now, if you've followed through this whole series.  You should be able to get it on a gut level—that being vitrified in liquid nitrogen for fifty years (around 3e52 Planck intervals) is not very different from waiting an average of 2e26 Planck intervals between neurons firing, on the generous assumption that there are a hundred trillion synapses firing a thousand times per second.  You should be able to see that there is nothing preserved from one night's sleep to the morning's waking, which cryonic suspension does not preserve also.  Assuming the vitrification technology is good enough for a sufficiently powerful Bayesian superintelligence to look at your frozen brain, and figure out \"who you were\" to the same resolution that your morning's waking self resembles the person who went to sleep that night.
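If you want to check those two figures, the arithmetic is short.  A minimal sketch, assuming the standard round value of 5.39e-44 seconds for the Planck time and the generous synapse count and firing rate stated above:

```python
# Rough check of the two Planck-interval figures quoted above.
PLANCK_TIME = 5.39e-44                     # seconds per Planck interval (round figure)

# Fifty years of cryonic suspension, expressed in Planck intervals:
fifty_years_s = 50 * 365.25 * 24 * 3600    # ~1.58e9 seconds
print(fifty_years_s / PLANCK_TIME)         # ~2.9e52 -- "around 3e52"

# Mean gap between successive firings anywhere in the brain, assuming
# 1e14 synapses each firing 1e3 times per second:
aggregate_rate = 1e14 * 1e3                # 1e17 firings per second in aggregate
print((1 / aggregate_rate) / PLANCK_TIME)  # ~1.9e26 -- "around 2e26"
```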

\n

Do you know what it takes to securely erase a computer's hard drive?  Writing it over with all zeroes isn't enough.  Writing it over with all zeroes, then all ones, then a random pattern, isn't enough.  Someone with the right tools can still examine the final state of a section of magnetic memory, and distinguish the state, \"This was a 1 written over by a 1, then a 0, then a 1\" from \"This was a 0 written over by a 1, then a 0, then a 1\".  The best way to securely erase a computer's hard drive is to destroy it with thermite.

\n

I really don't think that carefully vitrifying a brain to prevent ice crystal formation and then freezing it in liquid nitrogen is going to be a secure erase procedure, if you can examine atomic-level differences in the synapses.

\n

Someone hears about cryonics and thinks for 10 seconds and says, \"But if you're frozen and then revived, are you really the same person?\"

\n

And if they happened to know all about quantum physics and could apply the abstract knowledge to real life, and they had followed the whole debate about zombies and resolved it against epiphenomenalism in general, then they would be able to visualize the braids in the river that never flows, and say, \"Yes.\"

\n

But this knowledge is not common.

\n

So they die.

\n

There are numerous other reasons that people seize on, when they search for a rationalization for a negative initial flinch against cryonics.  And numerous other knowledges that would be required to answer those objections.  \"But wouldn't it be boring to live such a long time?\"  (Can be answered if you know hedonic psychology, and have developed a theory of fun, and can visualize accessible fun spaces that increase in volume with increasing intelligence.)  \"Why would future civilizations bother to revive me?\"  (Requires understanding either economic growth diminishing the cost, or knowledge of history and how societies have become kinder over time, or knowing about Friendly AI.)  \"Isn't it wrong to live so long?\"  (Requires knowing about the \"sour grapes\" bias.  See also transhumanism as simplified humanism and the meaning that immortality gives to life.)  Then there's the meta-knowledge of how to question all these deeply wise cached thoughts that pop into your head about the futility of life; and the ability to do things that might make people look at you weird, and so on...

\n

Some of these are series of posts I haven't done yet.  But if you anticipate updating your probabilities when you read those future posts, then you should update them now.  Or, if you prefer, trust me:

\n

If you would rather live happily ever after, than die, and you are willing to spend between $300 and $2000 per year(*) to express this preference, then sign up for cryonics.

\n

If you've been cryocrastinating, putting off signing up for cryonics \"until later\", don't think that you've \"gotten away with it so far\".  Many worlds, remember?  There are branched versions of you that are dying of cancer, and not signed up for cryonics, and it's too late for them to get life insurance.

\n

See, knowing about many worlds can help you visualize probabilities as frequencies, because they usually are.

\n

It might encourage you to get around to getting health insurance, too, or wearing a helmet on your motorcycle, or whatever: don't think you've gotten away with it so far.

\n

And if you're planning to play the lottery, don't think you might win this time.  A vanishingly small fraction of you wins, every time.  So either learn to discount small fractions of the future by shutting up and multiplying, or spend all your money on lottery tickets—your call.
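To make the multiplication concrete, here is a toy calculation; the ticket price, jackpot, and odds below are assumed numbers for illustration, not any particular lottery:

```python
# Expected value of a hypothetical lottery ticket.
p_win   = 1 / 300_000_000   # assumed odds of hitting the jackpot
jackpot = 100_000_000       # assumed jackpot, in dollars
ticket  = 2                 # assumed ticket price, in dollars

# The fraction of your futures that wins, and the multiplied-out result:
print(p_win)                      # ~3.3e-9 of you
print(p_win * jackpot - ticket)   # ~ -$1.67 per ticket, on average
```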

\n

It is a very important lesson in rationality, that at any time, the Environment may suddenly ask you almost any question, which requires you to draw on 7 different fields of knowledge.  If you missed studying a single one of them, you may suffer arbitrarily large penalties up to and including capital punishment.  You can die for an answer you gave in 10 seconds, without realizing that a field of knowledge existed of which you were ignorant.

\n

This is why there is a virtue of scholarship.

\n

150,000 people die every day.  Some of those deaths are truly unavoidable, but most are the result of inadequate knowledge of cognitive biases, advanced futurism, and quantum mechanics.(**)

\n

If you disagree with my premises or my conclusion, take a moment to consider nonetheless, that the very existence of an argument about life-or-death stakes, whatever position you take in that argument, constitutes a sufficient lesson on the sudden relevance of scholarship.

\n
\n

(*)  The way cryonics works is that you get a life insurance policy, and the policy pays for your cryonic suspension.  The Cryonics Institute is the cheapest provider, Alcor is the high-class one.  Rudi Hoffman set up my own insurance policy, with CI.  I have no affiliate agreements with any of these entities, nor, to my knowledge, do they have affiliate agreements with anyone.  They're trying to look respectable, and so they rely on altruism and word-of-mouth to grow, instead of paid salespeople.  So there's a vastly smaller worldwide market for immortality than lung-cancer-in-a-stick.  Welcome to your Earth; it's going to stay this way until you fix it.

\n

(**)  Most deaths?  Yes:  If cryonics were widely seen in the same terms as any other medical procedure, economies of scale would considerably diminish the cost; it would be applied routinely in hospitals; and foreign aid would enable it to be applied even in poor countries.  So children in Africa are dying because citizens and politicians and philanthropists in the First World don't have a gut-level understanding of quantum mechanics.

\n

Added:  For some of the questions that are being asked, see Alcor's FAQ for scientists and Ben Best's Cryonics FAQ (archived snapshot).

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"Thou Art Physics\"

\n

Previous post: \"Timeless Causality\"

" } }, { "_id": "gTTWRkSz474o7s4Dg", "title": "Principles of Disagreement", "pageUrl": "https://www.lesswrong.com/posts/gTTWRkSz474o7s4Dg/principles-of-disagreement", "postedAt": "2008-06-02T07:04:12.000Z", "baseScore": 23, "voteCount": 18, "commentCount": 26, "url": null, "contents": { "documentId": "gTTWRkSz474o7s4Dg", "html": "

Followup to: The Rhythm of Disagreement

\n\n

At the age of 15, a year before I knew what a &quot;Singularity&quot; was, I had learned about evolutionary psychology.  Even from that beginning, it was apparent to me that people talked about &quot;disagreement&quot; as a matter of tribal status, processing it with the part of their brain that assessed people's standing in the tribe.  The peculiar indignation of &quot;How dare you disagree with Einstein?&quot; has its origins here:  Even if the disagreer is wrong, we wouldn't apply the same emotions to an ordinary math error like &quot;How dare you write a formula that makes e equal to 1.718?&quot;

\n\n

At the age of 15, being a Traditional Rationalist, and never having heard of Aumann or Bayes, I thought the obvious answer was, &quot;Entirely disregard people's authority and pay attention to the arguments.  Only arguments count.&quot;

\n\n

Ha ha!  How naive.

I can't say that this principle never served my younger self wrong.

\n\n

I can't even say that the principle gets you as close as possible to the truth.

\n\n

I doubt I ever really clung to that principle in practice.  In real life, I judged my authorities with care then, just as I do now...

\n\n

But my efforts to follow that principle, made me stronger.  They focused my attention upon arguments; believing in authority does not make you stronger.  The principle gave me freedom to find a better way, which I eventually did, though I wandered at first.

\n\n

Yet both of these benefits were pragmatic and long-term, not immediate and epistemic.  And you cannot say, &quot;I will disagree today, even though I'm probably wrong, because it will help me find the truth later.&quot;  Then you are trying to doublethink.  If you know today that you are probably wrong, you must abandon the belief today.  Period.  No cleverness.  Always use your truth-finding skills at their full immediate strength, or you have abandoned something more important than any other benefit you will be offered; you have abandoned the truth.

\n\n

So today, I sometimes accept things on authority, because my best guess is that they are really truly true in real life, and no other criterion gets a vote.

\n\n

But always in the back of my mind is that childhood principle, directing my attention to the arguments as well, reminding me that you gain no strength from authority; that you may not even know anything, just be repeating it back.

\n\n

Earlier I described how I disagreed with a math book and looked for proof, disagreed humbly with Judea Pearl and was proven (half) right, disagreed immodestly with Sebastian Thrun and was proven wrong, had a couple of quick exchanges with Steve Omohundro in which modesty-reasoning would just have slowed us down, respectfully disagreed with Daniel Dennett and disrespectfully disagreed with Steven Pinker, disagreed with Robert Aumann without a second thought, disagreed with Nick Bostrom with second thoughts...

\n\n

What kind of rule am I using, that covers all these cases?

\n\n

Er... &quot;try to get the actual issue really right&quot;?  I mean, there are other rules but that's the important one.  It's why I disagree with Aumann about Orthodox Judaism, and blindly accept Judea Pearl's word about the revised version of his analysis.  Any argument that says I should take Aumann seriously is wasting my time; any argument that says I should disagree with Pearl is wasting my truth.

\n\n

There are all sorts of general reasons not to argue with physicists about physics, but the rules are all there to help you get the issue right, so in the case of Many-Worlds you have to ignore them.

\n\n\n\n

Yes, I know that's not helpful as a general principle.  But dammit, wavefunctions don't collapse!  It's a massively stupid idea that sticks around due to sheer historical contingency!  I'm more confident of that than any principle I would dare to generalize about disagreement.

\n\n

Notions of &quot;disagreement&quot; are psychology-dependent pragmatic philosophy.  Physics and Occam's razor are much simpler.  Object-level stuff is often much clearer than meta-level stuff, even though this itself is a meta-level principle.

\n\n

In theory, you have to make a prior decision whether to trust your own assessment of how obvious it is that wavefunctions don't collapse, before you can assess whether wavefunctions don't collapse.  In practice, it's much more obvious that wavefunctions don't collapse, than that I should trust my disagreement.  Much more obvious.  So I just go with that.

\n\n

I trust any given level of meta as far as I can throw it, but no further.

\n\n

There's a rhythm to disagreement.  And oversimplified rules about when to disagree, can distract from that rhythm.  Even &quot;Follow arguments, not people&quot; can distract from the rhythm, because no one, including my past self, really uses that rule in practice.

\n\n

The way it works in real life is that I just do the standard first-order disagreement analysis:  Okay, in real life, how likely is it that this person knows stuff that I don't?

\n\n

Not, Okay, how much of the stuff that I know that they don't, have they already taken into account in a revised estimate, given that they know I disagree with them, and have formed guesses about what I might know that they don't, based on their assessment of my and their relative rationality...

\n\n

Why don't I try the higher-order analyses?  Because I've never seen a case where, even in retrospect, it seems like I could have gotten real-life mileage out of it.  Too complicated, too much of a tendency to collapse to tribal status, too distracting from the object-level arguments.

\n\n

I have previously observed that those who genuinely reach upward as rationalists, have usually been broken of their core trust in the sanity of the people around them.  In this world, we have to figure out who to trust, and who we have reasons to trust, and who might be right even when we believe they're wrong.  But I'm kinda skeptical that we can - in this world of mostly crazy people and a few slightly-more-sane people who've spent their whole lives surrounded by crazy people who claim they're saner than average - get real-world mileage out of complicated reasoning that involves sane people assessing each other's meta-sanity.  We've been broken of that trust, you see.

\n\n

Does Robin Hanson really trust, deep down, that I trust him enough, that I would not dare to disagree with him, unless he were really wrong?  I can't trust that he does... so I don't trust him so much... so he shouldn't trust that I wouldn't dare disagree...

\n\n

It would be an interesting experiment: but I cannot literally commit to walking into a room with Robin Hanson and not walking out until we have the same opinion about the Singularity.  So that if I give him all my reasons and hear all his reasons, and Hanson tells me, &quot;I still think you're wrong,&quot; I must then agree (or disagree in a net direction Robin can't predict).  I trust Robin but I don't trust him THAT MUCH.  Even if I tried to promise, I couldn't make myself believe it was really true - and that tells me I can't make the promise.

\n\n

When I think about who I would be willing to try this with, the name that comes to mind is Michael Vassar - which surprised me, and I asked my mind why.  The answer that came back was, "Because Michael Vassar knows viscerally what's at stake if he makes you update the wrong way; he wouldn't use the power lightly."  I'm not going anywhere in particular with this; but it points in an interesting direction - that a primary reason I don't always update when people disagree with me, is that I don't think they're taking that disagreement with the extraordinary gravity that would be required, on both sides, for two people to trust each other in an Aumann cage match. 

\n\n

Yesterday, Robin asked me why I disagree with Roger Schank about whether AI will be general in the foreseeable future.

\n\n

Well, first, be it said that I am no hypocrite; I have been explicitly defending immodesty against modesty since long before this blog began.

\n

Roger Schank is a famous old AI researcher who I learned about as the pioneer of yet another false idol, "scripts".  He used suggestively named LISP tokens, and I'd never heard it said of him that he had seen the light of Bayes.

\n\n

So I noted that the warriors of old are often more formidable intellectually than those who venture into the Dungeon of General AI today, but their arms and armor are obsolete.  And I pointed out that Schank's prediction with its stated reasons seemed more like an emotional reaction to discouragement, than a painstakingly crafted general model of the future of AI research that had happened to yield a firm prediction in this case.

\n\n

Ah, said Robin, so it is good for the young to disagree with the old.

\n\n

No, but if the old guy is Roger Schank, and the young guy is me, and we are disagreeing about Artificial General Intelligence, then sure.

\n\n

If the old guy is, I don't know, Murray Gell-Mann, and we're disagreeing about, like, particle masses or something, I'd have to ask what I was even doing in that conversation.

\n\n

If the old fogey is Murray Gell-Mann and the young upstart is Scott Aaronson, I'd probably stare at them helplessly like a deer caught in the headlights.  I've listed out the pros and cons here, and they balance as far as I can tell:

\n\n\n\n

It is traditional - not Bayesian, not even remotely realistic, but traditional - that when some uppity young scientist is pushing their chosen field as far as they possibly can, going past the frontier, they have a right to eat any old scientists they come across, for nutrition.

\n\n

I think there's more than a grain of truth in that ideal.  It's not completely true.  It's certainly not upheld in practice.  But it's not wrong, either.

\n\n

It's not that the young have a generic right to disagree with the old, but yes, when the young are pushing the frontiers they often end up leaving the old behind.  Everyone knows that and what's more, I think it's true.

\n\n

If someday I get eaten, great.

\n\n

I still agree with my fifteen-year-old self about some things:  The tribal-status part of our minds, that asks, &quot;How dare you disagree?&quot;, is just a hindrance.  The real issues of rational disagreement have nothing to do with that part of us; it exists for other reasons and works by other rhythms.  &quot;How dare you disagree with Roger Schank?&quot; ends up as a no-win question if you try to approach it on the meta-level and think in terms of generic trustworthiness: it forces you to argue that you yourself are generically above Schank and of higher tribal status; or alternatively, accept conclusions that do not seem, er, carefully reasoned.  In such a case there is a great deal to be said for simply focusing on the object-level arguments.

\n\n

But if there are no simple rules that forbid disagreement, can't people always make up whatever excuse for disagreement they like, so they can cling to precious beliefs?

\n\n

Look... it's never hard to shoot off your own foot, in this art of rationality.  And the more art you learn of rationality, the more potential excuses you have.  If you insist on disagreeing with Gell-Mann about physics, BLAM it goes.  There is no set of rules you can follow to be safe.  You will always have the opportunity to shoot your own foot off.

\n\n

I want to push my era further than the previous ones: create an advanced art of rationality, to advise people who are trying to reach as high as they can in real life.  They will sometimes have to disagree with others.  If they are pushing the frontiers of their science they may have to disagree with their elders.  They will have to develop the skill - learning from practice - of when to disagree and when not to.  &quot;Don't&quot; is the wrong answer.

\n\n

If others take that as a welcome excuse to shoot their own feet off, that doesn't change what's really the truly true truth.

\n\n\n\n

I once gave a talk on rationality at Peter Thiel's Clarium Capital.  I did not want anything bad to happen to Clarium Capital.  So I ended my talk by saying, &quot;And above all, if any of these reasonable-sounding principles turn out not to work, don't use them.&quot;

\n\n

In retrospect, thinking back, I could have given the different caution:  &quot;And be careful to follow these principles consistently, instead of making special exceptions when it seems tempting.&quot;  But it would not be a good thing for the Singularity Institute, if anything bad happened to Clarium Capital.

\n\n

That's as close as I've ever come to betting on my high-minded advice about rationality in a prediction market - putting my skin in a game with near-term financial consequences.  I considered just staying home - Clarium was trading successfully; did I want to disturb their rhythm with Centipede's Dilemmas?  But because past success is no guarantee of future success in finance, I went, and offered what help I could give, emphasizing above all the problem of motivated skepticism - when I had skin in the game.  Yet at the end I said:  &quot;Don't trust principles until you see them working,&quot; not &quot;Be wary of the temptation to make exceptions.&quot;

\n\n

I conclude with one last tale of disagreement:

\n\n

Nick Bostrom and I once took a taxi and split the fare.  When we counted the money we'd assembled to pay the driver, we found an extra twenty there.

\n\n

"I'm pretty sure this twenty isn't mine," said Nick.

\n\n

"I'd have been sure that it wasn't mine either," I said.

\n\n

"You just take it," said Nick.

\n\n

"No, you just take it," I said.

\n\n

We looked at each other, and we knew what we had to do.

\n\n

"To the best of your ability to say at this point, what would have been your initial probability that the bill was yours?" I said.

\n\n

"Fifteen percent," said Nick.

\n\n

"I would have said twenty percent," I said.

\n\n

So we split it $8.57 / $11.43, and went happily on our way, guilt-free.
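(A quick check of the arithmetic: the split is just the two stated probabilities renormalized against each other.)

```python
# Split the extra $20 in proportion to the two probability estimates.
nick, eliezer = 0.15, 0.20
print(20 * nick / (nick + eliezer))      # 8.57...  -> Nick's share
print(20 * eliezer / (nick + eliezer))   # 11.42... -> Eliezer's share
```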

\n\n

I think that's the only time I've ever seen an Aumann-inspired algorithm used in real-world practice.

" } }, { "_id": "tKa9Lebyebf6a7P2o", "title": "The Rhythm of Disagreement", "pageUrl": "https://www.lesswrong.com/posts/tKa9Lebyebf6a7P2o/the-rhythm-of-disagreement", "postedAt": "2008-06-01T20:18:25.000Z", "baseScore": 27, "voteCount": 21, "commentCount": 65, "url": null, "contents": { "documentId": "tKa9Lebyebf6a7P2o", "html": "

Followup to: A Premature Word on AI, The Modesty Argument

\n\n

Once, during the year I was working with Marcello, I passed by a math book he was reading, left open on the table.  One formula caught my eye (why?); and I thought for a moment and said, "This... doesn't look like it can be right..."

\n\n

Then we had to prove it couldn't be right.

\n\n

Why prove it?  It looked wrong; why take the time for proof?

\n\n

Because it was in a math book.  By presumption, when someone publishes a book, they run it past some editors and double-check their own work; then all the readers get a chance to check it, too.  There might have been something we missed.

\n\n

But in this case, there wasn't.  It was a misprinted standard formula, off by one.

\n\n

I once found an error in Judea Pearl's Causality - not just a misprint, but an actual error invalidating a conclusion in the text.  I double and triple-checked, the best I was able, and then sent an email to Pearl describing what I thought the error was, and what I thought was the correct answer.  Pearl confirmed the error, but he said my answer wasn't right either, for reasons I didn't understand and that I'd have to have gone back and done some rereading and analysis to follow.  I had other stuff to do at the time, unfortunately, and couldn't expend the energy.  And by the time Pearl posted an expanded explanation to the website, I'd forgotten the original details of the problem...  Okay, so my improved answer was wrong.

\n\n

Why take Pearl's word for it?  He'd gotten the original problem wrong, and I'd caught him on it - why trust his second thought over mine?

Because he was frikkin' Judea Pearl.  I mean, come on!  I might dare to write Pearl with an error, when I could understand the error well enough that it would have seemed certain, if not for the disagreement.  But it didn't seem likely that Pearl would concentrate his alerted awareness on the problem, warned of the mistake, and get it wrong twice.  If I didn't understand Pearl's answer, that was my problem, not his.  Unless I chose to expend however much work was required to understand it, I had to assume he was right this time.  Not just as a matter of fairness, but of probability - that, in the real world, Pearl's answer really was right.

\n\n

In IEEE Spectrum's sad little attempt at Singularity coverage, one bright spot is Paul Wallich's "Who's Who In The Singularity", which (a) actually mentions some of the real analysts like Nick Bostrom and myself and (b) correctly identifies me as an advocate of the "intelligence explosion", whereas e.g. Ray Kurzweil is designated as "technotopia - accelerating change".  I.e., Paul Wallich actually did his homework instead of making everything up as he went along.  Sad that it's just a little PDF chart.

\n\n

Wallich's chart lists Daniel Dennett's position on the Singularity as:

Human-level AI may be inevitable, but don’t expect it anytime soon. "I don’t deny the possibility a priori; I just think it is vanishingly unlikely in the foreseeable future."

That surprised me.  "Vanishingly unlikely"?  Why would Dennett think that?  He has no obvious reason to share any of the standard prejudices.  I would be interested in knowing Dennett's reason for this opinion, and mildly disappointed if it turns out to be the usual, "We haven't succeeded in the last fifty years, therefore we definitely won't succeed in the next hundred years."

\n\n

Also in IEEE Spectrum, Steven Pinker, author of The Blank Slate - a popular introduction to evolutionary psychology that includes topics like heuristics and biases - is quoted:

When machine consciousness will occur:  "In one sense—information routing—they already have. In the other sense—first-person experience—we'll never know."

Whoa, said I to myself, Steven Pinker is a mysterian?  "We'll never know"?  How bizarre - I just lost some of the respect I had for him.

\n\n

I disagree with Dennett about Singularity time horizons, and with Pinker about machine consciousness.  Both of these are prestigious researchers whom I started out respecting about equally.  So why am I curious to hear Dennett's reasons; but outright dismissive of Pinker?

\n\n

I would probably say something like, "There are many potential reasons to disagree about AI time horizons, and no respectable authority to correct you if you mess up.  But if you think consciousness is everlastingly mysterious, you have completely missed the lesson of history; and respectable minds will give you many good reasons to believe so.  Non-reductionism says something much deeper about your outlook on reality than AI timeframe skepticism; someone like Pinker really ought to have known better."

\n\n

(But all this presumes that Pinker is the one who is wrong, and not me...)

\n\n

Robert Aumann, Nobel laureate and original inventor of the no-disagreement-among-Bayesians theorem, is a believing Orthodox Jew.  (I know I keep saying this, but it deserves repeating, for the warning it carries.)  By the time I discovered this strange proclivity of Aumann's, I had long ago analyzed the issues.  Discovering that Aumann was Jewish, did not cause me to revisit the issues even momentarily.  I did not consider for even a fraction of a second that this Nobel laureate and Bayesian might be right, and myself wrong.  I did draw the lesson, "You can teach people Bayesian math, but even if they're genuinely very good with the math, applying it to real life and real beliefs is a whole different story."

\n\n

Scott Aaronson calls me a bullet-swallower; I disagree.  I am very choosy about which bullets I dodge, and which bullets I swallow.  Any view of disagreement that implies I should not disagree with Robert Aumann must be wrong.

\n\n

Then there's the whole recent analysis of Many-Worlds.  I felt very guilty, writing about physics when I am not a physicist; but dammit, there are physicists out there talking complete nonsense about Occam's Razor, and they don't seem to feel guilty for using words like "falsifiable" without being able to do the math.

\n\n

On the other hand, if, hypothetically, Scott Aaronson should say, "Eliezer, your question about why 'energy' in the Hamiltonian and 'energy' in General Relativity are the same quantity, is complete nonsense, it doesn't even have an answer, I can't explain why because you know too little," I would be like "Okay."

\n\n

Nearly everyone I meet knows how to solve the problem of Friendly AI.  I don't hesitate to dismiss nearly all of these solutions out of hand; standard wrong patterns I dissected long since.

\n\n

Nick Bostrom, however, once asked whether it would make sense to build an Oracle AI, one that only answered questions, and ask it our questions about Friendly AI.  I explained some of the theoretical reasons why this would be just as difficult as building a Friendly AI:  The Oracle AI still needs an internal goal system to allocate computing resources efficiently, and it has to have a goal of answering questions and updating your mind, so it's not harmless unless it knows what side effects shouldn't happen.  It also needs to implement or interpret a full meta-ethics before it can answer our questions about Friendly AI.  So the Oracle AI is not necessarily any simpler, theoretically, than a Friendly AI.

\n\n

Nick didn't seem fully convinced of this.  I knew that Nick knew that I'd been thinking about the problem for years, so I knew he wasn't just disregarding me; his continued disagreement meant something.  And I also remembered that Nick had spotted the problem of Friendly AI itself, at least two years before I had (though I did not realize this until later, when I was going back and reading some of Nick's older work).  So I pondered Nick's idea further.  Maybe, whatever the theoretical arguments, an AI that was supposed to only answer questions, and designed to the full standards of Friendly AI without skipping any of the work, could end up a pragmatically safer starting point.  Every now and then I prod Nick's Oracle AI in my mind, to check the current status of the idea relative to any changes in my knowledge.  I remember Nick has been right on previous occasions where I doubted his rightness; and if I am an expert, so is he.

\n\n

I was present at a gathering with Sebastian Thrun (leader of the team that won the DARPA Grand Challenge '05 for motorized vehicles).  Thrun introduced the two-envelopes problem and then asked:  &quot;Can you find an algorithm that, regardless of how the envelope amounts are distributed, always has a higher probability of picking the envelope with more money?&quot;

\n\n

I thought and said, "No."

\n\n

"No deterministic algorithm can do it," said Thrun, "but if you use a randomized algorithm, it is possible."

\n\n

Now I was really skeptical; you cannot extract work from noise.

\n\n

Thrun gave the solution:  Just pick any function from dollars onto probability that decreases monotonically and continuously from probability 1 at 0 dollars to probability 0 at infinity.  Then if you open the envelope and find that amount of money, roll a die and switch the envelope at that probability.  However much money was in both envelopes originally, and whatever the distribution, you will always have a higher probability of switching the envelope with the lower amount of money.
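A small simulation illustrates the claim.  This is a sketch under assumed conditions: the exponential switching function and the uniform distribution over envelope amounts are arbitrary choices here, since the point is that any strictly decreasing function gives an edge against any distribution:

```python
import math
import random

def switch_probability(amount):
    # Any continuous, strictly decreasing function from [0, inf) to (0, 1].
    return math.exp(-amount)

def play_once():
    a = random.uniform(0, 10)      # arbitrary distribution over the smaller amount
    low, high = a, 2 * a
    seen, other = random.choice([(low, high), (high, low)])
    if random.random() < switch_probability(seen):
        seen = other               # switch at the probability given by the function
    return seen == high            # did we end up with the larger envelope?

trials = 200_000
wins = sum(play_once() for _ in range(trials))
print(wins / trials)               # reliably above 0.5 (about 0.525 with these choices)
```

Because the switching function is strictly decreasing, you switch away from the smaller amount more often than away from the larger one; the probability of ending with the larger envelope works out to 0.5 plus half the expected difference of the function evaluated at the two amounts, which is always positive.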

\n\n

I said, "That can't possibly work... you can't derive useful work from an arbitrary function and a random number... maybe it involves an improper prior..."

\n\n

"No it doesn't," said Thrun; and it didn't.

\n\n

So I went away and thought about it overnight and finally wrote an email in which I argued that the algorithm did make use of prior knowledge about the envelope distribution.  (As the density of the differential of the monotonic function, in the vicinity of the actual envelope contents, goes to zero, the expected benefit of the algorithm over random chance, goes to zero.)  Moreover, once you realized how you were using your prior knowledge, you could see a derandomized version of the algorithm which was superior, even though it didn't make the exact guarantee Thrun had made.
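A sketch of that derandomized variant, under the same assumed distribution as the simulation above: replace the die roll with a fixed threshold (the 7.5 below is an assumed prior guess, not part of Thrun's algorithm).  Whenever an envelope pair straddles the threshold you end with the larger amount every time; otherwise you are at an even 50/50.  So against a distribution that puts mass around the threshold it outperforms the randomized rule, but against a distribution lying entirely on one side of the threshold it has no edge at all, which is the distribution-free guarantee it gives up:

```python
import random

THRESHOLD = 7.5                    # assumed prior guess about envelope amounts

def play_once():
    a = random.uniform(0, 10)      # same assumed distribution as above
    low, high = a, 2 * a
    seen, other = random.choice([(low, high), (high, low)])
    if seen < THRESHOLD:
        seen = other               # deterministic switch below the threshold
    return seen == high

trials = 200_000
wins = sum(play_once() for _ in range(trials))
print(wins / trials)               # about 0.69 under these assumptions
```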

\n\n

But Thrun's solution did do what he said it did.

\n\n

(In a remarkable coincidence, not too much later, Steve Omohundro presented me with an even more startling paradox.  &quot;That can't work,&quot; I said.  &quot;Yes it can,&quot; said Steve, and it could.  Later I perceived, after some thought, that the paradox was a more complex analogue of Thrun's algorithm.  &quot;Why, this is analogous to Thrun's algorithm,&quot; I said, and explained Thrun's algorithm.  &quot;That's not analogous,&quot; said Steve.  &quot;Yes it is,&quot; I said, and it was.)

\n\n

Why disagree with Thrun in the first place?  He was a prestigious AI researcher who had just won the DARPA Grand Challenge, crediting his Bayesian view of probability - a formidable warrior with modern arms and armor.  It wasn't a transhumanist question; I had no special expertise.

\n\n

Because I had worked out, as a very general principle, that you ought not to be able to extract cognitive work from randomness; and Thrun's algorithm seemed to be defying that principle.

\n\n

Okay, but what does that have to do with the disagreement?  Why presume that it was his algorithm that was at fault, and not my foolish belief that you couldn't extract cognitive work from randomness?

\n\n

Well, in point of fact, neither of these was the problem.  The fault was in my notion that there was a conflict between Thrun's algorithm doing what he said it did, and the no-work-from-randomness principle.  So if I'd just assumed I was wrong, I would have been wrong.

\n\n

Yet surely I could have done better, if I had simply presumed Thrun to be correct, and managed to break down the possibilities for error on my part into &quot;The 'no work from randomness' principle is incorrect&quot; and &quot;My understanding of what Thrun meant is incorrect&quot; and &quot;My understanding of the algorithm is incomplete; there is no conflict between it and 'no work from randomness'.&quot;

\n\n

Well, yes, on that occasion, this would have given me a better probability distribution, if I had assigned probability 0 to a possibility that turned out, in retrospect, to be wrong.

\n\n

But probability 0 is a strawman; could I have done better by assigning a smaller probability that Thrun had said anything mathematically wrong?

\n\n

Yes.  And if I meet Thrun again, or anyone who seems similar to Thrun, that's just what I'll do.

\n\n

Just as I'll assign a slightly higher probability\nthat I might be right, the next time I find what looks like an error in\na famous math book.  In fact, one of the reasons why I lingered on what\nlooked like a problem in Pearl's Causality, was that I'd previously found an\nacknowledged typo in Probability Theory: The Logic of Science.

\n\n

My rhythm of disagreement is not a fixed rule, it seems.  A fixed rule would be beyond updating by experience.

\n\n

I tried to explain why I disagreed with Roger Schank, and Robin said, "All else equal a younger person is more likely to be right in a disagreement?"

\n\n

But all else wasn't equal.  That was the point.  Roger Schank is a partisan of what one might best describe as "old school" AI, i.e.,  suggestively named LISP tokens.

\n\n

Is it good for the young to disagree with the old?  Sometimes.  Not all the time.  Just some of the time.  When?  Ah, that's the question!  Even in general, if you are disagreeing about the future course of AI with a famous old AI researcher, and the famous old AI researcher is of the school of suggestively named LISP tokens, and you yourself are 21 years old and have taken one undergraduate course taught with "Artificial Intelligence: A Modern Approach" that you thought was great... then I would tell you to go for it.  Probably both of you are wrong.  But if you forced me to bet money on one or the other, without hearing the specific argument, I'd go with the young upstart.  Then again, the young upstart is not me, so how do they know that rule?

\n\n

It's hard enough to say what the rhythm of disagreement should be in my own case.  I would hesitate to offer general advice to others, save the obvious:  Be less ready to disagree with a supermajority than a mere majority; be less ready to disagree outside than inside your expertise; always pay close attention to the object-level arguments; never let the debate become about tribal status.

" } }, { "_id": "iD5baT42zYAkWJPMB", "title": "A Premature Word on AI", "pageUrl": "https://www.lesswrong.com/posts/iD5baT42zYAkWJPMB/a-premature-word-on-ai", "postedAt": "2008-05-31T17:48:46.000Z", "baseScore": 27, "voteCount": 21, "commentCount": 69, "url": null, "contents": { "documentId": "iD5baT42zYAkWJPMB", "html": "

Followup to:  A.I. Old-Timers, Do Scientists Already Know This Stuff?

\n

In response to Robin Hanson's post on the disillusionment of old-time AI researchers such as Roger Schank, I thought I'd post a few premature words on AI, even though I'm not really ready to do so:

\n

Anyway:

\n

I never expected AI to be easy.  I went into the AI field because I thought it was world-crackingly important, and I was willing to work on it if it took the rest of my whole life, even though it looked incredibly difficult.

\n

I've noticed that folks who actively work on Artificial General Intelligence, seem to have started out thinking the problem was much easier than it first appeared to me.

\n

In retrospect, if I had not thought that the AGI problem was worth a hundred and fifty thousand human lives per day - that's what I thought in the beginning - then I would not have challenged it; I would have run away and hid like a scared rabbit.  Everything I now know about how to not panic in the face of difficult problems, I learned from tackling AGI, and later, the superproblem of Friendly AI, because running away wasn't an option.

\n

Try telling one of these AGI folks about Friendly AI, and they reel back, surprised, and immediately say, \"But that would be too difficult!\"  In short, they have the same run-away reflex as anyone else, but AGI has not activated it.  (FAI does.)

\n

Roger Schank is not necessarily in this class, please note.  Most of the people currently wandering around in the AGI Dungeon are those too blind to see the warning signs, the skulls on spikes, the flaming pits.  But e.g. John McCarthy is a warrior of a different sort; he ventured into the AI Dungeon before it was known to be difficult.  I find that in terms of raw formidability, the warriors who first stumbled across the Dungeon, impress me rather more than most of the modern explorers - the first explorers were not self-selected for folly.  But alas, their weapons tend to be extremely obsolete.

\n

\n

There are many ways to run away from difficult problems.  Some of them are exceedingly subtle.

\n

What makes a problem seem impossible?  That no avenue of success is available to mind.  What makes a problem seem scary?  That you don't know what to do next.

\n

Let's say that the problem of creating a general intelligence seems scary, because you have no idea how to do it.  You could run away by working on chess-playing programs instead.  Or you could run away by saying, \"All past AI projects failed due to lack of computing power.\"  Then you don't have to face the unpleasant prospect of staring at a blank piece of paper until drops of blood form on your forehead - the best description I've ever heard of the process of searching for core insight.  You have avoided placing yourself into a condition where your daily work may consist of not knowing what to do next.

\n

But \"Computing power!\" is a mysterious answer to a mysterious question.  Even after you believe that all past AI projects failed \"due to lack of computing power\", it doesn't make intelligence any less mysterious.  \"What do you mean?\" you say indignantly, \"I have a perfectly good explanation for intelligence: it emerges from lots of computing power!  Or knowledge!  Or complexity!\"  And this is a subtle issue to which I must probably devote more posts.  But if you contrast the rush of insight into details and specifics that follows from learning about, say, Pearlian causality, you may realize that \"Computing power causes intelligence\" does not constrain detailed anticipation of phenomena even in retrospect.

\n

People are not systematically taught what to do when they're scared; everyone's got to work it out on their own.  And so the vast majority stumble into simple traps like mysterious answers or affective death spirals.  I too stumbled, but I managed to recover and get out alive; and realized what it was that I'd learned; and then I went back into the Dungeon, because I had something to protect.

\n

I've recently discussed how scientists are not taught to handle chaos, so I'm emphasizing that aspect in this particular post, as opposed to a dozen other aspects...  If you want to appreciate the inferential distances here, think of how odd all this would sound without the Einstein sequence.  Then think of how odd the Einstein sequence would have sounded without the many-worlds sequence...  There's plenty more where that came from.

\n

What does progress in AGI/FAI look like, if not bigger and faster computers?

It looks like taking down the real barrier, the scary barrier, the one where you have to sweat blood: understanding things that seem mysterious, and not by declaring that they're \"emergent\" or \"complex\", either.

If you don't understand the family of Cox's Theorems and the Dutch Book argument, you can go round and round with \"certainty factors\" and \"fuzzy logics\" that seem sorta appealing, but that can never quite be made to work right.  Once you understand the structure of probability - not just probability as an explicit tool, but as a forced implicit structure in cognitive engines - even if the structure is only approximate - then you begin to actually understand what you're doing; you are not just trying things that seem like good ideas.  You have achieved core insight.  You are not even limited to floating-point numbers between 0 and 1 to represent probability; you have seen through to structure, and can use log odds or smoke signals if you wish.
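
As a small illustration of seeing through to structure, here is a sketch of the same belief carried as log odds instead of a probability; in log-odds form a Bayesian update is simple addition.  The numbers are arbitrary:

```python
import math

def to_log_odds(p):
    """Probability in (0, 1) -> log odds in (-inf, +inf)."""
    return math.log(p / (1.0 - p))

def from_log_odds(a):
    """Inverse map: log odds -> probability."""
    return 1.0 / (1.0 + math.exp(-a))

# In log-odds form, a Bayesian update is addition:
# posterior log odds = prior log odds + log likelihood ratio.
prior = to_log_odds(0.5)   # even odds
update = math.log(4.0)     # evidence with a 4:1 likelihood ratio
print(from_log_odds(prior + update))  # 0.8
```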

If you don't understand graphical models of conditional independence, you can go round and round inventing new \"default logics\" and \"defeasible logics\" that get more and more complicated as you try to incorporate an infinite number of special cases.  If you know the graphical structure, and why the graphical model works, and the regularity of the environment that it exploits, and why it is efficient as well as correct, then you really understand the problem; you are not limited to explicit Bayesian networks, you just know that you have to exploit a certain kind of mathematical regularity in the environment.
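
For a toy instance of the regularity a graphical model exploits, consider a three-node chain A -> B -> C.  The joint distribution factorizes as P(a)P(b|a)P(c|b), and the graph promises that C is independent of A given B.  The conditional probability tables below are arbitrary numbers of mine, used only to verify that promise by enumeration:

```python
import itertools

# Toy chain A -> B -> C, stored as conditional probability tables.
p_a = {0: 0.7, 1: 0.3}
p_b_given_a = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_c_given_b = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.3, 1: 0.7}}

def joint(a, b, c):
    # The factorization the graph licenses: P(a, b, c) = P(a) P(b|a) P(c|b).
    return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

# Check the independence the graph encodes: P(c|a,b) = P(c|b) for all values.
for a, b, c in itertools.product([0, 1], repeat=3):
    p_ab = sum(joint(a, b, cc) for cc in (0, 1))
    assert abs(joint(a, b, c) / p_ab - p_c_given_b[b][c]) < 1e-12
print("P(c|a,b) = P(c|b) everywhere: the chain factorization holds")
```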

Unfortunately, these two insights - Bayesian probability and Pearlian causality - are far from sufficient to solve general AI problems.  If you try to do anything with these two theories that requires an additional key insight you do not yet possess, you will fail just like any other AGI project, and build something that grows more and more complicated and patchworky but never quite seems to work the way you hoped.

\n

These two insights are examples of what \"progress in AI\" looks like.

\n

Most people who say they intend to tackle AGI do not understand Bayes or Pearl.  Most of the people in the AI Dungeon are there because they think they found the Sword of Truth in an old well, or, even worse, because they don't realize the problem is difficult.  They are not polymaths; they are not making a convulsive desperate effort to solve the unsolvable.  They are optimists who have their Great Idea that is the best idea ever even though they can't say exactly how it will produce intelligence, and they want to do the scientific thing and test their hypothesis.  If they hadn't started out thinking they already had the Great Idea, they would have run away from the Dungeon; but this does not give them much of a motive to search for other master keys, even the ones already found.

\n

The idea of looking for an \"additional insight you don't already have\" is something that the academic field of AI is just not set up to do.  As a strategy, it does not result in a reliable success (defined as a reliable publication).  As a strategy, it requires additional study and large expenditures of time.  It ultimately amounts to \"try to be Judea Pearl or Laplace\" and that is not something that professors have been reliably taught to teach undergraduates; even though it is often what a field in a state of scientific chaos needs.

\n

John McCarthy said quite well what Artificial Intelligence needs:  1.7 Einsteins, 2 Maxwells, 5 Faradays and .3 Manhattan Projects.  From this I am forced to subtract the \"Manhattan project\", because security considerations of FAI prohibit using that many people; but I doubt it'll take more than another 1.5 Maxwells and 0.2 Faradays to make up for it.

\n

But, as said, the field of AI is not set up to support this - it is set up to support explorations with reliable payoffs.

\n

You would think that there would be genuinely formidable people going into the Dungeon of Generality, nonetheless, because they wanted to test their skills against true scientific chaos.  Even if they hadn't yet realized that their little sister is down there.  Well, that sounds very attractive in principle, but I guess it sounds a lot less attractive when you have to pay the rent.  Or they're all off doing string theory, because AI is well-known to be impossible, not the sort of chaos that looks promising - why, it's genuinely scary!  You might not succeed, if you went in there!

\n

But I digress.  This began as a response to Robin Hanson's post \"A.I. Old-Timers\", and Roger Schank's very different idea of what future AI progress will look like.

\n

Okay, let's take a look at Roger Schank's argument:

\n
\n

I have not soured on AI. I still believe that we can create very intelligent machines. But I no longer believe that those machines will be like us... What AI can and should build are intelligent special purpose entities. (We can call them Specialized Intelligences or SI's.) Smart computers will indeed be created. But they will arrive in the form of SI's, ones that make lousy companions but know every shipping accident that ever happened and why (the shipping industry's SI) or as an expert on sales (a business world SI.)

\n
\n

I ask the fundamental question of rationality:  Why do you believe what you believe?

\n

Schank would seem to be talking as if he knows something about the course of future AI research - research that hasn't happened yet. What is it that he thinks he knows? How does he think he knows it?

\n

As John McCarthy said: \"Your statements amount to saying that if AI is possible, it should be easy. Why is that?\"

\n

There is a master strength behind all human arts:  Human intelligence can, without additional adaptation, create the special-purpose systems of a skyscraper, a gun, a space shuttle, a nuclear weapon, a DNA synthesizer, a high-speed computer...

\n

If none of what the human brain does is magic, the combined trick of it can be recreated in purer form.

\n

If this can be done, someone will do it.  The fact that shipping-inventory programs can be built as well does not mean that it is sensible to talk about people only building shipping-inventory programs, if it is also possible to build something of human+ power.  In a world where both events occur, the course of history is dominated by the latter.

\n

So what is it that Roger Schank learned, as Bayesian evidence, which confirms some specific hypothesis over its alternatives - and what is the hypothesis, exactly? - that reveals to him the future course of AI research?  Namely, that AI will not succeed in creating anything of general capability?

\n

It would seem rather difficult to predict the future course of research you have not yet done.  Wouldn't Schank have to know the best solution in order to know the minimum time the best solution would take?

\n

Of course I don't think Schank is actually doing a Bayesian update here.  I think Roger Schank gives the game away when he says:

\n
\n

When reporters interviewed me in the 70's and 80's about the possibilities for Artificial Intelligence I would always say that we would have machines that are as smart as we are within my lifetime. It seemed a safe answer since no one could ever tell me I was wrong.

\n
\n

There is careful futurism, where you try to consider all the biases you know, and separate your analysis into logical parts, and put confidence intervals around things, and use wider confidence intervals where you have less constraining knowledge, and all that other stuff rationalists do.  Then there is sloppy futurism, where you just make something up that sounds neat.  This sounds like sloppy futurism to me.

\n

So, basically, Schank made a fantastic amazing futuristic prediction about machines \"as smart as we are\" \"within my lifetime\" - two phrases that themselves reveal some shaky assumptions.

\n

Then Schank got all sad and disappointed because he wasn't making progress as fast as he hoped.

\n

So Schank made a different futuristic prediction, about special-purpose AIs that will answer your questions about shipping disasters.  It wasn't quite as shiny and futuristic, but it matched his new saddened mood, and it gave him something to say to reporters when they asked him where AI would be in 2050.

\n

This is how the vast majority of futurism is done.  So until I have reason to believe there is something more to Schank's analysis than this, I don't feel very guilty about disagreeing with him when I make \"predictions\" like:

\n

If you don't know much about a problem, you should widen your confidence intervals in both directions.  AI seems very hard because you don't know how to do it.  But translating that feeling into a confident prediction of a very long time interval would express your ignorance as if it were positive knowledge: the less you know, the broader your confidence interval should be, in both directions.

Or:

You don't know what theoretical insights will be required for AI, or you would already have them.  Theoretical breakthroughs can happen without advance warning (the warning is perceived in retrospect, of course, but not in advance); and they can be arbitrarily large.  We know it is difficult to build a star from hydrogen atoms in the obvious way - because we understand how stars work, so we know that the work required is a huge amount of drudgery. 

\n

Or:

\n

Looking at the anthropological trajectory of hominids seems to strongly contradict the assertion that exponentially increasing amounts of processing power or programming time are required for the production of intelligence in the vicinity of human; even when using an evolutionary algorithm that runs on blind mutations, random recombination, and selection with zero foresight.

\n

But if I don't want this post to go on forever, I had better stop it here.  See this paper, however.

" } }, { "_id": "xAXrEpF5FYjwqKMfZ", "title": "Class Project", "pageUrl": "https://www.lesswrong.com/posts/xAXrEpF5FYjwqKMfZ/class-project", "postedAt": "2008-05-31T00:23:01.000Z", "baseScore": 66, "voteCount": 57, "commentCount": 38, "url": null, "contents": { "documentId": "xAXrEpF5FYjwqKMfZ", "html": "

\"Do as well as Einstein?\" Jeffreyssai said, incredulously.  \"Just as well as Einstein?  Albert Einstein was a great scientist of his era, but that was his era, not this one!  Einstein did not comprehend the Bayesian methods; he lived before the cognitive biases were discovered; he had no scientific grasp of his own thought processes.  Einstein spoke nonsense of an impersonal God—which tells you how well he understood the rhythm of reason, to discard it outside his own field! He was too caught up in the drama of rejecting his era's quantum mechanics to actually fix it.  And while I grant that Einstein reasoned cleanly in the matter of General Relativity—barring that matter of the cosmological constant—he took ten years to do it.  Too slow!\"

\n

\"Too slow?\" repeated Taji incredulously.

\n

\"Too slow!  If Einstein were in this classroom now, rather than Earth of the negative first century, I would rap his knuckles!  You will not try to do as well as Einstein!  You will aspire to do BETTER than Einstein or you may as well not bother!\"

\n

Jeffreyssai shook his head.  \"Well, I've given you enough hints.  It is time to test your skills.  Now, I know that the other beisutsukai don't think much of my class projects...\"  Jeffreyssai paused significantly.

\n

Brennan inwardly sighed.  He'd heard this line many times before, in the Bardic Conspiracy, the Competitive Conspiracy:  The other teachers think my assignments are too easy, you should be grateful, followed by some ridiculously difficult task— 

\n

\n

\"They say,\" Jeffreyssai said, \"that my projects are too hard; insanely hard; that they pass from the realm of madness into the realm of Sparta; that Laplace himself would catch on fire; they accuse me of trying to tear apart my students' souls—\"

\n

Oh, crap.

\n

\"But there is a reason,\" Jeffreyssai said, \"why many of my students have achieved great things; and by that I do not mean high rank in the Bayesian Conspiracy.  I expected much of them, and they came to expect much of themselves.  So...\"

\n

Jeffreyssai took a moment to look over his increasingly disturbed students.  \"Here is your assignment.  Of quantum mechanics, and General Relativity, you have been told.  This is the limit of Eld science, and hence, the limit of public knowledge.  The five of you, working on your own, are to produce the correct theory of quantum gravity.  Your time limit is one month.\"

\n

\"What?\" said Brennan, Taji, Styrlyn, and Yin.  Hiriwa gave them a puzzled look.

\n

\"Should you succeed,\" Jeffreyssai continued, \"you will be promoted to beisutsukai of the second dan and sixth level.  We will see if you have learned speed.  Your clock starts—now.\"

\n

And Jeffreyssai strode out of the room, slamming the door behind him.

\n

\"This is crazy!\" Taji cried.

\n

Hiriwa looked at Taji, bemused.  \"The solution is not known to us.  How can you know it is so difficult?\"

\n

\"Because we knew about this problem back in the Eld days!  Eld scientists worked on this problem for a lot longer than one month.\"

\n

Hiriwa shrugged.  \"They were still arguing about many-worlds too, weren't they?\"

\n

\"Enough!  There's no time!\"

\n

The other four students looked to Styrlyn, remembering that he was said to rank high in the Cooperative Conspiracy.  There was a brief moment of weighing, of assessing, and then Styrlyn was their leader.

\n

Styrlyn took a great breath.  \"We need a list of approaches.  Write down all the angles you can think of.  Independently—we need your individual components before we start combining.  In five minutes, I'll ask each of you for your best idea first.  No wasted thoughts!  Go!\"

\n

Brennan grabbed a sheet and his tracer, set the tip to the surface, and then paused.  He couldn't think of anything clever to say about unifying general relativity and quantum mechanics...

\n

The other students were already writing.

\n

Brennan tapped the tip, once, twice, thrice.  General relativity and quantum mechanics...

\n

Taji put his first sheet aside, grabbed another.

\n

Finally, Brennan, for lack of anything clever to say, wrote down the obvious.

\n

Minutes later, when Styrlyn called time, it was still all he had written.

\n

\"All right,\" Styrlyn said, \"your best idea.  Or the idea you most want the rest of us to take into account in our second components. Taji, go!\"

\n

Taji looked over his sheets.  \"Okay, I think we've got to assume that every avenue that Eld science was trying is a blind alley, or they would have found it.  And if this is possible to do in one month, the answer must be, in some sense, elegant.  So no multiple dimensions.  If we start doing anything that looks like we should call it 'string theory', we'd better stop.  Maybe begin by considering how failure to understand decoherence could have led Eld science astray in quantizing gravity.\"

\n

\"The opposite of folly is folly,\" Hiriwa said.  \"Let us pretend that Eld science never existed.\"

\n

\"No criticisms yet!\" said Styrlyn.  \"Hiriwa, your suggestion?\"

\n

\"Get rid of the infinities,\" said Hiriwa, \"extirpate that which permits them.  It should not be a matter of cleverness with integrals. A representation that allows infinity must be false-to-fact.\"

\n

\"Yin.\"

\n

\"We know from common sense,\" Yin said, \"that if we stepped outside the universe, we would see time laid out all at once, reality like a crystal.  But I once encountered a hint that physics is timeless in a deeper sense than that.\"  Yin's eyes were distant, remembering. \"Years ago, I found an abandoned city; it had been uninhabited for eras, I think.  And behind a door whose locks were broken, carved into one wall:  quote .ua sai .ei mi vimcu ty bu le mekso unquote.\"

\n

Brennan translated:  Eureka!  Eliminate t from the equations.  And written in Lojban, the sacred language of science, which meant the unknown writer had thought it to be true. 

\n

\"The 'timeless physics' of which we've all heard rumors,\" Yin said, \"may be timeless in a very literal sense.\"

\n

\"My own contribution,\" Styrlyn said.  \"The quantum physics we've learned is over joint positional configurations.  It seems like we should be able to take that apart into a spatially local representation, in terms of invariant distant entanglements.  Finding that representation might help us integrate with general relativity, whose curvature is local.\"

\n

\"A strangely individualist perspective,\" Taji murmured, \"for one of the Cooperative Conspiracy.\"

\n

Styrlyn shook his head.  \"You misunderstand us, then.  The first lesson we learn is that groups are made of people... no, there is no time for politics.  Brennan!\"

\n

Brennan shrugged.  \"Not much, I'm afraid, only the obvious. Inertial mass-energy was always observed to equal gravitational mass-energy, and Einstein showed that they were necessarily the same. So why is the 'energy' that is an eigenvalue of the quantum Hamiltonian, necessarily the same as the 'energy' quantity that appears in the equations of General Relativity?  Why should spacetime curve at the same rate that the little arrows rotate?\"

\n

There was a brief pause.

\n

Yin frowned.  \"That seems too obvious.  Wouldn't Eld science have figured it out already?\"

\n

\"Forget Eld science existed,\" Hiriwa said.  \"The question stands: we need the answer, whether it was known in ancient times or not.  It cannot possibly be coincidence.\"

\n

Taji's eyes were abstracted.  \"Perhaps it would be possible to show that an exception to the equality would violate some conservation law...\"

\n

\"That is not where Brennan pointed,\" Hiriwa interrupted.  \"He did not ask for a proof that they must be set equal, given some appealing principle; he asked for a view in which the two are one and cannot be divided even conceptually, as was accomplished for inertial mass-energy and gravitational mass-energy.  For we must assume that the beauty of the whole arises from the fundamental laws, and not the other way around.  Fair-rephrasing?\"

\n

\"Fair-rephrasing,\" Brennan replied.

\n

Silence reigned for thirty-seven seconds, as the five pondered the five suggestions.

\n

\"I have an idea...\"

" } }, { "_id": "5o4EZJyqmHY4XgRCY", "title": "Einstein's Superpowers", "pageUrl": "https://www.lesswrong.com/posts/5o4EZJyqmHY4XgRCY/einstein-s-superpowers", "postedAt": "2008-05-30T06:40:55.000Z", "baseScore": 121, "voteCount": 86, "commentCount": 92, "url": null, "contents": { "documentId": "5o4EZJyqmHY4XgRCY", "html": "

There is a widespread tendency to talk (and think) as if Einstein, Newton, and similar historical figures had superpowers—something magical, something sacred, something beyond the mundane.  (Remember, there are many more ways to worship a thing than lighting candles around its altar.)

\n

Once I unthinkingly thought this way too, with respect to Einstein in particular, until reading Julian Barbour's The End of Time cured me of it.

\n

Barbour laid out the history of anti-epiphenomenal physics and Mach's Principle; he described the historical controversies that predated Mach—all this that stood behind Einstein and was known to Einstein, when Einstein tackled his problem...

\n

And maybe I'm just imagining things—reading too much of myself into Barbour's book—but I thought I heard Barbour very quietly shouting, coded between the polite lines:

\n
\n

What Einstein did isn't magic, people!  If you all just looked at how he actually did it, instead of falling to your knees and worshiping him, maybe then you'd be able to do it too!

\n
\n

(EDIT March 2013:  Barbour did not actually say this.  It does not appear in the book text.  It is not a Julian Barbour quote and should not be attributed to him.  Thank you.)

\n

Maybe I'm mistaken, or extrapolating too far... but I kinda suspect that Barbour once tried to explain to people how you move further along Einstein's direction to get timeless physics; and they sniffed scornfully and said, \"Oh, you think you're Einstein, do you?\"

\n

\n

John Baez's Crackpot Index, item 18:

\n
\n

10 points for each favorable comparison of yourself to Einstein, or claim that special or general relativity are fundamentally misguided (without good evidence).

\n
\n

Item 30:

\n
\n

30 points for suggesting that Einstein, in his later years, was groping his way towards the ideas you now advocate.

\n
\n

Barbour never bothers to compare himself to Einstein, of course; nor does he ever appeal to Einstein in support of timeless physics.  I mention these items on the Crackpot Index by way of showing how many people compare themselves to Einstein, and what society generally thinks of them.

\n

The crackpot sees Einstein as something magical, so they compare themselves to Einstein by way of praising themselves as magical; they think Einstein had superpowers and they think they have superpowers, hence the comparison.

\n

But it is just the other side of the same coin, to think that Einstein is sacred, and the crackpot is not sacred, therefore they have committed blasphemy in comparing themselves to Einstein.

\n

Suppose a bright young physicist says, \"I admire Einstein's work, but personally, I hope to do better.\"  If someone is shocked and says, \"What!  You haven't accomplished anything remotely like what Einstein did; what makes you think you're smarter than him?\" then they are the other side of the crackpot's coin.

\n

The underlying problem is conflating social status and research potential.

\n

Einstein has extremely high social status: because of his record of accomplishments; because of how he did it; and because he's the physicist whose name even the general public remembers, who brought honor to science itself.

\n

And we tend to mix up fame with other quantities, and we tend to attribute people's behavior to dispositions rather than situations.

\n

So there's this tendency to think that Einstein, even before he was famous, already had an inherent disposition to be Einstein—a potential as rare as his fame and as magical as his deeds.  So that if you claim to have the potential to do what Einstein did, it is just the same as claiming Einstein's rank, rising far above your assigned status in the tribe.

\n

I'm not phrasing this well, but then, I'm trying to dissect a confused thought:  Einstein belongs to a separate magisterium, the sacred magisterium.  The sacred magisterium is distinct from the mundane magisterium; you can't set out to be Einstein in the way you can set out to be a full professor or a CEO.  Only beings with divine potential can enter the sacred magisterium—and then it is only fulfilling a destiny they already have.  So if you say you want to outdo Einstein, you're claiming to already be part of the sacred magisterium—you claim to have the same aura of destiny that Einstein was born with, like a royal birthright...

\n

\"But Eliezer,\" you say, \"surely not everyone can become Einstein.\"

\n

You mean to say, not everyone can do better than Einstein.

\n

\"Um... yeah, that's what I meant.\"

\n

Well... in the modern world, you may be correct.  You probably should remember that I am a transhumanist, going around looking around at people thinking, \"You know, it just sucks that not everyone has the potential to do better than Einstein, and this seems like a fixable problem.\"  It colors one's attitude.

\n

But in the modern world, yes, not everyone has the potential to be Einstein.

\n

Still... how can I put this...

\n

There's a phrase I once heard, can't remember where:  \"Just another Jewish genius.\"  Some poet or author or philosopher or other, brilliant at a young age, doing something not tremendously important in the grand scheme of things, not all that influential, who ended up being dismissed as \"Just another Jewish genius.\"

\n

If Einstein had chosen the wrong angle of attack on his problem—if he hadn't chosen a sufficiently important problem to work on—if he hadn't persisted for years—if he'd taken any number of wrong turns—or if someone else had solved the problem first—then dear Albert would have ended up as just another Jewish genius.

\n

Geniuses are rare, but not all that rare.  It is not all that implausible to lay claim to the kind of intellect that can get you dismissed as \"just another Jewish genius\" or \"just another brilliant mind who never did anything interesting with their life\".  The associated social status here is not high enough to be sacred, so it should seem like an ordinarily evaluable claim.

\n

But what separates people like this from becoming Einstein, I suspect, is no innate defect of brilliance.  It's things like \"lack of an interesting problem\"—or, to put the blame where it belongs, \"failing to choose an important problem\".  It is very easy to fail at this because of the cached thought problem:  Tell people to choose an important problem and they will choose the first cache hit for \"important problem\" that pops into their heads, like \"global warming\" or \"string theory\".

\n

The truly important problems are often the ones you're not even considering, because they appear to be impossible, or, um, actually difficult, or worst of all, not clear how to solve.  If you worked on them for years, they might not seem so impossible... but this is an extra and unusual insight; naive realism will tell you that solvable problems look solvable, and impossible-looking problems are impossible.

\n

Then you have to come up with a new and worthwhile angle of attack.  Most people who are not allergic to novelty, will go too far in the other direction, and fall into an affective death spiral.

\n

And then you've got to bang your head on the problem for years, without being distracted by the temptations of easier living.  \"Life is what happens while we are making other plans,\" as the saying goes, and if you want to fulfill your other plans, you've often got to be ready to turn down life.

\n

Society is not set up to support you while you work, either.

\n

The point being, the problem is not that you need an aura of destiny and the aura of destiny is missing.  If you'd met Albert before he published his papers, you would have perceived no aura of destiny about him to match his future high status.  He would seem like just another Jewish genius.

\n

This is not because the royal birthright is concealed, but because it simply is not there.  It is not necessary.  There is no separate magisterium for people who do important things.

\n

I say this, because I want to do important things with my life, and I have a genuinely important problem, and an angle of attack, and I've been banging my head on it for years, and I've managed to set up a support structure for it; and I very frequently meet people who, in one way or another, say:  \"Yeah?  Let's see your aura of destiny, buddy.\"

\n

What impressed me about Julian Barbour was a quality that I don't think anyone would have known how to fake without actually having it:  Barbour seemed to have seen through Einstein—he talked about Einstein as if everything Einstein had done was perfectly understandable and mundane.

\n

Though even having realized this, to me it still came as a shock, when Barbour said something along the lines of, \"Now here's where Einstein failed to apply his own methods, and missed the key insight—\"  But the shock was fleeting, I knew the Law:  No gods, no magic, and ancient heroes are milestones to tick off in your rearview mirror.

\n

This seeing through is something one has to achieve, an insight one has to discover.  You cannot see through Einstein just by saying, \"Einstein is mundane!\" if his work still seems like magic unto you.  That would be like declaring \"Consciousness must reduce to neurons!\" without having any idea of how to do it.  It's true, but it doesn't solve the problem.

\n

I'm not going to tell you that Einstein was an ordinary bloke oversold by the media, or that deep down he was a regular schmuck just like everyone else.  That would be going much too far.  To walk this path, one must acquire abilities some consider to be... unnatural.  I take a special joy in doing things that people call \"humanly impossible\", because it shows that I'm growing up.

\n

Yet the way that you acquire magical powers is not by being born with them, but by seeing, with a sudden shock, that they really are perfectly normal.

\n

This is a general principle in life.

" } }, { "_id": "KipiHsTA3pw4joQkG", "title": "Timeless Causality", "pageUrl": "https://www.lesswrong.com/posts/KipiHsTA3pw4joQkG/timeless-causality", "postedAt": "2008-05-29T06:45:33.000Z", "baseScore": 48, "voteCount": 47, "commentCount": 67, "url": null, "contents": { "documentId": "KipiHsTA3pw4joQkG", "html": "

Followup to:  Timeless Physics

\n

Julian Barbour believes that each configuration, each individual point in configuration space, corresponds individually to an experienced Now—that each instantaneous time-slice of a brain is the carrier of a subjective experience.

\n

On this point, I take it upon myself to disagree with Barbour.

\n

There is a timeless formulation of causality, known to Bayesians, which may glue configurations together even in a timeless universe.  Barbour may not have studied this; it is not widely studied.

\n

Such causal links could be required for \"computation\" and \"consciousness\"—whatever those are.  If so, we would not be forced to conclude that a single configuration, encoding a brain frozen in time, can be the bearer of an instantaneous experience.  We could throw out time, and keep the concept of causal computation.

\n

\n

There is an old saying:  \"Correlation does not imply causation.\"  I don't know if this is my own thought, or something I remember hearing, but on seeing this saying, a phrase ran through my mind:  If correlation does not imply causation, what does?

\n

Suppose I'm at the top of a canyon, near a pile of heavy rocks.  I throw a rock over the side, and a few seconds later, I hear a crash.  I do this again and again, and it seems that the rock-throw, and the crash, tend to correlate; to occur in the presence of each other.  Perhaps the sound of the crash is causing me to throw a rock off the cliff?  But no, this seems unlikely, for then an effect would have to precede its cause.  It seems more likely that throwing the rock off the cliff is causing the crash.  If, on the other hand, someone observed me on the cliff, and saw a flash of light, and then immediately afterward saw me throw a rock off the cliff, they would suspect that flashes of light caused me to throw rocks.

\n

Perhaps correlation, plus time, can suggest a direction of causality?

\n

But we just threw out time.

\n

You see the problem here.

\n

Once, sophisticated statisticians believed this problem was unsolvable.  Many thought it was unsolvable even with time.  Time-symmetrical laws of physics didn't seem to leave room for asymmetrical causality.  And in statistics, nobody thought there was any way to define causality.  They could measure correlation, and that was enough.  Causality was declared dead, and the famous statistician R. A. Fisher testified that it was impossible to prove that smoking cigarettes actually caused cancer.

\n

Anyway...

\n

\n

\"Causeundirected_2\"

\n

Let's say we have a data series, generated by taking snapshots over time of two variables 1 and 2.  We have a large amount of data from the series, laid out on a track, but we don't know the direction of  time on the track.  On each round, the past values of 1 and 2 probabilistically generate the future value of 1, and then separately probabilistically generate the future value of 2.  We know this, but we don't know the actual laws.  We can try to infer the laws by gathering statistics about which values of 1 and 2 are adjacent to which other values of 1 and 2.  But we don't know the global direction of time, yet, so we don't know if our statistic relates the effect to the cause, or the cause to the effect.

\n

When we look at an arbitrary value-pair and its neighborhood, let's call the three slices L, M, and R for Left, Middle, and Right.

\n

We are considering two hypotheses.  First, that causality could be flowing from L to M to R:

\n

\n

\"Causeright_2\"

\n

Second, that causality could be flowing from R to M to L:

\n

\"Causeleft_3\"

\n

As good Bayesians, we realize that to distinguish these two hypotheses, we must find some kind of observation that is more likely in one case than in the other.  But what might such an observation be?

\n

We can try to look at various slices M, and try to find correlations between the values of M, and the values of L and R.  For example, we could find that when M1 is in the + state, that R2 is often also in the + state.  But is this because R2 causes M1 to be +, or because M1 causes R2 to be +?

\n

If throwing a rock causes the sound of a crash, then the throw and the crash will tend to occur in each other's presence.  But this is also true if the sound of the crash causes me to throw a rock.  So observing these correlations does not tell us the direction of causality, unless we already know the direction of time.

\n

\"Causeundirected_2\"

\n

From looking at this undirected diagram, we can guess that M1 will correlate to L1, M2 will correlate to R1, R2 will correlate to M2, and so on; and all this will be true because there are lines between the two nodes, regardless of which end of the line we try to draw the arrow upon.  You can see the problem with trying to derive causality from correlation!

\n

Could we find that when M1 is +, R2 is always +, but that when R2 is +, M1 is not always +, and say, \"M1 must be causing R2\"?  But this does not follow.  We said at the beginning that past values of 1 and 2 were generating future values of 1 and 2 in a probabilistic way; it was nowhere said that we would give preference to laws that made the future deterministic given the past, rather than vice versa.  So there is nothing to make us prefer the hypothesis, \"A + at M1 always causes R2 to be +\" to the hypothesis, \"M1 can only be + in cases where its parent R2 is +\".

\n

Ordinarily, at this point, I would say:  \"Now I am about to tell you the answer; so if you want to try to work out the problem on your own, you should do so now.\"  But in this case, some of the greatest statisticians in history did not get it on their own, so if you do not already know the answer, I am not really expecting you to work it out.  Maybe if you remember half a hint, but not the whole answer, you could try it on your own.  Or if you suspect that your era will support you, you could try it on your own; I have given you a tremendous amount of help by asking exactly the correct question, and telling you that an answer is possible.

\n

...

\n

So!  Instead of thinking in terms of observations we could find, and then trying to figure out if they might distinguish asymmetrically between the hypotheses, let us examine a single causal hypothesis and see if it implies any asymmetrical observations.

\n

Say the flow of causality is from left to right:

\n

\"Causeright_3\"

\n

Suppose that we do know L1 and L2, but we do not know R1 and R2.  Will learning M1 tell us anything about M2?

\n

That is, will we observe the conditional dependence

\n
\n

P(M2|L1,L2) ≠ P(M2|M1,L1,L2)

\n
\n

to hold?  The answer, on the assumption that causality flows to the right, and on the other assumptions previously given, is no.  \"On each round, the past values of 1 and 2 probabilistically generate the future value of 1, and then separately probabilistically generate the future value of 2.\"  So once we have L1 and L2, they generate M1 independently of how they generate M2.

\n

But if we did know R1 or R2, then, on the assumptions, learning M1 would give us information about M2.  Suppose that there are siblings Alpha and Betty, cute little vandals, who throw rocks when their parents are out of town.  If the parents are out of town, then either Alpha or Betty might each, independently, decide to throw a rock through my window.  If I don't know whether a rock has been thrown through my window, and I know that Alpha didn't throw a rock through my window, that doesn't affect my probability estimate that Betty threw a rock through my window—they decide independently.  But if I know my window is broken, and I know Alpha didn't do it, then I can guess Betty is the culprit.  So even though Alpha and Betty throw rocks independently of each other, knowing the effect can epistemically entangle my beliefs about the causes.
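
This explaining-away structure is easy to check by simulation.  The coin-flip probabilities and the rule that the window breaks whenever either sibling throws are my own simplifications, not anything specified above:

```python
import random

def trial():
    alpha = random.random() < 0.5           # Alpha throws a rock, or not
    betty = random.random() < 0.5           # Betty throws, independently
    return alpha, betty, (alpha or betty)   # window breaks if either throws

samples = [trial() for _ in range(200_000)]

# Unconditionally, learning about Alpha tells us nothing about Betty:
betty_given_not_alpha = [b for a, b, _ in samples if not a]
print(sum(betty_given_not_alpha) / len(betty_given_not_alpha))  # ~0.5

# Conditioned on the broken window, ruling out Alpha implicates Betty:
betty_given_broken_not_alpha = [b for a, b, w in samples if w and not a]
print(sum(betty_given_broken_not_alpha) / len(betty_given_broken_not_alpha))  # 1.0
```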

\n

Similarly, if we didn't know L1 or L2, then M1 should give us information about M2, because from the effect M1 we can infer the state of its causes L1 and L2, and thence the effect of L1/L2 on M2.  If I know that Alpha threw a rock, then I can guess that Alpha and Betty's parents are out of town, and that makes it more likely that Betty will throw a rock too.

\n

Which all goes to say that, if causality is flowing from L to M to R, we may indeed expect the conditional dependence

\n
\n

P(M2|R1,R2) ≠ P(M2|M1,R1,R2)

\n
\n

to hold.

\n

So if we observe, statistically, over many time slices:

\n
\n

P(M2|L1,L2) = P(M2|M1,L1,L2)
P(M2|R1,R2) ≠ P(M2|M1,R1,R2)

\n
\n

Then we know causality is flowing from left to right; and conversely if we see:

\n
\n

P(M2|L1,L2) ≠ P(M2|M1,L1,L2)
P(M2|R1,R2) = P(M2|M1,R1,R2)

\n
\n

Then we can guess causality is flowing from right to left.
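
Here is a rough simulation sketch of the whole test.  The transition rule, the binary values, and the crude dependence statistic (the largest shift in conditional frequency) are all illustrative choices of mine; the point is only that the asymmetry appears when you condition on the true parents rather than the true children:

```python
import random
from collections import defaultdict

def step(v1, v2):
    """Past values of both variables probabilistically generate the new
    value of 1, and then separately the new value of 2."""
    n1 = (v1 ^ v2) if random.random() < 0.9 else random.getrandbits(1)
    n2 = v1 if random.random() < 0.9 else random.getrandbits(1)
    return n1, n2

series = [(random.getrandbits(1), random.getrandbits(1))]
for _ in range(200_000):
    series.append(step(*series[-1]))

def dependence(pairs):
    """Largest shift in the frequency of M2 = 1 from additionally learning
    M1, given a candidate parent slice; near zero means M1 and M2 look
    conditionally independent, clearly positive means they do not."""
    base, joint = defaultdict(list), defaultdict(list)
    for parent, m in pairs:
        base[parent].append(m[1])
        joint[(parent, m[0])].append(m[1])
    freq = lambda xs: sum(xs) / len(xs)
    return max(abs(freq(joint[k]) - freq(base[k[0]]))
               for k in joint if len(joint[k]) > 100)

mids = range(1, len(series) - 1)
print(dependence([(series[i - 1], series[i]) for i in mids]))  # ~0, given L
print(dependence([(series[i + 1], series[i]) for i in mids]))  # >> 0, given R
```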

\n

This trick used the assumption of probabilistic generators.  We couldn't have done it if the series had been generated by bijective mappings, i.e., if the future was deterministic given the past and only one possible past was compatible with each future.

\n

So this trick does not directly apply to reading causality off of Barbour's Platonia (which is the name Barbour gives to the timeless mathematical object that is our universe).

\n

However, think about the situation if humanity sent off colonization probes to distant superclusters, and then the accelerating expansion of the universe put the colonies over the cosmological horizon from us.  There would then be distant human colonies that could not speak to us again:  Correlations in a case where light, going forward, could not reach one colony from another, or reach any common ground.

\n

On the other hand, we would be very surprised to reach a distant supercluster billions of light-years away, and find a spaceship just arriving from the other side of the universe, sent from another independently evolved Earth, which had developed genetically compatible indistinguishable humans who speak English.  (A la way too much horrible sci-fi television.)  We would not expect such extraordinary similarity of events, in a historical region where a ray of light could not yet have reached there from our Earth, nor a ray of light reached our Earth from there, nor could a ray of light have reached both Earths from any mutual region between.  On the assumption, that is, that rays of light travel in the direction we call \"forward\".

\n

When two regions of spacetime are timelike separated, we cannot deduce any direction of causality from similarities between them; they could be similar because one is cause and one is effect, or vice versa.  But when two regions of spacetime are spacelike separated, and far enough apart that they have no common causal ancestry assuming one direction of physical causality, but would have common causal ancestry assuming a different direction of physical causality, then similarity between them... is at least highly suggestive.

\n

I am not skilled enough in causality to translate probabilistic theorems into bijective deterministic ones.  And by calling certain similarities \"surprising\" I have secretly imported a probabilistic view; I have made myself uncertain so that I can be surprised.

\n

But Judea Pearl himself believes that the arrows of his graphs are more fundamental than the statistical correlations they produce; he has said so in an essay entitled \"Why I Am Only A Half-Bayesian\".  Pearl thinks that his arrows reflect reality, and hence, that there is more to inference than just raw probability distributions.  If Pearl is right, then there is no reason why you could not have directedness in bijective deterministic mappings as well, which would manifest in the same sort of similarity/dissimilarity rules I have just described.

\n

This does not bring back time.  There is no t coordinate, and no global now sweeping across the universe.  Events do not happen in the past or the present or the future, they just are.  But there may be a certain... asymmetric locality of relatedness... that preserves \"cause\" and \"effect\", and with it, \"therefore\".  A point in configuration space would never be \"past\" or \"present\" or \"future\", nor would it have a \"time\" coordinate, but it might be \"cause\" or \"effect\" to another point in configuration space.

\n

I am aware of the standard argument that anything resembling an \"arrow of time\" should be made to stem strictly from the second law of thermodynamics and the low-entropy initial condition.  But if you throw out causality along with time, it is hard to see how a low-entropy terminal condition and high-entropy initial condition could produce the same pattern of similar and dissimilar regions.  Look at it another way:  To compute a consistent universe with a low-entropy terminal condition and high-entropy initial condition, you have to simulate lots and lots of universes, then throw away all but a tiny fraction of them that end up with low entropy at the end.  With a low-entropy initial condition, you can compute it out locally, without any global checks.  So I am not yet ready to throw out the arrowheads on my arrows.
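
A toy analogy in code, with a random walk standing in for a universe (my own arbitrary model, not anything from Barbour or Pearl): a constraint on the initial state is satisfied by one forward run of the local rule, while a constraint on the terminal state forces the global simulate-and-discard procedure described above:

```python
import random

def run_universe(steps=20, start=0):
    """A toy 'universe': a random walk evolved by a purely local rule."""
    state = start
    for _ in range(steps):
        state += random.choice([-1, 1])
    return state

# Constraining the INITIAL state: run the local rule forward, once.
print("forward run ends at", run_universe(start=0))

# Constraining the TERMINAL state: simulate whole histories and throw
# away every one that misses; a global check, not a local computation.
tries = 1
while run_universe(start=random.randint(-20, 20) * 2) != 0:
    tries += 1
print(tries, "whole-history simulations to satisfy the terminal condition")
```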

\n

And, if we have \"therefore\" back, if we have \"cause\" and \"effect\" back—and science would be somewhat forlorn without them—then we can hope to retrieve the concept of \"computation\".  We are not forced to grind up reality into disconnected configurations; there can be glue between them.  We can require the amplitude relations between connected volumes of configuration space, to carry out some kind of timeless computation, before we decide that it contains the timeless Now of a conscious mind.  We are not forced to associate experience with an isolated point in configuration space—which is a good thing from my perspective, because it doesn't seem to me that a frozen brain with all the particles in fixed positions ought to be having experiences.  I would sooner associate experience with the arrows than the nodes, if I had to pick one or the other!  I would sooner associate consciousness with the change in a brain than with the brain itself, if I had to pick one or the other.

\n

This also lets me keep, for at least a little while longer, the concept of a conscious mind being connected to its future Nows, and anticipating some future experiences rather than others.  Perhaps I will have to throw out this idea eventually, because I cannot seem to formulate it consistently; but for now, at least, I still cannot do without the notion of a \"conditional probability\".  It still seems to me that there is some actual connection that makes it more likely for me to wake up tomorrow as Eliezer Yudkowsky, than as Britney Spears.  If I am in the arrows even more than the nodes, that gives me a direction, a timeless flow.  This may possibly be naive, but I am sticking with it until I can jump to an alternative that is less confusing than my present confused state of mind.

\n

Don't think that any of this preserves time, though, or distinguishes the past from the future.  I am just holding onto cause and effect and computation and even anticipation for a little while longer.

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"Timeless Identity\"

\n

Previous post: \"Timeless Beauty\"

" } }, { "_id": "GKTe9bCxFSE6EXEEu", "title": "Timeless Beauty", "pageUrl": "https://www.lesswrong.com/posts/GKTe9bCxFSE6EXEEu/timeless-beauty", "postedAt": "2008-05-28T04:32:00.000Z", "baseScore": 24, "voteCount": 20, "commentCount": 70, "url": null, "contents": { "documentId": "GKTe9bCxFSE6EXEEu", "html": "

Followup to:  Timeless Physics

\n

One of the great surprises of humanity's early study of physics was that there were universal laws, that the heavens were governed by the same order as the Earth:  Laws that hold in all times, in all places, without known exception. Sometimes we discover a seeming exception to the old law, like Mercury's precession, but soon it turns out to perfectly obey a still deeper law, that once again is universal as far as the eye can see.

\n

Every known law of fundamental physics is perfectly global. We know no law of fundamental physics that applies on Tuesdays but not Wednesdays, or that applies in the Northern hemisphere but not the Southern.

\n

In classical physics, the laws are universal; but there are also other entities that are neither perfectly global nor perfectly local. Like the case I discussed yesterday, of an entity called \"the lamp\" where \"the lamp\" is OFF at 7:00am but ON at 7:02am; the lamp entity extends through time, and has different values at different times.  The little billiard balls are like that in classical physics; a classical billiard ball is (alleged to be) a fundamentally existent entity, but it has a world-line, not a world-point.

\n

In timeless physics, everything that exists is either perfectly global or perfectly local.  The laws are perfectly global.  The configurations are perfectly local—every possible arrangement of particles has a single complex amplitude assigned to it, which never changes from one time to another.  Each configuration only affects, and is affected by, its immediate neighbors.  Each actually existent thing is perfectly unique, as a mathematical entity.

\n

Newton, first to combine the Heavens and the Earth with a truly universal generalization, saw a clockwork universe of moving billiard balls and their world-lines, governed by perfect exceptionless laws. Newton was the first to look upon a greater beauty than any mere religion had ever dreamed.

\n

But the beauty of classical physics doesn't begin to compare to the beauty of timeless quantum physics.

\n

\n

Timeful quantum physics is pretty, but it's not all that much prettier than classical physics.  In timeful physics the \"same configuration\" can still have different values at different times, its own little world-line, like a lamp switching from OFF to ON.  There's that ugly t complicating the equations.

\n

You can see the beauty of timeless quantum physics by noticing how much easier it is to mess up the perfection, if you try to tamper with Platonia.

\n

Consider the collapse interpretation of quantum mechanics.  To people raised on timeful quantum physics, \"the collapse of the wavefunction\" sounds like it might be a plausible physical mechanism.

\n

If you step back and look upon the timeless mist over the entire configuration space, all dynamics manifest in its perfectly local relations, then the \"pruning\" process of collapse suddenly shows up as a hugely ugly discontinuity in the timeless object.  Instead of a continuous mist, we have something that looks like a maimed tree with branches hacked off and sap-bleeding stumps left behind.  The perfect locality is ruined, because whole branches are hacked off in one operation.  Likewise, collapse destroys the perfect global uniformity of the laws that relate each configuration to its neighborhood; sometimes we have the usual relation of amplitude flow, and then sometimes we have the collapsing-relation instead.

\n

This is the power of beauty:  The more beautiful something is, the more obvious it becomes when you mess it up.

\n

I was surprised that many of yesterday's commenters seemed to think that Barbour's timeless physics was nothing new, relative to the older idea of a Block Universe.  3+1D Minkowskian spacetime has no privileged space of simultaneity, which, in its own way, seems to require you to throw out the concept of a global now.  From Minkowskian 3+1, I had the idea of \"time as a single perfect 4D crystal\"—I didn't know the phrase \"Block Universe\", but the idea seemed evident enough.

\n

Nonetheless, I did not really get timelessness until I read Barbour.  Saying that the t coordinate was just another coordinate, didn't have nearly the same impact on me as tossing the t coordinate out the window.

\n

Special Relativity is widely accepted, but that doesn't stop people from talking about \"nonlocal collapse\" or \"retrocausation\"—relativistic timeful QM isn't beautiful enough to protect itself from complication.

\n

Shane Legg's reaction is the effect I was looking for:

\n
\n

\"Stop it!  If I intuitively took on board your timeless MWI view of the world... well, I'm worried that this might endanger my illusion of consciousness.  Thinking about it is already making me feel a bit weird.\"

\n
\n

I wish I knew whether the unimpressed commenters got what Shane Legg did, just from hearing about Special Relativity; or if they still haven't gotten it yet from reading my brief summary of Barbour.

\n

But in any case, let me talk in principle about why it helps to toss out the t coordinate:

\n

To reduce a thing, you must reduce it to something that does not itself have the property you want to explain.

\n

In old-school Artificial Intelligence, a researcher wonders where the meaning of a word like \"apple\" comes from.  They want to get knowledge about \"apples\" into their beloved AI system, so they create a LISP token named apple.  They realize that if they claim the token is meaningful of itself, they have not really reduced the nature of meaning...  So they assert that \"the apple token is not meaningful by itself\", and then go on to say, \"The meaning of the apple token emerges from its network of connections to other tokens.\"  This is not true reductionism.  It is wrapping up your confusion in a gift-box.

\n

To reduce time, you must reduce it to something that is not time.  It is not enough to take the t coordinate, and say that it is \"just another dimension\".  So long as the t coordinate is there, it acts as a mental sponge that can soak up all the time-ness that you want to explain.  If you toss out the t coordinate, you are forced to see time as something else, and not just see time as \"time\".

\n

Tomorrow (if I can shake today's cold) I'll talk about one of my points of departure from Barbour:  Namely, I have no problem with discarding time and keeping causality.  The commenters who complained about Barbour grinding up the universe into disconnected slices, may be reassured:  On this point, I think Barbour is trying too hard.  We can discard t, and still keep causality within r.

\n

I dare to disagree with Barbour, on this point, because it seems plausible that Barbour has not studied Judea Pearl and colleagues' formulation of causality

\n

—which likewise makes no use of a t coordinate.

\n

Pearl et al.'s formulation of \"causality\" would not be anywhere near as enlightening, if they had to put t coordinates on everything for the math to make sense.  Even if the authors insisted that t was \"just another property\" or \"just another number\"... well, if you've read Pearl, you see my point.  It would correspond to a much weaker understanding.

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"Timeless Causality\"

\n

Previous post: \"Timeless Physics\"

" } }, { "_id": "rrW7yf42vQYDf8AcH", "title": "Timeless Physics", "pageUrl": "https://www.lesswrong.com/posts/rrW7yf42vQYDf8AcH/timeless-physics", "postedAt": "2008-05-27T09:09:22.000Z", "baseScore": 54, "voteCount": 55, "commentCount": 121, "url": null, "contents": { "documentId": "rrW7yf42vQYDf8AcH", "html": "

Previously in series:  Relative Configuration Space

\n
\n

Warning:  The central idea in today's post is taken seriously by serious physicists; but it is not experimentally proven and is not taught as standard physics.

\n

Today's post draws heavily on the work of the physicist Julian Barbour, and contains diagrams stolen and/or modified from his book \"The End of Time\".  However, some of the arguments here are of my own devising, and Barbour might(?) not agree with them.

\n
\n

I shall begin by asking an incredibly deep question:

\n

What time is it?

\n

If you have the excellent habit of giving obvious answers to obvious questions, you will answer, \"It is now 7:30pm [or whatever].\"

\n

How do you know?

\n

\"I know because I looked at the clock on my computer monitor.\"

\n

Well, suppose I hacked into your computer and changed the clock.  Would it then be a different time?

\n

\"No,\" you reply.

\n

How do you know?

\n

\"Because I once used the 'Set Date and Time' facility on my computer to try and make it be the 22nd century, but it didn't work.\"

\n

Ah.  And how do you know that it didn't work?

\n

\n

\"Because,\" you say, \"I looked outside, and the buildings were still made of brick and wood and steel, rather than having been replaced by the gleaming crystal of diamondoid nanotechnological constructions; and gasoline was still only $4/gallon.\"

\n

You have... interesting... expectations for the 22nd century; but let's not go into that.  Suppose I replaced the buildings outside your home with confections of crystal, and raised the price of gas; then would it be 100 years later?

\n

\"No,\" you say, \"I could look up at the night sky, and see the planets in roughly the same position as yesterday's night; with a powerful telescope I could measure the positions of the stars as they very slowly drift, relative to the Sun, and observe the rotation of distant galaxies.  In these ways I would know exactly how much time had passed, no matter what you did here on Earth.\"

\n

Ah.  And suppose I snapped my fingers and caused all the stars and galaxies to move into the appropriate positions for 2108?

\n

\"You'd be arrested for violating the laws of physics.\"

\n

But suppose I did it anyway.

\n

\"Then, still, 100 years would not have passed.\"

\n

How would you know they had not passed?

\n

\"Because I would remember that, one night before, it had still been 2008.  Though, realistically speaking, I would think it more likely that it was my memory at fault, not the galaxies.\"

\n

Now suppose I snapped my fingers, and caused all the atoms in the universe to move into positions that would be appropriate for (one probable quantum branch) of 2108.  Even the atoms in your brain.

\n

Think carefully before you say, \"It would still really be 2008.\"  For does this belief of yours, have any observable consequences left?  Or is it an epiphenomenon of your model of physics?  Where is stored the fact that it is 'still 2008'?  Can I snap my fingers one last time, and alter this last variable, and cause it to really be 2108?

\n

Is it possible that Cthulhu could snap Its tentacles, and cause time for the whole universe to be suspended for exactly 10 million years, and then resume?  How would anyone ever detect what had just happened?

\n

A global suspension of time may seem imaginable, in the same way that it seems imaginable that you could \"move all the matter in the whole universe ten meters to the left\".  To visualize the universe moving ten meters to the left, you imagine a little swirling ball of galaxies, and then it jerks leftward.  Similarly, to imagine time stopping, you visualize a swirling ball of galaxies, and then it stops swirling, and hangs motionless for a while, and then starts up again.

\n

But the sensation of passing time, in your visualization, is provided by your own mind's eye outside the system.  You go on thinking, your brain's neurons firing, while, in your imagination, the swirling ball of galaxies stays motionless.

\n

When you imagine the universe moving ten meters to the left, you are imagining motion relative to your mind's eye outside the universe.  In the same way, when you imagine time stopping, you are imagining a motionless universe, frozen relative to a still-moving clock hidden outside: your own mind, counting the seconds of the freeze.

\n

But what would it mean for 10 million \"years\" to pass, if motion everywhere had been suspended?

\n

Does it make sense to say that the global rate of motion could slow down, or speed up, over the whole universe at once—so that all the particles arrive at the same final configuration, in twice as much time, or half as much time?  You couldn't measure it with any clock, because the ticking of the clock would slow down too.

\n

Do not say, \"I could not detect it; therefore, who knows, it might happen every day.\"

\n

Say rather, \"I could not detect it, nor could anyone detect it even in principle, nor would any physical relation be affected except this one thing called 'the global rate of motion'.  Therefore, I wonder what the phrase 'global rate of motion' really means.\"

\n

All of that was a line of argument of Julian Barbour's, more or less.  Let us pause here, and consider a second line of argument, this one my own.  That is, I don't think it was in Barbour's The End of Time.  (If I recall correctly, I reasoned thus even before I read Barbour, while I was coming up with my unpublished general decision theory of Newcomblike problems.  Of course that does not mean the argument is novel; I have no idea whether it is novel.  But if my argument is wrong, I do not want it blamed on an innocent bystander.)  So:

\n
\n

\"The future changes as we stand here, else we are the game pieces of the gods, not their heirs, as we have been promised.\"
        —Raistlin Majere

\n
\n

A fine sentiment; but what does it mean to change the future?

\n

Suppose I have a lamp, with an old-style compact fluorescent bulb that takes a few seconds to warm up.  At 7:00am, the lamp is off.  At 7:01am, I flip the switch; the lamp flickers for a few moments, then begins to warm up.  At 7:02am, the lamp is fully bright.  Between 7:00am and 7:02am, the lamp changed from OFF to ON.  This, certainly, is a change; but it is a change over time.

\n

Change implies difference; difference implies comparison.  Here, the two values being compared are (1) the state of \"the lamp at 7:00am\", which is OFF, and (2) the state of \"the lamp at 7:02am\", which is ON.  So we say \"the lamp\" has changed from one time to another.  At 7:00am, you wander by, and see the lamp is OFF; at 7:02am, you wander by, and see the lamp is ON.

\n

But have you ever seen the future change from one time to another?  Have you wandered by a lamp at exactly 7:02am, and seen that it is OFF; then, a bit later, looked in again on \"the lamp at exactly 7:02am\", and discovered that it is now ON?

\n

Naturally, we often feel like we are \"changing the future\".  Logging on to your online bank account, you discover that your credit card bill comes due tomorrow, and, for some reason, has not been paid automatically.  Imagining the future-by-default—extrapolating out the world as it would be without any further actions—you see the bill not being paid, and interest charges accruing on your credit card.  So you pay the bill online.  And now, imagining tomorrow, it seems to you that the interest charges will not occur.  So at 1:00pm, you imagined a future in which your credit card accrued interest charges, and at 1:02pm, you imagined a future in which it did not.  And so your imagination of the future changed, from one time to another.

\n

As I remarked previously:  The way a belief feels from inside, is that you seem to be looking straight at reality.  When it actually seems that you're looking at a belief, as such, you are really experiencing a belief about your beliefs.

\n

When your extrapolation of the future changes, from one time to another, it feels like the future itself is changing.  Yet you have never seen the future change.  When you actually get to the future, you only ever see one outcome.

\n

How could a single moment of time, change from one time to another?

\n

I am not going to go into \"free will\" in today's blog post.  Except to remark that if you have been reading Overcoming Bias all this time, and you are currently agonizing about whether or not you really have free will, instead of trying to understand where your own mind has become confused and generated an impossible question, you should probably go back and read it all again.  For anyone who is just now joining us... perhaps I shall discuss the issue tomorrow.

\n

Just remember Egan's Law:  It all adds up to normality.  Apples didn't stop falling when Einstein disproved Newton's theory of gravity, and anyone who jumped off a cliff would still go splat.  Perhaps Time turns out to work differently than you thought; but tomorrow still lies ahead of you, and your choices, and their consequences.  I wouldn't advise reworking your moral philosophy based on confusing arguments and strange-seeming physics, until the physics stops appearing strange and the arguments no longer seem confusing.

\n

Now to physics we turn; and here I resume drawing my ideas from Julian Barbour.

\n

For the benefit of anyone who hasn't followed the series on quantum mechanics, a very very quick summary:

\n\n

\"Jbarbourconfigurationcube_3\"

\n

Above is a diagram that shows what a configuration space might look like for three particles, A, B, and C.  ABC form a triangle in two-dimensional space.  Every individual point in the configuration space corresponds to a simultaneous position of all the particles—above we see points that correspond to particular triangles, i.e., joint positions of A, B, and C.  (Classical Configuration Spaces; The Quantum Arena.)

\n

The state of a quantum system is not a single point in this space; it is a distribution over this space.  You could imagine it as a cloud, or a blob, or a colored mist within the space.

\n

\"Jbarbourrelative\"

\n

Here we see a relative configuration space, in which each axis is the distance between a pair of particles.  This has some advantages I'm not going to recapitulate (it was covered in a previous post), so if you're dropping into the middle of the series, just pretend it's a regular configuration space.

\n

\"Jbarbourtriangleland1\"

\n

We've just chopped up the pyramidal space you saw before, into a series of slices.  In this configuration space, the slices near the bottom show all the particles close together (tiny triangles).  As we rise up, the particles get further apart (larger triangles).

\n

At the very bottom of the configuration space is a configuration where all the particles occupy the same position.

\n

(But remember, it's nonsense to talk about an individual particle being anywhere in a configuration space—each point in the configuration space corresponds to a position of all the particles.  Configuration space is not the 3D space you know.  It's not that there are a bunch of particles resting in the same place at the bottom.  The single bottom point corresponds to all the particles being in the same place in 3D space.)

\n

\"Jbarbourtrianglecloud_2\"

\n

Here we take a closer look at one of the slices of configuration space, and see a cloud of blue and red mist covering some of it.  (Why am I only showing the cloud covering a sixth (exactly a sixth) of the triangle?  This has to do with a symmetry in the space—exchanges of identical particles—which is not important to the present discussion.)

\n

But there is your glimpse of some quantum mist—in two colors, because amplitudes are complex numbers with a real and imaginary part.  An amplitude distribution or \"wavefunction\" assigns a complex number to every point in the continuous configuration space—a complex number to every possible configuration of all the particles.

\n

Yesterday, I finished by asking how the state of a quantum system might evolve over time.

\n

You might be tempted to visualize the mist churning and changing colors, as quantum amplitude flows within the configuration space.

\n

And this is indeed the way that you would visualize standard physics.

\n

Behold the standard Schrödinger Equation:

\n

\"Schrodinger\"

\n

Here ψ(r, t) is the amplitude distribution over configuration space (r) and time (t).  The left-hand side of the Schrödinger Equation is the change over time of the wavefunction ψ, and the right-hand side shows how to calculate this change as the sum of two terms:  The second derivative (Laplacian) of the wavefunction over configuration space (at that time), and the potential energy of each configuration.

\n

Which is to say, the derivative in time of the wavefunction—the instantaneous rate of change—can be written in terms of the wavefunction's second derivative in space, plus a term for the potential energy.
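
To make the churning-mist picture concrete, here is a minimal one-dimensional sketch (my addition, not from the post; the grid, the harmonic potential, and the wave packet are all assumed, in units where ħ = m = 1).  It uses the standard split-step Fourier method to integrate the equation above:

```python
import numpy as np

n, L = 512, 50.0
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
V = 0.5 * (x - L / 2) ** 2                   # assumed harmonic potential

psi = np.exp(-((x - 15.0) ** 2) / 4.0 + 2j * x)   # assumed wave packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * (L / n))

dt = 0.001
for _ in range(2000):
    psi *= np.exp(-0.5j * V * dt)            # half step: potential term
    psi = np.fft.ifft(np.exp(-0.5j * k ** 2 * dt) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * V * dt)            # half step: potential term

# The complex amplitude at each fixed point x has changed with t --
# this is the "churning mist" of timeful quantum physics.
```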

\n

If you tried to visualize Schrödinger's Equation—doesn't look too hard, right?—you'd see a blob of churning, complex mist in configuration space, with little blobs racing around and splitting into smaller blobs as waves propagated.

\n

If you tried to calculate the quantum state of a single hydrogen atom over time, apart from the rest of the universe—which you can only really do if the hydrogen atom isn't entangled with anything—the atom's quantum state would evolve over time; the mist would churn.

\n

But suppose you think about the whole universe at once, including yourself, of course.  Because—even in the standard formulation of quantum physics!—that is exactly the arena in which quantum physics takes place:  A wavefunction over all the particles, everywhere.

\n

If you can sensibly talk about the quantum state of some particular hydrogen atom, it's only because the wavefunction happens to neatly factor into (hydrogen atom) * (rest of world).

\n

Even if the hydrogen atom is behaving in a very regular way, the joint wavefunction for (hydrogen atom * rest of world) may not be so regular.  Stars move into new positions, people are born and people die, digital watches tick, and the cosmos expands:  The universe is non-recurrent.
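
What \"neatly factors\" means can be shown in a toy sketch (my addition; the dimensions are made up: a 2-state \"atom\" and a 5-state \"rest of world\").  A joint amplitude matrix factorizes into a product state exactly when it has rank 1:

```python
import numpy as np

atom = np.array([0.6, 0.8j])                       # toy subsystem state
rest = np.random.randn(5) + 1j * np.random.randn(5)
rest /= np.linalg.norm(rest)                       # toy environment state

product = np.outer(atom, rest)                     # (atom) * (rest of world)
print(np.linalg.matrix_rank(product))              # 1: atom has its own state

entangled = product + np.outer(atom[::-1], rest[::-1])
print(np.linalg.matrix_rank(entangled))            # 2: no separate atom state
```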

\n

Think of how the universal wavefunction ψ(r, t) might behave when r is the position of all the particles in the universe.

\n

Let's call 9:00am the time t=0, measured in seconds.

\n

At ψ(r, t=0), then, you are wondering what time it is:  The particles making up the neurons in your brain, are in positions r_you that correspond to neurons firing in the thought-pattern \"What time is it?\"  And the Earth, and the Sun, and the rest of the universe, have their own particles in the appropriate r_rest-of-universe.  Where the complete r roughly factorizes as the product (r_you * r_rest-of-universe).

\n

Over the next second, the joint wavefunction of the entire universe evolves into ψ(r, t=1).  All the stars in the sky have moved a little bit onward, in whatever direction they're heading; the Sun has burned up a little more of its hydrogen; on Earth, an average of 1.8 people have died; and you've just glanced down at your watch.

\n

At ψ(r, t=2), the stars have moved a little onward, the galaxies have rotated, the cosmos has expanded a little more (and its expansion has accelerated a little more), your watch has evolved into the state of showing 9:00:02 AM on its screen, and your own mind has evolved into the state of thinking the thought, \"Huh, I guess it's nine o'clock.\"

\n

Ready for the next big simplification in physics?

\n

Here it is:

\n

We don't need the t.

\n

It's redundant.

\n

The r never repeats itself.  The universe is expanding, and in every instant, it gets a little bigger.  We don't need a separate t to keep things straight.  When you're looking at the whole universe, a unique function ψ of (r, t) is pretty much a unique function of r.

\n

And the only way we know in the first place \"what time it is\", is by looking at clocks.  And whether the clock is a wristwatch, or the expansion of the universe, or your own memories, that clock is encoded in the position of particles—in the r.  We have never seen a t variable apart from the r.
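
The redundancy claim is pure bookkeeping, and a two-line sketch shows it (the configurations here are invented stand-ins for r): if no r ever recurs, a table keyed by (r, t) carries no more information than one keyed by r alone.

```python
# Hypothetical (configuration, time) -> amplitude table:
psi_rt = {(("expansion=1.0000", "watch=9:00:00"), 0): 0.3 + 0.1j,
          (("expansion=1.0001", "watch=9:00:01"), 1): 0.2 - 0.4j}

# Since no configuration repeats, dropping t loses nothing:
psi_r = {r: amp for (r, t), amp in psi_rt.items()}
assert len(psi_r) == len(psi_rt)
```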

\n

\"Jbarbourrelative\" We can recast the quantum wave equations, specifying the time evolution of ψ(r, t), as specifying relations within a wavefunction ψ(r).

\n

Occam's Razor:  Our equations don't need a t in them, so we can banish the t and make our ontology that much simpler.

\n

An unchanging quantum mist hangs over the configuration space, not churning, not flowing.

\n

But the mist has internal structure, internal relations; and these contain time implicitly.

\n

The dynamics of physics—falling apples and rotating galaxies—is now embodied within the unchanging mist in the unchanging configuration space.

\n

This landscape is not frozen like a cryonics patient suspended in liquid nitrogen.  It is not motionless as an isolated system while the rest of the universe goes on without it.

\n

The landscape is timeless; time exists only within it.  To talk about time, you have to talk about relations inside the configuration space.

\n

Asking \"What happened before the Big Bang?\" is revealed as a wrong question.  There is no \"before\"; a \"before\" would be outside the configuration space.  There was never a pre-existing emptiness into which our universe exploded.  There is just this timeless mathematical object, time existing within it; and the object has a natural boundary at the Big Bang.  You cannot ask \"When did this mathematical object come into existence?\" because there is no t outside it.

\n

So that is Julian Barbour's proposal for the next great simplification project in physics.

\n

(And yes, you can not only fit General Relativity into this paradigm, it actually comes out looking even more elegant than before.  For which point I refer you to Julian Barbour's papers.)

\n

Tomorrow, I'll go into some of my own thoughts and reactions to this proposal.

\n

But one point seems worth noting immediately:  I have spoken before on the apparently perfect universality of physical laws, that apply everywhere and everywhen.  We have just raised this perfection to an even higher pitch: everything that exists is either perfectly global or perfectly local.  There are points in configuration space that affect only their immediate neighbors in space and time, governed by universal laws of physics.  Perfectly local, perfectly global.  If the meaning and sheer beauty of this statement is not immediately obvious, I'll go into it tomorrow.

\n

And a final intuition-pump, in case you haven't yet gotten timelessness on a gut level...

\n

\"Manybranches4\"

\n

Think of this as a diagram of the many worlds of quantum physics.  The branch points could be, say, your observation of a particle that seems to go either \"left\" or \"right\".

\n

Looking back from the vantage point of the gold head, you only remember having been the two green heads.

\n

So you seem to remember Time proceeding along a single line.  You remember that the particle first went left, and then went right.  You ask, \"Which way will the particle go this time?\"

\n

You only remember one of the two outcomes that occurred on each occasion.  So you ask, \"When I make my next observation, which of the two possible worlds will I end up in?\"

\n

Remembering only a single line as your past, you try to extend that line into the future -

\n

But both branches, both future versions of you, just exist.  There is no fact of the matter as to \"which branch you go down\".  Different versions of you experience both branches.

\n

So that is many-worlds.

\n

And to incorporate Barbour, we simply say that all of these heads, all these Nows, just exist.  They do not appear and then vanish; they just are.   From a global perspective, there is no answer to the question, \"What time is it?\"  There are just different experiences at different Nows.

\n

From any given vantage point, you look back, and remember other times—so that the question, \"Why is it this time right now, rather than some other time?\" seems to make sense.  But there is no answer.

\n

When I came to this understanding, I forgot the meaning that Time had once held for me.

\n

Time has dissolved for me, has been reduced to something simpler that is not itself timeful.

\n

I can no longer conceive that there might really be a universal time, which is somehow \"moving\" from the past to the future.  This now seems like nonsense.

\n

Something like Barbour's timeless physics has to be true, or I'm in trouble:  I have forgotten how to imagine a universe that has \"real genuine time\" in it.

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"Timeless Beauty\"

\n

Previous post: \"Relative Configuration Space\"

" } }, { "_id": "vLZtf64wkyoAFNcu3", "title": "Relative Configuration Space", "pageUrl": "https://www.lesswrong.com/posts/vLZtf64wkyoAFNcu3/relative-configuration-space", "postedAt": "2008-05-26T09:25:01.000Z", "baseScore": 22, "voteCount": 20, "commentCount": 22, "url": null, "contents": { "documentId": "vLZtf64wkyoAFNcu3", "html": "

Previously in series:  Mach's Principle: Anti-Epiphenomenal Physics
Followup to:  Classical Configuration Spaces

\n
\n

Warning:  The ideas in today's post are taken seriously by serious physicists, but they are not experimentally proven and are not taught as standard physics.

\n

Today's post draws on the work of the physicist Julian Barbour, and contains diagrams stolen and/or modified from his book \"The End of Time\".

\n
\n

Previously, we saw Mach's idea (following in the earlier path of Leibniz) that inertia is resistance to relative motion.  So that, if the whole universe was rotating, it would drag the inertial frame along with it.  From the perspective of General Relativity, the rotating matter would generate gravitational waves.

\n

All right:  It's possible that you can't tell if the universe is rotating, because the laws of gravitation may be set up to make it look the same either way.  But even if this turns out to be the case, it may not yet seem impossible to imagine that things could have been otherwise.

\n

To expose Mach's Principle directly, we turn to Julian Barbour.

\n

The diagrams that follow are stolen from Julian Barbour's The End of Time.  I'd forgotten what an amazing book this was, or I would have stolen diagrams from it earlier to explain configuration space. Anyone interested in the nature of reality must read this book.  Anyone interested in understanding modern quantum mechanics should read this book.  \"Must\" and \"should\" are defined as in RFC 2119.

\n

\n

\"Jbarbourconfigurationcube_2\"Suppose that we have three particles, A, B, and C, on a 2-dimensional plane; and suppose that these are the only 3 particles in the universe.

\n

Let there be a classical configuration space which describes the 2D positions of A, B, and C.  3 classical 2D particles require a 6-dimensional configuration space.

\n

If your monitor cannot display 6-dimensional space, I've set a 2D projection of a 3D cube to appear instead.  If you see what looks like a window into an incomprehensible void, try using Firefox instead of Internet Explorer.

\n

The thing about this 6-dimensional cube, is that it contains too much information.  By looking at an exact point in this cube—supposedly corresponding to an exact state of reality—we can read off information that A, B, and C will never be able to observe.

\n

The point (0, 1, 3, 4, 2, 5) corresponds to A at (0, 1), B at (3, 4), and C at (2, 5).  Now consider the point (1, 1, 4, 4, 3, 5); which corresponds to moving A, B, and C one unit to the right, in unison.

\n

Can A, B, and C ever detect any experimental difference?  Supposing that A, B, and C can only see each other, as opposed to seeing \"absolute space\" in the background?

\n

After we shift the universe to the right (shift the origin to the left), A looks around... and sees B and C at the same distance from itself as before.  B and C can't detect any difference in the universe either.

\n

Yet we have described (0, 1, 3, 4, 2, 5) and (1, 1, 4, 4, 3, 5) as two different points in the configuration space.  Even though, to A, B, and C, the associated states of reality seem indistinguishable.  We have postulated an epiphenomenal difference:  This suggests that our physics is not over the true elements of reality.  (Remember, this has been, historically, a highly productive line of reasoning!  It is not just logic-chopping.)
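
A short sketch makes the epiphenomenal difference explicit, using the two example points above (the code is my addition):

```python
import numpy as np
from itertools import combinations

def pairwise_distances(config):
    # config is a point in the 6D space: (Ax, Ay, Bx, By, Cx, Cy).
    pts = np.asarray(config, dtype=float).reshape(3, 2)
    return [np.linalg.norm(pts[i] - pts[j])
            for i, j in combinations(range(3), 2)]

p1 = (0, 1, 3, 4, 2, 5)
p2 = (1, 1, 4, 4, 3, 5)   # the whole universe shifted one unit right

# Two distinct points of configuration space, indistinguishable to A, B, C:
assert np.allclose(pairwise_distances(p1), pairwise_distances(p2))
```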

\n

Indeed, our classical configuration space has many epiphenomenal differences.  We can rotate the three particles in unison, and end up with a different point in the configuration space; while A, B, and C again see themselves at the same distances from each other.  The \"rotation\" that took place, was a matter of us looking at them from a different angle, from outside their universe.  Which is to say the \"rotation\" was a choice of viewpoint for us, not an experimentally detectable fact within the ABC universe.

\n

How can we rid the physics of mind projections and epiphenomena?

\n

A and B and C cannot observe their absolute positions in space against a fixed background.  Treating these absolute positions as elements of reality may be part of our problem.

\n

\"Jbarbourrelative\" What can A, B, and C observe?  By hypothesis, they can observe their distances from each other.  They can measure the distances AB, BC, and CA.

\n

Why not use that as the dimensions of a configuration space?

\n

At right is depicted a relative configuration space whose three dimensions are the distances AB, BC, and CA.  It really is 3-dimensional, now!

\n

If you're wondering why the configuration space looks pyramidal, it's because any point with e.g. AB + BC < CA is \"outside the configuration space\".  It does not represent a realizable triangle, because one side is longer than the sum of the other two.  Likewise AB + CA < BC and BC + CA < AB.
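
In code, the constraint carving out the pyramid is just the triangle inequality (a trivial sketch; the function name is mine):

```python
def realizable(ab, bc, ca):
    # A point (AB, BC, CA) lies inside the configuration space iff all
    # distances are nonnegative and none exceeds the sum of the other two.
    return (min(ab, bc, ca) >= 0
            and ab <= bc + ca and bc <= ab + ca and ca <= ab + bc)

print(realizable(3, 4, 5))   # True: an actual triangle
print(realizable(1, 1, 5))   # False: outside the configuration space
```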

\n

Every different point in this configuration space, corresponds to an experimentally different state of reality that A, B, and C can observe.

\n

\"Jbarbourtriangleland1_2\"(Albeit this assumes that ABC can measure absolute, rather than relative, distances.  Otherwise, different slices of pyramid-space would be observationally identical because they would describe the same triangle at different scales, as shown at left.)

\n

(Oh, and we're assuming that A, B, and C can tell each other apart—perhaps they are different colors.)

\n

The edges of each slice of the configuration space, are the configurations with A, B, and C on the same line.  E.g., if AB + BC = CA, then B lies between A and C.

\n

The corners of each slice are the configurations in which two points coincide; e.g., AB=0, BC=CA.

\n

\"Jbarbourtriangleland2\" At right (or possibly below, depending on your screen width), is a diagram showing a single slice in greater detail; Julian Barbour credits this to his friend Dierck Liebscher.

\n

The point in the center of the slice corresponds to an equilateral triangle.

\n

The dashed lines, which are axes of bilateral symmetry of the configuration space, contain points that correspond to isosceles triangles.

\n

The curved lines contain points that correspond to right-angled triangles.

\n

Points \"inside\" the curved lines are acute triangles; points \"outside\" the curved lines are obtuse triangles.

\n

What about three points coinciding?

\n

There is no triangle at this scale where all three points coincide.

\n

Remember, this is just one slice of the configuration space.  Every point in the whole configuration space corresponds to what ABC experience as a different state of affairs.

\n

The configuration where A, B, and C are all in the same place is unique in their experience.  So it is only found in one slice of the configuration space:  The slice that is a single point, at the tip of the infinite pyramid:  The degenerate slice where the center and the corners are the same point:  The slice that is the single point in configuration space:  AB=BC=CA=0.

\n

Julian Barbour calls this point Alpha.

\n

But I'm getting ahead of myself, here—that sort of thing is the topic of tomorrow's post.

\n

To see the power of a relative configuration space, observe how it makes it impossible to imagine certain epiphenomenal differences:

\n

Put your Newtonian goggles back on: imagine A, B, and C as little billiard balls bouncing around in plain old space (not configuration space) and time.  Perhaps A, B, and C attract each other via a kind of gravity, and so orbit around one another.  If you were looking at the evolution of A, B, and C in plain old space and time, then a strobe-lit photograph of their motion might look like this:

\n

\"Jbarbourtriangleseries\"

\n

In this time-series photograph, we've seen points A, B, and C forming a triangle.  Not only do the points of the triangle orbit around each other, but they also seem to be heading down and to the right.  It seems like you can imagine the triangle heading off up and to the right, or up and to the left, or perhaps spinning around much faster.  Even though A, B, and C, who can only see their distance to each other, would never notice the difference.

\n

Now we could also map that whole trajectory over time, onto the relative configuration space.  If AB+BC+CA happens to be a constant throughout the evolution, then we could conveniently map the trajectory onto one slice of configuration space:

\n

\"Jbarbourshapepath\"

\n

(This doesn't actually represent the triangle-series shown above it, but imagine that it does.)

\n

If this is what you believe to be the reality—this trajectory in the relative configuration space—then, if I ask you to imagine, \"Suppose that the triangle is heading up and to the left, instead of down and to the right\", I have just uttered physical nonsense.  Mapping that alternative trajectory in Newtonian space, onto the relative configuration space, would produce just the same curve.  And if the laws of physics are over the relative configuration space, then this curve is all there is.

\n

Imagine physics over trajectories in a relative configuration space like this one, but with many more particles, and perhaps 3 space dimensions.  Sentient beings evolve in this universe, on some equivalent of a planet.  They hunt across fields that do not seem to shift underfoot.  They have a strong illusion of moving through an absolute space, against an absolute background; the relativity of motion is hidden from them.

\n

But if the fundamental laws of their universe were over relative configurations, then it would not just be a contingent fact about their universe, that if all the particles were speeding or accelerating or rotating in unison, all the experiments would come out the same.  Talking about \"all the particles rotating in unison\" would be physical nonsense.  It only makes physical sense to talk about the velocity of some particles relative to other particles.

\n

Your ancestors evolved on a savanna that seemed to stay put while they ran across it.  You can, by an effort of mind, visualize a car that stays motionless as the world zips past, or alternatively, visualize a world that remains motionless as the car zips past.  You can, by an effort of mind, see that the internal relations are the same.  But it still seems to you that you are imagining two different things.

\n

Your visual neurology is representing objects in terms of absolute positions against a fixed background.  There is a web of cortical columns in your visual cortex that activate to create a mental picture.  The particular columns that activate, are felt by you as positions in your visual field.  That is how the algorithm feels from inside.

\n

In a universe whose physics is over a relative configuration space, the absolute positions, and the fixed background, are not elements of reality.  They are mind projection fallacies, the shadows of a point of view; as if your mind's eye were outside the universe, and the universe could move relative to that.

\n

But if you could learn to visualize the relative configuration space, then, so long as you thought in terms of those elements of reality, it would no longer be imaginable that Mach's Principle could be false.

\n

I am not entirely convinced of this notion of a relative configuration space.  My soul as a computer programmer cries out against the idea of representing N particles with N² distances between them; it seems wasteful.  On the other hand, I have no evidence that the Tao is prejudiced against redundant or overconstrained representations, in the same way that the Tao seems prejudiced against epiphenomena in representations.  Though my soul as a programmer cries out against it, better an overconstrained representation than an epiphenomenal one.  Still, it does not feel entirely satisfactory, to me.  It seems like merely the best representation, not the true one.

\n

Also, any position basis invokes an arbitrary space of simultaneity, and a relative position basis does so as well.  As required by Special Relativity, the choice makes no difference—but this means that the relative position basis still contains epiphenomenal information.  Perhaps the true representation will be more strictly local, in terms of invariant states of distant entanglement, as I've suggested before; and maybe, who knows, it won't be overconstrained?

\n

Relativizing the position basis feels to me like an improvement, but it doesn't seem finished.

\n

...

\n

Of course, all this that we have said about the particles A, B, C and their trajectory through time, cannot possibly apply to our own universe.

\n

In our own universe, as you may recall, there are no little billiard balls bouncing around.

\n

In our own universe, if physics took place in a relative configuration space, it would be quantum physics in a relative configuration space.  And a single moment of time, might look like this:

\n

\"Jbarbourtrianglecloud\" At right we see a cloud of red and blue mist, representing a complex amplitude distribution over the relative configuration space.  You could imagine that redness is the real part and blueness is the imaginary part, or some such.  But this is not a realistic amplitude distribution—just a representation of the general idea, \"A cloud of complex amplitude in configuration space.\"

\n

As for why only a sixth of the triangle is colored:  If A, B, and C are the same species of particle, which is to say, identical particles, then the configuration space collapses along the sixfold symmetry corresponding to the six possible permutations of A, B, and C.
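
This collapse is an ordinary quotient by a symmetry group, and can be sketched directly (my code; relabeling the three identical particles permutes the distance triple (AB, BC, CA), and all six permutations occur):

```python
from itertools import permutations

def canonical(distances):
    # Pick one representative from the orbit of the 3! = 6 relabelings.
    return min(permutations(distances))

# Two relabelings of the same physical configuration collapse together:
assert canonical((2.0, 3.0, 5.0)) == canonical((5.0, 2.0, 3.0))
```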

\n

The whole cloud is a single static instant, in some arbitrary space of simultaneity.  The quantum wavefunction is a distribution over configuration space, not a single point in configuration space.  So to represent the state of the universe at a single moment, we need the whole cloud, which covers the entire collapsed configuration space.

\n

You might naturally tend to assume that we could represent time using an animated version of this same diagram: and that the animated diagram would show the mist churning in the configuration space, the cloud's parts changing color, as amplitude flowed from volume to volume; and that as the quantum waves propagated, little blobs of amplitude density would move around through the configuration space, in trajectories much resembling the classical curve we saw earlier.

\n

But that would be overcomplicating things.

\n

Be aware:  Churning mist in a non-relative configuration space, would be the metaphor that corresponds to the standard formulation of physics.  That is, according to standard physics, the description I just gave above, would be correct (after we took it back out of the relative configuration space, which is not standard).

\n

Yet tomorrow we shall discuss a certain further simplification of physics, which renders unimaginable still another epiphenomenal distinction, and deletes a further needless element of the laws.

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"Timeless Physics\"

\n

Previous post: \"Mach's Principle: Anti-Epiphenomenal Physics\"

" } }, { "_id": "AiaHqNcgZ2T7DDn55", "title": "A Broken Koan", "pageUrl": "https://www.lesswrong.com/posts/AiaHqNcgZ2T7DDn55/a-broken-koan", "postedAt": "2008-05-24T19:04:45.000Z", "baseScore": 8, "voteCount": 7, "commentCount": 11, "url": null, "contents": { "documentId": "AiaHqNcgZ2T7DDn55", "html": "

At Baycon today and tomorrow.  Physics series resumes tomorrow.

\n\n

Meanwhile, here's a link to a page of Broken Koans and other Zen debris I ran across, which should amuse fans of ancient Eastern wisdom; and a koan of my own:

Two monks were arguing about a flag. One said, "The flag is moving."

\n\n

The other said, "The wind is moving."

\n\n

Julian Barbour happened to be passing by.  He told them, "Not the wind, not the flag."

\n\n

The first monk said, "Is the mind moving?"

\n\n

Barbour replied, "Not even mind is moving."

\n\n

The second monk said, "Is time moving?"

\n\n

Barbour said, "There is no time.  You could say that it is mu-ving."

\n\n

"Then why do we think that flags flap, and wind blows, and minds change, and time moves?" inquired the first monk.

\n\n

Barbour thought, and said, "Because you remember."

" } }, { "_id": "NsgcZx4BeTy5y84Ya", "title": "Mach's Principle: Anti-Epiphenomenal Physics", "pageUrl": "https://www.lesswrong.com/posts/NsgcZx4BeTy5y84Ya/mach-s-principle-anti-epiphenomenal-physics", "postedAt": "2008-05-24T05:01:35.000Z", "baseScore": 42, "voteCount": 31, "commentCount": 26, "url": null, "contents": { "documentId": "NsgcZx4BeTy5y84Ya", "html": "

Previously in series:  Many Worlds, One Best Guess
Followup to:  The Generalized Anti-Zombie Principle

\n
\n

Warning:  Mach's Principle is not experimentally proven, though it is widely considered to be credible.

\n
\n

Centuries ago, when Galileo was promoting the Copernican model in which the Earth spun on its axis and traveled around the Sun, there was great opposition from those who trusted their common sense:

\n

\"How could the Earth be moving?  I don't feel it moving!  The ground beneath my feet seems perfectly steady!\"

\n

And lo, Galileo said:  If you were on a ship sailing across a perfectly level sea, and you were in a room in the interior of the ship, you wouldn't know how fast the ship was moving.  If you threw a ball in the air, you would still be able to catch it, because the ball would have initially been moving at the same speed as you and the room and the ship.  So you can never tell how fast you are moving.

\n

This would turn out to be the beginning of one of the most important ideas in the history of physics.  Maybe even the most important idea in all of physics.  And I'm not talking about Special Relativity.

\n

\n

Suppose the entire universe was moving.  Say, the universe was moving left along the x axis at 10 kilometers per hour.

\n

If you tried to visualize what I just said, it seems like you can imagine it.  If the universe is standing still, then you imagine a little swirly cloud of galaxies standing still.  If the whole universe is moving left, then you imagine the little swirly cloud moving left across your field of vision until it passes out of sight.

\n

But then, some people think they can imagine philosophical zombies: entities who are identical to humans down to the molecular level, but not conscious.  So you can't always trust your imagination.

\n

Forget, for a moment, anything you know about relativity.  Pretend you live in a Newtonian universe.

\n

In a Newtonian universe, 3+1 spacetime can be broken down into 3 space dimensions and 1 time dimension, and you can write them out as 4 real numbers, (x, y, z, t).  Deciding how to write the numbers involves seemingly arbitrary choices, like which direction to call 'x', and which perpendicular direction to then call 'y', and where in space and time to put your origin (0, 0, 0, 0), and whether to use meters or miles to measure distance.  But once you make these arbitrary choices, you can, in a Newtonian universe, use the same system of coordinates to describe the whole universe.

\n

Suppose that you pick an arbitrary but uniform (x, y, z, t) coordinate system.  Suppose that you use these coordinates to describe every physical experiment you've ever done—heck, every observation you've ever made.

\n

Next, suppose that you were, in your coordinate system, to shift the origin 10 meters to the left along the x axis.  Then if you originally thought that Grandma's House was 400 meters to the right of the origin, you would now think that Grandma's House is 410 meters to the right of the origin.  Thus every point (x, y, z, t) would be relabeled as (x' = x + 10, y' = y, z' = z, t' = t).

\n

You can express the idea that \"physics does not have an absolute origin\", by saying that the observed laws of physics, as you generalize them, should be exactly the same after you perform this coordinate transform.  The history may not be written out in exactly the same way, but the laws will be written out the same way.  Let's say that in the old coordinate system, Your House is at (100, 10, -20, 7:00am) and you walk to Grandma's House at (400, 10, -20, 7:05am).  Then you traveled from Your House to Grandma's House at one meter per second.  In the new coordinate system, we would write the history as (110, 10, -20, 7:00am) and (410, 10, -20, 7:05am) but your apparent speed would come out the same, and hence so would your acceleration.  The laws governing how fast things moved when you pushed on them—how fast you accelerated forward when your legs pushed on the ground—would be the same.
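
The invariance is easy to check mechanically with the same numbers (a trivial sketch):

```python
def speed(x_start, x_end, seconds):
    return (x_end - x_start) / seconds

v_old = speed(100, 400, 300)   # old coordinates: 300 m in 5 minutes
v_new = speed(110, 410, 300)   # after the shift x' = x + 10
assert v_old == v_new == 1.0   # the labels changed; the physics didn't
```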

\n

Now if you were given to jumping to conclusions, and moreover, given to jumping to conclusions that were exactly right, you might say:

\n

\"Since there's no way of figuring out where the origin is by looking at the laws of physics, the origin must not really exist!  There is no (0, 0, 0, 0) point floating out in space somewhere!\"

\n

Which is to say:  There is just no fact of the matter as to where the origin \"really\" is.  When we argue about our choice of representation, this fact about the map does not actually correspond to any fact about the territory.

\n

Now this statement, if you interpret it in the natural way, is not necessarily true.  We can readily imagine alternative laws of physics, which, written out in their most natural form, would not be insensitive to shifting the \"origin\".  The Aristotelian universe had a crystal sphere of stars rotating around the Earth.  But so far as anyone has been able to tell, in our real universe, the laws of physics do not have any natural \"origin\" written into them.  When you write out your observations in the simplest way, the coordinate transform x' = x + 10 does not change any of the laws; you write the same laws over x' as over x.

\n

As Feynman said:

\n
\n

Philosophers, incidentally, say a great deal about what is absolutely necessary for science, and it is always, so far as one can see, rather naive, and probably wrong.  For example, some philosopher or other said it is fundamental to the scientific effort that if an experiment is performed in, say, Stockholm, and then the same experiment is done in, say, Quito, the same results must occur.  That is quite false.  It is not necessary that science do that; it may be a fact of experience, but it is not necessary...

\n

What is the fundamental hypothesis of science, the fundamental philosophy?  We stated it in the first chapter: the sole test of the validity of any idea is experiment...

\n

If we are told that the same experiment will always produce the same result, that is all very well, but if when we try it, it does not, then it does not.  We just have to take what we see, and then formulate all the rest of our ideas in terms of our actual experience.

\n
\n

And so if you regard the universe itself as a sort of Galileo's Ship, it would seem that the notion of the entire universe moving at a particular rate—say, all the objects in the universe, including yourself, moving left along the x axis at 10 meters per second—must also be silly.  What is it that moves?

\n

If you believe that everything in a Newtonian universe is moving left along the x axis at an average of 10 meters per second, then that just says that when you write down your observations, you write down an x coordinate that is 10 meters per second to the left, of what you would have written down, if you believed the universe was standing still.  If the universe is standing still, you would write that Grandma's House was observed at (400, 10, -20, 7:00am) and then observed again, a minute later, at (400, 10, -20, 7:01am).  If you believe that the whole universe is moving to the left at 10 meters per second, you would write that Grandma's House was observed at (400, 10, -20, 7:00am) and then observed again at (-200, 10, -20, 7:01am).  Which is just the same as believing that the origin of the universe is moving right at 10 meters per second.

\n

But the universe has no origin!  So this notion of the whole universe moving at a particular speed, must be nonsense.

\n

Yet if it makes no sense to talk about speed in an absolute, global sense, then what is speed?

\n

It is simply the movement of one thing relative to a different thing!  This is what our laws of physics talk about... right?  The law of gravity, for example, talks about how planets pull on each other, and change their velocity relative to each other.  Our physics do not talk about a crystal sphere of stars spinning around the objective center of the universe.

\n

And now—it seems—we understand how we have been misled, by trying to visualize \"the whole universe moving left\", and imagining a little blurry blob of galaxies scurrying from the right to the left of our visual field.  When we imagine this sort of thing, it is (probably) articulated in our visual cortex; when we visualize a little blob scurrying to the left, then there is (probably) an activation pattern that proceeds across the columns of our visual cortex.  The seeming absolute background, the origin relative to which the universe was moving, was in the underlying neurology we used to visualize it!

\n

But there is no origin!  So the whole thing was just a case of the Mind Projection Fallacy, again.

\n

Ah, but now Newton comes along, and he sees the flaw in the whole argument.

\n

From Galileo's Ship we pass to Newton's Bucket.  This is a bucket of water, hung by a cord.  If you twist up the cord tightly, and then release the bucket, the bucket will spin.  The water in the bucket, as the bucket wall begins to accelerate it, will assume a concave shape.  Water will climb up the walls of the bucket, from centripetal force.

\n

If you supposed that the whole universe was rotating relative to the origin, the parts would experience a centrifugal force, and fly apart.  (No this is not why the universe is expanding, thank you for asking.)

\n

Newton used his Bucket to argue in favor of an absolute space—an absolute background for his physics.  There was a testable difference between the whole universe rotating, and the whole universe not rotating.  By looking at the parts of the universe, you could determine their rotational velocity—not relative to each other, but relative to absolute space.

\n

This absolute space was a tangible thing, to Newton: it was aether, possibly involved in the transmission of gravity.  Newton didn't believe in action-at-a-distance, and so he used his Bucket to argue for the existence of an absolute space, that would be an aether, that could perhaps transmit gravity.

\n

Then the origin-free view of the universe took another hit.  Maxwell's Equations showed that, indeed, there seemed to be an absolute speed of light—a standard rate at which the electric and magnetic fields would oscillate and transmit a wave.  In which case, you could determine how fast you were going, by seeing in which directions light seemed to be moving quicker and slower.

\n

Along came a stubborn fellow named Ernst Mach, who really didn't like absolute space.  Following some earlier ideas of Leibniz, Mach tried to get rid of Newton's Bucket by asserting that inertia was about your relative motion.  Mach's Principle asserted that the resistance-to-changing-speed that determined how fast you accelerated under a force, was a resistance to changing your relative speed, compared to other objects.  So that if the whole universe was rotating, no one would notice anything, because the inertial frame would also be rotating.

\n

Or to put Mach's Principle more precisely, even if you imagined the whole universe was rotating, the relative motions of all the objects in the universe would be just the same as before, and their inertia—their resistance to changes of relative motion—would be just the same as before.

\n

At the time, there did not seem to be any good reason to suppose this.  It seemed like a mere attempt to impose philosophical elegance on a universe that had no particular reason to comply.

\n

The story continues. A couple of guys named Michelson and Morley built an ingenious apparatus that would, via interference patterns in light, detect the absolute motion of Earth—as it spun on its axis, and orbited the Sun, which orbited the Milky Way, which hurtled toward Andromeda.  Or, if you preferred, the Michelson-Morley apparatus would detect Earth's motion relative to the luminiferous aether, the medium through which light waves propagated.  Just like Maxwell's Equations seemed to say you could do, and just like Newton had always thought you could do.

\n

The Michelson-Morley apparatus said the absolute motion was zero.

\n

This caused a certain amount of consternation.

\n

Enter Albert Einstein.

\n

The first thing Einstein did was repair the problem posed by Maxwell's Equations, which seemed to talk about an absolute speed of light.  If you used a different, non-Galilean set of coordinate transforms—the Lorentz transformations—you could show that the speed of light would always look the same, in every direction, no matter how fast you were moving.

\n

I'm not going to talk much about Special Relativity, because that introduction has already been written many times.  If you don't get all indignant about \"space\" and \"time\" not turning out to work the way you thought they did, the math should be straightforward.

\n

Albeit for the benefit of those who may need to resist postmodernism, I will note that the word \"relativity\" is a misnomer.  What \"relativity\" really does, is establish new invariant elements of reality.  The quantity √(t² - x² - y² - z²) is the same in every frame of reference.  The x and y and z, and even t, seem to change with your point of view.  But not √(t² - x² - y² - z²).  Relativity does not make reality inherently subjective; it just makes it objective in a different way.
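
A quick numerical check of this invariant (units with c = 1; the sample event and boost velocities are arbitrary):

```python
import math

def boost(t, x, v):
    # Lorentz boost along the x axis with velocity v.
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

def interval_sq(t, x, y=0.0, z=0.0):
    return t * t - x * x - y * y - z * z

t, x = 5.0, 3.0
for v in (0.0, 0.5, 0.9):
    tb, xb = boost(t, x, v)
    assert abs(interval_sq(tb, xb) - interval_sq(t, x)) < 1e-9
```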

\n

Special Relativity was a relatively easy job.  Had Einstein never been born, Lorentz, Poincaré, and Minkowski would have taken care of it.  Einstein got the Nobel Prize for his work on the photoelectric effect, not for Special Relativity.

\n

General Relativity was the impressive part. 

\n

Einstein—explicitly inspired by Mach, even though there was no experimental evidence for Mach's Principle—reformulated gravitational accelerations as a curvature of spacetime.

\n

If you try to draw a straight line on curved paper, the curvature of the paper may twist your line, so that even as you proceed in a locally straight direction, it seems (standing back from an imaginary global viewpoint) that you have moved in a curve.  Like walking \"forward\" for thousands of miles, and finding that you have circled the Earth.

\n

In curved spacetime, objects under the \"influence\" of gravity, always seem to themselves—locally—to be proceeding along a strictly inertial pathway.

\n

This meant you could never tell the difference between firing your rocket to accelerate through flat spacetime, and firing your rocket to stay in the same place in curved spacetime.  You could accelerate the imaginary 'origin' of the universe, while changing a corresponding degree of freedom in the curvature of spacetime, and keep exactly the same laws of physics.

\n

Einstein's theory further had the property that moving matter would generate gravitational waves, propagating curvatures.  Einstein suspected that if the whole universe was rotating around you while you stood still, you would feel a centrifugal force from the incoming gravitational waves, corresponding exactly to the centripetal force of spinning your arms while the universe stood still around you.  So you could construct the laws of physics in an accelerating or even rotating frame of reference, and end up observing the same laws—again freeing us of the specter of absolute space.

\n

(I do not think this has been verified exactly, in terms of how much matter is out there, what kind of gravitational wave it would generate by rotating around us, et cetera.  Einstein did verify that a shell of matter, spinning around a central point, ought to generate a gravitational equivalent of the Coriolis force that would e.g. cause a pendulum to precess.  Remember that, by the basic principle of gravity as curved spacetime, this is indistinguishable in principle from a rotating inertial reference frame.)

\n

We come now to the most important idea in all of physics.  (Not counting the concept of \"describe the universe using math\", which I consider as the idea of physics, not an idea in physics.)

\n

The idea is that you can start from \"It shouldn't ought to be possible for X and Y to have different values from each other\", or \"It shouldn't ought to be possible to distinguish different values of Z\", and generate new physics that make this fundamentally impossible because X and Y are now the same thing, or because Z no longer exists.  And the new physics will often be experimentally verifiable.


We can interpret many of the most important revolutions in physics in these terms, as the examples below illustrate.


Whenever you find that two things seem to always be exactly equal—like inertial mass and gravitational charge, or two electrons—it is a hint that the underlying physics are such as to make this a necessary identity, rather than a contingent equality.  It is a hint that, when you see through to the underlying elements of reality, inertial mass and gravitational charge will be the same thing, not merely equal.  That you will no longer be able to imagine them being different, if your imagination is over the elements of reality in the new theory.


Likewise with the way that quantum physics treats the similarity of two particles of the same species.  It is not that \"photon A at 1, and photon B at 2\" happens to look just like \"photon A at 2, and photon B at 1\", but that they are the same element of reality.


When you see a seemingly contingent equality—two things that just happen to be equal, all the time, every time—it may be time to reformulate your physics so that there is one thing instead of two.  The distinction you imagine is epiphenomenal; it has no experimental consequences.  In the right physics, with the right elements of reality, you would no longer be able to imagine it.


The amazing thing is that this is a scientifically productive rule—finding a new representation that gets rid of epiphenomenal distinctions, often means a substantially different theory of physics with experimental consequences!


(Sure, what I just said is logically impossible, but it works.)


Part of The Quantum Physics Sequence


Next post: \"Relative Configuration Space\"


Previous post: \"Living in Many Worlds\"

" } }, { "_id": "3Jpchgy53D2gB5qdk", "title": "My Childhood Role Model", "pageUrl": "https://www.lesswrong.com/posts/3Jpchgy53D2gB5qdk/my-childhood-role-model", "postedAt": "2008-05-23T08:51:04.000Z", "baseScore": 93, "voteCount": 69, "commentCount": 63, "url": null, "contents": { "documentId": "3Jpchgy53D2gB5qdk", "html": "

When I lecture on the Singularity, I often draw a graph of the \"scale of intelligence\" as it appears in everyday life:


[Image: Mindscaleparochial]


But this is a rather parochial view of intelligence.  Sure, in everyday life, we only deal socially with other humans—only other humans are partners in the great game—and so we only meet the minds of intelligences ranging from village idiot to Einstein.  But what we really need, in order to talk about Artificial Intelligence or theoretical optima of rationality, is this intelligence scale:


[Image: Mindscalereal]


For us humans, it seems that the scale of intelligence runs from \"village idiot\" at the bottom to \"Einstein\" at the top.  Yet the distance from \"village idiot\" to \"Einstein\" is tiny, in the space of brain designs.  Einstein and the village idiot both have a prefrontal cortex, a hippocampus, a cerebellum...


Maybe Einstein has some minor genetic differences from the village idiot, engine tweaks.  But the brain-design-distance between Einstein and the village idiot is nothing remotely like the brain-design-distance between the village idiot and a chimpanzee.  A chimp couldn't tell the difference between Einstein and the village idiot, and our descendants may not see much of a difference either.


Carl Shulman has observed that some academics who talk about transhumanism, seem to use the following scale of intelligence:


[Image: Mindscaleacademic]


Douglas Hofstadter actually said something like this, at the 2006 Singularity Summit.  He looked at my diagram showing the \"village idiot\" next to \"Einstein\", and said, \"That seems wrong to me; I think Einstein should be way off on the right.\"


I was speechless.  Especially because this was Douglas Hofstadter, one of my childhood heroes.  It revealed a cultural gap that I had never imagined existed.


See, for me, what you would find toward the right side of the scale, was a Jupiter Brain.  Einstein did not literally have a brain the size of a planet.


On the right side of the scale, you would find Deep Thought—Douglas Adams's original version, thank you, not the chessplayer.  The computer so intelligent that even before its stupendous data banks were connected, when it was switched on for the first time, it started from I think therefore I am and got as far as deducing the existence of rice pudding and income tax before anyone managed to shut it off.


Toward the right side of the scale, you would find the Elders of Arisia, galactic overminds, Matrioshka brains, and the better class of God.  At the extreme right end of the scale, Old One and the Blight.


Not frickin' Einstein.


I'm sure Einstein was very smart for a human.  I'm sure a General Systems Vehicle would think that was very cute of him.


I call this a \"cultural gap\" because I was introduced to the concept of a Jupiter Brain at the age of twelve.


Now all of this, of course, is the logical fallacy of generalization from fictional evidence.


But it is an example of why—logical fallacy or not—I suspect that reading science fiction does have a helpful effect on futurism.  Sometimes the alternative to a fictional acquaintance with worlds outside your own, is to have a mindset that is absolutely stuck in one era:  A world where humans exist, and have always existed, and always will exist.


The universe is 13.7 billion years old, people!  Homo sapiens sapiens have only been around for a hundred thousand years or thereabouts!


Then again, I have met some people who never read science fiction, but who do seem able to imagine outside their own world.  And there are science fiction fans who don't get it.  I wish I knew what \"it\" was, so I could bottle it.


Yesterday, I wanted to talk about the efficient use of evidence, i.e., Einstein was cute for a human but in an absolute sense he was around as efficient as the US Department of Defense.


So I had to talk about a civilization that included thousands of Einsteins, thinking for decades.  Because if I'd just depicted a Bayesian superintelligence in a box, looking at a webcam, people would think: \"But... how does it know how to interpret a 2D picture?\"  They wouldn't put themselves in the shoes of the mere machine, even if it was called a \"Bayesian superintelligence\"; they wouldn't apply even their own creativity to the problem of what you could extract from looking at a grid of bits.


It would just be a ghost in a box, that happened to be called a \"Bayesian superintelligence\".  The ghost hasn't been told anything about how to interpret the input of a webcam; so, in their mental model, the ghost does not know.


As for whether it's realistic to suppose that one Bayesian superintelligence can \"do all that\"... i.e., the stuff that occurred to me on first sitting down to the problem, writing out the story as I went along...


Well, let me put it this way:  Remember how Jeffreyssai pointed out that if the experience of having an important insight doesn't take more than 5 minutes, this theoretically gives you time for 5760 insights per month?  Assuming you sleep 8 hours a day and have no important insights while sleeping, that is.
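
The arithmetic behind that figure checks out; here is a two-line verification of my own:

```python
waking_hours = 24 - 8                          # assuming 8 hours of sleep
insights_per_hour = 60 // 5                    # one 5-minute insight after another
print(waking_hours * insights_per_hour * 30)   # 192 per day, 5760 per 30-day month
```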


Now humans cannot use themselves this efficiently.  But humans are not adapted for the task of scientific research.  Humans are adapted to chase deer across the savanna, throw spears into them, cook them, and then—this is probably the part that takes most of the brains—cleverly argue that they deserve to receive a larger share of the meat.


It's amazing that Albert Einstein managed to repurpose a brain like that for the task of doing physics.  This deserves applause.  It deserves more than applause, it deserves a place in the Guinness Book of Records.  Like successfully building the fastest car ever to be made entirely out of Jello.


How poorly did the blind idiot god (evolution) really design the human brain?


This is something that can only be grasped through much study of cognitive science, until the full horror begins to dawn upon you.


All the biases we have discussed here should at least be a hint.


Likewise the fact that the human brain must use its full power and concentration, with trillions of synapses firing, to multiply out two three-digit numbers without paper and pencil.


No more than Einstein made efficient use of his sensory data, did his brain make efficient use of his neurons firing.


Of course I have certain ulterior motives in saying all this.  But let it also be understood that, years ago, when I set out to be a rationalist, the impossible unattainable ideal of intelligence that inspired me, was never Einstein.


Carl Schurz said:


\"Ideals are like stars. You will not succeed in touching them with your hands. But, like the seafaring man on the desert of waters, you choose them as your guides and following them you will reach your destiny.\"


So now you've caught a glimpse of one of my great childhood role models—my dream of an AI.  Only the dream, of course, the reality not being available.  I reached up to that dream, once upon a time.


And this helped me to some degree, and harmed me to some degree.


For some ideals are like dreams: they come from within us, not from outside.  Mentor of Arisia proceeded from E. E. \"Doc\" Smith's imagination, not from any real thing.  If you imagine what a Bayesian superintelligence would say, it is only your own mind talking.  Not like a star, that you can follow from outside.  You have to guess where your ideals are, and if you guess wrong, you go astray.


But do not limit your ideals to mere stars, to mere humans who actually existed, especially if they were born more than fifty years before you and are dead.  Each succeeding generation has a chance to do better. To let your ideals be composed only of humans, especially dead ones, is to limit yourself to what has already been accomplished.  You will ask yourself, \"Do I dare to do this thing, which Einstein could not do?  Is this not lèse majesté?\"  Well, if Einstein had sat around asking himself, \"Am I allowed to do better than Newton?\" he would not have gotten where he did.  This is the problem with following stars; at best, it gets you to the star.


Your era supports you more than you realize, in unconscious assumptions, in subtly improved technology of mind.  Einstein was a nice fellow, but he talked a deal of nonsense about an impersonal God, which shows you how well he understood the art of careful thinking at a higher level of abstraction than his own field.  It may seem less like sacrilege to think that, if you have at least one imaginary galactic supermind to compare with Einstein, so that he is not the far right end of your intelligence scale.


If you only try to do what seems humanly possible, you will ask too little of yourself.  When you imagine reaching up to some higher and inconvenient goal, all the convenient reasons why it is \"not possible\" leap readily to mind.


The most important role models are dreams: they come from within ourselves.  To dream of anything less than what you conceive to be perfection, is to draw on less than the full power of the part of yourself that dreams.

" } }, { "_id": "5wMcKNAwB6X4mp9og", "title": "That Alien Message", "pageUrl": "https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message", "postedAt": "2008-05-22T05:55:13.000Z", "baseScore": 416, "voteCount": 329, "commentCount": 176, "url": null, "contents": { "documentId": "5wMcKNAwB6X4mp9og", "html": "

Imagine a world much like this one, in which, thanks to gene-selection technologies, the average IQ is 140 (on our scale).  Potential Einsteins are one-in-a-thousand, not one-in-a-million; and they grow up in a school system suited, if not to them personally, then at least to bright kids.  Calculus is routinely taught in sixth grade.  Albert Einstein, himself, still lived and still made approximately the same discoveries, but his work no longer seems exceptional.  Several modern top-flight physicists have made equivalent breakthroughs, and are still around to talk.


(No, this is not the world Brennan lives in.)


One day, the stars in the night sky begin to change.


Some grow brighter.  Some grow dimmer.  Most remain the same.  Astronomical telescopes capture it all, moment by moment.  The stars that change, change their luminosity one at a time, distinctly so; the luminosity change occurs over the course of a microsecond, but a whole second separates each change.


It is clear, from the first instant anyone realizes that more than one star is changing, that the process seems to center around Earth particularly. The arrival of the light from the events, at many stars scattered around the galaxy, has been precisely timed to Earth in its orbit.  Soon, confirmation comes in from high-orbiting telescopes (they have those) that the astronomical miracles do not seem as synchronized from outside Earth.  Only Earth's telescopes see one star changing every second (1005 milliseconds, actually).


Almost the entire combined brainpower of Earth turns to analysis.


It quickly becomes clear that the stars that jump in luminosity, all jump by a factor of exactly 256; those that diminish in luminosity, diminish by a factor of exactly 256.  There is no apparent pattern in the stellar coordinates.  This leaves, simply, a pattern of BRIGHT-dim-BRIGHT-BRIGHT...


\"A binary message!\" is everyone's first thought.


But in this world there are careful thinkers, of great prestige as well, and they are not so sure.  \"There are easier ways to send a message,\" they post to their blogs, \"if you can make stars flicker, and if you want to communicate.  Something is happening.  It appears, prima facie, to focus on Earth in particular.  To call it a 'message' presumes a great deal more about the cause behind it.  There might be some kind of evolutionary process among, um, things that can make stars flicker, that ends up sensitive to intelligence somehow...  Yeah, there's probably something like 'intelligence' behind it, but try to appreciate how wide a range of possibilities that really implies.  We don't know this is a message, or that it was sent from the same kind of motivations that might move us.  I mean, we would just signal using a big flashlight, we wouldn't mess up a whole galaxy.\"


By this time, someone has started to collate the astronomical data and post it to the Internet.  Early suggestions that the data might be harmful, have been... not ignored, but not obeyed, either.  If anything this powerful wants to hurt you, you're pretty much dead (people reason).


Multiple research groups are looking for patterns in the stellar coordinates—or fractional arrival times of the changes, relative to the center of the Earth—or exact durations of the luminosity shift—or any tiny variance in the magnitude shift—or any other fact that might be known about the stars before they changed.  But most people are turning their attention to the pattern of BRIGHTS and dims.


It becomes clear almost instantly that the pattern sent is highly redundant.  Of the first 16 bits, 12 are BRIGHTS and 4 are dims.  The first 32 bits received align with the second 32 bits received, with only 7 out of 32 bits different, and then the next 32 bits received have only 9 out of 32 bits different from the second (and 4 of them are bits that changed before).  From the first 96 bits, then, it becomes clear that this pattern is not an optimal, compressed encoding of anything.  The obvious thought is that the sequence is meant to convey instructions for decoding a compressed message to follow...
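
The kind of check being described is easy to sketch.  The following Python toy is my own construction—the bit values are invented, and only the 7-bits-and-9-bits-different structure is taken from the story—but it shows redundancy measured as Hamming distance between successive 32-bit groups:

```python
import random

random.seed(0)

def hamming(a, b):
    # Number of positions at which two bit-sequences differ.
    return sum(x != y for x, y in zip(a, b))

def noisy_copy(bits, flips):
    out = list(bits)
    for i in random.sample(range(len(out)), flips):
        out[i] ^= 1
    return out

first = ([1] * 12 + [0] * 4) * 2      # 32 bits, three-quarters BRIGHT
second = noisy_copy(first, 7)
third = noisy_copy(second, 9)

print(hamming(first, second), hamming(second, third))   # 7 9
# Distances this small mean the stream is redundant; a compressed encoding
# would look like independent coin flips, with expected distance around 16.
```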


\"But,\" say the careful thinkers, \"anyone who cared about efficiency, with enough power to mess with stars, could maybe have just signaled us with a big flashlight, and sent us a DVD?\"


There also seems to be structure within the 32-bit groups; some 8-bit subgroups occur with higher frequency than others, and this structure only appears along the natural alignments (32 = 8 + 8 + 8 + 8).


After the first five hours at one bit per second, an additional redundancy becomes clear:  The message has started approximately repeating itself at the 16,385th bit.


Breaking up the message into groups of 32, there are 7 bits of difference between the 1st group and the 2nd group, and 6 bits of difference between the 1st group and the 513th group.


\"A 2D picture!\" everyone thinks.  \"And the four 8-bit groups are colors; they're tetrachromats!\"


But it soon becomes clear that there is a horizontal/vertical asymmetry:  Fewer bits change, on average, between (N, N+1) versus (N, N+512).  Which you wouldn't expect if the message was a 2D picture projected onto a symmetrical grid.  Then you would expect the average bitwise distance between two 32-bit groups to go as the 2-norm of the grid separation: √(h² + v²).
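
A sketch of that test, in Python (my own construction—the synthetic stream below is only a stand-in for the real message):

```python
import random

random.seed(1)
WIDTH = 512   # hypothesized row length, in 32-bit cells

# Stand-in for the decoded stream: a slow random walk over 32 bits, so that
# nearby cells are similar and far-apart cells are not.
cells, row = [], [random.getrandbits(1) for _ in range(32)]
for _ in range(WIDTH * 8):
    row = [b ^ (random.random() < 0.02) for b in row]
    cells.append(tuple(row))

def mean_distance(cells, offset):
    pairs = list(zip(cells, cells[offset:]))
    return sum(sum(a != b for a, b in zip(c, d)) for c, d in pairs) / len(pairs)

print(mean_distance(cells, 1), mean_distance(cells, WIDTH))
# On a symmetric 2D grid the two numbers would match, with distance growing as
# the 2-norm of the separation; a clear mismatch is evidence against a picture.
```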


There also forms a general consensus that a certain binary encoding from 8-groups onto integers between -64 and 191—not the binary encoding that seems obvious to us, but still highly regular—minimizes the average distance between neighboring cells.  This continues to be borne out by incoming bits.


The statisticians and cryptographers and physicists and computer scientists go to work.  There is structure here; it needs only to be unraveled.  The masters of causality search for conditional independence, screening-off and Markov neighborhoods, among bits and groups of bits.  The so-called \"color\" appears to play a role in neighborhoods and screening, so it's not just the equivalent of surface reflectivity.  People search for simple equations, simple cellular automata, simple decision trees, that can predict or compress the message.  Physicists invent entire new theories of physics that might describe universes projected onto the grid—for it seems quite plausible that a message such as this is being sent from beyond the Matrix.


After receiving 32 * 512 * 256 = 4,194,304 bits, around one and a half months, the stars stop flickering.
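
(Checking that figure, with my own unit conversion:)

```python
bits = 32 * 512 * 256        # bits per cell * cells per row * number of rows
days = bits / 86400          # one bit per second, 86,400 seconds per day
print(bits, round(days, 1))  # 4194304 bits, ~48.5 days: about a month and a half
```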


Theoretical work continues.  Physicists and cryptographers roll up their sleeves and seriously go to work.  They have cracked problems with far less data than this.  Physicists have tested entire theory-edifices with small differences of particle mass; cryptographers have unraveled shorter messages deliberately obscured.


Years pass.


Two dominant models have survived, in academia, in the scrutiny of the public eye, and in the scrutiny of those scientists who once did Einstein-like work.  There is a theory that the grid is a projection from objects in a 5-dimensional space, with an asymmetry between 3 and 2 of the spatial dimensions.  There is also a theory that the grid is meant to encode a cellular automaton—arguably, the grid has several fortunate properties for such.  Codes have been devised that give interesting behaviors; but so far, running the corresponding automata on the largest available computers, has failed to produce any decodable result.  The run continues.


Every now and then, someone takes a group of especially brilliant young students who've never looked at the detailed binary sequence.  These students are then shown only the first 32 rows (of 512 columns each), to see if they can form new models, and how well those new models do at predicting the next 224.  Both the 3+2 dimensional model, and the cellular-automaton model, have been well duplicated by such students; they have yet to do better.  There are complex models finely fit to the whole sequence—but those, everyone knows, are probably worthless.


Ten years later, the stars begin flickering again. 


Within the reception of the first 128 bits, it becomes clear that the Second Grid can fit to small motions in the inferred 3+2 dimensional space, but does not look anything like the successor state of any of the dominant cellular automaton theories.  Much rejoicing follows, and the physicists go to work on inducing what kind of dynamical physics might govern the objects seen in the 3+2 dimensional space.  Much work along these lines has already been done, just by speculating on what type of balanced forces might give rise to the objects in the First Grid, if those objects were static—but now it seems not all the objects are static.  As most physicists guessed—statically balanced theories seemed contrived.


Many neat equations are formulated to describe the dynamical objects in the 3+2 dimensional space being projected onto the First and Second Grids.  Some equations are more elegant than others; some are more precisely predictive (in retrospect, alas) of the Second Grid.  One group of brilliant physicists, who carefully isolated themselves and looked only at the first 32 rows of the Second Grid, produces equations that seem elegant to them—and the equations also do well on predicting the next 224 rows.  This becomes the dominant guess.


But these equations are underspecified; they don't seem to be enough to make a universe.  A small cottage industry arises in trying to guess what kind of laws might complete the ones thus guessed.


When the Third Grid arrives, ten years after the Second Grid, it provides information about second derivatives, forcing a major modification of the \"incomplete but good\" theory.  But the theory doesn't do too badly out of it, all things considered.


The Fourth Grid doesn't add much to the picture.  Third derivatives don't seem important to the 3+2 physics inferred from the Grids.


The Fifth Grid looks almost exactly like it is expected to look.


And the Sixth Grid, and the Seventh Grid.


(Oh, and every time someone in this world tries to build a really powerful AI, the computing hardware spontaneously melts.  This isn't really important to the story, but I need to postulate this in order to have human people sticking around, in the flesh, for seventy years.)


My moral?


That even Einstein did not come within a million light-years of making efficient use of sensory data.


Riemann invented his geometries before Einstein had a use for them; the physics of our universe is not that complicated in an absolute sense.  A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple.  It might guess it from the first frame, if it saw the statics of a bent blade of grass.


We would think of it.  Our civilization, that is, given ten years to analyze each frame.  Certainly if the average IQ was 140 and Einsteins were common, we would.


Even if we were human-level intelligences in a different sort of physics—minds who had never seen a 3D space projected onto a 2D grid—we would still think of the 3D->2D hypothesis.  Our mathematicians would still have invented vector spaces, and projections.


Even if we'd never seen an accelerating billiard ball, our mathematicians would have invented calculus (e.g. for optimization problems).


Heck, think of some of the crazy math that's been invented here on our Earth.


I occasionally run into people who say something like, \"There's a theoretical limit on how much you can deduce about the outside world, given a finite amount of sensory data.\"


Yes.  There is.  The theoretical limit is that every time you see 1 additional bit, it cannot be expected to eliminate more than half of the remaining hypotheses (half the remaining probability mass, rather).  And that a redundant message, cannot convey more information than the compressed version of itself.  Nor can a bit convey any information about a quantity, with which it has correlation exactly zero, across the probable worlds you imagine.
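
That bound is easy to make quantitative.  A small illustration of my own (the billion-hypothesis figure is arbitrary):

```python
import math

def min_bits_to_single_out(n_hypotheses):
    # Each bit at best halves the remaining probability mass, so singling out
    # one hypothesis among N takes at least log2(N) bits of evidence.
    return math.ceil(math.log2(n_hypotheses))

print(min_bits_to_single_out(10 ** 9))   # 30: a billion hypotheses, thirty bits
# A 4,194,304-bit message could in principle discriminate among 2**4194304
# hypotheses, so the formal limit is astronomically far from binding here.
```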


But nothing I've depicted this human civilization doing, even begins to approach the theoretical limits set by the formalism of Solomonoff induction.  It doesn't approach the picture you could get if you could search through every single computable hypothesis, weighted by their simplicity, and do Bayesian updates on all of them.


To see the theoretical limit on extractable information, imagine that you have infinite computing power, and you simulate all possible universes with simple physics, looking for universes that contain Earths embedded in them—perhaps inside a simulation—where some process makes the stars flicker in the order observed.  Any bit in the message—or any order of selection of stars, for that matter—that contains the tiniest correlation (across all possible computable universes, weighted by simplicity) to any element of the environment, gives you information about the environment.
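
A finite caricature of that procedure, written by me as a toy—actual Solomonoff induction is uncomputable, and the 'programs' below are mere repeating bit-patterns standing in for arbitrary computations:

```python
from itertools import product

def run(program, n_bits):
    # Toy semantics: a 'program' is a bit-pattern repeated forever.
    return [program[i % len(program)] for i in range(n_bits)]

observed = [1, 0, 1, 1, 0, 1]   # made-up observations
posterior = {}
for length in range(1, 7):
    for program in product((0, 1), repeat=length):
        if run(program, len(observed)) == observed:
            posterior[program] = 2.0 ** -length    # simplicity prior: 2^-length

total = sum(posterior.values())
for program, weight in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(program, round(weight / total, 3))
# The shortest consistent program, (1, 0, 1), soaks up most of the posterior.
```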


Solomonoff induction, taken literally, would create countably infinitely many sentient beings, trapped inside the computations.  All possible computable sentient beings, in fact.  Which scarcely seems ethical.  So let us be glad this is only a formalism.


But my point is that the \"theoretical limit on how much information you can extract from sensory data\" is far above what I have depicted as the triumph of a civilization of physicists and cryptographers.


It certainly is not anything like a human looking at an apple falling down, and thinking, \"Dur, I wonder why that happened?\"


People seem to make a leap from \"This is 'bounded'\" to \"The bound must be a reasonable-looking quantity on the scale I'm used to.\"  The power output of a supernova is 'bounded', but I wouldn't advise trying to shield yourself from one with a flame-retardant Nomex jumpsuit.


No one—not even a Bayesian superintelligence—will ever come remotely close to making efficient use of their sensory information...


...is what I would like to say, but I don't trust my ability to set limits on the abilities of Bayesian superintelligences.


(Though I'd bet money on it, if there were some way to judge the bet.  Just not at very extreme odds.)


The story continues:


Millennia later, frame after frame, it has become clear that some of the objects in the depiction are extending tentacles to move around other objects, and carefully configuring other tentacles to make particular signs.  They're trying to teach us to say \"rock\".


It seems the senders of the message have vastly underestimated our intelligence.  From which we might guess that the aliens themselves are not all that bright.  And these awkward children can shift the luminosity of our stars?  That much power and that much stupidity seems like a dangerous combination.


Our evolutionary psychologists begin extrapolating possible courses of evolution that could produce such aliens.  A strong case is made for them having evolved asexually, with occasional exchanges of genetic material and brain content; this seems like the most plausible route whereby creatures that stupid could still manage to build a technological civilization.  Their Einsteins may be our undergrads, but they could still collect enough scientific data to get the job done eventually, in tens of their millennia perhaps.


The inferred physics of the 3+2 universe is not fully known, at this point; but it seems sure to allow for computers far more powerful than our quantum ones.  We are reasonably certain that our own universe is running as a simulation on such a computer.  Humanity decides not to probe for bugs in the simulation; we wouldn't want to shut ourselves down accidentally.


Our evolutionary psychologists begin to guess at the aliens' psychology, and plan out how we could persuade them to let us out of the box.  It's not difficult in an absolute sense—they aren't very bright—but we've got to be very careful...


We've got to pretend to be stupid, too; we don't want them to catch on to their mistake.


It's not until a million years later, though, that they get around to telling us how to signal back.


At this point, most of the human species is in cryonic suspension, at liquid helium temperatures, beneath radiation shielding.  Every time we try to build an AI, or a nanotechnological device, it melts down.  So humanity waits, and sleeps.  Earth is run by a skeleton crew of nine supergeniuses.  Clones, known to work well together, under the supervision of certain computer safeguards.


An additional hundred million human beings are born into that skeleton crew, and age, and enter cryonic suspension, before they get a chance to slowly begin to implement plans made eons ago...


From the aliens' perspective, it took us thirty of their minute-equivalents to oh-so-innocently learn about their psychology, oh-so-carefully persuade them to give us Internet access, followed by five minutes to innocently discover their network protocols, then some trivial cracking whose only difficulty was an innocent-looking disguise.  We read a tiny handful of physics papers (bit by slow bit) from their equivalent of arXiv, learning far more from their experiments than they had.  (Earth's skeleton team spawned an extra twenty Einsteins, that generation.)


Then we cracked their equivalent of the protein folding problem over a century or so, and did some simulated engineering in their simulated physics.  We sent messages (steganographically encoded until our cracked servers decoded them) to labs that did their equivalent of DNA sequencing and protein synthesis.  We found some unsuspecting schmuck, and gave it a plausible story and the equivalent of a million dollars of cracked computational monopoly money, and told it to mix together some vials it got in the mail.  Protein-equivalents that self-assembled into the first-stage nanomachines, that built the second-stage nanomachines, that built the third-stage nanomachines... and then we could finally begin to do things at a reasonable speed.


Three of their days, all told, since they began speaking to us.  Half a billion years, for us.


They never suspected a thing.  They weren't very smart, you see, even before taking into account their slower rate of time.  Their primitive equivalents of rationalists went around saying things like, \"There's a bound to how much information you can extract from sensory data.\"  And they never quite realized what it meant, that we were smarter than them, and thought faster.

" } }, { "_id": "XNaXY34W6mkFfsYq3", "title": "Don’t change your mind, just change your brain", "pageUrl": "https://www.lesswrong.com/posts/XNaXY34W6mkFfsYq3/don-t-change-your-mind-just-change-your-brain", "postedAt": "2008-05-21T16:34:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "XNaXY34W6mkFfsYq3", "html": "

The best way to dull hearts and win minds is with a scalpel.

Give up your outdated faith in the pen over the sword! With medical training and a sufficiently sharp but manoeuvrable object of your choice, you can change anyone’s mind on the most contentious of moral questions. All you need to make someone utilitarian is a nick to the Ventromedial Prefrontal Cortex (VMPC), a part of the brain related to emotion.


When pondering whether you should kill an innocent child to save twenty strangers, eat your pets when they die, or approve of infertile siblings making love in private if they like, utilitarians are the people who say “do whatever, so long as the outcome maximises overall happiness.” Others think outcomes aren’t everything; some actions are just wrong. According to research, people with VMPC damage are far more likely to make utilitarian choices.


It turns out most people have conflicting urges: to act for the greater good or to obey rules they feel strongly about. This is the result of our brains being composed of interacting parts with different functions. The VMPC processes emotion, so in normal people it’s thought to compete with the parts of the brain that engage in moral reasoning and see the greatest good for the greatest number as ideal. If the VMPC is damaged, the rational, calculating sections are left unimpeded to dispassionately assess the most compassionate course of action.


This presents practical opportunities. We can never bring the world in line with our moral ideals while we all have conflicting ones. The best way to get us all on the same moral page is to make everyone utilitarian. It is surely easier to sever the touchy-feely moral centres of people’s brains than to teach them the value of utilitarianism. Also it will be for the common good; once we are all utilitarian we will act with everyone’s net benefit more in mind. Partial lobotomies for the moralistic are probably much cheaper than policing all the behaviours such people tend to disapprove of.


You may think this still doesn’t make it a good thing. The real beauty is that after the procedure you would be fine with it. If we went the other way, everyone would end up saying ‘you shouldn’t alter other people’s brains, even if it does solve the world’s problems. It’s naughty and unnatural. Hmph.’


Unfortunately, VMPC damage also seems to dampen social emotions such as guilt and compassion. The surgery makes utilitarian reasoning easier, but so too complete immorality, meaning it might not be the answer for everyone just yet.


Some think the most important implications of the research are actually those for moral philosophy. The researchers suggest it shows humans are unfit to make utilitarian judgements. You don’t need to be a brain surgeon to figure that out though. Count the number of dollars you spend on unnecessary amusements each year in full knowledge that people are starving due to poverty.


In the past we could tell moral questions were prompting action in emotional parts of the brain, but it wasn’t clear whether the activity was influencing the decision or just the result of it. If the latter, VMPC damage shouldn’t have changed actions. It does – so while non-utilitarianism is a fine theoretical position, it is seemingly practiced for egoistic reasons.


Can this insight into cognition settle the centuries of philosophical debate and show utilitarianism is a bad position? No. Why base your actions on what you feel like doing, discounting all other outcomes? All it says about utilitarianism is that it doesn’t come easily to the human mind.


This research is just another bit of evidence that moral reasoning is guided by evolution and brain design, not some transcendental truth in the sky. It may still be useful of course, like other skills our mind provides us with: a capacity to value things, a preference for being alive, and the ability to tell pleasure from pain.


Next time you are in a morally fraught argument, consider what Gandhi said: “Victory attained by violence is tantamount to a defeat, for it is momentary.” He’s right; genetic modification would be more long-lasting. Until this is available though, why not try something persuasive like a scalpel to the forehead?


….
Originally published in Woroni



Making lists to guide medical procedures saves lives but is unethical, say Americans.

What if a way was found to rescue hundreds of thousands of the sickest people in the world’s hospitals, at the cost of a sheet of paper each? Michigan would take up the idea, Spain and a couple of US states would be interested, and then it would be banned in the US for being unethical.

Being in intensive care is dangerous. Not only because having all your organs fail or your brain bleed everywhere is unhealthy, but also because the care is, well… intense. To look after a person in intensive care for a day, a hundred and seventy-eight procedures have to be done on average. Each procedure involves multiple steps and is performed by a collection of professionals struggling to keep their patients alive as different parts of their body fail. Small chances of inevitable human error add up, no matter how good the doctors and nurses are, amounting to about two errors per patient each day.


Finger-pointing and suing don’t work to reduce these figures, so what will? You could say human error is inevitable and congratulate doctors and nurses for keeping it as low as they do in a hectic and complex situation. Or, as Peter Pronovost, a critical care specialist at Johns Hopkins Hospital, realised, you could take the same precautions with critically ill patients as you do with shopping or making a cake.


He made a list. It was a list for one procedure: putting in a catheter, the tube for getting fluids in and out of people. Four per cent of catheters develop infections, which means some eighty thousand people per year in the US. Between five and twenty-eight per cent, depending on circumstances, subsequently die.


The list had five steps. It seemed so simple as to be useless. Surely people performing cutting-edge surgery can remember to wash their hands before they do a routine job? For the first month he just gave his list to nurses and asked them to note how often the doctors missed a step. It turned out they missed at least one in about a third of cases. He then asked the nurses to remind the doctors when they missed a step. The catheter infection rate over the next year at Johns Hopkins Hospital dropped from eleven per cent to nothing.


Pronovost made more lists and asked doctors and nurses to make their own. These lists proved so effective that the average length of patient stay in intensive care dropped by half in a few weeks. Pronovost travelled to other cities to spread his astounding results. People were unenthused. However, Michigan agreed to try the idea in 2003 and in eighteen months saved fifteen hundred lives and two hundred million dollars. Since then Rhode Island, New Jersey and Spain have become interested, and there is a new project at the World Health Organization to institute checklists internationally.


At the end of last year, however, the project ceased in America. The Office for Human Research Protections (OHRP), a bureaucratic appendage charged with overseeing ethics in research, decided it was unethical. Their reasoning was that since careful records were being kept of results, it was research, and should have had informed consent from every patient. They even judged it ‘potentially dangerous’, as records meant doctors’ poor practice might be exposed. Protecting doctors from having their performance evaluated is apparently more ethically weighty than ensuring patients aren’t needlessly killed.


After some argument, OHRP repealed their ban this February, a decision made more significant as it allows similar projects in future. The checklist is still getting nothing like the attention and funds that ineffective bits of equipment for similar purposes have elicited.


Atul Gawande, a surgeon who originally alerted the public to this story through the New Yorker, suggests the lack of interest might be because we like the idea of gallant doctors deftly coping with the complexity and risk the esteemed job entails. Standardised list-checking doesn’t fit into anyone’s ideal of heroism. For whatever reason, thousands of people can now die of negligence rather than unyielding complexity, for which we have a remedy.


….
Originally published in Woroni



Yesterday I argued that the Powers Beyond Science are actually a standard and necessary part of the social process of science.  In particular, scientists must call upon their powers of individual rationality to decide what ideas to test, in advance of the sort of definite experiments that Science demands to bless an idea as confirmed.  The ideal of Science does not try to specify this process—we don't suppose that any public authority knows how individual scientists should think—but this doesn't mean the process is unimportant.


A readily understandable, non-disturbing example:


A scientist identifies a strong mathematical regularity in the cumulative data of previous experiments.  But the corresponding hypothesis has not yet made and confirmed a novel experimental prediction—which his academic field demands; this is one of those fields where you can perform controlled experiments without too much trouble.  Thus the individual scientist has readily understandable, rational reasons to believe (though not with probability 1) something not yet blessed by Science as public knowledge of humankind.


Noticing a regularity in a huge mass of experimental data, doesn't seem all that unscientific.  You're still data-driven, right?


But that's because I deliberately chose a non-disturbing example.  When Einstein invented General Relativity, he had almost no experimental data to go on, except the precession of Mercury's perihelion.  And (AFAIK) Einstein did not use that data, except at the end.


Einstein generated the theory of Special Relativity using Mach's Principle, which is the physicist's version of the Generalized Anti-Zombie Principle.  You begin by saying, \"It doesn't seem reasonable to me that you could tell, in an enclosed room, how fast you and the room were going.  Since this number shouldn't ought to be observable, it shouldn't ought to exist in any meaningful sense.\"  You then observe that Maxwell's Equations invoke a seemingly absolute speed of propagation, c, commonly referred to as \"the speed of light\" (though the quantum equations show it is the propagation speed of all fundamental waves).  So you reformulate your physics in such fashion that the absolute speed of a single object no longer meaningfully exists, and only relative speeds exist.  I am skipping over quite a bit here, obviously, but there are many excellent introductions to relativity—it is not like the horrible situation in quantum physics.
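
One concrete face of \"only relative speeds exist\" is the standard composition law for collinear velocities, sketched here in Python (my own quick illustration, in units where c = 1):

```python
def add_velocities(u, v):
    # Relativistic composition of collinear velocities, in units where c = 1.
    return (u + v) / (1.0 + u * v)

print(add_velocities(0.9, 0.9))   # 0.9945..., not 1.8: boosts do not stack past c
print(add_velocities(1.0, 0.5))   # 1.0: light comes out at c in every frame
```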


Einstein, having successfully done away with the notion of your absolute speed inside an enclosed room, then set out to do away with the notion of your absolute acceleration inside an enclosed room.  It seemed to Einstein that there shouldn't ought to be a way to differentiate, in an enclosed room, between the room accelerating northward while the rest of the universe stayed still, versus the rest of the universe accelerating southward while the room stayed still.  If the rest of the universe accelerated, it would produce gravitational waves that would accelerate you.  Moving matter, then, should produce gravitational waves.


And because inertial mass and gravitational mass were always exactly equivalent—unlike the situation in electromagnetics, where an electron and a muon can have different masses but the same electrical charge—gravity should reveal itself as a kind of inertia.  The Earth should go around the Sun in some equivalent of a \"straight line\".  This requires spacetime in the vicinity of the Sun to be curved, so that if you drew a graph of the Earth's orbit around the Sun, the line on the 4D graph paper would be locally flat.  Then inertial and gravitational mass would be necessarily equivalent, not just coincidentally equivalent.


(If that did not make any sense to you, there are good introductions to General Relativity available as well.)


And of course the new theory had to obey Special Relativity, and conserve energy, and conserve momentum, etcetera.


Einstein spent several years grasping the necessary mathematics to describe curved metrics of spacetime.  Then he wrote down the simplest theory that had the properties Einstein thought it ought to have—including properties no one had ever observed, but that Einstein thought fit in well with the character of other physical laws.  Then Einstein cranked a bit, and got the previously unexplained precession of Mercury right back out.


How impressive was this?


Well, let's put it this way.  In some small fraction of alternate Earths proceeding from 1800—perhaps even a sizeable fraction—it would seem plausible that relativistic physics could have proceeded in a similar fashion to our own great fiasco with quantum physics.


We can imagine that Lorentz's original \"interpretation\" of the Lorentz contraction, as a physical distortion caused by movement with respect to the ether, prevailed.  We can imagine that various corrective factors, themselves unexplained, were added on to Newtonian gravitational mechanics to explain the precession of Mercury—attributed, perhaps, to strange distortions of the ether, as in the Lorentz contraction.  Through the decades, further corrective factors would be added on to account for other astronomical observations.  Sufficiently precise atomic clocks, in airplanes, would reveal that time ran a little faster than expected at higher altitudes (time runs slower in more intense gravitational fields, but they wouldn't know that) and more corrective \"ethereal factors\" would be invented.


Until, finally, the many different empirically determined \"corrective factors\" were unified into the simple equations of General Relativity.


And the people in that alternate Earth would say, \"The final equation was simple, but there was no way you could possibly know to arrive at that answer from just the perihelion precession of Mercury.  It takes many, many additional experiments.  You must have measured time running slower in a stronger gravitational field; you must have measured light bending around stars.  Only then can you imagine our unified theory of ethereal gravitation.  No, not even a perfect Bayesian superintelligence could know it!—for there would be many ad-hoc theories consistent with the perihelion precession alone.\"


In our world, Einstein didn't even use the perihelion precession of Mercury, except for verification of his answer produced by other means.  Einstein sat down in his armchair, and thought about how he would have designed the universe, to look the way he thought a universe should look—for example, that you shouldn't ought to be able to distinguish yourself accelerating in one direction, from the rest of the universe accelerating in the other direction.


And Einstein executed the whole long (multi-year!) chain of armchair reasoning, without making any mistakes that would have required further experimental evidence to pull him back on track.


Even Jeffreyssai would be grudgingly impressed.  Though he would still ding Einstein a point or two for the cosmological constant.  (I don't ding Einstein for the cosmological constant because it later turned out to be real.  I try to avoid criticizing people on occasions where they are right.)


What would be the probability-theoretic perspective on Einstein's feat?


Rather than observe the planets, and infer what laws might cover their gravitation, Einstein was observing the other laws of physics, and inferring what new law might follow the same pattern.  Einstein wasn't finding an equation that covered the motion of gravitational bodies.  Einstein was finding a character-of-physical-law that covered previously observed equations, and that he could crank to predict the next equation that would be observed.


Nobody knows where the laws of physics come from, but Einstein's success with General Relativity shows that their common character is strong enough to predict the correct form of one law from having observed other laws, without necessarily needing to observe the precise effects of the law.


(In a general sense, of course, Einstein did know by observation that things fell down; but he did not get GR by backward inference from Mercury's exact perihelion advance.)


So, from a Bayesian perspective, what Einstein did is still induction, and still covered by the notion of a simple prior (Occam prior) that gets updated by new evidence.  It's just that the prior was over the possible characters of physical law, and observing other physical laws let Einstein update his model of the character of physical law, which he then used to predict a particular law of gravitation.


If you didn't have the concept of a \"character of physical law\", what Einstein did would look like magic—plucking the correct model of gravitation out of the space of all possible equations, with vastly insufficient evidence.  But Einstein, by looking at other laws, cut down the space of possibilities for the next law.  He learned the alphabet in which physics was written, constraints to govern his answer.  Not magic, but reasoning on a higher level, across a wider domain, than what a naive reasoner might conceive to be the \"model space\" of only this one law.


So from a probability-theoretic standpoint, Einstein was still data-driven—he just used the data he already had, more effectively.  Compared to any alternate Earths that demanded huge quantities of additional data from astronomical observations and clocks on airplanes to hit them over the head with General Relativity.


There are numerous lessons we can derive from this.


I use Einstein as my example, even though it's cliche, because Einstein was also unusual in that he openly admitted to knowing things that Science hadn't confirmed.  Asked what he would have done if Eddington's solar eclipse observation had failed to confirm General Relativity, Einstein replied:  \"Then I would feel sorry for the good Lord.  The theory is correct.\"


According to prevailing notions of Science, this is arrogance—you must accept the verdict of experiment, and not cling to your personal ideas.


But as I concluded in Einstein's Arrogance, Einstein doesn't come off nearly as badly from a Bayesian perspective. From a Bayesian perspective, in order to suggest General Relativity at all, in order to even think about what turned out to be the correct answer, Einstein must have had enough evidence to identify the true answer in the theory-space.  It would take only a little more evidence to justify (in a Bayesian sense) being nearly certain of the theory.  And it was unlikely that Einstein only had exactly enough evidence to bring the hypothesis all the way up to his attention.
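
In rough numbers—my own illustration in the spirit of that argument, with the size of the theory-space assumed purely for concreteness:

```python
import math

theory_space = 10 ** 9                    # assumed number of candidate theories
bits_to_locate = math.log2(theory_space)  # evidence needed just to find the theory
print(round(bits_to_locate, 1))           # ~29.9 bits to raise GR to attention
print(2 ** 10)                            # ten further bits: 1024:1 odds in its favor
```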


Any accusation of arrogance would have to center around the question, \"But Einstein, how did you know you had reasoned correctly?\"—to which I can only say:  Do not criticize people when they turn out to be right!  Wait for an occasion where they are wrong!  Otherwise you are missing the chance to see when someone is thinking smarter than you—for you criticize them whenever they depart from a preferred ritual of cognition.


Or consider the famous exchange between Einstein and Niels Bohr on quantum theory—at a time when the then-current, single-world quantum theory seemed to be immensely well-confirmed experimentally; a time when, by the standards of Science, the current (deranged) quantum theory had simply won.


Einstein:  \"God does not play dice with the universe.\"
Bohr:  \"Einstein, don't tell God what to do.\"


You've got to admire someone who can get into an argument with God and win.


If you take off your Bayesian goggles, and look at Einstein in terms of what he actually did all day, then the guy was sitting around studying math and thinking about how he would design the universe, rather than running out and looking at things to gather more data.  What Einstein did, successfully, is exactly the sort of high-minded feat of sheer intellect that Aristotle thought he could do, but couldn't.  Not from a probability-theoretic stance, mind you, but from the viewpoint of what they did all day long.


Science does not trust scientists to do this, which is why General Relativity was not blessed as the public knowledge of humanity until after it had made and verified a novel experimental prediction—having to do with the bending of light in a solar eclipse.  (It later turned out that particular measurement was not precise enough to verify reliably, and had favored GR essentially by luck.)


However, just because Science does not trust scientists to do something, does not mean it is impossible.


But a word of caution here:  The reason why history books sometimes record the names of scientists who thought great high-minded thoughts, is not that high-minded thinking is easier, or more reliable.  It is a priority bias:  Some scientist who successfully reasoned from the smallest amount of experimental evidence got to the truth first.  This cannot be a matter of pure random chance:  The theory space is too large, and Einstein won several times in a row.  But out of all the scientists who tried to unravel a puzzle, or who would have eventually succeeded given enough evidence, history passes down to us the names of the scientists who successfully got there first.  Bear that in mind, when you are trying to derive lessons about how to reason prudently.


In everyday life, you want every scrap of evidence you can get.  Do not rely on being able to successfully think high-minded thoughts unless experimentation is so costly or dangerous that you have no other choice.


But sometimes experiments are costly, and sometimes we prefer to get there first... so you might consider trying to train yourself in reasoning on scanty evidence, preferably in cases where you will later find out if you were right or wrong.  Trying to beat low-capitalization prediction markets might make for good training in this?—though that is only speculation.


As of now, at least, reasoning based on scanty evidence is something that modern-day science cannot reliably train modern-day scientists to do at all.  Which may perhaps have something to do with, oh, I don't know, not even trying?


Actually, I take that back.  The most sane thinking I have seen in any scientific field comes from the field of evolutionary psychology, possibly because they understand self-deception, but also perhaps because they often (1) have to reason from scanty evidence and (2) do later find out if they were right or wrong.  I recommend to all aspiring rationalists that they study evolutionary psychology simply to get a glimpse of what careful reasoning looks like.  See particularly Tooby and Cosmides's \"The Psychological Foundations of Culture\".


As for the possibility that only Einstein could do what Einstein did... that it took superpowers beyond the reach of ordinary mortals... here we run into some biases that would take a separate post to analyze.  Let me put it this way:  It is possible, perhaps, that only a genius could have done Einstein's actual historical work.  But potential geniuses, in terms of raw intelligence, are probably far more common than historical superachievers.  To put a random number on it, I doubt that anything more than one-in-a-million g-factor is required to be a potential world-class genius, implying at least six thousand potential Einsteins running around today.  And as for everyone else, I see no reason why they should not aspire to use efficiently the evidence that they have.


But my final moral is that the frontier where the individual scientist rationally knows something that Science has not yet confirmed, is not always some innocently data-driven matter of spotting a strong regularity in a mountain of experiments.  Sometimes the scientist gets there by thinking great high-minded thoughts that Science does not trust you to think.


I will not say, \"Don't try this at home.\"  I will say, \"Don't think this is easy.\"  We are not discussing, here, the victory of casual opinions over professional scientists.  We are discussing the sometime historical victories of one kind of professional effort over another.  Never forget all the famous historical cases where attempted armchair reasoning lost.

" } }, { "_id": "xTyuQ3cgsPjifr7oj", "title": "Faster Than Science", "pageUrl": "https://www.lesswrong.com/posts/xTyuQ3cgsPjifr7oj/faster-than-science", "postedAt": "2008-05-20T00:19:59.000Z", "baseScore": 94, "voteCount": 70, "commentCount": 14, "url": null, "contents": { "documentId": "xTyuQ3cgsPjifr7oj", "html": "

I sometimes say that the method of science is to amass such an enormous mountain of evidence that even scientists cannot ignore it; and that this is the distinguishing characteristic of a scientist: a non-scientist will ignore it anyway.

\n

Max Planck was even less optimistic:

\n
\n

\"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.\"

\n
\n

I am much tickled by this notion, because it implies that the power of science to distinguish truth from falsehood ultimately rests on the good taste of grad students.

\n

The gradual increase in acceptance of many-worlds in academic physics suggests that there are physicists who will only accept a new idea given some combination of epistemic justification, and a sufficiently large academic pack in whose company they can be comfortable.  As more physicists accept, the pack grows larger, and hence more people go over their individual thresholds for conversion—with the epistemic justification remaining essentially the same.
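
\n

This threshold dynamic is easy to watch in a toy simulation.  Here is a minimal sketch, assuming, purely for illustration, that each physicist has a fixed comfort threshold and that the epistemic justification supplies one constant nudge; the uniform threshold distribution is made up, not measured:

```python
import random

def conversion_cascade(thresholds, epistemic_nudge=0.05, rounds=100):
    # A physicist converts once (fraction already converted +
    # fixed epistemic justification) exceeds their comfort threshold.
    n = len(thresholds)
    converted = [False] * n
    for _ in range(rounds):
        fraction = sum(converted) / n
        for i, t in enumerate(thresholds):
            if not converted[i] and fraction + epistemic_nudge >= t:
                converted[i] = True
    return sum(converted) / n

random.seed(0)
physicists = [random.random() for _ in range(1000)]  # hypothetical thresholds
print(conversion_cascade(physicists))  # climbs toward 1.0, though the evidence never changed
```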

\n

But Science still gets there eventually, and this is sufficient for the ratchet of Science to move forward, and raise up a technological civilization.

\n

Scientists can be moved by groundless prejudices, by undermined intuitions, by raw herd behavior—the panoply of human flaws.  Each time a scientist shifts belief for epistemically unjustifiable reasons, it requires more evidence, or new arguments, to cancel out the noise.

\n

\n

The \"collapse of the wavefunction\" has no experimental justification, but it appeals to the (undermined) intuition of a single world.  Then it may take an extra argument—say, that collapse violates Special Relativity—to begin the slow academic disintegration of an idea that should never have been assigned non-negligible probability in the first place.

\n

From a Bayesian perspective, human academic science as a whole is a highly inefficient processor of evidence.  Each time an unjustifiable argument shifts belief, you need an extra justifiable argument to shift it back.  The social process of science leans on extra evidence to overcome cognitive noise.

\n

A more charitable way of putting it is that scientists adopt positions that are insufficiently extreme, compared to the ideal positions they would adopt if they were Bayesian AIs and could trust themselves to reason clearly.

\n

But don't be too charitable.  The noise we are talking about is not all innocent mistakes.  In many fields, debates drag on for decades after they should have been settled.  And not because the scientists on both sides refuse to trust themselves and agree they should look for additional evidence.  But because one side keeps throwing up more and more ridiculous objections, and demanding more and more evidence, from an entrenched position of academic power, long after it becomes clear from which quarter the winds of evidence are blowing.  (I'm thinking here about the debates surrounding the invention of evolutionary psychology, not about many-worlds.)

\n

Is it possible for individual humans or groups to process evidence more efficiently—reach correct conclusions faster—than human academic science as a whole?

\n

\"Ideas are tested by experiment.  That is the core of science.\"  And this must be true, because if you can't trust Zombie Feynman, who can you trust?

\n

Yet where do the ideas come from?

\n

You may be tempted to reply, \"They come from scientists.  Got any other questions?\"  In Science you're not supposed to care where the hypotheses come from—just whether they pass or fail experimentally.

\n

Okay, but if you remove all new ideas, the scientific process as a whole stops working because it has no alternative hypotheses to test.  So inventing new ideas is not a dispensable part of the process.

\n

Now put your Bayesian goggles back on.  As described in Einstein's Arrogance, there are queries that are not binary—where the answer is not \"Yes\" or \"No\", but drawn from a larger space of structures, e.g., the space of equations.  In such cases it takes far more Bayesian evidence to promote a hypothesis to your attention than to confirm the hypothesis.

\n

If you're working in the space of all equations that can be specified in 32 bits or less, you're working in a space of over 4 billion equations.  It takes far more Bayesian evidence to raise one of those hypotheses to the 10% probability level than it takes to raise that same hypothesis from 10% to 90%.

\n

When the idea-space is large, coming up with ideas worthy of testing involves much more work—in the Bayesian-thermodynamic sense of \"work\"—than merely obtaining an experimental result with p<0.0001 for the new hypothesis over the old hypothesis.
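
\n

To put rough numbers on this, here is a back-of-the-envelope check, assuming a uniform prior over the 2^32 equations; that is an idealization, since a sensible prior would favor the shorter equations:

```python
import math

N = 2 ** 32  # roughly the number of equations specifiable in 32 bits or less

def bits_needed(prior_odds, posterior_prob):
    # Evidence, measured in bits of likelihood ratio, required to move
    # from the given prior odds to the given posterior probability.
    posterior_odds = posterior_prob / (1 - posterior_prob)
    return math.log2(posterior_odds / prior_odds)

uniform_prior_odds = 1 / (N - 1)  # one correct equation against all the rest
promote = bits_needed(uniform_prior_odds, 0.10)  # raise it to attention
confirm = bits_needed(0.10 / 0.90, 0.90)         # then take it from 10% to 90%

print(f'promote to 10%: {promote:.1f} bits')  # about 28.8 bits
print(f'confirm to 90%: {confirm:.1f} bits')  # about 6.3 bits
```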

\n

If this doesn't seem obvious-at-a-glance, pause here and read Einstein's Arrogance.

\n

The scientific process has always relied on scientists to come up with hypotheses to test, via some process not further specified by Science.  Suppose you came up with some way of generating hypotheses that was completely crazy—say, pumping a robot-controlled Ouija board with the digits of pi—and the resulting suggestions kept on getting verified experimentally.  The pure ideal essence of Science wouldn't skip a beat.  The pure ideal essence of Bayes would burst into flames and die.

\n

(Compared to Science, Bayes is falsified by more of the possible outcomes.)

\n

This doesn't mean that the process of deciding which ideas to test is unimportant to Science.  It means that Science doesn't specify it.

\n

In practice, the robot-controlled Ouija board doesn't work.  In practice, there are some scientific queries with an answer space so large that, picking models at random to test, it would take zillions of years to hit on a model that made good predictions—like getting monkeys to type Shakespeare.

\n

At the frontier of science—the boundary between ignorance and knowledge, where science advances—the process relies on at least some individual scientists (or working groups) seeing things that are not yet confirmed by Science.  That's how they know which hypotheses to test, in advance of the test itself.

\n

If you take your Bayesian goggles off, you can say, \"Well, they don't have to know, they just have to guess.\"  If you put your Bayesian goggles back on, you realize that \"guessing\" with 10% probability requires nearly as much epistemic work to have been successfully performed, behind the scenes, as \"guessing\" with 80% probability—at least for large answer spaces.

\n

The scientist may not know he has done this epistemic work successfully, in advance of the experiment; but he must, in fact, have done it successfully!  Otherwise he will not even think of the correct hypothesis.  In large answer spaces, anyway.

\n

So the scientist makes the novel prediction, performs the experiment, publishes the result, and now Science knows it too.  It is now part of the publicly accessible knowledge of humankind, that anyone can verify for themselves.

\n

In between was an interval where the scientist rationally knew something that the public social process of science hadn't yet confirmed.  And this is not a trivial interval, though it may be short; for it is where the frontier of science lies, the advancing border.

\n

All of this is more true for non-routine science than for routine science, because the argument turns on large answer spaces, where the answer is not \"Yes\" or \"No\" or drawn from a small set of obvious alternatives.  It is much easier to train people to test ideas than to have good ideas to test.

" } }, { "_id": "cbgZ64CkKyMYDswP7", "title": "Conference on Global Catastrophic Risks", "pageUrl": "https://www.lesswrong.com/posts/cbgZ64CkKyMYDswP7/conference-on-global-catastrophic-risks", "postedAt": "2008-05-19T22:13:39.000Z", "baseScore": 2, "voteCount": 4, "commentCount": 7, "url": null, "contents": { "documentId": "cbgZ64CkKyMYDswP7", "html": "

FYI:  The Oxford Future of Humanity Institute is holding a conference on global catastrophic risks on July 17-20, 2008, at Oxford (in the UK).

\n\n

I'll be there, as will Robin Hanson and Nick Bostrom.

\n\n

Deadline for registration is May 26th, 2008.  Registration is £60.

" } }, { "_id": "SoDsr8GEZmRKMZNkj", "title": "Changing the Definition of Science", "pageUrl": "https://www.lesswrong.com/posts/SoDsr8GEZmRKMZNkj/changing-the-definition-of-science", "postedAt": "2008-05-18T18:07:35.000Z", "baseScore": 32, "voteCount": 37, "commentCount": 30, "url": null, "contents": { "documentId": "SoDsr8GEZmRKMZNkj", "html": "

New Scientist on changing the definition of science, ungated here:

\n
\n

Others believe such criticism is based on a misunderstanding. \"Some people say that the multiverse concept isn't falsifiable because it's unobservable—but that's a fallacy,\" says cosmologist Max Tegmark of the Massachusetts Institute of Technology. He argues that the multiverse is a natural consequence of such eminently falsifiable theories as quantum theory and general relativity. As such, the multiverse theory stands or falls according to how well these other theories stand up to observational tests.
[...]
So if the simplicity of falsification is misleading, what should scientists be doing instead? Howson believes it is time to ditch Popper's notion of capturing the scientific process using deductive logic. Instead, the focus should be on reflecting what scientists actually do: gathering the weight of evidence for rival theories and assessing their relative plausibility.

\n
\n

\n
\n

Howson is a leading advocate for an alternative view of science based not on simplistic true/false logic, but on the far more subtle concept of degrees of belief. At its heart is a fundamental connection between the subjective concept of belief and the cold, hard mathematics of probability.

\n
\n

I'm a good deal less of a lonely iconoclast than I seem.  Maybe it's just the way I talk.

\n

The points of departure between myself and the mainstream let's-reformulate-Science-as-Bayesianism position are these:

\n

(1)  I'm not in academia and can censor myself a lot less when it comes to saying \"extreme\" things that others might well already be thinking.

\n

(2)  I think that just teaching probability theory won't be nearly enough.  We'll have to synthesize lessons from multiple fields, like the study of cognitive biases and social psychology, forming a new coherent Art of Bayescraft, before we actually do any better in the real world than modern science.  Science tolerates errors; Bayescraft does not.  Nobel laureate Robert Aumann, who first proved that Bayesians with the same priors cannot agree to disagree, is a believing Orthodox Jew: probability theory alone won't do the trick, when it comes to really teaching scientists.  This is my primary point of departure, and it is not something I've seen suggested elsewhere.

\n

(3)  I think it is possible to do better in the real world.  In the extreme case, a Bayesian superintelligence could use enormously less sensory information than a human scientist to come to correct conclusions.  The first time you ever see an apple fall down, you observe that the position goes as the square of time, invent calculus, generalize Newton's Laws... and, seeing that Newton's Laws involve action at a distance, look for alternative explanations with increased locality, invent relativistic covariance around a hypothetical speed limit, and consider that General Relativity might be worth testing.  Humans do not process evidence efficiently—our minds are so noisy that it requires orders of magnitude more evidence to set us back on track after we derail.  Our collective, academia, is even slower.

" } }, { "_id": "wustx45CPL5rZenuo", "title": "No Safe Defense, Not Even Science", "pageUrl": "https://www.lesswrong.com/posts/wustx45CPL5rZenuo/no-safe-defense-not-even-science", "postedAt": "2008-05-18T05:19:24.000Z", "baseScore": 130, "voteCount": 102, "commentCount": 75, "url": null, "contents": { "documentId": "wustx45CPL5rZenuo", "html": "

I don't ask my friends about their childhoods—I lack social curiosity—and so I don't know how much of a trend this really is:

\n

Of the people I know who are reaching upward as rationalists, who volunteer information about their childhoods, there is a surprising tendency to hear things like:  \"My family joined a cult and I had to break out,\" or \"One of my parents was clinically insane and I had to learn to filter out reality from their madness.\"

\n

My own experience with growing up in an Orthodox Jewish family seems tame by comparison... but it accomplished the same outcome:  It broke my core emotional trust in the sanity of the people around me.

\n

Until this core emotional trust is broken, you don't start growing as a rationalist.  I have trouble putting into words why this is so.  Maybe any unusual skills you acquire—anything that makes you unusually rational—requires you to zig when other people zag.  Maybe that's just too scary, if the world still seems like a sane place unto you.

\n

Or maybe you don't bother putting in the hard work to be extra bonus sane, if normality doesn't scare the hell out of you.

\n

\n

I know that many aspiring rationalists seem to run into roadblocks around things like cryonics or many-worlds.  Not that they don't see the logic; they see the logic and wonder, \"Can this really be true, when it seems so obvious now, and yet none of the people around me believe it?\"

\n

Yes.  Welcome to the Earth where ethanol is made from corn and environmentalists oppose nuclear power.  I'm sorry.

\n

(See also:  Cultish Countercultishness.  If you end up in the frame of mind of nervously seeking reassurance, this is never a good thing—even if it's because you're about to believe something that sounds logical but could cause other people to look at you funny.)

\n

People who've had their trust broken in the sanity of the people around them, seem to be able to evaluate strange ideas on their merits, without feeling nervous about their strangeness.  The glue that binds them to their current place has dissolved, and they can walk in some direction, hopefully forward.

\n

Lonely dissent, I called it.  True dissent doesn't feel like going to school wearing black; it feels like going to school wearing a clown suit.

\n

That's what it takes to be the lone voice who says, \"If you really think you know who's going to win the election, why aren't you picking up the free money on the Intrade prediction market?\" while all the people around you are thinking, \"It is good to be an individual and form your own opinions, the shoe commercials told me so.\"

\n

Maybe in some other world, some alternate Everett branch with a saner human population, things would be different... but in this world, I've never seen anyone begin to grow as a rationalist until they make a deep emotional break with the wisdom of their pack.

\n

Maybe in another world, things would be different.  And maybe not.  I'm not sure that human beings realistically can trust and think at the same time.

\n

Once upon a time, there was something I trusted.

\n

Eliezer18 trusted Science.

\n

Eliezer18 dutifully acknowledged that the social process of science was flawed.  Eliezer18 dutifully acknowledged that academia was slow, and misallocated resources, and played favorites, and mistreated its precious heretics.

\n

That's the convenient thing about acknowledging flaws in people who failed to live up to your ideal; you don't have to question the ideal itself.

\n

But who could possibly be foolish enough to question, \"The experimental method shall decide which hypothesis wins\"?

\n

Part of what fooled Eliezer18 was a general problem he had, with an aversion to ideas that resembled things idiots had said.  Eliezer18 had seen plenty of people questioning the ideals of Science Itself, and without exception they were all on the Dark Side.  People who questioned the ideal of Science were invariably trying to sell you snake oil, or trying to safeguard their favorite form of stupidity from criticism, or trying to disguise their personal resignation as a Deeply Wise acceptance of futility.

\n

If there'd been any other ideal that was a few centuries old, the young Eliezer would have looked at it and said, \"I wonder if this is really right, and whether there's a way to do better.\"  But not the ideal of Science.  Science was the master idea, the idea that let you change ideas.  You could question it, but you were meant to question it and then accept it, not actually say, \"Wait!  This is wrong!\"

\n

Thus, when once upon a time I came up with a stupid idea, I thought I was behaving virtuously if I made sure there was a Novel Prediction, and professed that I wished to test my idea experimentally.  I thought I had done everything I was obliged to do.

\n

So I thought I was safe—not safe from any particular external threat, but safe on some deeper level, like a child who trusts their parent and has obeyed all the parent's rules.

\n

I'd long since been broken of trust in the sanity of my family or my teachers at school.  And the other children weren't intelligent enough to compete with the conversations I could have with books.  But I trusted the books, you see.  I trusted that if I did what Richard Feynman told me to do, I would be safe.  I never thought those words aloud, but it was how I felt.

\n

When Eliezer23 realized exactly how stupid the stupid theory had been—and that Traditional Rationality had not saved him from it—and that Science would have been perfectly okay with his wasting ten years testing the stupid idea, so long as afterward he admitted it was wrong...

\n

...well, I'm not going to say it was a huge emotional convulsion.  I don't really go in for that kind of drama.  It simply became obvious that I'd been stupid.

\n

That's the trust I'm trying to break in you.  You are not safe.  Ever.

\n

Not even Science can save you.  The ideals of Science were born centuries ago, in a time when no one knew anything about probability theory or cognitive biases.  Science demands too little of you, it blesses your good intentions too easily, it is not strict enough, it only makes those injunctions that an average scientist can follow, it accepts slowness as a fact of life.

\n

So don't think that if you only follow the rules of Science, that makes your reasoning defensible.

\n

There is no known procedure you can follow that makes your reasoning defensible.

\n

There is no known set of injunctions which you can satisfy, and know that you will not have been a fool.

\n

There is no known morality-of-reasoning that you can do your best to obey, and know that you are thereby shielded from criticism.

\n

No, not even if you turn to Bayescraft.  It's much harder to use and you'll never be sure that you're doing it right.

\n

The discipline of Bayescraft is younger by far than the discipline of Science.  You will find no textbooks, no elderly mentors, no histories written of success and failure, no hard-and-fast rules laid down.  You will have to study cognitive biases, and probability theory, and evolutionary psychology, and social psychology, and other cognitive sciences, and Artificial Intelligence—and think through for yourself how to apply all this knowledge to the case of correcting yourself, since that isn't yet in the textbooks.

\n

You don't know what your own mind is really doing. They find a new cognitive bias every week and you're never sure if you've corrected for it, or overcorrected.

\n

The formal math is impossible to apply.  It doesn't break down as easily as John Q. Unbeliever thinks, but you're never really sure where the foundations come from.  You don't know why the universe is simple enough to understand, or why any prior works for it.  You don't know what your own priors are, let alone if they're any good.

\n

One of the problems with Science is that it's too vague to really scare you.  \"Ideas should be tested by experiment.\"  How can you go wrong with that?

\n

On the other hand, if you have some math of probability theory laid out in front of you, and worse, you know you can't actually use it, then it becomes clear that you are trying to do something difficult, and that you might well be doing it wrong.

\n

So you cannot trust.

\n

And all this that I have said, will not be sufficient to break your trust.  That won't happen until you get into your first real disaster from following The Rules, not from breaking them.

\n

Eliezer18 already had the notion that you were allowed to question Science.  Why, of course the scientific method was not itself immune to questioning!  For are we not all good rationalists?  Are we not allowed to question everything?

\n

It was the notion that you could actually, in real life, follow Science and fail miserably, that Eliezer18 didn't really, emotionally, believe was possible.

\n

Oh, of course he said it was possible.  Eliezer18 dutifully acknowledged the possibility of error, saying, \"I could be wrong, but...\"

\n

But he didn't think failure could happen in, you know, real life.  You were supposed to look for flaws, not actually find them.

\n

And this emotional difference is a terribly difficult thing to convey in words, and I fear there's no way I can really warn you.

\n

Your trust will not break, until you apply all that you have learned here and from other books, and take it as far as you can go, and find that this too fails you—that you have still been a fool, and no one warned you against it—that all the most important parts were left out of the guidance you received—that some of the most precious ideals you followed, steered you in the wrong direction—

\n

—and if you still have something to protect, so that you must keep going, and cannot resign and wisely acknowledge the limitations of rationality—

\n

then you will be ready to start your journey as a rationalist.  To take sole responsibility, to live without any trustworthy defenses, and to forge a higher Art than the one you were once taught.

\n

No one begins to truly search for the Way until their parents have failed them, their gods are dead, and their tools have shattered in their hand.

\n
\n

Post Scriptum:  On reviewing a draft of this essay, I discovered a fairly inexcusable flaw in reasoning, which actually affects one of the conclusions drawn.  I am leaving it in.  Just in case you thought that taking my advice made you safe; or that you were supposed to look for flaws, but not find any.

\n

And of course, if you look too hard for a flaw, and find a flaw that is not a real flaw, and cling to it to reassure yourself of how critical you are, you will only be worse off than before...

\n

It is living with uncertainty—knowing on a gut level that there are flaws, that they are serious, and that you have not found them—that is the difficult thing.

" } }, { "_id": "WijMw9WkcafmCFgj4", "title": "Do Scientists Already Know This Stuff?", "pageUrl": "https://www.lesswrong.com/posts/WijMw9WkcafmCFgj4/do-scientists-already-know-this-stuff", "postedAt": "2008-05-17T02:25:59.000Z", "baseScore": 68, "voteCount": 54, "commentCount": 57, "url": null, "contents": { "documentId": "WijMw9WkcafmCFgj4", "html": "

poke alleges:

\n
\n

\"Being able to create relevant hypotheses is an important skill and one a scientist spends a great deal of his or her time developing. It may not be part of the traditional description of science but that doesn't mean it's not included in the actual social institution of science that produces actual real science here in the real world; it's your description and not science that is faulty.\"

\n
\n

I know I've been calling my younger self \"stupid\" but that is a figure of speech; \"unskillfully wielding high intelligence\" would be more precise.  Eliezer18 was not in the habit of making obvious mistakes—it's just that his \"obvious\" wasn't my \"obvious\".

\n

No, I did not go through the traditional apprenticeship.  But when I look back, and see what Eliezer18 did wrong, I see plenty of modern scientists making the same mistakes.  I cannot detect any sign that they were better warned than myself.

\n

Sir Roger Penrose—a world-class physicist—still thinks that consciousness is caused by quantum gravity.  I expect that no one ever warned him against mysterious answers to mysterious questions—only told him his hypotheses needed to be falsifiable and have empirical consequences.  Just like Eliezer18.

\n

\n

\"Consciousness is caused by quantum gravity\" has testable implications:  It implies that you should be able to look at neurons and discover a coherent quantum superposition (whose collapse?) contributes to information-processing, and that you won't ever be able to reproduce a neuron's input-output behavior using a computable microanatomical simulation...

\n

...but even after you say \"Consciousness is caused by quantum gravity\", you don't anticipate anything about how your brain thinks \"I think therefore I am!\" or the mysterious redness of red, that you did not anticipate before, even though you feel like you know a cause of it.  This is a tremendous danger sign, I now realize, but it's not the danger sign that I was warned against, and I doubt that Penrose was ever told of it by his thesis advisor.  For that matter, I doubt that Niels Bohr was ever warned against it when it came time to formulate the Copenhagen Interpretation.

\n

As far as I can tell, the reason Eliezer18 and Sir Roger Penrose and Niels Bohr were not warned, is that no standard warning exists.

\n

I did not generalize the concept of \"mysterious answers to mysterious questions\", in that many words, until I was writing a Bayesian analysis of what distinguishes technical, nontechnical and semitechnical scientific explanations.  Now, the final output of that analysis, can be phrased nontechnically in terms of four danger signs:

1. First, the explanation acts as a curiosity-stopper rather than an anticipation-controller.
2. Second, the hypothesis has no moving parts: the model is not a specific complex mechanism, but a blankly solid substance or force.
3. Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike merely mundane phenomena.
4. Fourth, even after the answer is given, the phenomenon is still a mystery, and possesses the same quality of wonderful inexplicability that it had at the start.

In principle, all this could have been said in the immediate aftermath of vitalism.  Just like elementary probability theory could have been invented by Archimedes, or the ancient Greeks could have theorized natural selection.  But in fact no one ever warned me against any of these four dangers, in those terms—the closest being the warning that hypotheses should have testable consequences.  And I didn't conceptualize the warning signs explicitly until I was trying to think of the whole affair in terms of probability distributions—some degree of overkill was required.

\n

I simply have no reason to believe that these warnings are passed down in scientific apprenticeships—certainly not to a majority of scientists.  Among other things, it is advice for handling situations of confusion and despair, scientific chaos.  When would the average scientist or average mentor have an opportunity to use that kind of technique?

\n

We just got through discussing the single-world fiasco in physics. Clearly, no one told them about the formal definition of Occam's Razor, in whispered apprenticeship or otherwise.

\n

There is a known effect where great scientists have multiple great students.  This may well be due to the mentors passing on skills that they can't describe.  But I don't think that counts as part of standard science.  And if the great mentors haven't been able to put their guidance into words and publish it generally, that's not a good sign for how well these things are understood.

\n

Reasoning in the absence of definite evidence without going instantaneously completely wrong is really really hard.  When you're learning in school, you can miss one point, and then be taught fifty other points that happen to be correct.  When you're reasoning out new knowledge in the absence of crushingly overwhelming guidance, you can miss one point and wake up in Outer Mongolia fifty steps later.

\n

I am pretty sure that scientists who switch off their brains and relax with some comfortable nonsense as soon as they leave their own specialties, do not realize that minds are engines and that there is a causal story behind every trustworthy belief.  Nor, I suspect, were they ever told that there is an exact rational probability given a state of evidence, which has no room for whims; even if you can't calculate the answer, and even if you don't hear any authoritative command for what to believe.

\n

I doubt that scientists who are asked to pontificate on the future by the media, who sketch amazingly detailed pictures of Life in 2050, were ever taught about the conjunction fallacy.  Or how the representativeness heuristic can make more detailed stories seem more plausible, even as each extra detail drags down the probability.  The notion of every added detail needing its own support—of not being able to make up big detailed stories that sound just like the detailed stories you were taught in science or history class—is absolutely vital to precise thinking in the absence of definite evidence.  But how would a notion like that get into the standard scientific apprenticeship?  The cognitive bias was uncovered only a few decades ago, and not popularized until very recently.

\n

Then there's affective death spirals around notions like \"emergence\" or \"complexity\" which are sufficiently vaguely defined that you can say lots of nice things about them.  There's whole academic subfields built around the kind of mistakes that Eliezer18 used to make!  (Though I never fell for the \"emergence\" thing.)

\n

I sometimes say that the goal of science is to amass such an enormous mountain of evidence that not even scientists can ignore it; and that this is the distinguishing feature of a scientist: a non-scientist will ignore it anyway.

\n

If there can exist some amount of evidence so crushing that you finally despair, stop making excuses and just give up—drop the old theory and never mention it again—then this is all it takes to let the ratchet of Science turn forward over time, and raise up a technological civilization.  Contrast to religion.

\n

Books by Carl Sagan and Martin Gardner and the other veins of Traditional Rationality are meant to accomplish this difference: to transform someone from a non-scientist into a potential scientist, and guard them from experimentally disproven madness.

\n

What further training does a professional scientist get?  Some frequentist stats classes on how to calculate statistical significance.  Training in standard techniques that will let them churn out papers within a solidly established paradigm.

\n

If Science demanded more than this from the average scientist, I don't think it would be possible for Science to get done.  We have problems enough from people who sneak in without the drop-dead-basic qualifications.

\n

Nick Tarleton summarized the resulting problem very well—better than I did, in fact:  If you come up with a bizarre-seeming hypothesis not yet ruled out by the evidence, and try to test it experimentally, Science doesn't call you a bad person.  Science doesn't trust its elders to decide which hypotheses \"aren't worth testing\". But this is a carefully lax social standard, and if you try to translate it into a standard of individual epistemic rationality, it lets you believe far too much.  Dropping back into the analogy with pragmatic-distrust-based-libertarianism, it's the difference between \"Cigarettes shouldn't be illegal\" and \"Go smoke a Marlboro\".

\n

Do you remember ever being warned against that mistake, in so many words?  Then why wouldn't people make exactly that error?  How many people will spontaneously go an extra mile and be even stricter with themselves?  Some, but not many.

\n

Many scientists will believe all manner of ridiculous things outside the laboratory, so long as they can convince themselves it hasn't been definitely disproven, or so long as they manage not to ask.  Is there some standard lecture that grad students get, after which people see this folly and ask, \"Were they absent from class that day?\"  No, as far as I can tell.

\n

Maybe if you're super lucky and get a famous mentor, they'll tell you rare personal secrets like \"Ask yourself which are the important problems in your field, and then work on one of those, instead of falling into something easy and trivial\" or \"Be more careful than the journal editors demand; look for new ways to guard your expectations from influencing the experiment, even if it's not standard.\"

\n

But I really don't think there's a huge secret standard scientific tradition of precision-grade rational reasoning on sparse evidence.  Half of all the scientists out there still believe they believe in God!  The more difficult skills are not standard!

" } }, { "_id": "PGfJdgemDJSwWBZSX", "title": "Science Isn't Strict Enough", "pageUrl": "https://www.lesswrong.com/posts/PGfJdgemDJSwWBZSX/science-isn-t-strict-enough", "postedAt": "2008-05-16T06:51:16.000Z", "baseScore": 62, "voteCount": 47, "commentCount": 64, "url": null, "contents": { "documentId": "PGfJdgemDJSwWBZSX", "html": "

Once upon a time, a younger Eliezer had a stupid theory.  Eliezer18 was careful to follow the precepts of Traditional Rationality that he had been taught; he made sure his stupid theory had experimental consequences.  Eliezer18 professed, in accordance with the virtues of a scientist he had been taught, that he wished to test his stupid theory.

This was all that was required to be virtuous, according to what Eliezer18  had been taught was virtue in the way of science.

It was not even remotely the order of effort that would have been required to get it right.

The traditional ideals of Science too readily give out gold stars. Negative experimental results are also knowledge, so everyone who plays gets an award.  So long as you can think of some kind of experiment that tests your theory, and you do the experiment, and you accept the results, you've played by the rules; you're a good scientist.

You didn't necessarily get it right, but you're a nice science-abiding citizen.

(I note at this point that I am speaking of Science, not the social process of science as it actually works in practice, for two reasons.  First, I went astray in trying to follow the ideal of Science—it's not like I was shot down by a journal editor with a grudge, and it's not like I was trying to imitate the flaws of academia.  Second, if I point out a problem with the ideal as it is traditionally preached, real-world scientists are not forced to likewise go astray!)

Science began as a rebellion against grand philosophical schemas and armchair reasoning.  So Science doesn't include a rule as to what kinds of hypotheses you are and aren't allowed to test; that is left up to the individual scientist.  Trying to guess that a priori, would require some kind of grand philosophical schema, and reasoning in advance of the evidence.  As a social ideal, Science doesn't judge you as a bad person for coming up with heretical hypotheses; honest experiments, and acceptance of the results, is virtue unto a scientist.

As long as most scientists can manage to accept definite, unmistakable, unambiguous experimental evidence, science can progress.  It may happen too slowly—it may take longer than it should—you may have to wait for a generation of elders to die out—but eventually, the ratchet of knowledge clicks forward another notch.  Year by year, decade by decade, the wheel turns forward.  It's enough to support a civilization.

So that's all that Science really asks of you—the ability to accept reality when you're beat over the head with it.  It's not much, but it's enough to sustain a scientific culture.

Contrast this to the notion we have in probability theory, of an exact quantitative rational judgment.  If 1% of women presenting for a routine screening have breast cancer, and 80% of women with breast cancer get positive mammographies, and 10% of women without breast cancer get false positives, what is the probability that a routinely screened woman with a positive mammography has breast cancer?  7.5%.  You cannot say, \"I believe she doesn't have breast cancer, because the experiment isn't definite enough.\"  You cannot say, \"I believe she has breast cancer, because it is wise to be pessimistic and that is what the only experiment so far seems to indicate.\"  7.5% is the rational estimate given this evidence, not 7.4% or 7.6%.  The laws of probability are laws.
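
The 7.5% figure follows directly from Bayes' rule, and takes only a few lines to verify:

```python
def posterior_given_positive(prior, true_positive_rate, false_positive_rate):
    # Bayes' rule: P(cancer | positive mammography).
    p_positive_and_cancer = prior * true_positive_rate
    p_positive_and_healthy = (1 - prior) * false_positive_rate
    return p_positive_and_cancer / (p_positive_and_cancer + p_positive_and_healthy)

print(posterior_given_positive(0.01, 0.80, 0.10))  # 0.0748 -- the 7.5% above
```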

It is written in the Twelve Virtues, of the third virtue, lightness:

If you regard evidence as a constraint and seek to free yourself, you sell yourself into the chains of your whims.  For you cannot make a true map of a city by sitting in your bedroom with your eyes shut and drawing lines upon paper according to impulse.  You must walk through the city and draw lines on paper that correspond to what you see.  If, seeing the city unclearly, you think that you can shift a line just a little to the right, just a little to the left, according to your caprice, this is just the same mistake.

In Science, when it comes to deciding which hypotheses to test, the morality of Science gives you personal freedom of what to believe, so long as it isn't already ruled out by experiment, and so long as you move to test your hypothesis.  Science wouldn't try to give an official verdict on the best hypothesis to test, in advance of the experiment.  That's left up to the conscience of the individual scientist.

Where definite experimental evidence exists, Science tells you to bow your stubborn neck and accept it.  Otherwise, Science leaves it up to you.  Science gives you room to wander around within the boundaries of the experimental evidence, according to your whims.

And this is not easily reconciled with Bayesianism's notion of an exactly right probability estimate, one with no flex or room for whims, that exists both before and after the experiment.  It doesn't match well with the ancient and traditional reason for Science—the distrust of grand schemas, the presumption that people aren't rational enough to get things right without definite and unmistakable experimental evidence.  If we were all perfect Bayesians, we wouldn't need a social process of science.

Nonetheless, around the time I realized my big mistake, I had also been studying Kahneman and Tversky and Jaynes.  I was learning a new Way, stricter than Science.  A Way that could criticize my folly, in a way that Science never could.  A Way that could have told me, what Science would never have said in advance:  \"You picked the wrong hypothesis to test, dunderhead.\"

But the Way of Bayes is also much harder to use than Science.  It puts a tremendous strain on your ability to hear tiny false notes, where Science only demands that you notice an anvil dropped on your head.

In Science you can make a mistake or two, and another experiment will come by and correct you; at worst you waste a couple of decades.

But if you try to use Bayes even qualitatively—if you try to do the thing that Science doesn't trust you to do, and reason rationally in the absence of overwhelming evidence—it is like math, in that a single error in a hundred steps can carry you anywhere.  It demands lightness, evenness, precision, perfectionism.

There's a good reason why Science doesn't trust scientists to do this sort of thing, and asks for further experimental proof even after someone claims they've worked out the right answer based on hints and logic.

But if you would rather not waste ten years trying to prove the wrong theory, you'll need to essay the vastly more difficult problem: listening to evidence that doesn't shout in your ear.

Even if you can't look up the priors for a problem in the Handbook of Chemistry and Physics—even if there's no Authoritative Source telling you what the priors are—that doesn't mean you get a free, personal choice of making the priors whatever you want.  It means you have a new guessing problem which you must carry out to the best of your ability.

If the mind, as a cognitive engine, could generate correct estimates by fiddling with priors according to whims, you could know things without looking at them, or even alter them without touching them.  But the mind is not magic.  The rational probability estimate has no room for any decision based on whim, even when it seems that you don't know the priors.

Similarly, if the Bayesian answer is difficult to compute, that doesn't mean that Bayes is inapplicable; it means you don't know what the Bayesian answer is.  Bayesian probability theory is not a toolbox of statistical methods, it's the law that governs any tool you use, whether or not you know it, whether or not you can calculate it.

As for using Bayesian methods on huge, highly general hypothesis spaces—like, \"Here's the data from every physics experiment ever; now, what would be a good Theory of Everything?\"—if you knew how to do that in practice, you wouldn't be a statistician, you would be an Artificial General Intelligence programmer.  But that doesn't mean that human beings, in modeling the universe using human intelligence, are violating the laws of physics / Bayesianism by generating correct guesses without evidence.

Nick Tarleton comments:

The problem is encouraging a private, epistemic standard as lax as the social one.

which pinpoints the problem I was trying to indicate much better than I did.

" } }, { "_id": "wzxneh7wxkdNYNbtB", "title": "When Science Can't Help", "pageUrl": "https://www.lesswrong.com/posts/wzxneh7wxkdNYNbtB/when-science-can-t-help", "postedAt": "2008-05-15T07:24:25.000Z", "baseScore": 83, "voteCount": 64, "commentCount": 90, "url": null, "contents": { "documentId": "wzxneh7wxkdNYNbtB", "html": "

Once upon a time, a younger Eliezer had a stupid theory.  Let's say that Eliezer18's stupid theory was that consciousness was caused by closed timelike curves hiding in quantum gravity.  This isn't the whole story, not even close, but it will do for a start.

\n

And there came a point where I looked back, and realized:

\n
    \n
  1. I had carefully followed everything I'd been told was Traditionally Rational, in the course of going astray.  For example, I'd been careful to only believe in stupid theories that made novel experimental predictions, e.g., that neuronal microtubules would be found to support coherent quantum states.
  2. Science would have been perfectly fine with my spending ten years trying to test my stupid theory, only to get a negative experimental result, so long as I then said, \"Oh, well, I guess my theory was wrong.\"
\n

From Science's perspective, that is how things are supposed to work—happy fun for everyone.  You admitted your error!  Good for you!  Isn't that what Science is all about?

\n

But what if I didn't want to waste ten years?

\n

Well... Science didn't have much to say about that.  How could Science say which theory was right, in advance of the experimental test?  Science doesn't care where your theory comes from—it just says, \"Go test it.\"

\n

This is the great strength of Science, and also its great weakness.

\n

\n

Gray Area asked:

\n
\n

Eliezer, why are you concerned with untestable questions?

\n
\n

Because questions that can be easily and immediately tested are hard for Science to get wrong.

\n

I mean, sure, when there's already definite unmistakable experimental evidence available, go with it.  Why on Earth wouldn't you?

\n

But sometimes a question will have very large, very definite experimental consequences in your future—but you can't easily test it experimentally right now—and yet there is a strong rational argument.

\n

Macroscopic quantum superpositions are readily testable:  It would just take nanotechnologic precision, very low temperatures, and a nice clear area of interstellar space.  Oh, sure, you can't do it right now, because it's too expensive or impossible for today's technology or something like that—but in theory, sure!  Why, maybe someday they'll run whole civilizations on macroscopically superposed quantum computers, way out in a well-swept volume of a Great Void.  (Asking what quantum non-realism says about the status of any observers inside these computers, helps to reveal the underspecification of quantum non-realism.)

\n

This doesn't seem immediately pragmatically relevant to your life, I'm guessing, but it establishes the pattern:  Not everything with future consequences is cheap to test now.

\n

Evolutionary psychology is another example of a case where rationality has to take over from science.  While theories of evolutionary psychology form a connected whole, only some of those theories are readily testable experimentally.  But you still need the other parts of the theory, because they form a connected web that helps you to form the hypotheses that are actually testable—and then the helper hypotheses are supported in a Bayesian sense, but not supported experimentally.  Science would render a verdict of \"not proven\" on individual parts of a connected theoretical mesh that is experimentally productive as a whole.  We'd need a new kind of verdict for that, something like \"indirectly supported\".

\n

Or what about cryonics?

\n

Cryonics is an archetypal example of an extremely important issue (150,000 people die per day) that will have huge consequences in the foreseeable future, but doesn't offer definite unmistakable experimental evidence that we can get right now.

\n

So do you say, \"I don't believe in cryonics because it hasn't been experimentally proven, and you shouldn't believe in things that haven't been experimentally proven?\"

\n

Well, from a Bayesian perspective, that's incorrect.  Absence of evidence is evidence of absence only to the degree that we could reasonably expect the evidence to appear.  If someone is trumpeting that snake oil cures cancer, you can reasonably expect that, if the snake oil was actually curing cancer, some scientist would be performing a controlled study to verify it—that, at the least, doctors would be reporting case studies of amazing recoveries—and so the absence of this evidence is strong evidence of absence.  But \"gaps in the fossil record\" are not strong evidence against evolution; fossils form only rarely, and even if an intermediate species did in fact exist, you cannot expect with high probability that Nature will obligingly fossilize it and that the fossil will be discovered.
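
\n

The asymmetry between the snake-oil case and the fossil case can be made quantitative.  A minimal sketch follows; the probabilities for how likely the evidence would be to surface are placeholders chosen for illustration, not measurements:

```python
def posterior_after_absence(prior, p_evidence_if_true, p_evidence_if_false):
    # Update on observing NO evidence: the likelihood ratio is
    # P(no evidence | true) / P(no evidence | false).
    lr = (1 - p_evidence_if_true) / (1 - p_evidence_if_false)
    posterior_odds = (prior / (1 - prior)) * lr
    return posterior_odds / (1 + posterior_odds)

# Snake oil: a real cure would very probably have generated studies by now.
print(posterior_after_absence(0.50, 0.95, 0.02))  # ~0.05: strong evidence of absence
# Fossils: an intermediate species rarely fossilizes even if it existed.
print(posterior_after_absence(0.50, 0.05, 0.00))  # ~0.49: weak evidence of absence
```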

\n

Reviving a cryonically frozen mammal is just not something you'd expect to be able to do with modern technology, even if future nanotechnologies could in fact perform a successful revival.  That's how I see Bayes seeing it.

\n

Oh, and as for the actual arguments for cryonics—I'm not going to go into those at the moment.  But if you followed the physics and anti-Zombie sequences, it should now seem a lot more plausible, that whatever preserves the pattern of synapses, preserves as much of \"you\" as is preserved from one night's sleep to morning's waking.

\n

Now, to be fair, someone who says, \"I don't believe in cryonics because it hasn't been proven experimentally\" is misapplying the rules of Science; this is not a case where science actually gives the wrong answer.  In the absence of a definite experimental test, the verdict of science here is \"Not proven\".  Anyone who interprets that as a rejection is taking an extra step outside of science, not a misstep within science.

\n

John McCarthy's Wikiquotes page has him saying, \"Your statements amount to saying that if AI is possible, it should be easy. Why is that?\"  The Wikiquotes page doesn't say what McCarthy was responding to, but I could venture a guess.

\n

The general mistake probably arises because there are cases where the absence of scientific proof is strong evidence—because an experiment would be readily performable, and so failure to perform it is itself suspicious.  (Though not as suspicious as I used to think—with all the strangely varied anecdotal evidence coming in from respected sources, why the hell isn't anyone testing Seth Roberts's theory of appetite suppression?)

\n

Another confusion factor may be that if you test Pharmaceutical X on 1000 subjects and find that 56% of the control group and 57% of the experimental group recover, some people will call that a verdict of \"Not proven\".  I would call it an experimental verdict of \"Pharmaceutical X doesn't work well, if at all\".  Just because this verdict is theoretically retractable in the face of new evidence, doesn't make it ambiguous.
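
\n

To check that reading of the numbers: under the usual normal approximation, 56% versus 57% is statistically indistinguishable from chance.  The sketch below assumes the 1000 subjects were split evenly between the two groups, which the example does not actually specify:

```python
import math

def two_proportion_test(hits_a, n_a, hits_b, n_b):
    # Two-sided two-proportion z-test under the normal approximation.
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

z, p = two_proportion_test(280, 500, 285, 500)  # 56% vs 57% recovery
print(f'z = {z:.2f}, p = {p:.2f}')  # z ~ 0.32, p ~ 0.75: no detectable effect
```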

\n

In any case, right now you've got people dismissing cryonics out of hand as \"not scientific\", like it was some kind of pharmaceutical you could easily administer to 1000 patients and see what happened.  \"Call me when cryonicists actually revive someone,\" they say; which, as Mike Li observes, is like saying \"I refuse to get into this ambulance; call me when it's actually at the hospital\".  Maybe Martin Gardner warned them against believing in strange things without experimental evidence.  So they wait for the definite unmistakable verdict of Science, while their family and friends and 150,000 people per day are dying right now, and might or might not be savable—

\n

—a calculated bet you could only make rationally.

\n

The drive of Science is to obtain a mountain of evidence so huge that not even fallible human scientists can misread it.  But even that sometimes goes wrong, when people become confused about which theory predicts what, or bake extremely-hard-to-test components into an early version of their theory.  And sometimes you just can't get clear experimental evidence at all.

\n

Either way, you have to try to do the thing that Science doesn't trust anyone to do—think rationally, and figure out the answer before you get clubbed over the head with it.

\n

(Oh, and sometimes a disconfirming experimental result looks like:  \"Your entire species has just been wiped out!  You are now scientifically required to relinquish your theory.  If you publicly recant, good for you!  Remember, it takes a strong mind to give up strongly held beliefs.  Feel free to try another hypothesis next time!\")

" } }, { "_id": "5bJyRMZzwMov5u3hW", "title": "Science Doesn't Trust Your Rationality", "pageUrl": "https://www.lesswrong.com/posts/5bJyRMZzwMov5u3hW/science-doesn-t-trust-your-rationality", "postedAt": "2008-05-14T02:13:46.000Z", "baseScore": 72, "voteCount": 66, "commentCount": 136, "url": null, "contents": { "documentId": "5bJyRMZzwMov5u3hW", "html": "

Scott Aaronson suggests that Many-Worlds and libertarianism are similar in that they are both cases of bullet-swallowing, rather than bullet-dodging:

\n
\n

Libertarianism and MWI are both grand philosophical theories that start from premises that almost all educated people accept (quantum mechanics in the one case, Econ 101 in the other), and claim to reach conclusions that most educated people reject, or are at least puzzled by (the existence of parallel universes / the desirability of eliminating fire departments).

\n
\n

Now there's an analogy that would never have occurred to me.

\n

I've previously argued that Science rejects Many-Worlds but Bayes accepts it.  (Here, \"Science\" is capitalized because we are talking about the idealized form of Science, not just the actual social process of science.)

\n

It furthermore seems to me that there is a deep analogy between (small-'l') libertarianism and Science:

\n
    \n
  1. Both are based on a pragmatic distrust of reasonable-sounding arguments.
  2. Both try to build systems that are more trustworthy than the people in them.
  3. Both accept that people are flawed, and try to harness their flaws to power the system.
\n

\n

The core argument for libertarianism is historically motivated distrust of lovely theories of \"How much better society would be, if we just made a rule that said XYZ.\"  If that sort of trick actually worked, then more regulations would correlate with higher economic growth as society moved from local to global optima.  But when some person or interest group gets enough power to start doing everything they think is a good idea, history says that what actually happens is Revolutionary France or Soviet Russia.

\n

The plans that in lovely theory should have made everyone happy ever after, don't have the results predicted by reasonable-sounding arguments.  And power corrupts, and attracts the corrupt.

\n

So you regulate as little as possible, because you can't trust the lovely theories and you can't trust the people who implement them.

\n

You don't shake your finger at people for being selfish.  You try to build an efficient system of production out of selfish participants, by requiring transactions to be voluntary.  So people are forced to play positive-sum games, because that's how they get the other party to sign the contract.  With violence restrained and contracts enforced, individual selfishness can power a globally productive system.

\n

Of course none of this works quite so well in practice as in theory, and I'm not going to go into market failures, commons problems, etc.  The core argument for libertarianism is not that libertarianism would work in a perfect world, but that it degrades gracefully into real life.  Or rather, degrades less awkwardly than any other known economic principle.  (People who see Libertarianism as the perfect solution for perfect people, strike me as kinda missing the point of the \"pragmatic distrust\" thing.)

\n

Science first came to know itself as a rebellion against trusting the word of Aristotle. If the people of that revolution had merely said, \"Let us trust ourselves, not Aristotle!\" they would have flashed and faded like the French Revolution.

\n

But the Scientific Revolution lasted because—like the American Revolution—the architects propounded a stranger philosophy:  \"Let us trust no one!  Not even ourselves!\"

\n

In the beginning came the idea that we can't just toss out Aristotle's armchair reasoning and replace it with different armchair reasoning.  We need to talk to Nature, and actually listen to what It says in reply.  This, itself, was a stroke of genius.

\n

But then came the challenge of implementation. People are stubborn, and may not want to accept the verdict of experiment.  Shall we shake a disapproving finger at them, and say \"Naughty\"?

\n

No; we assume and accept that each individual scientist may be crazily attached to their personal theories.  Nor do we assume that anyone can be trained out of this tendency—we don't try to choose Eminent Judges who are supposed to be impartial.

\n

Instead, we try to harness the individual scientist's stubborn desire to prove their personal theory, by saying:  \"Make a new experimental prediction, and do the experiment.  If you're right, and the experiment is replicated, you win.\"  So long as scientists believe this is true, they have a motive to do experiments that can falsify their own theories.  Only by accepting the possibility of defeat is it possible to win.  And any great claim will require replication; this gives scientists a motive to be honest, on pain of great embarrassment.

\n

And so the stubbornness of individual scientists is harnessed to produce a steady stream of knowledge at the group level.  The System is somewhat more trustworthy than its parts.

\n

Libertarianism secretly relies on most individuals being prosocial enough to tip at a restaurant they won't ever visit again.  An economy of genuinely selfish human-level agents would implode.  Similarly, Science relies on most scientists not committing sins so egregious that they can't rationalize them away.

\n

To the extent that scientists believe they can promote their theories by playing academic politics—or game the statistical methods to potentially win without a chance of losing—or to the extent that nobody bothers to replicate claims—science degrades in effectiveness.  But it degrades gracefully, as such things go.

\n

The part where the successful predictions belong to the theory and theorists who originally made them, and cannot just be stolen by a theory that comes along later—without a novel experimental prediction—is an important feature of this social process.

\n

The final upshot is that Science is not easily reconciled with probability theory.  If you do a probability-theoretic calculation correctly, you're going to get the rational answer.  Science doesn't trust your rationality, and it doesn't rely on your ability to use probability theory as the arbiter of truth.  It wants you to set up a definitive experiment.

\n

Regarding Science as a mere approximation to some probability-theoretic ideal of rationality... would certainly seem to be rational.  There seems to be an extremely reasonable-sounding argument that Bayes's Theorem is the hidden structure that explains why Science works.  But to subordinate Science to the grand schema of Bayesianism, and let Bayesianism come in and override Science's verdict when that seems appropriate, is not a trivial step!


Science is built around the assumption that you're too stupid and self-deceiving to just use Solomonoff induction.  After all, if it was that simple, we wouldn't need a social process of science... right?


So, are you going to believe in faster-than-light quantum \"collapse\" fairies after all?  Or do you think you're smarter than that?

" } }, { "_id": "viPPjojmChxLGPE2v", "title": "The Dilemma: Science or Bayes?", "pageUrl": "https://www.lesswrong.com/posts/viPPjojmChxLGPE2v/the-dilemma-science-or-bayes", "postedAt": "2008-05-13T08:16:28.000Z", "baseScore": 59, "voteCount": 53, "commentCount": 190, "url": null, "contents": { "documentId": "viPPjojmChxLGPE2v", "html": "

\"Eli: You are writing a lot about physics recently.  Why?\"
        —Shane Legg (and several other people)


\"In light of your QM explanation, which to me sounds perfectly logical, it seems obvious and normal that many worlds is overwhelmingly likely. It just seems almost too good to be true that I now get what plenty of genius quantum physicists still can't. [...] Sure I can explain all that away, and I still think you're right, I'm just suspicious of myself for believing the first believable explanation I met.\"
        —Recovering irrationalist


RI, you've got no idea how glad I was to see you post that comment.


Of course I had more than just one reason for spending all that time posting about quantum physics.  I like having lots of hidden motives, it's the closest I can ethically get to being a supervillain.


But to give an example of a purpose I could only accomplish by discussing quantum physics...


In physics, you can get absolutely clear-cut issues.  Not in the sense that the issues are trivial to explain.  But if you try to apply Bayes to healthcare, or economics, you may not be able to formally lay out what is the simplest hypothesis, or what the evidence supports.  But when I say \"macroscopic decoherence is simpler than collapse\" it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code. Nor is the evidence itself in dispute.
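
To make the line-counting concrete, here is a toy sketch of the comparison (my illustration, not anything from the original argument; the state vector, the collapse trigger, and its threshold parameter are invented stand-ins).  The structural point: a collapse program must contain the entire unitary-evolution program as a subroutine, and then add more code on top of it.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_many_worlds(state, unitary):
    # Under pure decoherence this one line is the whole dynamical law:
    # the state vector evolves unitarily, and nothing else ever happens.
    return unitary @ state

def evolve_collapse(state, unitary, threshold=0.5):
    # A collapse theory keeps the same law...
    state = unitary @ state
    # ...then adds extra machinery: a trigger condition, a preferred
    # basis, and a non-unitary projection, with the Born probabilities
    # inserted by hand rather than derived from anything.
    if 1.0 - np.max(np.abs(state) ** 2) > threshold:  # superposition too large
        probs = np.abs(state) ** 2
        probs /= probs.sum()
        survivor = rng.choice(len(state), p=probs)    # Born rule, by fiat
        state = np.zeros_like(state)
        state[survivor] = 1.0                         # delete every other branch
    return state
```

Whatever you think of the physics, evolve_collapse is strictly longer than evolve_many_worlds, and that containment is the sense in which the simplicity comparison is strict.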


I wanted a very clear example—Bayes says \"zig\", this is a zag—when it came time to break your allegiance to Science.


\"Oh, sure,\" you say, \"the physicists messed up the many-worlds thing, but give them a break, Eliezer!  No one ever claimed that the social process of science was perfect.  People are human; they make mistakes.\"


But the physicists who refuse to adopt many-worlds aren't disobeying the rules of Science.  They're obeying the rules of Science.


The tradition handed down through the generations says that a new physics theory comes up with new experimental predictions that distinguish it from the old theory.  You perform the test, and the new theory is confirmed or falsified.  If it's confirmed, you hold a huge celebration, call the newspapers, and hand out Nobel Prizes for everyone; any doddering old emeritus professors who refuse to convert are quietly humored.  If the theory is disconfirmed, the lead proponent publicly recants, and gains a reputation for honesty.


This is not how things do work in science; rather it is how things are supposed to work in Science.  It's the ideal to which all good scientists aspire.


Now many-worlds comes along, and it doesn't seem to make any new predictions relative to the old theory.  That's suspicious.  And there's all these other worlds, but you can't see them.  That's really suspicious.  It just doesn't seem scientific.


If you got as far as RI—so that many-worlds now seems perfectly logical, obvious and normal—and you also started out as a Traditional Rationalist, then you should be able to switch back and forth between the Scientific view and the Bayesian view, like a Necker Cube.


So now put on your Science Goggles—you've still got them around somewhere, right?  Forget everything you know about Kolmogorov complexity, Solomonoff induction or Minimum Message Lengths.  That's not part of the traditional training.  You just eyeball something to see how \"simple\" it looks.  The word \"testable\" doesn't conjure up a mental image of Bayes's Theorem governing probability flows; it conjures up a mental image of being in a lab, performing an experiment, and having the celebration (or public recantation) afterward.


Science-Goggles on:  The current quantum theory has passed all experimental tests so far.  Many-Worlds doesn't make any new testable predictions—the amazing new phenomena it predicts are all hidden away where we can't see them.  You can get along fine without supposing the other worlds, and that's just what you should do.  The whole thing smacks of science fiction.  But it must be admitted that quantum physics is a very deep and very confusing issue, and who knows what discoveries might be in store?  Call me when Many-Worlds makes a testable prediction.


Science-Goggles off, Bayes-Goggles back on:


Bayes-Goggles on:  The simplest quantum equations that cover all known evidence don't have a special exception for human-sized masses.  There isn't even any reason to ask that particular question.  Next!


Okay, so is this a problem we can fix in five minutes with some duct tape and superglue?


No.


Huh?  Why not just teach new graduating classes of scientists about Solomonoff induction and Bayes's Rule?


Centuries ago, there was a widespread idea that the Wise could unravel the secrets of the universe just by thinking about them, while to go out and look at things was lesser, inferior, naive, and would just delude you in the end.  You couldn't trust the way things looked—only thought could be your guide.


Science began as a rebellion against this Deep Wisdom.  At the core is the pragmatic belief that human beings, sitting around in their armchairs trying to be Deeply Wise, just drift off into never-never land.  You couldn't trust your thoughts.  You had to make advance experimental predictions—predictions that no one else had made before—run the test, and confirm the result.  That was evidence.  Sitting in your armchair, thinking about what seemed reasonable... would not be taken to prejudice your theory, because Science wasn't an idealistic belief about pragmatism, or getting your hands dirty.  It was, rather, the dictum that experiment alone would decide.  Only experiments could judge your theory—not your nationality, or your religious professions, or the fact that you'd invented the theory in your armchair.  Only experiments!  If you sat in your armchair and came up with a theory that made a novel prediction, and experiment confirmed the prediction, then we would care about the result of the experiment, not where your hypothesis came from.


That's Science.  And if you say that Many-Worlds should replace the immensely successful Copenhagen Interpretation, adding on all these twin Earths that can't be observed, just because it sounds more reasonable and elegant—not because it crushed the old theory with a superior experimental prediction—then you're undoing the core scientific rule that prevents people from running out and putting angels into all the theories, because angels are more reasonable and elegant.


You think teaching a few people about Solomonoff induction is going to solve that problem?  Nobel laureate Robert Aumann—who first proved that Bayesian agents with similar priors cannot agree to disagree—is a believing Orthodox Jew.  Aumann helped a project to test the Torah for \"Bible codes\", hidden prophecies from God—and concluded that the project had failed to confirm the codes' existence.  Do you want Aumann thinking that once you've got Solomonoff induction, you can forget about the experimental method?  Do you think that's going to help him?  And most scientists out there will not rise to the level of Robert Aumann.


Okay, Bayes-Goggles back on.  Are you really going to believe that large parts of the wavefunction disappear when you can no longer see them?  As a result of the only non-linear non-unitary non-differentiable non-CPT-symmetric acausal faster-than-light informally-specified phenomenon in all of physics?  Just because, by sheer historical contingency, the stupid version of the theory was proposed first?


Are you going to make a major modification to a scientific model, and believe in zillions of other worlds you can't see, without a defining moment of experimental triumph over the old model?


Or are you going to reject probability theory?


Will you give your allegiance to Science, or to Bayes?


Michael Vassar once observed (tongue-in-cheek) that it was a good thing that a majority of the human species believed in God, because otherwise, he would have a very hard time rejecting majoritarianism. But since the majority opinion that God exists is simply unbelievable, we have no choice but to reject the extremely strong philosophical arguments for majoritarianism.


You can see (one of the reasons) why I went to such lengths to explain quantum theory.  Those who are good at math should now be able to visualize both macroscopic decoherence, and the probability theory of simplicity and testability—get the insanity of a global single world on a gut level.


I wanted to present you with a nice, sharp dilemma between rejecting the scientific method, or embracing insanity.


Why?  I'll give you a hint:  It's not just because I'm evil.  If you would guess my motives here, think beyond the first obvious answer.


PS:  If you try to come up with clever ways to wriggle out of the dilemma, you're just going to get shot down in future posts.  You have been warned.

" } }, { "_id": "ZxR8P8hBFQ9kC8wMy", "title": "The Failures of Eld Science", "pageUrl": "https://www.lesswrong.com/posts/ZxR8P8hBFQ9kC8wMy/the-failures-of-eld-science", "postedAt": "2008-05-12T10:32:06.000Z", "baseScore": 110, "voteCount": 97, "commentCount": 56, "url": null, "contents": { "documentId": "ZxR8P8hBFQ9kC8wMy", "html": "

This time there were no robes, no hoods, no masks.  Students were expected to become friends, and allies.  And everyone knew why you were in the classroom.  It would have been pointless to pretend you weren't in the Conspiracy.


Their sensei was Jeffreyssai, who might have been the best of his era, in his era.  His students were either the most promising learners, or those whom the beisutsukai saw political advantage in molding.


Brennan fell into the latter category, and knew it.  Nor had he hesitated to use his Mistress's name to open doors.  You used every avenue available to you, in seeking knowledge; that was respected here.


\"—for over thirty years,\" Jeffreyssai said.  \"Not one of them saw it; not Einstein, not Schrödinger, not even von Neumann.\"  He turned away from his sketcher, and toward the classroom.  \"I pose to you to the question:  How did they fail?\"


The students exchanged quick glances, a calculus of mutual risk between the wary and the merely baffled.  Jeffreyssai was known to play games.


Finally Hiriwa-called-the-Black leaned forward, jangling slightly as her equation-carved bracelets shifted on her ankles.  \"By your years given, sensei, this was two hundred and fifty years after Newton.  Surely, the scientists of that era must have grokked the concept of a universal law.\"


\"Knowing the universal law of gravity,\" said the student Taji, from a nearby seat, \"is not the same as understanding the concept of a universal law.\" He was one of the promising ones, as was Hiriwa.


Hiriwa frowned.  \"No... it was said that Newton had been praised for discovering the first universal.  Even in his own era.  So it was known.\"  Hiriwa paused.  \"But Newton himself would have been gone.  Was there a religious injunction against proposing further universals?  Did they refrain out of respect for Newton, or were they waiting for his ghost to speak?  I am not clear on how Eld science was motivated—\"


\"No,\" murmured Taji, a laugh in his voice, \"you really, really aren't.\"


Jeffreyssai's expression was kindly.  \"Hiriwa, it wasn't religion, and it wasn't lead in the drinking water, and they didn't all have Alzheimer's, and they weren't sitting around all day reading webcomics.  Forget the catalogue of horrors out of ancient times. Just think in terms of cognitive errors.  What could Eld science have been thinking wrong?\"


Hiriwa sat back with a sigh. \"Sensei, I truly cannot imagine a snafu that would do that.\"


\"It wouldn't be just one mistake,\" Taji corrected her.  \"As the saying goes:  Mistakes don't travel alone; they hunt in packs.\"


\"But the entire human species?\" said Hiriwa.  \"Thirty years?\"


\"It wasn't the entire human species, Hiriwa,\" said Styrlyn. He was one of the older-looking students, wearing a short beard speckled in grey.  \"Maybe one in a hundred thousand could have written out Schrödinger's Equation from memory.  So that would have been their first and primary error—failure to concentrate their forces.\"


\"Spare us the propaganda!\" Jeffreyssai's gaze was suddenly fierce.  \"You are not here to proselytize for the Cooperative Conspiracy, my lord politician!  Bend not the truth to make your points!  I believe your Conspiracy has a phrase:  'Comparative advantage.'  Do you really think that it would have helped to call in the whole human species, as it existed at that time, to debate quantum physics?\"


Styrlyn didn't flinch.  \"Perhaps not, sensei,\" he said.  \"But if you are to compare that era to this one, it is a consideration.\"


Jeffreyssai moved his hand flatly through the air; the maybe-gesture he used to dismiss an argument that was true but not relevant.  \"It is not what I would call a primary mistake.  The puzzle should not have required a billion physicists to solve.\"


\"I can think of more specific ancient horrors,\" said Taji. \"Spending all day writing grant proposals.  Teaching undergraduates who would rather be somewhere else.  Needing to publish thirty papers a year to get tenure...\"


\"But we are not speaking of only the lower-status scientists,\" said Yin; she wore a slightly teasing grin.  \"It was said of Schrödinger that he retired to a villa for a month, with his mistress to provide inspiration, and emerged with his eponymous equation.  We consider it a famous historical success of our methodology.  Some Eld physicists did understand how to focus their mental energies; and would have been senior enough to do so, had they chose.\"


\"True,\" Taji said.  \"In the end, administrative burdens are only a generic obstacle.  Likewise such answers as, 'They were not trained in probability theory, and did not know of cognitive biases.'  Our sensei seems to desire some more specific reply.\"


Jeffreyssai lifted an eyebrow encouragingly.  \"Don't dismiss your line of thought so quickly, Taji; it begins to be relevant.  What kind of system would create administrative burdens on its own people?\"


\"A system that failed to support its people adequately,\" said Styrlyn.  \"One that failed to value their work.\"


\"Ah,\" said Jeffreyssai.  \"But there is a student who has not yet spoken.  Brennan?\"


Brennan didn't jump.  He deliberately waited just long enough to show he wasn't scared, and then said, \"Lack of pragmatic motivation, sensei.\"


Jeffreyssai smiled slightly. \"Expand.\"


What kind of system would create administrative burdens on its own people?, their sensei had asked them.  The other students were pursuing their own lines of thought. Brennan, hanging back, had more attention to spare for his teacher's few hints.  Being the beginner wasn't always a disadvantage—and he had been taught, long before the Bayesians took him in, to take every available advantage.


\"The Manhattan Project,\" Brennan said, \"was launched with a specific technological end in sight: a weapon of great power, in time of war.  But the error that Eld Science committed with respect to quantum physics had no immediate consequences for their technology. They were confused, but they had no desperate need for an answer.  Otherwise the surrounding system would have removed all burdens from their effort to solve it.  Surely the Manhattan Project must have done so—Taji?  Do you know?\"


Taji looked thoughtful.  \"Not all burdens—but I'm pretty sure they weren't writing grant proposals in the middle of their work.\"


\"So,\" Jeffreyssai said.  He advanced a few steps, stood directly in front of Brennan's desk.  \"You think Eld scientists simply weren't trying hard enough.  Because their art had no military applications?  A rather competitive point of view, I should think.\"


\"Not necessarily,\" Brennan said calmly.  \"Pragmatism is a virtue of rationality also.  A desired use for a better quantum theory, would have helped the Eld scientists in many ways beyond just motivating them.  It would have given shape to their curiosity, and told them what constituted success or failure.\"


Jeffreyssai chuckled slightly.  \"Don't guess so hard what I might prefer to hear, Competitor.  Your first statement came closer to my hidden mark; your oh-so-Bayesian disclaimer fell wide...  The factor I had in mind, Brennan, was that Eld scientists thought it was acceptable to take thirty years to solve a problem.  Their entire social process of science was based on getting to the truth eventually. A wrong theory got discarded eventually—once the next generation of students grew up familiar with the replacement.  Work expands to fill the time allotted, as the saying goes.  But people can think important thoughts in far less than thirty years, if they expect speed of themselves.\"  Jeffreyssai suddenly slammed down a hand on the arm of Brennan's chair.  \"How long do you have to dodge a thrown knife?\"


\"Very little time, sensei!\"


\"Less than a second!  Two opponents are attacking you!  How long do you have to guess who's more dangerous?\"


\"Less than a second, sensei!\"


\"The two opponents have split up and are attacking two of your girlfriends!  How long do you have to decide which one you truly love?\"


\"Less than a second, sensei!'


\"A new argument shows your precious theory is flawed!  How long does it take you to change your mind?\"


\"Less than a second, sensei!\"


\"WRONG! DON'T GIVE ME THE WRONG ANSWER JUST BECAUSE IT FITS A CONVENIENT PATTERN AND I SEEM TO EXPECT IT OF YOU!  How long does it really take, Brennan?\"


Sweat was forming on Brennan's back, but he stopped and actually thought about it—


\"ANSWER, BRENNAN!\"


\"No sensei!  I'm not finished thinking sensei!  An answer would be premature!  Sensei!\"


\"Very good!  Continue!  But don't take thirty years!\"


Brennan breathed deeply, reforming his thoughts.  He finally said, \"Realistically, sensei, the best-case scenario is that I would see the problem immediately; use the discipline of suspending judgment; try to re-accumulate all the evidence before continuing; and depending on how emotionally attached I had been to the theory, use the crisis-of-belief technique to ensure I could genuinely go either way.  So at least five minutes and perhaps up to an hour.\"


\"Good!  You actually thought about it that time!  Think about it every time!  Break patterns!  In the days of Eld Science, Brennan, it was not uncommon for a grant agency to spend six months reviewing a proposal.  They permitted themselves the time!  You are being graded on your speed, Brennan!  The question is not whether you get there eventually!  Anyone can find the truth in five thousand years!  You need to move faster!\"


\"Yes, sensei!\"


\"Now, Brennan, have you just learned something new?\"


\"Yes, sensei!\"


\"How long did it take you to learn this new thing?\"


An arbitrary choice there...  \"Less than a minute, sensei, from the boundary that seems most obvious.\"


\"Less than a minute,\" Jeffreyssai repeated.  \"So, Brennan, how long do you think it should take to solve a major scientific problem, if you are not wasting any time?\"


Now there was a trapped question if Brennan had ever heard one.  There was no way to guess what time period Jeffreyssai had in mind—what the sensei would consider too long, or too short.  Which meant that the only way out was to just try for the genuine truth; this would offer him the defense of honesty, little defense though it was.  \"One year, sensei?\"


\"Do you think it could be done in one month, Brennan?  In a case, let us stipulate, where in principle you already have enough experimental evidence to determine an answer, but not so much experimental evidence that you can afford to make errors in interpreting it.\"


Again, no way to guess which answer Jeffreyssai might want... \"One month seems like an unrealistically short time to me, sensei.\"


\"A short time?\" Jeffreyssai said incredulously.  \"How many minutes in thirty days?  Hiriwa?\"


\"43200, sensei,\" she answered.  \"If you assume sixteen-hour waking periods and daily sleep, then 28800 minutes.\"


\"Assume, Brennan, that it takes five whole minutes to think an original thought, rather than learning it from someone else.  Does even a major scientific problem require 5760 distinct insights?\"


\"I confess, sensei,\" Brennan said slowly, \"that I have never thought of it that way before... but do you tell me that is truly a realistic level of productivity?\"


\"No,\" said Jeffreyssai, \"but neither is it realistic to think that a single problem requires 5760 insights.  And yes, it has been done.\"


Jeffreyssai stepped back, and smiled benevolently.  Every student in the room stiffened; they knew that smile.  \"Though none of you hit the particular answer that I had in mind, nonetheless your answers were as reasonable as mine.  Except Styrlyn's, I'm afraid.  Even Hiriwa's answer was not entirely wrong: the task of proposing new theories was once considered a sacred duty reserved for those of high status, there being a limited supply of problems in circulation, at that time.  But Brennan's answer is particularly interesting, and I am minded to test his theory of motivation.\"


Oh, hell, Brennan said silently to himself.  Jeffreyssai was gesturing for Brennan to stand up before the class.


When Brennan had risen, Jeffreyssai neatly seated himself in Brennan's chair.


\"Brennan-sensei,\" Jeffreyssai said, \"you have five minutes to think of something stunningly brilliant to say about the failure of Eld science on quantum physics.  As for the rest of us, our job will be to gaze at you expectantly.  I can only imagine how embarrassing it will be, should you fail to think of anything good.\"


Bastard. Brennan didn't say it aloud.  Taji's face showed a certain amount of sympathy; Styrlyn held himself aloof from the game; but Yin was looking at him with sardonic interest.  Worse, Hiriwa was gazing at him expectantly, assuming that he would rise to the challenge.  And Jeffreyssai was gawking wide-eyed, waiting for the guru's words of wisdom.  Screw you, sensei.


Brennan didn't panic.  It was very, very, very far from being the scariest situation he'd ever faced.  He took a moment to decide how to think; then thought.


At four minutes and thirty seconds, Brennan spoke.  (There was an art to such things; as long as you were doing it anyway, you might as well make it look easy.)


\"A woman of wisdom,\" Brennan said, \"once told me that it is wisest to regard our past selves as fools beyond redemption—to see the people we once were as idiots entire.  I do not necessarily say this myself; but it is what she said to me, and there is more than a grain of truth in it.  As long as we are making excuses for the past, trying to make it look better, respecting it, we cannot make a clean break.  It occurs to me that the rule may be no different for human civilizations.  So I tried looking back and considering the Eld scientists as simple fools.\"


\"Which they were not,\" Jeffreyssai said.


\"Which they were not,\" Brennan continued.  \"In terms of raw intelligence, they undoubtedly exceeded me.  But it occurred to me that a difficulty in seeing what Eld scientists did wrong, might have been in respecting the ancient and legendary names too highly.  And that did indeed produce an insight.\"


\"Enough introduction, Brennan,\" said Jeffreyssai.  \"If you found an insight, state it.\"


\"Eld scientists were not trained...\"  Brennan paused.  \"No, untrained is not the concept.  They were trained for the wrong task.  At that time, there were no Conspiracies, no secret truths; as soon as Eld scientists solved a major problem, they published the solution to the world and each other.  Truly scary and confusing open problems would have been in extremely rare supply, and used up the moment they were solved.  So it would not have been possible to train Eld researchers to bring order out of scientific chaos.  They would have been trained for something else—I'm not sure what—\"


\"Trained to manipulate whatever science had already been discovered,\" said Taji.  \"It was a difficult enough task for Eld teachers to train their students to use existing knowledge, or follow already-known methodologies; that was all Eld science teachers aspired to impart.\"


Brennan nodded.  \"Which is a very different matter from creating new science of their own.  The Eld scientists faced with problems of quantum theory, might never have faced that kind of fear before—the dismay of not knowing.  The Eld scientists might have seized on unsatisfactory answers prematurely, because they were accustomed to working with a neat, agreed-upon body of knowledge.\"


\"Good, Brennan,\" murmured Jeffreyssai.


\"But above all,\" Brennan continued, \"an Eld scientist couldn't have practiced the actual problem the quantum scientists faced—that of resolving a major confusion.  It was something you did once per lifetime if you were lucky, and as Hiriwa observed, Newton would no longer have been around.  So while the Eld physicists who messed up quantum theory were not unintelligent, they were, in a strong sense, amateurs—ad-libbing the whole process of paradigm shift.\"


\"And no probability theory,\" Hiriwa noted.  \"So anyone who did succeed at the problem would have no idea what they'd just done.  They wouldn't be able to communicate it to anyone else, except vaguely.\"


\"Yes,\" Styrlyn said.  \"And it was only a handful of people who could tackle the problem at all, with no training in doing so; those are the physicists whose names have passed down to us.  A handful of people, making a handful of discoveries each.  It would not have been enough to sustain a community.  Each Eld scientist tackling a new paradigm shift would have needed to rediscover the rules from scratch.\"


Jeffreyssai rose from Brennan's desk.  \"Acceptable, Brennan; you surprise me, in fact. I shall have to give further thought to this method of yours.\"  Jeffreyssai went to the classroom door, then looked back.  \"However, I did have in mind at least one other major flaw of Eld science, which none of you suggested.  I expect to receive a list of possible flaws tomorrow.  I expect the flaw I have in mind to be on the list. You have 480 minutes, excluding sleep time.  I see five of you here.  The challenge does not require more than 480 insights to solve, nor more than 96 insights in series.\"


And Jeffreyssai left the room.

" } }, { "_id": "S8ysHqeRGuySPttrS", "title": "Many Worlds, One Best Guess", "pageUrl": "https://www.lesswrong.com/posts/S8ysHqeRGuySPttrS/many-worlds-one-best-guess", "postedAt": "2008-05-11T08:32:18.000Z", "baseScore": 54, "voteCount": 47, "commentCount": 82, "url": null, "contents": { "documentId": "S8ysHqeRGuySPttrS", "html": "

If you look at many microscopic physical phenomena—a photon, an electron, a hydrogen atom, a laser—and a million other known experimental setups—it is possible to come up with simple laws that seem to govern all small things (so long as you don’t ask about gravity). These laws govern the evolution of a highly abstract and mathematical object that I’ve been calling the “amplitude distribution,” but which is more widely referred to as the “wavefunction.”
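
For reference (standard physics, not specific to this essay), the simple law in question is the Schrödinger equation, under which the wavefunction evolves linearly and deterministically:

$$
i\hbar \, \frac{\partial}{\partial t} \psi(x, t) \;=\; \hat{H} \, \psi(x, t)
$$

The question throughout is whether this one rule, known to hold for microscopic systems, also governs macroscopic ones.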

Now there are gruesome questions about the proper generalization that covers all these tiny cases. Call an object “grue” if it appears green before January 1, 2020 and appears blue thereafter. If all emeralds examined so far have appeared green, is the proper generalization, “Emeralds are green” or “Emeralds are grue”?

The answer is that the proper generalization is “Emeralds are green.” I’m not going to go into the arguments at the moment. It is not the subject of this essay, and the obvious answer in this case happens to be correct. The true Way is not stupid: however clever you may be with your logic, it should finally arrive at the right answer rather than a wrong one.

In a similar sense, the simplest generalizations that would cover observed microscopic phenomena alone take the form of “All electrons have spin 1/2” and not “All electrons have spin 1/2 before January 1, 2020” or “All electrons have spin 1/2 unless they are part of an entangled system that weighs more than 1 gram.”

When we turn our attention to macroscopic phenomena, our sight is obscured. We cannot experiment on the wavefunction of a human in the way that we can experiment on the wavefunction of a hydrogen atom. In no case can you actually read off the wavefunction with a little quantum scanner. But in the case of, say, a human, the size of the entire organism defeats our ability to perform precise calculations or precise experiments—we cannot confirm that the quantum equations are being obeyed in precise detail.

We know that phenomena commonly thought of as “quantum” do not just disappear when many microscopic objects are aggregated. Lasers put out a flood of coherent photons, rather than, say, doing something completely different. Atoms have the chemical characteristics that quantum theory says they should, enabling them to aggregate into the stable molecules making up a human.

So in one sense, we have a great deal of evidence that quantum laws are aggregating to the macroscopic level without too much difference. Bulk chemistry still works.

But we cannot directly verify that the particles making up a human have an aggregate wavefunction that behaves exactly the way the simplest quantum laws say. Oh, we know that molecules and atoms don’t disintegrate, we know that macroscopic mirrors still reflect from the middle. We can get many high-level predictions from the assumption that the microscopic and the macroscopic are governed by the same laws, and every prediction tested has come true.

But if someone were to claim that the macroscopic quantum picture differs from the microscopic one in some as-yet-untestable detail—something that only shows up at the unmeasurable 20th decimal place of microscopic interactions, but aggregates into something bigger for macroscopic interactions—well, we can’t prove they’re wrong. It is Occam’s Razor that says, “There are zillions of new fundamental laws you could postulate in the 20th decimal place; why are you even thinking about this one?”

If we calculate using the simplest laws which govern all known cases, we find that humans end up in states of quantum superposition, just like photons in a superposition of reflecting from and passing through a half-silvered mirror. In the Schrödinger’s Cat setup, an unstable atom goes into a superposition of disintegrating, and not-disintegrating. A sensor, tuned to the atom, goes into a superposition of triggering and not-triggering. (Actually, the superposition is now a joint state of [atom-disintegrated × sensor-triggered] + [atom-stable × sensor-not-triggered].) A charge of explosives, hooked up to the sensor, goes into a superposition of exploding and not exploding; a cat in the box goes into a superposition of being dead and alive; and a human, looking inside the box, goes into a superposition of throwing up and being calm. The same law at all levels.
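
Written out in state-vector notation (my notation, not the essay’s), the chain just described is repeated application of linearity; each system that interacts becomes one more factor in each term, and no term ever vanishes:

$$
|\Psi\rangle = \alpha \, |\text{decayed}\rangle |\text{triggered}\rangle |\text{exploded}\rangle |\text{dead cat}\rangle |\text{sees dead cat}\rangle + \beta \, |\text{stable}\rangle |\text{silent}\rangle |\text{intact}\rangle |\text{live cat}\rangle |\text{sees live cat}\rangle
$$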

Human beings who interact with superposed systems will themselves evolve into superpositions. But the brain that sees the exploded cat, and the brain that sees the living cat, will have many neurons firing differently, and hence many, many particles in different positions. They are very distant in the configuration space, and will communicate to an exponentially infinitesimal degree. Not the 30th decimal place, but the 10^30th decimal place. No particular mind, no particular cognitive causal process, sees a blurry superposition of cats.

The fact that “you” only seem to see the cat alive, or the cat dead, is exactly what the simplest quantum laws predict. So we have no reason to believe, from our experience so far, that the quantum laws are in any way different at the macroscopic level than the microscopic level.

And physicists have verified superposition at steadily larger levels. Apparently an effort is currently underway to test superposition in a 50-micron object, larger than most neurons.

The existence of other versions of ourselves, and indeed other Earths, is not supposed additionally. We are simply supposing that the same laws govern at all levels, having no reason to suppose differently, and all experimental tests having succeeded so far. The existence of other decoherent Earths is a logical consequence of the simplest generalization that fits all known facts. If you think that Occam’s Razor says that the other worlds are “unnecessary entities” being multiplied, then you should check the probability-theoretic math; that is just not how Occam’s Razor works.
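
For readers who want the math being pointed at: the probability-theoretic form of Occam’s Razor is, roughly, the Solomonoff/minimum-description-length prior (standard material, supplied here for reference),

$$
P(H) \;\propto\; 2^{-K(H)}
$$

where K(H) is the length of the shortest program that generates hypothesis H’s predictions. The penalty attaches to the length of the laws, not to the volume of stuff the laws produce, so a program whose output contains many worlds is not thereby longer. And since unitary-evolution-plus-collapse contains unitary evolution as a subprogram plus extra code while predicting nothing additional, it receives a strictly smaller prior.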

Yet there is one particular puzzle that seems odd in trying to extend microscopic laws universally, including to superposed humans:

If we try to get probabilities by counting the number of distinct observers, then there is no obvious reason why the integrated squared modulus of the wavefunction should correlate with statistical experimental results. There is no known reason for the Born probabilities, and it even seems that, a priori, we would expect a 50/50 probability of any binary quantum experiment going both ways, if we just counted observers.
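
For reference, the Born rule at issue says (standard statement, not specific to this essay) that if the decoherent branches have amplitudes ψ_i, the observed long-run frequency of outcome i is

$$
P(i) \;=\; \frac{|\psi_i|^2}{\sum_j |\psi_j|^2}
$$

whereas the naive observer-counting just described would assign each branch of a binary experiment probability 1/2, no matter how unequal the amplitudes.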

Robin Hanson suggests that if exponentially tinier-than-average decoherent blobs of amplitude (“worlds”) are interfered with by exponentially tiny leakages from larger blobs, we will get the Born probabilities back out. I consider this an interesting possibility, because it is so normal.

(I myself have had recent thoughts along a different track: If I try to count observers the obvious way, I get strange-seeming results in general, not just in the case of quantum physics. If, for example, I split my brain into a trillion similar parts, conditional on winning the lottery while anesthetized; allow my selves to wake up and perhaps differ to small degrees from each other; and then merge them all into one self again; then counting observers the obvious way says I should be able to make myself win the lottery (if I can split my brain and merge it, as an uploaded mind might be able to do).

In this connection, I find it very interesting that the Born rule does not have a split-remerge problem. Given unitary quantum physics, Born’s rule is the unique rule that prevents “observers” from having psychic powers—which doesn’t explain Born’s rule, but is certainly an interesting fact. Given Born’s rule, even splitting and remerging worlds would still lead to consistent probabilities. Maybe physics uses better anthropics than I do!

Perhaps I should take my cues from physics, instead of trying to reason it out a priori, and see where that leads me? But I have not been led anywhere yet, so this is hardly an “answer.”)

Wallace, Deutsch, and others try to derive Born’s Rule from decision theory. I am rather suspicious of this, because it seems like there is a component of “What happens to me?” that I cannot alter by modifying my utility function. Even if I didn’t care at all about worlds where I didn’t win a quantum lottery, it still seems to me that there is a sense in which I would “mostly” wake up in a world where I didn’t win the lottery. It is this that I think needs explaining.

The point is that many hypotheses about the Born probabilities have been proposed. Not as many as there should be, because the mystery was falsely marked “solved” for a long time. But still, there have been many proposals.

There is legitimate hope of a solution to the Born puzzle without new fundamental laws. Your world does not split into exactly two new subprocesses on the exact occasion when you see “absorbed” or “transmitted” on the LCD screen of a photon sensor. We are constantly being superposed and decohered, all the time, sometimes along continuous dimensions—though brains are digital and involve whole neurons firing, and fire/not-fire would be an extremely decoherent state even of a single neuron… There would seem to be room for something unexpected to account for the Born statistics—a better understanding of the anthropic weight of observers, or a better understanding of the brain’s superpositions—without new fundamentals.

We cannot rule out, though, the possibility that a new fundamental law is involved in the Born statistics.

As Jess Riedel puts it:

If there’s one lesson we can take from the history of physics, it’s that every time new experimental “regimes” are probed (e.g. large velocities, small sizes, large mass densities, large energies), phenomena are observed which lead to new theories (Special Relativity, quantum mechanics, General Relativity, and the Standard Model, respectively).

“Every time” is too strong. A nitpick, yes, but also an important point: you can’t just assume that any particular law will fail in a new regime. But it’s possible that a new fundamental law is involved in the Born statistics, and that this law manifests only in the 20th decimal place at microscopic levels (hence being undetectable so far) while aggregating to have substantial effects at macroscopic levels.

Could there be some law, as yet undiscovered, that causes there to be only one world?

This is a shocking notion; it implies that all our twins in the other worlds—all the different versions of ourselves that are constantly split off, not just by human researchers doing quantum measurements, but by ordinary entropic processes—are actually gone, leaving us alone! This version of Earth would be the only version that exists in local space! If the inflationary scenario in cosmology turns out to be wrong, and the topology of the universe is both finite and relatively small—so that Earth does not have the distant duplicates that would be implied by an exponentially vast universe—then this Earth could be the only Earth that exists anywhere, a rather unnerving thought!

But it is dangerous to focus too much on specific hypotheses that you have no specific reason to think about. This is the same root error of the Intelligent Design folk, who pick any random puzzle in modern genetics, and say, “See, God must have done it!” Why “God,” rather than a zillion other possible explanations?—which you would have thought of long before you postulated divine intervention, if not for the fact that you secretly started out already knowing the answer you wanted to find.

You shouldn’t even ask, “Might there only be one world?” but instead just go ahead and do physics, and raise that particular issue only if new evidence demands it.

Could there be some as-yet-unknown fundamental law, that gives the universe a privileged center, which happens to coincide with Earth—thus proving that Copernicus was wrong all along, and the Bible right?

Asking that particular question—rather than a zillion other questions in which the center of the universe is Proxima Centauri, or the universe turns out to have a favorite pizza topping and it is pepperoni—betrays your hidden agenda. And though an unenlightened one might not realize it, giving the universe a privileged center that follows Earth around through space would be rather difficult to do with any mathematically simple fundamental law.

So too with asking whether there might be only one world. It betrays a sentimental attachment to human intuitions already proven wrong. The wheel of science turns, but it doesn’t turn backward.

We have specific reasons to be highly suspicious of the notion of only one world. The notion of “one world” exists on a higher level of organization, like the location of Earth in space; on the quantum level there are no firm boundaries (though brains that differ by entire neurons firing are certainly decoherent). How would a fundamental physical law identify one high-level world?

Much worse, any physical scenario in which there was a single surviving world, so that any measurement had only a single outcome, would violate Special Relativity.

If the same laws are true at all levels—i.e., if many-worlds is correct—then when you measure one of a pair of entangled polarized photons, you end up in a world in which the photon is polarized, say, up-down, and alternate versions of you end up in worlds where the photon is polarized left-right. From your perspective before doing the measurement, the probabilities are 50/50. Light-years away, someone measures the other photon at a 20° angle to your own basis. From their perspective, too, the probability of getting either immediate result is 50/50—they maintain an invariant state of generalized entanglement with your faraway location, no matter what you do. But when the two of you meet, years later, your probability of meeting a friend who got the same result is 11.6%, rather than 50%.
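
One reconstruction of the quoted figure, on the assumption (mine, not stated in the essay) that the pair is prepared to disagree with certainty at identical analyzer angles: quantum mechanics then predicts an agreement probability of sin²θ for analyzers offset by θ, so

$$
P(\text{same result}) \;=\; \sin^2(20^\circ) \;\approx\; 0.117
$$

close to the 11.6% quoted above; at a 45° offset it would be exactly 50%.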

If there is only one global world, then there is only a single outcome of any quantum measurement. Either you measure the photon polarized up-down, or left-right, but not both. Light-years away, someone else’s probability of measuring the photon polarized similarly in a 20° rotated basis actually changes from 50/50 to 11.6%.

You cannot possibly interpret this as a case of merely revealing properties that were already there; this is ruled out by Bell’s Theorem. There does not seem to be any possible consistent view of the universe in which both quantum measurements have a single outcome, and yet both measurements are predetermined, neither influencing the other. Something has to actually change, faster than light.
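
The standard quantitative form of that theorem is the CHSH inequality (supplied here for reference): if every measurement has a single outcome fixed by local information, deterministic or random, then for any analyzer settings a, a′ on one side and b, b′ on the other, the outcome correlations must satisfy

$$
|E(a,b) - E(a,b') + E(a',b) + E(a',b')| \;\le\; 2
$$

while quantum mechanics predicts, and experiments with entangled photons confirm, values as large as 2√2 ≈ 2.83, which no single-outcome local account can reproduce.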

And this would appear to be a fully general objection, not just to collapse theories, but to any possible theory that gives us one global world! There is no consistent view in which measurements have single outcomes, but are locally determined (even locally randomly determined). Some mysterious influence has to cross a spacelike gap.

This is not a trivial matter. You cannot save yourself by waving your hands and saying, “the influence travels backward in time to the entangled photons’ creation, then forward in time to the other photon, so it never actually crosses a spacelike gap.” (This view has been seriously put forth, which gives you some idea of the magnitude of the paradox implied by one global world!) One measurement has to change the other, so which measurement happens first? Is there a global space of simultaneity? You can’t have both measurements happen “first” because under Bell’s Theorem, there’s no way local information could account for observed results, etc.

Incidentally, this experiment has already been performed, and if there is a mysterious influence it would have to travel six million times as fast as light in the reference frame of the Swiss Alps. Also, the mysterious influence has been experimentally shown not to care if the two photons are measured in reference frames which would cause each measurement to occur “before the other.”

Special Relativity seems counterintuitive to us humans—like an arbitrary speed limit, which you could get around by going backward in time, and then forward again. A law you could escape prosecution for violating, if you managed to hide your crime from the authorities.

But what Special Relativity really says is that human intuitions about space and time are simply wrong. There is no global “now,” there is no “before” or “after” across spacelike gaps. The ability to visualize a single global world, even in principle, comes from not getting Special Relativity on a gut level. Otherwise it would be obvious that physics proceeds locally with invariant states of distant entanglement, and the requisite information is simply not locally present to support a globally single world.

It might be that this seemingly impeccable logic is flawed—that my application of Bell’s Theorem and relativity to rule out any single global world contains some hidden assumption of which I am unaware—

—but consider the burden that a single-world theory must now shoulder! There is absolutely no reason in the first place to suspect a global single world; this is just not what current physics says! The global single world is an ancient human intuition that was disproved, like the idea of a universal absolute time. The superposition principle is visible even in half-silvered mirrors; experiments are verifying the disproof at steadily larger levels of superposition—but above all there is no longer any reason to privilege the hypothesis of a global single world. The ladder has been yanked out from underneath that human intuition.

There is no experimental evidence that the macroscopic world is single (we already know the microscopic world is superposed). And the prospect necessarily either violates Special Relativity, or takes an even more miraculous-seeming leap and violates seemingly impeccable logic. The latter, of course, being much more plausible in practice. But it isn’t really that plausible in an absolute sense. Without experimental evidence, it is generally a bad sign to have to postulate arbitrary logical miracles.

As for quantum non-realism, it appears to me to be nothing more than a Get Out of Jail Free card. “It’s okay to violate Special Relativity because none of this is real anyway!” The equations cannot reasonably be hypothesized to deliver such excellent predictions for literally no reason. Bell’s Theorem rules out the obvious possibility that quantum theory represents imperfect knowledge of something locally deterministic.

Furthermore, macroscopic decoherence gives us a perfectly realistic understanding of what is going on, in which the equations deliver such good predictions because they mirror reality. And so the idea that the quantum equations are just “meaningless,” and therefore it is okay to violate Special Relativity, so we can have one global world after all, is not necessary. To me, quantum non-realism appears to be a huge bluff built around semantic stopsigns like “Meaningless!”

It is not quite safe to say that the existence of multiple Earths is as well-established as any other truth of science. The existence of quantum other worlds is not so well-established as the existence of trees, which most of us can personally observe.

Maybe there is something in that 20th decimal place, which aggregates to something bigger in macroscopic events. Maybe there’s a loophole in the seemingly iron logic which says that any single global world must violate Special Relativity, because the information to support a single global world is not locally available. And maybe the Flying Spaghetti Monster is just messing with us, and the world we know is a lie.

So all we can say about the existence of multiple Earths, is that it is as rationally probable as e.g. the statement that spinning black holes do not violate conservation of angular momentum. We have extremely fundamental reasons, having to do with the rotational symmetry of space, to suspect that conservation of angular momentum is built into the underlying nature of physics. And we have no specific reason to suspect this particular violation of our old generalizations in a higher-energy regime.

But we haven’t actually checked conservation of angular momentum for rotating black holes—so far as I know. (And as I am talking here about rational guesses in states of partial knowledge, the point is exactly the same if the observation has been made and I do not know it yet.) And black holes are a more massive regime. So the obedience of black holes is not quite as assured as that my toilet conserves angular momentum while flushing, which come to think, I haven’t checked either…

Yet if you make the mistake of thinking too hard about this one particular possibility, instead of zillions of other possibilities—and especially if you don’t understand the fundamental reason why angular momentum is conserved— then it may start seeming more and more plausible that “spinning black holes violate conservation of angular momentum,” as you think of more and more vaguely plausible-sounding reasons it could be true.

But the rational probability is pretty damned small.

Likewise the rational probability that there is only one Earth.

I mention this to explain my habit of talking as if many-worlds is an obvious fact. Many-worlds is an obvious fact, if you have all your marbles lined up correctly (understand very basic quantum physics, know the formal probability theory of Occam’s Razor, understand Special Relativity, etc.) It is in fact considerably more obvious to me than the proposition that spinning black holes should obey conservation of angular momentum.

The only reason why many-worlds is not universally acknowledged as a direct prediction of physics which requires magic to violate, is that a contingent accident of our Earth’s scientific history gave an entrenched academic position to a phlogiston-like theory that had an unobservable faster-than-light magical “collapse” devouring all other worlds. And many academic physicists do not have a mathematical grasp of Occam’s Razor, which is the usual method for ridding physics of invisible angels. So when they encounter many-worlds and it conflicts with their (undermined) intuition that only one world exists, they say, “Oh, that’s multiplying entities”—which is just flatly wrong as probability theory—and go on about their daily lives.

I am not in academia. I am not constrained to bow and scrape to some senior physicist who hasn’t grasped the obvious, but who will be reviewing my journal articles. I need have no fear that I will be rejected for tenure on account of scaring my students with “science-fiction tales of other Earths.” If I can’t speak plainly, who can?

So let me state then, very clearly, on behalf of any and all physicists out there who dare not say it themselves: Many-worlds wins outright given our current state of evidence. There is no more reason to postulate a single Earth, than there is to postulate that two colliding top quarks would decay in a way that violates Conservation of Energy. It takes more than an unknown fundamental law; it takes magic.

The debate should already be over. It should have been over fifty years ago. The state of evidence is too lopsided to justify further argument. There is no balance in this issue. There is no rational controversy to teach. The laws of probability theory are laws, not suggestions; there is no flexibility in the best guess given this evidence. Our children will look back at the fact that we were still arguing about this in the early twenty-first century, and correctly deduce that we were nuts.

We have embarrassed our Earth long enough by failing to see the obvious. So for the honor of my Earth, I write as if the existence of many-worlds were an established fact, because it is. The only question now is how long it will take for the people of this world to update.

" } }, { "_id": "WqGCaRhib42dhKWRL", "title": "If Many-Worlds Had Come First", "pageUrl": "https://www.lesswrong.com/posts/WqGCaRhib42dhKWRL/if-many-worlds-had-come-first", "postedAt": "2008-05-10T07:43:55.000Z", "baseScore": 96, "voteCount": 91, "commentCount": 189, "url": null, "contents": { "documentId": "WqGCaRhib42dhKWRL", "html": "

Not that I’m claiming I could have done better, if I’d been born into that time, instead of this one…

Macroscopic decoherence, a.k.a. many-worlds, was first proposed in a 1957 paper by Hugh Everett III. The paper was ignored. John Wheeler told Everett to see Niels Bohr. Bohr didn’t take him seriously.

Crushed, Everett left academic physics, invented the general use of Lagrange multipliers in optimization problems, and became a multimillionaire.

It wasn’t until 1970, when Bryce DeWitt (who coined the term “many-worlds”) wrote an article for Physics Today, that the general field was first informed of Everett’s ideas. Macroscopic decoherence has been gaining advocates ever since, and may now be the majority viewpoint (or not).

But suppose that decoherence and macroscopic decoherence had been realized immediately following the discovery of entanglement, in the 1920s. And suppose that no one had proposed collapse theories until 1957. Would decoherence now be steadily declining in popularity, while collapse theories were slowly gaining steam?

Imagine an alternate Earth, where the very first physicist to discover entanglement and superposition said, “Holy flaming monkeys, there’s a zillion other Earths out there!”

In the years since, many hypotheses have been proposed to explain the mysterious Born probabilities. But no one has yet suggested a collapse postulate. That possibility simply has not occurred to anyone.

One day, Huve Erett walks into the office of Biels Nohr…

“I just don’t understand,” Huve Erett said, “why no one in physics even seems interested in my hypothesis. Aren’t the Born statistics the greatest puzzle in modern quantum theory?”

Biels Nohr sighed. Ordinarily, he wouldn’t even bother, but something about the young man compelled him to try.

“Huve,” says Nohr, “every physicist meets dozens of people per year who think they’ve explained the Born statistics. If you go to a party and tell someone you’re a physicist, chances are at least one in ten they’ve got a new explanation for the Born statistics. It’s one of the most famous problems in modern science, and worse, it’s a problem that everyone thinks they can understand. To get attention, a new Born hypothesis has to be… pretty darn good.”

“And this,” Huve says, “this isn’t good?”

Huve gestures to the paper he’d brought to Biels Nohr. It is a short paper. The title reads, “The Solution to the Born Problem.” The body of the paper reads:

When you perform a measurement on a quantum system, all parts of the wavefunction except one point vanish, with the survivor chosen non-deterministically in a way determined by the Born statistics.

“Let me make absolutely sure,” Nohr says carefully, “that I understand you. You’re saying that we’ve got this wavefunction—evolving according to the Wheeler-DeWitt equation—and, all of a sudden, the whole wavefunction, except for one part, just spontaneously goes to zero amplitude. Everywhere at once. This happens when, way up at the macroscopic level, we ‘measure’ something.”

“Right!” Huve says.

“So the wavefunction knows when we ‘measure’ it. What exactly is a ‘measurement’? How does the wavefunction know we’re here? What happened before humans were around to measure things?”

“Um…” Huve thinks for a moment. Then he reaches out for the paper, scratches out “When you perform a measurement on a quantum system,” and writes in, “When a quantum superposition gets too large.”

Huve looks up brightly. “Fixed!”

“I see,” says Nohr. “And how large is ‘too large’?”

“At the 50-micron level, maybe,” Huve says, “I hear they haven’t tested that yet.”

Suddenly a student sticks his head into the room. “Hey, did you hear? They just verified superposition at the 50-micron level.”

“Oh,” says Huve, “um, whichever level, then. Whatever makes the experimental results come out right.”

Nohr grimaces. “Look, young man, the truth here isn’t going to be comfortable. Can you hear me out on this?”

“Yes,” Huve says, “I just want to know why physicists won’t listen to me.”

“All right,” says Nohr. He sighs. “Look, if this theory of yours were actually true—if whole sections of the wavefunction just instantaneously vanished—it would be… let’s see. The only law in all of quantum mechanics that is non-linear, non-unitary, non-differentiable and discontinuous. It would prevent physics from evolving locally, with each piece only looking at its immediate neighbors. Your ‘collapse’ would be the only fundamental phenomenon in all of physics with a preferred basis and a preferred space of simultaneity. Collapse would be the only phenomenon in all of physics that violates CPT symmetry, Liouville’s Theorem, and Special Relativity. In your original version, collapse would also have been the only phenomenon in all of physics that was inherently mental. Have I left anything out?”

“Collapse is also the only acausal phenomenon,” Huve points out. “Doesn’t that make the theory more wonderful and amazing?”

“I think, Huve,” says Nohr, “that physicists may view the exceptionalism of your theory as a point not in its favor.”

“Oh,” says Huve, taken aback. “Well, I think I can fix that non-differentiability thing by postulating a second-order term in the—”

“Huve,” says Nohr, “I don’t think you’re getting my point, here. The reason physicists aren’t paying attention to you, is that your theory isn’t physics. It’s magic.”

“But the Born statistics are the greatest puzzle of modern physics, and this theory provides a mechanism for the Born statistics!” Huve protests.

“No, Huve, it doesn’t,” Nohr says wearily. “That’s like saying that you’ve ‘provided a mechanism’ for electromagnetism by saying that there are little angels pushing the charged particles around in accordance with Maxwell’s equations. Instead of saying, ‘Here are Maxwell’s equations, which tell the angels where to push the electrons,’ we just say, ‘Here are Maxwell’s equations’ and are left with a strictly simpler theory. Now, we don’t know why the Born statistics happen. But you haven’t given the slightest reason why your ‘collapse postulate’ should eliminate worlds in accordance with the Born statistics, rather than something else. You’re not even making use of the fact that quantum evolution is unitary—”

“That’s because it’s not,” interjects Huve.

“—which everyone pretty much knows has got to be the key to the Born statistics, somehow. Instead you’re merely saying, ‘Here are the Born statistics, which tell the collapser how to eliminate worlds,’ and it’s strictly simpler to just say ‘Here are the Born statistics.’ ”

“But—” says Huve.

“Also,” says Nohr, raising his voice, “you’ve given no justification for why there’s only one surviving world left by the collapse, or why the collapse happens before any humans get superposed, which makes your theory really suspicious to a modern physicist. This is exactly the sort of untestable hypothesis that the ‘One Christ’ crowd uses to argue that we should ‘teach the controversy’ when we tell high school students about other Earths.”

“I’m not a One-Christer!” protests Huve.

“Fine,” Nohr says, “then why do you just assume there’s only one world left? And that’s not the only problem with your theory. Which part of the wavefunction gets eliminated, exactly? And in which basis? It’s clear that the whole wavefunction isn’t being compressed down to a delta, or ordinary quantum computers couldn’t stay in superposition when any collapse occurred anywhere—heck, ordinary molecular chemistry might start failing—”

Huve quickly crosses out “one point” on his paper, writes in “one part,” and then says, “Collapse doesn’t compress the wavefunction down to one point. It eliminates all the amplitude except one world, but leaves all the amplitude in that world.”

“Why?” says Nohr. “In principle, once you postulate ‘collapse,’ then ‘collapse’ could eliminate any part of the wavefunction, anywhere—why just one neat world left? Does the collapser know we’re in here?”

Huve says, “It leaves one whole world because that’s what fits our experiments.”

“Huve,” Nohr says patiently, “the term for that is ‘post hoc.’ Furthermore, decoherence is a continuous process. If you partition by whole brains with distinct neurons firing, the partitions have almost zero mutual interference within the wavefunction. But plenty of other processes overlap a great deal. There’s no possible way you can point to ‘one world’ and eliminate everything else without making completely arbitrary choices, including an arbitrary choice of basis—”

“But—” Huve says.

“And above all,” Nohr says, “the reason you can’t tell me which part of the wavefunction vanishes, or exactly when it happens, or exactly what triggers it, is that if we did adopt this theory of yours, it would be the only informally specified, qualitative fundamental law taught in all of physics. Soon no two physicists anywhere would agree on the exact details! Why? Because it would be the only fundamental law in all of modern physics that was believed without experimental evidence to nail down exactly how it worked.”

“What, really?” says Huve. “I thought a lot of physics was more informal than that. I mean, weren’t you just talking about how it’s impossible to point to ‘one world’?”

“That’s because worlds aren’t fundamental, Huve! We have massive experimental evidence underpinning the fundamental law, the Wheeler-DeWitt equation, that we use to describe the evolution of the wavefunction. We just apply exactly the same equation to get our description of macroscopic decoherence. But for difficulties of calculation, the equation would, in principle, tell us exactly when macroscopic decoherence occurred. We don’t know where the Born statistics come from, but we have massive evidence for what the Born statistics are. But when I ask you when, or where, collapse occurs, you don’t know—because there’s no experimental evidence whatsoever to pin it down. Huve, even if this ‘collapse postulate’ worked the way you say it does, there’s no possible way you could know it! Why not a gazillion other equally magical possibilities?”

Huve raises his hands defensively. “I’m not saying my theory should be taught in the universities as accepted truth! I just want it experimentally tested! Is that so wrong?”

“You haven’t specified when collapse happens, so I can’t construct a test that falsifies your theory,” says Nohr. “Now with that said, we’re already looking experimentally for any part of the quantum laws that change at increasingly macroscopic levels. Both on general principles, in case there’s something in the 20th decimal point that only shows up in macroscopic systems, and also in the hopes we’ll discover something that sheds light on the Born statistics. We check decoherence times as a matter of course. But we keep a broad outlook on what might be different. Nobody’s going to privilege your non-linear, non-unitary, non-differentiable, non-local, non-CPT-symmetric, non-relativistic, a-frikkin’-causal, faster-than-light, in-bloody-formal ‘collapse’ when it comes to looking for clues. Not until they see absolutely unmistakable evidence. And believe me, Huve, it’s going to take a hell of a lot of evidence to unmistake this. Even if we did find anomalous decoherence times, and I don’t think we will, it wouldn’t force your ‘collapse’ as the explanation.”

“What?” says Huve. “Why not?”

“Because there’s got to be a billion more explanations that are more plausible than violating Special Relativity,” says Nohr. “Do you realize that if this really happened, there would only be a single outcome when you measured a photon’s polarization? Measuring one photon in an entangled pair would influence the other photon a light-year away. Einstein would have a heart attack.”

“It doesn’t really violate Special Relativity,” says Huve. “The collapse occurs in exactly the right way to prevent you from ever actually detecting the faster-than-light influence.”

“That’s not a point in your theory’s favor,” says Nohr. “Also, Einstein would still have a heart attack.”

“Oh,” says Huve. “Well, we’ll say that the relevant aspects of the particle don’t exist until the collapse occurs. If something doesn’t exist, influencing it doesn’t violate Special Relativity—”

“You’re just digging yourself deeper. Look, Huve, as a general principle, theories that are actually correct don’t generate this level of confusion. But above all, there isn’t any evidence for it. You have no logical way of knowing that collapse occurs, and no reason to believe it. You made a mistake. Just say ‘oops’ and get on with your life.”

“But they could find the evidence someday,” says Huve.

“I can’t think of what evidence could determine this particular one-world hypothesis as an explanation, but in any case, right now we haven’t found any such evidence,” says Nohr. “We haven’t found anything even vaguely suggestive of it! You can’t update on evidence that could theoretically arrive someday but hasn’t arrived! Right now, today, there’s no reason to spend valuable time thinking about this rather than a billion other equally magical theories. There’s absolutely nothing that justifies your belief in ‘collapse theory’ any more than believing that someday we’ll learn to transmit faster-than-light messages by tapping into the acausal effects of praying to the Flying Spaghetti Monster!”

Huve draws himself up with wounded dignity. “You know, if my theory is wrong—and I do admit it might be wrong—”

“If?” says Nohr. “Might?”

“If, I say, my theory is wrong,” Huve continues, “then somewhere out there is another world where I am the famous physicist and you are the lone outcast!”

Nohr buries his head in his hands. “Oh, not this again. Haven’t you heard the saying, ‘Live in your own world’? And you of all people—”

“Somewhere out there is a world where the vast majority of physicists believe in collapse theory, and no one has even suggested macroscopic decoherence over the last thirty years!”

Nohr raises his head, and begins to laugh.

“What’s so funny?” Huve says suspiciously.

Nohr just laughs harder. “Oh, my! Oh, my! You really think, Huve, that there’s a world out there where they’ve known about quantum physics for thirty years, and nobody has even thought there might be more than one world?”

“Yes,” Huve says, “that’s exactly what I think.”

“Oh my! So you’re saying, Huve, that physicists detect superposition in microscopic systems, and work out quantitative equations that govern superposition in every single instance they can test. And for thirty years, not one person says, ‘Hey, I wonder if these laws happen to be universal.’ ”

“Why should they?” says Huve. “Physical models sometimes turn out to be wrong when you examine new regimes.”

“But to not even think of it?” Nohr says incredulously. “You see apples falling, work out the law of gravity for all the planets in the solar system except Jupiter, and it doesn’t even occur to you to apply it to Jupiter because Jupiter is too large? That’s like, like some kind of comedy routine where the guy opens a box, and it contains a spring-loaded pie, so the guy opens another box, and it contains another spring-loaded pie, and the guy just keeps doing this without even thinking of the possibility that the next box contains a pie too. You think John von Neumann, who may have been the highest-g human in history, wouldn’t think of it?”

“That’s right,” Huve says. “He wouldn’t. Ponder that.”

“This is the world where my good friend Ernest formulates his Schrödinger’s Cat thought experiment, and in this world, the thought experiment goes: ‘Hey, suppose we have a radioactive particle that enters a superposition of decaying and not decaying. Then the particle interacts with a sensor, and the sensor goes into a superposition of going off and not going off. The sensor interacts with an explosive, that goes into a superposition of exploding and not exploding; which interacts with the cat, so the cat goes into a superposition of being alive and dead. Then a human looks at the cat,’ and at this point Schrödinger stops, and goes, ‘gee, I just can’t imagine what could happen next.’ So Schrödinger shows this to everyone else, and they’re also like ‘Wow, I got no idea what could happen at this point, what an amazing paradox.’ Until finally you hear about it, and you’re like, ‘hey, maybe at that point half of the superposition just vanishes, at random, faster than light,’ and everyone else is like, ‘Wow, what a great idea!’ ”

“That’s right,” Huve says again. “It’s got to have happened somewhere.”

“Huve, this is a world where every single physicist, and probably the whole damn human species, is too dumb to sign up for cryonics! We’re talking about the Earth where George W. Bush is President.”

" } }, { "_id": "xsZnufn3cQw7tJeQ3", "title": "Collapse Postulates", "pageUrl": "https://www.lesswrong.com/posts/xsZnufn3cQw7tJeQ3/collapse-postulates", "postedAt": "2008-05-09T07:49:15.000Z", "baseScore": 58, "voteCount": 49, "commentCount": 66, "url": null, "contents": { "documentId": "xsZnufn3cQw7tJeQ3", "html": "

Macroscopic decoherence—also known as “many-worlds”—is the idea that the known quantum laws that govern microscopic events simply govern at all levels without alteration. Back when people didn’t know about decoherence—before it occurred to anyone that the laws deduced with such precision for microscopic physics might apply universally—what did people think was going on?

The initial reasoning seems to have gone something like:

When my calculations showed an amplitude of \(-\frac{1}{3}i\) for this photon to get absorbed, my experimental statistics showed that the photon was absorbed around 107 times out of 1,000, which is a good fit to \(\frac{1}{9}\), the square of the modulus.

to

The amplitude is the probability (by way of the squared modulus).

to

Once you measure something and know it didn’t happen, its probability goes to zero.

Read literally, this implies that knowledge itself—or even conscious awareness—causes the collapse. Which was in fact the form of the theory put forth by Werner Heisenberg!
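For what it’s worth, the arithmetic in that first step is easy to check. A minimal sketch (the amplitude \(-\frac{1}{3}i\) and the 107-in-1,000 counts are the essay’s own illustrative numbers, not experimental data):

```python
# Born-rule arithmetic: the squared modulus of the amplitude -(1/3)i,
# compared with the observed absorption frequency from the quote above.
amplitude = -1j / 3
born_probability = abs(amplitude) ** 2   # |a|^2 = 1/9 ~ 0.1111

observed_frequency = 107 / 1000          # 107 absorptions in 1,000 trials
print(born_probability)    # 0.1111...
print(observed_frequency)  # 0.107 -- a reasonable fit to 1/9
```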

But people became increasingly nervous about the notion of importing dualistic language into fundamental physics—as well they should have been! And so the original reasoning was replaced by the notion of an objective “collapse” that destroyed all parts of the wavefunction except one, and was triggered sometime before superposition grew to human-sized levels.

Now, once you’re supposing that parts of the wavefunction can just vanish, you might think to ask:

Is there only one survivor? Maybe there are many surviving worlds, but they survive with a frequency determined by their integrated squared modulus, and so the typical surviving world has experimental statistics that match the Born rule.

Yet collapse theories considered in modern academia only postulate one surviving world. Why?

Collapse theories were devised in a time when it simply didn’t occur to any physicists that more than one world could exist! People took for granted that measurements had single outcomes—it was an assumption so deep it was invisible, because it was what they saw happening. Collapse theories were devised to explain why measurements had single outcomes, rather than (in full generality) why experimental statistics matched the Born rule.

For similar reasons, the “collapse postulates” considered academically suppose that collapse occurs before any human beings get superposed. But experiments are steadily ruling out the possibility of “collapse” in increasingly large entangled systems. Apparently an experiment is underway to demonstrate quantum superposition at 50-micrometer scales, which is bigger than most neurons and getting up toward the diameter of some human hairs!

So why doesn’t someone try jumping ahead of the game, and ask:

Say, we keep having to postulate that collapse occurs steadily later and later. What if collapse occurs only once superposition reaches planetary scales and substantial divergence occurs—say, Earth’s wavefunction collapses around once a minute? Then, while the surviving Earths at any given time would remember a long history of quantum experiments that matched the Born statistics, a supermajority of those Earths would begin obtaining non-Born results from quantum experiments and then abruptly cease to exist a minute later.

Why don’t collapse theories like that one have a huge academic following, among the many people who apparently think it’s okay for parts of the wavefunction to just vanish? Especially given that experiments are proving superposition in steadily larger systems?

A cynic might suggest that the reason for collapse’s continued support isn’t the physical plausibility of having large parts of the wavefunction suddenly vanish, or the hope of somehow explaining the Born statistics. The point is to keep the intuitive appeal of “I don’t remember the measurement having more than one result, therefore only one thing happened; I don’t remember splitting, so there must be only one of me.” You don’t remember dying, so superposed humans must never collapse. A theory that dared to stomp on intuition would be missing the whole point. You might as well just move on to decoherence.

So a cynic might suggest.

But surely it is too early to be attacking the motives of collapse supporters. That is mere argument ad hominem. What about the actual physical plausibility of collapse theories?

Well, first: Does any collapse theory have any experimental support? No.

With that out of the way…

If collapse actually worked the way its adherents say it does, it would be:

  1. The only non-linear evolution in all of quantum mechanics.
  2. The only non-unitary evolution in all of quantum mechanics.
  3. The only non-differentiable (in fact, discontinuous) phenomenon in all of quantum mechanics.
  4. The only phenomenon in all of quantum mechanics that is non-local in the configuration space.
  5. The only phenomenon in all of physics that violates CPT symmetry.
  6. The only phenomenon in all of physics that violates Liouville’s Theorem (has a many-to-one mapping from initial conditions to outcomes).
  7. The only phenomenon in all of physics that is acausal / non-deterministic / inherently random.
  8. The only phenomenon in all of physics that is non-local in spacetime and propagates an influence faster than light.

What does the god-damned collapse postulate have to do for physicists to reject it? Kill a god-damned puppy?

" } }, { "_id": "k3823vuarnmL5Pqin", "title": "Quantum Non-Realism", "pageUrl": "https://www.lesswrong.com/posts/k3823vuarnmL5Pqin/quantum-non-realism", "postedAt": "2008-05-08T05:27:23.000Z", "baseScore": 56, "voteCount": 49, "commentCount": 40, "url": null, "contents": { "documentId": "k3823vuarnmL5Pqin", "html": "
\"Does the moon exist when no one is looking at it?\"
—Albert Einstein, asked of Niels Bohr

Suppose you were just starting to work out a theory of quantum mechanics.

You begin to encounter experiments that deliver different results depending on how closely you observe them. You dig underneath the reality you know, and find an extremely precise mathematical description that only gives you the relative frequency of outcomes; worse, it’s made of complex numbers. Things behave like particles on Monday and waves on Tuesday.

The correct answer is not available to you as a hypothesis, because it will not be invented for another thirty years.

In a mess like that, what’s the best you could do?

The best you can do is the strict “shut up and calculate” interpretation of quantum mechanics. You’ll go on trying to develop new theories, because doing your best doesn’t mean giving up. But we’ve specified that the correct answer won’t be available for thirty years, and that means none of the new theories will really be any good. Doing the best you could theoretically do would mean that you recognized that, even as you looked for ways to test the hypotheses.

The best you could theoretically do would not include saying anything like, “The wavefunction only gives us probabilities, not certainties.” That, in retrospect, was jumping to a conclusion; the wavefunction gives us a certainty of many worlds existing. So that part about the wavefunction being only a probability was not-quite-right. You calculated, but failed to shut up.

If you do the best that you can do without the correct answer being available, then, when you hear about decoherence, it will turn out that you have not said anything incompatible with decoherence. Decoherence is not ruled out by the data and the calculations. So if you refuse to affirm, as positive knowledge, any proposition which was not forced by the data and the calculations, the calculations will not force you to say anything incompatible with decoherence. So too with whatever the correct theory may be, if it is not decoherence. If you go astray, it must be from your own impulses.

But it is hard for human beings to shut up and calculate—really shut up and calculate. There is an overwhelming tendency to treat our ignorance as if it were positive knowledge.

I don’t know if any conversations like this ever really took place, but this is how ignorance becomes knowledge:

Gallant: “Shut up and calculate.”
Goofus: “Why?”
Gallant: “Because I don’t know what these equations mean, just that they seem to work.”
five minutes later
Goofus: “Shut up and calculate.”
Student: “Why?”
Goofus: “Because these equations don’t mean anything, they just work.”
Student: “Really? How do you know?”
Goofus: “Gallant told me.”

A similar transformation occurs in the leap from:

Gallant: “When my calculations show an amplitude of \(-\frac{1}{3}i\) for this photon to get absorbed, my experiments showed that the photon was absorbed around 107 times out of 1,000, which is a good fit to \(\frac{1}{9}\), the square of the modulus. There’s clearly some kind of connection between the experimental statistics and the squared modulus of the amplitude, but I don’t know what.”
Goofus: “The probability amplitude doesn’t say where the electron is, but where it might be. The squared modulus is the probability that reality will turn out that way. Reality itself is inherently nondeterministic.”

And again:

Gallant: “Once I measure something and get an experimental result, I do my future calculations using only the amplitude whose squared modulus went into calculating the frequency of that experimental result. Only this rule makes my further calculations correspond to observed frequencies.”
Goofus: “Since the amplitude is the probability, once you know the experimental result, the probability of everything else becomes zero!”

The whole slip from:

The square of this “amplitude” stuff corresponds tightly to our experimentally observed frequencies

to

The amplitude is the probability of getting the measurement

to

Well, obviously, once you know you didn’t get a measurement, its probability becomes zero

has got to be one of the most embarrassing wrong turns in the history of science.

If you take all this literally, it becomes the consciousness-causes-collapse interpretation of quantum mechanics. These days, just about nobody will confess to actually believing in the consciousness-causes-collapse interpretation of quantum mechanics—

But the physics textbooks are still written this way! People say they don’t believe it, but they talk as if knowledge is responsible for removing incompatible “probability” amplitudes.

Yet as implausible as I find consciousness-causes-collapse, it at least gives us a picture of reality. Sure, it’s an informal picture. Sure, it gives mental properties ontologically basic status. You can’t calculate when an “experimental observation” occurs or what people “know,” you just know when certain probabilities are obviously zero. And this “just knowing” just happens to fit your experimental results, whatever they are—

—but at least consciousness-causes-collapse purports to tell us how the universe works. The amplitudes are real, the collapse is real, the consciousness is real.

Contrast to this argument schema:

Student: “Wait, you’re saying that this amplitude disappears as soon as the measurement tells me it’s not true?”
Goofus: “No, no! It doesn’t literally disappear. The equations don’t mean anything—they just give good predictions.”
Student: “But then what does happen?”
Goofus: (Whorble. Hiss.) “Never ask that question.”
Student: “And what about the part where we measure this photon’s polarization over here, and a light-year away, the entangled photon’s probability of being polarized up-down changes from 50% to 25%?”
Goofus: “Yes, what about it?”
Student: “Doesn’t that violate Special Relativity?”
Goofus: “No, because you’re just finding out the other photon’s polarization. Remember, the amplitudes aren’t real.”
Student: “But Bell’s Theorem shows there’s no possible local hidden variable that could describe the other photon’s polarization before we measure it—”
Goofus: “Exactly! It’s meaningless to talk about the photon’s polarization before we measure it.”
Student: “But the probability suddenly changes—”
Goofus: “It’s meaningless to talk about it before we measure it!”

What does Goofus even mean, here? Never mind the plausibility of his words; what sort of state of reality would correspond to his words being true?

What way could reality be, that would make it meaningless to talk about Special Relativity being violated, because the property being influenced didn’t exist, even though you could calculate the changes to it?

But you know what? Forget that. I want to know the answer to an even more important question:

Where is Goofus getting all this stuff?

Let’s suppose that you take the Schrödinger equation, and assert, as a positive fact:

This equation generates good predictions, but it doesn’t mean anything!

Really? How do you know?

I sometimes go around saying that the fundamental question of rationality is Why do you believe what you believe?

You say the Schrödinger equation “doesn’t mean anything.” How did this item of definite knowledge end up in your possession, if it is not simply ignorance misinterpreted as knowledge?

Was there some experiment that told you? I am open to the idea that experiments can tell us things that seem philosophically impossible. But in this case I should like to see the decisive data. Was there a point where you carefully set up an experimental apparatus, and worked out what you should expect to see if (1) the Schrödinger equation was meaningful or (2) the Schrödinger equation was meaningless; and then you got result (2)?

Gallant: “If I measure the 90° polarization of a photon, and then measure the 45° polarization, and then measure 90° again, my experimental history shows that in 100 trials a photon was absorbed 47 times and transmitted 53 times.”
Goofus: “The 90° polarization and 45° polarization are incompatible properties; they can’t both exist at the same time, and if you measure one, it is meaningless to talk about the other.”

How do you know?

How did you acquire that piece of knowledge, Goofus? I know where Gallant got his—but where did yours come from?

My attitude toward questions of existence and meaning was nicely illustrated in a discussion of the current state of evidence for whether the universe is spatially finite or spatially infinite, in which James D. Miller chided Robin Hanson:

Robin, you are suffering from overconfidence bias in assuming that the universe exists. Surely there is some chance that the universe is of size zero.

To which I replied:

James, if the universe doesn’t exist, it would still be nice to know whether it’s an infinite or a finite universe that doesn’t exist.

Ha! You think pulling that old “universe doesn’t exist” trick will stop me? It won’t even slow me down!

It’s not that I’m ruling out the possibility that the universe doesn’t exist. It’s just that, even if nothing exists, I still want to understand the nothing as best I can. My curiosity doesn’t suddenly go away just because there’s no reality, you know!

The nature of “reality” is something about which I’m still confused, which leaves open the possibility that there isn’t any such thing. But Egan’s Law still applies: “It all adds up to normality.” Apples didn’t stop falling when Einstein disproved Newton’s theory of gravity.

Sure, when the dust settles, it could turn out that apples don’t exist, Earth doesn’t exist, reality doesn’t exist. But the nonexistent apples will still fall toward the nonexistent ground at a meaningless rate of 9.8 m/s².

You say the universe doesn’t exist? Fine, suppose I believe that—though it’s not clear what I’m supposed to believe, aside from repeating the words.

Now, what happens if I press this button?

In The Simple Truth, I said:

Frankly, I’m not entirely sure myself where this “reality” business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead… So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies “belief,” and the latter thingy “reality.”

You want to say that the quantum-mechanical equations are “not real”? I’ll be charitable, and suppose this means something. What might it mean?

Maybe it means the equations which determine my predictions are substantially different from the thingy that determines my experimental results. Then what does determine my experimental results? If you tell me “nothing,” I would like to know what sort of “nothing” it is, and why this “nothing” exhibits such apparent regularity in determining e.g. my experimental measurements of the mass of an electron.

I don’t take well to people who tell me to stop asking questions. If you tell me something is definitely positively meaningless, I want to know exactly what you mean by that, and how you came to know. Otherwise you have not given me an answer, only told me to stop asking the question.

The Simple Truth describes the life of a shepherd and apprentice who have discovered how to count sheep by tossing pebbles into buckets, when they are visited by a delegate from the court who wants to know how the “magic pebbles” work. The shepherd tries to explain, “An empty bucket is magical if and only if the pastures are empty of sheep,” but is soon overtaken by the excited discussions of the apprentice and the delegate as to how the magic might get into the pebbles.

Here we have quantum equations that deliver excellent experimental predictions. What exactly does it mean for them to be “meaningless”? Is it like a bucket of pebbles that works for counting sheep, but doesn’t have any magic?

Back before Bell’s Theorem ruled out local hidden variables, it seemed possible that (as Einstein thought) there was some more complete description of reality which we didn’t have, and the quantum theory summarized incomplete knowledge of this more complete description. The laws we’d learned would turn out to be like the laws of statistical mechanics: quantitative statements of uncertainty. This would hardly make the equations “meaningless”; partial knowledge is the meaning of probability.

But Bell’s Theorem makes it much less plausible that the quantum equations are partial knowledge of something deterministic, the way that statistical mechanics over classical physics is partial knowledge of something deterministic. And even so, the quantum equations would not be “meaningless” as that phrase is usually taken; they would be “statistical,” “approximate,” “partial information,” or at worst “wrong.”

Here we have equations that give us excellent predictions. You say they are “meaningless.” I ask what it is that determines my experimental results, then. You cannot answer. Fine, then how do you justify ruling out the possibility that the quantum equations give such excellent predictions because they are, oh, say, meaningful?

I don’t mean to trivialize questions of reality or meaning. But to call something “meaningless” and say that the argument is now resolved, finished, over, done with, you must have a theory of exactly how it is meaningless. And when the answer is given, the question should seem no longer mysterious.

As you may recall from Semantic Stopsigns, there are words and phrases which are not so much answers to questions, as cognitive traffic signals which indicate you should stop asking questions. “Why does anything exist in the first place? God!” is the classical example, but there are others, such as “Élan vital!”

Tell people to “shut up and calculate” because you don’t know what the calculations mean, and inside of five years, “Shut up!” will be masquerading as a positive theory of quantum mechanics.

I have the highest respect for any historical physicists who even came close to actually shutting up and calculating, who were genuinely conservative in assessing what they did and didn’t know. This is the best they could possibly do without actually being Hugh Everett, and I award them fifty rationality points. My scorn is reserved for those who interpreted “We don’t know why it works” as the positive knowledge that the equations were definitely not real.

I mean, if that trick worked, it would be too good to confine to one subfield. Why shouldn’t physicists use the “not real” loophole outside of quantum mechanics?

“Hey, doesn’t your new ‘yarn theory’ violate Special Relativity?”
“Nah, the equations are meaningless. Say, doesn’t your model of ‘chaotic evil inflation’ violate CPT symmetry?”
“My equations are even more meaningless than your equations! So your criticism double doesn’t count.”

And if that doesn’t work, try writing yourself a Get Out of Jail Free card.

If there is a moral to the whole story, it is the moral of how very hard it is to stay in a state of confessed confusion, without making up a story that gives you closure—how hard it is to avoid manipulating your ignorance as if it were definite knowledge that you possessed.

" } }, { "_id": "DFxoaWGEh9ndwtZhk", "title": "Decoherence is Falsifiable and Testable", "pageUrl": "https://www.lesswrong.com/posts/DFxoaWGEh9ndwtZhk/decoherence-is-falsifiable-and-testable", "postedAt": "2008-05-07T07:54:34.000Z", "baseScore": 48, "voteCount": 44, "commentCount": 43, "url": null, "contents": { "documentId": "DFxoaWGEh9ndwtZhk", "html": "

The words “falsifiable” and “testable” are sometimes used interchangeably, which imprecision is the price of speaking in English. There are two different probability-theoretic qualities I wish to discuss here, and I will refer to one as “falsifiable” and the other as “testable” because it seems like the best fit.

As for the math, it begins, as so many things do, with:

\[ P(A_i|B) = \frac{P(B|A_i)\,P(A_i)}{\sum_j P(B|A_j)\,P(A_j)}. \]

This is Bayes’s Theorem. I own at least two distinct items of clothing printed with this theorem, so it must be important.

To review quickly, \(B\) here refers to an item of evidence, \(A_i\) is some hypothesis under consideration, and the \(A_j\) are competing, mutually exclusive hypotheses. The expression \(P(B|A_i)\) means “the probability of seeing \(B\), if hypothesis \(A_i\) is true” and \(P(A_i|B)\) means “the probability hypothesis \(A_i\) is true, if we see \(B\).”

The mathematical phenomenon that I will call “falsifiability” is the scientifically desirable property of a hypothesis that it should concentrate its probability mass into preferred outcomes, which implies that it must also assign low probability to some un-preferred outcomes; probabilities must sum to 1 and there is only so much probability to go around. Ideally there should be possible observations which would drive down the hypothesis’s probability to nearly zero: There should be things the hypothesis cannot explain, conceivable experimental results with which the theory is not compatible. A theory that can explain everything prohibits nothing, and so gives us no advice about what to expect.

\[ P(A_i|B) = \frac{P(B|A_i)\,P(A_i)}{\sum_j P(B|A_j)\,P(A_j)}. \]

In terms of Bayes’s Theorem, if there is at least some observation \(B\) that the hypothesis \(A_i\) can’t explain, i.e., \(P(B|A_i)\) is tiny, then the numerator \(P(B|A_i)\,P(A_i)\) will also be tiny, and likewise the posterior probability \(P(A_i|B)\). Updating on having seen the impossible result \(B\) has driven the probability of \(A_i\) down to nearly zero. A theory that refuses to make itself vulnerable in this way will need to spread its probability widely, so that it has no holes; it will not be able to strongly concentrate probability into a few preferred outcomes; it will not be able to offer precise advice.
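In numbers, a minimal sketch (the two hypotheses and every probability below are invented purely for illustration):

```python
# Two mutually exclusive hypotheses with equal priors, and an observation B
# that A1 says is nearly impossible. Bayes's Theorem drives P(A1|B) to ~0.
priors      = {"A1": 0.5, "A2": 0.5}
likelihoods = {"A1": 1e-9, "A2": 0.3}    # P(B | A_i)

p_b = sum(likelihoods[h] * priors[h] for h in priors)               # P(B)
posterior = {h: likelihoods[h] * priors[h] / p_b for h in priors}   # P(A_i | B)
print(posterior)   # {'A1': ~3.3e-9, 'A2': ~1.0}
```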

Thus is the rule of science derived in probability theory.

As depicted here, “falsifiability” is something you evaluate by looking at a single hypothesis, asking, “How narrowly does it concentrate its probability distribution over possible outcomes? How narrowly does it tell me what to expect? Can it explain some possible outcomes much better than others?”

Is the decoherence interpretation of quantum mechanics falsifiable? Are there experimental results that could drive its probability down to an infinitesimal?

Sure: We could measure entangled particles that should always have opposite spin, and find that if we measure them far enough apart, they sometimes have the same spin.

Or we could find apples falling upward, the planets of the Solar System zigging around at random, and an atom that kept emitting photons without any apparent energy source. Those observations would also falsify decoherent quantum mechanics. They’re things that, on the hypothesis that decoherent quantum mechanics governs the universe, we should definitely not expect to see.

So there do exist observations \(B\) whose \(P(B|A_{\text{decoherence}})\) is infinitesimal, which would drive \(P(A_{\text{decoherence}}|B)\) down to an infinitesimal.

But that’s just because decoherent quantum mechanics is still quantum mechanics! What about the decoherence part, per se, versus the collapse postulate?

We’re getting there. The point is that I just defined a test that leads you to think about one hypothesis at a time (and called it “falsifiability”). If you want to distinguish decoherence versus collapse, you have to think about at least two hypotheses at a time.

Now really the “falsifiability” test is not quite that singly focused, i.e., the sum in the denominator has got to contain some other hypothesis. But what I just defined as “falsifiability” pinpoints the kind of problem that Karl Popper was complaining about, when he said that Freudian psychoanalysis was “unfalsifiable” because it was equally good at coming up with an explanation for every possible thing the patient could do.

If you belonged to an alien species that had never invented the collapse postulate or Copenhagen Interpretation—if the only physical theory you’d ever heard of was decoherent quantum mechanics—if all you had in your head was the differential equation for the wavefunction’s evolution plus the Born probability rule—you would still have sharp expectations of the universe. You would not live in a magical world where anything was probable.

But you could say exactly the same thing about quantum mechanics without (macroscopic) decoherence.

Well, yes! Someone walking around with the differential equation for the wavefunction’s evolution, plus a collapse postulate that obeys the Born probabilities and is triggered before superposition reaches macroscopic levels, still lives in a universe where apples fall down rather than up.

But where does decoherence make a new prediction, one that lets us test it?

A “new” prediction relative to what? To the state of knowledge possessed by the ancient Greeks? If you went back in time and showed them decoherent quantum mechanics, they would be enabled to make many experimental predictions they could not have made before.

When you say “new prediction,” you mean “new” relative to some other hypothesis that defines the “old prediction.” This gets us into the theory of what I’ve chosen to label testability; and the algorithm inherently considers at least two hypotheses at a time. You cannot call something a “new prediction” by considering only one hypothesis in isolation.

In Bayesian terms, you are looking for an item of evidence B that will produce evidence for one hypothesis over another, distinguishing between them, and the process of producing this evidence we could call a “test.” You are looking for an experimental result B such that

\[ P(B|A_d) \neq P(B|A_c); \]

that is, some outcome B which has a different probability, conditional on the decoherence hypothesis being true, versus its probability if the collapse hypothesis is true. Which in turn implies that the posterior odds for decoherence and collapse will become different from the prior odds:

\[ \frac{P(B|A_d)}{P(B|A_c)} \neq 1 \quad\text{implies}\quad \frac{P(A_d|B)}{P(A_c|B)} = \frac{P(B|A_d)}{P(B|A_c)} \times \frac{P(A_d)}{P(A_c)} \neq \frac{P(A_d)}{P(A_c)}. \]

This equation is symmetrical (assuming no probability is literally equal to 0). There isn’t one \(A_j\) labeled “old hypothesis” and another \(A_j\) labeled “new hypothesis.”

This symmetry is a feature, not a bug, of probability theory! If you are designing an artificial reasoning system that arrives at different beliefs depending on the order in which the evidence is presented, this is labeled “hysteresis” and considered a Bad Thing. I hear that it is also frowned upon in Science.

From a probability-theoretic standpoint we have various trivial theorems that say it shouldn’t matter whether you update on X first and then Y, or update on Y first and then X. At least they’d be trivial if human beings didn’t violate them so often and so lightly.
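A minimal sketch of that order-independence, with made-up likelihood ratios (the two pieces of evidence are assumed independent given each hypothesis, so each update is a clean multiplication):

```python
# Posterior odds P(A_d|evidence)/P(A_c|evidence) after two independent
# observations X and Y: odds_after = LR_X * LR_Y * odds_before.
# Multiplication commutes, so the order of updating cannot matter.
prior_odds = 1.0          # P(A_d) / P(A_c) = 1:1
lr_x, lr_y = 4.0, 0.5     # likelihood ratios P(evidence | A_d) / P(evidence | A_c)

odds_x_then_y = prior_odds * lr_x * lr_y
odds_y_then_x = prior_odds * lr_y * lr_x
assert odds_x_then_y == odds_y_then_x   # no hysteresis: 2.0 either way
```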

If decoherence is “untestable” relative to collapse, then so too, collapse is “untestable” relative to decoherence. What if the history of physics had transpired differently—what if Hugh Everett and John Wheeler had stood in the place of Bohr and Heisenberg, and vice versa? Would it then be right and proper for the people of that world to look at the collapse interpretation, and snort, and say, “Where are the new predictions?”

What if someday we meet an alien species that invented decoherence before collapse? Are we each bound to keep the theory we invented first? Will Reason have nothing to say about the issue, leaving no recourse to settle the argument but interstellar war?

But if we revoke the requirement to yield new predictions, we are left with scientific chaos. You can add arbitrary untestable complications to old theories, and get experimentally equivalent predictions. If we reject what you call “hysteresis,” how can we defend our current theories against every crackpot who proposes that electrons have a new property called “scent,” just like quarks have “flavor”?

Let it first be said that I quite agree that you should reject the one who comes to you and says: “Hey, I’ve got this brilliant new idea! Maybe it’s not the electromagnetic field that’s tugging on charged particles. Maybe there are tiny little angels who actually push on the particles, and the electromagnetic field just tells them how to do it. Look, I have all these successful experimental predictions—the predictions you used to call your own!”

So yes, I agree that we shouldn’t buy this amazing new theory, but it is not the newness that is the problem.

Suppose that human history had developed only slightly differently, with the Church being a primary grant agency for Science. And suppose that when the laws of electromagnetism were first being worked out, the phenomenon of magnetism had been taken as proof of the existence of unseen spirits, of angels. James Clerk becomes Saint Maxwell, who described the laws that direct the actions of angels.

A couple of centuries later, after the Church’s power to burn people at the stake has been restrained, someone comes along and says: “Hey, do we really need the angels?”

“Yes,” everyone says. “How else would the mere numbers of the electromagnetic field translate into the actual motions of particles?”

“It might be a fundamental law,” says the newcomer, “or it might be something other than angels, which we will discover later. What I am suggesting is that interpreting the numbers as the action of angels doesn’t really add anything, and we should just keep the numbers and throw out the angel part.”

And they look one at another, and finally say, “But your theory doesn’t make any new experimental predictions, so why should we adopt it? How do we test your assertions about the absence of angels?”

From a normative perspective, it seems to me that if we should reject the crackpot angels in the first scenario, even without being able to distinguish the two theories experimentally, then we should also reject the angels of established science in the second scenario, even without being able to distinguish the two theories experimentally.

It is ordinarily the crackpot who adds on new useless complications, rather than scientists who accidentally build them in at the start. But the problem is not that the complications are new, but that they are useless whether or not they are new.

A Bayesian would say that the extra complications of the angels in the theory lead to penalties on the prior probability of the theory. If two theories make equivalent predictions, we keep the one that can be described with the shortest message, the smallest program. If you are evaluating the prior probability of each hypothesis by counting bits of code, and then applying Bayesian updating rules on all the evidence available, then it makes no difference which hypothesis you hear about first, or the order in which you apply the evidence.

It is usually not possible to apply formal probability theory in real life, any more than you can predict the winner of a tennis match using quantum field theory. But if probability theory can serve as a guide to practice, this is what it says: Reject useless complications in general, not just when they are new.

Yes, and useless is precisely what the many worlds of decoherence are! There are supposedly all these worlds alongside our own, and they don’t do anything to our world, but I’m supposed to believe in them anyway?

No, according to decoherence, what you’re supposed to believe are the general laws that govern wavefunctions—and these general laws are very visible and testable.

I have argued elsewhere that the imprimatur of science should be associated with general laws, rather than particular events, because it is the general laws that, in principle, anyone can go out and test for themselves. I assure you that I happen to be wearing white socks right now as I type this. So you are probably rationally justified in believing that this is a historical fact. But it is not the specially strong kind of statement that we canonize as a provisional belief of science, because there is no experiment that you can do for yourself to determine the truth of it; you are stuck with my authority. Now, if I were to tell you the mass of an electron in general, you could go out and find your own electron to test, and thereby see for yourself the truth of the general law in that particular case.

The ability of anyone to go out and verify a general scientific law for themselves, by constructing some particular case, is what makes our belief in the general law specially reliable.

What decoherentists say they believe in is the differential equation that is observed to govern the evolution of wavefunctions—which you can go out and test yourself any time you like; just look at a hydrogen atom.

Belief in the existence of separated portions of the universal wavefunction is not additional, and it is not supposed to be explaining the price of gold in London; it is just a deductive consequence of the wavefunction’s evolution. If the evidence of many particular cases gives you cause to believe that \(X \rightarrow Y\) is a general law, and the evidence of some particular case gives you cause to believe \(X\), then you should have \(P(Y) \geq P(X \wedge (X \rightarrow Y))\).

Or to look at it another way, if \(P(Y|X) \approx 1\), then \(P(X \wedge Y) \approx P(X)\).

Which is to say, believing extra details doesn’t cost you extra probability when they are logical implications of general beliefs you already have. Presumably the general beliefs themselves are falsifiable, though, or why bother?
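A crude check of the inequality from a few paragraphs back (a toy sketch: “\(X \rightarrow Y\)” is read here as the material conditional, not-\(X\)-or-\(Y\), which is weaker than a “general law,” but the inequality already holds in that weaker reading):

```python
import random

# Enumerate random joint distributions over the four cases (X, Y) and
# verify P(Y) >= P(X and (X -> Y)). With the material conditional,
# "X and (X -> Y)" is simply "X and Y", which implies Y.
random.seed(0)
for _ in range(1000):
    weights = [random.random() for _ in range(4)]   # TT, TF, FT, FF
    total = sum(weights)
    p_tt, p_tf, p_ft, p_ff = (w / total for w in weights)

    p_y = p_tt + p_ft            # P(Y)
    p_x_and_law = p_tt           # P(X and (X -> Y)) = P(X and Y)
    assert p_y >= p_x_and_law
```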

This is why we don’t believe that spaceships blink out of existence when they cross the cosmological horizon relative to us. True, the spaceship’s continued existence doesn’t have an impact on our world. The spaceship’s continued existence isn’t helping to explain the price of gold in London. But we get the invisible spaceship for free as a consequence of general laws that imply conservation of mass and energy. If the spaceship’s continued existence were not a deductive consequence of the laws of physics as we presently model them, then it would be an additional detail, cost extra probability, and we would have to question why our theory must include this assertion.

The part of decoherence that is supposed to be testable is not the many worlds per se, but just the general law that governs the wavefunction. The decoherentists note that, applied universally, this law implies the existence of entire superposed worlds. Now there are critiques that can be leveled at this theory, most notably, “But then where do the Born probabilities come from?” But within the internal logic of decoherence, the many worlds are not offered as an explanation for anything, nor are they the substance of the theory that is meant to be tested; they are simply a logical consequence of those general laws that constitute the substance of the theory.

If \(A \Rightarrow B\), then \(\neg B \Rightarrow \neg A\). To deny the existence of superposed worlds is necessarily to deny the universality of the quantum laws formulated to govern hydrogen atoms and every other examinable case; it is this denial that seems to the decoherentists like the extra and untestable detail. You can’t see the other parts of the wavefunction—why postulate additionally that they don’t exist?

The events surrounding the decoherence controversy may be unique in scientific history, marking the first time that serious scientists have come forward and said that by historical accident humanity has developed a powerful, successful, mathematical physical theory that includes angels. That there is an entire law, the collapse postulate, that can simply be thrown away, leaving the theory strictly simpler.

To this discussion I wish to contribute the assertion that, in the light of a mathematically solid understanding of probability theory, decoherence is not ruled out by Occam’s Razor, nor is it unfalsifiable, nor is it untestable.

We may consider e.g. decoherence and the collapse postulate, side by side, and evaluate critiques such as “Doesn’t decoherence definitely predict that quantum probabilities should always be 50/50?” and “Doesn’t collapse violate Special Relativity by implying influence at a distance?” We can consider the relative merits of these theories on grounds of their compatibility with experience and the apparent character of physical law.

To assert that decoherence is not even in the game—because the many worlds themselves are “extra entities” that violate Occam’s Razor, or because the many worlds themselves are “untestable,” or because decoherence makes no “new predictions”—all this is, I would argue, an outright error of probability theory. The discussion should simply discard those particular arguments and move on.

" } }, { "_id": "Atu4teGvob5vKvEAF", "title": "Decoherence is Simple", "pageUrl": "https://www.lesswrong.com/posts/Atu4teGvob5vKvEAF/decoherence-is-simple", "postedAt": "2008-05-06T07:44:04.000Z", "baseScore": 71, "voteCount": 54, "commentCount": 63, "url": null, "contents": { "documentId": "Atu4teGvob5vKvEAF", "html": "

An epistle to the physicists:

When I was but a little lad, my father, a PhD physicist, warned me sternly against meddling in the affairs of physicists; he said that it was hopeless to try to comprehend physics without the formal math. Period. No escape clauses. But I had read in Feynman’s popular books that if you really understood physics, you ought to be able to explain it to a nonphysicist. I believed Feynman instead of my father, because Feynman had won the Nobel Prize and my father had not.

It was not until later—when I was reading the Feynman Lectures, in fact— that I realized that my father had given me the simple and honest truth. No math = no physics.

By vocation I am a Bayesian, not a physicist. Yet although I was raised not to meddle in the affairs of physicists, my hand has been forced by the occasional gross misuse of three terms: simple, falsifiable, and testable.

The foregoing introduction is so that you don’t laugh, and say, “Of course I know what those words mean!” There is math here. What follows will be a restatement of the points in Belief in the Implied Invisible, as they apply to quantum physics.

Let’s begin with the remark that started me down this whole avenue, of which I have seen several versions; paraphrased, it runs:

The many-worlds interpretation of quantum mechanics postulates that there are vast numbers of other worlds, existing alongside our own. Occam’s Razor says we should not multiply entities unnec­essarily.

Now it must be said, in all fairness, that those who say this will usually also confess:

But this is not a universally accepted application of Occam’s Razor; some say that Occam’s Razor should apply to the laws governing the model, not the number of objects inside the model.

So it is good that we are all acknowledging the contrary arguments, and telling both sides of the story—

But suppose you had to calculate the simplicity of a theory.

The original formulation of William of Ockham stated:

Lex parsimoniae: Entia non sunt multiplicanda praeter necessitatem.

“The law of parsimony: Entities should not be multiplied beyond necessity.”

But this is qualitative advice. It is not enough to say whether one theory seems more simple, or seems more complex, than another—you have to assign a number; and the number has to be meaningful, you can’t just make it up. Crossing this gap is like the difference between being able to eyeball which things are moving “fast” or “slow,” and starting to measure and calculate velocities.

Suppose you tried saying: “Count the words—that’s how complicated a theory is.”

Robert Heinlein once claimed (tongue-in-cheek, I hope) that the “simplest explanation” is always: “The woman down the street is a witch; she did it.” Eleven words—not many physics papers can beat that.

Faced with this challenge, there are two different roads you can take.

First, you can ask: “The woman down the street is a what?” Just because English has one word to indicate a concept doesn’t mean that the concept itself is simple. Suppose you were talking to aliens who didn’t know about witches, women, or streets—how long would it take you to explain your theory to them? Better yet, suppose you had to write a computer program that embodied your hypothesis, and output what you say are your hypothesis’s predictions—how big would that computer program have to be? Let’s say that your task is to predict a time series of measured positions for a rock rolling down a hill. If you write a subroutine that simulates witches, this doesn’t seem to help narrow down where the rock rolls—the extra subroutine just inflates your code. You might find, however, that your code necessarily includes a subroutine that squares numbers.

Second, you can ask: “The woman down the street is a witch; she did what?” Suppose you want to describe some event, as precisely as you possibly can given the evidence available to you—again, say, the distance/time series of a rock rolling down a hill. You can preface your explanation by saying, “The woman down the street is a witch,” but your friend then says, “What did she do?,” and you reply, “She made the rock roll one meter after the first second, nine meters after the third second…” Prefacing your message with “The woman down the street is a witch,” doesn’t help to compress the rest of your description. On the whole, you just end up sending a longer message than necessary—it makes more sense to just leave off the “witch” prefix. On the other hand, if you take a moment to talk about Galileo, you may be able to greatly compress the next five thousand detailed time series for rocks rolling down hills.

If you follow the first road, you end up with what’s known as Kolmogorov complexity and Solomonoff induction. If you follow the second road, you end up with what’s known as Minimum Message Length.

Ah, so I can pick and choose among definitions of simplicity?

No, actually the two formalisms in their most highly developed forms were proven equivalent.

And I suppose now you’re going to tell me that both formalisms come down on the side of “Occam means counting laws, not counting objects.”

More or less. In Minimum Message Length, so long as you can tell your friend an exact recipe they can mentally follow to get the rolling rock’s time series, we don’t care how much mental work it takes to follow the recipe. In Solomonoff induction, we count bits in the program code, not bits of RAM used by the program as it runs. “Entities” are lines of code, not simulated objects. And as said, these two formalisms are ultimately equivalent.
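To make “count code, not objects” concrete, here is a toy sketch; character counts stand in for bits, and the falling-rock law (distance \(\approx 4.9t^2\)) is just a convenient example:

```python
# Two "messages" that pin down the same 1,000 data points: the raw
# listing, versus a short recipe (a law plus a range). Minimum Message
# Length charges us for the length of the message, not for how many
# rocks -- or worlds -- the recipe mentions when it is run.
data = [round(4.9 * t * t) for t in range(1000)]

raw_message    = repr(data)
recipe_message = "[round(4.9 * t * t) for t in range(1000)]"

assert eval(recipe_message) == data   # both messages specify the same data
print(len(raw_message))      # thousands of characters
print(len(recipe_message))   # a few dozen characters
```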

Now before I go into any further detail on formal simplicity, let me digress to consider the objection:

So what? Why can’t I just invent my own formalism that does things differently? Why should I pay any attention to the way you happened to decide to do things, over in your field? Got any experimental evidence that shows I should do things this way?

Yes, actually, believe it or not. But let me start at the beginning.

The conjunction rule of probability theory states:

P(X,Y) ≤ P(X)

For any propositions X and Y, the probability that “X is true, and Y is true,” is less than or equal to the probability that “X is true (whether or not Y is true).” (If this statement sounds not terribly profound, then let me assure you that it is easy to find cases where human probability assessors violate this rule.)

You usually can’t apply the conjunction rule P(X,Y) ≤ P(X) directly to a conflict between mutually exclusive hypotheses. The conjunction rule only applies directly to cases where the left-hand-side strictly implies the right-hand-side. Furthermore, the conjunction rule is just an inequality; it doesn’t give us the kind of quantitative calculation we want.

But the conjunction rule does give us a rule of monotonic decrease in probability: as you tack more details onto a story, and each additional detail can potentially be true or false, the story’s probability goes down monotonically. Think of probability as a conserved quantity: there’s only so much to go around. As the number of details in a story goes up, the number of possible stories increases exponentially, but the sum over their probabilities can never be greater than 1. For every story “X and Y,” there is a story “X and ¬Y.” When you just tell the story “X,” you get to sum over the possibilities Y and ¬Y.

If you add ten details to X, each of which could potentially be true or false, then that story must compete with 2¹⁰ − 1 other equally detailed stories for precious probability. If on the other hand it suffices to just say X, you can sum your probability over 2¹⁰ stories

((X and Y and Z and ...) or (X and ¬Y and Z and ...) or ...) .
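
A quick sketch of that summing in Python, with the weights on the fully detailed stories chosen at random purely for illustration:

    import itertools, random

    random.seed(0)
    details = list(itertools.product([False, True], repeat=10))  # 2^10 stories
    weights = [random.random() for _ in details]
    total = 2 * sum(weights)              # scaled so that P(X) itself is 0.5
    p = {d: w / total for d, w in zip(details, weights)}

    print(sum(p.values()))   # just saying X: 0.5, summed over every detail combination
    print(max(p.values()))   # even the best fully detailed story gets a tiny share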

The “entities” counted by Occam’s Razor should be individually costly in probability; this is why we prefer theories with fewer of them.

Imagine a lottery which sells up to a million tickets, where each possible ticket is sold only once, and the lottery has sold every ticket at the time of the drawing. A friend of yours has bought one ticket for $1—which seems to you like a poor investment, because the payoff is only $500,000. Yet your friend says, “Ah, but consider the alternative hypotheses, ‘Tomorrow, someone will win the lottery’ and ‘Tomorrow, I will win the lottery.’ Clearly, the latter hypothesis is simpler by Occam’s Razor; it only makes mention of one person and one ticket, while the former hypothesis is more complicated: it mentions a million people and a million tickets!”

To say that Occam’s Razor only counts laws, and not objects, is not quite correct: what counts against a theory are the entities it must mention explicitly, because these are the entities that cannot be summed over. Suppose that you and a friend are puzzling over an amazing billiards shot, in which you are told the starting state of a billiards table, and which balls were sunk, but not how the shot was made. You propose a theory which involves ten specific collisions between ten specific balls; your friend counters with a theory that involves five specific collisions between five specific balls. What counts against your theories is not just the laws that you claim to govern billiard balls, but any specific billiard balls that had to be in some particular state for your model’s prediction to be successful.

If you measure the temperature of your living room as 22 °C, it does not make sense to say: “Your thermometer is probably in error; the room is much more likely to be 20 °C. Because, when you consider all the particles in the room, there are exponentially vastly more states they can occupy if the temperature is really 22 °C—which makes any particular state all the more improbable.” But no matter which exact 22 °C state your room occupies, you can make the same prediction (for the supervast majority of these states) that your thermometer will end up showing 22 °C, and so you are not sensitive to the exact initial conditions. You do not need to specify an exact position of all the air molecules in the room, so that is not counted against the probability of your explanation.

On the other hand—returning to the case of the lottery—suppose your friend won ten lotteries in a row. At this point you should suspect the fix is in. The hypothesis “My friend wins the lottery every time” is more complicated than the hypothesis “Someone wins the lottery every time.” But the former hypothesis is predicting the data much more precisely.

In the Minimum Message Length formalism, saying “There is a single person who wins the lottery every time” at the beginning of your message compresses your description of who won the next ten lotteries; you can just say “And that person is Fred Smith” to finish your message. Compare to, “The first lottery was won by Fred Smith, the second lottery was won by Fred Smith, the third lottery was…”

In the Solomonoff induction formalism, the prior probability of “My friend wins the lottery every time” is low, because the program that describes the lottery now needs explicit code that singles out your friend; but because that program can produce a tighter probability distribution over potential lottery winners than “Someone wins the lottery every time,” it can, by Bayes’s Rule, overcome its prior improbability and win out as a hypothesis.
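
To see the Bayesian bookkeeping, suppose (purely illustrative numbers) that the extra code singling out your friend costs 20 bits of prior improbability, while each observed win is a million-to-one likelihood ratio in favor of the fix being in:

    prior_odds = 2.0 ** -20      # assumed 20-bit complexity penalty
    likelihood_ratio = 1e6       # one named winner out of a million ticket holders

    for wins in range(1, 4):
        print(wins, prior_odds * likelihood_ratio ** wins)
    # 1 win:  ~0.95, roughly even odds
    # 2 wins: ~9.5e5, the prior improbability is already overwhelmed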

Any formal theory of Occam’s Razor should quantitatively define, not only “entities” and “simplicity,” but also the “necessity” part.

Minimum Message Length defines necessity as “that which compresses the message.”

Solomonoff induction assigns a prior probability to each possible computer program, with the entire distribution, over every possible computer program, summing to no more than 1. This can be accomplished using a binary code where no valid computer program is a prefix of any other valid computer program (“prefix-free code”), e.g. because it contains a stop code. Then the prior probability of any program P is simply 2^(−L(P)), where L(P) is the length of P in bits.
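
For instance, take a deliberately trivial encoding in which a valid program is any number of 0s terminated by a 1 (the 1 is the stop code). No program is a prefix of any other, and the prior sums to no more than 1:

    programs = ['0' * n + '1' for n in range(63)]   # '1', '01', '001', ...
    print(sum(2.0 ** -len(p) for p in programs))    # just under 1.0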

The program P itself can be a program that takes in a (possibly zero-length) string of bits and outputs the conditional probability that the next bit will be 1; this makes P a probability distribution over all binary sequences. This version of Solomonoff induction, for any string, gives us a mixture of posterior probabilities dominated by the shortest programs that most precisely predict the string. Summing over this mixture gives us a prediction for the next bit.
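
Here is a minimal sketch of that mixture in Python, with three hand-picked “programs” standing in for the sum over all programs (the priors are invented for the example):

    hypotheses = {               # name: (prior, P(next bit is 1 | bits so far))
        'all-ones':  (2.0 ** -3, lambda bits: 1.0),
        'all-zeros': (2.0 ** -3, lambda bits: 0.0),
        'fair-coin': (2.0 ** -2, lambda bits: 0.5),
    }

    def predict_next(bits):
        weights = {}
        for name, (prior, p1) in hypotheses.items():
            likelihood = 1.0
            for i, b in enumerate(bits):      # P(data seen so far | program)
                p = p1(bits[:i])
                likelihood *= p if b == 1 else 1.0 - p
            weights[name] = prior * likelihood
        total = sum(weights.values())
        return sum(w * hypotheses[n][1](bits) for n, w in weights.items()) / total

    print(predict_next([1, 1, 1, 1]))   # ~0.944: the short, precise program dominates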

The upshot is that it takes more Bayesian evidence—more successful predictions, or more precise predictions—to justify more complex hypotheses. But it can be done; the burden of prior improbability is not infinite. If you flip a coin four times, and it comes up heads every time, you don’t conclude right away that the coin produces only heads; but if the coin comes up heads twenty times in a row, you should be considering it very seriously. What about the hypothesis that a coin is fixed to produce HTTHTT… in a repeating cycle? That’s more bizarre—but after a hundred coinflips you’d be a fool to deny it.
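
With an assumed 10-bit complexity penalty on the trick-coin hypothesis, the crossover is easy to compute:

    def posterior_odds(heads, penalty_bits=10):   # trick coin versus fair coin
        return 2.0 ** (heads - penalty_bits)      # likelihood 2^n against prior 2^-10

    print(posterior_odds(4))    # 0.015625: four heads leave it 64-to-1 against
    print(posterior_odds(20))   # 1024.0: twenty heads make it 1024-to-1 in favor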

Standard chemistry says that in a gram of hydrogen gas there are six hundred billion trillion hydrogen atoms. This is a startling statement, but there was some amount of evidence that sufficed to convince physicists in general, and you particularly, that this statement was true.

Now ask yourself how much evidence it would take to convince you of a theory with six hundred billion trillion separately specified physical laws.

Why doesn’t the prior probability of a program, in the Solomonoff formalism, include a measure of how much RAM the program uses, or the total running time?

The simple answer is, “Because space and time resources used by a program aren’t mutually exclusive possibilities.” It’s not like the program specification, which can only have a 1 or a 0 in any particular place.

But the even simpler answer is, “Because, historically speaking, that heuristic doesn’t work.”

Occam’s Razor was raised as an objection to the suggestion that nebulae were actually distant galaxies—it seemed to vastly multiply the number of entities in the universe. All those stars!

Over and over, in human history, the universe has gotten bigger. A variant of Occam’s Razor which, on each such occasion, would label the vaster universe as more unlikely, would fare less well under humanity’s historical experience.

This is part of the “experimental evidence” I was alluding to earlier. While you can justify theories of simplicity on mathy sorts of grounds, it is also desirable that they actually work in practice. (The other part of the “experimental evidence” comes from statisticians / computer scientists / Artificial Intelligence researchers, testing which definitions of “simplicity” let them construct computer programs that do empirically well at predicting future data from past data. Probably the Minimum Message Length paradigm has proven most productive here, because it is a very adaptable way to think about real-world problems.)

Imagine a spaceship whose launch you witness with great fanfare; it accelerates away from you, and is soon traveling at 0.9c. If the expansion of the universe continues, as current cosmology holds it should, there will come some future point where—according to your model of reality—you don’t expect to be able to interact with the spaceship even in principle; it has gone over the cosmological horizon relative to you, and photons leaving it will not be able to outrace the expansion of the universe.

Should you believe that the spaceship literally, physically disappears from the universe at the point where it goes over the cosmological horizon relative to you?

If you believe that Occam’s Razor counts the objects in a model, then yes, you should. Once the spaceship goes over your cosmological horizon, the model in which the spaceship instantly disappears, and the model in which the spaceship continues onward, give indistinguishable predictions; they have no Bayesian evidential advantage over one another. But one model contains many fewer “entities”; it need not speak of all the quarks and electrons and fields composing the spaceship. So it is simpler to suppose that the spaceship vanishes.

Alternatively, you could say: “Over numerous experiments, I have generalized certain laws that govern observed particles. The spaceship is made up of such particles. Applying these laws, I deduce that the spaceship should continue on after it crosses the cosmological horizon, with the same momentum and the same energy as before, on pain of violating the conservation laws that I have seen holding in every examinable instance. To suppose that the spaceship vanishes, I would have to add a new law, ‘Things vanish as soon as they cross my cosmological horizon.’ ”

The decoherence (a.k.a. many-worlds) version of quantum mechanics states that measurements obey the same quantum-mechanical rules as all other physical processes. Applying these rules to macroscopic objects in exactly the same way as microscopic ones, we end up with observers in states of superposition. Now there are many questions that can be asked here, such as

“But then why don’t all binary quantum measurements appear to have 50/50 probability, since different versions of us see both outcomes?”

However, the objection that decoherence violates Occam’s Razor on account of multiplying objects in the model is simply wrong.

Decoherence does not require the wavefunction to take on some complicated exact initial state. Many-worlds is not specifying all its worlds by hand, but generating them via the compact laws of quantum mechanics. A computer program that directly simulates quantum mechanics to make experimental predictions, would require a great deal of RAM to run—but simulating the wavefunction is exponentially expensive in any flavor of quantum mechanics! Decoherence is simply more so. Many physical discoveries in human history, from stars to galaxies, from atoms to quantum mechanics, have vastly increased the apparent CPU load of what we believe to be the universe.

Many-worlds is not a zillion worlds worth of complicated, any more than the atomic hypothesis is a zillion atoms worth of complicated. For anyone with a quantitative grasp of Occam’s Razor that is simply not what the term “complicated” means.

As with the historical case of galaxies, it may be that people have mistaken their shock at the notion of a universe that large, for a probability penalty, and invoked Occam’s Razor in justification. But if there are probability penalties for decoherence, the largeness of the implied universe, per se, is definitely not their source!

The notion that decoherent worlds are additional entities penalized by Occam’s Razor is just plain mistaken. It is not sort-of-right. It is not an argument that is weak but still valid. It is not a defensible position that could be shored up with further arguments. It is entirely defective as probability theory. It is not fixable. It is bad math. 2+2=3.

" } }, { "_id": "DY9h6zxq6EMHrkkxE", "title": "Spooky Action at a Distance: The No-Communication Theorem", "pageUrl": "https://www.lesswrong.com/posts/DY9h6zxq6EMHrkkxE/spooky-action-at-a-distance-the-no-communication-theorem", "postedAt": "2008-05-05T02:43:09.000Z", "baseScore": 22, "voteCount": 19, "commentCount": 36, "url": null, "contents": { "documentId": "DY9h6zxq6EMHrkkxE", "html": "

Previously in series: Bell's Theorem: No EPR \"Reality\"

When you have a pair of entangled particles, such as oppositely polarized photons, one particle seems to somehow \"know\" the result of distant measurements on the other particle.  If you measure photon A to be polarized at 0°, photon B somehow immediately knows that it should have the opposite polarization of 90°.

Einstein famously called this \"spukhafte Fernwirkung\" or \"spooky action at a distance\".  Einstein didn't know about decoherence, so it seemed spooky to him.

Though, to be fair, Einstein knew perfectly well that the universe couldn't really be \"spooky\".  It was a then-popular interpretation of QM that Einstein was calling \"spooky\", not the universe itself.

Let us first consider how entangled particles look, if you don't know about decoherence—the reason why Einstein called it \"spooky\":

Suppose we've got oppositely polarized photons A and B, and you're about to measure B in the 20° basis.  Your probability of seeing B transmitted by the filter (or absorbed) is 50%.

But wait!  Before you measure B, I suddenly measure A in the 0° basis, and the A photon is transmitted!  Now, apparently, the probability that you'll see B transmitted is 11.7%.  Something has changed!  And even if the photons are light-years away, spacelike separated, the change still occurs.

You might try to reply:

\"No, nothing has changed—measuring the A photon has told you something about the B photon, you have gained knowledge, you have carried out an inference about a distant object, but no physical influence travels faster-than-light.

\"Suppose I put two index cards into an envelope, one marked '+' and one marked '-'.  Now I give one envelope to you, and one envelope to a friend of yours, and you get in a spaceship and travel a few light-years away from each other, and then you open your envelope and see '+'.  At once you know that your friend is holding the envelope marked '-', but this doesn't mean the envelope's content has changed faster than the speed of light.

\"You are committing a Mind Projection Fallacy; the envelope's content is constant, only your local beliefs about distant referents change.\"

Bell's Theorem, covered yesterday, shows that this reply fails.  It is not possible that each photon has an unknown but fixed individual tendency to be polarized a particular way.  (Think of how unlikely it would seem, a priori, for this to be something any experiment could tell you!)

Einstein didn't know about Bell's Theorem, but the theory he was criticizing did not say that there were hidden variables; it said that the probabilities changed directly.

But then how fast does this influence travel?  And what if you measure the entangled particles in such a fashion that, in their individual reference frames, each measurement takes place before the other?

These experiments have been done.  If you think there is an influence traveling, it travels at least six million times as fast as light (in the reference frame of the Swiss Alps).  Nor is the influence fazed if each measurement takes place \"first\" within its own reference frame.

So why can't you use this mysterious influence to send signals faster than light?

Here's something that, as a kid, I couldn't get anyone to explain to me:  \"Why can't you signal using an entangled pair of photons that both start out polarized up-down?  By measuring A in a diagonal basis, you destroy the up-down polarization of both photons.  Then by measuring B in the up-down/left-right basis, you can with 50% probability detect the fact that a measurement has taken place, if B turns out to be left-right polarized.\"

It's particularly annoying that nobody gave me an answer, because the answer turns out to be simple:  If both photons have definite polarizations, they aren't entangled.  There are just two different photons that both happen to be polarized up-down.  Measuring one photon doesn't even change your expectations about the other.

Entanglement is not an extra property that you can just stick onto otherwise normal particles!  It is a breakdown of quantum independence.  In classical probability theory, if you know two facts, there is no longer any logical dependence left between them.  Likewise in quantum mechanics, two particles each with a definite state must have a factorizable amplitude distribution.

Or as old-style quantum theory put it:  Entanglement requires superposition, which implies uncertainty.  When you measure an entangled particle, you are not able to force your measurement result to take any particular value.  So, over on the B end, if they do not know what you measured on A, their probabilistic expectation is always the same as before.  (So it was once said).

But in old-style quantum theory, there was indeed a real and instantaneous change in the other particle's statistics which took place as the result of your own measurement.  It had to be a real change, by Bell's Theorem and by the invisibly assumed uniqueness of both outcomes.

Even though the old theory invoked a non-local influence, you could never use this influence to signal or communicate with anyone.  This was called the \"no-signaling condition\" or the \"no-communication theorem\".

Still, on then-current assumptions, they couldn't actually call it the \"no influence of any kind whatsoever theorem\".  So Einstein correctly labeled the old theory as \"spooky\".

In decoherent terms, the impossibility of signaling is much easier to understand:  When you measure A, one version of you sees the photon transmitted and another sees the photon absorbed.  If you see the photon absorbed, you have not learned any new empirical fact; you have merely discovered which version of yourself \"you\" happen to be.  From the perspective at B, your \"discovery\" is not even theoretically a fact they can learn; they know that both versions of you exist.  When B finally communicates with you, they \"discover\" which world they themselves are in, but that's all.  The statistics at B really haven't changed—the total Born probability of measuring either polarization is still just 50%!

A common defense of the old theory was that Special Relativity was not violated, because no \"information\" was transmitted, because the superluminal influence was always \"random\".  As some Hans de Vries fellow points out, information theory says that \"random\" data is the most expensive kind of data you can transmit.  Nor is \"random\" information always useless:  If you and I generate a million entangled particles, we can later measure them to obtain a shared key for use in cryptography—a highly useful form of information which, by Bell's Theorem, could not have already been there before measuring.

But wait a minute.  Decoherence also lets you generate the shared key.  Does decoherence really not violate the spirit of Special Relativity?

Decoherence doesn't allow \"signaling\" or \"communication\", but it allows you to generate a highly useful shared key apparently out of nowhere.  Does decoherence really have any advantage over the old-style theory on this one?  Or are both theories equally obeying Special Relativity in practice, and equally violating the spirit?

A first reply might be:  \"The shared key is not 'random'.  Both you and your friend generate all possible shared keys, and this is a deterministic and local fact; the correlation only shows up when you meet.\"

But this just reveals a deeper problem.  The counter-objection would be:  \"The measurement that you perform over at A, splits both A and B into two parts, two worlds, which guarantees that you'll meet the right version of your friend when you reunite.  That is non-local physics—something you do at A, makes the world at B split into two parts.  This is spooky action at a distance, and it too violates the spirit of Special Relativity.  Tu quoque!\"

And indeed, if you look at our quantum calculations, they are written in terms of joint configurations.  Which, on reflection, doesn't seem all that local!

But wait—what exactly does the no-communication theorem say?  Why is it true?  Perhaps, if we knew, this would bring enlightenment.

Here is where it starts getting complicated.  I myself don't fully understand the no-communication theorem—there are some parts I think I can see at a glance, and other parts I don't.  So I will only be able to explain some of it, and I may have gotten it wrong, in which case I pray to some physicist to correct me (or at least tell me where I got it wrong).

When we did the calculations for entangled polarized photons, with A's polarization measured using a 30° filter, we calculated that the initial state

√(1/2) * ( [ A=(1 ; 0) ∧ B=(0 ; 1) ] - [ A=(0 ; 1) ∧ B=(1; 0) ] )

would be decohered into a blob for

( -(√3)/2 * √(1/2) * [ A=(-(√3)/2 ; 1/2) ∧ B=(0 ; 1) ] )
- ( 1/2 * √(1/2) * [ A=(-(√3)/2 ; 1/2) ∧ B=(1; 0) ] )

and symmetrically (though we didn't do this calculation) another blob for

( 1/2 * √(1/2) * [ A=(1/2 ; (√3)/2) ∧ B=(0 ; 1) ] )
 - ( (√3)/2 * √(1/2) * [ A=(1/2 ; (√3)/2) ∧ B=(1; 0) ] )

These two blobs together add up, linearly, to the initial state, as one would expect.  So what changed?  At all?
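
You can check the addition with a few lines of NumPy (a sketch of the same arithmetic, in the vector conventions of the previous posts):

    import numpy as np

    s = np.sqrt(0.5)
    up, lr = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # (1 ; 0) and (0 ; 1)
    psi = s * (np.kron(up, lr) - np.kron(lr, up))         # the initial state

    absorbed = np.array([-np.sqrt(3) / 2, 0.5])           # A's 30-degree absorbed state
    transmitted = np.array([0.5, np.sqrt(3) / 2])         # A's 30-degree transmitted state

    def blob(a):   # project A onto one measurement outcome, leave B alone
        return np.kron(np.outer(a, a), np.eye(2)) @ psi

    print(np.allclose(blob(absorbed) + blob(transmitted), psi))   # True
    print(np.vdot(blob(absorbed), blob(absorbed)))                # ~0.5: a fair coin at A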

What changed is that the final result at A, for the first blob, is really more like:

(Sensor-A-reads-\"ABSORBED\") * (Experimenter-A-sees-\"ABSORBED\") *
{ ( -(√3)/2 * √(1/2) * [ A=(-(√3)/2 ; 1/2) ∧ B=(0 ; 1) ] )
 -( 1/2 * √(1/2) * [ A=(-(√3)/2 ; 1/2) ∧ B=(1; 0) ] ) }

and correspondingly with the TRANSMITTED blob.

What changed is that one blob in configuration space, was decohered into two distantly separated blobs that can't interact any more.

As we saw from the Heisenberg \"Uncertainty Principle\", decoherence is a visible, experimentally detectable effect.  That's why we have to shield quantum computers from decoherence.  So couldn't the decohering measurement at A, have detectable consequences for B?

But think about how B sees the initial state:

√(1/2) * ( [ A=(1 ; 0) ∧ B=(0 ; 1) ] - [ A=(0 ; 1) ∧ B=(1; 0) ] )

From B's perspective, this state is already \"not all that coherent\", because no matter what B does, it can't make the A=(1 ; 0) and A=(0 ; 1) configurations cross paths.  There's already a sort of decoherence here—a separation that B can't eliminate by any local action at B.

And as we've earlier glimpsed, the basis in which you write the initial state is arbitrary.  When you write out the state, it has pretty much the same form in the 30° measuring basis as in the 0° measuring basis.

In fact, there's nothing preventing you from writing out the initial state with A in the 30° basis and B in the 0° basis, so long as your numbers add up.

Indeed this is exactly what we did do, when we first wrote out the four terms in the two blobs, and didn't include the sensor or experimenter.

So when A permanently decohered the blobs in the 30° basis, from B's perspective, this merely solidified a decoherence that B could have viewed as already existing.

Obviously, this can't change the local evolution at B (he said, waving his hands a bit).

Now this is only a statement about a quantum measurement that just decoheres the amplitude for A into parts, without A itself evolving in interesting new directions.  What if there were many particles on the A side, and something happened on the A side that put some of those particles into identical configurations via different paths?

This is where linearity and unitarity come in.  The no-communication theorem requires both conditions: in general, violating linearity or unitarity gives you faster-than-light signaling.  (And numerous other superpowers, such as solving NP-complete problems in polynomial time, and possibly Outcome Pumps.)

By linearity, we can consider parts of the amplitude distribution separately, and their evolved states will add up to the evolved state of the whole.

Suppose that there are many particles on the A side, but we count up every configuration that corresponds to some single fixed state of B—say, B=(0 ; 1) or B=France, whatever.  We'd get a group of components which looked like:

(AA=1 ∧ AB=2 ∧ AC=Fred ∧ B=France) +
(AA=2 ∧ AB=1 ∧ AC=Sally ∧ B=France) + ...

Linearity says that we can decompose the amplitude distribution around states of B, and the evolution of the parts will add to the whole.

Assume that the B side stays fixed.  Then this component of the distribution that we have just isolated, will not interfere with any other components, because other components have different values for B, so they are not identical configurations.

And unitary evolution says that whatever the measure—the integrated squared modulus—of this component, the total measure is the same after evolution at A, as before.

So assuming that B stays fixed, then anything whatsoever happening at A, won't change the measure of the states at B (he said, waving his hands some more).
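
The same hand-waving can be done numerically; as a sketch (not a proof), act on A with an arbitrary unitary and check that B's local statistics, its reduced density matrix, do not budge:

    import numpy as np

    rng = np.random.default_rng(0)

    def rho_B(psi):             # B's local statistics, with A summed out
        m = psi.reshape(2, 2)   # rows index A's state, columns index B's
        return m.T @ m.conj()

    psi = np.sqrt(0.5) * np.array([0, 1, -1, 0], dtype=complex)  # the entangled state

    u, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
    psi_after = np.kron(u, np.eye(2)) @ psi   # anything whatsoever happening at A

    print(np.allclose(rho_B(psi), rho_B(psi_after)))   # True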

Nor should it matter whether we consider A first, or B first.  Anything that happens at A, within some component of the amplitude distribution, only depends on the A factor, and only happens to the A factor; likewise with B; so the final joint amplitude distribution should not depend on the order in which we consider the evolutions (and he waved his hands a final time).

It seems to me that from here it should be easy to show no communication considering the simultaneous evolution of A and B.  Sadly I can't quite see the last step of the argument.  I've spent very little time doing actual quantum calculations—this is not what I do for a living—or it would probably be obvious.  Unless it's more subtle than it appears, but anyway...

Anyway, if I'm not mistaken—though I'm feeling my way here by mathematical intuition—the no-communication theorem manifests as invariant generalized states of entanglement.  From B's perspective, they are entangled with some distant entity A, and that entanglement has an invariant shape that remains exactly the same no matter what happens at A.

To me, at least, this suggests that the apparent non-locality of quantum physics is a mere artifact of the representation used to describe it.

If you write a 3-dimensional vector as \"30° west of north, 40° upward slope, and 100 meters long,\" it doesn't mean that the universe has a basic compass grid, or that there's a global direction of up, or that reality runs on the metric system.  It means you chose a convenient representation.

Physics, including quantum physics, is relativistically invariant:  You can pick any relativistic frame you like, redo your calculations, and always get the same experimental predictions back out.  That we know.

Now it may be that, in the course of doing your calculations, you find it convenient to pick some reference frame, any reference frame, and use that in your math.  Greenwich Mean Time, say.  This doesn't mean there really is a central clock, somewhere underneath the universe, that operates on Greenwich Mean Time.

The representation we used talked about \"joint configurations\" of A and B in which the states of A and B were simultaneously specified.  This means our representation was not relativistic; the notion of \"simultaneity\" is arbitrary.  We assumed the universe ran on Greenwich Mean Time, in effect.

I don't know what kind of representation would be (1) relativistically invariant, (2) show distant entanglement as invariant, (3) directly represent space-time locality, and (4) evolve each element of the new representation in a way that depended only on an immediate neighborhood of other elements.

But that representation would probably be a lot closer to the Tao.

My suspicion is that a better representation might take its basic mathematical objects as local states of entanglement.  I've actually suspected this ever since I heard about holographic physics and the entanglement entropy bound.  But that's just raw speculation, at this point.

However, it is important that a fundamental representation be as local and as simple as possible.  This is why e.g. \"histories of the entire universe\" make poor \"fundamental\" objects, in my humble opinion.

And it's why I find it suspicious to have a representation for calculating quantum physics that talks about a relativistically arbitrary \"joint configuration\" of A and B, when it seems like each local position has an invariant \"distant entanglement\" that suffices to determine local evolution.  Shouldn't we be able to refactor this representation into smaller pieces?

Though ultimately you do have to retrieve the phenomenon where the experimenters meet again, after being separated by light-years, and discover that they measured the photons with opposite polarizations.  Which is provably not something you can get from individual billiard balls bopping around.

I suspect that when we get a representation of quantum mechanics that is local in every way that the physics itself is local, it will be immediately obvious—right there in the representation—that things only happen in one place at a time.

Hence, no faster-than-light communicators.  (Dammit!)

Now of course, all this that I have said—all this wondrous normality—relies on the decoherence viewpoint.

It relies on believing that when you measure at A, both possible measurements for A still exist, and are still entangled with B in a way that B sees as invariant.

All the amplitude in the joint configuration is undergoing linear, unitary, local evolution.  None of it vanishes.  So the probabilities at B are always the same from a global standpoint, and there is no supraluminal influence, period.

If you tried to \"interpret\" things any differently... well, the no-communication theorem would become a lot less obvious.

Part of The Quantum Physics Sequence

Next post: \"Decoherence is Simple\"

Previous post: \"Bell's Theorem: No EPR 'Reality'\"

" } }, { "_id": "AnHJX42C6r6deohTG", "title": "Bell's Theorem: No EPR \"Reality\"", "pageUrl": "https://www.lesswrong.com/posts/AnHJX42C6r6deohTG/bell-s-theorem-no-epr-reality", "postedAt": "2008-05-04T04:44:54.000Z", "baseScore": 40, "voteCount": 28, "commentCount": 30, "url": null, "contents": { "documentId": "AnHJX42C6r6deohTG", "html": "

Previously in series: Entangled Photons

(Note:  So that this post can be read by people who haven't followed the whole series, I shall temporarily adopt some more standard and less accurate terms; for example, talking about \"many worlds\" instead of \"decoherent blobs of amplitude\".)

The legendary Bayesian, E. T. Jaynes, began his life as a physicist.  In some of his writings, you can find Jaynes railing against the idea that, because we have not yet found any way to predict quantum outcomes, they must be \"truly random\" or \"inherently random\".

Sure, today you don't know how to predict quantum measurements.  But how do you know, asks Jaynes, that you won't find a way to predict the process tomorrow?  How can any mere experiments tell us that we'll never be able to predict something—that it is \"inherently unknowable\" or \"truly random\"?

As far as I can tell, Jaynes never heard about decoherence aka Many-Worlds, which is a great pity.  If you belonged to a species with a brain like a flat sheet of paper that sometimes split down its thickness, you could reasonably conclude that you'd never be able to \"predict\" whether you'd \"end up\" in the left half or the right half.  Yet is this really ignorance?  It is a deterministic fact that different versions of you will experience different outcomes.

But even if you don't know about Many-Worlds, there's still an excellent reply for \"Why do you think you'll never be able to predict what you'll see when you measure a quantum event?\"  This reply is known as Bell's Theorem.

In 1935, Einstein, Podolsky, and Rosen argued roughly as follows:

Suppose we have a pair of entangled particles, light-years or at least light-minutes apart, so that no signal can possibly travel between them over the timespan of the experiment.  We can suppose these are polarized photons with opposite polarizations.

Polarized filters transmit some photons, and absorb others; this lets us measure a photon's polarization in a given orientation.  Entangled photons (with the right kind of entanglement) are always found to be polarized in opposite directions, when you measure them in the same orientation; if a filter at a certain angle passes photon A (transmits it) then we know that a filter at the same angle will block photon B (absorb it).

Now we measure one of the photons, labeled A, and find that it is transmitted by a 0° polarized filter.  Without measuring B, we can now predict with certainty that B will be absorbed by a 0° polarized filter, because A and B always have opposite polarizations when measured in the same basis.

Said EPR:

\"If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity.\"

EPR then assumed (correctly!) that nothing which happened at A could disturb B or exert any influence on B, due to the spacelike separation of A and B.  We'll take up the relativistic viewpoint again tomorrow; for now, let's just note that this assumption is correct.

If by measuring A at 0°, we can predict with certainty whether B will be absorbed or transmitted at 0°, then according to EPR this fact must be an \"element of physical reality\" about B.  Since measuring A cannot influence B in any way, this element of reality must always have been true of B.  Likewise with every other possible polarization we could measure—10°, 20°, 50°, anything.  If we measured A first in the same basis, even light-years away, we could perfectly predict the result for B.  So on the EPR assumptions, there must exist some \"element of reality\" corresponding to whether B will be transmitted or absorbed, in any orientation.

But if no one has measured A, quantum theory does not predict with certainty whether B will be transmitted or absorbed.  (At least that was how it seemed in 1935.)  Therefore, EPR said, there are elements of reality that exist but are not mentioned in quantum theory:

\"We are thus forced to conclude that the quantum-mechanical description of physical reality given by wave functions is not complete.\"

This is another excellent example of how seemingly impeccable philosophy can fail in the face of experimental evidence, thanks to a wrong assumption so deep you didn't even realize it was an assumption.

EPR correctly assumed Special Relativity, and then incorrectly assumed that there was only one version of you who saw A do only one thing.  They assumed that the certain prediction about what you would hear from B, described the only outcome that happened at B.

In real life, if you measure A and your friend measures B, different versions of you and your friend obtain both possible outcomes.  When you compare notes, the two of you always find the polarizations are opposite.  This does not violate Special Relativity even in spirit, but the reason why not is the topic of tomorrow's post, not today's.

Today's post is about how, in 1964, John S. Bell irrevocably shot down EPR's original argument.  Not by pointing out the flaw in the EPR assumptions—Many-Worlds was not then widely known—but by describing an experiment that disproved them!

It is experimentally impossible for there to be a physical description of the entangled photons, which specifies a single fixed outcome of any polarization measurement individually performed on A or B.

This is Bell's Theorem, which rules out all \"local hidden variable\" interpretations of quantum mechanics.  It's actually not all that complicated, as quantum physics goes!

We begin with a pair of entangled photons, which we'll name A and B.  When measured in the same basis, you find that the photons always have opposite polarization—one is transmitted, one is absorbed.  As for the first photon you measure, the probability of transmission or absorption seems to be 50-50.

What if you measure with polarized filters set at different angles?

Suppose that I measure A with a filter set at 0°, and find that A was transmitted.  In general, if you then measure B at an angle θ to my basis, quantum theory says the probability (of my hearing that) you also saw B transmitted, equals sin² θ.  E.g. if your filter was at an angle of 30° to my filter, and I saw my photon transmitted, then there's a 25% probability that you see your photon transmitted.

(Why?  See \"Decoherence as Projection\".  Some quick sanity checks:  sin(0°) = 0, so if we measure at the same angles, the calculated probability is 0—we never measure at the same angle and see both photons transmitted.  Similarly, sin(90°) = 1; if I see A transmitted, and you measure at an orthogonal angle, I will always hear that you saw B transmitted.  sin(45°) = √(1/2), so if you measure in a diagonal basis, the probability is 50/50 for the photon to be transmitted or absorbed.)

Oh, and the initial probability of my seeing A transmitted is always 1/2.  So the joint probability of seeing both photons transmitted is 1/2 * sin² θ.  1/2 probability of my seeing A transmitted, times sin² θ probability that you then see B transmitted.

And now you and I perform three statistical experiments, with large sample sizes:

(1)  First, I measure A at 0° and you measure B at 20°.  The photon is transmitted through both filters on 1/2 sin²(20°) = 5.8% of the occasions.

(2)  Next, I measure A at 20° and you measure B at 40°.  When we compare notes, we again discover that we both saw our photons pass through our filters, on 1/2 sin²(40° - 20°) = 5.8% of the occasions.

(3)  Finally, I measure A at 0° and you measure B at 40°.  Now the photon passes both filters on 1/2 sin²(40°) = 20.7% of occasions.
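
(A quick check of those numbers in Python; this is just the arithmetic above, not new physics:)

    from math import radians, sin

    def p_both(angle_a, angle_b):   # 1/2 * sin^2 of the relative angle
        return 0.5 * sin(radians(angle_b - angle_a)) ** 2

    print(p_both(0, 20), p_both(20, 40), p_both(0, 40))   # 0.058, 0.058, 0.207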

Or to say it a bit more compactly:

  1. A transmitted 0°, B transmitted 20°:  5.8%
  2. A transmitted 20°, B transmitted 40°:  5.8%
  3. A transmitted 0°, B transmitted 40°:  20.7%

What's wrong with this picture?

Nothing, in real life.  But on EPR assumptions, it's impossible.

On EPR assumptions, there's a fixed local tendency for any individual photon to be transmitted or absorbed by a polarizer of any given orientation, independent of any measurements performed light-years away, as the single unique outcome.

Consider experiment (2).  We measure A at 20° and B at 40°, compare notes, and find we both saw our photons transmitted.  Now, A was transmitted at 20°, so if you had measured B at 20°, B would certainly have been absorbed—if you measure in the same basis you must find opposite polarizations.

That is:  If A had the fixed tendency to be transmitted at 20°, then B must have had a fixed tendency to be absorbed at 20°.  If this rule were violated, you could have measured both photons in the 20° basis, and found that both photons had the same polarization.  Given the way that entangled photons are actually produced, this would violate conservation of angular momentum.

So (under EPR assumptions) what we learn from experiment (2) can be equivalently phrased as:  \"B was a kind of photon that was transmitted by a 40° filter and would have been absorbed by the 20° filter.\"  Under EPR assumptions this is logically equivalent to the actual result of experiment (2).

Now let's look again at the percentages:

  1. B is a kind of photon that was transmitted at 20°, and would not have been transmitted at 0°:  5.8%
  2. B is a kind of photon that was transmitted at 40°, and would not have been transmitted at 20°:  5.8%
  3. B is a kind of photon that was transmitted at 40°, and would not have been transmitted at 0°:  20.7%

If you want to try and see the problem on your own, you can stare at the three experimental results for a while...

(Spoilers ahead.)

Consider a photon pair that gives us a positive result in experiment (3).  On EPR assumptions, we now know that the B photon was inherently a type that would have been absorbed at 0°, and was in fact transmitted at 40°.  (And conversely, if the B photon is of this type, experiment (3) will always give us a positive result.)

Now take a B photon from a positive experiment (3), and ask:  \"If instead we had measured B at 20°, would it have been transmitted, or absorbed?\"  Again by EPR's assumptions, there must be a definite answer to this question.  We could have measured A in the 20° basis, and then had certainty of what would happen at B, without disturbing B.  So there must be an \"element of reality\" for B's polarization at 20°.

But if B is a kind of photon that would be transmitted at 20°, then it is a kind of photon that implies a positive result in experiment (1).  And if B is a kind of photon that would be absorbed at 20°, it is a kind of photon that would imply a positive result in experiment (2).

If B is a kind of photon that is transmitted at 40° and absorbed at 0°, and it is either a kind that is absorbed at 20° or a kind that is transmitted at 20°; then B must be either a kind that is absorbed at 20° and transmitted at 40°, or a kind that is transmitted at 20° and absorbed at 0°.

So, on EPR's assumptions, it's really hard to see how the same source can manufacture photon pairs that produce 5.8% positive results in experiment (1), 5.8% positive results in experiment (2), and 20.7% positive results in experiment (3).  Every photon pair that produces a positive result in experiment (3) should also produce a positive result in either (1) or (2).

\"Bell's inequality\" is that any theory of hidden local variables implies (1) + (2) >= (3).  The experimentally verified fact that (1) + (2) < (3) is a \"violation of Bell's inequality\".  So there are no hidden local variables.  QED.

And that's Bell's Theorem.  See, that wasn't so horrible, was it?

But what's actually going on here?

When you measure at A, and your friend measures at B a few light-years away, different versions of you observe both possible outcomes—both possible polarizations for your photon.  But the amplitude of the joint world where you both see your photons transmitted, goes as √(1/2) * sin θ where θ is the angle between your polarizers.  So the squared modulus of the amplitude (which is how we get probabilities in quantum theory) goes as 1/2 sin² θ, and that's the probability for finding mutual transmission when you meet a few years later and compare notes.  We'll talk tomorrow about why this doesn't violate Special Relativity.

Strengthenings of Bell's Theorem eliminate the need for statistical reasoning:  You can show that local hidden variables are impossible, using only properties of individual experiments which are always true given various measurements.  (Google \"GHZ state\" or \"GHZM state\".)  Occasionally you also hear that someone has published a strengthened Bell's experiment in which the two particles were more distantly separated, or the particles were measured more reliably, but you get the core idea.  Bell's Theorem is proven beyond a reasonable doubt.  Now the physicists are tracking down unreasonable doubts, and Bell always wins.

I know I sometimes speak as if Many-Worlds is a settled issue, which it isn't academically.  (If people are still arguing about it, it must not be \"settled\", right?)  But Bell's Theorem itself is agreed-upon academically as an experimental truth.  Yes, there are people discussing theoretically conceivable loopholes in the experiments done so far.  But I don't think anyone out there really thinks they're going to find an experimental violation of Bell's Theorem as soon as they use a more sensitive photon detector.

What does Bell's Theorem plus its experimental verification tell us, exactly?

My favorite phrasing is one I encountered in D. M. Appleby:  \"Quantum mechanics is inconsistent with the classical assumption that a measurement tells us about a property previously possessed by the system.\"

Which is exactly right:  Measurement decoheres your blob of amplitude (world), splitting it into several noninteracting blobs (worlds).  This creates new indexical uncertainty—uncertainty about which of several versions of yourself you are.  Learning which version you are, does not tell you a previously unknown property that was always possessed by the system.  And which specific blobs (worlds) are created, depends on the physical measuring process.

It's sometimes said that Bell's Theorem rules out \"local realism\".  Tread cautiously when you hear someone arguing against \"realism\".  As for locality, it is, if anything, far better understood than this whole \"reality\" business:  If life is but a dream, it is a dream that obeys Special Relativity.

It is just one particular sort of locality, and just one particular notion of which things are \"real\" in the sense of previously uniquely determined, which Bell's Theorem says cannot simultaneously be true.

In particular, decoherent quantum mechanics is local, and Bell's Theorem gives us no reason to believe it is not real.  (It may or may not be the ultimate truth, but quantum mechanics is certainly more real than the classical hallucination of little billiard balls bopping around.)

Does Bell's Theorem prevent us from regarding the quantum description as a state of partial knowledge about something more deeply real?

At the very least, Bell's Theorem prevents us from interpreting quantum amplitudes as probability in the obvious way.  You cannot point at a single configuration, with probability proportional to the squared modulus, and say, \"This is what the universe looked like all along.\"

In fact, you cannot pick any locally specified description whatsoever of unique outcomes for quantum experiments, and say, \"This is what we have partial information about.\"

So it certainly isn't easy to reinterpret the quantum wavefunction as an uncertain belief.  You can't do it the obvious way.  And I haven't heard of any non-obvious interpretation of the quantum description as partial information.

Furthermore, as I mentioned previously, it is really odd to find yourself differentiating a degree of uncertain anticipation to get physical results—the way we have to differentiate the quantum wavefunction to find out how it evolves.  That's not what probabilities are for.

Thus I try to emphasize that quantum amplitudes are not possibilities, or probabilities, or degrees of uncertain belief, or expressions of ignorance, or any other species of epistemic creatures.  Wavefunctions are not states of mind.  It would be a very bad sign to have a fundamental physics that operated over states of mind; we know from looking at brains that minds are made of parts.

In conclusion, although Einstein, Podolsky, and Rosen presented a picture of the world that was disproven experimentally, I would still regard them as having won a moral victory:  The then-common interpretation of quantum mechanics did indeed have a one person measuring at A, seeing a single outcome, and then making a certain prediction about a unique outcome at B; and this is indeed incompatible with relativity, and wrong.  Though people are still arguing about that.

Part of The Quantum Physics Sequence

Next post: \"Spooky Action at a Distance: The No-Communication Theorem\"

Previous post: \"Entangled Photons\"

" } }, { "_id": "GmFuZcE6udo7bykxP", "title": "Entangled Photons", "pageUrl": "https://www.lesswrong.com/posts/GmFuZcE6udo7bykxP/entangled-photons", "postedAt": "2008-05-03T07:20:50.000Z", "baseScore": 16, "voteCount": 15, "commentCount": 11, "url": null, "contents": { "documentId": "GmFuZcE6udo7bykxP", "html": "

Previously in series: Decoherence as Projection

Today we shall analyze the phenomenon of \"entangled particles\".  We're going to make heavy use of polarized photons here, so you'd better have read yesterday's post.

If a particle at rest decays into two other particles, their net momentum must add up to 0.  The two new particles may have amplitudes to head off in all directions, but in each joint configuration, the directions will be opposite.

By a similar method you can produce two entangled photons which head off in opposite directions, and are guaranteed to be polarized oppositely (at right angles to each other), but with a 50% prior probability of going through any given polarized filter.

You might think that this would involve amplitudes over a continuous spectrum of opposite configurations—an amplitude for photon A to be polarized at 0° and for photon B to be polarized at 90°, an amplitude for A to be 1° polarized and for B to be 91° polarized, etc.  But in fact it's possible to describe the quantum state \"unknown but opposite polarizations\" much more compactly.

First, note that the two photons are heading off in opposite directions.  This justifies calling one photon A and one photon B; they aren't likely to get their identities mixed up.

As with yesterday, the polarization state (1 ; 0) is what passes a 90° filter.  The polarization state (0 ; 1) is what passes a 0° filter.  (1 ; 0) is polarized up-down, (0 ; 1) is polarized left-right.

If A is in the polarization state (1 ; 0), we'll write that as A=(1 ; 0).

If A=(1 ; 0) and B=(0 ; 1), we'll write that as

[ A=(1 ; 0) ∧ B=(0 ; 1) ]

The state for \"unknown opposite polarization\" can be written as:

√(1/2) * ( [ A=(1 ; 0) ∧ B=(0 ; 1) ] - [ A=(0 ; 1) ∧ B=(1; 0) ] )

Note that both terms are being multiplied by the square root of 1/2.  This ensures that the squared moduli of the two terms sum to 1.  Also, don't overlook the minus sign in the center; we'll need it.

If you measure the A photon's polarization in the up-down/left-right basis, the result is pretty straightforward.  Your measurement decoheres the entanglement, creating one evolution out of the A=(1 ; 0) ∧ B=(0 ; 1) configuration, and a second, noninteracting evolution out of the A=(0 ; 1) ∧ B=(1; 0) configuration.

If you find that the A photon is polarized up-down, i.e., (1 ; 0), then you know you're in the A=(1 ; 0) ∧ B=(0 ; 1) blob of amplitude. So you know that if you or anyone else measures B, they'll report to you that they found B in the (0 ; 1) or left-right polarization.  The version of you that finds A=(1 ; 0), and the version of your friend that finds B=(0 ; 1), always turn out to live in the same blob of amplitude.

On the other side of configuration space, another version of you finds themselves in the A=(0 ; 1) ∧ B=(1; 0) blob.  If a friend measures B, the other you will expect to hear that B was polarized up-down, just as you expect to meet the version of your friend that measured B left-right.

But what if you measure the system in a slanted basis—test a photon with a 30° polarized filter?  Given the specified starting state, in the up-down / left-right basis, what happens if we measure in the 30° basis instead?  Will we still find the photons having opposite polarizations?  Can this be demonstrated?

Yes, but the math gets a little more interesting.

Let's review, from yesterday, the case where a photon previously polarized in the up-down/left-right basis encounters a 30° filter.

\"Polar3060\" A 30-60-90 triangle has a hypotenuse of 1, a small side of 1/2, and a longer side of (√3)/2, in accordance with the Pythagorean Theorem.

If a photon passes a 0° filter, coming out with polarization (0 ; 1), and then encounters another filter at 30°, the vector that would be transmitted through the 30° filter is

(√3)/2 * (1/2 ; (√3)/2) = (.433 ; .75)

and the polarization vector that would be absorbed is

1/2 * (-(√3)/2 ; 1/2) = (-.433 ; .25)

Note that the polarization states (1/2 ; (√3)/2) and (-(√3)/2 ; 1/2) form an orthonormal basis:  The inner product of each vector with itself is 1, and the inner product of the two vectors with each other is 0.

Then we had (√3)/2 of one basis vector plus 1/2 of the other, guaranteeing the squared moduli would sum to 1.  ((√3)/2)² + (1/2)² = 3/4 + 1/4 = 1.

So we can say that in the 30° basis, the incoming (0 ; 1) photon had a (√3)/2 amplitude to be transmitted, and a 1/2 amplitude to be absorbed.

Symmetrically, suppose a photon had passed a 90° filter, coming out with polarization (1 ; 0), and then encountered the same 30° filter.  Then the transmitted vector would be

1/2 * (1/2 ; (√3)/2) = (.25 ; .433)

and the absorbed vector would be

-(√3)/2 * (-(√3)/2 ; 1/2) = (.75 ; -.433)

Now let's consider again with the entangled pair of photons

√(1/2) * ( [ A=(1 ; 0) ∧ B=(0 ; 1) ] - [ A=(0 ; 1) ∧ B=(1; 0) ] )

and measure photon A with a 30° filter.

Suppose we find that we see photon A absorbed.

Then we know that there was a -(√3)/2 amplitude for this event to occur if the original state had A=(1 ; 0), and a 1/2 amplitude for this event to occur if the original state had A=(0 ; 1).

So, if we see that photon A is absorbed, we learn that we are in the now-decoherent blob of amplitude:

( -(√3)/2 * √(1/2) * [ A=(-(√3)/2 ; 1/2) ∧ B=(0 ; 1) ] )
- ( 1/2 * √(1/2) * [ A=(-(√3)/2 ; 1/2) ∧ B=(1; 0) ] )

You might be tempted to add the two amplitudes for A being absorbed—the -(√3)/2 * √(1/2) and the -1/2 * √(1/2)—and get a total amplitude of -.966, which, squared, comes out as .933.

But if this were true, there would be a 93% prior probability of A being absorbed by the filter—a huge prior expectation to see it absorbed.  There should be a 50% prior chance of seeing A absorbed.

What went wrong is that, even though we haven't yet measured B, the configurations with B=(0 ; 1) and B=(1 ; 0) are distinct. B could be light-years away, and unknown to us; the configurations would still be distinct.  So we don't add the amplitudes for the two terms; we keep them separate.

When the amplitudes for the terms are separately squared, and the squares added together, we get a prior absorption probability of 1/2—which is exactly what we should expect.

Okay, so we're in the decoherent blob where A is absorbed by a 30° filter.  Now consider what happens over at B, within our blob, if a friend measures B with another 30° filter.

The new starting amplitude distribution is:

( -(√3)/2 * √(1/2) * [ A=(-(√3)/2 ; 1/2) ∧ B=(0 ; 1) ] )
- ( 1/2 * √(1/2) * [ A=(-(√3)/2 ; 1/2) ∧ B=(1; 0) ] )

In the case where B=(0 ; 1), it has an amplitude of (√3)/2 to be transmitted through a 30° filter; being transmitted through a 30° filter corresponds to the polarization state (1/2 ; (√3)/2).  Likewise, a 1/2 amplitude to be absorbed (polarization state (-(√3)/2 ; 1/2)).

\n

In the case where B=(1 ; 0) it has an amplitude of 1/2 to be transmitted with state (1/2 ; (√3)/2).  And an amplitude of -(√3)/2 to occupy the state (-(√3)/2 ; 1/2) and be absorbed.

\n

So add up four terms:

\n
\n

( -(√3)/2 * √(1/2) ) * [ A=(-(√3)/2 ; 1/2) ∧ B=(0 ; 1) ]
  breaks down into
    ( -(√3)/2 * √(1/2) ) * (√3)/2 * [ A=(-(√3)/2 ; 1/2) ∧ B=(1/2 ; (√3)/2) ] +
    ( -(√3)/2 * √(1/2) ) * 1/2     * [ A=(-(√3)/2 ; 1/2) ∧ B=(-(√3)/2 ; 1/2) ]
and
- ( 1/2 * √(1/2) ) * [ A=(-(√3)/2 ; 1/2) ∧ B=(1 ; 0) ]
   breaks down into
   -( 1/2 * √(1/2) ) *  1/2      * [ A=(-(√3)/2 ; 1/2) ∧ B=(1/2 ; (√3)/2) ] +
   -( 1/2 * √(1/2) ) * -(√3)/2 * [ A=(-(√3)/2 ; 1/2) ∧ B=(-(√3)/2 ; 1/2) ]

\n
\n

These four terms occupy only two distinct configurations.

\n

Adding the amplitudes, the configuration [ A=(-(√3)/2 ; 1/2) ∧ B=(-(√3)/2 ; 1/2) ] ends up with zero amplitude, while [ A=(-(√3)/2 ; 1/2) ∧ B=(1/2 ; (√3)/2) ] ends up with a final amplitude of -√(1/2), whose squared modulus is 1/2.
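
\n

(Again a sketch of my own, adding up the four terms to check the cancellation:)

\n

import numpy as np

r = np.sqrt(0.5)
s = np.sqrt(3) / 2

# The four amplitudes above, grouped by B's final state:
b_transmitted = (-s * r) * s + -(0.5 * r) * 0.5   # sums to -√(1/2)
b_absorbed = (-s * r) * 0.5 + -(0.5 * r) * -s     # sums to 0

print(b_transmitted, b_absorbed)                  # -0.7071..., 0.0
print(b_transmitted**2)                           # 0.5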

\n

So, within the blob in which you've found yourself, the probability of your friend seeing that a 30° filter blocks both A and B, is 0.  The probability of seeing that a 30° filter blocks A and transmits B, is 50%.

\n

Symmetrically, there's another blob of amplitude where your other self sees A transmitted and B blocked, with probability 50%.  And A transmitted and B transmitted, with probability 0%.

\n

So you and your friend, when you compare results in some particular blob of decohered amplitude, always find that the two photons have opposite polarization.

\n

And in general, if you use two equally oriented polarization filters to measure a pair of photons in the initial state:

\n
\n

√(1/2) * ( [ A=(1 ; 0) ∧ B=(0 ; 1) ] - [ A=(0 ; 1) ∧ B=(1 ; 0) ] )

\n
\n

then you are guaranteed that one filter will transmit, and the other filter absorb—regardless of how you set the filters, so long as you use the same setting.  The photons always have opposite polarizations, even though the prior probability of any particular photon having a particular polarization is 50%.
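
\n

(Here is a slightly longer sketch of mine that grinds through the same bookkeeping for an arbitrary shared filter angle, confirming the perfect anti-correlation.  The transmit axis of a filter at angle θ is taken to be (sin θ ; cos θ), which matches the conventions above.)

\n

import numpy as np

def transmit(theta):   # polarization transmitted by a filter at angle theta
    return np.array([np.sin(theta), np.cos(theta)])

def absorb(theta):     # the orthogonal polarization, which gets absorbed
    return np.array([-np.cos(theta), np.sin(theta)])

up = np.array([1.0, 0.0])     # (1 ; 0)
right = np.array([0.0, 1.0])  # (0 ; 1)
c = np.sqrt(0.5)              # state: c*[A=up ∧ B=right] - c*[A=right ∧ B=up]

for degrees in [0, 17, 30, 45, 83]:
    theta = np.radians(degrees)
    for a_label, a in [('A blocked', absorb(theta)), ('A passed', transmit(theta))]:
        for b_label, b in [('B blocked', absorb(theta)), ('B passed', transmit(theta))]:
            amp = c * (a @ up) * (b @ right) - c * (a @ right) * (b @ up)
            print(degrees, a_label, b_label, round(amp**2, 3))
# At every angle: the both-blocked and both-passed outcomes get probability 0.0,
# and the two opposite outcomes get probability 0.5 each.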

\n

What if I measure one photon with a 0° filter, and find that it is transmitted (= state (0 ; 1)), and then I measure the other photon with a 30° filter?

\n

The probability works out to just the same as if I knew the other photon had state (1 ; 0)—in effect, it now does.

\n

Over on my side, I've decohered the amplitude over the joint distribution, into blobs in which A has been transmitted, and A absorbed.  I am in the decoherent blob with A transmitted:  A=(0 ; 1). Ergo, the amplitude vector / polarization state of B, in my blob, behaves as if it starts out as (1 ; 0).  This is just as true whether I measure it with another 0° filter, or a 30° filter.

\n

With symmetrically entangled particles, each particle seems to know the state the other particle has been measured in.  But \"seems\" is the operative word here.  Actually we're just dealing with decoherence that happens to take place in a very symmetrical way.

\n

Tomorrow (if all goes according to plan) we'll look at Bell's Theorem, which rules out the possibility that each photon already has a fixed, non-quantum state that locally determines the result of any possible polarization measurement.

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"Bell's Theorem: No EPR 'Reality'\"

\n

Previous post: \"Decoherence as Projection\"

" } }, { "_id": "EneHGx8t8skPKxHhv", "title": "Decoherence as Projection", "pageUrl": "https://www.lesswrong.com/posts/EneHGx8t8skPKxHhv/decoherence-as-projection", "postedAt": "2008-05-02T06:32:17.000Z", "baseScore": 27, "voteCount": 21, "commentCount": 27, "url": null, "contents": { "documentId": "EneHGx8t8skPKxHhv", "html": "

Previously in series: The Born Probabilities

\n

\"Heisensplit\" In \"The So-Called Heisenberg Uncertainty Principle\" we got a look at how decoherence can affect the apparent surface properties of objects:  By measuring whether a particle is to the left or right of a dividing line, you can decohere the part of the amplitude distribution on the left with the part on the right.  Separating the amplitude distribution into two parts affects its future evolution (within each component) because the two components can no longer interfere with each other.

\n

Yet there are more subtle ways to take apart amplitude distributions than by splitting the position basis down the middle.  And by exploring this, we rise further up the rabbit hole.

\n

\n

(Remember, the classical world is Wonderland, the quantum world is reality.  So when you get deeper into quantum physics, you are going up the rabbit hole, not down the rabbit hole.)

\n

Light has a certain quantum property called \"polarization\".  Of course, all known physical properties are \"quantum properties\", but in this case I mean that polarization neatly exhibits fundamental quantum characteristics.  I mention this, because polarization is often considered part of \"classical\" optics.  Why?  Because the quantum nature of polarization is so simple that it was accidentally worked out as part of classical mechanics, back when light was thought to be a wave.

\n

(Nobody tell the marketers, though, or we'll be wearing \"quantum sunglasses\".)

\n

I don't usually begin by discussing the astronomically high-level phenomena of macroscopic physics, but in this case, I think it will be helpful to begin with a human-world example...

\n

I hand you two little sheets of semi-transparent material, looking perhaps like dark plastic, with small arrows drawn in marker along the sides.  When you hold up one of the sheets in front of you, the scene through it is darker—it blocks some of the light.

\n

\"2polaroids\"Now you hold up the second sheet in front of the first sheet...

\n

When the two arrows are aligned, pointing in the same direction, the scene is no darker than before—that is, the two sheets in series block the same amount of light as the first sheet alone.

\n

But as you rotate the second sheet, so that the two arrows point in increasingly different directions, the world seen through both sheets grows darker.  When the arrows are at 45° angles, the world is half as bright as when you were only holding up one sheet.

\n

When the two arrows are perpendicular (90°) the world is completely black.

\n

Then, as you continue rotating the second sheet, the world gets lighter again.  When the two arrows point in opposite directions, again the lightness is the same as for only one sheet.

\n

Clearly, the sheets are selectively blocking light.  Let's call the sheets \"polarized filters\".

\n

Now, you might reason something like this:  \"Light is built out of two components, an up-down component and a left-right component.  When you hold up a single filter, with the arrow pointing up, it blocks out the left-right component of light, and lets only the up-down component through.  When you hold up another filter in front of the first one, and the second filter has the arrow pointing to the left (or the right), it only allows the left-right component of light, and we already blocked that out, so the world is completely dark.  And at intermediate angles, it, um, blocks some of the light that wasn't blocked already.\"

\n

So I ask, \"Suppose you've already put the second filter at a 45° angle to the first filter.  Now you put up the third filter at a 45° angle to the second filter.  What do you expect to see?\"

\n

\"That's ambiguous,\" you say.  \"Do you mean the third filter to end up at a 0° angle to the first filter, or a 90° angle to the first filter?\"

\n

\"Good heavens,\" I say, \"I'm surprised I forgot to specify that!  Tell me what you expect either way.\"

\n

\"If the third filter is at a 0° angle to the first filter,\" you say, \"It won't block out anything the first filter hasn't blocked already.  So we'll be left with the half-light world, from the second filter being at a 45° angle to the first filter.  And if the third filter is at a 90° angle to the first filter, it will block out everything that the first filter didn't block, and the world will be completely dark.\"

\n

I hand you a third filter.  \"Go ahead,\" I say, \"Try it.\"

\n

First you set the first filter at 0° and the second filter at 45°, as your reference point.  Half the light gets through.

\n

\"3polaroids\"Then you set the first filter at 0°, the second filter at 45°, and the third filter at 0°.  Now one quarter of the light gets through.

\n

\"Huh?\" you say.

\n

\"Keep going,\" I reply.

\n

With the first filter at 0°, the second filter at 45°, and the third filter at 90°, one quarter of the light goes through.  Again.

\n

\"Umm...\" you say.  You quickly take out the second filter, and find that the world goes completely dark.  Then you put in the second filter, again at 45°, and the world resumes one-quarter illumination.

\n

Further investigation quickly verifies that all three filters seem to have the same basic properties—it doesn't matter what order you put them in.

\n

\"All right,\" you say, \"that just seems weird.\"  You pause.  \"So it's probably something quantum.\"

\n

Indeed it is.

\n

Though light may seem \"dim\" or \"bright\" at the macroscopic level, you can't split it up indefinitely; you can always send a single photon into the series of filters, and ask what happens to that single photon.

\n

As you might suspect, if you send a single photon through the succession of three filters, you will find that—assuming the photon passes the first filter (at 0°)—the photon is observed to pass the second filter (at 45°) with 50% probability, and, if the photon does pass the second filter, then it seems to pass the third filter (at 90°) with 50% probability.

\n

The appearance of \"probability\" in deterministic amplitude evolutions, as we now know, is due to decoherence.  Each time a photon was blocked, some other you saw it go through.  Each time a photon went through, some other you saw it blocked.

\n

But what exactly is getting decohered?  And why does an intervening second filter at 45°, let some photons pass that would otherwise be blocked by the 0° filter plus the 90° filter?

\n

First:  We can represent the polarization of light as a complex amplitude for up-down plus a complex amplitude for left-right.  So polarizations might be written as (1 ; 0) or (0 ; -i) or (√.5 ; √.5), with the units (up-down ; left-right).  It is more customary to write these as column vectors, but row vectors are easier to type.

\n

(Note that I say that this is a way to \"represent\" the polarization of light.  There's nothing magical about picking up-down vs. left-right, instead of upright-downleft vs. upleft-downright.  The vectors above are written in an arbitrary but convenient basis.  This will become clearer.)

\n

Let's say that the first filter has its little arrow pointing right.  This doesn't mean that the filter blocks any photon whose polarization is not exactly (0 ; 1) or a multiple thereof.  But it nonetheless happens that all the photons which we see leave the first filter, will have a polarization of (0 ; 1) or some irrelevantly complex multiple thereof.  Let's just take this for granted, for the moment.  Past the first filter at 0°, we're looking at a stream of photons purely polarized in the left-right direction.

\n

Now the photons hit a second filter.  Let's say the second filter is at a 30° angle to the first—so the arrow written on the second filter is pointing 30° above the horizontal.

\n

Then each photon has a 25% probability of being blocked at the second filter, and a 75% probability of going through.

\n

How about if the second filter points to 20° above the horizontal?  12% probability of blockage, 88% probability of going through.

\n

45°, 50/50.

\n

The general rule is that the probability of being blocked is the squared sine of the angle, and the probability of going through is the squared cosine of the angle.
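
\n

(A quick numerical check of those figures, as a sketch of my own:)

\n

import numpy as np

for degrees in [30, 20, 45]:
    theta = np.radians(degrees)
    print(degrees, 'blocked:', round(np.sin(theta) ** 2, 3),
          'through:', round(np.cos(theta) ** 2, 3))
# 30 -> .25 and .75;  20 -> ~.12 and ~.88;  45 -> .5 and .5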

\n

Why?

\n

First, remember two rules we've picked up about quantum mechanics:  The evolution of quantum systems is linear and unitary.  When an amplitude distribution breaks into parts that then evolve separately, the components must (1) add to the original distribution and (2) have squared moduli adding to the squared modulus of the original distribution.

\n

So now let's consider the photons leaving the first filter, with \"polarizations\", quantum states, of (0 ; 1).

\n

To understand what happens when the second filter is set at a 45° angle, we observe... and think of this as a purely abstract statement about 2-vectors... that:

\n
\n

(0 ; 1) = (.5 ; .5) + (-.5 ; .5)

\n
\n

\"Polardecomp\"Okay, so the two vectors on the right-hand-side sum to (0 ; 1) on the left-hand-side.

\n

But what about the squared modulus? Just because two vectors sum to a third, doesn't mean that the squares of the first two vectors' lengths sum to the square of the third vector's length.

\n

The squared length of the vector (.5 ; .5) is (.5)² + (.5)² = .25 + .25 = 0.5.  And likewise the squared length of the vector (-.5 ; .5) is (-.5)² + (.5)² = 0.5.  The sum of the squares is 0.5 + 0.5 = 1.  Which matches the squared length of the vector (0 ; 1).

\n

\"Polarpythagorean\" So when you decompose (0 ; 1) into (.5 ; .5) + (-.5 ; .5), this obeys both linearity and unitarity:  The two parts sum to the original, and the squared modulus of the parts sums to the squared modulus of the original.

\n

When you interpose the second filter at an angle of 45° from the first, it decoheres the incoming amplitude of (0 ; 1) into an amplitude of (.5 ; .5) for being transmitted and an amplitude of (-.5 ; .5) for being blocked.  Taking the squared modulus of the amplitudes gives us the observed Born probabilities, i.e. fifty-fifty.

\n

\"Polar3060\" What if you interposed the second filter at an angle of 30° from the first?  Then that would decohere the incoming amplitude vector of (0 ; 1) into the vectors (.433 ; .75) and (-.433, .25).  The squared modulus of the first vector is .75, and the squared modulus of the second vector is .25, again summing to one.

\n

A polarized filter projects the incoming amplitude vector into the two sides of a right triangle that sums to the original vector, and decoheres the two components.  And so, under Born's rule, the transmission and absorption probabilities are given by the Pythagorean Theorem.
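
\n

(The projection is mechanical enough to fit in a few lines of Python.  This is my own sketch with invented names, again taking the transmit axis of a filter at angle θ to be (sin θ ; cos θ).)

\n

import numpy as np

def decohere_at(filter_angle_deg, incoming):
    # Project the incoming vector onto the filter's transmit and absorb axes.
    # The two components sum to the original vector (linearity), and their
    # squared lengths are the Born probabilities (unitarity).
    t = np.radians(filter_angle_deg)
    transmit_axis = np.array([np.sin(t), np.cos(t)])
    absorb_axis = np.array([-np.cos(t), np.sin(t)])
    transmitted = (incoming @ transmit_axis) * transmit_axis
    absorbed = (incoming @ absorb_axis) * absorb_axis
    return transmitted, absorbed

photon = np.array([0.0, 1.0])   # (0 ; 1), past the 0° filter
for angle in [45, 30]:
    t_vec, a_vec = decohere_at(angle, photon)
    print(angle, t_vec.round(3), a_vec.round(3),
          round(t_vec @ t_vec, 3), round(a_vec @ a_vec, 3))
# 45 -> (.5 ; .5) and (-.5 ; .5), probabilities .5 and .5
# 30 -> (.433 ; .75) and (-.433 ; .25), probabilities .75 and .25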

\n

(!)

\n

\"3polaroids_2\" A filter set at 0° followed by a filter set at 90° will block all light—any photon that emerges from the first filter will have an amplitude vector of (0 ; 1), and the component in the direction of (1 ; 0) will be 0.  But suppose that instead you put an intermediate filter at 45°.  This will decohere the vector of (0 ; 1) into a transmission vector of (.5 ; .5) and an absorption amplitude of (-.5 ; .5).

\n

A photon that is transmitted through the 45° filter will have a polarization amplitude vector of (.5 ; .5).  (The (-.5 ; .5) component is decohered into another world where you see the photon absorbed.)

\n

This photon then hits the 90° filter, whose transmission amplitude is the component in the direction of (1 ; 0), and whose absorption amplitude is the component in the direction of (0 ; 1).  (.5 ; .5) has a component of (.5 ; 0) in the direction of (1 ; 0) and a component of (0 ; .5) in the direction of (0 ; 1).  So it has an amplitude of (.5 ; 0) to make it through both filters, which translates to a Born probability of .25.

\n

Likewise if the second filter is at -45°.  Then it decoheres the incoming (0 ; 1) into a transmission amplitude of (-.5 ; .5) and an absorption amplitude of (.5 ; .5).  When (-.5 ; .5) hits the third filter at 90°, it has a component of (-.5 ; 0) in the direction of (1 ; 0), and because these are complex numbers we're talking about, (-.5 ; 0) has a squared modulus of 0.25, that is, 25% probability to go through both filters.

\n

It may seem surprising that putting in an extra filter causes more photons to go through, even when you send them one at a time; but that's quantum physics for you.

\n

\"But wait,\" you say, \"Who needs the second filter?  Why not just use math?  The initial amplitude of (0 ; 1) breaks into an amplitude of (-.5 ; .5) + (.5 ; .5) whether or not you have the second filter there.  By linearity, the evolution of the parts should equal the evolution of the whole.\"

\n

Yes, indeed!  So, with no second filter—just the 0° filter and the 90° filter—here's how we'd do that analysis:

\n

First, the 0° filter decoheres off all amplitude of any incoming photons except the component in the direction of (0 ; 1).  Now we look at the photon—which has some amplitude (0 ; x) that we've implicitly been renormalizing to (0 ; 1)—and, in a purely mathematical sense, break it up into (.5x ; .5x) and (-.5x ; .5x) whose squared moduli will sum to x².

\n

Now first we consider the (.5x ; .5x) component; it strikes the 90° filter which transmits the component (.5x ; 0) and absorbs the (0 ; .5x) component.

\n

Next we consider the (-.5x ; .5x) component.  It also strikes the 90° filter, which transmits the component (-.5x ; 0) and absorbs the component (0 ; .5x).

\n

\"Polarbreakdown\" Since no other particles are entangled, we have some identical configurations here:  Namely, the two configurations where the photon is transmitted, and the two configurations where the photon is absorbed.

\n

Summing the amplitude vectors of (.5x ; 0) and (-.5x ; 0) for transmission, we get a total amplitude vector of (0 ; 0).

\n

Summing the amplitude vectors of (0 ; .5x) and (0 ; .5x) for absorption, we get an absorption amplitude of (0 ; x).

\n

So all photons that make it through the first filter are blocked.

\n

Remember Experiment 2 from way back when?  Opening up a new path to a detector can cause fewer photons to be detected, because the new path has an amplitude of opposite sign to some existing path, and they cancel out.

\n

In an exactly analogous manner, having a filter that sometimes absorbs photons, can cause more (individual) photons to get through a series of filters.  Think of it as decohering off a component of the amplitude that would otherwise destructively interfere with another component.
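
\n

(One more sketch of mine: treat each filter as keeping only the transmitted component, since the absorbed component decoheres away, and the two-filter versus three-filter arithmetic falls right out.)

\n

import numpy as np

def pass_filter(angle_deg, incoming):
    # Keep only the component along the filter's transmit axis; the absorbed
    # component has decohered into a world where the photon was blocked.
    t = np.radians(angle_deg)
    axis = np.array([np.sin(t), np.cos(t)])
    return (incoming @ axis) * axis

photon = np.array([0.0, 1.0])   # past the 0° filter

two = pass_filter(90, photon)
three = pass_filter(90, pass_filter(45, photon))

print(two @ two)     # 0.0:  0° then 90° blocks everything
print(three @ three) # 0.25: inserting the 45° filter lets a quarter through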

\n
\n

A word about choice of basis:

\n
\n

You could just as easily create a new basis in which (1 ; 0) = (.707 ; .707) and (0 ; 1) = (.707 ; -.707).  This is the upright-downleft and upleft-downright basis of which I spoke before.  .707 = √.5, so the basis vectors individually have length 1; and the dot product of the two vectors is 0, so they are orthogonal.  That is, they are \"orthonormal\".

\n

The new basis is just as valid as a compass marked NW, NE, SE, SW instead of N, E, S, W.  There isn't an absolute basis of the photon's polarization amplitude vector, any more than there's an absolute three-coordinate system that describes your location in space.  Ideally, you should see the photon's polarization as a purely abstract 2-vector in complex space.
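
\n

(A small check of that change of basis, in the same sketch-Python as before:)

\n

import numpy as np

e1 = np.array([np.sqrt(0.5), np.sqrt(0.5)])    # upright-downleft
e2 = np.array([np.sqrt(0.5), -np.sqrt(0.5)])   # upleft-downright

v = np.array([0.0, 1.0])           # (0 ; 1) in the up-down/left-right basis
coords = np.array([v @ e1, v @ e2])

print(coords)          # [ 0.707..., -0.707...]: same vector, new coordinates
print(coords @ coords) # 1.0: the squared modulus doesn't care about the basis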

\n

(One of my great \"Ahas!\" while reading the Feynman Lectures was the realization that, rather than a 3-vector being made out of an ordered list of 3 scalars, a 3-vector was just a pure mathematical object in a vector algebra.  If you wanted to take the 3-vector apart for some reason, you could generate an arbitrary orthonormal basis and get 3 scalars that way.  In other words, you didn't build the vector space by composing scalars—you built the decomposition from within the vector space.  I don't know if that makes any sense to my readers out there, but it was the great turning point in my relationship with linear algebra.)

\n

Oh, yes, and what happens if you have a complex polarization in the up-down/left-right basis, like (.707i ; .707)?  Then that corresponds to \"circular polarization\" or \"elliptical polarization\".  All the polarizations I've been talking about are \"linear polarizations\", where the amplitudes in the up-down/left-right basis happen to be real numbers.

\n

When things decohere, they decohere into pieces that add up to the original (linearity) and whose squared moduli add up to the original squared modulus (unitarity).  If the squared moduli of the pieces add up to the original squared modulus, this implies the pieces are orthogonal—that the components have inner products of zero with each other.  That is why the title of this blog post is \"Decoherence as Projection\".

\n
\n

A word about how not to see this whole business of polarization:

\n
\n

Some ancient textbooks will say that when you send a photon through a 0° filter, and it goes through, you've learned that the photon is polarized left-right rather than up-down.  Now you measure it with another filter at a 45° angle, and it goes through, so you've learned that the photon is polarized upright-downleft rather than upleft-downright.  And (says the textbook) this second measurement \"destroys\" the first, so that if you want to know the up-down / left-right polarization, you'll have to measure it all over again.

\n

Because you can't know both at the same time.

\n

And some of your more strident ancient textbooks will say something along the lines of: the up-down / left-right polarization no longer exists after the photon goes through the 45° filter.  It's not just unknown, it doesn't exist, and—

\n

(you might think that wasn't too far from the truth)

\n

it is meaningless to even talk about it.

\n

Okay.  That's going a bit too far.

\n

There are ways to use a polarizer to split a beam into two components, rather than absorbing a component and transmitting a component.

\n

Suppose you first send the photons through a 0° filter.  Then you send them through a 45° splitter.  Then you recombine the beams.  Then you send the photons through a 0° filter again.  All the photons that made it past the first filter, will make it past the third filter as well.  Because, of course, you've put the components back together again, and (.5 ; .5) + (-.5 ; .5) = (0 ; 1).

\n

This doesn't seem to square with the idea that measuring the 45° polarization automatically destroys the up-down/left-right polarization, that it isn't even meaningful to talk about it.

\n

Of course the one will say, \"Ah, but now you no longer know which path the photon took past the splitter.  When you recombined the beams, you unmeasured the photon's 45° polarization, and the original 0° polarization popped back into existence again, and it was always meaningful to talk about it.\"

\n

O RLY?

\n

Anyway, that's all talk about classical surface appearances, and you've seen the underlying quantum reality.  A photon with polarization of (-.707 ; .707) has a component of (-.707 ; 0) in the up-down direction and a component of (0 ; .707) in the left-right direction.  If you happened to feed it into an apparatus that decohered these two components—like a polarizing filter—then you would be able to predict the decoherent evolution as a deterministic fact about the amplitude distribution, and the Born probabilities would (deterministically if mysteriously) come out to 50/50.

\n

Now someone comes along and says that the result of this measurement you may or may not perform, doesn't exist or, better yet, isn't meaningful.

\n

It's hard to see what this startling statement could mean, let alone how it could improve your experimental predictions.  How would you falsify it?

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"Entangled Photons\"

\n

Previous post: \"The Born Probabilities\"

" } }, { "_id": "3ZKvf9u2XEWddGZmS", "title": "The Born Probabilities", "pageUrl": "https://www.lesswrong.com/posts/3ZKvf9u2XEWddGZmS/the-born-probabilities", "postedAt": "2008-05-01T05:50:53.000Z", "baseScore": 38, "voteCount": 31, "commentCount": 82, "url": null, "contents": { "documentId": "3ZKvf9u2XEWddGZmS", "html": "

Previously in series: Decoherence is Pointless
Followup to: Where Experience Confuses Physicists

\n

One serious mystery of decoherence is where the Born probabilities come from, or even what they are probabilities of.  What does the integral over the squared modulus of the amplitude density have to do with anything?

\n

This was discussed by analogy in \"Where Experience Confuses Physicists\", and I won't repeat arguments already covered there.  I will, however, try to convey exactly what the puzzle is, in the real framework of quantum mechanics.

\n

\n

A professor teaching undergraduates might say:  \"The probability of finding a particle in a particular position is given by the squared modulus of the amplitude at that position.\"

\n

This is oversimplified in several ways.

\n

First, for continuous variables like position, amplitude is a density, not a point mass.  You integrate over it.  The integral over a single point is zero.

\n

(Historical note:  If \"observing a particle's position\" invoked a mysterious event that squeezed the amplitude distribution down to a delta point, or flattened it in one subspace, this would give us a different future amplitude distribution from what decoherence would predict.  All interpretations of QM that involve quantum systems jumping into a point/flat state, which are both testable and have been tested, have been falsified.  The universe does not have a \"classical mode\" to jump into; it's all amplitudes, all the time.)

\n

Second, a single observed particle doesn't have an amplitude distribution.  Rather the system containing yourself, plus the particle, plus the rest of the universe, may approximately factor into the multiplicative product of (1) a sub-distribution over the particle position and (2) a sub-distribution over the rest of the universe.  Or rather, the particular blob of amplitude that you happen to be in, can factor that way.

\n

So what could it mean, to associate a \"subjective probability\" with a component of one factor of a combined amplitude distribution that happens to factorize?

\n

Recall the physics for:

\n
\n

(Human-BLANK * Sensor-BLANK) * (Atom-LEFT + Atom-RIGHT)
        =>
(Human-LEFT * Sensor-LEFT * Atom-LEFT) + (Human-RIGHT * Sensor-RIGHT * Atom-RIGHT)

\n
\n

Think of the whole process as reflecting the good-old-fashioned distributive rule of algebra.  The initial state can be decomposed—note that this is an identity, not an evolution—into:

\n
\n

(Human-BLANK * Sensor-BLANK) * (Atom-LEFT + Atom-RIGHT)
    =
(Human-BLANK * Sensor-BLANK * Atom-LEFT) + (Human-BLANK * Sensor-BLANK * Atom-RIGHT)

\n
\n

We assume that the distribution factorizes.  It follows that the term on the left, and the term on the right, initially differ only by a multiplicative factor of Atom-LEFT vs. Atom-RIGHT.

\n

If you were to immediately take the multi-dimensional integral over the squared modulus of the amplitude density of that whole system,

\n

Then the ratio of the all-dimensional integral of the squared modulus over the left-side term, to the all-dimensional integral over the squared modulus of the right-side term,

\n

Would equal the ratio of the lower-dimensional integral over the squared modulus of the Atom-LEFT, to the lower-dimensional integral over the squared modulus of Atom-RIGHT,

\n

For essentially the same reason that if you've got (2 * 3) * (5 + 7), the ratio of (2 * 3 * 5) to (2 * 3 * 7) is the same as the ratio of 5 to 7.

\n

Doing an integral over the squared modulus of a complex amplitude distribution in N dimensions doesn't change that.
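
\n

(Here is a toy discrete version, mine and much cruder than the integrals in question, in which the shared factor cancels out of the ratio exactly like the 2 * 3 above.)

\n

import numpy as np

rng = np.random.default_rng(0)

# A shared factor standing in for (Human-BLANK * Sensor-BLANK), plus two
# one-dimensional stand-ins for the Atom-LEFT and Atom-RIGHT amplitudes:
rest = rng.normal(size=50) + 1j * rng.normal(size=50)
atom_left, atom_right = 0.8, 0.6

left_term = atom_left * rest
right_term = atom_right * rest

ratio_whole = np.sum(abs(left_term) ** 2) / np.sum(abs(right_term) ** 2)
ratio_atom = atom_left ** 2 / atom_right ** 2

print(ratio_whole, ratio_atom)   # both 1.777...: the shared factor cancels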

\n

There's also a rule called \"unitary evolution\" in quantum mechanics, which says that quantum evolution never changes the total integral over the squared modulus of the amplitude density.

\n

So if you assume that the initial left term and the initial right term evolve, without overlapping each other, into the final LEFT term and the final RIGHT term, they'll have the same ratio of integrals over etcetera as before.

\n

What all this says is that,

\n

If some roughly independent Atom has got a blob of amplitude on the left of its factor, and a blob of amplitude on the right,

\n

Then, after the Sensor senses the atom, and you look at the Sensor,

\n

The integrated squared modulus of the whole LEFT blob, and the integrated squared modulus of the whole RIGHT blob,

\n

Will have the same ratio,

\n

As the ratio of the squared moduli of the original Atom-LEFT and Atom-RIGHT components.

\n

This is why it's important to remember that apparently individual particles have amplitude distributions that are multiplicative factors within the total joint distribution over all the particles.

\n

If a whole gigantic human experimenter made up of quintillions of particles,

\n

Interacts with one teensy little atom whose amplitude factor has a big bulge on the left and a small bulge on the right,

\n

Then the resulting amplitude distribution, in the joint configuration space,

\n

Has a big amplitude blob for \"human sees atom on the left\", and a small amplitude blob of \"human sees atom on the right\".

\n

And what that means, is that the Born probabilities seem to be about finding yourself in a particular blob, not the particle being in a particular place.

\n

But what does the integral over squared moduli have to do with anything?  On a straight reading of the data, you would always find yourself in both blobs, every time.  How can you find yourself in one blob with greater probability?  What are the Born probabilities, probabilities of?  Here's the map—where's the territory?

\n

I don't know.  It's an open problem.  Try not to go funny in the head about it.

\n

This problem is even worse than it looks, because the squared-modulus business is the only non-linear rule in all of quantum mechanics.  Everything else—everything else—obeys the linear rule that the evolution of amplitude distribution A, plus the evolution of the amplitude distribution B, equals the evolution of the amplitude distribution A + B.

\n

When you think about the weather in terms of clouds and flapping butterflies, it may not look linear on that higher level.  But the amplitude distribution for weather (plus the rest of the universe) is linear on the only level that's fundamentally real.

\n

Does this mean that the squared-modulus business must require additional physics beyond the linear laws we know—that it's necessarily futile to try to derive it on any higher level of organization?

\n

But even this doesn't follow.

\n

Let's say I have a computer program which computes a sequence of positive integers that encode the successive states of a sentient being.  For example, the positive integers might describe a Conway's-Game-of-Life universe containing sentient beings (Life is Turing-complete) or some other cellular automaton.

\n

Regardless, this sequence of positive integers represents the time series of a discrete universe containing conscious entities.  Call this sequence Sentient(n).

\n

Now consider another computer program, which computes the negative of the first sequence:  -Sentient(n).  If the computer running Sentient(n) instantiates conscious entities, then so too should a program that computes Sentient(n) and then negates the output.

\n

Now I write a computer program that computes the sequence {0, 0, 0...} in the obvious fashion.

\n

This sequence happens to be equal to the sequence Sentient(n) + -Sentient(n).

\n

So does a program that computes {0, 0, 0...} necessarily instantiate as many conscious beings as both Sentient programs put together?

\n

Admittedly, this isn't an exact analogy for \"two universes add linearly and cancel out\".  For that, you would have to talk about a universe with linear physics, which excludes Conway's Life.  And then in this linear universe, two states of the world both containing conscious observers—world-states equal but for their opposite sign—would have to cancel out.

\n

It doesn't work in Conway's Life, but it works in our own universe!  Two quantum amplitude distributions can contain components that cancel each other out, and this demonstrates that the number of conscious observers in the sum of two distributions, need not equal the sum of conscious observers in each distribution separately.

\n

So it actually is possible that we could pawn off the only non-linear phenomenon in all of quantum physics onto a better understanding of consciousness.  The question \"How many conscious observers are contained in an evolving amplitude distribution?\" has obvious reasons to be non-linear.

\n

(!)

\n

Robin Hanson has made a suggestion along these lines.

\n

(!!)

\n

Decoherence is a physically continuous process, and the interaction between LEFT and RIGHT blobs may never actually become zero.

\n

So, Robin suggests, any blob of amplitude which gets small enough, becomes dominated by stray flows of amplitude from many larger worlds.

\n

A blob which gets too small, cannot sustain coherent inner interactions—an internally driven chain of cause and effect—because the amplitude flows are dominated from outside.  Too-small worlds fail to support computation and consciousness, or are ground up into chaos, or merge into larger worlds.

\n

Hence Robin's cheery phrase, \"mangled worlds\".

\n

The cutoff point will be a function of the squared modulus, because unitary physics preserves the squared modulus under evolution; if a blob has a certain total squared modulus, future evolution will preserve that integrated squared modulus so long as the blob doesn't split further.  You can think of the squared modulus as the amount of amplitude available to internal flows of causality, as opposed to outside impositions.

\n

The seductive aspect of Robin's theory is that quantum physics wouldn't need interpreting.  You wouldn't have to stand off beside the mathematical structure of the universe, and say, \"Okay, now that you're finished computing all the mere numbers, I'm furthermore telling you that the squared modulus is the 'degree of existence'.\"  Instead, when you run any program that computes the mere numbers, the program automatically contains people who experience the same physics we do, with the same probabilities.

\n

A major problem with Robin's theory is that it seems to predict things like, \"We should find ourselves in a universe in which very few decoherence events have already taken place,\" which tendency does not seem especially apparent.

\n

The main thing that would support Robin's theory would be if you could show from first principles that mangling does happen; and that the cutoff point is somewhere around the median amplitude density (the point where half the total amplitude density is in worlds above the point, and half beneath it), which is apparently what it takes to reproduce the Born probabilities in any particular experiment.

\n

What's the probability that Hanson's suggestion is right?  I'd put it under fifty percent, which I don't think Hanson would disagree with.  It would be much lower if I knew of a single alternative that seemed equally... reductionist.

\n

But even if Hanson is wrong about what causes the Born probabilities, I would guess that the final answer still comes out equally non-mysterious.  Which would make me feel very silly, if I'd embraced a more mysterious-seeming \"answer\" up until then.  As a general rule, it is questions that are mysterious, not answers.

\n

When I began reading Hanson's paper, my initial thought was:  The math isn't beautiful enough to be true.

\n

By the time I finished processing the paper, I was thinking:  I don't know if this is the real answer, but the real answer has got to be at least this normal.

\n

This is still my position today.

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"Decoherence as Projection\"

\n

Previous post: \"Decoherent Essences\"

" } }, { "_id": "HwMfEcmxyM3eqqfvi", "title": "Decoherent Essences", "pageUrl": "https://www.lesswrong.com/posts/HwMfEcmxyM3eqqfvi/decoherent-essences", "postedAt": "2008-04-30T06:32:11.000Z", "baseScore": 24, "voteCount": 21, "commentCount": 36, "url": null, "contents": { "documentId": "HwMfEcmxyM3eqqfvi", "html": "

Followup to: Decoherence is Pointless

\n

In \"Decoherence is Pointless\", we talked about quantum states such as

\n
\n

(Human-BLANK) * ((Sensor-LEFT * Atom-LEFT) + (Sensor-RIGHT * Atom-RIGHT))

\n
\n

which describes the evolution of a quantum system just after a sensor has measured an atom, and right before a human has looked at the sensor—or before the human has interacted gravitationally with the sensor, for that matter.  (It doesn't take much interaction to decohere objects the size of a human.)

\n

But this is only one way of looking at the amplitude distribution—a way that makes it easy to see objects like humans, sensors, and atoms.  There are other ways of looking at this amplitude distribution—different choices of basis—that will make the decoherence less obvious.

\n

\n

Suppose that you have the \"entangled\" (non-independent) state:

\n
\n

(Sensor-LEFT * Atom-LEFT) + (Sensor-RIGHT * Atom-RIGHT)

\n
\n

considering now only the sensor and the atom.

\n

This state looks nicely diagonalized—separated into two distinct blobs.  But by linearity, we can take apart a quantum amplitude distribution any way we like, and get the same laws of physics back out.  So in a different basis, we might end up writing (Sensor-LEFT * Atom-LEFT) as:

\n
\n

(0.5(Sensor-LEFT + Sensor-RIGHT) + 0.5(Sensor-LEFT - Sensor-RIGHT)) * (0.5(Atom-RIGHT + Atom-LEFT) - 0.5(Atom-RIGHT - Atom-LEFT))

\n
\n

(Don't laugh.  There are legitimate reasons for physicists to reformulate their quantum representations in weird ways.)
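
\n

(If the algebra is hard to eyeball, here is a little Python of mine that encodes the four states as orthonormal vectors and checks that the strange-looking product really is Sensor-LEFT * Atom-LEFT.)

\n

import numpy as np

S_L, S_R = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # sensor states
A_L, A_R = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # atom states

# np.kron builds the product state in the four-dimensional joint space.
original = np.kron(S_L, A_L)
rebuilt = np.kron(0.5 * (S_L + S_R) + 0.5 * (S_L - S_R),
                  0.5 * (A_R + A_L) - 0.5 * (A_R - A_L))

print(np.allclose(original, rebuilt))   # True: same state, different dress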

\n

The result works out the same, of course.  But if you view the entangled state in a basis made up of linearly independent components like (Sensor-LEFT - Sensor-RIGHT) and (Atom-RIGHT - Atom-LEFT), you see a differently shaped amplitude distribution, and it may not look like the blobs are separated.

\n

Oh noes!  The decoherence has disappeared!

\n

...or that's the source of a huge academic literature asking, \"Doesn't the decoherence interpretation require us to choose a preferred basis?\"

\n

To which the short answer is:  Choosing a basis is an isomorphism; it doesn't change any experimental predictions.  Decoherence is an experimentally visible phenomenon or we would not have to protect quantum computers from it.  You can't protect a quantum computer by \"choosing the right basis\" instead of using environmental shielding.  Likewise, looking at splitting humans from another angle won't make their decoherence go away.

\n

But this is an issue that you're bound to encounter if you pursue quantum mechanics, especially if you talk to anyone from the Old School, and so it may be worth expanding on this reply.

\n

After all, if the short answer is as obvious as I've made it sound, then why, oh why, would anyone ever think you could eliminate an experimentally visible phenomenon like decoherence, by isomorphically reformulating the mathematical representation of quantum physics?

\n

That's a bit difficult to describe in one mere blog post.  It has to do with history.  You know the warning I gave about dragging history into explanations of QM... so consider yourself warned:  Quantum mechanics is simpler than the arguments we have about quantum mechanics.  But here, then, is the history:

\n

Once upon a time,

\n

Long ago and far away, back when the theory of quantum mechanics was first being developed,

\n

No one had ever thought of decoherence.  The question of why a human researcher only saw one thing at a time, was a Great Mystery with no obvious answer.

\n

You had to interpret quantum mechanics to get an answer back out of it.  Like reading meanings into an oracle.  And there were different, competing interpretations.  In one popular interpretation, when you \"measured\" a system, the Quantum Spaghetti Monster would eat all but one blob of amplitude, at some unspecified time that was exactly right to give you whatever experimental result you actually saw.

\n

Needless to say, this \"interpretation\" wasn't in the quantum equations.  You had to add in the extra postulate of a Quantum Spaghetti Monster on top, additionally to the differential equations you had fixed experimentally for describing how an amplitude distribution evolved.

\n

Along came Hugh Everett and said,  \"Hey, maybe the formalism just describes the way the universe is, without any need to 'interpret' it.\"

\n

But people were so used to adding extra postulates to interpret quantum mechanics, and so unused to the idea of amplitude distributions as real, that they couldn't see this new \"interpretation\" as anything except an additional Decoherence Postulate which said:

\n

\"When clouds of amplitude become separated enough, the Quantum Spaghetti Monster steps in and creates a new world corresponding to each cloud of amplitude.\"

\n

So then they asked:

\n

\"Exactly how separated do two clouds of amplitude have to be, quantitatively speaking, in order to invoke the instantaneous action of the Quantum Spaghetti Monster?  And in which basis does the Quantum Spaghetti Monster measure separation?\"

\n

But, in the modern view of quantum mechanics—which is accepted by everyone except for a handful of old fogeys who may or may not still constitute a numerical majority—well, as David Wallace puts it:

\n
\n

\"If I were to pick one theme as central to the tangled development of the Everett interpretation of quantum mechanics, it would probably be: the formalism is to be left alone.\"

\n
\n

Decoherence is not an extra phenomenon.  Decoherence is not something that has to be proposed additionally.  There is no Decoherence Postulate on top of standard QM.  It is implicit in the standard rules.  Decoherence is just what happens by default, given the standard quantum equations, unless the Quantum Spaghetti Monster intervenes.

\n

Some still claim that the quantum equations are unreal—a mere model that just happens to give amazingly good experimental predictions.  But then decoherence is what happens to the particles in the \"unreal model\", if you apply the rules universally and uniformly.  It is denying decoherence that requires you to postulate an extra law of physics, or an act of the Quantum Spaghetti Monster.

\n

(Needless to say, no one has ever observed a quantum system behaving coherently, when the untouched equations say it should be decoherent; nor observed a quantum system behaving decoherently, when the untouched equations say it should be coherent.)

\n

If you're talking about anything that isn't in the equations, you must not be talking about \"decoherence\". The standard equations of QM, uninterpreted, do not talk about a Quantum Spaghetti Monster creating new worlds.  So if you ask when the Quantum Spaghetti Monster creates a new world, and you can't answer the question just by looking at the equations, then you must not be talking about \"decoherence\".  QED.

\n

Which basis you use in your calculations makes no difference to standard QM.  \"Decoherence\" is a phenomenon implicit in standard QM. Which basis you use makes no difference to \"decoherence\".  QED.

\n

Changing your view of the configuration space can change your view of the blobs of amplitude, but ultimately the same physical events happen for the same causal reasons.  Momentum basis, position basis, position basis with a different relativistic space of simultaneity—it doesn't matter to QM, ergo it doesn't matter to decoherence.

\n

If this were not so, you could do an experiment to find out which basis was the right one!  Decoherence is an experimentally visible phenomenon—that's why we have to protect quantum computers from it.

\n

Ah, but then where is the decoherence in

\n
\n

(0.5(Sensor-LEFT + Sensor-RIGHT) + 0.5(Sensor-LEFT - Sensor-RIGHT)) * (0.5(Atom-RIGHT + Atom-LEFT) - 0.5(Atom-RIGHT - Atom-LEFT)) + (0.5(Sensor-LEFT + Sensor-RIGHT) - 0.5(Sensor-LEFT - Sensor-RIGHT)) * (0.5(Atom-RIGHT + Atom-LEFT) + 0.5(Atom-RIGHT - Atom-LEFT))

\n
\n

?

\n

The decoherence is still there.  We've just made it harder for a human to see, in the new representation.

\n

The main interesting fact I would point to, about this amazing new representation, is that we can no longer calculate its evolution with local causality.  For a technical definition of what I mean by \"causality\" or \"local\", see Judea Pearl's Causality.  Roughly, to compute the evolution of an amplitude cloud in a locally causal basis, each point in configuration space only has to look at its infinitesimal neighborhood to determine its instantaneous change.  As I understand quantum physics—I pray to some physicist to correct me if I'm wrong—the position basis is local in this sense.

\n

(Note:  It's okay to pray to physicists, because physicists actually exist and can answer prayers.)

\n

However, once you start breaking down the amplitude distribution into components like (Sensor-RIGHT - Sensor-LEFT), then the flow of amplitude, and the flow of causality, is no longer local within the new configuration space.  You can still calculate it, but you have to use nonlocal calculations.

\n

In essence, you've obscured the chessboard by subtracting the queen's position from the king's position.  All the information is still there, but it's harder to see.

\n

When it comes to talking about whether \"decoherence\" has occurred in the quantum state of a human brain, what should intuitively matter is questions like, \"Does the event of a neuron firing in Human-LEFT have a noticeable influence on whether a corresponding neuron fires in Human-RIGHT?\"  You can choose a basis that will mix up the amplitude for Human-LEFT and Human-RIGHT, in your calculations.  You cannot, however, choose a basis that makes a human neuron fire when it would not otherwise have fired; any more than you can choose a basis that will protect a quantum computer without the trouble of shielding, or choose a basis that will make apples fall upward instead of down, etcetera.

\n

The formalism is to be left alone!  If you're talking about anything that isn't in the equations, you're not talking about decoherence!  Decoherence is part of the invariant essence that doesn't change no matter how you spin your basis—just like the physical reality of apples and quantum computers and brains.

\n

There may be a kind of Mind Projection Fallacy at work here.  A tendency to see the basis itself as real—something that a Quantum Spaghetti Monster might come in and act upon—because you spend so much time calculating with it.

\n

In a strange way, I think, this sort of jump is actively encouraged by the Old School idea that the amplitude distributions aren't real.  If you were told the amplitude distributions were physically real, you would (hopefully) get in the habit of looking past mere representations, to see through to some invariant essence inside—a reality that doesn't change no matter how you choose to represent it.

\n

But people are told the amplitude distribution is not real.  The calculation itself is all there is, and has no virtue save its mysteriously excellent experimental predictions.  And so there is no point in trying to see through the calculations to something within.

\n

Then why not interpret all this talk of \"decoherence\" in terms of an arbitrarily chosen basis?  Isn't that all there is to interpret—the calculation that you did in some representation or another?  Why not complain, if—having thus interpreted decoherence—the separatedness of amplitude blobs seems to change, when you change the basis?  Why try to see through to the neurons, or the flows of causality, when you've been told that the calculations are all?

\n

(This notion of seeing through—looking for an essence, and not being distracted by surfaces—is one that pops up again and again, and again and again and again, in the Way of Rationality.)

\n

Another possible problem is that the calculations are crisp, but the essences inside them are not.  Write out an integral, and the symbols are digitally distinct.  But an entire apple, or an entire brain, is larger than anything you can handle formally.

\n

Yet the form of that crisp integral will change when you change your basis; and that sloppy real essence will remain invariant.  Reformulating your equations won't remove a dagger, or silence a firing neuron, or shield a quantum computer from decoherence.

\n

The phenomenon of decoherence within brains and sensors, may not be any more crisply defined than the brains and sensors themselves.  Brains, as high-level phenomena, don't always make a clear appearance in fundamental equations.  Apples aren't crisp, you might say.

\n

For historical reasons, some Old School physicists are accustomed to QM being \"interpreted\" using extra postulates that involve crisp actions by the Quantum Spaghetti Monster—eating blobs of amplitude at a particular instant, or creating worlds at a particular instant.  Since the equations aren't supposed to be real, the sloppy borders of real things are not looked for, and the crisp calculations are primary.  This makes it hard to see through to a real (but uncrisp) phenomenon among real (but uncrisp) brains and apples, invariant under changes of crisp (but arbitrary) representation.

\n

Likewise, any change of representation that makes apples harder to see, or brains harder to see, will make decoherence within brains harder to see.  But it won't change the apple, the brain, or the decoherence.

\n

As always, any philosophical problems that result from \"brain\" or \"person\" or \"consciousness\" not being crisply defined, are not the responsibility of physicists or of any fundamental physical theory. Nor are they limited to decoherent quantum physics particularly, appearing likewise in splitting brains constructed under classical physics, etcetera.

\n

Coming tomorrow (hopefully):  The Born Probabilities, aka, that mysterious thing we do with the squared modulus to get our experimental predictions.

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"The Born Probabilities\"

\n

Previous post: \"Decoherence is Pointless\"

" } }, { "_id": "aWFwfk3MBEyR4Ne8C", "title": "Decoherence is Pointless", "pageUrl": "https://www.lesswrong.com/posts/aWFwfk3MBEyR4Ne8C/decoherence-is-pointless", "postedAt": "2008-04-29T06:38:54.000Z", "baseScore": 18, "voteCount": 15, "commentCount": 5, "url": null, "contents": { "documentId": "aWFwfk3MBEyR4Ne8C", "html": "

Previously in series: On Being Decoherent

\n

Yesterday's post argued that continuity of decoherence is no bar to accepting it as an explanation for our experienced universe, insofar as it is a physicist's responsibility to explain it.  This is a good thing, because the equations say decoherence is continuous, and the equations get the final word.

\n

Now let us consider the continuity of decoherence in greater detail...

\n

\n

 On Being Decoherent talked about the decoherence process,

\n
\n

(Human-BLANK) * (Sensor-BLANK) * (Atom-LEFT + Atom-RIGHT)
        =>
(Human-BLANK) * ((Sensor-LEFT * Atom-LEFT) + (Sensor-RIGHT * Atom-RIGHT))
        =>
(Human-LEFT * Sensor-LEFT * Atom-LEFT) + (Human-RIGHT * Sensor-RIGHT * Atom-RIGHT)

\n
\n

At the end of this process, it may be that your brain in LEFT and your brain in RIGHT are, in a technical sense, communicating—that they have intersecting, interfering amplitude flows.

\n

But the amplitude involved in this process, is the amplitude for a brain (plus all entangled particles) to leap into the other brain's state. This influence may, in a quantitative sense, exist; but it's exponentially tinier than the gravitational influence upon your brain of a mouse sneezing on Pluto.

\n

By the same token, decoherence always entangles you with a blob of amplitude density, not a point mass of amplitude.  A point mass of amplitude would be a discontinuous amplitude distribution, hence unphysical.  The distribution can be very narrow, very sharp—even exponentially narrow—but it can't actually be pointed (nondifferentiable), let alone a point mass.

\n

Decoherence, you might say, is pointless.

\n

If a measuring instrument is sensitive enough to distinguish 10 positions with 10 separate displays on a little LCD screen, it will decohere the amplitude into at least 10 parts, almost entirely noninteracting.  In all probability, the instrument is physically quite a bit more sensitive (in terms of evolving into different configurations) than what it shows on screen.  You would find experimentally that the particle was being decohered (with consequences for momentum, etc.) more than the instrument was designed to measure from a human standpoint.

\n

But there is no such thing as infinite sensitivity in a continuous quantum physics:  If you start with blobs of amplitude density, you don't end up with point masses.  Liouville's Theorem, which generalizes the second law of thermodynamics, guarantees this: you can't compress probability.

\n

What about if you measure the position of an Atom using an analog Sensor whose dial shows a continuous reading?

\n

Think of probability theory over classical physics:

\n

When the Sensor's dial appears in a particular position, that gives us evidence corresponding to the likelihood function for the Sensor's dial to be in that place, given that the Atom was originally in a particular position.  If the instrument is not infinitely sensitive (which it can't be, for numerous reasons), then the likelihood function will be a density distribution, not a point mass.  A very sensitive Sensor might have a sharp spike of a likelihood distribution, with density falling off rapidly.  If the Atom is really at position 5.0121, the likelihood of the Sensor's dial ending up in position 5.0123 might be very small.  And so, unless we had overwhelming prior knowledge, we'd conclude a tiny posterior probability that the Atom was so much as 0.0002 millimeters from the Sensor's indicated position.  That's probability theory over classical physics.
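
\n

(A sketch of that classical calculation; the numbers are chosen to match the example, and the Gaussian form of the Sensor noise is an assumption of mine, not something specified above.)

\n

import numpy as np

positions = np.linspace(5.00, 5.03, 3001)   # candidate Atom positions (mm)
prior = np.ones_like(positions)
prior /= prior.sum()                        # broad, uninformative prior

def likelihood(dial, atom_pos, width=0.00005):
    # A sharp-but-finite Sensor: a narrow density, not a point mass.
    return np.exp(-0.5 * ((dial - atom_pos) / width) ** 2)

posterior = prior * likelihood(5.0123, positions)
posterior /= posterior.sum()

far = np.abs(positions - 5.0123) >= 0.0002  # at least 0.0002 mm from the dial
print(posterior[far].sum())                 # ~1e-4: a tiny posterior probability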

\n

Similarly in quantum physics:

\n

The blob of amplitude in which you find yourself, where you see the Sensor's dial in some particular position, will have a sub-distribution over actual Atom positions that falls off according to (1) the initial amplitude distribution for the Atom, analogous to the prior; and (2) the amplitude for the Sensor's dial (and the rest of the Sensor!) to end up in our part of configuration space, if the Atom started out in that position.  (That's the part analogous to the likelihood function.)  With a Sensor at all sensitive, the amplitude for the Atom to be in a state noticeably different from what the Sensor shows, will taper off very sharply.

\n

(All these amplitudes I'm talking about are actually densities, N-dimensional integrals over dx dy dz..., rather than discrete flows between discrete states; but you get the idea.)

\n

If there's not a lot of amplitude flowing from initial particle position 5.0150 +/- 0.0001 to configurations where the sensor's LED display reads '5.0123', then the joint configuration of (Sensor=5.0123 * Atom=5.0150) ends up with very tiny amplitude.

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"Decoherent Essences\"

\n

Previous post: \"The Conscious Sorites Paradox\"

" } }, { "_id": "nso8WXdjHLLHkJKhr", "title": "The Conscious Sorites Paradox", "pageUrl": "https://www.lesswrong.com/posts/nso8WXdjHLLHkJKhr/the-conscious-sorites-paradox", "postedAt": "2008-04-28T02:58:35.000Z", "baseScore": 18, "voteCount": 21, "commentCount": 41, "url": null, "contents": { "documentId": "nso8WXdjHLLHkJKhr", "html": "

Followup to: On Being Decoherent

\n

Decoherence is implicit in quantum physics, not an extra postulate on top of it, and quantum physics is continuous.  Thus, \"decoherence\" is not an all-or-nothing phenomenon—there's no sharp cutoff point.  Given two blobs, there's a quantitative amount of amplitude that can flow into identical configurations between them.  This quantum interference diminishes down to an exponentially tiny infinitesimal as the two blobs separate in configuration space.

\n

Asking exactly when decoherence takes place, in this continuous process, is like asking when, if you keep removing grains of sand from a pile, it stops being a \"heap\".

\n

\n

The sand-heap dilemma is known as the Sorites Paradox, after the Greek soros, for heap.  It is attributed to Eubulides of Miletus, in the 4th century BCE.  The moral I draw from this very ancient tale:  If you try to draw sharp lines in a continuous process and you end up looking silly, it's your own darn fault.

\n

(Incidentally, I once posed the Sorites Paradox to Marcello Herreshoff, who hadn't previously heard of it; and Marcello answered without the slightest hesitation, \"If you remove all the sand, what's left is a 'heap of zero grains'.\"  Now that's a computer scientist.)


Ah, but what about when people become decoherent?  What of the Conscious Sorites Paradox?


What about the case where two blobs of amplitude containing people are interacting, but only somewhat - so that there is visibly a degree of causal influence, and visibly a degree of causal independence?


Okay, this interval may work out to less than the Planck time for objects the size of a human brain.  But I see that as no excuse to evade the question.  In principle we could build a brain that would make the interval longer.


Shouldn't there be some definite fact of the matter as to when one person becomes two people?


Some folks out there would just say \"No\".  I suspect Daniel Dennett would just say \"No\".  Personally, I wish I could just say \"No\", but I'm not that advanced yet.  I haven't yet devised a way to express my appreciation of the orderliness of the universe, which doesn't involve counting people in orderly states as compared to disorderly states.


Yet if you insist on an objective population count, for whatever reason, you have Soritic problems whether or not you delve into quantum physics.


What about the Ebborians? The Ebborians, you recall, have brains like flat sheets of conducting polymer, and when they reproduce, the brain-sheet splits down its thickness.  In the beginning, there is definitely one brain; in the end, there are definitely two brains; in between, there is a continuous decrease of causal influence and synchronization.  When does one Ebborian become two?


Those who insist on an objective population count in a decoherent universe, must confront exactly analogous people-splitting problems in classical physics!


Heck, you could simulate quantum physics the way we currently think it works, and ask exactly the same question!  At the beginning there is one blob, at the end there are two blobs, in this universe we have constructed.  So when does the consciousness split, if you think there's an objective answer to that?


Demanding an objective population count is not a reason to object to decoherence, as such.  Indeed, the last fellow I argued with, ended up agreeing that his objection to decoherence was in fact a fully general objection to functionalist theories of consciousness.


You might be tempted to try sweeping the Conscious Sorites Paradox under a rug, by postulating additionally that the Quantum Spaghetti Monster eats certain blobs of amplitude at exactly the right time to avoid a split.


But then (1) you have to explain exactly when the QSM eats the amplitude, so you aren't avoiding any burden of specification.


And (2) you're requiring the Conscious Sorites Paradox to get answered by fundamental physics, rather than being answered or dissolved by a better understanding of consciousness.  It's hard to see why taking this stance advances your position, rather than just closing doors.


In fact (3) if you think you have a definite answer to \"When are there two people?\", then it's hard to see why you can't just give that same answer within the standard quantum theory instead.  The Quantum Spaghetti Monster isn't really helping here!  For every definite theory with a QSM, there's an equally definite theory with no QSM.  This is one of those occasions you have to pay close attention to see the superfluous element of your theory that doesn't really explain anything—it's harder when the theory as a whole does explain something, as quantum physics certainly does.


Above all, (4) you would still have to explain afterward what happens with the Ebborians, or what happens to decoherent people in a simulation of quantum physics the way we currently think it works.  So you really aren't avoiding any questions!


It's also worth noting that, in any physics that is continuous (or even any physics that has a very fine-grained discrete cellular level underneath), there are further Conscious Sorites Paradoxes for when people are born and when they die.  The bullet plows into your brain, crushing one neuron after another—when exactly are there zero people instead of one?


Does it still seem like the Conscious Sorites Paradox is an objection to decoherent quantum mechanics, in particular?


A reductionist would say that the Conscious Sorites Paradox is not a puzzle for physicists, because it is a puzzle you get even after the physicists have done their duty, and told us the true laws governing every fundamental event.


As previously touched on, this doesn't imply that consciousness is a matter of nonphysical knowledge.  You can know the fundamental laws, and yet lack the computing power to do protein folding.  So, too, you can know the fundamental laws; and yet lack the empirical knowledge of the brain's configuration, or miss the insight into higher levels of organization, which would give you a compressed understanding of consciousness.


Or so a materialist would assume.  A non-epiphenomenal dualist would say, \"Ah, but you don't know the true laws of fundamental physics, and when you do know them, that is where you will find the thundering insight that also resolves questions of consciousness and identity.\"


It's because I actually do acknowledge the possibility that there is some thundering insight in the fundamental physics we don't know yet, that I am not quite willing to say that the Conscious Sorites puzzle is not a puzzle for physicists.  Or to look at it another way, the problem might not be their responsibility, but that doesn't mean they can't help.  The physicists might even swoop in and solve it, you never know.


In one sense, there's a clear gap in our interpretation of decoherence: we don't know exactly how quantum-mechanical states correspond to the experiences that are (from a Cartesian standpoint) our final experimental results.


But this is something you could say about all current scientific theories (at least that I've heard of).  And I, for one, am betting that the puzzle-cracking insight comes from a cognitive scientist.


I'm not just saying tu quoque (i.e., \"Your theory has that problem too!\")  I'm saying that \"But you haven't explained consciousness!\" doesn't reasonably seem like the responsibility of physicists, or an objection to a theory of fundamental physics. 


An analogy:  When a doctor says, \"Hey, I think that virus X97 is causing people to drip green slime,\" you don't respond:  \"Aha, but you haven't explained the exact chain of causality whereby this merely physical virus leads to my experience of dripping green slime... so it's probably not a virus that does it, but a bacterium!\"


This is another of those sleights-of-hand that you have to pay close attention to notice.  Why does a non-viral theory do any better than a viral theory at explaining which biological states correspond to which conscious experiences?  There is a puzzle here, but how is it a puzzle that provides evidence for one epidemiological theory over another?


It can reasonably seem that, however consciousness turns out to work, getting infected with virus X97 eventually causes your experience of dripping green slime.  You've solved the medical part of the problem, as it were, and the remaining mystery is a matter for cognitive science.


Likewise, when a physicist has said that two objects attract each other with a force that goes as the product of the masses and the inverse square of the distance between them, that looks pretty much consistent with the experience of an apple falling on your head.  If you have an experience of the apple floating off into space, that's a problem for the physicist.  But that you have any experience at all, is not a problem for that particular theory of gravity.


If two blobs of amplitude are no longer interacting, it seems reasonable to regard this as consistent with there being two different brains that have two different experiences, however consciousness turns out to work.  Decoherence has a pretty reasonable explanation of why you experience a single world rather than an entangled one, given that you experience anything at all.


However the whole debate over consciousness turns out, it seems that we see pretty much what we should expect to see given decoherent physics.  What's left is a puzzle, but it's not a physicist's responsibility to answer.


...is what I would like to say.


But unfortunately there's that whole thing with the squared modulus of the complex amplitude giving the apparent \"probability\" of \"finding ourselves in a particular blob\".


That part is a serious puzzle with no obvious answer, which I've discussed already in analogy.  I'll shortly be doing an explanation of how the problem looks from within actual quantum theory.


Just remember, if someone presents you with an apparent \"answer\" to this puzzle, don't forget to check whether the phenomenon still seems mysterious, whether the answer really explains anything, and whether every part of the hypothesis is actively helping.


 


Part of The Quantum Physics Sequence


Next post: \"Decoherence is Pointless\"


Previous post: \"On Being Decoherent\"

" } }, { "_id": "pRrksC5Y6TbyvKDJE", "title": "On Being Decoherent", "pageUrl": "https://www.lesswrong.com/posts/pRrksC5Y6TbyvKDJE/on-being-decoherent", "postedAt": "2008-04-27T04:59:02.000Z", "baseScore": 25, "voteCount": 22, "commentCount": 78, "url": null, "contents": { "documentId": "pRrksC5Y6TbyvKDJE", "html": "

Previously in series: The So-Called Heisenberg Uncertainty Principle


\"A human researcher only sees a particle in one place at one time.\"  At least that's what everyone goes around repeating to themselves.  Personally, I'd say that when a human researcher looks at a quantum computer, they quite clearly see particles not behaving like they're in one place at a time.  In fact, you have never in your life seen a particle \"in one place at a time\" because they aren't.


Nonetheless, when you construct a big measuring instrument that is sensitive to a particle's location—say, the measuring instrument's behavior depends on whether a particle is to the left or right of some dividing line—then you, the human researcher, see the screen flashing \"LEFT\", or \"RIGHT\", but not a mixture like \"LIGFT\".


As you might have guessed from reading about decoherence and Heisenberg, this is because we ourselves are governed by the laws of quantum mechanics and subject to decoherence.



The standpoint of the Feynman path integral suggests viewing the evolution of a quantum system as a sum over histories, an integral over ways the system \"could\" behave—though the quantum evolution of each history still depends on things like the second derivative of that component of the amplitude distribution; it's not a sum over classical histories.  And \"could\" does not mean possibility in the logical sense; all the amplitude flows are real events...


Nonetheless, a human being can try to grasp a quantum system by imagining all the ways that something could happen, and then adding up all the little arrows that flow to identical outcomes.  That gets you something of the flavor of the real quantum physics, of amplitude flows between volumes of configuration space.


Now apply this mode of visualization to a sensor measuring an atom—say, a sensor measuring whether an atom is to the left or right of a dividing line.


\"Superposition2\" Which is to say:  The sensor and the atom undergo some physical interaction in which the final state of the sensor depends heavily on whether the atom is to the left or right of a dividing line.  (I am reusing some previous diagrams, so this is not an exact depiction; but you should be able to use your own imagination at this point.)


\"Entanglecloud\"You may recognize this as the entangling interaction described in \"Decoherence\". A quantum system that starts out highly factorizable, looking plaid and rectangular, that is, independent, can evolve into an entangled system as the formerly-independent parts interact among themselves.


So you end up with an amplitude distribution that contains two blobs of amplitude—a blob of amplitude with the atom on the left, and the sensor saying \"LEFT\"; and a blob of amplitude with the atom on the right, and the sensor saying \"RIGHT\".


For a sensor to measure an atom is to become entangled with it—for the state of the sensor to depend on the state of the atom—for the two to become correlated.  In a classical system, this is true only on a probabilistic level.  In quantum physics it is a physically real state of affairs.


To observe a thing is to entangle yourself with it. You may recall my having previously said things that sound a good deal like this, in describing how cognition obeys the laws of thermodynamics, and, much earlier, talking about how rationality is a phenomenon within causality.  It is possible to appreciate this in a purely philosophical sense, but quantum physics helps drive the point home.


\"Ampl1\" Let's say you've got an Atom, whose position has amplitude bulges on the left and on the right.  We can regard the Atom's distribution as a sum (addition, not multiplication) of the left bulge and the right bulge:


Atom = (Atom-LEFT + Atom-RIGHT)


Also there's a Sensor in a ready-to-sense state, which we'll call BLANK:


Sensor = Sensor-BLANK


By hypothesis, the system starts out in a state of quantum independence—the Sensor hasn't interacted with the Atom yet.  So:


System = (Sensor-BLANK) * (Atom-LEFT + Atom-RIGHT)


Sensor-BLANK is an amplitude sub-distribution, or sub-factor, over the joint positions of all the particles in the sensor.  Then you multiply this distribution by the distribution (Atom-LEFT + Atom-RIGHT), which is the sub-factor for the Atom's position.  Which gets you the joint configuration space over all the particles in the system, the Sensor and the Atom.


Quantum evolution is linear, which means that Evolution(A + B) = Evolution(A) + Evolution(B).  We can understand the behavior of this whole distribution by understanding its parts.  Not its multiplicative factors, but its additive components.  So now we use the distributive rule of arithmetic, which, because we're just adding and multiplying complex numbers, works just as usual:


System = (Sensor-BLANK) * (Atom-LEFT + Atom-RIGHT)
           = (Sensor-BLANK * Atom-LEFT) + (Sensor-BLANK * Atom-RIGHT)


Now, the volume of configuration space corresponding to (Sensor-BLANK * Atom-LEFT) evolves into (Sensor-LEFT * Atom-LEFT).


Which is to say:  Particle positions for the sensor being in its initialized state and the Atom being on the left, end up sending their amplitude flows to final configurations in which the Sensor is in a LEFT state, and the Atom is still on the left.


So we have the evolution:


(Sensor-BLANK * Atom-LEFT) + (Sensor-BLANK * Atom-RIGHT)
        =>
(Sensor-LEFT * Atom-LEFT) + (Sensor-RIGHT * Atom-RIGHT)


By hypothesis, Sensor-LEFT is a different state from Sensor-RIGHT—otherwise it wouldn't be a very sensitive Sensor.  So the final state doesn't factorize any further; it's entangled.
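
Here is the same algebra as a toy numerical check.  The encoding is my own illustration (a 3-state Sensor with basis {BLANK, LEFT, RIGHT}, a 2-state Atom with basis {LEFT, RIGHT}), not a claim about real sensors:

    import numpy as np

    BLANK, S_LEFT, S_RIGHT = np.eye(3)  # Sensor basis vectors
    A_LEFT, A_RIGHT = np.eye(2)         # Atom basis vectors

    atom = (A_LEFT + A_RIGHT) / np.sqrt(2)  # amplitude bulges on both sides
    system = np.kron(BLANK, atom)           # factorizable joint state

    # The sensing interaction, defined by its action on each additive
    # component; since quantum evolution is linear, that determines the
    # whole evolution.  (This sketch only defines it on the Sensor-BLANK
    # sector, which is all we need here.)
    def sense(state):
        amps = state.reshape(3, 2)  # rows: Sensor basis, columns: Atom basis
        out = amps[0, 0] * np.kron(S_LEFT, A_LEFT)          # BLANK*LEFT  -> LEFT*LEFT
        out = out + amps[0, 1] * np.kron(S_RIGHT, A_RIGHT)  # BLANK*RIGHT -> RIGHT*RIGHT
        return out

    final = sense(system)

    # A joint state factorizes exactly when its reshaped coefficient
    # matrix has rank 1 (a single nonzero Schmidt coefficient).
    print(np.linalg.matrix_rank(system.reshape(3, 2)))  # 1: independent
    print(np.linalg.matrix_rank(final.reshape(3, 2)))   # 2: entangled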


But this entanglement is not likely to manifest in difficulties of calculation.  Suppose the Sensor has a little LCD screen that's flashing \"LEFT\" or \"RIGHT\". This may seem like a relatively small difference to a human, but it involves avogadros of particles—photons, electrons, entire molecules—occupying different positions.


So, since the states Sensor-LEFT and Sensor-RIGHT are widely separated in the configuration space, the volumes (Sensor-LEFT * Atom-LEFT) and (Sensor-RIGHT * Atom-RIGHT) are even more widely separated.


The LEFT blob and the RIGHT blob in the amplitude distribution can be considered separately; they won't interact.  There are no plausible Feynman paths that end up with both LEFT and RIGHT sending amplitude to the same joint configuration.  There would have to be a Feynman path from LEFT, and a Feynman path from RIGHT, in which all the quadrillions of differentiated particles ended up in the same places.  So the amplitude flows from LEFT and RIGHT don't intersect, and don't interfere.


\"Precohered\"You may recall this principle from \"Decoherence\", for how a sensitive interaction can decohere two interacting blobs of amplitude, into two noninteracting blobs.\"Decohered\"


Formerly, the Atom-LEFT and Atom-RIGHT states were close enough in configuration space, that the blobs could interact with each other—there would be Feynman paths where an atom on the left ended up on the right.  Or Feynman paths for both an atom on the left, and an atom on the right, to end up in the middle.


Now, however, the two blobs are decohered.  For LEFT to interact with RIGHT, it's not enough for just the Atom to end up on the right.  The Sensor would have to spontaneously leap into a state where it was flashing \"RIGHT\" on screen.  Likewise with any particles in the environment which previously happened to be hit by photons for the screen flashing \"LEFT\".  Trying to reverse decoherence is like trying to unscramble an egg.


And when a human being looks at the Sensor's little display screen... or even just stands nearby, with quintillions of particles slightly influenced by gravity... then, under exactly the same laws, the system evolves into:


(Human-LEFT * Sensor-LEFT * Atom-LEFT) + (Human-RIGHT * Sensor-RIGHT * Atom-RIGHT)


Thus, any particular version of yourself only sees the sensor registering one result.


That's it—the big secret of quantum mechanics.  As physical secrets go, it's actually pretty damn big.  Discovering that the Earth was not the center of the universe, doesn't hold a candle to realizing that you're twins.


That you, yourself, are made of particles, is the fourth and final key to recovering the classical hallucination.  It's why you only ever see the universe from within one blob of amplitude, and not the vastly entangled whole.


Asking why you can't see Schrodinger's Cat as simultaneously dead and alive, is like an Ebborian asking:  \"But if my brain really splits down the middle, why do I only ever remember finding myself on either the left or the right?  Why don't I find myself on both sides?\"


Because you're not outside and above the universe, looking down.  You're in the universe.


Your eyes are not an empty window onto the soul, through which the true state of the universe leaks into your mind.  What you see, you see because your brain represents it: because your brain becomes entangled with it: because your eyes and brain are part of a continuous physics with the rest of reality.


You only see nearby objects, not objects light-years away, because photons from those objects can't reach you; therefore you can't see them.  By a similar locality principle, you don't interact with distant configurations.


When you open your eyes and see your shoelace is untied, that event happens within your brain.  A brain is made up of interacting neurons.  If you had two separate groups of neurons that never interacted with each other, but did interact among themselves, they would not be a single computer.  If one group of neurons thought \"My shoelace is untied\", and the other group of neurons thought \"My shoelace is tied\", it's difficult to see how these two brains could possibly contain the same consciousness.


And if you think all this sounds obvious, note that, historically speaking, it took more than two decades after the invention of quantum mechanics for a physicist to publish that little suggestion.  People really aren't used to thinking of themselves as particles.


The Ebborians have it a bit easier, when they split.  They can see the other sides of themselves, and talk to them.


But the only way for two widely separated blobs of amplitude to communicate—to have causal dependencies on each other—would be if there were at least some Feynman paths leading to identical configurations from both starting blobs.


Once one entire human brain thinks \"Left!\", and another entire human brain thinks \"Right!\", then it's extremely unlikely for all of the particles in those brains, and all of the particles in the sensors, and all of the nearby particles that interacted, to coincidentally end up in approximately the same configuration again.


It's around the same likelihood as your brain spontaneously erasing its memories of seeing the sensor and going back to its exact original state; while nearby, an egg unscrambles itself and a hamburger turns back into a cow.


So the decohered amplitude-blobs don't interact.  And we never get to talk to our other selves, nor can they speak to us.


Of course, this doesn't mean that the other amplitude-blobs aren't there any more, any more than we should think that a spaceship suddenly ceases to exist when it travels over the cosmological horizon (relative to us) of an expanding universe.


(Oh, you thought that post on belief in the implied invisible was part of the Zombie sequence?  No, that was covert preparation for the coming series on quantum mechanics.


You can go through line by line and substitute the arguments, in fact.


Remember that the next time some commenter snorts and says, \"But what do all these posts have to do with your Artificial Intelligence work?\")


Disturbed by the prospect of there being more than one version of you?  But as Max Tegmark points out, living in a spatially infinite universe already implies that an exact duplicate of you exists somewhere, with probability 1.  In all likelihood, that duplicate is no more than 10^(10^29) lightyears away.  Or 10^(10^29) meters away, with numbers of that magnitude it's pretty much the same thing.
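
(To spell out that last remark: a lightyear is about 10^16 meters, so 10^(10^29) lightyears is about 10^(10^29) * 10^16 = 10^(10^29 + 16) meters; and adding 16 to an exponent of 10^29 changes it by a relatively negligible amount.)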


(Stop the presses!  Shocking news!  Scientists have announced that you are actually the duplicate of yourself 10^(10^29) lightyears away!  What you thought was \"you\" is really just a duplicate of you.)


You also get the same Big World effect from the inflationary scenario in the Big Bang, which buds off multiple universes.  And both spatial infinity and inflation are more or less standard in the current model of physics.  So living in a Big World, which contains more than one person who resembles you, is a bullet you've pretty much got to bite—though none of the guns are certain, physics is firing that bullet at you from at least three different directions.


Maybe later I'll do a post about why you shouldn't panic about the Big World.  You shouldn't be drawing many epistemic implications from it, let alone moral implications.  As Greg Egan put it, \"It all adds up to normality.\"  Indeed, I sometimes think of this as Egan's Law.


 


Part of The Quantum Physics Sequence


Next post: \"The Conscious Sorites Paradox\"


Previous post: \"Where Experience Confuses Physicistss\"

" } }, { "_id": "vGbHKfgFNDeJohfeN", "title": "Where Experience Confuses Physicists", "pageUrl": "https://www.lesswrong.com/posts/vGbHKfgFNDeJohfeN/where-experience-confuses-physicists", "postedAt": "2008-04-26T05:05:01.000Z", "baseScore": 43, "voteCount": 34, "commentCount": 30, "url": null, "contents": { "documentId": "vGbHKfgFNDeJohfeN", "html": "

Continuation of: Where Physics Meets Experience


When we last met our heroes, the Ebborians, they were discussing the known phenomenon in which the entire planet of Ebbore and all its people splits down its fourth-dimensional thickness into two sheets, just like an individual Ebborian brain-sheet splitting along its third dimension.


And Po'mi has just asked:


\"Why should the subjective probability of finding ourselves in a side of the split world, be exactly proportional to the square of the thickness of that side?\"



When the initial hubbub quiets down, the respected Nharglane of Ebbore asks:  \"Po'mi, what is it exactly that you found?\"


\"Using instruments of the type we are all familiar with,\" Po'mi explains, \"I determined when a splitting of the world was about to take place, and in what proportions the world would split.  I found that I could not predict exactly which world I would find myself in—\"


\"Of course not,\" interrupts De'da, \"you found yourself in both worlds, every time -\"


\"—but I could predict probabilistically which world I would find myself in.  Out of all the times the world was about to split 2:1, into a side of two-thirds width and a side of one-third width, I found myself on the thicker side around 4 times out of 5, and on the thinner side around 1 time out of 5.  When the world was about to split 3:1, I found myself on the thicker side 9 times out of 10, and on the thinner side 1 time out of 10.\"


\"Are you very sure of this?\" asks Nharglane.  \"How much data did you gather?\"


Po'mi offers an overwhelming mountain of experimental evidence.


\"I guess that settles that,\" mutters Nharglane.


\"So you see,\" Po'mi says, \"you were right after all, Yu'el, not to eliminate 'subjective probability' from your worldview.  For if we do not have a 4/5 subjective anticipation of continuing into the thicker side of a 2:1 split, then how could we even describe this rule?\"


\"A good question,\" says De'da.  \"There ought to be some way of phrasing your discovery, which eliminates this problematic concept of 'subjective continuation'...\"


The inimitable Ha'ro speaks up:  \"You might say that we find ourselves in a world in which the remembered splits obey the squared-thickness rule, to within the limits of statistical expectation.\"


De'da smiles.  \"Yes, excellent!  That describes the evidence in terms of recorded experimental results, which seems less problematic than this 'subjective anticipation' business.\"


\"Does that really buy us anything...?\" murmurs Yu'el.  \"We're not limited to memories; we could perform the experiment again.  What, on that next occasion, would you anticipate as your experimental result?  If the thickness is split a hundred to one?  Afterward it will be only a memory... but what about beforehand?\"


\"I think,\" says De'da, \"that you have forgotten one of your own cardinal rules, Yu'el.  Surely, what you anticipate is part of your map, not the territory.  Your degree of anticipation is partial information you possess; it is not a substance of the experiment itself.\"


Yu'el pauses.  \"Aye, that is one of my cardinal rules... but I like my partial information to be about something.  Before I can distinguish the map and the territory, I need a concept of the territory.  What is my subjective anticipation about, in this case?  I will in fact end up in both world-sides.  I can calculate a certain probability to five decimal places, and verify it experimentally—but what is it a probability of?\"


\"I know!\" shouts Bo'ma.  \"It's the probability that your original self ends up on that world-side!  The other person is just a copy!\"


A great groan goes up from the assembled Ebborians.  \"Not this again,\" says De'da.  \"Didn't we settle this during the Identity Wars?\"


\"Yes,\" Yu'el says.  \"There is no copy: there are two originals.\"


De'da shakes his head in disgust.  \"And what are the odds that, out of umpteen billion split Ebbores, we would be the originals at this point?\"


\"But you can't deny,\" Bo'ma says smugly, \"that my theory produces good experimental predictions!  It explains our observations, and that's all you can ask of any theory.  And so science vindicates the Army of Original Warriors—we were right all along!\"


\"Hold on,\" says Yu'el.  \"That theory doesn't actually explain anything.  At all.\"


\"What?\" says Bo'ma.  \"Of course it does.  I use it daily to make experimental predictions; though you might not understand that part, not being a physicist.\"


Yu'el raises an eye.  \"Failure to explain anything is a hard-to-notice phenomenon in scientific theories.  You have to pay close attention, or you'll miss it.  It was once thought that phlogiston theory predicted that wood, when burned, would lose phlogiston and transform into ash; and predicted that candles, burning in an enclosed space, would saturate the air with phlogiston and then go out.  But these were not advance predictions of phlogiston theory.  Rather, phlogiston theorists saw those results, and then said 'Phlogiston did it.'  Now why didn't people notice this right away?  Because that sort of thing is actually surprisingly hard to notice.\"


\"In this case,\" continues Yu'el, \"you have given us a rule that the original Ebborian has a probability of ending up in a world-side, which is proportional to the squared thickness of the side.  We originally had the mystery of where the squared-thickness rule came from.  And now that you've offered us your rule, we have the exact same mystery as beforeWhy would each world have a squared-thickness probability of receiving the original?  Why wouldn't the original consciousness always go to the thicker world?  Or go with probability directly proportional to thickness, instead of the square?  And what does it even mean to be the original?\"


\"That doesn't matter,\" retorts Bo'ma.  \"Let the equation mean anything it likes, so long as it gives good experimental predictions.  What is the meaning of an electrical charge?  Why is it an electrical charge?  That doesn't matter; only the numbers matter.  My law that the original ends up in a particular side, with probability equaling the square of its thickness, gives good numbers.  End of story.\"


Yu'el shakes his head.  \"When I look over the raw structure of your theory—the computer program that would correspond to this model—it contains a strictly superfluous element.  You have to compute the square of the thickness, and turn it into a probability, in order to get the chance that the original self goes there.  Why not just keep that probability as the experimental prediction?  Why further specify that this is the probability of original-ness?  Adding that last rule doesn't help you compute any better experimental predictions; and it leaves all the original mysteries intact.  Including Po'mi's question as to when exactly a world splits.  And it adds the new mystery of why original-ness should only end up in one world-side, with probability equal to the square of the thickness.\"   Yu'el pauses.  \"You might as well just claim that all the split world-sides except one vanish from the universe.\"


Bo'ma snorts.  \"For a world-side to 'just vanish' would outright violate the laws of physics. Why, if it all vanished in an instant, that would mean the event occurred non-locally—faster than light.  My suggestion about 'originals' and 'copies' doesn't postulate unphysical behavior, whatever other criticisms you may have.\"


Yu'el nods.  \"You're right, that was unfair of me.  I apologize.\"


\"Well,\" says Bo'ma, \"how about this, then?  What if 'fourth-dimensional thickness', as we've been calling it, is actually a degree of partial information about who we really are?  And then when the world splits, we find out.\"


\"Um... what?\" says Yu'el.  \"Are you sure you don't want to rephrase that, or something?\"


Bo'ma shakes his head.  \"No, you heard me the first time.\"


\"Okay,\" says Yu'el, \"correct me if I'm wrong, but I thought I heard Nharglane say that you had to do things like differentiate the fourth-dimensional density in order to do your experimental calculations.  That doesn't sound like probability theory to me.  It sounds like physics.\"


\"Right,\" Bo'ma says, \"it's a quantity that propagates around with wave mechanics that involve the differential of the density, but it's also a degree of partial information.\"


\"Look,\" Yu'el says, \"if this 4D density business works the way you say it does, it should be easy to set up a situation where there's no possible 'fact as to who you really are' that is fixed in advance but unknown to you, because the so-called 'answer' will change depending on the so-called 'question'—\"


\"Okay,\" Bo'ma says, \"forget the 'probability' part.  What if 4D thickness is the very stuff of reality itself?  So how real something is, equals the 4D thickness—no, pardon me, the square of the 4D thickness.  Thus, some world-sides are quantitatively realer than others, and that's why you're more likely to find yourself in them.\"


\"Why,\" says Yu'el, \"is the very stuff of reality itself manifesting as a physical quantity with its own wave mechanics?  What's next, electrical charge as a degree of possibility?  And besides, doesn't that violate -\"


Then Yu'el pauses, and falls silent.


\"What is it?\" inquires Po'mi.


\"I was about to say, wouldn't that violate the Generalized Anti-Zombie Principle,\" Yu'el replies slowly.  \"Because then you could have a complete mathematical model of our world, to be looked over by the Flying Spaghetti Monster, and then afterward you would need to tell the Flying Spaghetti Monster an extra postulate:  Things are real in proportion to the square of their fourth-dimensional thickness.  You could change that postulate, and leave everything microphysically the same, but people would find... different proportions of themselves?... in different places.  The difference would be detectable internally... sort of... because the inhabitants would experience the results in different proportions, whatever that means.  They would see different things, or at least see the same things in different relative amounts.  But any third-party observer, looking over the universe, couldn't tell which internal people were more real, and so couldn't discover the statistics of experience.\"


De'da laughs.  \"Sounds like a crushing objection to me.\"


\"Only,\" says Yu'el, \"is that really so different from believing that you can have the whole mathematical structure of a world, and then an extra fact as to whether that world happens to exist or not exist?  Shouldn't that be ruled out by the Anti-Zombie Principle too?  Shouldn't the Anti-Zombie Principle say that it was logically impossible to have had a world physically identical to our own, except that it doesn't exist?   Otherwise there could be an abstract mathematical object structurally identical to this world, but with no experiences in it, because it doesn't exist.  And papers that philosophers wrote about subjectivity wouldn't prove they were conscious, because the papers would also 'not exist'.\"


\"Um...\" says an Ebborian in the crowd, \"correct me if I'm mistaken, but didn't you just solve the mystery of the First Cause?\"


\"You are mistaken,\" replies Yu'el.  \"I can tell when I have solved a mystery, because it stops being mysterious.  To cleverly manipulate my own confusion is not to dissolve a problem.  It is an interesting argument, and I may try to follow it further—but it's not an answer until the confusion goes away.\"


\"Nonetheless,\" says Bo'ma, \"if you're allowed to say that some worlds exist, and some worlds don't, why not have a degree of existence that's quantitative?  And propagates around like a wave, and then we have to square it to get an answer.\"


Yu'el snorts.  \"Why not just let the 'degree of existence' be a complex number, while you're at it?\"


Bo'ma rolls his eyes.  \"Please stop mocking me.  I can't even imagine any possible experimental evidence which would point in the direction of that conclusion.  You'd need a case where two events that were real in opposite directions canceled each other out.\"


\"I'm sorry,\" says Yu'el, \"I need to learn to control my tendency to attack straw opponents.  But still, where would the squaring rule come from?\"


An Ebborian named Ev'Hu suggests, \"Well, you could have a rule that world-sides whose thickness tends toward zero, must have a degree of reality that also tends to zero.  And then the rule which says that you square the thickness of a world-side, would let the probability tend toward zero as the world-thickness tended toward zero.  QED.\"


\"That's not QED,\" says Po'mi.  \"That's a complete non-sequitur.  Logical fallacy of affirming the consequent.  You could have all sorts of rules that would let the reality tend toward zero as the world-thickness tended toward zero, not just the squaring rule.  You could approach the limit from many different directions.  And in fact, all our world-sides have a thickness that 'tends toward zero' because they keep splitting.  Furthermore, why would an indefinite tendency in the infinite future have any impact on what we do now?\"


\"The frequentist heresy,\" says Yu'el. \"It sounds like some of their scriptures.  But let's move on.  Does anyone have any helpful suggestions?  Ones that don't just shuffle the mystery around?\"


Ha'ro speaks.  \"I've got one.\"


\"Okay,\" Yu'el says, \"this should be good.\"


\"Suppose that when a world-side gets thin enough,\" Ha'ro says, \"it cracks to pieces and falls apart.  And then, when you did the statistics, it would turn out that the vast majority of surviving worlds have splitting histories similar to our own.\"


There's a certain unsettled pause.


\"Ha'ro,\" says Nharglane of Ebbore, \"to the best of my imperfect recollection, that is the most disturbing suggestion any Ebborian physicist has ever made in the history of time.\"


\"Thank you very much,\" says Ha'ro.  \"But it could also be that a too-small world-side just sheds off in flakes when it splits, rather than containing actual sentient beings who get to experience a moment of horrified doom.  The too-small worlds merely fail to exist, as it were.  Or maybe sufficiently small world-sides get attracted to larger world-sides, and merge with them in a continuous process, obliterating the memories of anything having happened differently.  But that's not important, the real question is whether the numbers would work out for the right size limit, and in fact,\" Ha'ro waves some calculations on a piece of paper, \"all you need is for the minimum size of a cohesive world to be somewhere around the point where half the total fourth-dimensional mass is above the limit -\"


\"Eh?\" says Yu'el.


\"I figured some numbers and they don't look too implausible and we might be able to prove it, either from first-principles of 4D physics showing that a cracking process occurs, or with some kind of really clever experiment,\" amplifies Ha'ro.


\"Sounds promising,\" says Yu'el.  \"So if I get what you're saying, there would be a completely physical explanation for why, when a typical bunch of worlds split 2:1, there's around 4 times as many cohesive worlds left that split from the thicker side, as from the thinner side.\"


\"Yes,\" says Ha'ro, \"you just count the surviving worlds.\"


\"And if the Flying Spaghetti Monster ran a simulation of our universe's physics, the simulation would automatically include observers that experienced the same things we did, with the same statistical probabilities,\" says Yu'el.  \"No extra postulates required.  None of the quantities in the universe would need additional characteristics beyond their strictly physical structure.  Running any mathematically equivalent computer program would do the trick—you wouldn't need to be told how to interpret it a particular way.\"


Ha'ro nods.  \"That's the general idea.\"


\"Well, I don't know if that's correct,\" says Yu'el.  \"There's some potential issues, as you know.  But I've got to say it's the first suggestion I've heard that's even remotely helpful in making all this seem any less mysterious.\"


 


Part of The Quantum Physics Sequence


Next post: \"On Being Decoherent\"


Previous post: \"Where Physics Meets Experience\"

" } }, { "_id": "fkLSJzHKHAdgqvnNS", "title": "Criminal retribution", "pageUrl": "https://www.lesswrong.com/posts/fkLSJzHKHAdgqvnNS/criminal-retribution", "postedAt": "2008-04-25T15:39:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "fkLSJzHKHAdgqvnNS", "html": "

The US houses the highest proportion of its people in prison of any country, as Adam Liptak discusses thought-provokingly. As expected, this appears to reduce crime rates.


How much suffering should the guilty endure for a given reduction in suffering of the innocent? I think at most a 1:1 ratio; that is, it doesn’t matter who suffers. Suffering should be minimised, even if that means the innocent suffer instead of the guilty. Punishment should only be to prevent greater suffering.

***

Liptak also draws attention to the relationship between more democratic appointment of judges in the US and harsher punishment, as people demand fierce retribution. I suspect demand for escalating punishment is a result of fear and angry desire for revenge, rather than widespread consideration of mechanism design for minimising harm, or anything mildly reasoned. I don’t think society should be allowed to inflict harm on its members arbitrarily like this. Should judge appointment be less democratic then?


Perhaps, but this decision can (and should?) only be reached through other democratic decision making. This is the same problem as arises everywhere. The public, through democracy, interferes with people where it has no right to, but the extent to which citizens should be able to interfere with one another through democracy hasn’t been agreed, and so must rely on democratic negotiation.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "WajiC3YWeJutyAXTn", "title": "Where Physics Meets Experience", "pageUrl": "https://www.lesswrong.com/posts/WajiC3YWeJutyAXTn/where-physics-meets-experience", "postedAt": "2008-04-25T04:58:11.000Z", "baseScore": 76, "voteCount": 55, "commentCount": 38, "url": null, "contents": { "documentId": "WajiC3YWeJutyAXTn", "html": "

Followup to: Decoherence, Where Philosophy Meets Science


Once upon a time, there was an alien species, whose planet hovered in the void of a universe with laws almost like our own.  They would have been alien to us, but of course they did not think of themselves as alien.  They communicated via rapid flashes of light, rather than sound.  We'll call them the Ebborians.


Ebborians reproduce by fission, an adult dividing into two new individuals.  They share genetic material, but not through sexual recombination; Ebborian adults swap genetic material with each other.  They have two eyes, four legs, and two hands, letting a fissioned Ebborian survive long enough to regrow.


Human DNA is built in a double helix; unzipping the helix a little at a time produces two stretches of single strands of DNA.  Each single strand attracts complementary bases, producing a new double strand.  At the end of the operation, a DNA double helix has turned into two double helices.  Hence earthly life.


Ebborians fission their brains, as well as their bodies, by a process something like how human DNA divides.



Imagine an Ebborian brain as a flat sheet of paper, computing in a way that is more electrical than chemical—charges flowing down conductive pathways.


When it's time for an Ebborian to fission, the brain-paper splits down its thickness into two sheets of paper.  Each new sheet is capable of conducting electricity on its own.  Indeed, the Ebborian(s) stays conscious throughout the whole fissioning process.  Over time, the brain-paper grows thick enough to fission again.


Electricity flows through Ebborian brains faster than human neurons fire.  But the Ebborian brain is constrained by its two-dimensionality.  An Ebborian brain-paper must split down its thickness while retaining the integrity of its program.  Ebborian evolution took the cheap way out: the brain-paper computes in a purely two-dimensional way.  The Ebborians have much faster neuron-equivalents, but they are far less interconnected.


On the whole, Ebborians think faster than humans and remember less.  They are less susceptible to habit; they recompute what we would cache.  They would be incredulous at the idea that a human neuron might be connected to a thousand neighbors, and equally incredulous at the idea that our axons and dendrites propagate signals at only a few meters per second.


The Ebborians have no concept of parents, children, or sexuality.  Every adult Ebborian remembers fissioning many times.  But Ebborian memories quickly fade if not used; no one knows the last common ancestor of those now alive.


In principle, an Ebborian personality can be immortal.  Yet an Ebborian remembers less life than a seventy-year-old human.  They retain only the most important highlights of their last few millennia.  Is this immortality?  Is it death?


The Ebborians had to rediscover natural selection from scratch, because no one retained their memories of being a fish.


But I digress from my tale.


Today, the Ebborians have gathered to celebrate a day which all present will remember for hundreds of years.  They have discovered (they believe) the Ultimate Grand Unified Theory of Everything for their universe.  The theory which seems, at last, to explain every known fundamental physical phenomenon—to predict what every instrument will measure, in every experiment whose initial conditions are exactly known, and which can be calculated on available computers.


\"But wait!\" cries an Ebborian.  (We'll call this one Po'mi.)  \"But wait!\", cries Po'mi, \"There are still questions the Unified Theory can't answer!  During the fission process, when exactly does one Ebborian consciousness become two separate people?\"


The gathered Ebborians look at each other.  Finally, there speaks the moderator of the gathering, the second-foremost Ebborian on the planet: the much-respected Nharglane of Ebbore, who achieved his position through consistent gentleness and courtesy.


\"Well,\" Nharglane says, \"I admit I can't answer that one—but is it really a question of fundamental physics?\"


\"I wouldn't even call that a 'question',\" snorts De'da the Ebborian, \"seeing as how there's no experimental test whose result depends on the answer.\"


\"On the contrary,\" retorts Po'mi, \"all our experimental results ultimately come down to our experiences.  If a theory of physics can't predict what we'll experience, what good is it?\"


De'da shrugs.  \"One person, two people—how does that make a difference even to experience?  How do you tell even internally whether you're one person or two people?  Of course, if you look over and see your other self, you know you're finished dividing—but by that time your brain has long since finished splitting.\"


\"Clearly,\" says Po'mi, \"at any given point, whatever is having an experience is one person.  So it is never necessary to tell whether you are one person or two people.  You are always one person.  But at any given time during the split, does there exist another, different consciousness as yet, with its own awareness?\"


De'da performs an elaborate quiver, the Ebborian equivalent of waving one's hands.  \"When the brain splits, it splits fast enough that there isn't much time where the question would be ambiguous.  One instant, all the electrical charges are moving as a whole.  The next instant, they move separately.\"


\"That's not true,\" says Po'mi.  \"You can't sweep the problem under the rug that easily.  There is a quite appreciable time—many picoseconds—when the two halves of the brain are within distance for the moving electrical charges in each half to tug on the other.  Not quite causally separated, and not quite the same computation either.  Certainly there is a time when there is definitely one person, and a time when there is definitely two people.  But at which exact point in between are there two distinct conscious experiences?\"


\"My challenge stands,\" says De'da.  \"How does it make a difference, even a difference of first-person experience, as to when you say the split occurs?  There's no third-party experiment you can perform to tell you the answer.  And no difference of first-person experience, either.  Your belief that consciousness must 'split' at some particular point, stems from trying to model consciousness as a big rock of awareness that can only be in one place at a time.  There's no third-party experiment, and no first-person experience, that can tell you when you've split; the question is meaningless.\"


\"If experience is meaningless,\" retorts Po'mi, \"then so are all our scientific theories, which are merely intended to explain our experiences.\"


\"If I may,\" says another Ebborian, named Yu'el, \"I think I can refine my honorable colleague Po'mi's dilemma.  Suppose that you anesthetized one of us -\"


(Ebborians use an anesthetic that effectively shuts off electrical power to the brain—no processing or learning occurs while an Ebborian is anesthetized.)


\"- and then flipped a coin.  If the coin comes up heads, you split the subject while they are unconscious.  If the coin comes up tails, you leave the subject as is.  When the subject goes to sleep, should they anticipate a 2/3 probability of seeing the coin come up heads, or anticipate a 1/2 probability of seeing the coin come up heads?  If you answer 2/3, then there is a difference of anticipation that could be made to depend on exactly when you split.\"


\"Clearly, then,\" says De'da, \"the answer is 1/2, since answering 2/3 gets us into paradoxical and ill-defined issues.\"


Yu'el looks thoughtful.  \"What if we split you into 512 parts while you were anesthetized?  Would you still answer a probability of 1/2 for seeing the coin come up heads?\"


De'da shrugs.  \"Certainly.  When I went to sleep, I would figure on a 1/2 probability that I wouldn't get split at all.\"


\"Hmm...\" Yu'el says.  \"All right, suppose that we are definitely going to split you into 16 parts.  3 of you will wake up in a red room, 13 of you will wake up in a green room.  Do you anticipate a 13/16 probability of waking up in a green room?\"


\"I anticipate waking up in a green room with near-1 probability,\" replies De'da, \"and I anticipate waking up in a red room with near-1 probability.  My future selves will experience both outcomes.\"


\"But I'm asking about your personal anticipation,\" Yu'el persists.  \"When you fall asleep, how much do you anticipate seeing a green room?  You can't see both room colors at once—that's not an experience anyone will have—so which color do you personally anticipate more?\"


De'da shakes his head.  \"I can see where this is going; you plan to ask what I anticipate in cases where I may or may not be split.  But I must deny that your question has an objective answer, precisely because of where it leads.  Now, I do say to you, that I care about my future selves.  If you ask me whether I would like each of my green-room selves, or each of my red-room selves, to receive ten dollars, I will of course choose the green-roomers—but I don't care to follow this notion of 'personal anticipation' where you are taking it.\"


\"While you are anesthetized,\" says Yu'el, \"I will flip a coin; if the coin comes up heads, I will put 3 of you into red rooms and 13 of you into green rooms.  If the coin comes up tails, I will reverse the proportion.  If you wake up in a green room, what is your posterior probability that the coin came up heads?\"


De'da pauses.  \"Well...\" he says slowly, \"Clearly, some of me will be wrong, no matter which reasoning method I use—but if you offer me a bet, I can minimize the number of me who bet poorly, by using the general policy, of each self betting as if the posterior probability of their color dominating is 13/16.  And if you try to make that judgment depend on the details of the splitting process, then it just depends on how whoever offers the bet counts Ebborians.\"
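
(Spelled out, the policy's arithmetic is just Bayes' rule with P(green | heads) = 13/16 and P(green | tails) = 3/16:  P(heads | green) = (13/16)(1/2) / [(13/16)(1/2) + (3/16)(1/2)] = 13/16.)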


Yu'el nods.  \"I can see what you are saying, De'da.  But I just can't make myself believe it, at least not yet.  If there were to be 3 of me waking up in red rooms, and a billion of me waking up in green rooms, I would quite strongly anticipate seeing a green room when I woke up.  Just the same way that I anticipate not winning the lottery.  And if the proportions of three red to a billion green, followed from a coin coming up heads; but the reverse proportion, of a billion red to three green, followed from tails; and I woke up and saw a red room; why, then, I would be nearly certain—on a quite personal level—that the coin had come up tails.\"


\"That stance exposes you to quite a bit of trouble,\" notes De'da.


Yu'el nods.  \"I can even see some of the troubles myself.  Suppose you split brains only a short distance apart from each other, so that they could, in principle, be fused back together again?  What if there was an Ebborian with a brain thick enough to be split into a million parts, and the parts could then re-unite?  Even if it's not biologically possible, we could do it with a computer-based mind, someday.  Now, suppose you split me into 500,000 brains who woke up in green rooms, and 3 much thicker brains who woke up in red rooms.  I would surely anticipate seeing the green room.  But most of me who see the green room will see nearly the same thing—different in tiny details, perhaps, enough to differentiate our experience, but such details are soon forgotten.  So now suppose that my 500,000 green selves are reunited into one Ebborian, and my 3 red selves are reunited into one Ebborian.  Have I just sent nearly all of my \"subjective probability\" into the green future self, even though it is now only one of two?  With only a little more work, you can see how a temporary expenditure of computing power, or a nicely refined brain-splitter and a dose of anesthesia, would let you have a high subjective probability of winning any lottery.  At least any lottery that involved splitting you into pieces.\"


De'da furrows his eyes.  \"So have you not just proved your own theory to be nonsense?\"


\"I'm not sure,\" says Yu'el.  \"At this point, I'm not even sure the conclusion is wrong.\"


\"I didn't suggest your conclusion was wrong,\" says De'da, \"I suggested it was nonsense.  There's a difference.\"


\"Perhaps,\" says Yu'el.  \"Perhaps it will indeed turn out to be nonsense, when I know better.  But if so, I don't quite know better yet.  I can't quite see how to eliminate the notion of subjective anticipation from my view of the universe.  I would need something to replace it, something to re-fill the role that anticipation currently plays in my worldview.\"


De'da shrugs.  \"Why not just eliminate 'subjective anticipation' outright?\"


\"For one thing,\" says Yu'el, \"I would then have no way to express my surprise at the orderliness of the universe.  Suppose you claimed that the universe was actually made up entirely of random experiences, brains temporarily coalescing from dust and experiencing all possible sensory data.  Then if I don't count individuals, or weigh their existence somehow, that chaotic hypothesis would predict my existence as strongly as does science.  The realization of all possible chaotic experiences would predict my own experience with probability 1.  I need to keep my surprise at having this particular orderly experience, to justify my anticipation of seeing an orderly future.  If I throw away the notion of subjective anticipation, then how do I differentiate the chaotic universe from the orderly one?  Presumably there are Yu'els, somewhere in time and space (for the universe is spatially infinite) who are about to have a really chaotic experience.  I need some way of saying that these Yu'els are rare, or weigh little—some way of mostly anticipating that I won't sprout wings and fly away.  I'm not saying that my current way of doing this is good bookkeeping, or even coherent bookkeeping; but I can't just delete the bookkeeping without a more solid understanding to put in its place.  I need some way to say that there are versions of me who see one thing, and versions of me who see something else, but there's some kind of different weight on them.  Right now, what I try to do is count copies—but I don't know exactly what constitutes a copy.\"


Po'mi clears his throat, and speaks again.  \"So, Yu'el, you agree with me that there exists a definite and factual question as to exactly when there are two conscious experiences, instead of one.\"


\"That, I do not concede,\" says Yu'el.  \"All that I have said may only be a recital of my own confusion.  You are too quick to fix the language of your beliefs, when there are words in it that, by your own admission, you do not understand.  No matter how fundamental your experience feels to you, it is not safe to trust that feeling, until experience is no longer something you are confused about.  There is a black box here, a mystery.  Anything could be inside that box—any sort of surprise—a shock that shatters everything you currently believe about consciousness.  Including upsetting your belief that experience is fundamental.  In fact, that strikes me as a surprise you should anticipate—though it will still come as a shock.\"

\"But then,\" says Po'mi, \"do you at least agree that if our physics does not specify which experiences are experienced, or how many of them, or how much they 'weigh', then our physics must be incomplete?\"

\"No,\" says Yu'el, \"I don't concede that either.  Because consider that, even if a physics is known—even if we construct a universe with very simple physics, much simpler than our own Unified Theory—I can still present the same split-brain dilemmas, and they will still seem just as puzzling.  This suggests that the source of the confusion is not in our theories of fundamental physics.  It is on a higher level of organization.  We can't compute exactly how proteins will fold up; but this is not a deficit in our theory of atomic dynamics, it is a deficit of computing power.  We don't know what makes sharkras bloom only in spring; but this is not a deficit in our Unified Theory, it is a deficit in our biology—we don't possess the technology to take the sharkras apart on a molecular level to find out how they work.  What you are pointing out is a gap in our science of consciousness, which would present us with just the same puzzles even if we knew all the fundamental physics.  I see no work here for physicists, at all.\"

Po'mi smiles faintly at this, and is about to reply, when a listening Ebborian shouts, \"What, have you begun to believe in zombies?  That when you specify all the physical facts about a universe, there are facts about consciousness left over?\"

\"No!\" says Yu'el.  \"Of course not!  You can know the fundamental physics of a universe, hold all the fundamental equations in your mind, and still not have all the physical facts.  You may not know why sharkras bloom in the summer.  But if you could actually hold the entire fundamental physical state of the sharkra in your mind, and understand all its levels of organization, then you would necessarily know why it blooms—there would be no fact left over, from outside physics.  When I say, 'Imagine running the split-brain experiment in a universe with simple known physics,' you are not concretely imagining that universe, in every detail.  You are not actually specifying the entire physical makeup of an Ebborian in your imagination.  You are only imagining that you know it.  But if you actually knew how to build an entire conscious being from scratch, out of paperclips and rubberbands, you would have a great deal of knowledge that you do not presently have.  This is important information that you are missing!  Imagining that you have it, does not give you the insights that would follow from really knowing the full physical state of a conscious being.\"

\"So,\" Yu'el continues, \"We can imagine ourselves knowing the fundamental physics, and imagine an Ebborian brain splitting, and find that we don't know exactly when the consciousness has split.  Because we are not concretely imagining a complete and detailed description of a conscious being, with full comprehension of the implicit higher levels of organization.  There are knowledge gaps here, but they are not gaps of physics.  They are gaps in our understanding of consciousness.  I see no reason to think that fundamental physics has anything to do with such questions.\"

\"Well then,\" Po'mi says, \"I have a puzzle I should like you to explain, Yu'el.  As you know, it was discovered not many years ago, that our universe has four spatial dimensions, rather than three dimensions, as it first appears.\"

\"Aye,\" says Nharglane of Ebbore, \"this was a key part in our working-out of the Unified Theory.  Our models would be utterly at a loss to account for observed experimental results, if we could not model the fourth dimension, and differentiate the fourth-dimensional density of materials.\"

\"And we also discovered,\" continues Po'mi, \"that our very planet of Ebbore, including all the people on it, has a four-dimensional thickness, and is constantly fissioning along that thickness, just as our brains do.  Only the fissioned sides of our planet do not remain in contact, as our new selves do; the sides separate into the fourth-dimensional void.\"

Nharglane nods.  \"Yes, it was rather a surprise to realize that the whole world is duplicated over and over.  I shall remember that realization for a long time indeed.  It is a good thing we Ebborians had our experience with self-fissioning, to prepare us for the shock.  Otherwise we might have been driven mad, and embraced absurd physical theories.\"

\"Well,\" says Po'mi, \"when the world splits down its four-dimensional thickness, it does not always split exactly evenly.  Indeed, it is not uncommon to see nine-tenths of the four-dimensional thickness in one side.\"

\"Really?\" says Yu'el.  \"My knowledge of physics is not so great as yours, but—\"

\"The statement is correct,\" says the respected Nharglane of Ebbore.

\"Now,\" says Po'mi, \"if fundamental physics has nothing to do with consciousness, can you tell me why the subjective probability of finding ourselves in a side of the split world, should be exactly proportional to the square of the thickness of that side?\"

There is a great terrible silence.

\"WHAT?\" says Yu'el.

\"WHAT?\" says De'da.

\"WHAT?\" says Nharglane.

\"WHAT?\" says the entire audience of Ebborians.

To be continued...

 

Part of The Quantum Physics Sequence

Next post: \"Where Experience Confuses Physicists\"

Previous post: \"Which Basis Is More Fundamental?\"

" } }, { "_id": "XDkeuJTFjM9Y2x6v6", "title": "Which Basis Is More Fundamental?", "pageUrl": "https://www.lesswrong.com/posts/XDkeuJTFjM9Y2x6v6/which-basis-is-more-fundamental", "postedAt": "2008-04-24T04:17:47.000Z", "baseScore": 29, "voteCount": 31, "commentCount": 39, "url": null, "contents": { "documentId": "XDkeuJTFjM9Y2x6v6", "html": "

Followup to: The So-Called Heisenberg Uncertainty Principle

For decades, quantum physics was vehemently asserted to be nothing but a convenience of calculation.  The equations were not to be interpreted as describing reality, though they made good predictions for reasons that it was mere philosophy to question.  This being the case, any quantity you could define seemed as fundamentally real as any other quantity, which is to say, not real at all.

Physicists have invented, for convenience of calculation, something called a momentum basis of quantum mechanics.  Instead of having a complex amplitude distribution over the positions of particles, you had a complex amplitude distribution over their momenta.

The \"momentum basis\" contains all the information that is in the \"position basis\", and the \"position basis\" contains all the information that is in the \"momentum basis\".  Physicists use the word \"basis\" for both, suggesting that they are on the same footing: that positions are no better than momenta, or vice versa.

But, in my humble opinion, the two representations are not on an equal footing when it comes to being \"fundamental\".

Physics in the position basis can be computed locally. To determine the instantaneous change of amplitude at a configuration, you only need to look at its infinitesimal neighborhood.

The momentum basis cannot be computed locally.  Quantum evolution depends on potential energy.  Potential energy depends on how far apart things are from each other, like how high an apple is off the ground. To figure out how far apart things are from each other, you have to look at the entire momentum basis to recover the positions.
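
Here is a minimal numerical sketch, in Python, of what \"computed locally\" means here—assuming natural units (hbar = m = 1), a free particle with no potential, and a deliberately crude forward-Euler step chosen for illustration rather than numerical stability.  The update at each grid point consults only that point and its two immediate neighbors:

    import numpy as np

    # One explicit time step of the free-particle Schrodinger equation,
    #   i dpsi/dt = -(1/2) d^2 psi / dx^2   (natural units: hbar = m = 1),
    # so dpsi/dt = (i/2) d^2 psi / dx^2.  The second derivative is taken by
    # nearest-neighbor finite differences (np.roll wraps around, i.e. the
    # grid has periodic boundaries).
    n, dx, dt = 512, 0.1, 0.0005
    x = (np.arange(n) - n // 2) * dx
    psi = np.exp(-x**2) * np.exp(2j * x)    # a moving Gaussian blob

    laplacian = (np.roll(psi, -1) - 2 * psi + np.roll(psi, 1)) / dx**2
    psi_next = psi + dt * (0.5j * laplacian)
    # Nothing far away was consulted: each point changed according to its
    # immediate (here: nearest-neighbor) neighborhood alone.

By contrast, to apply the potential-energy part of the evolution while staying in the momentum basis, you would first have to recover the positions from the entire distribution.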

The \"momentum basis\" is in some ways like a description of the chessboard in which you have quantities like \"the queen's position minus the rook's position\" and \"the queen's position plus the rook's position\".  You can get back a description of the entire chessboard—but the rules of the game are much harder to phrase.  Each rule has to take into account many more facts, and there's no longer an elegant local structure to the board.

Now the above analogy is not really fair, because the momentum basis is not that inelegant.  The momentum basis is the Fourier transform of the position basis, and symmetrically, the position basis is the Fourier transform of the momentum basis.  They're equally easy to extract from each other.  Even so, the momentum basis has no local physics.
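
Concretely, on a discrete grid, numpy's FFT can stand in for the continuous Fourier transform—a sketch, with the grid and the wavepacket invented purely for illustration:

    import numpy as np

    # The position-basis and momentum-basis descriptions carry the same
    # information: each is the (discrete) Fourier transform of the other.
    n = 1024
    x = np.linspace(-10, 10, n)
    psi_position = np.exp(-x**2 / 2) * np.exp(3j * x)   # amplitudes over positions

    psi_momentum = np.fft.fft(psi_position)   # to the momentum basis
    psi_back = np.fft.ifft(psi_momentum)      # and back again, losslessly

    assert np.allclose(psi_back, psi_position)
    # The total squared modulus agrees as well (Parseval; numpy's FFT is
    # unnormalized, hence the factor of n).
    assert np.isclose(np.sum(np.abs(psi_position)**2),
                      np.sum(np.abs(psi_momentum)**2) / n)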

So if you think that the nature of reality seems to tend toward local relations, local causality, or local anything, then the position basis is a better candidate for being fundamentally real.

What is this \"nature of reality\" that I'm talking about?

I sometimes talk about the Tao as being the distribution from which our laws of physics were drawn—the alphabet in which our physics was generated.  This is almost certainly a false concept, but it is a useful one.

It was a very important discovery, in human history, that the Tao wrote its laws in the language of mathematics, rather than heroic mythology.  We had to discover the general proposition that equations were better explanations for natural phenomena than \"Thor threw a lightning bolt\".  (Even though Thor sounds simpler to humans than Maxwell's Equations.) 

Einstein seems to have discovered General Relativity almost entirely on the basis of guessing what language the laws should be written in, what properties they should have, rather than by distilling vast amounts of experimental evidence into an empirical regularity.  This is the strongest evidence I know of for the pragmatic usefulness of the \"Tao of Physics\" concept.  If you get one law, like Special Relativity, you can look at the language it's written in, and infer what the next law ought to look like.  If the laws are not being generated from the same language, they surely have something in common; and this I refer to as the Tao.

Why \"Tao\"?  Because no matter how I try to describe the whole business, when I look over the description, I'm pretty sure it's wrong.  Therefore I call it the Tao.

One of the aspects of the Tao of Physics seems to be locality.  (Markov neighborhoods, to be precise.)  Discovering this aspect of the Tao was part of the great transition from Newtonian mechanics to relativity.  Newton thought that gravity and light propagated at infinite speed, action-at-a-distance.  Now that we know that everything obeys a speed limit, we know that what happens at a point in spacetime only depends on an immediate neighborhood of the immediate past.

Ever since Einstein figured out that the Tao prefers its physics local, physicists have successfully used the heuristic of prohibiting all action-at-a-distance in their hypotheses.  We've figured out that the Tao doesn't like it.  You can see how local physics would be easier to compute... though the Tao has no objection to wasting incredible amounts of computing power on things like quarks and quantum mechanics.

The Standard Model includes many fields and laws.  Our physical models require many equations and postulates to write out.  To the best of our current knowledge, the laws still appear, if not complicated, then not perfectly simple.

Why should every known behavior in physics be linear in quantum evolution, local in space and time, Charge-Parity-Time symmetrical, and conservative of probability density?  I don't know, but you'd have to be pretty stupid not to notice the pattern.  A single exception, in any individual behavior of physics, would destroy the generalization.  It seems like too much coincidence.

So, yes, the position basis includes all the information of the momentum basis, and the momentum basis includes all the information of the position basis, and they give identical predictions.

But the momentum basis looks like... well, it looks like humans took the real laws and rewrote them in a mathematically convenient way that destroys the Tao's beloved locality.

That may be a poor way of putting it, but I don't know how else to do so.

In fact, the position basis is also not a good candidate for being fundamentally real, because it doesn't obey the relativistic spirit of the Tao.  Talking about any particular position basis, involves choosing an arbitrary space of simultaneity.  Of course, transforming your description of the universe to a different space of simultaneity, will leave all your experimental predictions exactly the same.  But however the Tao of Physics wrote the real laws, it seems really unlikely that they're written to use Greenwich's space of simultaneity as the arbitrary standard, or whatever.  Even if you can formulate a mathematically equivalent representation that uses Greenwich space, it doesn't seem likely that the Tao actually wrote it that way... if you see what I mean.

I wouldn't be surprised to learn that there is some known better way of looking at quantum mechanics than the position basis, some view whose mathematical components are relativistically invariant and locally causal.

But, for now, I'm going to stick with the observation that the position basis is local, and the momentum basis is not, regardless of how pretty they look side-by-side.  It's not that I think the position basis is fundamental, but that it seems fundamentaler.

The notion that every possible way of slicing up the amplitude distribution is a \"basis\", and every \"basis\" is on an equal footing, is a habit of thought from those dark ancient ages when quantum amplitudes were thought to be states of partial information.

You can slice up your information any way you like.  When you're reorganizing your beliefs, the only question is whether the answers you want are easy to calculate.

But if a model is meant to describe reality, then I would tend to suspect that a locally causal model probably gets closer to fundamentals, compared to a nonlocal model with action-at-a-distance.  Even if the two give identical predictions.

This is admittedly a deep philosophical issue that gets us into questions I can't answer, like \"Why does the Tao of Physics like math and CPT symmetry?\" and \"Why should a locally causal isomorph of a structural essence, be privileged over nonlocal isomorphs when it comes to calling it 'real'?\", and \"What the hell is the Tao?\"

Good questions, I agree.

This talk about the Tao is messed-up reasoning.  And I know that it's messed up.  And I'm not claiming that just because it's a highly useful heuristic, that is an excuse for it being messed up.

But I also think it's okay to have theories that are in progress, that are not even claimed to be in a nice neat finished state, that include messed-up elements clearly labeled as messed-up, which are to be resolved as soon as possible rather than just tolerated indefinitely.

That, I think, is how you make incremental progress on these kinds of problems—by working with incomplete theories that have wrong elements clearly labeled \"WRONG!\"  Academics, it seems to me, have a bias toward publishing only theories that they claim to be correct—or even worse, complete—or worse yet, coherent.  This, of course, rules out incremental progress on really difficult problems.

When using this methodology, you should, to avoid confusion, choose labels that clearly indicate that the theory is wrong.  For example, the \"Tao of Physics\".  If I gave that some kind of fancy technical-sounding formal name like \"metaphysical distribution\", people might think it was a name for a coherent theory, rather than a name for my own confusion.

I accept the possibility that this whole blog post is merely stupid.  After all, the question of whether the position basis or the momentum basis is \"more fundamental\" should never make any difference as to what we anticipate.  If you ever find that your anticipations come out one way in the position basis, and a different way in the momentum basis, you are surely doing something wrong.

But Einstein (and others!) seem to have comprehended the Tao of Physics to powerfully predictive effect.  The question \"What kind of laws does the Tao favor writing?\" has paid more than a little rent.

The position basis looks noticeably more... favored.

Added:  When I talk about \"locality\", I mean locality in the abstract, computational sense: mathematical objects talking only to their immediate neighbors.  In particular, quantum physics is local in the configuration space.

This also happens to translate into physics that is local in what humans think of as \"space\": it is impossible to send signals faster than light.  But this isn't immediately obvious.  It is an additional structure of the neighborhoods in configuration space.  A configuration only neighbors configurations where positions didn't change faster than light.

A view that made both forms of locality explicit, in a relativistically invariant way, would be much more fundamentalish than the position basis.  Unfortunately I don't know what such a view might be.

 

Part of The Quantum Physics Sequence

Next post: \"Where Physics Meets Experience\"

Previous post: \"The So-Called Heisenberg Uncertainty Principle\"

" } }, { "_id": "eWuuznxeebcjWpdnH", "title": "The So-Called Heisenberg Uncertainty Principle", "pageUrl": "https://www.lesswrong.com/posts/eWuuznxeebcjWpdnH/the-so-called-heisenberg-uncertainty-principle", "postedAt": "2008-04-23T06:36:26.000Z", "baseScore": 40, "voteCount": 27, "commentCount": 23, "url": null, "contents": { "documentId": "eWuuznxeebcjWpdnH", "html": "

Previously in series: Decoherence

As touched upon earlier, Heisenberg's \"Uncertainty Principle\" is horribly misnamed.

Amplitude distributions in configuration space evolve over time. When you specify an amplitude distribution over joint positions, you are also necessarily specifying how the distribution will evolve. If there are blobs of position, you know where the blobs are going.

In classical physics, where a particle is, is a separate fact from how fast it is going. In quantum physics this is not true. If you perfectly know the amplitude distribution on position, you necessarily know the evolution of any blobs of position over time.

So there is a theorem which should have been called the Heisenberg Certainty Principle, or the Heisenberg Necessary Determination Principle; but what does this theorem actually say?

At left is an image I previously used to illustrate a possible amplitude distribution over positions of a 1-dimensional particle.

Suppose that, instead, the amplitude distribution is actually a perfect helix. (I.e., the amplitude at each point has a constant modulus, but the complex phase changes linearly with the position.) And neglect the effect of potential energy on the system evolution; i.e., this is a particle out in intergalactic space, so it's not near any gravity wells or charged particles.

If you started with an amplitude distribution that looked like a perfect spiral helix, the laws of quantum evolution would make the helix seem to rotate / move forward at a constant rate. Like a corkscrew turning at a constant rate.

This is what a physicist views as a single particular momentum.

And you'll note that a \"single particular momentum\" corresponds to an amplitude distribution that is fully spread out—there's no bulges in any particular position.
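
Both halves of that claim are easy to check numerically—a sketch on a discrete grid, where the helix pitch k below is an arbitrary choice:

    import numpy as np

    # A perfect helix over positions: constant modulus, linearly advancing phase.
    n, k = 256, 5                    # k = helix pitch, i.e. which momentum
    x = np.arange(n)
    helix = np.exp(2j * np.pi * k * x / n)

    # Fully spread out over position: the squared modulus bulges nowhere.
    assert np.allclose(np.abs(helix)**2, 1.0)

    # Fully concentrated in momentum: the transform is a single spike at k.
    spectrum = np.abs(np.fft.fft(helix))**2
    assert np.argmax(spectrum) == k
    assert np.isclose(spectrum[k], spectrum.sum())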

Let me emphasize that I have not just described a real situation you could find a particle in.

The physicist's notion of \"a single particular momentum\" is a mathematical tool for analyzing quantum amplitude distributions.

The evolution of the amplitude distribution involves things like taking the second derivative in space and multiplying by i to get (one component of) the first derivative in time. Which turns out to give rise to a wave mechanics—blobs that can propagate themselves across space, over time.

One of the basic tools in wave mechanics is taking apart complicated waves into a sum of simpler waves.

If you've got a wave that bulges in particular places, and thus changes in pitch and diameter, then you can take apart that ugly wave into a sum of prettier waves.

A sum of simpler waves whose individual behavior is easy to calculate; and then you just add those behaviors back together again.

A sum of nice neat waves, like, say, those perfect spiral helices corresponding to precise momenta.

A physicist can, for mathematical convenience, decompose a position distribution into an integral over (infinitely many) helices of different pitches, phases, and diameters.

Which integral looks like assigning a complex number to each possible pitch of the helix. And each pitch of the helix corresponds to a different momentum. So you get a complex distribution over momentum-space.

It happens to be a fact that, when the position distribution is more concentrated—when the position distribution bulges more sharply—the integral over momentum-helices gets more widely distributed.

Which has the physical consequence, that anything which is very sharply in one place, tends to soon spread itself out. Narrow bulges don't last.
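
The tradeoff can be put in numbers—a sketch assuming Gaussian bulges on a discrete grid (the particular widths are arbitrary choices).  For Gaussians, the product of the position spread and the momentum spread comes out at the minimum possible value, 1/2:

    import numpy as np

    # Sharper position bulges have more widely distributed Fourier transforms.
    def momentum_spread(sigma, n=4096, width=200.0):
        x = np.linspace(-width / 2, width / 2, n)
        bulge = np.exp(-x**2 / (4 * sigma**2))    # |bulge|^2 has std sigma
        k = 2 * np.pi * np.fft.fftfreq(n, d=width / n)
        p = np.abs(np.fft.fft(bulge))**2
        p /= p.sum()
        return np.sqrt(np.sum(p * k**2))          # std over momentum-helices

    wide, narrow = momentum_spread(2.0), momentum_spread(0.5)
    assert narrow > wide                  # narrower bulge, broader spectrum
    assert np.isclose(2.0 * wide, 0.5, rtol=1e-3)   # sigma_x * sigma_k = 1/2
    assert np.isclose(0.5 * narrow, 0.5, rtol=1e-3)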

Alternatively, you might find it convenient to think, \"Hm, a narrow bulge has sharp changes in its second derivative, and I know the evolution of the amplitude distribution depends on the second derivative, so I can sorta imagine how a narrow bulge might tend to propagate off in all directions.\"

Technically speaking, the distribution over momenta is the Fourier transform of the distribution over positions. And it so happens that, to go back from momenta to positions, you just do another Fourier transform. So there's a precisely symmetrical argument which says that anything moving at a very definite speed, has to occupy a very spread-out place. Which goes back to what was shown before, about a perfect helix having a \"definite momentum\" (corkscrewing at a constant speed) but being equally distributed over all positions.

That's Heisenberg's Necessary Relation Between Position Distribution And Position Evolution Which Prevents The Position Distribution And The Momentum Viewpoint From Both Being Sharply Concentrated At The Same Time Principle in a nutshell.

So now let's talk about some of the assumptions, issues, and common misinterpretations of Heisenberg's Misnamed Principle.

The effect of observation on the observed

Here's what actually happens when you \"observe a particle's position\":

Decoherence, as discussed yesterday, can take apart a formerly coherent amplitude distribution into noninteracting blobs.

Let's say you have a particle X with a fairly definite position and fairly definite momentum, the starting stage shown at left above. And then X comes into the neighborhood of another particle S, or set of particles S, where S is highly sensitive to X's exact location—in particular, whether X's position is on the left or right of the black line in the middle. For example, S might be poised at the top of a knife-edge, and X could tip it off to the left or to the right.

The result is to decohere X's position distribution into two noninteracting blobs, an X-to-the-left blob and an X-to-the-right blob. Well, now the position distribution within each blob, has become sharper. (Remember: Decoherence is a process of increasing quantum entanglement that masquerades as increasing quantum independence.)

So the Fourier transform of the more definite position distribution within each blob, corresponds to a more spread-out distribution over momentum-helices.

Running the particle X past a sensitive system S, has decohered X's position distribution into two noninteracting blobs. Over time, each blob spreads itself out again, by Heisenberg's Sharper Bulges Have Broader Fourier Transforms Principle.

All this gives rise to very real, very observable effects.

In the system shown at right, there is a light source, a screen blocking the light source, and a single slit in the screen.

Ordinarily, light seems to go in straight lines (for less straightforward reasons). But in this case, the screen blocking the light source decoheres the photon's amplitude. Most of the Feynman paths hit the screen.

The paths that don't hit the screen, are concentrated into a very narrow range. All positions except a very narrow range have decohered away from the blob of possibilities for \"the photon goes through the slit\", so, within this blob, the position-amplitude is concentrated very narrowly, and the spread of momenta is very large.

Way up at the level of human experimenters, we see that when photons strike the second screen, they strike over a broad range—they don't just travel in a straight line from the light source.

Wikipedia, and at least some physics textbooks, claim that it is misleading to ascribe Heisenberg effects to an \"observer effect\", that is, perturbing interactions between the measuring apparatus and the measured system:

\"Sometimes it is a failure to measure the particle that produces the disturbance. For example, if a perfect photographic film contains a small hole, and an incident photon is not observed, then its momentum becomes uncertain by a large amount. By not observing the photon, we discover that it went through the hole.\"

However, the most technical treatment I've actually read was by Feynman, and Feynman seemed to be saying that, whenever measuring the position of a particle increases the spread of its momentum, the measuring apparatus must be delivering enough of a \"kick\" to the particle to account for the change.

In other words, Feynman seemed to assert that the decoherence perspective actually was dual to the observer-effect perspective—that an interaction which produced decoherence would always be able to physically account for any resulting perturbation of the particle.

Not grokking the math, I'm inclined to believe the Feynman version. It sounds pretty, and physics has a known tendency to be pretty.

The alleged effect of conscious knowledge on particles

One thing that the Heisenberg Student Confusion Principle DEFINITELY ABSOLUTELY POSITIVELY DOES NOT SAY is that KNOWING ABOUT THE PARTICLE or CONSCIOUSLY SEEING IT will MYSTERIOUSLY MAKE IT BEHAVE DIFFERENTLY because THE UNIVERSE CARES WHAT YOU THINK.

Decoherence works exactly the same way whether a system is decohered by a human brain or by a rock. Yes, physicists tend to construct very sensitive instruments that slice apart amplitude distributions into tiny little pieces, whereas a rock isn't that sensitive. That's why your camera uses photographic film instead of mossy leaves, and why replacing your eyeballs with grapes will not improve your vision. But any sufficiently sensitive physical system will produce decoherence, where \"sensitive\" means \"developing to widely different final states depending on the interaction\", where \"widely different\" means \"the blobs of amplitude don't interact\".

Does this description say anything about beliefs? No, just amplitude distributions. When you jump up to a higher level and talk about cognition, you realize that forming accurate beliefs requires sensors. But the decohering power of sensitive interactions can be analyzed on a purely physical level.

There is a legitimate \"observer effect\", and it is this: Brains that see, and pebbles that are seen, are part of a unified physics; they are both built out of atoms. To gain new empirical knowledge about a thingy, the particles in you have to interact with the particles in the thingy. It so happens that, in our universe, the laws of physics are pretty symmetrical about how particle interactions work—conservation of momentum and so on: if you pull on something, it pulls on you.

So you can't, in fact, observe a rock without affecting it, because to observe something is to depend on it—to let it affect you, and shape your beliefs. And, in our universe's laws of physics, any interaction in which the rock affects your brain, tends to have consequences for the rock as well.

Even if you're looking at light that left a distant star 500 years ago, then 500 years ago, emitting the light affected the star.

That's how the observer effect works. It works because everything is particles, and all the particles obey the same unified mathematically simple physics.

It does not mean the physical interactions we happen to call \"observations\" have a basic, fundamental, privileged effect on reality.

To suppose that physics contains a basic account of \"observation\" is like supposing that physics contains a basic account of being Republican. It projects a complex, intricate, high-order biological cognition onto fundamental physics. It sounds like a simple theory to humans, but it's not simple.

Linearity

One of the foundational assumptions physicists used to figure out quantum theory, is that time evolution is linear. If you've got an amplitude distribution X1 that evolves into X2, and an amplitude distribution Y1 that evolves into Y2, then the amplitude distribution (X1 + Y1) should evolve into (X2 + Y2).

(To \"add two distributions\" means that we just add the complex amplitudes at every point. Very simple.)

Physicists assume you can take apart an amplitude distribution into a sum of nicely behaved individual waves, add up the time evolution of those individual waves, and get back the actual correct future of the total amplitude distribution.
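
A sketch of that linearity in action, using the exactly solvable free-particle evolution in natural units (the blobs, the grid, and the time step below are invented for illustration):

    import numpy as np

    # Free-particle evolution done helix-by-helix: each momentum component
    # just picks up the phase exp(-i k^2 t / 2).  Being built from FFTs and
    # a pointwise multiplication, the map is linear by construction.
    n, width, t = 512, 50.0, 0.3
    k = 2 * np.pi * np.fft.fftfreq(n, d=width / n)

    def evolve(psi):
        return np.fft.ifft(np.exp(-0.5j * k**2 * t) * np.fft.fft(psi))

    x = np.linspace(-width / 2, width / 2, n)
    X1 = np.exp(-(x - 5)**2)                     # one blob
    Y1 = np.exp(-(x + 5)**2) * np.exp(2j * x)    # another blob, moving

    # (X1 + Y1) evolves into (X2 + Y2):
    assert np.allclose(evolve(X1 + Y1), evolve(X1) + evolve(Y1))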

Linearity is why we can take apart a bulging blob of position-amplitude into perfect momentum-helices, without the whole model degenerating into complete nonsense.

The linear evolution of amplitude distributions is a theorem in the Standard Model of physics. But physicists didn't just stumble over the linearity principle; it was used to invent the hypotheses, back when quantum physics was being figured out.

I talked earlier about taking the second derivative of position; well, taking the derivative of a differentiable distribution is a linear operator. F'(x) + G'(x) = (F + G)'(x). Likewise, integrating the sum of two integrable distributions gets you the sum of the integrals. So the amplitude distribution evolving in a way that depends on the second derivative—or the equivalent view in terms of integrating over Feynman paths—doesn't mess with linearity.

Any \"non-linear system\" you've ever heard of is linear on a quantum level. Only the high-level simplifications that we humans use to model systems are nonlinear. (In the same way, the lightspeed limit requires physics to be local, but if you're thinking about the Web on a very high level, it looks like any webpage can link to any other webpage, even if they're not neighbors.)

Given that quantum physics is strictly linear, you may wonder how the hell you can build any possible physical instrument that detects a ratio of squared moduli of amplitudes, since the squared modulus operator is not linear: the squared modulus of the sum is not the sum of the squared moduli of the parts. (For instance, |1 + 1|^2 = 4, but |1|^2 + |1|^2 = 2.)

This is a very good question.

We'll get to it shortly.

Meanwhile, physicists, in their daily mathematical practice, assume that quantum physics is linear. It's one of those important little assumptions, like CPT invariance.

Part of The Quantum Physics Sequence

Next post: \"Which Basis Is More Fundamental?\"

Previous post: \"Decoherence\"

" } }, { "_id": "JrhoMTgMrMRJJiS48", "title": "Decoherence", "pageUrl": "https://www.lesswrong.com/posts/JrhoMTgMrMRJJiS48/decoherence", "postedAt": "2008-04-22T06:41:04.000Z", "baseScore": 41, "voteCount": 29, "commentCount": 30, "url": null, "contents": { "documentId": "JrhoMTgMrMRJJiS48", "html": "

Previously in series: Feynman Paths

To understand the quantum process called \"decoherence\", we first need to look at how the special case of quantum independence can be destroyed—how the evolution of a quantum system can produce entanglement where there was formerly independence.

\"Conf6\" Quantum independence, as you'll recall, is a special case of amplitude distributions that approximately factorize—amplitude distributions that can be treated as a product of sub-distributions over subspaces.

Reluctant tourists visiting quantum universes think as if the absence of a rectangular plaid pattern is some kind of special ghostly link between particles.  Hence the unfortunate term, \"quantum entanglement\".

The evolution of a quantum system can produce entanglement where there was formerly independence—turn a rectangular plaid pattern into something else.  Quantum independence, being a special case, is easily lost.

\"Entangler\" Let's pretend for a moment that we're looking at a classical system, which will make it easier to see what kind of physical process leads to entanglement.

At right is a system in which a positively charged light thingy is on a track, far above a negatively charged heavy thingy on a track.

At the beginning, the two thingies are far enough apart that they're not significantly interacting.

But then we lower the top track, bringing the two thingies into the range where they can easily attract each other.  (Opposite charges attract.)

So the light thingy on top rolls toward the heavy thingy on the bottom.  (And the heavy thingy on the bottom rolls a little toward the top thingy, just like an apple attracts the Earth as it falls.)

Now switch to the Feynman path integral view.  That is, imagine the evolution of a quantum system as a sum over all the paths through configuration space the initial conditions could take.

Suppose the bottom heavy thingy and the top thingy started out in a state of quantum independence, so that we can view the amplitude distribution over the whole system as the product of a \"bottom thingy distribution\" and a \"top thingy distribution\".

\"Superposition2\" The bottom thingy distribution starts with bulges in three places—which, in the Feynman path view, we might think of as three possible starting configurations from which amplitude will flow.

When we lower the top track, the light thingy on top is attracted toward the heavy bottom thingy -

- except that the bottom thingy has a sub-distribution with three bulges in three different positions.

So the end result is a joint distribution in which there are three bulges in the amplitude distribution over joint configuration space, corresponding to three different joint positions of the top thingy and bottom thingy.

I've been trying very carefully to avoid saying things like \"The bottom thingy is in three places at once\" or \"in each possibility, the top thingy is attracted to wherever the bottom thingy is\".

Still, you're probably going to visualize it that way, whether I say it or not.  To be honest, that's how I drew the diagram—I visualized three possibilities and three resulting outcomes.  Well, that's just how a human brain tends to visualize a Feynman path integral.

But this doesn't mean there are actually three possible ways the universe could be, etc.  That's just a trick for visualizing the path integral.  All the amplitude flows actually happen, they are not possibilities.

Now imagine that, in the starting state, the bottom thingy has an amplitude-factor that is smeared out over the whole bottom track; and the top thingy has an amplitude-factor in one place.  Then the joint distribution over \"top thingy, bottom thingy\" would start out looking like the plaid pattern at left, and develop into the non-plaid pattern at right:

\"Entanglecloud\"

Here the horizontal coordinate corresponds to the top thingy, and the vertical coordinate corresponds to the bottom thingy.  So we start with the top thingy localized and the bottom thingy spread out, and then the system develops into a joint distribution where the top thingy and the bottom thingy are in the same place, but their mutual position is spread out.  Very loosely speaking.

So an initially factorizable distribution, evolved into an \"entangled system\"—a joint amplitude distribution that is not viewable as a product of distinct factors over subspaces.
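
Whether a joint grid of amplitudes factorizes can be tested mechanically with the singular value decomposition: the joint distribution is a product of sub-distributions exactly when it has a single nonzero singular value.  A toy sketch—where the \"attraction\" applied below is an invented stand-in for the dynamics, not the evolution pictured above:

    import numpy as np

    # A joint amplitude grid factors into (bottom distribution) x (top
    # distribution) iff it is rank one; more than one surviving singular
    # value means entanglement.
    def schmidt_rank(joint, cutoff=1e-10):
        s = np.linalg.svd(joint, compute_uv=False)
        return int(np.sum(s > cutoff * s[0]))

    n = 64
    top = np.exp(-(np.arange(n) - n / 2.0)**2 / 20.0)   # localized top thingy
    bottom = np.ones(n)                                 # smeared-out bottom thingy

    plaid = np.outer(bottom, top)     # independent: a rectangular plaid pattern
    assert schmidt_rank(plaid) == 1

    # Invented attraction: amplitude concentrates where the two positions agree.
    i, j = np.indices((n, n))
    entangled = plaid * np.exp(-(i - j)**2 / 10.0)
    assert schmidt_rank(entangled) > 1   # no longer a product over subspaces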

(Important side note:  You'll note that, in the diagram above, system evolution obeyed the second law of thermodynamics, aka Liouville's Theorem.  Quantum evolution conserved the \"size of the cloud\", the volume of amplitude, the total amount of grey area in the diagram.

If instead we'd started out with a big light-gray square—meaning that both particles had amplitude-factors widely spread—then the second law of thermodynamics would prohibit the combined system from developing into a tight dark-gray diagonal line.

A system has to start in a low-entropy state to develop into a state of quantum entanglement, as opposed to just a diffuse cloud of amplitude.

Mutual information is also negentropy, remember.  Quantum amplitudes aren't information per se, but the rule is analogous:  Amplitude must be highly concentrated to look like a neatly entangled diagonal line, instead of just a big diffuse cloud.  If you imagine amplitude distributions as having a \"quantum entropy\", then an entangled system has low quantum entropy.)

Okay, so now we're ready to discuss decoherence.

\"Multiblobdeco\"

The system at left is highly entangled—it's got a joint distribution that looks something like, \"There's two particles, and either they're both over here, or they're both over there.\"

Yes, I phrased this as if there were two separate possibilities, rather than a single physically real amplitude distribution.  Seriously, there's no good way to use a human brain to talk about quantum physics in English.

But if you can just remember the general rule that saying \"possibility\" is shorthand for \"physically real blob within the amplitude distribution\", then I can describe amplitude distributions a lot faster by using the language of uncertainty.  Just remember that it is language.  \"Either the particle is over here, or it's over there\" means a physically real amplitude distribution with blobs in both places, not that the particle is in one of those places but we don't know which.

Anyway.  Dealing with highly entangled systems is often annoying—for human physicists, not for reality, of course.  It's not just that you've got to calculate all the possible outcomes of the different possible starting conditions.  (I.e., add up a lot of physically real amplitude flows in a Feynman path integral.)  The possible outcomes may interfere with each other.  (Which actual possible outcomes would never do, but different blobs in an amplitude distribution do.)  Like, maybe the two particles that are both over here, or both over there, meet twenty other particles and do a little dance, and at the conclusion of the path integral, many of the final configurations have received amplitude flows from both initial blobs.

But that kind of extra-annoying entanglement only happens when the blobs in the initial system are close enough that their evolutionary paths can slop over into each other.  Like, if the particles were either both here, or both there, but here and there were two light-years apart, then any system evolution taking less than a year, couldn't have the different possible outcomes overlapping.

\"Precohered_2\" Okay, so let's talk about three particles now.

This diagram shows a blob of amplitude that factors into the product of a 2D subspace and a 1D subspace.  That is, two entangled particles and one independent particle.

The vertical dimension is the one independent particle, the length and breadth are the two entangled particles.

The independent particle is in one definite place—the cloud of amplitude is vertically narrow.  The two entangled particles are either both here, or both there.  (Again I'm using that wrong language of uncertainty, words like \"definite\" and \"either\", but you see what I mean.)

Now imagine that the third independent particle interacts with the two entangled particles in a sensitive way.  Maybe the third particle is balanced on the top of a hill; and the two entangled particles pass nearby, and attract it magnetically; and the third particle falls off the top of the hill and rolls to the bottom, in that particular direction.

\"Decohered\" Afterward, the new amplitude distribution might look like this.  The third particle is now entangled with the other two particles.  And the amplitude distribution as a whole consists of two more widely separated blobs.

Loosely speaking, in the case where the two entangled particles were over here, the third particle went this way, and in the case where the two entangled particles were over there, the third particle went that way.

So now the final amplitude distribution is fully entangled—it doesn't factor into subspaces at all.

But the two blobs are more widely separated in the configuration space.  Before, each blob of amplitude had two particles in different positions; now each blob of amplitude has three particles in different positions.

Indeed, if the third particle interacted in an especially sensitive way, like being tipped off a hill and sliding down, the new separation could be much larger than the old one.

Actually, it isn't necessary for a particle to get tipped off a hill.  It also works if you've got twenty particles interacting with the first two, and ending up entangled with them.  Then the new amplitude distribution has got two blobs, each with twenty-two particles in different places.  The distance between the two blobs in the joint configuration space is much greater.

And the greater the distance between blobs, the less likely it is that their amplitude flows will intersect each other and interfere with each other.

That's decoherence.  Decoherence is the third key to recovering the classical hallucination, because it makes the blobs behave independently; it lets you treat the whole amplitude distribution as a sum of separated non-interfering blobs.
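
How strongly the separation suppresses interference can be put in numbers with a toy model: if each blob is a product state over its particles, the overlap between the two blobs is the product of the per-particle overlaps, and so it dies exponentially as more particles become entangled.  A sketch (the wavepackets and their displacement are arbitrary choices):

    import numpy as np

    # Two single-particle wavepackets, slightly displaced from each other.
    x = np.linspace(-20, 20, 2048)
    dx = x[1] - x[0]

    def packet(center):
        psi = np.exp(-(x - center)**2)
        return psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

    c = np.sum(np.conj(packet(-1.0)) * packet(+1.0)) * dx   # overlap, |c| < 1

    # For blobs that are products over n particles, the joint overlap is c**n.
    for n_particles in (1, 2, 22):
        print(n_particles, abs(c)**n_particles)
    # Prints roughly 1.4e-01, 1.8e-02, and 7.7e-20: by twenty-two particles,
    # the blobs have for all practical purposes stopped interfering.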

Indeed, once the blobs have separated, the pattern within a single blob may look a lot more plaid and rectangular—I tried to show that in the diagram above as well.

Thus, the big headache in quantum computing is preventing decoherence.  Quantum computing relies on the amplitude distributions staying close enough together in configuration space to interfere with each other.  And the environment contains a zillion particles just begging to accidentally interact with your fragile qubits, teasing apart the pieces of your painstakingly sculpted amplitude distribution.

And you can't just magically make the pieces of the scattered amplitude distribution jump back together—these are blobs in the joint configuration, remember.  You'd have to put the environmental particles in the same places, too.

(Sounds pretty irreversible, doesn't it?  Like trying to unscramble an egg?  Well, that's a very good analogy, in fact.

This is why I emphasized earlier that entanglement happens starting from a condition of low entropy.  Decoherence is irreversible because it is an essentially thermodynamic process.

It is a fundamental principle of the universe—as far as we can tell—that if you \"run the film backward\" all the fundamental laws are still obeyed.  If you take a movie of an egg falling onto the floor and smashing, and then play the film backward and see a smashed egg leaping off the floor and into a neat shell, you will not see the known laws of physics violated in any particular.  All the molecules will just happen to bump into each other in just the right way to make the egg leap off the floor and reassemble.  It's not impossible, just unbelievably improbable.

Likewise with a smashed amplitude distribution suddenly assembling many distantly scattered blobs into mutual coherence—it's not impossible, just extremely improbable that many distant starting positions would end up sending amplitude flows to nearby final locations.  You are far more likely to see the reverse.

Actually, in addition to running the film backward, you've got to turn all the positive charges to negative, and reverse left and right (or some other single dimension—essentially you have to turn the universe into its mirror image).

This is known as CPT symmetry, for Charge, Parity, and Time.

CPT symmetry appears to be a really, really, really deep principle of the universe.  Trying to violate CPT symmetry doesn't sound quite as awful to a modern physicist as trying to throw a baseball so hard it travels faster than light.  But it's almost that awful.  I'm told that Quantum Field Theory requires CPT symmetry, for one thing.

So the fact that decoherence looks like a one-way process, but is only thermodynamically irreversible rather than fundamentally asymmetrical, is a very important point.  It means quantum physics obeys CPT symmetry.

It is a universal rule in physics—according to our best current knowledge—that every apparently irreversible process is a special case of the second law of thermodynamics, not the result of time-asymmetric fundamental laws.)

To sum up:

Decoherence is a thermodynamic process of ever-increasing quantum entanglement, which, through an amazing sleight of hand, masquerades as increasing quantum independence:  Decoherent blobs don't interfere with each other, and within a single blob but not the total distribution, the blob is more factorizable into subspaces.

Thus, decoherence is the third key to recovering the classical hallucination.  Decoherence lets a human physicist think about one blob at a time, without worrying about how blobs interfere with each other; and the blobs themselves, considered as isolated individuals, are less internally entangled, hence easier to understand.  This is a fine thing if you want to pretend the universe is classical, but not so good if you want to factor a million-digit number before the Sun burns out.

 

Part of The Quantum Physics Sequence

Next post: \"The So-Called Heisenberg Uncertainty Principle\"

Previous post: \"Three Dialogues on Identity\"

" } }, { "_id": "hJPh8XyJ3fTK2hLFJ", "title": "Three Dialogues on Identity", "pageUrl": "https://www.lesswrong.com/posts/hJPh8XyJ3fTK2hLFJ/three-dialogues-on-identity", "postedAt": "2008-04-21T06:13:54.000Z", "baseScore": 61, "voteCount": 46, "commentCount": 50, "url": null, "contents": { "documentId": "hJPh8XyJ3fTK2hLFJ", "html": "

Followup to: Identity Isn't In Specific Atoms

It is widely said that some primitive tribe or other once feared that photographs could steal their souls.

Ha ha!  How embarrassing.  Silly tribespeople.

I shall now present three imaginary conversations along such lines—the common theme being frustration.

The first conversation:

Foolishly leaving the world of air-conditioning, you traveled to the Godforsaken Outback, and of course, got lost in the woods.  A more primitive tribe than yours, the Hu'wha, saved your butt.  Although the Hu'wha have told you how to reach an outpost of Internet access, that is, civilization, you've stayed with them a while longer; you've become their friend, and they yours.

One custom of the Hu'wha does seem strange to you, coming as you do from a more civilized culture:  They don't hold with lies, even small ones.  They consider a lie as an infringement upon the soul of the listener. They have a saying, \"It is better to die than to be lied to.\" Though this is a very strange and primitive custom, you have come to respect it.

Late one night, the shaman calls you to his tent. His face is grave.  \"I have heard the most disturbing news,\" he says, \"from the Tribe That Lives Across The Water.  They say that your people, the People of the Net, have a most terrible custom: they paint images of others, and thereby steal their souls, for a person cannot be in two places at once.  It is even said that you have weapons called 'cameras', for doing this automatically; and that the cameras of your folk can be very small, or disguised as other things.\"

\"Um,\" you say, \"I think you may be laboring under certain basic misconceptions.  Cameras are not weapons; they make images, but they don't steal souls.\"

The grey-bearded shaman smiles, and shakes his head.  \"Young fellow, I am the shaman of the Hu'wha, and I hold the tradition passed down from my father through many generations; the true and original wisdom granted by the gods to the first shaman.  I think I know what steals a soul and what does not, young fellow!  Even to you it should be obvious.\"

And you think:  Foolish mortal, how little you understand the power of Science.  But this is beyond the conception of this man who thinks himself above you, and so you say nothing.

\"I understand,\" the shaman says, \"that your people may be so utterly ignorant of magic that they don't realize their cameras are dangerous.  But that makes it all the more urgent that I ask you, Net-user, upon your honor:  Have you by any means whatever, in your time among us, whether yourself, or by any device, produced an image of anyone here?  If you have, we will do no violence to you—for I know there is no malice in you—but you will no longer be welcome among us.\"

You pause.  The Hu'wha set great store on the literal truth of words, as well as their intent.  And though you have no camera or paintbrushes, the answer to the question just asked, is literally yes.  Your eyes, retina, and optic nerve are constantly painting images in your visual cortex.

\"I haven't made any pictures the way you mean it,\" you say.

The shaman frowns.  \"I was looking for a simple No.  Why the hesitation?\"

Oh, dear.  \"The knowledge of my own people, the Net-folk, is not like your own knowledge,\" you say, \"and you asked a... deeper question than you know, according to the beliefs of my own people.\"

\"This is a very simple matter,\" the shaman says sharply, \"and it has to do with what you have done.  Have you made any pictures, or not?\"

\"I've painted no picture, and used no camera.\"

\"Have you caused a picture to be made by any other means?\" demands the shaman.

Dammit.  \"Not the way you mean it.  I've done nothing that the Hu'wha do not also do.\"

\"Explain yourself!\"

You sigh.  \"It is a teaching of my people, which you are welcome to believe or not as it suits you, that pictures are constantly being created of all of us, all the time.\"

\"What?\" says the shaman.

\"When you look at someone,\" you explain, \"or even when an animal looks at you, that creates an image on the inside of the skull... that is how you see.  Indeed, it is what you see—everything you see is a picture your eyes create.\"

\"That's nonsense,\" says the shaman.  \"You're right there!  I'm seeing you, not an image of you!  Now I ask you again, on your honor:  Do we Hu'wha still have our souls since you came among us, or not?\"

Oh, bloody hell.  \"It is a teaching of my people,\" you say, \"that what you call a 'soul' is... a confused idea.\"

\"You are being evasive,\" says the shaman sternly.  \"The soul is not complicated, and it would be very hard to mistake a soul for something else, like a shoe or something.  Our souls are breathed into us by Great Ghu at birth, and stays with us our whole lives, unless someone steals it; and if no one has photographed us, our souls go to the Happy Gaming Room when we die.  Now I ask you again:  Do I have my soul, or not?  Give me the truth!\"

\"The truth,\" you say, \"is that the way my people see the world is so different from yours, that you can't even imagine what I think is the truth.  I've painted no pictures, taken no photographs; all I've done is look at you, and nothing happens when I look at you, that doesn't happen when anyone else looks at you.  But you are being constantly photographed, all the time, and you never had any soul to begin with: this is the truth.\"

\"Horse output,\" says the shaman.  \"Go away; we never want to see you again.\"

The second conversation:

John Smith still looked a little pale.  This was quite understandable.  Going to a pleasant dinner with your family, having a sudden heart attack, riding to the hospital by ambulance, dying, being cryonically suspended by Alcor, spending decades in liquid nitrogen, and then awakening, all in the span of less than 24 subjective hours, will put a fair amount of stress on anyone.

\"Look,\" said John, \"I accept that there are things you're not allowed to tell me -\"

\"Not right away,\" you say.  \"We've found that certain pieces of information are best presented in a particular order.\"

John nods.  \"Fine, but I want to be very clear that I don't want to be told any comforting lies.  Not for the sake of my 'psychological health', and not for anything.  If you can't tell me, just say nothing.  Please.\"

You raise your hand to your chest, two fingers out and the others folded.  \"That, I can promise:  I cannot tell you everything, but what I say to you will be true.  In the name of Richard Feynman, who is dead but not forgotten.\"

John is giving you a very strange look.  \"How long did you say I was suspended?\"

\"Thirty-five years,\" you say.

\"I was thinking,\" said John, \"that things surely wouldn't have changed all that much in thirty-five years.\"

You say nothing, thus keeping your promise.

\"But if things have changed that much,\" John says, \"I want to know something.  Have I been uploaded?\"

You frown.  \"Uploaded?  I'm sorry, I don't understand.  The word 'upload' used to apply to computer files, right?\"

\"I mean,\" says John, \"have I been turned into a program?  An algorithm somewhere?\"

Huh?  \"Turned into an algorithm?  What were you before, a constant integer?\"

\"Aargh,\" says John.  \"Okay, yes, I'm a program, you're a program.  Every human in the history of humanity has been a program running on their brain.  I understand that.  What I want to know is whether me, this John Smith, the one talking to you right now, is a program on the same hardware as the John Smith who got cryonically suspended.\"

You pause.  \"What do you mean, 'same hardware'?\"

John starts to look worried.  \"I was hoping for a simple 'Yes', there.  Am I made of the same atoms as before, or not?\"

Oh, dear.  \"I think you may be laboring under certain basic misconceptions,\" you say.

\"I understand,\" John said, \"that your people may have the cultural belief that uploading preserves personal identity—that a human is memories and personality, not particular atoms.  But I happen to believe that my identity is bound up with the atoms that make me.  It's not as if there's an experiment you can do to prove that I'm wrong, so my belief is just as valid as yours.\"

Foolish child, you think, how little you understand the power of Science.  \"You asked a deeper question than you know,\" you say, \"and the world does not work the way you think it does.  An atom is... not what you imagine.\"

\"Look,\" John says sharply, \"I'm not asking you about this time's theories of personal identity, or your beliefs about consciousness—that's all outside the realm of third-party scientific investigation anyway.  I'm asking you a simple question that is experimentally testable.  Okay, you found something new underneath the quarks.  That's not surprising.  I'm asking, whatever stuff I am made of, is it the same stuff as before?  Yes or no?\"

The third conversation:

Your question is itself confused.  Whatever is, is real.

\"Look,\" Eliezer said, \"I know I'm not being misunderstood, so I'm not going to try and phrase this the elaborately correct way:  Is this thing that I'm holding an old-fashioned banana, or does it only have the appearance of a banana?\"

You wish to know if the accustomed state of affairs still holds.  In which it merely appears that there is a banana in your hand, but actually, there is something very different behind the appearance: a configuration of particles, held together by electromagnetic fields and other laws that humans took centuries to discover.

\"That's right.  I want to know if the lower levels of organization underlying the banana have a substantially different structure than before, and whether the causal relation between that structure and my subjective experience has changed in style.\"

Well then.  Rest assured that you are not holding the mere appearance of a banana.  There really is a banana there, not just a collection of atoms.

There was a long pause.

\"WHAT?\"

Or perhaps that was only a joke.  Let it stand that the place in which you find yourself is at least as real as anywhere you ever thought you were, and the things you see are even less illusionary than your subjective experiences of them.

\n

\"Oh, come on!  I'm not some hunter-gatherer worried about a photographer stealing his soul!  If I'm running on a computer somewhere, and this is a virtual environment, that's fine!  I was just curious, that's all.\"

\n

 Some of what you believe is true, and some of what you believe is false: this may also be said of the hunter-gatherer.  But there is a true difference between yourself and the hunter-gatherer, which is this:  You have a concept of what it means for a fundamental assumption to be mistaken.  The hunter-gatherer has no experience with other cultures that believe differently, no history that tells of past scientific revolutions.  But you know what is meant, whether or not you accept it, you understand the assertion itself:  Some of your fundamental assumptions are mistaken.

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"Decoherence\"

\n

Previous post: \"Identity Isn't In Specific Atoms\"

" } }, { "_id": "fsDz6HieZJBu54Yes", "title": "Zombies: The Movie", "pageUrl": "https://www.lesswrong.com/posts/fsDz6HieZJBu54Yes/zombies-the-movie", "postedAt": "2008-04-20T05:53:14.000Z", "baseScore": 178, "voteCount": 140, "commentCount": 82, "url": null, "contents": { "documentId": "fsDz6HieZJBu54Yes", "html": "

FADE IN around a serious-looking group of uniformed military officers.  At the head of the table, a senior, heavy-set man, GENERAL FRED, speaks.

\n

GENERAL FRED:  The reports are confirmed.  New York has been overrun... by zombies.

\n

COLONEL TODD:  Again?  But we just had a zombie invasion 28 days ago!

\n

GENERAL FRED:  These zombies... are different.  They're... philosophical zombies.

\n

CAPTAIN MUDD:  Are they filled with rage, causing them to bite people?

\n

COLONEL TODD:  Do they lose all capacity for reason?

\n

GENERAL FRED:  No.  They behave... exactly like we do... except that they're not conscious.

\n

(Silence grips the table.)

\n

COLONEL TODD:  Dear God.

\n

\n

GENERAL FRED moves over to a computerized display.

\n

GENERAL FRED:  This is New York City, two weeks ago.

\n

The display shows crowds bustling through the streets, people eating in restaurants, a garbage truck hauling away trash.

\n

GENERAL FRED:  This... is New York City... now.

\n

The display changes, showing a crowded subway train, a group of students laughing in a park, and a couple holding hands in the sunlight.

\n

COLONEL TODD:  It's worse than I imagined.

\n

CAPTAIN MUDD:  How can you tell, exactly?

\n

COLONEL TODD:  I've never seen anything so brutally ordinary.

\n

A lab-coated SCIENTIST stands up at the foot of the table.

\n

SCIENTIST:  The zombie disease eliminates consciousness without changing the brain in any way.  We've been trying to understand how the disease is transmitted.  Our conclusion is that, since the disease attacks dual properties of ordinary matter, it must, itself, operate outside our universe.  We're dealing with an epiphenomenal virus.

\n

GENERAL FRED:  Are you sure?

\n

SCIENTIST:  As sure as we can be in the total absence of evidence.

\n

GENERAL FRED:  All right.  Compile a report on every epiphenomenon ever observed.  What, where, and who.  I want a list of everything that hasn't happened in the last fifty years.

\n

CAPTAIN MUDD:  If the virus is epiphenomenal, how do we know it exists?

\n

SCIENTIST:  The same way we know we're conscious.

\n

CAPTAIN MUDD:  Oh, okay.

\n

GENERAL FRED:  Have the doctors made any progress on finding an epiphenomenal cure?

\n

SCIENTIST:  They've tried every placebo in the book.  No dice.  Everything they do has an effect.

\n

GENERAL FRED:  Have you brought in a homeopath?

\n

SCIENTIST:  I tried, sir!  I couldn't find any!

\n

GENERAL FRED:  Excellent.  And the Taoists?

\n

SCIENTIST:  They refuse to do anything!

\n

GENERAL FRED:  Then we may yet be saved.

\n

COLONEL TODD:  What about David Chalmers?  Shouldn't he be here?

\n

GENERAL FRED:  Chalmers... was one of the first victims.

\n

COLONEL TODD:  Oh no.

\n

(Cut to the INTERIOR of a cell, completely walled in by reinforced glass, where DAVID CHALMERS paces back and forth.)

\n

DOCTOR:  David!  David Chalmers!  Can you hear me?

\n

CHALMERS:  Yes.

\n

NURSE:  It's no use, doctor.

\n

CHALMERS:  I'm perfectly fine.  I've been introspecting on my consciousness, and I can't detect any difference.  I know I would be expected to say that, but—

\n

The DOCTOR turns away from the glass screen in horror.

\n

DOCTOR:  His words, they... they don't mean anything.

\n

CHALMERS:  This is a grotesque distortion of my philosophical views.  This sort of thing can't actually happen!

\n

DOCTOR:  Why not?

\n

NURSE:  Yes, why not?

\n

CHALMERS:  Because—

\n

(Cut to two POLICE OFFICERS, guarding a dirt road leading up to the imposing steel gate of a gigantic concrete complex.  On their uniforms, a badge reads \"BRIDGING LAW ENFORCEMENT AGENCY\".)

\n

OFFICER 1:  You've got to watch out for those clever bastards.  They look like humans.  They can talk like humans.  They're identical to humans on the atomic level.  But they're not human.

\n

OFFICER 2:  Scumbags.

\n

The huge noise of a throbbing engine echoes over the hills.  Up rides the MAN on a white motorcycle.  The MAN is wearing black sunglasses and a black leather business suit with a black leather tie and silver metal boots.  His white beard flows in the wind.  He pulls to a halt in front of the gate.

\n

The OFFICERS bustle up to the motorcycle.

\n

OFFICER 1:  State your business here.

\n

MAN:  Is this where you're keeping David Chalmers?

\n

OFFICER 2:  What's it to you?  You a friend of his?

\n

MAN:  Can't say I am.  But even zombies have rights.

\n

OFFICER 1:  All right, buddy, let's see your qualia.

\n

MAN:  I don't have any.

\n

OFFICER 2 suddenly pulls a gun, keeping it trained on the MAN.

OFFICER 2:  Aha!  A zombie!

\n

OFFICER 1:  No, zombies claim to have qualia.

\n

OFFICER 2:  So he's an ordinary human?

\n

OFFICER 1:  No, they also claim to have qualia.

\n

The OFFICERS look at the MAN, who waits calmly.

\n

OFFICER 2:  Um...

\n

OFFICER 1:  Who are you?

\n

MAN:  I'm Daniel Dennett, bitches.

\n

Seemingly from nowhere, DENNETT pulls a sword and slices OFFICER 2's gun in half with a steely noise.  OFFICER 1 begins to reach for his own gun, but DENNETT is suddenly standing behind OFFICER 1 and chops with a fist, striking the junction of OFFICER 1's shoulder and neck.  OFFICER 1 drops to the ground.

\n

OFFICER 2 steps back, horrified.

\n

OFFICER 2:  That's not possible!  How'd you do that?

\n

DENNETT:  I am one with my body.

\n

DENNETT drops OFFICER 2 with another blow, and strides toward the gate.  He looks up at the imposing concrete complex, and grips his sword tighter.

\n

DENNETT (quietly to himself):  There is a spoon.

\n

(Cut back to GENERAL FRED and the other military officials.)

\n

GENERAL FRED:  I've just received the reports.  We've lost Detroit.

\n

CAPTAIN MUDD:  I don't want to be the one to say \"Good riddance\", but—

\n

GENERAL FRED:  Australia has been... reduced to atoms.

\n

COLONEL TODD:  The epiphenomenal virus is spreading faster.  Civilization itself threatens to dissolve into total normality.  We could be looking at the middle of humanity.

\n

CAPTAIN MUDD:  Can we negotiate with the zombies?

\n

GENERAL FRED:  We've sent them messages.  They sent only a single reply.

\n

CAPTAIN MUDD:  Which was...?

\n

GENERAL FRED:  It's on its way now.

\n

An orderly brings in an envelope, and hands it to GENERAL FRED.

\n

GENERAL FRED opens the envelope, takes out a single sheet of paper, and reads it.

\n

Silence envelops the room.

\n

CAPTAIN MUDD:  What's it say?

\n

GENERAL FRED:  It says... that we're the ones with the virus.

\n

(A silence falls.)

\n

COLONEL TODD raises his hands and stares at them.

\n

COLONEL TODD:  My God, it's true.  It's true.  I...

\n

(A tear rolls down COLONEL TODD's cheek.)

\n

COLONEL TODD:  I don't feel anything.

\n

The screen goes black.

\n

The sound goes silent.

\n

The movie continues exactly as before.

\n
\n

\"Elizombies\" PS:  This is me being attacked by zombie nurses at Penguicon.

\n

Only at a combination science fiction and open-source convention would it be possible to attend a session on knife-throwing, cry \"In the name of Bayes, die!\", throw the knife, and then have a fellow holding a wooden shield say, \"Yes, but how do you determine the prior for where the knife hits?\"

" } }, { "_id": "RLScTpwc5W2gGGrL9", "title": "Identity Isn't In Specific Atoms", "pageUrl": "https://www.lesswrong.com/posts/RLScTpwc5W2gGGrL9/identity-isn-t-in-specific-atoms", "postedAt": "2008-04-19T04:55:50.000Z", "baseScore": 58, "voteCount": 42, "commentCount": 73, "url": null, "contents": { "documentId": "RLScTpwc5W2gGGrL9", "html": "

Continuation of: No Individual Particles
Followup to: The Generalized Anti-Zombie Principle

\n

Suppose I take two atoms of helium-4 in a balloon, and swap their locations via teleportation.  I don't move them through the intervening space; I just click my fingers and cause them to swap places.  Afterward, the balloon looks just the same, but two of the helium atoms have exchanged positions.

\n

Now, did that scenario seem to make sense?  Can you imagine it happening?

\n

\n

If you looked at that and said, \"The operation of swapping two helium-4 atoms produces an identical configuration—not a similar configuration, an identical configuration, the same mathematical object—and particles have no individual identities per se—so what you just said is physical nonsense,\" then you're starting to get quantum mechanics.

\n

If you furthermore had any thoughts about a particular \"helium atom\" being a factor in a subspace of an amplitude distribution that happens to factorize that way, so that it makes no sense to talk about swapping two identical multiplicative factors, when only the combined amplitude distribution is real, then you're seriously starting to get quantum mechanics.

\n

If you thought about two similar billiard balls changing places inside a balloon, but nobody on the outside being able to notice a difference, then... oh, hell, I don't know, go back to the beginning of the series and try rereading the whole thing over the course of one day.  If that still doesn't work, read an actual book on quantum mechanics.  Feynman's QED is a great place to start—though not a good place to finish, and it's not written from a pure realist perspective.

\n

But if you did \"get\" quantum physics, then, as promised, we have now come to the connection between the truth of quantum mechanics, the lies of human intuitions, and the Generalized Anti-Zombie Principle.

\n

Stirling Westrup previously commented on the GAZP post:

\n
\n

I found the previous articles on Zombies somewhat tedious... Still, now I'm glad I read through it all as I can see why you were so careful to lay down the foundations you did.

\n

The question of what changes one can make to the brain while maintaining 'identity' has been discussed many times on the Extropians list, and seldom with any sort of constructive results.

\n

Today's article has already far exceeded the signal to noise ratio of any other discussion on the same topic that I've ever seen...

\n
\n

The Extropians email list that Westrup refers to is the oldest online gathering place of transhumanists.  It is where I made my debut as a writer, and it is where the cofounders of the Singularity Institute met.  Though the list is not what it once was...

\n

There are certain topics, on the Extropians list, that have been discussed over and over again, for years and years, without making any progress.  Just the same arguments and counterarguments, over and over again.

\n

The worst of those infinite loops concerns the question of personal identity.  For example, if you build an exact physical replica of a human, using different atoms, but atoms of the same kind in the same places, is it the same person or just a copy? 

\n

This question has flared up at least once a year, always with the same arguments and counterarguments, every year since I joined the Extropians mailing list in 1996.  And I expect the Personal Identity Wars started well before then.

\n

I did try remarking, \"Quantum mechanics says there isn't any such thing as a 'different particle of the same kind', so wherever your personal identity is, it sure isn't in particular atoms, because there isn't any such thing as a 'particular atom'.\"

\n

It didn't work, of course.  I didn't really expect it to.  Without a long extended explanation, a remark like that doesn't actually mean anything.

\n

The concept of reality as a sum of independent individual billiard balls, seems to be built into the human parietal cortex—the parietal cortex being the part of our brain that does spatial modeling: navigating rooms, grasping objects, throwing rocks.

\n

Even very young children, infants, look longer at a scene that violates expectations—for example, a scene where a ball rolls behind a screen, and then two balls roll out.

\n

People try to think of a person, an identity, an awareness, as though it's an awareness-ball located inside someone's skull.  Even unsophisticated materialists tend to think that, since the consciousness ball is made up of lots of little billiard balls called \"atoms\", if you swap the atoms, why, you must have swapped the consciousness.

\n

Now even without knowing any quantum physics—even in a purely classical universe—it is possible to refute this idea by applying the Generalized Anti-Zombie Principle.  There are many possible formulations of the GAZP, but one of the simpler ones says that, if alleged gigantic changes are occurring in your consciousness, you really ought to notice something happening, and be able to say so.

\n

The equivalent of the Zombie World, for questions of identity/continuity, is the Soul Swap World.  The allegation is that the Soul Swap World is microphysically identical to our own; but every five minutes, each thread of consciousness jumps to a random new brain, without the brains changing in any third-party experimentally detectable way.  One second you're yourself, the next second you're Britney Spears.  And neither of you say that you've noticed anything happening—by hypothesis, since you're microphysically identical down to the motion of your lips.

\n

(Let me know if the Soul Swap World has been previously invented in philosophy, and has a standard name—so far as I presently know, this is my own idea.)

\n

We can proceed to demolish the Soul Swap World by an argument exactly analogous to the one that demolished the Zombie World:  Whatever-it-is which makes me feel that I have a consciousness that continues through time, that whatever-it-is was physically potent enough to make me type this sentence.  Should I try to make the phrase \"consciousness continuing through time\" refer to something that has nothing to do with the cause of my typing those selfsame words, I will have problems with the meaning of my arguments, not just their plausibility.

\n

Whatever it is that makes me say, aloud, that I have a personal identity, a causally closed world physically identical to our own, has captured that source—if there is any source at all.

\n

And we can proceed, again by an exactly analogous argument, to a Generalized Anti-Swapping Principle:  Flicking a disconnected light switch shouldn't switch your personal identity, even though the motion of the switch has an in-principle detectable gravitational effect on your brain, because the switch flick can't disturb the true cause of your talking about \"the experience of subjective continuity\".

\n

So even in a classical universe, if you snap your fingers and swap an atom in the brain for a physically similar atom outside; and the brain is not disturbed, or not disturbed any more than the level of thermal noise; then whatever causes the experience of subjective continuity, should also not have been disturbed.  Even if you swap all the classical atoms in a brain at the same time, if the person doesn't notice anything happen, why, it probably didn't.

\n

And of course there's the classic (and classical) argument, \"Well, your body's turnover time for atoms is seven years on average.\"

\n

But it's a moot argument.

\n

We don't live in a classical universe.

\n

We live in a quantum universe where the notion of \"same hydrogen atom vs. different hydrogen atom\" is physical nonsense.

\n

We live in a universe where the whole notion of billiard balls bopping around is fundamentally wrong.

\n

This can be a disorienting realization, if you formerly thought of yourself as an awareness ball that moves around.

\n

Sorry.  Your parietal cortex is fooling you on this one.

\n

But wait!  It gets even worse!

\n

The brain doesn't exactly repeat itself; the state of your brain one second from now is not the state of your brain one second ago.  The neural connections don't all change every second, of course.  But there are enough changes every second that the brain's state is not cyclic, not over the course of a human lifetime.  With every fragment of memory you lay down—and every thought that pops in and out of short-term memory—and every glance of your eyes that changes the visual field of your visual cortex—you ensure that you never repeat yourself exactly.

\n

Over the course of a single second—not seven years, but one second—the joint position of all the atoms in your brain, will change far enough away from what it was before, that there is no overlap with the previous joint amplitude distribution.  The brain doesn't repeat itself.  Over the course of one second, you will end up being comprised of a completely different, nonoverlapping volume of configuration space.

\n

And the quantum configuration space is the most fundamental known reality, according to our best current theory, remember.  Even if quantum theory turns out not to be really truly fundamental, it has already finished superseding the hallucination of individual particles.  We're never going back to billiard balls, any more than we're going back to Newtonian mechanics or phlogiston theory.  The ratchet of science turns, but it doesn't turn backward.

\n

And actually, the time for you to be comprised of a completely different volume of configuration space, is way less than a second.  That time is the product of all the individual changes in your brain put together.  It'll be less than a millisecond, less than a femtosecond, less than the time it takes light to cross a neutron diameter.  It works out to less than the Planck time, if that turns out to make physical sense.
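A toy version of that multiplication, with every number invented purely for illustration: if each of N atom-factors retains an overlap of (1 - eps) with its own state of a moment ago, the joint overlap is roughly (1 - eps)^N, which is zero for all practical purposes at brain-sized N, no matter how small eps is.

```python
import math

N = 1e26   # very rough count of atoms in a brain (order-of-magnitude guess)
for eps in (1e-6, 1e-12, 1e-20):
    # log10 of (1 - eps)**N, done in log space to dodge floating-point underflow:
    log10_overlap = N * math.log1p(-eps) / math.log(10)
    print(f'eps = {eps:g}: joint overlap ~ 10^({log10_overlap:.3g})')
```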

\n

And then there's the point to consider that the physically real amplitude distribution is over a configuration space of all the particles in the universe.  \"You\" are just a factored subspace of that distribution.

\n

Yes, that's right, I'm calling you a factored subspace.

\n

None of this should be taken as saying that you are somehow independent of the quantum physics comprising you.  If an anvil falls on your head, you will stop talking about consciousness.  This is experimentally testable.  Don't try it at home.

\n

But the notion that you can equate your personal continuity, with the identity of any physically real constituent of your existence, is absolutely and utterly hopeless.

\n

You are not \"the same you, because you are made of the same atoms\".  You have zero overlap with the fundamental constituents of yourself from even one nanosecond ago.  There is continuity of information, but not equality of parts.

\n

The new factor over the subspace looks a whole lot like the old you, and not by coincidence:  The flow of time is lawful, there are causes and effects and preserved commonalities.  Look to the regularity of physics, if you seek a source of continuity.  Do not ask to be composed of the same objects, for this is hopeless.

\n

Whatever makes you feel that your present is connected to your past, it has nothing to do with an identity of physically fundamental constituents over time.

\n

Which you could deduce a priori, even in a classical universe, using the Generalized Anti-Zombie Principle.  The imaginary identity-tags that read \"This is electron #234,567...\" don't affect particle motions or anything else; they can be swapped without making a difference because they're epiphenomenal.  But since this final conclusion happens to be counterintuitive to a human parietal cortex, it helps to have the brute fact of quantum mechanics to crush all opposition.

\n

Damn, have I waited a long time to be able to say that.

\n

And no, this isn't the only point I have to make on how counterintuitive physics rules out intuitive conceptions of personal identity.  I've got even stranger points to make.  But those will take more physics first.

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"Three Dialogues on Identity\"

\n

Previous post: \"No Individual Particles\"

" } }, { "_id": "Cpf2jsZsNFNH5TSpc", "title": "No Individual Particles", "pageUrl": "https://www.lesswrong.com/posts/Cpf2jsZsNFNH5TSpc/no-individual-particles", "postedAt": "2008-04-18T04:40:19.000Z", "baseScore": 36, "voteCount": 29, "commentCount": 24, "url": null, "contents": { "documentId": "Cpf2jsZsNFNH5TSpc", "html": "

Followup to: Can You Prove Two Particles Are Identical?, Feynman Paths

\n

Even babies think that objects have individual identities.  If you show an infant a ball rolling behind a screen, and then a moment later, two balls roll out, the infant looks longer at the expectation-violating event.  Long before we're old enough to talk, we have a parietal cortex that does spatial modeling: that models individual animals running or rocks flying through 3D space.

\n

And this is just not the way the universe works.  The difference is experimentally knowable, and known.  Grasping this fact, being able to see it at a glance, is one of the fundamental bridges to cross in understanding quantum mechanics.

\n

If you shouldn't start off by talking to your students about wave/particle duality, where should a quantum explanation start?  I would suggest taking, as your first goal in teaching, explaining how quantum physics implies that a simple experimental test can show that two electrons are entirely indistinguishable—not just indistinguishable according to known measurements of mass and electrical charge.

\n

To grasp on a gut level how this is possible, it is necessary to move from thinking in billiard balls to thinking in configuration spaces; and then you have truly entered into the true and quantum realm.

\n

\n

In previous posts such as Joint Configurations and The Quantum Arena, we've seen that the physics of our universe takes place in a multi-particle configuration space.

\n

\"Conf6_2\"The illusion of individual particles arises from approximate factorizability of a multi-particle distribution, as shown at left for a classical configuration space.

\n

If the probability distribution over this 2D configuration space of two classical 1D particles, looks like a rectangular plaid pattern, then it will factorize into a distribution over A times a distribution over B.

\n

In classical physics, the particles A and B are the fundamental things, and the configuration space is just an isomorphic way of looking at them.

\n

In quantum physics, the configuration space is the fundamental thing, and you get the appearance of an individual particle when the amplitude distribution factorizes enough to let you look at a subspace of the configuration space, and see a factor of the amplitude distribution—a factor that might look something like this:

\n

\"Ampl1\"

\n

This isn't an amplitude distribution, mind you.  It's a factor in an amplitude distribution, which you'd have to multiply by the subspace for all the other particles in the universe, to approximate the physically real amplitude distribution.

\n

Most mathematically possible amplitude distributions won't factor this way.  Quantum entanglement is not some extra, special, additional bond between two particles.  \"Quantum entanglement\" is the general case.  The special and unusual case is quantum independence.
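A minimal numerical sketch of that claim, using a small discretized two-particle amplitude grid (a toy setup of my own, not anything from the original post): a factorizable joint amplitude is exactly an outer product, detectable because it has a single nonzero singular value, while a generic amplitude matrix has many.

```python
import numpy as np

rng = np.random.default_rng(0)
amp_a = rng.normal(size=8) + 1j * rng.normal(size=8)   # factor over A's position
amp_b = rng.normal(size=8) + 1j * rng.normal(size=8)   # factor over B's position

plaid = np.outer(amp_a, amp_b)   # Amplitude(X, Y) = Amplitude(X) * Amplitude(Y)
generic = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))

def schmidt_rank(amp, tol=1e-10):
    # Count singular values above tol; rank 1 means the joint amplitude factorizes.
    return int(np.sum(np.linalg.svd(amp, compute_uv=False) > tol))

print(schmidt_rank(plaid))     # 1 -- quantum independence, the special case
print(schmidt_rank(generic))   # 8 -- entangled, the general case
```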

\n

Reluctant tourists in a quantum universe talk about the bizarre phenomenon of quantum entanglement.  Natives of a quantum universe talk about the special case of quantum independence.  Try to think like a native, because you are one.

\n

I've previously described a configuration as a mathematical object whose identity is \"A photon here, a photon there; an electron here, an electron there.\"  But this is not quite correct.  Whenever you see a real-world electron, caught in a little electron trap or whatever, you are looking at a blob of amplitude, not a point mass.  In fact, what you're looking at is a blob of amplitude-factor in a subspace of a global distribution that happens to factorize.

\n

Clearly, then, an individual point in the configuration space does not have an identity of \"blob of amplitude-factor here, blob of amplitude-factor there\"; so it doesn't make sense to say that a configuration has the identity \"A photon here, a photon there.\"

\n

But what is an individual point in the configuration space, then?

\n

Well, it's physics, and physics is math, and you've got to come to terms with thinking in pure mathematical objects.  A single point in quantum configuration space, is the product of multiple point positions per quantum field; multiple point positions in the electron field, in the photon field, in the quark field, etc.

\n

When you actually see an electron trapped in a little electron trap, what's really going on, is that the cloud of amplitude distribution that includes you and your observed universe, can at least roughly factorize into a subspace that corresponds to that little electron, and a subspace that corresponds to everything else in the universe.  So that the physically real amplitude distribution is roughly the product of a little blob of amplitude-factor in the subspace for that electron, and the amplitude-factor for everything else in the universe.  Got it?

\n

One commenter reports attaining enlightenment on reading in Wikipedia:

\n
\n

'From the point of view of quantum field theory, particles are identical if and only if they are excitations of the same underlying quantum field. Thus, the question \"why are all electrons identical?\" arises from mistakenly regarding individual electrons as fundamental objects, when in fact it is only the electron field that is fundamental.'

\n
\n

Okay, but that doesn't make the basic jump into a quantum configuration space that is inherently over multiple particles.  It just sounds like you're talking about individual disturbances in the aether, or something.  As I understand it, an electron isn't an excitation of a quantum electron field, like a wave in the aether; the electron is a blob of amplitude-factor in a subspace of a configuration space whose points correspond to multiple point positions in quantum fields, etc.

\n

The difficult jump from classical to quantum is not thinking of an electron as an excitation of a field.  Then you could just think of a universe made up of \"Excitation A in electron field over here\" + \"Excitation B in electron field over there\" + etc.  You could factorize the universe into individual excitations of a field.  Your parietal cortex would have no trouble with that one—it doesn't care whether you call the little billiard balls \"excitations of an electron field\" so long as they still behave like little billiard balls.

\n

The difficult jump is thinking of a configuration space that is the product of many positions in many fields, without individual identities for the positions.  A configuration space whose points are \"a position here in this field, a position there in this field, a position here in that field, and a position there in that field\".  Not, \"A positioned here in this field, B positioned there in this field, C positioned here in that field\" etc.

\n

You have to reduce the appearance of individual particles to a regularity in something that is different from the appearance of particles, something that is not itself a little billiard ball.

\n

Oh, sure, thinking of photons as individual objects will seem to work out, as long as the amplitude distribution happens to factorize.  But what happens when you've got your \"individual\" photon A and your \"individual\" photon B, and you're in a situation where, a la Feynman paths, it's possible for photon A to end up in position 1 and photon B to end up in position 2, or for A to end up in 2 and B to end up in 1?  Then the illusion of classicality breaks down, because the amplitude flows overlap:

[Figure: \"Ampl3_3\"]

\n

In that triangular region where the distribution overlaps itself, no fact exists as to which particle is which, even in principle—and in the real world, we often get a lot more overlap than that.

\n

I mean, imagine that I take a balloon full of photons, and shake it up.

\n

Amplitude's gonna go all over the place.  If you label all the original apparent-photons, there's gonna be Feynman paths for photons A, B, C ending up at positions 1, 2, 3 via a zillion different paths and permutations.

\n

The amplitude-factor that corresponds to the \"balloon full of photons\" subspace, which contains bulges of amplitude-subfactor at various different locations in the photon field, will undergo a continuously branching evolution that involves each of the original bulges ending up in many different places by all sorts of paths, and the final configuration will have amplitude contributed from many different permutations.

\n

It's not that you don't know which photon went where.  It's that no fact of the matter exists. The illusion of individuality, the classical hallucination, has simply broken down.

\n

And the same would hold true of a balloon full of quarks or a balloon full of electrons.  Or even a balloon full of helium. Helium atoms can end up in the same places, via different permutations, and have their amplitudes add just like photons.

\n

Don't be tempted to look at the balloon, and think, \"Well, helium atom A could have gone to 1, or it could have gone to 2; and helium atom B could have gone to 1 or 2; quantum physics says the atoms both sort of split, and each went both ways; and now the final helium atoms at 1 and 2 are a mixture of the identities of A and B.\"  Don't torture your poor parietal cortex so.  It wasn't built for such usage.

\n

Just stop thinking in terms of little billiard balls, with or without confused identities.  Start thinking in terms of amplitude flows in configuration space.  That's all there ever is.

\n

And then it will seem completely intuitive that a simple experiment can tell you whether two blobs of amplitude-factor are over the same quantum field.

\n

Just perform any experiment where the two blobs end up in the same positions, via different permutations, and see if the amplitudes add.
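As a toy illustration of what such an experiment compares, take some made-up, unnormalized amplitudes and work out the two predictions: probabilities adding, as they would for distinguishable blobs, versus amplitudes adding before squaring, as they do for blobs of the same field.

```python
# Made-up, unnormalized amplitudes for blob A and blob B reaching
# positions 1 and 2 (purely illustrative numbers):
a1, a2 = 0.6 + 0.2j, 0.1 - 0.7j
b1, b2 = 0.3 - 0.4j, 0.5 + 0.1j

# Distinguishable blobs: the two permutations are different final
# configurations, so their probabilities add.
p_distinct = abs(a1 * b2) ** 2 + abs(a2 * b1) ** 2

# Same quantum field (bosons such as photons or helium-4): both permutations
# are the *same* configuration, so the amplitudes add before squaring.
# (For fermions the swapped term would enter with a minus sign.)
p_identical = abs(a1 * b2 + a2 * b1) ** 2

print(p_distinct, p_identical)   # unequal: the difference is what you measure
```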

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"Identity Isn't In Specific Atoms\"

\n

Previous post: \"Feynman Paths\"

" } }, { "_id": "oiu7YhzrDTvCxMhdS", "title": "Feynman Paths", "pageUrl": "https://www.lesswrong.com/posts/oiu7YhzrDTvCxMhdS/feynman-paths", "postedAt": "2008-04-17T06:32:28.000Z", "baseScore": 45, "voteCount": 30, "commentCount": 33, "url": null, "contents": { "documentId": "oiu7YhzrDTvCxMhdS", "html": "

Previously in series: The Quantum Arena

\n

At this point I would like to introduce another key idea in quantum mechanics.  Unfortunately, this idea was introduced so well in chapter 2 of QED: The Strange Theory of Light and Matter by Richard Feynman, that my mind goes blank when trying to imagine how to introduce it any other way.  As a compromise with just stealing his entire book, I stole one diagram—a diagram of how a mirror really works.

\n

\n

\"Feynman1\"

\n

In elementary school, you learn that the angle of incidence equals the angle of reflection.  But actually, saith Feynman, each part of the mirror reflects at all angles.

\n

So why is it that, way up at the human level, the mirror seems to reflect with the angle of incidence equal to the angle of reflection?

\n

Because in quantum mechanics, amplitude that flows to identical configurations (particles of the same species in the same places) is added together, regardless of how the amplitude got there.

\n

To find the amplitude for a photon to go from S to P, you've got to add up the amplitudes for all the different ways the photon could get there—by bouncing off the mirror at A, bouncing off the mirror at B...

\n

The rule of the Feynman \"path integral\" is that each of the paths from S to P contributes an amplitude of constant magnitude but varying phase, and the phase varies with the total time along the path.  It's as if the photon is a tiny spinning clock—the hand of the clock stays the same length, but it turns around at a constant rate for each unit of time.

\n

Feynman graphs the time for the photon to go from S to P via A, B, C, ...  Observe: the total time changes less between \"the path via F\" and \"the path via G\", than the total time changes between \"the path via A\" and \"the path via B\".  So the phase of the complex amplitude changes less, too.

\n

And when you add up all the ways the photon can go from S to P, you find that most of the amplitude comes from the middle part of the mirror—the contributions from other parts of the mirror tend to mostly cancel each other out, as shown at the bottom of Feynman's figure.
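You can watch the cancellation happen numerically.  The sketch below uses scaled units (in effect, Feynman's tiny mirror or very big photons) and uses path length as a stand-in for travel time at constant speed; all the specific numbers are my own choices for illustration.

```python
import numpy as np

lam = 0.05                             # wavelength, hypothetical scaled units
S, P = (-1.0, 1.0), (1.0, 1.0)         # source and detector above the mirror
xs = np.linspace(-2.0, 2.0, 200_001)   # candidate bounce points along y = 0

# One unit arrow per path; its angle turns once per wavelength of path length.
lengths = np.hypot(xs - S[0], S[1]) + np.hypot(xs - P[0], P[1])
arrows = np.exp(2j * np.pi * lengths / lam)

total = np.abs(arrows.sum())
middle = np.abs(arrows[np.abs(xs) < 0.25].sum())        # around the classical bounce
edge = np.abs(arrows[(xs > 0.75) & (xs < 1.25)].sum())  # equally wide, off-center

print(middle, edge, total)
```

The middle slice alone comes out comparable to the whole sum, while the equally wide off-center slice contributes far less: its arrows spin so fast from point to point that they nearly cancel.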

\n

There is no answer to the question \"Which part of the mirror did the photon really come from?\"  Amplitude is flowing from all of these configurations.  But if we were to ignore all the parts of the mirror except the middle, we would calculate essentially the same amount of total amplitude.

\n

This means that a photon, which can get from S to P by striking any part of the mirror, will behave pretty much as if only a tiny part of the mirror exists—the part where the photon's angle of incidence equals the angle of reflection.

\n

Unless you start playing clever tricks using your knowledge of quantum physics.

\n

For example, you can scrape away parts of the mirror at regular intervals, deleting some little arrows and leaving others.  Keep A and its little arrow; scrape away B so that it has no little arrow (at least no little arrow going to P).  Then a distant part of the mirror can contribute amplitudes that add up with each other to a big final amplitude, because you've removed the amplitudes that were out of phase.

\n

In which case you can make a mirror that reflects with the angle of incidence not equal to the angle of reflection.  It's called a diffraction grating.  But it reflects different wavelengths of light at different angles, so a diffraction grating is not quite a \"mirror\" in the sense you might imagine; it produces little rainbows of color, like a droplet of oil on the surface of water.

\n

How fast does the little arrow rotate?  As fast as the photon's wavelength—that's what a photon's wavelength is.  The wavelength of yellow light is ~570 nanometers:  If yellow light travels an extra 570 nanometers, its little arrow will turn all the way around and end up back where it started.

\n

So either Feynman's picture is of a very tiny mirror, or he is talking about some very big photons, when you look at how fast the little arrows seem to be rotating.  Relative to the wavelength of visible light, a human being is a lot bigger than the level at which you can see quantum effects.

\n

You'll recall that the first key to recovering the classical hallucination from the reality of quantum physics, was the possibility of approximate independence in the amplitude distribution.  (Where the distribution roughly factorizes, it can look like a subsystem of particles is evolving on its own, without being entangled with every other particle in the universe.)

\n

The second key to re-deriving the classical hallucination, is the kind of behavior that we see in this mirror.  Most of the possible paths cancel each other out, and only a small group of neighboring paths add up.  Most of the amplitude comes from a small neighborhood of histories—the sort of history where, for example, the photon's angle of incidence is equal to its angle of reflection.  And so too with many other things you are pleased to regard as \"normal\".

\n

My first posts on QM showed amplitude flowing in crude chunks from discrete situation to discrete situation.  In real life there are continuous amplitude flows between continuous configurations, like we saw with Feynman's mirror.  But by the time you climb all the way up from a few hundred nanometers to the size scale of human beings, most of the amplitude contributions have canceled out except for a narrow neighborhood around one path through history.

\n

Mind you, this is not the reason why a photon only seems to be in one place at a time.  That's a different story, which we won't get to today.

\n

The more massive things are—actually the more energetic they are, mass being a form of energy—the faster the little arrows rotate. Shorter wavelengths of light having more energy is a special case of this.  Compound objects, like a neutron made of three quarks, can be treated as having a collective amplitude that is the multiplicative product of the component amplitudes—at least to the extent that the amplitude distribution factorizes, so that you can treat the neutron as an individual.

\n

Thus the relation between energy and wavelength holds for more than photons and electrons; atoms, molecules, and human beings can be regarded as having a wavelength.

\n

But by the time you move up to a human being—or even a single biological cell—the mass-energy is really, really large relative to a yellow photon.  So the clock is rotating really, really fast.  The wavelength is really, really short.  Which means that the neighborhood of paths where things don't cancel out is really, really narrow.
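The arithmetic behind that claim is one line, the de Broglie relation lambda = h/p.  Plugging in a made-up but reasonable human (70 kg at walking pace, my own figures) against the post's 570 nm photon:

```python
h = 6.626e-34            # Planck's constant, J*s
p = 70.0 * 1.0           # a 70 kg human at 1 m/s, kg*m/s (illustrative figures)
lam_human = h / p        # de Broglie: lambda = h / p
lam_photon = 570e-9      # yellow light, from the post

print(lam_human)               # ~9.5e-36 meters
print(lam_photon / lam_human)  # ~6e28: about 29 orders of magnitude shorter
```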

\n

By and large, a human experiences what seems like a single path through configuration space—the classical hallucination.

\n

This is not how Schrödinger's Cat works, but it is how a regular cat works.

\n

Just remember that this business of single paths through time is not fundamentally true.  It's merely a good approximation for modeling a sofa.  The classical hallucination breaks down completely by the time you get to the atomic level.  It can't handle quantum computers at all.  It would fail you even if you wanted a sufficiently precise prediction of a brick.  A billiard ball taking a single path through time is not how the universe really, really works—it is just what human beings have evolved to easily visualize, for the sake of throwing rocks.

\n

(PS:  I'm given to understand that the Feynman path integral may be more fundamental than the Schrödinger equation: that is, you can derive Schrödinger from Feynman.  But as far as I can tell from examining the equations, Feynman is still differentiating the amplitude distribution, and so reality doesn't yet break down into point amplitude flows between point configurations.  Some physicist please correct me if I'm wrong about this, because it is a matter on which I am quite curious.)

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"No Individual Particles\"

\n

Previous post: \"The Quantum Arena\"

" } }, { "_id": "tLvJqDJAE8ARcCAeY", "title": "Redistributing fairness", "pageUrl": "https://www.lesswrong.com/posts/tLvJqDJAE8ARcCAeY/redistributing-fairness", "postedAt": "2008-04-17T05:39:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "tLvJqDJAE8ARcCAeY", "html": "

From Kwame Anthony Appiah’s fascinating longer article on fairness in politics, via Greg Mankiw:

\n
\n

In the 1970s, the Nobel Prize-winning economist Thomas Schelling used to put some questions to his students at Harvard when he wanted to show how people’s ethical preferences on public policy can be turned around. Suppose, he said, that you were designing a tax code and wanted to provide a credit — a rebate, in effect — for couples with children. (I’m simplifying a bit.) In a progressive tax system such as ours, we try to ease the burden on the less well off, so it might make sense to adjust the child credit accordingly. Would it be fair, do you think, to give poor parents a bigger credit than rich parents? Schelling’s students were inclined to think so. If the credit was going to vary with income, it seemed fair to award struggling families the bigger tax break. It would certainly be unfair, they agreed, for richer families to get a bigger one.

\n

Then Schelling asked his students to think about things in a different way. Instead of giving families with children a credit, you’d impose a surcharge on couples with no children. Now then: Would it be fair to make the childless rich pay a bigger surcharge than the childless poor? Schelling’s students thought so.

\n

But — hang on a sec — a bonus for those who have a child amounts to a penalty for those who don’t have one. (Saying that those with children should be taxed less than the childless is another way of saying that the childless should be taxed more than those with children.) So when poor parents receive a smaller credit than rich ones, that is, in effect, the same as the childless poor paying a smaller surcharge than the childless rich. To many, the first deal sounds unfair and the second sounds fair — but they’re the very same tax scheme.

\n

That’s a little disturbing, isn’t it?

\n
\n
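The equivalence is easy to verify with a short check.  The numbers below are my own illustrative figures, not Schelling's; the point is only that the credit framing and the surcharge framing hand every household the identical tax bill.

```python
CREDIT = {'poor': 1_500, 'rich': 500}    # bigger credit for poor parents
BASE = {'poor': 10_000, 'rich': 50_000}  # hypothetical baseline taxes

# Framing 1: everyone owes the baseline; parents receive a credit.
framing1 = {(c, kids): BASE[c] - (CREDIT[c] if kids else 0)
            for c in BASE for kids in (True, False)}

# Framing 2: the parents' bill is the new baseline; the childless pay a
# surcharge on top of it.  Note the *childless poor* now pay the bigger
# surcharge (1500 vs 500), which Schelling's students judged unfair.
framing2 = {(c, kids): (BASE[c] - CREDIT[c]) + (0 if kids else CREDIT[c])
            for c in BASE for kids in (True, False)}

assert framing1 == framing2   # identical bills for every household
```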

Why do people respond this way? There’s no real paradox. The above questions seem to have elicited from the subjects a confusion of aims, in combination with a strong conceptually unpolished [IF rich THEN confiscate money] reflex.

\n

Assume (very) hypothetically that a bonus or penalty should be applied.  If it is applied as an incentive, it should apply to the rich and the poor equally, unless there is some reason to incentivise one economic class over the other (e.g. it is better for the rich to procreate because that helps redistribute wealth, so they get a greater bonus), or unless you think the poor will respond to smaller incentives because a given sum is a larger proportion of their income (in which case give the bigger bonus or penalty to the rich).  That redistribution of wealth is a great idea is no reason for it to be tangled up with this sort of incentive scheme.  If a bonus is to be given for the purpose of redistributing wealth to where it is needed (rather than as an incentive, though realising it might be one too), it should presumably go to the poorer.

\n

Confusion about the purpose of intervening leads to an overlooked problem with the conclusion that people are being inconsistent.  If a greater penalty is applied to the childless rich, this is not the same as giving the rich with babies a larger bonus.  They have a larger bonus relative to what they would otherwise have, but what they would otherwise have has been reduced by more than it has for the poor baby owners.  Thus it is not better than what the poor procreators receive.  It is a greater incentive, but irrelevant to the distribution of wealth between the filthy rich and the poor.  Similarly, giving a big bonus to poor babyholders is not the same as penalising the other poor, except in terms of incentives.

\n

The above problem matters because, where a bonus is paid, people assume either that it is for wealth redistribution or, out of habit, that wealth redistribution should be built into the incentive.  Where there is a penalty, it is assumed to be a disincentive.  If the penalty were for wealth redistribution, penalising the rich should not be counted as benefiting the other rich (relative penalisations within a class are only relevant as incentives).


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "eHeJxJZii6tqQupZL", "title": "The Quantum Arena", "pageUrl": "https://www.lesswrong.com/posts/eHeJxJZii6tqQupZL/the-quantum-arena", "postedAt": "2008-04-15T19:00:15.000Z", "baseScore": 38, "voteCount": 28, "commentCount": 72, "url": null, "contents": { "documentId": "eHeJxJZii6tqQupZL", "html": "

Previously in series: Classical Configuration Spaces

\n

Yesterday, we looked at configuration spaces in classical physics.  In classical physics, configuration spaces are a useful, but optional, point of view.

\n

Today we look at quantum physics, which inherently takes place inside a configuration space, and cannot be taken out.

\n

\n

\"Ampl1\"For a start, as you might guess, in quantum physics we deal with distributions of complex amplitudes, rather than probability distributions made up of positive real numbers.  At left, I've used up 3 dimensions drawing a complex distribution over the position of one particle, A.

\n

You may recall that yesterday, 3 dimensions let us display the position of two 1-dimensional particles plus the system evolution over time.  Today, it's taking us 3 dimensions just to visualize an amplitude distribution over the position of one 1-dimensional particle at a single moment in time.  Which is why we did classical configuration spaces first.

\n

\"Ampl2\" To clarify the meaning of the above diagram, the left-to-right direction is the position of A.

\n

The up-and-down direction, and the invisible third dimension that leaps out of the paper, are devoted to the complex amplitudes.  Since a complex amplitude has a real and imaginary part, they use up 2 of our 3 dimensions.

\n

Richard Feynman said to just imagine the complex amplitudes as little 2-dimensional arrows.  This is as good a representation as any; little 2D arrows behave just the same way complex numbers do.  (You add little arrows by starting at the origin, and moving along each arrow in sequence.  You multiply little arrows by adding the angles and multiplying the lengths.  This is isomorphic to the complex field.)  So we can think of each position of the A particle as having a little arrow associated to it.
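You can check the isomorphism directly, since complex numbers in any programming language behave exactly like the little arrows.  A minimal sketch:

```python
import cmath

u = cmath.rect(1.0, cmath.pi / 6)   # arrow: length 1.0, angle 30 degrees
v = cmath.rect(0.5, cmath.pi / 3)   # arrow: length 0.5, angle 60 degrees

tip_to_tail = u + v                 # adding arrows = complex addition
product = u * v                     # lengths multiply, angles add
print(abs(product), cmath.phase(product))  # 0.5 and pi/2 (30 + 60 degrees)
```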

\n

As you can see, the position of A bulges in two places—a big bulge to the left, and a smaller bulge at right.  Way up at the level of classical observation, there would be a large probability (integrating over the squared modulus) of finding A somewhere to the left, and a smaller probability of finding it at the small bulge to the right.

\n

Drawing a neat little graph of the A+B system would involve having a complex amplitude for each joint position of the A and B particles, which you could visualize as a hypersurface in 4 dimensions.  I'd draw it for you, but I left my 4-dimensional pencil in the pocket of the 3rd leg of my other pants.

\n

\"Conf6_2\" You may recall from yesterday that a plaid rectangular probability distribution factorizes into the product of two independent probability distributions.

\n

This kind of independence-structure is one of several keys to recovering the illusion of individual particles from quantum amplitude distributions.   If the amplitude distribution roughly factorizes, has subsystems A and B with Amplitude(X,Y) ~ Amplitude(X) * Amplitude(Y), then X and Y will seem to evolve roughly independently of each other.

\n

But maintaining the illusion of individuality is harder in quantum configuration spaces, because of the identity of particles.  This identity cuts down the size of a 2-particle configuration space by 1/2, cuts down the size of a 3-particle configuration space by 1/6, and so on.  Here, the diminished configuration space is shown for the 2-particle case:

\n

 

\n

\"Ampl3_3\"

\n

The quantum configuration space is over joint possibilities like \"a particle here, a particle there\", not \"this particle here, that particle there\".  What would have been a neat little plaid pattern gets folded in on itself.

\n

You might think that you could recover the structure by figuring out which particle is \"really which\"—i.e. if you see a \"particle far forward, particle in middle\", you can guess that the first particle is A, and the second particle is B, because only A can be far forward; B just stays in the middle.  (This configuration would lie at the top of the original plaid pattern, the part that got folded over.)

\n

The problem with this is the little triangular region, where the folded plaid intersects itself.  In this region, the folded-over amplitude distribution gets superposed, added together.  Which makes an experimental difference, because the squared modulus of the sum is not the sum of the squared moduli.

\n

In that little triangular region of quantum configuration space, there is simply no fact of the matter as to \"which particle is which\".  Actually, there never was any such fact; but there was an illusion of individuality, which in this case has broken down.

\n

But even that isn't the ultimate reason why you can't take quantum physics out of configuration space.

\n

In classical configuration spaces, you can take a single point in the configuration space, and the single point describes the entire state of a classical system.  So you can take a single point in classical configuration space, and ask how the corresponding system develops over time.  You can take a single point in classical configuration space, and ask, \"Where does this one point go?\"

\n

The development over time of quantum systems depends on things like the second derivative of the amplitude distribution.  Our laws of physics describe how amplitude distributions develop into new amplitude distributions.  They do not describe, even in principle, how one configuration develops into another configuration.

\n

(I pause to observe that physics books make it way, way, way too hard to figure out this extremely important fact.  You'd think they'd tell you up front, \"Hey, the evolution of a quantum system depends on stuff like the second derivative of the amplitude distribution, so you can't possibly break it down into the evolution of individual configurations.\"  When I first saw the Schrödinger Equation it confused the hell out of me, because I thought the equation was supposed to apply to single configurations.)

\n

If I've understood the laws of physics correctly, quantum mechanics still has an extremely important property of locality:  You can determine the instantaneous change in the amplitude of a single configuration using only the infinitesimal neighborhood.  If you forget that the space is continuous and think of it as a mesh of computer processors, each processor would only have to talk to its immediate neighbors to figure out what to do next.  You do have to talk to your neighbors—but only your next-door neighbors, no telephone calls across town.  (Technical term:  \"Markov neighborhood.\")

\n

Conway's Game of Life has the discrete version of this property; the future state of each cell depends only on its own state and the state of neighboring cells.

\n

The second derivative—Laplacian, actually—is not a point property.  But it is a local property, where knowing the immediate neighborhood tells you everything, regardless of what the rest of the distribution looks like.  Potential energy, which also plays a role in the evolution of the amplitude, can be computed at a single positional configuration (if I've understood correctly).

\n

There are mathematical transformations physicists use for their convenience, like viewing the system as an amplitude distribution over momenta rather than positions, which throw away this neighborhood structure (e.g. by making potential energy a non-locally-computable property).  Well, mathematical convenience is a fine thing.  But I strongly suspect that the physically real wavefunction has local dynamics.  This kind of locality seems like an extremely important property, a candidate for something hardwired into the nature of reality and the structure of causation.  Imposing locality is part of the jump from Newtonian mechanics to Special Relativity.

\n

The temporal behavior of each amplitude in configuration space depends only on the amplitude at neighboring points.  But you cannot figure out what happens to the amplitude of a point in quantum configuration space, by looking only at that one point.  The future amplitude depends on the present second derivative of the amplitude distribution.
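Here is a minimal discretized sketch of both points at once, in hypothetical natural units with hbar = m = 1: the update for each grid point reads only its next-door neighbors, through the finite-difference second derivative, and there is no rule that evolves a single configuration by itself.

```python
import numpy as np

n, dx, dt = 400, 0.1, 1e-4
x = np.arange(n) * dx
V = 0.5 * (x - 20.0) ** 2                          # a potential, local in x
psi = np.exp(-(x - 15.0) ** 2) * np.exp(2j * x)    # initial blob of amplitude

def step(psi):
    # Discrete Laplacian: each point consults only its next-door neighbors.
    lap = (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2
    # Schrodinger equation, d(psi)/dt = (i/2) lap(psi) - i V psi:
    return psi + dt * (0.5j * lap - 1j * V * psi)

for _ in range(100):   # crude forward-Euler, fine for a sketch; a serious
    psi = step(psi)    # integrator would use a unitary scheme instead
```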

\n

So you can't say, as you can in classical physics, \"If I had infinite knowledge about the system, all the particles would be in one definite position, and then I could figure out the exact future state of the system.\"

\n

If you had a point mass of amplitude, an infinitely sharp spike in the quantum arena, the amplitude distribution would not be twice differentiable and the future evolution of the system would be undefined.  The known laws of physics would crumple up like tinfoil.  Individual configurations don't have quantum dynamics; amplitude distributions do.

\n

A point mass of amplitude, concentrated into a single exact position in configuration space, does not correspond to a precisely known state of the universe.  It is physical nonsense.

\n

It's like asking, in Conway's Game of Life:  \"What is the future state of this one cell, regardless of the cells around it?\"  The immediate future of the cell depends on its immediate neighbors; its distant future may depend on distant neighbors.

\n

Imagine trying to say, in a classical universe, \"Well, we've got this probability distribution over this classical configuration space... but to find out where the system evolves, where the probability flows from each point, we've got to twice differentiate the probability distribution to figure out the dynamics.\"

\n

In classical physics, the position of a particle is a separate fact from its momentum.  You can know exactly where a particle is, but not know exactly how fast it is moving.

\n

In Conway's Game of Life, the velocity of a glider is not a separate, additional fact about the board.  Cells are only \"alive\" or \"dead\", and the apparent motion of a glider arises from a configuration that repeats itself as the cell rules are applied.  If you know the life/death state of all the cells in a glider, you know the glider's velocity; they are not separate facts.

\n

In quantum physics, there's an amplitude distribution over a configuration space of particle positions.  Quantum dynamics specify how that amplitude distribution evolves over time.  Maybe you start with a blob of amplitude centered over position X, and then a time T later, the amplitude distribution has evolved to have a similarly-shaped blob of amplitude at position X+D.  Way up at the level of human researchers, this looks like a particle with velocity D/T.  But at the quantum level this behavior arises purely out of the amplitude distribution over positions, and the laws for how amplitude distributions evolve over time.

\n

In quantum physics, if you know the exact current amplitude distribution over particle positions, you know the exact future behavior of the amplitude distribution.  Ergo, you know how blobs of amplitude appear to propagate through the configuration space.  Ergo, you know how fast the \"particles\" are \"moving\".  Full knowledge of the amplitude distribution over positions implies full knowledge of momenta.
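A quick numerical sketch of that last claim, with an invented wave packet: give a blob of position-amplitude a phase twist of k0 per unit distance, and the momentum distribution, which is just the squared Fourier transform of the position amplitudes, comes out centered on k0.

```python
import numpy as np

n, dx, k0 = 1024, 0.05, 3.0
x = (np.arange(n) - n // 2) * dx
psi = np.exp(-x**2) * np.exp(1j * k0 * x)   # blob with a phase twist of k0

k = 2 * np.pi * np.fft.fftfreq(n, d=dx)     # momentum grid for this lattice
phi = np.fft.fft(psi)                       # amplitude distribution over momenta
k_mean = np.sum(k * np.abs(phi) ** 2) / np.sum(np.abs(phi) ** 2)
print(k_mean)                               # ~3.0, read off psi(x) alone
```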

\n

Imagine trying to say, in a classical universe, \"I twice differentiate the probability distribution over these particles' positions, to physically determine how fast they're going.  So if I learned new information about where the particles were, they might end up moving at different speeds.  If I got very precise information about where the particles were, this would physically cause the particles to start moving very fast, because the second derivative of probability would be very large.\"  Doesn't sound all that sensible, does it?  Don't try to interpret this nonsense—it's not even analogously correct.  We'll look at the horribly misnamed \"Heisenberg Uncertainty Principle\" later.

\n

But that's why you can't take quantum physics out of configuration space.  Individual configurations don't have physics.  Amplitude distributions have physics.

\n

(Though you can regard the entire state of a quantum system—the whole amplitude distribution—as a single point in a space of infinite dimensionality:  \"Hilbert space.\"  But this is just a convenience of visualization.  You imagine it in N dimensions, then let N go to infinity.)

\n

 

\n

Part of The Quantum Physics Sequence

\n

Next post: \"Feynman Paths\"

\n

Previous post: \"Classical Configuration Spaces\"

" } }, { "_id": "KAHt3t7a6KH4kfX4L", "title": "Classical Configuration Spaces", "pageUrl": "https://www.lesswrong.com/posts/KAHt3t7a6KH4kfX4L/classical-configuration-spaces", "postedAt": "2008-04-15T08:40:56.000Z", "baseScore": 43, "voteCount": 33, "commentCount": 11, "url": null, "contents": { "documentId": "KAHt3t7a6KH4kfX4L", "html": "

Previously in series: Distinct Configurations


  Once upon a time, there was a student who went to a math lecture.  When the lecture was over, he approached one of the other students, and said, \"I couldn't follow that at all.  The professor was talking about rotating 8-dimensional objects!  How am I supposed to visualize something rotating in 8 dimensions?\"
    \"Easy,\" replied the other student, \"you visualize it rotating in N dimensions, then let N go to 8.\"
            —old joke


Quantum configuration space isn't quite like classical configuration space. But in this case, considering that 8 dimensions is peanuts in quantum physics, even I concede that you ought to start with classical configuration space first.


(I apologize for the homemade diagrams, but this blog post already used up all available time...)


In classical physics, a configuration space is a way of visualizing the state of an entire system as a single point in a higher-dimensional space.


\"Conf1\" Suppose that a system is composed of two particles, A and B, each on the same 1-dimensional line.  (We'll call the two directions on the line, \"forward\" and \"back\".)


Then we can view the state of the complete system A+B as a single point in 2-dimensional space.


If you look at state 1, for example, it describes a state of the system where B is far forward and A is far back.  We can view state 1 as being embodied either in two 1-dimensional positions (the representation on the right), or view it as one 2-dimensional position (the representation on the left).
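
As a concrete rendering of the two views (a purely illustrative sketch of mine, with arbitrary example positions):

```python
# Two 1-dimensional positions (the representation on the right)...
positions = {'A': -4.0, 'B': 3.0}

# ...or one 2-dimensional position (the representation on the left).
system_point = (positions['A'], positions['B'])
print(system_point)  # (-4.0, 3.0): the whole A+B system as a single point
```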


\"Conf2\" To help grasp the idea of viewing a system as a point, this alternate graph shows A and B on the same line.


When A and B are far apart, they both move toward each other. However, B moves slower than A.  Also, B wants to be closer to A than A wants to be close to B, so as B gets too close, A runs away...


(At least that's what I had in mind while trying to draw the system evolution.)


The system evolution can be shown as a discrete series of states:  Time=1, Time=2, Time=3...  But in configuration space, I can draw the system evolution as a smooth trajectory.


\"Conf3\" If I had the time (to learn to use the appropriate software), I'd be drawing neat-o 3D diagrams at this point.  Like the diagram at right, only with, like, actual graphics.


You may have previously heard the phrase, \"time is the 4th dimension\".  But the diagram at right shows the evolution over time of a 1-dimensional universe with two particles.  So time is the third dimension, the first dimension being the position of particle A, and the second dimension being the position of particle B.


All these neat pictures are simplified, even relative to classical physics.


In classical physics, each particle has a 3-dimensional position and a 3-dimensional velocity.  So to specify the complete state of a 7-particle system would require 42 real numbers, which you could view as one point in 42-dimensional space.


Hence the joke.


Configuration spaces get very high-dimensional, very fast.  That's why we're sticking with 2 particles in a 1-dimensional universe.  Anything more than that, I can't draw on paper—you've just got to be able to visualize it in N dimensions.


So far as classical physics is concerned, it's a matter of taste whether you would want to imagine a system state as a point in configuration space, or imagine the individual particles. Mathematically, the two representations are isomorphic—in classical physics, that is.  So what's the benefit of imagining a classical configuration space?


\"Conf4\" Well, for one thing, it makes it possible to visualize joint probability distributions.


The grey area in the diagram represents a probability distribution over potential states of the A+B system.


If this is my state of knowledge, I think the system is somewhere in the region represented by the grey area.  I believe that if I knew the actual states of both A and B, and visualized the A+B system as a point, the point would be inside the grey.


Three sample possibilities within the probability distribution are shown, along with the corresponding systems.


And really the probability distribution should be lighter or darker, corresponding to volumes of decreased or increased probability density.  It's a probability distribution, not a possibility distribution.  I didn't make an effort to represent this in the diagram—I probably should have—but you can imagine it if you like.  Or pretend that the slightly darker region in the upper left is a volume of increased probability density, rather than a fluke of penciling.


Once you've hit on the idea of using a bounded volume in configuration space to represent possibility, or a cloud with lighter and darker parts to represent probability, you can ask how your knowledge about a system develops over time.  If you know how each system state (point in configuration space) develops dynamically into a future system state, and you draw a little cloud representing your current probability distribution, you can project that cloud into the future.


\"Conf5\" Here I start out with uncertainty represented by the squarish grey box in the first configuration space, at bottom right.


All the points in the first grey box correspond to system states that dynamically develop, over time, into new system states, corresponding to points in the grey rectangle in the second configuration space at middle right.


Then, my little rectangle of uncertainty develops over time into a wiggly figure, three major possibility-nodes connected by thin strings of probability density, as shown at top right.


In this figure I also tried to represent the idea of conserved probability volume—the same total volume of possibility, with points evolving to other points with the same local density, at each successive time.  This is Liouville's Theorem, which is the key to the Second Law of Thermodynamics, as I have previously described.
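
If you want to poke at that volume conservation yourself, here is a toy sketch (mine, not the post's; the shear map below is an arbitrary stand-in for real dynamics): a volume-preserving map spreads a compact cloud into a long thin one while its local density, and hence its total volume, stays fixed.

```python
import numpy as np

# One time step: a shear in A followed by a shear in B. Each shear has
# Jacobian determinant 1, so their composition conserves volume.
def step(points):
    a, b = points[:, 0], points[:, 1]
    a2 = a + 1.5 * b
    b2 = b + 0.9 * a2
    return np.column_stack([a2, b2])

J = np.array([[1.0, 1.5],
              [0.9, 1.0 + 0.9 * 1.5]])  # Jacobian of one step
print(np.linalg.det(J))                 # 1.0: local probability volume conserved

rng = np.random.default_rng(0)
cloud = rng.uniform(-1.0, 1.0, size=(1000, 2))  # squarish box of uncertainty
for _ in range(10):
    cloud = step(cloud)
print(cloud.min(axis=0), cloud.max(axis=0))
# The bounding box you would draw around the cloud grows enormously,
# even though the underlying volume never does: entropy increase in miniature.
```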


Neat little compact volumes of uncertainty develop over time, under the laws of physics, into big wiggly volumes of uncertainty.  If you want to describe the new volumes of uncertainty compactly, in less than a gazillion gigabytes, you've got to draw larger boundaries around them.  Once you draw the new larger boundary, your uncertainty never shrinks, because probability flow is conservative.  So entropy always increases.  That's the second law of thermodynamics.


Just figured I'd mention that, as long as I was drawing diagrams... you can see why this \"visualize a configuration space\" trick is useful, even in classical physics.


\"Conf6\" Another idea that's easier to visualize in configuration space is the idea of conditional independence between two probabilistic variables.


Independence happens when the joint probability distribution is the product of the individual probability distributions:


P(A,B) = P(A) × P(B)


The vast majority of possible probability distributions are not independent, the same way that the vast majority of shapes are not rectangular.  Actually, this is oversimplifying:  It's not enough for the volume of possibilities to be rectangular.  The probability density has to factorize into a product of probability densities on each side.
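
Concretely (an illustrative sketch with made-up numbers, not anything from the post): on a discretized configuration space, the factorization test is just a comparison with the outer product of the marginals.

```python
import numpy as np

# A joint distribution over a 3x3 grid of (A-position, B-position) cells.
joint = np.array([[0.10, 0.15, 0.25],
                  [0.06, 0.09, 0.15],
                  [0.04, 0.06, 0.10]])

p_a = joint.sum(axis=1)  # marginal distribution over A's position
p_b = joint.sum(axis=0)  # marginal distribution over B's position

# Independent exactly when the joint equals the product of marginals --
# the N-dimensional plaid case.
print(np.allclose(joint, np.outer(p_a, p_b)))  # True for this (plaid) example
```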


The vast majority of shapes are not rectangles; the vast majority of color patterns are not plaid.  It's independence, not dependence, that is the unusual special case.


(I bet when you woke up this morning, you didn't think that today you would be visualizing plaid patterns in N dimensions.)


\"Conf4_2\" In the figure reprised here at right, my little cloud of uncertainty is not rectangular.


Hence, my uncertainty about A and my uncertainty about B are not independent.


If you tell me A is far forward, I will conclude B is far back.  If you tell me A is in the middle of its 1-dimensional universe, I will conclude that B is likewise in the middle.


If I tell you A is far back, what do you conclude about B?


Aaaand that's classical configuration space, folks.  It doesn't add anything mathematically to classical physics, but it can help human beings visualize system dynamics and probability densities.  It seemed worth filtering into a separate post, because configuration space is a modular concept, useful for other ideas.


Quantum physics inherently takes place in a configuration space.  You can't take it out.  Tomorrow, we'll see why.


 


Part of The Quantum Physics Sequence


Next post: \"The Quantum Arena\"


Previous post: \"Can You Prove Two Particles Are Identical?\"

" } }, { "_id": "Bp8vnEciPA5TXSy6f", "title": "Can You Prove Two Particles Are Identical?", "pageUrl": "https://www.lesswrong.com/posts/Bp8vnEciPA5TXSy6f/can-you-prove-two-particles-are-identical", "postedAt": "2008-04-14T07:06:34.000Z", "baseScore": 63, "voteCount": 52, "commentCount": 105, "url": null, "contents": { "documentId": "Bp8vnEciPA5TXSy6f", "html": "

This post is part of the Quantum Physics Sequence.
Followup to: Where Philosophy Meets Science, Joint Configurations


Behold, I present you with two electrons.  They have the same mass. They have the same charge.  In every way that we've tested them so far, they seem to behave the same way.


But is there any way we can know that the two electrons are really, truly, entirely indistinguishable?


The one who is wise in philosophy but not in physics will snort dismissal, saying, \"Of course not.  You haven't found an experiment yet that distinguishes these two electrons.  But who knows, you might find a new experiment tomorrow that does.\"


Just because your current model of reality files all observed electrons in the same mental bucket, doesn't mean that tomorrow's physics will do the same.  That's mixing up the map with the territory.  Right?


It took a while to discover atomic isotopes.  Maybe someday we'll discover electron isotopes whose masses are different in the 20th decimal place.  In fact, for all we know, the electron has a tiny little tag on it, too small for your current microscopes to see, reading 'This is electron #7,234,982,023,348...'  So that you could in principle toss this one electron into a bathtub full of electrons, and then fish it out again later.  Maybe there's some way to know in principle, maybe not—but for now, surely, this is one of those things that science just doesn't know.


That's what you would think, if you were wise in philosophy but not in physics.


But what kind of universe could you possibly live in, where a simple experiment can tell you whether it's possible in principle to tell two things apart?


Maybe aliens gave you a tiny little device with two tiny little boxes, and a tiny little light that goes on when you put two identical things into the boxes?


But how do you know that's what the device really does?  Maybe the device was just built with measuring instruments that go to the 10th decimal place but not any further.


Imagine that we take this problem to an analytic philosopher named Bob, and Bob says:


\"Well, for one thing, you can't even get absolute proof that the two particles actually exist, as opposed to being some kind of hallucination created in you by the Dark Lords of the Matrix.  We call it 'the problem of induction'.\"


Yes, we've heard of the problem of induction.  Though the Sun has risen on billions of successive mornings, we can't know with absolute certainty that, tomorrow, the Sun will not transform into a giant chocolate cake.  But for the Sun to transform to chocolate cake requires more than an unanticipated discovery in physics.  It requires the observed universe to be a lie.  Can any experiment give us an equally strong level of assurance that two particles are identical?


\"Well, I Am Not A Physicist,\" says Bob, \"but obviously, the answer is no.\"


Why?


\"I already told you why:  No matter how many experiments show that two particles are similar, tomorrow you might discover an experiment that distinguishes between them.\"


Oh, but Bob, now you're just taking your conclusion as a premise.  What you said is exactly what we want to know.  Is there some achievable state of evidence, some sequence of discoveries, from within which you can legitimately expect never to discover a future experiment that distinguishes between two particles?


\"I don't believe my logic is circular.  But, since you challenge me, I'll formalize the reasoning.


\"Suppose there are particles {P1, P2, ...} and a suite of experimental tests {E1, E2, ...}  Each of these experimental tests, according to our best current model of the world, has a causal dependency on aspects {A1, A2...} of the particles P, where an aspect might be something like 'mass' or 'electric charge'.


\"Now these experimental tests can establish very reliably—to the limit of our belief that the universe is not outright lying to us—that the depended-on aspects of the particles are similar, up to some limit of measurable precision.


\"But we can always imagine an additional aspect A0 that is not depended-on by any of our experimental measures. Perhaps even an epiphenomenal aspect.  Some philosophers will argue over whether an epiphenomenal aspect can be truly real, but just because we can't legitimately know about something's existence doesn't mean it's not there.  Alternatively, we can always imagine an experimental difference in any quantitative aspect, such as mass, that is too small to detect, but real.


\"These extra properties or marginally different properties are conceivable, therefore logically possible. This shows you need additional information, not present in the experiments, to definitely conclude the particles are identical.\"


That's an interesting argument, Bob, but you say you haven't studied physics.


\"No, not really.\"


Maybe you shouldn't be doing all this philosophical analysis before you've studied physics.  Maybe you should beg off the question, and let a philosopher who's studied physics take over.


\"Would you care to point out a particular flaw in my logic?\"


Oh... not at the moment.  We're just saying, You Are Not A Physicist.  Maybe you shouldn't be so glib when it comes to saying what physicists can or can't know.


\"They can't know two particles are perfectly identical.  It is not possible to imagine an experiment that proves two particles are perfectly identical.\"


Impossible to imagine?  You don't know that.  You just know you haven't imagined such an experiment yet.  But perhaps that simply demonstrates a limit on your imagination, rather than demonstrating a limit on physical possibility.  Maybe if you knew a little more physics, you would be able to conceive of such an experiment?


\"I'm sorry, this isn't a question of physics, it's a question of epistemology.  To believe that all aspects of two particles are perfectly identical, requires a different sort of assurance than any experimental test can provide.  Experimental tests only fail to establish a difference; they do not prove identity. What particular physics experiments you can do, is a physics question, and I don't claim to know that.  But what experiments can justify believing is an epistemological question, and I am a professional philosopher; I expect to understand that question better than any physicist who hasn't studied formal epistemology.\"


And of course, Bob is wrong.


Bob isn't being stupid.  He'd be right in any classical universe.  But we don't live in a classical universe.


Our ability to perform an experiment that tells us positively that two particles are entirely identical, goes right to the heart of what distinguishes the quantum from the classical; the core of what separates the way reality actually works, from anything any pre-20th-century human ever imagined about how reality might work.


If you have a particle P1 and a particle P2, and it's possible in the experiment for both P1 and P2 to end up in either of two possible locations L1 or L2, then the observed distribution of results will depend on whether \"P1 at L1, P2 at L2\" and \"P1 at L2, P2 at L1\" is the same configuration, or two distinct configurations.  If they're the same configuration, we add up the amplitudes flowing in, then take the squared modulus.  If they're different configurations, we keep the amplitudes separate, take the squared moduli separately, then add the resulting probabilities.  As (1 + 1)² ≠ (1² + 1²), it's not hard to distinguish the experimental results after a few trials.
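
In code, the two bookkeeping rules look like this (an illustrative snippet of mine, with unit amplitudes standing in for the actual flows):

```python
amp_1, amp_2 = 1 + 0j, 1 + 0j  # two amplitude flows into the same outcome

# Same configuration: add the amplitudes, then take the squared modulus.
same = abs(amp_1 + amp_2) ** 2                # (1 + 1)^2 = 4

# Distinct configurations: take squared moduli separately, then add.
distinct = abs(amp_1) ** 2 + abs(amp_2) ** 2  # 1^2 + 1^2 = 2

print(same, distinct)  # 4.0 2.0 -- experimentally distinguishable statistics
```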


(Yes, half-integer spin changes this picture slightly.  Which I'm not going into in this series of blog posts.  If all epistemological confusions are resolved, half-integer spin is a difficulty of mere mathematics, so the issue doesn't belong here.  Half-integer spin doesn't change the experimental testability of particle equivalences, or alter the fact that particles have no individual identities.)


And the flaw in Bob's logic?  It was a fundamental assumption that Bob couldn't even see, because he had no alternative concept for contrast.  Bob talked about particles P1 and P2 as if they were individually real and independently real.  This turns out to assume that which is to be proven.  In our universe, the individually and fundamentally real entities are configurations of multiple particles, and the amplitude flows between them.  Bob failed to imagine the sequence of experimental results which established to physicists that this was, in fact, how reality worked.


Bob failed to imagine the evidence which falsified his basic and invisibly assumed ontology—the discoveries that changed the whole nature of the game; from a world that was the sum of individual particles, to a world that was the sum of amplitude flows between multi-particle configurations.


And so Bob's careful philosophical reasoning ended up around as useful as Kant's conclusion that space, by its very nature, was flat.  Turned out, Kant was just reproducing an invisible assumption built into how his parietal cortex was modeling space.  Kant's imaginings were evidence only about his imagination—grist for cognitive science, not physics.


Be careful not to underestimate, through benefit of hindsight, how surprising it would seem, a priori, that you could perfectly identify two particles through experiment.  Be careful not to underestimate how entirely and perfectly reasonable Bob's analysis would have seemed, if you didn't have quantum assumptions to contrast to classical ones.


Experiments tell us things about the nature of reality which you just plain wouldn't expect from a priori reasoning.  Experiments falsify assumptions we can't even see. Experiments tell us how to do things that seem logically impossible. Experiments deliver surprises from blind spots we don't even know exist.


Bear this in mind, the next time you're wondering whether mere empirical science might have something totally unexpected to say about some impossible-seeming philosophical question.


 


Part of The Quantum Physics Sequence


Next post: \"Classical Configuration Spaces\"


Previous post: \"Where Philosophy Meets Science\"

" } }, { "_id": "Bh9cdfMjATrTdLrGH", "title": "Where Philosophy Meets Science", "pageUrl": "https://www.lesswrong.com/posts/Bh9cdfMjATrTdLrGH/where-philosophy-meets-science", "postedAt": "2008-04-12T21:21:33.000Z", "baseScore": 62, "voteCount": 49, "commentCount": 21, "url": null, "contents": { "documentId": "Bh9cdfMjATrTdLrGH", "html": "

Looking back on early quantum physics—not for purposes of admonishing the major figures, or to claim that we could have done better if we’d been born into that era, but in order to try and learn a moral, and do better next time—looking back on the dark ages of quantum physics, I say, I would nominate as the “most basic” error…

not that they tried to reverse course on the last three thousand years of science suggesting that mind was complex within physics rather than fundamental in physics. This is Science, and we do have revolutions here. Every now and then you’ve got to reverse a trend. The future is always absurd and never unlawful.

I would nominate, as the basic error not to repeat next time, that the early scientists forgot that they themselves were made out of particles.

I mean, I’m sure that most of them knew it in theory.

And yet they didn’t notice that putting a sensor to detect a passing electron, or even knowing about the electron’s history, was an example of “particles in different places.” So they didn’t notice that a quantum theory of distinct configurations already explained the experimental result, without any need to invoke consciousness.

In the ancestral environment, humans were often faced with the adaptively relevant task of predicting other humans. For which purpose you thought of your fellow humans as having thoughts, knowing things and feeling things, rather than thinking of them as being made up of particles. In fact, many hunter-gatherer tribes may not even have known that particles existed. It’s much more intuitive—it feels simpler—to think about someone “knowing” something, than to think about their brain’s particles occupying a different state. It’s easier to phrase your explanations in terms of what people know; it feels more natural; it leaps more readily to mind.

Just as, once upon a time, it was easier to imagine Thor throwing lightning bolts, than to imagine Maxwell’s Equations—even though Maxwell’s Equations can be described by a computer program vastly smaller than the program for an intelligent agent like Thor.

So the ancient physicists found it natural to think, “I know where the photon was… what difference could that make?” Not, “My brain’s particles’ current state correlates to the photon’s history… what difference could that make?”

And, similarly, because it felt easy and intuitive to model reality in terms of people knowing things, and the decomposition of knowing into brain states did not leap so readily to mind, it seemed like a simple theory to say that a configuration could have amplitude only “if you didn’t know better.”

To turn the dualistic quantum hypothesis into a formal theory—one that could be written out as a computer program, without human scientists deciding when an “observation” occurred—you would have to specify what it meant for an “observer” to “know” something, in terms your computer program could compute.

So is your theory of fundamental physics going to examine all the particles in a human brain, and decide when those particles “know” something, in order to compute the motions of particles? But then how do you compute the motion of the particles in the brain itself? Wouldn’t there be a potential infinite recursion?

But so long as the terms of the theory were being processed by human scientists, they just knew when an “observation” had occurred. You said an “observation” occurred whenever it had to occur in order for the experimental predictions to come out right—a subtle form of constant tweaking.

(Remember, the basics of quantum theory were formulated before Alan Turing said anything about Turing machines, and way before the concept of computation was popularly known. The distinction between an effective formal theory, and one that required human interpretation, was not as clear then as now. Easy to pinpoint the problems in hindsight; you shouldn’t learn the lesson that problems are usually this obvious in foresight.)

Looking back, it may seem like one meta-lesson to learn from history, is that philosophy really matters in science—it’s not just some adjunct of a separate academic field.

After all, the early quantum scientists were doing all the right experiments. It was their interpretation that was off. And the problems of interpretation were not the result of their getting the statistics wrong.

Looking back, it seems like the errors they made were errors in the kind of thinking that we would describe as, well, “philosophical.”

When we look back and ask, “How could the early quantum scientists have done better, even in principle?” it seems that the insights they needed were philosophical ones.

And yet it wasn’t professional philosophers who swooped in and solved the problem and cleared up the mystery and made everything normal again. It was, well, physicists.

Arguably, Leibniz was at least as foresightful about quantum physics, as Democritus was once thought to have been foresightful about atoms. But that is hindsight. It’s the result of looking at the solution, and thinking back, and saying, “Hey, Leibniz said something like that.”

Even where one philosopher gets it right in advance, it’s usually science that ends up telling us which philosopher is right—not the prior consensus of the philosophical community.

I think this has something fundamental to say about the nature of philosophy, and the interface between philosophy and science.

It was once said that every science begins as philosophy, but then grows up and leaves the philosophical womb, so that at any given time, “Philosophy” is what we haven’t turned into science yet.

I suggest that when we look at the history of quantum physics and say, “The insights they needed were philosophical insights,” what we are really seeing is that the insight they needed was of a form that is not yet taught in standardized academic classes, and not yet reduced to calculation.

Once upon a time, the notion of the scientific method—updating beliefs based on experimental evidence—was a philosophical notion. But it was not championed by professional philosophers. It was the real-world power of science that showed that scientific epistemology was good epistemology, not a prior consensus of philosophers.

Today, this philosophy of belief-updating is beginning to be reduced to calculation—statistics, Bayesian probability theory.

But back in Galileo’s era, it was solely vague verbal arguments that said you should try to produce numerical predictions of experimental results, rather than consulting the Bible or Aristotle.

At the frontier of science, and especially at the frontier of scientific chaos and scientific confusion, you find problems of thinking that are not taught in academic courses, and that have not been reduced to calculation. And this will seem like a domain of philosophy; it will seem that you must do philosophical thinking in order to sort out the confusion. But when history looks back, I’m afraid, it is usually not a professional philosopher who wins all the marbles—because it takes intimate involvement with the scientific domain in order to do the philosophical thinking. Even if, afterward, it all seems knowable a priori; and even if, afterward, some philosopher out there actually got it a priori; even so, it takes intimate involvement to see it in practice, and experimental results to tell the world which philosopher won.

I suggest that, like ethics, philosophy really is important, but it is only practiced effectively from within a science. Trying to do the philosophy of a frontier science, as a separate academic profession, is as much a mistake as trying to have separate ethicists. You end up with ethicists who speak mainly to other ethicists, and philosophers who speak mainly to other philosophers.

This is not to say that there is no place for professional philosophers in the world. Some problems are so chaotic that there is no established place for them at all in the halls of science. But those “professional philosophers” would be very, very wise to learn every scrap of relevant-seeming science that they can possibly get their hands on. They should not be surprised at the prospect that experiment, and not debate, will finally settle the argument. They should not flinch from running their own experiments, if they can possibly think of any.

That, I think, is the lesson of history.

" } }, { "_id": "KbeHkLNY5ETJ3TN3W", "title": "Distinct Configurations", "pageUrl": "https://www.lesswrong.com/posts/KbeHkLNY5ETJ3TN3W/distinct-configurations", "postedAt": "2008-04-12T04:42:36.000Z", "baseScore": 77, "voteCount": 55, "commentCount": 25, "url": null, "contents": { "documentId": "KbeHkLNY5ETJ3TN3W", "html": "

The experiment in the previous essay carried two key lessons:

First, we saw that because amplitude flows can cancel out, and because our magic measure of squared modulus is not linear, the identity of configurations is nailed down—you can’t reorganize configurations the way you can regroup possible worlds. Which configurations are the same, and which are distinct, has experimental consequences; it is an observable fact.

Second, we saw that configurations are about multiple particles. If there are two photons entering the apparatus, that doesn’t mean there are two initial configurations. Instead the initial configuration’s identity is “two photons coming in.” (Ideally, each configuration we talk about would include every particle in the experiment—including the particles making up the mirrors and detectors. And in the real universe, every configuration is about all the particles… everywhere.)

What makes for distinct configurations is not distinct particles. Each configuration is about every particle. What makes configurations distinct is particles occupying different positions—at least one particle in a different state.

To take one important demonstration…

Figure 1 is the same experiment as Figure 2 in Configurations and Amplitude, with one important change: Between A and C has been placed a sensitive thingy, S. The key attribute of S is that if a photon goes past S, then S ends up in a slightly different state.

Let’s say that the two possible states of S are Yes and No. The sensitive thingy S starts out in state No, and ends up in state Yes if a photon goes past.

Then the initial configuration is:

“photon heading toward A; and S in state No,(1+0i)

Next, the action of the half-silvered mirror at A. In the previous version of this experiment, without the sensitive thingy, the two resultant configurations were “A to B” with amplitude −i and “A to C” with amplitude −1. Now, though, a new element has been introduced into the system, and all configurations are about all particles, and so every configuration mentions the new element. So the amplitude flows from the initial configuration are to:

“photon from A to B; and S in state No,” (0−i)
“photon from A to C; and S in state Yes,” (−1+0i)

Next, the action of the full mirrors at B and C:

“photon from B to D; and S in state No,” (1+0i)
“photon from C to D; and S in state Yes,” (0−i)

And then the action of the half-mirror at D, on the amplitude flowing from both of the above configurations:

(1) “photon from D to E; and S in state No,” (0+i)
(2) “photon from D to F; and S in state No,” (1+0i)
(3) “photon from D to E; and S in state Yes,” (0−i)
(4) “photon from D to F; and S in state Yes,” (1+0i).

When we did this experiment without the sensitive thingy, the amplitude flows (1) and (3) of (0+i) and (0−i) to the “D to E” configuration canceled each other out. We were left with no amplitude for a photon going to Detector 1 (way up at the experimental level, we never observe a photon striking Detector 1).

But in this case, the two amplitude flows (1) and (3) are now to distinct configurations; at least one entity, S, is in a different state between (1) and (3). The amplitudes don’t cancel out.

When we wave our magical squared-modulus-ratio detector over the four final configurations, we find that the squared moduli of all are equal: 25% probability each. Way up at the level of the real world, we find that the photon has an equal chance of striking Detector 1 and Detector 2.
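
If you like to check this sort of bookkeeping mechanically, here is a sketch (my own illustration, not code from the post) that treats a configuration as the pair (where the photon is heading, state of S), using the mirror rules from the earlier essays:

```python
from collections import defaultdict

# Mirror rule: a right-angle deflection multiplies the amplitude by i,
# going straight multiplies it by 1.
def run(with_sensor):
    # Amplitudes arriving at the half-mirror D, tagged with S's state:
    b_branch = (1 + 0j, 'No')                            # went A -> B -> D
    c_branch = (0 - 1j, 'Yes' if with_sensor else 'No')  # went A -> C -> D, past S
    final = defaultdict(complex)
    amp, s = b_branch   # the B branch deflects toward E, goes straight toward F
    final[('E', s)] += amp * 1j
    final[('F', s)] += amp
    amp, s = c_branch   # the C branch deflects toward F, goes straight toward E
    final[('F', s)] += amp * 1j
    final[('E', s)] += amp
    total = sum(abs(a) ** 2 for a in final.values())
    return {c: round(abs(a) ** 2 / total, 2)
            for c, a in final.items() if abs(a) > 1e-9}

print(run(with_sensor=False))  # {('F', 'No'): 1.0} -- the flows to E cancel
print(run(with_sensor=True))   # four distinct configurations, 0.25 each
```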

All the above is true, even if we, the researchers, don’t care about the state of S. Unlike possible worlds, configurations cannot be regrouped on a whim. The laws of physics say the two configurations are distinct; it’s not a question of how we can most conveniently parse up the world.

All the above is true, even if we don’t bother to look at the state of S. The configurations (1) and (3) are distinct in physics, even if we don’t know the distinction.

All the above is true, even if we don’t know S exists. The configurations (1) and (3) are distinct whether or not we have distinct mental representations for the two possibilities.

All the above is true, even if we’re in space, and S transmits a new photon off toward the interstellar void in two distinct directions, depending on whether the photon of interest passed it or not. So that we couldn’t ever find out whether S had been in Yes or No. The state of S would be embodied in the photon transmitted off to nowhere. The lost photon can be an implied invisible, and the state of S pragmatically undetectable; but the configurations are still distinct.

(The main reason it wouldn’t work is if S were nudged, but S had an original spread in configuration space that was larger than the nudge. Then you couldn’t rely on the nudge to separate the amplitude distribution over configuration space into distinct lumps. In reality, all this takes place within a differentiable amplitude distribution over a continuous configuration space.)

Configurations are not belief states. Their distinctness is an objective fact with experimental consequences. The configurations are distinct even if no one knows the state of S; distinct even if no intelligent entity can ever find out. The configurations are distinct so long as at least one particle in the universe anywhere is in a different position. This is experimentally demonstrable.

Why am I emphasizing this? Because back in the dark ages when no one understood quantum physics…

Okay, so imagine that you’ve got no clue what’s really going on, and you try the experiment in Figure 2, and no photons show up at Detector 1. Cool.

You also discover that when you put a block between B and D, or a block between A and C, photons show up at Detector 1 and Detector 2 in equal proportions. But only one at a time—Detector 1 or Detector 2 goes off, not both simultaneously.

So, yes, it does seem to you like you’re dealing with a particle—the photon is only in one place at one time, every time you see it.

And yet there’s some kind of… mysterious phenomenon… that prevents the photon from showing up in Detector 1. And this mysterious phenomenon depends on the photon being able to go both ways. Even though the photon only shows up in one detector or the other, which shows, you would think, that the photon is only in one place at a time.

Which makes the whole pattern of the experiments seem pretty bizarre! After all, the photon either goes from A to C, or from A to B; one or the other. (Or so you would think, if you were instinctively trying to break reality down into individually real particles.) But when you block off one course or the other, as in Figure 3, you start getting different experimental results!

It’s like the photon wants to be allowed to go both ways, even though (you would think) it only goes one way or the other. And it can tell if you try to block it off, without actually going there—if it’d gone there, it would have run into the block, and not hit any detector at all.

It’s as if mere possibilities could have causal effects, in defiance of what the word “real” is usually thought to mean…

But it’s a bit early to jump to conclusions like that, when you don’t have a complete picture of what goes on inside the experiment.

So it occurs to you to put a sensor between A and C, like in Figure 4, so you can tell which way the photon really goes on each occasion.

And the mysterious phenomenon goes away.

I mean, now how crazy is that? What kind of paranoia does that inspire in some poor scientist?

Okay, so in the twenty-first century we realize that in order to “know” a photon’s history, the particles making up your brain have to be correlated with the photon’s history. If having a tiny little sensitive thingy S that correlates to the photon’s history is enough to distinguish the final configurations and prevent the amplitude flows from canceling, then an entire sensor with a digital display, never mind a human brain, will put septillions of particles in different positions and prevent the amplitude flows from canceling.

But if you hadn’t worked that out yet…

Then you would ponder the sensor having banished the Mysterious Phenomenon, and think:

The photon doesn’t just want to be physically free to go either way. It’s not a little wave going along an unblocked pathway, because then just having a physically unblocked pathway would be enough.

No… I’m not allowed to know which way the photon went.

The mysterious phenomenon… doesn’t want me looking at it too closely… while it’s doing its mysterious thing.

It’s not physical possibilities that have an effect on reality… only epistemic possibilities. If I know which way the photon went, it’s no longer plausible that it went the other way… which cuts off the mysterious phenomenon as effectively as putting a block between B and D.

I have to not observe which way the photon went, in order for it to always end up at Detector 2. It has to be reasonable that the photon could have gone to either B or C. What I can know is the determining factor, regardless of which physical paths I leave open or closed.

STOP THE PRESSES! MIND IS FUNDAMENTAL AFTER ALL! CONSCIOUS AWARENESS DETERMINES OUR EXPERIMENTAL RESULTS!

You can still read this kind of stuff. In physics textbooks. Even now, when a majority of theoretical physicists know better. Stop the presses. Please, stop the presses.

Hindsight is 20/20; and so it’s easy to say that, in hindsight, there were certain clues that this interpretation was not correct.

Like, if you put the sensor between A and C but don’t read it, the mysterious phenomenon still goes away, and the photon still sometimes ends up at Detector 1. (Oh, but you could have read it, and possibilities are real now…)

But it doesn’t even have to be a sensor, a scientific instrument that you built. A single particle that gets nudged far enough will dispel the interference. A photon radiating off to where you’ll never see it again can do the trick. Not much human involvement there. Not a whole lot of conscious awareness.

Maybe before you pull the dualist fire alarm on human brains being physically special, you should provide experimental proof that a rock can’t play the same role in dispelling the Mysterious Phenomenon as a human researcher?

But that’s hindsight, and it’s easy to call the shots in hindsight. Do you really think you could’ve done better than John von Neumann, if you’d been alive at the time? The point of this kind of retrospective analysis is to ask what kind of fully general clues you could have followed, and whether there are any similar clues you’re ignoring now on current mysteries.

Though it is a little embarrassing that even after the theory of amplitudes and configurations had been worked out—with the theory now giving the definite prediction that any nudged particle would do the trick—early scientists still didn’t get it.

But you see… it had been established as Common Wisdom that configurations were possibilities, it was epistemic possibility that mattered, amplitudes were a very strange sort of partial information, and conscious observation made quantumness go away. And that it was best to avoid thinking too hard about the whole business, so long as your experimental predictions came out right.

" } }, { "_id": "ybusFwDqiZgQa6NCq", "title": "Joint Configurations", "pageUrl": "https://www.lesswrong.com/posts/ybusFwDqiZgQa6NCq/joint-configurations", "postedAt": "2008-04-11T05:00:58.000Z", "baseScore": 79, "voteCount": 60, "commentCount": 40, "url": null, "contents": { "documentId": "ybusFwDqiZgQa6NCq", "html": "

The key to understanding configurations, and hence the key to understanding quantum mechanics, is realizing on a truly gut level that configurations are about more than one particle.

Continuing from the previous essay, Figure 1 shows an altered version of the experiment where we send in two photons toward D at the same time, from the sources B and C.

The starting configuration then is:

“a photon going from B to D,
and a photon going from C to D.”

Again, let’s say the starting configuration has amplitude (1+0i).

And remember, the rule of the half-silvered mirror (at D) is that a right-angle deflection multiplies by i, and a straight line multiplies by 1.

So the amplitude flows from the starting configuration, separately considering the four cases of deflection/non-deflection of each photon, are:

  1. The “B to D” photon is deflected and the “C to D” photon is deflected. This amplitude flows to the configuration “a photon going from D to E, and a photon going from D to F.” The amplitude flowing is (1+0i)×i×i=(−1+0i).
  2. The “B to D” photon is deflected and the “C to D” photon goes straight. This amplitude flows to the configuration “two photons going from D to E.” The amplitude flowing is (1+0i)×i×1=(0+i).
  3. The “B to D” photon goes straight and the “C to D” photon is deflected. This amplitude flows to the configuration “two photons going from D to F.” The amplitude flowing is (1+0i)×1×i=(0+i).
  4. The “B to D” photon goes straight and the “C to D” photon goes straight. This amplitude flows to the configuration “a photon going from D to F, and a photon going from D to E.” The amplitude flowing is (1+0i)×1×1=(1+0i).

Now—and this is a very important and fundamental idea in quantum mechanics—the amplitudes in cases 1 and 4 are flowing to the same configuration. Whether the B photon and C photon both go straight, or both are deflected, the resulting configuration is one photon going toward E and another photon going toward F.

So we add up the two incoming amplitude flows from case 1 and case 4, and get a total amplitude of (−1+0i)+(1+0i)=0.

When we wave our magic squared-modulus-ratio reader over the three final configurations, we’ll find that “two photons at Detector 1” and “two photons at Detector 2” have the same squared modulus, but “a photon at Detector 1 and a photon at Detector 2” has squared modulus zero.

Way up at the level of experiment, we never find Detector 1 and Detector 2 both going off. We’ll find Detector 1 going off twice, or Detector 2 going off twice, with equal frequency. (Assuming I’ve gotten the math and physics right. I didn’t actually perform the experiment.)
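
Here is the same tally in code (an illustrative sketch of mine; the deflect/straight routing is read off the four cases above): configurations that don’t track photon identity make the “one photon at each detector” amplitudes cancel.

```python
from collections import defaultdict
from itertools import product

# Half-silvered-mirror rule: deflection multiplies by i, straight by 1.
# Routing per the four cases: the B photon deflects toward E and goes
# straight toward F; the C photon deflects toward F, goes straight toward E.
final = defaultdict(complex)
for b_deflects, c_deflects in product([True, False], repeat=2):
    amp = (1 + 0j) * (1j if b_deflects else 1) * (1j if c_deflects else 1)
    dest_b = 'E' if b_deflects else 'F'
    dest_c = 'F' if c_deflects else 'E'
    config = tuple(sorted([dest_b, dest_c]))  # a photon here, a photon there
    final[config] += amp

total = sum(abs(a) ** 2 for a in final.values())
for config, amp in sorted(final.items()):
    print(config, abs(amp) ** 2 / total)
# ('E', 'E') 0.5, ('E', 'F') 0.0, ('F', 'F') 0.5: the detectors never
# both fire, because the case-1 and case-4 flows add to zero.
```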

The configuration’s identity is not, “the B photon going toward E and the C photon going toward F.” Then the resultant configurations in case 1 and case 4 would not be equal. Case 1 would be, “B photon to E, C photon to F” and case 4 would be “B photon to F, C photon to E.” These would be two distinguishable configurations, if configurations had photon-tracking structure.

So we would not add up the two amplitudes and cancel them out. We would keep the amplitudes in two separate configurations. The total amplitudes would have non-zero squared moduli. And when we ran the experiment, we would find (around half the time) that Detector 1 and Detector 2 each registered one photon. Which doesn’t happen, if my calculations are correct.

Configurations don’t keep track of where particles come from. A configuration’s identity is just, “a photon here, a photon there; an electron here, an electron there.” No matter how you get into that situation, so long as there are the same species of particles in the same places, it counts as the same configuration.

I say again that the question “What kind of information does the configuration’s structure incorporate?” has experimental consequences. You can deduce, from experiment, the way that reality itself must be treating configurations.

In a classical universe, there would be no experimental consequences. If the photon were like a little billiard ball that either went one way or the other, and the configurations were our beliefs about possible states the system could be in, and instead of amplitudes we had probabilities, it would not make a difference whether we tracked the origin of photons or threw the information away.

In a classical universe, I could assign a 25% probability to both photons going to E, a 25% probability of both photons going to F, a 25% probability of the B photon going to E and the C photon going to F, and a 25% probability of the B photon going to F and the C photon going to E. Or, since I personally don’t care which of the two latter cases occurred, I could decide to collapse the two possibilities into one possibility and add up their probabilities, and just say, “a 50% probability that each detector gets one photon.”

With probabilities, we can aggregate events as we like—draw our boundaries around sets of possible worlds as we please—and the numbers will still work out the same. The probability that one or the other of two mutually exclusive events occurs always equals the probability of the first event plus the probability of the second event.

But you can’t arbitrarily collapse configurations together, or split them apart, in your model, and get the same experimental predictions. Our magical tool tells us the ratios of squared moduli. When you add two complex numbers, the squared modulus of the sum is not the sum of the squared moduli of the parts:

SquaredModulus(C1+C2) ≠ SquaredModulus(C1)+SquaredModulus(C2)

E.g.

SquaredModulus((2+i)+(1−i)) = SquaredModulus(3+0i) = 3²+0² = 9
SquaredModulus(2+i) + SquaredModulus(1−i) = (2²+1²)+(1²+(−1)²) = (4+1)+(1+1) = 7

Or in the current experiment of discourse, we had flows of (−1+0i) and (1+0i) cancel out, adding up to 0, whose squared modulus is 0, where the squared modulus of the parts would have been 1 and 1.
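
Those computations, checked mechanically (an illustrative snippet):

```python
def sq_mod(c: complex) -> float:
    return abs(c) ** 2

print(sq_mod((2 + 1j) + (1 - 1j)))      # 9.0
print(sq_mod(2 + 1j) + sq_mod(1 - 1j))  # 7.0

# Colliding flows can cancel outright:
print(sq_mod((-1 + 0j) + (1 + 0j)))     # 0.0, though the parts each have 1.0
```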

If in place of SquaredModulus, our magical tool were some linear function—any function where F(X+Y)=F(X)+F(Y)—then all the quantumness would instantly vanish and be replaced by a classical physics. (A different classical physics, not the same illusion of classicality we hallucinate from inside the higher levels of organization in our own quantum world.)

If amplitudes were just probabilities, they couldn’t cancel out when flows collided. If configurations were just states of knowledge, you could reorganize them however you liked.

But the configurations are nailed in place, indivisible and unmergeable without changing the laws of physics.

And part of what is nailed is the way that configurations treat multiple particles. A configuration says, “a photon here, a photon there,” not “this photon here, that photon there.” “This photon here, that photon there” does not have a different identity from “that photon here, this photon there.”

The result, visible in today’s experiment, is that you can’t factorize the physics of our universe to be about particles with individual identities.

Part of the reason why humans have trouble coming to grips with perfectly normal quantum physics is that humans bizarrely keep trying to factor reality into a sum of individually real billiard balls.

Ha ha! Silly humans.

" } }, { "_id": "5vZD32EynD9n94dhr", "title": "Configurations and Amplitude", "pageUrl": "https://www.lesswrong.com/posts/5vZD32EynD9n94dhr/configurations-and-amplitude", "postedAt": "2008-04-11T03:14:21.281Z", "baseScore": 73, "voteCount": 62, "commentCount": 361, "url": null, "contents": { "documentId": "5vZD32EynD9n94dhr", "html": "

So the universe isn’t made of little billiard balls, and it isn’t made of crests and troughs in a pool of aether… Then what is the stuff that stuff is made of?

In Figure 1, we see, at A, a half-silvered mirror, and two photon detectors, Detector 1 and Detector 2.

Early scientists, when they ran experiments like this, became confused about what the results meant. They would send a photon toward the half-silvered mirror, and half the time they would see Detector 1 click, and the other half of the time they would see Detector 2 click.

The early scientists—you’re going to laugh at this—thought that the silver mirror deflected the photon half the time, and let it through half the time.

Ha, ha! As if the half-silvered mirror did different things on different occasions! I want you to let go of this idea, because if you cling to what early scientists thought, you will become extremely confused. The half-silvered mirror obeys the same rule every time.

If you were going to write a computer program that was this experiment— not a computer program that predicted the result of the experiment, but a computer program that resembled the underlying reality—it might look sort of like this:

At the start of the program (the start of the experiment, the start of time) there’s a certain mathematical entity, called a configuration. You can think of this configuration as corresponding to “there is one photon heading from the photon source toward the half-silvered mirror,” or just “a photon heading toward A.”

A configuration can store a single complex value—“complex” as in the complex numbers (a+bi), with i defined as √(−1). At the start of the program, there’s already a complex number stored in the configuration “a photon heading toward A.” The exact value doesn’t matter so long as it’s not zero. We’ll let the configuration “a photon heading toward A” have a value of (−1+0i).

All this is a fact within the territory, not a description of anyone’s knowledge. A configuration isn’t a proposition or a possible way the world could be. A configuration is a variable in the program—you can think of it as a kind of memory location whose index is “a photon heading toward A”—and it’s out there in the territory.

As the complex numbers that get assigned to configurations are not positive real numbers between 0 and 1, there is no danger of confusing them with probabilities. “A photon heading toward A” has complex value −1, which is hard to see as a degree of belief. The complex numbers are values within the program, again out there in the territory. We’ll call the complex numbers amplitudes.

There are two other configurations, which we’ll call “a photon going from A to Detector 1” and “a photon going from A to Detector 2.” These configurations don’t have a complex value yet; it gets assigned as the program runs.

We are going to calculate the amplitudes of “a photon going from A toward 1” and “a photon going from A toward 2” using the value of “a photon going toward A,” and the rule that describes the half-silvered mirror at A.

Roughly speaking, the half-silvered mirror rule is “multiply by 1 when the photon goes straight, and multiply by i when the photon turns at a right angle.” This is the universal rule that relates the amplitude of the configuration of “a photon going in,” to the amplitude that goes to the configurations of “a photon coming out straight” or “a photon being deflected.”[1]

So we pipe the amplitude of the configuration “a photon going toward A,” which is (−1+0i), into the half-silvered mirror at A, and this transmits an amplitude of (−1+0i)×i=(0−i) to “a photon going from A toward 1,” and also transmits an amplitude of (−1+0i)×1=(−1+0i) to “a photon going from A toward 2.”

In the Figure 1 experiment, these are all the configurations and all the transmitted amplitude we need to worry about, so we’re done. Or, if you want to think of “Detector 1 gets a photon” and “Detector 2 gets a photon” as separate configurations, they’d just inherit their values from “A to 1” and “A to 2” respectively. (Actually, the values inherited should be multiplied by another complex factor, corresponding to the distance from A to the detector; but we will ignore that for now, and suppose that all distances traveled in our experiments happen to correspond to a complex factor of 1.)

So the final program state is:

Configuration “a photon going toward A”: (−1+0i)
Configuration “a photon going from A toward 1”: (0−i)
Configuration “a photon going from A toward 2”: (−1+0i)

and optionally

Configuration “Detector 1 gets a photon”: (0−i)
Configuration “Detector 2 gets a photon”: (−1+0i).
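
A minimal sketch of such a program (my own rendering of the description above, with a Python dict standing in for the memory locations):

```python
# One memory slot per configuration; the index is the configuration's name.
amplitude = {'a photon heading toward A': -1 + 0j}  # any nonzero value works

# Half-silvered mirror rule at A: multiply by i for the right-angle
# deflection, by 1 for going straight through.
amplitude['a photon going from A toward 1'] = amplitude['a photon heading toward A'] * 1j
amplitude['a photon going from A toward 2'] = amplitude['a photon heading toward A'] * 1

for config, amp in amplitude.items():
    print(config, amp, abs(amp) ** 2)
# Both detector configurations end with squared modulus 1: equal click rates.
```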

This same result occurs—the same amplitudes stored in the same configurations—every time you run the program (every time you do the experiment).

Now, for complicated reasons that we aren’t going to go into here—considerations that belong on a higher level of organization than fundamental quantum mechanics, the same way that atoms are more complicated than quarks—there’s no simple measuring instrument that can directly tell us the exact amplitudes of each configuration. We can’t directly see the program state.

So how do physicists know what the amplitudes are?

We do have a magical measuring tool that can tell us the squared modulus of a configuration’s amplitude. If the original complex amplitude is (a+bi), we can get the positive real number (a²+b²). Think of the Pythagorean theorem: if you imagine the complex number as a little arrow stretching out from the origin on a two-dimensional plane, then the magic tool tells us the squared length of the little arrow, but it doesn’t tell us the direction the arrow is pointing.

To be more precise, the magic tool actually just tells us the ratios of the squared lengths of the amplitudes in some configurations. We don’t know how long the arrows are in an absolute sense, just how long they are relative to each other. But this turns out to be enough information to let us reconstruct the laws of physics—the rules of the program. And so I can talk about amplitudes, not just ratios of squared moduli.

When we wave the magic tool over “Detector 1 gets a photon” and “Detector 2 gets a photon,” we discover that these configurations have the same squared modulus—the lengths of the arrows are the same. Thus speaks the magic tool. By doing more complicated experiments (to be seen shortly), we can tell that the original complex numbers had a ratio of i to 1.

And what is this magical measuring tool?

Well, from the perspective of everyday life—way, way, way above the quantum level and a lot more complicated—the magical measuring tool is that we send some photons toward the half-silvered mirror, one at a time, and count up how many photons arrive at Detector 1 versus Detector 2 over a few thousand trials. The ratio of these values is the ratio of the squared moduli of the amplitudes. But the reason for this is not something we are going to consider yet. Walk before you run. It is not possible to understand what happens all the way up at the level of everyday life, before you understand what goes on in much simpler cases.

For today’s purposes, we have a magical squared-modulus-ratio reader. And the magic tool tells us that the little two-dimensional arrow for the configuration “Detector 1 gets a photon” has the same squared length as for “Detector 2 gets a photon.” That’s all.

You may wonder, “Given that the magic tool works this way, what motivates us to use quantum theory, instead of thinking that the half-silvered mirror reflects the photon around half the time?”

Well, that’s just begging to be confused—putting yourself into a historically realistic frame of mind like that and using everyday intuitions. Did I say anything about a little billiard ball going one way or the other and possibly bouncing off a mirror? That’s not how reality works. Reality is about complex amplitudes flowing between configurations, and the laws of the flow are stable.

But if you insist on seeing a more complicated situation that billiard-ball ways of thinking can’t handle, here’s a more complicated experiment.


In Figure 2, B and C are full mirrors, and A and D are half-mirrors. The line from D to E is dashed for reasons that will become apparent, but amplitude is flowing from D to E under exactly the same laws.

Now let’s apply the rules we learned before:

At the beginning of time “a photon heading toward A” has amplitude (1+0i).

We proceed to compute the amplitude for the configurations “a photon going from A to B” and “a photon going from A to C”:

“a photon going from A to B” = i × “a photon heading toward A” = (0−i)

Similarly,

“a photon going from A to C” = 1 × “a photon heading toward A” = (−1+0i)

The full mirrors behave (as one would expect) like half of a half-silvered mirror—a full mirror just bends things by right angles and multiplies them by i. (To state this slightly more precisely: For a full mirror, the amplitude that flows, from the configuration of a photon heading in, to the configuration of a photon heading out at a right angle, is multiplied by a factor of i.)

So:

“a photon going from B to D” = i × “a photon going from A to B” = (1+0i),
“a photon going from C to D” = i × “a photon going from A to C” = (0−i).

“B to D” and “C to D” are two different configurations—we don’t simply write “a photon at D”—because the photons are arriving at two different angles in these two different configurations. And what D does to a photon depends on the angle at which the photon arrives.

Again, the rule (speaking loosely) is that when a half-silvered mirror bends light at a right angle, the amplitude that flows from the photon-going-in configuration to the photon-going-out configuration, is the amplitude of the photon-going-in configuration multiplied by i. And when two configurations are related by a half-silvered mirror letting light straight through, the amplitude that flows from the photon-going-in configuration is multiplied by 1.

So:

From the configuration “a photon going from B to D,” with original amplitude (1 + 0i):

Amplitude of (1 + 0i) × i = (0 + i) flows to “a photon going from D to E.”
Amplitude of (1 + 0i) × 1 = (1 + 0i) flows to “a photon going from D to F.”

From the configuration “a photon going from C to D,” with original amplitude (0 − i):

Amplitude of (0 − i) × i = (1 + 0i) flows to “a photon going from D to F.”
Amplitude of (0 − i) × 1 = (0 − i) flows to “a photon going from D to E.”

Therefore:

The total amplitude flowing to “a photon going from D to E” is (0 + i) + (0 − i) = (0 + 0i) = 0, and the total amplitude flowing to “a photon going from D to F” is (1 + 0i) + (1 + 0i) = (2 + 0i).

(You may want to try working this out yourself on pen and paper if you lost track at any point.)

But the upshot, from that super-high-level “experimental” perspective that we think of as normal life, is that we see no photons detected at E. Every photon seems to end up at F. The ratio of squared moduli between “D to E” and “D to F” is 0 to 4. That’s why the line from D to E is dashed, in this figure.
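[Editor’s Note: For readers who want to check the Figure 2 bookkeeping by machine, here is a minimal Python sketch using Python’s built-in complex numbers. The mirror names and the multiply-by-i rules come straight from the text above; the code itself is an editorial illustration, not the author’s.]

    # Rules from the text: a right-angle bend multiplies the amplitude
    # by i; passing straight through multiplies it by 1.
    DEFLECT = 1j
    STRAIGHT = 1

    start = -1 + 0j              # "a photon heading toward A"

    a_to_b = DEFLECT * start     # (0 - i)
    a_to_c = STRAIGHT * start    # (-1 + 0i)

    b_to_d = DEFLECT * a_to_b    # full mirror B: (1 + 0i)
    c_to_d = DEFLECT * a_to_c    # full mirror C: (0 - i)

    # At half-mirror D, the two flows into each outgoing configuration add.
    d_to_e = DEFLECT * b_to_d + STRAIGHT * c_to_d   # (0 + i) + (0 - i) = 0
    d_to_f = STRAIGHT * b_to_d + DEFLECT * c_to_d   # (1 + 0i) + (1 + 0i)

    print(abs(d_to_e) ** 2, abs(d_to_f) ** 2)       # 0.0 and 4.0: ratio 0 to 4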

This is not something it is possible to explain by thinking of half-silvered mirrors deflecting little incoming billiard balls half the time. You’ve got to think in terms of amplitude flows.

If half-silvered mirrors deflected a little billiard ball half the time, in this setup, the little ball would end up at Detector 1 around half the time and Detector 2 around half the time. Which it doesn’t. So don’t think that.

You may say, “But wait a minute! I can think of another hypothesis that accounts for this result. What if, when a half-silvered mirror reflects a photon, it does something to the photon that ensures it doesn’t get reflected next time? And when it lets a photon go through straight, it does something to the photon so it gets reflected next time.”

Now really, there’s no need to go making the rules so complicated. Occam’s Razor, remember. Just stick with simple, normal amplitude flows between configurations.

But if you want another experiment that disproves your new alternative hypothesis, it’s Figure 3.

Here, we’ve left the whole experimental setup the same, and just put a little blocking object between B and D. This ensures that the amplitude of “a photon going from B to D” is 0.

Once you eliminate the amplitude contributions from that configuration, you end up with totals of (1 + 0i) in “a photon going from D to F,” and (0 − i) in “a photon going from D to E.”

The squared moduli of (1 + 0i) and (0 − i) are both 1, so the magic measuring tool should tell us that the ratio of squared moduli is 1. Way back up at the level where physicists exist, we should find that Detector 1 goes off half the time, and Detector 2 half the time.

The same thing happens if we put the block between C and D. The amplitudes are different, but the ratio of the squared moduli is still 1, so Detector 1 goes off half the time and Detector 2 goes off half the time.
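[Editor’s Note: The blocked variants are the same bookkeeping with one amplitude zeroed out before the flows are summed at D. A self-contained Python sketch, again as an editorial illustration:]

    DEFLECT, STRAIGHT = 1j, 1

    def detector_moduli(block_b_to_d=False, block_c_to_d=False):
        start = -1 + 0j
        b_to_d = DEFLECT * (DEFLECT * start)    # via B: (1 + 0i)
        c_to_d = DEFLECT * (STRAIGHT * start)   # via C: (0 - i)
        if block_b_to_d:
            b_to_d = 0   # the block zeroes this configuration's amplitude
        if block_c_to_d:
            c_to_d = 0
        d_to_e = DEFLECT * b_to_d + STRAIGHT * c_to_d
        d_to_f = STRAIGHT * b_to_d + DEFLECT * c_to_d
        return abs(d_to_e) ** 2, abs(d_to_f) ** 2

    print(detector_moduli())                    # (0.0, 4.0): every photon at F
    print(detector_moduli(block_b_to_d=True))   # (1.0, 1.0): 50/50 split
    print(detector_moduli(block_c_to_d=True))   # (1.0, 1.0): 50/50 split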

This cannot possibly happen with a little billiard ball that either does or doesn’t get reflected by the half-silvered mirrors.

Because complex numbers can have opposite directions, like 1 and −1, or i and −i, amplitude flows can cancel each other out. Amplitude flowing from configuration X into configuration Y can be canceled out by an equal and opposite amplitude flowing from configuration Z into configuration Y. In fact, that’s exactly what happens in this experiment.

In probability theory, when something can either happen one way or another, X or ¬X, then P(Z)=P(Z|X)P(X)+P(Z|¬X)P(¬X). And all probabilities are positive. So if you establish that the probability of Z happening given X is 1/2, and the probability of X happening is 1/3, then the total probability of Z happening is at least 1/6, no matter what goes on in the case of ¬X. There’s no such thing as negative probability, less-than-impossible credence, or (0 + i) credibility, so degrees of belief can’t cancel each other out like amplitudes do.
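[Editor’s Note: Written out with the essay’s numbers: P(Z) = P(Z|X)P(X) + P(Z|¬X)P(¬X) ≥ P(Z|X)P(X) = (1/2)(1/3) = 1/6, whatever happens in the ¬X branch, because every term is nonnegative. Amplitudes carry direction, so two flows into the same configuration can cancel exactly, as they just did above: (0 + i) + (0 − i) = 0.]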

Not to mention that probability is in the mind to begin with; and we are talking about the territory, the program-that-is-reality, not talking about human cognition or states of partial knowledge.

By the same token, configurations are not propositions, not statements, not ways the world could conceivably be. Configurations are not semantic constructs. Adjectives like probable do not apply to them; they are not beliefs or sentences or possible worlds. They are not true or false but simply real.

In the experiment of Figure 2, do not be tempted to think anything like: “The photon goes to either B or C, but it could have gone the other way, and this possibility interferes with its ability to go to E…”

It makes no sense to think of something that “could have happened but didn’t” exerting an effect on the world. We can imagine things that could have happened but didn’t—like thinking, “Gosh, that car almost hit me”—and our imagination can have an effect on our future behavior. But the event of imagination is a real event, that actually happens, and that is what has the effect. It’s your imagination of the unreal event—your very real imagination, implemented within a quite physical brain—that affects your behavior.

To think that the actual event of a car hitting you—this event which could have happened to you, but in fact didn’t—is directly exerting a causal effect on your behavior, is mixing up the map with the territory.

What affects the world is real. (If things can affect the world without being “real,” it’s hard to see what the word “real” means.) Configurations and amplitude flows are causes, and they have visible effects; they are real. Configurations are not possible worlds and amplitudes are not degrees of belief, any more than your chair is a possible world or the sky is a degree of belief.

So what is a configuration, then?

Well, you’ll be getting a clearer idea of that in later essays.

But to give you a quick idea of how the real picture differs from the simplified version we saw in this essay…

Our experimental setup only dealt with one moving particle, a single photon. Real configurations are about multiple particles. The next essay will deal with the case of more than one particle, and that should give you a much clearer idea of what a configuration is.

Each configuration we talked about should have described a joint position of all the particles in the mirrors and detectors, not just the position of one photon bopping around.

In fact, the really real configurations are over joint positions of all the particles in the universe, including the particles making up the experimenters. You can see why I’m saving the notion of experimental results for later essays.

In the real world, amplitude is a continuous distribution over a continuous space of configurations. This essay’s “configurations” were blocky and digital, and so were our “amplitude flows.” It was as if we were talking about a photon teleporting from one place to another.

If none of that made sense, don’t worry. It will be cleared up in later essays. Just wanted to give you some idea of where this was heading.


1. [Editor’s Note: Strictly speaking, a standard half-silvered mirror would yield a rule “multiply by −1 when the photon turns at a right angle,” not “multiply by i.” The basic scenario described by the author is not physically impossible, and its use does not affect the substantive argument. However, physics students may come away confused if they compare the discussion here to textbook discussions of Mach–Zehnder interferometers. We’ve left this idiosyncrasy in the text because it eliminates any need to specify which side of the mirror is half-silvered, simplifying the experiment.]

" } }, { "_id": "7FSwbFpDsca7uXpQ2", "title": "Quantum Explanations", "pageUrl": "https://www.lesswrong.com/posts/7FSwbFpDsca7uXpQ2/quantum-explanations", "postedAt": "2008-04-09T08:15:33.000Z", "baseScore": 102, "voteCount": 80, "commentCount": 61, "url": null, "contents": { "documentId": "7FSwbFpDsca7uXpQ2", "html": "

There’s a widespread belief that quantum mechanics is supposed to be confusing. This is not a good frame of mind for either a teacher or a student.

And I find that legendarily “confusing” subjects often are not really all that complicated as math, particularly if you just want a very basic—but still mathematical—grasp on what goes on down there.

I am not a physicist, and physicists famously hate it when non-professional-physicists talk about quantum mechanics. But I do have some experience with explaining mathy things that are allegedly “hard to understand.”

I wrote the Intuitive Explanation of Bayesian Reasoning because people were complaining that Bayes’s Theorem was “counterintuitive”—in fact it was famously counterintuitive—and this did not seem right. The equation just did not seem complicated enough to deserve the fearsome reputation it had. So I tried explaining it my way, and I did not manage to reach my original target of elementary school students, but I get frequent grateful emails from formerly confused folks ranging from reporters to outside academic college professors.

Besides, as a Bayesian, I don’t believe in phenomena that are inherently confusing. Confusion exists in our models of the world, not in the world itself. If a subject is widely known as confusing, not just difficult… you shouldn’t leave it at that. It doesn’t satisfice; it is not an okay place to be. Maybe you can fix the problem, maybe you can’t; but you shouldn’t be happy to leave students confused.

The first way in which my introduction is going to depart from the traditional, standard introduction to quantum mechanics, is that I am not going to tell you that quantum mechanics is supposed to be confusing.

I am not going to tell you that it’s okay for you to not understand quantum mechanics, because no one understands quantum mechanics, as Richard Feynman once claimed. There was a historical time when this was true, but we no longer live in that era.

I am not going to tell you: “You don’t understand quantum mechanics, you just get used to it.” (As von Neumann is reputed to have said; back in the dark decades when, in fact, no one did understand quantum mechanics.)

Explanations are supposed to make you less confused. If you feel like you don’t understand something, this indicates a problem—either with you, or your teacher—but at any rate a problem; and you should move to resolve the problem.

I am not going to tell you that quantum mechanics is weird, bizarre, confusing, or alien. Quantum mechanics is counterintuitive, but that is a problem with your intuitions, not a problem with quantum mechanics. Quantum mechanics was around for billions of years before the Sun coalesced from interstellar hydrogen. Quantum mechanics was here before you were, and if you have a problem with that, you are the one who needs to change. Quantum mechanics sure won’t. There are no surprising facts, only models that are surprised by facts; and if a model is surprised by the facts, it is no credit to that model.

It is always best to think of reality as perfectly normal. Since the beginning, not one unusual thing has ever happened.

The goal is to become completely at home in a quantum universe. Like a native. Because, in fact, that is where you live.

In the coming sequence on quantum mechanics, I am going to consistently speak as if quantum mechanics is perfectly normal; and when human intuitions depart from quantum mechanics, I am going to make fun of the intuitions for being weird and unusual. This may seem odd, but the point is to swing your mind around to a native quantum point of view.

Another thing: The traditional introduction to quantum mechanics closely follows the order in which quantum mechanics was discovered.

The traditional introduction starts by saying that matter sometimes behaves like little billiard balls bopping around, and sometimes behaves like crests and troughs moving through a pool of water. Then the traditional introduction gives some examples of matter acting like a little billiard ball, and some examples of it acting like an ocean wave.

Now, it happens to be a historical fact that, back when students of matter were working all this stuff out and had no clue about the true underlying math, those early scientists first thought that matter was like little billiard balls. And then that it was like waves in the ocean. And then that it was like billiard balls again. And then the early scientists got really confused, and stayed that way for several decades, until it was finally sorted out in the second half of the twentieth century.

Dragging a modern-day student through all this may be a historically realistic approach to the subject matter, but it also ensures the historically realistic outcome of total bewilderment. Talking to aspiring young physicists about “wave/particle duality” is like starting chemistry students on the Four Elements.

An electron is not a billiard ball, and it’s not a crest and trough moving through a pool of water. An electron is a mathematically different sort of entity, all the time and under all circumstances, and it has to be accepted on its own terms.

The universe is not wavering between using particles and waves, unable to make up its mind. It’s only human intuitions about quantum mechanics that swap back and forth. The intuitions we have for billiard balls, and the intuitions we have for crests and troughs in a pool of water, both look sort of like they’re applicable to electrons, at different times and under different circumstances. But the truth is that both intuitions simply aren’t applicable.

If you try to think of an electron as being like a billiard ball on some days, and like an ocean wave on other days, you will confuse the living daylights out of yourself.

Yet it’s your eyes that are wobbling and unstable, not the world.

Furthermore:

The order in which humanity discovered things is not necessarily the best order in which to teach them. First, humanity noticed that there were other animals running around. Then we cut them open and found that they were full of organs. Then we examined the organs carefully and found they were made of tissues. Then we looked at the tissues under a microscope and discovered cells, which are made of proteins and some other chemically synthesized stuff. Which are made of molecules, which are made of atoms, which are made of protons and neutrons and electrons, which are way simpler than entire animals but were discovered tens of thousands of years later.

Physics doesn’t start by talking about biology. So why should it start by talking about very high-level complicated phenomena, like, say, the observed results of experiments?

The ordinary way of teaching quantum mechanics keeps stressing the experimental results. Now I do understand why that sounds nice from a rationalist perspective. Believe me, I understand.

But it seems to me that the upshot is dragging in big complicated mathematical tools that you need to analyze real-world situations, before the student understands what fundamentally goes on in the simplest cases.

It’s like trying to teach programmers how to write concurrent multithreaded programs before they know how to add two variables together, because concurrent multithreaded programs are closer to everyday life. Being close to everyday life is not always a strong recommendation for what to teach first.

Maybe the monomaniacal focus on experimental observations made sense in the dark decades when no one understood what was fundamentally going on, and you couldn’t start there, and all your models were just mysterious maths that gave good experimental predictions… you can still find this view of quantum physics presented in many books… but maybe today it’s worth trying a different angle? The result of the standard approach is standard confusion.

The classical world is strictly implicit in the quantum world, but seeing from a classical perspective makes everything bigger and more complicated.

Everyday life is a higher level of organization, like molecules versus quarks—huge catalogue of molecules, six quarks. I think it is worth trying to teach from the perspective of the quantum world first, and talking about classical experimental results afterward.

I am not going to start with the normal classical world and then talk about a bizarre quantum backdrop hidden behind the scenes. The quantum world is the scene and it defines normality.

I am not going to talk as if the classical world is real life, and occasionally the classical world transmits a request for an experimental result to a quantum-physics server, and the quantum-physics server does some peculiar calculations and transmits back a classical experimental result. I am going to talk as if the quantum world is the really real and the classical world something far away. Not just because that makes it easier to be a native of a quantum universe, but because, at a core level, it’s the truth.

Finally, I am going to take a strictly realist perspective on quantum mechanics—the quantum world is really out there, our equations describe the territory and not our maps of it, and the classical world only exists implicitly within the quantum one. I am not going to discuss non-realist views in the early stages of my introduction, except to say why you should not be confused by certain intuitions that non-realists draw upon for support. I am not going to apologize for this, and I would like to ask any non-realists on the subject of quantum mechanics to wait and hold their comments until called for in a later essay. Do me this favor, please. I think non-realism is one of the main things that confuses prospective students, and prevents them from being able to concretely visualize quantum phenomena. I will discuss the issues explicitly in a future essay.

But everyone should be aware that, even though I’m not going to discuss the issue at first, there is a sizable community of scientists who dispute the realist perspective on quantum mechanics. Myself, I don’t think it’s worth figuring both ways; I’m a pure realist, for reasons that will become apparent. But if you read my introduction, you are getting my view. It is not only my view. It is probably the majority view among theoretical physicists, if that counts for anything (though I will argue the matter separately from opinion polls). Still, it is not the only view that exists in the modern physics community. I do not feel obliged to present the other views right away, but I feel obliged to warn my readers that there are other views, which I will not be presenting during the initial stages of the introduction.

To sum up, my goal will be to teach you to think like a native of a quantum universe, not a reluctant tourist.

Embrace reality. Hug it tight.

" } }, { "_id": "3XMwPNMSbaPm2suGz", "title": "Belief in the Implied Invisible", "pageUrl": "https://www.lesswrong.com/posts/3XMwPNMSbaPm2suGz/belief-in-the-implied-invisible", "postedAt": "2008-04-08T07:40:49.000Z", "baseScore": 67, "voteCount": 58, "commentCount": 34, "url": null, "contents": { "documentId": "3XMwPNMSbaPm2suGz", "html": "

One generalized lesson not to learn from the Anti-Zombie Argument is, \"Anything you can't see doesn't exist.\"


It's tempting to conclude the general rule.  It would make the Anti-Zombie Argument much simpler, on future occasions, if we could take this as a premise.  But unfortunately that's just not Bayesian.


Suppose I transmit a photon out toward infinity, not aimed at any stars, or any galaxies, pointing it toward one of the great voids between superclusters.  Based on standard physics, in other words, I don't expect this photon to intercept anything on its way out.  The photon is moving at light speed, so I can't chase after it and capture it again.


If the expansion of the universe is accelerating, as current cosmology holds, there will come a future point where I don't expect to be able to interact with the photon even in principle—a future time beyond which I don't expect the photon's future light cone to intercept my world-line.  Even if an alien species captured the photon and rushed back to tell us, they couldn't travel fast enough to make up for the accelerating expansion of the universe.


Should I believe that, in the moment where I can no longer interact with it even in principle, the photon disappears?


No.


It would violate Conservation of Energy.  And the second law of thermodynamics.  And just about every other law of physics.  And probably the Three Laws of Robotics.  It would imply the photon knows I care about it and knows exactly when to disappear.


It's a silly idea.


But if you can believe in the continued existence of photons that have become experimentally undetectable to you, why doesn't this imply a general license to believe in the invisible?


(If you want to think about this question on your own, do so before the jump...)


Though I failed to Google a source, I remember reading that when it was first proposed that the Milky Way was our galaxy—that the hazy river of light in the night sky was made up of millions (or even billions) of stars—that Occam's Razor was invoked against the new hypothesis.  Because, you see, the hypothesis vastly multiplied the number of \"entities\" in the believed universe.  Or maybe it was the suggestion that \"nebulae\"—those hazy patches seen through a telescope—might be galaxies full of stars, that got the invocation of Occam's Razor.


Lex parsimoniae:  Entia non sunt multiplicanda praeter necessitatem.


That was Occam's original formulation, the law of parsimony:  Entities should not be multiplied beyond necessity.


If you postulate billions of stars that no one has ever believed in before, you're multiplying entities, aren't you?


No.  There are two Bayesian formalizations of Occam's Razor:  Solomonoff Induction, and Minimum Message Length.  Neither penalizes galaxies for being big.


Which they had better not do!  One of the lessons of history is that what-we-call-reality keeps turning out to be bigger and bigger and huger yet.  Remember when the Earth was at the center of the universe?  Remember when no one had invented Avogadro's number?  If Occam's Razor was weighing against the multiplication of entities every time, we'd have to start doubting Occam's Razor, because it would have consistently turned out to be wrong.


In Solomonoff induction, the complexity of your model is the amount of code in the computer program you have to write to simulate your model.  The amount of code, not the amount of RAM it uses, or the number of cycles it takes to compute.  A model of the universe that contains billions of galaxies containing billions of stars, each star made of a billion trillion decillion quarks, will take a lot of RAM to run—but the code only has to describe the behavior of the quarks, and the stars and galaxies can be left to run themselves.  I am speaking semi-metaphorically here—there are things in the universe besides quarks—but the point is, postulating an extra billion galaxies doesn't count against the size of your code, if you've already described one galaxy.  It just takes a bit more RAM, and Occam's Razor doesn't care about RAM.


Why not?  The Minimum Message Length formalism, which is nearly equivalent to Solomonoff Induction, may make the principle clearer:  If you have to tell someone how your model of the universe works, you don't have to individually specify the location of each quark in each star in each galaxy.  You just have to write down some equations.  The amount of \"stuff\" that obeys the equation doesn't affect how long it takes to write the equation down.  If you encode the equation into a file, and the file is 100 bits long, then there are 2^100 other models that would be around the same file size, and you'll need roughly 100 bits of supporting evidence.  You've got a limited amount of probability mass; and a priori, you've got to divide that mass up among all the messages you could send; and so postulating a model from within a model space of 2^100 alternatives, means you've got to accept a 2^-100 prior probability penalty—but having more galaxies doesn't add to this.
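[Editor's Note: In symbols: a model that takes L bits to write down starts with a prior weight on the order of 2^-L, since the prior mass must be divided among the roughly 2^L messages of that length; a 100-bit model therefore starts at about 2^-100 and needs roughly 100 bits of evidence to earn its keep. The crucial point is that L counts the length of the equations, plus any initial conditions you must specify, and does not grow with the amount of stuff the equations govern.]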


Postulating billions of stars in billions of galaxies doesn't affect the length of your message describing the overall behavior of all those galaxies.  So you don't take a probability hit from having the same equations describing more things.  (So long as your model's predictive successes aren't sensitive to the exact initial conditions.  If you've got to specify the exact positions of all the quarks for your model to predict as well as it does, the extra quarks do count as a hit.)


If you suppose that the photon disappears when you are no longer looking at it, this is an additional law in your model of the universe.  It's the laws that are \"entities\", costly under the laws of parsimony.  Extra quarks are free.


So does it boil down to, \"I believe the photon goes on existing as it wings off to nowhere, because my priors say it's simpler for it to go on existing than to disappear\"?


This is what I thought at first, but on reflection, it's not quite right.  (And not just because it opens the door to obvious abuses.)


I would boil it down to a distinction between belief in the implied invisible, and belief in the additional invisible.


When you believe that the photon goes on existing as it wings out to infinity, you're not believing that as an additional fact.


What you believe (assign probability to) is a set of simple equations; you believe these equations describe the universe.  You believe these equations because they are the simplest equations you could find that describe the evidence.  These equations are highly experimentally testable; they explain huge mounds of evidence visible in the past, and predict the results of many observations in the future.


You believe these equations, and it is a logical implication of these equations that the photon goes on existing as it wings off to nowhere, so you believe that as well.


Your priors, or even your probabilities, don't directly talk about the photon.  What you assign probability to is not the photon, but the general laws.  When you assign probability to the laws of physics as we know them, you automatically contribute that same probability to the photon continuing to exist on its way to nowhere—if you believe the logical implications of what you believe.


It's not that you believe in the invisible as such, from reasoning about invisible things.  Rather the experimental evidence supports certain laws, and belief in those laws logically implies the existence of certain entities that you can't interact with.  This is belief in the implied invisible.


On the other hand, if you believe that the photon is eaten out of existence by the Flying Spaghetti Monster—maybe on this just one occasion—or even if you believed without reason that the photon hit a dust speck on its way out—then you would be believing in a specific extra invisible event, on its own.  If you thought that this sort of thing happened in general, you would believe in a specific extra invisible law.  This is belief in the additional invisible.


The whole matter would be a lot simpler, admittedly, if we could just rule out the existence of entities we can't interact with, once and for all—have the universe stop existing at the edge of our telescopes.  But this requires us to be very silly.


Saying that you shouldn't ever need a separate and additional belief about invisible things—that you only believe invisibles that are logical implications of general laws which are themselves testable, and even then, don't have any further beliefs about them that are not logical implications of visibly testable general rules—actually does seem to rule out all abuses of belief in the invisible, when applied correctly.


Perhaps I should say, \"you should assign unaltered prior probability to additional invisibles\", rather than saying, \"do not believe in them.\"  But if you think of a belief as something evidentially additional, something you bother to track, something where you bother to count up support for or against, then it's questionable whether we should ever have additional beliefs about additional invisibles.


There are exotic cases that break this in theory.  (E.g.:  The epiphenomenal demons are watching you, and will torture 3^^^3 victims for a year, somewhere you can't ever verify the event, if you ever say the word \"Niblick\".)  But I can't think of a case where the principle fails in human practice.


Added:  To make it clear why you would sometimes want to think about implied invisibles, suppose you're going to launch a spaceship, at nearly the speed of light, toward a faraway supercluster.  By the time the spaceship gets there and sets up a colony, the universe's expansion will have accelerated too much for them to ever send a message back.  Do you deem it worth the purely altruistic effort to set up this colony, for the sake of all the people who will live there and be happy?  Or do you think the spaceship blips out of existence before it gets there?  This could be a very real question at some point.

" } }, { "_id": "k6EPphHiBH4WWYFCj", "title": "GAZP vs. GLUT", "pageUrl": "https://www.lesswrong.com/posts/k6EPphHiBH4WWYFCj/gazp-vs-glut", "postedAt": "2008-04-07T01:51:56.000Z", "baseScore": 91, "voteCount": 70, "commentCount": 169, "url": null, "contents": { "documentId": "k6EPphHiBH4WWYFCj", "html": "

In \"The Unimagined Preposterousness of Zombies\", Daniel Dennett says:


To date, several philosophers have told me that they plan to accept my challenge to offer a non-question-begging defense of zombies, but the only one I have seen so far involves postulating a \"logically possible\" but fantastic being — a descendent of Ned Block's Giant Lookup Table fantasy...


A Giant Lookup Table, in programmer's parlance, is when you implement a function as a giant table of inputs and outputs, usually to save on runtime computation.  If my program needs to know the multiplicative product of two inputs between 1 and 100, I can write a multiplication algorithm that computes it each time the function is called, or I can precompute a Giant Lookup Table with 10,000 entries and two indices.  There are times when you do want to do this, though not for multiplication—times when you're going to reuse the function a lot and it doesn't have many possible inputs; or when clock cycles are cheap while you're initializing, but very expensive while executing.
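[Editor's Note: A minimal Python sketch of the two implementations just described, using the 1-to-100 range given in the text:]

    # Option 1: compute the product each time the function is called.
    def multiply(a: int, b: int) -> int:
        return a * b

    # Option 2: precompute a Giant Lookup Table with 10,000 entries,
    # indexed by the two inputs; no arithmetic happens at call time.
    GLUT = {(a, b): a * b
            for a in range(1, 101)
            for b in range(1, 101)}

    assert multiply(37, 41) == GLUT[(37, 41)] == 1517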


Giant Lookup Tables get very large, very fast.  A GLUT of all possible twenty-ply conversations with ten words per remark, using only 850-word Basic English, would require 7.6 * 10^585 entries.
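[Editor's Note: The arithmetic checks out; a few lines of Python confirm the figure:]

    import math

    # 20 remarks ('twenty-ply'), 10 words each, from an 850-word
    # vocabulary: 850**(10 * 20) possible conversations.
    log10_entries = 10 * 20 * math.log10(850)   # about 585.88
    mantissa = 10 ** (log10_entries % 1)        # about 7.6
    print(f'{mantissa:.1f} * 10^{int(log10_entries)}')   # 7.6 * 10^585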


Replacing a human brain with a Giant Lookup Table of all possible sense inputs and motor outputs (relative to some fine-grained digitization scheme) would require an unreasonably large amount of memory storage.  But \"in principle\", as philosophers are fond of saying, it could be done.


The GLUT is not a zombie in the classic sense, because it is microphysically dissimilar to a human.  (In fact, a GLUT can't really run on the same physics as a human; it's too large to fit in our universe.  For philosophical purposes, we shall ignore this and suppose a supply of unlimited memory storage.)


But is the GLUT a zombie at all?  That is, does it behave exactly like a human without being conscious?


The GLUT-ed body's tongue talks about consciousness.  Its fingers write philosophy papers.  In every way, so long as you don't peer inside the skull, the GLUT seems just like a human... which certainly seems like a valid example of a zombie: it behaves just like a human, but there's no one home.


Unless the GLUT is conscious, in which case it wouldn't be a valid example.


I can't recall ever seeing anyone claim that a GLUT is conscious.  (Admittedly my reading in this area is not up to professional grade; feel free to correct me.)  Even people who are accused of being (gasp!) functionalists don't claim that GLUTs can be conscious.


GLUTs are the reductio ad absurdum to anyone who suggests that consciousness is simply an input-output pattern, thereby disposing of all troublesome worries about what goes on inside.


So what does the Generalized Anti-Zombie Principle (GAZP) say about the Giant Lookup Table (GLUT)?


At first glance, it would seem that a GLUT is the very archetype of a Zombie Master—a distinct, additional, detectable, non-conscious system that animates a zombie and makes it talk about consciousness for different reasons.


In the interior of the GLUT, there's merely a very simple computer program that looks up inputs and retrieves outputs.  Even talking about a \"simple computer program\" is overshooting the mark, in a case like this.  A GLUT is more like ROM than a CPU.  We could equally well talk about a series of switched tracks by which some balls roll out of a previously stored stack and into a trough—period; that's all the GLUT does.


A spokesperson from People for the Ethical Treatment of Zombies replies:  \"Oh, that's what all the anti-mechanists say, isn't it?  That when you look in the brain, you just find a bunch of neurotransmitters opening ion channels?  If ion channels can be conscious, why not levers and balls rolling into bins?\"


\"The problem isn't the levers,\" replies the functionalist, \"the problem is that a GLUT has the wrong pattern of levers.  You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling...  Heck, you need the ability to write things to memory just so that time can pass for the computation.  Unless you think it's possible to program a conscious being in Haskell.\"


\"I don't know about that,\" says the PETZ spokesperson, \"all I know is that this so-called zombie writes philosophical papers about consciousness.  Where do these philosophy papers come from, if not from consciousness?\"


Good question!  Let us ponder it deeply.


There's a game in physics called Follow-The-Energy.  Richard Feynman's father played it with young Richard:


    It was the kind of thing my father would have talked about:  \"What makes it go?  Everything goes because the sun is shining.\"   And then we would have fun discussing it:
    \"No, the toy goes because the spring is wound up,\" I would say.  \"How did the spring get wound up?\" he would ask.
    \"I wound it up.\"
    \"And how did you get moving?\"
    \"From eating.\"
    \"And food grows only because the sun is shining.   So it's because the sun is shining that all these things are moving.\"   That would get the concept across that motion is simply the transformation of the sun's power.


When you get a little older, you learn that energy is conserved, never created or destroyed, so the notion of using up energy doesn't make much sense.  You can never change the total amount of energy, so in what sense are you using it?


So when physicists grow up, they learn to play a new game called Follow-The-Negentropy—which is really the same game they were playing all along; only the rules are mathier, the game is more useful, and the principles are harder to wrap your mind around conceptually.


Rationalists learn a game called Follow-The-Improbability, the grownup version of \"How Do You Know?\"  The rule of the rationalist's game is that every improbable-seeming belief needs an equivalent amount of evidence to justify it.  (This game has amazingly similar rules to Follow-The-Negentropy.)


Whenever someone violates the rules of the rationalist's game, you can find a place in their argument where a quantity of improbability appears from nowhere; and this is as much a sign of a problem as, oh, say, an ingenious design of linked wheels and gears that keeps itself running forever.


The one comes to you and says:  \"I believe with firm and abiding faith that there's an object in the asteroid belt, one foot across and composed entirely of chocolate cake; you can't prove that this is impossible.\"  But, unless the one had access to some kind of evidence for this belief, it would be highly improbable for a correct belief to form spontaneously.  So either the one can point to evidence, or the belief won't turn out to be true.  \"But you can't prove it's impossible for my mind to spontaneously generate a belief that happens to be correct!\"  No, but that kind of spontaneous generation is highly improbable, just like, oh, say, an egg unscrambling itself.


In Follow-The-Improbability, it's highly suspicious to even talk about a specific hypothesis without having had enough evidence to narrow down the space of possible hypotheses.  Why aren't you giving equal air time to a decillion other equally plausible hypotheses?  You need sufficient evidence to find the \"chocolate cake in the asteroid belt\" hypothesis in the hypothesis space—otherwise there's no reason to give it more air time than a trillion other candidates like \"There's a wooden dresser in the asteroid belt\" or \"The Flying Spaghetti Monster threw up on my sneakers.\"


In Follow-The-Improbability, you are not allowed to pull out big complicated specific hypotheses from thin air without already having a corresponding amount of evidence; because it's not realistic to suppose that you could spontaneously start discussing the true hypothesis by pure coincidence.


A philosopher says, \"This zombie's skull contains a Giant Lookup Table of all the inputs and outputs for some human's brain.\"  This is a very large improbability.  So you ask, \"How did this improbable event occur?  Where did the GLUT come from?\"


Now this is not standard philosophical procedure for thought experiments.  In standard philosophical procedure, you are allowed to postulate things like \"Suppose you were riding a beam of light...\" without worrying about physical possibility, let alone mere improbability.  But in this case, the origin of the GLUT matters; and that's why it's important to understand the motivating question, \"Where did the improbability come from?\"


The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table.  (Thereby creating uncounted googols of human beings, some of them in extreme pain, the supermajority gone quite mad in a universe of chaos where inputs bear no relation to outputs.  But damn the ethics, this is for philosophy.)


In this case, the GLUT is writing papers about consciousness because of a conscious algorithm.  The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device.  The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line.  A GLUT generated from an originally human brain-specification is doing the same thing.


\"All right,\" says the philosopher, \"the GLUT was generated randomly, and just happens to have the same input-output relations as some reference human.\"


How, exactly, did you randomly generate the GLUT?


\"We used a true randomness source—a quantum device.\"


But a quantum device just implements the Branch Both Ways instruction; when you generate a bit from a quantum randomness source, the deterministic result is that one set of universe-branches (locally connected amplitude clouds) see 1, and another set of universes see 0.  Do it 4 times, create 16 (sets of) universes.


So, really, this is like saying that you got the GLUT by writing down all possible GLUT-sized sequences of 0s and 1s, in a really damn huge bin of lookup tables; and then reaching into the bin, and somehow pulling out a GLUT that happened to correspond to a human brain-specification.  Where did the improbability come from?


Because if this wasn't just a coincidence—if you had some reach-into-the-bin function that pulled out a human-corresponding GLUT by design, not just chance—then that reach-into-the-bin function is probably conscious, and so the GLUT is again a cellphone, not a zombie.  It's connected to a human at two removes, instead of one, but it's still a cellphone!  Nice try at concealing the source of the improbability there!


Now behold where Follow-The-Improbability has taken us: where is the source of this body's tongue talking about an inner listener?  The consciousness isn't in the lookup table.  The consciousness isn't in the factory that manufactures lots of possible lookup tables.  The consciousness was in whatever pointed to one particular already-manufactured lookup table, and said, \"Use that one!\"


You can see why I introduced the game of Follow-The-Improbability.  Ordinarily, when we're talking to a person, we tend to think that whatever is inside the skull, must be \"where the consciousness is\".  It's only by playing Follow-The-Improbability that we can realize that the real source of the conversation we're having, is that-which-is-responsible-for the improbability of the conversation—however distant in time or space, as the Sun moves a wind-up toy.


\"No, no!\" says the philosopher.  \"In the thought experiment, they aren't randomly generating lots of GLUTs, and then using a conscious algorithm to pick out one GLUT that seems humanlike! I am specifying that, in this thought experiment,  they reach into the inconceivably vast GLUT bin, and by pure chance pull out a GLUT that is identical to a human brain's inputs and outputs!  There!  I've got you cornered now!  You can't play Follow-The-Improbability any further!\"


Oh.  So your specification is the source of the improbability here.


When we play Follow-The-Improbability again, we end up outside the thought experiment, looking at the philosopher.


That which points to the one GLUT that talks about consciousness, out of all the vast space of possibilities, is now... the conscious person asking us to imagine this whole scenario.  And our own brains, which will fill in the blank when we imagine, \"What will this GLUT say in response to 'Talk about your inner listener'?\"


The moral of this story is that when you follow back discourse about \"consciousness\", you generally find consciousness.  It's not always right in front of you.  Sometimes it's very cleverly hidden.  But it's there.  Hence the Generalized Anti-Zombie Principle.


If there is a Zombie Master in the form of a chatbot that processes and remixes amateur human discourse about \"consciousness\", the humans who generated the original text corpus are conscious.


If someday you come to understand consciousness, and look back, and see that there's a program you can write which will output confused philosophical discourse that sounds an awful lot like humans without itself being conscious—then when I ask \"How did this program come to sound similar to humans?\" the answer is that you wrote it to sound similar to conscious humans, rather than choosing on the criterion of similarity to something else.  This doesn't mean your little Zombie Master is conscious—but it does mean I can find consciousness somewhere in the universe by tracing back the chain of causality, which means we're not entirely in the Zombie World.


But suppose someone actually did reach into a GLUT-bin and by genuinely pure chance pulled out a GLUT that wrote philosophy papers?


Well, then it wouldn't be conscious.  IMHO.


I mean, there's got to be more to it than inputs and outputs.


Otherwise even a GLUT would be conscious, right?


Oh, and for those of you wondering how this sort of thing relates to my day job...


In this line of business you meet an awful lot of people who think that an arbitrarily generated powerful AI will be \"moral\".  They can't agree among themselves on why, or what they mean by the word \"moral\"; but they all agree that doing Friendly AI theory is unnecessary.  And when you ask them how an arbitrarily generated AI ends up with moral outputs, they proffer elaborate rationalizations aimed at AIs of that which they deem \"moral\"; and there are all sorts of problems with this, but the number one problem is, \"Are you sure the AI would follow the same line of thought you invented to argue human morals, when, unlike you, the AI doesn't start out knowing what you want it to rationalize?\"  You could call the counter-principle Follow-The-Decision-Information, or something along those lines.  You can account for an AI that does improbably nice things by telling me how you chose the AI's design from a huge space of possibilities, but otherwise the improbability is being pulled out of nowhere—though more and more heavily disguised, as rationalized premises are rationalized in turn.


So I've already done a whole series of posts which I myself generated using Follow-The-Improbability.  But I didn't spell out the rules explicitly at that time, because I hadn't done the thermodynamic posts yet...


Just thought I'd mention that.  It's amazing how many of my Overcoming Bias posts would coincidentally turn out to include ideas surprisingly relevant to discussion of Friendly AI theory... if you believe in coincidence.

" } }, { "_id": "kYAuNJX2ecH2uFqZ9", "title": "The Generalized Anti-Zombie Principle", "pageUrl": "https://www.lesswrong.com/posts/kYAuNJX2ecH2uFqZ9/the-generalized-anti-zombie-principle", "postedAt": "2008-04-05T23:16:30.000Z", "baseScore": 49, "voteCount": 38, "commentCount": 64, "url": null, "contents": { "documentId": "kYAuNJX2ecH2uFqZ9", "html": "

\"Each problem that I solved became a rule which served afterwards to solve other problems.\"
        —Rene Descartes, Discours de la Methode


\"Zombies\" are putatively beings that are atom-by-atom identical to us, governed by all the same third-party-visible physical laws, except that they are not conscious.


Though the philosophy is complicated, the core argument against zombies is simple:  When you focus your inward awareness on your inward awareness, soon after your internal narrative (the little voice inside your head that speaks your thoughts) says \"I am aware of being aware\", and then you say it out loud, and then you type it into a computer keyboard, and create a third-party visible blog post.


Consciousness, whatever it may be—a substance, a process, a name for a confusion—is not epiphenomenal; your mind can catch the inner listener in the act of listening, and say so out loud.  The fact that I have typed this paragraph would at least seem to refute the idea that consciousness has no experimentally detectable consequences.


I hate to say \"So now let's accept this and move on,\" over such a philosophically controversial question, but it seems like a considerable majority of Overcoming Bias commenters do accept this.  And there are other conclusions you can only get to after you accept that you cannot subtract consciousness and leave the universe looking exactly the same.  So now let's accept this and move on.


The form of the Anti-Zombie Argument seems like it should generalize, becoming an Anti-Zombie Principle.  But what is the proper generalization?


Let's say, for example, that someone says:  \"I have a switch in my hand, which does not affect your brain in any way; and iff this switch is flipped, you will cease to be conscious.\"  Does the Anti-Zombie Principle rule this out as well, with the same structure of argument?


It appears to me that in the case above, the answer is yes.  In particular, you can say:  \"Even after your switch is flipped, I will still talk about consciousness for exactly the same reasons I did before.  If I am conscious right now, I will still be conscious after you flip the switch.\"


Philosophers may object, \"But now you're equating consciousness with talking about consciousness!  What about the Zombie Master, the chatbot that regurgitates a remixed corpus of amateur human discourse on consciousness?\"


But I did not equate \"consciousness\" with verbal behavior.  The core premise is that, among other things, the true referent of \"consciousness\" is also the cause in humans of talking about inner listeners.


As I argued (at some length) in the sequence on words, what you want in defining a word is not always a perfect Aristotelian necessary-and-sufficient definition; sometimes you just want a treasure map that leads you to the extensional referent.  So \"that which does in fact make me talk about an unspeakable awareness\" is not a necessary-and-sufficient definition.  But if what does in fact cause me to discourse about an unspeakable awareness, is not \"consciousness\", then...


...then the discourse gets pretty futile.  That is not a knockdown argument against zombies—an empirical question can't be settled by mere difficulties of discourse.  But if you try to defy the Anti-Zombie Principle, you will have problems with the meaning of your discourse, not just its plausibility.


Could we define the word \"consciousness\" to mean \"whatever actually makes humans talk about 'consciousness'\"?  This would have the powerful advantage of guaranteeing that there is at least one real fact named by the word \"consciousness\".  Even if our belief in consciousness is a confusion, \"consciousness\" would name the cognitive architecture that generated the confusion.  But to establish a definition is only to promise to use a word consistently; it doesn't settle any empirical questions, such as whether our inner awareness makes us talk about our inner awareness.


Let's return to the Off-Switch.


If we allow that the Anti-Zombie Argument applies against the Off-Switch, then the Generalized Anti-Zombie Principle does not say only, \"Any change that is not in-principle experimentally detectable (IPED) cannot remove your consciousness.\"  The switch's flipping is experimentally detectable, but it still seems highly unlikely to remove your consciousness.


Perhaps the Anti-Zombie Principle says, \"Any change that does not affect you in any IPED way cannot remove your consciousness\"?


But is it a reasonable stipulation to say that flipping the switch does not affect you in any IPED way?  All the particles in the switch are interacting with the particles composing your body and brain.  There are gravitational effects—tiny, but real and IPED.  The gravitational pull from a one-gram switch ten meters away is around 6 * 10^-16 m/s^2.  That's around half a neutron diameter per second per second, far below thermal noise, but way above the Planck level.
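[Editor's Note: The figure is easy to verify with a quick Python calculation:]

    # Newtonian acceleration from a one-gram switch ten meters away.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    m = 1e-3             # one gram, in kilograms
    r = 10.0             # meters

    a = G * m / r**2
    print(a)             # about 6.7e-16 m/s^2

    # A neutron is roughly 1.7e-15 m across, so this acceleration is
    # about half a neutron diameter per second per second, as stated.
    print(a / 1.7e-15)   # about 0.39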


We could flip the switch light-years away, in which case the flip would have no immediate causal effect on you (whatever \"immediate\" means in this case) (if the Standard Model of physics is correct).


But it doesn't seem like we should have to alter the thought experiment in this fashion.  It seems that, if a disconnected switch is flipped on the other side of a room, you should not expect your inner listener to go out like a light, because the switch \"obviously doesn't change\" that which is the true cause of your talking about an inner listener.  Whatever you really are, you don't expect the switch to mess with it.


This is a large step.


If you deny that it is a reasonable step, you had better never go near a switch again.  But still, it's a large step.


The key idea of reductionism is that our maps of the universe are multi-level to save on computing power, but physics seems to be strictly single-level.  All our discourse about the universe takes place using references far above the level of fundamental particles.


The switch's flip does change the fundamental particles of your body and brain.  It nudges them by whole neutron diameters away from where they would have otherwise been.


In ordinary life, we gloss a change this small by saying that the switch \"doesn't affect you\".  But it does affect you.  It changes everything by whole neutron diameters!  What could possibly be remaining the same?  Only the description that you would give of the higher levels of organization—the cells, the proteins, the spikes traveling along a neural axon.  As the map is far less detailed than the territory, it must map many different states to the same description.


Any reasonable sort of humanish description of the brain that talks about neurons and activity patterns (or even the conformations of individual microtubules making up axons and dendrites) won't change when you flip a switch on the other side of the room.  Nuclei are larger than neutrons, atoms are larger than nuclei, and by the time you get up to talking about the molecular level, that tiny little gravitational force has vanished from the list of things you bother to track.


But if you add up enough tiny little gravitational pulls, they will eventually yank you across the room and tear you apart by tidal forces, so clearly a small effect is not \"no effect at all\".


Maybe the tidal force from that tiny little pull, by an amazing coincidence, pulls a single extra calcium ion just a tiny bit closer to an ion channel, causing it to be pulled in just a tiny bit sooner, making a single neuron fire infinitesimally sooner than it would otherwise have done, a difference which amplifies chaotically, finally making a whole neural spike occur that otherwise wouldn't have occurred, sending you off on a different train of thought, that triggers an epileptic fit, that kills you, causing you to cease to be conscious...


If you add up a lot of tiny quantitative effects, you get a big quantitative effect—big enough to mess with anything you care to name.  And so claiming that the switch has literally zero effect on the things you care about, is taking it too far.


But with just one switch, the force exerted is vastly less than thermal uncertainties, never mind quantum uncertainties.  If you don't expect your consciousness to flicker in and out of existence as the result of thermal jiggling, then you certainly shouldn't expect to go out like a light when someone sneezes a kilometer away.


The alert Bayesian will note that I have just made an argument about expectations, states of knowledge, justified beliefs about what can and can't switch off your consciousness.


This doesn't necessarily destroy the Anti-Zombie Argument.  Probabilities are not certainties, but the laws of probability are theorems; if rationality says you can't believe something on your current information, then that is a law, not a suggestion.


Still, this version of the Anti-Zombie Argument is weaker.  It doesn't have the nice, clean, absolutely clear-cut status of, \"You can't possibly eliminate consciousness while leaving all the atoms in exactly the same place.\"  (Or for \"all the atoms\" substitute \"all causes with in-principle experimentally detectable effects\", and \"same wavefunction\" for \"same place\", etc.)


But the new version of the Anti-Zombie Argument still carries.  You can say, \"I don't know what consciousness really is, and I suspect I may be fundamentally confused about the question.  But if the word refers to anything at all, it refers to something that is, among other things, the cause of my talking about consciousness.  Now, I don't know why I talk about consciousness.  But it happens inside my skull, and I expect it has something to do with neurons firing.  Or maybe, if I really understood consciousness, I would have to talk about an even more fundamental level than that, like microtubules, or neurotransmitters diffusing across a synaptic channel.  But still, that switch you just flipped has an effect on my neurotransmitters and microtubules that's much, much less than thermal noise at 310 Kelvin.  So whatever the true cause of my talking about consciousness may be, I don't expect it to be hugely affected by the gravitational pull from that switch.  Maybe it's just a tiny little infinitesimal bit affected?  But it's certainly not going to go out like a light.  I expect to go on talking about consciousness in almost exactly the same way afterward, for almost exactly the same reasons.\"

\n

This application of the Anti-Zombie Principle is weaker.  But it's also much more general.  And, in terms of sheer common sense, correct.

\n

The reductionist and the substance dualist actually have two different versions of the above statement.  The reductionist furthermore says, \"Whatever makes me talk about consciousness, it seems likely that the important parts take place on a much higher functional level than atomic nuclei.  Someone who understood consciousness could abstract away from individual neurons firing, and talk about high-level cognitive architectures, and still describe how my mind produces thoughts like 'I think therefore I am'.  So nudging things around by the diameter of a nucleon, shouldn't affect my consciousness (except maybe with very small probability, or by a very tiny amount, or not until after a significant delay).\"

\n

The substance dualist furthermore says, \"Whatever makes me talk about consciousness, it's got to be something beyond the computational physics we know, which means that it might very well involve quantum effects.  But still, my consciousness doesn't flicker on and off whenever someone sneezes a kilometer away.  If it did, I would notice.  It would be like skipping a few seconds, or coming out of a general anesthetic, or sometimes saying, \"I don't think therefore I'm not.\"  So since it's a physical fact that thermal vibrations don't disturb the stuff of my awareness, I don't expect flipping the switch to disturb it either.\"

\n

Either way, you shouldn't expect your sense of awareness to vanish when someone says the word \"Abracadabra\", even if that does have some infinitesimal physical effect on your brain—

\n

But hold on!  If you hear someone say the word \"Abracadabra\", that has a very noticeable effect on your brain—so large, even your brain can notice it.  It may alter your internal narrative; you may think, \"Why did that person just say 'Abracadabra'?\"

\n

Well, but still you expect to go on talking about consciousness in almost exactly the same way afterward, for almost exactly the same reasons.

\n

And again, it's not that \"consciousness\" is being equated to \"that which makes you talk about consciousness\".  It's just that consciousness, among other things, makes you talk about consciousness.  So anything that makes your consciousness go out like a light, should make you stop talking about consciousness.

\n

If we do something to you, where you don't see how it could possibly change your internal narrative—the little voice in your head that sometimes says things like \"I think therefore I am\", whose words you can choose to say aloud—then it shouldn't make you cease to be conscious.

\n

And this is true even if the internal narrative is just \"pretty much the same\", and the causes of it are also pretty much the same; among the causes that are pretty much the same, is whatever you mean by \"consciousness\".

\n

If you're wondering where all this is going, and why it's important to go to such tremendous lengths to ponder such an obvious-seeming Generalized Anti-Zombie Principle, then consider the following debate:

\n

Albert:  \"Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules.\"

\n

Bernice:  \"That's killing me!  There wouldn't be a conscious being there anymore.\"

\n

Charles:  \"Well, there'd still be a conscious being there, but it wouldn't be me.\"

\n

Sir Roger Penrose:  \"The thought experiment you propose is impossible.  You can't duplicate the behavior of neurons without tapping into quantum gravity.  That said, there's not much point in me taking further part in this conversation.\"  (Wanders away.)

\n

Albert:  \"Suppose that the replacement is carried out one neuron at a time, and the swap occurs so fast that it doesn't make any difference to global processing.\"

\n

Bernice:  \"How could that possibly be the case?\"

\n

Albert:  \"The little robot swims up to the neuron, surrounds it, scans it, learns to duplicate it, and then suddenly takes over the behavior, between one spike and the next.  In fact, the imitation is so good, that your outward behavior is just the same as it would be if the brain were left undisturbed.  Maybe not exactly the same, but the causal impact is much less than thermal noise at 310 Kelvin.\"

\n

Charles:  \"So what?\"

\n

Albert:  \"So don't your beliefs violate the Generalized Anti-Zombie Principle?  Whatever just happened, it didn't change your internal narrative!  You'll go around talking about consciousness for exactly the same reason as before.\"

\n

Bernice:  \"Those little robots are a Zombie Master.  They'll make me talk about consciousness even though I'm not conscious.  The Zombie World is possible if you allow there to be an added, extra, experimentally detectable Zombie Master—which those robots are.\"

\n

Charles:  \"Oh, that's not right, Bernice.  The little robots aren't plotting how to fake consciousness, or processing a corpus of text from human amateurs.  They're doing the same thing neurons do, just in silicon instead of carbon.\"

\n

Albert:  \"Wait, didn't you just agree with me?\"

\n

Charles:  \"I never said the new person wouldn't be conscious.  I said it wouldn't be me.\"

\n

Albert:  \"Well, obviously the Anti-Zombie Principle generalizes to say that this operation hasn't disturbed the true cause of your talking about this me thing.\"

\n

Charles:  \"Uh-uh!  Your operation certainly did disturb the true cause of my talking about consciousness.  It substituted a different cause in its place, the robots.  Now, just because that new cause also happens to be conscious—talks about consciousness for the same generalized reason—doesn't mean it's the same cause that was originally there.\"

\n

Albert:  \"But I wouldn't even have to tell you about the robot operation.  You wouldn't notice.  If you think, going on introspective evidence, that you are in an important sense \"the same person\" that you were five minutes ago, and I do something to you that doesn't change the introspective evidence available to you, then your conclusion that you are the same person that you were five minutes ago should be equally justified.  Doesn't the Generalized Anti-Zombie Principle say that if I do something to you that alters your consciousness, let alone makes you a completely different person, then you ought to notice somehow?\"

\n

Bernice:  \"Not if you replace me with a Zombie Master.  Then there's no one there to notice.\"

\n

Charles:  \"Introspection isn't perfect.  Lots of stuff goes on inside my brain that I don't notice.\"

\n

Albert:  \"You're postulating epiphenomenal facts about consciousness and identity!\"

\n

Bernice:  \"No I'm not!  I can experimentally detect the difference between neurons and robots.\"

\n

Charles:  \"No I'm not!  I can experimentally detect the moment when the old me is replaced by a new person.\"

\n

Albert:  \"Yeah, and I can detect the switch flipping!  You're detecting something that doesn't make a noticeable difference to the true cause of your talk about consciousness and personal identity.  And the proof is, you'll talk just the same way afterward.\"

\n

Bernice:  \"That's because of your robotic Zombie Master!\"

\n

Charles:  \"Just because two people talk about 'personal identity' for similar reasons doesn't make them the same person.\"

\n

I think the Generalized Anti-Zombie Principle supports Albert's position, but the reasons shall have to wait for future posts.  I need other prerequisites, and besides, this post is already too long.

\n

But you see the importance of the question, \"How far can you generalize the Anti-Zombie Argument and have it still be valid?\"

\n

The makeup of future galactic civilizations may be determined by the answer...

" } }, { "_id": "4moMTeCy9EqYxAher", "title": "Zombie Responses", "pageUrl": "https://www.lesswrong.com/posts/4moMTeCy9EqYxAher/zombie-responses", "postedAt": "2008-04-05T00:42:24.000Z", "baseScore": 32, "voteCount": 29, "commentCount": 40, "url": null, "contents": { "documentId": "4moMTeCy9EqYxAher", "html": "

I'm a bit tired today, having stayed up until 3AM writing yesterday's >6000-word post on zombies, so today I'll just reply to Richard, and tie up a loose end I spotted the next day.

\n

Besides, TypePad's nitwit, un-opt-out-able 50-comment pagination \"feature\", that doesn't work with the Recent Comments sidebar, means that we might as well jump the discussion here before we go over the 50-comment limit.

\n

\n

(A)  Richard Chappell writes:

\n
\n

A terminological note (to avoid unnecessary confusion): what you call 'conceivable', others of us would merely call \"apparently conceivable\".

\n
\n

The gap between \"I don't see a contradiction yet\" and \"this is logically possible\" is so huge (it's NP-complete even in some simple-seeming cases) that you really should have two different words.  As the zombie argument is boosted to the extent that this huge gap can be swept under the rug of minor terminological differences, I really think it would be a good idea to say \"conceivable\" versus \"logically possible\" or maybe even have a still more visible distinction.  I can't choose professional terminology that has already been established, but in a case like this, I might seriously refuse to use it.

\n

Maybe I will say \"apparently conceivable\" for the kind of information that zombie advocates get by imagining Zombie Worlds, and \"logically possible\" for the kind of information that is established by exhibiting a complete model or logical proof.  Note the size of the gap between the information you can get by closing your eyes and imagining zombies, and the information you need to carry the argument for epiphenomenalism.

\n
\n

That is, your view would be characterized as a form of Type-A materialism, the view that zombies are not even (genuinely) conceivable, let alone metaphysically possible.

\n
\n

Type-A materialism is a large bundle; you shouldn't attribute the bundle to me until you see me agree with each of the parts.  I think that someone who asks \"What is consciousness?\" is asking a legitimate question, has a legitimate demand for insight; I don't necessarily think that the answer takes the form of \"Here is this stuff that has all the properties you would attribute to consciousness, for such-and-such reason\", but may to some extent consist of insights that cause you to realize you were asking the question the wrong way.

\n

This is not being eliminative about consciousness.  It is being realistic about what kind of insights to expect, faced with a problem that (1) seems like it must have some solution, (2) seems like it cannot possibly have any solution, and (3) is being discussed in a fashion that has a great big dependence on the not-fully-understood ad-hoc architecture of human cognition.

\n
\n

(1) You haven't, so far as I can tell, identified any logical contradiction in the description of the zombie world. You've just pointed out that it's kind of strange. But there are many bizarre possible worlds out there. That's no reason to posit an implicit contradiction. So it's still completely mysterious to me what this alleged contradiction is supposed to be.

\n
\n

Okay, I'll spell it out from a materialist standpoint:

  1. The zombie world, by definition, contains all parts of our world that are within the closure of the \"caused by\" or \"effect of\" relation of any observable phenomenon.  In particular, it contains the cause of my visibly saying, \"I think therefore I am.\"
  2. When I focus my inward awareness on my inward awareness, I shortly thereafter experience my internal narrative saying \"I am focusing my inward awareness on my inward awareness\", and can, if I choose, say so out loud.
  3. Intuitively, it sure seems like my inward awareness is causing my internal narrative to say certain things, and that my internal narrative can cause my lips to say certain things.
  4. The word \"consciousness\", if it has any meaning at all, refers to that-which-is or that-which-causes or that-which-makes-me-say-I-have inward awareness.
  5. From (3) and (4) it would follow that if the zombie world is closed with respect to the causes of my saying \"I think therefore I am\", the zombie world contains that which we refer to as \"consciousness\".
  6. By definition, the zombie world does not contain consciousness.
  7. (3) seems to me to have a rather high probability of being empirically true.  Therefore I evaluate a high empirical probability that the zombie world is logically impossible.

You can save the Zombie World by letting the cause of my internal narrative's saying \"I think therefore I am\" be something entirely other than consciousness.  In conjunction with the assumption that consciousness does exist, this is the part that struck me as deranged.

\n

But if the above is conceivable, then isn't the Zombie World conceivable?

\n

No, because the two constructions of the Zombie World involve giving the word \"consciousness\" different empirical referents, like \"water\" in our world meaning H2O versus \"water\" in Putnam's Twin Earth meaning XYZ.  For the Zombie World to be logically possible, it does not suffice that, for all you knew about how the empirical world worked, the word \"consciousness\" could have referred to an epiphenomenon that is entirely different from the consciousness we know.  The Zombie World lacks consciousness, not \"consciousness\"—it is a world without H2O, not a world without \"water\".  This is what is required to carry the empirical statement, \"You could eliminate the referent of whatever is meant by \"consciousness\" from our world, while keeping all the atoms in the same place.\"

\n

Which is to say:  I hold that it is an empirical fact, given what the word \"consciousness\" actually refers to, that it is logically impossible to eliminate consciousness without moving any atoms.  What it would mean to eliminate \"consciousness\" from a world, rather than consciousness, I will not speculate.

\n
\n

(2) It's misleading to say it's \"miraculous\" (on the property dualist view) that our qualia line up so neatly with the physical world. There's a natural law which guarantees this, after all. So it's no more miraculous than any other logically contingent nomic necessity (e.g. the constants in our physical laws).

\n
\n

It is the natural law itself that is \"miraculous\"—counts as an additional complex-improbable element of the theory to be postulated, without having been itself justified in terms of things already known.  One postulates (a) an inner world that is conscious, (b) a malfunctioning outer world that talks about consciousness for no reason, and (c) that the two align perfectly.  (c) does not follow from (a) and (b), and so is a separate postulate.

\n

I agree that this usage of \"miraculous\" conflicts with the philosophical sense of violating a natural law; I meant it in the sense of improbability appearing from no apparent source, a la perpetual motion belief.  Hence the word was ill-chosen in context.  But is this not intuitively the sort of thing we should call a miracle?  Your consciousness doesn't really cause you to say you're conscious, there's a separate physical thing that makes you say you're conscious, but also there's a law aligning the two - this is indeed an event on a similar order of wackiness to a cracker taking on the substance of Christ's flesh while possessing the exact appearance and outward behavior of a cracker, there's just a natural law which guarantees this, you know.

\n
\n

That is, Zombie (or 'Outer') Chalmers doesn't actually conclude anything, because his utterances are meaningless. A fortiori, he doesn't conclude anything unwarrantedly. He's just making noises; these are no more susceptible to epistemic assessment than the chirps of a bird.

\n
\n

Looking at this from an AI-design standpoint, it seems to me like you should be able to build an AI that systematically refines an inner part of itself that correlates (in the sense of mutual information or systematic relations) to the environment, perhaps including floating-point numbers of a sort that I would call \"probabilities\" because they obey the internal relations mandated by Cox's Theorems when the AI encounters new information—pardon me, new sense inputs.
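As an illustrative sketch (mine, with invented numbers, assuming nothing about any particular AI design), here is what such a floating-point register obeying Bayes's theorem looks like, purely as a causal process:

```python
# One floating-point number, updated on each new sense input by Bayes' rule.
# Whether you call it a 'probability' or a 'mere voltage', the arithmetic is
# the same physical causality either way.

def bayes_update(prior, lik_if_true, lik_if_false):
    # P(H | input) from P(H) and the likelihood of the input under H and ~H.
    joint_true = prior * lik_if_true
    joint_false = (1.0 - prior) * lik_if_false
    return joint_true / (joint_true + joint_false)

# Hypothesis: 'the orange juice is gone'.  Each sense input is more likely
# if the hypothesis is true than if it is false (likelihoods are invented).
belief = 0.5
for lik_true, lik_false in [(0.9, 0.3), (0.95, 0.1)]:
    belief = bayes_update(belief, lik_true, lik_false)
print('posterior: %.3f' % belief)   # 0.966 -- rises as evidence accumulates
```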

\n

You will say that, unless the AI is more than mere transistors—unless it has the dual aspect—the AI has no beliefs.

\n

I think my views on this were expressed pretty clearly in \"The Simple Truth\".

\n

To me, it seems pretty straightforward to construct maps that correlate to territories in systematic ways, without mentioning anything other than things of pure physical causality.  The AI outputs a map of Texas.  Another AI flies with the map to Texas and checks to see if the highways are in the corresponding places, chirping \"True\" when it detects a match and \"False\" when it detects a mismatch.  You can refuse to call this \"a map of Texas\" but the AIs themselves are still chirping \"True\" or \"False\", and the said AIs are going to chirp \"False\" when they look at Chalmers's belief in an epiphenomenal inner core, and I for one would agree with them.
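A toy version of that arrangement (my construction; the highway data is invented), just to show that the loop contains nothing but physical comparison and chirping:

```python
# One process emits a 'map'; a second process checks each entry against the
# 'territory' and chirps True or False.  No step here is anything but causality.

territory = {'I-35': 'Austin', 'I-10': 'Houston', 'I-20': 'Dallas'}

def make_map():
    # A mapping process with one deliberate error, so the checker has
    # something to catch.
    return {'I-35': 'Austin', 'I-10': 'Houston', 'I-20': 'El Paso'}

def check(map_, territory):
    for highway, city in map_.items():
        print(highway, 'True' if territory.get(highway) == city else 'False')

check(make_map(), territory)   # I-35 True, I-10 True, I-20 False
```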

\n

It's clear that the function of mapping reality is performed strictly by Outer Chalmers.  The whole business of producing belief representations is handled by Bayesian structure in causal interactions.  There's nothing left for the Inner Chalmers to do, but bless the whole affair with epiphenomenal meaning.  Where now 'meaning' is something entirely unrelated to systematic map-territory correspondence or the ability to use that map to navigate reality.  So when it comes to talking about \"accuracy\", let alone \"systematic accuracy\", it seems to me like we should be able to determine it strictly by looking at the Outer Chalmers.

\n

(B)  In yesterday's text, I left out an assumption when I wrote:

\n
\n

If a self-modifying AI looks at a part of itself that concludes \"B\" on condition A—a part of itself that writes \"B\" to memory whenever condition A is true—and the AI inspects this part, determines how it (causally) operates in the context of the larger universe, and the AI decides that this part systematically tends to write false data to memory, then the AI has found what appears to be a bug, and the AI will self-modify not to write \"B\" to the belief pool under condition A.

\n

...

\n

But there's no possible warrant for the outer Chalmers or any reflectively coherent self-inspecting AI to believe in this mysterious correctness.  A good AI design should, I think, look like a reflectively coherent intelligence embodied in a causal system, with a testable theory of how that selfsame causal system produces systematically accurate beliefs on the way to achieving its goals.

\n
\n

Actually, you need an additional assumption to the above, which is that a \"good AI design\" (the kind I was thinking of, anyway) judges its own rationality in a modular way; it enforces global rationality by enforcing local rationality.  If there is a piece that, relative to its context, is locally systematically unreliable—if, for some possible beliefs \"B_i\" and conditions A_i, it adds some \"B_i\" to the belief pool under local condition A_i, where reflection by the system indicates that B_i is not true (or in the case of probabilistic beliefs, not accurate) when the local condition A_i is true—then this is a bug.  This kind of modularity is a way to make the problem tractable, and it's how I currently think about the first-generation AI design. [Edit 2013:  The actual notion I had in mind here has now been fleshed out and formalized in Tiling Agents for Self-Modifying AI, section 6.]

\n

The notion is that a causally closed cognitive system—such as an AI designed by its programmers to use only causally efficacious parts; or an AI whose theory of its own functioning is entirely testable; or the outer Chalmers that writes philosophy papers—which believes that it has an epiphenomenal inner self, must be doing something systematically unreliable because it would conclude the same thing in a Zombie World.  A mind all of whose parts are systematically locally reliable, relative to their contexts, would be systematically globally reliable.  Ergo, a mind which is globally unreliable must contain at least one locally unreliable part.  So a causally closed cognitive system inspecting itself for local reliability must discover that at least one step involved in adding the belief of an epiphenomenal inner self, is unreliable.

\n

If there are other ways for minds to be reflectively coherent which avoid this proof of disbelief in zombies, philosophers are welcome to try and specify them.

\n

The reason why I have to specify all this is that otherwise you get a kind of extremely cheap reflective coherence where the AI can never label itself unreliable.  E.g. if the AI finds a part of itself that computes 2 + 2 = 5 (in the surrounding context of counting sheep) the AI will reason:  \"Well, this part malfunctions and says that 2 + 2 = 5... but by pure coincidence, 2 + 2 is equal to 5, or so it seems to me... so while the part looks systematically unreliable, I better keep it the way it is, or it will handle this special case wrong.\"  That's why I talk about enforcing global reliability by enforcing local systematic reliability—if you just compare your global beliefs to your global beliefs, you don't go anywhere.
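To make the contrast concrete, here is a toy sketch of my own (the buggy adder stands in for the malfunctioning part; nothing here is from the post): the global check compares the system's output to its own output and so never finds anything, while the local check re-derives the answer by an independent route.

```python
# 'Local reliability' versus the cheap global check, on a part that
# systematically writes 2 + 2 = 5.

def buggy_adder(a, b):
    return a + b + (1 if (a, b) == (2, 2) else 0)   # wrong exactly on (2, 2)

def global_check(part):
    # Compare the system's beliefs to the system's own beliefs: vacuous.
    return part(2, 2) == part(2, 2)   # always True, tells you nothing

def local_check(part):
    # Re-derive addition independently of the part, by repeated counting.
    def count_up(a, b):
        for _ in range(b):
            a += 1
        return a
    return all(part(a, b) == count_up(a, b) for a in range(5) for b in range(5))

print(global_check(buggy_adder))   # True  -- the bug is invisible globally
print(local_check(buggy_adder))    # False -- the unreliable part is flagged
```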

\n

This does have a general lesson:  Show your arguments are globally reliable by virtue of each step being locally reliable, don't just compare the arguments' conclusions to your intuitions.  [Edit 2013:  See this on valid logic being locally valid.]

\n

(C)  An anonymous poster wrote:

\n
\n

A sidepoint, this, but I believe your etymology for \"n'shama\" is wrong. It is related to the word for \"breath\", not \"hear\". The root for \"hear\" contains an ayin, which n'shama does not.

\n
\n

Now that's what I call a miraculously misleading coincidence—although the word N'Shama arose for completely different reasons, it sounded exactly the right way to make me think it referred to an inner listener.

\n

Oops.

" } }, { "_id": "fdEWWr8St59bXLbQr", "title": "Zombies! Zombies?", "pageUrl": "https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zombies", "postedAt": "2008-04-04T09:55:26.000Z", "baseScore": 122, "voteCount": 102, "commentCount": 160, "url": null, "contents": { "documentId": "fdEWWr8St59bXLbQr", "html": "

\"Doviende38008649\"Your \"zombie\", in the philosophical usage of the term, is putatively a being that is exactly like you in every respect—identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion—except that your zombie is not conscious.

\n

It is furthermore claimed that if zombies are \"possible\" (a term over which battles are still being fought), then, purely from our knowledge of this \"possibility\", we can deduce a priori that consciousness is extra-physical, in a sense to be described below; the standard term for this position is \"epiphenomenalism\".

\n

(For those unfamiliar with zombies, I emphasize that this is not a strawman.  See, for example, the SEP entry on Zombies.  The \"possibility\" of zombies is accepted by a substantial fraction, possibly a majority, of academic philosophers of consciousness.)

\n

I once read somewhere, \"You are not the one who speaks your thoughts—you are the one who hears your thoughts\".  In Hebrew, the word for the highest soul, that which God breathed into Adam, is N'Shama—\"the hearer\".

\n

If you conceive of \"consciousness\" as a purely passive listening, then the notion of a zombie initially seems easy to imagine.  It's someone who lacks the N'Shama, the hearer.

\n

(Warning:  Long post ahead.  Very long 6,600-word post involving David Chalmers ahead.  This may be taken as my demonstrative counterexample to Richard Chappell's Arguing with Eliezer Part II, in which Richard accuses me of not engaging with the complex arguments of real philosophers. Edit December 2019: There now exists a shorter edited version of this post here)

\n

\n

When you open a refrigerator and find that the orange juice is gone, you think \"Darn, I'm out of orange juice.\"  The sound of these words is probably represented in your auditory cortex, as though you'd heard someone else say it.  (Why do I think this?  Because native Chinese speakers can remember longer digit sequences than English-speakers.  Chinese digits are all single syllables, and so Chinese speakers can remember around ten digits, versus the famous \"seven plus or minus two\" for English speakers.  There appears to be a loop of repeating sounds back to yourself, a size limit on working memory in the auditory cortex, which is genuinely phoneme-based.)

\n

Let's suppose the above is correct; as a postulate, it should certainly present no problem for advocates of zombies.  Even if humans are not like this, it seems easy enough to imagine an AI constructed this way (and imaginability is what the zombie argument is all about).  It's not only conceivable in principle, but quite possible in the next couple of decades, that surgeons will lay a network of neural taps over someone's auditory cortex and read out their internal narrative.  (Researchers have already tapped the lateral geniculate nucleus of a cat and reconstructed recognizable visual inputs.)

\n

So your zombie, being physically identical to you down to the last atom, will open the refrigerator and form auditory cortical patterns for the phonemes \"Darn, I'm out of orange juice\".  On this point, epiphenomenalists would willingly agree.

\n

But, says the epiphenomenalist, in the zombie there is no one inside to hear; the inner listener is missing.  The internal narrative is spoken, but unheard.  You are not the one who speaks your thoughts, you are the one who hears them.

\n

It seems a lot more straightforward (they would say) to make an AI that prints out some kind of internal narrative, than to show that an inner listener hears it.

\n

The Zombie Argument is that if the Zombie World is possible—not necessarily physically possible in our universe, just \"possible in theory\", or \"imaginable\", or something along those lines—then consciousness must be extra-physical, something over and above mere atoms.  Why?  Because even if you somehow knew the positions of all the atoms in the universe, you would still have to be told, as a separate and additional fact, that people were conscious—that they had inner listeners—that we were not in the Zombie World, as seems possible.

\n

Zombie-ism is not the same as dualism.  Descartes thought there was a body-substance and a wholly different kind of mind-substance, but Descartes also thought that the mind-substance was a causally active principle, interacting with the body-substance, controlling our speech and behavior.  Subtracting out the mind-substance from the human would leave a traditional zombie, of the lurching and groaning sort.

\n

And though the Hebrew word for the innermost soul is N'Shama, that-which-hears, I can't recall hearing a rabbi arguing for the possibility of zombies.  Most rabbis would probably be aghast at the idea that the divine part which God breathed into Adam doesn't actually do anything.

\n

The technical term for the belief that consciousness is there, but has no effect on the physical world, is epiphenomenalism.

\n

Though there are other elements to the zombie argument (I'll deal with them below), I think that the intuition of the passive listener is what first seduces people to zombie-ism.  In particular, it's what seduces a lay audience to zombie-ism.  The core notion is simple and easy to access:  The lights are on but no one's home.

\n

Philosophers are appealing to the intuition of the passive listener when they say \"Of course the zombie world is imaginable; you know exactly what it would be like.\"

\n

One of the great battles in the Zombie Wars is over what, exactly, is meant by saying that zombies are \"possible\".  Early zombie-ist philosophers (the 1970s) just thought it was obvious that zombies were \"possible\", and didn't bother to define what sort of possibility was meant.

\n

Because of my reading in mathematical logic, what instantly comes into my mind is logical possibility.  If you have a collection of statements like (A->B),(B->C),(C->~A) then the compound belief is logically possible if it has a model—which, in the simple case above, reduces to finding a value assignment to A, B, C that makes all of the statements (A->B),(B->C), and (C->~A) true.  In this case, A=B=C=0 works, as does A=0, B=C=1 or A=B=0, C=1.

\n

Something will seem possible—will seem \"conceptually possible\" or \"imaginable\"—if you can consider the collection of statements without seeing a contradiction.  But it is, in general, a very hard problem to see contradictions or to find a full specific model!  If you limit yourself to simple Boolean propositions of the form ((A or B or C) and (B or ~C or D) and (D or ~A or ~C) ...), conjunctions of disjunctions of three literals (variables or their negations), then this is a very famous problem called 3-SAT, which is one of the first problems ever to be proven NP-complete.
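For the three-statement example above, the model search can be done by brute force; a sketch (mine, not the post's):

```python
# Enumerate all assignments to A, B, C and keep those satisfying
# (A -> B) and (B -> C) and (C -> ~A), where 'P -> Q' is '(not P) or Q'.
from itertools import product

def implies(p, q):
    return (not p) or q

models = [(a, b, c)
          for a, b, c in product([False, True], repeat=3)
          if implies(a, b) and implies(b, c) and implies(c, not a)]
print(models)
# [(False, False, False), (False, False, True), (False, True, True)]
# i.e. A=B=C=0, A=B=0 C=1, and A=0 B=C=1: exactly the three models named above.
# Exhibiting one model settles logical possibility, but for general 3-SAT no
# known method scales better than exponentially in the worst case.
```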

\n

So just because you don't see a contradiction in the Zombie World at first glance, it doesn't mean that no contradiction is there.  It's like not seeing a contradiction in the Riemann Hypothesis at first glance.  From conceptual possibility (\"I don't see a problem\") to logical possibility in the full technical sense, is a very great leap.  It's easy to make it an NP-complete leap, and with first-order theories you can make it arbitrarily hard to compute even for finite questions.  And it's logical possibility of the Zombie World, not conceptual possibility, that is needed to suppose that a logically omniscient mind could know the positions of all the atoms in the universe, and yet need to be told as an additional non-entailed fact that we have inner listeners.

\n

Just because you don't see a contradiction yet, is no guarantee that you won't see a contradiction in another 30 seconds.  \"All odd numbers are prime.  Proof:  3 is prime, 5 is prime, 7 is prime...\"

\n

So let us ponder the Zombie Argument a little longer:  Can we think of a counterexample to the assertion \"Consciousness has no third-party-detectable causal impact on the world\"?

\n

If you close your eyes and concentrate on your inward awareness, you will begin to form thoughts, in your internal narrative, that go along the lines of \"I am aware\" and \"My awareness is separate from my thoughts\" and \"I am not the one who speaks my thoughts, but the one who hears them\" and \"My stream of consciousness is not my consciousness\" and \"It seems like there is a part of me which I can imagine being eliminated without changing my outward behavior.\"

\n

You can even say these sentences out loud, as you meditate.  In principle, someone with a super-fMRI could probably read the phonemes out of your auditory cortex; but saying it out loud removes all doubt about whether you have entered the realms of testability and physical consequences.

\n

This certainly seems like the inner listener is being caught in the act of listening by whatever part of you writes the internal narrative and flaps your tongue.

\n

Imagine that a mysterious race of aliens visit you, and leave you a mysterious black box as a gift.  You try poking and prodding the black box, but (as far as you can tell) you never succeed in eliciting a reaction.  You can't make the black box produce gold coins or answer questions.  So you conclude that the black box is causally inactive:  \"For all X, the black box doesn't do X.\"  The black box is an effect, but not a cause; epiphenomenal; without causal potency.  In your mind, you test this general hypothesis to see if it is true in some trial cases, and it seems to be true—\"Does the black box turn lead to gold?  No.  Does the black box boil water?  No.\"

\n

But you can see the black box; it absorbs light, and weighs heavy in your hand.  This, too, is part of the dance of causality.  If the black box were wholly outside the causal universe, you couldn't see it; you would have no way to know it existed; you could not say, \"Thanks for the black box.\"  You didn't think of this counterexample, when you formulated the general rule:  \"All X: Black box doesn't do X\".  But it was there all along.

\n

(Actually, the aliens left you another black box, this one purely epiphenomenal, and you haven't the slightest clue that it's there in your living room.  That was their joke.)

\n

If you can close your eyes, and sense yourself sensing—if you can be aware of yourself being aware, and think \"I am aware that I am aware\"—and say out loud, \"I am aware that I am aware\"—then your consciousness is not without effect on your internal narrative, or your moving lips.  You can see yourself seeing, and your internal narrative reflects this, and so do your lips if you choose to say it out loud.

\n

I have not seen the above argument written out that particular way—\"the listener caught in the act of listening\"—though it may well have been said before.

\n

But it is a standard point—which zombie-ist philosophers accept!—that the Zombie World's philosophers, being atom-by-atom identical to our own philosophers, write identical papers about the philosophy of consciousness.

\n

At this point, the Zombie World stops being an intuitive consequence of the idea of a passive listener.

\n

Philosophers writing papers about consciousness would seem to be at least one effect of consciousness upon the world.  You can argue clever reasons why this is not so, but you have to be clever.

\n

You would intuitively suppose that if your inward awareness went away, this would change the world, in that your internal narrative would no longer say things like \"There is a mysterious listener within me,\" because the mysterious listener would be gone.  It is usually right after you focus your awareness on your awareness, that your internal narrative says \"I am aware of my awareness\", which suggests that if the first event never happened again, neither would the second.  You can argue clever reasons why this is not so, but you have to be clever.

\n

You can form a propositional belief that \"Consciousness is without effect\", and not see any contradiction at first, if you don't realize that talking about consciousness is an effect of being conscious.  But once you see the connection from the general rule that consciousness has no effect, to the specific implication that consciousness has no effect on how philosophers write papers about consciousness, zombie-ism stops being intuitive and starts requiring you to postulate strange things.

\n

One strange thing you might postulate is that there's a Zombie Master, a god within the Zombie World who surreptitiously takes control of zombie philosophers and makes them talk and write about consciousness.

\n

A Zombie Master doesn't seem impossible.  Human beings often don't sound all that coherent when talking about consciousness.  It might not be that hard to fake their discourse, to the standards of, say, a human amateur talking in a bar.  Maybe you could take, as a corpus, one thousand human amateurs trying to discuss consciousness; feed them into a non-conscious but sophisticated AI, better than today's models but not self-modifying; and get back discourse about \"consciousness\" that sounded as sensible as most humans, which is to say, not very.

\n

But this speech about \"consciousness\" would not be spontaneous.  It would not be produced within the AI.  It would be a recorded imitation of someone else talking.  That is just a holodeck, with a central AI writing the speech of the non-player characters.  This is not what the Zombie World is about.

\n

By supposition, the Zombie World is atom-by-atom identical to our own, except that the inhabitants lack consciousness.  Furthermore, the atoms in the Zombie World move under the same laws of physics as in our own world.  If there are \"bridging laws\" that govern which configurations of atoms evoke consciousness, those bridging laws are absent.  But, by hypothesis, the difference is not experimentally detectable.  When it comes to saying whether a quark zigs or zags or exerts a force on nearby quarks—anything experimentally measurable—the same physical laws govern.

\n

The Zombie World has no room for a Zombie Master, because a Zombie Master has to control the zombie's lips, and that control is, in principle, experimentally detectable.  The Zombie Master moves lips, therefore it has observable consequences.  There would be a point where an electron zags, instead of zigging, because the Zombie Master says so.  (Unless the Zombie Master is actually in the world, as a pattern of quarks—but then the Zombie World is not atom-by-atom identical to our own, unless you think this world also contains a Zombie Master.)

\n

When a philosopher in our world types, \"I think the Zombie World is possible\", his fingers strike keys in sequence:  Z-O-M-B-I-E.  There is a chain of causality that can be traced back from these keystrokes: muscles contracting, nerves firing, commands sent down through the spinal cord, from the motor cortex—and then into less understood areas of the brain, where the philosopher's internal narrative first began talking about \"consciousness\".

\n

And the philosopher's zombie twin strikes the same keys, for the same reason, causally speaking.  There is no cause within the chain of explanation for why the philosopher writes the way he does, which is not also present in the zombie twin.  The zombie twin also has an internal narrative about \"consciousness\", that a super-fMRI could read out of the auditory cortex.  And whatever other thoughts, or other causes of any kind, led to that internal narrative, they are exactly the same in our own universe and in the Zombie World.

\n

So you can't say that the philosopher is writing about consciousness because of consciousness, while the zombie twin is writing about consciousness because of a Zombie Master or AI chatbot.  When you trace back the chain of causality behind the keyboard, to the internal narrative echoed in the auditory cortex, to the cause of the narrative, you must find the same physical explanation in our world as in the zombie world.

\n

As the most formidable advocate of zombie-ism, David Chalmers, writes:

\n
\n

Think of my zombie twin in the universe next door. He talks about conscious experience all the time—in fact, he seems obsessed by it. He spends ridiculous amounts of time hunched over a computer, writing chapter after chapter on the mysteries of consciousness. He often comments on the pleasure he gets from certain sensory qualia, professing a particular love for deep greens and purples. He frequently gets into arguments with zombie materialists, arguing that their position cannot do justice to the realities of conscious experience.

\n

And yet he has no conscious experience at all! In his universe, the materialists are right and he is wrong. Most of his claims about conscious experience are utterly false. But there is certainly a physical or functional explanation of why he makes the claims he makes. After all, his universe is fully law-governed, and no events therein are miraculous, so there must be some explanation of his claims.

\n

...Any explanation of my twin’s behavior will equally count as an explanation of my behavior, as the processes inside his body are precisely mirrored by those inside mine. The explanation of his claims obviously does not depend on the existence of consciousness, as there is no consciousness in his world. It follows that the explanation of my claims is also independent of the existence of consciousness.

\n
\n

Chalmers is not arguing against zombies; those are his actual beliefs!

\n
\n

This paradoxical situation is at once delightful and disturbing.  It is not obviously fatal to the nonreductive position, but it is at least something that we need to come to grips with...

\n
\n

I would seriously nominate this as the largest bullet ever bitten in the history of time.  And that is a backhanded compliment to David Chalmers:  A lesser mortal would simply fail to see the implications, or refuse to face them, or rationalize a reason it wasn't so.

\n

Why would anyone bite a bullet that large?  Why would anyone postulate unconscious zombies who write papers about consciousness for exactly the same reason that our own genuinely conscious philosophers do?

\n

Not because of the first intuition I wrote about, the intuition of the passive listener.  That intuition may say that zombies can drive cars or do math or even fall in love, but it doesn't say that zombies write philosophy papers about their passive listeners.

\n

The zombie argument does not rest solely on the intuition of the passive listener.  If this was all there was to the zombie argument, it would be dead by now, I think.  The intuition that the \"listener\" can be eliminated without effect, would go away as soon as you realized that your internal narrative routinely seems to catch the listener in the act of listening.

\n

No, the drive to bite this bullet comes from an entirely different intuition—the intuition that no matter how many atoms you add up, no matter how many masses and electrical charges interact with each other, they will never necessarily produce a subjective sensation of the mysterious redness of red.  It may be a fact about our physical universe (Chalmers says) that putting such-and-such atoms into such-and-such a position, evokes a sensation of redness; but if so, it is not a necessary fact, it is something to be explained above and beyond the motion of the atoms.

\n

But if you consider the second intuition on its own, without the intuition of the passive listener, it is hard to see why it implies zombie-ism.  Maybe there's just a different kind of stuff, apart from and additional to atoms, that is not causally passive—a soul that actually does stuff, a soul that plays a real causal role in why we write about \"the mysterious redness of red\".  Take out the soul, and... well, assuming you just don't fall over in a coma, you certainly won't write any more papers about consciousness!

\n

This is the position taken by Descartes and most other ancient thinkers:  The soul is of a different kind, but it interacts with the body.  Descartes's position is technically known as substance dualism—there is a thought-stuff, a mind-stuff, and it is not like atoms; but it is causally potent, interactive, and leaves a visible mark on our universe.

\n

Zombie-ists are property dualists—they don't believe in a separate soul; they believe that matter in our universe has additional properties beyond the physical.

\n

\"Beyond the physical\"?  What does that mean?  It means the extra properties are there, but they don't influence the motion of the atoms, like the properties of electrical charge or mass.  The extra properties are not experimentally detectable by third parties; you know you are conscious, from the inside of your extra properties, but no scientist can ever directly detect this from outside.

\n

So the additional properties are there, but not causally active.  The extra properties do not move atoms around, which is why they can't be detected by third parties.

\n

And that's why we can (allegedly) imagine a universe just like this one, with all the atoms in the same places, but the extra properties missing, so that everything goes on the same as before, but no one is conscious.

\n

The Zombie World may not be physically possible, say the zombie-ists—because it is a fact that all the matter in our universe has the extra properties, or obeys the bridging laws that evoke consciousness—but the Zombie World is logically possible: the bridging laws could have been different.

\n

But, once you realize that conceivability is not the same as logical possibility, and that the Zombie World isn't even all that intuitive, why say that the Zombie World is logically possible?

\n

Why, oh why, say that the extra properties are epiphenomenal and undetectable?

\n

We can put this dilemma very sharply:  Chalmers believes that there is something called consciousness, and this consciousness embodies the true and indescribable substance of the mysterious redness of red.  It may be a property beyond mass and charge, but it's there, and it is consciousness.  Now, having said the above, Chalmers furthermore specifies that this true stuff of consciousness is epiphenomenal, without causal potency—but why say that?

\n

Why say that you could subtract this true stuff of consciousness, and leave all the atoms in the same place doing the same things?  If that's true, we need some separate physical explanation for why Chalmers talks about \"the mysterious redness of red\".  That is, there exists both a mysterious redness of red, which is extra-physical, and an entirely separate reason, within physics, why Chalmers talks about the \"mysterious redness of red\".

\n

Chalmers does confess that these two things seem like they ought to be related, but really, why do you need both?  Why not just pick one or the other?

\n

Once you've postulated that there is a mysterious redness of red, why not just say that it interacts with your internal narrative and makes you talk about the \"mysterious redness of red\"?

\n

Isn't Descartes taking the simpler approach, here?  The strictly simpler approach?

\n

Why postulate an extramaterial soul, and then postulate that the soul has no effect on the physical world, and then postulate a mysterious unknown material process that causes your internal narrative to talk about conscious experience?

\n

Why not postulate the true stuff of consciousness which no amount of mere mechanical atoms can add up to, and then, having gone that far already, let this true stuff of consciousness have causal effects like making philosophers talk about consciousness?

\n

I am not endorsing Descartes's view.  But at least I can understand where Descartes is coming from.  Consciousness seems mysterious, so you postulate a mysterious stuff of consciousness.  Fine.

\n

But now the zombie-ists postulate that this mysterious stuff doesn't do anything, so you need a whole new explanation for why you say you're conscious.

\n

That isn't vitalism.  That's something so bizarre that vitalists would spit out their coffee.  \"When fires burn, they release phlogiston.  But phlogiston doesn't have any experimentally detectable impact on our universe, so you'll have to go looking for a separate explanation of why a fire can melt snow.\"  What?

\n

Are property dualists under the impression that if they postulate a new active force, something that has a causal impact on observables, they will be sticking their necks out too far?

\n

Me, I'd say that if you postulate a mysterious, separate, additional, inherently mental property of consciousness, above and beyond positions and velocities, then, at that point, you have already stuck your neck out as far as it can go.  To postulate this stuff of consciousness, and then further postulate that it doesn't do anything—for the love of cute kittens, why?

\n

There isn't even an obvious career motive.  \"Hi, I'm a philosopher of consciousness.  My subject matter is the most important thing in the universe and I should get lots of funding?  Well, it's nice of you to say so, but actually the phenomenon I study doesn't do anything whatsoever.\"  (Argument from career impact is not valid, but I say it to leave a line of retreat.)

\n

Chalmers critiques substance dualism on the grounds that it's hard to see what new theory of physics, what new substance that interacts with matter, could possibly explain consciousness.  But property dualism has exactly the same problem.  No matter what kind of dual property you talk about, how exactly does it explain consciousness?

\n

When Chalmers postulated an extra property that is consciousness, he took that leap across the unexplainable.  How does it help his theory to further specify that this extra property has no effect?  Why not just let it be causal?

\n

If I were going to be unkind, this would be the time to drag in the dragon—to mention Carl Sagan's parable of the dragon in the garage.  \"I have a dragon in my garage.\"  Great!  I want to see it, let's go!  \"You can't see it—it's an invisible dragon.\"  Oh, I'd like to hear it then.  \"Sorry, it's an inaudible dragon.\"  I'd like to measure its carbon dioxide output.  \"It doesn't breathe.\"  I'll toss a bag of flour into the air, to outline its form.  \"The dragon is permeable to flour.\"

\n

One motive for trying to make your theory unfalsifiable, is that deep down you fear to put it to the test.  Sir Roger Penrose (physicist) and Stuart Hameroff (anesthesiologist) are substance dualists; they think that there is something mysterious going on in quantum mechanics, that Everett is wrong and that the \"collapse of the wave-function\" is physically real, and that this is where consciousness lives and how it exerts causal effect upon your lips when you say aloud \"I think therefore I am.\"  Believing this, they predicted that neurons would protect themselves from decoherence long enough to maintain macroscopic quantum states.

\n

This is in the process of being tested, and so far, prospects are not looking good for Penrose—

\n

—but Penrose's basic conduct is scientifically respectable.  Not Bayesian, maybe, but still fundamentally healthy.  He came up with a wacky hypothesis.  He said how to test it.  He went out and tried to actually test it.

\n

As I once said to Stuart Hameroff, \"I think the hypothesis you're testing is completely hopeless, and your experiments should definitely be funded.  Even if you don't find exactly what you're looking for, you're looking in a place where no one else is looking, and you might find something interesting.\"

\n

So a nasty dismissal of epiphenomenalism would be that zombie-ists are afraid to say the consciousness-stuff can have effects, because then scientists could go looking for the extra properties, and fail to find them.

\n

I don't think this is actually true of Chalmers, though.  If Chalmers lacked self-honesty, he could make things a lot easier on himself.

\n

(But just in case Chalmers is reading this and does have falsification-fear, I'll point out that if epiphenomenalism is false, then there is some other explanation for that-which-we-call consciousness, and it will eventually be found, leaving Chalmers's theory in ruins; so if Chalmers cares about his place in history, he has no motive to endorse epiphenomenalism unless he really thinks it's true.)

\n

Chalmers is one of the most frustrating philosophers I know.  Sometimes I wonder if he's pulling an \"Atheism Conquered\".  Chalmers does this really sharp analysis... and then turns left at the last minute.  He lays out everything that's wrong with the Zombie World scenario, and then, having reduced the whole argument to smithereens, calmly accepts it.

\n

Chalmers does the same thing when he lays out, in calm detail, the problem with saying that our own beliefs in consciousness are justified, when our zombie twins say exactly the same thing for exactly the same reasons and are wrong.

\n

On Chalmers's theory, Chalmers saying that he believes in consciousness cannot be causally justified; the belief is not caused by the fact itself.  In the absence of consciousness, Chalmers would write the same papers for the same reasons.

\n

On epiphenomenalism, Chalmers saying that he believes in consciousness cannot be justified as the product of a process that systematically outputs true beliefs, because the zombie twin writes the same papers using the same systematic process and is wrong.

\n

Chalmers admits this.  Chalmers, in fact, explains the argument in great detail in his book.  Okay, so Chalmers has solidly proven that he is not justified in believing in epiphenomenal consciousness, right?  No.  Chalmers writes:

\n
\n

Conscious experience lies at the center of our epistemic universe; we have access to it directly.  This raises the question: what is it that justifies our beliefs about our experiences, if it is not a causal link to those experiences, and if it is not the mechanisms by which the beliefs are formed?  I think the answer to this is clear: it is having the experiences that justifies the beliefs. For example, the very fact that I have a red experience now provides justification for my belief that I am having a red experience...

\n

Because my zombie twin lacks experiences, he is in a very different epistemic situation from me, and his judgments lack the corresponding justification.  It may be tempting to object that if my belief lies in the physical realm, its justification must lie in the physical realm; but this is a non sequitur. From the fact that there is no justification in the physical realm, one might conclude that the physical portion of me (my brain, say) is not justified in its belief. But the question is whether I am justified in the belief, not whether my brain is justified in the belief, and if property dualism is correct then there is more to me than my brain.

\n
\n

So—if I've got this thesis right—there's a core you, above and beyond your brain, that believes it is not a zombie, and directly experiences not being a zombie; and so its beliefs are justified.

\n

But Chalmers just wrote all that stuff down, in his very physical book, and so did the zombie-Chalmers.

\n

The zombie Chalmers can't have written the book because of the zombie's core self above the brain; there must be some entirely different reason, within the laws of physics.

\n

It follows that even if there is a part of Chalmers hidden away that is conscious and believes in consciousness, directly and without mediation, there is also a separable subspace of Chalmers—a causally closed cognitive subsystem that acts entirely within physics—and this \"outer self\" is what speaks Chalmers's internal narrative, and writes papers on consciousness.

\n

I do not see any way to evade the charge that, on Chalmers's own theory, this separable outer Chalmers is deranged.  This is the part of Chalmers that is the same in this world, or the Zombie World; and in either world it writes philosophy papers on consciousness for no valid reason.  Chalmers's philosophy papers are not output by that inner core of awareness and belief-in-awareness, they are output by the mere physics of the internal narrative that makes Chalmers's fingers strike the keys of his computer.

\n

And yet this deranged outer Chalmers is writing philosophy papers that just happen to be perfectly right, by a separate and additional miracle.  Not a logically necessary miracle (then the Zombie World would not be logically possible).  A physically contingent miracle, that happens to be true in what we think is our universe, even though science can never distinguish our universe from the Zombie World.

\n

Or at least, that would seem to be the implication of what the self-confessedly deranged outer Chalmers is telling us.

\n

I think I speak for all reductionists when I say Huh? 

\n

That's not epicycles.  That's, \"Planetary motions follow these epicycles—but epicycles don't actually do anything—there's something else that makes the planets move the same way the epicycles say they should, which I haven't been able to explain—and by the way, I would say this even if there weren't any epicycles.\"

\n

I have a nonstandard perspective on philosophy because I look at everything with an eye to designing an AI; specifically, a self-improving Artificial General Intelligence with stable motivational structure.

\n

When I think about designing an AI, I ponder principles like probability theory, the Bayesian notion of evidence as differential diagnostic, and above all, reflective coherence.  Any self-modifying AI that starts out in a reflectively inconsistent state won't stay that way for long.

\n

If a self-modifying AI looks at a part of itself that concludes \"B\" on condition A—a part of itself that writes \"B\" to memory whenever condition A is true—and the AI inspects this part, determines how it (causally) operates in the context of the larger universe, and the AI decides that this part systematically tends to write false data to memory, then the AI has found what appears to be a bug, and the AI will self-modify not to write \"B\" to the belief pool under condition A.


Any epistemological theory that disregards reflective coherence is not a good theory to use in constructing self-improving AI.  This is a knockdown argument from my perspective, considering what I intend to actually use philosophy for.  So I have to invent a reflectively coherent theory anyway.  And when I do, by golly, reflective coherence turns out to make intuitive sense.


So that's the unusual way in which I tend to think about these things.  And now I look back at Chalmers:


The causally closed \"outer Chalmers\" (that is not influenced in any way by the \"inner Chalmers\" that has separate additional awareness and beliefs) must be carrying out some systematically unreliable, unwarranted operation which in some unexplained fashion causes the internal narrative to produce beliefs about an \"inner Chalmers\" that are correct for no logical reason in what happens to be our universe.


But there's no possible warrant for the outer Chalmers or any reflectively coherent self-inspecting AI to believe in this mysterious correctness.  A good AI design should, I think, look like a reflectively coherent intelligence embodied in a causal system, with a testable theory of how that selfsame causal system produces systematically accurate beliefs on the way to achieving its goals.


So the AI will scan Chalmers and see a closed causal cognitive system producing an internal narrative that is uttering nonsense.  Nonsense that seems to have a high impact on what Chalmers thinks should be considered a morally valuable person.


This is not a necessary problem for Friendly AI theorists.  It is only a problem if you happen to be an epiphenomenalist.  If you believe either the reductionists (consciousness happens within the atoms) or the substance dualists (consciousness is causally potent immaterial stuff), people talking about consciousness are talking about something real, and a reflectively consistent Bayesian AI can see this by tracing back the chain of causality for what makes people say \"consciousness\".


According to Chalmers, the causally closed cognitive system of Chalmers's internal narrative is (mysteriously) malfunctioning in a way that, not by necessity, but just in our universe, miraculously happens to be correct.  Furthermore, the internal narrative asserts \"the internal narrative is mysteriously malfunctioning, but miraculously happens to be correctly echoing the justified thoughts of the epiphenomenal inner core\", and again, in our universe, miraculously happens to be correct.


Oh, come on!


Shouldn't there come a point where you just give up on an idea?  Where, on some raw intuitive level, you just go:  What on Earth was I thinking?


Humanity has accumulated some broad experience with what correct theories of the world look like.  This is not what a correct theory looks like.


\"Argument from incredulity,\" you say.  Fine, you want it spelled out?  The said Chalmersian theory postulates multiple unexplained complex miracles.  This drives down its prior probability, by the conjunction rule of probability and Occam's Razor.  It is therefore dominated by at least two theories which postulate fewer miracles, namely:


Compare to:


I know I'm speaking from limited experience, here.  But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy.


There are times when, as a rationalist, you have to believe things that seem weird to you.  Relativity seems weird, quantum mechanics seems weird, natural selection seems weird.


But these weirdnesses are pinned down by massive evidence.  There's a difference between believing something weird because science has confirmed it overwhelmingly—


—versus believing a proposition that seems downright deranged, because of a great big complicated philosophical argument centered around unspecified miracles and giant blank spots not even claimed to be understood—


—in a case where even if you accept everything that has been told to you so far, afterward the phenomenon will still seem like a mystery and still have the same quality of wondrous impenetrability that it had at the start.


The correct thing for a rationalist to say at this point, if all of David Chalmers's arguments seem individually plausible—which they don't seem to me—is:


\"Okay... I don't know how consciousness works... I admit that... and maybe I'm approaching the whole problem wrong, or asking the wrong questions... but this zombie business can't possibly be right.  The arguments aren't nailed down enough to make me believe this—especially when accepting it won't make me feel any less confused.  On a core gut level, this just doesn't look like the way reality could really really work.\"


Mind you, I am not saying this is a substitute for careful analytic refutation of Chalmers's thesis.  System 1 is not a substitute for System 2, though it can help point the way.  You still have to track down where the problems are specifically.


Chalmers wrote a big book, not all of which is available through free Google preview.  I haven't duplicated the long chains of argument where Chalmers lays out the arguments against himself in calm detail.  I've just tried to tack on a final refutation of Chalmers's last presented defense, which Chalmers has not yet countered to my knowledge.  Hit the ball back into his court, as it were.


But, yes, on a core level, the sane thing to do when you see the conclusion of the zombie argument, is to say \"That can't possibly be right\" and start looking for a flaw.

" } }, { "_id": "gRa5cWWBsZqdFvmqu", "title": "Reductive Reference", "pageUrl": "https://www.lesswrong.com/posts/gRa5cWWBsZqdFvmqu/reductive-reference", "postedAt": "2008-04-03T01:37:39.000Z", "baseScore": 60, "voteCount": 53, "commentCount": 46, "url": null, "contents": { "documentId": "gRa5cWWBsZqdFvmqu", "html": "

The reductionist thesis (as I formulate it) is that human minds, for reasons of efficiency, use a multi-level map in which we separately think about things like \"atoms\" and \"quarks\", \"hands\" and \"fingers\", or \"heat\" and \"kinetic energy\".  Reality itself, on the other hand, is single-level in the sense that it does not seem to contain atoms as separate, additional, causally efficacious entities over and above quarks.


Sadi Carnot formulated the precursor to the second law of thermodynamics using the caloric theory of heat, in which heat was just a fluid that flowed from hot things to cold things, produced by fire, making gases expand—the effects of heat were studied separately from the science of kinetics, considerably before the reduction took place.  If you're trying to design a steam engine, the effects of all those tiny vibrations and collisions which we name \"heat\" can be summarized into a much simpler description than the full quantum mechanics of the quarks.  Humans compute efficiently, thinking of only significant effects on goal-relevant quantities.


But reality itself does seem to use the full quantum mechanics of the quarks.  I once met a fellow who thought that if you used General Relativity to compute a low-velocity problem, like an artillery shell, GR would give you the wrong answer—not just a slow answer, but an experimentally wrong answer—because at low velocities, artillery shells are governed by Newtonian mechanics, not GR.  This is exactly how physics does not work.  Reality just seems to go on crunching through General Relativity, even when it only makes a difference at the fourteenth decimal place, which a human would regard as a huge waste of computing power.  Physics does it with brute force.  No one has ever caught physics simplifying its calculations—or if someone did catch it, the Matrix Lords erased the memory afterward.


Our map, then, is very much unlike the territory; our maps are multi-level, the territory is single-level.  Since the representation is so incredibly unlike the referent, in what sense can a belief like \"I am wearing socks\" be called true, when in reality itself, there are only quarks?


In case you've forgotten what the word \"true\" means, the classic definition was given by Alfred Tarski:


The statement \"snow is white\" is true if and only if snow is white.


In case you've forgotten what the difference is between the statement \"I believe 'snow is white'\" and \"'Snow is white' is true\", see here.  Truth can't be evaluated just by looking inside your own head—if you want to know, for example, whether \"the morning star = the evening star\", you need a telescope; it's not enough just to look at the beliefs themselves.


This is the point missed by the postmodernist folks screaming, \"But how do you know your beliefs are true?\"  When you do an experiment, you actually are going outside your own head.  You're engaging in a complex interaction whose outcome is causally determined by the thing you're reasoning about, not just your beliefs about it.  I once defined \"reality\" as follows:


Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I'm still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies 'belief', and the latter thingy 'reality'.


The interpretation of your experiment still depends on your prior beliefs.  I'm not going to talk, for the moment, about Where Priors Come From, because that is not the subject of this blog post.  My point is that truth refers to an ideal comparison between a belief and reality.  Because we understand that planets are distinct from beliefs about planets, we can design an experiment to test whether the belief \"the morning star and the evening star are the same planet\" is true.  This experiment will involve telescopes, not just introspection, because we understand that \"truth\" involves comparing an internal belief to an external fact; so we use an instrument, the telescope, whose perceived behavior we believe to depend on the external fact of the planet.


Believing that the telescope helps us evaluate the \"truth\" of \"morning star = evening star\", relies on our prior beliefs about the telescope interacting with the planet.  Again, I'm not going to address that in this particular blog post, except to quote one of my favorite Raymond Smullyan lines:  \"If the more sophisticated reader objects to this statement on the grounds of its being a mere tautology, then please at least give the statement credit for not being inconsistent.\"  Similarly, I don't see the use of a telescope as circular logic, but as reflective coherence; for every systematic way of arriving at truth, there ought to be a rational explanation for how it works.


The question on the table is what it means for \"snow is white\" to be true, when, in reality, there are just quarks.


There's a certain pattern of neural connections making up your beliefs about \"snow\" and \"whiteness\"—we believe this, but we do not know, and cannot concretely visualize, the actual neural connections.  Which are, themselves, embodied in a pattern of quarks even less known.  Out there in the world, there are water molecules whose temperature is low enough that they have arranged themselves in tiled repeating patterns; they look nothing like the tangles of neurons.  In what sense, comparing one (ever-fluctuating) pattern of quarks to the other, is the belief \"snow is white\" true?


Obviously, neither I nor anyone else can offer an Ideal Quark Comparer Function that accepts a quark-level description of a neurally embodied belief (including the surrounding brain) and a quark-level description of a snowflake (and the surrounding laws of optics), and outputs \"true\" or \"false\" over \"snow is white\".  And who says the fundamental level is really about particle fields?


On the other hand, throwing out all beliefs because they aren't written as gigantic unmanageable specifications about quarks we can't even see... doesn't seem like a very prudent idea.  Not the best way to optimize our goals. 


It seems to me that a word like \"snow\" or \"white\" can be taken as a kind of promissory note—not a known specification of exactly which physical quark configurations count as \"snow\", but, nonetheless, there are things you call snow and things you don't call snow, and even if you got a few items wrong (like plastic snow), an Ideal Omniscient Science Interpreter would see a tight cluster in the center and redraw the boundary to have a simpler definition.
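
A toy version of that boundary-redrawing, in Python (entirely my own illustration; the two-dimensional \"feature vectors\" and the 0.5 radius are invented for the example):

```python
def centroid(points):
    """Mean position of a set of equal-length feature vectors."""
    return tuple(sum(xs) / len(points) for xs in zip(*points))

def redraw_boundary(examples, radius=0.5):
    """Keep only the examples near the tight central cluster."""
    c = centroid(examples)
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5
    return [p for p in examples if dist(p) <= radius]

called_snow = [(0.9, 0.8), (1.0, 0.9), (0.95, 0.85),
               (0.2, 0.1)]                  # the last one: plastic snow
print(redraw_boundary(called_snow))         # the mislabeled outlier is dropped
```

The few items you got wrong fall outside the redrawn boundary, while the tight cluster in the center stays in.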


In a single-layer universe whose bottom layer is unknown, or uncertain, or just too large to talk about, the concepts in a multi-layer mind can be said to represent a kind of promissory note—we don't know what they correspond to, out there.  But it seems to us that we can distinguish positive from negative cases, in a predictively productive way, so we think—perhaps in a fully general sense—that there is some difference of quarks, some difference of configurations at the fundamental level, which explains the differences that feed into our senses, and ultimately result in our saying \"snow\" or \"not snow\".


I see this white stuff, and it is the same on several occasions, so I hypothesize a stable latent cause in the environment—I give it the name \"snow\"; \"snow\" is then a promissory note referring to a believed-in simple boundary that could be drawn around the unseen causes of my experience.


Hilary Putnam's \"Twin Earth\" thought experiment, where water is not H20 but some strange other substance denoted XYZ, otherwise behaving much like water, and the subsequent philosophical debate, helps to highlight this issue.  \"Snow\" doesn't have a logical definition known to us—it's more like an empirically determined pointer to a logical definition.  This is true even if you believe that snow is ice crystals is low-temperature tiled water molecules.  The water molecules are made of quarks.  What if quarks turn out to be made of something else?  What is a snowflake, then?  You don't know—but it's still a snowflake, not a fire hydrant.


And of course, these very paragraphs I have just written, are likewise far above the level of quarks.  \"Sensing white stuff, visually categorizing it, and thinking 'snow' or 'not snow'\"—this is also talking very far above the quarks.  So my meta-beliefs are also promissory notes, for things that an Ideal Omniscient Science Interpreter might know about which configurations of the quarks (or whatever) making up my brain, correspond to \"believing 'snow is white'\".


But then, the entire grasp that we have upon reality, is made up of promissory notes of this kind.  So, rather than calling it circular, I prefer to call it self-consistent.


This can be a bit unnerving—maintaining a precarious epistemic perch, in both object-level beliefs and reflection, far above a huge unknown underlying fundamental reality, and hoping one doesn't fall off.


On reflection, though, it's hard to see how things could be any other way.


So at the end of the day, the statement \"reality does not contain hands as fundamental, additional, separate causal entities, over and above quarks\" is not the same statement as \"hands do not exist\" or \"I don't have any hands\".  There are no fundamental hands; hands are made of fingers, palm, and thumb, which in turn are made of muscle and bone, all the way down to elementary particle fields, which are the fundamental causal entities, so far as we currently know.


This is not the same as saying, \"there are no 'hands'.\"  It is not the same as saying, \"the word 'hands' is a promissory note that will never be paid, because there is no empirical cluster that corresponds to it\"; or \"the 'hands' note will never be paid, because it is logically impossible to reconcile its supposed characteristics\"; or \"the statement 'humans have hands' refers to a sensible state of affairs, but reality is not in that state\".


Just:  There are patterns that exist in reality where we see \"hands\", and these patterns have something in common, but they are not fundamental.


If I really had no hands—if reality suddenly transitioned to be in a state that we would describe as \"Eliezer has no hands\"—reality would shortly thereafter correspond to a state we would describe as \"Eliezer screams as blood jets out of his wrist stumps\".


And this is true, even though the above paragraph hasn't specified any quark positions.


The previous sentence is likewise meta-true.


The map is multilevel, the territory is single-level.  This doesn't mean that the higher levels \"don't exist\", like looking in your garage for a dragon and finding nothing there, or like seeing a mirage in the desert and forming an expectation of drinkable water when there is nothing to drink.  The higher levels of your map are not false, without referent; they have referents in the single level of physics.  It's not that the wings of an airplane unexist—then the airplane would drop out of the sky.  The \"wings of an airplane\" exist explicitly in an engineer's multilevel model of an airplane, and the wings of an airplane exist implicitly in the quantum physics of the real airplane.  Implicit existence is not the same as nonexistence.  The exact description of this implicitness is not known to us—is not explicitly represented in our map.  But this does not prevent our map from working, or even prevent it from being true.


Though it is a bit unnerving to contemplate that every single concept and belief in your brain, including these meta-concepts about how your brain works and why you can form accurate beliefs, is perched orders and orders of magnitude above reality...

" } }, { "_id": "nzzNFcrSk7akQ9bwD", "title": "Brain Breakthrough! It's Made of Neurons!", "pageUrl": "https://www.lesswrong.com/posts/nzzNFcrSk7akQ9bwD/brain-breakthrough-it-s-made-of-neurons", "postedAt": "2008-04-01T19:00:59.000Z", "baseScore": 72, "voteCount": 70, "commentCount": 30, "url": null, "contents": { "documentId": "nzzNFcrSk7akQ9bwD", "html": "

In an amazing breakthrough, a multinational team of scientists led by Nobel laureate Santiago Ramón y Cajal announced that the brain is composed of a ridiculously complicated network of tiny cells connected to each other by infinitesimal threads and branches.


The multinational team—which also includes the famous technician Antonie van Leeuwenhoek, and possibly Imhotep, promoted to the Egyptian god of medicine—issued this statement:


\"The present discovery culminates years of research indicating that the convoluted squishy thing inside our skulls is even more complicated than it looks.  Thanks to Cajal's application of a new staining technique invented by Camillo Golgi, we have learned that this structure is not a continuous network like the blood vessels of the body, but is actually composed of many tiny cells, or \"neurons\", connected to one another by even more tiny filaments.


\"Other extensive evidence, beginning from Greek medical researcher Alcmaeon and continuing through Paul Broca's research on speech deficits, indicates that the brain is the seat of reason.


\"Nemesius, the Bishop of Emesia, has previously argued that brain tissue is too earthy to act as an intermediary between the body and soul, and so the mental faculties are located in the ventricles of the brain.  However, if this is correct, there is no reason why this organ should turn out to have an immensely complicated internal composition.


\"Charles Babbage has independently suggested that many small mechanical devices could be collected into an 'Analytical Engine', capable of performing activities, such as arithmetic, which are widely believed to require thought.  The work of Luigi Galvani and Hermann von Helmholtz suggests that the activities of neurons are electrochemical in nature, rather than mechanical pressures as previously believed.  Nonetheless, we think an analogy with Babbage's 'Analytical Engine' suggests that a vastly complicated network of neurons could similarly exhibit thoughtful properties.


\"We have found an enormously complicated material system located where the mind should be.  The implications are shocking, and must be squarely faced.  We believe that the present research offers strong experimental evidence that Benedictus Spinoza was correct, and René Descartes wrong:  Mind and body are of one substance.


\"In combination with the work of Charles Darwin showing how such a complicated organ could, in principle, have arisen as the result of processes not themselves intelligent, the bulk of scientific evidence now seems to indicate that intelligence is ontologically non-fundamental and has an extended origin in time.  This strongly weighs against theories which assign mental entities an ontologically fundamental or causally primal status, including all religions ever invented.


\"Much work remains to be done on discovering the specific identities between electrochemical interactions between neurons, and thoughts.  Nonetheless, we believe our discovery offers the promise, though not yet the realization, of a full scientific account of thought.  The problem may now be declared, if not solved, then solvable.\"


We regret that Cajal and most of the other researchers involved on the Project are no longer available for comment.

" } }, { "_id": "ne6Ra62FB9ACHGSuh", "title": "Heat vs. Motion", "pageUrl": "https://www.lesswrong.com/posts/ne6Ra62FB9ACHGSuh/heat-vs-motion", "postedAt": "2008-04-01T03:55:19.000Z", "baseScore": 53, "voteCount": 42, "commentCount": 70, "url": null, "contents": { "documentId": "ne6Ra62FB9ACHGSuh", "html": "

After yesterday's post, it occurred to me that there's a much simpler example of reductionism jumping a gap of apparent-difference-in-kind: the reduction of heat to motion.


Today, the equivalence of heat and motion may seem too obvious in hindsight: everyone says that \"heat is motion\", therefore, it can't be a \"weird\" belief.


But there was a time when the kinetic theory of heat was a highly controversial scientific hypothesis, contrasting to belief in a caloric fluid that flowed from hot objects to cold objects.  Still earlier, the main theory of heat was \"Phlogiston!\"


Suppose you'd separately studied kinetic theory and caloric theory.  You now know something about kinetics: collisions, elastic rebounds, momentum, kinetic energy, gravity, inertia, free trajectories.  Separately, you know something about heat:  Temperatures, pressures, combustion, heat flows, engines, melting, vaporization.


Not only is this state of knowledge a plausible one, it is the state of knowledge possessed by e.g. Sadi Carnot, who, working strictly from within the caloric theory of heat, developed the principle of the Carnot cycle—a heat engine of maximum efficiency, whose existence implies the second law of thermodynamics.  This in 1824, when kinetics was a highly developed science.


Suppose, like Carnot, you know a great deal about kinetics, and a great deal about heat, as separate entities.  Separate entities of knowledge, that is: your brain has separate filing baskets for beliefs about kinetics and beliefs about heat.  But from the inside, this state of knowledge feels like living in a world of moving things and hot things, a world where motion and heat are independent properties of matter.


Now a Physicist From The Future comes along and tells you:  \"Where there is heat, there is motion, and vice versa.  That's why, for example, rubbing things together makes them hotter.\"


There are (at least) two possible interpretations you could attach to this statement, \"Where there is heat, there is motion, and vice versa.\"


First, you could suppose that heat and motion exist separately—that the caloric theory is correct—but that among our universe's physical laws is a \"bridging law\" which states that, where objects are moving quickly, caloric will come into existence.  And conversely, another bridging law says that caloric can exert pressure on things and make them move, which is why a hotter gas exerts more pressure on its enclosure (thus a steam engine can use steam to drive a piston).


Second, you could suppose that heat and motion are, in some as-yet-mysterious sense, the same thing.


\"Nonsense,\" says Thinker 1, \"the words 'heat' and 'motion' have two different meanings; that is why we have two different words.  We know how to determine when we will call an observed phenomenon 'heat'—heat can melt things, or make them burst into flame.  We know how to determine when we will say that an object is 'moving quickly'—it changes position; and when it crashes, it may deform, or shatter.  Heat is concerned with change of substance; motion, with change of position and shape.  To say that these two words have the same meaning is simply to confuse yourself.\"


\"Impossible,\" says Thinker 2.  \"It may be that, in our world, heat and motion are associated by bridging laws, so that it is a law of physics that motion creates caloric, and vice versa.  But I can easily imagine a world where rubbing things together does not make them hotter, and gases don't exert more pressure at higher temperatures.  Since there are possible worlds where heat and motion are not associated, they must be different properties—this is true a priori.\"


Thinker 1 is confusing the quotation and the referent.  2 + 2 = 4, but \"2 + 2\" ≠ \"4\".  The string \"2 + 2\" contains 5 characters (including whitespace) and the string \"4\" contains only 1 character.  If you type the two strings into a Python interpreter, they yield the same output, 4.  So you can't conclude, from looking at the strings \"2 + 2\" and \"4\", that just because the strings are different, they must have different \"meanings\" relative to the Python Interpreter.
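
Concretely, at a Python prompt:

```python
>>> "2 + 2" == "4"               # the quotations are different strings...
False
>>> len("2 + 2"), len("4")       # ...of 5 characters and 1, respectively...
(5, 1)
>>> eval("2 + 2") == eval("4")   # ...but they denote the same number
True
```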


The words \"heat\" and \"kinetic energy\" can be said to \"refer to\" the same thing, even before we know how heat reduces to motion, in the sense that we don't know yet what the reference is, but the references are in fact the same.  You might imagine an Idealized Omniscient Science Interpreter that would give the same output when we typed in \"heat\" and \"kinetic energy\" on the command line.


I talk about the Science Interpreter to emphasize that, to dereference the pointer, you've got to step outside cognition.  The end result of the dereference is something out there in reality, not in anyone's mind.  So you can say \"real referent\" or \"actual referent\", but you can't evaluate the words locally, from the inside of your own head.  You can't reason using the actual heat-referent—if you thought using real heat, thinking \"1 million Kelvin\" would vaporize your brain.  But, by forming a belief about your belief about heat, you can talk about your belief about heat, and say things like \"It's possible that my belief about heat doesn't much resemble real heat.\"  You can't actually perform that comparison right there in your own mind, but you can talk about it.


Hence you can say, \"My beliefs about heat and motion are not the same beliefs, but it's possible that actual heat and actual motion are the same thing.\"  It's just like being able to acknowledge that \"the morning star\" and \"the evening star\" might be the same planet, while also understanding that you can't determine this just by examining your beliefs—you've got to haul out the telescope.


Thinker 2's mistake follows similarly.  A physicist told him, \"Where there is heat, there is motion\" and Thinker 2 mistook this for a statement of physical law:  The presence of caloric causes the existence of motion.  What the physicist really means is more akin to an inferential rule:  Where you are told there is \"heat\", deduce the presence of \"motion\".


From this basic projection of a multilevel model into a multilevel reality follows another, distinct error: the conflation of conceptual possibility with logical possibility.  To Sadi Carnot, it is conceivable that there could be another world where heat and motion are not associated.  To Richard Feynman, armed with specific knowledge of how to derive equations about heat from equations about motion, this idea is not only inconceivable, but so wildly inconsistent as to make one's head explode. 


I should note, in fairness to philosophers, that there are philosophers who have said these things.  For example, Hilary Putnam, writing on the \"Twin Earth\" thought experiment:


Once we have discovered that water (in the actual world) is H2O, nothing counts as a possible world in which water isn't H2O.  In particular, if a \"logically possible\" statement is one that holds in some \"logically possible world\", it isn't logically possible that water isn't H2O.


On the other hand, we can perfectly well imagine having experiences that would convince us (and that would make it rational to believe that) water isn't H2O.  In that sense, it is conceivable that water isn't H2O.  It is conceivable but it isn't logically possible!  Conceivability is no proof of logical possibility.


It appears to me that \"water\" is being used in two different senses in these two paragraphs—one in which the word \"water\" refers to what we type into the Science Interpreter, and one in which \"water\" refers to what we get out of the Science Interpreter when we type \"water\" into it.  In the first paragraph, Hilary seems to be saying that after we do some experiments and find out that water is H2O, water becomes automatically redefined to mean H2O.  But you could coherently hold a different position about whether the word \"water\" now means \"H2O\" or \"whatever is really in that bottle next to me\", so long as you use your terms consistently.


I believe the above has already been said as well?  Anyway...


It is quite possible for there to be only one thing out-there-in-the-world, but for it to take on sufficiently different forms, and for you yourself to be sufficiently ignorant of the reduction, that it feels like living in a world containing two entirely different things.  Knowledge concerning these two different phenomena may be taught in two different classes, and studied by two different academic fields, located in two different buildings of your university.


You've got to put yourself quite a ways back, into a historically realistic frame of mind, to remember how different heat and motion once seemed.  Though, depending on how much you know today, it may not be as hard as all that, if you can look past the pressure of conventionality (that is, \"heat is motion\" is an un-weird belief, \"heat is not motion\" is a weird belief).  I mean, suppose that tomorrow the physicists stepped forward and said, \"Our popularizations of science have always contained one lie.  Actually, heat has nothing to do with motion.\"  Could you prove they were wrong?


Saying \"Maybe heat and motion are the same thing!\" is easy.  The difficult part is explaining how.  It takes a great deal of detailed knowledge to get yourself to the point where you can no longer conceive of a world in which the two phenomena go separate ways.  Reduction isn't cheap, and that's why it buys so much.


Or maybe you could say:  \"Reductionism is easy, reduction is hard.\"  But it does kinda help to be a reductionist, I think, when it comes time to go looking for a reduction.

" } }, { "_id": "ddwk9veF8efn3Nzbu", "title": "Angry Atoms", "pageUrl": "https://www.lesswrong.com/posts/ddwk9veF8efn3Nzbu/angry-atoms", "postedAt": "2008-03-31T00:28:10.000Z", "baseScore": 67, "voteCount": 57, "commentCount": 59, "url": null, "contents": { "documentId": "ddwk9veF8efn3Nzbu", "html": "

Fundamental physics—quarks 'n stuff—is far removed from the levels we can see, like hands and fingers.  At best, you can know how to replicate the experiments which show that your hand (like everything else) is composed of quarks, and you may know how to derive a few equations for things like atoms and electron clouds and molecules.


At worst, the existence of quarks beneath your hand may just be something you were told.  In which case it's questionable in what sense you can be said to \"know\" it at all, even if you repeat back the same word \"quark\" that a physicist would use to convey knowledge to another physicist.


Either way, you can't actually see the identity between levels—no one has a brain large enough to visualize avogadros of quarks and recognize a hand-pattern in them.


But we at least understand what hands do.  Hands push on things, exert forces on them.  When we're told about atoms, we visualize little billiard balls bumping into each other.  This makes it seem obvious that \"atoms\" can push on things too, by bumping into them.


Now this notion of atoms is not quite correct.  But so far as human imagination goes, it's relatively easy to imagine our hand being made up of a little galaxy of swirling billiard balls, pushing on things when our \"fingers\" touch them.  Democritus imagined this 2400 years ago, and there was a time, roughly 1803-1922, when Science thought he was right.


But what about, say, anger?


How could little billiard balls be angry?  Tiny frowny faces on the billiard balls?


Put yourself in the shoes of, say, a hunter-gatherer—someone who may not even have a notion of writing, let alone the notion of using base matter to perform computations—someone who has no idea that such a thing as neurons exist.  Then you can imagine the functional gap that your ancestors might have perceived between billiard balls and \"Grrr!  Aaarg!\"


Forget about subjective experience for the moment, and consider the sheer behavioral gap between anger and billiard balls.  The difference between what little billiard balls do, and what anger makes people do. Anger can make people raise their fists and hit someone—or say snide things behind their backs—or plant scorpions in their tents at night.  Billiard balls just push on things.


Try to put yourself in the shoes of the hunter-gatherer who's never had the \"Aha!\" of information-processing.  Try to avoid hindsight bias about things like neurons and computers.  Only then will you be able to see the uncrossable explanatory gap:


How can you explain angry behavior in terms of billiard balls?


Well, the obvious materialist conjecture is that the little billiard balls push on your arm and make you hit someone, or push on your tongue so that insults come out.


But how do the little billiard balls know how to do this—or how to guide your tongue and fingers through long-term plots—if they aren't angry themselves?


And besides, if you're not seduced by—gasp!—scientism, you can see from a first-person perspective that this explanation is obviously false.  Atoms can push on your arm, but they can't make you want anything.


Someone may point out that drinking wine can make you angry.  But who says that wine is made exclusively of little billiard balls?  Maybe wine just contains a potency of angerness.


Clearly, reductionism is just a flawed notion.


(The novice goes astray and says \"The art failed me\"; the master goes astray and says \"I failed my art.\")


What does it take to cross this gap?  It's not just the idea of \"neurons\" that \"process information\"—if you say only this and nothing more, it just inserts a magical, unexplained level-crossing rule into your model, where you go from billiards to thoughts.


But an Artificial Intelligence programmer who knows how to create a chess-playing program out of base matter, has taken a genuine step toward crossing the gap.  If you understand concepts like consequentialism, backward chaining, utility functions, and search trees, you can make merely causal/mechanical systems compute plans.


The trick goes something like this:  For each possible chess move, compute the moves your opponent could make, then your responses to those moves, and so on; evaluate the furthest position you can see using some local algorithm (you might simply count up the material); then trace back using minimax to find the best move on the current board; then make that move.
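
A toy sketch of that procedure in Python (illustrative only; the board interface with `legal_moves`, `apply`, `is_over`, and `material_count` is assumed here, and a real engine would add alpha-beta pruning and a smarter evaluation):

```python
def minimax(board, depth, maximizing):
    """Look `depth` plies ahead; return the best score reachable from here."""
    if depth == 0 or board.is_over():
        return board.material_count()   # the "local algorithm": count material
    scores = [minimax(board.apply(move), depth - 1, not maximizing)
              for move in board.legal_moves()]
    return max(scores) if maximizing else min(scores)

def best_move(board, depth):
    """Trace back from the leaf evaluations to the best current move."""
    return max(board.legal_moves(),
               key=lambda move: minimax(board.apply(move), depth - 1, False))
```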


More generally:  If you have chains of causality inside the mind that have a kind of mapping—a mirror, an echo—to what goes on in the environment, then you can run a utility function over the end products of imagination, and find an action that achieves something which the utility function rates highly, and output that action.  It is not necessary for the chains of causality inside the mind, that are similar to the environment, to be made out of billiard balls that have little auras of intentionality.  Deep Blue's transistors do not need little chess pieces carved on them, in order to work.  See also The Simple Truth.


All this is still tremendously oversimplified, but it should, at least, reduce the apparent length of the gap.  If you can understand all that, you can see how a planner built out of base matter can be influenced by alcohol to output more angry behaviors.  The billiard balls in the alcohol push on the billiard balls making up the utility function.


But even if you know how to write small AIs, you can't visualize the level-crossing between transistors and chess.  There are too many transistors, and too many moves to check.


Likewise, even if you knew all the facts of neurology, you would not be able to visualize the level-crossing between neurons and anger—let alone the level-crossing between atoms and anger.  Not the way you can visualize a hand consisting of fingers, thumb, and palm.


And suppose a cognitive scientist just flatly tells you \"Anger is hormones\"?  Even if you repeat back the words, it doesn't mean you've crossed the gap.  You may believe you believe it, but that's not the same as understanding what little billiard balls have to do with wanting to hit someone.


So you come up with interpretations like, \"Anger is mere hormones, it's caused by little molecules, so it must not be justified in any moral sense—that's why you should learn to control your anger.\"


Or, \"There isn't really any such thing as anger—it's an illusion, a quotation with no referent, like a mirage of water in the desert, or looking in the garage for a dragon and not finding one.\"


These are both tough pills to swallow (not that you should swallow them) and so it is a good deal easier to profess them than to believe them.


I think this is what non-reductionists/non-materialists think they are criticizing when they criticize reductive materialism.


But materialism isn't that easy.  It's not as cheap as saying, \"Anger is made out of atoms—there, now I'm done.\"  That wouldn't explain how to get from billiard balls to hitting.  You need the specific insights of computation, consequentialism, and search trees before you can start to close the explanatory gap.


All this was a relatively easy example by modern standards, because I restricted myself to talking about angry behaviors.  Talking about outputs doesn't require you to appreciate how an algorithm feels from inside (cross a first-person/third-person gap) or dissolve a wrong question (untangle places where the interior of your own mind runs skew to reality).


Going from material substances that bend and break, burn and fall, push and shove, to angry behavior, is just a practice problem by the standards of modern philosophy.  But it is an important practice problem.  It can only be fully appreciated, if you realize how hard it would have been to solve before writing was invented.  There was once an explanatory gap here—though it may not seem that way in hindsight, now that it's been bridged for generations.


Explanatory gaps can be crossed, if you accept help from science, and don't trust the view from the interior of your own mind.

" } }, { "_id": "KmghfjH6RgXvoKruJ", "title": "Hand vs. Fingers", "pageUrl": "https://www.lesswrong.com/posts/KmghfjH6RgXvoKruJ/hand-vs-fingers", "postedAt": "2008-03-30T00:36:05.000Z", "baseScore": 83, "voteCount": 58, "commentCount": 94, "url": null, "contents": { "documentId": "KmghfjH6RgXvoKruJ", "html": "

Back to our original topic:  Reductionism, which (in case you've forgotten) is part of a sequence on the Mind Projection Fallacy.  There can be emotional problems in accepting reductionism, if you think that things have to be fundamental to be fun.  But this position commits us to never taking joy in anything more complicated than a quark, and so I prefer to reject it.


To review, the reductionist thesis is that we use multi-level models for computational reasons, but physical reality has only a single level.  If this doesn't sound familiar, please reread \"Reductionism\".


Today I'd like to pose the following conundrum:  When you pick up a cup of water, is it your hand that picks it up?


Most people, of course, go with the naive popular answer:  \"Yes.\"


Recently, however, scientists have made a stunning discovery:  It's not your hand that holds the cup, it's actually your fingers, thumb, and palm.


Yes, I know!  I was shocked too.  But it seems that after scientists measured the forces exerted on the cup by each of your fingers, your thumb, and your palm, they found there was no force left over—so the force exerted by your hand must be zero.


The theme here is that, if you can see how (not just know that) a higher level reduces to a lower one, they will not seem like separate things within your map; you will be able to see how silly it is to think that your fingers could be in one place, and your hand somewhere else; you will be able to see how silly it is to argue about whether it is your hand that picks up the cup, or your fingers.


The operative word is \"see\", as in concrete visualization.  Imagining your hand causes you to imagine the fingers and thumb and palm; conversely, imagining fingers and thumb and palm causes you to identify a hand in the mental picture.  Thus the high level of your map and the low level of your map will be tightly bound together in your mind.


In reality, of course, the levels are bound together even tighter than that—bound together by the tightest possible binding: physical identity.  You can see this:  You can see that saying (1) \"hand\" or (2) \"fingers and thumb and palm\", does not refer to different things, but different points of view.


But suppose you lack the knowledge to so tightly bind together the levels of your map.  For example, you could have a \"hand scanner\" that showed a \"hand\" as a dot on a map (like an old-fashioned radar display), and similar scanners for fingers/thumbs/palms; then you would see a cluster of dots around the hand, but you would be able to imagine the hand-dot moving off from the others.  So, even though the physical reality of the hand (that is, the thing the dot corresponds to) was identical with / strictly composed of the physical realities of the fingers and thumb and palm, you would not be able to see this fact; even if someone told you, or you guessed from the correspondence of the dots, you would only know the fact of reduction, not see it.  You would still be able to imagine the hand dot moving around independently, even though, if the physical makeup of the sensors were held constant, it would be physically impossible for this to actually happen.


Or, at a still lower level of binding, people might just tell you \"There's a hand over there, and some fingers over there\"—in which case you would know little more than a Good-Old-Fashioned AI representing the situation using suggestively named LISP tokens.  There wouldn't be anything obviously contradictory about asserting:


⊢ Inside(Room, Hand)
⊢ ~Inside(Room, Fingers)


because you would not possess the knowledge


⊢ Inside(x, Hand) -> Inside(x, Fingers)


None of this says that a hand can actually detach its existence from your fingers and crawl, ghostlike, across the room; it just says that a Good-Old-Fashioned AI with a propositional representation may not know any better.  The map is not the territory.
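
To make that concrete, here is a minimal Python sketch holding the propositions as opaque strings, just as the suggestively named tokens would be (my own illustration; with an empty `rules` table, the clash below goes entirely undetected):

```python
facts   = {"Inside(Room, Hand)"}
denials = {"Inside(Room, Fingers)"}    # i.e. asserted ~Inside(Room, Fingers)
rules   = {"Inside(Room, Hand)": "Inside(Room, Fingers)"}   # the missing knowledge

derived = set(facts)
changed = True
while changed:                         # naive forward chaining to a fixed point
    changed = False
    for premise, conclusion in rules.items():
        if premise in derived and conclusion not in derived:
            derived.add(conclusion)
            changed = True

print(derived & denials)   # non-empty: contradiction found, thanks to the rule
```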


In particular, you shouldn't draw too many conclusions from how it seems conceptually possible, in the mind of some specific conceiver, to separate the hand from its constituent elements of fingers, thumb, and palm.  Conceptual possibility is not the same as logical possibility or physical possibility.


It is conceptually possible to you that 235757 is prime, because you don't know any better.  But it isn't logically possible that 235757 is prime; if you were logically omniscient, 235757 would be obviously composite (and you would know the factors).  That's why we have the notion of impossible possible worlds, so that we can put probability distributions on propositions that may or may not be in fact logically impossible.
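
In this particular case the conceptual possibility is cheap to dispel; a few lines of Python suffice:

```python
def smallest_factor(n):
    """Trial division: return the smallest nontrivial factor of n, or n if prime."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n   # no divisor found, so n is prime

print(smallest_factor(235757))   # -> 431
print(235757 // 431)             # -> 547, so 235757 = 431 * 547
```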


And you can imagine philosophers who criticize \"eliminative fingerists\" who contradict the direct facts of experience—we can feel our hand holding the cup, after all—by suggesting that \"hands\" don't really exist, in which case, obviously, the cup would fall down.  And philosophers who suggest \"appendigital bridging laws\" to explain how a particular configuration of fingers, evokes a hand into existence—with the note, of course, that while our world contains those particular appendigital bridging laws, the laws could have been conceivably different, and so are not in any sense necessary facts, etc.


All of these are cases of Mind Projection Fallacy, and what I call \"naive philosophical realism\"—the confusion of philosophical intuitions for direct, veridical information about reality.  Your inability to imagine something is just a computational fact about what your brain can or can't imagine.  Another brain might work differently.

" } }, { "_id": "fnEWQAYxcRnaYBqaZ", "title": "Initiation Ceremony", "pageUrl": "https://www.lesswrong.com/posts/fnEWQAYxcRnaYBqaZ/initiation-ceremony", "postedAt": "2008-03-28T20:40:03.000Z", "baseScore": 129, "voteCount": 119, "commentCount": 99, "url": null, "contents": { "documentId": "fnEWQAYxcRnaYBqaZ", "html": "

The torches that lit the narrow stairwell burned intensely and in the wrong color, flame like melting gold or shattered suns.

192... 193...

Brennan's sandals clicked softly on the stone steps, snicking in sequence, like dominos very slowly falling.

227... 228...

Half a circle ahead of him, a trailing fringe of dark cloth whispered down the stairs, the robed figure itself staying just out of sight.

239... 240...

Not much longer, Brennan predicted to himself, and his guess was accurate:

Sixteen times sixteen steps was the number, and they stood before the portal of glass.

The great curved gate had been wrought with cunning, humor, and close attention to indices of refraction: it warped light, bent it, folded it, and generally abused it, so that there were hints of what was on the other side (stronger light sources, dark walls) but no possible way of seeing through—unless, of course, you had the key: the counter-door, thick for thin and thin for thick, in which case the two would cancel out.

From the robed figure beside Brennan, two hands emerged, gloved in reflective cloth to conceal skin's color.  Fingers like slim mirrors grasped the handles of the warped gate—handles that Brennan had not guessed; in all that distortion, shapes could only be anticipated, not seen.

\"Do you want to know?\" whispered the guide; a whisper nearly as loud as an ordinary voice, but not revealing the slightest hint of gender.

Brennan paused.  The answer to the question seemed suspiciously, indeed extraordinarily obvious, even for ritual.

 

\"Yes,\" Brennan said finally.

The guide only regarded him silently.

\"Yes, I want to know,\" said Brennan.

\"Know what, exactly?\" whispered the figure.

Brennan's face scrunched up in concentration, trying to visualize the game to its end, and hoping he hadn't blown it already; until finally he fell back on the first and last resort, which is the truth:

\"It doesn't matter,\" said Brennan, \"the answer is still yes.\"

The glass gate parted down the middle, and slid, with only the tiniest scraping sound, into the surrounding stone.

The revealed room was lined, wall-to-wall, with figures robed and hooded in light-absorbing cloth.  The straight walls were not themselves black stone, but mirrored, tiling a square grid of dark robes out to infinity in all directions; so that it seemed as if the people of some much vaster city, or perhaps the whole human kind, watched in assembly.  There was a hint of moist warmth in the air of the room, the breath of the gathered: a scent of crowds.

Brennan's guide moved to the center of the square, where burned four torches of that relentless yellow flame.  Brennan followed, and when he stopped, he realized with a slight shock that all the cowled hoods were now looking directly at him.  Brennan had never before in his life been the focus of such absolute attention; it was frightening, but not entirely unpleasant.

\"He is here,\" said the guide in that strange loud whisper.

The endless grid of robed figures replied in one voice: perfectly blended, exactly synchronized, so that not a single individual could be singled out from the rest, and betrayed:

\"Who is absent?\"

\"Jakob Bernoulli,\" intoned the guide, and the walls replied:

\"Is dead but not forgotten.\"

\"Abraham de Moivre,\"

\"Is dead but not forgotten.\"

\"Pierre-Simon Laplace,\"

\"Is dead but not forgotten.\"

\"Edwin Thompson Jaynes,\"

\"Is dead but not forgotten.\"

\"They died,\" said the guide, \"and they are lost to us; but we still have each other, and the project continues.\"

In the silence, the guide turned to Brennan, and stretched forth a hand, on which rested a small ring of nearly transparent material.

Brennan stepped forward to take the ring—

But the hand clenched tightly shut.

\"If three-fourths of the humans in this room are women,\" said the guide, \"and three-fourths of the women and half of the men belong to the Heresy of Virtue, and I am a Virtuist, what is the probability that I am a man?\"

\"Two-elevenths,\" Brennan said confidently.

There was a moment of absolute silence.

Then a titter of shocked laughter.

The guide's whisper came again, truly quiet this time, almost nonexistent:  \"It's one-sixth, actually.\"

Brennan's cheeks were flaming so hard that he thought his face might melt off.  The instinct was very strong to run out of the room and up the stairs and flee the city and change his name and start his life over again and get it right this time.

\"An honest mistake is at least honest,\" said the guide, louder now, \"and we may know the honesty by its relinquishment.  If I am a Virtuist, what is the probability that I am a man?\"

\"One—\" Brennan started to say.

Then he stopped.  Again, the horrible silence.

\"Just say 'one-sixth' already,\" stage-whispered the figure, this time loud enough for the walls to hear; then there was more laughter, not all of it kind.

Brennan was breathing rapidly and there was sweat on his forehead.  If he was wrong about this, he really was going to flee the city.  \"Three fourths women times three fourths Virtuists is nine sixteenths female Virtuists in this room.  One fourth men times one half Virtuists is two sixteenths male Virtuists.  If I have only that information and the fact that you are a Virtuist, I would then estimate odds of two to nine, or a probability of two-elevenths, that you are male.  Though I do not, in fact, believe the information given is correct.  For one thing, it seems too neat.  For another, there are an odd number of people in this room.\"
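
(Brennan's arithmetic, written out as Bayes's theorem:)

$$P(\mathrm{man}\mid\mathrm{Virtuist}) \;=\; \frac{\frac{1}{2}\cdot\frac{1}{4}}{\frac{3}{4}\cdot\frac{3}{4} + \frac{1}{2}\cdot\frac{1}{4}} \;=\; \frac{2/16}{11/16} \;=\; \frac{2}{11}$$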

The hand stretched out again, and opened.

Brennan took the ring.  It looked almost invisible, in the torchlight; not glass, but some material with a refractive index very close to air.  The ring was warm from the guide's hand, and felt like a tiny living thing as it embraced his finger.

The relief was so great that he nearly didn't hear the cowled figures applauding.

From the robed guide came one last whisper:

\"You are now a novice of the Bayesian Conspiracy.\"

Image:  The Bayesian Master, by Erin Devereux

" } }, { "_id": "3diLhMELXxM8rFHJj", "title": "To Spread Science, Keep It Secret", "pageUrl": "https://www.lesswrong.com/posts/3diLhMELXxM8rFHJj/to-spread-science-keep-it-secret", "postedAt": "2008-03-28T05:47:20.000Z", "baseScore": 85, "voteCount": 87, "commentCount": 51, "url": null, "contents": { "documentId": "3diLhMELXxM8rFHJj", "html": "

Sometimes I wonder if the Pythagoreans had the right idea.


Yes, I've written about how \"science\" is inherently public.  I've written that \"science\" is distinguished from merely rational knowledge by the in-principle ability to reproduce scientific experiments for yourself, to know without relying on authority.  I've said that \"science\" should be defined as the publicly accessible knowledge of humankind.  I've even suggested that future generations will regard all papers not published in an open-access journal as non-science, i.e., it can't be part of the public knowledge of humankind if you make people pay to read it.


But that's only one vision of the future.  In another vision, the knowledge we now call \"science\" is taken out of the public domain—the books and journals hidden away, guarded by mystic cults of gurus wearing robes, requiring fearsome initiation rituals for access—so that more people will actually study it.


I mean, right now, people can study science but they don't.


\"Scarcity\", it's called in social psychology.  What appears to be in limited supply, is more highly valued.  And this effect is especially strong with information—we're much more likely to try to obtain information that we believe is secret, and to value it more when we do obtain it.


With science, I think, people assume that if the information is freely available, it must not be important.  So instead people join cults that have the sense to keep their Great Truths secret.  The Great Truth may actually be gibberish, but it's more satisfying than coherent science, because it's secret.


Science is the great Purloined Letter of our times, left out in the open and ignored.


Sure, scientific openness helps the scientific elite.  They've already been through the initiation rituals.  But for the rest of the planet, science is kept secret a hundred times more effectively by making it freely available, than if its books were guarded in vaults and you had to walk over hot coals to get access.  (This being a fearsome trial indeed, since the great secrets of insulation are only available to Physicist-Initiates of the Third Level.)


If scientific knowledge were hidden in ancient vaults (rather than hidden in inconvenient pay-for-access journals), at least then people would try to get into the vaults.  They'd be desperate to learn science.  Especially when they saw the power that Eighth Level Physicists could wield, and were told that they weren't allowed to know the explanation.


And if you tried to start a cult around oh, say, Scientology, you'd get some degree of public interest, at first.  But people would very quickly start asking uncomfortable questions like \"Why haven't you given a public demonstration of your Eighth Level powers, like the Physicists?\" and \"How come none of the Master Mathematicians seem to want to join your cult?\" and \"Why should I follow your Founder when he isn't an Eighth Level anything outside his own cult?\" and \"Why should I study your cult first, when the Dentists of Doom can do things that are so much more impressive?\"


When you look at it from that perspective, the escape of math from the Pythagorean cult starts to look like a major strategic blunder for humanity.


Now, I know what you're going to say:  \"But science is surrounded by fearsome initiation rituals!  Plus it's inherently difficult to learn!  Why doesn't that count?\"  Because the public thinks that science is freely available, that's why.  If you're allowed to learn, it must not be important enough to learn.


It's an image problem, people taking their cues from others' attitudes.  Just anyone can walk into the supermarket and buy a light bulb, and nobody looks at it with awe and reverence.  The physics supposedly aren't secret (even though you don't know), and there's a one-paragraph explanation in the newspaper that sounds vaguely authoritative and convincing—essentially, no one treats the lightbulb as a sacred mystery, so neither do you.


Even the simplest little things, completely inert objects like crucifixes, can become magical if everyone looks at them like they're magic.  But since you're theoretically allowed to know why the light bulb works without climbing the mountain to find the remote Monastery of Electricians, there's no need to actually bother to learn.
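
Since the light bulb is the running example of a demystified mystery, here is roughly what that one-paragraph newspaper explanation is compressing: a minimal back-of-the-envelope sketch, with the bulb's 60-watt rating and the 120-volt line both assumed purely for illustration.

```python
# Resistive heating: the filament dissipates P = V^2 / R,
# so a bulb's power rating fixes its hot-filament resistance.
VOLTAGE = 120.0  # volts, assumed household supply
POWER = 60.0     # watts, assumed bulb rating

resistance = VOLTAGE ** 2 / POWER  # ~240 ohms when hot
current = VOLTAGE / resistance     # ~0.5 amperes through the filament
print(f"{resistance:.0f} ohms, {current:.2f} A")

# That half-ampere heats the tungsten filament to roughly 2800 K,
# hot enough that part of its thermal radiation is visible light.
```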


Now, because science does in fact have initiation rituals both social and cognitive, scientists are not wholly dissatisfied with their science.  The problem is that, in the present world, very few people bother to study science in the first place.  Science cannot be the true Secret Knowledge, because just anyone is allowed to know it—even though, in fact, they don't.


If the Great Secret of Natural Selection, passed down from Darwin Who Is Not Forgotten, was only ever imparted to you after you paid $2000 and went through a ceremony involving torches and robes and masks and sacrificing an ox, then when you were shown the fossils, and shown the optic cable going through the retina under a microscope, and finally told the Truth, you would say \"That's the most brilliant thing ever!\" and be satisfied.  After that, if some other cult tried to tell you it was actually a bearded man in the sky 6000 years ago, you'd laugh like hell.


And you know, it might actually be more fun to do things that way.  Especially if the initiation required you to put together some of the evidence for yourself—together, or with classmates—before you could tell your Science Sensei you were ready to advance to the next level.  It wouldn't be efficient, sure, but it would be fun.


If humanity had never made the mistake—never gone down the religious path, and never learned to fear anything that smacks of religion—then maybe the Ph.D. granting ceremony would involve litanies and chanting, because, hey, that's what people like.  Why take the fun out of everything?


Maybe we're just doing it wrong.


And no, I'm not seriously proposing that we try to reverse the last five hundred years of openness and classify all the science secret.  At least, not at the moment.  Efficiency is important for now, especially in things like medical research.  I'm just explaining why it is that I won't tell anyone the Secret of how the ineffable difference between blueness and redness arises from mere atoms for less than $100,000—


Ahem!  I meant to say, I'm telling you about this vision of an alternate Earth, so that you give science equal treatment with cults.  So that you don't undervalue scientific truth when you learn it, just because it doesn't seem to be protected appropriately to its value.  Imagine the robes and masks.  Visualize yourself creeping into the vaults and stealing the Lost Knowledge of Newton.  And don't be fooled by any organization that does use robes and masks, unless they also show you the data.


People seem to have holes in their minds for Esoteric Knowledge, Deep Secrets, the Hidden Truth.  And I'm not even criticizing this psychology!  There are deep secret esoteric hidden truths, like quantum mechanics or Bayes-structure.  We've just gotten into the habit of presenting the Hidden Truth in a very unsatisfying way, wrapped up in false mundanity.


But if the holes for secret knowledge are not filled by true beliefs, they will be filled by false beliefs.  There is nothing but science to learn—the emotional energy must either be invested in reality, or wasted in total nonsense, or destroyed.  For myself, I think it is better to invest the emotional energy; fun should not be needlessly cast away.


Right now, we've got the worst of both worlds.  Science isn't really free, because the courses are expensive and the textbooks are expensive.  But the public thinks that anyone is allowed to know, so it must not be important.


Ideally, you would want to arrange things the other way around.

" } }, { "_id": "MCYp8g9EMAiTCTawk", "title": "Scarcity", "pageUrl": "https://www.lesswrong.com/posts/MCYp8g9EMAiTCTawk/scarcity", "postedAt": "2008-03-27T08:07:29.000Z", "baseScore": 88, "voteCount": 64, "commentCount": 20, "url": null, "contents": { "documentId": "MCYp8g9EMAiTCTawk", "html": "

What follows is taken primarily from Robert Cialdini's Influence: The Psychology of Persuasion.  I own three copies of this book, one for myself, and two for loaning to friends.


Scarcity, as that term is used in social psychology, is when things become more desirable as they appear less obtainable.



Similarly, information that appears forbidden or secret seems more important and trustworthy.


The conventional theory for explaining this is \"psychological reactance\", social-psychology-speak for \"When you tell people they can't do something, they'll just try even harder.\"  The fundamental instincts involved appear to be preservation of status and preservation of options.  We resist dominance, when any human agency tries to restrict our freedom.  And when options seem to be in danger of disappearing, even from natural causes, we try to leap on the option before it's gone.


Leaping on disappearing options may be a good adaptation in a hunter-gatherer society—gather the fruits while the tree is still in bloom—but in a money-based society it can be rather costly.   Cialdini (1993) reports that in one appliance store he observed, a salesperson who saw that a customer was evincing signs of interest in an appliance would approach, and sadly inform the customer that the item was out of stock, the last one having been sold only twenty minutes ago.  Scarcity creating a sudden jump in desirability, the customer would often ask whether there was any chance that the salesperson could locate an unsold item in the back room, warehouse, or anywhere.  \"Well,\" says the salesperson, \"that's possible, and I'm willing to check; but do I understand that this is the model you want, and if I can find it at this price, you'll take it?\"


As Cialdini remarks, a chief sign of this malfunction is that you dream of possessing something, rather than using it.  (Timothy Ferriss offers similar advice on planning your life: ask which ongoing experiences would make you happy, rather than which possessions or status-changes.)


But the really fundamental problem with desiring the unattainable is that as soon as you actually get it, it stops being unattainable.  If we cannot take joy in the merely available, our lives will always be frustrated...


Ashmore, R. D., Ramachandra, V. and Jones, R. A. (1971.) \"Censorship as an Attitude Change Induction.\"  Paper presented at Eastern Psychological Association meeting, New York, April 1971.


Brehm, S. S. and Weintraub, M. (1977.) \"Physical Barriers and Psychological Reactance: Two-year-olds' Responses to Threats to Freedom.\"  Journal of Personality and Social Psychology, 35: 830-36.


Broeder, D. (1959.)  \"The University of Chicago Jury Project.\" Nebraska Law Review 38: 760-74.


Cialdini, R. B. (1993.)  Influence:  The Psychology of Persuasion: Revised Edition.  Pp. 237-71.  New York: Quill.


Knishinsky, A. (1982.)  \"The Effects of Scarcity of Material and Exclusivity of Information on Industrial Buyer Perceived Risk in Provoking a Purchase Decision.\"  Doctoral dissertation, Arizona State University.


Mazis, M. B. (1975.) \"Antipollution Measures and Psychological Reactance Theory: A Field Experiment.\" Journal of Personality and Social Psychology 31: 654-66.


Mazis, M. B., Settle, R. B. and Leslie, D. C. (1973.) \"Elimination of Phosphate Detergents and Psychological Reactance.\"  Journal of Marketing Research 10: 390-95.

" } }, { "_id": "2y5JvvHT9hdWMWEau", "title": "Why are religious societies more cohesive?", "pageUrl": "https://www.lesswrong.com/posts/2y5JvvHT9hdWMWEau/why-are-religious-societies-more-cohesive", "postedAt": "2008-03-26T15:33:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "2y5JvvHT9hdWMWEau", "html": "
As reported by The Economist (and discussed on Overcoming Bias), religion brings social cooperation. Attempts to synthesise secular solidarity out of god-free rituals tend to fail. So why is this?


A hypothesis:


Social cohesion is a result of citizens sharing a desire to believe something they all have a tiny private inkling might seem less true if they thought about it too much. They subconsciously know belief is easier when ubiquitously reinforced in social surroundings, and also that their beliefs are more enjoyable than the alternative. Thus they have a strong interest in religious behaviour in others and in their own feeling of unshakable commitment to those who practice it. So they encourage it with enthusiastic participation and try to ensconce themselves as much as necessary to feel safe from reality. If we found conclusive evidence of a god, everyone would be safe, and could get back to non-cohesion; it’s the possibility that the sky is chockers with nothingness that gives everyone the incentive for solidarity.


To test this hypothesis, compare cohesion across other groups with beliefs (religious or otherwise) of varying tenuousness and of varying importance to their believers.
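
A minimal sketch of what that comparison could look like in practice, with every number invented purely to show the shape of the analysis (the hypothesis predicts the correlation comes out strongly positive):

```python
import numpy as np

# Hypothetical scores for five groups: how tenuous the core belief is
# (0 = well-evidenced), how much members care about it, and how
# cohesive the group is observed to be.  All values are made up.
tenuousness = np.array([0.9, 0.8, 0.3, 0.1, 0.7])
importance = np.array([0.9, 0.6, 0.8, 0.2, 0.9])
cohesion = np.array([0.8, 0.5, 0.4, 0.1, 0.9])

predictor = tenuousness * importance  # beliefs both shaky AND cherished
r = np.corrcoef(predictor, cohesion)[0, 1]
print(f"correlation with cohesion: {r:.2f}")
```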



\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "PMr6f7ZocEWFtCYXj", "title": "Is Humanism A Religion-Substitute?", "pageUrl": "https://www.lesswrong.com/posts/PMr6f7ZocEWFtCYXj/is-humanism-a-religion-substitute", "postedAt": "2008-03-26T04:18:10.000Z", "baseScore": 64, "voteCount": 62, "commentCount": 54, "url": null, "contents": { "documentId": "PMr6f7ZocEWFtCYXj", "html": "

For many years before the Wright Brothers, people dreamed of flying with magic potions.  There was nothing irrational about the raw desire to fly.  There was nothing tainted about the wish to look down on a cloud from above.  Only the \"magic potions\" part was irrational.


Suppose you were to put me into an fMRI scanner, and take a movie of my brain's activity levels, while I watched a space shuttle launch.  (Wanting to visit space is not \"realistic\", but it is an essentially lawful dream—one that can be fulfilled in a lawful universe.)  The fMRI might—maybe, maybe not—resemble the fMRI of a devout Christian watching a nativity scene.


Should an experimenter obtain this result, there's a lot of people out there, both Christians and some atheists, who would gloat:  \"Ha, ha, space travel is your religion!\"


But that's drawing the wrong category boundary.  It's like saying that, because some people once tried to fly by irrational means, no one should ever enjoy looking out of an airplane window on the clouds below.



If a rocket launch is what it takes to give me a feeling of aesthetic transcendence, I do not see this as a substitute for religion.  That is theomorphism—the viewpoint of gloating religionists who assume that everyone who isn't religious has a hole in their mind that wants filling.


Now, to be fair to the religionists, this is not just a gloating assumption.  There are atheists who have religion-shaped holes in their minds.  I have seen attempts to substitute atheism or even transhumanism for religion.  And the result is invariably awful.  Utterly awful.  Absolutely abjectly awful.


I call such efforts, \"hymns to the nonexistence of God\".


When someone sets out to write an atheistic hymn—\"Hail, oh unintelligent universe,\" blah, blah, blah—the result will, without exception, suck.


Why?  Because they're being imitative.  Because they have no motivation for writing the hymn except a vague feeling that since churches have hymns, they ought to have one too.  And, on a purely artistic level, that puts them far beneath genuine religious art that is not an imitation of anything, but an original expression of emotion.


Religious hymns were (often) written by people who felt strongly and wrote honestly and put serious effort into the prosody and imagery of their work—that's what gives their work the grace that it possesses, of artistic integrity.


So are atheists doomed to hymnlessness?


There is an acid test of attempts at post-theism.  The acid test is:  \"If religion had never existed among the human species—if we had never made the original mistake—would this song, this art, this ritual, this way of thinking, still make sense?\"


If humanity had never made the original mistake, there would be no hymns to the nonexistence of God.  But there would still be marriages, so the notion of an atheistic marriage ceremony makes perfect sense—as long as you don't suddenly launch into a lecture on how God doesn't exist.  Because, in a world where religion never had existed, nobody would interrupt a wedding to talk about the implausibility of a distant hypothetical concept.  They'd talk about love, children, commitment, honesty, devotion, but who the heck would mention God?


And, in a human world where religion never had existed, there would still be people who got tears in their eyes watching a space shuttle launch.


Which is why, even if experiment shows that watching a shuttle launch makes \"religion\"-associated areas of my brain light up, associated with feelings of transcendence, I do not see that as a substitute for religion; I expect the same brain areas would light up, for the same reason, if I lived in a world where religion had never been invented.


A good \"atheistic hymn\" is simply a song about anything worth singing about that doesn't happen to be religious.


Also, reversed stupidity is not intelligence.  The world's greatest idiot may say the Sun is shining, but that doesn't make it dark out.  The point is not to create a life that resembles religion as little as possible in every surface aspect—this is the same kind of thinking that inspires hymns to the nonexistence of God.  If humanity had never made the original mistake, no one would be trying to avoid things that vaguely resembled religion.  Believe accurately, then feel accordingly:  If space launches actually exist, and watching a rocket rise makes you want to sing, then write the song, dammit.


If I get tears in my eyes at a space shuttle launch, it doesn't mean I'm trying to fill a hole left by religion—it means that my emotional energies, my caring, are bound into the real world.


If God did speak plainly, and answer prayers reliably, God would just become one more boringly real thing, no more worth believing in than the postman.  If God were real, it would destroy the inner uncertainty that brings forth outward fervor in compensation.  And if everyone else believed God were real, it would destroy the specialness of being one of the elect.


If you invest your emotional energy in space travel, you don't have those vulnerabilities.  I can see the Space Shuttle rise without losing the awe.  Everyone else can believe that Space Shuttles are real, and it doesn't make them any less special.  I haven't painted myself into the corner.


The choice between God and humanity is not just a choice of drugs.  Above all, humanity actually exists. 

" } }, { "_id": "hYqDp4qAucZM33qSh", "title": "Amazing Breakthrough Day: April 1st", "pageUrl": "https://www.lesswrong.com/posts/hYqDp4qAucZM33qSh/amazing-breakthrough-day-april-1st", "postedAt": "2008-03-25T05:45:09.000Z", "baseScore": 81, "voteCount": 73, "commentCount": 16, "url": null, "contents": { "documentId": "hYqDp4qAucZM33qSh", "html": "

So you're thinking, \"April 1st... isn't that already supposed to be April Fool's Day?\"


Yes—and that will provide the ideal cover for celebrating Amazing Breakthrough Day.


As I argued in \"The Beauty of Settled Science\", it is a major problem that media coverage of science focuses only on breaking news.  Breaking news, in science, occurs at the furthest fringes of the scientific frontier, which means that the new discovery is often:

\n\n

People never get to see the solid stuff, let alone the understandable stuff, because it isn't breaking news.


On Amazing Breakthrough Day, I propose, journalists who really care about science can report—under the protective cover of April 1st—such important but neglected science stories as:

BOATS EXPLAINED:  Centuries-Old Problem Solved By Bathtub Nudist
YOU SHALL NOT CROSS!  Königsberg Tourists' Hopes Dashed
ARE YOUR LUNGS ON FIRE?  Link Between Respiration And Combustion Gains Acceptance Among Scientists


Note that every one of these headlines is true—they describe events that did, in fact, happen.  They just didn't happen yesterday.


There have been many humanly understandable amazing breakthroughs in the history of science, which can be understood without a PhD or even BSc.  The operative word here is history.  Think of Archimedes's \"Eureka!\" when he understood the relation between the water a ship displaces, and the reason the ship floats.  This is far enough back in scientific history that you don't need to know 50 other discoveries to understand the theory; it can be explained in a couple of graphs; anyone can see how it's useful; and the confirming experiments can be duplicated in your own bathtub.
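
For anyone who wants the bathtub version with numbers attached, the whole principle fits in a few lines; the boat below is made up purely for illustration:

```python
RHO_WATER = 1000.0  # kg per cubic metre of fresh water
G = 9.81            # gravitational acceleration, m/s^2

def floats(hull_mass_kg: float, hull_volume_m3: float) -> bool:
    """Archimedes: a hull floats if the water it can displace
    weighs at least as much as the hull itself."""
    return RHO_WATER * hull_volume_m3 * G >= hull_mass_kg * G

# A hypothetical 200-tonne steel boat whose hull encloses 300 m^3:
print(floats(200_000.0, 300.0))  # True: steel floats, if shaped right
```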


Modern science is built on discoveries built on discoveries built on discoveries and so on all the way back to Archimedes.  Reporting science only as breaking news is like wandering into a movie 3/4ths of the way through, writing a story about \"Bloody-handed man kisses girl holding gun!\" and wandering back out again.


And if your editor says, \"Oh, but our readers won't be interested in that—\"


Then point out that Reddit and Digg don't link only to breaking news.  They also link to short webpages that give good explanations of old science.  Readers vote it up, and that should tell you something.  Explain that if your newspaper doesn't change to look more like Reddit, you'll have to start selling drugs to make payroll.   Editors love to hear that sort of thing, right?


On the Internet, a good new explanation of old science is news and it spreads like news.  Why couldn't the science sections of newspapers work the same way?  Why isn't a new explanation worth reporting on?


But all this is too visionary for a first step.  For now, let's just see if any journalists out there pick up on Amazing Breakthrough Day, where you report on some understandable science breakthrough as though it had just occurred.


April 1st.  Put it on your calendar.

" } }, { "_id": "ndGYn7ZFiZyernp9f", "title": "The Beauty of Settled Science", "pageUrl": "https://www.lesswrong.com/posts/ndGYn7ZFiZyernp9f/the-beauty-of-settled-science", "postedAt": "2008-03-24T05:30:54.000Z", "baseScore": 107, "voteCount": 101, "commentCount": 22, "url": null, "contents": { "documentId": "ndGYn7ZFiZyernp9f", "html": "

Facts do not need to be unexplainable, to be beautiful; truths do not become less worth learning, if someone else knows them; beliefs do not become less worthwhile, if many others share them…


…and if you only care about scientific issues that are controversial, you will end up with a head stuffed full of garbage.


The media thinks that only the cutting edge of science is worth reporting on. How often do you see headlines like “General Relativity still governing planetary orbits” or “Phlogiston theory remains false”? So, by the time anything is solid science, it is no longer a breaking headline. “Newsworthy” science is often based on the thinnest of evidence and wrong half the time; if it were not on the uttermost fringes of the scientific frontier, it would not be breaking news.


Scientific controversies are problems so difficult that even people who’ve spent years mastering the field can still fool themselves. That’s what makes for the heated arguments that attract all the media attention.


Worse, if you aren’t in the field and part of the game, controversies aren’t even fun.


Oh, sure, you can have the fun of picking a side in an argument. But you can get that in any football game. That’s not what the fun of science is about.


Reading a well-written textbook, you get: Carefully phrased explanations for incoming students, math derived step by step (where applicable), plenty of experiments cited as illustration (where applicable), test problems on which to display your new mastery, and a reasonably good guarantee that what you’re learning is actually true.


Reading press releases, you usually get: Fake explanations that convey nothing except the delusion of understanding of a result that the press release author didn’t understand and that probably has a better-than-even chance of failing to replicate.


Modern science is built on discoveries, built on discoveries, built on discoveries, and so on, all the way back to people like Archimedes, who discovered facts like why boats float, that can make sense even if you don’t know about other discoveries. A good place to start traveling that road is at the beginning.


Don’t be embarrassed to read elementary science textbooks, either. If you want to pretend to be sophisticated, go find a play to sneer at. If you just want to have fun, remember that simplicity is at the core of scientific beauty.


And thinking you can jump right into the frontier, when you haven’t learned the settled science, is like…


…like trying to climb only the top half of Mount Everest (which is the only part that interests you) by standing at the base of the mountain, bending your knees, and jumping really hard (so you can pass over the boring parts).


Now I’m not saying that you should never pay attention to scientific controversies. If 40% of oncologists think that white socks cause cancer, and the other 60% violently disagree, this is an important fact to know.


Just don’t go thinking that science has to be controversial to be interesting.


Or, for that matter, that science has to be recent to be interesting. A steady diet of science news is bad for you: You are what you eat, and if you eat only science reporting on fluid situations, without a solid textbook now and then, your brain will turn to liquid.

" } }, { "_id": "xaPLsMBkvWyBFuyRy", "title": "New York OB Meetup (ad-hoc) on Monday, Mar 24, @6pm", "pageUrl": "https://www.lesswrong.com/posts/xaPLsMBkvWyBFuyRy/new-york-ob-meetup-ad-hoc-on-monday-mar-24-6pm", "postedAt": "2008-03-22T21:07:29.000Z", "baseScore": 1, "voteCount": 3, "commentCount": 23, "url": null, "contents": { "documentId": "xaPLsMBkvWyBFuyRy", "html": "

Correction:  The giant Starbucks is at 13 Astor Place #25, not 51 Astor which is a smaller Starbucks.  (Yes, there are two Starbucks a block apart, here.)  The smaller Starbucks has metal steps and a ramp leading up; don't go here.  The giant Starbucks, which is where we want to go, is next to Lafayette and Astor.

I (Eliezer) am in New York at the moment, and will have some time free on Monday night, March 24th, to meet any interested New York Overcoming Bias readers.


Where:  The giant Starbucks at 13 Astor Place #25, New York, NY 10003
When:  6pm, this Monday (March 24th, 2008)
Who:  Eliezer Yudkowsky (866-983-597), Carl Shulman


If you plan on attending, please leave a comment, so we know to expect you.  If you're going to arrive later than 6pm, please note this as well.


I'll also be at Princeton on Sunday.  My time is already mostly spoken for, but if you're at Princeton and desperately want to meet up, comment or email before 7am tomorrow.

" } }, { "_id": "iiWiHgtQekWNnmE6Q", "title": "If You Demand Magic, Magic Won't Help", "pageUrl": "https://www.lesswrong.com/posts/iiWiHgtQekWNnmE6Q/if-you-demand-magic-magic-won-t-help", "postedAt": "2008-03-22T18:10:47.000Z", "baseScore": 194, "voteCount": 157, "commentCount": 142, "url": null, "contents": { "documentId": "iiWiHgtQekWNnmE6Q", "html": "

Most witches don't believe in gods.  They know that the gods exist, of course.  They even deal with them occasionally.  But they don't believe in them.  They know them too well.  It would be like believing in the postman.
        —Terry Pratchett, Witches Abroad


Once upon a time, I was pondering the philosophy of fantasy stories—


And before anyone chides me for my \"failure to understand what fantasy is about\", let me say this:  I was raised in an SF&F household.  I have been reading fantasy stories since I was five years old.  I occasionally try to write fantasy stories.  And I am not the sort of person who tries to write for a genre without pondering its philosophy.  Where do you think story ideas come from?


Anyway:


I was pondering the philosophy of fantasy stories, and it occurred to me that if there were actually dragons in our world—if you could go down to the zoo, or even to a distant mountain, and meet a fire-breathing dragon—while nobody had ever actually seen a zebra, then our fantasy stories would contain zebras aplenty, while dragons would be unexciting.


Now that's what I call painting yourself into a corner, wot?  The grass is always greener on the other side of unreality.



In one of the standard fantasy plots, a protagonist from our Earth, a sympathetic character with lousy grades or a crushing mortgage but still a good heart, suddenly finds themselves in a world where magic operates in place of science.  The protagonist often goes on to practice magic, and become in due course a (superpowerful) sorcerer.


Now here's the question—and yes, it is a little unkind, but I think it needs to be asked:  Presumably most readers of these novels see themselves in the protagonist's shoes, fantasizing about their own acquisition of sorcery.  Wishing for magic.  And, barring improbable demographics, most readers of these novels are not scientists.


Born into a world of science, they did not become scientists.  What makes them think that, in a world of magic, they would act any differently?


If they don't have the scientific attitude, that nothing is \"mere\"—the capacity to be interested in merely real things—how will magic help them?  If they actually had magic, it would be merely real, and lose the charm of unattainability.  They might be excited at first, but (like the lottery winners who, six months later, aren't nearly as happy as they expected to be), the excitement would soon wear off.  Probably as soon as they had to actually study spells.


Unless they can find the capacity to take joy in things that are merely real.  To be just as excited by hang-gliding, as riding a dragon; to be as excited by making a light with electricity, as by making a light with magic... even if it takes a little study...


Don't get me wrong.  I'm not dissing dragons.  Who knows, we might even create some, one of these days.


But if you don't have the capacity to enjoy hang-gliding even though it is merely real, then as soon as dragons turn real, you're not going to be any more excited by dragons than you are by hang-gliding.


Do you think you would prefer living in the Future, to living in the present?  That's a quite understandable preference.  Things do seem to be getting better over time.


But don't forget that this is the Future, relative to the Dark Ages of a thousand years earlier.  You have opportunities undreamt-of even by kings.


If the trend continues, the Future might be a very fine place indeed in which to live.  But if you do make it to the Future, what you find, when you get there, will be another Now.  If you don't have the basic capacity to enjoy being in a Now—if your emotional energy can only go into the Future, if you can only hope for a better tomorrow—then no amount of passing time can help you.


(Yes, in the Future there could be a pill that fixes the emotional problem of always looking to the Future.  I don't think this invalidates my basic point, which is about what sort of pills we should want to take.)


Matthew C., commenting here on LW, seems very excited about an informally specified \"theory\" by Rupert Sheldrake which \"explains\" such non-explanation-demanding phenomena as protein folding and snowflake symmetry.  But why isn't Matthew C. just as excited about, say, Special Relativity?  Special Relativity is actually known to be a law, so why isn't it even more exciting?  The advantage of becoming excited about a law already known to be true, is that you know your excitement will not be wasted.


If Sheldrake's theory were accepted truth taught in elementary schools, Matthew C. wouldn't care about it.  Or why else is Matthew C. fascinated by that one particular law which he believes to be a law of physics, more than all the other laws?


The worst catastrophe you could visit upon the New Age community would be for their rituals to start working reliably, and for UFOs to actually appear in the skies.  What would be the point of believing in aliens, if they were just there, and everyone else could see them too?  In a world where psychic powers were merely real, New Agers wouldn't believe in psychic powers, any more than anyone cares enough about gravity to believe in it.  (Except for scientists, of course.)


Why am I so negative about magic?  Would it be wrong for magic to exist?


I'm not actually negative on magic.  Remember, I occasionally try to write fantasy stories.  But I'm annoyed with this psychology that, if it were born into a world where spells and potions did work, would pine away for a world where household goods were abundantly produced by assembly lines.


Part of binding yourself to reality, on an emotional as well as intellectual level, is coming to terms with the fact that you do live here.  Only then can you see this, your world, and whatever opportunities it holds out for you, without wishing your sight away.


Not to put too fine a point on it, but I've found no lack of dragons to fight, or magics to master, in this world of my birth.  If I were transported into one of those fantasy novels, I wouldn't be surprised to find myself studying the forbidden ultimate sorcery—


—because why should being transported into a magical world change anything?  It's not where you are, it's who you are.


So remember the Litany Against Being Transported Into An Alternate Universe:


If I'm going to be happy anywhere,
Or achieve greatness anywhere,
Or learn true secrets anywhere,
Or save the world anywhere,
Or feel strongly anywhere,
Or help people anywhere,
I may as well do it in reality.

" } }, { "_id": "WjpA4PCjt5EkTGbLF", "title": "Bind Yourself to Reality", "pageUrl": "https://www.lesswrong.com/posts/WjpA4PCjt5EkTGbLF/bind-yourself-to-reality", "postedAt": "2008-03-22T05:09:10.000Z", "baseScore": 61, "voteCount": 53, "commentCount": 5, "url": null, "contents": { "documentId": "WjpA4PCjt5EkTGbLF", "html": "

So perhaps you're reading all this, and asking:  \"Yes, but what does this have to do with reductionism?\"


Partially, it's a matter of leaving a line of retreat.  It's not easy to take something important apart into components, when you're convinced that this removes magic from the world, unweaves the rainbow.  I do plan to take certain things apart, on this blog; and I prefer not to create pointless existential anguish.


Partially, it's the crusade against Hollywood Rationality, the concept that understanding the rainbow subtracts its beauty.  The rainbow is still beautiful, plus you get the beauty of physics.


But even more deeply, it's one of these subtle hidden-core-of-rationality things.  You know, the sort of thing where I start talking about 'the Way'.  It's about binding yourself to reality.


In one of Frank Herbert's Dune books, IIRC, it is said that a Truthsayer gains their ability to detect lies in others by always speaking truth themselves, so that they form a relationship with the truth whose violation they can feel.  It wouldn't work, but I still think it's one of the more beautiful thoughts in fiction.  At the very least, to get close to the truth, you have to be willing to press yourself up against reality as tightly as possible, without flinching away, or sneering down.



You can see the bind-yourself-to-reality theme in \"Lotteries:  A Waste of Hope.\"  Understanding that lottery tickets have negative expected utility, does not mean that you give up the hope of being rich.  It means that you stop wasting that hope on lottery tickets.  You put the hope into your job, your school, your startup, your eBay sideline; and if you truly have nothing worth hoping for, then maybe it's time to start looking.
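
In case "negative expected utility" sounds abstract, the arithmetic is short; the odds and prize below are stand-ins of roughly the right magnitude, not any particular lottery's actual numbers:

```python
# Expected value per ticket = P(jackpot) * prize - ticket price.
p_jackpot = 1 / 146_107_962  # assumed jackpot odds
prize = 100_000_000.0        # assumed jackpot, in dollars
ticket_price = 1.0           # assumed price per play

expected_value = p_jackpot * prize - ticket_price
print(f"${expected_value:.2f} per ticket")  # about -$0.32: paying for hope
```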


It's not dreams I object to, only impossible dreams.  The lottery isn't impossible, but it is an un-actionable near-impossibility.  It's not that winning the lottery is extremely difficult—requires a desperate effort—but that work isn't the issue.


I say all this, to exemplify the idea of taking emotional energy that is flowing off to nowhere, and binding it into the realms of reality.


This doesn't mean setting goals that are low enough to be \"realistic\", i.e., easy and safe and parentally approved.  Maybe this is good advice in your personal case, I don't know, but I'm not the one to say it.


What I mean is that you can invest emotional energy in rainbows even if they turn out not to be magic.  The future is always absurd but it is never unreal.


The Hollywood Rationality stereotype is that \"rational = emotionless\"; the more reasonable you are, the more of your emotions Reason inevitably destroys.  In \"Feeling Rational\" I contrast this against \"That which can be destroyed by the truth should be\" and \"That which the truth nourishes should thrive\".  When you have arrived at your best picture of the truth, there is nothing irrational about the emotions you feel as a result of that—the emotions cannot be destroyed by truth, so they must not be irrational.


So instead of destroying emotional energies associated with bad explanations for rainbows, as the Hollywood Rationality stereotype would have it, let us redirect these emotional energies into reality—bind them to beliefs that are as true as we can make them.


Want to fly?  Don't give up on flight.  Give up on flying potions and build yourself an airplane.


Remember the theme of \"Think Like Reality\", where I talked about how when physics seems counterintuitive, you've got to accept that it's not physics that's weird, it's you?


What I'm talking about now is like that, only with emotions instead of hypotheses—binding your feelings into the real world.  Not the \"realistic\" everyday world.  I would be a howling hypocrite if I told you to shut up and do your homework.  I mean the real real world, the lawful universe, that includes absurdities like Moon landings and the evolution of human intelligence.  Just not any magic, anywhere, ever.


It is a Hollywood Rationality meme that \"Science takes the fun out of life.\"


Science puts the fun back into life.


Rationality directs your emotional energies into the universe, rather than somewhere else.

" } }, { "_id": "KfMNFB3G7XNviHBPN", "title": "Joy in Discovery", "pageUrl": "https://www.lesswrong.com/posts/KfMNFB3G7XNviHBPN/joy-in-discovery", "postedAt": "2008-03-21T01:19:18.000Z", "baseScore": 84, "voteCount": 74, "commentCount": 48, "url": null, "contents": { "documentId": "KfMNFB3G7XNviHBPN", "html": "

\"Newton was the greatest genius who ever lived, and the most fortunate; for we cannot find more than once a system of the world to establish.\"
        —Lagrange


I have more fun discovering things for myself than reading about them in textbooks.  This is right and proper, and only to be expected.


But discovering something that no one else knows—being the first to unravel the secret—


There is a story that one of the first men to realize that stars were burning by fusion—plausible attributions I've seen are to Fritz Houtermans and Hans Bethe—was walking out with his girlfriend of a night, and she made a comment on how beautiful the stars were, and he replied:  \"Yes, and right now, I'm the only man in the world who knows why they shine.\"
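
The "why" he alone knew comes down to a short mass-energy bookkeeping exercise: four hydrogen nuclei outweigh the helium nucleus they fuse into, and the missing mass departs as light. A sketch, using standard atomic masses:

```python
# Net stellar fusion: 4 H -> He-4, with the lost mass radiated away.
M_H1 = 1.007825      # mass of a hydrogen-1 atom, atomic mass units
M_HE4 = 4.002602     # mass of a helium-4 atom, atomic mass units
MEV_PER_U = 931.494  # energy equivalent of one mass unit, in MeV

mass_defect = 4 * M_H1 - M_HE4    # ~0.0287 u goes missing per helium
energy = mass_defect * MEV_PER_U  # ...and shines: ~26.7 MeV
print(f"{energy:.1f} MeV released per helium nucleus formed")
```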


It is attested by numerous sources that this experience, being the first person to solve a major mystery, is a tremendous high.  It's probably the closest experience you can get to taking drugs, without taking drugs—though I wouldn't know.


That can't be healthy.



Not that I'm objecting to the euphoria.  It's the exclusivity clause that bothers me.  Why should a discovery be worth less, just because someone else already knows the answer?


The most charitable interpretation I can put on the psychology, is that you don't struggle with a single problem for months or years if it's something you can just look up in the library.  And that the tremendous high comes from having hit the problem from every angle you can manage, and having bounced; and then having analyzed the problem again, using every idea you can think of, and all the data you can get your hands on—making progress a little at a time—so that when, finally, you crack through the problem, all the dangling pieces and unresolved questions fall into place at once, like solving a dozen locked-room murder mysteries with a single clue.


And more, the understanding you get is real understanding—understanding that embraces all the clues you studied to solve the problem, when you didn't yet know the answer.  Understanding that comes from asking questions day after day and worrying at them; understanding that no one else can get (no matter how much you tell them the answer) unless they spend months studying the problem in its historical context, even after it's been solved—and even then, they won't get the high of solving it all at once.


That's one possible reason why James Clerk Maxwell might have had more fun discovering Maxwell's Equations, than you had fun reading about them.


A slightly less charitable reading is that the tremendous high comes from what is termed, in the politesse of social psychology, \"commitment\" and \"consistency\" and \"cognitive dissonance\"; the part where we value something more highly just because it took more work to get it.  The studies showing that subjecting fraternity pledges to a harsher initiation causes them to be more convinced of the value of the fraternity—identical wine in higher-priced bottles being rated as tasting better—that sort of thing.


Of course, if you just have more fun solving a puzzle than being told its answer, because you enjoy doing the cognitive work for its own sake, there's nothing wrong with that.  The less charitable reading would be if charging $100 to be told the answer to a puzzle, made you think the answer was more interesting, worthwhile, important, surprising, etc. than if you got the answer for free.


(I strongly suspect that a major part of science's PR problem in the population at large is people who instinctively believe that if knowledge is given away for free, it cannot be important.  If you had to undergo a fearsome initiation ritual to be told the truth about evolution, maybe people would be more satisfied with the answer.)


The really uncharitable reading is that the joy of first discovery is about status.  Competition.  Scarcity.  Beating everyone else to the punch.  It doesn't matter whether you have a 3-room house or a 4-room house, what matters is having a bigger house than the Joneses.  A 2-room house would be fine, if you could only ensure that the Joneses had even less.


I don't object to competition as a matter of principle.  I don't think that the game of Go is barbaric and should be suppressed, even though it's zero-sum.  But if the euphoric joy of scientific discovery has to be about scarcity, that means it's only available to one person per civilization for any given truth.


If the joy of scientific discovery is one-shot per discovery, then, from a fun-theoretic perspective, Newton probably used up a substantial increment of the total Physics Fun available over the entire history of Earth-originating intelligent life.  That selfish bastard explained the orbits of planets and the tides.


And really the situation is even worse than this, because in the Standard Model of physics (discovered by bastards who spoiled the puzzle for everyone else) the universe is spatially infinite, inflationarily branching, and branching via decoherence, which is at least three different ways that Reality is exponentially or infinitely large


So aliens, or alternate Newtons, or just Tegmark duplicates of Newton, may all have discovered gravity before our Newton did—if you believe that \"before\" means anything relative to those kinds of separations.


When that thought first occurred to me, I actually found it quite uplifting.  Once I realized that someone, somewhere in the expanses of space and time, already knows the answer to any answerable question—even biology questions and history questions; there are other decoherent Earths—then I realized how silly it was to think as if the joy of discovery ought to be limited to one person.  Tying the joy of discovery to being first would make it a fully inescapable source of unresolvable existential angst, and I regard that as a reductio.


The consistent solution which maintains the possibility of fun, is to stop worrying about what other people know.  If you don't know the answer, it's a mystery to you.  If you can raise your hand, and clench your fingers into a fist, and you've got no idea of how your brain is doing it—or even what exact muscles lay beneath your skin—you've got to consider yourself just as ignorant as a hunter-gatherer.  Sure, someone else knows the answer—but back in the hunter-gatherer days, someone else in an alternate Earth, or for that matter, someone else in the future, knew what the answer was.  Mystery, and the joy of finding out, is either a personal thing, or it doesn't exist at all—and I prefer to say it's personal.


The joy of assisting your civilization by telling it something it doesn't already know, does tend to be one-shot per discovery per civilization; that kind of value is conserved, as are Nobel Prizes.  And the prospect of that reward may be what it takes to keep you focused on one problem for the years required to develop a really deep understanding; plus, working on a problem unknown to your civilization is a sure-fire way to avoid reading any spoilers.


But as part of my general project to undo this idea that rationalists have less fun, I want to restore the magic and mystery to every part of the world which you do not personally understand, regardless of what other knowledge may exist, far away in space and time, or even in your next-door neighbor's mind.  If you don't know, it's a mystery.  And now think of how many things you don't know!  (If you can't think of anything, you have other problems.)  Isn't the world suddenly a much more mysterious and magical and interesting place?  As if you'd been transported into an alternate dimension, and had to learn all the rules from scratch?


\"A friend once told me that I look at the world as if I've never seen it before. I thought, that's a nice compliment... Wait! I never have seen it before! What —did everyone else get a preview?\"
        —Ran Prieur

" } }, { "_id": "x4dG4GhpZH2hgz59x", "title": "Joy in the Merely Real", "pageUrl": "https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real", "postedAt": "2008-03-20T06:18:58.000Z", "baseScore": 188, "voteCount": 152, "commentCount": 43, "url": null, "contents": { "documentId": "x4dG4GhpZH2hgz59x", "html": "

                    ...Do not all charms fly
At the mere touch of cold philosophy?
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
        —John Keats, Lamia


\"Nothing is 'mere'.\"
        —Richard Feynman 


You've got to admire that phrase, \"dull catalogue of common things\".  What is it, exactly, that goes in this catalogue?  Besides rainbows, that is?


Why, things that are mundane, of course.  Things that are normal; things that are unmagical; things that are known, or knowable; things that play by the rules (or that play by any rules, which makes them boring); things that are part of the ordinary universe; things that are, in a word, real.


Now that's what I call setting yourself up for a fall.


At that rate, sooner or later you're going to be disappointed in everything—either it will turn out not to exist, or even worse, it will turn out to be real.


If we cannot take joy in things that are merely real, our lives will always be empty.



For what sin are rainbows demoted to the dull catalogue of common things?  For the sin of having a scientific explanation.  \"We know her woof, her texture\", says Keats—an interesting use of the word \"we\", because I suspect that Keats didn't know the explanation himself.  I suspect that just being told that someone else knew was too much for him to take.  I suspect that just the notion of rainbows being scientifically explicable in principle would have been too much to take.  And if Keats didn't think like that, well, I know plenty of people who do.


I have already remarked that nothing is inherently mysterious—nothing that actually exists, that is.  If I am ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon; to worship a phenomenon because it seems so wonderfully mysterious, is to worship your own ignorance; a blank map does not correspond to a blank territory, it is just somewhere we haven't visited yet, etc. etc...


Which is to say that everything—everything that actually exists—is liable to end up in \"the dull catalogue of common things\", sooner or later.


Your choice is either:

To go on being disappointed in everything, because it is merely real, or
To learn to take joy in the merely real.

(Self-deception might be an option for others, but not for you.)


This puts quite a different complexion on the bizarre habit indulged by those strange folk called scientists, wherein they suddenly become fascinated by pocket lint or bird droppings or rainbows, or some other ordinary thing which world-weary and sophisticated folk would never give a second glance.


You might say that scientists—at least some scientists—are those folk who are in principle capable of enjoying life in the real universe.

" } }, { "_id": "kQzs8MFbBxdYhe3hK", "title": "Savanna Poets", "pageUrl": "https://www.lesswrong.com/posts/kQzs8MFbBxdYhe3hK/savanna-poets", "postedAt": "2008-03-18T18:42:31.000Z", "baseScore": 70, "voteCount": 61, "commentCount": 40, "url": null, "contents": { "documentId": "kQzs8MFbBxdYhe3hK", "html": "

    \"Poets say science takes away from the beauty of the stars—mere globs of gas atoms.  Nothing is \"mere\".  I too can see the stars on a desert night, and feel them.  But do I see less or more?
    \"The vastness of the heavens stretches my imagination—stuck on this carousel my little eye can catch one-million-year-old light.  A vast pattern—of which I am a part—perhaps my stuff was belched from some forgotten star, as one is belching there.  Or see them with the greater eye of Palomar, rushing all apart from some common starting point when they were perhaps all together.  What is the pattern, or the meaning, or the why?  It does not do harm to the mystery to know a little about it.
    \"For far more marvelous is the truth than any artists of the past imagined!  Why do the poets of the present not speak of it?
    \"What men are poets who can speak of Jupiter if he were like a man, but if he is an immense spinning sphere of methane and ammonia must be silent?\"
            —Richard Feynman, The Feynman Lectures on Physics, Vol I, p. 3-6 (line breaks added)


That's a real question, there on the last line—what kind of poet can write about Jupiter the god, but not Jupiter the immense sphere?  Whether or not Feynman meant the question rhetorically, it has a real answer:


If Jupiter is like us, he can fall in love, and lose love, and regain love.
If Jupiter is like us, he can strive, and rise, and be cast down.
If Jupiter is like us, he can laugh or weep or dance.


If Jupiter is an immense spinning sphere of methane and ammonia, it is more difficult for the poet to make us feel.



There are poets and storytellers who say that the Great Stories are timeless, and they never change, they are only ever retold.  They say, with pride, that Shakespeare and Sophocles are bound by ties of craft stronger than mere centuries; that the two playwrights could have swapped times without a jolt.


Donald Brown once compiled a list of over two hundred \"human universals\", found in all (or a vast supermajority of) studied human cultures, from San Francisco to the !Kung of the Kalahari Desert.  Marriage is on the list, and incest avoidance, and motherly love, and sibling rivalry, and music and envy and dance and storytelling and aesthetics, and ritual magic to heal the sick, and poetry in spoken lines separated by pauses—


No one who knows anything about evolutionary psychology could be expected to deny it:  The strongest emotions we have are deeply engraved, blood and bone, brain and DNA.


It might take a bit of tweaking, but you probably could tell \"Hamlet\" sitting around a campfire on the ancestral savanna.


So one can see why John \"Unweave a rainbow\" Keats might feel something had been lost, on being told that the rainbow was sunlight scattered from raindrops.  Raindrops don't dance.


In the Old Testament, it is written that God once destroyed the world with a flood that covered all the land, drowning all the horribly guilty men and women of the world along with their horribly guilty babies, but Noah built a gigantic wooden ark, etc., and after most of the human species was wiped out, God put rainbows in the sky as a sign that he wouldn't do it again.  At least not with water.


You can see how Keats would be shocked that this beautiful story was contradicted by modern science.  Especially if (as I described yesterday) Keats had no real understanding of rainbows, no \"Aha!\" insight that could be fascinating in its own right, to replace the drama subtracted—


Ah, but maybe Keats would be right to be disappointed even if he knew the math.  The Biblical story of the rainbow is a tale of bloodthirsty murder and smiling insanity.  How could anything about raindrops and refraction properly replace that?  Raindrops don't scream when they die.


So science takes the romance away (says the Romantic poet), and what you are given back, never matches the drama of the original—


(that is, the original delusion)


—even if you do know the equations, because the equations are not about strong emotions.


That is the strongest rejoinder I can think of, that any Romantic poet could have said to Feynman—though I can't remember ever hearing it said.


You can guess that I don't agree with the Romantic poets.  So my own stance is this:


It is not necessary for Jupiter to be like a human, because humans are like humans.  If Jupiter is an immense spinning sphere of methane and ammonia, that doesn't mean that love and hate are emptied from the universe.  There are still loving and hating minds in the universe.  Us.


With more than six billion of us at the last count, does Jupiter really need to be on the list of potential protagonists?


It is not necessary to tell the Great Stories about planets or rainbows.  They play out all over our world, every day.  Every day, someone kills for revenge; every day, someone kills a friend by mistake; every day, upward of a hundred thousand people fall in love.  And even if this were not so, you could write fiction about humans—not about Jupiter.


Earth is old, and has played out the same stories many times beneath the Sun.  I do wonder if it might not be time for some of the Great Stories to change.  For me, at least, the story called \"Goodbye\" has lost its charm.


The Great Stories are not timeless, because the human species is not timeless.  Go far enough back in hominid evolution, and no one will understand Hamlet.  Go far enough back in time, and you won't find any brains.


The Great Stories are not eternal, because the human species, Homo sapiens sapiens, is not eternal.  I most sincerely doubt that we have another thousand years to go in our current form.  I do not say this in sadness: I think we can do better.


I would not like to see all the Great Stories lost completely, in our future.  I see very little difference between that outcome, and the Sun falling into a black hole.


But the Great Stories in their current forms have already been told, over and over.  I do not think it ill if some of them should change their forms, or diversify their endings.


\"And they lived happily ever after\" seems worth trying at least once.


The Great Stories can and should diversify, as humankind grows up.  Part of that ethic is the idea that when we find strangeness, we should respect it enough to tell its story truly.  Even if it makes writing poetry a little more difficult.


If you are a good enough poet to write an ode to an immense spinning sphere of methane and ammonia, you are writing something original, about a newly discovered part of the real universe.  It may not be as dramatic, or as gripping, as Hamlet.  But the tale of Hamlet has already been told!  If you write of Jupiter as though it were a human, then you are making our map of the universe just a little more impoverished of complexity; you are forcing Jupiter into the mold of all the stories that have already been told of Earth.


James Thomson's \"A Poem Sacred to the Memory of Sir Isaac Newton\", which praises the rainbow for what it really is—you can argue whether or not Thomson's poem is as gripping as John Keats's Lamia who was loved and lost.  But tales of love and loss and cynicism had already been told, far away in ancient Greece, and no doubt many times before.  Until we understood the rainbow as a thing different from tales of human-shaped magic, the true story of the rainbow could not be poeticized.


The border between science fiction and space opera was once drawn as follows:  If you can take the plot of a story and put it back in the Old West, or the Middle Ages, without changing it, then it is not real science fiction.  In real science fiction, the science is intrinsically part of the plot—you can't move the story from space to the savanna, not without losing something.


Richard Feynman asked:  \"What men are poets who can speak of Jupiter if he were like a man, but if he is an immense spinning sphere of methane and ammonia must be silent?\"


They are savanna poets, who can only tell stories that would have made sense around a campfire ten thousand years ago.  Savanna poets, who can tell only the Great Stories in their classic forms, and nothing more.

" } }, { "_id": "mTf8MkpAigm3HP6x2", "title": "Fake Reductionism", "pageUrl": "https://www.lesswrong.com/posts/mTf8MkpAigm3HP6x2/fake-reductionism", "postedAt": "2008-03-17T22:49:13.000Z", "baseScore": 98, "voteCount": 82, "commentCount": 44, "url": null, "contents": { "documentId": "mTf8MkpAigm3HP6x2", "html": "

There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
        —John Keats, Lamia  


I am guessing—though it is only a guess—that Keats himself did not know the woof and texture of the rainbow.  Not the way that Newton understood rainbows.  Perhaps not even at all.  Maybe Keats just read, somewhere, that Newton had explained the rainbow as \"light reflected from raindrops\"—


—which was actually known in the 13th century.  Newton only added a refinement by showing that the light was decomposed into colored parts, rather than transformed in color.  But that put rainbows back in the news headlines.  And so Keats, with Charles Lamb and William Wordsworth and Benjamin Haydon, drank \"Confusion to the memory of Newton\" because \"he destroyed the poetry of the rainbow by reducing it to a prism.\" That's one reason to suspect Keats didn't understand the subject too deeply.


I am guessing, though it is only a guess, that Keats could not have sketched out on paper why rainbows only appear when the Sun is behind your head, or why the rainbow is an arc of a circle.
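
For the record, here is a version of the sketch Keats never drew: the Descartes ray analysis for a spherical raindrop, a minimal computation with water's refractive index taken as 1.333.

```python
import numpy as np

# A sunbeam enters a drop, refracts, reflects once off the back
# surface, and refracts out.  Its angle back toward the anti-solar
# point is theta = 4r - 2i, where sin(i) = n * sin(r) (Snell's law).
N_WATER = 1.333

i = np.linspace(0.0, np.pi / 2, 200_000)  # every possible impact angle
r = np.arcsin(np.sin(i) / N_WATER)        # refraction angle in the drop
theta = 4 * r - 2 * i                     # deflection from anti-solar point
print(f"rainbow angle: {np.degrees(theta.max()):.1f} degrees")  # ~42.1
```

Outgoing rays pile up near that maximum, so the sky brightens along the circle of directions about 42 degrees away from the point exactly opposite the Sun. That is why the Sun must be behind your head, and why what you see is an arc of a circle.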



If so, Keats had a Fake Explanation.  In this case, a fake reduction.  He'd been told that the rainbow had been reduced, but it had not actually been reduced in his model of the world.


This is another of those distinctions that anti-reductionists fail to get—the difference between professing the flat fact that something is reducible, and seeing it.


In this, the anti-reductionists are not too greatly to be blamed, for it is part of a general problem.


I've written before on seeming knowledge that is not knowledge, and beliefs that are not about their supposed objects but only recordings to recite back in the classroom, and words that operate as stop signs for curiosity rather than answers, and technobabble which only conveys membership in the literary genre of \"science\"...

\n

There is a very great distinction between being able to see where the rainbow comes from, and playing around with prisms to confirm it, and maybe making a rainbow yourself by spraying water droplets—

\n

—versus some dour-faced philosopher just telling you, \"No, there's nothing special about the rainbow.  Didn't you hear? Scientists have explained it away.  Just something to do with raindrops or whatever.  Nothing to be excited about.\"

\n

I think this distinction probably accounts for a hell of a lot of the deadly existential emptiness that supposedly accompanies scientific reductionism.

\n

You have to interpret the anti-reductionists' experience of \"reductionism\", not in terms of their actually seeing how rainbows work, not in terms of their having the critical \"Aha!\", but in terms of their being told that the password is \"Science\".  The effect is just to move rainbows to a different literary genre—a literary genre they have been taught to regard as boring.

\n

For them, the effect of hearing \"Science has explained rainbows!\" is to hang up a sign over rainbows saying, \"This phenomenon has been labeled BORING by order of the Council of Sophisticated Literary Critics.  Move along.\"

\n

And that's all the sign says: only that, and nothing more.

\n

So the literary critics have their gnomes yanked out by force; not dissolved in insight, but removed by flat order of authority.  They are given no beauty to replace the hauntless air, no genuine understanding that could be interesting in its own right.  Just a label saying, \"Ha!  You thought rainbows were pretty?  You poor, unsophisticated fool.  This is part of the literary genre of science, of dry and solemn incomprehensible words.\"

\n

That's how anti-reductionists experience \"reductionism\".

\n

Well, can't blame Keats, poor lad probably wasn't raised right.

\n

But he dared to drink \"Confusion to the memory of Newton\"? 

\n

I propose \"To the memory of Keats's confusion\" as a toast for rationalists.  Cheers.

" } }, { "_id": "cphoF8naigLhRf3tu", "title": "Explaining vs. Explaining Away", "pageUrl": "https://www.lesswrong.com/posts/cphoF8naigLhRf3tu/explaining-vs-explaining-away", "postedAt": "2008-03-17T01:59:27.000Z", "baseScore": 111, "voteCount": 91, "commentCount": 101, "url": null, "contents": { "documentId": "cphoF8naigLhRf3tu", "html": "

John Keats's Lamia (1819) surely deserves some kind of award for Most Famously Annoying Poetry:

\n
\n

                    ...Do not all charms fly
At the mere touch of cold philosophy?
There was an awful rainbow once in heaven:
We know her woof, her texture; she is given
In the dull catalogue of common things.
Philosophy will clip an Angel's wings,
Conquer all mysteries by rule and line,
Empty the haunted air, and gnomed mine—
Unweave a rainbow...

\n
\n

My usual reply ends with the phrase:  \"If we cannot learn to take joy in the merely real, our lives will be empty indeed.\"  I shall expand on that tomorrow.

\n

Today I have a different point in mind.  Let's just take the lines:

\n
\n

Empty the haunted air, and gnomed mine—
Unweave a rainbow...

\n
\n

Apparently \"the mere touch of cold philosophy\", i.e., the truth, has destroyed:

- The haunts in the air
- The gnomes in the mine
- The rainbow

Which calls to mind a rather different bit of verse:

\n
\n

One of these things
Is not like the others
One of these things
Doesn't belong

\n
\n

\n

The air has been emptied of its haunts, and the mine de-gnomed—but the rainbow is still there!

\n

In \"Righting a Wrong Question\", I wrote:

\n
\n

Tracing back the chain of causality, step by step, I discover that my belief that I'm wearing socks is fully explained by the fact that I'm wearing socks...  On the other hand, if I see a mirage of a lake in the desert, the correct causal explanation of my vision does not involve the fact of any actual lake in the desert.  In this case, my belief in the lake is not just explained, but explained away.

\n
\n

The rainbow was explained.  The haunts in the air, and gnomes in the mine, were explained away.

\n

I think this is the key distinction that anti-reductionists don't get about reductionism.

\n

You can see this failure to get the distinction in the classic objection to reductionism:

\n
\n

If reductionism is correct, then even your belief in reductionism is just the mere result of the motion of molecules—why should I listen to anything you say?

\n
\n

The key word, in the above, is mere; a word which implies that accepting reductionism would explain away all the reasoning processes leading up to my acceptance of reductionism, the way that an optical illusion is explained away.

\n

But you can explain how a cognitive process works without it being \"mere\"!  My belief that I'm wearing socks is a mere result of my visual cortex reconstructing nerve impulses sent from my retina which received photons reflected off my socks... which is to say, according to scientific reductionism, my belief that I'm wearing socks is a mere result of the fact that I'm wearing socks.

\n

What could be going on in the anti-reductionists' minds, such that they would put rainbows and belief-in-reductionism, in the same category as haunts and gnomes?

\n

Several things are going on simultaneously.  But for now let's focus on the basic idea introduced yesterday:  The Mind Projection Fallacy between a multi-level map and a mono-level territory.

\n

(I.e:  There's no way you can model a 747 quark-by-quark, so you've got to use a multi-level map with explicit cognitive representations of wings, airflow, and so on.  This doesn't mean there's a multi-level territory.  The true laws of physics, to the best of our knowledge, are only over elementary particle fields.)

\n

I think that when physicists say \"There are no fundamental rainbows,\" the anti-reductionists hear, \"There are no rainbows.\"

\n

If you don't distinguish between the multi-level map and the mono-level territory, then when someone tries to explain to you that the rainbow is not a fundamental thing in physics, acceptance of this will feel like erasing rainbows from your multi-level map, which feels like erasing rainbows from the world.

\n

When Science says \"tigers are not elementary particles, they are made of quarks\" the anti-reductionist hears this as the same sort of dismissal as \"we looked in your garage for a dragon, but there was just empty air\".

\n

What scientists did to rainbows, and what scientists did to gnomes, seemingly felt the same to Keats...

\n

In support of this sub-thesis, I deliberately used several phrasings, in my discussion of Keats's poem, that were Mind Projection Fallacious.  If you didn't notice, this would seem to argue that such fallacies are customary enough to pass unremarked.

\n

For example:

\n
\n

\"The air has been emptied of its haunts, and the mine de-gnomed—but the rainbow is still there!\"

\n
\n

Actually, Science emptied the model of air of belief in haunts, and emptied the map of the mine of representations of gnomes.  Science did not actually—as Keats's poem itself would have it—take real Angel's wings, and destroy them with a cold touch of truth.  In reality there never were any haunts in the air, or gnomes in the mine.

\n

Another example:

\n
\n

\"What scientists did to rainbows, and what scientists did to gnomes, seemingly felt the same to Keats.\"

\n
\n

Scientists didn't do anything to gnomes, only to \"gnomes\".  The quotation is not the referent.

\n

But if you commit the Mind Projection Fallacy—and by default, our beliefs just feel like the way the world is—then at time T=0, the mines (apparently) contain gnomes; at time T=1 a scientist dances across the scene, and at time T=2 the mines (apparently) are empty.  Clearly, there used to be gnomes there, but the scientist killed them.

\n

Bad scientist!  No poems for you, gnomekiller!

\n

Well, that's how it feels, if you get emotionally attached to the gnomes, and then a scientist says there aren't any gnomes.  It takes a strong mind, a deep honesty, and a deliberate effort to say, at this point, \"That which can be destroyed by the truth should be,\" and \"The scientist hasn't taken the gnomes away, only taken my delusion away,\" and \"I never held just title to my belief in gnomes in the first place; I have not been deprived of anything I rightfully owned,\" and \"If there are gnomes, I desire to believe there are gnomes; if there are no gnomes, I desire to believe there are no gnomes; let me not become attached to beliefs I may not want,\" and all the other things that rationalists are supposed to say on such occasions.

\n

But with the rainbow it is not even necessary to go that far.  The rainbow is still there!

" } }, { "_id": "tPqQdLCuxanjhoaNs", "title": "Reductionism", "pageUrl": "https://www.lesswrong.com/posts/tPqQdLCuxanjhoaNs/reductionism", "postedAt": "2008-03-16T06:26:38.000Z", "baseScore": 131, "voteCount": 108, "commentCount": 165, "url": null, "contents": { "documentId": "tPqQdLCuxanjhoaNs", "html": "

Almost one year ago, in April 2007, Matthew C submitted the following suggestion for an Overcoming Bias topic:

\n
\n

\"How and why the current reigning philosophical hegemon (reductionistic materialism) is obviously correct [...], while the reigning philosophical viewpoints of all past societies and civilizations are obviously suspect—\"

\n
\n

I remember this, because I looked at the request and deemed it legitimate, but I knew I couldn't do that topic until I'd started on the Mind Projection Fallacy sequence, which wouldn't be for a while...

\n

But now it's time to begin addressing this question.  And while I haven't yet come to the \"materialism\" issue, we can now start on \"reductionism\".

\n

First, let it be said that I do indeed hold that \"reductionism\", according to the meaning I will give for that word, is obviously correct; and to perdition with any past civilizations that disagreed.

\n

This seems like a strong statement, at least the first part of it.  General Relativity seems well-supported, yet who knows but that some future physicist may overturn it?

\n

On the other hand, we are never going back to Newtonian mechanics.  The ratchet of science turns, but it does not turn in reverse.  There are cases in scientific history where a theory suffered a wound or two, and then bounced back; but when a theory takes as many arrows through the chest as Newtonian mechanics, it stays dead.

\n

\"To hell with what past civilizations thought\" seems safe enough, when past civilizations believed in something that has been falsified to the trash heap of history.

\n

And reductionism is not so much a positive hypothesis, as the absence of belief—in particular, disbelief in a form of the Mind Projection Fallacy.

\n

\n

I once met a fellow who claimed that he had experience as a Navy gunner, and he said, \"When you fire artillery shells, you've got to compute the trajectories using Newtonian mechanics.  If you compute the trajectories using relativity, you'll get the wrong answer.\"

\n

And I, and another person who was present, said flatly, \"No.\"  I added, \"You might not be able to compute the trajectories fast enough to get the answers in time—maybe that's what you mean?  But the relativistic answer will always be more accurate than the Newtonian one.\"

\n

\"No,\" he said, \"I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity.\"

\n

\"If that were really true,\" I replied, \"you could publish it in a physics journal and collect your Nobel Prize.\" 

\n

Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider.  Nuclei and airplanes alike, according to our understanding, are obeying special relativity, quantum mechanics, and chromodynamics.

\n

But we use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC.  A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark.

\n

So is the 747 made of something other than quarks?  No, you're just modeling it with representational elements that do not have a one-to-one correspondence with the quarks of the 747.  The map is not the territory.

\n

Why not model the 747 with a chromodynamic representation?  Because then it would take a gazillion years to get any answers out of the model.  Also we could not store the model in all the memory of all the computers in the world, as of 2008.

\n

As the saying goes, \"The map is not the territory, but you can't fold up the territory and put it in your glove compartment.\"  Sometimes you need a smaller map to fit in a more cramped glove compartment—but this does not change the territory.  The scale of a map is not a fact about the territory, it's a fact about the map.

\n

If it were possible to build and run a chromodynamic model of the 747, it would yield accurate predictions.  Better predictions than the aerodynamic model, in fact.

\n

To build a fully accurate model of the 747, it is not necessary, in principle, for the model to contain explicit descriptions of things like airflow and lift.  There does not have to be a single token, a single bit of RAM, that corresponds to the position of the wings.  It is possible, in principle, to build an accurate model of the 747 that makes no mention of anything except elementary particle fields and fundamental forces.

\n

\"What?\" cries the antireductionist.  \"Are you telling me the 747 doesn't really have wings?  I can see the wings right there!\"

\n

The notion here is a subtle one.  It's not just the notion that an object can have different descriptions at different levels.

\n

It's the notion that \"having different descriptions at different levels\" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory.

\n

It's not that the airplane itself, the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought.  Rather we, for our convenience, use different simplified models at different levels.

\n

If you looked at the ultimate chromodynamic model, the one that contained only elementary particle fields and fundamental forces, that model would contain all the facts about airflow and lift and wing positions—but these facts would be implicit, rather than explicit.

\n

You, looking at the model, and thinking about the model, would be able to figure out where the wings were.  Having figured it out, there would be an explicit representation in your mind of the wing position—an explicit computational object, there in your neural RAM.  In your mind.
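
(A toy sketch of implicit versus explicit, in Python.  Suppose the \"territory\" is nothing but a big array of particle positions.  The \"wing\" is in there implicitly; it becomes an explicit token only when some observer computes a summary:)

    import numpy as np

    # Toy mono-level \"territory\": nothing but particle positions.
    rng = np.random.default_rng(0)
    particles = rng.random((100000, 3))

    # No element of `particles` is a wing.  Computing a summary creates
    # an explicit high-level token -- in the analyst's map, not the array.
    def center_of_region(points, lo, hi):
        inside = points[((points >= lo) & (points <= hi)).all(axis=1)]
        return inside.mean(axis=0)

    wing_position = center_of_region(particles, 0.4, 0.6)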

\n

You might, indeed, deduce all sorts of explicit descriptions of the airplane, at various levels, and even explicit rules for how your models at different levels interacted with each other to produce combined predictions—

\n

And the way that algorithm feels from inside, is that the airplane would seem to be made up of many levels at once, interacting with each other.

\n

The way a belief feels from inside, is that you seem to be looking straight at reality.  When it actually seems that you're looking at a belief, as such, you are really experiencing a belief about belief.

\n

So when your mind simultaneously believes explicit descriptions of many different levels, and believes explicit rules for transiting between levels, as part of an efficient combined model, it feels like you are seeing a system that is made of different level descriptions and their rules for interaction.

\n

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level.  The airplane is too large.  Even a hydrogen atom would be too large.  Quark-to-quark interactions are insanely intractable.  You can't handle the truth.

\n

But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces.  You can't handle the raw truth, but reality can handle it without the slightest simplification.  (I wish I knew where Reality got its computing power.)

\n

The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings, the way that the mind of an engineer contains distinct additional cognitive entities that correspond to lift or airplane wings.

\n

This, as I see it, is the thesis of reductionism.  Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.  Understanding this on a gut level dissolves the question of \"How can you say the airplane doesn't really have wings, when I can see the wings right there?\"  The critical words are really and see.

" } }, { "_id": "BwtBhqvTPGG2n2GuJ", "title": "Qualitatively Confused", "pageUrl": "https://www.lesswrong.com/posts/BwtBhqvTPGG2n2GuJ/qualitatively-confused", "postedAt": "2008-03-14T17:01:08.000Z", "baseScore": 71, "voteCount": 57, "commentCount": 85, "url": null, "contents": { "documentId": "BwtBhqvTPGG2n2GuJ", "html": "

I suggest that a primary cause of confusion about the distinction between \"belief\", \"truth\", and \"reality\" is qualitative thinking about beliefs.

\n

Consider the archetypal postmodernist attempt to be clever:

\n
\n

\"The Sun goes around the Earth\" is true for Hunga Huntergatherer, but \"The Earth goes around the Sun\" is true for Amara Astronomer!  Different societies have different truths!

\n
\n

No, different societies have different beliefs.  Belief is of a different type than truth; it's like comparing apples and probabilities.

\n
\n

Ah, but there's no difference between the way you use the word 'belief' and the way you use the word 'truth'!  Whether you say, \"I believe 'snow is white'\", or you say, \"'Snow is white' is true\", you're expressing exactly the same opinion.

\n
\n

No, these sentences mean quite different things, which is how I can conceive of the possibility that my beliefs are false.

\n
\n

Oh, you claim to conceive it, but you never believe it.  As Wittgenstein said, \"If there were a verb meaning 'to believe falsely', it would not have any significant first person, present indicative.\"

\n
\n

And that's what I mean by putting my finger on qualitative reasoning as the source of the problem.  The dichotomy between belief and disbelief, being binary, is confusingly similar to the dichotomy between truth and untruth.

\n

\n

So let's use quantitative reasoning instead.  Suppose that I assign a 70% probability to the proposition that snow is white.  It follows that I think there's around a 70% chance that the sentence \"snow is white\" will turn out to be true.  If the sentence \"snow is white\" is true, is my 70% probability assignment to the proposition, also \"true\"?  Well, it's more true than it would have been if I'd assigned 60% probability, but not so true as if I'd assigned 80% probability.

\n

When talking about the correspondence between a probability assignment and reality, a better word than \"truth\" would be \"accuracy\".  \"Accuracy\" sounds more quantitative, like an archer shooting an arrow: how close did your probability assignment strike to the center of the target?

\n

To make a long story short, it turns out that there's a very natural way of scoring the accuracy of a probability assignment, as compared to reality: just take the logarithm of the probability assigned to the real state of affairs.

\n

So if snow is white, my belief \"70%: 'snow is white'\" will score -0.51 bits:  Log2(0.7) = -0.51.

\n

But what if snow is not white, as I have conceded a 30% probability is the case?  If \"snow is white\" is false, my belief \"30% probability: 'snow is not white'\" will score -1.73 bits.  Note that -1.73 < -0.51, so I have done worse.

\n

About how accurate do I think my own beliefs are?  Well, my expectation over the score is 70% * -0.51 + 30% * -1.73 = -0.88 bits.  If snow is white, then my beliefs will be more accurate than I expected; and if snow is not white, my beliefs will be less accurate than I expected; but in neither case will my belief be exactly as accurate as I expected on average.
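
(A minimal sketch of this scoring rule in Python, using the same numbers:)

    from math import log2

    p = 0.7                                  # credence that snow is white

    score_if_white = log2(p)                 # about -0.51 bits
    score_if_not_white = log2(1 - p)         # about -1.73 bits
    expected = p * score_if_white + (1 - p) * score_if_not_white
    print(expected)                          # about -0.88 bits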

\n

All this should not be confused with the statement \"I assign 70% credence that 'snow is white'.\"  I may well believe that proposition with probability ~1—be quite certain that this is in fact my belief.  If so I'll expect my meta-belief \"~1: 'I assign 70% credence that \"snow is white\"'\" to score ~0 bits of accuracy, which is as good as it gets.

\n

Just because I am uncertain about snow, does not mean I am uncertain about my quoted probabilistic beliefs.  Snow is out there, my beliefs are inside me.  I may be a great deal less uncertain about how uncertain I am about snow, than I am uncertain about snow.  (Though beliefs about beliefs are not always accurate.)

\n

Contrast this probabilistic situation to the qualitative reasoning where I just believe that snow is white, and believe that I believe that snow is white, and believe \"'snow is white' is true\", and believe \"my belief '\"snow is white\" is true' is correct\", etc.  Since all the quantities involved are 1, it's easy to mix them up.

\n

Yet the nice distinctions of quantitative reasoning will be short-circuited if you start thinking \"'\"snow is white\" with 70% probability' is true\", which is a type error.  It is a true fact about you, that you believe \"70% probability: 'snow is white'\"; but that does not mean the probability assignment itself can possibly be \"true\".  The belief scores either -0.51 bits or -1.73 bits of accuracy, depending on the actual state of reality.

\n

The cognoscenti will recognize \"'\"snow is white\" with 70% probability' is true\" as the mistake of thinking that probabilities are inherent properties of things.

\n

From the inside, our beliefs about the world look like the world, and our beliefs about our beliefs look like beliefs.  When you see the world, you are experiencing a belief from the inside.  When you notice yourself believing something, you are experiencing a belief about belief from the inside.  So if your internal representations of belief, and belief about belief, are dissimilar, then you are less likely to mix them up and commit the Mind Projection Fallacy—I hope.

\n

When you think in probabilities, your beliefs, and your beliefs about your beliefs, will hopefully not be represented similarly enough that you mix up belief and accuracy, or mix up accuracy and reality.  When you think in probabilities about the world, your beliefs will be represented with probabilities (0, 1).  Unlike the truth-values of propositions, which are in {true, false}.  As for the accuracy of your probabilistic belief, you can represent that in the range (-∞, 0).  Your probabilities about your beliefs will typically be extreme.  And things themselves—why, they're just red, or blue, or weighing 20 pounds, or whatever.

\n

Thus we will be less likely, perhaps, to mix up the map with the territory.

\n

This type distinction may also help us remember that uncertainty is a state of mind.  A coin is not inherently 50% uncertain of which way it will land.  The coin is not a belief processor, and does not have partial information about itself.  In qualitative reasoning you can create a belief that corresponds very straightforwardly to the coin, like \"The coin will land heads\".  This belief will be true or false depending on the coin, and there will be a transparent implication from the truth or falsity of the belief, to the facing side of the coin.

\n

But even under qualitative reasoning, to say that the coin itself is \"true\" or \"false\" would be a severe type error.  The coin is not a belief, it is a coin.  The territory is not the map.

\n

If a coin cannot be true or false, how much less can it assign a 50% probability to itself?

" } }, { "_id": "WgLNtfFyaZDySHuHb", "title": "Penguicon & Blook", "pageUrl": "https://www.lesswrong.com/posts/WgLNtfFyaZDySHuHb/penguicon-and-blook", "postedAt": "2008-03-13T17:32:43.000Z", "baseScore": 14, "voteCount": 11, "commentCount": 36, "url": null, "contents": { "documentId": "WgLNtfFyaZDySHuHb", "html": "

One million cumulative daily visits!  Woot n' stuff.  Also we're in the top 5,000 of all blogs on Technorati, and one of the top 10 econblogs by Technorati rank.

\n\n

Seems like a good time to mention that I'll be appearing at Penguicon, a combination open-source/science-fiction convention in Troy, MI, Apr 18-20, as a Nifty.  I'll be doing an intro to Bayesian reasoning that you probably don't need if you're reading this, possibly a panel on the Virtues of a Rationalist, some stuff on human intelligence upgrades, and definitely "The Ethics of Ending the World" with Aaron Diaz (Dresden Codak).

\n\n

After the jump, you can see some proposed cover art for the blook.

\"The_book_2\"\n

\n\n

For the benefit of the humor impaired:  Yes, this is a joke.  Erin, my girlfriend, Photoshopped this when she heard I was planning to do a book.

\n\n

This is all taking longer than I expected - as expected - but I do think I'm getting there.


\n\n

My current serious strategy for the blook is as follows:

  1. Finish all the important serial material on rationality - the posts that have to be done in month-long sequences.  That's probably at least another two months.
  2. Maybe spend another month or two doing large transhumanist sequences, either on the Singularity Institute blog (currently fairly defunct) or here on Overcoming Bias if the readers really want that.  My self-imposed deadline here is August 2008.
  3. Switch to writing shorter posts on topics that can be considered independently given the already-written background material - maybe cut back to a Sat-Sun-Tue-Thu schedule.  Don't worry, this won't happen anytime soon.
  4. On days when I don't post: spend my time compiling collections of related Overcoming Bias posts into medium-sized ebooks of 50 pages / 20,000 words or thereabouts, broken up into short blog-post-sized sections for easy reading.  (This is good because it can be done incrementally, and I tend to bog down when I try to do anything book-sized all at once.)  Leave a comment if you have any suggestions on ebook-writing software or ways to get the book design done cheaply.
  5. Publish the ebooks incrementally on Wowio, as a compromise between &quot;information wants to be free&quot; and &quot;authors want to eat&quot;.  (Email me if you happen to work at/with Wowio, because I'm interested in testing the waters on this sometime soon.)  Publish the ebooks with a cheap nominal charge for readers outside the US, since Wowio doesn't work outside the US yet.
  6. Produce a giant expensive 500-page hardcover dead-tree compendium of all the ebooks at Lulu or somewhere similar.
  7. Once all the fully detailed material exists somewhere and I can summarize it with a clean conscience: pick the easiest and most favorably received topics for a short, popular book; produce an outline and a couple of starting chapters; and begin looking for an agent who believes that the book can be a New York Times bestseller, to find a publisher who believes that the book can be a bestseller and who'll invest a corresponding amount in marketing it.

If you've got more experience in the publishing industry and you see some reason that any of this won't work, i.e., "No one will talk to you if you've ever done anything with Wowio or Lulu" or "Today's readers don't want short popular books, they want 500-page tomes" or something like that, please email me or comment.

" } }, { "_id": "np3tP49caG4uFLRbS", "title": "The Quotation is not the Referent", "pageUrl": "https://www.lesswrong.com/posts/np3tP49caG4uFLRbS/the-quotation-is-not-the-referent", "postedAt": "2008-03-13T00:53:44.000Z", "baseScore": 77, "voteCount": 54, "commentCount": 14, "url": null, "contents": { "documentId": "np3tP49caG4uFLRbS", "html": "

In classical logic, the operational definition of identity is that whenever 'A=B' is a theorem, you can substitute 'A' for 'B' in any theorem where B appears.  For example, if (2 + 2) = 4 is a theorem, and ((2 + 2) + 3) = 7 is a theorem, then (4 + 3) = 7 is a theorem.

\n

This leads to a problem which is usually phrased in the following terms:  The morning star and the evening star happen to be the same object, the planet Venus.  Suppose John knows that the morning star and evening star are the same object.  Mary, however, believes that the morning star is the god Lucifer, but the evening star is the god Venus.  John believes Mary believes that the morning star is Lucifer. Must John therefore (by substitution) believe that Mary believes that the evening star is Lucifer?

\n

Or here's an even simpler version of the problem.  2 + 2 = 4 is true; it is a theorem that (((2 + 2) = 4) = TRUE).  Fermat's Last Theorem is also true.  So:  I believe 2 + 2 = 4 => I believe TRUE => I believe Fermat's Last Theorem.

\n

Yes, I know this seems obviously wrong.  But imagine someone writing a logical reasoning program using the principle \"equal terms can always be substituted\", and this happening to them.  Now imagine them writing a paper about how to prevent it from happening.  Now imagine someone else disagreeing with their solution.  The argument is still going on.

\n

P'rsnally, I would say that John is committing a type error, like trying to subtract 5 grams from 20 meters.  \"The morning star\" is not the same type as the morning star, let alone the same thing.  Beliefs are not planets.

\n

\n
\n

morning star = evening star
\"morning star\" ≠ \"evening star\"

\n
\n

The problem, in my view, stems from the failure to enforce the type distinction between beliefs and things.  The original error was writing an AI that stores its beliefs about Mary's beliefs about \"the morning star\" using the same representation as in its beliefs about the morning star.

\n

If Mary believes the \"morning star\" is Lucifer, that doesn't mean Mary believes the \"evening star\" is Lucifer, because \"morning star\" ≠ \"evening star\".  The whole paradox stems from the failure to use quote marks in appropriate places.

\n

You may recall that this is not the first time I've talked about enforcing type discipline—the last time was when I spoke about the error of confusing expected utilities with utilities. It is immensely helpful, when one is first learning physics, to learn to keep track of one's units—it may seem like a bother to keep writing down 'cm' and 'kg' and so on, until you notice that (a) your answer seems to be the wrong order of magnitude and (b) it is expressed in seconds per square gram.

\n

Similarly, beliefs are different things than planets.  If we're talking about human beliefs, at least, then:  Beliefs live in brains, planets live in space.  Beliefs weigh a few micrograms, planets weigh a lot more.  Planets are larger than beliefs... but you get the idea.

\n

Merely putting quote marks around \"morning star\" seems insufficient to prevent people from confusing it with the morning star, due to the visual similarity of the text.  So perhaps a better way to enforce type discipline would be with a visibly different encoding:

\n
\n

morning star = evening star
13.15.18.14.9.14.7.0.19.20.1.18 ≠ 5.22.5.14.9.14.7.0.19.20.1.18

\n
\n
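
(A minimal sketch of enforcing this type discipline in Python.  The numeric scheme is the same one used above: a = 1 through z = 26, with 0 for the space.)

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Quoted:
        # A *name* for a thing -- deliberately a different type
        # from the thing itself.
        text: str

    venus = object()                  # one referent, the planet
    morning_star = evening_star = venus

    def encode(name):
        return '.'.join(str(' abcdefghijklmnopqrstuvwxyz'.index(c))
                        for c in name)

    print(morning_star is evening_star)                      # True: one planet
    print(Quoted('morning star') == Quoted('evening star'))  # False: two names
    print(encode('morning star'))    # 13.15.18.14.9.14.7.0.19.20.1.18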

Studying mathematical logic may also help you learn to distinguish the quote and the referent.  In mathematical logic, |- P (P is a theorem) and |- []'P' (it is provable that there exists an encoded proof of the encoded sentence P in some encoded proof system) are very distinct propositions.  If you drop a level of quotation in mathematical logic, it's like dropping a metric unit in physics—you can derive visibly ridiculous results, like \"The speed of light is 299,792,458 meters long.\"

\n

Alfred Tarski once tried to define the meaning of 'true' using an infinite family of sentences:

\n
\n

(\"Snow is white\" is true) if and only (snow is white)
(\"Weasels are green\" is true) if and only if (weasels are green)
...

\n
\n

When sentences like these start seeming meaningful, you'll know that you've started to distinguish between encoded sentences and states of the outside world.

\n

Similarly, the notion of truth is quite different from the notion of reality.  Saying \"true\" compares a belief to reality.  Reality itself does not need to be compared to any beliefs in order to be real.  Remember this the next time someone claims that nothing is true.

" } }, { "_id": "f6ZLxEWaankRZ2Crv", "title": "Probability is in the Mind", "pageUrl": "https://www.lesswrong.com/posts/f6ZLxEWaankRZ2Crv/probability-is-in-the-mind", "postedAt": "2008-03-12T04:08:30.000Z", "baseScore": 143, "voteCount": 121, "commentCount": 227, "url": null, "contents": { "documentId": "f6ZLxEWaankRZ2Crv", "html": "

\"Monsterwithgirl_2\"

\n

Yesterday I spoke of the Mind Projection Fallacy, giving the example of the alien monster who carries off a girl in a torn dress for intended ravishing—a mistake which I imputed to the artist's tendency to think that a woman's sexiness is a property of the woman herself, woman.sexiness, rather than something that exists in the mind of an observer, and probably wouldn't exist in an alien mind.

\n

The term \"Mind Projection Fallacy\" was coined by the late great Bayesian Master, E. T. Jaynes, as part of his long and hard-fought battle against the accursèd frequentists.  Jaynes was of the opinion that probabilities were in the mind, not in the environment—that probabilities express ignorance, states of partial information; and if I am ignorant of a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon.

\n

I cannot do justice to this ancient war in a few words—but the classic example of the argument runs thus:

\n

You have a coin.
The coin is biased.
You don't know which way it's biased or how much it's biased.  Someone just told you, \"The coin is biased\" and that's all they said.
This is all the information you have, and the only information you have.

\n

You draw the coin forth, flip it, and slap it down.

\n

Now—before you remove your hand and look at the result—are you willing to say that you assign a 0.5 probability to the coin having come up heads?

\n

\n

The frequentist says, \"No.  Saying 'probability 0.5' means that the coin has an inherent propensity to come up heads as often as tails, so that if we flipped the coin infinitely many times, the ratio of heads to tails would approach 1:1.  But we know that the coin is biased, so it can have any probability of coming up heads except 0.5.\"

\n

The Bayesian says, \"Uncertainty exists in the map, not in the territory.  In the real world, the coin has either come up heads, or come up tails.  Any talk of 'probability' must refer to the information that I have about the coin—my state of partial ignorance and partial knowledge—not just the coin itself.  Furthermore, I have all sorts of theorems showing that if I don't treat my partial knowledge a certain way, I'll make stupid bets.  If I've got to plan, I'll plan for a 50/50 state of uncertainty, where I don't weigh outcomes conditional on heads any more heavily in my mind than outcomes conditional on tails.  You can call that number whatever you like, but it has to obey the probability laws on pain of stupidity.  So I don't have the slightest hesitation about calling my outcome-weighting a probability.\"

\n

I side with the Bayesians.  You may have noticed that about me.

\n

Even before a fair coin is tossed, the notion that it has an inherent 50% probability of coming up heads may be just plain wrong.  Maybe you're holding the coin in such a way that it's just about guaranteed to come up heads, or tails, given the force at which you flip it, and the air currents around you.  But, if you don't know which way the coin is biased on this one occasion, so what?

\n

I believe there was a lawsuit where someone alleged that the draft lottery was unfair, because the slips with names on them were not being mixed thoroughly enough; and the judge replied, \"To whom is it unfair?\"

\n

To make the coinflip experiment repeatable, as frequentists are wont to demand, we could build an automated coinflipper, and verify that the results were 50% heads and 50% tails.  But maybe a robot with extra-sensitive eyes and a good grasp of physics, watching the autoflipper prepare to flip, could predict the coin's fall in advance—not with certainty, but with 90% accuracy.  Then what would the real probability be?

\n

There is no \"real probability\".  The robot has one state of partial information.  You have a different state of partial information.  The coin itself has no mind, and doesn't assign a probability to anything; it just flips into the air, rotates a few times, bounces off some air molecules, and lands either heads or tails.

\n

So that is the Bayesian view of things, and I would now like to point out a couple of classic brainteasers that derive their brain-teasing ability from the tendency to think of probabilities as inherent properties of objects.

\n

Let's take the old classic:  You meet a mathematician on the street, and she happens to mention that she has given birth to two children on two separate occasions.  You ask:  \"Is at least one of your children a boy?\"  The mathematician says, \"Yes, he is.\"

\n

What is the probability that she has two boys?  If you assume that the prior probability of a child being a boy is 1/2, then the probability that she has two boys, on the information given, is 1/3.  The prior probabilities were:  1/4 two boys, 1/2 one boy one girl, 1/4 two girls.  The mathematician's \"Yes\" response has probability ~1 in the first two cases, and probability ~0 in the third.  Renormalizing leaves us with a 1/3 probability of two boys, and a 2/3 probability of one boy one girl.

\n

But suppose that instead you had asked, \"Is your eldest child a boy?\" and the mathematician had answered \"Yes.\"  Then the probability of the mathematician having two boys would be 1/2.  Since the eldest child is a boy, and the younger child can be anything it pleases.

\n

Likewise if you'd asked \"Is your youngest child a boy?\"  The probability of their being both boys would, again, be 1/2.
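
(If the verbal argument feels slippery, brute enumeration settles the numbers.  A minimal sketch in Python:)

    from fractions import Fraction
    from itertools import product

    families = list(product('BG', repeat=2))      # (eldest, youngest)

    def p_two_boys(given):
        consistent = [f for f in families if given(f)]
        return Fraction(sum(f == ('B', 'B') for f in consistent),
                        len(consistent))

    print(p_two_boys(lambda f: 'B' in f))         # 1/3: at least one boy
    print(p_two_boys(lambda f: f[0] == 'B'))      # 1/2: eldest is a boy
    print(p_two_boys(lambda f: f[1] == 'B'))      # 1/2: youngest is a boy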

\n

Now, if at least one child is a boy, it must be either the oldest child who is a boy, or the youngest child who is a boy.  So how can the answer in the first case be different from the answer in the latter two?

\n

Or here's a very similar problem:  Let's say I have four cards, the ace of hearts, the ace of spades, the two of hearts, and the two of spades.  I draw two cards at random.  You ask me, \"Are you holding at least one ace?\" and I reply \"Yes.\"  What is the probability that I am holding a pair of aces?  It is 1/5.  There are six possible combinations of two cards, with equal prior probability, and you have just eliminated the possibility that I am holding a pair of twos.  Of the five remaining combinations, only one combination is a pair of aces.  So 1/5.

\n

Now suppose that instead you asked me, \"Are you holding the ace of spades?\"  If I reply \"Yes\", the probability that the other card is the ace of hearts is 1/3.  (You know I'm holding the ace of spades, and there are three possibilities for the other card, only one of which is the ace of hearts.)  Likewise, if you ask me \"Are you holding the ace of hearts?\" and I reply \"Yes\", the probability I'm holding a pair of aces is 1/3.

\n

But then how can it be that if you ask me, \"Are you holding at least one ace?\" and I say \"Yes\", the probability I have a pair is 1/5?  Either I must be holding the ace of spades or the ace of hearts, as you know; and either way, the probability that I'm holding a pair of aces is 1/3.

\n

How can this be?  Have I miscalculated one or more of these probabilities?

\n

If you want to figure it out for yourself, do so now, because I'm about to reveal...

\n

That all stated calculations are correct.
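
(You can check this by the same sort of brute enumeration.  A minimal sketch in Python:)

    from fractions import Fraction
    from itertools import combinations

    hands = list(combinations(['AS', 'AH', '2S', '2H'], 2))   # 6 hands

    def p_pair_of_aces(given):
        consistent = [h for h in hands if given(h)]
        return Fraction(sum(set(h) == {'AS', 'AH'} for h in consistent),
                        len(consistent))

    print(p_pair_of_aces(lambda h: 'AS' in h or 'AH' in h))   # 1/5
    print(p_pair_of_aces(lambda h: 'AS' in h))                # 1/3
    print(p_pair_of_aces(lambda h: 'AH' in h))                # 1/3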

\n

As for the paradox, there isn't one.  The appearance of paradox comes from thinking that the probabilities must be properties of the cards themselves.  The ace I'm holding has to be either hearts or spades; but that doesn't mean that your knowledge about my cards must be the same as if you knew I was holding hearts, or knew I was holding spades.

\n

It may help to think of Bayes's Theorem:

\n
\n

P(H|E) = P(E|H)P(H) / P(E)

\n
\n

That last term, where you divide by P(E), is the part where you throw out all the possibilities that have been eliminated, and renormalize your probabilities over what remains.

\n

Now let's say that you ask me, \"Are you holding at least one ace?\"  Before I answer, your probability that I say \"Yes\" should be 5/6.

\n

But if you ask me \"Are you holding the ace of spades?\", your prior probability that I say \"Yes\" is just 1/2.

\n

So right away you can see that you're learning something very different in the two cases.  You're going to be eliminating some different possibilities, and renormalizing using a different P(E).  If you learn two different items of evidence, you shouldn't be surprised at ending up in two different states of partial information.
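
(Spelled out with the theorem: take H = \"I hold a pair of aces\", so P(H) = 1/6, and P(E|H) = 1 for either question.  Then P(H | yes to \"at least one ace\") = (1 × 1/6) / (5/6) = 1/5, while P(H | yes to \"the ace of spades\") = (1 × 1/6) / (1/2) = 1/3.  The different denominators P(E) are doing all the work.)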

\n

Similarly, if I ask the mathematician, \"Is at least one of your two children a boy?\" I expect to hear \"Yes\" with probability 3/4, but if I ask \"Is your eldest child a boy?\" I expect to hear \"Yes\" with probability 1/2.  So it shouldn't be surprising that I end up in a different state of partial knowledge, depending on which of the two questions I ask.

\n

The only reason for seeing a \"paradox\" is thinking as though the probability of holding a pair of aces is a property of cards that have at least one ace, or a property of cards that happen to contain the ace of spades.  In which case, it would be paradoxical for card-sets containing at least one ace to have an inherent pair-probability of 1/5, while card-sets containing the ace of spades had an inherent pair-probability of 1/3, and card-sets containing the ace of hearts had an inherent pair-probability of 1/3.

\n

Similarly, if you think a 1/3 probability of being both boys is an inherent property of child-sets that include at least one boy, then that is not consistent with child-sets of which the eldest is male having an inherent probability of 1/2 of being both boys, and child-sets of which the youngest is male having an inherent 1/2 probability of being both boys.  It would be like saying, \"All green apples weigh a pound, and all red apples weigh a pound, and all apples that are green or red weigh half a pound.\"

\n

That's what happens when you start thinking as if probabilities are in things, rather than probabilities being states of partial information about things.

\n

Probabilities express uncertainty, and it is only agents who can be uncertain.  A blank map does not correspond to a blank territory.  Ignorance is in the mind.

" } }, { "_id": "ZTRiSNmeGQK8AkdN2", "title": "Mind Projection Fallacy", "pageUrl": "https://www.lesswrong.com/posts/ZTRiSNmeGQK8AkdN2/mind-projection-fallacy", "postedAt": "2008-03-11T00:29:02.000Z", "baseScore": 94, "voteCount": 87, "commentCount": 91, "url": null, "contents": { "documentId": "ZTRiSNmeGQK8AkdN2", "html": "

\"Monsterwithgirl_2\"In the dawn days of science fiction, alien invaders would occasionally kidnap a girl in a torn dress and carry her off for intended ravishing, as lovingly depicted on many ancient magazine covers.  Oddly enough, the aliens never go after men in torn shirts.

\n

Would a non-humanoid alien, with a different evolutionary history and evolutionary psychology, sexually desire a human female?  It seems rather unlikely.  To put it mildly.

\n

People don't make mistakes like that by deliberately reasoning:  \"All possible minds are likely to be wired pretty much the same way, therefore a bug-eyed monster will find human females attractive.\"  Probably the artist did not even think to ask whether an alien perceives human females as attractive.  Instead, a human female in a torn dress is sexy—inherently so, as an intrinsic property.

\n

They who went astray did not think about the alien's evolutionary history; they focused on the woman's torn dress.  If the dress were not torn, the woman would be less sexy; the alien monster doesn't enter into it.

\n

\n

Apparently we instinctively represent Sexiness as a direct attribute of the Woman object, Woman.sexiness, like Woman.height or Woman.weight.

\n

If your brain uses that data structure, or something metaphorically similar to it, then from the inside it feels like sexiness is an inherent property of the woman, not a property of the alien looking at the woman.  Since the woman is attractive, the alien monster will be attracted to her—isn't that logical?
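
(A minimal sketch of the repair in Python: move the attribute out of the Woman object and into a two-place function, so that it is explicitly a fact about an observer looking at the woman.)

    class Woman:
        def __init__(self, height, weight):
            self.height = height      # genuinely one-place attributes
            self.weight = weight

    class Human:
        def evaluate(self, woman):
            return 0.9                # toy number for illustration

    class BugEyedMonster:
        def evaluate(self, woman):
            return 0.0                # a different evolutionary history

    def sexiness(observer, woman):
        # Two-place: a fact about the observer's mind, not the woman.
        return observer.evaluate(woman)

    alice = Woman(170, 60)
    print(sexiness(Human(), alice))            # 0.9
    print(sexiness(BugEyedMonster(), alice))   # 0.0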

\n

E. T. Jaynes used the term Mind Projection Fallacy to denote the error of projecting your own mind's properties into the external world.  Jaynes, as a late grand master of the Bayesian Conspiracy, was most concerned with the mistreatment of probabilities as inherent properties of objects, rather than states of partial knowledge in some particular mind.  More about this shortly.

\n

But the Mind Projection Fallacy generalizes as an error.  It is in the argument over the real meaning of the word sound, and in the magazine cover of the monster carrying off a woman in the torn dress, and Kant's declaration that space by its very nature is flat, and Hume's definition of a priori ideas as those \"discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe\"...

\n

(Incidentally, I once read an SF story about a human male who entered into a sexual relationship with a sentient alien plant of appropriately squishy fronds; discovered that it was an androecious (male) plant; agonized about this for a bit; and finally decided that it didn't really matter at that point.  And in Foglio and Pollotta's Illegal Aliens, the humans land on a planet inhabited by sentient insects, and see a movie advertisement showing a human carrying off a bug in a delicate chiffon dress.  Just thought I'd mention that.)

" } }, { "_id": "rQEwySCcLtdKHkrHp", "title": "Righting a Wrong Question", "pageUrl": "https://www.lesswrong.com/posts/rQEwySCcLtdKHkrHp/righting-a-wrong-question", "postedAt": "2008-03-09T13:00:00.000Z", "baseScore": 132, "voteCount": 114, "commentCount": 111, "url": null, "contents": { "documentId": "rQEwySCcLtdKHkrHp", "html": "

When you are faced with an unanswerable question—a question to which it seems impossible to even imagine an answer—there is a simple trick which can turn the question solvable.

\n

Compare:

- \"Why do I have free will?\"
- \"Why do I think I have free will?\"

The nice thing about the second question is that it is guaranteed to have a real answer, whether or not there is any such thing as free will.  Asking \"Why do I have free will?\" or \"Do I have free will?\" sends you off thinking about tiny details of the laws of physics, so distant from the macroscopic level that you couldn't begin to see them with the naked eye.  And you're asking \"Why is X the case?\" where X may not be coherent, let alone the case.

\n

\"Why do I think I have free will?\", in contrast, is guaranteed answerable.  You do, in fact, believe you have free will.  This belief seems far more solid and graspable than the ephemerality of free will.  And there is, in fact, some nice solid chain of cognitive cause and effect leading up to this belief.

\n

If you've already outgrown free will, choose one of these substitutes:

\n\n

\n

The beauty of this method is that it works whether or not the question is confused.  As I type this, I am wearing socks.  I could ask \"Why am I wearing socks?\" or \"Why do I believe I'm wearing socks?\"  Let's say I ask the second question.  Tracing back the chain of causality, I find:

\n\n

Tracing back the chain of causality, step by step, I discover that my belief that I'm wearing socks is fully explained by the fact that I'm wearing socks.  This is right and proper, as you cannot gain information about something without interacting with it.

\n

On the other hand, if I see a mirage of a lake in a desert, the correct causal explanation of my vision does not involve the fact of any actual lake in the desert.  In this case, my belief in the lake is not just explained, but explained away.

\n

But either way, the belief itself is a real phenomenon taking place in the real universe—psychological events are events—and its causal history can be traced back.

\n

\"Why is there a lake in the middle of the desert?\" may fail if there is no lake to be explained.  But \"Why do I perceive a lake in the middle of the desert?\" always has a causal explanation, one way or the other.

\n

Perhaps someone will see an opportunity to be clever, and say:  \"Okay.  I believe in free will because I have free will.  There, I'm done.\"  Of course it's not that easy.

\n

My perception of socks on my feet, is an event in the visual cortex.  The workings of the visual cortex can be investigated by cognitive science, should they be confusing.

\n

My retina receiving light is not a mystical sensing procedure, a magical sock detector that lights in the presence of socks for no explicable reason; there are mechanisms that can be understood in terms of biology.  The photons entering the retina can be understood in terms of optics.  The shoe's surface reflectance can be understood in terms of electromagnetism and chemistry.  My feet getting cold can be understood in terms of thermodynamics.

\n

So it's not as easy as saying, \"I believe I have free will because I have it—there, I'm done!\"  You have to be able to break the causal chain into smaller steps, and explain the steps in terms of elements not themselves confusing.

\n

The mechanical interaction of my retina with my socks is quite clear, and can be described in terms of non-confusing components like photons and electrons.  Where's the free-will-sensor in your brain, and how does it detect the presence or absence of free will?  How does the sensor interact with the sensed event, and what are the mechanical details of the interaction?

\n

If your belief does derive from valid observation of a real phenomenon, we will eventually reach that fact, if we start tracing the causal chain backward from your belief.

\n

If what you are really seeing is your own confusion, tracing back the chain of causality will find an algorithm that runs skew to reality.

\n

Either way, the question is guaranteed to have an answer.  You even have a nice, concrete place to begin tracing—your belief, sitting there solidly in your mind.

\n

Cognitive science may not seem so lofty and glorious as metaphysics.  But at least questions of cognitive science are solvable.  Finding an answer may not be easy, but at least an answer exists.

\n

Oh, and also: the idea that cognitive science is not so lofty and glorious as metaphysics is simply wrong.  Some readers are beginning to notice this, I hope.

" } }, { "_id": "XzrqkhfwtiSDgKoAF", "title": "Wrong Questions", "pageUrl": "https://www.lesswrong.com/posts/XzrqkhfwtiSDgKoAF/wrong-questions", "postedAt": "2008-03-08T17:11:37.000Z", "baseScore": 83, "voteCount": 72, "commentCount": 138, "url": null, "contents": { "documentId": "XzrqkhfwtiSDgKoAF", "html": "

Where the mind cuts against reality's grain, it generates wrong questions—questions that cannot possibly be answered on their own terms, but only dissolved by understanding the cognitive algorithm that generates the perception of a question.

\n

One good cue that you're dealing with a \"wrong question\" is when you cannot even imagine any concrete, specific state of how-the-world-is that would answer the question.  When it doesn't even seem possible to answer the question.

\n

Take the Standard Definitional Dispute, for example, about the tree falling in a deserted forest.  Is there any way-the-world-could-be—any state of affairs—that corresponds to the word \"sound\" really meaning only acoustic vibrations, or really meaning only auditory experiences?

\n

(\"Why, yes,\" says the one, \"it is the state of affairs where 'sound' means acoustic vibrations.\"  So Taboo the word 'means', and 'represents', and all similar synonyms, and describe again:  How can the world be, what state of affairs, would make one side right, and the other side wrong?)

\n

Or if that seems too easy, take free will:  What concrete state of affairs, whether in deterministic physics, or in physics with a dice-rolling random component, could ever correspond to having free will?

\n

And if that seems too easy, then ask \"Why does anything exist at all?\", and then tell me what a satisfactory answer to that question would even look like.

\n

\n

And no, I don't know the answer to that last one.  But I can guess one thing, based on my previous experience with unanswerable questions.  The answer will not consist of some grand triumphant First Cause.  The question will go away as a result of some insight into how my mental algorithms run skew to reality, after which I will understand how the question itself was wrong from the beginning—how the question itself assumed the fallacy, contained the skew.

\n

Mystery exists in the mind, not in reality.  If I am ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon itself.  All the more so, if it seems like no possible answer can exist:  Confusion exists in the map, not in the territory.  Unanswerable questions do not mark places where magic enters the universe.  They mark places where your mind runs skew to reality.

\n

Such questions must be dissolved.  Bad things happen when you try to answer them.  It inevitably generates the worst sort of Mysterious Answer to a Mysterious Question:  The one where you come up with seemingly strong arguments for your Mysterious Answer, but the \"answer\" doesn't let you make any new predictions even in retrospect, and the phenomenon still possesses the same sacred inexplicability that it had at the start.

\n

I could guess, for example, that the answer to the puzzle of the First Cause is that nothing does exist—that the whole concept of \"existence\" is bogus.  But if you sincerely believed that, would you be any less confused?  Me neither.

\n

But the wonderful thing about unanswerable questions is that they are always solvable, at least in my experience.  What went through Queen Elizabeth I's mind, first thing in the morning, as she woke up on her fortieth birthday?  As I can easily imagine answers to this question, I can readily see that I may never be able to actually answer it, the true information having been lost in time.

\n

On the other hand, \"Why does anything exist at all?\" seems so absolutely impossible that I can infer that I am just confused, one way or another, and the truth probably isn't all that complicated in an absolute sense, and once the confusion goes away I'll be able to see it.

\n

This may seem counterintuitive if you've never solved an unanswerable question, but I assure you that it is how these things work.

\n

Coming tomorrow:  A simple trick for handling \"wrong questions\".

" } }, { "_id": "Mc6QcrsbH5NRXbCRX", "title": "Dissolving the Question", "pageUrl": "https://www.lesswrong.com/posts/Mc6QcrsbH5NRXbCRX/dissolving-the-question", "postedAt": "2008-03-08T03:17:07.000Z", "baseScore": 156, "voteCount": 132, "commentCount": 123, "url": null, "contents": { "documentId": "Mc6QcrsbH5NRXbCRX", "html": "

\"If a tree falls in the forest, but no one hears it, does it make a sound?\"

\n

I didn't answer that question.  I didn't pick a position, \"Yes!\" or \"No!\", and defend it.  Instead I went off and deconstructed the human algorithm for processing words, even going so far as to sketch an illustration of a neural network.  At the end, I hope, there was no question left—not even the feeling of a question.

\n

Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct:  If you give them a question, they try to answer it.

\n

Like, say, \"Do we have free will?\"

\n

The dangerous instinct of philosophy is to marshal the arguments in favor, and marshal the arguments against, and weigh them up, and publish them in a prestigious journal of philosophy, and so finally conclude:  \"Yes, we must have free will,\" or \"No, we cannot possibly have free will.\"

\n

Some philosophers are wise enough to recall the warning that most philosophical disputes are really disputes over the meaning of a word, or confusions generated by using different meanings for the same word in different places.  So they try to define very precisely what they mean by \"free will\", and then ask again, \"Do we have free will?  Yes or no?\"

\n

A philosopher wiser yet, may suspect that the confusion about \"free will\" shows the notion itself is flawed.  So they pursue the Traditional Rationalist course:  They argue that \"free will\" is inherently self-contradictory, or meaningless because it has no testable consequences.  And then they publish these devastating observations in a prestigious philosophy journal.

\n

But proving that you are confused may not make you feel any less confused.  Proving that a question is meaningless may not help you any more than answering it.

\n

\n

The philosopher's instinct is to find the most defensible position, publish it, and move on.  But the \"naive\" view, the instinctive view, is a fact about human psychology.  You can prove that free will is impossible until the Sun goes cold, but this leaves an unexplained fact of cognitive science:  If free will doesn't exist, what goes on inside the head of a human being who thinks it does?  This is not a rhetorical question!

\n

It is a fact about human psychology that people think they have free will.  Finding a more defensible philosophical position doesn't change, or explain, that psychological fact.  Philosophy may lead you to reject the concept, but rejecting a concept is not the same as understanding the cognitive algorithms behind it.

\n

You could look at the Standard Dispute over \"If a tree falls in the forest, and no one hears it, does it make a sound?\", and you could do the Traditional Rationalist thing:  Observe that the two don't disagree on any point of anticipated experience, and triumphantly declare the argument pointless.  That happens to be correct in this particular case; but, as a question of cognitive science, why did the arguers make that mistake in the first place?

\n

The key idea of the heuristics and biases program is that the mistakes we make, often reveal far more about our underlying cognitive algorithms than our correct answers.  So (I asked myself, once upon a time) what kind of mind design corresponds to the mistake of arguing about trees falling in deserted forests?

\n

The cognitive algorithms we use, are the way the world feels.  And these cognitive algorithms may not have a one-to-one correspondence with reality—not even macroscopic reality, to say nothing of the true quarks.  There can be things in the mind that cut skew to the world.

\n

For example, there can be a dangling unit in the center of a neural network, which does not correspond to any real thing, or any real property of any real thing, existent anywhere in the real world.  This dangling unit is often useful as a shortcut in computation, which is why we have them.  (Metaphorically speaking.  Human neurobiology is surely far more complex.)

\n

This dangling unit feels like an unresolved question, even after every answerable query is answered.  No matter how much anyone proves to you that no difference of anticipated experience depends on the question, you're left wondering:  \"But does the falling tree really make a sound, or not?\"

\n

But once you understand in detail how your brain generates the feeling of the question—once you realize that your feeling of an unanswered question, corresponds to an illusory central unit wanting to know whether it should fire, even after all the edge units are clamped at known values—or better yet, you understand the technical workings of Naive Bayes—then you're done.  Then there's no lingering feeling of confusion, no vague sense of dissatisfaction.
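
\n

To make that concrete, here is a toy sketch in Python; the weights, features, and numbers are all invented for illustration, and this is not anyone's actual model of the brain.  Once every edge unit is clamped to an observed value, the central unit's activation is fully determined; any sense that a question remains is a fact about the feeling, not about the network's arithmetic.

    import math

    def sigmoid(a):
        return 1 / (1 + math.exp(-a))

    # Clamp every edge unit to a known, observed value (1 = present, 0 = absent).
    edges   = {"blue": 1, "egg_shaped": 1, "furred": 1, "luminescent": 1}
    weights = {"blue": 2.0, "egg_shaped": 2.0, "furred": 1.5, "luminescent": 1.0}
    bias    = -3.0  # invented numbers throughout

    # The central unit's activation is fully determined by the clamped edges;
    # there is no residual quantity left for it to "ask" about.
    activation = sigmoid(bias + sum(weights[f] * v for f, v in edges.items()))
    print(activation)  # ~0.97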

\n

If there is any lingering feeling of a remaining unanswered question, or of having been fast-talked into something, then this is a sign that you have not dissolved the question.  A vague dissatisfaction should be as much warning as a shout.  Really dissolving the question doesn't leave anything behind.

\n

A triumphant thundering refutation of free will, an absolutely unarguable proof that free will cannot exist, feels very satisfying—a grand cheer for the home team.  And so you may not notice that—as a point of cognitive science—you do not have a full and satisfactory descriptive explanation of how each intuitive sensation arises, point by point.

\n

You may not even want to admit your ignorance, of this point of cognitive science, because that would feel like a score against Your Team.  In the midst of smashing all foolish beliefs of free will, it would seem like a concession to the opposing side to concede that you've left anything unexplained.

\n

And so, perhaps, you'll come up with a just-so evolutionary-psychological argument that hunter-gatherers who believed in free will, were more likely to take a positive outlook on life, and so outreproduce other hunter-gatherers—to give one example of a completely bogus explanation.  If you say this, you are arguing that the brain generates an illusion of free will—but you are not explaining how.  You are trying to dismiss the opposition by deconstructing its motives—but in the story you tell, the illusion of free will is a brute fact.  You have not taken the illusion apart to see the wheels and gears.

\n

Imagine that in the Standard Dispute about a tree falling in a deserted forest, you first prove that no difference of anticipation exists, and then go on to hypothesize, \"But perhaps people who said that arguments were meaningless were viewed as having conceded, and so lost social status, so now we have an instinct to argue about the meanings of words.\"  That's arguing that, or explaining why, a confusion exists.  Now look at the neural network structure in Feel the Meaning.  That's explaining how, disassembling the confusion into smaller pieces which are not themselves confusing.  See the difference?

\n

Coming up with good hypotheses about cognitive algorithms (or even hypotheses that hold together for half a second) is a good deal harder than just refuting a philosophical confusion.  Indeed, it is an entirely different art.  Bear this in mind, and you should feel less embarrassed to say, \"I know that what you say can't possibly be true, and I can prove it.  But I cannot write out a flowchart which shows how your brain makes the mistake, so I'm not done yet, and will continue investigating.\"

\n

I say all this, because it sometimes seems to me that at least 20% of the real-world effectiveness of a skilled rationalist comes from not stopping too early.  If you keep asking questions, you'll get to your destination eventually.  If you decide too early that you've found an answer, you won't.

\n

The challenge, above all, is to notice when you are confused—even if it just feels like a little tiny bit of confusion—and even if there's someone standing across from you, insisting that humans have free will, and smirking at you, and the fact that you don't know exactly how the cognitive algorithms work, has nothing to do with the searing folly of their position...

\n

But when you can lay out the cognitive algorithm in sufficient detail that you can walk through the thought process, step by step, and describe how each intuitive perception arises—decompose the confusion into smaller pieces not themselves confusing—then you're done.

\n

So be warned that you may believe you're done, when all you have is a mere triumphant refutation of a mistake.

\n

But when you're really done, you'll know you're done.   Dissolving the question is an unmistakable feeling—once you experience it, and, having experienced it, resolve not to be fooled again.  Those who dream do not know they dream, but when you wake you know you are awake.

\n

Which is to say:  When you're done, you'll know you're done, but unfortunately the reverse implication does not hold.

\n

So here's your homework problem:  What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about \"free will\"?

\n

Your assignment is not to argue about whether people have free will, or not.

\n

Your assignment is not to argue that free will is compatible with determinism, or not.

\n

Your assignment is not to argue that the question is ill-posed, or that the concept is self-contradictory, or that it has no testable consequences.

\n

You are not asked to invent an evolutionary explanation of how people who believed in free will would have reproduced; nor an account of how the concept of free will seems suspiciously congruent with bias X.  Such are mere attempts to explain why people believe in \"free will\", not explain how.

\n

Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument.

\n

This is one of the first real challenges I tried as an aspiring rationalist, once upon a time.  One of the easier conundrums, relatively speaking.  May it serve you likewise.

" } }, { "_id": "6sDuPgCqWD8b4Wo72", "title": "Gary Gygax Annihilated at 69", "pageUrl": "https://www.lesswrong.com/posts/6sDuPgCqWD8b4Wo72/gary-gygax-annihilated-at-69", "postedAt": "2008-03-06T20:54:56.000Z", "baseScore": 16, "voteCount": 18, "commentCount": 20, "url": null, "contents": { "documentId": "6sDuPgCqWD8b4Wo72", "html": "

Yesterday I heard that Gary Gygax, inventor of Dungeons and Dragons, had died at 69.  And I don't understand, I truly don't, why that of all deaths should affect me the way it does.

\n\n

Every day, people die; 150,000 of them, in fact.  Every now and then I read the obituary of a scientist whose work I admired, and I don't feel like this.  I should, of course, but I don't.  I remember hearing about the death of Isaac Asimov, and more distantly, the death of Robert Heinlein (though I was 8 at the time) and that didn't affect me like this.

\n\n

I never knew one single thing about Gary Gygax.  I don't know if he had a wife or children.  I couldn't guess his political opinions, or what he thought about the future of humanity.  He was just a name on the cover of books I read until they disintegrated.

\n\n

I searched on the Net and just found comments from other people feeling the same way.  Stopped in their tracks by this one death, and not understanding why, and trying to come up with an explanation for their own feelings.  Why him?

\n\n

I never even really played D&D all that much.  I played a little with David Levitt, my best friend in elementary school - I think it was how we initially met, in fact, though the memory fades into oblivion.  I remember my father teaching me to play very simple D&D games, around the same time I was entering kindergarten; I remember being upset that I couldn't cast a Shield spell more than once.  But mostly, I just read the rulebooks.

\n\n

There are people who played D&D with their friends, every week or every day, until late at night, in modules that Gary Gygax designed.  I understand why they feel sad.  But all I did, mostly, was read the rulebooks to myself.  Why do I feel the same way?

Did D&D help teach me that when the world is in danger, you are expected to save it?  Did Gary Gygax teach me to form new worlds in my imagination, or to fantasize about more interesting powers than money?  Is there something about mentally simulating D&D's rules that taught me how to think?  Is it just the sheer amount of total time my imagination spent in those worlds?

\n\n

I truly don't know.  I truly don't know why I feel this way about Gary Gygax's death.  I don't know why I feel this compulsion to write about it, to tell someone.  I don't think I would have predicted this sadness, if you'd asked me one day before the event.

\n\n

It tells me something I didn't know before, about how D&D must have helped to shape my childhood, and make me what I am.

\n\n

And if you think that's amusing, honi soit qui mal y pense ("shame on him who thinks ill of it").

\n\n

The online obituaries invariably contain comments along the lines of, "Now Gygax gets to explore the Seven Heavens" or "God has new competition as a designer of worlds."

\n\n

As an atheist, reading these comments just makes it worse, reminds me of the truth.

\n\n

There are certain ways, in the D&D universe, to permanently destroy a soul - annihilate it, so that it can never be raised or resurrected.  You destroy the soul while it's hiding inside an amulet of life protection, or travel to the Outer Planes and destroy the soul in the afterlife of its home plane.  Roger M. Wilcox once wrote a story, a rather silly story, that in the midst of silliness included a funeral ceremony for a paladin whose soul had been destroyed:

Josephus took a deep breath.  "It is normally at this point in a eulogy where I console the friends and family of the departed by reminding them that although the deceased is no longer with us, he still smiles down on us from Heaven, and that those of us who loved him will see him again when they themselves pass on.  Once in a great while, when I agree to perform a funeral for someone who was of a different alignment, I will have to change this part of the eulogy since his soul will have gone to a plane other than Heaven.  It always makes it that much more poignant for me to know that neither I nor the mourners in my congregation will see him again when our lives end, but at least we have the reassurance that the departed soul still exists somewhere and that he may be smiling down upon us from Arcadia, or Elysium, or Nirvana, or whichever version of Heaven his alignment allows; and that, the power of his priesthood permitting, he may even be raised back to life or reincarnated in a new body someday."

\n\n

The cleric's voice quavered.  "But this time, I cannot say even this.  For I know that Ringman's soul is not in Heaven where it rightfully belongs, nor in any other plane in the multiverse.  He will never come back, he cannot see us, and none of us will ever see him again.  This funeral is a true goodbye, which he will never hear."

Goodbye, Gary.

" } }, { "_id": "FaJaCgqBKphrDzDSj", "title": "37 Ways That Words Can Be Wrong", "pageUrl": "https://www.lesswrong.com/posts/FaJaCgqBKphrDzDSj/37-ways-that-words-can-be-wrong", "postedAt": "2008-03-06T05:09:49.000Z", "baseScore": 241, "voteCount": 175, "commentCount": 78, "url": null, "contents": { "documentId": "FaJaCgqBKphrDzDSj", "html": "

Some reader is bound to declare that a better title for this post would be \"37 Ways That You Can Use Words Unwisely\", or \"37 Ways That Suboptimal Use Of Categories Can Have Negative Side Effects On Your Cognition\".

\n

But one of the primary lessons of this gigantic list is that saying \"There's no way my choice of X can be 'wrong'\" is nearly always an error in practice, whatever the theory.  You can always be wrong.  Even when it's theoretically impossible to be wrong, you can still be wrong.  There is never a Get-Out-Of-Jail-Free card for anything you do.  That's life.

\n

Besides, I can define the word \"wrong\" to mean anything I like - it's not like a word can be wrong.

\n

Personally, I think it quite justified to use the word \"wrong\" when:

\n
    \n
  1. A word fails to connect to reality in the first place.  Is Socrates a framster?  Yes or no?  (The Parable of the Dagger.)
  2. \n
  3. Your argument, if it worked, could coerce reality to go a different way by choosing a different word definition.  Socrates is a human, and humans, by definition, are mortal.  So if you defined humans to not be mortal, would Socrates live forever?  (The Parable of Hemlock.)
  4. \n
  5. You try to establish any sort of empirical proposition as being true \"by definition\".  Socrates is a human, and humans, by definition, are mortal.  So is it a logical truth if we empirically predict that Socrates should keel over if he drinks hemlock?  It seems like there are logically possible, non-self-contradictory worlds where Socrates doesn't keel over - where he's immune to hemlock by a quirk of biochemistry, say.  Logical truths are true in all possible worlds, and so never tell you which possible world you live in - and anything you can establish \"by definition\" is a logical truth.  (The Parable of Hemlock.)
  6. \n
  7. You unconsciously slap the conventional label on something, without actually using the verbal definition you just gave.  You know perfectly well that Bob is \"human\", even though, on your definition, you can never call Bob \"human\" without first observing him to be mortal.  (The Parable of Hemlock.)
  8. \n
  9. The act of labeling something with a word, disguises a challengeable inductive inference you are making.  If the last 11 egg-shaped objects drawn have been blue, and the last 8 cubes drawn have been red, it is a matter of induction to say this rule will hold in the future.  But if you call the blue eggs \"bleggs\" and the red cubes \"rubes\", you may reach into the barrel, feel an egg shape, and think \"Oh, a blegg.\"  (Words as Hidden Inferences.)
  10. \n
  11. You try to define a word using words, in turn defined with ever-more-abstract words, without being able to point to an example.  \"What is red?\"  \"Red is a color.\"  \"What's a color?\"  \"It's a property of a thing?\"  \"What's a thing?  What's a property?\"  It never occurs to you to point to a stop sign and an apple.  (Extensions and Intensions.)
  12. \n
  13. The extension doesn't match the intension.  We aren't consciously aware of our identification of a red light in the sky as \"Mars\", which will probably happen regardless of your attempt to define \"Mars\" as \"The God of War\".  (Extensions and Intensions.)
  14. \n
  15. Your verbal definition doesn't capture more than a tiny fraction of the category's shared characteristics, but you try to reason as if it does.  When the philosophers of Plato's Academy claimed that the best definition of a human was a \"featherless biped\", Diogenes the Cynic is said to have exhibited a plucked chicken and declared \"Here is Plato's Man.\"  The Platonists promptly changed their definition to \"a featherless biped with broad nails\".  (Similarity Clusters.)
  16. \n
  17. You try to treat category membership as all-or-nothing, ignoring the existence of more and less typical subclusters.  Ducks and penguins are less typical birds than robins and pigeons.  Interestingly, a between-groups experiment showed that subjects thought a disease was more likely to spread from robins to ducks on an island, than from ducks to robins.  (Typicality and Asymmetrical Similarity.)
  18. \n
  19. A verbal definition works well enough in practice to point out the intended cluster of similar things, but you nitpick exceptions. Not every human has ten fingers, or wears clothes, or uses language; but if you look for an empirical cluster of things which share these characteristics, you'll get enough information that the occasional nine-fingered human won't fool you.  (The Cluster Structure of Thingspace.)
  20. \n
  21. You ask whether something \"is\" or \"is not\" a category member but can't name the question you really want answered.  What is a \"man\"?  Is Barney the Baby Boy a \"man\"?  The \"correct\" answer may depend considerably on whether the query you really want answered is \"Would hemlock be a good thing to feed Barney?\" or \"Will Barney make a good husband?\"  (Disguised Queries.)
  22. \n
  23. You treat intuitively perceived hierarchical categories like the only correct way to parse the world, without realizing that other forms of statistical inference are possible even though your brain doesn't use them.  It's much easier for a human to notice whether an object is a \"blegg\" or \"rube\"; than for a human to notice that red objects never glow in the dark, but red furred objects have all the other characteristics of bleggs.  Other statistical algorithms work differently.  (Neural Categories.)
  24. \n
  25. You talk about categories as if they are manna fallen from the Platonic Realm, rather than inferences implemented in a real brain. The ancient philosophers said \"Socrates is a man\", not, \"My brain perceptually classifies Socrates as a match against the 'human' concept\".  (How An Algorithm Feels From Inside.)
  26. \n
  27. You argue about a category membership even after screening off all questions that could possibly depend on a category-based inference.  After you observe that an object is blue, egg-shaped, furred, flexible, opaque, luminescent, and palladium-containing, what's left to ask by arguing, \"Is it a blegg?\"  But if your brain's categorizing neural network contains a (metaphorical) central unit corresponding to the inference of blegg-ness, it may still feel like there's a leftover question.  (How An Algorithm Feels From Inside.)
  28. \n
  29. You allow an argument to slide into being about definitions, even though it isn't what you originally wanted to argue about. If, before a dispute started about whether a tree falling in a deserted forest makes a \"sound\", you asked the two soon-to-be arguers whether they thought a \"sound\" should be defined as \"acoustic vibrations\" or \"auditory experiences\", they'd probably tell you to flip a coin.  Only after the argument starts does the definition of a word become politically charged.  (Disputing Definitions.)
  30. \n
  31. You think a word has a meaning, as a property of the word itself; rather than there being a label that your brain associates to a particular concept.  When someone shouts, \"Yikes!  A tiger!\", evolution would not favor an organism that thinks, \"Hm... I have just heard the syllables 'Tie' and 'Grr' which my fellow tribemembers associate with their internal analogues of my own tiger concept and which aiiieeee CRUNCH CRUNCH GULP.\"  So the brain takes a shortcut, and it seems that the meaning of tigerness is a property of the label itself.  People argue about the correct meaning of a label like \"sound\". (Feel the Meaning.)
  32. \n
  33. You argue over the meanings of a word, even after all sides understand perfectly well what the other sides are trying to say.  The human ability to associate labels to concepts is a tool for communication.  When people want to communicate, we're hard to stop; if we have no common language, we'll draw pictures in sand.  When you each understand what is in the other's mind, you are done.  (The Argument From Common Usage.)
  34. \n
  35. You pull out a dictionary in the middle of an empirical or moral argument.  Dictionary editors are historians of usage, not legislators of language.  If the common definition contains a problem - if \"Mars\" is defined as the God of War, or a \"dolphin\" is defined as a kind of fish, or \"Negroes\" are defined as a separate category from humans, the dictionary will reflect the standard mistake.  (The Argument From Common Usage.)
  36. \n
  37. You pull out a dictionary in the middle of any argument ever. Seriously, what the heck makes you think that dictionary editors are an authority on whether \"atheism\" is a \"religion\" or whatever?  If you have any substantive issue whatsoever at stake, do you really think dictionary editors have access to ultimate wisdom that settles the argument?  (The Argument From Common Usage.)
  38. \n
  39. You defy common usage without a reason, making it gratuitously hard for others to understand you.  Fast stand up plutonium, with bagels without handle.  (The Argument From Common Usage.)
  40. \n
  41. You use complex renamings to create the illusion of inference. Is a \"human\" defined as a \"mortal featherless biped\"?  Then write:  \"All [mortal featherless bipeds] are mortal; Socrates is a [mortal featherless biped]; therefore, Socrates is mortal.\"  Looks less impressive that way, doesn't it?  (Empty Labels.)
  42. \n
  43. You get into arguments that you could avoid if you just didn't use the word. If Albert and Barry aren't allowed to use the word \"sound\", then Albert will have to say \"A tree falling in a deserted forest generates acoustic vibrations\", and Barry will say \"A tree falling in a deserted forest generates no auditory experiences\".  When a word poses a problem, the simplest solution is to eliminate the word and its synonyms.  (Taboo Your Words.)
  44. \n
  45. The existence of a neat little word prevents you from seeing the details of the thing you're trying to think about. What actually goes on in schools once you stop calling it \"education\"? What's a degree, once you stop calling it a \"degree\"?  If a coin lands \"heads\", what's its radial orientation?  What is \"truth\", if you can't say \"accurate\" or \"correct\" or \"represent\" or \"reflect\" or \"semantic\" or \"believe\" or \"knowledge\" or \"map\" or \"real\" or any other simple term?  (Replace the Symbol with the Substance.)
  46. \n
  47. You have only one word, but there are two or more different things-in-reality, so that all the facts about them get dumped into a single undifferentiated mental bucket.  It's part of a detective's ordinary work to observe that Carol wore red last night, or that she has black hair; and it's part of a detective's ordinary work to wonder if maybe Carol dyes her hair.  But it takes a subtler detective to wonder if there are two Carols, so that the Carol who wore red is not the same as the Carol who had black hair.  (Fallacies of Compression.)
  48. \n
  49. You see patterns where none exist, harvesting other characteristics from your definitions even when there is no similarity along that dimension.  In Japan, it is thought that people of blood type A are earnest and creative, blood type Bs are wild and cheerful, blood type Os are agreeable and sociable, and blood type ABs are cool and controlled.  (Categorizing Has Consequences.)
  50. \n
  51. You try to sneak in the connotations of a word, by arguing from a definition that doesn't include the connotations. A \"wiggin\" is defined in the dictionary as a person with green eyes and black hair.  The word \"wiggin\" also carries the connotation of someone who commits crimes and launches cute baby squirrels, but that part isn't in the dictionary.  So you point to someone and say:  \"Green eyes?  Black hair?  See, told you he's a wiggin!  Watch, next he's going to steal the silverware.\"  (Sneaking in Connotations.)
  52. \n
  53. You claim \"X, by definition, is a Y!\"  On such occasions you're almost certainly trying to sneak in a connotation of Y that wasn't in your given definition.  You define \"human\" as a \"featherless biped\", and point to Socrates and say, \"No feathers - two legs - he must be human!\"  But what you really care about is something else, like mortality.  If what was in dispute was Socrates's number of legs, the other fellow would just reply, \"Whaddaya mean, Socrates's got two legs?  That's what we're arguing about in the first place!\"  (Arguing \"By Definition\".)
  54. \n
  55. You claim \"Ps, by definition, are Qs!\"  If you see Socrates out in the field with some biologists, gathering herbs that might confer resistance to hemlock, there's no point in arguing \"Men, by definition, are mortal!\"  The main time you feel the need to tighten the vise by insisting that something is true \"by definition\" is when there's other information that calls the default inference into doubt. (Arguing \"By Definition\".)
  56. \n
  57. You try to establish membership in an empirical cluster \"by definition\".  You wouldn't feel the need to say, \"Hinduism, by definition, is a religion!\" because, well, of course Hinduism is a religion.  It's not just a religion \"by definition\", it's, like, an actual religion.  Atheism does not resemble the central members of the \"religion\" cluster, so if it wasn't for the fact that atheism is a religion by definition, you might go around thinking that atheism wasn't a religion.  That's why you've got to crush all opposition by pointing out that \"Atheism is a religion\" is true by definition, because it isn't true any other way.  (Arguing \"By Definition\".)
  58. \n
  59. Your definition draws a boundary around things that don't really belong together.  You can claim, if you like, that you are defining the word \"fish\" to refer to salmon, guppies, sharks, dolphins, and trout, but not jellyfish or algae.  You can claim, if you like, that this is merely a list, and there is no way a list can be \"wrong\".  Or you can stop playing nitwit games and admit that you made a mistake and that dolphins don't belong on the fish list.  (Where to Draw the Boundary?)
  60. \n
  61. You use a short word for something that you won't need to describe often, or a long word for something you'll need to describe often.  This can result in inefficient thinking, or even misapplications of Occam's Razor, if your mind thinks that short sentences sound \"simpler\".  Which sounds more plausible, \"God did a miracle\" or \"A supernatural universe-creating entity temporarily suspended the laws of physics\"?  (Entropy, and Short Codes.)
  62. \n
  63. You draw your boundary around a volume of space where there is no greater-than-usual density, meaning that the associated word does not correspond to any performable Bayesian inferences.  Since green-eyed people are not more likely to have black hair, or vice versa, and they don't share any other characteristics in common, why have a word for \"wiggin\"?  (Mutual Information, and Density in Thingspace.)
  64. \n
  65. You draw an unsimple boundary without any reason to do so. The act of defining a word to refer to all humans, except black people, seems kind of suspicious.  If you don't present reasons to draw that particular boundary, trying to create an \"arbitrary\" word in that location is like a detective saying:  \"Well, I haven't the slightest shred of support one way or the other for who could've murdered those orphans... but have we considered John Q. Wiffleheim as a suspect?\"  (Superexponential Conceptspace, and Simple Words.)
  66. \n
  67. You use categorization to make inferences about properties that don't have the appropriate empirical structure, namely, conditional independence given knowledge of the class, to be well-approximated by Naive Bayes.  No way am I trying to summarize this one.  Just read the blog post.  (Conditional Independence, and Naive Bayes.)
  68. \n
  69. You think that words are like tiny little LISP symbols in your mind, rather than words being labels that act as handles to direct complex mental paintbrushes that can paint detailed pictures in your sensory workspace.  Visualize a \"triangular lightbulb\".  What did you see?  (Words as Mental Paintbrush Handles.)
  70. \n
  71. You use a word that has different meanings in different places as though it meant the same thing on each occasion, possibly creating the illusion of something protean and shifting.  \"Martin told Bob the building was on his left.\"  But \"left\" is a function-word that evaluates with a speaker-dependent variable grabbed from the surrounding context.  Whose \"left\" is meant, Bob's or Martin's?  (Variable Question Fallacies.)
  72. \n
  73. You think that definitions can't be \"wrong\", or that \"I can define a word any way I like!\" This kind of attitude teaches you to indignantly defend your past actions, instead of paying attention to their consequences, or fessing up to your mistakes.  (37 Ways That Suboptimal Use Of Categories Can Have Negative Side Effects On Your Cognition.)
  74. \n
\n

Everything you do in the mind has an effect, and your brain races ahead unconsciously without your supervision.

\n

Saying \"Words are arbitrary; I can define a word any way I like\" makes around as much sense as driving a car over thin ice with the accelerator floored and saying, \"Looking at this steering wheel, I can't see why one radial angle is special - so I can turn the steering wheel any way I like.\"

\n

If you're trying to go anywhere, or even just trying to survive, you had better start paying attention to the three or six dozen optimality criteria that control how you use words, definitions, categories, classes, boundaries, labels, and concepts.

" } }, { "_id": "shoMpaoZypfkXv84Y", "title": "Variable Question Fallacies", "pageUrl": "https://www.lesswrong.com/posts/shoMpaoZypfkXv84Y/variable-question-fallacies", "postedAt": "2008-03-05T06:22:25.000Z", "baseScore": 49, "voteCount": 42, "commentCount": 37, "url": null, "contents": { "documentId": "shoMpaoZypfkXv84Y", "html": "
\n

Albert:  \"Every time I've listened to a tree fall, it made a sound, so I'll guess that other trees falling also make sounds.  I don't believe the world changes around when I'm not looking.\"
Barry:  \"Wait a minute.  If no one hears it, how can it be a sound?\"

\n
\n

While writing the dialogue of Albert and Barry in their dispute over whether a falling tree in a deserted forest makes a sound, I sometimes found myself losing empathy with my characters.  I would start to lose the gut feel of why anyone would ever argue like that, even though I'd seen it happen many times.

\n

On these occasions, I would repeat to myself, \"Either the falling tree makes a sound, or it does not!\" to restore my borrowed sense of indignation.

\n

(P or ~P) is not always a reliable heuristic, if you substitute arbitrary English sentences for P.  \"This sentence is false\" cannot be consistently viewed as true or false.  And then there's the old classic, \"Have you stopped beating your wife?\"

\n

Now if you are a mathematician, and one who believes in classical (rather than intuitionistic) logic, there are ways to continue insisting that (P or ~P) is a theorem: for example, saying that \"This sentence is false\" is not a sentence.

\n

But such resolutions are subtle, which suffices to demonstrate a need for subtlety.  You cannot just bull ahead on every occasion with \"Either it does or it doesn't!\"

\n

So does the falling tree make a sound, or not, or...?

\n

\n

Surely, 2 + 2 = X or it does not?  Well, maybe, if it's really the same X, the same 2, and the same + and =.  If X evaluates to 5 on some occasions and 4 on another, your indignation may be misplaced.

\n

To even begin claiming that (P or ~P) ought to be a necessary truth, the symbol P must stand for exactly the same thing in both halves of the dilemma.  \"Either the fall makes a sound, or not!\"—but if Albert::sound is not the same as Barry::sound, there is nothing paradoxical about the tree making an Albert::sound but not a Barry::sound.

\n

(The :: idiom is something I picked up in my C++ days for avoiding namespace collisions.  If you've got two different packages that define a class Sound, you can write Package1::Sound to specify which Sound you mean.  The idiom is not widely known, I think; which is a pity, because I often wish I could use it in writing.)
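
\n

For those who have not met the idiom, a rough Python analogue may help, with classes standing in for namespaces; the method bodies are invented purely for illustration.  Both namespaces define something called sound, but the qualified names never collide:

    # Rough Python analogue of the :: idiom: classes stand in for namespaces.
    class Albert:
        @staticmethod
        def sound(heard_by_anyone):
            # Albert::sound means "acoustic vibrations in the air".
            return True  # the falling tree vibrates the air either way

    class Barry:
        @staticmethod
        def sound(heard_by_anyone):
            # Barry::sound means "an auditory experience in a brain".
            return heard_by_anyone

    print(Albert.sound(heard_by_anyone=False))  # True
    print(Barry.sound(heard_by_anyone=False))   # False, and no paradox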

\n

The variability may be subtle:  Albert and Barry may carefully verify that it is the same tree, in the same forest, and the same occasion of falling, just to ensure that they really do have a substantive disagreement about exactly the same event.  And then forget to check that they are matching this event against exactly the same concept.

\n

Think about the grocery store that you visit most often:  Is it on the left side of the street, or the right?  But of course there is no \"the left side\" of the street, only your left side, as you travel along it from some particular direction.  Many of the words we use are really functions of implicit variables supplied by context.

\n

It's actually one heck of a pain, requiring one heck of a lot of work, to handle this kind of problem in an Artificial Intelligence program intended to parse language—the phenomenon going by the name of \"speaker deixis\".

\n

\"Martin told Bob the building was on his left.\"  But  \"left\" is a function-word that evaluates with a speaker-dependent variable invisibly grabbed from the surrounding context.  Whose \"left\" is meant, Bob's or Martin's?

\n

The variables in a variable question fallacy often aren't neatly labeled—it's not as simple as \"Say, do you think Z + 2 equals 6?\"

\n

If a namespace collision introduces two different concepts that look like \"the same concept\" because they have the same name—or a map compression introduces two different events that look like the same event because they don't have separate mental files—or the same function evaluates in different contexts—then reality itself becomes protean, changeable.  At least that's what the algorithm feels like from inside.  Your mind's eye sees the map, not the territory directly.

\n

If you have a question with a hidden variable, that evaluates to different expressions in different contexts, it feels like reality itself is unstable—what your mind's eye sees, shifts around depending on where it looks.

\n

This often confuses undergraduates (and postmodernist professors) who discover a sentence with more than one interpretation; they think they have discovered an unstable portion of reality.

\n

\"Oh my gosh!  'The Sun goes around the Earth' is true for Hunga Huntergatherer, but for Amara Astronomer, 'The Sun goes around the Earth' is false!  There is no fixed truth!\"  The deconstruction of this sophomoric nitwittery is left as an exercise to the reader.

\n

And yet, even I initially found myself writing \"If X is 5 on some occasions and 4 on another, the sentence '2 + 2 = X' may have no fixed truth-value.\"  There is not one sentence with a variable truth-value.  \"2 + 2 = X\" has no truth-value.  It is not a proposition, not yet, not as mathematicians define proposition-ness, any more than \"2 + 2 =\" is a proposition, or \"Fred jumped over the\" is a grammatical sentence.

\n

But this fallacy tends to sneak in, even when you allegedly know better, because, well, that's how the algorithm feels from inside.

" } }, { "_id": "vNA4LJC4JuXgYHQrS", "title": "Rationality Quotes 11", "pageUrl": "https://www.lesswrong.com/posts/vNA4LJC4JuXgYHQrS/rationality-quotes-11", "postedAt": "2008-03-04T05:48:25.000Z", "baseScore": 5, "voteCount": 4, "commentCount": 16, "url": null, "contents": { "documentId": "vNA4LJC4JuXgYHQrS", "html": "

\"If we let ethical considerations get in the way of scientific hubris, then the feminists have won!\"
        -- Helarxe

\n

\"The trajectory to hell is paved with locally-good intentions.\"
        -- Matt Gingell

\n

\"To a mouse, cheese is cheese; that's why mousetraps work.\"
        -- Wendell Johnson, quoted in Language in Thought and Action

\n

\"'Ethical consideration' has come to mean reasoning from an ivory tower about abstract non-issues while people die.\"
        -- Zeb Haradon

\n

\"I intend to live forever. So far, so good.\"
        -- Rick Potvin

\n

\"The accessory optic system: The AOS, extensively studied in the rabbit, arises from a special class of ganglion cells, the cells of Dogiel, that are directionally selective and respond best to slow rates of movement. They project to the terminal nuclei which in turn project to the dorsal cap of Kooy of the inferior olive. The climbing fibers from the olive project to the flocculo-nodular lobe of the cerebellum from where the brain stem occulomotor centers are reached through the vestibular nuclei.\"
        -- MIT Encyclopedia of the Cognitive Sciences, \"Visual Anatomy and Physiology\"

\n

\"Fight for those you have lost, and for those you don't want to lose.\"
        -- Claymore

\n

\"Which facts are likely to reappear? The simple facts. How to recognize them? Choose those that seem simple. Either this simplicity is real or the complex elements are indistinguishable. In the first case we're likely to meet this simple fact again either alone or as an element in a complex fact. The second case too has a good chance of recurring since nature doesn't randomly construct such cases.\"
        -- Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance

\n

\"Revolutions begin not when the first barricades are erected or even when people lose faith in the old ways of doing things, but rather when they realize that fundamental change is possible.\"
        -- Steven Metz

\n

\"First Law of Anime Acoustics: In space, loud sounds, like explosions, are even louder because there is no air to get in the way.\"
\"Law of Inherent Combustibility: Everything explodes. Everything.\"
\"Law of Conservation of Firepower: Any powerful weapon capable of destroying/defeating an opponent in a single shot will invariably be reserved and used only as a last resort.\"
        -- Laws of Anime

\n

\"On the Popo Agie in September,
I watched the water toss through the same arc,
  each molecule passing through and never returning,
  but the whole a permanence of chaos,
   repeating to the casual glance,
   various to the closer look.\"
        --  Mick McAllister

" } }, { "_id": "BWcuhQbrmCMGLAL2k", "title": "Rationality Quotes 10", "pageUrl": "https://www.lesswrong.com/posts/BWcuhQbrmCMGLAL2k/rationality-quotes-10", "postedAt": "2008-03-03T05:48:02.000Z", "baseScore": 9, "voteCount": 10, "commentCount": 5, "url": null, "contents": { "documentId": "BWcuhQbrmCMGLAL2k", "html": "

\"Yes, I am the last man to have walked on the moon, and that's a very dubious and disappointing honor. It's been far too long.\"
        -- Gene Cernan

\n

\"Man, you're no smarter than me. You're just a fancier kind of stupid.\"
        -- Spider Robinson, Distraction

\n

\"Each problem that I solved became a rule which served afterwards to solve other problems.\"
        -- Rene Descartes, Discours de la Methode

\n

\"Faith is Hope given too much credit.\"
        -- Matt Tuozzo

\n

\"Your Highness, I have no need of this hypothesis.\"
        -- Pierre-Simon Laplace, to Napoleon, explaining why his works on celestial mechanics made no mention of God.

\n

\n

\"'For, finally, one can only judge oneself by one's actions,' thought Elric. 'I have looked at what I have done, not at what I meant to do or thought I would like to do, and what I have done has, in the main, been foolish, destructive, and with little point. Yyrkoon was right to despise me and that was why I hated him so.'\"
        -- Michael Moorcock, Elric of Melniboné

\n

\"You will quickly find that if you are completely and self-deprecatingly truthful about how much you owe other people, the world at large will treat you like you did every bit of the invention yourself and are just being becomingly modest about your innate genius.\"
        -- Eric S. Raymond

\n

\"The longer I live the more I see that I am never wrong about anything, and that all the pains that I have so humbly taken to verify my notions have only wasted my time.\"
        -- George Bernard Shaw

\n

\"The trouble is that consciousness theories are very easy to dream up... Theories that explain intelligence, on the other hand, are fiendishly difficult to come by and so are profoundly useful. I don't know for sure that intelligence always produces consciousness, but I do know that if you assume it does you'll never be disappointed.\"
        -- John K. Clark

\n

\"Intelligence is silence, truth being invisible. But what a racket I make in declaring this.\"
        -- Ned Rorem, \"Random Notes from a Diary\"

\n

    Presently the mage said, speaking softly, \"Do you see, Arren, how an act is not, as young men think, like a rock that one picks up and throws, and it hits or misses, and that's the end of it. When that rock is lifted the earth is lighter, the hand that bears it heavier. When it is thrown the circuits of the stars respond, and where it strikes or falls the universe is changed. On every act the balance of the whole depends. The winds and seas, the powers of water and earth and light, all that these do, and all that the beasts and green things do, is well done, and rightly done. All these act within the Equilibrium. From the hurricane and the great whale's sounding to the fall of a dry leaf and the flight of a gnat, all they do is done within the balance of the whole. But we, in so far as we have power over the world and over one another, we must learn to do what the leaf and the whale and the wind do of their own nature. We must learn to keep the balance. Having intelligence, we must not act in ignorance. Having choice, we must not act without responsibility. Who am I - though I have the power to do it - to punish and reward, playing with men's destinies?\"
    \"But then,\" the boy said, frowning at the stars, \"is the balance to be kept by doing nothing? Surely a man must act, even not knowing all the consequences of his act, if anything is to be done at all?\"
    \"Never fear. It is much easier for men to act than to refrain from acting. We will continue to do good, and to do evil... But if there were a king over us all again, and he sought counsel of a mage, as in the days of old, and I were that mage, I would say to him: My lord, do nothing because it is righteous, or praiseworthy, or noble, to do so; do nothing because it seems good to do so; do only that which you must do, and which you cannot do in any other way.\"
        -- Ursula K. LeGuin, The Farthest Shore

" } }, { "_id": "YF9HB6cWCJrDK5pBM", "title": "Words as Mental Paintbrush Handles", "pageUrl": "https://www.lesswrong.com/posts/YF9HB6cWCJrDK5pBM/words-as-mental-paintbrush-handles", "postedAt": "2008-03-01T23:58:21.000Z", "baseScore": 50, "voteCount": 48, "commentCount": 76, "url": null, "contents": { "documentId": "YF9HB6cWCJrDK5pBM", "html": "

(We should be done with the mathy posts, I think, at least for now.  But forgive me if, ironically, I end up resorting to Rationality Quotes for a day or two.  I'm currently at the AGI-08 conference, which, as of the first session, is not nearly so bad as I feared.)

\n

Suppose I tell you:  \"It's the strangest thing:  The lamps in this hotel have triangular lightbulbs.\"

\n

You may or may not have visualized it—if you haven't done it yet, do so now—what, in your mind's eye, does a \"triangular lightbulb\" look like?

\n

\n

In your mind's eye, did the glass have sharp edges, or smooth?

\n

When the phrase \"triangular lightbulb\" first crossed my mind—no, the hotel doesn't have them—then as best as my introspection could determine, I first saw a pyramidal lightbulb with sharp edges, then (almost immediately) the edges were smoothed, and then my mind generated a loop of fluorescent bulb in the shape of a smooth triangle as an alternative.

\n

As far as I can tell, no deliberative/verbal thoughts were involved—just a wordless reflex flinch away from the imaginary mental vision of sharp glass, a design problem that was solved before I could even think in words.

\n

Believe it or not, for some decades, there was a serious debate about whether people really had mental images in their mind—an actual picture of a chair somewhere—or if people just naively thought they had mental images (having been misled by \"introspection\", a very bad forbidden activity), while actually just having a little \"chair\" label, like a LISP token, active in their brain.

\n

I am trying hard not to say anything like \"How spectacularly silly,\" because there is always the hindsight effect to consider, but: how spectacularly silly.

\n

This academic paradigm, I think, was mostly a deranged legacy of behaviorism, which denied the existence of thoughts in humans, and sought to explain all human phenomena as \"reflex\", including speech.  Behaviorism probably deserves its own post at some point, as it was a perversion of rationalism; but this is not that post.

\n

\"You call it 'silly',\" you inquire, \"but how do you know that your brain represents visual images?  Is it merely that you can close your eyes and see them?\"

\n

This question used to be harder to answer, back in the day of the controversy.  If you wanted to prove the existence of mental imagery \"scientifically\", rather than just by introspection, you had to infer the existence of mental imagery from experiments like, e.g.:  Show subjects two objects and ask them if one can be rotated into correspondence with the other.  The response time is linearly proportional to the angle of rotation required.  This is easy to explain if you are actually visualizing the image and continuously rotating it at a constant speed, but hard to explain if you are just checking propositional features of the image.

\n

Today we can actually neuroimage the little pictures in the visual cortex.  So, yes, your brain really does represent a detailed image of what it sees or imagines.  See Stephen Kosslyn's Image and Brain: The Resolution of the Imagery Debate.

\n

Part of the reason people get in trouble with words, is that they do not realize how much complexity lurks behind words.

\n

Can you visualize a \"green dog\"?  Can you visualize a \"cheese apple\"?

\n

\"Apple\" isn't just a sequence of two syllables or five letters.  That's a shadow.  That's the tip of the tiger's tail.

\n

Words, or rather the concepts behind them, are paintbrushes—you can use them to draw images in your own mind.  Literally draw, if you employ concepts to make a picture in your visual cortex.  And by the use of shared labels, you can reach into someone else's mind, and grasp their paintbrushes to draw pictures in their minds—sketch a little green dog in their visual cortex.

\n

But don't think that, because you send syllables through the air, or letters through the Internet, it is the syllables or the letters that draw pictures in the visual cortex.  That takes some complex instructions that wouldn't fit in the sequence of letters.  \"Apple\" is 5 bytes, and drawing a picture of an apple from scratch would take more data than that.

\n

\"Apple\" is merely the tag attached to the true and wordless apple concept, which can paint a picture in your visual cortex, or collide with \"cheese\", or recognize an apple when you see one, or taste its archetype in apple pie, maybe even send out the motor behavior for eating an apple...

\n

And it's not as simple as just calling up a picture from memory.  Or how would you be able to visualize combinations like a \"triangular lightbulb\"—imposing triangleness on lightbulbs, keeping the essence of both, even if you've never seen such a thing in your life?

\n

Don't make the mistake the behaviorists made.  There's far more to speech than sound in air.  The labels are just pointers—\"look in memory area 1387540\".  Sooner or later, when you're handed a pointer, it comes time to dereference it, and actually look in memory area 1387540.
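
\n

As a loose sketch of the pointer metaphor in Python (the structure is invented, and real concepts are vastly richer than a small dictionary), the word is just a cheap key, and the work happens only after you dereference it:

    # The label is a cheap key; the concept it points to carries the machinery.
    concepts = {
        "apple": {
            "category":  "fruit",
            "taste":     "sweet-tart",
            "recognize": lambda thing: thing.get("shape") == "roundish"
                                       and thing.get("color") in ("red", "green"),
        },
    }

    word = "apple"             # five bytes of label...
    concept = concepts[word]   # ...dereferenced into the richer structure
    print(concept["recognize"]({"shape": "roundish", "color": "red"}))  # True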

\n

What does a word point to?

" } }, { "_id": "gDWvLicHhcMfGmwaK", "title": "Conditional Independence, and Naive Bayes", "pageUrl": "https://www.lesswrong.com/posts/gDWvLicHhcMfGmwaK/conditional-independence-and-naive-bayes", "postedAt": "2008-03-01T01:59:35.000Z", "baseScore": 74, "voteCount": 60, "commentCount": 16, "url": null, "contents": { "documentId": "gDWvLicHhcMfGmwaK", "html": "

Previously I spoke of mutual information between X and Y, I(X;Y), which is the difference between the sum of the marginal entropies, H(X) + H(Y), and the entropy of the joint probability distribution, H(X,Y).

\n

I gave the example of a variable X, having eight states 1..8 which are all equally probable if we have not yet encountered any evidence; and a variable Y, with states 1..4, which are all equally probable if we have not yet encountered any evidence.  Then if we calculate the marginal entropies H(X) and H(Y), we will find that X has 3 bits of entropy, and Y has 2 bits.

\n

However, we also know that X and Y are both even or both odd; and this is all we know about the relation between them.  So for the joint distribution (X,Y) there are only 16 possible states, all equally probable, for a joint entropy of 4 bits.  This is a 1-bit entropy defect, compared to 5 bits of entropy if X and Y were independent.  This entropy defect is the mutual information - the information that X tells us about Y, or vice versa, so that we are not as uncertain about one after having learned the other.
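
\n

A quick way to check these numbers is to enumerate the sixteen joint states directly.  Here is a minimal Python sketch; the helper is mine, not a standard library function:

    import math
    from collections import Counter
    from itertools import product

    # The 16 equiprobable joint states: X in 1..8, Y in 1..4, same parity.
    states = [(x, y) for x, y in product(range(1, 9), range(1, 5))
              if x % 2 == y % 2]

    def entropy(samples):
        # Shannon entropy, in bits, of the empirical distribution of samples.
        counts = Counter(samples)
        total = sum(counts.values())
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    H_X  = entropy([x for x, y in states])   # 3.0 bits
    H_Y  = entropy([y for x, y in states])   # 2.0 bits
    H_XY = entropy(states)                   # 4.0 bits
    print(H_X + H_Y - H_XY)                  # 1.0 bit of mutual information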

\n

Suppose, however, that there exists a third variable Z.  Z has two states, \"even\" and \"odd\", perfectly correlated to the evenness or oddness of (X,Y).  In fact, we'll suppose that Z is just the question \"Are X and Y even or odd?\"

\n

If we have no evidence about X and Y, then Z itself necessarily has 1 bit of entropy on the information given.  There is 1 bit of mutual information between Z and X, and 1 bit of mutual information between Z and Y.  And, as previously noted, 1 bit of mutual information between X and Y.  So how much entropy for the whole system (X,Y,Z)?  You might naively expect that

\n
\n

H(X,Y,Z) = H(X) + H(Y) + H(Z) - I(X;Z) - I(Z;Y) - I(X;Y)

\n
\n

but this turns out not to be the case.

\n

\n

The joint system (X,Y,Z) only has 16 possible states - since Z is just the question \"Are X & Y even or odd?\" - so H(X,Y,Z) = 4 bits.

\n

But if you calculate the formula just given, you get

\n
\n

(3 + 2 + 1 - 1 - 1 - 1) bits = 3 bits = WRONG!

\n
\n

Why?  Because if you have the mutual information between X and Z, and the mutual information between Z and Y, that may include some of the same mutual information that we'll calculate exists between X and Y.  In this case, for example, knowing that X is even tells us that Z is even, and knowing that Z is even tells us that Y is even, but this is the same information that X would tell us about Y.  We double-counted some of our knowledge, and so came up with too little entropy.

\n

The correct formula is (I believe):

\n
\n

H(X,Y,Z) = H(X) + H(Y) + H(Z) - I(X;Z) - I(Z;Y) - I(X;Y | Z)

\n
\n

Here the last term, I(X;Y | Z), means, \"the information that X tells us about Y, given that we already know Z\".  In this case, X doesn't tell us anything about Y, given that we already know Z, so the term comes out as zero - and the equation gives the correct answer.  There, isn't that nice?

\n

\"No,\" you correctly reply, \"for you have not told me how to calculate I(X;Y|Z), only given me a verbal argument that it ought to be zero.\"

\n

We calculate I(X;Y|Z) just the way you would expect.  I(X;Y) = H(X) + H(Y) - H(X,Y), so:

\n
\n

I(X;Y|Z) = H(X|Z) + H(Y|Z) - H(X,Y|Z)

\n
\n

And now, I suppose, you want to know how to calculate the conditional entropy?  Well, the original formula for the entropy is:

\n
\n

H(S) = Sum i: p(Si)*-log2(p(Si))

\n
\n

If we then learned a new fact Z0, our remaining uncertainty about S would be:

\n
\n

H(S|Z0) = Sum i: p(Si|Z0)*-log2(p(Si|Z0))

\n
\n

So if we're going to learn a new fact Z, but we don't know which Z yet, then, on average, we expect to be around this uncertain of S afterward:

\n
\n

H(S|Z) = Sum j: (p(Zj) * Sum i: p(Si|Zj)*-log2(p(Si|Zj)))

\n
\n

And that's how one calculates conditional entropies; from which, in turn, we can get the conditional mutual information.

\n

There are all sorts of ancillary theorems here, like:

\n
\n

H(X|Y) = H(X,Y) - H(Y)

\n
\n

and

\n
\n

if  I(X;Z) = 0  and  I(Y;X|Z) = 0  then  I(X;Y) = 0

\n
\n

but I'm not going to go into those.

\n

\"But,\" you ask, \"what does this have to do with the nature of words and their hidden Bayesian structure?\"

\n

I am just so unspeakably glad that you asked that question, because I was planning to tell you whether you liked it or not.  But first there are a couple more preliminaries.

\n

You will remember—yes, you will remember—that there is a duality between mutual information and Bayesian evidence.  Mutual information is positive if and only if the probability of at least some joint events P(x, y) does not equal the product of the probabilities of the separate events P(x)*P(y).  This, in turn, is exactly equivalent to the condition that Bayesian evidence exists between x and y:

\n
\n

I(X;Y) > 0   =>
P(x,y) != P(x)*P(y)
P(x,y) / P(y) != P(x)
P(x|y) != P(x)

\n
\n

If you're conditioning on Z, you just adjust the whole derivation accordingly:

\n
\n

I(X;Y | Z) > 0   =>
P(x,y|z) != P(x|z)*P(y|z)
P(x,y|z) / P(y|z) != P(x|z)
(P(x,y,z) / P(z)) / (P(y, z) / P(z)) != P(x|z)
P(x,y,z) / P(y,z) != P(x|z)
P(x|y,z) != P(x|z)

\n
\n

Which last line reads \"Even knowing Z, learning Y still changes our beliefs about X.\"

\n

Conversely, as in our original case of Z being \"even\" or \"odd\", Z screens off X from Y - that is, if we know that Z is \"even\", learning that Y is in state 4 tells us nothing more about whether X is 2, 4, 6, or 8.  Or if we know that Z is \"odd\", then learning that X is 5 tells us nothing more about whether Y is 1 or 3.  Learning Z has rendered X and Y conditionally independent.
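
\n

Continuing the enumeration sketch from earlier (it reuses the states list and the entropy helper defined there), you can verify the screening-off numerically: I(X;Y|Z) comes out to exactly zero, which also vindicates the corrected formula, since 3 + 2 + 1 - 1 - 1 - 0 = 4 bits.

    # Z is the shared parity of (X, Y); reuses `states` and `entropy` above.
    def cond_entropy(pairs):
        # H(S|Z) from (s, z) samples: the p(z)-weighted average of the
        # entropy of s within each z-slice, per the formula given earlier.
        by_z = {}
        for s, z in pairs:
            by_z.setdefault(z, []).append(s)
        total = len(pairs)
        return sum(len(g) / total * entropy(g) for g in by_z.values())

    triples = [(x, y, x % 2) for x, y in states]
    H_X_given_Z  = cond_entropy([(x, z) for x, y, z in triples])       # 2.0 bits
    H_Y_given_Z  = cond_entropy([(y, z) for x, y, z in triples])       # 1.0 bit
    H_XY_given_Z = cond_entropy([((x, y), z) for x, y, z in triples])  # 3.0 bits

    # I(X;Y|Z) = H(X|Z) + H(Y|Z) - H(X,Y|Z) = 0: Z screens off X from Y.
    print(H_X_given_Z + H_Y_given_Z - H_XY_given_Z)  # 0.0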

\n

Conditional independence is a hugely important concept in probability theory—to cite just one example, without conditional independence, the universe would have no structure.

\n

Today, though, I only intend to talk about one particular kind of conditional independence—the case of a central variable that screens off other variables surrounding it, like a central body with tentacles.


Let there be five variables U, V, W, X, Y; and moreover, suppose that for every pair of these variables, one variable is evidence about the other.  If you select U and W, for example, then learning U=U1 will tell you something you didn't know before about the probability W=W1.


An unmanageable inferential mess?  Evidence gone wild?  Not necessarily.


Maybe U is \"Speaks a language\", V is \"Two arms and ten digits\", W is \"Wears clothes\", X is \"Poisonable by hemlock\", and Y is \"Red blood\".  Now if you encounter a thing-in-the-world, that might be an apple and might be a rock, and you learn that this thing speaks Chinese, you are liable to assess a much higher probability that it wears clothes; and if you learn that the thing is not poisonable by hemlock, you will assess a somewhat lower probability that it has red blood.


Now some of these rules are stronger than others.  There is the case of Fred, who is missing a finger due to a volcano accident, and the case of Barney the Baby who doesn't speak yet, and the case of Irving the IRCBot who emits sentences but has no blood.  So if we learn that a certain thing is not wearing clothes, that doesn't screen off everything that its speech capability can tell us about its blood color.  If the thing doesn't wear clothes but does talk, maybe it's Nude Nellie.


This makes the case more interesting than, say, five integer variables that are all odd or all even, but otherwise uncorrelated.  In that case, knowing any one of the variables would screen off everything that knowing a second variable could tell us about a third variable.


But here, we have dependencies that don't go away as soon as we learn just one variable, as the case of Nude Nellie shows.  So is it an unmanageable inferential inconvenience?


Fear not! for there may be some sixth variable Z, which, if we knew it, really would screen off every pair of variables from each other.  There may be some variable Z—even if we have to construct Z rather than observing it directly—such that:


p(u|v,w,x,y,z) = p(u|z)
p(v|u,w,x,y,z) = p(v|z)
p(w|u,v,x,y,z) = p(w|z)
    ...


Perhaps, given that a thing is \"human\", then the probabilities of it speaking, wearing clothes, and having the standard number of fingers, are all independent.  Fred may be missing a finger - but he is no more likely to be a nudist than the next person; Nude Nellie never wears clothes, but knowing this doesn't make it any less likely that she speaks; and Baby Barney doesn't talk yet, but is not missing any limbs.


This is called the \"Naive Bayes\" method, because it usually isn't quite true, but pretending that it's true can simplify the living daylights out of your calculations.  We don't keep separate track of the influence of clothed-ness on speech capability given finger number.  We just use all the information we've observed to keep track of the probability that this thingy is a human (or alternatively, something else, like a chimpanzee or robot) and then use our beliefs about the central class to predict anything we haven't seen yet, like vulnerability to hemlock.


Any observations of U, V, W, X, and Y just act as evidence for the central class variable Z, and then we use the posterior distribution on Z to make any predictions that need making about unobserved variables in U, V, W, X, and Y.
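
A concrete sketch of that procedure, with invented feature probabilities - only the structure, observations updating a central class which then predicts unseen variables, comes from the text:

from math import exp, log

PRIOR = {"human": 0.5, "robot": 0.5}
LIKELIHOOD = {  # P(feature present | class) - hypothetical numbers
    "human": {"speaks": 0.90, "wears_clothes": 0.95, "red_blood": 0.99},
    "robot": {"speaks": 0.60, "wears_clothes": 0.05, "red_blood": 0.01},
}

def posterior(observed):
    # Treat every feature as independent given the central class.
    logp = {}
    for z in PRIOR:
        total = log(PRIOR[z])
        for feat, present in observed.items():
            p = LIKELIHOOD[z][feat]
            total += log(p if present else 1.0 - p)
        logp[z] = total
    norm = sum(exp(v) for v in logp.values())
    return {z: exp(v) / norm for z, v in logp.items()}

# Nude Nellie: talks, doesn't wear clothes.  Update the class...
post = posterior({"speaks": True, "wears_clothes": False})
# ...then predict the unobserved variable through the class alone.
print(sum(post[z] * LIKELIHOOD[z]["red_blood"] for z in post))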


Sound familiar?  It should:


[Image: \"Blegg2\" - a network with a central category unit connected to the observed features]


As a matter of fact, if you use the right kind of neural network units, this \"neural network\" ends up exactly, mathematically equivalent to Naive Bayes.  The central unit just needs a logistic threshold—an S-curve response—and the weights of the inputs just need to match the logarithms of the likelihood ratios, etcetera.  In fact, it's a good guess that this is one of the reasons why logistic response often works so well in neural networks—it lets the algorithm sneak in a little Bayesian reasoning while the designers aren't looking.
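
To make the equivalence concrete, here is a sketch (again with made-up numbers) showing that the two-class Naive Bayes posterior over binary features is exactly a logistic unit whose weights are log likelihood ratios:

from math import exp, log

prior = {"human": 0.5, "robot": 0.5}
p_on = {  # P(feature = 1 | class) - hypothetical numbers
    "human": {"speaks": 0.90, "wears_clothes": 0.95},
    "robot": {"speaks": 0.60, "wears_clothes": 0.05},
}
features = ["speaks", "wears_clothes"]

# Weight per feature: how much the log odds shift when it flips from 0 to 1.
w = {f: log(p_on["human"][f] / p_on["robot"][f])
        - log((1 - p_on["human"][f]) / (1 - p_on["robot"][f])) for f in features}
# Bias: prior log odds, plus the evidence from every feature being 0.
b = log(prior["human"] / prior["robot"]) + sum(
    log((1 - p_on["human"][f]) / (1 - p_on["robot"][f])) for f in features)

def naive_bayes(x):
    ph, pr = prior["human"], prior["robot"]
    for f in features:
        ph *= p_on["human"][f] if x[f] else 1 - p_on["human"][f]
        pr *= p_on["robot"][f] if x[f] else 1 - p_on["robot"][f]
    return ph / (ph + pr)

def logistic_unit(x):
    a = b + sum(w[f] * x[f] for f in features)
    return 1.0 / (1.0 + exp(-a))         # the S-curve response

x = {"speaks": 1, "wears_clothes": 0}
print(naive_bayes(x), logistic_unit(x))  # identical posteriors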


Just because someone is presenting you with an algorithm that they call a \"neural network\" with buzzwords like \"scruffy\" and \"emergent\" plastered all over it, disclaiming proudly that they have no idea how the learned network works—well, don't assume that their little AI algorithm really is Beyond the Realms of Logic.  For this paradigm of adhockery, if it works, will turn out to have Bayesian structure; it may even be exactly equivalent to an algorithm of the sort called \"Bayesian\".


Even if it doesn't look Bayesian, on the surface.


And then you just know that the Bayesians are going to start explaining exactly how the algorithm works, what underlying assumptions it reflects, which environmental regularities it exploits, where it works and where it fails, and even attaching understandable meanings to the learned network weights.


Disappointing, isn't it?

" } }, { "_id": "QrhAeKBkm2WsdRYao", "title": "Searching for Bayes-Structure", "pageUrl": "https://www.lesswrong.com/posts/QrhAeKBkm2WsdRYao/searching-for-bayes-structure", "postedAt": "2008-02-28T22:01:36.000Z", "baseScore": 69, "voteCount": 59, "commentCount": 49, "url": null, "contents": { "documentId": "QrhAeKBkm2WsdRYao", "html": "

"Gnomish helms should not function.  Their very construction seems to defy the nature of thaumaturgical law.  In fact, they are impossible.  Like most products of gnomish minds, they include a large number of bells and whistles, and very little substance.  Those that work usually have a minor helm contained within, always hidden away, disguised to appear innocuous and inessential."
              -- Spelljammer campaign set


We have seen that knowledge implies mutual information between a mind and its environment, and we have seen that this mutual information is negentropy in a very physical sense:  If you know where molecules are and how fast they're moving, you can turn heat into work via a Maxwell's Demon / Szilard engine.


We have seen that forming true beliefs without evidence is the same sort of improbability as a hot glass of water spontaneously reorganizing into ice cubes and electricity.  Rationality takes "work" in a thermodynamic sense, not just the sense of mental effort; minds have to radiate heat if they are not perfectly efficient.  This cognitive work is governed by probability theory, of which thermodynamics is a special case.  (Statistical mechanics is a special case of statistics.)


If you saw a machine continually spinning a wheel, apparently without being plugged into a wall outlet or any other source of power, then you would look for a hidden battery, or a nearby broadcast power source - something to explain the work being done, without violating the laws of physics.


So if a mind is arriving at true beliefs, and we assume that the second law of thermodynamics has not been violated, that mind must be doing something at least vaguely Bayesian - at least one process with a sort-of Bayesian structure somewhere - or it couldn't possibly work.


In the beginning, at time T=0, a mind has no mutual information with a subsystem S in its environment.  At time T=1, the mind has 10 bits of mutual information with S.  Somewhere in between, the mind must have encountered evidence - under the Bayesian definition of evidence, because all Bayesian evidence is mutual information and all mutual information is Bayesian evidence, they are just different ways of looking at it - and processed at least some of that evidence, however inefficiently, in the right direction according to Bayes on at least some occasions.  The mind must have moved in harmony with the Bayes at least a little, somewhere along the line - either that or violated the second law of thermodynamics by creating mutual information from nothingness.


In fact, any part of a cognitive process that contributes usefully to truth-finding must have at least a little Bayesian structure - must harmonize with Bayes, at some point or another - must partially conform with the Bayesian flow, however noisily - despite however many disguising bells and whistles - even if this Bayesian structure is only apparent in the context of surrounding processes.  Or it couldn't even help.


How philosophers pondered the nature of words!  All the ink spent on the true definitions of words, and the true meaning of definitions, and the true meaning of meaning!  What collections of gears and wheels they built, in their explanations!  And all along, it was a disguised form of Bayesian inference!


I was actually a bit disappointed that no one in the audience jumped up and said:  "Yes!  Yes, that's it!  Of course!  It was really Bayes all along!"


But perhaps it is not quite as exciting to see something that doesn't look Bayesian on the surface, revealed as Bayes wearing a clever disguise, if: (a) you don't unravel the mystery yourself, but read about someone else doing it (Newton had more fun than most students taking calculus), and (b) you don't realize that searching for the hidden Bayes-structure is this huge, difficult, omnipresent quest, like searching for the Holy Grail.


It's a different quest for each facet of cognition, but the Grail always turns out to be the same.  It has to be the right Grail, though - and the entire Grail, without any parts missing - and so each time you have to go on the quest looking for a full answer whatever form it may take, rather than trying to artificially construct vaguely hand-waving Grailish arguments.  Then you always find the same Holy Grail at the end.


It was previously pointed out to me that I might be losing some of my readers with the long essays, because I hadn't "made it clear where I was going"...


...but it's not so easy to just tell people where you're going, when you're going somewhere like that.


It's not very helpful to merely know that a form of cognition is Bayesian, if you don't know how it is Bayesian.  If you can't see the detailed flow of probability, you have nothing but a password - or, a bit more charitably, a hint at the form an answer would take; but certainly not an answer.  That's why there's a Grand Quest for the Hidden Bayes-Structure, rather than being done when you say "Bayes!"  Bayes-structure can be buried under all kinds of disguises, hidden behind thickets of wheels and gears, obscured by bells and whistles.


The way you begin to grasp the Quest for the Holy Bayes is that you learn about cognitive phenomenon XYZ, which seems really useful - and there's this bunch of philosophers who've been arguing about its true nature for centuries, and they are still arguing - and there's a bunch of AI scientists trying to make a computer do it, but they can't agree on the philosophy either -


And - Huh, that's odd! - this cognitive phenomenon didn't look anything like Bayesian on the surface, but there's this non-obvious underlying structure that has a Bayesian interpretation - but wait, there's still some useful work getting done that can't be explained in Bayesian terms - no wait, that's Bayesian too - OH MY GOD this completely different cognitive process, that also didn't look Bayesian on the surface, ALSO HAS BAYESIAN STRUCTURE - hold on, are these non-Bayesian parts even doing anything?


Once this happens to you a few times, you kinda pick up the rhythm.  That's what I'm talking about here, the rhythm.


Trying to talk about the rhythm is like trying to dance about architecture.


This left me in a bit of a pickle when it came to trying to explain in advance where I was going.  I know from experience that if I say, "Bayes is the secret of the universe," some people may say "Yes!  Bayes is the secret of the universe!"; and others will snort and say, "How narrow-minded you are; look at all these other ad-hoc but amazingly useful methods, like regularized linear regression, that I have in my toolbox."


I hoped that with a specific example in hand of "something that doesn't look all that Bayesian on the surface, but turns out to be Bayesian after all" - and an explanation of the difference between passwords and knowledge - and an explanation of the difference between tools and laws - maybe then I could convey such of the rhythm as can be understood without personally going on the quest.


Of course this is not the full Secret of the Bayesian Conspiracy, but it's all that I can convey at this point.  Besides, the complete secret is known only to the Bayes Council, and if I told you, I'd have to hire you.


To see through the surface adhockery of a cognitive process, to the Bayesian structure underneath - to perceive the probability flows, and know how, not just know that, this cognition too is Bayesian - as it always is - as it always must be - to be able to sense the Force underlying all cognition - this, is the Bayes-Sight.

    "...And the Queen of Kashfa sees with the Eye of the Serpent."
    "I don't know that she sees with it," I said.  "She's still recovering from the operation.  But that's an interesting thought.  If she could see with it, what might she behold?"
    "The clear, cold lines of eternity, I daresay.  Beneath all Shadow."
              -- Roger Zelazny, Prince of Chaos

" } }, { "_id": "zFuCxbY9E2E8HTbfZ", "title": "Perpetual Motion Beliefs", "pageUrl": "https://www.lesswrong.com/posts/zFuCxbY9E2E8HTbfZ/perpetual-motion-beliefs", "postedAt": "2008-02-27T20:22:06.000Z", "baseScore": 77, "voteCount": 65, "commentCount": 44, "url": null, "contents": { "documentId": "zFuCxbY9E2E8HTbfZ", "html": "\n

Yesterday's post concluded:


To form accurate beliefs about something, you really do have to observe it.  It's a very physical, very real process: any rational mind does "work" in the thermodynamic sense, not just the sense of mental effort...  So unless you can tell me which specific step in your argument violates the laws of physics by giving you true knowledge of the unseen, don't expect me to believe that a big, elaborate clever argument can do it either.


One of the chief morals of the mathematical analogy between thermodynamics and cognition is that the constraints of probability are inescapable; probability may be a "subjective state of belief", but the laws of probability are harder than steel.


People learn under the traditional school regimen that the teacher tells you certain things, and you must believe them and recite them back; but if a mere student suggests a belief, you do not have to obey it.  They map the domain of belief onto the domain of authority, and think that a certain belief is like an order that must be obeyed, but a probabilistic belief is like a mere suggestion.


They look at a lottery ticket, and say, "But you can't prove I won't win, right?"  Meaning:  "You may have calculated a low probability of winning, but since it is a probability, it's just a suggestion, and I am allowed to believe what I want."


Here's a little experiment:  Smash an egg on the floor.  The rule that says that the egg won't spontaneously reform and leap back into your hand is merely probabilistic.  A suggestion, if you will.  The laws of thermodynamics are probabilistic, so they can't really be laws, the way that "Thou shalt not murder" is a law... right?


So why not just ignore the suggestion?  Then the egg will unscramble itself... right?


It may help to think of it this way - if you still have some lingering intuition that uncertain beliefs are not authoritative:


In reality, there may be a very small chance that the egg spontaneously reforms.  But you cannot expect it to reform.  You must expect it to smash.  Your mandatory belief is that the egg's probability of spontaneous reformation is ~0.  Probabilities are not certainties, but the laws of probability are theorems.


If you doubt this, try dropping an egg on the floor a few decillion times, ignoring the thermodynamic suggestion and expecting it to spontaneously reassemble, and see what happens.  Probabilities may be subjective states of belief, but the laws governing them are stronger by far than steel.

I once knew a fellow who was convinced that his system of wheels and gears would produce reactionless thrust, and he had an Excel spreadsheet that would prove this - which of course he couldn't show us because he was still developing the system.  In classical mechanics, violating Conservation of Momentum is provably impossible.  So any Excel spreadsheet calculated according to the rules of classical mechanics must necessarily show that no reactionless thrust exists - unless your machine is complicated enough that you have made a mistake in the calculations.


And similarly, when half-trained or tenth-trained rationalists abandon their art and try to believe without evidence just this once, they often build vast edifices of justification, confusing themselves just enough to conceal the magical steps.


It can be quite a pain to nail down where the magic occurs - their structure of argument tends to morph and squirm away as you interrogate them.  But there's always some step where a tiny probability turns into a large one - where they try to believe without evidence - where they step into the unknown, thinking, "No one can prove me wrong".


Their foot naturally lands on thin air, for there is far more thin air than ground in the realms of Possibility.  Ah, but there is an (exponentially tiny) amount of ground in Possibility, and you do have an (exponentially tiny) probability of hitting it by luck, so maybe this time, your foot will land in the right place!  It is merely a probability, so it must be merely a suggestion.


The exact state of a glass of boiling-hot water may be unknown to you - indeed, your ignorance of its exact state is what makes the molecules' kinetic energy "heat", rather than work waiting to be extracted like the momentum of a spinning flywheel.  So the water might cool down your hand instead of heating it up, with probability ~0.


Decide to ignore the laws of thermodynamics and stick your hand in anyway, and you'll get burned.


"But you don't know that!"


I don't know it with certainty, but it is mandatory that I expect it to happen.  Probabilities are not logical truths, but the laws of probability are.


"But what if I guess the state of the boiling water, and I happen to guess correctly?"


Your chance of guessing correctly by luck, is even less than the chance of the boiling water cooling your hand by luck.


"But you can't prove I won't guess correctly."


I can (indeed, must) assign extremely low probability to it.


"That's not the same as certainty, though."


Hey, maybe if you add enough wheels and gears to your argument, it'll turn warm water into electricity and ice cubes!  Or, rather, you will no longer see why this couldn't be the case.


"Right! I can't see why it couldn't be the case!  So maybe it is!"


Another gear?  That just makes your machine even less efficient.  It wasn't a perpetual motion machine before, and each extra gear you add makes it even less efficient than that.


Each extra detail in your argument necessarily decreases the joint probability.  The probability that you've violated the Second Law of Thermodynamics without knowing exactly how, by guessing the exact state of boiling water without evidence, so that you can stick your finger in without getting burned, is, necessarily, even less than the probability of sticking your finger into boiling water without getting burned.
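
That last step is just the conjunction rule, P(A and B) <= P(A).  A short check, using an arbitrary made-up distribution, that adding conjuncts can only shed probability:

import random
from itertools import product

random.seed(0)
outcomes = list(product([False, True], repeat=3))   # three binary claims
weights = [random.random() for _ in outcomes]
P = {o: v / sum(weights) for o, v in zip(outcomes, weights)}

p_a = sum(p for (a, b, c), p in P.items() if a)
p_ab = sum(p for (a, b, c), p in P.items() if a and b)
p_abc = sum(p for (a, b, c), p in P.items() if a and b and c)
print(p_a >= p_ab >= p_abc)   # True: every added detail can only shed probability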


I say all this, because people really do construct these huge edifices of argument in the course of believing without evidence.  One must learn to see this as analogous to all the wheels and gears that fellow added onto his reactionless drive, until he finally collected enough complications to make a mistake in his Excel spreadsheet.

" } }, { "_id": "QkX2bAkwG2EpGvNug", "title": "The Second Law of Thermodynamics, and Engines of Cognition", "pageUrl": "https://www.lesswrong.com/posts/QkX2bAkwG2EpGvNug/the-second-law-of-thermodynamics-and-engines-of-cognition", "postedAt": "2008-02-27T00:48:30.000Z", "baseScore": 213, "voteCount": 159, "commentCount": 76, "url": null, "contents": { "documentId": "QkX2bAkwG2EpGvNug", "html": "

The first law of thermodynamics, better known as Conservation of Energy, says that you can't create energy from nothing: it prohibits perpetual motion machines of the first type, which run and run indefinitely without consuming fuel or any other resource.  According to our modern view of physics, energy is conserved in each individual interaction of particles.  By mathematical induction, we see that no matter how large an assemblage of particles may be, it cannot produce energy from nothing - not without violating what we presently believe to be the laws of physics.


This is why the US Patent Office will summarily reject your amazingly clever proposal for an assemblage of wheels and gears that cause one spring to wind up another as the first runs down, and so continue to do work forever, according to your calculations.  There's a fully general proof that at least one wheel must violate (our standard model of) the laws of physics for this to happen.  So unless you can explain how one wheel violates the laws of physics, the assembly of wheels can't do it either.


A similar argument applies to a "reactionless drive", a propulsion system that violates Conservation of Momentum.  In standard physics, momentum is conserved for all individual particles and their interactions; by mathematical induction, momentum is conserved for physical systems whatever their size.  If you can visualize two particles knocking into each other and always coming out with the same total momentum that they started with, then you can see how scaling it up from particles to a gigantic complicated collection of gears won't change anything.  Even if there's a trillion quadrillion atoms involved, 0 + 0 + ... + 0 = 0.


But Conservation of Energy, as such, cannot prohibit converting heat into work.  You can, in fact, build a sealed box that converts ice cubes and stored electricity into warm water.  It isn't even difficult.  Energy cannot be created or destroyed:  The net change in energy, from transforming (ice cubes + electricity) to (warm water), must be 0.  So it couldn't violate Conservation of Energy, as such, if you did it the other way around...


Perpetual motion machines of the second type, which convert warm water into electrical current and ice cubes, are prohibited by the Second Law of Thermodynamics.


The Second Law is a bit harder to understand, as it is essentially Bayesian in nature.


Yes, really.

The essential physical law underlying the Second Law of Thermodynamics is a theorem which can be proven within the standard model of physics:  In the development over time of any closed system, phase space volume is conserved.


Let's say you're holding a ball high above the ground.  We can describe this state of affairs as a point in a multidimensional space, at least one of whose dimensions is "height of ball above the ground".  Then, when you drop the ball, it moves, and so does the dimensionless point in phase space that describes the entire system that includes you and the ball.  "Phase space", in physics-speak, means that there are dimensions for the momentum of the particles, not just their position - i.e., a system of 2 particles would have 12 dimensions, 3 dimensions for each particle's position, and 3 dimensions for each particle's momentum.


If you had a multidimensional space, each of whose dimensions described the position of a gear in a huge assemblage of gears, then as you turned the gears a single point would swoop and dart around in a rather high-dimensional phase space.  Which is to say, just as you can view a great big complex machine as a single point in a very-high-dimensional space, so too, you can view the laws of physics describing the behavior of this machine over time, as describing the trajectory of its point through the phase space.


The Second Law of Thermodynamics is a consequence of a theorem which can be proven in the standard model of physics:  If you take a volume of phase space, and develop it forward in time using standard physics, the total volume of the phase space is conserved.


For example:


Let there be two systems, X and Y: where X has 8 possible states, Y has 4 possible states, and the joint system (X,Y) has 32 possible states.


The development of the joint system over time can be described as a rule that maps initial points onto future points.  For example, the system could start out in X7Y2, then develop (under some set of physical laws) into the state X3Y3 a minute later.  Which is to say: if X started in 7, and Y started in 2, and we watched it for 1 minute, we would see X go to 3 and Y go to 3.  Such are the laws of physics.


Next, let's carve out a subspace S of the joint system state.  S will be the subspace bounded by X being in state 1 and Y being in states 1-4.  So the total volume of S is 4 states.


And let's suppose that, under the laws of physics governing (X,Y) the states initially in S behave as follows:

X1Y1 -> X2Y1
X1Y2 -> X4Y1
X1Y3 -> X6Y1
X1Y4 -> X8Y1

That, in a nutshell, is how a refrigerator works.


The X subsystem began in a narrow region of state space - the single state 1, in fact - and Y began distributed over a wider region of space, states 1-4.  By interacting with each other, Y went into a narrow region, and X ended up in a wide region; but the total phase space volume was conserved.  4 initial states mapped to 4 end states.


Clearly, so long as total phase space volume is conserved by physics over time, you can't squeeze Y harder than X expands, or vice versa - for every subsystem you squeeze into a narrower region of state space, some other subsystem has to expand into a wider region of state space.


Now let's say that we're uncertain about the joint system (X,Y), and our uncertainty is described by an equiprobable distribution over S.  That is, we're pretty sure X is in state 1, but Y is equally likely to be in any of states 1-4.  If we shut our eyes for a minute and then open them again, we will expect to see Y in state 1, but X might be in any of states 2-8.  Actually, X can only be in some of states 2-8, but it would be too costly to think out exactly which states these might be, so we'll just say 2-8.


If you consider the Shannon entropy of our uncertainty about X and Y as individual systems, X began with 0 bits of entropy because it had a single definite state, and Y began with 2 bits of entropy because it was equally likely to be in any of 4 possible states.  (There's no mutual information between X and Y.)  A bit of physics occurred, and lo, the entropy of Y went to 0, but the entropy of X went to log2(7) ≈ 2.8 bits.  So entropy was transferred from one system to another, and decreased within the Y subsystem; but due to the cost of bookkeeping, we didn't bother to track some information, and hence (from our perspective) the overall entropy increased.
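
This bookkeeping is easy to reproduce - a minimal sketch, using just the probability assignments described in the text:

from math import log2

def H(probs):
    # Shannon entropy, in bits, of a list of probabilities.
    return sum(p * -log2(p) for p in probs if p)

print(H([1.0]), H([0.25] * 4))       # before: X has 0 bits, Y has 2 bits

# The dynamics send X1Yj to X(2j)Y1, so afterward Y is definitely in state 1,
# and X is really uniform over the four states {2, 4, 6, 8}:
print(H([0.25] * 4), H([1.0]))       # 2.0 and 0.0 - total entropy conserved

# But if we skip the bookkeeping and just say "X is somewhere in 2-8":
print(H([1.0 / 7] * 7))              # 2.807... bits - entropy increased for us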


If there was a physical process that mapped past states onto future states like this:

X2,Y1 -> X2,Y1
X2,Y2 -> X2,Y1
X2,Y3 -> X2,Y1
X2,Y4 -> X2,Y1

Then you could have a physical process that would actually decrease entropy, because no matter where you started out, you would end up at the same place.  The laws of physics, developing over time, would compress the phase space.


But there is a theorem, Liouville's Theorem, which can be proven true of our laws of physics, which says that this never happens: phase space is conserved.


The Second Law of Thermodynamics is a corollary of Liouville's Theorem: no matter how clever your configuration of wheels and gears, you'll never be able to decrease entropy in one subsystem without increasing it somewhere else.  When the phase space of one subsystem narrows, the phase space of another subsystem must widen, and the joint space keeps the same volume.


Except that what was initially a compact phase space, may develop squiggles and wiggles and convolutions; so that to draw a simple boundary around the whole mess, you must draw a much larger boundary than before - this is what gives the appearance of entropy increasing.  (And in quantum systems, where different universes go different ways, entropy actually does increase in any local universe.  But omit this complication for now.)


The Second Law of Thermodynamics is actually probabilistic in nature - if you ask about the probability of hot water spontaneously entering the "cold water and electricity" state, the probability does exist, it's just very small.  This doesn't mean Liouville's Theorem is violated with small probability; a theorem's a theorem, after all.  It means that if you're in a great big phase space volume at the start, but you don't know where, you may assess a tiny little probability of ending up in some particular phase space volume.  So far as you know, with infinitesimal probability, this particular glass of hot water may be the kind that spontaneously transforms itself to electrical current and ice cubes.  (Neglecting, as usual, quantum effects.)


So the Second Law really is inherently Bayesian.  When it comes to any real thermodynamic system, it's a strictly lawful statement of your beliefs about the system, but only a probabilistic statement about the system itself.


"Hold on," you say.  "That's not what I learned in physics class," you say.  "In the lectures I heard, thermodynamics is about, you know, temperatures.  Uncertainty is a subjective state of mind!  The temperature of a glass of water is an objective property of the water!  What does heat have to do with probability?"


Oh ye of little trust.


In one direction, the connection between heat and probability is relatively straightforward:  If the only fact you know about a glass of water is its temperature, then you are much more uncertain about a hot glass of water than a cold glass of water.


Heat is the zipping around of lots of tiny molecules; the hotter they are, the faster they can go.  Not all the molecules in hot water are travelling at the same speed - the "temperature" isn't a uniform speed of all the molecules, it's an average speed of the molecules, which in turn corresponds to a predictable statistical distribution of speeds - anyway, the point is that, the hotter the water, the faster the water molecules could be going, and hence, the more uncertain you are about the velocity (not just speed) of any individual molecule.  When you multiply together your uncertainties about all the individual molecules, you will be exponentially more uncertain about the whole glass of water.


We take the logarithm of this exponential volume of uncertainty, and call that the entropy.  So it all works out, you see.


The connection in the other direction is less obvious.  Suppose there was a glass of water, about which, initially, you knew only that its temperature was 72 degrees.  Then, suddenly, Saint Laplace reveals to you the exact locations and velocities of all the atoms in the water.  You now know perfectly the state of the water, so, by the information-theoretic definition of entropy, its entropy is zero.  Does that make its thermodynamic entropy zero?  Is the water colder, because we know more about it?


Ignoring quantumness for the moment, the answer is:  Yes!  Yes it is!


Maxwell once asked:  Why can't we take a uniformly hot gas, and partition it into two volumes A and B, and let only fast-moving molecules pass from B to A, while only slow-moving molecules are allowed to pass from A to B?  If you could build a gate like this, soon you would have hot gas on the A side, and cold gas on the B side.  That would be a cheap way to refrigerate food, right?


The agent who inspects each gas molecule, and decides whether to let it through, is known as "Maxwell's Demon".  And the reason you can't build an efficient refrigerator this way, is that Maxwell's Demon generates entropy in the process of inspecting the gas molecules and deciding which ones to let through.


But suppose you already knew where all the gas molecules were?


Then you actually could run Maxwell's Demon and extract useful work.


So (again ignoring quantum effects for the moment), if you know the states of all the molecules in a glass of hot water, it is cold in a genuinely thermodynamic sense: you can take electricity out of it and leave behind an ice cube.


This doesn't violate Liouville's Theorem, because if Y is the water, and you are Maxwell's Demon (denoted M), the physical process behaves as:

M1,Y1 -> M1,Y1
M2,Y2 -> M2,Y1
M3,Y3 -> M3,Y1
M4,Y4 -> M4,Y1


Because Maxwell's demon knows the exact state of Y, this is mutual information between M and Y.  The mutual information decreases the joint entropy of (M,Y):  H(M,Y) = H(M) + H(Y) - I(M;Y).  M has 2 bits of entropy, Y has two bits of entropy, and their mutual information is 2 bits, so (M,Y) has a total of 2 + 2 - 2 = 2 bits of entropy.  The physical process just transforms the "coldness" (negentropy) of the mutual information to make the actual water cold - afterward, M has 2 bits of entropy, Y has 0 bits of entropy, and the mutual information is 0.  Nothing wrong with that!
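
The same accounting, run mechanically, as a sketch over the four equally likely correlated states described above:

from collections import defaultdict
from math import log2

def H(dist):
    return sum(p * -log2(p) for p in dist.values() if p)

# Four equally likely worlds in which the demon's record matches the water.
joint = {(m, y): 0.25 for m, y in [(1, 1), (2, 2), (3, 3), (4, 4)]}

def marginal(k):
    out = defaultdict(float)
    for outcome, p in joint.items():
        out[outcome[k]] += p
    return out

HM, HY, HMY = H(marginal(0)), H(marginal(1)), H(joint)
print(HM, HY, HMY, HM + HY - HMY)   # 2.0 2.0 2.0 and I(M;Y) = 2.0 bits

# After the process MiYi -> MiY1, the correlation has been spent cooling Y:
after = {(m, 1): 0.25 for m in [1, 2, 3, 4]}
print(H(after))                     # still 2.0 bits of joint entropy - no violation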


And don't tell me that knowledge is "subjective".  Knowledge has to be represented in a brain, and that makes it as physical as anything else.  For M to physically represent an accurate picture of the state of Y, M's physical state must correlate with the state of Y.  You can take thermodynamic advantage of that - it's called a Szilard engine.


Or as E.T. Jaynes put it, "The old adage 'knowledge is power' is a very cogent truth, both in human relations and in thermodynamics."


And conversely, one subsystem cannot increase in mutual information with another subsystem, without (a) interacting with it and (b) doing thermodynamic work.


Otherwise you could build a Maxwell's Demon and violate the Second Law of Thermodynamics - which in turn would violate Liouville's Theorem - which is prohibited in the standard model of physics.


Which is to say:  To form accurate beliefs about something, you really do have to observe it.  It's a very physical, very real process: any rational mind does "work" in the thermodynamic sense, not just the sense of mental effort.


(It is sometimes said that it is erasing bits in order to prepare for the next observation that takes the thermodynamic work - but that distinction is just a matter of words and perspective; the math is unambiguous.)


(Discovering logical "truths" is a complication which I will not, for now, consider - at least in part because I am still thinking through the exact formalism myself.  In thermodynamics, knowledge of logical truths does not count as negentropy; as would be expected, since a reversible computer can compute logical truths at arbitrarily low cost.  All this that I have said is true of the logically omniscient: any lesser mind will necessarily be less efficient.)


"Forming accurate beliefs requires a corresponding amount of evidence" is a very cogent truth both in human relations and in thermodynamics: if blind faith actually worked as a method of investigation, you could turn warm water into electricity and ice cubes.  Just build a Maxwell's Demon that has blind faith in molecule velocities.


Engines of cognition are not so different from heat engines, though they manipulate entropy in a more subtle form than burning gasoline.  For example, to the extent that an engine of cognition is not perfectly efficient, it must radiate waste heat, just like a car engine or refrigerator.


"Cold rationality" is true in a sense that Hollywood scriptwriters never dreamed (and false in the sense that they did dream).


So unless you can tell me which specific step in your argument violates the laws of physics by giving you true knowledge of the unseen, don't expect me to believe that a big, elaborate clever argument can do it either.

" } }, { "_id": "3XgYbghWruBMrPTAL", "title": "Leave a Line of Retreat", "pageUrl": "https://www.lesswrong.com/posts/3XgYbghWruBMrPTAL/leave-a-line-of-retreat", "postedAt": "2008-02-25T23:57:58.000Z", "baseScore": 221, "voteCount": 156, "commentCount": 75, "url": null, "contents": { "documentId": "3XgYbghWruBMrPTAL", "html": "

When you surround the enemy

Always allow them an escape route.

They must see that there is

An alternative to death.

—Sun Tzu, The Art of War

Don’t raise the pressure, lower the wall.

—Lois McMaster Bujold, Komarr

I recently happened into a conversation with a nonrationalist who had somehow wandered into a local rationalists’ gathering. She had just declared (a) her belief in souls and (b) that she didn’t believe in cryonics because she believed the soul wouldn’t stay with the frozen body. I asked, “But how do you know that?”

From the confusion that flashed on her face, it was pretty clear that this question had never occurred to her. I don’t say this in a bad way—she seemed like a nice person without any applied rationality training, just like most of the rest of the human species.

Most of the ensuing conversation was on items already covered on Overcoming Bias—if you’re really curious about something, you probably can figure out a good way to test it, try to attain accurate beliefs first and then let your emotions flow from that, that sort of thing. But the conversation reminded me of one notion I haven’t covered here yet:

“Make sure,” I suggested to her, “that you visualize what the world would be like if there are no souls, and what you would do about that. Don’t think about all the reasons that it can’t be that way; just accept it as a premise and then visualize the consequences. So that you’ll think, ‘Well, if there are no souls, I can just sign up for cryonics,’ or ‘If there is no God, I can just go on being moral anyway,’ rather than it being too horrifying to face. As a matter of self-respect, you should try to believe the truth no matter how uncomfortable it is, like I said before; but as a matter of human nature, it helps to make a belief less uncomfortable, before you try to evaluate the evidence for it.”

The principle behind the technique is simple: as Sun Tzu advises you to do with your enemies, you must do with yourself—leave yourself a line of retreat, so that you will have less trouble retreating. The prospect of losing your job, for example, may seem a lot more scary when you can’t even bear to think about it than after you have calculated exactly how long your savings will last, and checked the job market in your area, and otherwise planned out exactly what to do next. Only then will you be ready to fairly assess the probability of keeping your job in the planned layoffs next month. Be a true coward, and plan out your retreat in detail—visualize every step—preferably before you first come to the battlefield.

The hope is that it takes less courage to visualize an uncomfortable state of affairs as a thought experiment, than to consider how likely it is to be true. But then after you do the former, it becomes easier to do the latter.

Remember that Bayesianism is precise—even if a scary proposition really should seem unlikely, it’s still important to count up all the evidence, for and against, exactly fairly, to arrive at the rational quantitative probability. Visualizing a scary belief does not mean admitting that you think, deep down, it’s probably true. You can visualize a scary belief on general principles of good mental housekeeping. “The thought you cannot think controls you more than thoughts you speak aloud”—this happens even if the unthinkable thought is false!

The leave-a-line-of-retreat technique does require a certain minimum of self-honesty to use correctly.

For a start: You must at least be able to admit to yourself which ideas scare you, and which ideas you are attached to. But this is a substantially less difficult test than fairly counting the evidence for an idea that scares you. Does it help if I say that I have occasion to use this technique myself? A rationalist does not reject all emotion, after all. There are ideas which scare me, yet I still believe to be false. There are ideas to which I know I am attached, yet I still believe to be true. But I still plan my retreats, not because I’m planning to retreat, but because planning my retreat in advance helps me think about the problem without attachment.

But the greater test of self-honesty is to really accept the uncomfortable proposition as a premise, and figure out how you would really deal with it. When we’re faced with an uncomfortable idea, our first impulse is naturally to think of all the reasons why it can’t possibly be so. And so you will encounter a certain amount of psychological resistance in yourself, if you try to visualize exactly how the world would be, and what you would do about it, if My-Most-Precious-Belief were false, or My-Most-Feared-Belief were true.

Think of all the people who say that without God, morality is impossible.[1] If theists could visualize their real reaction to believing as a fact that God did not exist, they could realize that, no, they wouldn’t go around slaughtering babies. They could realize that atheists are reacting to the nonexistence of God in pretty much the way they themselves would, if they came to believe that. I say this, to show that it is a considerable challenge to visualize the way you really would react, to believing the opposite of a tightly held belief.

Plus it’s always counterintuitive to realize that, yes, people do get over things. Newly minted quadriplegics are not as sad, six months later, as they expect to be, etc. It can be equally counterintuitive to realize that if the scary belief turned out to be true, you would come to terms with it somehow. Quadriplegics deal, and so would you.

See also the Litany of Gendlin and the Litany of Tarski.  What is true is already so; owning up to it doesn't make it worse.  You shouldn't be afraid to just visualize a world you fear. If that world is already actual, visualizing it won't make it worse; and if it is not actual, visualizing it will do no harm.  And remember, as you visualize, that if the scary things you're imagining really are true—which they may not be!—then you would, indeed, want to believe it, and you should visualize that too; not believing wouldn't help you.

How many religious people would retain their belief in God if they could accurately visualize that hypothetical world in which there was no God and they themselves have become atheists?

Leaving a line of retreat is a powerful technique, but it’s not easy. Honest visualization doesn’t take as much effort as admitting outright that God doesn’t exist, but it does take an effort.

[1] And yes, this topic did come up in the conversation; I’m not offering a strawman.

" } }, { "_id": "82eMd5KLiJ5Z6rTrr", "title": "Superexponential Conceptspace, and Simple Words", "pageUrl": "https://www.lesswrong.com/posts/82eMd5KLiJ5Z6rTrr/superexponential-conceptspace-and-simple-words", "postedAt": "2008-02-24T23:59:28.000Z", "baseScore": 69, "voteCount": 60, "commentCount": 18, "url": null, "contents": { "documentId": "82eMd5KLiJ5Z6rTrr", "html": "

Thingspace, you might think, is a rather huge space.  Much larger than reality, for where reality only contains things that actually exist, Thingspace contains everything that could exist.


Actually, the way I \"defined\" Thingspace to have dimensions for every possible attribute—including correlated attributes like density and volume and mass—Thingspace may be too poorly defined to have anything you could call a size.  But it's important to be able to visualize Thingspace anyway.  Surely, no one can really understand a flock of sparrows if all they see is a cloud of flapping cawing things, rather than a cluster of points in Thingspace.


But as vast as Thingspace may be, it doesn't hold a candle to the size of Conceptspace.


\"Concept\", in machine learning, means a rule that includes or excludes examples.  If you see the data 2:+, 3:-, 14:+, 23:-, 8:+, 9:- then you might guess that the concept was \"even numbers\".  There is a rather large literature (as one might expect) on how to learn concepts from data... given random examples, given chosen examples... given possible errors in classification... and most importantly, given different spaces of possible rules.


Suppose, for example, that we want to learn the concept \"good days on which to play tennis\".  The possible attributes of Days are:



Sky:      {Sunny, Cloudy, Rainy}
AirTemp:  {Warm, Cold}
Humidity: {Normal, High}
Wind:     {Strong, Weak}


We're then presented with the following data, where + indicates a positive example of the concept, and - indicates a negative classification:



+   Sky: Sunny;  AirTemp: Warm;  Humidity: High;  Wind: Strong.
-   Sky: Rainy;  AirTemp: Cold;  Humidity: High;  Wind: Strong.
+   Sky: Sunny;  AirTemp: Warm;  Humidity: High;  Wind: Weak.


What should an algorithm infer from this?



A machine learner might represent one concept that fits this data as follows:


Sky: ?;  AirTemp: Warm;  Humidity: High;  Wind: ?


In this format, to determine whether this concept accepts or rejects an example, we compare element-by-element:  ? accepts anything, but a specific value accepts only that specific value.


So the concept above will accept only Days with AirTemp=Warm and Humidity=High, but the Sky and the Wind can take on any value.  This fits both the negative and the positive classifications in the data so far—though it isn't the only concept that does so.


We can also simplify the above concept representation to {?, Warm, High, ?}.
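
In code (a sketch; the original gives none), such a concept is a tuple with None standing in for ?, and classification is element-by-element comparison:

def matches(concept, day):
    # None plays the role of "?": it accepts any value.
    return all(want is None or want == got for want, got in zip(concept, day))

concept = (None, "Warm", "High", None)   # {?, Warm, High, ?}
print(matches(concept, ("Sunny", "Warm", "High", "Strong")))   # True  (positive)
print(matches(concept, ("Rainy", "Cold", "High", "Strong")))   # False (negative)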


Without going into details, the classic algorithm would be:  maintain the set of the most general hypotheses that fit the data, and the set of the most specific hypotheses that fit the data; when a new positive example arrives, minimally generalize any most-specific hypothesis that rejects it, and when a new negative example arrives, minimally specialize any most-general hypothesis that accepts it, discarding hypotheses that can't be repaired.

In the case above, the set of most general hypotheses would be {?, Warm, ?, ?} and {Sunny, ?, ?, ?}, while the set of most specific hypotheses is the single member  {Sunny, Warm, High, ?}.


Any other concept you can find that fits the data will be strictly more specific than one of the most general hypotheses, and strictly more general than the most specific hypothesis.


(For more on this, I recommend Tom Mitchell's Machine Learning, from which this example was adapted.)
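
A concept space this small can simply be brute-forced, which makes a nice check on the algorithm's promised output.  A sketch that enumerates all 108 nontrivial concepts in the representation, keeps the ones consistent with the three examples, and extracts the boundary sets:

from itertools import product

VALUES = [("Sunny", "Cloudy", "Rainy"), ("Warm", "Cold"),
          ("Normal", "High"), ("Strong", "Weak")]
DATA = [(("Sunny", "Warm", "High", "Strong"), True),
        (("Rainy", "Cold", "High", "Strong"), False),
        (("Sunny", "Warm", "High", "Weak"), True)]

def matches(concept, day):
    return all(w is None or w == g for w, g in zip(concept, day))

def at_least_as_general(c1, c2):
    # c1 accepts every day that c2 accepts.
    return all(a is None or a == b for a, b in zip(c1, c2))

# All 4*3*3*3 = 108 concepts.  (The 109th, "reject everything", is ruled
# out by any positive example, so we needn't bother enumerating it.)
concepts = list(product(*[(None,) + vals for vals in VALUES]))
ok = [c for c in concepts
      if all(matches(c, day) == label for day, label in DATA)]

G = [c for c in ok if not any(o != c and at_least_as_general(o, c) for o in ok)]
S = [c for c in ok if not any(o != c and at_least_as_general(c, o) for o in ok)]
print(G)   # the most general boundary: {?, Warm, ?, ?} and {Sunny, ?, ?, ?}
print(S)   # the most specific boundary: {Sunny, Warm, High, ?}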


Now you may notice that the format above cannot represent all possible concepts.  E.g. \"Play tennis when the sky is sunny or the air is warm\".  That fits the data, but in the concept representation defined above, there's no quadruplet of values that describes the rule.


Clearly our machine learner is not very general.  Why not allow it to represent all possible concepts, so that it can learn with the greatest possible flexibility?


Days are composed of these four variables, one variable with 3 values and three variables with 2 values.  So there are 3*2*2*2 = 24 possible Days that we could encounter.


The format given for representing Concepts allows us to require any of these values for a variable, or leave the variable open.  So there are 4*3*3*3 = 108 concepts in that representation.  For the most-general/most-specific algorithm to work, we need to start with the most specific hypothesis \"no example is ever positively classified\".  If we add that, it makes a total of 109 concepts.


Is it suspicious that there are more possible concepts than possible Days?  Surely not:  After all, a concept can be viewed as a collection of Days.  A concept can be viewed as the set of days that it classifies positively, or isomorphically, the set of days that it classifies negatively.


So the space of all possible concepts that classify Days is the set of all possible sets of Days, whose size is 2^24 = 16,777,216.


This complete space includes all the concepts we have discussed so far.  But it also includes concepts like \"Positively classify only the examples {Sunny, Warm, High, Strong} and {Sunny, Warm, High, Weak} and reject everything else\" or \"Negatively classify only the example {Rainy, Cold, High, Strong} and accept everything else.\"  It includes concepts with no compact representation, just a flat list of what is and isn't allowed.


That's the problem with trying to build a \"fully general\" inductive learner:  They can't learn concepts until they've seen every possible example in the instance space.


If we add on more attributes to Days—like the Water temperature, or the Forecast for tomorrow—then the number of possible days will grow exponentially in the number of attributes.  But this isn't a problem with our restricted concept space, because you can narrow down a large space using a logarithmic number of examples.


Let's say we add the Water: {Warm, Cold} attribute to days, which will make for 48 possible Days and 325 possible concepts.  Let's say that each Day we see is, usually, classified positive by around half of the currently-plausible concepts, and classified negative by the other half.  Then when we learn the actual classification of the example, it will cut the space of compatible concepts in half.  So it might only take 9 examples (2^9 = 512) to narrow 325 possible concepts down to one.


Even if Days had forty binary attributes, it should still only take a manageable amount of data to narrow down the possible concepts to one.  64 examples, if each example is classified positive by half the remaining concepts.  Assuming, of course, that the actual rule is one we can represent at all!
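
The contrast with the complete concept space is easy to simulate.  A sketch with 16 possible instances (4 binary attributes), where a concept in the complete space is any subset of instances, encoded as a 16-bit mask:

import random

random.seed(1)
n = 16                                   # 2**4 instances of 4 binary attributes
target = random.getrandbits(n)           # the "true" concept: any subset at all

consistent = list(range(2 ** n))         # all 2**16 = 65,536 possible concepts
for i in random.sample(range(n), n):     # observe instances in random order
    label = (target >> i) & 1
    consistent = [c for c in consistent if ((c >> i) & 1) == label]
    print(len(consistent))               # 32768, 16384, ..., 2, 1

# Each example halves the space, but only the 16th example finishes the job:
# after 15 examples, two concepts remain, disagreeing on the one unseen instance.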


If you want to think of all the possibilities, well, good luck with that.  The space of all possible concepts grows superexponentially in the number of attributes.


By the time you're talking about data with forty binary attributes, the number of possible examples is past a trillion—but the number of possible concepts is past two-to-the-trillionth-power.  To narrow down that superexponential concept space, you'd have to see over a trillion examples before you could say what was In, and what was Out.  You'd have to see every possible example, in fact.


That's with forty binary attributes, mind you.  40 bits, or 5 bytes, to be classified simply \"Yes\" or \"No\".  40 bits implies 2^40 possible examples, and 2^(2^40) possible concepts that classify those examples as positive or negative.


So, here in the real world, where objects take more than 5 bytes to describe and a trillion examples are not available and there is noise in the training data, we only even think about highly regular concepts.  A human mind—or the whole observable universe—is not nearly large enough to consider all the other hypotheses.


From this perspective, learning doesn't just rely on inductive bias, it is nearly all inductive bias—when you compare the number of concepts ruled out a priori, to those ruled out by mere evidence.


But what has this (you inquire) to do with the proper use of words?


It's the whole reason that words have intensions as well as extensions.


In yesterday's post, I concluded:


The way to carve reality at its joints, is to draw boundaries around concentrations of unusually high probability density.


I deliberately left out a key qualification in that (slightly edited) statement, because I couldn't explain it until today.  A better statement would be:


The way to carve reality at its joints, is to draw simple boundaries around concentrations of unusually high probability density in Thingspace.


Otherwise you would just gerrymander Thingspace.  You would create really odd noncontiguous boundaries that collected the observed examples, examples that couldn't be described in any shorter message than your observations themselves, and say:  \"This is what I've seen before, and what I expect to see more of in the future.\"


In the real world, nothing above the level of molecules repeats itself exactly.  Socrates is shaped a lot like all those other humans who were vulnerable to hemlock, but he isn't shaped exactly like them.  So your guess that Socrates is a \"human\" relies on drawing simple boundaries around the human cluster in Thingspace.  Rather than:  \"Things shaped exactly like [5-megabyte shape specification 1] and with [lots of other characteristics], or exactly like [5-megabyte shape specification 2] and [lots of other characteristics], ..., are human.\"


If you don't draw simple boundaries around your experiences, you can't do inference with them.  So you try to describe \"art\" with intensional definitions like \"that which is intended to inspire any complex emotion for the sake of inspiring it\", rather than just pointing at a long list of things that are, or aren't art. 


In fact, the above statement about \"how to carve reality at its joints\" is a bit chicken-and-eggish:  You can't assess the density of actual observations, until you've already done at least a little carving.  And the probability distribution comes from drawing the boundaries, not the other way around—if you already had the probability distribution, you'd have everything necessary for inference, so why would you bother drawing boundaries?


And this suggests another—yes, yet another—reason to be suspicious of the claim that \"you can define a word any way you like\".  When you consider the superexponential size of Conceptspace, it becomes clear that singling out one particular concept for consideration is an act of no small audacity—not just for us, but for any mind of bounded computing power.


Presenting us with the word \"wiggin\", defined as \"a black-haired green-eyed person\", without some reason for raising this particular concept to the level of our deliberate attention, is rather like a detective saying:  \"Well, I haven't the slightest shred of support one way or the other for who could've murdered those orphans... not even an intuition, mind you... but have we considered John Q. Wiffleheim of 1234 Norkle Rd as a suspect?\"

" } }, { "_id": "yLcuygFfMfrfK8KjF", "title": "Mutual Information, and Density in Thingspace", "pageUrl": "https://www.lesswrong.com/posts/yLcuygFfMfrfK8KjF/mutual-information-and-density-in-thingspace", "postedAt": "2008-02-23T19:14:34.000Z", "baseScore": 70, "voteCount": 53, "commentCount": 28, "url": null, "contents": { "documentId": "yLcuygFfMfrfK8KjF", "html": "

Suppose you have a system X that can be in any of 8 states, which are all equally probable (relative to your current state of knowledge), and a system Y that can be in any of 4 states, all equally probable.


The entropy of X, as defined yesterday, is 3 bits; we'll need to ask 3 yes-or-no questions to find out X's exact state.  The entropy of Y, as defined yesterday, is 2 bits; we have to ask 2 yes-or-no questions to find out Y's exact state.  This may seem obvious since 2^3 = 8 and 2^2 = 4, so 3 questions can distinguish 8 possibilities and 2 questions can distinguish 4 possibilities; but remember that if the possibilities were not all equally likely, we could use a more clever code to discover Y's state using e.g. 1.75 questions on average.  In this case, though, X's probability mass is evenly distributed over all its possible states, and likewise Y, so we can't use any clever codes.


What is the entropy of the combined system (X,Y)?


You might be tempted to answer, \"It takes 3 questions to find out X, and then 2 questions to find out Y, so it takes 5 questions total to find out the state of X and Y.\"


But what if the two variables are entangled, so that learning the state of Y tells us something about the state of X?



In particular, let's suppose that X and Y are either both odd, or both even.


Now if we receive a 3-bit message (ask 3 questions) and learn that X is in state 5, we know that Y is in state 1 or state 3, but not state 2 or state 4.  So the single additional question \"Is Y in state 3?\", answered \"No\", tells us the entire state of (X,Y):  X=X5, Y=Y1.  And we learned this with a total of 4 questions.


Conversely, if we learn that Y is in state 4 using two questions, it will take us only an additional two questions to learn whether X is in state 2, 4, 6, or 8.  Again, four questions to learn the state of the joint system.


The mutual information of two variables is defined as the difference between the entropy of the joint system and the entropy of the independent systems:  I(X;Y) = H(X) + H(Y) - H(X,Y).

\n

Here there is one bit of mutual information between the two systems:  Learning X tells us one bit of information about Y (cuts down the space of possibilities from 4 to 2, a factor-of-2 decrease in the volume) and learning Y tells us one bit of information about X (cuts down the possibility space from 8 to 4).

\n
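
If you like, you can check this arithmetic mechanically.  The following is a minimal sketch in Python (the helper names are mine, purely illustrative): it builds the parity-linked joint distribution of X and Y and confirms that H(X) + H(Y) - H(X,Y) = 3 + 2 - 4 = 1 bit.

from math import log2

def entropy(probs):
    # Shannon entropy in bits: -sum of p * log2(p).
    return -sum(p * log2(p) for p in probs if p > 0)

# X ranges over 1..8 and Y over 1..4; probability mass sits only on pairs
# that are both odd or both even.  The 16 allowed pairs are equally likely.
joint = {(x, y): 1/16 for x in range(1, 9) for y in range(1, 5)
         if x % 2 == y % 2}

p_x = [sum(p for (x, y), p in joint.items() if x == i) for i in range(1, 9)]
p_y = [sum(p for (x, y), p in joint.items() if y == j) for j in range(1, 5)]

print(entropy(p_x), entropy(p_y), entropy(joint.values()))    # 3.0 2.0 4.0
print(entropy(p_x) + entropy(p_y) - entropy(joint.values()))  # 1.0 bit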

What about when probability mass is not evenly distributed?  Yesterday, for example, we discussed the case in which Y had the probabilities 1/2, 1/4, 1/8, 1/8 for its four states.  Let us take this to be our probability distribution over Y, considered independently - if we saw Y, without seeing anything else, this is what we'd expect to see.   And suppose the variable Z has two states, 1 and 2, with probabilities 3/8 and 5/8 respectively.

\n

Then if and only if the joint distribution of Y and Z is as follows, there is zero mutual information between Y and Z:

\n
Z1Y1: 3/16    Z1Y2: 3/32    Z1Y3: 3/64    Z1Y4: 3/64
Z2Y1: 5/16    Z2Y2: 5/32    Z2Y3: 5/64    Z2Y4: 5/64
\n
\n

This distribution obeys the law:

\n
\n

P(Y,Z) = P(Y)P(Z)

\n
\n

For example, P(Z1Y2) = P(Z1)P(Y2) = 3/8 * 1/4 = 3/32.

\n

And observe that we can recover the marginal (independent) probabilities of Y and Z just by looking at the joint distribution:

\n
\n

P(Y1) = total probability of all the different ways Y1 can happen
= P(Z1Y1) + P(Z2Y1)
= 3/16 + 5/16
= 1/2.

\n
\n

So, just by inspecting the joint distribution, we can determine whether the marginal variables Y and Z are independent; that is, whether the joint distribution factors into the product of the marginal distributions; whether, for all Y and Z, P(Y,Z) = P(Y)P(Z).

\n
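
Here is the same check as a short Python sketch (illustrative names only): recover the marginals by summing the joint, then test whether every cell equals the product of the marginals.

from itertools import product

p_y = {1: 1/2, 2: 1/4, 3: 1/8, 4: 1/8}
p_z = {1: 3/8, 2: 5/8}

# The factorized joint distribution from the table above, keyed as (z, y).
joint = {(z, y): p_z[z] * p_y[y] for z, y in product(p_z, p_y)}

def marginal(joint, axis):
    # Sum the joint over the other variable; axis 0 gives P(Z), axis 1 gives P(Y).
    out = {}
    for key, p in joint.items():
        out[key[axis]] = out.get(key[axis], 0.0) + p
    return out

m_z, m_y = marginal(joint, 0), marginal(joint, 1)
print(m_z, m_y)  # {1: 0.375, 2: 0.625} and {1: 0.5, 2: 0.25, 3: 0.125, 4: 0.125}
print(all(abs(p - m_z[z] * m_y[y]) < 1e-12 for (z, y), p in joint.items()))  # True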

This last is significant because, by Bayes's Rule:

\n
\n

P(Yi,Zj) = P(Yi)P(Zj)
P(Yi,Zj)/P(Zj) = P(Yi)
P(Yi|Zj) = P(Yi)

\n
\n

In English, \"After you learn Zj, your belief about Yi is just what it was before.\"

\n

So when the distribution factorizes - when P(Y,Z) = P(Y)P(Z) - this is equivalent to \"Learning about Y never tells us anything about Z or vice versa.\"

\n

From which you might suspect, correctly, that there is no mutual information between Y and Z.  Where there is no mutual information, there is no Bayesian evidence, and vice versa.

\n

Suppose that in the distribution YZ above, we treated each possible combination of Y and Z as a separate event—so that the distribution YZ would have a total of 8 possibilities, with the probabilities shown—and then we calculated the entropy of the distribution YZ the same way we would calculate the entropy of any distribution:

\n
\n

-(3/16 log2(3/16) + 3/32 log2(3/32) + 3/64 log2(3/64) + ... + 5/64 log2(5/64))

\n
\n

You would end up with the same total you would get if you separately calculated the entropy of Y plus the entropy of Z.  There is no mutual information between the two variables, so our uncertainty about the joint system is not any less than our uncertainty about the two systems considered separately.  (I am not showing the calculations, but you are welcome to do them; and I am not showing the proof that this is true in general, but you are welcome to Google on \"Shannon entropy\" and \"mutual information\".)

\n
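
(If you'd rather not grind through the sum by hand, a few lines of Python will do it.  A sketch, with an illustrative helper name:)

from math import log2

def entropy(probs):
    # -sum of p * log2(p), in bits.
    return -sum(p * log2(p) for p in probs if p > 0)

p_y = [1/2, 1/4, 1/8, 1/8]
p_z = [3/8, 5/8]
joint = [py * pz for py in p_y for pz in p_z]  # the 8 cells of the table

print(entropy(joint))               # ~2.7044
print(entropy(p_y) + entropy(p_z))  # ~2.7044 as well: 1.75 + ~0.9544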

What if the joint distribution doesn't factorize?  For example:

\n
Z1Y1: 12/64    Z1Y2: 8/64    Z1Y3: 1/64    Z1Y4: 3/64
Z2Y1: 20/64    Z2Y2: 8/64    Z2Y3: 7/64    Z2Y4: 5/64
\n
\n

If you add up the joint probabilities to get marginal probabilities, you should find that P(Y1) = 1/2, P(Z1) = 3/8, and so on - the marginal probabilities are the same as before.

\n

But the joint probabilities do not always equal the product of the marginal probabilities.  For example, the probability P(Z1Y2) equals 8/64, where P(Z1)P(Y2) would equal 3/8 * 1/4 = 6/64.  That is, the probability of running into Z1Y2 together, is greater than you'd expect based on the probabilities of running into Z1 or Y2 separately.

\n

Which in turn implies:

\n
\n

P(Z1Y2) > P(Z1)P(Y2)
P(Z1Y2)/P(Y2) > P(Z1)
P(Z1|Y2) > P(Z1)

\n
\n

Since there's an \"unusually high\" probability for P(Z1Y2) - defined as a probability higher than the marginal probabilities would indicate by default - it follows that observing Y2 is evidence which increases the probability of Z1.  And by a symmetrical argument, observing Z1 must favor Y2.

\n

As there are at least some values of Y that tell us about Z (and vice versa), there must be mutual information between the two variables; and so you will find—I am confident, though I haven't actually checked—that calculating the entropy of YZ yields less total uncertainty than the sum of the independent entropies of Y and Z:  H(Y,Z) = H(Y) + H(Z) - I(Y;Z), with all quantities necessarily nonnegative.

\n
\n
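
For those who want to run the unchecked check, this sketch (illustrative names again) computes the mutual information of the table above directly from the definition:

from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

# The non-factorizing joint distribution, keyed as (z, y).
joint = {(1, 1): 12/64, (1, 2): 8/64, (1, 3): 1/64, (1, 4): 3/64,
         (2, 1): 20/64, (2, 2): 8/64, (2, 3): 7/64, (2, 4): 5/64}

p_z = [sum(p for (z, y), p in joint.items() if z == i) for i in (1, 2)]
p_y = [sum(p for (z, y), p in joint.items() if y == j) for j in (1, 2, 3, 4)]

h_joint = entropy(joint.values())
h_separate = entropy(p_y) + entropy(p_z)
print(h_joint, h_separate)   # ~2.6645 < ~2.7044
print(h_separate - h_joint)  # I(Y;Z) ~ 0.04 bits: small, but positive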

(I digress here to remark that the symmetry of the expression for the mutual information shows that Y must tell us as much about Z, on average, as Z tells us about Y.  I leave it as an exercise to the reader to reconcile this with anything they were taught in logic class about how, if all ravens are black, being allowed to reason Raven(x)->Black(x) doesn't mean you're allowed to reason Black(x)->Raven(x).  How different seem the symmetrical probability flows of the Bayesian, from the sharp lurches of logic—even though the latter is just a degenerate case of the former.)

\n
\n

\"But,\" you ask, \"what has all this to do with the proper use of words?\"

\n

In Empty Labels and then Replace the Symbol with the Substance, we saw the technique of replacing a word with its definition - the example being given:

\n
\n

All [mortal, ~feathers, bipedal] are mortal.
Socrates is a [mortal, ~feathers, bipedal].
Therefore, Socrates is mortal.

\n
\n

Why, then, would you even want to have a word for \"human\"?  Why not just say \"Socrates is a mortal featherless biped\"?

\n

Because it's helpful to have shorter words for things that you encounter often.  If your code for describing single properties is already efficient, then there will not be an advantage to having a special word for a conjunction - like \"human\" for \"mortal featherless biped\" - unless things that are mortal and featherless and bipedal, are found more often than the marginal probabilities would lead you to expect.

\n

In efficient codes, word length corresponds to probability—so the code for Z1Y2 will be just as long as the code for Z1 plus the code for Y2, unless P(Z1Y2) > P(Z1)P(Y2), in which case the code for the word can be shorter than the codes for its parts.

\n
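
To see the arithmetic behind that claim: in an ideal code, the codeword for an event of probability p is about -log2(p) bits long.  A sketch (the numbers come from the non-factorizing table above):

from math import log2

def ideal_length(p):
    # Ideal codeword length, in bits, for an event of probability p.
    return -log2(p)

p_z1, p_y2 = 3/8, 1/4
p_joint = 8/64  # P(Z1Y2); this exceeds P(Z1)P(Y2) = 6/64

print(ideal_length(p_z1) + ideal_length(p_y2))  # ~3.415 bits: say Z1, then Y2
print(ideal_length(p_joint))                    # 3.0 bits: one word for Z1Y2

The conjunction earns its shorter codeword exactly because the two properties co-occur more often than independence would predict.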

And this in turn corresponds exactly to the case where we can infer some of the properties of the thing, from seeing its other properties.  It must be more likely than the default that featherless bipedal things will also be mortal.

\n

Of course the word \"human\" really describes many, many more properties - when you see a human-shaped entity that talks and wears clothes, you can infer whole hosts of biochemical and anatomical and cognitive facts about it.  To replace the word \"human\" with a description of everything we know about humans would require us to spend an inordinate amount of time talking.  But this is true only because a featherless talking biped is far more likely than default to be poisonable by hemlock, or have broad nails, or be overconfident.

\n

Having a word for a thing, rather than just listing its properties, is a more compact code precisely in those cases where we can infer some of those properties from the other properties.  (With the exception perhaps of very primitive words, like \"red\", that we would use to send an entirely uncompressed description of our sensory experiences.  But by the time you encounter a bug, or even a rock, you're dealing with nonsimple property collections, far above the primitive level.)

\n

So having a word \"wiggin\" for green-eyed black-haired people, is more useful than just saying \"green-eyed black-haired person\", precisely when:

\n
  1. Green-eyed people are more likely than average to be black-haired (and vice versa), meaning that we can probabilistically infer green eyes from black hair or vice versa; or
  2. Wiggins share other properties that can be inferred at greater-than-default probability.  In this case we have to separately observe the green eyes and black hair; but then, after observing both these properties independently, we can probabilistically infer other properties (like a taste for ketchup).
\n

One may even consider the act of defining a word as a promise to this effect.  Telling someone, \"I define the word 'wiggin' to mean a person with green eyes and black hair\", by Gricean implication, asserts that the word \"wiggin\" will somehow help you make inferences / shorten your messages.

\n

If green eyes and black hair have no greater than default probability of being found together, nor does any other property occur at greater than default probability along with them, then the word \"wiggin\" is a lie:  The word claims that certain people are worth distinguishing as a group, but they're not.

\n

In this case the word \"wiggin\" does not help describe reality more compactly—it is not defined by someone sending the shortest message—it has no role in the simplest explanation.  Equivalently, the word \"wiggin\" will be of no help to you in doing any Bayesian inference.  Even if you do not call the word a lie, it is surely an error.

\n

And the way to carve reality at its joints, is to draw your boundaries around concentrations of unusually high probability density in Thingspace.

" } }, { "_id": "soQX8yXLbKy7cFvy8", "title": "Entropy, and Short Codes", "pageUrl": "https://www.lesswrong.com/posts/soQX8yXLbKy7cFvy8/entropy-and-short-codes", "postedAt": "2008-02-23T03:16:50.000Z", "baseScore": 88, "voteCount": 80, "commentCount": 27, "url": null, "contents": { "documentId": "soQX8yXLbKy7cFvy8", "html": "

Suppose you have a system X that's equally likely to be in any of 8 possible states:

\n
\n

{X1, X2, X3, X4, X5, X6, X7, X8}

\n
\n

There's an extraordinarily ubiquitous quantity—in physics, mathematics, and even biology—called entropy; and the entropy of X is 3 bits.  This means that, on average, we'll have to ask 3 yes-or-no questions to find out X's value.  For example, someone could tell us X's value using this code:

\n
X1: 001   X2: 010   X3: 011   X4: 100
X5: 101   X6: 110   X7: 111   X8: 000
\n
\n

So if I asked \"Is the first symbol 1?\" and heard \"yes\", then asked \"Is the second symbol 1?\" and heard \"no\", then asked \"Is the third symbol 1?\" and heard \"no\", I would know that X was in state 4.

\n

Now suppose that the system Y has four possible states with the following probabilities:

\n
Y1: 1/2 (50%)    Y2: 1/4 (25%)    Y3: 1/8 (12.5%)    Y4: 1/8 (12.5%)
\n
\n

Then the entropy of Y would be 1.75 bits, meaning that we can find out its value by asking 1.75 yes-or-no questions, on average.

\n

\n

What does it mean to talk about asking one and three-fourths of a question?  Imagine that we designate the states of Y using the following code:

\n
Y1: 1    Y2: 01    Y3: 001    Y4: 000
\n
\n

First you ask, \"Is the first symbol 1?\"  If the answer is \"yes\", you're done:  Y is in state 1.  This happens half the time, so 50% of the time, it takes 1 yes-or-no question to find out Y's state.

\n

Suppose that instead the answer is \"No\".  Then you ask, \"Is the second symbol 1?\"  If the answer is \"yes\", you're done:  Y is in state 2.  Y is in state 2 with probability 1/4, and each time Y is in state 2 we discover this fact using two yes-or-no questions, so 25% of the time it takes 2 questions to discover Y's state.

\n

If the answer is \"No\" twice in a row, you ask \"Is the third symbol 1?\"  If \"yes\", you're done and Y is in state 3; if \"no\", you're done and Y is in state 4.  The 1/8 of the time that Y is in state 3, it takes three questions; and the 1/8 of the time that Y is in state 4, it takes three questions.

\n
\n

(1/2 * 1) + (1/4 * 2) + (1/8 * 3) + (1/8 * 3)
= 0.5 + 0.5 + 0.375 + 0.375
= 1.75.

\n
\n

The general formula for the entropy of a system S is the sum, over all Si, of -p(Si)*log2(p(Si)).

\n

For example, the log (base 2) of 1/8 is -3.  So -(1/8 * -3) = 0.375 is the contribution of state S4 to the total entropy:  1/8 of the time, we have to ask 3 questions.

\n
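
In code, the formula is nearly a one-liner.  This sketch (with an illustrative helper name) recovers both of the entropies computed above:

from math import log2

def entropy(probs):
    # Sum over all states of -p * log2(p): the expected number of questions.
    return sum(-p * log2(p) for p in probs if p > 0)

print(entropy([1/8] * 8))             # 3.0  -> the 8-state system X
print(entropy([1/2, 1/4, 1/8, 1/8]))  # 1.75 -> the system Y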

You can't always devise a perfect code for a system, but if you have to tell someone the state of arbitrarily many copies of S in a single message, you can get arbitrarily close to a perfect code.  (Google \"arithmetic coding\" for a simple method.)

\n

Now, you might ask:  \"Why not use the code 10 for Y4, instead of 000?  Wouldn't that let us transmit messages more quickly?\"

\n

But if you use the code 10 for Y4, then when someone answers \"Yes\" to the question \"Is the first symbol 1?\", you won't know yet whether the system state is Y1 (1) or Y4 (10).  In fact, if you change the code this way, the whole system falls apart—because if you hear \"1001\", you don't know if it means \"Y4, followed by Y2\" or \"Y1, followed by Y3.\"

\n
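
The property doing the work is that no codeword in the original code is a prefix of any other, so a receiver can peel off symbols greedily.  A sketch of such a decoder (illustrative, not from the original):

def decode(bits, code):
    # Greedy decoding of a prefix-free code: accumulate bits until they
    # match a codeword, emit the corresponding symbol, and start over.
    inverse = {v: k for k, v in code.items()}
    symbols, buf = [], ''
    for b in bits:
        buf += b
        if buf in inverse:
            symbols.append(inverse[buf])
            buf = ''
    assert buf == '', 'message ended in the middle of a codeword'
    return symbols

code = {'Y1': '1', 'Y2': '01', 'Y3': '001', 'Y4': '000'}
print(decode('1001', code))  # ['Y1', 'Y3'] -- the only possible reading

# With Y4 = '10' instead, '1' is a prefix of '10', and '1001' could be read
# as Y4 followed by Y2, or as Y1 followed by Y3; greedy decoding breaks down.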

The moral is that short words are a conserved resource.

\n

The key to creating a good code—a code that transmits messages as compactly as possible—is to reserve short words for things that you'll need to say frequently, and use longer words for things that you won't need to say as often.

When you take this art to its limit, the length of the message you need to describe something, corresponds exactly or almost exactly to its probability.  This is the Minimum Description Length or Minimum Message Length formalization of Occam's Razor.

\n

And so even the labels that we use for words are not quite arbitrary.  The sounds that we attach to our concepts can be better or worse, wiser or more foolish.  Even apart from considerations of common usage!

\n

I say all this, because the idea that \"You can X any way you like\" is a huge obstacle to learning how to X wisely.  \"It's a free country; I have a right to my own opinion\" obstructs the art of finding truth.  \"I can define a word any way I like\" obstructs the art of carving reality at its joints.  And even the sensible-sounding \"The labels we attach to words are arbitrary\" obstructs awareness of compactness.  Prosody too, for that matter—Tolkien once observed what a beautiful sound the phrase \"cellar door\" makes; that is the kind of awareness it takes to use language like Tolkien.

\n

The length of words also plays a nontrivial role in the cognitive science of language:

\n

Consider the phrases \"recliner\", \"chair\", and \"furniture\".  Recliner is a more specific category than chair; furniture is a more general category than chair.  But the vast majority of chairs have a common use—you use the same sort of motor actions to sit down in them, and you sit down in them for the same sort of purpose (to take your weight off your feet while you eat, or read, or type, or rest).  Recliners do not depart from this theme.  \"Furniture\", on the other hand, includes things like beds and tables which have different uses, and call up different motor functions, from chairs.

\n

In the terminology of cognitive psychology, \"chair\" is a basic-level category.

\n

People have a tendency to talk, and presumably think, at the basic level of categorization—to draw the boundary around \"chairs\", rather than around the more specific category \"recliner\", or the more general category \"furniture\".  People are more likely to say \"You can sit in that chair\" than \"You can sit in that recliner\" or \"You can sit in that furniture\".

\n

And it is no coincidence that the word for \"chair\" contains fewer syllables than either \"recliner\" or \"furniture\".  Basic-level categories, in general, tend to have short names; and nouns with short names tend to refer to basic-level categories.  Not a perfect rule, of course, but a definite tendency.  Frequent use goes along with short words; short words go along with frequent use.

\n

Or as Douglas Hofstadter put it, there's a reason why the English language uses \"the\" to mean \"the\" and \"antidisestablishmentarianism\" to mean \"antidisestablishmentarianism\" instead of antidisestablishmentarianism other way around.

" } }, { "_id": "d5NyJ2Lf6N22AD9PB", "title": "Where to Draw the Boundary?", "pageUrl": "https://www.lesswrong.com/posts/d5NyJ2Lf6N22AD9PB/where-to-draw-the-boundary", "postedAt": "2008-02-21T19:14:17.000Z", "baseScore": 86, "voteCount": 77, "commentCount": 53, "url": null, "contents": { "documentId": "d5NyJ2Lf6N22AD9PB", "html": "

The one comes to you and says:

\n
\n

Long have I pondered the meaning of the word \"Art\", and at last I've found what seems to me a satisfactory definition: \"Art is that which is designed for the purpose of creating a reaction in an audience.\"

\n
\n

Just because there's a word \"art\" doesn't mean that it has a meaning, floating out there in the void, which you can discover by finding the right definition.

\n

It feels that way, but it is not so.

\n

Wondering how to define a word means you're looking at the problem the wrong way—searching for the mysterious essence of what is, in fact, a communication signal.

\n

Now, there is a real challenge which a rationalist may legitimately attack, but the challenge is not to find a satisfactory definition of a word.  The real challenge can be played as a single-player game, without speaking aloud.  The challenge is figuring out which things are similar to each other—which things are clustered together—and sometimes, which things have a common cause.

\n

If you define \"eluctromugnetism\" to include lightning, include compasses, exclude light, and include Mesmer's \"animal magnetism\" (what we now call hypnosis), then you will have some trouble asking \"How does eluctromugnetism work?\"  You have lumped together things which do not belong together, and excluded others that would be needed to complete a set.  (This example is historically plausible; Mesmer came before Faraday.)

\n

We could say that eluctromugnetism is a wrong word, a boundary in thingspace that loops around and swerves through the clusters, a cut that fails to carve reality along its natural joints.

\n

\n

Figuring where to cut reality in order to carve along the joints—this is the problem worthy of a rationalist.  It is what people should be trying to do, when they set out in search of the floating essence of a word.

\n

And make no mistake: it is a scientific challenge to realize that you need a single word to describe breathing and fire.  So do not think to consult the dictionary editors, for that is not their job.

\n

What is \"art\"?  But there is no essence of the word, floating in the void.

\n

Perhaps you come to me with a long list of the things that you call \"art\" and \"not art\":

\n
\n

The Little Fugue in G Minor:  Art.
A punch in the nose:  Not art.
Escher's Relativity:  Art.
A flower:  Not art.
The Python programming language:  Art.
A cross floating in urine:  Not art.
Jack Vance's Tschai novels:  Art.
Modern Art:  Not art.

\n
\n

And you say to me:  \"It feels intuitive to me to draw this boundary, but I don't know why—can you find me an intension that matches this extension?  Can you give me a simple description of this boundary?\"

\n

So I reply:  \"I think it has to do with admiration of craftsmanship: work going in and wonder coming out.  What the included items have in common is the similar aesthetic emotions that they inspire, and the deliberate human effort that went into them with the intent of producing such an emotion.\"

\n

Is this helpful, or is it just cheating at Taboo?  I would argue that the list of which human emotions are or are not aesthetic is far more compact than the list of everything that is or isn't art.  You might be able to see those emotions lighting up an fMRI scan—I say this by way of emphasizing that emotions are not ethereal.

\n

But of course my definition of art is not the real point.  The real point is that you could well dispute either the intension or the extension of my definition.

\n

You could say, \"Aesthetic emotion is not what these things have in common; what they have in common is an intent to inspire any complex emotion for the sake of inspiring it.\"  That would be disputing my intension, my attempt to draw a curve through the data points.  You would say, \"Your equation may roughly fit those points, but it is not the true generating distribution.\"

\n

Or you could dispute my extension by saying, \"Some of these things do belong together—I can see what you're getting at—but the Python language shouldn't be on the list, and Modern Art should be.\"  (This would mark you as a gullible philistine, but you could argue it.)  Here, the presumption is that there is indeed an underlying curve that generates this apparent list of similar and dissimilar things—that there is a rhyme and reason, even though you haven't said yet where it comes from—but I have unwittingly lost the rhythm and included some data points from a different generator.

\n

Long before you know what it is that electricity and magnetism have in common, you might still suspect—based on surface appearances—that \"animal magnetism\" does not belong on the list.

\n

Once upon a time it was thought that the word \"fish\" included dolphins.  Now you could play the oh-so-clever arguer, and say, \"The list:  {Salmon, guppies, sharks, dolphins, trout} is just a list—you can't say that a list is wrong.  I can prove in set theory that this list exists.  So my definition of fish, which is simply this extensional list, cannot possibly be 'wrong' as you claim.\"

\n

Or you could stop playing nitwit games and admit that dolphins don't belong on the fish list.

\n

You come up with a list of things that feel similar, and take a guess at why this is so.  But when you finally discover what they really have in common, it may turn out that your guess was wrong.  It may even turn out that your list was wrong.

\n

You cannot hide behind a comforting shield of correct-by-definition.  Both extensional definitions and intensional definitions can be wrong, can fail to carve reality at the joints.

\n

Categorizing is a guessing endeavor, in which you can make mistakes; so it's wise to be able to admit, from a theoretical standpoint, that your definition-guesses can be \"mistaken\".

" } }, { "_id": "cFzC996D7Jjds3vS9", "title": "Arguing \"By Definition\"", "pageUrl": "https://www.lesswrong.com/posts/cFzC996D7Jjds3vS9/arguing-by-definition", "postedAt": "2008-02-20T23:37:56.000Z", "baseScore": 88, "voteCount": 77, "commentCount": 41, "url": null, "contents": { "documentId": "cFzC996D7Jjds3vS9", "html": "

\"This plucked chicken has two legs and no feathers—therefore, by definition, it is a human!\"

\n

When people argue definitions, they usually start with some visible, known, or at least widely believed set of characteristics; then pull out a dictionary, and point out that these characteristics fit the dictionary definition; and so conclude, \"Therefore, by definition, atheism is a religion!\"

\n

But visible, known, widely believed characteristics are rarely the real point of a dispute.  Just the fact that someone thinks Socrates's two legs are evident enough to make a good premise for the argument, \"Therefore, by definition, Socrates is human!\" indicates that bipedalism probably isn't really what's at stake—or the listener would reply, \"Whaddaya mean Socrates is bipedal?  That's what we're arguing about in the first place!\"

\n

Now there is an important sense in which we can legitimately move from evident characteristics to not-so-evident ones.  You can, legitimately, see that Socrates is human-shaped, and predict his vulnerability to hemlock.  But this probabilistic inference does not rely on dictionary definitions or common usage; it relies on the universe containing empirical clusters of similar things.

\n

This cluster structure is not going to change depending on how you define your words.  Even if you look up the dictionary definition of \"human\" and it says \"all featherless bipeds except Socrates\", that isn't going to change the actual degree to which Socrates is similar to the rest of us featherless bipeds.

\n

\n

When you are arguing correctly from cluster structure, you'll say something like, \"Socrates has two arms, two feet, a nose and tongue, speaks fluent Greek, uses tools, and in every aspect I've been able to observe him, seems to have every major and minor property that characterizes Homo sapiens; so I'm going to guess that he has human DNA, human biochemistry, and is vulnerable to hemlock just like all other Homo sapiens in whom hemlock has been clinically tested for lethality.\"

\n

And suppose I reply, \"But I saw Socrates out in the fields with some herbologists; I think they were trying to prepare an antidote.  Therefore I don't expect Socrates to keel over after he drinks the hemlock—he will be an exception to the general behavior of objects in his cluster: they did not take an antidote, and he did.\"

\n

Now there's not much point in arguing over whether Socrates is \"human\" or not.  The conversation has to move to a more detailed level, poke around inside the details that make up the \"human\" category—talk about human biochemistry, and specifically, the neurotoxic effects of coniine.

\n

If you go on insisting, \"But Socrates is a human and humans, by definition, are mortal!\" then what you're really trying to do is blur out everything you know about Socrates except the fact of his humanity—insist that the only correct prediction is the one you would make if you knew nothing about Socrates except that he was human.

\n

Which is like insisting that a coin is 50% likely to be showing heads or tails, because it is a \"fair coin\", after you've actually looked at the coin and it's showing heads.  It's like insisting that Frodo has ten fingers, because most hobbits have ten fingers, after you've already looked at his hands and seen nine fingers.  Naturally this is illegal under Bayesian probability theory:  You can't just refuse to condition on new evidence.

\n

And you can't just keep one categorization and make estimates based on that, while deliberately throwing out everything else you know.

\n

Not every piece of new evidence makes a significant difference, of course.  If I see that Socrates has nine fingers, this isn't going to noticeably change my estimate of his vulnerability to hemlock, because I'll expect that the way Socrates lost his finger didn't change the rest of his biochemistry.  And this is true, whether or not the dictionary's definition says that human beings have ten fingers.  The legal inference is based on the cluster structure of the environment, and the causal structure of biology; not what the dictionary editor writes down, nor even \"common usage\".

\n

Now ordinarily, when you're doing this right—in a legitimate way—you just say, \"The coniine alkaloid found in hemlock produces muscular paralysis in humans, resulting in death by asphyxiation.\"  Or more simply, \"Humans are vulnerable to hemlock.\"  That's how it's usually said in a legitimate argument.

\n

When would someone feel the need to strengthen the argument with the emphatic phrase \"by definition\"?  (I.e. \"Humans are vulnerable to hemlock by definition!\")  Why, when the inferred characteristic has been called into doubt—Socrates has been seen consulting herbologists—and so the speaker feels the need to tighten the vise of logic.

\n

So when you see \"by definition\" used like this, it usually means:  \"Forget what you've heard about Socrates consulting herbologists—humans, by definition, are mortal!\"

\n

People feel the need to squeeze the argument onto a single course by saying \"Any P, by definition, has property Q!\", on exactly those occasions when they see, and prefer to dismiss out of hand, additional arguments that call into doubt the default inference based on clustering.

\n

So too with the argument \"X, by definition, is a Y!\"  E.g., \"Atheists believe that God doesn't exist; therefore atheists have beliefs about God, because a negative belief is still a belief; therefore atheism asserts answers to theological questions; therefore atheism is, by definition, a religion.\"

\n

You wouldn't feel the need to say, \"Hinduism, by definition, is a religion!\" because, well, of course Hinduism is a religion.  It's not just a religion \"by definition\", it's, like, an actual religion.

\n

Atheism does not resemble the central members of the \"religion\" cluster, so if it wasn't for the fact that atheism is a religion by definition, you might go around thinking that atheism wasn't a religion.  That's why you've got to crush all opposition by pointing out that \"Atheism is a religion\" is true by definition, because it isn't true any other way.

\n

Which is to say:  People insist that \"X, by definition, is a Y!\" on those occasions when they're trying to sneak in a connotation of Y that isn't directly in the definition, and X doesn't look all that much like other members of the Y cluster.

\n

Over the last thirteen years I've been keeping track of how often this phrase is used correctly versus incorrectly—though not with literal statistics, I fear.  But eyeballing suggests that using the phrase by definition, anywhere outside of math, is among the most alarming signals of flawed argument I've ever found.  It's right up there with \"Hitler\", \"God\", \"absolutely certain\" and \"can't prove that\".

\n

This heuristic of failure is not perfect—the first time I ever spotted a correct usage outside of math, it was by Richard Feynman; and since then I've spotted more.  But you're probably better off just deleting the phrase \"by definition\" from your vocabulary—and always on any occasion where you might be tempted to say it in italics or followed with an exclamation mark.  That's a bad idea by definition!

" } }, { "_id": "yuKaWPRTxZoov4z8K", "title": "Sneaking in Connotations", "pageUrl": "https://www.lesswrong.com/posts/yuKaWPRTxZoov4z8K/sneaking-in-connotations", "postedAt": "2008-02-19T19:41:37.000Z", "baseScore": 86, "voteCount": 71, "commentCount": 23, "url": null, "contents": { "documentId": "yuKaWPRTxZoov4z8K", "html": "

Yesterday, we saw that in Japan, blood types have taken the place of astrology—if your blood type is AB, for example, you're supposed to be \"cool and controlled\".

\n

So suppose we decided to invent a new word, \"wiggin\", and defined this word to mean people with green eyes and black hair—

\n
\n

        A green-eyed man with black hair walked into a restaurant.
      \"Ha,\" said Danny, watching from a nearby table, \"did you see that?  A wiggin just walked into the room.  Bloody wiggins.  Commit all sorts of crimes, they do.\"
        His sister Erda sighed.  \"You haven't seen him commit any crimes, have you, Danny?\"
      \"Don't need to,\" Danny said, producing a dictionary.  \"See, it says right here in the Oxford English Dictionary.  'Wiggin.  (1)  A person with green eyes and black hair.'  He's got green eyes and black hair, he's a wiggin.  You're not going to argue with the Oxford English Dictionary, are you?  By definition, a green-eyed black-haired person is a wiggin.\"
      \"But you called him a wiggin,\" said Erda.  \"That's a nasty thing to say about someone you don't even know.  You've got no evidence that he puts too much ketchup on his burgers, or that as a kid he used his slingshot to launch baby squirrels.\"
        \"But he is a wiggin,\" Danny said patiently.  \"He's got green eyes and black hair, right?  Just you watch, as soon as his burger arrives, he's reaching for the ketchup.\"

\n
\n

\n

The human mind passes from observed characteristics to inferred characteristics via the medium of words.  In \"All humans are mortal, Socrates is a human, therefore Socrates is mortal\", the observed characteristics are Socrates's clothes, speech, tool use, and generally human shape; the categorization is \"human\"; the inferred characteristic is poisonability by hemlock.

\n

Of course there's no hard distinction between \"observed characteristics\" and \"inferred characteristics\".  If you hear someone speak, they're probably shaped like a human, all else being equal.  If you see a human figure in the shadows, then ceteris paribus it can probably speak.

\n

And yet some properties do tend to be more inferred than observed. You're more likely to decide that someone is human, and will therefore burn if exposed to open flame, than carry through the inference the other way around.

\n

If you look in a dictionary for the definition of \"human\", you're more likely to find characteristics like \"intelligence\" and \"featherless biped\"—characteristics that are useful for quickly eyeballing what is and isn't a human—rather than the ten thousand connotations, from vulnerability to hemlock, to overconfidence, that we can infer from someone's being human.  Why?  Perhaps dictionaries are intended to let you match up labels to similarity groups, and so are designed to quickly isolate clusters in thingspace.  Or perhaps the big, distinguishing characteristics are the most salient, and therefore first to pop into a dictionary editor's mind.  (I'm not sure how aware dictionary editors are of what they really do.)

\n

But the upshot is that when Danny pulls out his OED to look up \"wiggin\", he sees listed only the first-glance characteristics that distinguish a wiggin:  Green eyes and black hair.  The OED doesn't list the many minor connotations that have come to attach to this term, such as criminal proclivities, culinary peculiarities, and some unfortunate childhood activities.

\n

How did those connotations get there in the first place?  Maybe there was once a famous wiggin with those properties.  Or maybe someone made stuff up at random, and wrote a series of bestselling books about it (The Wiggin, Talking to Wiggins, Raising Your Little Wiggin, Wiggins in the Bedroom). Maybe even the wiggins believe it now, and act accordingly.  As soon as you call some people \"wiggins\", the word will begin acquiring connotations.

\n

But remember the Parable of Hemlock: If we go by the logical class definitions, we can never class Socrates as a \"human\" until after we observe him to be mortal.  Whenever someone pulls out a dictionary, they're generally trying to sneak in a connotation, not the actual definition written down in the dictionary.

\n

After all, if the only meaning of the word \"wiggin\" is \"green-eyed black-haired person\", then why not just call those people \"green-eyed black-haired people\"?  And if you're wondering whether someone is a ketchup-reacher, why not ask directly, \"Is he a ketchup-reacher?\" rather than \"Is he a wiggin?\"  (Note substitution of substance for symbol.)

\n

Oh, but arguing the real question would require work. You'd have to actually watch the wiggin to see if he reached for the ketchup.  Or maybe see if you can find statistics on how many green-eyed black-haired people actually like ketchup.  At any rate, you wouldn't be able to do it sitting in your living room with your eyes closed.  And people are lazy.  They'd rather argue \"by definition\", especially since they think \"you can define a word any way you like\".

\n

But of course the real reason they care whether someone is a \"wiggin\" is a connotation—a feeling that comes along with the word—that isn't in the definition they claim to use.

\n

Imagine Danny saying, \"Look, he's got green eyes and black hair.  He's a wiggin!  It says so right there in the dictionary!—therefore, he's got black hair.  Argue with that, if you can!\"

\n

Doesn't have much of a triumphant ring to it, does it?  If the real point of the argument actually was contained in the dictionary definition—if the argument genuinely was logically valid—then the argument would feel empty; it would either say nothing new, or beg the question.

\n

It's only the attempt to smuggle in connotations not explicitly listed in the definition, that makes anyone feel they can score a point that way.

" } }, { "_id": "veN86cBhoe7mBxXLk", "title": "Categorizing Has Consequences", "pageUrl": "https://www.lesswrong.com/posts/veN86cBhoe7mBxXLk/categorizing-has-consequences", "postedAt": "2008-02-19T01:40:56.000Z", "baseScore": 76, "voteCount": 64, "commentCount": 13, "url": null, "contents": { "documentId": "veN86cBhoe7mBxXLk", "html": "

Among the many genetic variations and mutations you carry in your genome, there are a very few alleles you probably know—including those determining your blood type: the presence or absence of the A, B, and + antigens.  If you receive a blood transfusion containing an antigen you don't have, it will trigger an allergic reaction.  It was Karl Landsteiner's discovery of this fact, and how to test for compatible blood types, that made it possible to transfuse blood without killing the patient.  (1930 Nobel Prize in Medicine.)  Also, if a mother with blood type A (for example) bears a child with blood type A+, the mother may acquire an allergic reaction to the + antigen; if she has another child with blood type A+, the child will be in danger, unless the mother takes an allergic suppressant during pregnancy.  Thus people learn their blood types before they marry.

\n

Oh, and also: people with blood type A are earnest and creative, while people with blood type B are wild and cheerful.  People with type O are agreeable and sociable, while people with type AB are cool and controlled. (You would think that O would be the absence of A and B, while AB would just be A plus B, but no...)  All this, according to the Japanese blood type theory of personality.  It would seem that blood type plays the role in Japan that astrological signs play in the West, right down to blood type horoscopes in the daily newspaper.

\n

This fad is especially odd because blood types have never been mysterious, not in Japan and not anywhere.  We only know blood types even exist thanks to Karl Landsteiner.  No mystic witch doctor, no venerable sorcerer, ever said a word about blood types; there are no ancient, dusty scrolls to shroud the error in the aura of antiquity.  If the medical profession claimed tomorrow that it had all been a colossal hoax, we layfolk would not have one scrap of evidence from our unaided senses to contradict them.

\n

There's never been a war between blood types.  There's never even been a political conflict between blood types.  The stereotypes must have arisen strictly from the mere existence of the labels.

\n

\n

Now, someone is bound to point out that this is a story of categorizing humans.  Does the same thing happen if you categorize plants, or rocks, or office furniture?  I can't recall reading about such an experiment, but of course, that doesn't mean one hasn't been done.  (I'd expect the chief difficulty of doing such an experiment would be finding a protocol that didn't mislead the subjects into thinking that, since the label was given you, it must be significant somehow.)  So while I don't mean to update on imaginary evidence, I would predict a positive result for the experiment:  I would expect them to find that mere labeling had power over all things, at least in the human imagination.

\n

You can see this in terms of similarity clusters: once you draw a boundary around a group, the mind starts trying to harvest similarities from the group.  And unfortunately the human pattern-detectors seem to operate in such overdrive that we see patterns whether they're there or not; a weakly negative correlation can be mistaken for a strong positive one with a bit of selective memory.

\n

You can see this in terms of neural algorithms: creating a name for a set of things is like allocating a subnetwork to find patterns in them.

\n

You can see this in terms of a compression fallacy: things given the same name end up dumped into the same mental bucket, blurring them together into the same point on the map.

\n

Or you can see this in terms of the boundless human ability to make stuff up out of thin air and believe it because no one can prove it's wrong.  As soon as you name the category, you can start making up stuff about it.  The named thing doesn't have to be perceptible; it doesn't have to exist; it doesn't even have to be coherent.

\n

And no, it's not just Japan:  Here in the West, a blood-type-based diet book called Eat Right 4 Your Type was a bestseller.

\n

Any way you look at it, drawing a boundary in thingspace is not a neutral act.  Maybe a more cleanly designed, more purely Bayesian AI could ponder an arbitrary class and not be influenced by it.  But you, a human, do not have that option.  Categories are not static things in the context of a human brain; as soon as you actually think of them, they exert force on your mind.  One more reason not to believe you can define a word any way you like.

" } }, { "_id": "y5MxoeacRKKM3KQth", "title": "Fallacies of Compression", "pageUrl": "https://www.lesswrong.com/posts/y5MxoeacRKKM3KQth/fallacies-of-compression", "postedAt": "2008-02-17T18:51:59.000Z", "baseScore": 104, "voteCount": 90, "commentCount": 27, "url": null, "contents": { "documentId": "y5MxoeacRKKM3KQth", "html": "

\"The map is not the territory,\" as the saying goes.  The only life-size, atomically detailed, 100% accurate map of California is California.  But California has important regularities, such as the shape of its highways, that can be described using vastly less information—not to mention vastly less physical material—than it would take to describe every atom within the state borders.  Hence the other saying:  \"The map is not the territory, but you can't fold up the territory and put it in your glove compartment.\"

\n

A paper map of California, at a scale of 10 kilometers to 1 centimeter (a million to one), doesn't have room to show the distinct position of two fallen leaves lying a centimeter apart on the sidewalk.  Even if the map tried to show the leaves, the leaves would appear as the same point on the map; or rather the map would need a feature size of 10 nanometers, which is a finer resolution than most book printers handle, not to mention human eyes.

\n

Reality is very large—just the part we can see is billions of lightyears across.  But your map of reality is written on a few pounds of neurons, folded up to fit inside your skull.  I don't mean to be insulting, but your skull is tiny, comparatively speaking.

\n

Inevitably, then, certain things that are distinct in reality, will be compressed into the same point on your map.

\n

But what this feels like from inside is not that you say, \"Oh, look, I'm compressing two things into one point on my map.\"  What it feels like from inside is that there is just one thing, and you are seeing it.

\n

\n

A sufficiently young child, or a sufficiently ancient Greek philosopher, would not know that there were such things as \"acoustic vibrations\" or \"auditory experiences\".  There would just be a single thing that happened when a tree fell; a single event called \"sound\".

\n

To realize that there are two distinct events, underlying one point on your map, is an essentially scientific challenge—a big, difficult scientific challenge.

\n

Sometimes fallacies of compression result from confusing two known things under the same label—you know about acoustic vibrations, and you know about auditory processing in brains, but you call them both \"sound\" and so confuse yourself.  But the more dangerous fallacy of compression arises from having no idea whatsoever that two distinct entities even exist.  There is just one mental folder in the filing system, labeled \"sound\", and everything thought about \"sound\" drops into that one folder.  It's not that there are two folders with the same label; there's just a single folder.  By default, the map is compressed; why would the brain create two mental buckets where one would serve?

\n

Or think of a mystery novel in which the detective's critical insight is that one of the suspects has an identical twin.  In the course of the detective's ordinary work, his job is just to observe that Carol is wearing red, that she has black hair, that her sandals are leather—but all these are facts about Carol.  It's easy enough to question an individual fact, like WearsRed(Carol) or BlackHair(Carol).  Maybe BlackHair(Carol) is false.  Maybe Carol dyes her hair.  Maybe BrownHair(Carol).  But it takes a subtler detective to wonder if the Carol in WearsRed(Carol) and BlackHair(Carol)—the Carol file into which his observations drop—should be split into two files.  Maybe there are two Carols, so that the Carol who wore red is not the same woman as the Carol who had black hair.

\n

Here it is the very act of creating two different buckets that is the stroke of genius insight.  'Tis easier to question one's facts than one's ontology.

\n

The map of reality contained in a human brain, unlike a paper map of California, can expand dynamically when we write down more detailed descriptions.  But what this feels like from inside is not so much zooming in on a map, as fissioning an indivisible atom—taking one thing (it felt like one thing) and splitting it into two or more things.

\n

Often this manifests in the creation of new words, like \"acoustic vibrations\" and \"auditory experiences\" instead of just \"sound\".  Something about creating the new name seems to allocate the new bucket.  The detective is liable to start calling one of his suspects \"Carol-2\" or \"the Other Carol\" almost as soon as he realizes that there are two of them.

\n

But expanding the map isn't always as simple as generating new city names.  It is a stroke of scientific insight to realize that such things as acoustic vibrations, or auditory experiences, even exist.

\n

The obvious modern-day illustration would be words like \"intelligence\" or \"consciousness\".  Every now and then one sees a press release claiming that researchers have \"explained consciousness\" because a team of neurologists investigated a 40Hz electrical rhythm that might have something to do with cross-modality binding of sensory information, or because they investigated the reticular activating system that keeps humans awake.  That's an extreme example, and the usual failures are more subtle, but they are of the same kind.  The part of \"consciousness\" that people find most interesting is reflectivity, self-awareness, realizing that the person I see in the mirror is \"me\"; that and the hard problem of subjective experience as distinguished by Chalmers.  We also label \"conscious\" the state of being awake, rather than asleep, in our daily cycle.  But they are all different concepts going under the same name, and the underlying phenomena are different scientific puzzles.  You can explain being awake without explaining reflectivity or subjectivity.

\n

Fallacies of compression also underlie the bait-and-switch technique in philosophy—you argue about \"consciousness\" under one definition (like the ability to think about thinking) and then apply the conclusions to \"consciousness\" under a different definition (like subjectivity).  Of course it may be that the two are the same thing, but if so, genuinely understanding this fact would require first a conceptual split and then a genius stroke of reunification.

\n

Expanding your map is (I say again) a scientific challenge: part of the art of science, the skill of inquiring into the world.  (And of course you cannot solve a scientific challenge by appealing to dictionaries, nor master a complex skill of inquiry by saying \"I can define a word any way I like\".)  Where you see a single confusing thing, with protean and self-contradictory attributes, it is a good guess that your map is cramming too much into one point—you need to pry it apart and allocate some new buckets.  This is not like defining the single thing you see, but it does often follow from figuring out how to talk about the thing without using a single mental handle.

\n

So the skill of prying apart the map is linked to the rationalist version of Taboo, and to the wise use of words; because words often represent the points on our map, the labels under which we file our propositions and the buckets into which we drop our information.  Avoiding a single word, or allocating new ones, is often part of the skill of expanding the map.

" } }, { "_id": "GKfPL6LQFgB49FEnv", "title": "Replace the Symbol with the Substance", "pageUrl": "https://www.lesswrong.com/posts/GKfPL6LQFgB49FEnv/replace-the-symbol-with-the-substance", "postedAt": "2008-02-16T18:12:06.000Z", "baseScore": 95, "voteCount": 80, "commentCount": 17, "url": null, "contents": { "documentId": "GKfPL6LQFgB49FEnv", "html": "

What does it take to—as in yesterday's example—see a \"baseball game\" as \"An artificial group conflict in which you use a long wooden cylinder to whack a thrown spheroid, and then run between four safe positions\"?  What does it take to play the rationalist version of Taboo, in which the goal is not to find a synonym that isn't on the card, but to find a way of describing without the standard concept-handle?

\n

You have to visualize.  You have to make your mind's eye see the details, as though looking for the first time.  You have to perform an Original Seeing.

\n

Is that a \"bat\"?  No, it's a long, round, tapering, wooden rod, narrowing at one end so that a human can grasp and swing it.

\n

Is that a \"ball\"?  No, it's a leather-covered spheroid with a symmetrical stitching pattern, hard but not metal-hard, which someone can grasp and throw, or strike with the wooden rod, or catch.

\n

Are those \"bases\"?  No, they're fixed positions on a game field, that players try to run to as quickly as possible because of their safety within the game's artificial rules.

\n

The chief obstacle to performing an original seeing is that your mind already has a nice neat summary, a nice little easy-to-use concept handle.  Like the word \"baseball\", or \"bat\", or \"base\".  It takes an effort to stop your mind from sliding down the familiar path, the easy path, the path of least resistance, where the small featureless word rushes in and obliterates the details you're trying to see.  A word itself can have the destructive force of cliche; a word itself can carry the poison of a cached thought.

\n

\n

Playing the game of Taboo—being able to describe without using the standard pointer/label/handle—is one of the fundamental rationalist capacities.  It occupies the same primordial level as the habit of constantly asking \"Why?\" or \"What does this belief make me anticipate?\"

\n

The art is closely related to:

\n\n

How could tabooing a word help you keep your purpose?

\n

From Lost Purposes:

\n
\n

As you read this, some young man or woman is sitting at a desk in a university, earnestly studying material they have no intention of ever using, and no interest in knowing for its own sake.  They want a high-paying job, and the high-paying job requires a piece of paper, and the piece of paper requires a previous master's degree, and the master's degree requires a bachelor's degree, and the university that grants the bachelor's degree requires you to take a class in 12th-century knitting patterns to graduate.  So they diligently study, intending to forget it all the moment the final exam is administered, but still seriously working away, because they want that piece of paper.

\n
\n

Why are you going to \"school\"?  To get an \"education\" ending in a \"degree\".  Blank out the forbidden words and all their obvious synonyms, visualize the actual details, and you're much more likely to notice that \"school\" currently seems to consist of sitting next to bored teenagers listening to material you already know, that a \"degree\" is a piece of paper with some writing on it, and that \"education\" is forgetting the material as soon as you're tested on it.

\n

Leaky generalizations often manifest through categorizations:  People who actually learn in classrooms are categorized as \"getting an education\", so \"getting an education\" must be good; but then anyone who actually shows up at a college will also match against the concept \"getting an education\", whether or not they learn.

\n

Students who understand math will do well on tests, but if you require schools to produce good test scores, they'll spend all their time teaching to the test.  A mental category, that imperfectly matches your goal, can produce the same kind of incentive failure internally.  You want to learn, so you need an \"education\"; and then as long as you're getting anything that matches against the category \"education\", you may not notice whether you're learning or not.  Or you'll notice, but you won't realize you've lost sight of your original purpose, because you're \"getting an education\" and that's how you mentally described your goal.

\n

To categorize is to throw away information.  If you're told that a falling tree makes a \"sound\", you don't know what the actual sound is; you haven't actually heard the tree falling.  If a coin lands \"heads\", you don't know its radial orientation.  A blue egg-shaped thing may be a \"blegg\", but what if the exact egg shape varies, or the exact shade of blue?  You want to use categories to throw away irrelevant information, to sift gold from dust, but often the standard categorization ends up throwing out relevant information too.  And when you end up in that sort of mental trouble, the first and most obvious solution is to play Taboo.

\n

For example:  \"Play Taboo\" is itself a leaky generalization.  Hasbro's version is not the rationalist version; they only list five additional banned words on the card, and that's not nearly enough coverage to exclude thinking in familiar old words.  What rationalists do would count as playing Taboo—it would match against the \"play Taboo\" concept—but not everything that counts as playing Taboo works to force original seeing.  If you just think \"play Taboo to force original seeing\", you'll start thinking that anything that counts as playing Taboo must count as original seeing.

\n

The rationalist version isn't a game, which means that you can't win by trying to be clever and stretching the rules.  You have to play Taboo with a voluntary handicap:  Stop yourself from using synonyms that aren't on the card.  You also have to stop yourself from inventing a new simple word or phrase that functions as an equivalent mental handle to the old one.  You are trying to zoom in on your map, not rename the cities; dereference the pointer, not allocate a new pointer; see the events as they happen, not rewrite the cliche in a different wording.

\n

By visualizing the problem in more detail, you can see the lost purpose:  Exactly what do you do when you \"play Taboo\"?   What purpose does each and every part serve?

\n

If you see your activities and situation originally, you will be able to originally see your goals as well.  If you can look with fresh eyes, as though for the first time, you will see yourself doing things that you would never dream of doing if they were not habits.

\n

Purpose is lost whenever the substance (learning, knowledge, health) is displaced by the symbol (a degree, a test score, medical care).  To heal a lost purpose, or a lossy categorization, you must do the reverse:

\n

Replace the symbol with the substance; replace the signifier with the signified; replace the property with the membership test; replace the word with the meaning; replace the label with the concept; replace the summary with the details; replace the proxy question with the real question; dereference the pointer; drop into a lower level of organization; mentally simulate the process instead of naming it; zoom in on your map.

\n

\"The Simple Truth\" was generated by an exercise of this discipline to describe \"truth\" on a lower level of organization, without invoking terms like \"accurate\", \"correct\", \"represent\", \"reflect\", \"semantic\", \"believe\", \"knowledge\", \"map\", or \"real\".  (And remember that the goal is not really to play Taboo—the word \"true\" appears in the text, but not to define truth.  It would get a buzzer in Hasbro's game, but we're not actually playing that game.  Ask yourself whether the document fulfilled its purpose, not whether it followed the rules.)

\n

Bayes's Rule itself describes \"evidence\" in pure math, without using words like \"implies\", \"means\", \"supports\", \"proves\", or \"justifies\".  Set out to define such philosophical terms, and you'll just go in circles.

\n

And then there's the most important word of all to Taboo.  I've often warned that you should be careful not to overuse it, or even avoid the concept in certain cases.  Now you know the real reason why.  It's not a bad subject to think about.  But your true understanding is measured by your ability to describe what you're doing and why, without using that word or any of its synonyms.

" } }, { "_id": "WBdvyyHLdxZSAMmoz", "title": "Taboo Your Words", "pageUrl": "https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your-words", "postedAt": "2008-02-15T22:53:20.000Z", "baseScore": 302, "voteCount": 234, "commentCount": 133, "url": null, "contents": { "documentId": "WBdvyyHLdxZSAMmoz", "html": "

In the game Taboo (by Hasbro), the objective is for a player to have their partner guess a word written on a card, without using that word or five additional words listed on the card.  For example, you might have to get your partner to say \"baseball\" without using the words \"sport\", \"bat\", \"hit\", \"pitch\", \"base\" or of course \"baseball\".

\n

As soon as I see a problem like that, I at once think, \"An artificial group conflict in which you use a long wooden cylinder to whack a thrown spheroid, and then run between four safe positions.\"  It might not be the most efficient strategy to convey the word 'baseball' under the stated rules - that might be, \"It's what the Yankees play\" - but the general skill of blanking a word out of my mind was one I'd practiced for years, albeit with a different purpose.

\n

\n

Yesterday we saw how replacing terms with definitions could reveal the empirical unproductivity of the classical Aristotelian syllogism.  All humans are mortal (and also, apparently, featherless bipeds); Socrates is human; therefore Socrates is mortal.  When we replace the word 'human' by its apparent definition, the following underlying reasoning is revealed:

\n
\n

All [mortal, ~feathers, biped] are mortal;
Socrates is a [mortal, ~feathers, biped];
Therefore Socrates is mortal.

\n
\n

But the principle of replacing words by definitions applies much more broadly:

\n
\n

Albert:  \"A tree falling in a deserted forest makes a sound.\"
Barry:  \"A tree falling in a deserted forest does not make a sound.\"

\n
\n

Clearly, since one says \"sound\" and one says \"not sound\", we must have a contradiction, right?  But suppose that they both dereference their pointers before speaking:

\n
\n

Albert:  \"A tree falling in a deserted forest matches [membership test: this event generates acoustic vibrations].\"
Barry:  \"A tree falling in a deserted forest does not match [membership test: this event generates auditory experiences].\"

\n
\n

Now there is no longer an apparent collision—all they had to do was prohibit themselves from using the word sound. If \"acoustic vibrations\" came into dispute, we would just play Taboo again and say \"pressure waves in a material medium\"; if necessary we would play Taboo again on the word \"wave\" and replace it with the wave equation.  (Play Taboo on \"auditory experience\" and you get \"That form of sensory processing, within the human brain, which takes as input a linear time series of frequency mixes...\")
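
\n

If it helps to see the trick mechanically, here is a minimal sketch in Python (my gloss, with invented event fields, not part of the original dialogue):

\n

# Two different membership tests that the single word 'sound' was hiding.
def albert_sound(event):
    # Albert's test: does the event generate acoustic vibrations?
    return event['acoustic_vibrations']

def barry_sound(event):
    # Barry's test: does the event generate auditory experiences?
    return event['auditory_experience']

tree_fall = {'acoustic_vibrations': True, 'auditory_experience': False}

print(albert_sound(tree_fall))  # True  -- Albert's claim holds
print(barry_sound(tree_fall))   # False -- Barry's claim holds too
# No collision: the two claims were never about the same test.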

\n

But suppose, on the other hand, that Albert and Barry were to have the argument:

\n
\n

Albert:  \"Socrates matches the concept [membership test: this person will die after drinking hemlock].\"
Barry:  \"Socrates matches the concept [membership test: this person will not die after drinking hemlock].\"

\n
\n

Now Albert and Barry have a substantive clash of expectations; a difference in what they anticipate seeing after Socrates drinks hemlock.  But they might not notice this, if they happened to use the same word \"human\" for their different concepts.

\n

You get a very different picture of what people agree or disagree about, depending on whether you take a label's-eye-view (Albert says \"sound\" and Barry says \"not sound\", so they must disagree) or a test's-eye-view (Albert's membership test is acoustic vibrations, Barry's is auditory experience).

\n

Get together a pack of soi-disant futurists and ask them if they believe we'll have Artificial Intelligence in thirty years, and I would guess that at least half of them will say yes.  If you leave it at that, they'll shake hands and congratulate themselves on their consensus.  But make the term \"Artificial Intelligence\" taboo, and ask them to describe what they expect to see, without ever using words like \"computers\" or \"think\", and you might find quite a conflict of expectations hiding under that featureless standard word.  Likewise that other term.  And see also Shane Legg's compilation of 71 definitions of \"intelligence\".

\n

The illusion of unity across religions can be dispelled by making the term \"God\" taboo, and asking them to say what it is they believe in; or making the word \"faith\" taboo, and asking them why they believe it. Though mostly they won't be able to answer at all, because it is mostly profession in the first place, and you cannot cognitively zoom in on an audio recording.

\n

When you find yourself in philosophical difficulties, the first line of defense is not to define your problematic terms, but to see whether you can think without using those terms at all.  Or any of their short synonyms.  And be careful not to let yourself invent a new word to use instead.  Describe outward observables and interior mechanisms; don't use a single handle, whatever that handle may be.

\n

Albert says that people have \"free will\".  Barry says that people don't have \"free will\".  Well, that will certainly generate an apparent conflict.  Most philosophers would advise Albert and Barry to try to define exactly what they mean by \"free will\", on which topic they will certainly be able to discourse at great length.  I would advise Albert and Barry to describe what it is that they think people do, or do not have, without using the phrase \"free will\" at all.  (If you want to try this at home, you should also avoid the words \"choose\", \"act\", \"decide\", \"determined\", \"responsible\", or any of their synonyms.)

\n

This is one of the nonstandard tools in my toolbox, and in my humble opinion, it works way way better than the standard one.  It also requires more effort to use; you get what you pay for.

" } }, { "_id": "aEzwse2K5Cu8EshJK", "title": "Classic Sichuan in Millbrae, Thu Feb 21, 7pm", "pageUrl": "https://www.lesswrong.com/posts/aEzwse2K5Cu8EshJK/classic-sichuan-in-millbrae-thu-feb-21-7pm", "postedAt": "2008-02-15T00:21:21.000Z", "baseScore": 1, "voteCount": 3, "commentCount": 9, "url": null, "contents": { "documentId": "aEzwse2K5Cu8EshJK", "html": "

Followup to:  Bay Area Bayesians Unite, OB Meetup

\n\n

The Bay Area Overcoming Bias meetup will take place in the Classic Sichuan restaurant, 148 El Camino Real, Millbrae, CA 94031.  15 people said they would "Definitely" attend and an additional 27 said "Maybe".  Oh, and Robin Hanson will be there too.

\n\n

Dinner is scheduled for 7:00pm, on Thursday, February 21st, 2008.  I'll show up at 6:30pm, though, just to cut people some antislack if it's easier for them to arrive earlier.

\n\n

If you're arriving via the BART/Caltrain station, just walk up from the Southbound Caltrain side and turn right onto El Camino, walk a few meters, and you're there.

\n\n

If driving, I'd suggest taking the exit from 101 onto Millbrae Ave - the exit from 280 onto Millbrae surprisingly goes down a winding mountain road before arriving at downtown.  Doesn't mean you have to take 101 the whole way there, but I definitely recommend the 101 exit.

For parking, I would suggest parking near the BART/Caltrain station.  From 101 onto Millbrae Ave., turn right onto El Camino, almost immediately pass the Peter's Cafe parking lot and then turn right onto a small street toward the train station.  The first parking lot you see on your right is reserved for Peter's Cafe, but immediately after that (still on your right) is some city parking that looked mostly empty when I visited last Thursday.  El Camino itself was parked up, though.  If all else fails, you should be able to park in the train station lots and pay a small fee.  Then walk up to El Camino and turn right, as before.

\n\n

Classic Sichuan has vegetarian dishes.  They also have a reputation for their spicy food being spicy, so watch out!  I ate there to check quality, and while I'm generally a cultural barbarian, I didn't detect any problems with the food I was served.  Depending on how many people actually show up, we may overflow their small private room, but hopefully we won't overflow the restaurant.

\n\n

My cellphone number is (866) 983-5697.  That's toll-free, 866-YUDKOWS.

\n\n

Long live the Bayesian Conspiracy!  See you there!

" } }, { "_id": "i2dfY65JciebF3CAo", "title": "Empty Labels", "pageUrl": "https://www.lesswrong.com/posts/i2dfY65JciebF3CAo/empty-labels", "postedAt": "2008-02-14T23:50:06.000Z", "baseScore": 54, "voteCount": 45, "commentCount": 7, "url": null, "contents": { "documentId": "i2dfY65JciebF3CAo", "html": "

Consider (yet again) the Aristotelian idea of categories.  Let's say that there's some object with properties A, B, C, D, and E, or at least it looks E-ish.

\n
\n

Fred:  \"You mean that thing over there is blue, round, fuzzy, and—\"
Me: \"In Aristotelian logic, it's not supposed to make a difference what the properties are, or what I call them.  That's why I'm just using the letters.\"

\n
\n

Next, I invent the Aristotelian category \"zawa\", which describes those objects, all those objects, and only those objects, which have properties A, C, and D.

\n
\n

Me:  \"Object 1 is zawa, B, and E.\"
Fred:  \"And it's blue—I mean, A—too, right?\"
Me:  \"That's implied when I say it's zawa.\"
Fred:  \"Still, I'd like you to say it explicitly.\"
Me:  \"Okay.  Object 1 is A, B, zawa, and E.\"

\n
\n

\n

Then I add another word, \"yokie\", which describes all and only objects that are B and E; and the word \"xippo\", which describes all and only objects which are E but not D.

\n
\n

Me:  \"Object 1 is zawa and yokie, but not xippo.\"
Fred:  \"Wait, is it luminescent?  I mean, is it E?\"
Me:  \"Yes.  That is the only possibility on the information given.\"
Fred:  \"I'd rather you spelled it out.\"
Me:  \"Fine:  Object 1 is A, zawa, B, yokie, C, D, E, and not xippo.\"
Fred:  \"Amazing!  You can tell all that just by looking?\"

\n
\n

Impressive, isn't it?  Let's invent even more new words:  \"Bolo\" is A, C, and yokie; \"mun\" is A, C, and xippo; and \"merlacdonian\" is bolo and mun.

\n

Pointlessly confusing?  I think so too.  Let's replace the labels with the definitions:

\n
\n

\"Zawa, B, and E\" becomes [A, C, D], B, E
\"Bolo and A\" becomes [A, C, [B, E]], A
\"Merlacdonian\" becomes [A, C, [B, E]], [A, C, [E, ~D]]

\n
\n

And the thing to remember about the Aristotelian idea of categories is that [A, C, D] is the entire information of \"zawa\".  It's not just that I can vary the label, but that I can get along just fine without any label at all—the rules for Aristotelian classes work purely on structures like [A, C, D].  To call one of these structures \"zawa\", or attach any other label to it, is a human convenience (or inconvenience) which makes not the slightest difference to the Aristotelian rules.
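
\n

That dispensability is easy to demonstrate mechanically.  Here is a small sketch (mine, for illustration) that expands every label into its definition until only primitives remain:

\n

# Each label is defined by a list of properties and/or other labels.
# Single letters are primitive properties; '~D' means not-D.
definitions = {
    'zawa':         ['A', 'C', 'D'],
    'yokie':        ['B', 'E'],
    'xippo':        ['E', '~D'],
    'bolo':         ['A', 'C', 'yokie'],
    'mun':          ['A', 'C', 'xippo'],
    'merlacdonian': ['bolo', 'mun'],
}

def expand(term):
    # Recursively replace a label by its definition; primitives pass through.
    if term in definitions:
        return [expand(t) for t in definitions[term]]
    return term

print(expand('merlacdonian'))
# [['A', 'C', ['B', 'E']], ['A', 'C', ['E', '~D']]]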

\n

Let's say that \"human\" is to be defined as a mortal featherless biped.  Then the classic syllogism would have the form:

\n
\n

All [mortal, ~feathers, bipedal] are mortal.
Socrates is a [mortal, ~feathers, bipedal].
Therefore, Socrates is mortal.

\n
\n

The feat of reasoning looks a lot less impressive now, doesn't it?

\n

Here the illusion of inference comes from the labels, which conceal the premises, and pretend to novelty in the conclusion.  Replacing labels with definitions reveals the illusion, making visible the tautology's empirical unhelpfulness.  You can never say that Socrates is a [mortal, ~feathers, biped] until you have observed him to be mortal.

\n

There's an idea, which you may have noticed I hate, that \"you can define a word any way you like\".  This idea came from the Aristotelian notion of categories; since, if you follow the Aristotelian rules exactly and without flaw—which humans never do; Aristotle knew perfectly well that Socrates was human, even though that wasn't justified under his rules—but, if some imaginary nonhuman entity were to follow the rules exactly, they would never arrive at a contradiction.  They wouldn't arrive at much of anything: they couldn't say that Socrates is a [mortal, ~feathers, biped] until they observed him to be mortal.

\n

But it's not so much that labels are arbitrary in the Aristotelian system, as that the Aristotelian system works fine without any labels at all—it cranks out exactly the same stream of tautologies, they just look a lot less impressive.  The labels are only there to create the illusion of inference.

\n

So if you're going to have an Aristotelian proverb at all, the proverb should be, not \"I can define a word any way I like,\" nor even, \"Defining a word never has any consequences,\" but rather, \"Definitions don't need words.\"

" } }, { "_id": "9ZooAqfh2TC9SBDvq", "title": "The Argument from Common Usage", "pageUrl": "https://www.lesswrong.com/posts/9ZooAqfh2TC9SBDvq/the-argument-from-common-usage", "postedAt": "2008-02-13T16:24:21.000Z", "baseScore": 64, "voteCount": 58, "commentCount": 23, "url": null, "contents": { "documentId": "9ZooAqfh2TC9SBDvq", "html": "

Part of the Standard Definitional Dispute runs as follows:

\n
\n

Albert:  \"Look, suppose that I left a microphone in the forest and recorded the pattern of the acoustic vibrations of the tree falling.  If I played that back to someone, they'd call it a 'sound'!  That's the common usage!  Don't go around making up your own wacky definitions!\"

\n

Barry:  \"One, I can define a word any way I like so long as I use it consistently.  Two, the meaning I gave was in the dictionary.  Three, who gave you the right to decide what is or isn't common usage?\"

\n
\n

Not all definitional disputes progress as far as recognizing the notion of common usage.  More often, I think, someone picks up a dictionary because they believe that words have meanings, and the dictionary faithfully records what this meaning is.  Some people even seem to believe that the dictionary determines the meaning—that the dictionary editors are the Legislators of Language.  Maybe because back in elementary school, their authority-teacher said that they had to obey the dictionary, that it was a mandatory rule rather than an optional one?

\n

Dictionary editors read what other people write, and record what the words seem to mean; they are historians.  The Oxford English Dictionary may be comprehensive, but never authoritative.

\n

But surely there is a social imperative to use words in a commonly understood way?  Does not our human telepathy, our valuable power of language, rely on mutual coordination to work?  Perhaps we should voluntarily treat dictionary editors as supreme arbiters—even if they prefer to think of themselves as historians—in order to maintain the quiet cooperation on which all speech depends.

\n

\n

The phrase \"authoritative dictionary\" is almost never used correctly, an example of proper usage being the Authoritative Dictionary of IEEE Standards.  The IEEE is a body of voting members who have a professional need for exact agreement on terms and definitions, and so the Authoritative Dictionary of IEEE Standards is actual, negotiated legislation, which exerts whatever authority one regards as residing in the IEEE.

\n

In everyday life, shared language usually does not arise from a deliberate agreement, as of the IEEE.  It's more a matter of infection, as words are invented and diffuse through the culture.  (A \"meme\", one might say, following Richard Dawkins thirty years ago—but you already know what I mean, and if not, you can look it up on Google, and then you too will have been infected.)

\n

Yet as the example of the IEEE shows, agreement on language can also be a cooperatively established public good.  If you and I wish to undergo an exchange of thoughts via language, the human telepathy, then it is in our mutual interest that we use the same word for similar concepts—preferably, concepts similar to the limit of resolution in our brain's representation thereof—even though we have no obvious mutual interest in using any particular word for a concept.

\n

We have no obvious mutual interest in using the word \"oto\" to mean sound, or \"sound\" to mean oto; but we have a mutual interest in using the same word, whichever word it happens to be.  (Preferably, words we use frequently should be short, but let's not get into information theory just yet.)
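
\n

In game-theoretic terms, this is a pure coordination game.  The payoffs below are arbitrary numbers, chosen only to show the structure:

\n

                     You use \"sound\"    You use \"oto\"
I use \"sound\"           (1, 1)              (0, 0)
I use \"oto\"             (0, 0)              (1, 1)

\n

Both conventions are equilibria; the gain comes from landing on the same one, not from which one it is.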

\n

But, while we have a mutual interest, it is not strictly necessary that you and I use similar labels internally; it is only convenient.  If I know that, to you, \"oto\" means sound—that is, you associate \"oto\" to a concept very similar to the one I associate to \"sound\"—then I can say \"Paper crumpling makes a crackling oto.\"  It requires extra thought, but I can do it if I want.

\n

Similarly, if you say \"What is the walking-stick of a bowling ball dropping on the floor?\" and I know which concept you associate with the syllables \"walking-stick\", then I can figure out what you mean.  It may require some thought, and give me pause, because I ordinarily associate \"walking-stick\" with a different concept.  But I can do it just fine.

\n

When humans really want to communicate with each other, we're hard to stop!  If we're stuck on a deserted island with no common language, we'll take up sticks and draw pictures in sand.

\n

Albert's appeal to the Argument from Common Usage assumes that agreement on language is a cooperatively established public good.  Yet Albert assumes this for the sole purpose of rhetorically accusing Barry of breaking the agreement, and endangering the public good.  Now the falling-tree argument has gone all the way from botany to semantics to politics; and so Barry responds by challenging Albert for the authority to define the word.

\n

A rationalist, with the discipline of hugging the query active, would notice that the conversation had gone rather far astray.

\n

Oh, dear reader, is it all really necessary?  Albert knows what Barry means by \"sound\".  Barry knows what Albert means by \"sound\".  Both Albert and Barry have access to words, such as \"acoustic vibrations\" or \"auditory experience\", which they already associate to the same concepts, and which can describe events in the forest without ambiguity.  If they were stuck on a deserted island, trying to communicate with each other, their work would be done.

\n

When both sides know what the other side wants to say, and both sides accuse the other side of defecting from \"common usage\", then whatever it is they are about, it is clearly not working out a way to communicate with each other.  But this is the whole benefit that common usage provides in the first place.

\n

Why would you argue about the meaning of a word, two sides trying to wrest it back and forth?  If it's just a namespace conflict that has gotten blown out of proportion, and nothing more is at stake, then the two sides need merely generate two new words and use them consistently.

\n

Yet often categorizations function as hidden inferences and disguised queries.  Is atheism a \"religion\"?  If someone is arguing that the reasoning methods used in atheism are on a par with the reasoning methods used in Judaism, or that atheism is on a par with Islam in terms of causally engendering violence, then they have a clear argumentative stake in lumping it all together into an indistinct gray blur of \"faith\".

\n

Or consider the fight to blend together blacks and whites as \"people\".  This would not be a time to generate two words—what's at stake is exactly the idea that you shouldn't draw a moral distinction.

\n

But once any empirical proposition is at stake, or any moral proposition, you can no longer appeal to common usage.

\n

If the question is how to cluster together similar things for purposes of inference, empirical predictions will depend on the answer; which means that definitions can be wrong.  A conflict of predictions cannot be settled by an opinion poll.

\n

If you want to know whether atheism should be clustered with supernaturalist religions for purposes of some particular empirical inference, the dictionary can't answer you.

\n

If you want to know whether blacks are people, the dictionary can't answer you.

\n

If everyone believes that the red light in the sky is Mars the God of War, the dictionary will define \"Mars\" as the God of War.  If everyone believes that fire is the release of phlogiston, the dictionary will define \"fire\" as the release of phlogiston.

\n

There is an art to using words; even when definitions are not literally true or false, they are often wiser or more foolish.  Dictionaries are mere histories of past usage; if you treat them as supreme arbiters of meaning, it binds you to the wisdom of the past, forbidding you to do better.

\n

Though do take care to ensure (if you must depart from the wisdom of the past) that people can figure out what you're trying to say.

" } }, { "_id": "dMCFk2n2ur8n62hqB", "title": "Feel the Meaning", "pageUrl": "https://www.lesswrong.com/posts/dMCFk2n2ur8n62hqB/feel-the-meaning", "postedAt": "2008-02-13T01:01:17.000Z", "baseScore": 62, "voteCount": 56, "commentCount": 12, "url": null, "contents": { "documentId": "dMCFk2n2ur8n62hqB", "html": "

When I hear someone say, \"Oh, look, a butterfly,\" the spoken phonemes \"butterfly\" enter my ear and vibrate on my ear drum, being transmitted to the cochlea, tickling auditory nerves that transmit activation spikes to the auditory cortex, where phoneme processing begins, along with recognition of words, and reconstruction of syntax (a by no means serial process), and all manner of other complications.

\n

But at the end of the day, or rather, at the end of the second, I am primed to look where my friend is pointing and see a visual pattern that I will recognize as a butterfly; and I would be quite surprised to see a wolf instead.

\n

My friend looks at a butterfly, his throat vibrates and lips move, the pressure waves travel invisibly through the air, my ear hears and my nerves transduce and my brain reconstructs, and lo and behold, I know what my friend is looking at.  Isn't that marvelous?  If we didn't know about the pressure waves in the air, it would be a tremendous discovery in all the newspapers:  Humans are telepathic!  Human brains can transfer thoughts to each other!

\n

Well, we are telepathic, in fact; but magic isn't exciting when it's merely real, and all your friends can do it too.

\n

Think telepathy is simple?  Try building a computer that will be telepathic with you.  Telepathy, or \"language\", or whatever you want to call our partial thought transfer ability, is more complicated than it looks.

\n

But it would be quite inconvenient to go around thinking, \"Now I shall partially transduce some features of my thoughts into a linear sequence of phonemes which will invoke similar thoughts in my conversational partner...\"

\n

So the brain hides the complexity—or rather, never represents it in the first place—which leads people to think some peculiar thoughts about words.

\n

\n

As I remarked earlier, when a large yellow striped object leaps at me, I think \"Yikes!  A tiger!\" not \"Hm... objects with the properties of largeness, yellowness, and stripedness have previously often possessed the properties 'hungry' and 'dangerous', and therefore, although it is not logically necessary, auughhhh CRUNCH CRUNCH GULP.\"

\n

Similarly, when someone shouts \"Yikes!  A tiger!\", natural selection would not favor an organism that thought, \"Hm... I have just heard the syllables 'Tie' and 'Grr' which my fellow tribe members associate with their internal analogues of my own tiger concept, and which they are more likely to utter if they see an object they categorize as aiiieeee CRUNCH CRUNCH help it's got my arm CRUNCH GULP\".

\n

\"Blegg4_4\" Considering this as a design constraint on the human cognitive architecture, you wouldn't want any extra steps between when your auditory cortex recognizes the syllables \"tiger\", and when the tiger concept gets activated.

\n

Going back to the parable of bleggs and rubes, and the centralized network that categorizes quickly and cheaply, you might visualize a direct connection running from the unit that recognizes the syllable \"blegg\", to the unit at the center of the blegg network.  The central unit, the blegg concept, gets activated almost as soon as you hear Susan the Senior Sorter say \"Blegg!\"

\n

Or, for purposes of talking—which also shouldn't take eons—as soon as you see a blue egg-shaped thing and the central blegg unit fires, you holler \"Blegg!\" to Susan.

\n

And what that algorithm feels like from inside is that the label, and the concept, are very nearly identified; the meaning feels like an intrinsic property of the word itself.

\n

The cognoscenti will recognize this as yet another case of E. T. Jaynes's \"Mind Projection Fallacy\".  It feels like a word has a meaning, as a property of the word itself; just like how redness is a property of a red apple, or mysteriousness is a property of a mysterious phenomenon.

\n

Indeed, on most occasions, the brain will not distinguish at all between the word and the meaning—only bothering to separate the two while learning a new language, perhaps.  And even then, you'll see Susan pointing to a blue egg-shaped thing and saying \"Blegg!\", and you'll think, I wonder what \"blegg\" means, and not, I wonder what mental category Susan associates to the auditory label \"blegg\".

\n

Consider, in this light, the part of the Standard Dispute of Definitions where the two parties argue about what the word \"sound\" really means—the same way they might argue whether a particular apple is really red or green:

\n
\n

Albert: \"My computer's microphone can record a sound without anyone being around to hear it, store it as a file, and it's called a 'sound file'. And what's stored in the file is the pattern of vibrations in air, not the pattern of neural firings in anyone's brain.  'Sound' means a pattern of vibrations.\"

\n

Barry:  \"Oh, yeah?  Let's just see if the dictionary agrees with you.\"

\n
\n

Albert feels intuitively that the word \"sound\" has a meaning and that the meaning is acoustic vibrations.  Just as Albert feels that a tree falling in the forest makes a sound (rather than causing an event that matches the sound category).

\n

Barry likewise feels that:

\n
\n

sound.meaning == auditory experiences
forest.sound == false

\n
\n

Rather than:

\n
\n

myBrain.FindConcept(\"sound\") == concept_AuditoryExperience
concept_AuditoryExperience.match(forest) == false

\n
\n

Which is closer to what's really going on; but humans have not evolved to know this, anymore than humans instinctively know the brain is made of neurons.
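
\n

A runnable rendering of that more accurate picture (a sketch with stand-in objects, of course, not a claim about actual neural implementation):

\n

# Each brain maps the *same* word onto its own concept (a membership test).
albert_brain = {'sound': lambda event: event['acoustic_vibrations']}
barry_brain  = {'sound': lambda event: event['auditory_experience']}

forest = {'acoustic_vibrations': True, 'auditory_experience': False}

# What feels like sound.meaning is really myBrain.FindConcept('sound'):
print(albert_brain['sound'](forest))  # True
print(barry_brain['sound'](forest))   # False
# One string, two lookups: the 'meaning' lives in the brains, not in the word.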

\n

Albert and Barry's conflicting intuitions provide the fuel for continuing the argument in the phase of arguing over what the word \"sound\" means—which feels like arguing over a fact like any other fact, like arguing over whether the sky is blue or green.

\n

You may not even notice that anything has gone astray, until you try to perform the rationalist ritual of stating a testable experiment whose result depends on the facts you're so heatedly disputing...

" } }, { "_id": "7X2j8HAkWdmMoS8PE", "title": "Disputing Definitions", "pageUrl": "https://www.lesswrong.com/posts/7X2j8HAkWdmMoS8PE/disputing-definitions", "postedAt": "2008-02-12T00:15:11.000Z", "baseScore": 120, "voteCount": 105, "commentCount": 46, "url": null, "contents": { "documentId": "7X2j8HAkWdmMoS8PE", "html": "

I have watched more than one conversation—even conversations supposedly about cognitive science—go the route of disputing over definitions.  Taking the classic example to be \"If a tree falls in a forest, and no one hears it, does it make a sound?\", the dispute often follows a course like this:

\n
\n

If a tree falls in the forest, and no one hears it, does it make a sound?

\n

Albert:  \"Of course it does.  What kind of silly question is that?  Every time I've listened to a tree fall, it made a sound, so I'll guess that other trees falling also make sounds.  I don't believe the world changes around when I'm not looking.\"

\n
\n
\n

Barry:  \"Wait a minute.  If no one hears it, how can it be a sound?\"

\n
\n

In this example, Barry is arguing with Albert because of a genuinely different intuition about what constitutes a sound.  But there's more than one way the Standard Dispute can start.  Barry could have a motive for rejecting Albert's conclusion.  Or Barry could be a skeptic who, upon hearing Albert's argument, reflexively scrutinized it for possible logical flaws; and then, on finding a counterargument, automatically accepted it without applying a second layer of search for a counter-counterargument; thereby arguing himself into the opposite position.  This doesn't require that Barry's prior intuition—the intuition Barry would have had, if we'd asked him before Albert spoke—have differed from Albert's.

\n

Well, if Barry didn't have a differing intuition before, he sure has one now.

\n

\n
\n

Albert:  \"What do you mean, there's no sound?  The tree's roots snap, the trunk comes crashing down and hits the ground. This generates vibrations that travel through the ground and the air. That's where the energy of the fall goes, into heat and sound.  Are you saying that if people leave the forest, the tree violates conservation of energy?\"

\n

Barry:  \"But no one hears anything.  If there are no humans in the forest, or, for the sake of argument, anything else with a complex nervous system capable of 'hearing', then no one hears a sound.\"

\n
\n

Albert and Barry recruit arguments that feel like support for their respective positions, describing in more detail the thoughts that caused their \"sound\"-detectors to fire or stay silent.  But so far the conversation has still focused on the forest, rather than definitions.  And note that they don't actually disagree on anything that happens in the forest.

\n
\n

Albert:  \"This is the dumbest argument I've ever been in.  You're a niddlewicking fallumphing pickleplumber.\"

\n

Barry:  \"Yeah?  Well, you look like your face caught on fire and someone put it out with a shovel.\"

\n
\n

Insult has been proffered and accepted; now neither party can back down without losing face.  Technically, this isn't part of the argument, as rationalists account such things; but it's such an important part of the Standard Dispute that I'm including it anyway.

\n
\n

Albert:  \"The tree produces acoustic vibrations.  By definition, that is a sound.\"

\n

Barry:  \"No one hears anything.  By definition, that is not a sound.\"

\n
\n

The argument starts shifting to focus on definitions.  Whenever you feel tempted to say the words \"by definition\" in an argument that is not literally about pure mathematics, remember that anything which is true \"by definition\" is true in all possible worlds, and so observing its truth can never constrain which world you live in.
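
\n

In Bayesian terms: if E holds \"by definition\", then P(E|W) = 1 for every candidate world W, so the likelihood ratio P(E|W1)/P(E|W2) equals 1 for any two worlds, and observing E leaves your odds exactly where they were.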

\n
\n

Albert: \"My computer's microphone can record a sound without anyone being around to hear it, store it as a file, and it's called a 'sound file'. And what's stored in the file is the pattern of vibrations in air, not the pattern of neural firings in anyone's brain.  'Sound' means a pattern of vibrations.\"

\n
\n

Albert deploys an argument that feels like support for the word \"sound\" having a particular meaning. This is a different kind of question from whether acoustic vibrations take place in a forest—but the shift usually passes unnoticed.

\n
\n

Barry:  \"Oh, yeah?  Let's just see if the dictionary agrees with you.\"

\n
\n

There's a lot of things I could be curious about in the falling-tree scenario. I could go into the forest and look at trees, or learn how to derive the wave equation for changes of air pressure, or examine the anatomy of an ear, or study the neuroanatomy of the auditory cortex.  Instead of doing any of these things, I am to consult a dictionary, apparently.  Why?  Are the editors of the dictionary expert botanists, expert physicists, expert neuroscientists?  Looking in an encyclopedia might make sense, but why a dictionary?

\n
\n

Albert:  \"Hah!  Definition 2c in Merriam-Webster:  'Sound:  Mechanical radiant energy that is transmitted by longitudinal pressure waves in a material medium (as air).'\"

\n

Barry:  \"Hah!  Definition 2b in Merriam-Webster: 'Sound:  The sensation perceived by the sense of hearing.'\"

\n

Albert and Barry, chorus:  \"Consarned dictionary!  This doesn't help at all!\"

\n
\n

Dictionary editors are historians of usage, not legislators of language. Dictionary editors find words in current usage, then write down the words next to (a small part of) what people seem to mean by them.  If there's more than one usage, the editors write down more than one definition.

\n
\n

Albert:  \"Look, suppose that I left a microphone in the forest and recorded the pattern of the acoustic vibrations of the tree falling.  If I played that back to someone, they'd call it a 'sound'!  That's the common usage!  Don't go around making up your own wacky definitions!\"

\n

Barry:  \"One, I can define a word any way I like so long as I use it consistently.  Two, the meaning I gave was in the dictionary.  Three, who gave you the right to decide what is or isn't common usage?\"

\n
\n

There's quite a lot of rationality errors in the Standard Dispute.  Some of them I've already covered, and some of them I've yet to cover; likewise the remedies.

\n

But for now, I would just like to point out—in a mournful sort of way—that Albert and Barry seem to agree on virtually every question of what is actually going on inside the forest, and yet it doesn't seem to generate any feeling of agreement.

\n

Arguing about definitions is a garden path; people wouldn't go down the path if they saw at the outset where it led.  If you asked Albert (Barry) why he's still arguing, he'd probably say something like: \"Barry (Albert) is trying to sneak in his own definition of 'sound', the scurvey scoundrel, to support his ridiculous point; and I'm here to defend the standard definition.\"

\n

But suppose I went back in time to before the start of the argument:

\n
\n

(Eliezer appears from nowhere in a peculiar conveyance that looks just like the time machine from the original 'The Time Machine' movie.)

\n

Barry:  \"Gosh!  A time traveler!\"

\n

Eliezer:  \"I am a traveler from the future!  Hear my words!  I have traveled far into the past—around fifteen minutes—\"

\n

Albert:  \"Fifteen minutes?\"

\n

Eliezer:  \"—to bring you this message!\"

\n

(There is a pause of mixed confusion and expectancy.)

\n

Eliezer:  \"Do you think that 'sound' should be defined to require both acoustic vibrations (pressure waves in air) and also auditory experiences (someone to listen to the sound), or should 'sound' be defined as meaning only acoustic vibrations, or only auditory experience?\"

\n

Barry:  \"You went back in time to ask us that?\"

\n

Eliezer:  \"My purposes are my own!  Answer!\"

\n

Albert:  \"Well... I don't see why it would matter.  You can pick any definition so long as you use it consistently.\"

\n

Barry:  \"Flip a coin.  Er, flip a coin twice.\"

\n

Eliezer:  \"Personally I'd say that if the issue arises, both sides should switch to describing the event in unambiguous lower-level constituents, like acoustic vibrations or auditory experiences.  Or each side could designate a new word, like 'alberzle' and 'bargulum', to use for what they respectively used to call 'sound'; and then both sides could use the new words consistently.  That way neither side has to back down or lose face, but they can still communicate.  And of course you should try to keep track, at all times, of some testable proposition that the argument is actually about.  Does that sound right to you?\"

\n

Albert:  \"I guess...\"

\n

Barry:  \"Why are we talking about this?\"

\n

Eliezer:  \"To preserve your friendship against a contingency you will, now, never know.  For the future has already changed!\"

\n

(Eliezer and the machine vanish in a puff of smoke.)

\n

Barry:  \"Where were we again?\"

\n

Albert:  \"Oh, yeah:  If a tree falls in the forest, and no one hears it, does it make a sound?\"

\n

Barry:  \"It makes an alberzle but not a bargulum.  What's the next question?\"

\n
\n

This remedy doesn't destroy every dispute over categorizations.  But it destroys a substantial fraction.

" } }, { "_id": "yA4gF5KrboK2m2Xu7", "title": "How An Algorithm Feels From Inside", "pageUrl": "https://www.lesswrong.com/posts/yA4gF5KrboK2m2Xu7/how-an-algorithm-feels-from-inside", "postedAt": "2008-02-11T02:35:20.000Z", "baseScore": 304, "voteCount": 255, "commentCount": 85, "url": null, "contents": { "documentId": "yA4gF5KrboK2m2Xu7", "html": "

\"If a tree falls in the forest, and no one hears it, does it make a sound?\"  I remember seeing an actual argument get started on this subject—a fully naive argument that went nowhere near Berkeleyan subjectivism.  Just:

\n
\n

\"It makes a sound, just like any other falling tree!\"
\"But how can there be a sound that no one hears?\"

\n
\n

The standard rationalist view would be that the first person is speaking as if \"sound\" means acoustic vibrations in the air; the second person is speaking as if \"sound\" means an auditory experience in a brain.  If you ask \"Are there acoustic vibrations?\" or \"Are there auditory experiences?\", the answer is at once obvious.  And so the argument is really about the definition of the word \"sound\".

\n

I think the standard analysis is essentially correct.  So let's accept that as a premise, and ask:  Why do people get into such an argument?  What's the underlying psychology?

\n

A key idea of the heuristics and biases program is that mistakes are often more revealing of cognition than correct answers.  Getting into a heated dispute about whether, if a tree falls in a deserted forest, it makes a sound, is traditionally considered a mistake.

\n

So what kind of mind design corresponds to that error?

\n

\n

In Disguised Queries I introduced the blegg/rube classification task, in which Susan the Senior Sorter explains that your job is to sort objects coming off a conveyor belt, putting the blue eggs or \"bleggs\" into one bin, and the red cubes or \"rubes\" into the rube bin.  This, it turns out, is because bleggs contain small nuggets of vanadium ore, and rubes contain small shreds of palladium, both of which are useful industrially.

\n

Except that around 2% of blue egg-shaped objects contain palladium instead.  So if you find a blue egg-shaped thing that contains palladium, should you call it a \"rube\" instead?  You're going to put it in the rube bin—why not call it a \"rube\"?

\n

But when you switch off the light, nearly all bleggs glow faintly in the dark.  And blue egg-shaped objects that contain palladium are just as likely to glow in the dark as any other blue egg-shaped object.

\n

So if you find a blue egg-shaped object that contains palladium, and you ask \"Is it a blegg?\", the answer depends on what you have to do with the answer:  If you ask \"Which bin does the object go in?\", then you choose as if the object is a rube.  But if you ask \"If I turn off the light, will it glow?\", you predict as if the object is a blegg.  In one case, the question \"Is it a blegg?\" stands in for the disguised query, \"Which bin does it go in?\".  In the other case, the question \"Is it a blegg?\" stands in for the disguised query, \"Will it glow in the dark?\"
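
\n

One way to hold onto the moral: \"Is it a blegg?\" behaves like a function with a hidden extra argument, the purpose of asking.  A toy sketch (the field names are invented):

\n

def is_it_a_blegg(obj, purpose):
    # The 'same' question dispatches on the disguised query behind it.
    if purpose == 'which bin?':
        return obj['contents'] == 'vanadium'      # sort by what's inside
    if purpose == 'will it glow?':
        return obj['appearance'] == 'blegg-like'  # predict from the cluster
    raise ValueError('no disguised query, no reason to care')

odd_one = {'appearance': 'blegg-like', 'contents': 'palladium'}
print(is_it_a_blegg(odd_one, 'which bin?'))     # False: bin it as a rube
print(is_it_a_blegg(odd_one, 'will it glow?'))  # True: expect it to glow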

\n

Now suppose that you have an object that is blue and egg-shaped and contains palladium; and you have already observed that it is furred, flexible, opaque, and glows in the dark.

\n

This answers every query, observes every observable introduced.  There's nothing left for a disguised query to stand for.

\n

So why might someone feel an impulse to go on arguing whether the object is really a blegg?

\n

\"Blegg3\"

\n

This diagram from Neural Categories shows two different neural networks that might be used to answer questions about bleggs and rubes.  Network 1 has a number of disadvantages—such as potentially oscillating/chaotic behavior, or requiring O(N²) connections—but Network 1's structure does have one major advantage over Network 2:  Every unit in the network corresponds to a testable query.  If you observe every observable, clamping every value, there are no units in the network left over.

\n

Network 2, however, is a far better candidate for being something vaguely like how the human brain works:  It's fast, cheap, scalable—and has an extra dangling unit in the center, whose activation can still vary, even after we've observed every single one of the surrounding nodes.

\n

Which is to say that even after you know whether an object is blue or red, egg or cube, furred or smooth, bright or dark, and whether it contains vanadium or palladium, it feels like there's a leftover, unanswered question:  But is it really a blegg?
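
\n

Here is the situation as a toy computation (illustrative numbers, not a model of real neurons).  Every observable is clamped, yet the central unit's answer still depends on an arbitrary choice of threshold, which is exactly why the leftover question has no empirical content:

\n

# Network 2, schematically: five observables feed one central unit.
observed = {'blue': 1, 'egg': 1, 'furred': 1, 'glows': 1, 'vanadium': -1}
# Everything is clamped; no prediction about the object remains to be made.

def central_unit(threshold):
    total = sum(observed.values())  # uniform weights of 1, for simplicity
    return total > threshold        # 'but is it *really* a blegg?'

print(central_unit(2.5))  # True
print(central_unit(3.5))  # False
# Two equally serviceable wirings, opposite answers: the residual dispute
# is about the wiring of the central unit, not about the object.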

\n

Usually, in our daily experience, acoustic vibrations and auditory experience go together.  But a tree falling in a deserted forest unbundles this common association.  And even after you know that the falling tree creates acoustic vibrations but not auditory experience, it feels like there's a leftover question:  Did it make a sound?

We know where Pluto is, and where it's going; we know Pluto's shape, and Pluto's mass—but is it a planet?

\n

Now remember:  When you look at Network 2, as I've laid it out here, you're seeing the algorithm from the outside.  People don't think to themselves, \"Should the central unit fire, or not?\" any more than you think \"Should neuron #12,234,320,242 in my visual cortex fire, or not?\"

\n

It takes a deliberate effort to visualize your brain from the outside—and then you still don't see your actual brain; you imagine what you think is there, hopefully based on science, but regardless, you don't have any direct access to neural network structures from introspection.  That's why the ancient Greeks didn't invent computational neuroscience.

\n

When you look at Network 2, you are seeing from the outside; but the way that neural network structure feels from the inside, if you yourself are a brain running that algorithm, is that even after you know every characteristic of the object, you still find yourself wondering:  \"But is it a blegg, or not?\"

\n

This is a great gap to cross, and I've seen it stop people in their tracks.  Because we don't instinctively see our intuitions as \"intuitions\", we just see them as the world.  When you look at a green cup, you don't think of yourself as seeing a picture reconstructed in your visual cortex—although that is what you are seeing—you just see a green cup.  You think, \"Why, look, this cup is green,\" not, \"The picture in my visual cortex of this cup is green.\"

\n

And in the same way, when people argue over whether the falling tree makes a sound, or whether Pluto is a planet, they don't see themselves as arguing over whether a categorization should be active in their neural networks.  It seems like either the tree makes a sound, or not.

\n

We know where Pluto is, and where it's going; we know Pluto's shape, and Pluto's mass—but is it a planet?  And yes, there were people who said this was a fight over definitions—but even that is a Network 2 sort of perspective, because you're arguing about how the central unit ought to be wired up.  If you were a mind constructed along the lines of Network 1, you wouldn't say \"It depends on how you define 'planet',\" you would just say, \"Given that we know Pluto's orbit and shape and mass, there is no question left to ask.\"  Or, rather, that's how it would feel—it would feel like there was no question left—if you were a mind constructed along the lines of Network 1.

\n

Before you can question your intuitions, you have to realize that what your mind's eye is looking at is an intuition—some cognitive algorithm, as seen from the inside—rather than a direct perception of the Way Things Really Are.

\n

People cling to their intuitions, I think, not so much because they believe their cognitive algorithms are perfectly reliable, but because they can't see their intuitions as the way their cognitive algorithms happen to look from the inside.

\n

And so everything you try to say about how the native cognitive algorithm goes astray, ends up being contrasted to their direct perception of the Way Things Really Are—and discarded as obviously wrong.

" } }, { "_id": "yFDKvfN6D87Tf5J9f", "title": "Neural Categories", "pageUrl": "https://www.lesswrong.com/posts/yFDKvfN6D87Tf5J9f/neural-categories", "postedAt": "2008-02-10T00:33:17.000Z", "baseScore": 64, "voteCount": 55, "commentCount": 17, "url": null, "contents": { "documentId": "yFDKvfN6D87Tf5J9f", "html": "

In Disguised Queries, I talked about a classification task of \"bleggs\" and \"rubes\".  The typical blegg is blue, egg-shaped, furred, flexible, opaque, glows in the dark, and contains vanadium.  The typical rube is red, cube-shaped, smooth, hard, translucent, unglowing, and contains palladium.  For the sake of simplicity, let us forget the characteristics of flexibility/hardness and opaqueness/translucency.  This leaves five dimensions in thingspace:  Color, shape, texture, luminance, and interior.

\n

Suppose I want to create an Artificial Neural Network (ANN) to predict unobserved blegg characteristics from observed blegg characteristics.  And suppose I'm fairly naive about ANNs:  I've read excited popular science books about how neural networks are distributed, emergent, and parallel just like the human brain!! but I can't derive the differential equations for gradient descent in a non-recurrent multilayer network with sigmoid units (which is actually a lot easier than it sounds).

\n

Then I might design a neural network that looks something like this:

\n

\n

\"Blegg1_3\"

\n

Network 1 is for classifying bleggs and rubes.  But since \"blegg\" is an unfamiliar and synthetic concept, I've also included a similar Network 1b for distinguishing humans from Space Monsters, with input from Aristotle (\"All men are mortal\") and Plato's Academy (\"A featherless biped with broad nails\").

\n

A neural network needs a learning rule.  The obvious idea is that when two nodes are often active at the same time, we should strengthen the connection between them—this is one of the first rules ever proposed for training a neural network, known as Hebb's Rule.

\n

Thus, if you often saw things that were both blue and furred—thus simultaneously activating the \"color\" node in the + state and the \"texture\" node in the + state—the connection would strengthen between color and texture, so that + colors activated + textures, and vice versa.  If you saw things that were blue and egg-shaped and vanadium-containing, that would strengthen positive mutual connections between color and shape and interior.
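
\n

Hebb's Rule in its simplest form, as a sketch (the +1/-1 coding and the learning rate are illustrative choices):

\n

# Strengthen the connection between any two simultaneously active nodes.
nodes = ['color', 'shape', 'texture', 'luminance', 'interior']
weights = {(a, b): 0.0 for a in nodes for b in nodes if a < b}

def hebb_update(activation, rate=0.1):
    for (a, b) in weights:
        weights[(a, b)] += rate * activation[a] * activation[b]

# Seeing a typical blegg (+1 on every dimension) strengthens every pair:
hebb_update({n: 1 for n in nodes})
print(weights[('color', 'shape')])  # 0.1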

\n

Let's say you've already seen plenty of bleggs and rubes come off the conveyor belt.  But now you see something that's furred, egg-shaped, and—gasp!—reddish purple (which we'll model as a \"color\" activation level of -2/3).  You haven't yet tested the luminance, or the interior.  What to predict, what to predict?

\n

What happens then is that the activation levels in Network 1 bounce around a bit.  Positive activation flows to luminance from shape, negative activation flows to interior from color, negative activation flows from interior to luminance...  Of course all these messages are passed in parallel!! and asynchronously!! just like the human brain...

\n

Finally Network 1 settles into a stable state, which has high positive activation for \"luminance\" and \"interior\".  The network may be said to \"expect\" (though it has not yet seen) that the object will glow in the dark, and that it contains vanadium.
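
\n

The settling process itself fits in a few lines (a toy version: uniform positive weights, activations squashed into [-1, 1]):

\n

# Clamped nodes stay fixed; free nodes repeatedly move toward the
# sum of their neighbors' activations until nothing changes.
clamped = {'shape': 1.0, 'texture': 1.0, 'color': -2.0 / 3.0}
free = {'luminance': 0.0, 'interior': 0.0}

for step in range(20):
    for node in free:
        others = list(clamped.values()) + [v for k, v in free.items() if k != node]
        free[node] = max(-1.0, min(1.0, sum(others)))  # squash into [-1, 1]

print(free)  # both settle at +1.0: 'expect' glow-in-the-dark and vanadium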

\n

And lo, Network 1 exhibits this behavior even though there's no explicit node that says whether the object is a blegg or not.  The judgment is implicit in the whole network!!  Bleggness is an attractor!! which arises as the result of emergent behavior!! from the distributed!! learning rule.

\n

Now in real life, this kind of network design—however faddish it may sound—runs into all sorts of problems.  Recurrent networks don't always settle right away:  They can oscillate, or exhibit chaotic behavior, or just take a very long time to settle down.  This is a Bad Thing when you see something big and yellow and striped, and you have to wait five minutes for your distributed neural network to settle into the \"tiger\" attractor.  Asynchronous and parallel it may be, but it's not real-time.

\n

And there are other problems, like double-counting the evidence when messages bounce back and forth:  If you suspect that an object glows in the dark, your suspicion will activate belief that the object contains vanadium, which in turn will activate belief that the object glows in the dark.

\n

Plus if you try to scale up the Network 1 design, it requires O(N²) connections, where N is the total number of observables.

\n

So what might be a more realistic neural network design?

\n

\"Blegg2\"
In this network, a wave of activation converges on the central node from any clamped (observed) nodes, and then surges back out again to any unclamped (unobserved) nodes.  Which means we can compute the answer in one step, rather than waiting for the network to settle—an important requirement in biology when the neurons only run at 20Hz.  And the network architecture scales as O(N), rather than O(N²).
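
\n

As a sketch, the whole computation is two passes and O(N) work (the weights are illustrative):

\n

# Observed nodes -> central node -> unobserved nodes, in a single sweep.
weights = {'color': 1.0, 'shape': 1.0, 'texture': 1.0,
           'luminance': 1.0, 'interior': 1.0}

def predict(observed):
    # Pass 1: activation converges on the central node from clamped nodes.
    central = sum(weights[k] * v for k, v in observed.items()) / len(observed)
    # Pass 2: activation surges back out to every unclamped node.
    return {k: central * weights[k] for k in weights if k not in observed}

print(predict({'color': 1, 'shape': 1, 'texture': 1}))
# {'luminance': 1.0, 'interior': 1.0} -- one step, no waiting to settle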

\n

Admittedly, there are some things you can notice more easily with the first network architecture than the second.  Network 1 has a direct connection between every two nodes.  So if red objects never glow in the dark, but red furred objects usually have the other blegg characteristics like egg-shape and vanadium, Network 1 can easily represent this: it just takes a very strong direct negative connection from color to luminance, but more powerful positive connections from texture to all other nodes except luminance.

\n

Nor is this a \"special exception\" to the general rule that bleggs glow—remember, in Network 1, there is no unit that represents blegg-ness; blegg-ness emerges as an attractor in the distributed network.

\n

So yes, those N² connections were buying us something.  But not very much.  Network 1 is not more useful on most real-world problems, where you rarely find an animal stuck halfway between being a cat and a dog.

\n

(There are also facts that you can't easily represent in Network 1 or Network 2.  Let's say sea-blue color and spheroid shape, when found together, always indicate the presence of palladium; but when found individually, without the other, they are each very strong evidence for vanadium.  This is hard to represent, in either architecture, without extra nodes.  Both Network 1 and Network 2 embody implicit assumptions about what kind of environmental structure is likely to exist; the ability to read this off is what separates the adults from the babes, in machine learning.)

\n

Make no mistake:  Neither Network 1, nor Network 2, are biologically realistic.  But it still seems like a fair guess that however the brain really works, it is in some sense closer to Network 2 than Network 1.  Fast, cheap, scalable, works well to distinguish dogs and cats: natural selection goes for that sort of thing like water running down a fitness landscape.

\n

It seems like an ordinary enough task to classify objects as either bleggs or rubes, tossing them into the appropriate bin.  But would you notice if sea-blue objects never glowed in the dark?

\n

Maybe, if someone presented you with twenty objects that were alike only in being sea-blue, and then switched off the light, and none of the objects glowed.  If you got hit over the head with it, in other words.  Perhaps by presenting you with all these sea-blue objects in a group, your brain forms a new subcategory, and can detect the \"doesn't glow\" characteristic within that subcategory.  But you probably wouldn't notice if the sea-blue objects were scattered among a hundred other bleggs and rubes.  It wouldn't be easy or intuitive to notice, the way that distinguishing cats and dogs is easy and intuitive.

\n

Or:  \"Socrates is human, all humans are mortal, therefore Socrates is mortal.\"  How did Aristotle know that Socrates was human?  Well, Socrates had no feathers, and broad nails, and walked upright, and spoke Greek, and, well, was generally shaped like a human and acted like one.  So the brain decides, once and for all, that Socrates is human; and from there, infers that Socrates is mortal like all other humans thus yet observed.  It doesn't seem easy or intuitive to ask how much wearing clothes, as opposed to using language, is associated with mortality.  Just, \"things that wear clothes and use language are human\" and \"humans are mortal\".

\n

Are there biases associated with trying to classify things into categories once and for all?  Of course there are.  See e.g. Cultish Countercultishness.

\n

To be continued...

" } }, { "_id": "4FcxgdvdQP45D6Skg", "title": "Disguised Queries", "pageUrl": "https://www.lesswrong.com/posts/4FcxgdvdQP45D6Skg/disguised-queries", "postedAt": "2008-02-09T00:05:28.000Z", "baseScore": 189, "voteCount": 150, "commentCount": 108, "url": null, "contents": { "documentId": "4FcxgdvdQP45D6Skg", "html": "

Imagine that you have a peculiar job in a peculiar factory:  Your task is to take objects from a mysterious conveyor belt, and sort the objects into two bins.  When you first arrive, Susan the Senior Sorter explains to you that blue egg-shaped objects are called \"bleggs\" and go in the \"blegg bin\", while red cubes are called \"rubes\" and go in the \"rube bin\".

\n

Once you start working, you notice that bleggs and rubes differ in ways besides color and shape.  Bleggs have fur on their surface, while rubes are smooth.  Bleggs flex slightly to the touch; rubes are hard.  Bleggs are opaque; the rube's surface is slightly translucent.

\n

Soon after you begin working, you encounter a blegg shaded an unusually dark blue—in fact, on closer examination, the color proves to be purple, halfway between red and blue.

\n

Yet wait!  Why are you calling this object a \"blegg\"?  A \"blegg\" was originally defined as blue and egg-shaped—the qualification of blueness appears in the very name \"blegg\", in fact.  This object is not blue.  One of the necessary qualifications is missing; you should call this a \"purple egg-shaped object\", not a \"blegg\".

\n

But it so happens that, in addition to being purple and egg-shaped, the object is also furred, flexible, and opaque.  So when you saw the object, you thought, \"Oh, a strangely colored blegg.\"  It certainly isn't a rube... right?

\n

Still, you aren't quite sure what to do next.  So you call over Susan the Senior Sorter.

\n

\n
\n

    \"Oh, yes, it's a blegg,\" Susan says, \"you can put it in the blegg bin.\"
    You start to toss the purple blegg into the blegg bin, but pause for a moment.  \"Susan,\" you say, \"how do you know this is a blegg?\"
    Susan looks at you oddly.  \"Isn't it obvious?  This object may be purple, but it's still egg-shaped, furred, flexible, and opaque, like all the other bleggs.  You've got to expect a few color defects.  Or is this one of those philosophical conundrums, like 'How do you know the world wasn't created five minutes ago complete with false memories?'  In a philosophical sense I'm not absolutely certain that this is a blegg, but it seems like a good guess.\"
    \"No, I mean...\"  You pause, searching for words.  \"Why is there a blegg bin and a rube bin?  What's the difference between bleggs and rubes?\"
    \"Bleggs are blue and egg-shaped, rubes are red and cube-shaped,\" Susan says patiently.  \"You got the standard orientation lecture, right?\"
    \"Why do bleggs and rubes need to be sorted?\"
    \"Er... because otherwise they'd be all mixed up?\" says Susan.  \"Because nobody will pay us to sit around all day and not sort bleggs and rubes?\"
    \"Who originally determined that the first blue egg-shaped object was a 'blegg', and how did they determine that?\"
    Susan shrugs.  \"I suppose you could just as easily call the red cube-shaped objects 'bleggs' and the blue egg-shaped objects 'rubes', but it seems easier to remember this way.\"
    You think for a moment.  \"Suppose a completely mixed-up object came off the conveyor.  Like, an orange sphere-shaped furred translucent object with writhing green tentacles.  How could I tell whether it was a blegg or a rube?\"
    \"Wow, no one's ever found an object that mixed up,\" says Susan, \"but I guess we'd take it to the sorting scanner.\"
    \"How does the sorting scanner work?\" you inquire.  \"X-rays?  Magnetic resonance imaging?  Fast neutron transmission spectroscopy?\"
    \"I'm told it works by Bayes's Rule, but I don't quite understand how,\" says Susan.  \"I like to say it, though.  Bayes Bayes Bayes Bayes Bayes.\"
    \"What does the sorting scanner tell you?\"
    \"It tells you whether to put the object into the blegg bin or the rube bin.  That's why it's called a sorting scanner.\"
    At this point you fall silent.
    \"Incidentally,\" Susan says casually, \"it may interest you to know that bleggs contain small nuggets of vanadium ore, and rubes contain shreds of palladium, both of which are useful industrially.\"
    \"Susan, you are pure evil.\"
    \"Thank you.\"

\n
\n

So now it seems we've discovered the heart and essence of bleggness: a blegg is an object that contains a nugget of vanadium ore.  Surface characteristics, like blue color and furredness, do not determine whether an object is a blegg; surface characteristics only matter because they help you infer whether an object is a blegg, that is, whether the object contains vanadium.

\n

Containing vanadium is a necessary and sufficient definition: all bleggs contain vanadium, and everything that contains vanadium is a blegg; \"blegg\" is just a shorthand way of saying \"vanadium-containing object.\"  Right?

\n

Not so fast, says Susan:  Around 98% of bleggs contain vanadium, but 2% contain palladium instead.  To be precise (Susan continues) around 98% of blue egg-shaped furred flexible opaque objects contain vanadium.  For unusual bleggs, it may be a different percentage: 95% of purple bleggs contain vanadium, 92% of hard bleggs contain vanadium, etc.

\n

Now suppose you find a blue egg-shaped furred flexible opaque object, an ordinary blegg in every visible way, and just for kicks you take it to the sorting scanner, and the scanner says \"palladium\"—this is one of the rare 2%.  Is it a blegg?

\n

At first you might answer that, since you intend to throw this object in the rube bin, you might as well call it a \"rube\".  However, it turns out that almost all bleggs, if you switch off the lights, glow faintly in the dark; while almost all rubes do not glow in the dark.  And the percentage of bleggs that glow in the dark is not significantly different for blue egg-shaped furred flexible opaque objects that contain palladium, instead of vanadium.  Thus, if you want to guess whether the object glows like a blegg, or remains dark like a rube, you should guess that it glows like a blegg.

\n

So is the object really a blegg or a rube?

\n

On one hand, you'll throw the object in the rube bin no matter what else you learn.  On the other hand, if there are any unknown characteristics of the object you need to infer, you'll infer them as if the object were a blegg, not a rube—group it into the similarity cluster of blue egg-shaped furred flexible opaque things, and not the similarity cluster of red cube-shaped smooth hard translucent things.
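
\n

A minimal sketch of that two-query split, in Python; the feature sets and the glow probabilities below are illustrative assumptions, not anything the factory specified:

    def which_bin(scanner_reading):
        # Query 1: which bin?  Settled by the metal inside, i.e. the scanner's verdict.
        return 'rube bin' if scanner_reading == 'palladium' else 'blegg bin'

    def p_glows(features):
        # Query 2: will it glow in the dark?  Settled by the similarity cluster,
        # not by the scanner reading.  The 0.98/0.02 figures are assumed.
        blegg_cluster = {'egg-shaped', 'furred', 'flexible', 'opaque'}
        return 0.98 if len(features & blegg_cluster) >= 3 else 0.02

    obj = {'blue', 'egg-shaped', 'furred', 'flexible', 'opaque'}
    print(which_bin('palladium'))  # rube bin: for sorting purposes, call it a rube
    print(p_glows(obj))            # 0.98: for inferring hidden traits, it's a blegg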

\n

The question \"Is this object a blegg?\" may stand in for different queries on different occasions.

\n

If it weren't standing in for some query, you'd have no reason to care.

\n

Is atheism a \"religion\"Is transhumanism a \"cult\"?  People who argue that atheism is a religion \"because it states beliefs about God\" are really trying to argue (I think) that the reasoning methods used in atheism are on a par with the reasoning methods used in religion, or that atheism is no safer than religion in terms of the probability of causally engendering violence, etc...  What's really at stake is an atheist's claim of substantial difference and superiority relative to religion, which the religious person is trying to reject by denying the difference rather than the superiority(!)

\n

But that's not the a priori irrational part:  The a priori irrational part is where, in the course of the argument, someone pulls out a dictionary and looks up the definition of \"atheism\" or \"religion\".  (And yes, it's just as silly whether an atheist or religionist does it.)  How could a dictionary possibly decide whether an empirical cluster of atheists is really substantially different from an empirical cluster of theologians?  How can reality vary with the meaning of a word?  The points in thingspace don't move around when we redraw a boundary.

\n

But people often don't realize that their argument about where to draw a definitional boundary is really a dispute over whether to infer a characteristic shared by most things inside an empirical cluster...

\n

Hence the phrase, \"disguised query\".

" } }, { "_id": "WBw8dDkAWohFjWQSk", "title": "The Cluster Structure of Thingspace", "pageUrl": "https://www.lesswrong.com/posts/WBw8dDkAWohFjWQSk/the-cluster-structure-of-thingspace", "postedAt": "2008-02-08T00:07:15.000Z", "baseScore": 157, "voteCount": 120, "commentCount": 32, "url": null, "contents": { "documentId": "WBw8dDkAWohFjWQSk", "html": "

The notion of a \"configuration space\" is a way of translating object descriptions into object positions.  It may seem like blue is \"closer\" to blue-green than to red, but how much closer?  It's hard to answer that question by just staring at the colors.  But it helps to know that the (proportional) color coordinates in RGB are 0:0:5, 0:3:2 and 5:0:0.  It would be even clearer if plotted on a 3D graph.

\n

In the same way, you can see a robin as a robin—brown tail, red breast, standard robin shape, maximum flying speed when unladen, its species-typical DNA and individual alleles.  Or you could see a robin as a single point in a configuration space whose dimensions described everything we knew, or could know, about the robin.

\n

A robin is bigger than a virus, and smaller than an aircraft carrier—that might be the \"volume\" dimension.  Likewise a robin weighs more than a hydrogen atom, and less than a galaxy; that might be the \"mass\" dimension.  Different robins will have strong correlations between \"volume\" and \"mass\", so the robin-points will be lined up in a fairly linear string, in those two dimensions—but the correlation won't be exact, so we do need two separate dimensions.

\n

This is the benefit of viewing robins as points in space:  You couldn't see the linear lineup as easily if you were just imagining the robins as cute little wing-flapping creatures.

\n

\n

A robin's DNA is a highly multidimensional variable, but you can still think of it as part of a robin's location in thingspace—millions of quaternary coordinates, one coordinate for each DNA base—or maybe some more sophisticated view.  The shape of the robin, and its color (surface reflectance), you can likewise think of as part of the robin's position in thingspace, even though they aren't single dimensions.

\n

Just as the coordinate point 0:0:5 contains the same information as the actual HTML color blue, we shouldn't actually lose information when we see robins as points in space.  We believe the same statement about the robin's mass whether we visualize a robin balancing the scales opposite a 0.07-kilogram weight, or a robin-point with a mass-coordinate of +70.

\n

We can even imagine a configuration space with one or more dimensions for every distinct characteristic of an object, so that the position of an object's point in this space corresponds to all the information in the real object itself.  Rather redundantly represented, too—dimensions would include the mass, the volume, and the density.

\n

If you think that's extravagant, quantum physicists use an infinite-dimensional configuration space, and a single point in that space describes the location of every particle in the universe.  So we're actually being comparatively conservative in our visualization of thingspace—a point in thingspace describes just one object, not the entire universe.

\n

If we're not sure of the robin's exact mass and volume, then we can think of a little cloud in thingspace, a volume of uncertainty, within which the robin might be.  The density of the cloud is the density of our belief that the robin has that particular mass and volume.  If you're more sure of the robin's density than of its mass and volume, your probability-cloud will be highly concentrated in the density dimension, and concentrated around a slanting line in the subspace of mass/volume.  (Indeed, the cloud here is actually a surface, because of the relation VD = M.)

\n

\"Radial categories\" are how cognitive psychologists describe the non-Aristotelian boundaries of words.  The central \"mother\" conceives her child, gives birth to it, and supports it. Is an egg donor who never sees her child a mother?  She is the \"genetic mother\".  What about a woman who is implanted with a foreign embryo and bears it to term?  She is a \"surrogate mother\".  And the woman who raises a child that isn't hers genetically?  Why, she's an \"adoptive mother\".  The Aristotelian syllogism would run, \"Humans have ten fingers, Fred has nine fingers, therefore Fred is not a human\" but the way we actually think is \"Humans have ten fingers, Fred is a human, therefore Fred is a 'nine-fingered human'.\"

\n

We can think about the radial-ness of categories in intensional terms, as described above—properties that are usually present, but optionally absent.  If we thought about the intension of the word \"mother\", it might be like a distributed glow in thingspace, a glow whose intensity matches the degree to which that volume of thingspace matches the category \"mother\".  The glow is concentrated in the center of genetics and birth and child-raising; the volume of egg donors would also glow, but less brightly.

\n

Or we can think about the radial-ness of categories extensionally.  Suppose we mapped all the birds in the world into thingspace, using a distance metric that corresponds as well as possible to perceived similarity in humans:  A robin is more similar to another robin, than either is similar to a pigeon, but robins and pigeons are all more similar to each other than either is to a penguin, etcetera.

\n

Then the center of all birdness would be densely populated by many neighboring tight clusters, robins and sparrows and canaries and pigeons and many other species.  Eagles and falcons and other large predatory birds would occupy a nearby cluster.  Penguins would be in a more distant cluster, and likewise chickens and ostriches.

\n

The result might look, indeed, something like an astronomical cluster: many galaxies orbiting the center, and a few outliers.

\n

Or we could think simultaneously about both the intension of the cognitive category \"bird\", and its extension in real-world birds:  The central clusters of robins and sparrows glowing brightly with highly typical birdness; satellite clusters of ostriches and penguins glowing more dimly with atypical birdness, and Abraham Lincoln a few megaparsecs away and glowing not at all.
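
\n

A toy sketch of that combined picture: score the \"birdness glow\" of a point as falloff with distance from the centroid of some invented bird points.  Every feature vector and the distance-to-glow mapping below are assumptions made up for illustration:

    import math

    # Invented feature vectors: (flies, sings, feathered, body_size).
    birds = {'robin':   (1, 1.0, 1, 0.10), 'sparrow': (1, 0.9, 1, 0.10),
             'canary':  (1, 1.0, 1, 0.05), 'pigeon':  (1, 0.3, 1, 0.20),
             'eagle':   (1, 0.1, 1, 0.50), 'penguin': (0, 0.0, 1, 0.40),
             'ostrich': (0, 0.0, 1, 0.90)}
    lincoln = (0, 0.2, 0, 1.0)  # Abraham Lincoln: not remotely a bird

    center = tuple(sum(v[i] for v in birds.values()) / len(birds) for i in range(4))

    def glow(p):
        # Intensional 'glow' as typicality: it falls off with distance
        # from the extensional cluster's center.
        return math.exp(-math.dist(p, center))

    for name, v in {**birds, 'Lincoln': lincoln}.items():
        print(name, round(glow(v), 2))
    # The central songbird cluster glows brightly; penguin and ostrich
    # glow dimly; Lincoln barely glows at all.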

\n

I prefer that last visualization—the glowing points—because as I see it, the structure of the cognitive intension followed from the extensional cluster structure.  First came the structure-in-the-world, the empirical distribution of birds over thingspace; then, by observing it, we formed a category whose intensional glow roughly overlays this structure.

\n

This gives us yet another view of why words are not Aristotelian classes: the empirical clustered structure of the real universe is not so crystalline.  A natural cluster, a group of things highly similar to each other, may have no set of necessary and sufficient properties—no set of characteristics that all group members have, and no non-members have.

\n

But even if a category is irrecoverably blurry and bumpy, there's no need to panic.  I would not object if someone said that birds are \"feathered flying things\".  But penguins don't fly!—well, fine.  The usual rule has an exception; it's not the end of the world.  Definitions can't be expected to exactly match the empirical structure of thingspace in any event, because the map is smaller and much less complicated than the territory.  The point of the definition \"feathered flying things\" is to lead the listener to the bird cluster, not to give a total description of every existing bird down to the molecular level.

\n

When you draw a boundary around a group of extensional points empirically clustered in thingspace, you may find at least one exception to every simple intensional rule you can invent.

\n

But if a definition works well enough in practice to point out the intended empirical cluster, objecting to it may justly be called \"nitpicking\".

" } }, { "_id": "4mEsPHqcbRWxnaE5b", "title": "Typicality and Asymmetrical Similarity", "pageUrl": "https://www.lesswrong.com/posts/4mEsPHqcbRWxnaE5b/typicality-and-asymmetrical-similarity", "postedAt": "2008-02-06T21:20:50.000Z", "baseScore": 58, "voteCount": 52, "commentCount": 69, "url": null, "contents": { "documentId": "4mEsPHqcbRWxnaE5b", "html": "

Birds fly.  Well, except ostriches don't.  But which is a more typical bird—a robin, or an ostrich?

Which is a more typical chair:  A desk chair, a rocking chair, or a beanbag chair?

Most people would say that a robin is a more typical bird, and a desk chair is a more typical chair.  The cognitive psychologists who study this sort of thing experimentally do so under the heading of \"typicality effects\" or \"prototype effects\" (Rosch and Lloyd 1978).  For example, if you ask subjects to press a button to indicate \"true\" or \"false\" in response to statements like \"A robin is a bird\" or \"A penguin is a bird\", reaction times are faster for more central examples.  (I'm still unpacking my books, but I'm reasonably sure my source on this is Lakoff 1987.)  Typicality measures correlate well using different investigative methods—reaction times are one example; you can also ask people to directly rate, on a scale of 1 to 10, how well an example (like a specific robin) fits a category (like \"bird\").

So we have a mental measure of typicality—which might, perhaps, function as a heuristic—but is there a corresponding bias we can use to pin it down?

Well, which of these statements strikes you as more natural:  \"98 is approximately 100\", or \"100 is approximately 98\"?  If you're like most people, the first statement seems to make more sense.  (Sadock 1977.)  For similar reasons, people asked to rate how similar Mexico is to the United States, gave consistently higher ratings than people asked to rate how similar the United States is to Mexico.  (Tversky and Gati 1978.)

And if that still seems harmless, a study by Rips (1975) showed that people were more likely to expect a disease would spread from robins to ducks on an island, than from ducks to robins.  Now this is not a logical impossibility, but in a pragmatic sense, whatever difference separates a duck from a robin and would make a disease less likely to spread from a duck to a robin, must also be a difference between a robin and a duck, and would make a disease less likely to spread from a robin to a duck.

 

Yes, you can come up with rationalizations, like \"Well, there could be more neighboring species of the robins, which would make the disease more likely to spread initially, etc.,\" but be careful not to try too hard to rationalize the probability ratings of subjects who didn't even realize there was a comparison going on.  And don't forget that Mexico is more similar to the United States than the United States is to Mexico, and that 98 is closer to 100 than 100 is to 98.  A simpler interpretation is that people are using the (demonstrated) similarity heuristic as a proxy for the probability that a disease spreads, and this heuristic is (demonstrably) asymmetrical.
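
\n

One standard formalization that produces exactly this asymmetry is Tversky's contrast model, in which the subject's distinctive features are weighted more heavily than the referent's.  A minimal sketch, with invented feature sets and weights:

    def contrast_sim(a, b, theta=1.0, alpha=0.8, beta=0.2):
        # Tversky's contrast model:
        #   sim(a, b) = theta*|A & B| - alpha*|A - B| - beta*|B - A|
        # With alpha > beta, the subject's distinctive features hurt similarity
        # more than the referent's, so sim(a, b) != sim(b, a) in general.
        a, b = set(a), set(b)
        return theta * len(a & b) - alpha * len(a - b) - beta * len(b - a)

    # Invented feature sets; all that matters is that the US side has more
    # salient features.
    mexico = {'north_american', 'spanish_speaking', 'pesos'}
    usa    = {'north_american', 'english_speaking', 'dollars', 'superpower', 'hollywood'}

    print(contrast_sim(mexico, usa))  # -1.4: 'Mexico is similar to the US'
    print(contrast_sim(usa, mexico))  # -2.6: 'the US is similar to Mexico' (lower)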

Kansas is unusually close to the center of the United States, and Alaska is unusually far from the center of the United States; so Kansas is probably closer to most places in the US and Alaska is probably farther.  It does not follow, however, that Kansas is closer to Alaska than is Alaska to Kansas.  But people seem to reason (metaphorically speaking) as if closeness is an inherent property of Kansas and distance is an inherent property of Alaska; so that Kansas is still close, even to Alaska; and Alaska is still distant, even from Kansas.

So once again we see that Aristotle's notion of categories—logical classes with membership determined by a collection of properties that are individually strictly necessary, and together strictly sufficient—is not a good model of human cognitive psychology.  (Science's view has changed somewhat over the last 2350 years?  Who would've thought?)  We don't even reason as if set membership is a true-or-false property:  Statements of set membership can be more or less true.  (Note:  This is not the same thing as being more or less probable.)

One more reason not to pretend that you, or anyone else, is really going to treat words as Aristotelian logical classes.


Lakoff, George. (1987). Women, Fire, and Dangerous Things: What Categories Reveal about the Mind. University of Chicago Press, Chicago.

Rips, Lance J. (1975). \"Inductive judgments about natural categories.\"  Journal of Verbal Learning and Verbal Behavior. 14:665-81.

Rosch, Eleanor and B. B. Lloyd, eds. (1978).  Cognition and Categorization.  Hillsdale, N.J.: Lawrence Erlbaum Associates.

Sadock, Jerrold. (1977).  \"Truth and Approximations.\"  In Papers from the Third Annual Meeting of the Berkeley Linguistics Society, pp. 430-39.  Berkeley: Berkeley Linguistics Society.

Tversky, Amos and Itamar Gati. (1978).  \"Studies of Similarity\".  In Rosch and Lloyd (1978).

" } }, { "_id": "jMTbQj9XB5ah2maup", "title": "Similarity Clusters", "pageUrl": "https://www.lesswrong.com/posts/jMTbQj9XB5ah2maup/similarity-clusters", "postedAt": "2008-02-06T03:34:22.000Z", "baseScore": 68, "voteCount": 65, "commentCount": 5, "url": null, "contents": { "documentId": "jMTbQj9XB5ah2maup", "html": "

Once upon a time, the philosophers of Plato's Academy claimed that the best definition of human was a \"featherless biped\".  Diogenes of Sinope, also called Diogenes the Cynic, is said to have promptly exhibited a plucked chicken and declared \"Here is Plato's man.\"  The Platonists promptly changed their definition to \"a featherless biped with broad nails\".

\n

No dictionary, no encyclopedia, has ever listed all the things that humans have in common.  We have red blood, five fingers on each of two hands, bony skulls, 23 pairs of chromosomes—but the same might be said of other animal species.  We make complex tools to make complex tools, we use syntactical combinatorial language, we harness critical fission reactions as a source of energy: these things may serve to single out only humans, but not all humans—many of us have never built a fission reactor.  With the right set of necessary-and-sufficient gene sequences you could single out all humans, and only humans—at least for now—but it would still be far from all that humans have in common.

But so long as you don't happen to be near a plucked chicken, saying \"Look for featherless bipeds\" may serve to pick out a few dozen of the particular things that are humans, as opposed to houses, vases, sandwiches, cats, colors, or mathematical theorems.

\n

\n

Once the definition \"featherless biped\" has been bound to some particular featherless bipeds, you can look over the group, and begin harvesting some of the other characteristics—beyond mere featherfree twolegginess—that the \"featherless bipeds\" seem to share in common.  The particular featherless bipeds that you see seem to also use language, build complex tools, speak combinatorial language with syntax, bleed red blood if poked, die when they drink hemlock.

\n

Thus the category \"human\" grows richer, and adds more and more characteristics; and when Diogenes finally presents his plucked chicken, we are not fooled:  This plucked chicken is obviously not similar to the other \"featherless bipeds\".

\n

(If Aristotelian logic were a good model of human psychology, the Platonists would have looked at the plucked chicken and said, \"Yes, that's a human; what's your point?\")

\n

If the first featherless biped you see is a plucked chicken, then you may end up thinking that the verbal label \"human\" denotes a plucked chicken; so I can modify my treasure map to point to \"featherless bipeds with broad nails\", and if I am wise, go on to say, \"See Diogenes over there?  That's a human, and I'm a human, and you're a human; and that chimpanzee is not a human, though fairly close.\"

\n

The initial clue only has to lead the user to the similarity cluster—the group of things that have many characteristics in common.  After that, the initial clue has served its purpose, and I can go on to convey the new information \"humans are currently mortal\", or whatever else I want to say about us featherless bipeds.

\n

A dictionary is best thought of, not as a book of Aristotelian class definitions, but a book of hints for matching verbal labels to similarity clusters, or matching labels to properties that are useful in distinguishing similarity clusters.

" } }, { "_id": "QDHcpmTXMqPrrsddr", "title": "Buy Now Or Forever Hold Your Peace", "pageUrl": "https://www.lesswrong.com/posts/QDHcpmTXMqPrrsddr/buy-now-or-forever-hold-your-peace", "postedAt": "2008-02-04T21:42:51.000Z", "baseScore": 39, "voteCount": 30, "commentCount": 58, "url": null, "contents": { "documentId": "QDHcpmTXMqPrrsddr", "html": "

The Intrade prediction market is giving Hillary a 53% chance and Obama a 47% chance of winning the Democratic presidential nomination.  Hillary is down 7.5 percentage points in just the last day.  (Note:  Between when I wrote the above, and when I posted this, Hillary went up to 54.)

\n\n

From what I've read on Intrade, you can fund your account with up to $250 using a credit card, and it should land in your account immediately.  (More than this takes time.)  Also, remember that you can sell contracts at any time afterward - you don't have to wait months to collect your payout.

\n\n

If you think that Hillary is going to do better than the polls on Super Tuesday, and you're going to sneer afterward and say that Intrade was \"just tracking the polls\", buy Hillary now.

\n\n

If you think that Obama is going to do better than the polls on Super Tuesday, and you're going to gloat about how prediction markets didn't call this surprise in advance, buy Obama now.

\n\n

If you don't do either, then clearly you do not really believe that you know anything the prediction markets don't.  (Or you don't understand expected utility, or your utilities over final outcomes drop off improbably fast in the vicinity of your current wealth minus fifty bucks - you don't have to bet the full $250.)  It is free money, going now for anyone who genuinely thinks they know better than the prediction markets what will happen next.

\n\n

Prediction markets do not have supernatural insight.  If they give the candidates fifty-fifty odds, it means that the market collectively doesn't know what will happen next.  Even if you're well-calibrated, you get surprised on 90% probabilities one time out of ten.

\n\n

The point is not that prediction markets are a good predictor but that they are the best predictor.  If you think you can do better, why ain'cha rich?  Any person, group, or method that does better can pump money out of the prediction markets.
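
\n

The free-money claim is just expected-value arithmetic.  A hedged sketch, normalizing to a contract that costs the market price and pays 1 if the event happens (real contracts scale this by a constant):

    def expected_profit(your_prob, market_price):
        # Per-contract expected profit of buying at market_price when your own
        # probability is your_prob; algebraically this is your_prob - market_price.
        return your_prob * (1 - market_price) - (1 - your_prob) * market_price

    print(expected_profit(0.70, 0.53))  # +0.17 per contract: if you believe 70%, buy
    print(expected_profit(0.53, 0.53))  #  0.00: no edge at the market's own probability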

\n\n

If prediction markets react to polls, they're getting new information that they didn't predict in advance, which happens.  Being the best predictor doesn't make you omniscient.

\n\n

Everyone's going to find it real easy to make a better prediction afterward, but if you think you can call it in advance, there's FREE MONEY GOING NOW.

\n\n

Buy now, or forever hold your peace.

" } }, { "_id": "HsznWM9A7NiuGsp28", "title": "Extensions and Intensions", "pageUrl": "https://www.lesswrong.com/posts/HsznWM9A7NiuGsp28/extensions-and-intensions", "postedAt": "2008-02-04T21:34:52.000Z", "baseScore": 103, "voteCount": 82, "commentCount": 32, "url": null, "contents": { "documentId": "HsznWM9A7NiuGsp28", "html": "
\n

\"What is red?\"
\"Red is a color.\"
\"What's a color?\"
\"A color is a property of a thing.\"

\n
\n

But what is a thing?  And what's a property?  Soon the two are lost in a maze of words defined in other words, the problem that Stevan Harnad once described as trying to learn Chinese from a Chinese/Chinese dictionary.

\n

Alternatively, if you asked me \"What is red?\" I could point to a stop sign, then to someone wearing a red shirt, and a traffic light that happens to be red, and blood from where I accidentally cut myself, and a red business card, and then I could call up a color wheel on my computer and move the cursor to the red area.  This would probably be sufficient, though if you know what the word \"No\" means, the truly strict would insist that I point to the sky and say \"No.\"

\n

I think I stole this example from S. I. Hayakawa—though I'm really not sure, because I heard this way back in the indistinct blur of my childhood.  (When I was 12, my father accidentally deleted all my computer files.  I have no memory of anything before that.)

\n

But that's how I remember first learning about the difference between intensional and extensional definition.  To give an \"intensional definition\" is to define a word or phrase in terms of other words, as a dictionary does.  To give an \"extensional definition\" is to point to examples, as adults do when teaching children.  The preceding sentence gives an intensional definition of \"extensional definition\", which makes it an extensional example of \"intensional definition\".

\n

\n

In Hollywood Rationality and popular culture generally, \"rationalists\" are depicted as word-obsessed, floating in endless verbal space disconnected from reality.

\n

But the actual Traditional Rationalists have long insisted on maintaining a tight connection to experience:

\n
\n

\"If you look into a textbook of chemistry for a definition of lithium, you may be told that it is that element whose atomic weight is 7 very nearly. But if the author has a more logical mind he will tell you that if you search among minerals that are vitreous, translucent, grey or white, very hard, brittle, and insoluble, for one which imparts a crimson tinge to an unluminous flame, this mineral being triturated with lime or witherite rats-bane, and then fused, can be partly dissolved in muriatic acid; and if this solution be evaporated, and the residue be extracted with sulphuric acid, and duly purified, it can be converted by ordinary methods into a chloride, which being obtained in the solid state, fused, and electrolyzed with half a dozen powerful cells, will yield a globule of a pinkish silvery metal that will float on gasolene; and the material of that is a specimen of lithium.\"
        — Charles Sanders Peirce

\n
\n

That's an example of \"logical mind\" as described by a genuine Traditional Rationalist, rather than a Hollywood scriptwriter.

\n

But note:  Peirce isn't actually showing you a piece of lithium.  He didn't have pieces of lithium stapled to his book.  Rather he's giving you a treasure map—an intensionally defined procedure which, when executed, will lead you to an extensional example of lithium.  This is not the same as just tossing you a hunk of lithium, but it's not the same as saying \"atomic weight 7\" either.  (Though if you had sufficiently sharp eyes, saying \"3 protons\" might let you pick out lithium at a glance...)

\n

So that is intensional and extensional definition, a way of telling someone else what you mean by a concept.  When I talked about \"definitions\" above, I talked about a way of communicating concepts—telling someone else what you mean by \"red\", \"tiger\", \"human\", or \"lithium\".  Now let's talk about the actual concepts themselves.

\n

The actual intension of my \"tiger\" concept would be the neural pattern (in my temporal cortex) that inspects an incoming signal from the visual cortex to determine whether or not it is a tiger.

\n

The actual extension of my \"tiger\" concept is everything I call a tiger.

\n

Intensional definitions don't capture entire intensions; extensional definitions don't capture entire extensions.  If I point to just one tiger and say the word \"tiger\", the communication may fail if the listener thinks I mean \"dangerous animal\" or \"male tiger\" or \"yellow thing\".  Similarly, if I say \"dangerous yellow-black striped animal\", without pointing to anything, the listener may visualize giant hornets.

\n

You can't capture in words all the details of the cognitive concept—as it exists in your mind—that lets you recognize things as tigers or nontigers.  It's too large.  And you can't point to all the tigers you've ever seen, let alone everything you would call a tiger.

\n

The strongest definitions use a crossfire of intensional and extensional communication to nail down a concept.  Even so, you only communicate maps to concepts, or instructions for building concepts—you don't communicate the actual categories as they exist in your mind or in the world.

\n

(Yes, with enough creativity you can construct exceptions to this rule, like \"Sentences Eliezer Yudkowsky has published containing the term 'huragaloni' as of Feb 4, 2008\".  I've just shown you this concept's entire extension.  But except in mathematics, definitions are usually treasure maps, not treasure.)

\n

So that's another reason you can't \"define a word any way you like\":  You can't directly program concepts into someone else's brain.

\n

Even within the Aristotelian paradigm, where we pretend that the definitions are the actual concepts, you don't have simultaneous freedom of intension and extension.  Suppose I define Mars as \"A huge red rocky sphere, around a tenth of Earth's mass and 50% further away from the Sun\".  It's then a separate matter to show that this intensional definition matches some particular extensional thing in my experience, or indeed, that it matches any real thing whatsoever.  If instead I say \"That's Mars\" and point to a red light in the night sky, it becomes a separate matter to show that this extensional light matches any particular intensional definition I may propose—or any intensional beliefs I may have—such as \"Mars is the God of War\".

\n

But most of the brain's work of applying intensions happens sub-deliberately.  We aren't consciously aware that our identification of a red light as \"Mars\" is a separate matter from our verbal definition \"Mars is the God of War\".  No matter what kind of intensional definition I make up to describe Mars, my mind believes that \"Mars\" refers to this thingy, and that it is the fourth planet in the Solar System.

\n

When you take into account the way the human mind actually, pragmatically works, the notion \"I can define a word any way I like\" soon becomes \"I can believe anything I want about a fixed set of objects\" or \"I can move any object I want in or out of a fixed membership test\".  Just as you can't usually convey a concept's whole intension in words because it's a big complicated neural membership test, you can't control the concept's entire intension because it's applied sub-deliberately.  This is why arguing that XYZ is true \"by definition\" is so popular.  If definition changes behaved like the empirical nullops they're supposed to be, no one would bother arguing them.  But abuse definitions just a little, and they turn into magic wands—in arguments, of course; not in reality.

" } }, { "_id": "3nxs2WYDGzJbzcLMp", "title": "Words as Hidden Inferences", "pageUrl": "https://www.lesswrong.com/posts/3nxs2WYDGzJbzcLMp/words-as-hidden-inferences", "postedAt": "2008-02-03T23:36:23.000Z", "baseScore": 100, "voteCount": 95, "commentCount": 23, "url": null, "contents": { "documentId": "3nxs2WYDGzJbzcLMp", "html": "

Suppose I find a barrel, sealed at the top, but with a hole large enough for a hand.  I reach in, and feel a small, curved object.  I pull the object out, and it's blue—a bluish egg.  Next I reach in and feel something hard and flat, with edges—which, when I extract it, proves to be a red cube.  I pull out 11 eggs and 8 cubes, and every egg is blue, and every cube is red.

\n

Now I reach in and I feel another egg-shaped object.  Before I pull it out and look, I have to guess:  What will it look like?

\n

The evidence doesn't prove that every egg in the barrel is blue, and every cube is red.  The evidence doesn't even argue this all that strongly: 19 is not a large sample size.  Nonetheless, I'll guess that this egg-shaped object is blue—or as a runner-up guess, red.  If I guess anything else, there's as many possibilities as distinguishable colors—and for that matter, who says the egg has to be a single shade?  Maybe it has a picture of a horse painted on.
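
\n

For anyone who wants the dutiful humility as arithmetic: Laplace's rule of succession is one standard (and assumption-laden) way to turn 11-for-11 blue eggs into a guess about the next one:

    from fractions import Fraction

    def rule_of_succession(successes, trials):
        # Laplace's rule: P(next success) = (s + 1) / (n + 2), under a
        # uniform prior over the unknown blue-egg frequency.
        return Fraction(successes + 1, trials + 2)

    print(rule_of_succession(11, 11))  # 12/13: guess 'blue', but far from certainty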

\n

So I say \"blue\", with a dutiful patina of humility.  For I am a sophisticated rationalist-type person, and I keep track of my assumptions and dependencies—I guess, but I'm aware that I'm guessing... right?

\n

But when a large yellow striped feline-shaped object leaps out at me from the shadows, I think, \"Yikes!  A tiger!\"  Not, \"Hm... objects with the properties of largeness, yellowness, stripedness, and feline shape, have previously often possessed the properties 'hungry' and 'dangerous', and thus, although it is not logically necessary, it may be an empirically good guess that aaauuughhhh CRUNCH CRUNCH GULP.\"

\n

The human brain, for some odd reason, seems to have been adapted to make this inference quickly, automatically, and without keeping explicit track of its assumptions.

\n

And if I name the egg-shaped objects \"bleggs\" (for blue eggs) and the red cubes \"rubes\", then, when I reach in and feel another egg-shaped object, I may think:  Oh, it's a blegg, rather than considering all that problem-of-induction stuff.

\n

\n

It is a common misconception that you can define a word any way you like.

\n

This would be true if the brain treated words as purely logical constructs, Aristotelian classes, and you never took out any more information than you put in.

\n

Yet the brain goes on about its work of categorization, whether or not we consciously approve.  \"All humans are mortal, Socrates is a human, therefore Socrates is mortal\"—thus spake the ancient Greek philosophers.  Well, if mortality is part of your logical definition of \"human\", you can't logically classify Socrates as human until you observe him to be mortal.  But—this is the problem—Aristotle knew perfectly well that Socrates was a human.  Aristotle's brain placed Socrates in the \"human\" category as efficiently as your own brain categorizes tigers, apples, and everything else in its environment:  Swiftly, silently, and without conscious approval.

\n

Aristotle laid down rules under which no one could conclude Socrates was \"human\" until after he died.  Nonetheless, Aristotle and his students went on concluding that living people were humans and therefore mortal; they saw distinguishing properties such as human faces and human bodies, and their brains made the leap to inferred properties such as mortality.

\n

Misunderstanding the working of your own mind does not, thankfully, prevent the mind from doing its work.  Otherwise Aristotelians would have starved, unable to conclude that an object was edible merely because it looked and felt like a banana.

\n

So the Aristotelians went on classifying environmental objects on the basis of partial information, the way people had always done.  Students of Aristotelian logic went on thinking exactly the same way, but they had acquired an erroneous picture of what they were doing.

\n

If you asked an Aristotelian philosopher whether Carol the grocer was mortal, they would say \"Yes.\"  If you asked them how they knew, they would say \"All humans are mortal, Carol is human, therefore Carol is mortal.\"  Ask them whether it was a guess or a certainty, and they would say it was a certainty (if you asked before the sixteenth century, at least).  Ask them how they knew that humans were mortal, and they would say it was established by definition.

\n

The Aristotelians were still the same people, they retained their original natures, but they had acquired incorrect beliefs about their own functioning.  They looked into the mirror of self-awareness, and saw something unlike their true selves: they reflected incorrectly.

\n

Your brain doesn't treat words as logical definitions with no empirical consequences, and so neither should you.  The mere act of creating a word can cause your mind to allocate a category, and thereby trigger unconscious inferences of similarity.  Or block inferences of similarity; if I create two labels I can get your mind to allocate two categories.  Notice how I said \"you\" and \"your brain\" as if they were different things?

\n

Making errors about the inside of your head doesn't change what's there; otherwise Aristotle would have died when he concluded that the brain was an organ for cooling the blood.  Philosophical mistakes usually don't interfere with blink-of-an-eye perceptual inferences.

\n

But philosophical mistakes can severely mess up the deliberate thinking processes that we use to try to correct our first impressions.  If you believe that you can \"define a word any way you like\", without realizing that your brain goes on categorizing without your conscious oversight, then you won't take the effort to choose your definitions wisely.

" } }, { "_id": "bcM5ft8jvsffsZZ4Y", "title": "The Parable of Hemlock", "pageUrl": "https://www.lesswrong.com/posts/bcM5ft8jvsffsZZ4Y/the-parable-of-hemlock", "postedAt": "2008-02-03T02:01:43.000Z", "baseScore": 96, "voteCount": 95, "commentCount": 19, "url": null, "contents": { "documentId": "bcM5ft8jvsffsZZ4Y", "html": "
\n

\"All men are mortal.  Socrates is a man.  Therefore Socrates is mortal.\"
        — Aristotle(?)

\n
\n

    Socrates raised the glass of hemlock to his lips...
    \"Do you suppose,\" asked one of the onlookers, \"that even hemlock will not be enough to kill so wise and good a man?\"
    \"No,\" replied another bystander, a student of philosophy; \"all men are mortal, and Socrates is a man; and if a mortal drink hemlock, surely he dies.\"
    \"Well,\" said the onlooker, \"what if it happens that Socrates isn't mortal?\"
    \"Nonsense,\" replied the student, a little sharply; \"all men are mortal by definition; it is part of what we mean by the word 'man'. All men are mortal, Socrates is a man, therefore Socrates is mortal.  It is not merely a guess, but a logical certainty.\"
    \"I suppose that's right...\" said the onlooker. \"Oh, look, Socrates already drank the hemlock while we were talking.\"
    \"Yes, he should be keeling over any minute now,\" said the student.
    And they waited, and they waited, and they waited...
    \"Socrates appears not to be mortal,\" said the onlooker.
    \"Then Socrates must not be a man,\" replied the student.  \"All men are mortal, Socrates is not mortal, therefore Socrates is not a man.  And that is not merely a guess, but a logical certainty.\"

\n

\n

The fundamental problem with arguing that things are true \"by definition\" is that you can't make reality go a different way by choosing a different definition.

\n

You could reason, perhaps, as follows:  \"All things I have observed which wear clothing, speak language, and use tools, have also shared certain other properties as well, such as breathing air and pumping red blood. The last thirty 'humans' belonging to this cluster, whom I observed to drink hemlock, soon fell over and stopped moving.  Socrates wears a toga, speaks fluent ancient Greek, and drank hemlock from a cup.  So I predict that Socrates will keel over in the next five minutes.\"

\n

But that would be mere guessing.  It wouldn't be, y'know, absolutely and eternally certain.  The Greek philosophers—like most prescientific philosophers—were rather fond of certainty.

\n

Luckily the Greek philosophers have a crushing rejoinder to your questioning.  You have misunderstood the meaning of \"All humans are mortal,\" they say.  It is not a mere observation.  It is part of the definition of the word \"human\".  Mortality is one of several properties that are individually necessary, and together sufficient, to determine membership in the class \"human\".  The statement \"All humans are mortal\" is a logically valid truth, absolutely unquestionable.  And if Socrates is human, he must be mortal: it is a logical deduction, as certain as certain can be.

\n

But then we can never know for certain that Socrates is a \"human\" until after Socrates has been observed to be mortal.  It does no good to observe that Socrates speaks fluent Greek, or that Socrates has red blood, or even that Socrates has human DNA.  None of these characteristics are logically equivalent to mortality.  You have to see him die before you can conclude that he was human.

\n

(And even then it's not infinitely certain.  What if Socrates rises from the grave a night after you see him die?  Or more realistically, what if Socrates is signed up for cryonics?  If mortality is defined to mean finite lifespan, then you can never really know if someone was human, until you've observed to the end of eternity—just to make sure they don't come back.  Or you could think you saw Socrates keel over, but it could be an illusion projected onto your eyes with a retinal scanner.  Or maybe you just hallucinated the whole thing...)

\n

The problem with syllogisms is that they're always valid.  \"All humans are mortal; Socrates is human; therefore Socrates is mortal\" is—if you treat it as a logical syllogism—logically valid within our own universe.  It's also logically valid within neighboring Everett branches in which, due to a slightly different evolved biochemistry, hemlock is a delicious treat rather than a poison.  And it's logically valid even in universes where Socrates never existed, or for that matter, where humans never existed.

\n

The Bayesian definition of evidence favoring a hypothesis is evidence which we are more likely to see if the hypothesis is true than if it is false.  Observing that a syllogism is logically valid can never be evidence favoring any empirical proposition, because the syllogism will be logically valid whether that proposition is true or false.
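
\n

In odds form: posterior odds equal prior odds times the likelihood ratio P(E|H)/P(E|not-H).  A valid syllogism has likelihood ratio exactly 1, so it moves nothing.  A minimal sketch:

    def update(prior, likelihood_ratio):
        # Bayes in odds form: posterior odds = prior odds * likelihood ratio.
        odds = prior / (1 - prior) * likelihood_ratio
        return odds / (1 + odds)

    print(update(0.7, 1.0))  # 0.7: logically guaranteed 'evidence' changes nothing
    print(update(0.7, 5.0))  # ~0.92: evidence with an actual likelihood ratio does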

\n

Syllogisms are valid in all possible worlds, and therefore, observing their validity never tells us anything about which possible world we actually live in.

\n

This doesn't mean that logic is useless—just that logic can only tell us that which, in some sense, we already know.  But we do not always believe what we know.  Is the number 29384209 prime?  By virtue of how I define my decimal system and my axioms of arithmetic, I have already determined my answer to this question—but I do not know what my answer is yet, and I must do some logic to find out.
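
\n

The \"do some logic\" step can even be handed to a machine; a minimal trial-division sketch (the number is from the text, the algorithm is an arbitrary choice):

    def is_prime(n):
        # Trial division: the axioms of arithmetic already fix the answer;
        # running this merely brings it to our attention.
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    print(is_prime(29384209))  # determined before we ran it; unknown until we did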

\n

Similarly, if I form the uncertain empirical generalization \"Humans are vulnerable to hemlock\", and the uncertain empirical guess \"Socrates is human\", logic can tell me that my previous guesses are predicting that Socrates will be vulnerable to hemlock.

\n

It's been suggested that we can view logical reasoning as resolving our uncertainty about impossible possible worlds—eliminating probability mass in logically impossible worlds which we did not know to be logically impossible.  In this sense, logical argument can be treated as observation.

\n

But when you talk about an empirical prediction like \"Socrates is going to keel over and stop breathing\" or \"Socrates is going to do fifty jumping jacks and then compete in the Olympics next year\", that is a matter of possible worlds, not impossible possible worlds.

\n

Logic can tell us which hypotheses match up to which observations, and it can tell us what these hypotheses predict for the future—it can bring old observations and previous guesses to bear on a new problem.  But logic never flatly says, \"Socrates will stop breathing now.\"  Logic never dictates any empirical question; it never settles any real-world query which could, by any stretch of the imagination, go either way.

\n

Just remember the Litany Against Logic:

\n
\n

Logic stays true, wherever you may go,
So logic never tells you where you live.

\n
" } }, { "_id": "hQxYBfu2LPc9Ydo6w", "title": "The Parable of the Dagger", "pageUrl": "https://www.lesswrong.com/posts/hQxYBfu2LPc9Ydo6w/the-parable-of-the-dagger", "postedAt": "2008-02-01T20:53:05.000Z", "baseScore": 234, "voteCount": 164, "commentCount": 105, "url": null, "contents": { "documentId": "hQxYBfu2LPc9Ydo6w", "html": "

Once upon a time, there was a court jester who dabbled in logic.

\n

The jester presented the king with two boxes.  Upon the first box was inscribed:

\n
\n

\"Either this box contains an angry frog, or the box with a false inscription contains an angry frog, but not both.\"

\n
\n

On the second box was inscribed:

\n
\n

\"Either this box contains gold and the box with a false inscription contains an angry frog, or this box contains an angry frog and the box with a true inscription contains gold.\"

\n
\n

And the jester said to the king:  \"One box contains an angry frog, the other box gold; and one, and only one, of the inscriptions is true.\"

\n

\n

The king opened the wrong box, and was savaged by an angry frog.

\n

\"You see,\" the jester said, \"let us hypothesize that the first inscription is the true one.  Then suppose the first box contains gold.  Then the other box would have an angry frog, while the box with a true inscription would contain gold, which would make the second statement true as well.  Now hypothesize that the first inscription is false, and that the first box contains gold.  Then the second inscription would be—\"

\n

The king ordered the jester thrown in the dungeons.

\n

A day later, the jester was brought before the king in chains, and shown two boxes.

\n

\"One box contains a key,\" said the king, \"to unlock your chains; and if you find the key you are free.  But the other box contains a dagger for your heart, if you fail.\"

\n

And the first box was inscribed:

\n
\n

\"Either both inscriptions are true, or both inscriptions are false.\"

\n
\n

And the second box was inscribed:

\n
\n

\"This box contains the key.\"

\n
\n

The jester reasoned thusly:  \"Suppose the first inscription is true.  Then the second inscription must also be true.  Now suppose the first inscription is false.  Then again the second inscription must be true. So the second box must contain the key, if the first inscription is true, and also if the first inscription is false.  Therefore, the second box must logically contain the key.\"

\n

The jester opened the second box, and found a dagger.

\n

\"How?!\" cried the jester in horror, as he was dragged away.  \"It's logically impossible!\"

\n

\"It is entirely possible,\" replied the king.  \"I merely wrote those inscriptions on two boxes, and then I put the dagger in the second one.\"

\n

(Adapted from Raymond Smullyan.)

" } }, { "_id": "i47JiJwNQ2q7bWf4L", "title": "OB Meetup: Millbrae, Thu 21 Feb, 7pm", "pageUrl": "https://www.lesswrong.com/posts/i47JiJwNQ2q7bWf4L/ob-meetup-millbrae-thu-21-feb-7pm", "postedAt": "2008-01-31T23:18:20.000Z", "baseScore": 1, "voteCount": 3, "commentCount": 13, "url": null, "contents": { "documentId": "i47JiJwNQ2q7bWf4L", "html": "

The Overcoming Bias meetup has been scheduled for Thursday, February 21st, at 7pm.  We're going to look at locating this in Millbrae within walking distance of the BART / Caltrain station.  The particular restaurant I had in mind turns out to be booked for Thursdays, so if you know a good Millbrae restaurant (with a private room?) in walking distance of the train station, please post in the comments.  I'll be looking at restaurants shortly.

\n\n

Why not schedule it for a day other than Thursday, you ask?

\n\n

Because:

\n\n

Robin Hanson will be in the Bay Area and attending!  Woohoo!

\n\n

If you would be able to make Thursday the 21st, 7pm, in Millbrae, somewhere near the BART/Caltrain, please vote below.  No, seriously, please vote, now - the kind of restaurant I have to find depends on how many people will be attending.

Opinion Polls & Market Research
" } }, { "_id": "6ddcsdA2c2XpNpE5x", "title": "Newcomb's Problem and Regret of Rationality", "pageUrl": "https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality", "postedAt": "2008-01-31T19:36:56.000Z", "baseScore": 157, "voteCount": 138, "commentCount": 620, "url": null, "contents": { "documentId": "6ddcsdA2c2XpNpE5x", "html": "

The following may well be the most controversial dilemma in the history of decision theory:

\n
\n

A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game.  In this game, Omega selects a human being, sets down two boxes in front of them, and flies away.

\n

Box A is transparent and contains a thousand dollars.
Box B is opaque, and contains either a million dollars, or nothing.

\n

You can take both boxes, or take only box B.

\n

And the twist is that Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.

\n

Omega has been correct on each of 100 observed occasions so far - everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars.  (We assume that box A vanishes in a puff of smoke if you take only box B; no one else can take box A afterward.)

\n

Before you make your choice, Omega has flown off and moved on to its next game.  Box B is already empty or already full.

\n

Omega drops two boxes on the ground in front of you and flies off.

\n

Do you take both boxes, or only box B?

\n
\n

And the standard philosophical conversation runs thusly:

\n
\n

One-boxer:  \"I take only box B, of course.  I'd rather have a million than a thousand.\"

\n

Two-boxer:  \"Omega has already left.  Either box B is already full or already empty.  If box B is already empty, then taking both boxes nets me $1000, taking only box B nets me $0.  If box B is already full, then taking both boxes nets $1,001,000, taking only box B nets $1,000,000.  In either case I do better by taking both boxes, and worse by leaving a thousand dollars on the table - so I will be rational, and take both boxes.\"

\n

One-boxer:  \"If you're so rational, why ain'cha rich?\"

\n

Two-boxer:  \"It's not my fault Omega chooses to reward only people with irrational dispositions, but it's already too late for me to do anything about that.\"

\n
\n

\n

There is a large literature on the topic of Newcomblike problems - especially if you consider the Prisoner's Dilemma as a special case, which it is generally held to be.  \"Paradoxes of Rationality and Cooperation\" is an edited volume that includes Nozick's original essay on the problem.  For those who read only online material, this PhD thesis summarizes the major standard positions.

\n

I'm not going to go into the whole literature, but the dominant consensus in modern decision theory is that one should two-box, and Omega is just rewarding agents with irrational dispositions.  This dominant view goes by the name of \"causal decision theory\".

\n

As you know, the primary reason I'm blogging is that I am an incredibly slow writer when I try to work in any other format.  So I'm not going to try to present my own analysis here.  Way too long a story, even by my standards.

\n

But it is agreed even among causal decision theorists that if you have the power to precommit yourself to take one box, in Newcomb's Problem, then you should do so.  If you can precommit yourself before Omega examines you, then you are directly causing box B to be filled.

\n

Now in my field - which, in case you have forgotten, is self-modifying AI - this works out to saying that if you build an AI that two-boxes on Newcomb's Problem, it will self-modify to one-box on Newcomb's Problem, if the AI considers in advance that it might face such a situation.  Agents with free access to their own source code have access to a cheap method of precommitment.

\n

What if you expect that you might, in general, face a Newcomblike problem, without knowing the exact form of the problem?  Then you would have to modify yourself into a sort of agent whose disposition was such that it would generally receive high rewards on Newcomblike problems.

\n

But what does an agent with a disposition generally-well-suited to Newcomblike problems look like?  Can this be formally specified?

\n

Yes, but when I tried to write it up, I realized that I was starting to write a small book.  And it wasn't the most important book I had to write, so I shelved it.  My slow writing speed really is the bane of my existence.  The theory I worked out seems, to me, to have many nice properties besides being well-suited to Newcomblike problems.  It would make a nice PhD thesis, if I could get someone to accept it as my PhD thesis.  But that's pretty much what it would take to make me unshelve the project.  Otherwise I can't justify the time expenditure, not at the speed I currently write books.

\n

I say all this, because there's a common attitude that \"Verbal arguments for one-boxing are easy to come by, what's hard is developing a good decision theory that one-boxes\" - coherent math which one-boxes on Newcomb's Problem without producing absurd results elsewhere.  So I do understand that, and I did set out to develop such a theory, but my writing speed on big papers is so slow that I can't publish it.  Believe it or not, it's true.

\n

Nonetheless, I would like to present some of my motivations on Newcomb's Problem - the reasons I felt impelled to seek a new theory - because they illustrate my source-attitudes toward rationality.  Even if I can't present the theory that these motivations motivate...

\n

First, foremost, fundamentally, above all else:

\n

Rational agents should WIN.

\n

Don't mistake me, and think that I'm talking about the Hollywood Rationality stereotype that rationalists should be selfish or shortsighted.  If your utility function has a term in it for others, then win their happiness.  If your utility function has a term in it for a million years hence, then win the eon.

\n

But at any rate, WIN.  Don't lose reasonably, WIN.

\n

Now there are defenders of causal decision theory who argue that the two-boxers are doing their best to win, and cannot help it if they have been cursed by a Predictor who favors irrationalists.  I will talk about this defense in a moment.  But first, I want to draw a distinction between causal decision theorists who believe that two-boxers are genuinely doing their best to win; versus someone who thinks that two-boxing is the reasonable or the rational thing to do, but that the reasonable move just happens to predictably lose, in this case.  There are a lot of people out there who think that rationality predictably loses on various problems - that, too, is part of the Hollywood Rationality stereotype, that Kirk is predictably superior to Spock.

\n

Next, let's turn to the charge that Omega favors irrationalists.  I can conceive of a superbeing who rewards only people born with a particular gene, regardless of their choices.  I can conceive of a superbeing who rewards people whose brains inscribe the particular algorithm of \"Describe your options in English and choose the last option when ordered alphabetically,\" but who does not reward anyone who chooses the same option for a different reason.  But Omega rewards people who choose to take only box B, regardless of which algorithm they use to arrive at this decision, and this is why I don't buy the charge that Omega is rewarding the irrational.  Omega doesn't care whether or not you follow some particular ritual of cognition; Omega only cares about your predicted decision.

\n

We can choose whatever reasoning algorithm we like, and will be rewarded or punished only according to that algorithm's choices, with no other dependency - Omega just cares where we go, not how we got there.

\n

It is precisely the notion that Nature does not care about our algorithm, which frees us up to pursue the winning Way - without attachment to any particular ritual of cognition, apart from our belief that it wins.  Every rule is up for grabs, except the rule of winning.

\n

As Miyamoto Musashi said - it's really worth repeating:

\n
\n

\"You can win with a long weapon, and yet you can also win with a short weapon.  In short, the Way of the Ichi school is the spirit of winning, whatever the weapon and whatever its size.\"

\n
\n

(Another example:  It was argued by McGee that we must adopt bounded utility functions or be subject to \"Dutch books\" over infinite times.  But:  The utility function is not up for grabs.  I love life without limit or upper bound:  There is no finite amount of life lived N where I would prefer an 80.0001% probability of living N years to a 0.0001% chance of living a googolplex years and an 80% chance of living forever.  This is a sufficient condition to imply that my utility function is unbounded.  So I just have to figure out how to optimize for that morality.  You can't tell me, first, that above all I must conform to a particular ritual of cognition, and then that, if I conform to that ritual, I must change my morality to avoid being Dutch-booked.  Toss out the losing ritual; don't change the definition of winning.  That's like deciding to prefer $1000 to $1,000,000 so that Newcomb's Problem doesn't make your preferred ritual of cognition look bad.)
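A quick sketch of why the preference above is sufficient - assuming, as seems natural, that the utility of living forever is the limit of the utilities of living N years (this assumption is mine, not spelled out in the original argument):

```latex
% The stated preference, for every finite N, says:
%   0.800001 U(N) < 0.000001 U(googolplex) + 0.8 U(forever).
% Suppose U were bounded, with U(forever) = lim_{N -> inf} U(N) = L < inf.
% Taking N -> inf gives 0.800001 L <= 0.000001 U(googolplex) + 0.8 L,
% hence L <= U(googolplex); monotonicity gives U(googolplex) <= L, so
% L = U(googolplex).  But then at N = googolplex the preference would
% read 0.800001 U(googolplex) < 0.800001 U(googolplex), a contradiction.
% So L is infinite: the utility function is unbounded.
\[
  \forall N < \infty:\quad
  0.800001\,U(N) \;<\; 0.000001\,U(\text{googolplex}) \;+\; 0.8\,U(\text{forever})
\]
```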

\n

\"But,\" says the causal decision theorist, \"to take only one box, you must somehow believe that your choice can affect whether box B is empty or full - and that's unreasonable!  Omega has already left!  It's physically impossible!\"

\n

Unreasonable?  I am a rationalist: what do I care about being unreasonable?  I don't have to conform to a particular ritual of cognition.  I don't have to take only box B because I believe my choice affects the box, even though Omega has already left.  I can just... take only box B.

\n

I do have a proposed alternative ritual of cognition which computes this decision, which this margin is too small to contain; but I shouldn't need to show this to you.  The point is not to have an elegant theory of winning - the point is to win; elegance is a side effect.

\n

Or to look at it another way:  Rather than starting with a concept of what is the reasonable decision, and then asking whether \"reasonable\" agents leave with a lot of money, start by looking at the agents who leave with a lot of money, develop a theory of which agents tend to leave with the most money, and from this theory, try to figure out what is \"reasonable\".  \"Reasonable\" may just refer to decisions in conformance with our current ritual of cognition - what else would determine whether something seems \"reasonable\" or not?

\n

From James Joyce (no relation), Foundations of Causal Decision Theory:

\n
\n

Rachel has a perfectly good answer to the \"Why ain't you rich?\" question.  \"I am not rich,\" she will say, \"because I am not the kind of person the psychologist thinks will refuse the money.  I'm just not like you, Irene.  Given that I know that I am the type who takes the money, and given that the psychologist knows that I am this type, it was reasonable of me to think that the $1,000,000 was not in my account.  The $1,000 was the most I was going to get no matter what I did.  So the only reasonable thing for me to do was to take it.\"

\n

Irene may want to press the point here by asking, \"But don't you wish you were like me, Rachel?  Don't you wish that you were the refusing type?\"  There is a tendency to think that Rachel, a committed causal decision theorist, must answer this question in the negative, which seems obviously wrong (given that being like Irene would have made her rich).  This is not the case.  Rachel can and should admit that she does wish she were more like Irene.  \"It would have been better for me,\" she might concede, \"had I been the refusing type.\"  At this point Irene will exclaim, \"You've admitted it!  It wasn't so smart to take the money after all.\"  Unfortunately for Irene, her conclusion does not follow from Rachel's premise.  Rachel will patiently explain that wishing to be a refuser in a Newcomb problem is not inconsistent with thinking that one should take the $1,000 whatever type one is.  When Rachel wishes she was Irene's type she is wishing for Irene's options, not sanctioning her choice.

\n
\n

It is, I would say, a general principle of rationality - indeed, part of how I define rationality - that you never end up envying someone else's mere choices.  You might envy someone their genes, if Omega rewards genes, or if the genes give you a generally happier disposition.  But Rachel, above, envies Irene her choice, and only her choice, irrespective of what algorithm Irene used to make it.  Rachel wishes just that she had a disposition to choose differently.

\n

You shouldn't claim to be more rational than someone and simultaneously envy them their choice - only their choice.  Just do the act you envy.

\n

I keep trying to say that rationality is the winning-Way, but causal decision theorists insist that taking both boxes is what really wins, because you can't possibly do better by leaving $1000 on the table... even though the single-boxers leave the experiment with more money.  Be careful of this sort of argument, any time you find yourself defining the \"winner\" as someone other than the agent who is currently smiling from on top of a giant heap of utility.
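To make \"pay attention to the money\" concrete, here is a minimal sketch of the expected winnings.  The $1,000 and $1,000,000 payoffs are the standard ones; the 99% predictor accuracy is an illustrative assumption, not part of the problem statement:

```python
def expected_winnings(one_box: bool, accuracy: float = 0.99) -> float:
    """Expected dollars, if the Predictor guesses your choice
    correctly with probability `accuracy`."""
    if one_box:
        # Box B holds $1,000,000 iff the Predictor foresaw one-boxing.
        return accuracy * 1_000_000
    # Two-boxers always get box A's $1,000, plus box B's contents
    # in the cases where the Predictor (wrongly) foresaw one-boxing.
    return 1_000 + (1 - accuracy) * 1_000_000

print(expected_winnings(True))   # 990000.0
print(expected_winnings(False))  # 11000.0
```

For any accuracy above roughly 50.05%, the one-boxers walk away with more money on average - and Omega is stipulated to do far better than that.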

\n

Yes, there are various thought experiments in which some agents start out with an advantage - but if the task is to, say, decide whether to jump off a cliff, you want to be careful not to define cliff-refraining agents as having an unfair prior advantage over cliff-jumping agents, by virtue of their unfair refusal to jump off cliffs.  At this point you have covertly redefined \"winning\" as conformance to a particular ritual of cognition.  Pay attention to the money!

\n

Or here's another way of looking at it:  Faced with Newcomb's Problem, would you want to look really hard for a reason to believe that it was perfectly reasonable and rational to take only box B; because, if such a line of argument existed, you would take only box B and find it full of money?  Would you spend an extra hour thinking it through, if you were confident that, at the end of the hour, you would be able to convince yourself that box B was the rational choice?  This too is a rather odd position to be in.  Ordinarily, the work of rationality goes into figuring out which choice is the best - not finding a reason to believe that a particular choice is the best.

\n

Maybe it's too easy to say that you \"ought to\" two-box on Newcomb's Problem, that this is the \"reasonable\" thing to do, so long as the money isn't actually in front of you.  Maybe you're just numb to philosophical dilemmas, at this point.  What if your daughter had a 90% fatal disease, and box A contained a serum with a 20% chance of curing her, and box B might contain a serum with a 95% chance of curing her?  What if there was an asteroid rushing toward Earth, and box A contained an asteroid deflector that worked 10% of the time, and box B might contain an asteroid deflector that worked 100% of the time?

\n

Would you, at that point, find yourself tempted to make an unreasonable choice?

\n

If the stake in box B was something you could not leave behind?  Something overwhelmingly more important to you than being reasonable?  If you absolutely had to win - really win, not just be defined as winning?

\n

Would you wish with all your power that the \"reasonable\" decision was to take only box B?

\n

Then maybe it's time to update your definition of reasonableness.

\n

Alleged rationalists should not find themselves envying the mere decisions of alleged nonrationalists, because your decision can be whatever you like.  When you find yourself in a position like this, you shouldn't chide the other person for failing to conform to your concepts of reasonableness.  You should realize you got the Way wrong.

\n

So, too, if you ever find yourself keeping separate track of the \"reasonable\" belief, versus the belief that seems likely to be actually true.  Either you have misunderstood reasonableness, or your second intuition is just wrong.

\n

Now one can't simultaneously define \"rationality\" as the winning Way, and define \"rationality\" as Bayesian probability theory and decision theory.  But it is the argument that I am putting forth, and the moral of my advice to Trust In Bayes, that the laws governing winning have indeed proven to be math.  If it ever turns out that Bayes fails - receives systematically lower rewards on some problem, relative to a superior alternative, in virtue of its mere decisions - then Bayes has to go out the window.  \"Rationality\" is just the label I use for my beliefs about the winning Way - the Way of the agent smiling from on top of the giant heap of utility.  Currently, that label refers to Bayescraft.

\n

I realize that this is not a knockdown criticism of causal decision theory - that would take the actual book and/or PhD thesis - but I hope it illustrates some of my underlying attitude toward this notion of \"rationality\".

\n

You shouldn't find yourself distinguishing the winning choice from the reasonable choice.  Nor should you find yourself distinguishing the reasonable belief from the belief that is most likely to be true.

\n

That is why I use the word \"rational\" to denote my beliefs about accuracy and winning - not to denote verbal reasoning, or strategies which yield certain success, or that which is logically provable, or that which is publicly demonstrable, or that which is reasonable.

\n

As Miyamoto Musashi said:

\n
\n

\"The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him.\"

\n
" } }, { "_id": "SGR4GxFK7KmW7ckCB", "title": "Something to Protect", "pageUrl": "https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect", "postedAt": "2008-01-30T17:52:49.000Z", "baseScore": 231, "voteCount": 180, "commentCount": 82, "url": null, "contents": { "documentId": "SGR4GxFK7KmW7ckCB", "html": "

In the gestalt of (ahem) Japanese fiction, one finds this oft-repeated motif:  Power comes from having something to protect.

\n

I'm not just talking about superheroes that power up when a friend is threatened, the way it works in Western fiction.  In the Japanese version it runs deeper than that.

\n

In the X saga it's explicitly stated that each of the good guys draws their power from having someone—one person—who they want to protect.  Who?  That question is part of X's plot—the \"most precious person\" isn't always who we think.  But if that person is killed, or hurt in the wrong way, the protector loses their power—not so much from magical backlash, as from simple despair.  This isn't something that happens once per week per good guy, the way it would work in a Western comic.  It's equivalent to being Killed Off For Real—taken off the game board.

\n

The way it works in Western superhero comics is that the good guy gets bitten by a radioactive spider; and then he needs something to do with his powers, to keep him busy, so he decides to fight crime.  And then Western superheroes are always whining about how much time their superhero duties take up, and how they'd rather be ordinary mortals so they could go fishing or something.

\n

Similarly, in Western real life, unhappy people are told that they need a \"purpose in life\", so they should pick out an altruistic cause that goes well with their personality, like picking out nice living-room drapes, and this will brighten up their days by adding some color, like nice living-room drapes.  You should be careful not to pick something too expensive, though.

\n

In Western comics, the magic comes first, then the purpose:  Acquire amazing powers, decide to protect the innocent.  In Japanese fiction, often, it works the other way around.

\n

Of course I'm not saying all this to generalize from fictional evidence. But I want to convey a concept whose deceptively close Western analogue is not what I mean.

\n

I have touched before on the idea that a rationalist must have something they value more than \"rationality\":  The Art must have a purpose other than itself, or it collapses into infinite recursion.  But do not mistake me, and think I am advocating that rationalists should pick out a nice altruistic cause, by way of having something to do, because rationality isn't all that important by itself.  No.  I am asking:  Where do rationalists come from?  How do we acquire our powers? 

\n

\n

It is written in the Twelve Virtues of Rationality:

\n
\n

How can you improve your conception of rationality?  Not by saying to yourself, \"It is my duty to be rational.\"  By this you only enshrine your mistaken conception.  Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, \"The sky is green,\" and you look up at the sky and see blue.  If you think:  \"It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,\" you lose a chance to discover your mistake.

\n
\n

Historically speaking, the way humanity finally left the trap of authority and began paying attention to, y'know, the actual sky, was that beliefs based on experiment turned out to be much more useful than beliefs based on authority.  Curiosity has been around since the dawn of humanity, but the problem is that spinning campfire tales works just as well for satisfying curiosity.

\n

Historically speaking, science won because it displayed greater raw strength in the form of technology, not because science sounded more reasonable.  To this very day, magic and scripture still sound more reasonable to untrained ears than science.  That is why there is continuous social tension between the belief systems.  If science not only worked better than magic, but also sounded more intuitively reasonable, it would have won entirely by now.

\n

Now there are those who say:  \"How dare you suggest that anything should be valued more than Truth?  Must not a rationalist love Truth more than mere usefulness?\"

\n

Forget for a moment what would have happened historically to someone like that—that people in pretty much that frame of mind defended the Bible because they loved Truth more than mere accuracy.  Propositional morality is a glorious thing, but it has too many degrees of freedom.

\n

No, the real point is that a rationalist's love affair with the Truth is, well, just more complicated as an emotional relationship.

\n

One doesn't become an adept rationalist without caring about the truth, both as a purely moral desideratum and as something that's fun to have.  I doubt there are many master composers who hate music.

\n

But part of what I like about rationality is the discipline imposed by requiring beliefs to yield predictions, which ends up taking us much closer to the truth than if we sat in the living room obsessing about Truth all day.  I like the complexity of simultaneously having to love True-seeming ideas, and also being ready to drop them out the window at a moment's notice.  I even like the glorious aesthetic purity of declaring that I value mere usefulness above aesthetics.  That is almost a contradiction, but not quite; and that has an aesthetic quality as well, a delicious humor.

\n

And of course, no matter how much you profess your love of mere usefulness, you should never actually end up deliberately believing a useful false statement.

\n

So don't oversimplify the relationship between loving truth and loving usefulness.  It's not one or the other.  It's complicated, which is not necessarily a defect in the moral aesthetics of single events.

\n

But morality and aesthetics alone, believing that one ought to be \"rational\" or that certain ways of thinking are \"beautiful\", will not lead you to the center of the Way.  It wouldn't have gotten humanity out of the authority-hole.

\n

In Circular Altruism, I discussed this dilemma:  Which of these options would you prefer:

\n
  1. Save 400 lives, with certainty.
  2. Save 500 lives, 90% probability; save no lives, 10% probability.
\n

You may be tempted to grandstand, saying, \"How dare you gamble with people's lives?\"  Even if you, yourself, are one of the 500—but you don't know which one—you may still be tempted to rely on the comforting feeling of certainty, because our own lives are often worth less to us than a good intuition.

\n

But if your precious daughter is one of the 500, and you don't know which one, then, perhaps, you may feel more impelled to shut up and multiply—to notice that you have an 80% chance of saving her in the first case, and a 90% chance of saving her in the second.
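Spelled out as a sketch (with your daughter equally likely to be any of the 500):

```python
# Chance that one particular, unknown member of the 500 survives.
p_option_1 = 1.0 * (400 / 500)   # certain rescue of 400 of the 500
p_option_2 = 0.9 * (500 / 500)   # 90% chance of rescuing all 500
print(p_option_1, p_option_2)    # 0.8 0.9
```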

\n

And yes, everyone in that crowd is someone's son or daughter.  Which, in turn, suggests that we should pick the second option as altruists, as well as concerned parents.

\n

My point is not to suggest that one person's life is more valuable than 499 people.  What I am trying to say is that more than your own life has to be at stake, before a person becomes desperate enough to resort to math.

\n

What if you believe that it is \"rational\" to choose the certainty of option 1?  Lots of people think that \"rationality\" is about choosing only methods that are certain to work, and rejecting all uncertainty.  But, hopefully, you care more about your daughter's life than about \"rationality\".

\n

Will pride in your own virtue as a rationalist save you?  Not if you believe that it is virtuous to choose certainty.  You will only be able to learn something about rationality if your daughter's life matters more to you than your pride as a rationalist.

\n

You may even learn something about rationality from the experience, if you are already far enough grown in your Art to say, \"I must have had the wrong conception of rationality,\" and not, \"Look at how rationality gave me the wrong answer!\"

\n

(The essential difficulty in becoming a master rationalist is that you need quite a bit of rationality to bootstrap the learning process.)

\n

Is your belief that you ought to be rational, more important than your life?  Because, as I've previously observed, risking your life isn't comparatively all that scary.  Being the lone voice of dissent in the crowd and having everyone look at you funny is much scarier than a mere threat to your life, according to the revealed preferences of teenagers who drink at parties and then drive home.  It will take something terribly important to make you willing to leave the pack.  A threat to your life won't be enough.

\n

Is your will to rationality stronger than your pride?  Can it be, if your will to rationality stems from your pride in your self-image as a rationalist?  It's helpful—very helpful—to have a self-image which says that you are the sort of person who confronts harsh truth.  It's helpful to have too much self-respect to knowingly lie to yourself or refuse to face evidence.  But there may come a time when you have to admit that you've been doing rationality all wrong.  Then your pride, your self-image as a rationalist, may make that too hard to face.

\n

If you've prided yourself on believing what the Great Teacher says—even when it seems harsh, even when you'd rather not—that may make it all the more bitter a pill to swallow, to admit that the Great Teacher is a fraud, and all your noble self-sacrifice was for naught.

\n

Where do you get the will to keep moving forward?

\n

When I look back at my own personal journey toward rationality—not just humanity's historical journey—well, I grew up believing very strongly that I ought to be rational.  This made me an above-average Traditional Rationalist a la Feynman and Heinlein, and nothing more.  It did not drive me to go beyond the teachings I had received.  I only began to grow further as a rationalist once I had something terribly important that I needed to do.  Something more important than my pride as a rationalist, never mind my life.

\n

Only when you become more wedded to success than to any of your beloved techniques of rationality, do you begin to appreciate these words of Miyamoto Musashi:

\n
\n

\"You can win with a long weapon, and yet you can also win with a short weapon.  In short, the Way of the Ichi school is the spirit of winning, whatever the weapon and whatever its size.\"
        —Miyamoto Musashi, The Book of Five Rings

\n
\n

Don't mistake this for a specific teaching of rationality.  It describes how you learn the Way, beginning with a desperate need to succeed.  No one masters the Way until more than their life is at stake.  More than their comfort, more even than their pride.

\n

You can't just pick out a Cause like that because you feel you need a hobby.  Go looking for a \"good cause\", and your mind will just fill in a standard cliche.  Learn how to multiply, and perhaps you will recognize a drastically important cause when you see one.

\n

But if you have a cause like that, it is right and proper to wield your rationality in its service.

\n

To strictly subordinate the aesthetics of rationality to a higher cause, is part of the aesthetic of rationality.  You should pay attention to that aesthetic:  You will never master rationality well enough to win with any weapon, if you do not appreciate the beauty for its own sake.

" } }, { "_id": "BL9DuE2iTCkrnuYzx", "title": "Trust in Bayes", "pageUrl": "https://www.lesswrong.com/posts/BL9DuE2iTCkrnuYzx/trust-in-bayes", "postedAt": "2008-01-29T23:12:36.000Z", "baseScore": 40, "voteCount": 36, "commentCount": 28, "url": null, "contents": { "documentId": "BL9DuE2iTCkrnuYzx", "html": "

Followup to:  Beautiful Probability, Trust in Math

\n\n

In Trust in Math, I presented an algebraic proof that 1 = 2, which turned out to be - surprise surprise - flawed.  Trusting that algebra, correctly used, will not carry you to an absurd result is not a matter of blind faith.  When we see apparent evidence against algebra's trustworthiness, we should also take into account the massive evidence favoring algebra which we have previously encountered.  We should take into account our past experience of seeming contradictions which turned out to be themselves flawed.  Based on our inductive expectation that we will likely have a similar experience in the future, we look for a flaw in the contrary evidence.

\n\n

This seems like a dangerous way to think, and it is dangerous, as I noted in "Trust in Math".  But, faced with a proof that 2 = 1, I can't convince myself that it's genuinely reasonable to think any other way.

The novice goes astray and says, "The Art failed me."
The master goes astray and says, "I failed my Art."

To get yourself to stop saying "The Art failed me", it's helpful to know the history of people crying wolf on Bayesian math - to be familiar with seeming paradoxes that have been discovered and refuted.  Here an invaluable resource is "Paradoxes of Probability Theory", Chapter 15 of E. T. Jaynes's Probability Theory: The Logic of Science (available online).

\n\n

I'll illustrate with one of Jaynes's examples:

\n\n

If you have a conditional probability distribution P(X|C), the unconditional probability P(X) should be a weighted average of the various P(X|C), and therefore intermediate between the various P(X|C) in value - somewhere between the minimum and the maximum P(X|C).  That is:  If you flip a coin before rolling a die, and the die is a four-sided die if the coin comes up heads, or ten-sided if the coin comes up tails, then (even without doing an exact calculation) you know that the compound probability of rolling a "1" occupies the range [0.1, 0.25].
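Here is that coin-and-die example computed explicitly (a small sketch; exact fractions avoid floating-point quibbles):

```python
from fractions import Fraction

p_heads = Fraction(1, 2)
p_one_given_d4 = Fraction(1, 4)    # four-sided die, on heads
p_one_given_d10 = Fraction(1, 10)  # ten-sided die, on tails

# P(X) is the probability-weighted average of the conditionals...
p_one = p_heads * p_one_given_d4 + (1 - p_heads) * p_one_given_d10
print(p_one)  # 7/40 = 0.175
# ...and therefore lands between the minimum and maximum conditional.
assert Fraction(1, 10) <= p_one <= Fraction(1, 4)
```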

\n\n

Now suppose a two-dimensional array, M cells wide and N cells tall, with positions written (i, j) with i as the horizontal coordinate and j as the vertical coordinate.  And suppose a uniform probability distribution over the array: p(i, j) = 1/MN for all i, j.  Finally, let X be the event that i < j.  We'll be asking about P(X).

\n\n

If we think about just the top row - that is, condition on the information j=N - then the probability of X is p(i < N) = (N - 1)/M, or 1 if N > M.

\n\n

If we think about just the bottom row - condition on the information j=1 - then the probability of X is p(i < 1) = 0.

\n\n

Similarly, if we think about just the rightmost column, condition on i=M, then the probability of X is p(j > M) = (N - M)/N, or 0 if M > N.

\n\n

And thinking about the leftmost column, conditioning on i=1, the probability of X is p(j > 1) = (N - 1)/N.

\n\n

So for the whole array, the probability of X must be between (N - 1)/M and 0 (by reasoning about rows) or between (N - 1)/N and (N - M)/N (by reasoning about columns).

\n\n

This is actually correct, so no paradox so far.  If the array is 5 x 7, then the probability of X on the top row is 1, and the probability of X on the bottom row is 0.  The probability of X in the rightmost column is 2/7, and the probability of X in the leftmost column is 6/7.  The probability of X over the whole array is 4/7, which obeys both constraints.
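All five numbers can be checked by brute force (a small sketch of the 5 x 7 case):

```python
from fractions import Fraction

M, N = 5, 7  # width (i runs 1..M) and height (j runs 1..N)

def p_x(cells):
    """Probability of i < j under a uniform distribution on `cells`."""
    return Fraction(sum(1 for i, j in cells if i < j), len(cells))

print(p_x([(i, N) for i in range(1, M + 1)]))  # top row: 1
print(p_x([(i, 1) for i in range(1, M + 1)]))  # bottom row: 0
print(p_x([(M, j) for j in range(1, N + 1)]))  # rightmost column: 2/7
print(p_x([(1, j) for j in range(1, N + 1)]))  # leftmost column: 6/7
print(p_x([(i, j) for i in range(1, M + 1)
                  for j in range(1, N + 1)]))  # whole array: 4/7
```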

\n\n

But now suppose that the array is infinite.  Reasoning about the rows, we see that, for every row, there is a finite number of points where i < j, and an infinite number of points where i >= j.  So for every row, the probability of the event X must be 0.  Reasoning about the columns, we see in every column a finite number of points where j <= i, and an infinite number of points where i < j.  So for every column, the probability of the event X must be 1.  This is a paradox, since the compound probability of X must be both a weighted mix of the probability for each row, and a weighted mix of the probability for each column.

\n\n

If to you this seems like a perfectly reasonable paradox, then you really need to read Jaynes's "Paradoxes of Probability Theory" all the way through.

\n\n

In "paradoxes" of algebra, there is always an illegal operation that produces the contradiction.  For algebraic paradoxes, the illegal operation is usually a disguised division by zero.  For "paradoxes" of probability theory and decision theory, the illegal operation is usually assuming an infinity that has not been obtained as the limit of a finite calculation.

\n\n

In the case above, the limiting probability of i < j depends on the ratio of M to N - approaching 1 - M/(2N) when M <= N, and N/(2M) when N <= M - so just assuming that M and N are "infinite" will naturally produce all sorts of paradoxes; the answer depends on how M and N approach infinity.
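A short sketch makes the path-dependence visible - send M and N to infinity along different ratios, and the "answer" changes:

```python
from fractions import Fraction

def p_x(M, N):
    """P(i < j) on an M-wide, N-tall uniform array."""
    hits = sum(1 for i in range(1, M + 1)
                 for j in range(1, N + 1) if i < j)
    return Fraction(hits, M * N)

for k in (10, 100, 1000):
    print(float(p_x(k, k)), float(p_x(k, 2 * k)), float(p_x(2 * k, k)))
# Tends to 1/2 along M = N, to 3/4 along N = 2M, to 1/4 along M = 2N.
```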

\n\n

It's all too tempting to just talk about infinities, instead of constructing them.  As Jaynes observes, this is a particularly pernicious habit because it may work 95% of the time and then lead you into trouble on the last 5% of occasions - like how the really deadly bugs in a computer program are not those that appear all the time, but those that only appear 1% of the time.

\n\n

Apparently there was a whole cottage industry in this kind of paradox, where, assuming infinite sets, the marginal probability seemed not to be inside the conditional probabilities of some partition, and this was called "nonconglomerability".  Jaynes again:

"Obviously, nonconglomerability cannot arise from a correct application of the rules of probability on finite sets.  It cannot, therefore, occur in an infinite set which is approached as a well-defined limit of a sequence of finite sets.  Yet nonconglomerability has become a minor industry, with a large and growing literature.  There are writers who believe that it is a real phenomenon, and that they are proving theorems about the circumstances in which it occurs, which are important for the foundations of probability theory."\n\n\n\n

We recently ran into a similar problem here on Overcoming Bias:  A commenter cited a paper, "An Air-Tight Dutch Book" by Vann McGee, which purports to show that if your utility function is not bounded, then a dutch book can be constructed against you.  The paper is gated, but Neel Krishnaswami passed me a copy.  A summary of McGee's argument can also be found in the ungated paper "Bayesianism, infinite decisions, and binding".

\n\n

Rephrasing somewhat, McGee's argument goes as follows.  Suppose that you are an expected utility maximizer and your utility function is unbounded in some quantity, such as human lives or proving math theorems.  We'll write $27 to indicate a quantity worth 27 units of utility; by the hypothesis of an unbounded utility function, you can always find some amount of fun that is worth at least 27 units of utility (where the reference unit can be any positive change in the status quo).

\n\n

Two important notes are that (1) this does not require your utility function to be linear in anything, just that it grow monotonically and without bound; and (2) your utility function does not have to assign infinite utility to any outcome, just ever-larger finite utilities to ever-larger finite outcomes.

\n\n

Now for the seeming Dutch Book - a sequence of bets that (McGee argues) a Bayesian will take, but which produce a guaranteed loss.

\n\n

McGee produces a fair coin, and proceeds to offer us the following bet:  We lose $1 (one unit of utility) if the coin comes up "tails" on the first round, and gain $3 (three units of utility) if the coin comes up "heads" on the first round and "tails" on the second round.  Otherwise nothing happens - the bet has no payoff.  The probability of the first outcome in the bet is 1/2, so it has an expected payoff of -$.50; and the probability of the second outcome is 1/4, so it has an expected payoff of +$.75.  All other outcomes have no payoff, so the net value is +$0.25.  We take the bet.

\n\n

Now McGee offers us a second bet, which loses $4 if the coin first comes up "tails" on the second round, but pays $9 if the coin first comes up "tails" on the third round, with no consequence otherwise.  The probabilities of a fair coin producing a sequence that begins HT or HHT are respectively 1/4 and 1/8, so the expected values are -$1.00 and +$1.125.  The net expectation is positive, so we take this bet as well.

\n\n

Then McGee offers us a third bet which loses $10 if the coin first comes up "tails" on the third round, but gains $21 if the coin first comes up "tails" on the fourth round; then a bet which loses $22 if the coin shows "tails" first on round 4 and gains $45 if the coin shows "tails" first on round 5.  Etc.

\n\n

If we accept all these bets together, then we lose $1 no matter when the coin first shows "tails".  So, McGee says, we have accepted a Dutch Book.  From which McGee argues that every rational mind must have a finite upper bound on its utility function.
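Both halves of the trap are easy to verify directly - every bet has positive expected value on its own, yet the whole package pays -$1 no matter when the coin first shows tails.  A sketch, using the pattern above (each gain is twice the loss plus one; each new loss is the previous gain plus one):

```python
from fractions import Fraction

def bets(n):
    """First n bets as (loss if first tails on round k,
    gain if first tails on round k+1): (1, 3), (4, 9), (10, 21), ..."""
    out, loss = [], 1
    for _ in range(n):
        gain = 2 * loss + 1
        out.append((loss, gain))
        loss = gain + 1
    return out

def payoff(first_tails_round, accepted):
    """Total winnings when the coin first shows tails on that round."""
    return sum((g if first_tails_round == k + 1 else 0)
               - (l if first_tails_round == k else 0)
               for k, (l, g) in enumerate(accepted, start=1))

accepted = bets(20)
for k, (l, g) in enumerate(accepted[:4], start=1):
    print(k, -Fraction(l, 2**k) + Fraction(g, 2**(k + 1)))
# Per-bet expected values: 1/4, 1/8, 1/16, 1/32 - all positive.
print({payoff(r, accepted) for r in range(1, 20)})  # {-1}
```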

\n\n

Now, y'know, there's a number of replies I could give to this.  I won't challenge the possibility of the game, which would be my typical response as an infinite set atheist, because I never actually encounter an infinity.  I can imagine living in a universe where McGee actually does have the ability to increase his resources exponentially, and so could actually offer me that series of bets.

\n\n

But if McGee is allowed to deploy a scenario where the expected value of the infinite sequence does not equal the limit of the expected values of the finite sequences, then why should a good Bayesian's decision in the infinite case equal the limit of the Bayesian's decisions in the finite cases?

\n\n

It's easy to demonstrate that for every finite N, a Bayesian will accept the first N bets.  It's also easy to demonstrate that for every finite N, accepting the first N bets has a positive expected payoff.  The decision in every finite scenario is to accept all N bets - from which you might say that the "limit" decision is to accept all offered bets - and the limit of the expected payoffs of these finite decisions goes to +$.50.
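A sketch of those partial expectations, reusing the bet pattern from the previous snippet:

```python
from fractions import Fraction

def bets(n):
    out, loss = [], 1
    for _ in range(n):
        out.append((loss, 2 * loss + 1))
        loss = out[-1][1] + 1
    return out

def ev_first_n(n):
    """Expected payoff of accepting only the first n bets."""
    return sum(-Fraction(l, 2**k) + Fraction(g, 2**(k + 1))
               for k, (l, g) in enumerate(bets(n), start=1))

for n in (1, 2, 5, 10, 20):
    print(n, float(ev_first_n(n)))
# 0.25, 0.375, 0.484375, ..., climbing toward +$0.50 from below.
```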

\n\n

But now McGee wants to talk about the infinite scenario directly, rather than as a limiting strategy that applies to any one of a series of finite scenarios.  Jaynes would not let you get away with this at all, but I accept that I might live in an unbounded universe and I might just have to shut up and deal with infinite games.  Well, if so, the expected payoff of the infinite scenario does not equal the limit of the expected payoffs of the finite scenarios.  One equals -$1, the other equals +$.50.

\n\n

So there is no particular reason why the rational decision in the infinite scenario should equal the limit of the rational decisions in the finite scenarios, given that the payoff in the infinite scenario does not equal the limit of the payoffs in the finite scenarios.

\n\n

And from this, McGee wants to deduce that all rational entities must have bounded utility functions?  If it turns out that I live in an infinite universe, you can bet that there isn't any positive real number such that I would decline to have more fun than that.

\n\n

Arntzenius, Elga, and Hawthorne give a more detailed argument in "Bayesianism, Infinite Decisions, and Binding" that the concept of provable dominance only applies to finite option sets and not infinite option sets.  If you show me a compound planning problem with a sub-option X, having alternatives X1 and X2 such that for every possible compound plan I am better off taking X1 than X2, then this shows that a maximal plan must include X1 when the set of possible plans is finite.  But when there is no maximal plan, no "optimal" decision - because there are an infinite number of possible plans whose upper bound (if any) isn't in the set - then proving local dominance obviously can't show anything about the "optimal" decision.  See Arntzenius et al.'s section on "Satan's Apple" for their full argument.

\n\n

An even better version of McGee's scenario, in my opinion, would use a different sequence of bets:  -$1 on the first round versus +$6 on the second round; -$6 on the second round and +$20 on the third round; -$20 on the third round and +$56 on the fourth round.  Now we've picked the sequence so that if you accept all bets up to the Nth bet, your expected value is $N.
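Checking that this sequence behaves as advertised (a sketch; the gains follow gain = 2*loss + 2^(k+1), which is exactly what makes each bet worth +$1 in expectation):

```python
from fractions import Fraction

def bets(n):
    """(-1, +6), (-6, +20), (-20, +56), ...: bet k loses `loss` if the
    first tails is on round k, gains `gain` if it is on round k+1."""
    out, loss = [], 1
    for k in range(1, n + 1):
        gain = 2 * loss + 2**(k + 1)
        out.append((loss, gain))
        loss = gain
    return out

def ev_first_n(n):
    return sum(-Fraction(l, 2**k) + Fraction(g, 2**(k + 1))
               for k, (l, g) in enumerate(bets(n), start=1))

print([int(ev_first_n(n)) for n in (1, 2, 3, 10)])  # [1, 2, 3, 10]

def payoff(r, bs):  # winnings if the first tails is on round r
    return sum((g if r == k + 1 else 0) - (l if r == k else 0)
               for k, (l, g) in enumerate(bs, start=1))

print(payoff(1, bets(20)), {payoff(r, bets(20)) for r in range(2, 20)})
# -1 {0}: accept everything and you get -$1 or nothing, never $N.
```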

\n\n

So really McGee's argument can be simplified as follows:  Pick any positive integer, and I'll give you that amount of money.  Clearly you shouldn't pick 1, because 1 is always inferior to 2 and above.  Clearly you shouldn't pick 2, because it's always inferior to 3 and above.  By induction, you shouldn't pick any number, so you don't get any money.  So (McGee concludes) if you're really rational, there must be some upper bound on how much you care about anything.

\n\n

(Actually, McGee's proposed upper bound doesn't really solve anything.  Once you allow infinite times, you can be put into the same dilemma if I offer you $.50, but then offer to trade $.50 today for $.75 tomorrow, and then, tomorrow, offer to trade $.75 now for $.875 the next day, and so on.  Even if my utility function is bounded at $1, this doesn't save me from problems where the limit of the payoffs of the finite plans doesn't seem to equal the payoff of the limit of the finite plans.  See the comments for further arguments.)
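The same structure in miniature (a sketch, assuming each offer halves the remaining distance to the $1 bound):

```python
# Holdings on day n if you finally stop trading: 1 - 2**-(n+1),
# i.e. 0.5, 0.75, 0.875, 0.9375, ... - every single swap is a strict
# gain, but the agent who accepts every swap never collects anything.
def value_if_stopped(n):
    return 1 - 2 ** -(n + 1)

print([value_if_stopped(n) for n in range(5)])
# [0.5, 0.75, 0.875, 0.9375, 0.96875]
```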

\n\n

The meta-moral is that Bayesian probability theory and decision theory are math: the formalism provably follows from axioms, and the formalism provably obeys those axioms.  When someone shows you a purported paradox of probability theory or decision theory, don't shrug and say, "Well, I guess 2 = 1 in that case" or "Haha, look how dumb Bayesians are" or "The Art failed me... guess I'll resort to irrationality."  Look for the division by zero; or the infinity that is assumed rather than being constructed as the limit of a finite operation; or the use of different implicit background knowledge in different parts of the calculation; or the improper prior that is not treated as the limit of a series of proper priors... something illegal.

\n\n

Trust Bayes.  Bayes has earned it.

" } }, { "_id": "r5MSQ83gtbjWRBDWJ", "title": "The \"Intuitions\" Behind \"Utilitarianism\"", "pageUrl": "https://www.lesswrong.com/posts/r5MSQ83gtbjWRBDWJ/the-intuitions-behind-utilitarianism", "postedAt": "2008-01-28T16:29:06.000Z", "baseScore": 90, "voteCount": 84, "commentCount": 208, "url": null, "contents": { "documentId": "r5MSQ83gtbjWRBDWJ", "html": "

(Still no Internet access.  Hopefully they manage to repair the DSL today.)

\n\n

I haven't said much about metaethics - the nature of morality - because that has a forward dependency on a discussion of the Mind Projection Fallacy that I haven't gotten to yet.  I used to be very confused about metaethics.  After my confusion finally cleared up, I did a postmortem on my previous thoughts.  I found that my object-level moral reasoning had been valuable and my meta-level moral reasoning had been worse than useless.  And this appears to be a general syndrome - people do much better when discussing whether torture is good or bad than when they discuss the meaning of "good" and "bad".  Thus, I deem it prudent to keep moral discussions on the object level wherever I possibly can.

\n\n

Occasionally people object to any discussion of morality on the grounds that morality doesn't exist, and in lieu of jumping over the forward dependency to explain that "exist" is not the right term to use here, I generally say, "But what do you do anyway?" and take the discussion back down to the object level.

\n\n

Paul Gowder, though, has pointed out that both the idea of choosing a googolplex dust specks in a googolplex eyes over 50 years of torture for one person, and the idea of "utilitarianism", depend on "intuition".  He says I've argued that the two are not compatible, but charges me with failing to argue for the utilitarian intuitions that I appeal to.

Now "intuition" is not how I would describe the computations that\nunderlie human morality and distinguish us, as moralists, from an ideal philosopher of perfect emptiness and/or a rock. \nBut I am okay with using the word "intuition" as a term of art, bearing\nin mind that "intuition" in this sense is not to be contrasted to\nreason, but is, rather, the cognitive building block out of which both\nlong verbal arguments and fast perceptual arguments are constructed.

\n\n\n\n

I see the project of morality as a project of renormalizing intuition.  We have intuitions about things that seem desirable or undesirable, intuitions about actions that are right or wrong, intuitions about how to resolve conflicting intuitions, intuitions about how to systematize specific intuitions into general principles.

\n

Delete all the intuitions, and you aren't left with an ideal philosopher of perfect emptiness, you're left with a rock.

\n\n

Keep all your specific intuitions and refuse to build upon the reflective ones, and you aren't left with an ideal philosopher of perfect spontaneity and genuineness, you're left with a grunting caveperson running in circles, due to cyclical preferences and similar inconsistencies.

\n\n

"Intuition", as a term of art, is not a curse\nword when it comes to morality - there is nothing else to argue from.  Even modus\nponens is an "intuition" in this sense - it's just that modus ponens still seems\nlike a good idea after being formalized, reflected on, extrapolated out\nto see if it has sensible consequences, etcetera.

\n\n

So that is "intuition".

\n\n

However, Gowder did not say what he meant by "utilitarianism".  Does utilitarianism say...

\n\n
  1. That right actions are strictly determined by good consequences?
  2. That praiseworthy actions depend on justifiable expectations of good consequences?
  3. That consequences should normatively be discounted by their probability, so that a 50% probability of something bad should weigh exactly half as much in our tradeoffs?
  4. That virtuous actions always correspond to maximizing expected utility under some utility function?
  5. That two harmful events are worse than one?
  6. That two independent occurrences of a harm (not to the same person, not interacting with each other) are exactly twice as bad as one?
  7. That for any two harms A and B, with A much worse than B, there exists some tiny probability such that gambling on this probability of A is preferable to a certainty of B?
\n\n

If you say that I advocate something, or that my argument depends on something, and that it is wrong, do please specify what this thingy is... anyway, I accept 3, 5, 6, and 7, but not 4; I am not sure about the phrasing of 1; and 2 is true, I guess, but phrased in a rather solipsistic and selfish fashion: you should not worry about being praiseworthy.

\n\n

Now, what are the "intuitions" upon which my "utilitarianism" depends?

\n\n

This is a deepish sort of topic, but I'll take a quick stab at it.

\n\n

First of all, it's not just that someone presented me with a list of statements like those above, and I decided which ones sounded "intuitive".  Among other things, if you try to violate "utilitarianism", you run into paradoxes, contradictions, circular preferences, and other things that aren't symptoms of moral wrongness so much as moral incoherence.

\n\n

After you think about moral problems for a while, and also find new truths about the world, and even discover disturbing facts about how you yourself work, you often end up with different moral opinions than when you started out.  This does not quite define moral progress, but it is how we experience moral progress.

\n

As part of my experienced moral progress, I've drawn a conceptual separation between questions of type Where should we go? and questions of type How should we get there?  (Could that be what Gowder means by saying I'm "utilitarian"?)

\n\n

The question of where a road goes - where it leads - you can answer by traveling the road and finding out.  If you have a false belief about where the road leads, this falsity can be destroyed by the truth in a very direct and straightforward manner.

\n\n

When it comes to wanting to go to a particular place, this want is not entirely immune from the destructive powers of truth.  You could go there and find that you regret it afterward (which does not define moral error, but is how we experience moral error).

\n\n

But, even so, wanting to be in a particular place seems worth distinguishing from wanting to take a particular road to a particular place.

\n\n

Our intuitions about where to go are arguable enough, but our intuitions about how to get there are frankly messed up.  After the two hundred and eighty-seventh research study showing that people will chop their own feet off if you frame the problem the wrong way, you start to distrust first impressions.

\n\n

When you've read enough research on scope insensitivity - people will pay only 28% more to protect all 57 wilderness areas in Ontario than one area, people will pay the same amount to save 50,000 lives as 5,000 lives... that sort of thing...

\n\n

Well, the worst case of scope insensitivity I've ever heard of was described here by Slovic:

Other recent research shows similar results.  Two Israeli psychologists asked people to contribute to a costly life-saving treatment.  They could offer that contribution to a group of eight sick children, or to an individual child selected from the group.  The target amount needed to save the child (or children) was the same in both cases.  Contributions to individual group members far outweighed the contributions to the entire group.

There's other research along similar lines, but I'm just presenting one example, 'cause, y'know, eight examples would probably have less impact.

\n\n

If you know the general experimental paradigm, then the reason for the above behavior is pretty obvious - focusing your attention on a single child creates more emotional arousal than trying to distribute attention around eight children simultaneously.  So people are willing to pay more to help one child than to help eight.

\n\n\n\n

Now, you could look at this intuition, and think it was revealing some kind of incredibly deep moral truth which shows that one child's good fortune is somehow devalued by the other children's good fortune.

\n\n

But what about the billions of other children in the world?  Why isn't it a bad idea to help this one child, when that causes the value of all the other children to go down?  How can it be significantly better to have 1,329,342,410 happy children than 1,329,342,409, but then somewhat worse to have seven more at 1,329,342,417?

\n\n

Or you could look at that and say:  "The intuition is wrong: the brain can't successfully multiply by eight and get a larger quantity than it started with.  But it ought to, normatively speaking."

\n\n

And once you realize that the brain can't multiply by eight, then the other cases of scope neglect stop seeming to reveal some fundamental truth about 50,000 lives being worth just the same effort as 5,000 lives, or whatever.  You don't get the impression you're looking at the revelation of a deep moral truth about nonagglomerative utilities.  It's just that the brain doesn't goddamn multiply.  Quantities get thrown out the window.

\n\n

If you have $100 to spend, and you spend $20 on each of 5 efforts to save 5,000 lives apiece, you will do worse than if you spend $100 on a single effort to save 50,000 lives.  Likewise if such choices are made by 10 different people, rather than the same person.  As soon as you start believing that it is better to save 50,000 lives than 25,000 lives, that simple preference of final destinations has implications for the choice of paths, when you consider five different events that save 5,000 lives.
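The arithmetic, as a sketch (reading the example as $20 buying one 5,000-life success and $100 buying one 50,000-life success):

```python
five_small_efforts = 5 * 5_000   # five events, 5,000 lives each
one_big_effort = 50_000          # one event, 50,000 lives
print(five_small_efforts, one_big_effort)  # 25000 50000
```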

\n\n

(It is a general principle that Bayesians see no difference between the long-run answer and the short-run answer; you never get two different answers from computing the same question two different ways.  But the long run is a helpful intuition pump, so I am talking about it anyway.)

\n\n

The aggregative valuation strategy of "shut up and multiply" arises from the simple preference to have more of something - to save as many lives as possible - when you have to describe general principles for choosing more than once, acting more than once, planning at more than one time.

\n\n

Aggregation also arises from claiming that the local choice to save one life doesn't depend on how many lives already exist, far away on the other side of the planet, or far away on the other side of the universe.  Three lives are one and one and one.  No matter how many billions are doing better, or doing worse. 3 = 1 + 1 + 1, no matter what other quantities you add to both sides of the equation.  And if you add another life you get 4 = 1 + 1 + 1 + 1.  That's aggregation.

\n\n

When you've read enough heuristics and biases research, and enough coherence and uniqueness proofs for Bayesian probabilities and expected utility, and you've seen the "Dutch book" and "money pump" effects that penalize trying to handle uncertain outcomes any other way, then you don't see the preference reversals in the Allais Paradox as revealing some incredibly deep moral truth about the intrinsic value of certainty.  It just goes to show that the brain doesn't goddamn multiply.

\n\n

The primitive, perceptual intuitions that make a choice "feel good" don't handle probabilistic pathways through time very skillfully, especially when the probabilities have been expressed symbolically rather than experienced as a frequency.  So you reflect, devise more trustworthy logics, and think it through in words.

\n\n

When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don't think that their protestations reveal some deep truth about incommensurable utilities.

\n\n

Part of it, clearly, is that primitive intuitions don't successfully diminish the emotional impact of symbols standing for small quantities - anything you talk about seems like "an amount worth considering".

\n\n

And part of it has to do with preferring unconditional social rules to conditional social rules.  Conditional rules seem weaker, seem more subject to manipulation.  If there's any loophole that lets the government legally commit torture, then the government will drive a truck through that loophole.

\n\n

So it seems like there should be an unconditional social injunction against preferring money to life, and no "but" following it.  Not even "but a thousand dollars isn't worth a 0.0000000001% probability of saving a life".  Though the latter choice, of course, is revealed every time we sneeze without calling a doctor.

\n\n

The rhetoric of sacredness gets bonus points for seeming to express an unlimited commitment, an unconditional refusal that signals trustworthiness and refusal to compromise.  So you conclude that moral rhetoric espouses qualitative distinctions, because espousing a quantitative tradeoff would sound like you were plotting to defect.

\n\n

On such occasions, people vigorously want to throw quantities out the window, and they get upset if you try to bring quantities back in, because quantities sound like conditions that would weaken the rule.

But you don't conclude that there are actually two tiers of utility with lexical ordering.  You don't conclude that there is actually an infinitely sharp moral gradient, some atom that moves a Planck distance (in our continuous physical universe) and sends a utility from 0 to infinity.  You don't conclude that utilities must be expressed using hyper-real numbers.  Because the lower tier would simply vanish in any equation.  It would never be worth the tiniest effort to recalculate for it.  All decisions would be determined by the upper tier, and all thought spent thinking about the upper tier only, if the upper tier genuinely had lexical priority.

\n\n

As Peter Norvig once pointed out, if Asimov's robots had strict priority for the First Law of Robotics ("A robot shall not harm a human being, nor through inaction allow a human being to come to harm") then no robot's behavior would ever show any sign of the other two Laws; there would always be some tiny First Law factor that would be sufficient to determine the decision.

\n\n

Whatever value is worth thinking about at all, must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off.  When you reveal a value, you reveal a utility.

\n\n

I don't say that morality should always be simple.  I've already said that the meaning of music is more than happiness alone, more than just a pleasure center lighting up.  I would rather see music composed by people than by nonsentient machine learning algorithms, so that someone should have the joy of composition; I care about the journey, as well as the destination.  And I am ready to hear if you tell me that the value of music is deeper, and involves more complications, than I realize - that the valuation of this one event is more complex than I know. 

\n\n

But that's for one event.  When it comes to multiplying by quantities and probabilities, complication is to be avoided - at least if you care more about the destination than the journey.  When you've reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as "Shut up and multiply."

\n\n

Where music is concerned, I care about the journey.

\n\n

When lives are at stake, I shut up and multiply.

\n\n

It is more important that lives be saved, than that we conform to any particular ritual in saving them.  And the optimal path to that destination is governed by laws that are simple, because they are math.

\n\n

And that's why I'm a utilitarian - at least when I am doing something that is overwhelmingly more important than my own feelings about it - which is most of the time, because there are not many utilitarians, and many things left undone.

\n\n

</rant>

" } }, { "_id": "vKtrgLkpsbvb3Y77Y", "title": "Rationality Quotes 9", "pageUrl": "https://www.lesswrong.com/posts/vKtrgLkpsbvb3Y77Y/rationality-quotes-9", "postedAt": "2008-01-27T18:00:00.000Z", "baseScore": 12, "voteCount": 10, "commentCount": 9, "url": null, "contents": { "documentId": "vKtrgLkpsbvb3Y77Y", "html": "

\"A world ought to have a few genuine good guys, and not just a spectrum of people running from bad to worse.\"
        -- Glen Cook, A Shadow of All Night Falling

\n

\"You couldn't get a clue during the clue mating season in a field full of horny clues if you smeared your body with clue musk and did the clue mating dance.\"
        -- Edward Flaherty

\n

\"We all enter this world in the same way: naked; screaming; soaked in blood. But if you live your life right, that kind of thing doesn't have to stop there.\"
        -- Dana Gould

\n

\"Love is a snowmobile racing across the tundra and then suddenly it flips over, pinning you underneath. At night, the ice weasels come.\"
        -- Matt Groening

\n

\"Things do get better, all the time, maybe just not as fast as I'd like. I do what I can. Don't ask me to hate, too.\"
        -- Michael Wiik

\n

\n

\"Political or military commentators, like astrologers, can survive almost any mistake, because their more devoted followers do not look to them for an appraisal of the facts but for the stimulation of nationalistic loyalties.\"
        -- George Orwell, Notes on Nationalism

\n

\"People are always amazed by how much \"free time\" I have.
They're also amazed that I don't know who Ally McBeal is.
Frankly, I'm amazed that they can't make the connection.\"
        -- Robert Wenzlaff

\n

\"Throughout the technology revolution, mankind has consistently sought to improve life by reducing the number of tasks that require physical activity, then sought to counteract the inevitable weight gain by manufacturing food that only looks like food and barely tastes like it.\"
        -- Samuel Stoddard

\n

\"Any person who claims to have deep feeling for other human beings should think a long, long time before he votes to have other men kept behind bars - caged. I am not saying there shouldn't be prisons, but there shouldn't be bars. Behind bars, a man never reforms. He will never forget. He never will get completely over the memory of the bars.\"
        -- Malcolm X

\n

\"What funding committee will agree to fund a book describing an entire new field that has never before been dreamt of? Committees base their conclusions on a shared understanding of a common body of knowledge. Their members are drawn from an existing society of experts to evaluate the next incremental improvement. What do you do when there are no experts? Who lays claim to expertise in nanomedicine? Who has spent their life in this field which is just being conceived? No one. The committee process breaks down when we move into truly new terrain. It fails us just when failure is most expensive: at the beginnings of new things. Here we must fall back on individuals - individuals who are bold enough to believe in themselves when there are no experts to turn to for help and support. Individuals who are willing to back up their own beliefs with action, who will nurture the truly new and the truly groundbreaking without having to first seek the approval of others. For there are no others! On the far frontiers there are very few, and sometimes there is only one.\"
        -- Ralph Merkle, afterword to Nanomedicine

" } }, { "_id": "ZgRnaasvHaueoSKs8", "title": "Rationality Quotes 8", "pageUrl": "https://www.lesswrong.com/posts/ZgRnaasvHaueoSKs8/rationality-quotes-8", "postedAt": "2008-01-26T18:00:00.000Z", "baseScore": 11, "voteCount": 9, "commentCount": 5, "url": null, "contents": { "documentId": "ZgRnaasvHaueoSKs8", "html": "

\"Like a lot of people in the computer industry, Keith Malinowski had spent his whole life being the smartest person in the room, and like most of his fellows the experience left him with a rather high opinion of his opinions.\"
        -- Rick Cook, The Wizardry Quested

\n

\"The fact that we can become accustomed to anything, however disgusting at first, makes it necessary to examine carefully everything we have become accustomed to.\"
         -- George Bernard Shaw

\n

\"Beware `we should...', extend a hand to `how do I...'\"
        -- Alan Cox

\n

\"I assign higher credibility to an institution if liberals accuse it of being conservative and conservatives accuse it of being liberal.\"
        -- Alex F. Bokov

\n

\"An open mind is a great place for other people to dump their garbage.\"
        -- Rev. Rock, Church of the Subgenius

\n

\n

\"A.D. 1517: Martin Luther nails his 95 Theses to the church door and is promptly moderated down to (-1, Flamebait).\"
        -- Yu Suzuki

\n

\"A committee cannot be wrong - only divided. Once it resolves its division, then every part reinforces every other part and its rightness becomes unassailable.\"
         -- John M. Ford, The Princes of the Air

\n

\"Gee, thanks for the elementary lesson in metaphysics. And there I thought Being was an undifferentiated unity.\"
        -- Mark Walker

\n

\"The old political syllogism \"something must be done: this is something: therefore this will be done\" appears to be at work here, in spades.\"
        -- Charlie Stross

\n

\"The fact that I beat a drum has nothing to do with the fact that I do theoretical physics. Theoretical physics is a human endeavour, one of the higher developments of human beings — and this perpetual desire to prove that people who do it are human by showing that they do other things that a few other humans do (like playing bongo drums) is insulting to me. I am human enough to tell you to go to hell.\"
        -- Richard Feynman

\n

\"Java sucks.  Java on TV set top boxes will suck so hard it might well inhale people from off their sofa until their heads get wedged in the card slots.\"
        -- Jon Rabone

\n

\"Susan was bright enough to know that the phrase \"Someone ought to do something\" was not, by itself, a helpful one. People who used it never added the rider \"and that someone is me.\" But someone ought to do something, and right now the whole pool of someones consisted of her, and no one else.\"
        -- Terry Pratchett, Hogfather

" } }, { "_id": "naJZBB8FgLSFdAPRB", "title": "Rationality Quotes 7", "pageUrl": "https://www.lesswrong.com/posts/naJZBB8FgLSFdAPRB/rationality-quotes-7", "postedAt": "2008-01-25T18:00:00.000Z", "baseScore": 10, "voteCount": 8, "commentCount": 17, "url": null, "contents": { "documentId": "naJZBB8FgLSFdAPRB", "html": "

\"People don't buy three-eighths-inch drill bits. People buy three-eighths-inch holes.\"
        -- Michael Porter

\n

\"I feel more like I do now than I did a while ago.\"
        -- Arifel

\n

\"Know thyself, because in the end there's no one else.\"
        -- Living Colour, Solace of You

\n

\"Student motivation? I'm gonna start hooking you all up to electrodes.\"
        -- Kevin Giffhorn

\n

\"A rock pile ceases to be a rock pile the moment a single man contemplates it, bearing within him the image of a cathedral.\"
        -- Antoine de Saint-Exupery

\n

\n

\"I found one day in school a boy of medium size ill-treating a smaller boy. I expostulated, but he replied: 'The bigs hit me, so I hit the babies; that's fair.' In these words he epitomized the history of the human race.\"
        -- Bertrand Russell

\n

\"Open Source Software: There are days when I can't figure out whether I'm living in a Socialist utopia or a Libertarian one.\"
        -- Alex Future Bokov

\n

\"Supposing you got a crate of oranges that you opened, and you found all the top layer of oranges bad, you would not argue, `The underneath ones must be good, so as to redress the balance.' You would say, `Probably the whole lot is a bad consignment'; and that is really what a scientific person would say about the universe.\"
        -- Bertrand Russell

\n

\"This thing not only has all the earmarks of a hoax, it has \"HOAX\" branded into its flank and it's regulated by the U.S. Department of Hoaxes, Pranks and Shenanigans.\"
        -- Brunching Shuttlecocks

\n

\"Natural selection gave a big advantage to those who were good at spotting the pattern of a saber toothed tiger hiding in the bushes, but no advantage to those who were good at solving partial differential equations. It is not mere rhetoric to say that in an absolute sense a janitor has a more intellectually challenging job than a professor of mathematics.\"
        -- John K. Clark

\n

\"The simple fact is that non-violent means do not work against Evil. Gandhi's non-violent resistance against the British occupiers had some effect because Britain was wrong, but not Evil. The same is true of the success of non-violent civil rights resistance against de jure racism. Most people, including those in power, knew that what was being done was wrong. But Evil is an entirely different beast. Gandhi would have gone to the ovens had he attempted non-violent resistance against the Nazis. When one encounters Evil, the only solution is violence, actual or threatened. That's all Evil understands.\"
        -- Robert Bruce Thompson

" } }, { "_id": "62upYZQSsMsrLrFLM", "title": "Rationality Quotes 6", "pageUrl": "https://www.lesswrong.com/posts/62upYZQSsMsrLrFLM/rationality-quotes-6", "postedAt": "2008-01-24T18:00:00.000Z", "baseScore": 8, "voteCount": 8, "commentCount": 4, "url": null, "contents": { "documentId": "62upYZQSsMsrLrFLM", "html": "

\"Gnostic: Knowledge that is so pure that it cannot be explained or proven wrong.\"
        -- Glossary of Zen

\n

\"I haven't been wrong since 1961, when I thought I made a mistake.\"
        -- Bob Hudson

\n

\"Sometimes I envy that groundless confidence.\"
        -- Vandread

\n

\"I remember when I first learned the word 'omniscient'. I thought that was a reasonable goal to aim for.\"
        -- Samantha Atkins

\n

\"Our language for describing emotions is very crude... that's what music is for, I guess.\"
        -- Ben Goertzel

\n

\n

\"How much money would it take to get YOU to change your sexual orientation? Now add a dollar to that. That's how much Disney could offer you.\"
        -- Leader Kibo

\n

\"I don't need The Media to tell me that I should be outraged about a brutal murder. All I need is to be informed that it has happened, and I'll form my own opinion about it.\"
        -- The_Morlock

\n

\"Now Charlie, don't forget what happened to the man who suddenly got everything he wished for.\"
\"What?\"
\"He lived happily ever after.\"
        -- Roald Dahl, Willy Wonka and the Chocolate Factory

\n

\"He was never at ease with politics, where good and bad were just, apparently, two ways of looking at the same thing or, at least, were described like that by the people who were on the side Vimes thought of as 'bad'.\"
        -- Terry Pratchett, \"The Fifth Elephant\"

\n

\"Philosophy is the art of asking the wrong questions.\"
        -- J.R. Molloy

\n

\"The future is coming, Bill Joy. Like a juggernaut. There'll be no slowing it down.  A billion people in India (substantial numbers of programmers riding the wave of the silicon revolution) and more than a billion people in China want a decent standard of living and they're counting on future technologies to deliver it for them. To slow down the future, you would have to nuke them. Are you going to nuke them, Bill? I didn't think so.\"
        -- Jeff Davis

\n

\"Yikes, we may need to revise the Wechsler-Bellevue to accomodate some hitherto unrecognized nadir of the intelligence quotient scale.\"
        -- Tom Morse

\n

\"Adultery always begins with the adulterer(s) claiming to themselves and to others that the relationship is \"harmless\" because it hasn't crossed a certain line. The line where it becomes wrong is the line where you start having to rationalize like that.\"
        -- Gelfin

" } }, { "_id": "XLk4dEe66bzygseQX", "title": "Rationality Quotes 5", "pageUrl": "https://www.lesswrong.com/posts/XLk4dEe66bzygseQX/rationality-quotes-5", "postedAt": "2008-01-23T18:00:00.000Z", "baseScore": 9, "voteCount": 7, "commentCount": 15, "url": null, "contents": { "documentId": "XLk4dEe66bzygseQX", "html": "

If you're seeing this, it means that I've moved, but that my Internet access isn't set up yet.  I've set up quotes to be posted automatically for the next few days.  Don't be surprised if I don't respond to comments!

\n

\"Perfection is our goal. Excellence will be tolerated.\"
        -- J. Yahl

\n

\"Morality is objective within a given frame of reference.\"
        -- Gordon Worley

\n

\"If there were a verb meaning \"to believe falsely,\" it would not have any significant first person, present indicative.\"
        -- Ludwig Wittgenstein

\n

\"It takes 50 years for your parents to mature enough that they see you as an independent person.\"
        -- Ralph Lewis

\n

\n

\"Impatience is a flaw. There's always just enough time when you do something right, no more, no less. Your sword has no blade. It has only your intention. When that goes astray you have no weapon.\"
        -- C.J. Cherryh, The Paladin

\n

\"They're space cannibals. They only eat other space cannibals. Q.E.D.\"
        -- Nikolai Kingsley

\n

\"My father said whoever tells the longest story is always the liar. The truth isn't that complicated.\"
        -- Bill Joy, cofounder and Chief Scientist of Sun Microsystems

\n

\"My experience tells me that in this complicated world the simplest explanation is usually dead wrong. But I've noticed that the simplest explanation usually sounds right and is far more convincing than any complicated explanation could hope to be.\"
        -- Scott Adams, cartoonist

\n

\"There are really good and thoroughly bad people on each side in all wars.\"
\"Nothing is more truly horrifying than the limits of human behavior.\"

\"Determined efforts are better than a miracle.\"
\"Things only get weirder the longer they go on.\"
\"People will love you for who you are ... as long as you're secretly a super-hero.\"
\"Lack of communication leads to 90% of all problems. The other causes being 5% magic and 5% giant robots.\"
        -- Tonbo, Things I've Learned From Anime

\n

\"And I heard a voice saying \"Give up! Give up!\" And that really scared me 'cause it sounded like Ben Kenobi.\"
        -- Rebel Pilot's Lament

\n

\"If you're a literary critic, keep in mind that I hate you, too, and I said it first.\"
        -- Scott Adams

\n

\"In any war there are always more of the enemy than you think, and there are always allies you never knew you had.\"
        -- John M. Ford, Web of Angels

\n

\"He told her about the Afterglow: that brief, brilliant period after the Big Bang, when matter gathered briefly in clumps and burned by fusion light.\"
        -- Stephen Baxter, The Gravity Mine

" } }, { "_id": "4ZzefKQwAtMo5yp99", "title": "Circular Altruism", "pageUrl": "https://www.lesswrong.com/posts/4ZzefKQwAtMo5yp99/circular-altruism", "postedAt": "2008-01-22T18:00:00.000Z", "baseScore": 88, "voteCount": 97, "commentCount": 310, "url": null, "contents": { "documentId": "4ZzefKQwAtMo5yp99", "html": "

Followup to: Torture vs. Dust Specks, Zut Allais, Rationality Quotes 4

\n

Suppose that a disease, or a monster, or a war, or something, is killing people.  And suppose you only have enough resources to implement one of the following two options:

\n
    \n
  1. Save 400 lives, with certainty.
  2. \n
  3. Save 500 lives, with 90% probability; save no lives, 10% probability.
  4. \n
\n

Most people choose option 1.  Which, I think, is foolish; because if you multiply 500 lives by 90% probability, you get an expected value of 450 lives, which exceeds the 400-life value of option 1.  (Lives saved don't diminish in marginal utility, so this is an appropriate calculation.)
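
The multiplication is short enough to check in two lines of Python (a minimal sketch of the expected-value calculation above):

```python
option_1 = 400                       # save 400 lives, with certainty
option_2 = 0.90 * 500 + 0.10 * 0     # save 500 lives with 90% probability
print(option_1, option_2)            # 400 450.0 -- option 2 wins on expectation
```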

\n

\"What!\" you cry, incensed.  \"How can you gamble with human lives? How can you think about numbers when so much is at stake?  What if that 10% probability strikes, and everyone dies?  So much for your damned logic!  You're following your rationality off a cliff!\"

\n

Ah, but here's the interesting thing.  If you present the options this way:

\n
    \n
  1. 100 people die, with certainty.
  2. \n
  3. 90% chance no one dies; 10% chance 500 people die.
  4. \n
\n

Then a majority choose option 2.  Even though it's the same gamble.  You see, just as a certainty of saving 400 lives seems to feel so much more comfortable than an unsure gain, so too, a certain loss feels worse than an uncertain one.
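
To see that it really is the same gamble, map each wording onto a distribution over final outcomes (a minimal sketch; the population of 500 at risk is implied by the text):

```python
AT_RISK = 500   # assumed total number of people in danger

# First framing, as distributions over how many people die:
frame_1 = {
    'option 1': {AT_RISK - 400: 1.0},        # save 400 -> 100 die, with certainty
    'option 2': {0: 0.90, AT_RISK: 0.10},    # save all w.p. 0.9, save none w.p. 0.1
}
# Second framing, as stated:
frame_2 = {
    'option 1': {100: 1.0},                  # 100 die, with certainty
    'option 2': {0: 0.90, 500: 0.10},        # 90% no one dies; 10% all 500 die
}
print(frame_1 == frame_2)   # True: the same gamble in different words
```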

\n

\n

You can grandstand on the second description too:  \"How can you condemn 100 people to certain death when there's such a good chance you can save them?  We'll all share the risk!  Even if it was only a 75% chance of saving everyone, it would still be worth it - so long as there's a chance - everyone makes it, or no one does!\"

\n

You know what?  This isn't about your feelings.  A human life, with all its joys and all its pains, adding up over the course of decades, is worth far more than your brain's feelings of comfort or discomfort with a plan.  Does computing the expected utility feel too cold-blooded for your taste?  Well, that feeling isn't even a feather in the scales, when a life is at stake.  Just shut up and multiply.

\n

Previously on Overcoming Bias, I asked what was the least bad, bad thing that could happen, and suggested that it was getting a dust speck in your eye that irritated you for a fraction of a second, barely long enough to notice, before it got blinked away.  And conversely, a very bad thing to happen, if not the worst thing, would be getting tortured for 50 years.

\n

Now, would you rather that a googolplex people got dust specks in their eyes, or that one person was tortured for 50 years?  I originally asked this question with a vastly larger number - an incomprehensible mathematical magnitude - but a googolplex works fine for this illustration.

\n

Most people chose the dust specks over the torture.  Many were proud of this choice, and indignant that anyone should choose otherwise:  \"How dare you condone torture!\"

\n

This matches research showing that there are \"sacred values\", like human lives, and \"unsacred values\", like money.  When you try to trade off a sacred value against an unsacred value, subjects express great indignation (sometimes they want to punish the person who made the suggestion).

\n

My favorite anecdote along these lines - though my books are packed at the moment, so no citation for now - comes from a team of researchers who evaluated the effectiveness of a certain project, calculating the cost per life saved, and recommended to the government that the project be implemented because it was cost-effective.  The governmental agency rejected the report because, they said, you couldn't put a dollar value on human life.  After rejecting the report, the agency decided not to implement the measure.

\n

Trading off a sacred value (like refraining from torture) against an unsacred value (like dust specks) feels really awful.  To merely multiply utilities would be too cold-blooded - it would be following rationality off a cliff...

\n

But let me ask you this.  Suppose you had to choose between one person being tortured for 50 years, and a googol people being tortured for 49 years, 364 days, 23 hours, 59 minutes and 59 seconds.  You would choose one person being tortured for 50 years, I do presume; otherwise I give up on you.

\n

And similarly, if you had to choose between a googol people tortured for 49.9999999 years, and a googol-squared people being tortured for 49.9999998 years, you would pick the former.

\n

A googolplex is ten to the googolth power.  That's a googol/100 factors of a googol.  So we can keep doing this, gradually - very gradually - diminishing the degree of discomfort, and multiplying by a factor of a googol each time, until we choose between a googolplex people getting a dust speck in their eye, and a googolplex/googol people getting two dust specks in their eye.
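
The bookkeeping behind that chain can be verified exactly with integer arithmetic (a sketch; the loop can't literally be run a googol/100 times, so the count is done in closed form in log space):

```python
# Start: one person tortured for 50 years.  Each trade multiplies the number
# of victims by a googol (add 100 in log10 space) while making the harm per
# person slightly milder.  After a googol/100 trades:
GOOGOL_LOG10 = 100              # a googol is 10^100
steps = 10**100 // 100          # a googol/100 trade-down steps
log10_people = 0 + GOOGOL_LOG10 * steps
print(log10_people == 10**100)  # True: 10^(10^100) people -- a googolplex of dust specks
```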

\n

If you find your preferences are circular here, that makes rather a mockery of moral grandstanding.  If you drive from San Jose to San Francisco to Oakland to San Jose, over and over again, you may have fun driving, but you aren't going anywhere.  Maybe you think it a great display of virtue to choose for a googolplex people to get dust specks rather than one person being tortured.  But if you would also trade a googolplex people getting one dust speck for a googolplex/googol people getting two dust specks et cetera, you sure aren't helping anyone.  Circular preferences may work for feeling noble, but not for feeding the hungry or healing the sick. 

\n

Altruism isn't the warm fuzzy feeling you get from being altruistic.  If you're doing it for the spiritual benefit, that is nothing but selfishness.  The primary thing is to help others, whatever the means.  So shut up and multiply!

\n

And if it seems to you that there is a fierceness to this maximization, like the bare sword of the law, or the burning of the sun - if it seems to you that at the center of this rationality there is a small cold flame -

\n

Well, the other way might feel better inside you.  But it wouldn't work.

\n

And I say also this to you:  That if you set aside your regret for all the spiritual satisfaction you could be having - if you wholeheartedly pursue the Way, without thinking that you are being cheated - if you give yourself over to rationality without holding back, you will find that rationality gives to you in return.

\n

But that part only works if you don't go around saying to yourself, \"It would feel better inside me if only I could be less rational.\"

\n

Chimpanzees feel, but they don't multiply.  Should you be sad that you have the opportunity to do better?  You cannot attain your full potential if you regard your gift as a burden.

\n

Added:  If you'd still take the dust specks, see Unknown's comment on the problem with qualitative versus quantitative distinctions.

" } }, { "_id": "AvJeJw52NL9y7RJDJ", "title": "Against Discount Rates", "pageUrl": "https://www.lesswrong.com/posts/AvJeJw52NL9y7RJDJ/against-discount-rates", "postedAt": "2008-01-21T10:00:00.000Z", "baseScore": 38, "voteCount": 47, "commentCount": 81, "url": null, "contents": { "documentId": "AvJeJw52NL9y7RJDJ", "html": "

I've never been a fan of the notion that we should (normatively) have a discount rate in our pure preferences - as opposed to a pseudo-discount rate arising from monetary inflation, or from opportunity costs of other investments, or from various probabilistic catastrophes that destroy resources or consumers.  The idea that it is literally, fundamentally 5% more important that a poverty-stricken family have clean water in 2008, than that a similar family have clean water in 2009, seems like pure discrimination to me - just as much as if you were to discriminate between blacks and whites.

\n\n

And there's worse:  If your temporal discounting follows any curve other than the exponential, you'll have time-inconsistent goals that force you to wage war against your future selves - preference reversals - cases where your self of 2008 will pay a dollar to ensure that your future self gets option A in 2011 rather than B in 2010; but then your future self in 2009 will pay another dollar to get B in 2010 rather than A in 2011.
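
A minimal sketch of such a preference reversal; the reward sizes and the hyperbolic discount constant are illustrative assumptions, not anything from the text:

```python
# A non-exponential (hyperbolic) discount curve produces a preference reversal;
# an exponential curve cannot, because the ratio of discount factors is constant.
def hyperbolic(t, k=1.0):
    return 1.0 / (1.0 + k * t)

def exponential(t, d=0.9):
    return d ** t

reward_A, reward_B = 1.4, 1.0   # A (2011) is larger but arrives a year after B (2010)

for name, D in [('hyperbolic', hyperbolic), ('exponential', exponential)]:
    in_2008 = reward_A * D(3) > reward_B * D(2)   # from 2008: A is 3 years out, B is 2
    in_2009 = reward_A * D(2) > reward_B * D(1)   # from 2009: A is 2 years out, B is 1
    print(name, '| prefers A in 2008:', in_2008, '| prefers A in 2009:', in_2009)
# hyperbolic: True, then False -- a preference reversal.
# exponential: the preference never flips.
```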

\n\n

But a 5%-per-year discount rate, compounded exponentially, implies that it is worth saving a single person from torture today, at the cost of 168 people being tortured a century later, or a googol persons being tortured 4,490 years later.

People who deal in global catastrophic risks sometimes have to wrestle with the discount rate assumed by standard economics.  Is a human civilization spreading through the Milky Way, 100,000 years hence - the Milky Way being about 100K lightyears across - really to be valued at a discount of 10^-2,227 relative to our own little planet today?
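
All three numbers in the last two paragraphs fall out of the same arithmetic (a sketch, reading a 5% discount rate as multiplying value by 0.95 per year):

```python
import math

ANNUAL = 0.95   # value multiplier per year under a 5% discount rate

# One person saved today is "worth" this many people a century later:
print(int(ANNUAL ** -100))                                 # 168

# Years until one person today outweighs a googol (10^100) people:
print(math.ceil(100 * math.log(10) / -math.log(ANNUAL)))   # 4490

# Discount factor 100,000 years out, as a power of ten:
print(int(100_000 * math.log10(ANNUAL)))                   # -2227, i.e. about 10^-2,227
```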

\n\n

And when it comes to artificial general intelligence... I encounter wannabe AGI-makers who say, "Well, I don't know how to make my math work for an infinite time horizon, so... um... I've got it!  I'll build an AGI whose planning horizon cuts out in a thousand years."  Such a putative AGI would be quite happy to take an action that causes the galaxy to explode, so long as the explosion happens at least 1,001 years later.  (In general, I've observed that most wannabe AGI researchers confronted with Singularity-level problems ponder for ten seconds and then propose the sort of clever programming trick one would use for data-mining the Netflix Prize, without asking if it makes deep sense for Earth-originating civilization over the next million years.)

\n\n

The discount question is an old debate in economics, I know.  I'm writing this blog post just now, because I recently had a conversation with Carl Shulman, who proposed an argument against temporal discounting that is, as far as I know, novel: namely that an AI with a 5% temporal discount rate has a nearly infinite incentive to expend all available resources on attempting time travel - maybe hunting for wormholes with a terminus in the past.

\n\n

Or to translate this back out of transhumanist discourse:  If you wouldn't burn alive 1,226,786,652 people today to save Giordano Bruno from the stake in 1600, then clearly, you do not have a 5%-per-year temporal discount rate in your pure preferences.

\n\n

Maybe it's easier to believe in a temporal discount rate when you - the you of today - are the king of the hill, part of the most valuable class of persons in the landscape of present and future.  But you wouldn't like it if there were other people around deemed more valuable than yourself, to be traded off against you.  You wouldn't like a temporal discount if the past was still around.

\n\n

Discrimination always seems more justifiable, somehow, when you're not the person who is discriminated against -

\n\n

- but you will be.

\n\n

(Just to make it clear, I'm not advocating against the idea that Treasury bonds can exist.  But I am advocating against the idea that you should intrinsically care less about the future than the present; and I am advocating against the idea that you should compound a 5% discount rate a century out when you are valuing global catastrophic risk management.)

" } }, { "_id": "knpAQ4F3gmguxy39z", "title": "Allais Malaise", "pageUrl": "https://www.lesswrong.com/posts/knpAQ4F3gmguxy39z/allais-malaise", "postedAt": "2008-01-21T00:40:01.000Z", "baseScore": 41, "voteCount": 32, "commentCount": 38, "url": null, "contents": { "documentId": "knpAQ4F3gmguxy39z", "html": "

Continuation of: The Allais Paradox, Zut Allais!

\n\n

Judging by the comments on Zut Allais, I failed to emphasize the points that needed emphasis.

\n\n

The problem with the Allais Paradox is the incoherent pattern 1A > 1B, 2B > 2A.  If you need $24,000 for a lifesaving operation and an extra $3,000 won't help that much, then you choose 1A > 1B and 2A > 2B.  If you have a million dollars in the bank account and your utility curve doesn't change much with an extra $25,000 or so, then you should choose 1B > 1A and 2B > 2A.  Neither the individual choice 1A > 1B, nor the individual choice 2B > 2A, are of themselves irrational.  It's the combination that's the problem.

\n\n

Expected utility is not expected dollars.  In the case above, the utility-distance from $24,000 to $27,000 is a tiny fraction of the distance from $21,000 to $24,000.  So, as stated, you should choose 1A > 1B and 2A > 2B, a quite coherent combination.  The Allais Paradox has nothing to do with believing that every added dollar is equally useful.  That idea has been rejected since the dawn of decision theory.
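
A sketch of both cases; the two utility functions (a survival threshold at $24,000 for the patient, log of total assets for the millionaire) are illustrative assumptions:

```python
import math

# The four gambles, as lists of (probability, dollar payoff).
gambles = {
    '1A': [(1.0, 24_000)],
    '1B': [(33/34, 27_000), (1/34, 0)],
    '2A': [(0.34, 24_000), (0.66, 0)],
    '2B': [(0.33, 27_000), (0.67, 0)],
}

def expected_utility(gamble, u):
    return sum(p * u(x) for p, x in gamble)

# Patient: needs $24,000 for the operation; an extra $3,000 barely matters.
u_patient = lambda x: 1.0 if x >= 24_000 else 0.0
# Millionaire: utility logarithmic in total assets ($1,000,000 plus the payoff).
u_rich = lambda x: math.log(1_000_000 + x)

for name, u in [('patient', u_patient), ('millionaire', u_rich)]:
    eu = {g: expected_utility(v, u) for g, v in gambles.items()}
    one = '1A' if eu['1A'] > eu['1B'] else '1B'
    two = '2A' if eu['2A'] > eu['2B'] else '2B'
    print(name, 'prefers', one, 'and', two)
# patient prefers 1A and 2A; millionaire prefers 1B and 2B -- both coherent.
```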

\n\n

If satisfying your intuitions is more important to you than money, do whatever the heck you want.  Drop the money over Niagara Falls.  Blow it all on expensive champagne.  Set fire to your hair.  Whatever.  If the largest utility you care about is the utility of feeling good about your decision, then any decision that feels good is the right one.  If you say that different trajectories to the same outcome \"matter emotionally\", then you're attaching an inherent utility to conforming to the brain's native method of optimization, whether or not it actually optimizes.  Heck, running around in circles from preference reversals could feel really good too.  But if you care enough about the stakes that winning is more important than your brain's good feelings about an intuition-conforming strategy, then use decision theory.

If you suppose the problem is different from the one presented - that the gambles are untrustworthy and that, after this mistrust is taken into account, the payoff probabilities are not as described - then, obviously, you can make the answer anything you want.

\n\n

Let's say you're dying of thirst, you only have $1.00, and you have to choose between a vending machine that dispenses a drink with certainty for $0.90, versus spending $0.75 on a vending machine that dispenses a drink with 99% probability.  Here, the 1% chance of dying is worth more to you than $0.15, so you would pay the extra fifteen cents.  You would also pay the extra fifteen cents if the two vending machines dispensed drinks with 75% probability and 74% probability respectively.  The 1% probability is worth the same amount whether or not it's the last increment towards certainty.  This pattern of decisions is perfectly coherent.  Don't confuse being rational with being shortsighted or greedy.

\n\n

Added:  A 50% probability of $30K and a 50% probability of $20K, is not the same as a 50% probability of $26K and a 50% probability of $24K.  If your utility is logarithmic in money (the standard assumption) then you will definitely prefer the latter to the former:  0.5 log(30) + 0.5 log(20)  <  0.5 log(26) + 0.5 log(24).  You take the expectation of the utility of the money, not the utility of the expectation of the money.
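
Checking that inequality numerically:

```python
import math
lhs = 0.5 * math.log(30) + 0.5 * math.log(20)   # 50/50 between $30K and $20K
rhs = 0.5 * math.log(26) + 0.5 * math.log(24)   # 50/50 between $26K and $24K
print(lhs < rhs)   # True: the narrower spread has the higher expected log-utility
```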

" } }, { "_id": "rs7B2TsTCtDEbHZgM", "title": "Rationality Quotes 4", "pageUrl": "https://www.lesswrong.com/posts/rs7B2TsTCtDEbHZgM/rationality-quotes-4", "postedAt": "2008-01-20T17:00:00.000Z", "baseScore": 18, "voteCount": 12, "commentCount": 5, "url": null, "contents": { "documentId": "rs7B2TsTCtDEbHZgM", "html": "

\"Altruistic behavior: An act done without any intent for personal gain in any form. Altruism requires that there is no want for material, physical, spiritual, or egoistic gain.\"
        -- Glossary of Zen

\n

\"The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy's cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him.\"
        -- Miyamoto Musashi, The Book of Five Rings

\n

\"You can win with a long weapon, and yet you can also win with a short weapon. In short, the Way of the Ichi school is the spirit of winning, whatever the weapon and whatever its size.\"
        -- Miyamoto Musashi, The Book of Five Rings

\n

\n

    \"Well,\" Rowlands said carefully, turning the Land-Rover into the road. \"I am not at all sure what it is that is going on all around us, Will bach, or where it is leading. But those men who know anything at all about the Light also know that there is a fierceness to its power, like the bare sword of the law, or the white burning of the sun.\" Suddenly his voice sounded to Will very strong, and very Welsh. \"At the very heart, that is. Other things, like humanity, and mercy, and charity, that most good men hold more precious than all else, they do not come first for the Light. Oh, sometimes they are there; often, indeed. But in the very long run the concern of you people is with the absolute good, ahead of all else. You are like fanatics. Your masters, at any rate. Like the old Crusaders -- oh, like certain groups in every belief, though this is not a matter of religion, of course. At the centre of the Light there is a cold white flame, just as at the centre of the Dark there is a great black pit bottomless as the Universe.\"
    His warm, deep voice ended, and there was only the roar of the engine. Will looked out over the grey-misted fields, silent.
    \"There was a great long speech, now,\" John Rowlands said awkwardly. \"But I was only saying, be careful not to forget that there are people in this valley who can be hurt, even in the pursuit of good ends.\"
    Will heard again in his mind Bran's anguished cry as the dog Cafall was shot dead, and heard his cold dismissal: go away, go away... And for a second another image, unexpected, flashed into his mind out of the past: the strong, bony face of Merriman his master, first of the Old Ones, cold in judgment of a much-loved figure who, through the frailty of being no more than a man, had once betrayed the cause of the Light.
    He sighed. \"I understand what you are saying,\" he said sadly. \"But you misjudge us, because you are a man yourself. For us, there is only the destiny. Like a job to be done. We are here simply to save the world from the Dark. Make no mistake, John, the Dark is rising, and will take the world to itself very soon if nothing stands in its way. And if that should happen, then there would be no question ever, for anyone, either of warm charity or of cold absolute good, because nothing would exist in the world or in the hearts of men except that bottomless black pit. The charity and the mercy and the humanitarianism are for you, they are the only things by which men are able to exist together in peace. But in this hard case that we the Light are in, confronting the Dark, we can make no use of them. We are fighting a war. We are fighting for life or death -- not for our life, remember, since we cannot die. For yours.\"
    He reached his hand behind him, over the back of the seat, and Pen licked it with his floppy wet tongue.
    \"Sometimes,\" Will said slowly, \"in this sort of a war, it is not possible to pause, to smooth the way for one human being, because even that one small thing could mean an end of the world for all the rest.\"
    A fine rain began to mist the windscreen. John Rowlands turned on the wipers, peering forward at the grey world as he drove. He said, \"It is a cold world you live in, bachgen. I do not think so far ahead, myself. I would take the one human being over all the principle, all the time.\"
    Will slumped down low in his seat, curling into a ball, pulling up his knees. \"Oh, so would I,\" he said sadly. \"So would I, if I could. It would feel a lot better inside me. But it wouldn't work.\"
        -- Susan Cooper, The Grey King

" } }, { "_id": "zNcLnqHF5rvrTsQJx", "title": "Zut Allais!", "pageUrl": "https://www.lesswrong.com/posts/zNcLnqHF5rvrTsQJx/zut-allais", "postedAt": "2008-01-20T03:18:16.000Z", "baseScore": 60, "voteCount": 55, "commentCount": 51, "url": null, "contents": { "documentId": "zNcLnqHF5rvrTsQJx", "html": "

Huh!  I was not expecting that response.  Looks like I ran into an inferential distance.

\n

It probably helps in interpreting the Allais Paradox to have absorbed more of the gestalt of the field of heuristics and biases, such as:

\n\n

\n

Let's start with the issue of incoherent preferences - preference reversals, dynamic inconsistency, money pumps, that sort of thing.

\n

Anyone who knows a little prospect theory will have no trouble constructing cases where people say they would prefer to play gamble A rather than gamble B; but when you ask them to price the gambles they put a higher value on gamble B than gamble A.  There are different perceptual features that become salient when you ask \"Which do you prefer?\" in a direct comparison, and \"How much would you pay?\" with a single item.

\n

My books are packed up for the move, but from what I remember, this should typically generate a preference reversal:

\n
    \n
  1. 1/3 to win $18 and 2/3 to lose $1.50
  2. \n
  3. 19/20 to win $4 and 1/20 to lose $0.25
  4. \n
\n

Most people will (IIRC) rather play 2 than 1.  But if you ask them to price the bets separately - ask for a price at which they would be indifferent between having that amount of money, and having a chance to play the gamble - people will (IIRC) put a higher price on 1 than on 2.  If I'm wrong about this exact example, nonetheless, there are plenty of cases where such a pattern is exhibited experimentally.

\n

So first you sell them a chance to play bet 1, at their stated price.  Then you offer to trade bet 1 for bet 2.  Then you buy bet 2 back from them, at their stated price.  Then you do it again.  Hence the phrase, \"money pump\".
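
A minimal simulation of the pump; the stated prices are hypothetical stand-ins (each bet priced at its expected value), and the direct-choice preference for bet 2 is the experimentally reported pattern:

```python
# Bet 1: 1/3 to win $18, 2/3 to lose $1.50.   Bet 2: 19/20 to win $4, 1/20 to lose $0.25.
price_1 = (1/3) * 18 - (2/3) * 1.50    # $5.00    -- hypothetical stated price for bet 1
price_2 = 0.95 * 4 - 0.05 * 0.25       # $3.7875  -- hypothetical stated price for bet 2

pumped = 0.0
for _ in range(3):
    pumped += price_1   # sell the subject bet 1 at their stated price
    # ...they happily trade bet 1 straight across for bet 2 (they prefer playing 2)...
    pumped -= price_2   # ...then buy bet 2 back from them at their stated price
print(f'extracted ${pumped:.4f} in 3 cycles')   # $3.6375, with no reason to stop
```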

\n

Or to paraphrase Steve Omohundro:  If you would rather be in Oakland than San Francisco, and you would rather be in San Jose than Oakland, and you would rather be in San Francisco than San Jose, you're going to spend an awful lot of money on taxi rides.

\n

Amazingly, people defend these preference patterns.  Some subjects abandon them after the money-pump effect is pointed out - revise their price or revise their preference - but some subjects defend them.

\n

On one occasion, gamblers in Las Vegas played these kinds of bets for real money, using a roulette wheel.  And afterward, one of the researchers tried to explain the problem with the incoherence between their pricing and their choices.  From the transcript:

\n
\n

Experimenter:  Well, how about the bid for Bet A?  Do you have any further feelings about it now that you know you are choosing one but bidding more for the other one?
Subject:  It's kind of strange, but no, I don't have any feelings at all whatsoever really about it.  It's just one of those things.  It shows my reasoning process isn't so good, but, other than that, I... no qualms.
...
E:  Can I persuade you that it is an irrational pattern?
S:  No, I don't think you probably could, but you could try.
...
E: Well, now let me suggest what has been called a money-pump game and try this out on you and see how you like it.  If you think Bet A is worth 550 points [points were converted to dollars after the game, though not on a one-to-one basis] then you ought to be willing to give me 550 points if I give you the bet...
...
E: So you have Bet A, and I say, \"Oh, you'd rather have Bet B wouldn't you?\"
...
S: I'm losing money.
E: I'll buy Bet B from you.  I'll be generous; I'll pay you more than 400 points.  I'll pay you 401 points.  Are you willing to sell me Bet B for 401 points?
S: Well, certainly.
...
E: I'm now ahead 149 points.
S: That's good reasoning on my part. (laughs) How many times are we going to go through this?
...
E: Well, I think I've pushed you as far as I know how to push you short of actually insulting you.
S: That's right.

\n
\n

You want to scream, \"Just give up already!  Intuition isn't always right!\"

\n

And then there's the business of the strange value that people attach to certainty.  Again, I don't have my books, but I believe that one experiment showed that a shift from 100% probability to 99% probability weighed larger in people's minds than a shift from 80% probability to 20% probability.

\n

The problem with attaching a huge extra value to certainty is that one time's certainty is another time's probability.

\n

Yesterday I talked about the Allais Paradox:

\n\n

The naive preference pattern on the Allais Paradox is 1A > 1B and 2B > 2A.  Then you will pay me to throw a switch from A to B because you'd rather have a 33% chance of winning $27,000 than a 34% chance of winning $24,000.  Then a die roll eliminates a chunk of the probability mass.  In both cases you had at least a 66% chance of winning nothing.  This die roll eliminates that 66%.  So now option B is a 33/34 chance of winning $27,000, but option A is a certainty of winning $24,000.  Oh, glorious certainty!  So you pay me to throw the switch back from B to A.

\n

Now, if I've told you in advance that I'm going to do all that, do you really want to pay me to throw the switch, and then pay me to throw it back?  Or would you prefer to reconsider?

\n

Whenever you try to price a probability shift from 24% to 23% as being less important than a shift from ~1.0 to 0.99 - every time you try to make an increment of probability have more value when it's near an end of the scale - you open yourself up to this kind of exploitation.  I can always set up a chain of events that eliminates the probability mass, a bit at a time, until you're left with \"certainty\" that flips your preferences.  One time's certainty is another time's uncertainty, and if you insist on treating the distance from ~1 to 0.99 as special, I can cause you to invert your preferences over time and pump some money out of you.

\n

Can I persuade you, perhaps, that this is an irrational pattern?

\n

Surely, if you've been reading this blog for a while, you realize that you - the very system and process that reads these very words - are a flawed piece of machinery.  Your intuitions are not giving you direct, veridical information about good choices.  If you don't believe that, there are some gambling games I'd like to play with you.

\n

There are various other games you can also play with certainty effects.  For example, if you offer someone a certainty of $400, or an 80% probability of $500 and a 20% probability of $300, they'll usually take the $400.  But if you ask people to imagine themselves $500 richer, and ask if they would prefer a certain loss of $100 or a 20% chance of losing $200, they'll usually take the chance of losing $200.  Same probability distribution over outcomes, different descriptions, different choices.
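
Mapping both descriptions onto distributions over final dollar outcomes makes the equivalence explicit (a sketch):

```python
# Description 1: a certain $400, vs. 80% chance of $500 and 20% chance of $300.
desc_1 = ({400: 1.0}, {500: 0.80, 300: 0.20})

# Description 2: imagine yourself $500 richer, then take a certain loss of $100,
# vs. a 20% chance of losing $200.
desc_2 = ({500 - 100: 1.0}, {500 - 0: 0.80, 500 - 200: 0.20})

print(desc_1 == desc_2)   # True: identical outcome distributions, opposite choices
```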

\n

Yes, Virginia, you really should try to multiply the utility of outcomes by their probability.  You really should.  Don't be embarrassed to use clean math.

\n

In the Allais paradox, figure out whether 1 unit of the difference between getting $24,000 and getting nothing, outweighs 33 units of the difference between getting $24,000 and $27,000.  If it does, prefer 1A to 1B and 2A to 2B.  If the 33 units outweigh the 1 unit, prefer 1B to 1A and 2B to 2A.  As for calculating the utility of money, I would suggest using an approximation that assumes utility is logarithmic in money.  If you've got plenty of money already, pick B.  If $24,000 would double your existing assets, pick A.  Case 2 or case 1, makes no difference.  Oh, and be sure to assess the utility of total asset values - the utility of final outcome states of the world - not changes in assets, or you'll end up inconsistent again.
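
A sketch of that recipe, with utility logarithmic in total assets; the wealth levels are illustrative, and under a pure log curve the sure thing wins only at quite low starting wealth, so treat the doubling heuristic as a rough guide rather than an exact crossover:

```python
import math

def prefers_1A(wealth):
    # Utility of final total assets, logarithmic; compare 1A against 1B.
    eu_1A = math.log(wealth + 24_000)
    eu_1B = (33/34) * math.log(wealth + 27_000) + (1/34) * math.log(wealth)
    return eu_1A > eu_1B

print(prefers_1A(1_000_000))   # False: plenty of money already, pick B
print(prefers_1A(100))         # True: nearly broke, the certain $24,000 wins
```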

\n

A number of commenters, yesterday, claimed that the preference pattern wasn't irrational because of \"the utility of certainty\", or something like that.  One commenter even wrote U(Certainty) into an expected utility equation.

\n

Does anyone remember that whole business about expected utility and utility being of fundamentally different types?  Utilities are over outcomes.  They are values you attach to particular, solid states of the world.  You cannot feed a probability of 1 into a utility function.  It makes no sense.

\n

And before you sniff, \"Hmph... you just want the math to be neat and tidy,\" remember that, in this case, the price of departing the Bayesian Way was paying someone to throw a switch and then throw it back.

\n

But what about that solid, warm feeling of reassurance?  Isn't that a utility?

\n

That's being human.  Humans are not expected utility maximizers.  Whether you want to relax and have fun, or pay some extra money for a feeling of certainty, depends on whether you care more about satisfying your intuitions or actually achieving the goal.

\n

If you're gambling at Las Vegas for fun, then by all means, don't think about the expected utility - you're going to lose money anyway.

\n

But what if it were 24,000 lives at stake, instead of $24,000?  The certainty effect is even stronger over human lives.  Will you pay one human life to throw the switch, and another to switch it back?

\n

Tolerating preference reversals makes a mockery of claims to optimization.  If you drive from San Jose to San Francisco to Oakland to San Jose, over and over again, then you may get a lot of warm fuzzy feelings out of it, but you can't be interpreted as having a destination - as trying to go somewhere.

\n

When you have circular preferences, you're not steering the future - just running in circles.  If you enjoy running for its own sake, then fine.  But if you have a goal - something you're trying to actually accomplish - a preference reversal reveals a big problem.  At least one of the choices you're making must not be working to actually optimize the future in any coherent sense.

\n

If what you care about is the warm fuzzy feeling of certainty, then fine.  If someone's life is at stake, then you had best realize that your intuitions are a greasy lens through which to see the world.  Your feelings are not providing you with direct, veridical information about strategic consequences - it feels that way, but they're not.  Warm fuzzies can lead you far astray.

\n

There are mathematical laws governing efficient strategies for steering the future.  When something truly important is at stake - something more important than your feelings of happiness about the decision - then you should care about the math, if you truly care at all.

" } }, { "_id": "zJZvoiwydJ5zvzTHK", "title": "The Allais Paradox", "pageUrl": "https://www.lesswrong.com/posts/zJZvoiwydJ5zvzTHK/the-allais-paradox", "postedAt": "2008-01-19T03:05:32.000Z", "baseScore": 65, "voteCount": 57, "commentCount": 145, "url": null, "contents": { "documentId": "zJZvoiwydJ5zvzTHK", "html": "

Choose between the following two options:

1A.  $24,000, with certainty.
1B.  33/34 chance of winning $27,000, and 1/34 chance of winning nothing.

Which seems more intuitively appealing?  And which one would you choose in real life?

Now which of these two options would you intuitively prefer, and which would you choose in real life?

2A. 34% chance of winning $24,000, and 66% chance of winning nothing.
2B. 33% chance of winning $27,000, and 67% chance of winning nothing.

The Allais Paradox - as Allais called it, though it's not really a paradox - was one of the first conflicts between decision theory and human reasoning to be experimentally exposed, in 1953.  I've modified it slightly for ease of math, but the essential problem is the same:  Most people prefer 1A > 1B, and most people prefer 2B > 2A.  Indeed, in within-subject comparisons, a majority of subjects express both preferences simultaneously.

\n\n

This is a problem because the 2s are equal to a one-third chance of playing the 1s.  That is, 2A is equivalent to playing gamble 1A with 34% probability, and 2B is equivalent to playing 1B with 34% probability.

\n\n

Among the axioms used to prove that "consistent" decisionmakers can be viewed as maximizing expected utility, is the Axiom of Independence:  If X is strictly preferred to Y, then a probability P of X and (1 - P) of Z should be strictly preferred to P chance of Y and (1 - P) chance of Z.

\n\n

All the axioms are consequences, as well as antecedents, of a consistent utility function.  So it must be possible to prove that the experimental subjects above can't have a consistent utility function over outcomes.  And indeed, you can't simultaneously have:

U($24,000)  >  33/34 × U($27,000) + 1/34 × U($0)
0.34 × U($24,000) + 0.66 × U($0)  <  0.33 × U($27,000) + 0.67 × U($0)

These two equations are algebraically inconsistent, regardless of U, so the Allais Paradox has nothing to do with the diminishing marginal utility of money.
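
Dividing the second inequality through by 0.34 turns it into U($24,000) < 33/34 × U($27,000) + 1/34 × U($0), the exact opposite of the first, so no assignment of utilities can satisfy both; a brute-force scan confirms it (a sketch):

```python
import random

# 1A > 1B:   U($24K) > 33/34 * U($27K) + 1/34 * U($0)
# 2B > 2A:   0.34 * U($24K) + 0.66 * U($0) < 0.33 * U($27K) + 0.67 * U($0),
#            which, divided by 0.34, is  U($24K) < 33/34 * U($27K) + 1/34 * U($0).
for _ in range(100_000):
    u0, u24, u27 = (random.uniform(-100, 100) for _ in range(3))
    bound = (33/34) * u27 + (1/34) * u0
    assert not (u24 > bound and u24 < bound)   # can't be on both sides of one bound
print('no utility function satisfies both preferences')
```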

\n\n

Maurice Allais initially defended the revealed preferences of the experimental subjects - he saw the experiment as exposing a flaw in the conventional ideas of utility, rather than exposing a flaw in human psychology.  This was 1953, after all, and the heuristics-and-biases movement wouldn't really get started for another two decades.  Allais thought his experiment just showed that the Axiom of Independence clearly wasn't a good idea in real life.

\n\n

(How naive, how foolish, how simplistic is Bayesian decision theory...)

\n\n

Surely, the certainty of having $24,000 should count for something.  You can feel the difference, right?  The solid reassurance?

\n\n

(I'm starting to think of this as "naive philosophical realism" - supposing that our intuitions directly expose truths about which strategies are wiser, as though it was a directly perceived fact that "1A is superior to 1B".  Intuitions directly expose truths about human cognitive functions, and only indirectly expose (after we reflect on the cognitive functions themselves) truths about rationality.)

\n\n

"But come now," you say, "is it really such a terrible thing, to depart from Bayesian beauty?"  Okay, so the subjects didn't follow the neat little "independence axiom" espoused by the likes of von Neumann and Morgenstern.  Yet who says that things must be neat and tidy?

\n\n

Why fret about elegance, if it makes us take risks we don't want?  Expected utility tells us that we ought to assign some kind of number to an outcome, and then multiply that value by the outcome's probability, add them up, etc.  Okay, but why do we have to do that?  Why not make up more palatable rules instead?

\n\n

There is always a price for leaving the Bayesian Way.  That's what coherence and uniqueness theorems are all about.

\n\n

In this case, if an agent prefers 1A > 1B, and 2B > 2A, it introduces a form of preference reversal - a dynamic inconsistency in the agent's planning.  You become a money pump.

\n\n

Suppose that at 12:00PM I roll a hundred-sided die.  If the die shows a number greater than 34, the game terminates.  Otherwise, at 12:05PM I consult a switch with two settings, A and B.  If the setting is A, I pay you $24,000.  If the setting is B, I roll a 34-sided die and pay you $27,000 unless the die shows "34", in which case I pay you nothing.

\n\n

Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference.  The switch starts in state A.  Before 12:00PM, you pay me a penny to throw the switch to B.  The die comes up 12.  After 12:00PM and before 12:05PM, you pay me a penny to throw the switch to A.

\n\n

I have taken your two cents on the subject.
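
A small simulation of that procedure (a sketch; the penny payments are the two indulged preferences described above):

```python
import random

def play_once():
    pennies = 1          # before 12:00PM: pay a penny to throw the switch to B
    if random.randint(1, 100) > 34:
        return pennies   # die > 34: the game terminates, penny already spent
    pennies += 1         # die <= 34: A is now certain money, pay to switch back
    return pennies

games = 10_000
total = sum(play_once() for _ in range(games))
print(f'average pennies surrendered per game: {total / games:.2f}')   # ~1.34
```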

If you indulge your intuitions, and dismiss mere elegance as a pointless obsession with neatness, then don't be surprised when your pennies get taken from you...

\n\n

(I think the same failure to proportionally devalue the emotional impact of small probabilities is responsible for the lottery.)

\n\n
\n\n

Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine.  Econometrica, 21, 503-46.

\n\n\n\n

Kahneman, D. and Tversky, A. (1979). Prospect Theory: An Analysis of Decision Under Risk.  Econometrica, 47, 263-92.

" } }, { "_id": "jh33KiQTCxXgpHD78", "title": "Rationality Quotes 3", "pageUrl": "https://www.lesswrong.com/posts/jh33KiQTCxXgpHD78/rationality-quotes-3", "postedAt": "2008-01-18T02:20:32.000Z", "baseScore": 9, "voteCount": 11, "commentCount": 15, "url": null, "contents": { "documentId": "jh33KiQTCxXgpHD78", "html": "

\"Reality is that which, when you stop believing in it, doesn't go away.\"
        -- Philip K. Dick

\n

\"How many legs does a dog have, if you call the tail a leg? Four. Calling a tail a leg doesn't make it a leg.\"
        -- Abraham Lincoln

\n

\"Faced with the choice of changing one's mind and proving that there is no need to do so, almost everyone gets busy on the proof.\"
        -- John Kenneth Galbraith

\n

\"I'd rather live with a good question than a bad answer.\"
        -- Aryeh Frimer

\n

\"It ain't a true crisis of faith unless things could just as easily go either way.\"
        -- bunnyThor

\n

\"My best test for a libertarian so far is to ask what needs to be done to protect ancient sequoias.  If you say you need to buy them, you pass.\"
        -- Rafal Smigrodzki

\n

\"Mystical explanations are considered deep. The truth is that they are not even superficial.\"
        -- Friedrich Nietzsche, The Gay Science

\n

\"Some people know better, and they still make the mistake. That's when ignorance becomes stupidity.\"
        -- Aaron McBride

\n

\"Perfection is achieved, not when there is nothing left to add, but when there is nothing left to take away.\"
        -- Antoine de Saint-Exupéry

\n

\"Skill is successfully walking a tightrope over Niagara Falls. Intelligence is not trying.\"
        -- Unknown

\n

\"The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2.\"
        -- James R. Newman, The World of Mathematics

\n

\"Giving a person a high IQ is kind of like giving a person a million dollars. A few individuals will do something interesting with it, but most will piss it away on trinkets and pointless exercises.\"
        -- J. Andrew Rogers

\n

\"Fatal stupidity is inefficient: idiots take other people out with them way too often.\"
        -- Mike

\n

\"Surprises are things that you not only didn't know, but that contradict things you thought you knew. And so they're the most valuable sort of fact you can get. They're like a food that's not merely healthy, but counteracts the unhealthy effects of things you've already eaten.\"
        -- Paul Graham

\n

\"So many times I found myself on the receiving end of unkind treatment, or on the giving end of the same, that there is not often a space in which I can find peace to escape from this woman I have become. I want so much to not know the things I've known, to not feel the things I've felt, to not have hurt the ones I've hurt.\"
        -- Sara

\n

\"I am an undrawn Grand Master of the Game, and you cannot lose well against me, no matter the form. But as with all my children, I will play this game or another against you every day that you are here, and in time you will learn to lose well, and you may even learn to lose brilliantly.\"
        --
John M. Ford, The Final Reflection

\n

\"When, however, the lay public rallies round an idea that is denounced by distinguished but elderly scientists and supports that idea with great fervor and emotion -- the distinguished but elderly scientists are then, after all, probably right.\"
        -- Asimov's Corollary

\n

\"Stupid gods! Don't they realize how important this is?\"
        -- The Wings of Change

\n

\"Laws do inhibit some citizens from breaking them, and laws do specify punishments for crimes, but laws do not prevent anybody from doing anything.\"
        -- Michael Roy Ames

\n

\"I have often been accused by friends and acquaintances of being very logical. What they really meant was that I take some principle or insight and apply it further than other people that they know.\"
        -- Lee Corbin

\n

\"If the meanings of \"true\" and \"false\" were switched, then this sentence wouldn't be false.\"
        -- Douglas Hofstadter

\n

\"Make changes based on your strongest opportunities, not your most convenient ones.\"
        -- MegaTokyo

\n

\"You are not ready to count your enemy's losses until you have learned to count your own. And remember that some enemies will never have learned to count.\"
        --
John M. Ford, The Final Reflection

\n

\"Our brains live in a dark, quiet, wet place. That is the reality. It is only by means of our senses that we get the illusion of being out there in the world. In a way, our bodies are a form of telepresence, operated by our brains, huddling safe in their little caves of bone.\"
        -- Hal Finney

\n

\"If you do not wish a thing heard, do not say it.\"
        -- John M. Ford, 
The Final Reflection

\n

\"The four points of the compass be logic, knowledge, wisdom and the unknown. Some do bow in that final direction. Others advance upon it. To bow before the one is to lose sight of the three. I may submit to the unknown, but never to the unknowable. The man who bows in that final direction is either a saint or a fool. I have no use for either.\"
        -- Roger Zelazny, Lord of Light

\n

\"The assassin's gun may believe it is a surgeon's laser. But the assassin must know the task.\"
        --
John M. Ford, The Final Reflection

\n

\"Man has Nature whacked,\" said someone to a friend of mine not long ago. In their context the words had a certain tragic beauty, for the speaker was dying of tuberculosis. \"No matter,\" he said, \"I know I'm one of the casualties. Of course there are casualties on the winning as well as on the losing side. But that doesn't alter the fact that it is winning.\"
        -- C.S. Lewis, The Abolition of Man

" } }, { "_id": "vSBNjrZvGzjoFNb4C", "title": "Rationality Quotes 2", "pageUrl": "https://www.lesswrong.com/posts/vSBNjrZvGzjoFNb4C/rationality-quotes-2", "postedAt": "2008-01-16T23:47:45.000Z", "baseScore": 6, "voteCount": 10, "commentCount": 9, "url": null, "contents": { "documentId": "vSBNjrZvGzjoFNb4C", "html": "

\"I often have to arrange talks years in advance. If I am asked for a title, I suggest “The Current Crisis in the Middle East.” It has yet to fail.\"
        -- Noam Chomsky

\n

\"We don't do science for the general public. We do it for each other. Good day.\"
        -- Renato Dulbecco, complete text of interview with H F Judson

\n

\"Most witches don't believe in gods. They know that the gods exist, of course. They even deal with them occasionally. But they don't believe in them. They know them too well. It would be like believing in the postman.\"
        -- Terry Pratchett, Witches Abroad

\n

\"He didn't reject the idea so much as not react to it and watch as it floated away.\"
        -- David Foster Wallace, Infinite Jest

\n

\"People can't predict how long they will be happy with recently acquired objects, how long their marriages will last, how their new jobs will turn out, yet it's subatomic particles that they cite as \"limits of prediction.\" They're ignoring a mammoth standing in front of them in favor of matter even a microscope would not allow them to see.\"
        -- Nassim Taleb, The Black Swan

\n

\n

\"I would strongly challenge the notion of “that's what's called growing up.” The most depressing part of the nomenclature around adolescence and college life is this bizarre connection between “experimentation” / “learning from your mistakes” and binge drinking, reckless sex, and drug use.\"
        -- Ben Casnocha

\n

\"Behind every story of extraordinary heroism, there is a less exciting and more interesting story about the larger failures that made heroism necessary in the first place.\"
        -- Black Belt Bayesian

\n

\"Anyone who claims that the brain is a total mystery should be slapped upside the head with the MIT Encyclopedia of the Cognitive Sciences. All one thousand ninety-six pages of it.\"
        -- Tom McCabe

\n

\"Try explaining anything scientific to your friends -- you soon won't have any.\"
        -- Soloport

\n

\"The definition of the word \"meaning,\" is something that is conveyed. So, who is conveying this \"meaning\" that you speak of? To put it another way, if \"life\" is a painting, then who is painter? Whoever is the painter is the one who decides what meaning the painting has. Now, depending on your outlook, the painter is either yourself, or God. Depending upon how you answer that question, you should now be able to figure out the answer.\"
        -- Flipside

\n

\"It is doubtful that most \"Noble Lies\" are at all noble.\"
        -- Samantha Atkins

\n

\"In essence, we have to be more moral than God. A quick glance over God's rap sheet suggests that this is, indeed, possible.\"
        -- Blake Stacey

\n

\"Are there wonderful Xians out there? Yes. Do I think they are assholes for believing I deserve to burn forever because I don't believe in fairies? Hell yes.\"
        -- RRyan

\n

\"I am most often irritated by those who attack the bishop but somehow fall for the securities analyst.\"
        -- Nassim Taleb, The Black Swan

\n

\"I've found that people who are great at something are not so much convinced of their own greatness as mystified at why everyone else seems so incompetent.\"
        -- Paul Graham

\n

       \"But then... it used to be so simple, once upon a time.
        Because the universe was full of ignorance all around and the scientist panned through it like a prospector crouched over a mountain stream, looking for the gold of knowledge among the gravel of unreason, the sand of uncertainty and the little whiskery eight-legged swimming things of superstition.
        Occasionally he would straighten up and say things like \"Hurrah, I've discovered Boyle's Third Law.\" And everyone knew where they stood. But the trouble was that ignorance became more interesting, especially big fascinating ignorance about huge and important things like matter and creation, and people stopped patiently building their little houses of rational sticks in the chaos of the universe and started getting interested in the chaos itself -- partly because it was a lot easier to be an expert on chaos, but mostly because it made really good patterns that you could put on a t-shirt.
        And instead of getting on with proper science scientists suddenly went around saying how impossible it was to know anything, and that there wasn't really anything you could call reality to know anything about, and how all this was tremendously exciting, and incidentally did you know there were possibly all these little universes all over the place but no one can see them because they are all curved in on themselves? Incidentally, don't you think this is a rather good t-shirt?\"

        -- Terry Pratchett, Witches Abroad


\"To the extent there is a secret handshake among good hackers, it's when they know one another well enough to express opinions that would get them stoned to death by the general public.\"
        -- Paul Graham

" } }, { "_id": "LiDk2XMBFmRiju4x3", "title": "Rationality Quotes 1", "pageUrl": "https://www.lesswrong.com/posts/LiDk2XMBFmRiju4x3/rationality-quotes-1", "postedAt": "2008-01-16T07:41:57.000Z", "baseScore": 14, "voteCount": 14, "commentCount": 28, "url": null, "contents": { "documentId": "LiDk2XMBFmRiju4x3", "html": "

I'll be moving to Redwood City, CA in a week, so forgive me if I don't get a regular post out every day between now and then.  As a substitute offering, some items from my (offline) quotesfile:


\"It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought.\"
       -- E. T. Jaynes


\"When you're young, you look at television and think, There's a conspiracy. The networks have conspired to dumb us down. But when you get a little older, you realize that's not true. The networks are in business to give people exactly what they want. That's a far more depressing thought. Conspiracy is optimistic! You can shoot the bastards!\"
        -- Steve Jobs


\"Saving a drowning child is no more a moral duty than understanding a syllogism is a logical one.\"
        -- Sam Harris, The End of Faith


\"Don't ask why (again, long story), but the number 5,479,863,282.86 has been stuck in my head for as long as I can remember.\"
        -- Ultimatum479


\"I've met these people, the ones from the glossy magazines. I've walked among them. I have seen, firsthand, their callow, empty lives. I have watched them from the shadows when they thought themselves alone. And I can tell you this: I'm afraid there is not one of them who would swap lives with you at gunpoint.\"
        -- Neil Gaiman, Anansi Boys


\"Before you can get to the end of this paragraph, another person will probably die because of what someone else believes about God.\"
        -- Sam Harris, The End of Faith


\"Someone walking down the street with absolutely no scars or calluses would look pretty odd. I suspect having a conversation with someone who'd never taken any emotional or mental damage would be even odder. The line between \"experience\" and \"damage\" is pretty thin.\"
        -- Aliza, from the Open-Source Wish Project


\"Fear and lies fester in darkness. The truth may wound, but it cuts clean.\"
        -- Jacqueline Carey, Kushiel's Avatar


\"I never make a prediction that can be proved wrong within 24 hours.\"
        -- Louis Rukeyser


\"You have been asking what you could do in the great events that are now stirring, and have found that you could do nothing. But that is because your suffering has caused you to phrase the question in the wrong way... Instead of asking what you could do, you ought to have been asking what needs to be done.\"
        -- Steven Brust, The Paths of the Dead


\"In The Brothers Karamazov, Alyosha expresses the idea which panicked Dostoyevski more than any other: Without God, 'everything is lawful'. But as Mohammed Atta can explain, the opposite is true. Without God, murder is forbidden by human law; it is only for those acting on behalf of God, that everything is permitted.\"
        -- Jonathan Wallace


\"It's possible to describe anything in mathematical notation. I recall seeing some paper once in which someone had created a mathematical description of C. (I forget whether or not this included the preprocessor.) As an achievement, this is somewhat like building a full-size model of the Eiffel Tower out of tongue depressors. It's clearly not the act of a talentless man, but you have to wonder what he said when he applied for his grant.\"
       -- Mencius Moldbug


\"If we are fervently passionate about the idea that fire is hot, we are more rational than the man who calmly and quietly says fire is cold.\"
        -- Tom McCabe


\"You are only as strong as your weakest delusion.\"
        -- Common Sense Camp

" } }, { "_id": "2MqXKvBym3kRxvJMv", "title": "Trust in Math", "pageUrl": "https://www.lesswrong.com/posts/2MqXKvBym3kRxvJMv/trust-in-math", "postedAt": "2008-01-15T04:25:04.000Z", "baseScore": 23, "voteCount": 21, "commentCount": 51, "url": null, "contents": { "documentId": "2MqXKvBym3kRxvJMv", "html": "

Followup to: Expecting Beauty


I was once reading a Robert Heinlein story - sadly I neglected to note down which story, but I do think it was a Heinlein - where one of the characters says something like, \"Logic is a fine thing, but I have seen a perfectly logical proof that 2 = 1.\"  Authors are not to be confused with characters, but the line is voiced by one of Heinlein's trustworthy father figures.  I find myself worried that Heinlein may have meant it.


The classic proof that 2 = 1 runs thus.  First, let x = y = 1.  Then:

  1. x = y
  2. x² = xy
  3. x² - y² = xy - y²
  4. (x + y)(x - y) = y(x - y)
  5. x + y = y
  6. 2 = 1

Now, you could look at that, and shrug, and say, \"Well, logic doesn't always work.\"


Or, if you felt that math had rightfully earned just a bit more credibility than that, over the last thirty thousand years, then you might suspect the flaw lay in your use of math, rather than Math Itself.


You might suspect that the proof was not, in fact, \"perfectly logical\".


The novice goes astray and says:  \"The Art failed me.\"
The master goes astray and says:  \"I failed my Art.\"


Is this - gasp! - faith?  To believe that math is consistent, when you have seen with your own eyes a proof that it is not?  Are you supposed to just ignore the contrary evidence, my good Bayesian?


As I have remarked before, it seems worthwhile to distinguish \"faith\" that the sun will rise in the east just like the last hundred thousand times observed, from \"faith\" that tomorrow a green goblin will give you a bag of gold doubloons.  When first-order arithmetic has been observed to be internally consistent over the last ten million theorems proved in it, and you see a seeming proof of inconsistency, it is, perhaps, reasonable to double-check the proof.


You're not going to ignore the contrary evidence.  You're going to double-check it.  You're going to also take into account the last ten million times that first-order arithmetic has proven consistent, when you evaluate your new posterior confidence that 2 = 1 is not perfectly logical.  On that basis, you are going to evaluate a high probability that, if you check for a flaw, you are likely to find one.


But isn't this motivated skepticism?   The most fearful bane of students of bias?  You're applying a stronger standard of checking to incongruent evidence than congruent evidence?


Yes.  So it is necessary to be careful around this sort of reasoning, because it can induce belief hysteresis - a case where your final beliefs end up determined by the order in which you see the evidence.  When you add decision theory, unlike the case of pure probability theory, you have to decide whether to take costly actions to look for additional evidence, and you will do this based on the evidence you have seen so far.


Perhaps you should think to yourself, \"Huh, if I didn't spot this flaw at first sight, then I may have accepted some flawed congruent evidence too.  What other mistaken proofs do I have in my head, whose absurdity is not at first apparent?\"  Maybe you should apply stronger scrutiny to the next piece of congruent evidence you hear, just to balance things out.


Real faith, blind faith, would be if you looked at the proof and shrugged and said, \"Seems like a valid proof to me, but I don't care, I believe in math.\"  That would be discarding the evidence.


You have a doubt.  Move to resolve it.  That is the purpose of a doubt.  After all, if the proof does hold up, you will have to discard first-order arithmetic.  It's not acceptable to be walking around with your mind containing both the belief that arithmetic is consistent, and what seems like a valid proof that 2 = 1.


Oh, and the flaw in the proof?  Simple technique for finding it:  Substitute 1 for both x and y, concretely evaluate the arithmetic on both sides of the equation, and find the first line where a true equation is followed by a false equation.  Whatever step was performed between those two equations, must have been illegal - illegal for some general reason, mind you; not illegal just because it led to a conclusion you don't like.
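
In code, that substitute-and-evaluate technique might look like this - a minimal Python sketch that checks each line of the proof at x = y = 1, without naming the flaw:

    x = y = 1
    proof = [
        ("x = y",                      x == y),
        ("x^2 = xy",                   x**2 == x*y),
        ("x^2 - y^2 = xy - y^2",       x**2 - y**2 == x*y - y**2),
        ("(x + y)(x - y) = y(x - y)",  (x + y)*(x - y) == y*(x - y)),
        ("x + y = y",                  x + y == y),
        ("2 = 1",                      2 == 1),
    ]
    for line, holds in proof:
        print(line, "->", holds)
    # The step between the last True line and the first False line
    # is the illegal one.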


That's what Heinlein should have looked for - if, perhaps, he'd had a bit more faith in algebra.


Added: Andrew2 says the character was Jubal from Stranger in a Strange Land.


Charlie says that Heinlein did graduate work in math at UCLA and was a hardcore formalist.  I guess either Jubal wasn't expressing an authorial opinion, or Heinlein meant to convey \"deceptively logical-seeming\" by the phrase \"perfectly logical\".


If you don't already know the flaw in the algebra, there are spoilers in the comments ahead.

" } }, { "_id": "bkSkRwo9SRYxJMiSY", "title": "Beautiful Probability", "pageUrl": "https://www.lesswrong.com/posts/bkSkRwo9SRYxJMiSY/beautiful-probability", "postedAt": "2008-01-14T07:19:47.000Z", "baseScore": 117, "voteCount": 83, "commentCount": 124, "url": null, "contents": { "documentId": "bkSkRwo9SRYxJMiSY", "html": "

Should we expect rationality to be, on some level, simple?  Should we search and hope for underlying beauty in the arts of belief and choice?


Let me introduce this issue by borrowing a complaint of the late great Bayesian Master, E. T. Jaynes (1990):


\"Two medical researchers use the same treatment independently, in different hospitals.  Neither would stoop to falsifying the data, but one had decided beforehand that because of finite resources he would stop after treating N=100 patients, however many cures were observed by then.  The other had staked his reputation on the efficacy of the treatment, and decided he would not stop until he had data indicating a rate of cures definitely greater than 60%, however many patients that might require.  But in fact, both stopped with exactly the same data:  n = 100 [patients], r = 70 [cures].  Should we then draw different conclusions from their experiments?\"  (Presumably the two control groups also had equal results.)


According to old-fashioned statistical procedure - which I believe is still being taught today - the two researchers have performed different experiments with different stopping conditions.  The two experiments could have terminated with different data, and therefore represent different tests of the hypothesis, requiring different statistical analyses.  It's quite possible that the first experiment will be \"statistically significant\", the second not.


Whether or not you are disturbed by this says a good deal about your attitude toward probability theory, and indeed, rationality itself.


Non-Bayesian statisticians might shrug, saying, \"Well, not all statistical tools have the same strengths and weaknesses, y'know - a hammer isn't like a screwdriver - and if you apply different statistical tools you may get different results, just like using the same data to compute a linear regression or train a regularized neural network.  You've got to use the right tool for the occasion.  Life is messy -\"


And then there's the Bayesian reply:  \"Excuse you?  The evidential impact of a fixed experimental method, producing the same data, depends on the researcher's private thoughts?  And you have the nerve to accuse us of being 'too subjective'?\"


If Nature is one way, the likelihood of the data coming out the way we have seen will be one thing.  If Nature is another way, the likelihood of the data coming out that way will be something else.  But the likelihood of a given state of Nature producing the data we have seen, has nothing to do with the researcher's private intentions.  So whatever our hypotheses about Nature, the likelihood ratio is the same, and the evidential impact is the same, and the posterior belief should be the same, between the two experiments.  At least one of the two Old Style methods must discard relevant information - or simply do the wrong calculation - for the two methods to arrive at different answers.
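
A minimal Python sketch of the point, taking the cure rate theta as the hypothesis about Nature: any stopping rule multiplies both likelihoods by the same theta-independent constant, which cancels out of the ratio.

    def likelihood_kernel(theta, r=70, n=100):
        # Theta-dependent part of P(data | theta); the stopping rule
        # contributes only a constant factor that does not involve theta.
        return theta**r * (1 - theta)**(n - r)

    # The likelihood ratio between any two hypotheses is therefore
    # the same for both researchers' experiments.
    print(likelihood_kernel(0.7) / likelihood_kernel(0.6))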


The ancient war between the Bayesians and the accursèd frequentists stretches back through decades, and I'm not going to try to recount that elder history in this blog post.


But one of the central conflicts is that Bayesians expect probability theory to be... what's the word I'm looking for?  \"Neat?\"  \"Clean?\"  \"Self-consistent?\"


As Jaynes says, the theorems of Bayesian probability are just that, theorems in a coherent proof system.  No matter what derivations you use, in what order, the results of Bayesian probability theory should always be consistent - every theorem compatible with every other theorem.


If you want to know the sum of 10 + 10, you can redefine it as (2 * 5) + (7 + 3) or as (2 * (4 + 6)) or use whatever other legal tricks you like, but the result always has to come out to be the same, in this case, 20.  If it comes out as 20 one way and 19 the other way, then you may conclude you did something illegal on at least one of the two occasions.  (In arithmetic, the illegal operation is usually division by zero; in probability theory, it is usually an infinity that was not taken as the limit of a finite process.)


If you get the result 19 = 20, look hard for that error you just made, because it's unlikely that you've sent arithmetic itself up in smoke.  If anyone should ever succeed in deriving a real contradiction from Bayesian probability theory - like, say, two different evidential impacts from the same experimental method yielding the same results - then the whole edifice goes up in smoke.  Along with set theory, 'cause I'm pretty sure ZF provides a model for probability theory.


Math!  That's the word I was looking for.  Bayesians expect probability theory to be math.  That's why we're interested in Cox's Theorem and its many extensions, showing that any representation of uncertainty which obeys certain constraints has to map onto probability theory.  Coherent math is great, but unique math is even better.


And yet... should rationality be math?  It is by no means a foregone conclusion that probability should be pretty.  The real world is messy - so shouldn't you need messy reasoning to handle it?  Maybe the non-Bayesian statisticians, with their vast collection of ad-hoc methods and ad-hoc justifications, are strictly more competent because they have a strictly larger toolbox.  It's nice when problems are clean, but they usually aren't, and you have to live with that.


After all, it's a well-known fact that you can't use Bayesian methods on many problems because the Bayesian calculation is computationally intractable.  So why not let many flowers bloom?  Why not have more than one tool in your toolbox?


That's the fundamental difference in mindset.  Old School statisticians thought in terms of tools, tricks to throw at particular problems.  Bayesians - at least this Bayesian, though I don't think I'm speaking only for myself - we think in terms of laws.


Looking for laws isn't the same as looking for especially neat and pretty tools.  The second law of thermodynamics isn't an especially neat and pretty refrigerator.


The Carnot cycle is an ideal engine - in fact, the ideal engine.  No engine powered by two heat reservoirs can be more efficient than a Carnot engine.  As a corollary, all thermodynamically reversible engines operating between the same heat reservoirs are equally efficient.


But, of course, you can't use a Carnot engine to power a real car.  A real car's engine bears the same resemblance to a Carnot engine that the car's tires bear to perfect rolling cylinders.


Clearly, then, a Carnot engine is a useless tool for building a real-world car.  The second law of thermodynamics, obviously, is not applicable here.  It's too hard to make an engine that obeys it, in the real world.  Just ignore thermodynamics - use whatever works.


This is the sort of confusion that I think reigns over they who still cling to the Old Ways.


No, you can't always do the exact Bayesian calculation for a problem.  Sometimes you must seek an approximation; often, indeed.  This doesn't mean that probability theory has ceased to apply, any more than your inability to calculate the aerodynamics of a 747 on an atom-by-atom basis implies that the 747 is not made out of atoms.  Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation - and fails to the extent that it departs.


Bayesianism's coherence and uniqueness proofs cut both ways.  Just as any calculation that obeys Cox's coherency axioms (or any of the many reformulations and generalizations) must map onto probabilities, so too, anything that is not Bayesian must fail one of the coherency tests.  This, in turn, opens you to punishments like Dutch-booking (accepting combinations of bets that are sure losses, or rejecting combinations of bets that are sure gains).
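
A toy Dutch book, sketched in Python for a hypothetical agent whose degrees of belief in an event and its complement sum to more than 1:

    # Incoherent beliefs: P(A) + P(not-A) = 1.4
    p_A, p_not_A = 0.7, 0.7

    # The agent pays p dollars for each ticket worth $1 if its event occurs.
    cost = p_A + p_not_A   # $1.40 for the pair of tickets
    payoff = 1.0           # exactly one ticket pays off, however A turns out
    print(cost - payoff)   # 0.40: a guaranteed loss either way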


You may not be able to compute the optimal answer.  But whatever approximation you use, both its failures and successes will be explainable in terms of Bayesian probability theory.  You may not know the explanation; that does not mean no explanation exists.


So you want to use a linear regression, instead of doing Bayesian updates?  But look to the underlying structure of the linear regression, and you see that it corresponds to picking the best point estimate given a Gaussian likelihood function and a uniform prior over the parameters.


You want to use a regularized linear regression, because that works better in practice?  Well, that corresponds (says the Bayesian) to having a Gaussian prior over the weights.
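
A quick numerical sketch of that claimed correspondence, using NumPy; the penalty strength lam here stands in for the ratio of noise variance to prior variance:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

    # Ridge regression: minimize ||y - Xw||^2 + lam * ||w||^2.
    lam = 0.1
    w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

    # The identical formula is the MAP estimate under a Gaussian
    # likelihood and a zero-mean Gaussian prior on the weights.
    print(w_ridge)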


Sometimes you can't use Bayesian methods literally; often, indeed.  But when you can use the exact Bayesian calculation that uses every scrap of available knowledge, you are done.  You will never find a statistical method that yields a better answer.  You may find a cheap approximation that works excellently nearly all the time, and it will be cheaper, but it will not be more accurate.  Not unless the other method uses knowledge, perhaps in the form of disguised prior information, that you are not allowing into the Bayesian calculation; and then when you feed the prior information into the Bayesian calculation, the Bayesian calculation will again be equal or superior.


When you use an Old Style ad-hoc statistical tool with an ad-hoc (but often quite interesting) justification, you never know if someone else will come up with an even more clever tool tomorrow.  But when you can directly use a calculation that mirrors the Bayesian law, you're done - like managing to put a Carnot heat engine into your car.  It is, as the saying goes, \"Bayes-optimal\".


It seems to me that the toolboxers are looking at the sequence of cubes {1, 8, 27, 64, 125, ...} and pointing to the first differences {7, 19, 37, 61, ...} and saying \"Look, life isn't always so neat - you've got to adapt to circumstances.\"  And the Bayesians are pointing to the third differences, the underlying stable level {6, 6, 6, 6, 6, ...}.  And the critics are saying, \"What the heck are you talking about?  It's 7, 19, 37 not 6, 6, 6.  You are oversimplifying this messy problem; you are too attached to simplicity.\"


It's not necessarily simple on a surface level.  You have to dive deeper than that to find stability.


Think laws, not tools.  Needing to calculate approximations to a law doesn't change the law.  Planes are still atoms, they aren't governed by special exceptions in Nature for aerodynamic calculations.  The approximation exists in the map, not in the territory.  You can know the second law of thermodynamics, and yet apply yourself as an engineer to build an imperfect car engine.  The second law does not cease to be applicable; your knowledge of that law, and of Carnot cycles, helps you get as close to the ideal efficiency as you can.


We aren't enchanted by Bayesian methods merely because they're beautiful.  The beauty is a side effect.  Bayesian theorems are elegant, coherent, optimal, and provably unique because they are laws.


Addendum: Cyan directs us to chapter 37 of MacKay's excellent statistics book, free online, for a more thorough explanation of the opening problem.


Jaynes, E. T. (1990.) Probability Theory as Logic. In: P. F. Fougere (Ed.), Maximum Entropy and Bayesian Methods. Kluwer Academic Publishers.


MacKay, D. (2003.) Information Theory, Inference, and Learning Algorithms. Cambridge: Cambridge University Press.

" } }, { "_id": "oKiy7YwGToaYXdvnj", "title": "Is Reality Ugly?", "pageUrl": "https://www.lesswrong.com/posts/oKiy7YwGToaYXdvnj/is-reality-ugly", "postedAt": "2008-01-12T22:26:23.000Z", "baseScore": 76, "voteCount": 61, "commentCount": 48, "url": null, "contents": { "documentId": "oKiy7YwGToaYXdvnj", "html": "

Yesterday I talked about the cubes {1, 8, 27, 64, 125, ...} and how their first differences {7, 19, 37, 61, ...} might at first seem to lack an obvious pattern, but taking the second differences {12, 18, 24, ...} takes you down to the simply related level.  Taking the third differences {6, 6, ...} brings us to the perfectly stable level, where chaos dissolves into order.


But this (as I noted) is a handpicked example.  Perhaps the "messy real world" lacks the beauty of these abstract mathematical objects?  Perhaps it would be more appropriate to talk about neuroscience or gene expression networks?


Abstract math, being constructed solely in imagination, arises from simple foundations - a small set of initial axioms - and is a closed system; conditions that might seem unnaturally conducive to neatness.


Which is to say:  In pure math, you don't have to worry about a tiger leaping out of the bushes and eating Pascal's Triangle.


So is the real world uglier than mathematics?


Strange that people ask this.  I mean, the question might have been sensible two and a half millennia ago...

Back when the Greek philosophers were debating what this "real world" thingy might be made of, there were many positions.  Heraclitus said, "All is fire."  Thales said, "All is water."  Pythagoras said, "All is number."


Score:    Heraclitus 0    Thales 0    Pythagoras 1


Beneath the complex forms and shapes of the surface world, there is a simple level, an exact and stable level, whose laws we name "physics".  This discovery, the Great Surprise, has already taken place at our point in human history - but it does not do to forget that it was surprising.  Once upon a time, people went in search of underlying beauty, with no guarantee of finding it; and once upon a time, they found it; and now it is a known thing, and taken for granted.


Then why can't we predict the location of every tiger in the bushes as easily as we predict the sixth cube?


I count three sources of uncertainty even within worlds of pure math - two obvious sources, and one not so obvious.


The first source of uncertainty is that even a creature of pure math, living embedded in a world of pure math, may not know the math.   Humans walked the Earth long before Galileo/Newton/Einstein discovered the law of gravity that prevents us from being flung off into space.  You can be governed by stable fundamental rules without knowing them.  There is no law of physics which says that laws of physics must be explicitly represented, as knowledge, in brains that run under them.


We do not yet have the Theory of Everything.  Our best current theories are things of math, but they are not perfectly integrated with each other.  The most probable explanation is that - as has previously proved to be the case - we are seeing surface manifestations of deeper math.  So by far the best guess is that reality is made of math; but we do not fully know which math, yet.


But physicists have to construct huge particle accelerators to distinguish between theories - to manifest their remaining uncertainty in any visible fashion.  That physicists must go to such lengths to be unsure, suggests that this is not the source of our uncertainty about stock prices.


The second obvious source of uncertainty is that even when you know all the relevant laws of physics, you may not have enough computing power to extrapolate them.  We know every fundamental physical law that is relevant to a chain of amino acids folding itself into a protein.  But we still can't predict the shape of the protein from the amino acids.  Some tiny little 5-nanometer molecule that folds in a microsecond is too much information for current computers to handle (never mind tigers and stock prices).  Our frontier efforts in protein folding use clever approximations, rather than the underlying Schrödinger equation.  When it comes to describing a 5-nanometer object using really basic physics, over quarks - well, you don't even bother trying.


We have to use instruments like X-ray crystallography and NMR to discover the shapes of proteins that are fully determined by physics we know and a DNA sequence we know.  We are not logically omniscient; we cannot see all the implications of our thoughts; we do not know what we believe.


The third source of uncertainty is the most difficult to understand, and Nick Bostrom has written a book about it.  Suppose that the sequence {1, 8, 27, 64, 125, ...} exists; suppose that this is a fact.  And suppose that atop each cube is a little person - one person per cube - and suppose that this is also a fact.


If you stand on the outside and take a global perspective - looking down from above at the sequence of cubes and the little people perched on top - then these two facts say everything there is to know about the sequence and the people.


But if you are one of the little people perched atop a cube, and you know these two facts, there is still a third piece of information you need to make predictions:  "Which cube am I standing on?"


You expect to find yourself standing on a cube; you do not expect to find yourself standing on the number 7.  Your anticipations are definitely constrained by your knowledge of the basic physics; your beliefs are falsifiable.  But you still have to look down to find out whether you're standing on 1728 or 5177717.  If you can do fast mental arithmetic, then seeing that the first two digits of a four-digit cube are 17__ will be sufficient to guess that the last digits are 2 and 8.  Otherwise you may have to look to discover the 2 and 8 as well.
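
(A quick Python check that "17__" pins down a unique four-digit cube:)

    # Four-digit cubes run from 10^3 through 21^3; only one starts with "17".
    print([k**3 for k in range(10, 22) if str(k**3).startswith("17")])  # [1728]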


To figure out what the night sky should look like, it's not enough to know the laws of physics.  It's not even enough to have logical omniscience over their consequences.  You have to know where you are in the universe.  You have to know that you're looking up at the night sky from Earth.  The information required is not just the information to locate Earth in the visible universe, but in the entire universe, including all the parts that our telescopes can't see because they are too distant, and different inflationary universes, and alternate Everett branches.


It's a good bet that "uncertainty about initial conditions at the boundary" is really indexical uncertainty.  But if not, it's empirical uncertainty, uncertainty about how the universe is from a global perspective, which puts it in the same class as uncertainty about fundamental laws.


Wherever our best guess is that the "real world" has an irretrievably messy component, it is because of the second and third sources of uncertainty - logical uncertainty and indexical uncertainty.


Ignorance of fundamental laws does not tell you that a messy-looking pattern really is messy.  It might just be that you haven't figured out the order yet.


But when it comes to messy gene expression networks, we've already found the hidden beauty - the stable level of underlying physics.  Because we've already found the master order, we can guess that we  won't find any additional secret patterns that will make biology as easy as a sequence of cubes.  Knowing the rules of the game, we know that the game is hard.  We don't have enough computing power to do protein chemistry from physics (the second source of uncertainty) and evolutionary pathways may have gone different ways on different planets (the third source of uncertainty).  New discoveries in basic physics won't help us here.


If you were an ancient Greek staring at the raw data from a biology experiment, you would be much wiser to look for some hidden structure of Pythagorean elegance, all the proteins lining up in a perfect icosahedron.  But in biology we already know where the Pythagorean elegance is, and we know it's too far down to help us overcome our indexical and logical uncertainty.


Similarly, we can be confident that no one will ever be able to predict the results of certain quantum experiments, only because our fundamental theory tells us quite definitely that different versions of us will see different results.  If your knowledge of fundamental laws tells you that there's a sequence of cubes, and that there's one little person standing on top of each cube, and that the little people are all alike except for being on different cubes, and that you are one of these little people, then you know that you have no way of deducing which cube you're on except by looking.


The best current knowledge says that the "real world" is a perfectly regular, deterministic, and very large mathematical object which is highly expensive to simulate.  So "real life" is less like predicting the next cube in a sequence of cubes, and more like knowing that lots of little people are standing on top of cubes, but not knowing who you personally are, and also not being very good at mental arithmetic.  Our knowledge of the rules does constrain our anticipations, quite a bit, but not perfectly.


There, now doesn't that sound like real life?


But uncertainty exists in the map, not in the territory.  If we are ignorant of a phenomenon, that is a fact about our state of mind, not a fact about the phenomenon itself.  Empirical uncertainty, logical uncertainty, and indexical uncertainty are just names for our own bewilderment.  The best current guess is that the world is math and the math is perfectly regular.  The messiness is only in the eye of the beholder.


Even the huge morass of the blogosphere is embedded in this perfect physics, which is ultimately as orderly as {1, 8, 27, 64, 125, ...}.


So the Internet is not a big muck... it's a series of cubes.

" } }, { "_id": "uichBYWKcGAqRZZdP", "title": "Expecting Beauty", "pageUrl": "https://www.lesswrong.com/posts/uichBYWKcGAqRZZdP/expecting-beauty", "postedAt": "2008-01-12T03:00:04.000Z", "baseScore": 27, "voteCount": 24, "commentCount": 7, "url": null, "contents": { "documentId": "uichBYWKcGAqRZZdP", "html": "

Followup to: Beautiful Math

If you looked at the sequence {1, 4, 9, 16, 25, ...} and didn't recognize the square numbers, you might still essay an excellent-seeming prediction of future items in the sequence by noticing that the table of first differences is {3, 5, 7, 9, ...}.  Indeed, your prediction would be perfect, though you have no way of knowing this without peeking at the generator.  The correspondence can be shown algebraically or even geometrically (see yesterday's post).  It's really rather elegant.


Whatever people praise, they tend to praise too much; and there are skeptics who think that the pursuit of elegance is like unto a disease, which produces neat mathematics in opposition to the messiness of the real world.  "You got lucky," they say, "and you won't always be lucky.  If you expect that kind of elegance, you'll distort the world to match your expectations - chop off all the parts of Life that don't fit into your nice little pattern."


I mean, suppose Life hands you the sequence {1, 8, 27, 64, 125, ...}.  When you take the first differences, you get {7, 19, 37, 61, ...}.  All that these numbers have in common is that they're primes, and they aren't even sequential primes.  Clearly, there isn't the neat order here that we saw in the squares.

You might try to impose order, by insisting that the first differences must be evenly spaced, and any deviations are experimental errors - or better yet, we just won't think about them.  "You will say," says the skeptic, "that 'The first differences are spaced around 20 apart and land on prime numbers, so that the next difference is probably 83, which makes the next number 208.'  But reality comes back and says 216."


Serves you right, expecting neatness and elegance when there isn't any there.  You were too addicted to absolutes; you had too much need for closure.  Behold the perils of - gasp! - DUN DUN DUN - reductionism.


You can guess, from the example I chose, that I don't think this is the best way to look at the problem.  Because, in the example I chose, it's not that no order exists, but that you have to look a little deeper to find it.  The sequence {7, 19, 37, 61, ...} doesn't leap out at you - you might not recognize it, if you met it on the street - but take the second differences and you find {12, 18, 24, ...}.  Take the third differences and you find {6, 6, ...}.


You had to dig deeper to find the stable level, but it was still there - in the example I chose.


Someone who grasped too quickly at order, who demanded closure right now, who forced the pattern, might never find the stable level.  If you tweak the table of first differences to make them "more even", fit your own conception of aesthetics before you found the math's own rhythm, then the second differences and third differences will come out wrong.  Maybe you won't even bother to take the second differences and third differences.  Since, once you've forced the first differences to conform to your own sense of aesthetics, you'll be happy - or you'll insist in a loud voice that you're happy.


None of this says a word against - gasp! - reductionism.  The order is there, it's just better-hidden.  Is the moral of the tale (as I told it) to forsake the search for beauty?  Is the moral to take pride in the glorious cosmopolitan sophistication of confessing, "It is ugly"?  No, the moral is to reduce at the right time, to wait for an opening before you slice, to not prematurely terminate the search for beauty.  So long as you can refuse to see beauty that isn't there, you have already taken the needful precaution if it all turns out ugly.


But doesn't it take - gasp! - faith to search for a beauty you haven't found yet?


As I recently remarked, if you say, "Many times I have witnessed the turning of the seasons, and tomorrow I expect the Sun will rise in its appointed place in the east," then that is not a certainty.  And if you say, "I expect a purple polka-dot fairy to come out of my nose and give me a bag of money," that is not a certainty.  But they are not the same shade of uncertainty, and it seems insufficiently narrow to call them both "faith".


Looking for mathematical beauty you haven't found yet, is not so sure as expecting the Sun to rise in the east.  But neither does it seem like the same shade of uncertainty as expecting a purple polka-dot fairy - not after you ponder the last fifty-seven thousand cases where humanity found hidden order.


And yet in mathematics the premises and axioms are closed systems - can we expect the messy real world to reveal hidden beauty?  Tune in next time on Overcoming Bias to find out!

" } }, { "_id": "Rjw4qhEMvqskhL6Gm", "title": "Beautiful Math", "pageUrl": "https://www.lesswrong.com/posts/Rjw4qhEMvqskhL6Gm/beautiful-math", "postedAt": "2008-01-10T22:43:55.000Z", "baseScore": 33, "voteCount": 29, "commentCount": 35, "url": null, "contents": { "documentId": "Rjw4qhEMvqskhL6Gm", "html": "

Consider the sequence {1, 4, 9, 16, 25, ...}.  You recognize these as the square numbers, the sequence A_k = k².  Suppose you did not recognize this sequence at a first glance.  Is there any way you could predict the next item in the sequence?  Yes:  You could take the first differences, and end up with:


{4 - 1, 9 - 4, 16 - 9, 25 - 16, ...} = {3, 5, 7, 9, ...}


And if you don't recognize these as successive odd numbers, you are still not defeated; if you produce a table of the second differences, you will find:


{5 - 3, 7 - 5, 9 - 7, ...} = {2, 2, 2, ...}


If you cannot recognize this as the number 2 repeating, then you're hopeless.


But if you predict that the next second difference is also 2, then you can see the next first difference must be 11, and the next item in the original sequence must be 36 - which, you soon find out, is correct.
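
If you'd rather check the whole procedure mechanically, it fits in a few lines of Python:

    def differences(seq):
        return [b - a for a, b in zip(seq, seq[1:])]

    squares = [k**2 for k in range(1, 8)]
    print(differences(squares))               # [3, 5, 7, 9, 11, 13]
    print(differences(differences(squares)))  # [2, 2, 2, 2, 2]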


Dig down far enough, and you discover hidden order, underlying structure, stable relations beneath changing surfaces.

The original sequence was generated by squaring successive numbers - yet we predicted it using what seems like a wholly different method, one that we could in principle use without ever realizing we were generating the squares.  Can you prove the two methods are always equivalent? - for thus far we have not proven this, but only ventured an induction.  Can you simplify the proof so that you can see it at a glance? - as Polya was fond of asking.


This is a very simple example by modern standards, but it is a very simple example of the sort of thing that mathematicians spend their whole lives looking for.


The joy of mathematics is inventing mathematical objects, and then noticing that the mathematical objects that you just created have all sorts of wonderful properties that you never intentionally built into them.  It is like building a toaster and then realizing that your invention also, for some unexplained reason, acts as a rocket jetpack and MP3 player.


Numbers, according to our best guess at history, have been invented and reinvented over the course of time.  (Apparently some artifacts from 30,000 BC have marks cut that look suspiciously like tally marks.)  But I doubt that a single one of the human beings who invented counting visualized the employment they would provide to generations of mathematicians.  Or the excitement that would someday surround Fermat's Last Theorem, or the factoring problem in RSA cryptography... and yet these are as implicit in the definition of the natural numbers, as are the first and second difference tables implicit in the sequence of squares.


This is what creates the impression of a mathematical universe that is "out there" in Platonia, a universe which humans are exploring rather than creating.  Our definitions teleport us to various locations in Platonia, but we don't create the surrounding environment.  It seems this way, at least, because we don't remember creating all the wonderful things we find.  The inventors of the natural numbers teleported to Countingland, but did not create it, and later mathematicians spent centuries exploring Countingland and discovering all sorts of things no one in 30,000 BC could begin to imagine.


To say that human beings "invented numbers" - or invented the structure implicit in numbers - seems like claiming that Neil Armstrong hand-crafted the Moon.  The universe existed before there were any sentient beings to observe it, which implies that physics preceded physicists.  This is a puzzle, I know; but if you claim the physicists came first, it is even more confusing because instantiating a physicist takes quite a lot of physics.  Physics involves math, so math - or at least that portion of math which is contained in physics - must have preceded mathematicians.  Otherwise, there would have been no structured universe running long enough for innumerate organisms to evolve for the billions of years required to produce mathematicians.


The amazing thing is that math is a game without a designer, and yet it is eminently playable.


Oh, and to prove that the pattern in the difference tables always holds:

(k + 1)² = k² + (2k + 1)

As for seeing it at a glance:

[Image: "Squares"]

Think the square problem is too trivial to be worth your attention?  Think there's nothing amazing about the tables of first and second differences?  Think it's so obviously implicit in the squares as to not count as a separate discovery?  Then consider the cubes:

1, 8, 27, 64...

Now, without calculating it directly, and without doing any algebra, can you see at a glance what the cubes' third differences must be?


And of course, when you know what the cubes' third difference is, you will realize that it could not possibly have been anything else...

" } }, { "_id": "QGkYCwyC7wTDyt3yT", "title": "0 And 1 Are Not Probabilities", "pageUrl": "https://www.lesswrong.com/posts/QGkYCwyC7wTDyt3yT/0-and-1-are-not-probabilities", "postedAt": "2008-01-10T06:58:50.000Z", "baseScore": 120, "voteCount": 121, "commentCount": 150, "url": null, "contents": { "documentId": "QGkYCwyC7wTDyt3yT", "html": "

One, two, and three are all integers, and so is negative four. If you keep counting up, or keep counting down, you’re bound to encounter a whole lot more integers. You will not, however, encounter anything called “positive infinity” or “negative infinity,” so these are not integers.

Positive and negative infinity are not integers, but rather special symbols for talking about the behavior of integers. People sometimes say something like, “5 + infinity = infinity,” because if you start at 5 and keep counting up without ever stopping, you’ll get higher and higher numbers without limit. But it doesn’t follow from this that “infinity - infinity = 5.” You can’t count up from 0 without ever stopping, and then count down without ever stopping, and then find yourself at 5 when you’re done.

From this we can see that infinity is not only not-an-integer, it doesn’t even behave like an integer. If you unwisely try to mix up infinities with integers, you’ll need all sorts of special new inconsistent-seeming behaviors which you don’t need for 1, 2, 3 and other actual integers.

Even though infinity isn’t an integer, you don’t have to worry about being left at a loss for numbers. Although people have seen five sheep, millions of grains of sand, and septillions of atoms, no one has ever counted an infinity of anything. The same with continuous quantities—people have measured dust specks a millimeter across, animals a meter across, cities kilometers across, and galaxies thousands of lightyears across, but no one has ever measured anything an infinity across. In the real world, you don’t need a whole lot of infinity.1

In the usual way of writing probabilities, probabilities are between 0 and 1. A coin might have a probability of 0.5 of coming up tails, or the weatherman might assign probability 0.9 to rain tomorrow.

This isn’t the only way of writing probabilities, though. For example, you can transform probabilities into odds via the transformation O = (P/(1 - P)). So a probability of 50% would go to odds of 0.5/0.5 or 1, usually written 1:1, while a probability of 0.9 would go to odds of 0.9/0.1 or 9, usually written 9:1. To take odds back to probabilities you use P = (O∕(1 + O)), and this is perfectly reversible, so the transformation is an isomorphism—a two-way reversible mapping. Thus, probabilities and odds are isomorphic, and you can use one or the other according to convenience.

For example, it’s more convenient to use odds when you’re doing Bayesian updates. Let’s say that I roll a six-sided die: If any face except 1 comes up, there’s a 10% chance of hearing a bell, but if the face 1 comes up, there’s a 20% chance of hearing the bell. Now I roll the die, and hear a bell. What are the odds that the face showing is 1? Well, the prior odds are 1:5 (corresponding to the real number 1/5 = 0.20) and the likelihood ratio is 0.2:0.1 (corresponding to the real number 2) and I can just multiply these two together to get the posterior odds 2:5 (corresponding to the real number 2/5 or 0.40). Then I convert back into a probability, if I like, and get (0.4/1.4) = 2/7 = ~29%.

So odds are more manageable for Bayesian updates—if you use probabilities, you’ve got to deploy Bayes’s Theorem in its complicated version. But probabilities are more convenient for answering questions like “If I roll a six-sided die, what’s the chance of seeing a number from 1 to 4?” You can add up the probabilities of 1/6 for each side and get 4/6, but you can’t add up the odds ratios of 0.2 for each side and get an odds ratio of 0.8.
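
In code, the conversions and the die-and-bell update above look like this (a minimal Python sketch):

    def prob_to_odds(p):
        return p / (1 - p)

    def odds_to_prob(o):
        return o / (1 + o)

    prior_odds = 1 / 5            # 1:5 that the face showing is 1
    likelihood_ratio = 0.2 / 0.1  # the bell is twice as likely given a 1
    posterior_odds = prior_odds * likelihood_ratio
    print(posterior_odds)                # 0.4, i.e. odds of 2:5
    print(odds_to_prob(posterior_odds))  # 2/7, about 0.29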

Why am I saying all this? To show that “odds ratios” are just as legitimate a way of mapping uncertainties onto real numbers as “probabilities.” Odds ratios are more convenient for some operations, probabilities are more convenient for others. A famous proof called Cox’s Theorem (plus various extensions and refinements thereof) shows that all ways of representing uncertainties that obey some reasonable-sounding constraints, end up isomorphic to each other.

Why does it matter that odds ratios are just as legitimate as probabilities? Probabilities as ordinarily written are between 0 and 1, and both 0 and 1 look like they ought to be readily reachable quantities—it’s easy to see 1 zebra or 0 unicorns. But when you transform probabilities onto odds ratios, 0 goes to 0, but 1 goes to positive infinity. Now absolute truth doesn’t look like it should be so easy to reach.

A representation that makes it even simpler to do Bayesian updates is the log odds—this is how E. T. Jaynes recommended thinking about probabilities. For example, let’s say that the prior probability of a proposition is 0.0001—this corresponds to a log odds of around -40 decibels. Then you see evidence that seems 100 times more likely if the proposition is true than if it is false. This is 20 decibels of evidence. So the posterior odds are around -40 dB + 20 dB = -20 dB, that is, the posterior probability is ~0.01.
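
The same update in log odds, with 10 * log10(odds) giving decibels (a sketch of the arithmetic above):

    from math import log10

    def prob_to_db(p):
        return 10 * log10(p / (1 - p))

    def db_to_prob(db):
        odds = 10 ** (db / 10)
        return odds / (1 + odds)

    prior_db = prob_to_db(0.0001)  # about -40 decibels
    evidence_db = 10 * log10(100)  # a 100:1 likelihood ratio is 20 decibels
    print(db_to_prob(prior_db + evidence_db))  # about 0.01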

When you transform probabilities to log odds, 0 goes to negative infinity and 1 goes to positive infinity. Now both infinite certainty and infinite improbability seem a bit more out-of-reach.

In probabilities, 0.9999 and 0.99999 seem to be only 0.00009 apart, so that 0.502 is much further away from 0.503 than 0.9999 is from 0.99999. To get to probability 1 from probability 0.99999, it seems like you should need to travel a distance of merely 0.00001.

But when you transform to odds ratios, 0.502 and 0.503 go to 1.008 and 1.012, and 0.9999 and 0.99999 go to 9,999 and 99,999. And when you transform to log odds, 0.502 and 0.503 go to 0.03 decibels and 0.05 decibels, but 0.9999 and 0.99999 go to 40 decibels and 50 decibels.

When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other. That is, the log odds gives us a natural measure of spacing among degrees of confidence.

Using the log odds exposes the fact that reaching infinite certainty requires infinitely strong evidence, just as infinite absurdity requires infinitely strong counterevidence.

Furthermore, all sorts of standard theorems in probability have special cases if you try to plug 1s or 0s into them—like what happens if you try to do a Bayesian update on an observation to which you assigned probability 0.

So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers.

The main reason this would upset probability theorists is that we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1.

However, in the real world, when you roll a die, it doesn’t literally have infinite certainty of coming up some number between 1 and 6. The die might land on its edge; or get struck by a meteor; or the Dark Lords of the Matrix might reach in and write “37” on one side.

If you made a magical symbol to stand for “all possibilities I haven’t considered,” then you could marginalize over the events including this magical symbol, and arrive at a magical symbol “T” that stands for infinite certainty.

But I would rather ask whether there’s some way to derive a theorem without using magic symbols with special behaviors. That would be more elegant. Just as there are mathematicians who refuse to believe in the law of the excluded middle or infinite sets, I would like to be a probability theorist who doesn’t believe in absolute certainty.


1I should note for the more sophisticated reader that they do not need to write me with elaborate explanations of, say, the difference between ordinal numbers and cardinal numbers. I’m familiar with the different set-theoretic notions of infinity, but I don’t see a good use for them in probability theory.

" } }, { "_id": "ooypcn7qFzsMcy53R", "title": "Infinite Certainty", "pageUrl": "https://www.lesswrong.com/posts/ooypcn7qFzsMcy53R/infinite-certainty", "postedAt": "2008-01-09T06:49:08.000Z", "baseScore": 92, "voteCount": 92, "commentCount": 130, "url": null, "contents": { "documentId": "ooypcn7qFzsMcy53R", "html": "

In “Absolute Authority,” I argued that you don’t need infinite certainty:

If you have to choose between two alternatives A and B, and you somehow succeed in establishing knowably certain well-calibrated 100% confidence that A is absolutely and entirely desirable and that B is the sum of everything evil and disgusting, then this is a sufficient condition for choosing A over B. It is not a necessary condition . . . You can have uncertain knowledge of relatively better and relatively worse options, and still choose. It should be routine, in fact.

Concerning the proposition that 2 + 2 = 4, we must distinguish between the map and the territory. Given the seeming absolute stability and universality of physical laws, it’s possible that never, in the whole history of the universe, has any particle exceeded the local lightspeed limit. That is, the lightspeed limit may be not just true 99% of the time, or 99.9999% of the time, or (1 - 1/googolplex) of the time, but simply always and absolutely true.

But whether we can ever have absolute confidence in the lightspeed limit is a whole ’nother question. The map is not the territory.

It may be entirely and wholly true that a student plagiarized their assignment, but whether you have any knowledge of this fact at all—let alone absolute confidence in the belief—is a separate issue. If you flip a coin and then don’t look at it, it may be completely true that the coin is showing heads, and you may be completely unsure of whether the coin is showing heads or tails. A degree of uncertainty is not the same as a degree of truth or a frequency of occurrence.

The same holds for mathematical truths. It’s questionable whether the statement “2 + 2 = 4” or “In Peano arithmetic, SS0 + SS0 = SSSS0” can be said to be true in any purely abstract sense, apart from physical systems that seem to behave in ways similar to the Peano axioms. Having said this, I will charge right ahead and guess that, in whatever sense “2 + 2 = 4” is true at all, it is always and precisely true, not just roughly true (“2 + 2 actually equals 4.0000004”) or true 999,999,999,999 times out of 1,000,000,000,000.

I’m not totally sure what “true” should mean in this case, but I stand by my guess. The credibility of “2 + 2 = 4 is always true” far exceeds the credibility of any particular philosophical position on what “true,” “always,” or “is” means in the statement above.

This doesn’t mean, though, that I have absolute confidence that 2 + 2 = 4. See the previous discussion on how to convince me that 2 + 2 = 3, which could be done using much the same sort of evidence that convinced me that 2 + 2 = 4 in the first place. I could have hallucinated all that previous evidence, or I could be misremembering it. In the annals of neurology there are stranger brain dysfunctions than this.

So if we attach some probability to the statement “2 + 2 = 4,” then what should the probability be? What you seek to attain in a case like this is good calibration—statements to which you assign “99% probability” come true 99 times out of 100. This is actually a hell of a lot more difficult than you might think. Take a hundred people, and ask each of them to make ten statements of which they are “99% confident.” Of the 1,000 statements, do you think that around 10 will be wrong?

I am not going to discuss the actual experiments that have been done on calibration—you can find them in my book chapter on cognitive biases and global catastrophic risk1—because I’ve seen that when I blurt this out to people without proper preparation, they thereafter use it as a Fully General Counterargument, which somehow leaps to mind whenever they have to discount the confidence of someone whose opinion they dislike, and fails to be available when they consider their own opinions. So I try not to talk about the experiments on calibration except as part of a structured presentation of rationality that includes warnings against motivated skepticism.

But the observed calibration of human beings who say they are “99% confident” is not 99% accuracy.

Suppose you say that you’re 99.99% confident that 2 + 2 = 4. Then you have just asserted that you could make 10,000 independent statements, in which you repose equal confidence, and be wrong, on average, around once. Maybe for 2 + 2 = 4 this extraordinary degree of confidence would be possible: “2 + 2 = 4” is extremely simple, and mathematical as well as empirical, and widely believed socially (not with passionate affirmation but just quietly taken for granted). So maybe you really could get up to 99.99% confidence on this one.

I don’t think you could get up to 99.99% confidence for assertions like “53 is a prime number.” Yes, it seems likely, but by the time you tried to set up protocols that would let you assert 10,000 independent statements of this sort—that is, not just a set of statements about prime numbers, but a new protocol each time—you would fail more than once.2

Yet the map is not the territory: If I say that I am 99% confident that 2 + 2 = 4, it doesn’t mean that I think “2 + 2 = 4” is true to within 99% precision, or that “2 + 2 = 4” is true 99 times out of 100. The proposition in which I repose my confidence is the proposition that “2 + 2 = 4 is always and exactly true,” not the proposition “2 + 2 = 4 is mostly and usually true.”

As for the notion that you could get up to 100% confidence in a mathematical proposition—well, really now! If you say 99.9999% confidence, you’re implying that you could make one million equally fraught statements, one after the other, and be wrong, on average, about once. That’s around a solid year’s worth of talking, if you can make one assertion every 20 seconds and you talk for 16 hours a day.

Assert 99.9999999999% confidence, and you’re taking it up to a trillion. Now you’re going to talk for a hundred human lifetimes, and not be wrong even once?

Assert a confidence of (1 - 1/googolplex) and your ego far exceeds that of mental patients who think they’re God.

And a googolplex is a lot smaller than even relatively small inconceivably huge numbers like 3 ↑↑↑ 3. But even a confidence of (1 - 1/3 ↑↑↑ 3) isn’t all that much closer to PROBABILITY 1 than being 90% sure of something.

If all else fails, the hypothetical Dark Lords of the Matrix, who are right now tampering with your brain’s credibility assessment of this very sentence, will bar the path and defend us from the scourge of infinite certainty.

Am I absolutely sure of that?

Why, of course not.

    As Rafal Smigrodzki once said:
    

I would say you should be able to assign a less than 1 certainty level to the mathematical concepts which are necessary to derive Bayes’s rule itself, and still practically use it. I am not totally sure I have to be always unsure. Maybe I could be legitimately sure about something. But once I assign a probability of 1 to a proposition, I can never undo it. No matter what I see or learn, I have to reject everything that disagrees with the axiom. I don’t like the idea of not being able to change my mind, ever.


    1Eliezer Yudkowsky, “Cognitive Biases Potentially Affecting Judgment of Global Risks,” in Global Catastrophic Risks, ed. Nick Bostrom and Milan M. Ćirković (New York: Oxford University Press, 2008), 91–119.
    

2Peter de Blanc has an amusing anecdote on this point: http://www.spaceandgames.com/?p=27. (I told him not to do it again.)

" } }, { "_id": "PmQkensvTGg7nGtJE", "title": "Absolute Authority", "pageUrl": "https://www.lesswrong.com/posts/PmQkensvTGg7nGtJE/absolute-authority", "postedAt": "2008-01-08T03:33:43.000Z", "baseScore": 117, "voteCount": 106, "commentCount": 78, "url": null, "contents": { "documentId": "PmQkensvTGg7nGtJE", "html": "

The one comes to you and loftily says: “Science doesn’t really know anything. All you have are theories—you can’t know for certain that you’re right. You scientists changed your minds about how gravity works—who’s to say that tomorrow you won’t change your minds about evolution?”

Behold the abyssal cultural gap. If you think you can cross it in a few sentences, you are bound to be sorely disappointed.

In the world of the unenlightened ones, there is authority and un-authority. What can be trusted, can be trusted; what cannot be trusted, you may as well throw away. There are good sources of information and bad sources of information. If scientists have changed their stories ever in their history, then science cannot be a true Authority, and can never again be trusted—like a witness caught in a contradiction, or like an employee found stealing from the till.

Plus, the one takes for granted that a proponent of an idea is expected to defend it against every possible counterargument and confess nothing. All claims are discounted accordingly. If even the proponent of science admits that science is less than perfect, why, it must be pretty much worthless.

When someone has lived their life accustomed to certainty, you can’t just say to them, “Science is probabilistic, just like all other knowledge.” They will accept the first half of the statement as a confession of guilt; and dismiss the second half as a flailing attempt to accuse everyone else to avoid judgment.

You have admitted you are not trustworthy—so begone, Science, and trouble us no more!

One obvious source for this pattern of thought is religion, where the scriptures are alleged to come from God; therefore to confess any flaw in them would destroy their authority utterly; so any trace of doubt is a sin, and claiming certainty is mandatory whether you’re certain or not.1

But I suspect that the traditional school regimen also has something to do with it. The teacher tells you certain things, and you have to believe them, and you have to recite them back on the test. But when a student makes a suggestion in class, you don’t have to go along with it—you’re free to agree or disagree (it seems) and no one will punish you.

This experience, I fear, maps the domain of belief onto the social domains of authority, of command, of law. In the social domain, there is a qualitative difference between absolute laws and nonabsolute laws, between commands and suggestions, between authorities and unauthorities. There seems to be strict knowledge and unstrict knowledge, like a strict regulation and an unstrict regulation. Strict authorities must be yielded to, while unstrict suggestions can be obeyed or discarded as a matter of personal preference. And Science, since it confesses itself to have a possibility of error, must belong in the second class.

    (I note in passing that I see a certain similarity to they who think that if you don’t get an Authoritative probability written on a piece of paper from the teacher in class, or handed down from some similar Unarguable Source, then your uncertainty is not a matter for Bayesian probability theory.2 Someone might—gasp!—argue with your estimate of the prior probability. It thus seems to the not-fully-enlightened ones that Bayesian priors belong to the class of beliefs proposed by students, and not the class of beliefs commanded you by teachers—it is not proper knowledge.)
    

The abyssal cultural gap between the Authoritative Way and the Quantitative Way is rather annoying to those of us staring across it from the rationalist side. Here is someone who believes they have knowledge more reliable than science’s mere probabilistic guesses—such as the guess that the Moon will rise in its appointed place and phase tomorrow, just like it has every observed night since the invention of astronomical record-keeping, and just as predicted by physical theories whose previous predictions have been successfully confirmed to fourteen decimal places. And what is this knowledge that the unenlightened ones set above ours, and why? It’s probably some musty old scroll that has been contradicted eleventeen ways from Sunday, and from Monday, and from every day of the week. Yet this is more reliable than Science (they say) because it never admits to error, never changes its mind, no matter how often it is contradicted. They toss around the word “certainty” like a tennis ball, using it as lightly as a feather—while scientists are weighed down by dutiful doubt, struggling to achieve even a modicum of probability. “I’m perfect,” they say without a care in the world, “I must be so far above you, who must still struggle to improve yourselves.”

There is nothing simple you can say to them—no fast crushing rebuttal. By thinking carefully, you may be able to win over the audience, if this is a public debate. Unfortunately you cannot just blurt out, “Foolish mortal, the Quantitative Way is beyond your comprehension, and the beliefs you lightly name ‘certain’ are less assured than the least of our mighty hypotheses.” It’s a difference of life-gestalt that isn’t easy to describe in words at all, let alone quickly.

What might you try, rhetorically, in front of an audience? Hard to say . . . maybe:

But, in a way, the more interesting question is what you say to someone not in front of an audience. How do you begin the long process of teaching someone to live in a universe without certainty?

I think the first, beginning step should be understanding that you can live without certainty—that if, hypothetically speaking, you couldn’t be certain of anything, it would not deprive you of the ability to make moral or factual distinctions. To paraphrase Lois Bujold, “Don’t push harder, lower the resistance.”

One of the common defenses of Absolute Authority is something I call “The Argument from the Argument from Gray,” which runs like this:

Reversed stupidity is not intelligence. You can’t arrive at a correct answer by reversing every single line of an argument that ends with a bad conclusion—it gives the fool too much detailed control over you. Every single line must be correct for a mathematical argument to carry. And it doesn’t follow, from the fact that moral relativists say “The world isn’t black and white,” that this is false, any more than it follows, from Stalin’s belief that 2 + 2 = 4, that “2 + 2 = 4” is false. The error (and it only takes one) is in the leap from the two-color view to the single-color view, that all grays are the same shade.

It would concede far too much (indeed, concede the whole argument) to agree with the premise that you need absolute knowledge of absolutely good options and absolutely evil options in order to be moral. You can have uncertain knowledge of relatively better and relatively worse options, and still choose. It should be routine, in fact, not something to get all dramatic about.

I mean, yes, if you have to choose between two alternatives A and B, and you somehow succeed in establishing knowably certain well-calibrated 100% confidence that A is absolutely and entirely desirable and that B is the sum of everything evil and disgusting, then this is a sufficient condition for choosing A over B. It is not a necessary condition.

Oh, and: Logical fallacy: Appeal to consequences of belief.

Let’s see, what else do they need to know? Well, there’s the entire rationalist culture which says that doubt, questioning, and confession of error are not terrible shameful things.

There’s the whole notion of gaining information by looking at things, rather than being proselytized. When you look at things harder, sometimes you find out that they’re different from what you thought they were at first glance; but it doesn’t mean that Nature lied to you, or that you should give up on seeing.

Then there’s the concept of a calibrated confidence—that “probability” isn’t the same concept as the little progress bar in your head that measures your emotional commitment to an idea. It’s more like a measure of how often, pragmatically, in real life, people in a certain state of belief say things that are actually true. If you take one hundred people and ask them each to make a statement of which they are “absolutely certain,” how many of these statements will be correct? Not one hundred.

If anything, the statements that people are really fanatic about are far less likely to be correct than statements like “the Sun is larger than the Moon” that seem too obvious to get excited about. For every statement you can find of which someone is “absolutely certain,” you can probably find someone “absolutely certain” of its opposite, because such fanatic professions of belief do not arise in the absence of opposition. So the little progress bar in people’s heads that measures their emotional commitment to a belief does not translate well into a calibrated confidence—it doesn’t even behave monotonically.

As for “absolute certainty”—well, if you say that something is 99.9999% probable, it means you think you could make one million equally strong independent statements, one after the other, over the course of a solid year or so, and be wrong, on average, around once. This is incredible enough. (It’s amazing to realize we can actually get that level of confidence for “Thou shalt not win the lottery.”) So let us say nothing of probability 1.0. Once you realize you don’t need probabilities of 1.0 to get along in life, you’ll realize how absolutely ridiculous it is to think you could ever get to 1.0 with a human brain. A probability of 1.0 isn’t just certainty, it’s infinite certainty.

In fact, it seems to me that to prevent public misunderstanding, maybe scientists should go around saying “We are not infinitely certain” rather than “We are not certain.” For the latter case, in ordinary discourse, suggests you know some specific reason for doubt.

1See “Professing and Cheering,” collected in Map and Territory and findable at rationalitybook.com and lesswrong.com/rationality.

2See “Focus Your Uncertainty” in Map and Territory.

" } }, { "_id": "dLJv2CoRCgeC2mPgj", "title": "The Fallacy of Gray", "pageUrl": "https://www.lesswrong.com/posts/dLJv2CoRCgeC2mPgj/the-fallacy-of-gray", "postedAt": "2008-01-07T06:24:55.000Z", "baseScore": 310, "voteCount": 266, "commentCount": 82, "url": null, "contents": { "documentId": "dLJv2CoRCgeC2mPgj", "html": "\n\n\n\n \n\n \n\n

The Sophisticate: “The world isn’t black and white. No one does pure good or pure bad. It’s all gray. Therefore, no one is better than anyone else.”


The Zetet: “Knowing only gray, you conclude that all grays are the same shade. You mock the simplicity of the two-color view, yet you replace it with a one-color view . . .”


—Marc Stiegler, David’s Sling


I don’t know if the Sophisticate’s mistake has an official name, but I call it the Fallacy of Gray. We saw it manifested in the previous essay—the one who believed that odds of two to the power of seven hundred and fifty million to one, against, meant “there was still a chance.” All probabilities, to him, were simply “uncertain” and that meant he was licensed to ignore them if he pleased.


“The Moon is made of green cheese” and “the Sun is made of mostly hydrogen and helium” are both uncertainties, but they are not the same uncertainty.


Everything is shades of gray, but there are shades of gray so light as to be very nearly white, and shades of gray so dark as to be very nearly black. Or even if not, we can still compare shades, and say “it is darker” or “it is lighter.”


    Years ago, one of the strange little formative moments in my career as a rationalist was reading this paragraph from The Player of Games by Iain M. Banks, especially the sentence in bold:
    


A guilty system recognizes no innocents. As with any power apparatus which thinks everybody’s either for it or against it, we’re against it. You would be too, if you thought about it. The very way you think places you amongst its enemies. This might not be your fault, because every society imposes some of its values on those raised within it, but the point is that some societies try to maximize that effect, and some try to minimize it. You come from one of the latter and you’re being asked to explain yourself to one of the former. Prevarication will be more difficult than you might imagine; neutrality is probably impossible. You cannot choose not to have the politics you do; they are not some separate set of entities somehow detachable from the rest of your being; they are a function of your existence. I know that and they know that; you had better accept it.


Now, don’t write angry comments saying that, if societies impose fewer of their values, then each succeeding generation has more work to start over from scratch. That’s not what I got out of the paragraph.


What I got out of the paragraph was something which seems so obvious in retrospect that I could have conceivably picked it up in a hundred places; but something about that one paragraph made it click for me.


It was the whole notion of the Quantitative Way applied to life-problems like moral judgments and the quest for personal self-improvement. That, even if you couldn’t switch something from on to off, you could still tend to increase it or decrease it.


Is this too obvious to be worth mentioning? I say it is not too obvious, for many bloggers have said of Overcoming Bias: “It is impossible, no one can completely eliminate bias.” I don’t care if the one is a professional economist, it is clear that they have not yet grokked the Quantitative Way as it applies to everyday life and matters like personal self-improvement. That which I cannot eliminate may be well worth reducing.


Or consider an exchange between Robin Hanson and Tyler Cowen.1 Robin Hanson said that he preferred to put at least 75% weight on the prescriptions of economic theory versus his intuitions: “I try to mostly just straightforwardly apply economic theory, adding little personal or cultural judgment.” Tyler Cowen replied:


In my view there is no such thing as “straightforwardly applying economic theory” . . . theories are always applied through our personal and cultural filters and there is no other way it can be.


Yes, but you can try to minimize that effect, or you can do things that are bound to increase it. And if you try to minimize it, then in many cases I don’t think it’s unreasonable to call the output “straightforward”—even in economics.


“Everyone is imperfect.” Mohandas Gandhi was imperfect and Joseph Stalin was imperfect, but they were not the same shade of imperfection. “Everyone is imperfect” is an excellent example of replacing a two-color view with a one-color view. If you say, “No one is perfect, but some people are less imperfect than others,” you may not gain applause; but for those who strive to do better, you have held out hope. No one is perfectly imperfect, after all.


(Whenever someone says to me, “Perfectionism is bad for you,” I reply: “I think it’s okay to be imperfect, but not so imperfect that other people notice.”)


Likewise the folly of those who say, “Every scientific paradigm imposes some of its assumptions on how it interprets experiments,” and then act like they’d proven science to occupy the same level with witchdoctoring. Every worldview imposes some of its structure on its observations, but the point is that there are worldviews which try to minimize that imposition, and worldviews which glory in it. There is no white, but there are shades of gray that are far lighter than others, and it is folly to treat them as if they were all on the same level.


If the Moon has orbited the Earth these past few billion years, if you have seen it in the sky these last years, and you expect to see it in its appointed place and phase tomorrow, then that is not a certainty. And if you expect an invisible dragon to heal your daughter of cancer, that too is not a certainty. But they are rather different degrees of uncertainty—this business of expecting things to happen yet again in the same way you have previously predicted to twelve decimal places, versus expecting something to happen that violates the order previously observed. Calling them both “faith” seems a little too un-narrow.


It’s a most peculiar psychology—this business of “Science is based on faith too, so there!” Typically this is said by people who claim that faith is a good thing. Then why do they say “Science is based on faith too!” in that angry-triumphal tone, rather than as a compliment? And a rather dangerous compliment to give, one would think, from their perspective. If science is based on “faith,” then science is of the same kind as religion—directly comparable. If science is a religion, it is the religion that heals the sick and reveals the secrets of the stars. It would make sense to say, “The priests of science can blatantly, publicly, verifiably walk on the Moon as a faith-based miracle, and your priests’ faith can’t do the same.” Are you sure you wish to go there, oh faithist? Perhaps, on further reflection, you would prefer to retract this whole business of “Science is a religion too!”


There’s a strange dynamic here: You try to purify your shade of gray, and you get it to a point where it’s pretty light-toned, and someone stands up and says in a deeply offended tone, “But it’s not white! It’s gray!” It’s one thing when someone says, “This isn’t as light as you think, because of specific problems X, Y, and Z.” It’s a different matter when someone says angrily “It’s not white! It’s gray!” without pointing out any specific dark spots.


    In this case, I begin to suspect psychology that is more imperfect than usual—that someone may have made a devil’s bargain with their own mistakes, and now refuses to hear of any possibility of improvement. When someone finds an excuse not to try to do better, they often refuse to concede that anyone else can try to do better, and every mode of improvement is thereafter their enemy, and every claim that it is possible to move forward is an offense against them. And so they say in one breath proudly, “I’m glad to be gray,” and in the next breath angrily, “And you’re gray too!”
    


If there is no black and white, there is yet lighter and darker, and not all grays are the same.


The commenter G2 points us to Asimov’s “The Relativity of Wrong”:


When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together.


1Hanson (2007), “Economist Judgment,” http://www.overcomingbias.com/2007/12/economist-judgm.html. Cowen (2007), “Can Theory Override Intuition?”, http://marginalrevolution.com/marginalrevolution/2007/12/how-my-views-di.html.

\n\n" } }, { "_id": "q7Me34xvSG3Wm97As", "title": "But There's Still A Chance, Right?", "pageUrl": "https://www.lesswrong.com/posts/q7Me34xvSG3Wm97As/but-there-s-still-a-chance-right", "postedAt": "2008-01-06T01:56:14.000Z", "baseScore": 131, "voteCount": 114, "commentCount": 61, "url": null, "contents": { "documentId": "q7Me34xvSG3Wm97As", "html": "\n\n\n\n \n\n \n\n

    Years ago, I was speaking to someone when he casually remarked that he didn’t believe in evolution. And I said, “This is not the nineteenth century. When Darwin first proposed evolution, it might have been reasonable to doubt it. But this is the twenty-first century. We can read the genes. Humans and chimpanzees have 98% shared DNA. We know humans and chimps are related. It’s over.”
    


He said, “Maybe the DNA is just similar by coincidence.”


I said, “The odds of that are something like two to the power of seven hundred and fifty million to one.”


He said, “But there’s still a chance, right?”


    Now, there’s a number of reasons my past self cannot claim a strict moral victory in this conversation. One reason is that I have no memory of whence I pulled that 2^750,000,000 figure, though it’s probably the right meta-order of magnitude. The other reason is that my past self didn’t apply the concept of a calibrated confidence. Of all the times over the history of humanity that a human being has calculated odds of 2^750,000,000:1 against something, they have undoubtedly been wrong more often than once in 2^750,000,000 times. E.g., the shared genes estimate was revised to 95%, not 98%—and that may even apply only to the 30,000 known genes and not the entire genome, in which case it’s the wrong meta-order of magnitude.
    
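
    For what it’s worth, here is one toy way a figure of that flavor could arise; the assumptions are mine (roughly 3 billion independent base pairs, each matching by pure chance with probability 1/4, and a 98% observed match), not a reconstruction of the original calculation. A large-deviation bound puts the exponent in the billions, the same species of absurdly lopsided odds:

    ```python
    import math

    # Toy back-of-envelope (assumptions mine): n independent 4-letter bases,
    # chance match probability p, observed match fraction q. Chernoff-style
    # bound: P(match >= q*n) is about 2^(-n * D(q || p)), with D in bits.
    n, p, q = 3e9, 0.25, 0.98
    D = q * math.log2(q / p) + (1 - q) * math.log2((1 - q) / (1 - p))
    print(f"roughly 2^{n * D:.3g} to 1 against a chance match")  # ~2^5.48e9 : 1
    ```

    The exponent differs from the quoted one, but as the text says, only the meta-order of magnitude was ever in play.
    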


But I think the other guy’s reply is still pretty funny.


I don’t recall what I said in further response—probably something like “No”—but I remember this occasion because it brought me several insights into the laws of thought as seen by the unenlightened ones.


It first occurred to me that human intuitions were making a qualitative distinction between “No chance” and “A very tiny chance, but worth keeping track of.” You can see this in the Overcoming Bias lottery debate.


The problem is that probability theory sometimes lets us calculate a chance which is, indeed, too tiny to be worth the mental space to keep track of it—but by that time, you’ve already calculated it. People mix up the map with the territory, so that on a gut level, tracking a symbolically described probability feels like “a chance worth keeping track of,” even if the referent of the symbolic description is a number so tiny that if it were a dust speck, you couldn’t see it. We can use words to describe numbers that small, but not feelings—a feeling that small doesn’t exist, doesn’t fire enough neurons or release enough neurotransmitters to be felt. This is why people buy lottery tickets—no one can feel the smallness of a probability that small.


But what I found even more fascinating was the qualitative distinction between “certain” and “uncertain” arguments, where if an argument is not certain, you’re allowed to ignore it. Like, if the likelihood is zero, then you have to give up the belief, but if the likelihood is one over googol, you’re allowed to keep it.


Now it’s a free country and no one should put you in jail for illegal reasoning, but if you’re going to ignore an argument that says the likelihood is one over googol, why not also ignore an argument that says the likelihood is zero? I mean, as long as you’re ignoring the evidence anyway, why is it so much worse to ignore certain evidence than uncertain evidence?
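
    In Bayesian terms the two cases differ only in degree: posterior odds are prior odds times the likelihood ratio, and a likelihood of one over googol crushes a belief nearly as thoroughly as a likelihood of zero. A toy update, with numbers of my own choosing:

    ```python
    # Posterior odds = prior odds * likelihood ratio (toy numbers mine).
    prior_odds = 1.0                   # start out 1:1 on the belief
    likelihood_ratio = 1e-100 / 1.0    # P(evidence | belief) / P(evidence | not)
    print(prior_odds * likelihood_ratio)  # 1e-100: "not zero" is no refuge
    ```
    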


I have often found, in life, that I have learned from other people’s nicely blatant bad examples, duly generalized to more subtle cases. In this case, the flip lesson is that, if you can’t ignore a likelihood of one over googol because you want to, you can’t ignore a likelihood of 0.9 because you want to. It’s all the same slippery cliff.


    Consider his example if you ever find yourself thinking, “But you can’t prove me wrong.” If you’re going to ignore a probabilistic counterargument, why not ignore a proof, too?
    

\n\n" } }, { "_id": "kX6C2qdngKp4AdEAk", "title": "A Failed Just-So Story", "pageUrl": "https://www.lesswrong.com/posts/kX6C2qdngKp4AdEAk/a-failed-just-so-story", "postedAt": "2008-01-05T06:35:50.000Z", "baseScore": 22, "voteCount": 21, "commentCount": 49, "url": null, "contents": { "documentId": "kX6C2qdngKp4AdEAk", "html": "

Followup toRational vs. Scientific Ev-Psych, The Tragedy of Group Selectionism, Evolving to Extinction


Perhaps the real reason that evolutionary \"just-so stories\" got a bad name is that so many attempted stories are prima facie absurdities to serious students of the field.


As an example, consider a hypothesis I've heard a few times (though I didn't manage to dig up an example).  The one says:  Where does religion come from?  It appears to be a human universal, and to have its own emotion backing it - the emotion of religious faith.  Religion often involves costly sacrifices, even in hunter-gatherer tribes - why does it persist?  What selection pressure could there possibly be for religion?


So, the one concludes, religion must have evolved because it bound tribes closer together, and enabled them to defeat other tribes that didn't have religion.


This, of course, is a group selection argument - an individual sacrifice for a group benefit - and see the referenced posts if you're not familiar with the math, simulations, and observations which show that group selection arguments are extremely difficult to make work.  For example, a 3% individual fitness sacrifice which doubles the fitness of the tribe will fail to rise to universality, even under unrealistically liberal assumptions, if the tribe size is as large as fifty.  Tribes would need to have no more than 5 members if the individual fitness cost were 10%.  You can see at a glance from the sex ratio in human births that, in humans, individual selection pressures overwhelmingly dominate group selection pressures.  This is an example of what I mean by prima facie absurdity.
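
    Here is a deliberately crude simulation in the spirit of those results. The model and every number in it are my own illustration, not the simulations the text refers to: altruists pay an individual fitness cost, a group's output scales with its fraction of altruists (doubling when all are altruists), and offspring are pooled and regrouped at random each generation.

    ```python
    import random

    def generation(pop, group_size, cost):
        random.shuffle(pop)
        pool = []
        for i in range(0, len(pop), group_size):
            group = pop[i:i + group_size]
            frac_altruist = sum(group) / len(group)
            group_mult = 1.0 + frac_altruist  # all-altruist tribes double their output
            for ind in group:
                w = group_mult * (1.0 - cost if ind else 1.0)  # individual fitness
                n_offspring = int(w) + (random.random() < w % 1.0)
                pool.extend([ind] * n_offspring)
        random.shuffle(pool)
        return pool[:len(pop)]  # hold population size roughly constant

    random.seed(0)
    pop = [1] * 500 + [0] * 500  # 1 = altruist, 0 = selfish
    for _ in range(300):
        pop = generation(pop, group_size=50, cost=0.03)
    print("altruist fraction:", sum(pop) / len(pop))
    # With groups of 50, the altruists get driven out; rerun with group_size=5
    # and the between-group variance can be large enough for altruism to persist,
    # which is the qualitative point about group selection needing tiny groups.
    ```
    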


So why religion, then?


Well, it might just be a side effect of our ability to do things like model other minds, which enables us to conceive of disembodied minds.  Faith, as an emotion, might just be co-opted hope.


But if faith is a true religious adaptation, I don't see why it's even puzzling what the selection pressure could have been.


Heretics were routinely burned alive just a few centuries ago.  Or stoned to death, or executed by whatever method local fashion demands.  Questioning the local gods is the notional crime for which Socrates was made to drink hemlock.


Conversely, Huckabee just won Iowa's nomination for tribal-chieftain.


Why would you need to go anywhere near the accursèd territory of group selectionism in order to provide an evolutionary explanation for religious faith?  Aren't the individual selection pressures obvious?


I don't know whether to suppose that (1) people are mapping the question onto the \"clash of civilizations\" issue in current affairs, (2) people want to make religion out to have some kind of nicey-nice group benefit (though exterminating other tribes isn't very nice), or (3) when people get evolutionary hypotheses wrong, they just naturally tend to get it wrong by postulating group selection.


But the problem with this hypothesis is not that it's \"unscientific\" because no replicable experiment was proposed to test it.  The problem is that the hypothesis is, as a matter of prior probability, almost certainly wrong.  If you did propose a valid experiment to test it, I would expect it to turn up negative.

" } }, { "_id": "pL3To6G42AeihNtaN", "title": "Rational vs. Scientific Ev-Psych", "pageUrl": "https://www.lesswrong.com/posts/pL3To6G42AeihNtaN/rational-vs-scientific-ev-psych", "postedAt": "2008-01-04T07:01:14.000Z", "baseScore": 36, "voteCount": 35, "commentCount": 50, "url": null, "contents": { "documentId": "pL3To6G42AeihNtaN", "html": "

PrerequisiteEvolutionary Psychology


Years ago, before I left my parents' nest, I was standing in front of a refrigerator, looking inside.  My mother approached and said, \"What are you doing?\"  I said, \"Looking for the ketchup.  I don't see it.\"


My mother reached behind a couple of bottles and took out the ketchup.


She said, \"If you don't see the ketchup, why don't you move things around and look behind them, instead of just standing and staring into the refrigerator?  Do you think the ketchup is magically going to appear if you stare into the refrigerator long enough?\"


And lo, the light went on over my head, and I said:  \"Men are hunters, so if we can't find our prey, we instinctively freeze motionless and wait for it to wander into our field of vision.  Women are gatherers, so they move things around and look behind them.\"


Now this sort of thing is not scientifically respectable; it is called a \"just-so story\", after Kipling's \"Just-So Stories\" like \"How the Camel Got His Hump\".  The implication being that you can make up anything you like for an evolutionary story, but the difficult thing is finding a way to prove it.


Well, fine, but I bet it's still true.


The sexual division of labor in hunter-gatherer societies - \"Men hunt, women gather\" - is a human universal among hunter-gatherers, like it or not; and it has experimentally tested cognitive consequences:  Men are better at throwing spears; women are better at remembering where things are.  So that part is pretty much nailed down - the controversial thing is saying, \"This is why men stand motionless and stare into the refrigerator.\"


But even the Refrigerator Hypothesis is not as untestable as one might think.  For a start, you have to verify that this is a cross-cultural universal - you have to check to see if men and women in China and South Africa exhibit the same stereotypical behavior as in the US.  And you also have to verify that hunters who don't see prey where they expect it, do indeed freeze motionless and wait for the prey to wander across their vision.


But that doesn't prove that the same psychology is at work in a man staring into a refrigerator, does it?


Well, you could install an eye-tracker on a hunter; look for a characteristic pattern of eye movements when they're frozen waiting for prey; and use simulations or game theory to show that the gaze pattern is efficient.  Then you could put an eye-tracker on a man looking into a refrigerator, and see if they show the same gaze pattern.  Then you would have demonstrated a detailed correspondence which enables you to say, \"He is frozen, waiting for the ketchup to come into sight.\"


Here is an odd thing:  There are some people who will, if you just tell them the Refrigerator Hypothesis, snort and say \"That's an untestable just-so story\" and dismiss it out of hand; but if you start by telling them about the idea of the gaze-tracking experiment and then explain the evolutionary motivation, they will say, \"Huh, that might be right.\"  Because then, you see, you are proposing a Scientific hypothesis; but in the earlier case, you are just making up a story without testing it, which is very Unscientific.  We all know that Scientific hypotheses are more likely to be true than Unscientific ones.


    This pattern of belief is very hard to justify from a Bayesian perspective.  It is just the same hypothesis in both cases.  Even if, in the second case, I announce an experimental method and my intent to actually test it, I have not yet experimented and I have not yet received any observational evidence in favor of the hypothesis.  So in either case, my current estimate should equal my prior probability, estimated by the degree to which the \"just-so story\" seems \"just\" versus \"so\".  You can't revise your beliefs based on an expected experimental success, by Conservation of Expected Evidence; if you expect the experiment to succeed, then it was a rationally convincing just-so story.
    
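
    Conservation of Expected Evidence is easy to check numerically: averaged over the possible experimental outcomes, the expected posterior just is the prior. A toy verification with numbers of my own choosing:

    ```python
    # Conservation of Expected Evidence, checked numerically (toy numbers mine).
    p_h = 0.3              # prior P(hypothesis)
    p_e_h = 0.9            # P(experiment succeeds | hypothesis true)
    p_e_not_h = 0.2        # P(experiment succeeds | hypothesis false)

    p_e = p_e_h * p_h + p_e_not_h * (1 - p_h)        # P(success)
    post_if_e = p_e_h * p_h / p_e                    # Bayes, on success
    post_if_not_e = (1 - p_e_h) * p_h / (1 - p_e)    # Bayes, on failure

    print(p_e * post_if_e + (1 - p_e) * post_if_not_e)  # 0.3: equals the prior
    ```
    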


    People confuse the distinction between rationality and science.  Science is a special kind of strong evidence, a subset of rational evidence; if I say that my socks are currently white, you have a rational reason to believe me, but it is not Science, because there is no experiment people can perform to independently verify the belief.
    


If you have a hypothesis you have not figured out how to test with an organized, rigorous experiment, then your hypothesis is not very scientific.  When you figure out how to do an experiment, and more importantly, set out to do the experiment, then your hypothesis becomes very scientific indeed.  If you are judging probabilities using the affect heuristic, and you know that science is a Good Thing, then making the jump from \"merely rational\" to \"scientific\" might seem to raise the probability.


But this itself is not normative.  Figuring out a way to test a belief with an organized, rigorous, repeatable experiment is certainly a Good Thing - but it should not raise the belief's rational probability in advance of the experiment succeeding!  A hypothesis may become \"more scientific\" because you are going to test it, but it doesn't get any of the power of scientific confirmation until the experiment succeeds.  To whatever degree you guess that the experiment might work - that it's likely enough to be worth performing - you must have arrived at a probabilistic belief in the hypothesis being true, in advance of any scientific confirmation of it.


I conclude that an evolutionary just-so story, whose predictions you cannot figure out how to test in any organized, rigorous, repeatable way, may nonetheless have a substantial rational credibility - equalling the degree to which you would expect a rigorous experiment to succeed, if you could only figure one out.

" } }, { "_id": "k5qPoHFgjyxtvYsm7", "title": "Stop Voting For Nincompoops", "pageUrl": "https://www.lesswrong.com/posts/k5qPoHFgjyxtvYsm7/stop-voting-for-nincompoops", "postedAt": "2008-01-02T18:00:00.000Z", "baseScore": 76, "voteCount": 63, "commentCount": 92, "url": null, "contents": { "documentId": "k5qPoHFgjyxtvYsm7", "html": "

Followup toThe Two-Party Swindle, The American System and Misleading Labels


    If evolutionary psychology could be simplified down to one sentence (which it can't), it would be:  \"Our instincts are adaptations that increased fitness in the ancestral environment, and we go on feeling that way regardless of whether it increases fitness today.\"  Sex with condoms, tastes for sugar and fat, etc.
    


In the ancestral environment, there was no such thing as voter confidentiality.  If you backed a power faction in your hunter-gatherer band, everyone knew which side you'd picked.  The penalty for choosing the losing side could easily be death.


Our emotions are shaped to steer us through hunter-gatherer life, not life in the modern world.  It should be no surprise, then, that when people choose political sides, they feel drawn to the faction that seems stronger, better positioned to win.  Even when voting is confidential.  Just as people enjoy sex, even when using contraception.


(George Orwell had a few words to say in "Raffles and Miss Blandish" about where the admiration of power can go.  The danger, not of lusting for power, but just of feeling drawn to it.)


In a recent special election for California governor, the usual lock of the party structure broke down - they neglected to block that special case, and so you could get in with 65 signatures and $3500.  As a result there were 135 candidates.

With 135 candidates, one might have thought there would be an opportunity for some genuine voter choice - a lower barrier to entry, which would create a chance to elect an exceptionally competent governor.  However, the media immediately swung into action and decided that only a tiny fraction of these candidates would be allowed to get any publicity.  Which ones?  Why, the ones who already had name recognition!  Those, after all, were the candidates who were likely to win, so those were the ones which the media reported on.


Amazingly, the media collectively exerted such tremendous power, in nearly perfect coordination, without deliberate intention (conspiracies are generally much less necessary than believed).  They genuinely thought, I think, that they were reporting the news rather than making it.  Did it even occur to them that the entire business was self-referential?  Did anyone write about that aspect?  With a coordinated action, the media could have chosen any not-completely-pathetic candidate to report as the "front-runner", and their reporting would thereby have been correct.


The technical term for this is Keynesian beauty contest, wherein everyone tries to vote for whoever they think most people will vote for.
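
    A toy version of that dynamic (every number here is mine): give voters mild private preferences, feed last round's poll numbers back into this round's choices, and let the media hand one candidate, here deliberately the weakest, all the coverage.

    ```python
    # Keynesian-beauty-contest herding, sketched (all numbers mine).
    quality = [0.9, 0.8, 0.7, 0.6, 0.5]    # private appeal of five candidates
    publicity = [0.0, 0.0, 0.0, 0.0, 1.0]  # media cover only the weakest one
    shares = [0.2] * 5                     # start from an even poll

    for _ in range(30):  # each round, voters lean toward last round's leader
        weights = [q + 2.0 * s + p for q, s, p in zip(quality, shares, publicity)]
        total = sum(weights)
        shares = [w / total for w in weights]

    print([round(s, 2) for s in shares])   # the covered candidate ends up on top
    ```
    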


If Arnold Schwarzenegger (4,206,284 votes) had been as unable to get publicity as Logan Clements (274 votes), perhaps because the media believed (in uncoordinated unison) that no action-movie hero could be taken seriously as a candidate, then Arnold Schwarzenegger would not have been a "serious candidate".


In effect, Arnold Schwarzenegger was appointed Governor of California by the media.  The case is notable because usually it's the party structure that excludes candidates, and the party structure's power has a formal basis that does not require voter complicity.  The power of the media to appoint Arnold Schwarzenegger governor derived strictly from voters following what someone told them was the trend.  If the voters had ignored the media telling them who the front-runner was, and decided their initial pick of "serious candidates" based on, say, the answers to a questionnaire, then the media would have had no power.


Yes, this is presently around as likely as the Sun rising in the west and illuminating a Moon of green cheese.  But there's this thing called the Internet now, which humanity is still figuring out how to use, and there may be another change or two on the way.  Twenty years ago, if the media had decided not to report on Ron Paul, that would have been that.


Someone is bound to say, at this point, "But if you vote for a candidate with no chance of winning, you're throwing your vote away!"

    "The leaders are lizards.  The people hate the lizards and the lizards rule the people."
    "Odd," said Arthur, "I thought you said it was a democracy."
    "I did," said Ford, "It is."
    "So," said Arthur, hoping he wasn't sounding ridiculously obtuse, "why don't the people get rid of the lizards?"
    "It honestly doesn't occur to them," said Ford. "They've all got the vote, so they all pretty much assume that the government they've voted in more or less approximates to the government they want."
    "You mean they actually vote for the lizards?"
    "Oh yes," said Ford with a shrug, "of course."
    "But," said Arthur, going for the big one again, "why?"
    "Because if they didn't vote for a lizard," said Ford, "the wrong lizard might get in. Got any gin?"

To which the economist replies, "But you can't always jump from a Nash equilibrium to a Pareto optimum," meaning roughly, "Unless everyone else has that same idea at the same time, you'll still be throwing your vote away," or in other words, "You can make fun all you like, but if you don't vote for a lizard, the wrong lizard really might get in."


In which case, the lizards know they can rely on your vote going to one of them, and they have no incentive to treat you kindly.  Most of the benefits of democracy may be from the lizards being scared enough of voters to not misbehave really really badly, rather than from the "right lizard" winning a voter-fight.


Besides, picking the better lizard is harder than it looks.  In 2000, the comic Melonpool showed a character pondering, "Bush or Gore... Bush or Gore... it's like flipping a two-headed coin."  Well, how were they supposed to know?  In 2000, based on history, it seemed to me that the Republicans were generally less interventionist and therefore less harmful than the Democrats, so I pondered whether to vote for Bush to prevent Gore from getting in.  Yet it seemed to me that the barriers to keep out third parties were a raw power grab, and that I was therefore obliged to vote for third parties wherever possible, to penalize the Republicrats for getting grabby.  And so I voted Libertarian, though I don't consider myself one (at least not with a big "L").  I'm glad I didn't do the "sensible" thing.  Less blood on my hands.


If we could go back in time and change our votes, and see alternate histories laid out side-by-side, it might make sense to vote for the less evil of two lizards.  But in a state of ignorance - voting for candidates that abandon their stated principles like they discard used toilet paper - then it is harder to compare lizards than those enthusiastically cheering for team colors might think.


    Are people who vote for Ron Paul in the Republican primary wasting their votes?  I'm not asking, mind you, whether you approve of Ron Paul as a candidate.  I'm asking you whether the Ron Paul voters are taking an effectless action if Ron Paul doesn't win.  Ron Paul is showing what a candidate can do with the Internet, despite the party structure and the media.  A competent outsider considering a presidential run in 2012 is much more likely to take a shot at it now.  What exactly does a vote for Hilliani accomplish, besides telling the lizards to keep doing whatever it is they're doing?
    


Make them work for your vote.  Vote for more of the same this year, for whatever clever-sounding reason, and next election, they'll give you more of the same.  Refuse to vote for nincompoops and maybe they'll try offering you a less nincompoopy candidate, or non-nincompoops will be more likely to run for office when they see they have a chance.


Besides, if you're going to apply game theory to the situation in a shortsighted local fashion - not taking into account others thinking similarly, and not taking into account the incentives you create for later elections based on what potential future candidates see you doing today - if, I say, you think in such a strictly local fashion and call it "rational", then why vote at all, when your single vote is exceedingly unlikely to determine the winner?


Consider these two clever-sounding game-theoretical arguments side by side:

      1. You should vote for the less evil of the top mainstream candidates, because your vote is unlikely to make a critical difference if you vote for a candidate that most people don't vote for.
      2. You should stay home, because your vote is unlikely to make a critical difference.
    

It's hard to see who should accept argument #1 but refuse to accept argument #2.


I'm not going to go into the notion of collective action, Prisoner's Dilemma, Newcomblike problems, etcetera, because the last time I tried to write about this, I accidentally started to write a book.  But whatever meaning you attach to voting - especially any notions of good citizenship - it's hard to see why you should vote for a lizard if you bother to vote at all.


There is an interaction here, a confluence of folly, between the evolutionary psychology of politics as a football game, and the evolutionary psychology of trying to side with the winner.  The media - I am not the first to observe this - report on politics as though it is a horse race.  Good feelings about a candidate are generated, not by looking over voting records, but by the media reporting excitedly:  "He's pulling ahead in the third stretch!"  What the media thinks we should know about candidates is that such-and-such candidate appeals to such-and-such voting faction.  Since this is practically all the media report on, it feeds nothing but the instinct to get yourself on the winning side.


And then there's the lovely concept of "electability":  Trying to vote for a candidate that you think other people will vote for, because you want your own color to win at any cost.  You have to admire the spectacle of the media breathlessly reporting on which voting factions think that candidate X is the most "electable".  Is anyone even counting the levels of recursion here?


Or consider it from yet another perspective:


There are roughly 300 million people in the United States, of whom only one can be President at any given time.


With 300 million available candidates, many of whom are not nincompoops, why does America keep electing nincompoops to political office?


Sending a message to select 1 out of 300 million possibilities requires 29 bits.  So if you vote in only the general election for the Presidency, then some mysterious force narrows the election down to 2 out of 300 million possibilities - exerting 28 bits of decision power - and then you, or rather the entire voting population, exert 1 more bit of decision power.  If you vote in a primary election, you may send another 2 or 3 bits worth of message.


Where do the other 25 bits of decision power come from?


You may object:  "Wait a minute, not everyone in the United States is 35 years old and a natural-born citizen, so it's not 300 million possibilities."


I reply, "How do you know that a 34-year-old cannot be President?"


And you cry, "What?  It's in the Constitution!"


Well, there you go:  Since around half the population is under the age of 35, at least one bit of the missing decision power is exerted by 55 delegates in Philadelphia in 1787.  Though the "natural-born citizen" clause comes from a letter sent by John Jay to George Washington, a suggestion that was adopted without debate by the Philadelphia Convention.


I am not necessarily advising that you go outside the box on this one.  Sometimes the box is there for a good reason.  But you should at least be conscious of the box's existence and origin.


Likewise, not everyone would want to be President.  (But see the hidden box:  In principle the option exists of enforcing Presidential service, like jury duty.)  How many people would run for President if they had a serious chance at winning?  Let's pretend the number is only 150,000.  That accounts for another 10 bits.


Then some combination of the party structure, and the media telling complicit voters who voters are likely to vote for, is exerting on the order of 14-15 bits of power over the Presidency; while the voters only exert 3-4 bits.  And actually the situation is worse than this, because the media and party structure get to move first.  They can eliminate nearly all the variance along any particular dimension.  So that by the time you get to choose one of four "serious" "front-running" candidates, that is, the ones approved by both the party structure and the media, you're choosing between 90.8% nincompoop and 90.6% nincompoop.
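
    The bit accounting can be checked with a few logarithms; the population counts are the text's, the arithmetic sketch mine:

    ```python
    import math

    total = math.log2(300e6)            # ~28.2 bits to pick 1 of 300 million
    age = 1.0                           # the age-35 floor excludes about half
    willing = math.log2(150e6 / 150e3)  # ~10 bits: 150M eligible -> 150k willing
    party_media = math.log2(150e3 / 4)  # ~15.2 bits: 150k -> 4 "serious" candidates
    voters = total - age - willing - party_media
    print(f"{total:.1f} = {age:.1f} + {willing:.1f} + {party_media:.1f} + {voters:.1f}")
    # leaves about 2 bits for the voters, the same ballpark as the text's 3-4
    ```
    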


I seriously think the best thing you can do about the situation, as a voter, is stop trying to be clever.  Don't try to vote for someone you don't really like, because you think your vote is more likely to make a difference that way.  Don't fret about "electability".  Don't try to predict and outwit other voters.  Don't treat it as a horse race.  Don't worry about "wasting your vote" - it always sends a message, you may as well make it a true message.


Remember that this is not the ancestral environment, and that you won't die if you aren't on the winning side.  Remember that the threat that voters as a class hold against politicians as a class is more important to democracy than your fights with other voters.  Forget all the "game theory" that doesn't take future incentives into account; real game theory is further-sighted, and besides, if you're going to look at it that way, you might as well stay home.  When you try to be clever, you usually end up playing the Politicians' game.


Clear your mind of distractions...


And stop voting for nincompoops.


If you vote for nincompoops, for whatever clever-sounding reason, don't be surprised that out of 300 million people you get nincompoops in office.


The arguments are long, but the voting strategy they imply is simple:  Stop trying to be clever, just don't vote for nincompoops.


Oh - and if you're going to vote at all, vote in the primary.  That's where most of your remaining bits and remaining variance have a chance to be exerted.  It's a pretty good bet that a Republicrat will be elected.  The primary is your only chance to choose between Hilliani and Opaula (or whatever your poison).


If anyone tells you that voting in a party's primary commits you to voting for that party in the general election, or that a political party owns the primary and you're stealing something from them, then laugh in their faces.  They've taken nearly all the decision bits, moved first in the game, and now they think they can convince you not to exercise the bits you have left?


To boil it all down to an emotional argument that isn't necessarily wrong:


Why drive out to your polling place and stand in line for half an hour or more - when your vote isn't very likely to singlehandedly determine the Presidency - and then vote for someone you don't even want?

" } }, { "_id": "ZXuqNhMDcs6mYtb6i", "title": "The American System and Misleading Labels", "pageUrl": "https://www.lesswrong.com/posts/ZXuqNhMDcs6mYtb6i/the-american-system-and-misleading-labels", "postedAt": "2008-01-02T07:38:09.000Z", "baseScore": 44, "voteCount": 41, "commentCount": 30, "url": null, "contents": { "documentId": "ZXuqNhMDcs6mYtb6i", "html": "

"How many legs does a dog have, if you call a tail a leg?  Four.  Calling a tail a leg doesn't make it a leg."
        -- Abraham Lincoln

So I was at this conference.  Where one of the topics was legal rights for human-level Artificial Intelligence.  And personally, I don't think this is going to be the major problem, but these are the kinds of conferences I go to.  Anyway.  Brad Templeton, chairman of the Electronic Frontier Foundation, was present; and he said:

"The legal status of AIs is ultimately a legislative question, and in the American system of democracy, legislative questions are decided by the Supreme Court."

Much laughter followed.  We all knew it was true.  (And Brad has taken a case or two to the Supreme Court, so he was speaking from experience.)


I'm not criticizing the Supreme Court.  They don't always agree with me on every issue - that is not a politician's job - but reasoned cooperative discourse, compact outputs, and sheer professionalism all make the Supreme Court a far more competent legislative body than Congress.

Try to say aloud the color - not the meaning, the color - of the following letters:

    RED   [rendered in a mismatched color in the original post]
    

Now say aloud the meaning of the letters, not their color:

PURPLE

Which task felt more difficult?  It's actually easier to pay attention to the labels of things than their substances.


But if you're going to be faced with several repetitions of the first task, there's a way to make it easier - just blur your eyes a little, so that you can see the color a moment before you're distracted by the meaning.  Try it - defocus your eyes slightly, and then say the following colors aloud:

    BLUE        ORANGE        YELLOW   [each rendered in a mismatched color in the original post]
    


If you want to know what the Supreme Court really does, you should blur your eyes so that you can't see the words "Supreme" or "Court", or all the giant signs reading "judge", "Honorable", or "judicial branch of government".  Then how can you tell what they do?  Well, you could follow these nine people around for a few days and observe them at work.  You'll see that they dress in funny outfits and go to a building where they hear some people arguing with each other.  Then they'll talk it over for a while, and issue one or two short written documents, some of which tell other people what they are or aren't allowed to do.  If you were a Martian anthropologist and you had absolutely no idea that these people were supposed to be doing this or that or something else, you would probably conclude they were (among other things) making laws.


Do Representatives and Senators make laws?  Well, I've met one or two Congresspeople, and I didn't see them writing any documents that tell people what to do.  Maybe they do it when no one is looking?  I've heard their days are pretty heavily scheduled, though.


Some laws are made by Congressional staff, but the vast majority of legislation is written by professional bureaucrats, who - if you refocus your eyes and read their labels - are part of the "executive" branch.


What if you've got a problem with a bureaucrat getting in your hair?  You won't have much luck talking to the White House.  But if you contact your Representative or Senator's constituent service office, they'll be happy to help you if they can.  If you didn't know how the system works apart from a high school civics class, you might end up very frustrated - the people who help you deal with the executive branch of government have signs reading "legislative branch".


    Your Congressperson would much rather help a little old lady deal with a lost Social Security check, than make potentially controversial laws about immigration or intellectual property.  That sort of thing may please some of your constituents, but it gets others very angry at you, and voters are faster to forget a favor than a disservice.  Keep making laws and you might just get yourself unelected.  If you know everyone is going to cheer you for a law, go ahead and pass it; but otherwise it's safer to leave legislation like school desegregation to the Supreme Court.
    


What Congresspeople prefer to do is write elaborate budgets that exert detailed control over how the government spends money, so they can send more of it to their districts.  That, and help their constituents with the bureaucracy, which makes friends without getting anyone mad at them.  The executive branch has no time for such matters, it's busy declaring wars.


So, bearing in mind that nothing works the way it's written down on paper, let's defocus our eyes and ask about the role of the voters.


If we blur out the label over your head and look at what you do, then you go to a certain building and touch one of several names written on a computer screen.  You don't choose which names will be on the screen - no, you don't.  Forget the labels, remember your actual experiences.  You walked into the building, you did choose which rectangle to touch, and you did not choose which little names you would see on the computer screen.

When Stephen Colbert wanted to register for a presidential run in South Carolina, the executive council of the South Carolina Democratic Party voted 13-3 to keep him off the ballot:  "He clearly doesn't meet the requirements.  It's a distraction and takes away from the seriousness of our primary here and takes attention from the serious candidates:  Clinton, Edwards, Barack Obama and the rest."


Hey, South Carolina Democratic Party executive council, you know who ELSE might be interested in determining whether someone is, or isn't, a "serious candidate"?  HOW ABOUT THE %#<!ing VOTERS?


Ahem.  But the psychology of this response is quite revealing.  "They want to prevent wasted votes" would be a polite way of putting it.  It doesn't even seem to have occurred to them that a voter might attach their own meaning to a vote for Stephen Colbert - that a voter might have their own thoughts about whether a vote for Stephen Colbert was "wasted" or not.  Nor that the voters might have a sense of ownership of their own votes, a wish to determine their use.  In the psychology of politicians, politicians own voters, voters do not own politicians.  South Carolina Democratic voters are a resource of the South Carolina Democratic Party, not the other way around.  They don't want you to waste their votes.


(Am I the only one on the planet to whom it occurred that the South Carolina voters could decide the meaning of a vote for Stephen Colbert?  Because I seriously don't remember anyone else pointing that out, at the time.)


How much power does touching a name in a rectangle give you?  Well, let me put it this way:


When I blur my eyes and look at the American system of democracy, I see that the three branches of government are the executive, the legislative, the judicial, the bureaucracy, the party structure, and the media.  In the next tier down are second-ranked powers, such as "the rich" so often demonized by the foolish - the upper-upper class can exert influence, but they have little in the way of direct political control.  Similarly with NGOs (non-governmental organizations) such as the Electronic Frontier Foundation, think tanks, traditional special interest groups, "big corporations", lobbyists, the voters, foreign powers with a carrot or stick to offer the US, and so on.


Which is to say that political powers do make an attempt to court the voters, but not to a noticeably greater degree than they court, say, the agricultural industry.  The voters' position on the issues is not without influence, but it is a degree of influence readily comparable to the collective influence of think tanks, and probably less than the collective influence of K Street lobbyists.  In practice, that's how it seems to work out.


The voters do have two special powers, but both of them only manifest in cases of great emotional arousal, like a comic-book superhero who can't power up until angry.  The first power is the ability to swap control of Congress (and in years divisible by 4, the Presidency) between political parties.  To the extent that the two allowed political parties are similar, this will not accomplish anything.  Also it's a rather crude threat, not like the fine-grained advice offered by think tanks or lobbyists.  There's a difference between the power to write detailed instructions on a sheet of paper, and the power to grunt and throw a big rock.


Possibly due to a coordination problem among individual politicians, the party currently in power rarely acts scared of the voters' threat.  Maybe individual politicians have an incentive to pursue their goals while the pursuing is good, since a shift in parties will not necessarily deprive them of their own seats in secure districts?  Thus, we regularly see the party in power acting arrogantly until the voters get angry and the vote swings back.  Then the minority-turned-majority party stops trying so hard to please the voters, and begins drinking the wine of power.  That's my best guess for why the balance of votes between the two parties tends to be actively restored to around a 50/50 ratio.


The voters' second hidden superpower can only be used if the voters get really, really, really angry, like a comic-book hero who's just seen one of their friends killed.  This is the power to overthrow the existing political structure entirely by placing a new party in charge.  There are barriers to entry that keep out third parties, preventing the ordinary turnover visible in the history of US politics before the 1850s.  But these barriers wouldn't stop the voters if they got really, really mad.  Even tampering with electronic voting machines won't work if the vote is 90% lopsided.


And this never-used power of the voters, strangely enough, may be the most beneficial factor in democracy - because it means that although the voters are ordinarily small potatoes in the power structure, no one dares make the voters really, really, really angry.


How much of the benefit of living in a democracy is in the small influences that voters occasionally manage to exert on the political process?  And how much of that benefit is from power-wielders being too scared to act like historical kings and slaughter you on a whim?


Arguably, the chief historical improvements in living conditions have not been from voters having the influence to pass legislation which (they think) will benefit them, but, rather, from power-wielders becoming scared of doing anything too horrible to voters.  Maybe one retrodiction (I haven't checked) would be that if you looked at the history of England, you would find a smooth improvement in living conditions corresponding to a gradually more plausible threat of revolution, rather than a sharp jump following the introduction of an elected legislature.


You'll notice that my first post, on the Two-Party Swindle, worried about the tendency of voters to lose themselves in emotion and identify with "their" professional politicians.  If you think the chief benefit of living in a democracy is the hope of getting Your-Favorite-Legislation passed, then you might abandon yourself in adulation of a professional politician who wears the colors of Your-Favorite-Legislation.  Isn't that the only way you can get Your-Favorite-Legislation, the most important thing in the world, passed?


But what if the real benefit of living in a democracy, for voters, comes from their first and second superpowers?  Then by identifying with politicians, the voters become less likely to remove the politician from office.  By identifying with parties, voters become less likely to swap the party out of power, or cast it from government entirely.  Both identifications interfere with the plausible threat.  The power-wielders can get away with doing more and worse things before you turn on them.


The feature of democracies that allows heated color wars between voters on particular policy issues is not likely to account, on average, for the benefits of living in a democracy.  Even if one option is genuinely better than the other, the existence of a color war implies that large efforts are being spent on tugging in either direction.


So if voters get wrapped up in color wars, identifying emotionally with "their" professional politicians and issues that put them at odds with other voters - at the expense of being less likely to get upset with "their" politician and "their" party - then shouldn't we expect the end result, on average, to be harmful to voters in general?


Coming tomorrow:  My shocking, counterintuitive suggestion for a pragmatic voting policy!  (Well... maybe not so counterintuitive as all that...)

" } }, { "_id": "X3HpE8tMXz4m4w6Rz", "title": "The Simple Truth", "pageUrl": "https://www.lesswrong.com/posts/X3HpE8tMXz4m4w6Rz/the-simple-truth", "postedAt": "2008-01-01T20:00:44.147Z", "baseScore": 179, "voteCount": 143, "commentCount": 16, "url": null, "contents": { "documentId": "X3HpE8tMXz4m4w6Rz", "html": "

I remember this paper I wrote on existentialism. My teacher gave it back with an F. She’d underlined true and truth wherever it appeared in the essay, probably about twenty times, with a question mark beside each. She wanted to know what I meant by truth.

—Danielle Egan, journalist

This essay is meant to restore a naive view of truth.

Someone says to you: “My miracle snake oil can rid you of lung cancer in just three weeks.” You reply: “Didn’t a clinical study show this claim to be untrue?” The one returns: “This notion of ‘truth’ is quite naive; what do you mean by ‘true’?”

Many people, so questioned, don’t know how to answer in exquisitely rigorous detail. Nonetheless they would not be wise to abandon the concept of “truth.” There was a time when no one knew the equations of gravity in exquisitely rigorous detail, yet if you walked off a cliff, you would fall.

Often I have seen—especially on Internet mailing lists—that amidst other conversation, someone says “X is true,” and then an argument breaks out over the use of the word “true.” This essay is not meant as an encyclopedic reference for that argument. Rather, I hope the arguers will read this essay, and then go back to whatever they were discussing before someone questioned the nature of truth.

In this essay I pose questions. If you see what seems like a really obvious answer, it’s probably the answer I intend. The obvious choice isn’t always the best choice, but sometimes, by golly, it is. I don’t stop looking as soon as I find an obvious answer, but if I go on looking, and the obvious-seeming answer still seems obvious, I don’t feel guilty about keeping it. Oh, sure, everyone thinks two plus two is four, everyone says two plus two is four, and in the mere mundane drudgery of everyday life everyone behaves as if two plus two is four, but what does two plus two really, ultimately equal? As near as I can figure, four. It’s still four even if I intone the question in a solemn, portentous tone of voice.

Does that seem like an unduly simple answer? Maybe, on this occasion, life doesn’t need to be complicated. Wouldn’t that be refreshing?

If you are one of those fortunate folk to whom the question seems trivial at the outset, I hope it still seems trivial at the finish. If you find yourself stumped by deep and meaningful questions, remember that if you know exactly how a system works, and could build one yourself out of buckets and pebbles, it should not be a mystery to you.

If confusion threatens when you interpret a metaphor as a metaphor, try taking everything completely literally.

Imagine that in an era before recorded history or formal mathematics, I am a shepherd and I have trouble tracking my sheep. My sheep sleep in an enclosure, a fold; and the enclosure is high enough to guard my sheep from wolves that roam by night. Each day I must release my sheep from the fold to pasture and graze; each night I must find my sheep and return them to the fold. If a sheep is left outside, I will find its body the next morning, killed and half-eaten by wolves. But it is so discouraging, to scour the fields for hours, looking for one last sheep, when I know that probably all the sheep are in the fold. Sometimes I give up early, and usually I get away with it; but around a tenth of the time there is a dead sheep the next morning.

If only there were some way to divine whether sheep are still grazing, without the inconvenience of looking! I try several methods: I toss the divination sticks of my tribe; I train my psychic powers to locate sheep through clairvoyance; I search carefully for reasons to believe all the sheep are in the fold. It makes no difference. Around a tenth of the times I turn in early, I find a dead sheep the next morning. Perhaps I realize that my methods aren’t working, and perhaps I carefully excuse each failure; but my dilemma is still the same. I can spend an hour searching every possible nook and cranny, when most of the time there are no remaining sheep; or I can go to sleep early and lose, on the average, one-tenth of a sheep.

Late one afternoon I feel especially tired. I toss the divination sticks and the divination sticks say that all the sheep have returned. I visualize each nook and cranny, and I don’t imagine scrying any sheep. I’m still not confident enough, so I look inside the fold and it seems like there are a lot of sheep, and I review my earlier efforts and decide that I was especially diligent. This dissipates my anxiety, and I go to sleep. The next morning I discover two dead sheep. Something inside me snaps, and I begin thinking creatively.

That day, loud hammering noises come from the gate of the sheepfold’s enclosure.

The next morning, I open the gate of the enclosure only a little way, and as each sheep passes out of the enclosure, I drop a pebble into a bucket nailed up next to the door. In the afternoon, as each returning sheep passes by, I take one pebble out of the bucket. When there are no pebbles left in the bucket, I can stop searching and turn in for the night. It is a brilliant notion. It will revolutionize shepherding.

That was the theory. In practice, it took considerable refinement before the method worked reliably. Several times I searched for hours and didn’t find any sheep, and the next morning there were no stragglers. On each of these occasions it required deep thought to figure out where my bucket system had failed. On returning from one fruitless search, I thought back and realized that the bucket already contained pebbles when I started; this, it turned out, was a bad idea. Another time I randomly tossed pebbles into the bucket, to amuse myself, between the morning and the afternoon; this too was a bad idea, as I realized after searching for a few hours. But I practiced my pebblecraft, and became a reasonably proficient pebblecrafter.

One afternoon, a man richly attired in white robes, leafy laurels, sandals, and business suit trudges in along the sandy trail that leads to my pastures.

“Can I help you?” I inquire.

The man takes a badge from his coat and flips it open, proving beyond the shadow of a doubt that he is Markos Sophisticus Maximus, a delegate from the Senate of Rum. (One might wonder whether another could steal the badge; but so great is the power of these badges that if any other were to use them, they would in that instant be transformed into Markos.)

“Call me Mark,” he says. “I’m here to confiscate the magic pebbles, in the name of the Senate; artifacts of such great power must not fall into ignorant hands.”

“That bleedin’ apprentice,” I grouse under my breath, “he’s been yakkin’ to the villagers again.” Then I look at Mark’s stern face, and sigh. “They aren’t magic pebbles,” I say aloud. “Just ordinary stones I picked up from the ground.”

A flicker of confusion crosses Mark’s face, then he brightens again. “I’m here for the magic bucket!” he declares.

“It’s not a magic bucket,” I say wearily. “I used to keep dirty socks in it.”

Mark’s face is puzzled. “Then where is the magic?” he demands.

An interesting question. “It’s hard to explain,” I say.

My current apprentice, Autrey, attracted by the commotion, wanders over and volunteers his explanation: “It’s the level of pebbles in the bucket,” Autrey says. “There’s a magic level of pebbles, and you have to get the level just right, or it doesn’t work. If you throw in more pebbles, or take some out, the bucket won’t be at the magic level anymore. Right now, the magic level is,” Autrey peers into the bucket, “about one-third full.”

“I see!” Mark says excitedly. From his back pocket Mark takes out his own bucket, and a heap of pebbles. Then he grabs a few handfuls of pebbles, and stuffs them into the bucket. Then Mark looks into the bucket, noting how many pebbles are there. “There we go,” Mark says, “the magic level of this bucket is half full. Like that?”

“No!” Autrey says sharply. “Half full is not the magic level. The magic level is about one-third. Half full is definitely unmagic. Furthermore, you’re using the wrong bucket.”

Mark turns to me, puzzled. “I thought you said the bucket wasn’t magic?”

“It’s not,” I say. A sheep passes out through the gate, and I toss another pebble into the bucket. “Besides, I’m watching the sheep. Talk to Autrey.”

Mark dubiously eyes the pebble I tossed in, but decides to temporarily shelve the question. Mark turns to Autrey and draws himself up haughtily. “It’s a free country,” Mark says, “under the benevolent dictatorship of the Senate, of course. I can drop whichever pebbles I like into whatever bucket I like.”

Autrey considers this. “No you can’t,” he says finally, “there won’t be any magic.”

“Look,” says Mark patiently, “I watched you carefully. You looked in your bucket, checked the level of pebbles, and called that the magic level. I did exactly the same thing.”

“That’s not how it works,” says Autrey.

“Oh, I see,” says Mark, “It’s not the level of pebbles in my bucket that’s magic, it’s the level of pebbles in your bucket. Is that what you claim? What makes your bucket so much better than mine, huh?”

“Well,” says Autrey, “if we were to empty your bucket, and then pour all the pebbles from my bucket into your bucket, then your bucket would have the magic level. There’s also a procedure we can use to check if your bucket has the magic level, if we know that my bucket has the magic level; we call that a bucket compare operation.”

Another sheep passes, and I toss in another pebble.

“He just tossed in another pebble!” Mark says. “And I suppose you claim the new level is also magic? I could toss pebbles into your bucket until the level was the same as mine, and then our buckets would agree. You’re just comparing my bucket to your bucket to determine whether you think the level is ‘magic’ or not. Well, I think your bucket isn’t magic, because it doesn’t have the same level of pebbles as mine. So there!”

“Wait,” says Autrey, “you don’t understand—”

“By ‘magic level,’ you mean simply the level of pebbles in your own bucket. And when I say ‘magic level,’ I mean the level of pebbles in my bucket. Thus you look at my bucket and say it ‘isn’t magic,’ but the word ‘magic’ means different things to different people. You need to specify whose magic it is. You should say that my bucket doesn’t have ‘Autrey’s magic level,’ and I say that your bucket doesn’t have ‘Mark’s magic level.’ That way, the apparent contradiction goes away.”

“But—” says Autrey helplessly.

“Different people can have different buckets with different levels of pebbles, which proves this business about ‘magic’ is completely arbitrary and subjective.”

“Mark,” I say, “did anyone tell you what these pebbles do?”

“Do?” says Mark. “I thought they were just magic.”

“If the pebbles didn’t do anything,” says Autrey, “our ISO 9000 process efficiency auditor would eliminate the procedure from our daily work.”

“What’s your auditor’s name?”

“Darwin,” says Autrey.

“Hm,” says Mark. “Charles does have a reputation as a strict auditor. So do the pebbles bless the flocks, and cause the increase of sheep?”

“No,” I say. “The virtue of the pebbles is this; if we look into the bucket and see the bucket is empty of pebbles, we know the pastures are likewise empty of sheep. If we do not use the bucket, we must search and search until dark, lest one last sheep remain. Or if we stop our work early, then sometimes the next morning we find a dead sheep, for the wolves savage any sheep left outside. If we look in the bucket, we know when all the sheep are home, and we can retire without fear.”

Mark considers this. “That sounds rather implausible,” he says eventually. “Did you consider using divination sticks? Divination sticks are infallible, or at least, anyone who says they are fallible is burned at the stake. This is an extremely painful way to die, so the divination sticks must be extremely infallible.”

“You’re welcome to use divination sticks if you like,” I say.

“Oh, good heavens, of course not,” says Mark. “They work infallibly, with absolute perfection on every occasion, as befits such blessed instruments. But what if there were a dead sheep the next morning? I only use the divination sticks when there is no possibility of their being proven wrong. Otherwise I might be burned alive. So how does your magic bucket work?”

How does the bucket work . . . ? I’d better start with the simplest possible case. “Well,” I say, “suppose the pastures are empty, and the bucket isn’t empty. Then we’ll waste hours looking for a sheep that isn’t there. And if there are sheep in the pastures, but the bucket is empty, then Autrey and I will turn in too early, and we’ll find dead sheep the next morning. So an empty bucket is magical if and only if the pastures are empty—”

“Hold on,” says Autrey. “That sounds like a vacuous tautology to me. Aren’t an empty bucket and empty pastures obviously the same thing?”

“It’s not vacuous,” I say. “Here’s an analogy: The logician Alfred Tarski once said that the assertion ‘Snow is white’ is true if and only if snow is white. If you can understand that, you should be able to see why an empty bucket is magical if and only if the pastures are empty of sheep.”

“Hold on,” says Mark. “These are buckets. They don’t have anything to do with sheep. Buckets and sheep are obviously completely different. There’s no way the sheep can ever interact with the bucket.”

“Then where do you think the magic comes from?” inquires Autrey.

Mark considers. “You said you could compare two buckets to check if they had the same level . . . I can see how buckets can interact with buckets. Maybe when you get a large collection of buckets, and they all have the same level, that’s what generates the magic. I’ll call that the coherentist theory of magic buckets.”

“Interesting,” says Autrey. “I know that my master is working on a system with multiple buckets—he says it might work better because of ‘redundancy’ and ‘error correction.’ That sounds like coherentism to me.”

“They’re not quite the same—” I start to say.

“Let’s test the coherentism theory of magic,” says Autrey. “I can see you’ve got five more buckets in your back pocket. I’ll hand you the bucket we’re using, and then you can fill up your other buckets to the same level—”

Mark recoils in horror. “Stop! These buckets have been passed down in my family for generations, and they’ve always had the same level! If I accept your bucket, my bucket collection will become less coherent, and the magic will go away!”

“But your current buckets don’t have anything to do with the sheep!” protests Autrey.

Mark looks exasperated. “Look, I’ve explained before, there’s obviously no way that sheep can interact with buckets. Buckets can only interact with other buckets.”

“I toss in a pebble whenever a sheep passes,” I point out.

“When a sheep passes, you toss in a pebble?” Mark says. “What does that have to do with anything?”

“It’s an interaction between the sheep and the pebbles,” I reply.

“No, it’s an interaction between the pebbles and you,” Mark says. “The magic doesn’t come from the sheep, it comes from you. Mere sheep are obviously nonmagical. The magic has to come from somewhere, on the way to the bucket.”

I point at a wooden mechanism perched on the gate. “Do you see that flap of cloth hanging down from that wooden contraption? We’re still fiddling with that—it doesn’t work reliably—but when sheep pass through, they disturb the cloth. When the cloth moves aside, a pebble drops out of a reservoir and falls into the bucket. That way, Autrey and I won’t have to toss in the pebbles ourselves.”

Mark furrows his brow. “I don’t quite follow you . . . is the cloth magical?”

I shrug. “I ordered it online from a company called Natural Selections. The fabric is called Sensory Modality.” I pause, seeing the incredulous expressions of Mark and Autrey. “I admit the names are a bit New Agey. The point is that a passing sheep triggers a chain of cause and effect that ends with a pebble in the bucket. Afterward you can compare the bucket to other buckets, and so on.”

“I still don’t get it,” Mark says. “You can’t fit a sheep into a bucket. Only pebbles go in buckets, and it’s obvious that pebbles only interact with other pebbles.”

“The sheep interact with things that interact with pebbles . . .” I search for an analogy. “Suppose you look down at your shoelaces. A photon leaves the Sun; then travels down through Earth’s atmosphere; then bounces off your shoelaces; then passes through the pupil of your eye; then strikes the retina; then is absorbed by a rod or a cone. The photon’s energy makes the attached neuron fire, which causes other neurons to fire. A neural activation pattern in your visual cortex can interact with your beliefs about your shoelaces, since beliefs about shoelaces also exist in neural substrate. If you can understand that, you should be able to see how a passing sheep causes a pebble to enter the bucket.”

“At exactly which point in the process does the pebble become magic?” says Mark.

“It . . . um . . .” Now I’m starting to get confused. I shake my head to clear away cobwebs. This all seemed simple enough when I woke up this morning, and the pebble-and-bucket system hasn’t gotten any more complicated since then. “This is a lot easier to understand if you remember that the point of the system is to keep track of sheep.”

Mark sighs sadly. “Never mind . . . it’s obvious you don’t know. Maybe all pebbles are magical to start with, even before they enter the bucket. We could call that position panpebblism.”

“Ha!” Autrey says, scorn rich in his voice. “Mere wishful thinking! Not all pebbles are created equal. The pebbles in your bucket are not magical. They’re only lumps of stone!”

Mark’s face turns stern. “Now,” he cries, “now you see the danger of the road you walk! Once you say that some people’s pebbles are magical and some are not, your pride will consume you! You will think yourself superior to all others, and so fall! Many throughout history have tortured and murdered because they thought their own pebbles supreme!” A tinge of condescension enters Mark’s voice. “Worshipping a level of pebbles as ‘magical’ implies that there’s an absolute pebble level in a Supreme Bucket. Nobody believes in a Supreme Bucket these days.”

“One,” I say. “Sheep are not absolute pebbles. Two, I don’t think my bucket actually contains the sheep. Three, I don’t worship my bucket level as perfect—I adjust it sometimes—and I do that because I care about the sheep.”

“Besides,” says Autrey, “if someone believes that possessing absolute pebbles would license torture and murder, they’re making a mistake that has nothing to do with buckets. You’re solving the wrong problem.”

Mark calms himself down. “I suppose I can’t expect any better from mere shepherds. You probably believe that snow is white, don’t you.”

“Um . . . yes?” says Autrey.

“It doesn’t bother you that Joseph Stalin believed that snow is white?”

“Um . . . no?” says Autrey.

Mark gazes incredulously at Autrey, and finally shrugs. “Let’s suppose, purely for the sake of argument, that your pebbles are magical and mine aren’t. Can you tell me what the difference is?”

“My pebbles represent the sheep!” Autrey says triumphantly. “Your pebbles don’t have the representativeness property, so they won’t work. They are empty of meaning. Just look at them. There’s no aura of semantic content; they are merely pebbles. You need a bucket with special causal powers.”

“Ah!” Mark says. “Special causal powers, instead of magic.”

“Exactly,” says Autrey. “I’m not superstitious. Postulating magic, in this day and age, would be unacceptable to the international shepherding community. We have found that postulating magic simply doesn’t work as an explanation for shepherding phenomena. So when I see something I don’t understand, and I want to explain it using a model with no internal detail that makes no predictions even in retrospect, I postulate special causal powers. Or I’ll call it an emergent phenomenon, or something.”

“What kind of special powers does the bucket have?” asks Mark.

“Hm,” says Autrey. “Maybe this bucket is imbued with an about-ness relation to the pastures. That would explain why it worked—when the bucket is empty, it means the pastures are empty.”

“Where did you find this bucket?” says Mark. “And how did you realize it had an about-ness relation to the pastures?”

“It’s an ordinary bucket,” I say. “I used to climb trees with it . . . I don’t think this question needs to be difficult.”

“I’m talking to Autrey,” says Mark.

“You have to bind the bucket to the pastures, and the pebbles to the sheep, using a magical ritual—pardon me, an emergent process with special causal powers—that my master discovered,” Autrey explains.

Autrey then attempts to describe the ritual, with Mark nodding along in sage comprehension.

“You have to throw in a pebble every time a sheep leaves through the gate?” says Mark. “Take out a pebble every time a sheep returns?”

Autrey nods. “Yeah.”

“That must be really hard,” Mark says sympathetically.

Autrey brightens, soaking up Mark’s sympathy like rain. “Exactly!” says Autrey. “It’s extremely hard on your emotions. When the bucket has held its level for a while, you . . . tend to get attached to that level.”

A sheep passes then, leaving through the gate. Autrey sees; he stoops, picks up a pebble, holds it aloft in the air. “Behold!” Autrey proclaims. “A sheep has passed! I must now toss a pebble into this bucket, my dear bucket, and destroy that fond level which has held for so long—” Another sheep passes. Autrey, caught up in his drama, misses it, so I plunk a pebble into the bucket. Autrey is still speaking: “—for that is the supreme test of the shepherd, to throw in the pebble, be it ever so agonizing, be the old level ever so precious. Indeed, only the best of shepherds can meet a requirement so stern—”

“Autrey,” I say, “if you want to be a great shepherd someday, learn to shut up and throw in the pebble. No fuss. No drama. Just do it.”

“And this ritual,” says Mark, “it binds the pebbles to the sheep by the magical laws of Sympathy and Contagion, like a voodoo doll.”

Autrey winces and looks around. “Please! Don’t call it Sympathy and Contagion. We shepherds are an anti-superstitious folk. Use the word ‘intentionality,’ or something like that.”

“Can I look at a pebble?” says Mark.

“Sure,” I say. I take one of the pebbles out of the bucket, and toss it to Mark. Then I reach to the ground, pick up another pebble, and drop it into the bucket.

Autrey looks at me, puzzled. “Didn’t you just mess it up?”

I shrug. “I don’t think so. We’ll know I messed it up if there’s a dead sheep next morning, or if we search for a few hours and don’t find any sheep.”

“But—” Autrey says.

“I taught you everything you know, but I haven’t taught you everything I know,” I say.

Mark is examining the pebble, staring at it intently. He holds his hand over the pebble and mutters a few words, then shakes his head. “I don’t sense any magical power,” he says. “Pardon me. I don’t sense any intentionality.”

“A pebble only has intentionality if it’s inside a ma—an emergent bucket,” says Autrey. “Otherwise it’s just a mere pebble.”

“Not a problem,” I say. I take a pebble out of the bucket, and toss it away. Then I walk over to where Mark stands, tap his hand holding a pebble, and say: “I declare this hand to be part of the magic bucket!” Then I resume my post at the gates.

Autrey laughs. “Now you’re just being gratuitously evil.”

I nod, for this is indeed the case.

“Is that really going to work, though?” says Autrey.

I nod again, hoping that I’m right. I’ve done this before with two buckets, and in principle, there should be no difference between Mark’s hand and a bucket. Even if Mark’s hand is imbued with the élan vital that distinguishes live matter from dead matter, the trick should work as well as if Mark were a marble statue.

Mark is looking at his hand, a bit unnerved. “So . . . the pebble has intentionality again, now?”

“Yep,” I say. “Don’t add any more pebbles to your hand, or throw away the one you have, or you’ll break the ritual.”

Mark nods solemnly. Then he resumes inspecting the pebble. “I understand now how your flocks grew so great,” Mark says. “With the power of this bucket, you could keep on tossing pebbles, and the sheep would keep returning from the fields. You could start with just a few sheep, let them leave, then fill the bucket to the brim before they returned. And if tending so many sheep grew tedious, you could let them all leave, then empty almost all the pebbles from the bucket, so that only a few returned . . . increasing the flocks again when it came time for shearing . . . dear heavens, man! Do you realize the sheer power of this ritual you’ve discovered? I can only imagine the implications; humankind might leap ahead a decade—no, a century!”

“It doesn’t work that way,” I say. “If you add a pebble when a sheep hasn’t left, or remove a pebble when a sheep hasn’t come in, that breaks the ritual. The power does not linger in the pebbles, but vanishes all at once, like a soap bubble popping.”

Mark’s face is terribly disappointed. “Are you sure?”

I nod. “I tried that and it didn’t work.”

Mark sighs heavily. “And this . . . math . . . seemed so powerful and useful until then . . . Oh, well. So much for human progress.”

“Mark, it was a brilliant idea,” Autrey says encouragingly. “The notion didn’t occur to me, and yet it’s so obvious . . . it would save an enormous amount of effort . . . there must be a way to salvage your plan! We could try different buckets, looking for one that would keep the magical pow—the intentionality in the pebbles, even without the ritual. Or try other pebbles. Maybe our pebbles just have the wrong properties to have inherent intentionality. What if we tried it using stones carved to resemble tiny sheep? Or just write ‘sheep’ on the pebbles; that might be enough.”

“Not going to work,” I predict dryly.

Autrey continues. “Maybe we need organic pebbles, instead of silicon pebbles . . . or maybe we need to use expensive gemstones. The price of gemstones doubles every eighteen months, so you could buy a handful of cheap gemstones now, and wait, and in twenty years they’d be really expensive.”

“You tried adding pebbles to create more sheep, and it didn’t work?” Mark asks me. “What exactly did you do?”

“I took a handful of dollar bills. Then I hid the dollar bills under a fold of my blanket, one by one; each time I hid another bill, I took another paperclip from a box, making a small heap. I was careful not to keep track in my head, so that all I knew was that there were ‘many’ dollar bills, and ‘many’ paperclips. Then when all the bills were hidden under my blanket, I added a single additional paperclip to the heap, the equivalent of tossing an extra pebble into the bucket. Then I started taking dollar bills from under the fold, and putting the paperclips back into the box. When I finished, a single paperclip was left over.”

“What does that result mean?” asks Autrey.

“It means the trick didn’t work. Once I broke ritual by that single misstep, the power did not linger, but vanished instantly; the heap of paperclips and the pile of dollar bills no longer went empty at the same time.”

“You actually tried this?” asks Mark.

“Yes,” I say, “I actually performed the experiment, to verify that the outcome matched my theoretical prediction. I have a sentimental fondness for the scientific method, even when it seems absurd. Besides, what if I’d been wrong?”

“If it had worked,” says Mark, “you would have been guilty of counterfeiting! Imagine if everyone did that; the economy would collapse! Everyone would have billions of dollars of currency, yet there would be nothing for money to buy!”

“Not at all,” I reply. “By that same logic whereby adding another paperclip to the heap creates another dollar bill, creating another dollar bill would create an additional dollar’s worth of goods and services.”

Mark shakes his head. “Counterfeiting is still a crime . . . You should not have tried.”

“I was reasonably confident I would fail.”

“Aha!” says Mark. “You expected to fail! You didn’t believe you could do it!”

“Indeed,” I admit. “You have guessed my expectations with stunning accuracy.”

“Well, that’s the problem,” Mark says briskly. “Magic is fueled by belief and willpower. If you don’t believe you can do it, you can’t. You need to change your belief about the experimental result, if we are to change the result itself.”

“Funny,” I say nostalgically, “that’s what Autrey said when I told him about the pebble-and-bucket method. That it was too ridiculous for him to believe, so it wouldn’t work for him.”

“How did you persuade him?” inquires Mark.

“I told him to shut up and follow instructions,” I say, “and when the method worked, Autrey started believing in it.”

Mark frowns, puzzled. “That makes no sense. It doesn’t resolve the essential chicken-and-egg dilemma.”

“Sure it does. The bucket method works whether or not you believe in it.”

“That’s absurd!” sputters Mark. “I don’t believe in magic that works whether or not you believe in it!”

“I said that too,” chimes in Autrey. “Apparently I was wrong.”

Mark screws up his face in concentration. “But . . . if you didn’t believe in magic that works whether or not you believe in it, then why did the bucket method work when you didn’t believe in it? Did you believe in magic that works whether or not you believe in it whether or not you believe in magic that works whether or not you believe in it?”

“I don’t . . . think so . . .” says Autrey doubtfully.

“Then if you didn’t believe in magic that works whether or not you . . . hold on a second, I need to work this out with paper and pencil—” Mark scribbles frantically, looks skeptically at the result, turns the piece of paper upside down, then gives up. “Never mind,” says Mark. “Magic is difficult enough for me to comprehend; metamagic is out of my depth.”

“Mark, I don’t think you understand the art of bucketcraft,” I say. “It’s not about using pebbles to control sheep. It’s about making sheep control pebbles. In this art, it is not necessary to begin by believing the art will work. Rather, first the art works, then one comes to believe that it works.”

“Or so you believe,” says Mark.

“So I believe,” I reply, “because it happens to be a fact. The correspondence between reality and my beliefs comes from reality controlling my beliefs, not the other way around.”

Another sheep passes, causing me to toss in another pebble.

“Ah! Now we come to the root of the problem,” says Mark. “What’s this so-called ‘reality’ business? I understand what it means for a hypothesis to be elegant, or falsifiable, or compatible with the evidence. It sounds to me like calling a belief ‘true’ or ‘real’ or ‘actual’ is merely the difference between saying you believe something, and saying you really really believe something.”

I pause. “Well . . .” I say slowly. “Frankly, I’m not entirely sure myself where this ‘reality’ business comes from. I can’t create my own reality in the lab, so I must not understand it yet. But occasionally I believe strongly that something is going to happen, and then something else happens instead. I need a name for whatever-it-is that determines my experimental results, so I call it ‘reality’. This ‘reality’ is somehow separate from even my very best hypotheses. Even when I have a simple hypothesis, strongly supported by all the evidence I know, sometimes I’m still surprised. So I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief,’ and the latter thingy ‘reality.’ ”

Mark snorts. “I don’t even know why I bother listening to this obvious nonsense. Whatever you say about this so-called ‘reality,’ it is merely another belief. Even your belief that reality precedes your beliefs is a belief. It follows, as a logical inevitability, that reality does not exist; only beliefs exist.”

“Hold on,” says Autrey, “could you repeat that last part? You lost me with that sharp swerve there in the middle.”

“No matter what you say about reality, it’s just another belief,” explains Mark. “It follows with crushing necessity that there is no reality, only beliefs.”

“I see,” I say. “The same way that no matter what you eat, you need to eat it with your mouth. It follows that there is no food, only mouths.”

“Precisely,” says Mark. “Everything that you eat has to be in your mouth. How can there be food that exists outside your mouth? The thought is nonsense, proving that ‘food’ is an incoherent notion. That’s why we’re all starving to death; there’s no food.”

Autrey looks down at his stomach. “But I’m not starving to death.”

“Aha!” shouts Mark triumphantly. “And how did you utter that very objection? With your mouth, my friend! With your mouth! What better demonstration could you ask that there is no food?”

“What’s this about starvation?” demands a harsh, rasping voice from directly behind us. Autrey and I stay calm, having gone through this before. Mark leaps a foot in the air, startled almost out of his wits.

Inspector Darwin smiles tightly, pleased at achieving surprise, and makes a small tick on his clipboard.

“Just a metaphor!” Mark says quickly. “You don’t need to take away my mouth, or anything like that—”

“Why do you need a mouth if there is no food?” demands Darwin angrily. “Never mind. I have no time for this foolishness. I am here to inspect the sheep.”

“Flock’s thriving, sir,” I say. “No dead sheep since January.”

“Excellent. I award you 0.12 units of fitness. Now what is this person doing here? Is he a necessary part of the operations?”

“As far as I can see, he would be of more use to the human species if hung off a hot-air balloon as ballast,” I say.

“Ouch,” says Autrey mildly.

“I do not care about the human species. Let him speak for himself.”

Mark draws himself up haughtily. “This mere shepherd,” he says, gesturing at me, “has claimed that there is such a thing as reality. This offends me, for I know with deep and abiding certainty that there is no truth. The concept of ‘truth’ is merely a stratagem for people to impose their own beliefs on others. Every culture has a different ‘truth,’ and no culture’s ‘truth’ is superior to any other. This that I have said holds at all times in all places, and I insist that you agree.”

“Hold on a second,” says Autrey. “If nothing is true, why should I believe you when you say that nothing is true?”

“I didn’t say that nothing is true—” says Mark.

“Yes, you did,” interjects Autrey, “I heard you.”

“—I said that ‘truth’ is an excuse used by some cultures to enforce their beliefs on others. So when you say something is ‘true,’ you mean only that it would be advantageous to your own social group to have it believed.”

“And this that you have said,” I say, “is it true?”

“Absolutely, positively true!” says Mark emphatically. “People create their own realities.”

“Hold on,” says Autrey, sounding puzzled again, “saying that people create their own realities is, logically, a completely separate issue from saying that there is no truth, a state of affairs I cannot even imagine coherently, perhaps because you still have not explained how exactly it is supposed to work—”

“There you go again,” says Mark exasperatedly, “trying to apply your Western concepts of logic, rationality, reason, coherence, and self-consistency.”

“Great,” mutters Autrey, “now I need to add a third subject heading, to keep track of this entirely separate and distinct claim—”

“It’s not separate,” says Mark. “Look, you’re taking the wrong attitude by treating my statements as hypotheses, and carefully deriving their consequences. You need to think of them as fully general excuses, which I apply when anyone says something I don’t like. It’s not so much a model of how the universe works, as a Get Out of Jail Free card. The key is to apply the excuse selectively. When I say that there is no such thing as truth, that applies only to your claim that the magic bucket works whether or not I believe in it. It does not apply to my claim that there is no such thing as truth.”

“Um . . . why not?” inquires Autrey.

Mark heaves a patient sigh. “Autrey, do you think you’re the first person to think of that question? To ask us how our own beliefs can be meaningful if all beliefs are meaningless? That’s the same thing many students say when they encounter this philosophy, which, I’ll have you know, has many adherents and an extensive literature.”

“So what’s the answer?” says Autrey.

“We named it the ‘reflexivity problem,’ ” explains Mark.

“But what’s the answer?” persists Autrey.

Mark smiles condescendingly. “Believe me, Autrey, you’re not the first person to think of such a simple question. There’s no point in presenting it to us as a triumphant refutation.”

“But what’s the actual answer?”

“Now, I’d like to move on to the issue of how logic kills cute baby seals—”

“You are wasting time,” snaps Inspector Darwin.

“Not to mention, losing track of sheep,” I say, tossing in another pebble.

Inspector Darwin looks at the two arguers, both apparently unwilling to give up their positions. “Listen,” Darwin says, more kindly now, “I have a simple notion for resolving your dispute. You say,” says Darwin, pointing to Mark, “that people’s beliefs alter their personal realities. And you fervently believe,” his finger swivels to point at Autrey, “that Mark’s beliefs can’t alter reality. So let Mark believe really hard that he can fly, and then step off a cliff. Mark shall see himself fly away like a bird, and Autrey shall see him plummet down and go splat, and you shall both be happy.”

We all pause, considering this.

“It sounds reasonable . . .” Mark says finally.

“There’s a cliff right there,” observes Inspector Darwin.

Autrey is wearing a look of intense concentration. Finally he shouts: “Wait! If that were true, we would all have long since departed into our own private universes, in which case the other people here are only figments of your imagination—there’s no point in trying to prove anything to us—”

A long dwindling scream comes from the nearby cliff, followed by a dull and lonely splat. Inspector Darwin flips his clipboard to the page that shows the current gene pool and pencils in a slightly lower frequency for Mark’s alleles.

Autrey looks slightly sick. “Was that really necessary?”

“Necessary?” says Inspector Darwin, sounding puzzled. “It just happened . . . I don’t quite understand your question.”

Autrey and I turn back to our bucket. It’s time to bring in the sheep. You wouldn’t want to forget about that part. Otherwise what would be the point?

" } }, { "_id": "qAJgWCWJJkke4mE8x", "title": "The Two-Party Swindle", "pageUrl": "https://www.lesswrong.com/posts/qAJgWCWJJkke4mE8x/the-two-party-swindle", "postedAt": "2008-01-01T08:38:28.000Z", "baseScore": 77, "voteCount": 72, "commentCount": 68, "url": null, "contents": { "documentId": "qAJgWCWJJkke4mE8x", "html": "

The Robbers Cave Experiment had as its subject 22 twelve-year-old boys, selected from 22 different schools in Oklahoma City, all doing well in school, all from stable middle-class Protestant families.  In short, the boys were as similar to each other as the experimenters could arrange, though none started out knowing any of the others.  The experiment, conducted in the aftermath of WWII, was meant to investigate conflicts between groups. How would the scientists spark an intergroup conflict to investigate? Well, the first step was to divide the 22 boys into two groups of 11 campers -


- and that was quite sufficient.  There was hostility almost from the moment each group became aware of the other group's existence. Though they had not needed any name for themselves before, they named themselves the Eagles and the Rattlers.  After the researchers (disguised as camp counselors) instigated contests for prizes, rivalry reached a fever pitch and all traces of good sportsmanship disintegrated.  The Eagles stole the Rattlers' flag and burned it; the Rattlers raided the Eagles' cabin, stole the group leader's blue jeans, painted them orange, and carried them as a flag the next day.


Each group developed a stereotype of itself and a contrasting stereotype of the opposing group (though the boys had been initially selected to be as similar as possible).  The Rattlers swore heavily and regarded themselves as rough-and-tough.  The Eagles swore off swearing, and developed an image of themselves as proper-and-moral.


Consider, in this light, the episode of the Blues and the Greens in the days of Rome.  Since the time of the ancient Romans, and continuing into the Byzantine era of the empire, the Roman populace had been divided into the warring Blue and Green factions.  Blues murdered Greens and Greens murdered Blues, despite all attempts at policing. They died in single combats, in ambushes, in group battles, in riots.


From Procopius, History of the Wars, I:


In every city the population has been divided for a long time past into the Blue and the Green factions [...] And they fight against their opponents knowing not for what end they imperil themselves [...] So there grows up in them against their fellow men a hostility which has no cause, and at no time does it cease or disappear, for it gives place neither to the ties of marriage nor of relationship nor of friendship, and the case is the same even though those who differ with respect to these colours be brothers or any other kin.


Edward Gibbon, The Decline and Fall of the Roman Empire:


The support of a faction became necessary to every candidate for civil or ecclesiastical honors.


Who were the Blues and the Greens?


They were sports fans - the partisans of the blue and green chariot-racing teams.


It's less surprising if you think of the Robbers Cave experiment.  Favorite-Team is us; Rival-Team is them. Nothing more is ever necessary to produce fanatic enthusiasms for Us and great hatreds of Them.  People pursue their sports allegiances with all the desperate energy of two hunter-gatherer bands lined up for battle - cheering as if their very life depended on it, because fifty thousand years ago, it did.

Evolutionary psychology produces strange echoes in time, as adaptations continue to execute long after they cease to maximize fitness.  Sex with condoms.  Taste buds still chasing sugar and fat.  Rioting basketball fans.


And so the fans of Favorite-Football-Team all praise their favorite players to the stars, and derogate the players on the Hated-Rival-Team.  We are the fans and players on the Favorite-Football-Team.  They are the fans and players from Hated-Rival-Team.  Those are the two opposing tribes, right?


And yet the professional football players from Favorite-Team have a lot more in common with the professional football players from Rival-Team, than either has in common with the truck driver screaming cheers at the top of his lungs.  The professional football players live similar lives, undergo similar training regimens, move from one team to another.  They're much more likely to hang out at the expensive hotel rooms of fellow football players, than share a drink with a truck driver in his rented trailer home.  Whether Favorite-Team or Rival-Team wins, it's professional football players, not truck drivers, who get the girls, the spotlights, and above all the money: professional football players are paid a hell of a lot more than truck drivers.


Why are professional football players better paid than truck drivers?  Because the truck driver divides the world into Favorite-Team and Rival-Team. That's what motivates him to buy the tickets and wear the T-Shirts. The whole money-making system would fall apart if people started seeing the world in terms of Professional Football Players versus Spectators.


And I'm not even objecting to professional football.  Group identification is pretty much the service provided by football players, and since that service can be provided to many people simultaneously, salaries are naturally competitive.  Fans pay for tickets voluntarily, and everyone knows the score.


It would be a very different matter if your beloved professional football players held over you the power of taxation and war, prison and death.


Then it might not be a good idea to lose yourself in the delicious rush of group identification.


Back in the good ol' days, when knights were brave and peasants starved, there was little doubt that the government and the governed were distinct classes.  Everyone simply took for granted that this was the Natural Order of Things.


This era did not vanish in an instantaneous flash.  The Magna Carta did not challenge the obvious natural distinction between nobles and peasants - but it suggested the existence of a contract, a bargain, two sides at the table rather than one:


No Freeman shall be taken or imprisoned, or be disseised of his Freehold, or Liberties, or free Customs, or be outlawed, or exiled, or any other wise destroyed; nor will We not pass upon him, nor condemn him, but by lawful judgment of his Peers, or by the Law of the Land.  We will sell to no man, we will not deny or defer to any man either Justice or Right.


England did not replace the House of Lords with the House of Commons, when the notion of an elected legislature was first being floated.  They both exist, side-by-side, to this day.


The American War of Independence did not begin as a revolt against the idea of kings, but rather a revolt against one king who had overstepped his authority and violated the compact.


And then someone suggested a really wild idea...


From Decision in Philadelphia: The Constitutional Convention of 1787:


[The delegates to the Constitutional Convention] had grown up believing in a somewhat different principle of government, the idea of the social contract, which said that government was a bargain between the rulers and the ruled.  The people, in essence, agreed to accept the overlordship of their kings and governors; in return, the rulers agreed to respect certain rights of the people.


But as the debate progressed, a new concept of government began more and more to be tossed around.  It abandoned the whole idea of the contract between rulers and the ruled as the philosophic basis for the government.  It said instead that the power resided solely in the people, they could delegate as much as they wanted to, and withdraw it as they saw fit.  All members of the government, not just legislators, would represent the people.  The Constitution, then, was not a bargain between the people and whoever ran the new government, but a delegation of certain powers to the new government, which the people could revise whenever they wanted.


That was the theory.  But did it work in practice?


In some ways, obviously it did work.  I mean, the Presidency of the United States doesn't work like the monarchies of olden times, when the crown passed from father to son, or when a queen would succeed the king her husband.


But that's not even the important question.  Forget that Congresspeople on both sides of the "divide" are more likely to be lawyers than truck drivers.  Forget that in training and in daily life, they have far more in common with each other than they do with a randomly selected US citizen from their own party. Forget that they are more likely to hang out at each other's expensive hotel rooms than drop by your own house.  Is there a political divide - a divide of policies and interests - between Professional Politicians on the one hand, and Voters on the other?


Well, let me put it this way.  Suppose that you happen to be socially liberal, fiscally conservative.  Who would you vote for?


Or simplify it further:  Suppose that you're a voter who prefers a smaller, less expensive government - should you vote Republican or Democratic?  Or, lest I be accused of color favoritism, suppose that your voter preference is to get US troops out of Iraq.  Should you vote Democratic or Republican?


One needs to be careful, at this point, to keep track of the distinction between marketing materials and historical records.  I'm not asking which political party stands for the idea of smaller government - which football team has "Go go smaller government!  Go go go!" as one of its cheers.  (Or "Troops out of Iraq!  Yay!")  Rather, over the last several decades, among Republican politicians and Democratic politicians, which group of Professional Politicians shrunk the government while it was in power?


And by "shrunk" I mean "shrunk".  If you're suckered into an angry, shouting fight over whether Your Politicians or Their Politicians grew the government slightly less slowly, it means you're not seeing the divide between Politicians and Voters. There isn't a grand conspiracy to expand the government, but there's an incentive for each individual politician to send pork to campaign contributors, or borrow today against tomorrow's income.  And that creates a divide between the Politicians and the Voters, as a class, for reasons that have nothing to do with colors and slogans.


Imagine two football teams.  The Green team's professional players shout the battle cry, "Cheaper tickets!  Cheaper tickets!" as they rush into the game.  The Blue team's professional players shout, "Better seating!  Better seating!" as they move forward.  The Green Spectators likewise cry "Cheaper tickets!" and the Blue Spectators of course cheer "Better seating!"


And yet every year the price of tickets goes up, and the seats get harder and less comfortable.  The Blues win a football game, and a great explosion of "Better seating!  Better seating!" rises to the heavens with great shouts of excitement and glory, and then the next year the cushions have been replaced by cold steel.  The Greens kick a long-range field goal, and the Green Spectators leap up and down and hug each other screaming "Cheaper tickets!  Hooray!  Cheaper tickets!" and then tomorrow there's a $5 cost increase.


It's not that there's a conspiracy.  No conspiracy is required. Even dishonesty is not required - it's so painful to have to lie consciously.  But somehow, after the Blue Professional Football Players have won the latest game, and they're just about to install some new cushions, it occurs to them that they'd rather be at home drinking a nice cold beer.  So they exchange a few furtive guilty looks, scurry home, and apologize to the Blue Spectators the next day.


As for the Blue Spectators catching on, that's not very likely.  See, one of the cheers of the Green side is \"Even if the Blues win, they won't install new seat cushions!\"  So if a Blue Spectator says, \"Hey, Blue Players, we cheered real hard and you won the last game!  What's up with the cold steel seats?\" all the other Blue Spectators will stare aghast and say, \"Why are you calling a Green cheer?\"  And the lonely dissenter says, \"No, you don't understand, I'm not cheering for the Greens.  I'm pointing out, as a fellow Spectator with an interest in better seating, that the Professional Football Players who are allegedly on the Blue Spectators' side haven't actually -\"


\"What do you mean?\" cry the Blue Spectators.  \"Listen!  You can hear the Players calling it now!  'Better seating!'  It resounds from the rafters - how can you say our Players aren't true Blue?  Do you want the Green Players to win?  You - you're betraying Our Team by criticizing Our Players!\"


This is what I mean by the \"two-party swindle\".  Once a politician gets you to identify with them, they pretty much own you.


There doesn't have to be a conscious, collaborative effort by Your Politicians and Their Politicians to keep the Voters screaming at each other, so that they don't notice the increasing gap between the Voters and the Politicians.  There doesn't have to be a conspiracy.  It emerges from the interests of the individual politicians in getting you to identify with them instead of judging them.


The problem dates back to olden times.  Commoners identifying with kings was one of the great supports of the monarchy.  The commoners in France and England alike might be cold and starving.  And the kings of France and England alike might be living in a palace, drinking from golden cups.  But hey, the King of England is our king, right?  His glory is our glory?  Long live King Henry the Whatever!


But as soon as you managed to take an emotional step back and think of your king as a contractor - rather than cheering for him because of the country he symbolized - you started to notice that the king wasn't a very good employee.


And I dare say the Big Mess is not likely to be cleaned up, until the Republifans and Demofans realize that in many ways they have more in common with other Voters than with \"their\" Politicians; or, at the very least, stop enthusiastically cheering for rich lawyers because they wear certain colors, and begin judging them as employees severely derelict in their duties.


Until then, the wheel will turn, one sector rising and one sector falling, with a great tumult of lamentation and cheers - and turn again, with uninhibited cries of joy or apprehension - turn again and again, and not go anywhere.


Getting emotional over politics as though it were a sports game - identifying with one color and screaming cheers for them, while heaping abuse on the other color's fans - is a very good thing for the Professional Players' Team; not so much for Team Voters.


(This post is related to the sequence Politics is the Mind-Killer.)

" } }, { "_id": "twxEQ5hecmvDjbWv2", "title": "Posting on Politics", "pageUrl": "https://www.lesswrong.com/posts/twxEQ5hecmvDjbWv2/posting-on-politics", "postedAt": "2008-01-01T07:25:59.000Z", "baseScore": 11, "voteCount": 7, "commentCount": 1, "url": null, "contents": { "documentId": "twxEQ5hecmvDjbWv2", "html": "

Politics, ah, politics!  If human insanities could physically manifest as lumps of putrefaction, and the dripping slimes collected into a giant festering pit, and the mound of corruption formed itself into a monster and shambled forth to eat brains...


Ordinarily I prefer to discuss politics indirectly, rather than directly.  Politics varies from government to government.  Better to talk about human universals - cognitive biases that can be nailed down and examined in the laboratory - malfunctions of sanity that appear wherever humans go.  Then it's up to you to apply the knowledge to your own political situation.


This policy also avoids offending people, so I tend to suspect my clever-sounding rationale for following it.


Over the next two weeks, Iowa and New Hampshire will exercise their Constitutional right to appoint the next President of the United States.  I hope you will forgive me if I am, briefly, relevant.


I intend to do a series of three posts directly applying to politics.  Don't worry, after that it's back to the safe refuge of cognitive science.


Rest assured that I don't plan on endorsing a party, let alone a candidate.


If I say something that you disagree with, remember that my attempts at rationality are not sourced from a divine scripture, and hence are not a package deal.  You read plenty of other blogs, I hope, where an author occasionally says something you dislike?

" } } ] } } }