{ "data": { "posts": { "results": [ { "_id": "Hvxfvhdvpj7x2dJWo", "title": "My Strange Beliefs", "pageUrl": "https://www.lesswrong.com/posts/Hvxfvhdvpj7x2dJWo/my-strange-beliefs", "postedAt": "2007-12-30T12:15:00.000Z", "baseScore": 27, "voteCount": 25, "commentCount": 51, "url": null, "contents": { "documentId": "Hvxfvhdvpj7x2dJWo", "html": "

Yesterday, \"Overcoming Cryonics\" wrote:


Eliezer, enough with your nonsense about cryonicism, life-extensionism, trans-humanism, and the singularity.  These things have nothing to do with overcoming bias... if you're going to enforce the comments policy then you should also self-enforce the overcoming bias posting policy instead of using posts to blithely proselytize your cryonicism / life-extensionism / trans-humanism / singularity religion.


One, there is nothing in the Overcoming Bias posting policy against transhumanism.


Two, as a matter of fact, I do try to avoid proselytizing here.  I have other forums in which to vent my thoughts on transhumanism.  When I write a blog post proselytizing transhumanism, it looks like this, this, or this.


But it's hard for me to avoid all references to transhumanism.  \"Overcoming Cryonics\" commented to a post in which there was exactly one reference to a transhumanist topic.  I had said:


The first time I gave a presentation - the first time I ever climbed onto a stage in front of a couple of hundred people to talk about the Singularity - I briefly thought to myself:  \"I bet most people would be experiencing 'stage fright' about now.  But that wouldn't be helpful, so I'm not going to go there.\"


What, exactly, am I supposed to do about that?  The first time I ever got up on stage, I was in fact talking about the Singularity!  That's the actual history!  Transhumanism is not a hobby for me, it's my paid day job as a Research Fellow of the Singularity Institute.  Asking me to avoid all mentions of transhumanism is like asking Robin Hanson to avoid all mentions of academia.


Occasionally, someone remarks that I seem to take notions like the Singularity on faith, because I mention them but don't defend them.


I don't defend my views here, because I know that not everyone is interested in the considerable volume of work I have produced on transhumanism, which you can find on yudkowsky.net.


If, however, you don't like any mention of transhumanism, even as an illustration of some other point about rationality - well, this is a blog.  These are blog posts.  They are written in the first person.  I am occasionally going to use anecdotes from my history, or even, y'know, transcribe my thought processes a little?


Given the amount of time that I spend thinking about transhumanism, I naturally tend to think of transhumanist illustrations for my points about rationality.  If I had spent the last eleven years as a geologist, I would find it easy to illustrate my ideas by talking about rocks.  If you don't like my illustrations and think you can do better, feel free to invent superior illustrations and post them in the comments.  I may even adopt them.


On some transhumanist topics, such as cryonics, I haven't written all that much myself.  But there is plenty about cryonics at Alcor or Cryonics Institute.   Also, the Transhumanist FAQ has some nice intros.  If you don't want it discussed here, then why are you asking?


I will probably post explicitly on cryonics at some point, because I think there are some points about sour grapes for which I would have difficulty finding an equally strong illustration.  Meanwhile, yes, I sometimes do mention \"cryonics\" as the archetype for a socially weird belief which happens to be true.  No matter what I use as an example of \"socially weird but true\", some people are going to disagree with it.  Otherwise it wouldn't be an example.  And weird-but-true is certainly an important topic in rationality - otherwise there would be a knockdown argument against ever dissenting.


Even after checking the referenced sources, you might find that you - gasp! - still disagree with me.  Oh, the horror!  The horror!  You don't read any other blogs where one of the authors occasionally disagrees with you.


Just because this blog is called Overcoming Bias, it does not mean that any time any author says something you disagree with, you should comment \"OMG!  How biased!  I am sooo disappointed in you I thought you would do better.\"  Part of the art of rationality is having extended discussions with people you disagree with.  \"OMG U R BIASED!\" does not present much basis for continuing discussion.


It is a good rule of thumb that you should never flatly accuse someone of being \"biased\".  Name the specific bias that attaches to the specific problem.  Conjunction fallacy?  Availability?


If you disagree with someone, you presumably think they're doing something wrong.  Saying \"You are like so totally biased, dude\" is not helpful.  If you strike a tragic, sorrowful pose and go, \"Oh, alas, oh, woe, I am so disappointed in you,\" it is still not helpful.  If you point to a specific belief that you disagree with, and say, \"See, that belief is biased,\" then that doesn't convey any additional information beyond \"I disagree with that belief.\"  Which bias?  There's quite a lot of possibilities.


If you think that \"rationality\" means people will agree with you on their first try, so that anyone who doesn't do this can be dismissed out of hand as a poseur, you have an exaggerated idea of how obvious your beliefs are.


So stop telling me, or Robin Hanson, \"Why, you... you... you're not absolutely rational!\"  We already know that.


Just because I try to be rational doesn't mean I think I'm a god.


Well, sure, I want to be a god when I grow up, but that is like a totally different issue from that first part.


Except that both goals involve Bayesian methods.


(And are intertwined in other ways you won't realize until it's too late to turn back.)


Thank you.


Yours in the darkest abyssal depths of sincerity,
Eliezer Yudkowsky.

" } }, { "_id": "gBma88LH3CLQsqyfS", "title": "Cultish Countercultishness", "pageUrl": "https://www.lesswrong.com/posts/gBma88LH3CLQsqyfS/cultish-countercultishness", "postedAt": "2007-12-30T00:53:28.000Z", "baseScore": 108, "voteCount": 94, "commentCount": 32, "url": null, "contents": { "documentId": "gBma88LH3CLQsqyfS", "html": "\n\n\n\n \n\n \n\n

In the modern world, joining a cult is probably one of the worse things that can happen to you. The best-case scenario is that you’ll end up in a group of sincere but deluded people, making an honest mistake but otherwise well-behaved, and you’ll spend a lot of time and money but end up with nothing to show. Actually, that could describe any failed Silicon Valley startup. Which is supposed to be a hell of a harrowing experience, come to think. So yes, very scary.


Real cults are vastly worse. “Love bombing” as a recruitment technique, targeted at people going through a personal crisis. Sleep deprivation. Induced fatigue from hard labor. Distant communes to isolate the recruit from friends and family. Daily meetings to confess impure thoughts. It’s not unusual for cults to take all the recruit’s money—life savings plus weekly paycheck—forcing them to depend on the cult for food and clothing. Starvation as a punishment for disobedience. Serious brainwashing and serious harm.


With all that taken into account, I should probably sympathize more with people who are terribly nervous, embarking on some odd-seeming endeavor, that they might be joining a cult. It should not grate on my nerves. Which it does.


Point one: “Cults” and “non-cults” aren’t separated natural kinds like dogs and cats. If you look at any list of cult characteristics, you’ll see items that could easily describe political parties and corporations—“group members encouraged to distrust outside criticism as having hidden motives,” “hierarchical authoritative structure.” I’ve written on group failure modes like group polarization, happy death spirals, uncriticality, and evaporative cooling, all of which seem to feed on each other. When these failures swirl together and meet, they combine to form a Super-Failure stupider than any of the parts, like Voltron. But this is not a cult essence; it is a cult attractor.


Dogs are born with dog DNA, and cats are born with cat DNA. In the current world, there is no in-between. (Even with genetic manipulation, it wouldn’t be as simple as creating an organism with half dog genes and half cat genes.) It’s not like there’s a mutually reinforcing set of dog-characteristics, which an individual cat can wander halfway into and become a semidog.


The human mind, as it thinks about categories, seems to prefer essences to attractors. The one wishes to say, “It is a cult,” or, “It is not a cult,” and then the task of classification is over and done. If you observe that Socrates has ten fingers, wears clothes, and speaks fluent Greek, then you can say, “Socrates is human,” and from there deduce, “Socrates is vulnerable to hemlock,” without doing specific blood tests to confirm his mortality. You have decided Socrates’s humanness once and for all.


But if you observe that a certain group of people seems to exhibit ingroup-outgroup polarization and see a positive halo effect around their Favorite Thing Ever—which could be Objectivism, or vegetarianism, or neural networks—you cannot, from the evidence gathered so far, deduce whether they have achieved uncriticality. You cannot deduce whether their main idea is true, or false, or genuinely useful but not quite as useful as they think. From the information gathered so far, you cannot deduce whether they are otherwise polite, or if they will lure you into isolation and deprive you of sleep and food. The characteristics of cultness are not all present or all absent.


If you look at online arguments over “X is a cult,” “X is not a cult,” then one side goes through an online list of cult characteristics and finds one that applies and says, “Therefore it is a cult!” And the defender finds a characteristic that does not apply and says, “Therefore it is not a cult!”


You cannot build up an accurate picture of a group’s reasoning dynamic using this kind of essentialism. You’ve got to pay attention to individual characteristics individually.


Furthermore, reversed stupidity is not intelligence. If you’re interested in the central idea, not just the implementation group, then smart ideas can have stupid followers. Lots of New Agers talk about “quantum physics,” but this is no strike against quantum physics.1 Along with binary essentialism goes the idea that if you infer that a group is a “cult,” therefore their beliefs must be false, because false beliefs are characteristic of cults, just like cats have fur. If you’re interested in the idea, then look at the idea, not the people. Cultishness is a characteristic of groups more than hypotheses.


The second error is that when people nervously ask, “This isn’t a cult, is it?” it sounds to me like they’re seeking reassurance of rationality. The notion of a rationalist not getting too attached to their self-image as a rationalist deserves its own essay.2 But even without going into detail, surely one can see that nervously seeking reassurance is not the best frame of mind in which to evaluate questions of rationality. You will not be genuinely curious or think of ways to fulfill your doubts. Instead, you’ll find some online source which says that cults use sleep deprivation to control people, you’ll notice that Your-Favorite-Group doesn’t use sleep deprivation, and you’ll conclude, “It’s not a cult. Whew!” If it doesn’t have fur, it must not be a cat. Very reassuring.


But every cause wants to be a cult, whether the cause itself is wise or foolish. The ingroup-outgroup dichotomy, etc., are part of human nature, not a special curse of mutants. Rationality is the exception, not the rule. You have to put forth a constant effort to maintain rationality against the natural slide into entropy. If you decide, “It’s not a cult!” and sigh with relief, then you will not put forth a continuing effort to push back ordinary tendencies toward cultishness. You’ll decide the cult-essence is absent, and stop pumping against the entropy of the cult attractor.


If you are terribly nervous about cultishness, then you will want to deny any hint of any characteristic that resembles a cult. But any group with a goal seen in a positive light is at risk for the halo effect, and will have to pump against entropy to avoid an affective death spiral. This is true even for ordinary institutions like political parties—people who think that “liberal values” or “conservative values” can cure cancer, etc. It is true for Silicon Valley startups, both failed and successful. It is true of Mac users and of Linux users. The halo effect doesn’t become okay just because everyone does it; if everyone walks off a cliff, you wouldn’t too. The error in reasoning is to be fought, not tolerated. But if you’re too nervous about, “Are you sure this isn’t a cult?” then you will be reluctant to see any sign of cultishness, because that would imply you’re in a cult, and It’s not a cult!! So you won’t see the current battlefields where the ordinary tendencies toward cultishness are creeping forward, or being pushed back.


The third mistake in nervously asking, “This isn’t a cult, is it?” is that, I strongly suspect, the nervousness is there for entirely the wrong reasons.


Why is it that groups which praise their Happy Thing to the stars, encourage members to donate all their money and work in voluntary servitude, and run private compounds in which members are kept tightly secluded, are called “religions” rather than “cults” once they’ve been around for a few hundred years?


Why is it that most of the people who nervously ask of cryonics, “This isn’t a cult, is it?” would not be equally nervous about attending a Republican or Democratic political rally? Ingroup-outgroup dichotomies and happy death spirals can happen in political discussion, in mainstream religions, in sports fandom. If the nervousness came from fear of rationality errors, people would ask, “This isn’t an ingroup-outgroup dichotomy, is it?” about Democratic or Republican political rallies, in just the same fearful tones.


There’s a legitimate reason to be less fearful of Libertarianism than of a flying-saucer cult, because Libertarians don’t have a reputation for employing sleep deprivation to convert people. But cryonicists don’t have a reputation for using sleep deprivation, either. So why be any more worried about having your head frozen after you stop breathing?


I suspect that the nervousness is not the fear of believing falsely, or the fear of physical harm. It is the fear of lonely dissent. The nervous feeling that subjects get in Asch’s conformity experiment, when all the other subjects (actually confederates) say one after another that line C is the same size as line X, and it looks to the subject like line B is the same size as line X. The fear of leaving the pack.


That’s why groups whose beliefs have been around long enough to seem “normal” don’t inspire the same nervousness as “cults,” though some mainstream religions may also take all your money and send you to a monastery. It’s why groups like political parties, that are strongly liable for rationality errors, don’t inspire the same nervousness as “cults.” The word “cult” isn’t being used to symbolize rationality errors; it’s being used as a label for something that seems weird.


Not every change is an improvement, but every improvement is necessarily a change. That which you want to do better, you have no choice but to do differently. Common wisdom does embody a fair amount of, well, actual wisdom; yes, it makes sense to require an extra burden of proof for weirdness. But the nervousness isn’t that kind of deliberate, rational consideration. It’s the fear of believing something that will make your friends look at you really oddly. And so people ask, “This isn’t a cult, is it?” in a tone that they would never use for attending a political rally, or for putting up a gigantic Christmas display.


That’s the part that bugs me.


It’s as if, as soon as you believe anything that your ancestors did not believe, the Cult Fairy comes down from the sky and infuses you with the Essence of Cultness, and the next thing you know, you’re all wearing robes and chanting. As if “weird” beliefs are the direct cause of the problems, never mind the sleep deprivation and beatings. The harm done by cults—the Heaven’s Gate suicide and so on—just goes to show that everyone with an odd belief is crazy; the first and foremost characteristic of “cult members” is that they are Outsiders with Peculiar Ways.


Yes, socially unusual belief puts a group at risk for ingroup-outgroup thinking and evaporative cooling and other problems. But the unusualness is a risk factor, not a disease in itself. Same thing with having a goal that you think is worth accomplishing. Whether or not the belief is true, having a nice goal always puts you at risk of the happy death spiral. But that makes lofty goals a risk factor, not a disease. Some goals are genuinely worth pursuing.3


Problem four: The fear of lonely dissent is something that cults themselves exploit. Being afraid of your friends looking at you disapprovingly is exactly the effect that real cults use to convert and keep members—surrounding converts with wall-to-wall agreement among cult believers.


The fear of strange ideas, the impulse to conformity, has no doubt warned many potential victims away from flying saucer cults. When you’re out, it keeps you out. But when you’re in, it keeps you in. Conformity just glues you to wherever you are, whether that’s a good place or a bad place.


The one wishes there was some way they could be sure that they weren’t in a “cult.” Some definite, crushing rejoinder to people who looked at them funny. Some way they could know once and for all that they were doing the right thing, without these constant doubts. I believe that’s called “need for closure.” And—of course—cults exploit that, too.


Hence the phrase “cultish countercultishness.”


Living with doubt is not a virtue—the purpose of every doubt is to annihilate itself in success or failure, and a doubt that just hangs around accomplishes nothing. But sometimes a doubt does take a while to annihilate itself. Living with a stack of currently unresolved doubts is an unavoidable fact of life for rationalists. Doubt shouldn’t be scary. Otherwise you’re going to have to choose between living one heck of a hunted life, or one heck of a stupid one.


If you really, genuinely can’t figure out whether a group is a “cult,” then you’ll just have to choose under conditions of uncertainty. That’s what decision theory is all about.


Problem five: Lack of strategic thinking.


I know people who are cautious around ideas like intelligence explosion and superintelligent AI, and they’re also cautious around political parties and mainstream religions. Cautious, not nervous or defensive. These people can see at a glance that singularity-ish ideas aren’t currently the nucleus of a full-blown cult with sleep deprivation, etc. But they worry that it will become a cult, because of risk factors like turning the concept of a powerful AI into a Super Happy Agent (an agent defined primarily by agreeing with any nice thing said about it). Just because something isn’t a cult now doesn’t mean it won’t become a cult in the future. Cultishness is an attractor, not an essence.


Does this kind of caution annoy me? Hell no. I spend a lot of time worrying about that scenario myself. I try to place my Go stones in advance to block movement in that direction.4


People who talk about “rationality” also have an added risk factor. Giving people advice about how to think is an inherently dangerous business. But it is a risk factor, not a disease.


Both of my favorite Causes are at-risk for cultishness. Yet somehow I get asked, “Are you sure this isn’t a cult?” a lot more often when I talk about powerful AIs than when I talk about probability theory and cognitive science. I don’t know if one risk factor is higher than the other, but I know which one sounds weirder . . .


Problem #6 with asking, “This isn’t a cult, is it?” . . .


Just the question itself places me in a very annoying sort of Catch-22. An actual Evil Guru would surely use the one’s nervousness against them, and design a plausible elaborate argument explaining Why This Is Not A Cult, and the one would be eager to accept it. Sometimes I get the impression that this is what people want me to do! Whenever I try to write about cultishness and how to avoid it, I keep feeling like I’m giving in to that flawed desire—that I am, in the end, providing people with reassurance. Even when I tell people that a constant fight against entropy is required.


It feels like I’m making myself a first dissenter in Asch’s conformity experiment, telling people, “Yes, line X really is the same as line B, it’s okay for you to say so too.” They shouldn’t need to ask! Or, even worse, it feels like I’m presenting an elaborate argument for Why This Is Not A Cult. It’s a wrong question.


Just look at the group’s reasoning processes for yourself, and decide for yourself whether it’s something you want to be part of, once you get rid of the fear of weirdness. It is your own responsibility to stop yourself from thinking cultishly, no matter which group you currently happen to be operating in.


Cults feed on groupthink, nervousness, desire for reassurance. You cannot make nervousness go away by wishing, and false self-confidence is even worse. But so long as someone needs reassurance—even reassurance about being a rationalist—that will always be a flaw in their armor. A skillful swordsman focuses on the target, rather than glancing away to see if anyone might be laughing. When you know what you’re trying to do and why, you’ll know whether you’re getting it done or not, and whether a group is helping you or hindering you.5


1Of course, stupid ideas can also have stupid followers.


2Though see the two cult koans, “Why Truth?” (in Map and Territory), and “The Twelve Virtues of Rationality” (http://www.lesswrong.com/rationality/the-twelve-virtues-of-rationality).


3On the other hand, I see no legitimate reason for sleep deprivation or threatening dissenters with beating, full stop. When a group does this, then whether you call it “cult” or “not-cult,” you have directly answered the pragmatic question of whether to join.


4Hence, for example, the series of essays on cultish failures of reasoning.


5PS: If the one comes to you and says, “Are you sure this isn’t a cult?” don’t try to explain all these concepts in one breath. You’re underestimating inferential distances. The one will say, “Aha, so you’re admitting you’re a cult!” or, “Wait, you’re saying I shouldn’t worry about joining cults?” or, “So . . . the fear of cults is cultish? That sounds awfully cultish to me.”


So the last annoyance factor—#7 if you’re keeping count—is that all of this is such a long story to explain.

\n\n" } }, { "_id": "n5oCEbnW2PgFmkQhr", "title": "To Lead, You Must Stand Up", "pageUrl": "https://www.lesswrong.com/posts/n5oCEbnW2PgFmkQhr/to-lead-you-must-stand-up", "postedAt": "2007-12-29T06:38:47.000Z", "baseScore": 48, "voteCount": 38, "commentCount": 31, "url": null, "contents": { "documentId": "n5oCEbnW2PgFmkQhr", "html": "

Followup to:  Lonely Dissent


True story:  In July, I attended a certain Silicon Valley event.  I was not an organizer, or a speaker, or in any other wise involved on an official level; just an attendee.  It was an evening event, and after the main presentations were done, much of the audience hung around talking... and talking... and talking...  Finally the event organizer began dimming the lights and turning them back up again.  And the crowd still stayed; no one left.  So the organizer dimmed the lights and turned them up some more.  And lo, the people continued talking.


I walked over to the event organizer, standing by the light switches, and said, "Are you hinting for people to leave?"  And he said, "Yes.  In fact [the host company] says we've got to get out of here now - the building needs to close down."


I nodded.


I walked over to the exit.


I shouted, "LISTEN UP, EVERYONE!  WE'VE GOT TO GO!  OUR TIME HERE HAS PASSED!  YOU CAN TALK OUTSIDE IF YOU LIKE!  NOW FOLLOW ME... TO FREEDOM!"


I turned.


I marched out the door.


And everyone followed.


I expect there were at least two or three CEOs in that Silicon Valley crowd.  It didn't lack for potential leaders.  Why was it left to me to lead the CEOs to freedom?


Well, what was in it for them to perform that service to the group?  It wasn't their problem.  I'm in the habit of doing work I see being left undone; but this doesn't appear to be a common habit.


So why didn't some aspiring would-be future-CEO take the opportunity to distinguish themselves by acting the part of the leader?  I bet at least five people in that Silicon Valley crowd had recently read a business book on leadership...


But it's terribly embarrassing to stand up in front of a crowd.  What if the crowd hadn't followed me?  What if I'd turned and marched out the door, and been left looking like a complete fool?  Oh nos!  Oh horrors!

While I have sometimes pretended to wisdom, I have never pretended to solemnity.  I wasn't worried about looking silly, because heck, I am silly.  It runs in the Yudkowsky family.  There is a difference between being serious and being solemn.


As for daring to stand out in the crowd, to have everyone staring at me - that was a feature of grade school.  The first time I gave a presentation - the first time I ever climbed onto a stage in front of a couple of hundred people to talk about the Singularity - I briefly thought to myself:  "I bet most people would be experiencing 'stage fright' about now.  But that wouldn't be helpful, so I'm not going to go there."


I expect that a majority of my readers like to think of themselves as having strong leadership qualities.  Well, maybe you do, and maybe you don't.  But you'll never get a chance to express those leadership qualities if you're too embarrassed to call attention to yourself, to stand up in front of the crowd and have all eyes turn to you.  To lead the pack, you must be willing to leave the pack.

" } }, { "_id": "CEGnJBHmkcwPTysb7", "title": "Lonely Dissent", "pageUrl": "https://www.lesswrong.com/posts/CEGnJBHmkcwPTysb7/lonely-dissent", "postedAt": "2007-12-28T04:23:31.000Z", "baseScore": 180, "voteCount": 138, "commentCount": 91, "url": null, "contents": { "documentId": "CEGnJBHmkcwPTysb7", "html": "\n\n\n\n \n\n \n\n

Asch’s conformity experiment showed that the presence of a single dissenter tremendously reduced the incidence of “conforming” wrong answers. Individualism is easy, experiment shows, when you have company in your defiance. Every other subject in the room, except one, says that black is white. You become the second person to say that black is black. And it feels glorious: the two of you, lonely and defiant rebels, against the world!1


But you can only join the rebellion after someone, somewhere, becomes the first to rebel. Someone has to say that black is black after hearing everyone else, one after the other, say that black is white. And that—experiment shows—is a lot harder.


Lonely dissent doesn’t feel like going to school dressed in black. It feels like going to school wearing a clown suit.


That’s the difference between joining the rebellion and leaving the pack.


If there’s one thing I can’t stand, it’s fakeness—you may have noticed this. Well, lonely dissent has got to be one of the most commonly, most ostentatiously faked characteristics around. Everyone wants to be an iconoclast.


I don’t mean to degrade the act of joining a rebellion. There are rebellions worth joining. It does take courage to brave the disapproval of your peer group, or perhaps even worse, their shrugs. Needless to say, going to a rock concert is not rebellion. But, for example, vegetarianism is. I’m not a vegetarian myself, but I respect people who are, because I expect it takes a noticeable amount of quiet courage to tell people that hamburgers won’t work for dinner.2


Still, if you tell people that you’re a vegetarian, they’ll think they understand your motives (even if they don’t). They may disagree. They may be offended if you manage to announce it proudly enough, or for that matter, they may be offended just because they’re easily offended. But they know how to relate to you.


When someone wears black to school, the teachers and the other children understand the role thereby being assumed in their society. It’s Outside the System—in a very standard way that everyone recognizes and understands. Not, y’know, actually outside the system. It’s a Challenge to Standard Thinking, of a standard sort, so that people indignantly say, “I can’t understand why you—” but don’t have to actually think any thoughts they had not thought before. As the saying goes, “Has any of the ‘subversive literature’ you’ve read caused you to modify any of your political views?”


What takes real courage is braving the outright incomprehension of the people around you, when you do something that isn’t Standard Rebellion #37, something for which they lack a ready-made script. They don’t hate you for a rebel. They just think you’re, like, weird, and turn away. This prospect generates a much deeper fear. It’s the difference between explaining vegetarianism and explaining cryonics. There are other cryonicists in the world, somewhere, but they aren’t there next to you. You have to explain it, alone, to people who just think it’s weird. Not forbidden, but outside bounds that people don’t even think about. You’re going to get your head frozen? You think that’s going to stop you from dying? What do you mean, brain information? Huh? What? Are you crazy?


I’m tempted to essay a post facto explanation in evolutionary psychology: You could get together with a small group of friends and walk away from your hunter-gatherer band, but having to go it alone in the forests was probably a death sentence—at least reproductively. We don’t reason this out explicitly, but that is not the nature of evolutionary psychology. Joining a rebellion that everyone knows about is scary, but nowhere near as scary as doing something really differently—something that in ancestral times might have concluded, not with the band splitting, but with you being driven out alone.


As the case of cryonics testifies, the fear of thinking really different is stronger than the fear of death. Hunter-gatherers had to be ready to face death on a routine basis—hunting large mammals, or just walking around in a world that contained predators. They needed that courage in order to live. Courage to defy the tribe’s standard ways of thinking, to entertain thoughts that seem truly weird—well, that probably didn’t serve its bearers as well. We don’t reason this out explicitly; that’s not how evolutionary psychology works. We human beings are just built in such fashion that many more of us go skydiving than sign up for cryonics.


And that’s not even the highest courage. There’s more than one cryonicist in the world. Only Robert Ettinger had to say it first.


To be a scientific revolutionary, you’ve got to be the first person to contradict what everyone else you know is thinking. This is not the only route to scientific greatness; it is rare even among the great. No one can become a scientific revolutionary by trying to imitate revolutionariness. You can only get there by pursuing the correct answer in all things, whether the correct answer is revolutionary or not. But if, in the due course of time—if, having absorbed all the power and wisdom of the knowledge that has already accumulated—if, after all that and a dose of sheer luck, you find your pursuit of mere correctness taking you into new territory . . . then you have an opportunity for your courage to fail.


This is the true courage of lonely dissent, which every damn rock band out there tries to fake.


Of course, not everything that takes courage is a good idea. It would take courage to walk off a cliff, but then you would just go splat.


The fear of lonely dissent is a hindrance to good ideas, but not every dissenting idea is good.3 Most of the difficulty in having a new true scientific thought is in the “true” part.


It really isn’t necessary to be different for the sake of being different. If you do things differently only when you see an overwhelmingly good reason, you will have more than enough trouble to last you the rest of your life.


There are a few genuine packs of iconoclasts around. The Church of the SubGenius, for example, seems to genuinely aim at confusing the mundanes, not merely offending them. And there are islands of genuine tolerance in the world, such as science fiction conventions. There are certain people who have no fear of departing the pack. Many fewer such people really exist, than imagine themselves rebels; but they do exist. And yet scientific revolutionaries are tremendously rarer. Ponder that.


Now me, you know, I really am an iconoclast. Everyone thinks they are, but with me it’s true, you see. I would totally have worn a clown suit to school. My serious conversations were with books, not with other children.


But if you think you would totally wear that clown suit, then don’t be too proud of that either! It just means that you need to make an effort in the opposite direction to avoid dissenting too easily. That’s what I have to do, to correct for my own nature. Other people do have reasons for thinking what they do, and ignoring that completely is as bad as being afraid to contradict them. You wouldn’t want to end up as a free thinker. It’s not a virtue, you see—just a bias either way.


1Followup interviews showed that subjects in the one-dissenter condition expressed strong feelings of camaraderie with the dissenter—though, of course, they didn’t think the presence of the dissenter had influenced their own nonconformity.


2Albeit that in the Bay Area, people ask as a matter of routine.


3See Robin Hanson, “Against Free Thinkers,” Overcoming Bias (blog), 2007, http://www.overcoming-bias.com/2007/06/against_free_th.html.

\n\n" } }, { "_id": "ovvwAhKKoNbfcMz8K", "title": "On Expressing Your Concerns", "pageUrl": "https://www.lesswrong.com/posts/ovvwAhKKoNbfcMz8K/on-expressing-your-concerns", "postedAt": "2007-12-27T04:04:44.000Z", "baseScore": 66, "voteCount": 61, "commentCount": 38, "url": null, "contents": { "documentId": "ovvwAhKKoNbfcMz8K", "html": "\n\n\n\n \n\n \n\n

The scary thing about Asch’s conformity experiments is that you can get many people to say black is white, if you put them in a room full of other people saying the same thing. The hopeful thing about Asch’s conformity experiments is that a single dissenter tremendously drove down the rate of conformity, even if the dissenter was only giving a different wrong answer. And the wearisome thing is that dissent was not learned over the course of the experiment—when the single dissenter started siding with the group, rates of conformity rose back up.


Being a voice of dissent can bring real benefits to the group. But it also (famously) has a cost. And then you have to keep it up. Plus you could be wrong.


I recently had an interesting experience wherein I began discussing a project with two people who had previously done some planning on their own. I thought they were being too optimistic and made a number of safety-margin-type suggestions for the project. Soon a fourth guy wandered by, who was providing one of the other two with a ride home, and began making suggestions. At this point I had a sudden insight about how groups become overconfident, because whenever I raised a possible problem, the fourth guy would say, “Don’t worry, I’m sure we can handle it!” or something similarly reassuring.


An individual, working alone, will have natural doubts. They will think to themselves, “Can I really do XYZ?” because there’s nothing impolite about doubting your own competence. But when two unconfident people form a group, it is polite to say nice and reassuring things, and impolite to question the other person’s competence. Together they become more optimistic than either would be on their own, each one’s doubts quelled by the other’s seemingly confident reassurance, not realizing that the other person initially had the same inner doubts.


The most fearsome possibility raised by Asch’s experiments on conformity is the specter of everyone agreeing with the group, swayed by the confident voices of others, careful not to let their own doubts show—not realizing that others are suppressing similar worries. This is known as “pluralistic ignorance.”


Robin Hanson and I have a long-running debate over when, exactly, aspiring rationalists should dare to disagree. I tend toward the widely held position that you have no real choice but to form your own opinions. Robin Hanson advocates a more iconoclastic position, that you—not just other people—should consider that others may be wiser. Regardless of our various disputes, we both agree that Aumann’s Agreement Theorem extends to imply that common knowledge of a factual disagreement shows someone must be irrational.1 Despite the funny looks we’ve gotten, we’re sticking to our guns about modesty: Forget what everyone tells you about individualism, you should pay attention to what other people think.


Ahem. The point is that, for rationalists, disagreeing with the group is serious business. You can’t wave it off with, “Everyone is entitled to their own opinion.”


I think the most important lesson to take away from Asch’s experiments is to distinguish “expressing concern” from “disagreement.” Raising a point that others haven’t voiced is not a promise to disagree with the group at the end of its discussion.


The ideal Bayesian’s process of convergence involves sharing evidence that is unpredictable to the listener. The Aumann agreement result holds only for common knowledge, where you know, I know, you know I know, etc. Hanson’s post or paper on “We Can’t Foresee to Disagree” provides a picture of how strange it would look to watch ideal rationalists converging on a probability estimate; it doesn’t look anything like two bargainers in a marketplace converging on a price.
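
To make that picture concrete, here is a small, self-contained Python sketch of the textbook announcement process usually attributed to Geanakoplos and Polemarchakis.  Everything in it - the nine-state world, the event, the two partitions, the true state - is an invented toy example, not anything taken from Hanson's paper or from Aumann; it is only meant to show what "converging by trading bare posteriors" can look like.

    # Toy illustration of two ideal Bayesians reaching agreement by announcing
    # posteriors to each other.  All numbers below are made up for the example;
    # the dynamic is the standard "announcement" process.
    from fractions import Fraction

    STATES = set(range(1, 10))                       # uniform common prior over 9 states
    EVENT = {3, 4}                                   # the proposition whose probability is estimated
    PARTITION_1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]  # agent 1's private information
    PARTITION_2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]  # agent 2's private information

    def prob(event, given):
        """P(event | given) under the uniform prior."""
        return Fraction(len(event & given), len(given))

    def cell(partition, state):
        """The partition cell containing the true state."""
        return next(c for c in partition if state in c)

    def run(true_state, rounds=6):
        public = set(STATES)                 # everything revealed so far (common knowledge)
        partitions = [PARTITION_1, PARTITION_2]
        for r in range(rounds):
            i = r % 2
            mine = cell(partitions[i], true_state) & public
            q = prob(EVENT, mine)
            print(f"round {r}: agent {i + 1} announces P(A) = {q}")
            # Everyone learns which of agent i's cells are consistent with announcing q.
            public &= set().union(*(c & public for c in partitions[i]
                                    if (c & public) and prob(EVENT, c & public) == q))

    run(true_state=2)

With these toy numbers the announcements go 1/3, 1/2, 1/3, 1/3, ... - agent 2 simply snaps to agent 1's estimate once the exchanged announcements have revealed enough information.  Nothing in the trace looks like two bargainers splitting the difference.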


Unfortunately, there’s not much difference socially between “expressing concerns” and “disagreement.” A group of rationalists might agree to pretend there’s a difference, but it’s not how human beings are really wired. Once you speak out, you’ve committed a socially irrevocable act; you’ve become the nail sticking up, the discord in the comfortable group harmony, and you can’t undo that. Anyone insulted by a concern you expressed about their competence to successfully complete task XYZ will probably hold just as much of a grudge afterward if you say, “No problem, I’ll go along with the group,” at the end.


Asch’s experiment shows that the power of dissent to inspire others is real. Asch’s experiment shows that the power of conformity is real. If everyone refrains from voicing their private doubts, that will indeed lead groups into madness. But history abounds with lessons on the price of being the first, or even the second, to say that the Emperor has no clothes. Nor are people hardwired to distinguish “expressing a concern” from “disagreement even with common knowledge”; this distinction is a rationalist’s artifice. If you read the more cynical brand of self-help books (e.g., Machiavelli’s The Prince) they will advise you to mask your nonconformity entirely, not voice your concerns first and then agree at the end. If you perform the group service of being the one who gives voice to the obvious problems, don’t expect the group to thank you for it.


These are the costs and the benefits of dissenting—whether you “disagree” or just “express concern”—and the decision is up to you.


1See “The Modesty Argument.” http://lesswrong.com/lw/gr/the_modesty_argument.

\n\n" } }, { "_id": "WHK94zXkQm7qm7wXk", "title": "Asch's Conformity Experiment", "pageUrl": "https://www.lesswrong.com/posts/WHK94zXkQm7qm7wXk/asch-s-conformity-experiment", "postedAt": "2007-12-26T07:03:13.000Z", "baseScore": 70, "voteCount": 65, "commentCount": 67, "url": null, "contents": { "documentId": "WHK94zXkQm7qm7wXk", "html": "\n\n\n\n \n\n \n\n

Solomon Asch, with experiments originally carried out in the 1950s and well-replicated since, highlighted a phenomenon now known as “conformity.” In the classic experiment, a subject sees a puzzle like the one in the nearby diagram: Which of the lines A, B, and C is the same size as the line X? Take a moment to determine your own answer . . .

[Diagram: vertical lines labeled A, B, and C, shown beside a reference line X for comparison.]

The gotcha is that the subject is seated alongside a number of other people looking at the diagram—seemingly other subjects, actually confederates of the experimenter. The other “subjects” in the experiment, one after the other, say that line C seems to be the same size as X. The real subject is seated next-to-last. How many people, placed in this situation, would say “C”—giving an obviously incorrect answer that agrees with the unanimous answer of the other subjects? What do you think the percentage would be?


Three-quarters of the subjects in Asch’s experiment gave a “conforming” answer at least once. A third of the subjects conformed more than half the time.


Interviews after the experiment showed that while most subjects claimed to have not really believed their conforming answers, some said they’d really thought that the conforming option was the correct one.


Asch was disturbed by these results:1


That we have found the tendency to conformity in our society so strong . . . is a matter of concern. It raises questions about our ways of education and about the values that guide our conduct.


It is not a trivial question whether the subjects of Asch’s experiments behaved irrationally. Robert Aumann’s Agreement Theorem shows that honest Bayesians cannot agree to disagree—if they have common knowledge of their probability estimates, they have the same probability estimate. Aumann’s Agreement Theorem was proved more than twenty years after Asch’s experiments, but it only formalizes and strengthens an intuitively obvious point—other people’s beliefs are often legitimate evidence.
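
For readers who want the formal version of that sentence, one standard way to state the theorem (using the usual partition notation; nothing here is specific to Asch's setup) is:

\[
\text{If agents } 1 \text{ and } 2 \text{ share the prior } P \text{, and at state } \omega \text{ it is common knowledge that } P(A \mid \Pi_1)(\omega) = q_1 \text{ and } P(A \mid \Pi_2)(\omega) = q_2 \text{, then } q_1 = q_2 ,
\]

where \(\Pi_i\) is agent \(i\)'s information partition.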


If you were looking at a diagram like the one above, but you knew for a fact that the other people in the experiment were honest and seeing the same diagram as you, and three other people said that C was the same size as X, then what are the odds that only you are the one who’s right? I lay claim to no advantage of visual reasoning—I don’t think I’m better than an average human at judging whether two lines are the same size. In terms of individual rationality, I hope I would notice my own severe confusion and then assign >50% probability to the majority vote.
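
To put rough numbers on that (the 0.9 reliability figure below is an assumed illustrative value, not anything measured by Asch): suppose each honest viewer, myself included, independently reports the matching line correctly with probability 0.9, and start from even prior odds between B and C.  Then

\[
\frac{P(C \mid \text{reports})}{P(B \mid \text{reports})} \;=\; \underbrace{\left(\frac{0.9}{0.1}\right)^{3}}_{\text{three honest reports of } C} \times \underbrace{\frac{0.1}{0.9}}_{\text{my own perception of } B} \;=\; 9^{2} \;=\; 81,
\]

so the majority answer ends up at roughly \(81/82 \approx 0.99\) - comfortably past the 50% mark, even though my own eyes say B.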


In terms of group rationality, seems to me that the proper thing for an honest rationalist to say is, “How surprising, it looks to me like B is the same size as X. But if we’re all looking at the same diagram and reporting honestly, I have no reason to believe that my assessment is better than yours.” The last sentence is important—it’s a much weaker claim of disagreement than, “Oh, I see the optical illusion—I understand why you think it’s C, of course, but the real answer is B.”


So the conforming subjects in these experiments are not automatically convicted of irrationality, based on what I’ve described so far. But as you might expect, the devil is in the details of the experimental results. According to a meta-analysis of over a hundred replications by Smith and Bond . . . 2


. . . Conformity increases strongly up to 3 confederates, but doesn’t increase further up to 10–15 confederates. If people are conforming rationally, then the opinion of 15 other subjects should be substantially stronger evidence than the opinion of 3 other subjects.
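
Under the same kind of toy independence assumption (each honest, independent viewer right with an assumed probability of 0.9; these numbers are illustrative, not taken from the meta-analysis), the strength of the evidence keeps growing with every additional reporter - which is exactly what the flat conformity curve fails to track:

    # Toy calculation: posterior probability that the majority answer is correct,
    # given n independent honest viewers who each report correctly with an
    # assumed probability of 0.9, against my own single dissenting perception.
    # Prior odds between the two candidate answers are taken as even.
    RELIABILITY = 0.9
    LR = RELIABILITY / (1 - RELIABILITY)   # 9:1 likelihood ratio per independent report

    for n in (3, 10, 15):
        odds = LR ** n / LR                # n reports for the majority vs. my one report against
        print(f"{n:2d} agreeing viewers -> posterior for majority = {odds / (1 + odds):.10f}")

On this model, fifteen agreeing witnesses should move a rational subject enormously further than three; the observed plateau is part of what tells against a purely Aumann-style reading.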


Adding a single dissenter—just one other person who gives the correct answer, or even an incorrect answer that’s different from the group’s incorrect answer—reduces conformity very sharply, down to 5–10% of subjects. If you’re applying some intuitive version of Aumann’s Agreement to think that when 1 person disagrees with 3 people, the 3 are probably right, then in most cases you should be equally willing to think that 2 people will disagree with 6 people.3 On the other hand, if you’ve got people who are emotionally nervous about being the odd one out, then it’s easy to see how adding a single other person who agrees with you, or even adding a single other person who disagrees with the group, would make you much less nervous.


Unsurprisingly, subjects in the one-dissenter condition did not think their nonconformity had been influenced or enabled by the dissenter. Like the 90% of drivers who think they’re above-average in the top 50%, some of them may be right about this, but not all. People are not self-aware of the causes of their conformity or dissent, which weighs against any attempts to argue that the patterns of conformity are rational.4


When the single dissenter suddenly switched to conforming to the group, subjects’ conformity rates went back up to just as high as in the no-dissenter condition. Being the first dissenter is a valuable (and costly!) social service, but you’ve got to keep it up.


Consistently within and across experiments, all-female groups (a female subject alongside female confederates) conform significantly more often than all-male groups. Around one-half the women conform more than half the time, versus a third of the men. If you argue that the average subject is rational, then apparently women are too agreeable and men are too disagreeable, so neither group is actually rational . . .


Ingroup-outgroup manipulations (e.g., a handicapped subject alongside other handicapped subjects) similarly show that conformity is significantly higher among members of an ingroup.


Conformity is lower in the case of blatant diagrams, like the one at the beginning of this essay, versus diagrams where the errors are more subtle. This is hard to explain if (all) the subjects are making a socially rational decision to avoid sticking out.


Finally, Paul Crowley reminds me to note that when subjects can respond in a way that will not be seen by the group, conformity also drops, which also argues against an Aumann interpretation.


1Solomon E. Asch, “Studies of Independence and Conformity: A Minority of One Against a Unanimous Majority,” Psychological Monographs 70 (1956).


2Rod Bond and Peter B. Smith, “Culture and Conformity: A Meta-Analysis of Studies Using Asch’s (1952b, 1956) Line Judgment Task,” Psychological Bulletin 119 (1996): 111–137.


3This isn’t automatically true, but it’s true ceteris paribus.


4For example, in the hypothesis that people are socially-rationally choosing to lie in order to not stick out, it appears that (at least some) subjects in the one-dissenter condition do not consciously anticipate the “conscious strategy” they would employ when faced with unanimous opposition.

\n\n" } }, { "_id": "FkwKGQFS5XL9mQSQb", "title": "The Amazing Virgin Pregnancy", "pageUrl": "https://www.lesswrong.com/posts/FkwKGQFS5XL9mQSQb/the-amazing-virgin-pregnancy", "postedAt": "2007-12-24T14:00:00.000Z", "baseScore": 32, "voteCount": 50, "commentCount": 271, "url": null, "contents": { "documentId": "FkwKGQFS5XL9mQSQb", "html": "

People who grow up believing certain things,
even if they later stop believing them,
may not quite realize how the beliefs sound to outsiders...


(SCENE:  A small cottage in Nazareth.)


Joseph:  Mary, my dearest fiancée, there's something I've been meaning to talk to you about.


(Mary's shoulders slump.  Slowly, as if under a heavy burden, she turns around to face Joseph.)


Joseph:  You seem to be getting fat around the waistline, and throwing up in the morning, and, er, not getting any periods.  Which is odd, because it's sort of like -


Mary:  Yes!  I'm pregnant!  All right?  I'm PREGNANT!


Joseph:  How is that possible?


(Mary's shoulders slump further.)  Mary:  How do you think?


Joseph:  I don't know, that's why I'm asking you.  I mean, you're still a virgin, right?


(Mary looks up cautiously, and sees Joseph's face looking blankly puzzled.)


Joseph:  Well?


Mary:  God did it.


Joseph:  You had sex with -


Mary:  No!  Haha.  Of course not.  I mean, God just snapped his fingers and did one of those miracle things and made me pregnant.


Joseph:  God made you pregnant.


Mary:  (Starts to sweat.)  Yes.


Joseph:  Mary, that is just so... completely...


(Mary's eyes squeeze shut.)


Joseph:  ...COOL!


(Mary opens her eyes again, cautiously.)


Mary:  You think so?


Joseph:  Of course!  Who wouldn't think so?  Come on, we've got to tell everyone the news!


Mary:  Maybe we should keep this between just the two of us -


Joseph:  No, no, silly girl, this is way too important!  Come on!


(Joseph grabs Mary's wrist and drags her out of the house. SCENE:  The gathering square of Nazareth.  A dozen well-dressed men, and the town's head rabbi, look on Joseph and Mary impatiently.)


Rabbi:  What's this all about, Joseph?  I trust there's a good reason for the fuss?


Joseph:  Go ahead, Mary!  Tell them what you told me.


Mary:  Um...  (She swallows.)  God made me pregnant.


Rabbi, looking stern, yet understanding:  Now, Joseph, you know you're not supposed to do that before -


Joseph:  No, no, you don't get it!  She's still a virgin!  God made her pregnant directly!


(There's a long pause.)


Man #1:  So, what you're saying here, basically, is that Mary tells you she's a virgin.


Joseph:  Uh huh!


Man #2:  And you haven't had sex with her.


Joseph:  Uh huh!


Man #3:  And now she's pregnant.


Joseph:  Precisely!


Man #4:  So you think that God did it.


Joseph:  What other explanation could there be?


Rabbi:  Joseph, that is just so... unbelievably...


(Mary holds her breath.)


Rabbi:  NEAT!


(Mary exhales.)


Man #5:  A miracle!  A miracle right here in Nazareth!


Man #6:  Wow!  I thought that miracles only happened in Jerusalem!


Man #7:  Come on!  Let's spread the good news!


(They depart.  SCENE:  Mary is alone with her friend, Betty, in Betty's house.)


Betty:  \"God did it.\"


Mary:  I panicked!  It was all I could think of!


Betty:  So who's the real -


(Mary lifts an eyebrow significantly.  There's a brief pause.)


Betty:  Ah.  So that's why the rabbi went along with it.


Mary:  Well, he thinks he's the father, anyway.  Why, does it matter?


Betty:  It puts some things in a different light.


Mary:  Like what? 


Betty:  The rabbi has been telling all the pretty young girls that you, Mary, are the ultimate embodiment of feminine virtue, and when they grow up, they should be just like you -


Mary:  I just feel so awful about the whole mess.  What kind of thing is this to have hanging over my child's life?


Betty:  You've got to put things in perspective, dearie.  You told one little white lie.  It's not as if you caused the fall of the Roman Empire.


Mary:  But what if the Romans hear about it?  I don't want my baby to end up being crucified!


Betty:  No one's going to obsess about it that long.  In a couple of months this whole thing will blow over.


Mary:  I hope you're right...


(Exeunt Omnes.)

" } }, { "_id": "HLERouG7QBt7jzLt4", "title": "Zen and the Art of Rationality", "pageUrl": "https://www.lesswrong.com/posts/HLERouG7QBt7jzLt4/zen-and-the-art-of-rationality", "postedAt": "2007-12-24T04:36:34.000Z", "baseScore": 58, "voteCount": 45, "commentCount": 33, "url": null, "contents": { "documentId": "HLERouG7QBt7jzLt4", "html": "

Followup to:  Effortless Technique


No one would mistake my writings for ancient Eastern wisdom.  Successfully or not, I aspire to clearly set forth the reasoning, antecedent assumptions, and pragmatic use of my conclusions.  Successfully or not, I aspire to cut my proposals into modular pieces, so that a user can reject one mistake without destroying the whole.  This standard of writing is inherited from the ancient traditions of technical thinking, not the ancient traditions of Zen.


No one would mistake my writings for ancient Eastern wisdom.  My goals are not the goals of Buddha or Lao Tse.  Feeling Rational suggested that emotions should follow from beliefs but not beliefs follow from emotions:  the ideal is to free yourself of all attachment to preferred conclusions about reality, arrive at your beliefs of fact by weighing the evidence without prejudice, and then feel fully whatever emotions follow from these beliefs-of-fact.  In stereotypical Eastern philosophy, you are supposed to free yourself of all attachments, not just attachment to beliefs-of-fact apart from evidence; you are supposed to relinquish all desire.  Yes, I know it's more complicated than that - but still, their goals are not mine.


And yet it oftimes seems to me that my thoughts are expressed in conceptual language that owes a great deal to the inspiration of Eastern philosophy.  "Free yourself of attachments to thinking that the universe is one way or another:  Arrive at your picture of the world without prejudice, and then feel fully whichever feelings arise from this picture.  Let your emotions flow from your beliefs, not the other way around."  It's not a Buddhist conclusion, but the language owes a nod in the direction of Buddhism.  Even if a Buddhist teacher would vehemently disagree, they might still grasp immediately what was being proposed.  Grasp it more clearly, perhaps, than an old-school (i.e. pre-Bayesian) Western rationalist.

No one would mistake my writings for ancient Eastern wisdom.  And this is well, because I can't stand people who try to pass off their ideas as ancient wisdom.  As if that were a recommendation!  The fifth-century Chinese philosopher Xiaoguang Li observed that ancient civilizations are revered, and yet ancient civilizations are not wise like venerable human elders are wise.  A civilization further back in time is younger, not older.  The current civilization is always the senior, because the present enjoys a longer history than the past.  Incidentally, does it change your opinion if I tell you that Xiaoguang "Mike" Li is actually a friend of mine who lives in the Bay Area?


So be it far from me to spray-paint my work with a patina of venerability.  And yet in too many ways to list here, my work owes a nod in the direction of Buddhism, Taoism, Zen - and even Bushido.  Yes, Bushido!  See e.g. the Musashi quotes in the Twelve Virtues of Rationality.  Whatever their other flaws, samurai had a deep grasp of the virtue of perfectionism as a life-principle.  To Westerners, "perfectionism" refers to something that seems like work, makes people unhappy, and causes software to ship late.


Of the virtue of curiosity, I said:  "A burning itch to know is higher than a solemn vow to pursue truth."  Here is the conceptual language - but not the propositional statements - of Lao Tse admonishing, "Stop talking about morality and righteousness, and people will regain the love of their fellows."  People are not naturally rational - but you sure can trip over your own feet by thinking too much about "rationality" instead of paying attention to the obvious evidence.  Learned virtues are powerful but dangerous; they have many degrees of freedom for error.

\n\n

Western religions demand submission to God, bended knee and bowed neck.  Many Christian saints achieved their canonization by going to great lengths of voluntary suffering.  You obey God's precepts out of dutiful morality and reverence, on penalty of judgment and damnation.  Such concepts have contaminated Eastern street religions as well, of course.  But so far as Eastern religious philosophy is concerned, one speaks of harmony with the Tao, rather than submitting to the Tao.

\n\n

When I ask myself whether rationality seems more like submitting to the commands of Bayes, or moving in harmony with the Bayes, the latter seems far closer to the mark.  By placing yourself in correspondence with the Bayes, you wield the power of the Bayes.  If you misstep in the dance (accidentally or deliberately), there is no judge who damns you, or any divine watcher disappointed in you:  You have failed yourself.  The laws of probability theory still govern you, entirely indifferent to your submission or defiance.  The consequences of your disharmony will occur to you according to the natural order of things: the Bayes does not condemn you for your disobedience, but reality will not go according to your hopeful plans.  Neither guilt nor repentance will save you, since the Bayes cares nothing for your allegiance.  Worshipping the Bayes will not gain its favor, for the Bayes has no ego-desire to demand your praise.  Probability theory is there to be used, not believed-in.  There is no ancient Taoist manuscript that agrees with such Bayesianity, but the language...

\n\n

The axioms of Bayesian probability theory make no mention of clothing, and therefore a valid derivation is valid whether you wear a lab coat or a clown suit.  The Bayes makes no mention of solemnity or silliness, and therefore a lecture on rationality is just the same whether spoken in deep portentous tones or a high squeaky voice from inhaling helium.  Understanding what probability theory constrains and does not constrain, we are free to be spontaneous in all other respects.  This purity and freedom is preached in no Buddhist tract, but there is something of an Eastern aesthetic about it - and a mathematical aesthetic also, but math knows no East or West, and is simply math.

\n\n

Miyamoto Musashi said:

"The primary thing when you take a sword in your hands is your\nintention to cut the enemy, whatever the means. Whenever you parry,\nhit,\nspring, strike or touch the enemy's cutting sword, you must cut the\nenemy\nin the same movement. It is essential to attain this. If you think only\nof\nhitting, springing, striking or touching the enemy, you will not be\nable actually\nto cut him."

Likewise in rationality.  Every step cuts through to the truth in the same movement.  Every step carries the map through to reflect the territory.  If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.  Whether you wear a lab coat or a clown suit, however much it might naively seem to associate with science, does not affect whether you cut through to the correct answer.  (This is why I'm not afraid to borrow the language of Taoism, or verse-form when the whim takes me; mere style makes no difference to probability theory.)  You might think that such focus, such purposefulness, is more Western than Eastern - but where is the equivalent declaration of Musashi's by a Western philosopher?

\n\n

Lest I seem to give the East too much praise, I note a well-known truism to the effect that Westerners overestimate the average quality of Eastern philosophy because only the good stuff gets imported.  Buddhism seems "atheistic" because you don't read about the ten thousand minor deities unabashedly worshipped on the street.  Such selectivity is right and proper, and I make no apology for it.  I am not trying for authenticity, that is not my purpose.

\n\n

Likewise, I don't spend much time pondering my "Western influences" because they are as natural to me as breathing, as unseen to me as air.  If I had grown up in Taiwan, my writing would probably sound far more Buddhist and Taoistic; and perhaps I would talk of the inspiration (though not advice) I had received from reading some Taiwanese book about Greek philosophers, and how I often felt closer to Judaism than my forgotten childhood Buddhism.

\n\n

Nonetheless, I think it a wise thing for an aspiring rationalist to read at least one book of Buddhist or Taoist or Zen philosophy - preferably a book in its original English, recommended to you by some mathematician or programmer or scientist.

" } }, { "_id": "Eiw6fea93DhmGEBux", "title": "Effortless Technique", "pageUrl": "https://www.lesswrong.com/posts/Eiw6fea93DhmGEBux/effortless-technique", "postedAt": "2007-12-23T04:22:17.000Z", "baseScore": 41, "voteCount": 34, "commentCount": 16, "url": null, "contents": { "documentId": "Eiw6fea93DhmGEBux", "html": "

"All my life I have been intensely repelled by the idea of 'making an effort'.  I hate this idea today as much as I did as a child.  I don't know why I hate it so much; I just do."
           -- Raymond Smullyan, The Tao Is Silent

In the Hollywood version of rationality - or even the Traditional rationality that was passed down from supervisor to grad student in ancient days before Bayesianism - rationality is a great strain, a great effort, a continuous battle to coerce your mind into a desired shape.  Spock, the archetype of Hollywood's concept of rationality, represses all his emotions.

\n\n

And this great effort, they conceive, is virtue unto a rationalist.  The more effort you expend on forcing yourself into the mold, the better the rationalist you must be.  It's like working extra hard at your job, as demanded by the Protestant work-ethic.  If the one works long hours - sweating, getting ulcers - surely the one must be worthy of praise?

This, I think, is an instance of a Lost Purpose.  People see that successful folk must sometimes make an effort, and so they conclude that effort of itself is virtuous whether or not it succeeds.

\n\n

I am reminded of an AI koan from AGI '06, where the discussion turned (as it often does) to defining "intelligence".  A medium-prominent AI researcher suggested that an agent's "intelligence" could be measured in the agent's processing cycles per second, bits of memory, and bits of sensory bandwidth.  To which I replied with a quote from Dijkstra:

"If we wish to count lines of code, we should not regard them as 'lines produced' but as 'lines spent': the current conventional wisdom is so foolish as to book that count on the wrong side of the ledger."

Surely (I said), an agent is less intelligent if it uses more memory, processing power, and sensory bandwidth to accomplish the same task?

\n\n

This reply was due, in no small part, to my having read Raymond Smullyan's The Tao Is Silent at the age of sixteen.  Raymond Smullyan is a mathematical logician, a great composer of logic puzzles, and sometime Westernized Taoist.  Though I disagree with much of The Tao Is Silent, I would count "just the parts of the book I liked" as one of my most important formative influences as a rationalist.

\n\n

In particular, it was in The Tao Is Silent that I first encountered the Taoistic principles of spontaneity, working with rather than against the nature of things, human goodness as a distinct phenomenon from the heavy weight of dutiful moral obligation, and above all, wei wu wei, "acting through not acting".

\n\n\n\n

Smullyan's Taoism was more inspiration than instruction, but it was important inspiration.  I matured as a rationalist while keeping firmly in mind that my "rationality" was not measured by how much effort I expended on proper thinking, but rather how little.

\n\n

You can see this same view manifested in these lines from The Simple Truth:

        "You have to throw in a pebble every time a sheep leaves through the gate?" says Mark.  "Take out a pebble every time a sheep returns?"
        Autrey nods.  "Yeah."
        "That must be really hard," Mark says sympathetically.
        Autrey brightens, soaking up Mark's sympathy like rain.  "Exactly!" says Autrey.  "It's extremely hard on your emotions.  When the bucket has held its level for a while, you... tend to get attached to that level."
        A sheep passes then, leaving through the gate. Autrey sees; he stoops, picks up a pebble, holds it aloft in the air. "Behold!" Autrey proclaims. "A sheep has passed! I must now toss a pebble into this bucket, my dear bucket, and destroy that fond level which has held for so long -" Another sheep passes. Autrey, caught up in his drama, misses it; so I plunk a pebble into the bucket. Autrey is still speaking: "- for that is the supreme test of the shepherd, to throw in the pebble, be it ever so agonizing, be the old level ever so precious. Indeed, only the best of shepherds can meet a requirement so stern -"
        "Autrey," I say, "if you want to be a great shepherd someday, learn to shut up and throw in the pebble. No fuss. No drama. Just do it."

Long ago - I think I must have been pretty young - I decided to move my limbs with maximum "efficiency", to save effort.  I might even have been thinking of Vulcans.

\n\n

So I tried what my youthful mind wordlessly conceived of as "efficiency":  I tried to move my limbs in perfectly straight lines as quickly as possible, with corresponding sudden stops and sudden starts.

\n\n

"Efficiency" didn't feel very efficient.  The sudden starts and sudden stops took effort.  Moving my hand in a straight line forced my elbow and shoulder to move in strange curves.

\n\n

You can buy books that teach this same life lesson, but they use a lot more pages.

\n\n

Now this is scarcely Taoism, at least so far as philosophical premises are concerned.  According to authentic Taoism, you can exert no effort at all while accomplishing all worthwhile things.  This seems to me around as plausible as an agent that achieves its utility function using zero computing power and is therefore maximally intelligent.  The only way you could do it is if the agent assigns constant utility to all outcomes, or if the utility function's maximum is set by sleight of hand to wherever the universe goes anyway.  This may be why I am not a Taoist:  "A maximally intelligent agent with zero computing power and no utility function" sounds like a good metaphor for the Tao.  I object to a metric of intelligence that makes me dumber than a rock.

\n\n

According to Taoism, everyone ought to act in accordance with their natures.  One can scarcely see how it could be otherwise.  I think this religion only appears nontrivial because of selective failure to consider all its consequences.

\n\n

In any case, my own nature is to make certain efforts even if they seem unpleasant.  Therefore I have no objection to making an effort now and then.

\n\n

But one should not think that force, effort, and control are virtuous unto a rationalist - that would book them on the wrong side of the ledger.

\n\n

Addendum:  Before anyone else points it out:  Yes, I know that my critique of Taoism is appallingly simplistic and that the Taoists are well aware of it.  That doesn't make it wrong.

" } }, { "_id": "NbbK6YKTpQR7u7D6u", "title": "False Laughter", "pageUrl": "https://www.lesswrong.com/posts/NbbK6YKTpQR7u7D6u/false-laughter", "postedAt": "2007-12-22T06:03:45.000Z", "baseScore": 44, "voteCount": 32, "commentCount": 65, "url": null, "contents": { "documentId": "NbbK6YKTpQR7u7D6u", "html": "

Followup to: Politics and Awful Art

\n

There's this thing called \"derisive laughter\" or \"mean-spirited laughter\", which follows from seeing the Hated Enemy get a kick in the pants.  It doesn't have to be an unexpected kick in the pants, or a kick followed up with a custard pie.  It suffices that the Hated Enemy gets hurt.  It's like humor, only without the humor.

\n

If you know what your audience hates, it doesn't take much effort to get a laugh like that—which marks this as a subspecies of awful political art.

\n

There are deliciously biting satires, yes; not all political art is bad art.  But satire is a much more demanding art than just punching the Enemy in the nose.  In fact, never mind satire—just an atom of ordinary genuine humor takes effort.

\n

Imagine this political cartoon:  A building labeled \"science\", and a standard Godzilla-ish monster labeled \"Bush\" stomping on the \"science\" building.  Now there are people who will laugh at this—hur hur, scored a point off Bush, hur hur—but this political cartoon didn't take much effort to imagine.  In fact, it was the very first example that popped into my mind when I thought \"political cartoon about Bush and science\".  This degree of obviousness is a bad sign.

\n

If I want to make a funny political cartoon, I have to put in some effort.  Go beyond the cached thought.  Use my creativity.  Depict Bush as a tentacle monster and Science as a Japanese schoolgirl.

\n

\n

There are many art forms that suffer from obviousness.  But humor more than most, because humor relies on surprise—the ridiculous, the unexpected, the absurd.

\n

(Satire achieves surprise by saying, out loud, the thoughts you didn't dare think.  Fake satires repeat thoughts you were already thinking.)

\n

You might say that a predictable punchline carries too little surprisal (is too low-entropy) to be funny, by that same logic which says you should be enormously less surprised to find your thermostat reading 30 degrees than 29 degrees.
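
To put a number on surprise: the surprisal of an outcome is minus the log of the probability you assigned it, so a punchline you saw coming with 90% probability carries only a fraction of a bit of information, while one you gave 1% odds carries several bits.  A minimal sketch, with the probabilities invented purely for illustration:

```python
import math

def surprisal_bits(p: float) -> float:
    """Shannon surprisal of an outcome that was assigned probability p, in bits."""
    return -math.log2(p)

# Hypothetical audience-assigned probabilities for two punchlines.
print(surprisal_bits(0.90))  # predictable punchline: ~0.15 bits of surprise
print(surprisal_bits(0.01))  # unexpected punchline:  ~6.64 bits of surprise
```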

\n

The general test against awful political art is to ask whether the art would seem worthwhile if it were not political.  If someone writes a song about space travel, and the song is good enough that I would enjoy listening to it even if it were about butterflies, then and only then does it qualify to pick up bonus points for praising a Worthy Cause.

\n

So one test for derisive laughter is to ask if the joke would still be funny, if it weren't the Hated Enemy getting the kick in the pants.  Bill Gates once got hit by an unexpected pie in the face.  Would it still have been funny (albeit less funny) if Linus Torvalds had gotten hit by the pie?

\n

Of course I'm not suggesting that you sit around all day asking which jokes are \"really\" funny, or which jokes you're \"allowed\" to laugh at.  As the saying goes, analyzing a joke is like dissecting a frog—it kills the frog and it's not much fun for you, either.

\n

So why this blog post, then?  Don't you and I already know which jokes are funny?

\n

One application:  If you find yourself in a group of people who tell consistently unfunny jokes about the Hated Enemy, it may be a good idea to head for the hills, before you start to laugh as well...

\n

Another application:  You and I should be allowed not to laugh at certain jokes—even jokes that target our own favorite causes—on the grounds that the joke is too predictable to be funny.  We should be able to do this without being accused of being humorless, \"unable to take a joke\", or protecting sacred cows.  If labeled-Godzilla-stomps-a-labeled-building isn't funny about \"Bush\" and \"Science\", then it also isn't funny about \"libertarian economists\" and \"American national competitiveness\", etc.

\n

The most scathing accusation I ever heard against Objectivism is that hardcore Objectivists have no sense of humor; but no one could prove this by showing an Objectivist a cartoon of Godzilla-\"Rand\" stomping on building-\"humor\" and demanding that he laugh.

\n

Requiring someone to laugh in order to prove their non-cultishness—well, like most kinds of obligatory laughter, it doesn't quite work.  Laughter, of all things, has to come naturally.  The most you can do is get fear and insecurity out of its way.

\n

If an Objectivist, innocently browsing the Internet, came across a depiction of Ayn Rand as a Japanese schoolgirl lecturing a tentacle monster, and still didn't laugh, then that would be a problem.  But they couldn't fix this problem by deliberately trying to laugh.

\n

Obstacles to humor are a sign of dreadful things.  But making humor obligatory, or constantly wondering whether you're laughing enough, just throws up another obstacle.  In that way it's rather Zen.  There are things you can accomplish by deliberately composing a joke, but very few things you can accomplish by deliberately believing a joke is funny.

\n

 

\n

Part of the Politics Is the Mind-Killer subsequence of How To Actually Change Your Mind

\n

Next post: \"Human Evil and Muddled Thinking\"

\n

Previous post: \"Politics and Awful Art\"

" } }, { "_id": "Qr4MB9hFRzamuMRHJ", "title": "Two Cult Koans", "pageUrl": "https://www.lesswrong.com/posts/Qr4MB9hFRzamuMRHJ/two-cult-koans", "postedAt": "2007-12-21T05:45:31.000Z", "baseScore": 141, "voteCount": 121, "commentCount": 100, "url": null, "contents": { "documentId": "Qr4MB9hFRzamuMRHJ", "html": "\n\n\n\n \n\n \n\n

A novice rationalist studying under the master Ougi was rebuked by a friend who said, “You spend all this time listening to your master, and talking of ‘rational’ this and ‘rational’ that—you have fallen into a cult!”

\n\n

The novice was deeply disturbed; he heard the words You have fallen into a cult! resounding in his ears as he lay in bed that night, and even in his dreams.

\n\n

The next day, the novice approached Ougi and related the events, and said, “Master, I am constantly consumed by worry that this is all really a cult, and that your teachings are only dogma.”

\n\n

Ougi replied, “If you find a hammer lying in the road and sell it, you may ask a low price or a high one. But if you keep the hammer and use it to drive nails, who can doubt its worth?”

\n\n

The novice said, “See, now that’s just the sort of thing I worry about—your mysterious Zen replies.”

\n\n

Ougi said, “Fine, then, I will speak more plainly, and lay out perfectly reasonable arguments which demonstrate that you have not fallen into a cult. But first you have to wear this silly hat.”

\n\n

Ougi gave the novice a huge brown ten-gallon cowboy hat.

\n\n

“Er, master . . .” said the novice.

\n\n

“When I have explained everything to you,” said Ougi, “you will see why this was necessary. Or otherwise, you can continue to lie awake nights, wondering whether this is a cult.”

\n\n

The novice put on the cowboy hat.

\n\n

Ougi said, “How long will you repeat my words and ignore the meaning? Disordered thoughts begin as feelings of attachment to preferred conclusions. You are too anxious about your self-image as a rationalist. You came to me to seek reassurance. If you had been truly curious, not knowing one way or the other, you would have thought of ways to resolve your doubts. Because you needed to resolve your cognitive dissonance, you were willing to put on a silly hat. If I had been an evil man, I could have made you pay a hundred silver coins. When you concentrate on a real-world question, the worth or worthlessness of your understanding will soon become apparent. You are like a swordsman who keeps glancing away to see if anyone might be laughing at him—”

\n\n

“All right,” said the novice.

\n\n

“You asked for the long version,” said Ougi.

\n\n

This novice later succeeded Ougi and became known as Ni no Tachi. Ever after, he would not allow his students to cite his words in their debates, saying, “Use the techniques and do not mention them.”

\n\n

A novice rationalist approached the master Ougi and said, “Master, I worry that our rationality dojo is . . . well . . . a little cultish.”

\n\n

“That is a grave concern,” said Ougi.

\n\n

The novice waited a time, but Ougi said nothing more.

\n\n

So the novice spoke up again: “I mean, I’m sorry, but having to wear these robes, and the hood—it just seems like we’re the bloody Freemasons or something.”

\n\n

“Ah,” said Ougi, “the robes and trappings.”

\n\n

“Well, yes the robes and trappings,” said the novice. “It just seems terribly irrational.”

\n\n

“I will address all your concerns,” said the master, “but first you must put on this silly hat.” And Ougi drew out a wizard’s hat, embroidered with crescents and stars.

\n\n

The novice took the hat, looked at it, and then burst out in frustration: “How can this possibly help?”

\n\n

“Since you are so concerned about the interactions of clothing with probability theory,” Ougi said, “it should not surprise you that you must wear a special hat to understand.”

\n\n

When the novice attained the rank of grad student, he took the name Bouzo and would only discuss rationality while wearing a clown suit.

\n\n" } }, { "_id": "n5xT2RJy2fWxCA3eH", "title": "Politics and Awful Art", "pageUrl": "https://www.lesswrong.com/posts/n5xT2RJy2fWxCA3eH/politics-and-awful-art", "postedAt": "2007-12-20T03:46:21.000Z", "baseScore": 38, "voteCount": 35, "commentCount": 49, "url": null, "contents": { "documentId": "n5xT2RJy2fWxCA3eH", "html": "

Followup to: Rationality and the English Language

\n

One of my less treasured memories is of a State of the Union address, or possibly a presidential inauguration, at which a Nobel Laureate got up and read, in a terribly solemn voice, some politically correct screed about what a wonderfully inclusive nation we all were—\"The African-Americans, the Ethiopians, the Etruscans\", or something like that.  The \"poem\", if you can call it that, was absolutely awful.  As far as my ears could tell, it had no redeeming artistic merit whatsoever.

\n

Every now and then, yet another atheist is struck by the amazing idea that atheists should have hymns, just like religious people have hymns, and they take some existing religious song and turn out an atheistic version.  And then this \"atheistic hymn\" is, almost without exception, absolutely awful.  But the author can't see how dreadful the verse is as verse.  They're too busy congratulating themselves on having said \"Religion sure sucks, amen.\"  Landing a punch on the Hated Enemy feels so good that they overlook the hymn's lack of any other merit.  Verse of the same quality about something unpolitical, like mountain streams, would be seen as something a kindergartener's mother would post on her refrigerator. 

\n

\n

In yesterday's Litany Against Gurus, there are only two lines that might be classifiable as \"poetry\", not just \"verse\".  When I was composing the litany's end, the lines that first popped into my head were:

\n
\n

I was not your destination
Only a step on your path

\n
\n

Which didn't sound right at all.  Substitute \"pathway\" for \"path\", so the syllable counts would match?  But that sounded even worse.  The prosody—the pattern of stressed syllables—was all wrong.

\n

The real problem was the word des-ti-NA-tion—a huge awkward lump four syllables long.  So get rid of it!  \"I was not your goal\" was the first alternative that came to mind.  Nicely short.  But now that I was thinking about it, \"goal\" sounded very airy and abstract.  Then the word \"city\" came into my mind—and it echoed.

\n

\"I was never your city\" came to me, not by thinking about rationality, but by thinking about prosody.  The constraints of art force us to toss out the first, old, tired phrasing that comes to mind; and in searching for a less obvious phrasing, often lead us to less obvious thoughts.

\n

If I'd said, \"Well, this is such a wonderful thought about rationality, that I don't have to worry about the prosodic problem\", then I would not have received the benefit of being constrained.

\n

The other poetic line began as \"Laugh once, and never look back,\" which had problems as rationality, not just as prosody.  \"Laugh once\" is the wrong kind of laughter; too derisive.  \"Never look back\" is even less correct, because the memory of past mistakes can be useful years later.  So... \"Look back, smile, and then,\" um, \"look forward\"?  Now if I'd been enthralled by the wonders of rationality, I would have said, \"Ooh, 'look forward'!  What a progressive sentiment!\" and forgiven the extra syllable.

\n

\"Eyes front!\"  It was two syllables.  It had the crisp click of a drill sergeant telling you to stop woolgathering, snap out of that daze, and get to work!  Nothing like the soft cliche of \"look forward, look upward, look to the future in a vaguely admiring sort of way...\"

\n

Eyes front!  It's a better thought as rationality, which I would never have found, if I'd been so impressed with daring to write about rationality, that I had forgiven myself the prosodic transgression of an extra syllable.

\n

If you allow affirmation of My-Favorite-Idea to compensate for lack of rhythm in a song, lack of beauty in a painting, lack of poignancy in fiction, then your art will, inevitably, suck.  When you do art about My-Favorite-Idea, you have to hold yourself to the same standard as if you were doing art about a butterfly.

\n

There is powerful politicized art, just as there are great religious paintings.  But merit in politicized art is more the exception than the rule.  Most of it ends up as New Soviet Man Heroically Crushing Capitalist Snakes.  It's an easy living.  If anyone criticizes your art on grounds of general suckiness, they'll be executed for siding with the capitalist snakes.

\n

Tolerance of awful art, just because it lands a delicious punch on the Enemy, or just because it affirms the Great Truth, is a dangerous sign:  It indicates an affective death spiral entering the supercritical phase where you can no longer criticize any argument whose conclusion is the \"right\" one.

\n

And then the next thing you know, you're composing dreadful hymns, or inserting giant philosophical lectures into the climax of your fictional novel...

\n

 

\n

Part of the Politics Is the Mind-Killer subsequence of How To Actually Change Your Mind

\n

Next post: \"False Laughter\"

\n

Previous post: \"The Litany Against Gurus\"

" } }, { "_id": "t6Fe2PsEwb3HhcBEr", "title": "The Litany Against Gurus", "pageUrl": "https://www.lesswrong.com/posts/t6Fe2PsEwb3HhcBEr/the-litany-against-gurus", "postedAt": "2007-12-18T20:11:02.000Z", "baseScore": 45, "voteCount": 41, "commentCount": 36, "url": null, "contents": { "documentId": "t6Fe2PsEwb3HhcBEr", "html": "

I am your hero!
I am your master!
Learn my arts,
Seek my way.

Learn as I learned,
Seek as I sought.

Envy me!
Aim at me!
Rival me!
Transcend me!

Look back,
Smile,
And then
Eyes front!

I was never your city,
Just a stretch of your road.

\n

 

\n

Part of the Politics Is the Mind-Killer subsequence of How To Actually Change Your Mind

\n

Next post: \"Politics and Awful Art\"

\n

Previous post: \"Rationality and the English Language\"

" } }, { "_id": "96TBXaHwLbFyeAxrg", "title": "Guardians of Ayn Rand", "pageUrl": "https://www.lesswrong.com/posts/96TBXaHwLbFyeAxrg/guardians-of-ayn-rand", "postedAt": "2007-12-18T06:24:05.000Z", "baseScore": 123, "voteCount": 100, "commentCount": 122, "url": null, "contents": { "documentId": "96TBXaHwLbFyeAxrg", "html": "
\n

\"For skeptics, the idea that reason can lead to a cult is absurd.  The characteristics of a cult are 180 degrees out of phase with reason.  But as I will demonstrate, not only can it happen, it has happened, and to a group that would have to be considered the unlikeliest cult in history.  It is a lesson in what happens when the truth becomes more important than the search for truth...\"
                 —Michael Shermer, \"The Unlikeliest Cult in History\"

\n
\n

I think Michael Shermer is over-explaining Objectivism.  I'll get around to amplifying on that.

\n

Ayn Rand's novels glorify technology, capitalism, individual defiance of the System, limited government, private property, selfishness. Her ultimate fictional hero, John Galt, was (spoiler alert) a scientist who invented a new form of cheap renewable energy, but then refused to give it to the world, since the profits would only be stolen to prop up corrupt governments.

\n

And then—somehow—it all turned into a moral and philosophical \"closed system\" with Ayn Rand at the center.  The term \"closed system\" is not my own accusation; it's the term the Ayn Rand Institute uses to describe Objectivism.  Objectivism is defined by the works of Ayn Rand.  Now that Rand is dead, Objectivism is closed.  If you disagree with Rand's works in any respect, you cannot be an Objectivist.

\n

\n

Max Gluckman once said:  \"A science is any discipline in which the fool of this generation can go beyond the point reached by the genius of the last generation.\"  Science moves forward by slaying its heroes, as Newton fell to Einstein.  Every young physicist dreams of being the new champion that future physicists will dream of dethroning.

\n

Ayn Rand's philosophical idol was Aristotle.  Now maybe Aristotle was a hot young math talent 2350 years ago, but math has made noticeable progress since his day.  Bayesian probability theory is the quantitative logic of which Aristotle's qualitative logic is a special case; but there's no sign that Ayn Rand knew about Bayesian probability theory when she wrote her magnum opus, Atlas Shrugged.  Rand wrote about \"rationality\", yet failed to familiarize herself with the modern research in heuristics and biases.  How can anyone claim to be a master rationalist, yet know nothing of such elementary subjects?

\n

\"Wait a minute,\" objects the reader, \"that's not quite fair!  Atlas Shrugged was published in 1957!  Practically nobody knew about Bayes back then.\"  Bah.  Next you'll tell me that Ayn Rand died in 1982, and had no chance to read Judgment Under Uncertainty: Heuristics and Biases, which was published that same year.

\n

Science isn't fair.  That's sorta the point.  An aspiring rationalist in 2007 starts with a huge advantage over an aspiring rationalist in 1957.  It's how we know that progress has occurred.

\n

To me the thought of voluntarily embracing a system explicitly tied to the beliefs of one human being, who's dead, falls somewhere between the silly and the suicidal.  A computer isn't five years old before it's obsolete.

\n

The vibrance that Rand admired in science, in commerce, in every railroad that replaced a horse-and-buggy route, in every skyscraper built with new architecture—it all comes from the principle of surpassing the ancient masters. How can there be science, if the most knowledgeable scientist there will ever be, has already lived?  Who would raise the New York skyline that Rand admired so, if the tallest building that would ever exist, had already been built?

\n

And yet Ayn Rand acknowledged no superior, in the past, or in the future yet to come.  Rand, who began in admiring reason and individuality, ended by ostracizing anyone who dared contradict her.  Shermer: \"[Barbara] Branden recalled an evening when a friend of Rand's remarked that he enjoyed the music of Richard Strauss.  'When he left at the end of the evening, Ayn said, in a reaction becoming increasingly typical, 'Now I understand why he and I can never be real soulmates.  The distance in our sense of life is too great.'  Often she did not wait until a friend had left to make such remarks.\"

\n

Ayn Rand changed over time, one suspects.

\n

Rand grew up in Russia, and witnessed the Bolshevik revolution firsthand.  She was granted a visa to visit American relatives at the age of 21, and she never returned.  It's easy to hate authoritarianism when you're the victim.  It's easy to champion the freedom of the individual, when you are yourself the oppressed.

\n

It takes a much stronger constitution to fear authority when you have the power.  When people are looking to you for answers, it's harder to say \"What the hell do I know about music? I'm a writer, not a composer,\" or \"It's hard to see how liking a piece of music can be untrue.\"

\n

When you're the one crushing those who dare offend you, the exercise of power somehow seems much more justifiable than when you're the one being crushed.  All sorts of excellent justifications somehow leap to mind.

\n

Michael Shermer goes into detail on how he thinks that Rand's philosophy ended up descending into cultishness.  In particular, Shermer says (it seems) that Objectivism failed because Rand thought that certainty was possible, while science is never certain.  I can't back Shermer on that one.  The atomic theory of chemistry is pretty damned certain.  But chemists haven't become a cult.

\n

Actually, I think Shermer's falling prey to correspondence bias by supposing that there's any particular correlation between Rand's philosophy and the way her followers formed a cult.  Every cause wants to be a cult.

\n

Ayn Rand fled the Soviet Union, wrote a book about individualism that a lot of people liked, got plenty of compliments, and formed a coterie of admirers. Her admirers found nicer and nicer things to say about her (happy death spiral), and she enjoyed it too much to tell them to shut up.  She found herself with the power to crush those of whom she disapproved, and she didn't resist the temptation of power.

\n

Ayn Rand and Nathaniel Branden carried on a secret extramarital affair.  (With permission from both their spouses, which counts for a lot in my view.  If you want to turn that into a \"problem\", you have to specify that the spouses were unhappy—and then it's still not a matter for outsiders.)  When Branden was revealed to have \"cheated\" on Rand with yet another woman, Rand flew into a fury and excommunicated him.  Many Objectivists broke away when news of the affair became public.

\n

Who stayed with Rand, rather than following Branden, or leaving Objectivism altogether?  Her strongest supporters.  Who departed?  The previous voices of moderation.  (Evaporative cooling of group beliefs.)  Ever after, Rand's grip over her remaining coterie was absolute, and no questioning was allowed.

\n

The only extraordinary thing about the whole business, is how ordinary it was.

\n

You might think that a belief system which praised \"reason\" and \"rationality\" and \"individualism\" would have gained some kind of special immunity, somehow...?

\n

Well, it didn't.

\n

It worked around as well as putting a sign saying \"Cold\" on a refrigerator that wasn't plugged in.

\n

The active effort required to resist the slide into entropy wasn't there, and decay inevitably followed.

\n

And if you call that the \"unlikeliest cult in history\", you're just calling reality nasty names.

\n

Let that be a lesson to all of us:  Praising \"rationality\" counts for nothing.  Even saying \"You must justify your beliefs through Reason, not by agreeing with the Great Leader\" just runs a little automatic program that takes whatever the Great Leader says and generates a justification that your fellow followers will view as Reason-able.

\n

So where is the true art of rationality to be found?  Studying up on the math of probability theory and decision theory.  Absorbing the cognitive sciences like evolutionary psychology, or heuristics and biases.  Reading history books...

\n

\"Study science, not just me!\" is probably the most important piece of advice Ayn Rand should've given her followers and didn't.  There's no one human being who ever lived, whose shoulders were broad enough to bear all the weight of a true science with many contributors.

\n

It's noteworthy, I think, that Ayn Rand's fictional heroes were architects and engineers; John Galt, her ultimate, was a physicist; and yet Ayn Rand herself wasn't a great scientist.  As far as I know, she wasn't particularly good at math.  She could not aspire to rival her own heroes.  Maybe that's why she began to lose track of Tsuyoku Naritai.

\n

Now me, y'know, I admire Francis Bacon's audacity, but I retain my ability to bashfully confess, \"If I could go back in time, and somehow make Francis Bacon understand the problem I'm currently working on, his eyeballs would pop out of their sockets like champagne corks and explode.\"

\n

I admire Newton's accomplishments.  But my attitude toward a woman's right to vote, bars me from accepting Newton as a moral paragon. Just as my knowledge of Bayesian probability bars me from viewing Newton as the ultimate unbeatable source of mathematical knowledge. And my knowledge of Special Relativity, paltry and little-used though it may be, bars me from viewing Newton as the ultimate authority on physics.

\n

Newton couldn't realistically have discovered any of the ideas I'm lording over him—but progress isn't fair!  That's the point!

\n

Science has heroes, but no gods.  The great Names are not our superiors, or even our rivals, they are passed milestones on our road; and the most important milestone is the hero yet to come.

\n

To be one more milestone in humanity's road is the best that can be said of anyone; but this seemed too lowly to please Ayn Rand.  And that is how she became a mere Ultimate Prophet.

" } }, { "_id": "aFtWRL3QihoF5uQd5", "title": "Guardians of the Gene Pool", "pageUrl": "https://www.lesswrong.com/posts/aFtWRL3QihoF5uQd5/guardians-of-the-gene-pool", "postedAt": "2007-12-16T20:08:39.000Z", "baseScore": 41, "voteCount": 32, "commentCount": 73, "url": null, "contents": { "documentId": "aFtWRL3QihoF5uQd5", "html": "

Like any educated denizen of the 21st century, you may have heard of World War II.  You may remember that Hitler and the Nazis planned to carry forward a romanticized process of evolution, to breed a new master race, supermen, stronger and smarter than anything that had existed before.

\n

Actually this is a common misconception.  Hitler believed that the Aryan superman had previously existed—the Nordic stereotype, the blond blue-eyed beast of prey—but had been polluted by mingling with impure races.  There had been a racial Fall from Grace.

\n

It says something about the degree to which the concept of progress permeates Western civilization, that the one is told about Nazi eugenics and hears \"They tried to breed a superhuman.\"  You, dear reader—if you failed hard enough to endorse coercive eugenics, you would try to create a superhuman.  Because you locate your ideals in your future, not in your past.  Because you are creative.  The thought of breeding back to some Nordic archetype from a thousand years earlier would not even occur to you as a possibility—what, just the Vikings?  That's all?  If you failed hard enough to kill, you would damn well try to reach heights never before reached, or what a waste it would all be, eh?  Well, that's one reason you're not a Nazi, dear reader.

\n

It says something about how difficult it is for the relatively healthy to envision themselves in the shoes of the relatively sick, that we are told of the Nazis, and distort the tale to make them defective transhumanists.

\n

It's the Communists who were the defective transhumanists.  \"New Soviet Man\" and all that.  The Nazis were quite definitely the bioconservatives of the tale.

" } }, { "_id": "etBrzxdfNop3DqJvA", "title": "Guardians of the Truth", "pageUrl": "https://www.lesswrong.com/posts/etBrzxdfNop3DqJvA/guardians-of-the-truth", "postedAt": "2007-12-15T18:44:28.000Z", "baseScore": 57, "voteCount": 53, "commentCount": 55, "url": null, "contents": { "documentId": "etBrzxdfNop3DqJvA", "html": "

The criticism is sometimes leveled against rationalists:  \"The Inquisition thought they had the truth!  Clearly this 'truth' business is dangerous.\"

\n

There are many obvious responses, such as \"If you think that possessing the truth would license you to torture and kill, you're making a mistake that has nothing to do with epistemology.\"  Or, \"So that historical statement you just made about the Inquisition—is it true?\"

\n

Reversed stupidity is not intelligence:  \"If your current computer stops working, you can't conclude that everything about the current system is wrong and that you need a new system without an AMD processor, an ATI video card... even though your current system has all these things and it doesn't work.  Maybe you just need a new power cord.\"  To arrive at a poor conclusion requires only one wrong step, not every step wrong.  The Inquisitors believed that 2 + 2 = 4, but that wasn't the source of their madness.  Maybe epistemological realism wasn't the problem either?

\n

It does seem plausible that if the Inquisition had been made up of relativists, professing that nothing was true and nothing mattered, they would have mustered less enthusiasm for their torture.  They would also have been less enthusiastic if lobotomized.  I think that's a fair analogy.

\n

And yet... I think the Inquisition's attitude toward truth played a role.  The Inquisition believed that there was such a thing as truth, and that it was important; well, likewise Richard Feynman.  But the Inquisitors were not Truth-Seekers.  They were Truth-Guardians.

\n

\n

I once read an argument (can't find source) that a key component of a zeitgeist is whether it locates its ideals in its future or its past.  Nearly all cultures before the Enlightenment believed in a Fall from Grace—that things had once been perfect in the distant past, but then catastrophe had struck, and everything had slowly run downhill since then:

\n
\n

\"In the age when life on Earth was full...  They loved each other and did not know that this was 'love of neighbor'. They deceived no one yet they did not know that they were 'men to be trusted'. They were reliable and did not know that this was 'good faith'. They lived freely together giving and taking, and did not know that they were generous. For this reason their deeds have not been narrated. They made no history.\"
        —The Way of Chuang Tzu, trans. Thomas Merton

\n
\n

The perfect age of the past, according to our best anthropological evidence, never existed.  But a culture that sees life running inexorably downward is very different from a culture in which you can reach unprecedented heights. 

\n

(I say \"culture\", and not \"society\", because you can have more than one subculture in a society.)

\n

You could say that the difference between e.g. Richard Feynman and the Inquisition was that the Inquisition believed they had truth, while Richard Feynman sought truth.  This isn't quite defensible, though, because there were undoubtedly some truths that Richard Feynman thought he had as well.  \"The sky is blue,\" for example, or \"2 + 2 = 4\".

\n

Yes, there are effectively certain truths of science.  General Relativity may be overturned by some future physics—albeit not in any way that predicts the Sun will orbit Jupiter; the new theory must steal the successful predictions of the old theory, not contradict them.  But evolutionary theory takes place on a higher level of organization than atoms, and nothing we discover about quarks is going to throw out Darwinism, or the cell theory of biology, or the atomic theory of chemistry, or a hundred other brilliant innovations whose truth is now established beyond reasonable doubt.

\n

Are these \"absolute truths\"?  Not in the sense of possessing a probability of literally 1.0.  But they are cases where science basically thinks it's got the truth.

\n

And yet scientists don't torture people who question the atomic theory of chemistry.  Why not?  Because they don't believe that certainty licenses torture?  Well, yes, that's the surface difference; but why don't scientists believe this?

\n

Because chemistry asserts no supernatural penalty of eternal torture for disbelieving in the atomic theory of chemistry?  But again we recurse and ask the question, \"Why?\"  Why don't chemists believe that you go to hell if you disbelieve in the atomic theory?

\n

Because journals won't publish your paper until you get a solid experimental observation of Hell?  But all too many scientists can suppress their skeptical reflex at will.  Why don't chemists have a private cult which argues that nonchemists go to hell, given that many are Christians anyway?

\n

Questions like that don't have neat single-factor answers.  But I would argue that one of the factors has to do with assuming a defensive posture toward the truth, versus a productive posture toward the truth.

\n

When you are the Guardian of the Truth, you've got nothing useful to contribute to the Truth but your guardianship of it.  When you're trying to win the Nobel Prize in chemistry by discovering the next benzene or buckyball, someone who challenges the atomic theory isn't so much a threat to your worldview as a waste of your time.

\n

When you are a Guardian of the Truth, all you can do is try to stave off the inevitable slide into entropy by zapping anything that departs from the Truth.  If there's some way to pump against entropy, generate new true beliefs along with a little waste heat, that same pump can keep the truth alive without secret police.  In chemistry you can replicate experiments and see for yourself—and that keeps the precious truth alive without need of violence.

\n

And it's not such a terrible threat if we make one mistake somewhere—end up believing a little untruth for a little while—because tomorrow we can recover the lost ground.

\n

But this whole trick only works because the experimental method is a \"criterion of goodness\" which is not a mere \"criterion of comparison\".  Because experiments can recover the truth without need of authority, they can also override authority and create new true beliefs where none existed before.

\n

Where there are criteria of goodness that are not criteria of comparison, there can exist changes which are improvements, rather than threats.  Where there are only criteria of comparison, where there's no way to move past authority, there's also no way to resolve a disagreement between authorities.  Except extermination.  The bigger guns win.

\n

I don't mean to provide a grand overarching single-factor view of history.  I do mean to point out a deep psychological difference between seeing your grand cause in life as protecting, guarding, preserving, versus discovering, creating, improving.  Does the \"up\" direction of time point to the past or the future?  It's a distinction that shades everything, casts tendrils everywhere.

\n

This is why I've always insisted, for example, that if you're going to start talking about \"AI ethics\", you had better be talking about how you are going to improve on the current situation using AI, rather than just keeping various things from going wrong.  Once you adopt criteria of mere comparison, you start losing track of your ideals—lose sight of wrong and right, and start seeing simply \"different\" and \"same\".

\n

I would also argue that this basic psychological difference is one of the reasons why an academic field that stops making active progress tends to turn mean.  (At least by the refined standards of science.  Reputational assassination is tame by historical standards; most defensive-posture belief systems went for the real thing.)  If major shakeups don't arrive often enough to regularly promote young scientists based on merit rather than conformity, the field stops resisting the standard degeneration into authority.  When there's not many discoveries being made, there's nothing left to do all day but witch-hunt the heretics.

\n

To get the best mental health benefits of the discover/create/improve posture, you've got to actually be making progress, not just hoping for it.

" } }, { "_id": "2jp98zdLo898qExrr", "title": "Hug the Query", "pageUrl": "https://www.lesswrong.com/posts/2jp98zdLo898qExrr/hug-the-query", "postedAt": "2007-12-14T19:51:37.000Z", "baseScore": 174, "voteCount": 135, "commentCount": 22, "url": null, "contents": { "documentId": "2jp98zdLo898qExrr", "html": "\n\n\n\n \n\n \n\n

In the art of rationality there is a discipline of closeness-to-the-issue—trying to observe evidence that is as near to the original question as possible, so that it screens off as many other arguments as possible.

\n\n

The Wright Brothers say, “My plane will fly.” If you look at their authority (bicycle mechanics who happen to be excellent amateur physicists) then you will compare their authority to, say, Lord Kelvin, and you will find that Lord Kelvin is the greater authority.

\n\n

If you demand to see the Wright Brothers’ calculations, and you can follow them, and you demand to see Lord Kelvin’s calculations (he probably doesn’t have any apart from his own incredulity), then authority becomes much less relevant.

\n\n

If you actually watch the plane fly, the calculations themselves become moot for many purposes, and Kelvin’s authority not even worth considering.

\n\n

The more directly your arguments bear on a question, without intermediate inferences—the closer the observed nodes are to the queried node, in the Great Web of Causality—the more powerful the evidence. It’s a theorem of these causal graphs that you can never get more information from distant nodes, than from strictly closer nodes that screen off the distant ones.

\n\n

Jerry Cleaver said: “What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball.”1

\n\n

Just as it is superior to argue physics than credentials, it is also superior to argue physics than rationality. Who was more rational, the Wright Brothers or Lord Kelvin? If we can check their calculations, we don’t have to care! The virtue of a rationalist cannot directly cause a plane to fly.

\n\n

If you forget this principle, learning about more biases will hurt you, because it will distract you from more direct arguments. It’s all too easy to argue that someone is exhibiting Bias #182 in your repertoire of fully generic accusations, but you can’t settle a factual issue without closer evidence. If there are biased reasons to say the Sun is shining, that doesn’t make it dark out.

\n\n

Just as you can’t always experiment today, you can’t always check the calculations today.2 Sometimes you don’t know enough background material, sometimes there’s private information, sometimes there just isn’t time. There’s a sadly large number of times when it’s worthwhile to judge the speaker’s rationality. You should always do it with a hollow feeling in your heart, though, a sense that something’s missing.

\n\n

Whenever you can, dance as near to the original question as possible—press yourself up against it—get close enough to hug the query!

\n\n
\n \n\n

1Jerry Cleaver, Immediate Fiction: A Complete Writing Course (Macmillan, 2004).

\n\n

2See also “Is Molecular Nanotechnology ’Scientific’?” http://lesswrong.com/lw/io/is_molecular_nanotechnology_scientific.

\n
\n\n" } }, { "_id": "5yFRd3cjLpm3Nd6Di", "title": "Argument Screens Off Authority", "pageUrl": "https://www.lesswrong.com/posts/5yFRd3cjLpm3Nd6Di/argument-screens-off-authority", "postedAt": "2007-12-14T00:05:35.000Z", "baseScore": 138, "voteCount": 110, "commentCount": 86, "url": null, "contents": { "documentId": "5yFRd3cjLpm3Nd6Di", "html": "

Scenario 1: Barry is a famous geologist. Charles is a fourteen-year-old juvenile delinquent with a long arrest record and occasional psychotic episodes. Barry flatly asserts to Arthur some counterintuitive statement about rocks, and Arthur judges it 90% probable. Then Charles makes an equally counterintuitive flat assertion about rocks, and Arthur judges it 10% probable. Clearly, Arthur is taking the speaker’s authority into account in deciding whether to believe the speaker’s assertions.

Scenario 2: David makes a counterintuitive statement about physics and gives Arthur a detailed explanation of the arguments, including references. Ernie makes an equally counterintuitive statement, but gives an unconvincing argument involving several leaps of faith. Both David and Ernie assert that this is the best explanation they can possibly give (to anyone, not just Arthur). Arthur assigns 90% probability to David’s statement after hearing his explanation, but assigns a 10% probability to Ernie’s statement.

It might seem like these two scenarios are roughly symmetrical: both involve taking into account useful evidence, whether strong versus weak authority, or strong versus weak argument.

But now suppose that Arthur asks Barry and Charles to make full technical cases, with references; and that Barry and Charles present equally good cases, and Arthur looks up the references and they check out. Then Arthur asks David and Ernie for their credentials, and it turns out that David and Ernie have roughly the same credentials—maybe they’re both clowns, maybe they’re both physicists.

Assuming that Arthur is knowledgeable enough to understand all the technical arguments—otherwise they’re just impressive noises—it seems that Arthur should view David as having a great advantage in plausibility over Ernie, while Barry has at best a minor advantage over Charles.

Indeed, if the technical arguments are good enough, Barry’s advantage over Charles may not be worth tracking. A good technical argument is one that eliminates reliance on the personal authority of the speaker.

Similarly, if we really believe Ernie that the argument he gave is the best argument he could give, which includes all of the inferential steps that Ernie executed, and all of the support that Ernie took into account—citing any authorities that Ernie may have listened to himself—then we can pretty much ignore any information about Ernie’s credentials. Ernie can be a physicist or a clown, it shouldn’t matter. (Again, this assumes we have enough technical ability to process the argument. Otherwise, Ernie is simply uttering mystical syllables, and whether we “believe” these syllables depends a great deal on his authority.)

So it seems there’s an asymmetry between argument and authority. If we know authority we are still interested in hearing the arguments; but if we know the arguments fully, we have very little left to learn from authority.

Clearly (says the novice) authority and argument are fundamentally different kinds of evidence, a difference unaccountable in the boringly clean methods of Bayesian probability theory.1 For while the strength of the evidences—90% versus 10%—is just the same in both cases, they do not behave similarly when combined. How will we account for this?

Here’s half a technical demonstration of how to represent this difference in probability theory. (The rest you can take on my personal authority, or look up in the references.)

If P(H|E1) = 90% and P(H|E2) = 9%, what is the probability P(H|E1,E2)? If learning E1 is true leads us to assign 90% probability to H, and learning E2 is true leads us to assign 9% probability to H, then what probability should we assign to H if we learn both E1 and E2? This is simply not something you can calculate in probability theory from the information given. No, the missing information is not the prior probability of H. The events E1 and E2 may not be independent of each other.

Suppose that H is “My sidewalk is slippery,” E1 is “My sprinkler is running,” and E2 is “It’s night.” The sidewalk is slippery starting from one minute after the sprinkler starts, until just after the sprinkler finishes, and the sprinkler runs for ten minutes. So if we know the sprinkler is on, the probability is 90% that the sidewalk is slippery. The sprinkler is on during 10% of the nighttime, so if we know that it’s night, the probability of the sidewalk being slippery is 9%. If we know that it’s night and the sprinkler is on—that is, if we know both facts—the probability of the sidewalk being slippery is 90%.
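
As a sanity check, here is a minimal sketch that enumerates a joint distribution consistent with that description and reads off the three conditional probabilities.  The 50% base rate of Night and the 10% daytime sprinkler rate are arbitrary fill-ins not specified above; they do not affect the three numbers being checked.

```python
from itertools import product

p_night = 0.5                                          # assumed base rate of Night (not given above)
p_sprinkler_given_night = {True: 0.1, False: 0.1}      # sprinkler runs 10% of the night; daytime rate assumed
p_slippery_given_sprinkler = {True: 0.9, False: 0.0}   # slippery for 9 of the sprinkler's 10 minutes

# Enumerate the full joint distribution over (Night, Sprinkler, Slippery).
joint = {}
for night, sprinkler, slippery in product([True, False], repeat=3):
    p = p_night if night else 1 - p_night
    p *= p_sprinkler_given_night[night] if sprinkler else 1 - p_sprinkler_given_night[night]
    p *= p_slippery_given_sprinkler[sprinkler] if slippery else 1 - p_slippery_given_sprinkler[sprinkler]
    joint[(night, sprinkler, slippery)] = p

def prob(event):
    return sum(p for outcome, p in joint.items() if event(*outcome))

def cond(event, given):
    return prob(lambda n, sp, sl: event(n, sp, sl) and given(n, sp, sl)) / prob(given)

print(cond(lambda n, sp, sl: sl, lambda n, sp, sl: sp))         # P(Slippery | Sprinkler)        ~ 0.90
print(cond(lambda n, sp, sl: sl, lambda n, sp, sl: n))          # P(Slippery | Night)            ~ 0.09
print(cond(lambda n, sp, sl: sl, lambda n, sp, sl: n and sp))   # P(Slippery | Night, Sprinkler) ~ 0.90
```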

We can represent this in a graphical model as follows:

Whether or not it’s Night causes the Sprinkler to be on or off, and whether the Sprinkler is on causes the sidewalk to be Slippery or unSlippery.

The direction of the arrows is meaningful. Say we had instead drawn both Night and Slippery pointing into Sprinkler:

This would mean that, if I didn’t know anything about the sprinkler, the probability of Nighttime and Slipperiness would be independent of each other. For example, suppose that I roll Die One and Die Two, and add up the showing numbers to get the Sum:

If you don’t tell me the sum of the two numbers, and you tell me the first die showed 6, this doesn’t tell me anything about the result of the second die, yet. But if you now also tell me the sum is 7, I know the second die showed 1.

Figuring out when various pieces of information are dependent or independent of each other, given various background knowledge, actually turns into a quite technical topic. The books to read are Judea Pearl’s Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference and Causality: Models, Reasoning, and Inference. (If you only have time to read one book, read the first one.)

If you know how to read causal graphs, then you look at the dice-roll graph and immediately see:

P(Die 1,Die 2) = P(Die 1) ✕ P(Die 2)

P(Die 1,Die 2|Sum) ≠ P(Die 1|Sum) ✕ P(Die 2|Sum) .

If you look at the correct sidewalk diagram, you see facts like:

P(Slippery|Night) ≠ P(Slippery)

P(Slippery|Sprinkler) ≠ P(Slippery)

P(Slippery|Night,Sprinkler) = P(Slippery|Sprinkler) .

That is, the probability of the sidewalk being Slippery, given knowledge about the Sprinkler and the Night, is the same probability we would assign if we knew only about the Sprinkler. Knowledge of the Sprinkler has made knowledge of the Night irrelevant to inferences about Slipperiness.

This is known as screening off, and the criterion that lets us read such conditional independences off causal graphs is known as D-separation.

For the case of argument and authority, the causal diagram looks like this:

[Truth] → [Argument] → [Expert Belief]

If something is true, then it therefore tends to have arguments in favor of it, and the experts therefore observe these evidences and change their opinions. (In theory!)

If we see that an expert believes something, we infer back to the existence of evidence-in-the-abstract (even though we don’t know what that evidence is exactly), and from the existence of this abstract evidence, we infer back to the truth of the proposition.

But if we know the value of the Argument node, this D-separates the node “Truth” from the node “Expert Belief” by blocking all paths between them, according to certain technical criteria for “path blocking” that seem pretty obvious in this case. So even without checking the exact probability distribution, we can read off from the graph that:

P(truth|argument,expert) = P(truth|argument) .

This does not represent a contradiction of ordinary probability theory. It’s just a more compact way of expressing certain probabilistic facts. You could read the same equalities and inequalities off an unadorned probability distribution—but it would be harder to see it by eyeballing. Authority and argument don’t need two different kinds of probability, any more than sprinklers are made out of ontologically different stuff than sunlight.
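
For instance, here is a minimal sketch that writes down one such unadorned distribution for the Truth → Argument → Expert Belief chain and reads the screening-off equality straight out of it. Every number in it is a made-up illustrative assumption; only the chain structure matters.

```python
from itertools import product

# A minimal sketch of the Truth -> Argument -> Expert Belief chain.
# Every number below is a made-up illustrative assumption; only the chain
# structure (the expert reacts to the argument, not to the truth directly)
# matters for the screening-off equality.
P_TRUE = 0.5
P_GOOD_ARGUMENT = {True: 0.8, False: 0.2}    # keyed by whether the claim is true
P_EXPERT_BELIEVES = {True: 0.9, False: 0.3}  # keyed by whether the argument is good

joint = {}
for truth, argument, expert in product([True, False], repeat=3):
    p = P_TRUE if truth else 1 - P_TRUE
    p *= P_GOOD_ARGUMENT[truth] if argument else 1 - P_GOOD_ARGUMENT[truth]
    p *= P_EXPERT_BELIEVES[argument] if expert else 1 - P_EXPERT_BELIEVES[argument]
    joint[(truth, argument, expert)] = p

def cond(pred, given):
    num = sum(p for o, p in joint.items() if pred(*o) and given(*o))
    den = sum(p for o, p in joint.items() if given(*o))
    return num / den

print(cond(lambda t, a, e: t, lambda t, a, e: e))        # expert alone is evidence: > 0.5
print(cond(lambda t, a, e: t, lambda t, a, e: a))        # P(truth | argument)
print(cond(lambda t, a, e: t, lambda t, a, e: a and e))  # identical: the expert adds nothing
```

Changing the made-up numbers changes the individual probabilities, but the last two printed values remain equal under any choice of parameters, because the equality follows from the graph, not from the numbers.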

In practice you can never completely eliminate reliance on authority. Good authorities are more likely to know about any counterevidence that exists and should be taken into account; a lesser authority is less likely to know this, which makes their arguments less reliable. This is not a factor you can eliminate merely by hearing the evidence they did take into account.

It’s also very hard to reduce arguments to pure math; and otherwise, judging the strength of an inferential step may rely on intuitions you can’t duplicate without the same thirty years of experience.

There is an ineradicable legitimacy to assigning slightly higher probability to what E. T. Jaynes tells you about Bayesian probability, than you assign to Eliezer Yudkowsky making the exact same statement. Fifty additional years of experience should not count for literally zero influence.

But this slight strength of authority is only ceteris paribus, and can easily be overwhelmed by stronger arguments. I have a minor erratum in one of Jaynes’s books—because algebra trumps authority.


1See “What Is Evidence?” in Map and Territory.

" } }, { "_id": "qNZM3EGoE5ZeMdCRt", "title": "Reversed Stupidity Is Not Intelligence", "pageUrl": "https://www.lesswrong.com/posts/qNZM3EGoE5ZeMdCRt/reversed-stupidity-is-not-intelligence", "postedAt": "2007-12-12T22:14:07.000Z", "baseScore": 192, "voteCount": 146, "commentCount": 122, "url": null, "contents": { "documentId": "qNZM3EGoE5ZeMdCRt", "html": "\n\n\n\n \n\n \n\n

“. . . then our people on that time-line went to work with corrective action. Here.”


He wiped the screen and then began punching combinations. Page after page appeared, bearing accounts of people who had claimed to have seen the mysterious disks, and each report was more fantastic than the last.


“The standard smother-out technique,” Verkan Vall grinned. “I only heard a little talk about the ‘flying saucers,’ and all of that was in joke. In that order of culture, you can always discredit one true story by setting up ten others, palpably false, parallel to it.”


—H. Beam Piper, Police Operation


Piper had a point. Pers’nally, I don’t believe there are any poorly hidden aliens infesting these parts. But my disbelief has nothing to do with the awful embarrassing irrationality of flying saucer cults—at least, I hope not.


You and I believe that flying saucer cults arose in the total absence of any flying saucers. Cults can arise around almost any idea, thanks to human silliness. This silliness operates orthogonally to alien intervention: We would expect to see flying saucer cults whether or not there were flying saucers. Even if there were poorly hidden aliens, it would not be any less likely for flying saucer cults to arise. The conditional probability P(cults|aliens) isn’t less than P(cults|¬aliens), unless you suppose that poorly hidden aliens would deliberately suppress flying saucer cults.1 By the Bayesian definition of evidence, the observation “flying saucer cults exist” is not evidence against the existence of flying saucers. It’s not much evidence one way or the other.
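
Put in likelihood-ratio terms: when an observation is about equally probable under both hypotheses, the posterior barely budges. A minimal sketch, with every number purely illustrative:

```python
# A minimal sketch of the likelihood-ratio point. All numbers are
# illustrative assumptions, not estimates of anything.
def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule for a binary hypothesis after one observation."""
    num = prior * p_obs_given_h
    return num / (num + (1 - prior) * p_obs_given_not_h)

PRIOR_ALIENS = 0.01
# Cults are about equally likely either way, so observing them is ~no evidence:
print(posterior(PRIOR_ALIENS, p_obs_given_h=0.90, p_obs_given_not_h=0.90))  # 0.01, unchanged
# Contrast with an observation that actually discriminates between hypotheses:
print(posterior(PRIOR_ALIENS, p_obs_given_h=0.90, p_obs_given_not_h=0.01))  # ~0.48
```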


This is an application of the general principle that, as Robert Pirsig puts it, “The world’s greatest fool may say the Sun is shining, but that doesn’t make it dark out.”2


If you knew someone who was wrong 99.99% of the time on yes-or-no questions, you could obtain 99.99% accuracy just by reversing their answers. They would need to do all the work of obtaining good evidence entangled with reality, and processing that evidence coherently, just to anticorrelate that reliably. They would have to be superintelligent to be that stupid.


A car with a broken engine cannot drive backward at 200 mph, even if the engine is really really broken.


If stupidity does not reliably anticorrelate with truth, how much less should human evil anticorrelate with truth? The converse of the halo effect is the horns effect: All perceived negative qualities correlate. If Stalin is evil, then everything he says should be false. You wouldn’t want to agree with Stalin, would you?


Stalin also believed that 2 + 2 = 4. Yet if you defend any statement made by Stalin, even “2 + 2 = 4,” people will see only that you are “agreeing with Stalin”; you must be on his side.


Corollaries of this principle:


1Read “P(cults|aliens)” as “the probability of UFO cults given that aliens have visited Earth,” and read “P(cults|¬aliens)” as “the probability of UFO cults given that aliens have not visited Earth.”


2Robert M. Pirsig, Zen and the Art of Motorcycle Maintenance: An Inquiry Into Values, 1st ed. (New York: Morrow, 1974).


3See Scott Alexander, “The Least Convenient Possible World,” Less Wrong (blog), December 2, 2018, http://lesswrong.com/lw/2k/the_least_convenient_possible_world/.


4See also “Selling Nonapples.” http://lesswrong.com/lw/vs/selling_nonapples.

\n\n" } }, { "_id": "yEjaj7PWacno5EvWa", "title": "Every Cause Wants To Be A Cult", "pageUrl": "https://www.lesswrong.com/posts/yEjaj7PWacno5EvWa/every-cause-wants-to-be-a-cult", "postedAt": "2007-12-12T03:04:30.000Z", "baseScore": 130, "voteCount": 109, "commentCount": 41, "url": null, "contents": { "documentId": "yEjaj7PWacno5EvWa", "html": "\n\n\n\n \n\n \n\n

Cade Metz at The Register recently alleged that a secret mailing list of Wikipedia’s top administrators has become obsessed with banning all critics and possible critics of Wikipedia.1 Including banning a productive user when one administrator—solely because of the productivity—became convinced that the user was a spy sent by Wikipedia Review. And that the top people at Wikipedia closed ranks to defend their own.

\n\n

Is there some deep moral flaw in seeking to systematize the world’s knowledge, of the sort that would lead pursuers of that Cause into madness? Perhaps only people with innately totalitarian tendencies would try to become the world’s authority on everything—

\n\n

Correspondence bias alert! If the allegations about Wikipedia are true, they’re explained by ordinary human nature, not by extraordinary human nature.

\n\n

The ingroup-outgroup dichotomy is part of ordinary human nature. So are happy death spirals and spirals of hate. A Noble Cause doesn’t need a deep hidden flaw for its adherents to form a cultish in-group. It is sufficient that the adherents be human. Everything else follows naturally, decay by default, like food spoiling in a refrigerator after the electricity goes off.

\n\n

In the same sense that every thermal differential wants to equalize itself, and every computer program wants to become a collection of ad-hoc patches, every Cause wants to be a cult. It’s a high-entropy state into which the system trends, an attractor in human psychology. It may have nothing to do with whether the Cause is truly Noble. You might think that a Good Cause would rub off its goodness on every aspect of the people associated with it—that the Cause’s followers would also be less susceptible to status games, ingroup-outgroup bias, affective spirals, leader-gods. But believing one true idea won’t switch off the halo effect. A noble cause won’t make its adherents something other than human. There are plenty of bad ideas that can do plenty of damage—but that’s not necessarily what’s going on.

\n\n

Every group of people with an unusual goal—good, bad, or silly—will trend toward the cult attractor unless they make a constant effort to resist it. You can keep your house cooler than the outdoors, but you have to run the air conditioner constantly, and as soon as you turn off the electricity—give up the fight against entropy—things will go back to “normal.”

\n\n

On one notable occasion there was a group that went semicultish whose rallying cry was “Rationality! Reason! Objective reality!”2 Labeling the Great Idea “rationality” won’t protect you any more than putting up a sign over your house that says “Cold!” You still have to run the air conditioner—expend the required energy per unit time to reverse the natural slide into cultishness. Worshipping rationality won’t make you sane any more than worshipping gravity enables you to fly. You can’t talk to thermodynamics and you can’t pray to probability theory. You can use it, but not join it as an in-group.

\n\n

Cultishness is quantitative, not qualitative. The question is not, “Cultish, yes or no?” but, “How much cultishness and where?” Even in Science, which is the archetypal Genuinely Truly Noble Cause, we can readily point to the current frontiers of the war against cult-entropy, where the current battle line creeps forward and back. Are journals more likely to accept articles with a well-known authorial byline, or from an unknown author from a well-known institution, compared to an unknown author from an unknown institution? How much belief is due to authority and how much is from the experiment? Which journals are using blinded reviewers, and how effective is blinded reviewing?

\n\n

I cite this example, rather than the standard vague accusations of “scientists aren’t open to new ideas,” because it shows a battle line—a place where human psychology is being actively driven back, where accumulated cult-entropy is being pumped out. (Of course, this requires emitting some waste heat.)

\n\n

This essay is not a catalog of techniques for actively pumping against cultishness. I’ve described some such techniques before, and I’ll discuss more later. Here I just want to point out that the worthiness of the Cause does not mean you can spend any less effort in resisting the cult attractor. And that if you can point to current battle lines, it does not mean you confess your Noble Cause unworthy. You might think that if the question were, “Cultish, yes or no?” that you were obliged to answer, “No,” or else betray your beloved Cause. But that is like thinking that you should divide engines into “perfectly efficient” and “inefficient,” instead of measuring waste.

\n\n

Contrariwise, if you believe that it was the Inherent Impurity of those Foolish Other Causes that made them go wrong, if you laugh at the folly of “cult victims,” if you think that cults are led and populated by mutants, then you will not expend the necessary effort to pump against entropy—to resist being human.


1See “Secret Mailing List Rocks Wikipedia” (http://www.theregister.co.uk/2007/12/04/wikipedia_secret_mailing) and “Wikipedia Black Helicopters Circle Utah’s Traverse Mountain” (http://www.theregister.co.uk/2007/12/06/wikipedia_and_overstock).

\n\n

2See “Guardians of the Truth” (http://lesswrong.com/lw/lz/guardians_of_the_truth) and “Guardians of Ayn Rand” (http://lesswrong.com/lw/m1/guardians_of_ayn_rand).

\n
\n\n" } }, { "_id": "LGHcoEah3E7oYsvke", "title": "Misc Meta", "pageUrl": "https://www.lesswrong.com/posts/LGHcoEah3E7oYsvke/misc-meta", "postedAt": "2007-12-10T22:57:53.000Z", "baseScore": 8, "voteCount": 6, "commentCount": 12, "url": null, "contents": { "documentId": "LGHcoEah3E7oYsvke", "html": "

Overcoming Bias now has a new Welcome page, as I'm sure you've noticed on the sidebar.  A completely ad-hoc eyeballing "statistical" test during our recent Redditing showed that a less prominent placement didn't increase pageviews per visit.  Hopefully it won't get in the way too much.

\n\n

Handy social bookmarking thingy is just below "Recent Posts".

\n\n

The "Contributors" section now contains only individuals who have made 3 or more Overcoming Bias posts.  For the curious, the following is the complete list of individuals who've made 10 or more contributions:  Stuart Armstrong, David Balan, Nick Bostrom, Hal Finney, Robin Hanson, Andrew Gelman, James Miller, Eliezer Yudkowsky.

\n\n


Many of us, including me, have been having trouble with an odd Typekey bug that shows us as logged in, but marks our contributions as having come from nowhere.  If you "Sign out", manually enter your name and email address (and optionally URL), hit "Remember personal info", and then post, you shouldn't have this problem.  At least it's worked for me, so far.


I've located what looks to be an acceptable restaurant for the Bay Area Overcoming Bias meetup, in Millbrae right next to the Millbrae BART/Caltrain station.  This seems like a fairly central location, easily accessible by public transport.  It's even convenient for anyone who wants to quickly fly in to SFO.  However, we're currently approaching the holiday crunch, so my thought is to schedule the first meetup for mid-January.  Will post on this soon, I hope.

\n\n


During November I generated 40,965 words of posts (not including comments).  And here I was wondering why I've been feeling tired lately.  Blooking feels like trying to run up a mountain, through concrete, at top speed - but it gets things said.  41,000 words/month, even if only a third of them end up being used, would be nearly in the range of a professional author if I could sustain it.

\n\n


The recent post "When None Dare Urge Restraint" rose to #1 on Reddit, which raises interesting issues about how often that sort of thing should be allowed to happen on Overcoming Bias.  Political posts are less interesting, and generate lower-quality discussion; they violate both Hanson's injunction to "Tug the rope sideways" and my own principle of "Learn rationality first on nondistracting problems."  So we definitely don't want to do this too often.

\n\n

However, I also recall an occasion where a Congressperson visited a transhumanist gathering, and asked "How many people here are signed up for cryonics?"  Many hands went up.  "How many of you know the name of your representative in the House?"  Fewer hands went up.  "And you wonder why you don't have much political influence."  Point taken.

\n\n

There is something to be said for being a little relevant every now and then.  I didn't write "When None Dare Urge Restraint" with the intent that it would rise on Reddit, but I'm glad it did, and I'm currently considering whether to write another political post.  It has obvious pros and obvious cons.

" } }, { "_id": "MBpj3QKfPg9xKNeXW", "title": "The Robbers Cave Experiment", "pageUrl": "https://www.lesswrong.com/posts/MBpj3QKfPg9xKNeXW/the-robbers-cave-experiment", "postedAt": "2007-12-10T06:18:56.000Z", "baseScore": 61, "voteCount": 62, "commentCount": 65, "url": null, "contents": { "documentId": "MBpj3QKfPg9xKNeXW", "html": "

Did you ever wonder, when you were a kid, whether your inane \"summer camp\" actually had some kind of elaborate hidden purpose—say, it was all a science experiment and the \"camp counselors\" were really researchers observing your behavior?

\n

Me neither.

\n

But we'd have been more paranoid if we'd read Intergroup Conflict and Cooperation:  The Robbers Cave Experiment by Sherif, Harvey, White, Hood, and Sherif (1954/1961).  In this study, the experimental subjects—excuse me, \"campers\"—were 22 boys between 5th and 6th grade, selected from 22 different schools in Oklahoma City, of stable middle-class Protestant families, doing well in school, median IQ 112.  They were as well-adjusted and as similar to each other as the researchers could manage. 

\n

The experiment, conducted in the bewildered aftermath of World War II, was meant to investigate the causes—and possible remedies—of intergroup conflict.  How would they spark an intergroup conflict to investigate?  Well, the 22 boys were divided into two groups of 11 campers, and—

\n

—and that turned out to be quite sufficient.

\n

\n

The researchers' original plans called for the experiment to be conducted in three stages.  In Stage 1, each group of campers would settle in, unaware of the other group's existence.  Toward the end of Stage 1, the groups would gradually be made aware of each other.  In Stage 2, a set of contests and prize competitions would set the two groups at odds.

\n

They needn't have bothered with Stage 2.  There was hostility almost from the moment each group became aware of the other group's existence:  They were using our campground, our baseball diamond.  On their first meeting, the two groups began hurling insults.  They named themselves the Rattlers and the Eagles (they hadn't needed names when they were the only group on the campground).

\n

When the contests and prizes were announced, in accordance with pre-established experimental procedure, the intergroup rivalry rose to a fever pitch.  Good sportsmanship in the contests was evident for the first two days but rapidly disintegrated.

\n

The Eagles stole the Rattlers' flag and burned it.  Rattlers raided the Eagles' cabin and stole the blue jeans of the group leader, which they painted orange and carried as a flag the next day, inscribed with the legend \"The Last of the Eagles\".  The Eagles launched a retaliatory raid on the Rattlers, turning over beds, scattering dirt.  Then they returned to their cabin where they entrenched and prepared weapons (socks filled with rocks) in case of a return raid.  After the Eagles won the last contest planned for Stage 2, the Rattlers raided their cabin and stole the prizes.  This developed into a fistfight that the staff had to shut down for fear of injury.  The Eagles, retelling the tale among themselves, turned the whole affair into a magnificent victory—they'd chased the Rattlers \"over halfway back to their cabin\" (they hadn't).

\n

Each group developed a negative stereotype of Them and a contrasting positive stereotype of Us.  The Rattlers swore heavily.  The Eagles, after winning one game, concluded that the Eagles had won because of their prayers and the Rattlers had lost because they used cuss-words all the time.  The Eagles decided to stop using cuss-words themselves.  They also concluded that since the Rattlers swore all the time, it would be wiser not to talk to them.  The Eagles developed an image of themselves as proper-and-moral; the Rattlers developed an image of themselves as rough-and-tough.

\n

Group members held their noses when members of the other group passed.

\n

In Stage 3, the researchers tried to reduce friction between the two groups.

\n

Mere contact (being present without contesting) did not reduce friction between the two groups.  Attending pleasant events together—for example, shooting off Fourth of July fireworks—did not reduce friction; instead it developed into a food fight.

\n

Would you care to guess what did work?

\n

(Spoiler space...)

\n

The boys were informed that there might be a water shortage in the whole camp, due to mysterious trouble with the water system—possibly due to vandals.  (The Outside Enemy, one of the oldest tricks in the book.)

\n

The area between the camp and the reservoir would have to be inspected by four search details.  (Initially, these search details were composed uniformly of members from each group.)  All details would meet up at the water tank if nothing was found.  As nothing was found, the groups met at the water tank and observed for themselves that no water was coming from the faucet.  The two groups of boys discussed where the problem might lie, pounded the sides of the water tank, discovered a ladder to the top, verified that the water tank was full, and finally found the sack stuffed in the water faucet.  All the boys gathered around the faucet to clear it.  Suggestions from members of both groups were thrown at the problem and boys from both sides tried to implement them.

\n

When the faucet was finally cleared, the Rattlers, who had canteens, did not object to the Eagles taking a first turn at the faucets (the Eagles didn't have canteens with them).  No insults were hurled, not even the customary \"Ladies first\".

\n

It wasn't the end of the rivalry.  There was another food fight, with insults, the next morning.  But a few more common tasks, requiring cooperation from both groups—e.g. restarting a stalled truck—did the job.  At the end of the trip, the Rattlers used $5 won in a bean-toss contest to buy malts for all the boys in both groups.

\n

The Robbers Cave Experiment illustrates the psychology of hunter-gatherer bands, echoed through time, as perfectly as any experiment ever devised by social science.

\n

Any resemblance to modern politics is just your imagination.

\n

(Sometimes I think humanity's second-greatest need is a supervillain.  Maybe I'll go into that line of work after I finish my current job.)

\n
\n

Sherif, M., Harvey, O. J., White, B. J., Hood, W. R., & Sherif, C. W. 1954/1961. Study of positive and negative intergroup attitudes between experimentally produced groups: Robbers Cave study. University of Oklahoma.

" } }, { "_id": "Tw9cLvzSKrkGjNHW3", "title": "When None Dare Urge Restraint", "pageUrl": "https://www.lesswrong.com/posts/Tw9cLvzSKrkGjNHW3/when-none-dare-urge-restraint", "postedAt": "2007-12-08T23:09:34.000Z", "baseScore": 135, "voteCount": 109, "commentCount": 123, "url": null, "contents": { "documentId": "Tw9cLvzSKrkGjNHW3", "html": "\n\n\n\n \n\n \n\n

One morning, I got out of bed, turned on my computer, and my Netscape email client automatically downloaded that day’s news pane. On that particular day, the news was that two hijacked planes had been flown into the World Trade Center.

\n\n

These were my first three thoughts, in order:


I guess I really am living in the Future.

\n\n

Thank goodness it wasn’t nuclear.

\n\n

and then

\n\n

The overreaction to this will be ten times worse than the original event.


A mere factor of “ten times worse” turned out to be a vast understatement. Even I didn’t guess how badly things would go. That’s the challenge of pessimism; it’s really hard to aim low enough that you’re pleasantly surprised around as often and as much as you’re unpleasantly surprised.

\n\n

Nonetheless, I did realize immediately that everyone everywhere would be saying how awful, how terrible this event was; and that no one would dare to be the voice of restraint, of proportionate response. Initially, on 9/11, it was thought that six thousand people had died. Any politician who had said, “6,000 deaths is 1/8 the annual US casualties from automobile accidents,” would have been asked to resign the same hour.

\n\n

No, 9/11 wasn’t a good day. But if everyone gets brownie points for emphasizing how much it hurts, and no one dares urge restraint in how hard to hit back, then the reaction will be greater than the appropriate level, whatever the appropriate level may be.

\n\n

This is the even darker mirror of the happy death spiral—the spiral of hate. Anyone who attacks the Enemy is a patriot; and whoever tries to dissect even a single negative claim about the Enemy is a traitor. But just as the vast majority of all complex statements are untrue, the vast majority of negative things you can say about anyone, even the worst person in the world, are untrue.

\n\n

I think the best illustration was “the suicide hijackers were cowards.” Some common sense, please? It takes a little courage to voluntarily fly your plane into a building. Of all their sins, cowardice was not on the list. But I guess anything bad you say about a terrorist, no matter how silly, must be true. Would I get even more brownie points if I accused al-Qaeda of having assassinated John F. Kennedy? Maybe if I accused them of being Stalinists? Really, cowardice?

\n\n

Yes, it matters that the 9/11 hijackers weren’t cowards. Not just for understanding the enemy’s realistic psychology. There is simply too much damage done by spirals of hate. It is just too dangerous for there to be any target in the world, whether it be the Jews or Adolf Hitler, about whom saying negative things trumps saying accurate things.

\n\n

When the defense force contains thousands of aircraft and hundreds of thousands of heavily armed soldiers, one ought to consider that the immune system itself is capable of wreaking more damage than nineteen guys and four nonmilitary airplanes. The US spent billions of dollars and thousands of soldiers’ lives shooting off its own foot more effectively than any terrorist group could dream.

\n\n

If the USA had completely ignored the 9/11 attack—just shrugged and rebuilt the building—it would have been better than the real course of history. But that wasn’t a political option. Even if anyone privately guessed that the immune response would be more damaging than the disease, American politicians had no career-preserving choice but to walk straight into al-Qaeda’s trap. Whoever argues for a greater response is a patriot. Whoever dissects a patriotic claim is a traitor.

\n\n

Initially, there were smarter responses to 9/11 than I had guessed. I saw a Congressperson—I forget who—say in front of the cameras, “We have forgotten that the first purpose of government is not the economy, it is not health care, it is defending the country from attack.” That widened my eyes, that a politician could say something that wasn’t an applause light. The emotional shock must have been very great for a Congressperson to say something that . . . real.

\n\n

But within two days, the genuine shock faded, and concern-for-image regained total control of the political discourse. Then the spiral of escalation took over completely. Once restraint becomes unspeakable, no matter where the discourse starts out, the level of fury and folly can only rise with time.

\n\n" } }, { "_id": "ZQG9cwKbct2LtmL3p", "title": "Evaporative Cooling of Group Beliefs", "pageUrl": "https://www.lesswrong.com/posts/ZQG9cwKbct2LtmL3p/evaporative-cooling-of-group-beliefs", "postedAt": "2007-12-07T23:08:01.000Z", "baseScore": 181, "voteCount": 136, "commentCount": 52, "url": null, "contents": { "documentId": "ZQG9cwKbct2LtmL3p", "html": "\n\n\n\n \n\n \n\n

Early studiers of cults were surprised to discover that when cults receive a major shock—a prophecy fails to come true, a moral flaw of the founder is revealed—they often come back stronger than before, with increased belief and fanaticism. The Jehovah’s Witnesses placed Armageddon in 1975, based on Biblical calculations; 1975 has come and passed. The Unarian cult, still going strong today, survived the nonappearance of an intergalactic spacefleet on September 27, 1975.

\n\n

Why would a group belief become stronger after encountering crushing counterevidence?

\n\n

The conventional interpretation of this phenomenon is based on cognitive dissonance. When people have taken “irrevocable” actions in the service of a belief—given away all their property in anticipation of the saucers landing—they cannot possibly admit they were mistaken. The challenge to their belief presents an immense cognitive dissonance; they must find reinforcing thoughts to counter the shock, and so become more fanatical. In this interpretation, the increased group fanaticism is the result of increased individual fanaticism.

\n\n

I was looking at a Java applet which demonstrates the use of evaporative cooling to form a Bose-Einstein condensate, when it occurred to me that another force entirely might operate to increase fanaticism. Evaporative cooling sets up a potential energy barrier around a collection of hot atoms. Thermal energy is essentially statistical in nature—not all atoms are moving at the exact same speed. The kinetic energy of any given atom varies as the atoms collide with each other. If you set up a potential energy barrier that’s just a little higher than the average thermal energy, the workings of chance will give an occasional atom a kinetic energy high enough to escape the trap. When an unusually fast atom escapes, it takes with it an unusually large amount of kinetic energy, and the average energy decreases. The group becomes substantially cooler than the potential energy barrier around it.

\n\n

In Festinger, Riecken, and Schachter’s classic When Prophecy Fails, one of the cult members walked out the door immediately after the flying saucer failed to land. Who gets fed up and leaves first? An average cult member? Or a relatively skeptical member, who previously might have been acting as a voice of moderation, a brake on the more fanatic members?

\n\n

After the members with the highest kinetic energy escape, the remaining discussions will be between the extreme fanatics on one end and the slightly less extreme fanatics on the other end, with the group consensus somewhere in the “middle.”
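
To see the shape of the analogy (and nothing more; the numbers are arbitrary assumptions, not data about any real group), here is a minimal sketch in which each member has a numerical "fanaticism" score and the most skeptical members walk out after a shock:

```python
import random

# A minimal sketch of the evaporative-cooling analogy, not of any real group.
# Each member gets a numerical "fanaticism" score; after a shock, anyone below
# an (arbitrary) tolerance threshold walks out, and the group mean rises.
random.seed(0)
group = [random.gauss(0.5, 0.15) for _ in range(1000)]  # assumed spread of belief

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(group), 3))              # average fanaticism before the shock

THRESHOLD = 0.35                          # members more skeptical than this leave
remaining = [x for x in group if x >= THRESHOLD]

print(len(group) - len(remaining), "members leave")
print(round(mean(remaining), 3))          # the survivors are more extreme on average
```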

\n\n

And what would be the analogy to collapsing to form a Bose-Einstein condensate? Well, there’s no real need to stretch the analogy that far. But you may recall that I used a fission chain reaction analogy for the affective death spiral; when a group ejects all its voices of moderation, then all the people encouraging each other, and suppressing dissents, may internally increase in average fanaticism.1

\n\n

When Ayn Rand’s long-running affair with Nathaniel Branden was revealed to the Objectivist membership, a substantial fraction of the Objectivist membership broke off and followed Branden into espousing an “open system” of Objectivism not bound so tightly to Ayn Rand. Who stayed with Ayn Rand even after the scandal broke? The ones who really, really believed in her—and perhaps some of the undecideds, who, after the voices of moderation left, heard arguments from only one side. This may account for how the Ayn Rand Institute is (reportedly) more fanatical after the breakup than the original core group of Objectivists under Branden and Rand.

\n\n

A few years back, I was on a transhumanist mailing list where a small group espousing “social democratic transhumanism” vitriolically insulted every libertarian on the list. Most libertarians left the mailing list; most of the others gave up on posting. As a result, the remaining group shifted substantially to the left. Was this deliberate? Probably not, because I don’t think the perpetrators knew that much psychology.2 At most, they might have thought to make themselves “bigger fish in a smaller pond.”

\n\n

This is one reason why it’s important to be prejudiced in favor of tolerating dissent. Wait until substantially after it seems to you justified in ejecting a member from the group, before actually ejecting. If you get rid of the old outliers, the group position will shift, and someone else will become the oddball. If you eject them too, you’re well on the way to becoming a Bose-Einstein condensate and, er, exploding.

\n\n

The flip side: Thomas Kuhn believed that a science has to become a “paradigm,” with a shared technical language that excludes outsiders, before it can get any real work done. In the formative stages of a science, according to Kuhn, the adherents go to great pains to make their work comprehensible to outside academics. But (according to Kuhn) a science can only make real progress as a technical discipline once it abandons the requirement of outside accessibility, and scientists working in the paradigm assume familiarity with large cores of technical material in their communications. This sounds cynical, relative to what is usually said about public understanding of science, but I can definitely see a core of truth here.3


1No thermodynamic analogy here, unless someone develops a nuclear weapon that explodes when it gets cold.

\n\n

2For that matter, I can’t recall seeing the evaporative cooling analogy elsewhere, though that doesn’t mean it hasn’t been noted before.

\n\n

3My own theory of Internet moderation is that you have to be willing to exclude trolls and spam to get a conversation going. You must even be willing to exclude kindly but technically uninformed folks from technical mailing lists if you want to get any work done. A genuinely open conversation on the Internet degenerates fast.

\n\n

It’s the articulate trolls that you should be wary of ejecting, on this theory—they serve the hidden function of legitimizing less extreme disagreements. But you should not have so many articulate trolls that they begin arguing with each other, or begin to dominate conversations. If you have one person around who is the famous Guy Who Disagrees With Everything, anyone with a more reasonable, more moderate disagreement won’t look like the sole nail sticking out. This theory of Internet moderation may not have served me too well in practice, so take it with a grain of salt.

\n
\n\n" } }, { "_id": "NnohDYHNnKDtbiMyp", "title": "Fake Utility Functions", "pageUrl": "https://www.lesswrong.com/posts/NnohDYHNnKDtbiMyp/fake-utility-functions", "postedAt": "2007-12-06T16:55:41.000Z", "baseScore": 71, "voteCount": 67, "commentCount": 64, "url": null, "contents": { "documentId": "NnohDYHNnKDtbiMyp", "html": "

Every now and then, you run across someone who has discovered the One Great Moral Principle, of which all other values are a mere derivative consequence.

\n\n

I run across more of these people than you do.  Only in my case, it's people who know the amazingly simple utility function that is all you need to program into an artificial superintelligence and then everything will turn out fine.

\n\n

(This post should come as an anticlimax, since you already know virtually all the concepts involved, I bloody well hope.  See yesterday's post, and all my posts since October 31st, actually...)

\n

Some people, when they encounter the how-to-program-a-superintelligence problem, try to solve the problem immediately.  Norman R. F. Maier:  "Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any."  Robyn Dawes:  "I have often used this edict with groups I have led - particularly when they face a very tough problem, which is when group members are most apt to propose solutions immediately."  Friendly AI is an extremely tough problem so people solve it extremely fast.

\n\n

There's several major classes of fast wrong solutions I've observed; and one of these is the Incredibly Simple Utility Function That Is All A Superintelligence Needs For Everything To Work Out Just Fine.

\n\n

I may have contributed to this problem with a really poor choice of phrasing, years ago when I first started talking about "Friendly AI".  I referred to the optimization criterion of an optimization process - the region into which an agent tries to steer the future - as the "supergoal".  I'd meant "super" in the sense of "parent", the source of a directed link in an acyclic graph.  But it seems the effect of my phrasing was to send some people into happy death spirals as they tried to imagine the Superest Goal Ever, the Goal That Overrides All Other Goals, the Single Ultimate Rule From Which All Ethics Can Be Derived.

\n\n

But a utility function doesn't have to be simple.  It can contain an arbitrary number of terms.  We have every reason to believe that insofar as humans can be said to have values, there are lots of them - high Kolmogorov complexity.  A human brain implements a thousand shards of desire, though this fact may not be appreciated by one who has not studied evolutionary psychology.  (Try to explain this without a full, long introduction, and the one hears "humans are trying to maximize fitness", which is exactly the opposite of what evolutionary psychology says.)

\n\n

So far as descriptive theories of morality are concerned, the complicatedness of human morality is a known fact.  It is a descriptive fact about human beings, that the love of a parent for a child, and the love of a child for a parent, and the love of a man for a woman, and the love of a woman for a man, have not been cognitively derived from each other or from any other value.  A mother doesn't have to do complicated moral philosophy to love her daughter, nor extrapolate the consequences to some other desideratum.  There are many such shards of desire, all different values.

\n\n

Leave out just one of these values from a superintelligence, and even if you successfully include every other value, you could end up with a hyperexistential catastrophe, a fate worse than death.  If there's a superintelligence that wants everything for us that we want for ourselves, except the human values relating to controlling your own life and achieving your own goals, that's one of the oldest dystopias in the book.  (Jack Williamson's "With Folded Hands", in this case.)

\n\n

So how does the one constructing the Amazingly Simple Utility Function deal with this objection?

\n\n

Objection?  Objection?  Why would they be searching for possible objections to their lovely theory?  (Note that the process of searching for real, fatal objections isn't the same as performing a dutiful search that amazingly hits on only questions to which they have a snappy answer.)  They don't know any of this stuff.  They aren't thinking about burdens of proof.  They don't know the problem is difficult.  They heard the word "supergoal" and went off in a happy death spiral around "complexity" or whatever.

\n\n\n\n

Press them on some particular point, like the love a mother has for her children, and they reply "But if the superintelligence wants 'complexity', it will see how complicated the parent-child relationship is, and therefore encourage mothers to love their children."  Goodness, where do I start?

\n\n

Begin with the motivated stopping:  A superintelligence actually searching for ways to maximize complexity wouldn't conveniently stop if it noticed that a parent-child relation was complex.  It would ask if anything else was more complex.  This is a fake justification; the one trying to argue the imaginary superintelligence into a policy selection, didn't really arrive at that policy proposal by carrying out a pure search for ways to maximize complexity.

\n\n

The whole argument is a fake morality.  If what you really valued was complexity, then you would be justifying the parental-love drive by pointing to how it increases complexity.  If you justify a complexity drive by alleging that it increases parental love, it means that what you really value is the parental love.  It's like giving a prosocial argument in favor of selfishness.

\n\n

But if you consider the affective death spiral, then it doesn't increase the perceived niceness of "complexity" to say "A mother's relationship to her daughter is only important because it increases complexity; consider that if the relationship became simpler, we would not value it."  What does increase the perceived niceness of "complexity" is saying, "If you set out to increase complexity, mothers will love their daughters - look at the positive consequence this has!"

This point applies whenever you run across a moralist who tries to convince you that their One Great Idea is all that anyone needs for moral judgment, and proves this by saying, "Look at all these positive consequences of this Great Thingy", rather than saying, "Look at how all these things we think of as 'positive' are only positive when their consequence is to increase the Great Thingy."  The latter being what you'd actually need to carry such an argument.

\n\n

But if you're trying to persuade others (or yourself) of your theory that the One Great Idea is "bananas", you'll sell a lot more bananas by arguing how bananas lead to better sex, rather than claiming that you should only want sex when it leads to bananas.

\n\n

Unless you're so far gone into the Happy Death Spiral that you really do start saying "Sex is only good when it leads to bananas."  Then you're in trouble.  But at least you won't convince anyone else.

\n\n

In the end, the only process that reliably regenerates all the local decisions you would make given your morality, is your morality.  Anything else - any attempt to substitute instrumental means for terminal ends - ends up losing purpose and requiring an infinite number of patches because the system doesn't contain the source of the instructions you're giving it.  You shouldn't expect to be able to compress a human morality down to a simple utility function, any more than you should expect to compress a large computer file down to 10 bits.

\n\n

Addendum:  Please note that we're not yet ready to discuss Friendly AI, as such, on Overcoming Bias.  That will require a lot more prerequisite material.  This post is only about why simple utility functions fail to compress our values.

" } }, { "_id": "D6rsNhHM4pBCpDzSb", "title": "Fake Fake Utility Functions", "pageUrl": "https://www.lesswrong.com/posts/D6rsNhHM4pBCpDzSb/fake-fake-utility-functions", "postedAt": "2007-12-06T06:30:26.000Z", "baseScore": 42, "voteCount": 29, "commentCount": 9, "url": null, "contents": { "documentId": "D6rsNhHM4pBCpDzSb", "html": "

Followup to: Most of my posts over the last month...

\n

Every now and then, you run across someone who has discovered the One Great Moral Principle, of which all other values are a mere derivative consequence.

\n

I run across more of these people than you do.  Only in my case, it's people who know the amazingly simple utility function that is all you need to program into an artificial superintelligence and then everything will turn out fine...

\n

It's incredible how one little issue can require so much prerequisite material.  My original schedule called for \"Fake Utility Functions\" to follow \"Fake Justification\" on Oct 31.

\n

Talk about your planning fallacy.  I've been planning to post on this topic in \"just a few days\" for the past month.  A fun little demonstration of underestimated inferential distances.

\n

You see, before I wrote this post, it occurred to me that if I wanted to properly explain the problem of fake utility functions, it would be helpful to illustrate a mistake about what a simple optimization criterion implied.  The strongest real-world example I knew was the Tragedy of Group Selectionism.  At first I thought I'd mention it in passing, within \"Fake Utility Functions\", but I decided the Tragedy of Group Selectionism was a long enough story that it needed its own blog post...

\n

\n

So I started to write \"The Tragedy of Group Selectionism\".  A few hours later, I noticed that I hadn't said anything about group selectionism yet.  I'd been too busy introducing basic evolutionary concepts. Select all the introductory stuff, cut, Compose New Post, paste, title... \"An Alien God\".  Then keep writing until the \"Alien God\" post gets too long, and start taking separate subjects out into their own posts: \"The Wonder of Evolution\", \"Evolutions Are Stupid\", and at this point it became clear that, since I was planning to say a few words on evolution anyway, that was the time.  Besides, a basic familiarity with evolution would help to shake people loose of their human assumptions when it came to visualizing nonhuman optimization processes.

\n

So, finally I posted \"The Tragedy of Group Selectionism\". Now I was ready to write \"Fake Utility Functions\", right?  The post that was supposed to come immediately afterward?  So I thought, but each time I tried to write the post, I ended up recursing on a prerequisite post instead.  Such as \"Fake Selfishness\", \"Fake Morality\", and \"Fake Optimization Criteria\".

\n

When I got to \"Fake Optimization Criteria\", I really thought I could do \"Fake Utility Functions\" the next day.  But then it occurred to me that I'd never explained why a simple utility function wouldn't be enough.  We are a thousand shards of desire, as I said in \"Thou Art Godshatter\".  Only that first required discussing \"Evolutionary Psychology\", which required explaining that human minds are \"Adaptation-Executers, not Fitness-Maximizers\", plus the difference between \"Protein Reinforcement and DNA Consequentialism\".

\n

Furthermore, I'd never really explained the difference between \"Terminal Values and Instrumental Values\", without which I could hardly talk about utility functions.

\n

Surely now I was ready?  Yet I thought about conversations I'd had over the years, and how people seem to think a simple instruction like \"Get my mother out of that burning building!\" contains all the motivations that shape a human plan to rescue her, so I thought that first I'd do \"The Hidden Complexity of Wishes\". But, really, the hidden complexity of planning, and all the special cases needed to patch the genie's wish, was part of the general problem of recording outputs without absorbing the process that generates the outputs - as I explained in \"Artificial Addition\" and \"Truly Part Of You\".  You don't want to keep the local goal description and discard the nonlocal utility function:  \"Leaky Generalizations\" and \"Lost Purposes\".

\n

Plus it occurred to me that evolution itself made an interesting genie, so before all that, came \"Conjuring An Evolution To Serve You\".

\n

One kind of lost purpose is artificial pleasure, and \"happiness\" is one of the Fake Utility Functions I run into more often:  \"Not for the Sake of Happiness (Alone)\".  Similarly, it was worth taking the time to establish that fitness is not always your friend (\"Evolving to Extinction\") and that not everything in the universe is subject to significant selection pressures (\"No Evolutions for Corporations or Nanodevices\"), to avoid the Fake Utility Function of \"genetic fitness\".

\n

Right after \"Lost Purposes\" seemed like a good time to point out the deep link between keeping track of your original goal and keeping track of your original question:  \"Purpose and Pragmatism\".

\n

Into the home stretch!  No, wait, this would be a good time to discuss \"Affective Death Spirals\", since that's one of the main things that goes wrong when someone discovers The One True Valuable Thingy - they keep finding nicer and nicer things to say about it.  Well, you can't discuss affective death spirals unless you first discuss \"The Affect Heuristic\", but I'd been meaning to do that for a while anyway.  \"Evaluability\" illustrates the affect heuristic and leads to an important point about \"Unbounded Scales and Futurism\".  The second key to affective death spirals is \"The Halo Effect\", which we can see illustrated in \"Superhero Bias\" and \"Mere Messiahs\".  Then it's on to affective death spirals and how to \"Resist the Happy Death Spiral\" and \"Uncritical Supercriticality\".

\n

A bonus irony is that \"Fake Utility Functions\" isn't a grand climax.  It's just one of many Less Wrong posts relevant to my AI work, with plenty more scheduled.  This particular post just turned out to require just a little more prerequisite material which - I thought on each occasion - I would have to write anyway, sooner or later.

\n

And that's why blogging is difficult, and why it is necessary, at least for me.  I would have been doomed, yea, utterly doomed, if I'd tried to write all this as one publication rather than as a series of blog posts.  One month is nothing for this much material.

\n

But now, it's done!  Now, after only slightly more than an extra month of prerequisite material, I can do the blog post originally scheduled for November 1st!

\n

Except...

\n

Now that I think about it...

\n

This post is pretty long already, right?

\n

So I'll do the real \"Fake Utility Functions\" tomorrow.

" } }, { "_id": "NCefvet6X3Sd4wrPc", "title": "Uncritical Supercriticality", "pageUrl": "https://www.lesswrong.com/posts/NCefvet6X3Sd4wrPc/uncritical-supercriticality", "postedAt": "2007-12-04T16:40:53.000Z", "baseScore": 121, "voteCount": 105, "commentCount": 174, "url": null, "contents": { "documentId": "NCefvet6X3Sd4wrPc", "html": "\n\n\n\n \n\n \n\n

Every now and then, you see people arguing over whether atheism is a “religion.” As I touch on elsewhere, in “Purpose and Pragmatism,” arguing over the meaning of a word nearly always means that you’ve lost track of the original question.1 How might this argument arise to begin with?

\n\n

An atheist is holding forth, blaming “religion” for the Inquisition, the Crusades, and various conflicts with or within Islam. The religious one may reply, “But atheism is also a religion, because you also have beliefs about God; you believe God doesn’t exist.” Then the atheist answers, “If atheism is a religion, then not collecting stamps is a hobby,” and the argument begins.

\n\n

Or the one may reply, “But horrors just as great were inflicted by Stalin, who was an atheist, and who suppressed churches in the name of atheism; therefore you are wrong to blame the violence on religion.” Now the atheist may be tempted to reply, “No true Scotsman,” saying, “Stalin’s religion was Communism.” The religious one answers “If Communism is a religion, then Star Wars fandom is a government,” and the argument begins.

\n\n

Should a “religious” person be defined as someone who has a definite opinion about the existence of at least one God, e.g., assigning a probability lower than 10% or higher than 90% to the existence of Zeus? Or should a “religious” person be defined as someone who has a positive opinion (say, a probability higher than 90%) on the existence of at least one God? In the former case, Stalin was “religious”; in the latter case, Stalin was “not religious.”

\n\n

But this is exactly the wrong way to look at the problem. What you really want to know—what the argument was originally about—is why, at certain points in human history, large groups of people were slaughtered and tortured, ostensibly in the name of an idea. Redefining a word won’t change the facts of history one way or the other.

\n\n

Communism was a complex catastrophe, and there may be no single why, no single critical link in the chain of causality. But if I had to suggest an ur-mistake, it would be . . . well, I’ll let God say it for me:


If your brother, the son of your father or of your mother, or your son or daughter, or the spouse whom you embrace, or your most intimate friend, tries to secretly seduce you, saying, “Let us go and serve other gods,” unknown to you or your ancestors before you, gods of the peoples surrounding you, whether near you or far away, anywhere throughout the world, you must not consent, you must not listen to him; you must show him no pity, you must not spare him or conceal his guilt. No, you must kill him, your hand must strike the first blow in putting him to death and the hands of the rest of the people following. You must stone him to death, since he has tried to divert you from Yahweh your God.

\n\n

—Deuteronomy 13:7–11, emphasis added


This was likewise the rule which Stalin set for Communism, and Hitler for Nazism: if your brother tries to tell you why Marx is wrong, if your son tries to tell you the Jews are not planning world conquest, then do not debate him or set forth your own evidence; do not perform replicable experiments or examine history; but turn him in at once to the secret police.

\n\n

I suggested that one key to resisting an affective death spiral is the principle of “burdensome details”—just remembering to question the specific details of each additional nice claim about the Great Idea.2 This wouldn’t get rid of the halo effect, but it would hopefully reduce the resonance to below criticality, so that one nice-sounding claim triggers less than 1.0 additional nice-sounding claims, on average.

\n\n

The diametric opposite of this advice, which sends the halo effect supercritical, is when it feels wrong to argue against any positive claim about the Great Idea.

\n\n

Politics is the mind-killer. Arguments are soldiers. Once you know which side you’re on, you must support all favorable claims, and argue against all unfavorable claims. Otherwise it’s like giving aid and comfort to the enemy, or stabbing your friends in the back.

\n\n

If . . .

\n\n \n\n

. . . then the affective death spiral has gone supercritical. It is now a Super Happy Death Spiral.

\n\n

When it comes to our original question—“What makes the slaughter?”—the key category to pay attention to isn’t religion as such. The best distinction I’ve heard between “supernatural” and “naturalistic” worldviews is that a supernatural worldview asserts the existence of ontologically basic mental substances, like spirits, while a naturalistic worldview reduces mental phenomena to nonmental parts. Focusing on this as the source of the problem buys into religious exceptionalism. Supernaturalist claims are worth distinguishing, because they always turn out to be wrong for fairly fundamental reasons.3 But it’s still just one kind of mistake.

\n\n

An affective death spiral can nucleate around supernatural beliefs—particularly monotheisms whose pinnacle is a Super Happy Agent, defined primarily by agreeing with any nice statement about it—and particularly meme complexes grown sophisticated enough to assert supernatural punishments for disbelief. But the death spiral can also start around a political innovation, a charismatic leader, belief in racial destiny, or an economic hypothesis. The lesson of history is that affective death spirals are dangerous whether or not they happen to involve supernaturalism. Religion isn’t special enough, as a class of mistake, to be the key problem.

\n\n

Sam Harris came closer when he put the accusing finger on faith. If you don’t place an appropriate burden of proof on each and every additional nice claim, the affective resonance gets started very easily. Look at the poor New Agers. Christianity developed defenses against criticism, arguing for the wonders of faith; New Agers culturally inherit the cached thought that faith is positive, but lack Christianity’s exclusionary scripture to keep out competing memes. New Agers end up in happy death spirals around stars, trees, magnets, diets, spells, unicorns . . .

\n\n

But the affective death spiral turns much deadlier after criticism becomes a sin, or a gaffe, or a crime. There are things in this world that are worth praising greatly, and you can’t flatly say that praise beyond a certain point is forbidden. But there is never an Idea so true that it’s wrong to criticize any argument that supports it. Never. Never ever never for ever. That is flat. The vast majority of possible beliefs in a nontrivial answer space are false, and likewise, the vast majority of possible supporting arguments for a true belief are also false, and not even the happiest idea can change that.

\n\n

And it is triple ultra forbidden to respond to criticism with violence. There are a very few injunctions in the human art of rationality that have no ifs, ands, buts, or escape clauses. This is one of them. Bad argument gets counterargument. Does not get bullet. Never. Never ever never for ever.


1Link: http://lesswrong.com/lw/lf/purpose_and_pragmatism/.

\n\n

2It’s not trivial advice. People often don’t remember to do this when they’re listening to a futurist sketching amazingly detailed projections about the wonders of tomorrow, let alone when they’re thinking about their favorite idea ever.

\n\n

3See, for example, “Mysterious Answers to Mysterious Questions” in Map and Territory.

\n
\n\n" } }, { "_id": "hwi8JQjspnMWyWs4g", "title": "Resist the Happy Death Spiral", "pageUrl": "https://www.lesswrong.com/posts/hwi8JQjspnMWyWs4g/resist-the-happy-death-spiral", "postedAt": "2007-12-04T01:15:32.000Z", "baseScore": 95, "voteCount": 84, "commentCount": 47, "url": null, "contents": { "documentId": "hwi8JQjspnMWyWs4g", "html": "\n\n\n\n \n\n \n\n

Once upon a time, there was a man who was convinced that he possessed a Great Idea. Indeed, as the man thought upon the Great Idea more and more, he realized that it was not just a great idea, but the most wonderful idea ever. The Great Idea would unravel the mysteries of the universe, supersede the authority of the corrupt and error-ridden Establishment, confer nigh-magical powers upon its wielders, feed the hungry, heal the sick, make the whole world a better place, etc., etc., etc.

\n\n

The man was Francis Bacon, his Great Idea was the scientific method, and he was the only crackpot in all history to claim that level of benefit to humanity and turn out to be completely right.1

\n\n

That’s the problem with deciding that you’ll never admire anything that much: Some ideas really are that good. Though no one has fulfilled claims more audacious than Bacon’s; at least, not yet.

\n\n

But then how can we resist the happy death spiral with respect to Science itself? The happy death spiral starts when you believe something is so wonderful that the halo effect leads you to find more and more nice things to say about it, making you see it as even more wonderful, and so on, spiraling up into the abyss. What if Science is in fact so beneficial that we cannot acknowledge its true glory and retain our sanity? Sounds like a nice thing to say, doesn’t it? Oh no it’s starting ruuunnnnn . . .

\n\n

If you retrieve the standard cached deep wisdom for don’t go overboard on admiring science, you will find thoughts like “Science gave us air conditioning, but it also made the hydrogen bomb” or “Science can tell us about stars and biology, but it can never prove or disprove the dragon in my garage.” But the people who originated such thoughts were not trying to resist a happy death spiral. They weren’t worrying about their own admiration of science spinning out of control. Probably they didn’t like something science had to say about their pet beliefs, and sought ways to undermine its authority.

\n\n

The standard negative things to say about science aren’t likely to appeal to someone who genuinely feels the exultation of science—that’s not the intended audience. So we’ll have to search for other negative things to say instead.

\n\n

But if you look selectively for something negative to say about science—even in an attempt to resist a happy death spiral—do you not automatically convict yourself of rationalization? Why would you pay attention to your own thoughts, if you knew you were trying to manipulate yourself?

\n\n

I am generally skeptical of people who claim that one bias can be used to counteract another. It sounds to me like an automobile mechanic who says that the motor is broken on your right windshield wiper, but instead of fixing it, they’ll just break your left windshield wiper to balance things out. This is the sort of cleverness that leads to shooting yourself in the foot. Whatever the solution, it ought to involve believing true things, rather than believing you believe things that you believe are false.

\n\n

Can you prevent the happy death spiral by restricting your admiration of Science to a narrow domain? Part of the happy death spiral is seeing the Great Idea everywhere—thinking about how Communism could cure cancer if it were only given a chance. Probably the single most reliable sign of a cult guru is that the guru claims expertise, not in one area, not even in a cluster of related areas, but in everything. The guru knows what cult members should eat, wear, do for a living; who they should have sex with; which art they should look at; which music they should listen to . . .

\n\n

Unfortunately for this plan, most people fail miserably when they try to describe the neat little box that science has to stay inside. The usual trick, “Hey, science won’t cure cancer,” isn’t going to fly. “Science has nothing to say about a parent’s love for their child”—sorry, that’s simply false. If you try to sever science from e.g. parental love, you aren’t just denying cognitive science and evolutionary psychology. You’re also denying Martine Rothblatt’s founding of United Therapeutics to seek a cure for her daughter’s pulmonary hypertension.2 Science is legitimately related, one way or another, to just about every important facet of human existence.

\n\n

All right, so what’s an example of a false nice claim you could make about science?

\n\n

One false claim, in my humble opinion, is that science is so wonderful that scientists shouldn’t even try to take ethical responsibility for their work—it will turn out well in the end regardless. It appears to me that this misunderstands the process whereby science benefits humanity. Scientists are human; they have prosocial concerns just like most other people, and this is at least part of why science ends up doing more good than evil.

\n\n

But that point is, evidently, not beyond dispute. So here’s a simpler false nice claim: “A cancer patient can be cured just through the publishing of enough journal papers.” Or: “Sociopaths could become fully normal, if they just committed themselves to never believing anything without replicated experimental evidence with p < 0.05.”

\n\n

The way to avoid believing such statements isn’t an affective cap, deciding that science is only slightly nice. Nor searching for reasons to believe that publishing journal articles causes cancer. Nor believing that science has nothing to say about cancer one way or the other.

\n\n

Rather, if you know with enough specificity how science works, then you know that while it may be possible for “science to cure cancer,” a cancer patient writing journal papers isn’t going to experience a miraculous remission. That specific proposed chain of cause and effect is not going to work out.

\n\n

The happy death spiral is only an emotional problem because of a perceptual problem, the halo effect, that makes us more likely to accept future positive claims once we’ve accepted an initial positive claim. We can’t get rid of this effect just by wishing; it will probably always influence us a little. But we can manage to slow down, stop, consider each additional nice claim as an additional burdensome detail, and focus on the specific points of the claim apart from its positiveness.

\n\n

What if a specific nice claim “can’t be disproven” but there are arguments “both for and against” it? Actually these are words to be wary of in general, because often this is what people say when they’re rehearsing the evidence or avoiding the real weak points. Given the danger of the happy death spiral, it makes sense to try to avoid being happy about unsettled claims—to avoid making them into a source of yet more positive affect about something you liked already.

\n\n

The happy death spiral is only a big emotional problem because of the overly positive feedback, the ability for the process to go critical. You may not be able to eliminate the halo effect entirely, but you can apply enough critical reasoning to keep the halos subcritical—make sure that the resonance dies out rather than exploding.

\n\n

You might even say that the whole problem starts with people not bothering to critically examine every additional burdensome detail—demanding sufficient evidence to compensate for complexity, searching for flaws as well as support, invoking curiosity—once they’ve accepted some core premise. Without the conjunction fallacy, there might still be a halo effect, but there wouldn’t be a happy death spiral.3
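To make the conjunction point concrete, here is a minimal numerical sketch of my own (all probabilities are made-up assumptions, not figures from the text): every additional claim multiplies the joint probability by a factor of at most one, so elaboration can only make the total story less probable.

```python
# Illustrative sketch, assuming the core premise and two added details are
# independent claims with made-up probabilities. Each extra detail multiplies
# in a factor <= 1, so the joint probability falls as the story grows.
p_core = 0.9        # made-up probability of the core premise
p_detail_1 = 0.8    # made-up probability of the first added detail
p_detail_2 = 0.7    # made-up probability of the second added detail

print(p_core)                            # 0.9
print(p_core * p_detail_1)               # 0.72
print(p_core * p_detail_1 * p_detail_2)  # 0.504 -- more detail, less probable
```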

\n\n

Even on the nicest Nice Thingies in the known universe, a perfect rationalist who demanded exactly the necessary evidence for every additional (positive) claim would experience no affective resonance. You can’t do this, but you can stay close enough to rational to keep your happiness from spiraling out of control.4

\n\n

Stuart Armstrong gives closely related advice:5

\n\n
\n \n\n

Cut up your Great Thingy into smaller independent ideas, and treat them as independent.

\n\n

For instance a marxist would cut up Marx’s Great Thingy into a theory of value of labour, a theory of the political relations between classes, a theory of wages, a theory on the ultimate political state of mankind. Then each of them should be assessed independently, and the truth or falsity of one should not halo on the others. If we can do that, we should be safe from the spiral, as each theory is too narrow to start a spiral on its own.

\n
\n\n

This, metaphorically, is like keeping subcritical masses of plutonium from coming together. Three Great Ideas are far less likely to drive you mad than one Great Idea. Armstrong’s advice also helps promote specificity: As soon as someone says, “Publishing enough papers can cure your cancer,” you ask, “Is that a benefit of the experimental method, and if so, at which stage of the experimental process is the cancer cured? Or is it a benefit of science as a social process, and if so, does it rely on individual scientists wanting to cure cancer, or can they be self-interested?” Hopefully this leads you away from the good or bad feeling, and toward noticing the confusion and lack of support.

\n\n

To summarize, you do avoid a Happy Death Spiral by:

  1. Splitting the Great Idea into parts;
  2. Treating every additional detail as burdensome;
  3. Thinking about the specifics of the causal chain instead of the good or bad feelings;
  4. Not rehearsing evidence; and
  5. Not adding happiness from claims that “you can’t prove are wrong”;

but not by:

  1. Refusing to admire anything too much;
  2. Conducting a biased search for negative points until you feel unhappy again; or
  3. Forcibly shoving an idea into a safe little box.
\n \n\n

1Bacon didn’t singlehandedly invent science, of course, but he did contribute, and may have been the first to realize the power.

\n\n

2Successfully, I might add.

\n\n

3For more background, see “Burdensome Details,” “How Much Evidence Does it Take?”, and “Occam’s Razor” in the previous volume, Map and Territory.

\n\n

4The really dangerous cases are the ones where any criticism of any positive claim about the Great Thingy feels bad or is socially unacceptable. Arguments are soldiers; any positive claim is a soldier on our side; stabbing your soldiers in the back is treason. Then the chain reaction goes supercritical. More on this later.

\n\n

5Source: http://lesswrong.com/lw/lm/affective_death_spirals/gp5.

\n
\n\n" } }, { "_id": "XrzQW69HpidzvBxGr", "title": "Affective Death Spirals", "pageUrl": "https://www.lesswrong.com/posts/XrzQW69HpidzvBxGr/affective-death-spirals", "postedAt": "2007-12-02T16:44:44.000Z", "baseScore": 117, "voteCount": 102, "commentCount": 46, "url": null, "contents": { "documentId": "XrzQW69HpidzvBxGr", "html": "\n\n\n\n \n\n \n\n

Many, many, many are the flaws in human reasoning which lead us to overestimate how well our beloved theory explains the facts. The phlogiston theory of chemistry could explain just about anything, so long as it didn’t have to predict it in advance. And the more phenomena you use your favored theory to explain, the truer your favored theory seems—has it not been confirmed by these many observations? As the theory seems truer, you will be more likely to question evidence that conflicts with it. As the favored theory seems more general, you will seek to use it in more explanations.

\n\n

If you know anyone who believes that Belgium secretly controls the US banking system, or that they can use an invisible blue spirit force to detect available parking spaces, that’s probably how they got started.

\n\n

(Just keep an eye out, and you’ll observe much that seems to confirm this theory . . .)

\n\n

This positive feedback cycle of credulity and confirmation is indeed fearsome, and responsible for much error, both in science and in everyday life.

\n\n

But it’s nothing compared to the death spiral that begins with a charge of positive affect—a thought that feels really good.

\n\n

A new political system that can save the world. A great leader, strong and noble and wise. An amazing tonic that can cure upset stomachs and cancer.

\n\n

Heck, why not go for all three? A great cause needs a great leader. A great leader should be able to brew up a magical tonic or two.

\n\n

The halo effect is that any perceived positive characteristic (such as attractiveness or strength) increases perception of any other positive characteristic (such as intelligence or courage). Even when it makes no sense, or less than no sense.

\n\n

Positive characteristics enhance perception of every other positive characteristic? That sounds a lot like how a fissioning uranium atom sends out neutrons that fission other uranium atoms.

\n\n

Weak positive affect is subcritical; it doesn’t spiral out of control. An attractive person seems more honest, which, perhaps, makes them seem more attractive; but the effective neutron multiplication factor is less than one. Metaphorically speaking. The resonance confuses things a little, but then dies out.
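For readers who want the chain-reaction metaphor in numbers, here is a minimal sketch (mine, not the essay's; the factor values and step count are arbitrary assumptions): a multiplication factor below one dies out, while a factor above one grows without bound.

```python
# Each "nice thought" triggers, on average, k further nice thoughts.
# With k < 1 the cascade fades out (subcritical); with k > 1 it keeps
# growing (supercritical). The values of k below are arbitrary.

def cascade(k, steps=10, initial=1.0):
    """Return the size of each successive generation of positive thoughts."""
    sizes = [initial]
    for _ in range(steps):
        sizes.append(sizes[-1] * k)
    return sizes

print(cascade(0.7))  # subcritical: roughly 1.0, 0.7, 0.49, ... fading toward zero
print(cascade(1.3))  # supercritical: roughly 1.0, 1.3, 1.69, ... blowing up
```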

\n\n

With intense positive affect attached to the Great Thingy, the resonance touches everywhere. A believing Communist sees the wisdom of Marx in every hamburger bought at McDonald’s; in every promotion they’re denied that would have gone to them in a true worker’s paradise; in every election that doesn’t go to their taste; in every newspaper article “slanted in the wrong direction.” Every time they use the Great Idea to interpret another event, the Great Idea is confirmed all the more. It feels better—positive reinforcement—and of course, when something feels good, that, alas, makes us want to believe it all the more.

\n\n

When the Great Thingy feels good enough to make you seek out new opportunities to feel even better about the Great Thingy, applying it to interpret new events every day, the resonance of positive affect is like a chamber full of mousetraps loaded with ping-pong balls.

\n\n

You could call it a “happy attractor,” “overly positive feedback,” a “praise locked loop,” or “funpaper.” Personally I prefer the term “affective death spiral.”

\n\n

Coming up next: How to resist an affective death spiral.1

\n\n
\n \n\n

1Hint: It’s not by refusing to ever admire anything again, nor by keeping the things you admire in safe little restricted magisteria.

\n
\n\n" } }, { "_id": "Tv7WWhgbKMWzEnMmx", "title": "Mere Messiahs", "pageUrl": "https://www.lesswrong.com/posts/Tv7WWhgbKMWzEnMmx/mere-messiahs", "postedAt": "2007-12-02T00:49:18.000Z", "baseScore": 77, "voteCount": 67, "commentCount": 83, "url": null, "contents": { "documentId": "Tv7WWhgbKMWzEnMmx", "html": "

Yesterday I discussed how the halo effect, which causes people to see all positive characteristics as correlated—for example, more attractive individuals are also perceived as more kindly, honest, and intelligent—causes us to admire heroes more if they're super-strong and immune to bullets.  Even though, logically, it takes much more courage to be a hero if you're not immune to bullets.  Furthermore, it reveals more virtue to act courageously to save one life than to save the world.  (Although if you have to do one or the other, of course you should save the world.)

\n

\"The police officer who puts their life on the line with no superpowers\", I said, \"reveals far greater virtue than Superman, who is a mere superhero.\"

\n

But let's be more specific.

\n

John Perry was a New York City police officer who also happened to be an Extropian and transhumanist, which is how I come to know his name.  John Perry was due to retire shortly and start his own law practice, when word came that a plane had slammed into the World Trade Center.  He died when the north tower fell.  I didn't know John Perry personally, so I cannot attest to this from direct knowledge; but very few Extropians believe in God, and I expect that Perry was likewise an atheist.

\n

\n

Which is to say that Perry knew he was risking his very existence, every week on the job.  And it's not, like most people in history, that he knew he had only a choice of how to die, and chose to make it matter—because Perry was a transhumanist; he had genuine hope.  And Perry went out there and put his life on the line anyway.  Not because he expected any divine reward. Not because he expected to experience anything at all, if he died.  But because there were other people in danger, and they didn't have immortal souls either, and his hope of life was worth no more than theirs.

\n

I did not know John Perry.  I do not know if he saw the world this way.  But the fact that an atheist and a transhumanist can still be a police officer, can still run into the lobby of a burning building, says more about the human spirit than all the martyrs who ever hoped of heaven.

\n

So that is one specific police officer...

\n

...and now for the superhero.

\n

As the Christians tell the story, Jesus Christ could walk on water, calm storms, drive out demons with a word.  It must have made for a comfortable life:  Starvation a problem?  Xerox some bread.  Don't like a tree?  Curse it.  Romans a problem?  Sic your Dad on them.  Eventually this charmed life ended, when Jesus voluntarily presented himself for crucifixion.  Being nailed to a cross is not a comfortable way to die.  But as the Christians tell the story, Jesus did this knowing he would come back to life three days later, and then go to Heaven.  What was the threat that moved Jesus to face this temporary suffering followed by eternity in Heaven?  Was it the life of a single person?  Was it the corruption of the church of Judea, or the oppression of Rome?  No: as the Christians tell the story, the eternal fate of every human went on the line before Jesus suffered himself to be temporarily nailed to a cross.

\n

But I do not wish to condemn a man who is not truly so guilty. What if Jesus—no, let's pronounce his name correctly: Yeishu—what if Yeishu of Nazareth never walked on water, and nonetheless defied the church of Judea established by the powers of Rome?

\n

Would that not deserve greater honor than that which adheres to Jesus Christ, who was only a mere messiah?

\n

Alas, somehow it seems greater for a hero to have steel skin and godlike powers.  Somehow it seems to reveal more virtue to die temporarily to save the whole world, than to die permanently confronting a corrupt church.  It seems so common, as if many other people through history had done the same.

\n

Comfortably ensconced two thousand years in the future, we can levy all sorts of criticisms at Yeishu, but Yeishu did what he believed to be right, confronted a church he believed to be corrupt, and died for it.  Without benefit of hindsight, he could hardly be expected to predict the true impact of his life upon the world.  Relative to most other prophets of his day, he was probably relatively more honest, relatively less violent, and relatively more courageous.  If you strip away the unintended consequences, the worst that can be said of Yeishu is that others in history did better.  (Epicurus, Buddha, and Marcus Aurelius all come to mind.)  Yeishu died forever, and—from one perspective—he did it for the sake of honesty.  Fifteen hundred years before science, religious honesty was not an oxymoron.

\n

As Sam Harris said:

\n
\n

\"It is not enough that Jesus was a man who transformed himself to such a degree that the Sermon on the Mount could be his heart's confession.  He also had to be the Son of God, born of a virgin, and destined to return to earth trailing clouds of glory.  The effect of such dogma is to place the example of Jesus forever out of reach.  His teaching ceases to become a set of empirical claims about the linkage between ethics and spiritual insight and instead becomes a gratuitous, and rather gruesome, fairy tale.  According to the dogma of Christianity, becoming just like Jesus is impossible.  One can only enumerate one's sins, believe the unbelievable, and await the end of the world.\"

\n
\n

I severely doubt that Yeishu ever spoke the Sermon on the Mount.  Nonetheless, Yeishu deserves honor.  He deserves more honor than the Christians would grant him.

\n

But since Yeishu probably anticipated his soul would survive, he doesn't deserve more honor than John Perry.

" } }, { "_id": "krMzmSXgvEdf7iBT6", "title": "Superhero Bias", "pageUrl": "https://www.lesswrong.com/posts/krMzmSXgvEdf7iBT6/superhero-bias", "postedAt": "2007-12-01T03:14:44.000Z", "baseScore": 128, "voteCount": 113, "commentCount": 43, "url": null, "contents": { "documentId": "krMzmSXgvEdf7iBT6", "html": "\n\n\n\n \n\n \n\n

Suppose there’s a heavily armed sociopath, a kidnapper with hostages, who has just rejected all requests for negotiation and announced his intent to start killing. In real life, the good guys don’t usually kick down the door when the bad guy has hostages. But sometimes—very rarely, but sometimes—life imitates Hollywood to the extent of genuine good guys needing to smash through a door.

\n\n

Imagine, in two widely separated realities, two heroes who charge into the room, first to confront the villain.

\n\n

In one reality, the hero is strong enough to throw cars, can fire power blasts out of his nostrils, has X-ray hearing, and his skin doesn’t just deflect bullets but annihilates them on contact. The villain has ensconced himself in an elementary school and taken over two hundred children hostage; their parents are waiting outside, weeping.

\n\n

In another reality, the hero is a New York police officer, and the hostages are three prostitutes the villain collected off the street.

\n\n

Consider this question very carefully: Who is the greater hero? And who is more likely to get their own comic book?

\n\n

The halo effect is that perceptions of all positive traits are correlated. Profiles rated higher on scales of attractiveness are also rated higher on scales of talent, kindness, honesty, and intelligence.

\n\n

And so comic-book characters who seem strong and invulnerable, both positive traits, also seem to possess more of the heroic traits of courage and heroism. And yet:

\n\n
\n \n\n

How tough can it be to act all brave and courageous when you’re pretty much invulnerable?

\n\n

—Adam Warren, Empowered, Vol. 1

\n
\n\n

I can’t remember if I read the following point somewhere, or hypothesized it myself: Fame, in particular, seems to combine additively with all other personality characteristics. Consider Gandhi. Was Gandhi the most altruistic person of the twentieth century, or just the most famous altruist? Gandhi faced police with riot sticks and soldiers with guns. But Gandhi was a celebrity, and he was protected by his celebrity. What about the others in the march, the people who faced riot sticks and guns even though there wouldn’t be international headlines if they were put in the hospital or gunned down?

\n\n

What did Gandhi think of getting the headlines, the celebrity, the fame, the place in history, becoming the archetype for non-violent resistance, when he took less risk than any of the people marching with him? How did he feel when one of those anonymous heroes came up to him, eyes shining, and told Gandhi how wonderful he was? Did Gandhi ever visualize his world in those terms? I don’t know; I’m not Gandhi.

\n\n

This is not in any sense a criticism of Gandhi. The point of non-violent resistance is not to show off your courage. That can be done much more easily by going over Niagara Falls in a barrel. Gandhi couldn’t help being somewhat-but-not-entirely protected by his celebrity. And Gandhi’s actions did take courage—not as much courage as marching anonymously, but still a great deal of courage.

\n\n

The bias I wish to point out is that Gandhi’s fame score seems to get perceptually added to his justly accumulated altruism score. When you think about nonviolence, you think of Gandhi—not an anonymous protestor in one of Gandhi’s marches who faced down riot clubs and guns, and got beaten, and had to be taken to the hospital, and walked with a limp for the rest of her life, and no one ever remembered her name.

\n\n

Similarly, which is greater—to risk your life to save two hundred children, or to risk your life to save three adults?

\n\n

The answer depends on what one means by greater. If you ever have to choose between saving two hundred children and saving three adults, then choose the former. “Whoever saves a single life, it is as if he had saved the whole world” may be a fine applause light, but it’s terrible moral advice if you’ve got to pick one or the other. So if you mean “greater” in the sense of “Which is more important?” or “Which is the preferred outcome?” or “Which should I choose if I have to do one or the other?” then it is greater to save two hundred than three.

\n\n

But if you ask about greatness in the sense of revealed virtue, then someone who would risk their life to save only three lives reveals more courage than someone who would risk their life to save two hundred but not three.

\n\n

This doesn’t mean that you can deliberately choose to risk your life to save three adults, and let the two hundred schoolchildren go hang, because you want to reveal more virtue. Someone who risks their life because they want to be virtuous has revealed far less virtue than someone who risks their life because they want to save others. Someone who chooses to save three lives rather than two hundred lives, because they think it reveals greater virtue, is so selfishly fascinated with their own “greatness” as to have committed the moral equivalent of manslaughter.

\n\n

It’s one of those wu wei scenarios: You cannot reveal virtue by trying to reveal virtue. Given a choice between a safe method to save the world which involves no personal sacrifice or discomfort, and a method that risks your life and requires you to endure great privation, you cannot become a hero by deliberately choosing the second path. There is nothing heroic about wanting to look like a hero. It would be a lost purpose.

\n\n

Truly virtuous people who are genuinely trying to save lives, rather than trying to reveal virtue, will constantly seek to save more lives with less effort, which means that less of their virtue will be revealed. It may be confusing, but it’s not contradictory.

\n\n

But we cannot always choose to be invulnerable to bullets. After we’ve done our best to reduce risk and increase scope, any remaining heroism is well and truly revealed.

\n\n

The police officer who puts their life on the line with no superpowers, no X-Ray vision, no super-strength, no ability to fly, and above all no invulnerability to bullets, reveals far greater virtue than Superman—who is a mere superhero.

\n\n" } }, { "_id": "ACGeaAk6KButv2xwQ", "title": "The Halo Effect", "pageUrl": "https://www.lesswrong.com/posts/ACGeaAk6KButv2xwQ/the-halo-effect", "postedAt": "2007-11-30T00:58:56.000Z", "baseScore": 81, "voteCount": 73, "commentCount": 57, "url": null, "contents": { "documentId": "ACGeaAk6KButv2xwQ", "html": "\n\n\n\n \n\n \n\n

The affect heuristic is how an overall feeling of goodness or badness contributes to many other judgments, whether it’s logical or not, whether you’re aware of it or not. Subjects told about the benefits of nuclear power are likely to rate it as having fewer risks; stock analysts rating unfamiliar stocks judge them as generally good or generally bad—low risk and high returns, or high risk and low returns—in defiance of ordinary economic theory, which says that risk and return should correlate positively.

\n\n

The halo effect is the manifestation of the affect heuristic in social psychology. Robert Cialdini summarizes:1

\n\n
\n \n\n

Research has shown that we automatically assign to good-looking individuals such favorable traits as talent, kindness, honesty, and intelligence (for a review of this evidence, see Eagly, Ashmore, Makhijani, and Longo, 1991). Furthermore, we make these judgments without being aware that physical attractiveness plays a role in the process. Some consequences of this unconscious assumption that “good-looking equals good” scare me. For example, a study of the 1974 Canadian federal elections found that attractive candidates received more than two and a half times as many votes as unattractive candidates (Efran and Patterson, 1976). Despite such evidence of favoritism toward handsome politicians, follow-up research demonstrated that voters did not realize their bias. In fact, 73 percent of Canadian voters surveyed denied in the strongest possible terms that their votes had been influenced by physical appearance; only 14 percent even allowed for the possibility of such influence (Efran and Patterson, 1976). Voters can deny the impact of attractiveness on electability all they want, but evidence has continued to confirm its troubling presence (Budesheim and DePaola, 1994).

\n\n

A similar effect has been found in hiring situations. In one study, good grooming of applicants in a simulated employment interview accounted for more favorable hiring decisions than did job qualifications—this, even though the interviewers claimed that appearance played a small role in their choices (Mack and Rainey, 1990). The advantage given to attractive workers extends past hiring day to payday. Economists examining US and Canadian samples have found that attractive individuals get paid an average of 12–14 percent more than their unattractive coworkers (Hamermesh and Biddle, 1994).

\n\n

Equally unsettling research indicates that our judicial process is similarly susceptible to the influences of body dimensions and bone structure. It now appears that good-looking people are likely to receive highly favorable treatment in the legal system (see Castellow, Wuensch, and Moore, 1991; and Downs and Lyons, 1990, for reviews). For example, in a Pennsylvania study (Stewart, 1980), researchers rated the physical attractiveness of 74 separate male defendants at the start of their criminal trials. When, much later, the researchers checked court records for the results of these cases, they found that the handsome men had received significantly lighter sentences. In fact, attractive defendants were twice as likely to avoid jail as unattractive defendants. In another study—this one on the damages awarded in a staged negligence trial—a defendant who was better looking than his victim was assessed an average amount of $5,623; but when the victim was the more attractive of the two, the average compensation was $10,051. What’s more, both male and female jurors exhibited the attractiveness-based favoritism (Kulka and Kessler, 1978).

\n\n

Other experiments have demonstrated that attractive people are more likely to obtain help when in need (Benson, Karabenic, and Lerner, 1976) and are more persuasive in changing the opinions of an audience (Chaiken, 1979) . . .

\n
\n\n

The influence of attractiveness on ratings of intelligence, honesty, or kindness is a clear example of bias—especially when you judge these other qualities based on fixed text—because we wouldn’t expect judgments of honesty and attractiveness to conflate for any legitimate reason. On the other hand, how much of my perceived intelligence is due to my honesty? How much of my perceived honesty is due to my intelligence? Finding the truth, and saying the truth, are not as widely separated in nature as looking pretty and looking smart . . .

\n\n

But these studies on the halo effect of attractiveness should make us suspicious that there may be a similar halo effect for kindness, or intelligence. Let’s say that you know someone who not only seems very intelligent, but also honest, altruistic, kindly, and serene. You should be suspicious that some of these perceived characteristics are influencing your perception of the others. Maybe the person is genuinely intelligent, honest, and altruistic, but not all that kindly or serene. You should be suspicious if the people you know seem to separate too cleanly into devils and angels.

\n\n

And—I know you don’t think you have to do it, but maybe you should—be just a little more skeptical of the more attractive political candidates.

\n\n
\n \n\n

1Robert B. Cialdini, Influence: Science and Practice (Boston: Allyn & Bacon, 2001).

\n
\n\n" } }, { "_id": "5u5THLyRkTpPHiaG5", "title": "Unbounded Scales, Huge Jury Awards, & Futurism", "pageUrl": "https://www.lesswrong.com/posts/5u5THLyRkTpPHiaG5/unbounded-scales-huge-jury-awards-and-futurism", "postedAt": "2007-11-29T07:45:53.000Z", "baseScore": 84, "voteCount": 64, "commentCount": 10, "url": null, "contents": { "documentId": "5u5THLyRkTpPHiaG5", "html": "

“Psychophysics,” despite the name, is the respectable field that links physical effects to sensory effects. If you dump acoustic energy into air—make noise—then how loud does that sound to a person, as a function of acoustic energy? How much more acoustic energy do you have to pump into the air, before the noise sounds twice as loud to a human listener? It’s not twice as much; more like eight times as much.
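A hedged worked example of the loudness figure (my own arithmetic, assuming a Stevens-style power law that the text does not name): if perceived loudness grows as acoustic energy raised to a power k, then needing roughly eight times the energy for twice the loudness corresponds to k of about one third.

```python
import math

# Assumption: loudness is proportional to energy ** k (a power law).
# "Twice as loud takes about eight times the energy" pins down k:
k = math.log(2) / math.log(8)               # about 0.333
energy_ratio = 2 ** (1 / k)                 # energy multiple needed to double loudness
print(round(k, 3), round(energy_ratio, 1))  # 0.333 8.0
```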

Acoustic energy and photons are straightforward to measure. When you want to find out how loud an acoustic stimulus sounds, how bright a light source appears, you usually ask the listener or watcher. This can be done using a bounded scale from “very quiet” to “very loud,” or “very dim” to “very bright.” You can also use an unbounded scale, whose zero is “not audible at all” or “not visible at all,” but which increases from there without limit. When you use an unbounded scale, the observer is typically presented with a constant stimulus, the modulus, which is given a fixed rating. For example, a sound that is assigned a loudness of 10. Then the observer can indicate a sound twice as loud as the modulus by writing 20.

And this has proven to be a fairly reliable technique. But what happens if you give subjects an unbounded scale, but no modulus? Zero to infinity, with no reference point for a fixed value? Then they make up their own modulus, of course. The ratios between stimuli will continue to correlate reliably between subjects. Subject A says that sound X has a loudness of 10 and sound Y has a loudness of 15. If subject B says that sound X has a loudness of 100, then it’s a good guess that subject B will assign loudness in the vicinity of 150 to sound Y. But if you don’t know what subject C is using as their modulus—their scaling factor—then there’s no way to guess what subject C will say for sound X. It could be 1. It could be 1,000.
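Here is a minimal sketch (illustrative only; the ratios and moduli are made-up) of why the ratios stay consistent across subjects while the absolute numbers do not: each subject reports the same underlying ratios scaled by a private, arbitrary modulus.

```python
# Each subject's rating = (underlying ratio) * (that subject's private modulus).
# Ratios between sounds are preserved within a subject; absolute numbers are
# not comparable across subjects. All values here are invented for illustration.
true_ratios = {"X": 1.0, "Y": 1.5}  # sound Y is 1.5 times as loud as sound X

def subject_ratings(modulus):
    return {name: ratio * modulus for name, ratio in true_ratios.items()}

a = subject_ratings(10)   # {'X': 10.0, 'Y': 15.0}
b = subject_ratings(100)  # {'X': 100.0, 'Y': 150.0}
c = subject_ratings(1)    # {'X': 1.0, 'Y': 1.5} -- no way to guess this in advance
print(a["Y"] / a["X"], b["Y"] / b["X"], c["Y"] / c["X"])  # 1.5 1.5 1.5
```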

For a subject rating a single sound, on an unbounded scale, without a fixed standard of comparison, nearly all the variance is due to the arbitrary choice of modulus, rather than the sound itself.

“Hm,” you think to yourself, “this sounds an awful lot like juries deliberating on punitive damages. No wonder there’s so much variance!” An interesting analogy, but how would you go about demonstrating it experimentally?

Kahneman et al. presented 867 jury-eligible subjects with descriptions of legal cases (e.g., a child whose clothes caught on fire) and asked them to either

  1. Rate the outrageousness of the defendant’s actions, on a bounded scale, 
  2. Rate the degree to which the defendant should be punished, on a bounded scale, or  
  3. Assign a dollar value to punitive damages.1

And, lo and behold, while subjects correlated very well with each other in their outrage ratings and their punishment ratings, their punitive damages were all over the map. Yet subjects’ rank-ordering of the punitive damages—their ordering from lowest award to highest award—correlated well across subjects.

If you asked how much of the variance in the “punishment” scale could be explained by the specific scenario—the particular legal case, as presented to multiple subjects—then the answer, even for the raw scores, was 0.49. For the rank orders of the dollar responses, the amount of variance predicted was 0.51. For the raw dollar amounts, the variance explained was 0.06!

Which is to say: if you knew the scenario presented—the aforementioned child whose clothes caught on fire—you could take a good guess at the punishment rating, and a good guess at the rank-ordering of the dollar award relative to other cases, but the dollar award itself would be completely unpredictable.

Taking the median of twelve randomly selected responses didn’t help much either.

So a jury award for punitive damages isn’t so much an economic valuation as an attitude expression—a psychophysical measure of outrage, expressed on an unbounded scale with no standard modulus.

I observe that many futuristic predictions are, likewise, best considered as attitude expressions. Take the question, “How long will it be until we have human-level AI?” The responses I’ve seen to this are all over the map. On one memorable occasion, a mainstream AI guy said to me, “Five hundred years.” (!!)

Now the reason why time-to-AI is just not very predictable, is a long discussion in its own right. But it’s not as if the guy who said “Five hundred years” was looking into the future to find out. And he can’t have gotten the number using the standard bogus method with Moore’s Law. So what did the number 500 mean?

As far as I can guess, it’s as if I’d asked, “On a scale where zero is ‘not difficult at all,’ how difficult does the AI problem feel to you?” If this were a bounded scale, every sane respondent would mark “extremely hard” at the right-hand end. Everything feels extremely hard when you don’t know how to do it. But instead there’s an unbounded scale with no standard modulus. So people just make up a number to represent “extremely difficult,” which may come out as 50, 100, or even 500. Then they tack “years” on the end, and that’s their futuristic prediction.

“How hard does the AI problem feel?” isn’t the only substitutable question. Others respond as if I’d asked “How positive do you feel about AI?”—except lower numbers mean more positive feelings—and then they also tack “years” on the end. But if these “time estimates” represent anything other than attitude expressions on an unbounded scale with no modulus, I have been unable to determine it.

1Daniel Kahneman, David A. Schkade, and Cass R. Sunstein, “Shared Outrage and Erratic Awards: The Psychology of Punitive Damages,” Journal of Risk and Uncertainty 16 (1 1998): 48–86; Daniel Kahneman, Ilana Ritov, and David Schkade, “Economic Preferences or Attitude Expressions?: An Analysis of Dollar Responses to Public Issues,” Journal of Risk and Uncertainty 19, nos. 1–3 (1999): 203–235.

" } }, { "_id": "3T6p93Mut7G8qdkAs", "title": "Evaluability (And Cheap Holiday Shopping)", "pageUrl": "https://www.lesswrong.com/posts/3T6p93Mut7G8qdkAs/evaluability-and-cheap-holiday-shopping", "postedAt": "2007-11-28T00:37:11.000Z", "baseScore": 89, "voteCount": 78, "commentCount": 63, "url": null, "contents": { "documentId": "3T6p93Mut7G8qdkAs", "html": "

With the expensive part of the Hallowthankmas season now approaching, a question must be looming large in our readers’ minds:

“Dear Overcoming Bias, are there biases I can exploit to be seen as generous without actually spending lots of money?”

I’m glad to report the answer is yes! According to Hsee—in a paper entitled “Less is Better”—if you buy someone a $45 scarf, you are more likely to be seen as generous than if you buy them a $55 coat.1

This is a special case of a more general phenomenon. In an earlier experiment, Hsee asked subjects how much they would be willing to pay for a second-hand music dictionary:2

The gotcha was that some subjects saw both dictionaries side-by-side, while other subjects only saw one dictionary . . .

Subjects who saw only one of these options were willing to pay an average of $24 for Dictionary A and an average of $20 for Dictionary B. Subjects who saw both options, side-by-side, were willing to pay $27 for Dictionary B and $19 for Dictionary A.

Of course, the number of entries in a dictionary is more important than whether it has a torn cover, at least if you ever plan on using it for anything. But if you’re only presented with a single dictionary, and it has 20,000 entries, the number 20,000 doesn’t mean very much. Is it a little? A lot? Who knows? It’s non-evaluable. The torn cover, on the other hand—that stands out. That has a definite affective valence: namely, bad.

Seen side-by-side, though, the number of entries goes from non-evaluable to evaluable, because there are two compatible quantities to be compared. And once the number of entries becomes evaluable, that facet swamps the importance of the torn cover.

From Slovic et al.: Which would you prefer?3

  1. A 29/36 chance to win $2. 
  2. A 7/36 chance to win $9.

While the average prices (equivalence values) placed on these options were $1.25 and $2.11 respectively, their mean attractiveness ratings were 13.2 and 7.5. Both the prices and the attractiveness ratings were elicited in a context where subjects were told that two gambles would be randomly selected from those rated, and they would play the gamble with the higher price or higher attractiveness rating. (So subjects had a motive to assign higher prices, and higher attractiveness ratings, to the gambles they would actually prefer to play.)

The gamble worth more money seemed less attractive, a classic preference reversal. The researchers hypothesized that the dollar values were more compatible with the pricing task, but the probability of payoff was more compatible with attractiveness. So (the researchers thought) why not try to make the gamble’s payoff more emotionally salient—more affectively evaluable—more attractive?

And how did they do this? By adding a very small loss to the gamble. The old gamble had a 7/36 chance of winning $9. The new gamble had a 7/36 chance of winning $9 and a 29/36 chance of losing 5 cents. In the old gamble, you implicitly evaluate the attractiveness of $9. The new gamble gets you to evaluate the attractiveness of winning $9 versus losing 5 cents.

“The results,” said Slovic et al., “exceeded our expectations.” In a new experiment, the simple gamble with a 7/36 chance of winning $9 had a mean attractiveness rating of 9.4, while the complex gamble that included a 29/36 chance of losing 5 cents had a mean attractiveness rating of 14.9.

A follow-up experiment tested whether subjects preferred the old gamble to a certain gain of $2. Only 33% of students preferred the old gamble. Among another group asked to choose between a certain $2 and the new gamble (with the added possibility of a 5 cents loss), fully 60.8% preferred the gamble. After all, $9 isn’t a very attractive amount of money, but $9 / 5 cents is an amazingly attractive win/loss ratio.
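A quick arithmetic check on the gambles above (my own calculation, not from the paper): the added five-cent loss barely changes the expected value, which makes the swing in attractiveness all the more striking.

```python
from fractions import Fraction

# Expected values of the gambles discussed above, computed exactly.
old_gamble = Fraction(7, 36) * 9                               # 7/36 chance of $9
new_gamble = old_gamble - Fraction(29, 36) * Fraction(5, 100)  # plus 29/36 chance of losing $0.05
other_gamble = Fraction(29, 36) * 2                            # 29/36 chance of $2

print(float(old_gamble))    # 1.75
print(float(new_gamble))    # about 1.71 -- the loss costs only ~4 cents of expected value
print(float(other_gamble))  # about 1.61
```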

You can make a gamble more attractive by adding a strict loss! Isn’t psychology fun? This is why no one who truly appreciates the wondrous intricacy of human intelligence wants to design a human-like AI.

Of course, it only works if the subjects don’t see the two gambles side-by-side.


 

Figure 1: Two ice cream cups from Hsee. © 1998 John Wiley & Sons, Ltd.

Similarly, which of the two ice creams in Figure 1 do you think subjects in Hsee’s 1998 study preferred?

Naturally, the answer depends on whether the subjects saw a single ice cream, or the two side-by-side. Subjects who saw a single ice cream were willing to pay $1.66 to Vendor H and $2.26 to Vendor L. Subjects who saw both ice creams were willing to pay $1.85 to Vendor H and $1.56 to Vendor L.

What does this suggest for your holiday shopping? That if you spend $400 on a 16GB iPod Touch, your recipient sees the most expensive MP3 player. If you spend $400 on a Nintendo Wii, your recipient sees the least expensive game machine. Which is better value for the money? Ah, but that question only makes sense if you see the two side-by-side. You’ll think about them side-by-side while you’re shopping, but the recipient will only see what they get.

If you have a fixed amount of money to spend—and your goal is to display your friendship, rather than to actually help the recipient—you’ll be better off deliberately not shopping for value. Decide how much money you want to spend on impressing the recipient, then find the most worthless object which costs that amount. The cheaper the class of objects, the more expensive a particular object will appear, given that you spend a fixed amount. Which is more memorable, a $25 shirt or a $25 candle?

Gives a whole new meaning to the Japanese custom of buying $50 melons, doesn’t it? You look at that and shake your head and say “What is it with the Japanese?” And yet they get to be perceived as incredibly generous, spendthrift even, while spending only $50. You could spend $200 on a fancy dinner and not appear as wealthy as you can by spending $50 on a melon. If only there was a custom of gifting $25 toothpicks or $10 dust specks; they could get away with spending even less.

PS: If you actually use this trick, I want to know what you bought.


1Christopher K. Hsee, “Less Is Better: When Low-Value Options Are Valued More Highly than High-Value Options,” Behavioral Decision Making 11 (2 1998): 107–121.

2Christopher K. Hsee, “The Evaluability Hypothesis: An Explanation for Preference Reversals between Joint and Separate Evaluations of Alternatives,” Organizational Behavior and Human Decision Processes 67 (3 1996): 247–257.

3Slovic et al., “Rational Actors or Rational Fools.”

" } }, { "_id": "Kow8xRzpfkoY7pa69", "title": "The Affect Heuristic", "pageUrl": "https://www.lesswrong.com/posts/Kow8xRzpfkoY7pa69/the-affect-heuristic", "postedAt": "2007-11-27T07:58:44.000Z", "baseScore": 79, "voteCount": 70, "commentCount": 70, "url": null, "contents": { "documentId": "Kow8xRzpfkoY7pa69", "html": "\n\n\n\n \n\n \n\n

The affect heuristic is when subjective impressions of goodness/badness act as a heuristic—a source of fast, perceptual judgments. Pleasant and unpleasant feelings are central to human reasoning, and the affect heuristic comes with lovely biases—some of my favorites.

\n\n

Let’s start with one of the relatively less crazy biases. You’re about to move to a new city, and you have to ship an antique grandfather clock. In the first case, the grandfather clock was a gift from your grandparents on your fifth birthday. In the second case, the clock was a gift from a remote relative and you have no special feelings for it. How much would you pay for an insurance policy that paid out $100 if the clock were lost in shipping? According to Hsee and Kunreuther, subjects stated willingness to pay more than twice as much in the first condition.1 This may sound rational—why not pay more to protect the more valuable object?—until you realize that the insurance doesn’t protect the clock, it just pays if the clock is lost, and pays exactly the same amount for either clock. (And yes, it was stated that the insurance was with an outside company, so it gives no special motive to the movers.)

\n\n

All right, but that doesn’t sound too insane. Maybe you could get away with claiming the subjects were insuring affective outcomes, not financial outcomes—purchase of consolation.

\n\n

Then how about this? Yamagishi showed that subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal.2 Apparently the mental image of a thousand dead bodies is much more alarming, compared to a single person who’s more likely to survive than not.

\n\n

But wait, it gets worse.

\n\n

Suppose an airport must decide whether to spend money to purchase some new equipment, while critics argue that the money should be spent on other aspects of airport safety. Slovic et al. presented two groups of subjects with the arguments for and against purchasing the equipment, with a response scale ranging from 0 (would not support at all) to 20 (very strong support).3 One group saw the measure described as saving 150 lives. The other group saw the measure described as saving 98% of 150 lives. The hypothesis motivating the experiment was that saving 150 lives sounds vaguely good—is that a lot? a little?—while saving 98% of something is clearly very good because 98% is so close to the upper bound of the percentage scale. Lo and behold, saving 150 lives had mean support of 10.4, while saving 98% of 150 lives had mean support of 13.6.

\n\n

Or consider the report of Denes-Raj and Epstein: subjects who were offered an opportunity to win $1 each time they randomly drew a red jelly bean from a bowl often preferred to draw from a bowl with more red beans and a smaller proportion of red beans.4 E.g., 7 in 100 was preferred to 1 in 10.
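The probabilities at stake, written out (a trivial check of my own, not from the study):

```python
# More red beans, but a lower chance of drawing one.
more_red_beans = 7 / 100   # 0.07
fewer_red_beans = 1 / 10   # 0.10
print(more_red_beans < fewer_red_beans)  # True: the bowl with more red beans is the worse bet
```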

\n\n

According to Denes-Raj and Epstein, these subjects reported afterward that even though they knew the probabilities were against them, they felt they had a better chance when there were more red beans. This may sound crazy to you, oh Statistically Sophisticated Reader, but if you think more carefully you’ll realize that it makes perfect sense. A 7% probability versus 10% probability may be bad news, but it’s more than made up for by the increased number of red beans. It’s a worse probability, yes, but you’re still more likely to win, you see. You should meditate upon this thought until you attain enlightenment as to how the rest of the planet thinks about probability.

\n\n

As I discussed in “The Scales of Justice, the Notebook of Rationality,” Finucane et al. found that for nuclear reactors, natural gas, and food preservatives, presenting information about high benefits made people perceive lower risks; presenting information about higher risks made people perceive lower benefits; and so on across the quadrants.5 People conflate their judgments about particular good/bad aspects of something into an overall good or bad feeling about that thing.

\n\n

Finucane et al. also found that time pressure greatly increased the inverse relationship between perceived risk and perceived benefit, consistent with the general finding that time pressure, poor information, or distraction all increase the dominance of perceptual heuristics over analytic deliberation.

\n\n

Ganzach found the same effect in the realm of finance.6 According to ordinary economic theory, return and risk should correlate positively—or to put it another way, people pay a premium price for safe investments, which lowers the return; stocks deliver higher returns than bonds, but have correspondingly greater risk. When judging familiar stocks, analysts’ judgments of risks and returns were positively correlated, as conventionally predicted. But when judging unfamiliar stocks, analysts tended to judge the stocks as if they were generally good or generally bad—low risk and high returns, or high risk and low returns.

\n\n

For further reading I recommend Slovic’s fine summary article, “Rational Actors or Rational Fools: Implications of the Affect Heuristic for Behavioral Economics.”

\n\n
\n \n\n

1Christopher K. Hsee and Howard C. Kunreuther, “The Affection Effect in Insurance Decisions,” Journal of Risk and Uncertainty 20 (2 2000): 141–159.

\n\n

2Kimihiko Yamagishi, “When a 12.86% Mortality Is More Dangerous than 24.14%: Implications for Risk Communication,” Applied Cognitive Psychology 11 (6 1997): 461–554.

\n\n

3Paul Slovic et al., “Rational Actors or Rational Fools: Implications of the Affect Heuristic for Behavioral Economics,” Journal of Socio-Economics 31, no. 4 (2002): 329–342.

\n\n

4Veronika Denes-Raj and Seymour Epstein, “Conflict between Intuitive and Rational Processing: When People Behave against Their Better Judgment,” Journal of Personality and Social Psychology 66 (5 1994): 819–829.

\n\n

5Finucane et al., “The Affect Heuristic in Judgments of Risks and Benefits.”

\n\n

6Yoav Ganzach, “Judging Risk and Return of Financial Assets,” Organizational Behavior and Human Decision Processes 83, no. 2 (2000): 353–370.

\n
\n\n" } }, { "_id": "AZfBrZfBu8Aa2FK9D", "title": "Purpose and Pragmatism", "pageUrl": "https://www.lesswrong.com/posts/AZfBrZfBu8Aa2FK9D/purpose-and-pragmatism", "postedAt": "2007-11-26T06:51:29.000Z", "baseScore": 25, "voteCount": 18, "commentCount": 8, "url": null, "contents": { "documentId": "AZfBrZfBu8Aa2FK9D", "html": "

Followup to: Making Beliefs Pay Rent, Lost Purposes

\n\n

Thus runs the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound?
One says, \"Yes it does, for it makes vibrations in the air.\"
Another says, \"No it does not, for there is no auditory processing in any brain.\"

So begins a long, acrimonious battle...

\n\n

The conventional resolution is that the two are fighting over the definition of a word, and such labels do not have intrinsic definitions, only agreed-upon definitions.

\n\n

Yet if you need to know about the forest for any pragmatic reason - if there is anything you plan on doing with the knowledge - then the answer is no longer a matter of mutual agreement.  If, for example, you need to know whether landmines will be set off by the tree falling, then you cannot make the land mines explode or unexplode by any possible amount of agreement about the meaning of the word "sound".  You can get the whole world to agree, one way or the other, and it still won't make a difference.

\n\n

You find yourself in an unheard-falling-tree dilemma, only when you become curious about a question with no pragmatic use, and no predictive consequences.  Which suggests that you may be playing loose with your purposes.

So does this mean that truth reduces to usefulness?  But this, itself, would be a purpose-loss, a subgoal stomp, a mistaking of the indicator for the indicated.  Usefulness for prediction, and demonstrated powers of manipulation, is one of the best indicators of truth.  This does not mean that usefulness is truth.  You might as well say that the act of driving to the supermarket is eating chocolate.

\n

There is, nonetheless, a deep similarity between the pragmatic and the epistemic arts of rationality, in the matter of keeping your eye on the ball.

\n\n

In pragmatic rationality, keeping your eye on the ball means holding to your purpose:  Being aware of how each act leads to its consequence, and not losing sight of utilities in leaky generalizations about expected utilities.  If you hold firmly in your mind the image of a drained swamp, you will be less likely to get lost in fighting alligators.

\n\n

In epistemic rationality, keeping your eye on the ball means holding to your question:  Being aware of what each indicator says about its indicatee, and not losing sight of the original question in fights over indicators.  If you want to know whether landmines will detonate, you will not get lost in fighting over the meaning of the word "sound".

\n\n

Both cases deal with leaky generalizations about conditional probabilities:  P(Y=y|X=x) is nearly but not quite 1.

\n\n

In the case of pragmatic rationality: driving to the supermarket may almost always get you chocolate, but on some occasions it will not.  If you forget your final purpose and think that x=y then you will not be able to deal with cases where the supermarket is out of chocolate.

\n\n

In the case of epistemic rationality: seeing a "Chocolate for sale" sign in the supermarket may almost always indicate that chocolate is available, but on some occasions it will not.  If you forget your original question and think that  x=y then you will go on arguing "But the sign is up!" even when someone calls out to you, "Hey, they don't have any chocolate today!"

\n\n

This is a deep connection between the human arts of pragmatic and epistemic rationality...

\n\n

...which does not mean they are the same thing.

\n\n" } }, { "_id": "sP2Hg6uPwpfp3jZJN", "title": "Lost Purposes", "pageUrl": "https://www.lesswrong.com/posts/sP2Hg6uPwpfp3jZJN/lost-purposes", "postedAt": "2007-11-25T09:01:50.000Z", "baseScore": 191, "voteCount": 159, "commentCount": 79, "url": null, "contents": { "documentId": "sP2Hg6uPwpfp3jZJN", "html": "

It was in either kindergarten or first grade that I was first asked to pray, given a transliteration of a Hebrew prayer.  I asked what the words meant.  I was told that so long as I prayed in Hebrew, I didn't need to know what the words meant, it would work anyway.

\n\n

That was the beginning of my break with Judaism.

\n\n

As you read this, some young man or woman is sitting at a desk in a university, earnestly studying material they have no intention of ever using, and no interest in knowing for its own sake.  They want a high-paying job, and the high-paying job requires a piece of paper, and the piece of paper requires a previous master's degree, and the master's degree requires a bachelor's degree, and the university that grants the bachelor's degree requires you to take a class in 12th-century knitting patterns to graduate.  So they diligently study, intending to forget it all the moment the final exam is administered, but still seriously working away, because they want that piece of paper.

\n\n

Maybe you realized it was all madness, but I bet you did it anyway.  You didn't have a choice, right?

A recent study here in the Bay Area showed that 80% of teachers in K-5 reported spending less than one hour per week on science, and 16% said they spent no time on science.  Why?  I'm given to understand the proximate cause is the No Child Left Behind Act and similar legislation.  Virtually all classroom time is now spent on preparing for tests mandated at the state or federal level.  I seem to recall (though I can't find the source) that just taking mandatory tests was 40% of classroom time in one school.

\n\n

The old Soviet bureaucracy was famous for being more interested in appearances than reality.  One shoe factory overfulfilled its quota by producing lots of tiny shoes.  Another shoe factory reported cut but unassembled leather as a "shoe".  The superior bureaucrats weren't interested in looking too hard, because they also wanted to report quota overfulfillments.  All this was a great help to the comrades freezing their feet off.

\n\n

It is now being suggested in several sources that an actual majority of published findings in medicine, though "statistically significant with p<0.05", are untrue.  But so long as p<0.05 remains the threshold for publication, why should anyone hold themselves to higher standards, when that requires bigger research grants for larger experimental groups, and decreases the likelihood of getting a publication?  Everyone knows that the whole point of science is to publish lots of papers, just as the whole point of a university is to print certain pieces of parchment, and the whole point of a school is to pass the mandatory tests that guarantee the annual budget.  You don't get to set the rules of the game, and if you try to play by different rules, you'll just lose.

\n\n

(Though for some reason, physics journals require a threshold of p<0.0001.  It's as if they conceive of some other purpose to their existence than publishing physics papers.)

\n\n

There's chocolate at the supermarket, and you can get to the supermarket by driving, and driving requires that you be in the car, which means opening your car door, which needs keys.  If you find there's no chocolate at the supermarket, you won't stand around opening and slamming your car door because the car door still needs opening.  I rarely notice people losing track of plans they devised themselves.

\n\n

It's another matter when incentives must flow through large organizations - or worse, many different organizations and interest groups, some of them governmental.  Then you see behaviors that would mark literal insanity, if they were born from a single mind.  Someone gets paid every time they open a car door, because that's what's measurable; and this person doesn't care whether the driver ever gets paid for arriving at the supermarket, let alone whether the buyer purchases the chocolate, or whether the eater is happy or starving.

\n\n

From a Bayesian perspective, subgoals are epiphenomena of conditional probability functions.  There is no expected utility without utility.  How silly would it be to think that instrumental value could take on a mathematical life of its own, leaving terminal value in the dust?  It's not sane by decision-theoretical criteria of sanity.

\n\n

But consider the No Child Left Behind Act.  The politicians want to look like they're doing something about educational difficulties; the politicians have to look busy to voters this year, not fifteen years later when the kids are looking for jobs.  The politicians are not the consumers of education.  The bureaucrats have to show progress, which means that they're only interested in progress that can be measured this year.  They aren't the ones who'll end up ignorant of science.  The publishers who commission textbooks, and the committees that purchase textbooks, don't sit in the classrooms bored out of their skulls.

\n\n

The actual consumers of knowledge are the children - who can't pay, can't vote, can't sit on the committees.  Their parents care for them, but don't sit in the classes themselves; they can only hold politicians responsible according to surface images of "tough on education".  Politicians are too busy being re-elected to study all the data themselves; they have to rely on surface images of bureaucrats being busy and commissioning studies - it may not work to help any children, but it works to let politicians appear caring.  Bureaucrats don't expect to use textbooks themselves, so they don't care if the textbooks are hideous to read, so long as the process by which they are purchased looks good on the surface.  The textbook publishers have no motive to produce bad textbooks, but they know that the textbook purchasing committee will be comparing textbooks based on how many different subjects they cover, and that the fourth-grade purchasing committee isn't coordinated with the third-grade purchasing committee, so they cram as many subjects into one textbook as possible.  Teachers won't get through a fourth of the textbook before the end of the year, and then the next year's teacher will start over.  Teachers might complain, but they aren't the decision-makers, and ultimately, it's not their future on the line, which puts sharp bounds on how much effort they'll spend on unpaid altruism...

\n\n

It's amazing, when you look at it that way - consider all the lost information and lost incentives - that anything at all remains of the original purpose, gaining knowledge.  Though many educational systems seem to be currently in the process of collapsing into a state not much better than nothing.

\n\n

Want to see the problem really solved?  Make the politicians go to school.

\n\n

A single human mind can track a probabilistic expectation of utility as it flows through the conditional chances of a dozen intermediate events - including nonlocal dependencies, places where the expected utility of opening the car door depends on whether there's chocolate in the supermarket.  But organizations can only reward today what is measurable today, what can be written into legal contract today, and this means measuring intermediate events rather than their distant consequences.  These intermediate measures, in turn, are leaky generalizations - often very leaky.  Bureaucrats are untrustworthy genies, for they do not share the values of the wisher.
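As a sketch of what that tracking looks like when a single mind does it - with invented probabilities and a made-up utility scale - consider:

```python
# Invented numbers and a made-up utility scale, purely for illustration.
U_CHOCOLATE = 10.0                   # terminal utility: actually eating chocolate
p_chocolate_in_stock = 0.9           # the nonlocal fact about the supermarket
p_reach_store_given_drive = 0.95
p_drive_given_door_open = 0.99

def expected_utility_of_opening_door(p_stock):
    # The subgoal's expected utility is *derived* from the terminal utility;
    # it has no mathematical life of its own.
    return (p_drive_given_door_open
            * p_reach_store_given_drive
            * p_stock
            * U_CHOCOLATE)

print(expected_utility_of_opening_door(p_chocolate_in_stock))  # ~8.46: worth opening the door
print(expected_utility_of_opening_door(0.0))                   # 0.0: same act, now pointless

# An organization that pays per door-opening has frozen the left-hand number
# into a contract, and cannot update it when the right-hand fact changes.
```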

\n\n

Miyamoto Musashi said:

"The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit,\nspring, strike or touch the enemy's cutting sword, you must cut the enemy\nin the same movement. It is essential to attain this. If you think only of\nhitting, springing, striking or touching the enemy, you will not be able actually\nto cut him. More than anything, you must be thinking of carrying your movement\nthrough to cutting him. You must thoroughly research this."

(I wish I lived in an era where I could just tell my readers they have to thoroughly research something, without giving insult.)

\n\n

Why would any individual lose track of their purposes in a swordfight?  If someone else had taught them to fight, if they had not generated the entire art from within themselves, they might not understand the reason for parrying at one moment, or springing at another moment; they might not realize when the rules had exceptions, fail to see the times when the usual method won't cut through.

\n\n

The essential thing in the art of epistemic rationality is to understand how every rule is cutting through to the truth in the same movement.  The corresponding essential of pragmatic rationality - decision theory, versus probability theory - is to always see how every expected utility cuts through to utility.  You must thoroughly research this.

\n\n

C. J. Cherryh said:

"Your sword has no blade. It has\nonly your intention. When that goes\nastray you have no weapon."

I have seen many people go astray when they wish to the genie of an imagined AI, dreaming up wish after wish that seems good to them, sometimes with many patches and sometimes without even that pretense of caution.  And they don't jump to the meta-level.  They don't instinctively look-to-purpose, the instinct that started me down the track to atheism at the age of five.  They do not ask, as I reflexively ask, "Why do I think this wish is a good idea?  Will the genie judge likewise?"  They don't see the source of their judgment, hovering behind the judgment as its generator.  They lose track of the ball; they know the ball bounced, but they don't instinctively look back to see where it bounced from - the criterion that generated their judgments.

\n\n

Likewise with people not automatically noticing when supposedly selfish people give altruistic arguments in favor of selfishness, or when supposedly altruistic people give selfish arguments in favor of altruism.

\n\n

People can handle goal-tracking for driving to the supermarket just fine, when it's all inside their own heads, and no genies or bureaucracies or philosophies are involved.  The trouble is that real civilization is immensely more complicated than this.  Dozens of organizations, and dozens of years, intervene between the child suffering in the classroom, and the new-minted college graduate not being very good at their job.  (But will the interviewer or manager notice, if the college graduate is good at looking busy?)  With every new link that intervenes between the action and its consequence, intention has one more chance to go astray.  With every intervening link, information is lost, incentive is lost.  And this bothers most people a lot less than it bothers me, or why were all my classmates willing to say prayers without knowing what they meant?  They didn't feel the same instinct to look-to-the-generator.

\n\n

Can people learn to keep their eye on the ball?  To keep their intention from going astray?  To never spring or strike or touch, without knowing the higher goal they will complete in the same movement?  People do often want to do their jobs, all else being equal.  Can there be such a thing as a sane corporation?  A sane civilization, even?  That's only a distant dream, but it's what I've been getting at with all these blog posts on the flow of intentions (aka expected utility, aka instrumental value) without losing purpose (aka utility, aka terminal value).  Can people learn to feel the flow of parent goals and child goals?  To know consciously, as well as implicitly, the distinction between expected utility and utility?

\n\n

Do you care about threats to your civilization?  The worst metathreat to complex civilization is its own complexity, for that complication leads to the loss of many purposes.

\n\n

I look back, and I see that more than anything, my life has been driven by an exceptionally strong abhorrence to lost purposes.  I hope it can be transformed to a learnable skill.

" } }, { "_id": "4ARaTpNX62uaL86j6", "title": "The Hidden Complexity of Wishes", "pageUrl": "https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden-complexity-of-wishes", "postedAt": "2007-11-24T00:12:33.000Z", "baseScore": 180, "voteCount": 180, "commentCount": 199, "url": null, "contents": { "documentId": "4ARaTpNX62uaL86j6", "html": "

(It has come to my attention that this article is currently being misrepresented as proof that I/MIRI previously advocated that it would be very difficult to get machine superintelligences to understand or predict human values. This would obviously be false, and also, is not what is being argued below. The example in the post below is not about an Artificial Intelligence literally at all! If the post were about what AIs supposedly can't do, the central example would have used an AI! The point that is made below will be about the algorithmic complexity of human values. This point is relevant within a larger argument, because it bears on the complexity of what you need to get an artificial superintelligence to want or value; rather than bearing on what a superintelligence supposedly could not predict or understand. -- EY, May 2024.)

\n

\n
\n\"I wish to live in the locations of my choice, in a physically healthy, uninjured, and apparently normal version of my current body containing my current mental state, a body which will heal from all injuries at a rate three sigmas faster than the average given the medical technology available to me, and which will be protected from any diseases, injuries or illnesses causing disability, pain, or degraded functionality or any sense, organ, or bodily function for more than ten days consecutively or fifteen days in any year...\"
            -- The Open-Source Wish Project, Wish For Immortality 1.1\n
\n

There are three kinds of genies:  Genies to whom you can safely say \"I wish for you to do what I should wish for\"; genies for which no wish is safe; and genies that aren't very powerful or intelligent.

\n

\n

Suppose your aged mother is trapped in a burning building, and it so happens that you're in a wheelchair; you can't rush in yourself.  You could cry, \"Get my mother out of that building!\" but there would be no one to hear.

\n

Luckily you have, in your pocket, an Outcome Pump.  This handy device squeezes the flow of time, pouring probability into some outcomes, draining it from others.

\n

The Outcome Pump is not sentient.  It contains a tiny time machine, which resets time unless a specified outcome occurs.  For example, if you hooked up the Outcome Pump's sensors to a coin, and specified that the time machine should keep resetting until it sees the coin come up heads, and then you actually flipped the coin, you would see the coin come up heads.  (The physicists say that any future in which a \"reset\" occurs is inconsistent, and therefore never happens in the first place - so you aren't actually killing any versions of yourself.)

\n

Whatever proposition you can manage to input into the Outcome Pump, somehow happens, though not in a way that violates the laws of physics.  If you try to input a proposition that's too unlikely, the time machine will suffer a spontaneous mechanical failure before that outcome ever occurs.

\n

You can also redirect probability flow in more quantitative ways using the \"future function\" to scale the temporal reset probability for different outcomes.  If the temporal reset probability is 99% when the coin comes up heads, and 1% when the coin comes up tails, the odds will go from 1:1 to 99:1 in favor of tails.  If you had a mysterious machine that spit out money, and you wanted to maximize the amount of money spit out, you would use reset probabilities that diminished as the amount of money increased.  For example, spitting out $10 might have a 99.999999% reset probability, and spitting out $100 might have a 99.99999% reset probability.  This way you can get an outcome that tends to be as high as possible in the future function, even when you don't know the best attainable maximum.
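If you want to check that arithmetic, the reset mechanism is easy to simulate as rejection sampling: keep redrawing the future, and let each outcome survive only with probability one minus its reset probability.  The code below is my own toy model of the device, not anything specified above.

```python
import random

def outcome_pump_coin(reset_prob, trials=100_000):
    """Toy model of the device: keep redrawing the future until the sampled
    outcome survives its reset roll (probability 1 - reset_prob[outcome])."""
    counts = {"heads": 0, "tails": 0}
    for _ in range(trials):
        while True:
            outcome = random.choice(["heads", "tails"])   # the unaided 1:1 coin
            if random.random() > reset_prob[outcome]:      # this future is not reset
                counts[outcome] += 1
                break
    return counts

counts = outcome_pump_coin({"heads": 0.99, "tails": 0.01})
print(counts["tails"] / counts["heads"])   # ~99: the odds shift from 1:1 to about 99:1 for tails
```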

\n

So you desperately yank the Outcome Pump from your pocket - your mother is still trapped in the burning building, remember? - and try to describe your goal: get your mother out of the building!

\n

The user interface doesn't take English inputs.  The Outcome Pump isn't sentient, remember?  But it does have 3D scanners for the near vicinity, and built-in utilities for pattern matching.  So you hold up a photo of your mother's head and shoulders; match on the photo; use object contiguity to select your mother's whole body (not just her head and shoulders); and define the future function using your mother's distance from the building's center.  The further she gets from the building's center, the less the time machine's reset probability.

\n

You cry \"Get my mother out of the building!\", for luck, and press Enter.

\n

For a moment it seems like nothing happens.  You look around, waiting for the fire truck to pull up, and rescuers to arrive - or even just a strong, fast runner to haul your mother out of the building -

\n

BOOM!  With a thundering roar, the gas main under the building explodes.  As the structure comes apart, in what seems like slow motion, you glimpse your mother's shattered body being hurled high into the air, traveling fast, rapidly increasing its distance from the former center of the building.

\n

On the side of the Outcome Pump is an Emergency Regret Button.  All future functions are automatically defined with a huge negative value for the Regret Button being pressed - a temporal reset probability of nearly 1 - so that the Outcome Pump is extremely unlikely to do anything which upsets the user enough to make them press the Regret Button.  You can't ever remember pressing it.  But you've barely started to reach for the Regret Button (and what good will it do now?) when a flaming wooden beam drops out of the sky and smashes you flat.

\n

Which wasn't really what you wanted, but scores very high in the defined future function...

\n

The Outcome Pump is a genie of the second class.  No wish is safe.

\n

If someone asked you to get their poor aged mother out of a burning building, you might help, or you might pretend not to hear.  But it wouldn't even occur to you to explode the building.  \"Get my mother out of the building\" sounds like a much safer wish than it really is, because you don't even consider the plans that you assign extreme negative values.

\n

Consider again the Tragedy of Group Selectionism: Some early biologists asserted that group selection for low subpopulation sizes would produce individual restraint in breeding; and yet actually enforcing group selection in the laboratory produced cannibalism, especially of immature females.  It's obvious in hindsight that, given strong selection for small subpopulation sizes, cannibals will outreproduce individuals who voluntarily forego reproductive opportunities.  But eating little girls is such an un-aesthetic solution that Wynne-Edwards, Allee, Brereton, and the other group-selectionists simply didn't think of it.  They only saw the solutions they would have used themselves.

\n

Suppose you try to patch the future function by specifying that the Outcome Pump should not explode the building: outcomes in which the building materials are distributed over too much volume, will have ~1 temporal reset probabilities.

\n

So your mother falls out of a second-story window and breaks her neck.  The Outcome Pump took a different path through time that still ended up with your mother outside the building, and it still wasn't what you wanted, and it still wasn't a solution that would occur to a human rescuer.

\n

If only the Open-Source Wish Project had developed a Wish To Get Your Mother Out Of A Burning Building:

\n
\n

\"I wish to move my mother (defined as the woman who shares half my genes and gave birth to me) to outside the boundaries of the building currently closest to me which is on fire; but not by exploding the building; nor by causing the walls to crumble so that the building no longer has boundaries; nor by waiting until after the building finishes burning down for a rescue worker to take out the body...\"

\n
\n

All these special cases, the seemingly unlimited number of required patches, should remind you of the parable of Artificial Addition - programming an Arithmetic Expert System by explicitly adding ever more assertions like \"fifteen plus fifteen equals thirty, but fifteen plus sixteen equals thirty-one instead\".

\n

How do you exclude the outcome where the building explodes and flings your mother into the sky?  You look ahead, and you foresee that your mother would end up dead, and you don't want that consequence, so you try to forbid the event leading up to it.

\n

Your brain isn't hardwired with a specific, prerecorded statement that \"Blowing up a burning building containing my mother is a bad idea.\"  And yet you're trying to prerecord that exact specific statement in the Outcome Pump's future function.  So the wish is exploding, turning into a giant lookup table that records your judgment of every possible path through time.

\n

You failed to ask for what you really wanted.  You wanted your mother to go on living, but you wished for her to become more distant from the center of the building.

\n

Except that's not all you wanted.  If your mother was rescued from the building but was horribly burned, that outcome would rank lower in your preference ordering than an outcome where she was rescued safe and sound.  So you not only value your mother's life, but also her health.

\n

And you value not just her bodily health, but her state of mind. Being rescued in a fashion that traumatizes her - for example, a giant purple monster roaring up out of nowhere and seizing her - is inferior to a fireman showing up and escorting her out through a non-burning route.  (Yes, we're supposed to stick with physics, but maybe a powerful enough Outcome Pump has aliens coincidentally showing up in the neighborhood at exactly that moment.)  You would certainly prefer her being rescued by the monster to her being roasted alive, however.

\n

How about a wormhole spontaneously opening and swallowing her to a desert island?  Better than her being dead; but worse than her being alive, well, healthy, untraumatized, and in continual contact with you and the other members of her social network.

\n

Would it be okay to save your mother's life at the cost of the family dog's life, if it ran to alert a fireman but then got run over by a car?  Clearly yes, but it would be better ceteris paribus to avoid killing the dog.  You wouldn't want to swap a human life for hers, but what about the life of a convicted murderer?  Does it matter if the murderer dies trying to save her, from the goodness of his heart?  How about two murderers?  If the cost of your mother's life was the destruction of every extant copy, including the memories, of Bach's Little Fugue in G Minor, would that be worth it?  How about if she had a terminal illness and would die anyway in eighteen months?

\n

If your mother's foot is crushed by a burning beam, is it worthwhile to extract the rest of her?  What if her head is crushed, leaving her body?  What if her body is crushed, leaving only her head?  What if there's a cryonics team waiting outside, ready to suspend the head?  Is a frozen head a person?  Is Terri Schiavo a person?  How much is a chimpanzee worth?

\n

Your brain is not infinitely complicated; there is only a finite Kolmogorov complexity / message length which suffices to describe all the judgments you would make.  But just because this complexity is finite does not make it small.  We value many things, and no they are not reducible to valuing happiness or valuing reproductive fitness.

\n

There is no safe wish smaller than an entire human morality.  There are too many possible paths through Time.  You can't visualize all the roads that lead to the destination you give the genie.  \"Maximizing the distance between your mother and the center of the building\" can be done even more effectively by detonating a nuclear weapon.  Or, at higher levels of genie power, flinging her body out of the Solar System.  Or, at higher levels of genie intelligence, doing something that neither you nor I would think of, just like a chimpanzee wouldn't think of detonating a nuclear weapon.  You can't visualize all the paths through time, any more than you can program a chess-playing machine by hardcoding a move for every possible board position.

\n

And real life is far more complicated than chess.  You cannot predict, in advance, which of your values will be needed to judge the path through time that the genie takes.  Especially if you wish for something longer-term or wider-range than rescuing your mother from a burning building.

\n

I fear the Open-Source Wish Project is futile, except as an illustration of how not to think about genie problems.  The only safe genie is a genie that shares all your judgment criteria, and at that point, you can just say \"I wish for you to do what I should wish for.\"  Which simply runs the genie's should function.

\n

Indeed, it shouldn't be necessary to say anything.  To be a safe fulfiller of a wish, a genie must share the same values that led you to make the wish. Otherwise the genie may not choose a path through time which leads to the destination you had in mind, or it may fail to exclude horrible side effects that would lead you to not even consider a plan in the first place.  Wishes are leaky generalizations, derived from the huge but finite structure that is your entire morality; only by including this entire structure can you plug all the leaks.

\n

With a safe genie, wishing is superfluous.  Just run the genie.

" } }, { "_id": "Tc2H9KbKRjuDJ3WSS", "title": "Leaky Generalizations", "pageUrl": "https://www.lesswrong.com/posts/Tc2H9KbKRjuDJ3WSS/leaky-generalizations", "postedAt": "2007-11-22T21:16:11.000Z", "baseScore": 58, "voteCount": 55, "commentCount": 31, "url": null, "contents": { "documentId": "Tc2H9KbKRjuDJ3WSS", "html": "

Are apples good to eat?  Usually, but some apples are rotten.

\n\n

Do humans have ten fingers?  Most of us do, but plenty of people have lost a finger and nonetheless qualify as "human".

\n\n

Unless you descend to a level of description far below any macroscopic object - below societies, below people, below fingers, below tendon and bone, below cells, all the way down to particles and fields where the laws are truly universal - then practically every generalization you use in the real world will be leaky.

\n\n

(Though there may, of course, be some exceptions to the above rule...)\n\n

\n\n

Mostly, the way you deal with leaky generalizations is that, well, you just have to deal.  If the cookie market almost always closes at 10pm, except on Thanksgiving it closes at 6pm, and today happens to be National Native American Genocide Day, you'd better show up before 6pm or you won't get a cookie.

\n\n

Our ability to manipulate leaky generalizations is opposed by need for closure, the degree to which we want to say once and for all that humans have fingers, and get frustrated when we have to tolerate continued ambiguity.  Raising the value of the stakes can increase need for closure - which shuts down complexity tolerance when complexity tolerance is most needed.

Life would be complicated even if the things we wanted were simple (they aren't).  The leakiness of leaky generalizations about what-to-do-next would leak in from the leaky structure of the real world.  Or to put it another way:

\n\n

Instrumental values often have no specification which is both compact and local.

\n\n

Suppose there's a box containing a million dollars.  The box is locked, not with an ordinary combination lock, but with a dozen keys controlling a machine that can open the box.  If you know how the machine works, you can deduce which sequences of key-presses will open the box.  There's more than one key sequence that will open it.  But if you press a sufficiently wrong sequence, the machine incinerates the money.  And if you don't know about the machine, there are no simple rules like "Pressing any key three times opens the box" or "Pressing five different keys with no repetitions incinerates the money."

\n\n

There's a compact nonlocal specification of which keys you want to press:  You want to press keys such that they open the box.  You can write a compact computer program that computes which key sequences are good, bad or neutral, but the computer program will need to describe the machine, not just the keys themselves.

\n\n

There's likewise a local noncompact specification of which keys to press: a giant lookup table of the results for each possible key sequence.  It's a very large computer program, but it makes no mention of anything except the keys.

\n\n

But there's no specification of which key sequences are good, bad, or neutral that is both simple and phrased only in terms of the keys themselves.

\n\n

It may be even worse if there are tempting local generalizations which turn out to be leaky.  Pressing most keys three times in a row will open the box, but there's a particular key that incinerates the money if you press it just once.  You might think you had found a perfect generalization - a locally describable class of sequences that always opened the box - when you had merely failed to visualize all the possible paths of the machine, or failed to value all the side effects.
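A toy version of the machine makes the three kinds of specification concrete.  The machine's rule below (key 0 incinerates on a single press; any other key pressed three times in a row opens the box) is my own invention for illustration, echoing the leaky generalization just described.

```python
from itertools import product

KEYS = range(12)

def machine_outcome(seq):
    """Stand-in for the machine's internal dynamics (my invention): key 0
    incinerates the money the instant it is pressed; otherwise any key pressed
    three times in a row opens the box; anything else does nothing."""
    for i, k in enumerate(seq):
        if k == 0:
            return "incinerated"
        if i >= 2 and seq[i] == seq[i - 1] == seq[i - 2]:
            return "open"
    return "nothing"

# Compact *nonlocal* specification: short, but it has to mention the machine.
def good_sequence(seq):
    return machine_outcome(seq) == "open"

# Local *noncompact* specification: a lookup table phrased only in terms of the
# keys themselves.  Even restricted to length-4 sequences it has 12**4 entries.
lookup_table = {seq: machine_outcome(seq) for seq in product(KEYS, repeat=4)}
print(len(lookup_table))            # 20736

# The tempting local generalization "press any key three times" is leaky:
print(machine_outcome((5, 5, 5)))   # open
print(machine_outcome((0, 0, 0)))   # incinerated -- the exception you failed to visualize
```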

\n\n

The machine represents the complexity of the real world.  The openness of the box (which is good) and the incinerator (which is bad) represent the thousand shards of desire that make up our terminal values.  The keys represent the actions and policies and strategies available to us.

\n\n

When you consider how many different ways we value outcomes, and how complicated are the paths we take to get there, it's a wonder that there exists any such thing as helpful ethical advice.  (Of which the strangest of all advices, and yet still helpful, is that "The end does not justify the means.")

\n\n

But conversely, the complicatedness of action need not say anything about the complexity of goals.  You often find people who smile wisely, and say, "Well, morality is complicated, you know, female circumcision is right in one culture and wrong in another, it's not always a bad thing to torture people.  How naive you are, how full of need for closure, that you think there are any simple rules."

\n\n

You can say, unconditionally and flatly, that killing anyone is a huge dose of negative terminal utility.  Yes, even Hitler.  That doesn't mean you shouldn't shoot Hitler.  It means that the net instrumental utility of shooting Hitler carries a giant dose of negative utility from Hitler's death, and a hugely larger dose of positive utility from all the other lives that would be saved as a consequence.

\n\n

Many commit the type error that I warned against in Terminal Values and Instrumental Values, and think that if the net consequential expected utility of Hitler's death is conceded to be positive, then the immediate local terminal utility must also be positive, meaning that the moral principle "Death is always a bad thing" is itself a leaky generalization.  But this is double counting, with utilities instead of probabilities; you're setting up a resonance between the expected utility and the utility, instead of a one-way flow from utility to expected utility.
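To see the bookkeeping, here is the one-way flow as a few lines of arithmetic; the utility numbers are placeholders I've made up, not a claim about the correct magnitudes.

```python
# Placeholder terminal utilities, invented for the sake of the bookkeeping.
U_ONE_DEATH = -1.0                         # stable, local, never revised below
lives_saved_if_hitler_is_shot = 1_000_000  # assumed consequence, for illustration

# One-way flow: expected utilities are assembled *from* terminal utilities.
eu_shoot = U_ONE_DEATH + lives_saved_if_hitler_is_shot * (-U_ONE_DEATH)
eu_hold_fire = lives_saved_if_hitler_is_shot * U_ONE_DEATH

print(eu_shoot, eu_hold_fire)   # 999999.0 -1000000.0: shoot, on net

# The double-counting error is to notice that eu_shoot > 0 and then go back and
# flip U_ONE_DEATH positive for Hitler's death -- feeding the sum into its own terms.
```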

\n\n

Or maybe it's just the urge toward a one-sided policy debate: the best policy must have no drawbacks.

\n\n

In my moral philosophy, the local negative utility of Hitler's death is stable, no matter what happens to the external consequences and hence to the expected utility.

\n\n

Of course, you can set up a moral argument that it's inherently a good thing to punish evil people, even with capital punishment for sufficiently evil people.  But you can't carry this moral argument by pointing out that the consequence of shooting a man with a leveled gun may be to save other lives.  This is appealing to the value of life, not appealing to the value of death.  If expected utilities are leaky and complicated, it doesn't mean that utilities must be leaky and complicated as well.  They might be!  But it would be a separate argument.

" } }, { "_id": "synsRtBKDeAFuo7e3", "title": "Not for the Sake of Happiness (Alone)", "pageUrl": "https://www.lesswrong.com/posts/synsRtBKDeAFuo7e3/not-for-the-sake-of-happiness-alone", "postedAt": "2007-11-22T03:19:34.000Z", "baseScore": 109, "voteCount": 112, "commentCount": 109, "url": null, "contents": { "documentId": "synsRtBKDeAFuo7e3", "html": "

When I met the futurist Greg Stock some years ago, he argued that the joy of scientific discovery would soon be replaced by pills that could simulate the joy of scientific discovery.  I approached him after his talk and said, "I agree that such pills are probably possible, but I wouldn't voluntarily take them."

\n\n

And Stock said, "But they'll be so much better that the real thing won't be able to compete.  It will just be way more fun for you to take the pills than to do all the actual scientific work."

\n\n

And I said, "I agree that's possible, so I'll make sure never to take them."

\n\n

Stock seemed genuinely surprised by my attitude, which genuinely surprised me.

One often sees ethicists arguing as if all human desires are reducible, in principle, to the desire for ourselves and others to be happy.  (In particular, Sam Harris does this in The End of Faith, which I just finished perusing - though Harris's reduction is more of a drive-by shooting than a major topic of discussion.)

\n\n

This isn't the same as arguing whether all happinesses can be measured on a common utility scale - different happinesses might occupy different scales, or be otherwise non-convertible.  And it's not the same as arguing that it's theoretically impossible to value anything other than your own psychological states, because it's still permissible to care whether other people are happy.

\n\n

The question, rather, is whether we should care about the things that make us happy, apart from any happiness they bring.

\n\n

We can easily list many cases of moralists going astray by caring about things besides happiness.  The various states and countries that still outlaw oral sex make a good example; these legislators would have been better off if they'd said, "Hey, whatever turns you on."  But this doesn't show that all values are reducible to happiness; it just argues that in this particular case it was an ethical mistake to focus on anything else.

\n\n

It is an undeniable fact that we tend to do things that make us happy, but this doesn't mean we should regard the happiness as the only reason for so acting.  First, this would make it difficult to explain how we could care about anyone else's happiness - how we could treat people as ends in themselves, rather than instrumental means of obtaining a warm glow of satisfaction.

\n\n

Second, just because something is a consequence of my action doesn't mean it was the sole justification.  If I'm writing a blog post, and I get a headache, I may take an ibuprofen.  One of the consequences of my action is that I experience less pain, but this doesn't mean it was the only consequence, or even the most important reason for my decision.  I do value the state of not having a headache.  But I can value something for its own sake and also value it as a means to an end.

\n\n

For all value to be reducible to happiness, it's not enough to show that happiness is involved in most of our decisions - it's not even enough to show that happiness is the most important consequent in all of our decisions - it must be the only consequent.  That's a tough standard to meet.  (I originally found this point in a Sober and Wilson paper, not sure which one.)

\n\n

If I claim to value art for its own sake, then would I value art that no one ever saw?  A screensaver running in a closed room, producing beautiful pictures that no one ever saw?  I'd have to say no.  I can't think of any completely lifeless object that I would value as an end, not just a means.  That would be like valuing ice cream as an end in itself, apart from anyone eating it.  Everything I value, that I can think of, involves people and their experiences somewhere along the line.

\n\n

The best way I can put it is that my moral intuition appears to require both the objective and the subjective component to grant full value.

\n\n

The value of scientific discovery requires both a genuine scientific discovery, and a person to take joy in that discovery.  It may seem difficult to disentangle these values, but the pills make it clearer.

\n\n

I would be disturbed if people retreated into holodecks and fell in love with mindless wallpaper.  I would be disturbed even if they weren't aware it was a holodeck, which is an important ethical issue if some agents can potentially transport people into holodecks and substitute zombies for their loved ones without their awareness.  Again, the pills make it clearer:  I'm not just concerned with my own awareness of the uncomfortable fact.  I wouldn't put myself into a holodeck even if I could take a pill to forget the fact afterward.  That's simply not where I'm trying to steer the future.

\n\n

I value freedom:  When I'm deciding where to steer the future, I take into account not only the subjective states that people end up in, but also whether they got there as a result of their own efforts.  The presence or absence of an external puppet master can affect my valuation of an otherwise fixed outcome.  Even if people wouldn't know they were being manipulated, it would matter to my judgment of how well humanity had done with its future.  This is an important ethical issue, if you're dealing with agents powerful enough to helpfully tweak people's futures without their knowledge.

\n\n

So my values are not strictly reducible to happiness:  There are properties I value about the future that aren't reducible to activation levels in anyone's pleasure center; properties that are not strictly reducible to subjective states even in principle.

\n\n

Which means that my decision system has a lot of terminal values, none of them strictly reducible to anything else.  Art, science, love, lust, freedom, friendship...

\n\n

And I'm okay with that.  I value a life complicated enough to be challenging and aesthetic - not just the feeling that life is complicated, but the actual complications - so turning into a pleasure center in a vat doesn't appeal to me.  It would be a waste of humanity's potential, which I value actually fulfilling, not just having the feeling that it was fulfilled.

" } }, { "_id": "fg9fXrHpeaDD6pEPL", "title": "Truly Part Of You", "pageUrl": "https://www.lesswrong.com/posts/fg9fXrHpeaDD6pEPL/truly-part-of-you", "postedAt": "2007-11-21T02:18:23.000Z", "baseScore": 197, "voteCount": 158, "commentCount": 61, "url": null, "contents": { "documentId": "fg9fXrHpeaDD6pEPL", "html": "

A classic paper by Drew McDermott, “Artificial Intelligence Meets Natural Stupidity,” criticized AI programs that would try to represent notions like happiness is a state of mind using a semantic network:

And of course there’s nothing inside the HAPPINESS node; it’s just a naked LISP token with a suggestive English name.

So, McDermott says, “A good test for the disciplined programmer is to try using gensyms in key places and see if he still admires his system. For example, if STATE-OF-MIND is renamed G1073. . .” then we would have IS-A(HAPPINESS, G1073) “which looks much more dubious.”

Or as I would slightly rephrase the idea: If you substituted randomized symbols for all the suggestive English names, you would be completely unable to figure out what G1071(G1072, G1073) meant. Was the AI program meant to represent hamburgers? Apples? Happiness? Who knows? If you delete the suggestive English names, they don’t grow back.
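The test is easy to run mechanically.  A sketch, using node names and relations of my own invention rather than McDermott's:

```python
import itertools

semantic_net = {
    ("IS-A", "HAPPINESS", "STATE-OF-MIND"),
    ("IS-A", "STATE-OF-MIND", "MENTAL-ENTITY"),
    ("CAUSES", "EATING-CHOCOLATE", "HAPPINESS"),
}

# The gensym test: strip the suggestive English names and see what knowledge is left.
_counter = itertools.count(1070)
_gensyms = {}

def gensym(token):
    return _gensyms.setdefault(token, f"G{next(_counter)}")

scrambled = {tuple(gensym(tok) for tok in triple) for triple in semantic_net}
print(sorted(scrambled))
# Something like ('G1070', 'G1071', 'G1072'), ... -- nothing left to admire.
# The apparent content lived in the reader's reaction to the English labels,
# not in the network itself.
```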

Suppose a physicist tells you that “Light is waves,” and you believe the physicist. You now have a little network in your head that says:

 

IS-A(LIGHT, WAVES)

 

As McDermott says, “The whole problem is getting the hearer to notice what it has been told. Not ‘understand,’ but ‘notice.’ ” Suppose that instead the physicist told you, “Light is made of little curvy things.”1 Would you notice any difference of anticipated experience?

How can you realize that you shouldn’t trust your seeming knowledge that “light is waves”? One test you could apply is asking, “Could I regenerate this knowledge if it were somehow deleted from my mind?”

This is similar in spirit to scrambling the names of suggestively named lisp tokens in your AI program, and seeing if someone else can figure out what they allegedly “refer” to. It’s also similar in spirit to observing that an Artificial Arithmetician programmed to record and play back

 

Plus-Of(Seven, Six) = Thirteen

 

can’t regenerate the knowledge if you delete it from memory, until another human re-enters it in the database. Just as if you forgot that “light is waves,” you couldn’t get back the knowledge except the same way you got the knowledge to begin with—by asking a physicist. You couldn’t generate the knowledge for yourself, the way that physicists originally generated it.

The same experiences that lead us to formulate a belief, connect that belief to other knowledge and sensory input and motor output. If you see a beaver chewing a log, then you know what this thing-that-chews-through-logs looks like, and you will be able to recognize it on future occasions whether it is called a “beaver” or not. But if you acquire your beliefs about beavers by someone else telling you facts about “beavers,” you may not be able to recognize a beaver when you see one.

This is the terrible danger of trying to tell an artificial intelligence facts that it could not learn for itself. It is also the terrible danger of trying to tell someone about physics that they cannot verify for themselves. For what physicists mean by “wave” is not “little squiggly thing” but a purely mathematical concept.

As Donald Davidson observes, if you believe that “beavers” live in deserts, are pure white in color, and weigh 300 pounds when adult, then you do not have any beliefs about beavers, true or false. Your belief about “beavers” is not right enough to be wrong.2 If you don’t have enough experience to regenerate beliefs when they are deleted, then do you have enough experience to connect that belief to anything at all? Wittgenstein: “A wheel that can be turned though nothing else moves with it, is not part of the mechanism.”

Almost as soon as I started reading about AI—even before I read McDermott—I realized it would be a really good idea to always ask myself: “How would I regenerate this knowledge if it were deleted from my mind?”

The deeper the deletion, the stricter the test. If all proofs of the Pythagorean Theorem were deleted from my mind, could I re-prove it? I think so. If all knowledge of the Pythagorean Theorem were deleted from my mind, would I notice the Pythagorean Theorem to re-prove? That’s harder to boast, without putting it to the test; but if you handed me a right triangle with sides of length 3 and 4, and told me that the length of the hypotenuse was calculable, I think I would be able to calculate it, if I still knew all the rest of my math.

What about the notion of mathematical proof? If no one had ever told it to me, would I be able to reinvent that on the basis of other beliefs I possess? There was a time when humanity did not have such a concept. Someone must have invented it. What was it that they noticed? Would I notice if I saw something equally novel and equally important? Would I be able to think that far outside the box?

How much of your knowledge could you regenerate? From how deep a deletion? It’s not just a test to cast out insufficiently connected beliefs. It’s a way of absorbing a fountain of knowledge, not just one fact.

A shepherd builds a counting system that works by throwing a pebble into a bucket whenever a sheep leaves the fold, and taking a pebble out whenever a sheep returns. If you, the apprentice, do not understand this system—if it is magic that works for no apparent reason—then you will not know what to do if you accidentally drop an extra pebble into the bucket. That which you cannot make yourself, you cannot remake when the situation calls for it. You cannot go back to the source, tweak one of the parameter settings, and regenerate the output, without the source. If “two plus four equals six” is a brute fact unto you, and then one of the elements changes to “five,” how are you to know that “two plus five equals seven” when you were simply told that “two plus four equals six”?
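The pebble-and-bucket system fits in a few lines of code; this is my own rendering of the parable, not part of the original.

```python
class PebbleBucket:
    """Maintains a correspondence: pebbles in the bucket == sheep outside the fold."""
    def __init__(self):
        self.pebbles = 0
    def sheep_leaves(self):
        self.pebbles += 1
    def sheep_returns(self):
        self.pebbles -= 1
    def all_sheep_home(self):
        return self.pebbles == 0

bucket = PebbleBucket()
bucket.sheep_leaves()
bucket.sheep_leaves()
bucket.pebbles += 1          # the accidentally dropped extra pebble
bucket.sheep_returns()
bucket.sheep_returns()
print(bucket.all_sheep_home())   # False -- a sheep appears to be missing

# If the system is magic to you, you search the hills all night; if you contain
# its source -- "pebbles track sheep" -- you know to take one pebble back out.
```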

If you see a small plant that drops a seed whenever a bird passes it, it will not occur to you that you can use this plant to partially automate the sheep-counter. Though you learned something that the original maker would use to improve on their invention, you can’t go back to the source and re-create it.

When you contain the source of a thought, that thought can change along with you as you acquire new knowledge and new skills. When you contain the source of a thought, it becomes truly a part of you and grows along with you.

Strive to make yourself the source of every thought worth thinking. If the thought originally came from outside, make sure it comes from inside as well. Continually ask yourself: “How would I regenerate the thought if it were deleted?” When you have an answer, imagine that knowledge being deleted as well. And when you find a fountain, see what else it can pour.


1 Not true, by the way.

2 Richard Rorty, “Out of the Matrix: How the Late Philosopher Donald Davidson Showed That Reality Can’t Be an Illusion,” The Boston Globe, 2003, http://archive.boston.com/news/globe/ideas/articles/2003/10/05/out_ of_ the_ matrix/.

" } }, { "_id": "YhgjmCxcQXixStWMC", "title": "Artificial Addition", "pageUrl": "https://www.lesswrong.com/posts/YhgjmCxcQXixStWMC/artificial-addition", "postedAt": "2007-11-20T07:58:50.000Z", "baseScore": 93, "voteCount": 80, "commentCount": 128, "url": null, "contents": { "documentId": "YhgjmCxcQXixStWMC", "html": "

Suppose that human beings had absolutely no idea how they performed arithmetic.  Imagine that human beings had evolved, rather than having learned, the ability to count sheep and add sheep.  People using this built-in ability have no idea how it works, the way Aristotle had no idea how his visual cortex supported his ability to see things.  Peano Arithmetic as we know it has not been invented.  There are philosophers working to formalize numerical intuitions, but they employ notations such as

Plus-Of(Seven, Six) = Thirteen

to formalize the intuitively obvious fact that when you add "seven" plus "six", of course you get "thirteen".

\n\n

In this world, pocket calculators work by storing a giant lookup table of arithmetical facts, entered manually by a team of expert Artificial Arithmeticians, for starting values that range between zero and one hundred.  While these calculators may be helpful in a pragmatic sense, many philosophers argue that they're only simulating addition, rather than really adding.  No machine can really count - that's why humans have to count thirteen sheep before typing "thirteen" into the calculator.  Calculators can recite back stored facts, but they can never know what the statements mean - if you type in "two hundred plus two hundred" the calculator says "Error: Outrange", when it's intuitively obvious, if you know what the words mean, that the answer is "four hundred".
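A sketch of such a calculator.  The table here is built programmatically for brevity; in the parable, every entry is typed in by hand by an expert Artificial Arithmetician.

```python
# The full device stores every sum with operands from zero to one hundred.
ARITHMETIC_FACTS = {(a, b): a + b for a in range(101) for b in range(101)}

def calculator(a, b):
    try:
        return ARITHMETIC_FACTS[(a, b)]
    except KeyError:
        return "Error: Outrange"

print(calculator(7, 6))       # 13
print(calculator(200, 200))   # Error: Outrange -- it can only play back stored facts
```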

Philosophers, of course, are not so naive as to be taken in by these intuitions.  Numbers are really a purely formal system - the label "thirty-seven" is meaningful, not because of any inherent property of the words themselves, but because the label refers to thirty-seven sheep in the external world.  A number is given this referential property by its semantic network of relations to other numbers.  That's why, in computer programs, the LISP token for "thirty-seven" doesn't need any internal structure - it's only meaningful because of reference and relation, not some computational property of "thirty-seven" itself.

\n\n

No one has ever developed an Artificial General Arithmetician, though of course there are plenty of domain-specific, narrow Artificial Arithmeticians that work on numbers between "twenty" and "thirty", and so on.  And if you look at how slow progress has been on numbers in the range of "two hundred", then it becomes clear that we're not going to get Artificial General Arithmetic any time soon.  The best experts in the field estimate it will be at least a hundred years before calculators can add as well as a human twelve-year-old.

\n\n

But not everyone agrees with this estimate, or with merely conventional beliefs about Artificial Arithmetic.  It's common to hear statements such as the following:

\n\n\n\n

There is more than one moral to this parable, and I have told it with different morals in different contexts.  It illustrates the idea of levels of organization, for example - a CPU can add two large numbers because the numbers aren't black-box opaque objects, they're ordered structures of 32 bits.

\n\n

But for purposes of overcoming bias, let us draw two morals:

\n\n\n\n

Lest anyone accuse me of generalizing from fictional evidence, both lessons may be drawn from the real history of Artificial Intelligence as well.

\n\n

The first danger is the object-level problem that the AA devices ran into: they functioned as tape recorders playing back "knowledge" generated from outside the system, using a process they couldn't capture internally.  A human could tell the AA device that "twenty-one plus sixteen equals thirty-seven", and the AA devices could record this sentence and play it back, or even pattern-match "twenty-one plus sixteen" to output "thirty-seven!", but the AA devices couldn't generate such knowledge for themselves.

\n\n

Which is strongly reminiscent of believing a physicist who tells you "Light is waves", recording the fascinating words and playing them back when someone asks "What is light made of?", without being able to generate the knowledge for yourself.  More on this theme tomorrow.

\n\n

The second moral is the meta-level danger that consumed the Artificial Arithmetic researchers and opinionated bystanders - the danger of dancing around confusing gaps in your knowledge.  The tendency to do just about anything except grit your teeth and buckle down and fill in the damn gap.

\n\n

Whether you say, "It is emergent!", or whether you say, "It is unknowable!", in neither case are you acknowledging that there is a basic insight required which is possessable, but unpossessed by you.

\n\n

How can you know when you'll have a new basic insight?  And there's no way to get one except by banging your head against the problem, learning everything you can about it, studying it from as many angles as possible, perhaps for years.  It's not a pursuit that academia is set up to permit, when you need to publish at least one paper per month.  It's certainly not something that venture capitalists will fund.  You want to either go ahead and build the system now, or give up and do something else instead.

\n\n

Look at the comments above: none are aimed at setting out on a quest for the missing insight which would make numbers no longer mysterious, make "twenty-seven" more than a black box.  None of the commenters realized that their difficulties arose from ignorance or confusion in their own minds, rather than an inherent property of arithmetic.  They were not trying to achieve a state where the confusing thing ceased to be confusing.

\n\n

If you read Judea Pearl's "Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference" then you will see that the basic insight behind graphical models is indispensable to problems that require it.  (It's not something that fits on a T-Shirt, I'm afraid, so you'll have to go and read the book yourself.  I haven't seen any online popularizations of Bayesian networks that adequately convey the reasons behind the principles, or the importance of the math being exactly the way it is, but Pearl's book is wonderful.)  There were once dozens of "non-monotonic logics" awkwardly trying to capture intuitions such as "If my burglar alarm goes off, there was probably a burglar, but if I then learn that there was a small earthquake near my home, there was probably not a burglar."  With the graphical-model insight in hand, you can give a mathematical explanation of exactly why first-order logic has the wrong properties for the job, and express the correct solution in a compact way that captures all the common-sense details in one elegant swoop.  Until you have that insight, you'll go on patching the logic here, patching it there, adding more and more hacks to force it into correspondence with everything that seems "obviously true".
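For readers who want to see the burglar-and-earthquake intuition fall out of a tiny graphical model, here is a three-node network with illustrative numbers (my choices, not Pearl's); the posterior on "burglar" drops sharply once the earthquake is observed.

```python
from itertools import product

# Illustrative numbers, not Pearl's.
P_BURGLAR, P_QUAKE = 0.001, 0.002
P_ALARM = {                      # P(alarm | burglar, earthquake)
    (True, True): 0.95, (True, False): 0.94,
    (False, True): 0.29, (False, False): 0.001,
}

def joint(b, e, a):
    p = (P_BURGLAR if b else 1 - P_BURGLAR) * (P_QUAKE if e else 1 - P_QUAKE)
    return p * (P_ALARM[(b, e)] if a else 1 - P_ALARM[(b, e)])

def posterior_burglar(alarm=None, quake=None):
    """P(burglar | whatever of {alarm, quake} has been observed)."""
    num = den = 0.0
    for b, e, a in product([True, False], repeat=3):
        if alarm is not None and a != alarm:
            continue
        if quake is not None and e != quake:
            continue
        p = joint(b, e, a)
        den += p
        if b:
            num += p
    return num / den

print(posterior_burglar(alarm=True))              # ~0.37: the alarm alone suggests a burglar
print(posterior_burglar(alarm=True, quake=True))  # ~0.003: the earthquake explains it away
```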

\n\n

You won't know the Artificial Arithmetic problem is unsolvable without its key.  If you don't know the rules, you don't know the rule that says you need to know the rules to do anything.  And so there will be all sorts of clever ideas that seem like they might work, like building an Artificial Arithmetician that can read natural language and download millions of arithmetical assertions from the Internet.

\n\n

And yet somehow the clever ideas never work.  Somehow it always turns out that you \"couldn't see any reason it wouldn't work\" because you were ignorant of the obstacles, not because no obstacles existed.  Like shooting blindfolded at a distant target - you can fire blind shot after blind shot, crying, \"You can't prove to me that I won't hit the center!\"  But until you take off the blindfold, you're not even in the aiming game.  When \"no one can prove to you\" that your precious idea isn't right, it means you don't have enough information to strike a small target in a vast answer space.  Until you know your idea will work, it won't.

\n\n

From the history of previous key insights in Artificial Intelligence, and the grand messes which were proposed prior to those insights, I derive an important real-life lesson:  When the basic problem is your ignorance, clever strategies for bypassing your ignorance lead to shooting yourself in the foot.

" } }, { "_id": "KE8wPzGiX5QPotyS8", "title": "Conjuring An Evolution To Serve You", "pageUrl": "https://www.lesswrong.com/posts/KE8wPzGiX5QPotyS8/conjuring-an-evolution-to-serve-you", "postedAt": "2007-11-19T05:55:56.000Z", "baseScore": 76, "voteCount": 64, "commentCount": 26, "url": null, "contents": { "documentId": "KE8wPzGiX5QPotyS8", "html": "

GreyThumb.blog offers an interesting analogue between research on animal breeding and the fall of Enron.  Before 1995, the way animal breeding worked was that you would take the top individual performers in each generation and breed from them, or their parents.  A cockerel doesn't lay eggs, so you have to observe daughter hens to determine which cockerels to breed.  Sounds logical, right?  If you take the hens who lay the most eggs in each generation, and breed from them, you should get hens who lay more and more eggs.

\n

Behold the awesome power of making evolution work for you!  The power that made butterflies - now constrained to your own purposes!  And it worked, too.  Per-cow milk output in the US doubled between 1905 and 1965, and has doubled again since then.

\n

Yet conjuring Azathoth oft has unintended consequences, as some researchers realized in the 1990s.  In the real world, sometimes you have more than one animal per farm.  You see the problem, right?  If you don't, you should probably think twice before trying to conjure an evolution to serve you - magic is not for the unparanoid.

\n

\n

Selecting the hen who lays the most eggs doesn't necessarily get you the most efficient egg-laying metabolism.  It may get you the most dominant hen, that pecked its way to the top of the pecking order at the expense of other hens.  Individual selection doesn't necessarily work to the benefit of the group, but a farm's productivity is determined by group outputs.

\n

Indeed, for some strange reason, the individual breeding programs which had been so successful at increasing egg production now required hens to have their beaks clipped, or be housed in individual cages, or they would peck each other to death.

\n

While the conditions for group selection are only rarely right in Nature, one can readily impose genuine group selection in the laboratory.  After only 6 generations of artificially imposed group selection - breeding from the hens in the best groups, rather than the best individual hens - average days of survival increased from 160 to 348, and egg mass per bird increased from 5.3 to 13.3 kg.  At 58 weeks of age, the selected line had 20% mortality compared to the control group at 54%.  A commercial line of hens, allowed to grow up with unclipped beaks, had 89% mortality at 58 weeks.

\n

And the fall of Enron?  Jeff Skilling fancied himself an evolution-conjurer, it seems.  (Not that he, like, knew any evolutionary math or anything.)  Every year, every Enron employee's performance would be evaluated, and the bottom 10% would get fired, and the top performers would get huge raises and bonuses.  Unfortunately, as GreyThumb points out:

\n
\n

\"Everyone knows that there are many things you can do in any corporate environment to give the appearance and impression of being productive. Enron's corporate environment was particularly conducive to this: its principal business was energy trading, and it had large densely populated trading floors peopled by high-powered traders that would sit and play the markets all day. There were, I'm sure, many things that a trader could do to up his performance numbers, either by cheating or by gaming the system. This gaming of the system probably included gaming his fellow traders, many of whom were close enough to rub elbows with.

\n

\"So Enron was applying selection at the individual level according to metrics like individual trading performance to a group system whose performance was, like the henhouses, an emergent property of group dynamics as well as a result of individual fitness. The result was more or less the same. Instead of increasing overall productivity, they got mean chickens and actual productivity declined. They were selecting for traits like aggressiveness, sociopathic tendencies, and dishonesty.\"

\n
\n

And the moral of the story is:  Be careful when you set forth to conjure the blind idiot god.  People look at a pretty butterfly (note selectivity) and think:  \"Evolution designed them - how pretty - I should get evolution to do things for me, too!\"  But this is qualitative reasoning, as if evolution were either present or absent.  Applying 10% selection for 10 generations is not going to get you the same amount of cumulative selection pressure as 3.85 billion years of natural selection.

\n

I have previously emphasized that the evolution-of-foxes works at cross-purposes to the evolution-of-rabbits; there is no unitary Evolution God to praise for every beauty of Nature.  Azathoth has ten million hands.  When you conjure, you don't get the evolution, the Maker of Butterflies.  You get an evolution, with characteristics and strength that depend on your exact conjuration.  If you just take everything you see in Nature and attribute it to \"evolution\", you'll start thinking that some cute little conjuration which runs for 20 generations will get you artifacts on the order of butterflies.  Try 3.85 billion years.

\n

Same caveat with the wonders of simulated evolution on computers, producing a radio antenna better than a human design, etcetera.  These are sometimes human-competitive (more often not) when it comes to optimizing a continuous design over 57 performance criteria, or breeding a design with 57 elements.  Anything beyond that, and modern evolutionary algorithms are defeated by the same exponential explosion that consumes the rest of AI.  Yes, evolutionary algorithms have a legitimate place in AI.  Consult a machine-learning expert, who knows when to use them and when not to.  Even biologically inspired genetic algorithms with sexual mixing rarely perform better than beam searches and other non-biologically-inspired techniques on the same problem.

\n

And for this weakness, let us all be thankful.  If the blind idiot god did not take a million years in which to do anything complicated, It would be bloody scary.  3.85 billion years of natural selection produced molecular nanotechnology (cells) and Artificial General Intelligence (brains), which even we humans aren't going to get for a few more decades.  If there were an alien demideity, morality-and-aesthetics-free, often blindly suicidal, capable of wielding nanotech and AGI in real time, I'd put aside all other concerns and figure out how to kill it.  Assuming that I hadn't already been enslaved beyond all desire of escape.  Look at the trouble we're having with bacteria, which go through generations fast enough that their evolutions are learning to evade our antibiotics after only a few decades' respite.

\n

You really don't want to conjure Azathoth at full power.  You really, really don't.  You'll get more than pretty butterflies.

" } }, { "_id": "HnPEpu5eQWkbyAJCT", "title": "The Simple Math of Everything", "pageUrl": "https://www.lesswrong.com/posts/HnPEpu5eQWkbyAJCT/the-simple-math-of-everything", "postedAt": "2007-11-17T22:42:12.000Z", "baseScore": 95, "voteCount": 71, "commentCount": 43, "url": null, "contents": { "documentId": "HnPEpu5eQWkbyAJCT", "html": "

I am not a professional evolutionary biologist.  I only know a few equations, very simple ones by comparison to what can be found in any textbook on evolutionary theory with math, and on one memorable occasion I used one incorrectly.  For me to publish an article in a highly technical ev-bio journal would be as impossible as corporations evolving.  And yet when I'm dealing with almost anyone who's not a professional evolutionary biologist...\n\n

\n\n

It seems to me that there's a substantial advantage in knowing the drop-dead basic fundamental embarrassingly simple mathematics in as many different subjects as you can manage.  Not, necessarily, the high-falutin' complicated damn math that appears in the latest journal articles.  Not unless you plan to become a professional in the field.  But for people who can read calculus, and sometimes just plain algebra, the drop-dead basic mathematics of a field may not take that long to learn.  And it's likely to change your outlook on life more than the math-free popularizations or the highly technical math.

Not Jacobian matrices for frequency-dependent gene selection; just Haldane's calculation of time to fixation.  Not quantum physics; just the wave equation for sound in air.  Not the maximum entropy solution using Lagrange multipliers; just Bayes's Rule.

\n\n

The Simple Math of Everything, written for people who are good at math, might not be all that weighty a volume.  How long does it take to explain Bayes's Rule to someone who's good at math?  Damn would I like to buy that book and send it back in time to my 16-year-old self.  But there's no way I have time to write this book, so I'm tossing the idea out there.
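As a concrete illustration of how little machinery the drop-dead basic math needs, here is Bayes's Rule on the standard screening-test example, sketched in Python.  The numbers (1% prevalence, 80% true-positive rate, 9.6% false-positive rate) are the usual textbook illustration, not anything taken from this post.

    # Bayes's Rule: P(H|E) = P(E|H) * P(H) / P(E)
    def posterior(prior, true_positive_rate, false_positive_rate):
        """Probability the hypothesis is true, given a positive test."""
        p_positive = true_positive_rate * prior + false_positive_rate * (1 - prior)
        return true_positive_rate * prior / p_positive

    # 1% prevalence, 80% hit rate, 9.6% false alarms:
    print(posterior(0.01, 0.80, 0.096))   # ~0.078: only about an 8% chance

That is more or less the entire equation; the rest of the hypothetical chapter would be practice in recognizing when to apply it.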

\n\n

Even in reading popular works on science, there is yet power.  You don't want to end up like those poor souls in that recent interview (which I couldn't Google) where a well-known scientist in field XYZ thinks the universe is 100 billion years old.  But it seems to me that there's substantially more power in pushing until you encounter some basic math.  Not complicated math, just basic math.  F=ma is too simple, though.  You should take the highest low-hanging fruit you can reach.

\n\n

Yes, there are sciences whose soul is not in their math, yet which are nonetheless incredibly important and enlightening.  Evolutionary psychology, for example.  But even there, if you kept pushing until you encountered equations, you would be well-served by that heuristic, even if the equations didn't seem all that enlightening compared to the basic results.

\n\n

I remember when I finally picked up and started reading through my copy of the Feynman Lectures on Physics, even though I couldn't think of any realistic excuse for how this was going to help my AI work, because I just got fed up with not knowing physics.  And - you can guess how this story ends - it gave me a new way of looking at the world, which all my earlier reading in popular physics (including Feynman's QED) hadn't done.  Did that help inspire my AI research?  Hell yes.  (Though it's a good thing I studied neuroscience, evolutionary psychology, evolutionary biology, Bayes, and physics in that order - physics alone would have been terrible inspiration for AI research.)

\n\n

In academia (or so I am given to understand) there's a huge pressure to specialize, to push your understanding of one subject all the way out to the frontier of the latest journal articles, so that you can write your own journal articles and get tenure.  Well, one may certainly have to learn the far math of one field, but why avoid the simple math of others?  Is it too embarrassing to learn just a little math, and then stop?  Is there an unwritten rule which says that once you start learning any math, you are obligated to finish it all?  Could that be why the practice isn't more common?

\n\n

I know that I'm much more embarrassed to know a few simple equations of physics, than I was to know only popular physics.  It feels wronger to know a few simple equations of evolutionary biology than to know only qualitative evolutionary biology.  Even mentioning how useful it's been seems wrong, as if I'm boasting about something that no one should boast about.  It feels like I'm a dilettante - but how would I be diletting less if I hadn't studied even the simple math?

" } }, { "_id": "XC7Kry5q6CD9TyG4K", "title": "No Evolutions for Corporations or Nanodevices", "pageUrl": "https://www.lesswrong.com/posts/XC7Kry5q6CD9TyG4K/no-evolutions-for-corporations-or-nanodevices", "postedAt": "2007-11-17T02:24:09.000Z", "baseScore": 113, "voteCount": 90, "commentCount": 32, "url": null, "contents": { "documentId": "XC7Kry5q6CD9TyG4K", "html": "
"The laws of physics and the rules of math don't cease to apply. That leads me to believe that evolution doesn't stop. That further leads me to believe that nature —bloody in tooth and claw, as some have termed it —will simply be taken to the next level...
"[Getting rid of Darwinian evolution is] like trying to get rid of gravitation. So long as there are limited resources and multiple competing actors capable of passing on characteristics, you have selection pressure."
—Perry Metzger, predicting that the reign of natural selection would continue into the indefinite future.

In evolutionary biology, as in many other fields, it is important to think quantitatively rather than qualitatively. Does a beneficial mutation "sometimes spread, but not always"? Well, a psychic power would be a beneficial mutation, so you'd expect it to spread, right? Yet this is qualitative reasoning, not quantitative—if X is true, then Y is true; if psychic powers are beneficial, they may spread. In Evolutions Are Stupid, I described the equations for a beneficial mutation's probability of fixation, roughly twice the fitness advantage (6% for a 3% advantage). Only this kind of numerical thinking is likely to make us realize that mutations which are only rarely useful are extremely unlikely to spread, and that it is practically impossible for complex adaptations to arise without constant use. If psychic powers really existed, we should expect to see everyone using them all the time—not just because they would be so amazingly useful, but because otherwise they couldn't have evolved in the first place.
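To make that numerical thinking concrete, here is a quick simulation sketch, a simple Wright-Fisher-style model with parameters I chose for illustration (nothing in the text above specifies a model): a single new mutation with a 3% fitness advantage is lost in the great majority of runs, and fixes only around 6% of the time, roughly the 2s figure quoted above.

    import random

    def fixes(N=1000, s=0.03):
        """One run of a haploid Wright-Fisher model: does a single new mutant
        with fitness advantage s ever reach fixation in a population of N?"""
        k = 1  # copies of the mutant allele
        while 0 < k < N:
            p = k * (1 + s) / (k * (1 + s) + (N - k))       # selection-weighted frequency
            k = sum(random.random() < p for _ in range(N))  # sample the next generation
        return k == N

    runs = 1000
    print(sum(fixes() for _ in range(runs)) / runs)   # about 0.06, give or take sampling noise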

"So long as there are limited resources and multiple competing actors capable of passing on characteristics, you have selection pressure." This is qualitative reasoning. How much selection pressure?


While there are several candidates for the most important equation in evolutionary biology, I would pick Price's Equation, which in its simplest formulation reads:

Δz̄ = cov(vᵢ, zᵢ)

change in average characteristic = covariance(relative fitness, characteristic)


This is a very powerful and general formula. For example, a particular gene for height can be the Z, the characteristic that changes, in which case Price's Equation says that the change in the probability of possessing this gene equals the covariance of the gene with reproductive fitness. Or you can consider height in general as the characteristic Z, apart from any particular genes, and Price's Equation says that the change in height in the next generation will equal the covariance of height with relative reproductive fitness.

(At least, this is true so long as height is straightforwardly heritable. If nutrition improves, so that a fixed genotype becomes taller, you have to add a correction term to Price's Equation. If there are complex nonlinear interactions between many genes, you have to either add a correction term, or calculate the equation in such a complicated way that it ceases to enlighten.)

Many enlightenments may be attained by studying the different forms and derivations of Price's Equation. For example, the final equation says that the average characteristic changes according to its covariance with relative fitness, rather than its absolute fitness. This means that if a Frodo gene saves its whole species from extinction, the average Frodo characteristic does not increase, since Frodo's act benefited all genotypes equally and did not covary with relative fitness.
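A minimal numerical sketch of that last point, with toy numbers of my own choosing: when carriers of a characteristic get a personal reproductive edge, the covariance with relative fitness is positive and the characteristic spreads; when a Frodo-style act boosts everyone's fitness equally, the covariance, and therefore the change, is zero.

    from statistics import mean

    def delta_z(char, fitness):
        """Price's Equation, simplest form: the change in the average characteristic
        equals the covariance of the characteristic with *relative* fitness."""
        w_bar = mean(fitness)
        v = [w / w_bar for w in fitness]        # relative fitness
        v_bar, z_bar = mean(v), mean(char)
        return mean((vi - v_bar) * (zi - z_bar) for vi, zi in zip(v, char))

    z = [1, 1, 0, 0]   # which individuals carry the characteristic

    print(delta_z(z, [1.2, 1.2, 1.0, 1.0]))   # ~0.045: carriers outreproduce, frequency rises
    print(delta_z(z, [2.0, 2.0, 2.0, 2.0]))   # 0.0: everyone benefits equally, no change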

It is said that Price became so disturbed with the implications of his equation for altruism that he committed suicide, though he may have had other issues. (Overcoming Bias does not advocate committing suicide after studying Price's Equation.)

One of the enlightenments which may be gained by meditating upon Price's Equation is that "limited resources" and "multiple competing actors capable of passing on characteristics" are not sufficient to give rise to an evolution. "Things that replicate themselves" is not a sufficient condition. Even "competition between replicating things" is not sufficient.

Do corporations evolve? They certainly compete. They occasionally spin off children. Their resources are limited. They sometimes die.

But how much does the child of a corporation resemble its parents? Much of the personality of a corporation derives from key officers, and CEOs cannot divide themselves by fission. Price's Equation only operates to the extent that characteristics are heritable across generations. If great-great-grandchildren don't much resemble their great-great-grandparents, you won't get more than four generations' worth of cumulative selection pressure—anything that happened more than four generations ago will blur itself out. Yes, the personality of a corporation can influence its spinoff—but that's nothing like the heritability of DNA, which is digital rather than analog, and can transmit itself with 10^-8 errors per base per generation.

With DNA you have heritability lasting for millions of generations. That's how complex adaptations can arise by pure evolution—the digital DNA lasts long enough for a gene conveying 3% advantage to spread itself over 768 generations, and then another gene dependent on it can arise. Even if corporations replicated with digital fidelity, they would currently be at most ten generations into the RNA World.
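Where a figure like 768 generations comes from, as a back-of-the-envelope sketch: a common approximation (my assumption here, since the calculation isn't shown in this post) is that a beneficial allele destined to sweep takes on the order of (2/s)·ln(N) generations to go from one copy to fixation, and a population size of 100,000 is likewise my illustrative choice.

    import math

    def sweep_generations(s, N):
        """Rough generations for a beneficial allele with advantage s to sweep a
        population of size N, under the (2/s) * ln(N) logistic approximation."""
        return 2 * math.log(N) / s

    print(round(sweep_generations(0.03, 100_000)))   # 768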

Now, corporations are certainly selected, in the sense that incompetent corporations go bust. This should logically make you more likely to observe corporations with features contributing to competence. And in the same sense, any star that goes nova shortly after it forms, is less likely to be visible when you look up at the night sky. But if an accident of stellar dynamics makes one star burn longer than another star, that doesn't make it more likely that future stars will also burn longer—the feature will not be copied onto other stars. We should not expect future astrophysicists to discover complex internal features of stars which seem designed to help them burn longer. That kind of mechanical adaptation requires much larger cumulative selection pressures than a once-off winnowing.

Think of the principle introduced in Einstein's Arrogance—that the vast majority of the evidence required to think of General Relativity had to go into raising that one particular equation to the level of Einstein's personal attention; the amount of evidence required to raise it from a deliberately considered possibility to 99.9% certainty was trivial by comparison. In the same sense, complex features of corporations which require hundreds of bits to specify, are produced primarily by human intelligence, not a handful of generations of low-fidelity evolution. In biology, the mutations are purely random and evolution supplies thousands of bits of cumulative selection pressure. In corporations, humans offer up thousand-bit intelligently designed complex "mutations", and then the further selection pressure of "Did it go bankrupt or not?" accounts for a handful of additional bits in explaining what you see.

Advanced molecular nanotechnology—the artificial sort, not biology—should be able to copy itself with digital fidelity through thousands of generations. Would Price's Equation thereby gain a foothold?

Correlation is covariance divided by the product of the standard deviations, so if A is highly predictive of B, there can be a strong "correlation" between them even if A is ranging from 0 to 9 and B is only ranging from 50.0001 to 50.0009. Price's Equation runs on covariance of characteristics with reproduction—not correlation! If you can compress variance in characteristics into a tiny band, the covariance goes way down, and so does the cumulative change in the characteristic.
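A short numerical check of that claim, with arbitrary illustrative numbers: map A linearly into a tiny band and the correlation stays at 1, while the covariance, which is what Price's Equation actually runs on, all but vanishes.

    from statistics import mean, pstdev

    def cov(xs, ys):
        mx, my = mean(xs), mean(ys)
        return mean((x - mx) * (y - my) for x, y in zip(xs, ys))

    def corr(xs, ys):
        return cov(xs, ys) / (pstdev(xs) * pstdev(ys))

    A = list(range(10))                         # ranges from 0 to 9
    B = [50.0001 + 0.0008 * a / 9 for a in A]   # ranges from 50.0001 to 50.0009

    print(corr(A, B))   # ~1.0: A predicts B perfectly
    print(cov(A, B))    # ~0.0007: almost nothing for selection to act on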

The Foresight Institute suggests, among other sensible proposals, that the replication instructions for any nanodevice should be encrypted. Moreover, encrypted such that flipping a single bit of the encoded instructions will entirely scramble the decrypted output. If all nanodevices produced are precise molecular copies, and moreover, any mistakes on the assembly line are not heritable because the offspring got a digital copy of the original encrypted instructions for use in making grandchildren, then your nanodevices ain't gonna be doin' much evolving.

You'd still have to worry about prions—self-replicating assembly errors apart from the encrypted instructions, where a robot arm fails to grab a carbon atom that is used in assembling a homologue of itself, and this causes the offspring's robot arm to likewise fail to grab a carbon atom, etc., even with all the encrypted instructions remaining constant. But how much correlation is there likely to be, between this sort of transmissible error, and a higher reproductive rate? Let's say that one nanodevice produces a copy of itself every 1000 seconds, and the new nanodevice is magically more efficient (it not only has a prion, it has a beneficial prion) and copies itself every 999.99999 seconds. It needs one less carbon atom attached, you see. That's not a whole lot of variance in reproduction, so it's not a whole lot of covariance either.

And how often will these nanodevices need to replicate? Unless they've got more atoms available than exist in the solar system, or for that matter, the visible Universe, only a small number of generations will pass before they hit the resource wall. "Limited resources" are not a sufficient condition for evolution; you need the frequently iterated death of a substantial fraction of the population to free up resources. Indeed, "generations" is not so much an integer as an integral over the fraction of the population that consists of newly created individuals.

This is, to me, the most frightening thing about grey goo or nanotechnological weapons—that they could eat the whole Earth and then that would be it, nothing interesting would happen afterward. Diamond is stabler than proteins held together by van der Waals forces, so the goo would only need to reassemble some pieces of itself when an asteroid hit. Even if prions were a powerful enough idiom to support evolution at all—evolution is slow enough with digital DNA!—less than 1.0 generations might pass between when the goo ate the Earth and when the Sun died.

To sum up, if you have all of the following properties:

- entities that replicate,
- substantial variation in their characteristics,
- substantial variation in their reproduction,
- persistent covariance between the characteristics and the reproduction,
- high-fidelity, long-range heritability of the characteristics,
- and frequent birth and death of a substantial fraction of the population, iterated over many generations,

Then you will have significant cumulative selection pressures, enough to produce complex adaptations by the force of evolution.

" } }, { "_id": "gDNrpuwahdRrDJ9iY", "title": "Evolving to Extinction", "pageUrl": "https://www.lesswrong.com/posts/gDNrpuwahdRrDJ9iY/evolving-to-extinction", "postedAt": "2007-11-16T07:18:53.000Z", "baseScore": 143, "voteCount": 122, "commentCount": 44, "url": null, "contents": { "documentId": "gDNrpuwahdRrDJ9iY", "html": "

It is a very common misconception that an evolution works for the good of its species. Can you remember hearing someone talk about two rabbits breeding eight rabbits and thereby "contributing to the survival of their species"? A modern evolutionary biologist would never say such a thing; they'd sooner breed with a rabbit.

It's yet another case where you've got to simultaneously consider multiple abstract concepts and keep them distinct. Evolution doesn't operate on particular individuals; individuals keep whatever genes they're born with. Evolution operates on a reproducing population, a species, over time. There's a natural tendency to think that if an Evolution Fairy is operating on the species, she must be optimizing for the species. But what really changes are the gene frequencies, and frequencies don't increase or decrease according to how much the gene helps the species as a whole. As we shall later see, it's quite possible for a species to evolve to extinction.

Why are boys and girls born in roughly equal numbers? (Leaving aside crazy countries that use artificial gender selection technologies.) To see why this is surprising, consider that 1 male can impregnate 2, 10, or 100 females; it wouldn't seem that you need the same number of males as females to ensure the survival of the species. This is even more surprising in the vast majority of animal species where the male contributes very little to raising the children—humans are extraordinary, even among primates, for their level of paternal investment. Balanced gender ratios are found even in species where the male impregnates the female and vanishes into the mist.

Consider two groups on different sides of a mountain; in group A, each mother gives birth to 2 males and 2 females; in group B, each mother gives birth to 3 females and 1 male. Group A and group B will have the same number of children, but group B will have 50% more grandchildren and 125% more great-grandchildren. You might think this would be a significant evolutionary advantage.
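The arithmetic behind those percentages, spelled out (this just restates the numbers already in the paragraph, under the simplification that only daughters count as future mothers):

    def descendants(daughters_per_mother, children_per_mother=4, generations=3):
        """Descendants of one founding mother after a given number of generations."""
        mothers, born = 1, 0
        for _ in range(generations):
            born = mothers * children_per_mother   # children born in this generation
            mothers *= daughters_per_mother        # daughters become the next mothers
        return born

    group_a = [descendants(2, generations=g) for g in (1, 2, 3)]   # 2 sons, 2 daughters
    group_b = [descendants(3, generations=g) for g in (1, 2, 3)]   # 1 son, 3 daughters
    print(group_a)   # [4, 8, 16]
    print(group_b)   # [4, 12, 36]: 50% more grandchildren, 125% more great-grandchildren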

But consider: The rarer males become, the more reproductively valuable they become—not to the group, but to the individual parent. Every child has one male and one female parent. Then in every generation, the total genetic contribution from all males equals the total genetic contribution from all females. The fewer males, the greater the individual genetic contribution per male. If all the females around you are doing what's good for the group, what's good for the species, and birthing 1 male per 10 females, you can make a genetic killing by birthing all males, each of whom will have (on average) ten times as many grandchildren as their female cousins.

So while group selection ought to favor more girls, individual selection favors equal investment in male and female offspring. Looking at the statistics of a maternity ward, you can see at a glance that the quantitative balance between group selection forces and individual selection forces is overwhelmingly tilted in favor of individual selection in Homo sapiens.

(Technically, this isn't quite a glance. Individual selection favors equal parental investments in male and female offspring. If males cost half as much to birth and/or raise, twice as many males as females will be born at the evolutionarily stable equilibrium. If the same number of males and females were born in the population at large, but males were twice as cheap to birth, then you could again make a genetic killing by birthing more males. So the maternity ward should reflect the balance of parental opportunity costs, in a hunter-gatherer society, between raising boys and raising girls; and you'd have to assess that somehow. But ya know, it doesn't seem all that much more reproductive-opportunity-costly for a hunter-gatherer family to raise a girl, so it's kinda suspicious that around the same number of boys are born as girls.)

Natural selection isn't about groups, or species, or even individuals. In a sexual species, an individual organism doesn't evolve; it keeps whatever genes it's born with. An individual is a once-off collection of genes that will never reappear; how can you select on that? When you consider that nearly all of your ancestors are dead, it's clear that "survival of the fittest" is a tremendous misnomer. "Replication of the fitter" would be more accurate, although technically, fitness is defined only in terms of replication.

Natural selection is really about gene frequencies. To get a complex adaptation, a machine with multiple dependent parts, each new gene as it evolves depends on the other genes being reliably present in its genetic environment. They must have high frequencies. The more complex the machine, the higher the frequencies must be. The signature of natural selection occurring is a gene rising from 0.00001% of the gene pool to 99% of the gene pool. This is the information, in an information-theoretic sense; and this is what must happen for large complex adaptations to evolve.

The real struggle in natural selection is not the competition of organisms for resources; this is an ephemeral thing when all the participants will vanish in another generation. The real struggle is the competition of alleles for frequency in the gene pool. This is the lasting consequence that creates lasting information. The two rams bellowing and locking horns are only passing shadows.

It's perfectly possible for an allele to spread to fixation by outcompeting an alternative allele which was "better for the species". If the Flying Spaghetti Monster magically created a species whose gender mix was perfectly optimized to ensure the survival of the species—the optimal gender mix to bounce back reliably from near-extinction events, adapt to new niches, etcetera—then the evolution would rapidly degrade this species optimum back into the individual-selection optimum of equal parental investment in males and females.

Imagine a "Frodo gene" that sacrifices its vehicle to save its entire species from an extinction event. What happens to the allele frequency as a result? It goes down. Kthxbye.

If species-level extinction threats occur regularly (call this a "Buffy environment") then the Frodo gene will systematically decrease in frequency and vanish, and soon thereafter, so will the species. A hypothetical example? Maybe. If the human species was going to stay biological for another century, it would be a good idea to start cloning Gandhi.

In viruses, there's the tension between individual viruses replicating as fast as possible, versus the benefit of leaving the host alive long enough to transmit the illness. This is a good real-world example of group selection, and if the virus evolves to a point on the fitness landscape where the group selection pressures fail to overcome individual pressures, the virus could vanish shortly thereafter. I don't know if a disease has ever been caught in the act of evolving to extinction, but it's probably happened any number of times.

Segregation-distorters subvert the mechanisms that usually guarantee fairness of sexual reproduction. For example, there is a segregation-distorter on the male sex chromosome of some mice which causes only male children to be born, all carrying the segregation-distorter. Then these males impregnate females, who give birth to only male children, and so on. You might cry "This is cheating!" but that's a human perspective; the reproductive fitness of this allele is extremely high, since it produces twice as many copies of itself in the succeeding generation as its nonmutant alternative. Even as females become rarer and rarer, males carrying this gene are no less likely to mate than any other male, and so the segregation-distorter remains twice as fit as its alternative allele. It's speculated that real-world group selection may have played a role in keeping the frequency of this gene as low as it seems to be. In which case, if mice were to evolve the ability to fly and migrate for the winter, they would probably form a single reproductive population, and would evolve to extinction as the segregation-distorter evolved to fixation.
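A deliberately crude simulation of that last scenario, with parameters invented purely for illustration: a driving Y chromosome that fathers only driver sons roughly doubles its share of the male population each generation while rare, and once it approaches fixation the females, and then the whole population, vanish.

    def generations_until_extinct(F=1000.0, Mn=999.0, Md=1.0,
                                  brood=4, cap=20000, max_gens=100):
        """Toy model: each female has `brood` offspring by a randomly chosen male.
        Normal fathers sire half daughters; driver fathers sire only driver sons."""
        for gen in range(max_gens):
            if F < 1 or (Mn + Md) < 1:
                return gen                      # no females (or no males) left
            p = Md / (Mn + Md)                  # chance a mate carries the distorter
            daughters = F * brood * (1 - p) / 2
            normal_sons = F * brood * (1 - p) / 2
            driver_sons = F * brood * p
            F, Mn, Md = daughters, normal_sons, driver_sons
            total = F + Mn + Md
            if total > cap:                     # crude resource limit
                F, Mn, Md = (x * cap / total for x in (F, Mn, Md))
        return None

    print(generations_until_extinct())   # extinct within a couple dozen generations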

Around 50% of the total genome of maize consists of transposons, DNA elements whose primary function is to copy themselves into other locations of DNA. A class of transposons called "P elements" seems to have first appeared in Drosophila only in the middle of the 20th century, and spread to every population of the species within 50 years. The "Alu sequence" in humans, a 300-base transposon, is repeated between 300,000 and a million times in the human genome. This may not extinguish a species, but it doesn't help it; transposons cause more mutations, which are (as always) mostly harmful, and decrease the effective copying fidelity of DNA. Yet such cheaters are extremely fit.

Suppose that in some sexually reproducing species, a perfect DNA-copying mechanism is invented. Since most mutations are detrimental, this gene complex is an advantage to its holders. Now you might wonder about beneficial mutations—they do happen occasionally, so wouldn't the unmutable be at a disadvantage? But in a sexual species, a beneficial mutation that began in a mutable can spread to the descendants of unmutables as well. The mutables suffer from degenerate mutations in each generation; and the unmutables can sexually acquire, and thereby benefit from, any beneficial mutations that occur in the mutables. Thus the mutables have a pure disadvantage. The perfect DNA-copying mechanism rises in frequency to fixation. Ten thousand years later there's an ice age and the species goes out of business. It evolved to extinction.

The "bystander effect" is that, when someone is in trouble, solitary individuals are more likely to intervene than groups. A college student apparently having an epileptic seizure was helped 85% of the time by a single bystander, and 31% of the time by five bystanders. I speculate that even if the kinship relation in a hunter-gatherer tribe was strong enough to create a selection pressure for helping individuals not directly related, when several potential helpers were present, a genetic arms race might occur to be the last one to step forward. Everyone delays, hoping that someone else will do it. Humanity is facing multiple species-level extinction threats right now, and I gotta tell ya, there ain't a lot of people steppin' forward. If we lose this fight because virtually no one showed up on the battlefield, then—like a probably-large number of species which we don't see around today—we will have evolved to extinction.

Cancerous cells do pretty well in the body, prospering and amassing more resources, far outcompeting their more obedient counterparts. For a while.

Multicellular organisms can only exist because they've evolved powerful internal mechanisms to outlaw evolution. If the cells start evolving, they rapidly evolve to extinction: the organism dies.

So praise not evolution for the solicitous concern it shows for the individual; nearly all of your ancestors are dead. Praise not evolution for the solicitous concern it shows for a species; no one has ever found a complex adaptation which can only be interpreted as operating to preserve a species, and the mathematics would seem to indicate that this is virtually impossible. Indeed, it's perfectly possible for a species to evolve to extinction. Humanity may be finishing up the process right now. You can't even praise evolution for the solicitous concern it shows for genes; the battle between two alternative alleles at the same location is a zero-sum game for frequency.

Fitness is not always your friend.

" } }, { "_id": "n5ucT5ZbPdhfGNLtP", "title": "Terminal Values and Instrumental Values", "pageUrl": "https://www.lesswrong.com/posts/n5ucT5ZbPdhfGNLtP/terminal-values-and-instrumental-values", "postedAt": "2007-11-15T07:56:15.000Z", "baseScore": 117, "voteCount": 111, "commentCount": 46, "url": null, "contents": { "documentId": "n5ucT5ZbPdhfGNLtP", "html": "

On a purely instinctive level, any human planner behaves as if they distinguish between means and ends.  Want chocolate?  There's chocolate at the Publix supermarket.  You can get to the supermarket if you drive one mile south on Washington Ave.  You can drive if you get into the car.  You can get into the car if you open the door.  You can open the door if you have your car keys.  So you put your car keys into your pocket, and get ready to leave the house...

\n\n

...when suddenly the word comes on the radio that an earthquake has destroyed all the chocolate at the local Publix.  Well, there's no point in driving to the Publix if there's no chocolate there, and no point in getting into the car if you're not driving anywhere, and no point in having car keys in your pocket if you're not driving.  So you take the car keys out of your pocket, and call the local pizza service and have them deliver a chocolate pizza.  Mm, delicious.\n\n

\n\n

I rarely notice people losing track of plans they devised themselves.  People usually don't drive to the supermarket if they know the chocolate is gone.  But I've also noticed that when people begin explicitly talking about goal systems instead of just wanting things, mentioning "goals" instead of using them, they oft become confused.  Humans are experts at planning, not experts on planning, or there'd be a lot more AI developers in the world.

\n\n

In particular, I've noticed people get confused when - in abstract philosophical discussions rather than everyday life - they consider the distinction between means and ends; more formally, between "instrumental values" and "terminal values".

\n\n

(Another long post needed as a reference.)

Part of the problem, it seems to me, is that the human mind uses a rather ad-hoc system to keep track of its goals - it works, but not cleanly. English doesn't embody a sharp distinction between means and ends:  "I want to save my sister's life" and "I want to administer penicillin to my sister" use the same word "want".

\n\n

Can we describe, in mere English, the distinction that is getting lost?

\n\n

As a first stab:

\n\n

"Instrumental values" are desirable strictly conditional on their anticipated consequences.  "I want to administer penicillin to my sister", not because a penicillin-filled sister is an intrinsic good, but in anticipation of penicillin curing her flesh-eating pneumonia.  If instead you anticipated that injecting penicillin would melt your sister into a puddle like the Wicked Witch of the West, you'd fight just as hard to keep her penicillin-free.

\n\n

"Terminal values" are desirable without conditioning on other consequences:  "I want to save my sister's life" has nothing to do with your anticipating whether she'll get injected with penicillin after that.

\n\n

This first attempt suffers from obvious flaws.  If saving my sister's life would cause the Earth to be swallowed up by a black hole, then I would go off and cry for a while, but I wouldn't administer penicillin.  Does this mean that saving my sister's life was not a "terminal" or "intrinsic" value, because it's theoretically conditional on its consequences?  Am I only trying to save her life because of my belief that a black hole won't consume the Earth afterward?  Common sense should say that's not what's happening.

\n\n

So forget English.  We can set up a mathematical description of a decision system in which terminal values and instrumental values are separate and incompatible types - like integers and floating-point numbers, in a programming language with no automatic conversion between them.

\n\n

An ideal Bayesian decision system can be set up using only four elements:

Outcomes : type Outcome[] - a list (or space) of possible outcomes, e.g. {sister lives, sister dies}
Actions : type Action[] - a list (or space) of possible actions, e.g. {administer penicillin, don't administer penicillin}
Utility_function : type Outcome -> Utility - a utility function that maps each outcome onto a real-valued utility, e.g. {sister lives: 1, sister dies: 0}
Conditional_probability_function : type Action -> Probability(Outcome) - a conditional probability function that maps each action onto a probability distribution over outcomes

If you can't read the type system directly, don't worry, I'll always translate into English.  For programmers, seeing it described in distinct statements helps to set up distinct mental objects.

\n\n

And the decision system itself?

Expected_Utility : type Action -> Expected_Utility - for each action, the sum over outcomes of Utility(Outcome) * Probability(Outcome | Action)
Choose : the action with the highest Expected_Utility - Argmax over Actions of Expected_Utility(Action)

For every action, calculate the conditional probability of all the consequences that might follow, then add up the utilities of those consequences times their conditional probability.  Then pick the best action.\n\n

\n\n

This is a mathematically simple sketch of a decision system.  It is not an efficient way to compute decisions in the real world.
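Here is the absurdly simple version as runnable code, a sketch of the formalism just described, with a toy penicillin example whose probabilities and utilities I made up for illustration.  Utility and ExpectedUtility are deliberately distinct types, so the conversion the philosopher attempts below would be a type error here as well.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Utility:            # something Outcomes have
        value: float

    @dataclass(frozen=True)
    class ExpectedUtility:    # something Actions have; a distinct type on purpose
        value: float

    # Toy problem with invented numbers:
    utility = {"sister recovers": Utility(1.0), "sister dies": Utility(0.0)}

    # Conditional probability function: Action -> distribution over Outcomes.
    prob = {
        "administer penicillin": {"sister recovers": 0.9, "sister dies": 0.1},
        "do nothing":            {"sister recovers": 0.1, "sister dies": 0.9},
    }

    def expected_utility(action):
        return ExpectedUtility(sum(p * utility[o].value
                                   for o, p in prob[action].items()))

    best = max(prob, key=lambda a: expected_utility(a).value)
    print(best)   # administer penicillin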

\n\n

Suppose, for example, that you need a sequence of acts to carry out a plan?  The formalism can easily represent this by letting each Action stand for a whole sequence.  But this creates an exponentially large space, like the space of all sentences you can type in 100 letters.  As a simple example, if one of the possible acts on the first turn is "Shoot my own foot off", a human planner will decide this is a bad idea generally - eliminate all sequences beginning with this action.  But we've flattened this structure out of our representation.  We don't have sequences of acts, just flat "actions".

\n\n

So, yes, there are a few minor complications.  Obviously so, or we'd just run out and build a real AI this way.  In that sense, it's much the same as Bayesian probability theory itself.

\n\n

But this is one of those times when it's a surprisingly good idea to consider the absurdly simple version before adding in any high-falutin' complications.

\n\n

Consider the philosopher who asserts, "All of us are ultimately selfish; we care only about our own states of mind.  The mother who claims to care about her son's welfare, really wants to believe that her son is doing well - this belief is what makes the mother happy.  She helps him for the sake of her own happiness, not his."  You say, "Well, suppose the mother sacrifices her life to push her son out of the path of an oncoming truck.  That's not going to make her happy, just dead."  The philosopher stammers for a few moments, then replies, "But she still did it because she valued that choice above others - because of the feeling of importance she attached to that decision."

\n\n

So you say, "TYPE ERROR: No constructor found for Expected_Utility -> Utility."

\n\n

Allow me to explain that reply.

\n\n

Even our simple formalism illustrates a sharp distinction between expected utility, which is something that actions have; and utility, which is something that outcomes have.  Sure, you can map both utilities and expected utilities onto real numbers.  But that's like observing that you can map wind speed and temperature onto real numbers.  It doesn't make them the same thing.

\n\n

The philosopher begins by arguing that all your Utilities must be over Outcomes consisting of your state of mind.  If this were true, your intelligence would operate as an engine to steer the future into regions where you were happy.  Future states would be distinguished only by your state of mind; you would be indifferent between any two futures in which you had the same state of mind.

\n\n

And you would, indeed, be rather unlikely to sacrifice your own life to save another.

\n\n

When we object that people sometimes do sacrifice their lives, the philosopher's reply shifts to discussing Expected Utilities over Actions:  "The feeling of importance she attached to that decision."  This is a drastic jump that should make us leap out of our chairs in indignation.  Trying to convert an Expected_Utility into a Utility would cause an outright error in our programming language.  But in English it all sounds the same.

\n\n

The choices of our simple decision system are those with highest Expected Utility, but this doesn't say anything whatsoever about where it steers the future.  It doesn't say anything about the utilities the decider assigns, or which real-world outcomes are likely to happen as a result.  It doesn't say anything about the mind's function as an engine.

\n\n\n\n

The physical cause of a physical action is a cognitive state, in our ideal decider an Expected_Utility, and this expected utility is calculated by evaluating a utility function over imagined consequences.  To save your son's life, you must imagine the event of your son's life being saved, and this imagination is not the event itself.  It's a quotation, like the difference between "snow" and snow.  But that doesn't mean that what's inside the quote marks must itself be a cognitive state.  If you choose the action that leads to the future that you represent with "my son is still alive", then you have functioned as an engine to steer the future into a region where your son is still alive.  Not an engine that steers the future into a region where you represent the sentence "my son is still alive".  To steer the future there, your utility function would have to return a high utility when fed ""my son is still alive"", the quotation of the quotation, your imagination of yourself imagining.  Recipes make poor cake when you grind them up and toss them in the batter.

\n\n\n\n

And that's why it's helpful to consider the simple decision systems first.  Mix enough complications into the system, and formerly clear distinctions become harder to see.

\n\n

So now let's look at some complications.  Clearly the Utility function (mapping Outcomes onto Utilities) is meant to formalize what I earlier referred to as "terminal values", values not contingent upon their consequences.  What about the case where saving your sister's life leads to Earth's destruction by a black hole?  In our formalism, we've flattened out this possibility.  Outcomes don't lead to Outcomes, only Actions lead to Outcomes.  Your sister recovering from pneumonia followed by the Earth being devoured by a black hole would be flattened into a single "possible outcome".

\n\n

And where are the "instrumental values" in this simple formalism?  Actually, they've vanished entirely!  You see, in this formalism, actions lead directly to outcomes with no intervening events.  There's no notion of throwing a rock that flies through the air and knocks an apple off a branch so that it falls to the ground.  Throwing the rock is the Action, and it leads straight to the Outcome of the apple lying on the ground - according to the conditional probability function that turns an Action directly into a Probability distribution over Outcomes.

\n\n

In order to actually compute the conditional probability function, and in order to separately consider the utility of a sister's pneumonia and a black hole swallowing Earth, we would have to represent the network structure of causality - the way that events lead to other events.

\n\n

And then the instrumental values would start coming back.  If the causal network was sufficiently regular, you could find a state B that tended to lead to C regardless of how you achieved B.  Then if you wanted to achieve C for some reason, you could plan efficiently by first working out a B that led to C, and then an A that led to B.  This would be the phenomenon of "instrumental value" - B would have "instrumental value" because it led to C.  C itself might be terminally valued - a term in the utility function over the total outcome.  Or C might just be an instrumental value, a node that was not directly valued by the utility function.

\n\n

Instrumental value, in this formalism, is purely an aid to the efficient computation of plans.  It can and should be discarded wherever this kind of regularity does not exist.

\n\n

Suppose, for example, that there's some particular value of B that doesn't lead to C.  Would you choose an A which led to that B?  Or never mind the abstract philosophy:  If you wanted to go to the supermarket to get chocolate, and you wanted to drive to the supermarket, and you needed to get into your car, would you gain entry by ripping off the car door with a steam shovel?  (No.)  Instrumental value is a "leaky abstraction", as we programmers say; you sometimes have to toss away the cached value and compute out the actual expected utility.  Part of being efficient without being suicidal is noticing when convenient shortcuts break down.  Though this formalism does give rise to instrumental values, it does so only where the requisite regularity exists, and strictly as a convenient shortcut in computation.
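A tiny sketch of that shortcut and its failure mode; the causal chain here is my own toy construction, not anything from the post.  The value of "inside car" is purely instrumental: it exists only while the chain still leads on to chocolate, and must be recomputed the moment the regularity breaks.

    # Toy causal chain: each state leads to the next.
    leads_to = {
        "open car door": "inside car",
        "inside car": "at supermarket",
        "at supermarket": "chocolate",
    }

    def leads_to_goal(state, goal):
        """Follow the chain instead of trusting a cached 'B is usually good'."""
        seen = set()
        while state in leads_to and state not in seen:
            seen.add(state)
            state = leads_to[state]
        return state == goal

    print(leads_to_goal("inside car", "chocolate"))   # True: driving is worth it

    # Earthquake destroys the chocolate: the regularity breaks, so the cached
    # instrumental value of "inside car" must be tossed and recomputed.
    leads_to["at supermarket"] = "empty shelves"
    print(leads_to_goal("inside car", "chocolate"))   # False: no point driving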

\n\n

But if you complicate the formalism before you understand the simple version, then you may start thinking that instrumental values have some strange life of their own, even in a normative sense.  That, once you say B is usually good because it leads to C, you've committed yourself to always try for B even in the absence of C.  People make this kind of mistake in abstract philosophy, even though they would never, in real life, rip open their car door with a steam shovel.  You may start thinking that there's no way to develop a consequentialist that maximizes only inclusive genetic fitness, because it will starve unless you include an explicit terminal value for "eating food".  People make this mistake even though they would never stand around opening car doors all day long, for fear of being stuck outside their cars if they didn't have a terminal value for opening car doors.

\n\n

Instrumental values live in (the network structure of) the conditional probability function.  This makes instrumental value strictly dependent on beliefs-of-fact given a fixed utility function.  If I believe that penicillin causes pneumonia, and that the absence of penicillin cures pneumonia, then my perceived instrumental value of penicillin will go from high to low.  Change the beliefs of fact - change the conditional probability function that associates actions to believed consequences - and the instrumental values will change in unison.

\n\n

In moral arguments, some disputes are about instrumental consequences, and some disputes are about terminal values.  If your debating opponent says that banning guns will lead to lower crime, and you say that banning guns will lead to higher crime, then you agree about a superior instrumental value (crime is bad), but you disagree about which intermediate events lead to which consequences.  But I do not think an argument about female circumcision is really a factual argument about how to best achieve a shared value of treating women fairly or making them happy.

\n\n

This important distinction often gets flushed down the toilet in angry arguments.  People with factual disagreements and shared values each decide that their debating opponents must be sociopaths.  As if your hated enemies, the gun control / gun rights advocates, really wanted to kill people, which ought to strike you as implausible as realistic psychology.

\n\n

I fear the human brain does not strongly type the distinction between terminal moral beliefs and instrumental moral beliefs.  "We should ban guns" and "We should save lives" don't feel different, as moral beliefs, the way that sight feels different from sound.  Despite all the other ways that the human goal system complicates everything in sight, this one distinction it manages to collapse into a mishmash of things-with-conditional-value.

\n\n

To extract out the terminal values we have to inspect this mishmash of valuable things, trying to figure out which ones are getting their value from somewhere else.  It's a difficult project!  If you say that you want to ban guns in order to reduce crime, it may take a moment to realize that "reducing crime" isn't a terminal value, it's a superior instrumental value with links to terminal values for human lives and human happinesses.  And then the one who advocates gun rights may have links to the superior instrumental value of "reducing crime" plus a link to a value for "freedom", which might be a terminal value unto them, or another instrumental value...

\n\n

We can't print out our complete network of values derived from other values.  We probably don't even store the whole history of how values got there.  By considering the right moral dilemmas, "Would you do X if Y", we can often figure out where our values came from.  But even this project itself is full of pitfalls; misleading dilemmas and gappy philosophical arguments.  We don't know what our own values are, or where they came from, and can't find out except by undertaking error-prone projects of cognitive archaeology.  Just forming a conscious distinction between "terminal value" and "instrumental value", and keeping track of what it means, and using it correctly, is hard work.  Only by inspecting the simple formalism can we see how easy it ought to be, in principle.

\n\n

And that's to say nothing of all the other complications of the human reward system - the whole use of reinforcement architecture, and the way that eating chocolate is pleasurable, and anticipating eating chocolate is pleasurable, but they're different kinds of pleasures...

\n\n

But I don't complain too much about the mess.

\n\n

Being ignorant of your own values may not always be fun, but at least it's not boring.

" } }, { "_id": "cSXZpvqpa9vbGGLtG", "title": "Thou Art Godshatter", "pageUrl": "https://www.lesswrong.com/posts/cSXZpvqpa9vbGGLtG/thou-art-godshatter", "postedAt": "2007-11-13T19:38:56.000Z", "baseScore": 246, "voteCount": 184, "commentCount": 83, "url": null, "contents": { "documentId": "cSXZpvqpa9vbGGLtG", "html": "

Before the 20th century, not a single human being had an explicit concept of "inclusive genetic fitness", the sole and absolute obsession of the blind idiot god. We have no instinctive revulsion of condoms or oral sex. Our brains, those supreme reproductive organs, don't perform a check for reproductive efficacy before granting us sexual pleasure.

Why not? Why aren't we consciously obsessed with inclusive genetic fitness? Why did the Evolution-of-Humans Fairy create brains that would invent condoms? "It would have been so easy," thinks the human, who can design new complex systems in an afternoon.

The Evolution Fairy, as we all know, is obsessed with inclusive genetic fitness. When she decides which genes to promote to universality, she doesn't seem to take into account anything except the number of copies a gene produces. (How strange!)

But since the maker of intelligence is thus obsessed, why not create intelligent agents - you can't call them humans - who would likewise care purely about inclusive genetic fitness? Such agents would have sex only as a means of reproduction, and wouldn't bother with sex that involved birth control. They could eat food out of an explicitly reasoned belief that food was necessary to reproduce, not because they liked the taste, and so they wouldn't eat candy if it became detrimental to survival or reproduction. Post-menopausal women would babysit grandchildren until they became sick enough to be a net drain on resources, and would then commit suicide.

It seems like such an obvious design improvement - from the Evolution Fairy's perspective.

Now it's clear, as was discussed yesterday, that it's hard to build a powerful enough consequentialist. Natural selection sort-of reasons consequentially, but only by depending on the actual consequences. Human evolutionary theorists have to do really high-falutin' abstract reasoning in order to imagine the links between adaptations and reproductive success.

But human brains clearly can imagine these links in protein. So when the Evolution Fairy made humans, why did It bother with any motivation except inclusive genetic fitness?

It's been less than two centuries since a protein brain first represented the concept of natural selection. The modern notion of "inclusive genetic fitness" is even more subtle, a highly abstract concept. What matters is not the number of shared genes. Chimpanzees share 95% of your genes. What matters is shared genetic variance, within a reproducing population - your sister is one-half related to you, because any variations in your genome, within the human species, are 50% likely to be shared by your sister.

Only in the last century - arguably only in the last fifty years - have evolutionary biologists really begun to understand the full range of causes of reproductive success, things like reciprocal altruism and costly signaling. Without all this highly detailed knowledge, an intelligent agent that set out to "maximize inclusive fitness" would fall flat on its face.

So why not preprogram protein brains with the knowledge? Why wasn't a concept of "inclusive genetic fitness" programmed into us, along with a library of explicit strategies? Then you could dispense with all the reinforcers. The organism would be born knowing that, with high probability, fatty foods would lead to fitness. If the organism later learned that this was no longer the case, it would stop eating fatty foods. You could refactor the whole system. And it wouldn't invent condoms or cookies.

This looks like it should be quite possible in principle. I occasionally run into people who don't quite understand consequentialism, who say, "But if the organism doesn't have a separate drive to eat, it will starve, and so fail to reproduce." So long as the organism knows this very fact, and has a utility function that values reproduction, it will automatically eat. In fact, this is exactly the consequentialist reasoning that natural selection itself used to build automatic eaters.

What about curiosity? Wouldn't a consequentialist only be curious when it saw some specific reason to be curious? And wouldn't this cause it to miss out on lots of important knowledge that came with no specific reason for investigation attached? Again, a consequentialist will investigate given only the knowledge of this very same fact. If you consider the curiosity drive of a human - which is not undiscriminating, but responds to particular features of problems - then this complex adaptation is purely the result of consequentialist reasoning by DNA, an implicit representation of knowledge: Ancestors who engaged in this kind of inquiry left more descendants.

So in principle, the pure reproductive consequentialist is possible. In principle, all the ancestral history implicitly represented in cognitive adaptations can be converted to explicitly represented knowledge, running on a core consequentialist.

But the blind idiot god isn't that smart. Evolution is not a human programmer who can simultaneously refactor whole code architectures. Evolution is not a human programmer who can sit down and type out instructions at sixty words per minute.

For millions of years before hominid consequentialism, there was reinforcement learning. The reward signals were events that correlated reliably to reproduction. You can't ask a nonhominid brain to foresee that a child eating fatty foods now will live through the winter. So the DNA builds a protein brain that generates a reward signal for eating fatty food. Then it's up to the organism to learn which prey animals are tastiest.

DNA constructs protein brains with reward signals that have a long-distance correlation to reproductive fitness, but a short-distance correlation to organism behavior.  You don't have to figure out that eating sugary food in the fall will lead to digesting calories that can be stored as fat to help you survive the winter so that you mate in spring to produce offspring in summer.  An apple simply tastes good, and your brain just has to plot out how to get more apples off the tree.

And so organisms evolve rewards for eating, and building nests, and scaring off competitors, and helping siblings, and discovering important truths, and forming strong alliances, and arguing persuasively, and of course having sex...

When hominid brains capable of cross-domain consequential reasoning began to show up, they reasoned consequentially about how to get the existing reinforcers. It was a relatively simple hack, vastly simpler than rebuilding an "inclusive fitness maximizer" from scratch. The protein brains plotted how to acquire calories and sex, without any explicit cognitive representation of "inclusive fitness".

A human engineer would have said, "Whoa, I've just invented a consequentialist! Now I can take all my previous hard-won knowledge about which behaviors improve fitness, and declare it explicitly! I can convert all this complicated reinforcement learning machinery into a simple declarative knowledge statement that 'fatty foods and sex usually improve your inclusive fitness'. Consequential reasoning will automatically take care of the rest. Plus, it won't have the obvious failure mode where it invents condoms!"

But then a human engineer wouldn't have built the retina backward, either.

The blind idiot god is not a unitary purpose, but a many-splintered attention. Foxes evolve to catch rabbits, rabbits evolve to evade foxes; there are as many evolutions as species. But within each species, the blind idiot god is purely obsessed with inclusive genetic fitness. No quality is valued, not even survival, except insofar as it increases reproductive fitness. There's no point in an organism with steel skin if it ends up having 1% less reproductive capacity.

Yet when the blind idiot god created protein computers, its monomaniacal focus on inclusive genetic fitness was not faithfully transmitted. Its optimization criterion did not successfully quine. We, the handiwork of evolution, are as alien to evolution as our Maker is alien to us. One pure utility function splintered into a thousand shards of desire.

Why? Above all, because evolution is stupid in an absolute sense. But also because the first protein computers weren't anywhere near as general as the blind idiot god, and could only utilize short-term desires.

In the final analysis, asking why evolution didn't build humans to maximize inclusive genetic fitness, is like asking why evolution didn't hand humans a ribosome and tell them to design their own biochemistry. Because evolution can't refactor code that fast, that's why. But maybe in a billion years of continued natural selection that's exactly what would happen, if intelligence were foolish enough to allow the idiot god continued reign.

The Mote in God's Eye by Niven and Pournelle depicts an intelligent species that stayed biological a little too long, slowly becoming truly enslaved by evolution, gradually turning into true fitness maximizers obsessed with outreproducing each other. But thankfully that's not what happened. Not here on Earth. At least not yet.

So humans love the taste of sugar and fat, and we love our sons and daughters. We seek social status, and sex. We sing and dance and play. We learn for the love of learning.

A thousand delicious tastes, matched to ancient reinforcers that once correlated with reproductive fitness - now sought whether or not they enhance reproduction. Sex with birth control, chocolate, the music of long-dead Bach on a CD.

And when we finally learn about evolution, we think to ourselves: "Obsess all day about inclusive genetic fitness? Where's the fun in that?"

The blind idiot god's single monomaniacal goal splintered into a thousand shards of desire. And this is well, I think, though I'm a human who says so. Or else what would we do with the future? What would we do with the billion galaxies in the night sky? Fill them with maximally efficient replicators? Should our descendants deliberately obsess about maximizing their inclusive genetic fitness, regarding all else only as a means to that end?

Being a thousand shards of desire isn't always fun, but at least it's not boring. Somewhere along the line, we evolved tastes for novelty, complexity, elegance, and challenge - tastes that judge the blind idiot god's monomaniacal focus, and find it aesthetically unsatisfying.

And yes, we got those very same tastes from the blind idiot's godshatter. So what?

" } }, { "_id": "gTNB9CQd5hnbkMxAG", "title": "Protein Reinforcement and DNA Consequentialism", "pageUrl": "https://www.lesswrong.com/posts/gTNB9CQd5hnbkMxAG/protein-reinforcement-and-dna-consequentialism", "postedAt": "2007-11-13T01:34:25.000Z", "baseScore": 62, "voteCount": 50, "commentCount": 20, "url": null, "contents": { "documentId": "gTNB9CQd5hnbkMxAG", "html": "

Followup to: Evolutionary Psychology


It takes hundreds of generations for a simple beneficial mutation to promote itself to universality in a gene pool.  Thousands of generations, or even millions, to create complex interdependent machinery.


That's some slow learning there.  Let's say you're building a squirrel, and you want the squirrel to know locations for finding nuts.  Individual nut trees don't last for the thousands of years required for natural selection.  You're going to have to learn using proteins.  You're going to have to build a brain.


Protein computers and sensors can learn by looking, much faster than DNA can learn by mutation and selection.  And yet (until very recently) the protein learning machines only learned in narrow, specific domains.  Squirrel brains learn to find nut trees, but not to build gliders - as flying squirrel DNA is slowly learning to do.  The protein computers learned faster than DNA, but much less generally.


How the heck does a double-stranded molecule that fits inside a cell nucleus, come to embody truths that baffle a whole damn squirrel brain?


Consider the high-falutin' abstract thinking that modern evolutionary theorists do in order to understand how adaptations increase inclusive genetic fitness.  Reciprocal altruism, evolutionarily stable strategies, deterrence, costly signaling, sexual selection - how many humans explicitly represent this knowledge?  Yet DNA can learn it without a protein computer.


There's a long chain of causality whereby a male squirrel, eating a nut today, produces more offspring months later:  Chewing and swallowing food, to digesting food, to burning some calories today and turning others into fat, to burning the fat through the winter, to surviving the winter, to mating with a female, to the sperm fertilizing an egg inside the female, to the female giving birth to an offspring that shares 50% of the squirrel's genes.


With the sole exception of humans, no protein brain can imagine chains of causality that long, that abstract, and crossing that many domains.  With one exception, no protein brain is even capable of drawing the consequential link from chewing and swallowing to inclusive reproductive fitness.


Yet natural selection exploits links between local actions and distant reproductive benefits.  In wide generality, across domains, and through levels of abstraction that confuse some humans.  Because - of course - the basic evolutionary idiom works through the actual real-world consequences, avoiding the difficulty of having a brain imagine them.


Naturally, this also misses the efficiency of having a brain imagine consequences.  It takes millions of years and billions of dead bodies to build complex machines this way.  And if you want to memorize the location of a nut tree, you're out of luck.


Gradually DNA acquired the ability to build protein computers, brains, that could learn small modular facets of reality like the location of nut trees. To call these brains \"limited\" implies that a speed limit was tacked onto a general learning device, which isn't what happened.  It's just that the incremental successes of particular mutations tended to build out into domain-specific nut-tree-mapping programs.  (If you know how to program, you can verify for yourself that it's easier to build a nut-tree-mapper than an Artificial General Intelligence.)


One idiom that brain-building DNA seems to have hit on, over and over, is reinforcement learning - repeating policies similar to policies previously rewarded.  If a food contains lots of calories and doesn't make you sick, then eat more foods that have similar tastes.  This doesn't require a brain that visualizes the whole chain of digestive causality.


Reinforcement learning isn't trivial:  You've got to chop up taste space into neighborhoods of similarity, and stick a sensor in the stomach to detect calories or indigestion, and do some kind of long-term-potentiation that strengthens the eating impulse.  But it seems much easier for evolution to hit on reinforcement learning, than a brain that accurately visualizes the digestive system, let alone a brain that accurately visualizes the reproductive consequences N months later.
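
To make that machinery concrete, here is a minimal Python sketch of such a learner (my illustration, not anything specified in the post; the taste vectors, the similarity rule, and all the numbers are invented): tastes live in a small similarity space, a stomach sensor supplies the calorie reward, and the impulse to eat similar-tasting foods is strengthened or weakened after the fact.

```python
class TasteReinforcementLearner:
    """Toy reinforcement learner: strengthen the impulse to eat foods whose
    taste resembles foods that previously delivered calories."""

    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        self.eat_impulse = {}   # taste vector (tuple) -> learned eating impulse

    def similarity(self, taste_a, taste_b):
        # Crude "neighborhoods of similarity" over taste space.
        distance = sum(abs(a - b) for a, b in zip(taste_a, taste_b))
        return max(0.0, 1.0 - distance)

    def impulse(self, taste):
        # Generalize: a taste inherits the impulse of the reinforced tastes it
        # most resembles.  No model of digestion or reproduction anywhere.
        weights = {t: self.similarity(taste, t) for t in self.eat_impulse}
        total = sum(weights.values())
        if total == 0:
            return 0.5          # mild default curiosity about novel foods
        return sum(self.eat_impulse[t] * w for t, w in weights.items()) / total

    def eat_and_learn(self, taste, calorie_signal):
        # The stomach sensor reports detected calories; move the stored
        # impulse for this taste toward that reward.
        old = self.eat_impulse.get(taste, 0.5)
        self.eat_impulse[taste] = old + self.learning_rate * (calorie_signal - old)


learner = TasteReinforcementLearner()
learner.eat_and_learn((0.9, 0.8), calorie_signal=1.0)   # sweet and fatty: rewarded
learner.eat_and_learn((0.1, 0.1), calorie_signal=0.0)   # bitter leaf: nothing
print(learner.impulse((0.85, 0.75)))   # near the rewarded taste -> elevated impulse
print(learner.impulse((0.15, 0.05)))   # near the unrewarded taste -> suppressed
```

Nothing in the sketch represents the winter, the mating season, or inclusive fitness; that whole chain is baked into the choice of calories as the reward signal.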


(This efficiency does come at a price:  If the environment changes, making food no longer scarce and famines improbable, the organisms may go on eating food until they explode.)


Similarly, a bird doesn't have to cognitively model the airflow over its wings.  It just has to track which wing-flapping policies cause it to lurch.


Why not learn to like food based on reproductive success, so that you'll stop liking the taste of candy if it stops leading to reproductive success?  Why don't birds wait and see which wing-flapping policies result in more eggs, not just more stability?


Because it takes too long.  Reinforcement learning still requires you to wait for the detected consequences before you learn.


Now, if a protein brain could imagine the consequences, accurately, it wouldn't need a reinforcement sensor that waited for them to actually happen.


Put a food reward in a transparent box.  Put the corresponding key, which looks unique and uniquely corresponds to that box, in another transparent box.  Put the key to that box in another box.  Do this with five boxes.  Mix in another sequence of five boxes that doesn't lead to a food reward.  Then offer a choice of two keys, one which starts the sequence of five boxes leading to food, one which starts the sequence leading nowhere.


Chimpanzees can learn to do this.  (Dohl 1970.)  So consequentialist reasoning, backward chaining from goal to action, is not strictly limited to Homo sapiens.
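
Here is a small Python sketch of the backward chaining the box puzzle demands (my toy rendering, with invented box and key names; the dead-end decoy chain is omitted for brevity): plan from the goal back to whichever key you must accept first.

```python
# Transparent-box puzzle: every box's contents are visible, and the agent must
# reason backward from the food to decide which offered key to pick up.

holds = {"box5": "food", "box4": "key5", "box3": "key4",
         "box2": "key3", "box1": "key2"}           # box -> visible contents
opened_by = {"box5": "key5", "box4": "key4", "box3": "key3",
             "box2": "key2", "box1": "key1"}       # box -> the key that opens it

def backward_chain(goal_item):
    """Chain backward from the goal: which box holds it, which key opens that
    box, which box holds that key, until we reach a key not locked in any box."""
    location = {item: box for box, item in holds.items()}
    plan = []
    item = goal_item
    while item in location:          # the thing we need is still locked up
        box = location[item]
        key = opened_by[box]
        plan.append(key)
        item = key                   # now we need that key instead
    return item, plan

first_key, plan = backward_chain("food")
print(first_key)   # key1 -- the key to accept when offered the choice
print(plan)        # ['key5', 'key4', 'key3', 'key2', 'key1']; execute it in reverse
```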


But as far as I know, no non-primate species can pull that trick.  And working with a few transparent boxes is nothing compared to the kind of high-falutin' cross-domain reasoning you would need to causally link food to inclusive fitness.  (Never mind linking reciprocal altruism to inclusive fitness).  Reinforcement learning seems to evolve a lot more easily.


When natural selection builds a digestible-calorie-sensor linked by reinforcement learning to taste, then the DNA itself embodies the implicit belief that calories lead to reproduction.  So the long-term, complicated, cross-domain, distant link from calories to reproduction, is learned by natural selection - it's implicit in the reinforcement learning mechanism that uses calories as a reward signal.


Only short-term consequences, which the protein brains can quickly observe and easily learn from, get hooked up to protein learning.  The DNA builds a protein computer that seeks calories, rather than, say, chewiness.  Then the protein computer learns which tastes are caloric.  (Oversimplified, I know.  Lots of inductive hints embedded in this machinery.)


But the DNA had better hope that its protein computer never ends up in an environment where calories are bad for it...  or where sexual pleasure stops correlating to reproduction... or where there are marketers that intelligently reverse-engineer reward signals...

" } }, { "_id": "epZLSoNvjW53tqNj9", "title": "Evolutionary Psychology", "pageUrl": "https://www.lesswrong.com/posts/epZLSoNvjW53tqNj9/evolutionary-psychology", "postedAt": "2007-11-11T20:41:03.000Z", "baseScore": 110, "voteCount": 99, "commentCount": 43, "url": null, "contents": { "documentId": "epZLSoNvjW53tqNj9", "html": "

Like \"IRC chat\" or \"TCP/IP protocol\", the phrase \"reproductive organ\" is redundant. All organs are reproductive organs. Where do a bird's wings come from? An Evolution-of-Birds Fairy who thinks that flying is really neat? The bird's wings are there because they contributed to the bird's ancestors' reproduction. Likewise the bird's heart, lungs, and genitals. At most we might find it worthwhile to distinguish between directly reproductive organs and indirectly reproductive organs.


This observation holds true also of the brain, the most complex organ system known to biology. Some brain organs are directly reproductive, like lust; others are indirectly reproductive, like anger.


Where does the human emotion of anger come from? An Evolution-of-Humans Fairy who thought that anger was a worthwhile feature? The neural circuitry of anger is a reproductive organ as surely as your liver. Anger exists in Homo sapiens because angry ancestors had more kids. There's no other way it could have gotten there.


This historical fact about the origin of anger confuses all too many people. They say, \"Wait, are you saying that when I'm angry, I'm subconsciously trying to have children? That's not what I'm thinking after someone punches me in the nose.\"


No. No. No. NO!


Individual organisms are best thought of as adaptation-executers, not fitness-maximizers. The cause of an adaptation, the shape of an adaptation, and the consequence of an adaptation, are all separate things. If you built a toaster, you wouldn't expect the toaster to reshape itself when you tried to cram in a whole loaf of bread; yes, you intended it to make toast, but that intention is a fact about you, not a fact about the toaster. The toaster has no sense of its own purpose.


But a toaster is not an intention-bearing object. It is not a mind at all, so we are not tempted to attribute goals to it. If we see the toaster as purposed, we don't think the toaster knows it, because we don't think the toaster knows anything.


It's like the old test of being asked to say the color of the letters in \"blue\". It takes longer for subjects to name this color, because of the need to untangle the meaning of the letters and the color of the letters. You wouldn't have similar trouble naming the color of the letters in \"wind\".


But a human brain, in addition to being an artifact historically produced by evolution, is also a mind capable of bearing its own intentions, purposes, desires, goals, and plans. Both a bee and a human are designs, but only a human is a designer. The bee is \"wind\", the human is \"blue\".


Cognitive causes are ontologically distinct from evolutionary causes. They are made out of a different kind of stuff. Cognitive causes are made of neurons. Evolutionary causes are made of ancestors.


The most obvious kind of cognitive cause is deliberate, like an intention to go to the supermarket, or a plan for toasting toast. But an emotion also exists physically in the brain, as a train of neural impulses or a cloud of spreading hormones. Likewise an instinct, or a flash of visualization, or a fleetingly suppressed thought; if you could scan the brain in three dimensions and you understood the code, you would be able to see them.


Even subconscious cognitions exist physically in the brain. \"Power tends to corrupt,\" observed Lord Acton. Stalin may or may not have believed himself an altruist, working toward the greatest good for the greatest number. But it seems likely that, somewhere in Stalin's brain, there were neural circuits that reinforced pleasurably the exercise of power, and neural circuits that detected anticipations of increases and decreases in power. If there were nothing in Stalin's brain that correlated to power - no little light that went on for political command, and off for political weakness - then how could Stalin's brain have known to be corrupted by power?


Evolutionary selection pressures are ontologically distinct from the biological artifacts they create. The evolutionary cause of a bird's wings is millions of ancestor-birds who reproduced more often than other ancestor-birds, with statistical regularity owing to their possession of incrementally improved wings compared to their competitors. We compress this gargantuan historical-statistical macrofact by saying \"evolution did it\".


Natural selection is ontologically distinct from creatures; evolution is not a little furry thing lurking in an undiscovered forest. Evolution is a causal, statistical regularity in the reproductive history of ancestors.


And this logic applies also to the brain. Evolution has made wings that flap, but do not understand flappiness. It has made legs that walk, but do not understand walkyness. Evolution has carved bones of calcium ions, but the bones themselves have no explicit concept of strength, let alone inclusive genetic fitness. And evolution designed brains themselves capable of designing; yet these brains had no more concept of evolution than a bird has of aerodynamics. Until the 20th century, not a single human brain explicitly represented the complex abstract concept of inclusive genetic fitness.


When we're told that \"The evolutionary purpose of anger is to increase inclusive genetic fitness,\" there's a tendency to slide to \"The purpose of anger is reproduction\" to \"The cognitive purpose of anger is reproduction.\" No! The statistical regularity of ancestral history isn't in the brain, even subconsciously, any more than the designer's intentions of toast are in a toaster!


Thinking that your built-in anger-circuitry embodies an explicit desire to reproduce, is like thinking your hand is an embodied mental desire to pick things up.


Your hand is not wholly cut off from your mental desires. In particular circumstances, you can control the flexing of your fingers by an act of will. If you bend down and pick up a penny, then this may represent an act of will; but it is not an act of will that made your hand grow in the first place.


One must distinguish a one-time event of particular anger (anger-1, anger-2, anger-3) from the underlying neural circuitry for anger. An anger-event is a cognitive cause, and an anger-event may have cognitive causes, but you didn't will the anger-circuitry to be wired into the brain.


So you have to distinguish the event of anger, from the circuitry of anger, from the gene complex which laid down the neural template, from the ancestral macrofact which explains the gene complex's presence.


If there were ever a discipline that genuinely demanded X-Treme Nitpicking, it is evolutionary psychology.


Consider, O my readers, this sordid and joyful tale: A man and a woman meet in a bar. The man is attracted to her clear complexion and firm breasts, which would have been fertility cues in the ancestral environment, but which in this case result from makeup and a bra. This does not bother the man; he just likes the way she looks. His clear-complexion-detecting neural circuitry does not know that its purpose is to detect fertility, any more than the atoms in his hand contain tiny little XML tags reading \"<purpose>pick things up</purpose>\". The woman is attracted to his confident smile and firm manner, cues to high status, which in the ancestral environment would have signified the ability to provide resources for children. She plans to use birth control, but her confident-smile-detectors don't know this any more than a toaster knows its designer intended it to make toast. She's not concerned philosophically with the meaning of this rebellion, because her brain is a creationist and denies vehemently that evolution exists. He's not concerned philosophically with the meaning of this rebellion, because he just wants to get laid. They go to a hotel, and undress. He puts on a condom, because he doesn't want kids, just the dopamine-noradrenaline rush of sex, which reliably produced offspring 50,000 years ago when it was an invariant feature of the ancestral environment that condoms did not exist. They have sex, and shower, and go their separate ways. The main objective consequence is to keep the bar and the hotel and condom-manufacturer in business; which was not the cognitive purpose in their minds, and has virtually nothing to do with the key statistical regularities of reproduction 50,000 years ago which explain how they got the genes that built their brains that executed all this behavior.


To reason correctly about evolutionary psychology you must simultaneously consider many complicated abstract facts that are strongly related yet importantly distinct, without a single mixup or conflation.

\n" } }, { "_id": "XPErvb8m9FapXCjhA", "title": "Adaptation-Executers, not Fitness-Maximizers", "pageUrl": "https://www.lesswrong.com/posts/XPErvb8m9FapXCjhA/adaptation-executers-not-fitness-maximizers", "postedAt": "2007-11-11T06:39:18.000Z", "baseScore": 174, "voteCount": 137, "commentCount": 33, "url": null, "contents": { "documentId": "XPErvb8m9FapXCjhA", "html": "
"Individual organisms are best thought of as adaptation-executers rather than as fitness-maximizers."
—John Tooby and Leda Cosmides, The Psychological Foundations of Culture.

Fifty thousand years ago, the taste buds of Homo sapiens directed their bearers to the scarcest, most critical food resources—sugar and fat. Calories, in a word. Today, the context of a taste bud's function has changed, but the taste buds themselves have not. Calories, far from being scarce (in First World countries), are actively harmful. Micronutrients that were reliably abundant in leaves and nuts are absent from bread, but our taste buds don't complain. A scoop of ice cream is a superstimulus, containing more sugar, fat, and salt than anything in the ancestral environment.

No human being with the deliberate goal of maximizing their alleles' inclusive genetic fitness, would ever eat a cookie unless they were starving. But individual organisms are best thought of as adaptation-executers, not fitness-maximizers.

A toaster, though its designer intended it to make toast, does not bear within it the intelligence of the designer—it won't automatically redesign and reshape itself if you try to cram in an entire loaf of bread. A Phillips-head screwdriver won't reconform itself to a flat-head screw. We created these tools, but they exist independently of us, and they continue independently of us.

The atoms of a screwdriver don't have tiny little XML tags inside describing their "objective" purpose. The designer had something in mind, yes, but that's not the same as what happens in the real world. If you forgot that the designer is a separate entity from the designed thing, you might think, "The purpose of the screwdriver is to drive screws"—as though this were an explicit property of the screwdriver itself, rather than a property of the designer's state of mind. You might be surprised that the screwdriver didn't reconfigure itself to the flat-head screw, since, after all, the screwdriver's purpose is to turn screws.

The cause of the screwdriver's existence is the designer's mind, which imagined an imaginary screw, and imagined an imaginary handle turning. The actual operation of the screwdriver, its actual fit to an actual screw head, cannot be the objective cause of the screwdriver's existence: The future cannot cause the past. But the designer's brain, as an actually existent thing within the past, can indeed be the cause of the screwdriver.

The consequence of the screwdriver's existence, may not correspond to the imaginary consequences in the designer's mind. The screwdriver blade could slip and cut the user's hand.

And the meaning of the screwdriver—why, that's something that exists in the mind of a user, not in tiny little labels on screwdriver atoms. The designer may intend it to turn screws. A murderer may buy it to use as a weapon. And then accidentally drop it, to be picked up by a child, who uses it as a chisel.

So the screwdriver's cause, and its shape, and its consequence, and its various meanings, are all different things; and only one of these things is found within the screwdriver itself.

Where do taste buds come from? Not from an intelligent designer visualizing their consequences, but from a frozen history of ancestry: Adam liked sugar and ate an apple and reproduced, Barbara liked sugar and ate an apple and reproduced, Charlie liked sugar and ate an apple and reproduced, and 2763 generations later, the allele became fixed in the population. For convenience of thought, we sometimes compress this giant history and say: "Evolution did it." But it's not a quick, local event like a human designer visualizing a screwdriver. This is the objective cause of a taste bud.

What is the objective shape of a taste bud? Technically, it's a molecular sensor connected to reinforcement circuitry. This adds another level of indirection, because the taste bud isn't directly acquiring food. It's influencing the organism's mind, making the organism want to eat foods that are similar to the food just eaten.

What is the objective consequence of a taste bud? In a modern First World human, it plays out in multiple chains of causality: from the desire to eat more chocolate, to the plan to eat more chocolate, to eating chocolate, to getting fat, to getting fewer dates, to reproducing less successfully. This consequence is directly opposite the key regularity in the long chain of ancestral successes which caused the taste bud's shape. But, since overeating has only recently become a problem, no significant evolution (compressed regularity of ancestry) has further influenced the taste bud's shape.

What is the meaning of eating chocolate? That's between you and your moral philosophy. Personally, I think chocolate tastes good, but I wish it were less harmful; acceptable solutions would include redesigning the chocolate or redesigning my biochemistry.

Smushing several of the concepts together, you could sort-of-say, "Modern humans do today what would have propagated our genes in a hunter-gatherer society, whether or not it helps our genes in a modern society." But this still isn't quite right, because we're not actually asking ourselves which behaviors would maximize our ancestors' inclusive fitness. And many of our activities today have no ancestral analogue. In the hunter-gatherer society there wasn't any such thing as chocolate.

So it's better to view our taste buds as an adaptation fitted to ancestral conditions that included near-starvation and apples and roast rabbit, which modern humans execute in a new context that includes cheap chocolate and constant bombardment by advertisements.

Therefore it is said: Individual organisms are best thought of as adaptation-executers, not fitness-maximizers.

" } }, { "_id": "i6fKszWY6gLZSX2Ey", "title": "Fake Optimization Criteria", "pageUrl": "https://www.lesswrong.com/posts/i6fKszWY6gLZSX2Ey/fake-optimization-criteria", "postedAt": "2007-11-10T00:10:51.000Z", "baseScore": 73, "voteCount": 65, "commentCount": 21, "url": null, "contents": { "documentId": "i6fKszWY6gLZSX2Ey", "html": "

I've previously dwelt at considerable length upon forms of rationalization whereby our beliefs appear to match the evidence much more strongly than they actually do. And I'm not overemphasizing the point, either. If we could beat this fundamental metabias and see what every hypothesis really predicted, we would be able to recover from almost any other error of fact.

The mirror challenge for decision theory is seeing which option a choice criterion really endorses. If your stated moral principles call for you to provide laptops to everyone, does that really endorse buying a $1 million gem-studded laptop for yourself, or spending the same money on shipping 5000 OLPCs?

We seem to have evolved a knack for arguing that practically any goal implies practically any action. A phlogiston theorist explaining why magnesium gains weight when burned has nothing on an Inquisitor explaining why God's infinite love for all His children requires burning some of them at the stake.

There's no mystery about this. Politics was a feature of the ancestral environment. We are descended from those who argued most persuasively that the good of the tribe meant executing their hated rival Uglak. (We sure ain't descended from Uglak.)

And yet... is it possible to prove that if Robert Mugabe cared only for the good of Zimbabwe, he would resign from its presidency? You can argue that the policy follows from the goal, but haven't we just seen that humans can match up any goal to any policy? How do you know that you're right and Mugabe is wrong? (There are a number of reasons this is a good guess, but bear with me here.)

Human motives are manifold and obscure, our decision processes as vastly complicated as our brains. And the world itself is vastly complicated, on every choice of real-world policy. Can we even prove that human beings are rationalizing—that we're systematically distorting the link from principles to policy—when we lack a single firm place on which to stand? When there's no way to find out exactly what even a single optimization criterion implies? (Actually, you can just observe that people disagree about office politics in ways that strangely correlate to their own interests, while simultaneously denying that any such interests are at work. But again, bear with me here.)

Where is the standardized, open-source, generally intelligent, consequentialist optimization process into which we can feed a complete morality as an XML file, to find out what that morality really recommends when applied to our world? Is there even a single real-world case where we can know exactly what a choice criterion recommends? Where is the pure moral reasoner—of known utility function, purged of all other stray desires that might distort its optimization—whose trustworthy output we can contrast to human rationalizations of the same utility function?

Why, it's our old friend the alien god, of course! Natural selection is guaranteed free of all mercy, all love, all compassion, all aesthetic sensibilities, all political factionalism, all ideological allegiances, all academic ambitions, all libertarianism, all socialism, all Blue and all Green. Natural selection doesn't maximize its criterion of inclusive genetic fitness—it's not that smart. But when you look at the output of natural selection, you are guaranteed to be looking at an output that was optimized only for inclusive genetic fitness, and not the interests of the US agricultural industry.

In the case histories of evolutionary science—in, for example, The Tragedy of Group Selectionism—we can directly compare human rationalizations to the result of pure optimization for a known criterion. What did Wynne-Edwards think would be the result of group selection for small subpopulation sizes? Voluntary individual restraint in breeding, and enough food for everyone. What was the actual laboratory result? Cannibalism.

Now you might ask: Are these case histories of evolutionary science really relevant to human morality, which doesn't give two figs for inclusive genetic fitness when it gets in the way of love, compassion, aesthetics, healing, freedom, fairness, et cetera? Human societies didn't even have a concept of "inclusive genetic fitness" until the 20th century.

But I ask in return: If we can't see clearly the result of a single monotone optimization criterion—if we can't even train ourselves to hear a single pure note—then how will we listen to an orchestra? How will we see that "Always be selfish" or "Always obey the government" are poor guiding principles for human beings to adopt—if we think that even optimizing genes for inclusive fitness will yield organisms which sacrifice reproductive opportunities in the name of social resource conservation?

To train ourselves to see clearly, we need simple practice cases.

" } }, { "_id": "z5AukJtvFcq3M4DLq", "title": "Why aren’t our property rights over one another more transferable?", "pageUrl": "https://www.lesswrong.com/posts/z5AukJtvFcq3M4DLq/why-aren-t-our-property-rights-over-one-another-more", "postedAt": "2007-11-09T16:05:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "z5AukJtvFcq3M4DLq", "html": "

Why do people get married? If anyone ever proposes to me for a reason other than to surreptitiously steal my belongings or to get more Centrelink benefits, I would have to refuse them on the basis that I could not love a man so irrational.


~ all married/engaged folks please forgive me and freely assume I’m just rather jealous :) ~


What is the purpose of a contract to love someone forever?
If you anticipate loving them forever anyway, it would seem to be pointless. I’m told it is romantic nonetheless, but how is it romantic to take a legal precaution that implies some doubt that you will love each other forever?
On the off chance that you stop being in love with them, the last thing you want is to be legally bound to stick with them. And a legal obligation to actually love them is pretty laughable. Possibly if they stop loving you, you might want them to stick around regardless, but isn’t that rather selfish and desperate? Anyway, surely this is hardly the contingency people have in mind when saying their vows.


Anyway, now that divorce is allowed the whole thing seems to be completely meaningless, except if understood as a way of betting large swathes of assets on the outcomes of ones emotional attachments, with divorce lawyers and priests playing casino. If this is the kind of gambling that floats your boat it makes perfect sense, but perhaps you could benefit from counselling at some point.


I propose a solution for escaping most of the potential damage of weddings while retaining the romance they apparently emanate: short term marriage contracts. At the end of, say, six months (terms such as length should be completely flexible) you renew it, or don’t, and act accordingly. If your spouse forgets this anniversary you can give them a year off. The whole ceremony could be the same as before, with a minor alteration to the vows: ‘…in sickness or in health, to love and to cherish ’til death or May 17 – whichever comes first, do us part’. Plus you can have more parties later on.


(On the earlier point, if anyone is ever irrational enough to propose to me, and I usually consider them rational, perhaps I must conclude that they are irrational specifically with regards to me, so therefore may in fact love me. A heuristic for finding selectively crazy guys could be just what one needs. If this occurs then you all have permission to laugh at me lots.)

" } }, { "_id": "fATPBv4pnHC33EmJ2", "title": "Fake Morality", "pageUrl": "https://www.lesswrong.com/posts/fATPBv4pnHC33EmJ2/fake-morality", "postedAt": "2007-11-08T21:32:04.000Z", "baseScore": 117, "voteCount": 101, "commentCount": 105, "url": null, "contents": { "documentId": "fATPBv4pnHC33EmJ2", "html": "

God, say the religious fundamentalists, is the source of all morality; there can be no morality without a Judge who rewards and punishes.  If we did not fear hell and yearn for heaven, then what would stop people from murdering each other left and right?


Suppose Omega makes a credible threat that if you ever step inside a bathroom between 7AM and 10AM in the morning, he'll kill you.  Would you be panicked by the prospect of Omega withdrawing his threat?  Would you cower in existential terror and cry:  "If Omega withdraws his threat, then what's to keep me from going to the bathroom?"  No; you'd probably be quite relieved at your increased opportunity to, ahem, relieve yourself.


Which is to say:  The very fact that a religious person would be afraid of God withdrawing Its threat to punish them for committing murder, shows that they have a revulsion of murder which is independent of whether God punishes murder or not.  If they had no sense that murder was wrong independently of divine retribution, the prospect of God not punishing murder would be no more existentially horrifying than the prospect of God not punishing sneezing.

If Overcoming Bias has any religious readers left, I say to you: it may be that you will someday lose your faith: and on that day, you will not lose all sense of moral direction.  For if you fear the prospect of God not punishing some deed, that is a moral compass.  You can plug that compass directly into your decision system and steer by it.  You can simply not do whatever you are afraid God may not punish you for doing.  The fear of losing a moral compass is itself a moral compass.  Indeed, I suspect you are steering by that compass, and that you always have been.  As Piers Anthony once said, "Only those with souls worry over whether or not they have them."  s/soul/morality/ and the point carries.


You don't hear religious fundamentalists using the argument:  "If we did not fear hell and yearn for heaven, then what would stop people from eating pork?"  Yet by their assumptions - that we have no moral compass but divine reward and retribution - this argument should sound just as forceful as the other.


Even the notion that God threatens you with eternal hellfire, rather than cookies, piggybacks on a pre-existing negative value for hellfire.  Consider the following, and ask which of these two philosophers is really the altruist, and which is really selfish?

"You should be selfish, because when people set out to improve society, they meddle in their neighbors' affairs and pass laws and seize control and make everyone unhappy.  Take whichever job that pays the most money: the reason the job pays more is that the efficient market thinks it produces more value than its alternatives.  Take a job that pays less, and you're second-guessing what the market thinks will benefit society most."


"You should be altruistic, because the world is an iterated Prisoner's Dilemma, and the strategy that fares best is Tit for Tat with initial cooperation.  People don't like jerks.  Nice guys really do finish first.  Studies show that people who contribute to society and have a sense of meaning in their lives, are happier than people who don't; being selfish will only make you unhappy in the long run."

Blank out the recommendations of these two philosophers, and you can see that the first philosopher is using strictly prosocial criteria to justify his recommendations; to him, what validates an argument for selfishness is showing that selfishness benefits everyone.  The second philosopher appeals to strictly individual and hedonic criteria; to him, what validates an argument for altruism is showing that altruism benefits him as an individual: higher social status or more intense feelings of pleasure.


So which of these two is the actual altruist?  Whichever one actually holds open doors for little old ladies.

" } }, { "_id": "Masoq4NdmmGSiq2xw", "title": "Fake Selfishness", "pageUrl": "https://www.lesswrong.com/posts/Masoq4NdmmGSiq2xw/fake-selfishness", "postedAt": "2007-11-08T02:31:09.000Z", "baseScore": 76, "voteCount": 63, "commentCount": 72, "url": null, "contents": { "documentId": "Masoq4NdmmGSiq2xw", "html": "

Once upon a time, I met someone who proclaimed himself to be purely selfish, and told me that I should be purely selfish as well.  I was feeling mischievous(*) that day, so I said, "I've observed that with most religious people, at least the ones I meet, it doesn't matter much what their religion says, because whatever they want to do, they can find a religious reason for it.  Their religion says they should stone unbelievers, but they want to be nice to people, so they find a religious justification for that instead.  It looks to me like when people espouse a philosophy of selfishness, it has no effect on their behavior, because whenever they want to be nice to people, they can rationalize it in selfish terms."


And the one said, "I don't think that's true."


I said, "If you're genuinely selfish, then why do you want me to be selfish too?  Doesn't that make you concerned for my welfare?  Shouldn't you be trying to persuade me to be more altruistic, so you can exploit me?"

The one replied:  "Well, if you become selfish, then you'll realize that it's in your rational self-interest to play a productive role in the economy, instead of, for example, passing laws that infringe on my private property."


And I said, "But I'm a small-L libertarian already, so I'm not going to support those laws.  And since I conceive of myself as an altruist, I've taken a job that I expect to benefit a lot of people, including you, instead of a job that pays more.  Would you really benefit more from me if I became selfish?  Besides, is trying to persuade me to be selfish the most selfish thing you could be doing?  Aren't there other things you could do with your time that would bring much more direct benefits?  But what I really want to know is this:  Did you start out by thinking that you wanted to be selfish, and then decide this was the most selfish thing you could possibly do?  Or did you start out by wanting to convert others to selfishness, then look for ways to rationalize that as self-benefiting?"


And the one said, "You may be right about that last part," so I marked him down as intelligent.


(*)  Other mischievous questions to ask self-proclaimed Selfishes:  "Would you sacrifice your own life to save the entire human species?"  (If they notice that their own life is strictly included within the human species, you can specify that they can choose between dying immediately to save the Earth, or living in comfort for one more year and then dying along with Earth.)  Or, taking into account that scope insensitivity leads many people to be more concerned over one life than the Earth, "If you had to choose one event or the other, would you rather that you stubbed your toe, or that the stranger standing near the wall there gets horribly tortured for fifty years?"  (If they say that they'd be emotionally disturbed by knowing, specify that they won't know about the torture.)  "Would you steal a thousand dollars from Bill Gates if you could be guaranteed that neither he nor anyone else would ever find out about it?"  (Selfish libertarians only.)

" } }, { "_id": "QsMJQSFj7WfoTMNgW", "title": "The Tragedy of Group Selectionism", "pageUrl": "https://www.lesswrong.com/posts/QsMJQSFj7WfoTMNgW/the-tragedy-of-group-selectionism", "postedAt": "2007-11-07T07:47:05.000Z", "baseScore": 123, "voteCount": 105, "commentCount": 89, "url": null, "contents": { "documentId": "QsMJQSFj7WfoTMNgW", "html": "

Before 1966, it was not unusual to see serious biologists advocating evolutionary hypotheses that we would now regard as magical thinking. These muddled notions played an important historical role in the development of later evolutionary theory, error calling forth correction; like the folly of English kings provoking into existence the Magna Carta and constitutional democracy.

As an example of romance, Vero Wynne-Edwards, Warder Allee, and J. L. Brereton, among others, believed that predators would voluntarily restrain their breeding to avoid overpopulating their habitat and exhausting the prey population.

But evolution does not open the floodgates to arbitrary purposes. You cannot explain a rattlesnake's rattle by saying that it exists to benefit other animals who would otherwise be bitten. No outside Evolution Fairy decides when a gene ought to be promoted; the gene's effect must somehow directly cause the gene to be more prevalent in the next generation. It's clear why our human sense of aesthetics, witnessing a population crash of foxes who've eaten all the rabbits, cries "Something should've been done!" But how would a gene complex for restraining reproduction—of all things!—cause itself to become more frequent in the next generation?

A human being designing a neat little toy ecology—for entertainment purposes, like a model railroad—might be annoyed if their painstakingly constructed fox and rabbit populations self-destructed by the foxes eating all the rabbits and then dying of starvation themselves. So the human would tinker with the toy ecology—a fox-breeding-restrainer is the obvious solution that leaps to our human minds—until the ecology looked nice and neat. Nature has no human, of course, but that needn't stop us—now that we know what we want on aesthetic grounds, we just have to come up with a plausible argument that persuades Nature to want the same thing on evolutionary grounds.

Obviously, selection on the level of the individual won't produce individual restraint in breeding. Individuals who reproduce unrestrainedly will, naturally, produce more offspring than individuals who restrain themselves.

(Addendum: Individual selection will not produce individual sacrifice of breeding opportunities. Individual selection can certainly produce individuals who, after acquiring all available resources, use those resources to produce 4 big eggs instead of 8 small eggs—not to conserve social resources, but because that is the individual sweet spot for number of eggs * egg survival probability. This does not get rid of the commons problem.)
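
A toy version of that individual sweet spot, in Python (the functional form and numbers are invented for illustration, not taken from the post): expected surviving offspring is clutch size times per-egg survival, and survival falls as the clutch grows because each egg gets less provisioning.

```python
def per_egg_survival(clutch_size, resources=8.0):
    # Invented functional form: bigger clutches spread provisioning thinner.
    return min(1.0, resources / clutch_size ** 1.5)

def expected_survivors(clutch_size):
    return clutch_size * per_egg_survival(clutch_size)

best = max(range(1, 13), key=expected_survivors)
for n in (2, 4, 8):
    print(n, "eggs ->", round(expected_survivors(n), 2), "expected survivors")
print("individual sweet spot:", best, "eggs")
```

Under these made-up numbers the optimum comes out at four big eggs rather than eight small ones, but note that it is an individual optimum; nothing in the calculation conserves prey for anyone else.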

But suppose that the species population was broken up into subpopulations, which were mostly isolated, and only occasionally interbred. Then, surely, subpopulations that restrained their breeding would be less likely to go extinct, and would send out more messengers, and create new colonies to reinhabit the territories of crashed populations.

The problem with this scenario wasn't that it was mathematically impossible. The problem was that it was possible but very difficult.

The fundamental problem is that it's not only restrained breeders who reap the benefits of restrained breeding. If some foxes refrain from spawning cubs who eat rabbits, then the uneaten rabbits don't go to only cubs who carry the restrained-breeding adaptation. The unrestrained foxes, and their many more cubs, will happily eat any rabbits left unhunted. The only way the restraining gene can survive against this pressure, is if the benefits of restraint preferentially go to restrainers.

Specifically, the requirement is C/B < F_ST, where C is the cost of altruism to the donor, B is the benefit of altruism to the recipient, and F_ST is the spatial structure of the population: the average relatedness between a randomly selected organism and its randomly selected neighbor, where a "neighbor" is any other fox who benefits from an altruistic fox's restraint.  (I believe this is a derivation with different symbols, best one I could find online.)

So is the cost of restrained breeding sufficiently small, and the empirical benefit of less famine sufficiently large, compared to the empirical spatial structure of fox populations and rabbit populations, that the group selection argument can work?

The math suggests this is pretty unlikely. In this simulation, for example, the cost to altruists is 3% of fitness, pure altruist groups have a fitness twice as great as pure selfish groups, the subpopulation size is 25, and 20% of all deaths are replaced with messengers from another group: the result is polymorphic for selfishness and altruism. If the subpopulation size is doubled to 50, selfishness is fixed; if the cost to altruists is increased to 6%, selfishness is fixed; if the altruistic benefit is decreased by half, selfishness is fixed or in large majority. Neighborhood-groups must be very small, with only around 5 members, for group selection to operate when the cost of altruism exceeds 10%. This doesn't seem plausibly true of foxes restraining their breeding.
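
For readers who want to poke at numbers like these, here is a small deme-structured toy model in Python, loosely patterned on the quoted parameters (it is my own illustrative sketch, not the simulation the post cites, so exact thresholds will differ): altruists pay an individual fitness cost, demes with more altruists seed more of the shared migrant pool, and you can vary the deme size, cost, and migration rate to see when selfishness takes over.

```python
import random

N_GROUPS, GROUP_SIZE = 40, 25     # demes of 25, as in the quoted setup
COST, GROUP_BENEFIT, MIGRATION = 0.03, 1.0, 0.20
GENERATIONS = 200
random.seed(0)

groups = [[random.random() < 0.5 for _ in range(GROUP_SIZE)]   # True = altruist
          for _ in range(N_GROUPS)]

def offspring_of(group, k):
    """Within-group selection: sample k offspring, with altruists paying COST."""
    weights = [1.0 - COST if altruist else 1.0 for altruist in group]
    return random.choices(group, weights=weights, k=k)

for _ in range(GENERATIONS):
    # Between-group selection: demes with more altruists are more productive,
    # so they contribute more offspring to the shared migrant pool.
    migrant_pool = []
    for group in groups:
        productivity = 1.0 + GROUP_BENEFIT * sum(group) / GROUP_SIZE
        migrant_pool += offspring_of(group, round(GROUP_SIZE * MIGRATION * productivity))
    # Each deme refills mostly from its own offspring, partly from migrants.
    local_slots = round(GROUP_SIZE * (1 - MIGRATION))
    groups = [offspring_of(group, local_slots)
              + random.choices(migrant_pool, k=GROUP_SIZE - local_slots)
              for group in groups]

altruist_fraction = sum(map(sum, groups)) / (N_GROUPS * GROUP_SIZE)
print(f"altruist fraction after {GENERATIONS} generations: {altruist_fraction:.2f}")
```

The qualitative point survives in toys like this: group selection only gets real traction when demes are tiny, migration is structured, and the individual cost of altruism is small.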

You can guess by now, I think, that the group selectionists ultimately lost the scientific argument. The kicker was not the mathematical argument, but empirical observation: foxes didn't restrain their breeding (I forget the exact species of dispute; it wasn't foxes and rabbits), and indeed, predator-prey systems crash all the time. Group selectionism would later revive, somewhat, in drastically different form—mathematically speaking, there is neighborhood structure, which implies nonzero group selection pressure not necessarily capable of overcoming countervailing individual selection pressure, and if you don't take it into account your math will be wrong, full stop. And evolved enforcement mechanisms (not originally postulated) change the game entirely. So why is this now-historical scientific dispute worthy material for Overcoming Bias?

A decade after the controversy, a biologist had a fascinating idea. The mathematical conditions for group selection overcoming individual selection were too extreme to be found in Nature. Why not create them artificially, in the laboratory? Michael J. Wade proceeded to do just that, repeatedly selecting populations of insects for low numbers of adults per subpopulation. And what was the result? Did the insects restrain their breeding and live in quiet peace with enough food for all?

No; the adults adapted to cannibalize eggs and larvae, especially female larvae.

Of course selecting for small subpopulation sizes would not select for individuals who restrained their own breeding; it would select for individuals who ate other individuals' children. Especially the girls.

Once you have that experimental result in hand—and it's massively obvious in retrospect—then it suddenly becomes clear how the original group selectionists allowed romanticism, a human sense of aesthetics, to cloud their predictions of Nature.

This is an archetypal example of a missed Third Alternative, resulting from a rationalization of a predetermined bottom line which produced a fake justification and then motivatedly stopped. The group selectionists didn't start with clear, fresh minds, happen upon the idea of group selection, and neutrally extrapolate forward the probable outcome. They started out with the beautiful idea of fox populations voluntarily restraining their reproduction to what the rabbit population would bear, Nature in perfect harmony; then they searched for a reason why this would happen, and came up with the idea of group selection; then, since they knew what they wanted the outcome of group selection to be, they didn't look for any less beautiful and aesthetic adaptations that group selection would be more likely to promote instead. If they'd really been trying to calmly and neutrally predict the result of selecting for small subpopulation sizes resistant to famine, they would have thought of cannibalizing other organisms' children or some similarly "ugly" outcome—long before they imagined anything so evolutionarily outré as individual restraint in breeding!

This also illustrates the point I was trying to make in Einstein's Arrogance: With large answer spaces, nearly all of the real work goes into promoting one possible answer to the point of being singled out for attention. If a hypothesis is improperly promoted to your attention—your sense of aesthetics suggests a beautiful way for Nature to be, and yet natural selection doesn't involve an Evolution Fairy who shares your appreciation—then this alone may seal your doom, unless you can manage to clear your mind entirely and start over.

In principle, the world's stupidest person may say the Sun is shining, but that doesn't make it dark out. Even if an answer is suggested by a lunatic on LSD, you should be able to neutrally calculate the evidence for and against, and if necessary, un-believe.

In practice, the group selectionists were doomed because their bottom line was originally suggested by their sense of aesthetics, and Nature's bottom line was produced by natural selection. These two processes had no principled reason for their outputs to correlate, and indeed they didn't. All the furious argument afterward didn't change that.

If you start with your own desires for what Nature should do, consider Nature's own observed reasons for doing things, and then rationalize an extremely persuasive argument for why Nature should produce your preferred outcome for Nature's own reasons, then Nature, alas, still won't listen. The universe has no mind and is not subject to clever political persuasion. You can argue all day why gravity should really make water flow uphill, and the water just ends up in the same place regardless. It's like the universe plain isn't listening. J. R. Molloy said: "Nature is the ultimate bigot, because it is obstinately and intolerantly devoted to its own prejudices and absolutely refuses to yield to the most persuasive rationalizations of humans."

I often recommend evolutionary biology to friends just because the modern field tries to train its students against rationalization, error calling forth correction. Physicists and electrical engineers don't have to be carefully trained to avoid anthropomorphizing electrons, because electrons don't exhibit mindish behaviors. Natural selection creates purposefulnesses which are alien to humans, and students of evolutionary theory are warned accordingly. It's good training for any thinker, but it is especially important if you want to think clearly about other weird mindish processes that do not work like you do.

" } }, { "_id": "BahoNzY2pzSeM2Dtk", "title": "Beware of Stephen J. Gould", "pageUrl": "https://www.lesswrong.com/posts/BahoNzY2pzSeM2Dtk/beware-of-stephen-j-gould", "postedAt": "2007-11-06T05:22:39.000Z", "baseScore": 60, "voteCount": 52, "commentCount": 80, "url": null, "contents": { "documentId": "BahoNzY2pzSeM2Dtk", "html": "

Followup to:  Natural Selection's Speed Limit and Complexity Bound


If you've read anything Stephen J. Gould has ever said about evolutionary biology, I have some bad news for you.  In the field of evolutionary biology at large, Gould's reputation is mud.  Not because he was wrong.  Many honest scientists have made honest mistakes.  What Gould did was much worse, involving deliberate misrepresentation of science.


In his 1996 book Full House: The Spread of Excellence from Plato to Darwin, Stephen J. Gould explains how modern evolutionary biology is very naive about evolutionary progress.  Foolish evolutionary biologists, says Gould, believe that evolution has a preferred tendency toward progress and the accumulation of complexity.  But of course - Gould kindly explains - this is simply a statistical illusion, bolstered by the tendency to cite hand-picked sequences like bacteria, fern, dinosaurs, dog, man.  You could equally well explain this apparent progress by supposing that evolution is undergoing a random walk, sometimes losing complexity and sometimes gaining it.  If so, Gould says, there will be a left bound, a minimum at zero complexity, but no right bound, and the most complex organisms will seem to grow more complex over time.  Even though it's really just a random walk with no preference in either direction, the distribution widens and the tail gets longer.
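
Gould's statistical claim is easy to state as a toy simulation (mine, for illustration; Gould of course gave no code): give each lineage an unbiased up-or-down step in "complexity" with a floor at zero, and watch the right tail.

```python
import random

# Toy version (mine, not Gould's) of the random-walk argument: each lineage's
# "complexity" takes unbiased +1/-1 steps with a floor at zero.

random.seed(0)
lineages = [0.0] * 10_000        # everything starts at minimal complexity

for _ in range(1_000):
    # No step favors gaining complexity over losing it.
    lineages = [max(0.0, c + random.choice((-1.0, 1.0))) for c in lineages]

print("most complex lineage:", max(lineages))
print("lineages still at or near zero complexity:", sum(c <= 2 for c in lineages))
```

The single most complex lineage keeps climbing even though no individual step favors complexity; whether that observation says anything about real evolutionary theory is exactly what the rest of this post disputes.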


What romantics, ha ha, those silly evolutionary biologists, believing in progress!  It's a good thing we had a statistically sophisticated thinker like Stephen J. Gould to keep their misconceptions from infecting the general public.  Indeed, Stephen J. Gould was a hero - a martyr - because evolutionary biologists don't like it when you challenge their romantic preconceptions, and they persecuted him.  Or so Gould represented himself to the public.


There's just one problem:  It's extremely unlikely that any modern evolutionary theorist, however much a romantic, would believe that evolution was accumulating complexity.


There was once a time when many evolutionary biologists had a romantic conception of progress, evolution climbing ever-higher mountains of complexity, dinosaur to dog to man.  And there was a hero who challenged that widespread misconception.  The hero was George Williams, his challenge was successful, and his reputation rests securely in evolutionary biology today.


In a population at equilibrium, harmful mutations will be eliminated by death (or failure to reproduce) at the same rate they are introduced by copying errors.  A very severe mutation may be eliminated by an embryo that fails to develop, but a mutation that's lethal only one time out of 10,000 may spread to 10,000 people before it starts to be eliminated.  It takes the same amount of selection pressure to support minor or major adaptations; whether the adaptation was a big one or a small one, at equilibrium, mutations must be eliminated at the same rate they are introduced by copying errors.


A population cannot sustain too high a selection pressure - too many deaths or failures to reproduce - without dying out.  And it requires the same amount of selection to support any given amount of DNA against the degenerative pressure of copying errors.  This, in turn, implies an upper bound on the amount of DNA that can be sustained by selection against the degenerative pressure of copying errors.
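
To see roughly why this yields an upper bound, here is a back-of-the-envelope calculation (my sketch, using the standard mutation-load approximation rather than anything quoted from Williams; the per-base rate and genome sizes are invented round numbers): if a genome suffers U harmful copying errors per generation, then at equilibrium roughly a fraction 1 - exp(-U) of offspring must fail to reproduce just to hold the line.

```python
import math

MUTATION_RATE_PER_BASE = 1e-8    # assumed harmful-error rate per functional base

def selective_deaths_needed(functional_bases):
    """Fraction of offspring selection must remove each generation to balance
    copying errors, under the classic mutation-load approximation."""
    U = MUTATION_RATE_PER_BASE * functional_bases
    return 1 - math.exp(-U)

for functional_bases in (1e7, 1e8, 1e9):
    load = selective_deaths_needed(functional_bases)
    print(f"{functional_bases:.0e} functional bases -> "
          f"{load:.0%} of offspring lost to selection per generation")
```

Past some point the required death rate is more than any population can pay, which is the "maximum level of information content" Williams describes in the passage quoted next.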


The upshot, as George Williams wrote:


A certain amount of information is added by selection every generation.  At the same time, a certain amount is subtracted by randomizing processes.  The more information is already stored, the more would mutation and other random forces reduce it in a given time interval.  It is reasonable to suppose that there would be a maximum level of information content that could be maintained by selection in opposition to randomizing forces...


The view suggested here is that all organisms of above a certain low level organization - perhaps that of the simpler invertebrates - and beyond a certain geological period - perhaps the Cambrian - may have much the same amounts of [meaningful] information in their nuclei.


Saying this did not make Williams a heroic, persecuted martyr.  He simply won.  His arguments were accepted and biology moved on.  The book quoted above is Adaptation and Natural Selection, now showing its age but still considered a great classic.  The shift to a gene's-eye-view in evolutionary theory is sometimes called the \"Williams Revolution\", the other founders being Hamilton, John Maynard Smith, Trivers, and Dawkins as popularizer.  In short, Williams was not exactly Mr. Obscure.


And Williams wrote in 1966, thirty years before Gould wrote Full House.


If Gould had simply stolen Williams's ideas and presented them as his own, then he would have been guilty of plagiarism.  And yet at least the general public would have been accurately informed; in that sense, less damage would have been done to the public understanding of science.


But Gould's actual conduct was much stranger.  He wrote as if the entire Williams revolution had never occurred!  Gould attacked, as if they were still current views, romantic notions that no serious biologist had put forth since the 1960s.  Then Gould presented his own counterarguments to these no-longer-advocated views, and they were bizarre.  Evolution is a random walk in complexity, with a minimum at zero complexity and no upper bound?  But there is an upper bound!  Sheer chance explains why dogs are more complex than dinosaurs?  But they probably aren't!

\n

Why did Gould behave thus?  Two observations:  One, to bring order to a scientific field, it must first be in chaos.  Two, plagiarism is a crime that everyone understands.

\n

Gould undid the last thirty years of progress in his depiction of the field he was criticizing, pretending that evolutionary theory was in chaos, so he could depict himself as heroically bringing order to it.  If Gould had also passed off the accepted solutions as his own, he would have been caught, tried, and cast out of the public eye.  Newspaper editors may not be interested in mathematical arguments about evolutionary biology, but they understand stories about plagiarism and theft.  Once Gould's copying had been laid out next to the original, and eminent scientists attested to the identity, it would have been over.

\n

So instead Gould committed a crime so bizarre that it couldn't be explained to editors.  He stole Williams's chaos.

\n

(Incidentally, Gould's notion of a random walk in complexity has the same quality as the rest of his argument.  A genome acquires a beneficial allele at a readily calculable speed and probability, and until the complexity reaches equilibrium, new adaptations will tend to be acquired faster than old adaptations are lost to copying errors or environmental shifts.  The fewer adaptations have been acquired by a genome, the fewer are likely to be lost to a given event.  If complexity starts far below the equilibrium level, it will tend to increase.)

\n

All this that I have said, was a common pattern throughout Gould's \"work\".  And all this that I have said, is no news to professional biologists.  Here's John Maynard Smith:

\n
\n

\"Gould occupies a rather curious position, particularly on his side of the Atlantic. Because of the excellence of his essays, he has come to be seen by non-biologists as the preeminent evolutionary theorist. In contrast, the evolutionary biologists with whom I have discussed his work tend to see him as a man whose ideas are so confused as to be hardly worth bothering with, but as one who should not be publicly criticized because he is at least on our side against the creationists.  All this would not matter, were it not that he is giving non-biologists a largely false picture of the state of evolutionary theory.\"

\n
\n

John Maynard Smith was a genuinely great evolutionary biologist, the sort of man that Gould pretended to be.  But some readers may have to take my word for this, since the names of eminent scientists are often less well-known to the general public than the names of fast-talking scoundrels such as Uri Geller or Stephen J. Gould. 

\n

I am not calling Gould a scoundrel because he was wrong; honest scientists can make honest mistakes.  But Gould systematically misrepresented what other scientists thought; he deluded the public as to what evolutionary biologists were thinking.

\n

It is as if someone presented geocentric epicycles as the current belief in 21st-century astronomy, sharply criticized the complexity of all those circles orbiting circles, and argued for their own simpler model of planets that move in straight lines.

\n

Did Gould deliberately lie?  If not, he executed one of the most epic feats of self-deception in the history of marketing.  The eminent John Tooby speaks:

\n
\n

\"Although Gould characterizes his critics as \"anonymous\" and \"a tiny coterie,\" nearly every major evolutionary biologist of our era has weighed in in a vain attempt to correct the tangle of confusions that the higher profile Gould has inundated the intellectual world with.  The point is not that Gould is the object of some criticism -- so properly are we all -- it is that his reputation as a credible and balanced authority about evolutionary biology is non-existent among those who are in a professional position to know...
These [major evolutionary biologists] include Ernst Mayr, John Maynard Smith, George Williams, Bill Hamilton, Richard Dawkins, E.O. Wilson, Tim Clutton-Brock, Paul Harvey, Brian Charlesworth, Jerry Coyne, Robert Trivers, John Alcock, Randy Thornhill, and many others.\"

\n
\n

If Gould, after receiving that many corrections, managed to still not know the actually current beliefs in evolutionary biology, he must have had neutronium earplugs.  I'm not saying it's impossible, though, because it's amazing what people can not-know when their reputation depends on it.  But there comes a point in self-deception where it becomes morally indistinguishable from lying.  Consistently self-serving scientific \"error\", in the face of repeated correction and without informing others of the criticism, blends over into scientific fraud.

\n

And after all this, Gould is widely believed, by the general public and even by many scientists outside evolutionary biology, to be an evolutionary theorist of honorable reputation!  It is as if Immanuel Velikovsky had managed to make himself into the public face of astronomy.

\n

If you have read one of Gould's books, you are not to blame; but you must now do your best to un-believe it all - especially all the implied beliefs in evolutionary biology that Gould seemed to be attacking.

\n

And so as not to be accused of plagiarism myself, let me note that many others have said much of what I said here - only in politer academic language, with longer sentences, and without that specific example.  I thought it deserved a sharper presentation, for the benefit of the popular audience that Gould deluded; and a clear-cut example of Gould's \"work\", to show what the fuss is about.  Many academic writers on Gould could not speak as sharply as Gould deserved.  As I have no fear for my own reputation, I will say it plainly:  One way or another, knowingly or unknowingly, Gould deceived the trusting public and committed the moral equivalent of deliberate scientific fraud.

" } }, { "_id": "QcnkFgojszmo9k4xk", "title": "Natural Selection's Speed Limit and Complexity Bound", "pageUrl": "https://www.lesswrong.com/posts/QcnkFgojszmo9k4xk/natural-selection-s-speed-limit-and-complexity-bound", "postedAt": "2007-11-04T16:54:24.000Z", "baseScore": 12, "voteCount": 16, "commentCount": 105, "url": null, "contents": { "documentId": "QcnkFgojszmo9k4xk", "html": "

Followup to:  An Alien God, The Wonder of Evolution, Evolutions Are Stupid

\n

Yesterday, I wrote:

\n
\n

Humans can do things that evolutions probably can't do period over the expected lifetime of the universe.  As the eminent biologist Cynthia Kenyon once put it at a dinner I had the honor of attending, \"One grad student can do things in an hour that evolution could not do in a billion years.\"  According to biologists' best current knowledge, evolutions have invented a fully rotating wheel on a grand total of three occasions.

\n
\n

But then, natural selection has not been running for a mere million years.  It's been running for 3.85 billion years.   That's enough to do something natural selection \"could not do in a billion years\" three times.  Surely the cumulative power of natural selection is beyond human intelligence?

\n

Not necessarily.  There's a limit on how much complexity an evolution can support against the degenerative pressure of copying errors.

\n

\n

(Warning:  A simulation I wrote to verify the following arguments did not return the expected results.  See addendum and comments.)

\n

(Addendum 2:  This discussion has now been summarized in the Less Wrong Wiki.  I recommend reading that instead.)

\n

The vast majority of mutations are either neutral or detrimental; here we are focusing on detrimental mutations.  At equilibrium, the rate at which a detrimental mutation is introduced by copying errors, will equal the rate at which it is eliminated by selection.

\n

A copying error introduces a single instantiation of the mutated gene.  A death eliminates a single instantiation of the mutated gene. (We'll ignore the possibility that it's a homozygote, etc; a failure to mate also works, etc.)  If the mutation is severely detrimental, it will be eliminated very quickly - the embryo might just fail to develop.  But if the mutation only leads to a 0.01% probability of dying, it might spread to 10,000 people before one of them died.  On average, one detrimental mutation leads to one death; the weaker the selection pressure against it, the more likely it is to spread.  Again, at equilibrium, copying errors will introduce mutations at the same rate that selection eliminates them. One mutation, one death.

\n

This means that you need the same amount of selection pressure to keep a gene intact, whether it's a relatively important gene or a relatively unimportant one.  The more genes are around, the more selection pressure required.  Under too much selection pressure - too many children eliminated in each generation - a species will die out.
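To see the same point numerically, here is the textbook mutation-selection balance calculation (a standard deterministic approximation, added here purely as illustration; u and s are made-up values):

```python
# Deleterious allele maintained at mutation-selection balance (haploid-style
# approximation): equilibrium frequency q ~ u/s, so the selective eliminations
# it causes per individual per generation are s * q = u, independent of s.
u = 1e-8                      # mutation rate into the deleterious allele
for s in (0.5, 0.01, 1e-4):   # how harmful the allele is to carriers
    q = u / s                 # equilibrium frequency of the bad allele
    load = s * q              # selective deaths per individual per generation
    print(f"s = {s:<6g} equilibrium frequency ~ {q:.1e}, load ~ {load:.1e}")
```

The load comes out to u for every value of s: keeping a gene intact costs the same selection pressure whether the gene is vital or only slightly useful.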

\n

We can quantify selection pressure as follows:  Suppose that 2 parents give birth to an average of 16 children.  On average all but 2 children must either die or fail to reproduce.  Otherwise the species population very quickly goes to zero or infinity.  From 16 possibilities, all but 2 are eliminated - we can call this 3 bits of selection pressure.  Not bits as in bytes on a hard drive, but mathematician's bits, information-theoretic bits; one bit is the ability to eliminate half the possibilities.  This is the speed limit on evolution.
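The bit-counting itself is one line of arithmetic; a minimal sketch (my own, just restating the definition above):

```python
import math

def selection_bits(children_per_pair, survivors_per_pair=2):
    """Selection pressure per generation, in information-theoretic bits."""
    return math.log2(children_per_pair / survivors_per_pair)

print(selection_bits(16))  # 3.0 bits: the 16-children example above
print(selection_bits(4))   # 1.0 bit: roughly the mammalian figure used below
```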

\n

Among mammals, it's safe to say that the selection pressure per generation is on the rough order of 1 bit.  Yes, many mammals give birth to more than 4 children, but selection does not perfectly eliminate all but the fittest organisms, either.  The speed limit on evolution is an upper bound, not an average.

\n

This 1 bit per generation has to be divided up among all the genetic variants being selected on, for the whole population.  It's not 1 bit per organism per generation, it's 1 bit per gene pool per generation.  Suppose there's some amazingly beneficial mutation making the rounds, so that organisms with the mutation have 50% more offspring.  And suppose there's another less beneficial mutation, that only contributes 1% to fitness.  Very often, an organism that lacks the 1% mutation, but has the 50% mutation, will outreproduce another who has the 1% mutation but not the 50% mutation.

\n

There are limiting forces on variance; going from 10 to 20 children is harder than going from 1 to 2 children.  There's only so much selection to go around, and beneficial mutations compete to be promoted by it (metaphorically speaking).  There's an upper bound, a speed limit to evolution:  If Nature kills off a grand total of half the children, then the gene pool of the next generation can acquire a grand total of 1 bit of information.

\n

I am informed that this speed limit holds even with semi-isolated breeding subpopulations, sexual reproduction, chromosomal linkages, and other complications.

\n

Let's repeat that.  It's worth repeating.  A mammalian gene pool can acquire at most 1 bit of information per generation.

\n

Among mammals, the rate of DNA copying errors is roughly 10^-8 per base per generation.  Copy a hundred million DNA bases, and on average, one will copy incorrectly.  One mutation, one death; each non-junk base of DNA soaks up the same amount of selection pressure to counter the degenerative pressure of copying errors.  It's a truism among biologists that most selection pressure goes toward maintaining existing genetic information, rather than promoting new mutations.

\n

Natural selection probably hit its complexity bound no more than a hundred million generations after multicellular organisms got started.  Since then, over the last 600 million years, evolutions have substituted new complexity for lost complexity, rather than accumulating adaptations.  Anyone who doubts this should read George Williams's classic \"Adaptation and Natural Selection\", which treats the point at much greater length.

\n

In material terms, a Homo sapiens genome contains roughly 3 billion bases.  We can see, however, that mammalian selection pressures aren't going to support 3 billion bases of useful information.  This was realized on purely mathematical grounds before \"junk DNA\" was discovered, before the Genome Project announced that humans probably had only 20-25,000 protein-coding genes.  Yes, there's genetic information that doesn't code for proteins - all sorts of regulatory regions and such.  But it is an excellent bet that nearly all the DNA which appears to be junk, really is junk.  Because, roughly speaking, an evolution isn't going to support more than 10^8 meaningful bases with 1 bit of selection pressure and a 10^-8 error rate.

\n

Each base is 2 bits.  A byte is 8 bits.  So the meaningful DNA specifying a human must fit into at most 25 megabytes.

\n

(Pause.)

\n

Yes.  Really.

\n

And the Human Genome Project gave the final confirmation.  25,000 genes plus regulatory regions will fit in 100,000,000 bases with lots of room to spare.

\n

Amazing, isn't it?
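For anyone who wants the arithmetic behind the 25-megabyte figure laid out explicitly, here it is as a few lines of Python (the same rough order-of-magnitude argument as above, not a precise model):

```python
mutation_rate = 1e-8          # copying errors per base per generation
selection_per_generation = 1  # ~1 bit of selection pressure per generation

# "One mutation, one death": selection can only counter about as many copying
# errors per generation as it has selection to spend, so the sustainable number
# of meaningful bases is on the order of selection_per_generation / mutation_rate.
max_meaningful_bases = selection_per_generation / mutation_rate  # ~1e8

bits_per_base = 2                                   # A, C, G, T
max_bytes = max_meaningful_bases * bits_per_base / 8
print(f"~{max_meaningful_bases:.0e} meaningful bases")   # ~1e+08
print(f"~{max_bytes / 1e6:.0f} megabytes")               # ~25
```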

\n

Addendum:  genetics.py, a simple Python program that simulates mutation and selection in a sexually reproducing population, is failing to match the result described above.  Sexual recombination is random, each pair of parents has 4 children, and the top half of the population is selected each time.  Wei Dai rewrote the program in C++ and reports that the supportable amount of genetic information increases as the inverse square of the mutation rate (?!), which, if generally true, would make it possible for the entire human genome to be meaningful.
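Since the post links genetics.py rather than reproducing it, here is a minimal sketch of the kind of simulation described - my reconstruction from the one-sentence description above, not the original program; population size, genome length, and mutation rate are arbitrary placeholders you would want to vary:

```python
import random

GENOME_LEN = 1000      # loci per genome (arbitrary placeholder)
POP_SIZE = 200         # individuals; must be even for the pairing below
MUTATION_RATE = 1e-3   # chance per locus per birth of breaking a working locus
GENERATIONS = 300

def child(mom, dad):
    """Free recombination: each locus drawn from either parent, then mutated."""
    genome = [m if random.random() < 0.5 else d for m, d in zip(mom, dad)]
    return [0 if (locus == 1 and random.random() < MUTATION_RATE) else locus
            for locus in genome]

# Start with every locus functional (1 = working, 0 = broken).
population = [[1] * GENOME_LEN for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    random.shuffle(population)
    offspring = []
    for i in range(0, POP_SIZE, 2):                 # each pair of parents...
        mom, dad = population[i], population[i + 1]
        offspring.extend(child(mom, dad) for _ in range(4))  # ...has 4 children
    offspring.sort(key=sum, reverse=True)           # fitness = working loci
    population = offspring[:POP_SIZE]               # keep the top half
    if gen % 50 == 0:
        mean_working = sum(map(sum, population)) / POP_SIZE
        print(f"generation {gen:3d}: mean working loci = {mean_working:.1f}")
```

Running it with different MUTATION_RATE values is the natural way to probe the disagreement the addendum describes - whether the sustainable number of working loci scales like 1/u or more like 1/u^2.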

\n

In the above post,  George Williams's arguments date back to 1966, and the result that the human genome contains <25,000 protein-coding regions comes from the Genome Project.  The argument that 2 parents having 16 children with 2 surviving implies a speed limit of 3 bits per generation was found here, and I understand that it dates back to Kimura's work in the 1950s.  However, the attempt to calculate a specific bound of 25 megabytes was my own.

\n

It's possible that the simulation contains a bug, or that I used unrealistic assumptions.  If the entire human genome of 3 billion DNA bases could be meaningful, it's not clear why it would contain <25,000 genes.  Empirically, an average of O(1) bits of genetic information per generation seems to square well with observed evolutionary times; we don't actually see species gaining thousands of bits per generation.  There is also no reason to believe that a dog has greater morphological or biochemical complexity than a dinosaur.  In short, only the math I tried to calculate myself should be regarded as having failed, not the beliefs that are in wider currency in evolutionary biology.  But until I understand what's going on, I would suggest citing only George Williams's arguments and the Genome Project result, not the specific mathematical calculation shown above.

" } }, { "_id": "W9PLjSLzoordAJMvr", "title": "Corporate ecology", "pageUrl": "https://www.lesswrong.com/posts/W9PLjSLzoordAJMvr/corporate-ecology", "postedAt": "2007-11-03T22:13:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "W9PLjSLzoordAJMvr", "html": "

Direct competition is resource intensive. Just to compete, species and companies have to invest heaps of energy in long trunks and roots, extra hunting and massive advertising campaigns for instance, instead of expanding or improving production. To avoid these costs they move into niches. Where there are multiple species or companies with very similar habits, one will eventually get an advantage somewhere and use it to get further ahead and outcompete the others. Consequently those that survive employ slightly different tactics and are spread between different habitats and markets. The fast food diverged from the fancy restaurants way back and nestled into more isolated markets. The fast food members have since emphasised their differences through differentiation of colourful plastic toys, varieties of hamburger and corporate identity, to appeal to different prey.

\n

Companies can even evolve according to the prey’s preferences, their appendages growing beautiful but functionless layers of plastic and coloured cardboard, along with scents precisely attuned to attract passing shoppers.

\n

All right, the mechanisms are half different (companies at least try to steer their behaviour, though I reckon natural selection comes in there to a great extent too). And the structure of the larger system is perhaps different (unless people are the decomposers, the production chain the trophic levels…yeah, whatev).


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "jAToJHtg39AMTAuJo", "title": "Evolutions Are Stupid (But Work Anyway)", "pageUrl": "https://www.lesswrong.com/posts/jAToJHtg39AMTAuJo/evolutions-are-stupid-but-work-anyway", "postedAt": "2007-11-03T15:45:39.000Z", "baseScore": 106, "voteCount": 91, "commentCount": 68, "url": null, "contents": { "documentId": "jAToJHtg39AMTAuJo", "html": "

Yesterday, I wrote:

Science has a very exact idea of the capabilities of evolution. If you praise evolution one millimeter higher than this, you're not \"fighting on evolution's side\" against creationism. You're being scientifically inaccurate, full stop.

In this post I describe some well-known inefficiencies and limitations of evolutions. I say \"evolutions\", plural, because fox evolution works at cross-purposes to rabbit evolution, and neither can talk to snake evolution to learn how to build venomous fangs.

So I am talking about limitations of evolution here, but this does not mean I am trying to sneak in creationism. This is standard Evolutionary Biology 201. (583 if you must derive the equations.) Evolutions, thus limited, can still explain observed biology; in fact the limitations are necessary to make sense of it. Remember that the wonder of evolutions is not how well they work, but that they work at all.

Human intelligence is so complicated that no one has any good way to calculate how efficient it is. Natural selection, though not simple, is simpler than a human brain; and correspondingly slower and less efficient, as befits the first optimization process ever to exist. In fact, evolutions are simple enough that we can calculate exactly how stupid they are.

Evolutions are slow. How slow? Suppose there's a beneficial mutation which conveys a fitness advantage of 3%: on average, bearers of this gene have 1.03 times as many children as non-bearers. Assuming that the mutation spreads at all, how long will it take to spread through the whole population? That depends on the population size. A gene conveying a 3% fitness advantage, spreading through a population of 100,000, would require an average of 768 generations to reach universality in the gene pool. A population of 500,000 would require 875 generations. The general formula is

t ≈ (2 / s) · ln(N)

where N is the population size, and (1 + s) is the fitness. (If each bearer of the gene has 1.03 times as many children as a non-bearer, s = 0.03.)

Thus, if the population size were 1,000,000—the estimated population in hunter-gatherer times—then it would require 2763 generations for a gene conveying a 1% advantage to spread through the gene pool.1
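As a check, a few lines of Python reproduce the numbers quoted above, applying the t ≈ (2 / s) · ln(N) approximation directly (nothing beyond the formula already given):

```python
import math

def generations_to_fixation(population_size, s):
    """Mean generations for an allele with fitness (1 + s) to spread to the
    whole gene pool, using the approximation t ~ (2 / s) * ln(N)."""
    return 2 * math.log(population_size) / s

print(round(generations_to_fixation(100_000, 0.03)))    # 768
print(round(generations_to_fixation(500_000, 0.03)))    # 875
print(round(generations_to_fixation(1_000_000, 0.01)))  # 2763
```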

This should not be surprising; genes have to do all their own work of spreading. There's no Evolution Fairy who can watch the gene pool and say, \"Hm, that gene seems to be spreading rapidly—I should distribute it to everyone.\" In a human market economy, someone who is legitimately getting 20% returns on investment—especially if there's an obvious, clear mechanism behind it—can rapidly acquire more capital from other investors; and others will start duplicate enterprises. Genes have to spread without stock markets or banks or imitators—as if Henry Ford had to make one car, sell it, buy the parts for 1.01 more cars (on average), sell those cars, and keep doing this until he was up to a million cars.

All this assumes that the gene spreads in the first place. Here the equation is simpler and ends up not depending at all on population size:

P(fixation) ≈ 2s

A mutation conveying a 3% advantage (which is pretty darned large, as mutations go) has a 6% chance of spreading, at least on that occasion.2 Mutations can happen more than once, but in a population of a million with a copying fidelity of 10^-8 errors per base per generation, you may have to wait a hundred generations for another chance, and then it still has only a 6% chance of fixating.

Still, in the long run, an evolution has a good shot at getting there eventually. (This is going to be a running theme.)

Complex adaptations take a very long time to evolve. First comes allele A, which is advantageous of itself, and requires a thousand generations to fixate in the gene pool. Only then can another allele B, which depends on A, begin rising to fixation. A fur coat is not a strong advantage unless the environment has a statistically reliable tendency to throw cold weather at you. Well, genes form part of the environment of other genes, and if B depends on A, B will not have a strong advantage unless A is reliably present in the genetic environment.

Let's say that B confers a 5% advantage in the presence of A, no advantage otherwise. Then while A is still at 1% frequency in the population, B only confers its advantage 1 out of 100 times, so the average fitness advantage of B is 0.05%, and B's probability of fixation is 0.1%. With a complex adaptation, first A has to evolve over a thousand generations, then B has to evolve over another thousand generations, then A* evolves over another thousand generations... and several million years later, you've got a new complex adaptation.
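The same arithmetic in code, combining Haldane's 2s rule for the chance of fixation with the dilution of B's advantage while A is still rare (my illustration of the two paragraphs above; the 1%, 3%, and 5% figures are the ones from the text):

```python
def fixation_probability(s):
    """Haldane's approximation: a single new mutation with small advantage s
    fixes with probability about 2s in a large population."""
    return 2 * s

print(fixation_probability(0.03))   # 0.06 -> the 3%-advantage example above

# Complex adaptation: B gives a 5% advantage, but only when A is present.
advantage_B_given_A = 0.05
freq_A = 0.01                                   # A still rare in the gene pool
effective_advantage_B = advantage_B_given_A * freq_A   # 0.0005, i.e. 0.05%
print(fixation_probability(effective_advantage_B))     # 0.001, i.e. 0.1%
```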

Then other evolutions don't imitate it. If snake evolution develops an amazing new venom, it doesn't help fox evolution or lion evolution.

Contrast all this to a human programmer, who can design a new complex mechanism with a hundred interdependent parts over the course of a single afternoon. How is this even possible? I don't know all the answer, and my guess is that neither does science; human brains are much more complicated than evolutions. I could wave my hands and say something like \"goal-directed backward chaining using combinatorial modular representations\", but you would not thereby be enabled to design your own human. Still: Humans can foresightfully design new parts in anticipation of later designing other new parts; produce coordinated simultaneous changes in interdependent machinery; learn by observing single test cases; zero in on problem spots and think abstractly about how to solve them; and prioritize which tweaks are worth trying, rather than waiting for a cosmic ray strike to produce a good one. By the standards of natural selection, this is simply magic.

Humans can do things that evolutions probably can't do period over the expected lifetime of the universe. As the eminent biologist Cynthia Kenyon once put it at a dinner I had the honor of attending, \"One grad student can do things in an hour that evolution could not do in a billion years.\" According to biologists' best current knowledge, evolutions have invented a fully rotating wheel on a grand total of three occasions.

And don't forget the part where the programmer posts the code snippet to the Internet.

Yes, some evolutionary handiwork is impressive even by comparison to the best technology of Homo sapiens. But our Cambrian explosion only started, we only really began accumulating knowledge, around... what, four hundred years ago? In some ways, biology still excels over the best human technology: we can't build a self-replicating system the size of a butterfly. In other ways, human technology leaves biology in the dust. We got wheels, we got steel, we got guns, we got knives, we got pointy sticks; we got rockets, we got transistors, we got nuclear power plants. With every passing decade, that balance tips further.

So, once again: for a human to look to natural selection as inspiration on the art of design, is like a sophisticated modern bacterium trying to imitate the first awkward replicator's biochemistry. The first replicator would be eaten instantly if it popped up in today's competitive ecology. The same fate would accrue to any human planner who tried making random point mutations to their strategies and waiting 768 iterations of testing to adopt a 3% improvement.

Don't praise evolutions one millimeter more than they deserve.

Coming tomorrow: More exciting mathematical bounds on evolution!


1 Graur, D. and Li, W.H. 2000. Fundamentals of Molecular Evolution, 2nd edition. Sinauer Associates, Sunderland, MA.

2 Haldane, J. B. S. 1927. A mathematical theory of natural and artificial selection. IV. Proc. Camb. Philos. Soc. 23:607-615.

" } }, { "_id": "DQmcGZJYtjtFR7TiX", "title": "Drawing lines and tigers", "pageUrl": "https://www.lesswrong.com/posts/DQmcGZJYtjtFR7TiX/drawing-lines-and-tigers", "postedAt": "2007-11-03T13:08:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "DQmcGZJYtjtFR7TiX", "html": "

There is a problem that catches the light occasionally, and is pushed off into political correctitude, but one day will have to be met. Humans are all as good as one another. If they are stupid or disabled or anything this doesn’t detract from their worth as people. This is fine – I’m not disagreeing. Animals are worth less than humans. Dead humans are worth less than humans. This is also fine, and I’m not disagreeing. However there’s an inconsistency.

\n

These views can only work as long as the gaps between these things and humans are not filled. Humanity isn’t binary. There is, at least potentially, a sliding scale between characteristic humanness and, say, characteristic antness, involving variations in many characteristics. Similarly for living and dead. At what point as you travel away from normal human characteristics do you suddenly draw a line and value a creature/person a little less?

\n

In practice as soon as you stop relating to them, but this is hardly the basis for a moral distinction. Wherever you draw a line, it must be admitted that it is arbitrary. So while we might take pride in our fair treatment of all mankind, regardless of their characteristics, we must agree that we could just as legitimately draw the line elsewhere and treat our celebratedly cared for lowest-capability people as animals.

\n

Aside from where to draw the line is the question of why to have one. Why does a characteristic (such as intelligence or ‘level of consciousness’) varying among animals vary their moral worth, while the same characteristic varying among humans doesn’t? Their differences are judged using different rules, but not because of relevant inherent differences.

\n

This problem hasn’t fully emerged with animals yet (perhaps more with dead people, and very little with robots), but that does little to the argument: our ethics are inconsistent.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "v96ALNEoQz9S8PmXN", "title": "A way to be more open minded (an experimental thought)", "pageUrl": "https://www.lesswrong.com/posts/v96ALNEoQz9S8PmXN/a-way-to-be-more-open-minded-an-experimental-thought", "postedAt": "2007-11-03T09:43:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "v96ALNEoQz9S8PmXN", "html": "

(As in actually open minded, not just comfortably sheltered from one’s narrow mindedness)

\n

A technique I noticed while experimenting with being wrong:
1. If you have an opinion on something, find an opposing one
2. Feel like you believe it (emotionally, not necessarily mentally – pretend you know it’s true and don’t think about whether it is). It doesn’t matter how averse you are to it – if somebody else can believe it, there are reasons to (not necessarily rational ones). Think of that reason and try out the associated emotions. Feel loyal, caring and understanding toward the idea’s followers. The important bit isn’t the belief, but its emotional effects – feel something about it.
3. Stop.

\n

Some justification for this touchy feely emotional garbage? To remove it from the equation.
I think the biggest fog over unbiased judgement is emotion. From a side of any battlefield there are fierce positive emotions radiating from your ideals and negative ones flying in from the other side. The correct side is obvious – the one supporting the good feelings! Open mindedness isn’t even called for. If it is brought out it is only to declare ‘I’m looking at the other side, and they look dangerous!’. But both sides are awash with emotions supporting them and driving them on. If they weren’t, there probably wouldn’t be an argument. Sound reasons devoid of emotional allure don’t pull the crowds. To be open minded it is necessary to neutralise emotion. But it isn’t enough just to acknowledge it – ‘well the other side clearly cares about X’ – if you actually feel something for the arguments on your side. You have to feel both sides or none. There’ll be plenty of non feeling once you’re dead, and giving up feelings can be hard, so try for feeling both initially, as outlined above. It’ll all fade away quickly and you’ll be more open minded.

\n

Note: do not necessarily think both sides are correct as a result. Just choose unemotionally, or taking all emotion into account.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "ZyNak8F6WXjuEbWWc", "title": "The Wonder of Evolution", "pageUrl": "https://www.lesswrong.com/posts/ZyNak8F6WXjuEbWWc/the-wonder-of-evolution", "postedAt": "2007-11-02T20:49:30.000Z", "baseScore": 103, "voteCount": 88, "commentCount": 85, "url": null, "contents": { "documentId": "ZyNak8F6WXjuEbWWc", "html": "

The wonder of evolution is that it works at all.

I mean that literally: If you want to marvel at evolution, that's what's marvel-worthy.

How does optimization first arise in the universe? If an intelligent agent designed Nature, who designed the intelligent agent? Where is the first design that has no designer? The puzzle is not how the first stage of the bootstrap can be super-clever and super-efficient; the puzzle is how it can happen at all.

Evolution resolves the infinite regression, not by being super-clever and super-efficient, but by being stupid and inefficient and working anyway. This is the marvel.

For professional reasons, I often have to discuss the slowness, randomness, and blindness of evolution. Afterward someone says: "You just said that evolution can't plan simultaneous changes, and that evolution is very inefficient because mutations are random. Isn't that what the creationists say? That you couldn't assemble a watch by randomly shaking the parts in a box?"

But the reply to creationists is not that you can assemble a watch by shaking the parts in a box. The reply is that this is not how evolution works. If you think that evolution does work by whirlwinds assembling 747s, then the creationists have successfully misrepresented biology to you; they've sold the strawman.

The real answer is that complex machinery evolves either incrementally, or by adapting previous complex machinery used for a new purpose. Squirrels jump from treetop to treetop using just their muscles, but the length they can jump depends to some extent on the aerodynamics of their bodies. So now there are flying squirrels, so aerodynamic they can glide short distances. If birds were wiped out, the descendants of flying squirrels might reoccupy that ecological niche in ten million years, gliding membranes transformed into wings. And the creationists would say, "What good is half a wing? You'd just fall down and splat. How could squirrelbirds possibly have evolved incrementally?"

That's how one complex adaptation can jump-start a new complex adaptation. Complexity can also accrete incrementally, starting from a single mutation.

First comes some gene A which is simple, but at least a little useful on its own, so that A increases to universality in the gene pool. Now along comes gene B, which is only useful in the presence of A, but A is reliably present in the gene pool, so there's a reliable selection pressure in favor of B. Now a modified version of A* arises, which depends on B, but doesn't break B's dependency on A/A*. Then along comes C, which depends on A* and B, and B*, which depends on A* and C. Soon you've got "irreducibly complex" machinery that breaks if you take out any single piece.

And yet you can still visualize the trail backward to that single piece: you can, without breaking the whole machine, make one piece less dependent on another piece, and do this a few times, until you can take out one whole piece without breaking the machine, and so on until you've turned a ticking watch back into a crude sundial.

Here's an example: DNA stores information very nicely, in a durable format that allows for exact duplication. A ribosome turns that stored information into a sequence of amino acids, a protein, which folds up into a variety of chemically active shapes. The combined system, DNA and ribosome, can build all sorts of protein machinery. But what good is DNA, without a ribosome that turns DNA information into proteins? What good is a ribosome, without DNA to tell it which proteins to make?

Organisms don't always leave fossils, and evolutionary biology can't always figure out the incremental pathway. But in this case we do know how it happened. RNA shares with DNA the property of being able to carry information and replicate itself, although RNA is less durable and copies less accurately. And RNA also shares the ability of proteins to fold up into chemically active shapes, though it's not as versatile as the amino acid chains of proteins. Almost certainly, RNA is the single A which predates the mutually dependent A* and B.

It's just as important to note that RNA does the combined job of DNA and proteins poorly, as that it does the combined job at all. It's amazing enough that a single molecule can both store information and manipulate chemistry. For it to do the job well would be a wholly unnecessary miracle.

What was the very first replicator ever to exist? It may well have been an RNA strand, because by some strange coincidence, the chemical ingredients of RNA are chemicals that would have arisen naturally on the prebiotic Earth of 4 billion years ago. Please note: evolution does not explain the origin of life; evolutionary biology is not supposed to explain the first replicator, because the first replicator does not come from another replicator. Evolution describes statistical trends in replication. The first replicator wasn't a statistical trend, it was a pure accident. The notion that evolution should explain the origin of life is a pure strawman—more creationist misrepresentation.

If you'd been watching the primordial soup on the day of the first replicator, the day that reshaped the Earth, you would not have been impressed by how well the first replicator replicated. The first replicator probably copied itself like a drunken monkey on LSD. It would have exhibited none of the signs of careful fine-tuning embodied in modern replicators, because the first replicator was an accident. It was not needful for that single strand of RNA, or chemical hypercycle, or pattern in clay, to replicate gracefully. It just had to happen at all. Even so, it was probably very improbable, considered in an isolated event—but it only had to happen once, and there were a lot of tide pools. A few billions of years later, the replicators are walking on the moon.

The first accidental replicator was the most important molecule in the history of time. But if you praised it too highly, attributing to it all sorts of wonderful replication-aiding capabilities, you would be missing the whole point.

Don't think that, in the political battle between evolutionists and creationists, whoever praises evolution must be on the side of science. Science has a very exact idea of the capabilities of evolution. If you praise evolution one millimeter higher than this, you're not "fighting on evolution's side" against creationism. You're being scientifically inaccurate, full stop. You're falling into a creationist trap by insisting that, yes, a whirlwind does have the power to assemble a 747! Isn't that amazing! How wonderfully intelligent is evolution, how praiseworthy! Look at me, I'm pledging my allegiance to science! The more nice things I say about evolution, the more I must be on evolution's side against the creationists!

But to praise evolution too highly destroys the real wonder, which is not how well evolution designs things, but that a naturally occurring process manages to design anything at all.

So let us dispose of the idea that evolution is a wonderful designer, or a wonderful conductor of species destinies, which we human beings ought to imitate. For human intelligence to imitate evolution as a designer, would be like a sophisticated modern bacterium trying to imitate the first replicator as a biochemist. As T. H. Huxley, "Darwin's Bulldog", put it:

Let us understand, once and for all, that the ethical progress of society depends, not on imitating the cosmic process, still less in running away from it, but in combating it.

Huxley didn't say that because he disbelieved in evolution, but because he understood it all too well.

" } }, { "_id": "dbmCk8Z4iWkpBb38H", "title": "Some half-serious, half-formed thoughts on existing and so on", "pageUrl": "https://www.lesswrong.com/posts/dbmCk8Z4iWkpBb38H/some-half-serious-half-formed-thoughts-on-existing-and-so-on", "postedAt": "2007-11-02T11:20:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "dbmCk8Z4iWkpBb38H", "html": "

So I’ve been banging my head against a wall (metaphorically, almost not) for about a week and a half (or years on and off) about the apparent meaninglessness of anything and the difficulty of finding anything to do that is mildly satisfying next to the absurdity of existing. This is what I’ve come to:

\n

On lack of inherent meaning in anything:

\n

– Whether there is value inherent in the universe or not (by the way there’s not) doesn’t matter (nothing does! lol. But that’s not my point). Value that you choose to place on something is as legitimate as that which ‘God’ or anything else does. It would be impossible for a God or anything else to allocate value to things in any more legitimate a way. If they did, and you disagreed with them, why would their values have precedence? To give them precedence would be a value judgement. There is no better possibility than what we’ve got (similar to how there is no better version of free will than determinism).

\n

– Really it’s not that bad. You have the freedom (yay) to value what you think should be valued. If there were some fundamental ones one had to stick by, I’d probably whinge heaps more about that (and anyway, if not comfy with it you can probably find some place to live where some government will be willing to choose values for you – such as Australia it seems)

\n

– It is objectively better to value things, and to value things that other people’s values aren’t mutually exclusive with. ‘Better’ is defined in terms of the value placed on stuff (yours and others’) – if you value things more, there will be more value. So it will be better. If you value killing people etc. you will impinge on their (probably less messed up) experience of value quite considerably, so it will very likely not be better. In the end the goodness of anything is a practical question of whether the values of the individuals involved are fulfilled. Potential for this depends on them having values, and them not being contradictory.

\n

Note: there is a difference between indifference and not valuing things. You can just indifferently value whatever comes along, without caring what it is (though there are still other people’s fixed values to watch out for). This can kind of work.

\n

– You probably get on alright having your own values – knowingly chosen/based on biological and environmental effects – for things like wallpaper and lunch. Just do it for everything else (I don’t like AIDS because it doesn’t go with my sofa).

\n

On how to behave when the absurdity of existing at all is just so crazy that anything else seems incredibly unsatisfying in comparison:

\n

– Violence? Tried it this arvo for a bit. Distracting, yes. Fun, hell yes. Incredibly satisfying? Not really. A viable source of income? Possibly, but would have to find richer people to mug :)

\n

– If you really feel like hurting yourself just to feel something, physical violence is probably not the best bet. Before it hurts enough you will damage yourself, which isn’t useful. Try psychological torment ;D Some good bits of emotion can be had from just thinking about this kind of thing…satisfaction from the horror of dissatisfaction…mmm it’s even pleasingly recursive (I like recursion and I don’t care if God does). I had some other ideas, but I edited them out, as I feel bad about depressing people, ironically enough.

\n

– Seek satisfaction from the absurdity of existing, without doing anything about it? Just think about it and see how amazed you can be. I suspect not enough to seem appropriate, but what’s appropriate?

\n

– Try to be nice and save the world and stuff? As mentioned earlier, I think this is the inevitable conclusion I must come to, regardless of the source of its preferability. However I’m slightly inclined not to. On further introspection, I think this is merely because I just don’t want to follow all the people who are lefties or righties or whatever because they haven’t thought about any of this and are just engaging in smugness about their smugness about what they blindly assume is right. It’s just kind of lonely – I feel like a hypocrite and an outsider to their sentiments, which makes me angry, which makes me more right wing. This is a bad reason, and anything is going to be lonely, with or without other people to misunderstand me. So this one isn’t written off – in fact I think it is still going to be the inevitable conclusion.

\n

– Something that hasn’t been done before? Hard to find and once you’ve done it, it’s been done. Also, it is unlikely to be terribly satisfying. Things that are particularly satisfying have probably been done. The best candidate for ‘something that hasn’t been done and might be satisfying’ is something horrendously idealistic and difficult, like saving the world (from whatever, it’s irrelevant here). Which solves the problem in the last point, because when smug people with the same end goal as me talk to me I can at least say I want to save the world because it would be ‘kind of post-modern’. This will at least make it clear to them if we probably can’t relate to each other, and they will go away.

\n

– Hang around and think more about it? I am probably stupid enough to be wrong about how I’m even looking at these problems. Almost certainly in fact – to my knowledge, nobody exists who isn’t impressively stupid. This is one of the more interesting things to read/think about anyway.

\n

– Wait until one day I give up caring about whether things matter inherently or not, and be back to square one…until I stop caring about that…fuck…

\n

– Be relieved that as the kind of complicated biological and social thing you are, you have a good few pre-programmed preferences for things. You could chuck them all out the window, on the basis that they are arbitrary upshots of evolution. However so are you, and they are the arbitrary upshots you like, and you probably won’t find much satisfaction in not having them particularly. Also it’s hard to do properly and you probably can’t keep it up for that long (‘…it is inevitable to be drawn back into human drama…’).

\n

So there. I think I’ll go for a combination while I look for other things to think.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "pLRogvJLPPg6Mrvg4", "title": "An Alien God", "pageUrl": "https://www.lesswrong.com/posts/pLRogvJLPPg6Mrvg4/an-alien-god", "postedAt": "2007-11-02T06:57:36.000Z", "baseScore": 229, "voteCount": 190, "commentCount": 162, "url": null, "contents": { "documentId": "pLRogvJLPPg6Mrvg4", "html": "

"A curious aspect of the theory of evolution," said Jacques Monod, "is that everybody thinks he understands it."

A human being, looking at the natural world, sees a thousand times purpose. A rabbit's legs, built and articulated for running; a fox's jaws, built and articulated for tearing. But what you see is not exactly what is there...

In the days before Darwin, the cause of all this apparent purposefulness was a very great puzzle unto science. The Goddists said "God did it", because you get 50 bonus points each time you use the word "God" in a sentence. Yet perhaps I'm being unfair. In the days before Darwin, it seemed like a much more reasonable hypothesis. Find a watch in the desert, said William Paley, and you can infer the existence of a watchmaker.

But when you look at all the apparent purposefulness in Nature, rather than picking and choosing your examples, you start to notice things that don't fit the Judeo-Christian concept of one benevolent God. Foxes seem well-designed to catch rabbits. Rabbits seem well-designed to evade foxes. Was the Creator having trouble making up Its mind?

When I design a toaster oven, I don't design one part that tries to get electricity to the coils and a second part that tries to prevent electricity from getting to the coils. It would be a waste of effort. Who designed the ecosystem, with its predators and prey, viruses and bacteria? Even the cactus plant, which you might think well-designed to provide water and fruit to desert animals, is covered with inconvenient spines.

The ecosystem would make much more sense if it wasn't designed by a unitary Who, but, rather, created by a horde of deities—say from the Hindu or Shinto religions. This handily explains both the ubiquitous purposefulnesses, and the ubiquitous conflicts: More than one deity acted, often at cross-purposes. The fox and rabbit were both designed, but by distinct competing deities. I wonder if anyone ever remarked on the seemingly excellent evidence thus provided for Hinduism over Christianity. Probably not.

Similarly, the Judeo-Christian God is alleged to be benevolent—well, sort of. And yet much of nature's purposefulness seems downright cruel. Darwin came to suspect a non-standard Creator from studying Ichneumon wasps, whose paralyzing stings preserve their prey to be eaten alive by their larvae: "I cannot persuade myself," wrote Darwin, "that a beneficent and omnipotent God would have designedly created the Ichneumonidae with the express intention of their feeding within the living bodies of Caterpillars, or that a cat should play with mice." I wonder if any earlier thinker remarked on the excellent evidence thus provided for Manichaean religions over monotheistic ones.

By now we all know the punchline: You just say "evolution".

I worry that's how some people are absorbing the "scientific" explanation, as a magical purposefulness factory in Nature. I've previously discussed the case of Storm from the movie X-Men (in "Science as Attire", lesswrong.com/posts/4Bwr6s9dofvqPWakn/science-as-attire), who in one mutation gets the ability to throw lightning bolts. Why? Well, there's this thing called "evolution" that somehow pumps a lot of purposefulness into Nature, and the changes happen through "mutations". So if Storm gets a really large mutation, she can be redesigned to throw lightning bolts. Radioactivity is a popular super origin: radiation causes mutations, so more powerful radiation causes more powerful mutations. That's logic.

But evolution doesn't allow just any kind of purposefulness to leak into Nature. That's what makes evolution a success as an empirical hypothesis. If evolutionary biology could explain a toaster oven, not just a tree, it would be worthless. There's a lot more to evolutionary theory than pointing at Nature and saying, "Now purpose is allowed," or "Evolution did it!" The strength of a theory is not what it allows, but what it prohibits; if you can invent an equally persuasive explanation for any outcome, you have zero knowledge.

"Many non-biologists," observed George Williams, "think that it is for their benefit that rattles grow on rattlesnake tails." Bzzzt! This kind of purposefulness is not allowed. Evolution doesn't work by letting flashes of purposefulness creep in at random—reshaping one species for the benefit of a random recipient.

Evolution is powered by a systematic correlation between the different ways that different genes construct organisms, and how many copies of those genes make it into the next generation. For rattles to grow on rattlesnake tails, rattle-growing genes must become more and more frequent in each successive generation. (Actually genes for incrementally more complex rattles, but if I start describing all the fillips and caveats to evolutionary biology, we really will be here all day.)

There isn't an Evolution Fairy that looks over the current state of Nature, decides what would be a "good idea", and chooses to increase the frequency of rattle-constructing genes.

I suspect this is where a lot of people get stuck, in evolutionary biology. They understand that "helpful" genes become more common, but "helpful" lets any sort of purpose leak in. They don't think there's an Evolution Fairy, yet they ask which genes will be "helpful" as if a rattlesnake gene could "help" non-rattlesnakes.

The key realization is that there is no Evolution Fairy. There's no outside force deciding which genes ought to be promoted. Whatever happens, happens because of the genes themselves.

Genes for constructing (incrementally better) rattles, must have somehow ended up more frequent in the rattlesnake gene pool, because of the rattle. In this case it's probably because rattlesnakes with better rattles survive more often—rather than mating more successfully, or having brothers that reproduce more successfully, etc.

Maybe predators are wary of rattles and don't step on the snake. Or maybe the rattle diverts attention from the snake's head. (As George Williams suggests, "The outcome of a fight between a dog and a viper would depend very much on whether the dog initially seized the reptile by the head or by the tail.")

But that's just a snake's rattle. There are much more complicated ways that a gene can cause copies of itself to become more frequent in the next generation. Your brother or sister shares half your genes. A gene that sacrifices one unit of resources to bestow three units of resource on a brother, may promote some copies of itself by sacrificing one of its constructed organisms. (If you really want to know all the details and caveats, buy a book on evolutionary biology; there is no royal road.)

The main point is that the gene's effect must cause copies of that gene to become more frequent in the next generation. There's no Evolution Fairy that reaches in from outside. There's nothing which decides that some genes are "helpful" and should, therefore, increase in frequency. It's just cause and effect, starting from the genes themselves.

This explains the strange conflicting purposefulness of Nature, and its frequent cruelty. It explains even better than a horde of Shinto deities.

Why is so much of Nature at war with other parts of Nature? Because there isn't one Evolution directing the whole process. There's as many different "evolutions" as reproducing populations. Rabbit genes are becoming more or less frequent in rabbit populations. Fox genes are becoming more or less frequent in fox populations. Fox genes which construct foxes that catch rabbits, insert more copies of themselves in the next generation. Rabbit genes which construct rabbits that evade foxes are naturally more common in the next generation of rabbits. Hence the phrase "natural selection".

Why is Nature cruel? You, a human, can look at an Ichneumon wasp, and decide that it's cruel to eat your prey alive. You can decide that if you're going to eat your prey alive, you can at least have the decency to stop it from hurting. It would scarcely cost the wasp anything to anesthetize its prey as well as paralyze it. Or what about old elephants, who die of starvation when their last set of teeth fall out? These elephants aren't going to reproduce anyway. What would it cost evolution—the evolution of elephants, rather—to ensure that the elephant dies right away, instead of slowly and in agony? What would it cost evolution to anesthetize the elephant, or give it pleasant dreams before it dies? Nothing; that elephant won't reproduce more or less either way.

If you were talking to a fellow human, trying to resolve a conflict of interest, you would be in a good negotiating position—would have an easy job of persuasion. It would cost so little to anesthetize the prey, to let the elephant die without agony! Oh please, won't you do it, kindly... um...

There's no one to argue with.

Human beings fake their justifications, figure out what they want using one method, and then justify it using another method. There's no Evolution of Elephants Fairy that's trying to (a) figure out what's best for elephants, and then (b) figure out how to justify it to the Evolutionary Overseer, who (c) doesn't want to see reproductive fitness decreased, but is (d) willing to go along with the painless-death idea, so long as it doesn't actually harm any genes.

There's no advocate for the elephants anywhere in the system.

Humans, who are often deeply concerned for the well-being of animals, can be very persuasive in arguing how various kindnesses wouldn't harm reproductive fitness at all. Sadly, the evolution of elephants doesn't use a similar algorithm; it doesn't select nice genes that can plausibly be argued to help reproductive fitness. Simply: genes that replicate more often become more frequent in the next generation. Like water flowing downhill, and equally benevolent.

A human, looking over Nature, starts thinking of all the ways we would design organisms. And then we tend to start rationalizing reasons why our design improvements would increase reproductive fitness—a political instinct, trying to sell your own preferred option as matching the boss's favored justification.

And so, amateur evolutionary biologists end up making all sorts of wonderful and completely mistaken predictions. Because the amateur biologists are drawing their bottom line—and more importantly, locating their prediction in hypothesis-space—using a different algorithm than evolutions use to draw their bottom lines.

A human engineer would have designed human taste buds to measure how much of each nutrient we had, and how much we needed. When fat was scarce, almonds or cheeseburgers would taste delicious. But if you started to become obese, or if vitamins were lacking, lettuce would taste delicious. But there is no Evolution of Humans Fairy, which intelligently planned ahead and designed a general system for every contingency. It was a reliable invariant of humans' ancestral environment that calories were scarce. So genes whose organisms loved calories, became more frequent. Like water flowing downhill.

We are simply the embodied history of which organisms did in fact survive and reproduce, not which organisms ought prudentially to have survived and reproduced.

The human retina is constructed backward: The light-sensitive cells are at the back, and the nerves emerge from the front and go back through the retina into the brain. Hence the blind spot. To a human engineer, this looks simply stupid—and other organisms have independently evolved retinas the right way around. Why not redesign the retina?

The problem is that no single mutation will reroute the whole retina simultaneously. A human engineer can redesign multiple parts simultaneously, or plan ahead for future changes. But if a single mutation breaks some vital part of the organism, it doesn't matter what wonderful things a Fairy could build on top of it—the organism dies and the gene decreases in frequency.

If you turn around the retina's cells without also reprogramming the nerves and optic cable, the system as a whole won't work. It doesn't matter that, to a Fairy or a human engineer, this is one step forward in redesigning the retina. The organism is blind. Evolution has no foresight, it is simply the frozen history of which organisms did in fact reproduce. Evolution is as blind as a halfway-redesigned retina.

Find a watch in a desert, said William Paley, and you can infer the watchmaker. There were once those who denied this, who thought that life "just happened" without need of an optimization process, mice being spontaneously generated from straw and dirty shirts.

If we ask who was more correct—the theologians who argued for a Creator-God, or the intellectually unfulfilled atheists who argued that mice spontaneously generated—then the theologians must be declared the victors: evolution is not God, but it is closer to God than it is to pure random entropy. Mutation is random, but selection is non-random. This doesn't mean an intelligent Fairy is reaching in and selecting. It means there's a non-zero statistical correlation between the gene and how often the organism reproduces. Over a few million years, that non-zero statistical correlation adds up to something very powerful. It's not a god, but it's more closely akin to a god than it is to snow on a television screen.
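To make "that non-zero statistical correlation adds up" concrete, here is a minimal simulation sketch. All of the numbers in it are hypothetical, chosen only for illustration: an allele whose carriers average a 1% reproductive edge, starting at 5% of a population of 2,000, resampled blindly for up to 2,000 generations.

```python
import random

# Minimal sketch with hypothetical numbers: genes that replicate slightly more
# often become more frequent. No planner, no advocate, no foresight; just
# weighted resampling plus luck (drift).
POP_SIZE = 2_000
EDGE = 0.01          # carriers average 1% more offspring (assumed, for illustration)
GENERATIONS = 2_000

freq = 0.05          # the allele starts at 5% of the population
for gen in range(GENERATIONS):
    carrier_weight = freq * (1 + EDGE)
    expected_share = carrier_weight / (carrier_weight + (1 - freq))
    carriers = sum(random.random() < expected_share for _ in range(POP_SIZE))
    freq = carriers / POP_SIZE
    if freq in (0.0, 1.0):
        break

print(f"after {gen + 1} generations, allele frequency = {freq:.3f}")
```

Nothing in the loop ever asks whether the outcome is good for anyone; frequency simply follows reproduction, and a 1% edge usually compounds its way to fixation within a couple of thousand generations.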

In a lot of ways, evolution is like unto theology. "Gods are ontologically distinct from creatures," said Damien Broderick, "or they're not worth the paper they're written on." And indeed, the Shaper of Life is not itself a creature. Evolution is bodiless, like the Judeo-Christian deity. Omnipresent in Nature, immanent in the fall of every leaf. Vast as a planet's surface. Billions of years old. Itself unmade, arising naturally from the structure of physics. Doesn't that all sound like something that might have been said about God?

And yet the Maker has no mind, as well as no body. In some ways, its handiwork is incredibly poor design by human standards. It is internally divided. Most of all, it isn't nice.

In a way, Darwin discovered God—a God that failed to match the preconceptions of theology, and so passed unheralded. If Darwin had discovered that life was created by an intelligent agent—a bodiless mind that loves us, and will smite us with lightning if we dare say otherwise—people would have said "My gosh! That's God!"

But instead Darwin discovered a strange alien God—not comfortably "ineffable", but really genuinely different from us. Evolution is not a God, but if it were, it wouldn't be Jehovah. It would be H. P. Lovecraft's Azathoth, the blind idiot God burbling chaotically at the center of everything, surrounded by the thin monotonous piping of flutes.

Which you might have predicted, if you had really looked at Nature.

So much for the claim some religionists make, that they believe in a vague deity with a correspondingly high probability. Anyone who really believed in a vague deity, would have recognized their strange inhuman creator when Darwin said "Aha!"

So much for the claim some religionists make, that they are waiting innocently curious for Science to discover God. Science has already discovered the sort-of-godlike maker of humans—but it wasn't what the religionists wanted to hear. They were waiting for the discovery of their God, the highly specific God they want to be there. They shall wait forever, for the great discovery has already taken place, and the winner is Azathoth.

Well, more power to us humans. I like having a Creator I can outwit. Beats being a pet. I'm glad it was Azathoth and not Odin.

" } }, { "_id": "bfbiyTogEKWEGP96S", "title": "Fake Justification", "pageUrl": "https://www.lesswrong.com/posts/bfbiyTogEKWEGP96S/fake-justification", "postedAt": "2007-11-01T03:57:37.000Z", "baseScore": 130, "voteCount": 100, "commentCount": 59, "url": null, "contents": { "documentId": "bfbiyTogEKWEGP96S", "html": "

Many Christians who’ve stopped really believing now insist that they revere the Bible as a source of ethical advice. The standard atheist reply is given by Sam Harris: “You and I both know that it would take us five minutes to produce a book that offers a more coherent and compassionate morality than the Bible does.”1 Similarly, one may try to insist that the Bible is valuable as a literary work. Then why not revere Lord of the Rings, a vastly superior literary work? And despite the standard criticisms of Tolkien’s morality, Lord of the Rings is at least superior to the Bible as a source of ethics. So why don’t people wear little rings around their necks, instead of crosses? Even Harry Potter is superior to the Bible, both as a work of literary art and as moral philosophy.2

“How can you justify buying a $1 million gem-studded laptop,” you ask your friend, “when so many people have no laptops at all?” And your friend says, “But think of the employment that this will provide—to the laptop maker, the laptop maker’s advertising agency—and then they’ll buy meals and haircuts—it will stimulate the economy and eventually many people will get their own laptops.” But it would be even more efficient to buy 5,000 One Laptop Per Child laptops, thus providing employment to the OLPC manufacturers and giving out laptops directly.

I’ve touched before on the failure to look for third alternatives. But this is not really motivated stopping. Calling it “motivated stopping” would imply that there was a search carried out in the first place.

In “The Bottom Line,” I observed that only the real determinants of our beliefs can ever influence our real-world accuracy. Only the real determinants of our actions can influence our effectiveness in achieving our goals. Someone who buys a million-dollar laptop was really thinking, “Ooh, shiny,” and that was the one true causal history of their decision to buy a laptop. No amount of “justification” can change this, unless the justification is a genuine, newly running search process that can change the conclusion. Really change the conclusion. Most criticism carried out from a sense of duty is more of a token inspection than anything else. Free elections in a one-party country.

To genuinely justify the Bible as an object of laudation by reference to its literary quality, you would have to somehow perform a neutral reading through candidate books until you found the book of highest literary quality. Renown is one reasonable criterion for generating candidates, so I suppose you could legitimately end up reading Shakespeare, the Bible, and Gödel, Escher, Bach. (Otherwise it would be quite a coincidence to find the Bible as a candidate, among a million other books.) The real difficulty is in that “neutral reading” part. Easy enough if you’re not a Christian, but if you are . . .

But of course nothing like this happened. No search ever occurred. Writing the justification of “literary quality” above the bottom line of “I ♡ the Bible” is a historical misrepresentation of how the bottom line really got there, like selling cat milk as cow milk. That is just not where the bottom line really came from. That is just not what originally happened to produce that conclusion.

If you genuinely subject your conclusion to a criticism that can potentially de-conclude it—if the criticism genuinely has that power—then that does modify “the real algorithm behind” your conclusion. It changes the entanglement of your conclusion over possible worlds. But people overestimate, by far, how likely they really are to change their minds.

With all those open minds out there, you’d think there’d be more belief-updating.

Let me guess: Yes, you admit that you originally decided you wanted to buy a million-dollar laptop by thinking, “Ooh, shiny.” Yes, you concede that this isn’t a decision process consonant with your stated goals. But since then, you’ve decided that you really ought to spend your money in such fashion as to provide laptops to as many laptopless wretches as possible. And yet you just couldn’t find any more efficient way to do this than buying a million-dollar diamond-studded laptop—because, hey, you’re giving money to a laptop store and stimulating the economy! Can’t beat that!

My friend, I am damned suspicious of this amazing coincidence. I am damned suspicious that the best answer under this lovely, rational, altruistic criterion X, is also the idea that just happened to originally pop out of the unrelated indefensible process Y. If you don’t think that rolling dice would have been likely to produce the correct answer, then how likely is it to pop out of any other irrational cognition?

It’s improbable that you used mistaken reasoning, yet made no mistakes.


1In Harris’ “Is Religion Built Upon Lies?” dialogue with Andrew Sullivan, http://www.samharris.org/site/full_text/debate-with-andrew-sullivan-part-two.

2If I really wanted to be cruel, I would compare the Bible to Jacqueline Carey’s Kushiel series.

" } }, { "_id": "3viwrvz7MJbrdi9gW", "title": "A Terrifying Halloween Costume", "pageUrl": "https://www.lesswrong.com/posts/3viwrvz7MJbrdi9gW/a-terrifying-halloween-costume", "postedAt": "2007-11-01T02:54:40.000Z", "baseScore": 11, "voteCount": 15, "commentCount": 10, "url": null, "contents": { "documentId": "3viwrvz7MJbrdi9gW", "html": "

After the jump, you can see me dressed up as something so horrifyingly dreadful that it surpasses the comprehension of a mortal human mind.

[Image: "Dust ..."]

" } }, { "_id": "i2ruK7M3coWfv8mfD", "title": "A Case Study of Motivated Continuation", "pageUrl": "https://www.lesswrong.com/posts/i2ruK7M3coWfv8mfD/a-case-study-of-motivated-continuation", "postedAt": "2007-10-31T01:27:19.000Z", "baseScore": 35, "voteCount": 29, "commentCount": 36, "url": null, "contents": { "documentId": "i2ruK7M3coWfv8mfD", "html": "

I am not wholly unsympathetic to the many commenters in Torture vs. Dust Specks who argued that it is preferable to inflict dust specks upon the eyes of 3^^^3 (an amazingly huge but finite number of) people, rather than torture one person for 50 years.  If you think that a dust speck is simply of no account unless it has other side effects - if you literally do not prefer zero dust specks to one dust speck - then your position is consistent.  (Though I suspect that many speckers would have expressed a preference if they hadn't known about the dilemma's sting.)


So I'm on board with the commenters who chose TORTURE, and I can understand the commenters who chose SPECKS.


But some of you said the question was meaningless; or that all morality was arbitrary and subjective; or that you needed more information before you could decide; or you talked about some other confusing aspect of the problem; and then you didn't go on to state a preference.


Sorry.  I can't back you on that one.

If you actually answer the dilemma, then no matter which option you choose, you're giving something up.  If you say SPECKS, you're giving up your claim on a certain kind of utilitarianism; you may worry that you're not being rational enough, or that others will accuse you of failing to comprehend large numbers.  If you say TORTURE, you're accepting an outcome that has torture in it.


I falsifiably predict that of the commenters who dodged, most of them saw some specific answer - either TORTURE or SPECKS - that they flinched away from giving.  Maybe for just a fraction of a second before the question-confusing operation took over, but I predict the flinch was there.  (To be specific:  I'm not predicting that you knew, and selected, and have in mind right now, some particular answer you're deliberately not giving.  I'm predicting that your thinking trended toward a particular uncomfortable answer, for at least one fraction of a second before you started finding reasons to question the dilemma itself.)


In "bioethics" debates, you very often see experts on bioethics discussing what they see as the pros and cons of, say, stem-cell research; and then, at the conclusion of their talk, they gravely declare that more debate is urgently needed, with participation from all stakeholders.  If you actually come to a conclusion, if you actually argue for banning stem cells, then people with relatives dying of Parkinson's will scream at you.  If you come to a conclusion and actually endorse stem cells, religious fundamentalists will scream at you.  But who can argue with a call to debate?


Uncomfortable with the way the evidence is trending on Darwinism versus creationism?  Consider the issue soberly, and decide that you need more evidence; you want archaeologists to dig up another billion fossils before you come to a conclusion.  That way you neither say something sacrilegious, nor relinquish your self-image as a rationalist.  Keep on doing this with all issues that look like they might be trending in an uncomfortable direction, and you can maintain a whole religion in your mind.


Real life is often confusing, and we have to choose anyway, because refusing to choose is also a choice.  The null plan is still a plan.  We always do something, even if it's nothing.  As Russell and Norvig put it, "Refusing to choose is like refusing to allow time to pass."


Ducking uncomfortable choices is a dangerous habit of mind.  There are certain times when it's wise to suspend judgment (for an hour, not a year).  When you're facing a dilemma all of whose answers seem uncomfortable, that is not one of those times!  Pick one of the uncomfortable answers as the best of an unsatisfactory lot.  If there's missing information, fill in the blanks with plausible assumptions or probability distributions.  Whatever it takes to overcome the basic flinch away from discomfort.  Then you can search for an escape route.


Until you pick one interim best guess, the discomfort will consume your attention, distract you from the search, tempt you to confuse the issue whenever your analysis seems to trend in a particular direction.


In real life, when people flinch away from uncomfortable choices, they often hurt others as well as themselves.  Refusing to choose is often one of the worst choices you can make.  Motivated continuation is not a habit of thought anyone can afford, egoist or altruist.  The cost of comfort is too high.  It's important to acquire that habit of gritting your teeth and choosing - just as important as looking for escape routes afterward.

" } }, { "_id": "3wYTFWY3LKQCnAptN", "title": "Torture vs. Dust Specks", "pageUrl": "https://www.lesswrong.com/posts/3wYTFWY3LKQCnAptN/torture-vs-dust-specks", "postedAt": "2007-10-30T02:50:28.000Z", "baseScore": 85, "voteCount": 83, "commentCount": 630, "url": null, "contents": { "documentId": "3wYTFWY3LKQCnAptN", "html": "

"What's the worst that can happen?" goes the optimistic saying.  It's probably a bad question to ask anyone with a creative imagination.  Let's consider the problem on an individual level: it's not really the worst that can happen, but would nonetheless be fairly bad, if you were horribly tortured for a number of years.  This is one of the worse things that can realistically happen to one person in today's world.


What's the least bad, bad thing that can happen?  Well, suppose a dust speck floated into your eye and irritated it just a little, for a fraction of a second, barely enough to make you notice before you blink and wipe away the dust speck.


For our next ingredient, we need a large number.  Let's use 3^^^3, written in Knuth's up-arrow notation:

3^^^3 = 3^^(3^^3) = 3^^7625597484987 = 3^3^3^...^3  (a tower of 3s, 7625597484987 layers tall)

3^^^3 is an exponential tower of 3s which is 7,625,597,484,987 layers tall.  You start with 1; raise 3 to the power of 1 to get 3; raise 3 to the power of 3 to get 27; raise 3 to the power of 27 to get 7625597484987; raise 3 to the power of 7625597484987 to get a number much larger than the number of atoms in the universe, but which could still be written down in base 10, on 100 square kilometers of paper; then raise 3 to that power; and continue until you've exponentiated 7625597484987 times.  That's 3^^^3.  It's the smallest simple inconceivably huge number I know.
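The tower-building procedure just described is easy to write down as a program, even though actually evaluating 3^^^3 is hopeless. Here is a small sketch of Knuth's up-arrow notation (not from the original post), called only on tiny arguments:

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow notation with n arrows; only feasible for very small inputs."""
    if n == 1:
        return a ** b
    result = 1
    for _ in range(b):    # apply the (n-1)-arrow operation b times, starting from 1
        result = up_arrow(a, n - 1, result)
    return result

print(up_arrow(3, 1, 3))   # 3^3  = 27
print(up_arrow(3, 2, 3))   # 3^^3 = 3^(3^3) = 7625597484987
# up_arrow(3, 3, 3) would be 3^^^3: the tower described above, 7,625,597,484,987
# layers tall, far beyond anything that could ever actually be computed.
```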


Now here's the moral dilemma.  If neither event is going to happen to you personally, but you still had to choose one or the other:


Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?


I think the answer is obvious.  How about you?

" } }, { "_id": "L32LHWzy9FzSDazEg", "title": "Motivated Stopping and Motivated Continuation", "pageUrl": "https://www.lesswrong.com/posts/L32LHWzy9FzSDazEg/motivated-stopping-and-motivated-continuation", "postedAt": "2007-10-28T23:10:26.000Z", "baseScore": 97, "voteCount": 81, "commentCount": 8, "url": null, "contents": { "documentId": "L32LHWzy9FzSDazEg", "html": "\n\n\n\n \n\n \n\n

While I disagree with some views of the Fast and Frugal crowd—in my opinion they make a few too many lemons into lemonade—it also seems to me that they tend to develop the most psychologically realistic models of any school of decision theory. Most experiments present the subjects with options, and the subject chooses an option, and that’s the experimental result. The frugalists realized that in real life, you have to generate your options, and they studied how subjects did that.


Likewise, although many experiments present evidence on a silver platter, in real life you have to gather evidence, which may be costly, and at some point decide that you have enough evidence to stop and choose. When you’re buying a house, you don’t get exactly ten houses to choose from, and you aren’t led on a guided tour of all of them before you’re allowed to decide anything. You look at one house, and another, and compare them to each other; you adjust your aspirations—reconsider how much you really need to be close to your workplace and how much you’re really willing to pay; you decide which house to look at next; and at some point you decide that you’ve seen enough houses, and choose.


Gilovich’s distinction between motivated skepticism and motivated credulity highlights how conclusions a person does not want to believe are held to a higher standard than conclusions a person wants to believe. A motivated skeptic asks if the evidence compels them to accept the conclusion; a motivated credulist asks if the evidence allows them to accept the conclusion.


I suggest that an analogous bias in psychologically realistic search is motivated stopping and motivated continuation: when we have a hidden motive for choosing the “best” current option, we have a hidden motive to stop, and choose, and reject consideration of any more options. When we have a hidden motive to reject the current best option, we have a hidden motive to suspend judgment pending additional evidence, to generate more options—to find something, anything, to do instead of coming to a conclusion.


A major historical scandal in statistics was R. A. Fisher, an eminent founder of the field, insisting that no causal link had been established between smoking and lung cancer. “Correlation is not causation,” he testified to Congress. Perhaps smokers had a gene which both predisposed them to smoke and predisposed them to lung cancer.


Or maybe Fisher’s being employed as a consultant for tobacco firms gave him a hidden motive to decide that the evidence already gathered was insufficient to come to a conclusion, and it was better to keep looking. Fisher was also a smoker himself, and died of colon cancer in 1962.1


Like many other forms of motivated skepticism, motivated continuation can try to disguise itself as virtuous rationality. Who can argue against gathering more evidence?2


I can. Evidence is often costly, and worse, slow, and there is certainly nothing virtuous about refusing to integrate the evidence you already have. You can always change your mind later.3


As for motivated stopping, it appears in every place a third alternative is feared, and wherever you have an argument whose obvious counterargument you would rather not see, and in other places as well. It appears when you pursue a course of action that makes you feel good just for acting, and so you’d rather not investigate how well your plan really worked, for fear of destroying the warm glow of moral satisfaction you paid good money to purchase.4 It appears wherever your beliefs and anticipations get out of sync, so you have a reason to fear any new evidence gathered.5


The moral is that the decision to terminate a search procedure (temporarily or permanently) is, like the search procedure itself, subject to bias and hidden motives. You should suspect motivated stopping when you close off search, after coming to a comfortable conclusion, and yet there’s a lot of fast cheap evidence you haven’t gathered yet—there are websites you could visit, there are counter-counter arguments you could consider, or you haven’t closed your eyes for five minutes by the clock trying to think of a better option. You should suspect motivated continuation when some evidence is leaning in a way you don’t like, but you decide that more evidence is needed—expensive evidence that you know you can’t gather anytime soon, as opposed to something you’re going to look up on Google in thirty minutes—before you’ll have to do anything uncomfortable.


1Ad hominem note: Fisher was a frequentist. Bayesians are more reasonable about inferring probable causality; see Judea Pearl’s Causality: Models, Reasoning, and Inference.


2Compare Robin Hanson, “Cut Medicine In Half,” Overcoming Bias (blog), September 10, 2007, http://www.overcomingbias.com/2007/09/cut-medicine-in.html.


3Apparent contradiction resolved as follows: Spending one hour discussing the problem, with your mind carefully cleared of all conclusions, is different from waiting ten years on another $20 million study.


4See “‘Can’t Say No’ Spending.” http://lesswrong.com/lw/kb/cant_say_no_spending.


5See “Belief in Belief” in Map and Territory.

\n\n" } }, { "_id": "sBsMM5cTkftS3W7qj", "title": "Bay Area Bayesians Unite!", "pageUrl": "https://www.lesswrong.com/posts/sBsMM5cTkftS3W7qj/bay-area-bayesians-unite", "postedAt": "2007-10-28T00:07:42.000Z", "baseScore": 2, "voteCount": 3, "commentCount": 15, "url": null, "contents": { "documentId": "sBsMM5cTkftS3W7qj", "html": "

Robin Hanson has his fellow GMU economists to talk to, but I'm not associated with a university and I live way out in the boondocks: the echoing emptiness of, er, Silicon Valley.


Overcoming Bias gets over 2000 visitors per day.  Surely some of you are from the Bay Area.  Would you be interested in a Bay Area meetup of Overcoming Bias readers?


Polls after the jump.  If you're interested at all, please vote in at least the closest-city poll.  Polls will be processed for a best-compromise value, not a binding modal result.  If I get at least 30 responses, I'll start looking into meetup locations.

" } }, { "_id": "faHbrHuPziFH7Ef7p", "title": "Why Are Individual IQ Differences OK?", "pageUrl": "https://www.lesswrong.com/posts/faHbrHuPziFH7Ef7p/why-are-individual-iq-differences-ok", "postedAt": "2007-10-26T21:50:54.000Z", "baseScore": 76, "voteCount": 66, "commentCount": 515, "url": null, "contents": { "documentId": "faHbrHuPziFH7Ef7p", "html": "

Idang Alibi of Abuja, Nigeria writes on the James Watson affair:


A few days ago, the Nobel Laureate, Dr. James Watson, made a remark that is now generating worldwide uproar, especially among blacks.  He said what to me looks like a self-evident truth.  He told The Sunday Times of London in an interview that in his humble opinion, black people are less intelligent than the White people...


An intriguing opening.  Is Idang Alibi about to take a position on the real heart of the uproar?


I do not know what constitutes intelligence.  I leave that to our so-called scholars.  But I do know that in terms of organising society for the benefit of the people living in it, we blacks have not shown any intelligence in that direction at all.  I am so ashamed of this and sometimes feel that I ought to have belonged to another race...


Darn, it's just a lecture on personal and national responsibility.  Of course, for African nationals, taking responsibility for their country's problems is the most productive attitude regardless.  But it doesn't engage with the controversies that got Watson fired.


Later in the article came this:


As I write this, I do so with great pains in my heart because I know that God has given intelligence in equal measure to all his children irrespective of the colour of their skin.


This intrigued me for two reasons:  First, I'm always on the lookout for yet another case of theology making a falsifiable experimental prediction.  And second, the prediction follows obviously if God is just, but what does skin colour have to do with it at all?


A great deal has already been said about the Watson affair, and I suspect that in most respects I have little to contribute that has not been said before.


But why is it that the rest of the world seems to think that individual genetic differences are okay, whereas racial genetic differences in intelligence are not?  Am I the only one who's every bit as horrified by the proposition that there's any way whatsoever to be screwed before you even start, whether it's genes or lead-based paint or Down's Syndrome?  What difference does skin colour make?  At all?


This is only half a rhetorical question.  Race adds extra controversy to anything; in that sense, it's obvious what difference skin colour makes politically.  However, the fact that this attitude is common should not cause us to overlook its insanity.  Some kind of different psychological processing is taking place around individually-unfair intelligence distributions, and group-unfair intelligence distributions.


So, in defiance of this psychological difference, and in defiance of politics, let me point out that a group injustice has no existence apart from injustice to individuals.  It's individuals who have brains to experience suffering.  It's individuals who deserve, and often don't get, a fair chance at life.  If God has not given intelligence in equal measure to all his children, God stands convicted of a crime against humanity, period.  Skin colour has nothing to do with it, nothing at all.


And I don't think there's any serious scholar of intelligence who disputes that God has been definitively shown to be most terribly unfair.  Never mind the airtight case that intelligence has a hereditary genetic component among individuals; if you think that being born with Down's Syndrome doesn't impact life outcomes, then you are on crack.  What about lead-based paint?  Does it not count, because parents theoretically could have prevented it but didn't?  In the beginning no one knew that it was damaging.  How is it just for such a tiny mistake to have such huge, irrevocable consequences?  And regardless, would not a just God damn us for only our own choices?  Kids don't choose to live in apartments with lead-based paint.


So much for God being \"just\", unless you count the people whom God has just screwed over.  Maybe that's part of the fuel in the burning controversy - that people do realize, on some level, the implications for religion.  They can rationalize away the implications of a child born with no legs, but not a child born with no possibility of ever understanding calculus.  But then this doesn't help explain the original observation, which is that people, for some odd reason, think that adding race makes it worse somehow.


And why is my own perspective, apparently, unusual?  Perhaps because I also think that intelligence deficits will be fixable given sufficiently advanced technology, biotech or nanotech.  When truly huge horrors are believed unfixable, the mind's eye tends to just skip over the hideous unfairness - for much the same reason you don't deliberately rest your hand on a hot stoveburner; it hurts.

" } }, { "_id": "vNBxmcHpnozjrJnJP", "title": "No One Knows What Science Doesn't Know", "pageUrl": "https://www.lesswrong.com/posts/vNBxmcHpnozjrJnJP/no-one-knows-what-science-doesn-t-know", "postedAt": "2007-10-25T23:47:47.000Z", "baseScore": 94, "voteCount": 82, "commentCount": 107, "url": null, "contents": { "documentId": "vNBxmcHpnozjrJnJP", "html": "

At a family party some years ago, one of my uncles remarked on how little science really knows.  For example, we still have no idea how gravity works - why things fall down.


"Actually, we do know how gravity works," I said.  (My father, a Ph.D. physicist, was also present; but he wasn't even touching this one.)


"We do?" said my uncle.


"Yes," I said, "Gravity is the curvature of spacetime."  At this point I had still swallowed Feynman's line about being able to explain physics to one's grandmother, so I continued:  "You could say that the Earth goes around the Sun in a straight line.  Imagine a graph that shows both space and time, so that a straight line shows steady movement and a curved line shows acceleration.  Then curve the graph paper itself.  When you try to draw a straight line on the curved paper, you'll get what looks like acceleration -"


"I never heard about anything like that," said my uncle.

When was the last time, in history, when it was possible for a single human to know the knowledge of the most advanced civilization?  I've seen various estimates for this - usually in the form of polymaths nominated for the position of "last person to know everything".  One plausible candidate is Leonardo da Vinci, who died in 1519 - shortly after the printing press began to become popular, and shortly before Copernicus inaugurated the scientific revolution.


In the ancestral environment it was possible to know everything, and nearly everyone did.  In hunter-gatherer bands of less than 200 people, with no written literature, all background knowledge was universal knowledge.  If one person, in a world containing 200 people total, discovered how gravity worked, you could certainly expect to hear about it.


In a world of 6 billion people, there is not one person alive who can say with certainty that science does not know a thing.  There is too much science.  Our current lifetimes are too short to learn more than a tiny fraction of it, and more is being produced all the time.


Even if last week's technical journal doesn't contain the answer to a mystery, that doesn't mean that no one knows it.  Maybe someone out there is typing up the paper at this very moment.  You can't generalize over all 6 billion people in the world because you haven't talked to all of them - which is a non-ancestral condition!  For the vast majority of humanity's evolutionary history, it was possible to meet everyone in your little world.  Now there's 6 billion people who might know the answer to any question you care to ask, and you can't ask all of them.


No one knows anymore what no one knows.


My uncle is not an isolated phenomenon.  I've met people who think that science knows nothing about the brain, that thought is a complete mystery unto us.  (My favorite was the fellow who confidently asserted that neuroscience had been unable to assign any function "to the cerebral cortex".)  As Tom McCabe put it:  "Anyone who claims that the brain is a total mystery should be slapped upside the head with the MIT Encyclopedia of the Cognitive Sciences.  All one thousand ninety-six pages of it."


I haven't seen the movie What The Bleep Do We Know, but if the horror stories are true, it's one long celebration of imaginary ignorance.  Particularly the "mysterious effect of conscious observation" in quantum physics, which was explained away as ordinary decoherence in the 1950s, but let's not get into that again.


Ignorance should not be celebrated in the first place; I've made this point before.  It is a corruption of curiosity to prefer the question to its answer.  Yet people seem to get a tremendous emotional kick out of not knowing something.  Worse, they think that the mysteriousness of a mysterious phenomenon indicates a special quality of the phenomenon itself, inferring that it is surely different-in-kind from phenomena labeled "understood".  If we are ignorant about a phenomenon, that is a fact about our state of mind, not a fact about the phenomenon itself.


In the ancestral environment, there was a certain permanence to the division between ignorance and knowledge.  If none of your fellow hunter-gatherers knew what made rain fall, it was likely that no one would ever find out in your grandchildren's lifetimes.  Today, the absence of knowledge is a fragile and temporary condition, like the darkness in a closet whose door happens to be shut.  A single thought can shatter the absence of thought.  Every scientific discovery ever made, destroyed an ancient absence-of-knowledge dating back to the dawn of time.  No one knows what 6 billion people don't know today, and still less does anyone know what 7 billion people will know tomorrow.

" } }, { "_id": "sBBGxdvhKcppQWZZE", "title": "Double Illusion of Transparency", "pageUrl": "https://www.lesswrong.com/posts/sBBGxdvhKcppQWZZE/double-illusion-of-transparency", "postedAt": "2007-10-24T23:06:29.000Z", "baseScore": 126, "voteCount": 87, "commentCount": 33, "url": null, "contents": { "documentId": "sBBGxdvhKcppQWZZE", "html": "

Followup to:  Explainers Shoot High, Illusion of Transparency

My first true foray into Bayes For Everyone was writing An Intuitive Explanation of Bayesian Reasoning, still one of my most popular works.  This is the Intuitive Explanation's origin story.

In December of 2002, I'd been sermonizing in a habitual IRC channel about what seemed to me like a very straightforward idea:  How words, like all other useful forms of thought, are secretly a disguised form of Bayesian inference.  I thought I was explaining clearly, and yet there was one fellow, it seemed, who didn't get it.  This worried me, because this was someone who'd been very enthusiastic about my Bayesian sermons up to that point.  He'd gone around telling people that Bayes was "the secret of the universe", a phrase I'd been known to use.

So I went into a private IRC conversation to clear up the sticking point.

 

And he still didn't get it.

I took a step back and explained the immediate prerequisites, which I had thought would be obvious -

He didn't understand my explanation of the prerequisites.

In desperation, I recursed all the way back to Bayes's Theorem, the ultimate foundation stone of -

He didn't know how to apply Bayes's Theorem to update the probability that a fruit is a banana, after it is observed to be yellow.  He kept mixing up p(b|y) and p(y|b).
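The two conditional probabilities really are easy to swap, and they are not interchangeable. Here is a tiny worked sketch of the banana update, with made-up priors and likelihoods chosen purely for illustration:

```python
# Hypothetical numbers, purely for illustration of Bayes's Theorem.
p_banana = 0.2                  # prior p(b): probability a random fruit is a banana
p_yellow_given_banana = 0.9     # likelihood p(y|b)
p_yellow_given_other = 0.1      # p(y|~b)

# Law of total probability: p(y) = p(y|b) p(b) + p(y|~b) p(~b)
p_yellow = (p_yellow_given_banana * p_banana
            + p_yellow_given_other * (1 - p_banana))

# Bayes's Theorem: p(b|y) = p(y|b) p(b) / p(y)
p_banana_given_yellow = p_yellow_given_banana * p_banana / p_yellow

print(f"p(y|b) = {p_yellow_given_banana:.2f}")    # 0.90
print(f"p(b|y) = {p_banana_given_yellow:.2f}")    # about 0.69: a different quantity
```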

It seems like a small thing, I know.  It's strange how small things can trigger major life-realizations.  Any former TAs among my readers are probably laughing:  I hadn't realized, until then, that instructors got misleading feedback.  Robin commented yesterday that the best way to aim your explanations is feedback from the intended audience, \"an advantage teachers often have\".  But what if self-anchoring also causes you to overestimate how much understanding appears in your feedback?

I fell prey to a double illusion of transparency.  First, I assumed that my words meant what I intended them to mean - that my listeners heard my intentions as though they were transparent.  Second, when someone repeated back my sentences using slightly different word orderings, I assumed that what I heard was what they had intended to say.  As if all words were transparent windows into thought, in both directions.

I thought that if I said, "Hey, guess what I noticed today!  Bayes's Theorem is the secret of the universe!", and someone else said, "Yes! Bayes's Theorem is the secret of the universe!", then this was what a successful teacher-student interaction looked like: knowledge conveyed and verified.  I'd read Pirsig and I knew, in theory, about how students learn to repeat back what the teacher says in slightly different words.  But I thought of that as a deliberate tactic to get good grades, and I wasn't grading anyone.

This may sound odd, but until that very day, I hadn't realized why there were such things as universities.  I'd thought it was just rent-seekers who'd gotten a lock on the credentialing system.  Why would you need teachers to learn?  That was what books were for.

But now a great and terrible light was dawning upon me.  Genuinely explaining complicated things took months or years, and an entire university infrastructure with painstakingly crafted textbooks and professional instructors.  You couldn't just tell people.

 

You're laughing at me right now, academic readers; but think back and you'll realize that academics are generally very careful not to tell the general population how difficult it is to explain things, because it would come across as condescending.  Physicists can't just say, \"What we do is beyond your comprehension, foolish mortal\" when Congress is considering their funding.  Richard Feynman once said that if you really understand something in physics you should be able to explain it to your grandmother.  I believed him.  I was shocked to discover it wasn't true.

But once I realized, it became horribly clear why no one had picked up and run with any of the wonderful ideas I'd been telling people about Artificial Intelligence.

If I wanted to explain all these marvelous ideas I had, I'd have to go back, and back, and back.  I'd have to start with the things I'd figured out before I was even thinking about Artificial Intelligence, the foundations without which nothing else would make sense.

Like all that stuff I'd worked out about human rationality, back at the dawn of time.

Which I'd considerably reworked after receiving my Bayesian Enlightenment.  But either way, I had to start with the foundations.  Nothing I said about AI was going to make sense unless I started at the beginning.  My listeners would just decide that emergence was a better explanation.

And the beginning of all things in the reworked version was Bayes, to which there didn't seem to be any decent online introduction for newbies.  Most sources just stated Bayes's Theorem and defined the terms.  This, I now realized, was not going to be sufficient.  The online sources I saw didn't even say why Bayes's Theorem was important.  E. T. Jaynes seemed to get it, but Jaynes spoke only in calculus - no hope for novices there.

So I mentally consigned everything I'd written before 2003 to the trash heap - it was mostly obsolete in the wake of my Bayesian Enlightenment, anyway - and started over at what I fondly conceived to be the beginning.

(It wasn't.)

And I would explain it so clearly that even grade school students would get it.

(They didn't.)

I had, and have, much left to learn about explaining.  But that's how it all began.

" } }, { "_id": "2TPph4EGZ6trEbtku", "title": "Explainers Shoot High. Aim Low!", "pageUrl": "https://www.lesswrong.com/posts/2TPph4EGZ6trEbtku/explainers-shoot-high-aim-low", "postedAt": "2007-10-24T01:13:54.000Z", "baseScore": 112, "voteCount": 82, "commentCount": 35, "url": null, "contents": { "documentId": "2TPph4EGZ6trEbtku", "html": "

Followup to:  Illusion of Transparency: Why No One Understands You, Expecting Short Inferential Distances


A few years ago, an eminent scientist once told me how he'd written an explanation of his field aimed at a much lower technical level than usual.  He had thought it would be useful to academics outside the field, or even reporters.  This ended up being one of his most popular papers within his field, cited more often than anything else he'd written.


The lesson was not that his fellow scientists were stupid, but that we tend to enormously underestimate the effort required to properly explain things.


He told me this, because I'd just told him about my experience publishing "An Intuitive Explanation of Bayesian Reasoning".  This is still one of my most popular, most blogged, and most appreciated works today.  I regularly get fan mail from formerly confused undergraduates taking statistics classes, and journalists, and professors from outside fields.  In short, I successfully hit the audience the eminent scientist had thought he was aiming for.


I'd thought I was aiming for elementary school.

Today, when I look back at the Intuitive Explanation, it seems pretty silly as an attempt on grade school.


(Then again, I get a roughly equal number of complaints that the Intuitive Explanation is too long and drawn-out, as that it is too short.  The current version does seem to be "just right" for a fair number of people.)


Explainers shoot way, way higher than they think they're aiming, thanks to the illusion of transparency and self-anchoring.  We miss the mark by several major grades of expertise.  Aiming for outside academics gets you an article that will be popular among specialists in your field.  Aiming at grade school (admittedly, naively so) will hit undergraduates.  This is not because your audience is more stupid than you think, but because your words are far less helpful than you think.  You're way way overshooting the target.  Aim several major gradations lower, and you may hit your mark.


PS:  I know and do confess that I need to work on taking my own advice.


Addendum:  With his gracious permission:  The eminent scientist was Ralph Merkle.

" } }, { "_id": "HLqWn5LASfhhArZ7w", "title": "Expecting Short Inferential Distances", "pageUrl": "https://www.lesswrong.com/posts/HLqWn5LASfhhArZ7w/expecting-short-inferential-distances", "postedAt": "2007-10-22T23:42:01.000Z", "baseScore": 402, "voteCount": 332, "commentCount": 106, "url": null, "contents": { "documentId": "HLqWn5LASfhhArZ7w", "html": "\n\n\n\n \n\n \n\n

Homo sapiens’s environment of evolutionary adaptedness (a.k.a. EEA or “ancestral environment”) consisted of hunter-gatherer bands of at most 200 people, with no writing. All inherited knowledge was passed down by speech and memory.


In a world like that, all background knowledge is universal knowledge. All information not strictly private is public, period.


In the ancestral environment, you were unlikely to end up more than one inferential step away from anyone else. When you discover a new oasis, you don’t have to explain to your fellow tribe members what an oasis is, or why it’s a good idea to drink water, or how to walk. Only you know where the oasis lies; this is private knowledge. But everyone has the background to understand your description of the oasis, the concepts needed to think about water; this is universal knowledge. When you explain things in an ancestral environment, you almost never have to explain your concepts. At most you have to explain one new concept, not two or more simultaneously.


In the ancestral environment there were no abstract disciplines with vast bodies of carefully gathered evidence generalized into elegant theories transmitted by written books whose conclusions are a hundred inferential steps removed from universally shared background premises.


In the ancestral environment, anyone who says something with no obvious support is a liar or an idiot. You’re not likely to think, “Hey, maybe this person has well-supported background knowledge that no one in my band has even heard of,” because it was a reliable invariant of the ancestral environment that this didn’t happen.


Conversely, if you say something blatantly obvious and the other person doesn’t see it, they’re the idiot, or they’re being deliberately obstinate to annoy you.


And to top it off, if someone says something with no obvious support and expects you to believe it—acting all indignant when you don’t—then they must be crazy.


Combined with the illusion of transparency and self-anchoring (the tendency to model other minds as though they were slightly modified versions of oneself), I think this explains a lot about the legendary difficulty most scientists have in communicating with a lay audience—or even communicating with scientists from other disciplines. When I observe failures of explanation, I usually see the explainer taking one step back, when they need to take two or more steps back. Or listeners assume that things should be visible in one step, when they take two or more steps to explain. Both sides act as if they expect very short inferential distances from universal knowledge to any new knowledge.


A biologist, speaking to a physicist, can justify evolution by saying it is the simplest explanation. But not everyone on Earth has been inculcated with that legendary history of science, from Newton to Einstein, which invests the phrase “simplest explanation” with its awesome import: a Word of Power, spoken at the birth of theories and carved on their tombstones. To someone else, “But it’s the simplest explanation!” may sound like an interesting but hardly knockdown argument; it doesn’t feel like all that powerful a tool for comprehending office politics or fixing a broken car. Obviously the biologist is infatuated with their own ideas, too arrogant to be open to alternative explanations which sound just as plausible. (If it sounds plausible to me, it should sound plausible to any sane member of my band.)


And from the biologist’s perspective, they can understand how evolution might sound a little odd at first—but when someone rejects evolution even after the biologist explains that it’s the simplest explanation, well, it’s clear that nonscientists are just idiots and there’s no point in talking to them.


A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don’t recurse far enough, you’re just talking to yourself.


If at any point you make a statement without obvious justification in arguments you’ve previously supported, the audience just thinks you’re crazy.


This also happens when you allow yourself to be seen visibly attaching greater weight to an argument than is justified in the eyes of the audience at that time. For example, talking as if you think “simpler explanation” is a knockdown argument for evolution (which it is), rather than a sorta-interesting idea (which it sounds like to someone who hasn’t been raised to revere Occam’s Razor).


Oh, and you’d better not drop any hints that you think you’re working a dozen inferential steps away from what the audience knows, or that you think you have special background knowledge not available to them. The audience doesn’t know anything about an evolutionary-psychological argument for a cognitive bias to underestimate inferential distances leading to traffic jams in communication. They’ll just think you’re condescending.


And if you think you can explain the concept of “systematically underestimated inferential distances” briefly, in just a few words, I’ve got some sad news for you . . .

\n\n" } }, { "_id": "sWtvoBsknYvS6QPTb", "title": "Self-Anchoring", "pageUrl": "https://www.lesswrong.com/posts/sWtvoBsknYvS6QPTb/self-anchoring", "postedAt": "2007-10-22T06:11:12.000Z", "baseScore": 48, "voteCount": 42, "commentCount": 10, "url": null, "contents": { "documentId": "sWtvoBsknYvS6QPTb", "html": "

Sometime between the age of 3 and 4, a human child becomes able, for the first time, to model other minds as having different beliefs.  The child sees a box, sees candy in the box, and sees that Sally sees the box.  Sally leaves, and then the experimenter, in front of the child, replaces the candy with pencils and closes the box so that the inside is not visible.  Sally returns, and the child is asked what Sally thinks is in the box.  Children younger than 3 say "pencils", children older than 4 say "candy".


Our ability to visualize other minds is imperfect.  Neural circuitry is not as flexible as a program fed to a general-purpose computer.  An AI, with fast read-write access to its own memory, might be able to create a distinct, simulated visual cortex to imagine what a human "sees".  We humans only have one visual cortex, and if we want to imagine what someone else is seeing, we've got to simulate it using our own visual cortex - put our own brains into the other mind's shoes.  And because you can't reconfigure memory to simulate a new brain from scratch, pieces of you leak into your visualization of the Other.

[Figure: the experimental display from Keysar, Barr, Balin, & Brauner (2000), as described below.]


The diagram above is from Keysar, Barr, Balin, & Brauner (2000).  The experimental subject, the "addressee", sat in front of an array of objects, viewed as seen on the left.  On the other side, across from the addressee, sat the "director", with the view as seen on the right.  The addressee had an unblocked view, which also allowed the addressee to see which objects were not visible to the director.


The experiment used the eye-tracking method: the direction of a subject's gaze can be measured using computer vision.  Tanenhaus et. al. (1995) had previously demonstrated that when people understand a spoken reference, their gaze fixates on the identified object almost immediately.


The key test was when the director said "Put the small candle next to the truck."  As the addressee can clearly observe, the director only knows about two candles, the largest and medium ones; the smallest candle is occluded.


And, lo and behold, subjects' eyes fixated on the occluded smallest candle an average of 1,487 milliseconds before they correctly identified the medium-sized candle as the one the director must have meant.


This seems to suggest that subjects first computed the meaning according to their brains' settings, their knowledge, and then afterward adjusted for the other mind's different knowledge.


Numerous experiments suggest that where there is adjustment, there is usually under-adjustment, which leads to anchoring.  In this case, "self-anchoring".


Barr (2003) argues that the processes are actually more akin to contamination and under-correction; we can't stop ourselves from leaking over, and then we can't correct for the leakage.  Different process, same outcome:


We can put our feet in other minds' shoes, but we keep our own socks on.


Barr, D. J. (2003). Listeners are mentally contaminated. Poster presented at the 44th annual meeting of the Psychonomic Society, Vancouver.


Keysar, B., Barr, D. J., Balin, J. A., & Brauner, J. S. (2000). Taking perspective in conversation: The role of mutual knowledge in comprehension. Psychological Science, 11, 32-38.


Perner, J., Leekam, S. R., & Wimmer, H. (1987). Three-year-olds’ difficulty with false belief: The case for a conceptual deficit. British Journal of Developmental Psychology, 5(2), 125–137.


Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634.

" } }, { "_id": "sSqoEw9eRP2kPKLCz", "title": "Illusion of Transparency: Why No One Understands You", "pageUrl": "https://www.lesswrong.com/posts/sSqoEw9eRP2kPKLCz/illusion-of-transparency-why-no-one-understands-you", "postedAt": "2007-10-20T23:49:30.000Z", "baseScore": 184, "voteCount": 173, "commentCount": 52, "url": null, "contents": { "documentId": "sSqoEw9eRP2kPKLCz", "html": "

In hindsight bias, people who know the outcome of a situation believe the outcome should have been easy to predict in advance. Knowing the outcome, we reinterpret the situation in light of that outcome. Even when warned, we can’t de-interpret to empathize with someone who doesn’t know what we know.

Closely related is the illusion of transparency: We always know what we mean by our words, and so we expect others to know it too. Reading our own writing, the intended interpretation falls easily into place, guided by our knowledge of what we really meant. It’s hard to empathize with someone who must interpret blindly, guided only by the words.

June recommends a restaurant to Mark; Mark dines there and discovers (a) unimpressive food and mediocre service or (b) delicious food and impeccable service. Then Mark leaves the following message on June’s answering machine: “June, I just finished dinner at the restaurant you recommended, and I must say, it was marvelous, just marvelous.” Keysar (1994) presented a group of subjects with scenario (a), and 59% thought that Mark’s message was sarcastic and that June would perceive the sarcasm.1 Among other subjects, told scenario (b), only 3% thought that June would perceive Mark’s message as sarcastic. Keysar and Barr (2002) seem to indicate that an actual voice message was played back to the subjects.2 Keysar (1998) showed that if subjects were told that the restaurant was horrible but that Mark wanted to conceal his response, they believed June would not perceive sarcasm in the (same) message:3

They were just as likely to predict that she would perceive sarcasm when he attempted to conceal his negative experience as when he had a positive experience and was truly sincere. So participants took Mark’s communicative intention as transparent. It was as if they assumed that June would perceive whatever intention Mark wanted her to perceive.4

“The goose hangs high” is an archaic English idiom that has passed out of use in modern language. Keysar and Bly (1995) told one group of subjects that “the goose hangs high” meant that the future looks good; another group of subjects learned that “the goose hangs high” meant the future looks gloomy.5 Subjects were then asked which of these two meanings an uninformed listener would be more likely to attribute to the idiom. Each group thought that listeners would perceive the meaning presented as “standard.”6

Keysar and Henly (2002) tested the calibration of speakers: Would speakers underestimate, overestimate, or correctly estimate how often listeners understood them?7 Speakers were given ambiguous sentences (“The man is chasing a woman on a bicycle.”) and disambiguating pictures (a man running after a cycling woman). Speakers were then asked to utter the words in front of addressees, and asked to estimate how many addressees understood the intended meaning. Speakers thought that they were understood in 72% of cases and were actually understood in 61% of cases. When addressees did not understand, speakers thought they did in 46% of cases; when addressees did understand, speakers thought they did not in only 12% of cases.

Additional subjects who overheard the explanation showed no such bias, expecting listeners to understand in only 56% of cases.

As Keysar and Barr note, two days before Germany’s attack on Poland, Chamberlain sent a letter intended to make it clear that Britain would fight if any invasion occurred. The letter, phrased in polite diplomatese, was heard by Hitler as conciliatory—and the tanks rolled.

Be not too quick to blame those who misunderstand your perfectly clear sentences, spoken or written. Chances are, your words are more ambiguous than you think.


1 Boaz Keysar, “The Illusory Transparency of Intention: Linguistic Perspective Taking in Text,” Cognitive Psychology 26 (2 1994): 165–208.

2 Boaz Keysar and Dale J. Barr, “Self-Anchoring in Conversation: Why Language Users Do Not Do What They ‘Should,’” in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (New York: Cambridge University Press, 2002), 150–166.

3 Boaz Keysar, “Language Users as Problem Solvers: Just What Ambiguity Problem Do They Solve?,” in Social and Cognitive Approaches to Interpersonal Communication, ed. Susan R. Fussell and Roger J. Kreuz (Mahwah, NJ: Lawrence Erlbaum Associates, 1998), 175–200.

4 The wording here is from Keysar and Barr.

5 Boaz Keysar and Bridget Bly, “Intuitions of the Transparency of Idioms: Can One Keep a Secret by Spilling the Beans?,” Journal of Memory and Language 34 (1 1995): 89–109.

6 Other idioms tested included “come the uncle over someone,” “to go by the board,” and “to lay out in lavender.” Ah, English, such a lovely language.

7 Boaz Keysar and Anne S. Henly, “Speakers’ Overestimation of Their Effectiveness,” Psychological Science 13 (3 2002): 207–212.

" } }, { "_id": "a5JAiTdytou3Jg749", "title": "Pascal's Mugging: Tiny Probabilities of Vast Utilities", "pageUrl": "https://www.lesswrong.com/posts/a5JAiTdytou3Jg749/pascal-s-mugging-tiny-probabilities-of-vast-utilities", "postedAt": "2007-10-19T23:37:38.000Z", "baseScore": 112, "voteCount": 81, "commentCount": 354, "url": null, "contents": { "documentId": "a5JAiTdytou3Jg749", "html": "

The most common formalizations of Occam's Razor, Solomonoff induction and Minimum Description Length, measure the program size of a computation used in a hypothesis, but don't measure the running time or space requirements of the computation.  What if this makes a mind vulnerable to finite forms of Pascal's Wager?  A compactly specified wager can grow in size much faster than it grows in complexity.  The utility of a Turing machine can grow much faster than its prior probability shrinks.

\n

Consider Knuth's up-arrow notation:

\n
3^3 = 27. 3^^3 = 3^(3^3) = 3^27 = 7,625,597,484,987. 3^^^3 = 3^^(3^^3) = 3^^7,625,597,484,987.
\n

In other words:  3^^^3 describes an exponential tower of threes 7625597484987 layers tall.  Since this number can be computed by a simple Turing machine, it contains very little information and requires a very short message to describe.  This, even though writing out 3^^^3 in base 10 would require enormously more writing material than there are atoms in the known universe (a paltry 10^80).
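
As a concrete aside (an illustrative sketch, not part of the original post), the up-arrow recursion fits in a few lines of Python, which is the whole point: the description is tiny even though the value is astronomically large.

```python
def up_arrow(a, n, b):
    """Knuth's up-arrow: a followed by n arrows, then b."""
    if n == 1:
        return a ** b                  # a single arrow is ordinary exponentiation
    if b == 0:
        return 1                       # base case of the recursion
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 27
print(up_arrow(3, 2, 3))   # 7625597484987, i.e. 3^^3
# up_arrow(3, 3, 3) is 3^^^3: the call is a handful of characters, but the
# evaluation would never finish in practice, and the result could not be
# written down inside the observable universe.
```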

\n

Now suppose someone comes to me and says, \"Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people.\"

\n

Call this Pascal's Mugging.

\n

\n

\"Magic powers from outside the Matrix\" are easier said than done - we have to suppose that our world is a computing simulation run from within an environment that can afford simulation of arbitrarily large finite Turing machines, and that the would-be wizard has been spliced into our own Turing tape and is in continuing communication with an outside operator, etc.

\n

Thus the Kolmogorov complexity of \"magic powers from outside the Matrix\" is larger than the mere English words would indicate.  Therefore the Solomonoff-inducted probability, two to the negative Kolmogorov complexity, is exponentially tinier than one might naively think.

\n

But, small as this probability is, it isn't anywhere near as small as 3^^^^3 is large.  If you take a decimal point, followed by a number of zeros equal to the length of the Bible, followed by a 1, and multiply this unimaginably tiny fraction by 3^^^^3, the result is pretty much 3^^^^3.
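
To put rough numbers on that (a back-of-the-envelope sketch; the four-million-character figure for the length of the Bible is an assumption for illustration, not from the post):

```python
import math

bible_zeros = 4_000_000                    # assumed number of zeros after the decimal point
log10_probability = -bible_zeros           # probability on the order of 10^(-4,000,000)

# Even the fourth level of the tower defining 3^^^3, namely 3^7,625,597,484,987,
# already has about 3.6 trillion decimal digits:
log10_fourth_level = 7_625_597_484_987 * math.log10(3)
print(f"{log10_fourth_level:.2e}")         # roughly 3.64e12

# log10(expected utility) = log10(utility) + log10(probability):
# 3.64e12 + (-4.0e6) barely dents the exponent, and 3^^^^3 is unimaginably
# larger still, so the product remains "pretty much 3^^^^3".
```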

\n

Most people, I think, envision an \"infinite\" God that is nowhere near as large as 3^^^^3.  \"Infinity\" is reassuringly featureless and blank.  \"Eternal life in Heaven\" is nowhere near as intimidating as the thought of spending 3^^^^3 years on one of those fluffy clouds.  The notion that the diversity of life on Earth springs from God's infinite creativity, sounds more plausible than the notion that life on Earth was created by a superintelligence 3^^^^3 bits large.  Similarly for envisioning an \"infinite\" God interested in whether women wear men's clothing, versus a superintelligence of 3^^^^3 bits, etc.

\n

The original version of Pascal's Wager is easily dealt with by the gigantic multiplicity of possible gods, an Allah for every Christ and a Zeus for every Allah, including the \"Professor God\" who places only atheists in Heaven.   And since all the expected utilities here are allegedly \"infinite\", it's easy enough to argue that they cancel out.  Infinities, being featureless and blank, are all the same size.

\n

But suppose I built an AI which worked by some bounded analogue of Solomonoff induction - an AI sufficiently Bayesian to insist on calculating complexities and assessing probabilities, rather than just waving them off as \"large\" or \"small\".

\n

If the probabilities of various scenarios considered did not exactly cancel out, the AI's action in the case of Pascal's Mugging would be overwhelmingly dominated by whatever tiny differentials existed in the various tiny probabilities under which 3^^^^3 units of expected utility were actually at stake.

\n

You or I would probably wave off the whole matter with a laugh, planning according to the dominant mainline probability:  Pascal's Mugger is just a philosopher out for a fast buck.

\n

But a silicon chip does not look over the code fed to it, assess it for reasonableness, and correct it if not.  An AI is not given its code like a human servant given instructions.  An AI is its code.  What if a philosopher tries Pascal's Mugging on the AI for a joke, and the tiny probabilities of 3^^^^3 lives being at stake, override everything else in the AI's calculations?   What is the mere Earth at stake, compared to a tiny probability of 3^^^^3 lives?

\n

How do I know to be worried by this line of reasoning?  How do I know to rationalize reasons a Bayesian shouldn't work that way?  A mind that worked strictly by Solomonoff induction would not know to rationalize reasons that Pascal's Mugging mattered less than Earth's existence.  It would simply go by whatever answer Solomonoff induction obtained.

\n

It would seem, then, that I've implicitly declared my existence as a mind that does not work by the logic of Solomonoff, at least not the way I've described it.  What am I comparing Solomonoff's answer to, to determine whether Solomonoff induction got it \"right\" or \"wrong\"?

\n

Why do I think it's unreasonable to focus my entire attention on the magic-bearing possible worlds, faced with a Pascal's Mugging?  Do I have an instinct to resist exploitation by arguments \"anyone could make\"?  Am I unsatisfied by any visualization in which the dominant mainline probability leads to a loss?  Do I drop sufficiently small probabilities from consideration entirely?  Would an AI that lacks these instincts be exploitable by Pascal's Mugging?

\n

Is it me who's wrong?  Should I worry more about the possibility of some Unseen Magical Prankster of very tiny probability taking this post literally, than about the fate of the human species in the \"mainline\" probabilities?

\n

It doesn't feel to me like 3^^^^3 lives are really at stake, even at very tiny probability.  I'd sooner question my grasp of \"rationality\" than give five dollars to a Pascal's Mugger because I thought it was \"rational\".

\n

Should we penalize computations with large space and time requirements?  This is a hack that solves the problem, but is it true? Are computationally costly explanations less likely?  Should I think the universe is probably a coarse-grained simulation of my mind rather than real quantum physics, because a coarse-grained human mind is exponentially cheaper than real quantum physics?  Should I think the galaxies are tiny lights on a painted backdrop, because that Turing machine would require less space to compute?

\n

Given that, in general, a Turing machine can increase in utility vastly faster than it increases in complexity, how should an Occam-abiding mind avoid being dominated by tiny probabilities of vast utilities?

\n

If I could formalize whichever internal criterion was telling me I didn't want this to happen, I might have an answer.

\n

I talked over a variant of this problem with Nick Hay, Peter de Blanc, and Marcello Herreshoff in summer of 2006.  I don't feel I have a satisfactory resolution as yet, so I'm throwing it open to any analytic philosophers who might happen to read Overcoming Bias.

" } }, { "_id": "PNXjsEGBpR2WjTdsH", "title": "Congratulations to Paris Hilton", "pageUrl": "https://www.lesswrong.com/posts/PNXjsEGBpR2WjTdsH/congratulations-to-paris-hilton", "postedAt": "2007-10-19T00:31:07.000Z", "baseScore": 3, "voteCount": 9, "commentCount": 97, "url": null, "contents": { "documentId": "PNXjsEGBpR2WjTdsH", "html": "

...on signing up for cryopreservation with the Cryonics Institute.

\n

(No, it's not a joke.)

\n

Anyone not signed up for cryonics has now lost the right to make fun of Paris Hilton,
because no matter what else she does wrong, and what else you do right,
all of it together can't outweigh the life consequences of that one little decision.

\n

Congratulations, Paris.  I look forward to meeting you someday.

\n

Addendum:  On Nov 28 '07, Paris Hilton denied being signed up for cryonics.  Oh well.

" } }, { "_id": "LxcJHS2Lt22mHQ4Hm", "title": "\"Can't Say No\" Spending", "pageUrl": "https://www.lesswrong.com/posts/LxcJHS2Lt22mHQ4Hm/can-t-say-no-spending", "postedAt": "2007-10-18T02:08:24.000Z", "baseScore": 32, "voteCount": 33, "commentCount": 33, "url": null, "contents": { "documentId": "LxcJHS2Lt22mHQ4Hm", "html": "

The observation that medical spending has zero net marginal effect is shocking, but not completely unprecedented.

\n\n

According to Der Spiegel in &quot;Too Much of a Good Thing: Choking on Aid Money in Africa&quot;, the Washington-based Center for Global Development calculated that it would require $3,521 of marginal development aid invested, per person, in order to increase per capita yearly income by $3.65 (one penny per day).

\n\n

The Kenyan economist James Shikwati is even more pessimistic in "For God's Sake, Please Stop the Aid!":  The net effect of Western aid to Africa is actively destructive (even when it isn't stolen to prop up corrupt regimes), a chaotic flux of money and goods that destroys local industry.

\n\n

What does aid to Africa have in common with healthcare spending? \nBesides, of course, that it's heartbreaking to just say no -

" } }, { "_id": "uHYYA32CKgKT3FagE", "title": "Hold Off On Proposing Solutions", "pageUrl": "https://www.lesswrong.com/posts/uHYYA32CKgKT3FagE/hold-off-on-proposing-solutions", "postedAt": "2007-10-17T03:16:04.000Z", "baseScore": 137, "voteCount": 112, "commentCount": 52, "url": null, "contents": { "documentId": "uHYYA32CKgKT3FagE", "html": "\n\n\n\n \n\n \n\n

From Robyn Dawes’s Rational Choice in an Uncertain World.1 Bolding added.

\n\n
\n \n\n

Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem. Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested. Maier enacted an edict to enhance group problem solving: “Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any.” It is easy to show that this edict works in contexts where there are objectively defined good solutions to problems.

\n\n

Maier devised the following “role playing” experiment to demonstrate his point. Three employees of differing ability work on an assembly line. They rotate among three jobs that require different levels of ability, because the most able—who is also the most dominant—is strongly motivated to avoid boredom. In contrast, the least able worker, aware that he does not perform the more difficult jobs as well as the other two, has agreed to rotation because of the dominance of his able co-worker. An “efficiency expert” notes that if the most able employee were given the most difficult task and the least able the least difficult, productivity could be improved by 20%, and the expert recommends that the employees stop rotating. The three employees and . . . a fourth person designated to play the role of foreman are asked to discuss the expert’s recommendation. Some role-playing groups are given Maier’s edict not to discuss solutions until having discussed the problem thoroughly, while others are not. Those who are not given the edict immediately begin to argue about the importance of productivity versus worker autonomy and the avoidance of boredom. Groups presented with the edict have a much higher probability of arriving at the solution that the two more able workers rotate, while the least able one sticks to the least demanding job—a solution that yields a 19% increase in productivity.

\n\n

I have often used this edict with groups I have led—particularly when they face a very tough problem, which is when group members are most apt to propose solutions immediately. While I have no objective criterion on which to judge the quality of the problem solving of the groups, Maier’s edict appears to foster better solutions to problems.

\n
\n\n

This is so true it’s not even funny. And it gets worse and worse the tougher the problem becomes. Take artificial intelligence, for example. A surprising number of people I meet seem to know exactly how to build an artificial general intelligence, without, say, knowing how to build an optical character recognizer or a collaborative filtering system (much easier problems). And as for building an AI with a positive impact on the world—a Friendly AI, loosely speaking—why, that problem is so incredibly difficult that an actual majority resolve the whole issue within fifteen seconds.2 Give me a break.

\n\n

This problem is by no means unique to AI. Physicists encounter plenty of nonphysicists with their own theories of physics, economists get to hear lots of amazing new theories of economics. If you’re an evolutionary biologist, anyone you meet can instantly solve any open problem in your field, usually by postulating group selection. Et cetera.

\n\n

Maier’s advice echoes the principle of the bottom line, that the effectiveness of our decisions is determined only by whatever evidence and processing we did in first arriving at our decisions—after you write the bottom line, it is too late to write more reasons above. If you make your decision very early on, it will, in fact, be based on very little thought, no matter how many amazing arguments you come up with afterward.

\n\n

And consider furthermore that we change our minds less often than we think: 24 people assigned an average 66% probability to the future choice thought more probable, but only 1 in 24 actually chose the option thought less probable. Once you can guess what your answer will be, you have probably already decided. If you can guess your answer half a second after hearing the question, then you have half a second in which to be intelligent. It’s not a lot of time.

\n\n

Traditional Rationality emphasizes falsification—the ability to relinquish an initial opinion when confronted by clear evidence against it. But once an idea gets into your head, it will probably require way too much evidence to get it out again. Worse, we don’t always have the luxury of overwhelming evidence.

\n\n

I suspect that a more powerful (and more difficult) method is to hold off on thinking of an answer. To suspend, draw out, that tiny moment when we can’t yet guess what our answer will be; thus giving our intelligence a longer time in which to act.

\n\n

Even half a minute would be an improvement over half a second.

\n\n
\n \n\n

1Robyn M. Dawes, Rational Choice in An Uncertain World, 1st ed., ed. Jerome Kagan (San Diego, CA: Harcourt Brace Jovanovich, 1988), 55–56.

\n\n

2See Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk.”

\n
\n\n" } }, { "_id": "rHBdcHGLJ7KvLJQPk", "title": "The Logical Fallacy of Generalization from Fictional Evidence", "pageUrl": "https://www.lesswrong.com/posts/rHBdcHGLJ7KvLJQPk/the-logical-fallacy-of-generalization-from-fictional", "postedAt": "2007-10-16T03:57:34.000Z", "baseScore": 134, "voteCount": 109, "commentCount": 62, "url": null, "contents": { "documentId": "rHBdcHGLJ7KvLJQPk", "html": "\n\n\n\n \n\n \n\n

When I try to introduce the subject of advanced AI, what’s the first thing I hear, more than half the time?

\n\n

“Oh, you mean like the Terminator movies / The Matrix / Asimov’s robots!”

\n\n

And I reply, “Well, no, not exactly. I try to avoid the logical fallacy of generalizing from fictional evidence.”

\n\n

Some people get it right away, and laugh. Others defend their use of the example, disagreeing that it’s a fallacy.

\n\n

What’s wrong with using movies or novels as starting points for the discussion? No one’s claiming that it’s true, after all. Where is the lie, where is the rationalist sin? Science fiction represents the author’s attempt to visualize the future; why not take advantage of the thinking that’s already been done on our behalf, instead of starting over?

\n\n

Not every misstep in the precise dance of rationality consists of outright belief in a falsehood; there are subtler ways to go wrong.

\n\n

First, let us dispose of the notion that science fiction represents a full-fledged rational attempt to forecast the future. Even the most diligent science fiction writers are, first and foremost, storytellers; the requirements of storytelling are not the same as the requirements of forecasting. As Nick Bostrom points out:1

\n\n
\n \n\n

When was the last time you saw a movie about humankind suddenly going extinct (without warning and without being replaced by some other civilization)? While this scenario may be much more probable than a scenario in which human heroes successfully repel an invasion of monsters or robot warriors, it wouldn’t be much fun to watch.

\n
\n\n

So there are specific distortions in fiction.2 But trying to correct for these specific distortions is not enough. A story is never a rational attempt at analysis, not even with the most diligent science fiction writers, because stories don’t use probability distributions. I illustrate as follows:

\n\n
\n \n\n

Bob Merkelthud slid cautiously through the door of the alien spacecraft, glancing right and then left (or left and then right) to see whether any of the dreaded Space Monsters yet remained. At his side was the only weapon that had been found effective against the Space Monsters, a Space Sword forged of pure titanium with 30% probability, an ordinary iron crowbar with 20% probability, and a shimmering black discus found in the smoking ruins of Stonehenge with 45% probability, the remaining 5% being distributed over too many minor outcomes to list here.

\n\n

Merklethud (though there’s a significant chance that Susan Wifflefoofer was there instead) took two steps forward or one step back, when a vast roar split the silence of the black airlock! Or the quiet background hum of the white airlock! Although Amfer and Woofi (1997) argue that Merklethud is devoured at this point, Spacklebackle (2003) points out that—

\n
\n\n

Characters can be ignorant, but the author can’t say the three magic words “I don’t know.” The protagonist must thread a single line through the future, full of the details that lend flesh to the story, from Wifflefoofer’s appropriately futuristic attitudes toward feminism, down to the color of her earrings.

\n\n

Then all these burdensome details and questionable assumptions are wrapped up and given a short label, creating the illusion that they are a single package.3

\n\n

On problems with large answer spaces, the greatest difficulty is not verifying the correct answer but simply locating it in answer space to begin with. If someone starts out by asking whether or not AIs are gonna put us into capsules like in The Matrix, they’re jumping to a 100-bit proposition, without a corresponding 98 bits of evidence to locate it in the answer space as a possibility worthy of explicit consideration. It would only take a handful more evidence after the first 98 bits to promote that possibility to near-certainty, which tells you something about where nearly all the work gets done.
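
In odds form, the arithmetic looks roughly like this (a sketch that assumes, for illustration, that a "100-bit proposition" carries prior odds of about 2^-100):

```python
prior_odds = 2.0 ** -100            # one specific 100-bit possibility, ~8e-31
evidence_bits = 98                  # a likelihood ratio of 2^98 in its favor
posterior_odds = prior_odds * 2.0 ** evidence_bits
print(posterior_odds)               # 0.25: now worth explicit consideration;
                                    # a handful more bits makes it near-certain
```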

\n\n

The “preliminary” step of locating possibilities worthy of explicit consideration includes steps like: weighing what you know and don’t know, what you can and can’t predict; making a deliberate effort to avoid absurdity bias and widen confidence intervals; pondering which questions are the important ones, trying to adjust for possible Black Swans and think of (formerly) unknown unknowns. Jumping to “The Matrix: Yes or No?” skips over all of this.

\n\n

Any professional negotiator knows that to control the terms of a debate is very nearly to control the outcome of the debate. If you start out by thinking of The Matrix, it brings to mind marching robot armies defeating humans after a long struggle—not a superintelligence snapping nanotechnological fingers. It focuses on an “Us vs. Them” struggle, directing attention to questions like “Who will win?” and “Who should win?” and “Will AIs really be like that?” It creates a general atmosphere of entertainment, of “What is your amazing vision of the future?”

\n\n

Lost to the echoing emptiness are: considerations of more than one possible mind design that an “artificial intelligence” could implement; the future’s dependence on initial conditions; the power of smarter-than-human intelligence and the argument for its unpredictability; people taking the whole matter seriously and trying to do something about it.

\n\n

If some insidious corrupter of debates decided that their preferred outcome would be best served by forcing discussants to start out by refuting Terminator, they would have done well in skewing the frame. Debating gun control, the NRA spokesperson does not wish to be introduced as a “shooting freak,” the anti-gun opponent does not wish to be introduced as a “victim disarmament advocate.” Why should you allow the same order of frame-skewing by Hollywood scriptwriters, even accidentally?

\n\n

Journalists don’t tell me, “The future will be like 2001.” But they ask, “Will the future be like 2001, or will it be like A.I.?” This is just as huge a framing issue as asking, “Should we cut benefits for disabled veterans, or raise taxes on the rich?”

\n\n

In the ancestral environment, there were no moving pictures; what you saw with your own eyes was true. A momentary glimpse of a single word can prime us and make compatible thoughts more available, with demonstrated strong influence on probability estimates. How much havoc do you think a two-hour movie can wreak on your judgment? It will be hard enough to undo the damage by deliberate concentration—why invite the vampire into your house? In Chess or Go, every wasted move is a loss; in rationality, any non-evidential influence is (on average) entropic.

\n\n

Do movie-viewers succeed in unbelieving what they see? So far as I can tell, few movie viewers act as if they have directly observed Earth’s future. People who watched the Terminator movies didn’t hide in fallout shelters on August 29, 1997. But those who commit the fallacy seem to act as if they had seen the movie events occurring on some other planet; not Earth, but somewhere similar to Earth.

\n\n

You say, “Suppose we build a very smart AI,” and they say, “But didn’t that lead to nuclear war in The Terminator?” As far as I can tell, it’s identical reasoning, down to the tone of voice, of someone who might say: “But didn’t that lead to nuclear war on Alpha Centauri?” or “Didn’t that lead to the fall of the Italian city-state of Piccolo in the fourteenth century?” The movie is not believed, but it is cognitively available. It is treated, not as a prophecy, but as an illustrative historical case. Will history repeat itself? Who knows?

\n\n

In a recent intelligence explosion discussion, someone mentioned that Vinge didn’t seem to think that brain-computer interfaces would increase intelligence much, and cited Marooned in Realtime and Tunç Blumenthal, who was the most advanced traveller but didn’t seem all that powerful. I replied indignantly, “But Tunç lost most of his hardware! He was crippled!” And then I did a mental double-take and thought to myself: What the hell am I saying.

\n\n

Does the issue not have to be argued in its own right, regardless of how Vinge depicted his characters? Tunç Blumenthal is not “crippled,” he’s unreal. I could say “Vinge chose to depict Tunç as crippled, for reasons that may or may not have had anything to do with his personal best forecast,” and that would give his authorial choice an appropriate weight of evidence. I cannot say “Tunç was crippled.” There is no was of Tunç Blumenthal.

\n\n

I deliberately left in a mistake I made, in my first draft of the beginning of this essay: “Others defend their use of the example, disagreeing that it’s a fallacy.” But The Matrix is not an example!

\n\n

A neighboring flaw is the logical fallacy of arguing from imaginary evidence: “Well, if you did go to the end of the rainbow, you would find a pot of gold—which just proves my point!” (Updating on evidence predicted, but not observed, is the mathematical mirror image of hindsight bias.)

\n\n

The brain has many mechanisms for generalizing from observation, not just the availability heuristic. You see three zebras, you form the category “zebra,” and this category embodies an automatic perceptual inference. Horse-shaped creatures with white and black stripes are classified as “Zebras,” therefore they are fast and good to eat; they are expected to be similar to other zebras observed.

\n\n

So people see (moving pictures of) three Borg, their brain automatically creates the category “Borg,” and they infer automatically that humans with brain-computer interfaces are of class “Borg” and will be similar to other Borg observed: cold, uncompassionate, dressing in black leather, walking with heavy mechanical steps. Journalists don’t believe that the future will contain Borg—they don’t believe Star Trek is a prophecy. But when someone talks about brain-computer interfaces, they think, “Will the future contain Borg?” Not, “How do I know computer-assisted telepathy makes people less nice?” Not, “I’ve never seen a Borg and never has anyone else.” Not, “I’m forming a racial stereotype based on literally zero evidence.”

\n\n

As George Orwell said of clichés:4

\n\n
\n \n\n

What is above all needed is to let the meaning choose the word, and not the other way around . . . When you think of something abstract you are more inclined to use words from the start, and unless you make a conscious effort to prevent it, the existing dialect will come rushing in and do the job for you, at the expense of blurring or even changing your meaning.

\n
\n\n

Yet in my estimation, the most damaging aspect of using other authors’ imaginations is that it stops people from using their own. As Robert Pirsig said:5

\n\n
\n \n\n

She was blocked because she was trying to repeat, in her writing, things she had already heard, just as on the first day he had tried to repeat things he had already decided to say. She couldn’t think of anything to write about Bozeman because she couldn’t recall anything she had heard worth repeating. She was strangely unaware that she could look and see freshly for herself, as she wrote, without primary regard for what had been said before.

\n
\n\n

Remembered fictions rush in and do your thinking for you; they substitute for seeing—the deadliest convenience of all.

\n\n
\n \n\n

1Nick Bostrom, “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards,” Journal of Evolution and Technology 9 (2002), http://www.jetpress.org/volume9/risks.html.

\n\n

2E.g., Hanson’s (2006) “Biases of Science Fiction.” http://www.overcomingbias.com/2006/12/biases_of_scien.html.

\n\n

3See “The Third Alternative” in this volume, and “Occam’s Razor” and “Burdensome Details” in Map and Territory.

\n\n

4Orwell, “Politics and the English Language.”

\n\n

5Pirsig, Zen and the Art of Motorcycle Maintenance.

\n
\n\n" } }, { "_id": "hFTkZjPiAyQ9RtCQf", "title": "The Meaning That Immortality Gives to Life", "pageUrl": "https://www.lesswrong.com/posts/hFTkZjPiAyQ9RtCQf/the-meaning-that-immortality-gives-to-life", "postedAt": "2007-10-15T03:02:39.213Z", "baseScore": 72, "voteCount": 27, "commentCount": 8, "url": null, "contents": { "documentId": "hFTkZjPiAyQ9RtCQf", "html": "

I was once present when William Hurlbut, during a debate with Aubrey de Grey, spoke of “the meaning that death gives to life”; Hurlbut repeated the standard claims that life without death would be meaningless and empty. As I replied during the comments session, Hurlbut had not made a sincere effort to think about what meaning immortality would give to life, on the same order as the effort that has gone into thinking about “the meaning that death gives to life”.

Philosophers have put forth a mighty effort to find nice things to say about death. But this is scant reason to fear lifespan extension, when philosophers have not put forth an equally motivated effort to say nice things about immortality.

Such is human nature, that if we were all hit on the head with a baseball bat once a week, philosophers would soon discover many amazing benefits of being hit on the head with a baseball bat: It toughens us, renders us less fearful of lesser pains, makes bat-free days all the sweeter. But if people are not currently being hit with baseball bats, they will not volunteer for it.

Modern literature about immortality is written primarily by authors who expect to die, and their grapes are accordingly sour. Hurlbut, it seems, is afraid of living too long. Well, suppose Hurlbut’s most dreaded fear materialized, and he was forced to live forever – worse, in good health – worst of all, with his IQ rising at a steady rate of 1 point per year. What positive aesthetics might Hurlbut find in his inescapable fate?

We cannot ask Hurlbut this question today. Today he expects to die, and so he seeks nice things to say about death, and conversely awful things to say about immortality. But if Hurlbut were sentenced to life, he would probably stop tormenting himself by finding terrible things to say about his situation, and begin to wonder what nice things he might say instead. Such is human nature, after all.

I once discussed death with a woman who said that, because of her awareness of mortality, whenever she thought of a nice thing to say to someone, she would say it right away; because who knows if they might not meet again. What a terrible world it would be if we had unlimited time to say nice things to each other! We should run right out and step in front of trucks. Perhaps if we were immortal, this woman would have remarked on how, whenever you meet a person or deal with them in any fashion, you are bound to meet again someday – thus you should speak kindly to them. What a terrible world it would be, if people met thinking they would never meet again! Then why would people tip appropriately in out-of-state restaurants? We should run right out and sign up with Alcor.

Another common excuse for praising death is that it gives us a sense of urgency. Go hang-gliding today, go learn to play the flute today, for tomorrow may never come. These people must value initiative, if they use it to justify death – what would they say if they were immortal? Perhaps, “You’ve got to learn linear algebra eventually - why not start today?” You’re not saving yourself any work by procrastinating. Isn’t that a beautiful thought – that you’ve got to learn all these things someday, so why not begin now? Such is the meaning that immortality gives to life.

What is the meaning of humanity’s unfolding future, if we are to die, if we are to live? If we are to die, then perhaps the meaning is that – to reverse the words of immortal Gandalf – we are to take thought only for this one generation of the world. We are to bequeath the world in the best possible state to our children, but not otherwise meddle in their affairs. But if we are to live, then the future is our concern personally, and we shall ourselves reap the fruits of whatever we sow. Inescapable responsibility, inescapable consequences. Is this not equally a call to action?

I have met many people who, when I try to tell them of the Singularity, say, “But do you really think all this will happen in our lifetimes?”, as if the universe ceases to exist beyond the horizon of their personal deaths. Given what I’ve actually seen of people’s psychology, if you want anything done about global warming (like building 1000 nuclear power plants and moving on to real problems), then, yes, you should urge people to sign up for Alcor.

What meaning does death, the inevitable termination of existence, give to an effort to be a better person? Perhaps the notion of a virtuous life having a beginning, a middle, and an end; so that it is shaped, through a finite amount of effort, into having a satisfying conclusion; and then it is done, finished like a painting, put on a stand and exhibited. What meaning would immortality give to a virtuous life? An unending, unbounded effort; never finished like a painting, never simply exhibited; never flawless, always improving. Is this not equally a beautiful thought? It may even have the advantage of being equally scary.

But really, both sides of all these arguments fall under the category of “excuses to be virtuous”, which no one should ever need. As I remarked to the woman, after she said that her mortality leads her to say nice things to people right away instead of later, “That’s a beautiful thought, and even if someday the threat of death is lifted from you, I hope you go on doing it.” Once you know what virtuous behavior would help excuse death, or immortality, or whatever, just go ahead and do it without need for an excuse. If this essay has an object, it is to demonstrate the ease of finding beautiful thoughts just about anywhere.

Neither death, nor immortality, are needed to give meaning to life. Life gives meaning to life. The object of friendship is friendship, the object of learning is learning. At most, the particular meanings that death or immortality would give to an act of life are secondary shades, fine points of artistry, like the landscape in the background of the Mona Lisa’s smile.

In truth, I suspect that if people were immortal, they would not think overmuch about the meaning that immortality gives to life. People in the Deaf subculture may ponder the implications of deafness; some Deaf parents even want to ensure that they have deaf children. Yet I rarely find myself pondering the meaning of hearing – perhaps I should! Only clouds must be searched for silver linings. Only things unvirtuous of themselves, must be excused by philosophizing them into excuses for virtue.

If, someday, the threat of death is lifted from humankind, perhaps only those originally born as Homo sapiens, we who were once mortal, will give thought to the meaning of immortality.

" } }, { "_id": "aSQy7yHj6nPD44RNo", "title": "How to Seem (and Be) Deep", "pageUrl": "https://www.lesswrong.com/posts/aSQy7yHj6nPD44RNo/how-to-seem-and-be-deep", "postedAt": "2007-10-14T18:13:09.000Z", "baseScore": 124, "voteCount": 115, "commentCount": 123, "url": null, "contents": { "documentId": "aSQy7yHj6nPD44RNo", "html": "

I recently attended a discussion group whose topic, at that session, was Death.  It brought out deep emotions.  I think that of all the Silicon Valley lunches I've ever attended, this one was the most honest; people talked about the death of family, the death of friends, what they thought about their own deaths.  People really listened to each other.  I wish I knew how to reproduce those conditions reliably.

\n

I was the only transhumanist present, and I was extremely careful not to be obnoxious about it.  (\"A fanatic is someone who can't change his mind and won't change the subject.\"  I endeavor to at least be capable of changing the subject.)  Unsurprisingly, people talked about the meaning that death gives to life, or how death is truly a blessing in disguise.  But I did, very cautiously, explain that transhumanists are generally positive on life but thumbs down on death.

\n

Afterward, several people came up to me and told me I was very \"deep\".  Well, yes, I am, but this got me thinking about what makes people seem deep. 

\n

\n

At one point in the discussion, a woman said that thinking about death led her to be nice to people because, who knows, she might not see them again.  \"When I have a nice thing to say about someone,\" she said, \"now I say it to them right away, instead of waiting.\"

\n

\"That is a beautiful thought,\" I said, \"and even if someday the threat of death is lifted from you, I hope you will keep on doing it—\"

\n

Afterward, this woman was one of the people who told me I was deep.

\n

At another point in the discussion, a man spoke of some benefit X of death, I don't recall exactly what.  And I said:  \"You know, given human nature, if people got hit on the head by a baseball bat every week, pretty soon they would invent reasons why getting hit on the head with a baseball bat was a good thing.  But if you took someone who wasn't being hit on the head with a baseball bat, and you asked them if they wanted it, they would say no.  I think that if you took someone who was immortal, and asked them if they wanted to die for benefit X, they would say no.\"

\n

Afterward, this man told me I was deep.

\n

Correlation is not causality.  Maybe I was just speaking in a deep voice that day, and so sounded wise.

\n

But my suspicion is that I came across as \"deep\" because I coherently violated the cached pattern for \"deep wisdom\" in a way that made immediate sense.

\n

There's a stereotype of Deep Wisdom.  Death: complete the pattern: \"Death gives meaning to life.\"  Everyone knows this standard Deeply Wise response.  And so it takes on some of the characteristics of an applause light.  If you say it, people may nod along, because the brain completes the pattern and they know they're supposed to nod.  They may even say \"What deep wisdom!\", perhaps in the hope of being thought deep themselves.   But they will not be surprised; they will not have heard anything outside the box; they will not have heard anything they could not have thought of for themselves.  One might call it belief in wisdom—the thought is labeled \"deeply wise\", and it's the completed standard pattern for \"deep wisdom\", but it carries no experience of insight.

\n

People who try to seem Deeply Wise often end up seeming hollow, echoing as it were, because they're trying to seem Deeply Wise instead of optimizing.

\n

How much thinking did I need to do, in the course of seeming deep?  Human brains only run at 100Hz and I responded in realtime, so most of the work must have been precomputed.  The part I experienced as effortful was picking a response understandable in one inferential step and then phrasing it for maximum impact.

\n

Philosophically, nearly all of my work was already done.  Complete the pattern: Existing condition X is really justified because it has benefit Y:  \"Naturalistic fallacy?\" / \"Status quo bias?\" / \"Could we get Y without X?\" / \"If we had never even heard of X before, would we voluntarily take it on to get Y?\"  I think it's fair to say that I execute these thought-patterns at around the same level of automaticity as I breathe.  After all, most of human thought has to be cache lookups if the brain is to work at all.

\n

And I already held to the developed philosophy of transhumanism.  Transhumanism also has cached thoughts about death.  Death: complete the pattern: \"Death is a pointless tragedy which people rationalize.\"  This was a nonstandard cache, one with which my listeners were unfamiliar.  I had several opportunities to use nonstandard cache, and because they were all part of the developed philosophy of transhumanism, they all visibly belonged to the same theme.  This made me seem coherent, as well as original.

\n

I suspect this is one reason Eastern philosophy seems deep to Westerners—it has nonstandard but coherent cache for Deep Wisdom.  Symmetrically, in works of Japanese fiction, one sometimes finds Christians depicted as repositories of deep wisdom and/or mystical secrets.  (And sometimes not.)

\n

If I recall correctly, an economist once remarked that popular audiences are so unfamiliar with standard economics that, when he was called upon to make a television appearance, he just needed to repeat back Econ 101 in order to sound like a brilliantly original thinker.

\n

Also crucial was that my listeners could see immediately that my reply made sense.  They might or might not have agreed with the thought, but it was not a complete non-sequitur unto them.  I know transhumanists who are unable to seem deep because they are unable to appreciate what their listener does not already know.  If you want to sound deep, you can never say anything that is more than a single step of inferential distance away from your listener's current mental state.  That's just the way it is.

\n

To seem deep, study nonstandard philosophies.  Seek out discussions on topics that will give you a chance to appear deep.  Do your philosophical thinking in advance, so you can concentrate on explaining well.  Above all, practice staying within the one-inferential-step bound.

\n

To be deep, think for yourself about \"wise\" or important or emotionally fraught topics.  Thinking for yourself isn't the same as coming up with an unusual answer.  It does mean seeing for yourself, rather than letting your brain complete the pattern.  If you don't stop at the first answer, and cast out replies that seem vaguely unsatisfactory, in time your thoughts will form a coherent whole, flowing from the single source of yourself, rather than being fragmentary repetitions of other people's conclusions.

" } }, { "_id": "SA79JMXKWke32A3hG", "title": "Original Seeing", "pageUrl": "https://www.lesswrong.com/posts/SA79JMXKWke32A3hG/original-seeing", "postedAt": "2007-10-14T04:38:45.000Z", "baseScore": 188, "voteCount": 154, "commentCount": 29, "url": null, "contents": { "documentId": "SA79JMXKWke32A3hG", "html": "\n\n\n\n \n\n \n\n

Since Robert Pirsig put this very well, I’ll just copy down what he said. I don’t know if this story is based on reality or not, but either way, it’s true.

\n\n
\n \n\n

He’d been having trouble with students who had nothing to say. At first he thought it was laziness but later it became apparent that it wasn’t. They just couldn’t think of anything to say.

\n\n

One of them, a girl with strong-lensed glasses, wanted to write a five-hundred word essay about the United States. He was used to the sinking feeling that comes from statements like this, and suggested without disparagement that she narrow it down to just Bozeman.

\n\n

When the paper came due she didn’t have it and was quite upset. She had tried and tried but she just couldn’t think of anything to say.

\n\n

It just stumped him. Now he couldn’t think of anything to say. A silence occurred, and then a peculiar answer: “Narrow it down to the main street of Bozeman.” It was a stroke of insight.

\n\n

She nodded dutifully and went out. But just before her next class she came back in real distress, tears this time, distress that had obviously been there for a long time. She still couldn’t think of anything to say, and couldn’t understand why, if she couldn’t think of anything about all of Bozeman, she should be able to think of something about just one street.

\n\n

He was furious. “You’re not looking!” he said. A memory came back of his own dismissal from the University for having too much to say. For every fact there is an infinity of hypotheses. The more you look the more you see. She really wasn’t looking and yet somehow didn’t understand this.

\n\n

He told her angrily, “Narrow it down to the front of one building on the main street of Bozeman. The Opera House. Start with the upper left-hand brick.”

\n\n

Her eyes, behind the thick-lensed glasses, opened wide.

\n\n

She came in the next class with a puzzled look and handed him a five-thousand-word essay on the front of the Opera House on the main street of Bozeman, Montana. “I sat in the hamburger stand across the street,” she said, “and started writing about the first brick, and the second brick, and then by the third brick it all started to come and I couldn’t stop. They thought I was crazy, and they kept kidding me, but here it all is. I don’t understand it.”

\n\n

Neither did he, but on long walks through the streets of town he thought about it and concluded she was evidently stopped with the same kind of blockage that had paralyzed him on his first day of teaching. She was blocked because she was trying to repeat, in her writing, things she had already heard, just as on the first day he had tried to repeat things he had already decided to say. She couldn’t think of anything to write about Bozeman because she couldn’t recall anything she had heard worth repeating. She was strangely unaware that she could look and see freshly for herself, as she wrote, without primary regard for what had been said before. The narrowing down to one brick destroyed the blockage because it was so obvious she had to do some original and direct seeing.

\n\n

\n\n

\n —Robert M. Pirsig,\n
\n\n
\n Zen and the Art of Motorcycle Maintenance\n
\n

\n\n" } }, { "_id": "qu95AwSrKqQSo4fCY", "title": "The \"Outside the Box\" Box", "pageUrl": "https://www.lesswrong.com/posts/qu95AwSrKqQSo4fCY/the-outside-the-box-box", "postedAt": "2007-10-12T22:50:31.000Z", "baseScore": 94, "voteCount": 75, "commentCount": 52, "url": null, "contents": { "documentId": "qu95AwSrKqQSo4fCY", "html": "

Whenever someone exhorts you to \"think outside the box\", they usually, for your convenience, point out exactly where \"outside the box\" is located.  Isn't it funny how nonconformists all dress the same...

\n

In Artificial Intelligence, everyone outside the field has a cached result for brilliant new revolutionary AI idea—neural networks, which work just like the human brain!  New AI Idea: complete the pattern:  \"Logical AIs, despite all the big promises, have failed to provide real intelligence for decades—what we need are neural networks!\"

\n

This cached thought has been around for three decades.  Still no general intelligence.  But, somehow, everyone outside the field knows that neural networks are the Dominant-Paradigm-Overthrowing New Idea, ever since backpropagation was invented in the 1970s.  Talk about your aging hippies.

\n

Nonconformist images, by their nature, permit no departure from the norm.  If you don't wear black, how will people know you're a tortured artist?  How will people recognize uniqueness if you don't fit the standard pattern for what uniqueness is supposed to look like?  How will anyone recognize you've got a revolutionary AI concept, if it's not about neural networks?

\n

\n

Another example of the same trope is \"subversive\" literature, all of which sounds the same, backed up by a tiny defiant league of rebels who control the entire English Department.  As Anonymous asks on Scott Aaronson's blog:

\n
\n

\"Has any of the subversive literature you've read caused you to modify any of your political views?\"

\n
\n

Or as Lizard observes:

\n
\n

\"Revolution has already been televised. Revolution has been *merchandised*. Revolution is a commodity, a packaged lifestyle, available at your local mall. $19.95 gets you the black mask, the spray can, the \"Crush the Fascists\" protest sign, and access to your blog where you can write about the police brutality you suffered when you chained yourself to a fire hydrant.  Capitalism has learned how to sell anti-capitalism.\"

\n
\n

Many in Silicon Valley have observed that the vast majority of venture capitalists at any given time are all chasing the same Revolutionary Innovation, and it's the Revolutionary Innovation that IPO'd six months ago.  This is an especially crushing observation in venture capital, because there's a direct economic motive to not follow the herd—either someone else is also developing the product, or someone else is bidding too much for the startup.  Steve Jurvetson once told me that at Draper Fisher Jurvetson, only two partners need to agree in order to fund any startup up to $1.5 million.  And if all the partners agree that something sounds like a good idea, they won't do it.  If only grant committees were this sane.

\n

The problem with originality is that you actually have to think in order to attain it, instead of letting your brain complete the pattern.  There is no conveniently labeled \"Outside the Box\" to which you can immediately run off.  There's an almost Zen-like quality to it—like the way you can't teach satori in words because satori is the experience of words failing you.  The more you try to follow the Zen Master's instructions in words, the further you are from attaining an empty mind.

\n

There is a reason, I think, why people do not attain novelty by striving for it.  Properties like truth or good design are independent of novelty:  2 + 2 = 4, yes, really, even though this is what everyone else thinks too.  People who strive to discover truth or to invent good designs, may in the course of time attain creativity.  Not every change is an improvement, but every improvement is a change.

\n

Every improvement is a change, but not every change is an improvement.  The one who says, \"I want to build an original mousetrap!\", and not, \"I want to build an optimal mousetrap!\", nearly always wishes to be perceived as original.  \"Originality\" in this sense is inherently social, because it can only be determined by comparison to other people.  So their brain simply completes the standard pattern for what is perceived as \"original\", and their friends nod in agreement and say it is subversive.

\n

Business books always tell you, for your convenience, where your cheese has been moved to.  Otherwise the readers would be left around saying, \"Where is this 'Outside the Box' I'm supposed to go?\"

\n

Actually thinking, like satori, is a wordless act of mind.

\n

The eminent philosophers of Monty Python said it best of all.

\n\n

[Embedded video.]\n

" } }, { "_id": "2MD3NMLBPCqPfnfre", "title": "Cached Thoughts", "pageUrl": "https://www.lesswrong.com/posts/2MD3NMLBPCqPfnfre/cached-thoughts", "postedAt": "2007-10-11T23:46:20.000Z", "baseScore": 234, "voteCount": 196, "commentCount": 94, "url": null, "contents": { "documentId": "2MD3NMLBPCqPfnfre", "html": "\n\n\n\n \n\n \n\n

One of the single greatest puzzles about the human brain is how the damn thing works at all when most neurons fire 10–20 times per second, or 200Hz tops. In neurology, the “hundred-step rule” is that any postulated operation has to complete in at most 100 sequential steps—you can be as parallel as you like, but you can’t postulate more than 100 (preferably fewer) neural spikes one after the other.
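
The arithmetic behind the rule is simple (a trivial sketch; the half-second figure for a fast cognitive act is an illustrative assumption):

```python
neuron_hz = 200            # generous upper bound on sequential firing rate
reaction_time_s = 0.5      # a fast human response, roughly half a second
sequential_steps = neuron_hz * reaction_time_s
print(sequential_steps)    # 100.0, hence the "hundred-step rule"
```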

\n\n

Can you imagine having to program using 100Hz CPUs, no matter how many of them you had? You’d also need a hundred billion processors just to get anything done in realtime.

\n\n

If you did need to write realtime programs for a hundred billion 100Hz processors, one trick you’d use as heavily as possible is caching. That’s when you store the results of previous operations and look them up next time, instead of recomputing them from scratch. And it’s a very neural idiom—recognition, association, completing the pattern.
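
In software terms, this is just memoization (a loose analogy of my own, not the essay's):

```python
from functools import lru_cache
import time

@lru_cache(maxsize=None)
def expensive_thought(x):
    """Stand-in for a computation too slow to redo from scratch every time."""
    time.sleep(0.1)                 # pretend this is many sequential neural steps
    return x * x

expensive_thought(12)   # first call: computed from scratch
expensive_thought(12)   # second call: near-instant cache lookup, "completing the pattern"
```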

\n\n

It’s a good guess that the actual majority of human cognition consists of cache lookups.

\n\n

This thought does tend to go through my mind at certain times.

\n\n

There was a wonderfully illustrative story which I thought I had bookmarked, but couldn’t re-find: it was the story of a man whose know-it-all neighbor had once claimed in passing that the best way to remove a chimney from your house was to knock out the fireplace, wait for the bricks to drop down one level, knock out those bricks, and repeat until the chimney was gone. Years later, when the man wanted to remove his own chimney, this cached thought was lurking, waiting to pounce . . .

\n\n

As the man noted afterward—you can guess it didn’t go well—his neighbor was not particularly knowledgeable in these matters, not a trusted source. If he’d questioned the idea, he probably would have realized it was a poor one. Some cache hits we’d be better off recomputing. But the brain completes the pattern automatically—and if you don’t consciously realize the pattern needs correction, you’ll be left with a completed pattern.

\n\n

I suspect that if the thought had occurred to the man himself—if he’d personally had this bright idea for how to remove a chimney—he would have examined the idea more critically. But if someone else has already thought an idea through, you can save on computing power by caching their conclusion—right?

\n\n

In modern civilization particularly, no one can think fast enough to think their own thoughts. If I’d been abandoned in the woods as an infant, raised by wolves or silent robots, I would scarcely be recognizable as human. No one can think fast enough to recapitulate the wisdom of a hunter-gatherer tribe in one lifetime, starting from scratch. As for the wisdom of a literate civilization, forget it.

\n\n

But the flip side of this is that I continually see people who aspire to critical thinking, repeating back cached thoughts which were not invented by critical thinkers.

\n\n

A good example is the skeptic who concedes, “Well, you can’t prove or disprove a religion by factual evidence.” As I have pointed out elsewhere,1 this is simply false as probability theory. And it is also simply false relative to the real psychology of religion—a few centuries ago, saying this would have gotten you burned at the stake. A mother whose daughter has cancer prays, “God, please heal my daughter,” not, “Dear God, I know that religions are not allowed to have any falsifiable consequences, which means that you can’t possibly heal my daughter, so . . . well, basically, I’m praying to make myself feel better, instead of doing something that could actually help my daughter.”

\n\n

But people read “You can’t prove or disprove a religion by factual evidence,” and then, the next time they see a piece of evidence disproving a religion, their brain completes the pattern. Even some atheists repeat this absurdity without hesitation. If they’d thought of the idea themselves, rather than hearing it from someone else, they would have been more skeptical.

\n\n

Death. Complete the pattern: “Death gives meaning to life.”

\n\n

It’s frustrating, talking to good and decent folk—people who would never in a thousand years spontaneously think of wiping out the human species—raising the topic of existential risk, and hearing them say, “Well, maybe the human species doesn’t deserve to survive.” They would never in a thousand years shoot their own child, who is a part of the human species, but the brain completes the pattern.

\n\n

What patterns are being completed, inside your mind, that you never chose to be there?

\n\n

Rationality. Complete the pattern: “Love isn’t rational.”

\n\n

If this idea had suddenly occurred to you personally, as an entirely new thought, how would you examine it critically? I know what I would say, but what would you? It can be hard to see with fresh eyes. Try to keep your mind from completing the pattern in the standard, unsurprising, already-known way. It may be that there is no better answer than the standard one, but you can’t think about the answer until you can stop your brain from filling in the answer automatically.

\n\n

Now that you’ve read this, the next time you hear someone unhesitatingly repeating a meme you think is silly or false, you’ll think, “Cached thoughts.” My belief is now there in your mind, waiting to complete the pattern. But is it true? Don’t let your mind complete the pattern! Think!

\n\n
\n \n\n

1See “Religion’s Claim to be Non-Disprovable,” in Map and Territory.

\n
\n\n" } }, { "_id": "TiDGXt3WrQwtCdDj3", "title": "Do We Believe Everything We're Told?", "pageUrl": "https://www.lesswrong.com/posts/TiDGXt3WrQwtCdDj3/do-we-believe-everything-we-re-told", "postedAt": "2007-10-10T23:52:46.000Z", "baseScore": 108, "voteCount": 93, "commentCount": 41, "url": null, "contents": { "documentId": "TiDGXt3WrQwtCdDj3", "html": "\n\n\n\n \n\n \n\n

Some early experiments on anchoring and adjustment tested whether distracting the subjects—rendering subjects cognitively “busy” by asking them to keep a lookout for “5” in strings of numbers, or some such—would decrease adjustment, and hence increase the influence of anchors. Most of the experiments seemed to bear out the idea that being cognitively busy increased anchoring and, more generally, contamination.

\n\n

Looking over the accumulating experimental results—more and more findings of contamination, exacerbated by cognitive busyness—Daniel Gilbert saw a truly crazy pattern emerging: Do we believe everything we’re told?

\n\n

One might naturally think that on being told a proposition, we would first comprehend what the proposition meant, then consider the proposition, and finally accept or reject it. This obvious-seeming model of cognitive process flow dates back to Descartes. But Descartes’s rival, Spinoza, disagreed; Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.

\n\n

Over the last few centuries, philosophers pretty much went along with Descartes, since his view seemed more, y’know, logical and intuitive.1 But Gilbert saw a way of testing Descartes’s and Spinoza’s hypotheses experimentally.

\n\n

If Descartes is right, then distracting subjects should interfere with both accepting true statements and rejecting false statements. If Spinoza is right, then distracting subjects should cause them to remember false statements as being true, but should not cause them to remember true statements as being false.

\n\n

Gilbert, Krull, and Malone bear out this result, showing that, among subjects presented with novel statements labeled true or false, distraction had no effect on identifying true propositions (55% success for uninterrupted presentations, vs. 58% when interrupted); but did affect identifying false propositions (55% success when uninterrupted, vs. 35% when interrupted).2

\n\n

A much more dramatic illustration was produced in follow-up experiments by Gilbert, Tafarodi, and Malone.3 Subjects read aloud crime reports crawling across a video monitor, in which the color of the text indicated whether a particular statement was true or false. Some reports contained false statements that exacerbated the severity of the crime; other reports contained false statements that extenuated (excused) the crime. Some subjects also had to pay attention to strings of digits, looking for a “5,” while reading the crime reports—this being the distraction task to create cognitive busyness. Finally, subjects had to recommend the length of prison terms for each criminal, from 0 to 20 years.

\n

Subjects in the cognitively busy condition recommended an average of 11.15 years in prison for criminals in the “exacerbating” condition, that is, criminals whose reports contained labeled false statements exacerbating the severity of the crime. Busy subjects recommended an average of 5.83 years in prison for criminals whose reports contained labeled false statements excusing the crime. This nearly twofold difference was, as you might suspect, statistically significant.\n

\n

Non-busy participants read exactly the same reports, with the same labels, and the same strings of numbers occasionally crawling past, except that they did not have to search for the number “5.” Thus, they could devote more attention to “unbelieving” statements labeled false. These non-busy participants recommended 7.03 years versus 6.03 years for criminals whose reports falsely exacerbated or falsely excused.\n

\n

Gilbert, Tafarodi, and Malone’s paper was entitled “You Can’t Not Believe Everything You Read.”\n

\n

This suggests—to say the very least—that we should be more careful when we expose ourselves to unreliable information, especially if we’re doing something else at the time. Be careful when you glance at that newspaper in the supermarket.\n

\n

PS: According to an unverified rumor I just made up, people will be less skeptical of this essay because of the distracting color changes.\n

\n\n
\n \n\n

1See Robin Hanson, “Policy Tug-O-War,” Overcoming Bias (blog), 2007, http://www.overcomingbias.com/2007/05/policy_tugowar.html.

\n\n

2Daniel T. Gilbert, Douglas S. Krull, and Patrick S. Malone, “Unbelieving the Unbelievable: Some Problems in the Rejection of False Information,” Journal of Personality and Social Psychology 59 (4 1990): 601–613.

\n\n

3Daniel T. Gilbert, Romin W. Tafarodi, and Patrick S. Malone, “You Can’t Not Believe Everything You Read,” Journal of Personality and Social Psychology 65 (2 1993): 221–233.

\n
\n

\n" } }, { "_id": "BaCWFCxBQYjJXSsah", "title": "Priming and Contamination", "pageUrl": "https://www.lesswrong.com/posts/BaCWFCxBQYjJXSsah/priming-and-contamination", "postedAt": "2007-10-10T02:23:05.000Z", "baseScore": 69, "voteCount": 63, "commentCount": 27, "url": null, "contents": { "documentId": "BaCWFCxBQYjJXSsah", "html": "\n\n\n\n \n\n \n\n

Suppose you ask subjects to press one button if a string of letters forms a word, and another button if the string does not form a word (e.g., “banack” vs. “banner”). Then you show them the string “water.” Later, they will more quickly identify the string “drink” as a word. This is known as “cognitive priming”; this particular form would be “semantic priming” or “conceptual priming.”

\n\n

The fascinating thing about priming is that it occurs at such a low level—priming speeds up identifying letters as forming a word, which one would expect to take place before you deliberate on the word’s meaning.

\n\n

Priming also reveals the massive parallelism of spreading activation: if seeing “water” activates the word “drink,” it probably also activates “river,” or “cup,” or “splash” . . . and this activation spreads, from the semantic linkage of concepts, all the way back to recognizing strings of letters.
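To make the mechanism concrete, here is a minimal sketch of spreading activation over a toy association graph. The graph, the weights, and the decay factor are invented for illustration; they are not taken from the priming literature.

```python
# A toy spreading-activation sketch. The association graph, weights, and
# decay factor are invented for illustration, not taken from the
# priming literature.

associations = {
    "water": ["drink", "river", "cup", "splash"],
    "drink": ["cup", "thirst"],
    "river": ["splash", "boat"],
}

def spread_activation(source, graph, decay=0.5, depth=2):
    """Propagate activation outward from `source`, weakening at each hop."""
    activation = {source: 1.0}
    frontier = [source]
    for _ in range(depth):
        next_frontier = []
        for word in frontier:
            for neighbor in graph.get(word, []):
                boost = activation[word] * decay
                if boost > activation.get(neighbor, 0.0):
                    activation[neighbor] = boost
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return activation

print(spread_activation("water", associations))
# "drink" (along with "river," "cup," and "splash") ends up partially
# activated, so a later task that queries "drink" starts with a head start.
```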

\n\n

Priming is subconscious and unstoppable, an artifact of the human neural architecture. Trying to stop yourself from priming is like trying to stop the spreading activation of your own neural circuits.

\n\n

Try making a set of index cards with words like Brown written in randomly assigned colors: a red Green, a blue Yellow, and so on. Try to say aloud the color—not the meaning, but the color—of the letter-strings.

\n\n

In Mussweiler and Strack’s experiment, subjects were asked an anchoring question: “Is the annual mean temperature in Germany higher or lower than 5°C / 20°C?”1 Afterward, on a word-identification task, subjects presented with the 5°C anchor were faster on identifying words like “cold” and “snow,” while subjects with the high anchor were faster to identify “hot” and “sun.” This shows a non-adjustment mechanism for anchoring: priming compatible thoughts and memories.

\n\n

The more general result is that completely uninformative, known false, or totally irrelevant “information” can influence estimates and decisions. In the field of heuristics and biases, this more general phenomenon is known as contamination.2

\n\n

Early research in heuristics and biases discovered anchoring effects, such as subjects giving lower (higher) estimates of the percentage of UN countries found within Africa, depending on whether they were first asked if the percentage was more or less than 10 (65). This effect was originally attributed to subjects adjusting from the anchor as a starting point, stopping as soon as they reached a plausible value, and under-adjusting because they were stopping at one end of a confidence interval.3

\n\n

Tversky and Kahneman’s early hypothesis still appears to be the correct explanation in some circumstances, notably when subjects generate the initial estimate themselves. But modern research seems to show that most anchoring is actually due to contamination, not sliding adjustment.4

\n\n

Your grocery store probably has annoying signs saying “Limit 12 per customer” or “5 for $10.” Are these signs effective at getting customers to buy in larger quantities? You probably think you’re not influenced. But someone must be, because these signs have been shown to work. Which is why stores keep putting them up.5

\n\n

Yet the most fearsome aspect of contamination is that it serves as yet another of the thousand faces of confirmation bias.6 Once an idea gets into your head, it primes information compatible with it—and thereby ensures its continued existence. Never mind the selection pressures for winning political arguments; confirmation bias is built directly into our hardware, associational networks priming compatible thoughts and memories. An unfortunate side effect of our existence as neural creatures.

\n\n

A single fleeting image can be enough to prime associated words for recognition. Don’t think it takes anything more to set confirmation bias in motion. All it takes is that one quick flash, and the bottom line is already decided, for we change our minds less often than we think . . .

\n\n
\n \n\n

1Thomas Mussweiler and Fritz Strack, “Comparing Is Believing: A Selective Accessibility Model of Judgmental Anchoring,” European Review of Social Psychology 10 (1 1999): 135–167.

\n\n

2Gretchen B. Chapman and Eric J. Johnson, “Incorporating the Irrelevant: Anchors in Judgments of Belief and Value,” in Heuristics and Biases, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (New York: Cambridge University Press, 2002), 120–138.

\n\n

3Tversky and Kahneman, “Judgment Under Uncertainty.”

\n\n

4Nicholas Epley and Thomas Gilovich, “Putting Adjustment Back in the Anchoring and Adjustment Heuristic: Differential Processing of Self-Generated and Experimenter-Provided Anchors,” Psychological Science 12 (5 2001): 391–396.

\n\n

5Brian Wansink, Robert J. Kent, and Stephen J. Hoch, “An Anchoring and Adjustment Model of Purchase Quantity Decisions,” Journal of Marketing Research 35, no. 1 (1998): 71–81, http://www.jstor.org/stable/3151931.

\n\n

6See “The Third Alternative,” “Knowing About Biases Can Hurt You,” “One Argument Against An Army,” “What Evidence Filtered Evidence?”, and “Rationalization.” And “Hindsight Devalues Science,” “Fake Causality,” and “Positive Bias: Look into the Dark” in Map and Territory. And the rest of this book.

\n
\n\n" } }, { "_id": "qmqLxvtsPzZ2s6mpY", "title": "A Priori", "pageUrl": "https://www.lesswrong.com/posts/qmqLxvtsPzZ2s6mpY/a-priori", "postedAt": "2007-10-08T21:02:14.000Z", "baseScore": 88, "voteCount": 73, "commentCount": 133, "url": null, "contents": { "documentId": "qmqLxvtsPzZ2s6mpY", "html": "

Traditional Rationality is phrased as social rules, with violations interpretable as cheating: if you break the rules and no one else is doing so, you're the first to defect - making you a bad, bad person.  To Bayesians, the brain is an engine of accuracy: if you violate the laws of rationality, the engine doesn't run, and this is equally true whether anyone else breaks the rules or not.

\n\n

Consider the problem of Occam's Razor, as confronted by Traditional philosophers.  If two hypotheses fit the same observations equally well, why believe the simpler one is more likely to be true?

You could argue that Occam's Razor has worked in the past, and is therefore likely to continue to work in the future.  But this, itself, appeals to a prediction from Occam's Razor.  "Occam's Razor works up to October 8th, 2007 and then stops working thereafter" is more complex, but it fits the observed evidence equally well.

\n\n

You could argue that Occam's Razor is a reasonable distribution on prior probabilities.  But what is a "reasonable" distribution?  Why not label "reasonable" a very complicated prior distribution, which makes Occam's Razor work in all observed tests so far, but generates exceptions in future cases?

\n\n

Indeed, it seems there is no way to justify Occam's Razor except by appealing to Occam's Razor, making this argument unlikely to convince any judge who does not already accept Occam's Razor.  (What's special about the words I italicized?)

\n\n

If you are a philosopher whose daily work is to write papers, criticize other people's papers, and respond to others' criticisms of your own papers, then you may look at Occam's Razor and shrug.  Here is an end to justifying, arguing and convincing.  You decide to call a truce on writing papers; if your fellow philosophers do not demand justification for your un-arguable beliefs, you will not demand justification for theirs.  And as the symbol of your treaty, your white flag, you use the phrase "a priori truth".

\n\n

But to a Bayesian, in this era of cognitive science and evolutionary biology and Artificial Intelligence, saying "a priori" doesn't explain why the brain-engine runs.  If the brain has an amazing "a priori truth factory" that works to produce accurate beliefs, it makes you wonder why a thirsty hunter-gatherer can't use the "a priori truth factory" to locate drinkable water.  It makes you wonder why eyes evolved in the first place, if there are ways to produce accurate beliefs without looking at things.

\n\n

James R. Newman said:  "The fact that one apple added to one apple invariably gives two apples helps in the teaching of arithmetic, but has no bearing on the truth of the proposition that 1 + 1 = 2."  The Internet Encyclopedia of Philosophy defines "a priori" propositions as those knowable independently of experience.  Wikipedia quotes Hume:  Relations of ideas are "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe."  You can see that 1 + 1 = 2 just by thinking about it, without looking at apples.

\n\n

But in this era of neurology, one ought to be aware that thoughts are existent in the universe; they are identical to the operation of brains.  Material brains, real in the universe, composed of quarks in a single unified mathematical physics whose laws draw no border between the inside and outside of your skull.

\n\n

When you add 1 + 1 and get 2 by thinking, these thoughts are themselves embodied in flashes of neural patterns.  In principle, we could observe, experientially, the exact same material events as they occurred within someone else's brain.  It would require some advances in computational neurobiology and brain-computer interfacing, but in principle, it could be done.  You could see someone else's engine operating materially, through material chains of cause and effect, to compute by "pure thought" that 1 + 1 = 2.  How is observing this pattern in someone else's brain any different, as a way of knowing, from observing your own brain doing the same thing?  When "pure thought" tells you that 1 + 1 = 2, "independently of any experience or observation", you are, in effect, observing your own brain as evidence.

\n\n

If this seems counterintuitive, try to see minds/brains as engines - an engine that collides the neural pattern for 1 and the neural pattern for 1 and gets the neural pattern for 2.  If this engine works at all, then it should have the same output if it observes (with eyes and retina) a similar brain-engine carrying out a similar collision, and copies into itself the resulting pattern.  In other words, for every form of a priori knowledge obtained by "pure thought", you are learning exactly the same thing you would learn if you saw an outside brain-engine carrying out the same pure flashes of neural activation.  The engines are equivalent, the bottom-line outputs are equivalent, the belief-entanglements are the same.

\n\n

There is nothing you can know "a priori", which you could not know with equal validity by observing the chemical release of neurotransmitters within some outside brain.  What do you think you are, dear reader?

\n\n

This is why you can predict the result of adding 1 apple and 1 apple by imagining it first in your mind, or punch "3 x 4" into a calculator to predict the result of imagining 4 rows with 3 apples per row.  You and the apple exist within a boundary-less unified physical process, and one part may echo another.

\n\n

Are the sort of neural flashes that philosophers label "a priori beliefs", arbitrary?  Many AI algorithms function better with "regularization" that biases the solution space toward simpler solutions.  But the regularized algorithms are themselves more complex; they contain an extra line of code (or 1000 extra lines) compared to unregularized algorithms.  The human brain is biased toward simplicity, and we think more efficiently thereby.  If you press the Ignore button at this point, you're left with a complex brain that exists for no reason and works for no reason.  So don't try to tell me that "a priori" beliefs are arbitrary, because they sure aren't generated by rolling random numbers.  (What does the adjective "arbitrary" mean, anyway?)
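As a hedged illustration of that "extra line of code," here is a sketch comparing plain least squares with a ridge-regularized fit; the data and the penalty strength are arbitrary placeholders.

```python
# A sketch of the "extra line of code" that regularization adds, using
# ridge (L2-penalized) linear regression as a stand-in example. The data
# and penalty strength below are arbitrary placeholders.
import numpy as np

def fit_linear(X, y):
    # Ordinary least squares: pick whatever weights fit the data best.
    return np.linalg.solve(X.T @ X, X.T @ y)

def fit_ridge(X, y, lam=1.0):
    # Same engine, plus one extra term (lam * I) that penalizes large
    # weights: a built-in bias toward simpler solutions, bought at the
    # cost of a slightly more complex algorithm.
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
true_w = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=20)

print(fit_linear(X, y))
print(fit_ridge(X, y, lam=5.0))   # weights shrink toward zero
```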

\n\n

You can't excuse calling a proposition "a priori" by pointing out that other philosophers are having trouble justifying their propositions.  If a philosopher fails to explain something, this fact cannot supply electricity to a refrigerator, nor act as a magical factory for accurate beliefs.  There's no truce, no white flag, until you understand why the engine works.

\n\n

If you clear your mind of justification, of argument, then it seems obvious why Occam's Razor works in practice: we live in a simple world, a low-entropy universe in which there are short explanations to be found.  "But," you cry, "why is the universe itself orderly?"  This I do not know, but it is what I see as the next mystery to be explained.  This is not the same question as "How do I argue Occam's Razor to a hypothetical debater who has not already accepted it?"

\n\n

Perhaps you cannot argue anything to a hypothetical debater who has not accepted Occam's Razor, just as you cannot argue anything to a rock.  A mind needs a certain amount of dynamic structure to be an argument-acceptor.  If a mind doesn't implement Modus Ponens, it can accept "A" and "A->B" all day long without ever producing "B".  How do you justify Modus Ponens to a mind that hasn't accepted it?  How do you argue a rock into becoming a mind?

\n\n

Brains evolved from non-brainy matter by natural selection; they were not justified into existence by arguing with an ideal philosophy student of perfect emptiness.  This does not make our judgments meaningless.  A brain-engine can work correctly, producing accurate beliefs, even if it was merely built - by human hands or cumulative stochastic selection pressures - rather than argued into existence.  But to be satisfied by this answer, one must see rationality in terms of engines, rather than arguments.

" } }, { "_id": "eY45uCCX7DdwJ4Jha", "title": "No One Can Exempt You From Rationality's Laws", "pageUrl": "https://www.lesswrong.com/posts/eY45uCCX7DdwJ4Jha/no-one-can-exempt-you-from-rationality-s-laws", "postedAt": "2007-10-07T17:24:44.000Z", "baseScore": 136, "voteCount": 106, "commentCount": 53, "url": null, "contents": { "documentId": "eY45uCCX7DdwJ4Jha", "html": "\n\n\n\n \n\n \n\n

Traditional Rationality is phrased in terms of social rules, with violations interpretable as cheating—as defections from cooperative norms. If you want me to accept a belief from you, you are obligated to provide me with a certain amount of evidence. If you try to get out of it, we all know you’re cheating on your obligation. A theory is obligated to make bold predictions for itself, not just steal predictions that other theories have labored to make. A theory is obligated to expose itself to falsification—if it tries to duck out, that’s like trying to duck out of a fearsome initiation ritual; you must pay your dues.

\n\n

Traditional Rationality is phrased similarly to the customs that govern human societies, which makes it easy to pass on by word of mouth. Humans detect social cheating with much greater reliability than isomorphic violations of abstract logical rules.1 But viewing rationality as a social obligation gives rise to some strange ideas.

\n\n

For example, one finds religious people defending their beliefs by saying, “Well, you can’t justify your belief in science!” In other words, “How dare you criticize me for having unjustified beliefs, you hypocrite! You’re doing it too!”

\n\n

To Bayesians, the brain is an engine of accuracy: it processes and concentrates entangled evidence into a map that reflects the territory. The principles of rationality are laws in the same sense as the Second Law of Thermodynamics: obtaining a reliable belief requires a calculable amount of entangled evidence, just as reliably cooling the contents of a refrigerator requires a calculable minimum of free energy.

\n\n

In principle, the laws of physics are time-reversible, so there’s an infinitesimally tiny probability—indistinguishable from zero to all but mathematicians—that a refrigerator will spontaneously cool itself down while generating electricity. There’s a slightly larger infinitesimal chance that you could accurately draw a detailed street map of New York without ever visiting, sitting in your living room with your blinds closed and no Internet connection. But I wouldn’t hold your breath.

\n\n

Before you try mapping an unseen territory, pour some water into a cup at room temperature and wait until it spontaneously freezes before proceeding. That way you can be sure the general trick—ignoring infinitesimally tiny probabilities of success—is working properly. You might not realize directly that your map is wrong, especially if you never visit New York; but you can see that water doesn’t freeze itself.

\n\n

If the rules of rationality are social customs, then it may seem to excuse behavior X if you point out that others are doing the same thing. It wouldn’t be fair to demand evidence from you, if we can’t provide it ourselves. We will realize that none of us are better than the rest, and we will relent and mercifully excuse you from your social obligation to provide evidence for your belief. And we’ll all live happily ever afterward in liberty, fraternity, and equality.

\n\n

If the rules of rationality are mathematical laws, then trying to justify evidence-free belief by pointing to someone else doing the same thing will be around as effective as listing thirty reasons why you shouldn’t fall off a cliff. Even if we all vote that it’s unfair for your refrigerator to need electricity, it still won’t run (with probability ~1). Even if we all vote that you shouldn’t have to visit New York, the map will still be wrong. Lady Nature is famously indifferent to such pleading, and so is Lady Math.

\n\n

So—to shift back to the social language of Traditional Rationality—don’t think you can get away with claiming that it’s okay to have arbitrary beliefs about XYZ, because other people have arbitrary beliefs too. If two parties to a contract both behave equally poorly, a human judge may decide to impose penalties on neither. But if two engineers design their engines equally poorly, neither engine will work. One design error cannot excuse another. Even if I’m doing XYZ wrong, it doesn’t help you, or exempt you from the rules; it just means we’re both screwed.

\n\n

As a matter of human law in liberal democracies, everyone is entitled to their own beliefs. As a matter of Nature’s law, you are not entitled to accuracy. We don’t arrest people for believing weird things, at least not in the wiser countries. But no one can revoke the law that you need evidence to generate accurate beliefs. Not even a vote of the whole human species can obtain mercy in the court of Nature.

\n\n

Physicists don’t decide the laws of physics, they just guess what they are. Rationalists don’t decide the laws of rationality, we just guess what they are. You cannot “rationalize” anything that is not rational to begin with. If by dint of extraordinary persuasiveness you convince all the physicists in the world that you are exempt from the law of gravity, and you walk off a cliff, you’ll fall. Even saying “We don’t decide” is too anthropomorphic. There is no higher authority that could exempt you. There is only cause and effect.

\n\n

Remember this, when you plead to be excused just this once. We can’t excuse you. It isn’t up to us.

\n\n
\n \n\n

1Leda Cosmides and John Tooby, “Cognitive Adaptations for Social Exchange: Evolutionary Psychology and the Generation of Culture,” in The Adapted Mind, ed. Jerome H. Barkow, Leda Cosmides, and John Tooby (New York: Oxford University Press, 1992), 163–228.

\n
\n\n" } }, { "_id": "CahCppKy9HuXe3j2i", "title": "Singlethink", "pageUrl": "https://www.lesswrong.com/posts/CahCppKy9HuXe3j2i/singlethink", "postedAt": "2007-10-06T19:24:01.000Z", "baseScore": 116, "voteCount": 92, "commentCount": 32, "url": null, "contents": { "documentId": "CahCppKy9HuXe3j2i", "html": "\n\n\n\n \n\n \n\n

I remember the exact moment when I began my journey as a rationalist.

\n\n

It was not while reading Surely You’re Joking, Mr. Feynman or any existing work upon rationality; for these I simply accepted as obvious. The journey begins when you see a great flaw in your existing art, and discover a drive to improve, to create new skills beyond the helpful but inadequate ones you found in books.

\n\n

In the last moments of my first life, I was fifteen years old, and rehearsing a pleasantly self-righteous memory of a time when I was much younger. My memories this far back are vague; I have a mental image, but I don’t remember how old I was exactly. I think I was six or seven, and that the original event happened during summer camp.

\n\n

What happened originally was that a camp counselor, a teenage male, got us much younger boys to form a line, and proposed the following game: the boy at the end of the line would crawl through our legs, and we would spank him as he went past, and then it would be the turn of the next eight-year-old boy at the end of the line. (Maybe it’s just that I’ve lost my youthful innocence, but I can’t help but wonder . . .) I refused to play this game, and was told to go sit in the corner.

\n\n

This memory—of refusing to spank and be spanked—came to symbolize to me that even at this very early age I had refused to take joy in hurting others. That I would not purchase a spank on another’s butt, at the price of a spank on my own; would not pay in hurt for the opportunity to inflict hurt. I had refused to play a negative-sum game.

\n\n

And then, at the age of fifteen, I suddenly realized that it wasn’t true. I hadn’t refused out of a principled stand against negative-sum games. I found out about the Prisoner’s Dilemma pretty early in life, but not at the age of seven. I’d refused simply because I didn’t want to get hurt, and standing in the corner was an acceptable price to pay for not getting hurt.

\n\n

More importantly, I realized that I had always known this—that the real memory had always been lurking in a corner of my mind, my mental eye glancing at it for a fraction of a second and then looking away.

\n\n

In my very first step along the Way, I caught the feeling—generalized over the subjective experience—and said, “So that’s what it feels like to shove an unwanted truth into the corner of my mind! Now I’m going to notice every time I do that, and clean out all my corners!”

\n\n

This discipline I named singlethink, after Orwell’s doublethink. In doublethink, you forget, and then forget you have forgotten. In singlethink, you notice you are forgetting, and then you remember. You hold only a single non-contradictory thought in your mind at once.

\n\n

“Singlethink” was the first new rationalist skill I created, which I had not read about in books. I doubt that it is original in the sense of academic priority, but this is thankfully not required.

\n\n

Oh, and my fifteen-year-old self liked to name things.

\n\n

The terrifying depths of the confirmation bias go on and on. Not forever, for the brain is of finite complexity, but long enough that it feels like forever. You keep on discovering (or reading about) new mechanisms by which your brain shoves things out of the way.

\n\n

But my young self swept out quite a few corners with that first broom.

\n\n" } }, { "_id": "3nZMgRTfFEfHp34Gb", "title": "The Meditation on Curiosity", "pageUrl": "https://www.lesswrong.com/posts/3nZMgRTfFEfHp34Gb/the-meditation-on-curiosity", "postedAt": "2007-10-06T00:26:28.000Z", "baseScore": 196, "voteCount": 165, "commentCount": 102, "url": null, "contents": { "documentId": "3nZMgRTfFEfHp34Gb", "html": "

The first virtue is curiosity.

—“The Twelve Virtues of Rationality

As rationalists, we are obligated to criticize ourselves and question our beliefs . . . are we not?

Consider what happens to you, on a psychological level, if you begin by saying: “It is my duty to criticize my own beliefs.” Roger Zelazny once distinguished between “wanting to be an author” versus “wanting to write.” Mark Twain said: “A classic is something that everyone wants to have read and no one wants to read.” Criticizing yourself from a sense of duty leaves you wanting to have investigated, so that you’ll be able to say afterward that your faith is not blind. This is not the same as wanting to investigate.

This can lead to motivated stopping of your investigation.  You consider an objection, then a counterargument to that objection, then you stop there.  You repeat this with several objections, until you feel that you have done your duty to investigate, and then you stop there. You have achieved your underlying psychological objective: to get rid of the cognitive dissonance that would result from thinking of yourself as a rationalist, and yet knowing that you had not tried to criticize your belief.  You might call it purchase of rationalist satisfaction—trying to create a “warm glow” of discharged duty.

Afterward, your stated probability level will be high enough to justify your keeping the plans and beliefs you started with, but not so high as to evoke incredulity from yourself or other rationalists.

When you’re really curious, you’ll gravitate to inquiries that seem most promising of producing shifts in belief, or inquiries that are least like the ones you’ve tried before. Afterward, your probability distribution likely should not look like it did when you started out—shifts should have occurred, whether up or down; and either direction is equally fine to you, if you’re genuinely curious.

Contrast this to the subconscious motive of keeping your inquiry on familiar ground, so that you can get your investigation over with quickly, so that you can have investigated, and restore the familiar balance on which your familiar old plans and beliefs are based.

As for what I think true curiosity should look like, and the power that it holds, I refer you to “A Fable of Science and Politics” in the first book of this series, Map and Territory. The fable showcases the reactions of different characters to an astonishing discovery, with each character’s response intended to illustrate different lessons. Ferris, the last character, embodies the power of innocent curiosity: which is lightness, and an eager reaching forth for evidence.

Ursula K. Le Guin wrote: “In innocence there is no strength against evil. But there is strength in it for good.”1 Innocent curiosity may turn innocently awry; and so the training of a rationalist, and its accompanying sophistication, must be dared as a danger if we want to become stronger. Nonetheless we can try to keep the lightness and the eager reaching of innocence.

As it is written in “The Twelve Virtues of Rationality”:

If in your heart you believe you already know, or if in your heart you do not wish to know, then your questioning will be purposeless and your skills without direction. Curiosity seeks to annihilate itself; there is no curiosity that does not want an answer.

There just isn’t any good substitute for genuine curiosity. A burning itch to know is higher than a solemn vow to pursue truth. But you can’t produce curiosity just by willing it, any more than you can will your foot to feel warm when it feels cold. Sometimes, all we have is our mere solemn vows.

So what can you do with duty? For a start, we can try to take an interest in our dutiful investigations—keep a close eye out for sparks of genuine intrigue, or even genuine ignorance and a desire to resolve it. This goes right along with keeping a special eye out for possibilities that are painful, that you are flinching away from—it’s not all negative thinking.

It should also help to meditate on “Conservation of Expected Evidence.” For every new point of inquiry, for every piece of unseen evidence that you suddenly look at, the expected posterior probability should equal your prior probability. In the microprocess of inquiry, your belief should always be evenly poised to shift in either direction. Not every point may suffice to blow the issue wide open—to shift belief from 70% to 30% probability—but if your current belief is 70%, you should be as ready to drop it to 69% as raise it to 71%. You should not think that you know which direction it will go in (on average), because by the laws of probability theory, if you know your destination, you are already there. If you can investigate honestly, so that each new point really does have equal potential to shift belief upward or downward, this may help to keep you interested or even curious about the microprocess of inquiry.
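Here is a minimal numerical check of that law, with arbitrary placeholder probabilities: however the unseen evidence could turn out, the probability-weighted average of your possible posteriors equals your prior.

```python
# Numerical check of Conservation of Expected Evidence. The probabilities
# below are arbitrary placeholders, not numbers from the text.

prior_h = 0.70                 # current belief in some hypothesis H
p_e_given_h = 0.60             # chance of seeing evidence E if H is true
p_e_given_not_h = 0.30         # chance of seeing E if H is false

p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h

posterior_if_e = prior_h * p_e_given_h / p_e
posterior_if_not_e = prior_h * (1 - p_e_given_h) / (1 - p_e)

expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e

print(round(posterior_if_e, 3))      # ~0.824: belief shifts up if E is seen
print(round(posterior_if_not_e, 3))  # ~0.571: belief shifts down otherwise
print(round(expected_posterior, 3))  # 0.7: exactly the prior, before looking
```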

If the argument you are considering is not new, then why is your attention going here? Is this where you would look if you were genuinely curious? Are you subconsciously criticizing your belief at its strong points, rather than its weak points? Are you rehearsing the evidence?

If you can manage not to rehearse already known support, and you can manage to drop down your belief by one tiny bite at a time from the new evidence, you may even be able to relinquish the belief entirely—to realize from which quarter the winds of evidence are blowing against you.

Another restorative for curiosity is what I have taken to calling the Litany of Tarski, which is really a meta-litany that specializes for each instance (this is only appropriate). For example, if I am tensely wondering whether a locked box contains a diamond, then rather than thinking about all the wonderful consequences if the box does contain a diamond, I can repeat the Litany of Tarski:

If the box contains a diamond,
I desire to believe that the box contains a diamond;
If the box does not contain a diamond,
I desire to believe that the box does not contain a diamond;
Let me not become attached to beliefs I may not want.

Then you should meditate upon the possibility that there is no diamond, and the subsequent advantage that will come to you if you believe there is no diamond, and the subsequent disadvantage if you believe there is a diamond. See also the Litany of Gendlin.

If you can find within yourself the slightest shred of true uncertainty, then guard it like a forester nursing a campfire. If you can make it blaze up into a flame of curiosity, it will make you light and eager, and give purpose to your questioning and direction to your skills.

1Ursula K. Le Guin, The Farthest Shore (Saga Press, 2001).

" } }, { "_id": "dHQkDNMhj692ayx78", "title": "Avoiding Your Belief's Real Weak Points", "pageUrl": "https://www.lesswrong.com/posts/dHQkDNMhj692ayx78/avoiding-your-belief-s-real-weak-points", "postedAt": "2007-10-05T01:59:32.000Z", "baseScore": 195, "voteCount": 178, "commentCount": 214, "url": null, "contents": { "documentId": "dHQkDNMhj692ayx78", "html": "\n\n\n\n \n\n \n\n

A few years back, my great-grandmother died, in her nineties, after a long, slow, and cruel disintegration. I never knew her as a person, but in my distant childhood, she cooked for her family; I remember her gefilte fish, and her face, and that she was kind to me. At her funeral, my grand-uncle, who had taken care of her for years, spoke. He said, choking back tears, that God had called back his mother piece by piece: her memory, and her speech, and then finally her smile; and that when God finally took her smile, he knew it wouldn’t be long before she died, because it meant that she was almost entirely gone.

\n\n

I heard this and was puzzled, because it was an unthinkably horrible thing to happen to anyone, and therefore I would not have expected my grand-uncle to attribute it to God. Usually, a Jew would somehow just-not-think-about the logical implication that God had permitted a tragedy. According to Jewish theology, God continually sustains the universe and chooses every event in it; but ordinarily, drawing logical implications from this belief is reserved for happier occasions. By saying “God did it!” only when you’ve been blessed with a baby girl, and just-not-thinking “God did it!” for miscarriages and stillbirths and crib deaths, you can build up quite a lopsided picture of your God’s benevolent personality.

\n\n

Hence I was surprised to hear my grand-uncle attributing the slow disintegration of his mother to a deliberate, strategically planned act of God. It violated the rules of religious self-deception as I understood them.

\n\n

If I had noticed my own confusion, I could have made a successful surprising prediction. Not long afterward, my grand-uncle left the Jewish religion. (The only member of my extended family besides myself to do so, as far as I know.)

\n\n

Modern Orthodox Judaism is like no other religion I have ever heard of, and I don’t know how to describe it to anyone who hasn’t been forced to study Mishna and Gemara. There is a tradition of questioning, but the kind of questioning . . . It would not be at all surprising to hear a rabbi, in his weekly sermon, point out the conflict between the seven days of creation and the 13.7 billion years since the Big Bang—because he thought he had a really clever explanation for it, involving three other Biblical references, a Midrash, and a half-understood article in Scientific American. In Orthodox Judaism you’re allowed to notice inconsistencies and contradictions, but only for purposes of explaining them away, and whoever comes up with the most complicated explanation gets a prize.

\n\n

There is a tradition of inquiry. But you only attack targets for purposes of defending them. You only attack targets you know you can defend.

\n\n

In Modern Orthodox Judaism I have not heard much emphasis of the virtues of blind faith. You’re allowed to doubt. You’re just not allowed to successfully doubt.

\n\n

I expect that the vast majority of educated Orthodox Jews have questioned their faith at some point in their lives. But the questioning probably went something like this: “According to the skeptics, the Torah says that the universe was created in seven days, which is not scientifically accurate. But would the original tribespeople of Israel, gathered at Mount Sinai, have been able to understand the scientific truth, even if it had been presented to them? Did they even have a word for ‘billion’? It’s easier to see the seven-days story as a metaphor—first God created light, which represents the Big Bang . . .”

\n\n

Is this the weakest point at which to attack one’s own Judaism? Read a bit further on in the Torah, and you can find God killing the first-born male children of Egypt to convince an unelected Pharaoh to release slaves who logically could have been teleported out of the country. An Orthodox Jew is most certainly familiar with this episode, because they are supposed to read through the entire Torah in synagogue once per year, and this event has an associated major holiday. The name “Passover” (“Pesach”) comes from God passing over the Jewish households while killing every male firstborn in Egypt.

\n\n

Modern Orthodox Jews are, by and large, kind and civilized people; far more civilized than the several editors of the Old Testament. Even the old rabbis were more civilized. There’s a ritual in the Seder where you take ten drops of wine from your cup, one drop for each of the Ten Plagues, to emphasize the suffering of the Egyptians. (Of course, you’re supposed to be sympathetic to the suffering of the Egyptians, but not so sympathetic that you stand up and say, “This is not right! It is wrong to do such a thing!”) It shows an interesting contrast—the rabbis were sufficiently kinder than the compilers of the Old Testament that they saw the harshness of the Plagues. But Science was weaker in those days, and so rabbis could ponder the more unpleasant aspects of Scripture without fearing that it would break their faith entirely.

\n\n

You don’t even ask whether the incident reflects poorly on God, so there’s no need to quickly blurt out “The ways of God are mysterious!” or “We’re not wise enough to question God’s decisions!” or “Murdering babies is okay when God does it!” That part of the question is just-not-thought-about.

\n\n

The reason that educated religious people stay religious, I suspect, is that when they doubt, they are subconsciously very careful to attack their own beliefs only at the strongest points—places where they know they can defend. Moreover, places where rehearsing the standard defense will feel strengthening.

\n\n

It probably feels really good, for example, to rehearse one’s prescripted defense for “Doesn’t Science say that the universe is just meaningless atoms bopping around?” because it confirms the meaning of the universe and how it flows from God, etc. Much more comfortable to think about than an illiterate Egyptian mother wailing over the crib of her slaughtered son. Anyone who spontaneously thinks about the latter, when questioning their faith in Judaism, is really questioning it, and is probably not going to stay Jewish much longer.

\n\n

My point here is not just to beat up on Orthodox Judaism. I’m sure that there’s some reply or other for the Slaying of the Firstborn, and probably a dozen of them. My point is that, when it comes to spontaneous self-questioning, one is much more likely to spontaneously self-attack strong points with comforting replies to rehearse, than to spontaneously self-attack the weakest, most vulnerable points. Similarly, one is likely to stop at the first reply and be comforted, rather than further criticizing the reply. A better title than “Avoiding Your Belief’s Real Weak Points” would be “Not Spontaneously Thinking About Your Belief’s Most Painful Weaknesses.”

\n\n

More than anything, the grip of religion is sustained by people just-not-thinking-about the real weak points of their religion. I don’t think this is a matter of training, but a matter of instinct. People don’t think about the real weak points of their beliefs for the same reason they don’t touch an oven’s red-hot burners; it’s painful.

\n\n

To do better: When you’re doubting one of your most cherished beliefs, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most. Don’t rehearse standard objections whose standard counters would make you feel better. Ask yourself what smart people who disagree would say to your first reply, and your second reply. Whenever you catch yourself flinching away from an objection you fleetingly thought of, drag it out into the forefront of your mind. Punch yourself in the solar plexus. Stick a knife in your heart, and wiggle to widen the hole. In the face of the pain, rehearse only this:1

\n\n
\n \n\n

What is true is already so.

\n\n

Owning up to it doesn’t make it worse.

\n\n

Not being open about it doesn’t make it go away.

\n\n

And because it’s true, it is what is there to be interacted with.

\n\n

Anything untrue isn’t there to be lived.

\n\n

People can stand what is true,

\n\n

for they are already enduring it.

\n
\n\n
\n \n\n

1Eugene T. Gendlin, Focusing (Bantam Books, 1982).

\n
\n\n" } }, { "_id": "buixYfcXBah9hbSNZ", "title": "We Change Our Minds Less Often Than We Think", "pageUrl": "https://www.lesswrong.com/posts/buixYfcXBah9hbSNZ/we-change-our-minds-less-often-than-we-think", "postedAt": "2007-10-03T18:14:52.000Z", "baseScore": 116, "voteCount": 95, "commentCount": 120, "url": null, "contents": { "documentId": "buixYfcXBah9hbSNZ", "html": "\n\n\n\n \n\n \n\n
\n \n\n

Over the past few years, we have discreetly approached colleagues faced with a choice between job offers, and asked them to estimate the probability that they will choose one job over another. The average confidence in the predicted choice was a modest 66%, but only 1 of the 24 respondents chose the option to which he or she initially assigned a lower probability, yielding an overall accuracy rate of 96%.

\n\n

—Dale Griffin and Amos Tversky1

\n
\n\n

When I first read the words above—on August 1st, 2003, at around 3 o’clock in the afternoon—it changed the way I thought. I realized that once I could guess what my answer would be—once I could assign a higher probability to deciding one way than another—then I had, in all probability, already decided. We change our minds less often than we think. And most of the time we become able to guess what our answer will be within half a second of hearing the question.

\n\n

How swiftly that unnoticed moment passes, when we can’t yet guess what our answer will be; the tiny window of opportunity for intelligence to act. In questions of choice, as in questions of fact.

\n\n

The principle of the bottom line is that only the actual causes of your beliefs determine your effectiveness as a rationalist. Once your belief is fixed, no amount of argument will alter the truth-value; once your decision is fixed, no amount of argument will alter the consequences.

\n\n

You might think that you could arrive at a belief, or a decision, by non-rational means, and then try to justify it, and if you found you couldn’t justify it, reject it.

\n\n

But we change our minds less often—much less often—than we think.

\n\n

I’m sure that you can think of at least one occasion in your life when you’ve changed your mind. We all can. How about all the occasions in your life when you didn’t change your mind? Are they as available, in your heuristic estimate of your competence?

\n\n

Between hindsight bias, fake causality, positive bias, anchoring/priming, et cetera, et cetera, and above all the dreaded confirmation bias, once an idea gets into your head, it’s probably going to stay there.

\n\n
\n \n\n

1Dale Griffin and Amos Tversky, “The Weighing of Evidence and the Determinants of Confidence,” Cognitive Psychology 24, no. 3 (1992): 411–435.

\n
\n\n" } }, { "_id": "HqzteR7AHxyWtRBcD", "title": "Probability is the oil of rationalisation", "pageUrl": "https://www.lesswrong.com/posts/HqzteR7AHxyWtRBcD/probability-is-the-oil-of-rationalisation", "postedAt": "2007-10-03T02:28:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "HqzteR7AHxyWtRBcD", "html": "
\n
Or How to do whatever you feel like despite being a rationalist.
\n
\n

To rationally make a choice, you weigh up all the costs and benefits of all the possibilities and choose the one with the greatest net benefit. To rationalise a choice you want to make, you pick out the costs and benefits that make your choice seem like the rational conclusion. Thinking you’re being rational while completely ignoring known costs and benefits that don’t lead to your preferred conclusion is hard to do, though. Even slight intelligence leads you to notice this sort of thing happening in your own mind.

\n

For most everyday decisions I suggest the ‘solution’ lies in probability estimation. While you might have a set of outcomes you consider possible, their likelihoods are virtually always uncertain. It’s a guessing game, and if you’re guessing, why not guess things that lead to the conclusion you prefer? You might even notice while you’re doing it that your probability estimates are being swayed by the conclusion they’ll lead to, but it doesn’t matter. Within the range where there are no other bases for their positioning, why change your estimates to ones with a less pleasing outcome in the short term? Essentially we slide partiality into the one non-rational part of a rational process.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "9f5EXt8KNNxTAihtZ", "title": "A Rational Argument", "pageUrl": "https://www.lesswrong.com/posts/9f5EXt8KNNxTAihtZ/a-rational-argument", "postedAt": "2007-10-02T18:35:48.000Z", "baseScore": 106, "voteCount": 90, "commentCount": 41, "url": null, "contents": { "documentId": "9f5EXt8KNNxTAihtZ", "html": "\n\n\n\n \n\n \n\n

You are, by occupation, a campaign manager, and you’ve just been hired by Mortimer Q. Snodgrass, the Green candidate for Mayor of Hadleyburg. As a campaign manager reading a book on rationality, one question lies foremost on your mind: “How can I construct an impeccable rational argument that Mortimer Q. Snodgrass is the best candidate for Mayor of Hadleyburg?”

\n\n

Sorry. It can’t be done.

\n\n

“What?” you cry. “But what if I use only valid support to construct my structure of reason? What if every fact I cite is true to the best of my knowledge, and relevant evidence under Bayes’s Rule?”1

\n\n

Sorry. It still can’t be done. You defeated yourself the instant you specified your argument’s conclusion in advance.

\n\n

This year, the Hadleyburg Trumpet sent out a 16-item questionnaire to all mayoral candidates, with questions like “Can you paint with all the colors of the wind?” and “Did you inhale?” Alas, the Trumpet’s offices were destroyed by a meteorite before publication. It’s a pity, since your own candidate, Mortimer Q. Snodgrass, compares well to his opponents on 15 out of 16 questions. The only sticking point was Question 11, “Are you now, or have you ever been, a supervillain?”

\n\n

So you are tempted to publish the questionnaire as part of your own campaign literature . . . with the 11th question omitted, of course.

\n\n

Which crosses the line between rationality and rationalization. It is no longer possible for the voters to condition on the facts alone; they must condition on the additional fact of their presentation, and infer the existence of hidden evidence.

\n\n

Indeed, you crossed the line at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it. “What!” you cry. “A campaign should publish facts unfavorable to their candidate?” But put yourself in the shoes of a voter, still trying to select a candidate—why would you censor useful information? You wouldn’t, if you were genuinely curious. If you were flowing forward from the evidence to an unknown choice of candidate, rather than flowing backward from a fixed candidate to determine the arguments.

\n\n

A “logical” argument is one that follows from its premises. Thus the following argument is illogical:

All rectangles are quadrilaterals.
All squares are quadrilaterals.
Therefore, all squares are rectangles.

This syllogism is not rescued from illogic by the truth of its premises or even the truth of its conclusion. It is worth distinguishing logical deductions from illogical ones, and refusing to excuse them even if their conclusions happen to be true. For one thing, the distinction may affect how we revise our beliefs in light of future evidence. For another, sloppiness is habit-forming.

\n\n

Above all, the syllogism fails to state the real explanation. Maybe all squares are rectangles, but, if so, it’s not because they are both quadrilaterals. You might call it a hypocritical syllogism—one with a disconnect between its stated reasons and real reasons.

\n\n

If you really want to present an honest, rational argument for your candidate, in a political campaign, there is only one way to do it:

  1. Before anyone hires you, gather up all the evidence you can about the different candidates.
  2. Make a checklist which you, yourself, will use to decide which candidate seems best.
  3. Process the checklist.
  4. Go to the winning candidate.
  5. Offer to become their campaign manager.
  6. When they ask for campaign literature, print out your checklist.

Only in this way can you offer a rational chain of argument, one whose bottom line was written flowing forward from the lines above it. Whatever actually decides your bottom line is the only thing you can honestly write on the lines above.

\n\n
\n \n\n

1See “What Is Evidence?” in Map and Territory.

\n
\n\n" } }, { "_id": "RiQYixgCdvd8eWsjg", "title": "Recommended Rationalist Reading", "pageUrl": "https://www.lesswrong.com/posts/RiQYixgCdvd8eWsjg/recommended-rationalist-reading", "postedAt": "2007-10-01T18:36:40.000Z", "baseScore": 21, "voteCount": 18, "commentCount": 23, "url": null, "contents": { "documentId": "RiQYixgCdvd8eWsjg", "html": "

From this month's Open Thread, Stirling Westrup asks:

\n
\n
\n

There is much mention in this blog about Bayesian rationality, or the use of Bayes' methods in decision making. Now, I studied Bayes conditional probabilities in Statistics class in University many years ago, but my knowledge of the theory ends there. Can you recommend any good books on the subject?

\n

In fact, do you folks have a recommended reading list (other than this blog, of course!) for those trying to identify and overcome their own biases?

\n
\n
\n
\n

I second the question.  My own recommendations will be found in the comments.

\n
" } }, { "_id": "SFZoEBpLo9frSJGkc", "title": "Rationalization", "pageUrl": "https://www.lesswrong.com/posts/SFZoEBpLo9frSJGkc/rationalization", "postedAt": "2007-09-30T19:29:15.000Z", "baseScore": 133, "voteCount": 117, "commentCount": 29, "url": null, "contents": { "documentId": "SFZoEBpLo9frSJGkc", "html": "

In “The Bottom Line,” I presented the dilemma of two boxes, only one of which contains a diamond, with various signs and portents as evidence. I dichotomized the curious inquirer and the clever arguer. The curious inquirer writes down all the signs and portents, and processes them, and finally writes down, “Therefore, I estimate an 85% probability that box B contains the diamond.” The clever arguer works for the highest bidder, and begins by writing, “Therefore, box B contains the diamond,” and then selects favorable signs and portents to list on the lines above.

The first procedure is rationality. The second procedure is generally known as “rationalization.”

“Rationalization.” What a curious term. I would call it a wrong word. You cannot “rationalize” what is not already rational. It is as if “lying” were called “truthization.”

On a purely computational level, there is a rather large difference between:

  1. Starting from evidence, and then crunching probability flows, in order to output a probable conclusion. (Writing down all the signs and portents, and then flowing forward to a probability on the bottom line which depends on those signs and portents.) 
  2. Starting from a conclusion, and then crunching probability flows, in order to output evidence apparently favoring that conclusion. (Writing down the bottom line, and then flowing backward to select signs and portents for presentation on the lines above.)

What fool devised such confusingly similar words, “rationality” and “rationalization,” to describe such extraordinarily different mental processes? I would prefer terms that made the algorithmic difference obvious, like “rationality” versus “giant sucking cognitive black hole.”

Not every change is an improvement, but every improvement is necessarily a change. You cannot obtain more truth for a fixed proposition by arguing it; you can make more people believe it, but you cannot make it more true. To improve our beliefs, we must necessarily change our beliefs. Rationality is the operation that we use to obtain more accuracy for our beliefs by changing them. Rationalization operates to fix beliefs in place; it would be better named “anti-rationality,” both for its pragmatic results and for its reversed algorithm.

“Rationality” is the forward flow that gathers evidence, weighs it, and outputs a conclusion. The curious inquirer used a forward-flow algorithm: first gathering the evidence, writing down a list of all visible signs and portents, which they then processed forward to obtain a previously unknown probability for the box containing the diamond. During the entire time that the rationality-process was running forward, the curious inquirer did not yet know their destination, which was why they were curious. In the Way of Bayes, the prior probability equals the expected posterior probability: If you know your destination, you are already there.

“Rationalization” is a backward flow from conclusion to selected evidence. First you write down the bottom line, which is known and fixed; the purpose of your processing is to find out which arguments you should write down on the lines above. This, not the bottom line, is the variable unknown to the running process.
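The algorithmic difference is easy to caricature in a few lines. In the sketch below, the signs and their likelihood factors are invented for illustration.

```python
# Toy contrast between the two flows. The "signs and portents" and their
# likelihood factors below are invented for illustration.

signs = {                 # sign -> odds factor favoring "box B has the diamond"
    "blue stamp": 2.0,
    "shiny surface": 1.5,
    "glued-on frog": 0.5,
    "ominous hum": 0.25,
}

def forward_flow(observed_signs, prior_odds=1.0):
    """Curious inquirer: start from ALL the evidence, end at a conclusion."""
    odds = prior_odds
    for sign in observed_signs:
        odds *= signs[sign]
    return odds / (1 + odds)          # probability that box B has the diamond

def backward_flow():
    """Clever arguer: the bottom line ("box B") is fixed in advance; the
    processing only selects which favorable signs to write above it."""
    return [sign for sign, factor in signs.items() if factor > 1.0]

print(forward_flow(signs))            # conclusion computed from everything
print(backward_flow())                # only the favorable signs get reported
```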

I fear that Traditional Rationality does not properly sensitize its users to the difference between forward flow and backward flow. In Traditional Rationality, there is nothing wrong with the scientist who arrives at a pet hypothesis and then sets out to find an experiment that proves it. A Traditional Rationalist would look at this approvingly, and say, “This pride is the engine that drives Science forward.” Well, it is the engine that drives Science forward. It is easier to find a prosecutor and defender biased in opposite directions, than to find a single unbiased human.

But just because everyone does something, doesn’t make it okay. It would be better yet if the scientist, arriving at a pet hypothesis, set out to test that hypothesis for the sake of curiosity—creating experiments that would drive their own beliefs in an unknown direction.

If you genuinely don’t know where you are going, you will probably feel quite curious about it. Curiosity is the first virtue, without which your questioning will be purposeless and your skills without direction.

Feel the flow of the Force, and make sure it isn’t flowing backwards.

" } }, { "_id": "kJiPnaQPiy4p9Eqki", "title": "What Evidence Filtered Evidence?", "pageUrl": "https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence", "postedAt": "2007-09-29T23:10:05.000Z", "baseScore": 127, "voteCount": 95, "commentCount": 43, "url": null, "contents": { "documentId": "kJiPnaQPiy4p9Eqki", "html": "\n\n\n\n \n\n \n\n

I discussed the dilemma of the clever arguer, hired to sell you a box that may or may not contain a diamond. The clever arguer points out to you that the box has a blue stamp, and it is a valid known fact that diamond-containing boxes are more likely than empty boxes to bear a blue stamp. What happens at this point, from a Bayesian perspective? Must you helplessly update your probabilities, as the clever arguer wishes?

\n\n

If you can look at the box yourself, you can add up all the signs yourself. What if you can’t look? What if the only evidence you have is the word of the clever arguer, who is legally constrained to make only true statements, but does not tell you everything they know? Each statement that the clever arguer makes is valid evidence—how could you not update your probabilities? Has it ceased to be true that, in such-and-such a proportion of Everett branches or Tegmark duplicates in which box B has a blue stamp, box B contains a diamond? According to Jaynes, a Bayesian must always condition on all known evidence, on pain of paradox. But then the clever arguer can make you believe anything they choose, if there is a sufficient variety of signs to selectively report. That doesn’t sound right.

\n\n

Consider a simpler case, a biased coin, which may be biased to come up 2/3 heads and 1/3 tails, or 1/3 heads and 2/3 tails, both cases being equally likely a priori. Each H observed is 1 bit of evidence for an H-biased coin; each T observed is 1 bit of evidence for a T-biased coin.1 I flip the coin ten times, and then I tell you, “The 4th flip, 6th flip, and 9th flip came up heads.” What is your posterior probability that the coin is H-biased?

\n\n

And the answer is that it could be almost anything, depending on what chain of cause and effect lay behind my utterance of those words—my selection of which flips to report.
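To make that dependence concrete, here is a rough Python sketch of my own (the names posterior_H_biased, naive, and full_list are illustrative choices, not from the text), comparing a listener who takes the sentence at face value with one who knows the speaker always reports the complete list of heads:

```python
from itertools import product

def seq_prob(seq, p_heads):
    """Probability of a particular 10-flip sequence given P(heads)."""
    prob = 1.0
    for flip in seq:
        prob *= p_heads if flip == 'H' else (1 - p_heads)
    return prob

reported = {3, 5, 8}   # 0-indexed positions of the 4th, 6th, and 9th flips

def posterior_H_biased(consistent):
    """P(H-biased | report), where consistent(seq) says whether the speaker
    would have uttered exactly this report for that sequence of flips."""
    joint_H = sum(seq_prob(s, 2/3) for s in product('HT', repeat=10) if consistent(s))
    joint_T = sum(seq_prob(s, 1/3) for s in product('HT', repeat=10) if consistent(s))
    return joint_H / (joint_H + joint_T)

# Naive listener: treats the sentence as nothing more than "those three flips were heads".
def naive(seq):
    return all(seq[i] == 'H' for i in reported)

# Speaker who always reports the complete list of heads: the same sentence now
# also implies that every other flip came up tails.
def full_list(seq):
    return all((seq[i] == 'H') == (i in reported) for i in range(10))

print(posterior_H_biased(naive))      # ~0.889: 3 bits toward the H-biased coin
print(posterior_H_biased(full_list))  # ~0.059: net 4 bits toward the T-biased coin
```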

\n\n \n\n

Or consider the Monty Hall problem:

\n\n
\n \n\n

On a game show, you are given the choice of three doors leading to three rooms. You know that in one room is $100,000, and the other two are empty. The host asks you to pick a door, and you pick door #1. Then the host opens door #2, revealing an empty room. Do you want to switch to door #3, or stick with door #1?

\n
\n\n

The answer depends on the host’s algorithm. If the host always opens a door and always picks a door leading to an empty room, then you should switch to door #3. If the host always opens door #2 regardless of what is behind it, #1 and #3 both have 50% probabilities of containing the money. If the host only opens a door, at all, if you initially pick the door with the money, then you should definitely stick with #1.

\n\n

You shouldn’t just condition on #2 being empty, but on this fact plus the fact of the host choosing to open door #2. Many people are confused by the standard Monty Hall problem because they update only on #2 being empty, in which case #1 and #3 have equal probabilities of containing the money. This is why Bayesians are commanded to condition on all of their knowledge, on pain of paradox.
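A quick Monte Carlo sketch (mine, with invented function names) of those three host algorithms, conditioning in each case on the host having opened door #2 and found it empty:

```python
import random

def trial(host):
    prize = random.randrange(3)       # doors 0, 1, 2 stand for doors #1, #2, #3
    opened = host(prize)              # you always picked door #1 (index 0)
    if opened != 1 or prize == 1:     # keep only trials where door #2 was opened and was empty
        return None
    return prize == 0                 # did sticking with door #1 win?

def host_opens_empty_unpicked(prize):
    """Always opens an empty door you didn't pick (at random if both qualify)."""
    return random.choice([d for d in (1, 2) if d != prize])

def host_always_opens_2(prize):
    """Always opens door #2, whatever is behind it."""
    return 1

def host_only_if_you_hold_prize(prize):
    """Opens a door only when your door #1 hides the money."""
    return 1 if prize == 0 else None

def stick_win_rate(host, n=200_000):
    kept = [r for r in (trial(host) for _ in range(n)) if r is not None]
    return sum(kept) / len(kept)

print(stick_win_rate(host_opens_empty_unpicked))    # ~1/3: you should switch to door #3
print(stick_win_rate(host_always_opens_2))          # ~1/2: switching doesn't matter
print(stick_win_rate(host_only_if_you_hold_prize))  # ~1.0: definitely stick with #1
```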

\n\n

When someone says, “The 4th coinflip came up heads,” we are not conditioning on the 4th coinflip having come up heads—we are not taking the subset of all possible worlds where the 4th coinflip came up heads—but rather are conditioning on the subset of all possible worlds where a speaker following some particular algorithm said, “The 4th coinflip came up heads.” The spoken sentence is not the fact itself; don’t be led astray by the mere meanings of words.

\n\n

Most legal processes work on the theory that every case has exactly two opposed sides and that it is easier to find two biased humans than one unbiased one. Between the prosecution and the defense, someone has a motive to present any given piece of evidence, so the court will see all the evidence; that is the theory. If there are two clever arguers in the box dilemma, it is not quite as good as one curious inquirer, but it is almost as good. But that is with two boxes. Reality often has many-sided problems, and deep problems, and nonobvious answers, which are not readily found by Blues and Greens shouting at each other.

\n\n

Beware lest you abuse the notion of evidence-filtering as a Fully General Counterargument to exclude all evidence you don’t like: “That argument was filtered, therefore I can ignore it.” If you’re ticked off by a contrary argument, then you are familiar with the case, and care enough to take sides. You probably already know your own side’s strongest arguments. You have no reason to infer, from a contrary argument, the existence of new favorable signs and portents which you have not yet seen. So you are left with the uncomfortable facts themselves; a blue stamp on box B is still evidence.

\n\n

But if you are hearing an argument for the first time, and you are only hearing one side of the argument, then indeed you should beware! In a way, no one can really trust the theory of natural selection until after they have listened to creationists for five minutes; and then they know it’s solid.

\n\n
\n \n\n

1“Bits” in this context are a measure of how much evidence something provides—they’re the logarithms of probabilities, base 1/2.

\n\n

Suppose a question has exactly two possible (mutually exclusive) answers, and you initially assign 50% probability to each answer. If I then tell you that the first answer is correct (and you have complete faith in my claim), then you have acquired one bit of evidence. If there are four equally likely options, and I tell you the first one is correct, then I have given you two bits; if there are eight and I tell you the right one, then I have given you three bits; and so on. This is discussed further in “How Much Evidence Does It Take?” (in Map and Territory).
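As a small check of that arithmetic, in Python (an illustrative sketch, not part of the text):

```python
import math

# A "bit" is a factor-of-two likelihood ratio. Each observed H favors the
# H-biased coin by (2/3) / (1/3) = 2, i.e. one bit.
print(math.log2((2/3) / (1/3)))      # 1.0 bit per heads observed

# Learning the right answer out of N equally likely options conveys log2(N) bits.
for n in (2, 4, 8):
    print(n, math.log2(n), "bits")   # 1, 2, 3 bits
```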

\n
\n\n" } }, { "_id": "wKryhfPqYA682ZPGZ", "title": "Is outdoing monkeys while imagining free will the only way you can feel like a man?", "pageUrl": "https://www.lesswrong.com/posts/wKryhfPqYA682ZPGZ/is-outdoing-monkeys-while-imagining-free-will-the-only-way", "postedAt": "2007-09-29T00:34:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "wKryhfPqYA682ZPGZ", "html": "

Why does it bother people that we might be pretty similar to other monkeys (i.e. with better vocabularies, worse feet, etc., but no glorious fundamental difference)? Similarly, what’s so scary about everything being mechanistic, free will not existing, and everything being meaningless apart from the values that we make up?

\n

If we are fundamentally similar to other animals it has no effect whatsoever on the experience of humanity that we cherish. It has always been that way, and works fine. We know what being human is like, so if monkeys are similar that should only change our ideas of what being a monkey is like. What being a monkey is like is not usually considered a pressing issue in society, so why care? Why does our societal self-worth rest on being heaps better than monkeys?

\n

Similarly with the other possibilities listed above, if they are true, obviously they always have been and everything we enjoy is possible in their presence. It isn’t like as soon as you stop believing in free will you will turn into a robot. If it’s the case, you already are one, and everything you’ve ever loved and dreamed of has arisen from that. It’s not some strange new reality.

\n

Perhaps, practically, these possibilities seem to imply different probabilities for the future than other beliefs do? E.g. the universe being purely mechanistic might make Heaven seem unlikely. But you could still have a mechanistic God and Heaven and soul (it’s not nearly as impossible as non-mechanistic ones). It’s not the end of the world.

\n

Or is it actually hard to hold one’s own values, for instance, without the delusion that they are somehow fundamentally valuable?


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "34XxbRFe54FycoCDw", "title": "The Bottom Line", "pageUrl": "https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line", "postedAt": "2007-09-28T17:47:21.000Z", "baseScore": 212, "voteCount": 166, "commentCount": 17, "url": null, "contents": { "documentId": "34XxbRFe54FycoCDw", "html": "\n\n\n\n \n\n \n\n

There are two sealed boxes up for auction, box A and box B. One and only one of these boxes contains a valuable diamond. There are all manner of signs and portents indicating whether a box contains a diamond; but I have no sign which I know to be perfectly reliable. There is a blue stamp on one box, for example, and I know that boxes which contain diamonds are more likely than empty boxes to show a blue stamp. Or one box has a shiny surface, and I have a suspicion—I am not sure—that no diamond-containing box is ever shiny.

\n\n

Now suppose there is a clever arguer, holding a sheet of paper, and they say to the owners of box A and box B: “Bid for my services, and whoever wins my services, I shall argue that their box contains the diamond, so that the box will receive a higher price.” So the box-owners bid, and box B’s owner bids higher, winning the services of the clever arguer.

\n\n

The clever arguer begins to organize their thoughts. First, they write, “And therefore, box B contains the diamond!” at the bottom of their sheet of paper. Then, at the top of the paper, the clever arguer writes, “Box B shows a blue stamp,” and beneath it, “Box A is shiny,” and then, “Box B is lighter than box A,” and so on through many signs and portents; yet the clever arguer neglects all those signs which might argue in favor of box A. And then the clever arguer comes to me and recites from their sheet of paper: “Box B shows a blue stamp, and box A is shiny,” and so on, until they reach: “and therefore, box B contains the diamond.”

\n\n

But consider: At the moment when the clever arguer wrote down their conclusion, at the moment they put ink on their sheet of paper, the evidential entanglement of that physical ink with the physical boxes became fixed.

\n\n

It may help to visualize a collection of worlds—Everett branches or Tegmark duplicates—within which there is some objective frequency at which box A or box B contains a diamond.1

\n\n

The ink on paper is formed into odd shapes and curves, which look like this text: “And therefore, box B contains the diamond.” If you happened to be a literate English speaker, you might become confused, and think that this shaped ink somehow meant that box B contained the diamond. Subjects instructed to say the color of printed pictures and shown the word Green in red ink often say “green” instead of “red.” It helps to be illiterate, so that you are not confused by the shape of the ink.

\n\n

To us, the true import of a thing is its entanglement with other things. Consider again the collection of worlds, Everett branches or Tegmark duplicates. At the moment when all clever arguers in all worlds put ink to the bottom line of their paper—let us suppose this is a single moment—it fixed the correlation of the ink with the boxes. The clever arguer writes in non-erasable pen; the ink will not change. The boxes will not change. Within the subset of worlds where the ink says “And therefore, box B contains the diamond,” there is already some fixed percentage of worlds where box A contains the diamond. This will not change regardless of what is written on the blank lines above.

\n\n

So the evidential entanglement of the ink is fixed, and I leave to you to decide what it might be. Perhaps box owners who believe a better case can be made for them are more liable to hire advertisers; perhaps box owners who fear their own deficiencies bid higher. If the box owners do not themselves understand the signs and portents, then the ink will be completely unentangled with the boxes’ contents, though it may tell you something about the owners’ finances and bidding habits.

\n\n

Now suppose another person present is genuinely curious, and they first write down all the distinguishing signs of both boxes on a sheet of paper, and then apply their knowledge and the laws of probability and write down at the bottom: “Therefore, I estimate an 85% probability that box B contains the diamond.” Of what is this handwriting evidence? Examining the chain of cause and effect leading to this physical ink on physical paper, I find that the chain of causality wends its way through all the signs and portents of the boxes, and is dependent on these signs; for in worlds with different portents, a different probability is written at the bottom.

\n\n

So the handwriting of the curious inquirer is entangled with the signs and portents and the contents of the boxes, whereas the handwriting of the clever arguer is evidence only of which owner paid the higher bid. There is a great difference in the indications of ink, though one who foolishly read aloud the ink-shapes might think the English words sounded similar.

\n\n

Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts. If your car makes metallic squealing noises when you brake, and you aren’t willing to face up to the financial cost of getting your brakes replaced, you can decide to look for reasons why your car might not need fixing. But the actual percentage of you that survive in Everett branches or Tegmark worlds—which we will take to describe your effectiveness as a rationalist—is determined by the algorithm that decided which conclusion you would seek arguments for. In this case, the real algorithm is “Never repair anything expensive.” If this is a good algorithm, fine; if this is a bad algorithm, oh well. The arguments you write afterward, above the bottom line, will not change anything either way.

\n\n

This is intended as a caution for your own thinking, not a Fully General Counterargument against conclusions you don’t like. For it is indeed a clever argument to say “My opponent is a clever arguer,” if you are paying yourself to retain whatever beliefs you had at the start. The world’s cleverest arguer may point out that the Sun is shining, and yet it is still probably daytime.

\n\n
\n \n\n

1Max Tegmark, “Parallel Universes,” in Science and Ultimate Reality: Quantum Theory, Cosmology, and Complexity, ed. John D. Barrow, Paul C. W. Davies, and Charles L. Harper Jr. (New York: Cambridge University Press, 2004), 459–491, http://arxiv.org/abs/astro-ph/0302131.

\n
\n\n" } }, { "_id": "6FmqiAgS8h4EJm86s", "title": "How to Convince Me That 2 + 2 = 3", "pageUrl": "https://www.lesswrong.com/posts/6FmqiAgS8h4EJm86s/how-to-convince-me-that-2-2-3", "postedAt": "2007-09-27T23:00:21.000Z", "baseScore": 160, "voteCount": 150, "commentCount": 410, "url": null, "contents": { "documentId": "6FmqiAgS8h4EJm86s", "html": "\n\n\n\n \n\n \n\n

In “What is Evidence?” I wrote:1

\n\n
\n \n\n

This is why rationalists put such a heavy premium on the paradoxical-seeming claim that a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise. If your retina ended up in the same state regardless of what light entered it, you would be blind . . . Hence the phrase, “blind faith.” If what you believe doesn’t depend on what you see, you’ve been blinded as effectively as by poking out your eyeballs.

\n
\n\n

Cihan Baran replied:2

\n\n
\n \n\n

I can not conceive of a situation that would make 2 + 2 = 4 false. Perhaps for that reason, my belief in 2 + 2 = 4 is unconditional.

\n
\n\n

I admit, I cannot conceive of a “situation” that would make 2 + 2 = 4 false. (There are redefinitions, but those are not “situations,” and then you’re no longer talking about 2, 4, =, or +.) But that doesn’t make my belief unconditional. I find it quite easy to imagine a situation which would convince me that 2 + 2 = 3.

\n\n

Suppose I got up one morning, and took out two earplugs, and set them down next to two other earplugs on my nighttable, and noticed that there were now three earplugs, without any earplugs having appeared or disappeared—in contrast to my stored memory that 2 + 2 was supposed to equal 4. Moreover, when I visualized the process in my own mind, it seemed that making xx and xx come out to xxxx required an extra x to appear from nowhere, and was, moreover, inconsistent with other arithmetic I visualized, since subtracting xx from xxx left xx, but subtracting xx from xxxx left xxx. This would conflict with my stored memory that 3 - 2 = 1, but memory would be absurd in the face of physical and mental confirmation that xxx - xx = xx.

\n\n

I would also check a pocket calculator, Google, and perhaps my copy of 1984 where Winston writes that “Freedom is the freedom to say two plus two equals three.” All of these would naturally show that the rest of the world agreed with my current visualization, and disagreed with my memory, that 2 + 2 = 3.

\n\n

How could I possibly have ever been so deluded as to believe that 2 + 2 = 4? Two explanations would come to mind: First, a neurological fault (possibly caused by a sneeze) had made all the additive sums in my stored memory go up by one. Second, someone was messing with me, by hypnosis or by my being a computer simulation. In the second case, I would think it more likely that they had messed with my arithmetic recall than that 2 + 2 actually equalled 4. Neither of these plausible-sounding explanations would prevent me from noticing that I was very, very, very confused.3

\n\n

What would convince me that 2 + 2 = 3, in other words, is exactly the same kind of evidence that currently convinces me that 2 + 2 = 4: The evidential crossfire of physical observation, mental visualization, and social agreement.

\n\n

There was a time when I had no idea that 2 + 2 = 4. I did not arrive at this new belief by random processes—then there would have been no particular reason for my brain to end up storing “2 + 2 = 4” instead of “2 + 2 = 7.” The fact that my brain stores an answer surprisingly similar to what happens when I lay down two earplugs alongside two earplugs, calls forth an explanation of what entanglement produces this strange mirroring of mind and reality.

\n\n

There’s really only two possibilities, for a belief of fact—either the belief got there via a mind-reality entangling process, or not. If not, the belief can’t be correct except by coincidence. For beliefs with the slightest shred of internal complexity (requiring a computer program of more than 10 bits to simulate), the space of possibilities is large enough that coincidence vanishes.4

\n\n

Unconditional facts are not the same as unconditional beliefs. If entangled evidence convinces me that a fact is unconditional, this doesn’t mean I always believed in the fact without need of entangled evidence.

\n\n

I believe that 2 + 2 = 4, and I find it quite easy to conceive of a situation which would convince me that 2 + 2 = 3. Namely, the same sort of situation that currently convinces me that 2 + 2 = 4. Thus I do not fear that I am a victim of blind faith.5

\n\n
\n \n\n

1See Map and Territory.

\n\n

2Comment: http://lesswrong.com/lw/jl/what_is_evidence/f7h.

\n\n

3See “Your Strength as a Rationalist” in Map and Territory.

\n\n

4For more on belief formation and beliefs of fact, see “Feeling Rational” and “What Is Evidence?” in Map and Territory. For more on belief complexity, see “Occam’s Razor” in the same volume.

\n\n

5If there are any Christians reading this who know Bayes’s Theorem, might I inquire of you what situation would convince you of the truth of Islam? Presumably it would be the same sort of situation causally responsible for producing your current belief in Christianity: We would push you screaming out of the uterus of a Muslim woman, and have you raised by Muslim parents who continually told you that it is good to believe unconditionally in Islam.

\n\n

Or is there more to it than that? If so, what situation would convince you of Islam, or at least, non-Christianity? And how confident are you that the general kinds of evidence and reasoning you appeal to would have been enough to dissuade you of your religion if you had been raised a Muslim?

\n
\n\n" } }, { "_id": "QtyKq4BDyuJ3tysoK", "title": "9/26 is Petrov Day", "pageUrl": "https://www.lesswrong.com/posts/QtyKq4BDyuJ3tysoK/9-26-is-petrov-day", "postedAt": "2007-09-26T16:14:07.000Z", "baseScore": 283, "voteCount": 198, "commentCount": 64, "url": null, "contents": { "documentId": "QtyKq4BDyuJ3tysoK", "html": "

Today is September 26th, Petrov Day, celebrated to honor the deed of Stanislav Yevgrafovich Petrov on September 26th, 1983.  Wherever you are, whatever you're doing, take a minute to not destroy the world.

\n

The story begins on September 1st, 1983, when Soviet jet interceptors shot down a Korean Air Lines civilian airliner after the aircraft crossed into Soviet airspace and then, for reasons still unknown, failed to respond to radio hails.  269 passengers and crew died, including US Congressman Lawrence McDonald.  Ronald Reagan called it \"barbarism\", \"inhuman brutality\", \"a crime against humanity that must never be forgotten\".  Note that this was already a very, very poor time for US/USSR relations.  Andropov, the ailing Soviet leader, was half-convinced the US was planning a first strike.  The KGB sent a flash message to its operatives warning them to prepare for possible nuclear war.

\n

On September 26th, 1983, Lieutenant Colonel Stanislav Yevgrafovich Petrov was the officer on duty when the warning system reported a US missile launch.  Petrov kept calm, suspecting a computer error.

\n

Then the system reported another US missile launch.

\n

And another, and another, and another.

\n

\n

What had actually happened, investigators later determined, was sunlight on high-altitude clouds aligning with the satellite view on a US missile base.

\n

In the command post there were beeping signals, flashing lights, and officers screaming at people to remain calm.  According to several accounts I've read, there was a large flashing screen from the automated computer system saying simply \"START\" (presumably in Russian). Afterward, when investigators asked Petrov why he hadn't written everything down in the logbook, Petrov replied, \"Because I had a phone in one hand and the intercom in the other, and I don't have a third hand.\"

\n

The policy of the Soviet Union called for launch on warning.  The Soviet Union's land radar could not detect missiles over the horizon, and waiting for positive identification would limit the response time to minutes.  Petrov's report would be relayed to his military superiors, who would decide whether to start a nuclear war.

\n

Petrov decided that, all else being equal, he would prefer not to destroy the world.  He sent messages declaring the launch detection a false alarm, based solely on his personal belief that the US did not seem likely to start an attack using only five missiles.

\n

Petrov was first congratulated, then extensively interrogated, then reprimanded for failing to follow procedure.  He resigned in poor health from the military several months later.  According to Wikipedia, he is spending his retirement in relative poverty in the town of Fryazino, on a pension of $200/month.  In 2004, the Association of World Citizens gave Petrov a trophy and $1000.  There is also a movie scheduled for release in 2008, entitled The Red Button and the Man Who Saved the World.

\n

Maybe someday, the names of people who decide not to start nuclear wars will be as well known as the name of Britney Spears.  Looking forward to such a time, when humankind has grown a little wiser, let us celebrate, in this moment, Petrov Day.

" } }, { "_id": "f4txACqDWithRi7hs", "title": "Occam's Razor", "pageUrl": "https://www.lesswrong.com/posts/f4txACqDWithRi7hs/occam-s-razor", "postedAt": "2007-09-26T06:36:48.000Z", "baseScore": 143, "voteCount": 115, "commentCount": 55, "url": null, "contents": { "documentId": "f4txACqDWithRi7hs", "html": "

The more complex an explanation is, the more evidence you need just to find it in belief-space. (In Traditional Rationality this is often phrased misleadingly, as “The more complex a proposition is, the more evidence is required to argue for it.”) How can we measure the complexity of an explanation? How can we determine how much evidence is required?

Occam’s Razor is often phrased as “The simplest explanation that fits the facts.” Robert Heinlein replied that the simplest explanation is “The lady down the street is a witch; she did it.”

One observes that the length of an English sentence is not a good way to measure “complexity.” And “fitting” the facts by merely failing to prohibit them is insufficient.

Why, exactly, is the length of an English sentence a poor measure of complexity? Because when you speak a sentence aloud, you are using labels for concepts that the listener shares—the receiver has already stored the complexity in them. Suppose we abbreviated Heinlein’s whole sentence as “Tldtsiawsdi!” so that the entire explanation can be conveyed in one word; better yet, we’ll give it a short arbitrary label like “Fnord!” Does this reduce the complexity? No, because you have to tell the listener in advance that “Tldtsiawsdi!” stands for “The lady down the street is a witch; she did it.” “Witch,” itself, is a label for some extraordinary assertions—just because we all know what it means doesn’t mean the concept is simple.

An enormous bolt of electricity comes out of the sky and hits something, and the Norse tribesfolk say, “Maybe a really powerful agent was angry and threw a lightning bolt.” The human brain is the most complex artifact in the known universe. If anger seems simple, it’s because we don’t see all the neural circuitry that’s implementing the emotion. (Imagine trying to explain why Saturday Night Live is funny, to an alien species with no sense of humor. But don’t feel superior; you yourself have no sense of fnord.) The complexity of anger, and indeed the complexity of intelligence, was glossed over by the humans who hypothesized Thor the thunder-agent.

To a human, Maxwell’s equations take much longer to explain than Thor. Humans don’t have a built-in vocabulary for calculus the way we have a built-in vocabulary for anger. You’ve got to explain your language, and the language behind the language, and the very concept of mathematics, before you can start on electricity.

And yet it seems that there should be some sense in which Maxwell’s equations are simpler than a human brain, or Thor the thunder-agent.

There is. It’s enormously easier (as it turns out) to write a computer program that simulates Maxwell’s equations, compared to a computer program that simulates an intelligent emotional mind like Thor.

The formalism of Solomonoff induction measures the “complexity of a description” by the length of the shortest computer program which produces that description as an output. To talk about the “shortest computer program” that does something, you need to specify a space of computer programs, which requires a language and interpreter. Solomonoff induction uses Turing machines, or rather, bitstrings that specify Turing machines. What if you don’t like Turing machines? Then there’s only a constant complexity penalty to design your own universal Turing machine that interprets whatever code you give it in whatever programming language you like. Different inductive formalisms are penalized by a worst-case constant factor relative to each other, corresponding to the size of a universal interpreter for that formalism.

In the better (in my humble opinion) versions of Solomonoff induction, the computer program does not produce a deterministic prediction, but assigns probabilities to strings. For example, we could write a program to explain a fair coin by writing a program that assigns equal probabilities to all 2^N strings of length N. This is Solomonoff induction’s approach to fitting the observed data. The higher the probability a program assigns to the observed data, the better that program fits the data. And probabilities must sum to 1, so for a program to better “fit” one possibility, it must steal probability mass from some other possibility which will then “fit” much more poorly. There is no superfair coin that assigns 100% probability to heads and 100% probability to tails.

How do we trade off the fit to the data, against the complexity of the program? If you ignore complexity penalties, and think only about fit, then you will always prefer programs that claim to deterministically predict the data, assign it 100% probability. If the coin shows HTTHHT, then the program that claims that the coin was fixed to show HTTHHT fits the observed data 64 times better than the program which claims the coin is fair. Conversely, if you ignore fit, and consider only complexity, then the “fair coin” hypothesis will always seem simpler than any other hypothesis. Even if the coin turns up HTHHTHHHTHHHHTHHHHHT  . . .

Indeed, the fair coin is simpler and it fits this data exactly as well as it fits any other string of 20 coinflips—no more, no less—but we see another hypothesis, seeming not too complicated, that fits the data much better.

If you let a program store one more binary bit of information, it will be able to cut down a space of possibilities by half, and hence assign twice as much probability to all the points in the remaining space. This suggests that one bit of program complexity should cost at least a “factor of two gain” in the fit. If you try to design a computer program that explicitly stores an outcome like HTTHHT, the six bits that you lose in complexity must destroy all plausibility gained by a 64-fold improvement in fit. Otherwise, you will sooner or later decide that all fair coins are fixed.
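A back-of-the-envelope version of that trade-off, in Python (the variable names and the six-bit storage cost are my illustrative assumptions):

```python
import math

data = "HTTHHT"                      # 6 observed coinflips

# "Fair coin" program: assigns probability (1/2)^6 to this exact sequence.
fair_fit_bits = -math.log2(0.5 ** len(data))      # 6 bits of misfit

# "Fixed sequence" program: assigns probability 1 to exactly this data,
# but must store the 6 outcomes, costing ~6 extra bits of program length.
fixed_fit_bits = -math.log2(1.0)                  # 0 bits of misfit
fixed_complexity_penalty = len(data)              # ~6 bits of extra code

print(fair_fit_bits)                              # 6.0
print(fixed_fit_bits + fixed_complexity_penalty)  # 6.0 -- no net advantage from hard-coding
```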

Unless your program is being smart, and compressing the data, it should do no good just to move one bit from the data into the program description.

The way Solomonoff induction works to predict sequences is that you sum up over all allowed computer programs—if every program is allowed, Solomonoff induction becomes uncomputable—with each program having a prior probability of 1/2 to the power of its code length in bits, and each program is further weighted by its fit to all data observed so far. This gives you a weighted mixture of experts that can predict future bits.

The Minimum Message Length formalism is nearly equivalent to Solomonoff induction. You send a string describing a code, and then you send a string describing the data in that code. Whichever explanation leads to the shortest total message is the best. If you think of the set of allowable codes as a space of computer programs, and the code description language as a universal machine, then Minimum Message Length is nearly equivalent to Solomonoff induction.1

This lets us see clearly the problem with using “The lady down the street is a witch; she did it” to explain the pattern in the sequence 0101010101. If you’re sending a message to a friend, trying to describe the sequence you observed, you would have to say: “The lady down the street is a witch; she made the sequence come out 0101010101.” Your accusation of witchcraft wouldn’t let you shorten the rest of the message; you would still have to describe, in full detail, the data which her witchery caused.

Witchcraft may fit our observations in the sense of qualitatively permitting them; but this is because witchcraft permits everything, like saying “Phlogiston!” So, even after you say “witch,” you still have to describe all the observed data in full detail. You have not compressed the total length of the message describing your observations by transmitting the message about witchcraft; you have simply added a useless prologue, increasing the total length.
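A crude way to see this, sketched in Python with character counts standing in for message bits (the framing and strings are mine, not the text's):

```python
# The "witch" hypothesis is all prologue: the data still has to be spelled out in
# full, so the total message only gets longer. A hypothesis that captures the
# regularity genuinely shortens the message.
data = "01" * 500                    # the observed sequence, spelled out in full

witch_message   = "The lady down the street is a witch; she made the sequence come out " + data
pattern_message = "repeat '01' 500 times"

print(len(data))             # 1000 characters of raw data
print(len(witch_message))    # ~1070: the prologue only adds length
print(len(pattern_message))  # ~20: a hypothesis that actually compresses
```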

The real sneakiness was concealed in the word “it” of “A witch did it.” A witch did what?

Of course, thanks to hindsight bias and anchoring and fake explanations and fake causality and positive bias and motivated cognition, it may seem all too obvious that if a woman is a witch, of course she would make the coin come up 0101010101. But I’ll get to that soon enough. . .


1 Nearly, because it chooses the shortest program, rather than summing up over all programs.

" } }, { "_id": "MwQRucYo6BZZwjKE7", "title": "Einstein's Arrogance", "pageUrl": "https://www.lesswrong.com/posts/MwQRucYo6BZZwjKE7/einstein-s-arrogance", "postedAt": "2007-09-25T01:29:57.000Z", "baseScore": 194, "voteCount": 165, "commentCount": 90, "url": null, "contents": { "documentId": "MwQRucYo6BZZwjKE7", "html": "\n\n\n\n \n\n \n\n

In 1919, Sir Arthur Eddington led expeditions to Brazil and to the island of Principe, aiming to observe solar eclipses and thereby test an experimental prediction of Einstein’s novel theory of General Relativity. A journalist asked Einstein what he would do if Eddington’s observations failed to match his theory. Einstein famously replied: “Then I would feel sorry for the good Lord. The theory is correct.”

\n\n

It seems like a rather foolhardy statement, defying the trope of Traditional Rationality that experiment above all is sovereign. Einstein seems possessed of an arrogance so great that he would refuse to bend his neck and submit to Nature’s answer, as scientists must do. Who can know that the theory is correct, in advance of experimental test?

\n\n

Of course, Einstein did turn out to be right. I try to avoid criticizing people when they are right. If they genuinely deserve criticism, I will not need to wait long for an occasion where they are wrong.

\n\n

And Einstein may not have been quite so foolhardy as he sounded . . .

\n\n

To assign more than 50% probability to the correct candidate from a pool of 100,000,000 possible hypotheses, you need at least 27 bits of evidence (or thereabouts). You cannot expect to find the correct candidate without tests that are this strong, because lesser tests will yield more than one candidate that passes all the tests. If you try to apply a test that only has a million-to-one chance of a false positive (~ 20 bits), you’ll end up with a hundred candidates. Just finding the right answer, within a large space of possibilities, requires a large amount of evidence.
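Checking those figures with a couple of lines of Python (my arithmetic, not the text's):

```python
import math

# To put >50% of your probability on one candidate out of 100,000,000, you need
# a likelihood ratio of roughly 10^8 in its favor.
print(math.log2(100_000_000))    # ~26.6 -> "at least 27 bits of evidence"

# A 20-bit (million-to-one) test still leaves about a hundred candidates standing.
print(100_000_000 / 2**20)       # ~95 passing candidates
```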

\n\n

Traditional Rationality emphasizes justification: “If you want to convince me of X, you’ve got to present me with Y amount of evidence.” I myself often slip into this phrasing, whenever I say something like, “To justify believing in this proposition, at more than 99% probability, requires 34 bits of evidence.” Or, “In order to assign more than 50% probability to your hypothesis, you need 27 bits of evidence.” The Traditional phrasing implies that you start out with a hunch, or some private line of reasoning that leads you to a suggested hypothesis, and then you have to gather “evidence” to confirm it—to convince the scientific community, or justify saying that you believe in your hunch.

\n\n

But from a Bayesian perspective, you need an amount of evidence roughly equivalent to the complexity of the hypothesis just to locate the hypothesis in theory-space. It’s not a question of justifying anything to anyone. If there’s a hundred million alternatives, you need at least 27 bits of evidence just to focus your attention uniquely on the correct answer.

\n\n

This is true even if you call your guess a “hunch” or “intuition.” Hunchings and intuitings are real processes in a real brain. If your brain doesn’t have at least 10 bits of genuinely entangled valid Bayesian evidence to chew on, your brain cannot single out a correct 10-bit hypothesis for your attention—consciously, subconsciously, whatever. Subconscious processes can’t find one out of a million targets using only 19 bits of entanglement any more than conscious processes can. Hunches can be mysterious to the huncher, but they can’t violate the laws of physics.

\n\n

You see where this is going: At the time of first formulating the hypothesis—the very first time the equations popped into his head—Einstein must have had, already in his possession, sufficient observational evidence to single out the complex equations of General Relativity for his unique attention. Or he couldn’t have gotten them right.

\n\n

Now, how likely is it that Einstein would have exactly enough observational evidence to raise General Relativity to the level of his attention, but only justify assigning it a 55% probability? Suppose General Relativity is a 29.3-bit hypothesis. How likely is it that Einstein would stumble across exactly 29.5 bits of evidence in the course of his physics reading?

\n\n

Not likely! If Einstein had enough observational evidence to single out the correct equations of General Relativity in the first place, then he probably had enough evidence to be damn sure that General Relativity was true.

\n\n

In fact, since the human brain is not a perfectly efficient processor of information, Einstein probably had overwhelmingly more evidence than would, in principle, be required for a perfect Bayesian to assign massive confidence to General Relativity.

\n\n

“Then I would feel sorry for the good Lord; the theory is correct.” It doesn’t sound nearly as appalling when you look at it from that perspective. And remember that General Relativity was correct, from all that vast space of possibilities.

\n\n" } }, { "_id": "nj8JKFoLSMEmD3RGp", "title": "How Much Evidence Does It Take?", "pageUrl": "https://www.lesswrong.com/posts/nj8JKFoLSMEmD3RGp/how-much-evidence-does-it-take", "postedAt": "2007-09-24T04:06:01.000Z", "baseScore": 178, "voteCount": 148, "commentCount": 33, "url": null, "contents": { "documentId": "nj8JKFoLSMEmD3RGp", "html": "\n\n\n\n \n\n \n\n

Previously, I defined evidence as “an event entangled, by links of cause and effect, with whatever you want to know about,” and entangled as “happening differently for different possible states of the target.” So how much entanglement—how much rational evidence—is required to support a belief?

\n\n

Let’s start with a question simple enough to be mathematical: How hard would you have to entangle yourself with the lottery in order to win? Suppose there are seventy balls, drawn without replacement, and six numbers to match for the win. Then there are 131,115,985 possible winning combinations, hence a randomly selected ticket would have a 1/131,115,985 probability of winning (0.0000008%). To win the lottery, you would need evidence selective enough to visibly favor one combination over 131,115,984 alternatives.

\n\n

Suppose there are some tests you can perform which discriminate, probabilistically, between winning and losing lottery numbers. For example, you can punch a combination into a little black box that always beeps if the combination is the winner, and has only a 1/4 (25%) chance of beeping if the combination is wrong. In Bayesian terms, we would say the likelihood ratio is 4 to 1. This means that the box is 4 times as likely to beep when we punch in a correct combination, compared to how likely it is to beep for an incorrect combination.

\n\n

There are still a whole lot of possible combinations. If you punch in 20 incorrect combinations, the box will beep on 5 of them by sheer chance (on average). If you punch in all 131,115,985 possible combinations, then while the box is certain to beep for the one winning combination, it will also beep for 32,778,996 losing combinations (on average).

\n\n

So this box doesn’t let you win the lottery, but it’s better than nothing. If you used the box, your odds of winning would go from 1 in 131,115,985 to 1 in 32,778,997. You’ve made some progress toward finding your target, the truth, within the huge space of possibilities.

\n\n

Suppose you can use another black box to test combinations twice, independently. Both boxes are certain to beep for the winning ticket. But the chance of a box beeping for a losing combination is 1/4 independently for each box; hence the chance of both boxes beeping for a losing combination is 1/16. We can say that the cumulative evidence, of two independent tests, has a likelihood ratio of 16:1. The number of losing lottery tickets that pass both tests will be (on average) 8,194,749.

\n\n

Since there are 131,115,985 possible lottery tickets, you might guess that you need evidence whose strength is around 131,115,985 to 1—an event, or series of events, which is 131,115,985 times more likely to happen for a winning combination than a losing combination. Actually, this amount of evidence would only be enough to give you an even chance of winning the lottery. Why? Because if you apply a filter of that power to 131 million losing tickets, there will be, on average, one losing ticket that passes the filter. The winning ticket will also pass the filter. So you’ll be left with two tickets that passed the filter, only one of them a winner. Fifty percent odds of winning, if you can only buy one ticket.

\n\n

A better way of viewing the problem: In the beginning, there is 1 winning ticket and 131,115,984 losing tickets, so your odds of winning are 1:131,115,984. If you use a single box, the odds of it beeping are 1 for a winning ticket and 0.25 for a losing ticket. So we multiply 1:131,115,984 by 1:0.25 and get 1:32,778,996. Adding another box of evidence multiplies the odds by 1:0.25 again, so now the odds are 1 winning ticket to 8,194,749 losing tickets.
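The same bookkeeping, done with exact fractions in Python (an illustrative sketch of mine):

```python
from math import comb
from fractions import Fraction

losing = comb(70, 6) - 1                  # 131,115,984 losing combinations
odds_win_to_lose = Fraction(1, losing)    # 1 : 131,115,984

beep_ratio = 4                            # a beep is 4x as likely for the winner (1 vs 1/4)
print(odds_win_to_lose * beep_ratio)      # 1/32778996 -> 1 : 32,778,996 after one box
print(odds_win_to_lose * beep_ratio**2)   # 1/8194749  -> 1 : 8,194,749 after two boxes
```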

\n\n

It is convenient to measure evidence in bits—not like bits on a hard drive, but mathematician’s bits, which are conceptually different. Mathematician’s bits are the logarithms, base 1/2, of probabilities. For example, if there are four possible outcomes A, B, C, and D, whose probabilities are 50%, 25%, 12.5%, and 12.5%, and I tell you the outcome was “D,” then I have transmitted three bits of information to you, because I informed you of an outcome whose probability was 1/8.

\n\n

It so happens that 131,115,984 is slightly less than 2 to the 27th power. So 14 boxes or 28 bits of evidence—an event 268,435,456:1 times more likely to happen if the ticket-hypothesis is true than if it is false—would shift the odds from 1:131,115,984 to 268,435,456:131,115,984, which reduces to 2:1. Odds of 2 to 1 mean two chances to win for each chance to lose, so the probability of winning with 28 bits of evidence is 2/3. Adding another box, another 2 bits of evidence, would take the odds to 8:1. Adding yet another two boxes would take the chance of winning to 128:1.

\n\n

So if you want to license a strong belief that you will win the lottery—arbitrarily defined as less than a 1% probability of being wrong—34 bits of evidence about the winning combination should do the trick.
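And a small Python check of the 28-bit and 34-bit figures (my sketch; win_probability is an invented helper name):

```python
from math import comb

losing = comb(70, 6) - 1                 # 131,115,984 losing combinations

def win_probability(bits):
    """Probability of holding the winner after `bits` bits of evidence,
    i.e. a likelihood ratio of 2**bits in the winner's favor."""
    odds = 2**bits / losing
    return odds / (1 + odds)

print(win_probability(28))   # ~0.67  (odds of roughly 2:1)
print(win_probability(30))   # ~0.89  (roughly 8:1)
print(win_probability(34))   # ~0.992 (less than a 1% chance of being wrong)
```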

\n\n

In general, the rules for weighing “how much evidence it takes” follow a similar pattern: The larger the space of possibilities in which the hypothesis lies, or the more unlikely the hypothesis seems a priori compared to its neighbors, or the more confident you wish to be, the more evidence you need.

\n\n

You cannot defy the rules; you cannot form accurate beliefs based on inadequate evidence. Let’s say you’ve got 10 boxes lined up in a row, and you start punching combinations into the boxes. You cannot stop on the first combination that gets beeps from all 10 boxes, saying, “But the odds of that happening for a losing combination are a million to one! I’ll just ignore those ivory-tower Bayesian rules and stop here.” On average, 131 losing tickets will pass such a test for every winner. Considering the space of possibilities and the prior improbability, you jumped to a too-strong conclusion based on insufficient evidence. That’s not a pointless bureaucratic regulation; it’s math.

\n\n

Of course, you can still believe based on inadequate evidence, if that is your whim; but you will not be able to believe accurately. It is like trying to drive your car without any fuel, because you don’t believe in the fuddy-duddy concept that it ought to take fuel to go places. Wouldn’t it be so much more fun, and so much less expensive, if we just decided to repeal the law that cars need fuel?

\n\n

Well, you can try. You can even shut your eyes and pretend the car is moving. But really arriving at accurate beliefs requires evidence-fuel, and the further you want to go, the more fuel you need.

\n\n" } }, { "_id": "46qnWRSR7L2eyNbMA", "title": "The Lens That Sees Its Flaws", "pageUrl": "https://www.lesswrong.com/posts/46qnWRSR7L2eyNbMA/the-lens-that-sees-its-flaws", "postedAt": "2007-09-23T00:10:41.000Z", "baseScore": 429, "voteCount": 436, "commentCount": 49, "url": null, "contents": { "documentId": "46qnWRSR7L2eyNbMA", "html": "

Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace; and so you believe that your shoelaces are untied.

Here is the secret of deliberate rationality—this whole process is not magic, and you can understand it. You can understand how you see your shoelaces. You can think about which sort of thinking processes will create beliefs which mirror reality, and which thinking processes will not.

Mice can see, but they can’t understand seeing. You can understand seeing, and because of that, you can do things that mice cannot do. Take a moment to marvel at this, for it is indeed marvelous.

Mice see, but they don’t know they have visual cortexes, so they can’t correct for optical illusions. A mouse lives in a mental world that includes cats, holes, cheese and mousetraps—but not mouse brains. Their camera does not take pictures of its own lens. But we, as humans, can look at a seemingly bizarre image, and realize that part of what we’re seeing is the lens itself. You don’t always have to believe your own eyes, but you have to realize that you have eyes—you must have distinct mental buckets for the map and the territory, for the senses and reality. Lest you think this a trivial ability, remember how rare it is in the animal kingdom.

The whole idea of Science is, simply, reflective reasoning about a more reliable process for making the contents of your mind mirror the contents of the world. It is the sort of thing mice would never invent. Pondering this business of “performing replicable experiments to falsify theories,” we can see why it works. Science is not a separate magisterium, far away from real life and the understanding of ordinary mortals. Science is not something that only applies to the inside of laboratories. Science, itself, is an understandable process-in-the-world that correlates brains with reality.

Science makes sense, when you think about it. But mice can’t think about thinking, which is why they don’t have Science. One should not overlook the wonder of this—or the potential power it bestows on us as individuals, not just scientific societies.

Admittedly, understanding the engine of thought may be a little more complicated than understanding a steam engine—but it is not a fundamentally different task.

Once upon a time, I went to EFNet’s #philosophy chatroom to ask, “Do you believe a nuclear war will occur in the next 20 years? If no, why not?” One person who answered the question said he didn’t expect a nuclear war for 100 years, because “All of the players involved in decisions regarding nuclear war are not interested right now.” “But why extend that out for 100 years?” I asked. “Pure hope,” was his reply.

Reflecting on this whole thought process, we can see why the thought of nuclear war makes the person unhappy, and we can see how his brain therefore rejects the belief. But if you imagine a billion worlds—Everett branches, or Tegmark duplicates1—this thought process will not systematically correlate optimists to branches in which no nuclear war occurs.2

To ask which beliefs make you happy is to turn inward, not outward—it tells you something about yourself, but it is not evidence entangled with the environment. I have nothing against happiness, but it should follow from your picture of the world, rather than tampering with the mental paintbrushes.

If you can see this—if you can see that hope is shifting your first-order thoughts by too large a degree—if you can understand your mind as a mapping engine that has flaws—then you can apply a reflective correction. The brain is a flawed lens through which to see reality. This is true of both mouse brains and human brains. But a human brain is a flawed lens that can understand its own flaws—its systematic errors, its biases—and apply second-order corrections to them. This, in practice, makes the lens far more powerful. Not perfect, but far more powerful.


1 Max Tegmark, “Parallel Universes,” in Science and Ultimate Reality: Quantum Theory, Cosmology, and Complexity, ed. John D. Barrow, Paul C. W. Davies, and Charles L. Harper Jr. (New York: Cambridge University Press, 2004), 459–491, http://arxiv.org/abs/astro-ph/0302131.

2 Some clever fellow is bound to say, “Ah, but since I have hope, I'll work a little harder at my job, pump up the global economy, and thus help to prevent countries from sliding into the angry and hopeless state where nuclear war is a possibility. So the two events are related after all.” At this point, we have to drag in Bayes’s Theorem and measure the relationship quantitatively. Your optimistic nature cannot have that large an effect on the world; it cannot, of itself, decrease the probability of nuclear war by 20%, or however much your optimistic nature shifted your beliefs. Shifting your beliefs by a large amount, due to an event that only slightly increases your chance of being right, will still mess up your mapping.

" } }, { "_id": "6s3xABaXKPdFwA3FS", "title": "What is Evidence?", "pageUrl": "https://www.lesswrong.com/posts/6s3xABaXKPdFwA3FS/what-is-evidence", "postedAt": "2007-09-22T06:43:31.000Z", "baseScore": 208, "voteCount": 179, "commentCount": 62, "url": null, "contents": { "documentId": "6s3xABaXKPdFwA3FS", "html": "

The sentence “snow is white” is true if and only if snow is white.

—Alfred Tarski

 

To say of what is, that it is, or of what is not, that it is not, is true.

—Aristotle, Metaphysics IV

Walking along the street, your shoelaces come untied. Shortly thereafter, for some odd reason, you start believing your shoelaces are untied. Light leaves the Sun and strikes your shoelaces and bounces off; some photons enter the pupils of your eyes and strike your retina; the energy of the photons triggers neural impulses; the neural impulses are transmitted to the visual-processing areas of the brain; and there the optical information is processed and reconstructed into a 3D model that is recognized as an untied shoelace. There is a sequence of events, a chain of cause and effect, within the world and your brain, by which you end up believing what you believe. The final outcome of the process is a state of mind which mirrors the state of your actual shoelaces.

What is evidence? It is an event entangled, by links of cause and effect, with whatever you want to know about. If the target of your inquiry is your shoelaces, for example, then the light entering your pupils is evidence entangled with your shoelaces. This should not be confused with the technical sense of “entanglement” used in physics—here I’m just talking about “entanglement” in the sense of two things that end up in correlated states because of the links of cause and effect between them.

Not every influence creates the kind of “entanglement” required for evidence. It’s no help to have a machine that beeps when you enter winning lottery numbers, if the machine also beeps when you enter losing lottery numbers. The light reflected from your shoes would not be useful evidence about your shoelaces, if the photons ended up in the same physical state whether your shoelaces were tied or untied.

To say it abstractly: For an event to be evidence about a target of inquiry, it has to happen differently in a way that’s entangled with the different possible states of the target. (To say it technically: There has to be Shannon mutual information between the evidential event and the target of inquiry, relative to your current state of uncertainty about both of them.)
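For the technically inclined, here is a small Python sketch of my own (the detector setup is invented for illustration) computing that mutual information for an entangled and an unentangled detector:

```python
import math

def mutual_information(joint):
    """joint[(x, y)] -> probability; returns I(X;Y) in bits."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Target: shoelaces tied or untied, 50/50. Detector A beeps only when they are
# untied; detector B beeps half the time regardless of the shoelaces.
entangled   = {("untied", "beep"): 0.5, ("tied", "silent"): 0.5}
unentangled = {("untied", "beep"): 0.25, ("untied", "silent"): 0.25,
               ("tied", "beep"): 0.25, ("tied", "silent"): 0.25}

print(mutual_information(entangled))    # 1.0 bit about the shoelaces
print(mutual_information(unentangled))  # 0.0 bits: no evidence at all
```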

Entanglement can be contagious when processed correctly, which is why you need eyes and a brain. If photons reflect off your shoelaces and hit a rock, the rock won’t change much. The rock won’t reflect the shoelaces in any helpful way; it won’t be detectably different depending on whether your shoelaces were tied or untied. This is why rocks are not useful witnesses in court. A photographic film will contract shoelace-entanglement from the incoming photons, so that the photo can itself act as evidence. If your eyes and brain work correctly, you will become tangled up with your own shoelaces.

This is why rationalists put such a heavy premium on the paradoxical-seeming claim that a belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise. If your retina ended up in the same state regardless of what light entered it, you would be blind. Some belief systems, in a rather obvious trick to reinforce themselves, say that certain beliefs are only really worthwhile if you believe them unconditionally—no matter what you see, no matter what you think. Your brain is supposed to end up in the same state regardless. Hence the phrase, “blind faith.” If what you believe doesn’t depend on what you see, you’ve been blinded as effectively as by poking out your eyeballs.

If your eyes and brain work correctly, your beliefs will end up entangled with the facts. Rational thought produces beliefs which are themselves evidence.

If your tongue speaks truly, your rational beliefs, which are themselves evidence, can act as evidence for someone else. Entanglement can be transmitted through chains of cause and effect—and if you speak, and another hears, that too is cause and effect. When you say “My shoelaces are untied” over a cellphone, you’re sharing your entanglement with your shoelaces with a friend.

Therefore rational beliefs are contagious, among honest folk who believe each other to be honest. And it’s why a claim that your beliefs are not contagious—that you believe for private reasons which are not transmissible—is so suspicious. If your beliefs are entangled with reality, they should be contagious among honest folk.

If your model of reality suggests that the outputs of your thought processes should not be contagious to others, then your model says that your beliefs are not themselves evidence, meaning they are not entangled with reality. You should apply a reflective correction, and stop believing.

Indeed, if you feel, on a gut level, what this all means, you will automatically stop believing. Because “my belief is not entangled with reality” means “my belief is not accurate.” As soon as you stop believing “ ‘snow is white’ is true,” you should (automatically!) stop believing “snow is white,” or something is very wrong.

So try to explain why the kind of thought processes you use systematically produce beliefs that mirror reality. Explain why you think you’re rational. Why you think that, using thought processes like the ones you use, minds will end up believing “snow is white” if and only if snow is white. If you don’t believe that the outputs of your thought processes are entangled with reality, why believe the outputs of your thought processes? It’s the same thing, or it should be.

" } }, { "_id": "Yq6aA4M3JKWaQepPJ", "title": "Burdensome Details", "pageUrl": "https://www.lesswrong.com/posts/Yq6aA4M3JKWaQepPJ/burdensome-details", "postedAt": "2007-09-20T23:46:06.000Z", "baseScore": 269, "voteCount": 248, "commentCount": 49, "url": null, "contents": { "documentId": "Yq6aA4M3JKWaQepPJ", "html": "

Merely corroborative detail, intended to give artistic verisimilitude to an otherwise bald and unconvincing narrative . . .

—Pooh-Bah, in Gilbert and Sullivan’s The Mikado

The conjunction fallacy is when humans assign a higher probability to a proposition of the form “A and B” than to one of the propositions “A” or “B” in isolation, even though it is a theorem that conjunctions are never likelier than their conjuncts. For example, in one experiment, 68% of the subjects ranked it more likely that “Reagan will provide federal support for unwed mothers and cut federal support to local governments” than that “Reagan will provide federal support for unwed mothers.”1

A long series of cleverly designed experiments, which weeded out alternative hypotheses and nailed down the standard interpretation, confirmed that conjunction fallacy occurs because we “substitute judgment of representativeness for judgment of probability.”2 By adding extra details, you can make an outcome seem more characteristic of the process that generates it. You can make it sound more plausible that Reagan will support unwed mothers, by adding the claim that Reagan will also cut support to local governments. The implausibility of one claim is compensated by the plausibility of the other; they “average out.”

Which is to say: Adding detail can make a scenario sound more plausible, even though the event necessarily becomes less probable.

If so, then, hypothetically speaking, we might find futurists spinning unconscionably plausible and detailed future histories, or find people swallowing huge packages of unsupported claims bundled with a few strong-sounding assertions at the center.

If you are presented with the conjunction fallacy in a naked, direct comparison, then you may succeed on that particular problem by consciously correcting yourself. But this is only slapping a band-aid on the problem, not fixing it in general.

In the 1982 experiment where professional forecasters assigned systematically higher probabilities to “Russia invades Poland, followed by suspension of diplomatic relations between the USA and the USSR” than to “Suspension of diplomatic relations between the USA and the USSR,” each experimental group was only presented with one proposition.3 What strategy could these forecasters have followed, as a group, that would have eliminated the conjunction fallacy, when no individual knew directly about the comparison? When no individual even knew that the experiment was about the conjunction fallacy? How could they have done better on their probability judgments?

Patching one gotcha as a special case doesn’t fix the general problem. The gotcha is the symptom, not the disease.

What could the forecasters have done to avoid the conjunction fallacy, without seeing the direct comparison, or even knowing that anyone was going to test them on the conjunction fallacy? It seems to me, that they would need to notice the word “and.” They would need to be wary of it—not just wary, but leap back from it. Even without knowing that researchers were afterward going to test them on the conjunction fallacy particularly. They would need to notice the conjunction of two entire details, and be shocked by the audacity of anyone asking them to endorse such an insanely complicated prediction. And they would need to penalize the probability substantially—a factor of four, at least, according to the experimental details.

It might also have helped the forecasters to think about possible reasons why the US and Soviet Union would suspend diplomatic relations. The scenario is not “The US and Soviet Union suddenly suspend diplomatic relations for no reason,” but “The US and Soviet Union suspend diplomatic relations for any reason.”

And the subjects who rated “Reagan will provide federal support for unwed mothers and cut federal support to local governments”? Again, they would need to be shocked by the word “and.” Moreover, they would need to add absurdities—where the absurdity is the log probability, so you can add it—rather than averaging them. They would need to think, “Reagan might or might not cut support to local governments (1 bit), but it seems very unlikely that he will support unwed mothers (4 bits). Total absurdity: 5 bits.” Or maybe, “Reagan won’t support unwed mothers. One strike and it’s out. The other proposition just makes it even worse.”
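
A minimal sketch of that arithmetic, using the made-up one-bit and four-bit figures from the paragraph above (the probabilities here are illustrative, not experimental data):

```python
import math

def absurdity_bits(p: float) -> float:
    """Surprisal in bits: -log2(probability)."""
    return -math.log2(p)

# Illustrative probabilities only, matching the bit-counts in the text.
p_cuts_local_support = 1 / 2        # "might or might not" -- about 1 bit
p_supports_unwed_mothers = 1 / 16   # "very unlikely" -- about 4 bits

p_both = p_cuts_local_support * p_supports_unwed_mothers
bits_both = absurdity_bits(p_cuts_local_support) + absurdity_bits(p_supports_unwed_mothers)

print(bits_both)               # 5.0 -- absurdities add
print(absurdity_bits(p_both))  # 5.0 -- same answer from the conjunction's probability
print(p_both)                  # 0.03125 -- less probable than either claim alone
```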

Similarly, consider Tversky and Kahneman’s (1983) experiment based around a six-sided die with four green faces and two red faces.4 The subjects had to bet on the sequence (1) RGRRR, (2) GRGRRR, or (3) GRRRRR appearing anywhere in twenty rolls of the die. Sixty-five percent of the subjects chose GRGRRR, which is strictly dominated by RGRRR, since any sequence containing GRGRRR also pays off for RGRRR. How could the subjects have done better? By noticing the inclusion? Perhaps; but that is only a band-aid, it does not fix the fundamental problem. By explicitly calculating the probabilities? That would certainly fix the fundamental problem, but you can’t always calculate an exact probability.
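
When you can simulate it, though, the check is short. A minimal Monte Carlo sketch, assuming the die is encoded as the string "GGGGRR" (so green comes up with probability 2/3) and using an arbitrary trial count:

```python
import random

def p_appears(target: str, n_rolls: int = 20, trials: int = 100_000) -> float:
    """Estimate the chance that `target` occurs as a contiguous run
    somewhere in n_rolls of a die with four green and two red faces."""
    hits = 0
    for _ in range(trials):
        rolls = "".join(random.choice("GGGGRR") for _ in range(n_rolls))
        if target in rolls:
            hits += 1
    return hits / trials

for seq in ("RGRRR", "GRGRRR", "GRRRRR"):
    print(seq, round(p_appears(seq), 3))

# RGRRR comes out at least as probable as GRGRRR: any run of rolls
# containing GRGRRR necessarily contains RGRRR inside it.
```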

The subjects lost heuristically by thinking: “Aha! Sequence 2 has the highest proportion of green to red! I should bet on Sequence 2!” To win heuristically, the subjects would need to think: “Aha! Sequence 1 is short! I should go with Sequence 1!”

They would need to feel a stronger emotional impact from Occam’s Razor—feel every added detail as a burden, even a single extra roll of the dice.

Once upon a time, I was speaking to someone who had been mesmerized by an incautious futurist (one who adds on lots of details that sound neat). I was trying to explain why I was not likewise mesmerized by these amazing, incredible theories. So I explained about the conjunction fallacy, specifically the “suspending relations ± invading Poland” experiment. And he said, “Okay, but what does this have to do with—” And I said, “It is more probable that universes replicate for any reason, than that they replicate via black holes because advanced civilizations manufacture black holes because universes evolve to make them do it.” And he said, “Oh.”

Until then, he had not felt these extra details as extra burdens. Instead they were corroborative detail, lending verisimilitude to the narrative. Someone presents you with a package of strange ideas, one of which is that universes replicate. Then they present support for the assertion that universes replicate. But this is not support for the package, though it is all told as one story.

You have to disentangle the details. You have to hold up every one independently, and ask, “How do we know this detail?” Someone sketches out a picture of humanity’s descent into nanotechnological warfare, where China refuses to abide by an international control agreement, followed by an arms race . . . Wait a minute—how do you know it will be China? Is that a crystal ball in your pocket or are you just happy to be a futurist? Where are all these details coming from? Where did that specific detail come from?

For it is written:

If you can lighten your burden you must do so.

There is no straw that lacks the power to break your back.


1 Amos Tversky and Daniel Kahneman, “Judgments of and by Representativeness: Heuristics and Biases,” in Judgment Under Uncertainty, ed. Daniel Kahneman, Paul Slovic, and Amos Tversky (New York: Cambridge University Press, 1982), 84–98.

2 See Amos Tversky and Daniel Kahneman, “Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review 90, no. 4 (1983): 293–315 and Daniel Kahneman and Shane Frederick, “Representativeness Revisited: Attribute Substitution in Intuitive Judgment,” in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (Cambridge University Press, 2002) for more information.

3 Tversky and Kahneman, “Extensional Versus Intuitive Reasoning.”

4 Ibid.

" } }, { "_id": "cXzTpSiCrNGzeoRAz", "title": "Conjunction Controversy (Or, How They Nail It Down)", "pageUrl": "https://www.lesswrong.com/posts/cXzTpSiCrNGzeoRAz/conjunction-controversy-or-how-they-nail-it-down", "postedAt": "2007-09-20T02:41:38.000Z", "baseScore": 62, "voteCount": 39, "commentCount": 25, "url": null, "contents": { "documentId": "cXzTpSiCrNGzeoRAz", "html": "

Followup to: Conjunction Fallacy


When a single experiment seems to show that subjects are guilty of some horrifying sinful bias - such as thinking that the proposition \"Bill is an accountant who plays jazz\" has a higher probability than \"Bill plays jazz\" - people may try to dismiss (not defy) the experimental data.  Most commonly, by questioning whether the subjects interpreted the experimental instructions in some unexpected fashion - perhaps they misunderstood what you meant by \"more probable\".


Experiments are not beyond questioning; on the other hand, there should always exist some mountain of evidence which suffices to convince you.  It's not impossible for researchers to make mistakes.  It's also not impossible for experimental subjects to be really genuinely and truly biased.  It happens.  On both sides, it happens.  We're all only human here.


If you think to extend a hand of charity toward experimental subjects, casting them in a better light, you should also consider thinking charitably of scientists.  They're not stupid, you know.  If you can see an alternative interpretation, they can see it too.  This is especially important to keep in mind when you read about a bias and one or two illustrative experiments in a blog post.  Yes, if the few experiments you saw were all the evidence, then indeed you might wonder.  But you might also wonder if you're seeing all the evidence that supports the standard interpretation.  Especially if the experiments have dates on them like \"1982\" and are prefaced with adjectives like \"famous\" or \"classic\".


So!  This is a long post.  It is a long post because nailing down a theory requires more experiments than the one or two vivid illustrations needed to merely explain.  I am going to cite maybe one in twenty of the experiments that I've read about, which is maybe a hundredth of what's out there.  For more information, see Tversky and Kahneman (1983) or Kahneman and Frederick (2002), both available online, from which this post is primarily drawn.


Here is (probably) the single most questioned experiment in the literature of heuristics and biases, which I reproduce here exactly as it appears in Tversky and Kahneman (1982):


Linda is 31 years old, single, outspoken, and very bright.  She majored in philosophy.  As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.


Please rank the following statements by their probability, using 1 for the most probable and 8 for the least probable:


(5.2)  Linda is a teacher in elementary school.
(3.3)  Linda works in a bookstore and takes Yoga classes.
(2.1)  Linda is active in the feminist movement.  (F)
(3.1)  Linda is a psychiatric social worker.
(5.4)  Linda is a member of the League of Women Voters.
(6.2)  Linda is a bank teller.  (T)
(6.4)  Linda is an insurance salesperson.
(4.1)  Linda is a bank teller and is active in the feminist movement.  (T & F)


(The numbers at the start of each line are the mean ranks of each proposition, lower being more probable.)


How do you know that subjects did not interpret \"Linda is a bank teller\" to mean \"Linda is a bank teller and is not active in the feminist movement\"?  For one thing, dear readers, I offer the observation that most bank tellers, even the ones who participated in anti-nuclear demonstrations in college, are probably not active in the feminist movement.  So, even so, Teller should rank above Teller & Feminist.  You should be skeptical of your own objections, too; else it is disconfirmation bias.  But the researchers did not stop with this observation; instead, in Tversky and Kahneman (1983), they created a between-subjects experiment in which either the conjunction or the two conjuncts were deleted.  Thus, in the between-subjects version of the experiment, each subject saw either (T&F), or (T), but not both.  With a total of five propositions ranked, the mean rank of (T&F) was 3.3 and the mean rank of (T) was 4.4, N=86.  Thus, the fallacy is not due solely to interpreting \"Linda is a bank teller\" to mean \"Linda is a bank teller and not active in the feminist movement.\"


Similarly, the experiment discussed yesterday used a between-subjects design (where each subject only saw one statement) to elicit lower probabilities for \"A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983\" versus \"A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983\".


Another way of knowing whether subjects have misinterpreted an experiment is to ask the subjects directly.  Also in Tversky and Kahneman (1983), a total of 103 medical internists (including 37 internists taking a postgraduate course at Harvard, and 66 internists with admitting privileges at New England Medical Center) were given problems like the following:


A 55-year-old woman had pulmonary embolism documented angiographically 10 days after a cholecystectomy.  Please rank order the following in terms of the probability that they will be among the conditions experienced by the patient (use 1 for the most likely and 6 for the least likely).  Naturally, the patient could experience more than one of these conditions.


As Tversky and Kahneman note, \"The symptoms listed for each problem included one, denoted B, that was judged by our consulting physicians to be nonrepresentative of the patient's condition, and the conjunction of B with another highly representative symptom denoted A.  In the above example of pulmonary embolism (blood clots in the lung), dyspnea (shortness of breath) is a typical symptom, whereas hemiparesis (partial paralysis) is very atypical.\"


In indirect tests, the mean ranks of A&B and B respectively were 2.8 and 4.3; in direct tests, they were 2.7 and 4.6.  In direct tests, subjects ranked A&B above B between 73% and 100% of the time, with an average of 91%.


The experiment was designed to eliminate, in four ways, the possibility that subjects were interpreting B to mean \"only B (and not A)\".  First, by carefully wording the instructions:  \"...the probability that they will be among the conditions experienced by the patient\", plus an explicit reminder, \"the patient could experience more than one of these conditions\".  Second, by including indirect tests as a comparison.  Third, the researchers afterward administered a questionnaire:


In assessing the probability that the patient described has a particular symptom X, did you assume that (check one):
    X is the only symptom experienced by the patient?
    X is among the symptoms experienced by the patient?


60 of 62 physicians, asked this question, checked the second answer.


Fourth and finally, as Tversky and Kahneman write, \"An additional group of 24 physicians, mostly residents at Stanford Hospital, participated in a group discussion in which they were confronted with their conjunction fallacies in the same questionnaire.  The respondents did not defend their answers, although some references were made to 'the nature of clinical experience.'  Most participants appeared surprised and dismayed to have made an elementary error of reasoning.\"


A further experiment is also discussed in Tversky and Kahneman (1983) in which 93 subjects rated the probability that Bjorn Borg, a strong tennis player, would in the Wimbledon finals \"win the match\", \"lose the first set\", \"lose the first set but win the match\", and \"win the first set but lose the match\".  The conjunction fallacy was expressed:  \"lose the first set but win the match\" was ranked more probable than \"lose the first set\".  Subjects were also asked to verify whether various strings of wins and losses would count as an extensional example of each case, and indeed, subjects were interpreting the cases as conjunctions that were satisfied iff both constituents were satisfied, and not interpreting them as material implications, conditional statements, or disjunctions; also, constituent B was not interpreted to exclude constituent A.  The genius of this experiment was that researchers could directly test what subjects thought was the meaning of each proposition, ruling out a very large class of misunderstandings.


Does the conjunction fallacy arise because subjects misinterpret what is meant by \"probability\"?  This can be excluded by offering students bets with payoffs.  In addition to the colored dice discussed yesterday, subjects have been asked which possibility they would prefer to bet $10 on in the classic Linda experiment.  This did reduce the incidence of the conjunction fallacy, but only to 56% (N=60), which is still more than half the students.


But the ultimate proof of the conjunction fallacy is also the most elegant.  In the conventional interpretation of the Linda experiment, subjects substitute judgment of representativeness for judgment of probability:  Their feeling of similarity between each of the propositions and Linda's description determines how plausible it feels that each of the propositions is true of Linda.  If this central theory is true, then the way in which the conjunction fallacy follows is obvious - Linda more closely resembles a feminist than a feminist bank teller, and more closely resembles a feminist bank teller than a bank teller.  Well, that is our theory about what goes on in the experimental subjects' minds, but how could we possibly know?  We can't look inside their neural circuits - not yet!  So how would you construct an experiment to directly test the standard model of the Linda experiment?


Very easily.  You just take another group of experimental subjects, and ask them how much each of the propositions \"resembles\" Linda.  This was done - see Kahneman and Frederick (2002) - and the correlation between representativeness and probability was nearly perfect.  0.99, in fact.  Here's the (rather redundant) graph:

[Graph: mean probability rank plotted against mean representativeness rank for the eight Linda propositions; correlation approximately 0.99.]


This has been replicated for numerous other experiments.  For example, in the medical experiment described above, an independent group of 32 physicians from Stanford University was asked to rank each list of symptoms \"by the degree to which they are representative of the clinical condition of the patient\".  The correlation between probability rank and representativeness rank exceeded 95% on each of the five tested medical problems.
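
A toy version of that calculation, using the mean probability ranks listed earlier for the Linda propositions and an invented set of representativeness ranks (illustrative numbers only, not the published data):

```python
# Mean probability ranks for the eight Linda propositions (from the list above),
# paired with invented mean representativeness ranks from a hypothetical second group.
prob_ranks = [5.2, 3.3, 2.1, 3.1, 5.4, 6.2, 6.4, 4.1]
repr_ranks = [5.0, 3.6, 1.8, 3.0, 5.6, 6.5, 6.3, 3.9]  # made up for illustration

def pearson(xs, ys):
    """Plain Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(prob_ranks, repr_ranks), 3))  # close to 1 for rank data like this
```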


Now, a correlation near 1 does not prove that subjects are substituting judgment of representativeness for judgment of probability.  But if you want to claim that subjects are doing something else, I would like to hear the explanation for why the correlation comes out so close to 1.  It will really take quite a complicated story to explain, not just why the subjects have an elaborate misunderstanding that produces an innocent and blameless conjunction fallacy, but also how it comes out to a completely coincidental correlation of nearly 1 with subjects' feeling of similarity.  Across multiple experimental designs.


And we all know what happens to the probability of complicated stories:  They go down when you add details to them.


Really, you know, sometimes people just make mistakes.  And I'm not talking about the researchers here.


The conjunction fallacy is probably the single most questioned bias ever introduced, which means that it now ranks among the best replicated.  The conventional interpretation has been nearly absolutely nailed down.  Questioning, in science, calls forth answers.


I emphasize this, because it seems that when I talk about biases (especially to audiences not previously familiar with the field), a lot of people want to be charitable to experimental subjects.  But it is not only experimental subjects who deserve charity.  Scientists can also be unstupid.  Someone else has already thought of your alternative interpretation. Someone else has already devised an experiment to test it.  Maybe more than one.  Maybe more than twenty.


A blank map is not a blank territory; if you don't know whether someone has tested it, that doesn't mean no one has tested it.  This is not a hunter-gatherer tribe of two hundred people, where if you do not know a thing, then probably no one in your tribe knows.  There are six billion people in the world, and no one can say with certitude that science does not know a thing; there is too much science.  Absence of such evidence is only extremely weak evidence of absence.  So do not mistake your ignorance of whether an alternative interpretation has been tested, for the positive knowledge that no one has tested it.  Be charitable to scientists too.  Do not say, \"I bet what really happened was X\", but ask, \"Which experiments discriminated between the standard interpretation versus X?\"


If it seems that I am driving this point home with a sledgehammer, well, yes, I guess I am.  It does become a little frustrating, sometimes - to know about this overwhelming mountain of evidence from thousands of experiments, but other people have no clue that it exists.  After all, if there are other experiments supporting the result, why haven't they heard of them?  It's a small tribe, after all; surely they would have heard.  By the same token, I have to make a conscious effort to remember that other people don't know about the evidence, and they aren't deliberately ignoring it in order to annoy me.  Which is why it gets a little frustrating sometimes!  We just aren't built for worlds of 6 billion people.


I'm not saying, of course, that people should stop asking questions.  If you stop asking questions, you'll never find out about the mountains of experimental evidence.  Faith is not understanding, only belief in a password.  It is futile to believe in something, however fervently, when you don't really know what you're supposed to believe in.  So I'm not saying that you should take it all on faith.  I'm not saying to shut up.  I'm not trying to make you feel guilty for asking questions.


I'm just saying, you should suspect the existence of other evidence, when a brief account of accepted science raises further questions in your mind.  Not believe in that unseen evidence, just suspect its existence.  The more so if it is a classic experiment with a standard interpretation.  Ask a little more gently.  Put less confidence in your brilliant new alternative hypothesis.  Extend some charity to the researchers, too.


And above all, talk like a pirate.  Arr!


Kahneman, D. and Frederick, S. 2002. Representativeness revisited: Attribute substitution in intuitive judgment. Pp 49-81 in Gilovich, T., Griffin, D. and Kahneman, D., eds. Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press, Cambridge.


Tversky, A. and Kahneman, D. 1982. Judgments of and by representativeness. Pp 84-98 in Kahneman, D., Slovic, P., and Tversky, A., eds. Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.


Tversky, A. and Kahneman, D. 1983. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90: 293-315.

" } }, { "_id": "QAK43nNCTQQycAcYe", "title": "Conjunction Fallacy", "pageUrl": "https://www.lesswrong.com/posts/QAK43nNCTQQycAcYe/conjunction-fallacy", "postedAt": "2007-09-19T01:54:38.000Z", "baseScore": 56, "voteCount": 48, "commentCount": 46, "url": null, "contents": { "documentId": "QAK43nNCTQQycAcYe", "html": "

The following experiment has been slightly modified for ease of blogging.  You are given the following written description, which is assumed true:

Bill is 34 years old.  He is intelligent, but unimaginative, compulsive, and generally lifeless.  In school, he was strong in mathematics but weak in social studies and humanities.

No complaints about the description, please, this experiment was done in 1974.  Anyway, we are interested in the probability of the following propositions, which may or may not be true, and are not mutually exclusive or exhaustive:

A:  Bill is an accountant.
B:  Bill is a physician who plays poker for a hobby.
C:  Bill plays jazz for a hobby.
D:  Bill is an architect.
E:  Bill is an accountant who plays jazz for a hobby.
F:  Bill climbs mountains for a hobby.

Take a moment before continuing to rank these six propositions by probability, starting with the most probable propositions and ending with the least probable propositions.  Again, the starting description of Bill is assumed true, but the six propositions may be true or untrue (they are not additional evidence) and they are not assumed mutually exclusive or exhaustive.


In a very similar experiment conducted by Tversky and Kahneman (1982), 92% of 94 undergraduates at the University of British Columbia gave an ordering with A > E > C.  That is, the vast majority of subjects indicated that Bill was more likely to be an accountant than an accountant who played jazz, and more likely to be an accountant who played jazz than a jazz player.  The ranking E > C was also displayed by 83% of 32 grad students in the decision science program of Stanford Business School, all of whom had taken advanced courses in probability and statistics.


There is a certain logical problem with saying that Bill is more likely to be an accountant who plays jazz, than he is to play jazz.  The conjunction rule of probability theory states that, for all X and Y, P(X&Y) <= P(Y).  That is, the probability that X and Y are simultaneously true, is always less than or equal to the probability that Y is true.  Violating this rule is called a conjunction fallacy.


Imagine a group of 100,000 people, all of whom fit Bill's description (except for the name, perhaps).  If you take the subset of all these persons who play jazz, and the subset of all these persons who play jazz and are accountants, the second subset will always be smaller because it is strictly contained within the first subset.
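
A minimal sketch of that subset argument, with invented base rates (any values give the same inequality):

```python
import random

random.seed(0)
N = 100_000

# Base rates are invented purely for illustration; the inequality holds regardless.
population = [
    {"accountant": random.random() < 0.3, "jazz": random.random() < 0.08}
    for _ in range(N)
]

jazz = sum(1 for person in population if person["jazz"])
jazz_accountants = sum(1 for person in population if person["jazz"] and person["accountant"])

print(jazz_accountants <= jazz)        # True: the conjunction picks out a subset
print(jazz / N, jazz_accountants / N)  # empirical P(jazz) and P(jazz & accountant)
```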


Could the conjunction fallacy rest on students interpreting the experimental instructions in an unexpected way - misunderstanding, perhaps, what is meant by "probable"?  Here's another experiment, Tversky and Kahneman (1983), played by 125 undergraduates at UBC and Stanford for real money:

Consider a regular six-sided die with four green faces and two red faces.  The die will be rolled 20 times and the sequences of greens (G) and reds (R) will be recorded.  You are asked to select one sequence, from a set of three, and you will win $25 if the sequence you chose appears on successive rolls of the die.  Please check the sequence of greens and reds on which you prefer to bet.


1.  RGRRR
2.  GRGRRR
3.  GRRRRR

65% of the subjects chose sequence 2, which is most representative of the die, since the die is mostly green and sequence 2 contains the greatest proportion of green rolls.  However, sequence 1 dominates sequence 2, because sequence 1 is strictly included in 2: sequence 2 is just sequence 1 preceded by a G; that is, 2 is the conjunction of an initial G with 1.  This clears up possible misunderstandings of "probability", since the goal was simply to get the $25.


Another experiment from Tversky and Kahneman (1983) was conducted at the Second International Congress on Forecasting in July of 1982.  The experimental subjects were 115 professional analysts, employed by industry, universities, or research institutes.  Two different experimental groups were respectively asked to rate the probability of two different statements, each group seeing only one statement:

  1. "A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983."
  2. "A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983."

Estimates of probability were low for both statements, but significantly lower for the first group than the second (p < .01 by Mann-Whitney).  Since each experimental group only saw one statement, there is no possibility that the first group interpreted (1) to mean "suspension but no invasion".
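
A rough sketch of how such a between-groups comparison could be run, with invented probability estimates; scipy's mannwhitneyu is one standard implementation of the Mann-Whitney test named above:

```python
from scipy.stats import mannwhitneyu

# Invented probability estimates (in percent) from two independent groups:
# one saw only the "suspension" statement, the other saw the conjunction.
suspension_only = [1, 1, 2, 2, 3, 3, 4, 5, 5, 7]
invasion_and_suspension = [3, 4, 5, 5, 6, 7, 8, 8, 10, 12]

# Test whether the "suspension only" ratings are systematically lower.
stat, p_value = mannwhitneyu(suspension_only, invasion_and_suspension, alternative="less")
print(stat, p_value)
```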


The moral?  Adding more detail or extra assumptions can make an event seem more plausible, even though the event necessarily becomes less probable.


Do you have a favorite futurist?  How many details do they tack onto their amazing, futuristic predictions?



Tversky, A. and Kahneman, D. 1982. Judgments of and by representativeness. Pp 84-98 in Kahneman, D., Slovic, P., and Tversky, A., eds. Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press.


Tversky, A. and Kahneman, D. 1983. Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90: 293-315.

" } }, { "_id": "B9SF3v5vNzhJZFveH", "title": "Kahneman's Planning Anecdote", "pageUrl": "https://www.lesswrong.com/posts/B9SF3v5vNzhJZFveH/kahneman-s-planning-anecdote", "postedAt": "2007-09-17T16:39:07.000Z", "baseScore": 38, "voteCount": 30, "commentCount": 8, "url": null, "contents": { "documentId": "B9SF3v5vNzhJZFveH", "html": "

Followup to: Planning Fallacy


From \"Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk Taking\" by Nobel Laureate Daniel Kahneman and Dan Lovallo, in a discussion on \"Inside and Outside Views\":


In 1976 one of us (Daniel Kahneman) was involved in a project designed to develop a curriculum for the study of judgment and decision making under uncertainty for high schools in Israel.  When the team had been in operation for about a year, with some significant achievements already to its credit, the discussion at one of the team meetings turned to the question of how long the project would take.  To make the debate more useful, I asked everyone to indicate on a slip of paper their best estimate of the number of months that would be needed to bring the project to a well-defined stage of completion: a complete draft ready for submission to the Ministry of education.  The estimates, including my own, ranged from 18 to 30 months.


At this point I had the idea of turning to one of our members, a distinguished expert in curriculum development, asking him a question phrased about as follows:


\"We are surely not the only team to have tried to develop a curriculum where none existed before.  Please try to recall as many such cases as you can.  Think of them as they were in a stage comparable to ours at present.  How long did it take them, from that point, to complete their projects?\"


After a long silence, something much like the following answer was given, with obvious signs of discomfort:  \"First, I should say that not all teams that I can think of in a comparable stage ever did complete their task.  About 40% of them eventually gave up.  Of the remaining, I cannot think of any that was completed in less than seven years, nor of any that took more than ten.\"


In response to a further question, he answered:  \"No, I cannot think of any relevant factor that distinguishes us favorably from the teams I have been thinking about.  Indeed, my impression is that we are slightly below average in terms of our resources and potential.\"


Facing the facts can be intolerably demoralizing.  The participants in the meeting had professional expertise in the logic of forecasting, and none even ventured to question the relevance of the forecast implied by our expert's statistics: an even chance of failure, and a completion time of seven to ten years in case of success.  Neither of these outcomes was an acceptable basis for continuing the project, but no one was willing to draw the embarrassing conclusion that it should be scrapped.


So, the forecast was quietly dropped from active debate, along with any pretense of long-term planning, and the project went on along its predictably unforeseeable path to eventual completion some eight years later.

" } }, { "_id": "CPm5LTwHrvBJCa9h5", "title": "Planning Fallacy", "pageUrl": "https://www.lesswrong.com/posts/CPm5LTwHrvBJCa9h5/planning-fallacy", "postedAt": "2007-09-17T07:06:20.000Z", "baseScore": 205, "voteCount": 200, "commentCount": 44, "url": null, "contents": { "documentId": "CPm5LTwHrvBJCa9h5", "html": "

The Denver International Airport opened 16 months late, at a cost overrun of $2 billion.1

The Eurofighter Typhoon, a joint defense project of several European countries, was delivered 54 months late at a cost of $19 billion instead of $7 billion.

The Sydney Opera House may be the most legendary construction overrun of all time, originally estimated to be completed in 1963 for $7 million, and finally completed in 1973 for $102 million.2

Are these isolated disasters brought to our attention by selective availability? Are they symptoms of bureaucracy or government incentive failures? Yes, very probably. But there’s also a corresponding cognitive bias, replicated in experiments with individual planners.

Buehler et al. asked their students for estimates of when they (the students) thought they would complete their personal academic projects.3 Specifically, the researchers asked for estimated times by which the students thought it was 50%, 75%, and 99% probable their personal projects would be done. Would you care to guess how many students finished on or before their estimated 50%, 75%, and 99% probability levels?

The answer: 13% of subjects finished their project by the time they had assigned a 50% probability level; 19% finished by the time assigned a 75% probability level; and only 45% finished by the time of their 99% probability level.

As Buehler et al. wrote, “The results for the 99% probability level are especially striking: Even when asked to make a highly conservative forecast, a prediction that they felt virtually certain that they would fulfill, students’ confidence in their time estimates far exceeded their accomplishments.”4

More generally, this phenomenon is known as the “planning fallacy.” The planning fallacy is that people think they can plan, ha ha.

A clue to the underlying problem with the planning algorithm was uncovered by Newby-Clark et al., who found that

- asking subjects for their predictions based on realistic “best guess” scenarios; and
- asking subjects for their hoped-for “best case” scenarios

. . . produced indistinguishable results.5

When people are asked for a “realistic” scenario, they envision everything going exactly as planned, with no unexpected delays or unforeseen catastrophes—the same vision as their “best case.”

Reality, it turns out, usually delivers results somewhat worse than the “worst case.”

Unlike most cognitive biases, we know a good debiasing heuristic for the planning fallacy. It won’t work for messes on the scale of the Denver International Airport, but it’ll work for a lot of personal planning, and even some small-scale organizational stuff. Just use an “outside view” instead of an “inside view.”

People tend to generate their predictions by thinking about the particular, unique features of the task at hand, and constructing a scenario for how they intend to complete the task—which is just what we usually think of as planning.

When you want to get something done, you have to plan out where, when, how; figure out how much time and how much resource is required; visualize the steps from beginning to successful conclusion. All this is the “inside view,” and it doesn’t take into account unexpected delays and unforeseen catastrophes. As we saw before, asking people to visualize the “worst case” still isn’t enough to counteract their optimism—they don’t visualize enough Murphyness.

The outside view is when you deliberately avoid thinking about the special, unique features of this project, and just ask how long it took to finish broadly similar projects in the past. This is counterintuitive, since the inside view has so much more detail—there’s a temptation to think that a carefully tailored prediction, taking into account all available data, will give better results.

But experiment has shown that the more detailed subjects’ visualization, the more optimistic (and less accurate) they become. Buehler et al. asked an experimental group of subjects to describe highly specific plans for their Christmas shopping—where, when, and how.6 On average, this group expected to finish shopping more than a week before Christmas. Another group was simply asked when they expected to finish their Christmas shopping, with an average response of four days. Both groups finished an average of three days before Christmas.

Likewise, Buehler et al., reporting on a cross-cultural study, found that Japanese students expected to finish their essays ten days before deadline. They actually finished one day before deadline. Asked when they had previously completed similar tasks, they responded, “one day before deadline.” This is the power of the outside view over the inside view.

A similar finding is that experienced outsiders, who know less of the details, but who have relevant memory to draw upon, are often much less optimistic and much more accurate than the actual planners and implementers.

So there is a fairly reliable way to fix the planning fallacy, if you’re doing something broadly similar to a reference class of previous projects. Just ask how long similar projects have taken in the past, without considering any of the special properties of this project. Better yet, ask an experienced outsider how long similar projects have taken.
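
A minimal sketch of the outside view as a calculation, with an invented reference class of past project durations:

```python
import statistics

def outside_view_estimate(past_durations_weeks):
    """Ignore this project's special features; summarize the reference class instead."""
    past = sorted(past_durations_weeks)
    return {
        "median_weeks": statistics.median(past),
        "rough_75th_percentile_weeks": past[int(0.75 * (len(past) - 1))],
        "worst_seen_weeks": past[-1],
    }

# Invented durations for "broadly similar" past projects.
print(outside_view_estimate([9, 12, 14, 15, 18, 22, 30, 41]))
```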

You’ll get back an answer that sounds hideously long, and clearly reflects no understanding of the special reasons why this particular task will take less time. This answer is true. Deal with it.


1 I’ve also seen $3.1 billion asserted.

2 Roger Buehler, Dale Griffin, and Michael Ross, “Exploring the ‘Planning Fallacy’: Why People Underestimate Their Task Completion Times,” Journal of Personality and Social Psychology 67, no. 3 (1994): 366–381.

3 Roger Buehler, Dale Griffin, and Michael Ross, “It’s About Time: Optimistic Predictions in Work and Love,” European Review of Social Psychology 6, no. 1 (1995): 1–32.

4 Roger Buehler, Dale Griffin, and Michael Ross, “Inside the Planning Fallacy: The Causes and Consequences of Optimistic Time Predictions,” in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (New York: Cambridge University Press, 2002), 250–270.

5 Ian R. Newby-Clark et al., “People Focus on Optimistic Scenarios and Disregard Pessimistic Scenarios While Predicting Task Completion Times,” Journal of Experimental Psychology: Applied 6, no. 3 (2000): 171–182.

6 Buehler, Griffin, and Ross, “Inside the Planning Fallacy.”

" } }, { "_id": "vHPrTLnhrgAHA96ko", "title": "Why I'm Blooking", "pageUrl": "https://www.lesswrong.com/posts/vHPrTLnhrgAHA96ko/why-i-m-blooking", "postedAt": "2007-09-15T17:49:02.000Z", "baseScore": 50, "voteCount": 38, "commentCount": 20, "url": null, "contents": { "documentId": "vHPrTLnhrgAHA96ko", "html": "

Yesterday being my 100th Overcoming Bias post, it seems an opportune time to answer a commenter's question:  Why am I posting?


For a long time I've suffered from writer's molasses.  Like writer's block, only instead of not writing, I write very slooowly.  At least when it comes to writing Documents - papers, book chapters, website material.  If I haven't published a hundred papers, it's not for lack of a hundred ideas, but because writing one paper - at my current pace - takes four months full time.  I sometimes wonder if I could become a respectable academic if I wrote at a respectable pace.


Oddly enough, I can write most emails around as fast as I type.  Such disorders are hard to self-diagnose, but I suspect that part of the problem is that on Documents I repeatedly reread and tweak material I've already written, instead of writing new material.  James Hogan (an SF author) once told me that he was more productive on a typewriter than a word processor, because the typewriter prevented him from tweaking until the second draft.


A blook is a collection of blog posts that have been edited into a book.  Logically, then, publishing a book as a series of blog posts ought to be known as "blooking".

It would be more precise to say that I'm generating raw material to be edited into a book, and collecting some feedback along the way.  I make no promises for this project.  (I hate promising anything unless I have already done it.)  The first part of the plan, generating the raw material as blog posts, has proceeded at a respectable pace so far.  Will I be able to edit the posts into chapters, so long as all the raw material is there?  Will I be able to generate all the raw material, or will the project, ahem, "blog down"?


In August I decided that I was going to write one blog post per day for Overcoming Bias.  This challenge began to hone my writing speed somewhat - for example, I would look at the clock and try not to take longer than an hour... or three hours... but nonetheless I began to feel the need to shove the post out the door instead of perfecting it further.  This is necessary and proper.


Near the end of August, I faced a new challenge - I also had to prepare two talks for the Singularity Summit 2007 (Sep 8-9).  Those were actual Documents.  I knew, from previous experience, that I couldn't possibly prepare the two talks and also keep up the pace of blogging on Overcoming Bias.  Blogging was using up all my writing energy already - I have only a limited supply of words per day.  If I overreach one day's budget I can't write at all the next day.  So (I knew) I would have to temporarily stop blogging and resume after the Summit.


And then I said to myself, Hey, if I never try to do anything "impossible", I'll never grow.


I decided I would keep up the pace on Overcoming Bias while simultaneously writing my two Summit talks.  Tsuyoku naritai!


I lost sleep, and skipped exercise.  But I did it.  I'll remember that the next time I'm thinking of trying something impossible.

" } }, { "_id": "Hs3ymqypvhgFMkgLb", "title": "Doublethink (Choosing to be Biased)", "pageUrl": "https://www.lesswrong.com/posts/Hs3ymqypvhgFMkgLb/doublethink-choosing-to-be-biased", "postedAt": "2007-09-14T20:05:13.000Z", "baseScore": 107, "voteCount": 109, "commentCount": 169, "url": null, "contents": { "documentId": "Hs3ymqypvhgFMkgLb", "html": "

An oblong slip of newspaper had appeared between O'Brien's fingers. For perhaps five seconds it was within the angle of Winston's vision. It was a photograph, and there was no question of its identity. It was the photograph. It was another copy of the photograph of Jones, Aaronson, and Rutherford at the party function in New York, which he had chanced upon eleven years ago and promptly destroyed. For only an instant it was before his eyes, then it was out of sight again. But he had seen it, unquestionably he had seen it! He made a desperate, agonizing effort to wrench the top half of his body free. It was impossible to move so much as a centimetre in any direction. For the moment he had even forgotten the dial. All he wanted was to hold the photograph in his fingers again, or at least to see it.


'It exists!' he cried.


'No,' said O'Brien.


He stepped across the room.


There was a memory hole in the opposite wall. O'Brien lifted the grating. Unseen, the frail slip of paper was whirling away on the current of warm air; it was vanishing in a flash of flame. O'Brien turned away from the wall.


'Ashes,' he said. 'Not even identifiable ashes. Dust. It does not exist. It never existed.'


'But it did exist! It does exist! It exists in memory. I remember it. You remember it.'


'I do not remember it,' said O'Brien.


Winston's heart sank. That was doublethink. He had a feeling of deadly helplessness. If he could have been certain that O'Brien was lying, it would not have seemed to matter. But it was perfectly possible that O'Brien had really forgotten the photograph. And if so, then already he would have forgotten his denial of remembering it, and forgotten the act of forgetting. How could one be sure that it was simple trickery? Perhaps that lunatic dislocation in the mind could really happen: that was the thought that defeated him.


   —George Orwell, 1984


What if self-deception helps us be happy?  What if just running out and overcoming bias will make us—gasp!—unhappy?  Surely, true wisdom would be second-order rationality, choosing when to be rational.  That way you can decide which cognitive biases should govern you, to maximize your happiness.


Leaving the morality aside, I doubt such a lunatic dislocation in the mind could really happen.


Second-order rationality implies that at some point, you will think to yourself, \"And now, I will irrationally believe that I will win the lottery, in order to make myself happy.\"  But we do not have such direct control over our beliefs.  You cannot make yourself believe the sky is green by an act of will.  You might be able to believe you believed it—though I have just made that more difficult for you by pointing out the difference.  (You're welcome!)  You might even believe you were happy and self-deceived; but you would not in fact be happy and self-deceived.


For second-order rationality to be genuinely rational, you would first need a good model of reality, to extrapolate the consequences of rationality and irrationality.  If you then chose to be first-order irrational, you would need to forget this accurate view. And then forget the act of forgetting.  I don't mean to commit the logical fallacy of generalizing from fictional evidence, but I think Orwell did a good job of extrapolating where this path leads.


You can't know the consequences of being biased, until you have already debiased yourself.  And then it is too late for self-deception.


The other alternative is to choose blindly to remain biased, without any clear idea of the consequences.  This is not second-order rationality.  It is willful stupidity.


Be irrationally optimistic about your driving skills, and you will be happily unconcerned where others sweat and fear.  You won't have to put up with the inconvenience of a seatbelt.  You will be happily unconcerned for a day, a week, a year.  Then CRASH, and spend the rest of your life wishing you could scratch the itch in your phantom limb.  Or paralyzed from the neck down.  Or dead.  It's not inevitable, but it's possible; how probable is it?  You can't make that tradeoff rationally unless you know your real driving skills, so you can figure out how much danger you're placing yourself in.  You can't make that tradeoff rationally unless you know about biases like neglect of probability.


No matter how many days go by in blissful ignorance, it only takes a single mistake to undo a human life, to outweigh every penny you picked up from the railroad tracks of stupidity.


One of the chief pieces of advice I give to aspiring rationalists is \"Don't try to be clever.\" And, \"Listen to those quiet, nagging doubts.\"  If you don't know, you don't know what you don't know, you don't know how much you don't know, and you don't know how much you needed to know.


There is no second-order rationality.  There is only a blind leap into what may or may not be a flaming lava pit.  Once you know, it will be too late for blindness.


But people neglect this, because they do not know what they do not know.  Unknown unknowns are not available. They do not focus on the blank area on the map, but treat it as if it corresponded to a blank territory.  When they consider leaping blindly, they check their memory for dangers, and find no flaming lava pits in the blank map.  Why not leap?


Been there.  Tried that.  Got burned.  Don't try to be clever.


I once said to a friend that I suspected the happiness of stupidity was greatly overrated.  And she shook her head seriously, and said, \"No, it's not; it's really not.\"


Maybe there are stupid happy people out there.  Maybe they are happier than you are.  And life isn't fair, and you won't become happier by being jealous of what you can't have.  I suspect the vast majority of Overcoming Bias readers could not achieve the \"happiness of stupidity\" if they tried.  That way is closed to you. You can never achieve that degree of ignorance, you cannot forget what you know, you cannot unsee what you see. 


The happiness of stupidity is closed to you.  You will never have it short of actual brain damage, and maybe not even then.  You should wonder, I think, whether the happiness of stupidity is optimal—if it is the most happiness that a human can aspire to—but it matters not.  That way is closed to you, if it was ever open.


All that is left to you now, is to aspire to such happiness as a rationalist can achieve.  I think it may prove greater, in the end. There are bounded paths and open-ended paths; plateaus on which to laze, and mountains to climb; and if climbing takes more effort, still the mountain rises higher in the end.


Also there is more to life than happiness; and other happinesses than your own may be at stake in your decisions.


But that is moot.  By the time you realize you have a choice, there is no choice.  You cannot unsee what you see.  The other way is closed.

" } }, { "_id": "i8q4vXestDkGTFwsc", "title": "Human Evil and Muddled Thinking", "pageUrl": "https://www.lesswrong.com/posts/i8q4vXestDkGTFwsc/human-evil-and-muddled-thinking", "postedAt": "2007-09-13T23:43:13.000Z", "baseScore": 152, "voteCount": 107, "commentCount": 144, "url": null, "contents": { "documentId": "i8q4vXestDkGTFwsc", "html": "

George Orwell saw the descent of the civilized world into totalitarianism, the conversion or corruption of one country after another; the boot stamping on a human face, forever, and remember that it is forever. You were born too late to remember a time when the rise of totalitarianism seemed unstoppable, when one country after another fell to secret police and the thunderous knock at midnight, while the professors of free universities hailed the Soviet Union’s purges as progress. It feels as alien to you as fiction; it is hard for you to take seriously. Because, in your branch of time, the Berlin Wall fell. And if Orwell’s name is not carved into one of those stones, it should be.

Orwell saw the destiny of the human species, and he put forth a convulsive effort to wrench it off its path. Orwell’s weapon was clear writing. Orwell knew that muddled language is muddled thinking; he knew that human evil and muddled thinking intertwine like conjugate strands of DNA:1

In our time, political speech and writing are largely the defence of the indefensible. Things like the continuance of British rule in India, the Russian purges and deportations, the dropping of the atom bombs on Japan, can indeed be defended, but only by arguments which are too brutal for most people to face, and which do not square with the professed aims of the political parties. Thus political language has to consist largely of euphemism, question-begging and sheer cloudy vagueness. Defenceless villages are bombarded from the air, the inhabitants driven out into the countryside, the cattle machine-gunned, the huts set on fire with incendiary bullets: this is called PACIFICATION . . .

Orwell was clear on the goal of his clarity:

If you simplify your English, you are freed from the worst follies of orthodoxy. You cannot speak any of the necessary dialects, and when you make a stupid remark its stupidity will be obvious, even to yourself.

To make our stupidity obvious, even to ourselves—this is the heart of Overcoming Bias.

Evil sneaks, hidden, through the unlit shadows of the mind. We look back with the clarity of history, and weep to remember the planned famines of Stalin and Mao, which killed tens of millions. We call this evil, because it was done by deliberate human intent to inflict pain and death upon innocent human beings. We call this evil, because of the revulsion that we feel against it, looking back with the clarity of history. For perpetrators of evil to avoid its natural opposition, the revulsion must remain latent. Clarity must be avoided at any cost. Even as humans of clear sight tend to oppose the evil that they see; so too does human evil, wherever it exists, set out to muddle thinking.

1984 sets this forth starkly: Orwell’s ultimate villains are cutters and airbrushers of photographs (based on historical cutting and airbrushing in the Soviet Union). At the peak of all darkness in the Ministry of Love, O’Brien tortures Winston to admit that two plus two equals five:2

“Do you remember,” he went on, “writing in your diary, ‘Freedom is the freedom to say that two plus two make four’?”

“Yes,” said Winston.

O’Brien held up his left hand, its back towards Winston, with the thumb hidden and the four fingers extended.

“How many fingers am I holding up, Winston?”

“Four.”

“And if the party says that it is not four but five—then how many?”

“Four.”

The word ended in a gasp of pain. The needle of the dial had shot up to fifty-five. The sweat had sprung out all over Winston’s body. The air tore into his lungs and issued again in deep groans which even by clenching his teeth he could not stop. O’Brien watched him, the four fingers still extended. He drew back the lever. This time the pain was only slightly eased.

I am continually aghast at apparently intelligent folks—such as Robin Hanson’s colleague Tyler Cowen—who don’t think that overcoming bias is important.3 This is your mind we’re talking about. Your human intelligence. It separates you from an orangutan. It built this world. You don’t think how the mind works is important? You don’t think the mind’s systematic malfunctions are important? Do you think the Inquisition would have tortured witches, if all were ideal Bayesians?

Tyler Cowen apparently feels that overcoming bias is just as biased as bias: “I view Robin’s blog as exemplifying bias, and indeed showing that bias can be very useful.” I hope this is only the result of thinking too abstractly while trying to sound clever. Does Tyler seriously think that scope insensitivity to the value of human life is on the same level with trying to create plans that will really save as many lives as possible?

Orwell was forced to fight a similar attitude—that to admit to any distinction is youthful naiveté:

Stuart Chase and others have come near to claiming that all abstract words are meaningless, and have used this as a pretext for advocating a kind of political quietism. Since you don’t know what Fascism is, how can you struggle against Fascism?

Maybe overcoming bias doesn’t look quite exciting enough, if it’s framed as a struggle against mere accidental mistakes. Maybe it’s harder to get excited if there isn’t some clear evil to oppose. So let us be absolutely clear that where there is human evil in the world, where there is cruelty and torture and deliberate murder, there are biases enshrouding it. Where people of clear sight oppose these biases, the concealed evil fights back. The truth does have enemies. If Overcoming Bias were a newsletter in the old Soviet Union, every poster and commenter of Overcoming Bias would have been shipped off to labor camps.

In all human history, every great leap forward has been driven by a new clarity of thought. Except for a few natural catastrophes, every great woe has been driven by a stupidity. Our last enemy is ourselves; and this is a war, and we are soldiers.


1George Orwell, “Politics and the English Language,” Horizon, 1946.

2George Orwell, 1984 (Signet Classic, 1950).

3See Tyler Cowen, “How Important is Overcoming Bias?,” Marginal Revolution (blog), 2007, http://marginalrevolution.com/marginalrevolution/2007/08/how-important-i.html.

" } }, { "_id": "Lz64L3yJEtYGkzMzu", "title": "Rationality and the English Language", "pageUrl": "https://www.lesswrong.com/posts/Lz64L3yJEtYGkzMzu/rationality-and-the-english-language", "postedAt": "2007-09-12T22:55:57.000Z", "baseScore": 148, "voteCount": 113, "commentCount": 33, "url": null, "contents": { "documentId": "Lz64L3yJEtYGkzMzu", "html": "

The other day, someone commented that my writing reminded them of George Orwell’s “Politics and the English Language.”1 I was honored. Especially since I’d already thought of today’s topic.

If you really want an artist’s perspective on rationality, then read Orwell; he is mandatory reading for rationalists as well as authors. Orwell was not a scientist, but a writer; his tools were not numbers, but words; his adversary was not Nature, but human evil. If you wish to imprison people for years without trial, you must think of some other way to say it than “I’m going to imprison Mr. Jennings for years without trial.” You must muddy the listener’s thinking, prevent clear images from outraging conscience. You say, “Unreliable elements were subjected to an alternative justice process.”

Orwell was the outraged opponent of totalitarianism and the muddy thinking in which evil cloaks itself—which is how Orwell’s writings on language ended up as classic rationalist documents on a level with Feynman, Sagan, or Dawkins.

“Writers are told to avoid usage of the passive voice.” A rationalist whose background comes exclusively from science may fail to see the flaw in the previous sentence; but anyone who’s done a little writing should see it right away. I wrote the sentence in the passive voice, without telling you who tells authors to avoid passive voice. Passive voice removes the actor, leaving only the acted-upon. “Unreliable elements were subjected to an alternative justice process”—subjected by whom? What does an “alternative justice process” do? With enough static noun phrases, you can keep anything unpleasant from actually happening.

Journal articles are often written in passive voice. (Pardon me, some scientists write their journal articles in passive voice. It’s not as if the articles are being written by no one, with no one to blame.) It sounds more authoritative to say “The subjects were administered Progenitorivox” than “I gave each college student a bottle of 20 Progenitorivox, and told them to take one every night until they were gone.” If you remove the scientist from the description, that leaves only the all-important data. But in reality the scientist is there, and the subjects are college students, and the Progenitorivox wasn’t “administered” but handed over with instructions. Passive voice obscures reality.

Judging from the comments I get, someone will protest that using the passive voice in a journal article is hardly a sin—after all, if you think about it, you can realize the scientist is there. It doesn’t seem like a logical flaw. And this is why rationalists need to read Orwell, not just Feynman or even Jaynes.

Nonfiction conveys knowledge, fiction conveys experience. Medical science can extrapolate what would happen to a human unprotected in a vacuum. Fiction can make you live through it.

Some rationalists will try to analyze a misleading phrase, try to see if there might possibly be anything meaningful to it, try to construct a logical interpretation. They will be charitable, give the author the benefit of the doubt. Authors, on the other hand, are trained not to give themselves the benefit of the doubt. Whatever the audience thinks you said is what you said, whether you meant to say it or not; you can’t argue with the audience no matter how clever your justifications.

A writer knows that readers will not stop for a minute to think. A fictional experience is a continuous stream of first impressions. A writer-rationalist pays attention to the experience words create. If you are evaluating the public rationality of a statement, and you analyze the words deliberatively, rephrasing propositions, trying out different meanings, searching for nuggets of truthiness, then you’re losing track of the first impression—what the audience sees, or rather feels.

A novelist would notice the screaming wrongness of “The subjects were administered Progenitorivox.” What life is here for a reader to live? This sentence creates a distant feeling of authoritativeness, and that’s all—the only experience is the feeling of being told something reliable. A novelist would see nouns too abstract to show what actually happened—the postdoc with the bottle in their hand, trying to look stern; the student listening with a nervous grin.

My point is not to say that journal articles should be written like novels, but that a rationalist should become consciously aware of the experiences which words create. A rationalist must understand the mind and how to operate it. That includes the stream of consciousness, the part of yourself that unfolds in language. A rationalist must become consciously aware of the actual, experiential impact of phrases, beyond their mere propositional semantics.2

Or to say it more bluntly: Meaning does not excuse impact!

I don’t care what rational interpretation you can construct for an applause light like “AI should be developed through democratic processes.” That cannot excuse its irrational impact of signaling the audience to applaud, not to mention its cloudy question-begging vagueness.

Here is Orwell, railing against the impact of cliches, their effect on the experience of thinking:

When one watches some tired hack on the platform mechanically repeating the familiar phrases—BESTIAL, ATROCITIES, IRON HEEL, BLOODSTAINED TYRANNY, FREE PEOPLES OF THE WORLD, STAND SHOULDER TO SHOULDER—one often has a curious feeling that one is not watching a live human being but some kind of dummy . . . A speaker who uses that kind of phraseology has gone some distance toward turning himself into a machine. The appropriate noises are coming out of his larynx, but his brain is not involved, as it would be if he were choosing his words for himself . . .

What is above all needed is to let the meaning choose the word, and not the other way around. In prose, the worst thing one can do with words is surrender to them. When you think of a concrete object, you think wordlessly, and then, if you want to describe the thing you have been visualising you probably hunt about until you find the exact words that seem to fit it. When you think of something abstract you are more inclined to use words from the start, and unless you make a conscious effort to prevent it, the existing dialect will come rushing in and do the job for you, at the expense of blurring or even changing your meaning. Probably it is better to put off using words as long as possible and get one’s meaning as clear as one can through pictures and sensations.

Charles Sanders Peirce might have written that last paragraph. More than one path can lead to the Way.


1Comment at http://lesswrong.com/lw/jb/applause_lights/f1t.

2Compare “Semantic Stopsigns” and “Applause Lights” in Map and Territory.

" } }, { "_id": "dLbkrPu5STNCBLRjr", "title": "Applause Lights", "pageUrl": "https://www.lesswrong.com/posts/dLbkrPu5STNCBLRjr/applause-lights", "postedAt": "2007-09-11T18:31:48.000Z", "baseScore": 389, "voteCount": 320, "commentCount": 99, "url": null, "contents": { "documentId": "dLbkrPu5STNCBLRjr", "html": "

At the Singularity Summit 2007, one of the speakers called for democratic, multinational development of artificial intelligence. So I stepped up to the microphone and asked:

Suppose that a group of democratic republics form a consortium to develop AI, and there’s a lot of politicking during the process—some interest groups have unusually large influence, others get shafted—in other words, the result looks just like the products of modern democracies. Alternatively, suppose a group of rebel nerds develops an AI in their basement, and instructs the AI to poll everyone in the world—dropping cellphones to anyone who doesn’t have them—and do whatever the majority says. Which of these do you think is more “democratic,” and would you feel safe with either?

I wanted to find out whether he believed in the pragmatic adequacy of the democratic political process, or if he believed in the moral rightness of voting. But the speaker replied:

The first scenario sounds like an editorial in Reason magazine, and the second sounds like a Hollywood movie plot.

Confused, I asked:

Then what kind of democratic process did you have in mind?

The speaker replied:

Something like the Human Genome Project—that was an internationally sponsored research project.

I asked:

How would different interest groups resolve their conflicts in a structure like the Human Genome Project?

And the speaker said:

I don’t know.

This exchange puts me in mind of a quote from some dictator or other, who was asked if he had any intentions to move his pet state toward democracy:

We believe we are already within a democratic system. Some factors are still missing, like the expression of the people’s will.

The substance of a democracy is the specific mechanism that resolves policy conflicts. If all groups had the same preferred policies, there would be no need for democracy—we would automatically cooperate. The resolution process can be a direct majority vote, or an elected legislature, or even a voter-sensitive behavior of an artificial intelligence, but it has to be something. What does it mean to call for a “democratic” solution if you don’t have a conflict-resolution mechanism in mind?

I think it means that you have said the word “democracy,” so the audience is supposed to cheer. It’s not so much a propositional statement or belief, as the equivalent of the “Applause” light that tells a studio audience when to clap.

This case is remarkable only in that I mistook the applause light for a policy suggestion, with subsequent embarrassment for all. Most applause lights are much more blatant, and can be detected by a simple reversal test. For example, suppose someone says:

We need to balance the risks and opportunities of AI.

If you reverse this statement, you get:

We shouldn’t balance the risks and opportunities of AI.

Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information.

There are plenty of legitimate reasons for uttering a sentence that would be uninformative in isolation. “We need to balance the risks and opportunities of AI” can introduce a discussion topic; it can emphasize the importance of a specific proposal for balancing; it can criticize an unbalanced proposal. Linking to a normal assertion can convey new information to a bounded rationalist—the link itself may not be obvious. But if no specifics follow, the sentence is probably an applause light.

I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing:

I am here to propose to you today that we need to balance the risks and opportunities of advanced artificial intelligence. We should avoid the risks and, insofar as it is possible, realize the opportunities. We should not needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm. We should respect the interests of all parties with a stake in the Singularity. We must try to ensure that the benefits of advanced technologies accrue to as many individuals as possible, rather than being restricted to a few. We must try to avoid, as much as possible, violent conflicts using these technologies; and we must prevent massive destructive capability from falling into the hands of individuals. We should think through these issues before, not after, it is too late to do anything about them . . .

" } }, { "_id": "RiWPdNaoL8fq7phbY", "title": "We Don't Really Want Your Participation", "pageUrl": "https://www.lesswrong.com/posts/RiWPdNaoL8fq7phbY/we-don-t-really-want-your-participation", "postedAt": "2007-09-10T19:53:46.000Z", "baseScore": 61, "voteCount": 55, "commentCount": 25, "url": null, "contents": { "documentId": "RiWPdNaoL8fq7phbY", "html": "

At the Singularity Summit yesterday, several speakers alleged that we should "reach out" to artists and poets to encourage their participation in the Singularity dialogue.  So at the end of one such session, a woman went up to the audience microphone and said:


"I am an artist.  I want to participate.  What should I do?"


And there was a brief, frozen silence.

I wanted to leap up and say:


No, no, I'm afraid you've misunderstood.  We're just calling for greater participation by artists.  We can get plenty of credit for being enlightened just by issuing the call.  If we really cared what artists thought, we would find some artists and ask them questions, not call for artists to participate.  We don't actually want to hear from artists.  We think your opinions are stupid.


And if she'd asked me afterward, my real answer would have been:


You are not an artist, you are a human being; art is only one facet in which you express your humanity.  Your reactions to the Singularity should arise from your entire self.  It's perfectly all right to have a boringly normal and nonunique reaction like "I'm afraid," or "I don't think we should do this," or "I want to help, where do I send the check?"  The right answer is not always unusual.  Your natural reaction does not need to be unique, and that's why you don't need to try to come up with an "artist's viewpoint" on the Singularity.  I would call on you to participate as a human being, not an artist.  If your artistry has something to say, it will express itself naturally in your responses, without you needing to make a conscious effort to say something artist-like.

But I didn't say any of this, of course.  It would have been indecorous.


And while we're on the subject, I would feel rather patronized - like a dog commanded to perform a trick - if someone presented me with a painting and said, "Say something mathematical!"

" } }, { "_id": "GMhzDb3uAFYLwmXtY", "title": "Radical Honesty", "pageUrl": "https://www.lesswrong.com/posts/GMhzDb3uAFYLwmXtY/radical-honesty", "postedAt": "2007-09-10T06:09:00.000Z", "baseScore": 43, "voteCount": 31, "commentCount": 37, "url": null, "contents": { "documentId": "GMhzDb3uAFYLwmXtY", "html": "

I recently ran across this interesting article about Radical Honesty, a movement founded by a psychotherapist named Brad Blanton who suggests that we should kick our addiction to lying and just tell the complete truth all the time.  I also like this quote from the Wikipedia article on Radical Honesty:  \"The significant majority of participants in the Radical Honesty workshops report dramatic changes in their lives after taking the course, though they are not always comfortable and positive.\"  The movement visibly suffers from having been founded by a psychotherapist - it's more about the amazing happiness that absolute truth-telling can bring to your relationships (!!) rather than such rationalist values as seeking truth by teaching yourself a habit of honesty, or not wishing to deceive others because it infringes on their autonomy.


I once suggested a notion called \"Crocker's Rules\", which was the mirror image of Radical Honesty - rather than telling the whole truth to other people, you would strive to always allow others to tell you the complete truth without being offended.

Crocker's Rules didn't give you the right to say anything offensive, but other people could say potentially offensive things to you, and it was your responsibility not to be offended.  This was surprisingly hard to explain to people; many people would read the careful explanation and hear, \"Crocker's Rules mean you can say offensive things to other people.\"


I was initially a bit suspicious of Blanton's movement - it seemed like the mirror-image that so many people misinterpreted, the option of saying offensive things to other people.  But Blanton makes it not only optional, but mandatory to speak your mind - a far greater inconvenience than Crocker's Rules would ever impose on anyone.


Crocker's Rules didn't catch on.  Maybe it was too hard to tell the difference between someone delivering a slap in the face, and someone deliberately invoking Crocker's Rules - you don't want to miss a real clue to real hostility because of your acceptance; you wouldn't want to not believe a true fact, even if the true fact is that someone else hates you.  And third parties may assume the truthteller is an offensive person no matter how much the receiver disclaims offense - they may assume the receiver is \"just being polite\", or that requesting honesty does not excuse its offensiveness.


Will Blanton's Rules ever catch on?  I worry that Radical Honesty would selectively disadvantage rationalists in human relationships.  Broadcasting your opinions is much easier when you can deceive yourself about anything you'd feel uncomfortable saying to others.  I wonder whether practitioners of Radical Honesty tend to become more adept at self-deception, as they stop being able to tell white lies or admit private thoughts to themselves.  I have taken a less restrictive kind of honesty upon myself - to avoid statements that are literally false  - and I know that this becomes more and more difficult, more and more of a disadvantage, as I deceive myself less and less.


I suspect that the neural circuits that we use to lie to others, also censor our own thoughts.  Honesty to others is important unto a rationalist, even one who is seeking a strictly selfish advantage in finding truth only for themselves.  If there were a Bayesian Order, would its practitioners take a vow of Radical Honesty?


I think that if there is ever a vow of honesty among rationalists, it will be restricted in scope.  Normally, perhaps, you would avoid making statements that were literally false, and be ready to accept brutal honesty from anyone who first said \"Crocker's Rules\".  Maybe you would be Radically Honest, but only with others who had taken a vow of Radical Honesty, and who understood the trust required to tell someone the truth.


Maybe Radical Honesty would be reserved for matters sacred unto a rationalist?  In some domains this is already the case.  We believe that scientists should always tell the whole truth about science.  It's one thing to lie in everyday life, lie to your boss, lie to the police, lie to your lover; but whoever lies in a journal article is guilty of utter heresy and will be excommunicated.


I wonder what it would be like to have anyone in the world, even a single person, whom you could absolutely trust.  Or what it would be like for there to be anyone in the world, even a single person, whom you had to tell all your thoughts, without possibility of concealment.

" } }, { "_id": "qRWfvgJG75ESLRNu9", "title": "The Crackpot Offer", "pageUrl": "https://www.lesswrong.com/posts/qRWfvgJG75ESLRNu9/the-crackpot-offer", "postedAt": "2007-09-08T14:32:49.000Z", "baseScore": 130, "voteCount": 109, "commentCount": 73, "url": null, "contents": { "documentId": "qRWfvgJG75ESLRNu9", "html": "\n\n\n\n \n\n \n\n

When I was very young—I think thirteen or maybe fourteen—I thought I had found a disproof of Cantor’s Diagonal Argument, a famous theorem which demonstrates that the real numbers outnumber the rational numbers. Ah, the dreams of fame and glory that danced in my head!


My idea was that since each whole number can be decomposed into a bag of powers of 2, it was possible to map the whole numbers onto the set of subsets of whole numbers simply by writing out the binary expansion. The number 13, for example, 1101, would map onto {0, 2, 3}. It took a whole week before it occurred to me that perhaps I should apply Cantor’s Diagonal Argument to my clever construction, and of course it found a counterexample—the binary number (. . . 1111), which does not correspond to any finite whole number.
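Purely as an illustration (this sketch is mine, not part of the original post), the attempted mapping is easy to write down in Python; the helper name to_subset is invented for the example:

```python
def to_subset(n: int) -> set[int]:
    """Map a whole number to the set of bit positions in its binary expansion."""
    return {i for i in range(n.bit_length()) if (n >> i) & 1}

assert to_subset(13) == {0, 2, 3}   # 13 is 1101 in binary, as in the example above

# The diagonal argument still applies: flipping the diagonal of this listing
# yields the infinite bit-string ...1111, i.e. the set of all whole numbers,
# and no finite whole number has infinitely many 1-bits, so the map misses it.
```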


So I found this counterexample, and saw that my attempted disproof was false, along with my dreams of fame and glory.


I was initially a bit disappointed.


The thought went through my mind: “I’ll get that theorem eventually! Someday I’ll disprove Cantor’s Diagonal Argument, even though my first try failed!” I resented the theorem for being obstinately true, for depriving me of my fame and fortune, and I began to look for other disproofs.


And then I realized something. I realized that I had made a mistake, and that, now that I’d spotted my mistake, there was absolutely no reason to suspect the strength of Cantor’s Diagonal Argument any more than other major theorems of mathematics.


I saw then very clearly that I was being offered the opportunity to become a math crank, and to spend the rest of my life writing angry letters in green ink to math professors. (I’d read a book once about math cranks.)


I did not wish this to be my future, so I gave a small laugh, and let it go. I waved Cantor’s Diagonal Argument on with all good wishes, and I did not question it again.


And I don’t remember, now, if I thought this at the time, or if I thought it afterward . . . but what a terribly unfair test to visit upon a child of thirteen. That I had to be that rational, already, at that age, or fail.


The smarter you are, the younger you may be, the first time you have what looks to you like a really revolutionary idea. I was lucky in that I saw the mistake myself; that it did not take another mathematician to point it out to me, and perhaps give me an outside source to blame. I was lucky in that the disproof was simple enough for me to understand. Maybe I would have recovered eventually, otherwise. I’ve recovered from much worse, as an adult. But if I had gone wrong that early, would I ever have developed that skill?


I wonder how many people writing angry letters in green ink were thirteen when they made that first fatal misstep. I wonder how many were promising minds before then.


I made a mistake. That was all. I was not really right, deep down; I did not win a moral victory; I was not displaying ambition or skepticism or any other wondrous virtue; it was not a reasonable error; I was not half right or even the tiniest fraction right. I thought a thought I would never have thought if I had been wiser, and that was all there ever was to it.


If I had been unable to admit this to myself, if I had reinterpreted my mistake as virtuous, if I had insisted on being at least a little right for the sake of pride, then I would not have let go. I would have gone on looking for a flaw in the Diagonal Argument. And, sooner or later, I might have found one.


Until you admit you were wrong, you cannot get on with your life; your self-image will still be bound to the old mistake.


Whenever you are tempted to hold on to a thought you would never have thought if you had been wiser, you are being offered the opportunity to become a crackpot—even if you never write any angry letters in green ink. If no one bothers to argue with you, or if you never tell anyone your idea, you may still be a crackpot. It’s the clinging that defines it.


It’s not true. It’s not true deep down. It’s not half-true or even a little true. It’s nothing but a thought you should never have thought. Not every cloud has a silver lining. Human beings make mistakes, and not all of them are disguised successes. Human beings make mistakes; it happens, that’s all. Say “oops,” and get on with your life.

\n\n" } }, { "_id": "bMkCEZoBNhgRBtzoj", "title": "Anchoring and Adjustment", "pageUrl": "https://www.lesswrong.com/posts/bMkCEZoBNhgRBtzoj/anchoring-and-adjustment", "postedAt": "2007-09-07T21:33:51.000Z", "baseScore": 85, "voteCount": 71, "commentCount": 22, "url": null, "contents": { "documentId": "bMkCEZoBNhgRBtzoj", "html": "\n\n\n\n \n\n \n\n

Suppose I spin a Wheel of Fortune device as you watch, and it comes up pointing to 65. Then I ask: Do you think the percentage of countries in the United Nations that are in Africa is above or below this number? What do you think is the percentage of UN countries that are in Africa? Take a moment to consider these two questions yourself, if you like, and please don’t Google.


Also, try to guess, within five seconds, the value of the following arithmetical expression. Five seconds. Ready? Set . . . Go!


1 × 2 × 3 × 4 × 5 × 6 × 7 × 8


Tversky and Kahneman recorded the estimates of subjects who saw the Wheel of Fortune showing various numbers.1 The median estimate of subjects who saw the wheel show 65 was 45%; the median estimate of subjects who saw 10 was 25%.


The current theory for this and similar experiments is that subjects take the initial, uninformative number as their starting point or anchor; and then they adjust upward or downward from their starting estimate until they reach an answer that “sounds plausible”; and then they stop adjusting. This typically results in under-adjustment from the anchor—more distant numbers could also be “plausible,” but one stops at the first satisfying-sounding answer.


Similarly, students shown “1 × 2 × 3 × 4 × 5 × 6 × 7 × 8” made a median estimate of 512, while students shown “8 × 7 × 6 × 5 × 4 × 3 × 2 × 1” made a median estimate of 2,250. The motivating hypothesis was that students would try to multiply (or guess-combine) the first few factors of the product, then adjust upward. In both cases the adjustments were insufficient, relative to the true value of 40,320; but the first set of guesses were much more insufficient because they started from a lower anchor.
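As a quick check of the arithmetic quoted above (a minimal sketch; the medians are simply the figures reported in the study, restated here for comparison):

```python
import math

true_value = math.factorial(8)        # 1 x 2 x 3 x 4 x 5 x 6 x 7 x 8
assert true_value == 40320

ascending_median, descending_median = 512, 2250
print(true_value / ascending_median)    # ~78.8: the low-anchor guesses fall far short
print(true_value / descending_median)   # ~17.9: the high-anchor guesses still fall short
```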


Tversky and Kahneman report that offering payoffs for accuracy did not reduce the anchoring effect.


Strack and Mussweiler asked for the year Einstein first visited the United States.2 Completely implausible anchors, such as 1215 or 1992, produced anchoring effects just as large as more plausible anchors such as 1905 or 1939.


There are obvious applications in, say, salary negotiations, or buying a car. I won’t suggest that you exploit it, but watch out for exploiters.


And watch yourself thinking, and try to notice when you are adjusting a figure in search of an estimate.


Debiasing manipulations for anchoring have generally proved not very effective. I would suggest these two: First, if the initial guess sounds implausible, try to throw it away entirely and come up with a new estimate, rather than sliding from the anchor. But this in itself may not be sufficient—subjects instructed to avoid anchoring still seem to do so.3 So, second, even if you are trying the first method, try also to think of an anchor in the opposite direction—an anchor that is clearly too small or too large, instead of too large or too small—and dwell on it briefly.


1Amos Tversky and Daniel Kahneman, “Judgment Under Uncertainty: Heuristics and Biases,” Science 185, no. 4157 (1974): 1124–1131.


2Fritz Strack and Thomas Mussweiler, “Explaining the Enigmatic Anchoring Effect: Mechanisms of Selective Accessibility,” Journal of Personality and Social Psychology 73, no. 3 (1997): 437–446.


3George A. Quattrone et al., “Explorations in Anchoring: The Effects of Prior Range, Anchor Extremity, and Suggestive Hints” (Unpublished manuscript, Stanford University, 1981).

\n\n" } }, { "_id": "Ga2HSwf9iQe64JwAa", "title": "Why is the Future So Absurd?", "pageUrl": "https://www.lesswrong.com/posts/Ga2HSwf9iQe64JwAa/why-is-the-future-so-absurd", "postedAt": "2007-09-07T08:42:13.000Z", "baseScore": 52, "voteCount": 42, "commentCount": 17, "url": null, "contents": { "documentId": "Ga2HSwf9iQe64JwAa", "html": "

Followup to:  Stranger than History, Absurdity Heuristic / Absurdity Bias


Why is the future more absurd than people seem to expect?  (That is:  Why, historically, has the future so often turned out to be more "absurd" than people seem to have expected?)


One obvious reason is hindsight bias.  Hindsight does not just cause people to severely underestimate how much they would have been surprised.  Hindsight also leads people to overestimate how much attention they would have paid to the key factors, the factors that turned out to be important.  As R. H. Tawney put it:

"Historians\ngive an appearance of inevitability to an existing order by dragging into\nprominence the forces which have triumphed and thrusting into the background\nthose which they have swallowed up."


When people look at historical changes and think "I could have predicted X" or "You could have predicted X if you looked at factors 1, 2, and 3"; then they forget that people did not, in fact, predict X, perhaps because they were distracted by factors 4 through 117.  People read history books, see coherent narratives, and think that's how Time works.  Underestimating the surprise of the present, they overestimate the predictability of the future.


I suspect that a major factor contributing to absurdity bias is that, when we look over history, we see changes away from absurd conditions such as everyone being a peasant farmer and women not having the vote, toward normal conditions like a majority middle class and equal rights.  When people look at history, they see a series of normalizations.  They learn the rule, "The future grows ever less absurd over time."

Perhaps one way to comprehend the bizarreness of the future would be to try and imagine historical changes occurring in reverse - how absurd would it be if all your electrical appliances suddenly disappeared, or you were transformed into a peasant farmer?  Even if the future is nicer than the past, it will feel at least that absurd.


The correspondence bias of social psychology may also play a role in how we fail to learn from history - or so my own experience suggests.  When we read about the strange behaviors of people in other eras, we may see them as people with a disposition to that strange behavior, rather than properly comprehending the strangeness of the times.  In the 16th century, one popular entertainment was setting a cat on fire.  If you think to yourself \"What horrible people they must be!\" then you have, to the same extent, diminished your appreciation of what horrible times they lived in.


We see at least some social and technological changes during our own lifetime.  We do have some experience of genuine future shock.  Why wouldn't this be enough to extrapolate forward?


According to Ray Kurzweil's thesis of accelerating change, our intuitions about the future are linear - we expect around as much change as occurred in the past - but technological change feeds on itself, and therefore has a positive second derivative.  We should expect more technological change in the future than we have seen in the past, and insofar as technology drives cultural change, we should expect more cultural change too.


Or that, in my opinion, is the strongest version of Kurzweil's theory that can be put forward.  Kurzweil dwells on Moore's Law and smoothly predictable exponential curves, but this seems to me both iffy and unnecessary.  A curve does not need to be smooth or exponential to have a positive second derivative.  And our cultural sensitivity to, say, computing power, is probably logarithmic anyway, obeying Weber's Law - a 20% increase in computing power probably feels the same whether it's from 1MHz to 1.2MHz, or 2GHz to 2.4GHz.  In which case, people extrapolating the future "linearly" should get it pretty much correct.
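A minimal sketch of that Weber's Law point, under the assumption that perceived change tracks the logarithm of the ratio (the numbers are the ones in the paragraph above):

```python
import math

# 1 MHz -> 1.2 MHz versus 2 GHz -> 2.4 GHz: identical ratios, so identical
# perceived change under a logarithmic (Weber's Law) sensitivity.
assert math.isclose(math.log(1.2e6 / 1.0e6), math.log(2.4e9 / 2.0e9))
```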


But if you pull back and view the last few millennia, not just the last few decades, the strength of the core idea becomes obvious - technological change does feed on itself and therefore does speed up.


I would actually question Kurzweil's assertion that people extrapolate the past linearly into the future.  Kurzweil may be too optimistic here.  As discussed earlier, dwellers on flood plains do not extrapolate from small floods to large floods; instead, small floods set a perceived upper bound on risk.  I suspect that when people try to visualize the strangeness of the future, they focus on a single possible change, of no greater magnitude than the largest single change they remember in their own lifetime.


The real future is not composed of single developments, but many developments together.  Even if one change can pass the futurism filter, to suppose three absurdities simultaneously - never mind twenty - would entirely overload the absurdity meter.  This may also explain why future projections get wronger and wronger as they go further out.  People seem to imagine futures that are minimally counterintuitive, with one or two interesting changes to make a good story, rather than a realistic number of changes that would overload their extrapolation abilities.


What other biases could lead us to underestimate the absurdity of the future?

\n\n" } }, { "_id": "ax2ip3QPx4Hh24gER", "title": "So much for the factual public debate that democracies completely fail to be built on", "pageUrl": "https://www.lesswrong.com/posts/ax2ip3QPx4Hh24gER/so-much-for-the-factual-public-debate-that-democracies", "postedAt": "2007-09-06T15:07:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "ax2ip3QPx4Hh24gER", "html": "

Publicly refuting false claims often reinforces their believed truth in the minds of the public, who will even credit the misinformation to the organisation denying it.


A Washington Post article reports an experiment where people were given fliers labelling common ideas about influenza ‘true’ or ‘false’. Half an hour later, older people already remembered 28% of the falsehoods as facts, and three days later 40%, by which time younger people had caught up to the older people’s half-hour figure. It seems that the repetition of the false information helps to ingrain it, while the extra information – that it is false – is soon lost.


So how do you have factual public debate when whoever starts it automatically has a major advantage? Denial and silence can have the same effect as agreeing, but denying is still best. A good proportion of people (a few days later at least) do remember whether their facts are false or not. Though as TWP discusses, it’s probably best to deny things without actually mentioning them if possible. That is, fiercely support something mutually exclusive.


As noted in the discussion of Overcoming Bias’ post on this, if people have anything at stake they might pay more attention. While this has problems of its own (discussed there), democracy is one big, obvious case where it matters that people have accurate information on topics that don’t directly concern them. This is just another in a long list of problems with the kinds of democratic systems we use, but in conjunction with rational ignorance it makes the chance of voters having a clue about anything not immediately concerning them both tiny and tied firmly to the chance that the first buyer of lots of ads happens to be right.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "R8cpqD3NA4rZxRdQ4", "title": "Availability", "pageUrl": "https://www.lesswrong.com/posts/R8cpqD3NA4rZxRdQ4/availability", "postedAt": "2007-09-06T06:55:41.000Z", "baseScore": 205, "voteCount": 189, "commentCount": 23, "url": null, "contents": { "documentId": "R8cpqD3NA4rZxRdQ4", "html": "

The availability heuristic is judging the frequency or probability of an event by the ease with which examples of the event come to mind.

A famous 1978 study by Lichtenstein, Slovic, Fischhoff, Layman, and Combs, “Judged Frequency of Lethal Events,” studied errors in quantifying the severity of risks, or judging which of two dangers occurred more frequently. Subjects thought that accidents caused about as many deaths as disease; thought that homicide was a more frequent cause of death than suicide. Actually, diseases cause about sixteen times as many deaths as accidents, and suicide is twice as frequent as homicide.

An obvious hypothesis to account for these skewed beliefs is that murders are more likely to be talked about than suicides—thus, someone is more likely to recall hearing about a murder than hearing about a suicide. Accidents are more dramatic than diseases—perhaps this makes people more likely to remember, or more likely to recall, an accident. In 1979, a followup study by Combs and Slovic showed that the skewed probability judgments correlated strongly (0.85 and 0.89) with skewed reporting frequencies in two newspapers. This doesn’t disentangle whether murders are more available to memory because they are more reported-on, or whether newspapers report more on murders because murders are more vivid (hence also more remembered). But either way, an availability bias is at work.

Selective reporting is one major source of availability biases. In the ancestral environment, much of what you knew, you experienced yourself; or you heard it directly from a fellow tribe-member who had seen it. There was usually at most one layer of selective reporting between you, and the event itself. With today’s Internet, you may see reports that have passed through the hands of six bloggers on the way to you—six successive filters. Compared to our ancestors, we live in a larger world, in which far more happens, and far less of it reaches us—a much stronger selection effect, which can create much larger availability biases.

In real life, you’re unlikely to ever meet Bill Gates. But thanks to selective reporting by the media, you may be tempted to compare your life success to his—and suffer hedonic penalties accordingly. The objective frequency of Bill Gates is 0.00000000015, but you hear about him much more often. Conversely, 19% of the planet lives on less than $1/day, and I doubt that one fifth of the blog posts you read are written by them.
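That frequency is presumably just one person out of the world's population at the time, roughly 6.6 billion people (a small check, using an approximate 2007 figure):

```python
world_population_2007 = 6.6e9      # approximate figure, for illustration only
print(1 / world_population_2007)   # ~1.5e-10, i.e. about 0.00000000015
```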

Using availability seems to give rise to an absurdity bias; events that have never happened are not recalled, and hence deemed to have probability zero. When no flooding has recently occurred (and yet the probabilities are still fairly calculable), people refuse to buy flood insurance even when it is heavily subsidized and priced far below an actuarially fair value. Kunreuther et al. suggest underreaction to threats of flooding may arise from “the inability of individuals to conceptualize floods that have never occurred . . . Men on flood plains appear to be very much prisoners of their experience . . . Recently experienced floods appear to set an upward bound to the size of loss with which managers believe they ought to be concerned.”1

Burton et al. report that when dams and levees are built, they reduce the frequency of floods, and thus apparently create a false sense of security, leading to reduced precautions.2 While building dams decreases the frequency of floods, damage per flood is afterward so much greater that average yearly damage increases.

The wise would extrapolate from a memory of small hazards to the possibility of large hazards. Instead, past experience of small hazards seems to set a perceived upper bound on risk. A society well-protected against minor hazards takes no action against major risks, building on flood plains once the regular minor floods are eliminated. A society subject to regular minor hazards treats those minor hazards as an upper bound on the size of the risks, guarding against regular minor floods but not occasional major floods.

Memory is not always a good guide to probabilities in the past, let alone in the future.


1 Howard Kunreuther, Robin Hogarth, and Jacqueline Meszaros, “Insurer Ambiguity and Market Failure,” Journal of Risk and Uncertainty 7 (1 1993): 71–87.

2 Ian Burton, Robert W. Kates, and Gilbert F. White, The Environment as Hazard, 1st ed. (New York: Oxford University Press, 1978).

" } }, { "_id": "P792Z4QA9dzcLdKkE", "title": "Absurdity Heuristic, Absurdity Bias", "pageUrl": "https://www.lesswrong.com/posts/P792Z4QA9dzcLdKkE/absurdity-heuristic-absurdity-bias", "postedAt": "2007-09-05T03:20:06.000Z", "baseScore": 59, "voteCount": 46, "commentCount": 10, "url": null, "contents": { "documentId": "P792Z4QA9dzcLdKkE", "html": "

Followup to:  Stranger Than History, Robin's post What Evidence Ease of Imagination?


I've been pondering lately the notion of \"absurdity\" - wondering what exactly goes on in people's minds when they utter the adjective \"absurd\" or the objection \"Absurd!\"


If there is an absurdity heuristic, it would seem, at first glance, to be the mirror image of the well-known representativeness heuristic.  The less X resembles Y, or the more X violates typicality assumptions of Y, the less probable that X is the product, explanation, or outcome of Y.  A sequence of events is less probable when it involves an egg unscrambling itself, water flowing upward, machines thinking or dead people coming back to life.  Since human psychology is not a pure structure of quantitative probabilities, it is easy to imagine that the absurdity heuristic is separate from the representativeness heuristic - implemented by separate absurdity-detecting brainware.


I suspect people may also be more sensitive to \"absurdity\" that invalidates a plan or indicates cheating.  Consider the difference between \"I saw a little blue man yesterday, walking down the street\" versus \"I'm going to jump off this cliff and a little blue man will catch me on the way down\" or \"If you give me your wallet, a little blue man will bring you a pot of gold.\"  (I'm thinking, in particular, about how projections of future technology are often met by the objection, \"That's absurd!\", and how the objection seems more violent than usual in this case.)

As Robin observed, a heuristic is not necessarily a bias.  The vast majority of objects do not fall upward.  And yet helium balloons are an exception.  When are exceptions predictable?


I can think of three major circumstances where the absurdity heuristic gives rise to an absurdity bias:


The first case is when we have information about underlying laws which should override surface reasoning.  If you know why most objects fall, and you can calculate how fast they fall, then your calculation that a helium balloon should rise at such-and-such a rate, ought to strictly override the absurdity of an object falling upward.  If you can do deep calculations, you have no need for qualitative surface reasoning.  But we may find it hard to attend to mere calculations in the face of surface absurdity, until we see the balloon rise.


(In 1913, Lee de Forest was accused of fraud for selling stock in an impossible endeavor, the Radio Telephone Company:  \"De Forest has said in many newspapers and over his signature that it would be possible to transmit human voice across the Atlantic before many years. Based on these absurd and deliberately misleading statements, the misguided public...has been persuaded to purchase stock in his company...\")


The second case is a generalization of the first - attending to surface absurdity in the face of abstract information that ought to override it.  If people cannot accept that studies show that marginal spending on medicine has zero net effect, because it seems absurd - violating the surface rule that  \"medicine cures\" - then I would call this \"absurdity bias\".  There are many reasons that people may fail to attend to abstract information or integrate it incorrectly.  I think it worth distinguishing cases where the failure arises from absurdity detectors going off.


The third case is when the absurdity heuristic simply doesn't work - the process is not stable in its surface properties over the range of extrapolation - and yet people use it anyway.  The future is usually \"absurd\" - it is unstable in its surface rules over fifty-year intervals.


This doesn't mean that anything can happen.  Of all the events in the 20th century that would have been \"absurd\" by the standards of the 19th century, not a single one - to the best of our knowledge - violated the law of conservation of energy, which was known in 1850.  Reality is not up for grabs; it works by rules even more precise than the ones we believe in instinctively.


The point is not that you can say anything you like about the future and no one can contradict you; but, rather, that the particular practice of crying \"Absurd!\" has historically been an extremely poor heuristic for predicting the future.  Over the last few centuries, the absurdity heuristic has done worse than maximum entropy - ruled out the actual outcomes as being far too absurd to be considered.  You would have been better off saying \"I don't know\".

" } }, { "_id": "xxvbPJ8tXmwP73fFM", "title": "Markets are a kind of electrochemical cell", "pageUrl": "https://www.lesswrong.com/posts/xxvbPJ8tXmwP73fFM/markets-are-a-kind-of-electrochemical-cell", "postedAt": "2007-09-04T16:49:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "xxvbPJ8tXmwP73fFM", "html": "

There are two processes taking place: adding value to units and using it up. When everything is mixed together and these processes are happening in one place, they happen slowly (think of subsistence production by consumers). Separate them into their own containers and they happen faster (think of production in factories and consumption in homes). The containers must be joined by a channel for units to move according to their value, and a wire for charges to balance that. The same value that is removed from particles in one container is added to those in the other.


The extra energy pushing the charges and value-laden particles between the containers can be used to run things like light bulbs and welfare systems. Alternatively it can be used to run a small heater to warm up the reaction, or an advertising industry.


While the charges can move around indefinitely, the particles eventually run out. Then it’s all over. With any luck/sensible policy the metaphor doesn’t continue this far.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "L22jhyY9ocXQNLqyE", "title": "\"Science\" as Curiosity-Stopper", "pageUrl": "https://www.lesswrong.com/posts/L22jhyY9ocXQNLqyE/science-as-curiosity-stopper", "postedAt": "2007-09-03T20:04:40.000Z", "baseScore": 168, "voteCount": 156, "commentCount": 61, "url": null, "contents": { "documentId": "L22jhyY9ocXQNLqyE", "html": "

Imagine that I, in full view of live television cameras, raised my hands and chanted abracadabra and caused a brilliant light to be born, flaring in empty space beyond my outstretched hands. Imagine that I committed this act of blatant, unmistakeable sorcery under the full supervision of James Randi and all skeptical armies. Most people, I think, would be fairly curious as to what was going on.

But now suppose instead that I don’t go on television. I do not wish to share the power, nor the truth behind it. I want to keep my sorcery secret. And yet I also want to cast my spells whenever and wherever I please. I want to cast my brilliant flare of light so that I can read a book on the train—without anyone becoming curious. Is there a spell that stops curiosity?

Yes indeed! Whenever anyone asks “How did you do that?” I just say “Science!”

It’s not a real explanation, so much as a curiosity-stopper. It doesn’t tell you whether the light will brighten or fade, change color in hue or saturation, and it certainly doesn’t tell you how to make a similar light yourself. You don’t actually know anything more than you knew before I said the magic word. But you turn away, satisfied that nothing unusual is going on.

Better yet, the same trick works with a standard light switch.

Flip a switch and a light bulb turns on. Why?

In school, one is taught that the password to the light bulb is “Electricity!” By now, I hope, you’re wary of marking the light bulb “understood” on such a basis. Does saying “Electricity!” let you do calculations that will control your anticipation of experience? There is, at the least, a great deal more to learn.1

If you thought the light bulb was scientifically inexplicable, it would seize the entirety of your attention. You would drop whatever else you were doing, and focus on that light bulb.

But what does the phrase “scientifically explicable” mean? It means that someone else knows how the light bulb works. When you are told the light bulb is “scientifically explicable,” you don’t know more than you knew earlier; you don’t know whether the light bulb will brighten or fade. But because someone else knows, it devalues the knowledge in your eyes. You become less curious.

Someone is bound to say, “If the light bulb were unknown to science, you could gain fame and fortune by investigating it.” But I’m not talking about greed. I’m not talking about career ambition. I’m talking about the raw emotion of curiosity—the feeling of being intrigued. Why should your curiosity be diminished because someone else, not you, knows how the light bulb works? Is this not spite? It’s not enough for you to know; other people must also be ignorant, or you won’t be happy?

There are goods that knowledge may serve besides curiosity, such as the social utility of technology. For these instrumental goods, it matters whether some other entity in local space already knows. But for my own curiosity, why should it matter?

Besides, consider the consequences if you permit “Someone else knows the answer” to function as a curiosity-stopper. One day you walk into your living room and see a giant green elephant, seemingly hovering in midair, surrounded by an aura of silver light.

“What the heck?” you say.

And a voice comes from above the elephant, saying,

Somebody already knows why this elephant is here.

“Oh,” you say, “in that case, never mind,” and walk on to the kitchen.

I don’t know the grand unified theory for this universe’s laws of physics. I also don’t know much about human anatomy with the exception of the brain. I couldn’t point out on my body where my kidneys are, and I can’t recall offhand what my liver does.2

Should I, so far as curiosity is concerned, be more intrigued by my ignorance of the ultimate laws of physics, than the fact that I don’t know much about what goes on inside my own body?

If I raised my hands and cast a light spell, you would be intrigued. Should you be any less intrigued by the very fact that I raised my hands? When you raise your arm and wave a hand around, this act of will is coordinated by (among other brain areas) your cerebellum. I bet you don’t know how the cerebellum works. I know a little—though only the gross details, not enough to perform calculations . . . but so what? What does that matter, if you don’t know? Why should there be a double standard of curiosity for sorcery and hand motions?

Look at yourself in the mirror. Do you know what you’re looking at? Do you know what looks out from behind your eyes? Do you know what you are? Some of that answer Science knows, and some of it Science does not. But why should that distinction matter to your curiosity, if you don’t know?

Do you know how your knees work? Do you know how your shoes were made? Do you know why your computer monitor glows? Do you know why water is wet?

The world around you is full of puzzles. Prioritize, if you must. But do not complain that cruel Science has emptied the world of mystery. With reasoning such as that, I could get you to overlook an elephant in your living room.


1 Physicists should ignore this paragraph and substitute a problem in evolutionary theory, where the substance of the theory is again in calculations that few people know how to perform.

2 I am not proud of this. Alas, with all the math I need to study, I’m not likely to learn anatomy anytime soon.

" } }, { "_id": "HGmMEq36SZxu8gzT3", "title": "Who needs democracy, free speech and all that rubbish when you can prescribe the values of your citizens?", "pageUrl": "https://www.lesswrong.com/posts/HGmMEq36SZxu8gzT3/who-needs-democracy-free-speech-and-all-that-rubbish-when", "postedAt": "2007-09-03T14:58:00.000Z", "baseScore": 1, "voteCount": 0, "commentCount": 0, "url": null, "contents": { "documentId": "HGmMEq36SZxu8gzT3", "html": "

The Australian Government has released a list of ten values it considers essential to being an Australian citizen.


While these principles are relatively inoffensive, letting the government prescribe what values citizens should hold is a frightening road to be going down! The point of democracy is for citizens to decide what the government’s values should be. This means nothing if the government chooses citizens’ values.


Incidentally, I don’t value any of those listed per se - only as general principles that are usually upshots of what I do value. There are times I would act against most of them for values not on this list. I also don’t know what our national flower is (though I’ve never found that a barrier to integrating with Australian culture). I hope I get deported to somewhere where policy is less of a joke.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "yxvi9RitzZDpqn6Yh", "title": "Explain/Worship/Ignore?", "pageUrl": "https://www.lesswrong.com/posts/yxvi9RitzZDpqn6Yh/explain-worship-ignore", "postedAt": "2007-09-02T20:01:45.000Z", "baseScore": 124, "voteCount": 115, "commentCount": 88, "url": null, "contents": { "documentId": "yxvi9RitzZDpqn6Yh", "html": "\n\n\n\n \n\n \n\n

As our tribe wanders through the grasslands, searching for fruit trees and prey, it happens every now and then that water pours down from the sky.

\n\n

“Why does water sometimes fall from the sky?” I ask the bearded wise man of our tribe.

\n\n

He thinks for a moment, this question having never occurred to him before, and then says, “From time to time, the sky spirits battle, and when they do, their blood drips from the sky.”

\n\n

“Where do the sky spirits come from?” I ask.

\n\n

His voice drops to a whisper. “From the before time. From the long long ago.”

\n\n

When it rains, and you don’t know why, you have several options. First, you could simply not ask why—not follow up on the question, or never think of the question in the first place. This is the Ignore command, which the bearded wise man originally selected. Second, you could try to devise some sort of explanation, the Explain command, as the bearded man did in response to your first question. Third, you could enjoy the sensation of mysteriousness—the Worship command.

\n\n

Now, as you are bound to notice from this story, each time you select Explain, the best-case scenario is that you get an explanation, such as “sky spirits.” But then this explanation itself is subject to the same dilemma—Explain, Worship, or Ignore? Each time you hit Explain, science grinds for a while, returns an explanation, and then another dialog box pops up. As good rationalists, we feel duty-bound to keep hitting Explain, but it seems like a road that has no end.

\n\n

You hit Explain for life, and get chemistry; you hit Explain for chemistry, and get atoms; you hit Explain for atoms, and get electrons and nuclei; you hit Explain for nuclei, and get quantum chromodynamics and quarks; you hit Explain for how the quarks got there, and get back the Big Bang . . .

\n\n

We can hit Explain for the Big Bang, and wait while science grinds through its process, and maybe someday it will return a perfectly good explanation. But then that will just bring up another dialog box. So, if we continue long enough, we must come to a special dialog box, a new option, an Explanation That Needs No Explanation, a place where the chain ends—and this, maybe, is the only explanation worth knowing.

\n\n

There—I just hit Worship.

\n\n

Never forget that there are many more ways to worship something than lighting candles around an altar.

\n\n

If I’d said, “Huh, that does seem paradoxical. I wonder how the apparent paradox is resolved?” then I would have hit Explain, which does sometimes take a while to produce an answer.

\n\n

And if the whole issue seems to you unimportant, or irrelevant, or if you’d rather put off thinking about it until tomorrow, then you have hit Ignore.

\n\n

Select your option wisely.

\n\n" } }, { "_id": "h3vdnR34ZvohDEFT5", "title": "Stranger Than History", "pageUrl": "https://www.lesswrong.com/posts/h3vdnR34ZvohDEFT5/stranger-than-history", "postedAt": "2007-09-01T18:57:58.000Z", "baseScore": 142, "voteCount": 134, "commentCount": 335, "url": null, "contents": { "documentId": "h3vdnR34ZvohDEFT5", "html": "\n\n\n\n \n\n \n\n

Suppose I told you that I knew for a fact that the following statements were true:

\n\n \n\n

You’d think I was crazy, right?


Now suppose it were the year 1901, and you had to choose between believing those statements I have just offered, and believing statements like the following:

\n\n \n\n

Based on a comment of Robin Hanson’s: “I wonder if one could describe in enough detail a fictional story of an alternative reality, a reality that our ancestors could not distinguish from the truth, in order to make it very clear how surprising the truth turned out to be.”1


1Source: http://lesswrong.com/lw/j0/making_history_available/ewg.

\n\n" } }, { "_id": "TLKPj4GDXetZuPDH5", "title": "Making History Available", "pageUrl": "https://www.lesswrong.com/posts/TLKPj4GDXetZuPDH5/making-history-available", "postedAt": "2007-08-31T19:52:31.000Z", "baseScore": 205, "voteCount": 174, "commentCount": 87, "url": null, "contents": { "documentId": "TLKPj4GDXetZuPDH5", "html": "

There is a habit of thought which I call the logical fallacy of generalization from fictional evidence. Journalists who, for example, talk about the Terminator movies in a report on AI, do not usually treat Terminator as a prophecy or fixed truth. But the movie is recalled—is available—as if it were an illustrative historical case. As if the journalist had seen it happen on some other planet, so that it might well happen here.

There is an inverse error to generalizing from fictional evidence: failing to be sufficiently moved by historical evidence. The trouble with generalizing from fictional evidence is that it is fiction—it never actually happened. It’s not drawn from the same distribution as this, our real universe; fiction differs from reality in systematic ways. But history has happened, and should be available.

In our ancestral environment, there were no movies; what you saw with your own eyes was true. Is it any wonder that fictions we see in lifelike moving pictures have too great an impact on us? Conversely, things that really happened, we encounter as ink on paper; they happened, but we never saw them happen. We don’t remember them happening to us.

The inverse error is to treat history as mere story, process it with the same part of your mind that handles the novels you read. You may say with your lips that it is “truth,” rather than “fiction,” but that doesn’t mean you are being moved as much as you should be. Many biases involve being insufficiently moved by dry, abstract information.

When I finally realized whose shoes I was standing in, after having given a Mysterious Answer to a mysterious question, there was a sudden shock of unexpected connection with the past. I realized that the invention and destruction of vitalism—which I had only read about in books—had actually happened to real people, who experienced it much the same way I experienced the invention and destruction of my own mysterious answer. And I also realized that if I had actually experienced the past—if I had lived through past scientific revolutions myself, rather than reading about them in history books—I probably would not have made the same mistake again. I would not have come up with another mysterious answer; the first thousand lessons would have hammered home the moral.

So (I thought), to feel sufficiently the force of history, I should try to approximate the thoughts of an Eliezer who had lived through history—I should try to think as if everything I read about in history books had actually happened to me.1 I should immerse myself in history, imagine living through eras I only saw as ink on paper.

Why should I remember the Wright Brothers’ first flight? I was not there. But as a rationalist, could I dare to not remember, when the event actually happened? Is there so much difference between seeing an event through your eyes—which is actually a causal chain involving reflected photons, not a direct connection—and seeing an event through a history book? Photons and history books both descend by causal chains from the event itself.

I had to overcome the false amnesia of being born at a particular time. I had to recall—make available—all the memories, not just the memories which, by mere coincidence, belonged to myself and my own era.

The Earth became older, of a sudden.

To my former memory, the United States had always existed—there was never a time when there was no United States. I had not remembered, until that time, how the Roman Empire rose, and brought peace and order, and lasted through so many centuries, until I forgot that things had ever been otherwise; and yet the Empire fell, and barbarians overran my city, and the learning that I had possessed was lost. The modern world became more fragile to my eyes; it was not the first modern world.

So many mistakes, made over and over and over again, because I did not remember making them, in every era I never lived . . .

And to think, people sometimes wonder if overcoming bias is important.

Don’t you remember how many times your biases have killed you? You don’t? I’ve noticed that sudden amnesia often follows a fatal mistake. But take it from me, it happened. I remember; I wasn’t there.

So the next time you doubt the strangeness of the future, remember how you were born in a hunter-gatherer tribe ten thousand years ago, when no one knew of Science at all. Remember how you were shocked, to the depths of your being, when Science explained the great and terrible sacred mysteries that you once revered so highly. Remember how you once believed that you could fly by eating the right mushrooms, and then you accepted with disappointment that you would never fly, and then you flew. Remember how you had always thought that slavery was right and proper, and then you changed your mind. Don’t imagine how you could have predicted the change, for that is amnesia. Remember that, in fact, you did not guess. Remember how, century after century, the world changed in ways you did not guess.

Maybe then you will be less shocked by what happens next.


1 With appropriate reweighting for the availability bias of history books—I should remember being a thousand peasants for every ruler.

" } }, { "_id": "97Y7Jwrzxyfzz3Ad2", "title": "Failing to Learn from History", "pageUrl": "https://www.lesswrong.com/posts/97Y7Jwrzxyfzz3Ad2/failing-to-learn-from-history", "postedAt": "2007-08-30T20:22:50.000Z", "baseScore": 126, "voteCount": 113, "commentCount": 11, "url": null, "contents": { "documentId": "97Y7Jwrzxyfzz3Ad2", "html": "\n\n\n\n \n\n \n\n

Once upon a time, in my reckless youth, when I knew not the Way of Bayes, I gave a Mysterious Answer to a mysterious-seeming question. Many failures occurred in sequence, but one mistake stands out as most critical: My younger self did not realize that solving a mystery should make it feel less confusing. I was trying to explain a Mysterious Phenomenon—which to me meant providing a cause for it, fitting it into an integrated model of reality. Why should this make the phenomenon less Mysterious, when that is its nature? I was trying to explain the Mysterious Phenomenon, not render it (by some impossible alchemy) into a mundane phenomenon, a phenomenon that wouldn’t even call out for an unusual explanation in the first place.


As a Traditional Rationalist, I knew the historical tales of astrologers and astronomy, of alchemists and chemistry, of vitalists and biology. But the Mysterious Phenomenon was not like this. It was something new, something stranger, something more difficult, something that ordinary science had failed to explain for centuries—


—as if stars and matter and life had not been mysteries for hundreds of years and thousands of years, from the dawn of human thought right up until science finally solved them—


We learn about astronomy and chemistry and biology in school, and it seems to us that these matters have always been the proper realm of science, that they have never been mysterious. When science dares to challenge a new Great Puzzle, the children of that generation are skeptical, for they have never seen science explain something that feels mysterious to them. Science is only good for explaining scientific subjects, like stars and matter and life.


I thought the lesson of history was that astrologers and alchemists and vitalists had an innate character flaw, a tendency toward mysterianism, which led them to come up with mysterious explanations for non-mysterious subjects. But surely, if a phenomenon really was very weird, a weird explanation might be in order?


It was only afterward, when I began to see the mundane structure inside the mystery, that I realized whose shoes I was standing in. Only then did I realize how reasonable vitalism had seemed at the time, how surprising and embarrassing had been the universe’s reply of, “Life is mundane, and does not need a weird explanation.”


We read history but we don’t live it, we don’t experience it. If only I had personally postulated astrological mysteries and then discovered Newtonian mechanics, postulated alchemical mysteries and then discovered chemistry, postulated vitalistic mysteries and then discovered biology. I would have thought of my Mysterious Answer and said to myself: No way am I falling for that again.

\n\n" } }, { "_id": "DwtYPRuCxpXTrzG9m", "title": "My Wild and Reckless Youth", "pageUrl": "https://www.lesswrong.com/posts/DwtYPRuCxpXTrzG9m/my-wild-and-reckless-youth", "postedAt": "2007-08-30T01:52:35.000Z", "baseScore": 120, "voteCount": 108, "commentCount": 53, "url": null, "contents": { "documentId": "DwtYPRuCxpXTrzG9m", "html": "\n\n\n\n \n\n \n\n

It is said that parents do all the things they tell their children not to do, which is how they know not to do them.


Long ago, in the unthinkably distant past, I was a devoted Traditional Rationalist, conceiving myself skilled according to that kind, yet I knew not the Way of Bayes. When the young Eliezer was confronted with a mysterious-seeming question, the precepts of Traditional Rationality did not stop him from devising a Mysterious Answer. It is, by far, the most embarrassing mistake I made in my life, and I still wince to think of it.


What was my mysterious answer to a mysterious question? This I will not describe, for it would be a long tale and complicated. I was young, and a mere Traditional Rationalist who knew not the teachings of Tversky and Kahneman. I knew about Occam’s Razor, but not the conjunction fallacy. I thought I could get away with thinking complicated thoughts myself, in the literary style of the complicated thoughts I read in science books, not realizing that correct complexity is only possible when every step is pinned down overwhelmingly. Today, one of the chief pieces of advice I give to aspiring young rationalists is “Do not attempt long chains of reasoning or complicated plans.”


Nothing more than this need be said: even after I invented my “answer,” the phenomenon was still a mystery unto me, and possessed the same quality of wondrous impenetrability that it had at the start.


Make no mistake, that younger Eliezer was not stupid. All the errors of which the young Eliezer was guilty are still being made today by respected scientists in respected journals. It would have taken a subtler skill to protect him than ever he was taught as a Traditional Rationalist.


Indeed, the young Eliezer diligently and painstakingly followed the injunctions of Traditional Rationality in the course of going astray.


As a Traditional Rationalist, the young Eliezer was careful to ensure that his Mysterious Answer made a bold prediction of future experience. Namely, I expected future neurologists to discover that neurons were exploiting quantum gravity, a la Sir Roger Penrose. This required neurons to maintain a certain degree of quantum coherence, which was something you could look for, and find or not find. Either you observe that or you don’t, right?


But my hypothesis made no retrospective predictions. According to Traditional Science, retrospective predictions don’t count—so why bother making them? To a Bayesian, on the other hand, if a hypothesis does not today have a favorable likelihood ratio over “I don’t know,” it raises the question of why you today believe anything more complicated than “I don’t know.” But I knew not the Way of Bayes, so I was not thinking about likelihood ratios or focusing probability density. I had Made a Falsifiable Prediction; was this not the Law?
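
A minimal sketch, with invented numbers, of what a favorable likelihood ratio over “I don’t know” looks like (added here only for illustration; Python is used as notation):

```python
# Illustrative sketch with invented numbers: a hypothesis earns credence today
# only if it assigns the data you already have more probability than the
# "I don't know" baseline does.

def likelihood_ratio(p_data_given_hypothesis, p_data_given_ignorance):
    """Bayes factor of a hypothesis over the 'I don't know' baseline."""
    return p_data_given_hypothesis / p_data_given_ignorance

# Suppose the experiment has four outcomes that ignorance treats as equally
# likely, so the baseline assigns the observed outcome probability 0.25.
p_ignorance = 0.25

# A hypothesis that concentrated probability mass on what was actually observed:
print(likelihood_ratio(0.70, p_ignorance))  # 2.8 -- favorable; evidence for it

# A "hypothesis" that would have explained any of the four outcomes equally well:
print(likelihood_ratio(0.25, p_ignorance))  # 1.0 -- no reason to believe it yet
```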


As a Traditional Rationalist, the young Eliezer was careful not to believe in magic, mysticism, carbon chauvinism, or anything of that sort. I proudly professed of my Mysterious Answer, “It is just physics like all the rest of physics!” As if you could save magic from being a cognitive isomorph of magic, by calling it quantum gravity. But I knew not the Way of Bayes, and did not see the level on which my idea was isomorphic to magic. I gave my allegiance to physics, but this did not save me; what does probability theory know of allegiances? I avoided everything that Traditional Rationality told me was forbidden, but what was left was still magic.


Beyond a doubt, my allegiance to Traditional Rationality helped me get out of the hole I dug myself into. If I hadn’t been a Traditional Rationalist, I would have been completely screwed. But Traditional Rationality still wasn’t enough to get it right. It just led me into different mistakes than the ones it had explicitly forbidden.


When I think about how my younger self very carefully followed the rules of Traditional Rationality in the course of getting the answer wrong, it sheds light on the question of why people who call themselves “rationalists” do not rule the world. You need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes.


Traditional Rationality is taught as an art, rather than a science; you read the biography of famous physicists describing the lessons life taught them, and you try to do what they tell you to do. But you haven’t lived their lives, and half of what they’re trying to describe is an instinct that has been trained into them.


The way Traditional Rationality is designed, it would have been acceptable for me to spend thirty years on my silly idea, so long as I succeeded in falsifying it eventually, and was honest with myself about what my theory predicted, and accepted the disproof when it arrived, et cetera. This is enough to let the Ratchet of Science click forward, but it’s a little harsh on the people who waste thirty years of their lives. Traditional Rationality is a walk, not a dance. It’s designed to get you to the truth eventually, and gives you all too much time to smell the flowers along the way.


Traditional Rationalists can agree to disagree. Traditional Rationality doesn’t have the ideal that thinking is an exact art in which there is only one correct probability estimate given the evidence. In Traditional Rationality, you’re allowed to guess, and then test your guess. But experience has taught me that if you don’t know, and you guess, you’ll end up being wrong.


The Way of Bayes is also an imprecise art, at least the way I’m holding forth upon it. These essays are still fumbling attempts to put into words lessons that would be better taught by experience. But at least there’s underlying math, plus experimental evidence from cognitive psychology on how humans actually think. Maybe that will be enough to cross the stratospherically high threshold required for a discipline that lets you actually get it right, instead of just constraining you into interesting new mistakes.

\n\n" } }, { "_id": "kpRSCH7ALLcb6ucWM", "title": "Say Not \"Complexity\"", "pageUrl": "https://www.lesswrong.com/posts/kpRSCH7ALLcb6ucWM/say-not-complexity", "postedAt": "2007-08-29T04:22:53.000Z", "baseScore": 126, "voteCount": 114, "commentCount": 53, "url": null, "contents": { "documentId": "kpRSCH7ALLcb6ucWM", "html": "

Once upon a time . . .

This is a story from when I first met Marcello, with whom I would later work for a year on AI theory; but at this point I had not yet accepted him as my apprentice. I knew that he competed at the national level in mathematical and computing olympiads, which sufficed to attract my attention for a closer look; but I didn’t know yet if he could learn to think about AI.

I had asked Marcello to say how he thought an AI might discover how to solve a Rubik’s Cube. Not in a preprogrammed way, which is trivial, but rather how the AI itself might figure out the laws of the Rubik universe and reason out how to exploit them. How would an AI invent for itself the concept of an “operator,” or “macro,” which is the key to solving the Rubik’s Cube?

At some point in this discussion, Marcello said: “Well, I think the AI needs complexity to do X, and complexity to do Y—”

And I said, “Don’t say ‘complexity.’ ”

Marcello said, “Why not?”

I said, “Complexity should never be a goal in itself. You may need to use a particular algorithm that adds some amount of complexity, but complexity for the sake of complexity just makes things harder.” (I was thinking of all the people whom I had heard advocating that the Internet would “wake up” and become an AI when it became “sufficiently complex.”)

And Marcello said, “But there’s got to be some amount of complexity that does it.”

I closed my eyes briefly, and tried to think of how to explain it all in words. To me, saying “complexity” simply felt like the wrong move in the AI dance. No one can think fast enough to deliberate, in words, about each sentence of their stream of consciousness; for that would require an infinite recursion. We think in words, but our stream of consciousness is steered below the level of words, by the trained-in remnants of past insights and harsh experience . . .

I said, “Did you read ‘A Technical Explanation of Technical Explanation’?”1

“Yes,” said Marcello.

“Okay,” I said. “Saying ‘complexity’ doesn’t concentrate your probability mass.”

“Oh,” Marcello said, “like ‘emergence.’ Huh. So . . . now I’ve got to think about how X might actually happen . . .”

That was when I thought to myself, “Maybe this one is teachable.”

Complexity is not a useless concept. It has mathematical definitions attached to it, such as Kolmogorov complexity and Vapnik-Chervonenkis complexity. Even on an intuitive level, complexity is often worth thinking about—you have to judge the complexity of a hypothesis and decide if it’s “too complicated” given the supporting evidence, or look at a design and try to make it simpler.
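
As a rough illustration (not from the essay, and only a crude stand-in for Kolmogorov complexity), compressed length gives one computable way to compare the description lengths of two hypotheses:

```python
# Crude sketch: Kolmogorov complexity is uncomputable, but compressed length
# gives a rough, computable proxy for how long a hypothesis takes to describe.
import zlib

def description_length(hypothesis_text):
    """Bytes needed to store the compressed hypothesis text (a crude proxy)."""
    return len(zlib.compress(hypothesis_text.encode("utf-8")))

simple_rule = "three numbers in ascending order"
baroque_rule = ("the first number is even, the second exceeds the first by an "
                "amount divisible by two, and the third is the second plus a "
                "positive even integer unless all three are equal")

print(description_length(simple_rule), description_length(baroque_rule))
# The longer description has to buy sharper predictions to be worth its extra
# complexity; otherwise the simpler hypothesis should be preferred.
```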

But concepts are not useful or useless of themselves. Only usages are correct or incorrect. In the step Marcello was trying to take in the dance, he was trying to explain something for free, get something for nothing. It is an extremely common misstep, at least in my field. You can join a discussion on artificial general intelligence and watch people doing the same thing, left and right, over and over again—constantly skipping over things they don’t understand, without realizing that’s what they’re doing.

In an eyeblink it happens: putting a non-controlling causal node behind something mysterious, a causal node that feels like an explanation but isn’t. The mistake takes place below the level of words. It requires no special character flaw; it is how human beings think by default, how they have thought since the ancient times.

What you must avoid is skipping over the mysterious part; you must linger at the mystery to confront it directly. There are many words that can skip over mysteries, and some of them would be legitimate in other contexts—“complexity,” for example. But the essential mistake is that skip-over, regardless of what causal node goes behind it. The skip-over is not a thought, but a microthought. You have to pay close attention to catch yourself at it. And when you train yourself to avoid skipping, it will become a matter of instinct, not verbal reasoning. You have to feel which parts of your map are still blank, and more importantly, pay attention to that feeling.

I suspect that in academia there is a huge pressure to sweep problems under the rug so that you can present a paper with the appearance of completeness. You’ll get more kudos for a seemingly complete model that includes some “emergent phenomena,” versus an explicitly incomplete map where the label says “I got no clue how this part works” or “then a miracle occurs.” A journal may not even accept the latter paper, since who knows but that the unknown steps are really where everything interesting happens?2

And if you’re working on a revolutionary AI startup, there is an even huger pressure to sweep problems under the rug; or you will have to admit to yourself that you don’t know how to build the right kind of AI yet, and your current life plans will come crashing down in ruins around your ears. But perhaps I am over-explaining, since skip-over happens by default in humans. If you’re looking for examples, just watch people discussing religion or philosophy or spirituality or any science in which they were not professionally trained.

Marcello and I developed a convention in our AI work: when we ran into something we didn’t understand, which was often, we would say “magic”—as in, “X magically does Y”—to remind ourselves that here was an unsolved problem, a gap in our understanding. It is far better to say “magic” than “complexity” or “emergence”; the latter words create an illusion of understanding. Wiser to say “magic,” and leave yourself a placeholder, a reminder of work you will have to do later.

1 http://lesswrong.com/rationality/a-technical-explanation-of-technical-explanation

2 And yes, it sometimes happens that all the non-magical parts of your map turn out to also be non-important. That’s the price you sometimes pay, for entering into terra incognita and trying to solve problems incrementally. But that makes it even more important to know when you aren’t finished yet. Mostly, people don’t dare to enter terra incognita at all, for the deadly fear of wasting their time.

" } }, { "_id": "rmAbiEKQDpDnZzcRf", "title": "Positive Bias: Look Into the Dark", "pageUrl": "https://www.lesswrong.com/posts/rmAbiEKQDpDnZzcRf/positive-bias-look-into-the-dark", "postedAt": "2007-08-28T03:55:07.000Z", "baseScore": 177, "voteCount": 156, "commentCount": 59, "url": null, "contents": { "documentId": "rmAbiEKQDpDnZzcRf", "html": "

I am teaching a class, and I write upon the blackboard three numbers: 2-4-6. “I am thinking of a rule,” I say, “which governs sequences of three numbers. The sequence 2-4-6, as it so happens, obeys this rule. Each of you will find, on your desk, a pile of index cards. Write down a sequence of three numbers on a card, and I’ll mark it ‘Yes’ for fits the rule, or ‘No’ for not fitting the rule. Then you can write down another set of three numbers and ask whether it fits again, and so on. When you’re confident that you know the rule, write down the rule on a card. You can test as many triplets as you like.”

Here’s the record of one student’s guesses:
 

4-6-2: No
4-6-8: Yes
10-12-14: Yes

 

At this point the student wrote down their guess at the rule. What do you think the rule is? Would you have wanted to test another triplet, and if so, what would it be? Take a moment to think before continuing.

The challenge above is based on a classic experiment due to Peter Wason, the 2-4-6 task. Although subjects given this task typically expressed high confidence in their guesses, only 21% of the subjects successfully guessed the experimenter’s real rule, and replications since then have continued to show success rates of around 20%.

The study was called “On the failure to eliminate hypotheses in a conceptual task.” Subjects who attempt the 2-4-6 task usually try to generate positive examples, rather than negative examples—they apply the hypothetical rule to generate a representative instance, and see if it is labeled “Yes.”

Thus, someone who forms the hypothesis “numbers increasing by two” will test the triplet 8-10-12, hear that it fits, and confidently announce the rule. Someone who forms the hypothesis X-2X-3X will test the triplet 3-6-9, discover that it fits, and then announce that rule.

In every case the actual rule is the same: the three numbers must be in ascending order.

But to discover this, you would have to generate triplets that shouldn’t fit, such as 20-23-26, and see if they are labeled “No.” Which people tend not to do, in this experiment. In some cases, subjects devise, “test,” and announce rules far more complicated than the actual answer.
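
A small simulation of the setup described above (the ascending-order rule comes from the text; the candidate hypothesis and the particular test triplets are invented for illustration):

```python
# Sketch of the 2-4-6 task: the experimenter's rule is "ascending order";
# the student's hypothesis is "numbers increasing by two."

def true_rule(triplet):
    a, b, c = triplet
    return a < b < c

def my_hypothesis(triplet):
    a, b, c = triplet
    return b - a == 2 and c - b == 2

positive_tests = [(4, 6, 8), (10, 12, 14), (8, 10, 12)]   # instances of my hypothesis
negative_tests = [(20, 23, 26), (1, 2, 100)]              # triplets my hypothesis forbids

# Testing only positive examples: every answer comes back "Yes," so nothing
# distinguishes my narrow hypothesis from the broader true rule.
print([true_rule(t) for t in positive_tests])   # [True, True, True]

# Testing what my hypothesis says should NOT fit is what exposes the difference.
for t in negative_tests:
    print(t, "Yes" if true_rule(t) else "No")   # both come back "Yes" anyway
```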

This cognitive phenomenon is usually lumped in with “confirmation bias.” However, it seems to me that the phenomenon of trying to test positive rather than negative examples, ought to be distinguished from the phenomenon of trying to preserve the belief you started with. “Positive bias” is sometimes used as a synonym for “confirmation bias,” and fits this particular flaw much better.

It once seemed that phlogiston theory could explain a flame going out in an enclosed box (the air became saturated with phlogiston and no more could be released). But phlogiston theory could just as well have explained the flame not going out. To notice this, you have to search for negative examples instead of positive examples, look into zero instead of one; which goes against the grain of what experiment has shown to be human instinct.

For by instinct, we human beings only live in half the world.

One may be lectured on positive bias for days, and yet overlook it in-the-moment. Positive bias is not something we do as a matter of logic, or even as a matter of emotional attachment. The 2-4-6 task is “cold,” logical, not affectively “hot.” And yet the mistake is sub-verbal, on the level of imagery, of instinctive reactions. Because the problem doesn’t arise from following a deliberate rule that says “Only think about positive examples,” it can’t be solved just by knowing verbally that “We ought to think about both positive and negative examples.” Which example automatically pops into your head? You have to learn, wordlessly, to zag instead of zig. You have to learn to flinch toward the zero, instead of away from it.

I have been writing for quite some time now on the notion that the strength of a hypothesis is what it can’t explain, not what it can—if you are equally good at explaining any outcome, you have zero knowledge. So to spot an explanation that isn’t helpful, it’s not enough to think of what it does explain very well—you also have to search for results it couldn’t explain, and this is the true strength of the theory.

So I said all this, and then I challenged the usefulness of “emergence” as a concept. One commenter cited superconductivity and ferromagnetism as examples of emergence. I replied that non-superconductivity and non-ferromagnetism were also examples of emergence, which was the problem. But far be it from me to criticize the commenter! Despite having read extensively on “confirmation bias,” I didn’t spot the “gotcha” in the 2-4-6 task the first time I read about it. It’s a subverbal blink-reaction that has to be retrained. I’m still working on it myself.

So much of a rationalist’s skill is below the level of words. It makes for challenging work in trying to convey the Art through words. People will agree with you, but then, in the next sentence, do something subdeliberative that goes in the opposite direction. Not that I’m complaining! A major reason I’m writing this is to observe what my words haven’t conveyed.

Are you searching for positive examples of positive bias right now, or sparing a fraction of your search on what positive bias should lead you to not see? Did you look toward light or darkness?

" } }, { "_id": "8QzZKw9WHRxjR4948", "title": "The Futility of Emergence", "pageUrl": "https://www.lesswrong.com/posts/8QzZKw9WHRxjR4948/the-futility-of-emergence", "postedAt": "2007-08-26T22:10:54.000Z", "baseScore": 128, "voteCount": 146, "commentCount": 142, "url": null, "contents": { "documentId": "8QzZKw9WHRxjR4948", "html": "

The failures of phlogiston and vitalism are historical hindsight. Dare I step out on a limb, and name some current theory which I deem analogously flawed?

I name emergence or emergent phenomena—usually defined as the study of systems whose high-level behaviors arise or “emerge” from the interaction of many low-level elements. (Wikipedia: “The way complex systems and patterns arise out of a multiplicity of relatively simple interactions.”)

Taken literally, that description fits every phenomenon in our universe above the level of individual quarks, which is part of the problem. Imagine pointing to a market crash and saying “It’s not a quark!” Does that feel like an explanation? No? Then neither should saying “It’s an emergent phenomenon!”

It’s the noun “emergence” that I protest, rather than the verb “emerges from.” There’s nothing wrong with saying “X emerges from Y,” where Y is some specific, detailed model with internal moving parts. “Arises from” is another legitimate phrase that means exactly the same thing. Gravity arises from the curvature of spacetime, according to the specific mathematical model of General Relativity. Chemistry arises from interactions between atoms, according to the specific model of quantum electrodynamics.

Now suppose I should say that gravity depends on “arisence” or that chemistry is an “arising phenomenon,” and claim that as my explanation.

The phrase “emerges from” is acceptable, just like “arises from” or “is caused by” are acceptable, if the phrase precedes some specific model to be judged on its own merits.

However, this is not the way “emergence” is commonly used. “Emergence” is commonly used as an explanation in its own right.

I have lost track of how many times I have heard people say, “Intelligence is an emergent phenomenon!” as if that explained intelligence. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that intelligence is “emergent”? You can make no new predictions. You do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don’t anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts—there’s no detailed internal model to manipulate. Those who proffer the hypothesis of “emergence” confess their ignorance of the internals, and take pride in it; they contrast the science of “emergence” to other sciences merely mundane.

And even after the answer of “Why? Emergence!” is given, the phenomenon is still a mystery and possesses the same sacred impenetrability it had at the start.

A fun exercise is to eliminate the adjective “emergent” from any sentence in which it appears, and see if the sentence says anything different:

Before: Human intelligence is an emergent product of neurons firing.
After: Human intelligence is a product of neurons firing.
Before: The behavior of the ant colony is the emergent outcome of the interactions of many individual ants.
After: The behavior of the ant colony is the outcome of the interactions of many individual ants.

Another fun exercise is to replace the word “emergent” with the old word, the explanation that people had to use before emergence was invented:

Before: Life is an emergent phenomenon.
After: Life is a magical phenomenon.
Before: Human intelligence is an emergent product of neurons firing.
After: Human intelligence is a magical product of neurons firing.

Does not each statement convey exactly the same amount of knowledge about the phenomenon’s behavior? Does not each hypothesis fit exactly the same set of outcomes?

“Emergence” has become very popular, just as saying “magic” used to be very popular. “Emergence” has the same deep appeal to human psychology, for the same reason. “Emergence” is such a wonderfully easy explanation, and it feels good to say it; it gives you a sacred mystery to worship. Emergence is popular because it is the junk food of curiosity. You can explain anything using emergence, and so people do just that; for it feels so wonderful to explain things.

Humans are still humans, even if they’ve taken a few science classes in college. Once they find a way to escape the shackles of settled science, they get up to the same shenanigans as their ancestors—dressed up in the literary genre of “science,” but humans are still humans, and human psychology is still human psychology.

" } }, { "_id": "6i3zToomS86oj9bS6", "title": "Mysterious Answers to Mysterious Questions", "pageUrl": "https://www.lesswrong.com/posts/6i3zToomS86oj9bS6/mysterious-answers-to-mysterious-questions", "postedAt": "2007-08-25T22:27:47.000Z", "baseScore": 250, "voteCount": 197, "commentCount": 160, "url": null, "contents": { "documentId": "6i3zToomS86oj9bS6", "html": "

Imagine looking at your hand, and knowing nothing of cells, nothing of biochemistry, nothing of DNA. You’ve learned some anatomy from dissection, so you know your hand contains muscles; but you don’t know why muscles move instead of lying there like clay. Your hand is just . . . stuff . . . and for some reason it moves under your direction. Is this not magic?

It seemed to me then, and it still seems to me, most probable that the animal body does not act as a thermodynamic engine . . . The influence of animal or vegetable life on matter is infinitely beyond the range of any scientific inquiry hitherto entered on. Its power of directing the motions of moving particles, in the demonstrated daily miracle of our human free-will, and in the growth of generation after generation of plants from a single seed, are infinitely different from any possible result of the fortuitous concourse of atoms[.]1

[C]onsciousness teaches every individual that they are, to some extent, subject to the direction of his will. It appears, therefore, that animated creatures have the power of immediately applying, to certain moving particles of matter within their bodies, forces by which the motions of these particles are directed to produce desired mechanical effects.2

Modern biologists are coming once more to a firm acceptance of something beyond mere gravitational, chemical, and physical forces; and that unknown thing is a vital principle.3

—Lord Kelvin

This was the theory of vitalism; that the mysterious difference between living matter and non-living matter was explained by an Élan vital or vis vitalis. Élan vital infused living matter and caused it to move as consciously directed. Élan vital participated in chemical transformations which no mere non-living particles could undergo—Wöhler’s later synthesis of urea, a component of urine, was a major blow to the vitalistic theory because it showed that mere chemistry could duplicate a product of biology.

Calling “Élan vital” an explanation, even a fake explanation like phlogiston, is probably giving it too much credit. It functioned primarily as a curiosity-stopper. You said “Why?” and the answer was “Élan vital!”

When you say “Élan vital!” it feels like you know why your hand moves. You have a little causal diagram in your head that says:

[Élan vital!] → [Hand moves]

But actually you know nothing you didn’t know before. You don’t know, say, whether your hand will generate heat or absorb heat, unless you have observed the fact already; if not, you won’t be able to predict it in advance. Your curiosity feels sated, but it hasn’t been fed. Since you can say “Why? Élan vital!” to any possible observation, it is equally good at explaining all outcomes, a disguised hypothesis of maximum entropy, et cetera.
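
One way to see the “maximum entropy” point numerically (a sketch with invented probabilities, not part of the original essay): a curiosity-stopper leaves your predictive distribution, and hence its entropy, exactly where it was.

```python
import math

def entropy(dist):
    """Shannon entropy, in bits, of a distribution over outcomes."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# What do I anticipate my hand doing in some experiment, before and after
# "explaining" it with Elan vital? (Probabilities are invented for illustration.)
before = {"generates heat": 0.5, "absorbs heat": 0.5}
after_elan_vital = {"generates heat": 0.5, "absorbs heat": 0.5}   # unchanged

print(entropy(before), entropy(after_elan_vital))   # 1.0 1.0 -- nothing was learned

# A real explanation concentrates probability mass and lowers the entropy:
after_thermodynamics = {"generates heat": 0.95, "absorbs heat": 0.05}
print(entropy(after_thermodynamics))                # ~0.29
```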

But the greater lesson lies in the vitalists’ reverence for the Élan vital, their eagerness to pronounce it a mystery beyond all science. Meeting the great dragon Unknown, the vitalists did not draw their swords to do battle, but bowed their necks in submission. They took pride in their ignorance, made biology into a sacred mystery, and thereby became loath to relinquish their ignorance when evidence came knocking.

The Secret of Life was infinitely beyond the reach of science! Not just a little beyond, mind you, but infinitely beyond! Lord Kelvin sure did get a tremendous emotional kick out of not knowing something.

But ignorance exists in the map, not in the territory. If I am ignorant about a phenomenon, that is a fact about my own state of mind, not a fact about the phenomenon itself. A phenomenon can seem mysterious to some particular person. There are no phenomena which are mysterious of themselves. To worship a phenomenon because it seems so wonderfully mysterious is to worship your own ignorance.

Vitalism shared with phlogiston the error of encapsulating the mystery as a substance. Fire was mysterious, and the phlogiston theory encapsulated the mystery in a mysterious substance called “phlogiston.” Life was a sacred mystery, and vitalism encapsulated the sacred mystery in a mysterious substance called “Élan vital.” Neither answer helped concentrate the model’s probability density—helped make some outcomes easier to explain than others. The “explanation” just wrapped up the question as a small, hard, opaque black ball.

In a comedy written by Molière, a physician explains the power of a soporific by saying that it contains a “dormitive potency.” Same principle. It is a failure of human psychology that, faced with a mysterious phenomenon, we more readily postulate mysterious inherent substances than complex underlying processes.

But the deeper failure is supposing that an answer can be mysterious. If a phenomenon feels mysterious, that is a fact about our state of knowledge, not a fact about the phenomenon itself. The vitalists saw a mysterious gap in their knowledge, and postulated a mysterious stuff that plugged the gap. In doing so, they mixed up the map with the territory. All confusion and bewilderment exist in the mind, not in encapsulated substances.

This is the ultimate and fully general explanation for why, again and again in humanity’s history, people are shocked to discover that an incredibly mysterious question has a non-mysterious answer. Mystery is a property of questions, not answers.

Therefore I call theories such as vitalism mysterious answers to mysterious questions.

These are the signs of mysterious answers to mysterious questions: First, the explanation acts as a curiosity-stopper rather than an anticipation-controller. Second, the hypothesis has no moving parts; the model is not a specific complex mechanism, but a blankly solid substance or force. Third, those who proffer the explanation cherish their ignorance; they speak proudly of how the phenomenon defeats ordinary science or is unlike the merely mundane. Fourth, even after the answer is given, the phenomenon is still a mystery and possesses the same quality of wonderful inexplicability that it had at the start.


1 Lord Kelvin, “On the Dissipation of Energy: Geology and General Physics,” in Popular Lectures and Addresses, vol. ii (London: Macmillan, 1894).

2 Lord Kelvin, “On the Mechanical action of Heat or Light: On the Power of Animated Creatures over Matter: On the Sources available to Man for the production of Mechanical Effect,” Proceedings of the Royal Society of Edinburgh 3, no. 1 (1852): 108–113.

3 Silvanus Phillips Thompson, The Life of Lord Kelvin (American Mathematical Society, 2005).

" } }, { "_id": "FWMfQKG3RpZx6irjm", "title": "Semantic Stopsigns", "pageUrl": "https://www.lesswrong.com/posts/FWMfQKG3RpZx6irjm/semantic-stopsigns", "postedAt": "2007-08-24T19:29:10.000Z", "baseScore": 163, "voteCount": 154, "commentCount": 111, "url": null, "contents": { "documentId": "FWMfQKG3RpZx6irjm", "html": "

And the child asked:

Q: Where did this rock come from?

A: I chipped it off the big boulder, at the center of the village.

Q: Where did the boulder come from?

A: It probably rolled off the huge mountain that towers over our village.

Q: Where did the mountain come from?

A: The same place as all stone: it is the bones of Ymir, the primordial giant.

Q: Where did the primordial giant, Ymir, come from?

A: From the great abyss, Ginnungagap.

Q: Where did the great abyss, Ginnungagap, come from?

A: Never ask that question.

Consider the seeming paradox of the First Cause. Science has traced events back to the Big Bang, but why did the Big Bang happen? It’s all well and good to say that the zero of time begins at the Big Bang—that there is nothing before the Big Bang in the ordinary flow of minutes and hours. But saying this presumes our physical law, which itself appears highly structured; it calls out for explanation. Where did the physical laws come from? You could say that we’re all a computer simulation, but then the computer simulation is running on some other world’s laws of physics—where did those laws of physics come from?

At this point, some people say, “God!”

What could possibly make anyone, even a highly religious person, think this even helped answer the paradox of the First Cause? Why wouldn’t you automatically ask, “Where did God come from?” Saying “God is uncaused” or “God created Himself” leaves us in exactly the same position as “Time began with the Big Bang.” We just ask why the whole metasystem exists in the first place, or why some events but not others are allowed to be uncaused.

My purpose here is not to discuss the seeming paradox of the First Cause, but to ask why anyone would think “God!” could resolve the paradox. Saying “God!” is a way of belonging to a tribe, which gives people a motive to say it as often as possible—some people even say it for questions like “Why did this hurricane strike New Orleans?” Even so, you’d hope people would notice that on the particular puzzle of the First Cause, saying “God!” doesn’t help. It doesn’t make the paradox seem any less paradoxical even if true. How could anyone not notice this?

Jonathan Wallace suggested that “God!” functions as a semantic stopsign—that it isn’t a propositional assertion, so much as a cognitive traffic signal: do not think past this point.1 Saying “God!” doesn’t so much resolve the paradox, as put up a cognitive traffic signal to halt the obvious continuation of the question-and-answer chain.

Of course you’d never do that, being a good and proper atheist, right? But “God!” isn’t the only semantic stopsign, just the obvious first example.

The transhuman technologies—molecular nanotechnology, advanced biotech, genetech, artificial intelligence, et cetera—pose tough policy questions. What kind of role, if any, should a government take in supervising a parent’s choice of genes for their child? Could parents deliberately choose genes for schizophrenia? If enhancing a child’s intelligence is expensive, should governments help ensure access, to prevent the emergence of a cognitive elite? You can propose various institutions to answer these policy questions—for example, that private charities should provide financial aid for intelligence enhancement—but the obvious next question is, “Will this institution be effective?” If we rely on product liability lawsuits to prevent corporations from building harmful nanotech, will that really work?

I know someone whose answer to every one of these questions is “Liberal democracy!” That’s it. That’s his answer. If you ask the obvious question of “How well have liberal democracies performed, historically, on problems this tricky?” or “What if liberal democracy does something stupid?” then you’re an autocrat, or libertopian, or otherwise a very very bad person. No one is allowed to question democracy.

I once called this kind of thinking “the divine right of democracy.” But it is more precise to say that “Democracy!” functioned for him as a semantic stopsign. If anyone had said to him “Turn it over to the Coca-Cola corporation!” he would have asked the obvious next questions: “Why? What will the Coca-Cola corporation do about it? Why should we trust them? Have they done well in the past on equally tricky problems?”

Or suppose that someone says, “Mexican-Americans are plotting to remove all the oxygen in Earth’s atmosphere.” You’d probably ask, “Why would they do that? Don’t Mexican-Americans have to breathe too? Do Mexican-Americans even function as a unified conspiracy?” If you don’t ask these obvious next questions when someone says, “Corporations are plotting to remove Earth’s oxygen,” then “Corporations!” functions for you as a semantic stopsign.

Be careful here not to create a new generic counterargument against things you don’t like—“Oh, it’s just a stopsign!” No word is a stopsign of itself; the question is whether a word has that effect on a particular person. Having strong emotions about something doesn’t qualify it as a stopsign. I’m not exactly fond of terrorists or fearful of private property; that doesn’t mean “Terrorists!” or “Capitalism!” are cognitive traffic signals unto me. (The word “intelligence” did once have that effect on me, though no longer.) What distinguishes a semantic stopsign is failure to consider the obvious next question.


1 See Wallace’s “God vs. God” (http://www.spectacle.org/yearzero/godvgod.html) and “God as a Semantical Signpost” (http://www.spectacle.org/1095/stop1.html).

" } }, { "_id": "RgkqLqkg8vLhsYpfh", "title": "Fake Causality", "pageUrl": "https://www.lesswrong.com/posts/RgkqLqkg8vLhsYpfh/fake-causality", "postedAt": "2007-08-23T18:12:31.000Z", "baseScore": 144, "voteCount": 128, "commentCount": 88, "url": null, "contents": { "documentId": "RgkqLqkg8vLhsYpfh", "html": "

Phlogiston was the eighteenth century’s answer to the Elemental Fire of the Greek alchemists. Ignite wood, and let it burn. What is the orangey-bright “fire” stuff? Why does the wood transform into ash? To both questions, the eighteenth-century chemists answered, “phlogiston.”

. . . and that was it, you see, that was their answer: “Phlogiston.”

Phlogiston escaped from burning substances as visible fire. As the phlogiston escaped, the burning substances lost phlogiston and so became ash, the “true material.” Flames in enclosed containers went out because the air became saturated with phlogiston, and so could not hold any more. Charcoal left little residue upon burning because it was nearly pure phlogiston.

Of course, one didn’t use phlogiston theory to predict the outcome of a chemical transformation. You looked at the result first, then you used phlogiston theory to explain it. It’s not that phlogiston theorists predicted a flame would extinguish in a closed container; rather they lit a flame in a container, watched it go out, and then said, “The air must have become saturated with phlogiston.” You couldn’t even use phlogiston theory to say what you ought not to see; it could explain everything.

This was an earlier age of science. For a long time, no one realized there was a problem. Fake explanations don’t feel fake. That’s what makes them dangerous.

Modern research suggests that humans think about cause and effect using something like the directed acyclic graphs (DAGs) of Bayes nets. Because it rained, the sidewalk is wet; because the sidewalk is wet, it is slippery:

[Rain] → [Sidewalk Wet] → [Sidewalk Slippery]

From this we can infer—or, in a Bayes net, rigorously calculate in probabilities—that when the sidewalk is slippery, it probably rained; but if we already know that the sidewalk is wet, learning that the sidewalk is slippery tells us nothing more about whether it rained.
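
A small sketch of that chain (the conditional probabilities are invented; only the structure [Rain] → [Sidewalk Wet] → [Sidewalk Slippery] comes from the text):

```python
from itertools import product

P_RAIN = 0.2
P_WET_GIVEN_RAIN = {True: 0.9, False: 0.1}    # P(sidewalk wet | rain?)
P_SLIP_GIVEN_WET = {True: 0.7, False: 0.05}   # P(slippery | wet?)

def joint(rain, wet, slippery):
    p = P_RAIN if rain else 1 - P_RAIN
    p *= P_WET_GIVEN_RAIN[rain] if wet else 1 - P_WET_GIVEN_RAIN[rain]
    p *= P_SLIP_GIVEN_WET[wet] if slippery else 1 - P_SLIP_GIVEN_WET[wet]
    return p

def p_rain_given(**evidence):
    worlds = [dict(rain=r, wet=w, slippery=s)
              for r, w, s in product([True, False], repeat=3)]
    consistent = [x for x in worlds
                  if all(x[name] == value for name, value in evidence.items())]
    total = sum(joint(x["rain"], x["wet"], x["slippery"]) for x in consistent)
    rainy = sum(joint(x["rain"], x["wet"], x["slippery"])
                for x in consistent if x["rain"])
    return rainy / total

print(round(p_rain_given(slippery=True), 2))            # 0.58: a slippery sidewalk suggests rain
print(round(p_rain_given(wet=True), 2))                 # 0.69
print(round(p_rain_given(wet=True, slippery=True), 2))  # 0.69: slipperiness adds nothing once wetness is known
```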

Why is fire hot and bright when it burns? “Phlogiston!”

[Phlogiston] → [Fire hot and bright]

It feels like an explanation. It’s represented using the same cognitive data format. But the human mind does not automatically detect when a cause has an unconstraining arrow to its effect. Worse, thanks to hindsight bias, it may feel like the cause constrains the effect, when it was merely fitted to the effect.

Interestingly, our modern understanding of probabilistic reasoning about causality can describe precisely what the phlogiston theorists were doing wrong. One of the primary inspirations for Bayesian networks was noticing the problem of double-counting evidence if inference resonates between an effect and a cause. For example, let’s say that I get a bit of unreliable information that the sidewalk is wet. This should make me think it’s more likely to be raining. But, if it’s more likely to be raining, doesn’t that make it more likely that the sidewalk is wet? And wouldn’t that make it more likely that the sidewalk is slippery? But if the sidewalk is slippery, it’s probably wet; and then I should again raise my probability that it’s raining . . .

Judea Pearl uses the metaphor of an algorithm for counting soldiers in a line. Suppose you’re in the line, and you see two soldiers next to you, one in front and one in back. That’s three soldiers, including you. So you ask the soldier behind you, “How many soldiers do you see?” They look around and say, “Three.” So that’s a total of six soldiers. This, obviously, is not how to do it.

A smarter way is to ask the soldier in front of you, “How many soldiers forward of you?” and the soldier in back, “How many soldiers backward of you?” The question “How many soldiers forward?” can be passed on as a message without confusion. If I’m at the front of the line, I pass the message “1 soldier forward,” for myself. The person directly in back of me gets the message “1 soldier forward,” and passes on the message “2 soldiers forward” to the soldier behind them. At the same time, each soldier is also getting the message “N soldiers backward” from the soldier behind them, and passing it on as “N + 1 soldiers backward” to the soldier in front of them. How many soldiers in total? Add the two numbers you receive, plus one for yourself: that is the total number of soldiers in line.

The key idea is that every soldier must separately track the two messages, the forward-message and backward-message, and add them together only at the end. You never add any soldiers from the backward-message you receive to the forward-message you pass back. Indeed, the total number of soldiers is never passed as a message—no one ever says it aloud.
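
The counting procedure is short enough to write out as a sketch, with the line of soldiers represented as a Python list:

```python
# Sketch of the soldier-counting algorithm described above: forward and
# backward messages are accumulated separately and only added at the end.

def count_soldiers(n):
    forward = [0] * n   # forward[i]: "how many soldiers forward of you?" as heard by soldier i
    backward = [0] * n  # backward[i]: "how many soldiers backward of you?" as heard by soldier i

    # The front soldier starts the forward chain; each soldier adds themselves
    # and passes the message toward the back of the line.
    for i in range(1, n):
        forward[i] = forward[i - 1] + 1
    # The rear soldier starts the backward chain, passed toward the front.
    for i in range(n - 2, -1, -1):
        backward[i] = backward[i + 1] + 1

    # Every soldier can now compute the same total; the total itself was never
    # passed as a message, so nothing gets double-counted.
    totals = [forward[i] + backward[i] + 1 for i in range(n)]
    assert all(t == n for t in totals)
    return totals[0]

print(count_soldiers(7))  # 7
```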

An analogous principle operates in rigorous probabilistic reasoning about causality. If you learn something about whether it’s raining, from some source other than observing the sidewalk to be wet, this will send a forward-message from [Rain] to [Sidewalk Wet] and raise our expectation of the sidewalk being wet. If you observe the sidewalk to be wet, this sends a backward-message to our belief that it is raining, and this message propagates from [Rain] to all neighboring nodes except the [Sidewalk Wet] node. We count each piece of evidence exactly once; no update message ever “bounces” back and forth. The exact algorithm may be found in Judea Pearl’s classic Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.

So what went wrong in phlogiston theory? When we observe that fire is hot and bright, the [Fire Hot and Bright] node can send backward-evidence to the [Phlogiston] node, leading us to update our beliefs about phlogiston. But if so, we can’t count this as a successful forward-prediction of phlogiston theory. The message should go in only one direction, and not bounce back.

Alas, human beings do not use a rigorous algorithm for updating belief networks. We learn about parent nodes from observing children, and predict child nodes from beliefs about parents. But we don’t keep rigorously separate books for the backward-message and forward-message. We just remember that phlogiston is hot, which causes fire to be hot. So it seems like phlogiston theory predicts the hotness of fire. Or, worse, it just feels like phlogiston makes the fire hot.

Until you notice that no advance predictions are being made, the non-constraining causal node is not labeled “fake.” It’s represented the same way as any other node in your belief network. It feels like a fact, like all the other facts you know: Phlogiston makes the fire hot.

A properly designed AI would notice the problem instantly. This wouldn’t even require special-purpose code, just correct bookkeeping of the belief network. (Sadly, we humans can’t rewrite our own code, the way a properly designed AI could.)

Speaking of “hindsight bias” is just the nontechnical way of saying that humans do not rigorously separate forward and backward messages, allowing forward messages to be contaminated by backward ones.

Those who long ago went down the path of phlogiston were not trying to be fools. No scientist deliberately wants to get stuck in a blind alley. Are there any fake explanations in your mind? If there are, I guarantee they’re not labeled “fake explanation,” so polling your thoughts for the “fake” keyword will not turn them up.

Thanks to hindsight bias, it’s also not enough to check how well your theory “predicts” facts you already know. You’ve got to predict for tomorrow, not yesterday. It’s the only way a messy human mind can be guaranteed of sending a pure forward message.

" } }, { "_id": "4Bwr6s9dofvqPWakn", "title": "Science as Attire", "pageUrl": "https://www.lesswrong.com/posts/4Bwr6s9dofvqPWakn/science-as-attire", "postedAt": "2007-08-23T05:10:21.000Z", "baseScore": 176, "voteCount": 156, "commentCount": 88, "url": null, "contents": { "documentId": "4Bwr6s9dofvqPWakn", "html": "

The preview for the X-Men movie has a voice-over saying: “In every human being . . . there is the genetic code . . . for mutation.” Apparently you can acquire all sorts of neat abilities by mutation. The mutant Storm, for example, has the ability to throw lightning bolts.

I beg you, dear reader, to consider the biological machinery necessary to generate electricity; the biological adaptations necessary to avoid being harmed by electricity; and the cognitive circuitry required for finely tuned control of lightning bolts. If we actually observed any organism acquiring these abilities in one generation, as the result of mutation, it would outright falsify the neo-Darwinian model of natural selection. It would be worse than finding rabbit fossils in the pre-Cambrian. If evolutionary theory could actually stretch to cover Storm, it would be able to explain anything, and we all know what that would imply.

The X-Men comics use terms like “evolution,” “mutation,” and “genetic code,” purely to place themselves in what they conceive to be the literary genre of science. The part that scares me is wondering how many people, especially in the media, understand science only as a literary genre.

I encounter people who very definitely believe in evolution, who sneer at the folly of creationists. And yet they have no idea of what the theory of evolutionary biology permits and prohibits. They’ll talk about “the next step in the evolution of humanity,” as if natural selection got here by following a plan. Or even worse, they’ll talk about something completely outside the domain of evolutionary biology, like an improved design for computer chips, or corporations splitting, or humans uploading themselves into computers, and they’ll call that “evolution.” If evolutionary biology could cover that, it could cover anything.

Probably an actual majority of the people who believe in evolution use the phrase “because of evolution” because they want to be part of the scientific in-crowd—belief as scientific attire, like wearing a lab coat. If the scientific in-crowd instead used the phrase “because of intelligent design,” they would just as cheerfully use that instead—it would make no difference to their anticipation-controllers. Saying “because of evolution” instead of “because of intelligent design” does not, for them, prohibit Storm. Its only purpose, for them, is to identify with a tribe.

I encounter people who are quite willing to entertain the notion of dumber-than-human artificial intelligence, or even mildly smarter-than-human artificial intelligence. Introduce the notion of strongly superhuman artificial intelligence, and they’ll suddenly decide it’s “pseudoscience.” It’s not that they think they have a theory of intelligence which lets them calculate a theoretical upper bound on the power of an optimization process. Rather, they associate strongly superhuman AI to the literary genre of apocalyptic literature; whereas an AI running a small corporation associates to the literary genre of Wired magazine. They aren’t speaking from within a model of cognition. They don’t realize they need a model. They don’t realize that science is about models. Their devastating critiques consist purely of comparisons to apocalyptic literature, rather than, say, known laws which prohibit such an outcome. They understand science only as a literary genre, or in-group to belong to. The attire doesn’t look to them like a lab coat; this isn’t the football team they’re cheering for.

Is there any idea in science that you are proud of believing, though you do not use the belief professionally? You had best ask yourself which future experiences your belief prohibits from happening to you. That is the sum of what you have assimilated and made a true part of yourself. Anything else is probably passwords or attire.

" } }, { "_id": "NMoLJuDJEms7Ku9XS", "title": "Guessing the Teacher's Password", "pageUrl": "https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-the-teacher-s-password", "postedAt": "2007-08-22T03:40:48.000Z", "baseScore": 311, "voteCount": 265, "commentCount": 100, "url": null, "contents": { "documentId": "NMoLJuDJEms7Ku9XS", "html": "

When I was young, I read popular physics books such as Richard Feynman’s QED: The Strange Theory of Light and Matter. I knew that light was waves, sound was waves, matter was waves. I took pride in my scientific literacy, when I was nine years old.

When I was older, and I began to read the Feynman Lectures on Physics, I ran across a gem called “the wave equation.” I could follow the equation’s derivation, but, looking back, I couldn’t see its truth at a glance. So I thought about the wave equation for three days, on and off, until I saw that it was embarrassingly obvious. And when I finally understood, I realized that the whole time I had accepted the honest assurance of physicists that light was waves, sound was waves, matter was waves, I had not had the vaguest idea of what the word “wave” meant to a physicist.

There is an instinctive tendency to think that if a physicist says “light is made of waves,” and the teacher says “What is light made of?” and the student says “Waves!”, then the student has made a true statement. That’s only fair, right? We accept “waves” as a correct answer from the physicist; wouldn’t it be unfair to reject it from the student? Surely, the answer “Waves!” is either true or false, right?

Which is one more bad habit to unlearn from school. Words do not have intrinsic definitions. If I hear the syllables “bea-ver” and think of a large rodent, that is a fact about my own state of mind, not a fact about the syllables “bea-ver.” The sequence of syllables “made of waves” (or “because of heat conduction”) is not a hypothesis; it is a pattern of vibrations traveling through the air, or ink on paper. It can associate to a hypothesis in someone’s mind, but it is not, of itself, right or wrong. But in school, the teacher hands you a gold star for saying “made of waves,” which must be the correct answer because the teacher heard a physicist emit the same sound-vibrations. Since verbal behavior (spoken or written) is what gets the gold star, students begin to think that verbal behavior has a truth-value. After all, either light is made of waves, or it isn’t, right?

And this leads into an even worse habit. Suppose the teacher asks you why the far side of a metal plate feels warmer than the side next to the radiator. If you say “I don’t know,” you have no chance of getting a gold star—it won’t even count as class participation. But, during the current semester, this teacher has used the phrases “because of heat convection,” “because of heat conduction,” and “because of radiant heat.” One of these is probably what the teacher wants. You say, “Eh, maybe because of heat conduction?”

This is not a hypothesis about the metal plate. This is not even a proper belief. It is an attempt to guess the teacher’s password.

Even visualizing the symbols of the diffusion equation (the math governing heat conduction) doesn’t mean you’ve formed a hypothesis about the metal plate. This is not school; we are not testing your memory to see if you can write down the diffusion equation. This is Bayescraft; we are scoring your anticipations of experience. If you use the diffusion equation, by measuring a few points with a thermometer and then trying to predict what the thermometer will say on the next measurement, then it is definitely connected to experience. Even if the student just visualizes something flowing, and therefore holds a match near the cooler side of the plate to try to measure where the heat goes, then this mental image of flowing-ness connects to experience; it controls anticipation.
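
For concreteness, here is a sketch of what “using the diffusion equation” to anticipate the next reading could look like (the readings and the diffusivity constant are invented; a real prediction would use measured values):

```python
# Sketch: one explicit finite-difference step of the 1-D heat equation
# dT/dt = alpha * d2T/dx2, used to predict the next thermometer readings.

ALPHA = 0.1   # thermal diffusivity times dt/dx^2 (assumed; must be < 0.5 for stability)

def predict_next(readings):
    """Predict the next set of readings from the current ones; ends are held fixed."""
    nxt = list(readings)
    for i in range(1, len(readings) - 1):
        nxt[i] = readings[i] + ALPHA * (readings[i - 1] - 2 * readings[i] + readings[i + 1])
    return nxt

# Thermometer readings (degrees C) at evenly spaced points across the plate:
now = [20.0, 22.0, 26.0, 31.0, 35.0]
print(predict_next(now))
# Compare this prediction against the next actual measurement; if the equation
# is doing its job, it constrains what you expect to see, and it can be wrong.
```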

If you aren’t using the diffusion equation—putting in numbers and getting out results that control your anticipation of particular experiences—then the connection between map and territory is severed as though by a knife. What remains is not a belief, but a verbal behavior.

In the school system, it’s all about verbal behavior, whether written on paper or spoken aloud. Verbal behavior gets you a gold star or a failing grade. Part of unlearning this bad habit is becoming consciously aware of the difference between an explanation and a password.

Does this seem too harsh? When you’re faced by a confusing metal plate, can’t “heat conduction?” be a first step toward finding the answer? Maybe, but only if you don’t fall into the trap of thinking that you are looking for a password. What if there is no teacher to tell you that you failed? Then you may think that “Light is wakalixes” is a good explanation, that “wakalixes” is the correct password. It happened to me when I was nine years old—not because I was stupid, but because this is what happens by default. This is how human beings think, unless they are trained not to fall into the trap. Humanity stayed stuck in holes like this for thousands of years.

Maybe, if we drill students that words don’t count, only anticipation-controllers, the student will not get stuck on “Heat conduction? No? Maybe heat convection? That’s not it either?” Maybe then, thinking the phrase “heat conduction” will lead onto a genuinely helpful path, like:

“Heat conduction?”
But that’s only a phrase—what does it mean?
The diffusion equation?
But those are only symbols—how do I use them?
What does applying the diffusion equation lead me to anticipate?
It sure doesn’t lead me to anticipate that the side of a metal plate farther away from a radiator would feel warmer.
I notice that I am confused. Maybe the near side just feels cooler, because it’s made of more insulative material and transfers less heat to my hand? I’ll try measuring the temperature . . .
Okay, that wasn’t it. Can I try to verify whether the diffusion equation holds true of this metal plate, at all? Is heat flowing the way it usually does, or is something else going on?
I could hold a match to the plate and try to measure how heat spreads over time . . .

If we are not strict about “Eh, maybe because of heat conduction?” being a fake explanation, the student will very probably get stuck on some wakalixes-password. This happens by default: it happened to the whole human species for thousands of years.

" } }, { "_id": "fysgqk4CjAwhBgNYT", "title": "Fake Explanations", "pageUrl": "https://www.lesswrong.com/posts/fysgqk4CjAwhBgNYT/fake-explanations", "postedAt": "2007-08-20T21:13:35.000Z", "baseScore": 187, "voteCount": 168, "commentCount": 97, "url": null, "contents": { "documentId": "fysgqk4CjAwhBgNYT", "html": "

Once upon a time, there was an instructor who taught physics students. One day the instructor called them into the classroom and showed them a wide, square plate of metal, next to a hot radiator. The students each put their hand on the plate and found the side next to the radiator cool, and the distant side warm. And the instructor said, Why do you think this happens? Some students guessed convection of air currents, and others guessed strange metals in the plate. They devised many creative explanations, none stooping so low as to say “I don’t know” or “This seems impossible.”

And the answer was that before the students entered the room, the instructor turned the plate around.1

Consider the student who frantically stammers, “Eh, maybe because of the heat conduction and so?” I ask: Is this answer a proper belief? The words are easily enough professed—said in a loud, emphatic voice. But do the words actually control anticipation?

Ponder that innocent little phrase, “because of,” which comes before “heat conduction.” Ponder some of the other things we could put after it. We could say, for example, “Because of phlogiston,” or “Because of magic.”

“Magic!” you cry. “That’s not a scientific explanation!” Indeed, the phrases “because of heat conduction” and “because of magic” are readily recognized as belonging to different literary genres. “Heat conduction” is something that Spock might say on Star Trek, whereas “magic” would be said by Giles in Buffy the Vampire Slayer.

However, as Bayesians, we take no notice of literary genres. For us, the substance of a model is the control it exerts on anticipation. If you say “heat conduction,” what experience does that lead you to anticipate? Under normal circumstances, it leads you to anticipate that, if you put your hand on the side of the plate near the radiator, that side will feel warmer than the opposite side. If “because of heat conduction” can also explain the radiator-adjacent side feeling cooler, then it can explain pretty much anything.

And as we all know by this point (I do hope), if you are equally good at explaining any outcome, you have zero knowledge. “Because of heat conduction,” used in such fashion, is a disguised hypothesis of maximum entropy. It is anticipation-isomorphic to saying “magic.” It feels like an explanation, but it’s not.

Suppose that instead of guessing, we measured the heat of the metal plate at various points and various times. Seeing a metal plate next to the radiator, we would ordinarily expect the point temperatures to satisfy an equilibrium of the diffusion equation with respect to the boundary conditions imposed by the environment. You might not know the exact temperature of the first point measured, but after measuring the first points—I’m not physicist enough to know how many would be required—you could take an excellent guess at the rest.

A true master of the art of using numbers to constrain the anticipation of material phenomena—a “physicist”—would take some measurements and say, “This plate was in equilibrium with the environment two and a half minutes ago, turned around, and is now approaching equilibrium again.”
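
As a rough illustration (the measurements are invented, and the plate is idealized as a thin one-dimensional strip that has reached equilibrium), the steady-state diffusion equation forces a straight-line temperature profile, so a couple of readings pin down a prediction for every other point:

```python
# Sketch with invented numbers: at equilibrium, the 1-D steady-state heat
# profile is linear between the edges, so two measurements predict the rest.

def predict_equilibrium(x1, t1, x2, t2, x_query):
    """Temperature at x_query on the straight line through two measured points."""
    slope = (t2 - t1) / (x2 - x1)
    return t1 + slope * (x_query - x1)

# Hypothetical readings: 52 C at 2 cm from the hot edge, 28 C at 10 cm.
for x in (4.0, 6.0, 8.0):
    t = predict_equilibrium(2.0, 52.0, 10.0, 28.0, x)
    print("anticipated temperature at %.0f cm: %.1f C" % (x, t))
```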

The deeper error of the students is not simply that they failed to constrain anticipation. Their deeper error is that they thought they were doing physics. They said the phrase “because of,” followed by the sort of words Spock might say on Star Trek, and thought they thereby entered the magisterium of science.

Not so. They simply moved their magic from one literary genre to another.


1 Joachim Verhagen, Science Jokes, 2001, http://web.archive.org/web/20060424082937/http://www.nvon.nl/scheik/best/diversen/scijokes/scijokes.txt

" } }, { "_id": "XKcawbsB6Tj5e2QRK", "title": "Is Molecular Nanotechnology \"Scientific\"?", "pageUrl": "https://www.lesswrong.com/posts/XKcawbsB6Tj5e2QRK/is-molecular-nanotechnology-scientific", "postedAt": "2007-08-20T04:11:56.000Z", "baseScore": 40, "voteCount": 46, "commentCount": 49, "url": null, "contents": { "documentId": "XKcawbsB6Tj5e2QRK", "html": "

Prerequisite / Read this first:  Scientific Evidence, Legal Evidence, Rational Evidence

Consider the statement "It is physically possible to construct diamondoid nanomachines which repair biological cells."  Some people will tell you that molecular nanotechnology is "pseudoscience" because it has not been verified by experiment - no one has ever seen a nanofactory, so how can believing in their possibility be scientific?

Drexler, I think, would reply that his extrapolations of diamondoid nanomachines are based on standard physics, which is to say, scientific generalizations.  Therefore, if you say that nanomachines cannot work, you must be inventing new physics.  Or to put it more sharply:  If you say that a simulation of a molecular gear is inaccurate, if you claim that atoms thus configured would behave differently from depicted, then either you know a flaw in the simulation algorithm or you're inventing your own laws of physics.

My own sympathies, I confess, are with Drexler.  And not just because you could apply the same argument of "I've never seen it, therefore it can't happen" to my own field of Artificial Intelligence.


What about the Wright Brothers' attempt to build a non-biological heavier-than-air powered flying machine?  Was that "pseudoscience"?  No one had ever seen one before.  Wasn't "all flying machines crash" a generalization true over all previous observations?  Wouldn't it be scientific to extend this generalization to predict future experiments?


"Flying machines crash" is a qualitative, imprecise, verbal, surface-level generalization.  If you have a quantitative theory of aerodynamics which can calculate precisely how previous flying machines crashed, that same theory of aerodynamics would predict the Wright Flyer will fly (and how high, at what speed).  Deep quantitative generalizations take strict precedence over verbal surface generalizations.  Only deep laws possess the absolute universality and stability of physics.  Perhaps there are no new quarks under the Sun, but on higher levels of organization, new things happen all the time.

"No one has ever seen a non-biological nanomachine" is a verbalish surface-level generalization, which can hardly overrule the precise physical models used to simulate a molecular gear.

And yet... I still would not say that "It's possible to construct a nanofactory" is a scientific belief.  This belief will not become scientific until someone actually constructs a nanofactory.  Just because something is the best extrapolation from present generalizations doesn't make it true.  We have not done an atom-by-atom calculation for the synthesis and behavior of an entire nanofactory; the argument for nanofactories is based on qualitative, abstract reasoning.  Such reasoning, even from the best available current theories, sometimes goes wrong.  Not always, but sometimes.

The argument for "it's possible to construct a nanofactory" is based on the protected belief pool of science.  But it does not, itself, meet the special strong standards required to ceremonially add a belief to the protected belief pool.


Yet if, on a whim, you decide to make a strong positive assertion that nanomachines are impossible, you are being irrational.  You are even being "unscientific".  An ungrounded whimsical assertion that tomorrow the Sun will not rise is "unscientific", because you have needlessly contradicted the best extrapolation from current scientific knowledge.

In the nanotechnology debate, we see once again the severe folly of thinking that everything which is not science is pseudoscience - as if Nature is prohibited from containing any truths except those already verified by surface observations of scientific experiments.  It is a fallacy of the excluded middle.


Of course you could try to criticize the feasibility of diamondoid nanotechnology from within the known laws of physics.  That could be argued.  It wouldn't have the just plain silly quality of "Nanotech is pseudoscience because no one's ever seen a nanotech."  Drexler used qualitative, abstract reasoning from known science; perhaps his argument has a hidden flaw according to known science.

For now, "diamondoid nanosystems are possible" is merely a best guess.  It is merely based on qualitative, abstract, approximate, potentially fallible reasoning from beliefs already in the protected belief pool of science.  Such a guess is not reliable enough itself to be added to the protected belief pool.  It is merely rational.

" } }, { "_id": "fhojYBGGiYAFcryHZ", "title": "Scientific Evidence, Legal Evidence, Rational Evidence", "pageUrl": "https://www.lesswrong.com/posts/fhojYBGGiYAFcryHZ/scientific-evidence-legal-evidence-rational-evidence", "postedAt": "2007-08-19T05:36:12.000Z", "baseScore": 147, "voteCount": 129, "commentCount": 18, "url": null, "contents": { "documentId": "fhojYBGGiYAFcryHZ", "html": "

Suppose that your good friend, the police commissioner, tells you in strictest confidence that the crime kingpin of your city is Wulky Wilkinsen. As a rationalist, are you licensed to believe this statement? Put it this way: if you go ahead and insult Wulky, I’d call you foolhardy. Since it is prudent to act as if Wulky has a substantially higher-than-default probability of being a crime boss, the police commissioner’s statement must have been strong Bayesian evidence.

Our legal system will not imprison Wulky on the basis of the police commissioner’s statement. It is not admissible as legal evidence. Maybe if you locked up every person accused of being a crime boss by a police commissioner, you’d initially catch a lot of crime bosses, and relatively few people the commissioner just didn’t like. But unrestrained power attracts corruption like honey attracts flies: over time, you’d catch fewer and fewer real crime bosses (who would go to greater lengths to ensure anonymity), and more and more innocent victims.

This does not mean that the police commissioner’s statement is not rational evidence. It still has a lopsided likelihood ratio, and you’d still be a fool to insult Wulky. But on a social level, in pursuit of a social goal, we deliberately define “legal evidence” to include only particular kinds of evidence, such as the police commissioner’s own observations on the night of April 4th. All legal evidence should ideally be rational evidence, but not the other way around. We impose special, strong, additional standards before we anoint rational evidence as “legal evidence.”

As I write this sentence at 8:33 p.m., Pacific time, on August 18th, 2007, I am wearing white socks. As a rationalist, are you licensed to believe the previous statement? Yes. Could I testify to it in court? Yes. Is it a scientific statement? No, because there is no experiment you can perform yourself to verify it. Science is made up of generalizations which apply to many particular instances, so that you can run new real-world experiments which test the generalization, and thereby verify for yourself that the generalization is true, without having to trust anyone’s authority. Science is the publicly reproducible knowledge of humankind.

Like a court system, science as a social process is made up of fallible humans. We want a protected pool of beliefs that are especially reliable. And we want social rules that encourage the generation of such knowledge. So we impose special, strong, additional standards before we canonize rational knowledge as “scientific knowledge,” adding it to the protected belief pool. Is a rationalist licensed to believe in the historical existence of Alexander the Great? Yes. We have a rough picture of ancient Greece, untrustworthy but better than maximum entropy. But we are dependent on authorities such as Plutarch; we cannot discard Plutarch and verify everything for ourselves. Historical knowledge is not scientific knowledge.

Is a rationalist licensed to believe that the Sun will rise on September 18th, 2007? Yes—not with absolute certainty, but that’s the way to bet.1 Is this statement, as I write this essay on August 18th, 2007, a scientific belief?

It may seem perverse to deny the adjective “scientific” to statements like “The Sun will rise on September 18th, 2007.” If Science could not make predictions about future events—events which have not yet happened—then it would be useless; it could make no prediction in advance of experiment. The prediction that the Sun will rise is, definitely, an extrapolation from scientific generalizations. It is based upon models of the Solar System that you could test for yourself by experiment.

But imagine that you’re constructing an experiment to verify prediction #27, in a new context, of an accepted theory Q. You may not have any concrete reason to suspect the belief is wrong; you just want to test it in a new context. It seems dangerous to say, before running the experiment, that there is a “scientific belief” about the result. There is a “conventional prediction” or “theory Q’s prediction.” But if you already know the “scientific belief” about the result, why bother to run the experiment?

You begin to see, I hope, why I identify Science with generalizations, rather than the history of any one experiment. A historical event happens once; generalizations apply over many events. History is not reproducible; scientific generalizations are.

Is my definition of “scientific knowledge” true? That is not a well-formed question. The special standards we impose upon science are pragmatic choices. Nowhere upon the stars or the mountains is it written that p < 0.05 shall be the standard for scientific publication. Many now argue that 0.05 is too weak, and that it would be useful to lower it to 0.01 or 0.001.

Perhaps future generations, acting on the theory that science is the public, reproducible knowledge of humankind, will only label as “scientific” papers published in an open-access journal. If you charge for access to the knowledge, is it part of the knowledge of humankind? Can we fully trust a result if people must pay to criticize it?

For myself, I think scientific practice would be better served by the dictum that only open, public knowledge counts. But however we choose to define “science,” information in a $20,000/year closed-access journal will still count as Bayesian evidence; and so too, the police commissioner’s private assurance that Wulky is the kingpin.


1 Pedants: interpret this as the Earth’s rotation and orbit remaining roughly constant relative to the Sun.

" } }, { "_id": "WnheMGAka4fL99eae", "title": "Hindsight Devalues Science", "pageUrl": "https://www.lesswrong.com/posts/WnheMGAka4fL99eae/hindsight-devalues-science", "postedAt": "2007-08-17T19:39:42.000Z", "baseScore": 256, "voteCount": 227, "commentCount": 44, "url": null, "contents": { "documentId": "WnheMGAka4fL99eae", "html": "

This essay is closely based on an excerpt from Meyers’s Exploring Social Psychology; the excerpt is worth reading in its entirety.

Cullen Murphy, editor of The Atlantic, said that the social sciences turn up “no ideas or conclusions that can’t be found in [any] encyclopedia of quotations . . . Day after day social scientists go out into the world. Day after day they discover that people’s behavior is pretty much what you’d expect.”

Of course, the “expectation” is all hindsight. (Hindsight bias: Subjects who know the actual answer to a question assign much higher probabilities they “would have” guessed for that answer, compared to subjects who must guess without knowing the answer.)

The historian Arthur Schlesinger, Jr. dismissed scientific studies of World War II soldiers’ experiences as “ponderous demonstrations” of common sense. For example:

  1. Better educated soldiers suffered more adjustment problems than less educated soldiers. (Intellectuals were less prepared for battle stresses than street-smart people.) 
  2. Southern soldiers coped better with the hot South Sea Island climate than Northern soldiers. (Southerners are more accustomed to hot weather.) 
  3. White privates were more eager to be promoted to noncommissioned officers than Black privates. (Years of oppression take a toll on achievement motivation.) 
  4. Southern Blacks preferred Southern to Northern White officers. (Southern officers were more experienced and skilled in interacting with Blacks.) 
  5. As long as the fighting continued, soldiers were more eager to return home than after the war ended. (During the fighting, soldiers knew they were in mortal danger.)

How many of these findings do you think you could have predicted in advance? Three out of five? Four out of five? Are there any cases where you would have predicted the opposite—where your model takes a hit? Take a moment to think before continuing . . .

 

 

. . .

 

 

In this demonstration (from Paul Lazarsfeld by way of Meyers), all of the findings above are the opposite of what was actually found.1 How many times did you think your model took a hit? How many times did you admit you would have been wrong? That’s how good your model really was. The measure of your strength as a rationalist is your ability to be more confused by fiction than by reality.

Unless, of course, I reversed the results again. What do you think?

Do your thought processes at this point, where you really don’t know the answer, feel different from the thought processes you used to rationalize either side of the “known” answer?

Daphna Baratz exposed college students to pairs of supposed findings, one true (“In prosperous times people spend a larger portion of their income than during a recession”) and one the truth’s opposite.2 In both sides of the pair, students rated the supposed finding as what they “would have predicted.” Perfectly standard hindsight bias.

Which leads people to think they have no need for science, because they “could have predicted” that.

(Just as you would expect, right?)

Hindsight will lead us to systematically undervalue the surprisingness of scientific findings, especially the discoveries we understand—the ones that seem real to us, the ones we can retrofit into our models of the world. If you understand neurology or physics and read news in that topic, then you probably underestimate the surprisingness of findings in those fields too. This unfairly devalues the contribution of the researchers; and worse, will prevent you from noticing when you are seeing evidence that doesn’t fit what you really would have expected.

We need to make a conscious effort to be shocked enough.


1 Paul F. Lazarsfeld, “The American Soldier—An Expository Review,” Public Opinion Quarterly 13, no. 3 (1949): 377–404.

2 Daphna Baratz, How Justified Is the “Obvious” Reaction? (Stanford University, 1983).

" } }, { "_id": "fkM9XsNvXdYH6PPAx", "title": "Hindsight bias", "pageUrl": "https://www.lesswrong.com/posts/fkM9XsNvXdYH6PPAx/hindsight-bias", "postedAt": "2007-08-16T21:58:45.000Z", "baseScore": 74, "voteCount": 66, "commentCount": 25, "url": null, "contents": { "documentId": "fkM9XsNvXdYH6PPAx", "html": "

Hindsight bias is when people who know the answer vastly overestimate its predictability or obviousness, compared to the estimates of subjects who must guess without advance knowledge.  Hindsight bias is sometimes called the I-knew-it-all-along effect.


Fischhoff and Beyth (1975) presented students with historical accounts of unfamiliar incidents, such as a conflict between the Gurkhas and the British in 1814.  Given the account as background knowledge, five groups of students were asked what they would have predicted as the probability for each of four outcomes: British victory, Gurkha victory, stalemate with a peace settlement, or stalemate with no peace settlement.  Four experimental groups were respectively told that these four outcomes were the historical outcome.  The fifth, control group was not told any historical outcome.  In every case, a group told an outcome assigned substantially higher probability to that outcome, than did any other group or the control group.


Hindsight bias matters in legal cases, where a judge or jury must determine whether a defendant was legally negligent in failing to foresee a hazard (Sanchiro 2003). In an experiment based on an actual legal case, Kamin and Rachlinski (1995) asked two groups to estimate the probability of flood damage caused by blockage of a city-owned drawbridge. The control group was told only the background information known to the city when it decided not to hire a bridge watcher. The experimental group was given this information, plus the fact that a flood had actually occurred. Instructions stated the city was negligent if the foreseeable probability of flooding was greater than 10%. 76% of the control group concluded the flood was so unlikely that no precautions were necessary; 57% of the experimental group concluded the flood was so likely that failure to take precautions was legally negligent. A third experimental group was told the outcome and also explicitly instructed to avoid hindsight bias, which made no difference: 56% concluded the city was legally negligent.


Viewing history through the lens of hindsight, we vastly underestimate the cost of effective safety precautions.  In 1986, the Challenger exploded for reasons traced to an O-ring losing flexibility at low temperature.  There were warning signs of a problem with the O-rings.  But preventing the Challenger disaster would have required, not attending to the problem with the O-rings, but attending to every warning sign which seemed as severe as the O-ring problem, without benefit of hindsight.  It could have been done, but it would have required a general policy much more expensive than just fixing the O-Rings.


Shortly after September 11th 2001, I thought to myself, and now someone will turn up minor intelligence warnings of something-or-other, and then the hindsight will begin.  Yes, I'm sure they had some minor warnings of an al Qaeda plot, but they probably also had minor warnings of mafia activity, nuclear material for sale, and an invasion from Mars.


Because we don't see the cost of a general policy, we learn overly specific lessons.  After September 11th, the FAA prohibited box-cutters on airplanes—as if the problem had been the failure to take this particular \"obvious\" precaution.  We don't learn the general lesson: the cost of effective caution is very high because you must attend to problems that are not as obvious now as past problems seem in hindsight.


The test of a model is how much probability it assigns to the observed outcome.  Hindsight bias systematically distorts this test; we think our model assigned much more probability than it actually did.  Instructing the jury doesn't help.  You have to write down your predictions in advance.  Or as Fischhoff (1982) put it:


When we attempt to understand past events, we implicitly test the hypotheses or rules we use both to interpret and to anticipate the world around us. If, in hindsight, we systematically underestimate the surprises that the past held and holds for us, we are subjecting those hypotheses to inordinately weak tests and, presumably, finding little reason to change them.


 


Part of the sequence Mysterious Answers to Mysterious Questions


Next post: \"Hindsight Devalues Science\"


Previous post: \"Conservation of Expected Evidence\"


Fischhoff, B. 1982. For those condemned to study the past: Heuristics and biases in hindsight. In Kahneman et. al. 1982: 332–351.


Fischhoff, B., and Beyth, R. 1975. I knew it would happen: Remembered probabilities of once-future things. Organizational Behavior and Human Performance, 13: 1-16.


Kamin, K. and Rachlinski, J. 1995. Ex Post ≠ Ex Ante: Determining Liability in Hindsight. Law and Human Behavior, 19(1): 89-104.


Sanchiro, C. 2003. Finding Error. Mich. St. L. Rev. 1189.

" } }, { "_id": "WN73eiLQkuDtSC8Ag", "title": "One Argument Against An Army", "pageUrl": "https://www.lesswrong.com/posts/WN73eiLQkuDtSC8Ag/one-argument-against-an-army", "postedAt": "2007-08-15T18:39:43.000Z", "baseScore": 110, "voteCount": 94, "commentCount": 37, "url": null, "contents": { "documentId": "WN73eiLQkuDtSC8Ag", "html": "\n\n\n\n \n\n \n\n

I talked about a style of reasoning in which not a single contrary argument is allowed, with the result that every non-supporting observation has to be argued away. Here I suggest that when people encounter a contrary argument, they prevent themselves from downshifting their confidence by rehearsing already-known support.


Suppose the country of Freedonia is debating whether its neighbor, Sylvania, is responsible for a recent rash of meteor strikes on its cities. There are several pieces of evidence suggesting this: the meteors struck cities close to the Sylvanian border; there was unusual activity in the Sylvanian stock markets before the strikes; and the Sylvanian ambassador Trentino was heard muttering about “heavenly vengeance.”


Someone comes to you and says: “I don’t think Sylvania is responsible for the meteor strikes. They have trade with us of billions of dinars annually.” “Well,” you reply, “the meteors struck cities close to Sylvania, there was suspicious activity in their stock market, and their ambassador spoke of heavenly vengeance afterward.” Since these three arguments outweigh the first, you keep your belief that Sylvania is responsible—you believe rather than disbelieve, qualitatively. Clearly, the balance of evidence weighs against Sylvania.


Then another comes to you and says: “I don’t think Sylvania is responsible for the meteor strikes. Directing an asteroid strike is really hard. Sylvania doesn’t even have a space program.” You reply, “But the meteors struck cities close to Sylvania, and their investors knew it, and the ambassador came right out and admitted it!” Again, these three arguments outweigh the first (by three arguments against one argument), so you keep your belief that Sylvania is responsible.


Indeed, your convictions are strengthened. On two separate occasions now, you have evaluated the balance of evidence, and both times the balance was tilted against Sylvania by a ratio of 3 to 1.


You encounter further arguments by the pro-Sylvania traitors—again, and again, and a hundred times again—but each time the new argument is handily defeated by 3 to 1. And on every occasion, you feel yourself becoming more confident that Sylvania was indeed responsible, shifting your prior according to the felt balance of evidence.


The problem, of course, is that by rehearsing arguments you already knew, you are double-counting the evidence. This would be a grave sin even if you double-counted all the evidence. (Imagine a scientist who does an experiment with 50 subjects and fails to obtain statistically significant results, so the scientist counts all the data twice.)
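
A small sketch with made-up likelihood ratios shows how quickly the rehearsal procedure inflates confidence, compared with counting each argument exactly once:

```python
import math

# Sketch with invented likelihood ratios: each argument is a log-odds
# contribution, and correct updating adds each one exactly once.

pro = [math.log(3), math.log(3), math.log(2)]   # border strikes, stock market, ambassador
con = [math.log(1 / 2), math.log(1 / 3)]        # billions in trade, no space program

prior_log_odds = 0.0                            # even odds before any arguments

correct = prior_log_odds + sum(pro) + sum(con)  # every item counted once

# Flawed procedure: each time a counterargument arrives, rehearse all three
# supporting arguments again before conceding the single new item.
flawed = prior_log_odds
for c in con:
    flawed += sum(pro) + c

def probability(log_odds):
    return 1 / (1 + math.exp(-log_odds))

print("integrated once:  P(Sylvania responsible) = %.2f" % probability(correct))
print("with rehearsal:   P(Sylvania responsible) = %.2f" % probability(flawed))
```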


But to selectively double-count only some evidence is sheer farce. I remember seeing a cartoon as a child, where a villain was dividing up loot using the following algorithm: “One for you, one for me. One for you, one-two for me. One for you, one-two-three for me.”


As I emphasized in the last essay, even if a cherished belief is true, a rationalist may sometimes need to downshift the probability while integrating all the evidence. Yes, the balance of support may still favor your cherished belief. But you still have to shift the probability down—yes, down—from whatever it was before you heard the contrary evidence. It does no good to rehearse supporting arguments, because you have already taken those into account.


And yet it does appear to me that when people are confronted by a new counterargument, they search for a justification not to downshift their confidence, and of course they find supporting arguments they already know. I have to keep constant vigilance not to do this myself! It feels as natural as parrying a sword-strike with a handy shield.


With the right kind of wrong reasoning, a handful of support—or even a single argument—can stand off an army of contradictions.

\n\n" } }, { "_id": "627DZcvme7nLDrbZu", "title": "Update Yourself Incrementally", "pageUrl": "https://www.lesswrong.com/posts/627DZcvme7nLDrbZu/update-yourself-incrementally", "postedAt": "2007-08-14T14:56:33.000Z", "baseScore": 120, "voteCount": 104, "commentCount": 29, "url": null, "contents": { "documentId": "627DZcvme7nLDrbZu", "html": "

Politics is the mind-killer.  Debate is war, arguments are soldiers.  There is the temptation to search for ways to interpret every possible experimental result to confirm your theory, like securing a citadel against every possible line of attack.  This you cannot do.  It is mathematically impossible. For every expectation of evidence, there is an equal and opposite expectation of counterevidence.

But it’s okay if your cherished belief isn’t perfectly defended. If the hypothesis is that the coin comes up heads 95% of the time, then one time in twenty you will expect to see what looks like contrary evidence. This is okay. It’s normal. It’s even expected, so long as you’ve got nineteen supporting observations for every contrary one. A probabilistic model can take a hit or two, and still survive, so long as the hits don't keep on coming in.2

Yet it is widely believed, especially in the court of public opinion, that a true theory can have no failures and a false theory no successes.

You find people holding up a single piece of what they conceive to be evidence, and claiming that their theory can “explain” it, as though this were all the support that any theory needed. Apparently a false theory can have no supporting evidence; it is impossible for a false theory to fit even a single event. Thus, a single piece of confirming evidence is all that any theory needs.

It is only slightly less foolish to hold up a single piece of probabilistic counterevidence as disproof, as though it were impossible for a correct theory to have even a slight argument against it. But this is how humans have argued for ages and ages, trying to defeat all enemy arguments, while denying the enemy even a single shred of support. People want their debates to be one-sided; they are accustomed to a world in which their preferred theories have not one iota of antisupport. Thus, allowing a single item of probabilistic counterevidence would be the end of the world.

I just know someone in the audience out there is going to say, “But you can’t concede even a single point if you want to win debates in the real world! If you concede that any counterarguments exist, the Enemy will harp on them over and over—you can’t let the Enemy do that! You’ll lose! What could be more viscerally terrifying than that?”

Whatever. Rationality is not for winning debates, it is for deciding which side to join. If you’ve already decided which side to argue for, the work of rationality is done within you, whether well or poorly. But how can you, yourself, decide which side to argue? If choosing the wrong side is viscerally terrifying, even just a little viscerally terrifying, you’d best integrate all the evidence.

Rationality is not a walk, but a dance. On each step in that dance your foot should come down in exactly the correct spot, neither to the left nor to the right. Shifting belief upward with each iota of confirming evidence. Shifting belief downward with each iota of contrary evidence. Yes, down. Even with a correct model, if it is not an exact model, you will sometimes need to revise your belief down.

If an iota or two of evidence happens to countersupport your belief, that’s okay. It happens, sometimes, with probabilistic evidence for non-exact theories. (If an exact theory fails, you are in trouble!) Just shift your belief downward a little—the probability, the odds ratio, or even a nonverbal weight of credence in your mind. Just shift downward a little, and wait for more evidence. If the theory is true, supporting evidence will come in shortly, and the probability will climb again. If the theory is false, you don’t really want it anyway.

The problem with using black-and-white, binary, qualitative reasoning is that any single observation either destroys the theory or it does not. When not even a single contrary observation is allowed, it creates cognitive dissonance and has to be argued away. And this rules out incremental progress; it rules out correct integration of all the evidence. Reasoning probabilistically, we realize that on average, a correct theory will generate a greater weight of support than countersupport. And so you can, without fear, say to yourself: “This is gently contrary evidence, I will shift my belief downward.” Yes, down. It does not destroy your cherished theory. That is qualitative reasoning; think quantitatively.

For every expectation of evidence, there is an equal and opposite expectation of counterevidence. On every occasion, you must, on average, anticipate revising your beliefs downward as much as you anticipate revising them upward. If you think you already know what evidence will come in, then you must already be fairly sure of your theory—probability close to 1—which doesn’t leave much room for the probability to go further upward. And however unlikely it seems that you will encounter disconfirming evidence, the resulting downward shift must be large enough to precisely balance the anticipated gain on the other side. The weighted mean of your expected posterior probability must equal your prior probability.

How silly is it, then, to be terrified of revising your probability downward, if you’re bothering to investigate a matter at all? On average, you must anticipate as much downward shift as upward shift from every individual observation.

It may perhaps happen that an iota of antisupport comes in again, and again and again, while new support is slow to trickle in. You may find your belief drifting downward and further downward. Until, finally, you realize from which quarter the winds of evidence are blowing against you. In that moment of realization, there is no point in constructing excuses. In that moment of realization, you have already relinquished your cherished belief. Yay! Time to celebrate! Pop a champagne bottle or send out for pizza! You can’t become stronger by keeping the beliefs you started with, after all.

" } }, { "_id": "jiBFC7DcCrZjGmZnJ", "title": "Conservation of Expected Evidence", "pageUrl": "https://www.lesswrong.com/posts/jiBFC7DcCrZjGmZnJ/conservation-of-expected-evidence", "postedAt": "2007-08-13T15:55:26.000Z", "baseScore": 298, "voteCount": 246, "commentCount": 82, "url": null, "contents": { "documentId": "jiBFC7DcCrZjGmZnJ", "html": "

Friedrich Spee von Langenfeld, a priest who heard the confessions of condemned witches, wrote in 1631 the Cautio Criminalis (“prudence in criminal cases”), in which he bitingly described the decision tree for condemning accused witches: If the witch had led an evil and improper life, she was guilty; if she had led a good and proper life, this too was a proof, for witches dissemble and try to appear especially virtuous. After the woman was put in prison: if she was afraid, this proved her guilt; if she was not afraid, this proved her guilt, for witches characteristically pretend innocence and wear a bold front. Or on hearing of a denunciation of witchcraft against her, she might seek flight or remain; if she ran, that proved her guilt; if she remained, the devil had detained her so she could not get away.

Spee acted as confessor to many witches; he was thus in a position to observe every branch of the accusation tree, that no matter what the accused witch said or did, it was held as proof against her. In any individual case, you would only hear one branch of the dilemma. It is for this reason that scientists write down their experimental predictions in advance.

But you can’t have it both ways—as a matter of probability theory, not mere fairness. The rule that “absence of evidence is evidence of absence” is a special case of a more general law, which I would name Conservation of Expected Evidence: the expectation of the posterior probability, after viewing the evidence, must equal the prior probability.

P(H) = P(H)
P(H) = P(H,E) + P(H,~E)
P(H) = P(H|E)*P(E) + P(H|~E)*P(~E)

Therefore, for every expectation of evidence, there is an equal and opposite expectation of counterevidence.

If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction. If you’re very confident in your theory, and therefore anticipate seeing an outcome that matches your hypothesis, this can only provide a very small increment to your belief (it is already close to 1); but the unexpected failure of your prediction would (and must) deal your confidence a huge blow. On average, you must expect to be exactly as confident as when you started out. Equivalently, the mere expectation of encountering evidence—before you’ve actually seen it—should not shift your prior beliefs.
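
A quick numeric check of that balance (the probabilities here are invented for illustration):

```python
# Invented numbers: a confident prior, and evidence that is probable if the
# hypothesis is true.  The frequent small gain and the rare large loss
# average out to exactly the prior.

p_h = 0.9          # prior P(H)
p_e_h = 0.8        # P(E | H)
p_e_not_h = 0.2    # P(E | not-H)

p_e = p_e_h * p_h + p_e_not_h * (1 - p_h)       # P(E)
post_e = p_e_h * p_h / p_e                      # P(H | E): small upward shift
post_not_e = (1 - p_e_h) * p_h / (1 - p_e)      # P(H | not-E): large downward shift

expected = post_e * p_e + post_not_e * (1 - p_e)

print("P(H|E)  = %.3f  (shift %+.3f, seen with probability %.2f)" % (post_e, post_e - p_h, p_e))
print("P(H|~E) = %.3f  (shift %+.3f, seen with probability %.2f)" % (post_not_e, post_not_e - p_h, 1 - p_e))
print("expected posterior = %.3f = prior" % expected)
```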

So if you claim that “no sabotage” is evidence for the existence of a Japanese-American Fifth Column, you must conversely hold that seeing sabotage would argue against a Fifth Column. If you claim that “a good and proper life” is evidence that a woman is a witch, then an evil and improper life must be evidence that she is not a witch. If you argue that God, to test humanity’s faith, refuses to reveal His existence, then the miracles described in the Bible must argue against the existence of God.

Doesn’t quite sound right, does it? Pay attention to that feeling of this seems a little forced, that quiet strain in the back of your mind. It’s important.

For a true Bayesian, it is impossible to seek evidence that confirms a theory. There is no possible plan you can devise, no clever strategy, no cunning device, by which you can legitimately expect your confidence in a fixed proposition to be higher (on average) than before. You can only ever seek evidence to test a theory, not to confirm it.

This realization can take quite a load off your mind. You need not worry about how to interpret every possible experimental result to confirm your theory. You needn’t bother planning how to make any given iota of evidence confirm your theory, because you know that for every expectation of evidence, there is an equal and opposite expectation of counterevidence. If you try to weaken the counterevidence of a possible “abnormal” observation, you can only do it by weakening the support of a “normal” observation, to a precisely equal and opposite degree. It is a zero-sum game. No matter how you connive, no matter how you argue, no matter how you strategize, you can’t possibly expect the resulting game plan to shift your beliefs (on average) in a particular direction.

You might as well sit back and relax while you wait for the evidence to come in.

. . . Human psychology is so screwed up.

" } }, { "_id": "mnS2WYLCGJP2kQkRn", "title": "Absence of Evidence Is Evidence of Absence", "pageUrl": "https://www.lesswrong.com/posts/mnS2WYLCGJP2kQkRn/absence-of-evidence-is-evidence-of-absence", "postedAt": "2007-08-12T20:34:16.000Z", "baseScore": 181, "voteCount": 158, "commentCount": 119, "url": null, "contents": { "documentId": "mnS2WYLCGJP2kQkRn", "html": "

From Robyn Dawes’s Rational Choice in an Uncertain World:

In fact, this post-hoc fitting of evidence to hypothesis was involved in a most grievous chapter in United States history: the internment of Japanese-Americans at the beginning of the Second World War. When California governor Earl Warren testified before a congressional hearing in San Francisco on February 21, 1942, a questioner pointed out that there had been no sabotage or any other type of espionage by the Japanese-Americans up to that time. Warren responded, “I take the view that this lack [of subversive activity] is the most ominous sign in our whole situation. It convinces me more than perhaps any other factor that the sabotage we are to get, the Fifth Column activities are to get, are timed just like Pearl Harbor was timed . . . I believe we are just being lulled into a false sense of security.”

Consider Warren’s argument from a Bayesian perspective. When we see evidence, hypotheses that assigned a higher likelihood to that evidence gain probability, at the expense of hypotheses that assigned a lower likelihood to the evidence. This is a phenomenon of relative likelihoods and relative probabilities. You can assign a high likelihood to the evidence and still lose probability mass to some other hypothesis, if that other hypothesis assigns a likelihood that is even higher.

Warren seems to be arguing that, given that we see no sabotage, this confirms that a Fifth Column exists. You could argue that a Fifth Column might delay its sabotage. But the likelihood is still higher that the absence of a Fifth Column would perform an absence of sabotage.

Let E stand for the observation of sabotage, and ¬E for the observation of no sabotage. The symbol H1 stands for the hypothesis of a Japanese-American Fifth Column, and H2 for the hypothesis that no Fifth Column exists. The conditional probability P(E | H), or “E given H,” is how confidently we’d expect to see the evidence E if we assumed the hypothesis H were true.

Whatever the likelihood that a Fifth Column would do no sabotage, the probability P(¬E | H1), it won’t be as large as the likelihood that there’s no sabotage given that there’s no Fifth Column, the probability P(¬E | H2). So observing a lack of sabotage increases the probability that no Fifth Column exists.
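
With invented numbers for illustration, say a Fifth Column would hold off on sabotage seventy percent of the time, while the absence of a Fifth Column makes sabotage nearly certain not to appear, the update still runs in only one direction:

```python
# Invented numbers for the Fifth Column example.
p_h1 = 0.5        # prior P(Fifth Column exists)
p_noE_h1 = 0.7    # P(no sabotage | Fifth Column): they might be lying low
p_noE_h2 = 0.99   # P(no sabotage | no Fifth Column)

p_noE = p_noE_h1 * p_h1 + p_noE_h2 * (1 - p_h1)
posterior_h1 = p_noE_h1 * p_h1 / p_noE

print("P(Fifth Column | no sabotage) = %.3f, down from a prior of %.2f" % (posterior_h1, p_h1))
```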

A lack of sabotage doesn’t prove that no Fifth Column exists. Absence of proof is not proof of absence. In logic, (A ⇒ B), read “A implies B,” is not equivalent to (¬A ⇒ ¬B), read “not-A implies not-B.”

But in probability theory, absence of evidence is always evidence of absence. If E is a binary event and P(H | E) > P(H), i.e., seeing E increases the probability of H, then P(H | ¬ E) < P(H), i.e., failure to observe E decreases the probability of H . The probability P(H) is a weighted mix of P(H | E) and P(H | ¬ E), and necessarily lies between the two.1

Under the vast majority of real-life circumstances, a cause may not reliably produce signs of itself, but the absence of the cause is even less likely to produce the signs. The absence of an observation may be strong evidence of absence or very weak evidence of absence, depending on how likely the cause is to produce the observation. The absence of an observation that is only weakly permitted (even if the alternative hypothesis does not allow it at all) is very weak evidence of absence (though it is evidence nonetheless). This is the fallacy of “gaps in the fossil record”—fossils form only rarely; it is futile to trumpet the absence of a weakly permitted observation when many strong positive observations have already been recorded. But if there are no positive observations at all, it is time to worry; hence the Fermi Paradox.

Your strength as a rationalist is your ability to be more confused by fiction than by reality; if you are equally good at explaining any outcome you have zero knowledge. The strength of a model is not what it can explain, but what it can’t, for only prohibitions constrain anticipation. If you don’t notice when your model makes the evidence unlikely, you might as well have no model, and also you might as well have no evidence; no brain and no eyes.


1 If any of this sounds at all confusing, see my discussion of Bayesian updating toward the end of The Machine in the Ghost, the third volume of Rationality: From AI to Zombies.

" } }, { "_id": "vrHRcEDMjZcx5Yfru", "title": "I Defy the Data!", "pageUrl": "https://www.lesswrong.com/posts/vrHRcEDMjZcx5Yfru/i-defy-the-data", "postedAt": "2007-08-11T21:33:19.000Z", "baseScore": 104, "voteCount": 82, "commentCount": 12, "url": null, "contents": { "documentId": "vrHRcEDMjZcx5Yfru", "html": "

One of the great weaknesses of Science is this mistaken idea that if an experiment contradicts the dominant theory, we should throw out the theory instead of the experiment.

Experiments can go awry.  They can contain design flaws.  They can be deliberately corrupted.  They can be unconsciously corrupted.  They can be selectively reported.  Most of all, 1 time in 20 they can be "statistically significant" by sheer coincidence, and there are a lot of experiments out there.
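
A toy simulation of that last point (illustrative only): run a pile of experiments on a treatment with no real effect, test each at p<0.05, and roughly one in twenty comes out "statistically significant" anyway.

```python
import random

# Toy simulation: 1000 null experiments, each a coin with true success rate 0.5,
# tested against 0.5 with a two-sided normal approximation at p < 0.05.
random.seed(0)
n_experiments, n_subjects = 1000, 100
false_positives = 0

for _ in range(n_experiments):
    successes = sum(random.random() < 0.5 for _ in range(n_subjects))
    z = (successes - n_subjects * 0.5) / (n_subjects * 0.25) ** 0.5
    if abs(z) > 1.96:
        false_positives += 1

print("'significant' results with no real effect: %d out of %d" % (false_positives, n_experiments))
```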

Unfortunately, Science has this notion that you can never go against an honestly obtained experimental result.  So, when someone obtains an experimental result that contradicts the standard model, researchers are faced with a dilemma for resolving their cognitive dissonance: they either have to immediately throw away the standard model, or else attack the experiment - accuse the researchers of dishonesty, or flawed design, or conflict of interest...

Someone once presented me with a new study on the effects of intercessory prayer (that is, people praying for patients who are not told about the prayer), which showed 50% of the prayed-for patients achieving success at in-vitro fertilization, versus 25% of the control group.  I liked this claim.  It had a nice large effect size.  Claims of blatant impossible effects are much more pleasant to deal with than claims of small impossible effects that are "statistically significant".


So I cheerfully said:  "I defy the data."

My original phrasing was actually "I deny the data".  Nonetheless I said it outright, without apology, and with deliberate insolence.  I am keeping my theory; your experiment is wrong.

If an experimental result contradicts the Standard Model, this is an important fact.  It needs to be openly acknowledged.  An experiment that makes traditionalists want to discard the data - or even an experiment that makes traditionalists very skeptical of the data - should be a high priority for replication.  An experiment worth defying should command attention!

But it is not socially acceptable to say, "The hell with your experimental falsification, I'm keeping my theory."  So the data has to be defied covertly - by character assassination of the researchers, by sly innuendos, by dire hints of controversy.  The data has to be dismissed, excused away, swept under a rug, silently into the dark, because you can't admit you're defying the data.  This is not a good way of focusing attention on an anomalous result.  This is not a good way to ensure funding for replication attempts.

It would be much better if science had a standard procedure for saying, "I defy the data!"  It would be clearly understood that this was a bold act, and someone else in the audience might stand up and say, "Wait a minute, is that data really worth defying?"  If a major figure in the field said "I defy the data!", this would be sufficient justification on grant proposals for why the result urgently needed replication.  Scientists could say, "I'm holding my breath, waiting for replication," rather than having to take sides immediately in the character-assassination controversy.

Maybe you could even get the media to report that the experiment has been "published but defied".  Then the replication, or failure to replicate, would be news.  The replicators could get their names in the newspaper, and the negative result could be published in a major journal.  If you want replications done, you'll have to offer some incentive.

I would also suggest that when an experiment is defied, the replication must pre-declare a minimum effect size, and attain significance of p<0.01.  In extreme cases where claims have been made and shot down before, p<0.001.

Oh, and the prayer study?  Soon enough we heard that it had been retracted and was probably fraudulent.  But I didn't say fraud.  I didn't speculate on how the results might have been obtained.  That would have been dismissive.  I just stuck my neck out, and nakedly, boldly, without excuses, defied the data.

Addendum:  I should have spelled this out explicitly:  You can defy the data on one experiment.  You can't defy the data on multiple experiments.  At that point you either have to relinquish the theory or dismiss the data - point to a design flaw, or refer to an even larger body of experiments that failed to replicate the result, or accuse the researchers of a deliberate hoax, et cetera.  But you should not turn around and argue that the theory and the experiment are actually compatible.  Why didn't you think of that before you defied the data?  Defying the data admits that the data is not compatible with your theory; it sticks your neck way out, so your head can be easily chopped off.

" } }, { "_id": "5JDkW4MYXit2CquLs", "title": "Your Strength as a Rationalist", "pageUrl": "https://www.lesswrong.com/posts/5JDkW4MYXit2CquLs/your-strength-as-a-rationalist", "postedAt": "2007-08-11T00:21:20.000Z", "baseScore": 307, "voteCount": 255, "commentCount": 123, "url": null, "contents": { "documentId": "5JDkW4MYXit2CquLs", "html": "

The following happened to me in an IRC chatroom, long enough ago that I was still hanging around in IRC chatrooms. Time has fuzzed the memory and my report may be imprecise.

So there I was, in an IRC chatroom, when someone reports that a friend of his needs medical advice. His friend says that he’s been having sudden chest pains, so he called an ambulance, and the ambulance showed up, but the paramedics told him it was nothing, and left, and now the chest pains are getting worse. What should his friend do?

I was confused by this story. I remembered reading about homeless people in New York who would call ambulances just to be taken someplace warm, and how the paramedics always had to take them to the emergency room, even on the 27th iteration. Because if they didn’t, the ambulance company could be sued for lots and lots of money. Likewise, emergency rooms are legally obligated to treat anyone, regardless of ability to pay.1 So I didn’t quite understand how the described events could have happened. Anyone reporting sudden chest pains should have been hauled off by an ambulance instantly.

And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, “Well, if the paramedics told your friend it was nothing, it must really be nothing—they’d have hauled him off if there was the tiniest chance of serious trouble.”

Thus I managed to explain the story within my existing model, though the fit still felt a little forced . . .

Later on, the fellow comes back into the IRC chatroom and says his friend made the whole thing up. Evidently this was not one of his more reliable friends.

I should have realized, perhaps, that an unknown acquaintance of an acquaintance in an IRC channel might be less reliable than a published journal article. Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.2

So instead, by dint of mighty straining, I forced my model of reality to explain an anomaly that never actually happened. And I knew how embarrassing this was. I knew that the usefulness of a model is not what it can explain, but what it can’t. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.

Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.

We are all weak, from time to time; the sad part is that I could have been stronger. I had all the information I needed to arrive at the correct answer, I even noticed the problem, and then I ignored it. My feeling of confusion was a Clue, and I threw my Clue away.

I should have paid more attention to that sensation of still feels a little forced. It’s one of the most important feelings a truthseeker can have, a part of your strength as a rationalist. It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading:

Either Your Model Is False Or This Story Is Wrong.

1 And the hospital absorbs the costs, which are enormous, so hospitals are closing their emergency rooms . . . It makes you wonder what’s the point of having economists if we’re just going to ignore them.

2 From McCluskey (2007), “Truth Bias”: “[P]eople are more likely to correctly judge that a truthful statement is true than that a lie is false. This appears to be a fairly robust result that is not just a function of truth being the correct guess where the evidence is weak—it shows up in controlled experiments where subjects have good reason not to assume truth[.]” http://www.overcomingbias.com/2007/08/truth-bias.html .

And from Gilbert et al. (1993), “You Can’t Not Believe Everything You Read”: “Can people comprehend assertions without believing them? [...] Three experiments support the hypothesis that comprehension includes an initial belief in the information comprehended.”

" } }, { "_id": "dLzZWNGD23zqNLvt3", "title": "The Apocalypse Bet", "pageUrl": "https://www.lesswrong.com/posts/dLzZWNGD23zqNLvt3/the-apocalypse-bet", "postedAt": "2007-08-09T17:23:33.000Z", "baseScore": 50, "voteCount": 36, "commentCount": 51, "url": null, "contents": { "documentId": "dLzZWNGD23zqNLvt3", "html": "

A problem with betting on engineered superplagues, physics disasters, nanotechnological warfare, or intelligence explosions of both Friendly and unFriendly type, is that all these events are likely to disrupt settlement of trades (to put it mildly).  It's not easy to sell a bet that pays off only if the prediction market ceases to exist.


And yet everyone still wants to know the year, month, and day of the Singularity.  Even I want to know, I'm just professionally aware that the knowledge is not available.


This morning, I saw that someone had launched yet another poll on \"when the Singularity will occur\".  Just a raw poll, mind you, not a prediction market.  I was thinking of how completely and utterly worthless this poll was, and how a prediction market might be slightly less than completely worthless, when it occurred to me how to structure the bet - bet that \"settlement of trades will be disrupted / the resources gambled will become worthless, no later than time T\".


Suppose you think that gold will become worthless on April 27th, 2020 at between four and four-thirty in the morning.  I, on the other hand, think this event will not occur until 2030.  We can sign a contract in which I pay you one ounce of gold per year from 2010 to 2020, and then you pay me two ounces of gold per year from 2020 to 2030.  If gold becomes worthless when you say, you will have profited; if gold becomes worthless when I say, I will have profited.  We can have a prediction market on a generic apocalypse, in which participants who believe in an earlier apocalypse are paid by believers in a later apocalypse, until they pass the date of their prediction, at which time the flow reverses with interest.  I don't see any way to distinguish between apocalypses, but we can ask the participants why they were willing to bet, and probably receive a decent answer.
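
A small sketch of the cash flows in that example, counting only gold delivered while gold still has value:

```python
# Sketch of the example bet above: I pay 1 oz/year over 2010-2019, you pay
# 2 oz/year over 2020-2029, and only gold delivered before the crash counts.

def ounces_received(start, stop, oz_per_year, gold_worthless_from):
    """Ounces of still-valuable gold received from yearly payments in [start, stop)."""
    return sum(oz_per_year for year in range(start, stop) if year < gold_worthless_from)

for label, crash_year in (("gold worthless in 2020", 2020), ("gold worthless in 2030", 2030)):
    yours = ounces_received(2010, 2020, 1, crash_year)   # what I paid you
    mine = ounces_received(2020, 2030, 2, crash_year)    # what you paid me
    print("%s: your net %+d oz, my net %+d oz" % (label, yours - mine, mine - yours))
```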

\n

I would be quite interested in seeing what such a market had to say.  And if the predicted date was hovering around 2080, I would pick up as much of that free money as I dared.

\n
\n

EDIT:  Robin Hanson pointed out why this wouldn't work.  See comments.

" } }, { "_id": "HYWhKXRsMAyvRKRYz", "title": "You Can Face Reality", "pageUrl": "https://www.lesswrong.com/posts/HYWhKXRsMAyvRKRYz/you-can-face-reality", "postedAt": "2007-08-09T01:46:36.000Z", "baseScore": 202, "voteCount": 173, "commentCount": 41, "url": null, "contents": { "documentId": "HYWhKXRsMAyvRKRYz", "html": "

What is true is already so.

Owning up to it doesn’t make it worse.

Not being open about it doesn’t make it go away.

And because it’s true, it is what is there to be interacted with.

Anything untrue isn’t there to be lived.

People can stand what is true,

for they are already enduring it.

Eugene Gendlin

" } }, { "_id": "yDfxTj9TKYsYiWH5o", "title": "The Virtue of Narrowness", "pageUrl": "https://www.lesswrong.com/posts/yDfxTj9TKYsYiWH5o/the-virtue-of-narrowness", "postedAt": "2007-08-07T17:57:46.000Z", "baseScore": 139, "voteCount": 124, "commentCount": 66, "url": null, "contents": { "documentId": "yDfxTj9TKYsYiWH5o", "html": "

What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world.

—“The Twelve Virtues of Rationality”

Within their own professions, people grasp the importance of narrowness; a car mechanic knows the difference between a carburetor and a radiator, and would not think of them both as “car parts.” A hunter-gatherer knows the difference between a lion and a panther. A janitor does not wipe the floor with window cleaner, even if the bottles look similar to one who has not mastered the art.

Outside their own professions, people often commit the misstep of trying to broaden a word as widely as possible, to cover as much territory as possible. Is it not more glorious, more wise, more impressive, to talk about all the apples in the world? How much loftier it must be to explain human thought in general, without being distracted by smaller questions, such as how humans invent techniques for solving a Rubik’s Cube. Indeed, it scarcely seems necessary to consider specific questions at all; isn’t a general theory a worthy enough accomplishment on its own?

It is the way of the curious to lift up one pebble from among a million pebbles on the shore, and see something new about it, something interesting, something different. You call these pebbles “diamonds,” and ask what might be special about them—what inner qualities they might have in common, beyond the glitter you first noticed. And then someone else comes along and says: “Why not call this pebble a diamond too? And this one, and this one?” They are enthusiastic, and they mean well. For it seems undemocratic and exclusionary and elitist and unholistic to call some pebbles “diamonds,” and others not. It seems . . . narrow-minded . . . if you’ll pardon the phrase. Hardly open, hardly embracing, hardly communal.

You might think it poetic, to give one word many meanings, and thereby spread shades of connotation all around. But even poets, if they are good poets, must learn to see the world precisely. It is not enough to compare love to a flower. Hot jealous unconsummated love is not the same as the love of a couple married for decades. If you need a flower to symbolize jealous love, you must go into the garden, and look, and make subtle distinctions—find a flower with a heady scent, and a bright color, and thorns. Even if your intent is to shade meanings and cast connotations, you must keep precise track of exactly which meanings you shade and connote.

It is a necessary part of the rationalist’s art—or even the poet’s art!—to focus narrowly on unusual pebbles which possess some special quality. And look at the details which those pebbles—and those pebbles alone!—share among each other. This is not a sin.

It is perfectly all right for modern evolutionary biologists to explain just the patterns of living creatures, and not the “evolution” of stars or the “evolution” of technology. Alas, some unfortunate souls use the same word “evolution” to cover the naturally selected patterns of replicating life, and the strictly accidental structure of stars, and the intelligently configured structure of technology. And as we all know, if people use the same word, it must all be the same thing. These biologists must just be too dumb to see the connections.

And what could be more virtuous than seeing connections? Surely the wisest of all human beings are the New Age gurus who say, “Everything is connected to everything else.” If you ever say this aloud, you should pause, so that everyone can absorb the sheer shock of this Deep Wisdom.

There is a trivial mapping between a graph and its complement. A fully connected graph, with an edge between every two vertices, conveys the same amount of information as a graph with no edges at all. The important graphs are the ones where some things are not connected to some other things.
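
As a toy illustration of that point (mine, not from the original essay), the complement mapping really is trivial to compute, and a complete graph's edge set carries no more information than an empty one:

```python
from itertools import combinations

def complement(vertices, edges):
    """Return the complement graph's edges: same vertices, exactly the missing pairs."""
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    return all_pairs - {frozenset(e) for e in edges}

vertices = ["A", "B", "C", "D"]
complete = {frozenset(p) for p in combinations(vertices, 2)}  # every pair connected

# The complement of the complete graph is the empty graph, and vice versa;
# either extreme is fully determined by knowing only the vertex count.
print(len(complement(vertices, complete)))  # 0 edges
print(len(complement(vertices, set())))     # 6 edges -- the complete graph again
```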

When the unenlightened ones try to be profound, they draw endless verbal comparisons between this topic, and that topic, which is like this, which is like that; until their graph is fully connected and also totally useless. The remedy is specific knowledge and in-depth study. When you understand things in detail, you can see how they are not alike, and start enthusiastically subtracting edges off your graph.

Likewise, the important categories are the ones that do not contain everything in the universe. Good hypotheses can only explain some possible outcomes, and not others.

It was perfectly all right for Isaac Newton to explain just gravity, just the way things fall down—and how planets orbit the Sun, and how the Moon generates the tides—but not the role of money in human society or how the heart pumps blood. Sneering at narrowness is rather reminiscent of ancient Greeks who thought that going out and actually looking at things was manual labor, and manual labor was for slaves.

As Plato put it in The Republic, Book VII:

If anyone should throw back his head and learn something by staring at the varied patterns on a ceiling, apparently you would think that he was contemplating with his reason, when he was only staring with his eyes . . . I cannot but believe that no study makes the soul look on high except that which is concerned with real being and the unseen. Whether he gape and stare upwards, or shut his mouth and stare downwards, if it be things of the senses that he tries to learn something about, I declare he never could learn, for none of these things admit of knowledge: I say his soul is looking down, not up, even if he is floating on his back on land or on sea!

Many today make a similar mistake, and think that narrow concepts are as lowly and unlofty and unphilosophical as, say, going out and looking at things—an endeavor only suited to the underclass. But rationalists—and also poets—need narrow words to express precise thoughts; they need categories that include only some things, and exclude others. There’s nothing wrong with focusing your mind, narrowing your categories, excluding possibilities, and sharpening your propositions. Really, there isn’t! If you make your words too broad, you end up with something that isn’t true and doesn’t even make good poetry.

And DON’T EVEN GET ME STARTED on people who think Wikipedia is an “Artificial Intelligence,” the invention of LSD was a “Singularity,” or that corporations are “superintelligent”!

" } }, { "_id": "43PTNr4ZMaezyAJ5o", "title": "The Proper Use of Doubt", "pageUrl": "https://www.lesswrong.com/posts/43PTNr4ZMaezyAJ5o/the-proper-use-of-doubt", "postedAt": "2007-08-06T20:29:51.000Z", "baseScore": 93, "voteCount": 84, "commentCount": 35, "url": null, "contents": { "documentId": "43PTNr4ZMaezyAJ5o", "html": "\n\n\n\n \n\n \n\n

Once, when I was holding forth upon the Way, I remarked upon how most organized belief systems exist to flee from doubt. A listener replied to me that the Jesuits must be immune from this criticism, because they practice organized doubt: their novices, he said, are told to doubt Christianity; doubt the existence of God; doubt if their calling is real; doubt that they are suitable for perpetual vows of chastity and poverty. And I said: Ah, but they’re supposed to overcome these doubts, right? He said: No, they are to doubt that perhaps their doubts may grow and become stronger.

\n\n

Googling failed to confirm or refute these allegations. But I find this scenario fascinating, worthy of discussion, regardless of whether it is true or false of Jesuits. If the Jesuits practiced deliberate doubt, as described above, would they therefore be virtuous as rationalists?

\n\n

I think I have to concede that the Jesuits, in the (possibly hypothetical) scenario above, would not properly be described as “fleeing from doubt.” But the (possibly hypothetical) conduct still strikes me as highly suspicious. To a truly virtuous rationalist, doubt should not be scary. The conduct described above sounds to me like a program of desensitization for something very scary, like exposing an arachnophobe to spiders under carefully controlled conditions.

\n\n

But even so, they are encouraging their novices to doubt—right? Does it matter if their reasons are flawed? Is this not still a worthy deed unto a rationalist?

\n\n

All curiosity seeks to annihilate itself; there is no curiosity that does not want an answer. But if you obtain an answer, if you satisfy your curiosity, then the glorious mystery will no longer be mysterious.

\n\n

In the same way, every doubt exists in order to annihilate some particular belief. If a doubt fails to destroy its target, the doubt has died unfulfilled—but that is still a resolution, an ending, albeit a sadder one. A doubt that neither destroys itself nor destroys its target might as well have never existed at all. It is the resolution of doubts, not the mere act of doubting, which drives the ratchet of rationality forward.

\n\n

Every improvement is a change, but not every change is an improvement. Every rationalist doubts, but not all doubts are rational. Wearing doubts doesn’t make you a rationalist any more than wearing a white medical lab coat makes you a doctor.

\n\n

A rational doubt comes into existence for a specific reason—you have some specific justification to suspect the belief is wrong. This reason, in turn, implies an avenue of investigation which will either destroy the targeted belief or destroy the doubt. This holds even for highly abstract doubts, like: “I wonder if there might be a simpler hypothesis which also explains this data.” In this case you investigate by trying to think of simpler hypotheses. As this search continues longer and longer without fruit, you will think it less and less likely that the next increment of computation will be the one to succeed. Eventually the cost of searching will exceed the expected benefit, and you’ll stop searching. At which point you can no longer claim to be usefully doubting. A doubt that is not investigated might as well not exist. Every doubt exists to destroy itself, one way or the other. An unresolved doubt is a null-op; it does not turn the wheel, neither forward nor back.
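
A crude way to make that stopping argument concrete (my own toy model, not anything from the post): treat the chance that the next attempt finds a simpler hypothesis as a prior that shrinks with each failure, and keep searching only while the expected gain of one more attempt exceeds its cost. All numbers below are illustrative assumptions.

```python
def keep_searching(failed_attempts, prior_success=0.05, benefit=100.0, cost_per_step=1.0):
    """Toy stopping rule: search while the next step's expected gain exceeds its cost.

    The success probability starts near prior_success and is updated downward as
    failures accumulate (posterior mean of a Beta(1, 1/prior_success) prior).
    """
    alpha, beta = 1.0, 1.0 / prior_success
    p_next = alpha / (alpha + beta + failed_attempts)  # posterior mean after the failures
    return p_next * benefit > cost_per_step

print(keep_searching(failed_attempts=0))    # True: early on, the doubt is worth investigating
print(keep_searching(failed_attempts=200))  # False: the cost of searching now exceeds the expected benefit
```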

\n\n

If you really believe a religion (and don’t just believe in it), then why would you tell your novices to consider doubts that must die unfulfilled? It would be like telling physics students to agonize over whether the twentieth-century revolution might have been a mistake, and that Newtonian mechanics was correct all along. If you don’t really doubt something, why would you pretend that you do?

\n\n

Because we all want to be seen as rational—and doubting is widely believed to be a virtue of a rationalist. But it is not widely understood that you need a particular reason to doubt, or that an unresolved doubt is a null-op. Instead people think it’s about modesty, a submissive demeanor, maintaining the tribal status hierarchy—almost exactly the same problem as with humility, on which I have previously written. Making a great public display of doubt to convince yourself that you are a rationalist will do around as much good as wearing a lab coat.

\n\n

To avoid merely professing doubts,1 remember:

\n\n \n\n
\n \n\n

1 See “Professing and Cheering” in Map and Territory.

\n
\n\n" } }, { "_id": "GJ4ZQm7crTzTM6xDW", "title": "Focus Your Uncertainty", "pageUrl": "https://www.lesswrong.com/posts/GJ4ZQm7crTzTM6xDW/focus-your-uncertainty", "postedAt": "2007-08-05T20:49:59.000Z", "baseScore": 126, "voteCount": 114, "commentCount": 21, "url": null, "contents": { "documentId": "GJ4ZQm7crTzTM6xDW", "html": "\n\n\n\n \n\n \n\n

Will bond yields go up, or down, or remain the same? If you’re a TV pundit and your job is to explain the outcome after the fact, then there’s no reason to worry. No matter which of the three possibilities comes true, you’ll be able to explain why the outcome perfectly fits your pet market theory. There’s no reason to think of these three possibilities as somehow opposed to one another, as exclusive, because you’ll get full marks for punditry no matter which outcome occurs.

\n\n

But wait! Suppose you’re a novice TV pundit, and you aren’t experienced enough to make up plausible explanations on the spot. You need to prepare remarks in advance for tomorrow’s broadcast, and you have limited time to prepare. In this case, it would be helpful to know which outcome will actually occur—whether bond yields will go up, down, or remain the same—because then you would only need to prepare one set of excuses.

\n\n

Alas, no one can possibly foresee the future. What are you to do? You certainly can’t use “probabilities.” We all know from school that “probabilities” are little numbers that appear next to a word problem, and there aren’t any little numbers here. Worse, you feel uncertain. You don’t remember feeling uncertain while you were manipulating the little numbers in word problems. College classes teaching math are nice clean places, so math can’t apply to life situations that aren’t nice and clean. You wouldn’t want to inappropriately transfer thinking skills from one context to another. Clearly, this is not a matter for “probabilities.”

\n\n

Nonetheless, you only have 100 minutes to prepare your excuses. You can’t spend the entire 100 minutes on “up,” and also spend all 100 minutes on “down,” and also spend all 100 minutes on “same.” You’ve got to prioritize somehow.

\n\n

If you needed to justify your time expenditure to a review committee, you would have to spend equal time on each possibility. Since there are no little numbers written down, you’d have no documentation to justify spending different amounts of time. You can hear the reviewers now: And why, Mr. Finkledinger, did you spend exactly 42 minutes on excuse #3? Why not 41 minutes, or 43? Admit it—you’re not being objective! You’re playing subjective favorites!

\n\n

But, you realize with a small flash of relief, there’s no review committee to scold you. This is good, because there’s a major Federal Reserve announcement tomorrow, and it seems unlikely that bond prices will remain the same. You don’t want to spend 33 precious minutes on an excuse you don’t anticipate needing.

\n\n

Your mind keeps drifting to the explanations you use on television, of why each event plausibly fits your market theory. But it rapidly becomes clear that plausibility can’t help you here—all three events are plausible. Fittability to your pet market theory doesn’t tell you how to divide your time. There’s an uncrossable gap between your 100 minutes of time, which are conserved; versus your ability to explain how an outcome fits your theory, which is unlimited.

\n\n

And yet . . . even in your uncertain state of mind, it seems that you anticipate the three events differently; that you expect to need some excuses more than others. And—this is the fascinating part—when you think of something that makes it seem more likely that bond prices will go up, then you feel less likely to need an excuse for bond prices going down or remaining the same.

\n\n

It even seems like there’s a relation between how much you anticipate each of the three outcomes, and how much time you want to spend preparing each excuse. Of course the relation can’t actually be quantified. You have 100 minutes to prepare your speech, but there isn’t 100 of anything to divide up in this anticipation business. (Although you do work out that, if some particular outcome occurs, then your utility function is logarithmic in time spent preparing the excuse.)
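
Taking that parenthetical at face value, the allocation problem it hints at has a clean answer (this working-out is mine, not spelled out in the essay): maximizing the expected logarithmic payoff over the three outcomes, under a fixed 100-minute budget, puts preparation time on each excuse in proportion to how strongly you anticipate needing it. A quick numerical check, with made-up anticipation weights and assuming SciPy is available:

```python
from math import log
from scipy.optimize import minimize

p = [0.2, 0.7, 0.1]  # hypothetical anticipation weights for "up", "down", "same"
BUDGET = 100.0       # minutes available

def neg_expected_log_payoff(t):
    # Expected log-utility of spending t[i] minutes on excuse i, negated for minimization.
    return -sum(pi * log(ti) for pi, ti in zip(p, t))

result = minimize(
    neg_expected_log_payoff,
    x0=[BUDGET / 3] * 3,
    bounds=[(1e-6, BUDGET)] * 3,
    constraints=[{"type": "eq", "fun": lambda t: sum(t) - BUDGET}],
)
print([round(ti, 1) for ti in result.x])  # approximately [20.0, 70.0, 10.0]
print([pi * BUDGET for pi in p])          # exactly proportional: [20.0, 70.0, 10.0]
```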

\n\n

Still . . . your mind keeps coming back to the idea that anticipation is limited, unlike excusability, but like time to prepare excuses. Maybe anticipation should be treated as a conserved resource, like money. Your first impulse is to try to get more anticipation, but you soon realize that, even if you get more anticipation, you won’t have any more time to prepare your excuses. No, your only course is to allocate your limited supply of anticipation as best you can.

\n\n

You’re pretty sure you weren’t taught anything like that in your statistics courses. They didn’t tell you what to do when you felt so terribly uncertain. They didn’t tell you what to do when there were no little numbers handed to you. Why, even if you tried to use numbers, you might end up using any sort of numbers at all—there’s no hint what kind of math to use, if you should be using math! Maybe you’d end up using pairs of numbers, right and left numbers, which you’d call DS for Dexter-Sinister . . . or who knows what else? (Though you do have only 100 minutes to spend preparing excuses.)

\n\n

If only there were an art of focusing your uncertainty—of squeezing as much anticipation as possible into whichever outcome will actually happen!

\n\n

But what could we call an art like that? And what would the rules be like?

\n\n" } }, { "_id": "wCqfCLs8z5Qw4GbKS", "title": "The Importance of Saying \"Oops\"", "pageUrl": "https://www.lesswrong.com/posts/wCqfCLs8z5Qw4GbKS/the-importance-of-saying-oops", "postedAt": "2007-08-05T03:17:46.000Z", "baseScore": 281, "voteCount": 242, "commentCount": 36, "url": null, "contents": { "documentId": "wCqfCLs8z5Qw4GbKS", "html": "\n\n\n\n \n\n \n\n

I just finished reading a history of Enron’s downfall, The Smartest Guys in the Room, which hereby wins my award for “Least Appropriate Book Title.”

\n\n

An unsurprising feature of Enron’s slow rot and abrupt collapse was that the executive players never admitted to having made a large mistake. When catastrophe #247 grew to such an extent that it required an actual policy change, they would say, “Too bad that didn’t work out—it was such a good idea—how are we going to hide the problem on our balance sheet?” As opposed to, “It now seems obvious in retrospect that it was a mistake from the beginning.” As opposed to, “I’ve been stupid.” There was never a watershed moment, a moment of humbling realization, of acknowledging a fundamental problem. After the bankruptcy, Jeff Skilling, the former COO and brief CEO of Enron, declined his own lawyers’ advice to take the Fifth Amendment; he testified before Congress that Enron had been a great company.

\n\n

Not every change is an improvement, but every improvement is necessarily a change. If we only admit small local errors, we will only make small local changes. The motivation for a big change comes from acknowledging a big mistake.

\n\n

As a child I was raised on equal parts science and science fiction, and from Heinlein to Feynman I learned the tropes of Traditional Rationality: theories must be bold and expose themselves to falsification; be willing to commit the heroic sacrifice of giving up your own ideas when confronted with contrary evidence; play nice in your arguments; try not to deceive yourself; and other fuzzy verbalisms.

\n\n

A traditional rationalist upbringing tries to produce arguers who will concede to contrary evidence eventually—there should be some mountain of evidence sufficient to move you. This is not trivial; it distinguishes science from religion. But there is less focus on speed, on giving up the fight as quickly as possible, integrating evidence efficiently so that it only takes a minimum of contrary evidence to destroy your cherished belief.

\n\n

I was raised in Traditional Rationality, and thought myself quite the rationalist. I switched to Bayescraft (Laplace / Jaynes / Tversky / Kahneman) in the aftermath of . . . well, it’s a long story. Roughly, I switched because I realized that Traditional Rationality’s fuzzy verbal tropes had been insufficient to prevent me from making a large mistake.

\n\n

After I had finally and fully admitted my mistake, I looked back upon the path that had led me to my Awful Realization. And I saw that I had made a series of small concessions, minimal concessions, grudgingly conceding each millimeter of ground, realizing as little as possible of my mistake on each occasion, admitting failure only in small tolerable nibbles. I could have moved so much faster, I realized, if I had simply screamed “OOPS!”

\n\n

And I thought: I must raise the level of my game.

\n\n

There is a powerful advantage to admitting you have made a large mistake. It’s painful. It can also change your whole life.

\n\n

It is important to have the watershed moment, the moment of humbling realization. To acknowledge a fundamental problem, not divide it into palatable bite-size mistakes.

\n\n

Do not indulge in drama and become proud of admitting errors. It is surely superior to get it right the first time. But if you do make an error, better by far to see it all at once. Even hedonically, it is better to take one large loss than many small ones. The alternative is stretching out the battle with yourself over years. The alternative is Enron.

\n\n

Since then I have watched others making their own series of minimal concessions, grudgingly conceding each millimeter of ground; never confessing a global mistake where a local one will do; always learning as little as possible from each error. What they could fix in one fell swoop voluntarily, they transform into tiny local patches they must be argued into. Never do they say, after confessing one mistake, I’ve been a fool. They do their best to minimize their embarrassment by saying I was right in principle, or It could have worked, or I still want to embrace the true essence of whatever-I’m-attached-to. Defending their pride in this passing moment, they ensure they will again make the same mistake, and again need to defend their pride.

\n\n

Better to swallow the entire bitter pill in one terrible gulp.

\n\n" } }, { "_id": "fAuWLS7RKWD2npBFR", "title": "Religion's Claim to be Non-Disprovable", "pageUrl": "https://www.lesswrong.com/posts/fAuWLS7RKWD2npBFR/religion-s-claim-to-be-non-disprovable", "postedAt": "2007-08-04T03:21:50.000Z", "baseScore": 359, "voteCount": 316, "commentCount": 333, "url": null, "contents": { "documentId": "fAuWLS7RKWD2npBFR", "html": "

The earliest account I know of a scientific experiment is, ironically, the story of Elijah and the priests of Baal.

The people of Israel are wavering between Jehovah and Baal, so Elijah announces that he will conduct an experiment to settle it—quite a novel concept in those days! The priests of Baal will place their bull on an altar, and Elijah will place Jehovah’s bull on an altar, but neither will be allowed to start the fire; whichever God is real will call down fire on His sacrifice. The priests of Baal serve as control group for Elijah—the same wooden fuel, the same bull, and the same priests making invocations, but to a false god. Then Elijah pours water on his altar—ruining the experimental symmetry, but this was back in the early days—to signify deliberate acceptance of the burden of proof, like needing a 0.05 significance level. The fire comes down on Elijah’s altar, which is the experimental observation. The watching people of Israel shout “The Lord is God!”—peer review.

And then the people haul the 450 priests of Baal down to the river Kishon and slit their throats. This is stern, but necessary. You must firmly discard the falsified hypothesis, and do so swiftly, before it can generate excuses to protect itself. If the priests of Baal are allowed to survive, they will start babbling about how religion is a separate magisterium which can be neither proven nor disproven.

Back in the old days, people actually believed their religions instead of just believing in them. The biblical archaeologists who went in search of Noah’s Ark did not think they were wasting their time; they anticipated they might become famous. Only after failing to find confirming evidence—and finding disconfirming evidence in its place—did religionists execute what William Bartley called the retreat to commitment, “I believe because I believe.”

Back in the old days, there was no concept of religion’s being a separate magisterium. The Old Testament is a stream-of-consciousness culture dump: history, law, moral parables, and yes, models of how the universe works—like the universe being created in six days (which is a metaphor for the Big Bang), or rabbits chewing their cud. (Which is a metaphor for . . .)

Back in the old days, saying the local religion “could not be proven” would have gotten you burned at the stake. One of the core beliefs of Orthodox Judaism is that God appeared at Mount Sinai and said in a thundering voice, “Yeah, it’s all true.” From a Bayesian perspective that’s some darned unambiguous evidence of a superhumanly powerful entity. (Although it doesn’t prove that the entity is God per se, or that the entity is benevolent—it could be alien teenagers.) The vast majority of religions in human history—excepting only those invented extremely recently—tell stories of events that would constitute completely unmistakable evidence if they’d actually happened. The orthogonality of religion and factual questions is a recent and strictly Western concept. The people who wrote the original scriptures didn’t even know the difference.

The Roman Empire inherited philosophy from the ancient Greeks; imposed law and order within its provinces; kept bureaucratic records; and enforced religious tolerance. The New Testament, created during the time of the Roman Empire, bears some traces of modernity as a result. You couldn’t invent a story about God completely obliterating the city of Rome (a la Sodom and Gomorrah), because the Roman historians would call you on it, and you couldn’t just stone them.

In contrast, the people who invented the Old Testament stories could make up pretty much anything they liked. Early Egyptologists were genuinely shocked to find no trace whatsoever of Hebrew tribes having ever been in Egypt—they weren’t expecting to find a record of the Ten Plagues, but they expected to find something. As it turned out, they did find something. They found out that, during the supposed time of the Exodus, Egypt ruled much of Canaan. That’s one huge historical error, but if there are no libraries, nobody can call you on it.

The Roman Empire did have libraries. Thus, the New Testament doesn’t claim big, showy, large-scale geopolitical miracles as the Old Testament routinely did. Instead the New Testament claims smaller miracles which nonetheless fit into the same framework of evidence. A boy falls down and froths at the mouth; the cause is an unclean spirit; an unclean spirit could reasonably be expected to flee from a true prophet, but not to flee from a charlatan; Jesus casts out the unclean spirit; therefore Jesus is a true prophet and not a charlatan. This is perfectly ordinary Bayesian reasoning, if you grant the basic premise that epilepsy is caused by demons (and that the end of an epileptic fit proves the demon fled).
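
Granting the post's premise, that inference can be written in odds form; the specific numbers below are purely illustrative assumptions of mine, not anything claimed in the text:

```python
def posterior_odds(prior_odds, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * (p_evidence_if_true / p_evidence_if_false)

# Illustrative numbers only: start at 1:10 odds that the man is a true prophet;
# suppose a fit ends 90% of the time for a true prophet, 5% for a charlatan.
odds = posterior_odds(prior_odds=0.1, p_evidence_if_true=0.9, p_evidence_if_false=0.05)
print(odds)               # roughly 1.8 -- now nearly 2:1 in favor of "true prophet"
print(odds / (1 + odds))  # ~0.64 as a probability
```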

Not only did religion used to make claims about factual and scientific matters, religion used to make claims about everything. Religion laid down a code of law—before legislative bodies; religion laid down history—before historians and archaeologists; religion laid down the sexual morals—before Women’s Lib; religion described the forms of government—before constitutions; and religion answered scientific questions from biological taxonomy to the formation of stars.1 The modern concept of religion as purely ethical derives from every other area’s having been taken over by better institutions. Ethics is what’s left.

Or rather, people think ethics is what’s left. Take a culture dump from 2,500 years ago. Over time, humanity will progress immensely, and pieces of the ancient culture dump will become ever more glaringly obsolete. Ethics has not been immune to human progress—for example, we now frown upon such Bible-approved practices as keeping slaves. Why do people think that ethics is still fair game?

Intrinsically, there’s nothing small about the ethical problem with slaughtering thousands of innocent first-born male children to convince an unelected Pharaoh to release slaves who logically could have been teleported out of the country. It should be more glaring than the comparatively trivial scientific error of saying that grasshoppers have four legs. And yet, if you say the Earth is flat, people will look at you like you’re crazy. But if you say the Bible is your source of ethics, women will not slap you. Most people’s concept of rationality is determined by what they think they can get away with; they think they can get away with endorsing Bible ethics; and so it only requires a manageable effort of self-deception for them to overlook the Bible’s moral problems. Everyone has agreed not to notice the elephant in the living room, and this state of affairs can sustain itself for a time.

Maybe someday, humanity will advance further, and anyone who endorses the Bible as a source of ethics will be treated the same way as Trent Lott endorsing Strom Thurmond’s presidential campaign. And then it will be said that religion’s “true core” has always been genealogy or something.

The idea that religion is a separate magisterium that cannot be proven or disproven is a Big Lie—a lie which is repeated over and over again, so that people will say it without thinking; yet which is, on critical examination, simply false. It is a wild distortion of how religion happened historically, of how all scriptures present their beliefs, of what children are told to persuade them, and of what the majority of religious people on Earth still believe. You have to admire its sheer brazenness, on a par with Oceania has always been at war with Eastasia. The prosecutor whips out the bloody axe, and the defendant, momentarily shocked, thinks quickly and says: “But you can’t disprove my innocence by mere evidence—it’s a separate magisterium!”

And if that doesn’t work, grab a piece of paper and scribble yourself a Get Out of Jail Free card.


1 The Old Testament doesn't talk about a sense of wonder at the complexity of the universe, perhaps because it was too busy laying down the death penalty for women who wore men's clothing, which was solid and satisfying religious content of that era.

" } }, { "_id": "ZpMDQsgLY9eF89ocF", "title": "God is irrelevant", "pageUrl": "https://www.lesswrong.com/posts/ZpMDQsgLY9eF89ocF/god-is-irrelevant", "postedAt": "2007-08-03T10:57:00.000Z", "baseScore": 1, "voteCount": 1, "commentCount": 1, "url": null, "contents": { "documentId": "ZpMDQsgLY9eF89ocF", "html": "

Philosophically, that is. Psychologically, he fulfils an important role – to distance us from philosophy.

\n

In no way would the existence of a God alter the important properties of the universe. Most of the problems a God supposedly solves are merely shifted to the other side of him – a step further away from humans, where we can comfortably ignore them.

\n

Some solutions God doesn’t really provide (presumably all thought of before by various philosophers, but I don’t know which ones, and it’s irrelevant, so please excuse the plagiarism):

\n

Creator of the universe: An obvious one. Where did God come from then? If he’s existed forever then so could a universe. If you think something as complex as a universe couldn’t come from nothing, how complex would God have to be to be able to make universes?

\n

Source of morality: Where does God get his moral principles from? If he invents them himself they are just as arbitrary a set of restrictions on behaviour as any other (such as an atheist’s morals are feared to be by the religious). Why follow them? If they are inherent in the universe, related to other people, or a matter of choice then God isn’t needed.

\n

Morality is a set of value judgements. If God and I both have a set of value judgements (a moral code), to say that God’s takes precedence is a value judgement in itself. Who judges? God? Why?

\n

Provider of free will: For reasons discussed in the previous post, “Free will isn’t a concept (unless you mean determinism)”, God can’t have – or give humans – free will which isn’t deterministic. The absence of God’s ‘free will’ is even more apparent if he must be good all the time (unless he invents his own changeable moral code as he goes, but is that the kind of morality God should subscribe to? Well yes, if he does! But there’s still the old problem of free will not existing – he can’t escape).

\n

If he’s all powerful as well, then he just ends up as another natural law – one that makes good things always happen. Anyone who’s been alive can tell you there’s fairly solid empirical evidence against such a law existing, but my point isn’t to draw attention to the problem of evil so much as to point out that natural laws are nothing new.

\n

The final picture? A God who may well exist*. But who cares? Yeah, if he’s all powerful perhaps you should follow his moral laws just to stop him smiting you, but that’s politics, not metaphysics.

\n

*except perhaps for the whole problem of evil bit – but goodness is hard to define, so let’s give him a break on that one for a moment


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "nYkMLFpx77Rz3uo9c", "title": "Belief as Attire", "pageUrl": "https://www.lesswrong.com/posts/nYkMLFpx77Rz3uo9c/belief-as-attire", "postedAt": "2007-08-02T17:13:56.000Z", "baseScore": 155, "voteCount": 143, "commentCount": 105, "url": null, "contents": { "documentId": "nYkMLFpx77Rz3uo9c", "html": "

I have so far distinguished between belief as anticipation-controller, belief in belief, professing and cheering.  Of these, we might call anticipation-controlling beliefs \"proper beliefs\" and the other forms \"improper belief\". Proper belief can be wrong or irrational, as when someone genuinely anticipates that prayer will cure their sick baby. But the other forms are arguably “not belief at all.”

Yet another form of improper belief is belief as group identification—as a way of belonging. Robin Hanson uses the excellent metaphor of wearing unusual clothing, a group uniform like a priest’s vestments or a Jewish skullcap, and so I will call this “belief as attire.”

In terms of humanly realistic psychology, the Muslims who flew planes into the World Trade Center undoubtedly saw themselves as heroes defending truth, justice, and the Islamic Way from hideous alien monsters a la the movie Independence Day. Only a very inexperienced nerd, the sort of nerd who has no idea how non-nerds see the world, would say this out loud in an Alabama bar. It is not an American thing to say. The American thing to say is that the terrorists “hate our freedom” and that flying a plane into a building is a “cowardly act.” You cannot say the phrases “heroic self-sacrifice” and “suicide bomber” in the same sentence, even for the sake of accurately describing how the Enemy sees the world. The very concept of the courage and altruism of a suicide bomber is Enemy attire—you can tell, because the Enemy talks about it. The cowardice and sociopathy of a suicide bomber is American attire. There are no quote marks you can use to talk about how the Enemy sees the world; it would be like dressing up as a Nazi for Halloween.

Belief-as-attire may help explain how people can be passionate about improper beliefs. Mere belief in belief, or religious professing, would have some trouble creating genuine, deep, powerful emotional effects. Or so I suspect; I confess I’m not an expert here. But my impression is this: People who’ve stopped anticipating-as-if their religion is true, will go to great lengths to convince themselves they are passionate, and this desperation can be mistaken for passion. But it’s not the same fire they had as a child.

On the other hand, it is very easy for a human being to genuinely, passionately, gut-level belong to a group, to cheer for their favorite sports team.1 Identifying with a tribe is a very strong emotional force. People will die for it. And once you get people to identify with a tribe, the beliefs which are the attire of that tribe will be spoken with the full passion of belonging to that tribe.


1 This is the foundation on which rests the swindle of “Republicans vs. Democrats” and analogous false dilemmas in other countries, but that’s a topic for another time.

" } }, { "_id": "RmCjazjupRGcHSm5N", "title": "Professing and Cheering", "pageUrl": "https://www.lesswrong.com/posts/RmCjazjupRGcHSm5N/professing-and-cheering", "postedAt": "2007-08-02T07:20:21.000Z", "baseScore": 132, "voteCount": 134, "commentCount": 45, "url": null, "contents": { "documentId": "RmCjazjupRGcHSm5N", "html": "

I once attended a panel on the topic, “Are science and religion compatible?” One of the women on the panel, a pagan, held forth interminably upon how she believed that the Earth had been created when a giant primordial cow was born into the primordial abyss, who licked a primordial god into existence, whose descendants killed a primordial giant and used its corpse to create the Earth, etc. The tale was long, and detailed, and more absurd than the Earth being supported on the back of a giant turtle. And the speaker clearly knew enough science to know this.

I still find myself struggling for words to describe what I saw as this woman spoke. She spoke with . . . pride? Self-satisfaction? A deliberate flaunting of herself?

The woman went on describing her creation myth for what seemed like forever, but was probably only five minutes. That strange pride/satisfaction/flaunting clearly had something to do with her knowing that her beliefs were scientifically outrageous. And it wasn’t that she hated science; as a panelist she professed that religion and science were compatible. She even talked about how it was quite understandable that the Vikings talked about a primordial abyss, given the land in which they lived—explained away her own religion!—and yet nonetheless insisted this was what she “believed,” said with peculiar satisfaction.

I’m not sure that Daniel Dennett’s concept of “belief in belief” stretches to cover this event. It was weirder than that. She didn’t recite her creation myth with the fanatical faith of someone who needs to reassure herself. She didn’t act like she expected us, the audience, to be convinced—or like she needed our belief to validate her.

Dennett, in addition to introducing the idea of belief in belief, has also suggested that much of what is called “religious belief” should really be studied as “religious profession” instead. Suppose an alien anthropologist studied a group of English students who all seemingly believed that Wulky Wilkensen was a retropositional author. The appropriate question may not be “Why do the students all believe this strange belief?” but “Why do they all write this strange sentence on quizzes?” Even if a sentence is essentially meaningless, you can still know when you are supposed to chant the response aloud.

I think Dennett may be slightly too cynical in suggesting that religious profession is just saying the belief aloud—most people are honest enough that, if they say a religious statement aloud, they will also feel obligated to say the verbal sentence into their own stream of consciousness.

But even the concept of “religious profession” doesn’t seem to cover the pagan woman’s claim to believe in the primordial cow. If you had to profess a religious belief to satisfy a priest, or satisfy a co-religionist—heck, to satisfy your own self-image as a religious person—you would have to pretend to believe much more convincingly than this woman was doing. As she recited her tale of the primordial cow, she wasn’t even trying to be persuasive on that front—wasn’t even trying to convince us that she took her own religion seriously. I think that’s the part that so took me aback. I know people who believe they believe ridiculous things, but when they profess them, they’ll spend much more effort to convince themselves that they take their beliefs seriously.

It finally occurred to me that this woman wasn’t trying to convince us or even convince herself. Her recitation of the creation story wasn’t about the creation of the world at all. Rather, by launching into a five-minute diatribe about the primordial cow, she was cheering for paganism, like holding up a banner at a football game. A banner saying GO BLUES isn’t a statement of fact, or an attempt to persuade; it doesn’t have to be convincing—it’s a cheer.

That strange flaunting pride . . . it was like she was marching naked in a gay pride parade.1 It wasn’t just a cheer, like marching, but an outrageous cheer, like marching naked—believing that she couldn’t be arrested or criticized, because she was doing it for her pride parade.

That’s why it mattered to her that what she was saying was beyond ridiculous. If she’d tried to make it sound more plausible, it would have been like putting on clothes.


1 Of course, there’s nothing wrong with actually marching naked in pride parades; this isn't something that truth can destroy.

" } }, { "_id": "NKaPFf98Y5otMbsPk", "title": "Bayesian Judo", "pageUrl": "https://www.lesswrong.com/posts/NKaPFf98Y5otMbsPk/bayesian-judo", "postedAt": "2007-07-31T05:53:13.000Z", "baseScore": 90, "voteCount": 133, "commentCount": 110, "url": null, "contents": { "documentId": "NKaPFf98Y5otMbsPk", "html": "

You can have some fun with people whose anticipations get out of sync with what they believe they believe.

\n

I was once at a dinner party, trying to explain to a man what I did for a living, when he said: \"I don't believe Artificial Intelligence is possible because only God can make a soul.\"

\n

At this point I must have been divinely inspired, because I instantly responded: \"You mean if I can make an Artificial Intelligence, it proves your religion is false?\"

\n

\n

He said, \"What?\"

\n

I said, \"Well, if your religion predicts that I can't possibly make an Artificial Intelligence, then, if I make an Artificial Intelligence, it means your religion is false. Either your religion allows that it might be possible for me to build an AI; or, if I build an AI, that disproves your religion.\"

\n

There was a pause, as the one realized he had just made his hypothesis vulnerable to falsification, and then he said, \"Well, I didn't mean that you couldn't make an intelligence, just that it couldn't be emotional in the same way we are.\"

\n

I said, \"So if I make an Artificial Intelligence that, without being deliberately preprogrammed with any sort of script, starts talking about an emotional life that sounds like ours, that means your religion is wrong.\"

\n

He said, \"Well, um, I guess we may have to agree to disagree on this.\"

\n

I said: \"No, we can't, actually. There's a theorem of rationality called Aumann's Agreement Theorem which shows that no two rationalists can agree to disagree. If two people disagree with each other, at least one of them must be doing something wrong.\"

\n

We went back and forth on this briefly. Finally, he said, \"Well, I guess I was really trying to say that I don't think you can make something eternal.\"

\n

I said, \"Well, I don't think so either! I'm glad we were able to reach agreement on this, as Aumann's Agreement Theorem requires.\"  I stretched out my hand, and he shook it, and then he wandered away.

\n

A woman who had stood nearby, listening to the conversation, said to me gravely, \"That was beautiful.\"

\n

\"Thank you very much,\" I said.

\n

 

\n


" } }, { "_id": "CqyJzDZWvGhhFJ7dY", "title": "Belief in Belief", "pageUrl": "https://www.lesswrong.com/posts/CqyJzDZWvGhhFJ7dY/belief-in-belief", "postedAt": "2007-07-29T17:49:43.000Z", "baseScore": 227, "voteCount": 202, "commentCount": 179, "url": null, "contents": { "documentId": "CqyJzDZWvGhhFJ7dY", "html": "

Carl Sagan once told a parable of someone who comes to us and claims: “There is a dragon in my garage.” Fascinating! We reply that we wish to see this dragon—let us set out at once for the garage! “But wait,” the claimant says to us, “it is an invisible dragon.”

Now as Sagan points out, this doesn’t make the hypothesis unfalsifiable. Perhaps we go to the claimant’s garage, and although we see no dragon, we hear heavy breathing from no visible source; footprints mysteriously appear on the ground; and instruments show that something in the garage is consuming oxygen and breathing out carbon dioxide.

But now suppose that we say to the claimant, “Okay, we’ll visit the garage and see if we can hear heavy breathing,” and the claimant quickly says no, it’s an inaudible dragon. We propose to measure carbon dioxide in the air, and the claimant says the dragon does not breathe. We propose to toss a bag of flour into the air to see if it outlines an invisible dragon, and the claimant immediately says, “The dragon is permeable to flour.”

Carl Sagan used this parable to illustrate the classic moral that poor hypotheses need to do fast footwork to avoid falsification. But I tell this parable to make a different point: The claimant must have an accurate model of the situation somewhere in their mind, because they can anticipate, in advance, exactly which experimental results they’ll need to excuse.

Some philosophers have been much confused by such scenarios, asking, “Does the claimant really believe there’s a dragon present, or not?” As if the human brain only had enough disk space to represent one belief at a time! Real minds are more tangled than that. There are different types of belief; not all beliefs are direct anticipations. The claimant clearly does not anticipate seeing anything unusual upon opening the garage door. Otherwise they wouldn’t make advance excuses. It may also be that the claimant’s pool of propositional beliefs contains the free-floating statement There is a dragon in my garage. It may seem, to a rationalist, that these two beliefs should collide and conflict even though they are of different types. Yet it is a physical fact that you can write “The sky is green!” next to a picture of a blue sky without the paper bursting into flames.

The rationalist virtue of empiricism is supposed to prevent us from making this class of mistake. We’re supposed to constantly ask our beliefs which experiences they predict, make them pay rent in anticipation. But the dragon-claimant’s problem runs deeper, and cannot be cured with such simple advice. It’s not exactly difficult to connect belief in a dragon to anticipated experience of the garage. If you believe there’s a dragon in your garage, then you can expect to open up the door and see a dragon. If you don’t see a dragon, then that means there’s no dragon in your garage. This is pretty straightforward. You can even try it with your own garage.

No, this invisibility business is a symptom of something much worse.

Depending on how your childhood went, you may remember a time period when you first began to doubt Santa Claus’s existence, but you still believed that you were supposed to believe in Santa Claus, so you tried to deny the doubts. As Daniel Dennett observes, where it is difficult to believe a thing, it is often much easier to believe that you ought to believe it. What does it mean to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green? The statement is confusing; it’s not even clear what it would mean to believe it—what exactly would be believed, if you believed. You can much more easily believe that it is proper, that it is good and virtuous and beneficial, to believe that the Ultimate Cosmic Sky is both perfectly blue and perfectly green. Dennett calls this “belief in belief.”1

And here things become complicated, as human minds are wont to do—I think even Dennett oversimplifies how this psychology works in practice. For one thing, if you believe in belief, you cannot admit to yourself that you merely believe in belief. What’s virtuous is to believe, not to believe in believing; and so if you only believe in belief, instead of believing, you are not virtuous. Nobody will admit to themselves, “I don’t believe the Ultimate Cosmic Sky is blue and green, but I believe I ought to believe it”—not unless they are unusually capable of acknowledging their own lack of virtue. People don’t believe in belief in belief, they just believe in belief.

(Those who find this confusing may find it helpful to study mathematical logic, which trains one to make very sharp distinctions between the proposition P, a proof of P, and a proof that P is provable. There are similarly sharp distinctions between P, wanting P, believing P, wanting to believe P, and believing that you believe P.)

There are different kinds of belief in belief. You may believe in belief explicitly; you may recite in your deliberate stream of consciousness the verbal sentence “It is virtuous to believe that the Ultimate Cosmic Sky is perfectly blue and perfectly green.” (While also believing that you believe this, unless you are unusually capable of acknowledging your own lack of virtue.) But there are also less explicit forms of belief in belief. Maybe the dragon-claimant fears the public ridicule that they imagine will result if they publicly confess they were wrong.2 Maybe the dragon-claimant flinches away from the prospect of admitting to themselves that there is no dragon, because it conflicts with their self-image as the glorious discoverer of the dragon, who saw in their garage what all others had failed to see.

If all our thoughts were deliberate verbal sentences like philosophers manipulate, the human mind would be a great deal easier for humans to understand. Fleeting mental images, unspoken flinches, desires acted upon without acknowledgement—these account for as much of ourselves as words.

While I disagree with Dennett on some details and complications, I still think that Dennett’s notion of belief in belief is the key insight necessary to understand the dragon-claimant. But we need a wider concept of belief, not limited to verbal sentences. “Belief” should include unspoken anticipation-controllers. “Belief in belief” should include unspoken cognitive-behavior-guiders. It is not psychologically realistic to say, “The dragon-claimant does not believe there is a dragon in their garage; they believe it is beneficial to believe there is a dragon in their garage.” But it is realistic to say the dragon-claimant anticipates as if there is no dragon in their garage, and makes excuses as if they believed in the belief.

You can possess an ordinary mental picture of your garage, with no dragons in it, which correctly predicts your experiences on opening the door, and never once think the verbal phrase There is no dragon in my garage. I even bet it’s happened to you—that when you open your garage door or bedroom door or whatever, and expect to see no dragons, no such verbal phrase runs through your mind.

And to flinch away from giving up your belief in the dragon—or flinch away from giving up your self-image as a person who believes in the dragon—it is not necessary to explicitly think I want to believe there’s a dragon in my garage. It is only necessary to flinch away from the prospect of admitting you don’t believe.

If someone believes in their belief in the dragon, and also believes in the dragon, the problem is much less severe. They will be willing to stick their neck out on experimental predictions, and perhaps even agree to give up the belief if the experimental prediction is wrong.3 But when someone makes up excuses in advance, it would seem to require that belief and belief in belief have become unsynchronized.


1 Daniel C. Dennett, Breaking the Spell: Religion as a Natural Phenomenon (Penguin, 2006).

2 Although, in fact, a rationalist would congratulate them, and others are more likely to ridicule the claimant if they go on claiming there’s a dragon in their garage.

3 Although belief in belief can still interfere with this, if the belief itself is not absolutely confident.

" } }, { "_id": "a7n8GdKiAZRX86T5A", "title": "Making Beliefs Pay Rent (in Anticipated Experiences)", "pageUrl": "https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences", "postedAt": "2007-07-28T22:59:48.000Z", "baseScore": 530, "voteCount": 481, "commentCount": 269, "url": null, "contents": { "documentId": "a7n8GdKiAZRX86T5A", "html": "

Thus begins the ancient parable:

If a tree falls in a forest and no one hears it, does it make a sound? One says, “Yes it does, for it makes vibrations in the air.” Another says, “No it does not, for there is no auditory processing in any brain.”

If there’s a foundational skill in the martial art of rationality, a mental stance on which all other technique rests, it might be this one: the ability to spot, inside your own head, psychological signs that you have a mental map of something, and signs that you don’t.

Suppose that, after a tree falls, the two arguers walk into the forest together. Will one expect to see the tree fallen to the right, and the other expect to see the tree fallen to the left? Suppose that before the tree falls, the two leave a sound recorder next to the tree. Would one, playing back the recorder, expect to hear something different from the other? Suppose they attach an electroencephalograph to any brain in the world; would one expect to see a different trace than the other?

Though the two argue, one saying “No,” and the other saying “Yes,” they do not anticipate any different experiences. The two think they have different models of the world, but they have no difference with respect to what they expect will happen to them; their maps of the world do not diverge in any sensory detail.

It’s tempting to try to eliminate this mistake class by insisting that the only legitimate kind of belief is an anticipation of sensory experience. But the world does, in fact, contain much that is not sensed directly. We don’t see the atoms underlying the brick, but the atoms are in fact there. There is a floor beneath your feet, but you don’t experience the floor directly; you see the light reflected from the floor, or rather, you see what your retina and visual cortex have processed of that light. To infer the floor from seeing the floor is to step back into the unseen causes of experience. It may seem like a very short and direct step, but it is still a step.

You stand on top of a tall building, next to a grandfather clock with an hour, minute, and ticking second hand. In your hand is a bowling ball, and you drop it off the roof. On which tick of the clock will you hear the crash of the bowling ball hitting the ground?

To answer precisely, you must use beliefs like Earth’s gravity is 9.8 meters per second per second, and This building is around 120 meters tall. These beliefs are not wordless anticipations of a sensory experience; they are verbal-ish, propositional. It probably does not exaggerate much to describe these two beliefs as sentences made out of words. But these two beliefs have an inferential consequence that is a direct sensory anticipation—if the clock’s second hand is on the 12 numeral when you drop the ball, you anticipate seeing it on the 1 numeral when you hear the crash five seconds later. To anticipate sensory experiences as precisely as possible, we must process beliefs that are not anticipations of sensory experience.
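
The five-second figure follows directly from those two propositional beliefs; a quick check, using the stated 9.8 m/s² and roughly 120 meters, and ignoring air resistance and the sound's travel time:

```python
from math import sqrt

g = 9.8         # m/s^2, as stated
height = 120.0  # meters, approximate building height

# Constant acceleration from rest: height = (1/2) * g * t^2, so t = sqrt(2 * height / g).
fall_time = sqrt(2 * height / g)
print(round(fall_time, 2))  # ~4.95 seconds, i.e. about five ticks of the second hand
```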

It is a great strength of Homo sapiens that we can, better than any other species in the world, learn to model the unseen. It is also one of our great weak points. Humans often believe in things that are not only unseen but unreal.

The same brain that builds a network of inferred causes behind sensory experience can also build a network of causes that is not connected to sensory experience, or poorly connected. Alchemists believed that phlogiston caused fire—we could simplistically model their minds by drawing a little node labeled “Phlogiston,” and an arrow from this node to their sensory experience of a crackling campfire—but this belief yielded no advance predictions; the link from phlogiston to experience was always configured after the experience, rather than constraining the experience in advance.

Or suppose your English professor teaches you that the famous writer Wulky Wilkinsen is actually a “retropositional author,” which you can tell because his books exhibit “alienated resublimation.” And perhaps your professor knows all this because their professor told them; but all they’re able to say about resublimation is that it’s characteristic of retropositional thought, and of retropositionality that it’s marked by alienated resublimation. What does this mean you should expect from Wulky Wilkinsen’s books?

Nothing. The belief, if you can call it that, doesn’t connect to sensory experience at all. But you had better remember the propositional assertions that “Wulky Wilkinsen” has the “retropositionality” attribute and also the “alienated resublimation” attribute, so you can regurgitate them on the upcoming quiz. The two beliefs are connected to each other, though still not connected to any anticipated experience.

We can build up whole networks of beliefs that are connected only to each other—call these “floating” beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens’s ability to build more general and flexible belief networks.

The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict—or better yet, prohibit. Do you believe that phlogiston is the cause of fire? Then what do you expect to see happen, because of that? Do you believe that Wulky Wilkinsen is a retropositional author? Then what do you expect to see because of that? No, not “alienated resublimation”; what experience will happen to you? Do you believe that if a tree falls in the forest, and no one hears it, it still makes a sound? Then what experience must therefore befall you?

It is even better to ask: what experience must not happen to you? Do you believe that élan vital explains the mysterious aliveness of living beings? Then what does this belief not allow to happen—what would definitely falsify this belief? A null answer means that your belief does not constrain experience; it permits anything to happen to you. It floats.

When you argue a seemingly factual question, always keep in mind which difference of anticipation you are arguing about. If you can’t find the difference of anticipation, you’re probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network. If you don’t know what experiences are implied by Wulky Wilkinsen’s writing being retropositional, you can go on arguing forever.

Above all, don’t ask what to believe—ask what to anticipate. Every question of belief should flow from a question of anticipation, and that question of anticipation should be the center of the inquiry. Every guess of belief should begin by flowing to a specific guess of anticipation, and should continue to pay rent in future anticipations. If a belief turns deadbeat, evict it.

" } }, { "_id": "gXgq2Fwm2s2GwhjF3", "title": "Free will isn’t a concept (unless you mean determinism)", "pageUrl": "https://www.lesswrong.com/posts/gXgq2Fwm2s2GwhjF3/free-will-isn-t-a-concept-unless-you-mean-determinism", "postedAt": "2007-07-15T13:51:00.000Z", "baseScore": 3, "voteCount": 2, "commentCount": 1, "url": null, "contents": { "documentId": "gXgq2Fwm2s2GwhjF3", "html": "

Imagine something happens. For instance, you make a decision. There are three possibilities for this occurrence:

1. It could be related purely to other factors (determinism).
2. It could be unrelated to other factors (randomness).
3. It could be a combination of the two (a mixture of determinism and randomness).

None of these are free will (as commonly understood). So where does the concept of free will fit in? How could an occurrence escape from being in one of these categories? Clearly it can't. So there is no possibility of a concept of free will that is in opposition to determinism, let alone a chance of it existing in reality.


But you feel like you have free will (whatever that is – just don’t think about it), don’t you? Or to put it another way, you feel like your actions are neither determined nor random. You choose them.


And that is precisely why they are determined. They are determined by you. And you already exist to the finest detail at the time you are making the decision. If you made choices (or some element of them) not controlled by your personality, experience, thoughts and anything else that comes under the heading of ‘the state of your brain as a result of genetics and your prior environments’, they would be random, which still isn’t free will (not to mention being a less personal and less appealing model, if that’s how you choose your beliefs).


You might argue that you can choose what to think and how to feel, and how heavily to let those things influence you, when making a decision. That doesn't alter the situation, however. Those are then choices too, and your decisions for them would presumably have to be made based on other thoughts and feelings, which you would presumably choose, and so on. The point at which free will should have occurred would just be shifted back indefinitely. Again you just have a long chain of cause and effect.


The closest thing you can have to free will is for your actions to be determined purely by the state of your brain. Free will is determinism.


\"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\" \"\"" } }, { "_id": "48WeP7oTec3kBEada", "title": "Two More Things to Unlearn from School", "pageUrl": "https://www.lesswrong.com/posts/48WeP7oTec3kBEada/two-more-things-to-unlearn-from-school", "postedAt": "2007-07-12T17:45:33.000Z", "baseScore": 177, "voteCount": 147, "commentCount": 157, "url": null, "contents": { "documentId": "48WeP7oTec3kBEada", "html": "

In Three Things to Unlearn from School, Ben Casnocha cites Bill Bullard's list of three bad habits of thought: attaching importance to personal opinions, solving given problems, and earning the approval of others. Bullard's proposed alternatives don't look very good to me, but Bullard has surely identified some important problems.

I can think of other school-inculcated bad habits of thought, too many to list, but I'll name two of my least favorite.

I suspect the most dangerous habit of thought taught in schools is that even if you don't really understand something, you should parrot it back anyway. One of the most fundamental life skills is realizing when you are confused, and school actively destroys this ability - it teaches students that they \"understand\" when they can successfully answer questions on an exam, which is very, very far from absorbing the knowledge and making it a part of you. Students learn the habit that eating consists of putting food into your mouth; the exams can't test for chewing or swallowing, and so they starve.

Much of this problem may come from needing to take three 4-credit courses per quarter, with a textbook chapter plus homework to be done every week - the courses are timed for frantic memorization; it's not possible to deeply chew over and leisurely digest knowledge in the same period. College students aren't allowed to be confused; if they started saying, \"Wait, do I really understand this? Maybe I'd better spend a few days looking up related papers, or consult another textbook,\" they'd fail all the courses they took that quarter. A month later they would understand the material far better and remember it much longer - but one month after finals is too late; it counts for nothing in the lunatic university utility function.

Many students who have gone through this process no longer even realize when something confuses them, or notice gaps in their understanding. They have been trained out of pausing to think.

I recall reading, though I can't remember where, that physicists in some country were more likely to become extreme religious fanatics. This confused me, until the author suggested that physics students are presented with a received truth that is actually correct, from which they learn the habit of trusting authority.

It may be dangerous to present people with a giant mass of authoritative knowledge, especially if it is actually true. It may damage their skepticism.

So what could you do? Teach students the history of physics, how each idea was replaced in turn by a new correct one? \"Here's the old idea, here's the new idea, here's the experiment - the new idea wins!\" Repeat this lesson ten times and what is the habit of thought learned? \"New ideas always win; every new idea in physics turns out to be correct.\" You still haven't taught any critical thinking, because you only showed them history as seen with perfect hindsight. You've taught them the habit that distinguishing true ideas from false ones is perfectly clear-cut and straightforward, so if a shiny new idea has anything to recommend it, it's probably true.

Maybe it would be possible to teach the history of physics from a historically realistic point of view, without benefit of hindsight: show students the different alternatives that were considered historically plausible, and re-enact the historical disagreements and debates.

Maybe you could avoid handing students knowledge on a silver platter: show students different versions of physics equations that looked plausible, and ask them to figure out which was the correct one, or to invent experiments that would distinguish between the alternatives. This wouldn't be as challenging as needing to notice anomalies without hints and invent alternatives from scratch, but it would be a vast improvement over memorizing a received authority.

Then, perhaps, you could teach the habit of thought: \"The ideas of received authority are often imperfect, but it takes a great effort to find a new idea that is better. Most possible changes are for the worse, even though every improvement is necessarily a change.\"

\n\n" } }, { "_id": "JCnYq4SBZ29zngRf4", "title": "Open Thread", "pageUrl": "https://www.lesswrong.com/posts/JCnYq4SBZ29zngRf4/open-thread-0", "postedAt": "2007-07-01T19:38:34.000Z", "baseScore": 5, "voteCount": 4, "commentCount": 38, "url": null, "contents": { "documentId": "JCnYq4SBZ29zngRf4", "html": "

By request of the community, an Open Thread for free-form comments, so long as they're still related to the basic project of this blog.

\n\n

A word on post requests:  You're free to ask, but the authors can't commit to posting on requested topics - it's hard enough to do the ones we have in mind already.

" } }, { "_id": "28bAMAxhoX3bwbAKC", "title": "Are Your Enemies Innately Evil?", "pageUrl": "https://www.lesswrong.com/posts/28bAMAxhoX3bwbAKC/are-your-enemies-innately-evil", "postedAt": "2007-06-26T21:13:26.000Z", "baseScore": 229, "voteCount": 201, "commentCount": 148, "url": null, "contents": { "documentId": "28bAMAxhoX3bwbAKC", "html": "\n\n\n\n \n\n \n\n

We see far too direct a correspondence between others’ actions and their inherent dispositions. We see unusual dispositions that exactly match the unusual behavior, rather than asking after real situations or imagined situations that could explain the behavior. We hypothesize mutants.

\n\n

When someone actually offends us—commits an action of which we (rightly or wrongly) disapprove—then, I observe, the correspondence bias redoubles. There seems to be a very strong tendency to blame evil deeds on the Enemy’s mutant, evil disposition. Not as a moral point, but as a strict question of prior probability, we should ask what the Enemy might believe about their situation that would reduce the seeming bizarrity of their behavior. This would allow us to hypothesize a less exceptional disposition, and thereby shoulder a lesser burden of improbability.

\n\n

On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America. Now why do you suppose they might have done that? Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom?

\n\n

Realistically, most people don’t construct their life stories with themselves as the villains. Everyone is the hero of their own story. The Enemy’s story, as seen by the Enemy, is not going to make the Enemy look bad. If you try to construe motivations that would make the Enemy look bad, you’ll end up flat wrong about what actually goes on in the Enemy’s mind.

\n\n

But politics is the mind-killer. Debate is war; arguments are soldiers. If the Enemy did have an evil disposition, that would be an argument in favor of your side. And any argument that favors your side must be supported, no matter how silly—otherwise you’re letting up the pressure somewhere on the battlefront. Everyone strives to outshine their neighbor in patriotic denunciation, and no one dares to contradict. Soon the Enemy has horns, bat wings, flaming breath, and fangs that drip corrosive venom. If you deny any aspect of this on merely factual grounds, you are arguing the Enemy’s side; you are a traitor. Very few people will understand that you aren’t defending the Enemy, just defending the truth.

\n\n

If it took a mutant to do monstrous things, the history of the human species would look very different. Mutants would be rare.

\n\n

Or maybe the fear is that understanding will lead to forgiveness. It’s easier to shoot down evil mutants. It is a more inspiring battle cry to scream, “Die, vicious scum!” instead of “Die, people who could have been just like me but grew up in a different environment!” You might feel guilty killing people who weren’t pure darkness.

\n\n

This looks to me like the deep-seated yearning for a one-sided policy debate in which the best policy has no drawbacks. If an army is crossing the border or a lunatic is coming at you with a knife, the policy alternatives are (a) defend yourself or (b) lie down and die. If you defend yourself, you may have to kill. If you kill someone who could, in another world, have been your friend, that is a tragedy. And it is a tragedy. The other option, lying down and dying, is also a tragedy. Why must there be a non-tragic option? Who says that the best policy available must have no downside? If someone has to die, it may as well be the initiator of force, to discourage future violence and thereby minimize the total sum of death.

\n\n

If the Enemy has an average disposition, and is acting from beliefs about their situation that would make violence a typically human response, then that doesn’t mean their beliefs are factually accurate. It doesn’t mean they’re justified. It means you’ll have to shoot down someone who is the hero of their own story, and in their novel the protagonist will die on page 80. That is a tragedy, but it is better than the alternative tragedy. It is the choice that every police officer makes, every day, to keep our neat little worlds from dissolving into chaos.

\n\n

When you accurately estimate the Enemy’s psychology—when you know what is really in the Enemy’s mind—that knowledge won’t feel like landing a delicious punch on the opposing side. It won’t give you a warm feeling of righteous indignation. It won’t make you feel good about yourself. If your estimate makes you feel unbearably sad, you may be seeing the world as it really is. More rarely, an accurate estimate may send shivers of serious horror down your spine, as when dealing with true psychopaths, or neurologically intact people with beliefs that have utterly destroyed their sanity (Scientologists or Jesus Campers).

\n\n

So let’s come right out and say it—the 9/11 hijackers weren’t evil mutants. They did not hate freedom. They, too, were the heroes of their own stories, and they died for what they believed was right—truth, justice, and the Islamic way. If the hijackers saw themselves that way, it doesn’t mean their beliefs were true. If the hijackers saw themselves that way, it doesn’t mean that we have to agree that what they did was justified. If the hijackers saw themselves that way, it doesn’t mean that the passengers of United Flight 93 should have stood aside and let it happen. It does mean that in another world, if they had been raised in a different environment, those hijackers might have been police officers. And that is indeed a tragedy. Welcome to Earth.

\n\n" } }, { "_id": "DB6wbyrMugYMK5o6a", "title": "Correspondence Bias", "pageUrl": "https://www.lesswrong.com/posts/DB6wbyrMugYMK5o6a/correspondence-bias", "postedAt": "2007-06-25T00:58:26.000Z", "baseScore": 107, "voteCount": 96, "commentCount": 49, "url": null, "contents": { "documentId": "DB6wbyrMugYMK5o6a", "html": "

The correspondence bias is the tendency to draw inferences about a person’s unique and enduring dispositions from behaviors that can be entirely explained by the situations in which they occur.

—Gilbert and Malone1

We tend to see far too direct a correspondence between others’ actions and personalities. When we see someone else kick a vending machine for no visible reason, we assume they are “an angry person.” But when you yourself kick the vending machine, it’s because the bus was late, the train was early, your report is overdue, and now the damned vending machine has eaten your lunch money for the second day in a row. Surely, you think to yourself, anyone would kick the vending machine, in that situation.

We attribute our own actions to our situations, seeing our behaviors as perfectly normal responses to experience. But when someone else kicks a vending machine, we don’t see their past history trailing behind them in the air. We just see the kick, for no reason we know about, and we think this must be a naturally angry person—since they lashed out without any provocation.

Yet consider the prior probabilities. There are more late buses in the world, than mutants born with unnaturally high anger levels that cause them to sometimes spontaneously kick vending machines. Now the average human is, in fact, a mutant. If I recall correctly, an average individual has two to ten somatically expressed mutations. But any given DNA location is very unlikely to be affected. Similarly, any given aspect of someone’s disposition is probably not very far from average. To suggest otherwise is to shoulder a burden of improbability.

Even when people are informed explicitly of situational causes, they don’t seem to properly discount the observed behavior. When subjects are told that a pro-abortion or anti-abortion speaker was randomly assigned to give a speech on that position, subjects still think the speakers harbor leanings in the direction randomly assigned.2

It seems quite intuitive to explain rain by water spirits; explain fire by a fire-stuff (phlogiston) escaping from burning matter; explain the soporific effect of a medication by saying that it contains a “dormitive potency.” Reality usually involves more complicated mechanisms: an evaporation and condensation cycle underlying rain, oxidizing combustion underlying fire, chemical interactions with the nervous system for soporifics. But mechanisms sound more complicated than essences; they are harder to think of, less available. So when someone kicks a vending machine, we think they have an innate vending-machine-kicking-tendency.

Unless the “someone” who kicks the machine is us—in which case we’re behaving perfectly normally, given our situations; surely anyone else would do the same. Indeed, we overestimate how likely others are to respond the same way we do—the “false consensus effect.” Drinking students considerably overestimate the fraction of fellow students who drink, but nondrinkers considerably underestimate the fraction. The “fundamental attribution error” refers to our tendency to overattribute others’ behaviors to their dispositions, while reversing this tendency for ourselves.

To understand why people act the way they do, we must first realize that everyone sees themselves as behaving normally. Don’t ask what strange, mutant disposition they were born with, which directly corresponds to their surface behavior. Rather, ask what situations people see themselves as being in. Yes, people do have dispositions—but there are not enough heritable quirks of disposition to directly account for all the surface behaviors you see.

Suppose I gave you a control with two buttons, a red button and a green button. The red button destroys the world, and the green button stops the red button from being pressed. Which button would you press? The green one. Anyone who gives a different answer is probably overcomplicating the question.3

And yet people sometimes ask me why I want to save the world.4 Like I must have had a traumatic childhood or something. Really, it seems like a pretty obvious decision . . . if you see the situation in those terms.

I may have non-average views which call for explanation—why do I believe such things, when most people don’t?—but given those beliefs, my reaction doesn’t seem to call forth an exceptional explanation. Perhaps I am a victim of false consensus; perhaps I overestimate how many people would press the green button if they saw the situation in those terms. But y’know, I’d still bet there’d be at least a substantial minority.

Most people see themselves as perfectly normal, from the inside. Even people you hate, people who do terrible things, are not exceptional mutants. No mutations are required, alas. When you understand this, you are ready to stop being surprised by human events.


1Daniel T. Gilbert and Patrick S. Malone, “The Correspondence Bias,” Psychological Bulletin 117, no. 1 (1995): 21–38.

2Edward E. Jones and Victor A. Harris, “The Attribution of Attitudes,” Journal of Experimental Social Psychology 3 (1967): 1–24, http://www.radford.edu/~jaspelme/443/spring-2007/Articles/Jones_n_Harris_1967.pdf.

3Compare “Transhumanism as Simplified Humanism.” http://yudkowsky.net/singularity/simplified.

4See Eliezer Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk,” in Global Catastrophic Risks, ed. Nick Bostrom and Milan M. Ćirković (New York: Oxford University Press, 2008), 308–345.

" } }, { "_id": "mgmvs6BT3dSNxmyP2", "title": "Risk-Free Bonds Aren't", "pageUrl": "https://www.lesswrong.com/posts/mgmvs6BT3dSNxmyP2/risk-free-bonds-aren-t", "postedAt": "2007-06-22T22:30:00.000Z", "baseScore": 24, "voteCount": 25, "commentCount": 40, "url": null, "contents": { "documentId": "mgmvs6BT3dSNxmyP2", "html": "

I've always been annoyed by the term \"risk-free bonds rate\", meaning the return on US Treasury bills.  Just because US bonds have not defaulted within their trading experience, people assume this is impossible?  A list of major governments in 1900 would probably put the Ottoman Empire or Austria-Hungary well ahead of the relatively young United States.  Citing the good track record of the US alone, and not all governments of equal apparent stability at the start of the same time period, is purest survivorship bias.

The United States is a democracy; if enough people vote for representatives who decide not to pay off the bonds, they won't get paid.  Do you want to look at recent history, let alone ancient history, and tell me this is impossible?  The Internet could enable coordinated populist voting that would sweep new candidates into office, in defiance of previous political machines.  Then the US economy melts under the burden of consumer debt, which causes China to stop buying US bonds and dump its dollar reserves.  Then Al Qaeda finally smuggles a nuke into Washington, D.C.  Then the next global pandemic hits.  And these are just \"good stories\" - the probability of the US defaulting on its bonds for any reason is necessarily higher than the probability of it happening for the particular reasons I've just described.  I'm not saying these are high probabilities, but they are probabilities.  Treasury bills are nowhere near \"risk free\".

I may be prejudiced here, because I anticipate particular Black Swans (AI, nanotech, biotech) that I see as having a high chance of striking over the lifetime of a 30-year Treasury bond.  But even if you don't share those particular assumptions, do you expect the United States to still be around in 300 years?  If not, do you know exactly when it will go bust?  Then why isn't the risk of losing your capital on a 30-year Treasury bond at least, say, 10%?
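
Here is a rough sketch of that arithmetic, assuming a constant hazard rate and a purely hypothetical 50% chance that the United States no longer honors its debts 300 years from now (my own illustration, not a forecast):

```python
# Hypothetical long-run assumption -- plug in your own number.
p_gone_in_300_years = 0.5
survival_300 = 1 - p_gone_in_300_years

# Constant hazard: 300 years is ten consecutive 30-year periods,
# so per-period survival is the tenth root of 300-year survival.
survival_30 = survival_300 ** (30 / 300)
p_default_30 = 1 - survival_30

print(f"Implied 30-year default risk: {p_default_30:.1%}")  # about 6.7%
```

Whatever long-run probability you plug in, the implied 30-year figure is a probability, not zero - which is the point.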

Nassim Nicholas Taleb's latest, The Black Swan, is about the impact of unknown unknowns - sudden blowups, processes that seem to behave normally for long periods and then melt down, variables in which most of the movement may occur on a tiny fraction of the moves.  Taleb inveighs against the dangers of induction, the ludic fallacy, hindsight, survivorship bias.  And then on page 205, Taleb suggests:

Instead of putting your money in \"medium risk\" investments (how do you know it is medium risk? by listening to tenure-seeking \"experts\"?), you need to put a portion, say 85 to 90 percent, in extremely safe instruments, like Treasury bills - as safe a class of instruments as you can manage to find on this planet.  The remaining 10 to 15 percent you put in extremely speculative bets, as leveraged as possible (like options), preferably venture capital-style portfolios.  That way you do not depend on errors of risk management; no Black Swan can hurt you at all, beyond your \"floor\", the nest egg that you have in maximally safe instruments.

Does Taleb know something I don't, or has he forgotten to apply his own principles in the heat of the moment?  (That's a serious question, by the way, if Taleb happens to be reading this.  I'm not an experienced trader, and Taleb undoubtedly knows more than I do about how to use Black Swan thinking in trading.  But we all know how hard it is to remember to apply our finely honed skepticism in the face of handy popular phrases like \"risk-free bonds rate\".)  Regardless, I think that if you advise your readers to invest 90% of their money in \"extremely safe\" instruments, you should certainly also warn that it had better not all go into the same instrument - no, not even Treasury bills or gold bullion.  There is always risk management, and you are always exposed to error.  The safest instruments you can find on this planet aren't very safe.

" } }, { "_id": "xiHy3kFni8nsxfdcP", "title": "One Life Against the World", "pageUrl": "https://www.lesswrong.com/posts/xiHy3kFni8nsxfdcP/one-life-against-the-world", "postedAt": "2007-05-18T22:06:02.000Z", "baseScore": 125, "voteCount": 101, "commentCount": 84, "url": null, "contents": { "documentId": "xiHy3kFni8nsxfdcP", "html": "

\"Whoever saves a single life, it is as if he had saved the whole world.\"
– The Talmud, Sanhedrin 4:5

It's a beautiful thought, isn't it? Feel that warm glow.

I can testify that helping one person feels just as good as helping the whole world. Once upon a time, when I was burned out for the day and wasting time on the Internet - it's a bit complicated, but essentially, I managed to turn someone's whole life around by leaving an anonymous blog comment. I wasn't expecting it to have an effect that large, but it did. When I discovered what I had accomplished, it gave me a tremendous high. The euphoria lasted through that day and into the night, only wearing off somewhat the next morning. It felt just as good (this is the scary part) as the euphoria of a major scientific insight, which had previously been my best referent for what it might feel like to do drugs.

Saving one life probably does feel just as good as being the first person to realize what makes the stars shine. It probably does feel just as good as saving the entire world.

But if you ever have a choice, dear reader, between saving a single life and saving the whole world - then save the world. Please. Because beyond that warm glow is one heck of a gigantic difference.

For some people, the notion that saving the world is significantly better than saving one human life will be obvious, like saying that six billion dollars is worth more than one dollar, or that six cubic kilometers of gold weighs more than one cubic meter of gold. (And never mind the expected value of posterity.) Why might it not be obvious? Well, suppose there's a qualitative duty to save what lives you can - then someone who saves the world, and someone who saves one human life, are just fulfilling the same duty. Or suppose that we follow the Greek conception of personal virtue, rather than consequentialism; someone who saves the world is virtuous, but not six billion times as virtuous as someone who saves one human life. Or perhaps the value of one human life is already too great to comprehend - so that the passing grief we experience at funerals is an infinitesimal underestimate of what is lost - and thus passing to the entire world changes little.

I agree that one human life is of unimaginably high value. I also hold that two human lives are twice as unimaginably valuable. Or to put it another way: Whoever saves one life, it is as if they had saved the whole world; whoever saves ten lives, it is as if they had saved ten worlds. Whoever actually saves the whole world - not to be confused with pretend rhetorical saving the world - it is as if they had saved an intergalactic civilization.

Two deaf children are sleeping on the railroad tracks, the train speeding down; you see this, but you are too far away to save them. I'm nearby, within reach, so I leap forward and drag one child off the railroad tracks - and then stop, calmly sipping a Diet Pepsi as the train bears down on the second child. \"Quick!\" you scream to me. \"Do something!\" But (I call back) I already saved one child from the train tracks, and thus I am \"unimaginably\" far ahead on points. Whether I save the second child or not, I will still be credited with an \"unimaginably\" good deed. Thus, I have no further motive to act. Doesn't sound right, does it?

Why should it be any different if a philanthropist spends $10 million on curing a rare but spectacularly fatal disease which afflicts only a hundred people planetwide, when the same money has an equal probability of producing a cure for a less spectacular disease that kills 10% of 100,000 people? I don't think it is different. When human lives are at stake, we have a duty to maximize, not satisfice; and this duty has the same strength as the original duty to save lives. Whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - they have damned themselves as thoroughly as any murderer.
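
A back-of-the-envelope version of that comparison (my own sketch; the cure probability is a made-up placeholder, and it cancels out of the ratio anyway):

```python
p_cure = 0.1                    # hypothetical chance the $10 million yields a cure
rare_disease_victims = 100      # spectacularly fatal disease, a hundred people planetwide
common_disease_victims = 0.10 * 100_000   # 10% of 100,000 people

expected_saved_rare = p_cure * rare_disease_victims       # 10 expected lives
expected_saved_common = p_cure * common_disease_victims   # 1,000 expected lives

print(expected_saved_common / expected_saved_rare)        # 100.0 -- a hundredfold difference
```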
 

Addendum:  It's not cognitively easy to spend money to save lives, since cliche methods that instantly leap to mind don't work or are counterproductive.  (I will post later on why this tends to be so.)  Stuart Armstrong also points out that if we are to disdain the philanthropist who spends life-saving money inefficiently, we should be consistent and disdain more those who could spend money to save lives but don't.

" } }, { "_id": "2ftJ38y9SRBCBsCzy", "title": "Scope Insensitivity", "pageUrl": "https://www.lesswrong.com/posts/2ftJ38y9SRBCBsCzy/scope-insensitivity", "postedAt": "2007-05-14T02:53:49.000Z", "baseScore": 355, "voteCount": 329, "commentCount": 72, "url": null, "contents": { "documentId": "2ftJ38y9SRBCBsCzy", "html": "

Once upon a time, three groups of subjects were asked how much they would pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88.1 This is scope insensitivity or scope neglect: the number of birds saved—the scope of the altruistic action—had little effect on willingness to pay.

Similar experiments showed that Toronto residents would pay little more to clean up all polluted lakes in Ontario than polluted lakes in a particular region of Ontario, or that residents of four western US states would pay only 28% more to protect all 57 wilderness areas in those states than to protect a single area.2 People visualize “a single exhausted bird, its feathers soaked in black oil, unable to escape.”3 This image, or prototype, calls forth some level of emotional arousal that is primarily responsible for willingness-to-pay—and the image is the same in all cases. As for scope, it gets tossed out the window—no human can visualize 2,000 birds at once, let alone 200,000. The usual finding is that exponential increases in scope create linear increases in willingness-to-pay—perhaps corresponding to the linear time for our eyes to glaze over the zeroes; this small amount of affect is added, not multiplied, with the prototype affect. This hypothesis is known as “valuation by prototype.”

An alternative hypothesis is “purchase of moral satisfaction.” People spend enough money to create a warm glow in themselves, a sense of having done their duty. The level of spending needed to purchase a warm glow depends on personality and financial situation, but it certainly has nothing to do with the number of birds.

We are insensitive to scope even when human lives are at stake: Increasing the alleged risk of chlorinated drinking water from 0.004 to 2.43 annual deaths per 1,000—a factor of 600—increased willingness-to-pay from $3.78 to $15.23.4 Baron and Greene found no effect from varying lives saved by a factor of 10.5

A paper entitled “Insensitivity to the value of human life: A study of psychophysical numbing” collected evidence that our perception of human deaths follows Weber’s Law—obeys a logarithmic scale where the “just noticeable difference” is a constant fraction of the whole. A proposed health program to save the lives of Rwandan refugees garnered far higher support when it promised to save 4,500 lives in a camp of 11,000 refugees, rather than 4,500 in a camp of 250,000. A potential disease cure had to promise to save far more lives in order to be judged worthy of funding, if the disease was originally stated to have killed 290,000 rather than 160,000 or 15,000 people per year.6
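
A toy illustration of what such a logarithmic response implies (my own sketch, not the model from the cited paper; the figure of 10,000 lives saved is arbitrary):

```python
import math

def perceived_benefit(baseline_deaths, lives_saved):
    """Weber/Fechner-style response: the felt change is a difference of logarithms."""
    return math.log10(baseline_deaths) - math.log10(baseline_deaths - lives_saved)

for baseline in (15_000, 160_000, 290_000):
    felt = perceived_benefit(baseline, lives_saved=10_000)
    print(f"Saving 10,000 out of {baseline:>7,} deaths/year feels like {felt:.3f}")

# The same 10,000 lives "feel" far smaller against a 290,000-death baseline than
# against a 15,000-death baseline, so a cure must promise many more lives saved
# to clear the same felt threshold.
```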

The moral: If you want to be an effective altruist, you have to think it through with the part of your brain that processes those unexciting inky zeroes on paper, not just the part that gets real worked up about that poor struggling oil-soaked bird.


1 William H. Desvousges et al., Measuring Nonuse Damages Using Contingent Valuation: An Experimental Evaluation of Accuracy, technical report (Research Triangle Park, NC: RTI International, 2010).

2 Daniel Kahneman, “Comments by Professor Daniel Kahneman,” in Valuing Environmental Goods: An Assessment of the Contingent Valuation Method, ed. Ronald G. Cummings, David S. Brookshire, and William D. Schulze, vol. 1.B, Experimental Methods for Assessing Environmental Benefits (Totowa, NJ: Rowman & Allanheld, 1986), 226–235; Daniel L. McFadden and Gregory K. Leonard, “Issues in the Contingent Valuation of Environmental Goods: Methodologies for Data Collection and Analysis,” in Contingent Valuation: A Critical Assessment, ed. Jerry A. Hausman, Contributions to Economic Analysis 220 (New York: North-Holland, 1993), 165–215.

3 Daniel Kahneman, Ilana Ritov, and David Schkade, “Economic Preferences or Attitude Expressions?: An Analysis of Dollar Responses to Public Issues,” Journal of Risk and Uncertainty 19, nos. 1–3 (1999): 203–235.

4 Richard T. Carson and Robert Cameron Mitchell, “Sequencing and Nesting in Contingent Valuation Surveys,” Journal of Environmental Economics and Management 28, no. 2 (1995): 155–173.

5 Jonathan Baron and Joshua D. Greene, “Determinants of Insensitivity to Quantity in Valuation of Public Goods: Contribution, Warm Glow, Budget Constraints, Availability, and Prominence,” Journal of Experimental Psychology: Applied 2, no. 2 (1996): 107–125.

6 David Fetherstonhaugh et al., “Insensitivity to the Value of Human Life: A Study of Psychophysical Numbing,” Journal of Risk and Uncertainty 14, no. 3 (1997): 283–300.

" } }, { "_id": "ujz9PXXz7A4eKjBjp", "title": "Third Alternatives for Afterlife-ism", "pageUrl": "https://www.lesswrong.com/posts/ujz9PXXz7A4eKjBjp/third-alternatives-for-afterlife-ism", "postedAt": "2007-05-08T07:41:29.000Z", "baseScore": 35, "voteCount": 33, "commentCount": 20, "url": null, "contents": { "documentId": "ujz9PXXz7A4eKjBjp", "html": "

One of the most commonly proposed Noble Lies is belief in an afterlife.  Surely, goes the argument, the crushing certainty of absolute annihilation in a few decades is too much for any human being to bear.  People need hope - if they don't believe in an afterlife, they won't be able to live. 

\n

Surely this must be the strongest of all arguments for Noble Lies.  You can find Third Alternatives to many dilemmas, but can you find one to Death?

\n

Well, did you close your eyes and think creatively about the problem for five minutes?  No excuses, please; just answer \"Yes\" or \"No\".  Did you, or did you not, brainstorm the problem for five minutes by the clock before giving up?

\n

\n

The assumed task is to find a source of hope against looming death.  So at the very least I would cite medical nanotechnology, the argument from actuarial escape velocity, cryonics, or meddling with the forbidden ultimate technology.  But do you think that anyone who actually argued for afterlife as a Noble Lie would be glad to hear about these Third Alternatives?  No, because the point was not really to find the best strategy for supplying hope, but rather to excuse a fixed previous belief from criticism.

\n

You can argue against the feasibility of one of the above Third Alternatives, or even argue against the feasibility of all of them, but that's not the point.  Any one of those Third Alternatives stretches credulity less than a soul - that is (a) an imperishable dualistic stuff floating alongside the brain which (b) malfunctions exactly as the brain is neurologically damaged and yet (c) survives the brain's entire death. Even if we suppose the above Third Alternatives to be false-in-fact, they are packaged with far fewer associated absurdities, and put far less of a strain on the Standard Model.

\n

Thus on the presentation of any one of these Third Alternatives, afterlife-ism stands immediately convicted because it cannot be the best strategy even as a Noble Lie.  The old Noble Lie is dominated in the payoff table. If you decided to lie (to others or yourself) to soften the horror of personal extinction, then you'd nudge the balance of evidence a little on actuarial escape velocity - not spin up a soul from whole cloth.

\n

(A truly fanatic rationalist - like me - would refuse to judge between these two lies, regarding them both as equal transgressions of the deontological commandments Thou Shalt Not Nudge Thy Probability Assignments and Thou Shalt Not Pursue Hope As An Emotion, Only Actual Positive Outcomes.  Which is still no argument in favor of afterlife-ism; when a negative utility drops off my radar screen and becomes incomparable, I generally don't choose that policy.)

" } }, { "_id": "erGipespbbzdG5zYb", "title": "The Third Alternative", "pageUrl": "https://www.lesswrong.com/posts/erGipespbbzdG5zYb/the-third-alternative", "postedAt": "2007-05-06T23:47:52.000Z", "baseScore": 171, "voteCount": 151, "commentCount": 86, "url": null, "contents": { "documentId": "erGipespbbzdG5zYb", "html": "\n\n\n\n \n\n \n\n

“Believing in Santa Claus gives children a sense of wonder and encourages them to behave well in hope of receiving presents. If Santa-belief is destroyed by truth, the children will lose their sense of wonder and stop behaving nicely. Therefore, even though Santa-belief is false-to-fact, it is a Noble Lie whose net benefit should be preserved for utilitarian reasons.”

\n\n

Classically, this is known as a false dilemma, the fallacy of the excluded middle, or the package-deal fallacy. Even if we accept the underlying factual and moral premises of the above argument, it does not carry through. Even supposing that the Santa policy (encourage children to believe in Santa Claus) is better than the null policy (do nothing), it does not follow that Santa-ism is the best of all possible alternatives. Other policies could also supply children with a sense of wonder, such as taking them to watch a Space Shuttle launch or supplying them with science fiction novels. Likewise, offering children bribes for good behavior encourages the children to behave well only when adults are watching, while praise without bribes leads to unconditional good behavior.

\n\n

Noble Lies are generally package-deal fallacies; and the response to a package-deal fallacy is that if we really need the supposed gain, we can construct a Third Alternative for getting it.

\n\n

How can we obtain Third Alternatives? The first step in obtaining a Third Alternative is deciding to look for one, and the last step is the decision to accept it. This sounds obvious, and yet most people fail on these two steps, rather than within the search process.

\n\n

Some false dilemmas arise honestly, because superior alternatives are cognitively hard to see. But one factory for false dilemmas is justifying a questionable policy by pointing to a supposed benefit over the null action. In this case, the justifier does not want a Third Alternative; finding a Third Alternative would destroy the justification. The last thing a Santa-ist wants to hear is that praise works better than bribes, or that spaceships can be as inspiring as flying reindeer.

\n\n

The best is the enemy of the good. If the goal is really to help people, then a superior alternative is cause for celebration—once we find this better strategy, we can help people more effectively. But if the goal is to justify a particular strategy by claiming that it helps people, a Third Alternative is an enemy argument, a competitor.

\n\n

Modern cognitive psychology views decision-making as a search for alternatives. In real life, it’s not enough to compare options; you have to generate the options in the first place. On many problems, the number of alternatives is huge, so you need a stopping criterion for the search. When you’re looking to buy a house, you can’t compare every house in the city; at some point you have to stop looking and decide.

\n\n

But what about when our conscious motives for the search—the criteria we can admit to ourselves—don’t square with subconscious influences? When we are carrying out an allegedly altruistic search, a search for an altruistic policy, and we find a strategy that benefits others but disadvantages ourselves—well, we don’t stop looking there; we go on looking. Telling ourselves that we’re looking for a strategy that brings greater altruistic benefit, of course. But suppose we find a policy that has some defensible benefit, and also just happens to be personally convenient? Then we stop the search at once! In fact, we’ll probably resist any suggestion that we start looking again—pleading lack of time, perhaps. (And yet somehow, we always have cognitive resources for coming up with justifications for our current policy.)

\n\n

Beware when you find yourself arguing that a policy is defensible rather than optimal; or that it has some benefit compared to the null action, rather than the best benefit of any action.

\n\n

False dilemmas are often presented to justify unethical policies that are, by some vast coincidence, very convenient. Lying, for example, is often much more convenient than telling the truth; and believing whatever you started out with is more convenient than updating. Hence the popularity of arguments for Noble Lies; it serves as a defense of a pre-existing belief—one does not find Noble Liars who calculate an optimal new Noble Lie; they keep whatever lie they started with. Better stop that search fast!

\n\n

To do better, ask yourself straight out: If I saw that there was a superior alternative to my current policy, would I be glad in the depths of my heart, or would I feel a tiny flash of reluctance before I let go? If the answers are “no” and “yes,” beware that you may not have searched for a Third Alternative.

\n\n

Which leads into another good question to ask yourself straight out: Did I spend five minutes with my eyes closed, brainstorming wild and creative options, trying to think of a better alternative? It has to be five minutes by the clock, because otherwise you blink—close your eyes and open them again—and say, “Why, yes, I searched for alternatives, but there weren’t any.” Blinking makes a good black hole down which to dump your duties. An actual, physical clock is recommended.

\n\n

And those wild and creative options—were you careful not to think of a good one? Was there a secret effort from the corner of your mind to ensure that every option considered would be obviously bad?

\n\n

It’s amazing how many Noble Liars and their ilk are eager to embrace ethical violations—with all due bewailing of their agonies of conscience—when they haven’t spent even five minutes by the clock looking for an alternative. There are some mental searches that we secretly wish would fail; and when the prospect of success is uncomfortable, people take the earliest possible excuse to give up.

\n\n" } }, { "_id": "jyDBcs3Rx8t5Fhquo", "title": "Beware the Unsurprised", "pageUrl": "https://www.lesswrong.com/posts/jyDBcs3Rx8t5Fhquo/beware-the-unsurprised", "postedAt": "2007-05-03T22:45:48.000Z", "baseScore": 41, "voteCount": 25, "commentCount": 4, "url": null, "contents": { "documentId": "jyDBcs3Rx8t5Fhquo", "html": "

In Think Like Reality, I put forth the astonishing and controversial proposition that when human intuitions disagree with a fact, we need to either disprove the \"fact\" in question, or try to reshape the intuition.  (Well, it wouldn't have been so controversial, but like a fool I picked quantum mechanics to illustrate the point.  Never use quantum mechanics as an example of anything.)  Probability theory says that a model which is consistently surprised on the data is probably not a very good model.

\n

Matt Shulman pointed out in personal conversation that, in practice, we may want to be wary of people who don't appear surprised by surprising-seeming data.  Some people affect to be unsurprised because it is a fakeable signal of competence.  Well, a lot of things that good rationalists will do - such as appearing skeptical and appearing to take other people's opinions into account - are also fakeable signals of competence.  But, in practice, Matt's point is still well-taken.

\n

\n

People may also appear unsurprised (Matt points out) if their models are so vague that they don't understand the implications one way or the other.  (Rob Spear: \"It doesn't matter to the general public whether reality has 11, 42, or 97.5 dimensions...  The primary good that most modern physics provides to the people is basically light entertainment.\")  Or they may appear unsurprised if they fail to emotionally connect to the implications - \"Oh, sure, an asteroid is going to hit Earth... but personally I don't think humanity really deserves to survive anyway... are you taking Sally to her doctor's appointment tomorrow?\"

\n

Or Cialdini on the bystander effect:

\n
\n

We can learn from the way the other witnesses are reacting whether the event is or is not an emergency. What is easy to forget, though, is that everybody else observing the event is likely to be looking for social evidence, too. Because we all prefer to appear poised and unflustered among others, we are likely to search for that evidence placidly, with brief, camouflaged glances at those around us. Therefore everyone is likely to see everyone else looking unruffled and failing to act.

\n
\n

So appearing unsurprised, or pretending to yourself that you weren't surprised, is both personally and socially detrimental.  By saying that a consistently surprised model is a poor model, I didn't intend to make it more difficult for people to admit their surprise!  Even rationalists are surprised sometimes - the important thing is to throw away the model, reshape your intuitions, and otherwise update yourself so that it doesn't happen again.

\n

Think Like Reality wasn't arguing that we should never admit surprise, but that, having been surprised, we shouldn't get all indignant at reality for surprising us - because that just keeps us in the mistaken frame of mind that was surprised in the first place; instead, we should try to adjust our intuitions so that reality doesn't seem surprising the next time.  That doesn't mean rationalizing the events in hindsight using your current model - hindsight bias is detrimental to this process because it leads you to underestimate how surprised you were, and hence adjust your model less than it needs to be adjusted.

" } }, { "_id": "tWLFWAndSZSYN6rPB", "title": "Think Like Reality", "pageUrl": "https://www.lesswrong.com/posts/tWLFWAndSZSYN6rPB/think-like-reality", "postedAt": "2007-05-02T06:36:17.000Z", "baseScore": 133, "voteCount": 129, "commentCount": 69, "url": null, "contents": { "documentId": "tWLFWAndSZSYN6rPB", "html": "

Whenever I hear someone describe quantum physics as \"weird\" - whenever I hear someone bewailing the mysterious effects of observation on the observed, or the bizarre existence of nonlocal correlations, or the incredible impossibility of knowing position and momentum at the same time - then I think to myself:  This person will never understand physics no matter how many books they read.

\n

Reality has been around since long before you showed up.  Don't go calling it nasty names like \"bizarre\" or \"incredible\".  The universe was propagating complex amplitudes through configuration space for ten billion years before life ever emerged on Earth.  Quantum physics is not \"weird\".  You are weird.  You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space.  This is your problem, not reality's, and you are the one who needs to change.

\n

\n

Human intuitions were produced by evolution and evolution is a hack.  The same optimization process that built your retina backward and then routed the optic cable through your field of vision, also designed your visual system to process persistent objects bouncing around in 3 spatial dimensions because that's what it took to chase down tigers.  But \"tigers\" are leaky surface generalizations - tigers came into existence gradually over evolutionary time, and they are not all absolutely similar to each other.  When you go down to the fundamental level, the level on which the laws are stable, global, and exception-free, there aren't any tigers.  In fact there aren't any persistent objects bouncing around in 3 spatial dimensions.  Deal with it.

\n

Calling reality \"weird\" keeps you inside a viewpoint already proven erroneous.  Probability theory tells us that surprise is the measure of a poor hypothesis; if a model is consistently stupid  - consistently hits on events the model assigns tiny probabilities - then it's time to discard that model.  A good model makes reality look normal, not weird; a good model assigns high probability to that which is actually the case.  Intuition is only a model by another name: poor intuitions are shocked by reality, good intuitions make reality feel natural.  You want to reshape your intuitions so that the universe looks normal.  You want to think like reality.

\n

This end state cannot be forced.  It is pointless to pretend that quantum physics feels natural to you when in fact it feels strange.  This is merely denying your confusion, not becoming less confused.  But it will also hinder you to keep thinking How bizarre!  Spending emotional energy on incredulity wastes time you could be using to update.  It repeatedly throws you back into the frame of the old, wrong viewpoint.  It feeds your sense of righteous indignation at reality daring to contradict you.

\n

The principle extends beyond physics.  Have you ever caught yourself saying something like, \"I just don't understand how a PhD physicist can believe in astrology?\"  Well, if you literally don't understand, this indicates a problem with your model of human psychology.  Perhaps you are indignant - you wish to express strong moral disapproval.  But if you literally don't understand, then your indignation is stopping you from coming to terms with reality.  It shouldn't be hard to imagine how a PhD physicist ends up believing in astrology.  People compartmentalize, enough said.

\n

I now try to avoid using the English idiom \"I just don't understand how...\" to express indignation.  If I genuinely don't understand how, then my model is being surprised by the facts, and I should discard it and find a better model.

\n

Surprise exists in the map, not in the territory.  There are no surprising facts, only models that are surprised by facts.  Likewise for facts called such nasty names as \"bizarre\", \"incredible\", \"unbelievable\", \"unexpected\", \"strange\", \"anomalous\", or \"weird\".  When you find yourself tempted by such labels, it may be wise to check if the alleged fact is really factual.  But if the fact checks out, then the problem isn't the fact, it's you.

" } }, { "_id": "7iTwGquBFZKttpEdE", "title": "Universal Law", "pageUrl": "https://www.lesswrong.com/posts/7iTwGquBFZKttpEdE/universal-law", "postedAt": "2007-04-29T06:41:08.000Z", "baseScore": 115, "voteCount": 94, "commentCount": 28, "url": null, "contents": { "documentId": "7iTwGquBFZKttpEdE", "html": "

Antoine-Laurent de Lavoisier discovered that breathing (respiration) and fire (combustion) operated on the same principle.  It was one of the most startling unifications in the history of science, for it brought together the mundane realm of matter and the sacred realm of life, which humans had divided into separate magisteria.

\n

The first great simplification was that of Isaac Newton, who unified the course of the planets with the trajectory of a falling apple.  The shock of this discovery was greater by far than Lavoisier's.  It wasn't just that Newton had dared to unify the Earthly realm of base matter with the obviously different and sacred celestial realm, once thought to be the abode of the gods.  Newton's discovery gave rise to the notion of a universal law, one that is the same everywhere and everywhen, with literally zero exceptions.

\n

\n

Human beings live in a world of surface phenomena, and surface phenomena are divided into leaky categories with plenty of exceptions.  A tiger does not behave like a buffalo.  Most buffalo have four legs, but perhaps this one has three.  Why would anyone think there would be laws that hold everywhere?  It's just so obviously untrue.

\n

The only time when it seems like we would want a law to hold everywhere is when we are talking about moral laws - tribal rules of behavior.  Some tribe members may try to take more than their fair share of the buffalo meat - perhaps coming up with some clever excuse - so in the case of moral laws we do seem to have an instinct to universality.  Yes, the rule about dividing the meat evenly applies to you, right now, whether you like it or not.  But even here there are exceptions.  If - for some bizarre reason - a more powerful tribe threatened to spear all of you unless Bob received twice as much meat on just this one occasion, you'd give Bob twice as much meat.  The idea of a rule with literally no exceptions seems insanely rigid, the product of closed-minded thinking by fanatics so in the grip of their one big idea that they can't see the richness and complexity of the real universe.

\n

This is the customary accusation made against scientists - the professional students of the richness and complexity of the real universe.  Because when you actually look at the universe, it turns out to be, by human standards, insanely rigid in applying its rules.  As far as we know, there has been not one single violation of conservation of momentum from the uttermost dawn of time up until now.

\n

Sometimes - very rarely - we observe an apparent violation of our models of the fundamental laws.  Though our scientific models may last for a generation or two, they are not stable over the course of centuries... but do not fancy that this makes the universe itself whimsical.  That is mixing up the map with the territory.  For when the dust subsides and the old theory is overthrown, it turns out that the universe always was acting according to the new generalization we have discovered, which once again is absolutely universal as far as humanity's knowledge extends.  When it was discovered that Newtonian gravitation was a special case of General Relativity, it was seen that General Relativity had been governing the orbit of Mercury for decades before any human being knew about it; and it would later become apparent that General Relativity had been governing the collapse of stars for billions of years before humanity.  It is only our model that was mistaken - the Law itself was always absolutely constant - or so our new model tells us.

\n

I may repose only 80% confidence that the lightspeed limit will last out the next hundred thousand years, but this does not mean that I think the lightspeed limit holds only 80% of the time, with occasional exceptions.  The proposition to which I assign 80% probability is that the lightspeed law is absolutely inviolable throughout the entirety of space and time.

\n

One of the reasons the ancient Greeks didn't discover science is that they didn't realize you could generalize from experiments.  The Greek philosophers were interested in \"normal\" phenomena.  If you set up a contrived experiment, you would probably get a \"monstrous\" result, one that had no implications for how things really worked.

\n

So that is how humans tend to dream, before they learn better; but what of the universe's own quiet dreams that it dreamed to itself before ever it dreamed of humans?  If you would learn to think like reality, then here is the Tao:

\n
\n

Since the beginning
not one unusual thing
has ever happened.

\n
" } }, { "_id": "LaM5aTcXvXzwQSC2Q", "title": "Universal Fire", "pageUrl": "https://www.lesswrong.com/posts/LaM5aTcXvXzwQSC2Q/universal-fire", "postedAt": "2007-04-27T21:15:46.000Z", "baseScore": 211, "voteCount": 160, "commentCount": 46, "url": null, "contents": { "documentId": "LaM5aTcXvXzwQSC2Q", "html": "

In L. Sprague de Camp's fantasy story The Incomplete Enchanter (which set the mold for the many imitations that followed), the hero, Harold Shea, is transported from our own universe into the universe of Norse mythology.  This world is based on magic rather than technology; so naturally, when Our Hero tries to light a fire with a match brought along from Earth, the match fails to strike.

\n

I realize it was only a fantasy story, but... how do I put this...

\n

No.

\n

\n

In the late eighteenth century, Antoine-Laurent de Lavoisier discovered fire.  \"What?\" you say.  \"Hasn't the use of fire been dated back for hundreds of thousands of years?\"  Well, yes, people used fire; it was hot, bright, sort of orangey-colored, and you could use it to cook things.  But nobody knew how it worked.  Greek and medieval alchemists thought that Fire was a basic thing, one of the Four Elements.  In Lavoisier's time the alchemical paradigm had been gradually amended and greatly complicated, but fire was still held to be basic - in the form of \"phlogiston\", a rather mysterious substance which was said to explain fire, and also every other phenomenon in alchemy.

\n

Lavoisier's great innovation was to weigh all the pieces of the chemical puzzle, both before and after the chemical reaction.  It had previously been thought that some chemical transmutations changed the weight of the total material:  If you subjected finely ground antimony to the focused sunlight of a burning glass, the antimony would be reduced to ashes after one hour, and the ashes would weigh one-tenth more than the original antimony - even though the burning had been accompanied by the loss of a thick white smoke.  Lavoisier weighed all the components of such reactions, including the air in which the reaction took place, and discovered that matter was neither created nor destroyed.  If the burnt ashes increased in weight, there was a corresponding decrease in the weight of the air.

\n

Lavoisier also knew how to separate gases, and discovered that a burning candle diminished the amount of one kind of gas, vital air, and produced another gas, fixed air.  Today we would call them oxygen and carbon dioxide.  When the vital air was exhausted, the fire went out.  One might guess, perhaps, that combustion transformed vital air into fixed air and fuel to ash, and that the ability of this transformation to continue was limited by the amount of vital air available.

\n

Lavoisier's proposal directly contradicted the then-current phlogiston theory. That alone would have been shocking enough, but it also turned out...

\n

To appreciate what comes next, you must put yourself into an eighteenth-century frame of mind. Forget the discovery of DNA, which occurred only in 1953. Unlearn the cell theory of biology, which was formulated in 1839. Imagine looking at your hand, flexing your fingers... and having absolutely no idea how it worked. The anatomy of muscle and bone was known, but no one had any notion of \"what makes it go\" - why a muscle moves and flexes, while clay molded into a similar shape just sits there. Imagine your own body being composed of mysterious, incomprehensible gloop. And then, imagine discovering...

\n

...that humans, in the course of breathing, consumed vital air and breathed out fixed air. People also ran on combustion! Lavoisier measured the amount of heat that animals (and Lavoisier's assistant, Seguin) produced when exercising, the amount of vital air consumed, and the fixed air breathed out.  When animals produced more heat, they consumed more vital air and exhaled more fixed air. People, like fire, consumed fuel and oxygen; people, like fire, produced heat and carbon dioxide. Deprive people of oxygen, or fuel, and the light goes out.

\n

Matches catch fire because of phosphorus - \"safety matches\" have phosphorus on the ignition strip; strike-anywhere matches have phosphorus in the match heads.  Phosphorus is highly reactive; pure phosphorus glows in the dark and may spontaneously combust.  (Henning Brand, who purified phosphorus in 1669, announced that he had discovered Elemental Fire.)  Phosphorus is thus also well-suited to its role in adenosine triphosphate, ATP, your body's chief method of storing chemical energy.  ATP is sometimes called the \"molecular currency\".  It invigorates your muscles and charges up your neurons.  Almost every metabolic reaction in biology relies on ATP, and therefore on the chemical properties of phosphorus.

\n

If a match stops working, so do you.  You can't change just one thing.

\n

The surface-level rules, \"Matches catch fire when struck,\" and \"Humans need air to breathe,\" are not obviously connected.  It took centuries to discover the connection, and even then, it still seems like some distant fact learned in school, relevant only to a few specialists.  It is all too easy to imagine a world where one surface rule holds, and the other doesn't; to suppress our credence in one belief, but not the other.  But that is imagination, not reality.  If your map breaks into four pieces for easy storage, it doesn't mean the territory is also broken into disconnected parts.  Our minds store different surface-level rules in different compartments, but this does not reflect any division in the laws that govern Nature.

\n

We can take the lesson further.  Phosphorus derives its behavior from even deeper laws, electrodynamics and chromodynamics.  \"Phosphorus\" is merely our word for electrons and quarks arranged a certain way.  You cannot change the chemical properties of phosphorus without changing the laws governing electrons and quarks.

\n

If you stepped into a world where matches failed to strike, you would cease to exist as organized matter.

\n

Reality is laced together a lot more tightly than humans might like to believe.

" } }, { "_id": "SqF8cHjJv43mvJJzx", "title": "Feeling Rational", "pageUrl": "https://www.lesswrong.com/posts/SqF8cHjJv43mvJJzx/feeling-rational", "postedAt": "2007-04-26T04:48:05.000Z", "baseScore": 324, "voteCount": 325, "commentCount": 89, "url": null, "contents": { "documentId": "SqF8cHjJv43mvJJzx", "html": "

Since curiosity is an emotion, I suspect that some people will object to treating curiosity as a part of rationality. A popular belief about “rationality” is that rationality opposes all emotion—that all our sadness and all our joy are automatically anti-logical by virtue of being feelings. Yet strangely enough, I can’t find any theorem of probability theory which proves that I should appear ice-cold and expressionless.

When people think of “emotion” and “rationality” as opposed, I suspect that they are really thinking of System 1 and System 2—fast perceptual judgments versus slow deliberative judgments. System 2’s deliberative judgments aren’t always true, and System 1’s perceptual judgments aren’t always false; so it is very important to distinguish that dichotomy from “rationality.” Both systems can serve the goal of truth, or defeat it, depending on how they are used.

For my part, I label an emotion as “not rational” if it rests on mistaken beliefs, or rather, on mistake-producing epistemic conduct. “If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm.” Conversely, an emotion that is evoked by correct beliefs or truth-conducive thinking is a “rational emotion”; and this has the advantage of letting us regard calm as an emotional state, rather than a privileged default.

So is rationality orthogonal to feeling? No; our emotions arise from our models of reality. If I believe that my dead brother has been discovered alive, I will be happy; if I wake up and realize it was a dream, I will be sad. P. C. Hodgell said: “That which can be destroyed by the truth should be.” My dreaming self’s happiness was opposed by truth. My sadness on waking is rational; there is no truth which destroys it.

Rationality begins by asking how-the-world-is, but spreads virally to any other thought which depends on how we think the world is. Your beliefs about “how-the-world-is” can concern anything you think is out there in reality, anything that either does or does not exist, any member of the class “things that can make other things happen.” If you believe that there is a goblin in your closet that ties your shoes’ laces together, then this is a belief about how-the-world-is. Your shoes are real—you can pick them up. If there’s something out there that can reach out and tie your shoelaces together, it must be real too, part of the vast web of causes and effects we call the “universe.”

Feeling angry at the goblin who tied your shoelaces involves a state of mind that is not just about how-the-world-is. Suppose that, as a Buddhist or a lobotomy patient or just a very phlegmatic person, finding your shoelaces tied together didn’t make you angry. This wouldn’t affect what you expected to see in the world—you’d still expect to open up your closet and find your shoelaces tied together. Your anger or calm shouldn’t affect your best guess here, because what happens in your closet does not depend on your emotional state of mind; though it may take some effort to think that clearly.

But the angry feeling is tangled up with a state of mind that is about how-the-world-is; you become angry because you think the goblin tied your shoelaces. The criterion of rationality spreads virally, from the initial question of whether or not a goblin tied your shoelaces, to the resulting anger.

Becoming more rational—arriving at better estimates of how-the-world-is—can diminish feelings or intensify them. Sometimes we run away from strong feelings by denying the facts, by flinching away from the view of the world that gave rise to the powerful emotion. If so, then as you study the skills of rationality and train yourself not to deny facts, your feelings will become stronger.

In my early days I was never quite certain whether it was all right to feel things strongly—whether it was allowed, whether it was proper. I do not think this confusion arose only from my youthful misunderstanding of rationality. I have observed similar troubles in people who do not even aspire to be rationalists; when they are happy, they wonder if they are really allowed to be happy, and when they are sad, they are never quite sure whether to run away from the emotion or not. Since the days of Socrates at least, and probably long before, the way to appear cultured and sophisticated has been to never let anyone see you care strongly about anything. It’s embarrassing to feel—it’s just not done in polite society. You should see the strange looks I get when people realize how much I care about rationality. It’s not the unusual subject, I think, but that they’re not used to seeing sane adults who visibly care about anything.

But I know, now, that there’s nothing wrong with feeling strongly. Ever since I adopted the rule of “That which can be destroyed by the truth should be,” I’ve also come to realize “That which the truth nourishes should thrive.” When something good happens, I am happy, and there is no confusion in my mind about whether it is rational for me to be happy. When something terrible happens, I do not flee my sadness by searching for fake consolations and false silver linings. I visualize the past and future of humankind, the tens of billions of deaths over our history, the misery and fear, the search for answers, the trembling hands reaching upward out of so much blood, what we could become someday when we make the stars our cities, all that darkness and all that light—I know that I can never truly understand it, and I haven’t the words to say. Despite all my philosophy I am still embarrassed to confess strong emotions, and you’re probably uncomfortable hearing them. But I know, now, that it is rational to feel.

" } }, { "_id": "rfDS25Pdoij2ZFeJ9", "title": "Consolidated Nature of Morality Thread", "pageUrl": "https://www.lesswrong.com/posts/rfDS25Pdoij2ZFeJ9/consolidated-nature-of-morality-thread", "postedAt": "2007-04-15T23:00:46.000Z", "baseScore": 14, "voteCount": 17, "commentCount": 69, "url": null, "contents": { "documentId": "rfDS25Pdoij2ZFeJ9", "html": "

My intended next OB post will, in passing, distinguish between moral judgments and factual beliefs.  Several times before, this has sparked a debate about the nature of morality.  (E.g., Believing in Todd.) Such debates often repeat themselves, reinvent the wheel each time, start all over from previous arguments.  To avoid this, I suggest consolidating the debate.  Whenever someone feels tempted to start a debate about the nature of morality in the comments thread of another post, the comment should be made to this post, instead, with an appropriate link to the article commented upon.  Otherwise it does tend to take over discussions like kudzu.  (This isn't the first blog/list where I've seen it happen.)

\n

I'll start the ball rolling with ten points to ponder about the nature of morality...

\n

\n
  1. It certainly looks like there is an important distinction between a statement like \"The total loss of human life caused by World War II was roughly 72 million people\" and \"We ought to avoid a repeat of World War II.\"  Anyone who argues that these statements are of the same fundamental kind must explain away the apparent structural differences between them.  What are the exact structural differences?
  2. We experience some of our morals and preferences as being voluntary choices, others as involuntary perceptions.  I choose to play on the side of Rationality, but I don't think I could choose to believe that death is good any more than I could choose to believe the sky is green.  What psychological factors account for these differences in my perceptions of my own preferences?
  3. At a relatively young age, children begin to believe that while the teacher can make it all right to stand on your chair by giving permission, the teacher cannot make it all right to steal from someone else's backpack.  (I can't recall the exact citation on this.)  Do young children in a religious environment believe that God can make it all right to steal from someone's backpack?
  4. Both individual human beings and civilizations appear to change at least some of their moral beliefs over the course of time.  Some of these changes are experienced as \"decisions\", others are experienced as \"discoveries\".  Is there a systematic direction to at least some of these changes?  How does this systematic direction arise causally?
  5. To paraphrase Alfred Tarski, the statement \"My car is painted green\" is true if and only if my car is painted green.  Similarly, someone might try to get away with asserting that the statement \"Human deaths are bad\" is true if and only if human deaths are bad.  Is this valid?
  6. Suppose I involuntarily administered to you a potion which would cause you to believe that human deaths were good.  Afterward, would you believe truly that human deaths were good, or would you believe falsely that human deaths were good?
  7. Although the statement \"My car is painted green\" is presently false, I can make it true at a future time by painting my car green.  However, I can think of no analogous action I could take which would make it right to kill people.  Does this make the moral statement stronger, weaker, or is there no sense in making the comparison?
  8. There does not appear to be any \"place\" in the environment where the referents of moral statements are stored, analogous to the place where my car is stored.  Does this necessarily indicate that moral statements are empty of content, or could they correspond to something else?  Is the statement 2 + 2 = 4 true?  Could it be made untrue?  Is it falsifiable?  Where is its content?
  9. The phrase \"is/ought\" gap refers to the notion that no ought statement can be logically derived from any number of is statements, without at least one ought statement in the mix.  For example, suppose I have a remote control with two buttons, and the red button kills an innocent prisoner, and the green button sets them free.  I cannot derive the ought-statement, \"I ought not to press the red button\", without both the is-statement \"If I press the red button, an innocent will die\" and the ought-statement \"I ought not to kill innocents.\"  Should we distinguish mixed ought-statements like \"I ought not to press the red button\" from pure ought-statements like \"I ought not to kill innocents\"?  If so, is there really any such thing as a \"pure\" ought-statement, or do they all have is-statements mixed into them somewhere?
  10. The statement \"This painting is beautiful\" could be rendered untrue by flinging a bucket of mud on the painting.  Similarly, in the remote-control example above, the statement \"It is wrong to press the red button\" can be rendered untrue by rewiring the remote.  Are there pure aesthetic judgments?  Are there pure preferences?
" } }, { "_id": "anCubLdggTWjnEvBS", "title": "Your Rationality is My Business", "pageUrl": "https://www.lesswrong.com/posts/anCubLdggTWjnEvBS/your-rationality-is-my-business", "postedAt": "2007-04-15T07:31:11.000Z", "baseScore": 170, "voteCount": 148, "commentCount": 28, "url": null, "contents": { "documentId": "anCubLdggTWjnEvBS", "html": "\n\n\n\n \n\n \n\n

Some responses to “Lotteries: A Waste of Hope” chided me for daring to criticize others’ decisions; if someone else chooses to buy lottery tickets, who am I to disagree? This is a special case of a more general question: What business is it of mine, if someone else chooses to believe what is pleasant rather than what is true? Can’t we each choose for ourselves whether to care about the truth?

\n\n

An obvious snappy comeback is: “Why do you care whether I care whether someone else cares about the truth?” It is somewhat inconsistent for your utility function to contain a negative term for anyone else’s utility function having a term for someone else’s utility function. But that is only a snappy comeback, not an answer.

\n\n

So here then is my answer: I believe that it is right and proper for me, as a human being, to have an interest in the future, and what human civilization becomes in the future. One of those interests is the human pursuit of truth, which has strengthened slowly over the generations (for there was not always Science). I wish to strengthen that pursuit further, in this generation. That is a wish of mine, for the Future. For we are all of us players upon that vast gameboard, whether we accept the responsibility or not.

\n\n

And that makes your rationality my business.

\n\n

Is this a dangerous idea? Yes, and not just pleasantly edgy “dangerous.” People have been burned to death because some priest decided that they didn’t think the way they should. Deciding to burn people to death because they “don’t think properly”—that’s a revolting kind of reasoning, isn’t it? You wouldn’t want people to think that way, why, it’s disgusting. People who think like that, well, we’ll have to do something about them . . .

\n\n

I agree! Here’s my proposal: Let’s argue against bad ideas but not set their bearers on fire.

\n\n

The syllogism we desire to avoid runs: “I think Susie said a bad thing, therefore, Susie should be set on fire.” Some try to avoid the syllogism by labeling it improper to think that Susie said a bad thing. No one should judge anyone, ever; anyone who judges is committing a terrible sin, and should be publicly pilloried for it.

\n\n

As for myself, I deny the therefore. My syllogism runs, “I think Susie said something wrong, therefore, I will argue against what she said, but I will not set her on fire, or try to stop her from talking by violence or regulation . . .”

\n\n

We are all of us players upon that vast gameboard; and one of my interests for the Future is to make the game fair. The counterintuitive idea underlying science is that factual disagreements should be fought out with experiments and mathematics, not violence and edicts. This incredible notion can be extended beyond science, to a fair fight for the whole Future. You should have to win by convincing people, and should not be allowed to burn them. This is one of the principles of Rationality, to which I have pledged my allegiance.

\n\n

People who advocate relativism or selfishness do not appear to me to be truly relativistic or selfish. If they were really relativistic, they would not judge. If they were really selfish, they would get on with making money instead of arguing passionately with others. Rather, they have chosen the side of Relativism, whose goal upon that vast gameboard is to prevent the players—all the players—from making certain kinds of judgments. Or they have chosen the side of Selfishness, whose goal is to make all players selfish. And then they play the game, fairly or unfairly according to their wisdom.

\n\n

If there are any true Relativists or Selfishes, we do not hear them—they remain silent, non-players.

\n\n

I cannot help but care how you think, because—as I cannot help but see the universe—each time a human being turns away from the truth, the unfolding story of humankind becomes a little darker. In many cases, it is a small darkness only. (Someone doesn’t always end up getting hurt.) Lying to yourself, in the privacy of your own thoughts, does not shadow humanity’s history so much as telling public lies or setting people on fire. Yet there is a part of me which cannot help but mourn. And so long as I don’t try to set you on fire—only argue with your ideas—I believe that it is right and proper to me, as a human, that I care about my fellow humans. That, also, is a position I defend into the Future.

\n\n" } }, { "_id": "QawvGzYWhqdyPWgBL", "title": "New Improved Lottery", "pageUrl": "https://www.lesswrong.com/posts/QawvGzYWhqdyPWgBL/new-improved-lottery", "postedAt": "2007-04-13T23:42:11.000Z", "baseScore": 132, "voteCount": 121, "commentCount": 45, "url": null, "contents": { "documentId": "QawvGzYWhqdyPWgBL", "html": "\n\n\n\n \n\n \n\n

People are still suggesting that the lottery is not a waste of hope, but a service which enables purchase of fantasy—“daydreaming about becoming a millionaire for much less money than daydreaming about hollywood stars in movies.”1 One commenter wrote: “There is a big difference between zero chance of becoming wealthy, and epsilon. Buying a ticket allows your dream of riches to bridge that gap.”

\n\n

Actually, one of the points I was trying to make is that between zero chance of becoming wealthy, and epsilon chance, there is an order-of-epsilon difference. If you doubt this, let epsilon equal one over googolplex.

\n\n

Anyway, if we pretend that the lottery sells epsilon hope, this suggests a design for a New Improved Lottery. The New Improved Lottery pays out every five years on average, at a random time—determined, say, by the decay of a not-very-radioactive element. You buy in once, for a single dollar, and get not just a few days of epsilon chance of becoming rich, but a few years of epsilon. Not only that, your wealth could strike at any time! At any minute, the phone could ring to inform you that you, yes, you are a millionaire!

\n\n

Think of how much better this would be than an ordinary lottery drawing, which only takes place at defined times, a few times per week. Let’s say the boss comes in and demands you rework a proposal, or restock inventory, or something similarly annoying. Instead of getting to work, you could turn to the phone and stare, hoping for that call—because there would be epsilon chance that, at that exact moment, you yes you would be awarded the Grand Prize! And even if it doesn’t happen this minute, why, there’s no need to be disappointed—it might happen the next minute!

\n\n

Think of how many more fantasies this New Improved Lottery would enable. You could shop at the store, adding expensive items to your shopping cart—if your cellphone doesn’t ring with news of a lottery win, you could always put the items back, right?

\n\n

Maybe the New Improved Lottery could even show a constantly fluctuating probability distribution over the likelihood of a win occurring, and the likelihood of particular numbers being selected, with the overall expectation working out to the aforesaid Poisson distribution. Think of how much fun that would be! Oh, goodness, right this minute the chance of a win occurring is nearly ten times higher than usual! And look, the number 42 that I selected for the Mega Ball has nearly twice the usual chance of winning! You could feed it to a display on people’s cellphones, so they could just flip open the cellphone and see their chances of winning. Think of how exciting that would be! Much more exciting than trying to balance your checkbook! Much more exciting than doing your homework! This new dream would be so much tastier that it would compete with, not only hopes of going to technical school, but even hopes of getting home from work early. People could just stay glued to the screen all day long, why, they wouldn’t need to dream about anything else!

\n\n

Yep, offering people tempting daydreams that will not actually happen sure is a valuable service, all right. People are willing to pay; it must be valuable. The alternative is that consumers are making mistakes, and we all know that can’t happen.

\n\n

And yet current governments, with their vile monopoly on lotteries, don’t offer this simple and obvious service. Why? Because they want to overcharge people. They want them to spend money every week. They want them to spend a hundred dollars for the thrill of believing their chance of winning is a hundred times as large, instead of being able to stare at a cellphone screen waiting for the likelihood to spike. So if you believe that the lottery is a service, it is clearly an enormously overpriced service—charged to the poorest members of society—and it is your solemn duty as a citizen to demand the New Improved Lottery instead.

\n\n
\n \n\n

1See “The Future of Fantasy,” http://www.economist.com/blogs/freeexchange/2007/04/the_future_of_fantasy. For the comment I’m responding to, see http://lesswrong.com/lw/hl/lotteries_a_waste_of_hope/e1u.

\n
\n\n" } }, { "_id": "vYsuM8cpuRgZS5rYB", "title": "Lotteries: A Waste of Hope", "pageUrl": "https://www.lesswrong.com/posts/vYsuM8cpuRgZS5rYB/lotteries-a-waste-of-hope", "postedAt": "2007-04-13T05:36:44.000Z", "baseScore": 100, "voteCount": 93, "commentCount": 73, "url": null, "contents": { "documentId": "vYsuM8cpuRgZS5rYB", "html": "\n\n\n\n \n\n \n\n

The classic criticism of the lottery is that the people who play are the ones who can least afford to lose; that the lottery is a sink of money, draining wealth from those who most need it. Some lottery advocates, and even some commenters on Overcoming Bias, have tried to defend lottery-ticket buying as a rational purchase of fantasy—paying a dollar for a day’s worth of pleasant anticipation, imagining yourself as a millionaire.

\n\n

But consider exactly what this implies. It would mean that you’re occupying your valuable brain with a fantasy whose real probability is nearly zero—a tiny line of likelihood which you, yourself, can do nothing to realize. The lottery balls will decide your future. The fantasy is of wealth that arrives without effort—without conscientiousness, learning, charisma, or even patience.1

\n\n

Which makes the lottery another kind of sink: a sink of emotional energy. It encourages people to invest their dreams, their hopes for a better future, into an infinitesimal probability. If not for the lottery, maybe they would fantasize about going to technical school, or opening their own business, or getting a promotion at work—things they might be able to actually do, hopes that would make them want to become stronger. Their dreaming brains might, in the 20th visualization of the pleasant fantasy, notice a way to really do it. Isn’t that what dreams and brains are for? But how can such reality-limited fare compete with the artificially sweetened prospect of instant wealth—not after herding a dot-com startup through to IPO, but on Tuesday?

\n\n

Seriously, why can’t we just say that buying lottery tickets is stupid? Human beings are stupid, from time to time—it shouldn’t be so surprising a hypothesis.

\n\n

Unsurprisingly, the human brain doesn’t do 64-bit floating-point arithmetic, and it can’t devalue the emotional force of a pleasant anticipation by a factor of 0.00000001 without dropping the line of reasoning entirely. Unsurprisingly, many people don’t realize that a numerical calculation of expected utility ought to override or replace their imprecise financial instincts, and instead treat the calculation as merely one argument to be balanced against their pleasant anticipations—an emotionally weak argument, since it’s made up of mere squiggles on paper, instead of visions of fabulous wealth.

\n\n

This seems sufficient to explain the popularity of lotteries. Why do so many arguers feel impelled to defend this classic form of self-destruction?2

\n\n

The process of overcoming bias requires (1) first noticing the bias, (2) analyzing the bias in detail, (3) deciding that the bias is bad, (4) figuring out a workaround, and then (5) implementing it. It’s unfortunate how many people get through steps 1 and 2 and then bog down in step 3, which by rights should be the easiest of the five. Biases are lemons, not lemonade, and we shouldn’t try to make lemonade out of them—just burn those lemons down.

\n\n
\n \n\n

1See Po Bronson, “How Not to Talk to Your Kids,” New York, 2007, http://nymag.com/news/features/27840.

\n\n

2See “Debiasing as Non-Self-Destruction.” http://lesswrong.com/lw/hf/debiasing_as_nonselfdestruction.

\n
\n\n" } }, { "_id": "jzf4Rcienrm6btRyt", "title": "Priors as Mathematical Objects", "pageUrl": "https://www.lesswrong.com/posts/jzf4Rcienrm6btRyt/priors-as-mathematical-objects", "postedAt": "2007-04-12T03:24:49.000Z", "baseScore": 53, "voteCount": 32, "commentCount": 20, "url": null, "contents": { "documentId": "jzf4Rcienrm6btRyt", "html": "

Followup to:  "Inductive Bias" \n\n

\n\n

What exactly is a "prior", as a mathematical object?  Suppose you're looking at an urn filled with red and white balls.  When you draw the very first ball, you haven't yet had a chance to gather much evidence, so you start out with a rather vague and fuzzy expectation of what might happen - you might say "fifty/fifty, even odds" for the chance of getting a red or white ball.  But you're ready to revise that estimate for future balls as soon as you've drawn a few samples.  So then this initial probability estimate, 0.5, is not repeat not a "prior".

\n\n

An introduction to Bayes's Rule for confused students might refer to the population frequency of breast cancer as the \"prior probability of breast cancer\", and the revised probability after a mammography as the \"posterior probability\". But in the scriptures of Deep Bayesianism, such as Probability Theory: The Logic of Science, one finds a quite different concept - that of prior information, which includes e.g. our beliefs about the sensitivity and specificity of mammography exams. Our belief about the population frequency of breast cancer is only one small element of our prior information.

In my earlier post on inductive bias, I discussed three possible beliefs we might have about an urn of red and white balls, which will be sampled without replacement:

\n\n\n\n

In each case, if you ask me - before I draw any balls - to estimate my marginal probability that the fourth ball drawn will be red, I will respond "50%".  And yet, once I begin observing balls drawn from the urn, I reason from the evidence in three different ways:

\n\n\n\n

Suppose I write a Python program to reproduce my reasoning in each of these scenarios.  The program will take in a record of balls observed so far, and output an estimate of the probability that the next ball drawn will be red.  It turns out that the only necessary information is the count of red balls seen and white balls seen, which we will respectively call R and W.  So each program accepts inputs R and W, and outputs the probability that the next ball drawn is red:

\n\n\n\n
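
As a minimal sketch (an illustration, not the original listing), here is one way the three programs could be written in Python.  The first two correspond to the scenarios described under \"Inductive Bias\" further down this page: a known urn of 5 red and 5 white balls, and a fixed probability drawn uniformly from [0, 1], which yields Laplace's Rule of Succession.  The third is assumed here to be the no-learning case, each ball independently red with probability 1/2, consistent with the 50% marginal stated above; the function names are placeholders.

```python
def program_1(R, W):
    # Hypothesis: the urn contains exactly 5 red and 5 white balls.
    # P(next ball is red) = remaining red balls / remaining balls.
    return (5 - R) / (10 - R - W)

def program_2(R, W):
    # Hypothesis: a probability p was drawn uniformly from [0, 1], then each
    # ball was independently coloured red with probability p.
    # This gives Laplace's Rule of Succession.
    return (R + 1) / (R + W + 2)

def program_3(R, W):
    # Hypothesis (assumed here): each ball is independently red with
    # probability 1/2, so past draws tell us nothing about future draws.
    return 0.5
```

The second program reproduces the values 1/2, 2/3, 1/2, 2/5 that the walkthrough below attributes to program 2.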

These programs are correct so far as they go.  But unfortunately, probability theory does not operate on Python programs.  Probability theory is an algebra of uncertainty, a calculus of credibility, and Python programs are not allowed in the formulas.  It is like trying to add 3 to a toaster oven.

\n\n

To use these programs in the probability calculus, we must figure out how to convert a Python program into a more convenient mathematical object - say, a probability distribution.

\n\n

Suppose I want to know the combined probability that the sequence observed will be RWWRR, according to program 2 above.  Program 2 does not have a direct faculty for returning the joint or combined probability of a sequence, but it is easy to extract anyway.  First, I ask what probability program 2 assigns to observing R, given that no balls have been observed.  Program 2 replies \"1/2\".  Then I ask the probability that the next ball is R, given that one red ball has been observed; program 2 replies \"2/3\".  The second ball is actually white, which program 2 therefore gives probability 1 - 2/3 = 1/3, so the joint probability so far is 1/2 * 1/3 = 1/6.  Next I ask for the probability that the third ball is red, given that the previous observation is RW; this is summarized as \"one red and one white ball\", and the answer is 1/2.  The third ball is white, so the joint probability for RWW is 1/12.  For the fourth ball, given the previous observation RWW, the probability of redness is 2/5, and the joint probability goes to 1/30.  We can write this as p(RWWR|RWW) = 2/5, which means that if the sequence so far is RWW, the probability assigned by program 2 to the sequence continuing with R and forming RWWR equals 2/5.  And then p(RWWRR|RWWR) = 1/2, and the combined probability is 1/60.

\n\n

We can do this with every possible sequence of ten balls, and end up with a table of 1024 entries.  This table of 1024 entries constitutes a probability distribution over sequences of observations of length 10, and it says everything the Python program had to say (about 10 or fewer observations, anyway).  Suppose I have only this probability table, and I want to know the probability that the third ball is red, given that the first two balls drawn were white.  I need only sum over the probability of all entries beginning with WWR, and divide by the probability of all entries beginning with WW.
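
A sketch of that extraction, reusing the hypothetical program_2 from the sketch above: multiply the one-step predictions to get the probability of any particular sequence, tabulate all 2^10 = 1024 sequences of length ten, and recover a conditional probability by summing table entries and dividing.

```python
from itertools import product

def program_2(R, W):
    # Laplace's Rule of Succession, as sketched above.
    return (R + 1) / (R + W + 2)

def sequence_probability(seq, program):
    # Joint probability the program assigns to a sequence such as 'RWWRR'.
    p, R, W = 1.0, 0, 0
    for ball in seq:
        p_red = program(R, W)
        p *= p_red if ball == 'R' else (1 - p_red)
        R, W = R + (ball == 'R'), W + (ball == 'W')
    return p

print(sequence_probability('RWWRR', program_2))        # 1/60, as computed above

# The table of 1024 entries: one probability for each sequence of length 10.
table = {''.join(seq): sequence_probability(seq, program_2)
         for seq in product('RW', repeat=10)}

# P(third ball red | first two white), recovered from the table by summing
# the entries beginning with WWR and dividing by the entries beginning with WW.
p_wwr = sum(p for s, p in table.items() if s.startswith('WWR'))
p_ww = sum(p for s, p in table.items() if s.startswith('WW'))
print(p_wwr / p_ww)                                    # 0.25
```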

\n\n

We have thus transformed a program that computes the probability of future events given past experiences, into a probability distribution over sequences of observations.

\n\n

You wouldn't want to do this in real life, because the Python program is ever so much more compact than a table with 1024 entries.  The point is not that we can turn an efficient and compact computer program into a bigger and less efficient giant lookup table; the point is that we can view an inductive learner as a mathematical object, a distribution over sequences, which readily fits into standard probability calculus.  We can take a computer program that reasons from experience and think about it using probability theory.

\n\n

Why might this be convenient?  Say that I'm not sure which of these three scenarios best describes the urn - I think it's about equally likely that each of the three cases holds true.  How should I reason from my actual observations of the urn?  If you think about the problem from the perspective of constructing a computer program that imitates my inferences, it looks complicated - we have to juggle the relative probabilities of each hypothesis, and also the probabilities within each hypothesis.  If you think about it from the perspective of probability theory, the obvious thing to do is to add up all three distributions with weightings of 1/3 apiece, yielding a new distribution (which is in fact correct).  Then the task is just to turn this new distribution into a computer program, which turns out not to be difficult.
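
A sketch of that last step, reusing the hypothetical program_1, program_2, program_3 and sequence_probability from the sketches above: start each hypothesis at weight 1/3, reweight by how well it predicted the balls seen so far, and average the three predictions for the next ball.  Since all three hypotheses are exchangeable, the likelihood depends only on the counts R and W, so a canonical ordering of the observations suffices.

```python
def mixture(R, W):
    # Posterior weights over the three hypotheses, starting from 1/3 apiece,
    # then a weighted average of their predictions for the next ball.
    programs = [program_1, program_2, program_3]
    canonical = 'R' * R + 'W' * W
    weights = [sequence_probability(canonical, prog) / 3 for prog in programs]
    total = sum(weights)
    weights = [w / total for w in weights]
    return sum(w * prog(R, W) for w, prog in zip(weights, programs))

print(mixture(0, 0))   # 0.5: all three hypotheses agree before any draws
print(mixture(3, 0))   # about 0.62, between program 1's 2/7 and program 2's 4/5
```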

\n\n

So that is what a prior really is - a mathematical object that represents all of your starting information plus the way you learn from experience.

" } }, { "_id": "hcKCrYTW7Zbzmv29g", "title": "Marginally Zero-Sum Efforts", "pageUrl": "https://www.lesswrong.com/posts/hcKCrYTW7Zbzmv29g/marginally-zero-sum-efforts", "postedAt": "2007-04-11T05:22:57.000Z", "baseScore": 32, "voteCount": 26, "commentCount": 14, "url": null, "contents": { "documentId": "hcKCrYTW7Zbzmv29g", "html": "

Bostrom recently noted the problem of the commons in labeling efforts "important"; each managerial player has an incentive to label their project world-shakingly important, even though this devalues the priority label as used at other times or other projects, creating positive feedback in inflated labels.

\n\n

This reminds me of how my grandfather, a pioneer in quantitative genetics, regularly bemoans the need to write more and more grant proposals to maintain a constant level of funding.  It's not that the funding is drying up in his field.  But suppose there's money for 20 grants, and 21 scientists in need of grants - or one scientist who'd like to run two projects, or receive more funding for one project...  One scientist doesn't get his first grant proposal funded, so he writes another one.  His second grant proposal does get funded, which uses up a grant that could have gone to another scientist, who now also has his first grant proposal denied, and has to write and send off a second grant proposal too...

\n\n

The problem here is that, while some initial level of effort is beneficial, all effort beyond that is marginally zero-sum; there's a marginal return to the individual on additional efforts, but no marginal return to the group.  If there are 20 grants, then ultimately only 20 grant proposals are going to be funded.  No matter how many grant proposals anyone writes, the total funding available remains the same.  Everyone would be better off if everyone agreed to write only one grant proposal.  But in this case, there wouldn't be much competition for any given grant, and the rewards for writing another two or three grant proposals would be huge... until everyone else started doing the same thing.

There's no obvious limit to this process; the 21 scientists could write 1,000 grant proposals apiece, and still get only 20 grants between them.  They'd all be better off if they only wrote one grant proposal apiece; but anyone who cuts back unilaterally will be snowed under.

\n\n

In a way, this is even worse than the classic problem of the commons.  A common grazing field eventually gets eaten down to bedrock and the farmers find something else to do with their herds.  When professional efforts are marginally zero-sum, but yield positive returns to the individual, the resulting cycle of busy-work can expand to the limits of individual endurance.

\n\n

I've often suspected that a similar effect governs bureaucracies (both government and corporate); the longer you stay at your desk each day, the more you are perceived as a hard worker and get promoted.  But there's only a limited number of promotions to go around... and only a limited amount of genuinely important work to do.

\n\n

Social approbation is the usual method for dealing with non-positive-sum actions.  Theft has positive returns to the individual, but not positive returns to society, so we put thieves in jail.  But in this case, the social dilemma is that neither writing grant proposals, nor showing up at your office desk, is inherently an evil deed.  Some grant proposals do need to get written.  It's not inherently a zero-sum activity.  It's just marginally zero-sum beyond a certain point.

" } }, { "_id": "mZJs7FxxmhMvFxuse", "title": "Futuristic Predictions as Consumable Goods", "pageUrl": "https://www.lesswrong.com/posts/mZJs7FxxmhMvFxuse/futuristic-predictions-as-consumable-goods", "postedAt": "2007-04-10T00:18:17.000Z", "baseScore": 35, "voteCount": 28, "commentCount": 19, "url": null, "contents": { "documentId": "mZJs7FxxmhMvFxuse", "html": "

The Wikipedia entry on Friedman Units tracks over 30 different cases between 2003 and 2007 in which someone labeled the "next six months" as the "critical period in Iraq".  Apparently one of the worst offenders is journalist Thomas Friedman after whom the unit was named (8 different predictions in 4 years).  In similar news, some of my colleagues in Artificial Intelligence (you know who you are) have been predicting the spectacular success of their projects in "3-5 years" for as long as I've known them, that is, since at least 2000.

\n\n

Why do futurists make the same mistaken predictions over and over?  The same reason politicians abandon campaign promises and switch principles as expediency demands.  Predictions, like promises, are sold today and consumed today.  They produce a few chewy bites of delicious optimism or delicious horror, and then they're gone.  If the tastiest prediction is allegedly about a time interval "3-5 years in the future" (for AI projects) or "6 months in the future" (for Iraq), then futurists will produce tasty predictions of that kind.  They have no reason to change the formulation any more than Hershey has to change the composition of its chocolate bars.  People won't remember the prediction in 6 months or 3-5 years, any more than chocolate sits around in your stomach for a year and keeps you full.

\n\n

The futurists probably aren't even doing it deliberately; they themselves have long since digested their own predictions.  Can you remember what you had for breakfast on April 9th, 2006?  I bet you can't, and I bet you also can't remember what you predicted for "one year from now".

" } }, { "_id": "zGm9JoGZGXtF8zQ9P", "title": "Suggested Posts", "pageUrl": "https://www.lesswrong.com/posts/zGm9JoGZGXtF8zQ9P/suggested-posts", "postedAt": "2007-04-09T02:32:01.000Z", "baseScore": 5, "voteCount": 5, "commentCount": 16, "url": null, "contents": { "documentId": "zGm9JoGZGXtF8zQ9P", "html": "

Kaj Sotala asked:

I was wondering, is there an avenue for us non-contributor readers to raise questions we think would be interesting to discuss?

If you have a suggested Overcoming Bias topic you'd like to see discussed, post it in a comment here.  But please don't actually discuss the topic with further comments, just give us the suggestion.  This post is for topic suggestions, not topic discussions.

" } }, { "_id": "H59YqogX94z5jb8xx", "title": "\"Inductive Bias\"", "pageUrl": "https://www.lesswrong.com/posts/H59YqogX94z5jb8xx/inductive-bias", "postedAt": "2007-04-08T19:52:04.000Z", "baseScore": 39, "voteCount": 37, "commentCount": 24, "url": null, "contents": { "documentId": "H59YqogX94z5jb8xx", "html": "

(Part two in a series on "statistical bias", "inductive bias", and "cognitive bias".)\n\n

\n\n

Suppose that you see a swan for the first time, and it is white.  It does not follow logically that the next swan you see must be white, but white seems like a better guess than any other color.  A machine learning algorithm of the more rigid sort, if it sees a single white swan, may thereafter predict that any swan seen will be white.  But this, of course, does not follow logically - though AIs of this sort are often misnamed "logical".  For a purely logical reasoner to label the next swan white as a deductive conclusion, it would need an additional assumption:  "All swans are the same color."  This is a wonderful assumption to make if all swans are, in reality, the same color; otherwise, not so good.  Tom Mitchell's Machine Learning defines the inductive bias of a machine learning algorithm as the assumptions that must be added to the observed data to transform the algorithm's outputs into logical deductions.\n\n

\n\n

A more general view of inductive bias would identify it with a Bayesian's prior over sequences of observations...

Consider the case of an urn filled with red and white balls, from which we are to sample without replacement.  I might have prior information that the urn contains 5 red balls and 5 white balls.  Or, I might have prior information that a random number was selected from a uniform distribution between 0 and 1, and this number was then used as a fixed probability to independently generate a series of 10 balls.  In either case, I will estimate a 50% probability that the first ball is red, a 50% probability that the second ball is red, etc., which you might foolishly think indicated the same prior belief.  But, while the marginal probabilities on each round are equivalent, the probabilities over sequences are different.  In the first case, if I see 3 red balls initially, I will estimate a probability of 2/7 that the next ball will be red.  In the second case, if I see 3 red balls initially, I will estimate a 4/5 chance that the next ball will be red (by Laplace's Law of Succession, thus named because it was proved by Thomas Bayes).  In both cases we refine our future guesses based on past data, but in opposite directions, which demonstrates the importance of prior information.\n\n
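
A small check of those numbers, along the same lines as the sketch under \"Priors as Mathematical Objects\" above (the helper names are placeholders, not part of the post): the two priors agree on the marginal for any single round taken in isolation, assign different probabilities to the sequence RRR, and update in opposite directions afterward.

```python
def next_red_fixed_urn(R, W):
    # Prior 1: the urn is known to hold 5 red and 5 white balls.
    return (5 - R) / (10 - R - W)

def next_red_laplace(R, W):
    # Prior 2: a uniformly random p coloured each ball independently
    # (Laplace's Rule of Succession).
    return (R + 1) / (R + W + 2)

def p_sequence(seq, rule):
    # Probability of a specific sequence, from the one-step predictions.
    p, R, W = 1.0, 0, 0
    for ball in seq:
        q = rule(R, W)
        p *= q if ball == 'R' else (1 - q)
        R, W = R + (ball == 'R'), W + (ball == 'W')
    return p

print(p_sequence('RRR', next_red_fixed_urn), next_red_fixed_urn(3, 0))  # 1/12 and 2/7
print(p_sequence('RRR', next_red_laplace), next_red_laplace(3, 0))      # 1/4 and 4/5
```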

\n\n

Suppose that your prior information about the urn is that a monkey tosses balls into the urn, selecting red balls with 1/4 probability and white balls with 3/4 probability, each ball selected independently.  The urn contains 10 balls, and we sample without replacement.  (E. T. Jaynes called this the "binomial monkey prior".)  Now suppose that on the first three rounds, you see three red balls.  What is the probability of seeing a red ball on the fourth round?\n\n

\n\n

First, we calculate the prior probability that the monkey tossed 0 red balls and 10 white balls into the urn; then the prior probability that the monkey tossed 1 red ball and 9 white balls into the urn; and so on.  Then we take our evidence (three red balls, sampled without replacement) and calculate the likelihood of seeing that evidence, conditioned on each of the possible urn contents.  Then we update and normalize the posterior probability of the possible remaining urn contents.  Then we average over the probability of drawing a red ball from each possible urn, weighted by that urn's posterior probability.  And the answer is... (scribbles frantically for quite some time)... 1/4!\n\n

\n\n

Of course it's 1/4.  We specified that each ball was independently tossed into the urn, with a known 1/4 probability of being red.  Imagine that the monkey is tossing the balls to you, one by one; if it tosses you a red ball on one round, that doesn't change the probability that it tosses you a red ball on the next round.  When we withdraw one ball from the urn, it doesn't tell us anything about the other balls in the urn.\n\n
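
A sketch of the calculation described above, with a placeholder function name: a binomial (p = 1/4) prior over how many red balls the monkey put into the urn of ten, the likelihood of then drawing three red balls in a row without replacement, and the posterior-weighted probability that the fourth ball drawn is red.

```python
from math import comb

def p_fourth_red(n=10, p_red=0.25, reds_seen=3):
    posterior = {}
    for k in range(n + 1):                    # k = number of red balls in the urn
        prior = comb(n, k) * p_red**k * (1 - p_red)**(n - k)
        likelihood = 1.0                      # drawing `reds_seen` reds in a row
        for i in range(reds_seen):
            likelihood *= max(k - i, 0) / (n - i)
        posterior[k] = prior * likelihood
    total = sum(posterior.values())
    # Average the chance of a red fourth draw over the remaining urn contents.
    return sum((w / total) * (k - reds_seen) / (n - reds_seen)
               for k, w in posterior.items())

print(p_fourth_red())   # 0.25 (up to floating point), just as claimed above
```

Changing reds_seen does not move the answer away from 1/4, which is just the independence point made in the preceding paragraph.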

\n\n

If you start out with a maximum-entropy prior, then you never learn anything, ever, no matter how much evidence you observe.  You do not even learn anything wrong - you always remain as ignorant as you began.

The more inductive bias you have, the faster you learn to predict the future, but only if your inductive bias does in fact concentrate more probability into sequences of observations that actually occur.  If your inductive bias concentrates probability into sequences that don't occur, this diverts probability mass from sequences that do occur, and you will learn more slowly, or not learn at all, or even - if you are unlucky enough - learn in the wrong direction.

\n\n

Inductive biases can be probabilistically correct or probabilistically incorrect, and if they are correct, it is good to have as much of them as possible, and if they are incorrect, you are left worse off than if you had no inductive bias at all.  Which is to say that inductive biases are like any other kind of belief; the true ones are good for you, the bad ones are worse than nothing.  In contrast, statistical bias is always bad, period - you can trade it off against other ills, but it's never a good thing for itself.  Statistical bias is a systematic direction in errors; inductive bias is a systematic direction in belief revisions.

\n\n

As the example of maximum entropy demonstrates, without a direction to your belief revisions, you end up not revising your beliefs at all.  No future prediction based on past experience follows as a matter of strict logical deduction.  Which is to say:  All learning is induction, and all induction takes place through inductive bias.

\n\n

Why is inductive bias called "bias"?  Because it has systematic qualities, like a statistical bias?  Because it is a form of pre-evidential judgment, which resembles the word "prejudice", which resembles the political concept of bias?  Damned if I know, really - I'm not the one who decided to call it that.  Words are only words; that's why humanity invented mathematics.

" } }, { "_id": "XZWMeeqKmfMSPTLha", "title": "Debiasing as Non-Self-Destruction", "pageUrl": "https://www.lesswrong.com/posts/XZWMeeqKmfMSPTLha/debiasing-as-non-self-destruction", "postedAt": "2007-04-07T20:20:13.000Z", "baseScore": 46, "voteCount": 38, "commentCount": 21, "url": null, "contents": { "documentId": "XZWMeeqKmfMSPTLha", "html": "

Nick Bostrom asks:

One sign that science is not all bogus is that it enables us to do things, like go to the moon. What practical things does debiassing enable us to do, other than refraining from buying lottery tickets?

It seems to me that how to be smart varies widely between professions.  A hedge-fund trader, a research biologist, and a corporate CEO must learn different skill sets in order to be actively excellent - an apprenticeship in one would not serve for the other.

\n\n

Yet such concepts as "be willing to admit you lost", or "policy debates should not appear one-sided", or "plan to overcome your flaws instead of just confessing them", seem like they could apply to many professions.  And all this advice is not so much about how to be extraordinarily clever, as, rather, how to not be stupid.  Each profession has its own way to be clever, but their ways of not being stupid have much more in common.  And while victors may prefer to attribute victory to their own virtue, my small knowledge of history suggests that far more battles have been lost by stupidity than won by genius.

Debiasing is mostly not about how to be extraordinarily clever, but about how to not be stupid.  Its great successes are disasters that do not materialize, defeats that never happen, mistakes that no one sees because they are not made.  Often you can't even be sure that something would have gone wrong if you had not tried to debias yourself.  You don't always see the bullet that doesn't hit you.

\n\n

The great victories of debiasing are exactly the lottery tickets we didn't buy - the hopes and dreams we kept in the real world, instead of diverting them into infinitesimal probabilities.  The triumphs of debiasing are cults not joined; optimistic assumptions rejected during planning; time not wasted on blind alleys.  It is the art of non-self-destruction.

\n\n

Admittedly, none of this is spectacular enough to make the evening news.  It's not a moon landing - though the moon landing did surely require thousands of things to not go wrong.

\n\n

So how can we know that our debiasing efforts are genuinely useful?  Well, this is the worst sort of anecdotal evidence - but people do sometimes ignore my advice, and then, sometimes, catastrophe ensues of just the sort I told them to expect.  That is a very weak kind of confirmation, and I would like to see controlled studies... but most of the studies I've read consist of taking a few undergraduates who are in it for the course credit, merely telling them about the bias, and then waiting to see if they improve.  What we need is longitudinal studies of life outcomes, and I can think of few people I would name as candidates for the experimental group.

\n\n

The fact is, most people who take a halfhearted potshot at debiasing themselves do not get huge amounts of mileage out of it.  This is one of those things you have to work at for quite a while before you get good at it, especially since there's currently no source of systematic training, or even a decent manual.  If for many years you practice the techniques and submit yourself to strict constraints, it may be that you will glimpse the center.  But until then, mistakes avoided are often just replaced by other mistakes.  It takes time for your mind to become significantly quieter.  Indeed, a little knowledge of cognitive bias often does more harm than good.

\n\n

As for public proof, I can see at least three ways that it could come about.  First, there might be founded an Order of Bayescraft for people who are serious about it, and the graduates of these dojos might prove systematically more successful even after controlling for measures of fluid intelligence.  Second, you could wait for some individual or group, working on an important domain-specific problem but also known for their commitment to debiasing, to produce a spectacularly huge public success.  Third, there might be found techniques that can be taught easily and that have readily measurable results; and then simple controlled experiments could serve as public proof, at least for people who attend to Science.

" } }, { "_id": "AdYdLP2sRqPMoe8fb", "title": "Knowing About Biases Can Hurt People", "pageUrl": "https://www.lesswrong.com/posts/AdYdLP2sRqPMoe8fb/knowing-about-biases-can-hurt-people", "postedAt": "2007-04-04T18:01:50.000Z", "baseScore": 234, "voteCount": 204, "commentCount": 82, "url": null, "contents": { "documentId": "AdYdLP2sRqPMoe8fb", "html": "

Once upon a time I tried to tell my mother about the problem of expert calibration, saying: “So when an expert says they’re 99% confident, it only happens about 70% of the time.” Then there was a pause as, suddenly, I realized I was talking to my mother, and I hastily added: “Of course, you’ve got to make sure to apply that skepticism evenhandedly, including to yourself, rather than just using it to argue against anything you disagree with—”

And my mother said: “Are you kidding? This is great! I’m going to use it all the time!”

Taber and Lodge’s “Motivated Skepticism in the Evaluation of Political Beliefs” describes the confirmation of six predictions:

  1. Prior attitude effect. Subjects who feel strongly about an issue—even when encouraged to be objective—will evaluate supportive arguments more favorably than contrary arguments.
  2. Disconfirmation bias. Subjects will spend more time and cognitive resources denigrating contrary arguments than supportive arguments.
  3. Confirmation bias. Subjects free to choose their information sources will seek out supportive rather than contrary sources.
  4. Attitude polarization. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarization.
  5. Attitude strength effect. Subjects voicing stronger attitudes will be more prone to the above biases.
  6. Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases.

If you’re irrational to start with, having more knowledge can hurt you. For a true Bayesian, information would never have negative expected utility. But humans aren’t perfect Bayes-wielders; if we’re not careful, we can cut ourselves.

I’ve seen people severely messed up by their own knowledge of biases. They have more ammunition with which to argue against anything they don’t like. And that problem—too much ready ammunition—is one of the primary ways that people with high mental agility end up stupid, in Stanovich’s “dysrationalia” sense of stupidity.

You can think of people who fit this description, right? People with high g-factor who end up being less effective because they are too sophisticated as arguers? Do you think you’d be helping them—making them more effective rationalists—if you just told them about a list of classic biases?

I recall someone who learned about the calibration/overconfidence problem. Soon after he said: “Well, you can’t trust experts; they’re wrong so often—as experiments have shown. So therefore, when I predict the future, I prefer to assume that things will continue historically as they have—” and went off into this whole complex, error-prone, highly questionable extrapolation. Somehow, when it came to trusting his own preferred conclusions, all those biases and fallacies seemed much less salient—leapt much less readily to mind—than when he needed to counter-argue someone else.

I told the one about the problem of disconfirmation bias and sophisticated argument, and lo and behold, the next time I said something he didn’t like, he accused me of being a sophisticated arguer. He didn’t try to point out any particular sophisticated argument, any particular flaw—just shook his head and sighed sadly over how I was apparently using my own intelligence to defeat itself. He had acquired yet another Fully General Counterargument.

Even the notion of a “sophisticated arguer” can be deadly, if it leaps all too readily to mind when you encounter a seemingly intelligent person who says something you don’t like.

I endeavor to learn from my mistakes. The last time I gave a talk on heuristics and biases, I started out by introducing the general concept by way of the conjunction fallacy and representativeness heuristic. And then I moved on to confirmation bias, disconfirmation bias, sophisticated argument, motivated skepticism, and other attitude effects. I spent the next thirty minutes hammering on that theme, reintroducing it from as many different perspectives as I could.

I wanted to get my audience interested in the subject. Well, a simple description of conjunction fallacy and representativeness would suffice for that. But suppose they did get interested. Then what? The literature on bias is mostly cognitive psychology for cognitive psychology’s sake. I had to give my audience their dire warnings during that one lecture, or they probably wouldn’t hear them at all.

Whether I do it on paper, or in speech, I now try to never mention calibration and overconfidence unless I have first talked about disconfirmation bias, motivated skepticism, sophisticated arguers, and dysrationalia in the mentally agile. First, do no harm!

" } }, { "_id": "yxFkuyPANtL6GSwiC", "title": "The Majority Is Always Wrong", "pageUrl": "https://www.lesswrong.com/posts/yxFkuyPANtL6GSwiC/the-majority-is-always-wrong", "postedAt": "2007-04-03T01:12:23.000Z", "baseScore": 54, "voteCount": 41, "commentCount": 55, "url": null, "contents": { "documentId": "yxFkuyPANtL6GSwiC", "html": "

Today my coworker Marcello pointed out to me an interesting anti-majoritarian effect.  There are three major interpretations of probability: the "subjective" view of probabilities as measuring the uncertainty of agents, the "propensity" view of probabilities as chances inherent within objects, and the "frequentist" view of probabilities as the limiting value of long-run frequencies.  I was remarking on how odd it was that frequentism, the predominant view in mainstream statistics, is the worst of the three major alternatives (in my view, you have to presume either uncertainty or propensity in order to talk about the limiting frequency of events that have not yet happened).

And Marcello said something along the lines of, "Well, of course.  If anything were worse than frequentism, it wouldn't be there."  I said, "What?"  And Marcello said, "Like the saying that Mac users have, 'If Macs really were worse than Windows PCs, no one would use them.'"

At this point the light bulb went on over my head - a fluorescent light bulb - and I understood what Marcello was saying: an alternative to frequentism that was even worse than frequentism would have dropped off the radar screens long ago.  You can survive by being popular, or by being superior, but alternatives that are neither popular nor superior quickly go extinct.

I can personally testify that Dvorak seems to be much easier on the fingers than Qwerty - but this is not surprising, since if Dvorak really were inferior to Qwerty, it would soon cease to exist.  (Yes, I am familiar with the controversy in this area - bear in mind that this is a politically charged topic since it has been used to make accusations of market failure.  Nonetheless, my fingers now sweat less, my hands feel less tired, my carpal tunnel syndrome went away, and none of this is surprising because I can feel my fingers traveling shorter distances.)

In any case where you've got (1) a popularity effect (it's easier to use something other people are using) and (2) a most dominant alternative, plus a few smaller niche alternatives, then the most dominant alternative will probably be the worst of the lot - or at least strictly superior to none of the others.

Can anyone else think of examples from their experience where there are several major alternatives that you've heard of, and a popularity effect (which may be as simple as journal editors preferring well-known usages), and the most popular alternative seems to be noticeably the worst?


Addendum:  Metahacker said of this hypothesis, "It's wrong, but only sometimes."  Sounds about right to me.

" } }, { "_id": "zAvhTnQX6ynJF7pyh", "title": "The Error of Crowds", "pageUrl": "https://www.lesswrong.com/posts/zAvhTnQX6ynJF7pyh/the-error-of-crowds", "postedAt": "2007-04-01T21:50:02.000Z", "baseScore": 32, "voteCount": 24, "commentCount": 13, "url": null, "contents": { "documentId": "zAvhTnQX6ynJF7pyh", "html": "

I've always been annoyed at the notion that the bias-variance decomposition tells us something about modesty or Philosophical Majoritarianism.  For example, Scott Page rearranges the equation to get what he calls the Diversity Prediction Theorem:


Collective Error = Average Individual Error - Prediction Diversity


I think I've finally come up with a nice, mathematical way to drive a stake through the heart of that concept and bury it beneath a crossroads at midnight, though I fully expect that it shall someday rise again and shamble forth to eat the brains of the living.


Why should the bias-variance decomposition be relevant to modesty?  Because, it seems to show, the error of averaging all the estimates together, is lower than the typical error of an individual estimate.  Prediction Diversity (the variance) is positive when any disagreement exists at all, so Collective Error < Average Individual Error.  But then how can you justify keeping your own estimate, unless you know that you did better than average?  And how can you legitimately trust that belief, when studies show that everyone believes themselves to be above-average?  You should be more modest, and compromise a little.


So what's wrong with this picture?


To begin with, the bias-variance decomposition is a mathematical tautology.  It applies when we ask a group of experts to estimate the 2007 close of the NASDAQ index.  It would also apply if you weighed the experts on a pound scale and treated the results as estimates of the dollar cost of oil in 2020.
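
Here is a purely arithmetical check of that tautology, as a minimal Python sketch.  Every number below is invented, which is rather the point - the identity holds no matter what the "estimates" are estimates of:

    # Invented estimates of a quantity whose true value is 100.
    estimates = [80.0, 95.0, 105.0, 140.0]
    truth = 100.0

    collective = sum(estimates) / len(estimates)              # the crowd's average estimate
    collective_error = (collective - truth) ** 2              # squared error of the average
    avg_individual_error = sum((e - truth) ** 2 for e in estimates) / len(estimates)
    prediction_diversity = sum((e - collective) ** 2 for e in estimates) / len(estimates)

    # The identity holds exactly, whatever numbers you plug in:
    assert abs(collective_error - (avg_individual_error - prediction_diversity)) < 1e-9
    print(collective_error, avg_individual_error, prediction_diversity)   # 25.0, 512.5, 487.5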


As Einstein put it, \"Insofar as the expressions of mathematics refer to reality they are not certain, and insofar as they are certain they do not refer to reality.\"  The real modesty argument, Aumann's Agreement Theorem, has preconditions; AAT depends on agents computing their beliefs in a particular way.  AAT's conclusions can be false in any particular case, if the agents don't reason as Bayesians.


The bias-variance decomposition applies to the luminosity of fireflies treated as estimates, just as much as a group of expert opinions.  This tells you that you are not dealing with a causal description of how the world works - there are not necessarily any causal quantities, things-in-the-world, that correspond to \"collective error\" or \"prediction diversity\".  The bias-variance decomposition is not about modesty, communication, sharing of evidence, tolerating different opinions, humbling yourself, overconfidence, or group compromise.  It's an algebraic tautology that holds whenever its quantities are defined consistently, even if they refer to the silicon content of pebbles.


More importantly, the tautology depends on a particular definition of \"error\": error must go as the squared difference between the estimate and the true value.  By picking a different error function, just as plausible as the squared difference, you can conjure a diametrically opposed recommendation:


The professor cleared his throat.  \"All right,\" he said to the gathered students, \"you've each handed in your written estimates of the value of this expression here,\" and he gestured to a rather complex-looking string of symbols drawn on the blackboard.  \"Now it so happens,\" the professor continued, \"that this question contains a hidden gotcha.  All of you missed in the same direction - that is, you all underestimated or all overestimated the true value, but I won't tell you which.  Now, I'm going to take the square root of the amount by which you missed the correct answer, and subtract it from your grade on today's homework.  But before I do that, I'm going to give you a chance to revise your answers.  You can talk with each other and share your thoughts about the problem, if you like; or alternatively, you could stick your fingers in your ears and hum.  Which do you think is wiser?\"


Here we are taking the square root of the difference between the true value and the estimate, and calling this the error function, or loss function.  (It goes without saying that a student's utility is linear in their grade.)


And now, your expected utility is higher if you pick a random student's estimate than if you pick the average of the class!  The students would do worse, on average, by averaging their estimates together!  And this again is tautologously true, by Jensen's Inequality.
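
Here is a minimal Monte Carlo sketch of the professor's dilemma.  Everything about the setup is invented - the true value, the distribution of misses, the class size - except the one assumption the professor supplies, that every student misses in the same direction:

    import random

    random.seed(0)

    def sqrt_loss(estimate, truth=0.0):
        # The professor's penalty: square root of the amount by which you missed.
        return abs(estimate - truth) ** 0.5

    trials = 100_000
    loss_random_student = 0.0
    loss_of_average = 0.0
    for _ in range(trials):
        # 20 students, all overestimating a true value of 0 by some positive amount.
        answers = [random.uniform(0.0, 10.0) for _ in range(20)]
        loss_random_student += sqrt_loss(random.choice(answers)) / trials
        loss_of_average += sqrt_loss(sum(answers) / len(answers)) / trials

    # With this concave loss, a randomly chosen student loses less, on average,
    # than the class average does (roughly 2.1 versus 2.2 here).
    print(loss_random_student, loss_of_average)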


A brief explanation of Jensen's Inequality:


(I strongly recommend looking at this graph while reading the following.)


Jensen's Inequality says that if X is a probabilistic variable, F(X) is a function of X, and E[expr] stands for the probabilistic expectation of expr, then:


E[F(X)] <= F(E[X]) if F is concave (second derivative negative)
E[F(X)] >= F(E[X]) if F is convex (second derivative positive)


Why?  Well, think of two values, x1 and x2.  Suppose F is convex - the second derivative is positive, \"the cup holds water\".  Now imagine that we draw a line between x=x1, y=F(x1) and x=x2, y=F(x2).  Pick a point halfway along this line.  At the halfway point, x will equal (x1 + x2)/2, and y will equal (F(x1)+F(x2))/2.  Now draw a vertical line from this halfway point to the curve - the intersection will be at x=(x1 + x2)/2, y=F((x1 + x2)/2).  Since the cup holds water, the chord between two points on the curve is above the curve, and we draw the vertical line downward to intersect the curve.  Thus F((x1 + x2)/2) < (F(x1) + F(x2))/2.  In other words, the F of the average is less than the average of the Fs.
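
A two-point check of that picture, with arbitrary numbers, just to watch the inequality flip with the curvature:

    x1, x2 = 1.0, 9.0
    midpoint = (x1 + x2) / 2                                  # 5.0

    # Convex F(x) = x^2: F of the average is below the average of the F's.
    assert midpoint ** 2 < (x1 ** 2 + x2 ** 2) / 2            # 25 < 41

    # Concave F(x) = sqrt(x): F of the average is above the average of the F's.
    assert midpoint ** 0.5 > (x1 ** 0.5 + x2 ** 0.5) / 2      # 2.236... > 2.0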


So:


If you define the error as the squared difference, F(x) = x^2 is a convex function, with positive second derivative, and by Jensen's Inequality, the error of the average - F(E[X]) - is less than the average of the errors - E[F(X)].  So, amazingly enough, if you square the differences, the students can do better on average by averaging their estimates.  What a surprise.


But in the example above, I defined the error as the square root of the difference, which is a concave function with a negative second derivative.  Poof, by Jensen's Inequality, the average error became less than the error of the average.  (Actually, I also needed the professor to tell the students that they all erred in the same direction - otherwise, there would be a cusp at zero, and the curve would hold water.  The real-world equivalent of this condition is that you think the directional or collective bias is a larger component of the error than individual variance.)


If, in the above dilemma, you think the students would still be wise to share their thoughts with each other, and talk over the math puzzle - I certainly think so - then your belief in the usefulness of conversation has nothing to do with a tautology defined over an error function that happens, in the case of squared error, to be convex.  And it follows that you must think the process of sharing thoughts, of arguing differences, is not like averaging your opinions together; or that sticking to your opinion is not like being a random member of the group.  Otherwise, you would stuff your fingers in your ears and hum when the problem had a concave error function.


When a line of reasoning starts assigning negative expected utilities to knowledge - offers to pay to avoid true information - I usually consider that a reductio.

" } }, { "_id": "Wwq6WFpx9HyzwgCKx", "title": "Useful Statistical Biases", "pageUrl": "https://www.lesswrong.com/posts/Wwq6WFpx9HyzwgCKx/useful-statistical-biases", "postedAt": "2007-04-01T04:51:15.000Z", "baseScore": 19, "voteCount": 14, "commentCount": 4, "url": null, "contents": { "documentId": "Wwq6WFpx9HyzwgCKx", "html": "

Friday's post on statistical bias and the bias-variance decomposition discussed how the expected squared error of an estimator equals the squared directional error (the bias term) of the estimator plus the variance of the estimator.  All else being equal, bias is bad - you want to get rid of it.  But all else is not always equal.  Sometimes, by accepting a small amount of bias in your estimator, you can eliminate a large amount of variance.  This is known as the "bias-variance tradeoff".

A linear regression tries to estimate a quantity by attaching weights to various signals associated with that quantity - for example, you could try to predict the gas mileage of a car using the car's mass and engine capacity.

A regularized linear regression tries to attach smaller variable weights, while still matching the data fairly well.  A regularized regression may generalize to unseen data better than an unregularized regression - often quite a lot better.  Assigning smaller variable weights is akin to finding a simpler explanation that fits the data almost as well.  This drive for simplicity makes the regressor less sensitive to small random wobbles in the data, so it has lower variance: if you ran the regressor over different data samples, the estimates would look more similar to each other.

But the same regularization procedure also causes the estimator to ignore some actual data - and this is a systematic error, one that would recur in the same direction if we repeated the experiment many times.  The randomness goes in both directions, so by ignoring the noise in the data, you decrease your variance.  But the real evidence goes in one direction, so if you ignore some real evidence in the process of ignoring noise - because you don't know which is which - then you end up with a directional error, an error that trends in the same direction when you repeat the experiment many times.

In statistics this is known as the bias-variance tradeoff.  When your data is limited, it may be better to use a simplifying estimator that doesn't try to fit every tiny squiggle of the data, and this trades off a lot of variance against a little bias.
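
Here is a minimal numpy sketch of that tradeoff, using ridge regression (a standard regularized linear regression) on invented data.  The true weights, the noise level, and the penalty strength are all arbitrary assumptions made purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0, 0.5])     # invented "true" weights

    def fit(X, y, lam):
        # Ridge regression in closed form: (X'X + lam*I)^-1 X'y; lam = 0 is ordinary least squares.
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    def bias_and_variance(lam, reps=2000, n=20):
        estimates = []
        for _ in range(reps):
            X = rng.normal(size=(n, 3))
            y = X @ true_w + rng.normal(scale=3.0, size=n)   # noisy observations
            estimates.append(fit(X, y, lam))
        estimates = np.array(estimates)
        bias_sq = float(np.sum((estimates.mean(axis=0) - true_w) ** 2))
        variance = float(np.sum(estimates.var(axis=0)))
        return bias_sq, variance

    print(bias_and_variance(lam=0.0))   # unregularized: essentially zero bias, larger variance
    print(bias_and_variance(lam=5.0))   # regularized: a little bias toward zero, noticeably less variance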


An "unbiased estimator" is one whose expected result equals the correct result, although it may have wide random swings in either direction.  This is good if you are allowed to repeat the experiment as often as you like, because you can average together the estimates and get the correct answer to arbitrarily fine precision.  That's the law of large numbers.


You might have the following bright idea - why not use an unbiased estimator, like an unregularized regression, to guess the bias of a regularized regression?  Then you could just subtract out the systematic bias - you could have low bias and low variance.  The problem with this, you see, is that while it may be easy to find an unbiased estimator of the bias, this estimate may have very large variance - so if you subtract out an estimate of the systematic bias, you may end up subtracting out way too much, or even subtracting in the wrong direction a fair fraction of the time.  In statistics, "unbiased" is not the same as "good", unless the estimator also has low variance.
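
A toy sketch of why the bright idea fails, with everything invented: a shrunk-toward-zero estimate of a mean stands in for the regularized regression, and a second noisy sample is used to estimate its bias:

    import random, statistics

    random.seed(1)
    mu = 4.0            # invented true mean
    shrink = 0.5        # a deliberately biased estimator: half the sample mean

    shrunk, corrected = [], []
    for _ in range(20_000):
        sample = [random.gauss(mu, 20.0) for _ in range(5)]
        estimate = shrink * statistics.mean(sample)                 # biased toward zero
        # An unbiased - but very noisy - estimate of that bias, from a second small sample:
        probe = [random.gauss(mu, 20.0) for _ in range(5)]
        bias_estimate = (shrink - 1.0) * statistics.mean(probe)
        shrunk.append(estimate)
        corrected.append(estimate - bias_estimate)

    # The corrected estimator is unbiased, but the noisy correction inflates its
    # variance so much that its overall squared error ends up worse than before.
    for name, xs in (("shrunk", shrunk), ("bias-corrected", corrected)):
        mse = statistics.mean((x - mu) ** 2 for x in xs)
        print(name, round(statistics.mean(xs), 2), round(mse, 1))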


When you hear that a classroom gave an average estimate of 871 beans for a jar that contained 850 beans, and that only one individual student did better than the crowd, the astounding notion is not that the crowd can be more accurate than the individual.  The astounding notion is that human beings are unbiased estimators of beans in a jar, having no significant directional error on the problem, yet with large variance.  It implies that we tend to get the answer wrong but there's no systematic reason why.  It requires that there be lots of errors that vary from individual to individual - and this is reliably true, enough so to keep most individuals from guessing the jar correctly.  And yet there are no directional errors that everyone makes, or if there are, they cancel out very precisely in the average case, despite the large individual variations.  Which is just plain odd.  I find myself somewhat suspicious of the claim, and wonder whether other experiments that found less amazing accuracy were not as popularly reported.

Someone is bound to suggest that cognitive biases are useful, in the sense that they represent a bias-variance tradeoff.  I think this is just mixing up words - just because the word "bias" is used by two different fields doesn't mean it has the same technical definition.  When we accept a statistical bias in trade, we can't get strong information about the direction and magnitude of the bias - otherwise we would just subtract it out.  We may be able to get an unbiased estimate of the bias, but "unbiased" is not the same as "reliable"; if the variance is huge, we really have very little information.

Now with cognitive biases, we do have some idea of the direction of the systematic error, and the whole notion of "overcoming bias" is about trying to subtract it out.  Once again, we see that cognitive biases are lemons, not lemonade.  To the extent we can get strong information - e.g. from cognitive psychology experiments - about the direction and magnitude of a systematic cognitive error, we can do systematically better by trying to compensate.

" } }, { "_id": "DbQkkgfq6fHRxmdGP", "title": "\"Statistical Bias\"", "pageUrl": "https://www.lesswrong.com/posts/DbQkkgfq6fHRxmdGP/statistical-bias", "postedAt": "2007-03-30T18:55:51.000Z", "baseScore": 22, "voteCount": 18, "commentCount": 8, "url": null, "contents": { "documentId": "DbQkkgfq6fHRxmdGP", "html": "

(Part one in a series on "statistical bias", "inductive bias", and "cognitive bias".)


"Bias" as used in the field of statistics refers to directional error in an estimator.  Statistical bias is error you cannot correct by repeating the experiment many times and averaging together the results.\n\n

\n\n

The famous bias-variance decomposition states that the expected squared error is equal to the square of the directional error (the bias) plus the expected squared random error (the variance).  The law of large numbers says that you can reduce variance, not bias, by repeating the experiment many times and averaging the results.

An experiment has some randomness in it, so if you repeat the experiment many times, you may get slightly different data each time; and if you run a statistical estimator over the data, you may get a slightly different estimate each time.  In classical statistics, we regard the true value of the parameter as a constant, and the experimental estimate as a probabilistic variable.  The bias is the systematic, or average, difference between these two values; the variance is the leftover probabilistic component.

Let's say you have a repeatable experiment intended to estimate, for example, the height of the Emperor of China.  In fact, the Emperor's height is 200 cm.  Suppose that every single American believes, without variation, that the Emperor's height is 180 cm.  Then if you poll a random American and ask "How tall is the Emperor of China?", the answer is always "180 cm", the error is always -20 cm, and the squared error is always 400 (I shall omit the units on squared errors).  But now suppose that Americans have normally distributed beliefs about the Emperor's height, with mean belief 180 cm, and standard deviation 10 cm.  You conduct two independent repetitions of the poll, and one American says "190 cm", and the other says "170 cm", with errors respectively of -10 cm and -30 cm, and squared errors of 100 and 900.  The average error is -20 cm, as before, but the average squared error is (100 + 900) / 2 = 500.  So even though the average (directional) error didn't change as the result of adding noise to the experiments, the average squared error went up.

Although in one case the random perturbation of the answer happened to lead the American in the correct direction - the one who answered 190 cm, which is closer to the true value of 200 cm - the other American was led further away from the answer, replying 170 cm.  Since these are equal deviations, the average answer did not change.  But since the square increases faster than linear, the larger error corresponded to a still larger squared error, and the average squared error went up.

Furthermore, the new average squared error of 500 equals exactly the square of the directional error (-20 cm) plus the square of the random error (standard deviation of 10 cm): 400 + 100 = 500.

In the long run, the above result is universal and exact:  If the true value is constant X and the estimator is Y, then E[(X - Y)^2] = (X - E[Y])^2 + E[(E[Y] - Y)^2].  Expected squared error = squared bias + variance of the estimator.  This is the bias-variance decomposition.
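
A quick Monte Carlo check of the decomposition, using the post's own numbers (true height 200 cm, beliefs normally distributed with mean 180 cm and standard deviation 10 cm):

    import random, statistics

    random.seed(0)
    true_height = 200.0
    polls = [random.gauss(180.0, 10.0) for _ in range(200_000)]

    expected_sq_error = statistics.mean((true_height - y) ** 2 for y in polls)
    bias_sq = (true_height - statistics.mean(polls)) ** 2
    variance = statistics.pvariance(polls)

    # Roughly 500 = 400 + 100, up to sampling noise:
    print(round(expected_sq_error), round(bias_sq), round(variance))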


If we averaged together the two Americans above, we would get an average estimate of 180 cm, with a squared error of 400, which is less than the average squared error of the two experiments taken individually (500), but still erroneous.

If the true value is constant X and the estimator is Y, then by averaging many estimates together we converge toward the expected value of Y, E[Y], by the law of large numbers, and if we subtract this from X, we are left with a squared error of (X - E[Y])^2, which is the bias term of the bias-variance decomposition.  If your estimator is all over the map and highly sensitive to noise in the experiment, then by repeating the experiment many times you can get the expected value of your estimator, and so you are left with only the systematic error of that estimator, and not the random noise in the estimator that varies from experiment to experiment.  That's what the law of large numbers is good for.

" } }, { "_id": "gWGA8Da539EQmAR9F", "title": "Tsuyoku vs. the Egalitarian Instinct", "pageUrl": "https://www.lesswrong.com/posts/gWGA8Da539EQmAR9F/tsuyoku-vs-the-egalitarian-instinct", "postedAt": "2007-03-28T17:49:33.000Z", "baseScore": 96, "voteCount": 106, "commentCount": 35, "url": null, "contents": { "documentId": "gWGA8Da539EQmAR9F", "html": "\n\n\n\n \n\n \n\n

Hunter-gatherer tribes are usually highly egalitarian (at least if you’re male)—the all-powerful tribal chieftain is found mostly in agricultural societies, rarely in the ancestral environment. Among most hunter-gatherer tribes, a hunter who brings in a spectacular kill will carefully downplay the accomplishment to avoid envy.


Maybe, if you start out below average, you can improve yourself without daring to pull ahead of the crowd. But sooner or later, if you aim to do the best you can, you will set your aim above the average.


If you can’t admit to yourself that you’ve done better than others—or if you’re ashamed of wanting to do better than others—then the median will forever be your concrete wall, the place where you stop moving forward. And what about people who are below average? Do you dare say you intend to do better than them? How prideful of you!


Maybe it’s not healthy to pride yourself on doing better than someone else. Personally I’ve found it to be a useful motivator, despite my principles, and I’ll take all the useful motivation I can get. Maybe that kind of competition is a zero-sum game, but then so is Go; it doesn’t mean we should abolish that human activity, if people find it fun and it leads somewhere interesting.


But in any case, surely it isn’t healthy to be ashamed of doing better.


And besides, life is not graded on a curve. The will to transcendence has no point beyond which it ceases and becomes the will to do worse; and the race that has no finish line also has no gold or silver medals. Just run as fast as you can, without worrying that you might pull ahead of other runners. (But be warned: If you refuse to worry about that possibility, someday you may pull ahead. If you ignore the consequences, they may happen to you.)


Sooner or later, if your path leads true, you will set out to mitigate a flaw that most people have not mitigated. Sooner or later, if your efforts bring forth any fruit, you will find yourself with fewer sins to confess.


Perhaps you will find it the course of wisdom to downplay the accomplishment, even if you succeed. People may forgive a touchdown, but not dancing in the end zone. You will certainly find it quicker, easier, more convenient to publicly disclaim your worthiness, to pretend that you are just as much a sinner as everyone else. Just so long, of course, as everyone knows it isn’t true. It can be fun to proudly display your modesty, so long as everyone knows how very much you have to be modest about.


But do not let that be the endpoint of your journeys. Even if you only whisper it to yourself, whisper it still: Tsuyoku, tsuyoku! Stronger, stronger!


And then set yourself a higher target. That’s the true meaning of the realization that you are still flawed (though a little less so). It means always reaching higher, without shame.


Tsuyoku naritai! I’ll always run as fast as I can, even if I pull ahead, I’ll keep on running; and someone, someday, will surpass me; but even though I fall behind, I’ll always run as fast as I can.

\n\n" } }, { "_id": "DoLQN5ryZ9XkZjq5h", "title": "Tsuyoku Naritai! (I Want To Become Stronger)", "pageUrl": "https://www.lesswrong.com/posts/DoLQN5ryZ9XkZjq5h/tsuyoku-naritai-i-want-to-become-stronger", "postedAt": "2007-03-27T17:49:33.000Z", "baseScore": 349, "voteCount": 323, "commentCount": 84, "url": null, "contents": { "documentId": "DoLQN5ryZ9XkZjq5h", "html": "\n\n\n\n \n\n \n\n

In Orthodox Judaism there is a saying: “The previous generation is to the next one as angels are to men; the next generation is to the previous one as donkeys are to men.” This follows from the Orthodox Jewish belief that all Judaic law was given to Moses by God at Mount Sinai. After all, it’s not as if you could do an experiment to gain new halachic knowledge; the only way you can know is if someone tells you (who heard it from someone else, who heard it from God). Since there is no new source of information, it can only be degraded in transmission from generation to generation.


Thus, modern rabbis are not allowed to overrule ancient rabbis. Crawly things are ordinarily unkosher, but it is permissible to eat a worm found in an apple—the ancient rabbis believed the worm was spontaneously generated inside the apple, and therefore was part of the apple. A modern rabbi cannot say, “Yeah, well, the ancient rabbis knew diddly-squat about biology. Overruled!” A modern rabbi cannot possibly know a halachic principle the ancient rabbis did not, because how could the ancient rabbis have passed down the answer from Mount Sinai to him? Knowledge derives from authority, and therefore is only ever lost, not gained, as time passes.


When I was first exposed to the angels-and-donkeys proverb in (religious) elementary school, I was not old enough to be a full-blown atheist, but I still thought to myself: “Torah loses knowledge in every generation. Science gains knowledge with every generation. No matter where they started out, sooner or later science must surpass Torah.”


The most important thing is that there should be progress. So long as you keep moving forward you will reach your destination; but if you stop moving you will never reach it.


Tsuyoku naritai is Japanese. Tsuyoku is “strong”; naru is “becoming,” and the form naritai is “want to become.” Together it means, “I want to become stronger,” and it expresses a sentiment embodied more intensely in Japanese works than in any Western literature I’ve read. You might say it when expressing your determination to become a professional Go player—or after you lose an important match, but you haven’t given up—or after you win an important match, but you’re not a ninth-dan player yet—or after you’ve become the greatest Go player of all time, but you still think you can do better. That is tsuyoku naritai, the will to transcendence.


Each year on Yom Kippur, an Orthodox Jew recites a litany which begins Ashamnu, bagadnu, gazalnu, dibarnu dofi, and goes on through the entire Hebrew alphabet: We have acted shamefully, we have betrayed, we have stolen, we have slandered . . .


As you pronounce each word, you strike yourself over the heart in penitence. There’s no exemption whereby, if you manage to go without stealing all year long, you can skip the word gazalnu and strike yourself one less time. That would violate the community spirit of Yom Kippur, which is about confessing sins—not avoiding sins so that you have less to confess.


By the same token, the Ashamnu does not end, “But that was this year, and next year I will do better.”


The Ashamnu bears a remarkable resemblance to the notion that the way of rationality is to beat your fist against your heart and say, “We are all biased, we are all irrational, we are not fully informed, we are overconfident, we are poorly calibrated . . .”


Fine. Now tell me how you plan to become less biased, less irrational, more informed, less overconfident, better calibrated.


There is an old Jewish joke: During Yom Kippur, the rabbi is seized by a sudden wave of guilt, and prostrates himself and cries, “God, I am nothing before you!” The cantor is likewise seized by guilt, and cries, “God, I am nothing before you!” Seeing this, the janitor at the back of the synagogue prostrates himself and cries, “God, I am nothing before you!” And the rabbi nudges the cantor and whispers, “Look who thinks he’s nothing.”


Take no pride in your confession that you too are biased; do not glory in your self-awareness of your flaws. This is akin to the principle of not taking pride in confessing your ignorance; for if your ignorance is a source of pride to you, you may become loath to relinquish your ignorance when evidence comes knocking. Likewise with our flaws—we should not gloat over how self-aware we are for confessing them; the occasion for rejoicing is when we have a little less to confess.


Otherwise, when the one comes to us with a plan for correcting the bias, we will snarl, “Do you think to set yourself above us?” We will shake our heads sadly and say, “You must not be very self-aware.”


Never confess to me that you are just as flawed as I am unless you can tell me what you plan to do about it. Afterward you will still have plenty of flaws left, but that’s not the point; the important thing is to do better, to keep moving ahead, to take one more step forward. Tsuyoku naritai!

\n\n" } }, { "_id": "JoERzF8ePGr4zP9vv", "title": "Self-deception: Hypocrisy or Akrasia?", "pageUrl": "https://www.lesswrong.com/posts/JoERzF8ePGr4zP9vv/self-deception-hypocrisy-or-akrasia", "postedAt": "2007-03-26T17:03:55.000Z", "baseScore": 68, "voteCount": 51, "commentCount": 21, "url": null, "contents": { "documentId": "JoERzF8ePGr4zP9vv", "html": "

What are we to think when someone says with their lips that they desire truth, but by their other cognitive deeds choose comfortable illusions over reality (or comfortable cynicism over reality)?

Robin Hanson has labeled such individuals hypocrites.  In the traditional sense of the term, a hypocrite is a moral liar: someone who professes a morality which they do not, themselves, believe.  On the other hand, we don't always live up to the goals we set for ourselves.  If I really believe that I ought to exercise at least 3 times per week, but I don't always do so, am I properly termed a "hypocrite"?  The term akrasia, meaning "weakness of will" or "failure of self-control", seems more appropriate.  Even if I tell all my friends that they ought to exercise 3 times per week, that doesn't necessarily make me a hypocrite.  It's good advice.  (Now, if I claimed to always exercise 3 times per week, knowing that this claim was false, that would be dishonest.)

Accusations of hypocrisy garner a lot more attention than accusations of akrasia - because hypocrisy is a deliberate transgression.  It is tempting to say "hypocrisy" when you really mean "akrasia", because you'll get more attention, but that can cause damage to innocent bystanders.  In akrasia, your transgression is your failure of will - it's fine that you advocate going to the gym more often, you just need to live up to the principle yourself.  In hypocrisy, the transgression is claiming to care: you have no right to publicly advocate the moral principle, because (the accuser says) you don't believe in it yourself.

Will Wilkinson asked Hanson:  "Would it be a kind of victory if people who now say that they care about truth, but who really don't, started admitting that they really don't?"

But much more importantly: who says that people who claim to care about truth, and then deceive themselves, "really don't care" about the truth?  Why not say that they really care about the truth (as is right and proper), but they aren't living up to their own morals?

It may be standard practice in economics to deduce "preferences" from actions rather than declarations, but that's because you're trying to predict, in a scientific sense, what the subject will do next - trying to build good economic models.  Moral philosophy is a different bag o' worms.  At the very least, it is a controversial step in moral reasoning to decide that people's emotional impulses and subconscious pressures, rather than their declarative moral reasoning processes and the words that issue from their lips, constitute their "real selves".  We should then call akrasia, not weakness of will, but strength of will.

To put the dilemma more sharply:  The one comes before you and pleads, "I know that I have many times been guilty of self-deception.  I have bought lottery tickets, I have overestimated my driving skills, I have planned optimistically, I have refused to confront contradictory evidence.  I am weak.  And yet I desire to do better.  Will you help me?"

So that is words issuing from the lips, which say one thing.  And it may be that the one has committed other deeds which say something else.  Who is the real person?  Does that question have an answer, or only a definition?

I do not frame an answer.  It is only needful for me to know that something has asked for my help.  There is something here that can ally to me, in our quest for truth - whether or not you call it the "real self".  Whether or not, for that matter, you call me my "real self".  If the word "I", when I use it, does not refer to the cognitive pattern that authors these words on your computer screen, what does it refer to?  And if the words that issue from some other's lips should declare me to be a ghost, then I will seek out my fellow truthseeking ghosts, and have company in my phantom quest.

" } }, { "_id": "7khK4DShZBR8gfyHv", "title": "Chronophone Motivations", "pageUrl": "https://www.lesswrong.com/posts/7khK4DShZBR8gfyHv/chronophone-motivations", "postedAt": "2007-03-24T17:23:20.000Z", "baseScore": 61, "voteCount": 46, "commentCount": 17, "url": null, "contents": { "documentId": "7khK4DShZBR8gfyHv", "html": "

Followup to:  Archimedes's Chronophone.


Suppose you could send messages back in time to Archimedes of Syracuse, using a chronophone which - to avoid transmitting anachronistic information - transmits the results of executing cognitive strategies, rather than words.  If you say \"Women should have the vote\", it comes out as \"Install a tyrant of great personal virtue\", because you repeated what your culture considers a wise form of political arrangement, and what comes out of the chronophone is the result of executing the same cognitive policy in Archimedes's era.


The chronophone won't transmit arguments you rationalize using your home culture's foreknowledge of the desired conclusion - it will substitute the result of executing that cognitive policy using Archimedes's culture's belief as the intended conclusion.  A basic principle of the chronophone is that if you say something considered obvious in your home culture, it comes out as something considered obvious in Archimedes's culture.


The challenge was to say something useful under this restriction.  This challenge is supposed to be difficult.  It's really hard to get somewhere when you don't already know your destination.  If there were some simple cognitive policy you could follow to spark moral and technological revolutions, without your home culture having advance knowledge of the destination, you could execute that cognitive policy today - which is what the whole parable is about!


A surprising number of respondents seemed to completely miss the point of the chronophone, just thinking up things they would like to say directly to Archimedes.  The classic question of \"If you went back in time, how would you start up an industrial civilization?\" has been done many times in science fiction (Lord Kalvan of Otherwhen, The Cross-Time Engineer).  There are thousands of things we'd like to say to the Past.  The difficult part of the question is:  How do you get it to come out of the chronophone?


Ger suggested teaching Archimedes decimal notation.  Well, if you speak decimal notation - our home culture's standard representation of numbers - into the chronophone, then the chronophone outputs the standard representation of numbers used in Syracuse.  To get a culturally nonobvious output, you need a culturally nonobvious input.  Place notation is revolutionary because it makes it easier for ordinary people, not just trained accountants, to manipulate large numbers.  Maybe an equivalent new idea in our own era would be Python, which makes it easier for novices to program computers - or a mathematician trying to standardize on category theory instead of set theory as a foundation for mathematics.  Coming up with that chronophone input suggests that maybe we should pay more attention, in this era, to Python or category theory!  A new representation that makes math easier can add up to a lot of benefit over time.


Hertzlinger remarked:  \"Some of Archimedes's most potentially-important research involved things he regarded as trivial toys. So if we advise him to get interested in Rubik's cube...\"  Of course you cannot directly describe a Rubik's Cube into the chronophone.  So I asked what corresponding input Hertzlinger would say into the chronophone - has Hertzlinger followed the cognitive policy of playing with toy ideas?  Maybe if this would have been such a good policy for Archimedes to follow, we should follow it ourselves.


Robin Hanson proposed an (admittedly clever) meta-trick for fine-tuning the chronophone's output.  If that worked, Robin wanted to suggest trying to make useful devices that make money, and creating a tradition of this activity.  I asked Robin if he'd ever tried to make such useful devices himself - if this is so important to human progress, why isn't Robin doing it?  Perhaps Robin could reply that we've already gotten a huge amount of progress out of inventing gadgets, so now this no longer offers the greatest marginal returns.  But that, in turn, points up one of the essential difficulties of the challenge.  In this era it is culturally obvious - a non-surprising idea - that money-making new technologies benefit humanity.  What could you say into the chronophone that would correspond to the nonobviousness of that idea in Archimedes's era?  I don't know if it's important enough to qualify, but, for example, Robin's thoughts about prediction markets are not considered obvious in modern culture.  That makes them a better bet for chronophone input than if Robin were to describe his efforts to invent a fancy new gadget.  Everyone's doing that these days; it would probably come out of the chronophone as a suggestion to become a great warrior.


Richard Hamming used to ask his fellow researchers two questions:  \"What are the most important problems of your field?\" and \"Why aren't you working on them?\"


What kind of ideas have provided the greatest benefit to humanity?  Why aren't you thinking them?


Most of what we desperately want to say to Archimedes is not obvious relative to Archimedes's culture.  This strongly suggests that the most important things the Future would want to say to us are, amazingly enough, not things that everyone already knows.  If you want to really benefit humanity, you've got to do some original thinking - come up with the sort of nonobvious idea that you would speak into a chronophone.  And you have to do some hard thinking about areas of application, directions of effort.  You can't just run off in the direction of what your contemporary culture has instilled as the reflex answer to the question \"How can I benefit humanity?\"  In those orchards the low-hanging fruit is gone.


The point of the chronophone dilemma is to make us think about what kind of cognitive policies are good to follow when you don't know your destination in advance.  If you can just tell Archimedes to build a capitalist society because your culture already knows this is a good idea, it defeats the purpose of the dilemma.  The chronophone transmits cognitive policies, not sentences.  What sort of thinking are we doing now that is analogous to the kind of thinking we wish Archimedes had done then?

" } }, { "_id": "cKrgy7hLdszkse2pq", "title": "Archimedes's Chronophone", "pageUrl": "https://www.lesswrong.com/posts/cKrgy7hLdszkse2pq/archimedes-s-chronophone", "postedAt": "2007-03-23T17:43:19.000Z", "baseScore": 58, "voteCount": 55, "commentCount": 94, "url": null, "contents": { "documentId": "cKrgy7hLdszkse2pq", "html": "

Think of how many generations of humanity would have benefited if certain ideas had been invented sooner, rather than later - if the Greeks had invented science - if the Romans had possessed printing presses - if Western civilization had turned against slavery in the thirteenth century.


Archimedes of Syracuse was the greatest mathematician and engineer of the ancient world.  Imagine that Archimedes invented a temporal telephone (\"chronophone\" for short) which lets him talk to you, here in the 21st century. You can make suggestions! For purposes of the thought experiment, ignore the morality of altering history - just assume that it is proper to optimize post-Archimedean history as though it were simply the ordinary future. If so, it would seem that you are in a position to accomplish a great deal of good.


Unfortunately, Archimedes's chronophone comes with certain restrictions upon its use:  It cannot transmit information that is, in a certain sense, \"too anachronistic\".


You cannot suggest, for example, that women should have the vote.  Maybe you could persuade Archimedes of Syracuse of the issue, and maybe not; but it is a moot point, the chronophone will not transmit the advice.  Or rather, it will transmit the advice, but it will come out as:  \"Install a tyrant of great personal virtue, such as Hiero II, under whose rule Syracuse experienced fifty years of peace and prosperity.\"  That's how the chronophone avoids transmitting overly anachronistic information - it transmits cognitive strategies rather than words.  If you follow the policy of \"Check my brain's memory to see what my contemporary culture recommends as a wise form of political organization\", what comes out of the chronophone is the result of Archimedes following the same policy of looking up in his brain what his era lauds as a wise form of political organization.


You might think the next step would be to prepare a careful series of Plato-style philosophical arguments, starting from known territory, and intended to convince an impartial audience, with which to persuade Archimedes that all sentient beings should be equal before the law.  Unfortunately, if you try this, what comes out on Archimedes's end is a careful series of Plato-style philosophical analogies which argue that wealthy male landowners should have special privileges.  You followed the policy of \"Come up with a line of philosophical argument intended to persuade a neutral observer to my own era's point of view on political privilege,\" so what comes out of the chronophone is what Archimedes would think up if he followed the same cognitive strategy.


In Archimedes's time, slavery was thought right and proper; in our time, it is held an abomination.  If, today, you need to argue that slavery is bad, you can invent all sorts of moral arguments which lead to that conclusion - all sorts of justifications leap readily to mind.  If you could talk to Archimedes of Syracuse directly, you might even be able to persuade him to your viewpoint (or not).  But the really odd thing is that, at some point in time, someone must have turned against slavery - gone from pro-slavery to anti-slavery - even though they didn't start out wanting to persuade themselves against slavery.  By the time someone gets to the point of wanting to construct persuasive anti-slavery arguments, they must have already turned against slavery.  If you know your desired moral destination, you are already there.  Thus, that particular cognitive strategy - searching for ways to persuade people against slavery - can't explain how we got here from there, how Western culture went from pro-slavery to anti-slavery.


The chronophone, to prevent paradox, will not transmit arguments that you constructed already knowing the desired destination.  And because this is a law of physics governing time travel, the chronophone cannot be fooled.  No matter how cleverly you construct your neutral-sounding philosophical argument, the chronophone \"knows\" you started with the desired conclusion already in mind.


The same dilemma applies to scientific issues. If you say "The Earth circles the Sun" it comes out of the chronophone as "The Sun circles the Earth". It doesn't matter that our civilization is right and their civilization is wrong - the chronophone takes no notice of facts, only beliefs and cognitive strategies. You tried to transmit your own belief about heavenly mechanics, so it comes out as Archimedes's belief about heavenly mechanics.

Obviously, what you need to transmit is the scientific method - that's how our own civilization went from geocentrism to heliocentrism without having the destination already in mind. Unfortunately, you also can't say to Archimedes, \"Use mathematical laws instead of heroic mythology to explain empirical phenomena.\" It will come out as \"If anyone should throw back his head and learn something by staring at the varied patterns on a ceiling, apparently you would think that he was contemplating with his reason, when he was only staring with his eyes... I cannot but believe that no study makes the soul look on high except that which is concerned with real being and the unseen.\" (Plato, The Republic, Book VII.) That is Archimedes's culture's stance on epistemology, just as science is your own culture's stance.


Can you suggest that Archimedes pay attention to facts, and authorities, and think about which ought to take precedence - by way of leading him down a garden path to the scientific method? But humanity did not invent the scientific method by setting out to invent the scientific method - by looking for a garden path that would lead to the scientific method. If you know your desired destination, you are already there. And no matter how you try to prevent your garden path from looking like a garden path, the laws of time travel know the difference.

So what can you say into the chronophone?


Suppose that, at some point in your life, you've genuinely thought that the scientific method might not be correct - that our culture's preferred method of factual investigation might be flawed. Then, perhaps, you could talk into the chronophone about how you've doubted that the scientific method as commonly practiced is correct, and it would come out of the chronophone as doubts about whether deference to authority is correct. After all, something like that must be how humanity got to science from nonscience - individuals who genuinely questioned whether their own culture's preferred method of epistemological investigation was correct.


If you try to follow this strategy, your own doubts had better be genuine. Otherwise what will come out of the chronophone is a line of Socratic questioning that argues for deference to authority. If your doubts are genuine, surface doubts will come out as surface doubts, deep doubts as deep doubts. The chronophone always knows how much you really doubted, and how much you merely tried to convince yourself you doubted so that you could say it into the chronophone. Such is the unavoidable physics of time travel.


Now... what advice do you give to Archimedes, and how do you say it into the chronophone?


Addendum:  A basic principle of the chronophone is that to get nonobvious output, you need nonobvious input.  If you say something that is considered obvious in your home culture, it comes out of the chronophone as something that is considered obvious in Archimedes's culture.

" } }, { "_id": "CJfSYYdsn9LQdaoPP", "title": "Useless Medical Disclaimers", "pageUrl": "https://www.lesswrong.com/posts/CJfSYYdsn9LQdaoPP/useless-medical-disclaimers", "postedAt": "2007-03-19T16:48:05.000Z", "baseScore": 22, "voteCount": 21, "commentCount": 15, "url": null, "contents": { "documentId": "CJfSYYdsn9LQdaoPP", "html": "

I recently underwent a minor bit of toe surgery and had to sign a scary-looking disclaimer form in which I acknowledged that there was a risk of infection, repeat surgery, chronic pain, amputation, spontaneous combustion, meteor strikes, and a plague of locusts o'er the land.


It was the most pointless damned form I've ever seen in a doctor's office.  What are the statistical incidences of any of these risks?  Should I be more or less worried about dying in a car crash on the way home?  Taken literally, that kind of "information" is absolutely useless for making decisions.  You can't translate something into an expected utility, even a qualitative and approximate one, if it doesn't come with a probability attached.
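
For contrast, here is the kind of back-of-the-envelope calculation the form could have supported - with every number invented purely for illustration, since the real probabilities are exactly what the form fails to provide:

    # Invented probabilities and harm weights - for illustration only.
    risks = {
        "infection":      (0.02,    -5.0),
        "repeat surgery": (0.01,   -20.0),
        "chronic pain":   (0.005,  -50.0),
        "amputation":     (1e-6, -1000.0),
    }
    benefit_of_surgery = 30.0     # also invented

    expected_harm = sum(p * u for p, u in risks.values())
    print(expected_harm, benefit_of_surgery + expected_harm)
    # A bare list of "possible" outcomes, with no probabilities attached,
    # supports no calculation like this at all.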


Taken literally, saying that there is a "possibility" of infection tells me nothing.  The probability could be 1/1,000,000,000,000 and it would still be technically correct to describe the outcome as "possible".  I'm not the litigious type, but I seriously wonder if it would be possible to sue based on the theory that "possibilities" with no probabilities attached to them are not useful information and therefore should not constitute a "disclaimer" under the law.

Staring at this pointless list of disasters, I also wondered why the form contained no useful information.


The thought that occurred to me was that, innumeracy being so widespread, no one would dare put numbers on that sheet of paper.  If "amputation" is listed as a consequence with a probability of 0.0001%, patients will run screaming out of the office, crying, "Not my toe!  I don't want to lose my toe!"  No amount of patient explanation will suffice to convince them that they ought to diminish the emotional force of their fear by a factor of one million.  Each extra zero after the decimal point would only be one more symbol for their eyes to glaze over; it would not diminish the emotional force of the anticipation by an additional factor of ten.


And so I don't get any useful statistical information!  Hmph.


Clearly, innumeracy produces negative externalities and it ought to be regulated.  In particular, we should impose a tax on people who can't properly diminish the emotional impact of their anticipations by tiny probability factors.


Two classic objections to regulation are that (a) it infringes on personal freedom and (b) the individual always knows more about their own situation than the regulator.  However, my proposed policy addresses both of these issues: rather than administering a math test, we can ask each individual whether or not they're innumerate.  If they do declare themselves to be innumerate, they can decide for themselves the amount of the tax to pay.


What do you think?  Would this tax give people an incentive to become less innumerate, as standard economics would predict?

" } }, { "_id": "Jq73GozjsuhdwMLEG", "title": "Superstimuli and the Collapse of Western Civilization", "pageUrl": "https://www.lesswrong.com/posts/Jq73GozjsuhdwMLEG/superstimuli-and-the-collapse-of-western-civilization", "postedAt": "2007-03-16T18:10:52.000Z", "baseScore": 157, "voteCount": 136, "commentCount": 90, "url": null, "contents": { "documentId": "Jq73GozjsuhdwMLEG", "html": "

At least three people have died playing online games for days without rest.  People have lost their spouses, jobs, and children to World of Warcraft. If people have the right to play video games - and it's hard to imagine a more fundamental right - then the market is going to respond by supplying the most engaging video games that can be sold, to the point that exceptionally engaged consumers are removed from the gene pool.


How does a consumer product become so involving that, after 57 hours of using the product, the consumer would rather use the product for one more hour than eat or sleep?  (I suppose one could argue that the consumer makes a rational decision that they'd rather play Starcraft for the next hour than live out the rest of their lives, but let's just not go there.  Please.)

A candy bar is a superstimulus: it contains more concentrated sugar, salt, and fat than anything that exists in the ancestral environment.   A candy bar matches taste buds that evolved in a hunter-gatherer environment, but it matches those taste buds much more strongly than anything that actually existed in the hunter-gatherer environment.  The signal that once reliably correlated to healthy food has been hijacked, blotted out with a point in tastespace that wasn't in the training dataset - an impossibly distant outlier on the old ancestral graphs.  Tastiness, formerly representing the evolutionarily identified correlates of healthiness, has been reverse-engineered and perfectly matched with an artificial substance.  Unfortunately there's no equally powerful market incentive to make the resulting food item as healthy as it is tasty.  We can't taste healthfulness, after all.

\n\n

The now-famous Dove Evolution video shows the painstaking construction of another superstimulus: an ordinary woman transformed by makeup, careful photography, and finally extensive Photoshopping, into a billboard model - a beauty impossible, unmatchable by human women in the unretouched real world.  Actual women are killing themselves (e.g. supermodels using cocaine to keep their weight down) to keep up with competitors that literally don't exist.

\n\n

And likewise, a video game can be so much more engaging than mere reality, even through a simple computer monitor, that someone will play it without food or sleep until they literally die.  I don't know all the tricks used in video games, but I can guess some of them - challenges poised at the critical point between ease and impossibility, intermittent reinforcement, feedback showing an ever-increasing score, social involvement in massively multiplayer games.

\n\n

Is there a limit to the market incentive to make video games more engaging?  You might hope there'd be no incentive past the point where the players lose their jobs; after all, they must be able to pay their subscription fee.  This would imply a "sweet spot" for the addictiveness of games, where the mode of the bell curve is having fun, and only a few unfortunate souls on the tail become addicted to the point of losing their jobs.  As of 2007, playing World of Warcraft for 58 hours straight until you literally die is still the exception rather than the rule.  But video game manufacturers compete against each other, and if you can make your game 5% more addictive, you may be able to steal 50% of your competitor's customers.  You can see how this problem could get a lot worse.

\n\n

If people have the right to be tempted - and that's what free will is all about - the market is going to respond by supplying as much temptation as can be sold.  The incentive is to make your stimuli 5% more tempting than those of your current leading competitors.  This continues well beyond the point where the stimuli become ancestrally anomalous superstimuli.  Consider how our standards of product-selling feminine beauty have changed since the advertisements of the 1950s.  And as candy bars demonstrate, the market incentive also continues well beyond the point where the superstimulus begins wreaking collateral damage on the consumer.

\n\n

So why don't we just say no?  A key assumption of free-market economics is that, in the absence of force and fraud, people can always refuse to engage in a harmful transaction.  (To the extent this is true, a free market would be, not merely the best policy on the whole, but a policy with few or no downsides.)

\n\n

An organism that regularly passes up food will die, as some video game players found out the hard way.  But, on some occasions in the ancestral environment, a typically beneficial (and therefore tempting) act may in fact be harmful.  Humans, as organisms, have an unusually strong ability to perceive these special cases using abstract thought.  On the other hand we also tend to imagine lots of special-case consequences that don't exist, like ancestor spirits commanding us not to eat perfectly good rabbits.

\n\n

Evolution seems to have struck a compromise, or perhaps just aggregated new systems on top of old.  Homo sapiens are still tempted by food, but our oversized prefrontal cortices give us a limited ability to resist temptation.  Not unlimited ability - our ancestors with too much willpower probably starved themselves to sacrifice to the gods, or failed to commit adultery one too many times.  The video game players who died must have exercised willpower (in some sense) to keep playing for so long without food or sleep; the evolutionary hazard of self-control.

\n\n

Resisting any temptation takes conscious expenditure of an exhaustible supply of mental energy.  It is not in fact true that we can "just say no" - not just say no, without cost to ourselves.  Even humans who won the birth lottery for willpower or foresightfulness still pay a price to resist temptation.  The price is just more easily paid.

\n\n

Our limited willpower evolved to deal with ancestral temptations; it may not operate well against enticements beyond anything known to hunter-gatherers.  Even where we successfully resist a superstimulus, it seems plausible that the effort required would deplete willpower much faster than resisting ancestral temptations.

\n\n

Is public display of superstimuli a negative externality, even to the people who say no?  Should we ban chocolate cookie ads, or storefronts that openly say "Ice Cream"?

\n\n

Just because a problem exists doesn't show (without further justification and a substantial burden of proof) that the government can fix it.  The regulator's career incentive does not focus on products that combine low-grade consumer harm with addictive superstimuli; it focuses on products with failure modes spectacular enough to get into the newspaper.  Conversely, just because the government may not be able to fix something, doesn't mean it isn't going wrong.

\n\n

I leave you with a final argument from fictional evidence:  Simon Funk's online novel After Life depicts (among other plot points) the planned extermination of biological Homo sapiens - not by marching robot armies, but by artificial children that are much cuter and sweeter and more fun to raise than real children.  Perhaps the demographic collapse of advanced societies happens because the market supplies ever-more-tempting alternatives to having children, while the attractiveness of changing diapers remains constant over time.  Where are the advertising billboards that say "BREED"?  Who will pay professional image consultants to make arguing with sullen teenagers seem more alluring than a vacation in Tahiti?

\n\n

"In the end," Simon Funk wrote, "the human species was simply marketed out of existence."

" } }, { "_id": "uaPc4NHi5jGXGQKFS", "title": "Blue or Green on Regulation?", "pageUrl": "https://www.lesswrong.com/posts/uaPc4NHi5jGXGQKFS/blue-or-green-on-regulation", "postedAt": "2007-03-15T18:04:27.000Z", "baseScore": 91, "voteCount": 78, "commentCount": 40, "url": null, "contents": { "documentId": "uaPc4NHi5jGXGQKFS", "html": "

In recent posts, I have predicted that, if not otherwise prevented from doing so, some people will behave stupidly and suffer the consequences:  "If people have a right to be stupid, the market will respond by supplying all the stupidity that can be sold."  People misinterpret this as indicating that I take a policy stance in favor of regulation.  It indicates no such thing.  It is meant purely as a guess about empirical consequences - a testable prediction on a question of simple fact.

\n\n

Perhaps I would be less misinterpreted if I also told "the other side of the story" - inveighed at length about the reasons why bureaucrats are not perfect rationalists guarding our net best interests.  But ideally, I shouldn't have to go to such lengths.  Ideally, I could make a prediction about a strictly factual question without this being interpreted as a policy stance, or as a stance on logically distinct factual questions.

Yet it would appear that there are two and only two sides to the issue - pro-regulation and anti-regulation.  All arguments are either allied soldiers or enemy soldiers; they fight on one side or the other.  Any allied soldier can be deployed to fight any enemy soldier and vice versa.  Whatever argument pushes one side up, pushes the other side down.\n\n

\n\n

I understand that there are continuing fights about regulation, that this battle is viewed as important, and that people caught up in such battle may not want to let a pro-Green point go past without parrying with a Blue counterpoint.  But these battle reflexes have developed too far.  If I remark that victims of car accidents include minor children who had to be pushed screaming into the car on the way to school, anyone who is anti-regulation instantly suspects me of trying to pull out an emotional trump card.  But I was not trying to get cars banned.  I was trying to make a point about how emotional trump cards fail to trump the universe.\n\n

\n\n

I have previously offered a prediction on the strictly factual matter of whether, in the absence of regulation, people will get hurt.  (Yes.)  I have also indicated, as a matter of moral judgment, that I do not think they deserve to get hurt, because being stupid is not the same as being malicious.  Furthermore, there are such things as minor children and pedestrians.

\n\n

I shouldn't have to say this, but apparently I do, so, for the record, here is "the other side of the story":\n\n

\n\n

The FDA prevents 5,000 casualties per year but causes at least 20,000-120,000 casualties by delaying approval of beneficial medications.  The second number is calculated only by looking at delays in the introduction of medications eventually approved - not medications never approved, or medications for which approval was never sought.  FDA fatalities are comparable to the annual number of fatal car accidents, but the noneffects of medications not approved don't make the evening news.  A bureaucrat's chief incentive is not to approve anything that will ever harm anyone in a way that makes it into the newspaper; no other cost-benefit calculus is involved as an actual career incentive.  The bureaucracy as a whole may have an incentive to approve at least some new products - if the FDA never approved a new medication, Congress would become suspicious - but any individual bureaucrat has an unlimited incentive to say no.  Regulators have no career motive to do any sort of cost-benefit calculation - except of course for the easy career-benefit calculation.  A product with a failure mode spectacular enough to make the newspapers will be banned regardless of what other good it might do; one-reason decisionmaking.  As with the FAA banning toenail clippers on planes, "safety precautions" are primarily an ostentatious display of costly efforts so that, when a catastrophe does occur, the agency will be seen to have tried its hardest.
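Taking the figures quoted above at face value, the net arithmetic is simple subtraction; a minimal check:

<pre><code>prevented_per_year = 5_000                   # casualties the FDA prevents, per the text
caused_low, caused_high = 20_000, 120_000    # casualties from delayed approvals, per the text

print(caused_low - prevented_per_year)       # 15000 net casualties on the low estimate
print(caused_high - prevented_per_year)      # 115000 net casualties on the high estimate
</code></pre>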

\n\n

Government = ordinary human fallibility + poor incentives + organizational overhead + guns.\n\n

\n\n

But this does not change the consequences of nonregulation.  Children will still die horrible deaths in car accidents and they still will not deserve it.\n\n

\n\n

I understand that debates are not conducted in front of perfectly rational audiences.  We all know what happens when you try to trade off a sacred value against a nonsacred value.  It's why, when someone says, "But if you don't ban cars, people will die in car crashes!" you don't say "Yes, people will die horrible flaming deaths and they don't deserve it.  But it's worth it so I don't have to walk to work in the morning."  Instead you say, "How dare you take away our freedom to drive?  We'll decide for ourselves; we're just as good at making decisions as you are."  So go ahead and say that, then.  But think to yourself, in the silent privacy of your thoughts if you must:  And yet they will still die, and they will not deserve it.\n\n

\n\n

That way, when Sebastian Thrun comes up with a scheme to automate the highways, and claims it will eliminate nearly all traffic accidents, you can pay appropriate attention.\n\n

\n\n

So too with those other horrible consequences of stupidity that I may dwell upon in later posts.  Just because (you believe) regulation may not be able to solve these problems, doesn't mean we wouldn't be very interested in a proposal to solve them by other means.\n\n

\n\n

People are hurt by free markets, just as they're hurt by automobiles - torn up by huge powerful mindless machines with imperfect human operators.  It may not be the course of wisdom to fix these problems by resorting to the blunt sledgehammer of ban-the-bad-thing, by wishing to the fairy godmother of government and her magic wand of law.  But then people will still get hurt.  They will lose their jobs, lose their pensions, lose their health insurance, be ground down to bloody stumps by poverty, perhaps die, and they won't deserve it either.\n\n

\n\n

So am I Blue or Green on regulation, then?  I consider myself neither.  Imagine, for a moment, that much of what the Greens said about the downside of the Blue policy was true - that, left to the mercy of the free market, many people would be crushed by powers far beyond their understanding, nor would they deserve it.  And imagine that most of what the Blues said about the downside of the Green policy was also true - that regulators were fallible humans with poor incentives, whacking on delicately balanced forces with a sledgehammer.\n\n

\n\n

Close your eyes and imagine it.  Extrapolate the result.  If that were true, then... then you'd have a big problem and no easy way to fix it, that's what you'd have.  Does this universe look familiar?

" } }, { "_id": "XYCEB9roxEBfgjfxs", "title": "The Scales of Justice, the Notebook of Rationality", "pageUrl": "https://www.lesswrong.com/posts/XYCEB9roxEBfgjfxs/the-scales-of-justice-the-notebook-of-rationality", "postedAt": "2007-03-13T16:00:00.000Z", "baseScore": 118, "voteCount": 100, "commentCount": 22, "url": null, "contents": { "documentId": "XYCEB9roxEBfgjfxs", "html": "\n\n\n\n \n\n \n\n

Lady Justice is widely depicted as carrying scales. A set of scales has the property that whatever pulls one side down pushes the other side up. This makes things very convenient and easy to track. It’s also usually a gross distortion.

\n\n

In human discourse there is a natural tendency to treat discussion as a form of combat, an extension of war, a sport; and in sports you only need to keep track of how many points have been scored by each team. There are only two sides, and every point scored against one side is a point in favor of the other. Everyone in the audience keeps a mental running count of how many points each speaker scores against the other. At the end of the debate, the speaker who has scored more points is, obviously, the winner; so everything that speaker says must be true, and everything the loser says must be wrong.

\n\n

“The Affect Heuristic in Judgments of Risks and Benefits” studied whether subjects mixed up their judgments of the possible benefits of a technology (e.g., nuclear power), and the possible risks of that technology, into a single overall good or bad feeling about the technology.1 Suppose that I first tell you that a particular kind of nuclear reactor generates less nuclear waste than competing reactor designs. But then I tell you that the reactor is more unstable than competing designs, with a greater danger of melting down if a sufficient number of things go wrong simultaneously.

\n\n

If the reactor is more likely to melt down, this seems like a “point against” the reactor, or a “point against” someone who argues for building the reactor. And if the reactor produces less waste, this is a “point for” the reactor, or a “point for” building it. So are these two facts opposed to each other? No. In the real world, no. These two facts may be cited by different sides of the same debate, but they are logically distinct; the facts don’t know whose side they’re on.

\n\n

If it’s a physical fact about a reactor design that it’s passively safe (won’t go supercritical even if the surrounding coolant systems and so on break down), this doesn’t imply that the reactor will necessarily generate less waste, or produce electricity at a lower cost. All these things would be good, but they are not the same good thing. The amount of waste produced by the reactor arises from the properties of that reactor. Other physical properties of the reactor make the nuclear reaction more unstable. Even if some of the same design properties are involved, you have to separately consider the probability of meltdown, and the expected annual waste generated. These are two different physical questions with two different factual answers.

\n\n

But studies such as the above show that people tend to judge technologies—and many other problems—by an overall good or bad feeling. If you tell people a reactor design produces less waste, they rate its probability of meltdown as lower. This means getting the wrong answer to physical questions with definite factual answers, because you have mixed up logically distinct questions—treated facts like human soldiers on different sides of a war, thinking that any soldier on one side can be used to fight any soldier on the other side.

\n\n

A set of scales is not wholly inappropriate for Lady Justice if she is investigating a strictly factual question of guilt or innocence. Either John Smith killed John Doe, or not. We are taught (by E. T. Jaynes) that all Bayesian evidence consists of probability flows between hypotheses; there is no such thing as evidence that “supports” or “contradicts” a single hypothesis, except insofar as other hypotheses do worse or better. So long as Lady Justice is investigating a single, strictly factual question with a binary answer space, a set of scales would be an appropriate tool. If Justitia must consider any more complex issue, she should relinquish her scales or relinquish her sword.
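A minimal sketch of the Jaynes point above, with invented numbers: whether an observation "supports" the hypothesis that Smith is guilty depends entirely on how well the alternative explains the same observation.

<pre><code>def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    # Bayes' theorem over a binary answer space: guilty vs. not guilty.
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

prior = 0.30            # assumed prior probability that Smith killed Doe
p_e_if_guilty = 0.80    # assumed P(fingerprints on the weapon | guilty)

# If the alternative explains the evidence poorly, the evidence raises P(guilty)...
print(posterior(prior, p_e_if_guilty, p_e_given_not_h=0.05))   # ~0.87

# ...but if the alternative explains the very same evidence even better
# (Smith handled the knife innocently every day), the same observation lowers it.
print(posterior(prior, p_e_if_guilty, p_e_given_not_h=0.95))   # ~0.27
</code></pre>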

\n\n

Not all arguments reduce to mere up or down. Lady Rationality carries a notebook, wherein she writes down all the facts that aren’t on anyone’s side.

\n\n
\n \n\n

1Melissa L. Finucane et al., “The Affect Heuristic in Judgments of Risks and Benefits,” Journal of Behavioral Decision Making 13, no. 1 (2000): 1–17.

\n
\n\n" } }, { "_id": "6dSitnwYPg8i8NHn3", "title": "Burch's Law", "pageUrl": "https://www.lesswrong.com/posts/6dSitnwYPg8i8NHn3/burch-s-law", "postedAt": "2007-03-08T02:39:00.000Z", "baseScore": 64, "voteCount": 54, "commentCount": 28, "url": null, "contents": { "documentId": "6dSitnwYPg8i8NHn3", "html": "

Greg Burch said:\n\n

"I think people should have a right to be stupid and, if they have that right, the market's going to respond by supplying as much stupidity as can be sold."

\n

Greg Burch was speaking about sport-utility vehicles, which he feels are very poorly designed.  Note that Burch was not advocating banning SUVs.  Burch did not even advocate regulating SUVs.  Burch thinks people should have a right to be stupid.  But Burch also openly acknowledges the real-world consequence of that right, which is that the market will respond by supplying as much stupidity as can be sold.  Perhaps Burch is strongly libertarian, and sees the case against regulation as a slam-dunk regardless of the consequences, and therefore has an easier time acknowledging the downside of his policy.  Or perhaps Burch is just a skillful rationalist.  Either way, I hereby canonize his observation as Burch's Law.\n\n

\n\n

Burch's Law is a special case of a more general rule:  Just because your ethics require an action doesn't mean the universe will exempt you from the consequences.  If the universe were fair, like a sympathetic human, the universe would understand that you had overriding ethical reasons for your action, and would exempt you from the usual penalties.  The judge would rule "justifiable homicide" instead of "murder" and exempt you from the usual prison term.  Well, the universe isn't fair and it won't exempt you from the consequences.  We know the equations of physics in enough detail to know that the equations don't contain any quantities reflective of ethical considerations.\n\n

\n\n

We don't send automobile manufacturers to jail, even though manufactured cars kill an estimated 1.2 million people per year worldwide.  (Roughly 2% of the annual planetary death rate.)  Not everyone who dies in an automobile accident is someone who decided to drive a car.  The tally of casualties includes pedestrians.  It includes minor children who had to be pushed screaming into the car on the way to school.  And yet we still manufacture automobiles, because, well, we're in a hurry.  I don't even disagree with this decision.  I drive a car myself.  The point is that the consequences don't change no matter how good the ethical justification sounds.  The people who die in automobile accidents are still dead.  We can suspend the jail penalty, but we can't suspend the laws of physics.\n\n
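Checking the parenthetical: the 1.2 million figure is from the text, and the total of roughly 57 million deaths per year is an approximate mid-2000s figure assumed here for the comparison.

<pre><code>car_deaths_per_year = 1.2e6
world_deaths_per_year = 57e6     # approximate figure, assumed for the check

print(car_deaths_per_year / world_deaths_per_year)   # ~0.021, i.e. roughly 2%
</code></pre>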

\n\n

Humanity hasn't had much luck suspending the laws of economics, either.  If people have a right to be stupid, the market will respond by supplying all the stupidity that can be sold.

" } }, { "_id": "PeSzc9JTBxhaYRp9b", "title": "Policy Debates Should Not Appear One-Sided", "pageUrl": "https://www.lesswrong.com/posts/PeSzc9JTBxhaYRp9b/policy-debates-should-not-appear-one-sided", "postedAt": "2007-03-03T18:53:08.000Z", "baseScore": 278, "voteCount": 225, "commentCount": 187, "url": null, "contents": { "documentId": "PeSzc9JTBxhaYRp9b", "html": "

Robin Hanson proposed stores where banned products could be sold.1 There are a number of excellent arguments for such a policy—an inherent right of individual liberty, the career incentive of bureaucrats to prohibit everything, legislators being just as biased as individuals. But even so (I replied), some poor, honest, not overwhelmingly educated mother of five children is going to go into these stores and buy a “Dr. Snakeoil’s Sulfuric Acid Drink” for her arthritis and die, leaving her orphans to weep on national television.

I was just making a factual observation. Why did some people think it was an argument in favor of regulation?

On questions of simple fact (for example, whether Earthly life arose by natural selection) there’s a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called “balance of evidence” should reflect this. Indeed, under the Bayesian definition of evidence, “strong evidence” is just that sort of evidence which we only expect to find on one side of an argument.
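A minimal simulation of that definition of evidence (all parameters invented): when a question has a definite factual answer, observations that are much more probable under one answer than the other pile up on one side only.

<pre><code>import random
from math import log

P_IF_TRUE, P_IF_FALSE = 0.9, 0.2     # assumed probabilities of a positive finding

def observations(truth, n=20):
    # Each observation is a noisy test, more likely to come up positive if the claim is true.
    return [random.random() < (P_IF_TRUE if truth else P_IF_FALSE) for _ in range(n)]

def total_log_odds(obs):
    # Sum of log likelihood ratios: positive favors the claim, negative favors its negation.
    return sum(log(P_IF_TRUE / P_IF_FALSE) if o
               else log((1 - P_IF_TRUE) / (1 - P_IF_FALSE))
               for o in obs)

random.seed(0)
print(total_log_odds(observations(truth=True)))    # strongly positive: one-sided
print(total_log_odds(observations(truth=False)))   # strongly negative: one-sided the other way
</code></pre>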

But there is no reason for complex actions with many consequences to exhibit this onesidedness property. Why do people seem to want their policy debates to be one-sided?

Politics is the mind-killer. Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back. If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary.

One should also be aware of a related failure pattern: thinking that the course of Deep Wisdom is to compromise with perfect evenness between whichever two policy positions receive the most airtime. A policy may legitimately have lopsided costs or benefits. If policy questions were not tilted one way or the other, we would be unable to make decisions about them. But there is also a human tendency to deny all costs of a favored policy, or deny all benefits of a disfavored policy; and people will therefore tend to think policy tradeoffs are tilted much further than they actually are.

If you allow shops that sell otherwise banned products, some poor, honest, poorly educated mother of five kids is going to buy something that kills her. This is a prediction about a factual consequence, and as a factual question it appears rather straightforward—a sane person should readily confess this to be true regardless of which stance they take on the policy issue. You may also think that making things illegal just makes them more expensive, that regulators will abuse their power, or that her individual freedom trumps your desire to meddle with her life. But, as a matter of simple fact, she’s still going to die.

We live in an unfair universe. Like all primates, humans have strong negative reactions to perceived unfairness; thus we find this fact stressful. There are two popular methods of dealing with the resulting cognitive dissonance. First, one may change one’s view of the facts—deny that the unfair events took place, or edit the history to make it appear fair.2 Second, one may change one’s morality—deny that the events are unfair.

Some libertarians might say that if you go into a “banned products shop,” passing clear warning labels that say THINGS IN THIS STORE MAY KILL YOU, and buy something that kills you, then it’s your own fault and you deserve it. If that were a moral truth, there would be no downside to having shops that sell banned products. It wouldn’t just be a net benefit, it would be a one-sided tradeoff with no drawbacks.

Others argue that regulators can be trained to choose rationally and in harmony with consumer interests; if those were the facts of the matter then (in their moral view) there would be no downside to regulation.

Like it or not, there’s a birth lottery for intelligence—though this is one of the cases where the universe’s unfairness is so extreme that many people choose to deny the facts. The experimental evidence for a purely genetic component of 0.6–0.8 is overwhelming, but even if this were to be denied, you don’t choose your parental upbringing or your early schools either.

I was raised to believe that denying reality is a moral wrong. If I were to engage in wishful optimism about how Sulfuric Acid Drink was likely to benefit me, I would be doing something that I was warned against and raised to regard as unacceptable. Some people are born into environments—we won’t discuss their genes, because that part is too unfair—where the local witch doctor tells them that it is right to have faith and wrong to be skeptical. In all goodwill, they follow this advice and die. Unlike you, they weren’t raised to believe that people are responsible for their individual choices to follow society’s lead. Do you really think you’re so smart that you would have been a proper scientific skeptic even if you’d been born in 500 CE? Yes, there is a birth lottery, no matter what you believe about genes.

Saying “People who buy dangerous products deserve to get hurt!” is not tough-minded. It is a way of refusing to live in an unfair universe. Real tough-mindedness is saying, “Yes, sulfuric acid is a horrible painful death, and no, that mother of five children didn’t deserve it, but we’re going to keep the shops open anyway because we did this cost-benefit calculation.” Can you imagine a politician saying that? Neither can I. But insofar as economists have the power to influence policy, it might help if they could think it privately—maybe even say it in journal articles, suitably dressed up in polysyllabismic obfuscationalization so the media can’t quote it.

I don’t think that when someone makes a stupid choice and dies, this is a cause for celebration. I count it as a tragedy. It is not always helping people, to save them from the consequences of their own actions; but I draw a moral line at capital punishment. If you’re dead, you can’t learn from your mistakes.

Unfortunately the universe doesn’t agree with me. We’ll see which one of us is still standing when this is over.


1Robin Hanson et al., “The Hanson-Hughes Debate on ‘The Crack of a Future Dawn,’” 16, no. 1 (2007): 99–126, http://jetpress.org/v16/hanson.pdf.

2This is mediated by the affect heuristic and the just-world fallacy.

" } }, { "_id": "HCssSWMp7zoJAPtcR", "title": "You Are Not Hiring the Top 1%", "pageUrl": "https://www.lesswrong.com/posts/HCssSWMp7zoJAPtcR/you-are-not-hiring-the-top-1", "postedAt": "2007-03-02T05:34:27.000Z", "baseScore": 104, "voteCount": 75, "commentCount": 12, "url": null, "contents": { "documentId": "HCssSWMp7zoJAPtcR", "html": "

Today's statistical fallacy (slightly redacted by editor) comes from Joel on Software:

\n
\n

Everyone thinks they're hiring the top 1%.  Martin Fowler said, \"We are still working hard to hire only the very top fraction of software developers (the target is around the top 0.5 to 1%).\"  I hear this from almost every software company. \"We hire the top 1% or less,\" they all say.  Could they all be hiring the top 1%? Where are all the other 99%? General Motors?

\n

When you get 200 resumes, and hire the best person, does that mean you're hiring the top 0.5%?  Think about what happens to the other 199 that you didn't hire.  They go look for another job.

\n
\n

\n
\n

The entire world could consist of 1,000,000 programmers, of whom the worst 199 keep applying for every job and never getting them, but the best 999,801 always get jobs as soon as they apply for one. So every time a job is listed the 199 losers apply, as usual, and one guy from the pool of 999,801 applies, and he gets the job, of course, because he's the best, and now, in this contrived example, every employer thinks they're getting the top 0.5% when they're actually getting the top 99.9801%.

\n

I'm exaggerating a lot, but the point is, when you select 1 out of 200 applicants, the other 199 don't give up and go into plumbing (although I wish they would... plumbers are impossible to find). They apply again somewhere else, and contribute to some other employer's self-delusions about how selective they are.

\n
\n

This might explain some other phenomena I've heard of, such as the \"slush heap\" of inconceivably awful stories received in the mail by every fiction publisher that accepts unsolicited manuscripts.  When a story is good enough to be published, it's accepted and removed from the system.  Otherwise the hapless author sends it to another publisher!

\n

Perhaps most novice writers aren't quite as dreadful (on average) as editors seem to believe?  Editors will disproportionately encounter the work of novice writers who are not only awkward, but so incompetent as to be unaware of their own incompetence, and so proud that they can't take a hint.
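The selection dynamic behind both examples can be simulated directly.  A minimal sketch, with all pool sizes invented: strong candidates are hired and leave the pool, weak candidates keep reapplying, so the applications any one employer or editor actually sees skew far weaker than the population.

<pre><code>import random
from collections import Counter

random.seed(0)
N_CANDIDATES, N_OPENINGS, N_ROUNDS = 10_000, 500, 12
unemployed = set(range(N_CANDIDATES))     # skill rank: 0 is best, 9999 is worst
applications = Counter()                  # resumes sent by each candidate

for _ in range(N_ROUNDS):
    pools = [[] for _ in range(N_OPENINGS)]
    for c in unemployed:
        pools[random.randrange(N_OPENINGS)].append(c)   # each candidate applies to one opening
        applications[c] += 1
    for applicants in pools:
        if applicants:
            unemployed.discard(min(applicants))          # each opening hires its best applicant

best_decile = [applications[c] for c in range(N_CANDIDATES // 10)]
worst_decile = [applications[c] for c in range(9 * N_CANDIDATES // 10, N_CANDIDATES)]
print(sum(best_decile) / len(best_decile))    # low: the strong are snapped up within a round or two
print(sum(worst_decile) / len(worst_decile))  # close to N_ROUNDS: the weak keep recirculating
</code></pre>

The resumes in circulation come disproportionately from the bottom of the distribution, even though every employer in the simulation genuinely picks the best resume on its desk.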

\n

PS:  Two other areas where this might apply:  Students who apply to your program/department for admission.  And, grant proposals.

\n

PPS:  Robert Scarth comments, \"This might also explain why some women often think that 'all men are bastards' - there are a few cads out there in continual circulation and with no intention to settle down, whereas men with honourable intentions have much fewer relationships, and settle down more quickly.\"

" } }, { "_id": "waqC6FihC2ryAZuAq", "title": "Just Lose Hope Already", "pageUrl": "https://www.lesswrong.com/posts/waqC6FihC2ryAZuAq/just-lose-hope-already", "postedAt": "2007-02-25T00:39:33.000Z", "baseScore": 155, "voteCount": 128, "commentCount": 78, "url": null, "contents": { "documentId": "waqC6FihC2ryAZuAq", "html": "\n\n\n\n \n\n \n\n

Casey Serin, a 24-year-old web programmer with no prior experience in real estate, owes banks 2.2 million dollars after lying on mortgage applications in order to simultaneously buy eight different houses in different states. He took cash out of the mortgage (applied for larger amounts than the price of the house) and spent the money on living expenses and real-estate seminars. He was expecting the market to go up, it seems.

\n\n

That’s not even the sad part. The sad part is that he still hasn’t given up. Casey Serin does not accept defeat. He refuses to declare bankruptcy, or get a job; he still thinks he can make it big in real estate. He went on spending money on seminars. He tried to take out a mortgage on a ninth house. He hasn’t failed, you see, he’s just had a learning experience.

\n\n

That’s what happens when you refuse to lose hope.

\n\n

While this behavior may seem to be merely stupid, it also puts me in mind of two Nobel-Prize-winning economists . . .

\n\n

. . . namely Merton and Scholes of Long-Term Capital Management.

\n\n

While LTCM raked in giant profits over its first three years, in 1998 the inefficiencies that LTCM was exploiting had started to vanish—other people knew about the trick, so it stopped working.

\n\n

LTCM refused to lose hope. Addicted to 40% annual returns, they borrowed more and more leverage to exploit tinier and tinier margins. When everything started to go wrong for LTCM, they had equity of $4.72 billion, leverage of $124.5 billion, and derivative positions of $1.25 trillion.
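Simple arithmetic on the figures just quoted:

<pre><code>equity = 4.72e9          # dollars of equity
positions = 124.5e9      # dollars of leveraged positions

print(positions / equity)    # ~26x leverage
print(equity / positions)    # ~0.038: a roughly 4% adverse move wipes out the equity
</code></pre>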

\n\n

Every profession has a different way to be smart—different skills to learn and rules to follow. You might therefore think that the study of “rationality,” as a general discipline, wouldn’t have much to contribute to real-life success. And yet it seems to me that how to not be stupid has a great deal in common across professions. If you set out to teach someone how to not turn little mistakes into big mistakes, it’s nearly the same art whether in hedge funds or romance, and one of the keys is this: Be ready to admit you lost.

\n\n" } }, { "_id": "9weLK2AJ9JEt2Tt8f", "title": "Politics is the Mind-Killer", "pageUrl": "https://www.lesswrong.com/posts/9weLK2AJ9JEt2Tt8f/politics-is-the-mind-killer", "postedAt": "2007-02-18T21:23:00.000Z", "baseScore": 314, "voteCount": 249, "commentCount": 243, "url": null, "contents": { "documentId": "9weLK2AJ9JEt2Tt8f", "html": "

People go funny in the head when talking about politics. The evolutionary reasons for this are so obvious as to be worth belaboring: In the ancestral environment, politics was a matter of life and death. And sex, and wealth, and allies, and reputation . . . When, today, you get into an argument about whether “we” ought to raise the minimum wage, you’re executing adaptations for an ancestral environment where being on the wrong side of the argument could get you killed. Being on the right side of the argument could let you kill your hated rival!

If you want to make a point about science, or rationality, then my advice is to not choose a domain from contemporary politics if you can possibly avoid it. If your point is inherently about politics, then talk about Louis XVI during the French Revolution. Politics is an important domain to which we should individually apply our rationality—but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.

Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you’re on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it’s like stabbing your soldiers in the back—providing aid and comfort to the enemy. People who would be level-headed about evenhandedly weighing all sides of an issue in their professional life as scientists, can suddenly turn into slogan-chanting zombies when there’s a Blue or Green position on an issue.

In artificial intelligence, and particularly in the domain of nonmonotonic reasoning, there’s a standard problem: “All Quakers are pacifists. All Republicans are not pacifists. Nixon is a Quaker and a Republican. Is Nixon a pacifist?”

What on Earth was the point of choosing this as an example? To rouse the political emotions of the readers and distract them from the main question? To make Republicans feel unwelcome in courses on artificial intelligence and discourage them from entering the field?1

Why would anyone pick such a distracting example to illustrate nonmonotonic reasoning? Probably because the author just couldn’t resist getting in a good, solid dig at those hated Greens. It feels so good to get in a hearty punch, y’know, it’s like trying to resist a chocolate cookie.

As with chocolate cookies, not everything that feels pleasurable is good for you.

I’m not saying that I think we should be apolitical, or even that we should adopt Wikipedia’s ideal of the Neutral Point of View. But try to resist getting in those good, solid digs if you can possibly avoid it. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it—but don’t blame it explicitly on the whole Republican Party; some of your readers may be Republicans, and they may feel that the problem is a few rogues, not the entire party. As with Wikipedia’s NPOV, it doesn’t matter whether (you think) the Republican Party really is at fault. It’s just better for the spiritual growth of the community to discuss the issue without invoking color politics.


1And no, I am not a Republican. Or a Democrat.

" } }, { "_id": "N2pENnTPB75sfc9kb", "title": "Outside the Laboratory", "pageUrl": "https://www.lesswrong.com/posts/N2pENnTPB75sfc9kb/outside-the-laboratory", "postedAt": "2007-01-21T03:46:57.000Z", "baseScore": 159, "voteCount": 128, "commentCount": 353, "url": null, "contents": { "documentId": "N2pENnTPB75sfc9kb", "html": "

\"Outside the laboratory, scientists are no wiser than anyone else.\"  Sometimes this proverb is spoken by scientists, humbly, sadly, to remind themselves of their own fallibility.  Sometimes this proverb is said for rather less praiseworthy reasons, to devalue unwanted expert advice.  Is the proverb true?  Probably not in an absolute sense.  It seems much too pessimistic to say that scientists are literally no wiser than average, that there is literally zero correlation.

\n

But the proverb does appear true to some degree, and I propose that we should be very disturbed by this fact.  We should not sigh, and shake our heads sadly.  Rather we should sit bolt upright in alarm.  Why?  Well, suppose that an apprentice shepherd is laboriously trained to count sheep, as they pass in and out of a fold.  Thus the shepherd knows when all the sheep have left, and when all the sheep have returned.  Then you give the shepherd a few apples, and say:  \"How many apples?\"  But the shepherd stares at you blankly, because they weren't trained to count apples - just sheep.  You would probably suspect that the shepherd didn't understand counting very well.

\n

Now suppose we discover that a Ph.D. economist buys a lottery ticket every week.  We have to ask ourselves:  Does this person really understand expected utility, on a gut level?  Or have they just been trained to perform certain algebra tricks?

\n

\n

One thinks of Richard Feynman's account of a failing physics education program:

\n
\n

\"The students had memorized everything, but they didn't know what anything meant.  When they heard 'light that is reflected from a medium with an index', they didn't know that it meant a material such as water.  They didn't know that the 'direction of the light' is the direction in which you see something when you're looking at it, and so on.  Everything was entirely memorized, yet nothing had been translated into meaningful words.  So if I asked, 'What is Brewster's Angle?' I'm going into the computer with the right keywords.  But if I say, 'Look at the water,' nothing happens - they don't have anything under 'Look at the water'!\"

\n
\n

Suppose we have an apparently competent scientist, who knows how to design an experiment on N subjects; the N subjects will receive a randomized treatment; blinded judges will classify the subject outcomes; and then we'll run the results through a computer and see if the results are statistically significant at the 0.05 level.  Now this is not just a ritualized tradition.  This is not a point of arbitrary etiquette like using the correct fork for salad.  It is a ritualized tradition for testing hypotheses experimentally.  Why should you test your hypothesis experimentally?  Because you know the journal will demand so before it publishes your paper?  Because you were trained to do it in college?  Because everyone else says in unison that it's important to do the experiment, and they'll look at you funny if you say otherwise?

\n

No: because, in order to map a territory, you have to go out and look at the territory.  It isn't possible to produce an accurate map of a city while sitting in your living room with your eyes closed, thinking pleasant thoughts about what you wish the city was like.  You have to go out, walk through the city, and write lines on paper that correspond to what you see.  It happens, in miniature, every time you look down at your shoes to see if your shoelaces are untied.  Photons arrive from the Sun, bounce off your shoelaces, strike your retina, are transduced into neural firing frequencies, and are reconstructed by your visual cortex into an activation pattern that is strongly correlated with the current shape of your shoelaces.  To gain new information about the territory, you have to interact with the territory.  There has to be some real, physical process whereby your brain state ends up correlated to the state of the environment.  Reasoning processes aren't magic; you can give causal descriptions of how they work.  Which all goes to say that, to find things out, you've got to go look.

\n

Now what are we to think of a scientist who seems competent inside the laboratory, but who, outside the laboratory, believes in a spirit world?  We ask why, and the scientist says something along the lines of:  \"Well, no one really knows, and I admit that I don't have any evidence - it's a religious belief, it can't be disproven one way or another by observation.\"  I cannot but conclude that this person literally doesn't know why you have to look at things.  They may have been taught a certain ritual of experimentation, but they don't understand the reason for it - that to map a territory, you have to look at it - that to gain information about the environment, you have to undergo a causal process whereby you interact with the environment and end up correlated to it.  This applies just as much to a double-blind experimental design that gathers information about the efficacy of a new medical device, as it does to your eyes gathering information about your shoelaces.

\n

Maybe our spiritual scientist says:  \"But it's not a matter for experiment.  The spirits spoke to me in my heart.\"  Well, if we really suppose that spirits are speaking in any fashion whatsoever, that is a causal interaction and it counts as an observation.  Probability theory still applies.  If you propose that some personal experience of \"spirit voices\" is evidence for actual spirits, you must propose that there is a favorable likelihood ratio for spirits causing \"spirit voices\", as compared to other explanations for \"spirit voices\", which is sufficient to overcome the prior improbability of a complex belief with many parts.  Failing to realize that \"the spirits spoke to me in my heart\" is an instance of \"causal interaction\", is analogous to a physics student not realizing that a \"medium with an index\" means a material such as water.
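A sketch of the calculation that paragraph gestures at; every number below is invented purely for illustration.

<pre><code>from math import log10

prior_odds = 1e-12              # assumed prior odds for the many-part spirit hypothesis
p_voices_if_spirits = 0.5       # assumed
p_voices_if_no_spirits = 0.01   # inner speech, dreams, suggestion, etc. (assumed)

likelihood_ratio = p_voices_if_spirits / p_voices_if_no_spirits   # 50
posterior_odds = prior_odds * likelihood_ratio

print(likelihood_ratio)         # 50: a real but modest amount of evidence
print(log10(posterior_odds))    # about -10.3: nowhere near enough to believe
</code></pre>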

\n

It is easy to be fooled, perhaps, by the fact that people wearing lab coats use the phrase \"causal interaction\" and that people wearing gaudy jewelry use the phrase \"spirits speaking\".  Discussants wearing different clothing, as we all know, demarcate independent spheres of existence - \"separate magisteria\", in Stephen Jay Gould's immortal blunder of a phrase.  Actually, \"causal interaction\" is just a fancy way of saying, \"Something that makes something else happen\", and probability theory doesn't care what clothes you wear.

\n

In modern society there is a prevalent notion that spiritual matters can't be settled by logic or observation, and therefore you can have whatever religious beliefs you like.  If a scientist falls for this, and decides to live their extralaboratorial life accordingly, then this, to me, says that they only understand the experimental principle as a social convention.  They know when they are expected to do experiments and test the results for statistical significance.  But put them in a context where it is socially conventional to make up wacky beliefs without looking, and they just as happily do that instead.

\n

The apprentice shepherd is told that if \"seven\" sheep go out, and \"eight\" sheep go out, then \"fifteen\" sheep had better come back in.  Why \"fifteen\" instead of \"fourteen\" or \"three\"?  Because otherwise you'll get no dinner tonight, that's why!  So that's professional training of a kind, and it works after a fashion - but if social convention is the only reason why seven sheep plus eight sheep equals fifteen sheep, then maybe seven apples plus eight apples equals three apples.  Who's to say that the rules shouldn't be different for apples?

\n

But if you know why the rules work, you can see that addition is the same for sheep and for apples.  Isaac Newton is justly revered, not for his outdated theory of gravity, but for discovering that - amazingly, surprisingly - the celestial planets, in the glorious heavens, obeyed just the same rules as falling apples.  In the macroscopic world - the everyday ancestral environment - different trees bear different fruits, different customs hold for different people at different times.  A genuinely unified universe, with stationary universal laws, is a highly counterintuitive notion to humans!  It is only scientists who really believe it, though some religions may talk a good game about the \"unity of all things\".

\n

As Richard Feynman put it:

\n
\n

\"If we look at a glass closely enough we see the entire universe. There are the things of physics: the twisting liquid which evaporates depending on the wind and weather, the reflections in the glass, and our imagination adds the atoms. The glass is a distillation of the Earth's rocks, and in its composition we see the secret of the universe's age, and the evolution of the stars. What strange array of chemicals are there in the wine? How did they come to be? There are the ferments, the enzymes, the substrates, and the products. There in wine is found the great generalization: all life is fermentation. Nobody can discover the chemistry of wine without discovering, as did Louis Pasteur, the cause of much disease. How vivid is the claret, pressing its existence into the consciousness that watches it! If our small minds, for some convenience, divide this glass of wine, this universe, into parts — physics, biology, geology, astronomy, psychology, and so on — remember that Nature does not know it! So let us put it all back together, not forgetting ultimately what it is for. Let it give us one more final pleasure: drink it and forget it all!\"

\n
\n

A few religions, especially the ones invented or refurbished after Isaac Newton, may profess that \"everything is connected to everything else\".  (Since there is a trivial isomorphism between graphs and their complements, this profound wisdom conveys exactly the same useful information as a graph with no edges.)  But when it comes to the actual meat of the religion, prophets and priests follow the ancient human practice of making everything up as they go along.  And they make up one rule for women under twelve, another rule for men over thirteen; one rule for the Sabbath and another rule for weekdays; one rule for science and another rule for sorcery...

\n

Reality, we have learned to our shock, is not a collection of separate magisteria, but a single unified process governed by mathematically simple low-level rules.  Different buildings on a university campus do not belong to different universes, though it may sometimes seem that way.  The universe is not divided into mind and matter, or life and nonlife; the atoms in our heads interact seamlessly with the atoms of the surrounding air.  Nor is Bayes's Theorem different from one place to another.

\n

If, outside of their specialist field, some particular scientist is just as susceptible as anyone else to wacky ideas, then they probably never did understand why the scientific rules work.  Maybe they can parrot back a bit of Popperian falsificationism; but they don't understand on a deep level, the algebraic level of probability theory, the causal level of cognition-as-machinery. They've been trained to behave a certain way in the laboratory, but they don't like to be constrained by evidence; when they go home, they take off the lab coat and relax with some comfortable nonsense.  And yes, that does make me wonder if I can trust that scientist's opinions even in their own field - especially when it comes to any controversial issue, any open question, anything that isn't already nailed down by massive evidence and social convention.

\n

Maybe we can beat the proverb - be rational in our personal lives, not just our professional lives.  We shouldn't let a mere proverb stop us:  \"A witty saying proves nothing,\" as Voltaire said.  Maybe we can do better, if we study enough probability theory to know why the rules work, and enough experimental psychology to see how they apply in real-world cases - if we can learn to look at the water.  An ambition like that lacks the comfortable modesty of being able to confess that, outside your specialty, you're no better than anyone else.  But if our theories of rationality don't generalize to everyday life, we're doing something wrong.  It's not a different universe inside and outside the laboratory.

\n

Addendum:  If you think that (a) science is purely logical and therefore opposed to emotion, or (b) that we shouldn't bother to seek truth in everyday life, see \"Why Truth?\"  For new readers, I also recommend \"Twelve Virtues of Rationality.\"

" } }, { "_id": "fXbRhjz3yEF9DLJfE", "title": "Some Claims Are Just Too Extraordinary", "pageUrl": "https://www.lesswrong.com/posts/fXbRhjz3yEF9DLJfE/some-claims-are-just-too-extraordinary", "postedAt": "2007-01-20T07:14:56.000Z", "baseScore": 37, "voteCount": 27, "commentCount": 12, "url": null, "contents": { "documentId": "fXbRhjz3yEF9DLJfE", "html": "

"I would sooner believe that two Yankee professors would lie, than that stones would fall from heaven."

\n\n

-- Thomas Jefferson, on meteors

\n\n

"How would I explain the event of my left arm being replaced by a blue tentacle?  The answer is that I wouldn't.  It isn't going to happen."

\n\n

-- Eliezer Yudkowsky, "A Technical Explanation of Technical Explanation"

\n\n

"If a ship landed in my yard and LGMs stepped out, I’d push past their literature and try to find the cable that dropped the saucer on my roses. Lack of a cable or any significant burning to the flowers, I’d then grab a hammer and start knocking about in the ship till I was convinced that nothing said “Intel Inside.” Then when I discovered a “Flux Capacitor” type thing I would finally stop and say, “Hey, cool gadget!” Assuming the universal benevolence of the LGMs, I’d yank it out and demand from the nearest “Grey” (they are the tall nice ones), “where the hell did this come from?” Greys don’t talk, they communicate via telepathy, so I’d ignore the voice inside my head. Then stepping outside the saucer and sitting in a lawn chair, I’d throw pebbles at the aliens till I was sure they were solid. Then I’d look down at the “Flux Capacitor” and make sure it hadn’t morphed into my bird feeder. Finally, with proof in my hand and aliens sitting on my deck (they’d be offered beers, though I’ve heard that they absorb energy like a plant) I’d grab my cell phone and tell my doctor that I’m having a serious manic episode with full-blown visual hallucinations."

\n\n

-- Peter K. Bertine, on the Extropian mailing list

We underestimate the power of science, and overestimate the power of personal observation.  A peer-reviewed, journal-published, replicated report is worth far more than what you see with your own eyes.  Our own eyes can deceive us.  People can fool themselves, hallucinate, and even go insane.  The controls on publication in major journals are more trustworthy than the very fabric of your brain.  If you see with your own eyes that the sky is blue, and Science says it is green, then sir, I advise that you trust in Science.

\n\n

This is not what most scientists will tell you, of course; but I think it is pragmatically true.  Because in real life, what happens is that your eyes have a little malfunction and decide that the sky is green, and science will tell you that the sky is blue.

\n\n

A replicated scientific report is a special kind of extraordinary claim, designed by the surrounding process to be more extraordinary evidence than simple verbal claims.  It is more extraordinary evidence because the surrounding process - and I would place a far greater premium on the replication than on the peer review, by the way - is constructed to deny entrance to claims that are in fact false.  In this way, the replicated scientific report becomes capable of overcoming greater burdens of prior improbability.

\n\n

There are some burdens of prior improbability so great that simple verbal claims cannot overcome them.  I would not believe someone who claimed that their coffee was disobeying conservation of angular momentum - but I might believe the same report published in Physics Today, with at least three replications.  Who would believe in quantum mechanics if a stranger walked up to us on the street and whispered it to us?
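The same point in odds form, with all numbers invented: stack the evidence up as likelihood ratios and see whether it overcomes the prior improbability.

<pre><code>from math import log10

prior_odds = 1e-9        # assumed prior odds that the coffee really violated conservation of angular momentum
stranger_lr = 10         # a verbal claim: cheap to produce even when false
replication_lr = 1e4     # assumed strength of one published, replicated report

print(log10(prior_odds * stranger_lr))           # -8: the simple verbal claim falls far short
print(log10(prior_odds * replication_lr ** 3))   # +3: three replications carry the burden
</code></pre>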

\n\n

Are there some burdens of prior improbability so great that science itself cannot overcome them?

\n\n

What about the claim that 2 + 2 = 5?

\n\n

What about journals that claim to publish replicated reports of ESP?

\n\n

Sometimes, even claims deliberately constructed to be extraordinary evidence end up just not being extraordinary enough.

" } }, { "_id": "aiQabnugDhcrFtr9n", "title": "The Power of Intelligence", "pageUrl": "https://www.lesswrong.com/posts/aiQabnugDhcrFtr9n/the-power-of-intelligence", "postedAt": "2007-01-01T20:00:47.727Z", "baseScore": 126, "voteCount": 93, "commentCount": 7, "url": null, "contents": { "documentId": "aiQabnugDhcrFtr9n", "html": "

In our skulls we carry around three pounds of slimy, wet, grayish tissue, corrugated like crumpled toilet paper.

You wouldn’t think, to look at the unappetizing lump, that it was some of the most powerful stuff in the known universe. If you’d never seen an anatomy textbook, and you saw a brain lying in the street, you’d say “Yuck!” and try not to get any of it on your shoes. Aristotle thought the brain was an organ that cooled the blood. It doesn’t look dangerous.

Five million years ago, the ancestors of lions ruled the day, the ancestors of wolves roamed the night. The ruling predators were armed with teeth and claws—sharp, hard cutting edges, backed up by powerful muscles. Their prey, in self-defense, evolved armored shells, sharp horns, toxic venoms, camouflage. The war had gone on through hundreds of eons and countless arms races. Many a loser had been removed from the game, but there was no sign of a winner. Where one species had shells, another species would evolve to crack them; where one species became poisonous, another would evolve to tolerate the poison. Each species had its private niche—for who could live in the seas and the skies and the land at once? There was no ultimate weapon and no ultimate defense and no reason to believe any such thing was possible.

Then came the Day of the Squishy Things.

They had no armor. They had no claws. They had no venoms.

If you saw a movie of a nuclear explosion going off, and you were told an Earthly life form had done it, you would never in your wildest dreams imagine that the Squishy Things could be responsible. After all, Squishy Things aren’t radioactive.

In the beginning, the Squishy Things had no fighter jets, no machine guns, no rifles, no swords. No bronze, no iron. No hammers, no anvils, no tongs, no smithies, no mines. All the Squishy Things had were squishy fingers—too weak to break a tree, let alone a mountain. Clearly not dangerous. To cut stone you would need steel, and the Squishy Things couldn’t excrete steel. In the environment there were no steel blades for Squishy fingers to pick up. Their bodies could not generate temperatures anywhere near hot enough to melt metal. The whole scenario was obviously absurd.

And as for the Squishy Things manipulating DNA—that would have been beyond ridiculous. Squishy fingers are not that small. There is no access to DNA from the Squishy level; it would be like trying to pick up a hydrogen atom. Oh, technically it’s all one universe, technically the Squishy Things and DNA are part of the same world, the same unified laws of physics, the same great web of causality. But let’s be realistic: you can’t get there from here.

\rEven if Squishy Things could someday evolve to do any of those feats, it would take thousands of millennia. We have watched the ebb and flow of Life through the eons, and let us tell you, a year is not even a single clock tick of evolutionary time. Oh, sure, technically a year is six hundred trillion trillion trillion trillion Planck intervals. But nothing ever happens in less than six hundred million trillion trillion trillion trillion Planck intervals, so it’s a moot point. The Squishy Things, as they run across the savanna now, will not fly across continents for at least another ten million years; no one could have that much sex.

Now explain to me again why an Artificial Intelligence can’t do anything interesting over the Internet unless a human programmer builds it a robot body.

I have observed that someone’s flinch-reaction to “intelligence”—the thought that crosses their mind in the first half-second after they hear the\r word “intelligence”—often determines their flinch-reaction to the notion of an intelligence explosion. Often they look up the keyword “intelligence” and retrieve the concept booksmarts—a mental image of the Grand Master chess player who can’t get a date, or a college professor who can’t survive outside academia.\r

“It takes more than intelligence to succeed professionally,” people say, as if charisma resided in the kidneys, rather than the brain. “Intelligence is no match for a gun,” they say, as if guns had grown on trees. “Where will an Artificial Intelligence get money?” they ask, as if the first Homo sapiens had found dollar bills fluttering down from the sky, and used them at convenience stores already in the forest. The human species was not born into a market economy. Bees won’t sell you honey if you offer them an electronic funds transfer. The human species imagined money into existence, and it exists—for us, not mice or wasps—because we go on believing in it.

I keep trying to explain to people that the archetype of intelligence is not Dustin Hoffman in Rain Man. It is a human being, period. It is squishy things that explode in a vacuum, leaving footprints on their moon. Within that gray wet lump is the power to search paths through the great web of causality, and find a road to the seemingly impossible—the power sometimes called creativity.

People—venture capitalists in particular—sometimes ask how, if the Machine Intelligence Research Institute successfully builds a true AI, the results will be commercialized. This is what we call a framing problem.

Or maybe it’s something deeper than a simple clash of assumptions. With a bit of creative thinking, people can imagine how they would go about travelling to the Moon, or curing smallpox, or manufacturing computers. To imagine a trick that could accomplish all these things at once seems downright impossible—even though such a power resides only a few centimeters behind their own eyes. The gray wet thing still seems mysterious to the gray wet thing.

And so, because people can’t quite see how it would all work, the power of intelligence seems less real; harder to imagine than a tower of fire sending a ship to Mars. The prospect of visiting Mars captures the imagination. But if one should promise a Mars visit, and also a grand unified theory of physics, and a proof of the Riemann Hypothesis, and a cure for obesity, and a cure for cancer, and a cure for aging, and a cure for stupidity—well, it just sounds wrong, that’s all.

And well it should. It’s a serious failure of imagination to think that intelligence is good for so little. Who could have imagined, ever so long ago, what minds would someday do? We may not even know what our real problems are.

But meanwhile, because it’s hard to see how one process could have such diverse powers, it’s hard to imagine that one fell swoop could solve even such prosaic problems as obesity and cancer and aging.

Well, one trick cured smallpox and built airplanes and cultivated wheat and tamed fire. Our current science may not agree yet on how exactly the trick works, but it works anyway. If you are temporarily ignorant about a phenomenon, that is a fact about your current state of mind, not a fact about the phenomenon. A blank map does not correspond to a blank territory. If one does not quite understand that power which put footprints on the Moon, nonetheless, the footprints are still there—real footprints, on a real Moon, put there by a real power. If one were to understand deeply enough, one could create and shape that power. Intelligence is as real as electricity. It’s merely far more powerful, far more dangerous, has far deeper implications for the unfolding story of life in the universe—and it’s a tiny little bit harder to figure out how to build a generator.


The first publication of this post is here.

" } } ] } } }