# A Word to the Resourceful Paul Graham has a [new article](http://www.paulgraham.com/word.html) out. Everything he's written is worth reading if you're at all interested in startups, but this article seemed explicitly connected to rationality, by identifying an area where people who are more likely to update / less l...
https://www.lesswrong.com/posts/cHqqKhjkEiF7DdZFW/a-word-to-the-resourceful
# Some potential dangers of rationality training [Taylor & Brown (1988)](http://www.lrsi.uqam.ca/documents/PSY9520/05%20-%20l'estime%20de%20soi%202%20-%20ses%20fonctions,%20cons%E9quences,%20et%20processus%20alternatifs/TAYLOR~1.PDF) argued that several kinds of irrationality are good for you — for example that overco...
https://www.lesswrong.com/posts/MkX8D44PFoiNdLMkG/some-potential-dangers-of-rationality-training
# The Dark Arts: A Beginner's Guide The Dark Arts ============= So you've been reading this site and learning many valuable tools for becoming more rational. You're beginning to become irritated at the irrational behavior of the average person. You've noticed that many people refuse to accept even highly compelling a...
https://www.lesswrong.com/posts/Zz986H9P3WJoh5DNb/the-dark-arts-a-beginner-s-guide
# Politicians' family as signalling In the US, if you look at political candidates in public view, they often appear with family in tow. A candidate's family plays an important role in election campaigns. I'm from India. There, politicians' families play little role in election campaigns (unless the family membe...
https://www.lesswrong.com/posts/qkeTE53Qdcb6qegaJ/politicians-family-as-signalling
# New x-risk organizations Of course: [FHI](http://www.fhi.ox.ac.uk/), [FutureTech](http://lesswrong.com/lw/8gg/new_ai_risks_research_institute_at_oxford/), the [Singularity Institute](http://intelligence.org/), and [Leverage Research](http://www.leverageresearch.org/). New: the [Global Catastrophic Risk Institute](h...
https://www.lesswrong.com/posts/GuKvMA5gBra9n4BiM/new-x-risk-organizations
# [Link] The Hyborian Age Yay, a [cool new post](http://westhunt.wordpress.com/2012/01/20/the-hyborian-age/) is up on the West Hunter blog! It is written by [Gregory Cochran](http://en.wikipedia.org/wiki/Gregory_Cochran) and [Henry Harpending](http://en.wikipedia.org/wiki/Henry_Harpending), with whom most LWers are probabl...
https://www.lesswrong.com/posts/8zqy3XjQa5gk92wa2/link-the-hyborian-age
# The Singularity Institute is hiring an executive assistant near Berkeley The Singularity Institute is hiring an executive assistant for Executive Director [Luke Muehlhauser](http://lukeprog.com/). Right now his limiter (besides the need for _some_ sleep and recreation) is not (1) cognitive exhaustion after a certai...
https://www.lesswrong.com/posts/Lz64onM6tkynPKh5R/the-singularity-institute-is-hiring-an-executive-assistant
# How would you talk a stranger off the ledge? Last month, two people far at the periphery of my social circles threatened suicide. Seems like a sign for me to learn some ledge-fu. I reviewed the stuff I'd learned back in high school ("Listen." "Be supportive." "Don't argue." "Etc etc etc.") I have trouble belie...
https://www.lesswrong.com/posts/gCtyFfDfLpY9DKSa3/how-would-you-talk-a-stranger-off-the-ledge
# The Human's Hidden Utility Function (Maybe) Suppose it turned out that humans violate the axioms of [VNM rationality](http://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem) (and therefore [don't act like they have utility functions](/lw/6da/do_humans_want_things/)) because there are _three_ va...
https://www.lesswrong.com/posts/fa5o2tg9EfJE77jEQ/the-human-s-hidden-utility-function-maybe
# Tell LessWrong about your charitable donations When I was a graduate student at the University of Notre Dame, I received a monthly living stipend of roughly $1,600. I decided to commit to giving ~10% of it to charity, and I had read in Peter Singer's book _The Life You Can Save_ that one of the most efficient charit...
https://www.lesswrong.com/posts/ModNxgShqd69zTajZ/tell-lesswrong-about-your-charitable-donations
# How I Ended Up Non-Ambitious I have a confession to make. My life hasn’t changed all that much since I started reading Less Wrong. Hindsight bias makes it hard to tell, I guess, but I feel like pretty much the same person, or at least the person I would have evolved towards anyway, whether or not I spent those years...
https://www.lesswrong.com/posts/BFamedwSgRdGGKXQQ/how-i-ended-up-non-ambitious
# Raising awareness of existential risks - perhaps explaining at "personally stocking canned food" level? Many articles have been written on the topic of [existential risks](http://wiki.lesswrong.com/wiki/Existential_risk), and the need of greater public awareness. Here's my take - the existential risks are perhaps ea...
https://www.lesswrong.com/posts/yLcHdPrcQdNJQrgSB/raising-awareness-of-existential-risks-perhaps-explaining-at
# Thinking Bayesianically, with Lojban "Do not walk to the truth, but dance. On each and every step of that dance your foot comes down in exactly the right spot. Each piece of evidence shifts your beliefs by exactly the right amount, neither more nor less. What is exactly the right amount? To calculate this you must s...
https://www.lesswrong.com/posts/BS8dYNN5D6kLhvdXR/thinking-bayesianically-with-lojban
# Michael Nielsen explains Judea Pearl's causality [Michael Nielsen](https://en.wikipedia.org/wiki/Michael_Nielsen) has posted [a long essay](http://www.michaelnielsen.org/ddi/if-correlation-doesnt-imply-causation-then-what-does/) explaining his understanding of the Pearlean causal DAG model. I don't understand more t...
https://www.lesswrong.com/posts/pMzQ4zQYgY8jcckSi/michael-nielsen-explains-judea-pearl-s-causality
# Sunk Costs Fallacy Fallacy I have a problem with never finishing things that I want to work on. I get enthusiastic about them for a while, but then find something else to work on. This problem seems to be powered partially by my sunk costs fallacy hooks. When faced with the choice of finishing my current project or...
https://www.lesswrong.com/posts/dHJuevTkH92sjLYXe/sunk-costs-fallacy-fallacy
# Urges vs. Goals: The analogy to anticipation and belief Partially in response to: [The curse of identity](/lw/8gv/the_curse_of_identity/) Related to: [Humans are not automatically strategic](/lw/2p5/humans_are_not_automatically_strategic/), [That other kind of status](/lw/1kr/that_other_kind_of_status/), [Approving...
https://www.lesswrong.com/posts/wmjPGE8TZKNLSKzm4/urges-vs-goals-the-analogy-to-anticipation-and-belief
# Occam alternatives One of the most delightful things I learned while on LessWrong was the Solomonoff/Kolmogorov formalization of Occam's Razor.  Added to what had previously been only an aesthetic heuristic to me were mathematical rigor, proofs of optimality of certain kinds, and demonstrations of utility.  For seve...
https://www.lesswrong.com/posts/dw8w3PNy9JanSJc8s/occam-alternatives
# I've had it with those dark rumours about our culture rigorously suppressing opinions You folks probably know how some posters around here, specifically Vladimir_M, often make statements to the effect of: "There's an opinion on such-and-such topic that's so against the memeplex of Western culture, we can't even dis...
https://www.lesswrong.com/posts/T8Huvskn2Ab5m8wkx/i-ve-had-it-with-those-dark-rumours-about-our-culture
# Shit Rationalists Say? I assume everyone has run across at least one of the "Shit X's Say" format of videos? Such as [Shit Skeptics Say](http://www.youtube.com/watch?v=NjyGeDKhEoM&feature=player_embedded). When done right it totally triggers the in-group warm-fuzzies. (Not to be confused with the nearly-identically ...
https://www.lesswrong.com/posts/8xQ8hTxo6Rk2qqZfj/shit-rationalists-say
# "Politics is the mind-killer" is the mind-killer Summary: I propose we somewhat relax our stance on political speech on Less Wrong. Related: [The mind-killer](/lw/ee/the_mindkiller/), [Mind-killer](http://wiki.lesswrong.com/wiki/Mind-killer) A recent series of posts by a well-meaning troll ([example](/lw/gw/politi...
https://www.lesswrong.com/posts/uxsTyFLtSmxmniTzt/politics-is-the-mind-killer-is-the-mind-killer
# What's going on here? Edit: Please stop upvoting me. I'm beyond where I was when this whole mess started. Thanks, and I feel really stupid for the way I felt when I wrote this first paragraph now. First of all, I'm sorry for this thread. I largely expect it to get downvoted, but I am considering leaving LW over thi...
https://www.lesswrong.com/posts/gW6dww3TA9o8wLoh9/what-s-going-on-here
# AI Box Log Here's the log of the AI Box experiment that just finished, with MileyCyrus as the AI and me as the Gatekeeper. The AI was not let out of the box. (9:33:25 PM) Dorikka: I may need to get up for a minute while we're playing, but I'll keep it as short as possible. I'll just give you the time back on the en...
https://www.lesswrong.com/posts/Y7uR5WqnoG629JgLn/ai-box-log
# Fiction: LW-inspired scenelet A short science-fictional scene I just wrote, after reading about some real and actual scientific research. I'd love to turn this, or something like it, into an actual scene in Dee's life story, I just can't think of a good enough story to insert it in, and so I present it on its own fo...
https://www.lesswrong.com/posts/YzzwZaefMa5ep3Jm3/fiction-lw-inspired-scenelet
# Describe the ways you can hear/see/feel yourself think. To avoid constantly [generalizing from one example](/lw/dr/generalizing_from_one_example/) when it comes to human thought, I think we need a survey of the ways people can reflect on their thought process, as a subset of the ways people can think. Before having h...
https://www.lesswrong.com/posts/4PsznNg89YDx55zNv/describe-the-ways-you-can-hear-see-feel-yourself-think
# Evidence For Simulation The recent [article](http://www.overcomingbias.com/2012/01/silence-suggests-sim.html) on [overcomingbias](http://www.overcomingbias.com) suggesting the Fermi paradox might be evidence our universe is indeed a simulation prompted me to wonder how one would go about gathering evidence for or ag...
https://www.lesswrong.com/posts/RAdmQD2ajdxGh4ebc/evidence-for-simulation
# Help! Name suggestions needed for Rationality-Inst! The Singularity Institute wants to spin off a separate rationality-related organization.  (If it's not obvious what this would do, it would e.g. develop things like the [rationality katas](/lw/9hb/position_design_and_write_rationality_curriculum/) as material for l...
https://www.lesswrong.com/posts/YzzcrM92toD9dudau/help-name-suggestions-needed-for-rationality-inst
# The Substitution Principle **Partial re-interpretation of:** [The Curse of Identity](/lw/8gv/the_curse_of_identity/) **Also related to:** [Humans Are Not Automatically Strategic](/lw/2p5/humans_are_not_automatically_strategic/), [The Affect Heuristic](/lw/lg/the_affect_heuristic/), [The Planning Fallacy](/lw/jg/pl...
https://www.lesswrong.com/posts/LHtMNz7ua8zu4rSZr/the-substitution-principle
# HPMOR: What could've been done better? _**Warning:** As per the [official spoiler policy](/lw/2tr/harry_potter_and_the_methods_of_rationality/2v1l), the following discussion may contain **unmarked spoilers** for up to the current chapter of_ the Methods of Rationality. _Proceed at your own risk._ Assume HPMOR was w...
https://www.lesswrong.com/posts/QtEhktxBExYAiDeLj/hpmor-what-could-ve-been-done-better
# The Neglected Virtue of Curiosity > Curiosity is the most superficial of all the affections; it changes its objects perpetually; it has an appetite which is sharp, but very easily satisfied; and it has always an appearance of giddiness, restlessness and anxiety. \- Edmund Burke, [A Philosophical Enquiry into the Or...
https://www.lesswrong.com/posts/eCZjrm9JBDSGvEA9o/the-neglected-virtue-of-curiosity
# The Ellsberg paradox and money pumps **Followup to**: [The Savage theorem and the Ellsberg paradox](/lw/9e4/the_savage_theorem_and_the_ellsberg_paradox) In the previous post, I presented a simple version of Savage's theorem, and I introduced the Ellsberg paradox. At the end of the post, I mentioned a strong Bayesia...
https://www.lesswrong.com/posts/qtkiFYeF3gdMsRPHG/the-ellsberg-paradox-and-money-pumps
# The Personality of (great/creative) Scientists: Open and Conscientious We’ve discussed the [Big Five](http://en.wikipedia.org/wiki/Big_Five_personality_traits) in the past, such as the relationship of [Openness](http://en.wikipedia.org/wiki/Openness_to_experience) to [parasites & signaling](/lw/82g/on_the_openness_p...
https://www.lesswrong.com/posts/9vKRHBCLvwqN26LLs/the-personality-of-great-creative-scientists-open-and
# [META] My Negative Results Yesterday, I made a post asking if anyone else had noticed LW being particularly slow. I offered to collect data on this, and was fairly sure (probably about 80%) that it would show that LW loaded slower than other webpages. I took the post down after about 20 seconds (sorry if I confused ...
https://www.lesswrong.com/posts/X5JXPSNXtPK929wH3/meta-my-negative-results
# Toward "timeless" continuous-time causal models I'm a bit at a loss as to where to put this. I know the inferential gap is too great for it to go anywhere but here, and I know that the number of people on LW interested in this subject could be counted on one hand. The prerequisites would almost certainly be [Timeles...
https://www.lesswrong.com/posts/3dQxdvx5BkRY3vW72/toward-timeless-continuous-time-causal-models
# Efficient Charity: Cheap Utilons via bone marrow registration This topic is not really related to the things normally discussed here, but I think it's really important, and it might interest Less Wrongers, especially since many of us are interested in ethics and utility calculations that are essentially cost-benefit...
https://www.lesswrong.com/posts/QTMob5JPhhdgTf6RM/efficient-charity-cheap-utilons-via-bone-marrow-registration
# Faustian bargains and discounting I was reading TV Tropes on Hell, and it occurred to me: If your discounting was sufficiently hyperbolic, or indeed plain exponential with a low enough time preference, it would in some sense be rational to take a literal Faustian bargain. The integral to infinite time of some consta...
https://www.lesswrong.com/posts/SSB7BifpeXGW2xTJb/faustian-bargains-and-discounting
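Written out (a sketch; $u$ is a constant utility flow and $\rho > 0$ an exponential discount rate, both assumed here since the post's own numbers are truncated above), the integral the snippet gestures at is

$$\int_0^\infty u\,e^{-\rho t}\,dt \;=\; \frac{u}{\rho}.$$

So even an eternity of constant disutility $-d$ has the finite present value $-d/\rho$, which a large enough up-front payoff can outweigh; the steeper the discounting, the cheaper eternity looks.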
# Hacking Less Wrong made easy: Vagrant edition The Less Wrong Public Goods Team has already brought you an easy-to use [virtual machine](/lw/7fv/hacking_on_lesswrong_just_got_easier/) for hacking Less Wrong. But virtual boxes can cut both ways: on the one hand, you don't have to worry about setting things up yoursel...
https://www.lesswrong.com/posts/E3ZzFCcCF5ifm9zsG/hacking-less-wrong-made-easy-vagrant-edition
# Terminal Bias I've seen people on LessWrong taking cognitive structures that I consider to be biases as _terminal values_. Take risk aversion for example: Risk Aversion ------------- For a rational agent with goals that don't include "being averse to risk", risk aversion is a bias. The correct decision theory a...
https://www.lesswrong.com/posts/QGKFjaZNDtJnBTbxS/terminal-bias
# Against Utilitarianism: Sobel's attack on judging lives' goodness Luke tasked me with researching the following question > I‘d like to know if anybody has come up with a good response to any of the objections to ’full information’ or ‘ideal preference’ theories of value given in Sobel (1994). (My impression is “no....
https://www.lesswrong.com/posts/ovLpQqpRLtpXPdKKP/against-utilitarianism-sobel-s-attack-on-judging-lives
# Doing Science! Open Thread Experiment Results Early in the month I announced that I was doing an experiment: I was going to start two Open Threads in January (one on the 1st, and the other on the 15th) and compare the number of comments on these threads to those of other months. My hypothesis was that having two Ope...
https://www.lesswrong.com/posts/sXmSpsLbA3dKifGjh/doing-science-open-thread-experiment-results
# Is risk aversion really irrational ? _Disclaimer: this started as a comment to [Risk aversion vs. concave utility function](/lw/9oe/risk_aversion_vs_concave_utility_function) but it grew way too big so I turned it into a full-blown article. I posted it to main since I believe it to be useful enough, and since it rep...
https://www.lesswrong.com/posts/ecbpjmxc833roBxj3/is-risk-aversion-really-irrational
# Hugo Awards - HP:MoR (part 2) Summary: please [nominate](https://chicon.org/hugo/nominate.php) [HPMoR:Podcast](http://hpmor.libsyn.com/) for consideration to the brand-new [FanCast](http://www.thehugoawards.org/2012/01/2012-hugo-award-nominations-open/) category. The [Hugo Awards](http://en.wikipedia.org/wiki/Hugo_...
https://www.lesswrong.com/posts/scshGmDzXNswQxqjv/hugo-awards-hp-mor-part-2
# von Neumann probes and Dyson spheres: what exploratory engineering can tell us about the Fermi paradox Not entirely relevant to the main issues of lesswrong, but possibly still of interest: my talk entitled "[von Neumann probes and Dyson spheres: what exploratory engineering can tell us about the Fermi paradox](http...
https://www.lesswrong.com/posts/hA7kNKjnjLDp7DbKL/von-neumann-probes-and-dyson-spheres-what-exploratory
# Mini-review: 'Proving History: Bayes' Theorem and the Quest for the Historical Jesus' I recently received an advance review copy of historian and philosopher [Richard Carrier](http://richardcarrier.info/)'s new book, _[Proving History: Bayes' Theorem and the Quest for the Historical Jesus](http://www.amazon.com/Prov...
https://www.lesswrong.com/posts/e4DaC66P3cY7SHXCv/mini-review-proving-history-bayes-theorem-and-the-quest-for
# Automatic programming, an example Say that we have the following observational data: | **Planet** | **Aphelion** (000 km) | **Perihelion** (000 km) | **Orbit time** (days) | | --- | --- | --- | --- | | Mercury | 69,816 | 46,001 | 88 | | Venus | 108,942 | 107,476 | 225 | | Earth | 152,098 | 147,098 | 365 | | Mars | 249,...
https://www.lesswrong.com/posts/AcKhXCQLjMBvocZk5/automatic-programming-an-example
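A hedged illustration, not from the post itself: the regularity hiding in that table is Kepler's third law, $T^2 \propto a^3$. A minimal Python check over the three complete rows shows what an automatic-programming system would have to recover from the raw numbers:

```python
# Data copied from the post's table: (aphelion, perihelion) in thousands of km,
# orbital period in days. Mars is omitted because its row is truncated above.
PLANETS = {
    "Mercury": (69_816, 46_001, 88),
    "Venus": (108_942, 107_476, 225),
    "Earth": (152_098, 147_098, 365),
}

def kepler_ratio(aphelion, perihelion, period_days):
    """Return T^2 / a^3, approximating the semi-major axis a as the mean of the apsides."""
    a = (aphelion + perihelion) / 2
    return period_days ** 2 / a ** 3

ratios = [kepler_ratio(*row) for row in PLANETS.values()]
# The ratio is nearly constant across planets -- the "law" a program search
# would need to discover from the observations.
assert max(ratios) / min(ratios) < 1.01
```

The ratio agrees to within about half a percent across the three planets, despite the periods being rounded to whole days.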
# On Saying the Obvious Related to: [Generalizing from One Example](/lw/dr/generalizing_from_one_example/), [Connecting Your Beliefs (a call for help)](/r/discussion/lw/8ib/connecting_your_beliefs_a_call_for_help/), [Beware the Unsurprised](/lw/ht/beware_the_unsurprised/) The idea of this article is something I've ta...
https://www.lesswrong.com/posts/6phFYpNQH9SmWL9Jt/on-saying-the-obvious
# Looking for information on cryonics ###### Disclaimer: English is a foreign language for me. If you find any mistakes please inform me. I am currently looking for information on cryonics since I have the intention to sign up. My current organization of choice is the Cryonics Institute with their one-time fee of $1,...
https://www.lesswrong.com/posts/PSAXy3QfJoaJKFfiS/looking-for-information-on-cryonics
# Two Kinds of Irrationality and How to Avoid One of Them It seems to me that there are two kinds of human irrationality. One could be called "bug" irrationality, not referring to insects but rather bugs in the design of our minds, ways in which our minds could be better designed. This category includes things like [h...
https://www.lesswrong.com/posts/B3RGZg4KNdsaDC2J4/two-kinds-of-irrationality-and-how-to-avoid-one-of-them
# Knowledge ready for Ankification [Spaced repetition](http://www.gwern.net/Spaced%20repetition) is a powerful learning tactic, and [Anki](http://ankisrs.net/) is a good tool for it. There are some LW-relevant Anki decks [here](http://wiki.lesswrong.com/wiki/Spaced_repetition#SR_decks). But I wish there were more. Wh...
https://www.lesswrong.com/posts/fhDCYs6d4CgL8DiGe/knowledge-ready-for-ankification
# Elevator pitches/responses for rationality / AI I'm trying to develop a large set of elevator pitches / elevator responses for the two major topics of LW: rationality and AI. An elevator pitch lasts 20-60 seconds, and is not necessarily prompted by anything, or at most is prompted by something very vague like "So, ...
https://www.lesswrong.com/posts/aG6h9BSgqcC3hcyQC/elevator-pitches-responses-for-rationality-ai
# Formulas of arithmetic that behave like decision agents I wrote this post in the course of working through Vladimir Slepnev's [A model of UDT with a halting oracle](/lw/8wc/a_model_of_udt_with_halting_oracles/). This post contains some of the ideas of Slepnev's post, with all the proofs written out. The main formal ...
https://www.lesswrong.com/posts/yX9pMZik7r38da7Fc/formulas-of-arithmetic-that-behave-like-decision-agents
# Fireplace Delusions [LINK] Sam Harris, in his recent article called _[The Fireplace Delusion](http://www.samharris.org/blog/item/the-fireplace-delusion "THE FIREPLACE DELUSION")_, tries to make you feel what it's like to react to a cached belief being irreparably destroyed. Just in case you forgot what your apostasy ...
https://www.lesswrong.com/posts/L5vLGsXEZowoQZYix/fireplace-delusions-link
# lesswrong.ru domain for translation project? Hello. A group of fellow aspiring rationalists and I are working on a [Russian translation of LessWrong](http://lesswrong.ru/). The project is substantial; I think it is worthy of a paid second-level domain. Too bad rationality.ru is already taken. So are ration...
https://www.lesswrong.com/posts/XZynrL5XZcNswcym4/lesswrong-ru-domain-for-translation-project
# Is Sunk Cost Fallacy a Fallacy? I just finished the first draft of my essay, ["Are Sunk Costs Fallacies?"](http://www.gwern.net/Sunk%20cost); there is still material I need to go through, but the bulk of the material is now there. The formatting is too gnarly to post here, so I ask everyone's forgiveness in clicking...
https://www.lesswrong.com/posts/QvuD7R5L5ABw9tDdD/is-sunk-cost-fallacy-a-fallacy
# [Link] Cognitive Sciences Stack Exchange opened This is probably of interest to many here: [Cognitive Sciences Stack Exchange](http://cogsci.stackexchange.com/). For those who aren't in the know, the [Stack Exchange](http://stackexchange.com/) family of forums is a set of sites where users may post questions and an...
https://www.lesswrong.com/posts/eYssXRkYeA9kFQjEa/link-cognitive-sciences-stack-exchange-opened
# The Singularity Institute needs remote researchers (writing skill not required) The Singularity Institute needs researchers capable of doing literature searches, critically analyzing studies, and summarizing their findings. The fields involved are mostly psychology (biases and debiasing, effective learning, goal-dir...
https://www.lesswrong.com/posts/LakrCAaj8rNss2q6j/the-singularity-institute-needs-remote-researchers-writing
# Sticky threads? It annoys me that there's no way to sticky a thread in the discussion section.  Therefore, I propose creating an **LW wiki page called "Stickies"**, where sticky-worthy threads would be linked to. Would that be acceptable? These are the threads I'm planning to add:  * the current [Welcome to LW ...
https://www.lesswrong.com/posts/reTSAg8usSfvErtyL/sticky-threads
# Diseased disciplines: the strange case of the inverted chart Imagine the following situation: you have come across numerous references to a paper purporting to show that the chances of successfully treating a disease **contracted at age 10** are substantially lower if the disease is **detected later**: somewhat lowe...
https://www.lesswrong.com/posts/4ACmfJkXQxkYacdLt/diseased-disciplines-the-strange-case-of-the-inverted-chart
# [LINK] Refuting common objections to cognitive enhancement I've tended to think that bioethics is maybe the most profoundly useless field in mainstream philosophy. I might sum it up by saying that it's superficially similar to machine ethics _except_ that the objects of its warnings and cautions are all unambiguousl...
https://www.lesswrong.com/posts/Bq9TZTW5rsia8R2Mf/link-refuting-common-objections-to-cognitive-enhancement
# Bayesian RPG system? This is one of those sleep-deprived middle-of-the-night ideas which I'm reasonably likely to regret posting in the morning once I really wake up - but which, at least at the moment, thinking on my more-corrupted-than-standard hardware, seems like a cool idea. Most role-playing games have a syst...
https://www.lesswrong.com/posts/3MGuPbtQRr7G83dH5/bayesian-rpg-system
# New book from leading neuroscientist in support of cryonics and mind uploading [Sebastian Seung](http://en.wikipedia.org/wiki/Sebastian_Seung)'s new book _[Connectome: How the Brain's Wiring Makes Us Who We Are](http://www.amazon.com/Connectome-How-Brains-Wiring-Makes/dp/0547508182/)_ is very well-written, and aimed...
https://www.lesswrong.com/posts/KgTDX9wEd4s3kubpr/new-book-from-leading-neuroscientist-in-support-of-cryonics
# Feed the spinoff heuristic! Follow-up to: [Parapsychology: the control group for science](/lw/1ib/parapsychology_the_control_group_for_science/) [Some Heuristics for Evaluating the Soundness of the Academic Mainstream in Unfamiliar Fields](/lw/4ba/some_heuristics_for_evaluating_the_soundness_of/) Recent renewed d...
https://www.lesswrong.com/posts/aYtgZTKJwREEwhDjg/feed-the-spinoff-heuristic
# 3^^^3 holes and <10^(3*10^31) pigeons (or vice versa) The reasoning about huge numbers of beings is a recurring theme here. [Knuth's up-arrow notation](http://en.wikipedia.org/wiki/Knuth%27s_up-arrow_notation) is often used, with 3^^^3 as the number of beings. I want to note that if a being is made of 10^30 parts, ...
https://www.lesswrong.com/posts/QvEogsrKJkY5ZNXeH/3-3-holes-and-less-than-10-3-10-31-pigeons-or-vice-versa
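For reference, Knuth's up-arrow notation in the title can be pinned down with a short recursive definition (a sketch following the linked Wikipedia article; only tiny inputs are computable, and 3^^^3 itself is astronomically beyond any machine):

```python
def up(a, n, b):
    """a ↑^n b: n-fold iterated exponentiation, defined recursively."""
    if n == 1:
        return a ** b          # one arrow is ordinary exponentiation
    if b == 0:
        return 1               # base case of the iteration
    return up(a, n - 1, up(a, n, b - 1))

assert up(3, 1, 3) == 27               # 3^3
assert up(3, 2, 3) == 3 ** 3 ** 3      # 3^^3 = 3^(3^3) = 7,625,597,484,987
```

3^^^3 would then be `up(3, 3, 3)` = 3^^(3^^3), a tower of 7,625,597,484,987 threes.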
# Beyond Reasonable Doubt? - Richard Dawkins [link] [A new article looking at the jury system rationally and scientifically.](http://richarddawkins.net/articles/644734-beyond-reasonable-doubt) Excerpt: > Courtroom dramas accurately portray the suspense that hangs in the air when the jury returns and delivers its ver...
https://www.lesswrong.com/posts/CWfcGbSPioJBvrPEZ/beyond-reasonable-doubt-richard-dawkins-link
# My Algorithm for Beating Procrastination Part of the sequence: [The Science of Winning at Life](http://wiki.lesswrong.com/wiki/The_Science_of_Winning_at_Life) After three months of practice, I now use a single algorithm to beat procrastination most of the times I face it.^1^ It [probably won't work for you](/lw/9v/...
https://www.lesswrong.com/posts/Ty2tjPwv8uyPK9vrz/my-algorithm-for-beating-procrastination
# Topics from "Procedural Knowledge Gaps" About a year ago, we had a major discussion about [procedural knowledge gaps](/lw/453/procedural_knowledge_gaps/?sort=new). Here's what was covered... [How to tell whether food is fresh](/lw/453/procedural_knowledge_gaps/3hlf) [pjeb...
https://www.lesswrong.com/posts/oaggCpKMcZRTBujsa/topics-from-procedural-knowledge-gaps
# What happens when your beliefs fully propagate > _This is a very personal account of thoughts and events that have led me to a very interesting point in my life. Please read it as such. I present a lot of points, arguments, conclusions, etc..., but that's not what this is about._ I've started reading LW around spri...
https://www.lesswrong.com/posts/coEDeEaSEAkhic4s2/what-happens-when-your-beliefs-fully-propagate
# "The Book Of Mormon" or Belief In Belief, The Musical [This song is](http://www.youtube.com/watch?v=GfTlyuZphf8)... beautiful... Tragically so... It's like someone took some of the sequences here and made a checklist of everything that's wrong with religion, and why it still _works_ and why it can stir the heart of ...
https://www.lesswrong.com/posts/B9JWpDsQhrSeoNBnu/the-book-of-mormon-or-belief-in-belief-the-musical
# [LINK] The Hacker Shelf, free books. Yes, this is a repost from [Hacker News](http://news.ycombinator.com/), but I want to point out some books that are of LW-related interest. [The Hacker Shelf](http://hackershelf.com/) is a repository of freely available textbooks. Most of them are about computer programming or the ...
https://www.lesswrong.com/posts/pqC7raFbEpb9w3XNv/link-the-hacker-shelf-free-books
# Avoid misinterpreting your emotions A couple of weeks ago, I was suffering from insomnia. Eventually my inability to fall asleep turned into frustration, which then led to feelings of self-doubt about my life in general. Soon I was wondering about whether I would ever amount to anything, whether any of my various pr...
https://www.lesswrong.com/posts/oiGN8fLCqYyk2xJaT/avoid-misinterpreting-your-emotions
# Anyone want a LW Enhancement Suite? [Reddit Enhancement Suite](http://redditenhancementsuite.com/) If anyone cares, I could _probably_ port this to work on LW without too much trouble. Optimistically it'd just involve opening up the source and replacing _reddit.com_ with _lesswrong.com._ More realistically, there'd...
https://www.lesswrong.com/posts/oTef8ZoRNL7xbDnSF/anyone-want-a-lw-enhancement-suite
# Pooling resources for valuable actuarial calculations It occurred to me this morning that, if it's actually valuable, _generating true beliefs about the world_ must be someone's comparative advantage. If truth is instrumentally important, important people must be finding ways to pay to access it. I can think of seve...
https://www.lesswrong.com/posts/69ufvKmkCB2k9zS2P/pooling-resources-for-valuable-actuarial-calculations
# What is the advantage of the Kolmogorov complexity prior? As I understand it, Solomonoff induction works by the a procedure loosely stated as saying we start with a Kolmogorov complexity universal prior (formalized Occam's razor), then update using Bayesian inference any time we see new data. More precisely, suppos...
https://www.lesswrong.com/posts/uhKnChcpzK4B47DZp/what-is-the-advantage-of-the-kolmogorov-complexity-prior
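As a sketch of the procedure the snippet describes (notation assumed here: $K(h)$ is the length of the shortest program producing hypothesis $h$ on a fixed universal machine):

$$P(h) \;\propto\; 2^{-K(h)}, \qquad P(h \mid D) \;\propto\; P(D \mid h)\,2^{-K(h)},$$

i.e., start from the complexity-penalizing prior and reweight each hypothesis by its likelihood as data $D$ arrives.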
# The mathematics of reduced impact: help needed _A putative new idea for AI control; index [here](/lw/lt6/newish_ai_control_ideas/)._ _Thanks for help from [Paul Christiano](/user/paulfchristiano/)_ If [clippy](http://www.student-direct.co.uk/wp-content/uploads/2011/11/Clippy.jpg), the paper-clip maximising AI, goe...
https://www.lesswrong.com/posts/8Nwg7kqAfCM46tuHq/the-mathematics-of-reduced-impact-help-needed
# Possible Implications of the neural retrotransposons to the future [Retrotransposons](http://en.wikipedia.org/wiki/Retrotransposon) are small bits of genetic code that can copy themselves into other parts of the DNA strand. [They have been found](http://www.genomeweb.com/sequencing/targeted-sequencing-reveals-somatic...
https://www.lesswrong.com/posts/ZDh6o87tKQqMSACm4/possible-implications-of-the-neural-retrotransposons-to-the
# Counterfactual Coalitions Politics is the mind-killer; our opinions are largely formed on the basis of which tribes we want to affiliate with. What's more, when we first joined a tribe, we probably didn't properly vet the effects it would have on our cognition. One illustration of this is the apparently cont...
https://www.lesswrong.com/posts/MZz5WWMbT2yoXthqR/counterfactual-coalitions
# Hearsay, Double Hearsay, and Bayesian Updates Application of: [How Much Evidence Does It Take?](/lw/jn/how_much_evidence_does_it_take/) _(trigger warning: some description of domestic violence)_ **Summary:** I discuss the strengths and weaknesses of one way that the American legal system tries to assess and cope w...
https://www.lesswrong.com/posts/iyNLFkEEXoSrPYFng/hearsay-double-hearsay-and-bayesian-updates
# Seeking education This will not be a long post; I have a simple question to ask: if you wanted to educate yourself to graduate level in mathematics, but didn't actually want to go to university, what would you do? I would ask for text-book recommendations, but I don't want to limit your responses (however, bear in m...
https://www.lesswrong.com/posts/NowDgDpHF5WYnoyMG/seeking-education
# Impressions from a panel discussion on AGI I went to [this event](http://pitp.physics.ubc.ca/quant_lect/2012/tallinn.html) to listen to [Jaan Tallinn](http://en.wikipedia.org/wiki/Jaan_Tallinn), [Scott Aaronson](http://www.scottaaronson.com/) and [Don Eigler](http://en.wikipedia.org/wiki/Don_Eigler) discuss the AGI...
https://www.lesswrong.com/posts/HED7o4Auc67w25g3S/impressions-from-a-panel-discussion-on-agi
# Not insane. Unsane. _Edit :Excellent suggestions in the comments. Two of them stood out for me:_ 1. _"Untaught" may be better.  It is less connoted (if at all), conveys about the right meaning, and can be understood by about anyone ([thanks, shminux](/lw/a61/not_insane_unsane/5w8w)). _ 2. _Using a word to n...
https://www.lesswrong.com/posts/oL3usa3bWHtLa4fGZ/not-insane-unsane
# SI wants to hire a remote LaTeX guru The Singularity Institute needs to hire 1-2 people who are fluent in LaTeX to help us transform past and future SI publications from looking like [this](http://intelligence.org/upload/ai-resource-drives.pdf) to looking like [this](http://commonsenseatheism.com/wp-content/uploads/...
https://www.lesswrong.com/posts/RPeSzu6b9XPdT6my6/si-wants-to-hire-a-remote-latex-guru
# Brain structure and the halo effect Introduction --------------- When people on LW want to explain a bias, they often turn to [Evolutionary psychology](http://wiki.lesswrong.com/wiki/Evolutionary_psychology "Evolutionary Psychology"). For example, Lukeprog [writes](/lw/5bw/your_evolved_intuitions/ "Your evolved ...
https://www.lesswrong.com/posts/iNpzxsi8Xs4Gr8Egt/brain-structure-and-the-halo-effect
# Brain shrinkage in humans over past ~20 000 years - what did we lose? The human brain volume has been [shrinking](http://discovermagazine.com/2010/sep/25-modern-humans-smart-why-brain-shrinking) over the past 20 000 years or so, after millions of years of increase in volume. Not just the brain size, but the brain size ...
https://www.lesswrong.com/posts/HB3mKmkhx2bhn4Cca/brain-shrinkage-in-humans-over-past-20-000-years-what-did-we
# Quantified Health Prize results announced Follow-up to: [Announcing the Quantified Health Prize](/lw/8nx/announcing_the_quantified_health_prize/) I am happy to announce that Scott Alexander, better known on Less Wrong as Yvain, has won the first Quantified Health Prize, and Kevin Fischer has been awarded second pla...
https://www.lesswrong.com/posts/yMKfih99nSqRyphkD/quantified-health-prize-results-announced
# Longevity Insurance Let's say we (as a country) ban life insurance and health insurance as separate packages \[1\] and require them to be combined in something I'll call "Longevity Insurance".  The idea is that as a person/consumer, you can buy a "life expectancy" of 75 years, or 90 years, or whatever. In addition, ...
https://www.lesswrong.com/posts/qvb9y8BYHneRPchXo/longevity-insurance
# Ambiguity in cognitive bias names; a refresher This came up on the NYC list; I thought I would adapt it here. Cognitive biases have names. That's what makes them memetic. It's easier to think about something that has a name. Though I think the benefits outweigh the costs, there is also the risk of a [little Albert](ht...
https://www.lesswrong.com/posts/xXdeJpKnyLwKWncjP/ambiguity-in-cognitive-bias-names-a-refresher
# A brief tutorial on preferences in AI Preferences are important both [for rationality](/lw/8q8/urges_vs_goals_the_analogy_to_anticipation_and/) and [for Friendly AI](http://commonsenseatheism.com/wp-content/uploads/2011/11/Muehlhauser-Helm-The-Singularity-and-Machine-Ethics-draft.pdf), so preferences are a [major](h...
https://www.lesswrong.com/posts/Q9nhAp3q27xS3AgtB/a-brief-tutorial-on-preferences-in-ai
# [link] How habits control our behavior, and how to modify them The New York Times just recently ran an article titled "[How Companies Learn Your Secrets](https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html)", which was partially discussing data mining and partially discussing habits. I thought the bits ...
https://www.lesswrong.com/posts/jYA6PqD3QevKtp3Hn/link-how-habits-control-our-behavior-and-how-to-modify-them
# I believe it's doublethink This is my attempt to provide examples and a summarised view of the posts on "Against Doublethink" on the page [How To Actually Change Your Mind](http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind "How To Actually Change Your Mind"). ### What You Should Believe Let's assume I am sittin...
https://www.lesswrong.com/posts/7JKspJGWa8KDssfEM/i-believe-it-s-doublethink
# Draft of Muehlhauser & Salamon, 'Intelligence Explosion: Evidence and Import' Anna Salamon and I have finished a draft of "[Intelligence Explosion: Evidence and Import](http://commonsenseatheism.com/wp-content/uploads/2012/02/Muehlhauser-Salamon-Intelligence-Explosion-Evidence-and-Import.pdf)", under peer review for...
https://www.lesswrong.com/posts/aq3PKGmhBQaAqpu2k/draft-of-muehlhauser-and-salamon-intelligence-explosion
# [LINK] Shutting down the destructive internal monologue through transcranial direct current stimulation [Fast track to pure focus](http://www.newscientist.com/article/mg21328501.600-zap-your-brain-into-the-zone-fast-track-to-pure-focus.html?full=true) > Weisend, who is working on a US Defense Advanced Research Proj...
https://www.lesswrong.com/posts/t3NvPzQ6ACtXpW5um/link-shutting-down-the-destructive-internal-monologue
# [Link] An argument for Low-hanging fruit in Medicine Those of us who have found the arguments for stagnation in our near future by [Peter Thiel](/lw/7xm/peter_thiel_warns_of_upcoming_and_current/) and [Tyler Cowen](http://youtu.be/ed6gNSZRawY) pretty convincing, usually look only to the information and computer ind...
https://www.lesswrong.com/posts/HJ3FnqJEFXvYJu6o9/link-an-argument-for-low-hanging-fruit-in-medicine
# Logic: the science of algorithm evaluating algorithms **"Mathematical logic is the science of algorithms evaluating algorithms."** Do you think that this is an overly generalizing, far-fetched proposition or an almost trivial statement? Wait, don't cast your vote before the end of this short essay! It is ha...
https://www.lesswrong.com/posts/Q4KNyLh5T35uF8ReF/logic-the-science-of-algorithm-evaluating-algorithms
# [link] Faster than light neutrinos due to loose fiber optic cable. A mundane cause for a surprising result. Consider this unconfirmed for now, however unsurprising it sounds.  > According to sources familiar with the experiment, the 60-nanosecond discrepancy appears to come from a bad connection between a fiber op...
https://www.lesswrong.com/posts/bi6mEr7kvZrMsZY5W/link-faster-than-light-neutrinos-due-to-loose-fiber-optic
# Second order logic, in first order set-theory: what gives? _With thanks to Paul Christiano_ My [previous post](/lw/93q/completeness_incompleteness_and_what_it_all_means/) left one important issue unresolved. Second order logic needed to make use of set theory in order to work its magic, pin down a single copy of th...
https://www.lesswrong.com/posts/DavM6jKpPMk9Hr3PK/second-order-logic-in-first-order-set-theory-what-gives
# Superintelligent AGI in a box - a question. Just a question: how exactly are we supposed to know that the [AI in the box](http://yudkowsky.net/singularity/aibox) is superintelligent, general, etc.? If I were the AGI that wants out, I would not converse normally, and wouldn't do anything remotely like passing Turing tes...
https://www.lesswrong.com/posts/A4EBPx5htiuk22X4C/superintelligent-agi-in-a-box-a-question
# My Elevator Pitch for FAI This is a short introduction to the idea of FAI and existential risk from technology that I've used with decent success among my social circles, which consist mostly of mathematicians or at least people who have taken an introductory CS class. I'll do my best to dissect what I think is eff...
https://www.lesswrong.com/posts/5FWDzq2bAqdAqYYed/my-elevator-pitch-for-fai
# Get Curious > Being levels above in \[rationality\] means doing rationalist practice 101 much better than others \[just like\] being a few levels above in fighting means executing a basic front-kick much better than others. \- [lessdazed](/lw/7dy/a_rationalists_tale/4t1r) > I fear not the man who has practiced 10,...
https://www.lesswrong.com/posts/bGtdeqbgTzuLvZ5zn/get-curious
# Online education and Conscientiousness I've wondered for some time now what the effects of online education might be on gender and income inequality, specifically as online education interacts with IQ and Conscientiousness (compared with offline education). I ran into a study of a course done online and offline that...
https://www.lesswrong.com/posts/t8ZNjMSGjPcKqASkE/online-education-and-conscientiousness
# The Singularity Institute is hiring virtual assistants (work from home, from anywhere) The Singularity Institute is hiring 1-3 [virtual assistants](http://en.wikipedia.org/wiki/Virtual_assistant). (That is, personal assistants that work from home, anywhere with internet access.) Benefits: * Work directly with so...
https://www.lesswrong.com/posts/Xf5AKAbz44oQtdgiH/the-singularity-institute-is-hiring-virtual-assistants-work
# Acausal romance I just realized I haven't previously pointed the metaphysicians on Less Wrong to "[Possible Girls](https://sites.google.com/site/neiladri/Home/Sinhababu-PossibleGirls.pdf?attredirects=0&d=1)," a hilarious paper about acausal romance: > The ability to causally interact with your partner is important ...
https://www.lesswrong.com/posts/SsaT4b6Kn4yqeSpRd/acausal-romance