# Giving What We Can and 80,000 Hours are recruiting! Below is a message from my friends at Giving What We Can and 80,000 Hours, two key organizations in the [efficient charity](/lw/3gj/efficient_charity_do_unto_others/) or "[optimal philanthropy](/lw/6py/optimal_philanthropy_for_human_beings/)" movement. > [Giving W...
https://www.lesswrong.com/posts/nANpCXBib7SwdLBu8/giving-what-we-can-and-80-000-hours-are-recruiting
# Utopia in Manna **[Manna](http://marshallbrain.com/manna1.htm) is the title of a science fiction story that describes a near future transition to an automated society where humans are uneconomical. In the later chapters it describes in some detail a post-scarcity society.** There are several problems with it however...
https://www.lesswrong.com/posts/hETiHyeg7ux4RtpYz/utopia-in-manna
# [Link]: 80,000 hours blog Some of you probably aren't aware yet of the rather excellent [High Impact Careers / 80,000 hours blog](http://80000hours.org/blog). It covers topics about how to have the biggest impact with your career, including * [how likely you are to become Prime Minister](http://80000hours.org/bl...
https://www.lesswrong.com/posts/2r2WTAR2bHiFYe7b4/link-80-000-hours-blog
# Mike Darwin on the Less Wrong intelligentsia He has resumed posting at his blog [Chronopause](http://chronopause.com/) and he is essential reading for those interested in cryonics and, more generally, rational decision-making in an uncertain world.  In response to a comment by a LW user named Alexander, [he writes]...
https://www.lesswrong.com/posts/L4Lkz4hXJZkaKSWmL/mike-darwin-on-the-less-wrong-intelligentsia
# 'Facing the Singularity' podcast My online mini-book _[Facing the Singularity](http://facingthesingularity.com/)_ now has a [podcast](http://itunes.apple.com/us/podcast/facing-the-singularity/id501765396). Ratings and reviews on iTunes will be much appreciated, so as to direct people toward a rationality-informed ap...
https://www.lesswrong.com/posts/s6NeR8hoktaZYk3CD/facing-the-singularity-podcast
# Brazilians, unite! and what is IERFH (Portuguese) Hi anglophones, this topic is only for Brazilians, so someone may post in Portuguese, and part of this is in Portuguese (we will translate it to English if necessary when the time comes). Hello Brazilians, I'm creating this topic because some misallocated questions w...
https://www.lesswrong.com/posts/36fa4ZiGzY2i4gd3r/brazilians-unite-and-what-is-ierfh-portuguese
# My summary of Eliezer's position on free will I'm participating in a university course on free will. On the online forum, someone asked me to summarise Eliezer's solution to the free will problem, and I did it like this. Is it accurate in this form? How should I change it? “I'll try to summarise Yudkowsky's argumen...
https://www.lesswrong.com/posts/7JKJoAZd6iSPDpyM6/my-summary-of-eliezer-s-position-on-free-will
# Status and Changing your Mind When you hear powerful evidence or arguments that should get you to revise your beliefs, not only do all sorts of [cognitive biases](http://wiki.lesswrong.com/wiki/Bias) fight the changes but so do the social factors of status and face saving. Perhaps I've long been a vocal proponent of...
https://www.lesswrong.com/posts/BFvuW5WsNuCnawWw2/status-and-changing-your-mind
# Cashing Out Cognitive Biases as Behavior We believe cognitive biases, and susceptibility to them, lead to bad decisions and suboptimal performance. I’d like to look at 2 interesting studies: 1. [Parker & Fischhoff 2005](http://sds.hss.cmu.edu/media/pdfs/fischhoff/Parker_FischhoffDMC.pdf): “Decision-making competence: Extern...
https://www.lesswrong.com/posts/dMDmED5LyqZtTTh5A/cashing-out-cognitive-biases-as-behavior
# How do you notice when you're rationalizing? How do you notice when you're rationalizing?  Like, what \*actually\* tips you off, in real life? I've listed my cues below; please add your own (one idea per comment), and upvote the comments that you either: (a) use; or (b) will now try using. I'll be using this list ...
https://www.lesswrong.com/posts/LHnQGxHNweyrbbDLJ/how-do-you-notice-when-you-re-rationalizing
# Heuristics and Biases in Charity Here on LW, we know that if you want to do the most good, [you shouldn't diversify your charitable giving](http://www.slate.com/articles/arts/everyday_economics/1997/01/giving_your_all.html). If a specific charity makes the best use of your money, then you should assign your whole ch...
https://www.lesswrong.com/posts/hiiziojg3R5uwQPm9/heuristics-and-biases-in-charity
# Journal of Consciousness Studies issue on the Singularity ...has finally been [published](http://friendlyai.tumblr.com/post/18609191883/new-jcs-issue-on-the-singularity). Contents: * Uziel Awret - Introduction * Susan Blackmore - [She Won’t Be Me](http://www.susanblackmore.co.uk/Articles/JCS2012.htm) * Damie...
https://www.lesswrong.com/posts/9DGfYpjYtpG4ACs8o/journal-of-consciousness-studies-issue-on-the-singularity
# People who "don't rationalize"? [Help Rationality Group figure it out] Anna Salamon and I are confused. Both of us notice ourselves rationalizing on pretty much a daily basis and have to apply techniques like the Litany of Tarski pretty regularly. But in several of our test sessions for teaching rationality, a hand...
https://www.lesswrong.com/posts/hEFrm3nZMFiW2czvb/people-who-don-t-rationalize-help-rationality-group-figure
# Request for input: draft of my "coming out" statement on religious deconversion **Edited 3/4/2012:** I shortened up the summary a bit and added the following update: Thanks for the lively comments. As a preliminary summary of things I've found quite useful/helpful: * Shorten/transform the document ([David_Gerard](...
https://www.lesswrong.com/posts/WMEQj382bjnerxCW8/request-for-input-draft-of-my-coming-out-statement-on
# AI Risk and Opportunity: A Strategic Analysis ![](http://lukeprog.com/images/AI%20risk%20and%20opportunity.png) Suppose you buy [the argument](http://facingthesingularity.com/) that humanity faces both the _risk_ of AI-caused extinction and the _opportunity_ to shape an AI-built utopia. What should we do about that...
https://www.lesswrong.com/posts/i2XoqtYEykc4XWp9B/ai-risk-and-opportunity-a-strategic-analysis
# 60m Asteroid currently assigned a .022% chance of hitting Earth. [http://neo.jpl.nasa.gov/risk/2012da14.html](http://neo.jpl.nasa.gov/risk/2012da14.html) [http://rt.com/news/paint-asteroid-earth-nasa-767/](http://rt.com/news/paint-asteroid-earth-nasa-767/) Seems like a good opportunity to bring up existential risk...
https://www.lesswrong.com/posts/tWCrrGEwrSZunWEjT/60m-asteroid-currently-assigned-a-022-chance-of-hitting
# "The Journal of Real Effects" Luke's recent [post](/lw/ajj/how_to_fix_science/) mentioned that The Lancet has a [policy](http://www.thelancet.com/lancet-oncology-information-for-authors/article-types-manuscript-requirements) encouraging the advance registration of clinical trials, while mine examined an apparent cas...
https://www.lesswrong.com/posts/YGZEE78jckF9XeTxu/the-journal-of-real-effects
# The Singularity Institute has started publishing monthly progress reports If anyone is curious what's going on at SI, it seems as though they've started publishing monthly progress reports on their blog. The latest was published less than a week ago for the month of February: [http://singinst.org/blog/](http://inte...
https://www.lesswrong.com/posts/8bGNfebYNbP64k6iu/the-singularity-institute-has-started-publishing-monthly
# Meetup Tactics Open Thread I think we need to have more discussion of meetup tactics on LW. At my local meetup, we've been feeling a bit lost about what works best, so I hereby propose that we have semi-regular meetup tactics discussions like the open and quotes threads. So here's a few questions to start us off: ...
https://www.lesswrong.com/posts/TQ5diyPDXNQ9zqgqs/meetup-tactics-open-thread
# Emotional regulation, Part I: a problem summary I have a problem with emotions. I’ve known this for a long time. It’s a very specific problem, one that only affects me a small percentage of the time: most people I know _don’t_ describe me as an emotional person. I’m lucky enough to have been born with the sort of b...
https://www.lesswrong.com/posts/B9WxT7fQKhhywW2PN/emotional-regulation-part-i-a-problem-summary
# Main section vs. discussion section (The following may only apply to me. I mention it to see if anyone else has had the same issue). For a long time I have been only looking at the Discussion section and promoted main page articles. Just now on a whim I checked the non-promoted main page articles and found there we...
https://www.lesswrong.com/posts/kgRBSDZTaG4pkXsLK/main-section-vs-discussion-section
# How to Fix Science Like [The Cognitive Science of Rationality](/r/lesswrong/lw/7e5/the_cognitive_science_of_rationality/), this is a post for beginners. Send the link to your friends! ![](http://commonsenseatheism.com/wp-content/uploads/2011/11/experimenter-bias.jpg) Science is broken. We know why, and we know how...
https://www.lesswrong.com/posts/ETe2SZacmLvvr8H9n/how-to-fix-science
# Using degrees of freedom to change the past for fun and profit Follow-up to: [Follow-up on ESP study: "We don't publish replications"](/lw/6lq/followup_on_esp_study_we_dont_publish_replications/), [Feed the Spinoff Heuristic!](/lw/9xs/feed_the_spinoff_heuristic/) Related to: [Parapsychology: the control group for s...
https://www.lesswrong.com/posts/kXgyLuyRSvsxozFRs/using-degrees-of-freedom-to-change-the-past-for-fun-and
# Delicious Luminosity, Om Nom Nom I have decided that it would be valuable for me to read books (blog posts, articles, random conversations between smart people who store chatlogs) about introspection, take notes, and try to distill and clarify the information.  This could result in me eventually giving up, or in a L...
https://www.lesswrong.com/posts/v8Sh6cpj34NFtFwAF/delicious-luminosity-om-nom-nom
# [Link] Atlantic Interview with Nick Bostrom - "We're Underestimating the Risk of Human Extinction" [http://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/](http://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/2...
https://www.lesswrong.com/posts/aArw2dBbprwNXLReo/link-atlantic-interview-with-nick-bostrom-we-re
# Harry Potter and the Methods of Rationality discussion thread, part 10 **(The HPMOR discussion thread after this one is [here](/r/discussion/lw/axe/harry_potter_and_the_methods_of_rationality/).)** This is a new thread to discuss Eliezer Yudkowsky's _[Harry Potter and the Methods of Rationality](http://www.fanficti...
https://www.lesswrong.com/posts/LKFR5pBA3bBkERDxL/harry-potter-and-the-methods-of-rationality-discussion-2
# Causal diagrams and software engineering > [Fake explanations](/lw/is/fake_causality/) don't feel fake. That's what makes them dangerous. -- EY Let's look at "[A Handbook of Software and Systems Engineering](http://www.amazon.com/dp/0321154207)", which purports to examine the insights from software engineering that...
https://www.lesswrong.com/posts/XHgEbHRmJjE5DonNk/causal-diagrams-and-software-engineering
# I Was Not Almost Wrong But I Was Almost Right: Close-Call Counterfactuals and Bias **Abstract:** _"Close-call counterfactuals", claims of what could have almost happened but didn't, can be used to either defend a belief or to attack it. People have a tendency to reject counterfactuals as improbable when those counte...
https://www.lesswrong.com/posts/BmGrj9pRkcbJxae3x/i-was-not-almost-wrong-but-i-was-almost-right-close-call
# Conjunction fallacy and probabilistic risk assessment. ### Summary: There is a very dangerous way in which the conjunction fallacy can be exploited. One can present you with two to five detailed, very plausible failure scenarios whose probabilities are shown to be very low, using solid mathematics; then if you suffer from con...
https://www.lesswrong.com/posts/owin836H842oxFQXN/conjunction-fallacy-and-probabilistic-risk-assessment
# A taxonomy of Oracle AIs Sources: An old draft on Oracle AI from Daniel Dewey, conversation with Dewey and Nick Beckstead. See also [Thinking Inside the Box](http://www.aleph.se/papers/oracleAI.pdf) and [Leakproofing the Singularity](http://dl.dropbox.com/u/5317066/2012-yampolskiy.pdf). Can we just create an Oracle...
https://www.lesswrong.com/posts/XddMs9kSGtm6L8522/a-taxonomy-of-oracle-ais
# How does real world expected utility maximization work? I would like to ask for help on how to use expected utility maximization, _in practice_, to maximally achieve my goals. As a real world example I would like to use the post '[Epistle to the New York Less Wrongians](/lw/5c0/epistle_to_the_new_york_less_wrongian...
https://www.lesswrong.com/posts/J8ojaGJozdt9hnoxE/how-does-real-world-expected-utility-maximization-work
# New cognitive bias articles on wikipedia (update) * [Conservatism](http://en.wikipedia.org/wiki/Conservatism_(Bayesian)) * [Curse of knowledge](http://en.wikipedia.org/wiki/Curse_of_knowledge) * [Duration neglect](http://en.wikipedia.org/wiki/Duration_neglect) * [Extension neglect](http://en.wikipedia.org/wi...
https://www.lesswrong.com/posts/XKfPvj9gtrskGuP3v/new-cognitive-bias-articles-on-wikipedia-update
# Predictability of Decisions and the Diagonal Method _This post collects a few situations where agents might want to make their decisions either predictable or unpredictable to certain methods of prediction, and considers a method of making a decision unpredictable by "diagonalizing" a hypothetical prediction of that...
https://www.lesswrong.com/posts/W6T93dSSm2xvHn9X6/predictability-of-decisions-and-the-diagonal-method
# Slowing Moore's Law: Why You Might Want To and How You Would Do It In this essay I argue the following: > Brain emulation requires enormous computing power; enormous computing power requires further progression of Moore’s law; further Moore’s law relies on large-scale production of cheap processors in ever more-adv...
https://www.lesswrong.com/posts/7s2as5qJg5eFpyGcM/slowing-moore-s-law-why-you-might-want-to-and-how-you-would
# New paper on Bayesian philosophy of statistics from Andrew Gelman [Andrew Gelman](http://andrewgelman.com) recently linked a new article entitled ["Induction and Deduction in Bayesian Data Analysis."](http://www.stat.columbia.edu/~gelman/research/published/philosophy_online4.pdf) At his blog, he also [described](htt...
https://www.lesswrong.com/posts/nNkXMmmgjPMbfQych/new-paper-on-bayesian-philosophy-of-statistics-from-andrew
# Anyone have any questions for David Chalmers? I'm doing an undergraduate course on the Free Will Theorem, with three lecturers: a mathematician, a physicist, and David Chalmers as the philosopher. The course is a bit pointless, but the company is brilliant. Chalmers is a pretty smart guy. He studied computer science...
https://www.lesswrong.com/posts/p3EJ5f9jxEefzAM82/anyone-have-any-questions-for-david-chalmers
# On the etiology of religious belief From ["Trust in testimony and miracles"](http://prosblogion.ektopos.com/archives/2012/03/trust-in-testim.html): > I have been of late fascinated by the research of the developmental psychologist Paul L. Harris, who has investigated how young children acquire information through t...
https://www.lesswrong.com/posts/DuM5d7stfbm4KksM2/on-the-etiology-of-religious-belief
# The Stable State is Broken or: _Why Everything Is Terrible, An Overview._^1^ It sounds like a theory which [explains too much](/lw/if/your_strength_as_a_rationalist/). But it's not a theory, hardly even an explanation, more a pattern that manifests itself once you start trying to seriously answer rhetorical questio...
https://www.lesswrong.com/posts/zsoaDq3BMmmnHR3He/the-stable-state-is-broken
# What is the best compact formalization of the argument for AI risk from fast takeoff? Many people complain that the Singularity Institute's "[Big Scary Idea](http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html)" (AGI leads to catastrophe by default) has not been argued for...
https://www.lesswrong.com/posts/AAG7vj5cyNZ2Gxtnk/what-is-the-best-compact-formalization-of-the-argument-for
# [Link] Failed replications of the "elderly walking" priming effect > Recently a controversy broke out over the replicability of a [study](http://www.yale.edu/acmelab/articles/bargh_chen_burrows_1996.pdf) John Bargh et al. published in 1996. The study reported that unconsciously priming a stereotype of elderly people...
https://www.lesswrong.com/posts/Cduq5QGzd8KYSZT2R/link-failed-replications-of-the-elderly-walking-priming
# Decision Theories: A Less Wrong Primer ![Alpha-beta pruning (from Wikipedia)](http://images.lesswrong.com/t3_aq9_0.png?v=bcf58528098c2fc6111d6a91e2a672f1) **Summary:** *If you've been wondering why people keep going on about decision theory on Less Wrong, I wrote you this post as an answer. I explain what decision ...
https://www.lesswrong.com/posts/af9MjBqF2hgu3EN6r/decision-theories-a-less-wrong-primer
# Biased Pandemic Recently, Portland Lesswrong played a game that was a perfect trifecta of: difficult mental exercise; fun; and an opportunity to learn about biases and recognize them in yourself and others. We're still perfecting it, and we'd welcome feedback, especially from people who try it. The Short Version ==...
https://www.lesswrong.com/posts/34jf9Z43kBHF7Axz2/biased-pandemic
# What if the front page… What if the front page looked a little more like this? [![An idea](http://wiki.lesswrong.com/mediawiki/images/9/91/LW_website_Homepage_V3.jpg)](http://wiki.lesswrong.com/mediawiki/images/9/91/LW_website_Homepage_V3.jpg) (Please assume that I'm trying to help. If you're polite and constructi...
https://www.lesswrong.com/posts/G4W6erJW6vPdzdLSp/what-if-the-front-page
# How would you take over Rome? A [recent discussion post](/r/discussion/lw/asc/risks_from_ai_and_charitable_giving/) has compared the difficulty of an AI destroying modern human civilization to that of a modern human taking over the Roman Empire, with the implication that it is impossible. The analogy has a few prob...
https://www.lesswrong.com/posts/2jZykdLg9fBGqKd46/how-would-you-take-over-rome
# DIY Transcranial Direct Current Stimulation. Who wants to go first? There's been a bit of discussion about TDCS here on LessWrong.  If you don't know what TDCS is [here's an article](http://www.contriving.net/link/4z) from Technology Review. [Here's a company](http://www.contriving.net/link/4x) (soon) selling a DIY...
https://www.lesswrong.com/posts/5nFvHD7aa7SB8nCLT/diy-transcranial-direct-current-stimulation-who-wants-to-go
# Dotting i's and Crossing t's - a Journey to Publishing Elegance More literally a journey to making the dots of the 'i's line up just right with the 'f's and ensuring that the crossing of 'T' meets up neatly with the tip of the 'h' - all without breaking text searching and copy and paste. ### Task Now, [as we all...
https://www.lesswrong.com/posts/iFv7q95TMWzq4wsvi/dotting-i-s-and-crossing-t-s-a-journey-to-publishing
# Cult impressions of Less Wrong/Singularity Institute If you type "less wrong c" or "singularity institute c" into Google, you'll find that people are searching for "less wrong cult" and "singularity institute cult" with some frequency. (EDIT: Please avoid testing this out, so Google doesn't autocomplete your search ...
https://www.lesswrong.com/posts/CfTH84gGFCRqo8D7t/cult-impressions-of-less-wrong-singularity-institute
# Meta Addiction I was wondering if anyone has ever had the feeling, like I get sometimes, that they were addicted to 'meta-level' optimizing rather than low-level acting? As in, I'd rather think about how to encourage myself to brush my teeth more than brush my teeth. I'm guessing there's something about this under t...
https://www.lesswrong.com/posts/g2AKPEzFdQitmpTDu/meta-addiction
# [LINK] Judea Pearl wins 2011 Turing Award Link to [ACM press release](http://www.acm.org/press-room/news-releases/2012/turing-award-11/). > In addition to their impact on probabilistic reasoning, Bayesian networks completely changed the way causality is treated in the empirical sciences, which are based on experime...
https://www.lesswrong.com/posts/eTf7Tnu9SNipcSoRT/link-judea-pearl-wins-2011-turing-award
# Please advise the Singularity Institute with your domain-specific expertise! The Singularity Institute would benefit from having a team of domain-specific advisors on hand. If you'd like to help the Singularity Institute pursue its mission more efficiently, [please sign up to be a Singularity Institute advisor](http...
https://www.lesswrong.com/posts/w9xYSrB7jEPigrMxq/please-advise-the-singularity-institute-with-your-domain
# Muehlhauser-Goertzel Dialogue, Part 1 Part of the [Muehlhauser interview series on AGI](http://wiki.lesswrong.com/wiki/Muehlhauser_interview_series_on_AGI). _[Luke Muehlhauser](http://lukeprog.com/) is Executive Director of the [Singularity Institute](http://intelligence.org/), a non-profit research institute study...
https://www.lesswrong.com/posts/TpNRpncLBAzddBnRB/muehlhauser-goertzel-dialogue-part-1
# Evolutionary psychology: evolving three eyed monsters ### Summary We should not expect evolution of complex psychological and cognitive adaptations in the timeframe in which, morphologically, animal bodies can only change by very little. The genetic alteration to the cognition for speech shouldn't be expected to ...
https://www.lesswrong.com/posts/JQnN6ZCMk9XmhNarY/evolutionary-psychology-evolving-three-eyed-monsters
# Schelling fences on slippery slopes Slippery slopes are themselves a slippery concept. Imagine trying to explain them to an alien: "Well, we right-thinking people are quite sure that the Holocaust happened, so banning Holocaust denial would shut up some crackpots and improve the discourse. But it's one step on ...
https://www.lesswrong.com/posts/Kbm6QnJv9dgWsPHQP/schelling-fences-on-slippery-slopes
# How to avoid dying in a car crash Aside from [cryonics](http://wiki.lesswrong.com/wiki/Cryonics) and [eating better](/lw/a60/quantified_health_prize_results_announced/), what else can we do to live long lives? Using [this tool](http://www.worldlifeexpectancy.com/usa-cause-of-death-by-age-and-gender), I looked up th...
https://www.lesswrong.com/posts/7XbcDaeigMaxW43EB/how-to-avoid-dying-in-a-car-crash
# I'm starting a game company and looking for a co-founder. Summary: I am looking for co-founder(s) to start a game company with me. If you, or anyone you know, is interested, please contact me. (Alternatively, if you want to invest or provide funding, that would be very nice in its own right.) It  [recently occu...
https://www.lesswrong.com/posts/LJ9FxJYgzyMrX9ZqN/i-m-starting-a-game-company-and-looking-for-a-co-founder
# LiveJournal Memes On blogging websites such as LiveJournal, memes are often in the form of a question or set of questions which a blogger answers in their own blog, then challenges their readers to answer in the readers' blogs (thus spreading).  It doesn't have to be the sort of question to which there is a 'correct...
https://www.lesswrong.com/posts/fnLDrRyFndf9syJnP/livejournal-memes
# Fallacies as weak Bayesian evidence **Abstract:** _Exactly what is fallacious about a claim like ”ghosts exist because no one has proved that they do not”? And why does a claim with the same logical structure, such as ”this drug is safe because we have no evidence that it is not”, seem more plausible? Looking at var...
https://www.lesswrong.com/posts/YgNLfytckSyKTnDXN/fallacies-as-weak-bayesian-evidence
# The Best Comments Ever Of the title of this discussion post, we already have an approximation in the most voted-for lists for [Main](/topcomments/) and [Discussion](/r/discussion/topcomments/). There are many problems with this metric, however, a subset of which are: * Both sections are cluttered. The top comment...
https://www.lesswrong.com/posts/6PsPs2D6i39dGpRqj/the-best-comments-ever
# 6 Tips for Productive Arguments We've all had arguments that seemed like a complete waste of time in retrospect. But at the same time, arguments (between scientists, policy analysts, and others) play a critical part in moving society forward. You can imagine how lousy things would be if no one ever engaged those who...
https://www.lesswrong.com/posts/JSND48qS5XTMFuZo8/6-tips-for-productive-arguments
# Emotional regulation Part II: research summary _**Abstract**: Emotional regulation is a topic currently being studied in the field of psychology. Five different types of emotional regulation strategies have been identified, distinguished by the stage of the emotion-response process in which they occur. To drasticall...
https://www.lesswrong.com/posts/vZNPEXrJkJjjctBza/emotional-regulation-part-ii-research-summary
# Posts I repent of * ["Taking Ideas Seriously"](/lw/2l6/taking_ideas_seriously/): Stylistically contemptible, skimpy on any useful details, contributes to norm of pressuring people into double binds that ultimately do more harm than good. I would prefer it if no one linked to or promoted "Taking Ideas Seriously"; s...
https://www.lesswrong.com/posts/QePFiEKZ4R2KnxMkW/posts-i-repent-of
# A model of UDT without proof limits This post requires some knowledge of decision theory math. Part of the credit goes to Vladimir Nesov. Let the universe be a computer program U that returns a utility value, and the agent is a subprogram A within U that knows the source code of both A and U. (The same setting was ...
https://www.lesswrong.com/posts/m39dkp73YhN9QKYb9/a-model-of-udt-without-proof-limits
# The limited predictor problem This post requires some knowledge of logic, computability theory, and K-complexity. Much of the credit goes to Wei Dai. The four sections of the post can be read almost independently. The limited predictor problem (LPP) is a version of [Newcomb's Problem](http://wiki.lesswrong.com/wiki...
https://www.lesswrong.com/posts/ecCLANfPSDQMzyfDf/the-limited-predictor-problem
# AI Risk and Opportunity: Humanity's Efforts So Far Part of the series [AI Risk and Opportunity: A Strategic Analysis](/r/discussion/lw/ajm/ai_risk_and_opportunity_a_strategic_analysis/). (You can leave anonymous feedback on posts in this series **[here](https://docs.google.com/spreadsheet/viewform?formkey=dDZ6d0RvM...
https://www.lesswrong.com/posts/i4susk4W3ieR5K92u/ai-risk-and-opportunity-humanity-s-efforts-so-far
# Social status hacks from The Improv Wiki I can't remember how I found this, just that I was amazed at how rational and near-mode it is on a topic where most of the information one usually encounters is hopelessly far. LessWrong wiki link on the same topic: [http://wiki.lesswrong.com/wiki/Status](http://wiki.lesswro...
https://www.lesswrong.com/posts/PMZHfLuQaeFDMQwMx/social-status-hacks-from-the-improv-wiki
# Simple but important ideas Important ideas don't always require long explanations. Here's a famous example: > Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, ...
https://www.lesswrong.com/posts/a3kv7AQuDhTqzhNrP/simple-but-important-ideas
# Daily Low-Dose Aspirin, Round 2 (For Round 1, see [this comment](/lw/5qm/living_forever_is_hard_or_the_gompertz_curve/477u) from last year.) NYT: **[Studies Link Daily Doses of Aspirin to Reduced Risk of Cancer](http://www.nytimes.com/2012/03/21/health/research/studies-link-aspirin-daily-use-to-reduced-cancer-risk....
https://www.lesswrong.com/posts/7b3njJqE7fzurhFTM/daily-low-dose-aspirin-round-2
# What epistemic hygiene norms should there be? The [wiki entry for Epistemic Hygiene](http://wiki.lesswrong.com/wiki/Epistemic_hygiene) defines the term as: **Epistemic hygiene** consists of practices meant to allow accurate beliefs to spread within a community and keep less accurate or biased beliefs contained. The...
https://www.lesswrong.com/posts/C9JB2GCBTo8Srg8GN/what-epistemic-hygiene-norms-should-there-be
# Is community-collaborative article production possible? When I showed up at the Singularity Institute, I was surprised to find that 30-60 papers' worth of material was lying around in blog posts, mailing list discussions, and people's heads — but it had never been written up in clear, well-referenced academic articl...
https://www.lesswrong.com/posts/rTD8WYxSuNDAFsRRB/is-community-collaborative-article-production-possible
# A Problem About Bargaining and Logical Uncertainty Suppose you wake up as a paperclip maximizer. Omega says "I calculated the millionth digit of pi, and it's odd. If it had been even, I would have made the universe capable of producing either 10^20 paperclips or 10^10 staples, and given control of it to a staples ...
https://www.lesswrong.com/posts/oZwxY88NCCHffJuxM/a-problem-about-bargaining-and-logical-uncertainty
# Modest Superintelligences I'm skeptical about trying to build FAI, but not about trying to influence the Singularity in a positive direction. Some people may be skeptical even of the latter because they don't think the possibility of an intelligence explosion is a very likely one. I suggest that even if intelligence...
https://www.lesswrong.com/posts/KuBMKQnAsYBGP4rkZ/modest-superintelligences
# Nonmindkilling open questions When I explain to people how beliefs should be expressed in probabilities, I would like to use an example like "Consider X. Lots of intelligent people believe X, but lots of equally intelligent people believe not-X. It would be ridiculous to say you are 100% sure either way, so even if ...
https://www.lesswrong.com/posts/rNJ39yQmzTnseh8nL/nonmindkilling-open-questions
# A Primer On Risks From AI The Power of Algorithms ----------------------- Evolutionary processes are the most evident example of the power of simple algorithms \[1\]\[2\]\[3\]\[4\]\[5\]. The field of evolutionary biology gathered a vast amount of evidence \[6\] that established evolution as the process that explai...
https://www.lesswrong.com/posts/jN3PwnWDfe3yaJAW2/a-primer-on-risks-from-ai
# Decision Theories: A Semi-Formal Analysis, Part I Or: The Problem with Naive Decision Theory --------------------------------------------- **Previously:** [Decision Theories: A Less Wrong Primer](/lw/aq9/decision_theories_a_less_wrong_primer/) **Summary of Sequence:** _In_ _the context of a tournament for compu...
https://www.lesswrong.com/posts/2JdvZw3CXzafxQugN/decision-theories-a-semi-formal-analysis-part-i
# Less Wrong Sequences+Website feed app for Android I use my Android phone much more than my computer, and reading the Sequences on a mobile device is a pain. I needed an easy way to access the Sequences, but since there are no apps for this website I had to create one myself. Since I'm no app developer, I used the IB...
https://www.lesswrong.com/posts/fBa4AeMn9MTQ2QqCQ/less-wrong-sequences-website-feed-app-for-android
# An example of self-fulfilling spurious proofs in UDT Benja Fallenstein was [the first to point out](/lw/2l2/what_a_reduction_of_could_could_look_like/2f7w) that spurious proofs pose a problem for UDT. Vladimir Nesov and orthonormal [asked](/lw/axl/decision_theories_a_semiformal_analysis_part_i/64ey) for a formalizat...
https://www.lesswrong.com/posts/2GebvAXXfRMTjY2g7/an-example-of-self-fulfilling-spurious-proofs-in-udt
# Fundamentals of kicking anthropic butt ![Galactus](http://dreager1.files.wordpress.com/2011/06/3102269717_1e707314af.jpg) **Introduction** An anthropic problem is one where the very fact of your existence tells you something. "I woke up this morning, therefore the earth did not get eaten by Galactus while I slu...
https://www.lesswrong.com/posts/EfoecWmSxcPcpBzEH/fundamentals-of-kicking-anthropic-butt
# [LINK] Poem: There are no beautiful surfaces without a terrible depth. [The poem](http://www.online-literature.com/forums/showthread.php?t=3569 "emotions around reductionism, in poetry") is from someone whose online pseudonym is atiguhya padma.  I'll quote the first verse, the refrain, and the beginning of the secon...
https://www.lesswrong.com/posts/nQsZKQZixHYjArbBk/link-poem-there-are-no-beautiful-surfaces-without-a-terrible
# Common mistakes people make when thinking about decision theory From my experience reading and talking about decision theory on LW, it seems that many of the unproductive comments in these discussions can be attributed to a handful of common mistakes. #### Mistake #1: Arguing about assumptions The main reason why ...
https://www.lesswrong.com/posts/gkAecqbuPw4iggiub/common-mistakes-people-make-when-thinking-about-decision
# Should logical probabilities be updateless too? (This post doesn't require much math. It's very speculative and probably confused.) Wei Dai came up with a [problem](/r/discussion/lw/b0y/a_problem_about_bargaining_and_logical_uncertainty/) that seems equivalent to a variant of [Counterfactual Mugging](/lw/3l/counter...
https://www.lesswrong.com/posts/rLcHvxKcJpyJj3i7o/should-logical-probabilities-be-updateless-too
# Does anyone know any kid geniuses? I'm friends with an incredibly smart kid. He's 14, and at one point was moved up three grades in school. He does all the obvious enrichment things which are available in the relatively small Australian city he lives in. His life experience has been pretty unusual. He doesn't re...
https://www.lesswrong.com/posts/AFefY7hWChSgxu257/does-anyone-know-any-kid-geniuses
# Brain Preservation Most people, given the option to halt aging and continue in good health for centuries, would. Anti-aging research is popular, but medicine is only minimally increasing lifespan for healthy adults. You, I, and everyone we know have bodies that are incredibly unlikely to make it past 120. They're jus...
https://www.lesswrong.com/posts/nk9928vPqoeAMrTh6/brain-preservation
# SotW: Check Consequentialism _(The [Exercise Prize](http://wiki.lesswrong.com/wiki/CFAR_Exercise_Prize) series of [posts](/tag/exprize/) is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills.  The difficulty is __coming up with exercises interesting enough, with a ...
https://www.lesswrong.com/posts/xypbWhzEEw4ZsRK9i/sotw-check-consequentialism
# George Orwell's Prelude on Politics Is The Mind Killer I have found this most wonderful (if fairly lengthy) article, [and thought you would enjoy having it brought to your attention.](http://www.george-orwell.org/Notes_on_Nationalism/0.html) It is so good, I think we should include it among the references in the "P...
https://www.lesswrong.com/posts/dEbhs6iicJXKtswm7/george-orwell-s-prelude-on-politics-is-the-mind-killer
# The Institute For Propaganda Analysis, A Precursor and a Warning Years ago, I stumbled upon this most interesting segment while reading Aldous Huxley's Brave New World Revisited, of which I have finally found an online version that enables me to share its contents with you:  > In their anti-rational propaganda the ...
https://www.lesswrong.com/posts/L2pSvmNNRrQ9o5PaZ/the-institute-for-propaganda-analysis-a-precursor-and-a
# [Draft] How to Run a Successful Less Wrong Meetup [How to Run a Successful Less Wrong Meetup](http://wiki.lesswrong.com/mediawiki/images/c/ca/How_to_Run_a_Successful_Less_Wrong_Meetup_Group.pdf) is a guide that I've been working on, based on lukeprog's instructions, for the last week and a half. As it says in the be...
https://www.lesswrong.com/posts/T2fcyjay3GtkvGn7F/draft-how-to-run-a-successful-less-wrong-meetup
# Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28 _“__I do not say this lightly... but if you're looking for superpowers, this is the place to start.”_ _--Michael Curzi, summer 2011 minicamp participant_ _Who:_ You and a class full of other aspiring rationalists and world-optimizers, f...
https://www.lesswrong.com/posts/fkhbBE2ZTSytvsy9x/minicamps-on-rationality-and-awesomeness-may-11-13-june-22
# Doing "Nothing" It might be a useful habit to remember, whenever you're making a choice about some situation, that "doing nothing" is never actually an available option. Even if you avoid doing the task you're considering, you're still making some kind of choice about how you spend your time, and you're still doing ...
https://www.lesswrong.com/posts/yHKJGird3HJHYevMu/doing-nothing
# Australian Green Party leader Bob Brown talks about global democracy, X-risk, and immortality [It's interesting to see someone with actual political power (with balance of power in the Senate and being part of the Lower House coalition) talking about the Great Filter.](http://greensmps.org.au/content/news-stories/bo...
https://www.lesswrong.com/posts/vQ2GCTBRYu4hRBL6g/australian-green-party-leader-bob-brown-talks-about-global
# Examine your assumptions There's a story you've probably heard: During World War II, the British RAF's Bomber Command wanted a survey done on the effectiveness of their aircraft armouring. This was carried out by inspecting all bombers returning from bombing raids over Germany during a particular period. All damage i...
https://www.lesswrong.com/posts/kyZgEKzZZtJQTCSG2/examine-your-assumptions
# Setting up LW meetups in unlikely places: Positive Data Point Meeting fellow LessWrongians in meat space is a great opportunity to participate in interesting discussions and to make new friends. But there aren't that many places in the world (hopefully, _yet_) where regularly active meetup groups exist. Here is a st...
https://www.lesswrong.com/posts/4YMFSdxWSK9JgQHDv/setting-up-lw-meetups-in-unlikely-places-positive-data-point
# AI Risk & Opportunity: A Timeline of Early Ideas and Arguments Part of the series [AI Risk and Opportunity: A Strategic Analysis](/r/discussion/lw/ajm/ai_risk_and_opportunity_a_strategic_analysis/). (You can leave anonymous feedback on posts in this series **[here](https://docs.google.com/spreadsheet/viewform?formk...
https://www.lesswrong.com/posts/Qdq2SKyMi8vf7Snxq/ai-risk-and-opportunity-a-timeline-of-early-ideas-and
# April Fools - Harry Potter and the Methods of Rationality Joke Chapter **This was an April Fools joke.** This is a new thread to discuss <del>Eliezer Yudkowsky’s</del> my chapter of _[Harry Potter and the Methods of Rationality](http://www.freefanfic.net/s/5782108/82/Harry_Potter_and_the_Methods_of_Rationality/)_ a...
https://www.lesswrong.com/posts/dGwr3gJJuEvxwptHy/april-fools-harry-potter-and-the-methods-of-rationality-joke
# Generic Modafinil sales begin? Due to [agreements with the patent holder](http://en.wikipedia.org/wiki/Modafinil#Patent_protection_and_antitrust_litigation), Cephalon, that were made in 2005-2006, several generic manufacturers are now allowed to sell generic [Modafinil](http://en.wikipedia.org/wiki/Modafinil). I ha...
https://www.lesswrong.com/posts/g6kKTtNevCwGxa5px/generic-modafinil-sales-begin
# LessWrong downtime 2012-03-26, and site speed Our investigation into last week's LW downtime is complete: [here](https://docs.google.com/document/d/1IXYAjoQQgzDx6xAyefIYCRgKrDlsldzfT-TORTt1JiQ/view?pli=1) (Google Docs). **Executive summary:** We failed to update our [AWS](http://aws.amazon.com/) configuration afte...
https://www.lesswrong.com/posts/LSeSdq4SE5dDu5atX/lesswrong-downtime-2012-03-26-and-site-speed
# Advice for an isolated Rationalist? Hello fellow readers. I've been enjoying LW for a while now, and I can confidently say that many of the ideas in this community have done much to better my life. However, I live in some isolation from like-minded individuals. I lack social groups that aspire to the same v...
https://www.lesswrong.com/posts/FwcFW8FefRcqDFBAA/advice-for-an-isolated-rationalist
# SotW: Be Specific _(The [Exercise Prize](http://wiki.lesswrong.com/wiki/CFAR_Exercise_Prize) series of [posts](/tag/exprize/) is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills.  The difficulty is __coming up with exercises interesting enough, with a high enough...
https://www.lesswrong.com/posts/NgtYDP3ZtLJaM248W/sotw-be-specific
# Evidence for the orthogonality thesis One of the most annoying arguments when discussing AI is the perennial "But if the AI is so smart, why won't it figure out the right thing to do anyway?" It's often the ultimate curiosity stopper. Nick Bostrom has defined the "Orthogonality thesis" as the principle that motivat...
https://www.lesswrong.com/posts/CRsYy3xtbMrLjoXZT/evidence-for-the-orthogonality-thesis
# The efficiency of prizes Several commenters on [SoTW: Be Specific](/lw/bc3/sotw_be_specific/) suggest that prizes might be counterproductive. As a counterpoint, [Tony Barrett](http://tony-barrett.com/) of [GCRi](http://www.bmsis.org/gcri) pointed me to [this piece](http://opiniona...
https://www.lesswrong.com/posts/85N4EF2a8MCfbZzoM/the-efficiency-of-prizes
# Harry Potter and the Methods of Rationality discussion thread, part 14, chapter 82 **The new discussion thread (part 15) is [here](/r/discussion/lw/bmx/harry_potter_and_the_methods_of_rationality/). ** This is a new thread to discuss Eliezer Yudkowsky’s _[Harry Potter and the Methods of Rationality](http://www.fanf...
https://www.lesswrong.com/posts/pBTcCB5uJTzADdMm4/harry-potter-and-the-methods-of-rationality-discussion-15