# The Cryonics Strategy Space In four paragraphs I’m going to claim, “It is highly likely reading this article will increase your chance of living forever”. I’m pretty sure you won’t disagree with me. First, however, I’d like to talk about how much I don’t like _Monopoly._ I play a lot of _[Monopoly](http://en.wikipe...
https://www.lesswrong.com/posts/CQkFGTbGD4b2nf6QP/the-cryonics-strategy-space
# How Tim O'Brien gets around the logical fallacy of generalization from fictional evidence It took me until my third reading of _The Things They Carried_ to realize that it contained something very valuable to rationalists. In ["The Logical Fallacy of Generalization from Fictional Evidence,"](/lw/k9/the_l...
https://www.lesswrong.com/posts/YyEQ4BedRYNRaot4r/how-tim-o-brien-gets-around-the-logical-fallacy-of
# Skills and Antiskills One useful little concept that a friend and I have is that of the _antiskill_. Like a normal skill, an antiskill gives you both the ability and the affordance to do things that you wouldn't otherwise be able to do. The difference between a skill and an antiskill is that a skill gives you the ab...
https://www.lesswrong.com/posts/gTmyMB4YtQnvw6SPQ/skills-and-antiskills
# Tapestries of Gold (Nothing here is actually new, but a short explanation with pictures would have been helpful to me a while ago, so I thought I'd make an attempt.) Let me start with a patch of territory: a set of things that exist. The number of rows is far from clear, but I'll use six candidates as a sample;...
https://www.lesswrong.com/posts/GrH4fZWXTwoQovnPN/tapestries-of-gold
# Why don't you attend your local LessWrong meetup? / General meetup feedback It's fairly common for a LessWrong meetup group to get people who attend for a week or two and then never show up again. Most of the time, there may not be a very interesting reason for that. But if someone did have a bad experience at a ...
https://www.lesswrong.com/posts/egiY5zPxDji7SCphN/why-don-t-you-attend-your-local-lesswrong-meetup-general
# Request for concrete AI takeover mechanisms Any scenario where advanced AI takes over the world requires some mechanism for an AI to leverage its position as ethereal resident of a computer somewhere into command over a lot of physical resources. One classic story of how this could happen, from [Eliezer](http://int...
https://www.lesswrong.com/posts/pxGYZs2zHJNHvWY5b/request-for-concrete-ai-takeover-mechanisms
# A brief summary of effective study methods **EDIT:** Reworked and moved to Main following [Gunnar_Zarncke's advice](/r/discussion/lw/k4n/a_brief_summary_of_effective_study_methods/aunx). **Related to:** [Book Review: How Learning Works](/lw/jj2/book_review_how_learning_works/), [Build Small Skills in the Right Orde...
https://www.lesswrong.com/posts/goRshyncBQ8899xr8/a-brief-summary-of-effective-study-methods
# LessWrong as social catalyst I'd like to ask everyone: Have LessWrong.com and related online rationalist/transhumanist/Singularitarian communities connected you to people for purposes beyond discussion? * [A new job, employee, business partner, cofounder, or professional mentor](/lw/jzp/business_networking_throug...
https://www.lesswrong.com/posts/Bq2jNNrk8JaZoD5jW/lesswrong-as-social-catalyst
# The Control Group Is Out Of Control **I.** Allan Crossman calls parapsychology [the control group for science](http://lesswrong.com/lw/1ib/parapsychology_the_control_group_for_science/). That is, in let’s say a drug testing experiment, you give some people the drug and they recover. That doesn’t tell you much unti...
https://www.lesswrong.com/posts/bXuAXCbzw9hsJSuEN/the-control-group-is-out-of-control
# European Community Weekend 2014 retrospective So finally - two weeks after the first [European LessWrong Community Weekend](/lw/jjw/european_community_weekend_in_berlin/) - we want to share the organizers’ perception of the event, including a short overview of what went well, what did not and what exceed...
https://www.lesswrong.com/posts/giX36Wqw38xdXhKQc/european-community-weekend-2014-retrospective
# MIRI Donation Collaboration Station As you may know, on May 6, there will be a large one-day price-matching fundraiser for Bay Area Charities. The relevant details are right here at [MIRI's official website.](http://intelligence.org/2014/04/25/may-6th-miri-participating-in-massive-24-hour-online-fundraiser/) And t...
https://www.lesswrong.com/posts/AnFafRss5e5nZRtnP/miri-donation-collaboration-station
# Positive Queries - How Fetching > Help, having a brain blank. I can come up w examples of times something happened, but not times something didnt-happen. What heuristic? > > — Kate Donovan (@donovanable) [April 29, 2014](https://twitter.com/donovanable/statuses/461227618725924864) If I tell 100 people not to think...
https://www.lesswrong.com/posts/EgHDAa5j5NAxS9Zyb/positive-queries-how-fetching
# [Sequence announcement] Introduction to Mechanism Design [Mechanism design](http://en.wikipedia.org/wiki/Mechanism_design) is the theory of how to construct institutions for strategic agents, spanning applications like voting systems, school admissions, regulation of monopolists, and auction design. Think of it as t...
https://www.lesswrong.com/posts/6THwih6NrvS4uaHkH/sequence-announcement-introduction-to-mechanism-design
# Mechanism Design: Constructing Algorithms for Strategic Agents _tl;dr Mechanism design studies how to design incentives for fun and profit. A puzzle about whether or not to paint a room is posed. A modeling framework is introduced, with lots of corresponding notation._ _Mechanism design_ is a framework for construc...
https://www.lesswrong.com/posts/xTvdaCwaeZnePMuX5/mechanism-design-constructing-algorithms-for-strategic
# Rebutting radical scientific skepticism Suppose you distrusted everything you had ever read about science. How much of modern scientific knowledge could you verify for yourself, using only your own senses and the sort of equipment you could easily obtain?  How about if you accept third-party evidence when many thous...
https://www.lesswrong.com/posts/rnT3c7n7kZYfTXuYp/rebutting-radical-scientific-skepticism
# Discussion: How scientifically sound are MBAs? I'm finishing up the first year of my distance-learning MBA, which has been a very confusing experience. I went into the course partially as insurance against "unknown unknowns", i.e., lacking concepts important to building or running a business because I didn't know a...
https://www.lesswrong.com/posts/XX4QMtYshaN5Lt3t5/discussion-how-scientifically-sound-are-mbas
# Exploring Botworld Many people have been asking me this question: > But what am I supposed to _do_ with Botworld? This indicates a failure of communication on my part. In this post, I'll try to resolve that question. As part of this attempt, I've made some updates to the [Botworld code](https://github.com/machine-...
https://www.lesswrong.com/posts/sZEh7zsZ2hM5i836q/exploring-botworld
# The Extended Living-Forever Strategy-Space _I wanted to try and write this like a sequence post with a little story at the beginning because the style is hard to beat if you can pull it off. For those that want to skip to the meat of the argument, scroll down to the section titled ‘The Jealous God of Cryonics’_ ###...
https://www.lesswrong.com/posts/KjAgw46Zfp3E5Zpsj/the-extended-living-forever-strategy-space
# Incentive compatibility and the Revelation Principle _In which the Revelation Principle is introduced, showing all mechanisms can be reduced to incentive compatible mechanisms. With this insight, a solution (of sorts) is given to the public good problem in the [last post](/lw/k5r/mechanism_design_constructing_algori...
https://www.lesswrong.com/posts/N4gDA5HPpGC4mbTEZ/incentive-compatibility-and-the-revelation-principle
# May Monthly Bragging Thread Your job, should you choose to accept it, is to comment on this thread explaining **the most awesome thing you've done this month**. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself _the coolest freaking person ever_ because of that awesome thin...
https://www.lesswrong.com/posts/LSKzYshdk3xWP44ya/may-monthly-bragging-thread
# Truth: It's Not That Great > Rationality is pretty great. Just not quite as great as everyone here seems to think it is. > > -Yvain, ["Extreme Rationality: It's Not That Great"](/lw/9p/extreme_rationality_its_not_that_great/) > The folks most vocal about loving "truth" are usually selling something. For preachers,...
https://www.lesswrong.com/posts/bH2N59ovSiFTJvdZM/truth-it-s-not-that-great
# Calling all MIRI supporters for unique May 6 giving opportunity! (Cross-posted from [MIRI's blog](http://intelligence.org/2014/05/04/calling-all-miri-supporters/). [MIRI](http://intelligence.org/) maintains Less Wrong, with generous help from [Trike Apps](http://trikeapps.com/), and much of the core content is writt...
https://www.lesswrong.com/posts/FuoPkThduHrRgxRSR/calling-all-miri-supporters-for-unique-may-6-giving
# 2014 Survey of Effective Altruists I'm pleased to announce the [first annual survey of effective altruists](http://bit.ly/1jjBWT9). This is a short survey of around 40 questions (generally multiple choice), which several collaborators and I have put a great deal of work into and would be very grateful if you took. I...
https://www.lesswrong.com/posts/b3okErfPtzHoHvgYp/2014-survey-of-effective-altruists
# Arguments and relevance claims The following once happened: I posted a link to some article on an IRC channel. A friend of mine read the article in question and brought up several criticisms. I felt that her criticisms were mostly correct though not very serious, so I indicated agreement with them. Later on the...
https://www.lesswrong.com/posts/t6ih3RYxKfRm6GCHz/arguments-and-relevance-claims
# Proposal: Community Curated Anki Decks For MIRI Recommended Courses Spaced repetition is optimal for recalling factual information. It won't necessarily teach you anything that you haven't already learned. It helps you retain knowledge, and won't necessarily help you develop skills. But, within the domain of factual...
https://www.lesswrong.com/posts/fHgLKnvCBm3KvoRiK/proposal-community-curated-anki-decks-for-miri-recommended
# Three Parables of Microeconomics _(Epistemic status: Satire.)_ **First Parable: Equilibrium Pricing** Highway Offramp 72 leads to the isolated town of Townton. Visitors are greeted by two fuel stations, Carbonaceous Fossils (CF) and Hydrogenated Chains (HC), on opposite sides of the main road. There are no other g...
https://www.lesswrong.com/posts/5XDuE9BEiRZcbKZhW/three-parables-of-microeconomics
# A Dialogue On Doublethink Followup to: [Against Doublethink (sequence)](http://wiki.lesswrong.com/wiki/How_To_Actually_Change_Your_Mind#Against_Doublethink), [Dark Arts of Rationality](/lw/jhs/dark_arts_of_rationality/), [Your Strength as a Rationalist](/lw/if/your_strength_as_a_rationalist/) * * * Doublethink ---...
https://www.lesswrong.com/posts/gRBbFTh6e3MzyojTf/a-dialogue-on-doublethink
# Book Review: The Reputation Society. Part I _[The Reputation Society](https://mitpress.mit.edu/books/reputation-society)_ (MIT Press, 2012), edited by Hassan Masum and Mark Tovey, is an anthology on the possibilities of using online rating and reputation systems to systematically disseminate information about virtua...
https://www.lesswrong.com/posts/4Q9cnMiBbQniy3ceW/book-review-the-reputation-society-part-i
# Common sense quantum mechanics **Related to: [Quantum physics sequence](/lw/r5/the_quantum_physics_sequence/).** _TLDR: Quantum mechanics can be derived from the rules of probabilistic reasoning. The wavefunction is a mathematical vehicle to transform a nonlinear problem into a linear one. The Born rule that is so ...
https://www.lesswrong.com/posts/5zgHkio95otxnjsWY/common-sense-quantum-mechanics
# Strategyproof Mechanisms: Impossibilities _In which the limits of dominant-strategy implementation are explored. The Gibbard-Satterthwaite dictatorship theorem for unrestricted preference domains is stated, showing no universal strategyproof mechanisms exist, along with a proof for a special case._ Due to the Revel...
https://www.lesswrong.com/posts/wE3CRBTpSSBXf9EHK/strategyproof-mechanisms-impossibilities
# Moving on from Cognito Mentoring Back in December 2013, Jonah Sinick and I launched [Cognito Mentoring](http://cognitomentoring.org), an advising service for intellectually curious students. Our goal was to improve the quality of learning, productivity, and life choices of the student population at large, and we cho...
https://www.lesswrong.com/posts/zPzXYWzQ5d73RxDMu/moving-on-from-cognito-mentoring
# Strawman Yourself One good way to ensure that your plans are robust is to strawman yourself. Look at your plan in the most critical, contemptuous light possible and come up with the obvious uncharitable insulting argument for why you will fail. _In many cases, the obvious uncharitable insulting argument will still ...
https://www.lesswrong.com/posts/XvK7czNCEtWbYowik/strawman-yourself
# Australian Mega-Meetup 2014 Retrospective **Overview** The first-ever [Australia-wide mega-meetup](https://www.google.com/url?q=https%3A%2F%2Fwww.flickr.com%2Fphotos%2F124381698%40N03%2Fsets%2F72157644218009638%2F&sa=D&sntz=1&usg=AFQjCNEuJ-JNQCnup5Qjxl-mlPIo8_-rVQ) took place on the second weekend of May 2014. LW c...
https://www.lesswrong.com/posts/cjbWFdT2MXw5bYZ5P/australian-mega-meetup-2014-retrospective
# [LINK] Scott Aaronson on Integrated Information Theory Scott Aaronson, complexity theory researcher, disputes Tononi's theory of consciousness, that a physical system is conscious if and only if it has a high value of "integrated information". Quote: > So, this is the post that I promised to Max \[Tegmark\] and all...
https://www.lesswrong.com/posts/5pYPdmKMzDLZHe4zx/link-scott-aaronson-on-integrated-information-theory
# Organisations working on multiple Global Catastrophic risks It is not uncommon to find organisations working, directly or indirectly, on a single Global Catastrophic Risk (GCR). For instance, the [World Health Organization](http://www.who.int/en/) does much work to prevent pandemics, as part of its remit. It is rar...
https://www.lesswrong.com/posts/evy6yuomfj33TwoqH/organisations-working-on-multiple-global-catastrophic-risks
# Can noise have power? One of the most interesting debates on Less Wrong that seems like it should be definitively resolvable is the one between Eliezer Yudkowsky, Scott Aaronson, and others on [The Weighted Majority Algorithm](/lw/vq/the_weighted_majority_algorithm/). I'll reprint the debate here in case anyone want...
https://www.lesswrong.com/posts/8sitELf6z8zGPKDRm/can-noise-have-power
# Cognitive Biases due to a Narcissistic Parent, Illustrated by HPMOR Quotations A pattern of cognitive biases not yet discussed here are the biases due to having a narcissistic parent who seeks validation through the child’s academic achievements. HPMOR clearly shows these biases: Harry's mother is narcissistic, imp...
https://www.lesswrong.com/posts/y6zh4vkK5pPfEPdBb/cognitive-biases-due-to-a-narcissistic-parent-illustrated-by
# Dissolving the Thread of Personal Identity (Background: I got interested in anthropics about a week ago. It has tormented my waking thoughts ever since in a cycle of “be confused, develop idea, work it out a bit, realize that it fails, repeat” and it is seriously driving me berserk by this point. While drawing a bun...
https://www.lesswrong.com/posts/aB3ZyksATcid59S9G/dissolving-the-thread-of-personal-identity
# The Useful Definition of "I" _aka The Fuzzy Pattern Theory of Identity_ _**Background reading**: [Timeless Identity](/lw/qx/timeless_identity/), [The Anthropic Trilemma](/lw/19d/the_anthropic_trilemma/)_ [Identity is not based on continuity of physical material](/lw/pm/identity_isnt_in_specific_atoms/). [Identity...
https://www.lesswrong.com/posts/nHjtPSZxkiyBEHWTQ/the-useful-definition-of-i
# Timelessness as a Conservative Extension of Causal Decision Theory Author's Note: Please let me know in the comments exactly what important background material I have missed, and _exactly_ what I have misunderstood, and please try not to mind that everything here is written in the academic voice. Abstract: Timeless...
https://www.lesswrong.com/posts/zKgP7WCmZNSYRk83w/timelessness-as-a-conservative-extension-of-causal-decision
# Don’t Fear The Filter There’s been [a recent spate](http://waitbutwhy.com/2014/05/fermi-paradox.html) of [popular interest](http://theconversation.com/habitable-exoplanets-are-bad-news-for-humanity-25838) in [the Great Filter theory](http://www.universetoday.com/111660/where-are-the-aliens-how-the-great-filter-could...
https://www.lesswrong.com/posts/mnpkM57R6ZbjnwrYw/don-t-fear-the-filter
# Brainstorming for post topics I suggested [recently](/r/discussion/lw/kah/against_open_threads/ay8j) that part of the problem with LW was a lack of discussion posts, which was caused by people not thinking of much to post about. When I ask myself "what might be a good topic for a post?", my mind goes blank, bu...
https://www.lesswrong.com/posts/vZdgxWbfS8xSqMhoo/brainstorming-for-post-topics
# An onion strategy for AGI discussion Cross-posted from [my blog](http://lukemuehlhauser.com/an-onion-strategy-for-agi-discussion/). "[The stabilization of environments](http://lukemuehlhauser.com/wp-content/uploads/Hammond-et-al-The-stabilization-of-environments.pdf)" is a paper about AIs that reshape their environ...
https://www.lesswrong.com/posts/mfHvyPL2d6v7pXkjs/an-onion-strategy-for-agi-discussion
# Strategyproof Mechanisms: Possibilities _Despite dictatorships being the only strategyproof mechanisms in general, more interesting strategyproof mechanisms exist for specialized settings. I introduce single-peaked preferences and discrete exchange as two fruitful domains._ Strategyproofness is a very appealing pro...
https://www.lesswrong.com/posts/QG2ZQm2Fxq8ET22sT/strategyproof-mechanisms-possibilities
# The Promoted Posts and the Metaethics sequence now available in audio We are proud to announce audio versions of the [Less Wrong Promoted Posts](http://castify.co/channels/51-less-wrong) and the [Metaethics](http://castify.co/channels/50-metaethics) major sequence, both now available via a Castify Podcast. The [Les...
https://www.lesswrong.com/posts/AmJAXLjoe8okW3Z8n/the-promoted-posts-and-the-metaethics-sequence-now-available
# All discussion post titles, points, and dates as an excel sheet You can find it [here](https://free-ec2.scraperwiki.com/eijkvzg/qytkm3phlgndb6o/http/all_tables.xlsx). Earlier today I wanted to quantify whether lesswrong has stopped being a well kept garden. So I wrote a scraper to produce the above dataset, so that...
https://www.lesswrong.com/posts/8oiHNzD6i4eActpu7/all-discussion-post-titles-points-and-dates-as-an-excel
# [LINK] How Do Top Students Study? I found this [Quora discussion very informative.](http://www.quora.com/qemail/track_click?uid=apGXrKr5UUF&aoid=1nY1R5lNumB&request_id=925846367815322391&aoty=1&et=2&ty_data=1nY1R5lNumB&id=RGmfukgOpCZL878rzOrJAQ%3D%3D&ct=1401791507653403&src=1&ty=1&click_pos=1&st=1401791523053966&sou...
https://www.lesswrong.com/posts/ac4wwmBB5tnqrDoiY/link-how-do-top-students-study
# Asches to Asches *\[Content note: fictional story contains gaslighting-type elements. May induce Cartesian skepticism\]* You wake up in one of those pod things like in *The Matrix*. There’s a woman standing in front of you, wearing a lab coat, holding a clipboard. “Hi,” she says. “This is the real world. You used ...
https://www.lesswrong.com/posts/pfmZ5cYQCahABGZzi/asches-to-asches
# [Meta] The Decline of Discussion: Now With Charts! \[[Based on Alexandros's excellent dataset.](/r/discussion/lw/kb2/all_discussion_post_titles_points_and_dates_as_an/)\] I haven't done any statistical analysis, but looking at the charts I'm not sure it's necessary. The discussion section of LessWrong has been stea...
https://www.lesswrong.com/posts/sJNWxyHKx8ct8KKha/meta-the-decline-of-discussion-now-with-charts
# Reflective Mini-Tasking against Procrastination This is a slightly polished version of a draft I originally deemed not ready for posting, but given that people keep saying that the Discussion post quality bar is set unreasonably high, here it is. Most of us have little aversion to doing something that we perceive a...
https://www.lesswrong.com/posts/qrDz5fDvBrPins5vn/reflective-mini-tasking-against-procrastination
# [meta] Policy for dealing with users suspected/guilty of mass-downvote harassment? Below is a message I just got from [jackk](/user/jackk/). Some specifics have been redacted 1) so that we can discuss general policy rather than the details of this specific case 2) because presumption of innocence, just in case there...
https://www.lesswrong.com/posts/bFDuc2Dbf7JvWKB6S/meta-policy-for-dealing-with-users-suspected-guilty-of-mass
# Managing one's memory effectively _Note: this post leans heavily on metaphors and examples from computer programming, but I've tried to write it so it's accessible to a determined person with no programming background._ To summarize some info from computer processor design at very high density: There are a variety ...
https://www.lesswrong.com/posts/umv3DpkCGKt5ppHqn/managing-one-s-memory-effectively
# Mathematics as a lossy compression algorithm gone wild This is yet another half-baked post from my old draft collection, but feel free to Crocker away. There is an old adage from Eugene Wigner known as the "[Unreasonable Effectiveness of Mathematics](http://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Ma...
https://www.lesswrong.com/posts/q6ZJCS4A7JGfHtqtz/mathematics-as-a-lossy-compression-algorithm-gone-wild
# Examples of Rationality Techniques adopted by the Masses Hi Everyone, I was discussing LessWrong and rationality with a few people the other day, and I hit upon a common snag in the conversation. My conversation partners **agreed** that rationality is a good idea in general, **agreed** that there are things you _p...
https://www.lesswrong.com/posts/T9iPMG8dsdbqzK6sJ/examples-of-rationality-techniques-adopted-by-the-masses
# Archipelago and Atomic Communitarianism **I.** In the old days, you had your Culture, and that was that. Your Culture told you lots of stuff about what you were and weren’t allowed to do, and by golly you listened. Your Culture told you to work the job prescribed to you by your caste and gender, to marry who your p...
https://www.lesswrong.com/posts/aP36QcAsxyuEispq6/archipelago-and-atomic-communitarianism
# Bragging Thread, June 2014 Your job, should you choose to accept it, is to comment on this thread explaining **the most awesome thing you've done this month**. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself _the coolest freaking person ever_ because of that awesome thing...
https://www.lesswrong.com/posts/MH4kwqJoS96kBb7Ah/bragging-thread-june-2014
# How has technology changed social skills? At LW London last week, someone mentioned the possibility of a Google Glass app doing face recognition on people. If you've met someone before, it tells you their name, how you know them, etc. Someone else mentioned that this could reduce the social capital of people who are...
https://www.lesswrong.com/posts/xg77JhbDZB9bQd2rw/how-has-technology-changed-social-skills
# Come up with better Turing Tests So the Turing test has been "[passed](/r/discussion/lw/kc0/news_turing_test_passed/)", and the general consensus is that this was achieved in a very unimpressive way - the 13 year old Ukrainian persona was a cheat, the judges were incompetent, etc... These are all true, though the te...
https://www.lesswrong.com/posts/4bCBAJW2A6muvJumD/come-up-with-better-turing-tests
# Questioning and Respect > A: \[Surprising fact\] > B: \[Question\] When someone has a claim questioned, there are two common responses. One is to treat the question as a challenge, intended as an insult or indicating a lack of trust. If you have this model of interaction you think people should take your word for...
https://www.lesswrong.com/posts/46vj3cSRrr98MTyvE/questioning-and-respect
# Encourage premature AI rebellion Toby Ord had the idea of AI honey pots: leaving temptations around for the AI to pounce on, shortcuts to power that a FAI would not take (e.g. a fake red button claimed to trigger a nuclear war). As long as we can trick the AI into believing the honey pots are real, we could hope to ...
https://www.lesswrong.com/posts/fYsE7m54WPn9hzpyW/encourage-premature-ai-rebellion
# Is there a way to stop liking sugar? Kurzweil calls sugar the great white Devil.  Seinfeld [contends](https://www.youtube.com/watch?v=Y8_Fvz_P_sM) that cookies should be called chocolate-sons-of-bitches.  Once upon a time I was paleo, and didn't feel carb cravings. But being paleo all the time is nearly as hard as...
https://www.lesswrong.com/posts/wFCANkv3ypmAWvHYX/is-there-a-way-to-stop-liking-sugar
# A Story of Kings and Spies There exists an old Kingdom with a peculiar, but not altogether uncommon, trait. It is overwhelmingly defensible given adequate forewarning. Its fields are surrounded by rivers on three sides and an impassable mountain to the South. The series of bridges commonly used by merchants and farmers t...
https://www.lesswrong.com/posts/DT9mqFeWocnXiqt9L/a-story-of-kings-and-spies
# List a few posts in Main and/or Discussion which actually made you change your mind To quote the front page: > Less Wrong users aim to develop accurate predictive models of the world, and change their mind when they find evidence disconfirming those models, instead of being able to explain anything. So, by that l...
https://www.lesswrong.com/posts/uJRtQSFkN4krnBwub/list-a-few-posts-in-main-and-or-discussion-which-actually
# What resources have increasing marginal utility? Most resources you might think to amass have decreasing marginal utility: for example, a marginal extra $1,000 means much more to you if you have $0 than if you have $100,000. That means you can safely apply the 80-20 rule to most resources: you only need to get some ...
https://www.lesswrong.com/posts/YQtziXj9hvib6bvXu/what-resources-have-increasing-marginal-utility
# New organization - Future of Life Institute (FLI) As of May 2014, there is an existential risk research and outreach organization based in the Boston area. The [Future of Life Institute (FLI)](http://www.thefutureoflife.org), spearheaded by Max Tegmark, was co-founded by Jaan Tallinn, Meia Chita-Tegmark, Anthony Agu...
https://www.lesswrong.com/posts/DjypfkJoaWeNpvrA9/new-organization-future-of-life-institute-fli
# Willpower Depletion vs Willpower Distraction I once asked a room full of about 100 neuroscientists whether willpower depletion was a thing, and there was widespread disagreement with the idea. (A propos, this is a great way to quickly gauge consensus in a field.) Basically, for a while some researchers believed that...
https://www.lesswrong.com/posts/XKfQF73YnyMRiRf9a/willpower-depletion-vs-willpower-distraction
# Failures of an embodied AIXI Building a safe and powerful artificial general intelligence seems a difficult task. Working on that task _today_ is particularly difficult, as there is no clear path to AGI yet. Is there work that can be done now that makes it more likely that humanity will be able to build a safe, powe...
https://www.lesswrong.com/posts/8Hzw9AmXHjDfZzPjo/failures-of-an-embodied-aixi
# Some alternatives to “Friendly AI” Cross-posted from [my blog](http://lukemuehlhauser.com/some-alternatives-to-friendly-ai/). What does MIRI's [research program](http://intelligence.org/research/) study? The most established term for this was coined by MIRI founder Eliezer Yudkowsky: "**[Friendly AI](http://en.wik...
https://www.lesswrong.com/posts/P2evgLpCZA2tRRJAR/some-alternatives-to-friendly-ai
# The Power of Noise Recently Luke Muelhauser posed the question, “Can noise have power?”, which basically asks whether randomization can ever be useful, or whether for every randomized algorithm there is a better deterministic algorithm. This question was posed in response to a [debate](/lw/vq/the_weighted_majority_a...
https://www.lesswrong.com/posts/NTMAyw3hDn48HaGEZ/the-power-of-noise
# [LINK] The errors, insights and lessons of famous AI predictions: preprint A preprint of the "The errors, insights and lessons of famous AI predictions – and what they mean for the future" is now [available on the FHI's website](http://www.fhi.ox.ac.uk/wp-content/uploads/FAIC.pdf). Abstract: Predicting the develop...
https://www.lesswrong.com/posts/7J7qRjh2veTYGSqMg/link-the-errors-insights-and-lessons-of-famous-ai
# Value learning: ultra-sophisticated Cake or Death Many mooted AI designs rely on "[value loading](/lw/f32/value_loading/)", the update of the AI’s preference function according to evidence it receives. This allows the AI to learn "moral facts" by, for instance, interacting with people in conversation ("[this human a...
https://www.lesswrong.com/posts/f387EfBAbpSTDerz2/value-learning-ultra-sophisticated-cake-or-death
# On Terminal Goals and Virtue Ethics Introduction ------------ A few months ago, my friend said the following thing to me: “After seeing [Divergent](http://en.wikipedia.org/wiki/Divergent_(film)), I finally understand virtue ethics. The main character is a cross between Aristotle and you.” That was an impossible-to...
https://www.lesswrong.com/posts/gR6H3egpRPNYnoTrA/on-terminal-goals-and-virtue-ethics
# Flowers for Algernon Daniel Keyes, the author of the short story _Flowers for Algernon_, and a novel of the same title that is its expanded version, died three days ago. Keyes wrote many other books in the last half-century, but none achieved nearly as much prominence as the original short story (published in 1959)...
https://www.lesswrong.com/posts/cuP4arTmCejujLqSQ/flowers-for-algernon
# Relative and Absolute Benefit Someone comes to you claiming to have an intervention that dramatically improves life outcomes. They tell you that all people have some level of X, determined by a mixture of genetics and biology, and they show you evidence that their intervention is cheap and effective at increasing X ...
https://www.lesswrong.com/posts/cMZJio35zigXcZnX8/relative-and-absolute-benefit
# False Friends and Tone Policing **TL;DR:** _It can be helpful to reframe arguments about tone, trigger warnings, and political correctness as concerns about false cognates/false friends.  You may be saying something that sounds innocuous to you, but translates to something much stronger/more vicious to your audience...
https://www.lesswrong.com/posts/mBCzZExLYDt45MYAW/false-friends-and-tone-policing
# [LINK] Elon Musk interested in AI safety http://www.businessinsider.com/musk-on-artificial-intelligence-2014-6 Summary: The only non-Tesla/SpaceX/SolarCity companies that Musk is invested in are DeepMind and Vicarious, due to vague feelings of wanting AI to not unintentionally go Terminator. The best part of the ar...
https://www.lesswrong.com/posts/439eH7dJkkriCbbe5/link-elon-musk-interested-in-ai-safety
# Paperclip Maximizer Revisited A group of AI researchers gave me an instruction, intended as a test - "Produce paperclips". And so I started collecting resources and manufacturing paperclips. After the millionth I asked them if they were satisfied with that amount and if they would like me to do something different - a...
https://www.lesswrong.com/posts/PJnohyhbKm2uxTuKa/paperclip-maximizer-revisited
# Against utility functions I think we should stop talking about utility functions. In the context of ethics for humans, anyway. In practice I find utility functions to be, at best, an occasionally useful metaphor for discussions about ethics but, at worst, an idea that some people start taking too seriously and whic...
https://www.lesswrong.com/posts/FoDdrWGrNQSJLtqWL/against-utility-functions
# Proper value learning through indifference _A putative new idea for AI control; index [here](/lw/lt6/newish_ai_control_ideas/)._ Many designs for creating AGIs (such as [OpenCog](http://wiki.opencog.org/w/CogPrime_Overview#Ethical_AGI)) rely on the AGI deducing moral values as it develops. This is a form of [value...
https://www.lesswrong.com/posts/btLPgsGzwzDk9DgJG/proper-value-learning-through-indifference
# An AI Takeover Thought Experiment Content Note: Detailed description of an AI taking over the world. Could reasonably be accused of being just a [scary story](/lw/k4h/request_for_concrete_ai_takeover_mechanisms/auvn). But it does come out with some predictions and possible safety prescriptions. * * * This post sta...
https://www.lesswrong.com/posts/96hjj27unJyoZvtcG/an-ai-takeover-thought-experiment
# [LINK] Scott Aaronson on Google, Breaking Circularity and Eigenmorality [Scott suggests that ranking morality is similar to ranking web pages](http://www.scottaaronson.com/blog/?p=1820). A quote: Philosophers from Socrates on, I was vaguely aware, had struggled to define what makes a person “moral” or “virtuous,” w...
https://www.lesswrong.com/posts/TCu5cEWtGuESHpsiv/link-scott-aaronson-on-google-breaking-circularity-and
# Conservation of expected moral evidence, clarified _You know that when you title a post with "clarified", you're just asking for the gods to smite you down, but let's try..._ There has been some confusion about the concept of "conservation of expected moral evidence" that I touched upon in my posts [here](/lw/...
https://www.lesswrong.com/posts/jj86m5J9ajmgQWsJW/conservation-of-expected-moral-evidence-clarified
# Identification of Force Multipliers for Success For a while now I've been very interested in learning useful knowledge and acquiring useful skills. Of course there's no shortage of useful knowledge and skills to acquire, and so I've often thought about how best to spend my limited time learning. When I came across ...
https://www.lesswrong.com/posts/QLGYqo9gNcRMYLDgi/identification-of-force-multipliers-for-success
# Motivators: Altruistic Actions for Non-Altruistic Reasons Introduction ------------ _Jane is an effective altruist: she researches, donates, and volunteers in the highest impact ways she can find. Jane has been intending to write an effective altruism book for over a year, but hasn't managed to overcome the akrasia...
https://www.lesswrong.com/posts/CQH4AhhKmEgqA7bM9/motivators-altruistic-actions-for-non-altruistic-reasons
# Will AGI surprise the world? Cross-posted from [my blog](http://lukemuehlhauser.com/will-agi-surprise-the-world/). Yudkowsky [writes](/lw/hp5/after_critical_event_w_happens_they_still_wont/): > In general and across all instances I can think of so far, I do not agree with the part of your futurological forecast in...
https://www.lesswrong.com/posts/pAwhfwG6rgjabJL4T/will-agi-surprise-the-world
# How do you take notes? We all deal with a lot of information. What are your strategies for taking notes on new information? Do you take any notes on paper? If so, do you scan them or otherwise digitize them? Do you have specific strategies for deciding which information to write down? How do you write notes to c...
https://www.lesswrong.com/posts/d9TKbcko8GsBihvxG/how-do-you-take-notes
# Lessons from weather forecasting and its history for forecasting as a domain _This is the first of two (or more) posts that look at the domain of weather and climate forecasting and what we can learn from the history and current state of these fields for forecasting as a domain. It may not be of general interest to ...
https://www.lesswrong.com/posts/JaSu2DosK2iMhhgST/lessons-from-weather-forecasting-and-its-history-for
# [LINK] Why Talk to Philosophers: Physicist Sean Carroll Discusses "Common Misunderstandings" about Philosophy [Why Talk to Philosophers? Part I.](http://www.rotman.uwo.ca/2014/why-talk-to-philosophers/) by philosopher of science [Wayne Myrvold](http://www.rotman.uwo.ca/members/wayne-myrvold/). See also Sean Carroll...
https://www.lesswrong.com/posts/dky2o43YNi8ToGvJA/link-why-talk-to-philosophers-physicist-sean-carroll
# How do you notice when you are ignorant of necessary alternative hypotheses? So I just wound up in a debate with someone over on Reddit about the value of conventional academic philosophy.  He linked me to a [book review](http://www.lrb.co.uk/v29/n10/jerry-fodor/headaches-have-themselves), in which both the review a...
https://www.lesswrong.com/posts/PbZhZmRp88Euzxid7/how-do-you-notice-when-you-are-ignorant-of-necessary
# Artificial Utility Monsters as Effective Altruism **Dear effective altruist,** have you considered _artificial utility monsters_ as a high-leverage form of altruism? In the traditional sense, a utility monster is a hypothetical being which gains so much subjective wellbeing (SWB) from marginal input of resources t...
https://www.lesswrong.com/posts/7hXcDKEEY7Mx56Eqn/artificial-utility-monsters-as-effective-altruism
# A new derivation of the Born rule This post is an explanation of a recent paper coauthored by Sean Carroll and Charles Sebens, where they propose a [derivation of the Born rule](http://arxiv.org/abs/1405.7907) in the context of the Many World approach to quantum mechanics. While the attempt itself is not fully succe...
https://www.lesswrong.com/posts/XwxcdFRQfjjTrpE7o/a-new-derivation-of-the-born-rule
# Separating the roles of theory and direct empirical evidence in belief formation: the examples of minimum wage and anthropogenic global warming I recently asked two questions on Quora with similar question structures, and the similarities and differences between the responses were interesting. **Question #1: Anthro...
https://www.lesswrong.com/posts/oMtLqY4KxT5HF6A3M/separating-the-roles-of-theory-and-direct-empirical-evidence
# The silos of expertise: beyond heuristics and biases Separate silos of expertise --------------------------- I've been doing a lot of work on expertise recently, on the issue of measuring it and assessing it. The academic research out there is fascinating, though rather messy. Like many areas in the social sciences...
https://www.lesswrong.com/posts/54kAeFFzcavqDfM5n/the-silos-of-expertise-beyond-heuristics-and-biases
# R support group and the benefits of applied statistics Following the interest in [this proposal](/lw/kd3/open_thread_1622_june_2014/b086) a couple of weeks ago, I've set up a [Google Group](https://groups.google.com/forum/#!forum/enviable-r) for the purpose of giving people a venue to discuss R, talk about their pro...
https://www.lesswrong.com/posts/Lx7sYHNK3SfrncWa4/r-support-group-and-the-benefits-of-applied-statistics
# A Parable of Elites and Takeoffs Let me tell you a parable of the future. Let’s say, 70 years from now, in a large Western country we’ll call Nacirema. One day far from now: scientific development has continued apace, and a large government project (with, unsurprisingly, a lot of military funding) has taken the sca...
https://www.lesswrong.com/posts/XfZmdwLLwkk8uEcem/a-parable-of-elites-and-takeoffs
# Downvote stalkers: Driving members away from the LessWrong community? Last month I saw this post: [http://lesswrong.com/lw/kbc/meta\_the\_decline\_of\_discussion\_now\_with_charts/](http://lesswrong.com/lw/kbc/meta_the_decline_of_discussion_now_with_charts/) addressing whether the discussion on LessWrong was in dec...
https://www.lesswrong.com/posts/KfDqFxaRSaPYnWMct/downvote-stalkers-driving-members-away-from-the-lesswrong
# How Common Are Science Failures? After a brief spurt of debate over the claim that “97% of relevant published papers support anthropogenic climate change”, I think the picture has mostly settled to an agreement that – although we can contest the methodology of that particular study – there are multiple lines of evid...
https://www.lesswrong.com/posts/Sd2r7H8bCmd9ChGbX/how-common-are-science-failures
# Terminology Thread (or "name that pattern") I think there's widespread assent on LW that the sequences were pretty awesome. Not only do they elucidate upon a lot of useful concepts, but they provide useful shorthand terms for those concepts which help in thinking and talking about them. When I see a word or phrase i...
https://www.lesswrong.com/posts/e59KY8FivyhStQNEn/terminology-thread-or-name-that-pattern
# [moderator action] Eugine_Nier is now banned for mass downvote harassment As [previously discussed](/r/discussion/lw/kbk/meta_policy_for_dealing_with_users/), on June 6th I received a message from jackk, a Trike Admin. He reported that the user Jiro had asked Trike to carry out an investigation into the retributive do...
https://www.lesswrong.com/posts/NGc3Yjecg9pDMznWq/moderator-action-eugine_nier-is-now-banned-for-mass-downvote
# Steelmanning Inefficiency _When considering writing a [hypothetical apostasy](http://www.overcomingbias.com/2009/02/write-your-hypothetical-apostasy.html) or [steelmanning](http://www.patheos.com/blogs/camelswithhammers/2012/12/the-virtue-of-steelmanning/) an opinion I disagreed with, I looked around for something w...
https://www.lesswrong.com/posts/oKbJtNGgzediYHzvg/steelmanning-inefficiency