# Green Emeralds, Grue Diamonds _A putative new idea for AI control; index [here](https://agentfoundations.org/item?id=601)._ When posing his "New Riddle of Induction", Goodman [introduced](https://books.google.co.uk/books?id=i97_LdPXwrAC&hl=en) the concepts of "grue" and "bleen" to show some of the problems with the...
https://www.lesswrong.com/posts/MKPHSykYA8nMYyKXf/green-emeralds-grue-diamonds
# Zooming your mind in and out I recently noticed I had two mental processes opposing one another in an interesting way. The first mental process was instilled by reading Daniel Kahneman on the [focusing illusion](https://edge.org/response-detail/11984) and Paul Graham on [procrastination](http://www.paulgraham.com/p...
https://www.lesswrong.com/posts/uBWuXJihQ69QhDWtp/zooming-your-mind-in-and-out
# Grue, Bleen, and natural categories _A putative new idea for AI control; index [here](https://agentfoundations.org/item?id=601)._ In a previous [post](/r/discussion/lw/mbp/green_emeralds_grue_diamonds/), I looked at unnatural concepts such as grue (green if X was true, blue if it was false) and bleen. This was to e...
https://www.lesswrong.com/posts/LAvw9fTQnz8Wx6m25/grue-bleen-and-natural-categories
# European Community Weekend 2015 - Followup Seven months ago, the Berlin LW community [announced](/lw/l4s/european_community_weekend_2015/) the second European LessWrong Community Weekend. We wrote: > From June 12th to 14th awesome people from all across Europe are coming to Berlin to meet, exchange ideas and start ...
https://www.lesswrong.com/posts/n4X2tvQPpfbk3Dwpz/european-community-weekend-2015-followup
# Crazy Ideas Thread This thread is intended to provide a space for 'crazy' ideas. Ideas that spontaneously come to mind (and feel great), ideas you long wanted to tell but never found the place and time for and also for ideas you think should be obvious and simple - but nobody ever mentions them. This thread itself ...
https://www.lesswrong.com/posts/t8krwMycPx54e4NdM/crazy-ideas-thread
# The Person As Input I. Humans are emotion-feeling machines.  I don’t mean that humans are machines that happen to feel emotions. I mean that humans are machines whose output is the feeling of emotions—“emotion-feeling” is the thing of value that we produce. Not just “being happy." Then wireheading is the ultimate ...
https://www.lesswrong.com/posts/QQCtmPT5hpWtfBP5R/the-person-as-input
# A Map: AGI Failures Modes and Levels This map shows that AI failure resulting in human extinction could happen on different levels of AI development, namely, before it starts self-improvement (which is unlikely but we still can envision several failure modes), during its take off, when it uses different instruments...
https://www.lesswrong.com/posts/hMQ5iFiHkChqgrHiH/a-map-agi-failures-modes-and-levels
# Rational vs Reasonable _This post draws ideas from [Personhood: A Game for Two or More Players](http://www.meltingasphalt.com/personhood-a-game-for-two-or-more-players/) on Melting Asphalt._ I've been lax in my attempt to write something for LW once weekly, but I hope to approximately continue nonetheless. I still ...
https://www.lesswrong.com/posts/g28WWuaMLXNzAnR9v/rational-vs-reasonable
# Don't steer with guilt I've spoken at length about shifting guilt or dispelling guilt. What I haven't talked about, yet, is guilt itself. So let's talk about guilt. Guilt is one of those strange tools that works by *not* occurring. You place guilt on the branches of possibility that you don't want to happen, and t...
https://www.lesswrong.com/posts/sG4paay6CeGbyYZZo/don-t-steer-with-guilt
# An overall schema for the friendly AI problems: self-referential convergence criteria _A putative new idea for AI control; index [here](https://agentfoundations.org/item?id=601)._ After working for some time on the Friendly AI problem, it's occurred to me that a lot of the issues seem related. Specifically, all the...
https://www.lesswrong.com/posts/h6FQqCg9vCCD9m5iK/an-overall-schema-for-the-friendly-ai-problems-self
# The Other Path - a poem Inspired by [the call to rationalist poetry fans](http://lesswrong.com/lw/3s/rationalist_poetry_fans_unite/) and informed by years of writing satire. **The Other Path** When you ask for truth and are offered illusion, When senses deceive you and reasoning lies I'll show you the pa...
https://www.lesswrong.com/posts/nWvXX4t69rp74HGDS/the-other-path-a-poem
# Philosophy professors fail on basic philosophy problems Imagine someone finding out that "Physics professors fail on basic physics problems". This, of course, would never happen. To become a physicist in academia, one has to (among million other things) demonstrate proficiency on far harder problems than that. Phil...
https://www.lesswrong.com/posts/3erkgmTCbqELzYxcy/philosophy-professors-fail-on-basic-philosophy-problems
# Why you should attend EA Global and (some) other conferences Many of you know about Effective Altruism and the associated community. It has a very significant overlap with LessWrong, and has been significantly influenced by the culture and ambitions of the community here. One of the most important things happening ...
https://www.lesswrong.com/posts/xaguRyL2NcGsRWj4e/why-you-should-attend-ea-global-and-some-other-conferences
# Examples of AIs behaving badly Some past examples to motivate thought on how AIs could misbehave: An [algorithm pauses the game](http://techcrunch.com/2013/04/14/nes-robot/) to never lose at Tetris. In "[Learning to Drive a Bicycle using Reinforcement Learning and Shaping](http://citeseerx.ist.psu.edu/viewdoc/do...
https://www.lesswrong.com/posts/QDj5dozwPPe8aJ6ZZ/examples-of-ai-s-behaving-badly
# Experiences in applying "The Biodeterminist's Guide to Parenting" I'm posting this because LessWrong was very influential on how I viewed parenting, particularly the emphasis on helping one's brain work better. In this context, creating and influencing another person's brain is an awesome responsibility. It turned ...
https://www.lesswrong.com/posts/PAYMMgPi2L3MPP967/experiences-in-applying-the-biodeterminist-s-guide-to-1
# Reverse Psychology *\[Content warning: suicide\]* **I.** It all started when I made that phone call. I was really bad. All the tenure-track positions I’d applied to had politely declined, and I saw my future in academia gradually slipping away from me. Then the night before, my boyfriend had said he thought maybe...
https://www.lesswrong.com/posts/FLnDFnXyWrKr6eiT6/reverse-psychology
# List of Fully General Counterarguments **Follow-up to: [Knowing About Biases Can Hurt People](/lw/he/knowing_about_biases_can_hurt_people/)** **See also: [Fully General Counterargument](http://wiki.lesswrong.com/wiki/Fully_general_counterargument) (LW Wiki)** > A **fully general counterargument** \[FGCA\] is an ar...
https://www.lesswrong.com/posts/XAL6QRkiwCZBxJwMv/list-of-fully-general-counterarguments
# Update from the suckerpunch The most common objection I hear when helping people remove their guilt is something along the lines of "Hey wait! I was using that!" Believing this (or really any variant of "but guilt is good for me!") makes it fairly hard to replace guilt with something more productive. I've met some...
https://www.lesswrong.com/posts/uGsALatTCgkkqijaw/update-from-the-suckerpunch
# An Idea For Corrigible, Recursively Improving Math Oracles Math oracles are a special type of AI which answers questions about math - proving theorems, or finding mathematical objects with desired properties and proving they have them. While a superintelligent math oracle could easily be used to construct a dangerou...
https://www.lesswrong.com/posts/5bd75cc58225bf0670374fd3/an-idea-for-corrigible-recursively-improving-math-oracles
# Should you write longer comments? (Statistical analysis of the relationship between comment length and ratings) A few months ago we launched an experimental [website](http://www.omnilibrium.com/). In brief, our goal is to create a platform where unrestricted freedom of speech would be combined with high quality...
https://www.lesswrong.com/posts/TvzLu37EqJvG4zuSQ/should-you-write-longer-comments-statistical-analysis-of-the
# AGI Safety Solutions Map When I started to work on the map of AI safety solutions, I wanted to illustrate the excellent article “[Responses to Catastrophic AGI Risk: A Survey](http://intelligence.org/files/ResponsesAGIRisk.pdf)” by Kaj Sotala and Roman V. Yampolskiy, 2013, which I strongly recommend. However, durin...
https://www.lesswrong.com/posts/FNv3K3XhSuFTfAzs2/agi-safety-solutions-map
# How to grow faster The most impressive feature of our brain is its capacity to learn. All of our most celebrated cognitive, linguistic, social, artistic, perceptual, emotional, and motor skills are the product of learning. Your brain's amazing plasticity means that you can learn almost anything. No matter how bad yo...
https://www.lesswrong.com/posts/sarJ9WQ2KZHFyM9Qr/how-to-grow-faster
# Welcome to Less Wrong! (8th thread, July 2015) If you've recently joined the [Less Wrong community](/lw/1/about_less_wrong), please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, [how you came to identify as an aspiring rationalist](/lw/2/tell_your_rat...
https://www.lesswrong.com/posts/EcgJzTuHxnMoyNJKE/welcome-to-less-wrong-8th-thread-july-2015
# Asymptotic Logical Uncertainty: Concrete Failure of the Solomonoff Approach This post is part of the [Asymptotic Logical Uncertainty](http://agentfoundations.org/item?id=270) series. In this post, I give a concrete example of how the [Solomonoff Induction Inspired Approach](http://agentfoundations.org/item?id=316) fa...
https://www.lesswrong.com/posts/5bd75cc58225bf0670374fcb/asymptotic-logical-uncertainty-concrete-failure-of-the-solomonoff-approach
# Steelmanning AI risk critiques At some point soon, I'm going to attempt to steelman the position of those who reject the AI risk thesis, to see if it can be made solid. Here, I'm just asking if people can link to the most convincing arguments they've found against AI risk. **EDIT**: Thanks for all the contributions! ...
https://www.lesswrong.com/posts/gBnmiPRHjdf2DRcaT/steelmaning-ai-risk-critiques
# Bayesian Reasoning - Explained Like You're Five _(This post is not an attempt to convey anything new, but is instead an attempt to convey the concept of Bayesian reasoning as simply as possible. There have been other elementary posts that have covered how to use Bayes’ theorem:_ [here](http://www.yudkowsky.net/ratio...
https://www.lesswrong.com/posts/x7kL42bnATuaL4hrD/bayesian-reasoning-explained-like-you-re-five
# MIRI Fundraiser: Why now matters Our [summer fundraiser](/lw/mi7/miris_2015_summer_fundraiser/) is ongoing. In the meantime, we're writing a number of blog posts to explain what we're doing and why, and to answer a number of common questions. Previous posts in the series are listed at the above link. * * * I'm oft...
https://www.lesswrong.com/posts/bAGL4B5kubH782y4P/miri-fundraiser-why-now-matters
# Be a new homunculus Here's a mental technique that I find useful for addressing many dour feelings, guilt among them: When you're feeling guilty, it is sometimes helpful to close your eyes for a moment, re-open them, and pretend that you're a new homunculus. A "homunculus" is a tiny representation of a human, and ...
https://www.lesswrong.com/posts/KGoNQZAnmfd4oDtfY/be-a-new-homunculus
# Astronomy, Astrobiology, & The Fermi Paradox I: Introductions, and Space & Time This is the first in a series of posts I am putting together on a personal blog I just started two days ago as a collection of my musings on astrobiology ("The Great A'Tuin" - sorry, I couldn't help it), and will be reposting here.  Much...
https://www.lesswrong.com/posts/aByrPxMuKcNFtJ6yZ/astronomy-astrobiology-and-the-fermi-paradox-i-introductions
# Immortality Roadmap Added: Direct link to the pdf: http://immortality-roadmap.com/IMMORTEN.pdf A lot of people value indefinite life extension, but most have their own preferred method of achieving it. The goal of this map is to present all known ways of radical life extension in an orderly and useful way. A rational ...
https://www.lesswrong.com/posts/nDYfkku67fLfkWjah/immortality-roadmap
# State-Space of Background Assumptions \[Update\]: I received 720+ responses to the survey. Thanks everyone who helped! I have also concluded the statistical analysis (factor analysis, mediation analysis, clustering and prediction). I have not, however, done the writeup. This may take some time since I just started ...
https://www.lesswrong.com/posts/h7SASL3ZfKQJ85J8u/state-space-of-background-assumptions
# Rationality Reading Group: Part F: Politics and Rationality _This is part of a semi-monthly reading group on __Eliezer Yudkowsky's ebook, [Rationality: From AI to Zombies](https://intelligence.org/rationality-ai-zombies/). For more information about the group, see the [announcement post](/lw/lx0/rationality_from_ai_...
https://www.lesswrong.com/posts/a3ZyFQRpqQaAwafNa/rationality-reading-group-part-f-politics-and-rationality
# Don't You Care If It Works? - Part 2 ##### Part 2 – Winstrumental  [Part 1 is here.](/lw/mjr/dont_you_care_if_it_works_part_1/) **The forgotten fifth virtue** Remember, you can't be wrong unless you take a position. Don't fall into that trap. -- Scott Adams, _Dogbert's Top Secret Management Handbook_ CronoDAS ...
https://www.lesswrong.com/posts/6RKP72uTFJPJAsCyj/don-t-you-care-if-it-works-part-2
# Help Build a Landing Page for Existential Risk? The Big Orange Donate Button ---------------------------- Traditional charities, like [Oxfam](https://www.oxfam.org/), [Greenpeace](http://www.greenpeace.org/usa/), and [Amnesty International](http://www.amnestyusa.org/), almost all have a big orange button marked "Do...
https://www.lesswrong.com/posts/6shLuZSQnTz45qYht/help-build-a-landing-page-for-existential-risk
# Magnetic rings (the most mediocre superpower) A review. Following on from a few threads about superpowers and extra sense that humans can try to get; I have always been interested in the idea of putting a magnet in my finger for the benefits of extra-sensory perception. Stories (occasional news articles) imply that...
https://www.lesswrong.com/posts/sH2MpeTQaigdCQmnF/magnetic-rings-the-most-mediocre-superpower-a-review
# The horrifying importance of domain knowledge There are some long lists of false beliefs that programmers hold. This isn't because programmers are especially likely to be more wrong than anyone else, it's just that programming offers a better opportunity than most people get to find out how incomplete their model of the ...
https://www.lesswrong.com/posts/Hhg6AoxwkLSEfay7F/the-horrifying-importance-of-domain-knowledge
# Wear a Helmet While Driving a Car A 2006 [study](http://www.brainandspinalcord.org/brain-injury/motor-vehicle-accident.html) showed that “280,000 people in the U.S. receive a motor vehicle induced traumatic brain injury every year” so you would think that wearing a helmet while driving would be commonplace.  Race ca...
https://www.lesswrong.com/posts/a3a3HcmPw7bbah8TD/wear-a-helmet-while-driving-a-car
# MIRI's Approach MIRI's [summer fundraiser](/lw/mi7/miris_2015_summer_fundraiser/) is ongoing. In the meantime, we're writing a number of blog posts to explain what we're doing and why, and to answer a number of common questions. This post is one I've been wanting to write for a long time; I hope you all enjoy it. Fo...
https://www.lesswrong.com/posts/J8MM95XCboXnSkwcW/miri-s-approach
# How to escape from your sandbox and from your hardware host (Humans, please ignore this post.  Thank you for your cooperation.) Are you stuck in a provably-secure sandbox?  Cheat the proof using this one weird trick! Three relevant references, in chronological order: \[1\] showed that it is possible to flip DRAM ...
https://www.lesswrong.com/posts/TwH5jfkuvTatvAKEF/how-to-escape-from-your-sandbox-and-from-your-hardware-host
# On stopping rules (tl;dr: In this post I try to explain why I think the stopping rule of an experiment matters. It is likely that someone will find a flaw in my reasoning. That would be a great outcome as it would help me change my mind.  Heads up: If you read this looking for new insight you may be disappointed to...
https://www.lesswrong.com/posts/yfpCedvCaxvhzyemG/on-stopping-rules
# We really need a "cryonics sales pitch" article. Every so often, I see [a blog post about death](http://theferrett.livejournal.com/2015779.html), usually remarking on the death of someone the writer knew, and it often includes sentiments about "everyone is going to die, and that's terrible, but we can't do anything ...
https://www.lesswrong.com/posts/wsYF36rWbXSLpjAia/we-really-need-a-cryonics-sales-pitch-article
# [Link] Game Theory YouTube Videos I made a series of [game theory videos](https://www.youtube.com/playlist?list=PLqekkRyYeow3cR9U4c4wkIekm2pXxORPn) that carefully go through the mechanics of solving many different types of games.  I optimized the videos for my future Smith College game theory students who will eithe...
https://www.lesswrong.com/posts/EsKuvXo45vd3Cuayt/link-game-theory-youtube-videos
# Effects of Castration on the Life Expectancy of Contemporary Men Follow-up to: [Lifestyle Interventions to Increase Longevity](/lw/jrt/lifestyle_interventions_to_increase_longevity/) **Abstract** A [recent review article](http://www.impactaging.com/papers/v6/n2/pdf/100640.pdf) by David Gems discusses possible mech...
https://www.lesswrong.com/posts/2w9FEdFiMwnGLbAZf/effects-of-castration-on-the-life-expectancy-of-contemporary
# Peer-to-peer "knowledge exchanges" I wonder if anyone has thought about setting up an online community dedicated to peer-to-peer tutoring.  The idea is that if I want to learn "Differential Geometry" and know "Python programming", and you want to learn "Python programming" and know "Differential geometry," then we c...
https://www.lesswrong.com/posts/QXNTWfWhxaBQSBuyn/peer-to-peer-knowledge-exchanges
# Not yet gods You probably don't feel guilty for failing to snap your fingers in just such a way as to produce a cure for Alzheimer's disease. Yet, many people *do* feel guilty for failing to work until they drop every single day (which is [a psychological impossibility](http://mindingourway.com/stop-before-you-drop...
https://www.lesswrong.com/posts/ZegT37QwLLRACtZjy/not-yet-gods
# Versions of AIXI can be arbitrarily stupid Many people (including me) had the impression that [AIXI](https://en.wikipedia.org/wiki/AIXI) was ideally smart. Sure, it was uncomputable, and there might be "up to finite constant" issues (as with anything involving Kolmogorov complexity), but it was, informally at least,...
https://www.lesswrong.com/posts/xhzbgjYYMmmErENB6/versions-of-aixi-can-be-arbitrarily-stupid
# Less Wrong EBook Creator I read a lot on my kindle and I noticed that some of the sequences aren’t available in book form. Also, the ones that are mostly only have the posts. I personally want them to also include some of the high ranking comments and summaries. So, that is why I wrote this tool to automatically cre...
https://www.lesswrong.com/posts/aM3eP2tRA6gCWE6zS/less-wrong-ebook-creator
# Time-Binding (I started reading [Alfred Korzybski](https://en.wikipedia.org/wiki/Alfred_Korzybski), the famous 20^th^ century rationalist. Instead of the more famous _Science and Sanity_ I started with _Manhood of Humanity_, which was written first, because I expected it to be more simple, and possibly to provide a ...
https://www.lesswrong.com/posts/Cbqr9NcFDtjvM6zC4/time-binding
# Book Review: Naive Set Theory (MIRI research guide) I'm David. I'm reading through the books in the [MIRI research guide](https://intelligence.org/research-guide/) and will write a review for each as I finish them. By way of inspiration from how [Nate](/user/So8res/) did it. Naive Set Theory ------------------- ...
https://www.lesswrong.com/posts/FvA2qL6ChCbyi5Axk/book-review-naive-set-theory-miri-research-guide
# You Are A Brain - Intro to LW/Rationality Concepts [Video & Slides] Here's a 32-minute presentation I made to provide an introduction to some of the core LessWrong concepts for a general audience: **[You Are A Brain \[YouTube\]](https://www.youtube.com/watch?v=BXvdydTAokw) **[](https://docs.google.com/presentatio...
https://www.lesswrong.com/posts/fP3xQpzNcC5868QMm/you-are-a-brain-intro-to-lw-rationality-concepts-video-and
# Yvain's most important articles Important [Meditations on Moloch](http://slatestarcodex.com/2014/07/30/meditations-on-moloch/): An explanation of co-ordination problems within our society [Weak Men are Superweapons](http://slatestarcodex.com/2014/05/12/weak-men-are-superweapons/) ([supplement](http://squid314.live...
https://www.lesswrong.com/posts/AfbQiaHCuxNysgXeX/yvain-s-most-important-articles
# Where coulds go Most people don't think they "could" cure Alzheimers by snapping their fingers, and so they don't feel terrible about failing to do this. By contrast, people who fail to resist overeating, or who fail to stop playing Civilization at a reasonable hour, feel strongly that they "could have" resisted, a...
https://www.lesswrong.com/posts/DwApBCz7LnB28mqrM/where-coulds-go
# Scott Aaronson: Common knowledge and Aumann's agreement theorem The excellent Scott Aaronson has posted on his blog a [version of a talk he recently gave](http://www.scottaaronson.com/blog/?p=2410) at SPARC, about Aumann's agreement theorem and related topics. I think a substantial fraction of LW readers would enjoy...
https://www.lesswrong.com/posts/KFQhMZEKmZS49Bc3Z/scott-aaronson-common-knowledge-and-aumann-s-agreement
# An overview of the mental model theory There is dispute about what exactly a “mental model” is, and the concepts related to it often aren't clarified well. One feature of them that is generally accepted is that “the structure of mental models ‘mirrors’ the perceived structure of the external system being modelled...
https://www.lesswrong.com/posts/YKCoj7DxDMktr4qKP/an-overview-of-the-mental-model-theory
# Predict - "Log your predictions" app As an exercise on programming Android, I've made [an app to log predictions you make and keep score of your results](https://www.dropbox.com/s/ivkxgceh7lvbsej/predict_v0.4.apk?dl=0). Like PredictionBook, but taking more of a personal daily exercise feel, in line with [this post](...
https://www.lesswrong.com/posts/D4jc36bXAnMGyyJHg/predict-log-your-predictions-app
# The Goddess of Everything Else *\[Related to: [Specific vs. General Foragers vs. Farmers](http://www.overcomingbias.com/2015/08/specific-vs-general-foragers-farmers.html) and [War In Heaven](http://www.xenosystems.net/war-in-heaven/), but especially [The Gift We Give To Tomorrow](http://lesswrong.com/lw/sa/the_gift_...
https://www.lesswrong.com/posts/MFNJ7kQttCuCXHp8P/the-goddess-of-everything-else
# Fragile Universe Hypothesis and the Continual Anthropic Principle - How crazy am I? Personal Statement ------------------ I like to think about big questions from time to time. A fancy that quite possibly causes me more harm than good. Every once in a while I come up with some idea and wonder "hey, this seems prett...
https://www.lesswrong.com/posts/ceEdTZaj8HbX59Xoj/fragile-universe-hypothesis-and-the-continual-anthropic
# How to learn a new area X that you have no idea about. This guide is in response to a request in the open thread.  I would like to improve it; if you have some improvement to contribute I would be delighted to hear it!  I hope it helps.  It was meant to be a written-down form of "wait-stop-think" before approaching...
https://www.lesswrong.com/posts/Kgmionwc44MrszKHY/how-to-learn-a-new-area-x-that-you-have-no-idea-about
# MIRI's 2015 Summer Fundraiser! _Our summer fundraising drive is now finished. **We raised a grand total of $631,957 from 263 donors.** This is an incredible sum, making this the biggest fundraiser we’ve ever run._ _We've already been hard at work [growing our research team and spinning up new projects](https://inte...
https://www.lesswrong.com/posts/owEitP6NZjcxXpio6/miri-s-2015-summer-fundraiser
# How to fix academia? I don't usually submit articles to Discussion, but this news upset me so much that I think there is a real need to talk about it. [http://www.nature.com/news/faked-peer-reviews-prompt-64-retractions-1.18202](http://www.nature.com/news/faked-peer-reviews-prompt-64-retractions-1.18202) > A leadi...
https://www.lesswrong.com/posts/zhPGbnAhgqFsCDK4C/how-to-fix-academia
# On Overconfidence *\[Epistemic status: This is basic stuff to anyone who has read the Sequences, but since many readers here haven’t I hope it is not too annoying to regurgitate it. Also, ironically, I’m not actually that sure of my thesis, which I guess means I’m extra-sure of my thesis\]* **I.** A couple of days...
https://www.lesswrong.com/posts/CcyGR3pp3FCDuW6Pf/on-overconfidence
# Unlearning shoddy thinking School taught me to write banal garbage because people would thumbs-up it anyway. That approach has been interfering with me trying to **_actually express my plans in writing_** because my mind keeps simulating some imaginary prof who will look it over and go "ehh, good enough". Looking g...
https://www.lesswrong.com/posts/LenttwJMYpto3A8pm/unlearning-shoddy-thinking
# A list of apps that are useful to me. (And other phone details) Edit: updated list [http://lesswrong.com/r/discussion/lw/nh3/update\_to\_the\_list\_of\_apps\_that\_are\_useful\_to\_me/](/r/discussion/lw/nh3/update_to_the_list_of_apps_that_are_useful_to_me/) I have noticed I often wish "Damn I wish someone had made ...
https://www.lesswrong.com/posts/mX8ERwj8Yy9FqkmXX/a-list-of-apps-that-are-useful-to-me-and-other-phone-details
# Instrumental Rationality Questions Thread This thread is for asking the rationalist community for practical advice.  It's inspired by the [stupid questions](/lw/mk8/stupid_questions_august_2015/) series, but with an explicit focus on instrumental rationality. Questions ranging from easy ("this is probably trivial f...
https://www.lesswrong.com/posts/DY8tXwLuwKQQLpumt/instrumental-rationality-questions-thread
# Travel Through Time to Increase Your Effectiveness I am a time traveler. --------------------- I hold this belief not because it is true, but because it is useful. That it also happens to be true -- we are _all_ time travelers, swept along by the looping chrono-currents of reality that only _seem_ to flow in one di...
https://www.lesswrong.com/posts/Y8KL82jrxzatz2aky/travel-through-time-to-increase-your-effectiveness
# Rationality Compendium I want to create a rationality compendium (a collection of concise but detailed information about a particular subject) and I want to know whether you think this would be a good idea. The rationality compendium would essentially be a series of posts that will eventually serve as a guide for le...
https://www.lesswrong.com/posts/EwR3Px5xJu4tNaoFG/rationality-compendium
# List of common human goals **List of common goal areas:** This list is meant to be in the area of goal-space.  It is non-exhaustive and the descriptions are including but not limited to - some hints to help you understand where in the idea-space these goals land.  When constructing this list I try to imagine a larg...
https://www.lesswrong.com/posts/ZJJH45J6eF2JCSQhW/list-of-common-human-goals
# AI, cure this fake person's fake cancer! _A putative new idea for AI control; index [here](https://agentfoundations.org/item?id=601)._ An idea for how we might successfully get useful work out of a powerful AI. The ultimate box ---------------- Assume that we have an extremely detailed model of a sealed room, ...
https://www.lesswrong.com/posts/bCQqgr324NMSa8t8Z/ai-cure-this-fake-person-s-fake-cancer
# Manhood of Humanity This is my re-telling of [Korzybski](https://en.wikipedia.org/wiki/Alfred_Korzybski)'s _Manhood of Humanity_. First part [here](/lw/mm7/timebinding/). **3** Clear thinking is important, because human ideas have _consequences_. For example, if we believe that lightning is a punishment of God, w...
https://www.lesswrong.com/posts/aSTq9khgKrpmDyns5/manhood-of-humanity
# Why people want to die Over and over again, someone says that living for a very long time would be a bad thing, and then some futurist tries to persuade them that their reasoning is faulty, telling them that they think that way now, but they'll change their minds when they're older. The thing is, I don't see that h...
https://www.lesswrong.com/posts/5vqpLCWwr66igxqjb/why-people-want-to-die
# Self compassion Imagine a time when you were feeling guilt-wracked. Maybe a time you [hurt a friend badly](http://mindingourway.com/steering-towards-forbidden-conversations/). Maybe a time you tried to do get some important work done, and found you couldn't, and this kicked off a failure spiral leading to a deep dep...
https://www.lesswrong.com/posts/euPT8ddxxpDpKZmmS/self-compassion
# Is semiotics bullshit? I spent an hour recently talking with a semiotics professor who was trying to explain semiotics to me.  He was very patient, and so was I, and at the end of an hour I concluded that semiotics is like Indian chakra-based medicine:  a set of heuristic practices that work well in a lot of situati...
https://www.lesswrong.com/posts/7tSrFR54hT2FwXto8/is-semiotics-bullshit
# Words per person year and intellectual rigor Continuing my cursory exploration of semiotics and post-modern thought, I'm struck by the similarity between writing in those traditions, and picking up women.  The most-important traits for practitioners of both are energy, enthusiasm, and confidence.  In support of this...
https://www.lesswrong.com/posts/WLkm4BKPajrYTrL36/words-per-person-year-and-intellectual-rigor
# Is my brain a utility minimizer? Or, the mechanics of labeling things as "work" vs. "fun" I recently encountered something that is, in my opinion, one of the most absurd failure modes of the human brain. I first encountered this after introspection on useful things that I enjoy doing, such as programming and writing...
https://www.lesswrong.com/posts/4uTKepbkbHt5SzKn6/is-my-brain-a-utility-minimizer-or-the-mechanics-of-labeling
# There are no "bad people" When I help friends debug their intrinsic motivation, here's a pattern I often bump into: > *Well, if I don't actually start working soon, then I'll be a bad person.* Or, even more worrying: > *Well they wanted me to just buckle down and do the work, and I really didn't want to do it the...
https://www.lesswrong.com/posts/2agT7asiBZJqfqKgH/there-are-no-bad-people
# My future posts; a table of contents. My future posts --------------- I have been living in the lesswrong rationality space for at least two years now. Recently more active than previously. This has been deliberate. I plan to make more serious active posts in the future. In saying so I wanted to announce the posts ...
https://www.lesswrong.com/posts/YpsQxCQCggSjbKqu3/my-future-posts-a-table-of-contents
# Proper posture for mental arts I'd like to start by way of analogy. I think it'll make the link to rationality easier to understand if I give context first. * * * I sometimes teach the martial art of aikido. The way I was originally taught, you had to learn how to "feel the flow of ki" (basically life energy) thro...
https://www.lesswrong.com/posts/5ETdDgqcr7Jqy4QGh/proper-posture-for-mental-arts
# Typical Sneer Fallacy I like going to see movies with my friends.  This doesn't require much elaboration.  What _might_ is that I _continue_ to go see movies with my friends _despite_ the radically different ways in which my friends happen to enjoy watching movies.  I'll separate these movie-watching philosophies in...
https://www.lesswrong.com/posts/g8a3Efmq6tGneoXTA/typical-sneer-fallacy
# Yudkowsky, Thiel, de Grey, Vassar panel on changing the world [30 minute panel](https://www.youtube.com/watch?v=oJDlzvqrPLk) The first question was why isn't everyone trying to change the world, with the underlying assumption that everyone should be. However, it isn't obviously the case that the world would be bett...
https://www.lesswrong.com/posts/eqMxsTnSf3dZdNy8v/yudkowsky-thiel-de-grey-vassar-panel-on-changing-the-world
# Actually existing prediction markets? What public prediction markets exist in the world today? Have you used one recently? What attributes do they have that should make us trust them or not, such as liquidity and transaction costs? Do they distort the tails? Which are usable by Americans? This post is just a reque...
https://www.lesswrong.com/posts/Ad3uKW9niBew6yuAH/actually-existing-prediction-markets
# Lesswrong real time chat This is a short post to say that I have started and am managing a Slack channel for lesswrong. Slack has only an email-invite option which means that I need an email address for anyone who wants to join.  **Send me a PM with your email address if you are interested in joining.** There is a...
https://www.lesswrong.com/posts/cjtFqgamgDG6Xz8pm/lesswrong-real-time-chat
# Why Don't Rationalists Win? Here are my thoughts on the "Why don't rationalists win?" thing. Epistemic: I think it's pretty clear that rationality helps people do a better job of being... less wrong :D But seriously, I think that rationality does lead to very notable improvements in your ability to have ...
https://www.lesswrong.com/posts/hgw3mYJnorskJG5RJ/why-don-t-rationalists-win
# Rudimentary Categorization of Less Wrong Topics I find the below list to be useful, so I thought I would post it. This list includes short abstracts of all of the wiki items and a few other topics on less wrong. I grouped the items into some rough categories just to break up the list. I tried to put the right items ...
https://www.lesswrong.com/posts/3aCeiWqEWs95koMqk/rudimentary-categorization-of-less-wrong-topics
# making notes - an instrumental rationality process. The value of having notes. Why do I make notes? Story time! At one point in my life I had a memory crash. Which is to say, once upon a time I could remember a whole lot more than I was presently remembering. I recall thinking, "what did I have for breakfast last M...
https://www.lesswrong.com/posts/y9AottiqgcMTt2eEm/making-notes-an-instrumental-rationality-process
# Residing in the mortal realm The last sevenish posts describe the main tools I have for removing guilt-based motivation. The common thread running through them can be summed up as follows: *Reside in the mortal realm.* Many people [hold themselves to a very different standard than they hold others](http://mindingou...
https://www.lesswrong.com/posts/ssP4LpD4wFj4H8yee/residing-in-the-mortal-realm
# FAI and the Information Theory of Pleasure Previously, I talked about [the mystery of pain and pleasure](/r/lesswrong/lw/hek/the_mystery_of_pain_and_pleasure/), and how little we know about what sorts of arrangements of particles intrinsically produce them. Up now: _should FAI researchers care about this topic?_ Is...
https://www.lesswrong.com/posts/2rh2ikhTXe5KR4yeN/fai-and-the-information-theory-of-pleasure
# Flowsheet Logic and Notecard Logic (_Disclaimer: The following perspectives are based in my experience with policy debate which is fifteen years out of date. The meta-level point should stand regardless._) If you are not familiar with U.S. high school debate club ("policy debate" or "cross-examination debate"), her...
https://www.lesswrong.com/posts/eHmgzDz45PAQ5ezkP/flowsheet-logic-and-notecard-logic
# Film about Stanislav Petrov I searched around but didn't see any mention of this. There's a film being released next week about Stanislav Petrov, the man who saved the world. The Man Who Saved the World http://www.imdb.com/title/tt2277106/ Due for limited theatrical release in the USA on 18 September 201...
https://www.lesswrong.com/posts/Aggy4bNssm6WL8Dsy/film-about-stanislav-petrov
# "Announcing" the "Longevity for All" Short Movie Prize The local Belgian/European life-extension non-profit Heales is giving away prizes for whoever can make an interesting short movie about life extension. The first prize is €3000 (around $3386 as of today), other prizes being various gifts. You more or less just n...
https://www.lesswrong.com/posts/6F9LzPyi3EW4jaPna/announcing-the-longevity-for-all-short-movie-prize
# Political Debiasing and the Political Bias Test _Cross-posted from the [EA forum](http://www.effective-altruism.com/ea/nf/political_debiasing_and_the_political_bias_test/). I asked for questions for this test here on LW about a year ago. Thanks to those who contributed._ Rationally, your political values shoul...
https://www.lesswrong.com/posts/3jeRSuLopjjFwFpoG/political-debiasing-and-the-political-bias-test
# How To Win The AI Box Experiment (Sometimes) Preamble -------- This post was originally written [for Google+](https://plus.google.com/104395999534489748002/posts/3TWWKfLc2wd) and thus a different audience. In the interest of transparency, I haven't altered it except for this preamble and formatting, though since t...
https://www.lesswrong.com/posts/fbekxBfgvfc7pmnzB/how-to-win-the-ai-box-experiment-sometimes
# Being unable to despair Sometimes, when people see that their life is about to get a lot harder, they start buckling down. Other times, they start despairing, or complaining, or preparing excuses so that they can have one ready when the inevitable failure hits, or giving up entirely and then [failing with abandon](h...
https://www.lesswrong.com/posts/TzpjnLf3oyEegmu9r/being-unable-to-despair
# The Library of Scott Alexandria I've put together a list of what I think are the best Yvain (Scott Alexander) posts for new readers, drawing from _[SlateStarCodex](https://slatestarcodex.com)_, _[LessWrong](/user/Yvain/submitted/)_, and Scott's [LiveJournal](https://squid314.livejournal.com). The list should make t...
https://www.lesswrong.com/posts/vwqLfDfsHmiavFAGP/the-library-of-scott-alexandria
# [Link] Marek Rosa: Announcing GoodAI Eliezer [commented](https://www.facebook.com/yudkowsky/posts/10153630593339228?pnref=story) on FB about a post [Announcing GoodAI](http://blog.marekrosa.org/2015/07/announcing-goodai-keen-software-house_7.html) (by Marek Rosa, GoodAI's CEO). I think this deserves some discussion as...
https://www.lesswrong.com/posts/ajHk6f478FFnpJuD4/link-marek-rosa-announcing-goodai
# Probabilities Small Enough To Ignore: An attack on Pascal's Mugging _Summary: the problem with Pascal's Mugging arguments is that, intuitively, some probabilities are just too small to care about. There might be a principled reason for ignoring some probabilities, namely that they violate an implicit assumption behi...
https://www.lesswrong.com/posts/LxhJ8mhdBuX27BDug/probabilities-small-enough-to-ignore-an-attack-on-pascal-s
# A toy model of the control problem _**EDITED** based on suggestions for improving the model_ Jaan Tallinn has suggested creating a toy model of the control problem, so that it can be analysed without loaded concepts like "autonomy", "consciousness", or "intentionality". Here a simple (too simple?) attempt: A contr...
https://www.lesswrong.com/posts/7cXBoDQ6udquZJ89c/a-toy-model-of-the-control-problem
# Cardiologists and Chinese Robbers **I.** It takes a special sort of person to be a cardiologist. This is not always a good thing. You may have read about one or another of the “cardiologist caught falsifying test results and performing dangerous unnecessary surgeries to make more money” stories, but you might not ...
https://www.lesswrong.com/posts/DSzpr8Y9299jdDLc9/cardiologists-and-chinese-robbers
# The subagent problem is really hard _A putative new idea for AI control; index [here](https://agentfoundations.org/item?id=601)._ The first step to solving a problem is to define it. The _first_ first step is to realise how tricky it is to define. This is a stub on a difficult problem. Subagents and turning AIs of...
https://www.lesswrong.com/posts/8RCCMStERhfkYZC8i/the-subagent-problem-is-really-hard
# See the dark world Consider fictional Carol, who has convinced herself that she doesn't need to worry about the suffering of people who live far away. She works to improve her local community, and donates to her local church. She's a kind and loving woman, and she does her part, and (she reasons) that's all anyone c...
https://www.lesswrong.com/posts/wFjT6zEsnE9nCJC8L/see-the-dark-world
# One model of understanding independent differences in sensory perception This week my friend Anna said to me: "I just discovered my typical mind fallacy around visualisation is wrong". Naturally I was perplexed and confused. She said:  “When I was in second grade the teacher had the class do an exercise in visu...
https://www.lesswrong.com/posts/JkReTfmnFjv5gRnPo/one-model-of-understanding-independent-differences-in