# Translating CFAR to Therapy The Center for Applied Rationality has given its alumni a number of excellent tools to work on their bugs. Going through a workshop myself, I found that a lot of these tools are similar to therapeutic techniques, just reformatted to fit a more self-help-y context. Going through the worksh...
https://www.lesswrong.com/posts/o2bfgs9FTTMR4jnY3/translating-cfar-to-therapy
# Why I am not a Quaker (even though it often seems as though I should be) In the past year, I have noticed that the Society of Friends (also known as the Quakers) has come to the right answer long before I or most people did, on a surprising number of things, in a surprising range of domains. And yet, I do not feel i...
https://www.lesswrong.com/posts/6XvnqW28e2twiv6ww/why-i-am-not-a-quaker-even-though-it-often-seems-as-though-i
# The Anthropic Principle: Five Short Examples _(content warning: nuclear war, hypothetical guns, profanity, philosophy, one grammatically-incorrect comma for readability’s sake)_ This is a very special time of year, when my whole social bubble starts murmuring about nuclear war, and sometimes, some of those murmurer...
https://www.lesswrong.com/posts/KNtKKmcd9DsP7WuZ3/the-anthropic-principle-five-short-examples
# Wikipedia pageviews: still in decline In March 2015, I [wrote](https://vipulnaik.com/blog/the-great-decline-in-wikipedia-pageviews-full-version/) about a decline in Wikipedia desktop pageviews over the last few years (and posted a [short version](http://lesswrong.com/lw/lxc/the_great_decline_in_wikipedia_pageviews/)...
https://www.lesswrong.com/posts/ghBZDavgywxXeqWSe/wikipedia-pageviews-still-in-decline
# Sabbath hard and go home Growing up Jewish, I thought that the traditional rules around the Sabbath were silly. Then I forgot to bring a spare battery on a camping trip. Now I think that something like the traditional Jewish Sabbath is an important cultural adaptation to preserve leisure, that would otherwise be des...
https://www.lesswrong.com/posts/p7hW7E3fHF3PDzErk/sabbath-hard-and-go-home
# The Outside View isn't magic The [planning fallacy](https://en.wikipedia.org/wiki/Planning_fallacy) is an almost perfect example of the strength of using the [outside](https://wiki.lesswrong.com/wiki/Outside_view) [view](http://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/daniel-ka...
https://www.lesswrong.com/posts/NXcxKXLT8xng5FwDu/the-outside-view-isn-t-magic
# The Great Filter isn't magic either _A post suggested by James Miller's_ [_presentation_](https://www.chalmers.se/en/centres/GoCAS/Events/Existential-risk-to-humanity/Pages/Workshop-programme.aspx) _at the Existential Risk to Humanity conference in Gothenburg._ Seeing the [emptiness of the night sky](https://upload...
https://www.lesswrong.com/posts/nxZWDDfpTXpjGy5dP/the-great-filter-isn-t-magic-either
# Speed & Performance is our current top priority _TLDR: Fewer bugfixes and support for the next week or two while we are focusing on making the site fast and stable. General shift in direction towards more stable and polished features over a large breadth of slightly buggy features._ It's been one week since the lau...
https://www.lesswrong.com/posts/D7o9GztvqnBeqdBeX/speed-and-performance-is-our-current-top-priority
# Musings on Double Crux (and "Productive Disagreement") _Epistemic Status: Thinking out loud, not necessarily endorsed, more of a brainstorm and hopefully discussion-prompt._ [Double Crux](https://www.lesserwrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-resolving-disagreement) has been making the round...
https://www.lesswrong.com/posts/T3nb24aZf7d5S2Lnm/musings-on-double-crux-and-productive-disagreement
# Blind Goaltenders: Unproductive Disagreements If you're worried about an oncoming problem and discussing it with others to plan, your ideal interlocutor, generally, is someone who agrees with you about the danger. More often, though, you'll be discussing it with people who disagree, at least in part. The question t...
https://www.lesswrong.com/posts/c9CyLv6vqE6rnGXdG/blind-goaltenders-unproductive-disagreements
# My IRB Nightmare *\[Epistemic status: Pieced together from memory years after the event. I may have mis-remembered some things or gotten them in the wrong order. Aside from that – and the obvious jokes – this is all true. I’m being deliberately vague in places because I don’t want to condemn anything specific withou...
https://www.lesswrong.com/posts/gBChm3THPGFcrq5eH/my-irb-nightmare
# Against Individual IQ Worries _\[Related to: [Attitude vs. Altitude](http://slatestarcodex.com/2015/02/01/talents-part-2-attitude-vs-altitude/)\]_ **I.** I write a lot about the importance of IQ research, and I try to debunk pseudoscientific claims that IQ “isn’t real” or “doesn’t matter” or “just shows how well ...
https://www.lesswrong.com/posts/AmaWMMWPzuQ62Ernf/against-individual-iq-worries
# Moderator's Dilemma: The Risks of Partial Intervention If you ever end up moderating a forum, or, like me, just becoming deeply involved in the meta section of one, it is almost inevitable that you will become involved in disputes over exactly what one is and is not allowed to say on it. You may find these choices can...
https://www.lesswrong.com/posts/7xGcyB7RNdfDe5vxL/moderator-s-dilemma-the-risks-of-partial-intervention
# Positive Focusing Gendlin’s technique of Focusing primarily focuses (hehe) on problems or negative felt senses. Something I have not seen discussed much is that one can apply the concepts of Focusing to many different felt senses that are not problems, or even negative in any way. At least in my experience, you can...
https://www.lesswrong.com/posts/rWnCXouKEsJfBcMmA/positive-focusing
# Slack Epistemic Status: Reference post. Strong beliefs strongly held after much thought, but hard to explain well. Intentionally abstract. Disambiguation: This does not refer to any physical good, app or piece of software. Further Research (book, recommended but not at all required, take seriously but not literall...
https://www.lesswrong.com/posts/yLLkWMDbC9ZNKbjDG/slack
# Tensions in Truthseeking _[Epistemic Effort](https://www.lesserwrong.com/posts/oDy27zfRf8uAbJR6M/epistemic-effort): I've thought about this for several weeks and discussed with several people who have different viewpoints. Still only_ moderately _confident though._ So, I notice that people involved with the rationalsph...
https://www.lesswrong.com/posts/wS5mprX6BsyeyaNm3/tensions-in-truthseeking-1
# Catalonia and the Overton Window The thing with Catalonia strikes me as a case of "Arguments sure get weird when the truth is not inside the Overton Window of either side." Historically, populations have usually been the subjects of governments, owned by them and farmed by them. The Catalan population is the proper...
https://www.lesswrong.com/posts/F3CwkGqEdY5KZogGS/catalonia-and-the-overton-window
# Different Worlds **I.** A few years ago I had lunch with another psychiatrist-in-training and realized we had totally different experiences with psychotherapy. We both got the same types of cases. We were both practicing the same kinds of therapy. We were both in the same training program, studying under the same ...
https://www.lesswrong.com/posts/EctieqKwDQcQHhqZy/different-worlds
# Placing Yourself as an Instance of a Class There's an intuition that I have, which I think informs my opinion on subjective probabilities, game theory, and many related matters: that part of what separates a foolish decision from a wise one is whether you treat it as an isolated instance or as one of a class of simi...
https://www.lesswrong.com/posts/GEZzHQLQETHW9vBER/placing-yourself-as-an-instance-of-a-class
# Writing That Provokes Comments _[Epistemic Effort](https://www.lesserwrong.com/posts/oDy27zfRf8uAbJR6M/epistemic-effort): Thought about it for a year. Solicited feedback. Checked my last few posts' comment count to make sure I wasn't *obviously* wrong._ A thing that happens to me, and perhaps to you: Someone wri...
https://www.lesswrong.com/posts/GHBLFPDhzeSQHx2eM/writing-that-provokes-comments
# Infant Mortality and the Argument from Life History Many people argue that suffering predominates in nature. A [really simple form of the argument](https://foundational-research.org/the-importance-of-wild-animal-suffering/#More_Offspring_Than_Survive), supported by people like Brian Tomasik, is what one might call t...
https://www.lesswrong.com/posts/rEiMzmHzANWPj6cKB/infant-mortality-and-the-argument-from-life-history
# Instrumental Rationality 1: Starting Advice _\[Instrumental Rationality Sequence 1/7. Repost from LW\]_ _\[This section goes over 4 concepts that I think are important to keep in mind before we start the other stuff. We go over caring about the obvious, looking for ways to apply advice in the real world, practicing...
https://www.lesswrong.com/posts/PhHNTTHA5dtLsm7Gi/instrumental-rationality-1-starting-advice
# Meaningfulness and the scope of experience I find that the extent to which I find life meaningful, seems strongly influenced by my scope of experience \[[1](https://smile.amazon.com/Six-Blind-Elephants-Understanding-Fundamental/dp/0911226419/), [2](https://scholar.google.fi/scholar?hl=en&q=perceptual+scope)\]. Say ...
https://www.lesswrong.com/posts/D3njbydiue2qEYfXt/meaningfulness-and-the-scope-of-experience
# The Problematic Third Person Perspective _\[Epistemic status: I now endorse this again. Michael [pointed out a possibility](https://www.lesserwrong.com/posts/ikMvvDgzFCSrYMRfd/the-problematic-third-person-perspective/77Wy2hxfJB8sRuest) for downside risk with losing mathematical ability, which initially made me updat...
https://www.lesswrong.com/posts/ikMvvDgzFCSrYMRfd/the-problematic-third-person-perspective
# Instrumental Rationality 2: Planning 101 _\[Instrumental Rationality sequence 2/7.\]_ \[_This section goes over the planning fallacy, our cognitive bias of making overconfident predictions in our time estimates. It starts with an overview of the field and moves into some models of how human planning works. We’ll mo...
https://www.lesswrong.com/posts/53pxcve5kgwoLFvzD/instrumental-rationality-2-planning-101
# Clueless World vs. Loser World In reply to [Different Worlds](http://slatestarcodex.com/2017/10/02/different-worlds/); It was a huge disappointment for me when Ben Hoffman [compellingly argued](http://benjaminrosshoffman.com/locker-room-talk/) in favor of parallel social worlds coexisting unobtrusively adjacent to ...
https://www.lesswrong.com/posts/ADkdXkNNaq5bvhrnb/clueless-world-vs-loser-world
# Instrumental Rationality 3: Interlude I _\[Instrumental Rationality Sequence 3/7\]._ _\[Here, we’ll cover two concepts:_ _**Acting into Uncertainty**_ _and_ _**Fading Novelty**_. They’re both sort of about two (generalizable, I hope) mental feelings (i.e. internal experiences) that can occur when you start trying...
https://www.lesswrong.com/posts/LXEmASjaKTCDAnyGp/instrumental-rationality-3-interlude-i
# Sabbath Commentary Epistemic Status: Several months of experimentation, then talking from the hip Commentary On: [Bring Back the Sabbath](https://wordpress.com/post/thezvi.wordpress.com/9049) Required: [Slack](https://thezvi.wordpress.com/2017/09/30/slack/) I have a lot of thoughts on the topic that don’t belong ...
https://www.lesswrong.com/posts/t3o8ds7LjtgW9t7FJ/sabbath-commentary
# Bring Back the Sabbath Epistemic Status: Several months of experimentation Previously: [Choices are Bad](https://thezvi.wordpress.com/2017/07/22/choices-are-bad/), [Choices Are Really Bad](https://thezvi.wordpress.com/2017/08/12/choices-are-really-bad/), [Complexity Is Bad](https://thezvi.wordpress.com/2017/07/25/c...
https://www.lesswrong.com/posts/ZoCitBiBv97WEWpX5/bring-back-the-sabbath
# The Typical Sex Life Fallacy \[Related to: [Different Worlds.](https://slatestarcodex.com/2017/10/02/different-worlds/)\] \[Please note that this post contains explicit discussion of sexuality (without pictures), including discussion of my own sex life.\] \[I have no moderation control, but I would definitely real...
https://www.lesswrong.com/posts/4DrybydEAh59vQiTs/the-typical-sex-life-fallacy
# Stubs of Posts I'd like to Write Right now I feel like LW has too much "meandering around meta-community-stuff" and not enough "actually go out into the world, try to think about object level things, and then do stuff with that thinking stuff." I'm not actually very good at the latter, but want to practice. But, t...
https://www.lesswrong.com/posts/CkdFqZDRaCdkseNcb/stubs-of-posts-i-d-like-to-write
# HOWTO: Screw Up The LessWrong Survey and Bring Great Shame To Your Family Let's talk about the LessWrong Survey. First and foremost, **if you took the survey and hit 'submit', your information was saved and you don't have to take it again.** Your data is safe, nobody took it or anything it's not like that. If ...
https://www.lesswrong.com/posts/xsEeff3LbyJy7sTG6/howto-screw-up-the-lesswrong-survey-and-bring-great-shame-to
# NSFW Content on LW There's some discussion of this on [my post](https://www.lesserwrong.com/posts/4DrybydEAh59vQiTs/the-typical-sex-life-fallacy) but I think it properly belongs on a meta post to which the mods can refer. Points for consideration: * Do we want content about sex on LW at all? Yes, but only on peo...
https://www.lesswrong.com/posts/7vjZrinDJkHNzZHYB/nsfw-content-on-lw
# Contra double crux **Summary:** CFAR proposes [double crux](https://www.lesserwrong.com/posts/exa5kmvopeRyfJgCy/double-crux-a-strategy-for-resolving-disagreement) as a method to resolve disagreement: instead of arguing over some belief B, one should look for a crux (C) which underlies it, such that if either party c...
https://www.lesswrong.com/posts/nm6XuC9CzNBrthpPB/contra-double-crux
# Instrumental Rationality 4.1: Modeling Habits _\[Instrumental Rationality Sequence 4.1/7.\]_ _\[This is part 1 of a 3-part sequence on habits, which is itself part of the greater Instrumental Rationality Sequence. This was initially one monstrous article; in the interests of readability, I've decided to split it in...
https://www.lesswrong.com/posts/bxc6Box8JJbzv3oQt/instrumental-rationality-4-1-modeling-habits
# Community Capital **Argument:** There is a Thing called Community Capital (closely related to social capital), and because it is based more on How Humans Work, you can get a lot more out of it than just plain old Money. **Epistemic Status:** Opening a conversation. All these ideas are in sand. --- ...
https://www.lesswrong.com/posts/cMxCxjSPdsb9MZgR5/community-capital
# Double Crux Example: Should HPMOR be on the Front Page? _Note: I'm leaving this in Meta for now since the object-level-discussion is very "talk about community norms" as opposed to object level. I think ideally someone else does a non-community-non-politics Public Double Crux and posts that to the front page. But if...
https://www.lesswrong.com/posts/zCrKz6rWeG79DRQM2/double-crux-example-should-hpmor-be-on-the-front-page
# The Just World Hypothesis Sometimes bad things happen to good people. Maybe most of the time even, it's hard to know, and even harder to create common knowledge on the topic because if it is so we don't want to know, and tell stories to cover it up when it happens. Back in the golden age of psychology, long before ...
https://www.lesswrong.com/posts/HW2fbbGM8B6y7pkDb/the-just-world-hypothesis
# Distinctions in Types of Thought _Epistemic status: speculative_ For a while, I’ve had the intuition that current machine learning techniques, though powerful and useful, are simply not touching some of the functions of the human mind. But before I can really get at how to justify that intuition, I would have to st...
https://www.lesswrong.com/posts/FbQ9Y9pBif5xZ7w2f/distinctions-in-types-of-thought
# Robustness as a Path to AI Alignment _\[Epistemic Status: Some of what I'm going to say here is true technical results. I'll use them to gesture in a research direction which I think may be useful; but, I could easily be wrong. This does not represent the current agenda of MIRI overall, or even my whole research age...
https://www.lesswrong.com/posts/qQuLCAqbZf9ETgNoB/robustness-as-a-path-to-ai-alignment
# Toy model of the AI control problem: animated version A few years back, I came up with a [toy model](http://lesswrong.com/lw/mrp/a_toy_model_of_the_control_problem/) of the AI [control problem](https://en.wikipedia.org/wiki/AI_control_problem). It has a robot moving boxes into a hole, with a slightly different goal ...
https://www.lesswrong.com/posts/EdEhGPEJi6dueQXv2/toy-model-of-the-ai-control-problem-animated-version
# What would convince you you'd won the lottery? The latest (06 Oct 2017) Euromillion [lottery numbers were](https://www.national-lottery.co.uk/results/euromillions/draw-history) 01 - 09 - 15 - 19 - 25, with the "Lucky Stars" being 01 - 07. Ha! Bet I convinced no-one about those numbers. The odds against 01 - 09 - ...
https://www.lesswrong.com/posts/AHGzjrCkmocCaXC2t/what-would-convince-you-you-d-won-the-lottery
# Winning is for Losers _This post originally appeared on [Ribbonfarm](https://www.ribbonfarm.com/2017/08/29/winning-is-for-losers/)_. _It was written as part of the Ribbonfarm long-form writing course and edited by Joseph Kelly. I owe Joseph and the Ribbonfarm editors (Venkatesh Rao and Sarah Perry) huge thanks for...
https://www.lesswrong.com/posts/DhQkDgLiYe28P3LHM/winning-is-for-losers
# Gnostic Rationality Ancient Greek famously made a distinction between 3 kinds of knowledge: doxa, episteme, and gnosis. Doxa is basically what in English we might call hearsay. It's the stuff you know because someone told you about it. If you know the Earth is round because you read it in a book, that's doxa. Epis...
https://www.lesswrong.com/posts/yPfwMoxyRX77DARjM/gnostic-rationality
# Instrumental Rationality 4.2: Creating Habits _\[Instrumental Rationality Sequence 4.2/7\]_ _\[Part two of a three-part series on habits.\]_ _\[We go over three techniques for creating habits: TAPs, Systematic Planning, and Scaling Up.\]_ **Techniques: Creating Habits** _\[The Tec...
https://www.lesswrong.com/posts/vE7Z2JTDo5BHsCp4T/instrumental-rationality-4-2-creating-habits
# LesserWrong is now de facto the main site My understanding was that the idea was for LesserWrong to run as a beta with a few users switching over and the main site still running until enough bugs had been fixed and enough missing features had been added so that we could have a vote and then decide whether to replace...
https://www.lesswrong.com/posts/kovpEPTGTbxg8CBMF/lesserwrong-is-now-de-facto-the-main-site
# Humans can be assigned any values whatsoever... Humans have no values... nor does any agent. Unless you make strong assumptions about their rationality. And depending on those assumptions, you can get humans to have any values. **An agent with no clear preferences** There are three buttons...
https://www.lesswrong.com/posts/rtphbZbMHTLCepd6d/humans-can-be-assigned-any-values-whatsoever
# There's No Fire Alarm for Artificial General Intelligence What is the function of a fire alarm? One might think that the function of a fire alarm is to provide you with important evidence about a fire existing, allowing you to change your policy accordingly and exit the building. In the classic experiment by Latan...
https://www.lesswrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence
# Rare Exception or Common Exception **Proposition:** Pointing out that rare exceptions exist is usually a negative derailing tactic. However if you think your conversational partner is annoyingly pointing out a rare exception, double-check in case _they_ think they are pointing out a common exception. --- ...
https://www.lesswrong.com/posts/b5vPQfS7TFcchEAiJ/rare-exception-or-common-exception
# Oxford Prioritisation Project Review _Cross-[posted](https://thomas-sittler.github.io/oxprioreview/) to Tom's website._ **Short summary** The [Oxford Prioritisation Project](https://oxpr.io/) was a research group between January and May 2017. The team conducted research to allocate £10,000 in the mos...
https://www.lesswrong.com/posts/h99f5AqSJRPaqsT8m/oxford-prioritisation-project-review
# Offloading Executive Functioning to Morality tl;dr- "My executive functioning doesn't work, so I use morality instead." From an outside perspective it's a little hard to tell that I don't have a lot of executive functioning. I keep my living space relatively clean, I get places relatively on time, I maintain a ...
https://www.lesswrong.com/posts/zqkHvKRy3HBPtsuHS/offloading-executive-functioning-to-morality
# "Focusing," for skeptics. Gendlin’s Focusing technique is super rad. I know this because *everybody* keeps telling me so. (Okay, not quite everybody, but a really tediously large percentage of the people in my online social circle.) But I’ve tried it a bunch of times, in a bunch of variants, with a bunch of qualif...
https://www.lesswrong.com/posts/PXqQhYEdbdAYCp88m/focusing-for-skeptics
# You can never be universally inclusive A discussion about the article “[We Don’t Do That Here](http://www.thagomizer.com/blog/2017/09/29/we-don-t-do-that-here.html)” (h/t [siderea](https://siderea.dreamwidth.org/)) raised the question about the tension between having inclusive social norms on the one hand, and restr...
https://www.lesswrong.com/posts/qzpcyKioYfHWmcasv/you-can-never-be-universally-inclusive
# Identities are [Subconscious] Strategies We all have identities. Arguably, any statement of the form “I am a __” is an identity. Of course, we usually reserve the term for the statements which feel especially core to us in describing and predicting ourselves, and in expressing our values and aspirations. Such identi...
https://www.lesswrong.com/posts/JTzxg7y5HFYBBWfBj/identities-are-subconscious-strategies
# Why no total winner? Why doesn't a single power rule the world today? \[I'm taking advantage of the new "LW posts as blog posts" format to post something I'm pretty unsure about. I'm working from my memories of the blog posts, and from a discussion I had with Robin Hanson and Katja Grace in late 2012. Please let me...
https://www.lesswrong.com/posts/As76yueYGy6FjZg3R/why-no-total-winner
# Against naming things, and so on Recent discussion on naming concepts mostly focuses on arguments in favor, noting only a few caveats, as LW user Conor Moreton notes in [Why and How to Name Things](https://www.lesserwrong.com/posts/zFmr4vguuFBTP2CAF/why-and-how-to-name-things): > What you lose by the proliferation of jar...
https://www.lesswrong.com/posts/6JrrCK3WDYmQMkgdT/against-naming-things-and-so-on
# Multidimensional signaling What do you infer about a person who has ugly clothing? Probably that they have poor taste  (in clothes, or subcultures). But it could also be that they are too poor to improve their wardrobe. Or can’t be bothered. What about someone with poor grades? The obvious inference is that they ar...
https://www.lesswrong.com/posts/PKZ5nfZ3atihcdn2Y/multidimensional-signaling
# On the construction of beacons I am afraid of the anglerfish. Maybe this is why the comments on my blog tend to be so consistently good. Recently, a friend was telling me about the marketing strategy for a project of theirs. They favored growth, in a way that I was worried would destroy value. I struggled to articu...
https://www.lesswrong.com/posts/rCxbo8JEaHSBngfLS/on-the-construction-of-beacons
# The Balance Between Hard Work and Exhaustion Rationalists often find difficult, important challenges to work on and they become very excited and passionate about their causes. I expect it is common (because it happened to me and I have heard references to similar episodes by others) that such causes seem so importan...
https://www.lesswrong.com/posts/spRWYFQwDYW5yoqYT/the-balance-between-hard-work-and-exhaustion
# Defense against discourse So, some writer named Cathy O’Neil [wrote about](http://bostonreview.net/science-nature/cathy-oneil-know-thy-futurist) futurists’ opinions about AI risk. This piece focused on futurists as social groups with different incentives, and didn’t really engage with the content of their arguments....
https://www.lesswrong.com/posts/rtPwgCFhsbEYm8ph7/defense-against-discourse
# Anti-tribalism and positive mental health as high-value cause areas I think that tribalism is one of the biggest problems with humanity today, and that even small reductions of it could cause a massive boost to well-being. By tribalism, I basically mean the phenomenon where arguments and actions are [primarily eval...
https://www.lesswrong.com/posts/wanpTxrXxFKWRzwR8/anti-tribalism-and-positive-mental-health-as-high-value
# Seek Fair Expectations of Others’ Models Epistemic Status: Especially about the future. Response To (Eliezer Yudkowsky): [There’s No Fire Alarm for Artificial General Intelligence](https://www.lesserwrong.com/posts/BEtzRE2M5m9YEAQpX/there-s-no-fire-alarm-for-artificial-general-intelligence) It’s long, but read the...
https://www.lesswrong.com/posts/JGbnF9RPuPY28ztvM/seek-fair-expectations-of-others-models
# Instrumental Rationality 6: Attractor Theory _\[Instrumental Rationality Sequence 6/7\]_ _\[Attractor Theory is a hybrid model that tries to reconcile the effects of internal and external factors of motivation. It makes the claim that an important additional consideration in decision-making is how the action affect...
https://www.lesswrong.com/posts/3MarhJrvYzW9TC9xA/instrumental-rationality-6-attractor-theory
# Seeding a productive culture: a working hypothesis This is a compact account of my current working hypothesis for what's wrong with our culture and what needs to be done. I'm not trying to explain my reasoning fully here - some of this I've already covered, some I plan to cover later. I expect no one to be _persuad...
https://www.lesswrong.com/posts/wzBYGYYA3C25KdfjZ/seeding-a-productive-culture-a-working-hypothesis
# De-Centering Bias **Summary**: A perspective that synthesises biases with other considerations including game theory, virtue ethics and knowledge of your limitations. **Intro**: Adopting a way of thinking in which you are aware of your own biases is clearly important and one of the areas of rationality that is most bas...
https://www.lesswrong.com/posts/xsD8RD772gKxCe2Wh/de-centering-bias
# Distinctions Between Natalism Positions I have noticed that several distinct positions tend to be collapsed into two positions, "pro-natalism" and "anti-natalism". I think discussions about natalism would work better if people made more distinctions. When I searched for information on it I found that people had pre...
https://www.lesswrong.com/posts/88bmWTf5vZbX8kpn6/distinctions-between-natalism-positions
# LessWrong: West vs. East There is a particular distinction in Western and Eastern ways of seeing the world and interpreting Truth. I want to point at how I think LessWrong is more aligned with the Western way of truth-seeking. \[ Epistemic status is speculative / playful. \] One illustration of the distinction is ...
https://www.lesswrong.com/posts/mc85AYhkTtTBKzPKr/lesswrong-west-vs-east
# Use concrete language to improve your communication in relationships She wasn’t respecting me. Or at least, that’s what I was telling myself. And I was pretty upset. What kind of person was too busy to text back a short reply? I know she’s a friendly person because just a week ago we were talking daily, text, phone...
https://www.lesswrong.com/posts/RovDhfhy5jL6AQ6ve/use-concrete-language-to-improve-your-communication-in
# Learning Deep Learning: Joining data science research as a mathematician About two years ago I finished my PhD in mathematics on an obscure technical topic in number theory. I left academic math because I wanted to do something that had a bigger (i.e. any) impact on the world around me. I also wanted to get out of t...
https://www.lesswrong.com/posts/XZB8ThC2q76Y4m72M/learning-deep-learning-joining-data-science-research-as-a
# Regress Thyself to the Mean I don’t usually write about well understood things: [others are better explainers](https://medium.com/mindlevelup) than I am, and I have more fun working at the edge. But a few weeks ago I was commenting on a Facebook post and the exchange went something like this: > **Them**: I think \[...
https://www.lesswrong.com/posts/K6ibxm5yktmtCY3SX/regress-thyself-to-the-mean
# Yudkowsky on AGI ethics A Cornell computer scientist recently wrote on social media: > \[...\] I think the general sense in AI is that we don't know what will play out, but some of these possibilities are bad, and we need to start thinking about it. We are plagued by highly visible people ranging from Musk to Ng pa...
https://www.lesswrong.com/posts/SsCQHjqNT3xQAPQ6b/yudkowsky-on-agi-ethics
# LDL 2: Nonconvex Optimization _Edit 10/23: by request I'm going back and putting the images in. It's a pain that you can't place images in line! Also I edited the dumb probability calculation to make more sense since I may not be able to write another post today._ My favorite “party theorem” is Gauss-Bonnet. It’s t...
https://www.lesswrong.com/posts/pJaowmKt5BsNmPzda/ldl-2-nonconvex-optimization
# AlphaGo Zero and the Foom Debate AlphaGo Zero uses 4 TPUs, is built entirely out of neural nets with no handcrafted features, doesn't pretrain against expert games or anything else human, reaches a superhuman level after 3 days of self-play, and is the strongest version of AlphaGo yet. The architecture has been sim...
https://www.lesswrong.com/posts/shnSyzv4Jq3bhMNw5/alphago-zero-and-the-foom-debate
# Instrumental Rationality: Postmortem _\[In April, I set off to write a series of essays about instrumental rationality. Now that the project’s reached a pretty good stopping point, I’m looking back to see how my expectations and goals played out.\]_ **Initial Goals:** _\[What I originally wanted...
https://www.lesswrong.com/posts/SMRgK4PnznNvmft3Y/instrumental-rationality-postmortem
# What Evidence Is AlphaGo Zero Re AGI Complexity? Eliezer Yudkowsky wrote a post on Facebook on Oct 17, to which I replied at the time. Yesterday he reposted that here ([link](https://www.lesserwrong.com/posts/shnSyzv4Jq3bhMNw5/alphago-zero-and-the-foom-debate)), minus my responses. So I’ve composed the following re...
https://www.lesswrong.com/posts/D3NspiH2nhKA6B2PE/what-evidence-is-alphago-zero-re-agi-complexity
# Four Scopes Of Advice There's a very common failure mode people fall into where they'll ask for 'advice on doing something', receive excellent advice, and fail to follow it. For a long time this was mysterious to me. Then a friend provided a possible explanation that completely changed how I look at it. As I explain...
https://www.lesswrong.com/posts/nCprk4MWmAMfhDh92/four-scopes-of-advice
# Typical Minding Guilt/Shame Guilt/shame can be thought to have two potential functions. One is mechanistic, and the other is for signaling. These functions don't have to go together. \[ For the purposes of this post, I am clustering guilt/shame into one category. I do think they have different functions from e...
https://www.lesswrong.com/posts/w3G253kBBt8HjQxzz/typical-minding-guilt-shame
# Unofficial ESPR Post-mortem _\[Disclaimer: This post reflects my, i.e. Owen Shen’s, personal opinions. It does NOT reflect CFAR or ESPR’s opinions, and should NOT be taken as an ESPR-endorsed communication.\]_ _\[An overview of some of the camp and project dynamics of ESPR 2017. I look at diversity causing certain ...
https://www.lesswrong.com/posts/Gbw9Tnqeo9crTNGrg/unofficial-espr-post-mortem
# Poets are intelligence assets Aeschylus’s *Oresteia* is an ancient Greek tragedy about the dialectic between the natural desire for vengeance, order, and the rule of law. This is most likely what contemporaries thought the play was about, including Aeschylus himself. It is also a play about sexual politics, and the...
https://www.lesswrong.com/posts/pJQKdL32oW63Nroge/poets-are-intelligence-assets
# Zero-Knowledge Cooperation A lot of ink has been spilled about how to get various decision algorithms to cooperate with each other. However, most approaches require the algorithm to provide some kind of information about itself to a potential cooperator. Consider [FDT](https://arxiv.org/abs/1710.05060), the hot new...
https://www.lesswrong.com/posts/TDHDWMP5PRk4f3zrR/zero-knowledge-cooperation
# Slack for your belief system Follow-up to Zvi’s [post on Slack](https://www.lesserwrong.com/posts/yLLkWMDbC9ZNKbjDG/slack) You can have Slack in your life. But you can also have Slack in your belief system. Initially, this seems like it might be bad. Won't Slack result in a lack of precision? If I give myself Sla...
https://www.lesswrong.com/posts/TqvCmAqLmtqXpDQWf/slack-for-your-belief-system
# Guided Mental Change Requires High Trust There is a general theme to a lot of the flashpoints I’ve witnessed so far on LW 2.0. We're having the usual disagreements and debates like “what is and isn’t allowed here” or “is it worse to be infantile or exclusive?”. I think underlying a lot of these arguments are mistake...
https://www.lesswrong.com/posts/yYWcN4itsDquk2uaA/guided-mental-change-requires-high-trust
# Logical Updatelessness as a Robust Delegation Problem (Cross-posted on [AgentFoundations.org](https://agentfoundations.org/item?id=1689)) There is a cluster of problems in understanding naturalized agency which I call Robust Delegation. It refers to the way in which a limited agent might direct a more powerful agen...
https://www.lesswrong.com/posts/K5Qp7ioupgb7r73Ca/logical-updatelessness-as-a-robust-delegation-problem
# 10/28/17 Development Update: New editor framework (markdown support!), speed improvements, style improvements Here is a quick summary of the changes I just pushed: * We now have a new editor framework (we moved from ory-editor to draft-js-plugins). The primary reason for this was performance, though we also had a...
https://www.lesswrong.com/posts/SvH9kDTZWGtvt8aLy/10-28-17-development-update-new-editor-framework-markdown
# Inadequacy and Modesty The following is the beginning of _[Inadequate Equilibria](https://equilibriabook.com)_, a new sequence/book on a generalization of the notion of efficient markets, and on this notion's implications for practical decision-making and epistemic rationality. * * * This is a book about two incom...
https://www.lesswrong.com/posts/zsG9yKcriht2doRhM/inadequacy-and-modesty
# Brief comment on featured Someone asked me a meta question in the object level of this site (i.e. the frontpage), so I'm copying my reply here instead of continuing discussion there. _Me_ (on Chapter 1 of Inadequate Equilibria): I've promoted this to Featured because it's a great piece of writing, and also because...
https://www.lesswrong.com/posts/4xN4MwaMin3xveFAm/brief-comment-on-featured
# Leaders of Men Related to (Eliezer Yudkowsky): [Inadequacy and Modesty](https://www.lesserwrong.com/posts/zsG9yKcriht2doRhM/inadequacy-and-modesty) Epistemic Status: Confident. No sports knowledge required. In 2005, Willie Randolph became manager of the New York Mets. In his first five games as manager, all of wh...
https://www.lesswrong.com/posts/a8L8wRJiYoGE9SyxK/leaders-of-men
# In defence of epistemic modesty This piece defends a strong form of epistemic modesty: that, in most cases, one should pay scarcely any attention to what one finds the most persuasive view on an issue, hewing instead to an idealized consensus of experts. I start by better pinning down exactly what is meant by ‘episte...
https://www.lesswrong.com/posts/SJKowjkGF7z2sS98f/in-defence-of-epistemic-modesty
# Frequently Asked Questions for Central Banks Undershooting Their Inflation Target If you are a central bank undershooting your inflation target, please read this first before posting a question about how to create more inflation. * * * **Q.** Help! I keep undershooting my 2% inflation target! **A.** It sounds lik...
https://www.lesswrong.com/posts/tAThqgpJwSueqhvKM/frequently-asked-questions-for-central-banks-undershooting
# Mixed-Strategy Ratifiability Implies CDT=EDT _([Cross-posted from IAFF](https://agentfoundations.org/item?id=1690).)_ I provide conditions under which CDT=EDT in Bayes-net causal models. [Previously](https://agentfoundations.org/item?id=1629), I discussed conditions under which LICDT=LIEDT. That case was fairly d...
https://www.lesswrong.com/posts/x2wn2MWYSafDtm8Lf/mixed-strategy-ratifiability-implies-cdt-edt-1
# Cutting edge technology Original post: http://bearlamp.com.au/cutting-edge-technology/ When the microscope was invented, in a very short period of time we discovered the cell and the concept of microbiology.  That one invention allowed us to open up entire fields of biology and medicine.  Suddenly we could see the ...
https://www.lesswrong.com/posts/3GP3j7zgKbnaZCDbp/cutting-edge-technology
# Bias in rationality is much worse than noise _[Crossposted](https://agentfoundations.org/item?id=1696) at Intelligent Agent Forum._ I've found quite a few people dubious of my "radical skepticism" [post on human preferences](https://www.lesserwrong.com/posts/rtphbZbMHTLCepd6d/humans-can-be-assigned-any-values-whats...
https://www.lesswrong.com/posts/SGHsnG7ZraTPKzveo/bias-in-rationality-is-much-worse-than-noise
# Normative assumptions: regret _[Crossposted](https://agentfoundations.org/item?id=1697) at the Intelligent Agents Forum._ In a [previous post](https://www.lesserwrong.com/posts/rtphbZbMHTLCepd6d/humans-can-be-assigned-any-values-whatsoever), I presented a model of human rationality and reward as pair (p, R), with p...
https://www.lesswrong.com/posts/Fg83cD3M7dSpSaNFg/normative-assumptions-regret
# An Equilibrium of No Free Energy **Follow-up to:** [Inadequacy and Modesty](https://www.lesserwrong.com/posts/zsG9yKcriht2doRhM/inadequacy-and-modesty) * * * I am now going to introduce some concepts that lack established names in the economics literature—though I don’t believe that any of the basic ideas are new ...
https://www.lesswrong.com/posts/yPLr2tnXbiFXkMWvk/an-equilibrium-of-no-free-energy
# Doxa, Episteme, and Gnosis Ancient Greek famously made a distinction between 3 kinds of knowledge: doxa, episteme, and gnosis. Doxa is basically what in English we might call hearsay. It’s the stuff you know because someone told you about it. If you know the Earth is round because you read it in a book, that’s doxa...
https://www.lesswrong.com/posts/vGj9QcxCryjeD2r3m/doxa-episteme-and-gnosis
# Anthropic reasoning isn't magic The user [Optimization Process](https://www.lesserwrong.com/users/optimization-process) presented a very interesting collection of [five anthropic situations](https://www.lesserwrong.com/posts/KNtKKmcd9DsP7WuZ3/the-anthropic-principle-five-short-examples), leading to seemingly contrad...
https://www.lesswrong.com/posts/4ZRDXv7nffodjv477/anthropic-reasoning-isn-t-magic
# Activation Energies and Uncertainty From "An Equilibrium of No Free Energy" > We can see the notion of an inexploitable market as generalizing the notion of an efficient market as follows: in both cases, _there’s no free energy inside the system_. In both markets, there’s a horde of hungry organisms moving around t...
https://www.lesswrong.com/posts/NNseEYWKGMoYXPta9/activation-energies-and-uncertainty
# The Copernican Revolution from the Inside The Copernican revolution was a pivotal event in the history of science. Yet I believe that the lessons most often taught from this period are largely historically inaccurate and that the most important lessons are basically *not taught at all* \[1\]. As it turns out, t...
https://www.lesswrong.com/posts/JAAHjm4iZ2j5Exfo2/the-copernican-revolution-from-the-inside
# Competitive Truth-Seeking In many domains, you can get better primarily by being correct more frequently. If you’re managing a team, or trying to improve your personal relationships, it’s very effective to improve your median decision. The more often you’re right, the better you’ll do, so people often implicitly...
https://www.lesswrong.com/posts/HFq7ydko9gLD45yYv/competitive-truth-seeking
# The Craft & The Community - A Post-Mortem & Resurrection Epistemic status: Broad, well-developed speculation. Preface ------- To my knowledge, this essay contains the most comprehensive list of criticisms of the rationality community to date. Understandably, some people may take this as a rejection of the communit...
https://www.lesswrong.com/posts/wmEcNP3KFEGPZaFJk/the-craft-and-the-community-a-post-mortem-and-resurrection