# A List Of Questions & Exercises For Reviewing Your Year As the year comes to an end, I thought it might be useful to share some questions that might help with reviewing your past year. This is an adapted post, based on what I have written [here](https://medium.com/@wikiwisdom_/the-ultimate-guide-to-reviewing-learni...
https://www.lesswrong.com/posts/H7sRo38vFRWt4ddS8/a-list-of-questions-and-exercises-for-reviewing-your-year
# Goodhart Taxonomy [Goodhart’s Law](https://en.wikipedia.org/wiki/Goodhart%27s_law) states that "any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." However, this is not a single phenomenon. I propose that there are (at least) four different mechanisms thro...
https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy
# Methods of Phenomenology _NB:_ _[Originally posted](https://mapandterritory.org/methods-of-phenomenology-e2f936651ff)_ _on_ _[Map and Territory](https://mapandterritory.org/)_ _on Medium, so some of the internal series links go there._ [In the previous post](https://mapandterritory.org/the-world-as-phenomena-47d27d...
https://www.lesswrong.com/posts/QeMJH94B9raXDSvJ3/methods-of-phenomenology
# The answer sheet Original post with better formatting on links: **[http://bearlamp.com.au/the-answer-sheet/](http://bearlamp.com.au/the-answer-sheet/)** * * * I always wished I had an answer sheet.  A cheat shee...
https://www.lesswrong.com/posts/BzhewKpooSPA3mx9W/the-answer-sheet
# Book Review: The Elephant in the Brain We don’t only constantly deceive others. In order to better deceive others, we also deceive ourselves. [You’d pay to know what you really think](http://www.subgenius.com/pam1/pamphlet_p2.html). Robin Hanson has [worked tirelessly](http://www.overcomingbias.com/) to fill this u...
https://www.lesswrong.com/posts/BgBrXpByCSmCLjpwr/book-review-the-elephant-in-the-brain
# A Candidate Complexity Measure I'd like to introduce a mathematical definition that tries to capture some of our intuitive notion of 'complexity'. People use this word in lots of ways. What I mean by this word is the sort of 'complexity' that seems to have been increasing over the history of our planet for the past ...
https://www.lesswrong.com/posts/ra4yAMf8NJSzR9syB/a-candidate-complexity-measure
# Why everything might have taken so long I asked [why humanity took so long to do anything at the start](https://meteuphoric.wordpress.com/2017/12/28/why-did-everything-take-so-long/), and the Internet gave me its thoughts. Here is my expanded list of hypotheses, summarizing from comments [on the post](https://meteup...
https://www.lesswrong.com/posts/yxTP9FckrwoMjxPc4/why-everything-might-have-taken-so-long
# The Loudest Alarm Is Probably False _Epistemic Status: Simple point, supported by anecdotes and a straightforward model, not yet validated in any rigorous sense I know of, but IMO worth a quick reflection to see if it might be helpful to you._ A curious thing I've noticed: among the friends whose inner monologues I...
https://www.lesswrong.com/posts/B2CfMNfay2P8f2yyc/the-loudest-alarm-is-probably-false
# A Simple Two-Axis Model of Subjective States, with Possible Applications to Utilitarian Problems At the beginning of last year, I noticed that [neuroscience is very confused about what suffering is](http://effective-altruism.com/ea/16a/a_review_of_what_affective_neuroscience_knows/). This bothered me quite a lot, be...
https://www.lesswrong.com/posts/3mFmDMapHWHcbn7C6/a-simple-two-axis-model-of-subjective-states-with-possible
# Insights from 'The Strategy of Conflict' Cross-posted from [my blog](http://danielfilan.com/2018/01/03/schelling.html). I recently read [Thomas Schelling's](https://en.wikipedia.org/wiki/Thomas_Schelling) book 'The Strategy of Conflict'. Many of the ideas it contains are now pretty widely known, especially in the r...
https://www.lesswrong.com/posts/2SeN2MjmMzZB25hBo/insights-from-the-strategy-of-conflict
# Papers for 2017 I had three new papers either published or accepted into publication last year; all of them are now available online: * **How Feasible is the Rapid Development of Artificial Superintelligence?** _Physica Scripta 92_ (11), 113001. * _Abstract:_ What kinds of fundamental limits are there in ho...
https://www.lesswrong.com/posts/beSGFi2Z9uidL5rrN/papers-for-2017
# Choice begets regret Epistemic status: speculative [Choice](https://thezvi.wordpress.com/2017/07/22/choices-are-bad/) [is](https://thezvi.wordpress.com/2017/08/12/choices-are-really-bad/) [bad](https://en.wikipedia.org/wiki/Overchoice) I want to focus on one aspect of this badness: regret. I'm going to argue that i...
https://www.lesswrong.com/posts/AmSFTAJWA3kwjjnCB/choice-begets-regret
# Have you felt exiert yet? Pre-adolescent children haven't felt strong lust yet. Those of us who've avoided strong pain are also missing an experience of that affect. Nostalgia can come up very early, but does require a bit of living first. Depression can strike at any age. So, in general, there are emotions and fee...
https://www.lesswrong.com/posts/qP3s89RAcdYy2LN2K/have-you-felt-exiert-yet
# Rationality: Abridged This was originally planned for release around Christmas, but our old friend Mr. Planning Fallacy said no. The best time to plant an oak tree is twenty years ago; the second-best time is today. I present to you: _Rationality Abridged --_ a 120-page nearly 50,000-word summary of "Rationality: F...
https://www.lesswrong.com/posts/uQ3AgiaryeWNpDrBN/rationality-abridged
# LW Update 1/5/2018 – Comment Styling We've just pushed an update with a few stylistic tweaks, a few small bug fixes, and most notably, an overhaul on the comment styling. The intent here was to make it easier to skim comments and keep track of branching threads. This is a bit of an experiment, and the current appro...
https://www.lesswrong.com/posts/SQugQ9j7GeWzAuZec/lw-update-1-5-2018-comment-styling
# Roleplaying As Yourself (_This is a basic intuition pump I've found helpful in making decisions, and maybe you'll like it too.)_ For all its shortcomings, I think there was something quite useful about the "What Would Jesus Do?" meme within the Christian framework. Of course it's not a very sophisticated ethical gu...
https://www.lesswrong.com/posts/nF2fpXPuTR5pHjG99/roleplaying-as-yourself
# Demon Threads _tldr: a Demon Thread is a discussion where everything is subtly warping towards aggression and confusion (i.e. as if people are under demonic influence), even if people are well intentioned and on the same 'side.' You can see a demon thread coming in advance, but it's still hard to do anything about._...
https://www.lesswrong.com/posts/BZtAavpsy9WtMYgEL/demon-threads
# Scientist Fiction [Rabbit fiction](https://www.goodreads.com/list/show/6260.Rabbits) is always about rabbits, but science fiction isn’t always about science. The _best_ science fiction, however, usually is. Science fiction transports the reader to strange and novel worlds; figuring out how those worlds work is often...
https://www.lesswrong.com/posts/9oqCKeibhi2WNBpHo/scientist-fiction
# In Defence of Meta The current mods of the Less Wrong community seem to believe that meta (I'm using meta in a broad sense to mean meta + community) discussion and community discussion are both things that need to be heavily discouraged. In part, we can see this inspired by Eliezer's post [Less Meta](http://lesswron...
https://www.lesswrong.com/posts/jHpb6XW66fs2yg9Gx/in-defence-of-meta
# Nonlinear perception of happiness Epistemic status: Speculative. tl;dr: Perception of happiness is related to some "raw" happiness by an equivalent of a psychophysics law. The "raw" quantity should be used when aggregating. Far-reaching implications for utility calculation would follow. A body of research seeks to ...
https://www.lesswrong.com/posts/7Kv5cik4JWoayHYPD/nonlinear-perception-of-happiness
# Niceness Stealth-Bombing _Epistemic status: Tested in an environment very favorable to its success._ Guys, I know it’s hard to believe and I’m sure very few of us have experience with it, but sometimes the people around us have opinions we think are really stupid. Fortunately, there’s an easy solution to this prob...
https://www.lesswrong.com/posts/uj2oP2cq67S7WC7tL/niceness-stealth-bombing
# Video: The Phenomenology of Intentions We all know intuitively what it feels like to "intend" a certain outcome or action. However, knowing what a thing is intuitively is different from having a deep introspective take on how the thing actually plays out in your own brain. In this Mental Model Monday, I go deep into...
https://www.lesswrong.com/posts/QmcFeZtwSRhcsYDZu/video-the-phenomenology-of-intentions
# TSR #9: Hard Rules _This is part of a series of posts where I call out some ideas from the latest edition of The Strategic Review (written by Sebastian Marshall), and give some prompts and questions that I think people might find useful to answer. I include a summary of the most recent edition, but it's not a replac...
https://www.lesswrong.com/posts/96gw6eCqEqrys89ry/tsr-9-hard-rules
# 1/9/2018 Update - Frontpage Views A few people had complained that the frontpage views (i.e. "Curated Content", "Frontpage", etc) were still a bit counterintuitive. I made a couple quick changes: * make the buttons green, so that it's more clear you might want to click there at all * add a hover-menu, which ho...
https://www.lesswrong.com/posts/LGAMb2Q76K9zXoRPX/1-9-2018-update-frontpage-views
# Babble This post is an exercise in "identifying with the algorithm." I'm a big fan of the probabilistic method and randomized algorithms, so my biases will show. How do human beings produce knowledge? When we describe rational thought processes, we tend to think of them as essentially deterministic, deliberate, and...
https://www.lesswrong.com/posts/i42Dfoh4HtsCAfXxL/babble
# Surely, rhetorical question? I just finished Daniel Dennett’s [Intuition Pumps And Other Tools for Thinking](https://smile.amazon.com/Intuition-Pumps-Other-Tools-Thinking/dp/0393348784/). In the book, Dan exposes the reader to several concepts, thought experiments and red flags one may find relevant to the quest of ...
https://www.lesswrong.com/posts/9fYriyQh2BYDaKdjc/surely-rhetorical-question
# More Babble In my [last babble](https://radimentary.wordpress.com/2018/01/10/babble/), I introduced the Babble and Prune model of thought generation: Babble with a weak heuristic to generate many more possibilities than necessary, Prune with a strong heuristic to find a best, or the satisfactory one. I want to zoom ...
https://www.lesswrong.com/posts/wQACBmK5bioNCgDoG/more-babble
# Neural program synthesis is a dangerous technology _Crossposted from [my Medium blog](https://medium.com/@honnibal/neural-program-synthesis-is-a-dangerous-technology-677b1e0cfd5d)_ Today’s computer viruses can copy themselves between computers, but they can’t mutate. That might change very soon, leading to a seriou...
https://www.lesswrong.com/posts/uqco28EF6ED3pnuBr/neural-program-synthesis-is-a-dangerous-technology
# Boiling the Crab: Slow Changes (beneath Sensory Threshold) add up **Phenomenon**: A live crab, when slowly boiled, will not climb out of the pot. This is an analog of a known feature in human cognition: humans are not good at observing small changes. We are good at noticing sharp changes. In this post, I outline a...
https://www.lesswrong.com/posts/hhAXGq8BbmeTvkfgG/boiling-the-crab-slow-changes-beneath-sensory-threshold-add
# Prune [Previously](https://www.lesserwrong.com/posts/i42Dfoh4HtsCAfXxL/babble), I described human thought-generation as an adversarial process between a low-quality pseudorandom Babble generator and a high-quality Prune filter, roughly analogous to the Generative Adversarial Networks model in machine learning. I [th...
https://www.lesswrong.com/posts/rYJKvagRYeDM8E9Rf/prune
# Field-Building and Deep Models _What is important in hiring/field-building in x-risk and AI alignment communities and orgs? I had a few conversations on this recently, and I'm trying to publicly write up key ideas more regularly._ _I had in mind the mantra 'better written quickly than not written at all', so you ca...
https://www.lesswrong.com/posts/Z8r9sAmzDucngdZtn/field-building-and-deep-models
# No, Seriously. Just Try It: TAPs This next semester (I'm in university, so that's how I measure time) I'm working on developing my ability to better integrate arbitrary habits into my behavior. [Trigger-action-planning](https://www.lesserwrong.com/posts/v4nNuJBZWPkMkgQRb/making-intentions-concrete-trigger-action-pla...
https://www.lesswrong.com/posts/Ng6NcxswMMjx7z65Y/no-seriously-just-try-it-taps
# introducing: target stress note: this concept is running on a predictive processing paradigm, approximately, but a fairly generalized version of said paradigm which seems obviously true. target stress is the expectation of how much stress one is going to experience at a given time. a low target stress level is, for...
https://www.lesswrong.com/posts/ubgepHbEyJy3C4dwP/introducing-target-stress
# What strange and ancient things might we find beneath the ice? > If I am not for myself, who will be for me? > If I am only for myself, what am I? > If not now, when? > -Hillel > Nature is not good, only proto-good. > -Paolo Soleri _Epistemic status: literally a dream_ I awaken. I am in the desert, alone....
https://www.lesswrong.com/posts/3BFvhJYR3To3Cf4NK/what-strange-and-ancient-things-might-we-find-beneath-the
# Announcement: AI alignment prize winners and next round We (Zvi Mowshowitz, Vladimir Slepnev and Paul Christiano) are happy to announce that the [AI Alignment Prize](https://www.lesserwrong.com/posts/YDLGLnzJTKMEtti7Z/announcing-the-ai-alignment-prize) is a success. From November 3 to December 31 we received over 40...
https://www.lesswrong.com/posts/4WbNGQMvuFtY3So7s/announcement-ai-alignment-prize-winners-and-next-round
# Circumambulation This is Part 4 of the Babble and Prune sequence. In the previous parts, I described the brain's thought-generation process as an adversarial learning system between Babble - which generates low-quality content - and Prune - which filters for high-quality content. My primary motivation for understa...
https://www.lesswrong.com/posts/h4K6bsWrYHDcvvPtw/circumambulation
# Book Review: Why Buddhism Is True Robert Wright of _The Moral Animal_ fame recently published a book about Buddhism. He titled it _Why Buddhism Is True: The Science and Philosophy of Meditation and Enlightenment_ and in it seeks to demonstrate the connections between evolutionary psychology and Buddhism. In an appen...
https://www.lesswrong.com/posts/FGZk5PgoaNoLQrzAR/book-review-why-buddhism-is-true
# Plan to Be Lucky Like other rationalists, I’ve [written about](https://putanumonit.com/2016/11/23/climbing-the-horseshoe/) the dangers inherent in our instincts for tribalism. [Unlike other rationalists](https://twitter.com/yashkaf/status/953012916906807296), I think that instead of being suppressed tribalism sh...
https://www.lesswrong.com/posts/whReJNi98wGvgZSj7/plan-to-be-lucky
# The Solitaire Principle: Game Theory for One > Do I contradict myself? > Very well then I contradict myself; > (I am large, I contain multitudes.) This post is an exercise in taking Whitman seriously. If the self is properly understood as a loose coalition of many agents with possibly distinct values, belie...
https://www.lesswrong.com/posts/n8k5qL7w6iCWHDZrd/the-solitaire-principle-game-theory-for-one
# Conversational Presentation of Why Automation is Different This Time I have been frustrated recently with my inability to efficiently participate in discussions of automation which crop up online and in person. The purpose of the post is to refine a conversational presentation of what I believe to be the salient con...
https://www.lesswrong.com/posts/HtikjQJB7adNZSLFf/conversational-presentation-of-why-automation-is-different
# Making Exceptions to General Rules Suppose you make a general rule, e.g. "I won't eat any cookies". Then you encounter a situation that legitimately feels exceptional: "These are generally considered the best cookies in the entire state". This tends to make people torn between two threads of reasoning: 1) Clearly t...
https://www.lesswrong.com/posts/CKoyBW5T7XaNSkawp/making-exceptions-to-general-rules
# An Apology is a Surrender _\[Epistemic Status: This post is aimed at people who've hurt loved ones by trying to fast talk their way out of apologies; remember [the law of equal and opposite advice](http://slatestarcodex.com/2014/03/24/should-you-reverse-any-advice-you-hear/). [A different version of this post](http:...
https://www.lesswrong.com/posts/6MogqPoyYyiDz3eRh/an-apology-is-a-surrender
# Russian Cynicism I. -- "My dear Kostya, you've finally returned to me!" Kitty comes trotting down the steps of the train station, rushing into Levin's waiting arms. "It's been but four days, my dear." Levin wraps his arms around her slim waist with a passion that belies his nonchalance. Gently resting in each ...
https://www.lesswrong.com/posts/zZTAD7CBX9SkPdv5s/russian-cynicism
# Beware of black boxes in AI alignment research Over the course of the [AI Alignment Prize](https://www.lesserwrong.com/posts/YDLGLnzJTKMEtti7Z/announcing-the-ai-alignment-prize) I've sent out lots of feedback emails. Some of the threads were really exciting and taught me a lot. But mostly it was me saying pretty muc...
https://www.lesswrong.com/posts/DNKTmmNZr5M2uCZLz/beware-of-black-boxes-in-ai-alignment-research
# A model I use when making plans to reduce AI x-risk I've been thinking about what implicit model of the world I use to make plans that reduce x-risk from AI. I list four main gears below (with quotes to illustrate), and then discuss concrete heuristics I take from it. A model of AI x-risk in four parts ------------...
https://www.lesswrong.com/posts/XFpDTCHZZ4wpMT8PZ/a-model-i-use-when-making-plans-to-reduce-ai-x-risk
# Singularity Mindset In a fixed mindset, people believe their basic qualities, like their intelligence or talent, are simply fixed traits. In a [growth mindset](https://www.mindsetworks.com/science/), people believe that their basic qualities can be modified gradually via dedication and incremental progress. Scott Al...
https://www.lesswrong.com/posts/uNccQiT3ogH5zt2xb/singularity-mindset
# Fashionable or Fundamental Thought in Communities There's a couple interesting things about the 2011 LessWrong post, "[Approving reinforces low-effort behaviors.](http://lesswrong.com/lw/6nz/approving_reinforces_loweffort_behaviors/)" The post itself is fascinating to re-read — there's some valuable and profitable ...
https://www.lesswrong.com/posts/5BiMNfGmhsBuYzM9N/fashionable-or-fundamental-thought-in-communities
# Low Enough To See Your Shadow I wrote [last time](https://radimentary.wordpress.com/2018/01/18/singularity-mindset/) that Jung's approach to humility deserves its own post: _Modern men cannot find God because they will not look low enough._ Here's the first piece of that post. _Look low enough by confronting the ...
https://www.lesswrong.com/posts/bXQ7BLQPfp9ctJPaW/low-enough-to-see-your-shadow
# Kenshō **Follow-up to:** [Gears in Understanding](http://lesswrong.com/lw/ozz/gears_in_understanding/), [Fake Frameworks](http://lesswrong.com/lw/p80/in_praise_of_fake_frameworks/) This last September, I experienced enlightenment. I mean to share this as a simple fact to set context. I don’t claim I am enlight...
https://www.lesswrong.com/posts/tMhEv28KJYWsu6Wdo/kensh
# The Desired Response A friend once told me a story of how an interaction with her mother changed her perspective on communication. She said that she had been going through a break up at the time, and was venting to her mother, when her mother responded with "Do you want my advice, or my sympathy?" Oftentimes, wh...
https://www.lesswrong.com/posts/GnqdqM8DMGNTriA9K/the-desired-response
# Myopia and Comfort Zones As my first CFAR approaches, I meditate on the Kabbalistic interpretations of the name. The first part is about the relationship between nearsightedness and comfort zones: the curious adjustments that I made to combat myopia, the corresponding shrinking of comfort zones, and the reason ...
https://www.lesswrong.com/posts/Fx3efEJWcxD99pEA8/myopia-and-comfort-zones
# How the LW2.0 front page could be better at incentivizing good content (This should probably be in meta, but I'm not sure how to post to meta. Feel free to move if needed.) Imagine tomorrow someone writes the best ever LW2.0 post and everyone upvotes it. How can front page visitors read that best and newest post? T...
https://www.lesswrong.com/posts/BTPv97quyNHfkmXu6/how-the-lw2-0-front-page-could-be-better-at-incentivizing
# Book Review: The Secrets of Alchemy Like most decisions that seem bizarre to others, my choice to read Lawrence Principe's book took months to germinate. It started when I saw [a post on Hacker News describing Jung's work on alchemical diagrams](https://news.ycombinator.com/item?id=12408933), noting that premodern p...
https://www.lesswrong.com/posts/cnS7X2MKuqXtzmcRA/book-review-the-secrets-of-alchemy
# The Tallest Pygmy Effect Status: I thought this was a common economics term, but when I google it I get either unrelated or references [using it the way I expect](http://econlog.econlib.org/archives/2008/10/the_tallest_pyg.html) but not defining it. It’s a really useful term, so I’m going to attempt to make it a thi...
https://www.lesswrong.com/posts/DG8wTAFLjnevwc3nR/the-tallest-pygmy-effect
# Taking it Private: Short Circuiting Demon Threads (working example) This post is intended as a working example of [how I think Demon Threads should be resolved.](https://www.lesserwrong.com/posts/BZtAavpsy9WtMYgEL/demon-threads/mGzojouMavQ2m7Zop) The gist of my suggestion is: > **Step 1.** Make it easy and common t...
https://www.lesswrong.com/posts/LxrpCKQPbdpSsitBy/taking-it-private-short-circuiting-demon-threads-working
# A simpler way to think about positive test bias Eliezer described "positive bias" (which I'll rename "positive test bias" for reasons explained in Unnamed's comment below) in an [LW post](http://lesswrong.com/lw/iw/positive_bias_look_into_the_dark/) and an [HPMOR chapter](http://www.hpmor.com/chapter/8). According t...
https://www.lesswrong.com/posts/X4DncGJPuJE6CJbMm/a-simpler-way-to-think-about-positive-test-bias
# Adequacy as Levels of Play One method that I find very useful for evaluating [adequacy concerns](http://equilibriabook.com) is "level of play". If you look at different games or different leagues of the same game, it's pretty apparent that the "level of play" - the amount of athletici...
https://www.lesswrong.com/posts/ucpm7hiFtcEPoW5QE/adequacy-as-levels-of-play
# Understanding-1-2-3 Epistemic Status: Seems worth sharing Assumes Knowledge Of: [System 1 and System 2](http://bigthink.com/errors-we-live-by/kahnemans-mind-clarifying-biases) from [Thinking, Fast and Slow](https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman-ebook/dp/B00555X8OA/ref=sr_1_1?ie=UTF8&qid=15166644...
https://www.lesswrong.com/posts/po2FzSwr8EcD5PQ4J/understanding-1-2-3
# Hammers and Nails _If all you have is a hammer, everything looks like a nail._ The most important idea I've blogged about so far is [Taking Ideas Seriously](https://radimentary.wordpress.com/2018/01/18/singularity-mindset/), which is itself a generalization of Zvi's [More Dakka](https://thezvi.wordpress.com/2017/12...
https://www.lesswrong.com/posts/QzBuuNEqJGQFeWM4f/hammers-and-nails
# An Untrollable Mathematician Follow-up to [All Mathematicians are Trollable](https://www.alignmentforum.org/posts/5bd75cc58225bf067037518c/all-mathematicians-are-trollable-divergence-of-naturalistic-logical-updates). It is relatively easy to see that no computable Bayesian prior on logic can converge to a single co...
https://www.lesswrong.com/posts/5bd75cc58225bf0670375533/an-untrollable-mathematician
# Book Review - Probability and Finance: It's Only a Game! Update: This post has been substantially revised in response to feedback and incorporating answers to good questions. If something seems ridiculous or nonsensical it is definitely best to assume that is my error rather than the authors'. Probability and Finan...
https://www.lesswrong.com/posts/CkYDKE99WZSrTttAF/book-review-probability-and-finance-it-s-only-a-game
# Interpersonal Approaches for X-Risk Education Much of the AI research community remains unaware of the Alignment Problem (according to my personal experience), and I haven't seen much discussion about how to deliberately expand the community (all I've seen to this effect is [Scott's A/B/C/D/E testing on alignment ar...
https://www.lesswrong.com/posts/pG7zuvMonHDCJFfjv/interpersonal-approaches-for-x-risk-education
# Dispel your justification-monkey with a “HWA!” _I'm going to use a couple of words in this post that might not be immediately clear to some people. One of them is "justification". Another is "acceptance". I would like to suggest that if you think I'm saying something stupid when I'm using those words, that you inste...
https://www.lesswrong.com/posts/aFjPArYD8ys47i92g/dispel-your-justification-monkey-with-a-hwa
# What are the Best Hammers in the Rationalist Community? I've toyed with the idea of giving an intro to rationality, hoping I can find the 20% of the material that provides 80% of the gains (or the 5% that provides 50% of the gains). Here and there, I've also been asked what the rationality community is about, and I...
https://www.lesswrong.com/posts/wmFHWiNRN4Wxiy9mF/what-are-the-best-hammers-in-the-rationalist-community
# "Slow is smooth, and smooth is fast" I think it's worthwhile to signal boost bits of wisdom that turn out to be surprisingly useful. Apparently "Slow is smooth, and smooth is fast" is a Navy SEAL saying. The meaning is pretty clear. Practice slowly so that the correct motor patterns are ingrained. And perhaps equall...
https://www.lesswrong.com/posts/4FZfzqMtwQZES3eqN/slow-is-smooth-and-smooth-is-fast
# Teaching Ladders I've been teaching math to people one or two levels below me my entire life. Although this seems like a limitation, I think it's the natural state of affairs. On the Kiseido Go Server (KGS), there's a room called the KGS Teaching Ladder where players can find teaching games with players just a few ...
https://www.lesswrong.com/posts/CKAYnu89uDoEDEuct/teaching-ladders
# Form and Feedback in Phenomenology *NB:* [*Originally posted*](https://mapandterritory.org/form-and-feedback-in-phenomenology-d44f4e5c72b3) *on* [*Map and Territory*](https://mapandterritory.org/) *on Medium, so some of the internal series links go there.* Now that we’ve covered phenomenology’s [foundations](https:...
https://www.lesswrong.com/posts/M7Z5sm6KoukNpF3SD/form-and-feedback-in-phenomenology
# European Community Weekend 2018 Announcement We are excited to announce this year's European LessWrong Community Weekend. For the fifth time, rationalists from all over Europe (and some from outside Europe) are **gathering in Berlin** to socialize, have fun, exchange knowledge and skills, and have interesting discus...
https://www.lesswrong.com/posts/gcHXqDry66NphyAEY/european-community-weekend-2018-announcement
# On not getting swept away by mental content There’s a specific subskill of meditation that I call “not getting swept away by the content”, that I think is generally valuable. It goes like this. You sit down to meditate and focus on your breath or whatever, and then a worrying thought comes to your mind. And it’s a ...
https://www.lesswrong.com/posts/sxSjSM3TSmXLAuiSi/on-not-getting-swept-away-by-mental-content
# The Dogma of Evidence-based Medicine In polite society it's currently fashionable to be in favor of Evidence-based Medicine and proclaim that we don't have enough of it. In this article I want to argue that this preference isn't backed up by good reasons. The paradigm of Evidence-based Medicine isn't backed up by ev...
https://www.lesswrong.com/posts/grQQynyy6Mc4agJxu/the-dogma-of-evidence-based-medicine
# Magic Brain Juice Shorter and less Pruned due to CFAR. > A grandfather is talking with his grandson and he says there are two wolves inside of us which are always at war with each other.  > One of them is a good wolf which represents things like kindness, bravery and love. The other is a bad wolf, which represent...
https://www.lesswrong.com/posts/5S5f9a9nBh8Bzx62Y/magic-brain-juice
# Upvote/downvote amounts In light of the variable voting powers that the new karma system has, I keep finding myself wanting to be able to vote less than my "full power" on a comment. But in particular, the thing I think I always want in practice is to be able to _downvote_ less than that. Downvoting a comment by 4 ...
https://www.lesswrong.com/posts/WpCm8puofsP6hmPZh/upvote-downvote-amounts
# Against Instrumental Convergence [Instrumental convergence](https://arbital.com/p/instrumental_convergence/) is the idea that every sufficiently intelligent agent would exhibit behaviors such as self preservation or acquiring resources. This is a natural result for maximizers of simple utility functions. However I c...
https://www.lesswrong.com/posts/28kcq8D4aCWeDKbBp/against-instrumental-convergence
# Pareto improvements are rarer than they seem _this is surely not an original insight, but I haven't seen it before_ A [Pareto improvement](https://en.wikipedia.org/wiki/Pareto_efficiency) is where you make one party better off and no parties worse off. Suppose Adam has a rare baseball card. He assigns no intrinsic...
https://www.lesswrong.com/posts/5AQBNwDoKW5YXDbvc/pareto-improvements-are-rarer-than-they-seem
# A LessWrong Crypto Autopsy Wei Dai, one of the first people Satoshi Nakamoto contacted about Bitcoin, was a frequent Less Wrong contributor. So was Hal Finney, the first person besides Satoshi to make a Bitcoin transaction. The first mention of Bitcoin on Less Wrong, a post called [Making Money With Bitcoin](http:/...
https://www.lesswrong.com/posts/MajyZJrsf8fAywWgY/a-lesswrong-crypto-autopsy
# The different types (not sizes!) of infinity In a recent conversation, a smart and mathy friend of mine revealed they didn't understand Cantor's diagonal argument. Further questioning revealed that she was using the wrong concept of infinity. I thought I'd pass on my explanations from there. What is infinity? Does...
https://www.lesswrong.com/posts/GhCbpw6uTzsmtsWoG/the-different-types-not-sizes-of-infinity
# All Mathematicians are Trollable: Divergence of Naturalistic Logical Updates The post on [naturalistic logical updates](https://agentfoundations.org/item?id=625) left open the question of whether the probability distribution converges as we condition on more logical information. Here, I show that this cannot always ...
https://www.lesswrong.com/posts/5bd75cc58225bf067037518c/all-mathematicians-are-trollable-divergence-of-naturalistic-logical-updates
# Write This post is Part 5 of the sequence on [Babble](https://radimentary.wordpress.com/2018/01/10/babble/). After writing [Hammers and Nails](https://radimentary.wordpress.com/2018/01/22/hammer-and-nails/), I figured out that my favorite Hammer is writing. Write about everything. Write it immediately. Edit afterwa...
https://www.lesswrong.com/posts/eJiE7uuKZaP5HqLvY/write
# "Taking AI Risk Seriously" (thoughts by Critch) Note: Serious discussion of end-of-the-world and what to do given limited info. Scrupulosity triggers, etc. _Epistemic Status: The first half of this post is summarizing my own views. I think I phrase each sentence about as strongly as I feel it. (When I include a cav...
https://www.lesswrong.com/posts/HnC29723hm6kJT7KP/taking-ai-risk-seriously-thoughts-by-critch
# Biological humans and the rising tide of AI The [Hanson-Yudkowsky AI-Foom Debate](http://intelligence.org/files/AIFoomDebate.pdf) focused on whether AI progress is winner-take-all. But even if it isn't, humans might still fare badly. Suppose Robin is right. Instead of one basement project going foom, AI progresses ...
https://www.lesswrong.com/posts/98QAWTRyDnHMfr6SL/biological-humans-and-the-rising-tide-of-ai
# RSS Feeds are fixed and should be properly functional this time Just pushed a fix that should make RSS feeds work properly (if you subscribed a while ago, you will have to resubscribe). Thanks to [squirrelInHell](https://www.lesserwrong.com/posts/EfMBh23Nb9pdJtmG5/rss-troubles) who made me aware of the ...
https://www.lesswrong.com/posts/Zo462cTMBYsc45foA/rss-feeds-are-fixed-and-should-be-properly-functional-this
# Hammertime Day 1: Bug Hunt _Rationality is systematized winning._ In [Hammers and Nails](https://radimentary.wordpress.com/2018/01/22/hammer-and-nails/), I suggested that rationalists need to be more systematic in the practice of our craft. In this post, I will use the word Hammer for a single technique well-practi...
https://www.lesswrong.com/posts/rFjhz5Ks685xHbMXW/hammertime-day-1-bug-hunt
# Arbital postmortem Disclaimer 1: These views are my own and don’t necessarily reflect the views of anyone else (Eric, Steph, or Eliezer). Disclaimer 2: Most of the events happened at least a year ago. My memory is not particularly great, so the dates are fuzzy and a few things might be slightly out of order. But th...
https://www.lesswrong.com/posts/kAgJJa3HLSZxsuSrf/arbital-postmortem
# Hammertime Day 2: Yoda Timers This is part 2 of 30 in the Hammertime Sequence. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. > _No! Try not! Do, or do not. There is no try._ > —[Yoda](http://www.youtube.com/watch?v=PcjnbIF1yAA) There's a copy of [Barney Stin...
https://www.lesswrong.com/posts/vpvKEj7shuk8h5Eet/hammertime-day-2-yoda-timers
# OpenPhil's "Update on Cause Prioritization / Worldview Diversification" OpenPhil recently posted an update on their overall cause prioritization strategy. In particular, exploring the concept of worldview diversification: > When choosing how to allocate capital, we need to decide between multiple worldviews. We use...
https://www.lesswrong.com/posts/ukc24qaC5jLJJQ7Ho/openphil-s-update-on-cause-prioritization-worldview
# An experiment Alexei's [postmortem of Arbital](https://www.lesserwrong.com/posts/kAgJJa3HLSZxsuSrf/arbital-postmortem) got me thinking about incentivizing good online explanations, so I came up with a test that we can run here and now: What's your simplest, most vivid, most memorable explanation of _correlation ...
https://www.lesswrong.com/posts/8mwRfXowq3jM3nmRx/an-experiment
# Epiphenomenal Oracles Ignore Holes in the Box Background: https://wiki.lesswrong.com/wiki/Oracle_AI Epistemic status: Likely wrong, but I don't know how. I'm not a 'real' FAI researcher, just an LW reader inspired by the Alignment Prize to try to start contributing. (At minimum I'm heavily indebted to Stuart Armst...
https://www.lesswrong.com/posts/q5qoG7gXuntKgBNoR/epiphenomenal-oracles-ignore-holes-in-the-box
# Paper Trauma [Andrew Critch](https://www.lesserwrong.com/posts/HnC29723hm6kJT7KP/critch-on-taking-ai-risk-seriously/) thinks people should be spending more time than they currently are using paper as a working memory aid while thinking, especially [large paper](https://www.amazon.com/dp/B0027A7DBU/ref=asc_df_B0027A7...
https://www.lesswrong.com/posts/GpDjJFeCQCLjpt68b/paper-trauma
# Sources of intuitions and data on AGI Much of the difficulty in making progress on AI safety comes from the lack of useful feedback loops. We do not have a superintelligent AI to run tests on and by the time we do, it will probably be too late. This means we have to resort to using proxies. In this post, I will high...
https://www.lesswrong.com/posts/BibDWWeo37pzuZCmL/sources-of-intuitions-and-data-on-agi
# Hammertime Day 3: TAPs This is part 3 of 30 in the Hammertime Sequence. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. A running theme of Hammertime, especially for the next two days, is _intentionality_, or deliberateness. Instrumental rationality is designed t...
https://www.lesswrong.com/posts/ESnzpoCJrAfwAzpMB/hammertime-day-3-taps
# Hammertime Day 4: Design This is part 4 of 30 in the Hammertime Sequence. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. A central theme of Hammertime is that rationalists interact with reality first. We try for at least five minutes. We build habits to solve th...
https://www.lesswrong.com/posts/HHjn3r8n7bJp6Q5HE/hammertime-day-4-design
# AI Safety Research Camp - Project Proposal _→ Give your feedback on our plans below or in the [google doc](https://docs.google.com/document/d/1QlKruAZuuc5ay0ieuzW5j5Q100qNuGNCTHgC4bIEXsg/edit?ts=5a651a00#)_ _→ [Apply](https://do...
https://www.lesswrong.com/posts/KgFrtaajjfSnBSZoH/ai-safety-research-camp-project-proposal
# The Monthly Newsletter as Thinking Tool At the start of 2014 I accidentally started writing a monthly newsletter for a handful of close friends, and I've mostly kept it up since then. It was originally supposed to be a kind of commitment device for myself, a way of holding myself accountable to my 2014 goals by commit...
https://www.lesswrong.com/posts/TyswYDeub7mxMXCgi/the-monthly-newsletter-as-thinking-tool
# Hammertime Day 5: Comfort Zone Expansion This is part 5 of 30 in the Hammertime Sequence. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. It would be hypocritical of me to write a post of [my usual form](https://radimentary.wordpress.com/2018/01/28/write/) to tea...
https://www.lesswrong.com/posts/c5wFM7KJLtuMnLFsH/hammertime-day-5-comfort-zone-expansion
# Models of moderation _\[Author's note: I will move this into meta in a week, but this is a bit more important than the usual meta-announcements, so I will have it in community for a bit.\]_ **Meta** This post is trying to achieve roughly four things: 1\. Be a future reference for some hopefully useful models abou...
https://www.lesswrong.com/posts/bGpRGnhparqXm5GL7/models-of-moderation
# Update from Vancouver I've wanted to write a more detailed update from Vancouver. The Post-Mortem on the Craft and the Community posted here a few months ago seems to have inspired Vancouver to get organized more than other rationality communities, and is kickstarting an unprecedented reinvigoration of p...
https://www.lesswrong.com/posts/kjXvvgobBdmwLxmJJ/update-from-vancouver
# How I see knowledge aggregation After reading the Arbital postmortem, I remembered some old ideas regarding a tool for claim and prediction aggregation. First, the tool would have the basic features. There would be a list of claims. Each claim is a clear and concise statement that could be true or false, perhaps w...
https://www.lesswrong.com/posts/HKc4fuTnbLzw5ebc8/how-i-see-knowledge-aggregation
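The basic structure the excerpt describes — a list of claims, each a true-or-false statement with attached probability estimates — might be sketched as follows. All names here are illustrative assumptions, not the post's design:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """A clear, concise statement that could be true or false,
    with user probability estimates attached. Hypothetical sketch:
    field names and aggregation rule are my own, not from the post."""
    text: str
    estimates: list = field(default_factory=list)  # one probability per user

    def aggregate(self):
        """Naive mean of user estimates; a real tool might weight
        estimators by track record instead."""
        if not self.estimates:
            return None
        return sum(self.estimates) / len(self.estimates)

claim = Claim("AGI will arrive before 2100")
claim.estimates += [0.6, 0.8, 0.7]
print(claim.aggregate())  # -> 0.7
```

Even this minimal version makes the disagreement structure explicit: the spread of estimates on a claim is itself useful information, not just the aggregate.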
# Hammertime Day 6: Mantras This is part 6 of 30 in the Hammertime Sequence. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. I’d like to demarcate the line between the two natural halves of Hammertime (fast and interactive vs. slow and introspective) with an experi...
https://www.lesswrong.com/posts/uhqax7dL8edMpqJWp/hammertime-day-6-mantras
# Factorio, Accelerando, Empathizing with Empires and Moderate Takeoffs I started planning this post before [Cousin It's post on a similar subject](https://www.lesserwrong.com/posts/98QAWTRyDnHMfr6SL/biological-humans-and-the-rising-tide-of-ai). This is a bit of a more poetic take on moderate, peaceful AI takeoffs. S...
https://www.lesswrong.com/posts/RHurATLtM7S5JWe9v/factorio-accelerando-empathizing-with-empires-and-moderate
# Explicit Expectations when Teaching _Epistemic Status: Casual thoughts on small ways to improve teaching flows._ In the lectures for my current classes, things don't always make sense. Sometimes the not making sense comes from a feeling of incompleteness. The definition seems incomplete. I notice that I don't have ...
https://www.lesswrong.com/posts/rYd2cDK93KH942aXs/explicit-expectations-when-teaching