# Hammertime Day 7: Aversion Factoring This is part 7 of 30 in the Hammertime Sequence. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. As we move into the introspective segment of Hammertime, I want to frame our approach around the set of (unoriginal) ideas I laid...
https://www.lesswrong.com/posts/sZCqpRHQLHSmFNWxK/hammertime-day-7-aversion-factoring
# Hammertime Day 8: Sunk Cost Faith _\[Author’s note: I will be moving future Hammertimes to my personal page to avoid cluttering the frontpage. This one is sufficiently short and probably controversial to leave here.\]_ This is part 8 of 30 in the Hammertime Sequence. Click [here](https://radimentary.wordpress.com/2...
https://www.lesswrong.com/posts/o8EWXypYGwJ6JYpBT/hammertime-day-8-sunk-cost-faith
# Beware Social Coping Strategies Worlds: The world is a huge, complex, scary place. Winter used to freeze us dead, and today lives are still destroyed by natural disasters like tsunamis and forest fires. Plagues used to destroy our communities, and people still die from disease every day. We want more ...
https://www.lesswrong.com/posts/QJRo5HZp9ZdzoK7x3/beware-social-coping-strategies
# European Community Weekend 2018 Announcement We are excited to announce this year's European LessWrong Community Weekend. For the fifth time, rationalists from all over Europe (and some from outside Europe) are **gathering in Berlin** to socialize, have fun, exchange knowledge and skills, and have interesting di...
https://www.lesswrong.com/posts/WZD47CdEG4MgduLnM/european-community-weekend-2018-announcement
# Fun Theory in EverQuest _This is an excerpt from my personal blog/newsletter that a [couple of people said was worth sharing](https://www.lesserwrong.com/posts/TyswYDeub7mxMXCgi/the-monthly-newsletter-as-thinking-tool#SvKfQyZjwztusd9C3). It was originally written for a handful of close friends, and I have not edited...
https://www.lesswrong.com/posts/aHGG8Wyjo7Sa3ayjH/fun-theory-in-everquest
# "Cheat to Win": Engineering Positive Social Feedback This post outlines a very simple strategy that's been working for me lately. It may be obvious to some, but it only clicked for me recently. Positive social stimulation is fun for humans, right? We like to be liked. It makes us cheerful. We're motivated to do thi...
https://www.lesswrong.com/posts/dgFcJtHaYfaoByAK9/cheat-to-win-engineering-positive-social-feedback
# Introduction to Noematology _NB:_ _[Originally posted](https://mapandterritory.org/introduction-to-noematology-fac7ae7d805d)_ _on_ _[Map and Territory](https://mapandterritory.org/)_ _on Medium, so some of the internal series links go there._ [Last time](https://mapandterritory.org/form-and-feedback-in-phenomenolog...
https://www.lesswrong.com/posts/wvAEHzE55K7vfsXWz/introduction-to-noematology
# Speed improvements and changes to data querying I changed the way we query for data and the way the page gets updated, so that we now basically never query for data without the user reloading the page. This is a pretty major change to the way the page updates, and I expect some bugs to show up somewhere. This means...
https://www.lesswrong.com/posts/tTq4LA25NWdC3tTrk/speed-improvements-and-changes-to-data-querying
# Pseudo-Rationality Pseudo-rationality is the social performance of rationality, as opposed to actual rationality. Here are some examples: * Being overly skeptical to demonstrate how skeptical you are * Always fighting for the truth, even when you’re burning more social capital than the argument is worth * ...
https://www.lesswrong.com/posts/9g8gFbR6Pu4C7E4MD/pseudo-rationality
# Crypto autopsy reply (X-posted from my FB post: https://www.facebook.com/alexei.andreev.3/posts/1403550339754401) Reply to Eliezer's post on crypto: [https://www.facebook.com/yudkowsky/posts/10156147605134228](https://www.facebook.com/yudkowsky/posts/10156147605134228) Which itself is a response to Scott Alexande...
https://www.lesswrong.com/posts/aAZDrsKYHWBwJGd5h/crypto-autopsy-reply
# UDT as a Nash Equilibrium I realized today that UDT doesn't really need the assumption that other players use UDT. In any game where all players have the same utility function, "everyone using UDT" is a Nash equilibrium that gives everyone their highest possible expected utility. So you can just use it unilaterally....
https://www.lesswrong.com/posts/6HmaGnXd4EJfpfait/udt-as-a-nash-equilibrium
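The equilibrium claim in the excerpt can be sanity-checked numerically. A minimal sketch, using a hypothetical 2x2 common-payoff game (payoffs are invented for illustration, not from the post): when all players share one utility function, the profile maximizing that utility admits no profitable unilateral deviation, so it is a Nash equilibrium.

```python
# Toy check: in a common-payoff game, the joint action profile that
# maximizes the shared utility is a Nash equilibrium, since no
# unilateral deviation can exceed a global maximum.
# (Hypothetical 2x2 payoffs, purely illustrative.)

U = {
    (0, 0): 3, (0, 1): 0,
    (1, 0): 0, (1, 1): 2,
}

best = max(U, key=U.get)  # the profile UDT-style reasoning coordinates on

def is_nash(profile):
    """True if no player gains by unilaterally switching actions."""
    for player in (0, 1):
        for alt in (0, 1):
            deviated = list(profile)
            deviated[player] = alt
            if U[tuple(deviated)] > U[profile]:
                return False
    return True

print(best, is_nash(best))  # (0, 0) True
```

Note that this game also has a worse equilibrium at (1, 1); the point of the excerpt is that the shared-utility-maximizing profile is always among the equilibria, so a player can adopt it unilaterally.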
# Hammertime Day 9: Time Calibration This is part 9 of 30 in the Hammertime Sequence. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. I’ve been thinking about whether or not regular betting, prediction markets, and being well-calibrated is actually useful, and if s...
https://www.lesswrong.com/posts/TQq7AgR7JCpXDBJoM/hammertime-day-9-time-calibration
# Two Coordination Styles In [game theory](https://en.m.wikipedia.org/wiki/Game_theory), assumptions of rationality imply that any "solution" of a game must be an equilibrium.* However, most games have many equilibria, and realistic agents don't always know which equilibrium they are in. Certain equilibrium strategies...
https://www.lesswrong.com/posts/gR9AWLQ7yL5ewEB2M/two-coordination-styles
# Hammertime Day 10: Murphyjitsu This is part 10 of 30 in the Hammertime Sequence. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. > Like, so pessimistic that _reality_ actually comes out better than you expected around as often and as much as it comes out worse. I...
https://www.lesswrong.com/posts/N47M3JiHveHfwdbFg/hammertime-day-10-murphyjitsu
# Hammertime Intermission and Open Thread This post marks the end of the first cycle of Hammertime. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for intro. Hammertime will return on Monday 2/19. I want to close off the first cycle with some thoughts, and designate a place for...
https://www.lesswrong.com/posts/hSMCZ9srzXuntA887/hammertime-intermission-and-open-thread
# Monthly Meta: Referring is Underrated _For the next n months I'll be picking an issue within the community that I think is especially important to highlight. LW2.0 is not just a new site, but also an opportunity to do things differently and to ask ourselves how they could be better._ We all know that rationality is...
https://www.lesswrong.com/posts/MZMviCWvhdxD24TZb/monthly-meta-referring-is-underrated
# Experiment: Separate High/Low Productive Hours Right now, I'm fortunate to be contracting at a tiny organization with extremely high trust. This meant I was able to pitch an offbeat idea: I could bill for two types of hours: * High productivity hours. * Low/medium productivity hours. I’d get paid roughly twi...
https://www.lesswrong.com/posts/hzuZNb8fC84MPx8Ka/experiment-separate-high-low-productive-hours
# What Are Meetups Actually Trying to Accomplish? _Disclaimer 1: I take responsibility for all opinions expressed here and note that they do not necessarily reflect the views of the people I interviewed or of my various employers._ _Disclaimer 2: In this post I sometimes make claims on behalf of LW2. I am under the i...
https://www.lesswrong.com/posts/bDnFhJBcLQvCY3vJW/what-are-meetups-actually-trying-to-accomplish
# Two kinds of Agency Original post: [http://bearlamp.com.au/two-kinds-of-agency/](http://bearlamp.com.au/two-kinds-of-agency/) * * * Let's talk about [agency](https://en.wikipedia.org/wiki/Agency_(philosophy)).  This week I read [Transform Your Self](https://www.goodreads.com/book/show/163279.Transform_Your_Self), ...
https://www.lesswrong.com/posts/7W7nfdRWWerffLgLg/two-kinds-of-agency
# Mental TAPs (Prereq: Knowledge of Trigger Action Plans (TAPs). alkjash's [here](https://www.lesserwrong.com/posts/ESnzpoCJrAfwAzpMB/hammertime-day-3-taps) & LifelongLeaner's [here](https://www.lesserwrong.com/posts/vE7Z2JTDo5BHsCp4T/instrumental-rationality-4-2-creating-habits)) I converged (sort'of) upon the idea ...
https://www.lesswrong.com/posts/wDrf5wc3zQmZNYEoo/mental-taps
# Write a Thousand Roads to Rome Epistemological Status: Pretty sure I'm on to something here, also very sure I'm restating the obvious, utterly confident that restating the obvious is the point. Sometimes a piece of writing gets two very different responses. Half the commenters say something like "this is really...
https://www.lesswrong.com/posts/Q924oPJzK92FifuFg/write-a-thousand-roads-to-rome
# Science like a chef Alice: Hey honey, I made pasta with tomato sauce! Bob: Great, let's eat! Bob: Mmmmm, that's fantastic! It's even better than last time. It's got a sweeter, deeper flavor, which I like. Alice: Thanks. Last time I only sautéed onions and garlic before adding the tomato puree, but this time I add...
https://www.lesswrong.com/posts/AtEJEQzBnGE6uKJA5/science-like-a-chef
# Knowledge is Freedom \[Epistemic Status: Type Error\] In this post, I try to build up an ontology around the following definition of knowledge: To know something is to have the set of policies available to you closed under conditionals dependent on that thing. You are an agent G, and you are interacting with an ...
https://www.lesswrong.com/posts/b3Bt9Cz4hEtR26ANX/knowledge-is-freedom
# Stable Pointers to Value II: Environmental Goals _[Cross-posted.](https://agentfoundations.org/item?id=1762)_ In [Stable Pointers to Value](https://www.lesswrong.com/posts/5bd75cc58225bf06703754b3/stable-pointers-to-value-an-agent-embedded-in-its-own-utility-function), I discussed various ways in which we can try t...
https://www.lesswrong.com/posts/wujPGixayiZSMYfm6/stable-pointers-to-value-ii-environmental-goals-1
# "Backchaining" in Strategy Recently, some have been discussing "backchaining" as a strategic planning technique. In brief, this technique involves selecting a target outcome, then chaining backwards from there to determine what actions you should take; in other words, rather than starting with your current position ...
https://www.lesswrong.com/posts/DwoPGM8ytBCXrZpM7/backchaining-in-strategy
# Antiantinatalism [Cross posted from Putanumonit.com](https://putanumonit.com/2018/02/08/antiantinatalism/) * * * Does life suck? I think it does, somewhat. Others would say it sucks even worse than that. David Benatar thinks that life sucks so profoundly that bestowing it on anyone, like your child, is a grave cri...
https://www.lesswrong.com/posts/WGrHd64wXiLSwZQBv/antiantinatalism
# Status: Map and Territory I’m here to add another angle to the discussion on social vs. objective truth ([example](http://benjaminrosshoffman.com/actors-and-scribes-words-and-deeds/)). Here’s an analogy for reasoning about status games and why people react so strongly against improper status moves: Society is a col...
https://www.lesswrong.com/posts/BFyCM7fgCsFYF69Hf/status-map-and-territory
# Eternal, and Hearthstone Economy versus Magic Economy The game [Eternal](https://www.direwolfdigital.com/eternal/register/?ref=afa80586-14f2-4bbb-a266-fd9825c8a6e8) (that is my referral link), created by Magic professionals including lead designer Patrick Chapin, is modern Magic: The Gathering, with some simplificat...
https://www.lesswrong.com/posts/29Gocz7mjSocgGKav/eternal-and-hearthstone-economy-versus-magic-economy
# The Signal and the Corrective This is one of the most important articles I've read in awhile. It makes a generalization from something Scott pointed out in [this SSC post](http://slatestarcodex.com/2016/11/10/book-review-house-of-god/). Here are a few excerpts, but it's worth clicking through and reading the whole ...
https://www.lesswrong.com/posts/2ahPGSupYDZYHezvB/the-signal-and-the-corrective
# "Just Suffer Until It Passes" I started a "universal problemsolving" journal a few months ago — whenever anything goes wrong, I write down (1) what happened, (2) the universal problems / root causes that might underlie that problem, and (3) generalized countermeasures for that situation in the future. Many of these...
https://www.lesswrong.com/posts/i8uQpgxH3CBXisni3/just-suffer-until-it-passes
# Some conceptual highlights from “Disjunctive Scenarios of Catastrophic AI Risk” My forthcoming paper, “[Disjunctive Scenarios of Catastrophic AI Risk](https://kajsotala.fi/assets/2018/12/Disjunctivescenarios.pdf)”, attempts to introduce a number of considerations to the analysis of potential risks from Artificial Ge...
https://www.lesswrong.com/posts/8uJ3n3hu8pLXC4YNE/some-conceptual-highlights-from-disjunctive-scenarios-of-1
# Rationality Feed: Last Month's Best Posts I write a daily rational feed. I write up summaries/teasers for the previous day's article that I found interesting and/or enjoyable. I follow most rationalist blogs as well as LW2.0 and the EA Forum on RSS. The daily feed is posted in the [SSC Discord](https://discord.gg/nJ...
https://www.lesswrong.com/posts/tebApa5sB2wFuSSDK/rationality-feed-last-month-s-best-posts
# Open thread, February 2018 Apologies if it's already there, but I just can't find the open thread. In case it's not there, open thread for February 2018! Hurray! In case it is, would it be difficult to check the title string for "open thread" and show a link to the current month's open thread, ask if that's what t...
https://www.lesswrong.com/posts/gucCba6Wu6CCNc7mg/open-thread-february-2018
# A Proper Scoring Rule for Confidence Intervals You probably already know that you can incentivise honest reporting of probabilities using a proper scoring rule like log score, but did you know that you can also incentivize honest reporting of confidence intervals? To incentivize reporting of a...
https://www.lesswrong.com/posts/sRfZxF2sjrRGdgyb8/a-proper-scoring-rule-for-confidence-intervals
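The excerpt is cut off mid-formula. For orientation, a widely used strictly proper score for a central (1 - alpha) confidence interval is the interval score: pay for the interval's width, plus a miss penalty scaled by 2/alpha. The sketch below shows that standard rule, which may differ from the post's exact construction:

```python
def interval_score(lower, upper, x, alpha=0.1):
    """Interval score for a reported central (1 - alpha) interval
    [lower, upper] when x is realized. Lower is better; truthfully
    reporting your central interval minimizes expected score."""
    score = upper - lower                      # pay for width
    if x < lower:
        score += (2 / alpha) * (lower - x)     # penalty for missing low
    elif x > upper:
        score += (2 / alpha) * (x - upper)     # penalty for missing high
    return score

print(interval_score(0, 10, 5))   # 10 (covered: pay only the width)
print(interval_score(0, 10, 12))  # 50 (missed by 2; at alpha=0.1 that costs 20 * 2)
```

The 2/alpha factor is what makes the rule proper: widening the interval always costs, but missing costs more in exact proportion to how confident the interval claims to be.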
# Hufflepuff Cynicism _**Summary:**_ In response to catching a glimpse of [the dark world](http://mindingourway.com/see-the-dark-world/), and especially of the [extent of human hypocrisy](https://sinceriously.fyi/social-reality/) with respect to the dark world, one might take a dim view of one's fellow humans. I descr...
https://www.lesswrong.com/posts/ufBXmcpxC9kLDeELz/hufflepuff-cynicism
# Spamming Micro-Intentions to Generate Willpower The purpose of this post is to outline a potentially general trick for getting yourself out of a rut or motivating yourself to do a small task that you don't feel like doing. Part (i) describes the source of the idea. Part (ii) outlines an example for testing the techn...
https://www.lesswrong.com/posts/FQAgKe5Yo9jK4ytvN/spamming-micro-intentions-to-generate-willpower
# Rationalist Lent [It's that time of year again](http://lesswrong.com/lw/gnp/rationalist_lent/). Pick something to give up for 40 days as an experiment / [comfort zone expansion](https://www.lesserwrong.com/posts/c5wFM7KJLtuMnLFsH/hammertime-day-5-comfort-zone-expansion). Post about it here. Good luck and have fun! ...
https://www.lesswrong.com/posts/KZ47RTGvgpH3PhiFR/rationalist-lent
# Hufflepuff Cynicism on Crocker's Rule Yesterday, I mainly talked about [Hufflepuff Cynicism](https://www.lesserwrong.com/posts/ufBXmcpxC9kLDeELz/hufflepuff-cynicism) from the cynic's end. However, there's a lot to be said about the receiving end. Hufflepuff cynicism can come off as a very patronizing strategy. Is th...
https://www.lesswrong.com/posts/W5veKkNugWZhG9BTG/hufflepuff-cynicism-on-crocker-s-rule
# The Principled Intelligence Hypothesis I have been reading the thought provoking _[Elephant in the Brain](http://elephantinthebrain.com/)_, and will probably have more to say on it later. But if I understand correctly, a dominant theory of how humans came to be so smart is that they have been in an endless cat and m...
https://www.lesswrong.com/posts/Tusi9getaQ2o6kZsb/the-principled-intelligence-hypothesis
# Active vs Passive Distraction When life gets difficult, it can often become tempting to distract yourself from your own thoughts and feelings. Loved ones may scold us for this, telling us that we should confront our feelings, because distracting ourselves doesn't accomplish anything. However, distraction is an impor...
https://www.lesswrong.com/posts/6i7nZP2fzmdHDGJqm/active-vs-passive-distraction
# Two Types of Updatelessness _[Cross-posted.](https://agentfoundations.org/item?id=1765)_ Just a small note which I’m not sure has been mentioned anywhere else: It seems like there are two different classes of “updateless reasoning”. In problems like Agent Simulates Predictor, switching to updateless reasoning is ...
https://www.lesswrong.com/posts/pneKTZG9KqnSe2RdQ/two-types-of-updatelessness
# Toward a New Technical Explanation of Technical Explanation A New Framework: _(Thanks to Valentine for a discussion leading to this post, and thanks to CFAR for running the CFAR-MIRI cross-fertilization workshop. Val provided feedback on a version of this post. Warning: fairly long.)_ Eliezer's _A [...
https://www.lesswrong.com/posts/tKwJQbo6SfWF2ifKh/toward-a-new-technical-explanation-of-technical-explanation
# Clarifying the Postmodernism Debate With Skeptical Modernism One of the greatest challenges with attempting to discuss post-modernism is figuring out exactly what claims are being made. For a start, many of the figures typically associated with post-modernism didn't label themselves as post-modernist, so it isn't as...
https://www.lesswrong.com/posts/b8PMegKmjLoWtAH7N/clarifying-the-postmodernism-debate-with-skeptical-modernism
# Circling Circling is a practice, much like meditation is a practice. There are many forms of it (again, like there are many forms of meditation). There are even life philosophies built around it. There are lots of intellectual, heady discussions of its theoretical underpinnings, often centered in Ken Wilber's Integ...
https://www.lesswrong.com/posts/aFyWFwGWBsP5DZbHF/circling
# Replacing expensive costly signals I feel like there is a general problem where people signal something using some extremely socially destructive method, and [we can conceive of more socially efficient ways to send the same signal](https://meteuphoric.wordpress.com/2011/07/20/cheap-signaling/), but trying out altern...
https://www.lesswrong.com/posts/7tM4JggkDsBTHxsfu/replacing-expensive-costly-signals
# In Defence of Conflict Theory Scott Alexander recently wrote [an interesting blog post](http://slatestarcodex.com/2018/01/24/conflict-vs-mistake/) on the differences between approaches to politics based on conflict theory and mistake theory. Here's a rough summary, in his words: > Mistake theorists treat politics a...
https://www.lesswrong.com/posts/no5devJYimRt8CAtt/in-defence-of-conflict-theory
# Missives from China Driving in China: You’ve heard of the Iterated Prisoner’s Dilemma. But have you heard of Iterated Chicken? Heat Lag: Day 4: Temperature is experienced differently here. People wear wool coats over turtlenecks in 70 degree weather. Counted two other people wearing T-shir...
https://www.lesswrong.com/posts/5cABZZmkCGpnfbWeE/missives-from-china
# [Meta] New moderation tools and moderation guidelines \[I will move this into meta in a few days, but this seemed important enough to have around on the frontpage for a bit\] *Here is a short post with some of the moderation changes we are implementing. Ray, Ben, and I are working on some more posts explaining some...
https://www.lesswrong.com/posts/adk5xv5Q4hjvpEhhh/meta-new-moderation-tools-and-moderation-guidelines
# A Simple Motto Rationality is _believing true things and making good choices_. This phrasing isn't quite as precise as it could be, but it communicates the gist of both epistemic and instrumental rationality in very simple language. I use this wording when communicating what rationality is to non-rationalists, and ...
https://www.lesswrong.com/posts/q2PjwfrQcZrncS8JZ/a-simple-motto
# Whose reasoning can you rely on when your own is faulty? None of us are perfect reasoners. None of us have unlimited information. Sometimes other people are more correct than we are. This is an obvious thing we all _know_ but may not _practice_ . Below are some concrete questions you can think about that come at th...
https://www.lesswrong.com/posts/c6QoLM4cyFPXx3v6c/whose-reasoning-can-you-rely-on-when-your-own-is-faulty
# An alternative way to browse LessWrong 2.0 This is something I've been tinkering with for a while, but I think it's now complete enough to be generally useful. It's an alternative frontend for LessWrong 2.0, using the GraphQL API. [![](//i.imgur.com/AVjeNoH.png)](https://www.greaterwrong.com) Features: * Fast, ...
https://www.lesswrong.com/posts/66DXhQJyPEJNsXgfw/an-alternative-way-to-browse-lesswrong-2-0
# Formally Stating the AI Alignment Problem _NB:_ _[Originally posted](https://mapandterritory.org/introduction-to-noematology-fac7ae7d805d)_ _on_ _[Map and Territory](https://mapandterritory.org/)_ _on Medium, so some of the internal series links go there._ The development of smarter-than-human artificial intelligen...
https://www.lesswrong.com/posts/7dvDgqvqqziSKweRs/formally-stating-the-ai-alignment-problem-1
# Bug Hunt 2 This is part 11 of 30 of Hammertime. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. CFAR has an underlying mantra “adjust your seat”: systematically modify every technique and class to fit your personal situation. It’s common sense nowadays that diffe...
https://www.lesswrong.com/posts/7LmmZaqhL4TWsQdPN/bug-hunt-2
# Why we want unbiased learning processes **tl;dr**: if an agent has a biased learning process, it may choose actions that are worse (with certainty) for every possible reward function it could be learning. An agent learns its own reward function if there is a set R of possible reward functions, and there is a learn...
https://www.lesswrong.com/posts/KT4Nau2XhuNejkXQR/why-we-want-unbiased-learning-processes
# Sex, Lies, and Dexamethasone **CW**: Here are some terms that appear in this post and give you some flavor of what it's about: _clitoroplasty, childhood sexual abuse, autogynephilia, transvestite, institutional review board._ ![](https://putanumonit.files.wordpress.com/2018/02/galileos-middle-finger-cover.jpg) Thi...
https://www.lesswrong.com/posts/HPdz82fbTf6Nj6QNT/sex-lies-and-dexamethasone
# Shit rationalists say - 2018 In 2012 we had a thread titled [Shit rationalists say](http://lesswrong.com/lw/9ki/shit_rationalists_say/) that led to the fun video [Shit rationalists say](https://www.youtube.com/watch?v=jlT3MeCzVao). Given that the video is a lot of fun to watch, how about starting a new list that's...
https://www.lesswrong.com/posts/kNDLkcqLMcLdrQNzG/shit-rationalists-say-2018
# Are you the rider or the elephant? Some [recent](https://www.lesserwrong.com/posts/aFyWFwGWBsP5DZbHF/circling) [threads](https://www.lesserwrong.com/posts/tMhEv28KJYWsu6Wdo/kensh) seem to me to be pointing at a really fundamental tension that I don't know how to articulate in full. But here's a chunk of it: When yo...
https://www.lesswrong.com/posts/NJL5FYe6KkRtjekeG/are-you-the-rider-or-the-elephant
# Yoda Timers 2 This is part 12 of 30 of Hammertime. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. > Anyone who can muster their willpower for thirty seconds, can make a _desperate_ effort to lift more weight than they usually could.  But what if the weight that ...
https://www.lesswrong.com/posts/nHMu69unKxMaS2D5b/yoda-timers-2
# Don't Condition on no Catastrophes I often hear people say things like "By what date do you assign 50% chance to reaching AGI, conditioned on no other form of civilizational collapse happening first?" The purpose of this post is to make this question make you cringe. I think that most people mentally replace the co...
https://www.lesswrong.com/posts/8NBbq7xhyDXoDWM8e/don-t-condition-on-no-catastrophes
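The cringe the post aims for has a simple probabilistic shape: if collapse is likelier in worlds where AGI arrives late, then conditioning on "no collapse first" pulls the AGI date distribution earlier, so the conditional and unconditional questions get different answers. A toy illustration with made-up numbers:

```python
# Hypothetical scenarios: (AGI year, P(scenario), P(no collapse before AGI)).
# All numbers are invented for illustration.
scenarios = [
    (2030, 0.3, 0.95),
    (2050, 0.4, 0.80),
    (2080, 0.3, 0.50),
]

# Unconditional expected AGI year.
e_uncond = sum(year * p for year, p, _ in scenarios)

# Expected AGI year conditioned on no prior collapse (renormalized).
z = sum(p * survive for _, p, survive in scenarios)
e_cond = sum(year * p * survive / z for year, p, survive in scenarios)

print(round(e_uncond, 1), round(e_cond, 1))  # the conditional answer is earlier
```

Mentally replacing the conditional question with the unconditional one, as the post suggests people do, silently discards exactly this shift.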
# Robustness to Scale I want to quickly draw attention to a concept in AI alignment: Robustness to Scale. Briefly, you want your proposal for an AI to be robust (or at least fail gracefully) to changes in its level of capabilities. I discuss three different types of robustness to scale: robustness to scaling up, robus...
https://www.lesswrong.com/posts/bBdfbWfWxHN9Chjcq/robustness-to-scale
# TAPs 2 This is part 13 of 30 of Hammertime. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. > “Omit needless words!” cries the author on page 23, and into that imperative Will Strunk really put his heart and soul. In the days when I was sitting in his class, he o...
https://www.lesswrong.com/posts/mvHiY9hsSXacmZuoG/taps-2
# The Three Stages Of Model Development **TLDR:** First you go with your gut, then you get a logical model, then you improve that model. Trusting your logical model over your gut before it gets good enough is a very common way to believe wrong things. _\[epistemic status: probably approximately true, with possible pa...
https://www.lesswrong.com/posts/asq9esJzoTuja7qhc/the-three-stages-of-model-development
# The Intelligent Social Web **Epistemic status:** [Fake Framework](http://lesswrong.com/lw/p80/in_praise_of_fake_frameworks/) When you walk into an [improv](https://en.wikipedia.org/wiki/Improvisational_theatre) scene, you usually have no idea what role you’re playing. All you have is some initial prompt — somet...
https://www.lesswrong.com/posts/AqbWna2S85pFTsHH4/the-intelligent-social-web
# The map has gears. They don't always turn. _Follow up to [Toward a New Technical Explanation of Technical Explanation](https://www.lesserwrong.com/posts/tKwJQbo6SfWF2ifKh/toward-a-new-technical-explanation-of-technical-explanation)._ [Confusion is in the map](https://wiki.lesswrong.com/wiki/Reality_is_normal), not ...
https://www.lesswrong.com/posts/3up8XBeGGHf77sNR4/the-map-has-gears-they-don-t-always-turn
# Explanation vs Rationalization _Follow-up to: [Toward a New Technical Explanation of Technical Explanation](https://www.lesserwrong.com/posts/tKwJQbo6SfWF2ifKh/toward-a-new-technical-explanation-of-technical-explanation), [The Bottom Line](https://www.lesserwrong.com/posts/34XxbRFe54FycoCDw/the-bottom-line)._ In [T...
https://www.lesswrong.com/posts/JtwbGiEz7QWdff5gk/explanation-vs-rationalization
# Design 2 This is part 14 of 30 of Hammertime. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. > _I am a finger pointing to the moon. Don’t look at me; look at the moon._ Rationalists drone on and on about how our [fake](http://lesswrong.com/lw/p80/in_praise_of_f...
https://www.lesswrong.com/posts/hpKnepGWPraedLiDc/design-2
# June 2012: 0/33 Turing Award winners predict computers beating humans at go within next 10 years. In June 2012, the Association for Computing Machinery—a professional society of computer scientists, best known for hosting the prestigious ACM Turing Award, commonly referred to as the "Nobel Prize of Computer Science"...
https://www.lesswrong.com/posts/cEhv4yd6GYgz66LgK/june-2012-0-33-turing-award-winners-predict-computers
# Two types of mathematician This is an expansion of a [linkdump I made a while ago](https://drossbucket.wordpress.com/2017/02/14/two-types-of-mathematician-linkdump/) with examples of mathematicians splitting other mathematicians into two groups, which may be of wider interest in the context of the recent [elephant/r...
https://www.lesswrong.com/posts/5QnvHZpy4pGgCo3Pp/two-types-of-mathematician
# Mythic Mode **Follow-up to:** [The Intelligent Social Web](https://www.lesserwrong.com/posts/AqbWna2S85pFTsHH4/the-intelligent-social-web) **Related to:** [Fake Frameworks](http://lesswrong.com/lw/p80/in_praise_of_fake_frameworks/) [Yesterday](https://www.lesserwrong.com/posts/AqbWna2S85pFTsHH4/the-intelligent-soc...
https://www.lesswrong.com/posts/HnWN6v4wHQwmYQCLX/mythic-mode
# On Building Theories of History ![](https://cdn-images-1.medium.com/max/1200/1*8lGFccuQr-88oUruay3jEg.jpeg) _This is an excerpt from the draft of_ _[my upcoming book](http://samoburja.com/gft)_ _on great founder theory. It was originally published on SamoBurja.com. You can_ _[access the original here.](http...
https://www.lesswrong.com/posts/4cm6rtNPgQ67LtQ2o/on-building-theories-of-history
# CoZE 2 This is part 15 of 30 of Hammertime. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. Another of CFAR’s running themes is: Try Things! > When you’re considering adopting new habits or ideas, there’s no better way to gather data than _actually trying \[…\] ...
https://www.lesswrong.com/posts/G4mzC7zFW6WMMkmSS/coze-2
# What we talk about when we talk about maximising utility tl;dr: “Utility” is used on LW to mean what people want, but that’s not what's morally relevant. Utilitarians aren't trying to maximise this sort of utility, but rather "well-being". Epistemic status: probably obvious to some, but this particular framing wasn...
https://www.lesswrong.com/posts/acaPdSfwNiG9igJ7u/what-we-talk-about-when-we-talk-about-maximising-utility
# Lessons from the Cold War on Information Hazards: Why Internal Communication is Critical Due to their tremendous power, nuclear weapons were a subject of intense secrecy and taboo in the US government following WW2. After their first uses, President Truman came to consider atomic weapons a terror weapon of last reso...
https://www.lesswrong.com/posts/k8qLzbHTubMjCHL2E/lessons-from-the-cold-war-on-information-hazards-why
# Meta-tations on Moderation: Towards Public Archipelago The recent [moderation tools announcement](https://www.lesserwrong.com/posts/adk5xv5Q4hjvpEhhh/meta-new-moderation-tools-and-moderation-guidelines) represents a fairly major shift in how the site admins are approaching LessWrong. Several people noted important c...
https://www.lesswrong.com/posts/5Ym7DN6h877eyaCnT/meta-tations-on-moderation-towards-public-archipelago
# Arguments about fast takeoff I expect "slow takeoff," which we could operationalize as the economy doubling over some 4 year interval before it doubles over any 1 year interval. Lots of people in the AI safety community have strongly opposing views, and it seems like a really important and intriguing disagreement. I...
https://www.lesswrong.com/posts/AfGmsjGPXN97kNp57/arguments-about-fast-takeoff
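The operationalization in the excerpt is concrete enough to check mechanically against a growth series. A minimal sketch (the GDP series here are synthetic, purely illustrative):

```python
def first_doubling_end(series, window):
    """First index t with series[t] >= 2 * series[t - window], else None."""
    for t in range(window, len(series)):
        if series[t] >= 2 * series[t - window]:
            return t
    return None

def slow_takeoff(series):
    """True if a 4-year doubling completes before any 1-year doubling."""
    four = first_doubling_end(series, 4)
    one = first_doubling_end(series, 1)
    if four is None:
        return False
    return one is None or four < one

# Steady ~19%/yr growth doubles over 4 years but never over 1 year:
print(slow_takeoff([1.19 ** t for t in range(12)]))  # True
# A sudden jump: no 4-year doubling strictly precedes the 1-year one:
print(slow_takeoff([1.0, 1.0, 1.0, 1.0, 3.0]))       # False
```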
# Three Miniatures This is part 16 of 30 of Hammertime. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. The sixth day always marks the boundary between concrete and abstract. Today, I mark the occasion with three essays on new techniques. These essays are short be...
https://www.lesswrong.com/posts/vEDLwBQWhQwxXzprR/three-miniatures
# Passing Troll Bridge In an earlier discussion about the Troll Bridge problem, [Abram mentioned](https://agentfoundations.org/item?id=1711) a Lobian proof that a logical induction based agent would converge to not crossing the bridge. It looked rather sketchy, though, and further discussion by Paul in the comments...
https://www.lesswrong.com/posts/5bd75cc58225bf0670375546/passing-troll-bridge
# Open-Source Monasticism \[I originally wrote most of the strings of text below for the online _Art & Monasticism Symposium_ in 2012 through **Transpositions**, a collaborative effort of students associated with the _Institute for Theology, Imagination, and the Arts_ at the University of St Andrews. What follows has ...
https://www.lesswrong.com/posts/bNACATPYmTz6Xhg6f/open-source-monasticism
# The abruptness of nuclear weapons Nuclear weapons seem like the marquee example of rapid technological change after crossing a critical threshold. Looking at the numbers, it seems to me like: * During WWII, and probably for several years after the war, the cost / TNT equivalent for manufacturing nuclear weapons ...
https://www.lesswrong.com/posts/y5eapqjYYku8Wt9wn/the-abruptness-of-nuclear-weapons
# Self-regulation of safety in AI research In many industries, but especially those with a potentially adversarial relationship to society like advertising and arms, self regulatory organizations (SROs) exist to provide voluntary regulation of actors in those industries to assure society of their good intentions. For ...
https://www.lesswrong.com/posts/Z8BWP6CEQuARcbNZu/self-regulation-of-safety-in-ai-research
# Will AI See Sudden Progress? Will advanced AI let some small group of people or AI systems take over the world? AI X-risk folks and others have accrued lots of arguments about this over the years, but I think this debate has been disappointing in terms of anyone changing anyone else’s mind, or much being resolved. ...
https://www.lesswrong.com/posts/AJtfNyBsum6ZzWxKR/will-ai-see-sudden-progress
# Walkthrough of 'Formalizing Convergent Instrumental Goals' Introduction ------------ I found _[Formalizing Convergent Instrumental Goals](https://intelligence.org/files/FormalizingConvergentGoals.pdf)_ (Benson-Tilsen and Soares) to be quite readable. I was surprised that the instrumental convergence hypothesis had ...
https://www.lesswrong.com/posts/KXMqckn9avvY4Zo9W/walkthrough-of-formalizing-convergent-instrumental-goals
# Experimental Open Threads [Meta-tations on Moderation: Towards Public Archipelago](https://www.lesserwrong.com/posts/5Ym7DN6h877eyaCnT/meta-tations-on-moderation-towards-public-archipelago) suggested that we should be running more experiments with how discussion is run and listed several examples in the post. What i...
https://www.lesswrong.com/posts/L4TKbfYsvxMJwNGTQ/experimental-open-threads
# Mapping the Archipelago I got excited reading [Meta-tations on Moderation: Towards Public Archipelago](https://www.lesserwrong.com/posts/5Ym7DN6h877eyaCnT/meta-tations-on-moderation-towards-public-archipelago) for two reasons: there's a clear island of the archipelago I've been mostly avoiding on LessWrong, and the ...
https://www.lesswrong.com/posts/XmA3u9c3AYFLmQ7tZ/mapping-the-archipelago
# Focusing This is part 17 of 30 of Hammertime. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. > You know how they say we only use 10 percent of our brains? I think we only use 10 percent of our hearts. ~ Owen Wilson It is with some trepidation that I venture in...
https://www.lesswrong.com/posts/noXTkjP45BAZqKJxM/focusing
# Inconvenience Is Qualitatively Bad My most complicated cookie recipe has four layers. Two of these require stovetop cooking, and the other two require the use of the oven separately before the nearly-complete cookies are baked in yet a third oven use, for a total of three different oven temperatures. I have to separ...
https://www.lesswrong.com/posts/xqJqZgowy5pPxNNst/inconvenience-is-qualitatively-bad
# Goal Factoring This is part 18 of 30 of Hammertime. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. Up until today, Hammertime focused on improving one’s ability to achieve one’s goals. The next two techniques, Goal Factoring and Internal Double Crux, are designe...
https://www.lesswrong.com/posts/Cu5C5KhkoXhrPMLFN/goal-factoring
# More on the Linear Utility Hypothesis and the Leverage Prior This is the followup to “[Against the Linear Utility Hypothesis and the Leverage Prior](https://www.lesserwrong.com/posts/8FRzErffqEW9gDCCW/against-the-linear-utility-hypothesis-and-the-leverage)” that I had promised in the comments on that post. Apologies...
https://www.lesswrong.com/posts/mBFqG3xjYazsPiZkH/more-on-the-linear-utility-hypothesis-and-the-leverage-prior
# Categories of Sacredness Previously: [Eternal, and Hearthstone Economy versus Magic Economy](https://thezvi.wordpress.com/2018/02/10/eternal-and-hearthstone-economy-versus-magic-economy/), [Out to Get You](https://thezvi.wordpress.com/2017/09/23/out-to-get-you/) On Lesser Wrong, JenniferRM gave a reply that is wort...
https://www.lesswrong.com/posts/sFCfH9hC5w7ALgM68/categories-of-sacredness
# Intuition should be applied at the lowest possible level Earlier today I lost a match at Prismata, a turn-based strategy game without RNG. When I analyzed the game, I discovered that changing one particular decision I had made on one turn from A to B caused me to win comfortably. A and B had seemed very close to me ...
https://www.lesswrong.com/posts/iPLGEyHvMJYXRu74g/intuition-should-be-applied-at-the-lowest-possible-level
# Set Up for Success: Insights from 'Naïve Set Theory' Foreword ======== [This book](http://smile.amazon.com/Naive-Set-Theory-Paul-Halmos/dp/1614271313/) has been reviewed [pretty](https://www.lesserwrong.com/posts/Ee8CZW7wzaNdCENYG/book-review-naive-set-theory-miri-course-list) [thoroughly](https://www.lesserwrong.c...
https://www.lesswrong.com/posts/WPtdQ3JnoRSci87Dz/set-up-for-success-insights-from-naive-set-theory
# TDT for Humans This is part 19 of 30 of Hammertime. Click [here](https://radimentary.wordpress.com/2018/01/29/hammertime-day-1-bug-hunt/) for the intro. As is Hammertime tradition, I’m making a slight change of plans right around the scheduled time for Planning. My excuse this time: Several commenters pointed out ...
https://www.lesswrong.com/posts/4Kye4kkKwn6DCahKy/tdt-for-humans
# 2/27/08 Update – Frontpage 3.0 We finally got around to a long-postponed update to the frontpage, where logged in users see content that is hopefully more relevant to them. Some notes: * Non logged in users should have a mostly unchanged experience * Logged in users now see a list of recommended sequences. The...
https://www.lesswrong.com/posts/ch6i9GzaG4LoZoj5X/2-27-08-update-frontpage-3-0
# Using the universal prior for logical uncertainty (retracted) (I'm not sure about any of this) (Yup! Vanessa Kowalski pointed out a crucial error, see her comment below. The idea about the universal prior picking up all computable regularities is _unproved_. I'm leaving the post as it was, but please read it as a h...
https://www.lesswrong.com/posts/D25HE2znAKFp5WKk9/using-the-universal-prior-for-logical-uncertainty-retracted
# Beyond algorithmic equivalence: algorithmic noise There is a '[no-free-lunch](https://arxiv.org/abs/1712.05812)' theorem in value learning; without assuming anything about an agent's rationality, you can't deduce anything about its reward, and vice versa. Here I'll investigate whether you can deduce more ...
https://www.lesswrong.com/posts/meG3Pai2YeRYcwPwS/beyond-algorithmic-equivalence-algorithmic-noise
# Beyond algorithmic equivalence: self-modelling In the [previous post](https://www.lesserwrong.com/posts/meG3Pai2YeRYcwPwS/beyond-algorithmic-equivalence-rewards), I discussed ways that the internal structure of an algorithm might, given the right [normative assumption](https://www.lesserwrong.com/posts/Fg83cD3M7dSpS...
https://www.lesswrong.com/posts/kmLP3bTnBhc22DnqY/beyond-algorithmic-equivalence-self-modelling
# Extended Quote on the Institution of Academia From the top-notch 80,000 Hours podcast, and their [recent interview](https://80000hours.org/2018/02/holden-karnofsky-open-philanthropy/#full-transcript) with Holden Karnofsky (Executive Director of the Open Philanthropy Project). What follows is a short analysis of wh...
https://www.lesswrong.com/posts/nXZi8efFArfk3u568/extended-quote-on-the-institution-of-academia
# Ambiguity Detection _Note: the most up-to-date version of this proposal can be found [here](https://www.lesswrong.com/posts/txGJZAPjraYEQfHq2/open-category-classification)._ Introduction ------------ > If I present you with five examples of burritos, I don’t want you to pursue the _simplest_ way of classifying bur...
https://www.lesswrong.com/posts/75s3362FgLrqxtzbE/ambiguity-detection
# Kidneys, trade, sacredness, and space travel To the trader mindset, sacred values are nothing but a confusion; if you don’t like the deal, you just haven’t been offered a high enough price. There’s something important the trader mindset can’t see. Its modus operandi is to take two different representations of value...
https://www.lesswrong.com/posts/Wy3jsNmp6k2e5nWGL/kidneys-trade-sacredness-and-space-travel