# Post-college: changing nature of friend interactions As a working professional a couple of years out of college, I’ve been noticing how interactions with my friends have changed since the beginning of college – and especially since graduation.  In college, my social groups typically formed around groups with common mee...
https://www.lesswrong.com/posts/hcTXEFphtysjqJd9t/post-college-changing-nature-of-friend-interactions
# Evaluating the feasibility of SI's plan (With Kaj Sotala) SI's current R&D plan seems to go as follows:  1\. Develop the perfect theory. 2\. Implement this as a safe, working, Artificial General Intelligence -- and do so before anyone else builds an AGI. The Singularity Institute is almost the only group workin...
https://www.lesswrong.com/posts/5evRqMmGxTKf98pvT/evaluating-the-feasibility-of-si-s-plan
# How to signal curiosity? At LessWrong we encourage people to be [curious](/lw/4ku/use_curiosity/). Curiosity causes people to ask questions, but sometimes those questions get misinterpreted as social challenges or rhetorical techniques, or maybe just regular questions that you don't have a "[burning itch](/lw/jz/the...
https://www.lesswrong.com/posts/qKjwd4zR9PvB9Fxfw/how-to-signal-curiosity
# A fungibility theorem _Restatement of: __[If you don't know the name of the game, just tell me what I mean to you](/lw/2xb/if_you_dont_know_the_name_of_the_game_just_tell/). __Alternative to: [Why you must maximize expected utility](/lw/fu1/why_you_must_maximize_expected_utility/). Related to: [Harsanyi's Social Agg...
https://www.lesswrong.com/posts/oRRpsGkCZHA3pzhvm/a-fungibility-theorem
# Farewell Aaron Swartz (1986-2013) [Link](http://tech.mit.edu/V132/N61/swartz.html) [One of us](/user/aaronsw/overview/) is [no more](http://tech.mit.edu/V132/N61/swartz.html). > Computer activist Aaron H. Swartz committed suicide in New York City yesterday, Jan. 11. > > The accomplished Swartz co-authored the no...
https://www.lesswrong.com/posts/qWxApH2AyF3bbPDeL/farewell-aaron-swartz-1986-2013
# Seeking examples of people smarter than me who got hung up I'm looking for historical examples of scientists who were a) very intelligent and yet b) continued to put themselves behind a theory in their discipline long after it was rejected.  Maybe they got too attached to it, refused to be wrong, got emotional,...
https://www.lesswrong.com/posts/BXqLFptZoKXgAmvZW/seeking-examples-of-people-smarter-than-me-who-got-hung-up
# Evolution, Sex, and Gender, Not to Mention Research From [The New York Times](http://www.nytimes.com/2013/01/13/opinion/sunday/darwin-was-wrong-about-dating.html?pagewanted=2&_r=0&ref=general&src=me): > Take the question of promiscuity. Everyone has always assumed — and early research had shown — that women desired...
https://www.lesswrong.com/posts/FMMtzKdbjMQmJMgP9/evolution-sex-and-gender-not-to-mention-research
# Assessing Kurzweil: the gory details This post goes along with [this one](/lw/gbi/assessing_kurzweil_the_results/), which was merely summarising the results of the volunteer assessment. Here we present the further details of the methodology and results. Kurzweil's predictions were decomposed into 172 separate state...
https://www.lesswrong.com/posts/edd9mAcMByL2BFNJv/assessing-kurzweil-the-gory-details
# Study on depression I am currently running a study on depression, in collaboration with Shannon Friedman ([http://lesswrong.com/user/ShannonFriedman/overview/](/user/ShannonFriedman/overview/)). If you are interested in participating, the study involves filling out a survey and will take a few minutes of your time (...
https://www.lesswrong.com/posts/NGhfJp5qPAL8j9JTp/study-on-depression
# Assessing Kurzweil: the results Predictions of the future rely, to a much greater extent than in most fields, on the personal judgement of the expert making them. Just one problem - personal expert judgement generally [sucks](http://press.princeton.edu/titles/7959.html), especially when the experts don't receive imm...
https://www.lesswrong.com/posts/kbA6T3xpxtko36GgP/assessing-kurzweil-the-results
# Outline of Possible Sources of Values I don't know what my values are. I don't even know how to find out what my values are. But do I know something about how I (or an [FAI](http://wiki.lesswrong.com/wiki/Friendly_artificial_intelligence)) _may_ be able to find out what my values are? Perhaps... and I've organized m...
https://www.lesswrong.com/posts/uFEu2Y7efZ8CzCD5F/outline-of-possible-sources-of-values
# Generalizing from One Trend **Related:** [Reference Class of the Unclassreferenceable](/lw/1lx/reference_class_of_the_unclassreferenceable/ ), [Generalizing From One Example](/lw/dr/generalizing_from_one_example/) Many people try to predict the future. Few succeed. One common mistake made in predicting the future ...
https://www.lesswrong.com/posts/5wg3FRBie7BEFujQK/generalizing-from-one-trend
# My simple hack for increased alertness and improved cognitive functioning: very bright light This is a simple idea that I came up with by myself. I was looking for a means to enter high functioning lots-of-beta-waves modes without the use of chemical stimulants. What I found was that very bright light works really, ...
https://www.lesswrong.com/posts/Ag7oQifJQM5AnMCrR/my-simple-hack-for-increased-alertness-and-improved
# Banish the Clippy-creating Bias Demon! I [posted](http://blog.practicalethics.ox.ac.uk/2013/01/invoking-and-banishing-the-dread-demon-lead/) in Practical Ethics, arguing that if we mentally anthropomorphised certain risks, then we'd be more likely to give them the attention they [deserved](http://en.wikipedia.org/wi...
https://www.lesswrong.com/posts/P2ggjdWv67uTzJ28z/banish-the-clippy-creating-bias-demon
# Michael Vassar's Edge contribution: summary Michael Vassar has written a [provocative response](http://edge.org/response-detail/23876) to this year's [Edge question](http://edge.org/annual-question/q2013): "What \*should\* we be worried about?". But, I'm confused about his post. My attempt to summarize his point of ...
https://www.lesswrong.com/posts/gKfXFQxvYN5Dm4eoR/michael-vassar-s-edge-contribution-summary
# Long-chain correlation: lead paint and crime A friend has been asking my views on the likelihood that there's anything to a correlation between changing levels of lead in paint (and automotive exhaust) and the levels of crime. He quoted from a Reason Blog: > So Nevin dove in further, digging up detailed data on lea...
https://www.lesswrong.com/posts/b2WFFRdy2DTMJMfjT/long-chain-correlation-lead-paint-and-crime
# On the Importance of Systematic Biases in Science From pg812-1020 of [Chapter 8 “Sufficiency, Ancillarity, And All That”](http://omega.albany.edu:8008/ETJ-PS/cc8k.ps) of [_Probability Theory: The Logic of Science_](http://omega.albany.edu:8008/JaynesBook.html) by E.T. Jaynes: > The classical example showing the err...
https://www.lesswrong.com/posts/qZiCwDQZtskfLipiF/on-the-importance-of-systematic-biases-in-science
# I attempted the AI Box Experiment (and lost) #### [_**Update 2013-09-05.**_](/r/discussion/lw/ij4/i_attempted_the_ai_box_experiment_again_and_won/) #### **[_I have since played two more AI box experiments after this one, winning both._](/r/discussion/lw/ij4/i_attempted_the_ai_box_experiment_again_and_won/)** **_Up...
https://www.lesswrong.com/posts/FmxhoWxvBqSxhFeJn/i-attempted-the-ai-box-experiment-and-lost
# Update on Kim Suozzi (cancer patient in want of cryonics) Kim Suozzi was a neuroscience student with brain cancer who wanted to be cryonically preserved but lacked the funds. She appealed to reddit and a foundation was set up, called the Society for Venturism.  Enough money was raised, and when she died on the Janua...
https://www.lesswrong.com/posts/xgr8sDtQEEs7zfTLH/update-on-kim-suozzi-cancer-patient-in-want-of-cryonics
# AidGrade - GiveWell finally has some competition AidGrade is a new charity evaluator that looks to be comparable to GiveWell. Their primary difference is that they \*only\* focus on how charities compare along particular measured outcomes (such as school attendance, birthrate, chance of opening a business, malaria),...
https://www.lesswrong.com/posts/745w39YeAEwvRyBt9/aidgrade-givewell-finally-has-some-competition
# AI box: AI has one shot at avoiding destruction - what might it say? Eliezer [proposed in a comment:](/r/discussion/lw/gej/i_attempted_the_ai_box_experiment_and_lost/8bje) >More difficult version of AI-Box Experiment: Instead of having up to 2 hours, you can lose at any time if the other player types AI DESTROYED. ...
https://www.lesswrong.com/posts/TMQY54nbmv2Pqn3ux/ai-box-ai-has-one-shot-at-avoiding-destruction-what-might-it
# Want to help me test my Anki deck creation skills? I'm interested in trying to make better Anki decks for the LessWrong community, but I want to see how well I can actually do this first. There's a lot of knowledge out there about how to format and create decks, but it's still a decent amount of work, and there are ...
https://www.lesswrong.com/posts/5RF3GKNbahvMeTAr5/want-to-help-me-test-my-anki-deck-creation-skills
# Value-Focused Thinking: a chapter-by-chapter summary ![](http://www.ebook3000.com/upimg/201007/17/1714184168.jpeg)This is a chapter-by-chapter summary of [Value-Focusing Thinking](http://www.amazon.com/Value-Focused-Thinking-Path-Creative-Decisionmaking/dp/067493198X/ref=nosim?tag=vglnk-c319-20) by [Ralph Keeney](ht...
https://www.lesswrong.com/posts/CQHZZWZt99An8fmpT/value-focused-thinking-a-chapter-by-chapter-summary
# LW anchoring experiment: maybe > I do an informal experiment testing whether LessWrong karma scores are susceptible to a form of anchoring based on the first comment posted; a medium-large effect size is found although the data does not fit the assumed normal distribution & the more sophisticated analysis is equivoc...
https://www.lesswrong.com/posts/DfvX99AKx7pR7NE3v/lw-anchoring-experiment-maybe
# Right for the Wrong Reasons One of the few things that I really appreciate having encountered during my study of philosophy is the Gettier problem. Paper after paper has been published on this subject, starting with Gettier's original ["Is Justified True Belief Knowledge?"](http://philosophyfaculty.ucsd.edu/faculty/...
https://www.lesswrong.com/posts/7c7crvbG62KL8kAuW/right-for-the-wrong-reasons
# Notes on Autonomous Cars > Excerpts from literature on robotic/self-driving/autonomous cars with a focus on legal issues, lengthy, often tedious; some more SI work. See also [Notes on Psychopathy](/lw/fzy/notes_on_psychopathy/). Having read through all this material, my general feeling is: the near-term future (1 d...
https://www.lesswrong.com/posts/8ZrQkBDptXhumhP3S/notes-on-autonomous-cars
# In the beginning, Dartmouth created the AI and the hype I've just been through the proposal for the Dartmouth AI conference of 1956, and it's a surprising read. All I really knew about it was its absurd optimism, as typified by the quote: An attempt will be made to find how to make machines use language, form abstr...
https://www.lesswrong.com/posts/ZEZwWsPHGo6R88WuH/in-the-beginning-dartmouth-created-the-ai-and-the-hype
# Best of Rationality Quotes, 2012 Edition I finished creating the 2012 edition of the Best of Rationality Quotes collection. ([Here is last year's](/lw/3cn/best_of_rationality_quotes_20092010/).) [**Best of Rationality Quotes 2012**](http://people.mokk.bme.hu/~daniel/rationality_quotes_2012/rq_only2012.html) (500kB ...
https://www.lesswrong.com/posts/zaWnu3PxP4YTYiuCm/best-of-rationality-quotes-2012-edition
# CEV: a utilitarian critique _I'm posting this article on behalf of [Brian Tomasik](http://www.utilitarian-essays.com/), who authored it but is at present too busy to respond to comments._ _Update from Brian_: "As of 2013-2014, I have become more sympathetic to at least the spirit of CEV specifically and to the proj...
https://www.lesswrong.com/posts/PnAqpopgvDGyeBCQE/cev-a-utilitarian-critique
# Cryo and Social Obligations I'm about a third of the way through "Debt: The First 5,000 Years" by David Graeber, and am enjoying the feeling of ideas shifting around in my head, arranging themselves into more useful patterns. (The last book I read that put together ideas of similar breadth was "Economix: How and Why...
https://www.lesswrong.com/posts/GG8KAsj4c3bWdgBBc/cryo-and-social-obligations
# Meetup : Love and Sex in Salt Lake City Discussion article for the meetup : [Love and Sex in Salt Lake City](/meetups/ij) --------------------------------------------------------------------------------- **WHEN:** 16 February 2013 01:00:00PM (-0700) **WHERE:** 1558 Palo Verde Way QE#12, cottonwood heights, ut 84...
https://www.lesswrong.com/posts/APyPRqpHCBgcxYAEm/meetup-love-and-sex-in-salt-lake-city
# [LINK] NYT Article about Existential Risk from AI [http://opinionator.blogs.nytimes.com/2013/01/27/cambridge-cabs-and-copenhagen-my-route-to-existential-risk/](http://opinionator.blogs.nytimes.com/2013/01/27/cambridge-cabs-and-copenhagen-my-route-to-existential-risk/) Author: Huw Price (Bertrand Russell Professor o...
https://www.lesswrong.com/posts/tK37jT79YFgARZRje/link-nyt-article-about-existential-risk-from-ai
# Isolated AI with no chat whatsoever Suppose you make a super-intelligent AI and run it on a computer. The computer has NO conventional means of output (no connections to other computers, no screen, etc). Might it still be able to get out / cause harm? I'll post my ideas, and you post yours in the comments. (This ma...
https://www.lesswrong.com/posts/Nuh2d6TsFhkQktbS7/isolated-ai-with-no-chat-whatsoever
# [minor] Separate Upvotes and Downvotes Implemented It seems that if you look at the column on the right of the page, you can see upvotes and downvotes separately for recent posts. The same \[n, m\] format is displayed for recent comments, but it doesn't seem to actually sync with the score displaying on the comment....
https://www.lesswrong.com/posts/7e5LpsnJEuRgnSytM/minor-separate-upvotes-and-downvotes-implimented
# The Zeroth Skillset **Related:** [23 Cognitive Mistakes that make People Play Bad Poker](http://rationalpoker.com/2011/07/30/23-cognitive-mistakes-that-make-people-play-bad-poker/) **Followed by:** Situational Awareness And You If epistemic rationality is the art of updating one's beliefs based on new evidence to ...
https://www.lesswrong.com/posts/vmBHCPZxunwdbFvaJ/the-zeroth-skillset
# Second major sequence now available in audio format The sequence "[A Human's Guide to Words](http://wiki.lesswrong.com/wiki/A_Human%27s_Guide_to_Words)" is now available as a [professionally read podcast](http://castify.co/channels/16-less-wrong-a-human-s-guide-to-words). We have started working on the large "[Redu...
https://www.lesswrong.com/posts/yhynHHKb7Hgmefwq6/second-major-sequence-now-available-in-audio-format
# Singularity Institute is now Machine Intelligence Research Institute [http://singularity.org/blog/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/](http://intelligence.org/blog/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/) As [Risto Saarelma](/user/Risto_Saarelma/) ...
https://www.lesswrong.com/posts/PbAdoTrZF7Nkuh5sS/singularity-institute-is-now-machine-intelligence-research
# Thoughts on the January CFAR workshop So, the [Center for Applied Rationality](http://appliedrationality.org/) just ran another [workshop](/lw/g6g/applied_rationality_workshops_jan_2528_and_march/), which Anna kindly invited me to. Below I've written down some thoughts on it, both to organize those thoughts and beca...
https://www.lesswrong.com/posts/9FfxfaLQN2rRvSjp7/thoughts-on-the-january-cfar-workshop
# Pinpointing Utility **Following** [Morality is Awesome](/lw/g7y/morality_is_awesome/). **Related:** [Logical Pinpointing](/lw/f4e/logical_pinpointing/), [VNM](https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem). The eternal question, with a quantitative edge: A wizard has turned you into ...
https://www.lesswrong.com/posts/CQkGJ2t5Rw8GcZKJm/pinpointing-utility
# [LINK] The Cryopreservation of Kim Suozzi [http://www.alcor.org/blog/?p=2716](http://www.alcor.org/blog/?p=2716) > With the inevitable end in sight – and with the cancer continuing to spread throughout her brain – Kim made the brave choice to refuse food and fluids. Even so, it took around 11 days before her body s...
https://www.lesswrong.com/posts/wLA3sBNZDAaQ2ekfG/link-the-cryopreservation-of-kim-suozzi
# The value of Now. I am an easily bored Omega-level being, and I want to play a game with you. I am going to offer you two choices.  Choice 1: You spend the next thousand years in horrific torture, after which I restore your local universe to precisely the state it is in now (wiping your memory in the process), and...
https://www.lesswrong.com/posts/BzQqhoxnoYSPJB28o/the-value-of-now
# Save the princess: A tale of AIXI and utility functions _"Intelligence measures an agent's ability to achieve goals in a wide range of environments." _(Shane Legg) ^[\[1\]](#quote "quote")^ A little while ago [I tried to equip](/lw/feo/universal_agents_and_utility_functions/ "Universal agents and utility functions"...
https://www.lesswrong.com/posts/RxQE4m9QgNwuq764M/save-the-princess-a-tale-of-aixi-and-utility-functions
# [Link] The Stanford Superman Experiment: Anchoring Empathy? [Virtual superpowers encourage real-world empathy](http://news.stanford.edu/news/2013/january/virtual-reality-altruism-013013.html): With a whoosh of air, the subjects left the ground – either controlling their flight by a series of arm motions, like Super...
https://www.lesswrong.com/posts/KMLYXAS2rYdcvtQqg/link-the-stanford-superman-experiment-anchoring-empathy
# Naturalism versus unbounded (or unmaximisable) utility options There are many paradoxes with unbounded utility functions. For instance, consider whether it's [rational to spend eternity in Hell](http://philsci-archive.pitt.edu/1595/1/15.1.bayesbind.pdf): Suppose that you die, and God offers you a deal. You can spen...
https://www.lesswrong.com/posts/PpTN7GP2FsPyHfKrs/naturalism-versus-unbounded-or-unmaximisable-utility-options
# S.E.A.R.L.E's COBOL room _A response to Searle's [Chinese](http://en.wikipedia.org/wiki/Chinese_room) [Room](http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=6573580) argument._ **PunditBot**: Dear viewers, we are currently interviewing the renowned robot philosopher, none other than the Syn...
https://www.lesswrong.com/posts/T27QnGQ929YMTZaaM/s-e-a-r-l-e-s-cobol-room
# [suggestion] New Meetup Tab Hi everyone, I am unsure if I am formatting this correctly or putting it in the appropriate location. I think that having meetup notifications is a great idea. A new tab (I.e "main", "discussion" and "meetups") would  make it easier to find your own meetups, as well as create less clutt...
https://www.lesswrong.com/posts/j8rgwQmQQE5f4HqGX/suggestion-new-meetup-tab
# The Wrongness Iceberg As soon as I got out of college I got a job at a restaurant. At the time I had never had a job at a restaurant, but my mom had known the owners and I felt obligated to avoid performing badly. Yet inevitably I _did_ perform badly, and how this performance was evaluated would greatly affect my way...
https://www.lesswrong.com/posts/yWLa7LMaWprtxhjL9/the-wrongness-iceberg
# Humor: GURPS Friendly AI Found some hidden internet gold and thought I would share: http://sl4.org/wiki/GurpsFriendlyAI http://sl4.org/wiki/FriendlyAICriticalFailureTable [GurpsFriendlyAI](http://sl4.org/wiki/back=GurpsFriendlyAI) =========================================================== by [EliezerYud...
https://www.lesswrong.com/posts/aXC5QXDsjhQ7CCNgT/humor-gurps-friendly-ai
# Offer: I'll match donations to the Against Malaria Foundation Giving money to effective charities is one of the ways we can have the biggest positive impact in the world. If you've been thinking about giving to [GiveWell](http://www.givewell.org/)'s top-rated charity, the [Against Malaria Foundation](http://www.give...
https://www.lesswrong.com/posts/uLTJh2RkXgWSXRxQK/offer-i-ll-match-donations-to-the-against-malaria-foundation
# Official LW uncensored thread (on Reddit) [http://www.reddit.com/r/LessWrong/comments/17y819/lw\_uncensored\_thread/](http://www.reddit.com/r/LessWrong/comments/17y819/lw_uncensored_thread/) This is meant as an open discussion thread someplace where I won't censor anything (and in fact can't censor anything, since ...
https://www.lesswrong.com/posts/vacuhzvPZSwZpyX2u/official-lw-uncensored-thread-on-reddit
# How to offend a rationalist (who hasn't thought about it yet): a life lesson Usually, I don't get offended at things that people say to me, because I can see at what points in their argument we differ, and what sort of counterargument I could make to that. I can't get mad at people for having beliefs I think are wro...
https://www.lesswrong.com/posts/EhyiWtMmWG6Eorh84/how-to-offend-a-rationalist-who-hasn-t-thought-about-it-yet
# [Link] Power of Suggestion **Related:** [Social Psychology & Priming: Art Wears Off](/r/discussion/lw/gld/link_social_psychology_priming_art_wears_off/) I recommend reading the [piece](http://chronicle.com/article/Power-of-Suggestion/136907/), but below are some excerpts and commentary. > Power of Suggestion > ---...
https://www.lesswrong.com/posts/rdSH8h3xo7aQxuY5Y/link-power-of-suggestion
# Link: blog on effective altruism Over the last few months I've [started blogging](http://rationalaltruist.com/) about effective altruism more broadly, rather than [focusing on AI risk](http://ordinaryideas.wordpress.com/). I'm still focusing on abstract considerations and methodological issues, but I hope it is of i...
https://www.lesswrong.com/posts/i8CoHSqGHgJCbdQK6/link-blog-on-effective-altruism
# Philosophical Landmines **Related:** [Cached Thoughts](/lw/k5/cached_thoughts/) Last summer I was talking to my sister about something. I don't remember the details, but I invoked the concept of "truth", or "reality" or some such. She immediately spit out a cached reply along the lines of "But how can you really sa...
https://www.lesswrong.com/posts/L4HQ3gnSrBETRdcGu/philosophical-landmines
# A brief history of ethically concerned scientists > For the first time in history, it has become possible for a limited group of a few thousand people to threaten the absolute destruction of millions. \-\- Norbert Wiener (1956), [Moral Reflections of a Mathematician](http://books.google.com/books?id=CwoAAAAAMBAJ&pg...
https://www.lesswrong.com/posts/hxaq9MCaSrwWPmooZ/a-brief-history-of-ethically-concerned-scientists
# MLP: The Next Level Of Your Studies The first four chapters of my MLP fanfiction are now [online on fimfiction.net](https://www.fimfiction.net/story/82658/the-next-level-of-your-studies). Unlike [Friendship Is Optimal](/lw/efi/friendship_is_optimal_a_my_little_pony_fanfic/) ([fim link](http://www.fimfiction.net/stor...
https://www.lesswrong.com/posts/CSpApLKpTLmZJjJpW/mlp-the-next-level-of-your-studies
# [Link] False memories of fabricated political events Another one for the memory-is-really-unreliable file. Some researchers at UC Irvine (one of them is Elizabeth Loftus, whose name I've seen attached to other fake-memory studies) asked about 5000 subjects about their recollection of four political events. One of th...
https://www.lesswrong.com/posts/REf88ofNFs9vbmzmF/link-false-memories-of-fabricated-political-events
# [LINK] Open Source Software Developer with Terminal Illness Hopes to Opt Out of Death Aaron Winborn writes: > TLDR: [http://venturist.info/aaron-winborn-charity.html](http://venturist.info/aaron-winborn-charity.html "http://venturist.info/aaron-winborn-charity.html") > > So maybe you've heard about my plight, in w...
https://www.lesswrong.com/posts/rSWBkLpoQWERWdGjM/link-open-source-software-developer-with-terminal-illness
# The Fundamental Question - Rationality computer game design > I sometimes go around saying that the fundamental question of rationality is _Why do you believe what you believe?_ > > \-\- [Eliezer in Quantum Non-Realism](/lw/q5/quantum_nonrealism/) I was much impressed when they finally came out with a PC version o...
https://www.lesswrong.com/posts/8kFewgKabR6vYLjQb/the-fundamental-question-rationality-computer-game-design
# Higher than the most high In an earlier [post](/lw/giu/naturalism_versus_unbounded_or_unmaximisable/), I talked about how we could deal with variants of the Heaven and Hell problem - situations where you have an infinite number of options, and none of them is a maximum. The solution for a (deterministic) agent was t...
https://www.lesswrong.com/posts/jpMwB3NKjcYXZLo8E/higher-than-the-most-high
# [LINK] "Scott and Scurvy": a reminder of the messiness of scientific progress [http://idlewords.com/2010/03/scott\_and\_scurvy.htm](http://idlewords.com/2010/03/scott_and_scurvy.htm) > Now, I had been taught in school that scurvy had been conquered in 1747, when the Scottish physician [James Lind](http://en.wikiped...
https://www.lesswrong.com/posts/vPvpo5dxGX2mh8Gwp/link-scott-and-scurvy-a-reminder-of-the-messiness-of
# Memetic Tribalism Related: [politics is the mind killer](/lw/gw/politics_is_the_mindkiller/), [other optimizing](/lw/9v/beware_of_otheroptimizing/) When someone says something stupid, I get an urge to correct them. Based on the stories I hear from others, I'm not the only one. For example, some of my friends are i...
https://www.lesswrong.com/posts/Ztsw7b3CbJSzD98aR/memetic-tribalism
# Rationalist Lent As I understand it, [Lent](http://en.wikipedia.org/wiki/Lent) is a holiday where we celebrate the scientific method by changing exactly one variable in our lives for 40 days. This seems like a convenient [Schelling point](/lw/dc7/nash_equilibria_and_schelling_points/) for rationalists to adopt, so: ...
https://www.lesswrong.com/posts/LnpShPEqcsGFTFsKS/rationalist-lent
# A Series of Increasingly Perverse and Destructive Games _Related to: [Higher Than the Most High](/r/discussion/lw/gng/higher_than_the_most_high/)_ The linked post describes a game in which (I fudge a little) Omega comes to you and two other people, and asks you to tell him an integer.  The person who names the larg...
https://www.lesswrong.com/posts/Dtkrq5h7GnuxSSPRG/a-series-of-increasingly-perverse-and-destructive-games
# The Singularity Wars _(This is an introduction, for those not immersed in the Singularity world, into the history of and relationships between SU, SIAI \[SI, MIRI\], SS, LW, CSER, FHI, and CFAR. It also has some opinions, which are strictly my own.)_ The good news is that there _were_ no Singularity Wars.  The Bay ...
https://www.lesswrong.com/posts/GxxARXNDAD5pWnxak/the-singularity-wars
# Learning critical thinking: a personal example Related to: [Is Rationality Teachable](/lw/76x/is_rationality_teachable/) “Critical care nursing isn’t about having critically ill patients,” my preceptor likes to say, “it’s about critical thinking.” I doubt she's talking about the same kind of critical thinking that...
https://www.lesswrong.com/posts/pp62TwbtyFnTZe4Nb/learning-critical-thinking-a-personal-example
# LW Women: LW Online Standard Intro -------------- **The following section will be at the top of all posts in the LW Women series.** Several months ago, I put out a call for anonymous submissions by the women on LW, with the idea that I would compile them into some kind of post.  There is a LOT of material, so I am...
https://www.lesswrong.com/posts/dDZcMG8xopYgnNfvK/lw-women-lw-online
# Three Axes of Prohibitions The Game of Thrones board game is similar to Diplomacy (so I hear: I've never actually played Diplomacy). You often need to make alliances to survive, but these alliances are weak. It is both expected and required that you will eventually break your alliances, otherwise you will lose. My f...
https://www.lesswrong.com/posts/GX72JPWoWJLT9zpgu/three-axes-of-prohibitions
# What are your rules of thumb? I'm not as smart as I like to think I am. Knowing that, I've gotten into a habit of trying to work out as many general principles as I can ahead of time, so that when I actually need to think of something, I've already done as much of the work as I can. What are your most useful cached...
https://www.lesswrong.com/posts/DwvLzRZsZRzPuW49a/what-are-your-rules-of-thumb
# Cryo: Legal fees of $2500 I have just received a message from my lawyer, regarding the preparations of my cryo-based will, power of attorney, and related papers. The most significant quote reads as follows: Due to the complex nature of your wishes and the undeveloped area of the law surrounding cryogenics an...
https://www.lesswrong.com/posts/84Kzzrne3bFLcY9wg/cryo-legal-fees-of-usd2500
# Great rationality posts by LWers not posted to LW Ever since Eliezer, Yvain, and myself stopped posting regularly, LW's front page has mostly been populated by meta posts. (The Discussion section is still abuzz with interesting content, though, including [original research](/lw/f6o/original_research_on_less_wrong/)....
https://www.lesswrong.com/posts/xZdW7D43AaCiQQzvM/great-rationality-posts-by-lwers-not-posted-to-lw
# Imitation is the Sincerest Form of Argument I recently gave a talk at Chicago Ideas Week on adapting Turing Tests to have better, less mindkill-y arguments, and this is the precis for folks who would prefer not to sit through the video ([which is available here](http://vimeo.com/56932073)). Conventional Turing Test...
https://www.lesswrong.com/posts/KjdP2WjWng6skwbY7/imitation-is-the-sincerest-form-of-argument
# Are coin flips quantum random to my conscious brain-parts? Hello rationality friends!  I have a question that I bet some of you have thought about... I hear lots of people saying that classical coin flips are not "quantum random events", because the outcome is very nearly determined by thumb movement when I flip th...
https://www.lesswrong.com/posts/9KW7oLQepcgu4LkFN/are-coin-flips-quantum-random-to-my-conscious-brain-parts
# Visual Mental Imagery Training Previously: [Generalizing From One Example](/lw/dr/generalizing_from_one_example/) > There was a debate, in the late 1800s, about whether "imagination" was simply a turn of phrase or a real phenomenon. That is, can people actually create images in their minds which they see vividly, o...
https://www.lesswrong.com/posts/8ciFqEjkekqzaTqT6/visual-mental-imagery-training
# Think Like a Supervillain **See also**: [Everything I Needed To Know About Life, I Learned From Supervillains](http://dirtsimple.org/2009/02/everything-i-needed-to-know-about-life.html) > Mr. Malfoy would hardly shrink from talk of ordinary murder, but even he was shocked - yes you were Mr. Malfoy, I was watching y...
https://www.lesswrong.com/posts/vgXXyTae4noZBWykN/think-like-a-supervillain
# Calibrating Against Undetectable Utilons and Goal Changing Events (part1) Summary: Random events can preclude or steal attention from the goals you set up to begin with, hormonal fluctuation inclines people to change some of their goals with time. A discussion on how to act more usefully given those potential change...
https://www.lesswrong.com/posts/fKS54Zd4SaBrjznzF/calibrating-against-undetectable-utilons-and-goal-changing
# CEA does not seem to be credibly high impact _I am highly grateful to Alexey Morgunov and Adam Casey for reviewing and commenting on an earlier draft of this post, and pestering me into migrating the content from many emails to a somewhat coherent post. _ Will Crouch has [posted](/lw/fej/giving_what_we_can_80000_...
https://www.lesswrong.com/posts/KBDiWMqhaYe7uPHvN/cea-does-not-seem-to-be-credibly-high-impact
# VNM agents and lotteries involving an infinite number of possible outcomes Summary: The VNM utility theorem only applies to lotteries that involve a finite number of possible outcomes. If an agent maximizes the expected value of a utility function when considering lotteries that involve a potentially infinite number...
https://www.lesswrong.com/posts/7wmBH76BGScL7XNct/vnm-agents-and-lotteries-involving-an-infinite-number-of
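The entry above claims the VNM theorem only covers lotteries with finitely many outcomes. A classic illustration of why infinitely many outcomes cause trouble is a St. Petersburg-style lottery; this sketch (my own example, not taken from the post) pairs an unbounded utility function with a countably infinite lottery and shows the expected utility diverging:

```python
# Sketch (assumed example, not from the post): outcome n occurs with
# probability 2**-n and is assigned utility 2**n, so each outcome
# contributes exactly 1 to the expected utility and the series diverges.
def partial_expected_utility(n_terms):
    """Sum of p_n * u_n over the first n_terms outcomes of the lottery."""
    return sum((2.0 ** -n) * (2.0 ** n) for n in range(1, n_terms + 1))

# The partial sums grow without bound, so no finite expected utility exists:
print(partial_expected_utility(10))   # 10.0
print(partial_expected_utility(100))  # 100.0
```

Bounding the utility function restores convergence, which is one standard response to this problem.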
# An attempt to dissolve subjective expectation and personal identity _I attempt to figure out a way to dissolve the concepts of 'personal identity' and 'subjective expectation' down to the level of cognitive algorithms, in a way that would let one bite the bullets of the anthropic trilemma. I proceed by considering f...
https://www.lesswrong.com/posts/7XWGJGmWXNmTd2oAP/an-attempt-to-dissolve-subjective-expectation-and-personal
# "What-the-hell" Cognitive Failure Mode: a Separate Bias or a Combination of Other Biases? The ["what-the-hell" effect](http://goo.gl/D8aBF), when you break a rule and then go on a rule-breaking rampage, like [binge eating after a single dietary transgression](http://www.spring.org.uk/2011/03/the-what-the-hell-effect...
https://www.lesswrong.com/posts/6zH8RWtbxDPTrdJNn/what-the-hell-cognitive-failure-mode-a-separate-bias-or-a
# [Link] Tomasik's "Quantify with Care" [Brian Tomasik](http://utilitarian-essays.com/)'s latest article, '[Quantify with Care](http://felicifia.org/viewtopic.php?f=10&t=849)', seems to be of sufficient interest to readers of this forum to post a link to it here.  Abstract: > Quantification and metric optimization ar...
https://www.lesswrong.com/posts/RCrPHDWAT6NzS8xkE/link-tomasik-s-quantify-with-care
# Call for discussion: Signalling and/vs. accomplishment It seems to me that when people discover signalling, they see it everywhere and write essays about how no human activity is aimed at its stated purpose. However, stated purposes and other sorts of useful work get done anyway, and I'm sure there are constraints ...
https://www.lesswrong.com/posts/XA3BLcdHaqmMikzYS/call-for-discussion-signalling-and-vs-accomplishment
# Does evolution select for mortality? At a recent [Reddit AMA](http://www.reddit.com/r/IAmA/comments/18ymad/i_am_eric_lander_a_leader_of_the_human_genome/), [Eric Lander](http://en.wikipedia.org/wiki/Eric_Lander), a professor of biology who played an important part in the Human Genome Project, answered [this question...
https://www.lesswrong.com/posts/ra5dQit4wizLYCTYj/does-evolution-select-for-mortality
# Improving Human Rationality Through Cognitive Change (intro) This is the introduction to a paper I started writing long ago, but have since given up on. The paper was going to be an overview of methods for improving human rationality through cognitive change. Since it contains lots of handy references on rationality...
https://www.lesswrong.com/posts/hR92kW2ZSvmuca5Nf/improving-human-rationality-through-cognitive-change-intro
# [Link] Colonisation of Venus I was wondering what people thought of [this paper](http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20030022668_2003025525.pdf "this paper") by Geoffrey Landis on colonising Venus. In it he suggests that cloud-top Venus is one of the most benign environments in the Solar System. Tem...
https://www.lesswrong.com/posts/y8moeSHderg5Bmz2u/link-colonisation-of-venus
# Why Bayes? A Wise Ruling Why is Bayes' Rule useful? Most explanations of Bayes explain the how of Bayes: they take a well-posed mathematical problem and convert given numbers to desired numbers. While Bayes is useful for calculating hard-to-estimate numbers from easy-to-estimate numbers, the quantitative use of Baye...
https://www.lesswrong.com/posts/rm6o5imQCTrkfzCgq/why-bayes-a-wise-ruling
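The entry above notes that the quantitative use of Bayes converts easy-to-estimate numbers into hard-to-estimate ones. A minimal numeric sketch of that conversion (the function name and the test numbers are my own, not from the post):

```python
# Bayes' Rule: turn a prior, a sensitivity P(E|H), and a false-positive
# rate P(E|~H) -- all relatively easy to estimate -- into the
# hard-to-estimate posterior P(H|E).
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# A positive result from a test with 80% sensitivity and a 10%
# false-positive rate, applied to a 1%-prior hypothesis, still leaves
# the hypothesis unlikely:
print(round(posterior(0.01, 0.8, 0.1), 3))  # 0.075
```

The point of the exercise is the qualitative surprise, not the arithmetic: a "positive" result moves a 1% prior only to about 7.5%, because the false positives among the 99% swamp the true positives among the 1%.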
# Memory, nutrition, motivation, and genes There are two confusing but potentially important papers in the Jan. 25 2013 _Science_ on long-term memory (LTM) formation in fruit flies: Pierre-Yves Placais & Thomas Preat, "To favor survival under food shortage, the brain disables costly memory," 339:440-441; Yukinori Hira...
https://www.lesswrong.com/posts/ZxwXtNcw9bMQph992/memory-nutrition-motivation-and-genes
# [LINK] Westerners may be terrible experimental psychology subjects [WEIRD](/lw/17x/beware_of_weird_psychological_samples/) may be weirder than you think. [We Aren't The World](http://www.psmag.com/magazines/pacific-standard-cover-story/joe-henrich-weird-ultimatum-game-shaking-up-psychology-economics-53135/) writes o...
https://www.lesswrong.com/posts/syRATXbXeJxdMwQBD/link-westerners-may-be-terrible-experimental-psychology
# Self-assessment in expert AI predictions _This brief post is written on behalf of [Kaj Sotala](/user/Kaj_Sotala/overview/), due to deadline issues._ The results of our prior analysis suggested that there was [little difference between experts and non-experts](/lw/e36/ai_timeline_predictions_are_we_getting_better/) ...
https://www.lesswrong.com/posts/KovmjL7fKyCuBtzgd/self-assessment-in-expert-ai-predictions
# Need help with an MLP fanfiction with a transhumanist theme **EDIT: I am now taking arguments for alicornism, the placeholder term I've given to the stance that all ponies should be alicorns. Please PM me or post here if you have a good one, or an argument against one of anti-alicornism's strongest...
https://www.lesswrong.com/posts/sPWbNqfXz3BzZNZzD/need-help-with-an-mlp-fanfiction-with-a-transhumanist-theme
# Idea: Self-Improving Task Management Software So what the world needs is [yet](http://todotxt.com/) [another](http://taskwarrior.org/projects/show/taskwarrior) [task](http://www.rememberthemilk.com/) [management](http://www.producteev.com/) [program](http://www.google.co.uk/search?q=task+management+software), right?...
https://www.lesswrong.com/posts/RcuedsWZ3B2JCaceH/idea-self-improving-task-management-software
# Need some psychology advice I started going out with a fantastic girl a couple of weeks ago.  Everything is great, except that whenever I've sent her a text message or email requesting something and haven't received a response yet, I experience significant dysphoric anxiety, fearing that her response will be not jus...
https://www.lesswrong.com/posts/NjNJcDEx4NXYupjxN/need-some-psychology-advice
# Risk-aversion and investment (for altruists) (Cross-posted from [rationalaltruist](http://rationalaltruist.com/)) Suppose I hope to use my money to do good some day, but for now I am investing it and aiming to maximize my returns. I face the question: how much risk should I be willing to bear? Should I pursue safe ...
https://www.lesswrong.com/posts/CDdvNKGSTp77JrRZv/risk-aversion-and-investment-for-altruists
# Seize the Maximal Probability Moment Try to remember 3 or 4 things that you think would be _effective hacks_ for your life but have _not_ yet implemented. Really, find three.  Probably that was not so hard. Now think about the moment in time at which you had the maximal probability of having implemented such h...
https://www.lesswrong.com/posts/ZqJLAasSRXqc32nXW/seize-the-maximal-probability-moment
# Decision Theory FAQ Co-authored with [crazy88](/user/crazy88/overview/). Please let us know when you find mistakes, and we'll fix them. Last updated 03-27-2013. **Contents**: * [1\. What is decision theory?](#what-is-decision-theory) * [2\. Is the rational decision always the right decision?](#is-the-rational-...
https://www.lesswrong.com/posts/zEWJBFFMvQ835nq6h/decision-theory-faq
# Constructive mathematics and its dual I have stumbled upon an interesting and, as far as I know, new concept: thinking about the duality between constructive and paraconsistent logics, I've noticed that while the meta-theory of intuitionistic logic (constructive mathematics) is very well understood and studied, the...
https://www.lesswrong.com/posts/Jf67y793ZMc8DF2xQ/constructive-mathemathics-and-its-dual
# [Video] Brainwashed - A Norwegian documentary series on nature and nurture **Related:** [The Blank Slate](http://www.amazon.com/Blank-Slate-Modern-Denial-Nature/dp/0142003344), [The Psychological Diversity of Mankind](/lw/28k/the_psychological_diversity_of_mankind/), [Admitting to Bias](/lw/e1b/link_admitting_to_bia...
https://www.lesswrong.com/posts/nm8c68dfcYDe62rkR/video-brainwashed-a-norwegian-documentary-series-on-nature
# [LINK] Well-written article on the Future of Humanity Institute and Existential Risk This introduction to the concept of existential risk is perhaps the best such article I've read targeted at a general audience.  It manages to cover a lot of ground in a way that felt engaging to me and that I think would carry alon...
https://www.lesswrong.com/posts/x3ozuqLDn9SwjwEjN/link-well-written-article-on-the-future-of-humanity
# The VNM independence axiom ignores the value of information _Followup to : [Is risk aversion really irrational?](/lw/9oj/is_risk_aversion_really_irrational/)_ After reading the [decision theory FAQ](/lw/gu1/decision_theory_faq) and re-reading [The Allais Paradox](/lw/my/the_allais_paradox/) I realized I still don't...
https://www.lesswrong.com/posts/YHFvwDPDWmi8KdECw/the-vnm-independence-axiom-ignores-the-value-of-information
# Induction; or, the rules and etiquette of reference class tennis (Cross-posted from [rationalaltruist](http://www.rationalaltruist.com).) Some disasters (catastrophic climate change, high-energy physics surprises) are so serious that even a small probability (say 1%) of such a disaster would have significant policy...
https://www.lesswrong.com/posts/PXRxH4C6nKMwocBit/induction-or-the-rules-and-etiquette-of-reference-class