| text | source |
|---|---|
# Link: Rob Bensinger on Less Wrong and vegetarianism
I'm currently unconvinced either way on this matter. However, enough arguments have been raised that I think the question is worth every reader's careful consideration.
http://nothingismere.com/2014/11/12/inhuman-altruism-inferential-gap-or-motivational... | https://www.lesswrong.com/posts/MNB4dS2ERBDnxDjgq/link-rob-bensinger-on-less-wrong-and-vegetarianism |
# The Atheist's Tithe
I made a comment on another site a week or two ago, and I just realized that the line of thought is one that LW would appreciate, so here's a somewhat expanded version.
There's a lot of discussion around here about how to best give to charities, and I'm all for this. Ensuring donations are used... | https://www.lesswrong.com/posts/WmtoQNhE2WBBhFPNt/the-atheist-s-tithe |
# The "best" mathematically-informed topics?
Recently, I asked LessWrong about the important [math of rationality](/lw/l42/what_math_is_essential_to_the_art_of_rationality/). I found the responses extremely helpful, but on reflection I think there's a better approach.
I come from a new-age-y background. As such,... | https://www.lesswrong.com/posts/3cfoYriyG3BAs38pp/the-best-mathematically-informed-topics |
# An optimality result for modal UDT
$$\newcommand{\box}{{\square}}$$
$$\newcommand{\q}[1]{\ulcorner{#1}\urcorner}$$
$$\newcommand{\PA}{\mathrm{PA}}$$
$$\newcommand{\GL}{\mathrm{GL}}$$
$$\newcommand{\NN}{\mathbb{N}}$$
$$\newcommand{\lr}{\leftrightarrow }$$
$$\newcommand{\UDT}{{\mathrm{UDT}}}$$
$$\newcommand{\... | https://www.lesswrong.com/posts/5bd75cc58225bf0670374e8f/an-optimality-result-for-modal-udt |
# Topological truth predicates: Towards a model of perfect Bayesian agents
$$\newcommand{\q}[1]{\ulcorner#1\urcorner}$$
$$\newcommand{\NN}{\mathbb{N}}$$
$$\newcommand{\RR}{\mathbb{R}}$$
$$\newcommand{\cL}{\mathcal{L}}$$
$$\newcommand{\cU}{\mathcal{U}}$$
$$\newcommand{\dom}{\mathrm{dom}}$$
$$\newcommand{\Pow}{\... | https://www.lesswrong.com/posts/5bd75cc58225bf0670374e5f/topological-truth-predicates-towards-a-model-of-perfect-bayesian-agents |
# My new paper: Concept learning for safe autonomous AI
_Abstract: Sophisticated autonomous AI may need to base its behavior on fuzzy concepts that cannot be rigorously defined, such as well-being or rights. Obtaining desired AI behavior requires a way to accurately specify these concepts. We review some evidence sugg... | https://www.lesswrong.com/posts/oqp4SeEDhrkYyvShx/my-new-paper-concept-learning-for-safe-autonomous-ai |
# Musk on AGI Timeframes
Elon Musk submitted [a comment](http://i.imgur.com/sL0uqqW.jpg) to edge.org a day or so ago, on [this article](http://edge.org/conversation/the-myth-of-ai). It was later removed.
> The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you ... | https://www.lesswrong.com/posts/kzHJ5BRhgkj9CSQ3N/musk-on-agi-timeframes |
# Lying in negotiations: a maximally bad problem
In a [previous post](/lw/i20/even_with_default_points_systems_remain/), I showed that the [Nash](http://en.wikipedia.org/wiki/Bargaining_problem#Nash_bargaining_solution) [Bargaining Solution](/lw/2x8/lets_split_the_cake_lengthwise_upwise_and/) (NBS), the [Kalai-Smorodi... | https://www.lesswrong.com/posts/5GTBktYp2g3i2BwFd/lying-in-negotiations-a-maximally-bad-problem |
# I just increased my Altruistic Effectiveness and you should too
I was looking at the marketing materials for a charity (which I'll call X) over the weekend, when I saw something odd at the bottom of their donation form:
> Check here to increase your donation by 3% to defray the cost of credit card processing.
It's... | https://www.lesswrong.com/posts/QE4PAptjNWZnqQGGt/i-just-increased-my-altruistic-effectiveness-and-you-should |
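The 3% checkbox in the excerpt above is simple fee arithmetic. A minimal sketch of the comparison, assuming an illustrative card fee of 2.2% + $0.30 per transaction (the post does not state the actual fee schedule, and `charity_receives` is a hypothetical helper, not from the post):

```python
def charity_receives(donation, add_pct=0.0, fee_rate=0.022, fee_fixed=0.30):
    """Amount the charity nets after card fees, with an optional
    percentage the donor adds to defray processing costs."""
    charged = donation * (1 + add_pct)          # what the card is billed
    fees = charged * fee_rate + fee_fixed       # assumed processor fees
    return round(charged - fees, 2)

base = charity_receives(100.0)           # checkbox unticked -> 97.5
boosted = charity_receives(100.0, 0.03)  # 3% checkbox ticked -> 100.43
```

Under these assumed fees, ticking the box more than covers processing on a $100 donation, which is the effect the excerpt describes.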
# Systemic risk: a moral tale of ten insurance companies
_Once upon a time..._
Imagine there were ten insurance sectors, each sector being a different large risk (or possibly the same risks, in different geographical areas). All of these risks are taken to be independent.
To simplify, we assume that all the risks fo... | https://www.lesswrong.com/posts/7ArjK9WqdztqGncoi/systemic-risk-a-moral-tale-of-ten-insurance-companies |
# Superintelligence 10: Instrumentally convergent goals
_This is part of a weekly reading group on [Nick Bostrom](http://www.nickbostrom.com/)'s book, [Superintelligence](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111). For more information about the group, and an index of posts ... | https://www.lesswrong.com/posts/BD6G9wzRRt3fxckNC/superintelligence-10-instrumentally-convergent-goals |
# Bayes Academy: Development report 1
Some of you may remember me proposing a game idea that went by the name of [The Fundamental Question](/lw/gmn/the_fundamental_question_rationality_computer/). Some of you may also remember me talking a lot about developing an educational game about Bayesian Networks for my MSc the... | https://www.lesswrong.com/posts/XJbyks4QhbGmPk9hJ/bayes-academy-development-report-1 |
# The Centre for Effective Altruism is hiring to fill five roles in research, operations and outreach
The [Centre for Effective Altruism](https://centreforeffectivealtruism.org/careers/), the group behind [80,000 Hours](https://80000hours.org/), [Giving What We Can](http://www.givingwhatwecan.org/), the [Global Priori... | https://www.lesswrong.com/posts/9wqcTrFt2cPz7EpjL/the-centre-for-effective-altruism-is-hiring-to-fill-five |
# Link: Simulating C. Elegans
[http://radar.oreilly.com/2014/11/the-robotic-worm.html](http://radar.oreilly.com/2014/11/the-robotic-worm.html "The robotic worm")
Summary, as I understand it: The connectome for C. elegans's 302-neuron brain has been known for some time, but actually doing anything with it (especially ... | https://www.lesswrong.com/posts/3dy9pSuAexHCk8Bn6/link-simulating-c-elegans |
# How can one change what they consider "fun"?
Most of this post is background and context, so I've included a tl;dr horizontal rule near the bottom where you can skip everything else if you so choose. :)
Here's a short anecdote of Feynman's:
> ... I invented some way of doing problems in physics, quantum electrodyn... | https://www.lesswrong.com/posts/tTTXP9PFZZjASuTnM/how-can-one-change-what-they-consider-fun |
# The Categories Were Made For Man, Not Man For The Categories
**I.**
“Silliest internet atheist argument” is a hotly contested title, but I have a special place in my heart for the people who occasionally try to prove Biblical fallibility by pointing out whales are not a type of fish.
(this is going to end up being... | https://www.lesswrong.com/posts/aMHq4mA2PHSM2TMoH/the-categories-were-made-for-man-not-man-for-the-categories |
# When should an Effective Altruist be vegetarian?
I have lately noticed several people wondering why more [Effective Altruists](http://effective-altruism.com/) are not vegetarians. I am personally not a vegetarian because I don’t think it is an effective way to be altruistic.
As far as I can tell the fact that many ... | https://www.lesswrong.com/posts/hvqgTHBXhmxeNES6b/when-should-an-effective-altruist-be-vegetarian |
# TV's "Elementary" Tackles Friendly AI and X-Risk - "Bella" (Possible Spoilers)
I was a bit surprised to find this week's episode of _Elementary_ was about AI... not just AI and the Turing Test, but also a fairly even-handed presentation of issues like Friendliness, hard takeoff, and the difficulties of getting peop... | https://www.lesswrong.com/posts/dJF6ZhC3i45y27aeA/tv-s-elementary-tackles-friendly-ai-and-x-risk-bella |
# Breaking the vicious cycle
You may know me as the guy who posts a lot of controversial stuff about LW and MIRI. I don't enjoy doing this and do not want to continue with it. One reason being that the debate is turning into a flame war. Another reason is that I noticed that it does affect my health negatively (e.g. m... | https://www.lesswrong.com/posts/G9LNTP3uEyYCdr3mh/breaking-the-vicious-cycle |
# Improving the modal UDT optimality result
$$\newcommand{\box}{{\square}}$$
$$\newcommand{\q}[1]{\ulcorner{#1}\urcorner}$$
$$\newcommand{\PA}{\mathrm{PA}}$$
$$\newcommand{\GL}{\mathrm{GL}}$$
$$\newcommand{\NN}{\mathbb{N}}$$
$$\newcommand{\lr}{\leftrightarrow }$$
$$\newcommand{\UDT}{{\mathrm{UDT}}}$$
$$\newco... | https://www.lesswrong.com/posts/5bd75cc58225bf0670374ea9/improving-the-modal-udt-optimality-result |
# Shop for Charity: how to earn proven charities 5% of your Amazon spending in commission
If you shop on Amazon in the countries listed below, you can earn a substantial commission for charity by doing so via the links below. This is a cost-free way to do a lot of good, so I'd encourage you to do so! You can bookmark ... | https://www.lesswrong.com/posts/m5jfy5cPxp3kAFDse/shop-for-charity-how-to-earn-proven-charities-5-of-your |
# Request for suggestions: ageing and data-mining
Imagine you had the following at your disposal:
* A Ph.D. in a biological science, with a fair amount of reading and wet-lab work under your belt on the topic of aging and longevity (but in hindsight, nothing that turned out to leverage any real mechanistic insights... | https://www.lesswrong.com/posts/e4r7Z8omqPD4utyB8/request-for-suggestions-ageing-and-data-mining |
# Superintelligence 11: The treacherous turn
_This is part of a weekly reading group on [Nick Bostrom](http://www.nickbostrom.com/)'s book, [Superintelligence](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111). For more information about the group, and an index of posts so far see ... | https://www.lesswrong.com/posts/B39GNTsN3HocW8KFo/superintelligence-11-the-treacherous-turn |
# Stuart Russell: AI value alignment problem must be an "intrinsic part" of the field's mainstream agenda
[Edge.org](http://edge.org/conversation/the-myth-of-ai) has recently been discussing "the myth of AI". Unfortunately, although Superintelligence is cited in the opening, most of the participants don't seem to have... | https://www.lesswrong.com/posts/S95qCHBXtASmYyGSs/stuart-russell-ai-value-alignment-problem-must-be-an |
# The Hostile Arguer
**“Your instinct is to talk your way out of the situation, but that is an instinct born of prior interactions with reasonable people of good faith, and inapplicable to this interaction…”** – [Ken White](http://www.popehat.com/2014/01/15/the-privilege-to-shut-up/)
One of the Less Wrong Study Hall ... | https://www.lesswrong.com/posts/iX6F5NRmNzKqMZ5Qk/the-hostile-arguer |
# My experience of the recent CFAR workshop
[Originally posted at my blog.](http://kajsotala.fi/2014/11/event-report-cfars-rationality-workshop-england/)
\-\-\-
I just got home from a four-day rationality workshop in England that was organized by the [Center For Applied Rationality](http://rationality.org/) (CFAR). ... | https://www.lesswrong.com/posts/EiNPwK9kTs6hfLfbc/my-experience-of-the-recent-cfar-workshop |
# You have a set amount of "weirdness points". Spend them wisely.
_I've heard of the concept of "weirdness points" many times before, but after a bit of searching I can't find a definitive post describing the concept, so I've decided to make one. As a disclaimer, I don't think the evidence backing this post is all th... | https://www.lesswrong.com/posts/wkuDgmpxwbu2M2k3w/you-have-a-set-amount-of-weirdness-points-spend-them-wisely |
# When the uncertainty about the model is higher than the uncertainty in the model
Most models attempting to estimate or predict some element of the world will come with their own estimates of uncertainty. It could be the Standard Model of physics [predicting the mass](http://en.wikipedia.org/wiki/Standard_Model#Tes... | https://www.lesswrong.com/posts/nLkkzrZvD5mWbLMe3/when-the-uncertainty-about-the-model-is-higher-than-the |
# I'm in a really dark place right now, I think I need help.
Hi!
For a while now I've been having troubles with my life.
Today it got worse. I will likely feel better in a week, but the problems related to a person I love and to searching for the purpose of my life will not be solved, just ignored, as I did for a couple of... | https://www.lesswrong.com/posts/nGpHqJRsTirgDWouC/i-m-in-a-really-dark-place-right-now-i-think-i-need-help |
# Integral versus differential ethics
In population ethics...
-----------------------
Most people start out believing that the following are true:
1. That adding more happy lives is a net positive.
2. That redistributing happiness more fairly is not a net negative.
3. That the [repugnant conclusion](http://plato.... | https://www.lesswrong.com/posts/fhkk75xKfqiqDnLBE/integral-versus-differential-ethics |
# The new GiveWell recommendations are out: here's a summary of the charities
GiveWell have [just announced](http://blog.givewell.org/2014/12/01/our-updated-top-charities/) their latest charity recommendations! What are everyone’s thoughts on them?
A summary: all of the old charities (GiveDirectly, SCI and Deworm the... | https://www.lesswrong.com/posts/mAtNMy74zNGicbQK9/the-new-givewell-recommendations-are-out-here-s-a-summary-of |
# Superintelligence 12: Malignant failure modes
_This is part of a weekly reading group on [Nick Bostrom](http://www.nickbostrom.com/)'s book, [Superintelligence](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111). For more information about the group, and an index of posts so far s... | https://www.lesswrong.com/posts/BqoE5vhPNCB7X6Say/superintelligence-12-malignant-failure-modes |
# [link] The Philosophy of Intelligence Explosions and Advanced Robotics
The philosopher John Danaher has posted a list of [all the posts that he's written on the topic of robotics and AI](http://philosophicaldisquisitions.blogspot.fi/2014/11/the-philosophy-of-intelligence.html). Below is the current version of the li... | https://www.lesswrong.com/posts/7sRPMn8zriesE5iYb/link-the-philosophy-of-intelligence-explosions-and-advanced |
# [LINK] Steven Hawking warns of the dangers of AI
[From the BBC:](http://www.bbc.co.uk/news/technology-30290540)
> \[Hawking\] told the BBC: "The development of full artificial intelligence could spell the end of the human race."
>
> ...
>
> "It would take off on its own, and re-design itself at an ever increasing ... | https://www.lesswrong.com/posts/Zgmmyb83iJQJ8sfha/link-steven-hawking-warns-of-the-dangers-of-ai |
# Link: LessWrong and AI risk mentioned in a Business Insider Article
[Google Has An Internal Committee To Discuss Its Fears About The Power Of Artificial Intelligence](http://www.businessinsider.com/if-google-is-worried-about-artificial-intelligence-then-you-should-be-too)
"Worryingly, cofounder Shane Legg thinks the team's adva... | https://www.lesswrong.com/posts/DJgLmC5pf5j4FufJn/link-lesswrong-and-ai-risk-mentioned-in-a-business-insider |
# Good things to have learned....
I was looking at a discussion of what should be in a college curriculum, and as such discussions seem to go, there was a big list of things everyone should study, and some political claims about what's being offered but shouldn't be.
Instead, what do you wish you'd studied in colleg... | https://www.lesswrong.com/posts/zfZNkAWQxTbm5tLz5/good-things-to-have-learned |
# The Bay Area Solstice

As the holiday season approaches, we continue our tradition of celebrating the winter solstice.
This event is the offspring of Raemon's [New York Solstice](/lw/l35/solstice_2014_ki... | https://www.lesswrong.com/posts/iA52JzRxkHkJsX6XW/the-bay-area-solstice |
# Where is the line between being a good child and taking care of oneself?
I recently saw some posts here about how LW helps with personal stuff and that it's a good idea to post here^1^. Plus, you are the most supportive people I've ever met. I still hesitate. Know my courage. Also, I pretty much always put needs and... | https://www.lesswrong.com/posts/ZKNuXJSDWtSoBFX4q/where-is-the-line-between-being-a-good-child-and-taking-care |
# Could you be Prof Nick Bostrom's sidekick?
If funding were available, the Centre for Effective Altruism would consider hiring someone to work closely with Prof Nick Bostrom to provide anything and everything he needs to be more productive. Bostrom is obviously the Director of the Future of Humanity Institute at Oxfo... | https://www.lesswrong.com/posts/3K6SWChoKwvaMGmTW/could-you-be-prof-nick-bostrom-s-sidekick |
# [Link] Eric S. Raymond - Me and Less Wrong
http://esr.ibiblio.org/?p=6549
> I’ve gotten questions from a couple of different quarters recently about my relationship to the the rationalist community around Less Wrong and related blogs. The one sentence answer is that I consider myself a fellow-traveler and ally of t... | https://www.lesswrong.com/posts/phTEGJ6KdGEYWG67D/link-eric-s-raymond-me-and-less-wrong |
# Moving towards the goal
This post contains some advice. I dare not call it obvious, as the [illusion of transparency](http://en.wikipedia.org/wiki/Illusion_of_transparency) is ever-present. I will call it simple, but people occasionally remind me that they really appreciate the simple advice. So here we go:
1
=
(A... | https://www.lesswrong.com/posts/jfXHYnreYJvrjDQsj/moving-towards-the-goal |
# A Story With Zombies
*(inspired by [Zombies: Seriously, Enough](http://www.chriswooding.com/zombies-seriously-enough/), [Zombies Are So Overdone](http://thewritersadvice.com/2012/07/26/zombies-are-sooooooooo-overdone/), and [Scifi/Fantasy Stories Editors Are Tired Of Seeing: Zombies](http://io9.com/10-science-fictio... | https://www.lesswrong.com/posts/jFzovY2CERF5bd2EW/a-story-with-zombies |
# PSA: Eugine_Nier evading ban?
I know this reeks of witch-hunting, but... I have a hunch that [u/Eugine_Nier](/user/Eugine_Nier/) is back under the guise of [u/Azathoth123](/user/Azathoth123/). Reasons:
* Same political views, with a tendency to be outspoken about them
* Karma hovering in the 70s% for both acco... | https://www.lesswrong.com/posts/d2vZRvJyNboftm9sE/psa-eugine_nier-evading-ban |
# Stupid Questions December 2014
This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Don't be shy, everyone has holes in their knowledge, though the fewer and the smaller we can make them, the better.
Please be respectful of other people's admitting ignorance and don't... | https://www.lesswrong.com/posts/mSc3i4zNF6m6idnDs/stupid-questions-december-2014 |
# Superintelligence 13: Capability control methods
_This is part of a weekly reading group on [Nick Bostrom](http://www.nickbostrom.com/)'s book, [Superintelligence](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111). For more information about the group, and an index of posts so fa... | https://www.lesswrong.com/posts/398Swu6jmczzSRvHy/superintelligence-13-capability-control-methods |
# Does utilitarianism "require" extreme self sacrifice? If not why do people commonly say it does?
Chris Hallquist wrote the following in an article (if you know the article please, please don't bring it up; I don't want to discuss the article in general):
"For example, utilitarianism apparently endorses killing a si... | https://www.lesswrong.com/posts/Y4Adk743X2ZoMBSBi/does-utilitarianism-require-extreme-self-sacrifice-if-not |
# Lifehack Ideas December 2014
> **Life hacking** refers to any trick, shortcut, skill, or novelty method that increases productivity and efficiency, in all walks of life.
— [Wikipedia](http://en.wikipedia.org/wiki/Life_hacking)
This thread is for posting any promising or interesting ideas for lifehacks you've come ... | https://www.lesswrong.com/posts/tkJZvRJTqfMjk6obr/lifehack-ideas-december-2014 |
# Estimating the cost-effectiveness of research
At a societal level, how much money should we put into medical research, or into fusion research? For individual donors seeking out the best opportunities, how can we compare the expected cost-effectiveness of research projects with more direct interventions?
Over the p... | https://www.lesswrong.com/posts/R8ywQZsare2ANHLhQ/estimating-the-cost-effectiveness-of-research |
# What Peter Thiel thinks about AI risk
This is probably the clearest statement from him on the issue:
http://betaboston.com/news/2014/12/10/audio-peter-thiel-visits-boston-university-to-talk-entrepreneurship-and-backing-zuck/
25:30 mins in
TL;DR: he thinks it's an issue but also feels AGI is very distant and hence ... | https://www.lesswrong.com/posts/gWCzxYPrkknFJ5C8N/what-peter-thiel-thinks-about-ai-risk |
# Beware The Man Of One Study
Aquinas famously [said](http://en.wikipedia.org/wiki/Homo_unius_libri): beware the man of one book. I would add: beware the man of one study.
For example, take medical research. Suppose a certain drug is weakly effective against a certain disease. After a few years, a bunch of different ... | https://www.lesswrong.com/posts/ythFNoiAotjvuEGkg/beware-the-man-of-one-study |
# Harper's Magazine article on LW/MIRI/CFAR and Ethereum
Cover title: “Power and paranoia in Silicon Valley”; article title: [“Come with us if you want to live: Among the apocalyptic libertarians of Silicon Valley”](http://harpers.org/archive/2015/01/come-with-us-if-you-want-to-live/) (mirrors: [1](https://pdf.yt/d/-j... | https://www.lesswrong.com/posts/pfHrgwZi38GBckzFL/harper-s-magazine-article-on-lw-miri-cfar-and-ethereum |
# A forum for researchers to publicly discuss safety issues in advanced AI
MIRI has an organizational goal of putting a wider variety of mathematically proficient people in a position to advance our understanding of beneficial smarter-than-human AI. The [MIRIx workshops](http://intelligence.org/mirix/), our new [resea... | https://www.lesswrong.com/posts/B8FSAGHRJoP5qeEtj/a-forum-for-researchers-to-publicly-discuss-safety-issues-in |
# Debunked And Well-Refuted
**I.**
As usual, I was insufficiently pessimistic.
I infer this from *The Federalist*‘s [article on campus rape](http://thefederalist.com/2014/12/11/new-doj-data-on-sexual-assaults-college-students-are-actually-less-likely-to-be-victimized/):
> A new report on sexual assault released tod... | https://www.lesswrong.com/posts/kdmCm5NQTpqhJmGm6/debunked-and-well-refuted |
# Podcast: Rationalists in Tech
I'll appreciate feedback on a new podcast, [Rationalists in Tech](http://rationalistsintech.blogspot.com/).
I'm interviewing founders, executives, CEOs, consultants, and other people in the tech sector, mostly software. Thanks to [Laurent Bossavit](/user/Morendil), [Daniel Reeves](... | https://www.lesswrong.com/posts/M458wmvvfhKtnzTDQ/podcast-rationalists-in-tech |
# Welcome to Less Wrong! (7th thread, December 2014)
If you've recently joined the [Less Wrong community](/lw/1/about_less_wrong), please leave a comment here and introduce yourself. We'd love to know who you are, what you're doing, what you value, [how you came to identify as an aspiring rationalist](/lw/2/tell_your_... | https://www.lesswrong.com/posts/eqaro7sMe5xw2kJWc/welcome-to-less-wrong-7th-thread-december-2014 |
# Has LessWrong Ever Backfired On You?
Several weeks ago I wrote a heavily upvoted post called [**_Don't Be Afraid of Asking Personally Important Questions on LessWrong_**](http://lesswrong.com/lw/l5w/dont_be_afraid_of_asking_personally_important/). I thought it would only be due diligence if I tried to track users on... | https://www.lesswrong.com/posts/KbXE7t63G3oZ5ZSGh/has-lesswrong-ever-backfired-on-you |
# Kickstarting the audio version of the upcoming book "The Sequences"
LessWrong is getting ready to release an actual book that covers most of the material found in the Sequences.
There have been a few posts about it in the past, here are two: [the](/lw/h7t/help_us_name_the_sequences_ebook/) [title debate](/lw/h7t/h... | https://www.lesswrong.com/posts/RsR8DFhibWTvhxagK/kickstarting-the-audio-version-of-the-upcoming-book-the |
# New paper from MIRI: "Toward idealized decision theory"
I'm pleased to announce a new paper from MIRI: _[Toward Idealized Decision Theory](http://intelligence.org/files/TowardIdealizedDecisionTheory.pdf)_.
Abstract:
> This paper motivates the study of decision theory as necessary for aligning smarter-than-human ar... | https://www.lesswrong.com/posts/3aQizCnaNEfda32DM/new-paper-from-miri-toward-idealized-decision-theory |
# Using machine learning to predict romantic compatibility: empirical results
Overview
--------
For many people, having a satisfying romantic relationship is one of the most important aspects of life. Over the past 10 years, online dating websites have gained traction, and dating websites have access to large amounts... | https://www.lesswrong.com/posts/uBNj85TAB2cikby2W/using-machine-learning-to-predict-romantic-compatibility |
# Entropy and Temperature
Eliezer Yudkowsky [previously wrote](/lw/o5/the_second_law_of_thermodynamics_and_engines_of/) (6 years ago!) about the second law of thermodynamics. Many commenters were skeptical about the statement, "if you know the positions and momenta of every particle in a glass of water, it is at absol... | https://www.lesswrong.com/posts/QqqMdm7FXMgZzadZJ/entropy-and-temperature |
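The skeptical-sounding claim quoted in that excerpt follows directly from the information-theoretic definition of entropy. A quick sketch (notation mine, not taken from the post):

```latex
% Gibbs/Shannon entropy over microstates with probabilities p_i:
S = -k_B \sum_i p_i \ln p_i
% Exact knowledge of every position and momentum concentrates the
% distribution on a single microstate (one p_j = 1, the rest 0):
S = -k_B \,(1 \cdot \ln 1) = 0
% By the third law, S = 0 is precisely the entropy of a system at
% absolute zero -- hence the "absolute zero" phrasing in the quote.
```

On this view entropy is a property of the describer's knowledge, not of the water alone, which is what the commenters were skeptical about.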
# Giving What We Can - New Year drive
If you’ve been planning to get around to maybe thinking about Effective Altruism, we’re making your job easier. A group of UK students has [set up a drive](https://www.facebook.com/events/1581545938749145/) for people to sign up to the [Giving What We Can](https://givingwhatwecan.... | https://www.lesswrong.com/posts/MHarArYfjuDyaJwFo/giving-what-we-can-new-year-drive |
# (Very Short) PSA: Combined Main and Discussion Feed
For anyone who's annoyed by having to check newest submissions for Main and Discussion separately, there is a feed for combined submissions from both, in the form of [Newest Submissions - All](/r/all/new/) ([RSS feed](/r/all/new/.rss)). (There's also [Comments - A... | https://www.lesswrong.com/posts/bqy6vdgCS5AXpxeia/very-short-psa-combined-main-and-discussion-feed |
# Rationality Jokes Thread
This is an experimental thread. It is somewhat in the spirit of the Rationality Quotes Thread but without the requirements and with a focus on humorous value. You may post insightful jokes, nerd or math jokes or try out rationality jokes of your own invention.
ADDED: Apparently there has b... | https://www.lesswrong.com/posts/YixiQqbjCC3eRK7d8/rationality-jokes-thread |
# Nobody Is Perfect, Everything Is Commensurable
**I.**
Recently spotted on Tumblr:
> “This is going to be an unpopular opinion but I see stuff about ppl not wanting to reblog ferguson things and awareness around the world because they do not want negativity in their life plus it will cause them to have anxiety. The... | https://www.lesswrong.com/posts/qw3Z79HELMsmLkL9F/nobody-is-perfect-everything-is-commensurable |
# Letting it go: when you shouldn't respond to someone who is wrong
I'm requesting that people follow a simple guide when determining whether to respond to a post. This simple algorithm should raise the quality of discussion here.
* If you care about the answer to a question, you will research it.
* If you don't ... | https://www.lesswrong.com/posts/RmzP5dedZrmQ5BBhG/letting-it-go-when-you-shouldn-t-respond-to-someone-who-is |
# Signalling with T-Shirt slogans
It kind of started when I got this T-shirt as a present two years ago:

It is not just a slogan that is quickly filtered out under the heading 'generic ad-like content'. It invites checking where the... | https://www.lesswrong.com/posts/pHSrmZTXtqBYM6ojD/signalling-with-t-shirt-slogans |
# Training Reflective Attention
_Crossposted at [Agenty Duck](http://agentyduck.blogspot.com/2014/12/mindfulness.html)_
> And somewhere in the back of his mind was a small, small note of confusion, a sense of something wrong about that story; and it should have been a part of Harry's art to notice that tiny note, but... | https://www.lesswrong.com/posts/FpLuKu8HCdRHKbcPn/training-reflective-attention |
# How to Read
_Part of my attempt to provide [a bunch of unsolicited, anecdotal evidence](http://peterhurford.tumblr.com/post/105718667101/anecdotal-advice-the-authoritative-index) that probably doesn't work for everyone._
_-_
Of course you already know how to read. But do you know how to read well?
Many people wh... | https://www.lesswrong.com/posts/Ysu7pLmMRbnYgoFnk/how-to-read |
# Superintelligence 15: Oracles, genies and sovereigns
_This is part of a weekly reading group on [Nick Bostrom](http://www.nickbostrom.com/)'s book, [Superintelligence](http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111). For more information about the group, and an index of posts s... | https://www.lesswrong.com/posts/yTy2Fp8Wm7m8rHHz5/superintelligence-15-oracles-genies-and-sovereigns |
# State of the Solstice 2014
_This'll be the first of a collection of posts about the growing Secular Solstice. This post gives an overview of what happened this year. Future posts will explore what types of Solstice content resonates with which people, what I've learned about how Less Wrong culture intersects with ot... | https://www.lesswrong.com/posts/wTeEpNMok2eiAsTd5/state-of-the-solstice-2014 |
# MIRI's technical research agenda
I'm pleased to announce the release of [Aligning Superintelligence with Human Interests: A Technical Research Agenda](http://intelligence.org/files/TechnicalAgenda.pdf), written by Benja and me (with help and input from many, many others). This document summarizes and motivates MIRI's ... | https://www.lesswrong.com/posts/d3gMZmSSAHXaGisyJ/miri-s-technical-research-agenda |
# Pomodoro for Programmers
Unless you’ve been living under a productivity rock, you probably have heard of the Pomodoro Technique, where you use a timer to do 25 minutes of focused work, and then take a five minute break.
I used to use this technique a lot, up until I started doing computer programming.
You see, wit... | https://www.lesswrong.com/posts/c3SKeDSycHTkmuvyR/pomodoro-for-programmers |
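The cycle the excerpt describes (25 minutes of focused work, then a five-minute break) can be sketched as a schedule generator; `pomodoro_schedule` is a hypothetical helper for illustration, not anything from the post:

```python
def pomodoro_schedule(cycles, work=25, rest=5):
    """Return a list of (label, minutes) intervals for `cycles` pomodoros:
    each cycle is a work interval followed by a break interval."""
    schedule = []
    for _ in range(cycles):
        schedule.append(("work", work))
        schedule.append(("break", rest))
    return schedule

sched = pomodoro_schedule(2)
# → [("work", 25), ("break", 5), ("work", 25), ("break", 5)]
```

A real timer would walk this list and sleep or notify per interval; the durations are the standard Pomodoro defaults, which the post goes on to argue fit programming poorly.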
# Why "Changing the World" is a Horrible Phrase
Steve Jobs famously convinced John Sculley from Pepsi to join Apple Computer with the line, “Do you want to sell sugared water for the rest of your life? Or do you want to come with me and change the world?”. This sounds convincing until one thinks closely about it.
Ste... | https://www.lesswrong.com/posts/Du8G7DRRntcqWrucX/why-changing-the-world-is-a-horrible-phrase |
# Open and closed mental states
I learned a game at Burning Man this year that was about connecting to people and reading their nonverbal signals, called the "open-closed" game (h/t Minda Myers). There are two people in the game, and one is trying to approach the other and place a hand on their shoulder. No words can ... | https://www.lesswrong.com/posts/MYQgaisqQRzMajx2f/open-and-closed-mental-states |
# CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype
Summary: We outline CFAR’s purpose, our history in 2014, and our plans heading into 2015.
* [Highlights from 2014](#highlights).
* [Improving operations](#operations).
* [Attempts to go beyond the current workshop and t... | https://www.lesswrong.com/posts/KDGnsReYomusL89Rs/cfar-in-2014-continuing-to-climb-out-of-the-startup-pit |
# We Haven't Uploaded Worms
> In theory you can upload someone's mind onto a computer, allowing them to live forever as a digital form of consciousness, just like in the Johnny Depp film Transcendence.
>
> But it's not just science fiction. Sure, scientists aren't anywhere near close to achieving such a feat with human... | https://www.lesswrong.com/posts/B5auLtDfQrvwEkw4Q/we-haven-t-uploaded-worms |
# The buildup to Jaynes?
Not too long ago, I asked LessWrong [which math topics to learn](/lw/l91/the_best_mathematicallyinformed_topics/). Eventually, I want to ask for what the prerequisites for each of those topics are and how I should go about learning them. This is a special case of that.
I'm rereading the sequ... | https://www.lesswrong.com/posts/gcL4xmAeNT5QEnPWg/the-buildup-to-jaynes |
# The Rubber Hand Illusion and Preaching to the Unconverted
It seems that the CFAR workshops so far have been dedicated to people who have preconceptions pretty close in ideaspace to the sorts of ideas proposed on LW and by the institutions related to it. This is not a criticism; it's easier to start out this way: as ... | https://www.lesswrong.com/posts/okTZBDTnYYYs5ha93/the-rubber-hand-illusion-and-preaching-to-the-unconverted |
# [LINK] Yudkowsky's Abridged Guide to Intelligent Characters
Some of you have likely seen this already, but for those of you who haven't, Eliezer recently finished a series of Tumblr posts on writing intelligent characters in fiction. It can be found at http://yudkowsky.tumblr.com/writing and is IMO worth a read. | https://www.lesswrong.com/posts/NWZ8YABJb4KoJWncK/link-yudkowsky-s-abridged-guide-to-intelligent-characters |
# Recent AI safety work
(Crossposted from [ordinary ideas](http://ordinaryideas.wordpress.com/)).
I’ve recently been thinking about AI safety, and some of the writeups might be interesting to some LWers:
1. Ideas for building useful agents without goals: [approval-directed agents](https://medium.com/@paulfchristia... | https://www.lesswrong.com/posts/BCeFcAGc5ayggJ3h5/recent-ai-safety-work |
# Dark Arts 101: Be rigorous, on average
I'm reading George Steiner's 1989 book on literary theory, _Real Presences_. Steiner is a literary theorist who achieved the trifecta of having appointments at Oxford, Cambridge, and Harvard. His book demonstrates an important Dark Arts method of argument.
So far, Steiner's ar... | https://www.lesswrong.com/posts/N4sGXcNue2jMET8uP/dark-arts-101-be-rigorous-on-average |
# Identity crafting
I spend a LOT of time on what I'll call "identity crafting". It's probably my most insidious procrastination tactic--far worse than, say, Facebook or Reddit.
What do I mean by "identity crafting"? Here are some examples:
* Brainstorming areas of my life where I want to improve (e.g. social sk... | https://www.lesswrong.com/posts/QDGiCjvP7KpJAbMd6/identity-crafting |
# Overpaying for happiness?
Happy New Year, everyone!
In the past few months I've been thinking several thoughts that all seem to point in the same direction:
1) People who live in developed Western countries usually make and spend much more money than people in poorer countries, but aren't that much happier. It fee... | https://www.lesswrong.com/posts/NToH5vtBY8ShiEeXm/overpaying-for-happiness |
# Understanding Who You Really Are
> Here are 14 ways in which you reveal who you really are. If you’re brave enough, or if you dare, aim to share who you really are, little by little, every day, with those you trust.
>
> \- A typical 'Who You Really Are' article on [Lifehack](http://www.lifehack.org/articles/communic... | https://www.lesswrong.com/posts/7Aq5N5py3vnRKc6eJ/understanding-who-you-really-are |
# Brain-centredness and mind uploading
The naïve way of understanding mind uploading is "we take the connectome of a brain, including synaptic connection weights and characters, and emulate it in a computer". However, people want their _personalities_ to be uploaded, not just brains. That is more than just replicating... | https://www.lesswrong.com/posts/oQMZzr4jzzksdNdMe/brain-centredness-and-mind-uploading |
# [Link] Neural networks trained on expert Go games have just made a major leap
From the [arXiv](http://arxiv.org/abs/1412.6564):
> **Move Evaluation in Go Using Deep Convolutional Neural Networks**
>
> Chris J. Maddison, Aja Huang, Ilya Sutskever, David Silver
>
> The game of Go is more challenging than other boar... | https://www.lesswrong.com/posts/WjoPDTdTRaiLzMnQS/link-neural-networks-trained-on-expert-go-games-have-just |
# Non-obvious skills with highly measurable progress?
A lot of my significant personal improvement happened as a result of highly measurable progress and tight feedback loops. For example:
* Project Euler
* Go (the game has a very accurate ranking system)
* Strength training
However, these are somewhat obviou... | https://www.lesswrong.com/posts/dHdW6M5hzn8Fupvzh/non-obvious-skills-with-highly-measurable-progress |
# The Superstar Effect
> Modern microeconomist Alfred Marshall explains that technology has greatly extended the power and reach of the planet's most gifted performers.... He referenced the classic case of the British opera singer Elizabeth Billington. She was a well-acclaimed soprano with a strong voice that, naturally, did... | https://www.lesswrong.com/posts/aRv9PzpFvMEwBfSrn/the-superstar-effect
# Graphical Assumption Modeling
The Flaws of Fermi Estimates
----------------------------
Why don’t we use more Fermi estimates?\[1\] Many of us want to become more rational, and we have plenty of numbers and important variables to consider. There are a few reasons.
Fermi calculations get really messy. Aft... | https://www.lesswrong.com/posts/KigyGYE7nZZwPvN4s/graphical-assumption-modeling |
# Compartmentalizing: Effective Altruism and Abortion
Cross-posted [on my blog](https://effectivereaction.wordpress.com/2014/12/31/blind-spots-compartmentalizing/) and the [effective altruism forum](http://effective-altruism.com/ea/d4/blind_spots_compartmentalizing/) with some minor tweaks; apologies if some of the fo... | https://www.lesswrong.com/posts/E6dDvRCr8eeTDJrAG/compartmentalizing-effective-altruism-and-abortion |
# Negative polyamory outcomes?
Related article: [Polyhacking](/lw/79x/polyhacking/)
Note: This article was posted [earlier](/r/discussion/lw/lhn/negative_polyamory_outcomes/) for less than a day but was accidentally deleted.
Although polyamory isn't one of the "official" topics of LW interest (human cognition, AI, proba... | https://www.lesswrong.com/posts/WKSmQtARnW4mPEpXL/negative-polyamory-outcomes |
# 2014 Survey Results
Thanks to everyone who took the 2014 Less Wrong Census/Survey. Extra thanks to Ozy, who did a lot of the number crunching work.
This year's results are below. Some of them may make more sense in the context of the original survey questions, which [can be seen here](https://docs.google.com/forms/... | https://www.lesswrong.com/posts/YAkpzvjC768Jm2TYb/2014-survey-results |
# Low Hanging fruit for buying a better life
What can I purchase with $100 that will be the best thing I can buy to make my life better?
I've decided to budget some regular money to improving my life each month. I'd like to start with low hanging fruit for obvious reasons - but when I sat down to think of improvement... | https://www.lesswrong.com/posts/CYDSRKEJruoKgdBXa/low-hanging-fruit-for-buying-a-better-life |
# Exams and Overfitting
When I hear something like _"What's going to be on the exam?"_, part of me gets indignant. WHAT?!?! You're defeating the whole point of the exam! You're committing the Deadly Sin of Overfitting!
Let me step back and explain my view of exams.
When I take a class, my goal is to learn the mat... | https://www.lesswrong.com/posts/nmnMuKLwxzKFguBwm/exams-and-overfitting |
# Programming-like activities?
Programming is quite a remarkable activity:
* It has an extremely low barrier to entry
* You don't need expensive equipment
* You don't need to be in a particular location
* You don't need special credentials
* You can find information / resources just by op... | https://www.lesswrong.com/posts/6jaSMF5JHB5SSYPbs/programming-like-activities
# 2015 Repository Reruns - Boring Advice Repository
This is the first post of the 2015 repository rerun, which [appears to be a good idea](/r/discussion/lw/lht/open_thread_jan_511_2015/btjs). The motivation for this rerun is that while the 12 [repositories](/lw/i64/repository_repository/) (go look them up, they're awe... | https://www.lesswrong.com/posts/w8BDunkugbkiTEBk8/2015-repository-reruns-boring-advice-repository |
# The Importance of Sidekicks
\[Reposted from my [personal blog](http://swimmer963.com/?p=383).\]
Mindspace is wide and deep. “People are different” is a truism, but even knowing this, it’s still easy to underestimate.
I spent much of my initial engagement with the rationality community feeling weird and different. ... | https://www.lesswrong.com/posts/BfBF6T6HA82zBxPrv/the-importance-of-sidekicks |
# Memes and Rational Decisions
In 2004, [Michael Vassar](http://en.wikipedia.org/wiki/Michael_Vassar) gave the following talk about how humans can reduce existential risk, titled Memes and Rational Decisions, to some transhumanists. It is well-written and gives actionable advice, much of which is unfamiliar to the con... | https://www.lesswrong.com/posts/AzByGtPNPXoJvCLKW/memes-and-rational-decisions |
# Questions of Reasoning under Logical Uncertainty
I'm pleased to announce a new paper from MIRI: _[Questions of Reasoning Under Logical Uncertainty](https://intelligence.org/files/QuestionsLogicalUncertainty.pdf)_.
Abstract:
> A logically uncertain reasoner would be able to reason as if they know both a programming... | https://www.lesswrong.com/posts/ZxBBhzFSP6q4cz4Fv/questions-of-reasoning-under-logical-uncertainty |
# Existential Risk and Existential Hope: Definitions
I'm pleased to announce [Existential Risk and Existential Hope: Definitions](http://www.fhi.ox.ac.uk/Existential-risk-and-existential-hope.pdf), a short new FHI technical report.
Abstract:
> We look at the strengths and weaknesses of two existing definitions of ex... | https://www.lesswrong.com/posts/JjY8Yq9YdEAHc7Lkb/existential-risk-and-existential-hope-definitions |