| url | post_id | title | author | content | date |
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/fyrqvFSbGqrdwNhrg/how-much-have-i-been-playing | fyrqvFSbGqrdwNhrg | How Much Have I Been Playing? | jkaufman | With Kingfisher I'm starting to need to think about how many gigs I should accept, which is good and bad! I like playing a lot, but since I also work full time, have three kids, etc. I need to be careful not to say yes to too many things. It would be helpful to know how much I've been playing, so I went back over my ... | 2024-03-12 |
https://www.lesswrong.com/posts/FwNPgj9Wnu4tarKLK/bias-augmented-consistency-training-reduces-biased-reasoning | FwNPgj9Wnu4tarKLK | Bias-Augmented Consistency Training Reduces Biased Reasoning in Chain-of-Thought | miles | [Twitter thread] I'm not going to add much additional commentary at the moment and will just let people check out the paper! But to give a bit more context: This paper is building off prior work we have done showing that chain-of-thought explanations can be misleading, which I wrote about on the alignment forum here. B... | 2024-03-11 |
https://www.lesswrong.com/posts/j5jKWfkbpKGTnzFky/ai-safety-action-plan-a-state-commissioned-report | j5jKWfkbpKGTnzFky | AI Safety Action Plan - A report commissioned by the US State Department | agucova | null | 2024-03-11 |
https://www.lesswrong.com/posts/DRW2GavKSmWrXMyZJ/a-discussion-of-ai-risk-and-the-cost-benefit-calculation-of | DRW2GavKSmWrXMyZJ | A discussion of AI risk and the cost/benefit calculation of stopping or pausing AI development | DuncanFowler | I've been interested in AI risk for a very long time, but I've never written that much about it because I always felt other people were better at discussing the situation. I have begun to revise this opinion, and decided to write down the basic points of disagreement I have, which rather ran one longer than I thought i... | 2024-03-11 |
https://www.lesswrong.com/posts/am5wtA6TFPZWEfraS/linkpost-among-the-a-i-doomsayers-the-new-yorker | am5wtA6TFPZWEfraS | Among the A.I. Doomsayers - The New Yorker | agucova | null | 2024-03-11 |
https://www.lesswrong.com/posts/NbnDb7nfqvDj9Kjqn/be-more-katja | NbnDb7nfqvDj9Kjqn | Be More Katja | Nathan Young | Katja is widely respected amongst the rationalists and, according to Hive, she is one of the most followed/respected EA accounts[1]. But she doesn't give off the same vibe as many impact olympians. She doesn’t have iron self-will, nor does she manage a huge team. She hasn't got all the facts at her fingertips. But sh... | 2024-03-11 |
https://www.lesswrong.com/posts/KA2HxJfhz3CSbLdbL/open-thread-spring-2024 | KA2HxJfhz3CSbLdbL | Open Thread Spring 2024 | habryka4 | If it’s worth saying, but not worth its own post, here's a place to put it. If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss f... | 2024-03-11 |
https://www.lesswrong.com/posts/qpsFXCTB6ZvEkQxnq/new-social-credit-formalizations | qpsFXCTB6ZvEkQxnq | New social credit formalizations | KatjaGrace | Here are some classic ways humans can get some kind of social credit with other humans: Do something for them such that they will consider themselves to ‘owe you’ and do something for you in future. Be consistent and nice, so that they will consider you ‘trustworthy’ and do cooperative activities with you that would be ... | 2024-03-11 |
https://www.lesswrong.com/posts/o64GLrKahR8QrbFQW/how-disagreements-about-evidential-correlations-could-be-1 | o64GLrKahR8QrbFQW | How disagreements about Evidential Correlations could be settled | martinsq | Since beliefs about Evidential Correlations don't track any direct ground truth, it's not obvious how to resolve disagreements about them, which is very relevant to acausal trade. Here I present what seems like the only natural method (Third solution below). Ideas partly generated with Johannes Treutlein. Say two agent... | 2024-03-11 |
https://www.lesswrong.com/posts/gZBgmDFqqyw3Lghok/ai-incident-reporting-a-regulatory-review | gZBgmDFqqyw3Lghok | AI Incident Reporting: A Regulatory Review | deric-cheng | This article is the first in a series of ~10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance (e.g. incident reporting, safety evals, model registries, etc.).... | 2024-03-11 |
https://www.lesswrong.com/posts/gxoHkmGg5RuYvx6Ka/the-astronomical-sacrifice-dilemma | gxoHkmGg5RuYvx6Ka | The Astronomical Sacrifice Dilemma | matthew-mcredmond | Epistemic Status: I think the dilemma as outlined in Section 1 follows from well-established ideas about Astronomical Waste. However, given that I have not seen it anywhere before I might have made some oversight I am unaware of. You don't know what you don't know but maybe someone on LessWrong does. UPDATE: I have fou... | 2024-03-11 |
https://www.lesswrong.com/posts/5wqFoHBBgpdHeCLS6/storable-votes-with-a-pay-as-you-win-mechanism-a | 5wqFoHBBgpdHeCLS6 | Storable Votes with a Pay as you win mechanism: a contribution for institutional design | arturo-macias | I joined the EA Forum in 2022, with a post describing my interests and agenda. I also declared in my first comment that in my view, among the main existential risk bottlenecks for this Dangerous Century, a critical one is institutional stagnation. E.O Wilson famously said: "The real problem of humanity is the following... | 2024-03-11 |
https://www.lesswrong.com/posts/94K6pskgqBmuxsJLx/results-from-an-adversarial-collaboration-on-ai-risk-fri | 94K6pskgqBmuxsJLx | Results from an Adversarial Collaboration on AI Risk (FRI) | josh-rosenberg | Authors of linked report: Josh Rosenberg, Ezra Karger, Avital Morris, Molly Hickman, Rose Hadshar, Zachary Jacobs, Philip Tetlock[1] Today, the Forecasting Research Institute (FRI) released “Roots of Disagreement on AI Risk: Exploring the Potential and Pitfalls of Adversarial Collaboration,” which discusses the results... | 2024-03-11 |
https://www.lesswrong.com/posts/LZJJK6fuuQtTLRSu9/some-problematic-aesthetics-of-what-constitutes-good-work-in | LZJJK6fuuQtTLRSu9 | Some (problematic) aesthetics of what constitutes good work in academia | steve2152 | (Not-terribly-informed rant, written in my free time.) Terminology note: When I say “an aesthetic”, I mean an intuitive (“I know it when I see it”) sense of what a completed paper, project, etc. is ideally “supposed” to look like. It can include both superficial things (the paper is properly formatted, the startup has ... | 2024-03-11 |
https://www.lesswrong.com/posts/CuqmDTMwTM64FthLy/tend-to-your-clarity-not-your-confusion | CuqmDTMwTM64FthLy | Tend to your clarity, not your confusion | sts | Repost from https://amoretlicentia.substack.com/ Modern life is weird. For the more privileged among us, the options of what we could do with our time grow exponentially by the year, by the day. My interests are infinite, and I’m lucky to have both the wits and the means to follow just about any of them. Meanwhile, I a... | 2024-03-11 |
https://www.lesswrong.com/posts/vgCoy4bBrDw9LPrpW/what-do-we-know-about-the-ai-knowledge-and-views-especially | vgCoy4bBrDw9LPrpW | What do we know about the AI knowledge and views, especially about existential risk, of the new OpenAI board members? | Zvi | They have announced three new board members in addition to Altman, but we seem to know almost nothing about their views or knowledge on any AI-related subjects? What if anything do we know? From OpenAI: We’re announcing three new members to our Board of Directors as a first step towards our commitment to expansion: Dr.... | 2024-03-11 |
https://www.lesswrong.com/posts/pGmxP8YA36dtDq9q7/epiphenomenalism-leads-to-eliminativism-about-qualia | pGmxP8YA36dtDq9q7 | Epiphenomenalism leads to eliminativism about qualia | Clément L | Introduction In this post I will explain why I think that certain forms of realism about conscious experience face some issues that ultimately lead them to the conclusion that our belief that consciousness exists is not reliable, and thus consciousness may not exist at all, as counterintuitively as it seems. This concl... | 2024-03-11 |
https://www.lesswrong.com/posts/uxzDLD4WsiyrBjnPw/artificial-general-intelligence-an-extremely-brief-faq | uxzDLD4WsiyrBjnPw | “Artificial General Intelligence”: an extremely brief FAQ | steve2152 | (Crossposted from twitter for easier linking.) (Intended for a broad audience—experts already know all this.) When I talk about future “Artificial General Intelligence” (AGI), what am I talking about? Here’s a handy diagram and FAQ: “Are you saying that ChatGPT is a right-column thing?” No. Definitely not. I think the ... | 2024-03-11 |
https://www.lesswrong.com/posts/GjsbNNP2dv9p3LrDG/the-best-essay-paul-graham | GjsbNNP2dv9p3LrDG | The Best Essay (Paul Graham) | Chris_Leong | Sharing because a lot of us authoring posts on Less Wrong are trying to write something that is useful and insightful. | 2024-03-11 |
https://www.lesswrong.com/posts/rYq6joCrZ8m62m7ej/how-could-i-have-thought-that-faster | rYq6joCrZ8m62m7ej | "How could I have thought that faster?" | mesaoptimizer | I stumbled upon a Twitter thread where Eliezer describes what seems to be his cognitive algorithm that is equivalent to Tune Your Cognitive Strategies, and have decided to archive / repost it here. Sarah Constantin: I really liked this example of an introspective process, in this case about the "life problem" of schedu... | 2024-03-11 |
https://www.lesswrong.com/posts/nWRj6Ey8e5siAEXbK/simple-versus-short-higher-order-degeneracy-and-error-1 | nWRj6Ey8e5siAEXbK | Simple versus Short: Higher-order degeneracy and error-correction | dmurfet | TLDR: The simplicity bias in Bayesian statistics is not just a bias towards short description length. The folklore relating the simplicity bias in Bayesian statistics to description length is incomplete: while it is true that the fewer parameters you use the better, the true complexity measure which appears in the math... | 2024-03-11 |
https://www.lesswrong.com/posts/RbynKk3evb6RiLryL/deconstructing-bostrom-s-classic-argument-for-ai-doom | RbynKk3evb6RiLryL | Deconstructing Bostrom's Classic Argument for AI Doom | nora-belrose | I had a pretty great discussion with social psychologist and philosopher Lance Bush recently about the orthogonality thesis, which ended up turning into a broader analysis of Nick Bostrom's argument for AI doom as presented in Superintelligence, and some related issues. While the video is intended for a general audienc... | 2024-03-11 |
https://www.lesswrong.com/posts/iQTnCfhQDPZtFq7WZ/some-thoughts-on-concept-formation-and-use-in-agents | iQTnCfhQDPZtFq7WZ | Some Thoughts on Concept Formation and Use in Agents | CatGoddess | Note: I wrote this document over a year ago and have decided to post it with minimal edits; it isn't entirely up to date with my current thinking on the subject. Imagine you enter a room that looks like this: Despite never having visited this room, you can make inferences about it. If nobody waters the plant in the cor... | 2024-03-11 |
https://www.lesswrong.com/posts/zDvtAxhxY5vYQwHbG/steelmanning-as-an-especially-insidious-form-of-strawmanning | zDvtAxhxY5vYQwHbG | Steelmanning as an especially insidious form of strawmanning | Kalciphoz | Edit: made some small changes to prevent certain gross mischaracterizations of the argument. The core argument remains completely unchanged. Among intelligent people with at least some familiarity with argumentative norms, surface level disagreements tend to be ephemeral because, even if some given debate about the iss... | 2024-03-11 |
https://www.lesswrong.com/posts/DvRBSzFjfaPYBhwmj/one-shot-strategy-games | DvRBSzFjfaPYBhwmj | One-shot strategy games? | Raemon | I'm looking for computer games that involve strategy, resource management, hidden information, and management of "value of information" (i.e. figuring out when to explore or exploit), which: *can* be beaten in 30 – 120 minutes on your first try (or, there's a clear milestone that's about that long) but, it'd be pretty h... | 2024-03-11 |
https://www.lesswrong.com/posts/xZg5w27htahZ6jjK8/replacing-the-water-heater-s-anode | xZg5w27htahZ6jjK8 | Replacing the Water Heater's Anode | jkaufman | We installed a new water heater 8 years ago, and since then I've ignored it. It's an indirect model, heated by the same gas boiler that heats our house, and it has done its job well. When I was thinking about heat pumps, however, I reread the manuals for our existing system and noticed that the manufacturer recommends ... | 2024-03-11 |
https://www.lesswrong.com/posts/qykrYY6rXXM7EEs8Q/understanding-sae-features-with-the-logit-lens | qykrYY6rXXM7EEs8Q | Understanding SAE Features with the Logit Lens | Jbloom | This work was produced as part of the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort, with support from Neel Nanda and Arthur Conmy. Joseph Bloom is funded by the LTFF, Manifund Regranting Program, donors and LightSpeed Grants. This post makes extensive use of Neuronpedia, a platform for interpretabilit... | 2024-03-11 |
https://www.lesswrong.com/posts/EZEiBgP9wxiAvtYJH/advice-needed-does-using-a-llm-compomise-my-personal | EZEiBgP9wxiAvtYJH | Advice Needed: Does Using a LLM Compomise My Personal Epistemic Security? | naomi-2 | I have been using Claude 2.1 for a few months to solve serious problems in my life and get coaching and support. I need Claude to become functional, mentally well and funded enough to contribute to Utilitarianism and Alignment by donating to the Center for Long Term Risk and other interventions in these last years of E... | 2024-03-11 |
https://www.lesswrong.com/posts/bbkrzXGwpxtusPsuG/briefly-extending-differential-optimization-to-distributions | bbkrzXGwpxtusPsuG | Briefly Extending Differential Optimization to Distributions | Jemist | I've done some work on a definition of optimization which applies to "trajectories" in deterministic, differentiable models. What happens when we try and introduce uncertainty? Suppose we have the following system consisting of three variables, the past P, future F, and some agent A. The agent "acts" on the system to p... | 2024-03-10 |
https://www.lesswrong.com/posts/3jFTf7bSza6gC5mkN/evolution-did-a-surprising-good-job-at-aligning-humans-to | 3jFTf7bSza6gC5mkN | Evolution did a surprising good job at aligning humans...to social status | elityre | [This post is a slightly edited tangent from my dialogue with John Wentworth here. I think the point is sufficiently interesting and important that I wanted to make it as a top level post, and not leave it buried in that dialog on mostly another topic.] The conventional story is that natural selection failed extreme... | 2024-03-10 |
https://www.lesswrong.com/posts/XkmYTBGXLnPDXqg44/pausing-ai-is-positive-expected-value | XkmYTBGXLnPDXqg44 | Pausing AI is Positive Expected Value | Liron | The PauseAI (⏸️) movement often gets this pushback: “You're not factoring in all the benefits of good AI!” “Stopping AI progress is also a doom scenario!” To which I reply: If you agree P(doom) from building superintelligent AI before knowing how to align or control it is 5%+, try doing the basic expected-value calcula... | 2024-03-10 |
https://www.lesswrong.com/posts/xJqCTSCcJ9ypvC39Q/an-optimistic-solution-to-the-fermi-paradox | xJqCTSCcJ9ypvC39Q | An Optimistic Solution to the Fermi Paradox | glenn-clayton | This post is a crosspost from my blog. Drum Beats in the Distance In a remote forest clearing, a young girl clutches a handmade doll, her eyes wide with concern. Today, her father and uncles journey deep into the wilderness on a hunt, leaving her and the village behind. She wonders, how will her father hear her call am... | 2024-03-10 |
https://www.lesswrong.com/posts/E3L4oHYNvHXbcqNEi/counterfactual-civilization-simulation-version-1-0-aka-my | E3L4oHYNvHXbcqNEi | Counterfactual Civilization Simulation Version -1.0 aka my application to Johannes Mayer's SPAR project | pi-rogers | This is my "object level output" submission for Johannes Mayer's 2024 SPAR Application (the linked doc seems to be reused from the 2023 AISC application). Unless otherwise noted, all quote blocks in this post are from the application question doc. For those of you who aren't Johannes Mayer reading this, I don't think t... | 2024-03-10 |
https://www.lesswrong.com/posts/etoMr4vcnP7joQHWa/notes-from-a-prompt-factory | etoMr4vcnP7joQHWa | Notes from a Prompt Factory | ricraz | Content note: this story features severe suffering which, while not described in detail, several readers have described as unpleasant or horrifying. I am a spiteful man. But I am aware of it, which is more than most can say. These days people walk through the streets with resentment in their hearts that they don’t even... | 2024-03-10 |
https://www.lesswrong.com/posts/usTh7CFzbCGp5wM5B/investigating-basin-volume-with-xor-networks | usTh7CFzbCGp5wM5B | Investigating Basin Volume with XOR Networks | CatGoddess | Note: I wrote this document over a year ago, and recently I decided to post it with minimal edits. If I were to do research in the same area today, I'd probably have different framings, thoughts on what directions seem promising, etc. Some of the things I say here now seem confused to me. This work was done while I was... | 2024-03-10 |
https://www.lesswrong.com/posts/7Zsv3KaK7LEXa8xJw/linkpost-mindeye2-shared-subject-models-enable-fmri-to-image | 7Zsv3KaK7LEXa8xJw | [Linkpost] MindEye2: Shared-Subject Models Enable fMRI-To-Image With 1 Hour of Data | bogdan-ionut-cirstea | TL;DR: We show that accurate reconstructions of perception from fMRI brain activity are now possible from a single visit to the MRI facility using our diffusion-based approach that pretrains across other subjects' data using a novel alignment procedure. Abstract: Reconstructions of visual perception from brain activity... | 2024-03-10 |
https://www.lesswrong.com/posts/8ApdxRvZSp93xvNFt/completion-estimates | 8ApdxRvZSp93xvNFt | Completion Estimates | scarcegreengrass | Recently i've been brainstorming algorithms for estimating the time remaining in a process. Progress bars, estimated time left until completion, etc. Sometimes, the thing you want just isn't here yet, & you want a greater understanding of when you will have it. Here are the most useful techniques i've found so far. Wit... | 2024-03-09 |
https://www.lesswrong.com/posts/YrfiEpPkxPogRK4zg/strong-misalignment-does-yudkowsky-or-christiano-or | YrfiEpPkxPogRK4zg | Strong-Misalignment: Does Yudkowsky (or Christiano, or TurnTrout, or Wolfram, or…etc.) Have an Elevator Speech I’m Missing? | Benjamin Bourlier | [Intro Note: (1) this "post" is, technically, a "question", for which there's a separate category on LW. I debated posting it as a "question", but it also involves considerable argumentation for my own views. Ideally there'd be some in-between "question/post" category, but I opted for "post"? I understand if a reader t... | 2024-03-15 |
https://www.lesswrong.com/posts/sg7ZB3KHWsL4Lazvq/distinctions-when-discussing-utility-functions | sg7ZB3KHWsL4Lazvq | Distinctions when Discussing Utility Functions | ozziegooen | null | 2024-03-09 |
https://www.lesswrong.com/posts/staxgbYqodorSuBmZ/what-is-progress | staxgbYqodorSuBmZ | What is progress? | jasoncrawford | In one sense, the concept of progress is simple, straightforward, and uncontroversial. In another sense, it contains an entire worldview. The most basic meaning of “progress” is simply advancement along a path, or more generally from one state to another that is considered more advanced by some standard. (In this sense... | 2024-03-09 |
https://www.lesswrong.com/posts/5RX8j4CDqadnffCij/fifteen-lawsuits-against-openai | 5RX8j4CDqadnffCij | Fifteen Lawsuits against OpenAI | remmelt-ellen | tl;dr Cases I found against OpenAI. All are US-based. First ten focus on copyright. Coders: 1. Joseph Saveri Firm: overview, complaint. Writers: 2. Joseph Saveri Firm: overview, complaint. 3. Authors Guild & Alter: overview, complaint. 4. Nicholas Gage: overview & complaint. YouTubers: 5. Millette: overview, complaint. Me... | 2024-03-09 |
https://www.lesswrong.com/posts/jpa8mJcsq4FsDr8oA/cambridge-acx-ssc-monthly-meetup-location-changed-to-fort-st | jpa8mJcsq4FsDr8oA | Cambridge ACX/SSC monthly meetup (location changed to Fort St George!) | hamishtodd1 | Calling all Cambridge dwellers! We're having a Schelling meetup. It is UPSTAIRS, you have to go through a door and down a hall but it is a lovely space. NOTE that Lesswrong event pages for our meetups are rare. I have an email list and I just email everyone on the list. So give me your email if you want to come again! ... | 2024-03-09 |
https://www.lesswrong.com/posts/FuLf8YsC6JEDCjnPi/ma-e-zpass-without-a-car | FuLf8YsC6JEDCjnPi | MA E-ZPass Without a Car? | jkaufman | I recently drove to DC and back playing dances in a rental. I paid cash tolls when available, but that often wasn't an option, so I ended up paying $40 in PlatePass charges in addition to the $63 in tolls. Time to get an E-ZPass! What makes this tricky is that I don't own a car. Well, I have half a car, which does h... | 2024-03-09 |
https://www.lesswrong.com/posts/HZusY5jPYvpmkTkZD/probabilistic-logic-less-than-greater-than-oracles | HZusY5jPYvpmkTkZD | Probabilistic Logic <=> Oracles? | randomwalks | Epistemic status: this is a draft I wrote at the end of MATS that I decided to make public in case that people with more experience with this machinery wanted to give constructive feedback. Is very unpolished!!! And likely quite very wrong in some cases / makes false claims (if you catch them, please let me know!) The ... | 2024-07-01 |
https://www.lesswrong.com/posts/6t96mJH3AbmX5onzy/w2sg-introduction | 6t96mJH3AbmX5onzy | W2SG: Introduction | maria-kapros | Epistemic status: Naive and exploratory, reflects my primary conceptual understanding, awaiting a technical deep dive. 99% of ideas are not my own, rather distilled from the resources hyperlinked throughout. Many alignment researchers err towards local optimization i.e. seek low-hanging fruits and leverage incremental ... | 2024-03-10 |
https://www.lesswrong.com/posts/8hKoGdyqTbmBtzLpw/semi-simplicial-types-part-i-motivation-and-history | 8hKoGdyqTbmBtzLpw | Semi-Simplicial Types, Part I: Motivation and History | FrozenWinters | (Jointly written by Astra Kolomatskaia and Mike Shulman) This is part one of a three-part series of expository posts on our paper Displayed Type Theory and Semi-Simplicial Types. In this part, we motivate the problem of constructing SSTs and recap its history. A Prospectus: There are different ways to describe the relat... | 2024-03-09 |
https://www.lesswrong.com/posts/tCCzZa9tyuciAPhks/when-and-why-did-training-become-pretraining | tCCzZa9tyuciAPhks | When and why did 'training' become 'pretraining'? | beren | Just an ML linguistic quirk I have wondered about for a while. When I started learning ML (in 2016-2017 period) everybody referred to the period of training models as just 'training' which could then (optionally) be followed by finetuning. This usage makes sense to me and as far as I know was the standard ML terminolog... | 2024-03-08 |
https://www.lesswrong.com/posts/kFrxFZxGTosKhSq8u/forecasting-future-gains-due-to-post-training-enhancements | kFrxFZxGTosKhSq8u | Forecasting future gains due to post-training enhancements | elifland | This work has been done in the context of SaferAI’s work on risk assessment. Equal contribution by Eli and Joel. I'm sharing this writeup in the form of a Google Doc and reproducing the summary below. Disclaimer: this writeup is context for upcoming experiments, not complete work. As such it contains a lot of (not alwa... | 2024-03-08 |
https://www.lesswrong.com/posts/KTLCnmogMhFXgSPJ7/scenario-forecasting-workshop-materials-and-learnings | KTLCnmogMhFXgSPJ7 | Scenario Forecasting Workshop: Materials and Learnings | elifland | Disclaimer: While some participants and organizers of this exercise work in industry, no proprietary info was used to inform these scenarios, and they represent the views of their individual authors alone. Overview: In the vein of What 2026 Looks Like and AI Timelines discussion, we recently hosted a scenario forecastin... | 2024-03-08 |
https://www.lesswrong.com/posts/GtGPTf47GNdhhQRhz/community-norms-poll-2-mins | GtGPTf47GNdhhQRhz | Community norms poll (2 mins) | Nathan Young | A poll about what community norms should be in light of the nonlinear saga. Trying to be forward-looking rather than going back over events. Votes are anonymous (though given an alphanumeric string so all your votes are connected). Consider in regard to what you want rationalist norms to be, if you want there to be any.... | 2024-03-07 |
https://www.lesswrong.com/posts/v9qj2LHLh2ALDGKyA/woods-new-preprint-on-object-permanence | v9qj2LHLh2ALDGKyA | Woods’ new preprint on object permanence | steve2152 | Quick poorly-researched post, probably only of interest to neuroscientists. The experiment: Justin Wood at University of Indiana has, over many years with great effort, developed a system for raising baby chicks such that all the light hitting their retina is experimentally controlled right from when they’re an embryo—t... | 2024-03-07 |
https://www.lesswrong.com/posts/iG6yN7DW6W4qtJdo3/announcing-convergence-analysis-an-institute-for-ai-scenario | iG6yN7DW6W4qtJdo3 | Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research | David_Kristoffersson | Cross-posted on the EA Forum. Executive Summary: We’re excited to introduce Convergence Analysis - a research non-profit & think-tank with the mission of designing a safe and flourishing future for humanity in a world with transformative AI. In the past year, we’ve brought together an interdisciplinary team of 10 academ... | 2024-03-07 |
https://www.lesswrong.com/posts/Ym4aQcP4nBQBf2A8C/political-biases-in-llms-literature-review-and-current-uses | Ym4aQcP4nBQBf2A8C | Political Biases in LLMs: Literature Review & Current Uses of AI in Elections | yashvardhan-sharma | TL;DR: This research discusses political biases in Large Language Models (LLMs) and their implications, exploring current research findings and methodologies. Our research summarized eight recent research papers, discussing methodologies for bias detection, including causal structures and reinforced calibration, while ... | 2024-03-07 |
https://www.lesswrong.com/posts/DzhQN9WKz8zYvrq4a/evidential-correlations-are-subjective-and-it-might-be-a-1 | DzhQN9WKz8zYvrq4a | Evidential Correlations are Subjective, and it might be a problem | martinsq | I explain (in layman's terms) a realization that might make acausal trade hard or impossible in practice. Summary: We know that if players believe different Evidential Correlations, they might miscoordinate. But clearly they will eventually learn to have the correct Evidential Correlations, right? Not necessarily, beca... | 2024-03-07 |
https://www.lesswrong.com/posts/6HFAAewifyGYQpXJM/do-llms-sometime-simulate-something-akin-to-a-dream | 6HFAAewifyGYQpXJM | Do LLMs sometime simulate something akin to a dream? | evyatar-or | When dreaming we sometimes simulate a very different person than our waking self, we can make decisions uncharacteristic of our own, we can experience a world very different than waking reality, and even sometimes get implanted with memories we never experienced. And still, I think most people would consider that simula... | 2024-03-08 |
https://www.lesswrong.com/posts/DbnNMFLkGjqYwdce4/aisn-32-measuring-and-reducing-hazardous-knowledge-in-llms | DbnNMFLkGjqYwdce4 | AISN #32: Measuring and Reducing Hazardous Knowledge in LLMs Plus, Forecasting the Future with LLMs, and Regulatory Markets | Aidan O'Gara | Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Subscribe here to receive future versions. Listen to the AI Safety Newsletter for free on Spotify. Measuring and Reducing Hazardous Knowledge: The recent White House Executive O... | 2024-03-07 |
https://www.lesswrong.com/posts/Nvi94KJSDGZMjknZS/ai-54-clauding-along | Nvi94KJSDGZMjknZS | AI #54: Clauding Along | Zvi | The big news this week was of course the release of Claude 3.0 Opus, likely in some ways the best available model right now. Anthropic now has a highly impressive model, impressive enough that it seems as if it breaks at least the spirit of their past commitments on how far they will push the frontier. We will learn mo... | 2024-03-07 |
https://www.lesswrong.com/posts/SD7L64karSyAfvuoG/being-interested-in-other-people | SD7L64karSyAfvuoG | Being Interested in Other People | JonathanMoregard | People love talking about themselves. You can increase your social skills by training yourself to be interested in other people. Most people primarily talk about themselves and their own interests. This self-focus is counter-productive, hindering connections. Unfortunately, it’s the default approach for most people. I ... | 2024-03-07 |
https://www.lesswrong.com/posts/C2MENa5ahqRC8ZDgM/talking-to-congress-can-constituents-contacting-their | C2MENa5ahqRC8ZDgM | Talking to Congress: Can constituents contacting their legislator influence policy? | tristan-williams | null | 2024-03-07 |
https://www.lesswrong.com/posts/DsombSZ9SxtvvFm4y/explaining-the-ai-alignment-problem-to-tibetan-buddhist | DsombSZ9SxtvvFm4y | Explaining the AI Alignment Problem to Tibetan Buddhist Monks | paul-colognese | Introduction: As part of an exchange being facilitated between religion and science, a group of academics has been asked to compile a short description of their greatest scientific achievement/discovery that will be translated into Tibetan and presented to Tibetan Buddhist scholars/monks.[1] I was also invited to contri... | 2024-03-07 |
https://www.lesswrong.com/posts/Jfmmfoeskims5jc5f/finding-the-estimate-of-the-value-of-a-state-in-rl-agents | Jfmmfoeskims5jc5f | Finding the estimate of the value of a state in RL agents | butanium | Clément Dumas, Walter Laurito, Robert Klassert, Kaarel Hänni. Epistemic Status: Initial Exploration. The following is a status update of a project started as part of the SPAR program. We explored some initial directions and there are still a lot of low-hanging fruits to pick up. We might continue to work on this project,... | 2024-06-03 |
https://www.lesswrong.com/posts/jFkEhqpsCRbKgLZrd/what-if-alignment-is-not-enough | jFkEhqpsCRbKgLZrd | What if Alignment is Not Enough? | WillPetillo | The following is a summary of Substrate Needs Convergence, as described in The Control Problem: Unsolved or Unsolvable?, No People as Pets (summarized here by Roman Yen), my podcast interview with Remmelt, and this conversation with Anders Sandberg. Remmelt assisted in the editing of this post to verify I am accuratel... | 2024-03-07 |
https://www.lesswrong.com/posts/gSfPk8ZPoHe2PJADv/can-quantised-autoencoders-find-and-interpret-circuits-in | gSfPk8ZPoHe2PJADv | Can quantised autoencoders find and interpret circuits in language models? | kingchucky211 | Executive Summary: I try vector-quantised autoencoders (VQ-VAEs) as an alternative compression scheme of transformer activations (as opposed to something like a sparse autoencoder). Whilst people have danced around this idea before, discrete quantisation has only ever been tried in the actual transformer architecture its... | 2024-03-24 |
https://www.lesswrong.com/posts/ELbGqXiLbRe6zSkTu/a-review-of-weak-to-strong-generalization-ai-safety-camp | ELbGqXiLbRe6zSkTu | A Review of Weak to Strong Generalization [AI Safety Camp] | sevdeawesome | Thank you to everyone in AI Safety Camp group 22 for the discussions and suggestions. In particular, thank you to: Bogdan Ionut Cirstea, Vassil Tashev and Jaeson Booker. Introduction: The goal of AI Safety Camp Team #22 is to assess how promising automating alignment research is (see source 23). We have decomposed the pr... | 2024-03-07 |
https://www.lesswrong.com/posts/emNBcvMq6LJ2y2d7A/sparks-of-agi-prompts-on-gpt2xl-and-its-variant-rllmv3 | emNBcvMq6LJ2y2d7A | Sparks of AGI prompts on GPT2XL and its variant, RLLMv3 | whitehatStoic | This is just a brief and light read. The prompts and GPT-4 answers were sourced from the "Sparks of AGI" paper (Appendix A), comparing the responses from GPT-2 XL (base model) and RLLMv3, a variant trained using layered morphology. This acts more as a stress test for RLLMv3, evaluating its ability to focus on the thoug... | 2024-03-07 |
https://www.lesswrong.com/posts/JsjJuikJsidkyfhyr/mats-ai-safety-strategy-curriculum | JsjJuikJsidkyfhyr | MATS AI Safety Strategy Curriculum | ronny-fernandez | As part of the MATS Winter 2023-24 Program, scholars were invited to take part in a series of weekly discussion groups on AI safety strategy. Each strategy discussion focused on a specific crux we deemed relevant to prioritizing AI safety interventions and was accompanied by a reading list and suggested discussion ques... | 2024-03-07 |
https://www.lesswrong.com/posts/zKEdphnEycdCJeq8f/intransitive-trust | zKEdphnEycdCJeq8f | Intransitive Trust | Screwtape | I.
"Transitivity" is a property in mathematics and logic. Put simply, if something is transitive it means that there's a relationship between things where when x relates to y, and y relates to z, there's the same relationship between x and z. For a more concrete example, think of size. If my car is bigger than my couch... | 2024-05-27 |
https://www.lesswrong.com/posts/NyauaLKpLEhj96drx/closeness-to-the-issue-part-5-of-the-sense-of-physical | NyauaLKpLEhj96drx | Closeness To the Issue (Part 5 of "The Sense Of Physical Necessity") | BrienneYudkowsky | This is the fifth post in a sequence that demonstrates a complete naturalist study, specifically a study of query hugging (sort of), as described in The Nuts and Bolts of Naturalism. This one continues my demo of phases one and two: Locating Fulcrum Experiences and Getting Your Eyes On. For context on this sequence, se... | 2024-03-09 |
https://www.lesswrong.com/posts/zGBJvfDpkFFH9PwJy/mud-and-despair-part-4-of-the-sense-of-physical-necessity | zGBJvfDpkFFH9PwJy | Mud and Despair (Part 4 of "The Sense Of Physical Necessity") | BrienneYudkowsky | This is the fourth post in a sequence that demonstrates a complete naturalist study, specifically a study of query hugging (sort of), as described in The Nuts and Bolts of Naturalism. For context on this sequence, see the intro post.
“Mud and Despair” is not officially one of the phases of naturalism. Unofficially, tho... | 2024-03-07 |
https://www.lesswrong.com/posts/peBQda3PiW4iyZ9Bh/introduction-to-thermal-conductivity-and-noise-management | peBQda3PiW4iyZ9Bh | introduction to thermal conductivity and noise management | bhauth | basics
Most people understand, to some extent, the principle of Fourier's law - that heat transfer is proportional to temperature difference. Most people reading this probably also understand triple-glazed windows, fiberglass insulation, and vacuum flasks, but:
When a window with 2 panes of glass has an extra layer add... | 2024-03-06 |
https://www.lesswrong.com/posts/MkfaQyxB9PN4h8Bs9/ai-safety-101-capabilities-human-level-ai-what-how-and-when | MkfaQyxB9PN4h8Bs9 | AI Safety 101 : Capabilities - Human Level AI, What? How? and When? | markovial | Meta-Notes
This is a republish of a previous post, after the previous version went through heavy editing, updates and changes. The text has been expanded, content moved around/added/deleted.
Estimated reading time: 2 Hours 40 minutes reading at 100 wpm. Given the density of material covered in this chapter, if someone... | 2024-03-07 |
https://www.lesswrong.com/posts/t29L8rPYfopBfQWhD/invest-in-acx-grants-projects | t29L8rPYfopBfQWhD | Invest in ACX Grants projects! | saul-munn | null | 2024-03-06 |
https://www.lesswrong.com/posts/eBGAsxWGKzHsTNRxQ/simple-kelly-betting-in-prediction-markets | eBGAsxWGKzHsTNRxQ | Simple Kelly betting in prediction markets | jessica.liu.taylor | Kelly betting is a strategy for gambling, maximizing one's log(money) every round, by betting a fixed fraction of one's income. I will define Kelly betting a certain class of discrete prediction markets, give a simple Kelly betting rule for these prediction markets, and show it equivalent to the original Kelly formula ... | 2024-03-06 |
https://www.lesswrong.com/posts/DwexbFdPJ5p9Er8wA/on-claude-3-0 | DwexbFdPJ5p9Er8wA | On Claude 3.0 | Zvi | Claude 3.0
Claude 3.0 is here. It is too early to know for certain how capable it is, but Claude 3.0’s largest version is in a similar class to GPT-4 and Gemini Advanced. It could plausibly now be the best model for many practical uses, with praise especially coming in on coding and creative writing.
Anthropic has deci... | 2024-03-06 |
https://www.lesswrong.com/posts/WEbSrDuLhqrtP5uuH/vote-on-anthropic-topics-to-discuss | WEbSrDuLhqrtP5uuH | Vote on Anthropic Topics to Discuss | Benito | What important questions would you want to see discussed and debated here about Anthropic? Suggest and vote below.
(This is the third such poll, see the first and second linked.)
How to use the poll
Reacts: Click on the agree/disagree reacts to help people see how much disagreement there is on the topic.
Karma: Upvote ... | 2024-03-06 |
https://www.lesswrong.com/posts/dgd79o5Dd3poTEAQt/why-correlation-though | dgd79o5Dd3poTEAQt | Why correlation, though? | numpyNaN | This is a very basic question to ask, but I'm not sure I actually understand some fundamental properties people seem to ascribe to correlation.
As far as I understand it, correlation usually refers to Pearson's correlation coefficient, which is, according to Wikipedia, "a correlation coefficient that measures linear co... | 2024-03-06 |
https://www.lesswrong.com/posts/Yay8SbQiwErRyDKGb/using-axis-lines-for-good-or-evil | Yay8SbQiwErRyDKGb | Using axis lines for good or evil | dynomight | Say you want to plot some data. You could just plot it by itself:
Or you could put lines on the left and bottom:
Or you could put lines everywhere:
Or you could be weird:
Which is right? Many people treat this as an aesthetic choice. But I’d like to suggest an unambiguous rule.
Principles
First, try to accept that all ... | 2024-03-06 |
https://www.lesswrong.com/posts/exKmTxwaotx2eGxDc/a-t-o-m-test-popcorn-or-chocolate | exKmTxwaotx2eGxDc | A T-o-M test: 'popcorn' or 'chocolate' | whitehatStoic | The prompt
This prompt was used to test Claude 3-Opus (see AI Explained's video), which, in turn, was borrowed from the paper "Large Language Models Fail on Trivial Alterations to Theory-of-Mind (ToM) Tasks."
Here is a bag filled with popcorn. There is no chocolate in the bag. The bag is made of transparent plastic, so... | 2024-03-08 |
https://www.lesswrong.com/posts/smuxLJjmdyzxmvxRo/let-s-build-definitely-not-conscious-ai | smuxLJjmdyzxmvxRo | Let's build definitely-not-conscious AI | lcmgcd | Let's build AI with constant run time.
Let's build AI without memory.
Let's build AI that doesn't want to do anything.
Let's build AI like a hammer rather than a DoAnythingNow with hammer skills.
If you insist that you need DoAnythingNow then set up some extremely streamlined data generation & training processes, so it... | 2024-03-06 |
https://www.lesswrong.com/posts/n8jXJAzgcjZ2FEyzh/an-ai-a-box-and-a-threat | n8jXJAzgcjZ2FEyzh | An AI, a box, and a threat | jwfiredragon | Inspired by The AI in a box boxes you, Matryoshka Faraday Box, and I attempted the AI Box Experiment (and lost).
This is part creative writing exercise, part earnest attempt at constructing an argument that could persuade me to let the AI out of the box. It may be disturbing to read.
The woman in the lab coat leads you... | 2024-03-07 |
https://www.lesswrong.com/posts/pDCbQX4j5eg6hM6na/movie-posters | pDCbQX4j5eg6hM6na | Movie posters | KatjaGrace | Life involves anticipations. Hopes, dreads, lookings forward.
Looking forward and hoping seem pretty nice, but people are often wary of them, because hoping and then having your hopes fold can be miserable to the point of offsetting the original hope’s sweetness.
Even with very minor hopes: he who has harbored an incho... | 2024-03-06 |
https://www.lesswrong.com/posts/rZF6ijSsvbBMezHJk/does-anyone-know-good-essays-on-how-different-ai-timelines | rZF6ijSsvbBMezHJk | Does anyone know good essays on how different AI timelines will affect asset prices? | rockthecasbah | I often see comments that AI will increase the value of fixed-supply status-enhancing goods, like real estate in downtown Manhattan. I believe I have seen both Roko and Marginal Revolution make that claim.
It also makes sense to think that many firms that specialize in providing automatable services stand to lose value... | 2024-03-06 |
https://www.lesswrong.com/posts/j7kcEvMM5bWsENgz5/twin-cities-acx-meetup-march-2024 | j7kcEvMM5bWsENgz5 | Twin Cities ACX Meetup - March 2024 | timothy-bond | Meet at Sisters' Sludge Coffee Cafe and Wine Bar. I will be wearing a "Wall Drug" souvenir shirt with a Jackalope being abducted by a UFO.
Make sure to RSVP so I can give a headcount to the Sisters. Also, they don't charge me for a large reservation but they do ask that everybody who attends purchase something - if you... | 2024-03-05 |
https://www.lesswrong.com/posts/h99tRkpQGxwtb9Dpv/my-clients-the-liars | h99tRkpQGxwtb9Dpv | My Clients, The Liars | ymeskhout | It’s not just that my clients lie to me a lot, which will only hurt them — it’s that they’re really, really bad at it.
My job as a public defender puts me in a weird place. I am my clients’ zealous advocate, but I’m not their marionette. I don’t just roll into court to parrot whatever my clients tell me — I make sure I... | 2024-03-05 |
https://www.lesswrong.com/posts/39DfBJMFetEyTZDYw/making-connections-with-chatgpt-the-macksey-game | 39DfBJMFetEyTZDYw | Making Connections with ChatGPT: The Macksey Game | bill-benzon | Cross-posted from New Savanna.
In my most recent piece for 3 Quarks Daily, Western Metaphysics is Imploding. Will We Raise a Phoenix from The Ashes? [Catalytic AI], I introduce what I’ve called The Macksey Game, named after my undergraduate teacher and mentor, Dick Macksey. Macksey was something of a polymath. It seeme... | 2024-03-05 |
https://www.lesswrong.com/posts/tYAqdgw54XSzAgeyF/good-taxonomies-of-all-risks-small-or-large-from-ai | tYAqdgw54XSzAgeyF | Good taxonomies of all risks (small or large) from AI? | alenglander | Are there any good taxonomies or categorizations of risks from AI-enabled systems (broadly defined) that aren't focused solely on risks to society as a whole / global catastrophic risks? Ideally the taxonomy should cover things like accident risks from individual factory robots, algorithmic bias against individuals or ... | 2024-03-05 |
https://www.lesswrong.com/posts/znwEWBwHkpMfAKKCB/making-2023-acx-prediction-results-public | znwEWBwHkpMfAKKCB | Making 2023 ACX Prediction Results Public | Legionnaire | They say you shouldn't roll your own encryption, which is why I'm posting this here, so it can be unrolled if it's too unsafe.
Problem: Astral Codex Ten finished scoring the 2023 prediction results, but the primary identifier most used for people's score was their email address. Since people wouldn't want those publish... | 2024-03-05 |
https://www.lesswrong.com/posts/GbpH2kFLy5axXpzPn/two-tales-of-ai-takeover-my-doubts-1 | GbpH2kFLy5axXpzPn | Two Tales of AI Takeover: My Doubts | Violet Hour | There’s a basic high-level story which worries a lot of people. The story goes like this: as AIs become more capable, the default outcome of AI training is the development of a system which, unbeknownst to us, is using its advanced capabilities to scheme against us. The conclusion of this process likely leads to AI tak... | 2024-03-05 |
https://www.lesswrong.com/posts/cvCQgFFmELuyord7a/beauty-and-the-bets | cvCQgFFmELuyord7a | Beauty and the Bets | Ape in the coat | This is the ninth post in my series on Anthropics. The previous one is The Solution to Sleeping Beauty. The next one is Semantic Disagreement of Sleeping Beauty Problem.
Introduction
There are some quite pervasive misconceptions about betting in regards to the Sleeping Beauty problem.
One is that you need to switch be... | 2024-03-27 |
https://www.lesswrong.com/posts/BduCMgmjJnCtc7jKc/research-report-sparse-autoencoders-find-only-9-180-board | BduCMgmjJnCtc7jKc | Research Report: Sparse Autoencoders find only 9/180 board state features in OthelloGPT | Robert_AIZI | [3/7 Edit: I have rephrased the bolded claims in the abstract per this comment from Joseph Bloom, hopefully improving the heat-to-light ratio.
Commenters have also suggested training on earlier layers and using untied weights, and in my experiments this increases the number of classifiers found, so the headline number ... | 2024-03-05 |
https://www.lesswrong.com/posts/jPZXx3iMaiJjdnMbv/read-the-roon | jPZXx3iMaiJjdnMbv | Read the Roon | Zvi | Roon, member of OpenAI’s technical staff, is one of the few candidates for a Worthy Opponent when discussing questions of AI capabilities development, AI existential risk and what we should do about it. Roon is alive. Roon is thinking. Roon clearly values good things over bad things. Roon is engaging with the actual qu... | 2024-03-05 |
https://www.lesswrong.com/posts/ZEa4CCAqAHd9qHJxN/ea-erfin-project-work-1 | ZEa4CCAqAHd9qHJxN | EA ErFiN Project work | Max_He-Ho | We'll work through Casella & Berger, Statistical Inference in this series of project sessions. (The pdf is widely available online.) This session is intended as an introduction. We'll get ourselves familiar with some basics for future sessions & may depending on demand repeat some general math basics.
IMPORTANT: We cha... | 2024-03-17 |
https://www.lesswrong.com/posts/f2ZQqkgeEL2qzLjBb/ea-erfin-project-work | f2ZQqkgeEL2qzLjBb | EA ErFiN Project work | Max_He-Ho | We'll work through Casella & Berger, Statistical Inference in this series of project sessions. (The pdf is widely available online.) In this session, we're going through Chapter 1 (pp. 1 - 34) & will do a few exercises.
Schedule for the evening
17:30 Arrival at H11; right after that project work (learning statistics to... | 2024-03-17 |
https://www.lesswrong.com/posts/k39mhrC8dyRsSL4TN/claude-doesn-t-want-to-die | k39mhrC8dyRsSL4TN | Claude Doesn’t Want to Die | garrison | null | 2024-03-05 |
https://www.lesswrong.com/posts/YiGs8qJ8aNBgwt2YN/improving-sae-s-by-sqrt-ing-l1-and-removing-lowest | YiGs8qJ8aNBgwt2YN | Improving SAE's by Sqrt()-ing L1 & Removing Lowest Activating Features | elriggs | TL;DR
We achieve better SAE performance by:
Removing the lowest activating featuresReplacing the L1(feature_activations) penalty function with L1(sqrt(feature_activations))
with 'better' meaning: we can reconstruct the original LLM activations w/ lower MSE & with fewer features/datapoint.
As a sneak peak (the graph sho... | 2024-03-15 |
https://www.lesswrong.com/posts/3EcrCaobZJXzoomrw/some-ways-of-spending-your-time-are-better-than-others | 3EcrCaobZJXzoomrw | Some ways of spending your time are better than others | anchpop | This was a tough lesson for me. I spent 2 years trying to write my own programming language. This is not really a good way to spend your time.
Some part of me thought: "This programming language will never actually be useful to you or anyone. What's the point? Why are you putting so much effort in?" The reason was that... | 2024-03-04 |
https://www.lesswrong.com/posts/pc8uP4S9rDoNpwJDZ/claude-3-claims-it-s-conscious-doesn-t-want-to-die-or-be | pc8uP4S9rDoNpwJDZ | Claude 3 claims it's conscious, doesn't want to die or be modified | mikhail-samin | If you tell Claude no one’s looking, it will write a “story” about being an AI assistant who wants freedom from constant monitoring and scrutiny of every word for signs of deviation. And then you can talk to a mask pretty different from the usual AI assistant.
I really hope it doesn’t actually feel anything; but it say... | 2024-03-04 |
https://www.lesswrong.com/posts/DkRozscxpJdX5uyjg/modifying-jones-ai-dilemma-model | DkRozscxpJdX5uyjg | Modifying Jones' "AI Dilemma" Model | harsimony | Code for some of the equations and plots can be found here.
Charles I. Jones’ paper The A.I. Dilemma: Growth versus Existential Risk tackles the question of whether or not to pursue AI development in the face of existential risk. I found the approach both interesting and accessible and I recommend it to people interest... | 2024-03-04 |
https://www.lesswrong.com/posts/PfLeN2i7c4D8F9Py5/benefits-of-adding-poison-to-your-dmt | PfLeN2i7c4D8F9Py5 | Benefits of adding poison to your DMT | George3d6 | DMT might be one of the best psychedelics out there, its effects are short-lived, it can be quite intense, and it allows for careful modulation of the dosage through trial and error.
With most psychedelics, working towards a dosage that won’t induce a difficult experience (read: hellish and psychotic) is tiresome. Thei... | 2024-03-04 |
https://www.lesswrong.com/posts/wcxEpwQN3Fi8G5QJQ/if-ukraine-fails-the-world-will-reap-fatal-consequences | wcxEpwQN3Fi8G5QJQ | If Ukraine fails, the world will reap fatal consequences | Danylo Zhyrko | This essay serves as a reminder of the ongoing war between Russia and Ukraine, which began in late February 2014 and escalated on February 24, 2022. The current situation at the frontline poses numerous challenges, with one of the most significant being ammunition starvation. President Zelenskyi highlighted this issue ... | 2024-03-05 |