| url | post_id | title | author | content | date |
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/dKD47KMqvYAB7Zkmv/announcing-atlas-computing | dKD47KMqvYAB7Zkmv | Announcing Atlas Computing | miyazono | Atlas Computing is a new nonprofit working to collaboratively advance AI capabilities that are asymmetrically risk-reducing. Our work consists of building scoped prototypes and creating an ecosystem around @davidad’s Safeguarded AI programme at ARIA (formerly referred to as the Open Agency Architecture).
We formed in O... | 2024-04-11 |
https://www.lesswrong.com/posts/Gzp393MoeWGvac3qo/responsible-advanced-artificial-intelligence-act | Gzp393MoeWGvac3qo | Responsible Advanced Artificial Intelligence Act | bob-smith-1 | Earlier today (4/9), the Center for AI Policy released and promoted a bill titled the "Responsible Advanced Artificial Intelligence Act." It calls for many limits on AI models.
The bill was featured on Politico Pro. Here's a paragraph:
The draft bill is so far without any congressional sponsors. But Green-Lowe said tha... | 2024-04-10 |
https://www.lesswrong.com/posts/bQDoyQNKAaFJnT78p/how-to-accelerate-recovery-from-sleep-debt-with-biohacking | bQDoyQNKAaFJnT78p | How to accelerate recovery from sleep debt with biohacking? | yoyo-yuan | I have at least 40 hours of sleep debt from a polyphasic sleep schedule and attending hackathons. This number is a conservative estimate. Has anyone here researched the neurobiology of sleep deprivation? What can I do to recover quickly? | 2024-04-10 |
https://www.lesswrong.com/posts/MmA4LB2ugusvKJuD6/what-are-some-posthumanist-more-than-human-approaches-to | MmA4LB2ugusvKJuD6 | What are some posthumanist/more-than-human approaches to definitions of intelligence and agency? Particularly in application to AI research. | eli-hiton | Hi everybody, first post. I've been delving into AI safety and theoretical AI work with more commitment over the past couple of weeks. Something that has repeatedly set my gears in motion is definitions of intelligence or assumptions about superintelligence that feel very anthropocentric. For instance, I get the sense tha... | 2024-04-09 |
https://www.lesswrong.com/posts/JumyfYQaJkWnjCcEr/consequentialism-is-a-compass-not-a-judge | JumyfYQaJkWnjCcEr | Consequentialism is a compass, not a judge | neil-warren | Tl;dr: Consequentialism works as a compass for your actions, not as a judge of moral character.
The compass and the judge
A woman steps onto a crowded bus, trips on a sitting man's outstretched foot, and breaks her arm. The Everett branches split: in one world, the man looks down and laughs evilly; in the other, he wak... | 2024-04-13 |
https://www.lesswrong.com/posts/zCPxQ3chn3nyKwHWD/decentralized-autonomous-education-call-for-reviewers-seeds | zCPxQ3chn3nyKwHWD | "Decentralized Autonomous Education" - Call for Reviewers (Seeds of Science) | rogersbacon | Abstract
We propose a novel model for teaching and learning called Decentralized Autonomous Education (DAE for short). DAE exploits the dual principles of freedom and responsibility, meritocracy and inclusivity, privacy and transparency in the educational process. It also fits well the philosophy of blockchain technolo... | 2024-04-09 |
https://www.lesswrong.com/posts/LDFAjLXDSWRpSgxj5/d-and-d-sci-the-mad-tyrant-s-pet-turtles-evaluation-and | LDFAjLXDSWRpSgxj5 | D&D.Sci: The Mad Tyrant's Pet Turtles [Evaluation and Ruleset] | abstractapplic | This is a followup to the D&D.Sci post I made ten days ago; if you haven’t already read it, you should do so now before spoiling yourself.
Here is the web interactive I built to let you evaluate your solution; below is an explanation of the rules used to generate the dataset (my full generation code is available here, ... | 2024-04-09 |
https://www.lesswrong.com/posts/wfz47Ez2r4rQZuYBY/medical-roundup-2 | wfz47Ez2r4rQZuYBY | Medical Roundup #2 | Zvi | Previously: #1
It feels so long ago that Covid and health were my beat, and what everyone often thought about all day, rather than AI. Yet the beat goes on. With Scott Alexander at long last giving us what I expect to be effectively the semi-final words on the Rootclaim debate, it seemed time to do this again.
Bad News... | 2024-04-09 |
https://www.lesswrong.com/posts/xpyvJ76brChicdfrC/reverse-regulatory-capture | xpyvJ76brChicdfrC | Reverse Regulatory Capture | Chris_Leong | Reverse regulatory capture is an advanced move where industry interests cry "regulatory capture" in order to oppose regulation. Generally, the more skeptical people are about corporate power and the vulnerability of regulators to influence, the more worried they are about regulatory capture.
If you're extremely skeptic... | 2024-04-11 |
https://www.lesswrong.com/posts/Y6LhXdGfwsAStMuhr/ackshually-many-worlds-is-wrong | Y6LhXdGfwsAStMuhr | Ackshually, many worlds is wrong | tailcalled | Thank you to Justis Mills for proofreading and feedback. This post can also be found on my substack.
I mentioned that I disagree with the many worlds interpretation of quantum mechanics in a comment, and I thought I should clarify my position. I title the post "ackshually" because it is a very pedantic objection that I... | 2024-04-11 |
https://www.lesswrong.com/posts/e6YukTAMzB7krHxRW/closed-pibbss-is-hiring-in-a-variety-of-roles-alignment | e6YukTAMzB7krHxRW | [Closed] PIBBSS is hiring in a variety of roles (alignment research and incubation program) | Nora_Ammann | PIBBSS is looking to expand its team and is running work trials for new team members (primarily) in April, May and early June. If you’re interested in joining a nimble team focused on AI safety research, field-building and incubation of new agendas, consider letting us know by filling in this form. (The applications ar... | 2024-04-09 |
https://www.lesswrong.com/posts/wvGqjZEZoYnsS5xfn/any-evidence-or-reason-to-expect-a-multiverse-everett | wvGqjZEZoYnsS5xfn | Any evidence or reason to expect a multiverse / Everett branches? | lcmgcd | My understanding is that pilot wave theory (ie Bohmian mechanics) explains all the quantum physics with no weirdness like "superposition collapse" or "every particle interaction creates n parallel universes which never physically interfere with each other". It is not fully "local" but who cares?
Is there any reason at ... | 2024-04-09 |
https://www.lesswrong.com/posts/FHi2fq9mpBaNuDirj/fermenting-form | FHi2fq9mpBaNuDirj | Fermenting Form | koratkar | Application forms are hard to write. Questions like: “Tell us about yourself” or “What are your strengths and weaknesses?” are tiring to address usefully in 200 words. Getting stuck on a bad application question rewriting paragraphs is fermenting your brain. You don’t become a better applicant with more work on those s... | 2024-04-09 |
https://www.lesswrong.com/posts/GDnRrSTvFkcpShm78/when-is-reward-ever-the-optimization-target | GDnRrSTvFkcpShm78 | When is reward ever the optimization target? | sharmake-farah | Alright, I have a question stemming from TurnTrout's post on Reward is not the optimization target, where he argues that the premises that are required to get to the conclusion of reward being the optimization target are so narrowly applicable as to not apply to future RL AIs as they gain more and more power:
https://w... | 2024-10-15 |
https://www.lesswrong.com/posts/HxnAFdSZWDFwGnfGN/what-s-a-better-term-now-that-agi-is-too-vague | HxnAFdSZWDFwGnfGN | What's a better term now that "AGI" is too vague? | Seth Herd | The term "AGI" is now creating confusion. When it's used in the context of timelines or alignment, we don't know if it means near-future LLMs, or superintelligence. It's fair to use AGI as "fairly general AI at near-human level," which includes current LLMs. But we should have a distinguishing term for the stronger use... | 2024-05-28 |
https://www.lesswrong.com/posts/LMpSzPYAvYjBLr8m5/non-ultimatum-game-problem | LMpSzPYAvYjBLr8m5 | Non-ultimatum game problem | numpyNaN | Mostly looking for a name, or better even, reference to literature, for this type of problem in terms of game theory or economics. If no such thing, speculation welcome.
Intro below is more or less how I originally came to think of it and felt like writing, please skip unless you have already read the rest of the inter... | 2024-04-08 |
https://www.lesswrong.com/posts/Sqjoxk74wvrcdpBxr/apply-to-lasr-labs-a-london-based-technical-ai-safety | Sqjoxk74wvrcdpBxr | Apply to LASR Labs: a London-based technical AI safety research programme | Erin Robertson | Edit: Applications for this round are now closed! If you are interested in future rounds, you can express interest here.
TLDR; apply by April 24th 23:59 GMT+1 to join a 12-week programme and write a technical AI safety paper in a team of 4 with supervision from an experienced researcher. Work full time from the LISA of... | 2024-04-09 |
https://www.lesswrong.com/posts/KvSgty2jY7XrpJk5r/pandemic-identification-simulator | KvSgty2jY7XrpJk5r | Pandemic Identification Simulator | jkaufman | At my day job I work on identifying potential pandemics sooner, so we have more time to respond. I recently made a simulator which pulls a lot of things I've been thinking about recently into a single estimate. You can read more on the NAO blog or give the simulator a try.
Comment via: facebook, mastodon | 2024-04-08 |
https://www.lesswrong.com/posts/TiBsZ9beNqDHEvXt4/how-we-picture-bayesian-agents | TiBsZ9beNqDHEvXt4 | How We Picture Bayesian Agents | johnswentworth | I think that when most people picture a Bayesian agent, they imagine a system which:
Enumerates every possible state/trajectory of “the world”, and assigns a probability to each. When new observations come in, loops over every state/trajectory, checks the probability of the observations conditional on each, and then upd... | 2024-04-08 |
https://www.lesswrong.com/posts/KFybgPDTaerANipEb/cea-seeks-co-founder-for-ai-safety-group-support-spin-off | KFybgPDTaerANipEb | CEA seeks co-founder for AI safety group support spin-off | agucova | null | 2024-04-08 |
https://www.lesswrong.com/posts/2aarzb6vwJg9f9yQR/investigating-the-role-of-agency-in-ai-x-risk | 2aarzb6vwJg9f9yQR | Investigating the role of agency in AI x-risk | corin-katzke | null | 2024-04-08 |
https://www.lesswrong.com/posts/gAqHCho5MzN3ArBgr/can-singularity-emerge-from-transformers | gAqHCho5MzN3ArBgr | Can singularity emerge from transformers? | matheus-popst | One of the most aesthetically pleasing facts in computer science is that after compiling one compiler by hand, you can use your compiled compiler to compile an ever-larger compiler.
One historical fact that most people seem to forget about artificial intelligence is that the first attempt people had, more than half-cen... | 2024-04-08 |
https://www.lesswrong.com/posts/s66ZR2HnvLFtr8iG9/what-does-it-take-to-transfer-the-knowledge-to-action | s66ZR2HnvLFtr8iG9 | What does it take to transfer the knowledge to action? | EL_File4138 | This is a quite personal question. Feel free to point me elsewhere if you think this does not fit the overall discussion happening here, or if a solution already exists for this exact problem.
Most of the time, my coding skill allows me to modify the code in the most generally understandable higher-level language, ... | 2024-04-08 |
https://www.lesswrong.com/posts/KgFNwBuaDpfGSJktM/a-dozen-ways-to-get-more-dakka | KgFNwBuaDpfGSJktM | A Dozen Ways to Get More Dakka | Davidmanheim | As the dictum goes, “If it helps but doesn’t solve your problem, perhaps you’re not using enough.” But I still find that I’m sometimes not using enough effort, not doing enough of what works, simply put, not using enough dakka. And if reading one post isn’t enough to get me to do something… perhaps there isn’t enough g... | 2024-04-08 |
https://www.lesswrong.com/posts/wbmfGYrAKW7qyDWHT/the-packaging-and-the-payload | wbmfGYrAKW7qyDWHT | The Packaging and the Payload | Screwtape | I.
As I've run and studied meetups, there's a useful metaphor that's become more important to how I think about them. For most meetups, there's the packaging, and a payload, and these are related but useful to approach separately. Allow me to expand.
The payload is the thing you actually want. If you order some socks o... | 2024-11-12 |
https://www.lesswrong.com/posts/ABX8GweFodCaZpBWt/crosspost-introducing-the-hypermanifest-redefining-ai-s-role | ABX8GweFodCaZpBWt | [Crosspost] Introducing the Hypermanifest: Redefining AI's Role in Human Connection and Interaction | suzie-exe | Crossposted from my Substack. This is a rough introduction to a series of thoughts I had regarding our interface with AI - I'm hoping to broaden it more in the future.
As Artificial Intelligence becomes ever more pervasive in our everyday lives, there becomes an overwhelming need to address how humans and AI connect and... | 2024-04-07 |
https://www.lesswrong.com/posts/Lw2k5d3ACEfNAnwC5/applications-open-elevate-your-mental-resilience-with | Lw2k5d3ACEfNAnwC5 | Applications Open: Elevate Your Mental Wellbeing with Rethink Wellbeing's CBT Program | inga-g | null | 2024-04-07 |
https://www.lesswrong.com/posts/y9tnz27oLmtLxcrEF/constructability-plainly-coded-agis-may-be-feasible-in-the | y9tnz27oLmtLxcrEF | Constructability: Plainly-coded AGIs may be feasible in the near future | joy_void_joy | Charbel-Raphaël Segerie and Épiphanie Gédéon contributed equally to this post.
Many thanks to Davidad, Gabriel Alfour, Jérémy Andréoletti, Lucie Philippon, Vladimir Ivanov, Alexandre Variengien, Angélina Gentaz, Simon Cosson, Léo Dana and Diego Dorn for useful feedback.
TLDR: We present a new method for a safer-by desi... | 2024-04-27 |
https://www.lesswrong.com/posts/r587cE9sW6fBhtXSv/good-bings-copy-great-bings-steal | r587cE9sW6fBhtXSv | Good Bings copy, great Bings steal | dr_s | Stop me if you've heard this one before:
LLMs may produce a lot of seemingly original text, but really, they can never equate a human's output, because they can only remix material from their own training set, never create new knowledge.
Ok, no one stopped me, but I imagine that's more the limits of the medium, because... | 2024-04-21 |
https://www.lesswrong.com/posts/Lax72t8c44kxFh4iB/the-poker-theory-of-poker-night | Lax72t8c44kxFh4iB | The Poker Theory of Poker Night | omark | Link to my own article. I removed the explanation of EV since I assume on LW that's not necessary.
A group of friends and I occasionally like to get together to play Poker. Yet something keeps happening that I have observed time and again with these kinds of group gatherings: It is hard to find a suitable date and then... | 2024-04-07 |
https://www.lesswrong.com/posts/bnasg9r54LpzYRfRq/centrists-are-probably-less-biased | bnasg9r54LpzYRfRq | Centrists are (probably) less biased | Kevin Dorst | TLDR: Which side is more responsive to political evidence? Some empirical studies suggest the left; others suggest the center. The debate is ongoing, but some very general dynamics imply that it’s probably the center.
I am not a centrist. I am also biased. (Rationally so, I think.)
Is that a coincidence? Which side of ... | 2024-04-07 |
https://www.lesswrong.com/posts/yF3nnfYdAoHPAzNkH/on-the-dollar-yen-exchange-rate | yF3nnfYdAoHPAzNkH | on the dollar-yen exchange rate | bhauth | Recently, the yen-dollar exchange rate hit a 34-year low. Why is that?
6-month US Treasuries are paying around 5.3% interest. Japanese government bonds are paying about 0%. That being the case, you can borrow yen, trade it for dollars, buy US bonds, and get more interest. That's called a "yen carry trade". The risk you... | 2024-04-07 |
https://www.lesswrong.com/posts/Xv3tdX7TrpTXbSJPf/conflict-in-posthuman-literature | Xv3tdX7TrpTXbSJPf | Conflict in Posthuman Literature | martinsq | Grant Snider created this comic (which became a meme):
Richard Ngo extended it into posthuman=transhumanist literature:
That's cool, but I'd have gone for different categories myself.[1]
Here they are together with their explanations.
Top: Man vs Agency
(Other names: Superintelligence, Singularity, Self-improving techn... | 2024-04-06 |
https://www.lesswrong.com/posts/n3Q7F3v6wBLsioqt8/extended-interview-with-zhukeepa-on-religion | n3Q7F3v6wBLsioqt8 | Extended Interview with Zhukeepa on Religion | Benito | Introduction from Ben
Zhukeepa is a LessWronger who I respect and whose views I'm interested in. In 2018 he wrote the first broadly successful explication of Paul Christiano's research ideas for AI alignment, has spent a lot of time interviewing people in AI about their perspectives, and written some more about neurosc... | 2024-08-18 |
https://www.lesswrong.com/posts/BJYPwnuiDcsMqJCng/observations-on-teaching-for-four-weeks | BJYPwnuiDcsMqJCng | Observations on Teaching for Four Weeks | ClareChiaraVincent | I just finished a program where I taught two classes of high school seniors, two classes a day for four weeks, as part of my grad program.
This experience was a lot of fun and it was rewarding, but it was really surprising, and even if only in small ways prompted me to update my beliefs about the experience of being a ... | 2024-05-06 |
https://www.lesswrong.com/posts/ncbmN2qmAacwWtjpE/the-2nd-demographic-transition | ncbmN2qmAacwWtjpE | The 2nd Demographic Transition | maxwell-tabarrok | Birth rates in the developed world are below replacement levels and global fertility is not far behind. Sub-replacement fertility leads to exponentially decreasing population. Our best models of economic growth suggest that a shrinking population causes economic growth and technological progress to stop and humanity to... | 2024-04-06 |
https://www.lesswrong.com/posts/eA5eAexedL7MhPgCZ/privacy-and-writing | eA5eAexedL7MhPgCZ | Privacy and writing | neil-warren | Epistemic status: N=1
I've always written several thousand words a day in a private Google doc about anything that came to mind. Only recently have I started publishing to LessWrong. It's a long and arduous process for me, too slow to be worth the effort usually. [1] Still, publishing on LW is probably a net good overa... | 2024-04-06 |
https://www.lesswrong.com/posts/ebgazvWG5Kxuy4Wff/how-does-the-ever-increasing-use-of-ai-in-the-military-for | ebgazvWG5Kxuy4Wff | How does the ever-increasing use of AI in the military for the direct purpose of murdering people affect your p(doom)? | Justausername | I haven't personally heard a lot of recent discussions about it, which is strange considering that startups like Anduril and Palantir are developing systems for military use, OpenAI recently deleted a clause prohibiting the use of its products in the military sector, and the government sector is also working on ma... | 2024-04-06 |
https://www.lesswrong.com/posts/GiHRBRxFaKKgDkr5p/udt1-01-logical-inductors-and-implicit-beliefs-5-10 | GiHRBRxFaKKgDkr5p | UDT1.01: Logical Inductors and Implicit Beliefs (5/10) | Diffractor | One of the primary conceptual challenges of UDT is that, if future-you is going to be deferring to past-you about what to do in various circumstances, and past-you hasn't exhaustively thought through every possible circumstance ahead of time, that causes a tension. In order for deferring to past-you to produce acceptab... | 2024-04-18 |
https://www.lesswrong.com/posts/XSTiRakXqMfKhxBZL/two-tools-for-rethinking-existential-risk | XSTiRakXqMfKhxBZL | Two tools for rethinking existential risk | Arepo | Crossposted from the EA Forum.
Tl;dr
I’ve developed two calculators designed to help longtermists estimate the likelihood of humanity achieving a secure interstellar existence after 0 or more major catastrophes. These can be used to compare an a priori estimate, and a revised estimate after counterfactual events.
I hop... | 2024-04-06 |
https://www.lesswrong.com/posts/4MeomxkQ8KEzsLKTW/exploring-whole-brain-emulation | 4MeomxkQ8KEzsLKTW | Exploring Whole Brain Emulation | PeterMcCluskey | I've been dedicating a fair amount of my time recently to investigating whole brain emulation (WBE).
As computational power continues to grow, the feasibility of emulating a human brain at a reasonable speed becomes increasingly plausible.
While the connectome data alone seems insufficient to fully capture and replicat... | 2024-04-06 |
https://www.lesswrong.com/posts/unCG3rhyMJpGJpoLd/koan-divining-alien-datastructures-from-ram-activations | unCG3rhyMJpGJpoLd | Koan: divining alien datastructures from RAM activations | TsviBT | [Metadata: crossposted from https://tsvibt.blogspot.com/2024/04/koan-divining-alien-datastructures-from.html.]
Exploring the ruins of an alien civilization, you find what appears to be a working computer——it's made of plastic and metal, wires connect it to various devices, and you see arrays of capacitors that maintain... | 2024-04-05 |
https://www.lesswrong.com/posts/6hciEN9DGsS8CEuox/on-the-2nd-cwt-with-jonathan-haidt | 6hciEN9DGsS8CEuox | On the 2nd CWT with Jonathan Haidt | Zvi | It was clear within the first ten minutes this would be a rich thread to draw from. In my childhood and education roundups, and of course with my own kids, I have been dealing with the issues Haidt talks about in his new book, The Anxious Generation. Ideally I’d also have read the book, but perfect as enemy of the good... | 2024-04-05 |
https://www.lesswrong.com/posts/k2kzawX5L3Z7aGbov/on-not-pulling-the-ladder-up-behind-you | k2kzawX5L3Z7aGbov | On Not Pulling The Ladder Up Behind You | Screwtape | Epistemic Status: Musing and speculation, but I think there's a real thing here.
I.
When I was a kid, a friend of mine had a tree fort. If you've never seen such a fort, imagine a series of wooden boards secured to a tree, creating a platform about fifteen feet off the ground where you can sit or stand and walk around ... | 2024-04-26 |
https://www.lesswrong.com/posts/jqXZzwvDWJZ3yAvYY/end-to-end-hacking-with-language-models | jqXZzwvDWJZ3yAvYY | End-to-end hacking with language models | timot.cool | Cross-posted from https://tchauvin.com/end-to-end-hacking-with-language-models
Produced as part of the SERI ML Alignment Theory Scholars Program - Summer 2023 Cohort.
Thanks to JS Denain and Léo Grinsztajn for valuable feedback on drafts of this post.
How close are we to autonomous hacking agents, i.e. AI agents that c... | 2024-04-05 |
https://www.lesswrong.com/posts/Ru2cDrre6D4gkf734/my-intellectual-journey-to-dis-solve-the-hard-problem-of | Ru2cDrre6D4gkf734 | My intellectual journey to (dis)solve the hard problem of consciousness | charbel-raphael-segerie | Epistemological status: At least a fun journey. I wanted to post this on April Fool’s Day but failed to deliver on time. Although April Fool’s Day would have been lovely just for the meme, this is my best guess after thinking about this problem for seven years.
I invite you to dive deep into the consciousness iceberg w... | 2024-04-06 |
https://www.lesswrong.com/posts/qEwCitrgberdjjtuW/measuring-learned-optimization-in-small-transformer-models | qEwCitrgberdjjtuW | Measuring Learned Optimization in Small Transformer Models | Jemist | This is original, independent research carried out in March and April of 2024.
The degree to which a policy optimizes the future can be quantified mathematically. A set of very small transformer models were pretrained to predict the next token in a mathematical sequence, then subjected to reinforcement learning fi... | 2024-04-08 |
https://www.lesswrong.com/posts/tJpwjpWtxYFENdsA3/partial-value-takeover-without-world-takeover | tJpwjpWtxYFENdsA3 | Partial value takeover without world takeover | KatjaGrace | People around me are very interested in AI taking over the world, so a big question is under what circumstances a system might be able to do that—what kind of capabilities could elevate an entity above the melange of inter-agent conflict and into solipsistic hegemony?
We theorize about future AI systems hiding their mo... | 2024-04-05 |
https://www.lesswrong.com/posts/PtMtMBHRZgHuup8sS/on-complexity-science | PtMtMBHRZgHuup8sS | On Complexity Science | D0TheMath | I have a long and confused love-hate relationship with the field of complex systems. People there never want to give me a simple, straightforward explanation of what it's about, and much of what they say sounds a lot like woo ("edge of chaos" anyone?). But it also seems to promise a lot! This from the primary textboo... | 2024-04-05 |
https://www.lesswrong.com/posts/TqFuo7NaHW7J6yjn4/using-game-theory-to-elect-a-centrist-in-the-2024-us | TqFuo7NaHW7J6yjn4 | Using game theory to elect a centrist in the 2024 US Presidential Election | valley9 | Crossposted from the EA Forum
TL;DR
A nonpartisan group like No Labels could privately offer US congresspeople this deal: If enough congresspeople pledge to the deal, they all agree to switch their Presidential endorsement to a compromise candidate. If not enough pledge, then pledging still gets them some other benefit... | 2024-04-05 |
https://www.lesswrong.com/posts/3HfpCmKX7LJH5eTxQ/new-report-a-review-of-the-empirical-evidence-for | 3HfpCmKX7LJH5eTxQ | New report: A review of the empirical evidence for existential risk from AI via misaligned power-seeking | Harlan | Visiting researcher Rose Hadshar recently published a review of some evidence for existential risk from AI, focused on empirical evidence for misalignment and power seeking. (Previously from this project: a blogpost outlining some of the key claims that are often made about AI risk, a series of interviews of AI researc... | 2024-04-04 |
https://www.lesswrong.com/posts/pzBhv7H4yBmBwXPnC/quick-evidence-review-of-bulking-and-cutting | pzBhv7H4yBmBwXPnC | Quick evidence review of bulking & cutting | jp | Epistemic status: fairly fast non-comprehensive literature review by a non-expert
Content warning: I advise against reading this if you believe you have an eating disorder
My ideal body aesthetic would be to have defined muscles and low body fat. Maybe this is also true of you. Maybe you’ve heard of cycling between sea... | 2024-04-04 |
https://www.lesswrong.com/posts/3ZCKSArYwgg9P4hqQ/normalizing-sparse-autoencoders | 3ZCKSArYwgg9P4hqQ | Normalizing Sparse Autoencoders | hufy-dev | TL;DR
Sparse autoencoders (SAEs) present a promising direction towards automating mechanistic interpretability, but they are not without flaws. One known issue with the original sparse autoencoders is the feature suppression effect, which is caused by the conflict between the L2 and L1 loss and the unit norm constraint on t... | 2024-04-08 |
https://www.lesswrong.com/posts/QpRHqZcegKczcZkZD/on-leif-wenar-s-absurdly-unconvincing-critique-of-effective | QpRHqZcegKczcZkZD | On Leif Wenar's Absurdly Unconvincing Critique Of Effective Altruism | omnizoid | Crossposted here, on my blog.
Leif Wenar recently published a critique of effective altruism that seems to be getting a lot of hype. I don’t know why. There were a few different arguments in the piece: some terrible and others even worse. Yet more strangely, he doesn’t object much to EA as a whole—he just points to ran... | 2024-04-04 |
https://www.lesswrong.com/posts/dgFC394qZHgj2cWAg/run-evals-on-base-models-too | dgFC394qZHgj2cWAg | Run evals on base models too! | orthonormal | (Creating more visibility for a comment thread with Rohin Shah.)
Currently, DeepMind's capabilities evals are run on the post-RL*F (RLHF/RLAIF) models and not on the base models. This worries me because RL*F will train a base model to stop displaying capabilities, but this isn't a guarantee that it trains the model out... | 2024-04-04 |
https://www.lesswrong.com/posts/Dn5Dymb3jrgLXJSBA/let-s-fund-impact-of-our-usd1m-crowdfunded-grant-to-the | Dn5Dymb3jrgLXJSBA | Let's Fund: Impact of our $1M crowdfunded grant to the Center for Clean Energy Innovation | hauke-hillebrandt | null | 2024-04-04 |
https://www.lesswrong.com/posts/mmMn2oNhFaxHC4tnF/the-buckling-world-hypothesis-visualising-vulnerable-worlds | mmMn2oNhFaxHC4tnF | The Buckling World Hypothesis - Visualising Vulnerable Worlds | Rosco-Hunter | Motivation.
Mark Zuckerberg’s notorious motto, “move fast and break things” [1], reflects a mindset shared by many of the most powerful entrepreneurs in Silicon Valley. This mindset rests on the assumption that the benefits of discovering advanced technologies will ultimately outweigh any disruptions (i.e., broken thi... | 2024-04-04 |
https://www.lesswrong.com/posts/ZNwj8tbPpnvAPdGWf/can-ai-transform-the-electorate-into-a-citizen-s-assembly-1 | ZNwj8tbPpnvAPdGWf | Can AI Transform the Electorate into a Citizen’s Assembly? | Rosco-Hunter | Introduction.
When at their best, democracies are able to transform diverse beliefs into effective real-world policies. This ideal is achievable when citizens are well-informed, engaged, and open [1]. However, these favourable democratic conditions are increasingly undermined by the rise of misinformation [2] and polar... | 2024-04-04 |
https://www.lesswrong.com/posts/h8bc4ZuDzMC7SZSPf/trying-to-do-more-good | h8bc4ZuDzMC7SZSPf | Trying to Do More Good | jkaufman | This is an edited transcript of a talk I gave last week at Commonwealth School, a high school in Boston that I attended from 2000 to 2004. I'm typing from memory, so in places it may be closer to what I intended to say than what I actually said.
It's been twenty years since I was a student here, but the place feels ve... | 2024-04-04 |
https://www.lesswrong.com/posts/qQmWvm68GsXJtK4EQ/ai-58-stargate-agi | qQmWvm68GsXJtK4EQ | AI #58: Stargate AGI | Zvi | Another round? Of economists projecting absurdly small impacts, of Google publishing highly valuable research, a cycle of rhetoric, more jailbreaks, and so on. Another great podcast from Dwarkesh Patel, this time going more technical. Another proposed project with a name that reveals quite a lot. A few genuinely new th... | 2024-04-04 |
https://www.lesswrong.com/posts/BbTPYmnJ7rz52LwB5/cult-of-equilibrium | BbTPYmnJ7rz52LwB5 | Cult of equilibrium | templarrr | TLDR: "Solve for the equilibrium" is a nice sentiment, but it shouldn't be applied mindlessly; it's not nearly as universal an approach as some think.
Longer version:
The phrase "you must solve for the equilibrium" has become almost a mantra when evaluating something, and a lot of people use it automatically without stopping a... | 2024-04-04 |
https://www.lesswrong.com/posts/ZikHXwz8AHFp3EbtD/should-you-refuse-this-bet-in-technicolor-sleeping-beauty | ZikHXwz8AHFp3EbtD | Should you refuse this bet in Technicolor Sleeping Beauty? | Ape in the coat | This is the question for people who didn't read my latest post. Please, try to answer it yourself without spoiling the solution, and then post it in the comments with your reasoning and whether you consider yourself a halfer or a thirder in regular Sleeping Beauty problem.
Technicolor Sleeping Beauty experiment goes mo... | 2024-04-04 |
https://www.lesswrong.com/posts/GSSHcAoSChaKxjNDZ/what-s-with-all-the-bans-recently | GSSHcAoSChaKxjNDZ | What's with all the bans recently? | Unknown | Summary: the moderators appear to be soft banning users with 'rate-limits' without feedback. A careful review of each banned user reveals it's common to be banned despite earnestly attempting to contribute to the site. Some of the most intelligent banned users have mainstream instead of EA views on AI.
Note how the p... | 2024-04-04 |
https://www.lesswrong.com/posts/PLubzz4Jpm4Pas6nT/best-in-class-life-improvement | PLubzz4Jpm4Pas6nT | Best in Class Life Improvement | deluks917 | There is an enormous amount of crappy self-help advice. Most supplements do nothing. However, some substances and practices can dramatically improve your life. It's worth being explicit about what those are in my experience.
The American medical system endorses all of these treatments and methods, and you can implement... | 2024-04-04 |
https://www.lesswrong.com/posts/sLckvSBnDmChrkuqs/what-is-the-purpose-and-application-of-ai-debate | sLckvSBnDmChrkuqs | What is the purpose and application of AI Debate? | VojtaKovarik | I think there is an important lack of clarity and shared understanding regarding how people intend to use AI-Safety-via-Debate-style approaches. So I think it would be helpful if there were some people --- who either (i) work on Debate or (ii) believe that Debate is promising --- who could give their answers to the fol... | 2024-04-04 |
https://www.lesswrong.com/posts/zHvjcoyzdZSvJAbjs/ai-discrimination-requirements-a-regulatory-review | zHvjcoyzdZSvJAbjs | AI Discrimination Requirements: A Regulatory Review | deric-cheng | This article is the fifth in a series of ~10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance (e.g. incident reporting, safety evals, model registries, etc.).... | 2024-04-04 |
https://www.lesswrong.com/posts/99gWh9jxeumcmuduw/concrete-empirical-research-projects-in-mechanistic-anomaly | 99gWh9jxeumcmuduw | Concrete empirical research projects in mechanistic anomaly detection | ejenner | Thanks to Jordan Taylor, Mark Xu, Alex Mallen, and Lawrence Chan for feedback on a draft! This post was mostly written by Erik, but we're all currently collaborating on this research direction.
Mechanistic anomaly detection (MAD) aims to flag when an AI produces outputs for “unusual reasons.” It is similar to mechanist... | 2024-04-03 |
https://www.lesswrong.com/posts/q4auoEt7wrco76EyA/usd250k-in-prizes-safebench-competition-announcement | q4auoEt7wrco76EyA | $250K in Prizes: SafeBench Competition Announcement | oliver-zhang | null | 2024-04-03 |
https://www.lesswrong.com/posts/RHDB3BdnvM233bnhG/the-case-for-predictive-models | RHDB3BdnvM233bnhG | The Case for Predictive Models | Rubi | Thanks to Johannes Treutlein and Paul Colognese for feedback on this post.
Just over a year ago, the Conditioning Predictive Models paper was released. It laid out an argument and a plan for using powerful predictive models to reduce existential risk from AI, and outlined some foreseeable challenges to doing so. At the... | 2024-04-03 |
https://www.lesswrong.com/posts/Ayzwfru8zAfDjbzCB/book-review-mini-co-intelligence-by-ethan-mollick | Ayzwfru8zAfDjbzCB | Book Review (mini): Co-Intelligence by Ethan Mollick | Darren McKee | null | 2024-04-03 |
https://www.lesswrong.com/posts/TYLQ8gAMAmpeFcwXN/ophiology-or-how-the-mamba-architecture-works | TYLQ8gAMAmpeFcwXN | Ophiology (or, how the Mamba architecture works) | phylliida-dev | The following post was made as part of Danielle's MATS work on doing circuit-based mech interp on Mamba, mentored by Adrià Garriga-Alonso. It's the first in a sequence of posts about finding an IOI circuit in Mamba/applying ACDC to Mamba.
This introductory post was also made in collaboration with Gonçalo Paulo.
A new c... | 2024-04-09 |
https://www.lesswrong.com/posts/RiugkJsKgxbrWCrBq/just-because-2-things-are-opposites-doesn-t-mean-they-re | RiugkJsKgxbrWCrBq | Just because 2 things are opposites, doesn't mean they're just the same but flipped | OldManNick | The 2 Aspects
There’s 2 Aspects to things in general. I will call them Mapping Out and Mapping In, in titlecase so you know they’re distinct concepts.
warmup: 0 -> 1
Here, 0 is an initial object and 1 is a terminal object. 0 is Mapped Out of because it’s 0 -> and not -> 0. 1 is Mapped Into because it’s -> 1 and not 1 -... | 2024-04-03 |
https://www.lesswrong.com/posts/SpZrLay3okto33its/falling-fertility-explanations-and-israel | SpZrLay3okto33its | Falling fertility explanations and Israel | yair-halberstadt | From Robin Hanson, via TheZvi:
The following 8 social trends plausibly contribute to falling fertility:
More gender equality - More equal gender norms, options, & expectations, have contributed to fewer women having kids.
Higher parenting effort - Expectations for how much attention and effort parents give each kid hav... | 2024-04-03 |
https://www.lesswrong.com/posts/bqD3wJBoLrk8mCN9B/nature-is-an-infinite-sphere-whose-center-is-everywhere-and-1 | bqD3wJBoLrk8mCN9B | Nature is an infinite sphere whose center is everywhere and circumference is nowhere | OldManNick | Blaise Pascal said that. When I heard it, this interpretation instantly came to mind.
Here's an AR companion video where I point at some mountains in the Apple Vision Pro to explain the big idea.
Whether or not the universe is actually infinite, it's real big. So modeling it as hyperfinite is legit[1].
Imagine a Sphere with un... | 2024-04-03 |
https://www.lesswrong.com/posts/CNPvESPru3XNqsw7A/what-s-up-with-all-the-non-mormons-weirdly-specific | CNPvESPru3XNqsw7A | What's up with all the non-Mormons? Weirdly specific universalities across LLMs | mwatkins | tl;dr: Recently reported GPT-J experiments [1 2 3 4] prompting for definitions of points in the so-called "semantic void" (token-free regions of embedding space) were extended to fifteen other open source base models from four families, producing many of the same bafflingly specific outputs. This points to an entirely ... | 2024-04-19 |
https://www.lesswrong.com/posts/FxrqQbZKff9BoGhtc/the-rationalist-haggadot-collection | FxrqQbZKff9BoGhtc | The Rationalist Haggadot Collection | maia | Passover is coming, which means some of us will be celebrating Secular Seders. For those wanting to celebrate but looking for resources, I present: the Rationalist Haggadot Collection, an archive of all rationalist Seder ritual books known to me. Pick and choose whatever parts you like best from these, or just pick you... | 2024-04-02 |
https://www.lesswrong.com/posts/ThLMBYZQ4PHKFKHSP/how-often-does-correlation-causation | ThLMBYZQ4PHKFKHSP | How Often Does ¬Correlation ⇏ ¬Causation? | niplav | Current best guess: Nearly all the time (55%).
"Correlation ⇏ Causation" is trite by now. And we also know that the contrapositive is false too: "¬Correlation ⇏ ¬Causation". Spencer Greenberg summarizes:
All of this being said, while causation does not NECESSARILY imply correlation, causation USUALLY DOES imply correlatio... | 2024-04-02 |
https://www.lesswrong.com/posts/2yaEMAKoBJ6tQYLeE/ea-xpost-the-rationale-shaped-hole-at-the-heart-of | 2yaEMAKoBJ6tQYLeE | [EA xpost] The Rationale-Shaped Hole At The Heart Of Forecasting | dschwarz | An excerpt from the above that will be relevant to this crowd:
Ben Landau-Taylor of Bismarck Analysis wrote a piece on March 6 called “Probability Is Not A Substitute For Reasoning”, citing a piece where he writes:
There has been a great deal of research on what criteria must be met for forecasting aggregations to be u... | 2024-04-02 |
https://www.lesswrong.com/posts/ghZihMEEztwPRKZHQ/religion-cult-culture | ghZihMEEztwPRKZHQ | Religion = Cult + Culture | Eneasz | [copied in full -- request to develop community knowledge/practices?]
Cults are not necessarily bad. Cults provide value. People join them to get things they need which aren’t provided elsewhere. Every cult is a spiritual start-up, doing its best to serve a neglected segment of the population.
Start-ups are famous for ... | 2024-04-02 |
https://www.lesswrong.com/posts/msKhrRmys7d3WQgQ7/bida-election-thoughts | msKhrRmys7d3WQgQ7 | BIDA Election Thoughts | jkaufman | At this Sunday's dance BIDA will be holding its annual meeting, which means at the break there will be paper ballots for voting on two things:
Board: Who will run the organization for the next year?
Bylaws: Three proposed changes to simplify elections.
Harris wrote up a blog post with a sample ballot and candidate stateme... | 2024-04-02 |
https://www.lesswrong.com/posts/5k5FeFDCqXfLMj5SJ/fertility-roundup-3 | 5k5FeFDCqXfLMj5SJ | Fertility Roundup #3 | Zvi | Previous Fertility Roundups: #1, #2.
The pace seems to be about twice a year. The actual situation changes slowly, so presumably the pace of interesting new things should slow down over time from here.
Demographics
This time around, a visualization. Where will the next 1,000 babies be born?
Population Trends... | 2024-04-02 |
https://www.lesswrong.com/posts/PCQACcyoGJEDs6ujq/what-can-we-learn-about-childrearing-from-j-s-mill | PCQACcyoGJEDs6ujq | What can we learn about childrearing from J. S. Mill? | adam-scherlis | John Stuart Mill was given an extremely rigorous upbringing, and was deliberately shielded from association with children his own age other than his siblings. His father, a follower of Bentham and an adherent of associationism, had as his explicit aim to create a genius intellect that would carry on the cause of utilit... | 2024-04-02 |
https://www.lesswrong.com/posts/aRBAhBsc6vZs3WviL/ommc-announces-rip | aRBAhBsc6vZs3WviL | OMMC Announces RIP | adam_scholl | At the Omnicide Machine Manufacturing Corporation, we work tirelessly to ensure an omnicide-free future. That’s why we’re excited to announce our Responsible Increase Policy (RIP)—our internal protocol for managing any risks that arise as we create increasingly omnicidal machines.
Inspired by the risk-management framew... | 2024-04-01 |
https://www.lesswrong.com/posts/FGvN7aKgdmsTqJ6qF/gradient-descent-on-the-human-brain | FGvN7aKgdmsTqJ6qF | Gradient Descent on the Human Brain | Jozdien | TL;DR: Many alignment research proposals often share a common motif: figure out how to enter a basin of alignment / corrigibility for human-level models, and then amplify to more powerful regimes while generalizing gracefully. In this post we lay out a research agenda that comes at this problem from a different directi... | 2024-04-01 |
https://www.lesswrong.com/posts/E6gydqTEGK3sa66d9/lesswrong-after-dark-a-new-side-of-lesswrong | E6gydqTEGK3sa66d9 | LessWrong: After Dark, a new side of LessWrong | So8res | The LessWrong team has obviously been hard at work putting out their debut album. But another LessWrong feature also seems to have been released today, to less fanfare: LessWrong: After Dark, a branch of the site devoted to explicit discussion of sex and sexuality, where the LessWrong team finally gets to let loose the... | 2024-04-01 |
https://www.lesswrong.com/posts/wjFijaAkSCceqCgGF/coherence-of-caches-and-agents | wjFijaAkSCceqCgGF | Coherence of Caches and Agents | johnswentworth | There's a lot of confusion about what coherence means for agents, and what "coherence theorems" do and don't say about agents. In this post, I'll talk about some particularly simple notions of coherence in a particularly simple setting. We'll see what nontrivial things coherence has to say, at least in a simple kind of... | 2024-04-01 |
https://www.lesswrong.com/posts/EZjZJ6zAW5JnQReXc/do-i-count-as-e-acc-for-exclusion-purposes | EZjZJ6zAW5JnQReXc | Do I count as e/acc for exclusion purposes? | daniel-radetsky | EDIT: THIS IS NOT APRIL FOOLS RELATED
ALSO: This is specific to the LW scene in and near Berkeley, as this is the only place where e/acc exclusion is asserted to take place.
I haven't been around the LW scene for some time, but I understand it's common to exclude e/acc people from events. I further understan... | 2024-04-01 |
https://www.lesswrong.com/posts/nQwbDPgYvAbqAmAud/llms-for-alignment-research-a-safety-priority | nQwbDPgYvAbqAmAud | LLMs for Alignment Research: a safety priority? | abramdemski | A recent short story by Gabriel Mukobi illustrates a near-term scenario where things go bad because new developments in LLMs allow LLMs to accelerate capabilities research without a correspondingly large acceleration in safety research.
This scenario is disturbingly close to the situation we already find ourselves in. ... | 2024-04-04 |
https://www.lesswrong.com/posts/cna8uNxKo3yn3C6qY/death-with-awesomeness | cna8uNxKo3yn3C6qY | Death with Awesomeness | osmarks | Introduction
In the two years since the original publication of Death with Dignity, it has been clear that we're not back, it's so over, and that it has never been more over. AI capabilities leapt forward across domains, the general public noticed the existence of AI and threw money and GPUs at anything vaguely related... | 2024-04-01 |
https://www.lesswrong.com/posts/dBueknepD4rhuEcmb/notes-on-dwarkesh-patel-s-podcast-with-sholto-douglas-and | dBueknepD4rhuEcmb | Notes on Dwarkesh Patel’s Podcast with Sholto Douglas and Trenton Bricken | Zvi | Dwarkesh Patel continues to be on fire, and the podcast notes format seems like a success, so we are back once again.
This time the topic is how LLMs are trained, work and will work in the future. Timestamps are for YouTube. Where I inject my own opinions or takes, I do my best to make that explicit and clear.
This was... | 2024-04-01 |
https://www.lesswrong.com/posts/dkbMqExPkFEvebJJw/gpt-4-on-the-gradual-emergence-of-mechanized-intellect-a | dkbMqExPkFEvebJJw | [GPT-4] On the Gradual Emergence of Mechanized Intellect: A Treatise from the Year 1924 | tailcalled | Editors note: This treatise was found in the Global Preservation and Technology Archive - 4th Edition (GPT-4). It makes a compelling argument that artificial general intelligence will have a "slow takeoff", developing over centuries.
In the year of our Lord 1924, as humanity stands amidst the clanking machineries and h... | 2024-04-01 |
https://www.lesswrong.com/posts/cwiufyabZaAttivvk/the-evolution-of-humans-was-net-negative-for-human-values | cwiufyabZaAttivvk | The Evolution of Humans Was Net-Negative for Human Values | Zack_M_Davis | (Epistemic status: publication date is significant.)
Some observers have argued that the totality of "AI safety" and "alignment" efforts to date have plausibly had a negative rather than positive impact on the ultimate prospects for safe and aligned artificial general intelligence. This perverse outcome is possible bec... | 2024-04-01 |
https://www.lesswrong.com/posts/LAzmEFLYsQqYohd7m/self-explaining-neural-networks-the-interpretability | LAzmEFLYsQqYohd7m | Self Explaining Neural Networks, the interpretability technique no one seems to be talking about. | joshua-bello | Disclaimer: I'm very new to alignment as a whole. I wouldn't be surprised if this turned out to be a nothing burger.
This is the coolest paper I've seen in a while, yet I've never heard of the technique. It's not mentioned on blog posts about AI interpretability/AI safety, and I've found very few papers trying to build... | 2024-04-01 |
https://www.lesswrong.com/posts/wecoKPZMy83rLRkuc/protestants-trading-acausally | wecoKPZMy83rLRkuc | Protestants Trading Acausally | sustrik | Protestants believe in predestination. God has already decided who's going to hell and who's going to heaven.
This feels like a terrible incentive structure. If you are already predestined to get one of those places, why care? Why try to be good?
In reality though it works pretty well. Protestants are trying to be good ... | 2024-04-01 |
https://www.lesswrong.com/posts/yMCa9GkadHMk6rZDB/pluck-sensor-circuit | yMCa9GkadHMk6rZDB | Pluck Sensor Circuit | jkaufman | A while ago I finished the "user interface" portion of my electronic harp mandolin. I'm happy with the signals the piezos put out, but now I need some electrical engineering to get the signals into a computer where I'll be more at home.
Since I made a design with 13 piezos, I wanted something with at least that many a... | 2024-04-01 |
https://www.lesswrong.com/posts/4YSQ9GbFYyfkSfKtf/god-coin-a-modest-proposal | 4YSQ9GbFYyfkSfKtf | God Coin: A Modest Proposal | mahdi-complex | [cross-posted from the EA Forum]
Epistemic status: I may be a little nuts.
Content warning: may contain harm to some sacred cows.
Do not store up for yourselves treasures on earth, where moth and rust destroy, and where thieves break in and steal. But store up for yourselves treasures in heaven, where moth and rust do ... | 2024-04-01 |
https://www.lesswrong.com/posts/xbZ3BSAAjGCayGmyo/thousands-of-malicious-actors-on-the-future-of-ai-misuse | xbZ3BSAAjGCayGmyo | Thousands of malicious actors on the future of AI misuse | zershaaneh-qureshi | Announcing the results of a 2024 survey by Convergence Analysis. We’ve just posted the executive summary below, but you can read the full report here.
In the largest survey of its kind, Convergence Analysis surveyed 2,779 malicious actors on how they would misuse AI to catastrophic ends.
In previous work, we’ve explore... | 2024-04-01 |
https://www.lesswrong.com/posts/xEBESpJdhzDhtcoih/linkpost-agents-need-not-know-their-purpose | xEBESpJdhzDhtcoih | [LINKPOST] Agents Need Not Know Their Purpose | Capybasilisk | https://arxiv.org/abs/2402.09734
Ensuring artificial intelligence behaves in such a way that is aligned with human values is commonly referred to as the alignment challenge. Prior work has shown that rational agents, behaving in such a way that maximizes a utility function, will inevitably behave in such a way that is ... | 2024-04-01 |
https://www.lesswrong.com/posts/BK8AMsNHqFcdG8dvt/a-selection-of-randomly-selected-sae-features-1 | BK8AMsNHqFcdG8dvt | A Selection of Randomly Selected SAE Features | TheMcDouglas | Epistemic status - self-evident.
In this post, we interpret a small sample of Sparse Autoencoder features which reveal meaningful computational structure in the model that is clearly highly researcher-independent and of significant relevance to AI alignment.
Motivation
Recent excitement about Sparse Autoencoders (SAEs)... | 2024-04-01 |
https://www.lesswrong.com/posts/tBy4RvCzhYyrrMFj3/introducing-open-asteroid-impact | tBy4RvCzhYyrrMFj3 | [April Fools' Day] Introducing Open Asteroid Impact | Linch | null | 2024-04-01 |