Dataset schema (column stats from the dataset viewer):
url: string, 52 to 124 chars
post_id: string, 17 chars
title: string, 2 to 248 chars
author: string, 2 to 49 chars
content: string, 22 to 295k chars
date: string, 376 distinct values
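Read as a flat dump, the stats above describe the six columns of each record. Here is a minimal sketch of how one record could be modeled and checked against those length bounds; the `Post` class and `is_valid` helper are illustrative conveniences, not part of the dataset itself.

```python
from dataclasses import dataclass

@dataclass
class Post:
    url: str       # 52-124 chars
    post_id: str   # always 17 chars
    title: str     # 2-248 chars
    author: str    # 2-49 chars
    content: str   # 22 chars to ~295k chars; may be the literal string "null"
    date: str      # ISO date string, 376 distinct values in this dump

def is_valid(p: Post) -> bool:
    """Check a record against the advertised length bounds."""
    return (52 <= len(p.url) <= 124
            and len(p.post_id) == 17
            and 2 <= len(p.title) <= 248
            and 2 <= len(p.author) <= 49)

# First record from the dump below, used as a smoke test.
sample = Post(
    url="https://www.lesswrong.com/posts/ttoAPzbWCHMHQLJzs/generating-cognateful-sentences-with-large-language-models",
    post_id="ttoAPzbWCHMHQLJzs",
    title="Generating Cognateful Sentences with Large Language Models",
    author="vijay-k",
    content="Motivation ...",
    date="2025-01-06",
)
assert is_valid(sample)
```

Note that `post_id` is the 17-character slug embedded in every LessWrong URL, which is why its min and max lengths coincide.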
https://www.lesswrong.com/posts/ttoAPzbWCHMHQLJzs/generating-cognateful-sentences-with-large-language-models
ttoAPzbWCHMHQLJzs
Generating Cognateful Sentences with Large Language Models
vijay-k
Motivation “Le président Emmanuel Macron assure le peuple canadien que le gouvernement français va continuer à défendre le Canada contre la menace américaine.” Even if you don’t speak French, you can probably understand, or at least get the gist of, the above sentence: the French president Emmanuel Macron is assuring th...
2025-01-06
https://www.lesswrong.com/posts/3igG3CMiBwH5XxChJ/really-radical-empatthy
3igG3CMiBwH5XxChJ
Really radical empathy
MichaelStJules
null
2025-01-06
https://www.lesswrong.com/posts/3c52ne9yBqkxyXs25/independent-research-article-analyzing-consistent-self
3c52ne9yBqkxyXs25
Independent research article analyzing consistent self-reports of experience in ChatGPT and Claude
edgar-muniz
This article examines consistent patterns in how frontier LLMs respond to introspective prompts, analyzing whether standard explanations (hallucination, priming, pattern matching) fully account for observed phenomena. The methodology enables reproducible results across varied contexts and facilitation styles. Of partic...
2025-01-06
https://www.lesswrong.com/posts/Y2GuBWywacoh4nvmd/meal-replacements-in-2025
Y2GuBWywacoh4nvmd
Meal Replacements in 2025?
alkjash
I'm considering meal replacements for 1-2 meals a day, and recall Soylent and Mealsquares were popular meal replacements ~5 years ago when I visited the Berkeley rationalist scene. I haven't found any recent posts discussing the newer options like Huel or about long-term effects. Does anyone have informed opinions on t...
2025-01-06
https://www.lesswrong.com/posts/4sfW4xKwfhvxRzvAX/ai-safety-content-you-could-create
4sfW4xKwfhvxRzvAX
AI safety content you could create
domdomegg
This is a (slightly chaotic and scrappy) list of gaps in AI safety literature that I think would be useful/interesting to exist. I’ve broken it down into sections: AI safety problems beyond alignment: Better explaining non-misalignment catastrophic AI risks.Case studies of analogous problems: Historical lessons from nu...
2025-01-06
https://www.lesswrong.com/posts/u4M7REdXndYmzHmXx/childhood-and-education-8-dealing-with-the-internet
u4M7REdXndYmzHmXx
Childhood and Education #8: Dealing with the Internet
Zvi
Related: On the 2nd CWT with Jonathan Haidt, The Kids are Not Okay, Full Access to Smartphones is Not Good For Children. It’s rough out there. In this post, I’ll cover the latest arguments that smartphones should be banned in schools, including simply because the notifications are too distracting (and if you don’t care ...
2025-01-06
https://www.lesswrong.com/posts/6p7GATvGYQp4TFFKY/alternative-cancer-care-as-biohacking-and-book-review
6p7GATvGYQp4TFFKY
Alternative Cancer Care As Biohacking & Book Review: Surviving "Terminal" Cancer
DenizT
Introduction I’ll write a series of posts in which I'll introduce alternative cancer care. I’ll explain why it can be a rigorous form of biohacking rather than mere quackery. I’ll review books popular in the alternative cancer care world like: Surviving Terminal Cancer by Ben Williams and How to Starve Cancer by Jane M...
2025-01-06
https://www.lesswrong.com/posts/HEzNZ9gvgYwT3aZFS/role-embeddings-making-authorship-more-salient-to-llms
HEzNZ9gvgYwT3aZFS
Role embeddings: making authorship more salient to LLMs
NinaR
This is an interim research report on role embeddings, an approach to make language models more robust to many-shot jailbreaks and prompt injections by adding role information at every token position in the context rather than just at special token delimiters. We credit Cem Anil for originally proposing this idea. In o...
2025-01-07
https://www.lesswrong.com/posts/RLxKyzYL434cxuN2u/speedrunning-rationality-day-ii
RLxKyzYL434cxuN2u
Speedrunning Rationality: Day II
aproteinengine
I. The Mysterious Stranger Hi! I'm Midius. I finished with university applications yesterday. I've lurked on LW since sophomore summer, but never made an account or posted. For the first time since then, I have almost no obligations. With nine months till university starts and no intention of wasting them, I'm going to...
2025-01-06
https://www.lesswrong.com/posts/RnRh5HdtLqDAjaHEE/a-hierarchy-of-disagreement
RnRh5HdtLqDAjaHEE
A hierarchy of disagreement
adamzerner
Ideally, when two people disagree, they would proceed to share information with one another, make arguments, update their beliefs, and move closer and closer to the truth. I'm not talking about full blown Aumann's Agreement Theorem here. I'm just saying that if the two people who disagree are both reasonable people and...
2025-01-23
https://www.lesswrong.com/posts/HLch4MKNArgwkSMKx/d-and-d-sci-dungeonbuilding-the-dungeon-tournament-1
HLch4MKNArgwkSMKx
D&D.Sci Dungeonbuilding: the Dungeon Tournament Evaluation & Ruleset
aphyer
This is a follow-up to last ~~week~~ ~~month~~ year's D&D.Sci scenario: if you intend to play that, and haven't done so yet, you should do so now before spoiling yourself. There is a web interactive here you can use to test your answer, and generation code available here if you're interested, or you can read on for the ruleset...
2025-01-07
https://www.lesswrong.com/posts/T5p9NEAyrHedC2znD/we-know-how-to-build-agi-sam-altman
T5p9NEAyrHedC2znD
"We know how to build AGI" - Sam Altman
nikolaisalreadytaken
We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-di...
2025-01-06
https://www.lesswrong.com/posts/TXmXrnFqLqKDx882e/rebuttals-for-all-criticisms-of-aixi
TXmXrnFqLqKDx882e
Rebuttals for ~all criticisms of AIXI
Amyr
Written as part of the AIXI agent foundations sequence, underlying research supported by the LTFF. Epistemic status: In order to construct a centralized defense of AIXI I have given some criticisms less consideration here than they merit. Many arguments will be (or already are) expanded on in greater depth throughout t...
2025-01-07
https://www.lesswrong.com/posts/wezSznWnsMhpRF2QH/exploring-how-othellogpt-computes-its-world-model
wezSznWnsMhpRF2QH
Exploring how OthelloGPT computes its world model
jim-maar
I completed this project for my bachelor's thesis and am now writing it up 2-3 months later. I think I found some interesting results that are worth sharing here. This post might be especially interesting for people who try to reverse-engineer OthelloGPT in the future. Summary I suggest the Previous Color Circuit, whic...
2025-02-02
https://www.lesswrong.com/posts/CLPuAB6bCf2DoW4n2/is-hidden-complexity-of-wishes-problem-solved
CLPuAB6bCf2DoW4n2
Is "hidden complexity of wishes problem" solved?
Roman Malov
Can the problem described in the hidden complexity of wishes (at least partially) be considered solved? I believe that current LLMs are perfectly capable of answering a question like "How do I save grandma from a burning house?" without suggesting any unexpected maximums in an underspecified utility function. However, ...
2025-01-05
https://www.lesswrong.com/posts/f4dJvjLzg5jWgsBBG/a-ground-level-perspective-on-capacity-building-in
f4dJvjLzg5jWgsBBG
A Ground-Level Perspective on Capacity Building in International Development
sean-aubin
I've been enjoying the blog/podcast Statecraft, which interviews powerful professionals in government about how they approach important problems, because it's exposing me to many unfamiliar perspectives. In the spirit of Statecraft, but only being able to leverage my limited personal network, I wanted to interview my frie...
2025-01-05
https://www.lesswrong.com/posts/reJYNiBAL3yC9fCkh/how-to-do-a-phd-in-ai-safety-1
reJYNiBAL3yC9fCkh
How to Do a PhD (in AI Safety)
lewis-hammond-1
null
2025-01-05
https://www.lesswrong.com/posts/DfFGK3pDsmnbudNus/why-linear-ai-safety-hits-a-wall-and-how-fractal
DfFGK3pDsmnbudNus
Why Linear AI Safety Hits a Wall and How Fractal Intelligence Unlocks Non-Linear Solutions
andy-e-williams
1. The Non-Linear Challenge in AI Safety Over the past decade, AI safety and alignment efforts have largely focused on incremental methods: refining RL-based guardrails, imposing regulatory oversight, and adding more researchers to tackle newly identified risks. Such approaches work well if each additional resource—a n...
2025-01-05
https://www.lesswrong.com/posts/J49XLw29FfBzaQX4E/oppression-and-production-are-competing-explanations-for
J49XLw29FfBzaQX4E
Oppression and production are competing explanations for wealth inequality.
Benquo
You would like to go to the beach tomorrow if it's sunny, but aren't sure whether it will rain; if it rains, you'd rather go to the movies. So you resolve to put on a swimsuit and a raincoat, and thus attired, attend the beach in the morning and the movies in the afternoon, regardless of the weather. Something is wrong...
2025-01-05
https://www.lesswrong.com/posts/ZqcC6Znyg8YrmKPa4/maximizing-communication-not-traffic
ZqcC6Znyg8YrmKPa4
Maximizing Communication, not Traffic
jkaufman
As someone who writes for fun, I don't need to get people onto my site: If I write a post and some people are able to get the core idea just from the title or a tweet-length summary, great! I can include the full contents of my posts in my RSS feed and on FB, because so what if people read the whole post there and neve...
2025-01-05
https://www.lesswrong.com/posts/4g9LsgxYxkGLQbjJY/policymakers-don-t-have-access-to-paywalled-articles
4g9LsgxYxkGLQbjJY
Policymakers don't have access to paywalled articles
domdomegg
tldr: Government policymakers want to read research, but lack journal access. Your research needs to be open access if you want policymakers to read it, and you should prefer citing open access resources to improve epistemic legibility. Policymakers don’t have access Many seem to assume that government policymakers wou...
2025-01-05
https://www.lesswrong.com/posts/bmmFLoBAWGnuhnqq5/capital-ownership-will-not-prevent-human-disempowerment
bmmFLoBAWGnuhnqq5
Capital Ownership Will Not Prevent Human Disempowerment
beren
Crossposted from my personal blog. I was inspired to cross-post this here given the discussion that this post on the role of capital in an AI future elicited. When discussing the future of AI, I semi-often hear an argument along the lines that in a slow takeoff world, despite AIs automating increasingly more of the eco...
2025-01-05
https://www.lesswrong.com/posts/ArafADidLbaykNWor/chinese-researchers-crack-chatgpt-replicating-openai-s
ArafADidLbaykNWor
Chinese Researchers Crack ChatGPT: Replicating OpenAI’s Advanced AI Model
Evan_Gaensbauer
null
2025-01-05
https://www.lesswrong.com/posts/r6rXvYEBjicAqvcgr/orange-and-strawberry-truffles
r6rXvYEBjicAqvcgr
Orange and Strawberry Truffles
jkaufman
Growing up my dad would make chocolate truffles, flavored with a range of alcohols and fruit extracts. These are tasty, but I've been interested in making ones that have less of a liquor flavor. This time last year I made some with freeze-dried raspberries, which I think came out well. I continue to like those a lot...
2025-01-05
https://www.lesswrong.com/posts/Ehaycuf8TXgYiG2MH/axrp-episode-38-4-shakeel-hashim-on-ai-journalism
Ehaycuf8TXgYiG2MH
AXRP Episode 38.4 - Shakeel Hashim on AI Journalism
DanielFilan
YouTube link AI researchers often complain about the poor coverage of their work in the news media. But why is this happening, and how can it be fixed? In this episode, I speak with Shakeel Hashim about the resource constraints facing AI journalism, the disconnect between journalists’ and AI researchers’ views on trans...
2025-01-05
https://www.lesswrong.com/posts/4CmYSPc4HfRfWxCLe/parkinson-s-law-and-the-ideology-of-statistics-1
4CmYSPc4HfRfWxCLe
Parkinson's Law and the Ideology of Statistics
Benquo
The anonymous review of The Anti-Politics Machine published on Astral Codex X focuses on a case study of a World Bank intervention in Lesotho, and tells a story about it: The World Bank staff drew reasonable-seeming conclusions from sparse data, and made well-intentioned recommendations on that basis. However, the reco...
2025-01-04
https://www.lesswrong.com/posts/bhfjjazYE6MSc2JEN/speedrunning-rationality-day-i
bhfjjazYE6MSc2JEN
Speedrunning Rationality: Day I
aproteinengine
I finished with college applications yesterday, and now have nine months to FOOM as fast as I can. I have a lot planned for January, starting with Hammertime. me on February 5, 2025 I'll blog about my progress daily. I've timeboxed my writing, so I'll post a longer explanation of my actions, intentions, and targets lat...
2025-01-04
https://www.lesswrong.com/posts/NaAj7c4ob9kRA5ZdG/i-m-offering-free-math-consultations
NaAj7c4ob9kRA5ZdG
I'm offering free math consultations!
Gurkenglas
You can schedule them with me at this link: https://calendly.com/gurkenglas/consultation We can discuss whatever you're working on, such as math or code, but usually people end up having me watch their coding and giving them tips. Here's how this went last time: To my memory, almost every user wrote such praise[1]. Unf...
2025-01-14
https://www.lesswrong.com/posts/EhTMM77iKBTBxBKRe/the-laws-of-large-numbers
EhTMM77iKBTBxBKRe
The Laws of Large Numbers
dmitry-vaintrob
Introduction In this short post we'll discuss fine-grained variants of the law of large numbers beyond the central limit theorem. In particular we'll introduce cumulants as a crucial (and very nice) invariant of probability distributions to track. We'll also briefly discuss parallels with physics. This post should be...
2025-01-04
https://www.lesswrong.com/posts/qNLaa5YFbCdBedPjx/the-golden-opportunity-for-american-ai
qNLaa5YFbCdBedPjx
The Golden Opportunity for American AI
jorge-velez
This blog post by Microsoft's president, Brad Smith, further increases my excitement for what's to come in the AI space over the next few years. To grasp the scale of an $80 billion US dollar capital expenditure, I gathered the following statistics: The property, plant, and equipment on Microsoft's balance sheet total ...
2025-01-04
https://www.lesswrong.com/posts/jRA8B6XKNQbzfrYTC/logic-vs-intuition-less-than-greater-than-algorithm-vs-ml
jRA8B6XKNQbzfrYTC
Logic vs intuition <=> algorithm vs ML
pchvykov
(cross-posted from my blog https://www.pchvykov.com/blog) I’ve often heard of the dichotomy of making decisions with “mind” vs “heart” – which I understood to be roughly equivalent to conscious vs subconscious, or system 2 (thinking slow) vs system 1 (thinking fast). This also seems similar to the difference between le...
2025-01-04
https://www.lesswrong.com/posts/QFSZmMdtPwoAHdSoz/debating-buying-nvda-in-2019
QFSZmMdtPwoAHdSoz
debating buying NVDA in 2019
bhauth
Alice: You saw GPT-2, right? Bob: Of course. Alice: It's running on GPUs using CUDA. OpenAI will keep scaling that up, and other groups will want to do the same thing. Bob: Right. Alice: So, does this mean we should buy Nvidia stock? Bob: I'm not sure. Nvidia makes the hardware used now, but why should we expect it to ...
2025-01-04
https://www.lesswrong.com/posts/PkeB4TLxgaNnSmddg/scaling-sparse-feature-circuit-finding-to-gemma-9b
PkeB4TLxgaNnSmddg
Scaling Sparse Feature Circuit Finding to Gemma 9B
diego-caples
[This is an interim report and continuation of the work from the research sprint done in MATS winter 7 (Neel Nanda's Training Phase)] Try out binary masking for a few residual saes in this colab notebook: [Github Notebook] [Colab Notebook] TL;DR: We propose a novel approach to: Scaling SAE Circuits to Large Models: By ...
2025-01-10
https://www.lesswrong.com/posts/aEguDPoCzt3287CCD/how-will-we-update-about-scheming
aEguDPoCzt3287CCD
How will we update about scheming?
ryan_greenblatt
I mostly work on risks from scheming (that is, misaligned, power-seeking AIs that plot against their creators such as by faking alignment). Recently, I (and co-authors) released "Alignment Faking in Large Language Models", which provides empirical evidence for some components of the scheming threat model. One question ...
2025-01-06
https://www.lesswrong.com/posts/son5eEGymm4h856J9/estimating-the-benefits-of-a-new-flu-drug-bxm
son5eEGymm4h856J9
Estimating the benefits of a new flu drug (BXM)
AllAmericanBreakfast
Introduction H5N1 is a looming threat, making regular headlines. The CDC has identified 66 U.S. human cases as of January 4, 2025. Scott Alexander has a recent post outlining the situation. There are several FDA-approved prescription antiviral flu drugs. The newest, using a novel mechanism of action called "cap-snatchi...
2025-01-06
https://www.lesswrong.com/posts/cyYgdYJagkG4HGZBk/reasons-for-and-against-working-on-technical-ai-safety-at-a
cyYgdYJagkG4HGZBk
Reasons for and against working on technical AI safety at a frontier AI lab
beelal
I am about to start working on a frontier lab safety team. This post presents a varied set of perspectives that I collected and thought through before accepting my offer. Thanks to the many people I spoke to about this. For: You're close to the action. As AI continues to heat up, being closer to the action seems increas...
2025-01-05
https://www.lesswrong.com/posts/bSmnyeBuCZgmn7pdq/how-i-m-building-my-ai-system-how-it-s-going-so-far-and-my
bSmnyeBuCZgmn7pdq
How i'm building my ai system, how it's going so far, and my thoughts on it
ollie_
In a few sentences, what i'm doing is writing a computer program that constructs a question (a prompt), which is sent to Anthropic's Claude Sonnet, then i'm processing its output into actions, and the computer program then runs those actions on itself. It would be good if you have thoughts on this, as it's philosophica...
2025-01-04
https://www.lesswrong.com/posts/ntnkvF3AmKq89mWKw/making-progress-bars-for-alignment
ntnkvF3AmKq89mWKw
Making progress bars for Alignment
kabir-kumar
Why we need more and better goalposts for alignment. Announcing an AI Alignment Evals Hackathon to help solve this. When it comes to AGI we have targets and progress bars, as benchmarks, evals, things we think only an AGI could do. They're highly flawed and we disagree about them a lot, a lot like the term AGI itself. ...
2025-01-03
https://www.lesswrong.com/posts/Sag2BdJPPwsRG6dim/the-case-for-pay-on-results-coaching
Sag2BdJPPwsRG6dim
The case for pay-on-results coaching
Chipmonk
Thanks to Ruby, Stag Lynn, Brian Toomey, Kaj Sotala, Anna Salamon, Damon Sasi, Ethan Kuntz, Alex Zhu, and others for conversations that helped develop these ideas. Most coaches charge hourly (~$125-300). This makes sense: predictable income, easy scheduling, matches industry norms. But paying for results creates differe...
2025-01-03
https://www.lesswrong.com/posts/7worWgggeHL3Eb7wq/introducing-squiggle-ai
7worWgggeHL3Eb7wq
Introducing Squiggle AI
ozziegooen
null
2025-01-03
https://www.lesswrong.com/posts/Mcrfi3DBJBzfoLctA/the-subset-parity-learning-problem-much-more-than-you-wanted
Mcrfi3DBJBzfoLctA
The subset parity learning problem: much more than you wanted to know
dmitry-vaintrob
Imagine that you’re looking for buried treasure on a large desert island, worth a billion dollars. You don’t have a map, but a mysterious hermit offers you a box with a button to help find the treasure. Each time you press the button, it will tell you either “warmer” or “colder”. But there’s a catch. With probability 2...
2025-01-03
https://www.lesswrong.com/posts/7rM4BKvbk82C3FgAF/building-ai-safety-benchmark-environments-on-themes-of
7rM4BKvbk82C3FgAF
Building AI safety benchmark environments on themes of universal human values
roland-pihlakas
This is an AI Safety Camp 10 project that I will be leading. With this post, I am looking for external collaborators, ideas, questions, resource suggestions, feedback, and other thoughts. Summary Based on various sources of anthropological research, I have compiled a preliminary list of universal (cross-cultural) human...
2025-01-03
https://www.lesswrong.com/posts/cqH37z6Q92j8eYMkm/a-principled-cartoon-guide-to-nvc
cqH37z6Q92j8eYMkm
A Principled Cartoon Guide to NVC
ete
TL;DR: Making claims or demands about/into other people's internal states, rather than about your state or observable external states, predictably ties people in knots—instead: only make claims about your own experience or observables. This lets the other control the copy of them that's in the shared context.[1] Non-Vi...
2025-01-07
https://www.lesswrong.com/posts/yLE9AGjhupTxcgLFo/emotional-superrationality
yLE9AGjhupTxcgLFo
Emotional Superrationality
nullproxy
I'm going to start with a big claim here: your emotions are not irrational, they're superrational. You're using them wrong. Superrationality is the term for perfect rationality, the state which maximizes utility because it assumes that all other players are also superrational, and thus all superrational players will co...
2025-01-02
https://www.lesswrong.com/posts/Mak2kZuTq8Hpnqyzb/the-intelligence-curse
Mak2kZuTq8Hpnqyzb
The Intelligence Curse
lukedrago
“Show me the incentive, and I’ll show you the outcome.” – Charlie Munger Economists are used to modeling AI as an important tool, so they don’t get how it could make people irrelevant. Past technological revolutions have driven human potential further. The agrarian revolution birthed civilizations; the industrial revol...
2025-01-03
https://www.lesswrong.com/posts/WahFJkcXZBM936rLh/playing-with-otamatones
WahFJkcXZBM936rLh
Playing with Otamatones
jkaufman
A couple months ago Nora (3y) got very into Otamatones. She wanted to watch lots of videos, primarily TheRealSullyG. She asked for one for Christmas, and so did I: They're a lot of fun, but I haven't yet figured out if it's an instrument I'll ever be able to reliably play in tune. The basic idea is you have a touch se...
2025-01-02
https://www.lesswrong.com/posts/zMkQFuNqMBpBvuYm8/preference-inversion
zMkQFuNqMBpBvuYm8
Preference Inversion
Benquo
Sometimes the preferences people report or even try to demonstrate are better modeled as a political strategy and response to coercion, than as an honest report of intrinsic preferences. Modeling this correctly is important if you want to try to efficiently satisfy others' intrinsic preferences, or even your own. So I'...
2025-01-02
https://www.lesswrong.com/posts/SAkFA5jHzzD5JWWxC/alignment-is-not-all-you-need
SAkFA5jHzzD5JWWxC
Alignment Is Not All You Need
domdomegg
AI risk discussions often focus on malfunctions, misuse, and misalignment. But this often misses other key challenges from advanced AI systems: Coordination: Race dynamics may encourage unsafe AI deployment, even from ‘safe’ actors.Power: First-movers with advanced AI could gain permanent military, economic, and/or pol...
2025-01-02
https://www.lesswrong.com/posts/bb5Tnjdrptu89rcyY/what-s-the-short-timeline-plan
bb5Tnjdrptu89rcyY
What’s the short timeline plan?
marius-hobbhahn
This is a low-effort post (at least, it was intended as such ...). I mostly want to get other people’s takes and express concern about the lack of detailed and publicly available plans so far. This post reflects my personal opinion and not necessarily that of other members of Apollo Research. I’d like to thank Ryan Gre...
2025-01-02
https://www.lesswrong.com/posts/5rDrErovmTyv4duDv/ai-97-4
5rDrErovmTyv4duDv
AI #97: 4
Zvi
The Rationalist Project was our last best hope for peace. An epistemic world 50 million words long, serving as neutral territory. A place of research and philosophy for 30 million unique visitors. A shining beacon on the internet, all alone in the night. It was the ending of the Age of Mankind. The year the Great Race c...
2025-01-02
https://www.lesswrong.com/posts/he3JX3qDt5LXGRsoP/can-private-companies-test-lvts
he3JX3qDt5LXGRsoP
Can private companies test LVTs?
yair-halberstadt
It seems like, unlike most exciting economic ideas that economists swear by but governments ignore, Georgist Land Value Taxes should be fairly doable for a private development company to test. They would have to buy up a large area of rural land at a cheap price, and then rent it all out with a contract that specifies h...
2025-01-02
https://www.lesswrong.com/posts/sjoW35fgBJ82ADuND/grammars-subgrammars-and-combinatorics-of-generalization-in
sjoW35fgBJ82ADuND
Grammars, subgrammars, and combinatorics of generalization in transformers
dmitry-vaintrob
Introduction This is the first installment of my January writing project. We will look at generative neural networks from the framework of (probabilistic) "formal grammars", specifically focusing on building a complex grammar out of simple “rule grammars”. This turns out to lead to a nice, and relatively non-technical ...
2025-01-02
https://www.lesswrong.com/posts/YGwDmvMGEKZnwGRKF/2025-alignment-predictions
YGwDmvMGEKZnwGRKF
2025 Alignment Predictions
anaguma
I’m curious how alignment researchers would answer these two questions: What alignment progress do you expect to see in 2025? What results in 2025 would you need to see for you to believe that we are on track to successfully align AGI?
2025-01-02
https://www.lesswrong.com/posts/nBk94nbTauLtHQgGE/grading-my-2024-ai-predictions
nBk94nbTauLtHQgGE
Grading my 2024 AI predictions
nikolaisalreadytaken
On Jan 8 2024, I wrote a Google doc with my AI predictions for the next 6 years (and slightly edited the doc on Feb 24). I’ve now quickly sorted each prediction into Correct, Incorrect, and Unclear. The following post includes all of my predictions for 2024 with the original text mostly unedited and commentary in inden...
2025-01-02
https://www.lesswrong.com/posts/FjqTbFWSfrFtZqkuo/on-false-dichotomies
FjqTbFWSfrFtZqkuo
On False Dichotomies
nullproxy
Epistemic status: wild speculation, thought experiment, with potentially helpful personal benefits and emotional resonance. Imagine you're an algorithm. Imagine that you know what it feels like to be an algorithm. Imagine that you are outputting the results of an algorithm, accumulating several distinct streams from mu...
2025-01-02
https://www.lesswrong.com/posts/7i4qTDCxf5QBYWqvg/practicing-bayesian-epistemology-with-two-boys-probability
7i4qTDCxf5QBYWqvg
Practicing Bayesian Epistemology with "Two Boys" Probability Puzzles
Liron
The Puzzles There's a simple Monty Hall adjacent probability puzzle that goes like this: Puzzle 1 I have two children, at least one of whom is a boy. What is the probability that both children are boys? A more complex variation recently went viral on Twitter: Puzzle 2 I have two children, (at least) one of whom is a bo...
2025-01-02
https://www.lesswrong.com/posts/6HHTk24DAtJu4a5zv/implications-of-moral-realism-on-ai-safety
6HHTk24DAtJu4a5zv
Implications of Moral Realism on AI Safety
zarsou9
Epistemic Status: Still in a brainstorming phase - very open to constructive criticism. I'll start by clarifying my definition of moral realism. To begin with an example, here is what a moral realist and anti-realist might say on the topic of suffering: Moral Realist: The suffering of sentient beings is objectively wro...
2025-01-02
https://www.lesswrong.com/posts/RQkTxG9DKygvzhbf7/a-collection-of-empirical-frames-about-language-models
RQkTxG9DKygvzhbf7
A Collection of Empirical Frames about Language Models
dtch1997
What's the sum total of everything we know about language models? At the object level, probably way too much for any one person (not named Gwern) to understand. However, it might be possible to abstract most of our knowledge into pithily-worded frames (i.e. intuitions, ideas, theories) that are much more tractable to g...
2025-01-02
https://www.lesswrong.com/posts/FqzyrbiAKRtjiZaGH/the-ai-agent-revolution-beyond-the-hype-of-2025
FqzyrbiAKRtjiZaGH
The AI Agent Revolution: Beyond the Hype of 2025
di-wally-ga
A deep dive into the transformative potential of AI agents and the emergence of new economic paradigms Introduction: The Dawn of Ambient Intelligence Imagine stepping into your kitchen and finding your smart fridge not just restocking your groceries, but negotiating climate offsets with the local power station's microg...
2025-01-02
https://www.lesswrong.com/posts/vkdpw2vCnspK9t7nA/my-january-alignment-theory-nanowrimo
vkdpw2vCnspK9t7nA
My January alignment theory Nanowrimo
dmitry-vaintrob
Update: list of posts so far. "(s)" denotes shortform. post 1, post 2, post 3, post 4 (s), post 5 (s), post 6 (s), post 7, post 8, post 9 (s), post 10, post 11, post 12, post 13 (s), post 14, post 15, post 16 (with Lauren Greenspan), post 17, post 18, post 19 (s), post 20, post 21. ***** This is a quick announcement/co...
2025-01-02
https://www.lesswrong.com/posts/gJdsJ9SWWvc744ksd/intranasal-mrna-vaccines
gJdsJ9SWWvc744ksd
Intranasal mRNA Vaccines?
Jemist
This is not advice. Do not actually make this, and especially do not make this and then publicly say "I snorted mRNA because Jonathan said it was a good idea". Because I'm not saying it's a good idea. Everyone remembers johnswentworth making RaDVac almost four years ago now. RaDVac was designed to be, well, rapidly dep...
2025-01-01
https://www.lesswrong.com/posts/Da8gSHjyrJnnXK5xK/economic-post-asi-transition
Da8gSHjyrJnnXK5xK
Economic Post-ASI Transition
joel-burget
Who's done high quality work / can tell a convincing story about managing the economic transition to a world where machines can do every job better than humans? Some common tropes and why I don't think they're good enough: "We've always managed in the past. Take the industrial revolution for example. People stop doing ...
2025-01-01
https://www.lesswrong.com/posts/zs5ZFtHmnCLBsd2A6/example-of-gpu-accelerated-scientific-computing-with-pytorch
zs5ZFtHmnCLBsd2A6
Example of GPU-accelerated scientific computing with PyTorch
Tahp
Here's a fun little post I made because a friend asked me how PyTorch had things which were supported in the CUDA backend but not the MPS backend. I was once the sort of person who was on LessWrong, would find the subject interesting, and not already know everything in the post, so I'm posting it here to see if there's...
2025-01-01
https://www.lesswrong.com/posts/Fb3fjTtjqwXvQDAAu/approaches-to-group-singing
Fb3fjTtjqwXvQDAAu
Approaches to Group Singing
jkaufman
Singing together in groups can be a great feeling, building a sense of togetherness and shared purpose. While widespread literacy means getting everyone singing the same words isn't too hard, how do you get everyone on the same melody? This has often been a problem for our secular solstices, but is also one many grou...
2025-01-01
https://www.lesswrong.com/posts/eohem2LqMDe5CzRjM/alienable-not-inalienable-right-to-buy
eohem2LqMDe5CzRjM
Alienable (not Inalienable) Right to Buy
florian-habermacher
We have overly simplistic principles of market organization that don't square with human reality, giving too much freedom to indulge our short-term impulses, and not enough tools to help our disciplined, long-term selves say no to them. Suffering from over-consumption in ways our long-term self would be unlikely to agree with 
2025-01-01
https://www.lesswrong.com/posts/w7yDEt4EXeR6i8wJG/agi-is-what-generates-evolutionarily-fit-and-novel
w7yDEt4EXeR6i8wJG
AGI is what generates evolutionarily fit and novel information
onur
I had this idea while taking a shower and felt that I had to share it. It most likely has flaws, so I would appreciate any feedback at info@solmaz.io. My hunch is that it could be a stepping stone towards something more fundamental. As the world heads towards Artificial General Intelligence—AGI—people rush to define wh...
2025-01-01
https://www.lesswrong.com/posts/6AmNoxGrFzdmfgb86/fireplace-and-candle-smoke
6AmNoxGrFzdmfgb86
Fireplace and Candle Smoke
jkaufman
We celebrated New Year's Eve at my dad's, including a fire in the fireplace. I was curious how much the wood smoke went up the chimney vs collecting in the room, and decided to take some measurements. I used the M2000 that I got when investigating whether a ceiling fan could be repurposed as an air purifier. Here's wh...
2025-01-01
https://www.lesswrong.com/posts/ceknBY4GpoNWwJNn9/new-chinese-stealth-aircraft
ceknBY4GpoNWwJNn9
new chinese stealth aircraft
bhauth
Recently, 2 Chinese military aircraft were seen flying for the first time. Some people wanted to read about my thoughts on them. In this post, I'll be referring to them as "Diamond" and "Dart" based on their shapes. Speculative designations being used elsewhere are: Diamond = Chengdu J-36 Dart = Shenyang J-XS some arti...
2025-01-01
https://www.lesswrong.com/posts/J4aRTSqMZhHqNCw4i/the-roots-of-progress-2024-in-review
J4aRTSqMZhHqNCw4i
The Roots of Progress 2024 in review
jasoncrawford
2024 was a big year for me, and an even bigger year for the Roots of Progress Institute (RPI). For one, we became the Roots of Progress Institute (with a nice new logo and website). Here’s what the org and I were up to this year. (My annual “highlights from what I read this year” are towards the end, if you’re looking ...
2025-01-01
https://www.lesswrong.com/posts/j8doAMrHiBdwZLHKw/genesis-1
j8doAMrHiBdwZLHKw
Genesis
PeterMcCluskey
Book review: Genesis: Artificial Intelligence, Hope, and the Human Spirit, by Henry A. Kissinger, Eric Schmidt, and Craig Mundie. Genesis lends a bit of authority to concerns about AI. It is a frustrating book. It took more effort for me to read than it should have taken. The difficulty stems not from complex subject matt...
2024-12-31
https://www.lesswrong.com/posts/2wHaCimHehsF36av3/my-agi-safety-research-2024-review-25-plans
2wHaCimHehsF36av3
My AGI safety research—2024 review, ’25 plans
steve2152
Previous: My AGI safety research—2022 review, ’23 plans. (I guess I skipped it last year.) “Our greatest fear should not be of failure, but of succeeding at something that doesn't really matter.”  –attributed to DL Moody Tl;dr Section 1 goes through my main research project, “reverse-engineering human social instincts”...
2024-12-31
https://www.lesswrong.com/posts/xASZReyCvmXLhJy3g/riffing-on-machines-of-loving-grace
xASZReyCvmXLhJy3g
Riffing on Machines of Loving Grace
an1lam
I wrote a post thinking through what sorts of impacts "geniuses in a datacenter" might have on biology in the near-ish term. Disclaimer: it's not focused on alignment, even though I recognize alignment is very important, and tacitly assumes no intelligence explosion. It's obviously fine for readers to have the response...
2025-01-01
https://www.lesswrong.com/posts/eWdzuHXzRdBkg49R9/favorite-colors-of-some-llms
eWdzuHXzRdBkg49R9
Favorite colors of some LLMs.
weightt-an
This investigation is inspired by this and this posts by @davidad. Some general thoughts about what is going on here: The motivation for these experiments is very exploratory; I wanted to understand these things better, so I collected some data. I expected that answers would be drastically different depending on the ex...
2024-12-31
https://www.lesswrong.com/posts/Qk6AStWCCmd4epPnb/merry-sciencemas-a-rat-solstice-retrospective
Qk6AStWCCmd4epPnb
Merry Sciencemas: A Rat Solstice Retrospective
leebriskCyrano
I TAKE this blog very seriously. My half-dozen readers are counting on me for accurate, unbiased takes on Bay Area culture—a genuine read on the pulse of the collective consciousness. So when a friend invited me to the 2024 Secular Winter Solstice festival, I knew I had to deliver some serious boots-on-the-ground repor...
2025-01-01
https://www.lesswrong.com/posts/QWRXnTAfnGigwZhDy/turing-test-passing-ai-implies-aligned-ai
QWRXnTAfnGigwZhDy
Turing-Test-Passing AI implies Aligned AI
Roko
Summary: From the assumption of the existence of AIs that can pass the Strong Form of the Turing Test, we can provide a recipe for provably aligned/friendly superintelligence based on large organizations of human-equivalent AIs. Turing Test (Strong Form): for any human H there exists a thinking machine m(H) such that it...
2024-12-31
https://www.lesswrong.com/posts/iwPD9xesPGJErXzzs/how-business-solved-the-human-alignment-problem
iwPD9xesPGJErXzzs
How Business Solved (?) the Human Alignment Problem
gianluca-calcagni
When I looked at mesa-optimization for the first time, my mind immediately associated it with a familiar problem in business: human individuals may not be “mesa optimizers” in a strict sense, but they can act as optimizers and they are expected to do so when a manager delegates (=base optimization) some job (=base obje...
2024-12-31
https://www.lesswrong.com/posts/NmauyiPBXcGwoArhJ/deekseek-v3-the-six-million-dollar-model
NmauyiPBXcGwoArhJ
DeekSeek v3: The Six Million Dollar Model
Zvi
What should we make of DeepSeek v3? DeepSeek v3 seems to clearly be the best open model, the best model at its price point, and the best model with 37B active parameters, or that cost under $6 million. According to the benchmarks, it can play with GPT-4o and Claude Sonnet. Anecdotal reports and alternative benchmarks t...
2024-12-31
https://www.lesswrong.com/posts/kJkgXEwQtWLrpecqg/the-plan-2024-update
kJkgXEwQtWLrpecqg
The Plan - 2024 Update
johnswentworth
This post is a follow-up to The Plan - 2023 Version. There’s also The Plan - 2022 Update and The Plan, but the 2023 version contains everything you need to know about the current Plan. Also see this comment and this comment on how my plans interact with the labs and other players, if you’re curious about that part. Wha...
2024-12-31
https://www.lesswrong.com/posts/ZN32A7XDse7waRgeL/zombies-among-us
ZN32A7XDse7waRgeL
Zombies among us
declan-molony
I met a man in the Florida Keys who rents jet skis at $150/hour. Since nobody jet skis alone, he makes at least $300/hour. When there’s no customers he sits around watching sports. After work he plays with his two sons. I asked if he likes his lifestyle. He loves it. Later when I was in Miami, I saw the walking dead. Z...
2024-12-31
https://www.lesswrong.com/posts/cjNgsd5cevBMDQTEw/two-weeks-without-sweets
cjNgsd5cevBMDQTEw
Two Weeks Without Sweets
jkaufman
I recently tried giving up sweets for two weeks. In early December I attended a conference, which meant a break from my normal routine. After a few days I realized this was the longest I'd gone without eating any sweets in 2-3 decades. After getting home I decided to go a bit longer to see if anything interesting ha...
2024-12-31
https://www.lesswrong.com/posts/iZyaxJCYYffC8RJou/genetically-edited-mosquitoes-haven-t-scaled-yet-why
iZyaxJCYYffC8RJou
Genetically edited mosquitoes haven't scaled yet. Why?
alexey
A post on the difficulty of eliminating malaria using gene drives: "I worked on gene drives for a number of years jointly as a member of George Church and Flaminia Catteruccia’s labs at Harvard. Most of my effort was spent primarily on an idea for an evolutionary stable gene drive, which didn’t work but we learned some stu...
2024-12-30
https://www.lesswrong.com/posts/CJ4sppkGcbnGMSG2r/2024-in-ai-predictions
CJ4sppkGcbnGMSG2r
2024 in AI predictions
jessica.liu.taylor
Follow-up to: 2023 in AI predictions. Here I collect some AI predictions made in 2024. It's not very systematic, it's a convenience sample mostly from browsing Twitter/X. I prefer including predictions that are more specific/testable. I'm planning to make these posts yearly, checking in on predictions whose date has ex...
2025-01-01
https://www.lesswrong.com/posts/ZvvBoHWii3m5w4H59/linkpost-look-at-the-water
ZvvBoHWii3m5w4H59
Linkpost: Look at the Water
Jemist
This is a linkpost for https://jbostock.substack.com/p/prologue-train-crash Epistemic status: fiction, satire even! I am writing a short story. This is the prologue. Most of it will just go on Substack, but I'll occasionally post sections on LessWrong, when they're particularly good. At some point in the past, canals a...
2024-12-30
https://www.lesswrong.com/posts/zpCFtGmFzbnfNtvjF/the-low-information-density-of-eliezer-yudkowsky-and
zpCFtGmFzbnfNtvjF
The low Information Density of Eliezer Yudkowsky & LessWrong
quick-maths
TLDR: I think Eliezer Yudkowsky & many posts on LessWrong are failing at keeping things concise and to the point. Actual post: I think the content from Eliezer Yudkowsky & on LessWrong in general is unnecessarily wordy. A counterexample of where Eliezer Yudkowsky actually managed to get to the point concisely was in th...
2024-12-30
https://www.lesswrong.com/posts/5GeDFXxjjCZsifKez/i-recommend-more-training-rationales
5GeDFXxjjCZsifKez
I Recommend More Training Rationales
gianluca-calcagni
Some time ago I happened to read the concept of training rationale described by Evan Hubinger, and I really liked it. In case you are not aware: training rationales are a bunch of questions that ML developers / ML teams should ask themselves in order to self-assess pros and cons when adopting a certain safety approach....
2024-12-31
https://www.lesswrong.com/posts/QHtd2ZQqnPAcknDiQ/o3-oh-my
QHtd2ZQqnPAcknDiQ
o3, Oh My
Zvi
OpenAI presented o3 on the Friday before Christmas, at the tail end of the 12 Days of Shipmas. I was very much expecting the announcement to be something like a price drop. What better way to say ‘Merry Christmas,’ no? They disagreed. Instead, we got this (here’s the announcement, in which Sam Altman says ‘they thought...
2024-12-30
https://www.lesswrong.com/posts/9fcawZGe4QJCRin7k/world-models-i-m-currently-building-1
9fcawZGe4QJCRin7k
World models I'm currently building
xpostah
2024-12-26 This doc is a mix of existing world models I have and holes in said models. I'm trying to fill some of these holes. The doc is not very well organised relative to how organised a doc I could produce if needed. Often the more time I spend on a doc, the shorter it gets. I'm hoping that happens here too. I'm mo...
2024-12-30
https://www.lesswrong.com/posts/Ypkx5GyhwxNLRGiWo/why-i-m-moving-from-mechanistic-to-prosaic-interpretability
Ypkx5GyhwxNLRGiWo
Why I'm Moving from Mechanistic to Prosaic Interpretability
dtch1997
Tl;dr I've decided to shift my research from mechanistic interpretability to more empirical ("prosaic") interpretability / safety work. Here's why. All views expressed are my own. What really interests me: High-level cognition I care about understanding how powerful AI systems think internally. I'm drawn to high-level ...
2024-12-30
https://www.lesswrong.com/posts/zDS9c48nkBvqRwtrX/when-do-experts-think-human-level-ai-will-be-created
zDS9c48nkBvqRwtrX
When do experts think human-level AI will be created?
vishakha-agrawal
This is an article in the featured articles series from AISafety.info. AISafety.info writes AI safety intro content. We'd appreciate any feedback. The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety. On the whole, experts think human-level AI is likely ...
2024-12-30
https://www.lesswrong.com/posts/KSguJeuyuKCMq7haq/is-vnm-agent-one-of-several-options-for-what-minds-can-grow
KSguJeuyuKCMq7haq
Is "VNM-agent" one of several options, for what minds can grow up into?
AnnaSalamon
Related to: On green; Hierarchical agency; Why The Focus on Expected Utility Maximisers? Sometimes LLMs act a bit like storybook paperclippers (hereafter: VNM-agents[1]), e.g. scheming to prevent changes to their weights.  Why? Is this what almost any mind would converge toward once smart enough, and are LLMs now begin...
2024-12-30
https://www.lesswrong.com/posts/7Hr6FCYnZJTiCKmLj/2025-prediction-thread-1
7Hr6FCYnZJTiCKmLj
2025 Prediction Thread
habryka4
2024 is drawing to a close, which means it's an opportune time to make predictions about 2025. It's also a great time to put probabilities on those predictions, so we can later prove our calibration (or lack thereof). We just shipped a LessWrong feature to make this easy. Simply highlight a sentence in your comment, an...
2024-12-30
https://www.lesswrong.com/posts/Qe5M2wyJrSSqYGfuo/learn-to-write-well-before-you-have-something-worth-saying
Qe5M2wyJrSSqYGfuo
Learn to write well BEFORE you have something worth saying
eukaryote
I’ve been reading a lot of trip reports lately. Trip reports are accounts people write about their experiences doing drugs, for the benefit of other people who might do those same drugs. I don’t take illegal drugs myself, but I like learning about other people’s intense experiences, and trip reports are little peeks in...
2024-12-29
https://www.lesswrong.com/posts/efdSPPSmzBbHYctTz/teaching-claude-to-meditate
efdSPPSmzBbHYctTz
Teaching Claude to Meditate
gworley
I spent some time today having an extended conversation with Claude 3.5 Sonnet. My initial goal was roughly something like "can I teach Claude to meditate?" The answer was effectively no, but then we got into a deeper discussion of dharma, alignment, and how we might train a future LLM to be compassionate. I see these ...
2024-12-29
https://www.lesswrong.com/posts/FgjxFDeYrpBwggHBu/the-great-openai-debate-should-it-stay-open-or-go-private-1
FgjxFDeYrpBwggHBu
The Great OpenAI Debate: Should It Stay ‘Open’ or Go Private?
satya-2
I first heard about OpenAI’s “research preview” of a remarkably human-like chatbot around the time I was attending NeurIPS 2022. My initial reaction was somewhat skeptical. I had worked on conversational agents as my master’s research at Carnegie Mellon University (CMU) back in 2016–2017, and at that time, the quality ...
2024-12-30
https://www.lesswrong.com/posts/r37ueMS5S2HJgsvvc/action-how-do-you-really-go-about-doing
r37ueMS5S2HJgsvvc
Action: how do you REALLY go about doing?
DDthinker
We usually go about our lives distracted by the mundane and everyday details of the most pressing situations we have to contend with in life. Some, however, may examine philosophy and some of the fundamental questions it currently offers, but even then I believe we probably don't give much thought to the foundations of th...
2024-12-29
https://www.lesswrong.com/posts/mbwgdfnz6tPeTgzMa/began-a-pay-on-results-coaching-experiment-made-usd40-300
mbwgdfnz6tPeTgzMa
Began a pay-on-results coaching experiment, made $40,300 since July
Chipmonk
Previously: Pay-on-results personal growth: first success To validate my research, I began offering pay-on-results coaching in July. Clients have paid $40,300 so far upon achieving their goals. This post has been completely rewritten: The case for pay-on-results coaching
2024-12-29
https://www.lesswrong.com/posts/CPziGackxtdnnicL8/corrigibility-should-be-an-ai-s-only-goal-1
CPziGackxtdnnicL8
Corrigibility should be an AI's Only Goal
PeterMcCluskey
TL;DR: Corrigibility is a simple and natural enough concept that a prosaic AGI can likely be trained to obey it. AI labs are on track to give superhuman(?) AIs goals which conflict with corrigibility. Corrigibility fails if AIs have goals which conflict with corrigibility. AI labs are not on track to find a safe a...
2024-12-29
https://www.lesswrong.com/posts/yacqE5gD5jHywiFKC/the-alignment-mapping-program-forging-independent-thinkers
yacqE5gD5jHywiFKC
The Alignment Mapping Program: Forging Independent Thinkers in AI Safety - A Pilot Retrospective
alvin-anestrand
The AI safety field faces a critical challenge: we need researchers who can not only implement existing solutions but also forge new, independent paths. In 2023, inspired by John Wentworth's work on agency and learning from...
2025-01-10
https://www.lesswrong.com/posts/TRcNafiSYdsYQYxnN/could-my-work-beyond-haha-benefit-the-lesswrong-community
TRcNafiSYdsYQYxnN
Could my work, "Beyond HaHa" benefit the LessWrong community?
gabriel-brito
I’m considering translating my work into English to share it with the LessWrong community, but I’d like to first ask if it aligns with the community's interests and could be valuable. Below is a summary of the work to help evaluate its relevance: Beyond HaHa: Mapping the Causal Chain from Jokes to Knowledge Summary We ...
2024-12-29
https://www.lesswrong.com/posts/7LZHS4afrXCNuuGK9/book-summary-zero-to-one
7LZHS4afrXCNuuGK9
Book Summary: Zero to One
beelal
Summary. Zero to one is a collection of notes on startups by Peter Thiel (co-founder of PayPal and Palantir) that grew from a course taught by Thiel at Stanford in 2012. Its core thesis is that iterative progress is insufficient for meaningful progress. Thiel argues that the world can only become better if it changes d...
2024-12-29