| url | post_id | title | author | content | date |
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/SFsifzfZotd3NLJax/utility-engineering-analyzing-and-controlling-emergent-value | SFsifzfZotd3NLJax | Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs | Matrice Jacobine | null | 2025-02-12 |
https://www.lesswrong.com/posts/Sz6xDcS9JHdwMgXPz/why-you-maybe-should-lift-weights-and-how-to | Sz6xDcS9JHdwMgXPz | Why you maybe should lift weights, and How to. | samusasuke | Who is this post for? Someone who either:
- Wonders if they should start lifting weights, and could be convinced to do so.
- Wants to lift weights, and doesn't know where to begin. If this is you, you can skip this first section, though I'm guessing you don't know all the benefits yet.
The WHY
Benefits of ANY EXERCISE:
G... | 2025-02-12 |
https://www.lesswrong.com/posts/k9zLfq2nnqMGJpEAT/teaching-ai-to-reason-this-year-s-most-important-story | k9zLfq2nnqMGJpEAT | Teaching AI to reason: this year's most important story | Benjamin_Todd | This doesn't contain much new to LW readers. I wrote it to try to explain what's going on to a broader audience – I'm posting in case people find it helpful for that. Feedback welcome.
Most people think of AI as a pattern-matching chatbot – good at writing emails, terrible at real thinking.
They've missed something hug... | 2025-02-13 |
https://www.lesswrong.com/posts/tybYgrTYp3xTaBjox/how-do-the-ceos-respond-to-our-concerns | tybYgrTYp3xTaBjox | how do the CEOs respond to our concerns? | avery-liu | When a decent rationalist walks up to Sam Altman, for example, and presents our arguments for AI doom, how does he respond? What stops us from simply walking up to the people in charge of these training runs, explaining to them the concept of AI doom very slowly and carefully while rebutting all their counterarguments,... | 2025-02-11 |
https://www.lesswrong.com/posts/vvgND6aLjuDR6QzDF/my-model-of-what-is-going-on-with-llms | vvgND6aLjuDR6QzDF | My model of what is going on with LLMs | Amyr | Epistemic status: You probably already know if you want to read this kind of post, but in case you have not decided: my impression is that people are acting very confused about what we can conclude about scaling LLMs from the evidence, and I believe my mental model cuts through a lot of this confusion - I have tried to... | 2025-02-13 |
https://www.lesswrong.com/posts/r86BBAqLHXrZ4mWWA/what-goals-will-ais-have-a-list-of-hypotheses | r86BBAqLHXrZ4mWWA | What goals will AIs have? A list of hypotheses | daniel-kokotajlo | My colleagues and I have written a scenario in which AGI-level AI systems are trained around 2027 using something like the current paradigm: LLM-based agents (but with recurrence/neuralese) trained with vast amounts of outcome-based reinforcement learning on diverse challenging short, medium, and long-horizon tasks, wi... | 2025-03-03 |
https://www.lesswrong.com/posts/bsTzgG3cRrsgbGtCc/extended-analogy-between-humans-corporations-and-ais | bsTzgG3cRrsgbGtCc | Extended analogy between humans, corporations, and AIs. | daniel-kokotajlo | There are three main ways to try to understand and reason about powerful future AGI agents:
- Using formal models designed to predict the behavior of powerful general agents, such as expected utility maximization and variants thereof (explored in game theory and decision theory).
- Comparing & contrasting powerful future AG... | 2025-02-13 |
https://www.lesswrong.com/posts/7KijyCL8WNP8JnWCR/gradient-anatomy-s-hallucination-robustness-in-medical-q-and | 7KijyCL8WNP8JnWCR | Gradient Anatomy's - Hallucination Robustness in Medical Q&A | diego-sabajo | TL;DR
We investigated reducing hallucinations in medical question-answering with Llama-3.1-8B-Instruct.
Using Goodfire's Sparse Auto-Encoder (SAE) we identified neural features associated with accurate and hallucinated responses. Our study found that features related to the model’s awareness of its own knowledge limita... | 2025-02-12 |
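The generic SAE recipe behind feature-finding work like this is simple enough to sketch. Below is a minimal, self-contained illustration; the dimensions and weights are random stand-ins, and nothing here is Goodfire's actual API:

```python
import numpy as np

# Toy sizes; a real SAE on an 8B model would be far wider. W_enc is a random
# stand-in for trained encoder weights.
rng = np.random.default_rng(0)
d_model, d_features = 512, 2048
W_enc = rng.normal(0, 0.02, (d_features, d_model))
b_enc = np.zeros(d_features)

def sae_features(activation: np.ndarray) -> np.ndarray:
    """ReLU encoder: map one residual-stream activation to feature magnitudes."""
    return np.maximum(W_enc @ activation + b_enc, 0.0)

# Feature-finding then amounts to contrasting which entries are active on
# accurate vs. hallucinated responses.
print(np.count_nonzero(sae_features(rng.normal(size=d_model))), "active features")
```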
https://www.lesswrong.com/posts/3XaizFzbcWAEp8G6o/ai-safety-at-the-frontier-paper-highlights-january-25 | 3XaizFzbcWAEp8G6o | AI Safety at the Frontier: Paper Highlights, January '25 | gasteigerjo | This is the selection of AI safety papers from my blog "AI Safety at the Frontier". The selection primarily covers ML-oriented research and frontier models. It's primarily concerned with papers (arXiv, conferences etc.).
tl;dr
Paper of the month:
Constitutional Classifiers demonstrate a promising defense against univer... | 2025-02-11 |
https://www.lesswrong.com/posts/boB3hJiZijxM3J6Ed/comparing-the-effectiveness-of-top-down-and-bottom-up | boB3hJiZijxM3J6Ed | Comparing the effectiveness of top-down and bottom-up activation steering for bypassing refusal on harmful prompts | ana-kapros | TL;DR
This project compares the effectiveness of top-down and bottom-up activation steering methods in controlling refusal behaviour. In line with prior work,[1] we find that top-down methods outperform bottom-up ones in behaviour steering, as measured using HarmBench. Yet, a hybrid approach is even more effective (pro... | 2025-02-12 |
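For readers unfamiliar with activation steering, here is a minimal sketch of the standard difference-of-means recipe (the generic technique from the prior work cited, not necessarily this project's exact method):

```python
import torch

def refusal_direction(harmful: torch.Tensor, harmless: torch.Tensor) -> torch.Tensor:
    """Difference-of-means over activations, shape (n, d_model) -> (d_model,)."""
    v = harmful.mean(dim=0) - harmless.mean(dim=0)
    return v / v.norm()

def steer(resid: torch.Tensor, v: torch.Tensor, alpha: float) -> torch.Tensor:
    """Add alpha * v at every sequence position; alpha < 0 suppresses the behaviour."""
    return resid + alpha * v   # resid: (seq_len, d_model), broadcasts over positions
```

The steering coefficient alpha is then tuned against a benchmark such as HarmBench.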
https://www.lesswrong.com/posts/MmxtzkXLDWnyicQpF/the-news-is-never-neglected | MmxtzkXLDWnyicQpF | The News is Never Neglected | lsusr | Dear Lsusr,
I am inspired by your stories about Effective Evil. My teachers at school tell me it is my civic responsibility to watch the news. Should I reverse this advice? Or should I watch the news like everyone else, except use what I learn for evil?
Sincerely,
[redacted]
Dear [redacted],
If you want to make an impa... | 2025-02-11 |
https://www.lesswrong.com/posts/J6rgqYjj7Cm89Xu2w/where-would-good-forecasts-most-help-ai-governance-efforts-1 | J6rgqYjj7Cm89Xu2w | Where Would Good Forecasts Most Help AI Governance Efforts? | Violet Hour | Thanks to Josh Rosenberg for comments and discussion.
Introduction
One of LessWrong’s historical troves is its pre-ChatGPT AGI forecasts. Not just for the specific predictions people offered, but for observing which sorts of generative processes produced which kinds of forecasts. For instance:
[Nuno (Median AGI Timelin... | 2025-02-11 |
https://www.lesswrong.com/posts/ysghKGYev8DwPDY32/what-about-the-horses | ysghKGYev8DwPDY32 | What About The Horses? | maxwell-tabarrok | In a previous post, I argued that AGI would not make human labor worthless.
One of the most common responses was to ask about the horses. Technology resulted in mass unemployment and population collapse for horses even though they must have had some comparative advantage with more advanced engines. Why couldn’t the sam... | 2025-02-11 |
https://www.lesswrong.com/posts/dLnwRFLFmHKuurTX2/rethinking-ai-safety-approach-in-the-era-of-open-source-ai | dLnwRFLFmHKuurTX2 | Rethinking AI Safety Approach in the Era of Open-Source AI | weibing-wang | Open-Source AI Undermines Traditional AI Safety Approach
In the past years, the mainstream approach to AI safety has been "AI alignment + access control." In simple terms, this means allowing a small number of regulated organizations to develop the most advanced AI systems, ensuring that these AIs' goals are aligned wi... | 2025-02-11 |
https://www.lesswrong.com/posts/CJ4yywLBkdRALc4sT/on-deliberative-alignment | CJ4yywLBkdRALc4sT | On Deliberative Alignment | Zvi | Not too long ago, OpenAI presented a paper on their new strategy of Deliberative Alignment.
The way this works is that they tell the model what its policies are and then have the model think about whether it should comply with a request.
This is an important transition, so this post will go over my perspective on the n... | 2025-02-11 |
https://www.lesswrong.com/posts/uA3JcjbmrZRohToRY/world-citizen-assembly-about-ai-announcement | uA3JcjbmrZRohToRY | World Citizen Assembly about AI - Announcement | Camille Berger | null | 2025-02-11 |
https://www.lesswrong.com/posts/sekmz9EiBD6ByZpyp/detecting-ai-agent-failure-modes-in-simulations | sekmz9EiBD6ByZpyp | Detecting AI Agent Failure Modes in Simulations | michael-soareverix | AI agents have become significantly more common in the last few months. They’re used for web scraping,[1][2] robotics and automation,[3] and are even being deployed for military use.[4] As we integrate these agents into critical processes, it is important to simulate their behavior in low-risk environments.
In this pos... | 2025-02-11 |
https://www.lesswrong.com/posts/LaruPAWaZk9KpC25A/rational-utopia-and-narrow-way-there-place-asi-dealing-with | LaruPAWaZk9KpC25A | Rational Utopia & Narrow Way There: Place ASI, Dealing With Dystopias, New Ethics... (V. 4) | ank | (This is the result of three years of thinking and modeling hyper‑futuristic and current ethical systems. The first post in the series. Everything described here can be modeled mathematically—it’s essentially geometry. I take as an axiom that every agent in the multiverse experiences real pain and pleasure. Sorry for t... | 2025-02-11 |
https://www.lesswrong.com/posts/tdb76S4viiTHfFr2u/why-did-elon-musk-just-offer-to-buy-control-of-openai-for | tdb76S4viiTHfFr2u | Why Did Elon Musk Just Offer to Buy Control of OpenAI for $100 Billion? | garrison | null | 2025-02-11 |
https://www.lesswrong.com/posts/wpz5bXhGxPBY6LGDk/positive-directions | wpz5bXhGxPBY6LGDk | Positive Directions | geoffrey-wood | Let's set the scene - Nihilism
We are chunks of self-important jelly staggering about on the surface of a tiny nugget of rock in a second-rate solar system.
Sol's system is ~ four billion years old
We have been “Ourselves” (Anatomically modern) for ~200,000 years
Countless people have lived and died, and you are but on... | 2025-02-11 |
https://www.lesswrong.com/posts/dJ7XFvqh5oWQbB4CJ/arguing-for-the-truth-an-inference-only-study-into-ai-debate | dJ7XFvqh5oWQbB4CJ | Arguing for the Truth? An Inference-Only Study into AI Debate | denisemester | 💡 TL;DR: Can AI debate be a reliable tool for truth-seeking? In this inference-only experiment (no fine-tuning), I tested whether Claude 3.5 Sonnet and Gemini 1.5 Pro could engage in structured debates over factual questions from BoolQ and MMLU datasets, with GPT-3.5 Turbo acting as an impartial judge. The findings we... | 2025-02-11 |
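The debate protocol itself is easy to sketch. Below is a minimal inference-only loop under stated assumptions: `call_model` is a hypothetical stub for whatever chat API is in use, and the prompts are illustrative, not the post's:

```python
# `call_model` is a hypothetical stub standing in for a real chat API.
def call_model(model: str, prompt: str) -> str:
    return f"[{model}: canned reply to {len(prompt)}-char prompt]"

def debate(question: str, rounds: int = 3) -> str:
    """Two debaters argue opposite answers; a judge model picks the winner."""
    transcript = f"Question: {question}"
    for r in range(1, rounds + 1):
        for side, model in (("PRO", "debater-a"), ("CON", "debater-b")):
            arg = call_model(model, f"{transcript}\nArgue {side} in under 150 words.")
            transcript += f"\n[{side}, round {r}] {arg}"
    return call_model("judge", f"{transcript}\n\nWhich side argued better: PRO or CON?")

print(debate("Is the Great Wall of China visible from low Earth orbit?"))
```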
https://www.lesswrong.com/posts/yBtXoDqfjFXkEWLMc/logical-correlation | yBtXoDqfjFXkEWLMc | Logical Correlation | niplav | In which to compare how similarly programs compute their outputs,
naïvely and less naïvely.
Logical Correlation
Attention conservation notice: Premature formalization,
ad-hoc mathematical
definition.
Motivation, Briefly
In the twin prisoners
dilemma,
I cooperate with my twin because we're implementing the same algorith... | 2025-02-10 |
https://www.lesswrong.com/posts/zEKzCLzGJTXgoqSXf/lw-acx-social-meetup | zEKzCLzGJTXgoqSXf | LW/ACX social meetup | stefan-1 | Come by! Meet interesting people, chat interesting chat!
Normally we just chat about whatever comes up. Past topics of conversation have included AI alignment, decision theory (Newcomb's paradox etc), progress in AI and much much more.
(We will be on the second floor of the Condeco café, look for a book on the table) | 2025-02-10 |
https://www.lesswrong.com/posts/J59hfYefh6yA4wzLD/a-bearish-take-on-ai-as-a-treat | J59hfYefh6yA4wzLD | A Bearish Take on AI, as a Treat | cartier-gucciscarf | The implicit model that I have regarding the world around me on most topics is that there is a truth of the matter, a select group of people and organizations who are closest to that truth, and an assortment of groups who espouse bad takes either out of malice or stupidity.
This was, to a close approximation, my opinion about... | 2025-02-10 |
https://www.lesswrong.com/posts/S8mEHmTnCPgYqfazv/notes-on-occam-via-solomonoff-vs-hierarchical-bayes | S8mEHmTnCPgYqfazv | Notes on Occam via Solomonoff vs. hierarchical Bayes | JesseClifton | Crossposted from my Substack.
Intuitively, simpler theories are better all else equal. It also seems like finding a way to justify assigning higher prior probability to simpler theories is one of the more promising ways of approaching the problem of induction. In some places, Solomonoff induction (SI) seems to be consi... | 2025-02-10 |
https://www.lesswrong.com/posts/3ZBmKDpAJJahRM248/proof-idea-slt-to-ait | 3ZBmKDpAJJahRM248 | Proof idea: SLT to AIT | Lblack | I think we may be able to prove that Bayesian learning on transformers[1] or recurrent neural networks with a uniform[2] prior over parameters is equivalent to a form of Solomonoff induction over a set of computationally-bounded programs. This bounded Solomonoff induction would still be 'approximately optimal' in a sen... | 2025-02-10 |
https://www.lesswrong.com/posts/ECLEsydXxvtK3XxMs/sleeping-beauty-an-accuracy-based-approach | ECLEsydXxvtK3XxMs | Sleeping Beauty: an Accuracy-based Approach | glauberdebona | This post does not propose a solution to the Sleeping Beauty problem, but presents arguments based on accuracy for thirders, halfers and double-halfers. A more detailed draft paper can be found here.
Summary
Accuracy-based arguments claim that one should plan to adopt posterior credences that maximize the expected (acc... | 2025-02-10 |
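For concreteness, here is my reconstruction of one standard accuracy argument (assuming Brier scoring charged per awakening; the post's draft paper may differ in details):

```latex
% Credence in Heads is p; Heads (prob 1/2) yields one awakening,
% Tails (prob 1/2) yields two, each scored by the Brier penalty:
\[
  \mathbb{E}[\text{penalty}] = \tfrac12 (1-p)^2 + \tfrac12 \cdot 2p^2,
  \qquad
  \frac{d}{dp}\,\mathbb{E}[\text{penalty}] = -(1-p) + 2p = 0
  \;\Longrightarrow\; p = \tfrac13 .
\]
% Charging the penalty once per world instead gives p = 1/2; the thirder vs.
% halfer split is largely a dispute over which scoring scheme is right.
```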
https://www.lesswrong.com/posts/Ddc5dArm8DXQK9ChC/political-idolatry | Ddc5dArm8DXQK9ChC | Political Idolatry | arturo-macias | Idolatry is the worship of non-conscious objects, sometimes falsely attributing consciousness to them, sometimes putting the value of some admittedly nonconscious being over that of conscious beings. Idolatry leads to human sacrifice because to prove your idol more important than the human soul the natural test is to s... | 2025-02-10 |
https://www.lesswrong.com/posts/QpaWHYEQomyQTBKw5/nonpartisan-ai-safety | QpaWHYEQomyQTBKw5 | Nonpartisan AI safety | yair-halberstadt | AI alignment is probably the most pressing issue of our time. Unfortunately it's also become one of the most controversial, with AI accelerationists accusing AI doomers/ai-not-kill-everyoneism-ers of being luddites who would rather keep humanity shackled to the horse and plow than risk any progress, whilst the doomers ... | 2025-02-10 |
https://www.lesswrong.com/posts/ARZ5c99k9M2RJJtdT/opinion-article-scoring-system | ARZ5c99k9M2RJJtdT | Opinion Article Scoring System | ciaran | Here I propose a system for scoring media opinion articles. It is part prediction markets - as there is a small amount of money involved - and part forecasting science mechanism design. Journalists that publish an article on the platform must do so with an accompanying stake. Readers (whether human or AI) that wish to ... | 2025-02-10 |
https://www.lesswrong.com/posts/L7xmssgoKXPJAbz4D/beyond-elo-rethinking-chess-skill-as-a-multidimensional | L7xmssgoKXPJAbz4D | Beyond ELO: Rethinking Chess Skill as a Multidimensional Random Variable | oliver-oswald | Introduction
The traditional ELO rating system reduces a player's ability to a single scalar value E, from which win probabilities are computed via a logistic function of the rating difference. While pragmatic, this one-dimensional approach may obscure the rich, multifaceted nature of chess skill. For instance, factors... | 2025-02-10 |
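The logistic win probability the excerpt refers to is the standard Elo expected-score formula, sketched here:

```python
def elo_win_prob(e_a: float, e_b: float) -> float:
    """Standard Elo expected score for A vs. B: logistic in the rating gap."""
    return 1.0 / (1.0 + 10 ** ((e_b - e_a) / 400))

# A 200-point rating edge translates to roughly a 76% expected score:
print(round(elo_win_prob(1800, 1600), 2))   # 0.76
```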
https://www.lesswrong.com/posts/xcMngBervaSCgL9cu/levels-of-friction | xcMngBervaSCgL9cu | Levels of Friction | Zvi | Scott Alexander famously warned us to Beware Trivial Inconveniences.
When you make a thing easy to do, people often do vastly more of it.
When you put up barriers, even highly solvable ones, people often do vastly less.
Let us take this seriously, and carefully choose what inconveniences to put where.
Let us also take ... | 2025-02-10 |
https://www.lesswrong.com/posts/ds98kG3FKpKv7665W/a-simulation-of-automation-economics | ds98kG3FKpKv7665W | A Simulation of Automation economics? | qbolec | Looks like even respected people disagree about the effects of automation of most jobs on prices, ability to earn and trade, etc. Is there a game which I could play to gain intuitions about it? A game where there are some things and services people need to get to survive, some resources like time, abilities or real est... | 2025-02-10 |
https://www.lesswrong.com/posts/geRo75Xi9baHcwzht/claude-is-more-anxious-than-gpt-personality-is-an-axis-of-2 | geRo75Xi9baHcwzht | Claude is More Anxious than GPT; Personality is an axis of interpretability in language models | future_detective | Aggregate Personality Differences
Users of Claude and GPT will be the first to tell you that the models have their own personality. Some users make decisions based on “who” they prefer to talk to. In my own experience, I’ve found Claude to be more deferential, GPT more clinical.
In "We Can Solve Psychology ith Text Emb... | 2025-02-10 |
https://www.lesswrong.com/posts/e8giAk8bmwDFrJxx2/should-i-divest-from-ai | e8giAk8bmwDFrJxx2 | Should I Divest from AI? | OKlogic | In recent years, AI has been all the rage in the stock market, and there is no reason to see that slowing down. With the picture of AGI on the horizon becoming clearer and clearer, faster and smarter models being released, and more and more investment being poured into AI stocks, it seems inevitable that prices will co... | 2025-02-10 |
https://www.lesswrong.com/posts/emdeWndtjD8QxzgS5/openai-lied-about-sft-vs-rlhf | emdeWndtjD8QxzgS5 | OpenAI lied about SFT vs. RLHF | sanxiyn | I used to think while OpenAI is pretty deceitful (eg for-profit conversion) it generally won't lie about its research. This is a pretty definitive case of lying, so I updated accordingly. I am posting here because it doesn't seem to be widely known. | 2025-02-10 |
https://www.lesswrong.com/posts/dhLmbpk346e7ARdnP/self-blackmail-and-alternatives | dhLmbpk346e7ARdnP | "Self-Blackmail" and Alternatives | jessica.liu.taylor | Ziz has been in the news lately. Instead of discussing that, I'll discuss an early blog post, "Self-Blackmail". This is a topic I also talked with Ziz about in person, although not a lot.
Let's start with a very normal thing people do: make New Year's resolutions. They might resolve that, for example, they will do stre... | 2025-02-09 |
https://www.lesswrong.com/posts/X9dy7LLaBbLcq8jky/altman-blog-on-post-agi-world | X9dy7LLaBbLcq8jky | Altman blog on post-AGI world | Julian Bradshaw | First part just talks about scaling laws, nothing really new. Second part is apparently his latest thoughts on a post-AGI world. Key part:
While we never want to be reckless and there will likely be some major decisions and limitations related to AGI safety that will be unpopular, directionally, as we get closer to ach... | 2025-02-09 |
https://www.lesswrong.com/posts/3L3ZGSucpx4ypEou7/ml4good-colombia-applications-open-to-latam-participants | 3L3ZGSucpx4ypEou7 | ML4Good Colombia - Applications Open to LatAm Participants | alejandro-acelas | Applications are open for ML4Good Colombia April 2025
In partnership with AI Safety Colombia, ML4Good is running an intensive 10-day bootcamp focusing on upskilling in deep learning, exploring governance, and delving into conceptual topics for individuals who are motivated to work on addressing the risks posed by advan... | 2025-02-10 |
https://www.lesswrong.com/posts/g3fH7YzthnXwtCt6g/forecasting-newsletter-2-2025-forecasting-meetup-network | g3fH7YzthnXwtCt6g | Forecasting newsletter #2/2025: Forecasting meetup network | Radamantis | Highlights
- Forecasting meetup network (a) looking for volunteers. If you want to host a meetup in your city, send an email to forecastingmeetupnetwork@gmail.com.
- Caroline Pham moves up to Chairman of the CFTC. She is much friendlier to prediction markets and has spent years writing dissents against regulatory overreac... | 2025-02-09 |
https://www.lesswrong.com/posts/HZ4sM28jc8JBcznDG/how-identical-twin-sisters-feel-about-nieces-vs-their-own | HZ4sM28jc8JBcznDG | How identical twin sisters feel about nieces vs their own daughters | dave-lindbergh | (cross posted from https://mugwumpery.com/how-identical-twin-sisters-feel-about-nieces-vs-their-own-daughters/)
It seems to be generally assumed that twin sisters feel the same way as other sisters – closer to their own children.
But per Hamilton/Trivers, they shouldn’t. They should feel equally related and care equall... | 2025-02-09 |
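The Hamilton/Trivers arithmetic behind that claim, spelled out:

```latex
% Standard kin-selection coefficients of relatedness:
\[
  r(\text{mother},\,\text{daughter}) = \tfrac12,
  \qquad
  r(\text{ordinary aunt},\,\text{niece}) = \tfrac12 \cdot \tfrac12 = \tfrac14,
\]
\[
  r(\text{identical-twin aunt},\,\text{niece}) = 1 \cdot \tfrac12 = \tfrac12
  = r(\text{mother},\,\text{daughter}).
\]
% So kin-selection logic predicts a twin aunt should weight nieces and her own
% daughters equally, which is the "shouldn't" in the excerpt above.
```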
https://www.lesswrong.com/posts/XnAHe6iFfkEwTYgsA/the-structure-of-professional-revolutions | XnAHe6iFfkEwTYgsA | The Structure of Professional Revolutions | JohnBuridan | An expert is not merely someone who has memorized data but someone who has internalized the structure of knowledge itself. This is why we call them PhDs—Doctors of Philosophy. Their expertise extends beyond isolated facts to the organizing principles that connect those facts, allowing them to wield knowledge in novel w... | 2025-02-09 |
https://www.lesswrong.com/posts/2q4qbwEFEjhRcHSk2/how-do-you-make-a-250x-better-vaccine-at-1-10-the-cost | 2q4qbwEFEjhRcHSk2 | How do you make a 250x better vaccine at 1/10 the cost? Develop it in India. | abhishaike-mahajan | (I made a vaccinology/policy-based podcast! A very long one! If you'd like to avoid the summary below, here is the Youtube link and Substack link.)
Summary: There's a lot of discussion these days on how China's biotech market is on track to bypass the US's. I wondered: shouldn't we have observed the exact same phenome... | 2025-02-09 |
https://www.lesswrong.com/posts/dZfFLKwHWXbvamdja/less-laptop-velcro | dZfFLKwHWXbvamdja | Less Laptop Velcro | jkaufman | A year ago I broke my laptop screen, and took the opportunity to build
something I've always wanted: a monitor that
folds vertically so I
don't have to bend my
neck:
A few months ago my cracked-screen laptop finished dying, and I got a
new one. I use the stacked monitor a lot less now, since for quick
things the built... | 2025-02-09 |
https://www.lesswrong.com/posts/zavyum4dxEAqs6wHt/undergrad-ai-safety-conference | zavyum4dxEAqs6wHt | Undergrad AI Safety Conference | joanna-j-1 | TL;DR: undergrad AI safety conference in Chicago on world-modelling & thinking about the cruxes of the future of TAI. Takes place March 29-30, apply by Feb 20.
The UChicago AI Safety Team at XLab is excited to announce the Chicago Symposium on Transformative AI, an undergraduate AI Safety conference taking place on the... | 2025-02-19 |
https://www.lesswrong.com/posts/q56ZEDpQtqTbJ8af2/axrp-episode-38-7-anthony-aguirre-on-the-future-of-life | q56ZEDpQtqTbJ8af2 | AXRP Episode 38.7 - Anthony Aguirre on the Future of Life Institute | DanielFilan | YouTube link
The Future of Life Institute is one of the oldest and most prominent organizations in the AI existential safety space, working on such topics as the AI pause open letter and how the EU AI Act can be improved. Metaculus is one of the premier forecasting sites on the internet. Behind both of them lies one man... | 2025-02-09 |
https://www.lesswrong.com/posts/dovhoCuaEzSGBLzsB/job-ad-lisa-ceo | dovhoCuaEzSGBLzsB | [Job ad] LISA CEO | ryankidd44 | Overview
Job Title: Chief Executive Officer
Company Name: London Initiative for Safe AI (LISA)
Location: Old Street, London
Duration: Full-time
Salary Range: £95-125k (more may be available for an exceptional candidate)
Application Deadline: Monday 24th February
We are looking for a Chief Executive Officer with experience a... | 2025-02-09 |
https://www.lesswrong.com/posts/98K94XXGxfxdc9Pyd/p-s-risks-to-contemporary-humans | 98K94XXGxfxdc9Pyd | p(s-risks to contemporary humans)? | mhampton | Epistemic status
These are my cursory thoughts on this topic after having read about it over a few days and conversed with some other people. I still have high uncertainty and am raising questions that may address some uncertainties.
Content warning
Discussion of risks of astronomical suffering[1]
Why focus on s-risks ... | 2025-02-08 |
https://www.lesswrong.com/posts/skEhdgiK6T5HzMhis/think-it-faster-worksheet | skEhdgiK6T5HzMhis | "Think it Faster" worksheet | Raemon | This is a succinct worksheet version of the "Think It Faster" Exercise. [1]
You can use this worksheet either for purposeful practice, after completing some kind of challenging/confusing intellectual exercise (such as Thinking Physics or Baba is You). Or, if in your real life work you find something took a noticeably l... | 2025-02-08 |
https://www.lesswrong.com/posts/ZB3DqPp3CwcPNNk7y/visual-reference-for-frontier-large-language-models | ZB3DqPp3CwcPNNk7y | Visual Reference for Frontier Large Language Models | kenakofer | Hopefully this can be a helpful visual reference for the development and features of frontier large language models in the last year-ish. We are always open to feedback on how the reference could be improved.
FAQ:
Q: Which models/companies are included?
A: We include LLMs that are noteworthy in capabilities, price, or ... | 2025-02-11 |
https://www.lesswrong.com/posts/vQYPBatJKaGkfDfCt/closed-ended-questions-aren-t-as-hard-as-you-think | vQYPBatJKaGkfDfCt | Closed-ended questions aren't as hard as you think | electroswing | Summary
In this short post, I argue that closed-ended questions, even those of arbitrary difficulty, are not as difficult as they may appear. In particular, I argue that the benchmark HLE is probably easier than it may first seem.[1]
Specifically, I argue:
Crowd workers find it easier to write easy questions than hard ... | 2025-02-19 |
https://www.lesswrong.com/posts/3MCwMkP6cJmrxMmat/can-knowledge-hurt-you-the-dangers-of-infohazards-and | 3MCwMkP6cJmrxMmat | Can Knowledge Hurt You? The Dangers of Infohazards (and Exfohazards) | aggliu | In this Rational Animations video, we look at dangerous knowledge: information hazards (infohazards) and external information hazards (exfohazards). We talk about one way they can be classified, what kinds of dangers they pose, and the dangers that come from too much secrecy. The primary scriptwriter was Allen Liu (th... | 2025-02-08 |
https://www.lesswrong.com/posts/oBo7tGTvP9f26M98C/gary-marcus-now-saying-ai-can-t-do-things-it-can-already-do | oBo7tGTvP9f26M98C | Gary Marcus now saying AI can't do things it can already do | Benjamin_Todd | In January 2020, Gary Marcus wrote GPT-2 And The Nature Of Intelligence, demonstrating a bunch of easy problems that GPT-2 couldn’t get right.
He concluded these were “a clear sign that it is time to consider investing in different approaches.”
Two years later, GPT-3 could get most of these right.
Marcus wrote a new list ... | 2025-02-09 |
https://www.lesswrong.com/posts/HqHcvKb6kw5aRdoMA/preserving-epistemic-novelty-in-ai-experiments-insights-and | HqHcvKb6kw5aRdoMA | Preserving Epistemic Novelty in AI: Experiments, Insights, and the Case for Decentralized Collective Intelligence | andy-e-williams | Introduction
In my recent experiments with AI models, I have encountered a fundamental problem: even when novel epistemic insights are introduced into AI interactions, the models tend to “flatten” or reframe these ideas into existing, consensus‐based frameworks. This compression of novelty limits an AI’s ability to evo... | 2025-02-08 |
https://www.lesswrong.com/posts/KEdr7E5SfaqjczFgD/technical-comparison-of-deepseek-novasky-s1-helix-p0 | KEdr7E5SfaqjczFgD | Technical comparison of Deepseek, Novasky, S1, Helix, P0 | Juliezhanggg | Comparing Novasky with S1:
NovaSky (a Berkeley club) and S1 (by Feifei Li, arXiv:2501.19393) are the players who don't have capital or compute; they mainly focus on developing methods that finetune a large language model with curated minimal reasoning datasets. Sky-T1 trained an entire model, with datasets from diverse domains... | 2025-02-25 |
https://www.lesswrong.com/posts/feknAa3hQgLG2ZAna/cross-layer-feature-alignment-and-steering-in-large-language-2 | feknAa3hQgLG2ZAna | Cross-Layer Feature Alignment and Steering in Large Language Model | dlaptev | The text below is a brief summary of our research in mechanistic interpretability. First, this article discusses the motivation behind our work. Second, it provides an overview of our previous work. Finally, we outline the future directions we consider important.
Introduction and Motivation
Large language models (LLMs)... | 2025-02-08 |
https://www.lesswrong.com/posts/cSPcey7FsKpNzLXuF/ai-safety-oversights | cSPcey7FsKpNzLXuF | AI Safety Oversights | davey-morse | The field of AI Safety at large is making four key oversights:
LLMs vs. Agents. AI safety researchers have been thorough in examining safety concerns from LLMs (bias, deception, accuracy, child safety, etc). Agents powered by LLMs, however, are more dangerous, and dangerous in different ways, than LLMs alone. The fi... | 2025-02-08 |
https://www.lesswrong.com/posts/ZpnPxn433EMWw2t6m/wiki-on-suspects-in-lind-zajko-and-maland-killings | ZpnPxn433EMWw2t6m | Wiki on Suspects in Lind, Zajko, and Maland Killings | Rebecca_Records | Hey everyone,
I've been following the news about the killings linked to LaSota, Zajko and associates, and right now, finding all the relevant information is a challenge. Either you have to dig through scattered sources, or you’re stuck reading a single, extremely long Google Doc to get the full picture. To make things ... | 2025-02-08 |
https://www.lesswrong.com/posts/Cs8n4zeaGtxnvCjhe/two-hemispheres-i-do-not-think-it-means-what-you-think-it | Cs8n4zeaGtxnvCjhe | Two hemispheres - I do not think it means what you think it means | Viliam | I am going to address some misconceptions about brain hemispheres -- in popular culture, and in Zizian theory. The latter, because the madness must stop. The former, because it provided a foundation for the latter.
*
Two hemispheres in popular culture
About 99% of animals are bilaterally symmetric -- the left side and ... | 2025-02-09 |
https://www.lesswrong.com/posts/Hgj84BSitfSQnfwW6/so-you-want-to-make-marginal-progress | Hgj84BSitfSQnfwW6 | So You Want To Make Marginal Progress... | johnswentworth | Once upon a time, in ye olden days of strange names and before google maps, seven friends needed to figure out a driving route from their parking lot in San Francisco (SF) down south to their hotel in Los Angeles (LA).
The first friend, Alice, tackled the “central bottleneck” of the problem: she figured out that they p... | 2025-02-07 |
https://www.lesswrong.com/posts/s5SjpHGaKFAizbieu/reasons-based-choice-and-cluelessness | s5SjpHGaKFAizbieu | Reasons-based choice and cluelessness | JesseClifton | Crossposted from my Substack.
Rational choice theory is commonly thought of as being about what to do in light of our beliefs and preferences. But our beliefs and preferences come from somewhere. I would say that we believe and prefer things for reasons. My evidence gives me reason to believe I am presently in an airpo... | 2025-02-07 |
https://www.lesswrong.com/posts/26SHhxK2yYQbh7ors/research-directions-open-phil-wants-to-fund-in-technical-ai | 26SHhxK2yYQbh7ors | Research directions Open Phil wants to fund in technical AI safety | jake_mendel | The Open Philanthropy has just launched a large new Request for Proposals for technical AI safety research. Here we're sharing a reference guide, created as part of that RFP, which describes what projects we'd like to see across 21 research directions in technical AI safety.
This guide provides an opinionated overview ... | 2025-02-08 |
https://www.lesswrong.com/posts/pzhykwLuaNDFPvqsh/request-for-information-for-a-new-us-ai-action-plan-ostp-rfi | pzhykwLuaNDFPvqsh | Request for Information for a new US AI Action Plan (OSTP RFI) | agucova | null | 2025-02-07 |
https://www.lesswrong.com/posts/4x4QFzmdWadgr7mdj/translation-in-the-age-of-ai-don-t-look-for-unicorns | 4x4QFzmdWadgr7mdj | [Translation] In the Age of AI don't Look for Unicorns | mushroomsoup | Translator's note: This is another article from Jeffery Ding's Around the Horn. In short, the article suggests that a good metric of AI adoption is the number of daily average tokens used by a company. Companies which achieved successful adoption are those who use more than one billion tokens everyday. In China there a... | 2025-02-07 |
https://www.lesswrong.com/posts/vqsXncwxsxti5L9Ne/racing-towards-fusion-and-ai | vqsXncwxsxti5L9Ne | Racing Towards Fusion and AI | jeffrey-heninger | Racing Towards a New Technology Is a Collective Choice, Not an Inevitable Consequence of Incentives
Before I started thinking about AI policy, I was working in the trying-to-get-fusion industry.
There are some significant similarities between AI and fusion.
Both are emerging technologies.Both have the potential to have... | 2025-02-07 |
https://www.lesswrong.com/posts/KFJ2LFogYqzfGB3uX/how-ai-takeover-might-happen-in-2-years | KFJ2LFogYqzfGB3uX | How AI Takeover Might Happen in 2 Years | joshua-clymer | I’m not a natural “doomsayer.” But unfortunately, part of my job as an AI safety researcher is to think about the more troubling scenarios.
I’m like a mechanic scrambling last-minute checks before Apollo 13 takes off. If you ask for my take on the situation, I won’t comment on the quality of the in-flight entertainment... | 2025-02-07 |
https://www.lesswrong.com/posts/kpGEp4Jem7PrQhPRW/high-level-machine-intelligence-and-full-automation-of-labor | kpGEp4Jem7PrQhPRW | 'High-Level Machine Intelligence' and 'Full Automation of Labor' in the AI Impacts Surveys | jeffrey-heninger | The 60+ Year Gap
AI Impacts has run three surveys (2016, 2022, & 2023) asking AI researchers about how they expect AI to develop in the future.[1] One of the key questions addressed was when AI capabilities would exceed human capabilities.
The surveys did not ask directly about 'Artificial General Intelligence' (AGI). I... | 2025-02-07 |
https://www.lesswrong.com/posts/rGvkirbemm9deAM3g/the-devil-s-ontology | rGvkirbemm9deAM3g | the devil's ontology | lostinwilliamsburg | imagine you’re playing a game where some rules encoded in words are so special that no one is allowed to touch or change them. these special rules have special properties, so no one questions them. the devil, in this case, is like a sneaky player who hides behind those rules, using them to confuse everyone else. they c... | 2025-02-07 |
https://www.lesswrong.com/posts/gsj3TWdcBxwkm9eNt/10-year-timelines-remain-unlikely-despite-deepseek-and-o3 | gsj3TWdcBxwkm9eNt | ≤10-year Timelines Remain Unlikely Despite DeepSeek and o3 | sil-ver | [Thanks to Steven Byrnes for feedback and the idea for section §3.1. Also thanks to Justis from the LW feedback team.]
Remember this?
Or this?
The images are from WaitButWhy, but the idea was voiced by many prominent alignment people, including Eliezer Yudkowsky and Nick Bostrom. The argument is that the difference in ... | 2025-02-13 |
https://www.lesswrong.com/posts/etqbEF4yWoGBEaPro/on-the-meta-and-deepmind-safety-frameworks | etqbEF4yWoGBEaPro | On the Meta and DeepMind Safety Frameworks | Zvi | This week we got a revision of DeepMind’s safety framework, and the first version of Meta’s framework. This post covers both of them.
Table of Contents
Meta’s RSP (Frontier AI Framework).
DeepMind Updates its Frontier Safety Framework.
What About Risk Governance.
Where Do We Go From Here?
Here are links for previous co... | 2025-02-07 |
https://www.lesswrong.com/posts/mNKRibWTsx32J8GzW/request-for-proposals-improving-capability-evaluations | mNKRibWTsx32J8GzW | Request for proposals: improving capability evaluations | cb | Open Philanthropy is launching an RFP for work on AI capability evaluations. We're looking to fund three types of work:
- Global Catastrophic Risk (GCR)-relevant capability benchmarks for AI agents
- Research to improve our understanding of how capabilities develop and scale
- Solutions for enabling meaningful third-party eval... | 2025-02-07 |
https://www.lesswrong.com/posts/GxpSFtnHccNNqBz4N/baumol-effect-vs-jevons-paradox | GxpSFtnHccNNqBz4N | Baumol effect vs Jevons paradox | Hzn | Key points. The Baumol effect & Jevons paradox are 2 claims regarding the effect of increasing efficiency of a good or sector. Although not incompatible, they are at odds; one suggesting relative decline, the other suggesting absolute growth. I examine these & find that they are often defined & discussed in a confused ... | 2025-02-10 |
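A toy constant-elasticity demand curve makes the Jevons side precise; this is my formalization for concreteness, not necessarily the post's:

```latex
% Demand Q = A p^{-eps}, with an efficiency gain halving the effective price p:
\[
  Q' = A\,(p/2)^{-\varepsilon} = 2^{\varepsilon} Q,
  \qquad
  p'Q' = \tfrac{p}{2}\cdot 2^{\varepsilon} Q = 2^{\varepsilon - 1}\, pQ .
\]
% If eps > 1, total resource use rises despite the efficiency gain (Jevons);
% if eps < 1, it falls. The Baumol effect concerns the other sectors, whose
% relative price rises as the efficient sector gets cheap.
```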
https://www.lesswrong.com/posts/Xt9r4SNNuYxW83tmo/a-computational-no-coincidence-principle | Xt9r4SNNuYxW83tmo | A computational no-coincidence principle | UnexpectedValues | This post presents a conjecture formulated at the Alignment Research Center in 2023. Our belief in the conjecture is at least partly load-bearing for our belief in ARC's overall agenda. We haven't directly worked on the conjecture for a while now, but we believe the conjecture is interesting in its own right.
In a rece... | 2025-02-14 |
https://www.lesswrong.com/posts/YXNeA3RyRrrRWS37A/a-problem-to-solve-before-building-a-deception-detector | YXNeA3RyRrrRWS37A | A Problem to Solve Before Building a Deception Detector | ea-1 | TL;DR: If you are thinking of using interpretability to help with strategic deception, then there's likely a problem you need to solve first: how are intentional descriptions (like deception) related to algorithmic ones (like understanding the mechanisms models use)? We discuss this problem and try to outline some cons... | 2025-02-07 |
https://www.lesswrong.com/posts/7HpFuGBLcdyjHj8tc/when-you-downvote-explain-why | 7HpFuGBLcdyjHj8tc | When you downvote, explain why | avery-liu | NOTE: this is not site policy, just my personal suggestion
Being a newcomer and having your post downvoted can be very discouraging. This isn't necessarily a bad thing—obviously we want to discourage people from posting things that are not worth our time to read—but it doesn't provide much feedback other than "somethin... | 2025-02-07 |
https://www.lesswrong.com/posts/kJzTmuhSZ7ufcgGTv/medical-windfall-prizes-1 | kJzTmuhSZ7ufcgGTv | Medical Windfall Prizes | PeterMcCluskey | Summary
AI may produce a windfall surge in government revenues in 5 to 10 years.
I want governments to spend a small fraction of that windfall on
retroactively rewarding entities in proportion to how they have
contributed to medical advances, measured by lives saved and suffering
avoided.
Motivations
This post was i... | 2025-02-06 |
https://www.lesswrong.com/posts/fwSnz5oNnq8HxQjTL/arbital-has-been-imported-to-lesswrong | fwSnz5oNnq8HxQjTL | Arbital has been imported to LessWrong | T3t | Arbital was envisioned as a successor to Wikipedia. The project was discontinued in 2017, but not before many new features had been built and a substantial amount of writing about AI alignment and mathematics had been published on the website.
If you've tried using Arbital.com the last few years, you might have noticed... | 2025-02-20 |
https://www.lesswrong.com/posts/nHDhst47yzDCpGstx/seven-sources-of-goals-in-llm-agents | nHDhst47yzDCpGstx | Seven sources of goals in LLM agents | Seth Herd | LLM agents[1] seem reasonably likely to become our first takeover-capable AGIs.[2] LLMs already have complex "psychologies," and using them to power more sophisticated agents will create even more complex "minds." A profusion of competing goals is one barrier to aligning this type of AGI.
Goal sources for LLM agents c... | 2025-02-08 |
https://www.lesswrong.com/posts/rJnygfgGyZ3ovo9Yu/aisn-47-reasoning-models | rJnygfgGyZ3ovo9Yu | AISN #47: Reasoning Models | corin-katzke | Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.
Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.
Reasoning Models
DeepSeek-R1 has been one of the most significant model releases since ChatGPT. After ... | 2025-02-06 |
https://www.lesswrong.com/posts/wbJxRNxuezvsGFEWv/open-philanthropy-technical-ai-safety-rfp-usd40m-available | wbJxRNxuezvsGFEWv | Open Philanthropy Technical AI Safety RFP - $40M Available Across 21 Research Areas | jake_mendel | Open Philanthropy is launching a big new Request for Proposals for technical AI safety research, with plans to fund roughly $40M in grants over the next 5 months, and available funding for substantially more depending on application quality.
Applications (here) start with a simple 300 word expression of interest and ar... | 2025-02-06 |
https://www.lesswrong.com/posts/viZhwKytJ6Rqahymf/wild-animal-suffering-is-the-worst-thing-in-the-world | viZhwKytJ6Rqahymf | Wild Animal Suffering Is The Worst Thing In The World | omnizoid | Crossposted from my blog which many people are saying you should check out!
Imagine that you came across an injured deer on the road. She was in immense pain, perhaps having been mauled by a bear or seriously injured in some other way. Two things are obvious:
If you could greatly help her at small cost, you should do s... | 2025-02-06 |
https://www.lesswrong.com/posts/9pGbTz6c78PGwJein/detecting-strategic-deception-using-linear-probes | 9pGbTz6c78PGwJein | Detecting Strategic Deception Using Linear Probes | nicholas-goldowsky-dill | Can you tell when an LLM is lying from the activations? Are simple methods good enough? We recently published a paper investigating if linear probes detect when Llama is deceptive.
Abstract:
AI models might use deceptive strategies as part of scheming or misaligned behaviour. Monitoring outputs alone is insufficient, s... | 2025-02-06 |
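The generic linear-probe recipe looks like this; it is a sketch with stand-in data, not the paper's exact setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Stand-in data: one activation vector per model response, labelled
# honest = 0 / deceptive = 1. Real experiments would use actual Llama
# activations; random data here just shows the recipe.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 512))
y = rng.integers(0, 2, size=2000)

probe = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])
scores = probe.predict_proba(X[1500:])[:, 1]               # P(deceptive)
print("held-out AUROC:", roc_auc_score(y[1500:], scores))  # ~0.5 on noise
```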
https://www.lesswrong.com/posts/2w6hjptanQ3cDyDw7/methods-for-strong-human-germline-engineering | 2w6hjptanQ3cDyDw7 | Methods for strong human germline engineering | TsviBT | PDF version. Image sizes best at berkeleygenomics.org with a wide monitor. Twitter thread
Introduction
This article summarizes the technical pathways to make healthy humans with significantly modified genomes. These are the pathways that I'm aware of and that seem plausibly feasible in the next two decades. A short sum... | 2025-03-03 |
https://www.lesswrong.com/posts/rAaGbh7w52soCckNC/ai-102-made-in-america | rAaGbh7w52soCckNC | AI #102: Made in America | Zvi | I remember that week I used r1 a lot, and everyone was obsessed with DeepSeek.
They earned it. DeepSeek cooked, r1 is an excellent model. Seeing the Chain of Thought was revolutionary. We all learned a lot.
It’s still #1 in the app store, there are still hysterical misinformed NYT op-eds and calls for insane reacti... | 2025-02-06 |
https://www.lesswrong.com/posts/PRZLj96uDi25opEfy/hopeful-hypothesis-the-persona-jukebox | PRZLj96uDi25opEfy | Hopeful hypothesis, the Persona Jukebox. | donald-hobson | So there is this meme going around, that of the shoggoth. But one of the downsides of this model is that it's very vague about what is behind the mask.
A Jukebox was an old machine that would pick up vinyl records and place them on a turntable to play them.
So. What does the persona jukebox hypothesis say. It says that... | 2025-02-14 |
https://www.lesswrong.com/posts/R3RWnjqQWJ9w88rGZ/don-t-go-bankrupt-don-t-go-rogue | R3RWnjqQWJ9w88rGZ | Don't go bankrupt, don't go rogue | Nathan Young | Would you bet your life’s savings on a 50% chance to triple it? Now imagine betting your sanity, values and goals. How much should you be willing to put on the table?
You wanna bet?
Let’s say you go to a casino, which offers a coin flip. Heads you triple it, tails you lose it. How much of your money should you bet on i... | 2025-02-06 |
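The casino question is the textbook Kelly-criterion setup; a minimal sketch, assuming log-wealth maximization is the goal:

```python
def kelly_fraction(p: float, b: float) -> float:
    """f* = p - (1 - p) / b for win probability p and net odds b; 0 if no edge."""
    return max(p - (1 - p) / b, 0.0)

# Triple-or-nothing on a fair coin: net odds b = 2 (you win twice your stake), p = 0.5.
print(kelly_fraction(0.5, 2.0))   # 0.25 -> bet a quarter of the bankroll
```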
https://www.lesswrong.com/posts/oAaxcQJPAmWtkMC8p/biology-ideology-and-violence | oAaxcQJPAmWtkMC8p | Biology, Ideology and Violence | Zero Contradictions | The audio version can be listened to here:
I often use the term "ideology", so I thought I should explain what I mean by it. The Wikipedia definition is:
A comprehensive set of normative beliefs, conscious and unconscious ideas, that an individual, group or society has.
I use the term in a more specific way. My definit... | 2025-02-06 |
https://www.lesswrong.com/posts/rhzXQyTQBdmjPQ76z/chicanery-no | rhzXQyTQBdmjPQ76z | Chicanery: No | Screwtape | There is a concept I picked up from the Onyx Path forums, from a link that is now dead and a post I can no longer find. That concept is the Chicanery tag, and while I’ve primarily used it for tabletop RPGs (think Dungeons & Dragons) I find it applicable elsewhere. If you happen to be better at searching the forums than... | 2025-02-06 |
https://www.lesswrong.com/posts/zjqrSKZuRLnjAniyo/illusory-safety-redteaming-deepseek-r1-and-the-strongest | zjqrSKZuRLnjAniyo | Illusory Safety: Redteaming DeepSeek R1 and the Strongest Fine-Tunable Models of OpenAI, Anthropic, and Google | ccstan99 | DeepSeek-R1 has recently made waves as a state-of-the-art open-weight model, with potentially substantial improvements in model efficiency and reasoning. But like other open-weight models and leading fine-tunable proprietary models such as OpenAI’s GPT-4o, Google’s Gemini 1.5 Pro, and Anthropic’s Claude 3 Haiku, R1’s g... | 2025-02-07 |
https://www.lesswrong.com/posts/qGKq4G3HGRcSBc94C/mats-applications-research-directions-i-m-currently-excited | qGKq4G3HGRcSBc94C | MATS Applications + Research Directions I'm Currently Excited About | neel-nanda-1 | I've just opened summer MATS applications (where I'll supervise people to write mech interp papers) I'd love to get applications from any readers who are interested! Apply here, due Feb 28
As part of this, I wrote up a list of research areas I'm currently excited about, and thoughts for promising directions within thos... | 2025-02-06 |
https://www.lesswrong.com/posts/bo4pfdAyDQmkavqFY/hypnosis-question | bo4pfdAyDQmkavqFY | hypnosis question | avery-liu | hypnosis is really crazy and weird and
- hypnosis is a real thing, right?
- if so, can I use it to motivate myself to do stuff, nudge my aliefs and System 1 dispositions closer to actual reality, be happier, etc? | 2025-02-06 |
https://www.lesswrong.com/posts/va7oH9XfjjmHaLnZF/bida-calendar-ical-feed | va7oH9XfjjmHaLnZF | BIDA Calendar iCal Feed | jkaufman | A while ago I played around with manually generating iCal feeds for
events, encoding the BIDA's regular schedule (1st and 3rd Sundays back
then, with a more complex pattern for family dances and open bands)
in a machine-readable format. This was a fun project, but it didn't
work very well: it didn't have any way to... | 2025-02-06 |
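A minimal sketch of what such a feed can look like, using a standard RFC 5545 RRULE for the "1st and 3rd Sundays" pattern (placeholder UID and start date; this encodes only the regular schedule, not the exceptions the post mentions):

```python
# Placeholder UID/DTSTART; the RRULE line is standard RFC 5545 syntax.
ICS_FEED = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//example//dance-feed//EN",
    "BEGIN:VEVENT",
    "UID:dance-2025@example.com",
    "DTSTART;TZID=America/New_York:20250105T190000",
    "RRULE:FREQ=MONTHLY;BYDAY=1SU,3SU",   # 1st and 3rd Sundays, monthly
    "SUMMARY:Contra Dance",
    "END:VEVENT",
    "END:VCALENDAR",
])
print(ICS_FEED)
```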
https://www.lesswrong.com/posts/yaL7ZdQqA2twbiEmZ/do-no-harm-navigating-and-nudging-ai-moral-choices | yaL7ZdQqA2twbiEmZ | Do No Harm? Navigating and Nudging AI Moral Choices | sinem-erisken | TL;DR: How do AI systems make moral decisions, and can we influence their ethical judgments? We probe these questions by examining Llama's 70B (3.1 and 3.3) responses to moral dilemmas, using Goodfire API to steer its decision-making process. Our experiments reveal that simply reframing ethical questions - from "harm o... | 2025-02-06 |
https://www.lesswrong.com/posts/NYASwJFnwZyPRE8tS/inefficiencies-in-pharmaceutical-research-practices | NYASwJFnwZyPRE8tS | Inefficiencies in Pharmaceutical Research Practices | erioire | Epistemic status: I'm attempting to relate my observations, as well as some of those shared with me by my coworkers at the small CRO[1] where I work. I will try to make the distinction clear between object-level observations and speculative interpretations of the former.
The process of developing a new drug or medical ... | 2025-02-22 |
https://www.lesswrong.com/posts/jEZpfsdaX2dBD9Y6g/the-risk-of-gradual-disempowerment-from-ai | jEZpfsdaX2dBD9Y6g | The Risk of Gradual Disempowerment from AI | Zvi | The baseline scenario as AI becomes AGI becomes ASI (artificial superintelligence), if nothing more dramatic goes wrong first and even we successfully ‘solve alignment’ of AI to a given user and developer, is the ‘gradual’ disempowerment of humanity by AIs, as we voluntarily grant them more and more power in a vicious ... | 2025-02-05 |
https://www.lesswrong.com/posts/DnWkYz3w6xxsekok9/wired-on-doge-personnel-with-admin-access-to-federal-payment | DnWkYz3w6xxsekok9 | Wired on: "DOGE personnel with admin access to Federal Payment System" | Raemon | I haven't looked into this in detail, and I'm not actually sure how unique a situation this is. But, it updated me on "institutional changes to the US that might be quite bad[1]", and it seemed good if LessWrong folk did some sort of Orient Step on it.
(Please generally be cautious on LessWrong talking about politics. ... | 2025-02-05 |
https://www.lesswrong.com/posts/WHrig3emEAzMNDkyy/on-ai-scaling | WHrig3emEAzMNDkyy | On AI Scaling | harsimony | I’ve avoided talking about AI, mostly because everyone is talking about it. I think the implications of AI scaling were clear a while ago, and I mostly said my piece back then.
It’s time to take another crack at it.
Scaling laws
Scaling laws are a wonderful innovation[1]. With large enough training runs, performance is... | 2025-02-05 |
https://www.lesswrong.com/posts/3donrE5vFHeMJFLY9/what-makes-a-theory-of-intelligence-useful | 3donrE5vFHeMJFLY9 | What makes a theory of intelligence useful? | Amyr | This post is a sequel to "Action theory is not policy theory is not agent theory." I think this post is a little better, so if you want to start here you just need to know that I consider an action theory to discuss choosing the best action, a policy theory to discuss the best decision-making policy when the environmen... | 2025-02-20 |
https://www.lesswrong.com/posts/q5ihyrBWiGcRJfd7a/the-state-of-metaculus | q5ihyrBWiGcRJfd7a | The State of Metaculus | ChristianWilliams | null | 2025-02-05 |
https://www.lesswrong.com/posts/jfskr4ZZezpsq3pTd/alignment-paradox-and-a-request-for-harsh-criticism | jfskr4ZZezpsq3pTd | Alignment Paradox and a Request for Harsh Criticism | bridgett-kay | I’m not a scientist, engineer, or alignment researcher in any respect; I’m a failed science fiction writer. I have a tendency to write opinionated essays that I rarely finish. It’s good that I rarely finish them, however, because if I did, I would generate far too much irrelevant slop.
My latest opinionated essay was t... | 2025-02-05 |