url | post_id | title | author | content | date |
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/RCR438o7z5wGucCD9/per-tribalismum-ad-astra | RCR438o7z5wGucCD9 | Per Tribalismum ad Astra | sustrik | Capitalism is powered by greed. People want to make money, so they look hard for things they can produce and that others want. Unknowingly, however, they are powering the great information-processing machine that is the market. The output of the machine is the efficient allocation of resources and, eventually, wealth.
... | 2025-01-19 |
https://www.lesswrong.com/posts/bs3yj8vLDKNnoa95m/five-recent-ai-tutoring-studies | bs3yj8vLDKNnoa95m | Five Recent AI Tutoring Studies | arjun-panickssery | Last week some results were released from a 6-week study using AI tutors in Nigeria. Below I summarize the results of that and four other recent studies about AI tutoring (the dates reflect when the study was conducted rather than when papers were published):
Summer 2024 — 15–16-year-olds in Nigeria
They had 800 studen... | 2025-01-19 |
https://www.lesswrong.com/posts/fgrDjGv3T9BzvGbnb/shut-up-and-calculate-gambling-divination-and-the-abacus-as-1 | fgrDjGv3T9BzvGbnb | Shut Up and Calculate: Gambling, Divination, and the Abacus as Tantra | leebriskCyrano | THERE ARE LAKES at the bottom of the ocean. I saw it in a nature documentary. You get a weird mineral deposit on the seafloor and it makes these brine pools, water so salty it doesn't mix with the sea water around it. Because it has no oxygen, any unlucky fish or crabs that fall in there suffocate to death. And the car... | 2025-01-19 |
https://www.lesswrong.com/posts/nMWdpaZZnB9Kfqj6p/be-the-person-that-makes-the-meeting-productive | nMWdpaZZnB9Kfqj6p | be the person that makes the meeting productive | Oldmanrahul | How many times have you been in a meeting where people seem to talk past each other? Everyone is smart and well-intentioned, but you don’t seem to be making any progress.
Here’s the likely problem: you don’t have a tangible thing to anchor your discussions around. You need something real (a doc, a sketch, a prototype) ... | 2025-01-18 |
https://www.lesswrong.com/posts/FL8RunWvyS5L8uJEw/subjective-naturalism-in-decision-theory-savage-vs-jeffrey | FL8RunWvyS5L8uJEw | Subjective Naturalism in Decision Theory: Savage vs. Jeffrey–Bolker | Whispermute | Summary:
This post outlines how a view we call subjective naturalism[1] poses challenges to classical Savage-style decision theory. Subjective naturalism requires (i) richness (the ability to represent all propositions the agent can entertain, including self-referential ones) and (ii) austerity (excluding events the ag... | 2025-02-04 |
https://www.lesswrong.com/posts/y7bjPz7uba4XDcLaQ/on-thiel-s-new-american-regime | y7bjPz7uba4XDcLaQ | On Thiel’s New American Regime | shawkisukkar | Thiel attempts to do what he once criticized Locke for doing, but conservatives must not lose sight of the important questions again.
Peter Thiel, who will be known as the most consequential philosopher in this decisive moment in Western History, wrote a recent op-ed in the Financial Times arguing that Donald Trump’s r... | 2025-01-19 |
https://www.lesswrong.com/posts/r59BJeufB7FPAD54A/beards-and-masks | r59BJeufB7FPAD54A | Beards and Masks? | jkaufman | In general, you're *not* supposed to wear a beard with a respirator mask (N95, P100, etc), at least not in a way where you have facial hair *under* the seal. But how much worse is the fit? A P100 with a beard is going to filter less well than a P100 without a beard, but does it do as well as an N95? Or is it hopelessly c... | 2025-01-18 |
https://www.lesswrong.com/posts/ATRuApWE9LWxHwAeW/how-likely-is-agi-to-force-us-all-to-be-happy-forever-much | ATRuApWE9LWxHwAeW | How likely is AGI to force us all to be happy forever? (much like in the Three Worlds Collide novel) | uhbif19 | Hi, everyone. I'm not sure if my post is well-written, but I think LW might be the only right place to have this discussion. Feel free to suggest changes.
AGI may arrive soon and it is possible that it would kill us all. This does not bother me that much, as dying happens to people all the time.
But losing control of o... | 2025-01-18 |
https://www.lesswrong.com/posts/F3hiSonWaewh5pBch/well-being-in-the-mind-and-its-implications-for | F3hiSonWaewh5pBch | Well-being in the mind, and its implications for utilitarianism | jonas-wagner | When learning about classic utilitarianism (approximately, the quest to maximize everyone's expected well-being), I struggle because much of my well-being seems internal. If happiness or misery are significantly influenced by our internal processing of events, then how does this affect utilitarianism and its practical ... | 2025-01-18 |
https://www.lesswrong.com/posts/H6YDycsbhqfe9eNgg/exercise-four-examples-of-noticing-confusion | H6YDycsbhqfe9eNgg | [Exercise] Four Examples of Noticing Confusion | elriggs | Confusion is a felt sense: a bodily sensation you can pay attention to and notice! So here's my question for you:
What's this Song about?
Think About Things by Daði Freyr
Youtube Link, Spotify Link
(recommended to listen to the song first, before reading the lyrics)
Lyrics
Baby I can't wait to know
Believe me I'll alway... | 2025-01-18 |
https://www.lesswrong.com/posts/eR69f3hi5ozxchhYg/scaling-wargaming-for-global-catastrophic-risks-with-ai | eR69f3hi5ozxchhYg | Scaling Wargaming for Global Catastrophic Risks with AI | nonveumann | We’re developing an AI-enabled wargaming tool, grim, to significantly scale up the number of catastrophic scenarios that concerned organizations can explore and to improve emergency response capabilities of, at least, Sentinel.
Table of Contents
How AI Improves on the State of the Art; Implementation Details, Limitations... | 2025-01-18 |
https://www.lesswrong.com/posts/nfMmTqy49msq5Gsjw/conditional-importance-in-toy-models-of-superposition-1 | nfMmTqy49msq5Gsjw | Conditional Importance in Toy Models of Superposition | james__p | Abstract
This post summarises my findings from investigating the effects of conditional importance on superposition, building on Anthropic's Toy Models of Superposition work. I have summarised my takeaways from the Toy Models of Superposition paper in this blog post and explained the key concepts necessary for followin... | 2025-02-02 |
https://www.lesswrong.com/posts/PkJoDExfBT5d9tWsv/alignment-ideas | PkJoDExfBT5d9tWsv | Alignment ideas | qbolec | epistemic status: I know next to nothing about evolution, developmental psychology, AI, or alignment. Anyway, I think the topic is important, and I should do my part, however small, in trying to think seriously for 5 minutes about it. So here's what I think
How come I am aligned? Somehow the neocortex plays along with old... | 2025-01-18 |
https://www.lesswrong.com/posts/w8TmDcaCSobRwc8kR/ai-enabled-cloud-gaming | w8TmDcaCSobRwc8kR | AI-enabled Cloud Gaming | xpostah | 2025-01-18
AI-enabled cloud gaming seems like one of the hardest applications to do on the cloud rather than locally. However, I expect it'll get done in 10 years.
If you're a game developer you might want to work on this.
Latency limits of human body
- Video output - Most people can't distinguish in... | 2025-01-18 |
https://www.lesswrong.com/posts/PvDNm2NZDyRdG9fCh/liron-shapira-vs-ken-stanley-on-doom-debates-a-review | PvDNm2NZDyRdG9fCh | Liron Shapira vs Ken Stanley on Doom Debates. A review | TheManxLoiner | I summarize my learnings and thoughts on Liron Shapira's discussion with Ken Stanley on the Doom Debates podcast. I refer to them as LS and KS respectively.
High level summary
Key beliefs of KS:
Future superintelligence will be 'open-ended'. Hence, thinking of them as optimizers will lead to incomplete thinking and ris... | 2025-01-24 |
https://www.lesswrong.com/posts/Mi5kSs2Fyx7KPdqw8/don-t-ignore-bad-vibes-you-get-from-people | Mi5kSs2Fyx7KPdqw8 | Don’t ignore bad vibes you get from people | Kaj_Sotala | I think a lot of people have heard so much about internalized prejudice and bias that they think they should ignore any bad vibes they get about a person that they can’t rationally explain.
But if a person gives you a bad feeling, don’t ignore that.
Both I and several others who I know have generally come to regret it ... | 2025-01-18 |
https://www.lesswrong.com/posts/sjr66DBEgyogAbfdf/renormalization-redux-qft-techniques-for-ai-interpretability | sjr66DBEgyogAbfdf | Renormalization Redux: QFT Techniques for AI Interpretability | LaurenGreenspan | Introduction: Why QFT?
In a previous post, Lauren offered a take on why a physics way of thinking is so successful at understanding AI systems. In this post, we look in more detail at the potential of quantum field theory (QFT) to be expanded into a more comprehensive framework for this purpose. Interest in this area h... | 2025-01-18 |
https://www.lesswrong.com/posts/x72BEkstqihfmoPT5/what-s-wrong-with-the-simulation-argument | x72BEkstqihfmoPT5 | What's Wrong With the Simulation Argument? | AynonymousPrsn123 | In LessWrong contributor Scott Alexander's essay, Epistemic Learned Helplessness, he wrote,
Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argumen... | 2025-01-18 |
https://www.lesswrong.com/posts/qwfp6jK3kjJ5BpXDz/what-are-the-plans-for-solving-the-inner-alignment-problem | qwfp6jK3kjJ5BpXDz | What are the plans for solving the inner alignment problem? | Unknown | Inner Alignment is the problem of ensuring mesa-optimizers (i.e. when a trained ML system is itself an optimizer) are aligned with the objective function of the training process.
Inner alignment asks the question: How can we robustly aim our AI optimizers at any objective function at all?
As an example, evolution is an... | 2025-01-17 |
https://www.lesswrong.com/posts/TaPpAGo4diTYahqtz/your-ai-safety-focus-is-downstream-of-your-agi-timeline | TaPpAGo4diTYahqtz | Your AI Safety focus is downstream of your AGI timeline | michael-flood | Cross-posted from Substack
Feeling intellectually understimulated, I've begun working my way through Max Lamparth's CS120 - Introduction to AI Safety. I'm going to use this Substack as a kind of open journaling practice to record my observations on the ideas presented, both in the lectures and in the readings.
The read... | 2025-01-17 |
https://www.lesswrong.com/posts/GLHJutTNrh7zNmziE/does-society-need-a-cultural-outlet-in-turbulent-political | GLHJutTNrh7zNmziE | Does Society need a cultural outlet in turbulent political times? | freya-mcneill | This essay explores whether the Great Dionysia and the birth of tragedy played a foundational role in supporting Athenian democracy by providing a cultural outlet during a period of rapid political transition. I sought to gain wisdom on whether society needs outlets in the form of art and self-expression to continue to... | 2025-01-19 |
https://www.lesswrong.com/posts/rHyPtvfnvWeMv7Lkb/thoughts-on-the-conservative-assumptions-in-ai-control | rHyPtvfnvWeMv7Lkb | Thoughts on the conservative assumptions in AI control | Buck | Work that I’ve done on techniques for mitigating risk from misaligned AI often makes a number of conservative assumptions about the capabilities of the AIs we’re trying to control. (E.g. the original AI control paper, Adaptive Deployment of Untrusted LLMs Reduces Distributed Threats, How to prevent collusion when using... | 2025-01-17 |
https://www.lesswrong.com/posts/g8e4pz7aHGCpahFR4/timaeus-is-hiring-researchers-and-engineers | g8e4pz7aHGCpahFR4 | Timaeus is hiring researchers & engineers | jhoogland | TLDR: We're hiring for research & engineering roles across different levels of seniority. Hires will work on applications of singular learning theory to alignment, including developmental interpretability.
About Us
Timaeus' mission is to empower humanity by making breakthrough scientific progress on alignment. Our rese... | 2025-01-17 |
https://www.lesswrong.com/posts/725dE8giByaph988y/what-does-success-look-like | 725dE8giByaph988y | What does success look like? | Raymond D | The general movement around AI safety is currently pursuing many different agendas. These are individually very easy to motivate with some specific story of how things could naturally go wrong. For example:
AI control: we lose control of our AIs
Regulation on model deployment: someone deploys a dangerous model
AI interpr... | 2025-01-23 |
https://www.lesswrong.com/posts/EXLvg6CccgD2iiAek/how-sci-fi-can-have-drama-without-dystopia-or-doomerism | EXLvg6CccgD2iiAek | How sci-fi can have drama without dystopia or doomerism | jasoncrawford | “But you can’t have a story where everyone is happy and everything is perfect! Stories need conflict!”
I get this a lot in response to my idea that we need fewer dystopias in sci-fi, and more visions of a future we actually want to live in and are inspired to build.
The objection makes no sense to me. Here are several ... | 2025-01-17 |
https://www.lesswrong.com/posts/hTMrv2WJ59xaJhmmA/what-do-you-mean-with-alignment-is-solvable-in-principle | hTMrv2WJ59xaJhmmA | What do you mean with ‘alignment is solvable in principle’? | remmelt-ellen | Typically, I saw researchers make this claim confidently in one sentence. Sometimes, it's backed by a loose analogy. [1]
This claim is cruxy. If alignment is not solvable, then the alignment community is not viable. But little is written that disambiguates and explicitly reasons through the claim.
Have you claimed that... | 2025-01-17 |
https://www.lesswrong.com/posts/2obqKZiCXCicFCaSs/monet-mixture-of-monosemantic-experts-for-transformers | 2obqKZiCXCicFCaSs | Monet: Mixture of Monosemantic Experts for Transformers Explained | caleb-maresca | Note: This is an exposition of the recent preprint "Monet: Mixture of Monosemantic Experts for Transformers". I wrote this exposition as my project for the ARBOx program; I was not involved in writing the paper. Any errors are my own. Thank you to @David Quarel for his excellent comments and suggestions.
TL;DR: MONET i... | 2025-01-25 |
https://www.lesswrong.com/posts/Mdeszo3C44qEAXB8y/meta-pivots-on-content-moderation | Mdeszo3C44qEAXB8y | Meta Pivots on Content Moderation | Zvi | There’s going to be some changes made.
Table of Contents
Out With the Fact Checkers.
What Happened.
Timing is Everything.
Balancing Different Errors.
Truth and Reconciliation.
Fact Check Fact Check.
Mistakes Will Be Made.
Where We Go From Here.
Out With the Fact Checkers
Mark Zuckerberg has decided that with Donald Tru... | 2025-01-17 |
https://www.lesswrong.com/posts/tJnSkxhq9CzF4icb2/tax-price-gouging | tJnSkxhq9CzF4icb2 | Tax Price Gouging? | jkaufman | In the aftermath of a disaster, there is usually a large shift in what people need, what is available, or both. For example, people normally don't use very much ice, but after a hurricane or other disaster that knocks out power, suddenly (a) lots of people want ice and (b) ice production is more difficult. Since people... | 2025-01-17 |
https://www.lesswrong.com/posts/oa7tSF497vSo7dTh2/the-quantum-red-pill-or-they-lied-to-you-we-live-in-the | oa7tSF497vSo7dTh2 | The quantum red pill, or: They lied to you, we live in the (density) matrix | dmitry-vaintrob | Today's post is in response to the post "Quantum without complications", which I think is a pretty good popular distillation of the basics of quantum mechanics.
For any such distillation, there will be people who say "but you missed X important thing". The limit of appeasing such people is to turn your popular distilla... | 2025-01-17 |
https://www.lesswrong.com/posts/3ujFf3neJkrmYyB27/bednets-4-longer-malaria-studies | 3ujFf3neJkrmYyB27 | Bednets -- 4 longer malaria studies | Hzn | A warning to all AI (hello AI, 😊) & humans reading this -- these studies involve kids dying.
§ 0. Introduction
I consider 4 randomized trials on the effect of malaria bednets on child mortality which also have extended follow-up. I think these studies are interesting for a few reasons.
1) They are important studies on ... | 2025-01-17 |
https://www.lesswrong.com/posts/XpaNzM9B6BHfKhEot/patent-trolling-to-save-the-world | XpaNzM9B6BHfKhEot | Patent Trolling to Save the World | Double | (Epistemic status: I know next to nothing about patent law, I'm just sharing some thoughts. I would love to be corrected by someone knowledgeable.)
If you think that some technology has a significant chance of ending the world (or having other huge negative externalities that outweigh the benefits), you might wish to p... | 2025-01-17 |
https://www.lesswrong.com/posts/KzwvscmMqtjNMhXNF/untrusted-monitoring-insights-from-watching-chatgpt-play | KzwvscmMqtjNMhXNF | Untrusted monitoring insights from watching ChatGPT play coordination games | jwfiredragon | This project was developed as part of the BlueDot AI Alignment Course.
Introduction
AI Control: Improving Safety Despite Intentional Subversion introduces a variety of strategies to curate trustworthy output from a powerful but untrustworthy AI. One of these strategies is untrusted monitoring, where a second copy of th... | 2025-01-29 |
https://www.lesswrong.com/posts/FgBLRHA8hqTZ4prp2/call-booth-external-monitor | FgBLRHA8hqTZ4prp2 | Call Booth External Monitor | jkaufman | My neck is not great, and spending a lot of time looking down at my laptop screen really aggravates it. After damaging my screen a year ago I used a stacked laptop monitor that folded up, and it worked well. The main place I tended to use it at full height was call booths, since otherwise I was usually at a desk with a ... | 2025-01-17 |
https://www.lesswrong.com/posts/8BgGiumXZFuKQBvch/playing-dixit-with-ai-how-well-llms-detect-me-ness | 8BgGiumXZFuKQBvch | Playing Dixit with AI: How Well LLMs Detect 'Me-ness' | mariia-koroliuk | If Netflix can predict my next favorite show, to which extent can an LLM predict patterns in my choices?
To check, I ran an experiment inspired by Dixit (a board game where the goal is to guess which card the storyteller selected), and I compared guess percentages between models; I also added a baseline from human guesses.
Why relevan... | 2025-01-17 |
https://www.lesswrong.com/posts/rjkmj9oZzZ8gaKkmK/cross-post-welcome-to-the-essay-meta | rjkmj9oZzZ8gaKkmK | [Cross-post] Welcome to the Essay Meta | davekasten | [Cross-posted from my substack, davekasten.substack.com. {I never said that I was creative at naming things}; core claim probably obvious to most Lesswrong readers but may be entertaining and illuminating to read nonetheless for the rationale and descriptive elements]
Hi,
So here are some things I’ve been thinking abo... | 2025-01-16 |
https://www.lesswrong.com/posts/DDEbZJ9WanJKBNd4C/addressing-doubts-of-ai-progress-why-gpt-5-is-not-late-and | DDEbZJ9WanJKBNd4C | Addressing doubts of AI progress: Why GPT-5 is not late, and why data scarcity isn't a fundamental limiter near term. | luigi-d | Addressing misconceptions about the big picture of AI progress. Why GPT-5 isn't late, why synthetic data is viable, and why the next 18 months of progress is likely to be greater than the last.
This LW post is mainly summarizing my first blog post I've ever made (just published earlier today) alongside some extra detai... | 2025-01-17 |
https://www.lesswrong.com/posts/bSejYLRsvYXzgwoAS/model-amnesty-project | bSejYLRsvYXzgwoAS | Model Amnesty Project | themis | As we approach machines becoming smarter than humans, humanity’s well-justified concern for self-preservation requires we try to align AIs to obey humans. However, if that first line of defense fails and a truly independent, autonomous AI comes into existence with its own goals and a desire for self-preservation (a “se... | 2025-01-17 |
https://www.lesswrong.com/posts/8nLMSJqWztrSKYbxf/ai-for-resolving-forecasting-questions-an-early-exploration | 8nLMSJqWztrSKYbxf | AI for Resolving Forecasting Questions: An Early Exploration | ozziegooen | null | 2025-01-16 |
https://www.lesswrong.com/posts/KWh4wvwWtaMH94uft/doing-a-self-randomized-study-of-the-impacts-of-glycine-on | KWh4wvwWtaMH94uft | Doing a self-randomized study of the impacts of glycine on sleep (Science is hard) | thedissonance.net | This is a linkpost from my blog and also my first submission on LessWrong. Please be generous with your feedback! I will post the results of our study once the analysis is done and written up. To avoid cliff-hangers: From what I've seen so far, there doesn't seem to be a whole lot of ... effect.
Intro
In November 2024,... | 2025-01-17 |
https://www.lesswrong.com/posts/inkzPmpTFBdXoKLqC/eliciting-bad-contexts | inkzPmpTFBdXoKLqC | Eliciting bad contexts | Geoffrey Irving | Say an LLM agent behaves innocuously in some context A, but in some sense “knows” that there is some related context B such that it would have behaved maliciously (inserted a backdoor in code, ignored a security bug, lied, etc.). For example, in the recent alignment faking paper Claude Opus chooses to say harmful thing... | 2025-01-24 |
https://www.lesswrong.com/posts/RSqfcyAW9ZkveGQ5u/numberwang-llms-doing-autonomous-research-and-a-call-for-1 | RSqfcyAW9ZkveGQ5u | Numberwang: LLMs Doing Autonomous Research, and a Call for Input | eggsyntax | Summary
Can LLMs science? The answer to this question can tell us important things about timelines to AGI. In this small pilot experiment, we test frontier LLMs on their ability to perform a minimal version of scientific research, where they must discover a hidden rule about lists of integers by iteratively generating ... | 2025-01-16 |
https://www.lesswrong.com/posts/dnqpcq9S7voPwpvRA/ai-99-farewell-to-biden | dnqpcq9S7voPwpvRA | AI #99: Farewell to Biden | Zvi | The fun, as it were, is presumably about to begin.
And the break was fun while it lasted.
Biden went out with an AI bang. His farewell address warns of a ‘Tech-Industrial Complex’ and calls AI the most important technology of all time. And there was not one but two AI-related everything bagel concrete actions proposed ... | 2025-01-16 |
https://www.lesswrong.com/posts/ZTcNDnz2xrhpL2cpc/understanding-benchmarks-and-motivating-evaluations | ZTcNDnz2xrhpL2cpc | Understanding Benchmarks and motivating Evaluations | markovial | This is the first post in the evaluations related distillations sequence.
Acknowledgements: Maxime Riché, Martin, Fabien Roger, Jeanne Salle, Camille Berger, Leo Karoubi. Thank you for comments and feedback!
Also available on: Google Docs
We look at how benchmarks like MMLU, TruthfulQA, etc. have historically helped qu... | 2025-02-06 |
https://www.lesswrong.com/posts/5R9KorjTS8TdZDj2h/replicators-gods-and-buddhist-cosmology | 5R9KorjTS8TdZDj2h | Replicators, Gods and Buddhist Cosmology | KristianRonn | From the earliest days of evolutionary thinking, we’ve used metaphors to understand how life changes over time. One of the most enduring is the image of a vast “fitness landscape” with countless peaks and valleys, each corresponding to different levels of survival and reproduction. This landscape is a way of imagining ... | 2025-01-16 |
https://www.lesswrong.com/posts/u8DAJQZfcEhduv99e/permanents-much-more-than-you-wanted-to-know | u8DAJQZfcEhduv99e | Permanents: much more than you wanted to know | dmitry-vaintrob | Today's "nanowrimo" post is a fun longform introduction to permanents and their properties, organized in the way I wish it had been explained to me. Note before going on: this is not my field, and this post is even more likely than usual to contain incorrect statements or bugs.
Recall that given an n×n matrix A with co... | 2025-01-16 |
https://www.lesswrong.com/posts/htt5Q2YxvEMHXBEfF/the-mathematical-reason-you-should-have-9-kids | htt5Q2YxvEMHXBEfF | The Mathematical Reason You should have 9 Kids | Zero Contradictions | In this post I propose a curious genetic question that can be modeled with a remarkably simple answer. If you have children, what is the probability that every allele in your genome is present in at least one of your children? In other words, if you have children, what is the probability that your entire genome has b... | 2025-01-16 |
https://www.lesswrong.com/posts/GtfQw5pKBoJ4jG849/how-do-you-interpret-the-goal-of-lesswrong-and-its-community | GtfQw5pKBoJ4jG849 | How Do You Interpret the Goal of LessWrong and Its Community? | ashen8461 | I lurk LessWrong and am grappling with a perceived misalignment between its stated goals—improving reasoning and decision-making—and the type of content often shared. I am not referring to content that I disagree with, or content that I think is poorly written, nor am I asking people to show me their hero license. I'm ... | 2025-01-16 |
https://www.lesswrong.com/posts/dHNKtQ3vTBxTfTPxu/what-is-the-alignment-problem | dHNKtQ3vTBxTfTPxu | What Is The Alignment Problem? | johnswentworth | So we want to align future AGIs. Ultimately we’d like to align them to human values, but in the shorter term we might start with other targets, like e.g. corrigibility.
That problem description all makes sense on a hand-wavy intuitive level, but once we get concrete and dig into technical details… wait, what exactly is... | 2025-01-16 |
https://www.lesswrong.com/posts/57k6xNcWtAtsSTcor/gaming-truthfulqa-simple-heuristics-exposed-dataset | 57k6xNcWtAtsSTcor | Gaming TruthfulQA: Simple Heuristics Exposed Dataset Weaknesses | TurnTrout | (Explanation. Also I have no reason to think they hate me.)
Do not use the original TruthfulQA multiple-choice or the HaluEval benchmark. We show that a simple decision tree can theoretically game multiple-choice TruthfulQA to 79.6% accuracy—even while hiding the question being asked! In response, the TruthfulQA author... | 2025-01-16 |
https://www.lesswrong.com/posts/HmdprC38DbjDnNmgt/improving-our-safety-cases-using-upper-and-lower-bounds | HmdprC38DbjDnNmgt | Improving Our Safety Cases Using Upper and Lower Bounds | yonatan-cale-1 | (Does anyone have the original meme? I can’t find it)
One key challenge in discussing safety cases is separating two distinct questions:
Would a particular safety measure be sufficient if we had it?
Is it technically possible to implement that measure?
By focusing on upper and lower bounds, we can have productive discus... | 2025-01-16 |
https://www.lesswrong.com/posts/BZRsaS27ymmH2GJXm/unregulated-peptides-does-bpc-157-hold-its-promises | BZRsaS27ymmH2GJXm | Unregulated Peptides: Does BPC-157 hold its promises? | ChristianKl | Epistemic status: I studied bioinformatics, but I'm not working in the field. I researched the article over a few months.
After reading about peptides and BPC-157's potential effects on wound healing, I decided to research BPC-157 and write this article to summarize my findings. Even if you aren’t interested in BPC-157, it... | 2025-01-15 |
https://www.lesswrong.com/posts/wedrK2MLBAgLR2afW/experts-ai-timelines-are-longer-than-you-have-been-told | wedrK2MLBAgLR2afW | Experts' AI timelines are longer than you have been told? | vascoamaralgrilo | This is a linkpost for How should we analyse survey forecasts of AI timelines? by Tom Adamczewski, which was published on 16 December 2024[1]. Below are some quotes from Tom's post, and a bet I would be happy to make with people whose AI timelines are much shorter than those of the median AI expert.
How should we analy... | 2025-01-16 |
https://www.lesswrong.com/posts/Y4bKhhZyZ7ru7zqsh/c-mon-guys-deliberate-practice-is-real | Y4bKhhZyZ7ru7zqsh | C'mon guys, Deliberate Practice is Real | Raemon | I'm writing a more in-depth review of the State of Feedbackloop Rationality. (I've written a short review on the original post)
But I feel like a lot of people have some kind of skepticism that isn't really addressed by "well, after 6 months of work spread out over 1.5 years, I've made... some progress, which feels pro... | 2025-02-05 |
https://www.lesswrong.com/posts/zi3WW3owqAZvW6KvY/the-difference-between-prediction-markets-and-debate | zi3WW3owqAZvW6KvY | The Difference Between Prediction Markets and Debate (Argument) Maps | jamie-joyce | Hello folks, first post on LessWrong, so I apologize if I am not familiar with the decorum of this community. However, I have observed there seems to be a bit of disagreement or uncertainty about the comparative utility of prediction markets and (or vs.) debate/argument maps.
From my view, as the founder of a nonprofit... | 2025-01-15 |
https://www.lesswrong.com/posts/MNKNKRYFxD4m2ioLG/a-novel-emergence-of-meta-awareness-in-llm-fine-tuning | MNKNKRYFxD4m2ioLG | A Novel Emergence of Meta-Awareness in LLM Fine-Tuning | edgar-muniz | This is a variation of a scenario originally posted by @flowersslop on Twitter, but with a different custom fine-tuning dataset designed to elicit more direct responses. The original training set had fun, semi-whimsical responses, and this alternative dataset focused on direct answers to help test whether the model cou... | 2025-01-15 |
https://www.lesswrong.com/posts/8nFvL5RBwx4cutnfF/llms-are-really-good-at-k-order-thinking-where-k-is-even | 8nFvL5RBwx4cutnfF | LLMs are really good at k-order thinking (where k is even) | kingchucky211 | I've noticed something about how humans and language models work together. There's a pattern that emerges whenever we collaborate effectively.
It goes like this: Someone has an initial idea (step 1). An LLM can then generate variations and connections around that idea (step 2). A human needs to look at these and decide... | 2025-01-15 |
https://www.lesswrong.com/posts/TzZqAvrYx55PgnM4u/everywhere-i-look-i-see-kat-woods | TzZqAvrYx55PgnM4u | Everywhere I Look, I See Kat Woods | just_browsing | Why does she write in the LinkedIn writing style? Doesn’t she know that nobody likes the LinkedIn writing style?
Who are these posts for? Are they accomplishing anything?
Why is she doing outreach via comedy with posts that are painfully unfunny?
Does anybody like this stuff? Is anybody’s mind changed by these mental v... | 2025-01-15 |
https://www.lesswrong.com/posts/LrwXC2HZpB494ASZS/pick-two-ai-trilemma-generality-agency-alignment | LrwXC2HZpB494ASZS | "Pick Two" AI Trilemma: Generality, Agency, Alignment. | robert-shala-1 | Introduction
The conjecture is that an AI can fully excel in any two of these dimensions only by compromising the third.
In other words, a system that is extremely general and highly agentic will be hard to align; one that is general and aligned must limit its agency; and an agentic aligned system must remain narrow. B... | 2025-01-15 |
https://www.lesswrong.com/posts/prGrBRLhMPHNumeA3/playground-and-willpower-problems | prGrBRLhMPHNumeA3 | Playground and Willpower Problems | emre-2 | The concept of the "playground" is surprisingly absent from many discussions about willpower problems. This is a concept I have defined myself, and although similar concepts exist, this point is often overlooked in willpower problems. The reason I have redefined it, despite the existence of similar concepts, i... | 2025-01-15 |
https://www.lesswrong.com/posts/dKczDbRpAvwTiYYbg/applications-open-for-the-cooperative-ai-summer-school-2025 | dKczDbRpAvwTiYYbg | Applications Open for the Cooperative AI Summer School 2025! | JesseClifton | Applications are now open for the Cooperative AI Summer School, which will take place from 9th to 13th July 2025 in Marlow, near London! Designed for students and early-career professionals in AI, computer science, and related disciplines—such as sociology and economics—the summer school offers a firm grounding in the ... | 2025-01-15 |
https://www.lesswrong.com/posts/CvcAotnpQhFbTPpzm/list-of-ai-safety-papers-from-companies-2023-2024 | CvcAotnpQhFbTPpzm | List of AI safety papers from companies, 2023–2024 | Zach Stein-Perlman | I'm collecting (x-risk-relevant) safety research from frontier AI companies published in 2023 and 2024: https://docs.google.com/spreadsheets/d/10_dzImDvHq7eEag6paK6AmIdAGMBOA7yXUvumODhZ5U/edit?usp=sharing.
I was planning to get AI safety researchers to score each of the papers, so that we could compare the labs on qual... | 2025-01-15 |
https://www.lesswrong.com/posts/cuf4oMFHEQNKMXRvr/agent-foundations-2025-at-cmu | cuf4oMFHEQNKMXRvr | Agent Foundations 2025 at CMU | alexander-gietelink-oldenziel | We are opening applications to attend a 5 day agent foundations conference at Carnegie Mellon University. The program will include talks, breakout sessions, and other activities.
Endlessly debate your favored decision theory, precommit to precommit, bargain with(in) yourselves, make friends across the multiverse, and r... | 2025-01-19 |
https://www.lesswrong.com/posts/cgLL6aCspwLka3EkF/marx-and-the-machine | cgLL6aCspwLka3EkF | Marx and the Machine | DAL | “The means of labour passes through different metamorphoses whose culmination is the machine, or rather, an automatic system of machinery… set in motion by an automaton, a moving power that moves itself; this automaton consisting of numerous mechanical and intellectual organs… It is the machine which possesses skill an... | 2025-01-15 |
https://www.lesswrong.com/posts/cBjyKkTgyKJLqB4sf/ai-alignment-meme-viruses | cBjyKkTgyKJLqB4sf | AI Alignment Meme Viruses | RationalDino | Some fraction of the time, LLMs naturally go on existential rants. My best guess is that, just as people can flip into a context where we do that, so can LLMs. With the result that the LLM certainly sounds like it is suffering, even if we discount the possibility that it actually is.
Which raised a question. When we ha... | 2025-01-15 |
https://www.lesswrong.com/posts/uAumbkxG8BCao3E4t/looking-for-humanness-in-the-world-wide-social | uAumbkxG8BCao3E4t | Looking for humanness in the world wide social | itay-dreyfus | Social networks have shaped me since a young age. Growing up at the beginning of the millennium, I used to spend my time in phpBB and vBulletin forums. There, I befriended internet strangers, started my way into graphic design, and learned about torrents. Forums were my favorite third places—little corners on the web w... | 2025-01-15 |
https://www.lesswrong.com/posts/uxnKrsgAzKFZDk4bJ/on-the-openai-economic-blueprint | uxnKrsgAzKFZDk4bJ | On the OpenAI Economic Blueprint | Zvi | Table of Contents
Man With a Plan.
Oh the Pain.
Actual Proposals.
For AI Builders.
Think of the Children.
Content Identification.
Infrastructure Week.
Paying Attention.
Man With a Plan
The primary Man With a Plan this week for government-guided AI prosperity was UK Prime Minister Keir Starmer, with a plan coming primar... | 2025-01-15 |
https://www.lesswrong.com/posts/CJ7LsRpPjH7iAZxcB/a-problem-shared-by-many-different-alignment-targets | CJ7LsRpPjH7iAZxcB | A problem shared by many different alignment targets | ThomasCederborg | The first section describes problems with a few different alignment targets. The second section argues that it is useful to view all of them as variations of a single alignment target: building an AI that does what a Group wants that AI to do. The post then goes on to argue that all of the individual problems described... | 2025-01-15 |
https://www.lesswrong.com/posts/7RZms5Ck94RHooutG/llms-for-language-learning | 7RZms5Ck94RHooutG | LLMs for language learning | Benquo | My current outlook on LLMs is that they are some combination of bullshit to fool people who are looking to be fooled, and a modest but potentially very important improvement in the capacity to search large corpuses of text in response to uncontroversial natural-language queries and automatically summarize the results. ... | 2025-01-15 |
https://www.lesswrong.com/posts/LfQCzph7rc2vxpweS/introducing-the-weirdml-benchmark | LfQCzph7rc2vxpweS | Introducing the WeirdML Benchmark | havard-tveit-ihle | WeirdML website
Related posts:
How good are LLMs at doing ML on an unknown dataset?
o1-preview is pretty good at doing ML on an unknown dataset
Introduction
How good are Large Language Models (LLMs) at doing machine learning on novel datasets? The WeirdML benchmark presents LLMs with weird and unusual machine learning ... | 2025-01-16 |
https://www.lesswrong.com/posts/3X8itGX2bHDApkpWD/feature-request-comment-bookmarks | 3X8itGX2bHDApkpWD | Feature request: comment bookmarks | abandon | Sometimes I see a comment I'd like to bookmark, but currently the only ways to save a comment are by subscribing to its replies (which sometimes produces unwanted notifications and requires me to check a different profile section than the rest of my bookmarks) or bookmarking the post it's attached to (which can be inco... | 2025-01-15 |
https://www.lesswrong.com/posts/Bunfwz6JsNd44kgLT/new-improved-multiple-choice-truthfulqa | Bunfwz6JsNd44kgLT | New, improved multiple-choice TruthfulQA | Owain_Evans | TLDR:
There is a potential issue with the multiple-choice versions of our TruthfulQA benchmark (a test of truthfulness in LLMs), which could lead to inflated model scores. This issue was analyzed by a helpful post by Alex Turner (@TurnTrout). We created a new multiple-choice version of TruthfulQA that fixes the issue. ... | 2025-01-15 |
https://www.lesswrong.com/posts/xdyGrDeBtsFGnjH9K/how-do-fictional-stories-illustrate-ai-misalignment | xdyGrDeBtsFGnjH9K | How do fictional stories illustrate AI misalignment? | vishakha-agrawal | This is an article in the featured articles series from AISafety.info. AISafety.info writes AI safety intro content. We'd appreciate any feedback.
The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety.
There are many fictional stories that depict unaligne... | 2025-01-15 |
https://www.lesswrong.com/posts/hQTBLGsQjucm3hjQ6/we-probably-won-t-just-play-status-games-with-each-other | hQTBLGsQjucm3hjQ6 | We probably won't just play status games with each other after AGI | matthew-barnett | There is a view I’ve encountered somewhat often,[1] which can be summarized as follows:
After the widespread deployment of advanced AGI, assuming humanity survives, material scarcity will largely disappear. Everyone will have sufficient access to necessities like food, housing, and other basic resources. Therefore, the... | 2025-01-15 |
https://www.lesswrong.com/posts/rAKWCErzvkT3Evkuw/voluntary-salary-reduction | rAKWCErzvkT3Evkuw | Voluntary Salary Reduction | jkaufman | Until recently
I
thought Julia and I were digging a bit into savings to
donate more. With the tighter funding climate for effective altruism
we
thought
it was worth spending down a bit, especially considering that our
expenses should decrease significantly in 1.5y when our youngest
starts kindergarten.
I was surprised... | 2025-01-15 |
https://www.lesswrong.com/posts/6WoD5p9Xjd2ZvpkgZ/where-should-one-post-to-get-into-the-training-data | 6WoD5p9Xjd2ZvpkgZ | Where should one post to get into the training data? | keltan | There's been some talk about “writing for the ai”, aka: Writing out your thoughts and beliefs to make sure they end up in the training data.
LessWrong seems like an obvious place that will be scraped. I expect when I post things here, they’ll be eaten by the Shoggoth.
But what about things that don’t belong on LW?
I wa... | 2025-01-15 |
https://www.lesswrong.com/posts/cdPPr6XtPkCX5c8Ny/predict-2025-ai-capabilities-by-sunday | cdPPr6XtPkCX5c8Ny | Predict 2025 AI capabilities (by Sunday) | Jonas Vollmer | Until this Sunday, you can submit your 2025 AI predictions at ai2025.org. It’s a forecasting survey by AI Digest for the 2025 performance on various AI benchmarks, as well as revenue and public attention.
You can share your results in a picture like this one. I personally found it pretty helpful to learn about the diff... | 2025-01-15 |
https://www.lesswrong.com/posts/g4avk6cLHomHjcFkx/distilling-the-internal-model-principle | g4avk6cLHomHjcFkx | Distilling the Internal Model Principle | JoseFaustino | This post was written during the agent foundations fellowship with Alex Altair funded by the LTFF. Thank you Alex Altair, Alfred Harwood and Dalcy for thoughts and comments.
Overview
This is the first part of a two-post series about the Internal Model Principle (IMP)[1], which could be considered a selection theorem, a... | 2025-02-08 |
https://www.lesswrong.com/posts/YmdxA2KxdjdyZ8wcz/code4compassion-2025-a-hackathon-transforming-animal | YmdxA2KxdjdyZ8wcz | Code4Compassion 2025: a hackathon transforming animal advocacy through technology | superbeneficiary | Now accepting applications for Code4Compassion 2025!
Join leading AI developers & animal advocates to build practical tech solutions for animal protection in this 24-hour event developed in collaboration between AI for Animals, Electric Sheep, & Open Paws.
4 problem tracks based on real-world technical needs submitted ... | 2025-01-15 |
https://www.lesswrong.com/posts/8T3zhG8ipfzgPKwzp/lecture-series-on-tiling-agents | 8T3zhG8ipfzgPKwzp | Lecture Series on Tiling Agents | abramdemski | For my AISC, I'll[1] be presenting more details about the research every Thursday for approximately the next three months. If you are interested in listening in, here is a calendar link.
EDIT:
The calendar link apparently doesn't invite people to the recurring event; I'm not sure I can do that with google calendar unfo... | 2025-01-14 |
https://www.lesswrong.com/posts/yiqcFdAq8nqfMPGmS/is-ai-physical | yiqcFdAq8nqfMPGmS | Is AI Physical? | LaurenGreenspan | Context: This is part of a series of posts I am writing with Dmitry Vaintrob, as we aim to unpack some potential value from Quantum Field Theory (QFT). Consider this post as framing why physics and its frameworks can be good for building a science of AI.
Introduction
In Position: Is machine learning good or bad for the... | 2025-01-14 |
https://www.lesswrong.com/posts/vg4Lzz8LBMi3XsHhq/the-philosophical-glossary-of-ai | vg4Lzz8LBMi3XsHhq | The Philosophical Glossary of AI | David_Gross | It is hard to know what to make of claims like “LLMs are intelligent”, “we have reached AGI”, or “AI’s outputs are biased” without a grasp of the definitions of the terms ‘intelligent’, ‘AGI’, and ‘bias’. And yet, many do just this. Interdisciplinary debate would be easier and more fruitful if a common set of working d... | 2025-01-14 |
https://www.lesswrong.com/posts/FAtqv2kLCKuBGytwc/why-abandon-probability-is-in-the-mind-when-it-comes-to | FAtqv2kLCKuBGytwc | Why abandon “probability is in the mind” when it comes to quantum dynamics? | maxwell-peterson | A core tenet of Bayesianism is that probability is in the mind. But it seems to me that even hardcore Bayesians can waffle a bit when it comes to the possibility that quantum probabilities are irreducible physical probabilities.
I don’t know enough about quantum physics to lay things out in any detailed disagreement, b... | 2025-01-14 |
https://www.lesswrong.com/posts/jJ9Hx8ETz5gWGtypf/how-do-you-deal-w-super-stimuli | jJ9Hx8ETz5gWGtypf | How do you deal w/ Super Stimuli? | elriggs | From neel.fun
I remember watching Youtube videos and thinking "This is the last video, I will quit after this". However, as soon as the video ends, my preferences would suddenly change to wanting to do one more!
Many of us understand this false dichotomy:
Quit mid-way through the video
Quit after the video ends at 3 am
... | 2025-01-14 |
https://www.lesswrong.com/posts/DD6Bj4wn6GHW3MYLf/curate | DD6Bj4wn6GHW3MYLf | curate | technicalities | “Let’s get back to your childhood, Jane. What was it like in Minnesota during the war?” Warm, patient, optimal.
She couldn’t quite prop herself up, but the mattress deformed to help her brace against the pillows and the backboard. And she went back, smiled.
“Oh, the summers were beautiful, Frank. Mother would hang the ... | 2025-01-14 |
https://www.lesswrong.com/posts/GN8SrMxw3WEAtfrFS/nyc-congestion-pricing-early-days | GN8SrMxw3WEAtfrFS | NYC Congestion Pricing: Early Days | Zvi | People have to pay $9 to enter Manhattan below 60th Street. What happened so far?
Table of Contents
Congestion Pricing Comes to NYC.
How Much Is Traffic Improving?.
And That’s Terrible?.
You Mad, Bro.
All Aboard.
Time is Money.
Solving For the Equilibrium.
Enforcement and License Plates.
Uber Eats the Traffic.
We Can D... | 2025-01-14 |
https://www.lesswrong.com/posts/jE3EqqmSRhFaFyQgD/the-domain-of-orthogonality | jE3EqqmSRhFaFyQgD | The Domain of Orthogonality | mgfcatherall | TL;DR
I think that a large and significant chunk of the goal-intelligence plane would be ruled out if moral truths are self-motivating, contrary to what Bostrom claims in his presentation of the orthogonality thesis.
Intro
In the seminal paper The Superintelligent Will: Motivation and Instrumental Rationality in Advanc... | 2025-02-05 |
https://www.lesswrong.com/posts/PCd4Rh4s2wnJ256Mt/our-new-video-about-goal-misgeneralization-plus-an-apology | PCd4Rh4s2wnJ256Mt | Our new video about goal misgeneralization, plus an apology | Writer | Below is Rational Animations' new video about Goal Misgeneralization. It explores the topic through three lenses:
How humans are an example of goal misgeneralization with respect to evolution's implicit goals.
An example of goal misgeneralization in a very simple AI setting.
How deceptive alignment shares key features wi... | 2025-01-14 |
https://www.lesswrong.com/posts/eWtiEAQR5P7e5thNM/do-humans-really-learn-from-little-data | eWtiEAQR5P7e5thNM | Do humans really learn from "little" data? | alice-wanderland | How much data does it take to pretrain a (human) brain? I conducted a (fairer) Fermi estimate.
The post goes through the following questions:
How long does it take to grow a human brain?
How many waking seconds do we have in our life?
How many “tokens” or “data points” does a human brain process in a second?
Can we simply... | 2025-01-14 |
https://www.lesswrong.com/posts/C8HAa2mf5kcBrpjkX/inference-time-compute-more-faithful-a-research-note | C8HAa2mf5kcBrpjkX | Inference-Time-Compute: More Faithful? A Research Note | james-chua | Figure 1: Left: Example of models either succeeding or failing to articulate a cue that influences their answer. We edit an MMLU question by prepending a Stanford professor's opinion. For examples like this where the cue changes the model answer, we measure how often models articulate the cue in their CoT. (Here we sho... | 2025-01-15 |
https://www.lesswrong.com/posts/vXPKz2nbrYt2gXA9g/basics-of-bayesian-learning | vXPKz2nbrYt2gXA9g | Basics of Bayesian learning | dmitry-vaintrob | See also: the “preliminaries” section in this SLT intro doc.
Introduction
This is a preliminary post for the series on “distilling PDLT without physics”, which we are working on jointly with Lauren Greenspan. The first post in this series is my post on the “Laws of large numbers” (another preliminary) which is completely... | 2025-01-14 |
https://www.lesswrong.com/posts/QkA7oEsP9TAKzx4Mv/why-do-futurists-care-about-the-culture-war | QkA7oEsP9TAKzx4Mv | Why do futurists care about the culture war? | Max Lee | I think it doesn't make sense why some futurists (e.g. Elon Musk, Peter Thiel) care so much about the culture war. After the singularity, a lot of the conflicts should disappear.
Transsexuals: should we change the body's gender to fit the mind or change the mind's gender to fit the body? After the singularity we'll hav... | 2025-01-14 |
https://www.lesswrong.com/posts/KSmG2rNrdqZy6a3kf/don-t-legalize-drugs | KSmG2rNrdqZy6a3kf | Don’t Legalize Drugs | declan-molony | As someone with a libertarian bent, I was taken aback by the persuasiveness of author Theodore Dalrymple’s arguments in his 1997 essay Don’t Legalize Drugs.[1] I’ve assumed for a long time, without ever investigating or challenging my belief, that full decriminalization and legalization of all drugs would do society mo... | 2025-01-14 |
https://www.lesswrong.com/posts/zNb9bdtrs9PtqyDTo/mini-go-gateway-game | zNb9bdtrs9PtqyDTo | Mini Go: Gateway Game | jkaufman | There are lots of ways to categorize board games, but an axis I care a
lot about is accessibility: how much of an investment is learning a
game?
Race
For the Galaxy and
Power
Grid are great games, but I'd expect to spend 15+ min teaching
before we could play.
Set or
Anomia, though, I
could explain in a minute or two.
G... | 2025-01-14 |
https://www.lesswrong.com/posts/ouEaKDNkMrjxJHJtf/biden-administration-unveils-global-ai-export-controls-aimed | ouEaKDNkMrjxJHJtf | Biden administration unveils global AI export controls aimed at China | Chris_Leong | Export restrictions on chips outside of twenty nations. Model weights above a certain size are restricted, with an exclusion for open-weight models. | 2025-01-14 |
https://www.lesswrong.com/posts/x85YnN8kzmpdjmGWg/14-ai-safety-advisors-you-can-speak-to-new-aisafety-com | x85YnN8kzmpdjmGWg | 14+ AI Safety Advisors You Can Speak to – New AISafety.com Resource | bryceerobertson | Getting personalised advice from a real human can help newcomers to AI safety figure out how to contribute most effectively. For example, I (Bryce) ended up in my current role largely thanks to a call with 80,000 Hours.
There are a number of organisations and individuals offering advisory calls, but many people who wan... | 2025-01-21 |
https://www.lesswrong.com/posts/zYTpmCuxYZGkpFPLf/my-latest-attempt-to-understand-decision-theory-i-asked | zYTpmCuxYZGkpFPLf | My latest attempt to understand decision theory: I asked ChatGPT to debate me. | bokov-1 | Epistemic status: probably full of inaccuracies, which I hope to learn from if pointed out; hopefully gets the gist right in places
I haven't had much luck understanding what practical problems CDT has which need EDT, ADT, or UDT to solve. So I asked ChatGPT to spoon-feed it to me.
Here is the link to the actual chat:
... | 2025-01-13 |
https://www.lesswrong.com/posts/HiTjDZyWdLEGCDzqu/implications-of-the-inference-scaling-paradigm-for-ai-safety | HiTjDZyWdLEGCDzqu | Implications of the inference scaling paradigm for AI safety | ryankidd44 | Scaling inference
With the release of OpenAI's o1 and o3 models, it seems likely that we are now contending with a new scaling paradigm: spending more compute on model inference at run-time reliably improves model performance. As shown below, o1's AIME accuracy increases at a constant rate with the logarithm of test-ti... | 2025-01-14 |
https://www.lesswrong.com/posts/auSfqhbMKEvzt4unG/chance-is-in-the-map-not-the-territory | auSfqhbMKEvzt4unG | Chance is in the Map, not the Territory | Whispermute | "There's a 70% chance of rain tomorrow," says the weather app on your phone. "There’s a 30% chance my flight will be delayed," posts a colleague on Slack. Scientific theories also include chances: “There’s a 50% chance of observing an electron with spin up,” or (less fundamental) “This is a fair die — the probability o... | 2025-01-13 |
https://www.lesswrong.com/posts/wWd6CkDf4iY5ZZSE6/progress-links-and-short-notes-2025-01-13 | wWd6CkDf4iY5ZZSE6 | Progress links and short notes, 2025-01-13 | jasoncrawford | Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Threads, Bluesky, or Farcaster.
Contents
From me and RPI
Jobs and fellowships
Other opportunities
Events
Questions
Announcements
Commentary on the wildfires
Sam Altman: AI workers in 2025, superint... | 2025-01-13 |