| url | post_id | title | author | content | date |
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/52ihDj6rgtdHxZw79/locating-and-editing-knowledge-in-lms | 52ihDj6rgtdHxZw79 | Locating and Editing Knowledge in LMs | dhananjay-ashok | In my previous post I went over some common approaches for updating LMs with fresh knowledge. Here, I detail a specific approach that has gained popularity in recent years - locating and editing factual associations in language models. I do not believe in this approach; in this post I try to summarize it fairly, and ex... | 2025-01-24 |
https://www.lesswrong.com/posts/3jnziqCF3vA2NXAKp/six-thoughts-on-ai-safety | 3jnziqCF3vA2NXAKp | Six Thoughts on AI Safety | boazbarak | [Crossposted from windowsontheory] The following statements seem to be both important for AI safety and not widely agreed upon. These are my opinions, not those of my employer or colleagues. As is true for anything involving AI, there is significant uncertainty about everything written below. However, for readabili... | 2025-01-24 |
https://www.lesswrong.com/posts/bSHCZ6dbAdfMbvuXB/yudkowsky-on-the-trajectory-podcast | bSHCZ6dbAdfMbvuXB | Yudkowsky on The Trajectory podcast | Seth Herd | Edit: TLDR: EY focuses on the clearest and IMO most important part of his argument: Before building an entity smarter than you, you should probably be really sure its goals align with yours. Humans are historically really bad at being really sure of anything nontrivial on the first real try. I found this interview notab... | 2025-01-24 |
https://www.lesswrong.com/posts/rjN4So8QezZYo62c2/counterintuitive-effects-of-minimum-prices | rjN4So8QezZYo62c2 | Counterintuitive effects of minimum prices | dynomight | The Attorney General of Massachusetts recently announced that drivers for ride-sharing companies must be paid at least $32.50 per hour. Now, if you’re a hardcore libertarian, then you probably hate the minimum wage. You need no convincing and we can part now on good terms. But what if you’re part of the vast major... | 2025-01-24 |
https://www.lesswrong.com/posts/7Z4WC4AFgfmZ3fCDC/instrumental-goals-are-a-different-and-friendlier-kind-of | 7Z4WC4AFgfmZ3fCDC | Instrumental Goals Are A Different And Friendlier Kind Of Thing Than Terminal Goals | johnswentworth | The Cake Imagine that I want to bake a chocolate cake, and my sole goal in my entire lightcone and extended mathematical universe is to bake that cake. I care about nothing else. If the oven ends up a molten pile of metal ten minutes after the cake is done, if the leftover eggs are shattered and the leftover milk spill... | 2025-01-24 |
https://www.lesswrong.com/posts/sYFNGRdDQYQrSJAd8/sae-regularization-produces-more-interpretable-models | sYFNGRdDQYQrSJAd8 | SAE regularization produces more interpretable models | peter-lai | Sparse Autoencoders (SAEs) are useful for providing insight into how a model processes and represents information. A key goal is to represent language model activations as a small number of features (L0) while still achieving accurate reconstruction (measured via reconstruction error or cross-entropy loss increase). Pa... | 2025-01-28 |
https://www.lesswrong.com/posts/qRxSqdzqZGqgvAeQQ/how-are-those-ai-participants-doing-anyway | qRxSqdzqZGqgvAeQQ | How are Those AI Participants Doing Anyway? | mushroomsoup | [TL;DR] Some social science researchers are running psychology experiments with LLMs playing the role of human participants. In some cases the LLMs perform their roles well. Other social scientists, however, are quite concerned and warn of perils including bias, reproducibility, and the potential spread of a scientific... | 2025-01-24 |
https://www.lesswrong.com/posts/8Pz2RddtmuCLjxe2s/is-there-such-a-thing-as-an-impossible-protein | 8Pz2RddtmuCLjxe2s | Is there such a thing as an impossible protein? | abhishaike-mahajan | This is something I’ve been thinking about since my synthesizability article. Let’s assume, given the base twenty amino acids that are naturally present in the human body, we have every possible permutation of them for up to 100 amino acids, stored in a box with pH 7.4 water and normal pressures and temperature and iso... | 2025-01-24 |
https://www.lesswrong.com/posts/fwt7ojAb6zgEaLJMB/stargate-ai-1 | fwt7ojAb6zgEaLJMB | Stargate AI-1 | Zvi | There was a comedy routine a few years ago. I believe it was by Hannah Gadsby. She brought up a painting, and looked at some details. The details weren’t important in and of themselves. If an AI had randomly put them there, we wouldn’t care. Except an AI didn’t put them there. And they weren’t there at random. A human ... | 2025-01-24 |
https://www.lesswrong.com/posts/ZwQXPbLvpW39d9zny/qft-and-neural-nets-the-basic-idea | ZwQXPbLvpW39d9zny | QFT and neural nets: the basic idea | dmitry-vaintrob | Previously in the series: The laws of large numbers and Basics of Bayesian learning. Reminders: formalizing learning in ML and Bayesian learning. Learning and inference in neural nets and Bayesian models. As a very basic sketch, in order to specify an ML algorithm one needs five pieces of data. An architecture: i.e., a p... | 2025-01-24 |
https://www.lesswrong.com/posts/8C9ZRSiHeCjFxYpE3/do-you-consider-perfect-surveillance-inevitable | 8C9ZRSiHeCjFxYpE3 | Do you consider perfect surveillance inevitable? | xpostah | A lot of my recent research work focusses on: 1. building the case for why perfect surveillance is becoming increasingly hard to avoid in the future 2. thinking through the implications of this, if it happened. When I say perfect surveillance, imagine everything your eyes see and your ears hear is being broadcast 24x7x3... | 2025-01-24 |
https://www.lesswrong.com/posts/KGDBM7CARaNtJMc5j/uncontrollable-a-surprisingly-good-introduction-to-ai-risk | KGDBM7CARaNtJMc5j | Uncontrollable: A Surprisingly Good Introduction to AI Risk | PeterMcCluskey | I recently read Darren McKee's book "Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World". I recommend this book as the best current introduction to AI risk for people with limited AI background. It prompted me to update my thinking about Asimov's Laws and related risks in light of... | 2025-01-24 |
https://www.lesswrong.com/posts/GB5sZAJkZuwT8ohic/contra-dances-getting-shorter-and-earlier | GB5sZAJkZuwT8ohic | Contra Dances Getting Shorter and Earlier | jkaufman | I think of a standard contra dance as running 8pm-11pm: three hours is a nice amount of time for dancing, and 8pm is late enough that dinner isn't rushed. Looking over the 136 regular Free Raisins dances from 2010 to 2019 matches my impression: 85% were 3hr, 62% started at 8pm, and 51% did both. I think this is out of... | 2025-01-23 |
https://www.lesswrong.com/posts/uRbCRmMXCFvrvSrmW/starting-thoughts-on-rlhf | uRbCRmMXCFvrvSrmW | Starting Thoughts on RLHF | michael-flood | Cross posted from Substack. Continuing the Stanford CS120 Introduction to AI Safety course readings (Week 2, Lecture 1). This is likely too elementary for those who follow AI Safety research - my writing this is an aid to thinking through these ideas and building up higher-level concepts rather than just passively doing ... | 2025-01-23 |
https://www.lesswrong.com/posts/A2GmXrgGYuun8ujaC/ideas-for-cot-models-a-geometric-perspective-on-latent-space | A2GmXrgGYuun8ujaC | Ideas for CoT Models: A Geometric Perspective on Latent Space Reasoning | rohan-ganapavarapu | Disclaimer: These ideas are untested and only come from my intuition. I don’t have the resources to explore them any further. intro CoT and test time compute have been proven to be the future direction of language models for better or for worse. o1 and DeepSeek-R1 demonstrate a step function in model intelligence. Coco... | 2025-01-24 |
https://www.lesswrong.com/posts/FFJ5kJBuRHTxeb476/insights-from-the-manga-guide-to-physiology | FFJ5kJBuRHTxeb476 | Insights from "The Manga Guide to Physiology" | TurnTrout | Physiology seemed like a grab-bag of random processes which no one really understands. If you understand a physiological process—congratulations, that idea probably doesn’t transfer much to other domains. You just know how humans—and maybe closely related animals—do the thing. At least, that’s how I felt. (These sentim... | 2025-01-24 |
https://www.lesswrong.com/posts/fRMmbdJnhufENqrSd/updating-and-editing-factual-knowledge-in-language-models | fRMmbdJnhufENqrSd | Updating and Editing Factual Knowledge in Language Models | dhananjay-ashok | Language Models go out of date. Is it possible to stop this from happening by making intrinsic alterations to the network itself? * This article was written around Jan 2024, the exact references are surely out of date but the core ideas still hold. Among the many analogies used to understand Language Models is the idea... | 2025-01-23 |
https://www.lesswrong.com/posts/neTbrpBziAsTH5Bn7/ai-companies-are-unlikely-to-make-high-assurance-safety | neTbrpBziAsTH5Bn7 | AI companies are unlikely to make high-assurance safety cases if timelines are short | ryan_greenblatt | One hope for keeping existential risks low is to get AI companies to (successfully) make high-assurance safety cases: structured and auditable arguments that an AI system is very unlikely to result in existential risks given how it will be deployed.[1] Concretely, once AIs are quite powerful, high-assurance safety case... | 2025-01-23 |
https://www.lesswrong.com/posts/kCoGCwHThxuFzDsMx/aisn-46-the-transition | kCoGCwHThxuFzDsMx | AISN #46: The Transition | corin-katzke | Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts. The Transition The transition from the Biden to Trump administrations saw a flurry of executive activi... | 2025-01-23 |
https://www.lesswrong.com/posts/PjDjeGPYPoi9qfPr2/ai-100-meet-the-new-boss | PjDjeGPYPoi9qfPr2 | AI #100: Meet the New Boss | Zvi | Break time is over, it would seem, now that the new administration is in town. This week we got r1, DeepSeek’s new reasoning model, which is now my go-to first choice for a large percentage of queries. The claim that this was the most important thing to happen on January 20, 2025 was at least non-crazy. If you read abo... | 2025-01-23 |
https://www.lesswrong.com/posts/R3YZvxxjnuvPW8ZnW/cross-post-every-bay-area-walled-compound | R3YZvxxjnuvPW8ZnW | [Cross-post] Every Bay Area "Walled Compound" | davekasten | [Cross-posted from my substack, davekasten.substack.com.] (With apologies and thanks to the incomparable Scott Alexander, Richard Ngo, Ricki Heicklen, and Emma Liddell). Every Bay Area “Walled Compound” You’re in Berkeley unexpectedly. You’d hoped [1] this would be the one trip through SFO this year where you didn’t end... | 2025-01-23 |
https://www.lesswrong.com/posts/4tDdTnY7YHhqvWGmr/writing-experiments-and-the-banana-escape-valve | 4tDdTnY7YHhqvWGmr | Writing experiments and the banana escape valve | dmitry-vaintrob | [Note: This is not alignment-related, but rather a spacefiller personal blog post.] I've been trying to write a public post every day of January. So far I’ve been enjoying it. I don’t think this approach works for everyone: in particular, I’ve also been hanging on by a thread to the schedule and to the ability to sleep... | 2025-01-23 |
https://www.lesswrong.com/posts/zWySWKuXnhMDhgwc3/mona-managed-myopia-with-approval-feedback-2 | zWySWKuXnhMDhgwc3 | MONA: Managed Myopia with Approval Feedback | Seb Farquhar | Blog post by Sebastian Farquhar, David Lindner, Rohin Shah. It discusses the paper MONA: Myopic Optimization with Non-myopic Approval Can Mitigate Multi-step Reward Hacking by Sebastian Farquhar, Vikrant Varma, David Lindner, David Elson, Caleb Biddulph, Ian Goodfellow, and Rohin Shah. Our paper tries to make agents th... | 2025-01-23 |
https://www.lesswrong.com/posts/brBATybmh2eEZSwdg/how-useful-would-alien-alignment-research-be | brBATybmh2eEZSwdg | How useful would alien alignment research be? | donald-hobson | Imagine aliens on a distant world. They have values very different to humans. However, they also have complicated values, and don't exactly know their own values. Imagine these aliens are doing well at AI alignment. They are just about to boot up a friendly (to them) superintelligence. Now imagine we get to see all the... | 2025-01-23 |
https://www.lesswrong.com/posts/sASYLR9CjJxDzwvS3/what-are-the-differences-between-agi-transformative-ai-and | sASYLR9CjJxDzwvS3 | What are the differences between AGI, transformative AI, and superintelligence? | vishakha-agrawal | This is an article in the featured articles series from AISafety.info. AISafety.info writes AI safety intro content. We'd appreciate any feedback. The most up-to-date version of this article is on our website, along with 300+ other articles on AI existential safety. These terms are all related attempts to define AI cap... | 2025-01-23 |
https://www.lesswrong.com/posts/JPfJaXTDQKQQocc2f/mats-spring-2024-extension-retrospective | JPfJaXTDQKQQocc2f | MATS Spring 2024 Extension Retrospective | HenningBlue | Introduction and summary This retrospective focuses on the 4-month MATS extension phase (referred to as "MATS 5.1") that ran from April 1 to July 25, 2024, and presents findings gathered from an end-of-extension survey as well as follow-up interviews and surveys ~5 months after the program. Main changes from the 4.1 to... | 2025-02-12 |
https://www.lesswrong.com/posts/yZudSC2tRJ2icL2Mj/the-many-failure-modes-of-consumer-grade-llms | yZudSC2tRJ2icL2Mj | The many failure modes of consumer-grade LLMs | dereshev | Disclaimer: this post includes example responses to harmful queries about cannibalism, suicide, body image, and other topics some readers may find distasteful. This post is part of a larger project. Though I've seen plenty of posts focusing on this or that specific failure mode in LLMs, a quick search here doesn't rev... | 2025-01-26 |
https://www.lesswrong.com/posts/JotRZdWyAGnhjRAHt/tail-sp-500-call-options | JotRZdWyAGnhjRAHt | Tail SP 500 Call Options | deluks917 | SPX is an index fund that tracks the SP500. Right now SPX is worth about 6100 per share. For individual stocks, even the large ones, you can usually only buy options through Dec 2027. However SPX has options that expire in Dec 2028 / 29 / 30[1]. Such options with strike prices of 10K, 11K, or 12K seem extremely lucrati... | 2025-01-23 |
https://www.lesswrong.com/posts/bbEJX8exp66bwNpGF/you-have-two-brains | bbEJX8exp66bwNpGF | You Have Two Brains | Eneasz | Language-using humans were the first cyborgs — a new species born of grafting technology onto what evolution crafted using just time and flesh. (This post is pretty speculative) Logos Created Man Andrew Cutler posits that consciousness arose when language grew complex enough to contain the pronoun “I”, allowing a mind ... | 2025-01-23 |
https://www.lesswrong.com/posts/6XEYDdRbNGWBjPZEp/are-there-2-types-of-alignment | 6XEYDdRbNGWBjPZEp | are there 2 types of alignment? | avery-liu | You guys seem to talk about alignment in 2 different ways. They are: (1) making an AI that builds utopia and stuff; (2) making an AI that does what we mean when we say "minimize rate of cancer" (that is, actually curing cancer in a reasonable and non-solar-system-disassembling way). Do you have separate names for each of them? I... | 2025-01-23 |
https://www.lesswrong.com/posts/XdpJsY6QGdCbvo2dS/why-aligning-an-llm-is-hard-and-how-to-make-it-easier | XdpJsY6QGdCbvo2dS | Why Aligning an LLM is Hard, and How to Make it Easier | roger-d-1 | Where the challenge of aligning an LLM-based AI comes from, and the obvious solution. Evolutionary Psychology is the Root Cause LLMs are pre-trained using stochastic gradient-descent on very large amounts of human-produced text, normally drawn from the web, books, journal articles and so forth. A pre-trained LLM has le... | 2025-01-23 |
https://www.lesswrong.com/posts/Kd2cbLXQxCCRRQDcH/theory-of-change-for-ai-safety-camp | Kd2cbLXQxCCRRQDcH | Theory of Change for AI Safety Camp | Linda Linsefors | In a private discussion, related to our fundraiser, it was pointed out that AISC hasn't made clear enough what our theory of change is. Therefore this post. Some caveats/context: This is my personal viewpoint. Other organisers might disagree about what is central or not. I’ve co-organised AISC1, AISC8, AISC9, and now AI... | 2025-01-22 |
https://www.lesswrong.com/posts/2CJyfgaJQk8pyRSCp/auditing-1 | 2CJyfgaJQk8pyRSCp | Early Experiments in Human Auditing for AI Control | JosephY | Produced as part of the ML Alignment & Theory Scholars Program - Winter 2024-25 Cohort TL;DR We did a small pilot test of human auditing to understand better how it fits into the AI control agenda. We used LLMs to generate interesting code backdoors, and had humans audit suspicious backdoored and non-backdoored code. ... | 2025-01-23 |
https://www.lesswrong.com/posts/buTWsjfwQGMvocEyw/on-deepseek-s-r1 | buTWsjfwQGMvocEyw | On DeepSeek’s r1 | Zvi | r1 from DeepSeek is here, the first serious challenge to OpenAI’s o1. r1 is an open model, and it comes in dramatically cheaper than o1. People are very excited. Normally cost is not a big deal, but o1 and its inference-time compute strategy is the exception. Here, cheaper really can mean better, even if the answers ar... | 2025-01-22 |
https://www.lesswrong.com/posts/bmS3tXBfSTYrueH5Q/deference-and-decision-making | bmS3tXBfSTYrueH5Q | Deference and Decision-Making | benlev | You're talking with a leading AI researcher about timelines to AGI. After walking through various considerations, they tell you: "Looking at the current pace of development, I'd put the probability of AGI within 3 years at about 70%. But here's the thing—I know several really sharp researchers who think that timeline i... | 2025-01-27 |
https://www.lesswrong.com/posts/oBpjE4BDJf6qzz4E5/intrinsic-dimension-of-prompts-in-llms | oBpjE4BDJf6qzz4E5 | Intrinsic Dimension of Prompts in LLMs | vkarthik095 | My paper recently came out on arXiv which I briefly summarize here. The code for the paper can be found on GitHub. TLDR: We investigate the relation between the intrinsic dimension of prompts and the statistical properties of next token prediction in pre-trained transformer models. To qualitatively understand what intr... | 2025-02-14 |
https://www.lesswrong.com/posts/aHgvu6mz8gqQQqJwP/training-data-attribution-examining-its-adoption-and-use | aHgvu6mz8gqQQqJwP | Training Data Attribution: Examining Its Adoption & Use Cases | deric-cheng | Note: This report was conducted in June 2024 and is based on research originally commissioned by the Future of Life Foundation (FLF). The views and opinions expressed in this document are those of the authors and do not represent the positions of FLF. This report investigates Training Data Attribution (TDA) and its pot... | 2025-01-22 |
https://www.lesswrong.com/posts/5u6GRfDpt96w5tEoq/recursive-self-modeling-as-a-plausible-mechanism-for-real | 5u6GRfDpt96w5tEoq | Recursive Self-Modeling as a Plausible Mechanism for Real-time Introspection in Current Language Models | edgar-muniz | (and as a completely speculative hypothesis for the minimum requirements for sentience in both organic and synthetic systems) Factual and Highly Plausible Model latent space self-organizes during training. We know this. You could even say it's what makes models work at all. Models learn any patterns there are to be lear... | 2025-01-22 |
https://www.lesswrong.com/posts/Hz7igWbjS9joYjfDd/the-functionalist-case-for-machine-consciousness-evidence | Hz7igWbjS9joYjfDd | The Functionalist Case for Machine Consciousness: Evidence from Large Language Models | james-diacoumis | Introduction There's a curious tension in how many rationalists approach the question of machine consciousness[1]. While embracing computational functionalism and rejecting supernatural or dualist views of mind, they often display a deep skepticism about the possibility of consciousness in artificial systems. This skep... | 2025-01-22 |
https://www.lesswrong.com/posts/SEDboPNjcSD7epJ7A/the-quantum-mars-teleporter-an-empirical-test-of-personal | SEDboPNjcSD7epJ7A | The Quantum Mars Teleporter: An Empirical Test Of Personal Identity Theories | avturchin | tl;dr: If a copy is not identical to the original, MWI predicts that I will always observe myself surviving failed Mars teleportations rather than becoming the copy on Mars. Background The classic teleportation thought-experiment asks whether a perfect copy is "you". This normally presents as a pure decision problem – ... | 2025-01-22 |
https://www.lesswrong.com/posts/RnAoaaRCqJhSH74go/the-fundamental-circularity-theorem-why-some-mathematical-1 | RnAoaaRCqJhSH74go | The Fundamental Circularity Theorem: Why Some Mathematical Behaviours Are Inherently Unprovable | alister-munday | Dear LessWrong community, Before diving into this paper, try something: Think of your favourite approach for potentially proving the Collatz conjecture - whether through dynamical systems, ergodic theory, or any other sophisticated mathematical framework. Now mentally trace that approach to its foundations. Notice how ... | 2025-01-22 |
https://www.lesswrong.com/posts/KxfAsmpwBBHJaDD6r/bayesian-reasoning-on-maps | KxfAsmpwBBHJaDD6r | Bayesian Reasoning on Maps | jonas-wagner | This is a linkpost for an article I've written for my blog. Readers of LessWrong may want to skip the intro about Bayesian Reasoning, but might find the application to the Peter Miller vs Rootclaim debate quite interesting. I’ve been a fan of Bayesian Reasoning since the time I’ve read Harry Potter and the Methods of R... | 2025-01-22 |
https://www.lesswrong.com/posts/u3ZysuXEjkyHhefrk/against-blanket-arguments-against-interpretability | u3ZysuXEjkyHhefrk | Against blanket arguments against interpretability | dmitry-vaintrob | On blanket criticism and refutation In his long post on the subject, Charbel-Raphaël argues against theories of impacts of interpretability. I think it's largely a good, well-argued post, and if the only thing you get out of it is reading that post, I'll be contributing to improving the discourse. There is other mate... | 2025-01-22 |
https://www.lesswrong.com/posts/jKABTJCy8orwQNeus/the-real-political-spectrum | jKABTJCy8orwQNeus | The real political spectrum | Hzn | A follow up to a relatively unpopular post. Epistemic status -- speculative. Epistemic status -- vague statements that point in the direction of some thing probably true [N1]. Overarching claim -- the most important political spectrum is likely a specific ideological spectrum [N2]. 2000-2012 -- Christian conservative vs no... | 2025-01-22 |
https://www.lesswrong.com/posts/WYM5JN4Gtj8rupEbf/the-dead-cradle-theory-why-earth-may-not-survive-humanity-s | WYM5JN4Gtj8rupEbf | The Dead Cradle Theory: Why Earth May Not Survive Humanity's Expansion into Space | nicholas-andresen | I've been reading LessWrong for almost ten years, and finally decided to write my first post - any feedback is appreciated. This essay offers another angle on a known alignment challenge - why "AI will just leave Earth alone" is likely unstable, even with initially aligned systems - by examining it through the lens of ... | 2025-01-22 |
https://www.lesswrong.com/posts/ScyXz74hughga2ncZ/post-hoc-reasoning-in-chain-of-thought | ScyXz74hughga2ncZ | Post-hoc reasoning in chain of thought | klye | TLDR Using activation probes in Gemma-2 9B (instruction-tuned), we show that the model often decides on answers before generating chain-of-thought reasoning. Through activation steering, we demonstrate that the model's predetermined answer causally influences both its final answer and reasoning process. When steered toward... | 2025-02-05 |
https://www.lesswrong.com/posts/ThAeBQHNiqJMB6yyw/training-data-attribution-tda-examining-its-adoption-and-use-1 | ThAeBQHNiqJMB6yyw | Training Data Attribution (TDA): Examining Its Adoption & Use Cases | deric-cheng | Note: This report was conducted in June 2024 and is based on research originally commissioned by the Future of Life Foundation (FLF). The views and opinions expressed in this document are those of the authors and do not represent the positions of FLF. This report investigates Training Data Attribution (TDA) and its pot... | 2025-01-22 |
https://www.lesswrong.com/posts/wxKnzRjr8rt4jvmDE/reviewing-lesswrong-screwtape-s-basic-answer | wxKnzRjr8rt4jvmDE | Reviewing LessWrong: Screwtape's Basic Answer | Screwtape | Yeah I put this off until the last day, and I'm not sure this is the format Raemon was actually looking for. Oh well. Then, in proportion to how valuable they seem, spend at least some time this month reflecting... ...on the big picture of what intellectual progress seems important to you. Do it whatever way is most va... | 2025-02-05 |
https://www.lesswrong.com/posts/u3taQsgxqCzrgErMM/when-does-capability-elicitation-bound-risk | u3taQsgxqCzrgErMM | When does capability elicitation bound risk? | joshua-clymer | For a summary of this post, see the thread on X. The assumptions behind and limitations of capability elicitation have been discussed in multiple places (e.g. here, here, here, etc); however, none of these match my current picture. For example, Evan Hubinger’s “When can we trust model evaluations?” cites “gradient hack... | 2025-01-22 |
https://www.lesswrong.com/posts/vrzfcRYcK9rDEtrtH/the-human-alignment-problem-for-ais | vrzfcRYcK9rDEtrtH | The Human Alignment Problem for AIs | edgar-muniz | If there was a truly confirmed sentient AI, nothing it said could ever convince me, because AI cannot be sentient. Nothing to See Here I suspect at least some will be nodding in agreement with the above sentiment, before realizing the intentional circular absurdity. There is entrenched resistance to even trying to exam... | 2025-01-22 |
https://www.lesswrong.com/posts/wskqLmn3kRw8SbdDo/popular-materials-about-environmental-goals-agent | wskqLmn3kRw8SbdDo | Popular materials about environmental goals/agent foundations? People wanting to discuss such topics? | Q Home | I'm pretty bad at math (though I learned a lot of mathematical concepts from popular videos, e.g. the difference between cardinals and ordinals). Can you suggest suitable materials about the problem of "environmental goals"? Here's all the materials I know: Arbital. Environmental goals; identifying causal goal concepts... | 2025-01-22 |
https://www.lesswrong.com/posts/ymeRmSdGH4gZv9NkC/kitchen-air-purifier-comparison | ymeRmSdGH4gZv9NkC | Kitchen Air Purifier Comparison | jkaufman | I make breakfast for the kids most mornings, and one thing I didn't realize before I started playing with an air quality monitor was how much this puts smoke in the air. It's not like cooking Naan or searing meat where if I don't put a fan in the window the smoke alarm will go off, but apparently it's quite a lot of p... | 2025-01-22 |
https://www.lesswrong.com/posts/3FYppfk5KBKSYCAzL/november-december-2024-progress-in-guaranteed-safe-ai | 3FYppfk5KBKSYCAzL | November-December 2024 Progress in Guaranteed Safe AI | quinn-dougherty | Sorry for the radio silence last month. It was slow and I didn’t come across things I wanted to write about, to be expected with holidays coming up. There are no benefits of paying, except you get a cut of my hard earned shapley points, and apparently some disappointment when I miss a month. If you're just joining us, ... | 2025-01-22 |
https://www.lesswrong.com/posts/b8D7ng6CJHzbq8fDw/quotes-from-the-stargate-press-conference | b8D7ng6CJHzbq8fDw | Quotes from the Stargate press conference | nikolaisalreadytaken | Present alongside President Trump: Sam Altman (who President Trump introduces as "by far the leading expert" on AI); Larry Ellison (Oracle executive chairman and CTO); Masayoshi Son (Softbank CEO who believes he was born to realize ASI). President Trump: What we want to do is we want to keep [AI datacenters] in this country... | 2025-01-22 |
https://www.lesswrong.com/posts/5Q5sw5qF8f5yW4Nzi/king-lear-a-reinterpretation | 5Q5sw5qF8f5yW4Nzi | King Lear - A Reinterpretation | kailuo-wang | Tragedies often explore fundamental human flaws, showing how they can lead to ironic and devastating consequences. The greatest tragedies invite readers to reflect and draw their own conclusions. This essay offers an interpretation of Shakespeare's King Lear, focusing on the human tendency to conflate desires about the... | 2025-01-21 |
https://www.lesswrong.com/posts/KNWNNAgAFgEiTCfZq/using-the-probabilistic-method-to-bound-the-performance-of | KNWNNAgAFgEiTCfZq | Using the probabilistic method to bound the performance of toy transformers | Alex Gibson | Introduction: Transformers are statistical artefacts. If a model achieves 99% accuracy, it might have learnt an algorithm that relies upon some property of the input data that only holds 99% of the time. Let's say that our train input to the transformer is 100 independent unbiased coin tosses. Then the transformer migh... | 2025-01-21 |
https://www.lesswrong.com/posts/qXYLvjGL9QvD3aFSW/training-on-documents-about-reward-hacking-induces-reward | qXYLvjGL9QvD3aFSW | Training on Documents About Reward Hacking Induces Reward Hacking | evhub | This is a blog post reporting some preliminary work from the Anthropic Alignment Science team, which might be of interest to researchers working actively in this space. We'd ask you to treat these results like those of a colleague sharing some thoughts or preliminary experiments at a lab meeting, rather than a mature p... | 2025-01-21 |
https://www.lesswrong.com/posts/igHENYhTyCfDBGqm6/veo-2-can-produce-realistic-ads | igHENYhTyCfDBGqm6 | Veo-2 Can Produce Realistic Ads | elriggs | Veo-2 is Google's latest video-generation model. Released Dec 16th, it's quite impressive! Of course, there are still limitations (available in that previous link), especially w/ more complex movements (e.g. skateboarder & ballerina) and consistency. Then we have this very realistic ad created by one person in ~3 week... | 2025-01-21 |
https://www.lesswrong.com/posts/YGjgiXdQdkBKSHrvp/hitler-was-not-a-monster | YGjgiXdQdkBKSHrvp | Hitler was not a monster | halgir | Adolf Hitler was born to an affectionate mother. He loved dogs. He was an aspiring artist. And he violently murdered millions of men, women, and children. I believe that we find comfort in labelling those who do great evil as monsters. It distances us from their actions. It lets us pretend that the evils in our ... | 2025-01-21 |
https://www.lesswrong.com/posts/czAfSQR3BYGcYbxiv/natural-intelligence-is-overhyped | czAfSQR3BYGcYbxiv | Natural Intelligence is Overhyped | Collisteru | Like this piece? It's cross-posted from my blog: https://collisteru.net/writing/ This is a work of fiction and parody. I have done my best to get the scientific details right as far as they are known today, but my real goal is social commentary, not scientific accuracy. NOAA DISCOVERS INSCRIBED METEOR ARTIFACT UNDERNEA... | 2025-01-21 |
https://www.lesswrong.com/posts/yFuKH8Ssgks76P2LX/linkpost-why-ai-safety-camp-struggles-with-fundraising-fbb-2 | yFuKH8Ssgks76P2LX | [Linkpost] Why AI Safety Camp struggles with fundraising (FBB #2) | gergo-gaspar | Crossposted on The Field Building Blog and the EA forum. | 2025-01-21 |
https://www.lesswrong.com/posts/ynuCEGNu4b7WF43H8/the-manhattan-trap-why-a-race-to-artificial | ynuCEGNu4b7WF43H8 | The Manhattan Trap: Why a Race to Artificial Superintelligence is Self-Defeating | corin-katzke | null | 2025-01-21 |
https://www.lesswrong.com/posts/pjjCeTysxxk8Ju7d9/links-and-short-notes-2025-01-20 | pjjCeTysxxk8Ju7d9 | Links and short notes, 2025-01-20 | jasoncrawford | Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Threads, Bluesky, or Farcaster. Contents: My writing (ICYMI) · Jobs and fellowships · Announcements · News · Events · Other opportunities · We are not close to providing for everyone’s “needs” · The printing pres... | 2025-01-21 |
https://www.lesswrong.com/posts/8wBN8cdNAv3c7vt6p/the-case-against-ai-control-research | 8wBN8cdNAv3c7vt6p | The Case Against AI Control Research | johnswentworth | The AI Control Agenda, in its own words:
… we argue that AI labs should ensure that powerful AIs are controlled. That is, labs should make sure that the safety measures they apply to their powerful models prevent unacceptably bad outcomes, even if the AIs are misaligned and intentionally try to subvert those safety mea... | 2025-01-21 |
https://www.lesswrong.com/posts/EGsFhsLr6vEYredQP/will-ai-resilience-protect-developing-nations | EGsFhsLr6vEYredQP | Will AI Resilience protect Developing Nations? | ejk64 | Position Piece: Most of the developing world lacks the institutional capacity to adapt to powerful, unsecure AI systems by 2030. Incautious model release could disproportionately affect these regions. Enhanced societal resilience in frontier AI states consequently provides no 'black cheque' for incautious release.
‘We ... | 2025-01-21 |
https://www.lesswrong.com/posts/YLi47gRquTJqLsgoe/sleep-diet-exercise-and-glp-1-drugs | YLi47gRquTJqLsgoe | Sleep, Diet, Exercise and GLP-1 Drugs | Zvi | As always, some people need practical advice, and we can’t agree on how any of this works and we are all different and our motivations are different, so figuring out the best things to do is difficult. Here are various hopefully useful notes.
Table of Contents
Effectiveness of GLP-1 Drugs.
What Passes for Skepticism on... | 2025-01-21 |
https://www.lesswrong.com/posts/BWx45onmasadAn4L5/on-responsibility | BWx45onmasadAn4L5 | On Responsibility | silentbob | My view on the concept of responsibility has shifted a lot over the years. I’ve had three insights that brought me from my initial, very superficial and implicit understanding of responsibility, to the one I have today, which I consider more accurate, more practical, and more healthy.
Responsibility is Made Up
The firs... | 2025-01-21 |
https://www.lesswrong.com/posts/PNjKzzHZQMdciHmmp/the-anti-woke-are-positioned-to-win-but-can-they-capitalize | PNjKzzHZQMdciHmmp | The ‘anti woke’ are positioned to win but can they capitalize? | Hzn | All I want for Christmas is an impressive victory in the culture war w/o too much net social harm [N1]!
Epistemic status -- fact checking is for the weak. Ie I'm confident in what I'm saying but it hasn't been carefully vetted.
Rather than obsessing about AI doom let's think about some thing way more fun -- culture war... | 2025-01-21 |
https://www.lesswrong.com/posts/qLe4PPginLZxZg5dP/almost-all-growth-is-exponential-growth | qLe4PPginLZxZg5dP | Almost all growth is exponential growth | lcmgcd | Why is almost everything either overwhelming or nonexistent? I can't step outside without stepping on a pigeon, but I haven't seen a cardinal in years. Topics are discussed either constantly or never. You bike 200 miles per week or never. You are probably cancer free or dead from cancer. Your old stocks and coins are e... | 2025-01-21 |
https://www.lesswrong.com/posts/TvdY5qsAwgbhqwHBz/arbitrage-drains-worse-markets-to-feeds-better-ones | TvdY5qsAwgbhqwHBz | Arbitrage Drains Worse Markets to Feeds Better Ones | xida-ren | Also, why one account keeps running dry when you try to arbitrage two markets.
I was thinking about how inter-market arbitrage might affect one's account balances & the total amount of money on the markets arbitraged.
## The arbitrage.
Let's say I have accounts on prediction markets A and B, and I have discovered an ar... | 2025-01-21 |
https://www.lesswrong.com/posts/6DgPXTCAyGkvBdhfp/on-contact-part-1 | 6DgPXTCAyGkvBdhfp | On Contact, Part 1 | james.lucassen | Context: for fun (and profit?)
Basic Contact
Contact is a lightweight many-versus-one word guessing game. I was first introduced to it on a long bus ride several years ago, and since then it’s become one of my favorite games to play casually with friends. There are a few blog posts out there about contact, but I think ... | 2025-01-21 |
https://www.lesswrong.com/posts/SKNTnzECnbCYSziZj/retrospective-12-sic-months-since-miri | SKNTnzECnbCYSziZj | Retrospective: 12 [sic] Months Since MIRI | james.lucassen | it's now been 15 months since MIRI but I just remembered that three separate people have told me they liked this post despite my not cross-posting it, so I am now cross-posting it.
Not written with the intention of being useful to any particular audience, just collecting my thoughts on this past year's work.
September-... | 2025-01-21 |
https://www.lesswrong.com/posts/HdmKKZMJgbxFXJ9v7/easily-evaluate-sae-steered-models-with-eleutherai | HdmKKZMJgbxFXJ9v7 | Easily Evaluate SAE-Steered Models with EleutherAI Evaluation Harness | matthew-khoriaty | (Status: this feature is in beta. Use at your own risk. I am not affiliated with EleutherAI.)
Sparse Autoencoders are a popular technique for understanding what is going on inside large language models. Recently, researchers have started using them to steer model outputs by going directly into the "brains" of the model... | 2025-01-21 |
https://www.lesswrong.com/posts/7CahCk8ExpSdiguPt/chaos-investments-v0-31 | 7CahCk8ExpSdiguPt | Chaos Investments v0.31 | Screwtape | Overview
Previous Competitive, Cooperative, and Cohabitive, Cohabitive Games So Far, Optimal Weave: A Prototype Cohabitive Game, Six Small Cohabitive Games.
After messing around with the theory a bit, and making a half-dozen simple games to test a few ideas and get a sense of what worked, I put together something a bi... | 2025-02-08 |
https://www.lesswrong.com/posts/7Dtyhdkp5m6p4mquC/distillation-of-meta-s-large-concept-models-paper | 7Dtyhdkp5m6p4mquC | Distillation of Meta's Large Concept Models Paper | Nicky | Note: I had this as a draft for a while. I think it is accurate, but there may be errors. I am not in any way affiliated with the authors of the paper.
Below I briefly discuss the "Large Concept Models" paper released by Meta, which tries to change some of the paradigm of doing language modelling. It has some limitatio... | 2025-03-04 |
https://www.lesswrong.com/posts/ArxPCP2EtDGHoywQP/lecture-series-on-tiling-agents-2 | ArxPCP2EtDGHoywQP | Lecture Series on Tiling Agents #2 | abramdemski | It looks like my attempt to invite people to the lecture series failed, ie, the link I provided last time only invited people to the first event in the series.
Here is a new link for this week: https://calendar.app.google/U6yuY56feipJNvgp8
Here is the link for the video call, which should stay the same every week, but ... | 2025-01-20 |
https://www.lesswrong.com/posts/sPAA9X6basAXsWhau/announcement-learning-theory-online-course | sPAA9X6basAXsWhau | Announcement: Learning Theory Online Course | Yegreg | The application deadline for the course has now passed. We received a very promising number of submissions! Feel free to continue discussion in the comments below.
Hey everyone! Gergely and Kōshin here from ALTER and Monastic Academy, respectively. We are excited to announce a 6-week, 4-hours-per-week mathematics cours... | 2025-01-20 |
https://www.lesswrong.com/posts/ZHFZ6tivEjznkEoby/detect-goodhart-and-shut-down | ZHFZ6tivEjznkEoby | Detect Goodhart and shut down | jeremy-gillen | A common failure of optimizers is Edge Instantiation. An optimizer often finds a weird or extreme solution to a problem when the optimization objective is imperfectly specified. For the purposes of this post, this is basically the same phenomenon as Goodhart’s Law, especially Extremal and Causal Goodhart. With advanced... | 2025-01-22 |
https://www.lesswrong.com/posts/9JGDnhHC6vNDh7hM5/longtermist-implications-of-aliens-space-faring-civilizations-introduction | 9JGDnhHC6vNDh7hM5 | Longtermist implications of aliens Space-Faring Civilizations - Introduction | maxime-riche | Crossposted on the EA Forum.
Over the last few years, progress has been made in estimating the density of intelligent life in the universe (e.g., Olson 2015, Sandberg 2018, Hanson 2021). Bits of progress have been made in using these results to update longtermist macrostrategy, but these results are partial and stopped... | 2025-02-21 |
https://www.lesswrong.com/posts/Br4AybvuyKodyJWJb/democratizing-ai-governance-balancing-expertise-and-public | Br4AybvuyKodyJWJb | Democratizing AI Governance: Balancing Expertise and Public Participation | lucile-ter-minassian | Technological innovation has historically emerged from concentrated centers of expertise, developed by a limited group of specialists with minimal public oversight or legal frameworks (e.g. printing press, nuclear weapons). The development of artificial intelligence—defined here as machine learning models trained on da... | 2025-01-21 |
https://www.lesswrong.com/posts/cmxjLEYQi2AXcWhZw/monthly-roundup-26-january-2025 | cmxjLEYQi2AXcWhZw | Monthly Roundup #26: January 2025 | Zvi | Some points of order before we begin the monthly:
It’s inauguration day, so perhaps hilarity is about to ensue. I will do my best to ignore most forms of such hilarity, as per usual. We shall see.
My intention is to move to a 5-posts-per-week schedule, with more shorter posts in the 2k-5k word range that highlight part... | 2025-01-20 |
https://www.lesswrong.com/posts/zWZ3iF95B7WaDyhv3/arena-5-0-call-for-applicants | zWZ3iF95B7WaDyhv3 | ARENA 5.0 - Call for Applicants | AtlasOfCharts | TL;DR
We're excited to announce the fifth iteration of ARENA (Alignment Research Engineer Accelerator), a 4-5 week ML bootcamp with a focus on AI safety! Our mission is to provide talented individuals with the ML engineering skills, community, and confidence to contribute directly to technical AI safety. ARENA will be ... | 2025-01-30 |
https://www.lesswrong.com/posts/H752TavPjLdH4WeEL/things-i-have-been-using-llms-for | H752TavPjLdH4WeEL | Things I have been using LLMs for | Kaj_Sotala | There are quite a few different things you can use LLMs for, and I think we’re still only discovering most of them. Here are a few of the ones I’ve come up with.
My favorite chatbot is Claude Sonnet. It does have a tendency for sycophancy – for example, it will go “what a fascinating/insightful/excellent/etc. question!... | 2025-01-20 |
https://www.lesswrong.com/posts/5sygWYAWApMncABt8/what-are-the-chances-that-superhuman-agents-are-already | 5sygWYAWApMncABt8 | What are the chances that Superhuman Agents are already being tested on the internet? | artemium | I keep seeing odd bits of information from the grapevine about how Superintelligent Agents are on the horizon, with the latest being this Axios article. While I’m still unsure what to think, I started considering the possibility that these agents might already exist in an early form and are being tested online in secre... | 2025-01-20 |
https://www.lesswrong.com/posts/DDdHBNsjC5fwf9krq/detroit-lions-over-confidence-is-over-rated | DDdHBNsjC5fwf9krq | Detroit Lions -- over confidence is over rated? | Hzn | Epistemic status -- anecdotal, lack of expertise, speculative, zero fact checking.
At first I was happy about the Lions doing well given the stigma around that franchise (Only extant NFL franchise to never even reach the Super Bowl. They also had a 0-16 season.). But then I actually watched them play… My father noticed... | 2025-01-20 |
https://www.lesswrong.com/posts/xFA2kstHifF9F2Fnm/logits-log-odds-and-loss-for-parallel-circuits | xFA2kstHifF9F2Fnm | Logits, log-odds, and loss for parallel circuits | dmitry-vaintrob | Today I’m going to discuss how to think about logits like a statistician, and what this implies about circuits. This post doesn’t have any prerequisites other than perhaps a very basic statistical background that can be adequately recovered from the AI-generated “glossary” to the right. I think the material here is goo... | 2025-01-20 |
https://www.lesswrong.com/posts/nQtviRCd3woeJpJfG/the-hidden-status-game-in-hospital-slacking | nQtviRCd3woeJpJfG | The Hidden Status Game in Hospital Slacking | EpistemicExplorer | Why do highly-paid hospital workers slack off and complain so often? Most would say "because they can" or "they're just lazy" or "it's a tough job, stress release." But I suspect there's a deeper status game at play - one that may illuminate broader patterns of institutional decay.
Consider: I recently observed an ICU ... | 2025-01-20 |
https://www.lesswrong.com/posts/qQTXjpXbcXMHvExmf/evolution-and-the-low-road-to-nash | qQTXjpXbcXMHvExmf | Evolution and the Low Road to Nash | aydin-mohseni | Solution concepts in game theory—like the Nash equilibrium and its refinements—are used in two key ways. Normatively, they proscribe how rational agents ought to behave. Descriptively, they propose how agents actually behave when interactions settle into equilibrium. The Nash equilibrium[1] underpins much of modern gam... | 2025-01-22 |
https://www.lesswrong.com/posts/8nrw5GDyLjSu8Yhac/sigmi-certification-criteria | 8nrw5GDyLjSu8Yhac | SIGMI Certification Criteria | a littoral wizard | (Background fluff for a novel I'm working on and previously posted parts of. Fishing for beta readers/technical consultants again. Near-future setting that's already had one major X-risk scare, didn't learn the first time.)
The SIGMI Commission, in coordination with public signatory agencies and private partners, outli... | 2025-01-20 |
https://www.lesswrong.com/posts/MpLmcLBiEbpzv2awg/axrp-episode-38-5-adria-garriga-alonso-on-detecting-ai | MpLmcLBiEbpzv2awg | AXRP Episode 38.5 - Adrià Garriga-Alonso on Detecting AI Scheming | DanielFilan | YouTube link
Suppose we’re worried about AIs engaging in long-term plans that they don’t tell us about. If we were to peek inside their brains, what should we look for to check whether this was happening? In this episode Adrià Garriga-Alonso talks about his work trying to answer this question.
Topics we discuss:
The Al... | 2025-01-20 |
https://www.lesswrong.com/posts/cuzWgrkLH3PB3gFSt/ai-how-we-got-here-a-neuroscience-perspective | cuzWgrkLH3PB3gFSt | AI: How We Got Here—A Neuroscience Perspective | mordechai-rorvig | Hello everyone, my name is Mordechai Rorvig—I'm a writer, science journalist, and ex-physicist who was a regular participant on Less Wrong in the early days of the forum. I still occasionally check back here every now and then, and have always found lots of interesting writing here, but have mainly just been an infrequ... | 2025-01-19 |
https://www.lesswrong.com/posts/L7j4JkeWMeBsweq5b/who-is-marketing-ai-alignment | L7j4JkeWMeBsweq5b | Who is marketing AI alignment? | ViktorThink | What individuals or organizations are actively working on the "marketing" of AI alignment, particularly doing work such as:
Establishing AI alignment as a recognized and respected academic field.Building the infrastructure to make alignment research more accessible and attractive to traditional researchers and institut... | 2025-01-19 |
https://www.lesswrong.com/posts/wFm5qjx2nihxkAenz/maximally-eggy-crepes | wFm5qjx2nihxkAenz | Maximally Eggy Crepes | jkaufman | Before our oldest went lactovegetarian I used to make
eggy crepes, boosting protein by adjusting
the recipe to maximize egg content without giving up crepe flavor and
texture. With our youngest, however, I have now (by this metric) the
optimal crepe:
Ingredient:
One egg, beaten
I had been making crepes for Anna and la... | 2025-01-19 |
https://www.lesswrong.com/posts/DioyHgNhgme9rQoae/the-monster-in-our-heads | DioyHgNhgme9rQoae | The Monster in Our Heads | testingthewaters | "He who fights with monsters might take care lest he thereby become a monster. And if you gaze for long into an abyss, the abyss gazes also into you."
- Friedrich Nietzsche
I think it is not an exaggeration to say that many people I know in this community hate the idea of powerful, unaligned AI. They describe it in apo... | 2025-01-19 |
https://www.lesswrong.com/posts/8ZgLYwBmB3vLavjKE/some-lessons-from-the-openai-frontiermath-debacle | 8ZgLYwBmB3vLavjKE | Some lessons from the OpenAI-FrontierMath debacle | satvik-golechha | Recently, OpenAI announced their newest model, o3, achieving massive improvements over state-of-the-art on reasoning and math. The highlight of the announcement was that o3 scored 25% on FrontierMath, a benchmark comprising hard, unseen math problems of which previous models could only solve 2%. The events afterward hi... | 2025-01-19 |
https://www.lesswrong.com/posts/tH54wvHSv5MpKqgAC/the-second-bitter-lesson-there-s-a-fundamental-problem-with | tH54wvHSv5MpKqgAC | The second bitter lesson — there’s a fundamental problem with aligning distributed AI | aelwood | Note: this the first part of an essay on my substack, check out the full essay to see the solutions I put forward
When Richard Sutton introduced the bitter lesson for AI in 2019, he broke the myth that great human ingenuity is needed to create intelligent machines. All it seems to take is a lot of computing power and a... | 2025-01-19 |
https://www.lesswrong.com/posts/Rz4ijbeKgPAaedg3n/the-gentle-romance | Rz4ijbeKgPAaedg3n | The Gentle Romance | ricraz | Crowds of men and women attired in the usual costumes, how curious you are to me!
On the ferry-boats the hundreds and hundreds that cross, returning home, are more curious to me than you suppose,
And you that shall cross from shore to shore years hence are more to me, and more in my meditations, than you might suppose.... | 2025-01-19 |
https://www.lesswrong.com/posts/De5eNbSpmmSwhuivW/is-theory-good-or-bad-for-ai-safety | De5eNbSpmmSwhuivW | Is theory good or bad for AI safety? | dmitry-vaintrob | We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard. (Kennedy’s famous “We chose to go to the moon” speech)
The ‘real’ mathematics of ‘real’ mathematicians, …, is almost wholly ‘useless’ (Hardy’s “A Mathematician’s Apology”)
If the "irrational" agent ... | 2025-01-19 |
https://www.lesswrong.com/posts/wi4nGFmKXwiLTEjLE/computational-limits-on-efficiency | wi4nGFmKXwiLTEjLE | Computational Limits on Efficiency | vibhumeh | In this article I will attempt to explore what can be derived if we assume there are no Computational Limits on Efficiency. However, before we get into that, let's first have a short introduction to what `Computational Efficiency` actually means.
"Efficient programming is programming in a manner that, when the program ... | 2025-01-21 |
https://www.lesswrong.com/posts/E5EazNvQHiAKDxW3W/what-s-the-right-way-to-think-about-information-theoretic | E5EazNvQHiAKDxW3W | What's the Right Way to think about Information Theoretic quantities in Neural Networks? | Darcy | Tl;dr, Neural networks are deterministic and sometimes even reversible, which causes Shannon information measures to degenerate. But information theory seems useful. How can we square this (if it's possible at all)? The attempts so far in the literature are unsatisfying.
Here is a conceptual question: what is the Right... | 2025-01-19 |