Columns: url (string, 52-124 chars) · post_id (string, 17 chars) · title (string, 2-248 chars) · author (string, 2-49 chars) · content (string, 22 chars-295k chars) · date (376 distinct values)
https://www.lesswrong.com/posts/Zfg2psEshmYqFZc3X/an-effective-ai-safety-initiative
Zfg2psEshmYqFZc3X
an effective ai safety initiative
logan-zoellner
a typical story A frequent criticism of the Effective Altruism/AI Safety memeplex is that they are too focused on theoretical harms from AI and not enough on real ones (it's me, I'm the one who frequently makes this criticism). Consider the recent California AI bill, which has heavy fingerprints of EA on it. Notice tha...
2024-05-06
https://www.lesswrong.com/posts/fxJiPEn7cmkzpYWKh/biorisk-is-an-unhelpful-analogy-for-ai-risk
fxJiPEn7cmkzpYWKh
Biorisk is an Unhelpful Analogy for AI Risk
Davidmanheim
null
2024-05-06
https://www.lesswrong.com/posts/YAMijCetKxNRcAwFB/accidental-electronic-instrument
YAMijCetKxNRcAwFB
Accidental Electronic Instrument
jkaufman
I've been working on a project with the goal of adding virtual harp strings to my electric mandolin. As I've worked on it, though, I've ended up building something pretty different: It's not what I was going for! Instead of a small bisonoric monophonic picked instrument attached to the mandolin, it's a large unisonor...
2024-05-06
https://www.lesswrong.com/posts/Eszpybm27yg46qoLr/the-social-impact-of-trolley-problems
Eszpybm27yg46qoLr
The Social Impact of Trolley Problems
atomantic
Every few years, someone asks me what I would do to solve a Trolley Problem. Sometimes, they think I’ve never heard of it before—that I’ve never read anything about moral philosophy (e.g. Plato, Foot, Thomson, Graham)—and oh do they have a zinger for me. But for readers who are well familiar with these problems, I have...
2024-05-08
https://www.lesswrong.com/posts/yf6gAcgPp22T7AdnZ/explaining-a-math-magic-trick
yf6gAcgPp22T7AdnZ
Explaining a Math Magic Trick
Robert_AIZI
Introduction A recent popular tweet did a "math magic trick", and I want to explain why it works and use that as an excuse to talk about cool math (functional analysis). The tweet in question: This is a cute magic trick, and like any good trick they nonchalantly gloss over the most important step. Did you spot it? Did ...
2024-05-05
https://www.lesswrong.com/posts/rZ6wam9gFGFQrCWHc/does-reducing-the-amount-of-rl-for-a-given-capability-level
rZ6wam9gFGFQrCWHc
Does reducing the amount of RL for a given capability level make AI safer?
Chris_Leong
Some people have suggested that a lot of the danger of training a powerful AI comes from reinforcement learning. Given an objective, RL will reinforce any method of achieving the objective that the model a) tries and b) finds to be successful. It doesn't matter whether it includes things like deceiving us or increasing...
2024-05-05
https://www.lesswrong.com/posts/3rkcbvpRKZPtXKFwN/haymarket-at-closing-time
3rkcbvpRKZPtXKFwN
Haymarket at Closing Time
jkaufman
Historically, produce shopping was mostly in open-air markets, but in the US produce is now typically sold in buildings. Most open-air produce sales are probably at farmers markets, but these focus on the high end. I like that Boston's Haymarket is more similar to the historical model: competing vendors selling conventio...
2024-05-05
https://www.lesswrong.com/posts/xgrvmaLFvkFr4hKjz/introduction-to-cancer-vaccines
xgrvmaLFvkFr4hKjz
introduction to cancer vaccines
bhauth
cancer neoantigens For cells to become cancerous, they must have mutations that cause uncontrolled replication and mutations that prevent that uncontrolled replication from causing apoptosis. Because cancer requires several mutations, it often begins with damage to mutation-preventing mechanisms. As such, cancers often...
2024-05-05
https://www.lesswrong.com/posts/cgrgbboLmWu4zZeG8/some-experiments-i-d-like-someone-to-try-with-an-amnestic
cgrgbboLmWu4zZeG8
Some Experiments I'd Like Someone To Try With An Amnestic
johnswentworth
A couple years ago, I had a great conversation at a research retreat about the cool things we could do if only we had safe, reliable amnestic drugs - i.e. drugs which would allow us to act more-or-less normally for some time, but not remember it at all later on. And then nothing came of that conversation, because as fa...
2024-05-04
https://www.lesswrong.com/posts/W7estof3P7JgBKWrN/introducing-ai-powered-audiobooks-of-rational-fiction
W7estof3P7JgBKWrN
Introducing AI-Powered Audiobooks of Rational Fiction Classics
Askwho
(ElevenLabs reading of this post:) Your browser does not support the audio element. I'm excited to share a project I've been working on that I think many in the Lesswrong community will appreciate - converting some rational fiction into high-quality audiobooks using cutting-edge AI voice technology from ElevenLabs, und...
2024-05-04
https://www.lesswrong.com/posts/bDoGbZX7Jzjr2x3Aa/s-risks-fates-worse-than-extinction
bDoGbZX7Jzjr2x3Aa
S-Risks: Fates Worse Than Extinction
aggliu
Cross-posted to the EA forum In this Rational Animations video, we discuss s-risks (risks from astronomical suffering), which involve an astronomical number of beings suffering terribly. Researchers on this topic argue that s-risks have a significant chance of occurring and that there are ways to lower that chance. The...
2024-05-04
https://www.lesswrong.com/posts/KKayJ2CXEgW2CaCXr/shannon-vallor-s-technomoral-virtues
KKayJ2CXEgW2CaCXr
Shannon Vallor’s “technomoral virtues”
David_Gross
Shannon Vallor is the first Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute. She is the author of Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. She believes that we need to build and cultivate a new virtue ethics appropriate to ou...
2024-05-04
https://www.lesswrong.com/posts/rspCJZo7srEjZSrn4/if-you-are-assuming-software-works-well-you-are-dead
rspCJZo7srEjZSrn4
If you are assuming Software works well you are dead
johannes-c-mayer
Suno Version I say this because I can hardly use a computer without constantly getting distracted. Even when I actively try to ignore how bad software is, the suggestions keep coming. Seriously Obsidian? You could not come up with a system where links to headings can't break? This makes you wonder what is wrong with hu...
2024-05-04
https://www.lesswrong.com/posts/sNnhq9PQvHW9PtoDH/ohgood-a-coordination-body-for-compute-governance
sNnhq9PQvHW9PtoDH
OHGOOD: A coordination body for compute governance
domdomegg
Core to many compute governance proposals is having some kind of register that records who owns AI chips. This article explores how this register could be implemented in practice, outlining an organisation that maintains such a register and its necessary processes. It's named OHGOOD, the Organisation Housing the GPU & ...
2024-05-04
https://www.lesswrong.com/posts/Lgvw4rFsGcXoyYZbw/ccs-on-compound-sentences
Lgvw4rFsGcXoyYZbw
CCS on compound sentences
artkpv
Finding internal knowledge representation(s) inside transformer models without supervision is certainly a challenging task which is important for scalable oversight and to mitigate the deception risk factor. I’m testing Contrast-Consistent Search (CCS[1]) on TruthfulQA[2] dataset for compound sentences (conjunction and...
2024-05-04
https://www.lesswrong.com/posts/bkr9BozFuh7ytiwbK/my-hour-of-memoryless-lucidity
bkr9BozFuh7ytiwbK
My hour of memoryless lucidity
UnexpectedValues
Yesterday, I had a coronectomy: the top halves of my bottom wisdom teeth were surgically removed. It was my first time being sedated, and I didn’t know what to expect. While I was unconscious during the surgery, the hour after surgery turned out to be a fascinating experience, because I was completely lucid but had alm...
2024-05-04
https://www.lesswrong.com/posts/caZ3yR5GnzbZe2yJ3/how-to-do-patching-fast
caZ3yR5GnzbZe2yJ3
How To Do Patching Fast
Josephm
This post outlines an efficient implementation of Edge Patching that massively outperforms common hook-based implementations. This implementation is available to use in my new library, AutoCircuit, and was first introduced by Li et al. (2023). What is activation patching? I introduce new terminology to clarify the dist...
2024-05-11
https://www.lesswrong.com/posts/uQgbAmjwkMT8Drrb7/extra-tall-crib
uQgbAmjwkMT8Drrb7
Extra Tall Crib
jkaufman
A few days ago I came upstairs to: Me: how did you get in there? Nora: all by myself! Either we needed to be done with the crib, which had a good chance of much less sleeping at naptime, or we needed a taller crib. This is also something we went through when Lily was little, and that time what worked was removing the ...
2024-05-04
https://www.lesswrong.com/posts/YstyvDymtaaXFQdfM/get-your-tickets-to-manifest-2024-by-may-13th
YstyvDymtaaXFQdfM
Get your tickets to Manifest 2024 by May 13th!
saul-munn
null
2024-05-03
https://www.lesswrong.com/posts/dL5kzowP8r4JJxhep/were-there-any-ancient-rationalists
dL5kzowP8r4JJxhep
Were there any ancient rationalists?
OliverHayman
I've recently read some cool posts on rationality through history (e.g., see here), and want to see if there are more examples! So: do you know of any "ancient" individual figures or institutions that you would consider rational? Or at least from several centuries ago.
2024-05-03
https://www.lesswrong.com/posts/gprh2HD6PDK6AZDqP/ai-safety-for-fleshy-humans-an-ai-safety-explainer-by-nicky
gprh2HD6PDK6AZDqP
"AI Safety for Fleshy Humans" an AI Safety explainer by Nicky Case
habryka4
Nicky Case, of "The Evolution of Trust" and "We Become What We Behold" fame (two quite popular online explainers/mini-games) has written an intro explainer to AI Safety! It looks pretty good to me, though just the first part is out, which isn't super in-depth. I particularly appreciate Nicky clearly thinking about the ...
2024-05-03
https://www.lesswrong.com/posts/K86KpNvt6bGygnBHE/conserved-quantities-stat-mech-part-2
K86KpNvt6bGygnBHE
Conserved Quantities (Stat Mech Part 2)
Jemist
As before, we will consider particles moving in boxes in an abstract and semi-formal way. Imagine we have two types of particle, red and blue, in a box. Imagine they can change colour freely, and as before let's forget as much as possible. Our knowledge over particle states now looks like: [Position]∼Uniform over insid...
2024-05-04
https://www.lesswrong.com/posts/324pQjqoHEHeF2vPs/ai-clarity-an-initial-research-agenda
324pQjqoHEHeF2vPs
AI Clarity: An Initial Research Agenda
justin-bullock
Cross-posted on our website: https://www.convergenceanalysis.org/publications/ai-clarity-an-initial-research-agenda Cross-posted on the EA Forum: https://forum.effectivealtruism.org/posts/JyhoTRXxYvLfFycXi/ai-clarity-an-initial-research-agenda Executive Summary Transformative AI (TAI) has the potential to solve many of...
2024-05-03
https://www.lesswrong.com/posts/7SnqxTzZzPMaKL5d3/apply-to-espr-and-pair-rationality-and-ai-camps-for-ages-16
7SnqxTzZzPMaKL5d3
Apply to ESPR & PAIR, Rationality and AI Camps for Ages 16-21
anna-gajdova
TLDR – Apply now to ESPR and PAIR. ESPR welcomes students between 16-19 years. PAIR is for students between 16-21 years. The FABRIC team is running two immersive summer workshops for mathematically talented students this year. The Program on AI and Reasoning (PAIR) is for students with an interest in artificial intelli...
2024-05-03
https://www.lesswrong.com/posts/hqDfsYTftQGr4eM4H/now-this-is-forecasting-understanding-epoch-s-direct
hqDfsYTftQGr4eM4H
Now THIS is forecasting: understanding Epoch’s Direct Approach
elliot
Happy May the 4th from Convergence Analysis! Cross-posted on the EA Forum. As part of Convergence Analysis’s scenario research, we’ve been looking into how AI organisations, experts, and forecasters make predictions about the future of AI. In February 2023, the AI research institute Epoch published a report in which it...
2024-05-04
https://www.lesswrong.com/posts/k8fvDmRMuKyPagwLM/llm-planners-hybridisation-for-friendly-agi
k8fvDmRMuKyPagwLM
LLM+Planners hybridisation for friendly AGI
installgentoo
Every LLM in existence is a blackbox, and alignment relying on tuning the blackbox never succeeds - that is evident from the fact that even models like ChatGPT get jailbroken constantly. Moreover, blackbox tuning has no reason to transfer to bigger models. A new architecture is required. I propose using an LLM to parse ...
2024-05-03
https://www.lesswrong.com/posts/3GqWPosTFKxeysHwg/mechanistic-interpretability-workshop-happening-at-icml-2024
3GqWPosTFKxeysHwg
Mechanistic Interpretability Workshop Happening at ICML 2024!
neel-nanda-1
Announcing the first academic Mechanistic Interpretability workshop, held at ICML 2024! I think this is an exciting development that's a lagging indicator of mech interp gaining legitimacy as an academic field, and a good chance for field building and sharing recent progress! We'd love to get papers submitted if any of...
2024-05-03
https://www.lesswrong.com/posts/yqu6hvLSR8S5GHNt6/geometrically-maximal-lottery-lotteries-are-probably-not
yqu6hvLSR8S5GHNt6
(Geometrically) Maximal Lottery-Lotteries Are Probably Not Unique
Lorxus
epistemic/ontological status: almost certainly all of the following - a careful research-grade writeup of some results I arrived at a genuinely kinda shiny open(?) question in theoretical psephology that we are near-certainly never going to get to put into practice for any serious non-cooked-up purpose let alone at sca...
2024-05-10
https://www.lesswrong.com/posts/JsqPftLgvHLL4Pscg/weekly-newsletter-for-ai-safety-events-and-training-programs
JsqPftLgvHLL4Pscg
Weekly newsletter for AI safety events and training programs
bryceerobertson
We've merged the newsletters from aisafety.training and aisafety.events to create one clean, comprehensive weekly email covering newly announced events and training programs in the AI safety space. Events and training programs are important for the ecosystem to grow and mature, so we wanted to make it as easy as possib...
2024-05-03
https://www.lesswrong.com/posts/buSq5PxfjbAg3GpD7/ccs-counterfactual-civilization-simulation
buSq5PxfjbAg3GpD7
CCS: Counterfactual Civilization Simulation
pi-rogers
I don't think this is very likely, but a possible path to alignment is formal goal alignment, which is basically the following two-step plan: 1. Define a formal goal that robustly leads to good outcomes under heavy optimization pressure. 2. Build something that robustly pursues the formal goal you give it. I think currently the...
2024-05-02
https://www.lesswrong.com/posts/kZpAPv96Daha8PMw8/let-s-design-a-school-part-2-1-school-as-education-structure
kZpAPv96Daha8PMw8
Let's Design A School, Part 2.1 School as Education - Structure
Sable
What are our goals when it comes to school-as-education? What are we actually trying to achieve? It’s my understanding that the current school system is designed to produce factory workers - that is, to create a class of adults capable of working productively in a factory. In that context, many aspects of education tha...
2024-05-02
https://www.lesswrong.com/posts/Jhfaeh8ZkroqMW9En/why-i-m-not-doing-pauseai
Jhfaeh8ZkroqMW9En
Why I'm not doing PauseAI
ariel-kwiatkowski
I'm a daily user of ChatGPT, sometimes supplementing it with Claude, and the occasional local model for some experiments. I try to squeeze LLMs into agent-shaped bodies, but it doesn't really work. I also have a PhD, which typically would make me an expert in the field of AI, but the field is so busy and dynamic t...
2024-05-02
https://www.lesswrong.com/posts/pPwt5ir2zFayLx7tH/ai-61-meta-trouble
pPwt5ir2zFayLx7tH
AI #61: Meta Trouble
Zvi
Note by habryka: This post failed to import automatically from RSS for some reason, so it's a week late. Sorry for the hassle. The week’s big news was supposed to be Meta’s release of two versions of Llama-3. Everyone was impressed. These were definitely strong models. Investors felt differently. After earnings yesterd...
2024-05-02
https://www.lesswrong.com/posts/HMgb9jiw7mb5mGbAZ/ai-salon-trustworthy-ai-futures-1
HMgb9jiw7mb5mGbAZ
AI Salon: Trustworthy AI Futures #1
ian-eisenberg
This monthly SF-based in person series focuses on AI Safety and its associated disciplines (e.g., AI governance & Policy, AI ethics, trustworthy AI, scalable oversight). We will dive deep into one topic each gathering, bringing our diverse perspectives together. Find more info and sign up at the Luma event! Theme: Ethi...
2024-05-02
https://www.lesswrong.com/posts/Qyr9tfBLck53em4hf/how-to-write-pseudocode-and-why-you-should
Qyr9tfBLck53em4hf
How to write Pseudocode and why you should
johannes-c-mayer
TLDR: Writing pseudocode is extremely useful when designing algorithms. Most people do it wrong. Mainly because they don't intentionally try to keep the code as abstract as possible. Possibly this happens because in normal programming a common workflow is to focus on getting the implementation of one function right bef...
2024-05-02
https://www.lesswrong.com/posts/sZpj4Xf9Ly2jyR9tK/ai-62-too-soon-to-tell
sZpj4Xf9Ly2jyR9tK
AI #62: Too Soon to Tell
Zvi
What is the mysterious impressive new ‘gpt2-chatbot’ from the Arena? Is it GPT-4.5? A refinement of GPT-4? A variation on GPT-2 somehow? A new architecture? Q-star? Someone else’s model? Could be anything. It is so weird that this is how someone chose to present that model. There was also a lot of additional talk this ...
2024-05-02
https://www.lesswrong.com/posts/aqW9AttBs85pTuLaB/whiteboard-program-traceing-debug-a-program-before-you-have
aqW9AttBs85pTuLaB
Whiteboard Program Tracing: Debug a Program Before you have the Code
johannes-c-mayer
TL;DR: A Whiteboard Program Trace is where you write down a data structure on a whiteboard and step-by-step walk through how an algorithm A would manipulate this data, i.e. you execute A for one specific input in a visual way. Usually, I do this before I have understood all the details of A, in order to figure them out...
2024-05-02
https://www.lesswrong.com/posts/eBzJawjxkMdNaCeMm/which-skincare-products-are-evidence-based
eBzJawjxkMdNaCeMm
Which skincare products are evidence-based?
vanessa-kosoy
The beauty industry offers a large variety of skincare products (marketed mostly at women), differing both in alleged function and (substantially) in price. However, it's pretty hard to test for yourself how much any of these products help. The feedback loop for things like "getting fewer wrinkles" is very long. So, whic...
2024-05-02
https://www.lesswrong.com/posts/qsGRKwTRQ5jyE5fKB/q-and-a-on-proposed-sb-1047
qsGRKwTRQ5jyE5fKB
Q&A on Proposed SB 1047
Zvi
Previously: On the Proposed California SB 1047. Text of the bill is here. It focuses on safety requirements for highly capable AI models. This is written as an FAQ, tackling all questions or points I saw raised. Safe & Secure AI Innovation Act also has a description page. Why Are We Here Again? There have been many hig...
2024-05-02
https://www.lesswrong.com/posts/XTdByFM6cmgB3taEN/key-takeaways-from-our-ea-and-alignment-research-surveys
XTdByFM6cmgB3taEN
Key takeaways from our EA and alignment research surveys
cameron-berg
Many thanks to Spencer Greenberg, Lucius Caviola, Josh Lewis, John Bargh, Ben Pace, Diogo de Lucena, and Philip Gubbins for their valuable ideas and feedback at each stage of this project—as well as the ~375 EAs + alignment researchers who provided the data that made this project possible. Background Last month, AE Stu...
2024-05-03
https://www.lesswrong.com/posts/z7Eenm7mbKgceJTFw/what-are-the-activities-that-make-up-your-research-process
z7Eenm7mbKgceJTFw
What are the Activities that make up your Research Process?
johannes-c-mayer
There are a bunch of activities that I engage in when doing research. These include but are not limited to: Figuring out the best thing to do. Talking out loud to force my ideas into language. For the last 3 months I have been working maybe 50 hours per week by meeting with people and doing stream-of-thought reasoning....
2024-05-02
https://www.lesswrong.com/posts/GAJfuPQinGX9p8BXt/how-do-you-select-the-right-research-acitivity-in-the-right
GAJfuPQinGX9p8BXt
How do you Select the Right Research Activity in the Right Moment?
johannes-c-mayer
In doing research, I have a bunch of activities that I engage in, including but not limited to: Figuring out the best thing to do. Talking out loud to force my ideas into language. Trying to explain an idea on the whiteboard. Writing pseudocode. Writing a concrete implementation we can run. Writing down things that we ...
2024-05-02
https://www.lesswrong.com/posts/T9fHFvksxwxpwxc2J/how-would-you-navigate-a-severe-financial-emergency-with-no
T9fHFvksxwxpwxc2J
How would you navigate a severe financial emergency with no help or resources?
Tigerlily
Hello, friends. This is my first post on LW, but I have been a "lurker" here for years and have learned a lot from this community that I value. I hope this isn't pestilent, especially for a first-time post, but I am requesting information/advice/non-obvious strategies for coming up with emergency money. I wouldn't ask ...
2024-05-02
https://www.lesswrong.com/posts/WCvjdws5qCYby7YGr/can-stealth-aircraft-be-detected-optically
WCvjdws5qCYby7YGr
Can stealth aircraft be detected optically?
yair-halberstadt
5th generation military aircraft are extremely optimised to reduce their radar cross section. It is this ability above all others that makes the F-35 and the F-22 so capable - modern anti-aircraft weapons are very good, so the only safe way to fly over a well defended area is not to be seen. But wouldn't it be fairly t...
2024-05-02
https://www.lesswrong.com/posts/X2238QKvd7y5EW9DM/an-explanation-of-evil-in-an-organized-world
X2238QKvd7y5EW9DM
An explanation of evil in an organized world
KatjaGrace
A classic problem with Christianity is the so-called ‘problem of evil’—that friction between the hypothesis that the world’s creator is arbitrarily good and powerful, and a large fraction of actual observations of the world. Coming up with solutions to the problem of evil is a compelling endeavor if you are really root...
2024-05-02
https://www.lesswrong.com/posts/3tsZZoR6WddCuJtqk/why-i-stopped-working-on-ai-safety
3tsZZoR6WddCuJtqk
Why I stopped working on AI safety
jbkjr
Here’s a description of a future which I understand Rationalists and Effective Altruists in general would endorse as an (if not the) ideal outcome of the labors of humanity: no suffering, minimal pain/displeasure, maximal ‘happiness’ (preferably for an astronomical number of intelligent, sentient minds/beings). (Becaus...
2024-05-02
https://www.lesswrong.com/posts/eh29hsLjbzKoYdyu7/why-is-agi-asi-inevitable
eh29hsLjbzKoYdyu7
Why is AGI/ASI Inevitable?
DeathlessAmaranth
Hello! My name is Amy. This is my first LessWrong post. I'm somewhat certain it will be deleted, but I'm giving it a shot anyway, because I've seen this argument thrown around a few places and I still don't understand. I've read a few chunks of the Sequences, and the fundamentals of rationality sequences. What ma...
2024-05-02
https://www.lesswrong.com/posts/jybSEG6cGwLxRmq4Z/linkpost-silver-bulletin-for-most-people-politics-is-about
jybSEG6cGwLxRmq4Z
[Linkpost] Silver Bulletin: For most people, politics is about fitting in
Gunnar_Zarncke
Nate Silver tries to answer the question: "How do people formulate their political beliefs?" An important epistemological question that is, he says, under-discussed. He lays out his theory: I think political beliefs are primarily formulated by two major forces: Politics as self-interest. Some issues have legible, mater...
2024-05-01
https://www.lesswrong.com/posts/HCAdGAHz3YakPKJrz/aisn-34-new-military-ai-systems-plus-ai-labs-fail-to-uphold
HCAdGAHz3YakPKJrz
AISN #34: New Military AI Systems Plus, AI Labs Fail to Uphold Voluntary Commitments to UK AI Safety Institute, and New AI Policy Proposals in the US Senate
Aidan O'Gara
Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Subscribe here to receive future versions. Listen to the AI Safety Newsletter for free on Spotify. AI Labs Fail to Uphold Safety Commitments to UK AI Safety Institute In Novemb...
2024-05-02
https://www.lesswrong.com/posts/HA8Yena6WyP6Cgg5c/shane-legg-s-necessary-properties-for-every-agi-safety-plan
HA8Yena6WyP6Cgg5c
Shane Legg's necessary properties for every AGI Safety plan
jacques-thibodeau
I've been going through the FAR AI videos from the alignment workshop in December 2023. I'd like people to discuss their thoughts on Shane Legg's 'necessary properties' that every AGI safety plan needs to satisfy. The talk is only 5 minutes, give it a listen: Otherwise, here are some of the details: All AGI Safety plan...
2024-05-01
https://www.lesswrong.com/posts/4LSh73CEq9dqLwFxR/kan-kolmogorov-arnold-networks
4LSh73CEq9dqLwFxR
KAN: Kolmogorov-Arnold Networks
Gunnar_Zarncke
ADDED: This post is controversial. For details see the comments below or the post Please stop publishing ideas/insights/research about AI (which is also controversial). Abstract: Inspired by the Kolmogorov-Arnold representation theorem, we propose Kolmogorov-Arnold Networks (KANs) as promising alternatives to Multi-Lay...
2024-05-01
https://www.lesswrong.com/posts/AzoopyYgzNimBJDdY/manifund-q1-retro-learnings-from-impact-certs
AzoopyYgzNimBJDdY
Manifund Q1 Retro: Learnings from impact certs
austin-chen
null
2024-05-01
https://www.lesswrong.com/posts/Na4t6QcpQij2paJQM/acx-covid-origins-post-convinced-readers
Na4t6QcpQij2paJQM
ACX Covid Origins Post convinced readers
ErnestScribbler
ACX recently posted about the Rootclaim Covid origins debate, coming out in favor of zoonosis. Did the post change the minds of those who read it, or not? Did it change their judgment in favor of zoonosis (as was probably the goal of the post), or conversely did it make them think Lab Leak was more likely (as the "Don'...
2024-05-01
https://www.lesswrong.com/posts/AdGjrWYB7y5rMtTSr/lesswrong-community-weekend-2024-open-for-applications
AdGjrWYB7y5rMtTSr
LessWrong Community Weekend 2024, open for applications
UnplannedCauliflower
Main event page Friday 13th September - Monday 16th September 2024 is the 11th annual Less Wrong Community Weekend (LWCW) in Berlin. This is the world’s largest rationalist social gathering which brings together 250+ aspiring rationalists from across Europe and beyond for four days of intellectual exploration, socialisi...
2024-05-01
https://www.lesswrong.com/posts/vJ5GGzsnRbvTHG6LB/launching-applications-for-ai-safety-careers-course-india
vJ5GGzsnRbvTHG6LB
Launching applications for AI Safety Careers Course India 2024
Axiom_Futures
Announcing open applications for the AI Safety Careers Course India 2024! Axiom Futures has launched its flagship AI Safety Careers Course 2024 to equip emerging talent working in India with foundational knowledge in AI safety. Spread out across 8-10 weeks, the program will provide candidates with key skills and networ...
2024-05-01
https://www.lesswrong.com/posts/8rBk6fMgwfG4wHt37/axrp-episode-30-ai-security-with-jeffrey-ladish
8rBk6fMgwfG4wHt37
AXRP Episode 30 - AI Security with Jeffrey Ladish
DanielFilan
YouTube link Top labs use various forms of “safety training” on models before their release to make sure they don’t do nasty stuff - but how robust is that? How can we ensure that the weights of powerful AIs don’t get leaked or stolen? And what can AI even do these days? In this episode, I speak with Jeffrey Ladish abo...
2024-05-01
https://www.lesswrong.com/posts/hnZFsQfNPhwtypPhu/neuro-bci-wbe-for-safe-ai-workshop
hnZFsQfNPhwtypPhu
Neuro/BCI/WBE for Safe AI Workshop
allison-duettmann
If you're working on neurotechnology for safe AI, including brain-computer interfaces or whole-brain emulation approaches, consider joining this upcoming workshop: 2024 Neuro/BCI/WBE for Safe AI Workshop May 21 - 22, 9 am - 5 pm Lighthaven, Berkeley Goals Whole Brain Emulation (WBE) represents a promising technology fo...
2024-05-01
https://www.lesswrong.com/posts/vaMfew2h4BMBXKJMW/agi-cryptography-security-and-multipolar-scenarios-workshop
vaMfew2h4BMBXKJMW
AGI: Cryptography, Security & Multipolar Scenarios Workshop
allison-duettmann
If you're working at the intersection between cryptography, security and AI, consider joining this upcoming workshop: Foresight's AGI: Cryptography, Security & Multipolar Scenarios Workshop May 14-15, all-day The Institute, Salesforce Tower, San Francisco Goals To help AI development benefit humanity, Foresight Institut...
2024-05-01
https://www.lesswrong.com/posts/icE2SKMN2M2nBRsyz/the-formal-goal-is-a-pointer
icE2SKMN2M2nBRsyz
The formal goal is a pointer
pi-rogers
When I introduce people to plans like QACI, they often have objections like "How is an AI going to do all of the simulating necessary to calculate this?" or "If our technology is good enough to calculate this with any level of precision, we can probably just upload some humans." or just "That's not computable." I think...
2024-05-01
https://www.lesswrong.com/posts/sTZ7Ybtuk9pLx4oLG/open-source-ai-is-a-lie-but-it-doesn-t-have-to-be
sTZ7Ybtuk9pLx4oLG
"Open Source AI" is a lie, but it doesn't have to be
jacobhaimes
NOTE: This post was updated to include two additional models which meet the criteria for being considered Open Source AI. As advanced machine learning systems become increasingly widespread, the question of how to make them safe is also gaining attention. Within this debate, the term “open source” is frequently brought...
2024-04-30
https://www.lesswrong.com/posts/jbJ7FynonxFXeoptf/questions-for-labs
jbJ7FynonxFXeoptf
Questions for labs
Zach Stein-Perlman
Associated with AI Lab Watch, I sent questions to some labs a week ago (except I failed to reach Microsoft). I didn't really get any replies (one person replied in their personal capacity; this was very limited and they didn't answer any questions). Here are most of those questions, with slight edits since I shared the...
2024-04-30
https://www.lesswrong.com/posts/Ghwju5cHXzdbakiqQ/reality-comprehensibility-are-there-illogical-things-in
Ghwju5cHXzdbakiqQ
Reality comprehensibility: are there illogical things in reality?
DDthinker
Introduction For all thought that an actor in the cosmos (such as yourself) does, the ability to comprehend reality, and the nature of reality as comprehensible, is foundational. As entities which make sense of reality via our perception, everything seems to follow logic and reason and ...
2024-04-30
https://www.lesswrong.com/posts/x7hYCqkGXGPm3z7YP/what-is-the-easiest-funnest-way-to-build-up-a-comprehensive
x7hYCqkGXGPm3z7YP
What is the easiest/funnest way to build up a comprehensive understanding of AI and AI Safety?
Jordan Arel
I have spent around 100–200 hours listening to AI safety audiobooks, AI Safety Fundamentals course, Rob Miles YouTube, The Sequences, various bits and pieces of a bunch of YouTube AI channels and podcasts, as well as some time thinking through the basic case for X-risk. When I look at certain heavy academic stuff or tr...
2024-04-30
https://www.lesswrong.com/posts/bSpojpCLBrzToGd5A/arch-anarchy-theory-and-practice
bSpojpCLBrzToGd5A
Arch-anarchy: Theory and practice
Peter lawless
This article aims to expand on some of the ideas in the text "Arch-anarchy" (see my previous post), republished from Extropy magazine. First of all, I believe that the nation-state is doomed to fall in a few years to the decentralization of the economy, and distributed computing networks (bitcoin, smart contracts, etc.),...
2024-04-30
https://www.lesswrong.com/posts/FP4Sq763BHXT5XSot/announcing-the-2024-roots-of-progress-blog-building
FP4Sq763BHXT5XSot
Announcing the 2024 Roots of Progress Blog-Building Intensive
jasoncrawford
Today we’re opening applications for the 2024 cohort of The Roots of Progress Blog-Building Intensive, an 8-week program for aspiring progress writers to start or grow a blog. Last year, nearly 500 people applied to the inaugural program. The 19 fellows who completed the program have sung the program’s praises as “life...
2024-04-30
https://www.lesswrong.com/posts/pchAuTJhfKJS2s8it/finding-the-wisdom-to-build-safe-ai
pchAuTJhfKJS2s8it
Finding the Wisdom to Build Safe AI
gworley
We may soon build superintelligent AI. Such AI poses an existential threat to humanity, and all life on Earth, if it is not aligned with our flourishing. Aligning superintelligent AI is likely to be difficult because smarts and values are mostly orthogonal and because Goodhart effects are robust, so we can neither rely...
2024-07-04
https://www.lesswrong.com/posts/zjGh93nzTTMkHL2uY/the-intentional-stance-llms-edition
zjGh93nzTTMkHL2uY
The Intentional Stance, LLMs Edition
ea-1
In memoriam of Daniel C. Dennett. tl;dr: I sketch out what it means to apply Dennett's Intentional Stance to LLMs. I argue that the intentional vocabulary is already ubiquitous in experimentation with these systems; therefore, what is missing is the theoretical framework to justify this usage. I aim to make up for that a...
2024-04-30
https://www.lesswrong.com/posts/Lgq2DcuahKmLktDvC/applying-refusal-vector-ablation-to-a-llama-3-70b-agent
Lgq2DcuahKmLktDvC
Applying refusal-vector ablation to a Llama 3 70B agent
dalasnoin
TL;DR: I demonstrate the use of refusal vector ablation on Llama 3 70B to create a bad agent that can attempt malicious tasks such as trying to persuade and pay me to assassinate another individual. I introduce some early work on a benchmark for Safe Agents which comprises two small datasets, one benign, one bad. In gen...
2024-05-11
https://www.lesswrong.com/posts/ASmcQYbhcyu5TuXz6/llms-could-be-as-conscious-as-human-emulations-potentially
ASmcQYbhcyu5TuXz6
LLMs could be as conscious as human emulations, potentially
weightt-an
Firstly, I'm assuming that a high-resolution human brain emulation that you can run on a computer is conscious in the normal sense that we use in conversations. Like, it talks, has memories, makes new memories, has friends and hobbies and likes and dislikes and stuff. Just like a human that you could talk with only through ...
2024-04-30
https://www.lesswrong.com/posts/GqA8wyeX4uGFkomGb/an-interesting-mathematical-model-of-how-llms-work
GqA8wyeX4uGFkomGb
An interesting mathematical model of how LLMs work
bill-benzon
My colleague, Ramesh Viswanathan, sent this to me. It’s the most interesting thing I’ve seen on how transformers work. Alas, the math is beyond me, which is often the case, but there are diagrams early in the paper, and I understand them well enough (I think). It seems consistent with intuitions I developed while work...
2024-04-30
https://www.lesswrong.com/posts/YmkjnWtZGLbHRbzrP/transcoders-enable-fine-grained-interpretable-circuit
YmkjnWtZGLbHRbzrP
Transcoders enable fine-grained interpretable circuit analysis for language models
jacob-dunefsky
Summary We present a method for performing circuit analysis on language models using "transcoders," an occasionally-discussed variant of SAEs that provide an interpretable approximation to MLP sublayers' computations. Transcoders are exciting because they allow us not only to interpret the output of MLP sublayers but a...
2024-04-30
https://www.lesswrong.com/posts/N2r9EayvsWJmLBZuF/introducing-ai-lab-watch
N2r9EayvsWJmLBZuF
Introducing AI Lab Watch
Zach Stein-Perlman
I'm launching AI Lab Watch. I collected actions for frontier AI labs to improve AI safety, then evaluated some frontier labs accordingly. It's a collection of information on what labs should do and what labs are doing. It also has some adjacent resources, including a list of other safety-ish scorecard-ish stuff. (It's ...
2024-04-30
https://www.lesswrong.com/posts/JyCyupaAhuaHJ8b4z/the-market-singularity-a-new-perspective
JyCyupaAhuaHJ8b4z
The Market Singularity: A New Perspective
azsantosk
"Do you want to be rich, or do you want to be king?" — The founder's dilemma. As we approach the technological singularity, the sometimes applicable trade-off between wealth (being rich) and control (being king) may extend into the realm of AI and governance. The prevailing discussions around governance often converge o...
2024-05-30
https://www.lesswrong.com/posts/ioPnHKFyy4Cw2Gr2x/mechanistically-eliciting-latent-behaviors-in-language-1
ioPnHKFyy4Cw2Gr2x
Mechanistically Eliciting Latent Behaviors in Language Models
andrew-mack
Produced as part of the MATS Winter 2024 program, under the mentorship of Alex Turner (TurnTrout). TL;DR: I introduce a method for eliciting latent behaviors in language models by learning unsupervised perturbations of an early layer of an LLM. These perturbations are trained to maximize changes in downstream activatio...
2024-04-30
https://www.lesswrong.com/posts/bCtbuWraqYTDtuARg/towards-multimodal-interpretability-learning-sparse-2
bCtbuWraqYTDtuARg
Towards Multimodal Interpretability: Learning Sparse Interpretable Features in Vision Transformers
hugofry
Executive Summary In this post I present my results from training a Sparse Autoencoder (SAE) on a CLIP Vision Transformer (ViT) using the ImageNet-1k dataset. I have created an interactive web app, 'SAE Explorer', to allow the public to explore the visual features the SAE has learnt, found here: https://sae-explorer.st...
2024-04-29
https://www.lesswrong.com/posts/4YNdaY5evGjzxJzot/super-additivity-of-consciousness
4YNdaY5evGjzxJzot
Super additivity of consciousness
arturo-macias
In “Freedom under naturalistic dualism” I have carefully argued that consciousness is radically noumenal, that is, it is the most real (perhaps the only real) thing in the Universe, but also totally impossible to be observed by others (non-phenomenal). In my view this strongly limits our knowledge on sentience, with im...
2024-04-29
https://www.lesswrong.com/posts/H7fkGinsv8SDxgiS2/ironing-out-the-squiggles
H7fkGinsv8SDxgiS2
Ironing Out the Squiggles
Zack_M_Davis
Adversarial Examples: A Problem The apparent successes of the deep learning revolution conceal a dark underbelly. It may seem that we now know how to get computers to (say) check whether a photo is of a bird, but this façade of seemingly good performance is belied by the existence of adversarial examples—specially prep...
2024-04-29
https://www.lesswrong.com/posts/wB8KTFem8FsZ8u3Az/aisc9-has-ended-and-there-will-be-an-aisc10
wB8KTFem8FsZ8u3Az
AISC9 has ended and there will be an AISC10
Linda Linsefors
The 9th AI Safety Camp (AISC9) just ended, and as usual, it was a success! Follow this link to find project summaries, links to their outputs, recordings of the end-of-camp presentations, and contact info for all our teams in case you want to engage more. AISC9 both had the largest number of participants (159) and the sm...
2024-04-29
https://www.lesswrong.com/posts/vzGC4zh73dfcqnFgf/open-source-ai-a-regulatory-review
vzGC4zh73dfcqnFgf
Open-Source AI: A Regulatory Review
elliot
Cross-posted on the EA Forum. This article is part of a series of ~10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance, such as incident reporting, safety eva...
2024-04-29
https://www.lesswrong.com/posts/YW9vwuJJ2nbJEMaG2/can-kauffman-s-nk-boolean-networks-make-humans-swarm
YW9vwuJJ2nbJEMaG2
Can Kauffman's NK Boolean networks make humans swarm?
yori-ong
With this article, I intend to initiate a discussion with the community on a remarkable (thought) experiment and its implications. The experiment is to conceptualize Stuart Kauffman's NK Boolean networks as a digital social communication network, which introduces a thus far unrealized method for strategic information t...
2024-05-08
https://www.lesswrong.com/posts/WqLWXpR44q6zTAcku/san-francisco-acx-meetup-first-saturday-3
WqLWXpR44q6zTAcku
San Francisco ACX Meetup “First Saturday”
nate-sternberg
Date: Saturday, May 4th, 2024 Time: 1 pm – 3 pm PT Address: Yerba Buena Gardens in San Francisco, just outside the Metreon food court, coordinates 37°47'04.4"N 122°24'11.1"W Contact: 34251super@gmail.com Come join San Francisco’s First Saturday (or SFFS – easy to remember, right?) ACX meetup. Whether you're an avid rea...
2024-04-29
https://www.lesswrong.com/posts/AvPcE4vhFy25Za6vG/the-prop-room-and-stage-cognitive-architecture
AvPcE4vhFy25Za6vG
The Prop-room and Stage Cognitive Architecture
nonmali-1
This is a post on a novel cognitive architecture I have been thinking about for a while now, first as a conceptual playground to concretise some of my agent foundation ideas, and lately as an idea for a project that approaches the Alignment Problem directly by concretising a sort of AI-Seed approach for an inherently i...
2024-04-29
https://www.lesswrong.com/posts/geEfyPzfTXZNmhRs4/how-are-simulators-and-agents-related
geEfyPzfTXZNmhRs4
How are Simulators and Agents related?
nonmali-1
In this post, I will provide some speculative reasoning about Simulators and Agents being entangled in certain ways. I have thought quite a bit about LLMs and the Simulator framing for them, and I am convinced that it is a good explanatory/predictive frame for behavior of current LLM (+multimodal) systems. It provides ...
2024-04-29
https://www.lesswrong.com/posts/raYoQHcuryfXE47eE/extended-embodiment
raYoQHcuryfXE47eE
Extended Embodiment
nonmali-1
I find that an especially illustrative thought experiment regarding embodiment is to imagine a superintelligent Stone that can talk. Let’s say that this Stone can somehow perceive its environment but is, as you might expect, incapable of moving around. The Stone is not a very powerful optimiser of its environment as lo...
2024-04-29
https://www.lesswrong.com/posts/CASPvoEBhwLYvrqEK/referential-containment
CASPvoEBhwLYvrqEK
Referential Containment
nonmali-1
This is an idea I am toying around with for understanding resolutionally adjusted causal modeling - this is just a bunch of intuitions and pointing towards a somewhat clear framing of a fundamental thing. I am sure there are already plenty of accounts for how to approach this kind of task, but I like to figure stuff ou...
2024-04-29
https://www.lesswrong.com/posts/JAmvLDQGr9wL8rEqQ/d-and-d-sci-long-war-defender-of-data-mocracy-evaluation-and
JAmvLDQGr9wL8rEqQ
D&D.Sci Long War: Defender of Data-mocracy Evaluation & Ruleset
aphyer
This is a follow-up to last week's D&D.Sci scenario: if you intend to play that, and haven't done so yet, you should do so now before spoiling yourself. There is a web interactive here you can use to test your answer, and generation code available here if you're interested, or you can read on for the ruleset and scores...
2024-05-14
https://www.lesswrong.com/posts/DCbz8vtYekubPFqiM/big-endian-is-better-than-little-endian
DCbz8vtYekubPFqiM
Big-endian is better than little-endian
Menotim
This is a response to the post We Write Numbers Backward, in which lsusr argues that little-endian numerical notation is better than big-endian.[1] I believe this is wrong, and big-endian has a significant advantage not considered by lsusr. Lsusr describes reading the number "123" in little-endian, using the following ...
2024-04-29
https://www.lesswrong.com/posts/wFmzoktuvf2WqhNNP/list-your-ai-x-risk-cruxes
wFmzoktuvf2WqhNNP
List your AI X-Risk cruxes!
alenglander
[I'm posting this as a very informal community request in lieu of a more detailed writeup, because if I wait to do this in a much more careful fashion then it probably won't happen at all. If someone else wants to do a more careful version that would be great!] By crux here I mean some uncertainty you have such that yo...
2024-04-28
https://www.lesswrong.com/posts/8JvowdXv47B3vsPaa/things-i-tell-myself-to-be-more-agentic
8JvowdXv47B3vsPaa
Things I tell myself to be more agentic
DMMF
I'm sharing this here since being agentic has become somewhat of a meme in EA circles. Despite its meme status, I find the idea very helpful and empowering, though difficult to put into action. This is written as a personal self-affirmation — my hope is that by writing it up and sharing it publicly, I will internalize ...
2024-04-28
https://www.lesswrong.com/posts/ECdd97P3zz6QigzHX/review-the-case-against-reality
ECdd97P3zz6QigzHX
Review: “The Case Against Reality”
David_Gross
This is not a red stop sign: For one thing, in a ceci n'est pas une pipe way, it’s not a stop sign at all, but a digital representation of a photograph of a stop sign, made visible by a computer monitor or maybe a printer. More subtly, “red” is not a quality of the sign, but of the consciousness that perceives it.[1] E...
2024-10-29
https://www.lesswrong.com/posts/BLqepGHGR9uqyDcB3/estimating-the-number-of-players-from-game-result
BLqepGHGR9uqyDcB3
Estimating the Number of Players from Game Result Percentages
daniel-lyakovetsky
Recently I got into the daily word puzzle game Couch Potato Salad. At the end of the game, it shows the percent of players who “nailed”, ”sailed”, ”prevailed”, ”exhaled” and ”failed”. Once, I played the game shortly after midnight when the new puzzle becomes available. I nailed it (ha!), but noticed that the game resul...
2024-04-28
https://www.lesswrong.com/posts/SHQ3WzAbhYWjT6vmz/the-science-algorithm-aisc-2024-final-presentation
SHQ3WzAbhYWjT6vmz
The Science Algorithm - AISC 2024 Final Presentation
johannes-c-mayer
I gave a presentation about what I have been working on in the last 3 months. Well, a tiny part of it, as it was only a 10-minute presentation. Here is the vector planning post mentioned in the talk.
2024-04-28
https://www.lesswrong.com/posts/RzsXRbk2ETNqjhsma/ai-safety-strategies-landscape
RzsXRbk2ETNqjhsma
AI Safety Strategies Landscape
charbel-raphael-segerie
The full draft textbook is available here. This document constitutes Chapter 3. Introduction tldr: Even if we still don't know how to make AI development generally safe, many useful classes of strategies already exist, which are presented in this chapter. You can look at the table of contents and the first figure t...
2024-05-09
https://www.lesswrong.com/posts/PbXwdFnSC26Q96FG3/aspiration-based-designs-outlook-dealing-with-complexity
PbXwdFnSC26Q96FG3
[Aspiration-based designs] Outlook: dealing with complexity
Jobst Heitzig
Summary. This teaser post sketches our current ideas for dealing with more complex environments. It will ultimately be replaced by one or more longer posts describing these in more detail. Reach out if you would like to collaborate on these issues. Multi-dimensional aspirations For real-world tasks that are specified i...
2024-04-28
https://www.lesswrong.com/posts/pMaXQAT2EAPRdwn9d/playing-northboro-with-lily-and-rick
pMaXQAT2EAPRdwn9d
Playing Northboro with Lily and Rick
jkaufman
This afternoon Lily, Rick, and I ("Dandelion") played our first dance together, which was also Lily's first dance. She's sat in with Kingfisher for a set or two many times, but this was her first time being booked and playing (almost) the whole time. Lily started playing fiddle in Fall 2022, and after about a year she...
2024-04-28
https://www.lesswrong.com/posts/MvufXqXfcsy8LMHa4/release-of-un-s-draft-related-to-the-governance-of-ai-a
MvufXqXfcsy8LMHa4
Release of UN's draft related to the governance of AI (a summary of the Simon Institute's response)
Sebastian Schmidt
I just spent a couple of hours trying to understand the UN’s role in the governance of AI. The most important effort seems to be the Global Digital Compact (GDC). An initiative for member states to “outline shared principles for an open, free and secure digital future for all". The GDC has been developed by member stat...
2024-04-27
https://www.lesswrong.com/posts/Jgue2EmzQrPXshJdu/mercy-to-the-machine-thoughts-and-rights-2
Jgue2EmzQrPXshJdu
Mercy to the Machine: Thoughts & Rights
False Name, Esq.
Abstract: First (1), a suggested general method of determining, for AI operating under the human feedback reinforcement learning (HFRL) model, whether the AI is “thinking”; an elucidation of latent knowledge that is separate from a recapitulation of its training data. With independent concepts or cognitions, then, an ...
2024-04-27
https://www.lesswrong.com/posts/TNHfhG2EWyGPLeEyd/so-what-s-up-with-pufas-chemically
TNHfhG2EWyGPLeEyd
So What's Up With PUFAs Chemically?
Jemist
This is an exploratory investigation of a new-ish hypothesis; it is not intended to be a comprehensive review of the field or even a full investigation of the hypothesis. I've always been skeptical of the seed-oil theory of obesity. Perhaps this is bad rationality on my part, but I've tended to retreat to the sniff test...
2024-04-27
https://www.lesswrong.com/posts/Mq5LqZtP54BC8fXte/link-let-s-think-dot-by-dot-hidden-computation-in
Mq5LqZtP54BC8fXte
Link: Let's Think Dot by Dot: Hidden Computation in Transformer Language Models by Jacob Pfau, William Merrill & Samuel R. Bowman
Chris_Leong
One consideration that is pretty important for AI safety is understanding the extent to which a model's outputs are aligned with its chain of thought. This paper (Twitter thread linked) provides some relevant evidence. It demonstrates that it is possible for a model to achieve performance comparable to chain-of-thought...
2024-04-27
https://www.lesswrong.com/posts/tifiH2h5GtE3hogmQ/two-vernor-vinge-book-reviews
tifiH2h5GtE3hogmQ
Two Vernor Vinge Book Reviews
maxwell-tabarrok
Vernor Vinge is a legendary and recently deceased sci-fi author. I’ve just finished listening to the first two books in the Zones of Thought trilogy. Both books are entertaining and culturally influential. The audio versions are high-quality. A Deepness in the Sky is about two spacefaring human civilizations with clashi...
2024-04-27