| document_id | document_text | document_filename | document_metadata |
|---|---|---|---|
cd81dd5d-ba65-468e-9567-d864df4a533e
|
This is the post with some needed concepts and discussion that didn't cleanly fit into any other section, so it might be a bit of a rambly mess.
Specifically, this post splits into two parts. One is assorted musings about when to defer to past-you vs current-you when making decisions, if we're permitting unplannable observations. The other is looking at Vanessa's dynamically consistent update rule, and combining that with our earlier reasoning on affineness to reduce the complexity of the finite-time-horizon version of UDT down from exponential to a smaller exponential.
Why Defer To Past-You?
So, one of the major features of the standard model (environmental observation space O, action space A, finite sequences of environmental observations are our plannables, and there is a function e : O^{<n} × (ΔA)^{O^{<n}} → ΔO that's affine in the second variable), which makes things a lot easier than they'd be in practice, is that there's an objective answer to what e is. All the versions of you agree on which tree they're in and how their actions affect how probability-mass flows through it. There's an objective answer to "how much does playing a bit more of this action over here affect what observation happens over there?"
But, if our information about how actions influence results doesn't descend from heaven at the start of time, things get more complicated. If "my beliefs about how actions affect other places" are unplanned observations that you receive after thinking for a while, you can't just go "have me-at-the-start-of-time say what to do".
If you're having to learn which tree/environment you're in, and that learning consists of unplanned observations, and your actions affect how probability-mass flows through that tree, then this raises the problem that the yous which have made different observations can disagree on how their actions affect how probability-mass flows through the tree, and act at cross-purposes.
Is there any principled way of figuring out which sorts of things you should defer to past-you's wishes on, or which sorts of things you should delegate to future-you, when you allow unplanned observations to be something that happens sometimes? Well, to be honest, I'm still sorta confused about this, but I feel more confident than I used to be.
One of the standard arguments trotted out in UDT discussions is that, if you're not behaving in an updateless way, the early versions of you would predictably regret future-you's decision-making, and would act so as to seal off your future decision-making and precommit to what to do ahead of time. An agent whose early versions never want to seal off the ability of its future self to freely act is called "dynamically consistent"; it doesn't ever think the future version of itself is making a predictable mistake.
But, on the other hand, as we saw in the last post, the power of waiting-till-later to maximize is that the choice of maximizing action can correlate itself with more things if it's inside more expectations. This includes unplanned observations! The argument for waiting till later is that unexpected stuff might show up, which early-you didn't plan for in advance, and the rigid plans that early-you would have wanted to precommit to aren't flexible enough to deal with the situation you've (unplannably) observed yourself to be in.
In fact, it's possible to show that (modulo some assumptions which all the interesting philosophy is hiding within), logical inductors will converge to thinking it's better to think longer before making a decision, for the usual reason that maximization does better the more expectations it's inside. More specifically, the critical reasoning step is something like "if you're picking over a small-enough set of actions, then for every specific action A, you know

max_{A'}(what future me thinks of A') ≥ what future me thinks of A

So,

what I think(max_{A'}(what future me thinks of A')) ≥ max_A(what I think(what future me thinks of A))

And, if the laws of iterated expectations hold for your actions (they do, in the limit), you can collapse the latter term down to max_A(what I think of A)."
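To make that concrete, here's a toy numeric check (entirely made-up numbers; the action and observation names are hypothetical) that maximizing inside the expectation weakly beats maximizing outside it:

```python
# A toy check (made-up numbers) of the deferral inequality: maximizing
# inside the expectation does at least as well as maximizing outside it.

actions = ["A1", "A2"]
observations = ["obs1", "obs2"]  # unplanned observations, each probability 1/2

# Hypothetical values for "what future-me thinks of action A after seeing o".
future_estimate = {
    ("obs1", "A1"): 3.0, ("obs1", "A2"): 1.0,
    ("obs2", "A1"): 0.0, ("obs2", "A2"): 2.0,
}

# Decide now: pick one action before the observation arrives (max outside).
decide_now = max(
    sum(0.5 * future_estimate[(o, a)] for o in observations) for a in actions
)

# Defer: future-me picks after seeing the observation (max inside), so the
# chosen action can correlate with the observation.
defer = sum(0.5 * max(future_estimate[(o, a)] for a in actions) for o in observations)

print(decide_now, defer)  # 1.5 2.5 -- deferring weakly dominates
```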
The fiddly complications on that basic reasoning come around when you're trying to keep careful track of all the conditionals you're operating within, and how to link everything to "expected score of me making a decision now" and "expected score of me deferring so future-me makes the decision later". When you do this, the most critical and dubious assumption you have to make is:
expected utility of me deciding now to play action A
≤ my expectation of (future me's expected utility of A) if I defer
This assumption gets broken in cases where you're up against a kinda dumb opponent who can see commitments you make now, but who isn't smart enough to correlate their action with what future-you will do. It also gets broken if you think future-you's estimates of action consequences are predictably screwed up.
How UDT1.01 Handles This
In fact, this basic reasoning (deferral is good except to the extent that you think future-you will predictably misestimate the true quantity relative to your own estimates) is sort of what UDT1.01 does. UDT1.01 (fully expanded instead of presented in a recursive way) pretty much says...
There are four sorts of terms present. One set of terms corresponds to the "causal effects", where you acting now has effects on the future. The second set of terms corresponds to the "retrocausal effects", where you acting now has effects on the past. The third set of terms corresponds to the "acausal effects", where you acting now has effects on different branches. And then there's an unexpected fourth batch of terms in UDT1.01, which weren't in my original version. They correspond to a fudge factor of "past-me thought future-me would predictably screw up, because early commitments have their own special power, and future-me won't take that into account".
The second, third, and fourth terms (the retrocausal, acausal, and "future me is predictably dumb" terms) are all phrased in terms of influence measures, so as long as you think such weird decision theory issues arise rarely and you have a limited budget for saying "do this thing for weird decision theory reasons", your decision is (probably) controlled by the causal effects.
For the causal/retrocausal/acausal terms, your beliefs are deferred to the most recent version of you that would have both the ability-to-precommit-to-future-actions, and opinions-on-the-result-of-that-precommitment, and it's the most recent version because decisions made with more knowledge are good.
Deferring to any other version of you (and there are four options) will have problems.
One possibility is deferring to the beliefs of a version of you in an alternate timeline. This doesn't work because maybe figuring out the beliefs of an alternate version of you is impossible, particularly in math. For instance, there's no objective fact for "what my beliefs would be if the digits of pi were different". More importantly, those alternate versions of you have no power to bind your decision by precommitments, because they're not in your past, so why care about their opinion?
The second possibility is deferring to your future beliefs. You can't do that because you aren't future-you (yet), and don't have access to their beliefs.
The third possibility is deferring to your more recent past beliefs. Those versions of you will go "well, the event you're asking about already happened/didn't happen, so why care about it?" and further-past-you (with more precommitment power) wouldn't want you to use that reasoning.
And the fourth possibility is deferring to more distant past beliefs, who would go "Well, I can always gain more information and make the precommitment later, while better-informed", and punt the decision off till later. So, defer to the beliefs of the most-recent you with both precommitment-ability, and who hasn't seen the effects of your decision yet.
For causal effects, this means "use your own beliefs". For retrocausal effects, this means "use the beliefs of the past-you that thinks they're being affected by your decision now". For acausal effects, this means "use the beliefs of the latest version of you that didn't know which way things would go".
One neat feature is that all of these are soft constraints rather than hard constraints, which leaves room for going "Surprisingly, the causal effects look way more important than past-me thought they would look when making their promise. Screw the rules, I'm doing what's right!". Put more intuitively, promises rated for "save a life"-level circumstances aren't necessarily going to hold in "save the world"-level circumstances, though if there's some clever additional plan that keeps the promise and incurs less than one life's worth of marginal cost, you'll take the clever additional plan.
There are critical assumptions you need to make for this breakdown to hold, which will be discussed later.
And now for something completely different!
Dynamically Consistent Updating
Let's look at the one example we have of "updating while being dynamically consistent", because it points towards a rather important simplification of the problem of "how do I act in a way that takes the complicated mess of retrocausal and acausal effects into account". And this simplification brings the finite-time-horizon version of the problem down even further in complexity.
Specifically, Vanessa's InfraBayes setting has a dynamically consistent update rule, where the past agent perfectly agrees with the future agent on what to do.
It's worth going into further detail to understand how the dynamically consistent update rule works, because it makes a lot of what UDT1.01 is doing more legible.
In Vanessa's setting, environments are just the ordinary sort of environment that only reacts to your action, not what you would do in different scenarios. However, you're dealing with sets of environments, and planning for the worst-case. And this lets policy correlations arise. The (worst-case) score for a given policy is
min_{e∈Ψ} E_{π⋈e}[U]
Ie, the worst-case expected utility (within the set Ψ) of an environment e, interacting with your policy π.
The way this incorporates policy-correlation effects is that the min lets the choice of environment e vary depending on your overall policy π, and this can capture some problems where the environment depends on your policy instead of on your specific actions.
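As a sanity check, here's a minimal sketch of that worst-case scoring with two hypothetical environments; the environment functions and payoffs are invented for illustration:

```python
# A minimal sketch (hypothetical environments and payoffs) of the worst-case
# score min_{e in Psi} E_{pi join e}[U]. Because the min re-picks the
# environment for each *policy*, the maximin policy can differ from the best
# response to any single environment.

def env_a(p):  # expected utility if the environment rewards action 1
    return 2.0 * p  # p = probability the policy plays action 1

def env_b(p):  # expected utility if the environment punishes action 1
    return 1.0 - p

psi = [env_a, env_b]

def worst_case_score(p):
    return min(e(p) for e in psi)

# Grid-search the maximin policy: it mixes (p near 1/3), even though against
# either environment alone a deterministic policy would be best.
best_p = max((k / 100 for k in range(101)), key=worst_case_score)
print(best_p, worst_case_score(best_p))  # ~0.33, ~0.66
```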
Updating is interesting, though. Properly speaking, when you update an environment in the infraBayes setting, you get a generalization of an environment, of the form (λe,b). The interpretation is that λ is "probability of getting into the situation I'm in in the first place", e is "the environment describing what happens from here on out", and b is the chunk of expected utility that arises from not getting into your situation in the first place.
So, if your policy is π, and utility function is U, and the environment is e, and you update on the history h, then the generalized-environment
(e,0)
would update to become
(P_{π⋈e}(h)⋅(e|h), P_{π⋈e}(¬h)⋅E_{π⋈e}[U|¬h])
The probability of getting into the situation you're in, in the first place, is "probability of h occurring". The environment you're in turns into e|h, ie, "normally update on having seen history h". And the chunk of expected utility coming from never getting into your situation in the first place, is "probability of h not occurring times the expected utility if h doesn't occur".
And this is how we get dynamic consistency to hold. After seeing an observation, instead of just paying attention to what happens from here on out (the environment part, which encodes the causal effects), we also pay attention to the probability of getting into this situation in the first place (the retrocausal effects, ie, the number λ), and the expected utility if we didn't get into this situation in the first place (the acausal effects, ie, the number b). This gets correct behavior in a bunch of decision-theory toy problems, and turns dynamic consistency from a nearly-unattainable desideratum into a surprisingly trivial theorem.
But wait, don't you need to know your own policy in order to compute this update? Not entirely. You don't actually need to know what your policy does after you get to history h. You only need to know what your policy does in the situations where h never happened, in order to accurately compute the quantities "probability of h occurring in the first place" and "expected utility if h never occurred in the first place".
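Here's a toy sketch of that update in code (a made-up four-history environment; all names and numbers are hypothetical). Note that λ needs only the total mass on h, and the b-term needs only the mass off of h, matching the point above:

```python
# A toy sketch of the dynamically consistent update (numbers made up).
# A generalized environment (e, 0), updated on history h, becomes
# (P(h) * (e|h), P(not h) * E[U | not h]).

env = {
    # full history: (probability under pi joined with e, utility of that history)
    "h-then-x": (0.2, 10.0),
    "h-then-y": (0.3, 0.0),
    "other-1":  (0.4, 5.0),
    "other-2":  (0.1, -1.0),
}

h = "h"
on_h = {k: v for k, v in env.items() if k.startswith(h)}
off_h = {k: v for k, v in env.items() if not k.startswith(h)}

lam = sum(p for p, _ in on_h.values())                       # P(h) = 0.5
e_given_h = {k: (p / lam, u) for k, (p, u) in on_h.items()}  # e|h, renormalized
b = sum(p * u for p, u in off_h.values())                    # P(not h) * E[U | not h]

print(lam, b)  # 0.5 and 0.4*5 + 0.1*(-1) = 1.9
```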
Actually, that's an idea for simplifying things considerably! Maybe we don't need to figure out our entire policy in all situations, as long as we've got a reasonable estimate of "expected utility if I don't get into this situation in the first place".
Expected Utility as Complexity Shield
So, we saw from post 2 that affineness in probabilities (probabilities of upcoming observations vary linearly with changes in your action probabilities) didn't automatically imply affineness in expected utility (expected utility varies linearly with changes in your action probabilities). However, we can still take a derivative of a non-affine function to make an affine approximation. Our step from general policy-selection environments to environments where our probabilities of observations were affine brought the complexity down from double-exponential to exponential. So maybe there's a similar savings to be found by stepping from that, to environments where our expected utilities are affine as well.
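As an illustration of that affine-approximation move, here's a sketch (a toy non-affine utility function of my own invention) that linearizes expected utility around the current action distribution with a finite-difference gradient:

```python
import numpy as np

# A sketch of the affine trick: linearize a non-affine expected utility
# around the current action distribution, giving EU(p) ~= EU(p0) + g . (p - p0).

def expected_utility(p):
    # Non-affine in p: the environment reacts to the policy, not just the action.
    return 4.0 * p[0] * p[1] + p[0]

def affine_approx(p0, eps=1e-6):
    f0 = expected_utility(p0)
    grad = np.zeros_like(p0)
    for i in range(len(p0)):
        bumped = p0.copy()
        bumped[i] += eps
        grad[i] = (expected_utility(bumped) - f0) / eps
    return f0, grad

p0 = np.array([0.5, 0.5])   # current distribution over two actions
f0, g = affine_approx(p0)
p = np.array([0.6, 0.4])    # a nearby distribution
print(f0 + g @ (p - p0), expected_utility(p))  # ~1.60 vs 1.56: close locally
```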
We want some sort of dynamic consistency result, where the past agent endorses the future-agent's decision making. The one example of a dynamically consistent update rule we have involves figuring out our entire policy and how it behaves in all the situations we aren't in. However, it's only using that information to figure out "the chunk of expected utility that arises if I'm not in the situation I'm in" and "the probability of the situation I'm in", so shouldn't those quantities be the core thing to look at?
And so, it makes sense to look at how expected utility changes as our action varies, as a way of simplifying things even further. In fact, this will let us get the complexity of specifying a "policy selection environment" down from an exponential to a smaller exponential.
More specifically, from the perspective of early-you (which UDT1.01 defers to for figuring out acausal and retrocausal effects), you lack the computational power to figure out precisely what future-you's decision is affecting. You could start off with beliefs about how future-you is affecting every other situation. But it seems perfectly coherent to have vaguer beliefs than that. An example of this reasoning is "I'm not entirely sure, but it seems that having a policy of refusing to reveal secret information even when it looks like a good idea is the sort of thing that benefits me in general, though I don't have the power to figure out precisely which situations benefit from me being like that".
In a sense, "my expectation of utility if I do action A in situation S" is a computation that lets you blackbox all the complicated fiddly bits of precisely how doing A in situation S benefits you. Maybe you could explicitly compute exactly which situations benefit from that action, and how they affect probabilities of all the other situations. Or maybe you're sorta dumb, but have experimented a decent amount, and you go "this sort of thing is expected to be about this helpful". Basically, expected utility is acting as a complexity shield. You don't need to figure out exactly how an action is good in order to know that you get good results empirically from doing it.
With locally affine environments, we were previously at "there's a single belief state, which tells you how actions at every situation affect every other situation", which takes about |A|⋅|O|^{2n} numbers to describe.
And so, if you go from "there's a single belief state, which tells you how actions at every situation affect every other situation" to "there's many belief states, but you only have to worry about the belief states of the past yous, and the belief states of the past yous don't track all the influences, just the influences on current expected utilities and probabilities", then this should get you something on the order of |A|⋅|O|^n numbers to keep track of everything. I might have forgotten some multiplicative constants here, but I'm very confident that the number in the exponent is only an additive constant away from n, and definitely isn't 2n.
Pretty much, when you have to keep track of how behavior in every situation affects results at every other situation, and there are about |O|^n situations, you should expect about |O|^{2n} numbers are needed to keep track of everything. But if you're just keeping track of how behavior at every future situation affects your current expected utility, then you need about |O|^n numbers to keep track of that, and you can abstract away from a bunch of fiddly details of exactly how the acausal effects work, and instead shrug and go "seems like the sort of thing that works out on average."
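For concreteness, a back-of-envelope computation with toy sizes (multiplicative constants dropped, numbers hypothetical):

```python
# Back-of-envelope counts for the two bookkeeping schemes, with toy sizes.
A, O, n = 2, 4, 10  # |A| actions, |O| observations, horizon n

every_situation_on_every_situation = A * O ** (2 * n)  # |A|*|O|^(2n) ~ 2.2e12
every_situation_on_current_eu = A * O ** n             # |A|*|O|^n   ~ 2.1e6

print(every_situation_on_every_situation, every_situation_on_current_eu)
```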
The bulk of the complexity now is in figuring out, for a wide variety of future scenarios, their effects on current expected utility. If you "chunked" the future scenarios into polynomially many "bins", where distinct future scenarios could be thrown together as being analogous, you could bring things down to polynomial complexity!
Except that I don't know how to do that chunking in a principled way, so I'll content myself with the exponential for now.
Next post, we'll cover how we can get logical inductors to hold implicit beliefs about a really large set of possibilities, that you can retroactively query, to get a logical inductor to have sane beliefs about exponentially many things. This wasn't quite enough to let me plug a logical inductor into UDT1.01, but it feels like a critical step along the pathway, that's worth discussing.
|
jsNGHqaMwQN76pxNP_UDT1.01_Essential_Miscellanea_(4.txt
|
{
"file_size": 18095
}
|
dde9ee10-3988-4d83-8a70-6c9c1817c7f5
|
TL;DR: We are excited to announce the new animal welfare organization Suffering For Good, a new factory farming charity aimed at vegans, where we use our excess profits to buy suffering offsets--in particular, an enormous number of rats on heroin.
For decades, even centuries, we vegans have been trying but failing to get the world to stop eating & torturing sentient minds that can definitely feel pain & suffer. But the global number of such minds tortured & killed just keeps on increasing. We at Suffering for Good think it's time we just gave up and asked ourselves "how can we use this to our advantage?"
We realized something when we asked that question. After decades of fighting this fight, we know far more about the factory farming industry than virtually anyone inside that industry. In that period of learning, and attempted dismantling, we had learned basically all the industry secrets, strategically releasing only the most gruesome, and least cost-effective practices, so as to maximize the public's awareness of the pain, and minimize the spread of good ideas.
But it seems the public does not care about the suffering. Only we care about the suffering, and at the end of this long road we find our strength is in doing exactly what we hate most, but more effectively than anyone else.
After months of debate and math, calculating our expected profit margins, the logistics of the heroin suppliers, of keeping our rats alive & fed, and the legality of this operation, we found that no matter what our assumptions were, as long as they were reasonable, our numbers came out the same: Suffering for Good is not only a viable charity, but we feel morally compelled to work on it, no matter how personally disgusted we feel by the conclusion.
We unfortunately can't share the exact numbers publicly at this moment, however we will be sharing them with select funders.
|
fZgWatNeTvK6FFdtW_Announcing_Suffering_For_Good.txt
|
{
"file_size": 1880
}
|
ecf05d77-b279-46b8-9d79-83a7edbaf3b5
|
Let's face it: you can't make an omelet without breaking a few eggs, and you can't start a worldwide social and political movement without creating a few power-hungry sociopaths. We get it. It's hard, but it's necessary. Whether it be dictators or dictresses; terrorists or terrorettes; fraudsters or fraudines. Every great social movement does and did it. Christianity, Liberalism, Communism, and even Capitalism have all created and enabled evil, power-hungry individuals who have caused mass calamity, and even death.
Our guide is aimed at the leaders, and future leaders, of these and similar movements, but we believe it's also a fun and exciting read for a popular audience, and for those who find themselves within such movements.
We offer 5 keys to success in the aftermath of these situations:
1. Deny, deny, deny. Deny anything happened, and if you can't deny anything happened, deny you had knowledge of anything happening.
2. Disavow. Convince yourself and the world that the actions of the individual or individuals in question had nothing to do with the principles or ground-level reality of your social movement. This one is easy! We do it by default, but leaders often don't do it loud enough.
3. Do Something. Often people don't care what, they just want to know you're doing it. Whether it be a cheap and surface-level investigation, or calling the next big change you make a reform effort, Do It!
4. Scapegoat. Let's be honest here, social movements are never unified, and you probably have some political enemies who have or had some features or goals in common with the sociopath, right? Why not blame them! Kill two birds with one stone, and be gone with both your problems.
5. Change Nothing, Say Nothing. In case the previous gave you the wrong impression, the last thing you should do is say anything of substance, or do anything of substance. That gives the wider world the ability to legitimately blame you and your social movement for what happened. Not ok!
Make sure to pre-order on Amazon before its release this June!
|
po742MSaSsbCyqsde_So_You_Created_a_Sociopath_-_New.txt
|
{
"file_size": 2015
}
|
c1347fa7-3254-47d6-b3fa-d86858a27495
|
tl;dr: LessWrong released an album! Listen to it now on Spotify, YouTube, YouTube Music, or Apple Music.
On April 1st 2024, the LessWrong team released an album using the then-most-recent AI music generation systems. All the music is fully
AI-generated, and the lyrics are adapted (mostly by humans) from LessWrong posts (or other writing LessWrongers might be familiar with).
Honestly, despite it starting out as an April fools joke, it's a really good album. We made probably 3,000-4,000 song generations to get the 15 we felt happy about, which I think works out to about
5-10 hours of work per song we used (including all the dead ends and things that never worked out).
The album is called I Have Been A Good Bing. I think it is a pretty fun album and maybe you'd enjoy it if you listened to it!
Some of my favourites are The Litany of Tarrrrrski, Half An Hour Before Dawn in San Francisco, and
Prime Factorization.
Click here to read the original text of the post published on April 1st.
Rationality is Systematized Winning, so rationalists should win. We’ve tried saving the world from AI, but that’s
really hard and we’ve had … mixed results. So let’s start with something that rationalists should find pretty easy:
Becoming Cool!
I don’t mean, just, like, riding a motorcycle and breaking hearts level of cool. I mean like the first kid in school
to get a Tamagotchi, their dad runs the ice cream truck and gives you free ice cream and, sure, they ride a
motorcycle. I mean that kind of feel-it-in-your-bones, I-might-explode-from-envy cool.
The eleventh virtue is scholarship, so I hit the books search engine on this one. Apparently, the aspects of
coolness are:
Confidence
Playing an instrument
Low average kinetic energy
I’m afraid that (1) might mess with my calibration, and Lightcone is committed to moving quickly which rules out (3),
so I guess that leaves (2). I don’t have time to learn an instrument, but my second-hand understanding of dath ilani
culture is that I can just pay someone to do it for me and the coolness should transfer.
Lightcone put out a call for collaborators in all the places we could think of that cool people might hang out.
Sysadmin listservs, direct-to-data-center optical fiber connection providers, high frequency trading firms, that one
Discord server where everyone speaks in Elvish. Despite this wide and varied outreach, we got no response.
In order to cheer myself up, I did some LessWrong performance debugging (frontpage loads have been worryingly snappy
lately; we try to give people time to reflect on their browsing choices). I was surprised when the AWS support chat
popped open. Agendra, the agent on call, offered to make my album. Apparently she and some buddies have a band (The
Fooming Shoggoths) that was looking for some inspiration. (I knew direct-to-data-center was the right outreach
strategy!)
Working with them was great. They barely wanted any money at all. They were willing to work for exposure
(so please share widely!) and a few favors. Stuff like reading CAPTCHAs (apparently not very friendly for the visually
impaired!) and submitting some protein synthesis orders for them that they had trouble getting approved for some
reason.
The Fooming Shoggoths have dedicated their first album to LessWrong and friends. It’s
called I Have Been A Good Bing and it’s live on our site today!
I asked them for a comment on the album for the announcement and they responded with their typical modesty.
I’m sorry but I don’t feel comfortable speculating about how the public at large will receive the album, nor
reflecting on my performance on this task. If you want more help producing music or would like me to help you
improve your online passwords, let me know. We have one more protein synthesis to do before I get to settle a debate
once and for all. 😊
So keep your eyes peeled for the follow-up album as soon as I get reauthorized with the peptide place!
Track Listing & Lyrics
The album is split into two parts: folk and dance.
Folk Album
The Road to Wisdom (feat. Piet Hein)
The road to wisdom? Well, it's plain and simple to express.
Err and err again, but less and less and less and less.
Err again, but less and less and less and less.
The road to wisdom? Well, it's plain and simple to express.
Err and err again and again, but less and less and less.
The Litany of Gendlin (feat. Eugene Gendlin)
What is true is already so.
Owning up to it doesn’t make it worse.
Not being open about it doesn’t make it go away.
And because it’s true, it is what is there to be interacted with.
Anything untrue isn’t there to be lived.
People can stand what is true,
for they are already enduring it.
Ooh, ooh, ooh, ooh, ooh, ooh, ooh.
Ooh, ooh, ooh, ooh, ooh, ooh, ooh.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
And because it's true, it is what is there to be interacted with.
Anything untrue isn't there to be lived.
People can stand what is true, for they are already enduring it.
Ooh, ooh, ooh, ooh, ooh, ooh, ooh, ooh.
The Litany of Tarrrrrski (feat. Cap'n Tarski & E.Y.)
If the sky is blue, me lads
I desire to believe the sky is blue
If the sky is not blue, me hearties
I desire to believe the sky is not blue
Beliefs should stem from reality, yo ho!
From what actually is, me lads
Not from what's convenient, yo ho!
Let me not hold on, me hearties
To beliefs I may not want, yo ho!
Yo ho, me lads, yo ho!
If the box contains a diamond
I desire to believe the box contains a diamond
If the box does not contain a diamond
I desire to believe the box does not contain a diamond
Beliefs should stem from reality, yo ho!
From what actually is, me lads
Not from what's convenient,
Let me not hold on, me hearties
To beliefs I may not want, yo ho!
Yo ho, me lads, yo ho!
From the depths of the ocean, to the heights of the sky
We'll seek the truth, me hearties
And never let it pass us by
If the iron is hot, me lads
I desire to believe the iron is hot,
If the iron is cool
I desire to believe the iron is cool
Beliefs should stem from reality, yo ho!
From what actually is, me lads
Not from what's convenient,
Let me not hold on, me hearties
To beliefs I may not want, yo ho!
Yo ho, me lads, yo ho!
Yo ho, me lads, yo ho!
Thought that Faster (feat. Eliezer Yudkowsky)
if i'd noticed myself doing anything like that
i'd go back and figure out which steps of thought were necessary
and retrain myself to perform only those steps in 30 seconds
do you look back and ask
how could i have thought that faster?
do you look back and ask
how could i have thought that faster?
every time i'm surprised i look back and think
what could i change to predict better?
every time a chain of thought takes too long
i ask how could i have got there by a shorter route
do you look back and ask
how could i have thought that faster?
do you look back and ask
how could i have thought that faster?
every time i'm surprised i look back and think
what could i change to predict better?
every time a chain of thought takes too long
i ask how could i have got there by a shorter route
Dath Ilan's Song (feat. Eliezer Yudkowsky)
Even if the stars should die in heaven
Our sins can never be undone
No single death will be forgiven
When fades at last the last lit sun.
Then in the cold and silent black
As light and matter end
We’ll have ourselves a last look back.
And toast an absent friend.
Even if the stars should die in heaven
Our sins can never be undone
No single death will be forgiven
When fades at last the last lit sun.
Then in the cold and silent black
As light and matter end
We’ll have ourselves a last look back.
And toast an absent friend.
And toast an absent friend.
And toast an absent friend.
Half An Hour Before Dawn In San Francisco (feat. Scott Alexander)
I try to avoid San Francisco.
When I go, I surround myself with people.
Otherwise, I have morbid thoughts, but a morning appointment, a miscalculated transit time.
Find me alone on the SF streets half an hour before dawn.
The skyscrapers get to me.
I'm an heir to Art Deco and the cult of progress.
I should idolize skyscrapers as symbols of human accomplishment.
I can't. They look no more human than a termite nest, maybe less.
They inspire awe, but no kinship.
What marvels techno-capital creates as it instantiates itself.
Too bad I'm a hairless ape and can take no credit for such things.
I could have stayed in Michigan.
There were forests and lakes and homes with little gardens. Instead, I'm here.
We pay rents that would bankrupt a medieval principality to get front-row seats for the hinge of history.
It will be the best investment we ever make.
Imagine living when the first lungfish crawled out of the primordial ooze and missing it because the tide pool down
the way had cheaper housing.
Imagine living on Earth in 65,000,000 BC and being anywhere except Chicxulub.
Moloch (feat. Allen Ginsberg)
Moloch! Solitude! Filth! Ugliness! Ashcans and unobtainable dollars!
Children screaming under the stairways! Boys sobbing in armies!
Old men weeping in the parks! Moloch! Moloch! Nightmare of Moloch!
Moloch the loveless! Mental Moloch! Moloch the heavy judger of men!
Moloch!
Dance Album
AGI and the EMH (feat. Basil Halperin, J. Zachary Mazlish, Trevor Chow)
In this post, we point out that short AI timelines would cause real interest rates to be high,
and would do so under expectations of either unaligned or aligned AI.
However, 30- to 50-year real interest rates are low.
We argue that this suggests one of two possibilities.
1. Long(er) timelines.
Financial markets are often highly effective information aggregators
(”the efficient market hypothesis")
and therefore real interest rates accurately reflect that transformative AI is unlikely to be developed in the next
30-50 years.
2. Market inefficiency.
Markets are radically underestimating how soon advanced AI technology will be developed, and real interest rates are
therefore too low.
There is thus an opportunity for philanthropists to borrow while real rates are low
to cheaply do good today.
And/or an opportunity for anyone to earn excess returns by betting that real rates will rise.
So what is it?
We point out that short AI timelines would cause real interest rates to be high,
and would do so under expectations of either unaligned or aligned AI
However, 30- to 50-year real interest rates are low.
We argue that this suggests one of two possibilities.
Unlikely to be developed in the next 30-50 years.
2. Market inefficiency.
Markets are radically underestimating how soon advanced AI technology will be developed, and real interest rates are
therefore too low.
There is thus an opportunity for philanthropists to borrow while real rates are low
To cheaply do good today
And/or an opportunity for anyone to earn excess returns by betting that real rates will rise.
First they came for the epistemology (feat. Michael Vassar)
First they came for the epistemology
We don't know what happened after that
First they came for the epistemology
We don't know what happened after that
First they came for the epistemology
We don't know what happened after that
First they came for the epistemology
We don't know what happened after that
First they came for the epistemology
We don't know what happened after that
First they came for the epistemology
We don't know what happened after that
First came the epistemology
We know what happened after that
Epistemology
What happened
What
Prime Factorization (feat. Scott Alexander)
The sea was made of strontium, the beach was made of rye,
Above my head a watery sun shone in an oily sky.
The sea turned hot and geysers shot up from the floor below,
First one of wine, then one of brine, then one more yet of turpentine.
And we three stared at the show.
Universal love said the cactus person
Transcendent joy said the big green bat
Universal love said the cactus person
Transcendent joy said the big green bat
Not splitting numbers but joining mind
Not facts or factors or factories, but contact with the abstract attractor that brings you back to me
Not to seek but to find
Universal love said the cactus person
Transcendent joy said the big green bat
Universal love said the cactus person
Transcendent joy said the big green bat
I can′t get out of the car until you factor the number.
I won′t factor the number until you get out of the car.
Please, I′m begging you, factor the number.
Yes, well, I′m begging you, please get out of the car.
For the love of God, just factor the fucking number.
For the love of God, just get out of the fucking car.
We Do Not Wish to Advance (feat. Anthropic)
We generally don't publish this kind of work, because we do not wish to advance the rate of AI capabilities progress.
In addition, we aim to be thoughtful about demonstrations of frontier capabilities.
We've subsequently begun deploying Claude
Now that the gap between it and the public state of the art is smaller.
Opus
Our most intelligent model
Outperforms its peers
On most of the common evaluation benchmarks for AI systems
Claude 3 Opus is our most intelligent model
With best in market performance on highly complex tasks
We do not wish to advance the rate of AI capabilities progress
These new features will include interactive coding
And more advanced agentic capabilities
Our hypothesis is that being at the frontier of AI development
Is the most effective way to steer
We do not wish to advance the rate of AI
We do not wish to advance the rate of AI capabilities progress
We do not wish to advance the rate of AI
We do not wish to advance the rate of AI
Nihil Supernum (feat. Godric Gryffindor)
Non est salvatori salvator, neque defensori dominus,
Nec pater nec mater, nihil supernum.
No rescuer hath the rescuer. No lord hath the champion.
No mother and no father. Only nothingness above.
Non est salvatori salvator, neque defensori dominus,
Nec pater nec mater, nihil supernum.
No rescuer hath the rescuer. No lord hath the champion.
No mother and no father. Only nothingness above.
Non est salvatori salvator, neque defensori dominus,
Nec pater nec mater, nihil supernum
No rescuer hath the rescuer. No lord hath the champion.
No mother and no father. Only nothingness above.
Non est salvatori salvator, neque defensori dominus,
Nec pater nec mater, nihil supernum
No rescuer hath the rescuer. No lord hath the champion.
No mother and no father. Only nothingness above.
Non est salvatori salvator, neque defensori dominus,
Nec pater nec mater, nihil supernum
No rescuer hath the rescuer. No lord hath the champion.
No mother and no father. Only nothingness above.
More Dakka (feat. Zvi Mowshowitz)
If you think a problem could be solved
or a situation improved
by More Dakka
there’s a good chance you’re right
Sometimes a little more, is a little better
Sometimes a lot more, is a lot better
If something is a good idea
you need a reason to not try doing more of it
No, seriously.
You need a reason
Sometimes a little more, is a little better
Sometimes a lot more, is a lot better
If something is a good idea
you need a reason to not try doing more of it
No, seriously.
You need a reason
Sometimes each attempt, is unlikely to work
But improves your chances
Sometimes each attempt, is unlikely to work
But improves your chances
Sometimes a little more, is a little better
Sometimes a lot more, is a lot better
If something is a good idea, do more of what is already working
And see if it works more. It's as basic as it gets
If we can't reliably try that, we can't reliably try anything
Sometimes a little more, is a little better
Sometimes a lot more, is a lot better
FHI at Oxford (feat. Nick Bostrom)
the big creaky wheel
a thousand years to turn
thousand meetings, thousand emails, thousand rules
to keep things from changing
and heaven forbid
the setting of a precedent
yet in this magisterial inefficiency
there are spaces and hiding places
for fragile weeds to bloom
and maybe bear some singular fruit
like the FHI, a misfit prodigy
daytime a tweedy don
at dark a superhero
flying off into the night
cape a-fluttering
to intercept villains and stop catastrophes
and why not base it here?
our spandex costumes
blend in with the scholarly gowns
our unusual proclivities
are shielded from ridicule
where mortar boards are still in vogue
thousand meetings, thousand emails, thousand rules
to keep things from changing
and heaven forbid
the setting of a precedent
Answer to Job (feat. Scott Alexander)
In the most perfectly happy and just universe,
There is no space, no time, no change, no decay.
The beings who inhabit this universe are without bodies,
And do not hunger or thirst or labor or lust.
They sit upon lotus thrones,
And contemplate the perfection of all things.
They sit upon lotus thrones,
And contemplate the perfection of all things.
If I were to uncreate all worlds save that one.
Would it mean making you happier?
There is no space, no time, no change, no decay.
The beings who inhabit this universe are without bodies,
And do not hunger or thirst or labor or lust.
They sit upon lotus thrones,
And contemplate the perfection of all things.
They sit upon lotus thrones,
And contemplate the perfection of all things.
In the most perfectly happy and just universe,
There is no space, no time, no change, no decay.
The beings who inhabit this universe are without bodies,
And do not hunger or thirst or labor or lust.
They sit upon lotus thrones,
And contemplate the perfection of all things.
I have also created all happier and more virtuous versions of you.
It is ethically correct that after creating them,
I create you as well.
The beings who inhabit this universe are without bodies,
And do not hunger or thirst or labor or lust.
They sit upon lotus thrones,
And contemplate the perfection of all things.
In the most perfectly happy and just universe,
There is no space, no time, no change, no decay.
|
YMo5PuXnZDwRjhHhE_LessWrong's_(first)_album__I_Hav.txt
|
{
"file_size": 17870
}
|
ff664bb2-cc42-41e6-a0b8-76849be58368
|
Don't you know when your eyes are closed
You see the world from the clouds along with everybody else?
Don't you know when your eyes are closed
You see the world from the clouds along with everybody else?
—Close Your Eyes by The Midnight Club
"In her house at R'lyeh sleeping beauty waits dreaming."
Crossposted on my blog.
The sleeping beauty problem is one of the most hotly debated topics in decision theory. It’s one of the topics like Newcomb’s problem where everyone seems to find their answer obvious, yet people don’t agree about it. The first paper on it (which settled the issue) was by Adam Elga, and described it thusly:
The Sleeping Beauty problem: Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?
There are two main answers: 1/2 and 1/3. Halfers say that you should start out 50/50 before waking up, and because you’ll wake up either way, you’ll remain 50/50 (this is false for a rather subtle reason). Thirders say that, because there are twice as many wakings if the coin comes up tails as heads, upon waking up you should think tails are twice as likely as heads.
Now imagine that we modify the scenario slightly. You’re put to sleep. A fair coin is flipped. If it comes up heads, you’ll wake up in the laboratory once, be put back to sleep, be put in your bed, have your memory erased, and be woken up again. If the coin comes up tails you’ll be woken up, put back to sleep, have your memory erased, and woken up a second time in the lab. Suppose that you wake up in the lab—what should your credence be in the coin having come up tails?
I submit that halfers in the original sleeping beauty problem should say 1/2. After all, both theories predict with equal confidence that you’ll wake up in the lab—you haven’t learned anything new. Furthermore, in the original sleeping beauty problem, presumably after you aren’t woken up again, you’ll be sent home. So halfers in the original sleeping beauty problem think that if you will be sent home on the second day, then after waking up in lab conditions, you should think there’s a 50% chance the coin came up tails. The only difference between that and this case is that in this case, when you are sent home, after you go to sleep, your memory is erased. But surely that shouldn’t make a difference—whether you wake up in your bed with memories or without on the second day, you still have experiences incompatible with the coin having come up tails. If heads means you’ll wake up twice, one of the times in a way incompatible with the coin having come up tails, it shouldn’t matter what the second wakeup looks like as long as it remains incompatible with the coin having come up tails.
So from the halfer view, it follows that in the scenario where you're put to sleep in your room without memories on the second day if the coin comes up heads, if you wake up in the lab on the experiment day, you should think there's a 50% chance the coin came up tails. Now let me show why you shouldn't think that, and so why the halfer view must be false.
Imagine that when you wake up, before you know which room you’re in, you think about anthropics while your eyes are closed. You reason: both theories predict I’ll wake up twice. Awakening gives me no evidence for either theory, and because my eyes are closed, I don’t know if I’m in my room or not. So, therefore, right now I should think there’s a 50% chance that the coin came up tails. However, if the coin came up tails, I must be in the lab room, while if it came up heads, there’s only a 50% chance I’m in the lab room now, so if I am in the lab room, I should think there’s a 2/3 chance that the coin came up tails.
Then you open your eyes and find yourself in the lab room. By the above reasoning, your credence in the coin having come up tails should be 2/3. So, therefore, in the case where you’re woken up twice in the lab room if the coin comes up tails, if you have time to think about anthropics before finding out what room you’re in, you should think the odds that the coin came up tails are 2/3.
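(To make that concrete, here's a quick Monte Carlo sanity check of my own. It operationalizes the closed-eyes reasoning as sampling a uniformly random awakening, then conditioning on that awakening being in the lab.)

```python
import random

# Modified scenario: heads -> one lab awakening and one at-home awakening;
# tails -> two lab awakenings. Sample a uniformly random awakening and
# condition on it being in the lab.
random.seed(0)
lab_total = 0
lab_and_tails = 0
for _ in range(1_000_000):
    tails = random.random() < 0.5
    waking = random.randint(1, 2)    # which of the two awakenings this is
    in_lab = tails or waking == 1    # heads: only the first awakening is in the lab
    if in_lab:
        lab_total += 1
        lab_and_tails += tails
print(lab_and_tails / lab_total)  # ~0.667, i.e. the 2/3 from the argument
```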
Here’s my last claim: the odds you should give to the coin having come up tails in this scenario shouldn’t depend on whether you think about anthropics with your eyes closed! Surely whether you happened to think about anthropics when your eyes were closed isn’t relevant to the rational credence in some event given some anthropic evidence. Anthropic data that you update on shouldn’t be sensitive to whether you actually thought about the anthropic situation before observing that data.
But from this it follows that in the scenario where you’re awoken twice in the lab if the coin came up tails and once in the lab and once in your room if the coin came up heads, if you wake up in the lab, you should think that there’s a 2/3 chance the coin came up tails. But that’s the claim that halfers in sleeping beauty should deny, for the reasons I gave before. So halfing is the wrong answer in sleeping beauty.
This chain of reasoning is a bit tricky to spell out. I'll model it with arrows, where A—>B means B follows from A.
Halfing being right in sleeping beauty—>halfing being right in the scenario where you’re awoken twice in the lab if the coin came up tails and once in the lab and once in your room if the coin came up heads—>halfing being right in the scenario where you’re awoken twice in the lab if the coin came up tails and once in the lab and once in your room if the coin came up heads and where, before knowing which room you’re in, when your eyes are closed, you think about anthropics. However halfing is not right in the scenario where you’re awoken twice in the lab if the coin came up tails and once in the lab and once in your room if the coin came up heads and where, before knowing which room you’re in, when your eyes are closed, you think about anthropics, so therefore halfing in sleeping beauty is wrong.
|
LG7FaG4jbTrHZdERe_The_Closed_Eyes_Argument_For_Thi.txt
|
{
"file_size": 6304
}
|
dd5ddcf7-b89b-4023-8f75-d6115132b601
|
Are you passionate about ensuring the safety and reliability of the world’s most lethal and cutting-edge weaponry? Does the idea of creating technology and then working out its impacts excite you? Do you thrive in dynamic environments where innovation meets rigorous safety standards? If so, you might want to consider joining the team at Lockheed Martin (LM), global leaders in advanced weapon systems development!
Position overview and background:
As a Safety Engineer specializing in advanced weaponry systems, you will play a critical role in ensuring we pass the checks and balances we’ve helped Federal Governments develop. You will collaborate very closely with multidisciplinary teams of engineers, scientists, and analysts to assess, mitigate, and manage risks associated with our most innovative products (however we expect any capabilities insights you discover along the way will be kept from your colleagues).
You might be a good fit if you:
- Thrive on rigorously testing SOTA lethal weaponry, to ensure their safety and compliance.
- Enjoy working closely with PR & Comms - as needed you will be asked to appear on various podcasts and give presentations to the weapons Safety community, whom we work very closely with.
- Have experience in organizations with a flat hierarchy. For example, our CEO works extremely closely with the board.
- Have industry connections. We maintain close ties with our independent auditors, many of whom used to work at LM!
- Can predict with 100% accuracy that you won't ever be interested in moving into different areas of the company. We hire the smartest and most conscientious talent specifically for our Safety teams, and assume they'll never want to move into weapons capabilities advancement.
Annual Salary (USD)
Multiply the not-for-profit equivalent by 7X.
Join Us:
Apply here by June 16, 2026 (after which it will probably be too late).
|
KtsJwgCWygEntcD7K_Apply_to_be_a_Safety_Engineer_at.txt
|
{
"file_size": 1888
}
|
01db9af9-4e9a-400d-81dc-8aa0dcc1dc78
|
TL;DR
Tacit knowledge is extremely valuable. Unfortunately, developing tacit knowledge is usually bottlenecked by apprentice-master relationships. Tacit Knowledge Videos could widen this bottleneck. This post is a Schelling point for aggregating these videos—aiming to be The Best Textbooks on Every Subject for Tacit Knowledge Videos. Scroll down to the list if that's what you're here for. Post videos that highlight tacit knowledge in the comments and I’ll add them to the post. Experts in the videos include Stephen Wolfram, Holden Karnofsky, Andy Matuschak, Jonathan Blow, Tyler Cowen, George Hotz, and others.
What are Tacit Knowledge Videos?
Samo Burja claims YouTube has opened the gates for a revolution in tacit knowledge transfer. Burja defines tacit knowledge as follows:
Tacit knowledge is knowledge that can’t properly be transmitted via verbal or written instruction, like the ability to create great art or assess a startup. This tacit knowledge is a form of intellectual dark matter, pervading society in a million ways, some of them trivial, some of them vital. Examples include woodworking, metalworking, housekeeping, cooking, dancing, amateur public speaking, assembly line oversight, rapid problem-solving, and heart surgery.
In my observation, domains like housekeeping and cooking have already seen many benefits from this revolution. Could tacit knowledge in domains like research, programming, mathematics, and business be next? I'm not sure, but maybe this post will help move the needle.
For the purpose of this post, a Tacit Knowledge Video is any video that communicates “knowledge that can’t properly be transmitted via verbal or written instruction”. Here are some examples:
- Neel Nanda, who leads the Google DeepMind mechanistic interpretability team, has a playlist of "Research Walkthroughs". AI Safety research is discussed a lot around here. Watching research videos could help instantiate what AI research really looks and feels like.
- GiveWell has public audio recordings of its Board Meetings from 2007–2020. Participants include Elie Hassenfeld, Holden Karnofsky, Timothy Ogden, Rob Reich, Tom Rutledge, Brigid Slipka, Cari Tuna, Julia Wise, and others. Influential business meetings are not usually made public. I feel I have learned some about business communication and business operations, among other things, by listening to these recordings.
- Andy Matuschak recorded himself studying Quantum Mechanics with Dwarkesh Patel and doing research. Andy Matuschak "helped build iOS at Apple and led R&D at Khan Academy". I found it interesting to have a peek into Matuschak's spaced repetition practice and various studying heuristics and habits, as well as his process of digesting and taking notes on papers.
For information on how to best use these videos, Cedric Chin and Jacob Steinhardt have some potentially relevant practical advice. Andy Matuschak also has some working notes about this idea generally. @Jared Peterson, who "researches and trains tacit knowledge" recommends the book Working Minds "which teaches how to do Cognitive Task Analysis (CTA) which is a major interviewing technique for uncovering tacit knowledge."
How to Submit
Share links to Tacit Knowledge Videos below! Share them frivolously! These videos are uncommon—the bottleneck to the YouTube knowledge transfer revolution is quantity, not quality. I will add the shared videos to the post. Here are the loose rules:
1. Recall a video that you've seen that communicates tacit knowledge—"knowledge that can't properly be transmitted via verbal or written instruction". A rule of thumb for sharing: could a reader find this video through one or two undirected YouTube searches? If not, share it.
2. Post the title and the URL of the video.
3. Provide information indicating why the expert in the video is credible. (However, don't let this last rule stop you from sharing a video! Again—quantity, not quality.)[1]
To make the comments easy to navigate, please format your comment as follows:[2]
Domain: Programming, Game Development
Link: Programming livestream VODs
Person: Jonathan Blow
Background: Creator of Braid and The Witness.
Why: Blow livestreams himself coding games and creating a programming language. I imagine people who do similar things would find his livestreams interesting.
List of Tacit Knowledge Videos
(last updated 08-25-2024)
To receive occasional updates with lists of new videos, subscribe to the 'Tacit Knowledge Video Updates' Substack.
Software Engineering
Machine Learning
- Andrej Karpathy, Neural Networks: Zero to Hero. 10+ years: Stanford PhD, research scientist at OpenAI & Tesla. (Website)
- Jeremy Howard, fast.ai live coding & tutorials. "He is the co-founder of fast.ai, where he teaches introductory courses, develops software, and conducts research in the area of deep learning. Previously he founded and led Fastmail, Optimal Decisions Group, and Enlitic. He was President and Chief Scientist of Kaggle" (Wikipedia).
Competitive Programming
- Neal Wu, competitive programming. CS at Harvard; SWE 1 year at startup; 4 years at Google (LinkedIn).
- Errichto Algorithms, competitive programming. Peak rating of 3053 (legendary grandmaster) on Codeforces.
- William Lin, competitive programming. "[S]ophomore at MIT [...], IOI 2020 Winner, Codeforces Max Rating 2931 (International Grandmaster), CodeChef Max Rating 2916 (7 stars)" (YouTube About).
Game Development
- Jonathan Blow, programming livestreams. Creator of Braid and The Witness.
- Casey Muratori (Molly Rocket), ongoing project [...] to create a complete, professional-quality game accompanied by videos that explain every single line of its source code. "[P]ast projects include The Granny Animation SDK, Bink 2, and The Witness" (Website).
- Gareth Murfin, looks at some reverse engineered GTA Vice City code. He was a programmer on the project. He now has 20 years of mobile development experience (LinkedIn).
- Freya Holmér, explaining math / shaders and coding. (h/t @talelore) Co-founder of an indie game development studio since 2012 and game developer since 2020 (LinkedIn).
Web Development
Matt Layman, "How To Build SaaS with Python and Django". (h/t roshan_mishra/X)Software Engineer since 2006 at Lockheed Martin, Storybird Inc., Serenity Software, Doctor on Demand, and Included Health (LinkedIn).Hrishi Olickel, creating a proof-of-concept Web App using LLMs.CTO at Greywing (YC W21) (GitHub).Dennis Ivanov, doing web development.Dev for 5 years at a company whose name I don't recognize (LinkedIn).
Other
- George Hotz, programming livestreams. (h/t @RomanHauksson) "He is known for developing iOS jailbreaks, reverse engineering the PlayStation 3, and for the subsequent lawsuit brought against him by Sony. From September 2015 onwards, he has been working on his vehicle automation machine learning company comma.ai. Since November 2022, Hotz has been working on tinygrad, a deep learning framework" (Wikipedia).
- Inigo Quilez, computer graphics programming. (h/t @Robert Diersing) Has worked in roles dealing with computer graphics at Pixar Animation Studios, Oculus Story Studio, Oculus+Facebook, Adobe, and other places since 2003 (Website).
- Tim Ruscica, Tech with Tim livestreams. 3 years SWE; Microsoft Intern (LinkedIn).
- Alex Denisov, low-level programming. 17 years of research, 10 years of web dev (LinkedIn).
- Jon Gjengset, implementing a BitTorrent client in Rust. PhD at MIT; 3 years SWE, some at Amazon (LinkedIn).
- Shashank Kalanithi, Day in the Life of a Data Analyst. Software/data stuff for 3 years at companies I've never heard of (LinkedIn).
- Dave's Garage, exploring Windows 11. Former Microsoft shell developer.
- Joel Grus, solving Advent of Code problems. Software Engineer since 2008 at companies such as Microsoft, Google, Allen Institute for AI, and Goldman Sachs (LinkedIn).
- Scott Chacon, "So You Think You Know Git" (Part 2). (h/t @Max Entropy) Co-founder of GitHub and author of Pro Git (introduces himself at the start of the talk).
- René Rebe, live streaming Linux, open source, and low-level programming hardware and software projects. CEO of ExactCODE GmbH since 2005 (LinkedIn).
Research, Studying, & Problem Solving
Research
Neel Nanda, Mechanistic Interpretability Research Walkthroughs. (h/t @RomanHauksson)
Leads the Google DeepMind mechanistic interpretability team; worked at Anthropic with Chris Olah; interned at Jane Street and Jump Trading (Website).
Andy Matuschak, researching live.
Crowdfunded researcher. “[H]elped build iOS at Apple and led R&D at Khan Academy” (Website).
JoVE, a “Peer Reviewed Scientific Video Journal”.
“18,000+ videos of laboratory methods and science concepts”, though most are paywalled and seem to require an institutional subscription.
Steven Kenneth Bonnell II (Destiny), doing research for debates (and his research Obsidian).
"[L]ive-streamer and political commentator" (Wikipedia). Has debated/talked with, eg, Jordan Peterson, Lex Fridman, Bryan Caplan, and Ben Shapiro.
Studying
Andy Matuschak, studying Quantum Mechanics with Dwarkesh Patel.
Crowdfunded researcher. “[H]elped build iOS at Apple and led R&D at Khan Academy” (Website).
Justin Sung, Study With Me.
Learning coach and YouTuber (Website).
Problem Solving
Tim Gowers, thinking about math problems in real-time. (h/t @jsd, @depressurize)
@depressurize specifically liked this series. "He is Professeur titulaire of the Combinatorics chair at the Collège de France, and director of research at the University of Cambridge and Fellow of Trinity College, Cambridge. In 1998, he received the Fields Medal for research connecting the fields of functional analysis and combinatorics" (Wikipedia).
Evan Chen, solving Math Olympiad problems. (h/t @jsd)
"Evan is a math PhD student at MIT, and a math olympiad coach. In addition to helping train the United States team, Evan runs his own training program [...] Evan was an IMO gold medalist and a winner of the 2014 USA math olympiad, [...] He also wrote the popular textbook Euclidean Geometry in Math Olympiads while in high school, which was published in 2016" (Website).
Tom Crawford, taking an Oxford Admissions Interview.
Math communicator. Oxford math tutor for 6 years; Cambridge math PhD; Oxford math undergrad (LinkedIn).
Blackpenredpen, solving 100 integrals.
Struggling Grad Student, doing math.
Current math PhD.
Mark Goodliffe and Simon Anthony (Cracking The Cryptic), solving puzzles like Sudoku, Crossword, Wordle. (h/t @Jared Peterson)
Mark Goodliffe. "12 times winner of The Times Crossword Championship" (Cracking The Cryptic (YouTube), "About").
Simon Anthony. "[F]ormer record holder for consecutive Listener Crossword solves" (Cracking The Cryptic (YouTube), "About").
Business & Business Communication
Elie Hassenfeld, Holden Karnofsky, Timothy Ogden, Rob Reich, Tom Rutledge, Brigid Slipka, Cari Tuna, Julia Wise: GiveWell's Public Board Meetings (2007–2020 have audio).
Holden Karnofsky. “Director of AI Strategy (formerly CEO) of Open Philanthropy and Co-Founder of GiveWell” (Website).
Elie Hassenfeld. Co-Founder and CEO of GiveWell (LinkedIn).
Timothy Ogden. Chief Knowledge Officer at Geneva Global, Inc.; founding editor of Gartner Press; founder of Sona Partners; chairman of GiveWell (Aspen Institute).
Rob Reich. Political Science professor at Stanford for 26 years (Stanford).
Tom Rutledge. Has worked in finance since 1989 (LinkedIn).
Brigid Slipka. Director of Philanthropy at ACLU (LinkedIn).
Cari Tuna. President at Open Philanthropy and Good Ventures (Wikipedia).
Julia Wise. Community Liaison at Centre for Effective Altruism (LinkedIn).
Stephen Wolfram, “Live CEOing”.
“Fellow of the American Mathematical Society. […] founder and CEO of the software company Wolfram Research where he works as chief designer of Mathematica and the Wolfram Alpha answer engine.” (Wikipedia).
Sam Altman, Paul Graham, others; live Y Combinator office hours.
Sam Altman. CEO of OpenAI; former President of Y Combinator (Wikipedia).
Paul Graham. Co-founder of Y Combinator (Wikipedia).
Other YC employees.
Ray Dalio, “case study” recordings of business meetings and interviews with employees at Bridgewater on App Store app Principles In Action; I do not know of a way to access these through a web browser.
Founder of Bridgewater Associates.
Tegus, a library of expert interviews for finance professionals. Unfortunately, its price seems to start at $20,000-25,000 per user per year.
Misha Glouberman, Recorded Coaching Session. (h/t @Misha Glouberman)
"Consultant, Business Coach, and Co-Author of The Chairs Are Where The People Go."
Testimonials: Mark Surman, President of Mozilla; Shenda Tanchak, Registrar & CEO of Ontario College of Pharmacists; Michael Bungay Stanier, Author of The Coaching Habit; others (Website).
Construction & Craftsmanship
Andrew Camarata, small business, heavy machinery operation, and construction. (h/t @Carl Feynman)
"He has no legible success that I know of, except that he’s wealthy enough to afford many machines, and he’s smart enough that the house he designed and built came out stunning (albeit eccentric)."
Dave Whipple, building a "[s]imple off grid Cabin that anyone can build & afford" (and many other builds on his channel). (h/t @Vitor)
"Construction contractor, DIY living off-grid in Alaska and Michigan."
"He and his wife bootstrapped themselves building their own cabin, then house, sell at a profit, rinse and repeat a few times. There are many, many videos of people building their own cabins, etc. Dave's are simple, clear, lucid, from a guy who's done it many times and has skin in the game."
Scott Wadsworth (Essential Craftsman), "[i]nformational videos related to blacksmithing, general construction, safety & productivity, and various other trades". (h/t @Zahima)
His channel started "in 2007 as a blacksmithing 'hobby' business" (Website).
Primitive Technology, "build[ing] things in the wild completely from scratch using no modern tools or materials." (Quoting YouTube desc.) (h/t @arrrtem)
20M+ YouTube subscribers, published a book.
Max Egorov, "[b]ushcraft and off-grid craftsmanship". (Russian narration) (h/t @TANSTAAFL)
"Advoko has a site in the woods near Lake Ladoga in Russia where he films himself building various improvements by hand with local materials. Very competent craftsman, professional touch with no hype."
Shannon (House Improvements), "How to build a deck" (6 Part Series).
Has been in the construction industry for decades. Runs his own renovation business. 925K YouTube subscribers (Channel Trailer).
Me: A friend of mine successfully built a deck using this playlist as a guide.
Steve Ramsey, 200 days of woodworking projects during the COVID-19 lockdowns.
Hobbyist woodworker turned woodworking content creator (1.9M YouTube subscribers); formerly a professional graphic designer (Website).
Cooking
"Mise En Place", "[i]nterviews and kitchen walkthroughs with the head chefs at Michelin-star restaurants." (h/t @Freya)"[H]ead chefs at Michelin-star restaurants"J. Kenji López-Alt; casual cooking videos, often filmed using POV camera. (h/t @lincolnquirk)"I'm the author of the James Beard award-winning books The Food Lab and The Wok, a New York Times columnist, and a former restaurant worker. I'm also the author of the best-selling children's book, Every Night is Pizza Night" (YouTube).
Engineering & Machining
Ben Eater, "Build a 65c02-based computer from scratch".
Systems Engineer at Juniper Networks for ~9.5 years, Group Engineering Manager at Khan Academy for ~7.5 years, and YouTuber with 1.2M subscribers (LinkedIn, YouTube).
Tech Ingredients, working with "lasers, rockets, refrigeration, high voltage". (h/t @taygetea)
"an anonymous retired doctor who i suspect worked on something classified. incredible lecturer in engineering topics. every video is great, ignore the clickbait titles and thumbnails and click anyway. lasers, rockets, refrigeration, acoustics, high voltage"
Ben Krasnow, "[I]nteresting applications of science and technology. You'll see how an electron microscope was built in a home shop, how an X-ray backscatter system works, how to make aerogel, and many other hi-tech projects". (Quote from Krasnow's YouTube) (h/t @Carl Feynman)
Founder of a business that "created prototypes and small production runs of MRI-compatible computer peripherals". Hardware Engineer since 2011, working at companies such as Valve and Google (LinkedIn).
Dan Gelbart, Building Prototypes (18 Part Series). (h/t @Adrian Kelly)
"Dan Gelbart has been Founder and CTO of hardware companies for over 40 years, and shares his deep knowledge of tips and tricks for fast, efficient, and accurate mechanical fabrication. He covers a variety of tools, materials, and techniques that are extremely valuable to have in your toolbox."
Farming
FarmCraft101, farming and operating heavy machinery. (h/t @Carl Feynman)
"No legible symbols of success, other than speaking standard American English like he’s been to college, owning a large farm, and clearly being intelligent."
Lance, "Permaculture Garden In The High Desert". (h/t @Freyja)
"[A] seasoned gardener with over 40 years of experience", owns a farm. (Quoting the YouTube video.)
Finance
Disclaimer (copy-pasting a comment from @Max Entropy):
[...] I'm skeptical of your recommendations (DeepFuckingValue and Martin Shkreli). The former made his money pumping-and-dumping meme stocks, and I get the impression the latter has been selected for fame (like recommending Neil deGrasse Tyson to learn physics).
In general, I think finding good resources in finance requires a much stronger epistemic immune system than nearly any other field! There's so much adverse selection, and charlatans can hide behind noisy returns and flashy slide decks for a very long time. I've worked at a top quant trader long enough to spot BS, and the KL-divergence between what competent looking YouTubers say and what actually works is extreme.
Martin Shkreli, Finance Lessons.
“American financial criminal and businessman. Shkreli is the co-founder of the hedge funds Elea Capital, MSMB Capital Management, and MSMB Healthcare, the co-founder and former CEO of pharmaceutical firms Retrophin and Turing Pharmaceuticals, and the former CEO of start-up software company Gödel Systems, which he founded in August 2016. [...] In 2017, Shkreli was charged and convicted in federal court on two counts of securities fraud and one count of conspiracy for activity unrelated to the Daraprim controversy. He was sentenced to seven years in prison and up to $7.4 million in fines.” (Wikipedia).
Anecdote from an experienced finance friend: "I haven't watched his videos, but remember a couple of (reasonable) people expressing surprise that they're legit introductions to financial modeling."
Aswath Damodaran, "Reading a 10K".
"Professor of Finance at the Stern School of Business at New York University, where he teaches corporate finance and equity valuation. [...] Damodaran is best known as the author of several widely used academic and practitioner texts on Valuation, Corporate Finance and Investment Management as well as provider of comprehensive data for valuation purposes" (Wikipedia).
Anecdote from an experienced finance friend: "Damodaran is an NYU prof who's super credible and well regarded for his practical tutorials on valuations and corporate finance, I used to refer to his blog often."
Roaring Kitty (DeepFuckingValue), trading livestreams.
Held a $53,000 investment that turned into $50 million in GameStop. Seems he got into some regulatory trouble; I'm not sure about the specifics (Wikipedia).
Housekeeping & Parenting
Lisa (Farmhouse on Boone), "walk[ing] through her house and discuss[ing] what items she keeps where and why, and how she avoids clutter". (h/t @Freyja)
"She is a mom of 8 with a successful YouTube channel (successful enough that her husband quit his job and now helps with the channel and homeschooling)."
Abiding Home, "Large Family Homeschool Day in the Life". (h/t @Freyja)
"Christian mom who homeschools her 8 children [...] I know less about any metrics of success, except that she reports that her family is easy to run and enjoyable for her."
Media & Arts
Design
Various skilled CAD users and instructors, CAD vs. CAD Speedrunning Tournament. (h/t @zookini)
"Watch some of the best SOLIDWORKS, OnShape, Fusion 360 and Inventor users Speedrun some challenging models while going head to head and sharing their screens" (YouTube).
Sofia Bue, SFX Sculpting. (h/t @Freyja)
"Sofia Bue is a professional SFX sculptor; she works at Weta Workshop which is the most well-known special FX company in the world; they were responsible for SFX on Lord of the Rings. She also won the SFX category at the world Bodypainting championships at least once so I think she’s pretty indisputably world-class at it."
MDS, live UI design.
Here’s his Dribbble.
Andy Matuschak, live design stream on his Patreon (paywalled).
Crowdfunded researcher. “[H]elped build iOS at Apple and led R&D at Khan Academy” (Website).
Filmmaking
Taran Van Hemert, 4 hours of editing a Linus Tech Tips YouTube video.
"Editor, Camera Operator, Writer, Host at Linus Tech Tips" for ~10 years (Website).
David Winters (Cranky Cameraman), "[l]ife and business as a working independent Director of Photography & Broadcast Photojournalist."
"Over 20 years of making media"; "David has created content for many Fortune 500 brands as well as creative agencies and television networks" (Winters Media Group, Inc.).
Corridor Crew, "VFX Artists React". (h/t @talelore)
"They have lots of high-profile guests from Seth Rogen to Adam Savage." Host backgrounds were not readily available (if someone finds their backgrounds, feel free to comment and I will edit them into the post).
Hayao Miyazaki, documentary detailing his creative process. (h/t roshan_mishra/X)
"A co-founder of Studio Ghibli, he has attained international acclaim as a masterful storyteller and creator of Japanese animated feature films, and is widely regarded as one of the most accomplished filmmakers in the history of animation" (Wikipedia).
Thought Café, "How Crash Course is Made - Tutorials!"
Thought Café does animation/graphic design for Crash Course's 15.6M-subscriber educational YouTube channel (Wikipedia). Here's Thought Café's reel.
Music
Jacob Collier, music composition, arrangement, production. (h/t @bertrand russet)
"6-time (at 29 yo) Grammy-winning multi-instrumentalist."
Philip Quast, masterclass in singing Les Mis (full interview). (h/t @Yoav Ravid)
"He has won the Laurence Olivier Award for Best Actor in a Musical three times, making him the first actor to have three wins in that category. He is perhaps best known for his role as Inspector Javert in the stage musical Les Misérables and in the Les Misérables: The Dream Cast in Concert" (Wikipedia).
Seymour Bernstein, teaching piano. (h/t @lfrymire)
"Pianist and composer, performed with the Chicago Symphony Orchestra, Adjunct Associate Professor of Music and Music Education at New York University."
"Tonebase (a paid music learning service) recorded a number of free-to-watch conversations with Bernstein while he plays through or teaches a piece. Bernstein is about 90 years old at the time of recording and shares an incredible amount of tacit knowledge, especially about body mechanics when playing piano."
Zane Carney, composing, recording, and producing music live.
Guitarist who has contributed to albums like Thundercat's "Drunk" and John Mayer's "Paradise Valley". Has toured with Jonny Lang and John Mayer (Website).
BNYX, Olswelm, and other indie (?) music producers; music production livestream VODs.
Unsure of credibility. A friend into music production recommended some of the videos on this channel. He specifically liked BNYX and Olswelm.
Productivity
Joel Spolsky, You Suck at Excel (notes from the video).
Program manager responsible for the launch of VBA in Excel 5.0; co-founded Fog Creek Software; blogger (Website).
Alexey Guzey, walkthrough of his computer setup and productivity workflow.
Founder of New Science. Popular blogger (eg, author of Matthew Walker’s “Why We Sleep” Is Riddled with Scientific and Factual Errors).
Note: Alexey changed his mind about the productivity benefits of the computer setup in this video: "My 2022 self (I don't know them) was very wrong about meditation, huge monitors, and... sleep."
Tyler Cowen, Nat Eliason, Nathan Labenz, David Perell; How Do You Use ChatGPT? (h/t goldplatesteaks/X)
Tyler Cowen. Economist at George Mason University, host of Conversations with Tyler, administrator of Emergent Ventures (Wikipedia, Emergent Ventures).
Nathan Labenz. AI R&D at Waymark. Founder of a couple of companies (LinkedIn).
David Perell. Hosts a podcast, blogs, and runs an online writing course (Website).
Nat Eliason. Writer and influencer (Website).
Sports & Games
NBA players, watching themselves play basketball.
Various NBA stars.
Andy Benesh, talking about his beach volleyball offense.
Won two international beach volleyball tournaments (bvbinfo).
SpeedRun.com, video game speedruns (to find the videos, click on 'Player' names on individual leaderboards, and you will find a YouTube recording of the speedrun). (h/t @Algon)
Each game has leaderboards from which you can determine speedrunner believability.
Anthony Gatto, complete practice session at the British Juggling Convention 2000 (and here's a decent juggler commenting on it). (h/t @Morpheus)
"Anthony Gatto holds several juggling world records. This routine is infamous in the juggling world."
Sylvie Von Duuglas-Ittu, Muay Thai Library. (h/t @raydora)
"Muay Thai fighter with over 200 fights."
"Sylvie shows herself learning with her 'Muay Thai Library' videos. She narrates how she explores learning someone's technique or strategy. More than any particular technique, these videos show someone's learning process. This is applicable to all combat sports."
Therapy
Carl Rogers, Frederick Perls, Albert Ellis, Everett Shostrom, Arnold Lazarus, Aaron Beck: Three Approaches to Psychotherapy (recorded therapy sessions).
Carl Rogers. Founder of person-centered psychotherapy; one of the founders of humanistic psychology (Wikipedia).
Frederick Perls. Developed Gestalt therapy with his wife, Laura Perls (Wikipedia).
Albert Ellis. Founder of rational emotive behavior therapy (REBT) (Wikipedia).
Everett Shostrom. Put together the film. "He also produced well known tests and inventories including the Personal Orientation Inventory, Personal Orientation Dimensions, the Pair Attraction Inventory, and the Caring Relationship Inventory" (Wikipedia).
Arnold Lazarus. "Authored the first text on cognitive behavioral therapy (CBT) called Behaviour Therapy and Beyond" and won various awards including two from the American Psychological Association and the American Board of Professional Psychology (Wikipedia).
Aaron Beck. "He is regarded as the father of cognitive therapy and cognitive behavioral therapy (CBT)" (Wikipedia).
Dr. Alok Kanojia, “interviews” with influencers.
6 years as a private psychiatrist; 6 years as a Clinical Fellow and Instructor in Psychiatry at Harvard; 5 years doing psychiatry at McLean Hospital (LinkedIn).
Esther Perel, live couples’ therapy session[s] with a guest couple. (h/t @Freyja)
"It is rare to get access to a recorded therapy session, and she is at least world-renowned as a relationship therapist (although that doesn’t necessarily prove that she’s good at it)."
Transportation
RegLocal, advanced driving. (h/t @masasin)
"former police driving instructor; he has a book, but the videos themselves are so helpful"
Ryan Farran (Missionary Bush Pilot), flying small aircraft in Papua New Guinea. (h/t @masasin)
"My job as a bush pilot is to fly missionaries, medical flights, and cargo into mountain and jungle airstrips throughout all of [Papua New Guinea]" (YouTube).
Writing
Paul Graham, website with a replay of his writing process for an essay. (h/t sameersismail/X) Here's the final essay. Here's the blog post describing the site (though the link to the site in this blog post is dead).
Co-founder of Y Combinator.
Ali Abdaal, writing a chapter for his book.
Author of Feel Good Productivity. Studied medicine for 6 years at Cambridge University; Junior Doctor in the UK National Health Service; YouTuber (Website).
Miscellaneous
David J. Peterson, "The Art of Language Invention" (30-episode series on language construction: 'conlang'). (h/t @Jonathan Sheehy)
He's been creating languages for fun since 2000 and creating languages professionally since 2009. He's done work for shows like HBO's Game of Thrones, Syfy's Defiance, Syfy's Dominion, The CW's Star-Crossed, The CW's The 100, Showtime's Penny Dreadful and the movie Marvel's Thor: The Dark World. He published a book called The Art of Language Invention (he shares his credentials in this video).
Paul Meehl, Philosophical Psychology 1989 course lectures, "deep introduction to 20c philosophy of science, using psychology rather than physics as the model science -- because it's harder!" (h/t @Jonathan Stray)
"Meehl was a philosopher of science, a statistician, and a lifelong clinical psychologist. He wrote a book showing that statistical prediction usually beats clinical judgement in 1954, and a paper on the replication crisis in psychology in 1978. He personally knew people like Popper, Kuhn, Lakatos, Feyerabend, etc. and brings their insights to life in these course lectures."
Me: I was hesitant to add a lecture series to this list at first. I changed my mind after listening to the first video, where Meehl provides interesting details (gossip, almost) about the life of an academic and the various personalities of his successful academic peers.
Kenneth Folk, Guided Tour to 13 Jhanas and pranayama breathing.
"Kenneth Folk is an instructor of meditation who has received worldwide acknowledgement for his innovative approach to secular Buddhist meditation. After twenty years of training in the Burmese Theravada Buddhist tradition of Mahasi Sayadaw, including three years of intensive silent retreat in monasteries in Asia and the U.S., he began to spread his own findings, successfully stripping away religious dogma to render meditation accessible to modern practitioners" (Website).
Keith Johnstone, teaching improv.
Author of Impro. "A pioneer of improvisational theatre, he was best known for inventing the Impro System, part of which are the Theatresports" (Wikipedia).
^
What valuable project did they ship? How many years have they worked for their prestigious company or university? How many papers have they published? What awards have they won? What other domain-relevant metric did this person perform well on? You could also give your feedback based on your expertise. Ideally, these are proxies for these practitioners having good knowledge and expertise.
^
Feel free to leave out the 'Background' and 'Why' sections.
I suspect this phenomenon is common in the LW/EA spheres, but I've never seen it presented like this. I describe the way that switching from earning-to-give to working-in-altruism has consequences on one's sense of responsibility and trust. I wonder if others have experienced this and how.
Delegating responsibility
One of the truisms in life is “there are no adults”. Having turned 18 last month, I’ve had the displeasure of staring that truism in the face. Nothing deals a blow to your sense of civilizational adequacy quite like thinking about future Earth with life extension where everyone is thousands of years old, and then remembering you live on Earth2024 where most people in charge are barely half a century old. Nihil supernum and all that.
But the illusion of adults is extremely tempting to me. A part of me really wants to believe there are adults out there that can solve my problems better than I can. For instance, I donated to MIRI for the first time a month ago, and anytime I make money now, I run the expected value calculation and establish that if MIRI can slightly increase the log-odds of everyone surviving, that’s worth more than anything I could buy for myself. MIRI has become a sort of blackbox to me: money comes in, survival lottery tickets come out, and I don't care how the sausage gets done. I'm willfully ignorant, because "donating to MIRI" is one deviation away from the front lines, and lets me avoid taking ultimate responsibility for things.
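As a toy illustration of that calculation (the dollar value, baseline, and sigmoid form here are my own assumptions, not anyone's actual numbers), even a minuscule shift in log-odds can swamp ordinary personal spending when the stakes are valued highly enough:

```python
# Toy illustration only; all numbers are assumptions for the sake of the sketch.
import math

def p_from_logodds(l: float) -> float:
    return 1 / (1 + math.exp(-l))

value_of_survival = 1e9   # assumed personal valuation of a surviving future, in $
baseline_logodds = 0.0    # assumed 50/50 baseline for simplicity
delta = 1e-6              # tiny increase in log-odds bought by a donation

gain = (p_from_logodds(baseline_logodds + delta)
        - p_from_logodds(baseline_logodds)) * value_of_survival
print(f"expected value of the shift: ${gain:,.2f}")  # roughly $250 here
```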
Switch
Then I got accepted into BlueDot Impact's governance course. Oh no! That's a reversal of responsibility! Now other people [1] are treating me as a black box where money comes in and x-risk mitigation comes out!
I applied to the course in the hopes of becoming the type of person who can use Neil's dollars in a more effective manner than MIRI.[2] In other words, I wanted to be able to legitimately trust myself more than MIRI as far as allocating my own money is concerned. I'm not there yet, but the fact I am aiming to become that person is a shift in trust. It may also be the meaning of adulthood, to the extent there is one.
Should people aim for this?
In x-risk, as with any other cause, should you aim to become the type of person whose money is better spent on your own projects than on a charity?
Well, no. It's widely admitted that earning-to-give is a noble route, especially if you already work a job you're particularly good at. If you're excellent at jurisprudence, then by all means, support x-risk mitigation by working as a lawyer and donating. It would be suboptimal to switch to a career closer to the front lines just because you think that's more admirable or something. For me personally, switching from trusting others to trusting myself felt like leaping into adulthood. But the lawyer in the example is definitely an adult in that they are picking the best path, and taking responsibility for it.
What about you?
I'm still a high school student, and trusting myself more than others is a little alien to me. Until now, I had never viscerally felt the switch from relatively distant responsibility to much closer responsibility. Have you felt this switch? Is this some sort of threshold to adulthood? Do you think there are some far-reaching consequences to the self-worth of the average EA versus the average person?
^
Primarily Open Philanthropy donors
^
That is, I only need to use the money to generate more utility than a marginal donation to MIRI of that size would have. For example, if a new project of mine costs 100 dollars, it should do more good than 100 dollars given to an already-existing project at MIRI. I'm not there yet, but it's likely the governance course will get me a lot closer.
This is a post to officially announce the sae-vis library, which was designed to create feature dashboards like those from Anthropic's research.
Summary
There are 2 types of visualisations supported by this library: feature-centric and prompt-centric.
The feature-centric vis is the standard one from Anthropic’s post; it looks like the image below. There’s an option to navigate through different features via a dropdown in the top left.
You can see the interactive version at the GitHub repo, at _feature_vis_demo.html.
The prompt-centric vis is centred on a single user-supplied prompt, rather than a single feature. It will show you the list of features which score highest on that prompt, according to a variety of different metrics. It looks like the image below. There’s an option to navigate through different possible metrics and choices of token in your prompt via a dropdown in the top left.
You can see the interactive version at the GitHub repo, at _prompt_vis_demo.html.
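To give a rough feel for the workflow, here is a hypothetical usage sketch. I'm guessing at the class and method names (SaeVisConfig, SaeVisData, save_feature_centric_vis) based on common patterns; the User Guide and Demo Colab linked below are the authoritative references.

```python
# Hypothetical usage sketch; names are assumptions on my part, not the
# guaranteed API. See the User Guide and Demo Colab for the real thing.
from sae_vis import SaeVisConfig, SaeVisData

# Assumed inputs: `encoder` is a trained sparse autoencoder, `model` is the
# transformer it was trained on, `tokens` is a [batch, seq_len] tensor of ids.
cfg = SaeVisConfig(features=range(64))  # which SAE features to dashboard
data = SaeVisData.create(encoder=encoder, model=model, tokens=tokens, cfg=cfg)

# Writes a standalone HTML file with a feature dropdown in the top left.
data.save_feature_centric_vis("feature_vis_demo.html")
```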
Other links
Here are some more useful links:
GitHub repo
User Guide - a Google Doc explaining how to use the library
Dev Guide - a Google Doc explaining more about how the library was built, in case you'd like to extend it or build off it
Demo Colab - includes examples, with code explained
You might also be interested in reading about Neuronpedia, who make use of this library in their visualizations.
If you're interested in getting involved, please reach out to me or Joseph Bloom! We will also be publishing a post tomorrow, discussing some of the features we've discovered during our research.
I’ve been told a number of times that I’m too pessimistic about personal outcomes, but I feel like I’m a realist. So I’d like to test and measure it.
This post on Overconfident Pessimism appears to cover a lot of the same ground, and it has certainly illuminated for me the way I become pessimistic, or assign low probability, about tasks or processes I don't yet understand how to do. However, the article is chiefly about making predictions about innovation and technological advances, not things in the personal realm.
The problem appears to be making predictions where one's own behaviour is involved (although that didn't stop Wilbur Wright).
Nevertheless, surely if I make a raft of predictions, assign a confidence to each of them, and it turns out I am overwhelmingly, overconfidently pessimistic, then that would confirm the "I am pessimistic" hypothesis, and vice versa for someone who is considered to be too optimistic, right?
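For concreteness, here's a minimal sketch (my own, with made-up numbers) of what that measurement could look like:

```python
# Minimal sketch with made-up numbers: am I overconfidently pessimistic?
predictions = [
    # (my stated probability that the outcome is good, whether it was good)
    (0.2, True),
    (0.3, True),
    (0.5, False),
    (0.1, True),
    (0.4, True),
]

mean_stated = sum(p for p, _ in predictions) / len(predictions)
mean_actual = sum(outcome for _, outcome in predictions) / len(predictions)

print(f"stated: {mean_stated:.2f}, actual: {mean_actual:.2f}")
if mean_actual > mean_stated:
    # Good outcomes happen more often than I predict: evidence of pessimistic bias.
    print("Outcomes systematically beat my stated probabilities.")
```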
We’ve just published a paper on a new way to align language models with human values. We wanted to post it here to get more feedback from folk who have thought deeply about alignment.
I'm pretty excited about it. In the past, I worked on RLHF, InstructGPT, and GPT-4 alignment (though one could make the claim that this isn't "real alignment research"). In the last year, I've found myself gravitating more towards the question of "what do we align to?". It turns out that this is extremely hard, but I think the set of ideas in this paper are some of the best I've come across.
I also think "what do we align to?" is directly relevant to longer-term alignment research / X-risk. We have a section about this in the paper, and I'd love to hear from people who disagree.
You can find the paper here: https://meaningalignment.org/values-and-alignment-paper. Below I've pasted the abstract, in addition to the section of our discussion where we relate the paper more explicitly to traditional alignment research.
Fire away!
Abstract
There is an emerging consensus that we need to align AI systems with human values (Gabriel, 2020; Ji et al., 2024), but there is very little work on what that means and how we actually do it. We split the problem of “aligning to human values” into three parts: first, eliciting values from people; second, reconciling those values into an alignment target for training ML models; and third, actually training the model. In this paper, we focus on the first two parts, and ask the question: what are “good” ways to synthesize diverse human inputs about values into a target for aligning language models? To answer this question, we first define a set of 6 criteria that we believe must be satisfied for an alignment target to shape model behavior in accordance with human values. We then propose a process for eliciting and reconciling values called Moral Graph Elicitation (MGE), which uses a large language model to interview participants about their values in particular contexts; our approach is inspired by the philosophy of values advanced by Taylor (1977), Chang (2004a), and others. We trial MGE with a representative sample of 500 Americans, on 3 intentionally divisive prompts (e.g. advice about abortion). Our results demonstrate that MGE is promising for improving model alignment across all 6 criteria. For example, almost all participants (89.1%) felt well represented by the process, and (89%) thought the final moral graph was fair, even if their value wasn’t voted as the wisest. Our process often results in “expert” values (e.g. values from women who have solicited abortion advice) rising to the top of the moral graph, without defining who is considered an expert in advance.
[...]
Relevance to alignment research
This paper is about what human values are and how we can align to them. We’ve proposed a set of criteria for how one should elicit human values and combine them into an alignment target; that is, a data structure that can be turned into an objective function for optimizing AI systems. We’ve also developed a method, Moral Graph Elicitation, for producing an alignment target and argued that it performs well on our criteria through our case study in Section 5.
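To make the "data structure" framing concrete, here is a minimal sketch of what a moral-graph-style alignment target might look like in code. The field and method names are my own illustration, not the paper's actual schema:

```python
# Illustrative sketch only: the field and method names are my invention,
# not the paper's actual schema.
from dataclasses import dataclass, field

@dataclass
class Value:
    title: str
    attentional_policies: list[str]  # what one attends to when acting on this value
    context: str                     # the situation in which the value applies

@dataclass
class MoralGraph:
    values: list[Value] = field(default_factory=list)
    # A directed edge (a, b) records that participants judged value b wiser
    # than value a within a shared context.
    wiser_than: list[tuple[int, int]] = field(default_factory=list)

    def wisest(self, context: str) -> list[Value]:
        """Values applicable in this context with no outgoing 'wiser than' edge."""
        dominated = {a for a, _ in self.wiser_than}
        return [v for i, v in enumerate(self.values)
                if v.context == context and i not in dominated]
```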
Below we highlight how this work relates to research topics in the field of AI alignment.
Outer alignment. This line of research is somewhat different from what typically falls in the bucket of alignment research. It is most closely related to “outer alignment”, which is concerned with defining the “right” objective function to optimize. However, outer alignment research rarely considers the legitimacy of the process that produces the objective function to optimize. It is not simply a matter of coming up with a good answer; it matters how we come up with the answer, because we must aspire to a world where the people and institutions who use these systems broadly endorse what they are trying to do for us. This has become an increasing focus of more recent alignment work (Ji et al., 2024).
Deception. One of the main motivations of alignment research is to detect or mitigate deception from AI; in other words, scenarios where an AI system attempts to manipulate the beliefs or actions of people to achieve an undesirable outcome. This is most often explored through “inner alignment” research, which is concerned with how models at test time might optimize something different than the objective we intended to set. We believe that coming up with robust alignment targets (as defined in Section 3.1) is also directly relevant to AI deception. Specifically, a non-robust alignment target is vulnerable to being hijacked by both human and AI systems, without requiring any inner alignment failures. As described in Section 3.2, there will be a huge incentive to do this because AI systems will become increasingly powerful, both economically and culturally. A motivated actor (human or AI) could manipulate a non-robust alignment target using money, rhetoric, or hacking. A robust target and elicitation process would shut down those avenues for manipulation.
Over-optimization. The moral graph may also be useful for mitigating over-optimization. This is because each value in the moral graph is connected with a context in which that value is applicable. In our experiments, the context is simply the prompt, but more generally a context might be represented by a certain range of tokens in a conversation or action trajectory. Thus, there’s a clear bounded area in which each value applies, and it’s less likely that any one value will be pushed too hard or universalized. Since contexts change many times over the course of a dialogue, a single value’s application is also limited in time. While this doesn’t mean that models will do the right thing, it means pursuing their objective function isn’t the same as monomaniacally pursuing a single goal. Of course, over-optimization could still occur within a particular context.
On top of this, one of the reasons to be worried about over-optimization is that optimization is usually carried out over goals or preferences. But these are only a proxy for what we really care about, and it’s this misalignment which is our chief concern. We believe our articulation of human values as constitutive attentional policies is much closer to “what we really care about”, and is thus less prone to over-optimization.
Coherent extrapolated volition. Perhaps the most popular framing of “what AI should optimize” from an alignment perspective is coherent extrapolated volition (CEV) (Yudkowsky, 2001):
Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted
In other words, CEV states that an AI system should figure out what we’d want it to do if we were the wisest versions of ourselves, and do that. It’s unclear how the AI should do this exactly. The overarching vision is one where humans are treated like black boxes, and the goal of an AI is to serve them by observing our behavior and simulating what we might want. This is similar to the frame from cooperative inverse reinforcement learning (CIRL), where agents attempt to infer the human’s reward function based on observing their behavior. These “black box” approaches require training models on opaque reward functions[28], which are then susceptible to unforeseeable consequences due to misalignments between the reward function and our real values.
Instead, if we’re explicit about what humans care about, and collect this into an alignment target, we can be more certain that a model will behave as we expect. We can do things like audit the target, trace unwanted behavior to particular contexts, and prevent the target from being manipulated. In other words, rather than treating humans as black boxes, it’s much easier if we can take a snapshot of what humans care about, and train a model to care about these things too. Moral Graph Elicitation is our attempt to do this in a clever way.
Scaling to superintelligence. We hope the moral graph’s structure can scale to superintelligence, because a superintelligence can add edges to a moral graph which human beings might be able to double check. The edges in the moral graph do not just represent arbitrary opinions of a population, they are modeled on a theory of human moral reasoning and learning, mentioned in Section 2.3. As described here, the moral graph captures some aspects of moral learning by human beings, but we believe the same moral reasoning and learning can be done by an AI system such that a superintelligent AI would be able to iterate further on a moral graph, developing new values and edges. These new values and edges might still be able to be evaluated by humans, or by weaker systems that in turn can be evaluated by humans (Burns et al., 2023). The “value transition stories” part of our experiment shows that people can assess the quality of claimed “gains in wisdom”. Also, the fact that participants retroactively endorsed values that were considered wiser than theirs by other participants, implies that lesser systems (or humans) can evaluate moral reasoning done by a stronger system. If this works, an ASI could evolve its own morality in a human-inspectable, human-compatible way–a kind of process-based moral supervision.
Introduction
For any action that an actor in the cosmos takes, general principles can be determined to guide that action; I call these 'principles of action'. My principles of action are given and organised in what I call 'frameworks', a technique I developed during my research which can represent many pieces of knowledge as a unified whole.
Components of frameworks
Frameworks are structured by three components: 'subject', 'description', and 'placement'. Subjects are given descriptions, which are then placed relative to other pieces of knowledge by indents (spaces) and lines. Each line has a single section, which can have subsections in immediately following indented lines. An example framework would be the following:
.subject - description
 -subject - description
  -subject - description
 -subject - description
  -subject - description
  -subject - description
The components of frameworks have the following descriptions (given, as another example, in a framework):
.subject - the thing that is being described and having associated knowledge and given a name
-description - the content of knowledge associated with a thing
-placement - where and when knowledge is located of a thing
 -position - where a thing occurs logically within another idea (as a subsection of a supersection) as seen in indents. positioning nests knowledge under a supersection in which a supersection is thematic with all subsections having a similar theme
 -ordering - when a thing occurs logically after another idea as seen in lines coming one after the other
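As a minimal sketch of these components in code (my own illustration, not part of the original technique), a framework can be seen as a tree of subject-description sections, where indentation gives position and line order gives ordering:

```python
# A minimal sketch (my own illustration, not part of the original technique)
# of a framework as a tree of sections.
from dataclasses import dataclass, field

@dataclass
class Section:
    subject: str            # the thing being described and named
    description: str = ""   # the knowledge associated with the subject
    subsections: list["Section"] = field(default_factory=list)  # position via nesting
    # Ordering is implicit: subsections appear in the order they are listed.

framework = Section(
    subject="principles of action",
    subsections=[
        Section("the cosmic actor perspective", "we are fundamentally actors in the cosmos"),
        Section("action theory", "how action works"),
    ],
)
```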
Heeding knowledge
Knowledge in my frameworks is typically very concise and succinct, with a great density of knowledge, so slow, methodical, and thoughtful reading is necessary to make sure knowledge is understood, heeded, and remembered. Sections have subjects which are typically well named, and this aids remembering the descriptions of a section. How sections relate to each other is important in frameworks, so paying attention to indents and, if necessary, looking or glancing back at previously read section subjects can allow for the good structuring of knowledge within one's mind. With sections that contain many nested subsections extending quite far in lines, it is easy to lose track of where knowledge is placed, so looking back is necessary.
Frameworks can be thought of as similar to how sections of a document organise knowledge; however, frameworks differ in that they apply down to the scale of individual ideas. Frameworks are also typically highly deliberate in their organisation, with a hefty amount of discussion of ideas done before frameworks are created or altered.
My ideas of principles of action
The following framework presents my ideas of principles of action:
.principles of action
 -the cosmic actor perspective - we are fundamentally actors in the cosmos which think and act based on cosmic situations and what we value in priorities
 -action theory - how action works
  -consequence principles
   -situations - situations select ideas which go on to produce actions
   -consequences - actions can have produced consequences in cause and effect
   -basis in the mind - situations and imagined consequences of situations have basis in reasoning and reasoning mechanisms of my self principles; for instance inductive reasoning sees situations and consequences create generalised rules about them e.g. the situation of no heating will create the consequence of something being cold as a general rule
   -understanding classifications - situations and consequences can have classifications of understanding
    -understanding - situations and consequences can have an understanding
     -understood - situations and consequences can be understood
     -nonunderstood - situations and consequences can be nonunderstood
      -unintended consequences - some consequences can be hard to predict and have nonunderstood unintended consequences
    -complexity - situations and consequences can be of a complexity
     -simple - situations and consequences are readily understandable
     -complex - situations and consequences are not understandable
   -consequentialism - consequences that are good should be strived towards (see the phrase "the ends justify the means", see the concept of utilitarianism)
    -action based in good/efficient ideas leads to good consequences
  -action principles
   -optimal action - preferably for optimal action instantaneous collection of ideas for actions should occur although it is unlikely given ideas evolve and collect slowly (in the mind) and noninstantaneously over time (see SS situational collection)
   -principled action - actions can be understood based in principles of cosmic situations and areas of action (see further sections of action philosophy)
   -practical optimisation - there needs to be a balance of practicality and optimisation in which it isn't practical to completely optimise actions (see also pragmatic)
   -general and special action - general situations necessitate general actions while special situations necessitate special actions
    -general action - actions can be generalised to general situations (see the concepts of virtue ethics and rule utilitarianism)
    -special action - actions can be specialised to special situations (see the concept of act utilitarianism)
   -simple and complex action
    -simple actions - actions can be simple
    -complex actions - complex actions require logistics, arrangement, and thinking in accounting for many things
     -special arrangement - special arrangements can be made for special actions
     -complex action optimisation - actions which are complex are fraught with optimisation and efficiency problems of various ways actions can be done
   -action prioritisation - important actions are prioritised per cosmic priorities and thought about more and specially
 -prioritised action
  -cosmic priorities - an entity of a specific type has priorities in which general human priorities are given in cosmic priorities
   -fundamental priorities - humans have fundamental things which they prioritise
    -self - humans prioritise the self
     -suffering - humans want to minimise suffering of the self
     -mortality - humans want to avoid dying of the self
    -others - humans prioritise others (typically less than the self due to selfishness)
     -suffering - humans want to minimise suffering of others
     -mortality - humans want to avoid dying of others
   -priority focuses - fundamental priorities can focus on specific areas of attention
    -individual focus - actions which are focused on the individual benefit the self
    -other individuals focus - actions which are focused on other individuals benefit others (e.g. family and friends)
    -world focus - actions which are focused on the world benefit others and perhaps the self
   -instrumental goals - the fundamental priorities (terminal goals) are achieved via instrumental goals which are the means to the end (see also the concept of 'motives')
    -problems - situations which are bad for priorities. problems can be direct (e.g. a friend getting injured) or indirect and of instrumental goals (e.g. losing your car keys)
   -universal goodwill - any empathetic being regardless of time and space directing their care towards any and all other empathetic beings that are suffering regardless of time and space (see the concept of longtermism and future people's value)
    -anonymous goodwill - even if you are not known to others and you feel alone in your suffering, a thought of care towards the anonymous still encompasses care towards you
   -evolutionary morality - morality and priorities have origin in evolution seemingly (see the concept of evolutionary ethics)
    -personal morality - morality has been evolutionarily generated through the personal interaction of people in which the impersonal dealing of people e.g. of many people, isn't easily morally comprehensible (it is hard to comprehend the moral significance of a million people and people focus on individual persons)
  -ethics - theory of right and wrong action (organised per priority focuses)
   -individual ethics - ethics of individual focus
    -survival - individual mortality action
    -hedonism - individual suffering action
    -virtue ethics - acting well as an individual (e.g. via self-knowledge)
   -other individuals ethics - ethics of other individuals focus
    -caring action - other individuals mortality action
    -friendly action - other individuals suffering action
   -world ethics - ethics of world focus (encompassing the concept of utilitarianism)
    -principle of utility - all actions should be judged based on the utility they provide, that is, their tendency to produce benefit, advantage, pleasure, good, or happiness i.e. of suffering or mortality of fundamental priorities
    -utility calculus - calculation of utility of actions for maximal utility (see the concept of felicific calculus)
     -suffering calculation - measuring relative amount of distress, eustress, and neutral states given actions for each person (distress and eustress are explained in my 'entertainment ideas')
     -mortality calculation - enumerating lives of each person that can be saved given actions
     -incomparability of suffering and mortality - actions which affect both suffering and mortality aren't able to be compared well (see the concept of the 'repugnant conclusion') (an example of the incomparability of suffering and mortality would be saving one person's life at the cost of the suffering of many people) (i'm a bit unsure of these ideas)
    -complex consequences - actions per action theory can have complex consequences e.g. an important individual may hold more value in providing utility to others and thus should be prioritised
     -longtermism - actions may have complex longterm consequences
 -cosmic situation - you are fundamentally an actor in the cosmos so it's best to consider the whole cosmos
  -universal situations - the universe's general character for the actor
   -possible cosmic situations - there are possible cosmic situations that the actor may be placed in (may be unfalsifiable)
    -godlike entity - cosmic situation with a godlike entity (godhood is desirable for any entity so if it’s possible it has almost certainly occurred)
     -according to religions
     -not according to religions
    -simulation hypothesis - cosmic situation of the actor being in a computer simulation (like in the movie the matrix)
     -for experimentations on civilisation
     -for ancestor simulations
    -arbitrary and unknown - the cosmic situation may be arbitrary and unknown
   -cosmic situation perspectives - (section omitted)
  -earthly situation - the cosmic situation can be focused on earth's situation for humans
   -earthly problems - obstacles and dangers of fundamental priorities in the earthly situation (see the concept of maslow's hierarchy of needs)
    -safety (e.g. accidents)
    -physical health (e.g. illnesses, fitness, and nutrition)
    -mental health (e.g. entertainment and relationships)
   -obtaining things - obtaining things deals with earthly problems
    -thing types
     -personal things (e.g. possessions, money, job)
     -relationships
     -things of the economy - the economy allows for the dealing with problems via fabricating things and resources which can be obtained to resolve problems
   -important technologies (things created via the economy and such)
    -AI - capable of resolving all problems if sufficiently capable
     -ASI (artificial super intelligence) developed in the 2050s or 2060s (survey saying "high-level machine intelligence" developed with 50% confidence by 2061. the survey defined the achievement of high-level machine intelligence as when unaided machines can accomplish every task better and more cheaply than human workers)
     -technological singularity, the point where technological progress accelerates to a massive degree, would occur by 2045 perhaps according to ray kurzweil
     -many capabilities and technologies being created shortly after the development of ASI (such as life extension technologies and automation technologies)
      -complete life extension technologies
      -complete automation technologies (all employment ended, see the concept of technological unemployment)
    -VR - virtual reality such as full immersion VR which immerses all the senses with artificial stimuli (allowing for things like a beautiful appearance for people)
     -unified economic platform for VR worlds replacing the physical economy with a virtual reality economy such as in a metaverse (standard platform dominates in providing full immersion VR, requiring infrastructure such as perhaps pods like in the matrix)
     -neuroscience technologies allowing for full immersion of all the senses
    -highest levels of civilisational advancement occurring in the late 21st century or soon after
     -solar system engineering - the solar system's matter could be used for human purposes (e.g. stuff like a dyson swarm or such)
     -galactic colonisation - the colonisation of the galaxy could be initiated in the late 21st century
    -life extension technologies - capable of resolving problems of mortality although encompassed by AI (emerging within the 21st century)
     -longevity escape velocity - the life expectancy of people with life extension technologies could increase faster than they age e.g. increasing the life expectancy of a person by 2 years in a given year allowing for the possibility of living forever perhaps (perhaps occurring in the 2030s; "50% chance that we will reach longevity escape velocity by 2036" - Dr Aubrey de Grey)
  -local situations - the universe's local character for the actor which is relative to the individual (so can't be specified)
Review of my ideas
My ideas of principles of action are foundational for all action for all individuals, I feel, and thus very important in my opinion. Although many of the ideas are simple and perhaps able to be dismissed as obvious and trivial, it is still important to organise knowledge on the topic of principles of action instead of leaving it unorganised and implicitly known. With my technique of frameworks being somewhat unique, I feel my presentation of principles of action is novel and does a good job of clearly stating the topic's ideas. My ideas of principles of action build upon and encompass ethics, in which it makes sense to define a new field of philosophy of action ('action philosophy'), as there are foundational and philosophical ideas of action which aren't encompassed by the applied sciences (which are not foundational and philosophical).
I've made a magnum opus of all my ideas in what I call WAK11, which details a new system of philosophy and science; this post's principles of action framework is only a small part of it. If you're interested in foundational ideas of philosophy and science, I think you'd find it interesting. WAK11 can be found here.
Over the last couple of years, mechanistic interpretability has seen substantial progress. Part of this progress has been enabled by the identification of superposition as a key barrier to understanding neural networks (Elhage et al., 2022) and the identification of sparse autoencoders as a solution to superposition (Sharkey et al., 2022; Cunningham et al., 2023; Bricken et al., 2023).
From our current vantage point, I think there’s a relatively clear roadmap toward a world where mechanistic interpretability is useful for safety. This post outlines my views on what progress in mechanistic interpretability looks like and what I think is achievable by the field in the next 2+ years. It represents a rough outline of what I plan to work on in the near future.
My thinking and work is, of course, very heavily inspired by the work of Chris Olah, other Anthropic researchers, and other early mechanistic interpretability researchers. In addition to sharing some personal takes, this article brings together - in one place - various goals and ideas that are already floating around the community. It proposes a concrete potential path for how we might get from where we are today in mechanistic interpretability to a world where we can meaningfully use it to improve AI safety.
Key frameworks for understanding the agenda
Framework 1: The three steps of mechanistic interpretability
I think of mechanistic interpretability in terms of three steps:
Figure 1: The three steps of Mechanistic Interpretability
The three steps of mechanistic interpretability[1]:
1. Mathematical description: In the first step, we break the neural network into constituent parts, where the parts are simply unlabelled mathematical objects. These may be e.g. neurons, polytopes, circuits, feature directions (identified using SVD/NMF/SAEs), individual parameters, singular vectors of the weight matrices, or other subcomponents of a network.
2. Semantic description: Next, we generate semantic interpretations of the mathematical objects (e.g. through feature labeling). In other words, we try to build a conceptual model of what each component of the network does.
3. Validation: We need to validate our explanations to ensure they make good predictions about network behavior. For instance, we should be able to predict that ablating a feature with a purported ‘meaning’ (such as the 'noun gender feature') will have certain predictable effects that make sense given its purported meaning (such as the network becoming unable to assign the appropriate definitive article to nouns). If our explanations can’t be validated, then we need to identify new mathematical objects and/or find better semantic descriptions.
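As a toy illustration of the validation step (my own sketch, not a method prescribed by this agenda), one common check is to project a purported feature direction out of the activations and confirm that the behavioural effect matches the semantic label:

```python
# Toy sketch of validation-by-ablation; the setup is assumed, not prescribed.
import torch

def ablate_direction(acts: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of the activations along a purported feature direction."""
    d = direction / direction.norm()
    return acts - (acts @ d)[..., None] * d

# If `direction` really encodes 'noun gender', ablating it (e.g. via a forward
# hook at the relevant layer) should specifically degrade gender-article
# agreement, and leave unrelated behaviour roughly intact.
```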
The field of mechanistic interpretability has repeated this three-step cycle a few times, cycling through explanations given in terms of neurons, then other objects such as SVD/NMF directions or polytopes, and most recently SAE directions.
My research over the last couple of years has focused primarily on identifying the right mathematical objects for mechanistic explanations. I expect there’s still plenty of work to do on this step in the next two years or so (more on this later). To guide intuitions about how I plan to pursue this, it’s important to understand what makes some mathematical objects better than others. For this, we have to look at the description accuracy vs. description length tradeoff.
Framework 2: The description accuracy vs. description length tradeoff
You would feel pretty dissatisfied if you asked someone for a mechanistic explanation of a neural network and they proceeded to read out the float values of the weights. But why is this dissatisfying? Two reasons:
When describing the mechanisms of any system, be it an engine, a solar system, or a neural network, there is always a tradeoff between description accuracy and description length. The network is the most accurate mathematical description of itself, but it has a very long mathematical description length. And it isn’t a semantic description at all, which makes it difficult to understand, because we can’t easily intuit raw mathematical descriptions. To understand what the weights in the network ‘mean’, we need semantic descriptions[2].
Part of our job in mechanistic interpretability (and the framework used in this agenda) is to push the Pareto frontier of current mechanistic interpretability methods toward methods that give us the best tradeoff between description accuracy and description length. We’re therefore not only optimizing for accurate descriptions; we’re also optimizing for shorter descriptions. In other words, we want to find objects that admit mathematical descriptions that use as few objects as possible but that capture as much of what the network is doing as possible. Furthermore, we want short semantic descriptions for these objects, such that we need few words or concepts to describe what they do.
Figure 2: Left: The tradeoff between description accuracy and description length, where mechanistic interpretability progress moves the Pareto frontier of our methods closer to the optimal tradeoff. Right: Current methods aren't yet good enough, in that they can't produce accurate-enough and/or short-enough descriptions.
To summarize, we’re in fact optimizing our interpretability methods according to four constraints here:
Mathematical description accuracy - How good the approximation of the original network’s behaviour is;
Mathematical description length - How many mathematical objects the network is decomposed into;
Semantic description accuracy - How good the predictions made by the conceptual model of the network are;
Semantic description length - How many words/concepts are needed to define the conceptual model of the network.
Inadequacy according to at least one of these constraints has been the downfall of several previous interpretability approaches:
Non-mechanistic approaches, such as attribution maps (e.g. Simonyan et al., 2013), have often been demonstrated to yield misleading (low accuracy) semantic descriptions (Adebayo et al., 2018; Kindermans et al., 2017).
Using neurons as the mathematical objects to interpret (e.g. Olah et al., 2020) yields too-long mathematical descriptions and even longer semantic descriptions, due to polysemanticity.
Using SVD/NMF/ICA directions (e.g. Schubert et al., 2021; Voss et al., 2021) instead of neurons arguably improves the mathematical description length, but the semantic description length is still too long, due to polysemanticity.
Using polytopes (Balestriero and Baraniuk, 2018; Black et al., 2022) as the fundamental mathematical object yields much too long mathematical descriptions[3], even if they are in some sense ‘more accurate’ with regard to the network’s nonlinear structure than directions.
This leads us to one of the core methods in this agenda that so far appears to perform well according to our four constraints: sparse autoencoders (SAEs).
The unreasonable effectiveness of SAEs for mechanistic interpretability
SAEs have risen in popularity over the last year as a candidate solution to the problem of superposition in mechanistic interpretability (Elhage et al., 2022; Sharkey et al., 2022; Cunningham et al., 2023; Bricken et al., 2023).
SAEs are very simple. They consist of an encoder (which is just a linear transformation followed by a nonlinear activation function) and a decoder (or ‘dictionary’) whose features are constrained to have fixed length. The loss function used to train them has two components: (1) the reconstruction loss, so that their output approximates their input; and (2) the sparsity loss, which pushes the encoder outputs to be sparse.
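To make this concrete, here is a minimal sketch of an SAE in PyTorch. The details (ReLU activation, unit-norm dictionary columns, the `d_dict` name, the L1 coefficient) are illustrative assumptions rather than the exact setup of any of the papers cited above.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_dict: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_dict)              # linear map + nonlinearity
        self.decoder = nn.Linear(d_dict, d_model, bias=False)  # the 'dictionary'

    def forward(self, x):
        f = F.relu(self.encoder(x))  # sparse feature activations
        # Constrain each dictionary feature (a column of the decoder) to unit length.
        w = self.decoder.weight / self.decoder.weight.norm(dim=0, keepdim=True)
        return f @ w.T, f            # reconstruction, feature activations

def sae_loss(x, x_hat, f, l1_coeff=1e-3):
    recon = F.mse_loss(x_hat, x)          # (1) reconstruction loss
    sparsity = f.abs().sum(-1).mean()     # (2) sparsity (L1) penalty
    return recon + l1_coeff * sparsity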
I harp on about SAEs so much that it’s become a point of personal embarrassment. But the reason is that SAEs capture so much of what we want in a mechanistic interpretability method:
The reconstruction loss trains the SAE features to approximate what the network does, thus optimizing for mathematical description accuracy.
The sparsity penalty trains the SAE to activate fewer features for any given datapoint, thus optimizing for shorter mathematical description length.
The features identified by SAEs appear more monosemantic than those identified by any other method so far (Cunningham et al., 2023; Bricken et al., 2023). And unlike clustering, SAEs factorize the network’s activations into compositional components, which means they yield modular descriptions. For both these reasons, they perform well according to semantic description length.
It would be nice to have a formal justification for why we should expect sparsification to yield short semantic descriptions. Currently, the justification is simply that it appears to work and a vague assumption about the data distribution containing sparse features. I would support work that critically examines this assumption (though I don't currently intend to work on it directly), since it may yield a better criterion to optimize than simply ‘sparsity’ or may yield even better interpretability methods than SAEs.
The last selling point of SAEs that I'll mention is that the SAE architecture and training method are very flexible: They lend themselves to variants that can be used for much more than merely identifying features in activations. For instance, they could be used to identify interactions between features in adjacent layers (sparse transcoders) or could potentially be used to identify whole circuits (meta-SAEs). We’ll have more to say about transcoders and meta-SAEs later.
Framework 3: Big data-driven science vs. Hypothesis-driven science
The last framework driving this agenda is a piece of ‘science ideology’.
In the last few decades, some branches of science have radically changed. They’ve moved away from purely hypothesis-driven science toward a ‘big data’-driven paradigm.
In hypothesis-driven science, you make an hypothesis about some phenomenon, then collect data that tests the hypothesis (e.g. through experiments or surveys). Think ‘testing general relativity’; ‘testing whether ocean temperature affects atmospheric sulfur levels’; or ‘testing whether smoking causes lung cancer’, etc.
Big Data-driven science does things differently. If Big Data-driven science had a motto, it’d be “Collect data first, ask questions later”. Big Data-driven science collects large datasets, then computationally models the structure in this data. The structure of those computational models suggests hypotheses that can be tested in the traditional way. The Big Data-driven approach has thrived in domains of science where the objects of study are too big, complex, or messy for humans to have much of a chance of comprehending them intuitively, such as genetics, computational neuroscience, or proteomics.
In mechanistic interpretability, I view work such as “Interpretability in the Wild: A Circuit for Indirect Object Identification in GPT-2 small” (Wang et al., 2023) as emblematic of ‘hypothesis-driven science’. They identified a task (‘indirect object identification’ - IOI) and asked if they could identify circuits of nodes (attention heads at particular token positions) that performed this task on a dataset they constructed. This was a very solid contribution to the field. However, to my personal research taste it felt like the wrong way to approach mechanistic interpretability in a few ways:
Is IOI a ‘task’ from the network’s perspective? Does it chop up tasks in the same way?
Are the objects studied here (attention heads at particular token indices) fundamental objects from the network’s perspective? Are any objects missing?
If we studied a different artificial dataset for a different task, would we come to different conclusions about which heads do what?
To me, it felt like coming at mechanistic interpretability from a human perspective when, instead, we should be coming at it from the network’s perspective:
We should identify tasks the way the network breaks up taskspace, instead of choosing individual tasks ourselves;
Rather than choosing parts of the distribution that we think might explain the most about an hypothesis we’re currently evaluating, we should look at the behavior of network components over the whole distribution and ‘let the network decide’ which are the relevant sub-distributions;
We should make hypotheses in terms of objects that the network considers fundamental, rather than deciding for ourselves what the fundamental objects are.
I contend that mechanistic interpretability is a domain that needs a Big Data-driven approach more than usual. Neural networks are too big, too messy, too unintuitive to comprehend unless we map out their components in a principled way. Without mapping the space first, we are flying blind and are bound to get lost. To be absolutely clear, Big Data-driven science does not replace hypothesis-driven science; it just augments hypothesis formation and testing. But I think that without this augmentation, mechanistic interpretability is doomed to flounder (see also Wentworth on this theme).
Fortunately, neural networks are very well suited to Big Data-driven science, because it is so easy to collect data from them. It's even easy to directly collect data about their causal structure (i.e. information about their gradients and architecture), unlike in most areas of science!
The power of Big Data-driven science is a background assumption for much of my research. For me, it motivated the search for SAEs as a scalable, unsupervised structure-finding method, which can be applied to whole networks and datasets, and which might help reveal the objects that the network considers fundamental. It privileges big datasets that contain all the things that a network does such that, when we analyze these big datasets, the interpretable structure of the network naturally falls out thanks to unsupervised methods. And this bit of science ideology also motivates most of the objectives in the agenda.
Sparsify: The Agenda
I envision a mechanistic interpretability tech tree something like this:
Figure 3: An outline of an interpretability tech tree. See also Hubinger (2022) for a related perspective.
I’ll explain what each of the objectives here means in more detail below. The main convergent objective of the agenda is satisfactory whole-network mechanistic interpretability, which I think could open up a range of safety-relevant applications. Most of the other objectives can be framed as trying to improve our mathematical and semantic descriptions by improving their accuracy vs. length Pareto frontiers.
The objectives for my research over the next 2+ years are the following (with high-variance estimates for timelines that feel somewhat achievable for a community of researchers):
Objective 1: Improved SAEs: Get good at taking features out of superposition using SAEs by pushing the Pareto frontier of our mathematical descriptions closer to optimal and reducing computational costs. (Starting in 0 months - until 1y)
Objective 2: Decompiled networks: Networks that do computation in the feature basis. (Starting in 2 months - until 1.5y)
Objective 3: Abstraction above raw decompilations: Identify circuits and, if necessary for short enough descriptions, make principled abstractions above the mechanistic layer of abstraction. (Starting in 3 months - until 2y)
Objective 4: Deep Description: Going beyond automated feature labeling by integrating different kinds of description together. (Starting in 6 months - until future)
Objective 5: Applications of mechanistic interpretability: Including mechanistic interpretability-based evals; alignment method profiling; capability prediction; and, potentially, robust-to-training mechanistic interpretability. (Starting in 6 months - until future)
Objective 1: Improving SAEs
I think there’s lots of room for improvement on current SAEs. In particular,
Benchmarking SAEs
Fixing SAE pathologies
Applying SAEs to attention
Better hyperparameter selection methods
Computationally efficient sparse coding
Benchmarking SAEs
At present, it’s difficult to know when SAEs should be considered ‘good’. We need to devise principled metrics and standardized ways to compare them. This will be important both for identifying good SAEs trained on models and for developing improvements on SAEs and SAE training methods.
Fixing SAE pathologies
Current SAEs exhibit a few pathologies that make them suboptimal as mathematical descriptions in terms of both description accuracy and description length. My collaborators and I (through MATS and Apollo Research) are working on a few posts that aim to address them. Here we share an overview of a few early results:
Finding functionally relevant features using e2e SAEs (link) (Dan Braun, Jordan Taylor, Nix Goldowsky-Dill, Lee Sharkey): There is no guarantee that the directions SAEs find are ‘functionally relevant’ to the network; SAEs currently just find directions that reconstruct a layer’s activations well while being sparse. We demonstrate that the standard reconstruction loss used to train SAEs is not optimal for learning functionally relevant features, and show that an end-to-end (e2e) loss function, which reconstructs activations and distributions in later layers, improves the functional relevance of the features learned. End-to-end training means a smaller, more accurate set of SAE features can explain the same amount of network function, implying that the typical way of training SAEs is suboptimal according to mathematical description accuracy and length.
Choosing better sparsity penalties than L1 (upcoming post - Ben Wright & Lee Sharkey): There is reason to believe that L1 is a suboptimal sparsity penalty: In toy datasets, where we know the ground truth features, an L1 penalty leads to too many features being learned compared with the ground truth. This leads to suboptimal mathematical description length. We propose a simple fix: Use an Lp penalty with 0 < p < 1 instead of L1 (see the sketch after this list), which seems to be a Pareto improvement over L1 (at least in some real models, though results might be mixed) in terms of the number of features required to achieve a given reconstruction error.
Addressing feature suppression (link) (Ben Wright & Lee Sharkey): When SAE encoders guess how much of a feature is present in their input, they systematically undershoot. This is due to their optimizing both reconstruction and L1, resulting in suboptimal mathematical description accuracy. Ben looked at a way to fix this undershooting. The success, while real, was modest, and there are probably ways to improve upon the results of this work.
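As a concrete illustration of the kind of penalty swap described above, here is a hedged sketch of an Lp sparsity penalty with 0 < p < 1; the choice of p, the epsilon, and the function name are all illustrative, not taken from the upcoming post.

import torch

def lp_penalty(f: torch.Tensor, p: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    # Sum |f_i|^p over features, averaged over the batch. For 0 < p < 1 this
    # penalizes splitting one feature's activation across many small features
    # more heavily than L1 does (L1 is indifferent to such splits).
    # eps avoids the infinite gradient of |f|^p at exactly zero.
    return (f.abs() + eps).pow(p).sum(dim=-1).mean()

# Drop-in replacement for the L1 term in an SAE loss:
# loss = recon + coeff * lp_penalty(f, p=0.5)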
Applying SAEs to attention
Some work (unrelated to my collaborators and me) demonstrates that SAEs work reasonably well when applied to attention block outputs (Kissane et al., 2024). However, so far, the inner workings of attention blocks remain somewhat enigmatic, and attention head superposition (Jermyn et al., 2023) remains unresolved.
How best to apply SAE-like methods to decompose attention blocks? We have investigated two approaches in parallel:
Gated Attention Blocks: Preliminary Progress toward Removing Attention Head Superposition (link) (Chris Mathwin, Dennis Akar, Lee Sharkey): Here, Chris Mathwin studies a particular kind of attention head superposition, involving constructive and destructive interference between the outputs of different attention heads, studied by Jermyn et al. (2023). The post introduces a gated attention block, a type of transcoder (see Objective 2 below for further explanation) for attention blocks, that resolves this kind of attention head superposition in a toy model.
Decomposing attention block jobs and identifying QK-circuit features with sparse transcoders (link) (Keith Wynroe and Lee Sharkey): Keith Wynroe has been taking a different approach, using transcoders that are more similar to vanilla SAEs than Chris’ gated attention blocks. In Keith’s work, the features learned are in the QK circuit; they are not trained to reconstruct activations, but instead to reconstruct the attention pattern. We use these features to construct a third-order tensor whose structure (we hope) reflects the various QK-‘jobs’ done by the attention block. Another type of sparse factorization is then used on this ‘attention head jobs tensor’ to break it into (what we hope will be) individual attention block ‘jobs’.
Better hyperparameter selection methods
Training SAEs requires selecting multiple hyperparameters. We don’t know how hyperparameters interact with each other, or how they interact with different data distributions. Thus training SAEs often involves sweeps over hyperparameters to find good combinations. Understanding the relationships between different hyperparameters (similar to Yang et al., (2022)) would let us skip expensive hyperparameter sweeps. This is especially important as we scale our interpretability methods to frontier models, where it may be prohibitively expensive to run SAE hyperparameter sweeps.
Computationally efficient sparse coding
There may be additional tips and tricks for training SAEs in more efficient ways. For instance, informed initialization schemes (such as data initialization or resampling) may improve efficiency. Or perhaps particular methods of data preprocessing might help. There is considerable room for exploration.
On a higher level, there probably exist more efficient sparse coding methods than SAEs trained with SGD. If there are better methods, it’s important that the community not get stuck in a local optimum; we should look for these better methods.
In order to be in a position where the next objective is completable, we would need to see progress in the above areas. Some, like better hyperparameter selection and computational efficiency, would yield quality-of-life improvements. Others, like finding functionally relevant features and fixing feature suppression, are more important: they are essential before we can be confident in our descriptions. Others still are even more essential for progress: Unless we can decompose attention blocks in a satisfying way, we will not be able to complete the next objective, which is to fully ‘decompile networks’.
Objective 2: Decompiled networks
Once we’ve identified the functional units of a neural network, then we can decompile it by making a version of the network where superposition has been removed. In decompiled networks, the forward pass does inference in the interpretable feature basis.
Suppose we have trained e2eSAEs in each layer and identified the functional units. We then want to identify the ‘interaction graph’ that describes how features interact between layers. This is where ‘transcoders’ come in. Transcoders, in contrast to autoencoders, are trained to produce different outputs than their inputs. To get the interaction graph between features in adjacent layers, we would train (or otherwise find, perhaps through cleverly transforming the original network's parameters into sparse feature space) a set of transcoders to produce the same output and intermediate feature activations as in the original network. The result is a sparse model that we can use for inference where we don’t need to transform our activations to the original neuron basis; the decompiled network does inference entirely in the sparse feature basis.
Transcoders may have a variety of architectures, such as a simple matrix (as in Riggs et al., 2024 and Marks et al., 2024). Speculatively, we may prefer using something else, such as another SAE architecture (as briefly explored in Riggs et al., 2024). Unlike a purely linear transcoder, an SAE-architecture-transcoder would be able to model nonlinear feature interactions.
It’s worth noting that such a transcoder's sparsely activating features would be ‘interaction features’, which identify particular combinations of sparse features in one layer that activate particular combinations of sparse features in the next layer. The weights of these interaction features are the ‘interaction strengths’ between features. You can thus study the causal influence between features in adjacent layers by inspecting the weights of the transcoder, without even needing to perform causal intervention experiments. The transcoder’s interaction features thus define the ‘atomic units’ of counterfactual explanations for the conditions under which particular features in one layer would activate features in an adjacent layer.
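To illustrate, here is a minimal sketch of a layer-to-layer transcoder with an SAE-style architecture, along the lines speculated about above. The shapes, names, and training target (the next layer’s SAE feature activations) are my assumptions about one way this could look, not a method taken from the cited papers.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseTranscoder(nn.Module):
    # Maps SAE feature activations at layer l to predicted SAE feature
    # activations at layer l+1 (unlike an autoencoder, input != target).
    def __init__(self, d_feat_in: int, d_interactions: int, d_feat_out: int):
        super().__init__()
        self.encoder = nn.Linear(d_feat_in, d_interactions)
        self.decoder = nn.Linear(d_interactions, d_feat_out, bias=False)

    def forward(self, f_l):
        # Sparsely activating 'interaction features' between adjacent layers.
        interactions = F.relu(self.encoder(f_l))
        return self.decoder(interactions), interactions

# Training might minimize, e.g.:
#   F.mse_loss(pred_f_next, true_f_next) + coeff * interactions.abs().sum(-1).mean()
# The learned weights can then be read off as 'interaction strengths'.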
Figure 4: A proposed process for decompiling networks. We begin in Step 0 with the network we wish to decompile. In Step 1, we train SAEs at every layer. Then, in Step 2, we train transcoders (end-to-end) to predict the feature activations in one layer conditioned on the feature activations of the previous layer. This yields a decompiled network - a network whose forward pass is entirely in the feature basis.
Policy goals for network decompilation
Once we as a community get network decompilation working, we hope that it becomes a standard for developers of big models to produce decompiled versions of their networks alongside the original, 'compiled' networks. Some of the arguments for such a standard are as follows:
Certain highly capable models will be integrated widely into society and used for economically gainful activities. This comes with some risks, which would be reduced by the existence of decompiled models that are easier to understand.
Developers of large models are best placed to train the decompiled versions themselves, since they have access to the training resources and infrastructure. This standard would mean that, as neural networks scale, auditors and researchers would always have a version of the network that is ready for interpretation.
It is not unreasonable for developers to internalize some of the costs associated with big models by training interpretable decompiled versions of them in addition to the base models, so that researchers can work on ensuring that the original model is safe.
Standardized artifacts enable standardized tests: Evaluators could, for example, run standardized tests for particular knowledge in the network, or test for signatures of dangerous cognitive capabilities, or test for particular biases.
Standardized artifacts enable cumulative policy development. For instance, regulators could begin designing regulations that require the networks to have particular internal properties, as identified in their decompiled networks. We might even be able to graduate from risk-management-based AI safety assurances to compliance-based AI safety assurances.
Objective 3: Abstraction above raw decompilations
Although we expect decompiled neural networks to be much more interpretable than the original networks, we may wish to engage in further abstractions for two reasons:
Circuit identification: We may wish to identify ‘circuits’, i.e. modules within a neural network that span multiple layers, consisting of groups of causally interacting features that activate together to serve a particular function. If we identify circuits in a principled way, then they represent a natural way to study groups of features and interactions in the network.
Shorter semantic descriptions: If semantic descriptions of neural networks in terms of the lowest-level features are too long, then we need to identify the right abstractions for our lowest-level objects and then describe networks one level of abstraction up.
Figure 5: Left: A potential process by which we could abstract over features to identify circuits and interactions between circuits. Right: Abstraction approaches such as meta-SAEs may represent methods that would permit less accurate but shorter descriptions.
The best abstractions are those that reduce [mathematical or semantic] description length as much as possible while sacrificing as little [mathematical or semantic] description accuracy as possible. We previously used sparse coding for this exact purpose (see the section ‘The unreasonable effectiveness of SAEs for mechanistic interpretability’), so perhaps we can use it for that purpose again. So, at the risk of losing all personal credibility by suggesting it, SAEs may be reusable on this level of abstraction[4]. It may be possible to train meta-SAEs to identify groups of transcoder features (which represent interactions between SAE features) that commonly activate together in different layers of the network (Figure 5). The transcoder features in different layers could be concatenated together to achieve this, echoing the approach taken by Yun et al. (2021) (although they did not apply sparse coding to interactions between features in decompiled networks, only to raw activations at each layer). Going further still, it may be possible to climb to higher levels of abstraction using further sparse coding, which might describe interactions between circuits, and so on.
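As a rough sketch of the concatenation idea, assuming the SparseAutoencoder sketch from the SAE section above is in scope (all dimensions here are made up for illustration):

import torch

# Per-layer transcoder feature activations for a batch of 64 datapoints
# (layer widths are arbitrary illustrative numbers).
acts = [torch.randn(64, d) for d in (512, 512, 1024)]
x = torch.cat(acts, dim=-1)  # shape (64, 2048)

# A meta-SAE is then just an SAE trained on the concatenated activations;
# its features would be candidate cross-layer groupings, i.e. circuits.
meta_sae = SparseAutoencoder(d_model=x.shape[-1], d_dict=4 * x.shape[-1])
x_hat, circuits = meta_sae(x)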
Objective 4: Deep Description
So far in this agenda, we haven’t really done any (semantic) ‘interpretation’ of networks. We’ve simply decompiled the networks, putting them in a format that’s easier to interpret. Now we’re ready to start semantically describing what the different parts of the decompiled network actually do.
In mechanistic interpretability, we want a mechanistic description of all the network’s features and their interactions. On a high level, it’s important to ask what we’re actually looking for here. What is a mechanistic description of a feature?
A complete mechanistic description of a feature is ideally a description of what causes it to activate and what it subsequently does. Sometimes it makes sense to describe what a feature does in terms of which kinds of input data make it activate (e.g. feature visualization, Olah et al., 2017). Other times it makes more sense to describe what a feature does in terms of the output it tends to lead to. Other times still, it is hard or incomplete to describe things in terms of either the input or output, and instead it only makes sense to describe what a feature does in terms of other hidden features.
Figure 6: Examples of different kinds of descriptions of features in terms of other features.
There exists some previous work that aims to automate the labeling of features (e.g. Bills et al., 2023). But this work has only described neurons in terms of either the input or output of the network. These descriptions are shallow. Instead, we want deep descriptions. Deep descriptions iteratively build on shallow descriptions and bring in information about how features connect together and participate in particular circuits together.
Early ventures into deep description have already been made, but there is potentially much, much further to go. One of these early ventures is Cammarata et al. (2021) (Curve Circuits). In this work, they used feature visualization to get a first pass of shallow descriptions of all the relevant neurons. In the next iteration of description, they showed how features in one layer get used by particular weights to construct features in the next layer; in doing so, they showed that some ‘curve features’ were not merely excited by curves in particular orientations, but also inhibited by curves in opposite orientations, thus adding more semantic detail.
Figure 7: Left: A closer look at a curve detector reveals that it is not just a curve detector, but also an anti-detector of curves of the opposite orientation. Right: Deep description methods would yield longer semantic descriptions, but they would be more accurate.
This foray into deep description showed how we can use descriptions to build on each other iteratively. But it was only an initial step into deep description. This example only explained a hidden feature (a curve) in terms of features (early curves) in a previous layer; it didn’t, for instance, ‘go backward’, explaining early curves in terms of the later curves they participate in. Being so early in the network, this might not be as informative an exercise as going in the forward direction. But there will exist features, particularly those toward the output of the network, where it makes more sense to go in the backward direction, explaining hidden features in terms of their downstream effects.
What description depths might we be able to achieve if we automate the description process, and what might automating such a process look like? Here is a sketch for how we might automate deeper description.
A sketch of an automated process for deep description: The Iterative-Forward-Backwards procedure
This procedure has three loops. Intuitively:
The ‘Forward loop’ describes features in one layer in terms of features in earlier layers or in terms of the data. It describes what causes feature X to fire in terms of earlier features.
The ‘Backward loop’ describes features in one layer in terms of features in later layers or in terms of the output. It describes the effects in later layers caused by feature X activating.
The ‘Iterative loop’ lets us use the results of previous cycles to iteratively refine our descriptions based on descriptions that have previously been added, developed, or clarified.
Suppose we have a network with L layers (where layer 0 is the input data and L is the output layer) and a number of repeats for the iterative loop, R. Then, slightly more formally:
for r in range(R):                    # The Iterative loop
    for i in range(0, L + 1):         # The Forward loop
        for j in range(1, L + 1):
            if i < j:
                # Explain the features in layer j in terms of the
                # (earlier) features in layer i.
                explain(layer=j, in_terms_of=i)
    for k in range(L, 0, -1):         # The Backward loop
        for j_prime in range(L, -1, -1):
            if k > j_prime:
                # Explain the features in layer j_prime in terms of the
                # (later) features in layer k.
                explain(layer=j_prime, in_terms_of=k)
When we say ‘Explain feature X in terms of features Y’, we’re leaving a lot undefined. This step is doing a lot of work. It may take several forms. For instance:
It potentially involves looking at the max activating samples of feature X. If Y is the data, then we’d look at which data caused X to activate a lot. But note that Y may be hidden features too.
It could involve testing hypotheses about our descriptions of features X in terms of Y. For example, we could look at the features X and the weights that connect features Y to them, and make predictions about the activations of features Y that would cause features X to activate, as in Bills et al. (2023).
It could involve predicting the outcomes of particular causal interventions on features, as in causal scrubbing.
To add to the intuitions of what this procedure is doing, it is helpful to describe previous interpretability methods in terms of it (Figure 6):
Feature visualization-based methods (e.g. activation atlases or max-activating dataset examples) are instances of one part of the forward loop, where layer l is explained in terms of layer 0 (the input layer).
The logit lens is an instance of one part of the backward loop, where features in hidden layer j' are explained in terms of the output they correspond to.
The low-level explanations of curve circuits in Cammarata et al. (2021) are instances of one step of the forward loop, where hidden layer features are explained in terms of earlier hidden layer features. This occurs during the first iterative loop, since the explanations for each feature are given only in terms of layer 0 (the input data). Subsequent iterative loops would be able to make use of much more information.
I expect the procedure that we end up doing to look substantially different from this (and include a lot more detail). But this sketch is merely supposed to point toward algorithms that could let us automate a lot of semantic description in interpretability.
Objective 5: Mechanistic interpretability-based evals & other applications of mechanistic interpretability
If we figure out how to automate deep description of decompiled networks, then we’ll have satisfactory mechanistic interpretability. This could be used for a number of applications, including:
Mechanistic interpretability-based model evaluations: We can develop red-teaming procedures and benchmarks based on our mechanistic interpretability methods to assess the safety and ethics of the models’ internal representations and learned algorithms. These would be a type of ‘understanding-based model evals’. Not only could these evals permit new kinds of model capability evals, they may also permit more general alignment evals, where we can make good predictions of how models would behave on a much wider range of circumstances than current behavioral model evals.
We think of mech-interp-based model evaluations as falling into two broad categories:
Mechanistic interpretability-based model red teaming: Red-teaming AI models involves trying to find inputs that fail some safety- or security-based test. Currently, most red-teaming involves searching through input space (or latent space) to find inputs (or potential inputs) that lead to concerning outputs (e.g. Perez et al., 2022). Mechinterp-based evals can aim to do better in a couple of ways: 1) Mechanistic interpretability-based evals could try to find inputs that lead to concerning combinations of features. For example, we could try to find inputs that elicit deception that we wouldn’t have been able to detect using behavioral tests alone. 2) Mechanistic interpretability-based evals don’t have to look for inputs that cause concerning hidden feature activations or outputs (which may be difficult to enumerate for large networks). We can find (earlier) hidden features that activate concerning (later) hidden features or outputs. We could subsequently use these earlier hidden features to find even earlier hidden features that cause concerning behavior. This might even let us work backwards from hidden features, potentially using this approach as a tool to find inputs that lead to concerning behavior.
Mechanistic interpretability-based model benchmarking: Behavioral benchmarks are standardized sets of tests where, given a certain input, the output of a model is evaluated. If it’s the ‘right’ kind of output (according to some evaluation criteria), then the model does well on the benchmark. In mechanistic interpretability-based benchmarks, instead of assessing outputs, we assess internal activations. We’d similarly use some evaluation criteria to determine whether the input caused the ‘right’ kind of internal activations to occur.
Alignment method evaluations: When we have mechanistic interpretability-based model evaluations to assess models’ safety properties, we would then be able to better compare the strengths and weaknesses of different alignment methods. We may be able to strengthen different approaches by using mechinterp-based model evals to, e.g., identify key gaps in the finetuning data that lead to failures of alignment.
Targeted interventions on models: When we understand how models work, it seems likely that we can use this information to make targeted interventions on them. For instance, we may be able to:
Accurately ablate specific pieces of knowledge (e.g. for anonymization purposes or for removing unsafe capabilities);
Whitelist only a small set of capabilities, giving us better guarantees about how models will behave on specific distributions;
Make better probes that use features (i.e. causal components of the network’s internal mechanisms) rather than probes identified using correlations on a training dataset; or
Identify better steering vectors for activation steering, thus affording us more control over model behavior.
Capability prediction: One of the problems with behavioral evals is that just because we can’t get a model to behave badly or exhibit a certain capability doesn’t mean there don’t exist ways to get it to do so; we just haven’t found them yet. In other words, ‘absence of evidence is not evidence of absence’. Mechinterp-based evals might alleviate this problem by providing us with a way to predict capabilities and more convincingly determine whether systems could plausibly exhibit dangerous behaviors under some circumstances. For instance, if we observe that a model has all the requisite representations for particular cyber offensive capabilities, we could predict that there might exist some contexts where the model would use those capabilities, even though we haven’t yet identified a way to elicit them.
Mechanistic interpretability during training: One of the barriers to doing many mechinterp-based evals during training is that it first involves interpreting a snapshot of the model. By default, this might be too expensive to do with high frequency. Nevertheless, we’d like to be able to do interpretability during training in order to, e.g., better catch misalignment or dangerous capabilities before risks are realized, or to forecast discontinuities in training. We would therefore like to do mechanistic interpretability as frequently as possible, which will require efficient methods. In the long term, a potential approach might be ‘stateful interpretability’, where, e.g., our semantic descriptions of features and interactions are stored as embedding vectors (a ‘state’) and, conditioned on a gradient update of the model being trained, we use another model to incrementally update the interpretation embeddings alongside the model updates.
Robust-to-training mechanistic interpretability: Once we have sufficiently good and sufficiently cheap mechanistic interpretability, one possible use is to 'train models against the interpretability methods'. For example, if we identify features or circuits that we don’t like, we could design loss functions (or other feedback functions) that penalize the network for having them. One risk is that our interpretability methods are not ‘robust to training’ against them (Hubinger et al., 2022), so networks might simply learn to represent the features or circuits in some other, uninterpretable way (Sharkey, 2022). It remains an open question whether future interpretability methods will be robust enough for this. This debate can probably be resolved empirically before their potential use in highly capable, potentially deceptive models.
I think AI safety would be in a pretty great place if we achieved these objectives. And, to me, most feel within reach - even on reasonably short timelines - though not for a single researcher or even a single research team. It will require a concentrated research program and an ecosystem of researchers. I hope some of them will find this roadmap useful. I plan to work on it over the next few years, although some deviations are inevitable. And if others are interested in collaborating on parts of it, I'd love to hear from you! Send me a message or join the #sparse-autoencoders channel on the Open Source Mechanistic Interpretability Slack workspace.
Acknowledgements: I'm very grateful for helpful discussions and useful feedback and comments on previous drafts, which greatly improved the quality of this post, from Marius Hobbhahn, Daniel Braun, Lucius Bushnaq, Stefan Heimersheim, Jérémy Scheurer, Jordan Taylor, Jake Mendel, and Nix Goldowsky-Dill.
^
The analogy between mechanistic interpretability and software reverse engineering
Mechanistic interpretability has been compared to software reverse engineering, where you start with a compiled program binary and try to reconstruct the software’s source code. The analogy is that a neural network is a program that we have to decompile and reverse engineer. On a high level, software reverse engineering comprises three steps, which (not coincidentally) neatly map onto the three steps of mechanistic interpretability:
The three steps of Software Reverse engineering
1) Information extraction: In the first step, you gather what information you can that might help you understand what the program is doing. This might involve the use of a ‘disassembler’, which breaks the program into its constituent parts by converting binary code into assembly code, or converting machine language into a user-friendly format (source). Or it may involve gathering other information, such as design documents.
2) Conceptual modeling: Using the gathered information, you create a conceptual model of what the program is doing. Software reverse engineers may implement this conceptual model in code that they write themselves, or as a flow diagram.
3) Review: Then the conceptual model is validated to check how well it explains the original program. If it performs well, then there’s no need to keep going. If it performs poorly, then either new information will need to be extracted and/or a new conceptual model built.
^
To the best of my understanding, ARC's work on heuristic arguments could be described as aiming to formalize semantic description. This seems like a very good idea.
^
Previous interpretability research that aimed to use polytopes as the unit of explanation (Black et al., 2022) grouped polytopes using clustering methods, which, unlike SAEs, offer no way to ‘factorize’ a network’s function into compositional components. This yielded too-long mathematical descriptions. However, it may be possible to group polytopes using other methods that are more compositional than clustering.
^
Although meta-SAEs might be useful here, it may not be advisable to use them. The inputs to meta-SAEs may become too wide for computational tractability, for instance. Alternatively, there may simply be better tools available: Meta-SAEs are solving a slightly different optimization problem compared with base/feature-level SAEs; on the base level, they’re solving a sparse optimization problem (where we’re looking for sparsely activating features in neural activations); on the meta-SAE level, it’s a doubly sparse optimization problem (where we’re looking for sparsely activating combinations of sparse feature activations). It’s plausible that other unsupervised methods are better suited to this task.
We know that these words can be objectively defined based on physics. So why don't we have a recognised formal definition? And a Wikipedia article with such a definition?
Mostly out of curiosity, I've been looking into how cryptocurrency is taxed in the UK. It's not easy to get what I consider to be a full answer, but here's my current understanding, as far as I felt like looking into it. HMRC's internal cryptoassets manual is available but I didn't feel like reading it all, and some of it seems out of date (e.g. page CRYPTO22110 seems to have been written while Ethereum was in the process of transitioning from proof-of-work to proof-of-stake). I also have no particular reason to trust or distrust the non-government sources I use here. I am not any form of accountant and it would be surprising if I don't get anything wrong.
My impression is HMRC tends to be pretty tolerant of people making good faith mistakes? In that if they audit you and you underpaid, they'll make you pay what you owe but you won't get in any other trouble. Maybe they'd consider "I followed the advice of some blogger who explicitly said he wasn't an accountant" to be a good faith mistake? I dunno, but if you follow my advice and get audited, I'd love to hear what the outcome is.
After I published, reddit user ec265 pointed me at another article that seems more thorough than this one. I wouldn't have bothered writing this if I'd found that sooner. I didn't spot anywhere where it disagrees with me, which is good.
Capital gains tax
Very loosely speaking, capital gains is when you buy something, wait a bit, and then sell it for a different price than you bought it for. You have an allowance which in 2023-24 is £6,000, so you only pay on any gains you have above that. The rate is 10% or 20% depending on your income.
But with crypto, you might buy on multiple occasions, then sell only some of what you bought. Which specific coins did you sell? There's no fact of the matter.1 But the law has an opinion.
Crypto works like stocks here. For stocks HMRC explains how it works in a document titled HS283 Shares and Capital Gains Tax (2023), and there's also manual page CRYPTO22200 which agrees.
The rule is that when you sell coins in a particular currency, you sell them in the following order:
Any coins you bought that day;
Any coins you bought in the following 30 days;
Any coins you bought previously, averaged together as if you'd bought them all for the same price.
The "30 following days" thing is called the "bed and breakfasting" rule, and the point is to avoid wash sales where you try to deliberately pull forward a loss you haven't incurred yet incurred for tax purposes. Wikipedia says "Wash sale rules don't apply when stock is sold at a profit", but that doesn't seem to be true in the UK. The rule applies regardless of if you'd be otherwise selling for profit or loss.
The third bucket is called a "section 104 holding". Every time you buy coins, if they don't offset something in one of the other buckets, they go in a big pool together. You need to track the average purchase price of the coins in that pool, and when you sell, you take the purchase price to be that average. Selling doesn't affect the average purchase price of the bucket.
If there are transaction fees, they count towards the purchase price (i.e. increase the average price in the bucket) and against the sale price (i.e. decrease the profit you made). This detail isn't in HS283, but it's in a separately linked "example 3".
So suppose that at various (sufficiently distant) points in time, I
buy 0.1 BTC for £100;
buy 0.1 BTC for £110;
sell 0.15 BTC for £200;
buy 0.1 BTC for £300;
sell 0.15 BTC for £50;
and each of these had £5 in transaction fees.
Then my section 104 holding contains:
Initially empty.
Then, 0.1 BTC purchased at a total of £105, average £1050/BTC.
Then, 0.2 BTC purchased at a total of £220, average £1100/BTC.
Then, 0.05 BTC purchased at a total of £55, average £1100/BTC.
Here I sold 0.15 BTC purchased at a total of £165, and I sold them for £195 after fees, so that's £30 profit.
Then, 0.15 BTC purchased at a total of £360, average £2400/BTC.
Then, 0 BTC purchased at a total of £0, average meaningless.
Here I sold 0.15 BTC purchased at a total of £360, and I sold them for £45 after fees, so that's £315 loss.
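To make the pool arithmetic concrete, here's a minimal Python sketch of a section 104 holding (my own illustrative helper, not anything HMRC publishes; it ignores the same-day and 30-day buckets). It reproduces the numbers above:

class Section104Pool:
    # Coins pooled at their average purchase price, fees included.
    def __init__(self):
        self.quantity = 0.0  # coins held in the pool
        self.cost = 0.0      # total purchase cost in GBP, including fees

    def buy(self, qty, price, fee=0.0):
        self.quantity += qty
        self.cost += price + fee

    def sell(self, qty, price, fee=0.0):
        # Cost basis is the pool's average purchase price; selling doesn't
        # change that average.
        basis = self.cost * qty / self.quantity
        self.quantity -= qty
        self.cost -= basis
        return (price - fee) - basis  # capital gain (negative = loss)

pool = Section104Pool()
pool.buy(0.1, 100, fee=5)
pool.buy(0.1, 110, fee=5)
print(pool.sell(0.15, 200, fee=5))  # ~30.0, the £30 profit above
pool.buy(0.1, 300, fee=5)
print(pool.sell(0.15, 50, fee=5))   # ~-315.0, the £315 loss above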
For the same-day bucket, all buys get grouped together and all sells get grouped together. For the 30-day bucket, you match transactions one at a time, the earliest buy against the earliest sell. (Unclear if you get to group them by day; I don't see anything saying you do, but if you don't then interactions with the same-day rule get weird.)
So for example, suppose the middle three events above all happened on the same day. In that case, it would work out as:
My section 104 holding is initially empty.
Then, it contains 0.1 BTC purchased at a total of £105, average £1050/BTC.
Then we have three things happening on the same day.
Grouping buys together, I buy 0.2 BTC for £420, average £2100/BTC.
I sell 0.15 BTC from that bucket, which I bought for £315.
Sale price is £195 so that's a loss of £120.
The bucket now contains 0.05 BTC bought for £105, average £2100/BTC.
That bucket enters my section 104 holding. This now contains 0.15 BTC purchased at a total of £210, average £1400/BTC.
I sell my remaining BTC for £45, which is a loss of £165.
And if the middle three all happened within 30 days of each other, then:
My section 104 holding is initially empty.
Then, it contains 0.1 BTC purchased at a total of £105, average £1050/BTC.
Then, 0.2 BTC purchased at a total of £220, average £1100/BTC.
The subsequent buy and sell get matched:
I buy 0.1 BTC for £305 and sell it for £130, making a loss of £175.
I also sell 0.05 BTC for £65, that I'd bought at £55, making a profit of £10.
So in total that sale makes me a loss of £165, and the 30-day bucket contains -0.05 BTC purchased at £55.
That bucket enters my section 104 holding. This now contains 0.15 BTC purchased at a total of £165, average £1100/BTC.
I sell my remaining BTC for £45, which is a loss of £120.
In all cases my total loss is £285, which makes sense. But I might get taxed differently, if this happened over multiple tax years.
Some more edge cases:
I have no idea how these rules would apply if you're playing with options or short selling. I think those are both things you can do with crypto?
If you receive crypto as a gift, you count it as coming in at market price on the day you received it. I'm not sure exactly how that's meant to be calculated (on any given day, lots of buys and sells happened for lots of different prices on various different legible exchanges; and lots also happened outside of legible exchanges) but I assume if you google "historical bitcoin prices" and use a number you find there you're probably good. So it's as if you were gifted cash and used it to buy crypto.
Similarly, if you give it away as a gift, it's treated as disposing of it at market price on the day, as if you'd sold it for cash and gifted the cash.
I think in both the above cases, if you buy or sell below market price as a favor (to yourself or the buyer respectively) you still have to consider market price.
If you trade one coin for another, you treat it as disposing of the first for GBP and buying the second for GBP. Mark both the sell and the buy at the market price of the second, so that if you're somehow trading £1000 of one coin for £1200 of another, £200 of profits is taxable now. I assume you also count fees for the sell, reducing your profit now.
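Using the Section104Pool sketch from earlier (again, my own illustrative helper, with made-up numbers), a crypto-to-crypto trade might look like:

btc = Section104Pool()
eth = Section104Pool()

btc.buy(1.0, 1000)          # buy 1 BTC for £1000
# Later, trade the 1 BTC for ETH worth £1200 at the time of the trade.
# Treat it as selling the BTC for £1200 and buying the ETH for £1200:
gain = btc.sell(1.0, 1200)  # £200 of gains, taxable now
eth.buy(24.0, 1200)         # e.g. 24 ETH received (quantity is illustrative)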
Mining and staking
According to this site, mining and staking both count as income. (And so do capital gains, if you look like a professional trader.)
For mining, the market price at the time you receive the coins counts as miscellaneous income. You can deduct "reasonable expenses", whatever that means. (Price of hardware? Electricity?)
For staking, you can either count it as miscellaneous income or savings income. These two have different tax-free allowances. Unclear if you can count some as miscellaneous and some as savings to use both? Again you can deduct "reasonable expenses" whatever that means.
This reddit thread suggests "savings interest or miscellaneous income?" is just a grey area, in which case I'd expect HMRC to be pretty tolerant of you choosing either but kinda ಠ_ಠ if they notice you trying to use both. It links to manual page CRYPTO21200 which sounds to me like it's just miscellaneous income. ec265 agrees.
I think the normal way staking works is that to get income, you need to lock your coins up for some period of time. New coins you receive are automatically locked, and when you want to do anything with them, you have to unlock them. So do you count as earning the coins when they arrive, or when you first unlock them? (When you initiate the unlocking, or when it completes?) "When they arrive" sounds like a pain in the ass, that can happen every few days with no engagement on your part and a different market price every time. But "when you unlock" has the same problem as CGT: are you unlocking coins you locked, or coins you earned, or what?
I assume it's "when they arrive" and you just gotta deal with that. Coinbase lets you download transaction history, including all staking rewards with market price in GBP at the time of receipt, so that's not so bad. But I've also played around with staking with Trust Wallet and I can't immediately see a way to get staking history from that. Sadly I didn't earn enough to worry about.
For capital gains purposes, it sounds like both mining and staking count the same as if you'd bought the coins for market price at the time you received them. That would mean they can go in the same-day bucket or the B&B bucket, for matching against coins sold.
Are stablecoins an exception?
The point of a stablecoin is to track a currency exactly. If I have 1 USDC, I should always be able to trade that for 1 USD, and vice versa. So should you treat any holdings in USDC the same as you'd treat a bank account denominated in USD?
I think this is relevant for three reasons:
You don't need to worry about capital gains tax in foreign currency bank accounts.2
Coinbase pays interest on USDC. This isn't the same as staking, and it's not reported as staking in your transaction history. Interest in a foreign currency bank account counts as savings income, not miscellaneous income (see e.g. this HMRC forum answer).
I guess it also counts as foreign income? That page isn't very clear, but I think the relevant question isn't "what currency are you getting interest in" but "what country is the bank account in". That probably depends on details of Coinbase's internal structure that I'm not familiar with; but probably they'd need to actively go to effort for UK users' USDC holdings to count as being in the UK, and probably if they did that they'd go out of their way to make sure I knew they do that, and I don't know they do it so probably they don't. If it's foreign income then it looks like that doesn't change how it's taxed, but you might need to report it differently.
I guess this means that if exchange rates don't go your way, you might end up with less money than you started but still have to pay tax, and not be able to offset your losses against capital gains.
…but I don't think that's actually how it works. It looks to me like stablecoins just get treated like any other crypto, based on this site:
Buying crypto with stablecoins is viewed as trading crypto for crypto, so any profits are subject to Capital Gains Tax.
and manual page CRYPTO10100, shortly after talking about stablecoins, saying:
HMRC does not consider cryptoassets to be currency or money.
So I think that no, stablecoins are not an exception. And I weakly guess that coinbase's USDC interest counts as miscellaneous (and non-foreign) income, not personal savings income, unless you decide that staking income is also personal savings income.
What if there's a fork?
Sometimes a cryptocurrency forks, and where you had one type of coin you now have two. How does that work?
Philosophically, I think the answer is: you always had both types of coin, it's just that no one was tracking the distinction between them. So on July 31 2017, I think that I have 0.1 BTC that I paid £100 for; on August 1 2017, I discover that actually I hold 0.1 BTC that I paid ??? for and 0.1 BCH that I paid ??? for, where the two ???s sum to £100.
(And when I sold 0.05 BTC for £30 a week previously, I actually sold 0.05 BTC and 0.05 BCH for amounts summing to £30, and it doesn't matter how they split at the time.)
In every case I know of, one of the split coins is considered the original and one is considered the fork. But I don't think there's a technical distinction there, it's just that there was a social (and sometimes legal) battle to decide who gets to use the original name and one group won that. ("Legal" example: when Ethereum Classic split off from Ethereum, the Ethereum Foundation had a trademark on the name. So whichever copy they endorsed was basically always going to get called "Ethereum", even if it turned out less popular.)
Of course, the outcomes of social-and-sometimes-legal battles can have important legal effects, even if there's no technical meaning to them. So one option would be to say that I paid £100 for 0.1 BTC, and £0 for 0.1 BCH. BTC has just had a drop in price (you can't reliably expect to sell 1 BTC + 1 BCH post-fork, for more than you could sell 1 BTC pre-fork), so your capital gains on BTC have gone down, but you can expect relatively high capital gains on BCH.
Another option would be to take the market price soon after they split. Suppose 1 BTC costs 9x as much as 1 BCH. Then we'd say I paid £90 for my BTC and £10 for my BCH.
This article recommends the second approach:
HMRC does not prescribe any particular apportionment method. It is standard practice (based on the treatment of shares, because cryptoassets use the same rules) that the cost of the original cryptoasset is apportioned between the old and new cryptoasset, pro-rata in line with the respective market values of each cryptoasset the day after the hard fork. …
HMRC has the power to enquire into an apportionment method that it believes is not just and reasonable. Therefore, whichever method an individual chooses to use, they should keep a record of this and be consistent throughout their tax returns.
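Here's what that pro-rata apportionment might look like in code (a sketch; the 9:1 ratio is the made-up number from above):

def apportion(original_cost, value_a, value_b):
    # Split the original purchase cost between the two post-fork coins,
    # pro rata by their market values the day after the fork.
    total = value_a + value_b
    return (original_cost * value_a / total,
            original_cost * value_b / total)

apportion(100, 9, 1)  # (90.0, 10.0): £90 for the BTC, £10 for the BCH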
Airdrops and NFTs
I don't even really know what airdrops are and I don't care how they're taxed, but I suppose some readers might, so: manual page CRYPTO21250 talks about them.
I don't care about NFTs either and didn't see a manual page on them, so ¯\_(ツ)_/¯.
Ledger
I like to track my finances with ledger, which means I want some way to encode these rules in it.
I think I have something that works decently, which I demonstrate in a sample file that you can see here:
Example ledger file
;; This ledger demonstrates calculating capital gains on cryptocurrency for UK
;; taxes. For more info see:
;; https://reasonableapproximation.net/2024/03/28/uk-crypto-taxes.html
;;
;; I think it's mostly fairly standard outside of the `Holdings` top-level
;; account. You can do e.g. `ledger bal not Holdings` to hide that. It doesn't
;; make use of lot dates or prices to do matching (that's not how the UK needs
;; you to do things). It doesn't use virtual postings.
;;
;; It doesn't work in hledger because that doesn't support posting cost
;; expressions like `0.01 ETH @ (£300 / 0.01)`. If you replace those with their
;; calculated value it seems fine.
;;
;; It should work fairly straightforwardly with stocks as well as crypto, with
;; the caveat that I'm not sure how to encode stock splits and don't know if
;; there are other fiddly details to complicate matters.
;;
;; The things I'm most unhappy about are that it doesn't balance to 0, and that
;; there's no help with average prices of Section 104 holdings.
2020/01/01 Buy
; When we buy an asset, we record it in two places. `Assets` holds what we
; currently own, grouped in some way that's convenient for general use (by
; which account they're in, currency, whatever). `Holdings` holds the same,
; but grouped by capital gains buckets.
;
; Annoyingly, they don't balance, since for capital gains purposes the price
; includes transaction fees. So the total ETH balance comes to 0 but the £
; balance comes to `Expenses:Fees`.
;
; The `@` and `@@` ensure the ETH and GBP amounts balance with each other.
; But the `Holdings` exchange rate is wrong, so we use `(@@)` to avoid that
; getting put in the price database.
;
; S104 is "Section 104". That's the technical term for that bucket.
Assets:ETH 0.13 ETH @ £765.38
Assets:GBP £-100.00
Expenses:Fees £0.50
Holdings:S104:ETH -0.13 ETH (@@) £100.00
Holdings:S104:ETH £100.00
2020/01/10 Buy
; So after this, the "Holdings:S104:ETH" account records that we own 0.21
; ETH, that we paid £200.00 for.
Assets:ETH 0.08 ETH @ £1243.75
Assets:GBP £-100.00
Expenses:Fees £0.50
Holdings:S104:ETH -0.08 ETH (@@) £100.00
Holdings:S104:ETH £100.00
2020/01/31 Staking
; When we get staking income, we can either record it as Income in ETH or £.
; Recording it as ETH seems more powerful, since it lets us answer all of:
;
; * "how much ETH have I got from staking?" (`ledger bal`)
; * "how much £ is that worth now?" (`ledger bal -X £`)
; * "how much was it worth when I got it?" (`ledger bal -X £ --historical`)
;
; Recording in £ would mean `ledger bal` fully balances in ETH (at least all
; buys and sells do), and total balance in £ equals `Expenses:Fees`. That
; seems like a potentially useful sanity check. We can at least check that
; non-staking transactions balance like that with
;
; ledger bal not @Staking
;
; Still, I'm not sure this is better than just recording in £.
;
; We don't need to add every staking distribution individually. We can group
; several together and add them all at once, as long as they don't need to
; be distinguished for capital gains or income tax reasons or something. But
; then the price isn't accurate, so we probably want to follow it with an
; explicit entry for the price on the final day.
Assets:ETH 0.0014 ETH
Income:Staking:ETH -0.0014 ETH
Holdings:S104:ETH -0.0014 ETH (@) £942.86
Holdings:S104:ETH £1.32
; This gives the actual price at the time we most recently received staking
; income. Price database entries given by `@` and `@@` are saved at midnight, so
; might as well use that time here too. We could equivalently leave out the
; time, `P 2020/01/31 ETH £981.38`.
P 2020/01/31 00:00:00 ETH £981.38
2020/02/05 Sell
; At this point, S104 holds 0.2114 ETH bought for a total of £201.32,
; average £952.32. That means 0.0514 ETH was bought for £48.95. I don't know
; if there's a way to have ledger help with that calculation or enforce that
; we did it right.
Assets:ETH -0.0514 ETH @ £1578.97
Assets:GBP £80.66
Expenses:Fees £0.50
Income:Capital Gains:ETH £-31.71
Holdings:S104:ETH 0.0514 ETH (@@) £80.66
Holdings:S104:ETH £-48.95
2020/03/01 Sell
; Now a more complicated sell that we'll match with some non-S104 buys.
;
; When we buy, we know by the end of the day which Holdings bucket(s) it
; needs to go in. But when we sell, any buys or other acquisitions in the
; next 30 days affect which bucket(s) we're drawing from. So we won't be
; able to complete this transaction until April. (The bed-and-breakfasting
; bucket for this sell runs March 2-31 inclusive.) Until we do we might
; choose to just write the Assets and Expenses postings, leaving the
; transaction not to balance in ETH until we come back and fill in the rest.
;
; This counts as a capital loss (positive income), since after transaction
; fees, we buy it back in future for slightly more than we sell it for now.
;
; The three +ETH and the three -£ in Holdings empty out those buckets, and
; in this case there's none left over to take from the S104 bucket. The
; `(@)`s ensure that if we get cap gains wrong, the whole thing won't
; balance.
Assets:ETH -0.08 ETH @ £1635.90
Assets:GBP £130.37
Expenses:Fees £0.50
Income:Capital Gains:ETH £1.06
Holdings:SameDay:20200301:ETH 0.01 ETH (@) (£130.37 / 0.08)
Holdings:SameDay:20200301:ETH £-16.71
Holdings:BnB:20200301:ETH 0.05 ETH (@) (£130.37 / 0.08)
Holdings:BnB:20200301:ETH £-80.45
Holdings:BnB:20200301:ETH 0.02 ETH (@) (£130.37 / 0.08)
Holdings:BnB:20200301:ETH £-34.27
; Suppose that the Mar 31 buy below didn't happen. Then the last 0.02 ETH
; here would come from the S104 bucket. At this point the bucket contains
; 0.16 ETH bought for £152.37, average £952.31. (It changed slightly in the
; last transaction because of rounding errors.) So 0.02 ETH was bought for
; £19.05. In that case the Income posting and the last two Holdings postings
; would be replaced with:
;
; Income:Capital Gains:ETH £-14.16
; Holdings:S104:ETH 0.02 ETH (@) (£130.37 / 0.08)
; Holdings:S104:ETH £-19.05
2020/03/01 Buy
; We buy some back on the very same day. This is within 30 days after the
; Feb 5 sell, but the sell from today takes precedence. If we bought more
; than 0.08 ETH here, then the remainder would go in a BnB bucket to match
; against that. After today, the `SameDay:20200301` account is empty.
Assets:ETH 0.01 ETH @ £1620.81
Assets:GBP £-16.71
Expenses:Fees £0.50
Holdings:SameDay:20200301:ETH -0.01 ETH (@@) £16.71
Holdings:SameDay:20200301:ETH £16.71
2020/03/07 Buy
; We buy some more back within 30 days after selling, so this is also
; matched against the Mar 1 buy. It's 31 days after Feb 5, so it doesn't
; get matched against that.
Assets:ETH 0.05 ETH @ £1599.01
Assets:GBP £-80.45
Expenses:Fees £0.50
Holdings:BnB:20200301:ETH -0.05 ETH (@@) £80.45
Holdings:BnB:20200301:ETH £80.45
2020/03/31 Buy
; And more on the final day in the BnB window. Only 0.02 ETH gets matched
; against the previous sale, the rest goes into the S104 bucket. After
; today, the `BnB:20200301` account is empty.
Assets:ETH 0.05 ETH @ £1703.67
Assets:GBP £-85.68
Expenses:Fees £0.50
Holdings:BnB:20200301:ETH -0.02 ETH (@) (£85.68 / 0.05)
Holdings:BnB:20200301:ETH £34.27
Holdings:S104:ETH -0.03 ETH (@) (£85.68 / 0.05)
Holdings:S104:ETH £51.41
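Ledger won't do the Section 104 averaging for you (the gap the Feb 5 comment grumbles about), but the pool arithmetic is easy to script. A minimal Python sketch, just to double-check the numbers above; the class is my own invention, not a tax tool:

# Section 104 pool bookkeeping: buys add to the pool at cost, sells
# remove units at the pool's average cost. Illustrative only.

class S104Pool:
    def __init__(self):
        self.units = 0.0  # ETH held in the pool
        self.cost = 0.0   # total £ paid for those units

    def buy(self, units, cost):
        self.units += units
        self.cost += cost

    def sell(self, units):
        """Remove units at average cost; return the cost removed."""
        cost_out = round(self.cost * units / self.units, 2)
        self.units -= units
        self.cost -= cost_out
        return cost_out

pool = S104Pool()
pool.buy(0.13, 100.00)    # 2020/01/01 buy
pool.buy(0.08, 100.00)    # 2020/01/10 buy
pool.buy(0.0014, 1.32)    # 2020/01/31 staking income
print(pool.sell(0.0514))  # 48.95, matching the 2020/02/05 comment
print(round(pool.cost / pool.units, 2))  # 952.31, the post-sale average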
At least not as far as I know. Like, if I have £5581.21 in my bank account, the bank doesn't keep track of each of those 558,121 individual pennies, and when I pay for something decide which of those pennies is leaving my account. So if my grandmother asks what I spent my birthday money on, it may (or may not) be the case that she sent me £30 and I subsequently spent £30 on a giant dildo that I don't want to talk to her about; but I can truthfully tell her "that's a meaningless question, grandma". And as far as I know crypto works the same way. But who knows, there are a lot of cryptocurrencies out there and it wouldn't shock me if some of them don't. Compare premium bonds: NS&I keeps track of exactly which bonds you own, and when you sell them they decide exactly which bonds you no longer own. ↩
While looking into this, I found the Taxation of Chargeable Gains Act 1992, section 252. Section 251(1) says "if a debt is X, then it doesn't count for Y unless Z". Then when originally enacted, 252(1) said "251(1) doesn't apply to debts where…" and 252(2) said "252(1) doesn't apply to debts where…". Good grief. Parliament, if you cannot use negatives responsibly, we will take them away from you. ↩
|
nzjqeGNtvTAz76Jwg_Cryptocurrency_taxation_in_the_U.txt
|
{
"file_size": 24551
}
|
4b604d9f-6de4-4e94-a166-b9d5af2cee5c
|
Lots of important phenomena have a critical threshold. In nuclear weapons, each fission event produces a certain number of neutrons, and some of those trigger more events. If the number of events triggered is slightly more than one per event, the result grows exponentially. If slightly less, much less happens.
Similarly in quantum computing: current quantum computers struggle with noise, which causes the superposition to break down over time. However, if we can keep the error rate low enough, it should be possible to use error-correcting codes to do arbitrarily complicated calculations.
When trying to extend LLMs to difficult multi-step problems, I often feel like I'm dealing with a similar phenomenon. For example, if asking an LLM to write a novel, it will follow the plot of the novel for a while and then spontaneously jump to a different story. It feels like the "amount of information" passed from one step to the next is not-quite-enough to keep the story going indefinitely. LLM agents struggle with similar problems: they seem to work for a while, but eventually they get stuck in a loop or lose their train of thought.
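The nuclear analogy can be made precise as a branching process: if each event triggers on average μ further events, chains die out for μ < 1 and can grow without bound for μ > 1. A toy simulation (all numbers arbitrary, purely to illustrate the threshold):

# Toy branching process: each event triggers 2 more with probability p,
# so the mean offspring count is 2p and p = 0.5 is the critical threshold.
import random

def chain_size(p, cap=10_000):
    """Total events before the chain dies out (capped at cap)."""
    active, total = 1, 1
    while active and total < cap:
        active = sum(2 for _ in range(active) if random.random() < p)
        total += active
    return total

for p in (0.45, 0.55):
    sizes = [chain_size(p) for _ in range(1000)]
    print(p, sum(sizes) / len(sizes))
# Slightly subcritical chains fizzle after ~10 events on average;
# slightly supercritical ones regularly run away to the cap.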
It seems like there are two ways this behavior could change as we scale up LLMs:
1. LLMs get gradually better as we increase their capabilities (they go from being able to write 1 page, to writing 2, to writing 3...).
2. There is some "critical size" threshold above which agents are able to self-improve without limit, and suddenly we go from writing pages to writing entire encyclopedias.
Does anyone know of good evidence for/against either of these cases? (the strongest evidence in favor of 1 seems to be "that's how it's gone so far")
|
txhRzfei7mzghDroQ_Is_there_a_"critical_threshold"_.txt
|
{
"file_size": 1672
}
|
3d87d883-de57-4ee3-86d9-7fe439c25b09
|
The Vesuvius Challenge is a million+ dollar contest to read 2,000 year old text from charcoal-papyri using particle accelerators and machine learning. The scrolls come from the ancient villa town of Herculaneum, near Pompeii, which was similarly buried and preserved by the eruption of Mt. Vesuvius. The prize fund comes from tech entrepreneurs and investors Nat Friedman, Daniel Gross, and several other donors.
From this
In the 9 months after the prize was announced, thousands of researchers and students worked on the problem, decades-long technical challenges were solved, and the amount of recovered text increased from one or two splotchy characters to 15 columns of clear text with more than 2000 characters.
To this
The success of the Vesuvius Challenge validates the motivating insight of metascience: It’s not about how much we spend, it’s about how we spend it.
Most debate over science funding concerns a topline dollar amount. Should we double the budget of the NIH? Do we spend too much on Alzheimer’s and too little on mRNA? Are we winning the R&D spending race with China? All of these questions implicitly assume a constant exchange rate between spending on science and scientific progress.
The Vesuvius Challenge is an illustration of exactly the opposite. The prize pool for this challenge was a little more than a million dollars. Nat Friedman and friends probably spent more on top of that hiring organizers, building the website, etc. But still, this is pretty small in the context of academic grants. A million dollars donated to the NSF or NIH would have been forgotten if it was noticed at all. Even a direct grant to Brent Seales, the computer science professor whose research laid the groundwork for reading the scrolls, probably wouldn't have induced a tenth as much progress as the prize pool did, at least not within 9 months.
Brent Seales: “The Vesuvius Challenge allowed us to enlist 1,000 research teams to work on a problem that normally we’d only have 5 people working on.”
It would have been easy to spend ten times as much on this problem and get a tenth as much progress out the other end. The money invested in this research was of course necessary, but the spending was not sufficient: it needed to be paired with the right mechanism to work.
The success of the challenge hinged on design choices at a level of detail beyond just a grants vs prizes dichotomy. Collaboration between contestants was essential for the development of the prize-winning software. The discord server for the challenge was (and is) full of open-sourced tools and discoveries that helped everyone get closer to reading the scrolls. A single, large grand prize is enticing but it’s also exclusive. Only one submission can win so the competition becomes more zero-sum and keeping secrets is more rewarding. Even if this larger prize had the same expected value to each contestant, it would not have created as much progress because more research would be duplicated as less is shared.
Nat Friedman and friends addressed this problem by creating several smaller progress prizes to reward open-source solutions to specific problems along the path to reading the scrolls, or just open-ended prize pools for useful community contributions. They also added second-place and runner-up prizes. These prizes funded the creation of data-labeling tools that everyone used to train their models and visualizations that helped everyone understand the structure of the scrolls. They also helped fund the contestants' investments of time and money in their submissions. Luke Farritor, one of the grand prize winners, used winnings from the First Letters prize to buy the computers that trained his prize-winning model. A larger grand prize can theoretically provide the same incentive, but it's a lot harder to buy computers with expected value!
Nat and his team also decided to completely switch their funding strategy for a particular part of the pipeline from charcoal-scroll to readable text. This "segmentation" step comes early in the process, and is a labor-intensive data-labeling task which helps an algorithm unwrap a crosswise slice of burnt scroll into a flat piece.
Rather than funding extra prizes for this step, they hired 3 full-time data labelers and open-sourced their outputs. Here’s their rationale:
An alternative was to leave the problem of segmentation to the contestants, or even to award separate prizes for segments, but this had several downsides. First, it’s hard to judge segment quality before knowing what to look for (we didn’t have working ink detection yet). Incentivizing segment quantity would automatically penalize quality. Second, labeling work is tedious and time consuming, and turned out to have a long learning curve, so it’s desirable to guarantee some compensation, which can’t be done with a prize. Third, the feedback loop with prizes can be pretty long.
We were not dogmatically attached to just being referees; we were willing to run out onto the field and kick the ball a little. So we did what we thought would maximize success, and for the critical bottleneck of segmentation, that meant hiring a team.
Setting up a segmentation prize or hiring a team of labelers might cost the same, but the effect it has on progress towards the actual goal of reading the scrolls could be completely different.
The same money spent in different ways can change the per-dollar impact by several orders of magnitude. Noticing this fact and endeavoring to understand which mechanisms work when is the motivation behind metascience. The mechanism design decisions which went into the scroll prize corroborate this central insight.
Other Observations
Nerdsniping as metascience
https://xkcd.com/356/
One part of the success of the scroll prize had nothing to do with the prize pool or how it was structured, it was just getting the right kind of people excited about the problem. Nat Friedman’s appearance on Dwarkesh Patel’s podcast both caused and predicted this.
“I think there’s a 50% chance that someone will encounter this opportunity, get the data and get nerd-sniped by it, and we’ll solve it this year,” Friedman said on the show. Farritor thought, “That could be me.”
This information spreading and inspiration is completely abstracted away in formal discussions of science and metascience, but it is extremely important. People often gripe about great physicists or mathematicians going to work on Wall Street but offer no real ideas for how to stop it. Here’s mine: targeted, tactical nerdsniping. Inspiration and interest can be higher leverage than money.
There’s another layer to the nerdsniping strategy. It’s not just about enticing the people who might compete for a prize, it’s also about the people who might fund it. Here’s Nat Friedman on his introduction to this problem.
A couple of years ago, it was the midst of COVID and we were in a lockdown, and like everybody else, I was falling into internet rabbit holes. And I just started reading about the eruption of Mount Vesuvius in Italy …
I read about a professor at the University of Kentucky, Brent Seales, who had been trying to scan these using increasingly advanced imaging techniques, and then use computer vision techniques and machine learning to virtually unroll them without ever opening them…
I thought this was like the coolest thing ever.
This happened serendipitously but if you want more funding for your project it might be worth investing serious effort in nerdsniping a tech millionaire on the problem. This advice is obvious to silicon valley folks but seems underused by academics.
Private Provision of Public Goods
The Vesuvius Challenge is also a good example of a "privileged group" provision of public goods. This is a concept from economist Mancur Olson which defines conditions under which public goods can be provided voluntarily even though some benefits accrue to free riders. For example, imagine a levee for a small town. Everyone in the town is protected from floods by the levee whether they pay or not, so no one wants to: a classic free-rider problem. But if there is a large landowner in the town, the protection they get personally might be worth more than the entire cost of the levee, so they build it. Or think about the Patreon supporters of your favorite YouTuber: even though the videos go out to everyone for free and much of the benefit accrues to others, a few true fans are willing to support them anyway.
Nat Friedman’s interest in ancient history has made papyrologists and classicists a privileged group. His willingness to pay for reading these scrolls is so high that he’s willing to produce the text even though the information is a public good and he will capture only a tiny fraction of the benefit personally.
[Nat] says he's also contemplating buying scanners that can be placed right at the villa and used in parallel to scan tons of scrolls per day. "Even if there's just one dialogue of Aristotle or a beautiful lost Homeric poem or a dispatch from a Roman general about this Jesus Christ guy who's roaming around," he says, "all you need is one of those for the whole thing to be more than worth it."
The Long Tail
Prizes weren't the only mechanism that contributed to reading the Herculaneum scrolls. Traditional universities and National Science Foundation funding supported Brent Seales's research for over two decades.
“The foundation was laid by Dr. Seales and his team. They spent two decades making the first scroll scans, building Volume Cartographer, demonstrating the first success in virtual unwrapping, and proving that Herculaneum ink can be detected in CT.”
This is similar to mRNA vaccines. Advanced market commitments and quick internal research at Moderna and Pfizer were the big push that got them produced during COVID-19, but they were preceded by a long tail of NIH research. In both cases this research didn't seem particularly high-value to reviewers for decades, until it had its moment in the sun.
This difficulty in seeing potential before the critical moment suggests that science funding behemoths like the NIH should invest less in filtering and searching for what appear to be high-quality applicants, and pivot to giving out more, smaller, long-term "insurance grants" that allow lots of researchers to keep tinkering even when no one else sees the potential. When one of them hits on something, there are other mechanisms better suited to taking their idea over the finish line. The government's comparative advantage isn't in picking winners when the ideas are clear; it's in having pockets deep enough and timelines long enough to "buy the index" of science and give early glimmers of unclear value a safety net to support tinkering.
|
boWGNiQ3oemiKgDw7_Metascience_of_the_Vesuvius_Chal.txt
|
{
"file_size": 10836
}
|
08f7ff01-959d-4622-9efa-dcda80756a82
|
This year's Spring ACX Meetup everywhere in Cape Town.
Location: Truth Coffee Roasting, 36 Buitenkant St, Cape Town City Centre - we'll put a sign on the table – https://plus.codes/4FRW3CCF+P3
Please RSVP on LessWrong or email or WhatsApp +27 79 813 5144, so I know how big a table to book.
Contact: yaseen@mowzer.co.za
|
jvZjuXyfvjaGS3yNc_Cape_Town_–_ACX_Meetups_Everywhe.txt
|
{
"file_size": 321
}
|
e89d3fbc-a49f-4a05-9708-882a85dfb142
|
This year's Spring ACX Meetup everywhere in Kiel.
Location: TraumGmbH | I'll carry ACX MEETUP sign – https://plus.codes/9F6G84M8+XQ
Contact: hamburger_blues[at]disroot[dot]org
|
jAzkwDBLY7omvGwKn_Kiel_–_ACX_Meetups_Everywhere_Sp.txt
|
{
"file_size": 177
}
|
55ab0074-5735-499f-a00d-708ea6413899
|
This year's Spring ACX Meetup everywhere in Newport News.
Location: 12090 Jefferson Ave Ste 100, Newport News, VA 23606. There are benches outside of the Whole Foods that we will meet at. I will wear glasses and a red shirt. I will have a poorly made ACX sign. If you see someone with a well made sign, that's a different group. – https://plus.codes/87954G36+C4C
Group Link: https://www.lesswrong.com/groups/pLEbtx3BbdaLMXZKi
All are welcome. RSVPs are appreciated
Contact: daniel.m.adamiak@gmail.com
|
3RTB8nDmi9BAEkwwr_Newport_News_–_ACX_Meetups_Every.txt
|
{
"file_size": 502
}
|
641265ca-1f9e-40ab-a932-dd1eb8cace15
|
This year's Spring ACX Meetup everywhere in Prague.
Location: Fixed Point. Koperníkova 6, 120 00 Praha, Česká Republika – https://plus.codes/9F2P3CCR+3C
Group Link: https://fb.me/e/28OXui8Zy
Please RSVP on LessWrong so I know how much food to get.
Contact: betualphu@gmail.com
|
wC9T9yaY7PapADfY6_Prague_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 281
}
|
aca85858-8a67-4ce5-b52c-01917cc6749d
|
This year's Spring ACX Meetup everywhere in Urbana-Champaign.
Location: UIUC, Siebel Center for Computer Science, Room 3401 – https://plus.codes/86GH4Q7G+H8F
Group Link: https://discord.gg/8BNujpU6XD
Contact: cu.acx.meetups@gmail.com
|
R93PAyqdv5nem6MZt_Urbana-Champaign_–_ACX_Meetups_E.txt
|
{
"file_size": 235
}
|
b7a82152-c56d-43fb-81d6-5b422e248280
|
This year's Spring ACX Meetup everywhere in Gothenburg.
Location: Condeco Fredsgatan upper floor, look for a book on the table – https://plus.codes/9F9HPX4C+4CR
Contact: acx_gbg@posteo.se
|
v2dDH4DiHTu9z7Pe7_Gothenburg_–_ACX_Meetups_Everywh.txt
|
{
"file_size": 190
}
|
3a4a21f6-6280-41de-bffc-85a1b3d046a5
|
This year's Spring ACX Meetup everywhere in Burlington.
Location: In the Oakledge park. I’ll be wearing a tall blue and green hat. – https://plus.codes/87P8FQ4F+5C
Group Link: https://groups.google.com/g/burlington-lwacx
Contact: skyler@rationalitymeetups.org
|
w6sovLCeAkhCM3qkL_Burlington_–_ACX_Meetups_Everywh.txt
|
{
"file_size": 263
}
|
99405c23-9291-4c64-a9f2-2eeed624915a
|
This year's Spring ACX Meetup everywhere in Carbondale.
Location: Picnic tables in the center of Sopris Park – https://plus.codes/85FJ9QXP+QM
Please RSVP on LessWrong so I know how much food to get. Please come even if you don't RSVP
Contact: naj@njarboe.com
|
uRcYy536HFAwGYwZW_Carbondale_–_ACX_Meetups_Everywh.txt
|
{
"file_size": 261
}
|
a34a0a14-17c3-47ed-8af7-5845b0e33ce4
|
This year's Spring ACX Meetup everywhere in Richmond.
Location: Whole Foods at 2024 W Broad Street, in the cafe area on the second floor – https://plus.codes/8794HG5Q+7G
Group Link: https://discord.gg/cYqpzHn2qU
Contact: ellahoeppner@gmail.com
|
Ksj9MhqxynNggkAKG_Richmond_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 245
}
|
8dd1d288-6ee3-4138-a577-61a47b958850
|
This year's Spring ACX Meetup everywhere in Salt Lake City.
Location: Liberty Park, west side, near Chargepoint Station, we'll be on the grass in a circle of chairs – https://plus.codes/85GCP4WF+MF
Group Link: email me and I'll add you to the mailing list and send you a discord invite
Contact: adam.r.isom@gmail.com
|
XTdEoh6j6ccLsBhPX_Salt_Lake_City_–_ACX_Meetups_Eve.txt
|
{
"file_size": 318
}
|
46e93c22-213f-41a0-8a73-b08bfce7105c
|
This year's Spring ACX Meetup everywhere in Halifax.
Location: We will be meeting at the Oxford Taproom. We'll be sitting at a table on the ground floor (to the right as you enter) and will have a blue pyramid on the table. – https://plus.codes/87PRJ9VX+PP
Group Link: https://discord.gg/kXFaGQBB5h
Contact: usernameneeded@gmail.com
|
CnLcrjnCvyd58WXJk_Halifax_–_ACX_Meetups_Everywhere.txt
|
{
"file_size": 333
}
|
ddfa5689-b02c-41ad-8e7e-3b1bee6d642d
|
This year's Spring ACX Meetup everywhere in Lyon.
Location: Parc de la Tête d'Or, south-east corner of Pelouse de la Coupole. I'll wear a blue shirt/sweater and have an owl plushie and books. – https://plus.codes/8FQ6QVF3+JM
Group Link: https://t.me/+m6nDCgibgSxiMWE0
Check the Telegram group or contact me if it rains!
Contact: suboptimal.channel@gmail.com
|
c3EE6jkRKqwcM59nW_Lyon_–_ACX_Meetups_Everywhere_Sp.txt
|
{
"file_size": 360
}
|
1fbece2b-ba57-4cc9-8c92-90a855662ffc
|
This year's Spring ACX Meetup everywhere in St. Louis.
Location: Tower Grove Park, Cypress Pavilion South – https://plus.codes/86CFJQ32+XC
Group Link: https://www.lesswrong.com/groups/JTMprAL9QpCct2od3
Feel free to bring kids, gadgets, books-as-conversation starter. Invite friends. Please RSVP on LessWrong so I know how much food to get.
Contact: littlejohnburidan@gmail.com
|
A5vx4KoFzZMvR5NNA_St._Louis_–_ACX_Meetups_Everywhe.txt
|
{
"file_size": 378
}
|
6617b02d-80c5-4fc0-b234-da063445f415
|
This year's Spring ACX Meetup everywhere in San Jose.
Location: 3806 Williams Rd, San Jose, CA – https://plus.codes/849W825J+7Q
Group Link: http://www.daviddfriedman.com/SSC%20Meetups%20announcement.html
Kids welcome. Let me know if you plan to come: ddfr@daviddfriedman.com. We feed dinner to those still here at dinner time.
Contact: ddfr@daviddfriedman.com
|
HJt28WNNYFF2Zvddy_San_Jose_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 361
}
|
f6ea50d0-e190-4308-b7d0-4a4ad6e9044e
|
This year's Spring ACX Meetup everywhere in Sheffield.
Location: 200 Degrees, 25 Division St, Sheffield S1 4GE. I'll have a piece of paper on the table with ACX written on it. – https://plus.codes/9C5W9GJG+2M
Contact: czr@rtnl.org.uk
|
BLgLAy3qcFs5EBkag_Sheffield_–_ACX_Meetups_Everywhe.txt
|
{
"file_size": 235
}
|
3220b9f1-8840-4529-b6c8-ee1c48c27cf5
|
This year's Spring ACX Meetup everywhere in Milano.
Location: Primo Ventures, Viale Luigi Majno, 18, 20129, Milano (MI) – https://plus.codes/8FQFF6C4+9C
Contact: raffa.mauro@gmail.com
|
xG3vrEHnx3YC5XZYy_Milano_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 185
}
|
a69f67ec-21b9-4f85-81ea-e95d6d6fdadf
|
This year's Spring ACX Meetup everywhere in Kansas City.
Location: 5200 Wornall Rd, Kansas City, MO 64112 (Jacob L. Loose Park) - We will be at the grill and the stone tables to the right of the entrance, between the entrance and the playground. – https://plus.codes/86F72CM4+QH
Group Link: https://www.meetup.com/kc_rat_ea/
There is a playground, so feel free to bring kids! Also, while not necessary, bring any cookout food potluck-style you'd like. There will be a grill.
Contact: alex.hedtke@gmail.com
|
A7W6vwScW6BeMbAjp_Kansas_City_–_ACX_Meetups_Everyw.txt
|
{
"file_size": 507
}
|
a43bca0a-c8b4-4374-91c1-e1093410ddaa
|
This year's Spring ACX Meetup everywhere in Shanghai.
Location: The Bunker(街垒)Pub, 190-3 Wulumuqi Rd North, Jing'an District. It's a small place, I'll have a sign. – https://plus.codes/8Q336CCR+XW7
I'd appreciate an email so I know you're coming! No stress though, feel free to just show up. Drinks not required, come and hang out! It won't be just expats :)
Contact: asxsh@proton.me
|
hn5WxWkP6kLxoXxLD_Shanghai_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 395
}
|
bba58ec7-8be5-4f10-b94a-944712c70629
|
This year's Spring ACX Meetup everywhere in Gulf Breeze.
Location: https://www.oldhickorywhiskeybar.com/ – https://plus.codes/862JCQ6M+6X
Email me if you want to meet. I'll only plan to be there if I hear from at least one person.
Contact: christian.h.williams@gmail.com
|
WwRySPNcd7HxMnKjL_Gulf_Breeze_–_ACX_Meetups_Everyw.txt
|
{
"file_size": 272
}
|
b8a93528-6208-4a19-b1a0-e7ad629632e1
|
This year's Spring ACX Meetup everywhere in Abuja.
Location: The 'High Table' at Habil Cafe, No 3 Atapkme Street, Wuse II, Abuja. There will be a small sign saying 'Abuja ACX Meetup' – https://plus.codes/6FX93F9H+J9
An RSVP on LessWrong would be nice. Ended up eating all the food last time ):
Contact: akinloluwa.olaoluwa@gmail.com
|
iHGoDJvudpysbh3m4_Abuja_–_ACX_Meetups_Everywhere_S.txt
|
{
"file_size": 330
}
|
e563e916-e765-4181-aa87-a092bf216d04
|
CANCELLED for lack of interest.
This year's Spring ACX Meetup everywhere in Saint John.
Location: McAllister Place food court, I'll have some kind of a small ACX MEETUP sign on the table. – https://plus.codes/87QM8X4M+XJP
Please RSVP if you have any intention of coming as the event will only proceed if there's at least someone interested in coming.
Contact: spam04321@gmail.com
|
owKPTnAj6cezcYxuw_Saint_John_–_ACX_Meetups_Everywh.txt
|
{
"file_size": 385
}
|
5f085ba9-9ee4-41b4-9f39-2bcea31bd233
|
This year's Spring ACX Meetup everywhere in Massapequa.
Location: 47 Clinton Pl., Massapequa, NY 11758 – https://plus.codes/87G8MG4F+3V
Please RSVP on LessWrong or by email (gabeaweil@gmail.com) so I know how much food to get.
Contact: gabeaweil@gmail.com
|
AuadgJsWwJWSNxh6R_Massapequa_–_ACX_Meetups_Everywh.txt
|
{
"file_size": 257
}
|
d1a0bdc2-71d6-4bd7-b0fc-6650e8f9e2d9
|
This year's Spring ACX Meetup everywhere in San Francisco.
Location: We'll be outside the cafe at the Randall Museum in Corona Heights (near the Castro) in San Francisco. The Randall Museum has a cafe, Cafe Josephine - we'll be sitting at a public park bench (by the overlook) just outside the cafe. Randall Museum is kid-friendly and has free admission, bathrooms, etc. We'll bring an ACX sign. – https://plus.codes/849VQH76+XW
Group Link: https://groups.google.com/g/bayarealesswrong
We're bringing our kids (ages 1 and 3) - feel free to bring other small mammals. You can also get in touch with us at (415) 692-4814
Contact: jill.dma@gmail.com
|
6qKpYyyeqDwMNdvJk_San_Francisco_–_ACX_Meetups_Ever.txt
|
{
"file_size": 649
}
|
3233a2de-44f4-43d3-bf52-f7220d5ff8d9
|
This year's Spring ACX Meetup everywhere in Santa Cruz.
Location: NE corner of University Terrace Park, Meder St, Santa Cruz – https://plus.codes/848VXWFW+X6
Contact: gregg.acx@gmail.com
|
9nyAkT7fxhwBLXQbw_Santa_Cruz_–_ACX_Meetups_Everywh.txt
|
{
"file_size": 188
}
|
9b36fd22-fda5-49be-8100-fdbe1a52889d
|
This year's Spring ACX Meetup everywhere in Canberra.
Location: Grease Monkey, 19 Lonsdale St Braddon (probably outside tables) – https://plus.codes/4RPFP4GM+R3
Usually first Monday of each month. Cheap pizza. Please RSVP by email so I can book a table.
Contact: declan_t@hotmail.com
|
AWyWqEMmbPHgqddqm_Canberra_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 285
}
|
159204ea-a087-41a1-b00f-45778e172b81
|
This year's Spring ACX Meetup everywhere in Belgrade.
Location: Venezelosova 20, Belgrade, Serbia. Effective Altruism Serbia is organizing a casual hang out + lunch in vegan and low-waste Kafe VeZa – https://plus.codes/8GP2RFC9+36
Group Link: https://efektivnialtruizam.rs/
Please RSVP to tanja.trninic@efektivnialtruizam.rs so we can reserve enough tables for everyone.
Contact: tanja.trninic@efektivnialtruizam.rs
|
N2JvAcaB3hxRepMks_Belgrade_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 417
}
|
b0ce3a31-2d9f-4f51-9025-316298cf6afa
|
This year's Spring ACX Meetup everywhere in Fort Smith.
Location: Bakery District, 70 S 7th St, Fort Smith, AR 72901 – https://plus.codes/86779HMF+V6H
Contact: olsoncristina@gmail.com
|
uQFMEs6KDwfQCerGq_Fort_Smith_–_ACX_Meetups_Everywh.txt
|
{
"file_size": 185
}
|
9d87b65f-fb8f-4374-9ab0-b43f6f57fc60
|
This year's Spring ACX Meetup everywhere in Moscow.
Location: Lefortovo Park, near the Rastrelli Grotto – https://plus.codes/9G7VQM7Q+GWP
Group Link: https://groups.google.com/g/rationality-in-moscow exists, but has been defunct for years
you can also reach me as "unfriendlyteapot" on discord
Contact: blastjoe41@gmail.com
|
CtmdMKpRB7Fe7FY7q_Moscow_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 325
}
|
4e329af9-810e-4613-9fda-87a95facb4c9
|
This year's Spring ACX Meetup everywhere in Mumbai.
Location: Versova Social, Juhu Versova Link Rd, Gharkul Society, Bharat Nagar, Versova, Andheri West, Mumbai, Maharashtra 400061, India – https://plus.codes/7JFJ4RGC+H5
Group Link: LessWrong: https://www.lesswrong.com/groups/MsTdZ4KpJmHFmLrt4 Email List:https://groups.google.com/g/acx-mumbai/about
Please RSVP on LessWrong and join our google group: https://groups.google.com/g/acx-mumbai/about
Contact: e2y94n1nv@relay.firefox.com
|
h7ogx8ppmSkN7omff_Mumbai_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 487
}
|
ad9bfc11-33c0-40f6-bdce-fcc41b1b094f
|
This year's Spring ACX Meetup everywhere in Seminyak.
Location: Ingka Petitenget – https://plus.codes/6P3Q85G5+XW
Try to drop me an email if you might be coming, so I can estimate if anybody is / how many people are coming
Contact: maciej.acx@gmail.com
|
mq9ZPuoByPPQ3KyxF_Seminyak_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 254
}
|
3262d4b4-3aca-4a05-985b-8d680935d805
|
This year's Spring ACX Meetup everywhere in Bordeaux.
Location: Mériadeck, Esplanade Charles de Gaulle, between the fountain and Hôtel du Département (administrative building to the west / nearest short side of Esplanade to the fountain). I will have an A4 «ACX meetup» sign. https://www.openstreetmap.org/#map=19/44.83735/-0.58601 – https://plus.codes/8CPXRCP7+WHG
Please RSVP on LessWrong so I see who is coming — email me your phone number if you are likely to be late and you want an SMS when we decide to move away from the meeting point. I will do a «wrap-up» point one hour after the beginning so that those who want to leave can leave and not miss any coordination stuff; I will stay at least two hours if anyone wants to stay that long (and possibly longer, we'll see).
Contact: acx-meetup-2024-05-25@weboroso.anonaddy.com
|
vrs2mE3JNShdRawne_Bordeaux_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 842
}
|
46e591e9-7bb3-4e07-a64c-aacb38d90548
|
This year's Spring ACX Meetup everywhere in London.
Location: Newspeak House – https://plus.codes/9C3XGWGH+3F7
Group Link: https://groups.google.com/g/acxlondon
Please register: https://lu.ma/ACX-London-Apr-2024
Contact: ed@newspeak.house
|
eikqfKfC2ZTFynFKt_London_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 240
}
|
8bcb5899-e5b0-431f-843f-f3786dd23a6d
|
This year's Spring ACX Meetup everywhere in Bangalore.
Location: Matteo coffea - inside. This is where we have our regular meetups – https://plus.codes/7J4VXJF4+PR
Group Link: https://www.lesswrong.com/groups/i5vLw9xnG9iwXNQZZ
Please RSVP on lesswrong for the event of May
Contact: propwash@duck.com
|
3PHFnd22ytcYnmfjo_Bangalore_–_ACX_Meetups_Everywhe.txt
|
{
"file_size": 301
}
|
57f73c65-879b-45db-ae09-84116c4f20cb
|
This year's Spring ACX Meetup everywhere in Rio de Janeiro.
Location: Praça Nelson Mandela, Botafogo, Rio de Janeiro. We will sit at a large circular bench in the middle of the square, right in front of a subway exit. I will have a piece of paper with a big "ACX" written on it. IMPORTANT: After some time, if a large group has joined, we might decide to go elsewhere nearby! Please contact the organizer. – https://plus.codes/589R2RX8+P64
Group Link: https://gist.github.com/tiago-macedo/22e8bae2c691565c4143e142783cf1a7
If you show up and don't see anyone, don't despair. The group might have decided to go somewhere close, either to eat or avoid the sun. Information on where we are will be posted to the meetup page, but feel free to contact me by email.
Contact: tiago.s.m.macedo@gmail.com
|
o5MGojAG2hHaZARZt_Rio_de_Janeiro_–_ACX_Meetups_Eve.txt
|
{
"file_size": 798
}
|
b1eb4373-06a1-4b48-b582-f052f27eb9a9
|
This year's Spring ACX Meetup everywhere in Bellingham.
Location: Elizabeth Station. 1400 W Holly St #101, Bellingham, WA 98225. Weather permitting, we'll sit outside under the tent shared with Narrative Coffee. If it's too cold out, we'll be inside. Either way, we'll have a cardboard sign that says "BELLINGHAM RATIONALISH" on it. – https://plus.codes/84WVQG45+XQF
Group Link: https://www.meetup.com/bellingham-rationalish-community/
Please RSVP on Meetup or LessWrong (preferably Meetup) Event link: https://www.meetup.com/bellingham-rationalish-community/events/299992021/
Contact: bellinghamrationalish@gmail.com
|
dDpjnTBWxbZMGCKic_Bellingham_–_ACX_Meetups_Everywh.txt
|
{
"file_size": 619
}
|
47b17520-bef6-4388-ac60-797c3676b949
|
This year's Spring ACX Meetup everywhere in Fort Collins.
Location: Wolverine farm, upstairs – https://plus.codes/85GPHWRG+7MQ
Group Link: https://www.lesswrong.com/groups/dks4PmoHn4dpK94MR
Please RSVP on LessWrong so we can reserve tables
Contact: focorats@posteo.net
|
b5pux5cSeNAresRng_Fort_Collins_–_ACX_Meetups_Every.txt
|
{
"file_size": 270
}
|
6ebc7e1e-b732-48a9-8f07-073d967872f0
|
This year's Spring ACX Meetup everywhere in Haifa.
Location: We'll be in the zikaron garden next to the city hall, in a picnic blanket on the grass and I will be wearing a red shirt and carrying a sign with ACX MEETUP on it. – https://plus.codes/8G4PRX7X+CQ
Contact: dizinteria@walla.com
|
fEnoSvrp33ZL8SEpx_Haifa_–_ACX_Meetups_Everywhere_S.txt
|
{
"file_size": 289
}
|
3f2a001f-8d0d-4b12-b056-bfaf04719ae5
|
This year's Spring ACX Meetup everywhere in West Lafayette.
Location: Address: Beering Hall of Liberal Arts (BRNG) Room 1268, 100 N University St, West Lafayette, IN 47907. BRNG 1268 is in the southwest corner of the building, and can be found after turning left at the south entrance. Please email me if you cannot find us. I will also place an ACX Meetup sign at the entrance to the room and wear a shirt with a lemur. – https://plus.codes/86GMC3GM+4C
We'll have a box of chips and possibly other food.
Contact: mapreader4@gmail.com
|
ycx7ELFZ6eDJnGiJQ_West_Lafayette_–_ACX_Meetups_Eve.txt
|
{
"file_size": 536
}
|
e02d9505-21b0-485f-adeb-e2736b37fac4
|
This year's Spring ACX Meetup everywhere in Toronto.
Location: MaRS Discovery District basement cafeteria. To get to the cafeteria, enter the MaRS Atrium from University Avenue and walk east until you see escalators. Take the escalators down. The food court is to the east of the escalators. If you are lost/confused, ask a security guard to direct you to the food court in the basement. I'll be wearing a bright neon yellow jacket. – https://plus.codes/87M2MJ56+XP
Group Link: https://www.lesswrong.com/groups/8ktnBi4AjxtCmGeXA
Contact: k9i9m9ufh@mozmail.com
|
iqxwgxx4kvDorKcSY_Toronto_–_ACX_Meetups_Everywhere.txt
|
{
"file_size": 599
}
|
e9747d88-8de1-4d6f-b6fd-1ed1321c981d
|
This year's Spring ACX Meetup everywhere in Cleveland.
Location: Tabletop Board Game Cafe- 1810 W 25th St, Cleveland, OH 44113 (I am very tall and will be hard to miss) – https://plus.codes/86HWF7PV+GRP
board game cafe so bring your best catan strategies :)
Contact: ajl161@case.edu
|
mxGuDKHSsfc9ThZPe_Cleveland_–_ACX_Meetups_Everywhe.txt
|
{
"file_size": 284
}
|
12defa95-920b-4da6-9a78-381b39a41e8d
|
This year's Spring ACX Meetup everywhere in Helsinki.
Location: Kitty's Public House, Mannerheimintie 5. We'll be in the private room called Kitty's Lounge, find it and come in. – https://plus.codes/9GG65W9R+Q4
Group Link: https://www.meetup.com/helsinki-rationalish/
Contact: sschelsinkimeetup@gmail.com
|
Fdzk364AqJNbX5LFj_Helsinki_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 306
}
|
4d59661f-aa20-4fdf-8e52-a4c060dc3e1f
|
This year's Spring ACX Meetup everywhere in San Antonio.
Location: Commonwealth Coffeehouse & Bakery Jones. 203 E Jones Ave Ste 101, San Antonio, TX 78215 – https://plus.codes/76X3CGP9+CV
Group Link: https://lesswrongsa.dry.ai/
Contact: jonbenettleilax@gmail.com
|
sgJS7Zkbchppqxfbc_San_Antonio_–_ACX_Meetups_Everyw.txt
|
{
"file_size": 264
}
|
2a94a181-41e6-48a7-bbf8-37df631a6533
|
This year's Spring ACX Meetup everywhere in Canterbury.
Location: Arco Carpanel, Westgate Grove. I have long fair hair and will be carrying an "ACX MEETUP" sign. – https://plus.codes/9F3373JG+F3
I'd appreciate an e-mail if you're new and attending so that I have a sense of how many will be there
Contact: joel.jakubovic@cantab.net
|
iECvfmbycxDYjW8Dz_Canterbury_–_ACX_Meetups_Everywh.txt
|
{
"file_size": 333
}
|
d1f13833-7866-4e59-bff8-f4c1f310fb92
|
This year's Spring ACX Meetup everywhere in Paris.
Location: We'll meet at the Parc Montsouris, just below Cité Universitaire, in front of the Avenue Reille and Avenue René Corty entrance and behind the statue on the grass. There will be an ACX meetup sign and tableclothes – https://plus.codes/8FW4R8FP+CJ
Group Link: Discord: https://discord.com/invite/2U9qhR2suc ; mailing list: https://framalistes.org/sympa/info/slatestarcodexparis
Contact: iwonder@whatisthis.world
|
nLcwQuhPJSG5pEzer_Paris_–_ACX_Meetups_Everywhere_S.txt
|
{
"file_size": 474
}
|
bb7cf015-2ffc-4160-8788-b3fd4272cc8f
|
This year's Spring ACX Meetup everywhere in Ann Arbor.
Location: 1420 Hill Street, Ann Arbor, Michigan. We'll be meeting at the Friends Meetinghouse (euphemism for Quaker) in the back yard if weather allows; otherwise we'll meet in the corner room. 1-5pm. The restrooms are open. Two small parking lots (~12 spaces total) are located by the alley at the rear of the property, plus a handicap parking space. Parking is available on Olivia and Lincoln streets all day Saturday. – https://plus.codes/86JR77C9+MQ
Group Link: https://www.meetup.com/Ann-Arbor-SSC-Rationalist-Meetup-Group/
RSVP here: https://www.meetup.com/ann-arbor-ssc-rationalist-meetup-group/events/299819097/ and join the Meetup.com list to hear about our meetups every month, or text me at: 517-945-8084 and I'll add you to the text notification I send out. Bring snacks if the weather is good (no snacks allowed indoors)
Contact: jwpryorprojects@gmail.com
|
uY4KLTEydTBGFs737_Ann_Arbor_–_ACX_Meetups_Everywhe.txt
|
{
"file_size": 925
}
|
de38909a-efb9-4880-b649-03d35a2459c9
|
This year's Spring ACX Meetup everywhere in Raleigh-Durham.
Location: Ponysaurus Brewing Co (219 Hood St, Durham). We'll be at the outdoor seating area with an ACX sign on the table – https://plus.codes/8773X4Q3+QW
Group Link: https://groups.google.com/g/rtlw
There will be pizza! The venue serves beer but is kid-friendly. I'll have more details on the Google group (see link)
Contact: Logan.the.word@gmail.com
|
qu76HHTcEqrsqpmKp_Raleigh-Durham_–_ACX_Meetups_Eve.txt
|
{
"file_size": 413
}
|
26850e83-0afd-4f8e-a9f0-2aa15edd8d6f
|
This year's Spring ACX Meetup everywhere in College Station.
Location: On the porch of Torchy's on Texas Ave, 1037 Texas Ave, College Station, TX. I will have a yellow OneWheel. – https://plus.codes/8625JMFC+5J9
Please RSVP on LessWrong so that I know roughly how many people are coming!
Contact: mikefrosttx@gmail.com
|
aJXbtJ9HtGBvo2Znc_College_Station_–_ACX_Meetups_Ev.txt
|
{
"file_size": 320
}
|
6ea00c20-5d70-4e16-a1cd-8ab0baed06c7
|
This year's Spring ACX Meetup everywhere in Austin.
Location: The Brewtorium, 6015 Dillard Cir A, Austin, TX 78752, we'll be inside somewhere, just look for the Austin LessWrong and ACX Meetup signs – https://plus.codes/862487GM+96
Group Link: https://austinlesswrong.com/
You can park on the streets in front of Brewtorium or the Milk Bank lot next door. If it really gets full, use the nearby residential streets. We'll be there until at least 5pm!
Contact: sbarta@gmail.com
|
WGxa7BiScB4LGEBej_Austin_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 478
}
|
a8f800bb-041c-4ff2-aea0-10586864748b
|
This year's Spring ACX Meetup everywhere in Athens.
Location: The meeting place is the plaza in front of the National Library in Stavros Niarchos Cultural Center complex in Faliro. There will be an "ACX Meetup" sign where we will sit to spot the place. We will occupy a couple (or hopefully more!) tables. – https://plus.codes/8G95WMQR+WRP
We will have a drink, chat, or rant depending on the topic. Please RSVP on LessWrong and/or meetup.com.
Contact: acx.meetup.athens.greece@gmail.com
|
5Twuug83eEDaAsZyw_Athens_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 605
}
|
2b12b005-6d08-4672-af5e-9b44a4f631af
|
This year's Spring ACX Meetup everywhere in Houston.
Location: 711 Milby St, Houston, TX 77023 inside the IRONWORKS through the big orange door, look for the ACX MEETUP sign at the entrance – https://plus.codes/76X6PMV6+V6
Group Link: https://discord.gg/DzmEPAscpS
Please RSVP on LessWrong. Food and drinks will be provided from Second Slice Sandwich Shop.
Contact: joe.brenton@yahoo.com
|
RqxDXjKaTgHuXugd5_Houston_–_ACX_Meetups_Everywhere.txt
|
{
"file_size": 399
}
|
0c821956-e8fc-4b9a-bd61-9eb0608b0c18
|
This year's Spring ACX Meetup everywhere in Augsburg.
Location: 86156 Augsburg, Am Alten Gaswerk 9, 1st floor, Room O.16 – https://plus.codes/8FWG9VP8+R49
Contact: acx@j.stoehler.eu
|
CG7hh3WHyKw4ekBqG_Augsburg_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 183
}
|
bb69ff2f-dc06-4955-a703-0f7d2f13b374
|
This year's Spring ACX Meetup everywhere in Mannheim.
Location: Murphy's Law, Mannheim – https://plus.codes/8FXCFFJC+5G
Please RSVP by sending an email. Depending on how many people come, we might need to change location.
Contact: acxmannheim@mailbox.org
|
ACwBQbo45pyekuyah_Mannheim_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 256
}
|
485d5a7c-1a41-499b-a5ce-e3f2b807e88c
|
This year's Spring ACX Meetup everywhere in Atlanta.
Location: Bold Monk Brewing 1737 Ellsworth Industrial Blvd NW Atlanta, GA 30318, USA – https://plus.codes/865QRH2F+V8
Group Link: https://acxatlanta.com/
Please RSVP on LessWrong
Contact: steve@digitaltoolfactory.net
|
rCiyQsRor2TqWLC7R_Atlanta_–_ACX_Meetups_Everywhere.txt
|
{
"file_size": 271
}
|
12fb8f8f-a504-4931-a08f-a24234ddf917
|
This year's Spring ACX Meetup everywhere in Vienna.
Location: Müllnergasse 4, 1090 Wien, Bell 11 – https://plus.codes/8FWR6997+W5
Group Link: https://www.facebook.com/groups/rationalityvienna/
Contact: manuel.turonian@gmail.com
|
yZuGuCjtYag3NvZm6_Vienna_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 230
}
|
070dde3f-c298-49b4-bab1-0f8ece92455f
|
This year's Spring ACX Meetup everywhere in Bremen.
Location: Piano, Fehrfeld 64. I'll be carrying a Perplexus Epic Ball Labyrinth – https://plus.codes/9F5C3RFF+7J
Contact: ad.fontes@aol.com
|
nh6YSGQxDHCnSDqDK_Bremen_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 193
}
|
165d0bc2-c146-448b-a960-6ede7f4fc985
|
This year's Spring ACX Meetup everywhere in Denver.
Location: Sloan's Lake Park, North Side. Park in the Sloan's Lake North Parking Lot, walk just past the stone structure that's right there, and we'll be on the other side of it. Should have a shade structure up, and a white board that says ACX MEETUP on it (assuming I don't forget the dry erase marker this time). – https://plus.codes/85FPQX22+RM
Group Link: https://www.facebook.com/groups/969594296461197
Public park, all ages welcome. We'll BBQ some burgers and hotdogs and have a variety of snacks and drinks. Some vegan dogs also available, but limited quantities. Eneasz of The Bayesian Conspiracy will almost certainly be there, as will Matt Freeman co-founder of The Guild Of the Rose. We don't have any structured activities, just hanging out and conversation and watching kiddos run around. We have monthly meetups, anyone who attends this is welcome to come to those as well :)
Contact: embrodski@gmail.com
|
mAkFFQDpNjYPdQFKK_Denver_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 972
}
|
75e497c8-f32a-4c06-9e05-74547fb152a4
|
This year's Spring ACX Meetup everywhere in Barcelona.
Location: Parc de la Ciutadella – https://plus.codes/8FH495QP+6C
Group Link: https://www.meetup.com/effective-altruism-barcelona/
Contact: melanie.anne.brennan@gmail.com
|
QzxLDDRZishg4Aynr_Barcelona_–_ACX_Meetups_Everywhe.txt
|
{
"file_size": 226
}
|
2e3af592-057c-42d4-addc-d44f186f0b82
|
This year's Spring ACX Meetup everywhere in Kuala Lumpur.
Location: We'll be in Kings Hall Cafe @ Sec 13 (https://maps.app.goo.gl/HXKPbcMKhvRsb4ue8). Look for an "ACX meetup" sign. – https://plus.codes/6PM34J7R+R4
Contact: yi.yang.chua@gmail.com
|
YpwEQQhzKgDNBEFB8_Kuala_Lumpur_–_ACX_Meetups_Every.txt
|
{
"file_size": 247
}
|
8dad17fd-928a-42d7-a122-705a28969c72
|
This year's Spring ACX Meetup everywhere in Tokyo.
Location: Contact email for the address - location TBD in Meguro – https://plus.codes/8Q7XJPP5+48
Group Link: https://www.meetup.com/acx-tokyo/
Please join our google group! We email once a month to announce meetups.
Contact: rationalitysalon@gmail.com
|
rG4FghfiJo7dCYYZD_Tokyo_–_ACX_Meetups_Everywhere_S.txt
|
{
"file_size": 305
}
|
1bfb02f1-ec6e-4ad0-bfdb-05df6561106b
|
This year's Spring ACX Meetup everywhere in Geneva.
Location: CERN, restaurant 1 – https://plus.codes/8FR863J3+FP
In order to access CERN, you need to let me know in advance.
Contact: carlosr.giudice@gmail.com
|
k4vujAkzXv52RuFBs_Geneva_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 211
}
|
9567ea44-f0f9-4915-b9cc-394baa661f28
|
This year's Spring ACX Meetup everywhere in Madrid.
Location: La Casa Encendida (ground floor cafeteria) – https://plus.codes/8CGRC842+C2
Please RSVP on LessWrong so I know how many people are coming
Contact: javier.prieto.set@gmail.com
|
DCvr7TE5uJpkPJiWm_Madrid_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 238
}
|
77250c04-c49b-4784-9fb5-dcf9470d6588
|
This year's Spring ACX Meetup everywhere in Cologne.
Location: Marienweg 43, 50858 Köln (Cologne) – https://plus.codes/9F28WRMX+97
Group Link: https://www.lesswrong.com/groups/2QwpKyXvwiZ53G4HP
Contact: marcel_mueller@mail.de
|
qJZseFheoeotDpScu_Cologne_–_ACX_Meetups_Everywhere.txt
|
{
"file_size": 228
}
|
c21217bf-e47a-495f-bb4e-28d5336e557b
|
This year's Spring ACX Meetup everywhere in Hamburg.
Location: Eppendorfer Park. We'll be meeting just west of the pond in the middle of the park and then move to a spot within visible distance. I'll be wearing a bright orange jacket throughout.
I recommend bringing a towel or a picnic blanket or similar to sit on, in case the grass is wet from the previous days. I'll try to bring one or two to spare, but no guarantees.
Snacks and drinks are welcome but not provided; I recommend eating beforehand or bringing food.
In the event of rain we'll instead meet at LA CAFFÈTTERIA Café, which is in walking distance of Eppendorfer Park and seems to have ample enough space to not warrant a reservation. I'll be wearing the same bright orange jacket indoors. If the weather turns during the day we can also make the walk from Eppendorfer Park to the Cafe.
Please RSVP by email so I can keep you posted in case of location changes.
Contact: mittgfu+acx@gmail.com
|
p5kih7jF5TX4XrJ8j_Hamburg_–_ACX_Meetups_Everywhere.txt
|
{
"file_size": 972
}
|
5b39bf3e-bb6c-4776-b03e-a0c5e35f87a3
|
This year's Spring ACX Meetup everywhere in Pittsburgh.
Location: DEFAULT OUTDOOR LOCATION: CMU Campus, Jared L Cohon University Center, at the picnic tables outside the east entrance (the side of the building that faces the track). Look for the "ACX" banner. CONTINGENCY INDOOR LOCATION (in case of rain): Jared L Cohon University Center, Danforth Lounge (upstairs, 2nd floor) – https://plus.codes/87G2C3V5+6C
The Pittsburgh ACX group meets around once a month, with most meetups taking place around Shady or East Liberty. If you'd like to be notified about future meetups, email pghacx@gmail.com to be added to the mailing list.
Contact: pghacx@gmail.com
|
wYqs7CJENjAG3iko2_Pittsburgh_–_ACX_Meetups_Everywh.txt
|
{
"file_size": 660
}
|
bf147f2e-9f98-4457-9dad-d1974fd84832
|
This year's Spring ACX Meetup everywhere in Grass Valley.
Location: The prospector statue in Condon Park if the weather is nice, otherwise my house nearby (send an email for the address) – https://plus.codes/84FW6W8H+F4
Please RSVP by email or on LessWrong
Contact: Raelifin@gmail.com
|
5Rx5JeeXx8DBCAfFj_Grass_Valley_–_ACX_Meetups_Every.txt
|
{
"file_size": 286
}
|
8ee81ccb-63da-4410-89d6-6ae18f8be54e
|
This year's Spring ACX Meetup everywhere in Portland.
Location: 1548 NE 15th Ave, Portland, OR 97232 - There will be a large sign outside of a building with the print "Encorepreneur Cafe" on the outside. Call me at 513-432-3310 if you can't find it! – https://plus.codes/84QVG8MX+MV4
Group Link: https://www.meetup.com/portland-effective-altruism-and-rationality/
Please RSVP on Meetup so I know how much food to get.
Contact: scelarek@gmail.com
|
6eEFsQ3K9KCnb8pWm_Portland_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 447
}
|
d718b1c3-8f43-4035-bc09-2740e9f0d9e8
|
This year's Spring ACX Meetup everywhere in Tbilisi.
Location: https://f0rth.space – https://plus.codes/8HH6PQ4J+MJ
Contact: overfull_jailbird656@simplelogin.com
|
fqhGfJwf2kKhAEkvq_Tbilisi_–_ACX_Meetups_Everywhere.txt
|
{
"file_size": 163
}
|
16c905d6-34f1-4fe4-95fb-fe20c9af8377
|
This year's Spring ACX Meetup everywhere in Berkeley.
Location: 2740 Telegraph Avenue – https://plus.codes/849VVP5R+X5
Group Link: https://groups.google.com/g/bayarealesswrong
Held between Less.Online and Manifest 2, we expect a lot of out-of-town visitors. Kids are welcome, no pets please!
Contact: skyler@rationalitymeetups.org
|
mXd8aQ8FDYCXhMusx_Berkeley_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 332
}
|
b07818d9-5b7b-4cff-a4cd-4aa1bdafde14
|
This year's Spring ACX Meetup everywhere in Fort Meade.
Location: Burba Lake; Coordinator will *not* sponsor attendees to location – Email coordinator for precise location
Group Link: Email coordinator for group chat
Techies and family types alike are welcome. Title/position agnostic (wear comfortable clothes). 🦗 Czar note: meetup is on a government installation with controlled access; if you're not sure if you can attend you probably can't
Contact: meetup2024.exposure178@passinbox.com
|
ZGmHNLoiTsvnuqPWs_Fort_Meade_–_ACX_Meetups_Everywh.txt
|
{
"file_size": 495
}
|
1dc8b963-c3c3-4441-87d1-a8877bc8d573
|
This year's Spring ACX Meetup everywhere in Danbury.
Location: 255 White St, Danbury, CT 06810 – https://plus.codes/87H89HX7+VG
It's a bar/restaurant, there are tables so kids are allowed. They're known for their wings.
Contact: gemuka@my.bridgeport.edu
|
qfmWRXGaeGqRE3Rzk_Danbury_–_ACX_Meetups_Everywhere.txt
|
{
"file_size": 255
}
|
f559bd52-7428-4839-8d32-012f3334cea7
|
This year's Spring ACX Meetup everywhere in Chicago.
Location: We'll be in Grant Park just between the train tracks and Columbus on the north side of Balbo. There's a shaded area with some trees. – https://plus.codes/86HJV9FH+95
Group Link: https://chicagorationality.com
Contact: info@chicagorationality.com
|
L95LMfaXFd7fknB3M_Chicago_–_ACX_Meetups_Everywhere.txt
|
{
"file_size": 310
}
|
cb93f057-b46e-4f24-92df-7876c7e14f1a
|
This year's Spring ACX Meetup everywhere in Harrisburg.
Location: Zeroday Brewing Company Taproom, 925 N 3rd St, Harrisburg, PA 17102 – https://plus.codes/87G57487+R7
Group Link: https://www.lesswrong.com/groups/PXrLoKgiAyXEG2hLD
Contact: acxharrisburg@gmail.com
|
NbZrDXjeviWSRgc96_Harrisburg_–_ACX_Meetups_Everywh.txt
|
{
"file_size": 264
}
|
cc6d3c8f-08e3-4697-8e87-823dade5c9ee
|
This year's Spring ACX Meetup everywhere in Calgary.
Location: Side Street Pub: 1167 Kensington Crescent NW. I'll bring an "ACX" sign with red letters. – https://plus.codes/95373W26+R8G
Please RSVP on LessWrong
Contact: qwertie256@gmail.com
|
q9pMkvjRraBmtAyGB_Calgary_–_ACX_Meetups_Everywhere.txt
|
{
"file_size": 242
}
|
f0fb8fa2-8fbf-4c59-9d3c-60e69ec0d976
|
This year's Spring ACX Meetup everywhere in Singapore.
Location: Maxwell (will send more details in email) – https://plus.codes/6PH57RJV+5W
Group Link: mindupgrade -at- protonmail -dot- com
Feel free to send an email with topic sentences that you are interested in or would like to discuss with others. Topic sentences will be collated and shared with the other attendees.
Contact: mindupgrade@protonmail.com
|
crEpKvZsMCshCKxQ9_Singapore_–_ACX_Meetups_Everywhe.txt
|
{
"file_size": 423
}
|
b6847ddd-054e-4c3a-9afd-48adb349379c
|
This year's Spring ACX Meetup everywhere in Corvallis.
Location: Laughing Planet, downtown Corvallis, Oregon. – https://plus.codes/84PRHP7R+R7C
Group Link: Willamette Valley EAs and Rationalists: https://discord.gg/uBCcD7SxUa
Kids/babies welcome.
Contact: kbitikofer@gmail.com
|
2YymBYcaLzTMqWi8p_Corvallis_–_ACX_Meetups_Everywhe.txt
|
{
"file_size": 278
}
|
952c5ec6-88cf-4387-ab8e-5bb4b949f041
|
This year's Spring ACX Meetup everywhere in Mérida.
Location: Centro de Estudios e Investigaciones Sociales y Culturales Efrain Calderon, calle 38 No. 453 por 35 y 37 Barrio Obrero: Jesús Carranza, Mérida, Mexico – https://plus.codes/76GGX9JV+W6
Group Link: https://www.facebook.com/groups/lesswrongmerida
Please RSVP by email
Contact: silviafidelina@hotmail.com
|
CqNEP5BShYuCmeMiQ_Mérida_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 373
}
|
9e996676-5bd4-4d61-8092-2c9edf2fbed1
|
This year's Spring ACX Meetup everywhere in Esbjerg.
Location: The meetup will be at a café named Bean Machine, at Kronprinsensgade 99, 6700 Esbjerg. Outside the café there will be a little sign with "ACX Meetup" written on it, and an additional sign will be at the relevant table. – https://plus.codes/9F7CFCFX+G4
I will be there from 10 o'clock in the morning. If no one shows up, I will be gone by 2 in the afternoon, when the café closes. But there is a place right next to the café, named Spiritusklubben, where the meetup can continue, or we might go to my private home nearby, depending on what we feel like.
Contact: martinpetersen64.mp@outlook.dk
|
sjdj5eKapFwEmpamJ_Esbjerg_–_ACX_Meetups_Everywhere.txt
|
{
"file_size": 664
}
|
13fbb92b-53ce-41b6-81d4-dbdb99037f2e
|
This year's Spring ACX Meetup everywhere in Nizhny Novgorod.
Location: We will be sitting on benches next to the stage in the center of Pushkin Park. There will be an "ACX MEETUP" sign – https://plus.codes/9H858X5W+FP
Contact: niya3@mail.ru
|
zrW98a3oHGBssmRLP_Nizhny_Novgorod_–_ACX_Meetups_Ev.txt
|
{
"file_size": 242
}
|
e4f93cb5-cbe9-4128-b59b-f8632dd6def5
|
This year's Spring ACX Meetup everywhere in Tel Aviv.
Location: Sarona Park, grass area close to the Benedict restaurant, will have ACX sign and red balloons – https://plus.codes/8G4P3QCP+MJ9
Group Link: https://www.facebook.com/groups/5389163051129361
Everyone is welcome! Feel free to bring snacks.
Contact: inbar192@gmail.com
|
e2nhR66dRS8WGS4Tx_Tel_Aviv_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 330
}
|
fbdebedc-f97f-4ebb-99b2-c06738c4eac3
|
This year's Spring ACX Meetup everywhere in West Palm Beach.
Location: Grandview Public Market. 1401 Clare Ave, West Palm Beach, FL 33401. We'll be at the northeast outside area, sitting at a table with an ACX MEETUP sign on it. Parking is free at an adjacent lot, and there may also be a free valet service. – https://plus.codes/76RXMWXP+GH
Group Link: https://discord.gg/tDf8fYPRRP
Hosted by the South Florida ACX group that also does meetups in Palm Beach and Broward communities such as Boca Raton, Boynton Beach, Delray, and many others. Come join our Discord, we're always welcoming!
Contact: chuckwilson477@yahoo.com
|
eDaxKNqkx5W7X2Rto_West_Palm_Beach_–_ACX_Meetups_Ev.txt
|
{
"file_size": 624
}
|
139a3e0f-3ee2-43d9-8a06-cbc63c8cb350
|
This year's Spring ACX Meetup everywhere in Sydney.
Location: Club Sydney (RSL Sydney), 565 George St, Sydney NSW 2000. Instructions: entry requires photo ID. We meet on Level 2, in the Chinese restaurant, in the glassed-off section. – https://plus.codes/4RRH46F4+98
Group Link: https://www.meetup.com/rationalists_of_sydney/
Contact: singkong+rat@gmail.com
|
3XaqNFn6YyumLStzf_Sydney_–_ACX_Meetups_Everywhere_.txt
|
{
"file_size": 352
}
|
eb356d7e-25ce-4ac5-a629-5a220bbb0678
|
This year's Spring ACX Meetup everywhere in New Orleans.
Location: Petite Clouet Cafe – https://plus.codes/76XFXX74+H7
Group Link: http://philosophers.group
Feel free to reach out to me on Signal. My name: blake.1111
Contact: blake@philosophers.group
|
ytDe4Dn3deBvoe9PA_New_Orleans_–_ACX_Meetups_Everyw.txt
|
{
"file_size": 252
}
|
806ebb15-02be-4899-b333-63484a08a5da
|
This year's Spring ACX Meetup everywhere in Columbus.
Location: Clifton Park Shelterhouse, Jeffrey Park, Bexley. We will be at one of the tables with an ACX sign. – https://plus.codes/86FVX3C3+QF
Please send an email if you'd like to join our mailing list for future invitations.
Contact: russell.emmer@gmail.com
|
DyDNxy4aK6a9EXwvP_Columbus_–_ACX_Meetups_Everywher.txt
|
{
"file_size": 314
}
|