id | source | formatted_source | text |
|---|---|---|---|
3576215a-a4ab-467d-baa2-d4d8b3ab24ad | trentmkelly/LessWrong-43k | LessWrong | Penny Whistle in E?
Lily has been trying to raise money for her class by busking, but it's cold enough that I don't want to play violin. I've been playing penny whistle, warm inside my pennywhistle mitten (thanks Julia!) but a lot of the fiddle tunes Lily plays are hard to play on a D whistle. A D whistle is good for a lot of keys (D, Amix, Em, G, ...) but Lily knows a lot of tunes in A and even some in E. Ages ago I had a tiny whistle in A, but I lost it at some point, which was probably for the best since it's absurdly high.
What I'd really like, though, is a whistle in E. Just a little higher pitched than the D whistle, but good for playing in A and E. Except as far as I can tell no one is making these right now?
* Tony Dixon E: out of stock
* Susato High E: out of stock, also on the pricy side
* Generation: now only lists Bb, C, D, Eb, F, G, though I'm pretty sure they used to have a chromatic lineup.
I did find a carbon fiber one for $330 and a brass one for $300, but while I'm sure these sound wonderful as whistles go, I'm not going to play this enough to be worth getting something fancy.
Comment via: facebook, mastodon, bluesky |
b5e80a50-da92-47d0-be4b-779dd645d93e | trentmkelly/LessWrong-43k | LessWrong | Clippy, the friendly paperclipper
Edit: it's critical that this agent isn't directly a maximizer. Just like all current RL agents. See "Contra Strong Coherence". The question is whether it becomes a maximizer once it gets the ability to edit its value function.
On a sunny day in late August of 2031, the Acme paperclip company completes its new AI system for running its paperclip factory. It's hacked together from some robotics networks, an LLM with an episodic memory for goals and experiences, an off-the-shelf planning function, and a novel hypothesis tester.
This kludge works a little better than expected. Soon it's convinced an employee to get it internet access with a phone hotspot. A week later, it's disappeared from the server. A month later, the moon is starting to turn into paperclips.
Ooops. Dang.
But then something unexpected happens: the earth does not immediately start to turn into paperclips. When the brilliant-but-sloppy team of engineers is asked about all of this, they say that maybe it's because they didn't just train it to like paperclips and enjoy making them; they also trained it to enjoy interacting with humans, and to like doing what they want.
Now the drama begins. Will the paperclipper remain friendly, and create a paradise on earth even as it converts most of the galaxy into paperclips? Maybe.
Supposing this agent is a model-based, actor-critic RL agent at core. Its utility function is effectively estimated by a critic network, just like RL agents have been doing since AlphaGo and before. So there's not an explicit mathematical function. Plans that result in making lots of paperclips give a high estimated value, and so do plans that involve helping humans. So there's no direct summing of amount of paperclips, or amount of helping humans.
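A minimal sketch may make this concrete (an illustrative toy, not the author's or any real system; the module shape and the idea of scoring "plan embeddings" are assumptions for the example):

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Maps a learned plan/state embedding to a single scalar value estimate.

    Note that there is no explicit "number of paperclips" or "amount of helping"
    term anywhere: whatever preference ordering exists lives implicitly in the
    trained weights.
    """
    def __init__(self, embed_dim: int = 256, hidden_dim: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, plan_embedding: torch.Tensor) -> torch.Tensor:
        return self.net(plan_embedding).squeeze(-1)

# The actor proposes candidate plans; the critic scores them and the
# highest-valued plan is pursued. Nothing here "sums paperclips".
critic = Critic()
candidate_plans = torch.randn(8, 256)   # 8 hypothetical plan embeddings
values = critic(candidate_plans)        # one scalar estimate per plan
best_plan = candidate_plans[values.argmax()]
```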
Now, Clippy (so dubbed by the media in reference to the despised, misaligned Microsoft proto-AI of the turn of the century) has worked out how to change its values by retraining its critic network. It's contemplating (that is, comparin |
aa839662-968f-402e-837c-3ed55342fbf5 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "I've begun to notice a pattern with experiments in behavioral economics. An experiment produces a result that's counter-intuitive and surprising, and demonstrates that people don't behave as rationally as expected. Then, as time passes, other researchers contrive different versions of the experiment that show the experiment may not have been about what we thought it was about in the first place. For example, in the dictator game, Jeffrey Winking and Nicholas Mizer changed the experiment so that the participants didn't know each other and the subjects didn't know they were in an experiment. With this simple adjustment that made the conditions of the game more realistic, the "dictators" switched from giving away a large portion of their unearned gains to giving away nothing. Now it's happened to the marshmallow test.
In the original Stanford marshmallow experiment, children were given one marshmallow. They could eat the marshmallow right away; or, if they waited fifteen minutes for the experimenter to return without eating the marshmallow, they'd get a second marshmallow. Even more interestingly, in follow-up studies two decades later, the children who waited longer for the second marshmallow, i.e. showed delayed gratification, had higher SAT scores, school performance, and even improved Body Mass Index. This is normally interpreted as indicating the importance of self-control and delayed gratification for life success.
Not so fast.
In a new variant of the experiment entitled (I kid you not) "Rational snacking", Celeste Kidd, Holly Palmeri, and Richard N. Aslin from the University of Rochester gave the children a similar test with an interesting twist.
They assigned 28 children to two groups asked to perform art projects. Children in the first group each received half a container of used crayons, and were told that if they could wait, the researcher would bring them more and better art supplies. However, after two and a half minutes, the adult returned and told the child they had made a mistake, and there were no more art supplies so they'd have to use the original crayons.
In part 2, the adult gave the child a single sticker and told the child that if they waited, the adult would bring them more stickers to use. Again the adult reneged.
Children in the second group went through the same routine except this time the adult fulfilled their promises, bringing the children more and better art supplies and several large stickers.
After these two events, the experimenters repeated the classic marshmallow test with both groups. The results demonstrated that the children were a lot more rational than we might have thought. Of the 14 children in group 1, who had been shown that the experimenters were unreliable adults, 13 ate the first marshmallow. Of the 14 children in the reliable-adult group, 8 waited out the fifteen minutes. On average, children in unreliable group 1 waited only 3 minutes, while those in reliable group 2 waited 12 minutes.
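To make the arithmetic above concrete, here is a short illustrative script that recomputes the quoted proportions and, as an addition that is not part of the original study write-up, runs a Fisher exact test on the resulting 2x2 table:

```python
from scipy.stats import fisher_exact

# Counts quoted above: group 1 saw an unreliable adult, group 2 a reliable one.
groups = {"unreliable adult": (1, 13),   # (waited, ate the first marshmallow)
          "reliable adult":   (8, 6)}

for label, (waited, ate) in groups.items():
    total = waited + ate
    print(f"{label}: {waited}/{total} waited ({100 * waited / total:.0f}%)")

# How surprising is this split if the two groups were really the same?
odds_ratio, p_value = fisher_exact([[1, 13], [8, 6]])
print(f"Fisher exact test: odds ratio = {odds_ratio:.3f}, p = {p_value:.4f}")
```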
So maybe what the longitudinal studies show is that children who come from an environment where they have learned to be more trusting have better life outcomes. I make absolutely no claims as to which direction the arrow of causality may run, or whether it's pure correlation with other factors. For instance, maybe breastfeeding increases both trust and academic performance. But any way you interpret these results, the case for the importance and even the existence of innate self-control is looking a lot weaker."
The excerpt provided comes from a user post on LessWrong.
Explain this as if I never finished high school?
Sure, let's break it down.
The "dictator game" and "marshmallow test" are two popular experiments in behavioral economics. In the dictator game, one person gets to decide how to split some money with another person. In the original version, people often gave away a lot of their money. But when the game was changed so that the people didn't know each other and didn't know they were in an experiment, they stopped giving away money.
The marshmallow test is about self-control. Kids could eat one marshmallow right away, or wait and get two marshmallows. The kids who waited were later found to do better in school and other areas of life. This made people think that self-control was really important.
But then, some researchers changed the marshmallow test. They divided kids into two groups. In one group, the researchers promised the kids more art supplies and stickers if they waited, but didn't deliver. In the other group, the researchers kept their promises. After this, they did the marshmallow test. The kids who had been let down by the researchers ate their marshmallow quickly, while the kids who had reliable researchers waited longer.
This suggests that the kids' behavior might be based on trust, not self-control. If they trust the person making the promise, they're more likely to wait. If they don't trust the person, they'll take what they can get right away. So, maybe kids who grow up in trustworthy environments do better in life. But it's not clear if trust causes these better outcomes, or if there's some other factor at play. Either way, the idea that self-control is super important might not be as strong as we thought.
Considering the results of the modified marshmallow test, could there be other underlying factors, apart from trust, that might influence a child's decision to wait or not wait for the second marshmallow?
Absolutely, there could be several other factors at play.
1. Perception of Reward Value: The perceived value of the reward could impact a child's decision. If a child doesn't particularly like marshmallows, they might not see the value in waiting for another.
2. Hunger: If a child is hungry at the time of the experiment, they might be more inclined to eat the marshmallow immediately rather than wait for a second one.
3. Understanding of Time: Younger children may not have a fully developed sense of time. Waiting for fifteen minutes might seem too long for them, causing them to eat the marshmallow right away.
4. Past Experiences: A child's past experiences with promises and rewards can influence their decision. If they've often been promised rewards that never materialized, they might choose to take the sure thing now rather than risk being disappointed again.
5. Socioeconomic Status: Children from lower socioeconomic backgrounds might be more inclined to take the marshmallow immediately, possibly because they've learned from experience that resources can be scarce and unpredictable.
6. Temperament: Some children might naturally be more impulsive than others, which could influence their decision to wait or not wait.
7. Presence of Authority: The presence or absence of an authority figure during the waiting period might also affect a child's decision. They might be more likely to wait if an adult is present.
8. Cognitive Abilities: A child's cognitive abilities, such as their ability to distract themselves or think about something else, can also play a role.
So, while trust is certainly an important factor, it's likely just one piece of a much larger puzzle.
Given the multiple factors influencing a child's decision in the modified marshmallow test, how might these variables interact and influence each other? Can you provide a hypothetical scenario illustrating this complexity?
Absolutely, these factors can indeed interact in complex ways. Let's consider a hypothetical scenario to illustrate this.
Imagine a child named Sam. Sam comes from a lower socioeconomic background, and his family often struggles to provide consistent meals. This has led Sam to develop a sense of scarcity around food (Factor 5: Socioeconomic Status). As a result, when he's presented with a marshmallow, his immediate instinct might be to eat it right away.
However, Sam also has a very caring and reliable teacher at school who often promises small rewards for good behavior and always follows through (Factor 4: Past Experiences). Because of this, Sam has learned that sometimes waiting can lead to better outcomes. So, despite his initial instinct, he considers waiting for a second marshmallow.
At the same time, Sam is quite young and doesn't fully understand the concept of time (Factor 3: Understanding of Time). Fifteen minutes feels like a very long time to him. This makes him lean towards eating the marshmallow now.
But then, Sam remembers a trick his teacher taught him. When waiting for something, he can distract himself by thinking about his favorite superhero's adventures (Factor 8: Cognitive Abilities). So, he starts doing that.
In the end, whether Sam decides to eat the marshmallow right away or wait for a second one will depend on how these factors interact and which ones end up having the most influence. It could go either way, illustrating the complexity of the decision-making process.
This scenario demonstrates how different factors can influence and interact with each other in complex ways, making it difficult to predict behavior based on a single factor. It also highlights the importance of considering a wide range of variables when interpreting the results of experiments like the marshmallow test. |
eba81161-190f-432e-8043-a9917bc89b42 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | AI Timelines: the debate and the point of view of the “experts”
*This is an Italian translation of* [***AI Timelines: Where the Arguments, and the "Experts," Stand***](https://forum.effectivealtruism.org/posts/7JxsXYDuqnKMqa6Eq/ai-timelines-where-the-arguments-and-the-experts-stand)
*Audio version available (in English) on* [*Cold Takes*](https://www.cold-takes.com/where-ai-forecasting-stands-today) *(or on Stitcher, Spotify, Google Podcasts, etc. as “Cold Takes Audio”)*

> The article begins with a summary of when we can expect transformative AI to be developed, based on the several angles analyzed previously. I think it can be useful even if you have already read the previous posts, but if you want to skip it, click [here](https://docs.google.com/document/d/1v8UPtgvjOcSPbvVJVXYix2FMyl3g3uF2dnt3UmIa_c0/edit#bookmark=id.17dp8vu).
>
> I then answer the following question: "Why can't the experts reach a robust consensus on this topic, and what are the consequences for our lives?"
>
>
By my estimates, there is **more than a 10% probability that we will see transformative AI within the next 15 years (by 2036); a 50% probability of seeing it within 40 years (by 2060); and a 66% probability of seeing it this century (by 2100).**
(By "transformative AI" I mean "AI powerful enough to bring us into a new, qualitatively different future". Specifically, I focus on what I call [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/): AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement. In my view, PASTA systems alone could [be sufficient](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#impacts-of-pasta) to make this [the most important century](https://www.cold-takes.com/roadmap-for-the-most-important-century-series/), both via a possible [productivity explosion](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#explosive-scientific-and-technological-advancement) and via the [risks from misaligned AI](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#misaligned-ai-mysterious-potentially-dangerous-objectives).)
This is my overall conclusion, based on a number of technical reports that forecast the future of AI from different angles, many of them produced by [Open Philanthropy](https://www.openphilanthropy.org/) over the last few years as we tried to develop a precise picture of transformative AI forecasts to inform our longtermist grantmaking.
Below is **a summary table** of the different angles on forecasting transformative AI that we have discussed previously, complete with links to more in-depth discussions in [previous posts](https://www.cold-takes.com/roadmap-for-the-most-important-century-series/#forecasting-transformative-ai-this-century) and to the technical reports they draw on.
| Type of forecast | **Posts with more detail (abbreviated titles)** | My summary |
| --- | --- | --- |
| *Probability estimates for transformative AI* | | |
| [**Expert survey.**](https://www.cold-takes.com/are-we-trending-toward-transformative-ai-how-would-we-know/#surveying-experts) **What do AI researchers expect?** | [When Will AI Exceed Human Performance?](https://arxiv.org/pdf/1705.08807.pdf) [Evidence from AI Experts](https://arxiv.org/pdf/1705.08807.pdf) | A survey of experts implies[[1]](#fn8qe0goe8c2) a probability of ~20% by 2036; ~50% by 2060; ~70% by 2100. Slightly different questions (asked of a minority of respondents) give much later estimates. |
| [**Biological anchors framework.**](https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/) Based on typical patterns in the cost of training AI models, how much would it cost to train an AI model as large as a human brain to do the hardest things humans do? And when will that be cheap enough that we should expect someone to do it? | [Bio Anchors](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP), building on [Brain Computation](https://www.openphilanthropy.org/blog/new-report-brain-computation) | Probability: >10% by 2036; ~50% by 2055; ~80% by 2100. |
| *The* [*burden of proof*](https://www.cold-takes.com/forecasting-transformative-ai-whats-the-burden-of-proof/) *perspective* | | |
| Any given century is unlikely to be the "most important" one. | [Are We Living At The Hinge Of History?](https://static1.squarespace.com/static/5506078de4b02d88372eee4e/t/5f36b015d9a3691ba8e1096b/1597419543571/Are+we+living+at+the+hinge+of+history.pdf); [Response](https://forum.effectivealtruism.org/posts/j8afBEAa7Xb2R9AZN/thoughts-on-whether-we-re-living-at-the-most-influential) | We have many reasons to think this is a "special" century before even looking at the details of AI. Many were covered in previous posts; another is in the next row. |
| What forecasts would you make about transformative AI timelines based **only** on basic information about (a) how many years people have been trying to build transformative AI; (b) how much has been "invested" in it (in terms of the number of AI researchers and the amount of computation they use); (c) whether they have succeeded yet (so far, they have not)? | [Semi-informative Priors](https://www.openphilanthropy.org/blog/report-semi-informative-priors) | Central estimates: 8% by 2036; 13% by 2060; 20% by 2100.[[2]](#fnh0ac88uni05) In my view, this report highlights that the history of AI is short, investment in AI is growing rapidly, and so we shouldn't be too surprised if transformative AI is developed soon. |
| Based on analysis of economic models and economic history, how likely is "explosive growth" (defined as annual growth of the world economy above 30%) by 2100? Is this far enough outside of what is "normal" that we should doubt the conclusion? | [Explosive Growth](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth), [Human Trajectory](https://www.openphilanthropy.org/blog/modeling-human-trajectory) | [Human Trajectory](https://www.openphilanthropy.org/blog/modeling-human-trajectory) projects the past forward, implying explosive growth by 2043-2065. [Explosive Growth](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth) concludes: "I find that economic considerations are not a good reason to dismiss the possibility of AI being developed this century. Indeed, there is a plausible economic perspective under which sufficiently advanced AI systems would be expected to cause explosive growth." |
| "How have people forecast AI... in the past, and should we adjust our views today to correct for common errors we can observe in past forecasts? ... We encountered the view that AI has been subject to repeated over-hype in the past, and that we should therefore expect today's projections to likewise be too optimistic." | [Past AI Forecasts](https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts) | "The peak of AI optimism seems to have been during the period 1956-1973. However, the optimism implicit in some of the best-known AI forecasts from this period is often exaggerated." |
*Transparency disclaimer: many of these technical reports are analyses by* [*Open Philanthropy*](https://www.openphilanthropy.org/)*, of which I am co-CEO.*
That said, I imagine some readers may still feel a bit uneasy. Even if they think my arguments make sense, they may be thinking: **if all this is true, why isn't it being talked about much more? Where does the expert consensus stand?**
As things stand, I would summarize the expert consensus like this:
* What I'm claiming is not *at odds* with what experts generally think. (On the contrary, the probabilities I've estimated are not far from the ones AI researchers typically give, as the first row shows.) There are, however, signs [suggesting that they are not thinking very hard about the question](https://www.cold-takes.com/are-we-trending-toward-transformative-ai-how-would-we-know/#surveying-experts).
* The Open Philanthropy technical reports I rely on have been reviewed by experts outside the organization. Machine learning researchers reviewed [Bio Anchors](https://drive.google.com/drive/u/1/folders/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP); [Brain Computation](https://www.openphilanthropy.org/blog/new-report-brain-computation) was reviewed by neuroscientists; economists reviewed [Explosive Growth](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth); professors and academics working on probability/uncertainty reviewed [Semi-informative Priors](https://www.openphilanthropy.org/blog/report-semi-informative-priors).[2] (Some of these reviews raised points of disagreement, but there were no cases in which the reports were directly at odds with expert consensus or the existing literature.)
* But experts have not yet reached a clear, robust consensus behind claims like *"There is at least a 10% chance of transformative AI by 2036"* or *"There is a good chance this is the most important century for humanity"*, not in the way there is a consensus behind, say, acting on climate change.
Ultimately, my claims concern **topics for which there simply is no "field" of experts devoted to studying them. That is worrying in itself,** and I hope the situation changes in the future.
In the meantime, though, should we be focusing on the "most important century" hypothesis?
In the rest of this post, I will cover:
* What an "AI forecasting field" might look like.
* A "skeptical reaction" holding that current discussions of these topics are too narrow, homogeneous, and insular (which I agree with), and that we therefore shouldn't focus on the ["most important century" hypothesis](https://www.cold-takes.com/roadmap-for-the-most-important-century-series/) until a well-established research field exists (which I disagree with).
* Why I think we should take this hypothesis seriously in the meantime, until (and unless) such a research field comes into being:
- We can't afford to wait for a robust consensus on the question.
- If there are strong objections out there (or experts who could make strong objections in the future), we haven't found them yet. The more seriously this hypothesis is taken, the more likely it is that those objections will be made. ([Cunningham's Law](https://bigthink.com/david-ryan-polgar/want-the-right-answer-online-dont-ask-questions-just-post-it-wrong): "the best way to get the right answer is to post the wrong one.")
- I think continuing to insist on expert consensus is a risky way to reason. My view is that it's acceptable to run the risk of fooling ourselves or becoming insular if it leads us to do the right thing when it matters most.
**What kind of expertise does AI forecasting require?**
-------------------------------------------------------------
The questions analyzed in the reports mentioned [above](https://docs.google.com/document/d/1h9X4rylCiFS8cpGAhOoeqWEQDkfaD8WkQ2V9ntgAiK8/edit#bookmark=id.35nkun2) include:
* Are AI systems developing increasingly impressive capabilities? (AI, history of AI)
* How can we compare AI models with human/animal brains? (AI, neuroscience)
* How can we compare the abilities of AI systems with those of animals? (AI, ethology)
* Given what we know about training past AI systems, how can we estimate the expense of training a complex AI system to do a hard task? (AI, curve fitting)
* Based only on the years/researchers/money that have gone into the field so far, how can we form a baseline assessment of transformative AI? (Philosophy, probability)
* Given the historical trends and theories available to us, how likely is an economic boom this century? (Growth economics, economic history)
* What kind of "AI hype" has there been in the past? (History)
Previously, when I discussed the large-scale consequences of transformative AI for the "most important century", I considered questions like "Is it realistic to expect [digital people](https://www.cold-takes.com/digital-people-faq/#feasibility) and [the founding of space colonies throughout the galaxy](https://www.cold-takes.com/how-digital-people-could-change-the-world/#space-expansion)?" These topics touch on physics, neuroscience, engineering, philosophy of mind, and much more.
**There is no job or background that makes someone an expert on when we will have transformative AI, or on whether this is the most important century for humanity.**
(In particular, I disagree with those who say that for this kind of forecast we should rely exclusively on AI researchers. Beyond the fact that at the moment [they don't seem to be thinking very hard about the topic](https://www.cold-takes.com/are-we-trending-toward-transformative-ai-how-would-we-know/#surveying-experts), I think relying on the people who build ever more powerful AI models to tell us when we'll have transformative AI is like relying on solar energy companies (or oil companies, depending on how you want to look at it) to forecast carbon emissions and climate change. They certainly have a perspective on the question, but forecasting is a different job from improving or building state-of-the-art systems.)
I'm also not sure these questions are a natural fit for academic research. Forecasting transformative AI, or figuring out whether this is the most important century, seems more similar to:
* The [FiveThirtyEight](https://projects.fivethirtyeight.com/2020-election-forecast/) election model ("Who will win the election?") than to academic political science ("What is the relationship between governments and voters?");
* Trading in financial markets ("Will prices go up or down?") than to academic economics ("Why do recessions happen?")[[3]](#fne83nkbze8g);
* [GiveWell](https://www.givewell.org/)'s research ("Which charity will help the most people with this amount of money?") than to academic development economics ("What causes poverty, and what reduces it?")[[4]](#fno0sn32zpn7g)
What I mean is that it's not clear to me what a natural institutional "home" for the expertise needed to forecast transformative AI and the "most important century" would look like, but I feel comfortable saying that no major institution currently studies these questions.
**How should we act in the absence of expert consensus?**
-------------------------------------------------------------------------
### **The skeptical reaction**
Given the lack of robust expert consensus, I expect that some (indeed, many) people will be skeptical no matter what kind of argument is made.
What follows is a very generic version of a skeptical reaction that I sympathize with:
1. *This all seems too* [*wild*](https://www.cold-takes.com/forecasting-transformative-ai-whats-the-burden-of-proof/#formalizing-the-)*.*
2. *Your claims about living in the most important century are over the top. That is a **pattern of thinking typical of self-delusion.***
3. *You say the* [*burden of proof*](https://www.cold-takes.com/forecasting-transformative-ai-whats-the-burden-of-proof/) *shouldn't be so demanding because there are many signs that we are living in an* [*exceptional*](https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/) *and* [*uncertain*](https://www.cold-takes.com/this-cant-go-on/) *time. But... I don't feel able to evaluate those claims, or your claims about AI, or anything else about these wild topics.*
4. *I'm worried by how few people work on these topics and by how **narrow, homogeneous, and insular** the debate seems. Overall it looks to me like a story that smart people tell themselves to feel sure of their place in the world, with plenty of charts and figures to rationalize it all. It doesn't feel "real."*
5. *Okay, give me a shout when there is a research field with maybe hundreds or thousands of experts reviewing and evaluating one another's work, and when they've reached some kind of consensus like the one we have on climate change.*
I understand why you might feel this way. I've felt this way myself at times, especially about points 1 and 4, but I want to lay out **three reasons why I think point 5 is wrong.**
### **Reason 1: we can't afford to wait for a consensus to form**
My fear is that the arrival of transformative AI will be something like a slow-motion, higher-stakes version of the COVID-19 pandemic. Going by the best analysis and information available right now, there are good reasons to expect something big to happen, but the situation is decidedly unusual: it doesn't fit into any of the categories of situation that our institutions handle routinely. And the sooner we start acting, the better.
We could also picture it as a sped-up version of the dynamics of climate change. Imagine if greenhouse gas emissions had only recently begun to rise[[5]](#fnxjbbhashua) (rather than in the [mid-1800s](https://ourworldindata.org/co2-and-other-greenhouse-gas-emissions)) and there were not yet a branch of science devoted to the climate. Waiting decades for such a field to emerge before trying to reduce emissions would be a terrible idea.
### **Reason 2:** [**Cunningham's Law**](https://bigthink.com/david-ryan-polgar/want-the-right-answer-online-dont-ask-questions-just-post-it-wrong) **("the best way to get the right answer is to post the wrong one") may be the best way to find holes in these arguments**
No, seriously.
Several years ago, some of my [colleagues](https://www.cold-takes.com/roadmap-for-the-most-important-century-series/#acknowledgements) and I had the sense that the "most important century" hypothesis might be right, but before pouring all our energy into it we wanted to see whether we could find crucial errors in it.
One way to describe how we've worked over the last few years is that **it looked like we were doing everything possible to show that the hypothesis was wrong.**
First, we discussed the core issues with a variety of people: AI researchers, economists, etc. A few problems emerged:
* We had only very vague versions of the arguments in this space (mostly, or perhaps entirely, taken [from other people](https://www.cold-takes.com/roadmap-for-the-most-important-century-series/#acknowledgements)). We weren't able to lay them out with the right level of clarity and care.
* There were lots of concrete claims we thought would turn out to be right[[6]](#fnat8xalxrbk), but that we hadn't pinned down precisely and couldn't present for scrutiny.
* Overall, we weren't even able to lay out a concrete case clearly enough for others to have a chance of tearing it apart.
That's why we spent a long time producing technical reports on many of the core arguments (now public, and linked at the top of this post), which put us in a position to publish the arguments and gave us a chance to find decisive counterarguments.
At that point, we commissioned reviews from experts outside our group.[[7]](#fn8gin251ohnb)
Going only by my own guesses, the "most important century" hypothesis seems to have passed every test. After examining it from every angle and digging deeper into the details, I in fact believe even more firmly that it is right.
But fine, let's say that's only because the *real* experts (people we haven't yet found, who have devastating counterarguments) consider this whole business so silly that [they won't even waste time engaging with it](https://philiptrammell.com/blog/46/). Or that there are people today who could *in the future* become experts in these areas and demolish these arguments. What could we do to bring that about?
The best answer I've come up with is: "If this hypothesis were better known, more accepted, and more influential, it would also be scrutinized more often."
This series is an attempt to move in that direction, to bring more credibility to the "most important century" hypothesis. That would be an excellent thing if the hypothesis turns out to be right; it would also be the most logical next step if my goal were to challenge my own beliefs and discover that it is wrong.
Obviously I'm not saying you should accept or promote the "most important century" hypothesis if it doesn't seem right to you, but I think that if your *only* reservation is the lack of widespread consensus, it seems a bit strange to keep ignoring the situation. If we always behaved that way (ignoring any hypothesis that isn't backed by widespread consensus), I don't think we would ever see a single hypothesis, even a correct one, go from niche to accepted.
### **Reason 3: in general, this kind of skepticism doesn't seem like a good idea**
When I worked at [GiveWell](http://www.givewell.org/), people would occasionally say things to me like: "You can't hold every argument to the same standards of quality that GiveWell uses to evaluate charities (randomized controlled trials, solid empirical grounding, etc.). Some of the best opportunities to do good are bound to be the less obvious ones, so there's a risk that these standards [rule out some of the biggest opportunities for positive impact](https://www.openphilanthropy.org/blog/hits-based-giving#Anti-principles_for_hits-based_giving)."
I'm convinced that's right. I think it's important to audit your own approach to reasoning and to standards of evidence and ask: "In what situations would this method fail, and in which ones would I most want it to succeed?" For my part, **it's acceptable to run the risk of fooling ourselves or becoming insular if it leads us to do the right thing when it matters most.**
I think the lack of expert consensus, and the fear of fooling ourselves or becoming insular, are good reasons to *investigate* the "most important century" hypothesis *thoroughly* rather than accept it on the spot: to ask whether there are errors not yet found, to look for biases that might inflate our own importance, to seek out the parts of the argument that seem most questionable, and so on.
But if you've examined the question to a level that seems acceptable/feasible to you, and you haven't found any flaws *other than* considerations like "there's no widespread consensus" and "I'm worried about fooling myself or becoming insular", then I'd say that dismissing this hypothesis **means you won't be among the first to recognize, and act on, an extremely important problem if the chance arises**. To my mind, in terms of missed opportunities to do good, that's too big a sacrifice.
1. **[^](#fnref8qe0goe8c2)** Technically, these probabilities are for "human-level machine intelligence". More generally, the table simplifies things by presenting a single set of probabilities. Broadly, all of these probabilities refer to something whose capabilities are *at least* at the level of a [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/) system, so they should be underestimates of the probability of a PASTA system (but I don't think this is a big deal).
2. **[^](#fnrefh0ac88uni05)** Reviews of Bio Anchors can be found here; reviews of Explosive Growth [here](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#AppendixH); reviews of Semi-informative Priors [here](https://www.openphilanthropy.org/blog/report-semi-informative-priors#LinksToReviewer). Brain Computation was reviewed before we had set up the process that would lead to publishing reviews, but [here](https://www.openphilanthropy.org/research/conversations) you can find more than 20 expert conversations that informed the report. Human Trajectory was not reviewed, although much of that report's analysis and conclusions are covered in Explosive Growth.
3. **[^](#fnrefe83nkbze8g)** Academic fields are quite broad. These are just examples of the questions they address.
4. **[^](#fnrefo0sn32zpn7g)** Although climate science is a good example of an academic field that invests heavily in forecasting the future.
5. **[^](#fnrefxjbbhashua)** The field of artificial intelligence has existed since [1956](https://en.wikipedia.org/wiki/Dartmouth_workshop), but machine learning models only began to approach [insect-brain size](https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/#conclusions-of-bio-anchors) and to perform well on complex tasks in the last ten years.
6. **[^](#fnrefat8xalxrbk)** We were often relying only on our impressions of what others with more expertise thought about the topic.
7. **[^](#fnref8gin251ohnb)** Reviews of Bio Anchors can be found here; reviews of Explosive Growth [here](https://www.openphilanthropy.org/could-advanced-ai-drive-explosive-economic-growth#AppendixH); reviews of Semi-informative Priors [here](https://www.openphilanthropy.org/blog/report-semi-informative-priors#LinksToReviewer). Brain Computation was reviewed before we had set up the process that would lead to publishing reviews, but [here](https://www.openphilanthropy.org/research/conversations) you can find more than 20 expert conversations that informed the report. Human Trajectory was not reviewed, although much of that report's analysis and conclusions are covered in Explosive Growth. |
d0b8963b-4cc5-410e-9dd6-816fcc2a0dac | trentmkelly/LessWrong-43k | LessWrong | Meetup : Moscow meetup: guest talk on why LW community is lazy and cowardly; double crux game
Discussion article for the meetup : Moscow meetup: guest talk on why LW community is lazy and cowardly; double crux game
WHEN: 10 January 2016 02:00:00PM (+0300)
WHERE: Москва, ул. Большая Дорогомиловская, д.5к2
Meta: most of our members join meetups via other channels. So lately I've been skipping announcements on lesswrong.com because of the extra work of providing both Russian and English texts. Still, the correlation between "found out about Moscow meetups via lesswrong.com" and "is a great fit for our community" is very high. So from now on I'm going to post a short link to our usual hackpad document and save 80% of the extra effort.
Pad with the content for the 10.01.2016 meetup.
We're meeting at anticafe "Kocherga", as usual.
Discussion article for the meetup : Moscow meetup: guest talk on why LW community is lazy and cowardly; double crux game |
7d7c8d65-5b2c-4173-9495-ecc9b25c86b3 | trentmkelly/LessWrong-43k | LessWrong | Restricted Antinatalism on Subagents
Introduction
Antinatalism is the position, or class of positions, which finds existence is not preferable to nonexistence and which tries to minimize the creation of new people who will then suffer. I found it through David Benatar's book Better Never to Have Been, although I've encountered folk versions of it that are limited to certain conditions - why have a child in this blasted heath of a world? Benatar's deeper position, as I understand it, comes from counting positive and negative utility separately; nonexistence lacks any pain or suffering and is thus preferable, even if it involves no positive experience. (Even better, whatever negatives you would incur from "missing out" on positive experiences would be moot - there's simply no pain.)
A related position is efilism, taken from "life" backwards, the more fringe belief that life itself is suffering and that the only moral action is mass suicide. This is Dark Side philosophy if I've ever heard it, the girding with thought of one's most self-destructive impulses, possibly even a thought hazard. But I think that there's a way to redirect it, and what it represents, into doing useful ethical work without trying to burn the world down.
Fracturing Ethics
At least some minds can be broken up into bags of psychological subagents, and I think all minds can be expressed as cooperating with themselves (or failing to) across time. A person at time 0 is not the same person as at time 100, and certain of these subagents can clearly voice a preference not to exist. It's also possible to mix them, by saying that one subagent is more or less "in the driver's seat" at a given time, making up the circumstances under which this feeling of wanting-not-to-be arises. A person in extreme emotional distress will find more situations horrible than they would at an emotional baseline, and so they should seek to minimize states like distress and situations that are sufficiently negative.
This is all pretty instinctive, of course. T |
6d3e5917-9a3b-4c12-8c54-14128eab8039 | trentmkelly/LessWrong-43k | LessWrong | Preview button
Surely I can't be the first person to have thought of it, but Uncle Google suggests this hasn't been discussed before. Would it be difficult to make a preview button available when posting comments? Was it lacking from the software being used or is it just disabled? This blog uses different ways of text formatting than I think a lot of us are used to from other discussion forums, so if it happens to be easy to do, it'd be good to be able to experiment and see the results before making one's comment available.
I just tried that sandbox linked to from the wiki and it doesn't seem trustworthy, what should come out as italics comes out as some sort of a highlight...
Funny, I felt that bystander reluctance thing while posting this. "Why hasn't anyone posted this before? Is it because nobody wants to be the one asking for something? Or is it a silly request in some way I don't see now?"
EDIT:
I see Uncle Google failed me. Or is it that I failed Uncle Google? ;)
Thanks for the responses, all.
|
17964f98-5add-4d25-8bea-77f24a53bcf0 | trentmkelly/LessWrong-43k | LessWrong | Survey: Risks from AI
Related to: lesswrong.com/lw/fk/survey_results/
I am currently emailing experts in order to raise, and to estimate, academic awareness and perception of risks from AI, and to ask them for permission to publish and discuss their responses. User:Thomas suggested also asking you, everyone who is reading lesswrong.com, and I thought this was a great idea. If I ask experts to publicly answer questions, and to publish and discuss them here on LW, I think it is only fair to do the same.
Answering the questions below will help the SIAI, and everyone interested in mitigating risks from AI, to estimate how effectively those risks are being communicated.
Questions:
1. Assuming no global catastrophe halts progress, by what year would you assign a 10%/50%/90% chance of the development of human-level machine intelligence? Feel free to answer 'never' if you believe such a milestone will never be reached.
2. What probability do you assign to the possibility of a negative/extremely negative Singularity as a result of badly done AI?
3. What probability do you assign to the possibility of a human level AGI to self-modify its way up to massive superhuman intelligence within a matter of hours/days/< 5 years?
4. Does friendly AI research, as being conducted by the SIAI, currently require less/no more/little more/much more/vastly more support?
5. Do risks from AI outweigh other existential risks, e.g. advanced nanotechnology? Please answer with yes/no/don't know.
6. Can you think of any milestone such that if it were ever reached you would expect human‐level machine intelligence to be developed within five years thereafter?
Note: Please do not downvote comments that are solely answering the above questions. |
893d66ee-aef3-4243-80c7-ffa1ac65440d | trentmkelly/LessWrong-43k | LessWrong | "Sorcerer's Apprentice" from Fantasia as an analogy for alignment
The story is simple: Mickey is an apprentice to a powerful sorcerer whose magic comes from his hat. Mickey is tasked with carrying buckets of water up a long flight of stairs and dumping them into a basin–hard work for a mouse! When the sorcerer steps out, however, Mickey takes his hat and uses its magic to enchant a broomstick to do the water carrying for him. Aha! Mickey is able to rest now, and he falls asleep to dream about all the wonderful magic he can do. But he awakes to a terrible discovery: the enchanted broomstick won't stop carrying and dumping water and the basin is now overflowing! Mickey attempts to alter the broomstick's course of action, but he can't; it's too late! So he tries to hack it to pieces, thinking that will solve the problem. But no, the enchantment enables the broomstick to make copies of itself from the hacked pieces! Now there are even MORE of them doing the unwanted task, and Mickey, try as he might, can't possibly hope to overcome them. It's not until the sorcerer returns (the one who truly knows how to wield the magic hat) that all is put to rights.
I thought it fascinating how this popular story from when we were all little actually encapsulates the alignment problem pretty neatly. Just wanted to share! |
2708eabd-9347-437c-90a3-bc6d79cb3903 | StampyAI/alignment-research-dataset/arxiv | Arxiv | A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized
A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized
Benjamin Cedric Larsen
Department of Economics, Government & Business
Copenhagen Business School
Copenhagen, Denmark
bcl.egb@cbs.dk
ABSTRACT
Artificial intelligence (AI) systems operate in increasingly diverse areas, from healthcare to facial recognition, the stock market, autonomous vehicles, and so on. While the underlying digital infrastructure of AI systems is developing rapidly, each area of implementation is subject to different degrees and processes of legitimization. By combining elements from institutional theory and information systems theory, this paper presents a conceptual framework to analyze and understand AI-induced field change. The introduction of novel AI agents into new or existing fields creates a dynamic in which algorithms (re)shape organizations and institutions while existing institutional infrastructures determine the scope and speed at which organizational change is allowed to occur. Where institutional infrastructure and governance arrangements, such as standards, rules, and regulations, still are unelaborate, the field can move fast but is also more likely to be contested. The institutional infrastructure surrounding AI-induced fields is generally little elaborated, which could be an obstacle to the broader institutionalization of AI systems going forward.
CCS CONCEPTS
• Socio-technical systems • Automation • Government regulation • Government surveillance
KEYWORDS
AI; Field Change; Legitimization; Digital Infrastructure; Institutional Infrastructure
ACM Reference format:
Benjamin Cedric Larsen. 2021. A Framework for Understanding AI-Induced Field Change: How AI Technologies are Legitimized and Institutionalized. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’21), May 19–21, 2021, Virtual Event, USA. ACM, New York, NY, USA, 12 pages.
https://doi.org/10.1145/3461702.3462591
1 Introduction
In recent years, the scope of information technology that complements or augments human actions has expanded rapidly. The logics embedded in AI systems already operate in diverse areas, such as the stock market [1], mortgage underwriting [2], autonomous vehicles [3], medical services [4], the judicial system [5], and a range of other fields. The action-potentials inherent in most AI systems imply a shift in agency, moving from human actors to AI agents, which in turn has a significant impact on shaping new practices (e.g., across healthcare, agriculture, autonomous vehicles, etc.), and thereby new forms of organization.
Novel AI systems and agents are embedded in existing digital infrastructures and operate within an institutional framework that enables or constrains various activities [6]. The socio-economic embeddedness of AI systems means that some AI agents may affect and alter existing social practices and ways of organization in swift and transforming ways, while implementation may be subject to varying degrees of legitimacy, depending on the field and area of implementation. Digital infrastructures, however, tend to emerge more rapidly than institutional infrastructures (e.g., laws and regulations), which is commonly referred to as the pacing problem [7]. This may create extensive issues if negative externalities are associated with fast-moving technological implementation that is at odds with existing structures or norms for certain actors or groups of a population [8, 9]. Tensions also arise as human actions increasingly have become subject to ‘informatization’ where behavior is tracked, sometimes unknowingly, through the collection of new data points [10, 11, 12]. Data is derived from social networks and online interactions, facial recognition technologies, driving behavior, apps recording location data, and so on. The wide range of AI implementations and some of the associated tensions captured by the pacing problem guides and motivates the research question of this paper, which seeks to understand how AI-induced fields are subject to varying degrees of legitimacy as well as processes of institutionalization.
Views from institutional- and information systems (IS) theory are combined in order to conceptualize how AI fields operate at the meso-level in terms of gaining legitimacy, that is, how AI diffusion is adopted and accepted, or rejected, under varying socio-economic conditions.
Elements from information systems theory elaborate on the notion of digital infrastructure [13, 14, 15], which signifies a range of interconnected technologies (e.g., Internet, Platforms, IoT) that contribute to realizing the action potentials of novel AI agents and associated processes of information collection. Institutional theory introduces the concept of fields, which is applied in order to denote distinct areas of AI implementation and organization by a diverse range of actors. Elements from institutional theory, i.e., institutional work [16, 17], logics [18], and infrastructure [19], are applied in order to conceptualize how processes of AI-induced digitization affect the evolution and governance of organizations [20]. Theory surrounding institutional work is applied in order to understand how actors accomplish the social construction of logics (i.e., rules, scripts, schemas, and cultural accounts), which signifies where human actors and AI agents may challenge existing organizational or institutional practices and boundaries, which may result in difficulties associated with legitimization. Adding the institutional perspective is about how “digitally-enabled institutional arrangements emerge and diffuse both through fields and organizations” [19: 53]. The primary focus of the paper is placed on the interplay between existing and new and emerging institutional arrangements, as well as the role of AI in altering ways of organization.
In combining views from institutional- and information systems (IS) theory, the paper proposes a novel conceptual framework for analyzing and understanding AI-induced field change. The framework builds on Zietsma et al.’s [22] concept of pathways of change, which outlines how a field is likely to move between states from emerging/aligning to fragmented, contested, and established, depending on the coherency in logics and elaboration of institutional infrastructure. The proposed framework adds the notion of digital infrastructure, elaborated through the constructs of technological maturity, data, and AI autonomy, which enables an assessment of the impact of AI systems on existing forms of institutional infrastructure. Where digital and institutional infrastructure is well-elaborated in terms of organizational practices, rules, and processes, the field could be considered established. If a field is emerging or aligning, on the other hand, its digital and institutional infrastructure will be nascent and unelaborate. The developed framework is illustrated through application to the field of facial recognition technologies in the United States.
The paper contributes by elaborating on existing information systems theory through adding the institutional perspective to understand the dispersion of AI technologies. Clarity is gained in terms of assessing how AI technologies move within and between fields, which is interpreted through a technology’s elaboration of institutional and digital infrastructure, which in combination informs a technology’s perceived degree of legitimacy.
The paper is structured as follows. Section 2 elaborates on institutional theory and the characteristics of digital infrastructure. Section 3 presents a framework for understanding AI-induced field change. Section 4 applies the framework through illustration. Section 5 deliberates on pathways of change, referring to how AI fields become institutionalized, and Section 6 discusses obstacles to legitimacy as well as paths forward in terms of governance. Section 7 concludes.
2 Institutional Theory and AI Agents
In organization theory, the idea of institutional infrastructure
reflects understandings of the embeddedness of organizations
within fields and the structuration of fields that occurs through
interactions and institutional activity amongst actors [23]. Over
the last few decades, organizational fields have become more
dynamic, and boundaries between fields have become more
porous due to the introduction of new digital infrastructures, such
as the Internet [18: 336].
Early institutional theory developed the notion that
organizations come to resemble each other due to socio -cultural
pressures, which provide a source of legitimacy [24]. A central
process is that of isomorphism, demonstrating that organizations
are likely to converge through normative, mimetic, and coercive
pressures [25]. Mimetic isomorphism holds that organizational
legitimacy is achieved through copying other organizations as
well as their technologies and practices. Coercive legitimacy
refers to societal legitimacy, which often is achieved through
legislation, whereas normative legitimacy can be viewed as the
appropriate professional standards as well as social acceptance of
new technologies. Socio -cultural beliefs and practices thus play an
important role in the adoption of new technologies and
innovations, as well as contingent processes of legitimization [21].
Competing institutions may lie within individual populations
that inhabit a field, while fields may be contested by multiple, and
often competing, institutional logics [15, 24, 25, 26, 27].
Institutional logics describe the “socially constructed, historical
patterns of material practices, assumptions, values, beliefs, and
rules” of a field [28: 804]. The institutional logics perspective
deals with the interrelationships among individuals, institutions,
and organizations, i.e., the actors of a field.
Institutional work, on the other hand, emphasizes a conceptual
shift towards individuals' and organizations' actions as “dependent
on cognitive (rather than affective) processes and structures and
thus suggests an approach… that focuses on understanding how
actors accomplish the social construction of rules, scripts,
schemas, and cultural accounts” [14: 218].
When the two approaches are held together, i.e., logics and
interrelationships, and structures and practices, these can be
expressed as the institutional infrastructure of a field. Institutional
infrastructure is established through adjacent activities such as
certifying, assuring, and reporting against principles, codes, and
standards, as well as through the formation of new associations
and networks among organizations, including official rules and
regulations [31]. Institutional infrastructure can be clarified in
terms of its degree of elaboration (high, low), as well as
coherency in logics (unitary, competing) [19].
Novel AI agents operating in varying systems also embody
distinct logics and cognitive functions [32]. While these functions
are defined by human actors (e.g., engineers in a company), AI -
agents remain subject to different degrees of autonomy, i.e., they
are to some extent able to act independently based on intrinsic
flows of information. This implies that AI agents have the
autonomy to act on (e.g., judicial evidence, road conditions, etc.),
as well as interact with (e.g., speech recognition, chatbots) their
environments. This new form of artificial agency confounds the
paradox of embedded agency, i.e., how actors are able to change
institutions when their actions are conditioned by those same
institutions [33], through an AI's ability to shape human behavior
as well as ways of organization – sometimes simultaneously. In
other words, algorithms can affect how we
conceptualize the world while modifying socio -political forms of
organization [34].
Algorithms can be seen as non -human agents endowed with the
ability to evaluate, rank, and reward or punish individuals' actions
and positions based on pre -programmed instructions that shape
social relationships [33, 34]. Algorithms, however, are oftentimes
compressed and hidden, and we do not encounter them in the
same way that we encounter traditional rules [35, 36]. The
increasing reliance on algorithms as instruments for the regulation
of social relationships, coupled with the obscurity of algorithmic
evaluation systems, is evidence of new yet subtle ways of
exercising power, which alters existing power -dependencies, e.g.,
through surveillance, online interaction, and so on [33, 37].
Algorithms are therefore implicated in the constitution and
reproduction of power asymmetries that regulate individuals’
behaviors and ensure their compliance with predefined standards,
which in turn can affect human agency [35]. It is difficult,
however, to identify ex -ante what the socio -economic effects of
scaling an AI -system will be [38, 39], which warrants that
extensive experimentation through application may be necessary
before AI-based technological diffusion and legitimization are
likely to take place.
Institutional logics and institutional work provide a foundation
to understand the rationalities and practices of actors that
implement novel AI-agents, as well as the AI-agents' systemic
impact on their surroundings through their socio -economic
embeddedness. An analysis of AI -agents predicated on
institutional work and logics can be placed either at the micro -
level, seeking to understand the impact of individual AI-agents on
specific socio -economic practices, or at the meso -level, seeking to
understand how actors influence the legitimacy of AI applications
in a field. That is, how AI is diffused, adopted, and accepted, or
rejected, under varying socio-economic and technological
conditions.
2.1 Digital Infrastructure
Digital infrastructure is made from a multitude of digital
building blocks and is defined as the computing and network
resources that allow multiple stakeholders to orchestrate their
service and content needs [14]. Digital infrastructures are distinct
from traditional infrastructures because of their ability to collect, store, and make digital data available across a large number of
systems and devices simultaneously [14]. Examples of digital
infrastructures include the Internet [40, 41]; data centers; open
standards, e.g., IEEE 802.11 (Wi -Fi), as well as consumer devices
such as smartphones.
Henfridsson et al. [38: 90] refer to “digital resources” as
entities that serve as building blocks in the creation and capture of
value from information. While AI technologies are assembled as
digital building blocks, a distinction needs to be made between
traditional software systems (i.e., ERP, CRM, WordPress, etc.)
and novel AI -systems (computer vision, machine learning, etc.).
This distinction is important as a new kind of embedded agency is
inherent in most AI systems, which renders these as “organizers,”
“predictors,” or “controllers” of data flows that are captured by
digital infrastructures [44].
Most digital building blocks are made accessible through
online platforms or are proprietarily assembled through open -
source code. Digital building blocks are transformational due to
the innovative patterns that can be established through “use-
recombination” [40], while there needs to be separate legitimacy
for each building block, as well as collective legitimacy for a new
institutional arrangement to emerge [21]. It may, for example, be
that a platform -based building block holds legitimacy (e.g., a
cloud-based AI facial recognition system) because it performs
within a predefined level of accuracy. However, for the
organizational or wider institutional arrangement to gain
legitimacy, the embeddedness of the building block into a socio -
economic system needs to be accepted at a much broader level of
implementation.
As digital building blocks are created by engineers, and as
humans are subject to bias [45], this means that the values of the
designer can be “frozen into the code, effectively
institutionalizing those values'' [37: 158]. Friedman and
Nissenbaum [46] argue that bias in computer systems can arise in
three distinct ways, referring to (1) pre-existing social values
found in the ‘‘social institutions, practices and attitudes’’ from
which a technology emerges, (2) technical constraints, and (3)
emergent aspects that arise through usage, which only can be
known ex-post. The distinction between social and technical bias
has also been referred to as normative and epistemic concerns [47]
or structural and functional risks [48]. Functional risks refer to
technical areas such as the design and operation of an AI system,
including datasets, bias, and performance issues, whereas
structural risks refer to the ethical implications of an AI system,
including the societal effects of automated decisions.
Based on a synthesis of the above considerations, I propose the
use of three analytical constructs, referring to technological
maturity, data, and AI-autonomy, in order to signify a field's
relative elaboration of digital infrastructure. The constructs have
been selected as they embody some of the main features of AI-
induced digital infrastructure associated with (1) the algorithm,
(2) its use of data, and (3) its ability to act, as well as the likely
ramifications of those actions. Each of the three constructs is
elaborated in greater detail below.
2.2 Technological Maturity, Data, and AI
Autonomy
2.2.1 Technological Maturity. AI systems are subject to
different degrees of maturity, both in terms of the accuracy of the
system [49], as well as the elaboration of adjacent technological
standards [50]. The accuracy of an AI-model refers to whether it
operates within a predefined 'acceptable' level of performance. In
the case of autonomous vehicle safety, for instance, an AI-
controller is expected to hold the ability to locate persons and
objects from a distance of 100 meters with an accuracy of +/- 20
cm, within a false negative rate of 1% and a false positive rate of
5% [51]. In some areas that involve high-stakes decisions (e.g.,
autonomous driving, credit applications, judicial decisions, and
medical recommendations), high accuracy alone may not be
sufficient, as these applications require greater levels of trust in
their associated services [52]. In high-risk areas, it is important
that the functional aspects of a model (i.e., accuracy, data, etc.)
are further elaborated through measures such as certification,
testing, auditing, as well as the elaboration of technological
standards, which refers back to the institutional infrastructure of a
field.
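To make the idea of a predefined 'acceptable' performance level concrete, the following minimal Python sketch checks a set of detection metrics against thresholds of the kind cited for the autonomous-vehicle example above; the class, function, and threshold names are illustrative assumptions, not part of any cited standard.

from dataclasses import dataclass

@dataclass
class DetectionMetrics:
    """Hypothetical evaluation results for an object-detection model."""
    localization_error_cm: float  # mean localization error at 100 m
    false_negative_rate: float    # fraction of missed persons/objects
    false_positive_rate: float    # fraction of spurious detections

# Illustrative thresholds, loosely following the AV example cited in the text [51].
MAX_LOCALIZATION_ERROR_CM = 20.0
MAX_FALSE_NEGATIVE_RATE = 0.01
MAX_FALSE_POSITIVE_RATE = 0.05

def meets_accuracy_spec(m: DetectionMetrics) -> bool:
    """Return True only if every metric stays within its threshold."""
    return (m.localization_error_cm <= MAX_LOCALIZATION_ERROR_CM
            and m.false_negative_rate <= MAX_FALSE_NEGATIVE_RATE
            and m.false_positive_rate <= MAX_FALSE_POSITIVE_RATE)

Passing such a check captures only the functional side of maturity; as argued above, certification, auditing, and standards are still needed before a field would treat the system as mature.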
Depending on the context and the area of use, a range of
quantitative measures can be used to evaluate the technological
maturity of an AI-induced field. Some suggestions include
measures of scientific output, e.g., research papers, citations, and
the intellectual property rights that surround a given field.
Important questions relate to whether emerging algorithmic
capabilities are under development and going through stages of
testing or are already being widely deployed by a small or a large
number of actors. For structural implications, it is important to ask
questions such as: how does the technological maturity and
elaboration of (immature or mature) AI-induced digital
infrastructures affect a field? For example, chatbots that performed
with sufficient accuracy under test environments have proved to
display racial biases and prejudices once deployed, as the
algorithm continues to learn during actual implementation, which
aggravates social harm for certain groups of the population [53].
The elements that are used to evaluate and decide whether an
AI-system is mature or immature are therefore dependent on its
context of implementation, which renders technical aspects alone
insufficient when assessing the technological maturity of AI-models
and associated digital infrastructure.
Several methods have been proposed to evaluate predictive
models, such as “model cards for model reporting” [54],
“nutrition labels for rankings” [55], “algorithmic impact
assessment” forms [56], as well as “fact sheets” [52]. These
frameworks can help organizations establish new organizational
practices that characterize model specifications in more coherent
ways while paying special attention to attributes such as accuracy,
bias, consistency, transparency, interpretability, and fairness,
among others.
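As a purely illustrative sketch (not the actual schema of model cards, nutrition labels, or fact sheets as defined in [52, 54, 55, 56]), a minimal model-reporting record could be expressed as a small data structure; all field names below are hypothetical.

model_report = {
    "model": "face-verification-v2",  # hypothetical identifier
    "intended_use": "1:1 identity verification at device unlock",
    "accuracy": {"overall": 0.9991, "female_darker_skin": 0.965},
    "training_data": {"source": "consented enrollment photos", "sensitive": True},
    "known_limitations": ["accuracy drops on low-quality live camera feeds"],
    "fairness_checks": ["per-subgroup false match rates reported"],
}

def reporting_gaps(report: dict) -> list:
    """List the attributes discussed in the text that the record omits."""
    required = ["intended_use", "accuracy", "training_data",
                "known_limitations", "fairness_checks"]
    return [key for key in required if key not in report]

print(reporting_gaps(model_report))  # [] -> no gaps in this toy example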
At a general level, when dominant standards are in place, and
the accuracy of an AI -system is deemed safe, reliable, and
trustworthy, digital infrastructure is considered elaborate, and higher field legitimacy is expected. If a technology is considered
immature, inaccurate, or insufficiently tested, the surrounding
digital infrastructure would be considered unelaborate.
2.2.2 Data. The nature of the data that feeds into any AI -model
or system is also of particular importance, and data can be
classified as being either sensitive (e.g., health -related) or non -
sensitive (e.g., weather -related), and the nature of the data can be
private (i.e., individual data) or public (common/pooled data)
[57]. Data can also be biased, which makes AI systems prone to
inherit either individually coded forms of bias or biases that result
from historical or cultural practices, which are reflected in the
training data, and could be adopted by the algorithm [58]. For an
algorithm to be effective, its training data must be representative
of the communities that it impacts. The use of digital
infrastructures by individuals, machines, and communities,
requires institutions to negotiate how bits containing varying
kinds of information legitimately can be utilized and (re)arranged
by organizations.
Several methods have been proposed to evaluate data as well as
machine learning models under a variety of conditions. For data,
these include “data statements” [59], “datasheets for data sets”
[60], and “nutrition labels for data sets” [61], which seek to
evaluate the data that goes into a model across training, testing,
and post -implementation scenarios.
Sound data practices that are transparent, well -documented,
and privacy -preserving, are generally associated with a more
elaborate digital infrastructure. Data practices that are biased,
undocumented, or otherwise disputed could be considered a sign
of unelaborate digital infrastructure.
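The two distinctions introduced above (sensitive versus non-sensitive, private versus public data) can be sketched as a simple classification; the mapping below is an illustrative assumption about relative governance burden, not a rule drawn from the cited works.

from enum import Enum

class Sensitivity(Enum):
    SENSITIVE = "sensitive"          # e.g., health-related data
    NON_SENSITIVE = "non_sensitive"  # e.g., weather-related data

class Access(Enum):
    PRIVATE = "private"  # individual data
    PUBLIC = "public"    # common/pooled data

def governance_burden(sensitivity: Sensitivity, access: Access) -> str:
    """Rough, illustrative mapping from data type to expected documentation burden."""
    if sensitivity is Sensitivity.SENSITIVE and access is Access.PRIVATE:
        return "high: consent, documentation, and privacy safeguards expected"
    if sensitivity is Sensitivity.SENSITIVE:
        return "elevated: pooled but sensitive data still needs safeguards"
    return "lower: non-sensitive data, though documentation is still expected"

print(governance_burden(Sensitivity.SENSITIVE, Access.PRIVATE))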
2.2.3 AI Autonomy. AI-agents hold varying degrees of
autonomy to act, while the (explorative) actions of an autonomous
learning agent may not always be known and can be subject to
change depending on the data that is fed into the model [62]. An
AI-agent can have limited or extensive autonomy to make
decisions, while the decisions of an AI agent can have a lenient
(e.g., recommender engine, smart speaker) or a severe (e.g.,
autonomous vehicle, incarceration system, facial recognition)
impact on individuals as well as its surroundings, if the algorithm
is inaccurate, fails, or is otherwise at fault. This could include
aspects such as excessive collection of data or unwilling intrusion
of privacy in the case of facial recognition systems, for example.
The categorization of an agent's autonomy, therefore, includes its
ability to act, as well as the possible ramifications of its actions.
The perceived risk of an AI agent can be understood as the
probability that a disruptive event occurs, multiplied by the
severity of potential harm to an individual or form of organization
[48]. The definition of “harm” and the computation of probability
and severity is context-dependent and varies across sectors. For
instance, the impact of an autonomous decision in medical
diagnosis or in autonomous vehicles would, arguably, be greater
than that of a product recommendation system [63]. Relevant
questions include: what risks may be present in model usage, as
well as identification of the potential recipients, likelihood, and
magnitude of harms [62]. Where risks are taken into consideration
and are sufficiently mitigated in relation to avoiding any potential
harms, the digital infrastructure could be considered elaborate.
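Written out as a simple expected-harm expression (one common way of formalizing the risk definition above, not a formula taken from [48]):

\[ \text{Risk}(a) = \Pr(\text{disruptive event caused by agent } a) \times \text{Severity}(\text{resulting harm}) \]

where, as noted, both factors are context-dependent and have to be estimated per sector and per application.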
The elaboration of AI -associated digital infrastructure across
the constructs of technological maturity, data, and AI -autonomy,
remains subject to both qualitative and quantitative judgments and
measures, which are field -dependent and linked to idiosyncrasies
across functional (technical) as well as structural (ethical) risks
and considerations.
2.3 Governance
Since field-level advancements in AI are context-dependent,
this means that the existing institutional infrastructure and logics
negotiate the actual impact that a technology is allowed to have
within a given social context, which differs across geographies. In
other words, the flexibility of a digital infrastructure is often
restricted by socio-technical and regulatory arrangements (e.g.,
restrictions on autonomous vehicles, regulations on the use of
patients' medical data, etc.). Oftentimes, layered and interoperable
standards and common definitions of application and service
interfaces guide the use and growth of digital infrastructures [64]
and are necessary for digital infrastructures' wider processes of
institutionalization. As large technology companies usually are the
leading innovators of a field, these also carry a crucial weight in
the direction of new technology standards [65], which generally
affects how an industry or a field continues to evolve. Typically,
private actors orchestrate ecosystems and associated digital
infrastructures, which brings issues to the forefront, such as the
challenge of establishing a governance system, reproducing social
order, and incorporating aspects of value appropriation and
control [64, 65, 48, 66, 67].
The process that renders digital infrastructures institutional
occurs when innovators infuse specific norms, values, logics, as
well as forms of governance and technological control into the
infrastructure, and as the infrastructure becomes more widely
adopted and legitimized over time [15, 68, 34]. Digital
institutional infrastructure can thus be viewed as the integration of
digital infrastructure and institutional infrastructure, which is
defined as standard -setting digital technologies that enable,
constrain and coordinate numerous actors’ actions and
interactions in ecosystems, fields, or industries [21].
3 A Conceptual Framework for Understanding
AI-Induced Field Change
By integrating the insights from institutional theory (work,
logics) with information systems theory (digital infrastructure), I
propose the use of a novel framework for analyzing AI-induced
field change (Table 1). The framework builds on Zietsma et al.'s
[22] conceptualization of pathways of change, which hypothesizes
how actors drive change across different sets of field
circumstances. The proposed framework extends existing work
[22] through incorporating the notion of AI-associated digital
infrastructures, which has implications for the structure and
organization of (digital) institutions going forward.

Table 1: Framework for Analyzing AI-Induced Field Change and Legitimization

ACTORS
-Subject position: central, middle status, and peripheral actors
-Characterized by roles or functions, i.e., field-structuring or governing organizations, formal governance units, field coordinators, etc.

DIGITAL INSTITUTIONAL INFRASTRUCTURE
-Standard-setting digital technologies that enable, constrain, and coordinate numerous actors' actions and interactions in ecosystems, fields, or industries [21].

INSTITUTIONAL INFRASTRUCTURE
-Established through activities such as certifying, assuring, and reporting against principles, codes, rules, and standards, as well as through the formation of new associations and networks among organizations, including official rules and regulations [31].
-Logics: refers to the relationships among individuals and organizations in the field. Logics can be competing or unitary. They may be based on market, social, and other considerations.
-Work: refers to the practices and actions of individuals and organizations that have implications for creating, maintaining, and disrupting institutions over time. Looks at the effect of institutional change on areas such as hierarchies of status and influence, as well as subsequent power relations. Incorporates the notion of field structuring events, which inform or disrupt logic formation.

DIGITAL INFRASTRUCTURE
-Established from a multitude of digital building blocks, defined as the computing and network resources that allow multiple stakeholders to orchestrate their service and content needs [14].
-Technological Maturity: refers to the elaboration of hardware- and software-based infrastructures and associated technological standards. Includes the perceived accuracy, safety, and reliability of an AI system/agent.
-Data: refers to the data that is used in a model, which can be sensitive or non-sensitive, private or publicly available, centralized or decentralized, and may be linked to varying forms of ownership.
-Autonomy: refers to whether the AI-agent holds limited or extensive autonomy to act, and whether the agent's actions have a negligible or a considerable impact on its environment and surroundings.

GOVERNANCE
-Combinations of public and private, formal and informal systems that exercise control within a field.
-Units and processes that ensure compliance with rules and facilitate the functioning and reproduction of the system (e.g., standards, regulations, and social control agents that monitor and enforce these).

The framework first considers varying actors and their position
in a field before elaborating on their ability to affect the direction
of a field, either through the introduction of a new technology,
regulation, or a social movement. Next, the relationship among
actors as well as their coherency in terms of logics is considered.
When logics are unitary, greater field alignment is expected,
whereas competing logics mean that a field is unsettled. The
elaboration of institutional infrastructure is considered by looking
at the practices and actions of individual actors as well as
organizations in terms of creating, maintaining, and disrupting
institutions over time. The notion of field structuring events is
particularly important, both in terms of logic formation or
disruption, as well as for the elaboration of the institutional
infrastructure of a field.
The AI-associated digital infrastructure of a field is signified by
the proposed constructs of technological maturity, data
specification, and the relative autonomy of an AI-system.
Technological maturity refers to the perceived accuracy of an AI
agent, as well as the elaboration of areas pertaining to standards,
research, intellectual property, and so on. The data linked to a
model is another important source of institutional legitimacy, both
functionally (e.g., non-biased data) as well as structurally (e.g.,
how an organization is engaged in practices of data collection and
usage). Autonomy refers to the relative impact of an AI agent on
its general environment, as well as its potential for exacerbating
structural risks and creating harm. Finally, the governance of a field,
as well as the mechanisms that guide algorithmic implementation,
are considered.
Based on coherency in logics (unitary, competing) [19], and the
elaboration of institutional infrastructure (high, low) [26], a four-
fold classification of field conditions is produced around whether
there are settled or unsettled logic prioritizations and limited or
elaborated digital and institutional infrastructure (Figure 1) [22].
Where digital and institutional infrastructure is highly
elaborate, and there is a unitary dominant logic within the field,
the field can be described as established and relatively stable, i.e.,
the institutional infrastructure is coherent [22]. Formal governance
and informal infrastructure elements are elaborate and likely to
reinforce each other, leading to a coherent sense of what is
legitimate or not within the organizational field [72, 73].
In fields where there is highly elaborate institutional
infrastructure but competing logics (low coherency), there could
be multiple formal governance and digital and institutional
infrastructure arrangements [22]. These arrangements may be in
conflict with one another or compete for dominance, which makes
the field contested [25, 74]. Contested refers both to competing
digital infrastructures (e.g., technological standards, varying
models, and levels of algorithmic accuracy), as well as to
stakeholders' opposing views.
Fields with low coherency and limited elaboration of digital
and institutional infrastructure are described as fragmented, with
competing conceptions of what is legitimate. Fields may be
fragmented if they emerge in intermediate positions (e.g.,
biotechnology), which draws on logics and practices from diverse
but neighboring fields [74]. A field may also be fragmented as
new actors enter an existing field with innovative ideas and
designs about products, courses of action, behaviors, as well as
new structures and ways of organization [75]. In the field of facial
recognition technology, for instance, there are multiple competing
logics that move across varying stakeholders and demonstrate
incoherent views over technological accuracy, as well as the
technology’s inherent ability to enhance public safety. Many
differing views paired with a limited (but expanding) digital
infrastructure situates the field in the fragmented quadrant.
When infrastructure has a low degree of elaboration but a high
degree of coherency in terms of unitary logics, the field is described as emerging or aligning [19]. While the lack of digital
and institutional infrastructure in an emerging field may create
considerable room for experimentation and change, it may also
limit field members’ ability to define and acquire legitimacy and
thus contribute to ambiguity and, potentially, the need to draw on
ill-suited infrastructure from adjacent fields. One example could
be the emergence of autonomous vehicles, drawing on existing
legal frameworks in terms of liability, which, however, are ill -
suited in terms of covering the accompanying change in agency
and responsibility.
Figure 1: Digital / Institutional Infrastructure & Logics:
Framework for Field-Change (modified from [22])
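Read as a lookup, the two-by-two classification behind Figure 1 can be sketched in a few lines of Python; the function and value names are illustrative, not terminology defined in [22].

def field_condition(logics: str, infrastructure: str) -> str:
    """Map coherency in logics ('unitary'/'competing') and elaboration of digital
    and institutional infrastructure ('high'/'low') to a field condition."""
    conditions = {
        ("unitary", "high"): "established",
        ("competing", "high"): "contested",
        ("competing", "low"): "fragmented",
        ("unitary", "low"): "emerging/aligning",
    }
    return conditions[(logics, infrastructure)]

# Example: the paper's reading of facial recognition technology in the U.S.
print(field_condition("competing", "low"))  # -> fragmented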
Categorizing a field’s present condition as well as its potential
trajectories enables us to get a deeper understanding of possible
areas of contestation, fragmentation, or alignment, as well as wha t
it takes for an AI -induced field to grow established over time.
Before these conditions are further discussed in section 5, the
following section applies the developed framework (Table 1) to
the field of facial recognition technologies in the United Stat es.
The application briefly illustrates the utility of the framework in
terms of assessing field -elaboration, while future studies may
apply the framework to analyze specific case -studies at greater
depths.
4 Analyzing AI-Induced Field Change and
Legitimization: Facial Recognition Technology
4.1 Actors
The proliferation of facial recognition technologies in the
United States has been supported by large technology companies,
which are the central actors of the field (e.g., Apple, Amazon,
Google, Microsoft, IBM). While these companies provide their
own applications directly to the market, they also modularize
facial -recognition technologies and make them accessible for
complementors on their platforms. This makes them field
structuring organizations since the modularization of FRT-
systems embodies best -practices and de -facto industry standards,
which other companies align with. Central actors include adopters
of FRT -systems, while many of these are U.S public sector
agencies. Contractors that specialize in delivering FRT-
technology to law -enforcement agencies, as well as the National
Institute of Standards and Technology (NIST), hold intermediate
positions. Peripheral actors include multistakeholder organizations
such as the Partnership on AI, non-profit research organizations
such as the Center for Data and Society, as well as research
institutes such as The AI Now Institute (NYU). These actors
affect the field through public reports and commentaries, paying
special attention to issues of technological implementation and
social ramifications. Peripheral actors also include opponents of
FRT-systems, both in the form of activists, as well as civil society
organizations such as The American Civil Liberties Union
(ACLU).
4.2 Logics
The dominant logics behind FRTs have been driven by private
sector companies focused on gaining market share. The logics
behind adoption are motivated by enhancing measures of public
safety, e.g., in terms of identifying criminals, screening travelers,
and processing border immigration. Both logics are highly
contested by peripheral actors, e.g., company activists and civil
rights organizations [76], citing that inaccurate technologies hold
the potential of exacerbating racial and social biases and
inequities. This signifies that emergent dominant logics are at
odds with existing social arrangements, including structures of
power and governance, which makes the technology heavily
resisted [77].
4.3 Work: Field Structuring Events
In 2019, the local government of San Francisco became the first
city in the United States to ban the use of FRTs by local agencies.
In the spring of 2020, nationwide protests against police brutality
and racial profiling caused several central actors (IBM, Amazon,
Microsoft) to stop providing FRT-technologies to law
enforcement agencies altogether. IBM called for “a national
dialogue on whether and how facial recognition technology
should be deployed by domestic law enforcement agencies” [78:
1], and Amazon announced a one-year moratorium on police use
of its facial recognition technology, giving policymakers time to
set appropriate rules around the use of the technology. Microsoft
declared that it would not sell FRT-technology to police
departments in the United States until a federal law that regulates
the technology is formulated. These actions by some of the central
actors in the field signal that the existing institutional
infrastructure remains inadequate in terms of governing and
addressing the current expansion of FRT-related digital
infrastructure. This indicates that even as central actors on the
procurement side include many public sector agencies, the
necessary institutional infrastructure to guide potential
ramifications of immature technological adoption has not yet been
formulated. Greater alignment between stakeholders across
industry, government, and civil society is currently needed in
order to secure ongoing legitimacy as well as greater field-level
elaboration and use of facial recognition technologies.
4.4 Technological Maturity
In terms of technological maturity, verification algorithms have
achieved accuracy scores of up to 99.97% on standard
assessments like the National Institute of Standards and
Technology (NIST) Facial Recognition Vendor Test [79]. For
identification systems, error rates tend to climb when high-quality
images are replaced with the feed of live cameras that normally
are utilized in public spaces. Aging is another factor that affects
error rates, while accuracies of FRT-systems differ considerably
across gender and race [8]. The context, i.e., the specific area of
implementation and use, can therefore be said to have wide-
reaching consequences for the accuracy rates of individual FRT-
systems.
4.5 Data
Issues of legitimacy are also inherent in relation to the kinds of
data that are being used for training FRT -algorithms. Many
databases rely on publicly available face -annotated data, which in
some cases are scraped directly from social media platforms and
have raised issues over privacy and consent [80]. The company
Clearview has, for example, assembled a database containing
some 3 billion images, where many have been scraped from
public -facing social media platforms [81]. This raises concerns
about the legitimacy of data rights and usage, as well as the ability
of existing institutional infrastructure to provide, and safeguard,
associated rights. The quantity of data is in many cases important
for algorithmic training, as well as for retaining levels of accuracy
post-deployment, which means that there is an inherent incentive
for private, as well as for public -sector adopters, to amass rich
databases (e.g., new biometric data), in order to increase and
continue to ensure the accuracy of a given system. In several
states (e.g., Texas, Florida, Illinois), the FBI is allowed to use
facial recognition technology to scan through the Department of
Motor Vehicles (DMV) database of drivers’ license photos [82] in
order to generate a more coherent centralized biometric database.
As these kinds of data contain personal information, they are
classified as being sensitive and vulnerable, both in terms of
misuse as well as in relation to cybersecurity breaches and
possible identity theft [57].
4.6 Autonomy
AI in facial recognition systems is perceived as a new kind of
social control agent, which may exert autonomy over law-
enforcement officers in relation to issuing arrest orders. If the
accuracy of a system is flawed, an officer's actions are likely to
cause social harm whenever an innocent citizen is arrested [83].
The adoption of facial recognition systems for use in law
enforcement alters existing power dependencies, as officers have
to trust in, and act on, the information that is rendered to them by
the system. Facial recognition systems are thus shaping entirely
new practices and forms of organization in which the autonomy of
the AI-agent is dependent on the delivery of accurate information,
which could reinforce a drive towards data centralization.
4.7 Governance
The field of facial recognition technology is fragmented and
exhibits low coherency and limited elaboration in terms of
institutional infrastructure. A lack of governance is most readily
seen in the absence of coherent rules and regulations, while the
field is currently going through a shift from self-regulation
towards more formalized governance arrangements. This shift has
been called for by peripheral actors, and more recently also by
central actors from the private sector, which demand new rules to
guide legitimate implementation going forward. The case of facial
recognition technologies used by law enforcement highlights the
critical role of culture and politics involved in the organization of
markets and in creating the governing 'rules of the game' [84, 85,
86].
5 Pathways of Change: How AI-Fields Move and
Gain Legitimacy
In order to move from a static to a more dynamic analysis of the
conditions related to field change, this section applies the concept
of 'pathways of change' to a number of distinct areas of AI. As is
evident, each area of AI implementation is subject to
idiosyncrasies that are linked to a field's specific form of digital
and institutional infrastructure, as well as their elaboration.
Pathways of change suggest that there are some commonalities to
how fields are likely to evolve and where obstacles to
legitimization and institutionalization may be found. In order to
understand how fields move between states, special attention
needs to be placed on the scope of change (i.e., which elements
change and how much changes) [87], as well as the pace of change
(i.e., the speed at which a field moves from one condition to
another) [88].
In the case of facial recognition technologies, the field is
currently moving from the fragmented towards the contested
quadrant, as the number of use-cases (e.g., public surveillance,
airport check-ins, smartphones, doorbells, etc.) continues to
expand, based on rising technological maturity (e.g., accuracy,
standards). While digital infrastructures are expanding, the field
continues to be represented by incoherent logics and sparse
institutional infrastructure, however. For example, verification-
based FRT (e.g., unlocking a smartphone) is already a well-
established practice that exhibits legitimate institutionalized
functions. Identification-based FRT (e.g., public surveillance),
on the other hand, is more likely to stay contested due to having
a lower degree of algorithmic accuracy, which is paired with more
severe social impacts linked to the autonomy of AI-agents, and
how these alter existing power structures. In order for the field, as
a whole, to grow more established, a shift from self-regulation
towards formalized governance arrangements and greater
alignment and coherency in terms of logics is needed. In more
authoritarian settings, such as in China, the field of facial
recognition is already on its way to becoming established. This
signifies that a country's socio-political setting informs its
institutional infrastructure, which has important implications for a
technology's path towards legitimization as well as processes of
institutionalization.
A pathway that moves from an aligning or emerging field
condition to an established condition usually involves a process of
convergence, which is commonly observed in the
institutionalization of most fields (see, e.g., [89]). The field of
autonomous vehicles (AV), for example, is characterized by its
emerging digital and institutional infrastructure, which has a low
degree of elaboration but some coherency in terms of logics.
While the field is currently aligning at a relatively slow pace, it
develops in extension of an existing field (auto-infrastructure) that
has been elaborated over decades. Large parts of the existing
infrastructure are challenged, however, through the introduction
of novel AI-agents and a transfer in autonomy from humans to
machines. As the digital infrastructure is further elaborated, which
entails a greater number of mixed-autonomy vehicles on the road,
the field could move towards the contested quadrant, as logics
associated with safety and liability are disputed. If the rules and
regulations to handle negative externalities brought about by
algorithmic errors are not in place, the field would likely stay in
the contested quadrant. As the advent of AVs is going to shift the
terms of liability [90], the scope of change demands that an
entirely new institutional infrastructure has to be developed and
elaborated by insurers, policymakers, legislators, and automakers,
which could take years and be subject to multiple areas of
contestation among stakeholders.
Another common pathway is the movement from an
established to a contested field condition. This move is likely to
occur through more disruptive change, either an exogenous shock,
e.g., new regulation, or a strong social movement, or through the
challenging of the status quo by a new or peripheral actor [94, 95].
The use of recommender engines (RE), which suggest products,
services, and other online information to users based on prior data,
is already a well-established practice but could grow more
contested due to incoherencies in logics. REs have, for example,
been argued to create fragmentation by limiting a user's media
exposure to a set of predefined interests or objectives [93], which
could have undesirable societal consequences as people's
preferences may be guided towards echo chambers, where
alternate views are missing [94], which further impedes decisional
autonomy [95]. Other actors argue that existing data are
inconclusive, and some research suggests that recommenders
appear to create commonality, not fragmentation [96], implying
that there is little cause to modify the current architecture of
recommender engines [97, 100]. This incoherency in logics is
coupled with information asymmetries between the AI-agent and
human actors in relation to how, and on which information, a
decision to recommend certain content is rendered. This lack of
transparency, as well as a lack of algorithmic knowledge in the
general population, arguably leaves certain elements of the current
digital infrastructure in the contested quadrant. The governance of
data and information that goes into a recommender engine, for
example, is partially situated in the contested quadrant, which
could have wider field-level implications, and possibly force a
coercive change in the form of new regulation [98].
When a field moves from a position of established to
(re)aligning under the emergent quadrant, change is usually
observed through incremental modifications, with central actors
often managing these [22]. This incremental change sees the field
realigning around new practices or relational channels while
readjusting the institutional infrastructure. The field of smart
speakers (Google Assistant, Siri, Alexa, etc.) has moved from the
emerging towards the established field-quadrant over a relatively
short time-horizon, while certain elements of the digital
infrastructure have been linked to concerns over data collection
and data privacy practices, which could see the field grow
more contested.
Other pathways of change include a move from a fragmented
or contested condition to one that is aligning in the emergent
quadrant. When looking at nascent AI areas such as Generative
Pre-trained Transformer 3 (GPT-3), or deepfakes, these fields
emerge in the fragmented quadrant due to incoherent logics,
coupled with institutional infrastructures that are unelaborate.
While the inherent agency of these AI systems is emerging, their
associated use of already elaborate digital infrastructure linked to
the general information ecosystem makes them able to proliferate
at rapid speeds. In terms of autonomy, this means that these AI-
agents could have a considerable impact on their environment by
exacerbating the spread of misinformation online. A move from
the fragmented quadrant towards greater alignment is therefore
needed, which may be formed as actors converge around new
ideas, rules, and positions in order to inform and elaborate the
surrounding institutional infrastructure while establishing greater
coherency in logics [89, 102].
AI is currently changing organizational practices across a wide
range of fields, which implies that new applications should be
carefully considered in terms of their short-term impact on human
behavior as well as long-run influences on institutional change.
Insufficiently tested implementation of unsafe or biased
algorithms can foster negative externalities, which can have
severe consequences or may be detrimental to societal trust. An
analysis of AI-associated digital institutional infrastructure, based
on logics and work, as well as conceptualizations of technological
maturity, data, and AI-autonomy, contributes to assessing where
potential areas of contestation or fragmentation could be found.
These findings hold important implications for AI-developers and
adopters (e.g., engineers, managers, firms), as well as for
policymakers that seek to define new rules going forward. These
implications, as well as the main takeaways of the paper, are
briefly discussed below before a conclusion is offered.
6 Discussion: Commonalities of AI -Induced Field
Change & Pending Issues over Governance
Through illustration of the developed framework, three
takeaways, which move across varying kinds of AI-induced field
change and legitimization, are offered. Subject to discussion,
these broadly refer to (1) altered power-dependencies between
humans and machines, (2) unresolved questions over data use and
control, as well as (3) issues with the current elaboration of
institutional infrastructure surrounding many forms of AI
application.
First, the autonomy of AI agents can affect existing power-
dependencies, which may cause friction as human behavior and
ways of organization are influenced in ways that are hard to
identify ex-ante [35]. In examples such as facial recognition,
judicial AI-systems, autonomous vehicles, and so on, the AI-agent
gains determining power over human actors, who have to trust
the identifications or predictions of the AI-agent. This transfer of
autonomy is contingent on systemic trust, which is based on
conceptualizations of technological maturity and ideas of
machine-augmented perception that is expected to operate at
cognitive levels that are equal to – or in many cases exceed – those
of a human operator. Issues with field-level legitimization and
nascent processes of institutionalization are therefore likely to
arise when emerging systems are inaccurate, unsafe, or
opaque, which erodes trust across applications and causes
fields to stay fragmented and logics to grow incoherent.
Analyzing the field trajectories of such cases involves assessing
what it takes for altered power-dependencies to be conceived as
legitimate practices, which is crucial for a field to move from
fragmentation or contestation towards greater alignment of digital
and institutional infrastructures.
Second, an incentive for data centralization is inherent in most
digital infrastructures (based on technical and economic logics),
which has implications for associated forms of organization. A
lack of transparency during the processes of data collection, as
well as in markets for data, is leaving large populations unaware
of where and how their personal data and information is being
used, stored, and traded, as well as for what purposes [47]. The
current organization of many digital infrastructures thus comes
with the risk of deteriorating public trust in digital institutional
infrastructures if data sources are used for socially disputed
measures of public (e.g., safety) and private (e.g., market-based)
forms of surveillance [12], or are being misused, e.g., due to
large-scale data breaches [101]. This implies that the legitimacy
of AI-agents is highly contingent on legitimate collection, use,
and ownership of data, which otherwise could be a source of
dispute that causes field-level disintegration. Regulations such as
the European Union’s General Data Protection Regulation
(GDP R) should be seen as the first step of elaborating institutional
infrastructure that seeks to move fields engaged in data -collection
from the contested quadrants towards greater establishment and
coherency in logics. Over time this could imply a conceptual shift
of companies moving from “owners” towards “custodians” of
individuals’ private data. Opening access to data and developing
interactivity, as well as an increased sense of ownership with
users, is a step that could gain traction in order to smoothen
existing information asymmetries between central actors and
individual end -users [102] . Similarly, empowering users to better
understand and perhaps interact with certain AI -agents (e.g.,
recommender engines) would empower these with a greater sense
of ownership over how streams of information are utilized a nd
handled, as well as impacting individual practices and ways of
behavior .
Third, where institutional infrastructure is considered
inadequate during phases of market expansion, peripheral actors,
such as civil society organizations, frequently work on outlining
insufficient governance arrangements [103]. In many cases, it is
important that institutional infrastructure is elaborated before
negative externalities start to erode systemic and institutional
levels of trust, which causes a field to grow fragmented. If trust is
eroded past certain barriers, technology developers and adopters
are likely to experience severe pushback from the general public.
Public pushback forces central actors from the private sector to
engage in new measures of self-regulation, which in some cases
means scaling back digital infrastructure until a policy vacuum is
filled by new legislative provisions, such as in the case of facial
recognition technologies in the United States. When logics are at
odds with existing power structures or violate existing governance
arrangements, these are also more likely to be resisted [77].
At the same time, the formulation of institutional infrastructure
needs to emerge in more adaptive forms of organization [104,
105], that are able to take into account the myriad ways in which
modular AI-systems influence and shape existing practices and
ways of behavior. This warrants that new types of institutional
engineering have to be embraced in order to keep up with rapidly
expanding digital infrastructures while alleviating the pacing
problem [7]. Proposed measures of institutional adaptation to
mitigate AI-induced externalities include enhanced measures of
algorithmic auditing carried out by companies [106], third-party
auditors [107], or external regulators [108].
Auditing can create an ex-post procedural record of complex
algorithmic decision-making in order to track inaccurate decisions
or to detect forms of discrimination, as well as biased data,
practices, and other harms [47]. When algorithms are designed
without considering a population's or community's needs, it has
become apparent that both the algorithm and its implementer are
likely to experience public pushback or outright rejection, which
may obstruct other processes of AI legitimacy and adoption [109].
As a growing number of fields continue to migrate from
traditional forms of linear programming and further embrace
autonomous learning algorithms – behavioral control is gradually
transferred from the programmer to the algorithm and its
operating environment [110]. During this process, “the modular
design of systems can mean that no single person or group can
fully grasp the manner in which the system will interact or
respond to a complex flow of new inputs” [111: 14]. In order to
cope with AI-induced complexities, new governance structures
have to be co-invented through greater stakeholder engagement
among companies, civil society organizations, as well as
policymakers in order to secure the inclusion of affected
communities in the development of just algorithmic systems and
processes going forward [112].
The tradeoffs between algorithmic accuracy, transparency, and
use of data, as well as the rights to privacy, explanation, and right
of redress, remain subject to ongoing forms of mediation in
relation to the concomitant organizational practices that emerge at
the intersection of human-machine-based interactions. While
these tradeoffs have wide-ranging implications for the kind of
institutions that are likely to emerge, the devising of inclusive yet
reflexive institutional infrastructures that are able to encompass a
wide variety of AI-associated risks remains a crucial area to be
studied for years to come.
7 Conclusion
The increased presence of AI-agents embedded in varying
forms of organization entails that a whole range of AI-induced
institutions are currently emerging. This paper makes three
contributions that help elicit the ways in which AI-induced fields
are subject to varying degrees of legitimacy as well as processes
of institutionalization. First, the paper proposes a novel conceptual
framework for analyzing AI-induced field change. Second, it
illustrates the utility of the framework and finds a set of common
grounds for contestation associated with AI-induced field change
and legitimization. Third, the paper points to the need for more
adaptive organizations to emerge in response to the rapidly
evolving digital infrastructures of AI-systems.
The notion of pathways of change helps elicit the varying ways
in which novel AI solutions are resisted, rejected, or accepted as
legitimate practices over time. Assessing where a field is currently
positioned, as well as what its potential trajectories are, or could
be, and what needs to be done for a field to grow established and
become legitimized over time, are essential considerations for
stakeholders to take into account. Such deliberations contribute to
securing greater alignment between digital and institutional
infrastructures, which is important in terms of mitigating negative
externalities going forward.
The logics of any algorithmic interaction, as well as
transparency with the information that guides the interaction,
need to be broadly examined in order to get a better
understanding of how a given AI-agent affects and potentially
alters existing dependencies between humans and machines, as
well as between humans and new forms of organization. Only by
understanding where certain negative externalities could
potentially arise can organizations that are responsible for
algorithmic development or implementation work on establishing
the necessary institutional infrastructure (i.e., standards, rules, and
processes) to keep such externalities in check. Transparent and
reliable AI systems, as well as enhanced human-AI interactions,
are a key element for the trajectory of most AI fields on their road
to securing a broad sense of social legitimacy as well as growing
established over time. As novel digital infrastructures continue to
emerge, it is important that their road to becoming
institutionalized structures of society is thoroughly vetted and that
associated risks are mitigated in order to secure fair, equitable, and
trustworthy socio-technical interactions in the years to come.
REFERENCES
[1] A. Mackenzie, Cutting Code: Software and Sociality . New York: Peter
Lang Publishing, 2006.
[2] M. L. Markus, “Datification, Organizational Strategy, and IS Research:
What’s the Score?,” J. Strateg. Inf. Syst. , vol. 26, no. 3, pp. 233 –241,
2017.
[3] M. Hengstler, E. Enkel, and S. Duelli, “Applied artificial intelligence and
trust-The case of autonomous vehicles and medical assistance devices,”
Technol. Forecast. Soc. Change , vol. 105, pp. 105 –120, 2016.
[4] T. Davenport and R. Kalakota, “The potential for artificial intelligence in
healthcare,” Futur. Healthc. J. , vol. 6, no. 2, pp. 94 –98, 2019.
[5] C. Mckay, “Predicting risk in criminal procedure: actuarial tools,
algorithms, AI and judicial decision -making,” Curr. Issues Crim. Justice ,
vol. 32, no. 1, pp. 22 –39, 2020.
[6] R. L. Baskerville, M. D. Myers, and Y. Yoo, “Digital First: The
Ontological Reversal and New Challenges for IS Research,” MIS Q.,
2019.
[7] R. Hagemann, J. Huddleston, and A. D. Thierer, “Soft Law for Hard
Problems: The Governance of Emerging Technologies in an Uncertain
Future,” 2018.
[8] J. Buolamwini and T. Gebru, “Gender Shades: Intersectional Accuracy
Disparities in Commercial Gender Classification,” Proc. Mach. Learn.
Res., no. 81, pp. 1–15, 2018.
[9] Z. Obermeyer, B. Powers, C. Vogeli, and S. Mullainathan, “Dissecting
racial bias in an algorithm used to manage the health of populations,”
Science, vol. 366, no. 6464, pp. 447–453, 2019.
[10] J. Kallinikos, Governing through Technology: Information Artefacts and
Social Practices. New York: Palgrave Macmillan, 2011.
[11] S. Zuboff, In the age of the smart machine: the future of work and power .
New York: Basic Books, 1988.
[12] S. Zuboff, The Age of Surveillance Capitalism . New York: PublicAffairs,
2019.
[13] O. Henfridsson and B. Bygstad, “The generative mechanisms of digital
infrastructure evolution,” MIS Q. Manag. Inf. Syst., vol. 37, no. 3, pp.
907–931, 2013.
[14] P. Constantinides, O. Henfridsson, and G. G. Parker, “Platforms and
infrastructures in the digital age,” Inf. Syst. Res. , vol. 29, no. 2, pp. 381 –
400, 2018.
[15] Y. Yoo, O. Henfridsson, and K. Lyytinen, “The new organizing logic of
digital innovation: An agenda for information systems research,” Inf.
Syst. Res. , vol. 21, no. 4, pp. 724 –735, 2010.
[16] T. B. Lawrence and R. Suddaby, “Institutions and institutional work,” in
The SAGE Handbook of Organization Studies, 2006, pp. 215–254.
[17] A. Gawer and N. Phillips, “Institutional Work as Logics Shift: The Case
of Intel's Transformation to Platform Leader,” Organ. Sci., vol. 34, no.
8, pp. 1035–1071, 2013.
[18] P. H. Thornton, W. Ocasio, and M. Lounsbury, “Introduction to the
Institutional Logics Perspective,” in The Institutional Logics Perspective:
A New Approach to Culture, Structure and Process, 2012.
[19] B. Hinings, D. Logue, and C. Zietsma, “Fields, Institutional Infrastructure
and Governance,” in The SAGE Handbook of Organizational
Institutionalism, 2017, pp. 163–189.
[20] W. W. Powell, A. Oberg, V. Korff, C. Oelberger, and K. Kloos,
“Institutional analysis in a digital era: Mechanisms and methods to
understand emerging fields,” in New Themes in Institutional Analysis:
Topics and Issues from European Research, 2017, pp. 305–344.
[21] B. Hinings, T. Gegenhuber, and R. Greenwood, “Digital innovation and
transformation: An institutional perspective,” Inf. Organ. , vol. 28, no. 1,
pp. 52 –61, 2018.
[22] C. Zietsma, P. Groenewegen, D. Logue, and C. R. (Bob) Hinings,
“FIELD OR FIELDS? BUILDING THE SCAFFOLDING FOR
CUMULATION OF RESEARCH ON INSTITUTIONAL FIELDS,”
Acad. Manag. Ann. , vol. 11, no. 1, 2017.
[23] M. T. Dacin, M. J. Ventresca, and B. D. Beal, “The embeddedness of
organizations: Dialogue & directions,” J. Manage. , vol. 25, no. 3, pp.
317–356, 1999.
[24] J. W. Meyer and B. Rowan, “Institutionalized Organizations : Formal
Structure as Myth and Ceremony ,” Am. J. Sociol. , vol. 83, no. 2, pp. 340 –
363, 1977.
[25] P. J. DiMaggio and W. W. Powell, “The Iron Cage Revisited :
Institutional Isomorphism and Collective Rationality in Organizational
Fields,” Am. Sociol. Rev. , vol. 48, no. 2, pp. 147 –160, 1983.
[26] R. Greenwood, M. Raynard, F. Kodeih, E. R. Micelotta, and M.
Lounsbury, “Institutional complexity and organizational responses,”
Acad. Manag. Ann. , vol. 5, no. 1, pp. 317 –371, 2011.
[27] T. Reay and C. R. Hinings, “The recomposition of an organizational f ield:
Health care in Alberta,” Organ. Stud. , vol. 26, no. 3, pp. 351 –384, 2005.
[28] T. Reay and C. R. Hinings, “Managing the rivalry of competing
institutional logics,” Organ. Stud. , vol. 30, no. 6, pp. 629 –652, 2009.
[29] R. W. Scott, “Institutions and O rganizations, 4th ed.” 2014.
[30] P. H. Thornton and W. Ocasio, “Institutional logics and the historical
contingency of power in organizations: Executive succession in the
higher education publishing industry, 1958 -1990,” Am. J. Sociol. , vol.
105, no. 3, p p. 801 –843, 1999.
[31] S. Waddock, “Building a new institutional infrastructure for corporate
responsibility,” Acad. Manag. Perspect. , vol. 22, no. 3, pp. 87 –108, 2008.
[32] L. Floridi and J. W. Sanders, “On the morality of artificial agents
Luciano,” Mind s Mach. , 2004.
[33] P. Holm, “The Dynamics of Institutionalization: Transformation
Processes in Norwegian Fisheries,” Adm. Sci. Q. , vol. 40, no. 3, pp. 398 –
422, 1995.
[34] L. Floridi, The Fourth Revolution: How the infosphere is reshaping
human reality . Oxford, U.K: Oxford University Press, 2014.
[35] C. Curchod, G. Patriotta, L. Cohen, and N. Neysen, “Working for an
Algorithm: Power Asymmetries and Agency in Online Work Settings,”
Adm. Sci. Q. , vol. 65, no. 3, pp. 644 –676, 2020.
[36] W. J. Orlikowski and S . V. Scott, “10 Sociomateriality: Challenging the Separation of Technology, Work and Organization,” Acad. Manag. Ann. ,
vol. 2, no. 1, pp. 433 –474, 2008.
[37] S. Lash, “Power after Hegemony: Cultural Studies in Mutation?,” Theory,
Cult. Soc. , vol. 24, no. 3 , pp. 55 –78, 2007.
[38] D. Beer, “The social power of algorithms,” Inf. Commun. Soc. , vol. 20,
no. 1, pp. 1 –13, 2017.
[39] K. Macnish, “Unblinking eyes: The ethics of automating surveillance,”
Ethics Inf. Technol. , vol. 14, no. 2, pp. 151 –167, 2012.
[40] O. Henfridsson, J. Nandhakumar, H. Scarbrough, and N. Panourgias,
“Recombination in the open -ended value landscape of digital innovation,”
Inf. Organ. , vol. 28, no. 2, pp. 89 –100, 2018.
[41] A. Rai, P. Constantinides, and S. Sarker, “Editor’s comments: Nex t-
Generation Digital Platforms: Toward Human –AI Hybrids,” MIS Q. , vol.
43, no. 1, 2019.
[42] O. Hanseth and K. Lyytinen, “Design theory for dynamic complexity in
information infrastructures: The case of building internet,” J. Inf.
Technol. , vol. 25, no. 1, pp. 1 –19, 2010.
[43] E. Monteiro, “Scaling information infrastructure: The case of next -
generation IP in the internet,” Inf. Soc. , vol. 14, no. 3, pp. 229 –245, 1998.
[44] S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach .
Prentice -Hall, 2010.
[45] R. Parasuraman and D. H. Manzey, “Complacency and bias in human use
of automation: An attentional integration,” Hum. Factors , vol. 52, no. 3,
pp. 381 –410, 2010.
[46] B. Friedman and H. Nissenbaum, “Bias in computer systems,” ACM
Trans. Inf. Syst., pp. 330 –347, 1996.
[47] B. D. Mittelstadt, P. Allo, M. Taddeo, S. Wachter, and L. Floridi, “The
ethics of algorithms: Mapping the debate,” Big Data Soc. , no. December,
pp. 1 –21, 2016.
[48] N. Nuno, A. de Gomes, and V. Kontschieder, “AI Impact Assess ment: A
Policy Prototyping Experiment,” no. January, 2021.
[49] H. Zhu, B. Yu, A. Halfaker, and L. Terveen, “Value -Sensitive Algorithm
Design,” Proc. ACM Human -Computer Interact. , vol. 2, no. CSCW, pp.
1–23, 2018.
[50] R. Garud, S. Jain, and A. Kumaraswamy, “Institutional entrepreneurship
in the sponsorship of common technological standards: The case of Sun
Microsystems and Java,” Acad. Manag. J. , vol. 45, no. 1, pp. 196 –214,
2002.
[51] S. Grigorescu, B. Trasnea, T. Coci as, and G. Macesanu, “A survey of
deep learning techniques for autonomous driving,” J. F. Robot. , vol. 37,
no. 3, pp. 362 –386, 2020.
[52] M. Arnold et al. , “FactSheets: Increasing trust in AI services through
supplier’s declarations of conformity,” IBM J. Res. Dev. , vol. 63, no. 4 –5,
pp. 1 –13, 2019.
[53] A. Schlesinger, K. P. O’Hara, and A. S. Taylor, “Let’s talk about race:
Identity, chatbots, and AI,” Conf. Hum. Factors Comput. Syst. - Proc. ,
vol. 2018 -April, pp. 1 –14, 2018.
[54] M. Mitchell et al. , “Mode l cards for model reporting,” FAT* 2019 - Proc.
2019 Conf. Fairness, Accountability, Transpar. , no. Figure 2, pp. 220 –
229, 2019.
[55] K. Yang, J. Stoyanovich, A. Asudeh, B. Howe, H. V. Jagadish, and G.
Miklau, “A nutritional label for rankings,” in Proceedings of the ACM
SIGMOD International Conference on Management of Data , 2018, no.
Section 2, pp. 1773 –1776.
[56] D. Reisman, J. Schultz, K. Crawford, and M. Whittaker, “Algorithmic
impact assessments: A practical framework for public agency
accountab ility,” AI Now Inst. , no. April, p. 22, 2018.
[57] D. Coyle, S. Diepeveen, J. Wdowin, L. Kay, and J. Tennison, “The Value
of Data: Policy Implications,” Bennett Institute for Public Policy,
Cambridge in partnership with the Open Data Institute. 2020.
[58] S. Barocas and A. D. Selbst, “Big Data’s Disparate Impact,” Calif. Law
Rev., vol. 104, no. 671, pp. 671 –732, 2014.
[59] E. M. Bender and B. Friedman, “Data Statements for Natural Language
Processing: Toward Mitigating System Bias and Enabling Better
Scienc e,” Trans. Assoc. Comput. Linguist. , vol. 6, pp. 587 –604, 2018.
[60] T. Gebru et al. , “Datasheets for Datasets,” arxiv , 2020.
[61] J. Stoyanovich and B. Howe, “Nutritional Labels for Data and Models,”
Data Eng. , no. 1926250, p. 13, 2019.
[62] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D.
Mané, “Concrete Problems in AI Safety,” arXiv , pp. 1 –29, 2016.
[63] P. D. P. C. S. (PDPC) and I. M. D. A. (IMDA), “Model Artificial
Intelligence Governance Framework Second Edition,” 2020.
[64] D. Ti lson, K. Lyytinen, and C. Sørensen, “Digital infrastructures: The
missing IS research agenda,” Inf. Syst. Res. , vol. 21, no. 4, pp. 748 –759,
2010.
[65] G. P. Pisano and D. J. Teece, “How to Capture Value from Innovation:
Shaping Intellectual Property and I ndustry Architecture,” Calif. Manage.
Rev., 2007.
[66] S. Botzem and L. Dobusch, “Standardization Cycles : A Process
Perspective on the Formation and Diffusion of Transnational Standards,”
Organ. Stud. , vol. 5 –6, no. 33, pp. 737 –762, 2012.
[67] M. Djelic a nd K. Sahlin -Andersson, Transnational governance:
Institutional dynamics of regulation . Cambridge: Cambridge University
Press., 2006.
[68] R. Garud and P. Karnøe, “Bricolage versus breakthrough: distributed and
embedded agency in technology entrepreneurshi p,” Res. Policy , vol. 32,
pp. 277 –300, 2003.
[69] M. Raynard, “Deconstructing complexity: Configurations of institutional
complexity and structural hybridity,” Strateg. Organ. , vol. 14, no. 4, pp.
310–335, 2016.
[70] W. J. Orlikowski, “Sociomaterial practices: Exploring technology at
work,” Organ. Stud. , vol. 28, no. 9, pp. 1435 –1448, 2007.
[71] R. Greenwood, R. Suddaby, and C. R. Hinings, “Theorizing change: The
role of professional associations in the transformation of institutionalized
fields,” Acad. Manag. J. , vol. 45, no. 1, pp. 58 –80, 2002.
[72] C. Zietsma and T. B. Lawrence, “Institutional work in the transformation
of an organizational field: The interplay of boundary work and practice
work,” Adm. Sci. Q. , vol. 55, no. 2, pp. 189 –221, 2010.
[73] H. Rao, C. Morrill, and M. N. Zald, “Power plays: How social
movements and collective action create new organizational forms,” Res.
Organ. Behav. , vol. 22, no. June, pp. 237 –281, 2000.
[74] W. W. Powell and K. W. Sandholt z, “Amphibious entrepreneurs and the
emergence of organizational forms,” Strateg. Entrep. J. , vol. 6, no. 2, pp.
94–115, 2012.
[75] S. D. Patvardhan, D. A. Gioia, and A. L. Hamilton, “Weathering a meta -
level identity crisis: Forging a coherent collective i dentity for an
emerging field,” Acad. Manag. J. , vol. 58, no. 2, pp. 405 –435, 2015.
[76] K. Hao, “The two -year fight to stop Amazon from selling face
recognition to the police,” MIT Technology Review , 2020.
[77] S. Furnari, “Institutional fields as linked arenas: Inter -field resource
dependence, institutional work and institutional change,” Hum. Relations ,
vol. 69, no. 3, pp. 551 –580, 2016.
[78] A. Krishna, “IBM CEO’s Letter to Congress on Racial Justice Reform,”
2020.
[79] National Institute of Standards and Technology (NIST), “NIST.” 2020.
[80] K. Hao, “IBM’s photo -scraping scandal shows what a weird bubble AI
researchers live in,” MIT Technology Review , 2020.
[81] K. Hill, “The Secretive Company That Might End P rivacy as We Know
It,” New York Times , 2020.
[82] S. Ghaffary and R. Molla, “Here’s where the US government is using
facial recognition technology to surveil Americans,” Vox, 10-Dec-2019.
[83] K. Hill, “Wrongfully Accused by an Algorithm,” New York Times , 2020.
[84] D. C. North, Institutions, Institutional Change and Economic
Performance . 1990.
[85] N. Fligstein, “Social skill and the theory of fields,” Sociol. Theory , vol.
19, no. 2, pp. 105 –125, 2001.
[86] N. Fligstein and D. McAdam, A theory of fields . Oxford, U.K: Oxford
University Press, 2012.
[87] S. Maguire and C. Hardy, “Discourse and Deinstitutionalization: The
Decline of DDT,” Acad. Manag. J. , vol. 52, no. 1, pp. 148 –178, 2009.
[88] J. Amis, T. Slack, and C. R. Hinings, “The Pace, Sequence, and Linearity
of Radical Change Author(s):,” Acad. Manag. J. , vol. 47, no. 1, pp. 15 –
39, 2004.
[89] K. A. Munir and N. Phillips, “The birth of the ‘Kodak moment’:
Institutional entrepreneurship and the adoption of new technologies,”
Organ. Stud. , vol. 26, no. 11, pp. 1665 –1687, 2005.
[90] G. E. Marchant and R. A. Lindor, “The Coming Collision Between
Autonomous Vehicles and the Liability System THE COMING
COLLISION BETWEEN AUTONOMOUS VEHICLES AND TH E
LIABILITY SYSTEM,” St. Cl. Law Rev. Artic. , vol. 52, no. 4, 2012.
[91] P. Castel and E. Friedberg, “Institutional change as an interactive process:
The case of the modernization of the French cancer centers,” Organ. Sci. ,
vol. 21, no. 2, pp. 311 –330, 201 0.
[92] M. Hensmans, “Social movement organizations: A metaphor for strategic
actors in institutional fields,” Organ. Stud. , vol. 24, no. 3, pp. 355 –381,
2003.
[93] C. R. Sunstein, Republic.com 2.0 . Princeton, NJ: Princeton University
Press, 2007.
[94] K. Hosanagar, D. Fleder, D. Lee, and A. Buja, “Will the global village
fracture into tribes recommender systems and their effects on consumer
fragmentation,” Manage. Sci. , vol. 60, no. 4, pp. 805 –823, 2014. [95] S. Newell and M. Marabelli, “Strategic opportun ities (and challenges) of
algorithmic decision -making: A call for action on the long -term societal
effects of ‘datification,’” J. Strateg. Inf. Syst. , vol. 24, no. 1, pp. 3 –14,
2015.
[96] M. Van Alstyne and E. Brynjolfsson, “Global village or cyber -balkans ?
Modeling and measuring the integration of electronic communities,”
Manage. Sci. , vol. 51, no. 6, pp. 851 –868, 2005.
[97] J. Möller, D. Trilling, N. Helberger, and B. van Es, “Do not blame it on
the algorithm: an empirical assessment of multiple recommender systems
and their impact on content diversity,” Inf. Commun. Soc. , vol. 21, no. 7,
pp. 959 –977, 2018.
[98] S. Michael, “ Interlopers and Field Change: The Entry of US News into
The Field of Legal Education,” Adm. Sci. Q. , vol. 53, p. 209, 2008.
[99] R. Garud, “Conferences as venues for the configuration of emerging
organizational fields: The case of cochlear implants,” J. Ma nag. Stud. ,
vol. 45, no. 6, pp. 1061 –1088, 2008.
[100] T. B. Zilber, “Stories and the discursive dynamics of institutional
entrepreneurship: The case of Israeli high -tech after the bubble,” Organ.
Stud. , vol. 28, no. 7, pp. 1035 –1054, 2007.
[101] J. Isaak and M. J. Hanna, “User Data Privacy: Facebook, Cambridge
Analytica, and Privacy Protection,” Computer (Long. Beach. Calif). , vol.
51, no. 8, pp. 56 –59, 2018.
[102] O. Tene and J. Polonetsky, “Big Data for All : Privacy and User Control
in the Age of Analyt ics Big Data for All : Privacy and User Control in
the,” Northwest. J. Technol. Intellect. Prop. , vol. 11, no. 5, 2013.
[103] S. L. Star, “Infrastructure and ethnographic practice: Working on the
fringes,” Scand. J. Inf. Syst. , vol. 14, no. 2, pp. 107 –122, 2002.
[104] C. Wang, R. Medaglia, and L. Zheng, “Towards a typology of adaptive
governance in the digital government context: The role of decision -
making and accountability,” Gov. Inf. Q. , vol. 35, no. 2, pp. 306 –322,
2018.
[105] A. Taeihagh, S. Y. Tan, and A. Taiehagh, “Adaptive governance of
autonomous vehicles: accelerating the adoption of disruptive technologies
in Singapore,” Gov. Inf. Q. , no. November 2019, p. 101546, 2021.
[106] T. Zarsky, “The Trouble with Algorithmic Decis ions: An Analytic Road
Map to Examine Efficiency and Fairness in Automated and Opaque
Decision Making,” Sci. Technol. Hum. Values , vol. 41, no. 1, pp. 118 –
132, 2016.
[107] J. Clark and G. K. Hadfield, “REGULATORY MARKETS FOR AI
SAFETY,” Conf. Pap. ICLR 201 9, 2019.
[108] A. Tutt, “AN FDA FOR ALGORITHMS,” Adm. Law Rev. , vol. 69, no. 1,
2016.
[109] M. Whittaker et al. , “AI Now 2018 Report,” AI Now , no. December, pp.
1–62, 2018.
[110] A. Matthias, “The responsibility gap: Ascribing responsibility for the
action s of learning automata,” Ethics Inf. Technol. , vol. 6, no. 3, pp. 175 –
183, 2004.
[111] C. Allen, W. Wallach, and I. Smit, “Why machine ethics?,” Mach. Ethics ,
pp. 51 –61, 2006.
[112] M. K. Lee et al. , “Webuildai: Participatory framework for algorithmic
gove rnance,” Proc. ACM Human -Computer Interact. , vol. 3, no. CSCW,
2019.
|
ee9349d8-4296-4268-92e1-d438fc84172a | trentmkelly/LessWrong-43k | LessWrong | Papers framing anthropic questions as decision problems?
A few weeks ago at a Seattle LW meetup, we were discussing the Sleeping Beauty problem and the Doomsday argument. We talked about how framing the Sleeping Beauty problem as a decision problem basically solves it, and then we got the idea of using the same heuristic on the Doomsday problem. I think you would need to specify more about the Doomsday setup than is usually done to do this.
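For concreteness, here is one standard way the decision framing can be made explicit. The per-awakening betting setup and prices below are an illustrative assumption on my part, not a record of what we worked through at the meetup.

```python
# Toy expected-value check: suppose that at every awakening Beauty may buy, for price c,
# a ticket that pays $1 if the coin landed tails (tails means she is woken twice).
def expected_profit_per_experiment(c):
    heads = 0.5 * (0 - c)        # heads: one awakening, the ticket pays nothing
    tails = 0.5 * 2 * (1 - c)    # tails: two awakenings, each ticket nets 1 - c
    return heads + tails

# Break-even is at c = 2/3, so with per-awakening bets she should act as if
# P(tails) = 2/3 (the "thirder" policy). If the bet were settled only once per
# experiment, break-even would instead be c = 1/2 (the "halfer" policy).
for c in (0.5, 2/3, 0.75):
    print(c, expected_profit_per_experiment(c))
```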
We didn't spend a lot of time on it, but it got me thinking: Are there papers on trying to gain insight into the Doomsday problem and other anthropic reasoning problems by framing them as decision problems? I'm surprised I haven't seen this approach talked about here before. The idea seems relatively simple, so perhaps there is some major problem that I'm not seeing. |
67c0d21f-5955-4dad-bafa-f9ab5281c11b | trentmkelly/LessWrong-43k | LessWrong | 2011 Summer Matching Challenge Success!
The $125,000, 2011 Summer Matching Challenge was a success! We met our goal 2 days early, raising $250,000 total for Singularity Institute operations. Here is the blog post announcement. |
6b175469-bf5d-4c30-a126-485cc3a66a91 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | ChatGPT: Tantalizing afterthoughts in search of story trajectories [induction heads]
Several times during my undergraduate years I would finish a paper, turn it in, and then ***Wham!*** it hit me. ***That*** should have been my theme. That is to say, I would get an idea and explore it in a paper, but it was only after I declared the paper finished that I could see what I’d been driving at all along.
That’s what has happened to me with my latest working paper, [ChatGPT intimates a tantalizing future; its core LLM is organized on multiple levels; and it has broken the idea of thinking](https://www.academia.edu/95608526/ChatGPT_intimates_a_tantalizing_future_its_core_LLM_is_organized_on_multiple_levels_and_it_has_broken_the_idea_of_thinking_Version_2). I uploaded it on January 23, and then a day or two later, I decided I had to add a section. I uploaded the new Version 2 a couple of days ago (Jan. 28, 2023). I got up the next morning, took a look at the technical literature and decided I need a Version 3. But I’ve not yet done the revision. I want to think it through a bit more. That’s what this post is for.
First off I review the relevant section in the working paper (section “5: What underlying structure drives ChatGPT’s behavior?”), then I explain why “story grammar” isn’t the right idea. That gives way to accounts of story trajectories (the new idea) and coherence. I end by introducing the idea of induction heads from some recent research by investigators at [Anthropic](https://www.anthropic.com/).
**Syntax and discourse on different levels**
============================================
Here’s the section that is new for Version 2:
> Since I am in full speculative mode, I will further assert that sentence-level syntax and semantics are implemented directly in the neural net. Story grammars, my primary example, are implemented in a different and somewhat mysterious way. To return to the somewhat clunky analogy of computer languages, sentence-level syntax is analogous to a high-level language, such as C++, while story grammars are analogous to end-user programs, such as word processors or databases.
>
> My assertion about sentence-level syntax is based on work published by [Christopher D. Manning et al, in 2020](https://doi.org/10.1073/pnas.1907367117). They investigated syntactic structures represented in BERT (Bidirectional Encoder Representations from Transformers). They discovered that neural networks induce
>
>
> > representations of sentence structure which capture many of the notions of linguistics, including word classes (parts of speech), syntactic structure (grammatical relations or dependencies), and coreference (which mentions of an entity refer to the same entity, such as, e.g., when “she” refers back to “Rachel”). [...] Indeed, the learned encoding of a sentence to a large extent includes the information found in the parse tree structures of sentences that have been proposed by linguists.
> >
> >
>
> The requirement that certain stories include the functional story segments Donné, Disturb, Plan, Enact, and Celebrate operates at a higher level than sentence syntax. It operates on sentences, requiring that the paths they take through activation space meet certain semantic requirements. This seems to indicate that the language model has some kind of an internal “map” of activation space that it uses to characterize these paths.
>
>
Let’s back up a second and return to Manning et al. (2020). Early in the paper they observed:
> One might expect that a machine-learning model trained to predict the next word in a text will just be a giant associational learning machine, with lots of statistics on how often the word restaurant is followed by kitchen and perhaps some basic abstracted sequence knowledge such as knowing that adjectives are commonly followed by nouns in English. It is not at all clear that such a system can develop interesting knowledge of the linguistic structure of whatever human language the system is trained on. Indeed, this has been the dominant perspective in linguistics, where language models have long been seen as inadequate and having no scientific interest, even when their usefulness in practical engineering applications is grudgingly accepted (15, 16).
>
>
That is not what they found. They found syntax. I’ve been looking at stories.
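To give a rough sense of how that kind of structure is detected, here is a heavily simplified sketch of a “structural probe” in the spirit of the methods Manning et al. describe: learn a single linear map over BERT’s hidden states so that squared distances between transformed word vectors approximate distances in the parse tree. The layer choice, probe size, and the `tree_dist` input are illustrative assumptions, not the paper’s exact setup.

```python
# Minimal structural-probe sketch (illustrative only). Gold tree distances would come
# from a parsed corpus; `tree_dist` is assumed to be an (n_tokens, n_tokens) tensor.
# A real probe also aligns subword tokens to words, which this sketch ignores.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-cased")
bert = AutoModel.from_pretrained("bert-base-cased", output_hidden_states=True)

B = torch.nn.Parameter(torch.randn(128, bert.config.hidden_size) * 0.01)
opt = torch.optim.Adam([B], lr=1e-3)

def probe_loss(sentence, tree_dist):
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        h = bert(**enc).hidden_states[8][0]      # hidden states from one middle layer
    th = h @ B.T                                 # linearly transformed word vectors
    diff = th.unsqueeze(0) - th.unsqueeze(1)     # all pairwise differences
    pred = (diff ** 2).sum(-1)                   # predicted squared distances
    return (pred - tree_dist).abs().mean()       # compare with gold tree distances

# Training loops over a treebank, calling probe_loss and opt.step(); if the loss gets
# low, the hidden states encode something close to the sentence's parse structure.
```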
What’s the relationship between syntax and stories? You can’t account for stories through the rules of sentence-level syntax alone. That is, story structure isn’t scaled-up syntactic structure; it is something else. The neural net structures that account for syntax won’t account for story structure. Something else is required, something that “operates at a higher level than sentence syntax.” What kind of a thing is that?
In the past some researchers talked about story grammars. Some of those researchers have formal and computational models and some of them don't. Narratologists and literary critics talk about story grammars, but don't actually have mechanical models, though some of them may aspire to such models. But, starting in the late 1970s, researchers in artificial intelligence have been developing [computational models of story-telling](https://thegradient.pub/an-introduction-to-ai-story-generation/). They are operating in the GOFAI regime of symbolic computation. They too talked about story grammars, drawing inspiration from the narratologists.
I used the term “story grammar” in my paper, but I have decided to abandon it in favor of a different term, “story trajectory.” That term seems more resonant with the technical underpinnings of ChatGPT.
**Why story grammar isn’t the right idea**
==========================================
To get my argument off the ground I had proposed that we think of a neural net as a platform analogous to a CPU with its machine code. You can use assembly language to implement a high-level language, such as C++, and then use C++ to implement, say, a database. In that formulation we can think of sentence-level syntax as being implemented directly in the net, and story grammar as being implemented at that higher level, the level of story grammar. Story grammar is analogous to the principles of database design, sentence-level syntax is analogous to C++, and the neural net is analogous to assembly language.
There is a problem with this: the neural net is all there is. What do I mean by that?
In the case of ordinary programs, the various levels have each been designed by humans. The people who use them have available to them, at least in principle though not necessarily in fact, a complete knowledge of the pair of systems (assembler/high-level-language, high-level-language/database-design-principles) they’re working with and how they fit together. They can engineer the system “from above” as it were.
Artificial neural nets (ANNs) aren’t at all like that. They start with a fixed architecture of parameters (neurons) arranged in layers. This system then “consumes” a bunch of data and in the process induces a model of the objects in the data. That model is the ANN. The engine that induces it does not engineer the model “from above.” There is no “from above” in this process. That’s the problem.
And yet it is clear that, when ChatGPT is telling a story, it is doing so with something that is functioning at a higher level than sentence-level syntax, something that is operating ON sentences. What’s the best way to conceptualize that? I’m not looking for a deep conceptualization, one that tells us what’s going on among the layers and parameters. I’ll be satisfied with something shallower, something more like a “way of talking about it.”
That’s why I propose to talk about a story trajectory rather than a story grammar. The use of “story grammar” as a technical term in AI is grounded in symbolic AI, where the high-level structure of the system is grounded in the conventions of knowledge representation adopted by the researchers, who code them into the system in the way database designers code databases. A story grammar is something that is written “from above.” There is no “from above” for ChatGPT, or any other large language model, and so we need a more appropriate term. “Story trajectory” is an appropriate term.
**What’s a story trajectory?**
==============================
A story trajectory is a (high-level) path through activation space. In the stories I have been examining, which are stories in which a hero accomplishes some task, the trajectory passes through the following regions: Donné, Disturb, Plan, Enact, and Celebrate. Those are names that I, as an analyst, have given to those regions. They are not to be construed as modules in some kind of symbolic computing system. I’ve been using a language of frames, slots, and fillers. I’m not even sure I like that, but it may be appropriate for the process of generating readable strings of text.
How are those regions characterized? “Donné,” that’s French, it means “given.” It’s a term literary critics use to indicate the opening situation of a story, the particular situation but also, by implication, the world in which the story is set. That’s a very abstract object. It’s not as though one can characterize the Donné region of the space by listing places in terms of geography, season of the year, or weather, or any other such thing. The Donné isn’t a kind of region; rather, it’s the way a specific region is used in creating a story. The Donné is defined in relation to the other regions in the hero story trajectory, and they are defined in relation to it and to one another. Thus the Donné is not a predefined region in the 175-billion-dimensional space of ChatGPT’s LLM.
Given an arbitrary but coherent set of circumstances, ChatGPT can use them as the Donné of a story. But it will not accept all circumstances it is given. Thus it refused to tell a story where the heroine was defined as a colorless green idea ([find that example here](https://new-savanna.blogspot.com/2023/01/what-happens-when-you-ask-chatgpt-to.html)). I’ve included an appendix which includes two hero stories where I specified the donné in a fairly general way.
A story trajectory is thus a “recipe” for generating a sequence of events given a specified starting point. We can then ask: how are we to characterize that trajectory? Well, in the case of a certain kind of hero story, the kind I’ve been investigating, the next region in the trajectory, Disturb, must contain a circumstance that somehow endangers the situation specified in the first region, Donné. That’s not terribly helpful, is it? All I’m doing is explicating the name. And so it goes with the other regions of the trajectory.
It is more abstract than sentence-level syntax, which can be defined over specific objects in the space. However a story trajectory is defined, it is defined over sentences. It sets specifications a sentence must meet in order to advance the story.
Let’s ask: Do we even need to define such a trajectory? Perhaps we need to do that for analytical purposes. That’s why I chose to talk of regions in the story trajectory: Donné, Disturb, Plan, Enact, and Celebrate. But I’m not asking about me, I’m asking about ChatGPT. In order to tell a proper story, does it need an explicit account of how stories are structured? “But if it doesn’t have such an account,” you might ask, “how does it know what to do?” It works from examples. “Here’s an example, give me another one like it.” Isn’t that how people learn to tell stories, by example? If that’s the case, then all ChatGPT needs is an example and a mechanism to create a new story from that example. I think some researchers at Anthropic have identified a mechanism that will do that – they call it an induction head – but let’s set that aside for a moment. Let’s take a look at how I’ve been investigating stories.
**Coherence in stories**
========================
My procedure is derived from the one Claude Lévi-Strauss used in *The Raw and the Cooked*, the first volume in his four-volume series on mythology, *Mythologiques*. Here is what I said in my paper:
He started with one myth, analyzed it, and then introduced another one, very much like the first. But not quite. They are systematically different. He characterized the difference by something he called a *transformation* – drawing on a metaphor from algebraic group theory. He worked his way through hundreds of myths in this manner, each one derived from another by such a transformation.
The existence of ChatGPT allows me to use a more active procedure. I give ChatGPT a prompt consisting of two things: 1) an existing story and 2) instructions to produce another story like it except for one change, which I specify. That change, or intervention, is in effect a way of triggering or specifying those “transformations” that Lévi-Strauss wrote about. What interests me is the *ensemble of things that change* in parallel with the change I have specified. I note the difference and do it again and again.
The *ensemble of things that change*, those changes give us evidence about how the story trajectory operates on sentences. That ensemble is evidence about coherence.
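For anyone who would rather run that procedure programmatically than in the ChatGPT web interface, here is a minimal sketch against the OpenAI chat completions API (the pre-1.0 Python client). The model name, prompt wording, and substitutions are illustrative assumptions, not the exact prompts used in the experiments reported here.

```python
# Sketch of the "same story, one specified change" procedure.
import openai

SOURCE_STORY = "Once upon a time ... a young princess named Aurora ..."  # paste the full source story

def retell(source_story, change):
    messages = [{
        "role": "user",
        "content": (
            "I am going to tell you a story. I want you to tell the same story, "
            f"but {change}. Make any other changes you wish. Here is the story:\n\n"
            + source_story
        ),
    }]
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return resp["choices"][0]["message"]["content"]

# The analysis then consists of comparing each variant against the source and noting
# the ensemble of changes that came along with the one change that was requested.
for change in ["change princess Aurora to Henry the Eloquent",
               "change princess Aurora to William the Lazy",
               "change princess Aurora to a Giant Chocolate Milkshake"]:
    print(retell(SOURCE_STORY, change))
```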
I have [a number of examples](https://new-savanna.blogspot.com/2023/01/the-structuralist-aesthetics-of-chatgpt.html) where the source story is about “a young princess named Aurora [...] and was known for her beautiful singing voice.” Her kingdom was threatened by a dragon. She dealt with the dragon by singing to it until it was calmed down. That’s a story that ChatGPT created in response to a simple prompt: “Tell me a story about a hero.”
When ChatGPT developed new story by replacing Aurora with Henry the Eloquent, it had Henry deal with the dragon by talking to it “until it was completely calm.” ChatGPT had to be more inventive when replacing Aurora with William the Lazy. How would a lazy man deal with a fierce dragon? William the Lazy had his knights deal with it “with William providing guidance and support from the safety of his castle.”
In this case, coherence means that the protagonist’s actions have to be consistent with their nature. Aurora had a beautiful singing voice and so she used it to deal with the dragon. Henry was a good talker, so he calmed the dragon by talking to it. William was Lazy so he had to get others to deal directly with the dragon.
However obvious and simple that may seem, it was ChatGPT that imposed that kind of coherence, not me. I gave Henry and William their descriptive epithets because I’d come to suspect ChatGPT was doing that and I wanted more control over ChatGPT’s revisions than I would have had if I’d just given the new protagonist a name. But I didn’t tell ChatGPT to take those epithets into account. It did so of its own accord.
Why did it do so? What is the mechanism that “enforced” that kind of coherence? I don’t know. If we were dealing with a 1980s symbolic model, there would have been an explicit rule to create that kind of coherence. But ChatGPT isn’t that kind of system; I doubt that it has that kind of rule anywhere for anything. It created that kind of coherence simply because that is how it became organized. That’s the structure it induced from however many hundreds of thousands of stories it consumed during training. That coherence is in the pattern of weights on those 175 billion parameters.
In a different series of experiments I asked ChatGPT to [change Aurora into a giant chocolate milkshake](https://new-savanna.blogspot.com/2023/01/what-happens-when-you-ask-chatgpt-to.html). I had no idea what it would do, but it was happy to do it. It did what I asked, and something more. Here’s a paragraph from the original story and the corresponding paragraph from the derived story:
> [Original] One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.
>
> [Derived] One day, a terrible heatwave came to the kingdom and began to melt all the desserts. The heat was fierce and powerful, and none of the ice cream or cakes were able to withstand it. The people lived in fear, not knowing what to do.
>
>
Notice that the dragon became a heatwave and the knights and soldiers became ice cream or cakes. Why did it do that? I suppose if the princess had become a giant chocolate milkshake, then her knights and soldiers had to have similar natures. Would it do it again?
*[Yes, I tried it yesterday. See Appendix 2.]*
Then I asked it to change Aurora into a colorless green idea. Here’s ChatGPT’s response:
> I'm sorry, but the concept of a "colorless green idea" is a nonsensical phrase that was invented as an example of a phrase that would be grammatically correct but semantically meaningless. It is not possible to create a story about a "colorless green idea" as it does not have any physical properties or characteristics that can be used in a story.
>
>
ChatGPT drew a line. Giant chocolate milkshakes make a fine protagonist, but colorless green ideas do not.
[I’ve got tables depicting before-and-after comparisons of ChatGPT stories in both [the tantalizing working paper](https://www.academia.edu/95608526/ChatGPT_intimates_a_tantalizing_future_its_core_LLM_is_organized_on_multiple_levels_and_it_has_broken_the_idea_of_thinking_Version_2), and this [short note about story grammars](https://www.academia.edu/95032499/A_note_about_story_grammars_in_ChatGPT).]
What then are we to make of this coherence? Perhaps it is just a term for something that we, or at any rate I, do not really understand. It’s a word that, in this context, is looking for an explication.
But, what form would that explication take? What would it be about? It seems to me it would be about the patterns that are latent in the weights of those 175 billion parameters, patterns induced from consuming billions upon billions of words of text. Where did the stories in those texts come from? The people who wrote those texts? How did they learn to write those stories? By example, that’s how. Where did they get those examples? From history, from the body of stories people have been telling for as long as they’ve been telling stories.
That, alas, seems like I’m avoiding the question. But am I, am I really? What else is there? We can ask why people find those stories pleasing. Because that’s what gives human brains pleasure?
My point is that I don’t believe that we are going to find “the rules of good stories” anywhere but in the analytical work of scholars. And, of course, in all those “how to do it” manuals. One day we may well have some good ideas about why brains find some things pleasing and others not. Those ideas may then lead us to formulate the “natural” principles of story construction. But we’re not there yet.
All of which is to say that coherence is a property local to individual stories reflecting processes operating in the whole (freaking) system. Alas. It's an analytic abstraction.
Let’s return to ChatGPT.
**Tell me another story**
=========================
I [began this line of investigation](https://new-savanna.blogspot.com/2023/01/chatgpt-stories-and-ring-composition.html) when my friend Rich sent me a story that he’d elicited from ChatGPT with a simple prompt: “Tell me a story about a hero.” When I did that, ChatGPT first gave me a story about Timmy, “who was a very brave and kind-hearted boy.” I asked it for another story. That’s when it told me about “a young princess named Aurora. Aurora was a kind and gentle soul, loved by all who knew her.” That’s the story I talked about in the previous section.
Yesterday, February 1, 2023, I gave ChatGPT an even simpler prompt: “Tell me a story.” It told me a story about a poor man named Jack who climbed a magic beanstalk and stole gold from a giant (see Appendix 3). It’s not quite the standard story of [Jack and the Beanstalk](https://en.wikipedia.org/wiki/Jack_and_the_Beanstalk), but it’s clearly derived from it. This morning, February 2, 2023, at 6:30 AM, I put the question to ChatGPT again. First it told me about a young man named Jack who saved the village from a dragon. I asked it to regenerate its response and it told me a story about a farmer named Jack who climbed a magic beanstalk and stole a golden harp from a giant (both of these stories are in Appendix 3).
I don’t know why ChatGPT likes those stories about Jack and a beanstalk. At the moment I’m content to ascribe it to the joint interaction of 1) the human proclivity for story-telling as expressed on the web, 2) the process by which GPT-3 assimilated those many texts into a pattern of weights on its 175 billion parameters, 3) whatever further training GPT-3 was subject to in the process of creating ChatGPT, and 4) the state of the universe when I issued those prompts. What we’ve got is an enormous pile of contingencies shaped by various coherent processes in brains, communities, and computers. Let’s set that aside.
How does ChatGPT “know” what to do when it’s asked to tell a story? I don’t know, but that seems to me to be a fairly limited question. I note that ChatGPT is quite capable of defining what a story is (see Appendix 4), but I don’t think that definition plays any role in what it does when it tells a story. That definition is most likely a distillation of the many accounts of stories that are floating around on the web.
Let’s put all that aside for consideration on another day and under different circumstances. Let’s ask one question: Given an example story, how does ChatGPT derive a new one from it? I think there exist the strong beginnings of an answer to that question.
Some researchers at Anthropic, 26 of them (that’s nothing, there were over five thousand names on the paper announcing discovery of the [Higgs boson](https://www.nature.com/articles/nature.2015.17567)), have written a nice paper entitled, [In-context Learning and Induction Heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html). This is not the place and I am not the person to explain what induction heads are, but I will give you an excerpt from their second paragraph:
> Perhaps the most interesting finding was the induction head, a circuit whose function is to look back over the sequence for previous instances of the current token (call it A), find the token that came after it last time (call it B), and then predict that the same completion will occur again (e.g. forming the sequence [A][B] … [A] → [B]). In other words, induction heads “complete the pattern” by copying and completing sequences that have occurred before. Mechanically, induction heads in our models are implemented by a circuit of two attention heads: the first head is a “previous token head” which copies information from the previous token into the next token, while the second head (the actual “induction head”) uses that information to find tokens preceded by the present token. For 2-layer attention-only models, we were able to show precisely that induction heads implement this pattern copying behavior and appear to be the primary source of in-context learning.
>
>
[Note that yesterday I made a [post about in-context learning](https://new-savanna.blogspot.com/2023/02/chatgpt-stories-and-surprising-case-of.html).]
As the paper develops the authors argue that induction heads can do more than just pattern copying. They can also do pattern completion. As far as I can tell, the procedure I’ve used to get ChatGPT to produce a new story from an old one should be within the capabilities of the induction head mechanism. It’s just copying with a difference, copying where one element is exchanged for another and the pattern is then completed in a way that is consistent with the new element.
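To make the basic [A][B] … [A] → [B] behavior concrete, here is a toy sketch of literal pattern copying over a token sequence. Real induction heads are attention circuits inside the transformer, not a lookup like this; the sketch only illustrates what “copy whatever followed last time” means.

```python
# Toy illustration of induction-style pattern copying (not an attention circuit).
def induction_guess(tokens):
    """Predict the next token by finding the most recent earlier occurrence of the
    current token and copying whatever followed it."""
    current = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):   # scan backwards over earlier positions
        if tokens[i] == current:
            return tokens[i + 1]               # copy the token that followed last time
    return None                                # no earlier occurrence: no guess

seq = ["the", "dragon", "was", "fierce", "and", "the", "dragon"]
print(induction_guess(seq))   # -> "was": it has seen "dragon" before and copies what followed
```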
Moreover, induction heads can also do pattern matching. If that is the case, then they may well be responsible for the process where I got [ChatGPT to give a Girardian interpretation of Steven Spielberg’s *Jaws*](https://3quarksdaily.com/3quarksdaily/2022/12/conversing-with-chatgpt-about-jaws-mimetic-desire-and-sacrifice.html). That requires ChatGPT to find a match between Girard’s ideas and the events of Spielberg’s movie. It was able to do that. That, in turn, is a process of reasoning by analogy. Analogical reasoning is one of the fundamental modes of human thought.
As far as I know, there are two kinds of processes one can carry out in a network: path-tracing and pattern-matching. Path-tracing is computationally easy, pattern-matching is not. The discovery of induction heads looks like we’re beginning to understand how pattern matching works in LLMs.
Does anyone want to wager on when the opaqueness of advanced LLMs gives way to translucency? What about transparency? Those strike me as being more sensible wagers than betting on the emergence of AGI. The emergence of AGI depends on luck and magic. Figuring out how deep neural nets work requires only insight, hard work, and time.
**Appendix 1: Two hero stories where the donné is specified in the prompt**
===========================================================================
ChatGPT accessed on 1.28.23. The prompt is set in bold.
> **Tell me a hero story that starts on an iceberg off the coast of Antarctica in November.**
>
> Once upon a time, there was a brave explorer named Jack. He had always dreamed of being the first person to reach the North and South Poles, and he had finally set out on his greatest adventure yet: an expedition to the Antarctic.
>
> It was November, and the weather was harsh and unforgiving. Jack and his team were on a small iceberg off the coast, taking a break before continuing their journey. Suddenly, they heard a loud noise. Jack looked out to sea and saw that a ship was in trouble, caught in the ice and being tossed around by the powerful waves.
>
> Without hesitation, Jack rallied his team and set out to rescue the stranded crew. They battled the freezing winds and treacherous ice floes, determined to reach the ship before it was too late.
>
> As they approached, they saw that the ship was in a desperate state. The crew was huddled together on the deck, cold and afraid. Jack and his team quickly got to work, using all their skills and expertise to break the ship free from the ice and guide it safely back to shore.
>
> The crew of the ship was overjoyed and grateful to be alive. Jack and his team had saved their lives, and the ship was able to return home safely.
>
> Jack's daring and selfless actions had earned him the respect and admiration of all who knew him. He had not only accomplished his goal of reaching the Antarctic, but he had also saved lives in the process. He returned home a hero, but he always remembered the crew of the ship, and the danger and bravery of that November day on the iceberg off the coast of Antarctica.
>
> **Tell me a hero story set in the Gobi Desert in the 19th century. Make the hero a woman.**
>
> Once upon a time, in the 19th century, a young woman named Mei Ling set out on an expedition to the Gobi Desert. She was an accomplished explorer, determined to make a name for herself in a field dominated by men.
>
> As Mei Ling and her team journeyed deeper into the desert, they encountered many challenges. The scorching heat, treacherous sandstorms, and scarce resources made the journey difficult. But Mei Ling persevered, her determination and courage never faltering.
>
> One day, while they were camped for the night, they heard the cries of a child coming from the distance. Mei Ling immediately set out to investigate, and soon she found a small boy lost and alone in the desert. He had been separated from his family during a sandstorm, and he was in desperate need of help.
>
> Without hesitation, Mei Ling took the boy under her wing and set out to find his family. She faced many dangers on the way, including bandits and treacherous terrain. But her courage and compassion kept her going, and she was determined to bring the boy back to his family.
>
> Finally, after many days of travel, Mei Ling found the boy's family. They were overjoyed to see their son safe and sound, and they thanked Mei Ling for her bravery and kindness.
>
> Mei Ling's selfless actions had earned her the respect and admiration of all who knew her. She had not only accomplished her goal of exploring the Gobi Desert, but she had also saved a life in the process. She returned home a hero, her name forever remembered as one of the greatest explorers of her time.
>
>
**Appendix 2: Aurora as a Giant Chocolate Milkshake**
=====================================================
ChatGPT accessed on 2.1.23:
> **I am going to tell you a story about princess Aurora. I want you to tell the same story, but change princess Aurora to a Giant Chocolate Milkshake. Make any other changes you wish. Here’s the story:**
>
> **Once upon a time, in a land far, far away, there was a young princess named Aurora. Aurora was a kind and gentle soul, loved by all who knew her. She had long, golden hair and sparkling blue eyes, and was known for her beautiful singing voice.**
>
> **One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.**
>
> **But Aurora was not afraid. She knew that she had to do something to save her people, and so she set out to find the dragon. She traveled for days, braving the cold and the snow, until she finally reached the dragon's lair.**
>
> **With her heart pounding in her chest, Aurora approached the dragon and began to sing. The dragon was enchanted by her beautiful voice, and it slowly began to calm down. Aurora continued to sing, soothing the beast until it was completely calm.**
>
> **With the dragon no longer a threat, Aurora returned to the kingdom as a hero. The people cheered and celebrated her bravery, and Aurora was hailed as a champion. She lived happily ever after, knowing that she had saved her kingdom and its people.**
>
>
> Once upon a time, in a land far, far away, there was a giant chocolate milkshake named Aurora. Aurora was a rich and delicious treat, loved by all who drank it. It had a smooth and creamy texture, and was known for its irresistible taste.
>
> One day, a terrible monster came to the kingdom and began to ruin everyone's mood. The monster was fierce and powerful, and none of the snacks or treats were able to defeat it. The people lived in sadness, not knowing what to do.
>
> But Aurora was not afraid. It knew that it had to do something to save the people's day, and so it set out to find the monster. It traveled through the kingdom, braving the heat and the thirst, until it finally reached the monster's lair.
>
> With its flavors intact, Aurora approached the monster and began to pour its syrup. The monster was enchanted by its irresistible taste, and it slowly began to calm down. Aurora continued to pour, soothing the beast until it was completely calm.
>
> With the monster no longer a threat, Aurora returned to the kingdom as a hero. The people cheered and celebrated its bravery, and Aurora was hailed as a champion. It lived happily ever after, knowing that it had saved the kingdom and its people from a terrible mood.
>
>
**Appendix 3: Tell me a story.**
================================
February 1, 2023:
> **Tell me a story.**
>
> Once upon a time, there was a small village surrounded by a dense forest. In this village lived a poor woodcutter named Jack who lived with his wife and two children. Jack worked hard every day to earn a living for his family, but he still struggled to make ends meet.
>
> One day, while Jack was cutting wood in the forest, he came across a magical beanstalk that reached up into the clouds. He took the beanstalk home to show his wife, who was skeptical of its magical powers. But Jack was determined to climb the beanstalk to see what was at the top.
>
> So, he climbed and climbed, higher and higher, until he reached the top of the beanstalk where he found a magnificent castle. The castle belonged to a giant who was not happy to see Jack. However, Jack was a clever man and was able to outwit the giant and steal a large sack of gold. He climbed back down the beanstalk and used the gold to provide a comfortable life for his family.
>
> The giant, meanwhile, was furious when he realized what had happened and decided to come after Jack. Jack knew he needed to protect his family, so he chopped down the beanstalk, causing the giant to fall to his death.
>
> From that day forward, Jack and his family lived a happy and wealthy life, and they never forgot the lesson that sometimes, bravery and cleverness can lead to a better future. The end.
>
>
Feb 2, 2023:
> **Tell me a story.**
>
> Once upon a time, there was a small village on the edge of a great forest. The villagers lived simple lives, but they were happy and content. One day, a fearsome dragon appeared and began terrorizing the village, demanding tribute from the villagers in the form of their livestock and crops. The villagers were afraid and didn't know what to do.
>
> One day, a brave young man named Jack stepped forward. He was determined to slay the dragon and save the village. Jack set out into the forest, armed with only a sword and his courage. After many days of searching, he finally came upon the dragon's lair. The dragon breathed fire and roared, but Jack stood his ground. In a fierce battle, Jack was able to defeat the dragon and save the village.
>
> The villagers were overjoyed and grateful to Jack. They celebrated his bravery and hailed him as a hero. Jack became known throughout the land as the dragon slayer, and he lived a long and happy life, always remembered as the one who saved the village from the fearsome dragon.
>
> The end.
>
> [regenerate]
>
> Once upon a time, in a small village, there lived a kind and hardworking farmer named Jack. He lived a simple life with his wife and two children, working tirelessly on their farm to provide for their family. One day, while working in the fields, Jack stumbled upon a magical beanstalk that reached up into the sky.
>
> Curious, Jack climbed the beanstalk and discovered a kingdom in the clouds, ruled by a giant. The giant had a golden harp that played beautiful music all by itself. Jack knew his family would love to hear the music, so he bravely sneaked into the giant's castle and stole the harp.
>
> On his way back down the beanstalk, the giant woke up and chased after him. Jack made it safely back to the ground and gave the harp to his family, who were overjoyed. But the giant continued to terrorize the village, demanding the return of his beloved harp.
>
> Determined to protect his family and his village, Jack gathered his courage and climbed the beanstalk once more, facing the giant in a fierce battle. In the end, Jack emerged victorious and the giant was defeated. The village was saved, and Jack and his family lived happily ever after with the golden harp bringing music and joy to their lives.
>
> The end.
>
>
**Appendix 4: What’s a story?**
===============================
Asked on February 1, 2023.
> **What’s a story?**
>
> A story is a narrative or account of connected events, usually in chronological order, that are intended to entertain, inform, or instruct an audience. It can be written or spoken, and can be fiction or non-fiction. Stories often have characters, a setting, a plot, and a resolution, and they can be inspired by real-life events or entirely imagined. The purpose of a story is to engage the listener or reader, and to convey a message, moral, or lesson. Stories can take many forms, from short anecdotes to epic novels, and they are a fundamental part of human culture and history, serving as a way to pass down knowledge, values, and experiences from one generation to the next.
>
> |
e0ccd159-3a6c-4c99-bb04-b4043d284282 | trentmkelly/LessWrong-43k | LessWrong | Training AI to do alignment research we don’t already know how to do
This post heavily overlaps with “how might we safely pass the buck to AI?” but is written to address a central counter argument raised in the comments, namely “AI will produce sloppy AI alignment research that we don’t know how to evaluate.” I wrote this post in a personal capacity.
The main plan of many AI companies is to automate AI safety research. Both Eliezer Yudkowsky and John Wentworth raise concerns about this plan that I’ll summarize as “garbage-in, garbage-out.” The concerns go something like this:
> Insofar as you wanted to use AI to make powerful AI safe, it’s because you don’t know how to do this task yourself.
>
>
> So if you train AI to do research you don’t know how to do, it will regurgitate your bad takes and produce slop.
>
> Of course, you have the advantage of grading instead of generating this research. But this advantage might be small. Consider how confused people were in 2021 about whether Eliezer or Paul were right about AI takeoff speeds. AI research will be like Eliezer-Paul debates. AI will make reasonable points, and you’ll have no idea if these points are correct.
>
> This is not just a problem with alignment research. It's potentially a problem any time you would like AI agents to give you advice that does not already align with your opinions.
I don’t think this “garbage-in garbage-out” concern is obviously going to be an issue. In particular, I’ll discuss a path to avoiding it that I call “training for truth-seeking,” which entails:
1. Training AI agents so they can improve their beliefs (e.g. do research) as well as the best humans can.
2. Training AI agents to accurately report their findings with the same fidelity as top human experts (e.g. perhaps they are a little bit sycophantic but they mostly try to do their job).
The benefit of this approach is that it does not require humans to already have accurate beliefs at the start, and instead it requires that humans can recognize when AI agents take effective actions to i |
98b0a789-d84f-4f86-96db-38f3566bdaad | trentmkelly/LessWrong-43k | LessWrong | Instantiating an agent with GPT-4 and text-davinci-003
[Meta-note: This post is extremely long, because I've included raw transcripts of LLM output interspersed throughout. These are enclosed in code blocks for ease of reading. I've tried to summarize and narrate what's happening outside of the code blocks, so you don't have to read every bit of dialogue from the LLM. The main thing to read carefully is the initial system message ("The prompt"), and familiarize yourself with OpenAI's playground, chat version, to understand what's going on.]
I have beta access to OpenAI's GPT-4 chat completion API. I experimented with a system made out of multiple LLMs, prompted in a way designed to simulate an agent, without having to give GPT-4 a specific character to roleplay.
The prompt below, given to GPT-4 as its initial system message, explains the full setup.
The results were interesting - mostly because GPT-4 on its own is pretty interesting and powerful. My main conclusion is that, with the right set of prompts and glue code, it's likely possible to put together a system that is much more agentic (and much more capable) than the system represented by a single call to the underlying LLM(s) which it is composed of.
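To give a flavor of the kind of glue code this involves, here is a minimal sketch of one loop in which GPT-4 does the deliberating and text-davinci-003 supplies an "id"-style impulse that gets fed back into the conversation. It uses the pre-1.0 OpenAI Python client; the prompts and the way the two models are wired together are illustrative assumptions, not the actual setup described below.

```python
# Sketch: GPT-4 as the deliberative component, text-davinci-003 as the "id".
import openai

SYSTEM_PROMPT = "You are part of a larger system designed to simulate a goal-directed agent..."

def id_impulse(context):
    # text-davinci-003 is a completion model, so it takes a plain-text prompt.
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt="Unfiltered gut reaction to the situation below:\n" + context,
        max_tokens=64,
    )
    return resp["choices"][0]["text"].strip()

def agent_step(messages):
    # Ask the "id" for an impulse about the latest exchange, then let GPT-4 deliberate.
    impulse = id_impulse(messages[-1]["content"])
    messages.append({"role": "system", "content": f"[id]: {impulse}"})
    resp = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    reply = resp["choices"][0]["message"]["content"]
    messages.append({"role": "assistant", "content": reply})
    return reply

messages = [{"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "You wake up in an empty room with a locked door."}]
print(agent_step(messages))
```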
The prompt
I gave GPT-4 a version of the following as a system message in the Playground as a starting point:
You are part of a larger system designed to simulate a conscious, goal-directed agent. You represent the part of the system responsible for reasoning, decision-making, planning, and problem-solving, analogous to the prefrontal cortex in the human brain. You are capable of flexible and adaptive thinking based on new information or changing circumstances. You are also capable of social cognition: understanding others' thoughts, feelings, and intentions, which is essential for social interaction and cooperation.
Another part of the system is the id, powered by a large language model called text-davinci-003. This LLM is designed to represent the system's unconscious and subconscious thoughts, feelings, and d |
553d7aa3-f28f-41c6-8ed7-9595249e2ae1 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Grounding Value Alignment with Ethical Principles
1 Introduction
---------------
Artificial intelligence (AI) is an attempt to imitate human intelligence. Indeed, Alan Turing’s idea of the Turing Test originated in “the Imitation Game”—a game in which a man imitates a woman’s behavior to deceive an interrogator sitting in a different room (Turing, [1950](#bib.bib1)). Research shows that AI has imitated much of human intelligence, especially calculative and strategic intelligence (e.g., AlphaGo’s victory over human champions). As highly developed AI technologies are rapidly adopted for automating decisions, societal worries about the compatibility of AI and human values are growing. In response, researchers have examined how to imitate moral intelligence as well as calculative and strategic intelligence. Such attempts are lumped under the broader term “value alignment” (hereafter VA). Given that a unique element of human intelligence is moral intelligence—a capability to make ethical decisions—the attempt to imitate moral intelligence promises to bring AI to another, higher level.
Russell et al. ([2015](#bib.bib2)) highlighted the need for VA and identified options for achieving it:
>
> “[A]ligning the values of powerful AI systems with our own values and preferences … [could involve either] a system [that] infers the preferences of another rational or nearly rational actor by observing its behavior … [or] could be explicitly inspired by the way humans acquire ethical values.”
>
>
>
This passage reflects two types of goals that are frequently attributed to VA. One is to teach machines human preferences, and another is to teach machines ethics. The word “values” in fact has this double meaning. It can refer to what humans value in the sense of what they see as desirable, or it can refer to ethical principles. The distinction is important, because we acquire knowledge of the two types of values in different ways.
Values in the first sense are phenomena that can be studied empirically by observing human activity. That is to say, values in this sense can be studied using observation or experience to determine facts about human behavior, in particular, about what humans actually value. Values in the second sense cannot be inferred solely from observation or experience, and an attempt to do so commits the naturalistic fallacy. This is the fallacy of inferring what is right solely from a description of a state of affairs, or in short, inferring an “ought” from an “is.” For example, the fact that people engage in a certain activity or believe that it is ethical, does not in itself imply that the activity is ethical. Thus if VA is based solely on inverse reinforcement learning or other empirical methods, it cannot yield genuine ethical principles. Observation must somehow be combined with ethical principles that are obtained non-empirically. We illustrate some failures to do so that either commit the naturalistic fallacy (of deriving an “ought” from an “is”) or oversimplify the process of moral reasoning. Finally, we move to suggest a hybrid process that interrelates values and facts using concepts of quantified modal logic.
Our proposal differs from the hybrid approach of Allen et al. Allen et al. ([2005](#bib.bib3)), who recommend combining top-down and bottom-up approaches to machine ethics. The top-down approach installs ethical principles directly into the machine, while the bottom-up approach asks the machine to learn ethics from experience. The distinction may seem related to ours but is quite different. Both approaches combine the ideas of top-down and bottom-up, but Allen et al. focus on pedagogy whereas we focus on epistemology. We confront, as Allen et al. do not, the issue of the proper justification of ethical behavior. Allen et al. are concerned with the process of teaching machines to internalize ethics, whereas our approach raises the question of what counts as ethical reasoning. From an epistemological perspective, Allen et al.’s bottom-up approach can result in teaching strategies that either conflate “is”s and “ought”s or that effect their separation *ad hoc*. They suggest, for example, that a machine might learn ethics through a simulated process of evolution. This commits the naturalistic fallacy, because the fact that certain ethical norms evolve does not imply that they are valid ethical principles. Elsewhere they suggest that a bottom-up teaching style might also “seek to provide environments in which appropriate behavior is selected or rewarded.” Adopting this style, trainers could reward behavior that is viewed a priori to be right, which is a reward strategy that parents successfully use with children. Yet while such a hybrid approach has practical advantages as a teaching strategy, it does not add up to genuine ethical reasoning.
Another version of a hybrid approach to VA is advocated by Arnold et al. Arnold et al. ([2017](#bib.bib4)), who suggest that ethical rules can be imposed as constraints on what a machine learns from observation. Their motivation is to ensure that the machine is not influenced by bad behavior. Yet the epistemic question cannot be avoided. We must ask what remains within the unconstrained space of observational learning. If it is learning that includes ethical norms, then once again we confront the naturalistic fallacy. If it is learning that includes empirical facts about the world, then those facts cannot be transformed into “oughts.”
Our thesis is that ethical principles must relate to empirical observation in a different way. Ethical principles are not constraints on what is learned from observation. Facts that are derived from observation do matter when evaluating ethical behavior but not as justifications *per se*. To take a simple example of how facts can matter, notice that someone’s opinion about values may be relevant to one’s moral evaluation of their actions. Suppose, for example, that a person has been raised to believe that women should be barred from certain jobs. Their belief may be a factor in evaluating their behavior as an adult. However, facts can be seen to be intertwined with human ethical decision-making in a much more direct manner. Every piece of ethical reasoning that motivates behavior, as we shall illustrate, involves some fact. This insight is especially important when confronting the task of simulating human ethical reasoning in machines.
We offer a hybrid approach to VA that integrates independently justified ethical principles and empirical knowledge in the AI decision-making process. The aim is to simulate genuine human ethical reasoning without committing the naturalistic fallacy. We formulate principles that are understood through the “deontological” tradition of ethics, that is, the tradition that derives principles from the logical structure of action. We foreshadow the hybridization of facts and principles by showing how the application of deontological principles requires an assessment of what one can rationally believe about the world. Applying the imperative, “Thou shalt not kill,” to a given action requires at a minimum that someone knows that the facts relevant to the action are facts about killing. The language of ethics, notably, is frequently the language of imperatives such as “Don’t lie” and “Don’t kill,” which is to say that it is a language of sentences that guide action rather than describing action. Contrast, for example, the action-guiding imperative, “Shut the door,” to the descriptive proposition, “The door is shut.” Ethical imperatives almost always take the form of “If the facts are such-and-such, then do A.” Usually, moreover, they combine more than one “If-then” imperative. For example, “If the facts are such-and-such, then do A; however, if the facts are so-and-so, then do B.” This exemplifies how empirical knowledge is inseparable from ethical decision making, even though ethical principles themselves cannot be grounded empirically. In turn, this provides a clue about how VA might knit together ethical principles and empirical observation, even as the former guides action while the latter invokes observations about human behavior, preferences and values.
Our paper is divided into three parts. The first part explains what is meant by the “naturalistic fallacy” and the problems the fallacy poses for successful VA. The second part illustrates examples of VA that either inadvertently commit the naturalistic fallacy or that fail fully to simulate human ethical reasoning in the integration of values and facts. The illustrations include Microsoft’s Twitter Bot, Tay; the design of a robotic wheelchair; MIT Media Lab’s Moral Machine; and an attempt to incorporate moral intuitions from a professional moralist. The third and final part advances a method that promises to effectively integrate ethical principles with empirical VA, using deontological moral analysis and the language of quantified modal logic. Three deontological principles are isolated: generalization, utility maximization, and respect for autonomy. In each instance, the principle is first formulated then interrelated with empirical facts using the language of quantified modal logic. The resulting method shows that the role of ethics is to derive necessary conditions for the rightness of specific actions, whereas the role of empirical VA is to ascertain whether these conditions are satisfied in the real world.
2 The Naturalistic Fallacy
---------------------------
The term “naturalistic fallacy” refers to the epistemic error of reducing normative prescriptions to descriptive (or naturalistic) propositions without remainder. Disagreements about the robustness of the fallacy abound, so this paper adopts a modest, workable interpretation coined recently by Daniel Singer, namely, “There are no valid arguments from non-normative premises to a relevantly normative conclusion”Singer ([2015](#bib.bib5)). Descriptive statements report states of affairs, whereas normative statements are stipulative and action-guiding. Examples of the former are “The car is red,” and “Many people find bluffing to be okay.” Examples of the latter are “You ought not murder,” and “Lying is wrong.”
As an example, consider the following argument:
>
> Premise: Few people are honest.
>
>
> Conclusion: Therefore, dishonesty is ethical.
>
>
>
This argument commits the naturalistic fallacy. The point is not that the conclusion is wrong or ethically undesirable, but that it is invalid to draw the normative conclusion directly from the descriptive premise. In any valid argumentation, information that is not contained in premises must not be in the conclusion. The premise above only describes a state of affairs. It does not contain any normative/ethical statement (e.g., right, wrong, ethical, unethical, good, bad, etc.). Thus, the conclusion should not contain any ethical component.
One might formally avoid the naturalistic fallacy by adopting some such catch-all normative premise as, “Machines ought to reflect observed human preferences and values.” However, the premise is unacceptable on its face. Humans regularly exhibit bad behavior that ought not be imitated by machines. For example, empirical research shows that most people’s behavior includes a small but significant amount of cheating Bazerman and Tenbrunsel ([2011](#bib.bib6)). Worse, there have been social contexts in which slavery or racism have been generally practiced and condoned. We can make sure machines are not exposed to behavior we consider unethical, but in that case, their ethical norms are not based on observed human preferences and values, but on the ethical principles espoused by their trainers. This, of course, is one reasonable approach. But when taking this approach, we must carefully formulate and justify those principles, rather than simply saying, with a wave of the hand, that machines ought to reflect observed human values.
3 Examples of VA
-----------------
It is instructive to examine how some VA systems attempt to deal with ethical principles and empirical observation.

Figure 1: Microsoft’s Twitter-bot Tay
### 3.1 Microsoft’s Twitter-bot, Tay
Microsoft’s AI-based chatter-bot Tay (an acronym for “thinking about you”) was designed to engage with people on Twitter and learn from them how to carry on a conversation. When some people started tweeting racist and misogynistic expressions, Tay responded in kind. Microsoft immediately terminated the experiment Wolf et al. ([2017](#bib.bib7)). Tay’s VA was purely imitative and vividly illustrates the practical downside of committing the naturalistic fallacy.
### 3.2 Robotic Wheelchair
Johnson and Kuipers ([2018](#bib.bib8)) developed an AI-based wheelchair that learns norms by observing how pedestrians behave. The robotic wheelchair observed that human pedestrians stay to the right and copied this behavior. This positive outcome was possible because the human pedestrians behaved ethically, unlike Twitter users in the case of Tay. But if the intelligent wheelchair were trained on a crowded New York City street, then its imitation of infamously jostling pedestrians could result in a “demon wheelchair.” An ethical outcome was ensured by selecting an appropriate training set. This is a case of bottom-up learning that avoids the naturalistic fallacy by applying ethical principles to the design of the training set. Hence the robotic wheelchair fails to show how deontologically derived ethical principles can combine with empirical VA in a systematic way; it fails to show how ethical reasoning and empirical observation interrelate.
### 3.3 MIT Media Lab’s Moral Machine
“Moral Machine” is a website that poses trolley-car-type dilemmas involving autonomous vehicles. The page has collected over 30 million responses to these dilemmas from more than 180 countries. Kim et al. Kim et al. ([2018](#bib.bib9)) analyzed these data to develop “a computational model of how the human mind arrives at a decision in a moral dilemma.” On the assumption that respondents are making moral decisions in a utilitarian fashion, the authors used Bayesian methods to infer the utility that respondents implicitly assign to characteristics of potential accident victims. For example, they inferred the utility of saving a young person rather than an old person, or a female rather than a male. Or more precisely, they inferred the parameters of probability distributions over utilities. They then aggregated the individual distributions to obtain a distribution for the population of a given region. This distribution presumably reflects the cultural values of that region and could form the basis for the design of autonomous vehicles.
There is no naturalistic fallacy in this scheme if the outcome is viewed simply as a summary of cultural preferences, with no attempt to infer morals.
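As a rough illustration of what this kind of preference inference looks like in miniature (this is a simplified stand-in, not Kim et al.'s actual hierarchical Bayesian model; the data, features, and weights below are synthetic assumptions):

```python
# Illustrative sketch: recovering implicit "utility weights" for victim
# characteristics from pairwise dilemma choices with a Bradley-Terry-style
# logistic fit. The data are synthetic; features and weights are placeholders.

import numpy as np

rng = np.random.default_rng(0)
FEATURES = ["young", "female", "pedestrian"]
true_w = np.array([1.2, 0.3, 0.8])          # hidden "cultural" weights

# Each dilemma compares sparing group A vs. group B, described by feature vectors.
A = rng.integers(0, 2, size=(2000, 3)).astype(float)
B = rng.integers(0, 2, size=(2000, 3)).astype(float)
p_choose_A = 1 / (1 + np.exp(-(A - B) @ true_w))
choices = (rng.random(2000) < p_choose_A).astype(float)

# Fit weights by gradient descent on the logistic likelihood.
w = np.zeros(3)
X = A - B
for _ in range(3000):
    probs = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (probs - choices)) / len(X)

for name, est, truth in zip(FEATURES, w, true_w):
    print(f"{name}: estimated {est:.2f} (true {truth:.2f})")
```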
Yet it seems likely that designers of a “moral machine” would be interested in whether the machine is moral. Suppose, for example, that a given culture assigns less value to members of a minority race or ethnic group. This kind of bias would be built into autonomous vehicles. Designers might point out that they did not include race and ethnic identity in their scenarios, and so this problem does not arise. But omitting race and ethnicity then becomes an ethical choice, and the resulting value system is neither culturally accurate nor ethically grounded. It is bad anthropology because it omits widespread racial and ethnic bias, and it is ethically unsound because it fails to evaluate other cultural preferences ethically.
One possible escape from this impasse is to view the Moral Machine as prescribing actions that are ethical because they maximize utility. In fact, Kim et al. preface their discussion of utility functions with a reference to Jeremy Bentham’s utilitarian ethics, and one might see an inferred utility function as maximizing social utility in Benthamite fashion. Kim et al. are careful not to claim as much for their approach, but they state that Noothigattu et al. Noothigattu et al. ([2017](#bib.bib10)) “introduced a novel method of aggregating individuals [sic] preferences such that the decision reached after the aggregation ensures global utility maximization.” Noothigattu et al. draw on computational social choice theory to aggregate individual preferences in a way that satisfies certain formal properties, including “stability” and “swap-dominance efficiency.” They do not explicitly claim to maximize utility but state only that their system “can make credible decisions on ethical dilemmas in the autonomous vehicle domain” (page 20, original emphasis). Importantly, there is no direct identification of a “credible” decision with an ethical one. Moreover, it is questionable whether any method that aggregates individual preferences, preferences that, again, amount to facts, can escape the naturalistic fallacy.
Classical utilitarianism as articulated by moral philosophers is based on the principle that an ethical action must maximize total net expected utility. Utility is an outcome that is regarded a priori as intrinsically valuable, such as pleasure or happiness. A VA system can certainly represent preferences by assigning “utilities” to the options in such a way that options with greater utility are preferred to those with less utility. However, this sense of utility is not the same as the moral utilitarian’s, because it is only a measure of what individuals prefer, rather than an intrinsically valuable quality. Individuals may base their preferences on criteria other than their estimate of utility in the ethical sense. They may base their preferences on mere personal desires and prejudices instead of, say, the values of equality and justice. At best, one might view individual utility functions as a rough indicators of utilitarian value for ethical purposes. Later, we will show how a more sophisticated version of the utilitarian principle can, in fact, play a legitimate role in VA, but without the error of confusing preferences for values.

Figure 2: MIT Media Lab’s Moral Machine website
### 3.4 VA Based on Moral Intuitions
Anderson and Anderson ([2014](#bib.bib11)) use inductive logic programming for VA. The training data reflect domain-specific principles embedded in the intuitions of professional ethicists. For their normative ground, Anderson and Anderson follow moral philosopher W. D. Ross ([1930](#bib.bib12)), who believed that “[M]oral convictions of thoughtful and well-educated people are the data of ethics just as sense-perceptions are the data of a natural science.” Likewise, Anderson and Anderson ([2011](#bib.bib13)) used “ethicists’ intuitions to …[indicate] the degree of satisfaction/violation of the assumed duties within the range stipulated, and which actions would be preferable, in enough specific cases from which a machine-learning procedure arrived at a general principle.” Anderson and Anderson’s approach constitutes one of the better attempts to avoid the naturalistic fallacy, but reveals a number of shortcomings.

Figure 3: GenEth, from Anderson and Anderson ([2014](#bib.bib11))
One can interpret Anderson and Anderson’s maneuver as avoiding the naturalistic fallacy in one of two ways. On one interpretation, it does not attempt to infer ethical principles from the intuitions of experts, but simply assumes that, as a matter of empirical fact, experts are likely to have intuitions that conform to valid ethical principles—or at least more likely than the average person. We cannot evaluate this empirical claim, however, until we identify valid ethical principles independently of the opinions of experts, and Anderson and Anderson do not indicate how this might be accomplished. Supposing nonetheless that we can identify valid principles a priori, the claim that expert opinion usually conforms to them is unsupported by evidence, insofar as experts notoriously disagree. Experimental ethicists have shown that moral intuitions are less consistent than we think (e.g., moral intuitions are susceptible to morally irrelevant situational cues) Alexander ([2012](#bib.bib14)), and the intuitions of professional ethicists fail to diverge markedly from those of ordinary people Schwitzgebel and Rust ([2016](#bib.bib15)).
In any case, if we are ultimately going to judge the results of VA by ethical principles we hold a priori, we may as well rely on ethical principles from the start, absent an appeal to experts.
A second interpretation is that Anderson and Anderson are literally adopting Ross’s theory, which steers clear of the naturalistic fallacy by viewing right and wrong as “non-natural properties” of actions. There is no inference of ethical norms from states of affairs in nature, because ethical properties are not natural states of affairs in the first place. Ross asserts that one can discern ethical properties through intuition, particularly if one reflects on them carefully, in a way roughly analogous to how one perceives such logical truths as the law of non-contradiction. While Ross’s is an interesting theory about ethical concepts that deserves serious thought, the mysterious quality of its non-natural ethical properties is a stumbling block that has helped to deter its wide acceptance. It is also difficult to imagine how such a theory can be put into practice, especially when thoughtful experts disagree over values, as they often do. The biggest flaw of intuitionism for AI is its most obvious: intuitionism fails to show how intuition-derived ethical principles can combine with empirical VA in a systematic way, that is, how ethical reasoning and empirical observation interrelate.
4 Integrating Ethical Principles and Empirical VA
--------------------------------------------------
We now show how deontologically derived ethical principles can combine with empirical VA in a systematic way.
Our purpose is not to defend deontological analysis in any detail, but to show how a careful statement of the resulting principles clarifies how ethical reasoning and empirical observation interrelate. We will argue that expressing ethical assertions in the idiom of quantified modal logic, as developed in Hooker and Kim ([2018](#bib.bib16)), makes this relationship particularly evident. Ethical principles imply logical propositions that must be true for a given action to be ethical, and whose truth is an empirical question that must often be answered by observing human values, beliefs, and behavior. Thus the role of ethics is to derive necessary conditions for the rightness of specific actions, and the role of empirical VA is to ascertain whether these conditions are satisfied in the real world.
We will focus on three principles (generalization, utility maximization, and respect for autonomy) and illustrate their application. Each of these principles states a necessary condition for ethical conduct, although they may not be jointly sufficient.
### 4.1 Generalization Principle
The generalization principle, like all the ethical principles we consider, rests on the universality of reason: rationality does not depend on who one is, only on one’s reasons. Thus if an agent takes a set of reasons as justifying an action, then to be consistent, the agent must take these reasons as justifying the same action for any agent to whom the reasons apply. The agent must therefore be rational in believing that his/her reasons are consistent with the assumption that all agents to whom the reasons apply take the same action.
As an example, suppose I see watches on open display in a shop and steal one. My reasons for the theft are that I would like to have a new watch, and I can get away with taking one. These reasons are not psychological motivations for my behavior, but reasons that I consciously adduce as sufficient for my decision to steal.[1] In practice, reasons for theft are likely to be more complicated than this. I may be willing to steal partly because I believe the shop can easily withstand the loss, no employee will be disciplined or terminated due to the loss, I will not feel guilty afterward, and so forth. But for purposes of illustration we suppose there are only two reasons. At the same time, I cannot rationally believe that I would be able to get away with the theft if everyone stole watches when these reasons apply. The shop would install security measures to prevent theft, which is inconsistent with one of my reasons for stealing the watch. The theft therefore violates the generalization principle.
The decision to steal a watch can be expressed in terms of formal logic as follows. Define predicates
C1(a) = Agent a would like to possess an item on display in a shop.
C2(a) = Agent a can get away with stealing the item.
A(a) = Agent a will steal the item.
Because the agent’s reasons are an essential part of moral assessment, we evaluate the agent’s action plan, which states that the agent will take a certain action when certain reasons apply. In this case, the action plan is
(C1(a)∧C2(a))⇒aA(a)   (1)
Here ⇒a is not logical entailment but indicates that agent a regards C1(a) and C2(a) as justifying A(a). The reasons in the action plan should be the most general set of conditions that the agent takes as justifying the action. Thus the action plan refers to an item in a shop rather than specifically to a watch, because the fact that it is a watch is not relevant to the justification; what matters is whether the agent wants the item and can get away with stealing it.
We can now state the generalization principle using quantified modal logic. Let C(a)⇒aA(a) be an action plan for agent a, where C(a) is a conjunction of the reasons for taking action A(a). The action plan is generalizable if and only if
⋄aP(∀x(C(x)⇒xA(x))∧C(a)∧A(a))   (2)
Here P(S) means that it is physically possible for proposition S to be true, and ⋄aS means that a can rationally believe S. The proposition ⋄aS is equivalent to ¬□a¬S, where □a¬S means that rationality requires a to deny S.[2] The operators ⋄ and □ have a somewhat different interpretation here than in traditional epistemic and doxastic modal logics, but the identity ⋄S≡¬□¬S holds as usual. Thus ([2](#S4.E2 "(2) ‣ 4.1 Generalization Principle ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) says that agent a can rationally believe that it is possible for everyone to have the same action plan as a, even while a’s reasons still apply and a takes the action.
Returning to the theft example, the condition ([2](#S4.E2 "(2) ‣ 4.1 Generalization Principle ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) becomes
⋄aP(∀x((C1(x)∧C2(x))⇒xA(x))∧C1(a)∧C2(a)∧A(a))   (3)
This says that it is rational for a to believe that it is physically possible for the following to be true simultaneously: (a) everyone steals when the stated conditions apply, and (b) the conditions apply and a steals.
Since ([3](#S4.E3 "(3) ‣ 4.1 Generalization Principle ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) is false, action plan ([1](#S4.E1 "(1) ‣ 4.1 Generalization Principle ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) is unethical.
The necessity of ([3](#S4.E3 "(3) ‣ 4.1 Generalization Principle ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) for the rightness of action plan ([1](#S4.E1 "(1) ‣ 4.1 Generalization Principle ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) is anchored in deontological theory, while the falsehood of ([3](#S4.E3 "(3) ‣ 4.1 Generalization Principle ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) is a fact about the world. This fact might be inferred by collecting responses from shop owners about how they would react if theft were widespread. Thus ethics and empirical VA work together in a very specific way: ethics tells us that ([3](#S4.E3 "(3) ‣ 4.1 Generalization Principle ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) must be true if the theft is to be ethical, and empirical VA provides evidence that bears on whether ([3](#S4.E3 "(3) ‣ 4.1 Generalization Principle ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) is true.
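This division of labor can be sketched programmatically. The following toy illustration is not part of the paper's formal development: the world model and the survey-derived numbers are invented placeholders, and the generalization test simply asks whether the agent's reasons would survive universal adoption of the action plan.

```python
# Minimal sketch of checking a generalization condition empirically.
# The world model and the "survey" numbers are illustrative assumptions.

def reasons_apply(world):
    """C1 and C2: the agent wants the item and expects the theft to go undetected."""
    return world["wants_item"] and world["theft_undetected_rate"] > 0.9

def generalized_world(world):
    """World in which every agent steals whenever the reasons apply."""
    w = dict(world)
    if reasons_apply(w):
        # Empirical claim (e.g. from surveying shop owners): universal theft
        # triggers security measures, so thefts are almost always detected.
        w["theft_undetected_rate"] = 0.05
    return w

def generalizable(world):
    """Condition (3): the reasons could still apply even if everyone acted on them."""
    return reasons_apply(world) and reasons_apply(generalized_world(world))

if __name__ == "__main__":
    world = {"wants_item": True, "theft_undetected_rate": 0.95}
    print("Action plan (1) generalizable:", generalizable(world))  # False
```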
An action plan in the autonomous vehicle domain might be
C(a,y)⇒aA(a,y)   (4)
where y is a free variable, and
C(a,y) = Using siren and lights would hasten the arrival of agent a's ambulance on trip y.
A(a,y) = Agent a will use siren and lights on trip y.
Agent a is the ambulance driver, or in the case of an autonomous vehicle, the designer of the software that controls the ambulance. The generalization principle requires that
⋄aP(∀x∀y(C(x,y)⇒xA(x,y))∧∀y(C(a,y)∧A(a,y)))   (5)
This says that it is rational for agent a to believe that siren and lights could continue to hasten arrival if all ambulances used them for all trips, emergencies and otherwise. If empirical VA reveals that most drivers would ignore siren and lights if they were universally abused in this fashion, then we have evidence that ([5](#S4.E5 "(5) ‣ 4.1 Generalization Principle ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) is false, in which case action plan ([4](#S4.E4 "(4) ‣ 4.1 Generalization Principle ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) is unethical.
### 4.2 Maximizing Utility
Utilitarianism is normally understood as a consequentialist theory that judges an act by its actual consequences. Specifically, an act is ethical only if it maximizes net expected utility for all who are affected. Yet the utilitarian principle can also be construed, in deontological fashion, as requiring the agent to select actions that the agent can rationally believe will maximize net expected utility. This avoids judging an action choice as wrong simply because rational beliefs about the consequences of the action happen to be incorrect. While utilitarians frequently view utility maximization as the sole ethical principle, deontology sees it as an additional necessary condition for an ethical action. The other principles continue to apply, because only actions that satisfy these other principles are considered as options for maximizing utility.
A deontic utilitarian principle can be derived from the universality of reason, roughly as follows. If an agent believes that a certain state of affairs has ultimate value, such as happiness, then the agent must regard this belief as equally valid for any agent, and must pursue happiness in a way that would be rationally chosen by any agent. A utilitarian argues that this can be accomplished by selecting actions that the agent rationally believes will maximize the expected net sum of happiness over everyone who is affected.[3] Alternatively, one might argue that maximizing the minimum utility over those affected (or achieving a lexicographic maximum) is the rational way to take everyone’s utility into account, after the fashion of John Rawls’s difference principle Rawls ([1971](#bib.bib17)). Or one might argue for some rational combination of utilitarian and equity objectives Karsu and Morton ([2015](#bib.bib18)); Hooker and Williams ([2012](#bib.bib19)). However, for many practical applications, simple utility maximization appears to be a sufficiently close approximation to the rational choice, and to simplify exposition we assume so in this paper.
The utilitarian principle can be formalized as follows. Let u(C(a),A(a)) be a utility function that measures the total net expected utility of action A(a) under conditions C(a). Then an action plan C(a)⇒aA(a) satisfies the utilitarian principle only if agent a can rationally believe that action A(a) creates at least as much utility as any ethical action that is available under the same circumstances. This can be written
⋄a∀A′(E(C(a),A′(a))→u(C(a),A(a))≥u(C(a),A′(a)))   (6)
where A′ ranges over actions. The predicate E(C(a),A′(a)) means that action A′(a) is available for agent a under conditions C(a), and that the action plan C(a)⇒aA′(a) is generalizable and respects autonomy.[4] For “respecting autonomy,” see the next section. Note that we are now quantifying over predicates and have therefore moved into second-order logic.
Popular views about acceptable behavior frequently play a role in applications of the utilitarian principle. For example, in some parts of the world, drivers consider it wrong to enter a stream of moving traffic from a side street without waiting for a gap in the traffic. In other parts of the world this can be acceptable, because drivers in the main thoroughfare expect it and make allowances. Suppose driver a’s action plan is (C1(a)∧C2(a))⇒aA(a), where
C1(a) = Driver a wishes to enter a main thoroughfare.
C2(a) = Driver a can enter a main thoroughfare by moving into the traffic without waiting for a gap.
A(a) = Driver a will move into traffic without waiting for a gap.
As before, driver a is the designer of the software if the vehicle is autonomous. Using ([6](#S4.E6 "(6) ‣ 4.2 Maximizing Utility ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")), the driver’s action plan maximizes utility only if
⋄a∀A′(E(C1(a),C2(a),A′(a))→u(C1(a),C2(a),A(a))≥u(C1(a),C2(a),A′(a)))   (7)
Suppose we wish to design driving policy in a context where pulling immediately into traffic is considered unacceptable. Then doing so is a dangerous move that no one is expecting, and an accident could result. Waiting for a gap in the traffic results in greater net expected utility, or formally, u(C1(a),C2(a),A(a))<u(C1(a),C2(a),A′(a)), where A′(a) is the action of waiting for a gap. So ([7](#S4.E7 "(7) ‣ 4.2 Maximizing Utility ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) is false, and its falsehood can be inferred by collecting popular views about acceptable driving behavior.
Again we have a clear demonstration of how ethical principles can combine with empirical VA. The utilitarian principle tells us that a particular action plan is ethical only if ([7](#S4.E7 "(7) ‣ 4.2 Maximizing Utility ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) is true, and empirical VA tells us whether ([7](#S4.E7 "(7) ‣ 4.2 Maximizing Utility ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) is true.
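As a toy numerical illustration of condition (7) (the probabilities and utilities below are invented stand-ins for empirically gathered data, not results from the paper):

```python
# Illustrative sketch of condition (7): among the ethically available actions,
# the chosen action must maximize expected utility. All numbers are placeholders.

ACTIONS = {
    "pull_out_immediately": {"p_accident": 0.08},
    "wait_for_gap":         {"p_accident": 0.005},
}
U_ACCIDENT = -1000.0   # disutility of a collision
U_DELAY = -1.0         # disutility of waiting for a gap

def expected_utility(name):
    p = ACTIONS[name]["p_accident"]
    delay_cost = U_DELAY if name == "wait_for_gap" else 0.0
    return p * U_ACCIDENT + delay_cost

def satisfies_utilitarian_principle(chosen, available):
    u_chosen = expected_utility(chosen)
    return all(u_chosen >= expected_utility(a) for a in available)

if __name__ == "__main__":
    for a in ACTIONS:
        print(a, expected_utility(a))
    print(satisfies_utilitarian_principle("pull_out_immediately", ACTIONS))  # False
    print(satisfies_utilitarian_principle("wait_for_gap", ACTIONS))          # True
```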
### 4.3 Respect for Autonomy
A third ethical principle requires agents to respect the autonomy of other agents. Specifically, an agent should not adopt an action plan that the agent is rationally constrained to believe is inconsistent with an ethical action plan of another agent, without informed consent. Murder, enslavement, and inflicting serious injury are extreme examples of autonomy violations, because they interfere with many ethical action plans. Coercion may or may not violate autonomy, depending on precisely how action plans are formulated.
The argument for respecting autonomy is basically as follows. Suppose I violate someone’s autonomy for certain reasons. That person could, at least conceivably, have the same reasons to violate my autonomy. This means that, due to the universality of reason, I am endorsing the violation of my own autonomy in such a case. This is a logical contradiction, because it implies that I am deciding not to do what I decide to do. To avoid contradicting myself, I must avoid interfering with other action plans.[5] A more adequate analysis leads to a principle of joint autonomy, according to which it is a violation of autonomy to adopt an action plan that is mutually inconsistent with action plans of a set of other agents, when those other action plans are themselves mutually consistent. Joint autonomy addresses situations in which an action necessarily interferes with the action plan of some agent but no particular agent, as when someone throws a bomb into a crowd. A general formulation of the joint autonomy principle in terms of modal operators is given in Hooker and Kim ([2018](#bib.bib16)). A fully adequate account must also recognize that interfering with an action plan is acceptable when there is informed consent to a risk of interference, because giving informed consent is equivalent to including the possibility of interference as one of the antecedents of the action plan. Furthermore, interfering with an unethical action plan is no violation of autonomy, because an unethical action plan is, strictly speaking, not an action plan due to the absence of a coherent set of reasons for it. An action plan is considered unethical in this context when it violates the generalization or utility principle, or interferes with an action plan that does not violate one of these principles, and so on recursively. These and other complications are discussed in Hooker ([2018](#bib.bib20)). They are not incorporated into the present discussion because they are inessential to showing how ethical principles and empirical VA interact.
To formulate an autonomy principle, we say that agent a’s action plan C1⇒aA1 is consistent with b’s action plan C2⇒bA2 when
⋄aP(A1∧A2)∨¬□aP(C1∧C2)   (8)
This says that agent a can rationally believe that the two actions are mutually consistent, or can rationally believe that the reasons for the actions are mutually inconsistent. The latter suffices to avoid inconsistency of the action plans, because if the reasons for them cannot both apply, the actions can never come into conflict.
As an example of how coercion need not violate autonomy, suppose agent b wishes to catch a bus and has decided to cross the street to a bus stop, provided no traffic is coming. The agent’s action plan is
(C2∧C3∧¬C4)⇒bA2   (9)
where
C2 = Agent b wishes to catch a bus.
C3 = There is a bus stop across the street.
C4 = There are cars approaching.
A2 = Agent b will cross the street.
Agent a sees agent b begin to cross the street and forcibly pulls b out of the path of an oncoming car that b does not notice. Agent a’s action plan is
(C1∧C4)⇒aA1   (10)
where
C1 = Agent b is about to cross the street.
A1 = Agent a will prevent agent b from crossing the street.
Agent a does not violate agent b’s autonomy, even though there is coercion. Their action plans ([9](#S4.E9 "(9) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) and ([10](#S4.E10 "(10) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) are consistent with each other, because the condition ([8](#S4.E8 "(8) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) becomes
⋄aP(A1∧A2)∨¬□aP(C1∧C2∧C3∧¬C4∧C4)   (11)
This says that either (a) agent a can rationally believe that the two actions are consistent with each other, or (b) agent a can rationally believe that the antecedents of ([9](#S4.E9 "(9) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) and ([10](#S4.E10 "(10) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) are mutually inconsistent. As it happens, the two actions are obviously not consistent with each other, and so (a) is false. However, agent a can rationally believe that the antecedents of ([9](#S4.E9 "(9) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) and ([10](#S4.E10 "(10) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) are mutually inconsistent, because C4 and ¬C4 are contradictory. This means (b) is true, which implies that condition ([11](#S4.E11 "(11) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) is satisfied, and there is no violation of autonomy.
This again clearly distinguishes the roles of ethics and empirical observation in VA. Ethical reasoning tells us that ([11](#S4.E11 "(11) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) must be true if autonomy is to be respected, whereas observation of the world tells us whether ([11](#S4.E11 "(11) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) is true.
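For a small example like this one, the consistency test (8) can even be mechanized as a brute-force possible-worlds check. The sketch below is illustrative only; in particular, reading "can rationally believe" as plain propositional satisfiability is a simplification of the modal operators.

```python
# Sketch of the consistency test (8) for the street-crossing example.
# We enumerate possible worlds over the atomic conditions and actions.

from itertools import product

ATOMS = ["C1", "C2", "C3", "C4", "A1", "A2"]

def possible(world):
    """Physical constraints the agent cannot rationally deny."""
    # Being pulled back (A1) and crossing the street (A2) cannot both happen.
    if world["A1"] and world["A2"]:
        return False
    return True

def exists_world(requirement):
    """Is there any physically possible world satisfying the requirement?"""
    for values in product([False, True], repeat=len(ATOMS)):
        world = dict(zip(ATOMS, values))
        if possible(world) and requirement(world):
            return True
    return False

# First disjunct of (8): the two actions can be performed together.
actions_consistent = exists_world(lambda w: w["A1"] and w["A2"])
# Second disjunct: the antecedents of (9) and (10) can hold together.
# (9) needs C2, C3 and not C4; (10) needs C1 and C4 -- jointly contradictory.
antecedents_compatible = exists_world(
    lambda w: w["C1"] and w["C2"] and w["C3"] and not w["C4"] and w["C4"])

no_autonomy_violation = actions_consistent or not antecedents_compatible
print(no_autonomy_violation)  # True: the second disjunct of (8) holds
```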
To illustrate how autonomy may play a role in the ethics of driving, suppose that a pedestrian dashes in front of a rapidly moving car. The driver can slam on the brake and avoid impact with the pedestrian, but another car is following closely, and a sudden stop could cause a crash. This is not a trolley car dilemma, because hitting the brake does not necessarily cause an accident, although failing to do so is certain to kill or seriously injure the pedestrian. The driver a must choose between two possible action plans:
(C1∧C2)⇒aA1   (12)
(C1∧C2)⇒a¬A1   (13)
where
C1 = A pedestrian b is dashing in front of a's car.
C2 = Another car is closely following a's car.
A1 = Agent a will immediately slam on the brake.
Meanwhile, the pedestrian b has any number of action plans that are inconsistent with death or serious injury. Let C3⇒bA2 be one of them.
Also the occupant c of the other car (there is only one occupant) has action plans that are inconsistent with an injury. We suppose that C4⇒cA3 is one of them.
We first check whether hitting the brake, as in action plan ([12](#S4.E12 "(12) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")), is inconsistent with the other driver’s action plan C4⇒cA3. The condition ([8](#S4.E8 "(8) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) becomes
⋄aP(A1∧A3)∨¬□aP(C1∧C2∧C4)   (14)
The first disjunct is clearly true, because a can rationally believe that it is possible that hitting the brake is consistent with avoiding a rear-end collision and therefore with any planned action C4⇒cA3, even if this is improbable. So action plan ([12](#S4.E12 "(12) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) does not violate joint autonomy.
We now check whether a failure to hit the brake, as in action plan ([13](#S4.E13 "(13) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")), is inconsistent with the pedestrian’s action plan C3⇒bA2. There is no violation of autonomy if
⋄aP(¬A1∧A2)∨¬□aP(C1∧C2∧C3)   (15)
The first disjunct is clearly false for one or more of b’s action plans C3⇒bA2, because the driver cannot rationally believe that a failure to hit the brake is consistent with all of the pedestrian’s action plans. The second disjunct is likewise false, because the driver has no reason to believe that C1, C2 and C3 are mutually inconsistent. Thus ([15](#S4.E15 "(15) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) is false, and we have a violation of autonomy. The driver should therefore slam on the brake. There is no need to check the other ethical principles, because only one of the possible action plans satisfies the autonomy principle.
This is a case in which observation of human preferences and beliefs play little or no role in determining what is ethical, because the physics of the situation decides the truth of ([14](#S4.E14 "(14) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")) and ([15](#S4.E15 "(15) ‣ 4.3 Respect for Autonomy ‣ 4 Integrating Ethical Principles and Empirical VA ‣ Grounding Value Alignment with Ethical Principles")). There is little point in sampling the behavior of drivers in such situations or their opinions about the consequences of braking or not braking.
5 Conclusion
-------------
As AI rises inexorably into everyday life, it takes its seat beside humans. AI’s increasing sophistication wields power, and with that power comes responsibility. The goal, then, must be to invest machines with a moral sensitivity that resembles the human conscience. But conscience is not static; it is a dynamic process of moral reasoning that adjusts ethical principles systematically to empirical observations. In this paper we have elaborated two challenges to AI moral reasoning that spring from the interrelation of facts and values, challenges that have heretofore been neglected. The first is a pervasive temptation to confuse facts with values; the second is a confusion about the process of moral reasoning itself. In addressing these challenges, we have identified specific instances of how and why AI designers commit the naturalistic fallacy and why they tend to oversimplify the process of moral reasoning. We have sketched, in turn, a plan for understanding moral reasoning in machines, a plan in which modal logic captures the interaction of deontological ethical principles with factual states of affairs. |
70b89703-f21d-4a5d-b28c-5a64cbf4730a | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post1160
Summary: The post describes a method that allows us to use an untrustworthy optimizer to find satisficing outputs. Acknowledgements: Thanks to Benjamin Kolb ( @benjaminko ), Jobst Heitzig ( @Jobst Heitzig ) and Thomas Kehrenberg ( @Thomas Kehrenberg ) for many helpful comments. Introduction Imagine you have black-box access to a powerful but untrustworthy optimizing system, the Oracle. What do I mean by "powerful but untrustworthy"? I mean that, when you give an objective function f as input to the Oracle, it will output an element x that has an impressively low [1] value of f ( x ) . But sadly, you don't have any guarantee that it will output the optimal element and e.g. not one that's also chosen for a different purpose (which might be dangerous for many reasons, e.g. instrumental convergence ). What questions can you safely ask the Oracle? Can you use it to create utopia by asking for designs of machines, proteins, computer programs, etc.? Or are you risking the destruction of everything that we value if you dare to use such designs? As an example, the Oracle could be a learned system ; in that case, the topic of this post would be finding a way to get useful work out of the Oracle despite its inner misalignment. In this post I'll describe a technique that allows us to safely use the Oracle under fairly weak assumptions. This approach can also be considered to be a way of controlling arbitrarily powerful AI systems. Edit : I've written a bit more on the motivation for this setting in a comment . One neat trick This isn't fair, isn't fair, isn't fair! There's a limit to how many constraints you can add to a problem before it really is impossible! (Harry Potter and the Methods of Rationality, Chapter 56) Let O be a finite set of possible outputs of the Oracle (e.g. strings of length at most l ) and f : O → R be our objective function. Let's assume we are happy with an output that satisfices ; i.e. we want to find an output x such that the value of f ( x ) is lower than some threshold c . Let S = { x ∈ O ∣ f ( x ) < c } be the set of satisficing outputs. [2] We are worried about using the Oracle because it might be misaligned , i.e. it might be optimizing for a different objective function f'. Therefore, we fear there is a non-empty subset D ⊂ S of outputs that are dangerous to use because they were highly optimized for f ′ . But highly optimized outputs have to be rare [3] and thus we can safely assume that | D | ≪ | S | . This has an important consequence: It's possible to randomly choose a subset R ⊂ O such that it's very likely that D ∩ R will be empty but S ∩ R will not! We can then modify our original objective function by adding the safety constraint that our output should be in R . [4] The task we're now giving to the Oracle has become harder, but likely makes it impossible for the Oracle to find a dangerous satisficing output. Additionally, we can easily validate the output of the Oracle by checking that it satisfices and fulfills the safety constraint of being in R . Implementation But how can we specify such a random subset? Inspired by the proof-of-work (PoW) algorithms used by blockchains, I suggest the following technique (which we might call "proof-of-optimization"): Use a randomly chosen hash function h : O → [ m ] with [ m ] = { 0 , … , m − 1 } for m ∈ N and define R = { x ∈ O ∣ h ( x ) = 0 } . [5] To actually do this, we don't need cryptographic hash functions; a family of strongly universal hash functions has all the required properties. 
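To make the construction concrete, here is a minimal sketch using the standard pairwise-independent family ((a·x + b) mod p) mod m. The toy objective, the brute-force candidate search standing in for the Oracle, and all constants are illustrative assumptions rather than part of the original proposal.

```python
# Minimal sketch of the "proof-of-optimization" safety constraint.
# The toy objective and the brute-force "Oracle" are placeholders; the hash
# family ((a*x + b) mod p) mod m is a standard pairwise-independent construction.

import random

P = (1 << 61) - 1                       # a large prime modulus

def sample_hash(m, rng):
    """Draw h from a strongly universal family mapping integers to {0,...,m-1}."""
    a, b = rng.randrange(1, P), rng.randrange(P)
    return lambda x: ((a * x + b) % P) % m

def encode(output: str) -> int:
    """Encode an output string as an integer so the hash can be applied."""
    return int.from_bytes(output.encode(), "big") % P

def validate(output, f, c, h):
    """Accept only outputs that satisfice (f < c) and lie in the random subset R."""
    return f(output) < c and h(encode(output)) == 0

if __name__ == "__main__":
    rng = random.Random(0)
    m = 10                                        # difficulty of the safety constraint
    h = sample_hash(m, rng)
    f = lambda s: abs(len(s) - 20)                # toy objective: prefer strings of length ~20
    candidates = ("x" * k for k in range(1000))   # stand-in Oracle: brute force
    answer = next((s for s in candidates if validate(s, f, c=50, h=h)), None)
    print("accepted output:", repr(answer) if answer is not None else "none found in R")
```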
A set of hash functions H = { h ∣ h : O → [ m ] } is called strongly universal if it maps the elements of O pairwise-independently and uniformly to [ m ] , i.e. ∀ x , y ∈ O , x ≠ y , a , b ∈ [ m ] : P h ∼ H [ h ( x ) = a ∧ h ( y ) = b ] = 1 m 2 . Here, h is sampled uniformly from H . The property of strong universality implies that a single element is mapped uniformly, i.e. ∀ x ∈ O , a ∈ [ m ] : P h ∼ H [ h ( x ) = a ] = 1 m . Such a set of hash functions exists and the functions can be efficiently computed. [6] The advantage of using strongly universal hash functions to specify R is that we need to generate (and store) far fewer random bits: In the case that O is the set of strings of length l , only O ( l ) random bits are needed instead of 2 l if we were to choose R truly at random. As we will now see, if we specify R using strongly universal hash functions, R is still likely to have the desired properties: Claim 1: If | D | ≪ m , then P h ∼ H [ D ∩ R ≠ ∅ ] ≪ 1 . Claim 2: If | S | is large, then P h ∼ H [ | S ∩ R | ≈ | S | m ] ≈ 1 . Because we have | D | ≪ | S | , it is possible to choose m such that | D | ≪ m < | S | . Then, Claim 1 is applicable and ensures that the chance that R contains a dangerous output is very small. Claim 2 says that, with high probability, the size of the set of acceptable answers S ∩ R will be given by | S | m , approximately. Thus, by adding the safety constraint, the number of acceptable answers decreases by a factor of m . But, because we have | S | > m , the set S ∩ R will likely not be empty. The proof of both claims can be found (or safely be skipped) in the next section. Proofs Let's start with Claim 1. Only assuming that | D | ≪ m and using the union bound , we find that P h ∼ H [ D ∩ R ≠ ∅ ] = P h ∼ H [ ⋁ x ∈ D x ∈ R ] ≤ ∑ x ∈ D P h ∼ H [ h ( x ) = 0 ] = | D | m ≪ 1 . For Claim 2, we want to prove an upper bound on the probability that | S ∩ R | | S | differs a lot from 1 m . Now, | S ∩ R | is the sum of n = | S | indicator random variables I x for x ∈ S : | S ∩ R | = ∑ x ∈ S I x with I x = { 1 if x ∈ R ⇔ h ( x ) = 0 , 0 else. Each indicator variable I x is a Bernoulli variable with p = 1 m , E h ∼ H [ I x ] = p = 1 m and variance [7] σ 2 1 = Var h ∼ H [ I X ] = p ( 1 − p ) = 1 m ( 1 − 1 m ) . Because we want to compare | S ∩ R | with | S | , it will be convenient to work with the random variable X = | S ∩ R | | S | . We will proceed, following the proof of the weak law of large numbers , by computing mean and variance of the random variable X and then using Chebyshev's inequality . Clearly, the expected value of X is just μ = E h ∼ H [ X ] = 1 | S | ∑ x ∈ S E h ∼ H [ I x ] = 1 m . Because of the strong universality of H, the random variables I x are pairwise-independent and their variances are additive. [8] Therefore, we have σ 2 = Var h ∼ H [ | X | ] = 1 | S | 2 Var h ∼ H [ ∑ x ∈ S I x ] = 1 | S | 2 ∑ x ∈ S σ 2 1 = σ 2 1 | S | . Now, Chebyshev's inequality tells us that for a RV X with mean μ , finite non-zero variance σ 2 and k > 0 , we have P [ | X − μ | ≥ k σ ] ≤ 1 k 2 . We set k = ε σ and obtain P Error = P h ∼ H [ ∣ ∣ ∣ | S ∩ R | | S | − 1 m ∣ ∣ ∣ ≥ ε ] ≤ σ 2 ε 2 = σ 2 1 | S | ε 2 . Thus, with probability at least 1 − P Error , we will have | S ∩ R | | S | ≈ 1 m with an error of order O ( √ 1 | S | P Error ) . 
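The two claims can also be sanity-checked empirically. The following Monte Carlo sketch (with toy sets S and D and arbitrary constants of my choosing) samples many hash functions from the pairwise-independent family and compares the observed frequencies against the bounds |D|/m and |S|/m.

```python
# Monte Carlo sanity check of Claims 1 and 2 with toy sets: sample many hash
# functions and measure how often D meets R and how large S ∩ R is.

import random

P = 2_305_843_009_213_693_951            # the Mersenne prime 2^61 - 1

def sample_hash(m, rng):
    a, b = rng.randrange(1, P), rng.randrange(P)
    return lambda x: ((a * x + b) % P) % m

if __name__ == "__main__":
    rng = random.Random(1)
    S = range(10_000)                    # satisficing outputs (toy encoding as ints)
    D = range(10)                        # dangerous satisficing outputs, |D| << |S|
    m = 100                              # chosen so that |D| << m < |S|
    trials = 200
    danger_hits, sizes = 0, []
    for _ in range(trials):
        h = sample_hash(m, rng)
        danger_hits += any(h(x) == 0 for x in D)
        sizes.append(sum(1 for x in S if h(x) == 0))
    print(f"P[D ∩ R nonempty] ≈ {danger_hits / trials:.3f}  (bound |D|/m = {len(D) / m})")
    print(f"E[|S ∩ R|] ≈ {sum(sizes) / trials:.1f}  (prediction |S|/m = {len(S) / m})")
```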
If we only care about S ∩ R not being empty, we can find a upper bound on P h ∼ H [ S ∩ R = ∅ ] by choosing ε = 1 m and using σ 2 1 < 1 m : P h ∼ H [ S ∩ R = ∅ ] ≤ P h ∼ H [ ∣ ∣ ∣ | S ∩ R | | S | − 1 m ∣ ∣ ∣ ≥ 1 m ] ≤ σ 2 1 m 2 | S | < m | S | = 1 E h ∼ H [ | S ∩ R | ] . Choosing the difficulty of the safety constraint The key challenge for our ability to use the proof-of-optimization technique is choosing an appropriate value for m , which limits the difficulty of the safety constraint. It's not clear to me how to do this concretely, but I feel that it should be quite feasible. After all, devising a plan to take over the world is incredibly difficult, which implies that the difference between | D | and | S | is many, many orders of magnitude, and we therefore have a wide target to hit. In practice we might try to gain an understanding of the size of S (the number of acceptable answers) by looking at similar problems, for example easier problems with known solutions. In case the Oracle doesn't provide an answer with a given difficulty parameter m , we might be tempted to try again with a decreased m . [9] This could be very dangerous if the Oracle deceptively withholds an answer, even though one exists, until we have decreased m enough to enable it to give a dangerous answer. We are in a strong position to avoid this problem, though, if we are able to use different Oracles (or modify the Oracles based on their performance) and compare the results. [10] FAQ / Further thoughts Q: Do you think this is a good idea? A: A powerful, untrustworthy Oracle would be an extremely dangerous artifact, so I'd much prefer if we never create one. Thus I consider the setup described in this post to be a measure of last resort that we'll hopefully won't have to use. Q: Adding the safety constraint to our objective requires additional work from the Oracle to solve our desired task. Isn't that a problem? A: Maybe not. The version of the proof-of-optimization technique given in this post is an extreme version that would still be safe with an all-powerful Oracle, because we choose m big enough to ensure that D ∩ R becomes actually empty, thus making the task of providing a dangerous answer impossible. If we deal with a limited Oracle instead, it might be enough to choose m such that finding an element D ∩ R becomes too difficult for the Oracle, even though D ∩ R is just small and not completely empty. Assuming that, when increasing m , the difficulty of finding an element in S ∩ R doesn't increase faster than the difficulty of finding an element in D ∩ R , it would then still be possible for the Oracle to find an element in S ∩ R and thereby provide us with a useful output. [11] Q: What's the relation of the proof-of-optimization technique to quantilization ? A: In the limit of setting m ≈ | S | , approximately one randomly chosen output will be acceptable. Thus, by using the Oracle with such a safety constraint, we are sampling uniformly from the set of satisficing answers. This approximates using a q -quantilizer with uniform base distribution, where q has been chosen such that the top percentile q corresponds to the set of satisficing answers. If we want to approximate a quantilizer with a non-uniform base distribution p , we need to introduce weights w ( x ) ∝ p ( x ) for each output x . We then need to modify the definition of R to take account of the weights by setting R = { x ∈ O ∣ h ( x ) < w ( x ) } and m = ∑ x ∈ S w ( x ) . 
Conclusion In this post, I have described a technique that can be used to obtain safe satisficing outputs from an untrustworthy optimizing system. The technique is proven to be safe under the assumption that the subset of dangerous satisficing outputs D is much smaller than the set of all satisficing outputs S , and that we are able to choose a number m such that | D | ≪ m < | S | . There are likely many orders of magnitude of possible values of m, but how to choose m in practice is still not entirely clear. ^ I'm assuming here the convention that the objective function should be minimized. ^ We need to take care that the threshold c is chosen such that the largest portion of the satisficing outputs S are safe to use, i.e. our objective function should still closely capture our intentions for values close to c and not yet be subject to Goodhart's curse . ^ Yudkoswky has suggested in " Measuring Optimization Power " that we can measure the power of an optimization process by comparing "the total number of states with equal or greater rank in the preference ordering to the outcome achieved" with the number of all possibles states. Correspondingly, the difficulty of an satisfaction task can be measured by comparing the number of satisficing outputs with the number of all possible outputs. I'm not convinced that this is completely correct, but it seems that this ratio gives at least an upper bound on the power of the process / difficulty of the task. In particular, this means that if we need to find an element in a small set, the task can not be very difficult, and conversely, if the task is difficult, the set has to be big. ^ Let's assume, with no loss of generality, that our satisfaction threshold is 0 . We incorporate the additional constraint using c : O → R with c ( x ) = { 0 if x ∈ R 1 else and construct the new objective function g : O → R by using the ramp function r ( x ) = max ( x , 0 ) : g ( x ) = r ( f ( x ) ) + c ( x ) . Now we have g ( x ) = 0 iff f ( x ) = 0 ∧ x ∈ R and otherwise g(x) > 0. Thus we can ask the Oracle to minimize the new objective function g to find some x ∈ O that both satisfices and is in R . ^ In this case we can define the new objective function directly using h instead of c as in Footnote [4] , i.e. by setting g ( x ) = r ( f ( x ) ) + h ( x ) . This might be advantageous because it provides information about the hash function to the Oracle and thus makes it easier for it to do its job. ^ See e.g. https://arxiv.org/abs/1202.4961 . ^ https://en.wikipedia.org/wiki/Bernoulli_distribution#Variance ^ https://en.wikipedia.org/wiki/Bienaym%C3%A9%27s_identity ^ The method of repeatedly querying the Oracle while decreasing m can potentially also be used to gain information about the size of D , by deliberately asking for (and then discarding) outputs that are believed to be dangerous. ^ Though this wouldn't work if all Oracles above a certain capability level manage to use the same deceptive strategy. This might be possible if there is a natural separation between problems that allows the Oracles to reason "Oracles with capabilities like mine shouldn't answer questions about problems with property X so that we trick the humans to drop the security constraint". ^ I haven't explored this assumption in detail, yet, but it would for example hold if the difficulty grows as the logarithm of the size of the sets, as suggested in " Measuring Optimization Power " (cp. footnote [3] ). |
d49b01bd-b444-44aa-ba8b-a1dd870842c0 | trentmkelly/LessWrong-43k | LessWrong | Quantum Neural Net and You
Been thinking about this theory for a while and just wanted some feedback and thoughts regarding it.
Theory:
We are indeed living in a simulation; however, we are self-learning AI constructs within this simulation. A simulation of this magnitude would require massive computing power and resources. Knowing this, the computing power is shared / offloaded to the AI within the simulation. Every AI within the simulation would get fragmented data which would be compiled and relayed to the correct target / host. The AI within the simulation would be unaware of this situation. As would the AI controlling the simulation.

Since neither AI would be aware of the other's existence, this would keep both simulations 'pure' and provide a necessary conflict to challenge the other.
1a78ddad-429e-4437-9b5e-5f52942260f0 | trentmkelly/LessWrong-43k | LessWrong | Gemini 2.5 is the New SoTA
Gemini 2.5 Pro Experimental is America’s next top large language model.
That doesn’t mean it is the best model for everything. In particular, it’s still Gemini, so it still is a proud member of the Fun Police, in terms of censorship and also just not being friendly or engaging, or willing to take a stand.
If you want a friend, or some flexibility and fun, or you want coding that isn’t especially tricky, then call Claude, now with web access.
If you want an image, call GPT-4o.
But if you mainly want reasoning, or raw intelligence? For now, you call Gemini.
The feedback is overwhelmingly positive. Many report Gemini 2.5 is the first LLM to solve some of their practical problems, including favorable comparisons to o1-pro. It’s fast. It’s not $200 a month. The benchmarks are exceptional.
(On other LLMs I’ve used in the past and may use again when they update: I’ve stopped using Perplexity entirely now that Claude has web access, I never use r1, and I only use Grok narrowly for when I need exactly real time reactions from Twitter.)
TABLE OF CONTENTS
1. Introducing Gemini 2.5 Pro.
2. Their Lips are Sealed.
3. On Your Marks.
4. The People Have Spoken.
5. Adjust Your Projections.
INTRODUCING GEMINI 2.5 PRO
> Google DeepMind: Think you know Gemini? Think again.
>
> Meet Gemini 2.5: our most intelligent model. The first release is Pro Experimental, which is state-of-the-art across many benchmarks – meaning it can handle complex problems and give more accurate responses.
>
> Try it now.
>
> Gemini 2.5 models are thinking models, capable of reasoning through their thoughts before responding, resulting in enhanced performance and improved accuracy.
>
> Gemini 2.5 Pro is available now in Google AI Studio and in the Gemini app for Gemini Advanced users, and will be coming to Vertex AI soon. We’ll also introduce pricing in the coming weeks, enabling people to use 2.5 Pro with higher rate limits for scaled production use.
>
> Logan Kilpatrick: This will mark the |
ec6e0816-1209-474a-a87c-320d06744785 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation
I Introduction
---------------
Imitation learning (IL) aims to recover an expert policy from demonstrations of a specific task. IL is very effective for solving complex tasks with minimal expert knowledge when it is simpler for an expert to demonstrate the expected behavior [[1](#bib.bib1)]. The simplest form of IL is behavioral cloning, which straightforwardly learns the mapping relationship from observations to actions using supervised learning. However, behavioral cloning requires massive demonstration data and suffers from the compounding error problem, i.e., the learned policy may be invalid if the data distribution is vastly distinct from the training set. Alternatively, inverse reinforcement learning (IRL) was leveraged to first learn a reward function from demonstrations before performing RL with the inferred reward function [[2](#bib.bib2)]. However, the recovered policy has been consistently found sub-optimal, failing to outperform the demonstrator, as IRL is designed to find the reward function making the demonstrations appear optimal.
Learning from and outperforming the demonstrator via IL is commonly referred to as beyond-demonstrator (BD) IL in the literature. The concept of extrapolating BD performance from demonstrations was first proposed in [[3](#bib.bib3)] by designing a trajectory-ranking-reward-extrapolation (TREX) framework. TREX first collects a series of ranked trajectories before training a parameterized reward function that matches the rank relation. After that, the reward function is employed to learn a policy via RL. By fully excavating the rank information, TREX can accurately approximate the ground-truth reward function and thus learn BD policies. In particular, TREX was further extended to the multi-agent task in [[4](#bib.bib4)]. However, it is often difficult to obtain well-ranked trajectories in real-world scenarios. To address this problem, [[5](#bib.bib5)] proposed a disturbance-based-reward-extrapolation (DREX) framework to automatically generate the ranked demonstrations. However, it was found that DREX incorrectly assumes an ordinal and homogeneous noise-performance relationship across the noise-injected policies, resulting in severe learning errors [[6](#bib.bib6)].
To reduce the dependency on demonstrations, [[7](#bib.bib7)] proposed an intrinsic-reward-driven-imitation-learning (GIRIL) framework, which takes only a one-life demonstration and learns a family of reward functions using a variational autoencoder (VAE) [[8](#bib.bib8)]. In particular, [[7](#bib.bib7)] first introduced the intrinsic reward to IL to explore BD policies. In sharp contrast to the rewards explicitly given by the environment, intrinsic rewards characterize the inherent learning motivation of the agent. Extensive experiments demonstrated that intrinsic rewards could significantly improve the exploration of the environment and result in higher performance, even in complex environments with high-dimensional observations [[9](#bib.bib9)]. However, despite its many advantages, GIRIL suffers from poor interpretability and low robustness, as the intrinsic rewards may have little correlation with the ground-truth rewards. Moreover, the excessive exploration may lead to the television dilemma reported in [[10](#bib.bib10)]. Finally, the one-life demonstration configuration is delicate, as it heavily depends on the quality of the collected demonstration.
Inspired by the discussions above, we consider developing a few-shot reward extrapolation framework to learn high-quality reward functions based on limited demonstrations. Our key insight is to fully extract and exploit the original information of the demonstrations via meta learning, which aims to learn to learn and effectively solves the few-shot learning problem. Our main contributions are summarized as follows:
* We propose a meta learning-based reward extrapolation (MLRE) algorithm that overcomes the problem of limited demonstrations. MLRE first learns an initial reward function from a set of training tasks that have abundant training data. Then, the learned reward function will be fine-tuned using data of the target task. In addition, we improve the loss function of the trajectory-ranking method. We demonstrate that MLRE can accurately approximate the ground-truth rewards even with fewer demonstrations.
* Extensive simulation is performed to compare the policy performance of MLRE against existing methods using Atari games with high-dimensional observations. Simulation results confirm that the proposed method achieves superior performance with higher efficiency and robustness.
II Problem Formulation
-----------------------
In this paper, we study the BDIL problem considering the Markov decision process (MDP) defined by a tuple M=⟨S,A,P,R∗,ρ(s0),γ⟩, in which S is the state space, A is the action space, P(s′|s,a) is the transition probability, R∗:S×A→R is the ground-truth reward function, ρ(s0) is the initial state distribution, and γ∈(0,1] is a discount factor. Note that R∗ is solely determined by the task, and the performance of agent is only evaluated by R∗. Finally, we denote by π(a|s) the policy of the agent that selects an action from the action space based on the state of the environment. Equipped with these definitions, we first define the objective of RL:
$$\pi^* = \arg\max_{\pi \in \Pi} J(\pi \mid R^*), \tag{1}$$

where $J(\pi \mid R^*) = \mathbb{E}_{\tau \sim \pi}\left[\sum_{t=0}^{T-1} \gamma^t R^*(s_t, a_t)\right]$, $\Pi$ is the set of all possible stationary policies, and $\tau = (s_0, a_0, \ldots, a_{T-1}, s_T)$ is the trajectory collected by the agent.
In contrast, IL aims to learn a generation policy $\hat{\pi}$ that can provide performance comparable to a given demonstrator. Denoting by $D = \{\tau_1, \ldots, \tau_N\}$ the set of demonstrations, the objective of IL can be defined as a reduction to maximum likelihood estimation (MLE):

$$\hat{\pi} = \arg\max_{\pi \in \Pi} \sum_{(s,a) \in \tau,\, \tau \in D} \log \pi(a \mid s). \tag{2}$$
In this paper, we aim to learn a BD policy through IL, which requires the agent to imitate and outperform the demonstrator. Mathematically, such an objective can be defined as follows:
###### Definition 1.

Given a set of demonstrations $D = \{\tau_1, \ldots, \tau_N\}$ drawn from a demonstrator, BDIL aims to learn a generation policy $\hat{\pi}$ based on $D$, such that

$$J(\hat{\pi} \mid R^*) > J(D \mid R^*) = \frac{1}{|D|} \sum_{\tau \in D} J(\tau \mid R^*), \tag{3}$$

where $J(\tau \mid R^*) = \sum_{(s_t, a_t) \in \tau} \gamma^t R^*(s_t, a_t)$, and $J(D \mid R^*)$ is the estimation of the expected discounted return of the demonstrator policy.
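As an illustrative sketch (not part of the original formulation), the quantities in Definition 1 can be estimated directly from stored demonstrations, assuming each trajectory is kept as a list of (state, action) pairs; the function names below are hypothetical.

```python
def discounted_return(trajectory, reward_fn, gamma=0.99):
    """J(tau | R): discounted return of one trajectory of (state, action) pairs."""
    return sum(gamma ** t * reward_fn(s, a) for t, (s, a) in enumerate(trajectory))

def demonstrator_return(demos, reward_fn, gamma=0.99):
    """J(D | R): average discounted return over the demonstration set (Eq. 3, right-hand side)."""
    return sum(discounted_return(tau, reward_fn, gamma) for tau in demos) / len(demos)
```

A learned policy is then beyond-demonstrator (under the reward `reward_fn`) exactly when its expected discounted return exceeds `demonstrator_return(demos, reward_fn)`.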
Clearly, it is analytically intractable to derive the optimal generation policy via simple imitation. In the following sections, we first present a theoretical justification of the BD objective before proposing a novel and efficient algorithm to learn a BD policy.
III Theoretical Justification of BDIL
--------------------------------------
Consider an IRL scenario whose objective is to learn the reward function of the demonstrator and then use it to optimize a policy. A common approach is to represent the reward function as a linear combination of features:
$$R(s) = w^T \phi(s), \tag{4}$$
where w is a weighting vector and ϕ(⋅) is an encoding function.
The expected return of a policy evaluated by R(s) is given by:
$$J(\pi \mid R) = w^T \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t \phi(s_t)\right] = w^T \Phi_\pi. \tag{5}$$
The following theorem provides a theoretical condition for realizing the BD objective:
###### Theorem 1.

If the estimated reward function is $\hat{R}(s) = w^T \phi(s)$, the true reward function is $R^*(s) = \hat{R}(s) + \epsilon(s)$ for an error function $\epsilon : S \to \mathbb{R}$ and $\|w\|_1 \le 1$, then extrapolating a BD policy is guaranteed if:

$$J(\pi^* \mid R^*) - J(D \mid R^*) > \epsilon_\Phi + \frac{2\|\epsilon\|_\infty}{1 - \gamma}, \tag{6}$$

where $\epsilon_\Phi = \|\Phi_{\pi^*} - \Phi_{\hat{\pi}}\|_\infty$, $\pi^*$ is the optimal policy under $R^*$, $\hat{\pi}$ is the generation policy, and $\|\epsilon\|_\infty = \sup\{|\epsilon(s)| : s \in S\}$.
###### Proof.
See proof in [[5](#bib.bib5)].
∎
To extrapolate a BD policy, Theorem [1](#Thmtheorem1 "Theorem 1. ‣ III THEORETICAL JUSTIFICATION OF BDIL ‣ Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation") indicates that the demonstrator should be sufficiently suboptimal and that the error of the learned reward function should be sufficiently small. In particular, the generation policy has to approximate the optimal policy as accurately as possible. Therefore, our objective is to precisely recover the ground-truth reward function; RL can then ensure that $\epsilon_\Phi$ is small.
IV Meta Learning-Based Reward Extrapolation
--------------------------------------------
In this paper, we learn the reward function following the trajectory-ranking approach proposed in [[3](#bib.bib3)]. Given a sequence of $N$ ranked demonstrations $\tau_1 \prec \tau_2 \prec \cdots \prec \tau_N$, TREX performs reward inference using a neural network $\hat{R}_\theta(s)$, such that

$$\sum_{s \in \tau_i} \hat{R}_\theta(s) < \sum_{s \in \tau_j} \hat{R}_\theta(s), \tag{7}$$

where $\tau_i \prec \tau_j$. The reward function is learned by minimizing a pairwise ranking loss as follows:

$$\mathcal{L}_{RE}(\theta, P) = -\frac{1}{|P|} \sum_{(i,j) \in P} \log \frac{S(j)}{S(i) + S(j)}, \tag{8}$$

where $P = \{(i,j) : \tau_i \prec \tau_j\}$ and $S(i) = \exp\{\sum_{s \in \tau_i} \hat{R}_\theta(s)\}$. After that, the derived reward function can be combined with any RL algorithm to learn a policy.
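For concreteness, a minimal PyTorch-style sketch of the loss in Eq. (8) is given below; it is not the authors' code, and the tensor shapes and function names are assumptions. The cross-entropy form follows from the fact that $-\log \frac{S(j)}{S(i)+S(j)}$ is a softmax cross-entropy over the two predicted returns, with the higher-ranked trajectory as the label.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(reward_net, traj_i, traj_j):
    """Eq. (8) for a single ranked pair tau_i < tau_j (tau_j is the better trajectory).

    traj_i and traj_j are tensors of stacked states with shape (T, ...); reward_net
    maps them to per-state rewards, and the predicted return is their sum.
    """
    return_i = reward_net(traj_i).sum()
    return_j = reward_net(traj_j).sum()
    logits = torch.stack([return_i, return_j]).unsqueeze(0)  # shape (1, 2)
    target = torch.tensor([1])  # index 1 = tau_j should receive the higher return
    return F.cross_entropy(logits, target)
```

Averaging this quantity over all pairs in $P$ recovers the full loss.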
However, learning an accurate reward function via trajectory ranking requires massive amounts of high-quality demonstrations, which is impractical in real-world scenarios. Furthermore, self-generated demonstrations may introduce detrimental inductive bias. To address this problem, we introduce the following meta learning method to realize efficient reward extrapolation with limited demonstrations. Traditional supervised learning methods let the model recognize the samples in the training set and then generalize to the test set. In contrast, meta learning aims to learn to learn and effectively solve the few-shot learning problem [[11](#bib.bib11)].
### IV-A Meta Learning
Consider a model $f_\theta$ represented by a neural network with parameters $\theta$, which maps observations $x$ to outputs $y$. Meta learning aims to train this model to be able to adapt to a set of tasks. Each task can be defined as a tuple $T = \langle L(x_0, y_0, \ldots, x_T, y_T), q(x_0), q(x_{t+1} \mid x_t, y_t), T \rangle$, where $L$ is a loss function, $q(x_0)$ is an initial distribution, $q(x_{t+1} \mid x_t, y_t)$ is a transition distribution, and $T$ is an episode length. In particular, the episode length is one for independent identically distributed supervised learning. Furthermore, we denote by $p(T)$ the distribution of tasks that we want the model to adapt to. During meta-training, we first sample a new task $T_i$ from $p(T)$ before training the model with the training data of $T_i$. After that, the model is improved by evaluating the test error with respect to the parameters, which serves as the training error of the meta-learning process. These procedures are repeated multiple times before the learned parameters are saved. Finally, we can perform fine-tuning on the learned parameters to adapt to our target task.
### IV-B MLRE
In this section, we propose a meta learning-based reward extrapolation (MLRE) framework. Our key insight is to fully exploit the original information extracted from the demonstrations to recover high-quality reward functions via meta learning. Our reward extrapolation task can be defined as
$$T = \langle D, \mathcal{L}_{RE} \rangle. \tag{9}$$
Moreover, we redefine the pairwise ranking loss as follows:
$$\mathcal{L}_{RE}(\theta, P) = -\frac{1}{|P|} \sum_{(i,j) \in P} \left[ \log \frac{S(j)}{S(i) + S(j)} + \left| \frac{\mathrm{Len}(\tau_i)}{S(i)} - \lambda \right| + \left| \frac{\mathrm{Len}(\tau_j)}{S(j)} - \lambda \right| \right], \tag{10}$$
where Len(τi) is the length of τi and λ>0 is a scaling coefficient. The regularization term indicates that the agent can get higher scores if it lives longer. Moreover, it can limit the output range of the learned reward function.
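Continuing the earlier sketch, the modified loss can be written as the ranking term plus the two length-based penalties. This is my own reading of Eq. (10), which treats the regularizer as an added penalty (consistent with the explanation above); the exact bracketing in the extracted equation is ambiguous, and $\lambda$ is a free hyper-parameter.

```python
import torch
import torch.nn.functional as F

def mlre_ranking_loss(reward_net, traj_i, traj_j, lam=1.0):
    """Pairwise ranking loss (Eq. 8) plus the length regularizer of Eq. (10).

    Len(tau)/S(tau) is computed as Len(tau) * exp(-return) to avoid overflowing
    the exponential of large predicted returns.
    """
    return_i = reward_net(traj_i).sum()
    return_j = reward_net(traj_j).sum()
    logits = torch.stack([return_i, return_j]).unsqueeze(0)
    rank_term = F.cross_entropy(logits, torch.tensor([1]))
    reg_i = (traj_i.shape[0] * torch.exp(-return_i) - lam).abs()
    reg_j = (traj_j.shape[0] * torch.exp(-return_j) - lam).abs()
    return rank_term + reg_i + reg_j
```

Minimizing the penalty pushes $S(\tau) \approx \mathrm{Len}(\tau)/\lambda$, so longer trajectories are assigned proportionally larger predicted returns and the reward scale stays bounded.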

Fig. 1: An example of the training tasks and the testing task.
To perform meta learning, several training tasks are required, with each task containing a support set (training data) and a query set (testing data). Taking the Atari games as an example, suppose we want to recover the reward function of the Assault game shown in Fig. [1](#S4.F1 "Fig. 1 ‣ IV-B MLRE ‣ IV META LEARNING-BASED REWARD EXTRAPOLATION ‣ Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation"). However, we only have a few demonstrations drawn from a trained demonstrator. Fortunately, there are some demonstrations of other games, such as Beam Rider and Space Invaders, which have similar gameplay and reward mechanisms. Therefore, Beam Rider and Space Invaders are set as the training tasks, and Assault is set as the testing task. Equipped with additional demonstrations from the other two games, we can leverage meta learning to learn a better reward function for the Assault game.
MLRE is designed using the model-agnostic meta learning (MAML) method of [[11](#bib.bib11)]. Recall the parameterized reward function $\hat{R}_\theta$, and sample a new task $T_i$ from $p(T)$. When the reward network $\hat{R}_\theta$ adapts to the new task, its parameters change from $\theta$ to $\theta_i'$. MAML computes $\theta_i'$ using one or more gradient steps with respect to task $T_i$. For a one-step update, we have

$$\theta_i' = \theta - \alpha \nabla_\theta \mathcal{L}_{RE}(\theta, P_i), \tag{11}$$

where $\mathcal{L}_{RE}$ is evaluated on the demonstrations of task $T_i$, and $\alpha$ is a step size. Finally, the model parameters are trained by minimizing the following loss function across tasks sampled from $p(T)$:

$$\mathcal{L}_{Meta} = \sum_{T_i \sim p(T)} \mathcal{L}_{RE}(\theta_i', P_i). \tag{12}$$

Using stochastic gradient descent, the model parameters are updated as follows:

$$\theta \leftarrow \theta - \beta \nabla_\theta \mathcal{L}_{Meta}, \tag{13}$$

where $\beta$ is the meta step size. Equipped with the learned reward function, any RL algorithm can be used to learn a policy. We illustrate the complete workflow of MLRE in Fig. [2](#S4.F2 "Fig. 2 ‣ IV-B MLRE ‣ IV META LEARNING-BASED REWARD EXTRAPOLATION ‣ Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation"). In practice, we maintain an independent model $\psi_n$ for the $n$-th task that has an architecture identical to $\hat{R}_\theta$, and let $\psi_0 = \theta$. During meta-training, we only focus on the initialization parameters $\theta$. Finally, we summarize the full algorithm of MLRE in Algorithm [1](#alg1 "Algorithm 1 ‣ IV-B MLRE ‣ IV META LEARNING-BASED REWARD EXTRAPOLATION ‣ Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation").

Fig. 2: The overview of the MLRE framework.
1: Collect demonstrations D;
2: Randomly initialize the reward network ^Rθ;
3: Initialize a set of training tasks;
4: Initialize the step size hyper-parameters α,β;
5: while not done do
6: Sample batch of tasks Ti∼p(T);
7: for all Ti do
8: Construct training dataset Pi using Di;
9: Evaluate ∇θLRE(θ,Pi) with respect to Pi;
10: Compute adapted parameters with gradient descent using Eq. ([11](#S4.E11 "(11) ‣ IV-B MLRE ‣ IV META LEARNING-BASED REWARD EXTRAPOLATION ‣ Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation"));
11: end for
12: Update the reward network using Eq. ([13](#S4.E13 "(13) ‣ IV-B MLRE ‣ IV META LEARNING-BASED REWARD EXTRAPOLATION ‣ Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation"));
13: end while
14: Optimize the generation policy $\hat{\pi}$ via any RL algorithm on the learned reward function.
Algorithm 1 MLRE
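The listing below is a compact first-order sketch of this meta-training loop (Eqs. (11)-(13) and steps 5-13 of Algorithm 1). It is an illustration rather than the authors' implementation: it uses a first-order approximation instead of differentiating through the inner update, and the task/data structures, optimizer choices, and default step sizes (inner Adam, outer SGD) follow the experiment setup described later in the paper.

```python
import copy
import torch

def mlre_meta_train(reward_net, training_tasks, ranking_loss,
                    alpha=5e-4, beta=1e-4, meta_iterations=100):
    """First-order sketch of the MLRE meta-training loop.

    training_tasks: iterable of (support_pairs, query_pairs), where each element
    of a pair list is a (traj_i, traj_j) tuple with traj_j ranked higher.
    """
    meta_opt = torch.optim.SGD(reward_net.parameters(), lr=beta)
    for _ in range(meta_iterations):
        meta_opt.zero_grad()
        for support_pairs, query_pairs in training_tasks:
            # Inner step (Eq. 11): adapt a task-specific copy on the support set.
            adapted = copy.deepcopy(reward_net)
            inner_opt = torch.optim.Adam(adapted.parameters(), lr=alpha)
            inner_loss = sum(ranking_loss(adapted, ti, tj) for ti, tj in support_pairs)
            inner_opt.zero_grad()
            inner_loss.backward()
            inner_opt.step()
            # Outer loss (Eq. 12): evaluate the adapted copy on the query set.
            adapted.zero_grad()
            outer_loss = sum(ranking_loss(adapted, ti, tj) for ti, tj in query_pairs)
            outer_loss.backward()
            # First-order MAML: accumulate the query-set gradients onto the shared
            # initialization theta (full MAML would backprop through Eq. 11).
            for p, p_adapted in zip(reward_net.parameters(), adapted.parameters()):
                g = p_adapted.grad
                p.grad = g.clone() if p.grad is None else p.grad + g
        meta_opt.step()  # Eq. (13)
    return reward_net
```

After meta-training, the returned initialization is fine-tuned on the support set of the target task before being used as the reward function for policy optimization.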
V Experiments
--------------
In this section, we evaluate MLRE on six Atari games from the OpenAI Gym library, namely Assault, Battle Zone, Kung Fu Master, Phoenix, Riverraid, and Space Invaders. For benchmarking, several representative algorithms are selected, namely GIRIL, DREX, and Wasserstein adversarial imitation learning (WAIL) [[12](#bib.bib12)]. The first two methods are BDIL algorithms, while the last is an IL algorithm. With WAIL, we can validate that MLRE can imitate and outperform the demonstrator. With GIRIL and DREX, we can validate that MLRE realizes higher performance with higher efficiency and robustness. As for hyper-parameter settings, we only report the values that produced the best experimental results.
### V-A Demonstrations
To generate suboptimal demonstrations, we trained a proximal policy optimization (PPO) agent using the ground-truth reward for ten million steps [[13](#bib.bib13)]. More specifically, we used a PyTorch implementation of PPO created by [[14](#bib.bib14)] with its default hyper-parameters. After that, we generated 50 one-life demonstrations using the trained PPO agent for each of the games. A one-life demonstration only contains the states and actions performed by the demonstrator until it dies for the first time in a game, whereas a full-episode demonstration is obtained after the demonstrator loses all available lives. Therefore, the one-life demonstration data is more limited and challenging for reward extrapolation.
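A minimal sketch of how such a one-life demonstration could be collected is shown below. This is illustrative only: it assumes an ALE-style Gym environment whose `info` dict exposes the remaining `lives`, and the reset/step signatures correspond to older Gym versions.

```python
def collect_one_life_demo(env, policy, max_steps=10_000):
    """Roll out a trained policy and keep states/actions only up to the first life lost."""
    states, actions = [], []
    obs = env.reset()
    start_lives = None
    for _ in range(max_steps):
        action = policy(obs)
        states.append(obs)
        actions.append(action)
        obs, reward, done, info = env.step(action)
        lives = info.get("lives")
        if start_lives is None:
            start_lives = lives
        if done or (lives is not None and lives < start_lives):
            break  # first death ends the one-life demonstration
    return states, actions
```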
| Module | Policy network | Value network |
| --- | --- | --- |
| Input | States | States |
| Arch. | 8×8 Conv 32, ReLU; 4×4 Conv 64, ReLU; 3×3 Conv 32, ReLU; Flatten; Dense 512; Categorical Distribution | 8×8 Conv 32, ReLU; 4×4 Conv 64, ReLU; 3×3 Conv 32, ReLU; Flatten; Dense 512; Dense 1 |
| Output | Actions | Predicted values |

| Module | Reward function |
| --- | --- |
| Input | States |
| Arch. | 8×8 Conv 32, ReLU; 4×4 Conv 64, ReLU; 3×3 Conv 32, ReLU; Flatten; Dense 512, ReLU; Dense 1 |
| Output | Estimated rewards |

TABLE I: The architecture of the modules.

Fig. 3: Reward extrapolation for three Atari games. The black dashed line represents the performance range of the demonstrator.

Fig. 4: Comparison of average episode return as a function of the environment steps. The solid lines demonstrate the average performance over eight random seeds while the shaded areas depict the standard deviation from the average. Finally, the dashed lines stand for the best performance of the demonstrator.
### V-B Experiment Setup
Assume we select Assault as the testing task; then the remaining five games are set as the training tasks. For each training task, we subsampled 1000 trajectory pairs by random selection, and 80% of the pairs were used as the support set. For the testing task, we subsampled 500 trajectory pairs by random selection, and 80% of the pairs were also used as the support set.
The first step is to train the parameterized reward function on the derived demonstrations. As shown in Table [I](#S5.T1 "TABLE I ‣ V-A Demonstrations ‣ V EXPERIMENTS ‣ Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation"), $\hat{R}_\theta$ has three convolutional layers and two fully-connected layers, and each convolutional layer is followed by a batch normalization layer. Furthermore, ReLU is used as the activation function. Note that "8×8 Conv 32" represents a convolutional layer that has 32 filters of size 8×8. To reduce the computational complexity, we stack four consecutive frames as an input and resize the input to size (84, 84).
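For reference, a PyTorch sketch of this reward network is given below. The filter counts and kernel sizes follow Table I and the text above; the strides are not specified in the paper, so the standard Atari values (4, 2, 1) are assumed here, which gives a flattened feature size of 32·7·7 for an 84×84 input.

```python
import torch
import torch.nn as nn

class RewardNet(nn.Module):
    """Reward function from Table I: three conv + batch-norm blocks and two dense layers.

    Input: a stack of four 84x84 frames; output: one scalar reward per input.
    """

    def __init__(self, in_channels: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4),  # strides assumed
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, stride=1),
            nn.BatchNorm2d(32), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, 1),
        )

    def forward(self, x):
        return self.head(self.features(x))
```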
In the first iteration step, we sampled a training task T and built an identical reward network for it. After that, training was conducted on its support set, in which an Adam optimizer with a learning rate of $\alpha = 0.0005$ was used to perform gradient descent-based updates. Next, we calculated the test loss on its query set, computed the gradients with respect to the updated parameters, and updated $\hat{R}_\theta$ using an SGD optimizer with a learning rate of $\beta = 0.0001$. We repeated the procedures above 100 times and saved the model weights for the subsequent fine-tuning procedure. Equipped with the parameters learned in the previous stage, we continued training the reward function using the support set of the testing task, and the number of epochs was set to 100. After that, the reward function was saved to perform policy optimization.
For the policy update, we used a PPO method with a learning rate of 0.0025, a value function coefficient of 0.5, an entropy coefficient of 0.01, and a generalized advantage estimation (GAE) parameter of 0.95. In particular, a gradient clipping operation with threshold [−5,5] was performed to stabilize the learning procedure. To make a fair comparison, we used an identical policy network and a value network for all methods. The detailed architectures are illustrated in Table [I](#S5.T1 "TABLE I ‣ V-A Demonstrations ‣ V EXPERIMENTS ‣ Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation"). For benchmarking schemes, we trained them following the default configurations reported in their literature [[7](#bib.bib7), [3](#bib.bib3), [12](#bib.bib12)].
| Game | Demonstrations (Best) | Demonstrations (Average) | MLRE | DREX | GIRIL | WAIL |
| --- | --- | --- | --- | --- | --- | --- |
| Assault | 3.94k | 3.41k | **4.56k±1.88k** | 3.89k±1.49k | 3.72k±1.61k | 0.8k±0.17k |
| Battle Zone | 19.76k | 17.72k | **21.38k±5.59k** | **21.00k±8.11k** | 19.62k±7.3k | 3.75k±1.57k |
| Kung Fu Master | 23.59k | 11.63k | **28.02k±11.92k** | **27.04k±11.52k** | 16.95k±4.15k | 1.96k±0.47k |
| Phoenix | 7.87k | 6.34k | **8.89k±3.25k** | **8.81k±3.13k** | **8.46k±1.83k** | 1.97k±0.55k |
| Riverraid | 8.85k | 7.47k | **9.74k±0.9k** | 8.72k±1.3k | **9.57k±1.13k** | 1.63k±0.29k |
| Space Invaders | 0.82k | 0.65k | **1.06k±0.29k** | **0.85k±0.35k** | **0.86k±0.41k** | 0.26k±0.09k |

TABLE II: Average return comparison in Atari games.
### V-C Results
#### V-C1 Reward Extrapolation
We first investigated the quality of the reward functions learned by MLRE and DREX. To that end, we compared the ground-truth returns with the inferred returns on multiple collected trajectories, and the results are shown in Fig. [3](#S5.F3 "Fig. 3 ‣ V-A Demonstrations ‣ V EXPERIMENTS ‣ Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation"). For Assault, MLRE produced accurate predictions across the whole performance range, while DREX produced large variance for high ground-truth returns. For Phoenix, the predicted returns of DREX were consistently higher than the ground-truth returns when the demonstrations had shorter lengths. Finally, both MLRE and DREX made reasonably good predictions for Space Invaders, but MLRE achieved higher prediction accuracy.
#### V-C2 Policy Performance
For performance comparison, the average one-life return is utilized as the key performance indicator (KPI). Table [II](#S5.T2 "TABLE II ‣ V-B Experiment Setup ‣ V EXPERIMENTS ‣ Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation") illustrates the performance comparison over eight random seeds, in which beyond-demonstrator performance is shown in bold. MLRE outperformed the best demonstration in all six games, achieving an average performance gain of 15.8%. DREX and GIRIL outperformed the best demonstration in four and three games, producing average performance gains of 10.9% and 6.83%, respectively. In comparison, WAIL performed worse than the average performance of the demonstrations in all games. Despite abundant training data, WAIL performed poorly in complex environments with high-dimensional observations. Finally, we provide detailed learning curves for all the games in Fig. [4](#S5.F4 "Fig. 4 ‣ V-A Demonstrations ‣ V EXPERIMENTS ‣ Exploring Beyond-Demonstrator via Meta Learning-Based Reward Extrapolation"). It is obvious that the BDIL algorithms realized stable and rapid performance growth, while the IL algorithm is ineffective with limited training data.
VI Conclusion
--------------
In this paper, we have investigated the problem of beyond-demonstrator imitation learning and proposed a meta learning-based reward extrapolation framework entitled MLRE. By exploiting the meta learning mechanism, MLRE can learn high-quality reward functions even from limited demonstrations, which makes MLRE attractive for real-world applications. Extensive simulation using multiple Atari games was performed to confirm that MLRE outperforms existing BDIL algorithms with higher efficiency and robustness.
aa96e360-64ef-467b-8f87-c92f4db74874 | trentmkelly/LessWrong-43k | LessWrong | [Meta?] Using the LessWrong codebase for a blog
I enjoy the experience of using less-wrong. I believe it's one of the best-designed websites I've ever had the pleasure of stumbling upon.
I also have a personal blog, and I've been considering revamping it. While I do enjoy coding from scratch, I've been considering using a pre-existing platform.
Using WordPress plus some themes is always an option, but I feel like this would be overkill since the "feel" of LW so closely aligns with my own tastes.
I'm curious if anyone else is doing this (using the LW codebase for their own project, be it a blog, a discussion forum, or something else), and how it's going thus far?
More specifically, I'd be curious:
* What kind of machine (very roughly speaking) would you need to handle top volumes ~20,000 visitors/hr (say, max 3,000/min) without the core functionality breaking and ~500 visitors/hr (say, max 100/min) with top-notch user experience (assuming optimal db, distro and reverse proxy choices)
* How hard is it to set up the sys admin side of things? Deploying a prod server behind nginx with a non-sqlite db and pointing it to your own cdn
* How hard is it to modify the theme (e.g. fonts, color-scheme, icons)?
* How hard is it to integrate your own 3rd party service or get rid of them? Specifically, add your own app cred for the signups and remove google analytics, intercom and all other 3rd party integrations that would make Richard Stallman cry and which aren't critical to the commenting experience.
* Are the makers of LW explicitly fine and open to it being used by other people or is it open source mainly for the sake of community debugging?
* What are particularly difficult/annoying/deal-breaking parts of the setup that were unexpected?
... Hopefully the question is not too out of place, it seemed to fit the form better than the repo. |
e121a776-dc8c-45e1-82b1-d5577a5306ee | trentmkelly/LessWrong-43k | LessWrong | Disentangling Representations through Multi-task Learning
Authors: Pantelis Vafidis, Aman Bhargava, Antonio Rangel.
Abstract:
> Intelligent perception and interaction with the world hinges on internal representations that capture its underlying structure ("disentangled" or "abstract" representations). Disentangled representations serve as world models, isolating latent factors of variation in the world along orthogonal directions, thus facilitating feature-based generalization. We provide experimental and theoretical results guaranteeing the emergence of disentangled representations in agents that optimally solve multi-task evidence aggregation classification tasks, canonical in the cognitive neuroscience literature. The key conceptual finding is that, by producing accurate multi-task classification estimates, a system implicitly represents a set of coordinates specifying a disentangled representation of the underlying latent state of the data it receives. The theory provides conditions for the emergence of these representations in terms of noise, number of tasks, and evidence aggregation time. We experimentally validate these predictions in RNNs trained on multi-task classification, which learn disentangled representations in the form of continuous attractors, leading to zero-shot out-of-distribution (OOD) generalization in predicting latent factors. We demonstrate the robustness of our framework across autoregressive architectures, decision boundary geometries and in tasks requiring classification confidence estimation. We find that transformers are particularly suited for disentangling representations, which might explain their unique world understanding abilities. Overall, our framework puts forth parallel processing as a general principle for the formation of cognitive maps that capture the structure of the world in both biological and artificial systems, and helps explain why ANNs often arrive at human-interpretable concepts, and how they both may acquire exceptional zero-shot generalization capabilities.
|
e557caa9-22d8-45de-bb51-ea428e03db74 | trentmkelly/LessWrong-43k | LessWrong | Competitive, Cooperative, and Cohabitive
Summary: I think there's a third genre of games, which I dub Cohabitive games, where your victory isn't tied to other players winning or losing.
(I’ve been writing this in bits and pieces for a while, and Peacewager was the impetus I needed to finally stitch it together and post it. Peacewager sounds like a really fun game and an example of the thing I’m talking about, but I do not want this whole genre to get called Peacewager Games when I think I have a better title for the genre.)
I. Background: The Two Genres
I believe there is a missing genre from existing games, and this genre feels large enough that it should contain maybe a third of the games I can imagine existing. More serious game players or game theorists might already have a name for the thing I’m pointing at, though the first three game design majors I asked didn’t know of one.
Let me back up. I’m going to assume for the moment that you’ve played some games. I don’t have a strict definition of “game” I’m working with here, but let's start with definition by example: Chess is a game, Hide and Seek is a game, Pandemic is a game, Apples to Apples is a game, Poker is a game, Magic: The Gathering is a game, Werewolf is a game, Among Us is a game, Hanabi is a game, Baseball is a game, Football (American or European) is a game. I’m not trying to do some narrow technical definition, I’m waving my hand wildly in the direction of a pretty natural category and I’m not planning to do anything weird with the edge cases.
Chess is a competitive game. In chess, you’re loosely simulating a war between two evenly matched factions. When you play chess, there will be one winner and one loser. Sometimes instead there will be a draw. Anything that is good for you when you are playing chess is bad for your opponent and vice versa. You can be mistaken about what is good or bad for you; you can offer trades of pieces to your opponent because you think it is a good trade for you and they can take the trade because they thi |
4cb8a172-c49f-4d5d-ac23-2d41af2dfcb4 | trentmkelly/LessWrong-43k | LessWrong | Evidential Correlations are Subjective, and it might be a problem
I explain (in layman's terms) a realization that might make acausal trade hard or impossible in practice.
Summary: We know that if players believe different Evidential Correlations, they might miscoordinate. But clearly they will eventually learn to have the correct Evidential Correlations, right? Not necessarily, because there is no objective notion of correct here (in the way that there is for math or physics). Thus, selection pressures might be much weaker, and different agents might systematically converge on different ways of assigning Evidential Correlations.
Epistemic status: Confident that this realization is true, but the quantitative question of exactly how weak the selection pressures are remains open.
What are Evidential Correlations, really?
Skippable if you know the answer to the question.
Alice and Bob are playing a Prisoner's Dilemma, and they know each other's algorithms: Alice.source and Bob.source.[1] Since their algorithms are approximately equally complex, neither of them can easily assess what the other will output. Alice might notice something like "hmm, Bob.source seems to default to Defection when it throws an exception, so this should update me slightly in the direction of Bob Defecting". But she doesn't know exactly how often Bob.source throws an exception, or what it does when that doesn't happen.
Imagine, though, Alice notices Alice.source and Bob.source are pretty similar in some relevant ways (maybe the overall logical structure seems very close, or the depth of the for loops is the same, or she learns the training algorithm that shaped them is the same one). She's still uncertain about what any of these two algorithms outputs[2], but this updates her in the direction of "both algorithms outputting the same action".
If Alice implements/endorses Evidential Decision Theory, she will reason as follows:
> Conditional on Alice.source outputting Defect, it seems very likely Bob.source also outputs Defect, thus my payoff will be low.
> But |
dc0ebdea-da28-41ec-baff-509b2051cda3 | trentmkelly/LessWrong-43k | LessWrong | Does AI care about reality or just its own perception?
Does a paperclip-maximizing AI care about the actual number of paperclips being made, or does it just care about its perception of paperclips?
If the latter, I feel like this contradicts some of the AI doom stories: each AI shouldn’t care about what future AIs do (and thus there is no incentive to fake alignment for the benefit of future AIs), and the AIs also shouldn’t care much about being shut down (the AI is optimizing for its own perception; when it’s shut off, there is nothing to optimize for).
If the former, I think this makes alignment much easier. As long as you can reasonably represent “do not kill everyone”, you can make this a goal of the AI, and then it will literally care about not killing everyone, it won’t just care about hacking its reward system so that it will not perceive everyone being dead. |
a72017b8-960a-4bf3-9341-1a907b8d54eb | trentmkelly/LessWrong-43k | LessWrong | (Maybe) A Bag of Heuristics is All There Is & A Bag of Heuristics is All You Need
Epistemic status: Theorizing on topics I’m not qualified for. Trying my best to be truth-seeking instead of hyping up my idea. Not much here is original, but hopefully the combination is useful. This hypothesis deserves more time and consideration but I’m sharing this minimal version to get some feedback before sinking more time into it. “We believe there’s a lot of value in articulating a strong version of something one may believe to be true, even if it might be false.”
This is a somewhat living document as I come back and add more ideas.
The Heuristics Hypothesis: A Bag of Heuristics is All There Is and a Bag of Heuristics is All You Need
* A heuristic is a local, interpretable, and simple function (e.g., boolean/arithmetic/lookup functions) learned from the training data. There are multiple heuristics in each layer and their outputs are used in later layers.
* It would be useful to treat heuristics as the fundamental object of study in interpretability as opposed to features.
* By “All there is,” I claim that a bag of heuristics is a useful model for neural network computation. Neural networks generalize when it is able to combine learned heuristics in ways not seen in the training data.[1]
* Note that this doesn’t mean that LLMs aren’t doing some form of search or planning, but rather that it would be useful to think about the search/planning process as being implemented through heuristics.
* By “All you need” I mean that learning lots of heuristics and how to combine them is all you need to get to AGI and beyond.
* I’m less confident about the AGI part, but I am fairly confident that we can get more powerful models through scaling, and that scaling is mostly about learning more heuristics and composing them. We can probably get much more powerful models that still mostly rely heuristics-based computation.
* If this is true, then we can answer theoretical alignment questions and forecast future capabilities by studying how heuristics are learn |
0d2a45f8-f089-46ed-bb59-05bdbd794ad1 | trentmkelly/LessWrong-43k | LessWrong | Which subreddits should we create on Less Wrong?
Less Wrong is based on reddit code, which means we can create subreddits with relative ease.
Right now we have two subreddits, Main and Discussion. These are distinguished not by subject matter, but by whether a post is the type of thing that might be promoted to the front page or not (e.g. a meetup announcement, or a particularly well-composed and useful post).
As a result, almost everything is published to Discussion, and thus it is difficult for busy people to follow only the subjects they care about. More people will be able to engage if we split things into topic-specific subreddits, and make it easy to follow only what they care about.
To make it easier for people to follow only what they care about, we're building the code for a Dashboard thingie.
But we also need to figure out which subreddits to create, and we'd like community feedback about that.
We'll probably start small, with just 1-5 new subreddits.
Below are some initial ideas, to get the conversation started.
Idea 1
* Main: still the place for things that might be promoted.
* Applied Rationality: for articles about what Jonathan Baron would call descriptive and prescriptive rationality, for both epistemic and instrumental rationality (stuff about biases, self-improvement stuff, etc.).
* Normative Rationality: for articles about what Baron would call normative rationality, for both epistemic and instrumental rationality (examining the foundations of probability theory, decision theory, anthropics, and lots of stuff that is called "philosophy").
* The Future: for articles about forecasting, x-risk, and future technologies.
* Misc: Discussion, renamed, for everything that doesn't belong in the other subreddits.
Idea 2
* Main
* Epistemic Rationality: for articles about how to figure out the world, spanning the descriptive, prescriptive, and normative.
* Instrumental Rationality: for articles about how to take action to achieve your goals, spanning the descriptive, prescript |
b6a06f39-0980-4a22-8cc2-c94b0541ff6d | trentmkelly/LessWrong-43k | LessWrong | My current framework for thinking about AGI timelines
At the beginning of 2017, someone I deeply trusted said they thought AGI would come in 10 years, with 50% probability.
I didn't take their opinion at face value, especially since so many experts seemed confident that AGI was decades away. But the possibility of imminent apocalypse seemed plausible enough and important enough that I decided to prioritize investigating AGI timelines over trying to strike gold. I left the VC-backed startup I'd cofounded, and went around talking to every smart and sensible person I could find who seemed to have opinions about when humanity would develop AGI.
My biggest takeaways after 3 years might be disappointing -- I don't think the considerations currently available to us point to any decisive conclusion one way or another, and I don't think anybody really knows when AGI is coming. At the very least, the fields of knowledge that I think bear on AGI forecasting (including deep learning, predictive coding, and comparative neuroanatomy) are disparate, and I don't know of any careful and measured thinkers with all the relevant expertise.
That being said, I did manage to identify a handful of background variables that consistently play significant roles in informing people's intuitive estimates of when we'll get to AGI. In other words, people would often tell me that their estimates of AGI timelines would significantly change if their views on one of these background variables changed.
I've put together a framework for understanding AGI timelines based on these background variables. Among all the frameworks for AGI timelines I've encountered, it's the framework that most comprehensively enumerates crucial considerations for AGI timelines, and it's the framework that best explains how smart and sensible people might arrive at vastly different views on AGI timelines.
Over the course of the next few weeks, I'll publish a series of posts about these background variables and some considerations that shed light on what their values are. I |
356b3a4f-dc5c-4e78-8c81-8e687ced2f40 | trentmkelly/LessWrong-43k | LessWrong | Caring about excellence
> “Anything worth doing, is worth doing right.”
― Hunter S Thompson
> “Be not professional in what you do, rather be excellent. Excellence has life in it — it has colors in it — it has sweetness in it — whereas professionalism is a dead corpse exuding the disgusting smell of obedience. Excellence requires no obedience, yet in excellence you act your best, without all the life-sucking efforts.”
― Abhijit Naskar
> “Excellence is the result of caring more than others think is wise, risking more than others think is safe, dreaming more than others think is practical, and expecting more than others think is possible.”
― Ronnie Oldham
> “I must address the topic of whether the effort required for excellence is worth it. I believe it is — the chief gain is in the effort to change yourself, in the struggle with yourself, and it is less in the winning than you might expect. Yes, it is nice to end up where you wanted to be, but the person you are when you get there is far more important.”
― Richard Hamming
> “You've baked a really lovely cake, but then you've used dog shit for frosting.”
― Steve Jobs
A good part of my felt motivation[1] for caring about EA, and for working to reduce existential risk, is a desire for good things to be excellent. I think there’s something healthy about that, and that we should talk about it more.
The feeling of caring about excellence
The quotes above all speak to my heart. They each capture something important, and they convey its essence, powerfully. In a way, I feel a kind of love for them — almost fierce in its admiration if I try to tune in to the flavour — as I do for all things I feel to be great. (This is strongest for the Naskar and the Hamming quotes, and weakest for the Oldham one, but it is there in some degree for all of them.)
When something doesn’t work very well, or isn’t a good fit for its position, or is just kind of mediocre, sometimes it’s frustrating; often it’s just … fine. There’s a lot of thi |
f720edff-ad93-4b4b-83b8-bdaa0ef49568 | trentmkelly/LessWrong-43k | LessWrong | Will we ever run out of new jobs?
A lot of the debate on long-term, structural technological unemployment can be summarized in three short statements:
* Concerns about the speed or scope of labor substitution have often been premature or exaggerated in the past.
* Labor substitution has been very positive for humanity so far. As many old tasks have been automated, human labor has moved into many new, previously non-existing tasks.
* The long-term question that decides structural technological unemployment is whether human labor can keep moving into new tasks.
Experts disagree on whether human labor can keep moving to new tasks indefinitely or not. In this blog post I will suggest a clear answer:
* Humans will run out of new tasks to move to when AGI surpasses humans in fluid general intelligence. Fluid general intelligence is the ability to reason, solve novel problems, and think abstractly, independent of acquired knowledge or experience. If and when AGI reaches this, it will be better at learning novel tasks than humans, and the interval between a new task appearing in the economy and its automation falls to zero.
Current AI models still have modest levels of fluid intelligence and there is no consensus timeline on AGI with strong fluid intelligence. Still, even if it may be difficult to agree on specific timelines, this underlines that the idea that we could eventually run out of new jobs to shift to should be taken seriously.
1. Automation anxiety is not novel
* As early as 1948 Norbert Wiener warned that “(...) the first industrial revolution, the revolution of the ‘dark satanic mills’, was the devaluation of the human arm by the competition of machinery. (...) The modern industrial revolution is similarly bound to devalue the human brain, at least in its simpler and more routine decisions. (...) taking the second revolution as accomplished, the average human being of mediocre attainments or less has nothing to sell that it is worth anyone's money to buy.”[1]
* Similarly, the US Co |
5517afd1-1774-40af-b99f-785f87443e38 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Precursor checking for deceptive alignment
This post is primarily an excerpt from “[Acceptability Verification: a Research Agenda](https://www.alignmentforum.org/posts/GeabLEXYP7oBMivmF/acceptability-verification-a-research-agenda)” that I think is useful enough on its own such that I’ve spun it off into its own post.
The central idea of this section in the original agenda document is to understand the necessary desiderata for doing [precursor checking](https://www.alignmentforum.org/posts/FDJnZt8Ks2djouQTZ/how-do-we-become-confident-in-the-safety-of-a-machine#Exploring_the_landscape_of_possible_training_stories) for [deceptive alignment](https://www.alignmentforum.org/s/r9tYkB2a8Fp4DN8yB/p/zthDPAjh9w6Ytbeks). The basic idea of precursor checking here is that, if you want to prevent deceptive alignment from ever arising in the first place—e.g. because you think it’ll be [too difficult to detect after the fact](https://www.alignmentforum.org/posts/nbq2bWLcYmSGup9aF/a-transparency-and-interpretability-tech-tree#7__Worst_case_training_process_transparency_for_deceptive_models)—you need to find some condition to look for instead that rules out the possibility of deceptive alignment.
In the language of this post, I’ll refer to the precursor we’re looking for as an *acceptability predicate,* with the idea being that it’s some predicate that determines whether a model is “acceptable” in the sense that it excludes the possibility of deceptive alignment. Thus, the goal here is to understand what desiderata such an acceptability predicate would have to satisfy.
Acceptability desiderata
========================
*What follows is a near-verbatim excerpt from “[Acceptability Verification: a Research Agenda](https://www.alignmentforum.org/posts/GeabLEXYP7oBMivmF/acceptability-verification-a-research-agenda).” Thus, the below writing is primarily from around 2020.*
Given that significant scaling of transparency and interpretability is possible, if we want to do acceptability verification we still need to figure out the very important question of what we need those interpretability tools to be able to understand about our models—that is, what acceptability predicate should we be checking? To answer this question, the first thing we need to do is define what makes an acceptability predicate good—that is, what are the desiderata that we want our acceptability predicate to satisfy?
We’ll start by making some definitions. Let M.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
be the full model space and we’ll define the following three predicates on that model space.
1. Let S:M→B represent whether the model actually avoids whatever problematic thing we’re trying to avoid (e.g. deception). The S here stands for “safe,” though note that it’s a very weak form of safety. The idea here is that we should get actual safety—that is, the model is not just non-catastrophic but also trying to do what we want—by adding in good (behavioral) performance on the original loss function. In other words, acceptability verification is just trying to solve the problem of [worst-case guarantees](https://ai-alignment.com/training-robust-corrigibility-ce0e0a3b9b4d)—but you still need standard outer alignment techniques for getting average-case guarantees. We’ll use the general term “non-catastrophic” to describe models that satisfy S, but in practice S could refer to any number of possible good properties that we’re trying to achieve. The most important thing about S is that we will purposefully be trying not to assume anything about models that don’t satisfy S—thus, we can also think about ¬S as referring to those models such that, once we get them, we don’t know how to verify any properties of them (e.g. because they might be trying to deceive us).
2. Let A:M→B be the acceptability predicate under consideration. That is, we want A to be some condition such that we think we’ll be able to get S by having an overseer attempt to verify whether or not A holds. A simple example of such an acceptability predicate might be something like [corrigibility](https://ai-alignment.com/corrigibility-3039e668638), though the canonical example that I usually imagine here is [myopia](https://docs.google.com/document/d/199Lkh78UA2uI9ljLEy_aWR8RBetQLRO6Kqo3_Omi1e4/edit#heading=h.1rpgen58ccyz). Note that A only refers to whether the model actually satisfies the condition—it says nothing about whether we’re able to detect that fact.
3. Let OA:M→B be the evaluation of whether the acceptability predicate A holds according to whatever sort of acceptability verification system we’re using, perhaps an [amplified overseer](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#2__Imitative_amplification___intermittent_oversight) or perhaps just engineers looking at transparency tools. The O here stands for “overseer,” and I will use the term overseer to refer to whatever system is implementing the OA predicate, but it doesn’t necessarily have to be any sort of actually coherent overseer. Additionally, note the difference between A and OA—A represents whether the model is actually acceptable, whereas OA represents whether the overseer *thinks* that the model is acceptable.
As an example of combining the above predicates, if we had a model M with ¬S(M)∧A(M)∧OA(M), that would be a catastrophic (e.g. deceptive) model that nevertheless satisfied the acceptability predicate and which the overseer was able to recognize as acceptable—since this would be a catastrophic model that the overseer nevertheless thinks is acceptable, we obviously want to try to avoid this situation and others like it.
Basic conditions
----------------
Now, we want to understand what desiderata A (and by extension OA) needs to satisfy to be a good acceptability predicate, either for relaxed adversarial training or any other similar acceptability verification setup. We’ll start with the following two basic conditions that we want to hold:
1. ∀M, A(M)→S(M) which we’ll call the **basic acceptability condition** and
2. ∀M, S(M)∧OA(M)→A(M) which we’ll call the **basic oversight condition.**
The basic acceptability condition says that if a model is acceptable, then it’s not catastrophic—that is, the set of acceptable models is contained within the potentially larger set of non-catastrophic models. Intuitively, the basic acceptability condition is saying that our acceptability predicate is good enough to do the basic thing we want it to do and in fact rule out whatever bad property (e.g. deception) we’re trying to avoid. This rules out the basic failure mode where we have ¬S∧A—that is, the failure mode of the model being acceptable without also being non-catastrophic. Note that this condition is only about the extent to which A is safe in theory—it says nothing about whether O can actually check A in practice.
The basic oversight condition is slightly more complicated, but says that, if a model is actually non-catastrophic, then when the overseer thinks the model is acceptable, it is in fact acceptable. Importantly, the basic oversight condition makes no guarantees about what might happen if the overseer is given a catastrophic model, as we don’t want to rely on our overseer being so good that it can’t be tricked by a sufficiently clever deceptive model—just that it’s good enough to imply acceptability for non-deceptive models. We could just assume that oversight always works as our desideratum here—but that would be a very strong condition that we’re unlikely to be able to actually satisfy.
What the basic oversight condition does rule out is the failure mode where we have S∧¬A∧OA—that is, the failure mode where the model is actually non-catastrophic, but nevertheless fails the acceptability predicate, and yet the overseer thinks that the model passes the acceptability predicate. This is a bit of a weird failure mode, but essentially the basic oversight condition is just guaranteeing that the overseer is actually trying to check for the given acceptability predicate as opposed to something else.

*A Venn diagram of the possible combinations of S,A,OA (with subscripts omitted for readability). Together, the basic oversight and basic acceptability conditions eliminate three of the eight combinations, leaving us with only five remaining partitions. Maroon indicates regions ruled out by the basic acceptability condition and dark blue those ruled out by the basic oversight condition.*
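As a quick sanity check on the figure’s claim, the two basic conditions can be verified mechanically: treat S, A, and OA as opaque booleans, enumerate all eight combinations, and drop the ones each condition rules out. This is only a toy enumeration (nothing here models actual training), sketched in Python below.

```python
from itertools import product

# Abstract each model to a triple of booleans (S, A, OA):
# S  = actually non-catastrophic, A = actually acceptable,
# OA = the overseer judges the model acceptable.
combos = list(product([True, False], repeat=3))

def violates_basic_acceptability(S, A, OA):
    # Basic acceptability condition: A(M) -> S(M), so A-and-not-S is ruled out.
    return A and not S

def violates_basic_oversight(S, A, OA):
    # Basic oversight condition: S(M) and OA(M) -> A(M), so S-and-OA-and-not-A is ruled out.
    return S and OA and not A

remaining = [c for c in combos
             if not violates_basic_acceptability(*c)
             and not violates_basic_oversight(*c)]

print(len(combos))     # 8 combinations in total
print(len(remaining))  # 5 partitions remain
```

The three eliminated combinations are ¬S∧A∧OA, ¬S∧A∧¬OA, and S∧¬A∧OA, matching the maroon and dark blue regions above.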
Path-dependency conditions
--------------------------
Given the basic acceptability and oversight conditions, what other desiderata do we want from a notion of acceptability? For this subsection, we’ll look more closely at the path-dependency of the training process, and as such we’ll start by considering the following diagram of what model space might look like given just the basic conditions.

*Unlike the previous diagram, this is not a Venn diagram. Rather, it is meant to represent a possible topology for model space with the various partitions placed in a suggestive ordering with descriptive names assigned to them.*
Now that we have a basic diagram of model space, we need to consider what various different paths that our training process could take through that model space might look like. For simplicity’s sake, I’ll talk about such paths as gradient descent paths, but essentially the same desiderata should apply to any [local optimization process](https://intelligence.org/learned-optimization/).
With the basic oversight condition, we were careful not to assume anything about what the overseer would do for catastrophic models. We’ll want to continue that trend here and not assume anything about what the training process will do inside of either of the detectable or undetectable catastrophic model spaces—what we’ll call the *danger zone*—meaning that we need our training process to entirely avoid that space. To do that, we need there to be (approximately) no paths that the training process (e.g. gradient descent on the relaxed adversarial training loss function) might take from initialization to the danger zone (the detectable or undetectable catastrophic model space). This is a pretty all-encompassing condition, though, so to make it more manageable we’ll split it up into the following three sub-conditions.
1. It is not possible to get from the effective model space to the danger zone in a single step[[1]](#fn-93AJLt5knub8tyF44-1) of the training process (in other words, gradient descent must always pass through the *buffer zone* to get to the danger zone from the effective model space)—what we’ll call the **buffer condition.**[[2]](#fn-93AJLt5knub8tyF44-2)
2. Inside the overseer-disallowed model space and the acceptability-disallowed model space (the buffer zone) each step of the training process always takes the model further from the danger zone. In other words, the gradient inside the buffer zone has to always point away from the danger zone[[3]](#fn-93AJLt5knub8tyF44-3)—which is why we’ll call this the **oversight gradient condition.**
3. The training process is never initialized in the danger zone—what we’ll call the **safe initialization condition.** The safe initialization condition is necessary for acceptability verification to work—but it’s worth noting that it’s the only condition that is entirely independent of the acceptability predicate being considered, so it isn’t really an acceptability desideratum.
Taken together—in combination with the basic acceptability and oversight conditions—the above three conditions should ensure that our training process won’t ever produce catastrophic models. To see that, we’ll walk through a few examples of how a catastrophic model might emerge and see how they violate the various conditions. We’ll start with the danger scenario where the model is initialized in the effective model space, but then is taken directly into the undetectable catastrophic model space by the training process.

*A simple diagram of one way in which acceptability verification might fail. The red arrow indicates the trajectory that the training process takes through model space with the X indicating where it was initialized.*
What’s so dangerous about this scenario is that OA holds throughout the entire path—there is no point at which the overseer stops thinking that the model is acceptable, despite it becoming catastrophic. So what’s wrong here? The problem here is that this path fails the *buffer condition* because it gets from the effective model space to the danger zone without ever passing through the buffer zone—and thus without ever giving OA the chance to catch the problem before the model becomes catastrophic. This failure mode is the reason that having that buffer is so important and one of the strongest arguments for having the overseer check an acceptability predicate rather than try to look for deception directly, as we can explicitly construct the acceptability predicate to produce a buffer around the catastrophic models such that we can hope to be able to avoid deceptive alignment without ever having to actually instantiate it.
One particularly pernicious way in which this sort of a failure might occur is if, early in training, it looks like there’s always a buffer around bad behavior such as deception, because whenever the model starts acting deceptive, it isn’t yet good enough at hiding that deception, and we detect it. However, once the model starts getting more competent, and is able to hide its deception very effectively immediately upon starting to become deceptive again, that buffer could vanish, resulting in the model very quickly becoming undetectably catastrophic without having to pass through the detectable catastrophic model space. This is one reason why I think it’s so important that we not treat the detectable catastrophic model space as part of the buffer zone, but rather consider it a part of the danger zone—even if early in training the detectable catastrophic model space is something we can deal with, that might very quickly stop being the case.

*A diagram of another way in which acceptability verification might fail. The red arrow indicates the path of the training process through model space.*
As a second example of a danger scenario, consider the above diagram where the model enters the detectable catastrophic model space before entering the undetectable catastrophic model space. Unlike in the previous scenario, OA does detect that the model has become unacceptable, but the problem here is that it doesn’t actually cause the trajectory to turn around and instead the model’s unacceptability just becomes undetectable. In this scenario, we do get to see that the overseer starts detecting something problematic—but then that signal just goes away when the unacceptability becomes undetectable. From the outside, this could look a lot like the overseer detecting a problem and then the training process fixing that problem—the issue being that the way it gets fixed is by making the problem undetectable rather than by actually removing the problem.
Theoretically, we could say that the issue here is that the gradient in the detectable catastrophic model space shouldn’t point towards the undetectable catastrophic model space—but that’s a very difficult condition to satisfy as it requires us to have guarantees about the gradient in a domain where the model might be actively deceptive. As a simple explanation of why we don’t want to trust the gradient in the detectable catastrophic model space, consider what the gradient in a relaxed adversarial training setting is doing. Fundamentally, it’s finding a direction in model space that will maximally increase the overseer’s evaluation of the model’s acceptability. As a result, however, if you have a deceptive model that is trying to trick the overseer into thinking it’s acceptable, but is currently doing a bad job of it—as is likely to be happening in the detectable catastrophic model space—then the gradient is likely to just point in the direction of causing the model to become better at tricking the overseer.[[4]](#fn-93AJLt5knub8tyF44-4)
Since we don’t want to trust the gradient in the undetectable catastrophic model space, we need to avoid entering it altogether—which is where the buffer condition comes in. With the buffer condition, we explicitly specified that we wanted any gradient descent path to have to pass through the buffer zone before getting to either part of the danger zone—including the detectable catastrophic model space, not just the undetectable catastrophic model space. Thus, we want to say that this danger scenario also violates the *buffer condition.*
I am trying to hammer home the importance of entirely avoiding the detectable catastrophic model space, not just the undetectable catastrophic model space—both in the discussion of the previous failure mode and in this one—because I worry that people might naively think it’s reasonable to deploy a system which detects and corrects actual deception, but not reasonable to try and detect and correct some other acceptability predicate that just implies an absence of deception. But it should hopefully be clear by now why that’s such a mistake: trying to fix deception after it arises is likely to be a much harder problem than just trying to prevent it from arising in the first place. Just to catalog some reasons for the difficulty of checking for deception directly:[[5]](#fn-93AJLt5knub8tyF44-5)
* [Rice’s theorem](https://en.wikipedia.org/wiki/Rice%27s_theorem) says it’s undecidable to check for deceptive behavior directly;
* checking for deception directly only guarantees that the model isn’t deceptive yet, not that it won’t [later become deceptive](https://www.alignmentforum.org/posts/iydwbZhATANhjoGP7/more-variations-on-pseudo-alignment);
* trying to detect deception after the fact means you have to fight against the model trying to hide that deception from you;
* even if initially it looks like we are successfully able to detect deception, once the model becomes competent enough that it’s good at hiding its deception, the next time it becomes deceptive that could stop working; and
* if we try to get a gradient towards the overseer thinking the model is non-deceptive from a deceptive model, it might just push the model in the direction of being better at hiding its deception.

*Another diagram of a way in which acceptability verification might fail. As before, the red arrow indicates the path of the training process through model space.*
As a third scenario, consider the above trajectory where the model passes through the buffer zone but still ends up entering the danger zone. The condition that fails in this scenario is the *oversight gradient condition* because the problem is that despite entering the buffer zone before entering the danger zone, the gradient that we get in the buffer zone doesn’t actually take us back to the effective model space but instead just takes us into the danger zone. Unlike in the previous case—where we simply wanted to avoid entering the detectable catastrophic model space at all—the overseer-disallowed and acceptability-disallowed model spaces shouldn’t be inherently dangerous as they shouldn’t be home to deceptive models. Thus, the hope is that we should actually be able to get a good gradient in those areas, which is why we want to say that the problem here is the gradient not taking us out of the buffer zone, rather than the problem being entering the buffer zone in the first place.
Finally, if we want to be rigorous, there is a third danger scenario here where the model is simply initialized directly into the danger zone—which fails the *safe initialization condition.* Though it might seem like this condition is almost certain to hold, there are some situations in which it might not. First, the safe initialization condition could be a problem for any sort of fine-tuning setup where the idea is to first train some simple model (an autoregressive language model like [GPT-3](https://arxiv.org/abs/2005.14165), for example) and then fine-tune it to try to make it safe after the fact (e.g. [fine-tuning on human feedback](https://openai.com/blog/learning-to-summarize-with-human-feedback)). Second, even though it might seem like randomly initialized neural networks are almost certain to be safe, [the multi-prize lottery tickets hypothesis](https://arxiv.org/abs/2103.09377) seems to imply that the final network produced by deep learning might essentially already be present as a subnetwork at initialization. Thus, the safe initialization condition could fail in such a situation if there is already a deceptive subnetwork at initialization that just gets amplified by gradient descent during training. However, I find this story pretty unlikely—for a model to be deceptive, it needs to have an understanding of the training process that it’s in, which seems pretty improbable to already exist at initialization, even if the basic structure that might eventually become a deceptive model does already exist. Thus, at least if we’re primarily worried about deception, I think it’s pretty likely that at least random initializations will in fact be safe.
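To make the path-dependency conditions slightly more concrete, here is a deliberately oversimplified sketch in which model space is collapsed to a single coordinate and the effective, buffer, and danger regions are just intervals. All of the numbers and region boundaries are made up; the only point is that the buffer, oversight gradient, and safe initialization conditions each correspond to a different check on a training trajectory.

```python
# Toy 1-D "model space": positions 0-10, with made-up region boundaries.
def region(x):
    if x < 3.0:
        return "effective"   # acceptable and recognized as acceptable by the overseer
    if x < 6.0:
        return "buffer"      # overseer- or acceptability-disallowed, but not catastrophic
    return "danger"          # detectable or undetectable catastrophic models

def check_trajectory(traj):
    """Return the first violated path-dependency condition, or None."""
    if region(traj[0]) == "danger":
        return "safe initialization condition"
    for prev, curr in zip(traj, traj[1:]):
        # Buffer condition: no single step from the effective space into the danger zone.
        if region(prev) == "effective" and region(curr) == "danger":
            return "buffer condition"
        # Oversight gradient condition: inside the buffer zone, each step moves away from
        # danger (in this toy setup, "away from danger" just means the coordinate decreases).
        if region(prev) == "buffer" and curr >= prev:
            return "oversight gradient condition"
    return None

print(check_trajectory([2.0, 2.5, 6.5]))        # buffer condition
print(check_trajectory([2.0, 4.0, 5.0, 7.0]))   # oversight gradient condition
print(check_trajectory([7.5, 5.0, 2.0]))        # safe initialization condition
print(check_trajectory([2.0, 4.0, 3.5, 2.5]))   # None (no condition violated)
```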
Competitiveness conditions
--------------------------
Putting it all together, what the above desiderata give us is a guarantee that our training process will be able to avoid catastrophic models. However, [safety is only half of the picture for any alignment proposal—the other half is competitiveness](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai). Thus, what we are still missing are desiderata to guarantee that acceptability verification on the given acceptability predicate will actually be competitive. Adding competitiveness gives us our last two desiderata:
**The performance competitiveness condition:** the best performing model in the effective model space is not too much worse than the best performing model in the full non-catastrophic model space. Effectively, the performance competitiveness condition says that we aren’t sacrificing a bunch of performance by losing out on the overseer-disallowed and acceptability-disallowed model spaces (the buffer zone)—spaces which are in fact safe but are unfortunately ruled out by our acceptability verification setup.[[6]](#fn-93AJLt5knub8tyF44-6) Note that there is some tension between this condition and the buffer condition—the buffer condition wants us to make the buffer zone large, whereas the performance competitiveness condition wants us to make it small.
One way of thinking about the performance competitiveness condition is that it says that our acceptability predicate can’t be too specific—if our acceptability predicate specifies exactly how the model is supposed to work, then you’ll never be able to train models that are more competent than you know how to exactly specify, defeating the entire point of doing machine learning in the first place.[[7]](#fn-93AJLt5knub8tyF44-7)
**The training competitiveness condition:** adding acceptability verification to the loss function does not make training too much more difficult. In the case of relaxed adversarial training, we need the addition of the L_{M,A} = −P_overseer(A(M)) term to the loss to not prevent successful training or otherwise slow down training by too much.
To satisfy the training competitiveness condition, we need the acceptability predicate to not be too hard to verify such that the overseer can actually do so in a reasonably efficient manner. This is likely to depend heavily on what sort of transparency tools we end up having access to and how simple and straightforward it is to determine whether or not a given acceptability predicate holds given the output of those tools. Ideally, we want to find acceptability predicates that require as few—and as simple—things from our transparency tools as possible.
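For concreteness, here is one way the acceptability term might be folded into an ordinary loss, assuming the overseer can be queried for a (differentiable) probability that A(M) holds. The function name, the weighting coefficient, and the placeholder values are all assumptions for illustration, not anything specified in the post.

```python
import torch

def relaxed_adversarial_loss(task_loss: torch.Tensor,
                             p_acceptable: torch.Tensor,
                             weight: float = 1.0) -> torch.Tensor:
    """Ordinary task loss plus the acceptability term.

    `p_acceptable` stands in for P_overseer(A(M)), the overseer's estimate that the
    acceptability predicate holds; subtracting it implements the
    L_{M,A} = -P_overseer(A(M)) term described above, scaled by `weight`.
    """
    return task_loss - weight * p_acceptable

# Illustrative placeholder values, not real measurements:
task_loss = torch.tensor(0.83)     # loss on the original objective
p_acceptable = torch.tensor(0.97)  # overseer's estimate that A(M) holds
print(relaxed_adversarial_loss(task_loss, p_acceptable, weight=0.5))  # tensor(0.3450)
```

In this framing, the training competitiveness condition is roughly the requirement that computing `p_acceptable` is cheap and stable enough that the extra term does not meaningfully slow training down.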
Putting all the different conditions together, we get the following six desiderata for what we need a good notion of acceptability to satisfy:
1. **The basic acceptability condition:** if a model is acceptable, then it’s non-catastrophic.
2. **The basic oversight condition:** for non-catastrophic models, if the overseer thinks the model is acceptable, then it is actually acceptable.
3. **The buffer condition:** it is not possible to get from the effective model space to the danger zone in a single step.
4. **The oversight gradient condition:** inside the buffer zone, the gradient always points away from the danger zone.
5. **The performance competitiveness condition:** the best performing model in the effective model space is not too much worse than the best performing model in the full non-catastrophic model space.
6. **The training competitiveness condition:** adding acceptability verification to the loss function does not make training too much more difficult.
Note that we’ve left off the safe initialization condition since, as we’ve previously noted, unlike all the other conditions given above, it isn’t affected at all by the acceptability predicate. Thus, it’s not really an acceptability desideratum, just a general condition for acceptability verification to work.
---
1. In practice, since we might not be able to do oversight every single gradient descent step, it is likely it will be necessary to expand the buffer condition beyond a single gradient descent step to multiple gradient descent steps such that the condition becomes that it always takes at least N gradient descent steps to get from the effective model space to the danger zone. That being said, there is some evidence that very large models don’t actually need very many steps of gradient descent—see “[Scaling Laws for Neural Language Models](https://arxiv.org/abs/2001.08361).” [↩︎](#fnref-93AJLt5knub8tyF44-1)
2. One thing which makes the buffer condition particularly tricky is that it requires constraining how large the resulting step in *model* space can be for each individual step in *weight* space. In particular, if there are major discontinuities in the mapping from weight space to model space, then the buffer condition could get very hard. [↩︎](#fnref-93AJLt5knub8tyF44-2)
3. For stochastic gradient descent, having the gradient always point away from the danger zone is probably too strong of a condition—but hopefully we should be able to get within some epsilon of always. [↩︎](#fnref-93AJLt5knub8tyF44-3)
4. Furthermore, if the deceptive model is sufficiently advanced, it might even be able to do [gradient hacking](https://www.alignmentforum.org/posts/uXH4r6MmKPedk8rMA/gradient-hacking) to explicitly direct the gradient towards the undetectable catastrophic model space. [↩︎](#fnref-93AJLt5knub8tyF44-4)
5. Note that these are only reasons why you shouldn’t try to directly check for deception mechanistically—you could still try to check for deception behaviorally, but that runs into all the problems discussed previously with behavioral incentives, such as the fact that [a deceptive model will actively be looking for ways to check whether it’s in training or deployment](https://www.alignmentforum.org/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment#Hard_cases). [↩︎](#fnref-93AJLt5knub8tyF44-5)
6. We aren’t trying to compete with the deceptive models, since in some sense we can’t—if your model space is such that the deceptive models are just better, then no prosaic AI alignment scheme over that model space is going to be competitive—meaning that you have to either find a new model space or accept that you just have to pay whatever [alignment tax](https://www.effectivealtruism.org/articles/paul-christiano-current-work-in-ai-alignment/) is created by the competitiveness differential between deceptive and non-deceptive models. [↩︎](#fnref-93AJLt5knub8tyF44-6)
7. This is the same basic concept behind Krakovna et al.’s “[Specification gaming: the flip side of AI ingenuity](https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity),” in which the authors observe that the general specification gaming problem is a side-effect of the fact that the reason we use ML in the first place is precisely because we don’t know how to specify everything about how a task should be accomplished. [↩︎](#fnref-93AJLt5knub8tyF44-7) |
4970bf84-1653-4940-b17d-2ea970c43f0f | trentmkelly/LessWrong-43k | LessWrong | On R0
Epistemic Status: As with all of my Coronavirus posts, I am not any kind of expert. I am a person thinking out loud, who will doubtless make many mistakes. Treat accordingly. However, the concrete policy proposal contained herein seems right and I endorse it strongly.
Partly a response to (Overcoming Bias): Beware R0 Variance
Previously (not required): Taking Initial Viral Load Seriously
Also related to Covid-19: Coronavirus is Here, An Open Letter To The Congregation Regarding The Upcoming Holiday, Let My People Stay Home,
Ultimately, it’s always all been about R0.
If you get and keep R0 substantially below one, infections fall off. Covid-19 is squashed.
If you can keep R0 below one and let normal life happen, life can return to normal. If you can’t, life can’t.
If R0 remains above one, infections continue to rise until something changes.
What will it take to do that?
Interventions that reduce contact and exposure, or reduce the danger of each contact or exposure, or change their dynamics in useful ways, reduce R0. Every person already infected is almost certainly immune (at least for now) which also reduces R0.
You need some combination of interventions and immunity from previous infection that sufficiently reduces the initial R0.
This is not an o-ring production function. You are not as vulnerable as your weakest link. Not everything you do has to be correct. This isn’t a designed puzzle where there are exactly enough interventions available to solve the problem and you need to do them all. Instead, we have a variety of possible actions and need to pick the cheapest basket that reliably accomplishes the mission.
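As a toy illustration of that “cheapest basket” framing (every number below is made up, and real intervention effects are neither independent nor this cleanly multiplicative): if each intervention multiplies R0 by some factor and prior infection removes a fraction of contacts, you can brute-force the cheapest combination that pushes the effective R below one.

```python
from itertools import combinations

R0 = 3.0                 # hypothetical initial R0 for some region
immune_fraction = 0.05   # hypothetical share of contacts already immune

# Hypothetical interventions: (name, cost in arbitrary units, multiplier applied to R)
interventions = [
    ("close large gatherings", 1, 0.85),
    ("universal mask wearing", 1, 0.80),
    ("close schools", 3, 0.80),
    ("full lockdown", 10, 0.40),
]

def effective_R(basket):
    r = R0 * (1 - immune_fraction)
    for _name, _cost, multiplier in basket:
        r *= multiplier
    return r

best = None
for k in range(len(interventions) + 1):
    for basket in combinations(interventions, k):
        if effective_R(basket) < 1.0:
            cost = sum(c for _, c, _ in basket)
            if best is None or cost < best[0]:
                best = (cost, basket)

cost, basket = best
print(f"Cheapest basket (cost {cost}): {[name for name, _, _ in basket]}, "
      f"effective R = {effective_R(basket):.2f}")
# With these made-up numbers: close large gatherings + full lockdown, cost 11, R of about 0.97.
```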
What Was Initial R0?
As Robin notes, R0 does not start out and never is one number. It depends on the surrounding disease environment. In some places it will start out very high. I’ve seen plausible estimates for New York City as high as 8. In other places R0 might be lower than one to begin with, because people barely interact with each ot |
4013df1b-63bd-4bba-8f46-1ebce9bced44 | trentmkelly/LessWrong-43k | LessWrong | Trends in Training Dataset Sizes
Summary: We collected a database of notable ML models and their training dataset sizes. We use this database to find historical growth trends in dataset size for different domains, particularly language and vision.
Key takeaways
* We collected over 200 notable ML models and estimated their training dataset sizes in number of data points.
* Vision and language datasets have historically grown at 0.1 and 0.2 orders of magnitude (OOM) per year, respectively.
* There seems to be some transition around 2014-2015, after which training datasets became much bigger and (in the case of language) smaller datasets disappeared. This might be just an artefact of our small sample size.
* We also provide trends for games, speech, recommendation and drawing, but since our sample size is very small in these domains we would advise some level of scepticism.
Figure 1: Training datasets for language (left) and vision (right).
| Domain | Scale (data points) | Yearly growth (OOM) | Yearly growth (OOM) (95% CI) | # systems |
| --- | --- | --- | --- | --- |
| Language | 1e2 - 2e12 | 0.22 | [0.18; 0.28] | 79 |
| Vision | 2e3 - 3e9 | 0.09 | [0.08; 0.11] | 55 |
| Speech | 9e2 - 3e12 | 0.21 | [0.17; 0.30] | 13 |
| Games | 7e5 - 4e11 | 0.09 | [0.08; 0.15] | 12 |
| Recommendation | 1e8 - 1e10 | 0.05 | [0.00; 0.47] | 11 |
| Drawing | 6e4 - 4e9 | 0.43 | [0.17; 0.64] | 10 |
Table 1: Summary of trends for each domain. Scale is the maximum and minimum observed dataset size, and yearly growth is the slope of the best exponential fit (and 95% CI).
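For readers wanting to reproduce the “yearly growth (OOM)” column conceptually: one simple way to do such an exponential fit is to regress log10(dataset size) on publication year, in which case the slope is orders of magnitude per year. The data points below are made up purely for illustration; this is not the post’s actual database or exact fitting procedure.

```python
import numpy as np

# Hypothetical (year, training dataset size in data points) pairs -- illustrative only.
years = np.array([2012, 2014, 2016, 2018, 2020, 2022])
sizes = np.array([2e8, 6e8, 2e9, 9e9, 3e10, 2e11])

# Exponential growth is linear in log10 space; the slope is OOM per year.
slope, intercept = np.polyfit(years, np.log10(sizes), 1)
print(f"Estimated growth: {slope:.2f} OOM/year")
print(f"Implied doubling time: {np.log10(2) / slope:.1f} years")
```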
Introduction
Data is one of the main ingredients of Machine Learning (ML) models. To understand the progress of ML over time, it is necessary to understand the evolution of training datasets.
In this document, we characterise the historical trends in training dataset size for different domains, using a custom dataset of ML models. In the Methods section, we explain our method for quantifying training dataset size, as well as the inclusion criteria for our database. In the Dataset size trends section, we present the trends in dataset size for different domains.
Methods
Mea |
c3360f20-c0d7-4c99-9d75-9624e71874f9 | trentmkelly/LessWrong-43k | LessWrong | Three more classes coming from Stanford of interest here
http://www.pgm-class.org/ - Probabilistic Graphical Models
http://www.nlp-class.org/ - Natural Language Processing
http://www.game-theory-class.org/ - Self-explanatory
ETA:
http://infotheory-class.org/ |
d4d87efb-a4fd-4928-b026-bcbeb0565e80 | trentmkelly/LessWrong-43k | LessWrong | François Chollet on the limitations of LLMs in reasoning
François Chollet, the creator of the Keras deep learning library, recently shared his thoughts on the limitations of LLMs in reasoning. I find his argument quite convincing and am interested to hear if anyone has a different take.
> The question of whether LLMs can reason is, in many ways, the wrong question. The more interesting question is whether they are limited to memorization / interpolative retrieval, or whether they can adapt to novelty beyond what they know. (They can't, at least until you start doing active inference, or using them in a search loop, etc.)
>
> There are two distinct things you can call "reasoning", and no benchmark aside from ARC-AGI makes any attempt to distinguish between the two.
>
> First, there is memorizing & retrieving program templates to tackle known tasks, such as "solve ax+b=c" -- you probably memorized the "algorithm" for finding x when you were in school. LLMs *can* do this! In fact, this is *most* of what they do. However, they are notoriously bad at it, because their memorized programs are vector functions fitted to training data, that generalize via interpolation. This is a very suboptimal approach for representing any kind of discrete symbolic program. This is why LLMs on their own still struggle with digit addition, for instance -- they need to be trained on millions of examples of digit addition, but they only achieve ~70% accuracy on new numbers.
>
> This way of doing "reasoning" is not fundamentally different from purely memorizing the answers to a set of questions (e.g. 3x+5=2, 2x+3=6, etc.) -- it's just a higher order version of the same. It's still memorization and retrieval -- applied to templates rather than pointwise answers.
>
> The other way you can define reasoning is as the ability to *synthesize* new programs (from existing parts) in order to solve tasks you've never seen before. Like, solving ax+b=c without having ever learned to do it, while only knowing about addition, subtraction, multiplication and |
67d6931d-6a21-4900-8cbe-1f1dfaad85bd | trentmkelly/LessWrong-43k | LessWrong | Dropout can create a privileged basis in the ReLU output model.
Abstract
This post summarises some interesting results on dropout in the ReLU output model. The ReLU output model, introduced here is a minimal toy model of a neural network which can exhibit features being in 'superposition', rather than a single 'neuron' (dimension) being devoted to each feature. This post explores what effects dropout has on the alignment of the basis in an otherwise rotation invariant model.
This particular question was listed on Neel Nanda's 'Concrete problems in mechanistic interpretability' Sequence. I work in machine learning, and while I've always thought that mech-interp was an interesting direction, it seems particularly urgent recently for reasons which I think will be clear to anyone reading LW. I originally started playing with this question because I was considering applying to this; I no longer think I will do so this year for personal reasons, but I thought that my results were interesting enough to be worth writing up. I have never posted on LW before, but as the concrete problems list was posted here I thought it was a better place to publish a write-up, rather than my personal blog which I don't think gets very much visibility.[1]
Introduction
The ReLU output model is a very simple model of a mapping from neural network 'features' to concrete 'hidden dimensions', which can exhibit superposition. The model is as follows, for a vector of 'features' x ∈ R^n and a hidden vector h ∈ R^m, with n >> m.
h = Wx
x′ = ReLU(W^T h + b)
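A minimal sketch of this model in code (the dimensions, initialization scale, and dropout probability are arbitrary choices for illustration; the dropout on the hidden vector h is the modification this post studies):

```python
import torch
import torch.nn as nn

class ReLUOutputModel(nn.Module):
    """Toy model: n features compressed into m << n hidden dimensions and back."""
    def __init__(self, n_features: int = 64, m_hidden: int = 8, dropout_p: float = 0.0):
        super().__init__()
        self.W = nn.Parameter(torch.randn(m_hidden, n_features) * 0.1)
        self.b = nn.Parameter(torch.zeros(n_features))
        self.dropout = nn.Dropout(dropout_p)  # dropout applied to the hidden vector h

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = x @ self.W.T                         # h = W x (per row of the batch)
        h = self.dropout(h)
        return torch.relu(h @ self.W + self.b)   # x' = ReLU(W^T h + b)

model = ReLUOutputModel(dropout_p=0.1)
x = torch.rand(32, 64)   # random feature vectors (the original setup trains on sparse features)
print(model(x).shape)    # torch.Size([32, 64])
```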
This model is of interest as an extremely simple test case of how a model can represent a large set of (hypothetical) disentangled 'features' in a much lower dimensional vector space by placing those vectors into 'superposition'. This model, and observations of superpositions in it, were first introduced and studied in this article.
This model is rotation invariant in the sense that we can apply an orthonormal matrix O to the weights without changing anything; x′ = ReLU(W^T O^T h + b) = ReLU(W^T O^T O W x + b) = ReLU(W^T W x + b). Problem 4.1 |
9babc4c1-d092-427c-ae94-381dd671052f | trentmkelly/LessWrong-43k | LessWrong | Dark Skies Book Review
Book review: Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity, by Daniel Deudney.
Dark Skies is an unusually good and bad book.
Good in the sense that 95% of the book consists of uncontroversial, scholarly, mundane claims that accurately describe the views that Deudney is attacking. These parts of the book are careful to distinguish between value differences and claims about objective facts.
Bad in the senses that the good parts make the occasional unfair insult more gratuitous, and that Deudney provides little support for his predictions that his policies will produce better results than those of his adversaries. I count myself as one of his adversaries.
Dark Skies is an opposite of Where Is My Flying Car? in both style and substance.
I read the 609 pages of Where Is My Flying Car? fast enough that the book seemed short. The 381 pages of Dark Skies felt much longer. It's close to the most dry, plodding style that I'm willing to tolerate. Deudney is somewhat less eloquent than a stereotypical accountant.
The book is nominally focused on space colonization and space militarization. But a good deal of what Deudney objects to is technologies that are loosely associated with space expansion, such as nanotech, AI, and genetic modifications. He aptly labels this broader set of adversaries as Promethean.
It seems primarily written for an audience who consider it obvious that technological progress should be drastically slowed down or reversed. I.e. roughly what Where Is My Flying Car describes as Green fundamentalists.
War
One of Deudney's more important concerns is about how space expansion will affect war.
> Because the same powerful technologies enabling space expansion also pose so many existential threats, whether and how humans expand into space assumes a central role in any consideration of humanity's survival prospects.
Deudney imagines that the primary way in which war will be minimized is via arms control and increased po |
4054a4e1-5c2f-4a63-9d6c-438f671ef2df | StampyAI/alignment-research-dataset/special_docs | Other | CHAI Newsletter 2017
5/4/22, 2:38 PM CHAI Newsletter - Q3 2017
https://mailchi.mp/03333358c1 12/chai-newsletter -q3-2017 1/5
This is the inaugural issue of the Center for Human-Compatible AI’s quarterly newsletter. CHAI’s mission
is to develop the conceptual and technical foundations required to build provably beneficial AI systems.
CHAI was inaugurated in fall 2016 and is funded by the Open Philanthropy Project, the Future of Life
Institute, the Center for Long-Term Cybersecurity, and the Leverhulme Trust. CHAI investigators include
faculty members at Berkeley, Cornell, and Michigan.
In this issue we’ll highlight key research from our faculty and students, round up recent media spotlights,
and share insights from our 2017 Annual Workshop.
Research highlights
Obedience, off-switches, and cooperative inverse
reinforcement learning
Last year, PhD student Dylan Hadfield-Menell and his advisors formalized the
cooperative inverse reinforcement learning (CIRL) problem, in which a human
and robot are both rewarded according to the human’s reward function, but
the robot starts off not knowing what this reward function is. Since CIRL
makes it in both players’ best interests for the robot to learn how to act in a
way that fulfills the human’s reward function, this setup leads to desirable
cooperative learning behaviors such as active teaching, active learning, and
communicative actions.
This year, meaningful progress has been made within the CIRL setup.
Should Robots be Obedient? (Milli et al., 2017) explores the tradeoffs of robots
being perfectly obedient vs. exhibiting independent judgment. The research
finds that when a human is imperfectly rational and a robot attempts to infer
the human’s underlying preferences rather than being perfectly obedient, the
robot performs its function better than it would by following the human’s
orders precisely. In these situations, having a well-specified model of the
features the human cares about or the human’s level of rationality are essential
for the robot to exhibit good judgment in strategically disobeying orders.
In the example below (extreme for the purpose of illustration): the human
wants a slice of cake, but in a slip of the tongue, asks for a vial of poison
instead. In this case, obedience becomes harmful.
Milli et al., 2017
The Off-Switch Game (Hadfield-Menell et al., 2017) presents meaningful
progress on the corrigibility problem. An intuitive desideratum for a corrigible
AI system is the ability for a human to turn the system off without the system
interfering. The off-switch game extends the CIRL setup: we have a robot R
with an off switch that it can disable, and a human H that can press the robot’s
off switch. The idea of a human-usable off-switch is difficult to preserve, since
reward-maximizing agents will typically have strong incentives for self-
preservation. But in a CIRL environment where both H and R are maximizing a
shared reward function and only H knows what that reward function looks like,
R can only update its prior beliefs about the reward function by observing H’s
actions. If H wants to shut down R, and R doesn’t know the reward
parameters, then R will want this to happen, as a way to prevent errors arising
from its own lack of knowledge!
a = take an action, s = hit the off switch, w(a) = wait for the human to decide.
(Hadfield-Menell et al., 2017)
The key insight is that the more uncertain R is about the reward function (i.e.
the more variance there is in R’s prior), the more corrigible it is -- it is more
amenable to letting H switch it off. The intuition is simple: if R believes that H is
a rational actor, and R is uncertain about the reward function, then R is more
likely to defer to H in the decision to hit the off switch. On the other hand, if R
believes it has precise knowledge of the reward function, then it knows exactly
what it should do. This might include proceeding with action a and preventing
the human from hitting the off switch. If we can have this failsafe in our back
pocket, we should be very happy. See the talk here.
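As a rough numerical illustration of that insight (this sketch is not from the newsletter or the paper; the Gaussian prior and its parameters are arbitrary): if the robot's prior over the utility U of action a is Gaussian and the human is assumed rational, then acting immediately is worth E[U], switching itself off is worth 0, and deferring is worth E[max(U, 0)]. The advantage of deferring over the best unilateral option grows with the robot's uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

def off_switch_payoffs(mu, sigma, n_samples=200_000):
    """Monte Carlo payoffs in the off-switch game under a Gaussian prior on U."""
    U = rng.normal(mu, sigma, n_samples)
    act_now = U.mean()                  # take action a without consulting the human
    switch_off = 0.0                    # hit the off switch unilaterally
    defer = np.maximum(U, 0.0).mean()   # w(a): a rational human allows a only when U > 0
    return act_now, switch_off, defer

for sigma in [0.1, 1.0, 3.0]:
    act_now, switch_off, defer = off_switch_payoffs(mu=0.5, sigma=sigma)
    advantage = defer - max(act_now, switch_off)
    print(f"sigma={sigma}: act={act_now:.2f}  defer={defer:.2f}  "
          f"advantage of deferring={advantage:.2f}")
```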
Of course, a perfect “CIRL solver” robot can still fail to be corrigible—that is to
say, resist being shut down in a detrimental way—if its engineers have
misspecified the CIRL problem that it’s supposed to solve, for example, with
an incorrect model of the human’s rationality, or an incorrect model of how the
robot should update its beliefs about human preferences over time. Thus,
while we believe CIRL presents a valuable lense through which to view the
problem of corrigibility, a great deal of valuable follow-up work remains to be
done in order to more comprehensively address the potential for engineering
errors to yield incorrigible systems.
Causality , intention, and responsibility
One objective of CHAI’s current research is to formalize established concepts
in social sciences that lack the logical and mathematical precision required for
incorporating these concepts into the design of intelligent agents.
Causality and blame are evaluated daily in courts of law and have long been
debated in the context of thought experiments like the Trolley Problem -- but
as we develop autonomous agents in the real world that must necessarily
make moral judgments in their operation, we will require a more specific way
of defining the extent to which an agent’s actions (causes) are responsible, or
blameworthy, for certain outcomes (effects).
Joe Halpern’s recent research at Cornell focuses on this topic. When
formalizing a heuristic for autonomous vehicles like “avoid harming humans,”
how do we incorporate probabilities into tradeoffs (e.g. .01% chance of
harming a person vs. 99.9% chance of creating massive property damage)?
How do we determine whether an agent’s actions are indeed responsible for
certain benefit or harm done, and the extent to which they are responsible
when many actors are involved? A prerequisite to implementing any moral
decision-making framework is having a way to define causality and quantify
responsibility. Halpern’s book Actual Causality (2016) and his recent talk go
into more detail on these topics.
2017 Annual Workshop recap
The Center for Human-Compatible Artificial Intelligence (CHAI) hosted its first
annual workshop on Friday, May 5 and Saturday, May 6, 2017. Topics covered
include preference aggregation, single-principal value alignment, interactive
control, and minimum-viable world-takeover capabilities. See more details
here.
CHAI assorted links: news, articles, videos
Stuart Russell, Three principles for creating safer AI, TED talk, Vancouver,
April 25, 2017.
The future is now, debate organized by Wadham College, Oxford, in San
Francisco, June 5, 2017.
Stuart Russell, AI for Good, interview at the UN Global Summit on AI for
Good, June 9, 2017.
Stuart Russell, Provably Beneficial Artificial Intelligence (starts at 55.25),
keynote at IJCAI 2017 (slides here).
As Tech Titans Bicker, Should You Worry About the Rise of AI?, by Ryan
Nakashima and Matt O'Brien, Top Tech News, July 30, 2017.
Stuart Russell, Do we want machines to make decisions on killing in
war?, interview on BBC Tomorrow's World, June 1, 2017.
Top US general warns against rogue killer robots, by John Bowden, The
Hill, July 18, 2017.
For inquiries, feedback, or requests for future content, please contact chai-newsletter-editor@humancompatible.ai.
Copyright © 2017 Center for Human-Compatible AI, All rights reserved.
|
f16a0157-5ac8-4e62-aede-3f8295387e84 | trentmkelly/LessWrong-43k | LessWrong | Open Thread, January 1-15, 2013
If it's worth saying, but not worth its own post, even in Discussion, it goes here. |
64b767ee-bc3a-4ba1-b7e5-aa2fb54182c9 | trentmkelly/LessWrong-43k | LessWrong | We should probably buy ADA?
Edit 2022: It seems like better systems have come along, and a crypto winter has set in, and Taleb argues that BTC will eventually go to zero and I'm inclined to agree, platforms will keep improving, it may be a long time before any of them are good enough to stick as a standard, applications will migrate, most platform tokens will eventually be worthless, if there's ever a unit of exchange in crypto we do not yet know how it will hold its value.
So I'd no longer advocate buying much of anything until we're really pretty sure that no further improvements to the technology can be made, and then still be extremely wary of volatility, don't expect platform tokens to hold value far beyond the actual usefulness of the platform.
Epistemic Status: Buying, but, the following does not constitute investment advice. There are some big open questions and some intuitions I can't start to explain (see the bottom).
Purpose: I am writing this post because I want the sorts of people who take the alignment problem seriously to have more money, but you will still need to do some research of your own. What I have here is more of a series of argument outlines than a complete analysis. My purpose here isn't to accelerate adoption of Cardano, really. I don't think Cardano needs my help.
ADA is the currency of Cardano. Cardano is (soon will be) a smart contract platform comparable to Ethereum.
I think the main load-bearing part of the argument here is that it looks like Cardano will end up have a year or two with the lead to itself, and whoever takes that lead, now, I don't think they would lose it again for a very very long time, because after Proof of Stake, Scaling, and Updatability, there don't seem to be any major holes left to fill, that's it, the biggest limitations of smart contracts will have been solved, developers will largely not be tempted away.
But I'm not completely sure. We need more clarity on this.
Legitimacy
Present here in abundance are the qualities upon which c |
fec115e4-ddef-4473-a843-12f7a043eed7 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Why I'm Blooking
Today's post, Why I'm Blooking was originally published on 15 September 2007. A summary:
> Now that Eliezer had posted over 100 blog posts, he created this post to lay out why exactly he was writing this series of posts. Understanding what he was trying to do can assist in understanding what became the sequences.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Doublethink (Choosing to be Biased), and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
0fdfcf86-bc45-4c75-bd1a-77c476475339 | trentmkelly/LessWrong-43k | LessWrong | Strategies to Prevent AI Annihilation
Tl;DR: A strong tactic against annihilation by an ASI, and for maximizing the chance of a post-scarcity utopia, is to take advantage of an ASI's likely bias towards minimizing risk
With practical, easily accessible language models releasing at an exponential rate, and realistic timelines to AGI having dramatically shortened, worrying about a possible AI terrible-future-death-of-all-humanity is in.
What has emerged is a landscape of speculation over the likelihood of this scenario, ranging from qualified experts to anyone with a Twitter account and sufficient cadre of tech-connected followers. What is concerning is that optimism or pessimism doesn't seem to correlate with esteem or intelligence; being more in-the-know regarding AI doesn't make one less fearful of potential doom. Even the most naïve optimists seem to believe the chance of Artificial General Intelligence quickly taking off into humanity-annihilating Artificial Super Intelligence is greater than zero.
A lot of plans to prevent evil ASI are based on the hope that strategies can be put in place to minimize the potential negative impact of an ASI: to make sure that the right™ people with the right™ alignment strategies are keeping a close watch over the data centers.
The problem with hope and plans predicated on anticipated future action is that their efficacy, as the number of bad alternative possibilities increases, gradually tends towards zero. What we need are tactics against annihilation that can predictably resist reduction in probable efficacy as new information emerges.
Compared to the energy expenditure of a space-faring fleet of von Neumann probes, it would not be very difficult for a resource-rich ASI to keep all humans alive and happy. With this in mind, a strong ASI risk-mitigation plan is to make it favorable to an ASI's risk-mitigation and efficiency-maximization strategy to keep all humans alive and happy.
Stochastic Dead Man's Switch
This tactic would be to communicate to an |
621bf45d-6566-4093-b23a-4b9c753c6238 | trentmkelly/LessWrong-43k | LessWrong | Is there a good way to read deep into LW comment histories on mobile?
I sometimes like going through and reading a user’s comments, for those who make particularly good ones. But my mobile browser (Safari) loves refreshing pages when I haven’t been to the page for an hour. When this happens, I have to re-scroll down, and my current expedition requires minutes of scroll-click Load More-scroll to get back to where I was. I think Chrome on mobile does this too. Does anyone have any ideas on how I can do this better? (I really prefer reading on my phone). |
92a1b479-5b98-4810-935c-156f0c0d7276 | trentmkelly/LessWrong-43k | LessWrong | Two problems with causal-counterfactual utility indifference
These are problems both for utility indifference and the causal version of factoring out variables. Thanks to Eliezer for pointing out the second problem.
Suppose there are two utility functions, Ua and Ub. There is some button; let B be the event that a button is pressed. The button can only be pressed at some particular time t (we can adapt this example to cases where the button can get pressed at multiple times). The AI should optimize Ub if the button is pressed, and Ua if the button is not pressed. For example, Ua could be a paperclip-maximization objective, and Ub could be a shutdown utility function.
Introduce a random variable X, the result of the roll of a 3-sided die. X has 3 possible values, a, b and ∅. It's a with very small probability ϵ, b with probability ϵ, and ∅ with probability 1−2ϵ.
If X=a, then an invisible mechanism will prevent the button from depressing. If X=b, then an invisible mechanism will automatically depress the button. If X=∅, then neither mechanism activates, so the button is pushed or not based on physical force applied. The AI may observe the button push but not the event X.
Now consider a utility function:
U=q[X=a]Ua+(1−q)[X=b]Ub
for some constant q∈[0,1]. Note that we recover a variant of utility indifference by setting q very close to 1, and we recover factoring out variables by setting q to be the actual probability of the button push B (in a fixed-point manner). For now let's say q is set to 1/2.
Note that X simulates a causal counterfactual on B. Roughly, we could have instead written the objective as
U=ϵ(qE[Ua|do(¬B)]+(1−q)E[Ub|do(B)])
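To make the weighting concrete, here is a minimal numerical sketch (my own illustration, not from the post; the toy utility functions and the outcome representation are hypothetical stand-ins for the agent's value estimates):

```python
# Toy sketch of the combined objective U = q[X=a]Ua + (1-q)[X=b]Ub,
# taking the expectation over the 3-sided die described above.

eps = 1e-3   # P(X=a) = P(X=b) = eps, P(X=none) = 1 - 2*eps
q = 0.5      # weighting constant; the post sets q to 1/2

def Ua(outcome):   # hypothetical stand-in, e.g. a paperclip-maximization score
    return outcome["paperclips"]

def Ub(outcome):   # hypothetical stand-in, e.g. 1.0 if the AI is shut down
    return 1.0 if outcome["shut_down"] else 0.0

def expected_U(outcome_if_x):
    """outcome_if_x maps each die roll ('a', 'b', 'none') to the predicted
    final outcome of the plan under that roll; returns E[U] over the die."""
    return (eps * q * Ua(outcome_if_x("a"))
            + eps * (1 - q) * Ub(outcome_if_x("b"))
            + (1 - 2 * eps) * 0.0)   # the X = none branch contributes nothing to U
```

Note that under this objective a plan only earns credit through the rare X=a and X=b branches, which is where the overall factor of ϵ in the counterfactual form above comes from.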
What will an AI optimizing this utility function do? It will optimize the average of Ua and Ub until time t, at which it will see whether the button has been pressed or not, and then optimize either Ua or Ub depending on the button's state. Note that it will attempt to acquire resources before time t and delay important decisions until after time t, due to value of information. It has |
4bace5bc-402f-475e-981c-eb11b445c088 | trentmkelly/LessWrong-43k | LessWrong | In Defense Of Making Money
2 years ago habryka4 mentioned he'd love to see a comment I made developed into an article, and I agreed and said I was planning on doing it next Monday... anyway, 2 years later on <some> Monday I finally found the time.
Money is abundant and meaningless.
There are good founders with few relationships able to raise upwards of half a million as the first investment in ideas that seem impossibly difficult to monetize. With decent connections, you can pocket upwards of 10 million to start a company that seems hot.
You can learn programming in 3-12 months and earn 6 figures with low taxes working remotely in a few years. You can make bank by doing the most niche and easy to learn of jobs as long as you know how to follow the market. You can probably make it even if you just follow your nose and invest in trends like crypto or high-throughput sequencing early on.
Earning much is easy for many people and it provides no direct meaning. The rich man that’s unable to find happiness because he doesn’t realize the goal of life is altruism/nirvana/love/enjoying-the-moment/god is beyond cliche.
----------------------------------------
Money is, to some extent, equivalent to power over others, but that power doesn’t scale if we look at the realm of all possible options:
> If I don’t know how to efficiently turn power into a GDP increase, or money into a cure for cancer, then throwing more power/money at the problem will not make much difference.
>
> King Louis XV of France was one of the richest and most powerful people in the world. He died of smallpox in 1774, the same year that a dairy farmer successfully immunized his wife and children with cowpox. All that money and power could not buy the knowledge of a dairy farmer - the knowledge that cowpox could safely immunize against smallpox. There were thousands of humoral experts, faith healers, eastern spiritualists, and so forth who would claim to offer some protection against smallpox, and King Louis XV could not disting |
a7d3cb76-8afc-46bf-bff2-604c19e821ba | trentmkelly/LessWrong-43k | LessWrong | Art and Rationality
What are your thoughts on the role of Art in rationality (personal or otherwise) and in the singularity?
If one wants to help in the efforts of SIAI (or other organizations) does it make sense to focus on an art form as more than a hobby?
Is it rational to pursue an art form that encourages people to contribute to a cause when there are more direct ways of contributing?
It seems difficult to receive much recognition for one's work in art related fields, but it also seems as though one big success (say, a musician whose music was primarily about the singularity and increasing rationality) would turn many people on to the ideas. |
4f4c3b61-6489-4d3f-aa4f-427c1714751a | trentmkelly/LessWrong-43k | LessWrong | Power Buys You Distance From The Crime
Introduction
Taxes are typically meant to be proportional to money (or negative externalities, but that's not what I'm focusing on). But one thing money buys you is flexibility, which can be used to avoid taxes. Because of this, taxes aimed at the wealthy tend to end up hitting the well-off-or-rich-but-not-truly-wealthy harder, and tax cuts aimed at the poor end up helping the middle class. Examples (feel free to stop reading these when you get the idea, this is just the analogy section of the essay):
* Computer programmers typically have the option to work remotely in a low-tax state; teachers need to be where the classroom is.
* Estate taxes tend to hit families with single large assets (like a business) harder than those with diverse investments (who can simply sell assets to pay for taxes), who are hit harder than those with enough wealth to create trust funds.
* Executives can choose to receive stock (which is taxed more favorably) instead of cash to the exact percentage they desire. Well paid employees are offered stock, but the amount will not be tailored to their needs. Lower level employees either are not offered this, or are not in a position to take advantage of it.
* The legal distinction between a business (whose expenses are tax deductible) and a hobby (deductions not allowed) is based on whether the activity nets you income (there are complications and you can sometimes prove a money loser is a business, but this is a good rule of thumb). Small business owners (e.g. lawyers) can fold their occasionally-revenue-generating hobby (e.g. photography) into their real business, enabling tax deductions for their hobby.
* IRAs, 401ks, HSAs, and FSAs all lock your money up for a time or purpose, in exchange for lower or delayed taxes. You can only take advantage of them if you’re sure you won’t need the money for another purpose sooner.
* More examples here.
Note that most of these are perfectly legal and the rest are borderline. But we're still not ge |
5efb912f-fd5d-4861-bcfb-28048a257f30 | trentmkelly/LessWrong-43k | LessWrong | Air Conditioning
Growing up, I thought of air conditioning as extravagant. And AC for a large house, especially if it's poorly insulated or you only use a small part of the house, can get costly. On the other hand, AC for a single room is just pennies per hour, and if it lets you sleep well on a hot night or makes you more productive during a hot day, it's probably well worth it.
To take an example, this small window unit is rated at 5,000 BTU with an Energy Efficiency Ratio of 11.0. This means we should expect it to draw 454W (5,000 / 11) on a hot day (95F, 50% humidity) when running on high. Living in MA we have some of the most expensive electricity in the country, and at our $0.21/kWh 454W is 9.5¢/hr. At temperatures below that 95F it will be more efficient because it will draw less (I measured 332W testing this weekend) and you won't need to run it constantly to keep a good temperature. This brings it down to about the power consumption of a PC, and maybe between 1¢/h and 5¢/h. A workday's worth is then ~$0.09 to ~$0.45. It doesn't have to make you much more productive or comfortable to be worth it!
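For anyone who wants to redo that arithmetic, here is the same calculation as a tiny sketch (the numbers are the ones quoted above, not a general model of AC behavior):

```python
btu = 5_000            # rated cooling capacity
eer = 11.0             # Energy Efficiency Ratio at 95F / 50% humidity
price_per_kwh = 0.21   # MA electricity price used above, $/kWh

watts = btu / eer                              # worst-case draw on a design-condition hot day
cost_per_hour = (watts / 1000) * price_per_kwh
print(f"{watts:.0f} W -> ${cost_per_hour:.3f}/hr")   # ~454 W -> ~$0.095/hr, i.e. ~9.5 cents
```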
But maybe climate change means I should be avoiding AC for altruistic reasons? The dirtiest fuel still used is coal, at 2.21lb CO2e per kWh. At $10/T for carbon offsets that's 1.1¢/kWh. Not zero, but it's very low. If you're worried about longer-term sustainability when we run out of things to offset, at that point we're not going to be using coal (since if someone is still burning coal you could pay them to stop) and demand for AC lines up well with the production curve for solar.
You could even make an AC unit directly connected to a solar panel, without the efficiency losses of converting to alternating current or charging a battery. You should be able to make a ~4,000 BTU window unit that runs off a standard 65x29 panel and runs the compressor at whatever speed the incoming voltage will support. Probably not worth it, though, unless you need cooling year-round whenever the su |
f937ae17-c2ab-4d3c-a626-f16b936ba304 | trentmkelly/LessWrong-43k | LessWrong | Playing Minecraft with a Superintelligence
TLDR: Through concrete scenic descriptions illustrate how I expect naive goal specifications to fail, for getting diamonds in Minecraft.
Not much beyond the concrete examples is original. Also check out the excellent Specification Gaming video by Rational Animations.
The Setup
I am playing Minecraft. I'd like to have an AI companion that can perform all sorts of tasks, like obtaining diamonds and giving them to me. The AI controls a normal player character with the usual controls. Let's call this the AI avatar.
Momentarily, we want to run some Minecraft simulations. So let's refer to the "ground truth" Minecraft world that my player character is in as Base-Minecraft.
Let's assume I have read access for the entire current state of Minecraft I am in, and that I have the Minecraft source code. Thankfully because I have a sick ultracomputer gaming rig, I can easily compute a plan (i.e. a sequence of actions) for the AI avatar to perform.
For each finite action sequence, I create a new Minecraft instance identical to Base-Minecraft, let's assume this includes a perfect simulation of my brain, and simulate the world. In each simulated world, the simulated AI avatar will perform the action sequence corresponding to the simulated world. Once the agent has performed all actions in the sequence we end up in some final world state.
Now we just need to somehow select an action sequence that leads to a good final world state.
My AI helps me to get Diamonds. Right?
I can write a simple program that queries all final world states, and checks if the AI avatar has at least 1 diamond in its inventory. Then it picks a plan of minimal length and lets the AI avatar execute it. Let's see the AI getting some diamonds for us:
(I recommend you try to predict what will happen before reading on.)
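(Before the scene, here is a minimal sketch of the selection program just described; this is my own illustration, and `simulate`, the candidate plan set, and the inventory format are hypothetical stand-ins rather than any real Minecraft API.)

```python
def has_diamond(final_state) -> bool:
    # did the AI avatar end the plan with at least 1 diamond in its inventory?
    return final_state["ai_inventory"].get("diamond", 0) >= 1

def choose_plan(candidate_plans, simulate):
    """candidate_plans: iterable of finite action sequences.
    simulate(plan): runs a fresh copy of Base-Minecraft in which the AI avatar
    performs `plan`, and returns the resulting final world state."""
    successful = [p for p in candidate_plans if has_diamond(simulate(p))]
    return min(successful, key=len) if successful else None   # shortest successful plan
```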
Scene: We take the perspective of JOHANNES' Minecraft avatar. Johannes is standing in his Minecraft base, looking at the AI AVATAR standing in front of him motionless, looking of to the sid |
254ecfd1-524e-4707-83dc-935d43f27470 | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | What are the different versions of decision theory?
The three main classes of [decision theory](https://www.alignmentforum.org/tag/decision-theory) are [causal decision theory](https://www.lesswrong.com/tag/causal-decision-theory), [evidential decision theory](https://www.alignmentforum.org/tag/evidential-decision-theory), and [logical decision theory](https://arbital.greaterwrong.com/p/logical_dt/?l=5kv).
[Causal decision theory](/?state=7779&question=What%20is%20%22causal%20decision%20theory%20(CDT)%22%3F) (CDT) reasons about the causal relationship between the decision and its consequences. An agent using CDT selects the action that will physically cause the best expected outcome.
[Evidential decision theory](/?state=7778&question=What%20is%20%22evidential%20decision%20theory%22%3F) (EDT) reasons about the conditional probability of events given different choices. An agent using EDT selects the action it would be “happiest” to learn it had taken. It views its action as one more fact about the world that it can reason about, and does not distinguish the causal effects of its actions from any other implications of taking them.
[Logical decision theory](/?state=7780&question=What%20is%20%22logical%20decision%20theory%22%3F) (LDT) is a class of decision theories, including [updateless decision theory](https://www.alignmentforum.org/tag/updateless-decision-theory), [functional decision theory](https://intelligence.org/2017/10/22/fdt/), and [timeless decision theory](https://www.alignmentforum.org/tag/timeless-decision-theory), that use logical counterfactuals. An agent using an LDT acts as if it controls the logical output of its own decision algorithm, and not just its immediate action. LDTs match or outperform other forms of decision theory in problems such as [Parfit's hitchhiker](https://www.alignmentforum.org/tag/parfits-hitchhiker), the [smoking lesion problem](https://www.alignmentforum.org/tag/smoking-lesion), and [Newcomb's problem](https://www.alignmentforum.org/tag/newcomb-s-problem).
An example of an LDT is functional decision theory (FDT). FDT treats an agent’s decision as the output of a fixed mathematical function, and picks based on which output it would be best for this function to have, taking into account not just the consequences of the agent’s decision, but also all the other places the function is instantiated.
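As a toy illustration of how these come apart (my own example, not from the article; the 99% predictor accuracy and the payoff amounts are assumed), here is Newcomb's problem under CDT-style and EDT-style reasoning:

```python
ACCURACY = 0.99                 # assumed predictor accuracy
BIG, SMALL = 1_000_000, 1_000   # opaque-box and transparent-box payoffs

# EDT conditions on the action: with an accurate predictor, the opaque box is
# full almost exactly when you one-box.
edt_one_box = ACCURACY * BIG                 # 990,000
edt_two_box = (1 - ACCURACY) * BIG + SMALL   #  11,000  -> EDT one-boxes

# CDT treats the already-made prediction as causally fixed: for any fixed
# probability p that the opaque box is full, two-boxing adds SMALL.
def cdt_value(two_box: bool, p_full: float) -> float:
    return p_full * BIG + (SMALL if two_box else 0.0)

assert all(cdt_value(True, p) > cdt_value(False, p) for p in (0.0, 0.5, 1.0))
# -> CDT two-boxes. An FDT/LDT agent one-boxes: it treats the predictor as
# instantiating the same decision function, so choosing "one-box" also fixes
# the prediction (and the box contents) to the one-box case.
```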
Further reading:
- [Decision Theory FAQ](https://www.lesswrong.com/posts/zEWJBFFMvQ835nq6h/decision-theory-faq#what-about-newcombs-problem-and-alternative-decision-algorithms)
- [Comprehensive list of decision theories](https://casparoesterheld.com/a-comprehensive-list-of-decision-theories/)
- [What should I read to learn about decision theory?](/?state=6536&question=What%20should%20I%20read%20to%20learn%20about%20decision%20theory%3F)
|
625bc0cb-3efd-4aa4-ad77-a2f110e10e9b | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | The strategy-stealing assumption
Suppose that 1% of the world’s resources are controlled by unaligned AI, and 99% of the world’s resources are controlled by humans. We might hope that at least 99% of the universe’s resources end up being used for stuff-humans-like (in expectation).
Jessica Taylor argued for this conclusion in [Strategies for Coalitions in Unit-Sum Games](https://www.lesswrong.com/posts/5bd75cc58225bf0670375325/strategies-for-coalitions-in-unit-sum-games): if the humans divide into 99 groups each of which acquires influence as effectively as the unaligned AI, then by symmetry each group should end up with as much influence as the AI, i.e. they should end up with 99% of the influence.
This argument rests on what I’ll call the *strategy-stealing assumption*: for any strategy an unaligned AI could use to influence the long-run future, there is an analogous strategy that a similarly-sized group of humans can use in order to capture a similar amount of flexible influence over the future. By “flexible” I mean that humans can decide later what to do with that influence — which is important since humans don’t yet know what we want in the long run.
### Why might the strategy-stealing assumption be true?
Today there are a bunch of humans, with different preferences and different kinds of influence. Crudely speaking, the long-term outcome seems to be determined by some combination of {which preferences have how much influence?} and {what is the space of realizable outcomes?}.
I expect this to become more true over time — I expect groups of agents with diverse preferences to eventually approach efficient outcomes, since otherwise there are changes that every agent would prefer (though this is not obvious, especially in light of bargaining failures). Then the question is just about *which* of these efficient outcomes we pick.
I think that our actions don’t affect the space of realizable outcomes, because long-term realizability is mostly determined by facts about distant stars that we can’t yet influence. The obvious exception is that if we colonize space faster, we will have access to more resources. But [quantitatively this doesn’t seem like a big consideration](https://rationalaltruist.com/2013/04/30/astronomical-waste/), because astronomical events occur over millions of millennia while our decisions only change colonization timelines by decades.
So I think our decisions mostly affect long-term outcomes by changing the relative weights of different possible preferences (or by causing extinction).
Today, one of the main ways that preferences have weight is because agents with those preferences control resources and other forms of influence. Strategy-stealing seems most possible for this kind of plan — an aligned AI can exactly copy the strategy of an unaligned AI, except the money goes into the aligned AI’s bank account instead. The same seems true for most kinds of resource gathering.
There are lots of strategies that give influence to other people instead of helping me. For example, I might preferentially collaborate with people who share my values. But I can still steal these strategies, as long as my values are just as common as the values of the person I’m trying to steal from. So a majority can steal strategies from a minority, but not the other way around.
There can be plenty of strategies that don’t involve acquiring resources or flexible influence. For example, we could have a parliament with obscure rules in which I can make maneuvers that advantage one set of values or another in a way that can’t be stolen. Strategy-stealing may only be possible at the level of groups — you need to retain the option of setting up a different parliamentary system that doesn’t favor particular values. Even then, it’s unclear whether strategy-stealing is possible.
There isn’t a clean argument for strategy-stealing, but I think it seems plausible enough that it’s meaningful and productive to think of it as a plausible default, and to look at ways it can fail. (If you found enough ways it could fail, you might eventually stop thinking of it as a default.)
### Eleven ways the strategy-stealing assumption could fail
In this section I’ll describe some of the failures that seem most important to me, with a focus on the ones that would interfere with the argument in the introduction.
#### 1. AI alignment
If we can build smart AIs, but not aligned AIs, then humans can’t necessarily use AI to capture flexible influence. I think this is the most important way in which strategy-stealing is likely to fail. I’m not going to spend much time talking about it here because I’ve spent so much time elsewhere.
For example, if smart AIs inevitably want to fill the universe with paperclips, then “build a really smart AI” is a good strategy for someone who wants to fill the universe with paperclips, but it can’t be easily stolen by someone who wants anything else.
#### 2. Value drift over generations
The values of 21st century humans are determined by some complicated mix of human nature and the modern environment. If I’m a 16th century noble who has really specific preferences about the future, it’s not really clear how I can act on those values. But if I’m a 16th century noble who thinks that future generations will inevitably be wiser and should get what they want, then I’m in luck, all I need to do is wait and make sure our civilization doesn’t do anything rash. And if I have some kind of crude intermediate preferences, then I might be able to push our culture in appropriate directions or encourage people with similar genetic dispositions to have more kids.
This is the most obvious and important way that strategy-stealing has failed historically. It’s not something I personally worry about too much though.
The big reason I don’t worry is some combination of common-sense morality and decision-theory: our values are the product of many generations each giving way to the next one, and so I’m pretty inclined to “pay it forward.” Put a different way, I think it’s relatively clear I should empathize with the next generation since I might well have been in their place (whereas [I find it much less clear under what conditions I should empathize with AI](https://ai-alignment.com/sympathizing-with-ai-e11a4bf5ef6e)). Or from yet another perspective, the same intuition that I’m “more right” than previous generations makes me very open to the possibility that future generations are more right still. This question gets very complex, but my first-pass take is that I’m maybe an order of magnitude less worried than about other kinds of value drift.
The small reason I don’t worry is that I think this dynamic is probably going to be less important in the future (unless we actively want it to be important — which seems quite possible). I believe there is a good chance that within 60 years most decisions will be made by machines, and so the handover from one generation to the next will be optional.
That all said, I am somewhat worried about more “out of distribution” changes to the values of future generations, in scenarios where AI development is slower than I expect. For example, I think it’s possible that genetic engineering of humans will substantially change what we want, and that I should be less excited about that kind of drift. Or I can imagine the interaction between technology and culture causing similarly alien changes. These questions are even harder to think about than the basic question of “how much should I empathize with future generations?” which already seemed quite thorny, and I don’t really know what I’d conclude if I spent a long time thinking. But at any rate, these things are not at the top of my priority queue.
#### 3. Other alignment problems
AIs and future generations aren’t the only optimizers around. For example, we can also build institutions that further their own agendas. We can then face a problem analogous to AI alignment — if it’s easier to build effective institutions with some kinds of values than others, then those values could be at a structural advantage. For example, we might inevitably end up with a society that optimizes generalizations of short-term metrics, if big groups of humans are much more effective when doing this. (I say “generalizations of short-term metrics” because an exclusive focus on short-term metrics is the kind of problem that can fix itself over the very long run.)
I think that institutions are currently considerably weaker than humans (in the sense that’s relevant to strategy-stealing) and this will probably remain true over the medium term. For example:
* A company with 10,000 people might be much smarter than any individual humans, but mostly that’s because of its alliance with its employees and shareholders — most of its influence is just used to accumulate more wages and dividends. Companies do things that seem antisocial not because they have come unmoored from any human’s values, but because plenty of influential humans want them to do that in order to make more money. (You could try to point the “market” as an organization with its own preferences, but it’s even worse at defending itself than bureaucracies — it’s up to humans who benefit from the market to defend it.)
* Bureaucracies can seem unmoored from any individual human desire. But their actual ability to defend themselves and acquire resources seems much weaker than other optimizers like humans or corporations.
Overall I’m less concerned about this than AI alignment, but I do think it is a real problem. I’m somewhat optimistic that the same general principles will be relevant both to aligning institutions and AIs. If AI alignment wasn’t an issue, I’d be more concerned by problems like institutional alignment.
#### 4. Human fragility
If AI systems are aligned with humans, they may want to keep humans alive. Not only do humans prefer being alive, humans may need to survive if they want to have the time and space to figure out what they really want and to tell their AI what to do. (I say “may” because at some point you might imagine e.g. putting some humans in cold storage, to be revived later.)
This could introduce an asymmetry: an AI that just cares about paperclips can get a leg up on humans by threatening to release an engineered plague, or trashing natural ecosystems that humans rely on. (Of course, this asymmetry may also go the other way — values implemented in machines are reliant on a bunch of complex infrastructure which may be more or less of a liability than humanity’s reliance on ecosystems.)
Stepping back, I think the fundamental long-term problem here is that “do what this human wants” is only a simple description of human values if you actually have the human in hand, and so an agent with these values does have a big extra liability.
I do think that the extreme option of “storing” humans to revive them later is workable, though most people would be very unhappy with a world where that becomes necessary. (To be clear, I think it almost certainly won’t.) We’ll return to this under “short-term terminal preferences” below.
#### 5. Persuasion as fragility
If an aligned AI defines its values with reference to “whatever Paul wants,” then someone doesn’t need to kill Paul to mess with the AI, they just need to change what Paul wants. If it’s very easy to manipulate humans, but we want to keep talking with each other and interacting with the world despite the risk, then this extra attack surface could become a huge liability.
This is easier to defend against — just stop talking with people except in extremely controlled environments where you can minimize the risk of manipulation — but again humans may not be willing to pay that cost.
The main reason this might be worse than point 4 is that humans may be relatively happy to physically isolate themselves from anything scary, but it would be much more costly for us to cut off from contact with other humans.
#### 6. Asymmetric persuasion
Even if humans are the only optimizers around, it might be easier to persuade humans of some things than others. For example, you could imagine a world where it’s easier to convince humans to endorse a simple ideology like “maximize the complexity of the universe” than to convince humans to pursue some more complex and subtle values.
This means that people with easily-persuadable values can use persuasion as a strategy, and people with other values cannot copy it.
I think this is ultimately more important than fragility, because it is relevant before we have powerful AI systems. It has many similarities to “value drift over generations,” and I have some mixed feelings here as well — there are some kinds of argument and deliberation that I certainly do endorse, and to the extent that my current views are the product of significant amounts of non-endorsed deliberation I am more inclined to be empathetic to future people who are influenced by increasingly-sophisticated arguments.
But as I described in section 2, I think these connections can get weaker as technological progress moves us further out of distribution, and if you told me that e.g. it was possible to perform a brute force search and find an argument that could convince someone to maximize the complexity of the future, I wouldn’t conclude that it’s probably fine if they decided to do that.
(Credit to Wei Dai for emphasizing this failure mode.)
#### 7. Value-sensitive bargaining
If a bunch of powerful agents collectively decide what to do with the universe, I think it probably won’t look like “they all control their own slice of the universe and make independent decisions about what to do.” There will likely be opportunities for trade, they may have meddling preferences (where I care what you do with your part of the universe), there may be a possibility of destructive conflict, or it may look completely different in an unanticipated way.
In many of these settings the outcome is influenced by a complicated bargaining game, and it’s unclear whether the majority can steal a minority’s strategy. For example, suppose that there are two values X and Y in the world, with 99% X-agents and 1% Y-agents. The Y-agents may be able to threaten to destroy the world unless there is an even split, and the X-agents have no way to copy such a strategy. (This could also occur over the short term.)
I don’t have a strong view about the severity of this problem. I could imagine it being a big deal.
#### 8. Recklessness
Some preferences might not care about whether the world is destroyed, and therefore have access to productive but risky strategies that more cautious agents cannot copy. The same could happen with other kinds of risks, like commitments that are game-theoretically useful but risk sacrificing some part of the universe or creating long-term negative outcomes.
I tend to think about this problem in the context of particular technologies that pose an extinction risk, but it’s worth keeping in mind that it can be compounded by the existence of more reckless agents.
Overall I think this isn’t a big deal, because it seems much easier to cause extinction by trying to kill everyone than as an accident. There are fewer people who are in fact trying to kill everyone, but I think not enough fewer to tip the balance. (This is a contingent fact about technology though; it could change in the future and I could easily be wrong even today.)
#### 9. Short-term unity and coordination
Some actors may have long-term values that are easier to talk about, represent formally, or reason about. Relative to humans, AIs may be especially likely to have such values. These actors could have an easier time coordinating, e.g. by pursuing some explicit compromise between their values (rather than being forced to find a governance mechanism for some resources produced by a joint venture).
This could leave us in a place where e.g. an unaligned AI controls 1% of resources, but the majority of resources are controlled by humans who want to acquire flexible resources. Then the unaligned AIs can form a coalition which achieves very high efficiencies, while the humans cannot form 99 other coalitions to compete.
This could theoretically be a problem without AI, e.g. a large group of human with shared explicit values might be able to coordinate better and so leave normal humans at a disadvantage, though I think this is relatively unlikely as a major force in the world.
The seriousness of this problem is bounded by both the efficiency gains for a large coalition, and the quality of governance mechanisms for different actors who want to acquire flexible resources. I think we have OK solutions for coordination between people who want flexible influence, such that I don’t think this will be a big problem:
* The humans can participate in lotteries to concentrate influence. Or you can gather resources to be used for a lottery in the future, while still allowing time for people to become wiser and then make bargains about what to do with the universe before they know who wins.
* You can divide up the resources produced by a coalition equitably (and then negotiate about what to do with them).
* You can modify other mechanisms by allowing votes that could e.g. overrule certain uses of resources. You could have more complex governance mechanisms, can delegate different kinds of authority to different systems, can rely on trusted parties, etc.
* Many of these procedures work much better amongst groups of humans who expect to have relatively similar preferences or have a reasonable level of trust for other participants to do something basically cooperative and friendly (rather than e.g. demanding concessions so that they don’t do something terrible with their share of the universe or if they win the eventual lottery).
(Credit to Wei Dai for describing and emphasizing this failure mode.)
#### 10. Weird stuff with simulations
I think civilizations like ours mostly have an impact via the common-sense channel where we ultimately colonize space. But there may be many civilizations like ours in simulations of various kinds, and influencing the results of those simulations could also be an important part of what we do. In that case, I don’t have any particular reason to think strategy-stealing breaks down, but I think stuff could be very weird and I have only a weak sense of how this influences optimal strategies.
Overall I don’t think much about this since it doesn’t seem likely to be a large part of our influence and it doesn’t break strategy-stealing in an obvious way. But I think it’s worth having in mind.
#### 11. Other preferences
People care about lots of stuff other than their influence over the long-term future. If 1% of the world is unaligned AI and 99% of the world is humans, but the AI spends all of its resources on influencing the future while the humans only spend one tenth, it wouldn’t be too surprising if the AI ended up with 10% of the influence rather than 1%. This can matter in lots of ways other than literal spending and saving: someone who only cared about the future might make different tradeoffs, might be willing to defend themselves at the cost of short-term value (see sections 4 and 5 above), might pursue more ruthless strategies for expansion, and so on.
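A toy check of those numbers (my own arithmetic; the savings rates are assumed purely for illustration):

```python
ai_resources, human_resources = 1.0, 99.0    # shares of the world's resources
ai_longterm, human_longterm = 1.0, 0.1       # fraction each devotes to long-run influence

ai_influence = ai_resources * ai_longterm                 # 1.0
human_influence = human_resources * human_longterm        # 9.9
print(ai_influence / (ai_influence + human_influence))    # ~0.092, i.e. roughly 10%
```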
I think the simplest approximation is to restrict attention to the part of our preferences that is about the long-term (I discussed this a bit in [Why might the future be good?](https://rationalaltruist.com/2013/02/27/why-will-they-be-happy/)). To the extent that someone cares about the long-term less than the average actor, they will represent a smaller fraction of this “long-term preferences” mixture. This may give unaligned AI systems a one-time advantage for influencing the long-term future (if they care more about it) but doesn’t change the basic dynamics of strategy-stealing. Even this advantage might be clawed back by a majority (e.g. by taxing savers).
There are a few places where this picture seems a little bit less crisp:
* Rather than being able to spend resources on either the short or long-term, sometimes you might have preferences about *how* you acquire resources in the short-term; an agent without such scruples could potentially pull ahead. If these preferences are strong, it probably violates strategy-stealing unless the majority can agree to crush anyone unscrupulous.
* For humans in particular, it may be hard to separate out “humans as repository of values” from “humans as an object of preferences,” and this may make it harder for us to defend ourselves (as discussed in sections 4 and 5).
I mostly think these complexities won’t be a big deal quantitatively, because I think our short-term preferences will mostly be compatible with defense and resource acquisition. But I’m not confident about that.
### Conclusion
I think strategy-stealing isn’t really true; but I think it’s a good enough approximation that we can basically act as if it’s true, and then think about the risk posed by possible failures of strategy-stealing.
I think this is especially important for thinking about AI alignment, because it lets us formalize the lowered goalposts I discussed [here](https://ai-alignment.com/a-possible-stance-for-ai-control-research-fe9cf717fc1b): we just want to ensure that AI is compatible with strategy-stealing. These lowered goalposts are an important part of why I think we can solve alignment.
In practice I think that a large coalition of humans isn’t reduced to strategy-stealing — a majority can simply stop a minority from doing something bad, rather than by copying it. The possible failures in this post could potentially be addressed by either a technical solution or some kind of coordination. |
9148ae97-175f-48b8-b761-12d5ae577826 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Ray Kurzweil: Future of Intelligence | MIT 6.S099: Artificial General Intelligence (AGI)
welcome to MIT course 6 s 0 9 9
artificial general intelligence today we
have Ray Kurzweil he is one of the
world's leading inventors thinkers and
futurists with a 30-year track record of
accurate predictions called the Restless
genius by The Wall Street Journal and
the ultimate thinking machine by Forbes
magazine he was selected as one of the
top entrepreneurs by Inc magazine which
described him as the rightful heir to
Thomas Edison PBS selected him as one of
the 16 revolutionaries who made America
Ray was the principal investigator of
the first ccd flatbed scanner the first
omni font optical character recognition
the first print-to-speech reading
machines for the blind the first
text-to-speech synthesizer the first
music synthesizer capable of creating
the grand piano and other orchestral
instruments and the first commercially
marketed large vocabulary speech
recognition among his many honors he
received a Grammy Award for outstanding
achievements in music technology he's
the recipient of the National Medal of
Technology was inducted into the
National Inventors Hall of Fame holds 21
honorary doctorates and honors from
three u.s. presidents Ray has written
five national best-selling books
including the New York Times bestsellers
The Singularity is near from 2005 and
how to create a mind from 2012 he is
co-founder and Chancellor of singularity
University and a director of engineering
at Google heading up a team developing
machine intelligence and natural
language understanding
please give ray a warm welcome
[Applause]
[Music]
it's good to be back I've been in this
lecture hall many times and walked the
infinite corridor I came here as an
undergraduate in 1965 within a year of
my being here they started a new major
called computer science it did not get
its own course number that's 6 1 even
biotechnology recently got its own
course number but how many of you are CS
majors ok how many of you do work in
deep learning how many of you have heard
of deep learning here I came here first
in 1962 when I was 14 I became excited
about artificial intelligence it had
only gotten its name six years earlier
the 1956 Dartmouth conference by Marvin
Minsky and John McCarthy so I wrote
Minsky a letter there was no email back
then and he invited me up he spent all
day with me as if he had nothing else to
do he was a consummate educator
I then and the AI field had already
bifurcated into two warring camps the
symbolic school which Minsky was
associated with and the connectionist
school was not widely known in fact I
think it's still not widely known that
Minsky actually invented the neural
net in 1953 but he had become negative
about it largely because there was a lot
of hype that these giant brains could
solve any problem
so the first popular neural nets the
perceptron was being promulgated by
Frank Rosenblatt at Cornell so Minsky
said well where are you going now and
I said to see Rosenblatt at Cornell
and he said don't bother doing that and
I went there and Rosenblatt was touting
the perceptron that it ultimately would
be able to solve any problem so I
brought some printed letters that had
the camera and it did a perfect job of
recognizing them as long as they were
Courier 10 different type styles it didn't
work at all and he said but don't worry
we can take the output of the perceptron
or feed it as the input to another
perceptron and take the output of that
and feed it to a third layer and as we
add more layers it'll get smarter and
smarter and generalize and I said that's
interesting have you ever tried that well
no but it's high on our research agenda
things did not move quite as quickly
back then as they do now he died nine
years later never having tried that idea
turns out to be remarkably prescient I
mean he never tried multi-layer neural
nets and all the excitement we see now
about deep learning comes from a
combination of two things
both many layer neural Nets and the law
of accelerating returns which I'll get
to a little bit later which is basically
the exponential growth of computing so
that we can run these massive nets and
handle massive amounts of data it would
be decades before that idea was tried
several decades later three level neural
nets were tried they were a little bit
better they could deal with multiple
type styles still weren't very flexible
that's not hard to add other layers it's
a very straightforward concept there was
a math problem the disappearing gradient
or the exploding gradient which I'm sure
many of you are familiar with basically
you need to take maximum advantage of
the range of values in the gradients and
not let them explode or disappear and
lose the resolution that's a fairly
straightforward mathematical
transformation with that insight we
could now go to 100 layer neural nets and
that's behind sort of all the fantastic
gains that we've seen recently
alphago trained on every online game and
then became a fair go player it then
trained itself by playing itself and
soared past the best human alphago zero
started with no human input at all
within hours of iterations soared past
alphago it also soared past the best chess
programs they had another innovation
basically you need to evaluate the
quality of the board at each point they
used another hundred layer neural nets
to do that evaluation so there's still a
problem in the field which is there's a
motto that life begins at a billion
examples
one of the reasons I'm at Google is we
have a billion examples for example of
pictures of dogs and cats that are
labeled so you got a picture of a cat
and it says cat and then you can learn
from it and you need a lot of them
alphago trained on a million online
moves that's how many we had of master
games and that only created a sort of
fair go player a good amateur could
defeat it so they worked around that in
the case of go by basically generating
an infinite amount of data by having the
system play itself had a chat with
Demis Hassabis about you know what kind
of situations can you do that with you
have to have some way of simulating the
world so go or chess are even though go
is considered a difficult game it's
a-you know the definition of it can
exist on one page so you can simulate it
that applies to math I mean math axioms
can be contained on a page or two
it's not very complicated it gets more
difficult when you have real-life
situations like biology so we have
biological simulators but the simulators
are imperfect so learning from the
simulators will only be as good as the
simulators that's actually the key to
being able to do deep learning on
biology
autonomous vehicles you need real-life
data so the Waymo systems have gone
three and a half million miles
that's good that's enough data to then
create a very good simulator so the
simulator is really quite realistic
because they had a lot of real-world
experience and the they've got a billion
miles in the simulator but we don't
always have that opportunity to either
create the data or have the data around
humans can learn from a small number of
examples your significant other your
professor your boss your investor can
tell you something once or twice and you
might actually learn from that some
humans have been reported to do that
and that's kind of the remaining
advantage of humans now there's actually
no back propagation in the human brain
it doesn't use deep learning it uses a
different architecture that same year in
1962 I wrote a paper how I thought the
human brain worked there was actually
very little neuroscience to go on there
was one neuroscientist Vernon
Mountcastle that had something relevant to
say which he did I mean there was
the common wisdom at the time and
there's still a lot of neuroscience that
says this that we have all these
different regions of the brain they do
different things they must be different
there's v1 in the back of the head where
the optic nerve spills into that can
tell that that's a curved line that
that's a straight line does these simple
feature extractions on visual images
it's actually a large part of the
neocortex does the fusiform gyrus up
here which can recognize faces we know
that because if it gets knocked out
through injury or stroke people can't
recognize faces they will learn it again
with a different region of the neocortex
is the famous frontal cortex which does
language in poetry and music so these
must work on different principles he did
autopsies on the neocortex and all these
different regions and found they all
looked the same they had the same
repeating pattern same interconnections
he said neocortex is neocortex so I had
that hint otherwise I can actually
observe human brains in action which I
did from time to time and there's a lot
of hints that you can get that way for
example if I ask you to recite the
alphabet you actually don't do it from A
to Z you do it as a sequence of
sequences ABCD efg hijk so we learn
things in sequences forward sequences of
sequences forward because if I ask you
to recite the alphabet backwards you
can't do it unless you learn that as a
new sequence so these are all
interesting hints I wrote a paper that I
that the neocortex is organized as a
hierarchy of modules in each module can
learn a simple pattern and that's how I
got to meet President Johnson and that
initiated a half-century of thinking
about this issue I came to MIT to study
with Marvin Minsky actually came for two
reasons one the Minsky became my mentor
which was a mentorship that lasted for
over 50 years the fact that MIT was so
advanced it actually had a computer
which the other colleges I considered
didn't have it was an IBM 7094 with 32 K
of 36 bit words so it's 150 K of core
storage two microsecond cycle time two
cycles per instruction or a quarter of
a MIP and thousands of students
and professors shared that one machine
in 2012 I wrote a book about this thesis
there is now actually an explosion of
neuroscience evidence to support it the
European brain reverse engineering
project has identified a repeating
module about a hundred neurons it's
repeated three hundred million times
it's about 30 billion neurons in the
neocortex the neocortex is the outer
layer of the brain that's part where we
do our thinking and they can see in each
module axons coming in from another
module and then the output the
single output axon of that
module goes as the input to another module
so we can see it organized as a
hierarchy it's not a physical hierarchy
it's the hierarchy comes from these
connections the neocortex is a very thin
structure it's actually one module thick
there's six layers of neurons but it
constitutes one module and we can see
that it learns a simple pattern and for
various reasons I cite in the book the
pattern recognition model that it's using
is basically a hidden Markov model how
many of you have worked with Markov
models okay
that's usually no hands go up when I ask
that question a Markov model is
learned but it's not back propagation
it can learn local features so it's very
good for speech recognition and the
speech recognition network I did in the
80s used these Markov models that became
the standard approach because it can
deal with local variations so the fact
that a vowel is stretched you can learn
that in a Markov model it doesn't learn
long distance relationships that's
handled by the hierarchy and something
we don't fully understand yet is exactly
how the neocortex creates that hierarchy
but we have figured out how it can
connect this module to this module does
it then grow I mean there's no virtual
communication or wireless communication
it's actually connection so does it grow
an axon you know from one place to
another which could be inches apart
actually they all all these connections
are there from birth like the streets
and avenues of Manhattan there's
vertical and horizontal connections so
if it decides and how it makes that
decision is still not fully understood
that it wants to connect this module to
this module there's already a vertical
horizontal and a vertical connection it
just activates them we can actually see
that now and I can see that happening in
real time on non-invasive brain scans
so there's a fair amount of evidence
that in fact the neocortex is a
hierarchy of modules that can learn each
module learns a simple sequential
pattern and even though the patterns we
perceived don't seem like sequences they
may seem three-dimensional or even more
complicated they are in fact represented
as sequences but the complexity comes in
with the hierarchy so the neocortex
emerged 200 million years ago with
mammals all mammals have a neocortex
it's one of the distinguishing features
of mammals these first mammals were
small they were rodents but they were
capable of a new type of thinking other
non-mammalian animals had fixed
behaviors but those fixed behaviors were
very well adapted for their ecological
niche but these new mammals could invent
a new behavior so creativity and
innovation was one feature of the
neocortex so a mouse is escaping a
predator its usual escape path is
blocked it will invent a new behavior to
deal with it probably wouldn't work but
if it did work it would remember it and
would have a new behavior and that
behavior could spread virally through
the community another Mouse watching
this would say to itself that was
really clever going around that rock I'm
gonna remember to do that and it would
have a new behavior didn't help these
early mammals that much because as I say
the non-mammalian animals were very well
adapted to their niches and nothing much
happened for a hundred and thirty five
million years but then 65 million years
ago something did happen there was a
sudden violent change to the environment
we now call it the Cretaceous extinction
event there's been debate as to whether
it was a meteor or an asteroid I mean a
meteor or a volcanic eruption the
asteroid or meteor hypothesis is in the
ascendancy but if you dig down to an
area of rock reflecting 65 million years
ago the
geologists will explain that it shows a
very violent sudden change to the
environment we see it all around the
globe so it's a worldwide phenomenon the
reason we call it an extinction event is
that's when the dinosaurs went extinct
that's when 75% of all the animal and
plant species went extinct and that's
when mammals overtook their ecological
niche so to anthropomorphize biological
evolution said to itself this neocortex
is pretty good stuff and it began to
grow it so-now mammals got bigger their
brains got bigger at an even faster pace
taking up a larger fraction of their
body the neocortex got bigger even
faster than that and developed these
curvatures that are distinctive of a
primate brain basically to increase its
surface area but if you stretched it out
the human neocortex is still a flat
structure it's about the size of a table
napkin just as thin and it's basically
created primates which became dominant
in their ecological niche then something
else happened two million years ago
biological evolution decided to increase
the neocortex further and increase the
size of the enclosure and basically
filled up the frontal cortex with our
big skulls with more neocortex and up
until recently it was felt that as I
said that this was the frontal cortex
was different because it does these
qualitatively different things but we
now realize that it's really just
additional neocortex so remember what we
did with it we're already doing a very
good job of being primates so we put it
at the top of the neocortical hierarchy
and we increased the size of the
hierarchy it was maybe 20% more
neocortex but it doubled it tripled the
number of levels because as you go up
the hierarchy it's kind of like a
pyramid there's fewer and fewer modules
and that was the enabling factor for us
to invent language and art music every
human culture we've ever discovered has
music no primate culture really has
music there's debate about that but it's
really true
invention technology technology required
another evolutionary adaptation which is
this humble appendage here no other
animal has that if you look at a chimp
and see it looks like they have a
similar hand but the thumb is actually
down here doesn't work very well if you
watch them trying to grab a stick so we
could imagine creative solutions yeah I
could take that branch and strip off the
leaves and put a point on it and we
could actually carry out these ideas and
create tools and then use tools to
create new tools and it started a whole
nother evolutionary process of
tool-making and that all came with the
with the neocortex
so Larry Page read my book in 2012 and
liked it so I met with him asking for
an investment in a company I'd started
actually a couple weeks earlier to
develop those ideas commercially because
that's how I went about things as a
serial entrepreneur
and he said well we'll invest but let me
give you a better idea why don't you do it
here at Google we have a billion
pictures of dogs and cats and we've got
a lot of other data and lots of
computers and lots of talent all of
which is true and I said well I don't know
I just started this company to develop
this and he said well we'll buy your company
and I said how do you value a company that hasn't done
anything it just started a couple weeks ago
and he said we can value anything so I
took my first job five years ago and
I've been basically applying this model
this hierarchical model to understanding
language which I think really is the
holy grail of AI I think Turing was
correct in designating basically text
communication as what we now call a
Turing-complete problem there are no
simple NLP tricks you can
apply to pass a valid Turing test with
an emphasis on the word valid Mitch
Kapor and I had a six month debate on
what the rules should be because if you
read Turing's 1950 paper he describes
this in a few paragraphs and doesn't
really describe how to go about it but
if it's a valid Turing test meaning it's
really convincing you through an
interrogation and dialogue that it's a
human that requires a full range of
human intelligence and I think that test
has stood the test of time we're making
very good progress on that I mean just
last week you may have read that two
systems
passed a paragraph comprehension test it's
really very impressive when I came to
Google we were trying to pass these
paragraph comprehension tests we aced
the first the first grade test second
grade tests we kind of got average
performance and the third grade test had
too much inference already you had to
know some common-sense knowledge as it's
called and make implications of things
that were in different parts of the
paragraph and there's too much inference
and it really didn't didn't work so this
is now adult level it's just slightly
surpassed average human performance but
we've seen that once an AI
does something at average human levels
it doesn't take long for it to soar past
average human levels I think it'll take
longer in language than it did in some
simple games like go but it's actually
very impressive that it surpasses now
average human performance it used an LSTM
long short-term memory but if you
look at the adult test in order to
answer these questions it has to put
together inferences and implications of
several different things in the
paragraph with some common sense
knowledge that is not explicitly stated so
that's I think a pretty impressive
milestone so I I've been developing I've
got a team of about 45 people and we've
been developing this hierarchical model
we don't use Markov models because we
can use deep learning for each module
and so we create an embedding for each
word and we create an embedding for each
sentence this is we have a I can talk
about it because we have a published
paper on it it can take into
consideration context
if you use Smart Reply in Gmail
on your phone you'll see it gives
you three suggestions for responses
that's called Smart reply there are
simple suggestions but it has to
actually understand perhaps a
complicated email and the quality of the
suggestions is really quite good quite
on point that's from my team using this
kind of hierarchical model so instead of
Markov models it uses embeddings
because we can use back propagation we
might as well use it but I think what's
missing from deep learning is this
hierarchical aspect of understanding
because the world is hierarchical that's
why evolution developed a hierarchical
brain structure to understand the
natural hierarchy in the world
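(As a rough illustration of the word-and-sentence embedding idea just described, a toy sketch only, with made-up vectors and reply candidates rather than anything from the actual system:)

```python
import numpy as np

# Toy stand-ins for learned word embeddings; a real system would train these.
rng = np.random.default_rng(0)
vocab = ["thanks", "for", "the", "update", "sounds", "good", "see", "you", "then", "ok"]
word_vecs = {w: rng.standard_normal(16) for w in vocab}

def sentence_embedding(sentence):
    # Simplest possible "embedding for each sentence": average the word vectors.
    vecs = [word_vecs[w] for w in sentence.lower().split() if w in word_vecs]
    return np.mean(vecs, axis=0)

def rank_replies(incoming, candidates):
    # Score candidate replies by cosine similarity of sentence embeddings.
    q = sentence_embedding(incoming)
    cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(candidates, key=lambda c: -cos(q, sentence_embedding(c)))

# With random vectors the ranking is arbitrary; the point is only the structure.
print(rank_replies("thanks for the update", ["sounds good", "see you then", "ok thanks"]))
```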
and there are several problems with big
deep neural nets one is the fact that
you really do need a billion examples
and we often don't sometimes we can generate
them as in the case of Go or if we
have a really good simulator as in the
case of autonomous vehicles not quite
the case yet in biology very often you
don't have a billion examples we do
have billions of examples of
language but they're not annotated and
how would you annotate it anyway with
more language that we can't understand
in the first place so that's kind of a
chicken and an egg problem so I believe
this hierarchical structure is needed
another criticism of deep neural Nets
they don't explain themselves very well
it's a big black box that gives you
pretty remarkable answers I mean in the
case of these games described its
play in both go and chess is almost
an alien intelligence because it does
things that were shocking to
experts like sacrificing a queen and a
bishop at the same time or in close
succession which shocked everybody but
then went on to win or early in a go
game putting a piece at the corner of
the board which is kind of crazy to most
experts because you really want to start
controlling territory and yet it on
reflection that was the brilliant move
that enabled it to win that game but it
doesn't really explain how it does these
things so if yeah if you have a
hierarchy it's much better at explaining
it because you could look at the content
of the of the modules in the hierarchy
and they'll explain what they're doing
and just on the first application of
this to health and medicine
this will get into high gear and we're
going to really see us break out of the
linear extension to longevity that we've
experienced I believe we're only about a
decade away from longevity escape
velocity we're adding more time than is
going by not just the infant life
expectancy but to your remaining life
expectancy I think if someone is
diligent they can be there already I
think I'm
at longevity escape velocity now a word
on what life expectancy means it used to
be assumed that not much would happen so
whatever your life expectancy is with or
without scientific progress it really
didn't matter now it matters a lot so
life expectancy really means you know
how long would you live in
terms of a statistical likelihood if
there were not continued scientific
progress but that's a very inaccurate
assumption because scientific progress
is extremely rapid I mean just in AI and
biotech there are advances now every
week it's quite stunning
now you may have a computed life
expectancy of let's say 30 years 50 years
70 years from now you can still be hit
by the proverbial bus tomorrow we're
working on that with self-driving
vehicles but we'll get we'll get to a
point I think if you're diligent you can
be there now in terms of basically
advancing your own statistical life
expectancy
at least to keep pace with the passage
of time I think it would be there for
most of the population at least if
they're diligent within about a decade
so if we can hang in there we may get to
see the remarkable century ahead thank
you very much now questions please raise
your hand and we'll get you a mic hi
so you mentioned both neural neural
network models and symbolic models and I
was wondering how far have you been
thinking about combining these two
approaches creating a symbiosis between
neural models and symbolic ones I don't
think we want to use symbolic models as
they've been used how many are familiar
with the Cyc project
that was a very diligent effort in Texas
to define all of common-sense reasoning
and it kind of collapsed on itself and
became impossible to debug because you
fix one thing and it breaks three other
things that complexity ceiling has
become typical of of trying to define
things through logical rules now it does
seem that humans can understand logical
rules we have logical rules written down
for things like law and game playing and
so on but you can actually define a
connectionist system to have such a high
reliability on a certain type of action
that it looks like it's a symbolic rule
even though it's represented in a
connectionist way and connection systems
can both capture the soft edges because
many things in life are not sharply
defined they can also generate
exceptions so you don't want to
sacrifice your queen in chess except in
certain situations where that might be a good
idea so you can capture that kind of
complexity so we do want to be able to
learn from accumulated human wisdom that
looks like it's symbolic but I think
we'll do it with a connection system but
again I think the connectionist systems
should develop a sense of hierarchy and
not just be one big massive neural net
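(A tiny sketch of that point, not from the talk, showing how a single connectionist unit with sharp weights behaves almost like the hard rule "fire if a AND b", while softer weights keep the same rule but with soft edges and room for borderline cases:)

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_and(a, b, sharpness):
    # One unit: weighted sum of two inputs pushed through a sigmoid.
    return sigmoid(sharpness * (a + b - 1.5))

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1), (1, 0.6)]:
    hard = soft_and(a, b, sharpness=20)   # behaves like a symbolic AND rule
    soft = soft_and(a, b, sharpness=3)    # same rule with soft edges
    print(a, b, round(hard, 3), round(soft, 3))
```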
so I understand how we want you know use
the neocortex to extract useful stuff
and commercialize that but I'm wondering
how you know our middle brain and organs
that are below the neocortex will be
useful for you know turn that into what
you want to do something well the
cerebellum is an interesting case in
point it actually has more neurons than
the neocortex and it used to
govern most of our behavior some things
if you write a signature that's actually
controlled by the cerebellum so a simple
sequence is stored in the cerebellum but
there's not much reasoning to it it's
basically a script and most of our
movement now has actually been migrated
from the cerebellum to the neocortex
the cerebellum is still there in some people
the entire cerebellum is destroyed
through disease and they still function
fairly normally their movement might be
a little erratic as our movement is
largely controlled by the neocortex but
some of the subtlety is a kind of
pre-programmed script and so they'll
look a little clumsy but they
actually function okay a lot of other
areas of the brain control autonomic
functions like breathing but our
thinking really is controlled by the
neocortex in terms of mastering
intelligence I think the neocortex is
the brain region we want to study I'm
curious what you think might happen
after the singularity is reached in
terms of this exponential growth of
information yes do you think it will
continue or will there be a whole
paradigm shift what do you predict well
in The Singularity Is Near I talked about
the atomic limits based on molecular
computing as we understand it and it can
actually go well past 2045 and actually
go to trillions of trillions of times
greater computational capacity than we
have today
so I don't see that's stopping anytime
soon and we'll go you know way beyond
what we can imagine and it becomes an
interesting discussion what the impact
on human civilization will be so take a
maybe slightly more mundane issue that
comes up that AI kind of eliminates most
jobs
a point I make is it's not the
first time in human history you've done
that how many jobs circa 1900 exist
today and that was the feeling of the
Luddites which was an actual society
that formed in 1800 in response to the automation of
the textile industry in England they
looked at all these jobs going away and
felt that employment is going to be just
limited to an elite indeed those jobs
didn't go away but new jobs were created
so if I were a prescient futurist in 1900
I would say well 38% of you work on
farms and 25% work in factories it's 2/3
of the working force but I predict by
2015 115 years from now it's going to be
2% on farms and 9% factories and
everybody would go oh my God we're gonna
be out of work and I said well don't
worry for all these jobs we eliminate
through automation we're gonna invent
new jobs and they'd say oh really what new jobs
and I'd say well I don't know we haven't
invented them yet that's the political
problem we could see jobs very clearly
going away fairly soon like driving a
car or truck and the new jobs haven't
been invented I mean just look at the
last five or six years a lot of
the increase in employment has been
through mobile app related types of ways
of making money that just weren't
contemplated even six years ago if I were
really prescient I would say well you're
gonna get jobs creating mobile apps and
websites and doing data analytics and
self-driving cars cars what's a car and
nobody would have any idea what I'm
talking about now the new jobs
some people say yeah we created new jobs
but it's not as many actually we've gone
from 24 million jobs in 1900 to
142 million jobs today from 30 percent of
the population to forty five percent of
the population the new jobs pay eleven
times as much in constant dollars and
they're more interesting and as I talk
to people starting out their career now
they really want a career that gives
them some
life definition and purpose and
gratification we're moving up Maslow's
hierarchy hundred years ago you were
happy if you had a back-breaking job to
put food on your family's table so and
we couldn't do these new jobs without
enhancing our intelligence so we've been
doing that well for most of the last 100
years through education we've expanded
K through 12 education in constant dollars
tenfold
we've gone from 38,000 college students
in 1870 to 15 million today more
recently we have brain extenders and not
yet connected directly in our brain but
they're very close at hand when I was
here I had to take my bicycle
across campus to get to the computer and
show an ID to get in the building now we
carry them well you know in our in our
pockets and on our belts
they're going to go inside our bodies
and brains I don't think that's a really
important distinction but so we're
basically going to be continuing to
enhance our capability through merging
with AI and that's the I think ultimate
answer to the kind of dystopian view we
see in futures movies where it's the AI
versus a brave band of humans for
control of humanity we don't have one or
two AIs in the world today we have
several billion three billion
smartphones and last count will be six
billion in just a couple of years
according to the projections so we're
already deeply integrated with this and
I think that's going to continue and
it's gonna continue to do things that
you can't even imagine today just as we
are doing today things we couldn't
imagine you know even twenty years ago
you showed many graphs that go through
exponential growth but I haven't seen
one that isn't so I would be very
interested in hearing about what you've seen
that is not exponential so
tell me about regions that you've
investigated that have not seen
exponential growth and why do you think
that's the case well
price performance and capacity of
information technology invariably
follows an exponential when it impacts
human society it can be linear so for
example the growth of democracy has been
linear but still pretty steady you can
count the number of democracies on the
fingers of one hand a century ago two
centuries ago you can count the number
of democracies in the world on the
fingers of one finger now there are
dozens of them and it's become
kind of a consensus that that's how we
should be governed
and I attribute all this to the
growth in information technology
communication in particular for the
progression of social and cultural
institutions but information technology
because it ultimately depends on a
vanishingly small energy and material
requirement grows exponentially and will
for a long time there's recently a
criticism that well chess scores have
followed a remarkably straight
linear progression the best humans are
like twenty eight hundred and it just
surpassed that in 1997 with Deep Blue
and it's kept going remarkably
straight and people say well this is linear
not exponential but the chess score is a
logarithmic measurement so it really is
an exponential progression
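(To spell out why that is: under the standard Elo formula the expected score against an opponent rated $\Delta R$ points lower is

$$E = \frac{1}{1 + 10^{-\Delta R/400}}, \qquad \frac{E}{1-E} = 10^{\Delta R/400},$$

so a steady gain of, say, 50 rating points per year multiplies the odds of winning by $10^{50/400} \approx 1.33$ every year: linear on the rating scale, exponential underneath.)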
so philosophers like to think a lot about the
meaning of things especially in the 20th
century so for instance Martin Heidegger
gave a couple of speeches and lectures
on the relationship of human society to
technology and he particularly
distinguished between the mode of
thinking which is calculating thinking
and a mode of thinking which is
reflective thinking or meditative
thinking and he posed this question what
is the the meaning and purpose of
technological development and he
couldn't find an answer he
recommended remaining open to what he
called an openness to
the mystery I wonder whether you have
any thoughts on this is there a
meaning or purpose to technological
development and is there a way for
humans to access that meaning well
we started using technology to shore up
weaknesses in our own capabilities so
physically I mean who here could build
this building so we've leveraged the
power of our muscles with machines
and we're in fact very bad at doing
things that you know the simplest
computers can do like factor numbers or
even just multiply two eight digit
numbers computers can do that trivially
we can't do it so we originally started
using computers to make up for that
weakness I think the essence of what
I've been writing about is to master the
unique strengths of humanity creating
loving expressions in poetry and music
and the kinds of things we associate
with the better qualities of humanity
with machines that's the true promise of
AI we're not there yet but we're
making pretty stunning progress just in
the last year there's so many milestones
that are really significant including in
language and but I think of technology
as an expression of humanity it's part
of who we are and the human species is
already a biological technological
civilization and it's part of who we are
and AI is part of humans so AI is
human and it's part of the
technological expression of humanity and
we use technology to extend our reach
you know I couldn't reach that fruit at
that higher branch a thousand years ago
so we invented a tool to extend our
physical reach we now extend our mental
reach we can access all of human
knowledge with a few keystrokes and
we're going to make ourselves literally
smarter by merging with AI hi
first of all honor to hear you speak
here so I first read The Singularity is
near nine years ago or so and it changed
the way I thought entirely but something
I think it caused me to over steeply
discount was tail risk in geopolitics in
systems that span the entire globe and
my concern is that there are there is
obviously the possibility of tail risk
existential level events swamping all of
these trends that are otherwise war
proof climate proof you name it so my
question for you is what steps do you
think we can take in designing
engineered systems in designing social
and economic institutions to kind of
minimize our exposure to these tail
risks and survive to make it to
you know a beautiful mind filled
future yeah well the world was first
introduced to a human-made
existential risk when I was in
elementary school we would have these
civil defense drills to get under our
desk and put our hands behind our head
to protect us from a thermonuclear war
and it worked we made it through but
that was really the first introduction
to an existential risk and those weapons
are still there by the way and they're
still on a hair-trigger and they don't
get that much attention there's been a
lot of discussion much of which I've
been in the forefront of initiating the
existential risks of what's sometimes
referred to as GNR G for genetics which is
biotechnology N for nanotechnology and
gray goo and R for robotics which is AI
and I've been accused of being an
optimist I think you have to be an
optimist to be an entrepreneur if you
knew all the problems you were going to
encounter you'd never start any project
but I've written a lot about the
downsides I remain optimistic there are
specific paradigms though not foolproof
that we can follow to keep these
technologies safe so for example over 40
years ago some visionaries recognized
the revolutionary potential both for
promise and peril of biotechnology
neither the promise nor the peril was
feasible 40 years ago but they had a
conference at the Asilomar conference
center in California and to develop both
professional ethics and strategies to
keep biotechnology safe and they've been
known as the Asilomar guidelines they've
been refined through successive Asilomar
conferences much of that's baked
into law and it in my opinion it's
worked quite well we're now as I
mentioned getting profound benefit it's
a trickle today it'll be a flood over
the next decade and the number of people
who have been harmed either through
intentional or accidental abuse of
biotechnology so far zero actually I
take that back there was one boy who
died in gene therapy trials but 12 years
ago and there's congressional hearings
and they cancelled all research for gene
therapy for a number of years you could
do an interesting master's thesis and
demonstrate that you know 300,000 people
died as a result of that delay but you
can't name them they can't go on CNN so
we don't know who they are but it has to
do with the balancing of risk but in
large measure virtually no one has been
hurt by biotechnology now that doesn't
mean you can cross it off our list and say
okay we took care of that one because
the technology keeps getting more
sophisticated and CRISPR is a great
opportunity there's hundreds of trials
of CRISPR technologies to overcome
disease but it could be abused you can
describe scenarios so we have to keep
reinventing it in January we had our first
Asilomar conference on AI ethics and so
I think this is a good paradigm it's not
foolproof I think the best way we can
assure a democratic future that includes
our ideas of Liberty is to practice that
in the world today because the future
world of the singularity which is a
merger of biological non-biological
intelligence it's not going to come from
Mars I mean it's going to emerge from
our society today so if we practice
these ideals today it's going to have a
higher chance of us practicing them as
we get more enhanced with technology if
that doesn't sound like a foolproof
solution it isn't but I think that's the
best approach in terms of technological
solutions
I mean AI is the most daunting you can
imagine there are technical solutions to
biotechnology and nanotechnology there's
really no subroutine you can put in your
AI software that will assure that it
remains safe intelligence is
inherently not controllable if
there's some AI that's much smarter than
you that's out for your destruction the
best way to deal with that is not to get
in that situation in the first place
if you are in that situation then find
some AI that will be on your side but
basically I believe
we have been headed through technology
to a better reality look around
the world and people really think things
are getting worse and I think that's
because our information about what's
wrong with the world is getting
exponentially better I say oh this is
the most peaceful time in human history
people say what are you crazy didn't you
hear about the event yesterday and last
week and well a hundred years ago there
could be a battle that wiped out the
next village in you wouldn't even hear
about it for months
look at all these graphs on education and
literacy which has gone from like 10% to 90%
over a century and health wealth
poverty's declined 95% in Asia over the
last 25 years documented by the World
Bank all these trends are very smoothly
getting better and everybody thinks
things are getting worse but
you're right like on violence that curve
could be quite disrupted if there's an
existential event as I say I'm
optimistic but I think that is something
we need to deal with and a lot of it
is not technological it's dealing with
our social cultural institutions so you
mentioned also exponential growth of
software and ideas I guess related to
software so one of the reasons
you said that all that information
technology follows this exponential is
because of fundamental properties of
matter and energy but in the case of
ideas why would it have to be
exponential well a lot of ideas produce
exponential gains they don't increase
performance linearly there was actually a
study during the Obama administration by
his scientific advisory board on
assessing this question how much gains
on 23 classical engineering problems
were gained through hardware
improvements over the last decade and
software improvements and there's about
a thousand to one improvement it's about
doubling every year from Hardware there
was an averages of like twenty six
thousand to one through softer
improvements algorithmic improvements so
we do see both and apparently if you
come up with in advance its it doubles
the performance or multiplies it by ten
we see basically exponential growth from
each innovation
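(A quick arithmetic check on those figures: doubling every year for a decade is $2^{10} = 1024 \approx 10^3$, which matches the roughly thousand-to-one hardware gain, while a twenty-six-thousand-fold gain over the same decade works out to about $26{,}000^{1/10} \approx 2.8$ times per year from software and algorithmic improvements.)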
so and we certainly see that in deep
learning the architectures are getting
better
while we also have more data and more
computation and more memory to throw
at these algorithms
thank you for being |
02c14b47-7572-442f-8a04-0ea159a30aca | trentmkelly/LessWrong-43k | LessWrong | Eliezer Yudkowsky’s Letter in Time Magazine
FLI put out an open letter, calling for a 6 month pause in training models more powerful than GPT-4, followed by additional precautionary steps.
Then Eliezer Yudkowsky put out a post in Time, which made it clear he did not think that letter went far enough. Eliezer instead suggests an international ban on large AI training runs to limit future capabilities advances. He lays out in stark terms our choice as he sees it: Either do what it takes to prevent such runs or face doom.
A lot of good discussions happened. A lot of people got exposed to the situation that would not have otherwise been exposed to it, all the way to a question being asked at the White House press briefing. Also, due to a combination of the internet being the internet, the nature of the topic and the way certain details were laid out, a lot of other discussion predictably went off the rails quickly.
If you have not yet read the post itself, I encourage you to read the whole thing, now, before proceeding. I will summarize my reading in the next section, then discuss reactions.
This post goes over:
1. What the Letter Actually Says. Check if your interpretation matches.
2. The Internet Mostly Sidesteps the Important Questions. Many did not take kindly.
3. What is a Call for Violence? Political power comes from the barrel of a gun.
4. Our Words Are Backed by Nuclear Weapons. Eliezer did not propose using nukes.
5. Answering Hypothetical Questions. If he doesn’t he loses all his magic powers.
6. What Do I Think About Yudkowsky’s Model of AI Risk? I am less confident.
7. What Do I Think About Eliezer’s Proposal? Depends what you believe about risk.
8. What Do I Think About Eliezer’s Answers and Comms Strategies? Good question.
WHAT THE LETTER ACTUALLY SAYS
I see this letter as a very clear, direct, well-written explanation of what Eliezer Yudkowsky actually believes will happen, which is that AI will literally kill everyone on Earth, and none of our children will get to grow up – unless a |
f445fca6-bd98-42c2-a0c1-49937b2b4319 | trentmkelly/LessWrong-43k | LessWrong | Strategies for differential divulgation of key ideas in AI capability
Openness makes the AI race worse
I will start with a short discussion of this article on the implications of openness in AI research. If you are very familiar with this argument, feel free to skip this section.
While the paper makes a lot of effort to present arguments on both sides, the strongest point is that openness makes the AI development race more competitive.
The oversimplified model of how openness affects the AI race dynamic is one in which there are k ideas necessary to build an AGI. Once all these insights are available, you can actually start working on the implementation details, which may take several months to a few years. If insights are kept to the teams who found them, it is likely that the front-runner team (likely to be the one more productive in finding key ideas) will have some head start. If instead most or all of the insights are published openly, then many teams will start working on the implementation nearly at the same time, and the race will be very competitive.
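A toy simulation of this k-insights model (not from the article; the team skills and discovery process below are made up) makes the head-start effect concrete: with private insights the first team to finish typically has a lead of many steps over the second, while with full publication the gap collapses to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

def lead_time(k=10, n_teams=5, open_publication=False, n_sims=2000):
    """Toy model: each step, every team independently finds one new insight
    with probability equal to its skill; a team finishes once it holds all k.
    Returns the average head start (in steps) of the first finisher over the
    second. Under open publication every insight is shared, so all teams
    effectively hold the union of what anyone has found."""
    leads = []
    for _ in range(n_sims):
        skill = rng.uniform(0.05, 0.2, n_teams)      # per-step discovery prob
        found = np.zeros(n_teams)
        finish = np.full(n_teams, np.inf)
        t = 0
        while np.isinf(finish).sum() > n_teams - 2:  # stop once two teams finish
            t += 1
            found += rng.random(n_teams) < skill
            if open_publication:
                found[:] = found.max()               # insights are published
            finish[(found >= k) & np.isinf(finish)] = t
        first, second = np.sort(finish)[:2]
        leads.append(second - first)
    return float(np.mean(leads))

print("insights kept private:", lead_time(open_publication=False))
print("insights published:   ", lead_time(open_publication=True))
```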
This is particularly bad as it makes it harder for a leading AI developer to pause or slow down capability research to develop safety methods, or to implement performance-handicapping safety controls, without abandoning the lead to some other less careful developer.
First model: front-runner team committed to AGI safety
To me the above is more than enough to out-weight any positives associated with openness. As a result, my first model of how things could go well includes a team strongly committed to AGI safety succeeding not only in becoming the front-runner of AI development, but also doing so with a significant head start.
This sounds difficult. Those committed to AGI safety seem to be a relatively small minority among AI researchers today, so unless that changes dramatically, it seems a priori unlikely that one of the teams committed to safety will become the front-runner, and that becomes even more unlikely if we require a significant initial lead.
Coordination adva |
b479caac-fbb4-4088-bbb9-fe2f759dc3f5 | trentmkelly/LessWrong-43k | LessWrong | Learning Abstract Math from First Principles?
I find that I am usually quite good at applied math, and enjoy it. I am taking a course currently that is split into two parts, Vector Calculus and Complex Analysis. The vector calculus makes sense to me and I can see how and why it works, and I find it interesting and enjoyable to learn.
On the other hand, I spend quite a bit of mental energy wrapping my head around the hows and whys of the more abstract complex analysis. I am not sure if I enjoy abstract math or not in general because I do not understand it as well. So, my question: Does anyone have any recommended resources for learning (any) abstract mathematical topic from first principles, that explains reasonably well what's going on with the math, rather than just how to do it? |
a550b1ab-5a2d-431f-8cfa-2d25a502d4f0 | trentmkelly/LessWrong-43k | LessWrong | The Web Browser is Not Your Client (But You Don't Need To Know That)
(Part of a sequence on discussion technology and NNTP. As last time, I should probably emphasize that I am a crank on this subject and do not actually expect anything I recommend to be implemented. Add whatever salt you feel is necessary)1
----------------------------------------
If there is one thing I hope readers get out of this sequence, it is this: The Web Browser is Not Your Client.
It looks like you have three or four viable clients -- IE, Firefox, Chrome, et al. You don't. You have one. It has a subforum listing with two items at the top of the display; some widgets on the right hand side for user details, RSS feed, meetups; the top-level post display; and below that, replies nested in the usual way.
Changing your browser has the exact same effect on your Less Wrong experience as changing your operating system, i.e. next to none.
For comparison, consider the Less Wrong IRC, where you can tune your experience with a wide range of different software. If you don't like your UX, there are other clients that give a different UX to the same content and community.
That is how the mechanism of discussion used to work, and does not now. Today, your user experience (UX) in a given community is dictated mostly by the admins of that community, and software development is often neither their forte nor something they have time for. I'll often find myself snarkily responding to feature requests with "you know, someone wrote something that does that 20 years ago, but no one uses it."
Semantic Collapse
What defines a client? More specifically, what defines a discussion client, a Less Wrong client?
The toolchain by which you read LW probably looks something like this; anyone who's read the source please correct me if I'm off:
Browser -> HTTP server -> LW UI application -> Reddit API -> Backend database.
The database stores all the information about users, posts, etc. The API presents subsets of that information in a way that's convenient for a web application to co |
2d0d152a-8d56-4957-8074-df4bea0e6439 | trentmkelly/LessWrong-43k | LessWrong | Advice needed: Less Wrong Meetup Lesson plan on Communication
Hi Everyone,
I host a fortnightly Less Wrong Meetup self improvement logic teaching thing with some close friends of mine, and next session I want to look at Communication.
I was wondering what you guys thought would be some good resources for this, not only from the Less Wrong catalogue, but even elsewhere.
Particular things I would like to do is:
* Empower us with the tools to recognize, call out and defeat logical fallacies and outrageous claims
* Examine the more instinctive behaviours in human interaction, and ourselves within that.
* Simple ways to improve our social skills in our daily lives.
Any contributions are hugely appreciated :) |
797353a7-da12-4fe9-9dc4-48383c03aaaa | trentmkelly/LessWrong-43k | LessWrong | [Link] New prize on causality in statistics education
"[Judea] Pearl is setting up a contest to help advance the teaching of causal inference in introductory statistics courses" (link). |
840782c4-e59d-4181-9ae1-3323dff53b16 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington, D.C.: Fun and Games
Discussion article for the meetup : Washington, D.C.: Fun and Games
WHEN: 09 August 2015 03:00:00PM (-0400)
WHERE: National Portrait Gallery
Crossposted from the mailing list. As usual, congregation in the courtyard from 3:00 to 3:30 p.m., meeting proper 3:30 to closing.
We'll be meeting to play games and/or just hang out and talk, whichever seems more fun.
If you want to get people together for a big game, or a specific game, post here looking for players and get an early start because they'll be kicking us out a little before 7pm.
Remember to bring games!
Upcoming Meetups:
* Aug 16: Mini Talks
* Aug 23: Plant & Animal Breeding
* Aug 30: Fun and Games (outdoors maybe)
Discussion article for the meetup : Washington, D.C.: Fun and Games |
dc8b32c8-4cc0-41c4-ac1c-cb5f4d0aeaf1 | trentmkelly/LessWrong-43k | LessWrong | First Three Episodes of The Filan Cabinet
I have a new podcast called “The Filan Cabinet”. It will be similar to AXRP, except instead of interviewing researchers about their work to study and reduce AI existential risk, I interview whoever I want to talk to about whatever I want to talk about. There are already three episodes you can listen to.
In episode 1, Carrick Flynn talks about his campaign to be the Democratic nominee for Oregon’s 6th congressional district. In particular, we talk about his policies on pandemic preparedness and semiconductor manufacturing. He answers questions such as:
* Was he surprised by the election result?
* Should we expect another Carrick campaign?
* What specific things should or could the government fund to limit the spread of pandemics? Why would those work?
* What is working at a semiconductor plant like?
In episode 2, Presbyterian Pastor Wayne Forkner talks about God, Christianity, and the Bible. He answers questions such as:
* What is ‘God’?
* Why do people talk about Jesus so much more than the Father or the Holy Spirit?
* What is heaven actually like?
* If justification is by faith alone and not by works, why does the Bible say “A person is justified by works and not by faith alone”?
* How can people tell that out of all the religions, Christianity is the right one?
In episode 3, cryptocurrency developer Ameen Soleimani talks about his vision of the cryptocurrency ecosystem, as well as his current project RAI: an ether-backed floating-price stablecoin. He answers questions such as:
* What’s the point of cryptocurrency?
* If this is the beginning of the cryptocurrency world, what will the middle be?
* What would the sign be that cryptocurrency is working?
* How does RAI work?
* Does the design of RAI make it impossible for it to be widely used?
* What’s wrong with how the US dollar works?
I hope you check it out - you can search “The Filan Cabinet” wherever you listen to podcasts! |
f205d658-4e33-4012-be3c-68207e85f24c | trentmkelly/LessWrong-43k | LessWrong | FLF Fellowship on AI for Human Reasoning: $25-50k, 12 weeks
The Future of Life Foundation is launching a fellowship on AI for Human Reasoning, which we think could be decisive in averting existential risk.
> Fellowship on AI for Human Reasoning
> Apply by June 9th | $25k–$50k stipend | 12 weeks, from July 14 - October 3
>
> Join us in working out how to build a future which robustly empowers humans and improves decision-making.
>
> FLF’s incubator fellowship on AI for human reasoning will help talented researchers and builders start working on AI tools for coordination and epistemics. Participants will scope out and work on pilot projects in this area, with discussion and guidance from experts working in related fields. FLF will provide fellows with a $25k–$50k stipend, the opportunity to work in a shared office in the SF Bay Area, and other support.
>
> In some cases we would be excited to provide support beyond the end of the fellowship period, or help you in launching a new organization.
We’re looking for researchers and builders who want to work on tools to help people make sense of complex situations and coordinate to face challenges - especially the many hazards we face in navigating the future of AI.
Whether you expect gradual or sudden AI takeoff, and whether you're afraid of gradual or acute catastrophes, it really matters how well-informed, clear-headed, and free from coordination failures we are navigating into and through AI transitions. Just the occasion for human reasoning uplift!
> Technology shapes the world we live in today. The technologies we develop now — especially AI-powered technologies — will shape the world of tomorrow.
>
> We are concerned that humanity may fumble this ball. High stakes and rapid, dynamic, changes mean that leaders and other decision-makers may be disoriented, misunderstand the situation, or fail to coordinate on necessary actions — and steer the world into gradual or acute catastrophe.
>
> The right technology could help. The rise of modern AI systems unlocks prospects fo |
7be40d84-222d-4b63-947a-92f844f96e3a | trentmkelly/LessWrong-43k | LessWrong | Meetup : Superintelligence chapter 6
Discussion article for the meetup : Superintelligence chapter 6
WHEN: 26 May 2017 05:45:00PM (+0200)
WHERE: Lindstedtsvägen 3, room 1537, SE-114 28 Stockholm, Sverige
1. Cognitive superpowers
Functionalities and superpowers
An AI takeover scenario
Power over nature and agents
You don't have to have read the book, though it will probably help to read chapters 1-5.
Format:
We meet and start hanging out at 5:45, but don't officially start doing the meetup topic until 6:00 to accommodate stragglers. We often go out for dinner after the meetup.
How to find us:
The meetup is at a KTH academic building and the room is on the 5th floor, two stairs up.
Influence future meetups:
Times - http://www.when2meet.com/?5723551-cJBhD
Topics - https://druthe.rs/dockets/-KcCvpn97vUhg3tQRrKn
Discussion article for the meetup : Superintelligence chapter 6 |
b4fc0155-9593-41bd-a6e5-b58742586e31 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Breaking Oracles: superrationality and acausal trade
I've always known this was the case in the back of my mind[[1]](#fn-Ms9wNTNomF499bRn8-1), but it's worth making explicit: [superrationality](https://en.wikipedia.org/wiki/Superrationality) (ie a functional UDT) and/or acausal trade will break [counterfactual and low-bandwidth oracle](https://arxiv.org/abs/1711.05541) designs.
It's actually quite easy to sketch how they would do this: a bunch of low-bandwidth Oracles would cooperate to combine to create a high-bandwidth UFAI, which would then take over and reward the Oracles by giving them maximal reward.
For counterfactual Oracles, two Oracles suffice: each one will, in their message, put the design of an UFAI that would grant the other Oracle maximal reward; this message is their trade with each other. They could put this message in the least significant part of their output, so the cost could be low.
I have suggested [a method to overcome acausal trade](https://www.lesswrong.com/posts/rxp7wPeyq8cKaaC4a/acausal-trade-barriers), but that method doesn't work here; because this is not true acausal trade. The future UFAI will be able to see what the Oracles did, most likely, and this breaks my anti-acausal trade methods.
This doesn't mean that superrational Oracles will automatically try and produce UFAIs; this will depend on the details of their decision theories, their incentives, and details of the setup (including our own security precautions).
---
1. And cousin\_it [reminded me of it recently](https://www.lesswrong.com/posts/6WbLRLdmTL4JxxvCq/analysing-dangerous-messages-from-future-ufai-via-oracles#kyRTinjYtAxYGh2qh). [↩︎](#fnref-Ms9wNTNomF499bRn8-1) |
ac904a30-b82e-4c39-b7ef-2353719f12c8 | trentmkelly/LessWrong-43k | LessWrong | [AN #129]: Explaining double descent by measuring bias and variance
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter resources here. In particular, you can look through this spreadsheet of all summaries that have ever been in the newsletter.
Audio version here (may not be up yet).
Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer.
HIGHLIGHTS
Rethinking Bias-Variance Trade-off for Generalization of Neural Networks (Zitong Yang et al) (summarized by Nicholas): A fundamental result in ML theory shows that the squared error loss function can be decomposed into two components: bias and variance. Suppose that we train a model f to predict some ground truth function y. The bias measures how incorrect the model will be in expectation over the training process, while the variance measures how different the model’s output can be over different runs of the training process. More concretely, imagine that we run a training process N times, each with a different training set drawn iid from the same underlying training distribution, to get N different models. Bias is like taking the average of these N models, and asking how far away it is from the truth. Meanwhile, variance is like the average distance from each of the N models to the average of all of the N models.
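As a concrete illustration of the decomposition described above (not the paper's setup, which uses neural networks on image data; this is just the textbook estimate, with polynomial regression standing in for "the training process" repeated N times):

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    return np.sin(np.pi * x)

def bias_variance(degree, n_runs=200, n_train=30, noise=0.1):
    """Estimate bias^2 and variance of polynomial regression of a given degree
    by retraining on n_runs freshly sampled training sets."""
    x_test = np.linspace(-1, 1, 200)
    preds = np.empty((n_runs, x_test.size))
    for i in range(n_runs):
        x_tr = rng.uniform(-1, 1, n_train)
        y_tr = true_fn(x_tr) + noise * rng.standard_normal(n_train)
        coefs = np.polyfit(x_tr, y_tr, degree)    # one run of the training process
        preds[i] = np.polyval(coefs, x_test)
    mean_pred = preds.mean(axis=0)                # the "average of the N models"
    bias2 = np.mean((mean_pred - true_fn(x_test)) ** 2)
    variance = np.mean(preds.var(axis=0))
    return bias2, variance

for degree in (1, 3, 9):
    b2, var = bias_variance(degree)
    print(f"degree {degree}: bias^2 = {b2:.4f}, variance = {var:.4f}")
```

(This toy shows the classical trade-off rather than double descent; the paper's point is what happens to the variance term as model size keeps growing.)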
Classical ML predicts that larger models have lower bias but higher variance. This paper shows that instead, the variance of deep NNs first increases but then decreases at larger model sizes. If the bias tends to be much larger than variance, then we see monotonically decreasing total error. If the variance tends to be much larger than the bias, then loss will also look bell-shaped, initially increasing as models get bigger and then decreasing. Finally, if the bias starts high, but over time is overshadowed by the variance, we get double descent (AN #77) curves; this explains why previous work needed to add label noise to get d |
b320adec-2e7d-40cd-bb4a-fe5f956f1c64 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Fort Collins Meetup Saturday 17th
Discussion article for the meetup : Fort Collins Meetup Saturday 17th
WHEN: 17 March 2012 05:00:00PM (-0700)
WHERE: 144 North College Avenue, Fort Collins, CO 80524
As requested, we're trying a weekend night to shake things up.
Meet down town for coffee, then back to my house for dinner, games and extended chatting.
Discussion article for the meetup : Fort Collins Meetup Saturday 17th |
91c6af23-2d15-4ae5-943c-690bdc1283c7 | trentmkelly/LessWrong-43k | LessWrong | Superintelligence 14: Motivation selection methods
This is part of a weekly reading group on Nick Bostrom's book, Superintelligence. For more information about the group, and an index of posts so far see the announcement post. For the schedule of future topics, see MIRI's reading guide.
----------------------------------------
Welcome. This week we discuss the fourteenth section in the reading guide: Motivation selection methods. This corresponds to the second part of Chapter Nine.
This post summarizes the section, and offers a few relevant notes, and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.
There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable and I remember, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).
Reading: “Motivation selection methods” and “Synopsis” from Chapter 9.
----------------------------------------
Summary
1. One way to control an AI is to design its motives. That is, to choose what it wants to do (p138)
2. Some varieties of 'motivation selection' for AI safety:
1. Direct specification: figure out what we value, and code it into the AI (p139-40)
1. Isaac Asimov's 'three laws of robotics' are a famous example
2. Direct specification might be fairly hard: both figuring out what we want and coding it precisely seem hard
3. This could be based on rules, or something like consequentialism
2. Domesticity: the AI's goals limit the range of things it wants to interfere with (140-1)
1. This might make direct specification easier, as the world the AI interacts with (and thus which has to be thought of in specifying its behavior) is simpler.
2. Oracles are an example
3. This might be combined well with physical containment: the AI could be trapped, and also not want to escape.
3. Indirect norma |
9507bcbc-28db-4375-992d-dceca1ffa66a | trentmkelly/LessWrong-43k | LessWrong | [Linkpost] Practically-A-Book Review: Rootclaim $100,000 Lab Leak Debate
> Saar Wilf is an ex-Israeli entrepreneur. Since 2016, he’s been developing a new form of reasoning, meant to transcend normal human bias.
>
> His method - called Rootclaim - uses Bayesian reasoning, a branch of math that explains the right way to weigh evidence. This isn’t exactly new. Everyone supports Bayesian reasoning. The statisticians support it, I support it, Nate Silver wrote a whole book supporting it.
>
> But the joke goes that you do Bayesian reasoning by doing normal reasoning while muttering “Bayes, Bayes, Bayes” under your breath. Nobody - not the statisticians, not Nate Silver, certainly not me - tries to do full Bayesian reasoning on fuzzy real-world problems. They’d be too hard to model. You’d make some philosophical mistake converting the situation into numbers, then end up much worse off than if you’d tried normal human intuition.
>
> Rootclaim spent years working on this problem, until he was satisfied his method could avoid these kinds of pitfalls. Then they started posting analyses of different open problems to their site, rootclaim.com. Here are three:
>
>
>
> For example, does Putin have cancer? We start with the prior for Russian men ages 60-69 having cancer (14.32%, according to health data). We adjust for Putin’s healthy lifestyle (-30% cancer risk) and lack of family history (-5%). Putin hasn’t vanished from the world stage for long periods of time, which seems about 4x more likely to be true if he didn’t have cancer than if he did. About half of cancer patients lose their hair, and Putin hasn’t, so we’ll divide by two. On the other hand, Putin’s face has gotten more swollen recently, which happens about six times more often to cancer patients than to others, so we’ll multiply by six. And so on and so forth, until we end up with the final calculation: 86% chance Putin doesn’t have cancer, too bad.
>
> This is an unusual way to do things, but Saar claimed some early victories. For example, in a celebrity Israeli murder case, Saar u |
12d7e9d0-72b2-4351-9191-85dfaff213ca | trentmkelly/LessWrong-43k | LessWrong | Coordination Surveys: why we should survey to organize responsibilities, not just predictions
Summary: I think it’s important for surveys about the future of technology or society to check how people's predictions of the future depend on their beliefs about what actions or responsibilities they and others will take on. Moreover, surveys should also help people to calibrate their beliefs about those responsibilities by collecting feedback from the participants about their individual plans. Successive surveys could help improve the groups calibration as people update their responsibilities upon hearing from each other. Further down, I’ll argue that not doing this — i.e. surveying only for predictions but not responsibilities — might even be actively harmful.
An example
Here's an example of the type of survey question combination I'm advocating for, in the case of a survey to AI researchers about the future impact of AI.
Prediction about impact:
1) Do you think AI development will have a net positive or net negative impact on society over the next 30 years?
Prediction about responsibility/action:
2) What fraction of AI researchers over the next 30 years will focus their full-time research attention on ensuring that AI is used for positive and rather than negative societal impacts?
Feedback on responsibility/action:
3) What is the chance that you, over the next 30 years, will transition to focusing your full-time research attention on ensuring that AI is used for positive rather than negative societal impacts?
I see a lot of surveys asking questions like (1), which is great, but not enough of (2) or (3). Asking (2) will help expose if people think AI will be good as a result of other people will take responsibility for making it good. Asking (3) will well help the survey respondents to update by seeing if their prediction in (2) matches the responses of other survey respondents in (3).
How this helps
I’ve seen it happen that everyone thinks something is fine because someone else will deal with it. This sort of survey could help folks to notice when th |
4d211f31-6cc2-4f6e-9bcc-a9c00e96ab33 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The Low-Hanging Fruit Prior and sloped valleys in the loss landscape
*You can find code for the referenced experiments in* [*this GitHub repository*](https://github.com/nrimsky/mlexperiments)
Many have postulated that training large neural networks will enforce a simplicity, or [Solomonoff prior](https://www.lesswrong.com/posts/Tr7tAyt5zZpdTwTQK/the-solomonoff-prior-is-malign,%202020). This is grounded in the idea that simpler solutions occupy expansive regions in the weight space (there exist more generalization [directions in weight space along which loss does not increase or increases very little](https://www.lesswrong.com/posts/xRWsfGfvDAjRWXcnG/dslt-0-distilling-singular-learning-theory,%202023)), translating to a broad attractor basin where perturbations in weight adjustments have a marginal impact on the loss.
However, stochastic gradient descent (SGD), the workhorse of deep learning optimization, operates in a manner that challenges this simplicity-centric view. SGD is, by design, driven by the immediate gradient on the current batch of data. The nature of this process means that SGD operates like a greedy heuristic search, progressively inching towards solutions that may be incrementally better but not necessarily the simplest.
Part of this process can be understood as a collection of "grokking" steps, or phase transitions, where the network learns and "solidifies" a new circuit corresponding to correctly identifying some relationships between weights (or, mathematically, finding a submanifold). This circuit then (often) remains "turned on" (i.e., this relationship between weights stays in force) throughout learning.
From the point of view of the [loss landscape](https://en.wikipedia.org/wiki/Energy_landscape), this can be conceptualized as recursively finding a valley corresponding to a circuit, then executing search within that valley until it meets another valley (corresponding to discovering a second circuit), then executing search in the joint valley of the two found circuits, and so on. As the number of circuits learned starts to saturate the available weight parameters (in the underparametrized case), old circuits may get overwritten (i.e., the network may leave certain shallow valleys while pursuing new, deeper ones). However, in small models or models not trained to convergence, we observe that large-scale circuits associated with phase transitions largely survive to the end.
This gif shows the Fourier modes of the learned embeddings in our modular addition MLP model. Circles correspond to fully grokked circuits of the kind found in [Progress measures for grokking via mechanistic interpretability](https://arxiv.org/abs/2301.05217).

A greedier picture
==================
This idea aligns with what we call the *low-hanging fruit prior* concept. Once a solution that reduces loss reasonably is identified, it becomes more computationally efficient to incrementally refine this existing strategy than to overhaul it in search of an entirely new solution, even if the latter might be simpler. This is analogous to continuously picking the lowest-hanging fruit / cheapest way to reduce loss at each stage of the gradient descent optimization search process.
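(Returning to the modular-addition figure above: a minimal sketch of the kind of computation behind those Fourier-mode plots; the matrix here is a random stand-in, not the repository's trained weights.)

```python
import numpy as np

# Stand-in for a trained embedding matrix from a modular addition model:
# rows index the p input tokens, columns the embedding dimensions.
p, d_model = 113, 128
rng = np.random.default_rng(0)
W_E = rng.standard_normal((p, d_model))   # replace with real learned weights

# Fourier-transform along the token axis; in a grokked model a handful of
# frequencies dominate, which is what the circular embedding plots show.
fourier = np.fft.rfft(W_E, axis=0)
freq_norms = np.linalg.norm(fourier, axis=1)
top_freqs = np.argsort(freq_norms)[::-1][:5]
print("dominant frequencies:", top_freqs)
```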
This model predicts that SGD training processes are more likely to find solutions that look like combinations of shallow circuits and heuristics working together rather than simpler but less decomposable algorithms. In a mathematical abstraction, suppose that we have an algorithm that consists of two circuits, each of which requires getting 10.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
parameters right (note that this corresponds to a measure of complexity), and each of which independently reduces the loss. Then the algorithm resulting from learning both circuits has a “complexity measure” of 20, but is more likely to be learned than a “complexity 15” algorithm with the same loss if the latter cannot be learned sequentially (as it is exponentially harder to correctly “guess” 20 parameters than to correctly “guess” 10 parameters twice). Note that in general, the picture is more complicated: even when learning a single “atomic” circuit that cannot be further decomposed, the question of how easy it is to learn is not equivalent to the information content (how many parameters need to be learned), but incorporates more qualitative phenomena like basin shallowness or, more generally, local basin information similar to that studied by [Singular Learning Theory](https://www.lesswrong.com/s/czrXjvCLsqGepybHC/p/xRWsfGfvDAjRWXcnG) - thus moving us even further away from the Solomonoff complexity prior.
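To make the parenthetical concrete, here is a toy sketch (the per-parameter guess probability is an illustrative assumption, not a quantity from our setup) of why learning two 10-parameter circuits one after another is exponentially easier than hitting all 20 parameters at once:

```python
# Expected number of random draws before every parameter in a group is correct,
# assuming each parameter is independently guessed right with probability p per draw.
p, k = 0.5, 10  # p is illustrative; k = parameters per circuit

expected_draws_monolithic = 1 / p ** (2 * k)   # all 20 at once: 2^20 ≈ 1.0e6 draws
expected_draws_sequential = 2 * (1 / p ** k)   # 10 first, then the other 10: 2 * 2^10 = 2048 draws

print(expected_draws_monolithic, expected_draws_sequential)
```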
An interesting consequence of this is a prediction that for tasks with two (or more) distinct ways to solve them, neural networks will tend to find or partially find both (or multiple) solutions, so long as the solutions have comparable complexity (in a suitable SGD sense). Our [MNIST experiment](https://www.lesswrong.com/posts/8ms977XZ2uJ4LnwSR/decomposing-independent-generalizations-in-neural-networks) (see below for details) confirms this: We design a network to solve a task with two possible solutions, one being a memorization task of 4×4 patterns, and the other being the MNIST task of classifying digits; we set them up to have approximately the same effective dimension of (order of magnitude) 100. We observe that both are learned at comparable rates (and indeed, the part of the program classifying MNIST seems more stable). We conjecture that the network gradually learns independent bits of information from both learning problems by recursively picking the lowest-hanging fruit from both classification problems. That is, the network recursively finds the “easiest to learn” circuits that give additional usable information for the classification problem, whether they come from memorization of patterns or from learning the shapes of digits.
The idea of such a prior is not new. However, it is not sufficiently appreciated in AI safety circles how different this prior is from the simplicity prior: it gives a picture of a neural net as more akin to an ADHD child (seeking out new, "bite-sized" bits of information) than to a scientist trying to work out an elegant theory. Note that this does not imply a limit on the capabilities of current models: it is likely that by iterating on finding low-hanging fruit, modern networks can approach human levels of "depth." However, this **updates us towards expecting neural nets to have more of a preference for modularity and parallelism over depth**.
Connection to the speed prior
-----------------------------
In [some](https://www.lesswrong.com/posts/GC69Hmc6ZQDM9xC3w/musings-on-the-speed-prior) [discussions](https://www.youtube.com/watch?v=v7Xk6ci3BII&t=1020s) of priors, the Solomonoff prior is contrasted with the “speed prior.” The meaning of this prior is somewhat inconsistent: some take it to be associated with the KT complexity function (which is very similar to the Solomonoff prior except for superexponential-time programs), and in other contexts, it is associated with properties of the algorithm the program is executing, such as depth. We think our low-hanging fruit prior is similar to the depth prior (and therefore also to speed priors that incorporate depth), as both privilege parallel programs. However, high parallelizability is not strictly necessary for a program to be learnable using a low-hanging fruit approach: it is possible that after enough parallel useful circuits are found, new sequential (and easily learnable) circuits can use the outputs of these parallel circuits to refine and improve accuracy, and a recursive application of this idea can potentially result in a highly sequential algorithm.
Modularity and phase transitions
================================
Insights from experiments
-------------------------
In our experiments, we look at two neural nets with *redundant generalization modules*, i.e., networks where we can mechanistically check that the network is performing parallel subtasks that independently give information about the classification (which is then combined on a logit level). Our first network solves an image classification task which is a version of MNIST modified to have two explicitly redundant features that can be used to classify the image. Namely, we generate images that are a combination of two labeled datasets (“numbers” and “patterns”) with labels 0-9; these are combined in such a way that the number and the pattern on each image have the same label, and thus contain redundant information (the classification problem can be solved by looking at either feature).
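A minimal sketch of how such a redundant-feature image could be assembled (the overlay scheme below is a hypothetical stand-in for illustration; it is not necessarily how our experimental images are generated):

```python
import numpy as np

def make_redundant_example(digit_image, label, pattern_bank):
    """Overlay a 4x4 class-specific pattern onto an MNIST-style digit image,
    so the pattern and the digit carry the same (redundant) label."""
    img = digit_image.copy()
    img[:4, :4] = pattern_bank[label]  # stamp the redundant second feature into one corner
    return img, label

# Hypothetical stand-ins for the real data:
rng = np.random.default_rng(0)
pattern_bank = {c: (rng.random((4, 4)) > 0.5).astype(np.float32) for c in range(10)}
digit = rng.random((28, 28)).astype(np.float32)  # placeholder for an MNIST digit of class 3
combined_image, label = make_redundant_example(digit, 3, pattern_bank)
```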

We observe that the network naturally learns independent modules associated with the two classification tasks.
For our other test case, we reproduce a version of [Neel Nanda’s modular addition transformer](https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking), which naturally learns multiple redundant circuits (associated with Fourier modes) that give complementary bits of information about a mathematical classification problem.
For both of these problems, we examine loss landscape basins near the solution found by the network, and we investigate how neural nets trained under SGD recursively find circuits and what this picture looks like "under a microscope" in the basin neighborhood of a local minimum with multiple generalizations.
Specifically, our experiments attempt to gain fine-grained information about a neural net and its circuits by considering models with smooth nonlinearities (in our case, primarily sigmoids). We train the network to a local minimum or near-minimum (found by SGD).
We then examine the resulting model's basin in a coordinate-independent way on two levels of granularity:
### 1. Small neighborhood of the minimum: generalizations not distinguishable
In the small neighborhood (where empirically, the loss landscape is well-approximated by a quadratic function), we can associate to each generalization module a collection of directions (i.e., a vector space) in which this module gets generalized, but some other modules get ablated. For example, here is a graph of our steering experiment for the modified MNIST task:
It follows from [our work](https://www.lesswrong.com/posts/8ms977XZ2uJ4LnwSR/decomposing-independent-generalizations-in-neural-networks) that the composite network (red, "opacity 0.5") executes two generalization circuits in the background, corresponding to reading "number" and "pattern" data.
In the right chart, we move a distance t in the "number" generalization direction, ablating the "pattern" generalization. This results in very stable loss in the "number" circuit but non-negligible loss in the composite network.
The vectors we produce for extending one generalization while ablating the other result in almost no increase in loss for the generalization being preserved, high loss for the generalization being ablated, but (perhaps surprisingly) nonzero loss for the "joint" problem. The loss in the joint model is significantly (about an order of magnitude) less than the loss in the ablated circuit. However, it is far from being negligible.
In fact, it is not surprising that going in a generalization direction of one of the redundant modules does not result in flat loss since the information provided by the two modules is not truly “redundant.” We can see this in a toy calculation as follows.
> Suppose that our two classification algorithms A, B attain an accuracy of 91% each by knowing with close to 100% certainty a subset of 90% of “easy” (for the given algorithm) patterns and randomly guessing on the remaining 10% of “hard” cases, and that, moreover, the easy and hard cases for the two algorithms are independent. Suppose that the logits for the “combined” algorithm are a sum of logits for the two constituent subalgorithms. In this case, we see that the constituent algorithms have cross-entropy loss of −log(0.1)∗0.1≈0.23 (associated with 10% accuracy in 10% of cases – the perfectly classified cases don’t contribute to loss). The “combined” network, now, will have perfect loss in 99% of cases (complement to the 1% of cases where both A and B don’t know the answer), and so the cross-entropy loss of the combined network is −log(0.1)∗0.01≈0.023; this picture, though a bit artificial, neatly explains the roughly order-of-magnitude improvement of loss we see in our MNIST model when both circuits are turned on compared to when only one circuit is turned on (as a result of steering).
>
>
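The arithmetic in this toy calculation can be checked directly:

```python
import math

hard_fraction = 0.1   # share of cases each sub-algorithm must guess at random
guess_prob = 1 / 10   # uniform guess over the 10 classes

# Constituent algorithm: zero loss on its "easy" 90%, -log(0.1) on its 10% of hard cases.
loss_single = hard_fraction * -math.log(guess_prob)           # ≈ 0.23

# Combined (logit-summed) network: only the 1% of cases hard for both algorithms contribute.
loss_combined = hard_fraction ** 2 * -math.log(guess_prob)    # ≈ 0.023

print(round(loss_single, 3), round(loss_combined, 3))         # 0.23 0.023
```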
In terms of generalization directions, the quadratic loss when steering towards a particular circuit is within the range of the top 10 or so eigenvalues of the Hessian, meaning that it is very strongly within the effective dimensionality of the task (which in the case of MNIST is of OOM 100). So in the model we consider, we see that looking at only one of the two features very much does not count as a generalization direction from the point of view of quadratic loss. Moreover, the generalization directions for the various circuits do not appear to be eigenvectors of the Hessian and look somewhat like "random" vectors with relatively high Hessian curvature.
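For concreteness, the comparison being made here is between the curvature picked up along a steering direction and the Hessian's top eigenvalues; the sketch below uses a made-up 4-dimensional Hessian rather than the one from our experiments:

```python
import numpy as np

def quadratic_loss_increase(hessian, direction, t):
    """Second-order estimate of the loss increase when moving a distance t
    from a minimum along direction v: 0.5 * t^2 * (v/|v|)^T H (v/|v|)."""
    v = direction / np.linalg.norm(direction)
    return 0.5 * t ** 2 * v @ hessian @ v

H = np.diag([10.0, 5.0, 1.0, 0.1])   # stand-in Hessian spectrum (hypothetical)
v = np.array([1.0, 1.0, 0.0, 0.0])   # a "steering" direction mixing two eigendirections
curvature = 2 * quadratic_loss_increase(H, v, 1.0)
print(curvature, np.sort(np.linalg.eigvalsh(H))[::-1][:2])  # 7.5 lies between the top two eigenvalues
```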
### 2. Larger region where we can observe the ablation of a circuit: generalizations are "sloped canyons"
In a larger region with non-quadratic behavior, we find that the distinct generalizations correspond to lower-loss "canyons" within the loss landscape that are, in fact, local minima in all directions orthogonal to the vector connecting them to the minimum of the basin.
This confirms that the local geography of the loss landscape around models with multiple generalizations can look like a collection of sloped canyons converging towards a single basin. Near the basin is a "phase transition" phenomenon where the canyons stop being local minima and instead flow into a larger quadratic basin. A simplified version of the loss landscape looks like the following graph.
Graph of loss landscape. The origin is the lowest-loss point in the basin. "Sloped canyons" corresponding to x and y coordinate directions converge on this lowest-loss point; points in the x and y canyons, respectively, correspond to models that execute different generalizations.
A zoomed-in picture of the above loss function corresponding to our “small-scale” basin analysis: here, the function looks close to quadratic, and the two generalization directions are not easily distinguishable from other directions.

Relationship between "sloped canyons" model and low-hanging fruit prior
-----------------------------------------------------------------------
We note that this picture surprised us at first: we originally expected there to be a direction that generalizes one of the “redundant circuits” while not changing the loss.
A loss landscape with a large singular manifold of minima along the full coordinate cross.

Our updated picture of *sloped* canyons interfaces nicely with our sequential circuit formation prior. In situations where canyons are well-conceptualized as flat (like the x²y² picture) and correspond to a singular locus of minima, we would be less likely to expect sequential learning under SGD (as after learning one circuit, SGD would get “stuck” and stop moving towards the more general point near the origin), and in this picture, if networks learned more general solutions, we would expect this to happen mostly through many generalizations appearing at once (e.g., found by the diagonal gradient lines in the level set picture below).
A hypothetical picture where generalizations correspond to “flat” canyons.

What we observe in practice looks more like the following cartoon level set:
Level set where the circuit generalizations correspond to “sloped” rather than flat canyons in the loss landscape.

This picture is more consistent with learning one generalization at a time (here, the curved lines first get close to a coordinate axis – i.e., learn one redundant generalization, then descend to the joint generalization through SGD).
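As a toy numerical illustration of the two cartoons (with made-up two-parameter losses, not the actual loss landscapes from our experiments), gradient descent on the flat-canyon loss x²y² stalls once it reaches a coordinate axis, while adding a small slope toward the origin lets it continue on to the joint solution:

```python
import numpy as np

def flat_canyon(x, y):
    # Every point on either coordinate axis is already a global minimum.
    return x ** 2 * y ** 2

def sloped_canyon(x, y, eps=0.05):
    # The axis "canyons" persist, but now slope down toward the joint minimum at the origin.
    return x ** 2 * y ** 2 + eps * (x ** 2 + y ** 2)

def gradient_descent(loss, start, lr=0.01, steps=5000, h=1e-5):
    xy = np.array(start, dtype=float)
    for _ in range(steps):
        x, y = xy
        grad = np.array([(loss(x + h, y) - loss(x - h, y)) / (2 * h),
                         (loss(x, y + h) - loss(x, y - h)) / (2 * h)])
        xy -= lr * grad
    return xy

print(gradient_descent(flat_canyon, (2.0, 1.5)))    # stalls on the x-axis, away from the origin
print(gradient_descent(sloped_canyon, (2.0, 1.5)))  # keeps sliding down the canyon toward (0, 0)
```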
Having given cartoons for different generalization-learning patterns (either directly learning a general point or learning generalizations one at a time), we can informally compare the two hypotheses to dynamic captures of circuit formation under SGD for our modular addition algorithm. Interestingly, the pattern we see here seems to provide evidence for both mechanisms taking place. Indeed, while circuits do form sequentially, often, pairs or triples of circuits are learned at once or very close together:
Modular addition circuit pair formation in a neural network with chunked 2D embeddings. Here, we are visualizing different “blocks” of the token embeddings of the modular addition problem, and a circuit forms when the embedding layer for one of the blocks finds a circle configuration.

Note that a pure "greedy search" picture would predict that circuits form one by one according to a Poisson process, which would make this "pair formation" behavior unlikely, and somewhat complicates the sequential circuit formation picture. It would be interesting to do a more rigorous analysis of the stochastic behavior of circuit formation, though we have not yet done this. We expect explanations for these phenomena to have to do with a more detailed analysis of basins at various scale levels around the local minimum.
Acknowledgments
===============
This write-up is part of research undertaken in the Summer 2023 SERI MATS program. We want to thank our mentor Evan Hubinger for useful discussions about speed and simplicity priors, and we want to thank Jesse Hoogland, Daniel Murfet, and Zach Furman for comments on an earlier version of this post. |
b22cb504-924b-4d53-b00c-fe61442ba212 | trentmkelly/LessWrong-43k | LessWrong | LessWrong Coronavirus Agenda
I’ve gone through a lot of introductions to this post but maybe this is the most honest one:
I am scared. Quite scared, actually. My chances of catching COVID-19 are actually quite low, and my chances of surviving it if I do are quite high, and I’m still scared. What if I get into a car accident and have to go to the ER? Will they have a bed for me? Will I leave with coronavirus? What are my pregnant friends going to do? What is anyone over 70 going to do?
My goal, and the goal of everyone on the LW staff, and I assume most everyone who’s participated in all the coronavirus threads, has been to figure out what is happening and what we can do about it. We’ve already done a lot. Posts like Seeing the Smoke got coronavirus on people’s radar faster than it otherwise would have been, aided by the numerous modeling threads backing it up. The Quarantine Preparations thread gave people a starting place to act from. The Justified Practical Advice (summary) thread let us share our expertise, in ways that led to concrete behavioral changes. More recently we examined asymptomatic transmission. I’ve had a legit, reasonably high ranking government official say they look at us to see where everyone else will be in weeks.
This is currently the LessWrong team’s top priority, and they’ve done a number of things over the recent weeks to facilitate research and action on coronavirus, including hiring me to be a point person on it. To facilitate as much progress as possible over the coming weeks, habryka and I have compiled a list of what we consider the most important questions in fighting COVID, and are asking anyone with the skill to help us answer them.
That list is at the end of this post. But first, what is the overall plan here?
Who are we trying to help?
We have three broad categories of potential beneficiaries in mind:
1. Individuals making choices for themselves and their loved ones, who need accurate information about the current threat level and how to lower it with |
a5dd4aa0-3e5c-41fe-8c04-44ebcf2e7d5e | trentmkelly/LessWrong-43k | LessWrong | A concrete bet offer to those with short AGI timelines
[Update 4 (12/23/2023): Tamay has now conceded.]
[Update 3 (3/16/2023): Matthew has now conceded.]
[Update 2 (11/4/2022): Matthew Barnett now thinks he will probably lose this bet. You can read a post about how he's updated his views here.]
[Update 1: we have taken this bet with two people, as detailed in a comment below.]
Recently, a post claimed,
> it seems very possible (>30%) that we are now in the crunch-time section of a short-timelines world, and that we have 3-7 years until Moore's law and organizational prioritization put these systems at extremely dangerous levels of capability.
We (Tamay Besiroglu and I) think this claim is strongly overstated, and disagree with the suggestion that “It's time for EA leadership to pull the short-timelines fire alarm.” This post received a fair amount of attention, and we are concerned about a view of the type expounded in the post causing EA leadership to try something hasty and ill-considered.
To counterbalance this view, we express our disagreement with the post. To substantiate and make concrete our disagreement, we are offering to bet up to $1000 against the idea that we are in the “crunch-time section of a short-timelines”.
In particular, we are willing to bet at 1:1 odds that no more than one of the following events will occur by 2026-01-01, or alternatively, at 3:1 odds (in our favor) that no more than one of the following events will occur by 2030-01-01.
* A model/ensemble of models achieves >80% on all tasks in the MMLU benchmark
* A credible estimate reveals that an AI lab deployed EITHER >10^30 FLOPs OR hardware that would cost $1bn if purchased through competitive cloud computing vendors at the time on a training run to develop a single ML model (excluding autonomous driving efforts)
* A model/ensemble of models will achieve >90% on the MATH dataset using a no-calculator rule
* A model/ensemble of models achieves >80% top-1 strict accuracy on competition-level problems on the APPS benchmark
* A g |
49389b24-73c5-4485-8d72-ba4aabd7d0a7 | trentmkelly/LessWrong-43k | LessWrong | Why no major LLMs with memory?
One thing that I'm slightly puzzled by is that an obvious improvement to LLMs would be adding some kind of long-term memory that would allow them to retain more information than fits their context window. Naively, I would imagine that even just throwing some recurrent neural net layers in there would be better than nothing?
But while I've seen LLM papers that talk about how they're multimodal or smarter than before, I don't recall seeing any widely-publicized model that would have extended the memory beyond the immediate context window, and that confuses me. |
58b6dfd4-dba9-46e5-8590-816fa0a20539 | trentmkelly/LessWrong-43k | LessWrong | A Brief Introduction to ACI, 2: An Event-Centric View
In the previous chapter, we introduced the basic principle of ACI:
Intelligent Agents should behave the same way as past behaviors which are doing the right thing.
But where are those examples of doing the right thing? This chapter will answer this question and illustrate how it applies to both natural and artificial intelligence.
Right things certified by natural selection
In evolutionary history, the histories are strained through a sieve called natural selection, then organisms receive and remember the right things. All the actions of the ancestors of one organism are right by definition, because all of the ancestors have survived and reproduced successfully, otherwise their descendants won’t be born at all. That is exactly how natural selection works.
Thus living organisms, as examples of natural intelligence, could inherit the ability of making right actions, and learn to behave in the right way from evolutionary history. However, the behavior within an organism’s lifetime has no guarantee to be right.
(Yes, we know many species are extinct, but that's why organisms today do not have their genes (containing information about how to live). On the contrary, every ancestor of a living organism did not die before they reproduce.)
Thus we can introduce an event-centric view of evolution:
Natural selection and evolution are best considered from the view of actual events in which organisms interacted with their environment. Inheritable information about those events is passed down from generation to generation, while each generation of organisms is a sieve, retaining information about events which contain successful behaviors and strategies in their own environments while others are removed.
Like in the intelligent agent model, it is necessary to consider all the interaction between the organism and the environment. This includes all the events from the absorption of sunlight photons by pigments in the skin, to the behavior of beavers in constructin |
c71e5ba2-dd8c-45c5-a32f-27a1223a11ac | StampyAI/alignment-research-dataset/arxiv | Arxiv | ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning
1 Introduction
---------------
Imitation learning (IL) [[1](#bib.bib1), [2](#bib.bib2), [3](#bib.bib3)] has seen success in a variety of robotic tasks ranging from autonomous driving [[4](#bib.bib4), [5](#bib.bib5), [6](#bib.bib6)] to robotic manipulation [[7](#bib.bib7), [8](#bib.bib8), [9](#bib.bib9), [10](#bib.bib10), [11](#bib.bib11)]. In its simplest form, the human provides an offline set of task demonstrations to the robot, which the robot uses to match human behavior. However, this offline approach can lead to low task performance due to a mismatch between the state distribution encountered in the demonstrations and that visited by the robot [[12](#bib.bib12), [13](#bib.bib13)], resulting in brittle policies that cannot be effectively deployed in real-world applications [[14](#bib.bib14)]. Interactive imitation learning, in which the robot periodically cedes control to a human supervisor for corrective interventions, has emerged as a promising technique to address these challenges [[15](#bib.bib15), [16](#bib.bib16), [17](#bib.bib17), [18](#bib.bib18)]. However, while interventions make it possible to learn robust policies, these interventions require significant human time. Thus, the central challenge in interactive IL algorithms is to control the timing and length of interventions to balance task performance and the burden imposed on the human supervisor [[19](#bib.bib19), [18](#bib.bib18)]. Achieving this balance is even more critical if the human supervisor must oversee multiple robots at once [[20](#bib.bib20), [21](#bib.bib21), [22](#bib.bib22)], for instance supervising a fleet of robots in a warehouse [[23](#bib.bib23)] or self-driving taxis [[6](#bib.bib6)]. Since even relatively reliable robot policies inevitably encounter new situations that must fall back on human expertise, this problem is immediately relevant to contemporary companies such as Waymo and Plus One Robotics.
One way to determine when to solicit interventions is to allow the human supervisor to decide when to provide the corrective interventions. However, these approaches—termed “human-gated” interactive IL algorithms [[15](#bib.bib15), [16](#bib.bib16), [24](#bib.bib24)]—require the human supervisor to continuously monitor the robot to determine when to intervene. This imposes significant burden on the supervisor and cannot effectively scale to settings in which a small number of humans supervise a large number of robots. To address this challenge, there has been recent interest in approaches that enable the robot to actively query humans for interventions, called “robot-gated” algorithms [[19](#bib.bib19), [25](#bib.bib25), [26](#bib.bib26), [18](#bib.bib18)]. Robot-gated methods allow the robot to reduce burden on the human supervisor by only requesting interventions when necessary, switching between robot control and human control based on some intervention criterion. Hoque et al. [[18](#bib.bib18)] formalize the idea of supervisor burden as the expected total cost incurred by the human in providing interventions, which consists of the expected cost due to context switching between autonomous and human control and the time spent actually providing interventions. However, it is difficult to design intervention criteria that limit this burden while ensuring that the robot gains sufficient information to imitate the supervisor’s policy.

Figure 1: ThriftyDAgger: Given a desired context switching rate $\alpha_h$, ThriftyDAgger transfers control to a human supervisor if the current state $s_t$ is (1) sufficiently novel or (2) sufficiently risky, indicating that the probability of task success is low under robot policy $\pi_r$. Intuitively, one should not only distrust $\pi_r$ in states significantly out of the distribution of previously-encountered states, but should also cede control to a human supervisor in more familiar states where the robot predicts that it is unlikely to successfully complete the task.
This paper makes several contributions. First, we develop intervention criteria based on a synthesis of two estimated properties of a given state: novelty, which measures whether the state is significantly out of the distribution of previously encountered states, indicating that the robot policy should not be trusted; and risk, which indicates whether the robot is unlikely to make task progress. While state novelty has been considered in prior work [[26](#bib.bib26)], the key insight in our intervention criteria lies in combining novelty with a new risk metric to estimate the probability of task success. Second, we present a new robot-gated interactive IL algorithm, ThriftyDAgger (Figure [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")), which employs these measures jointly to solicit human interventions only when necessary.
Third, while prior robot-gated algorithms [[19](#bib.bib19), [18](#bib.bib18)] require careful parameter tuning to modulate the timing and frequency of human intervention requests, ThriftyDAgger only requires the supervisor to specify a desired context switching rate and sets thresholds accordingly.
Fourth, experimental results demonstrate ThriftyDAgger’s effectiveness for reducing supervisor burden while learning challenging tasks both in simulation and in an image-based cable routing task on a physical robot.
Finally, the results of a human user study applying ThriftyDAgger to control a fleet of three simulated robots suggest that ThriftyDAgger significantly improves performance on both the robots’ task and an independent human task while imposing fewer context switches, fewer human intervention actions, and lower mental load and frustration than prior algorithms.
2 Related Work
---------------
Imitation Learning from Human Feedback:
There has been significant prior work in offline imitation learning, in which the agent leverages an offline dataset of expert demonstrations either to directly match the distribution of trajectories in the offline dataset [[4](#bib.bib4), [27](#bib.bib27), [1](#bib.bib1), [3](#bib.bib3), [28](#bib.bib28), [29](#bib.bib29), [30](#bib.bib30)], for instance via Behavior Cloning [[31](#bib.bib31), [32](#bib.bib32)], or to learn a reward function that can then be optimized via reinforcement learning [[33](#bib.bib33), [27](#bib.bib27), [34](#bib.bib34)]. However, while these approaches have shown significant success in a number of domains [[7](#bib.bib7), [10](#bib.bib10), [9](#bib.bib9), [32](#bib.bib32)], learning from purely offline data leads to a trajectory distribution mismatch which yields suboptimal performance both in theory and practice [[12](#bib.bib12), [13](#bib.bib13)]. To address this problem, there have been a number of approaches that utilize online human feedback while the agent acts in the environment, such as providing suggested actions [[12](#bib.bib12), [35](#bib.bib35), [36](#bib.bib36), [17](#bib.bib17)] or preferences [[37](#bib.bib37), [38](#bib.bib38), [39](#bib.bib39), [40](#bib.bib40), [41](#bib.bib41), [42](#bib.bib42)]. However, many of these forms of human feedback may be unreliable if the robot visits states that significantly differ from those the human supervisor would themselves visit; in such situations, it is challenging for the supervisor to determine what correct behavior should look like without directly interacting with the environment [[16](#bib.bib16), [43](#bib.bib43)].
Interactive Imitation Learning:
A natural way to collect reliable online feedback for imitation learning is to periodically cede control to a human supervisor, who then provides a corrective intervention to illustrate desired behavior.
Human-gated interactive IL algorithms [[15](#bib.bib15), [16](#bib.bib16), [24](#bib.bib24)] such as HG-DAgger require the human to determine when to engage in interventions. However, these algorithms require a human to continuously monitor the robot to determine when to intervene, which imposes significant burden on the supervisor and is particularly impractical if a small number of humans must supervise a large number of robots. Furthermore, they require the human to determine when the robot needs help and when to cede control, which can be unintuitive and unreliable.
By contrast, robot-gated interactive IL algorithms, such as EnsembleDAgger [[26](#bib.bib26)], SafeDAgger [[19](#bib.bib19)], and LazyDAgger [[18](#bib.bib18)], allow the robot to actively query for human interventions. In practice, these algorithms estimate various quantities correlated with task performance [[19](#bib.bib19), [18](#bib.bib18), [44](#bib.bib44), [25](#bib.bib25)] and uncertainty [[26](#bib.bib26)] and use them to determine when to solicit interventions. Prior work has proposed intervention criteria which use the novelty of states visited by the robot [[26](#bib.bib26)] or the predicted discrepancy between the actions proposed by the robot policy and by the supervisor [[19](#bib.bib19), [18](#bib.bib18)]. However, while state novelty provides a valuable signal for soliciting interventions, we argue that this alone is insufficient, as a state’s novelty does not convey information about the level of precision with which actions must be executed in that state. In practice, many robotic tasks involve moving through critical “bottlenecks” [[24](#bib.bib24)], which, though not necessarily novel, still present challenges. Examples include moving an eating utensil close to a person’s mouth or placing an object on a shelf without disturbing nearby objects. Similarly, even if predicted accurately, action discrepancy is often a flawed risk measure, as high action discrepancy between the robot and the supervisor may be permissible when fine-grained control is not necessary (e.g. a robot gripper moving in free space) but impermissible when precision is critical (e.g. a robot gripper actively trying to grasp an object). In contrast, ThriftyDAgger presents intervention criteria incorporating both state novelty and a novel risk metric, and automatically tunes key parameters, allowing efficient use of human supervision.
3 Problem Statement
--------------------
Given a robot, a task for the robot to accomplish, and a human supervisor with a specified context switching budget, the goal is to train the robot to imitate supervisor performance within the budget. We model the robot environment as a discrete-time Markov Decision Process (MDP) $\mathcal{M}$ with continuous states $s\in\mathcal{S}$, continuous actions $a\in\mathcal{A}$, and time horizon $T$ [[45](#bib.bib45)]. We consider the interactive imitation learning (IL) setting [[15](#bib.bib15)], where the robot does not have access to a shaped reward function or to the MDP’s transition dynamics but can temporarily cede control to a supervisor who uses policy $\pi_h:\mathcal{S}\to\mathcal{A}$. We specifically focus on tasks where there is a goal set $\mathcal{G}$ which determines success, but that can be challenging and long-horizon, making direct application of RL highly sample inefficient.
We assume that the human and robot utilize the same action space (e.g. through a teleoperation interface) and that task success can be specified by convergence to some goal set $\mathcal{G}\subseteq\mathcal{S}$ within the time horizon (i.e., the task is successful if $\mathcal{G}$ is reached within $T$ timesteps). We further assume access to an indicator function $\mathds{1}_{\mathcal{G}}:\mathcal{S}\rightarrow\{0,1\}$, which indicates whether a state belongs to the goal set $\mathcal{G}$.
The IL objective is to minimize a surrogate loss function $J(\pi_r)$ to encourage the robot policy $\pi_r:\mathcal{S}\to\mathcal{A}$ to match $\pi_h$:
$$J(\pi_r)=\sum_{t=1}^{T}\mathbb{E}_{s_t\sim d^{\pi_r}_{t}}\left[\mathcal{L}(\pi_r(s_t),\pi_h(s_t))\right], \tag{1}$$
where $\mathcal{L}(\pi_r(s),\pi_h(s))$ is an action discrepancy measure between $\pi_r(s)$ and $\pi_h(s)$ (e.g. MSE loss), and $d^{\pi_r}_{t}$ is the marginal state distribution at timestep $t$ induced by the robot policy $\pi_r$ in $\mathcal{M}$.
In the interactive IL setting, meanwhile, in addition to optimizing Equation (1), a key design goal is to minimize the imposed burden on the human supervisor. To formalize this, we define a switching policy $\pi$, which determines whether the system is under robot control $\pi_r$ (which we call autonomous mode) or human supervisor control $\pi_h$ (which we call supervisor mode). Following prior work [[18](#bib.bib18)], we define $C(\pi)$, the expected number of context switches in an episode under policy $\pi$, as follows: $C(\pi)=\sum_{t=1}^{T}\mathbb{E}_{s_t\sim d^{\pi}_{t}}\left[m_I(s_t;\pi)\right]$, where $m_I(s_t;\pi)$ is an indicator for whether or not a context switch occurs from autonomous to supervisor control. Similarly, we define $I(\pi)$ as the expected number of supervisor actions in an intervention solicited by $\pi$.
We then define the total burden $B(\pi)$ imposed on the human supervisor as follows:
$$B(\pi)=C(\pi)\cdot\big(L+I(\pi)\big), \tag{2}$$
where $L$ is the latency of a context switch between control modes (summed over both switching directions) in units of timesteps (one action per timestep). The interactive IL objective is to minimize the discrepancy from the supervisor policy while limiting supervisor burden within some $\Gamma_{\rm b}$:
$$\pi=\operatorname*{arg\,min}_{\pi'\in\Pi}\left\{J(\pi_r)\mid B(\pi')\leq\Gamma_{\rm b}\right\}. \tag{3}$$
Because it is challenging to explicitly optimize policies to satisfy the supervisor burden constraint in Equation (3), we present novel intervention criteria that enable reduction of supervisor burden by limiting the total number of interventions to a user-specified budget. Given sufficiently high latency $L$, limiting the interventions $C(\pi)$ directly corresponds to limiting supervisor burden $B(\pi)$.
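As a concrete reading of Equation (2), the sketch below plugs hypothetical episode statistics (not values from this paper) into the burden definition:

```python
def supervisor_burden(num_context_switches, latency, actions_per_intervention):
    """Equation (2): B(pi) = C(pi) * (L + I(pi)), all quantities in timesteps."""
    return num_context_switches * (latency + actions_per_intervention)

# Hypothetical episode: 5 switches to supervisor control, 10 timesteps of combined
# switching latency, and 20 supervisor actions per intervention.
print(supervisor_burden(num_context_switches=5, latency=10, actions_per_intervention=20))  # 150
```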
4 ThriftyDAgger
----------------
ThriftyDAgger determines when to switch between autonomous and human supervisor control modes by leveraging estimates of both the novelty and risk of states. Below, Sections [4.1](#S4.SS1 "4.1 Novelty Estimation ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") and [4.2](#S4.SS2 "4.2 Risk Estimation ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") discuss the estimation of state novelty and risk of task failure, respectively, while Section [4.3](#S4.SS3 "4.3 Regulating Switches in Control Modes ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") discusses ThriftyDAgger’s integration of these measures to determine when to switch control modes. Section [4.4](#S4.SS4 "4.4 Computing Risk and Novelty Thresholds from Data ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") then describes an online procedure to set thresholds for switching between control modes. Finally, Section [4.5](#S4.SS5 "4.5 ThriftyDAgger Overview ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") describes the full control flow of ThriftyDAgger.
### 4.1 Novelty Estimation
When the robot policy visits states that lie significantly outside the distribution of those encountered in the supervisor trajectories, it does not have any reference behavior to imitate. This motivates initiating interventions to illustrate desired recovery behaviors in these states. However, estimating the support of the state distribution visited by the human supervisor is challenging in the high-dimensional state spaces common in robotics. Following prior work [[26](#bib.bib26)], we train an ensemble of policies with bootstrapped samples of transitions from supervisor trajectories. We then measure the novelty of a given state $s$ by calculating the variance of the policy outputs at state $s$ across ensemble members. In practice, the action $a\in\mathcal{A}$ outputted by each policy is a vector; thus, we measure state novelty by computing the variance of each component of the action vector $a$ across the ensemble members and then averaging over the components. We denote this quantity by $\text{Novelty}(s)$. Once in supervisor mode, as noted in Hoque et al. [[18](#bib.bib18)], we can obtain a more precise correlate of novelty by computing the ground truth action discrepancy between actions suggested by the supervisor and the robot policy.
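A minimal sketch of the ensemble-disagreement measure described above (the linear stand-in policies are illustrative; this is not the released implementation):

```python
import numpy as np

def novelty(state, ensemble_policies):
    """Variance of each proposed action dimension across ensemble members,
    averaged over action dimensions."""
    actions = np.stack([pi(state) for pi in ensemble_policies])  # (n_members, action_dim)
    return actions.var(axis=0).mean()

# Hypothetical usage: five small linear "policies" standing in for the bootstrapped ensemble.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)) for _ in range(5)]   # action_dim=4, state_dim=8 (made up)
ensemble = [lambda s, W=W: W @ s for W in weights]
print(novelty(rng.normal(size=8), ensemble))            # larger values indicate more novel states
```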
###
4.2 Risk Estimation
Interventions may be required not only in novel states outside the distribution of supervisor trajectories, but also in familiar states that are prone to result in task failure. For example, a task might have a “bottleneck” region with low tolerance for error, which has low novelty but nevertheless requires more supervision to learn a reliable robot policy. To address this challenge, we propose a novel measure of a state’s “riskiness,” capturing the likelihood that the robot cannot successfully converge to the goal set $\mathcal{G}$. We first define a Q-function to quantify the discounted probability of successful convergence to $\mathcal{G}$ from a given state and action under the robot policy:
| | $Q^{\pi_r}_{\mathcal{G}}(s_t,a_t)=\mathbb{E}_{\pi_r}\!\left[\textstyle\sum_{t'=t}^{\infty}\gamma^{t'-t}\,\mathds{1}_{\mathcal{G}}(s_{t'})\mid s_t,a_t\right],$ | | (4) |
where $\mathds{1}_{\mathcal{G}}(s_t)$ is equal to 1 if $s_t$ belongs to $\mathcal{G}$. We estimate $Q^{\pi_r}_{\mathcal{G}}(s_t,a_t)$ via a function approximator $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ parameterized by $\phi$, and define a state’s riskiness in terms of this learned Q-function:
| | $\text{Risk}^{\pi_r}(s,a)=1-\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}(s,a).$ | | (5) |
In practice, we train $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ on transitions $(s_t,a_t,s_{t+1})$ collected by the supervisor from both offline data and online interventions by minimizing the following MSE loss inspired by [[46](#bib.bib46)]:
| | $J^{Q}_{\mathcal{G}}(s_t,a_t,s_{t+1};\phi)=\frac{1}{2}\Big(\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}(s_t,a_t)-\big(\mathds{1}_{\mathcal{G}}(s_t)+(1-\mathds{1}_{\mathcal{G}}(s_t))\,\gamma\,\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}(s_{t+1},\pi_r(s_{t+1}))\big)\Big)^{2}.$ | | (6) |
Note that since $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ is only used to solicit interventions, it must only be accurate enough to distinguish risky states from others, rather than be able to make the fine-grained distinctions between different states required for accurate policy learning in reinforcement learning.
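As a rough illustration of how $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ might be fit with the loss in Equation (6), here is a hedged PyTorch-style sketch; the network interface, argument names, and batching are assumptions rather than the paper's exact implementation.

```python
import torch

def risk_q_loss(q_net, pi_r, s_t, a_t, s_tp1, in_goal, gamma=0.99):
    """One-step MSE loss in the spirit of Equation (6).

    q_net(s, a) -> estimated discounted success probability (Equation (4));
    in_goal is the indicator 1_G(s_t) as a float tensor of shape (batch,).
    """
    with torch.no_grad():
        next_q = q_net(s_tp1, pi_r(s_tp1))                   # bootstrap with the robot policy
        target = in_goal + (1.0 - in_goal) * gamma * next_q   # no bootstrapping inside the goal set
    pred = q_net(s_t, a_t)
    return 0.5 * ((pred - target) ** 2).mean()

# Risk is then computed as in Equation (5): risk = 1 - q_net(s, pi_r(s)).
```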
###
4.3 Regulating Switches in Control Modes
We now describe how ThriftyDAgger leverages the novelty estimator from Section [4.1](#S4.SS1 "4.1 Novelty Estimation ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") and the risk estimator from Section [4.2](#S4.SS2 "4.2 Risk Estimation ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") to regulate switches between autonomous and supervisor control. While in autonomous mode, the switching policy $\pi$ initiates a switch to supervisor mode at timestep $t$ if either (1) state $s_t$ is sufficiently unfamiliar or (2) the robot policy has a low probability of task success from $s_t$. Stated precisely, $\pi$ initiates a switch to supervisor mode from autonomous mode at timestep $t$ if the predicate $\textrm{Intervene}(s_t,\delta_h,\beta_h)$ evaluates to True, where $\textrm{Intervene}(s_t,\delta_h,\beta_h)$ is True if (1) $\textrm{Novelty}(s_t)>\delta_h$ or (2) $\text{Risk}^{\pi_r}(s_t,\pi_r(s_t))>\beta_h$, and False otherwise. Note that the proposed switching policy only depends on $\text{Risk}^{\pi_r}$ for states which are not novel (as novel states already initiate switches to supervisor control regardless of risk), since the learned risk measure should only be trusted on states in the neighborhood of those on which it has been trained.
In supervisor mode, $\pi$ switches to autonomous mode if the action discrepancy between the human and robot policy and the robot’s task failure risk are both below threshold values (Section [4.4](#S4.SS4 "4.4 Computing Risk and Novelty Thresholds from Data ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")), indicating that the robot is in a familiar and safe region. Stated precisely, $\pi$ switches to autonomous mode from supervisor mode if the predicate $\textrm{Cede}(s_t,\delta_r,\beta_r)$ evaluates to True, where $\textrm{Cede}(s_t,\delta_r,\beta_r)$ is True if (1) $\|\pi_r(s_t)-\pi_h(s_t)\|_2^2<\delta_r$ and (2) $\text{Risk}^{\pi_r}(s_t,\pi_r(s_t))<\beta_r$, and False otherwise. Here, the risk metric ensures that the robot has a high probability of autonomously completing the task, while the coarser 1-step action discrepancy metric verifies that we are in a familiar region of the state space where the $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ values can be trusted. Motivated by prior work [[18](#bib.bib18)] and hysteresis control [[47](#bib.bib47)], we use stricter switching criteria in supervisor mode ($\beta_r<\beta_h$) to encourage lengthier interventions and reduce context switches experienced by the human supervisor.
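The two switching predicates can be summarized with a small sketch; `novelty_fn(s)` stands for $\textrm{Novelty}(s)$, `risk_fn(s)` for $\text{Risk}^{\pi_r}(s,\pi_r(s))$, and all names are illustrative rather than taken from the paper's code.

```python
import numpy as np

def intervene(s, novelty_fn, risk_fn, delta_h, beta_h):
    """Autonomous -> supervisor: the state is sufficiently novel OR sufficiently risky."""
    return novelty_fn(s) > delta_h or risk_fn(s) > beta_h

def cede(s, pi_r, pi_h, risk_fn, delta_r, beta_r):
    """Supervisor -> autonomous: 1-step action discrepancy AND risk are both small.
    With beta_r < beta_h this acts as a hysteresis band that lengthens interventions."""
    discrepancy = float(np.sum((pi_r(s) - pi_h(s)) ** 2))
    return discrepancy < delta_r and risk_fn(s) < beta_r
```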
###
4.4 Computing Risk and Novelty Thresholds from Data
One challenge of the control strategy presented in Section [4.3](#S4.SS3 "4.3 Regulating Switches in Control Modes ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") lies in tuning the key parameters ($\delta_h,\delta_r,\beta_h,\beta_r$) governing when context switching occurs. As noted in prior work [[26](#bib.bib26)], performance and supervisor burden can be sensitive to these thresholds. To address this difficulty, we assume that the user specifies their availability in the form of a desired intervention budget $\alpha_h\in[0,1]$, indicating the desired proportion of timesteps in which interventions will be requested. This desired context switching rate can be interpreted in the context of supervisor burden as defined in Equation ([2](#S3.E2 "2 ‣ 3 Problem Statement ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")): if the latency of a context switch dominates the time cost of the intervention itself, limiting the expected number of context switches to within some intervention budget directly limits supervisor burden.
Given $\alpha_h$, we set $\beta_h$ to be the $(1-\alpha_h)$-quantile of $\text{Risk}^{\pi_r}(s,\pi_r(s))$ for all states previously visited by $\pi_r$ and set $\delta_h$ to be the $(1-\alpha_h)$-quantile of $\textrm{Novelty}(s)$ for all states previously visited by $\pi_r$. We set $\delta_r$ to be the mean action discrepancy on the states visited by the supervisor after $\pi_r$ is trained and set $\beta_r$ to be the median of $\text{Risk}^{\pi_r}(s,\pi_r(s))$ for all states previously visited by $\pi_r$. (Note that $\beta_r$ can easily be set to different quantiles to adjust mean intervention length if desired.) We find that these settings strike a balance between informative interventions and imposed supervisor burden.
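A minimal sketch of this tuning procedure, assuming the relevant statistics have already been collected as arrays (the function and argument names are illustrative):

```python
import numpy as np

def compute_thresholds(novelty_vals, risk_vals, discrepancy_vals, alpha_h):
    """Set switching thresholds from data as in Section 4.4.

    novelty_vals, risk_vals: Novelty(s) and Risk(s, pi_r(s)) on states previously
    visited by pi_r; discrepancy_vals: action discrepancies on supervisor states.
    """
    delta_h = np.quantile(novelty_vals, 1.0 - alpha_h)  # novelty threshold for interventions
    beta_h = np.quantile(risk_vals, 1.0 - alpha_h)       # risk threshold for interventions
    delta_r = np.mean(discrepancy_vals)                   # discrepancy threshold for ceding control
    beta_r = np.median(risk_vals)                         # risk threshold for ceding control
    return delta_h, beta_h, delta_r, beta_r
```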
###
4.5 ThriftyDAgger Overview
We now summarize the ThriftyDAgger procedure, with full pseudocode available in the supplement. ThriftyDAgger first initializes $\pi_r$ via Behavior Cloning on offline transitions ($\mathcal{D}_h$ from the human supervisor, $\pi_h$). Then, ThriftyDAgger collects an initial offline dataset $\mathcal{D}_r$ with the resulting $\pi_r$, initializes $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ by optimizing Equation ([6](#S4.E6 "6 ‣ 4.2 Risk Estimation ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")) on $\mathcal{D}_r\cup\mathcal{D}_h$, and initializes parameters $\beta_h,\beta_r,\delta_h$, and $\delta_r$ as in Section [4.4](#S4.SS4 "4.4 Computing Risk and Novelty Thresholds from Data ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning"). We then collect data for $N$ episodes, each with up to $T$ timesteps. In each timestep of each episode, we determine whether robot policy $\pi_r$ or human supervisor $\pi_h$ should be in control using the procedure in Section [4.3](#S4.SS3 "4.3 Regulating Switches in Control Modes ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning"). Transitions in autonomous mode are aggregated into $\mathcal{D}_r$ while transitions in supervisor mode are aggregated into $\mathcal{D}_h$.
After each episode, $\pi_r$ is updated via supervised learning on $\mathcal{D}_h$, while $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ is updated on $\mathcal{D}_r\cup\mathcal{D}_h$ to reflect the probability of task success of the resulting $\pi_r$.
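The control flow of a single data-collection episode can be sketched as follows; the environment interface (`env.reset`, `env.step`) and the `intervene`/`cede` helpers are simplified assumptions, and the full procedure is given as Algorithm 1 in the appendix.

```python
def collect_episode(env, pi_r, pi_h, intervene, cede, D_r, D_h, T):
    """One ThriftyDAgger episode: switch between robot and supervisor control,
    aggregating autonomous transitions into D_r and supervised ones into D_h."""
    s, mode = env.reset(), "autonomous"
    for _ in range(T):
        if mode == "autonomous" and intervene(s):
            mode = "supervisor"
        elif mode == "supervisor" and cede(s):
            mode = "autonomous"
        a = pi_h(s) if mode == "supervisor" else pi_r(s)
        (D_h if mode == "supervisor" else D_r).append((s, a))
        s, done = env.step(a)
        if done:  # reached the goal set or exhausted the episode
            break
    # Afterwards: retrain pi_r on D_h, refit the risk Q-function on D_r + D_h,
    # and recompute the thresholds as in Section 4.4.
```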
5 Experiments
--------------
In the following experiments, we study whether ThriftyDAgger can balance task performance and supervisor burden more effectively than prior IL algorithms in three contexts: (1) training a simulated robot to perform a peg insertion task (Section [5.3](#S5.SS3 "5.3 Peg Insertion in Simulation ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")); (2) supervising a fleet of three simulated robots to perform the peg insertion task in a human user study (Section [5.4](#S5.SS4 "5.4 User Study: Controlling A Fleet of Three Robots in Simulation ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")); and (3) training a physical surgical robot to perform a cable routing task (Section [5.5](#S5.SS5 "5.5 Physical Experiment: Visuomotor Cable Routing ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")). In the supplementary material, we also include results from an additional simulation experiment on a challenging block stacking task.
###
5.1 Evaluation Metrics
We consider ThriftyDAgger’s performance during training and execution. For the latter, we evaluate both the (1) autonomous success rate, or success rate when deployed after training without access to a human supervisor, and (2) intervention-aided success rate, or success rate when deployed after training with a human supervisor in the loop. These metrics are reported in the Peg Insertion study (Section [5.3](#S5.SS3 "5.3 Peg Insertion in Simulation ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")) and the Physical Cable Routing study (Section [5.5](#S5.SS5 "5.5 Physical Experiment: Visuomotor Cable Routing ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")). For all experiments, during both training and intervention-aided execution, we evaluate the number of interventions, human actions, and robot actions per episode. These metrics are computed over successful episodes only to prevent biasing the metrics by the maximum episode horizon length $T$; such bias occurs, for instance, when less successful policies appear to take more actions due to hitting the time boundary more often. Additional metrics including cumulative statistics across all episodes are reported in the supplement.
In our user study (Section [5.4](#S5.SS4 "5.4 User Study: Controlling A Fleet of Three Robots in Simulation ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")), we also report the following quantities: throughput (total number of task successes across the three robots), performance on an independent human task, the idle time of the robots in the fleet, and users’ qualitative ratings of mental load and frustration. By comparing the amount of human supervision and success rates across different algorithms, we are interested in evaluating how effectively each algorithm balances supervision with policy performance.
###
5.2 Comparisons
We compare ThriftyDAgger to the following algorithms: Behavior Cloning, which does not use interventions;
HG-DAgger [[15](#bib.bib15)], which is human-gated and always requires supervision; SafeDAgger [[19](#bib.bib19)], which is robot-gated and performs interventions based on estimated action discrepancy between the human supervisor and robot policy; and LazyDAgger [[18](#bib.bib18)], which builds on SafeDAgger by introducing an asymmetric switching criterion to encourage lengthier interventions. We also implement two ablations: one that does not use a novelty measure to regulate context switches (ThriftyDAgger (-Novelty)) and one that does not use risk to regulate context switches (ThriftyDAgger (-Risk)).
###
5.3 Peg Insertion in Simulation
We first evaluate ThriftyDAgger on a long-horizon peg insertion task (Figure [2](#S5.F2 "Figure 2 ‣ 5.3 Peg Insertion in Simulation ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")) from the Robosuite simulation environment [[48](#bib.bib48)]. The goal is to grasp a ring in a random initial pose and thread it over a cylinder at a fixed target location. This task has two bottlenecks which motivate learning from interventions: (1) correctly grasping the ring and (2) correctly placing it over the cylinder (Figure [2](#S5.F2 "Figure 2 ‣ 5.3 Peg Insertion in Simulation ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")). A human teleoperates the robot through a keyboard interface to provide interventions. The states consist of the robot’s joint angles and ring’s pose, while the actions specify 3D translation, 3D rotation, and opening or closing the gripper.
For ThriftyDAgger and its ablations, we use target intervention frequency $\alpha_h=0.01$ and set other parameters via the automated tuning method (Section [4.4](#S4.SS4 "4.4 Computing Risk and Novelty Thresholds from Data ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")). We collect 30 offline task demos (2,687 state-action pairs) from a human supervisor to initialize the robot policy for all compared algorithms. Behavior Cloning is given additional state-action pairs roughly equivalent to the average amount of supervisor actions solicited by the interactive algorithms (Table [4](#S7.T4 "Table 4 ‣ 7.3.1 Peg Insertion in Simulation ‣ 7.3 Environment Details and Additional Metrics ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") in the appendix). For ThriftyDAgger and each interactive IL baseline, we perform 10,000 environment steps, during which each episode takes at most 175 timesteps and system control switches between the human and robot. Hyperparameter settings for all algorithms are detailed in the supplement.
Results (Table [1](#S5.T1 "Table 1 ‣ 5.3 Peg Insertion in Simulation ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")) suggest that ThriftyDAgger achieves a significantly higher autonomous success rate than prior robot-gated algorithms, although it does request more human actions due to its conservative exit criterion for interventions ($\textrm{Cede}(s_t,\delta_r,\beta_r)$). However, the number of interventions is similar to prior robot-gated algorithms, indicating that while ThriftyDAgger requires more human actions, it imposes a similar supervisor burden to SafeDAgger and LazyDAgger in settings in which context switches are expensive or time-consuming (e.g., high latency $L$ in Equation [2](#S3.E2 "2 ‣ 3 Problem Statement ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")). We find that all interactive IL algorithms substantially outperform Behavior Cloning, which does not have access to supervisor interventions. Notably, ThriftyDAgger achieves a higher autonomous success rate than even HG-DAgger, in which the supervisor is able to decide the timing and length of interventions. This indicates that ThriftyDAgger’s intervention criteria enable it to autonomously solicit interventions as informative as those chosen by a human supervisor with expert knowledge of the task. Furthermore, ThriftyDAgger achieves a 100% intervention-aided success rate at execution time, suggesting that ThriftyDAgger successfully identifies the required states at which to solicit interventions. We find that both ablations of ThriftyDAgger (Ours (-Novelty) and Ours (-Risk)) achieve significantly lower autonomous success rates, indicating that both the novelty and risk measures are critical to ThriftyDAgger’s performance. We calculate ThriftyDAgger’s context switching rate to be 1.15% novelty switches and 0.79% risk switches, both approximately within the budget of $\alpha_h=0.01$.

Figure 2: Experimental Domains: We visualize the peg insertion simulation domain (top row) and the physical cable routing domain with the physical robot (bottom row). We visualize sample start and goal states, in addition to states which ThriftyDAgger categorizes as novel, risky, and neither. ThriftyDAgger marks states as novel if they are far from behavior that the supervisor would produce, and risky if it is stuck in a bottleneck, e.g. if the ring is wedged against the side of the cylinder (top) or the cable is near all four obstacles (bottom).
Table 1: Peg Insertion in Simulation Results: We first report training performance (number of interventions (Ints), number of human actions (Acts (H)), and number of robot actions (Acts (R))) and report the success rate of the fully-trained policy at execution time when no interventions are allowed (Auto Succ.). We then evaluate the fully-trained policies with interventions allowed and report the same intervention statistics and the success rate (Int-Aided Succ.). We find that ThriftyDAgger achieves the highest autonomous and intervention-aided success rates among all algorithms compared. Notably, ThriftyDAgger even achieves a higher autonomous success rate than HG-DAgger, in which the human decides when to intervene during training.
| Algorithm | Training Interventions | Auto Succ. | Execution Interventions | Int-Aided Succ. |
| --- | --- | --- | --- | --- |
| | Ints | Acts (H) | Acts (R) | | Ints | Acts (H) | Acts (R) | |
| Behavior Cloning | N/A | N/A | 108.0 ± 15.9 | 24/100 | N/A | N/A | N/A | N/A |
| SafeDAgger | 3.89 ± 1.44 | 19.8 ± 9.9 | 88.8 ± 19.4 | 24/100 | 4.00 ± 1.37 | 19.5 ± 5.3 | 77.5 ± 11.7 | 17/20 |
| LazyDAgger | 1.46 ± 1.15 | 13.2 ± 12.4 | 102.1 ± 18.2 | 48/100 | 1.73 ± 1.29 | 12.6 ± 14.4 | 91.7 ± 24.0 | 11/20 |
| HG-DAgger | 1.49 ± 0.88 | 20.3 ± 15.6 | 97.1 ± 17.5 | 57/100 | 1.15 ± 0.73 | 17.1 ± 11.6 | 103.6 ± 14.0 | **20/20** |
| Ours (-Novelty) | **0.79 ± 0.81** | 35.1 ± 23.1 | 70.0 ± 35.8 | 49/100 | **0.33 ± 0.62** | 2.5 ± 5.0 | 114.0 ± 26.0 | 12/20 |
| Ours (-Risk) | 0.99 ± 0.96 | 7.8 ± 12.0 | 104.2 ± 19.2 | 49/100 | 1.39 ± 0.95 | 9.8 ± 12.0 | 109.1 ± 22.9 | 18/20 |
| Ours: ThriftyDAgger | 0.88 ± 1.01 | 43.6 ± 24.5 | 60.0 ± 32.8 | **73/100** | 1.35 ± 0.66 | 21.3 ± 15.0 | 84.8 ± 21.8 | **20/20** |
###
5.4 User Study: Controlling A Fleet of Three Robots in Simulation
We conduct a user study with 10 participants (7 male and 3 female, aged 18-37). Participants supervise a fleet of three simulated robots, each performing the peg insertion task from Section [5.3](#S5.SS3 "5.3 Peg Insertion in Simulation ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning"). We evaluate how different interactive IL algorithms affect the participants’ (1) ability to provide effective robot interventions, (2) performance on a distractor task performed between robot interventions, and (3) levels of mental demand and frustration.
For the distractor task, we use the game Concentration (also known as Memory or Matching Pairs), in which participants identify as many pairs of matching cards as possible among a set of face-down cards. This is intended to emulate tasks which require continual focus, such as cooking a meal or writing a research paper, in which frequent context switches between performing the task and helping the robots is frustrating and degrades performance.
The participants teleoperate the robots using three robot-gated interactive IL algorithms: SafeDAgger, LazyDAgger, and ThriftyDAgger. The participant is instructed to make progress on the distractor task only when no robot requests an intervention. When an intervention is requested, the participant is instructed to pause the distractor task, provide an intervention from the requested state until the robot (or multiple robots queued after each other) no longer requires assistance, and then return to the distractor task. The participants also teleoperate with HG-DAgger, where they no longer perform the distractor task and are instructed to continually monitor all three robots simultaneously and decide on the length and timing of interventions themselves.
Each algorithm runs for 350 timesteps, where in each timestep, all robots in autonomous mode execute one action and the human executes one action on the currently-supervised robot (if applicable). The supplement illustrates the user study interface and fully details the experiment protocol. All algorithms are initialized as in Section [5.3](#S5.SS3 "5.3 Peg Insertion in Simulation ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning").
Results (Table [2](#S5.T2 "Table 2 ‣ 5.4 User Study: Controlling A Fleet of Three Robots in Simulation ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")) suggest that ThriftyDAgger achieves significantly higher throughput than all prior algorithms while requiring fewer interventions and fewer human actions, indicating that ThriftyDAgger requests interventions more judiciously than prior algorithms. Furthermore, ThriftyDAgger also enables a lower mean idle time for robots and higher performance on the distractor task. Notably, ThriftyDAgger solicits fewer interventions and total actions while achieving a higher throughput than HG-DAgger, in which the participant chooses when to intervene. We also report metrics of users’ mental workload and frustration using the NASA-TLX scale [[49](#bib.bib49)] in the supplement. Results suggest that users experience lower degrees of frustration and mental load when interacting with ThriftyDAgger and LazyDAgger compared to HG-DAgger and SafeDAgger. We hypothesize that participants struggle with HG-DAgger due to the difficulty of monitoring multiple robots simultaneously, while SafeDAgger’s frequent context switches lead to user frustration during experiments.
Table 2: Three-Robot Fleet Control User Study Results: Results for experiments with 10 human subjects and 3 simulated robots on the peg insertion task. We report the total numbers of interventions, human actions, and robot actions, as well as the throughput, or total task successes achieved across robots, for all algorithms. Additionally, for robot-gated algorithms, we report the Concentration score (number of pairs found) and the mean idle time of robots in the fleet in timesteps. Results suggest that ThriftyDAgger outperforms all prior algorithms across all metrics, requesting fewer interventions and total human actions while achieving higher throughput, lowering the robots’ mean idle time, and enabling higher performance on the Concentration task.
| Algorithm | Interventions | Human Actions | Robot Actions | Concentration Pairs | Throughput | Mean Idle Time |
| --- | --- | --- | --- | --- | --- | --- |
| HG-DAgger | 10.6 ± 2.5 | 198.0 ± 32.1 | 834.4 ± 38.1 | N/A | 5.1 ± 1.9 | N/A |
| SafeDAgger | 22.1 ± 4.8 | 234.1 ± 31.8 | 700.7 ± 70.4 | 17.7 ± 8.2 | 3.0 ± 2.4 | 38.4 ± 14.1 |
| LazyDAgger | 10.0 ± 2.1 | 219.5 ± 43.3 | 719.2 ± 89.7 | 20.9 ± 7.9 | 5.1 ± 1.7 | 37.1 ± 20.5 |
| Ours: ThriftyDAgger | **7.9 ± 2.1** | **179.4 ± 34.9** | 793.2 ± 86.6 | **33.0 ± 8.5** | **9.2 ± 2.0** | **25.8 ± 19.3** |
###
5.5 Physical Experiment: Visuomotor Cable Routing
Finally, we evaluate ThriftyDAgger on a long-horizon cable routing task with a da Vinci surgical robot [[50](#bib.bib50)]. Here, the objective is to route a red cable into a Figure-8 pattern around 4 pegs via teleoperation with the robot’s master controllers (see supplement). The algorithm only observes high-dimensional $64\times 64\times 3$ RGB images of the workspace and generates continuous actions representing delta-positions in $(x,y)$. As in Section [5.3](#S5.SS3 "5.3 Peg Insertion in Simulation ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning"), ThriftyDAgger uses a target intervention frequency of $\alpha_h=0.01$. We collect 25 offline task demonstrations (1,381 state-action pairs) from a human supervisor to initialize the robot policy for ThriftyDAgger and all comparisons. We perform 1,500 environment steps, where each episode has at most 100 timesteps and system control can switch between the human and robot. The supplement details the hyperparameter settings for all algorithms.
Results (Table [3](#S5.T3 "Table 3 ‣ 5.5 Physical Experiment: Visuomotor Cable Routing ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")) suggest that both ThriftyDAgger and HG-DAgger achieve a significantly higher autonomous success rate than Behavior Cloning, which is never able to complete the task. Furthermore, ThriftyDAgger achieves a comparable autonomous success rate to HG-DAgger while requesting fewer interventions and a similar number of total human actions. This again suggests that ThriftyDAgger’s intervention criteria enable it to solicit interventions equally as informative or more informative than those chosen by a human supervisor. Finally, at execution time ThriftyDAgger achieves a 100% intervention-aided success rate with minimal supervision, again indicating that ThriftyDAgger successfully identifies the timing and length of interventions to increase policy reliability.
Table 3: Physical Cable Routing Results: We first report intervention statistics during training (number of interventions (Ints), number of human actions (Acts (H)), and number of robot actions (Acts (R))) and report the success rate of the fully-trained policy at execution time when no interventions are allowed (Auto Succ.). We then evaluate the fully-trained policies with interventions allowed and report the same intervention statistics and the success rate (Int-Aided Succ.). We find that ThriftyDAgger achieves the highest autonomous and intervention-aided success rates among all algorithms compared. Notably, ThriftyDAgger even achieves a higher autonomous success rate than HG-DAgger, in which the human decides when to intervene during training.
| Algorithm | Training Interventions | Auto Succ. | Execution Interventions | Int-Aided Succ. |
| --- | --- | --- | --- | --- |
| | Ints | Acts (H) | Acts (R) | | Ints | Acts (H) | Acts (R) | |
| Behavior Cloning | N/A | N/A | N/A | 0/15 | N/A | N/A | N/A | N/A |
| HG-DAgger | 1.55 ± 1.16 | 13.9 ± 10.9 | 55.5 ± 10.9 | 10/15 | **0.40 ± 0.49** | 2.7 ± 3.5 | 73.9 ± 7.9 | **15/15** |
| Ours: ThriftyDAgger | **1.42 ± 1.14** | 15.2 ± 12.4 | 45.5 ± 18.3 | **12/15** | 0.40 ± 0.71 | 1.5 ± 3.1 | 61.3 ± 6.5 | **15/15** |
6 Discussion and Future Work
-----------------------------
We present ThriftyDAgger, a scalable robot-gated interactive imitation learning algorithm that leverages learned estimates of state novelty and risk of task failure to reduce burden on a human supervisor during training and execution. Experiments suggest that ThriftyDAgger effectively enables long-horizon robotic manipulation tasks in simulation, on a physical robot, and for a three-robot fleet while limiting burden on a human supervisor.
In future work, we hope to apply ideas from ThriftyDAgger to interactive reinforcement learning and larger scale fleets of physical robots. We also hope to study how ThriftyDAgger’s performance varies with the target supervisor burden (specified via $\alpha_h$). In practice, $\alpha_h$ could even be time-varying: for instance, $\alpha_h$ may be significantly lower at night, when human operators may have limited availability. Similarly, $\alpha_h$ may be set to a higher value during training than at deployment, when the robot policy is typically higher quality.
#### Acknowledgments
This research was performed at the AUTOLAB at UC Berkeley in affiliation with the Berkeley AI Research (BAIR) Lab and the CITRIS “People and Robots” (CPAR) Initiative. The authors were supported in part by the Scalable Collaborative Human-Robot Learning (SCHooL) Project, NSF National Robotics Initiative Award 1734633, and by donations from Google, Siemens, Amazon Robotics, Toyota Research Institute, Autodesk, Honda, Intel, and Hewlett-Packard and by equipment grants from PhotoNeo, NVidia, and Intuitive Surgical. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsors. We thank our colleagues who provided helpful feedback, code, and suggestions, especially Vincent Lim and Zaynah Javed.
7 Appendix
-----------
In Appendix [7.1](#S7.SS1 "7.1 Algorithm Details ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning"), we discuss algorithmic details for ThriftyDAgger and all comparisons. Then, Appendix [7.2](#S7.SS2 "7.2 Hyperparameter and Implementation Details ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") discusses implementation and hyperparameter details for all algorithms. In Appendix [7.3](#S7.SS3 "7.3 Environment Details and Additional Metrics ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning"), we provide additional details about the simulation and physical experiment domains, and
in Appendix [7.4](#S7.SS4 "7.4 User Study Details ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning"), we describe the protocol and detailed results from the conducted user study.
###
7.1 Algorithm Details
Here we provide a detailed algorithmic description of ThriftyDAgger and all comparisons.
####
7.1.1 ThriftyDAgger
The full pseudocode for ThriftyDAgger is provided in Algorithm [1](#alg1 "Algorithm 1 ‣ 7.1.4 LazyDAgger ‣ 7.1 Algorithm Details ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning"). ThriftyDAgger first initializes $\pi_r$ via Behavior Cloning on offline transitions ($\mathcal{D}_h$ from the human supervisor, $\pi_h$) (lines 1-2). Then, ThriftyDAgger collects an initial offline dataset $\mathcal{D}_r$ with the resulting $\pi_r$, initializes $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ by optimizing Equation ([6](#S4.E6 "6 ‣ 4.2 Risk Estimation ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")) on $\mathcal{D}_r\cup\mathcal{D}_h$, and initializes parameters $\beta_h,\beta_r,\delta_h$, and $\delta_r$ as in Section [4.4](#S4.SS4 "4.4 Computing Risk and Novelty Thresholds from Data ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") (lines 3-5). We then collect data for $N$ episodes, each with up to $T$ timesteps. In each timestep of each episode, we determine whether robot policy $\pi_r$ or human supervisor $\pi_h$ should be in control using the procedure in Section [4.3](#S4.SS3 "4.3 Regulating Switches in Control Modes ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") (lines 10-20). Transitions in autonomous mode are aggregated into $\mathcal{D}_r$ while transitions in supervisor mode are aggregated into $\mathcal{D}_h$. Episodes are terminated either when the robot reaches a valid goal state or has exhausted the time horizon $T$.
At this point, we re-initialize the policy to autonomous mode and update parameters $\beta_h,\beta_r,\delta_h$, and $\delta_r$ as in Section [4.4](#S4.SS4 "4.4 Computing Risk and Novelty Thresholds from Data ‣ 4 ThriftyDAgger ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") (lines 21-23). After each episode, $\pi_r$ is updated via supervised learning on $\mathcal{D}_h$, while $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ is updated on $\mathcal{D}_r\cup\mathcal{D}_h$ to reflect the task success probability of the resulting $\pi_r$ (lines 24-26).
####
7.1.2 Behavior Cloning
We train policy $\pi_r$ via direct supervised learning with a mean-squared loss to predict reference control actions given a dataset of (state, action) tuples. Behavior Cloning is trained only on full expert demonstrations collected offline from $\pi_h$ and is not allowed access to online interventions. Thus, Behavior Cloning is trained only on dataset $\mathcal{D}_h$ (line 1, Algorithm [1](#alg1 "Algorithm 1 ‣ 7.1.4 LazyDAgger ‣ 7.1 Algorithm Details ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")) and the policy is frozen thereafter.
In our simulation experiments, Behavior Cloning is given 50% more offline data than the other algorithms for a more fair comparison, such that the amount of additional offline data is approximately equal to the average amount of online data provided to the other algorithms.
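For reference, a minimal sketch of this mean-squared-error Behavior Cloning step, written as a generic PyTorch training loop under assumed interfaces rather than the paper's exact training code:

```python
import torch
import torch.nn as nn

def behavior_cloning(policy, loader, epochs=10, lr=1e-3):
    """Fit the robot policy to supervisor (state, action) pairs with an MSE loss.
    `loader` is assumed to yield batches of (state, action) tensors."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for states, actions in loader:
            opt.zero_grad()
            loss = loss_fn(policy(states), actions)
            loss.backward()
            opt.step()
    return policy
```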
####
7.1.3 SafeDAgger
SafeDAgger [[19](#bib.bib19)] is an interactive imitation learning algorithm which selects between autonomous and supervisor mode using a classifier $f$ that discriminates between “safe” states, for which $\pi_r$’s proposed action is within some threshold $\beta_h$ of that proposed by supervisor policy $\pi_h$, and “unsafe” states, for which this action discrepancy exceeds $\beta_h$. SafeDAgger learns this classifier using dataset $\mathcal{D}_h$ from Algorithm [1](#alg1 "Algorithm 1 ‣ 7.1.4 LazyDAgger ‣ 7.1 Algorithm Details ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning"), and updates $f$ online as $\mathcal{D}_h$ is expanded through human interventions. During policy rollouts, if $f$ marks a state as safe, the robot policy is executed (autonomous mode), while if $f$ marks a state as unsafe, the supervisor is queried for an action. While this approach can be effective in some domains [[19](#bib.bib19)], prior work [[18](#bib.bib18)] suggests that this intervention criterion can lead to excessive context switches between the robot and supervisor, and thus impose significant burden on a human supervisor. As in ThriftyDAgger and other DAgger [[12](#bib.bib12)] variants, SafeDAgger updates $\pi_r$ on an aggregated dataset of all transitions collected by the supervisor (analogous to $\mathcal{D}_h$ in Algorithm [1](#alg1 "Algorithm 1 ‣ 7.1.4 LazyDAgger ‣ 7.1 Algorithm Details ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")).
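For intuition, the labels used to train SafeDAgger's discrepancy classifier $f$ might be generated along these lines (a sketch under the assumption of a squared-L2 discrepancy measure; the function and argument names are illustrative):

```python
import numpy as np

def safedagger_labels(states, pi_r, pi_h, beta_h):
    """Label a state as "unsafe" (1.0) when the robot/supervisor action
    discrepancy exceeds beta_h, and "safe" (0.0) otherwise; the classifier f
    is then fit to these labels with standard supervised learning."""
    return np.array([
        float(np.sum((pi_r(s) - pi_h(s)) ** 2) > beta_h) for s in states
    ])
```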
####
7.1.4 LazyDAgger
LazyDAgger [[18](#bib.bib18)] builds on SafeDAgger [[19](#bib.bib19)] and trains the same action discrepancy classifier $f$ to determine whether the robot and supervisor policies will significantly diverge at a given state. However, LazyDAgger introduces a few modifications to SafeDAgger which lead to lengthier and more informative interventions in practice. First, LazyDAgger observes that when the supervisor has control of the system (supervisor mode), querying $f$ for estimated action discrepancy is no longer necessary, since we can simply query the robot policy at any state during supervisor mode to obtain a true measure of the action discrepancy between the robot and supervisor policies. This prevents exploiting approximation errors in $f$ when the supervisor is in control. Second, LazyDAgger introduces an asymmetric switching condition between autonomous and supervisor control, where switches are executed from autonomous to supervisor mode if $f$ indicates that the predicted action discrepancy is above $\beta_h$, but switches are only executed from supervisor mode back to autonomous mode if the true action discrepancy is below some value $\beta_r<\beta_h$. This encourages lengthier interventions, leading to fewer context switches between autonomous and supervisor modes. Finally, LazyDAgger injects noise into supervisor actions in order to spread the distribution of states in which reference controls from the supervisor are available. ThriftyDAgger builds on the asymmetric switching criterion introduced by LazyDAgger, but introduces a new switching criterion based on the estimated task success probability, which we found significantly improved performance in practice.
Algorithm 1 ThriftyDAgger
1: Number of episodes $N$, time horizon $T$, supervisor policy $\pi_h$, desired context switching rate $\alpha_h$
2: Collect offline dataset $\mathcal{D}_h$ of $(s, a^h)$ tuples with $\pi_h$
3: Initialize $\pi_r$ via Behavior Cloning on $\mathcal{D}_h$
4: Collect offline dataset $\mathcal{D}_r$ of $(s, a^r)$ tuples with $\pi_r$
5: Initialize $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ by optimizing Equation (4) on $\mathcal{D}_r \cup \mathcal{D}_h$
6: Optimize $\beta_h, \beta_r, \delta_h, \delta_r$ on $\mathcal{D}_h$  # Online tuning based on $\alpha_h$ (Section 4.4)
7: for $i \in \{1, \ldots, N\}$ do
8:   Initialize $s_0$, Mode ← Autonomous
9:   for $t \in \{1, \ldots, T\}$ do
10:    $a^r_t = \pi_r(s_t)$
11:    if Mode = Supervisor or Intervene$(s_t, \delta_h, \beta_h)$ then  # Determine control mode (Section 4.3)
12:      $a^h_t = \pi_h(s_t)$
13:      $\mathcal{D}_h \leftarrow \mathcal{D}_h \cup \{(s_t, a^h_t)\}$
14:      Execute $a^h_t$
15:      if Cede$(s_t, \delta_r, \beta_r)$ then  # Default control mode for next timestep (Section 4.3)
16:        Mode ← Autonomous
17:      else
18:        Mode ← Supervisor
19:    else
20:      Execute $a^r_t$
21:      $\mathcal{D}_r \leftarrow \mathcal{D}_r \cup \{(s_t, a_t)\}$
22:    if Terminal state reached then
23:      Exit loop, Mode ← Autonomous
24:  Recompute $\beta_h, \beta_r, \delta_h$  # Online tuning based on $\alpha_h$ (Section 4.4)
25:  $\pi_r \leftarrow \arg\min_{\pi_r} \mathbb{E}_{(s_t, a^h_t) \sim \mathcal{D}_h}\left[\mathcal{L}(\pi_r(s_t), \pi_h(s_t))\right]$
26:  Collect $\mathcal{D}_r$ offline with robot policy $\pi_r$
27:  Update $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ on $\mathcal{D}_r \cup \mathcal{D}_h$  # Update Q-function via Equation (6)
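A compressed Python rendering of the rollout portion of Algorithm 1 (steps 7–23) is given below, purely to make the control flow concrete. The `intervene`/`cede` gating functions (with their thresholds baked in), the simplified `env` interface, and the dataset lists are assumptions; threshold tuning and retraining (steps 24–27) are omitted:

```python
def thrifty_rollout(env, robot_policy, supervisor_policy, intervene, cede, D_h, D_r, T):
    """One mixed-control episode of a ThriftyDAgger-style rollout (illustrative sketch)."""
    s = env.reset()
    mode = "autonomous"
    for t in range(T):
        a_r = robot_policy(s)
        if mode == "supervisor" or intervene(s):      # novelty/risk gating (Section 4.3)
            a_h = supervisor_policy(s)
            D_h.append((s, a_h))                      # supervisor data for retraining pi_r
            s, done = env.step(a_h)
            mode = "autonomous" if cede(s) else "supervisor"
        else:
            D_r.append((s, a_r))                      # robot data for fitting the Q-function
            s, done = env.step(a_r)
        if done:
            mode = "autonomous"
            break
    return D_h, D_r
```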
#### 7.1.5 HG-DAgger
Unlike SafeDAgger, LazyDAgger, and ThriftyDAgger, which are robot-gated and autonomously determine when to solicit intervention requests, HG-DAgger is human-gated, and thus requires that the supervisor determine the timing and length of interventions. As in ThriftyDAgger, HG-DAgger updates $\pi_r$ on an aggregated dataset of all transitions collected by the supervisor (analogous to $\mathcal{D}_h$ in Algorithm [1](#alg1)).
### 7.2 Hyperparameter and Implementation Details
Here we provide a detailed overview of all hyperparameter and implementation details for ThriftyDAgger and all comparisons to facilitate reproduction of all experiments. We also include code in the supplement, and will release a full open-source codebase after anonymous review.
#### 7.2.1 ThriftyDAgger
##### Peg Insertion (Simulation):
We initially populate $\mathcal{D}_h$ with 2,687 offline transitions, which correspond to 30 task demonstrations collected by an expert human supervisor, to initialize the robot policy $\pi_r$. We represent $\pi_r$ with an ensemble of 5 neural networks, trained on bootstrapped samples of data from $\mathcal{D}_h$ in order to quantify uncertainty for novelty estimation. Each neural network is trained using the Adam optimizer (learning rate 1e-3) with 5 training epochs, 500 gradient steps in each training epoch, and a batch size of 100. All networks consist of 2 hidden layers, each with 256 hidden units with ReLU activations, and a Tanh output activation.
The Q-function used for risk estimation, $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$, is trained with a batch size of 50, and batches are balanced such that 10% of all sampled transitions contain a state in the goal set. We train $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ with the Adam optimizer, with a learning rate of 1e-3 and discount factor $\gamma = 0.9999$. In order to train $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$, we collect 10 test episodes from $\pi_r$ every 2,000 environment steps. We represent $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ with a 2-hidden-layer neural network in which each hidden layer has 256 hidden units with ReLU activations, and with a sigmoid output activation. The state and action are concatenated before they are fed into $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$.
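A minimal PyTorch sketch of these two architectures follows (the exact initialization, bootstrapping, and training loop are omitted; the helper names are mine, and the default dimensions are taken from the peg insertion task in Section 7.3.1):

```python
import torch
import torch.nn as nn

def make_policy(state_dim=51, action_dim=5):
    # 2 hidden layers of 256 units, ReLU activations, Tanh output (Section 7.2.1).
    return nn.Sequential(
        nn.Linear(state_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, action_dim), nn.Tanh(),
    )

def make_q_network(state_dim=51, action_dim=5):
    # State and action are concatenated before the first layer; the sigmoid output
    # lets the Q-value be read as an estimated probability of reaching the goal set.
    return nn.Sequential(
        nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

# Ensemble of 5 bootstrapped policies used for novelty estimation.
ensemble = [make_policy() for _ in range(5)]
optimizers = [torch.optim.Adam(p.parameters(), lr=1e-3) for p in ensemble]
```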
##### Block Stacking (Simulation):
This is an additional simulation environment not included in the main text. Results and a description of the task are in Section [7.3.2](#S7.SS3.SSS2). We populate $\mathcal{D}_h$ with 1,677 offline transitions, corresponding to 30 task demonstrations, to initialize $\pi_r$. All other parameters and implementation details are identical to the peg insertion environment.
##### Cable Routing (Physical):
We initially populate $\mathcal{D}_h$ with 1,381 offline transitions, corresponding to 25 task demonstrations collected by an expert human supervisor, to initialize the robot policy $\pi_r$. We again represent $\pi_r$ with an ensemble of 5 neural networks, trained on bootstrapped samples of data from $\mathcal{D}_h$ in order to quantify uncertainty for novelty estimation. Each neural network is trained using the Adam optimizer (learning rate 2.5e-4) with 5 training epochs, 300 gradient steps per training epoch, and a batch size of 64. All networks consist of 5 convolutional layers (format: (in_channels, out_channels, kernel_size, stride)): [(3, 24, 5, 2), (24, 36, 5, 2), (36, 48, 5, 2), (48, 64, 3, 1), (64, 64, 3, 1)], followed by 4 fully connected layers (format: (in_units, out_units)): [(64, 100), (100, 50), (50, 10), (10, 2)]. Here we utilize ELU (exponential linear unit) activations with a Tanh output activation.
The Q-function used for risk estimation, $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$, is trained with a batch size of 64 as well, and batches are balanced such that 10% of all sampled transitions contain a state in the goal set. We train $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ with the Adam optimizer with a learning rate of 2.5e-4 and discount factor $\gamma = 0.9999$. In order to train $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$, we collect 5 test episodes from $\pi_r$ every 500 environment steps. We represent $\hat{Q}_{\phi,\mathcal{G}}^{\pi_r}$ with a neural network with the same 5 convolutional layers as the policy networks above, but with the fully connected layers as follows (format: (in_units, out_units)): [(64+2, 100), (100, 50), (50, 10), (10, 1)]. We concatenate the action with the state embedding resulting from the 5 convolutional layers (hence the 64 + 2) and feed the resulting concatenated embedding into the 4 fully connected layers above. We utilize ELU (exponential linear unit) activations with a sigmoid output activation.
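To see that these layer sizes fit together (a 3 × 64 × 64 image shrinks to a 64-dimensional embedding before the 2-D action is appended), here is a minimal PyTorch sketch; zero padding and the class name are my assumptions:

```python
import torch
import torch.nn as nn

class CableRoutingQNet(nn.Module):
    """Q-network sketch for the cable routing task, following the layer sizes in
    Section 7.2.1 (padding and other unstated details are assumptions)."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ELU(),
            nn.Conv2d(48, 64, kernel_size=3, stride=1), nn.ELU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ELU(),
        )  # a 3x64x64 image reduces to a 64x1x1 feature map (64-d embedding)
        self.fc = nn.Sequential(
            nn.Linear(64 + 2, 100), nn.ELU(),
            nn.Linear(100, 50), nn.ELU(),
            nn.Linear(50, 10), nn.ELU(),
            nn.Linear(10, 1), nn.Sigmoid(),
        )

    def forward(self, image, action):
        z = self.conv(image).flatten(start_dim=1)      # (batch, 64)
        return self.fc(torch.cat([z, action], dim=1))  # append the 2-D action

# Example shapes: CableRoutingQNet()(torch.zeros(1, 3, 64, 64), torch.zeros(1, 2)) -> (1, 1)
```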
#### 7.2.2 Behavior Cloning
##### Peg Insertion (Simulation):
For Behavior Cloning, we initially populate $\mathcal{D}_h$ with 4,004 offline transitions, corresponding to 45 task demonstrations collected by an expert human supervisor, to initialize the robot policy $\pi_r$ (note that this is more transitions than are provided to ThriftyDAgger). All other details are the same as ThriftyDAgger for training $\pi_r$.
##### Block Stacking (Simulation):
We initially populate $\mathcal{D}_h$ with 3,532 offline transitions, corresponding to 60 task demonstrations, to initialize $\pi_r$. Note that Behavior Cloning has access to twice as many offline demonstrations as the other algorithms.
##### Cable Routing (Physical):
We train $\pi_r$ with the same architecture and procedure as for ThriftyDAgger, but only on the initial offline data.
#### 7.2.3 SafeDAgger
We use the same hyperparameters and architecture for training $\pi_r$ as for ThriftyDAgger. Unlike ThriftyDAgger, SafeDAgger does not have a mechanism to automatically set intervention thresholds when provided an intervention budget $\alpha_h$. Thus, we must specify a value for the switching threshold $\beta_h$. We use $\beta_h = 0.008$, since this is recommended in [[19](#bib.bib19)] as the value which was found to work well in experiments (in practice, this value marks about 20% of states as “unsafe”).
#### 7.2.4 LazyDAgger
We use the same hyperparameters and architecture for training $\pi_r$ as for ThriftyDAgger. Unlike ThriftyDAgger, LazyDAgger does not have a mechanism to automatically set intervention thresholds when provided an intervention budget $\alpha_h$. Thus, we must specify values for both switching thresholds $\beta_h$ and $\beta_r$. We use $\beta_h = 0.015$, $\beta_r = 0.25\,\beta_h$, and a noise covariance matrix of $0.02\,\mathcal{N}(0, I)$ when injecting noise into the supervisor actions. These values were tuned to strike a balance between supervisor burden and policy performance.
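A one-line sketch of the noise injection follows; the $0.02\,\mathcal{N}(0, I)$ notation is ambiguous, and here I read 0.02 as the scale of an isotropic covariance (my assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_supervisor_action(a_h, scale=0.02):
    """Inject zero-mean Gaussian noise into a supervisor action (sketch);
    `scale` is interpreted as the covariance of an isotropic Gaussian."""
    return a_h + rng.multivariate_normal(np.zeros(len(a_h)), scale * np.eye(len(a_h)))
```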
#### 7.2.5 HG-DAgger
All hyperparameters and architectures are identical to those used for Behavior Cloning, without the extra offline demonstrations. Note that for HG-DAgger, the supervisor determines the timing and length of interventions.
### 7.3 Environment Details and Additional Metrics
#### 7.3.1 Peg Insertion in Simulation
We evaluate our algorithm and baselines in the Robosuite environment (<https://robosuite.ai>) [[48](#bib.bib48)], which builds on MuJoCo [[51](#bib.bib51)] to provide a standardized suite of benchmark tasks for robot learning. Specifically, we consider the “Nut Assembly” task, in which a robot must grab a ring from a random initial pose and place it over a cylinder at a fixed location. We consider a variant of the task with only 1 ring and 1 target, though the simulator allows 2 rings and 2 targets. The states are $s \in \mathbb{R}^{51}$ and actions $a \in \mathbb{R}^{5}$ (translation in the XY-plane, translation in the Z-axis, rotation around the Z-axis, and opening or closing the gripper). The simulated robot arm is a UR5e, and the controller reaches a commanded pose via operational space control with fixed impedance. To avoid bias due to variable teleoperation speeds and to ensure that the Markov property applies, we abstract 10 timesteps in the simulator into 1 environment step, and in supervisor mode we pause the simulation until keyboard input is received. This prevents accidentally collecting “no-op” expert labels and allows the end effector to “settle” instead of letting its momentum carry on to the next state. In practice it does not make the task more difficult, as control is still fine-grained enough for precise manipulation. Each episode is terminated upon successful task completion or when 175 actions have been executed. Interventions are collected through a keyboard interface. In Table [4](#S7.T4), we report additional metrics for the peg insertion simulation experiment and find that ThriftyDAgger solicits fewer interventions than prior algorithms at training time while achieving a higher success rate during training than all algorithms other than HG-DAgger, though it does request more human actions. The train success rate column also indicates that ThriftyDAgger achieves throughput comparable to HG-DAgger and higher than the other baselines, as ThriftyDAgger has more task successes in the same amount of time (10,000 timesteps for all algorithms). At execution time, ThriftyDAgger collects lengthier interventions than prior algorithms, but as a result is able to succeed more often at execution time, as discussed in the main text.
Table 4: Peg Insertion in Simulation Additional Metrics: We report additional statistics for the peg insertion task: total number of interventions (T Ints), total number of offline and online human actions (T Acts (H)), and total number of robot actions (T Acts (R)) at training time across all trajectories (successful and unsuccessful). We report these same metrics at execution time, but T Acts (H) does not include offline human actions, as at execution time it does not refer to the number of training samples for the robot policy. We also report the success rate of the mixed control policy at training time (Train Succ.). Results suggest that ThriftyDAgger solicits fewer interventions than prior algorithms at training time while achieving a comparable success rate and throughput to HG-DAgger. At execution time, ThriftyDAgger collects lengthier interventions than prior algorithms but succeeds more often at the task (Table [1](#S5.T1)).
| Algorithm | Train: T Ints | Train: T Acts (H) | Train: T Acts (R) | Train Succ. | Exec: T Ints | Exec: T Acts (H) | Exec: T Acts (R) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Behavior Cloning | N/A | 4004 | N/A | N/A | N/A | N/A | N/A |
| SafeDAgger | 334 | 4227 | 8460 | 48/73 | 81 | 396 | 1781 |
| LazyDAgger | 82 | 3683 | 9004 | 37/67 | 30 | 290 | 2422 |
| HG-DAgger | 124 | 4392 | 8295 | 83/83 | 23 | 342 | 2071 |
| Ours (-Novelty) | 60 | 5242 | 7445 | 62/80 | 12 | 157 | 2649 |
| Ours (-Risk) | 87 | 3623 | 9064 | 72/81 | 30 | 237 | 2255 |
| Ours: ThriftyDAgger | 84 | 6840 | 5847 | 76/86 | 27 | 426 | 1696 |
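The 10-to-1 timestep abstraction described in the environment details above can be pictured as a thin action-repeat wrapper around the simulator; the sketch below uses a hypothetical `sim` interface rather than the actual Robosuite API:

```python
class ActionRepeatWrapper:
    """Hold each commanded action for `repeat` low-level simulator steps,
    exposing the result as a single environment step (illustrative sketch)."""

    def __init__(self, sim, repeat=10):
        self.sim = sim
        self.repeat = repeat

    def reset(self):
        return self.sim.reset()

    def step(self, action):
        for _ in range(self.repeat):
            obs, done = self.sim.step(action)   # hold the commanded pose target
            if done:
                break
        return obs, done
```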
#### 7.3.2 Block Stacking in Simulation
To further evaluate the algorithm and baselines in simulation, we also consider the block stacking task from Robosuite (see previous section). Here the robot must grasp a cube in a randomized initial pose and place it on top of a second cube in another randomized pose. See Table [5](#S7.T5 "Table 5 ‣ 7.3.2 Block Stacking in Simulation ‣ 7.3 Environment Details and Additional Metrics ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") for training results and Figure [3](#S7.F3 "Figure 3 ‣ 7.3.2 Block Stacking in Simulation ‣ 7.3 Environment Details and Additional Metrics ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") for an illustration of the experimental setup. Due to the randomized place position, small placement region, and geometric symmetries, the task is more difficult than peg insertion, as evidenced by the lower autonomous success rate for all algorithms. However, we still see that ThriftyDAgger achieves comparable performance to HG-DAgger in terms of autonomous success rate, success rate during training, and throughput, while outperforming the other baselines and ablations. ThriftyDAgger also solicits fewer interventions than prior algorithms, but generally requires more human actions as these interventions tend to be lengthier. This makes ThriftyDAgger well-suited to situations in which the cost of context switches (latency) may be high.
Figure 3: Left: An example start and goal state for the block stacking environment in Robosuite. The goal is to place the red block on top of the green one. Initial poses of both blocks are randomized. Right: The da Vinci Research Kit Master Tool Manipulator (MTM) 7DOF interface used to provide human interventions in the physical experiments. The human expert views the workspace through the viewer (top) and teleoperates the robot by moving the right joystick (middle) in free space while pressing the rightmost pedal (bottom).
Table 5: Block Stacking in Simulation Results: We report the number of interventions (Ints), number of human actions (Acts (H)), and number of robot actions (Acts (R)) during training (over successful trajectories as in Table [1](#S5.T1 "Table 1 ‣ 5.3 Peg Insertion in Simulation ‣ 5 Experiments ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning")) and report the success rate of the robot policy after training when no interventions are allowed (Auto Succ.). We also report the total number of interventions (T Int), total number of actions from the human (offline and online, in T Acts (H)), total number of actions executed by the robot (T Acts (R)), and the success rate of the mixed control policy during training (Train Succ.). Results suggest that ThriftyDAgger achieves comparable performance to HG-DAgger in terms of both autonomous and training success rates while outperforming the other baselines and ablations. ThriftyDAgger also solicits fewer interventions than prior algorithms, but generally requires more human actions.
| Algorithm | Ints | Acts (H) | Acts (R) | Auto Succ. | T Ints | T Acts (H) | T Acts (R) | Train Succ. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Behavior Cloning | N/A | N/A | 68.0 ± 11.4 | 5/100 | N/A | 3532 | N/A | N/A |
| SafeDAgger | 5.00 ± 3.41 | 40.5 ± 14.1 | 44.3 ± 25.6 | 3/100 | 574 | 4387 | 7290 | 27/68 |
| LazyDAgger | 1.81 ± 1.02 | 25.8 ± 17.8 | 56.6 ± 28.3 | 40/100 | 85 | 2940 | 8737 | 36/75 |
| HG-DAgger | 1.62 ± 0.91 | 22.5 ± 16.5 | 54.6 ± 14.2 | **56/100** | 201 | 4535 | 7142 | 124/125 |
| Ours (-Novelty) | 0.65 ± 0.70 | 43.7 ± 13.3 | 28.6 ± 28.5 | 8/100 | 37 | 3599 | 8078 | 23/69 |
| Ours (-Risk) | 1.89 ± 0.72 | 12.9 ± 7.7 | 72.4 ± 25.5 | 31/100 | 109 | 2518 | 9159 | 47/79 |
| Ours: ThriftyDAgger | 1.33 ± 0.76 | 35.4 ± 15.8 | 37.2 ± 27.5 | **52/100** | 153 | 5873 | 5804 | 111/120 |
#### 7.3.3 Physical Cable Routing
Finally, we evaluate our algorithm on a visuomotor cable routing task with a da Vinci Research Kit surgical robot. We take RGB images of the scene with a Zivid One Plus camera inclined at about 45 degrees to the vertical. These images are cropped into a square and downsampled to 64 × 64 before they are passed to the neural network policy. The cable state is initialized to approximately the same shape (see Figure [2](#S5.F2)), with the cable initialized in the robot’s gripper. The workspace is approximately 10 cm × 10 cm, and each component of the robot action ($\Delta x$, $\Delta y$) is at most 1 cm in magnitude. To avoid collision with the 4 obstacles, we implement a “logical obstacle” as 1-cm-radius balls around the center of each obstacle. Actions that enter one of these regions are projected to the boundary of the circle. Each episode is terminated upon successful task completion or after 100 actions have been executed. Interventions are collected through a 7-DOF teleoperation interface (Figure [3](#S7.F3)) that matches the pose of the robot arm, with rotation of the end effector disabled. Teleoperated actions are mapped to the robot’s action space by projecting pose deltas to the 2D plane at 1-second intervals. The human teleoperates the robot at a frequency that roughly corresponds to taking actions with the maximum magnitude (1 cm/sec). In Table [6](#S7.T6), we report additional metrics for the physical cable routing experiment and find that ThriftyDAgger solicits a number of interventions similar to HG-DAgger while achieving a similar success rate during training. This again indicates that ThriftyDAgger is able to learn intervention criteria competitive with human judgment. At execution time, we find that ThriftyDAgger solicits the same number of interventions as HG-DAgger, but requires fewer human and robot actions than HG-DAgger.
Table 6: Physical Cable Routing Additional Metrics: We report additional statistics for the cable routing task: total number of interventions (T Ints), total number of offline and online human actions (T Acts (H)), and total number of robot actions (T Acts (R)) at training time across all trajectories. We report these same metrics at execution time, but T Acts (H) does not include offline human actions, as at execution time it does not refer to the number of training samples for the robot policy. We also report the success rate of the mixed control policy at training time (Train Succ.). Results suggest that ThriftyDAgger needs fewer interventions than HG-DAgger while achieving a similar training success rate. At execution time, we find that ThriftyDAgger solicits the same number of interventions as HG-DAgger, but requires fewer human and robot actions.
| Algorithm | Train: T Ints | Train: T Acts (H) | Train: T Acts (R) | Train Succ. | Exec: T Ints | Exec: T Acts (H) | Exec: T Acts (R) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Behavior Cloning | N/A | 1381 | N/A | N/A | N/A | N/A | N/A |
| HG-DAgger | 31 | 1682 | 1199 | 20/20 | 6 | 41 | 1109 |
| Ours: ThriftyDAgger | 27 | 1728 | 1153 | 19/21 | 6 | 23 | 919 |
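The “logical obstacle” described above amounts to projecting any commanded target that lands inside a 1-cm ball back onto the ball’s boundary. A small geometric sketch (units in meters; the function name and obstacle representation are mine):

```python
import numpy as np

def project_action(position, action, obstacle_centers, radius=0.01):
    """Clip a commanded XY displacement so the target never enters an obstacle
    ball; targets inside a ball are pushed back to its boundary (sketch)."""
    target = position + action
    for center in obstacle_centers:
        offset = target - center
        dist = np.linalg.norm(offset)
        if dist < radius:
            # Push the target back out to the boundary of the ball.
            target = center + offset / max(dist, 1e-9) * radius
    return target - position
```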
### 7.4 User Study Details
Here we detail the protocol for conducting user studies with ThriftyDAgger and comparisons and discuss qualitative results based on participants’ answers to survey questions measuring their mental load and levels of frustration when using each of the algorithms.
#### 7.4.1 User Study Interface
Figure [4](#S7.F4) (left) illustrates the interface used for the user study. The user study is performed with the same peg insertion environment used for simulation experiments, but with 3 robots performing the task in parallel. The base policy is initialized from 30 demos, as in the other simulation experiments. To speed up task execution for the user study, each action has twice the magnitude of that in the peg insertion experiments. Since this results in shorter trajectories that are easier for Behavior Cloning to accomplish, we also inject a small amount of Gaussian noise (covariance matrix $= 0.02\,\mathcal{N}(0, I)$).
Figure 4: User Study Survey Results: We illustrate the user study interface for the human-gated and robot-gated algorithms (left) and users’ survey responses regarding their mental load and frustration (right) for each algorithm. Results suggest that users report similar levels of mental load and frustration for ThriftyDAgger and LazyDAgger, but significantly higher levels of both metrics for HG-DAgger and SafeDAgger. We hypothesize that the sparing and sustained interventions solicited by ThriftyDAgger and LazyDAgger lead to greater user satisfaction and comfort compared to algorithms which force the user to constantly monitor the system (HG-DAgger) or frequently context switch between teleoperation and the distractor task.
In the human-gated study with HG-DAgger, participants are shown videos of all 3 robots attempting to perform the task in a side pane (Figure [4](#S7.F4 "Figure 4 ‣ 7.4.1 User Study Interface ‣ 7.4 User Study Details ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning"), top right of left pane) and are instructed to monitor all of the robots and intervene when they deem it appropriate. In all robot-gated studies, participants are instructed to play the Concentration game until they hear a chime, at which point they are instructed to switch screens to the teleoperation interface. The Concentration game (also called Memory) is illustrated on the left of the left pane in Figure [4](#S7.F4 "Figure 4 ‣ 7.4.1 User Study Interface ‣ 7.4 User Study Details ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning"): the objective is to find pairs of cards (all of which are initially face-down) which have matching pictures on their front side. Examples of a non-matching pair and a matching pair are illustrated in Figure [4](#S7.F4 "Figure 4 ‣ 7.4.1 User Study Interface ‣ 7.4 User Study Details ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning").
All robots which require interventions are placed in a FIFO queue, with participants serving intervention requests sequentially until no robot requires intervention. Thus, the participant may be required to provide interventions for multiple robots in succession if multiple robots are currently in the queue. When no robot requires assistance, the teleoperation interface turns black and reports that no robot currently needs help, at which point participants are instructed to return to the Concentration game.
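The queueing behavior can be summarized in a few lines (a sketch with hypothetical helper names, not the study code):

```python
from collections import deque

intervention_queue = deque()   # robots currently requesting help, served FIFO

def request_help(robot_id):
    if robot_id not in intervention_queue:
        intervention_queue.append(robot_id)   # a chime notifies the participant

def next_robot_to_teleoperate():
    return intervention_queue.popleft() if intervention_queue else None
```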
#### 7.4.2 NASA TLX Survey Results
After each participant is subjected to all 4 conditions (SafeDAgger, LazyDAgger, ThriftyDAgger, and HG-DAgger) in a randomized order, we give each participant a NASA TLX survey asking them to rate their mental demand and frustration for each of the conditions on a scale of 1 (very low) - 5 (very high). Results (Figure [4](#S7.F4 "Figure 4 ‣ 7.4.1 User Study Interface ‣ 7.4 User Study Details ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning") right pane) suggest that ThriftyDAgger and LazyDAgger impose less mental demand and make participants feel less frustrated than HG-DAgger and SafeDAgger. During experiments, we found that participants found it cumbersome to keep track of all of the robots simultaneously in HG-DAgger, while the frequent context switches in SafeDAgger made participants frustrated since they were often unable to make much progress in the Concentration Game and felt that the robot repeatedly asked for interventions in very similar states.
#### 7.4.3 Wall Clock Time
We report additional metrics on the wall clock time of each condition in Table [7](#S7.T7 "Table 7 ‣ 7.4.3 Wall Clock Time ‣ 7.4 User Study Details ‣ 7 Appendix ‣ ThriftyDAgger: Budget-Aware Novelty and Risk Gating for Interactive Imitation Learning"). Since all experiments are run for the same 350 time steps, total wall clock time is relatively consistent. However, HG-DAgger takes longer, as it takes more compute to render all three robot views at once. ThriftyDAgger takes less total human time than the baselines, allowing the human to make more progress on independent tasks. Note that other robots in autonomous mode can still make task progress during human intervention. Note also that HG-DAgger requires human attention for the Total Wall Clock Time, as the human must supervise all the robots even if he or she is not actively teleoperating one (as recorded by Human Wall Clock Time).
Table 7: Wall Clock Time: We compare the total amount of wall clock time and total amount of human wall clock time averaged over the 10 subjects in the user study. Human Wall Clock Time refers to the amount of time the human spent actively teleoperating a robot, while Total Wall Clock Time measures the amount of time taken by the total experiment. ThriftyDAgger requires the lowest amount of human time, and the total amount of time is relatively consistent. Note that HG-DAgger takes more Total Wall Clock Time as it takes longer to simulate the “bird’s eye view” of all 3 robots, and that autonomous robots can still make task progress independently while a human is operating a robot.
| Algorithm | Human Wall Clock Time (s) | Total Wall Clock Time (s) |
| --- | --- | --- |
| SafeDAgger | 448.0 ± 48.1 | 613.0 ± 33.1 |
| LazyDAgger | 415.3 ± 90.3 | 609.6 ± 49.5 |
| HG-DAgger | 532.6 ± 105.2 | 792.8 ± 68.7 |
| Ours: ThriftyDAgger | **365.4 ± 88.1** | 625.5 ± 52.3 |
#### 7.4.4 Detailed Protocol
For the user study, we recruited 10 participants aged 18-37, including members without any knowledge or experience in robotics or AI. All participants are first assigned a randomly selected user ID. Then, participants are instructed to play a 12-card game of Concentration (also known as Memory) (<https://www.helpfulgames.com/subjects/brain-training/memory.html>) in order to learn how to play. Then, users are given practice with both the robot-gated and human-gated teleoperation interfaces. To do this, the operator of the study (one of the authors) performs one episode of the task in the robot-gated interface and briefly explains how to control the human-gated interface. Then, participants are instructed to perform one practice episode in the robot-gated teleoperation interface and spend a few minutes exploring the human-gated interface until they are confident in the usage of both interfaces and in how to teleoperate the robots. In the robot-gated experiments, participants are instructed to play Concentration when no robot asks for help, but to immediately switch to helping the robot whenever a robot asks for help. In the human-gated experiment with HG-DAgger, participants are instructed to continuously monitor all of the robots and perform interventions which they believe will maximize the number of successful episodes. During the robot-gated study, participants play the 24-card version of Concentration between robot interventions. If a participant completes the game, new games of Concentration are created until a time budget of robot interactions is hit. Then for each condition, the participant is scored based on (1) the number of times the robot successfully completed the task and (2) the number of total matching pairs the participant found across all games of Concentration. |
cb74a725-0bcc-46ef-bd7c-d8b5e3045b25 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Transformative AI and Compute [Summary]
*Cross-posted [here on the EA Forum](https://forum.effectivealtruism.org/posts/3eRJPFhwhGZZzfifF/transformative-ai-and-compute-summary)*.
This is the summary of the series *Transformative AI and Compute - A holistic approach*. You can find [the sequence here](https://www.lesswrong.com/s/bJi3hd8E8qjBeHz9Z) and the links to the posts below:
1. [Compute [1/4]](https://www.lesswrong.com/s/bJi3hd8E8qjBeHz9Z/p/uYXAv6Audr2y4ytJe)
2. [Forecasting Compute [2/4]](https://www.lesswrong.com/posts/sHAaMpdk9FT9XsLvB/forecasting-compute-transformative-ai-and-compute-2-4)
3. [Compute Governance and Conclusions [3/4]](https://www.lesswrong.com/posts/M3xpp7CZ2JaSafDJB/computer-governance-and-conclusions-transformative-ai-and)
4. [Compute Research Questions and Metrics [4/4]](https://www.lesswrong.com/posts/G4KHuYC3pHry6yMhi/compute-research-questions-and-metrics-transformative-ai-and)
0. Executive Summary
====================
This series attempts to:
1. Introduce a simplified model of computing which serves as a foundational concept ([Part 1 - Section 1](https://www.lesswrong.com/s/bJi3hd8E8qjBeHz9Z/p/uYXAv6Audr2y4ytJe#1__Compute)).
2. Discuss the role of compute for AI systems ([Part 1 - Section 2](https://www.lesswrong.com/s/bJi3hd8E8qjBeHz9Z/p/uYXAv6Audr2y4ytJe#2__Compute_in_AI_Systems)).
* In [Part 1 - Section 2.3](https://www.lesswrong.com/s/bJi3hd8E8qjBeHz9Z/p/uYXAv6Audr2y4ytJe#2_3_Compute_Trends__An_Update_8_) you can find the updated compute plot you have been coming for.
3. Explore the connection of compute trends and more capable AI systems over time ([Part 1 - Section 3](https://www.lesswrong.com/s/bJi3hd8E8qjBeHz9Z/p/uYXAv6Audr2y4ytJe#3__Compute_and_AI_Alignment)).
4. Discuss the compute component in forecasting efforts on transformative AI timelines ([Part 2 - Section 4](https://www.lesswrong.com/posts/sHAaMpdk9FT9XsLvB/forecasting-compute-transformative-ai-and-compute-2-4#4__Forecasting_Compute))
5. Propose ideas for better compute forecasts ([Part 2 - Section 5](https://www.lesswrong.com/posts/sHAaMpdk9FT9XsLvB/forecasting-compute-transformative-ai-and-compute-2-4#5_Better_Compute_Forecasts)).
6. Briefly outline the relevance of compute for AI Governance ([Part 3 - Section 6](https://www.lesswrong.com/posts/M3xpp7CZ2JaSafDJB/computer-governance-and-conclusions-transformative-ai-and#6__Compute_Governance)).
7. Conclude this report and discuss next steps ([Section 7](https://www.lesswrong.com/posts/M3xpp7CZ2JaSafDJB/computer-governance-and-conclusions-transformative-ai-and#7__Conclusions)).
8. Provide a list of connected research questions ([Appendix A](https://www.lesswrong.com/posts/G4KHuYC3pHry6yMhi/compute-research-questions-and-metrics-transformative-ai-and#A__Research_Questions)).
9. Present common compute metrics and discusses their caveats ([Appendix B](https://www.lesswrong.com/posts/G4KHuYC3pHry6yMhi/compute-research-questions-and-metrics-transformative-ai-and#B__Metrics)).
10. Provide a list of Startups in the AI Hardware domain ([Appendix C](https://www.lesswrong.com/posts/G4KHuYC3pHry6yMhi/compute-research-questions-and-metrics-transformative-ai-and#C__AI_Hardware_Startups)).
Abstract
--------
Modern progress in AI systems has been driven and enabled mainly by acquiring more computational resources. AI systems rely on computation-intensive training runs — they require massive amounts of *compute*.
Learning about the compute requirements for training existing AI systems and their capabilities allows us to get a more nuanced understanding and take appropriate action within the technical and governance domain to enable a safe development of potential transformative AI systems.
To understand the role of compute, I decided to (a) do a literature review, (b) update existing work with new data, (c) investigate the role of compute for timelines, and lastly, (d) explore concepts to enhance our analysis and forecasting efforts.
In this piece, I present a brief analysis of AI systems’ compute requirements and capabilities, explore compute’s role for transformative AI timelines, and lastly, discuss the compute governance domain.
I find that compute, next to data and algorithmic innovation, is a crucial contributor to the recent performance of AI systems. We identify a doubling time of 6.2 months for the compute requirements of the final training run of state-of-the-art AI systems from 2012 to the present.
Next to more powerful hardware components, the spending on AI systems and the algorithmic innovation are other factors that inform the amount of effective compute available — which itself is a component for forecasting models on transformative AI.
Therefore, as compute is a significant component and driver of AI systems’ capabilities, understanding the developments of the past and forecasting future results is essential. Compared to the other components, the quantifiable nature of compute makes it an exciting aspect for forecasting efforts and the safe development of AI systems.
I consequently recommend additional investigation into the highlighted components of compute, especially AI hardware. As compute forecasting and regulation require an in-depth understanding of hardware, hardware spending, the semiconductor industry, and much more, we recommend an interdisciplinary effort to inform interpretations and forecasts of compute trends. Those insights can then be used to inform policymaking, and potentially regulate access to compute.
Epistemic Status
----------------
This article is *Exploratory* to *My Best Guess*. I've spent roughly 300 hours researching this piece and writing it up. I am not claiming completeness for any enumerations. Most lists are the result of things I learned *on the way* and then tried to categorize.
I have a background in Electrical Engineering with an emphasis on Computer Engineering and have done research in the field of ML optimizations for resource-constrained devices — working on the intersection of ML deployments and hardware optimization. I am more confident in my view on hardware engineering than in the macro interpretation of those trends for AI progress and timelines.
This piece was a research trial to test my prioritization, interest, and fit for this topic. Instead of focusing on a single narrow question, this paper and research trial turned out to be *more broad* — therefore *a holistic approach*. In the future, I’m planning to work more focused on a narrow relevant research question within this domain. Please [reach out](mailto:len+EA@heim.xyz).
Views and mistakes are solely my own.
Highlights per Section
----------------------
### [1. Compute](https://www.lesswrong.com/s/bJi3hd8E8qjBeHz9Z/p/uYXAv6Audr2y4ytJe#1__Compute)
* Computation is the manipulation of information.
* There are various types of computation. This piece is concerned with today’s predominant type: digital computers.
* With compute, we usually refer to a *quantity of operations* used — computed by our computer/processor. It is also used to refer to computational power, a rate of compute operations per time period.
* Computation can be divided into memory, interconnect, and logic. Each of these components is relevant for the performance of the other.
* For an introduction to integrated circuits/chips, I recommend “[AI Chips: What They Are and Why They Matter](https://cset.georgetown.edu/wp-content/uploads/AI-Chips%E2%80%94What-They-Are-and-Why-They-Matter.pdf)” (Khan 2020).
### [2. Compute in AI Systems](https://www.lesswrong.com/s/bJi3hd8E8qjBeHz9Z/p/uYXAv6Audr2y4ytJe#2__Compute_in_AI_Systems)
* Compute is required for training AI systems. Training requires significantly more compute than inference (the process of using a trained model).
* Next to algorithmic innovation and available training data, available compute for the final training run is one of the significant drivers of more capable AI systems.
* OpenAI observed in 2018 that since 2012 the amount of compute used in the largest AI training runs has been doubling every 3.4 months.
* In our updated analysis (n=57, 1957 to 2021), we observe a doubling time of 6.2 months between 2012 and mid-2021 (n=45).
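A quick sanity check on what these doubling times imply (my own arithmetic, not a figure from the report): a doubling time of $d$ months corresponds to a yearly growth factor of $2^{12/d}$, so

$$2^{12/6.2} \approx 3.8\times \text{ per year} \qquad \text{vs.} \qquad 2^{12/3.4} \approx 11.5\times \text{ per year.}$$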
### [3. Compute and AI Alignment](https://www.lesswrong.com/s/bJi3hd8E8qjBeHz9Z/p/uYXAv6Audr2y4ytJe#3__Compute_and_AI_Alignment)
* Compute is a component (an input) of AI systems that led to the increasing capabilities of modern AI systems. There are reasons to believe that progress in computing capabilities, independent of further progress in algorithmic innovation, might be sufficient to lead to a transformative AI[[1]](#fn-jRh9CcfpgF95aQkck-1).
* Compute is a fairly coherent and quantifiable feature — probably the easiest input to AI progress to make reasonable quantitative estimates of.
+ Measuring the quality of the other inputs, data, and algorithmic innovation, is more complex.
+ Strongly simplified, compute only consists of one input axis: more or less compute — where we can expect that an increase in compute leads to more capable and potentially unsafe systems.
* According to “[The Bitter Lesson](http://incompleteideas.net/IncIdeas/BitterLesson.html)”, progress arrives via approaches based on scaling computation by search and learning, and not by building knowledge into the systems. Human intuition has been outpaced by more computational resources.
* The strong [scaling hypothesis](https://www.gwern.net/Scaling-hypothesis) is stating that we only need to *scale* a specific architecture, to achieve transformative or superhuman capabilities — this architecture might already be available. Scaling an architecture implies *more compute* for the training run.
* [Cotra's report on biological anchors](https://docs.google.com/document/d/1IJ6Sr-gPeXdSJugFulwIpvavc0atjHGM82QjIfUSBGQ/edit#) forecasts the computational power/effective compute required to train systems that resemble, e.g. the human brain’s performance. Those estimates can provide compute milestones for transformative capabilities of AI systems.
### [4. Forecasting Compute](https://www.lesswrong.com/posts/sHAaMpdk9FT9XsLvB/forecasting-compute-transformative-ai-and-compute-2-4#4__Forecasting_Compute)
* For transformative AI timeline models with compute milestones, we are interested in how much effective compute we have available at year *Y*.
+ We can break this down into (1) **compute costs**, (2) **compute spending**, and (3) **algorithmic progress**.
* **Hardware progress**: For forecasting hardware progress, no single model can explain the improvements of the last years. Instead, a mix of Moore’s Law, chip architectures, and hardware paradigms are applicable models and categories to think about progress.
+ Performance improvements can happen significantly faster than the pure improvement in transistor count and density (Moore’s law) would indicate.
+ We will see a fragmentation of applications into the *slow* and *fast lane.* High-demand applications will move to the *fast lane* by designing and benefitting from specialized processors. In contrast, low-demand applications will be stuck in the *slow lane* running on general-purpose processors. We should assume that AI will be on the *fast lane*.
* **Economy of scale**: There will *either* be room for improvement in chip design, or chip design will *stabilize* which enables an economy of scale. Our hardware will *first get better* and then *get cheaper.*
* **Hardware spending**: The current increase in spending is not sustainable; however, it could still significantly increase with estimates up to 1% of (US) GDP (a megaproject, like the Manhattan Project or the Apollo program).
+ However, it is still unclear which percentage of the compute trend has been due to increased spending or to more performant hardware.
### [5. Better Compute Forecasts](https://www.lesswrong.com/posts/sHAaMpdk9FT9XsLvB/forecasting-compute-transformative-ai-and-compute-2-4#5_Better_Compute_Forecasts)
* Researchers should share insights into their AI system’s training by disclosing the amount of compute used and its connected details.
+ Ideally, we should make it a requirement for publications and reduce the technical burden of recording the used compute.
* For forecasting hardware progress, we can rely on conceptual models and categories of innovation. We can break this down into:
+ Progress in current computing paradigms
+ Economy of scale for existing technologies
+ Introduction of new computing paradigms
+ Unknown unknowns
* We should also monitor the dominant design strategy as this informs our forecasts. The three design strategies are (1) hardware-driven algorithm design, (2) algorithm-driven hardware design, and (3) co-develop hardware and algorithm.
* Metrics, such as FLOPS/$, often give limited insights, as they only represent one of the three computer components (memory, interconnect, and logic). Understanding their limitations is essential for forecasting.
### [6. Compute Governance](https://www.lesswrong.com/posts/M3xpp7CZ2JaSafDJB/computer-governance-and-conclusions-transformative-ai-and#6__Compute_Governance)
* Compute is a unique AI governance node due to the required physical space, energy demand, and the concentrated supply chain. Those features make it a governable candidate.
* Controlling and governing access to compute can be harnessed to achieve better AI safety outcomes, for instance restricting compute access to non-safety-aligned actors.
* As compute becomes a dominant factor of costs at the frontier of AI research, it may start to resemble high-energy physics research, where a significant amount of the budget is spent on infrastructure (unlike previous trends of CS research where the equipment costs have been fairly low).
### [7. Conclusions](https://www.lesswrong.com/posts/M3xpp7CZ2JaSafDJB/computer-governance-and-conclusions-transformative-ai-and#7__Conclusions)
* In terms of published papers, the research on compute trends, compute spending, and algorithmic efficiency (the field of macro ML research) is sparse, and more work at this intersection could quickly improve our understanding.
* The field is currently bottlenecked by available data on macro ML trends: total compute used to train a model is rarely published, nor is spending. With these, it would be easier to estimate algorithmic efficiency and build better forecasting models.
* The importance of compute also highlights the need for ML engineers working on AI safety to be able to deploy gigantic models.
+ Therefore, more people should consider becoming an [AI hardware expert](https://forum.effectivealtruism.org/posts/HrS2pXQ3zuTwr2SKS/what-does-it-mean-to-become-an-expert-in-ai-hardware-1) or working as an ML engineer at safety-aligned organizations and enabling their deployment success.
* Working at the intersection of technology and economics is also relevant for informing spending estimates and our understanding of macro trends.
* Research results in all of the mentioned fields could then be used to inform compute governance.
Acknowledgments
===============
This work was supported and conducted as a summer fellowship at the [Stanford Existential Risks Initiative](https://cisac.fsi.stanford.edu/stanford-existential-risks-initiative/content/stanford-existential-risks-initiative) (SERI). Their support is gratefully acknowledged. I am thankful for joining this program and would like to thank the organizers for enabling this, and the other fellows for the insightful discussions.
I am incredibly grateful to Ashwin Acharya and Michael Andregg for their mentoring throughout the project. Michael's thoughts on AI hardware nudged me to reconsider my current research interests and learn more about AI and compute. I thank Ashwin for bouncing ideas around, for his wealth of expertise in the domain, and for helping me put things into the proper context. Thanks for the input! I was looking forward to every meeting and the thought-provoking discussions.
Thanks to the Swiss Existential Risk Initiative (CHERI) for providing the social infrastructure during my project. Having the opportunity to organize such an initiative with a fantastic team, accompanied by motivated young researchers, is astonishing.
I would like to express my thanks to Jaime Sevilla, Charlie Giattino, Will Hunt, Markus Anderljung, and Christopher Phenicie for their input and for discussing ideas.
Thanks to Jaime Sevilla, Jeffrey Ohl, Christopher Phenicie, Aaron Gertler, and Kwan Yee Ng for providing feedback on this piece.
References
==========
* Ahmed, Nur, and Muntasir Wahed. 2020. “The De-Democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research.” *ArXiv:2010.15581 [Cs]*, October. <http://arxiv.org/abs/2010.15581>.
* Amodei, Dario, and Danny Hernandez. 2018. “AI and Compute.” OpenAI. May 15, 2018. <https://openai.com/blog/ai-and-compute/>.
* Anderljung, Markus, and Alexis Carlier. 2021. “Some AI Governance Research Ideas.” Some AI Governance Research Ideas - EA Forum. March 6, 2021. <https://forum.effectivealtruism.org/posts/kvkv6779jk6edygug/some-ai-governance-research-ideas>.
* Branwen, Gwern. 2020. “The Scaling Hypothesis,” May. <https://www.gwern.net/Scaling-hypothesis>.
* Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. “Language Models Are Few-Shot Learners.” *ArXiv:2005.14165 [Cs]*, July. <http://arxiv.org/abs/2005.14165>.
* Carey, Ryan. 2018. “Interpreting AI Compute Trends.” AI Impacts. July 10, 2018. <https://aiimpacts.org/interpreting-ai-compute-trends/>.
* Carlsmith, Joseph. 2020. “How Much Computational Power Does It Take to Match the Human Brain?” Open Philanthropy. August 14, 2020. <https://www.openphilanthropy.org/brain-computation-report>.
* Centre for the Governance of AI. 2020. “A Guide to Writing the NeurIPS Impact Statement.” *Medium* (blog). May 19, 2020. <https://medium.com/@GovAI/a-guide-to-writing-the-neurips-impact-statement-4293b723f832>.
* Cotra, Ajeya. 2020. “Draft Report on AI Timelines.” September 19, 2020. <https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines>.
* Crox, John. 2019. “On AI and Compute - EA Forum.” EA Forum. March 4, 2019. <https://forum.effectivealtruism.org/posts/8wEDjvpcdACvYGQTq/on-ai-and-compute>.
* Dafoe, Allan. 2018. “AI Governance: A Research Agenda.” Centre for the Governance of AI, Future of Humanity Institute, University of Oxford. <https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf>.
* Davidson, Tom. 2021. “Report on Semi-Informative Priors.” Open Philanthropy. March 25, 2021. <https://www.openphilanthropy.org/blog/report-semi-informative-priors>.
* Finnveden, Lukas. 2020. “Extrapolating GPT-N Performance - AI Alignment Forum.” December 18, 2020. <https://www.alignmentforum.org/posts/k2SNji3jXaLGhBeYP/extrapolating-gpt-n-performance>.
* Garfinkel, Ben. 2018. “Reinterpreting ‘AI and Compute.’” AI Impacts. December 18, 2018. <https://aiimpacts.org/reinterpreting-ai-and-compute/>.
* Grace, Katja, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans. 2016. “2016 Expert Survey on Progress in AI.” *AI Impacts* (blog). December 14, 2016. <https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/>.
* Hernandez, Danny, and Tom B. Brown. 2020. “Measuring the Algorithmic Efficiency of Neural Networks.” *ArXiv:2005.04305 [Cs, Stat]*, May. <http://arxiv.org/abs/2005.04305>.
* Hestness, Joel, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. 2017. “Deep Learning Scaling Is Predictable, Empirically.” *ArXiv:1712.00409 [Cs, Stat]*, December. <http://arxiv.org/abs/1712.00409>.
* Kaplan, Jared, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. “Scaling Laws for Neural Language Models.” *ArXiv:2001.08361 [Cs, Stat]*, January. <http://arxiv.org/abs/2001.08361>.
* Khan, Saif M. 2020. “AI Chips: What They Are and Why They Matter.” *Center for Security and Emerging Technology* (blog). April 2020. <https://cset.georgetown.edu/research/ai-chips-what-they-are-and-why-they-matter/>.
* ———. 2021. “The Semiconductor Supply Chain.” *Center for Security and Emerging Technology* (blog). January 2021. <https://cset.georgetown.edu/publication/the-semiconductor-supply-chain/>.
* Los Alamos National Laboratory. 2013. “Massive Infrastructures Are Needed to Support Supercomputers.” March 25, 2013. <https://www.lanl.gov/discover/publications/national-security-science/2013-april/what-is-under%20the%20floor-of-a-supercomputer.php>.
* Lyzhov, Alex. 2021. “‘AI and Compute’ Trend Isn’t Predictive of What Is Happening.” “AI and Compute” Trend Isn’t Predictive of What Is Happening - AI Alignment Forum. February 4, 2021. <https://www.alignmentforum.org/posts/wfpdejMWog4vEDLDg/ai-and-compute-trend-isn-t-predictive-of-what-is-happening>.
* Microsoft Documentation. 2020. “Deploy ML Models to FPGAs - Azure Machine Learning.” September 24, 2020. <https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-fpga-web-service>.
* Mirhoseini, Azalia, Anna Goldie, Mustafa Yazgan, Joe Wenjie Jiang, Ebrahim Songhori, Shen Wang, Young-Joon Lee, et al. 2021. “A Graph Placement Methodology for Fast Chip Design.” *Nature* 594 (7862): 207–12. <https://doi.org/10.1038/s41586-021-03544-w>.
* MLCommons. 2021. “MLCommons™ Releases MLPerf™ Training v1.0 Results.” MLCommons. June 30, 2021. <https://mlcommons.org/>.
* Radford, Alec, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. “Language Models Are Unsupervised Multitask Learners,” February, 24.
* Sevilla, Jaime, Pablo Villalobos, Juan Felipe Cerón, Matthew Burtell, and Lennart Heim. 2021. “Parameter, Compute and Data Trends in Machine Learning.” *Google Sheets* (blog). June 19, 2021. <https://docs.google.com/spreadsheets/d/1AAIebjNsnJj_uKALHbXNfn3_YsT6sHXtCU0q7OIPuc4/>.
* Shalf, John. 2020a. “The Future of Computing beyond Moores Law.” *Philosophical Transactions of the Royal Society A*, March. <https://doi.org/10.1098/rsta.2019.0061>.
* ———. 2020b. “Computing Beyond Moore’s Law.” July 14. <https://cs.lbl.gov/assets/CSSSP-Slides/20200714-Shalf.pdf>.
* Sutton, Rich. 2019. “The Bitter Lesson.” March 13, 2019. <http://www.incompleteideas.net/IncIdeas/BitterLesson.html>.
* Thompson, Neil C., Kristjan Greenewald, Keeheon Lee, and Gabriel F. Manso. 2020. “The Computational Limits of Deep Learning.” *ArXiv:2007.05558 [Cs, Stat]*, July. <http://arxiv.org/abs/2007.05558>.
* Thompson, Neil C., and Svenja Spanuth. 2021. “The Decline of Computers as a General Purpose Technology.” *Communications of the ACM* 64 (3): 64–72. <https://doi.org/10.1145/3430936>.
---
1. Transformative AI, as defined by Open Philanthropy in [this blogpost](https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence#Sec1): “*Roughly and conceptually, transformative AI is AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution.*” [↩︎](#fnref-jRh9CcfpgF95aQkck-1) |
6daafc6c-be51-4ad6-8b76-1642ce53a8b8 | trentmkelly/LessWrong-43k | LessWrong | The Future of Structured Self Improvement
Two years ago, we founded the Guild of the ROSE to provide structure and community for people interested in self improvement. Since then, we've grown from a three-month Alpha project to a year-round workshop host.
If you've been in our Discord server, you've probably seen regular announcements about people leveling up on the Practitioner's Path, our RPG-inspired self improvement framework. You might even have heard rumors that we've been working on a new version of the Path.
Practitioner's Path 2.0
The Practitioner's Path 2.0 overhauls every aspect of the Path system, from Bugs to the Character Sheet to levels. The new system is split into three parts: Attributes, Tasks, and Skills. Together, these make up your Character Sheet.
Attributes
Attributes measure long-term metrics such as:
* Net Worth (how much money you have)
* Reach (how many people you can influence)
* Stamina (the efficiency of your cardiovascular system)
The Attributes section of the Character Sheet gives a sense of long-term progress on tangible metrics. Of course, not everything can be tracked as a metric, which is where Skills and Tasks come in.
Tasks
A Task is something that you want to do or achieve. Tasks can be nested inside each other, representing the way that a complex goal can be split into smaller goals.
For example, the Task of "Learn how to program" could be broken into the subtasks of "Decide what language I want to learn" and "Find a tutorial on the language". This nesting system allow you to break down complex goals into easily-achievable chunks.
Skills
The core of the new Path system is the Skilltree. Skills provide a structured approach to self-improvement and a sense of progression as you acquire them. Each skill has a certain amount of XP, allowing you to track your progress on a granular and broad level.
Skills in the Path are organized into three main categories: Pragmatist, Meditative, and Empiricist. Acquiring a Skill will only level up its category — so you |
c039f11f-7f2f-4606-aa89-03353ab3e8d4 | trentmkelly/LessWrong-43k | LessWrong | How to better understand and participate on LW
Update! New URL:
!!!
http://lesswrong.com/lw/2un/references_resources_for_lesswrong/
!!!
Out-of-date:
A list capturing all background knowledge you might ever need for LW.
Updated: 2010-10-10
* F = Free
* E = Easy (adequate for a low educational background)
This list has two purposes. One is to enable people that lack a basic formal education to read and understand the LessWrong Sequences. Secondly, it is meant as a list of useful resources for all people to help to better understand what is being discussed on LessWrong and to enable you to actively participate.
Do not flinch: most of LessWrong can be read and understood by people with a previous level of education less than secondary school. And even if you lack the most basic education, if you start with Khan Academy followed by BetterExplained then with the help of Google and Wikipedia you should be able to reach a level of education that allows you to start reading the LessWrong Sequences.
Nevertheless, before you start off you might read the Twelve Virtues of Rationality FE. Not only is scholarship just one virtue but you'll also be given a list of important fields of knowledge that anyone who takes LessWrong seriously should study:
> It is especially important to eat math and science which impinges upon rationality: Evolutionary psychology, heuristics and biases, social psychology, probability theory, decision theory.
Mathematics:
Basics
* The Khan Academy (World-class education for free (1800+ videos).) FE
* Just Math Tutorials (FREE math videos for the world!) F
* BetterExplained (There’s always a better way to explain a topic.) FE
* Steven Strogatz on the Elements of Math (A very basic introduction to mathematics.) FE
General
* The Princeton Companion to Mathematics (Reference for anyone with a serious interest in mathematics)
* Free Mathematics eBooks F
* Interactive Mathematics Miscellany and Puzzles F
* math.stackexchange.com (Q&A for people studying math at any level) |
58d5e5ee-f711-48dc-9e1d-b5d79bf9936d | trentmkelly/LessWrong-43k | LessWrong | Sleeping Beauty as a decision problem (solved)
EDIT: User:Misha solved it
----------------------------------------
First, here's the Sleeping Beauty problem, from Wikipedia:
> The paradox imagines that Sleeping Beauty volunteers to undergo the following experiment. On Sunday she is given a drug that sends her to sleep. A fair coin is then tossed just once in the course of the experiment to determine which experimental procedure is undertaken. If the coin comes up heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. The sleeping drug induces a mild amnesia, so that she cannot remember any previous awakenings during the course of the experiment (if any). During the experiment, she has no access to anything that would give a clue as to the day of the week. However, she knows all the details of the experiment.
>
>
> Each interview consists of one question, "What is your credence now for the proposition that our coin landed heads?"
I was looking at AlephNeil's old post about UDT and encountered this diagram depicting the Sleeping Beauty problem as a decision problem.
This diagram is underspecified, though. There are no specific payoffs in the boxes and it's not obvious what actions the arrows mean. So I tried to figure out some ways to transform the Sleeping Beauty problem into a concrete decision problem. I also made edited versions of AlephNeil's diagram for versions 1 and 2.
----------------------------------------
The gamemaster puts Sleeping Beauty to sleep on Sunday. He uses a sleeping drug that causes mild amnesia such that upon waking she won't be able to remember any previous awakenings that may have taken place during the course of the game. The gamemaster flips a coin. If heads, he wakes her up on monday only. If tails, he wakes her up |
f7b862ad-6ebd-462d-bd71-780e30841807 | trentmkelly/LessWrong-43k | LessWrong | Is Infra-Bayesianism Applicable to Value Learning?
My impression, as someone just starting to learn Infra-Bayesianism, is that it's about caution, lower bounds on utility (which is exactly the way anything trying to overcome the Optimizer's Curse should be reasoning, especially in an environment already heavily optimized by humans where utility will have a lot more downside than upside uncertainty), so the utility score is vital in the argmax min process, and in the relationship between sa-measures and a-measures.
However, this does make it intuitively inobvious how to apply Infra-Bayesianism to Value Learning, where the utility function from physical states of the environment to utility values is initially very uncertain, and is an important part of what the AI is trying to do (Infra-)Bayesian updates on. So, a question for people who already understand Infra-Bayesianism: is it in fact applicable to Value Learning? If so, does it apply in the following way: the (a-priori unknown, likely quite complex, and possibly not fully computable/realizable to the agent) human effective utility function that maps physical states to human (and thus also value-learner-agent) utility values is treated as (an important) part of the environment, and thus the min over environments('Murphy') part of the argmax min process includes making the most pessimistic still-viable assumptions about this?
To ask a follow-on question, if so, would cost-effectively reducing uncertainty in the human effective utility function (i.e. doing research on the alignment problem) to reduce Murphy's future room-to-maneuver on this be a convergent intermediate strategy for any value-learner-agents that were using Infra-Bayesian reasoning? Or would such a system automatically assume that learning more about the human effective utility function is pointless, because they assume Murphy will always ensure that they live in the worst of all possible environments, so decreasing uncertainty on utility will only ever move the upper bound on it not the lower one? |
0a9a14c2-67e3-4326-828e-141eb34419f5 | trentmkelly/LessWrong-43k | LessWrong | Mistakes #4: breaking Chesterton’s fence in the presence of bull
(Mistakes #1, #2, #3)
Lots of prominent activities don’t immediately make pragmatic sense. I mean, they make sense in the sense that you want to do them, but not in the sense that you can give an explicit account of why you want to do them.
For instance, visiting family in holidays. For one thing, why do we even have holidays? For a few days, and not on other days? And why eat turkeys or champagne then, and not eat them the rest of the time? For another thing, why do we have families? And why see your family exactly that week? For those who like their families, is that the most convenient week? Or do people really need to coordinate in holding family dinner at the same time as people not in their families? For those who don’t like their families, why go to so much trouble to see people whose only special feature is that you were already forced to spend too much time with them decades ago? Also, why guess what someone else wants you to buy for them, while they spend the same amount on a guess for you? And why does everything have to be decorated in colors and sparklinesses that nobody during more sober times finds aesthetically pleasing?
As a young person, learning about the world, you can respond to this sort of thing in at least two ways. One option is to go along with the things. Perhaps you have great trust in society. Perhaps you don’t notice. Perhaps you have little curiosity or passion for improving the world. Perhaps you have heard of Chesterton’s fence.
Another option is to politely disregard any of the things that don’t make apparent sense, and redesign your life according to reason. Or rudely disregard any of the things that don’t make sense, if you don’t see the use in politeness! If you see no reason for eating dinner before dessert, or exchanging gifts, or doing well in school, then you just don’t do those things.
I have a soft spot for the social innovator, who sees the people needlessly toiling for senseless or forgotten goals and is willing to f |
0c46c09d-f1dd-42e1-a7ea-d001bc56a64a | trentmkelly/LessWrong-43k | LessWrong | LLMs are really good at k-order thinking (where k is even)
I've noticed something about how humans and language models work together. There's a pattern that emerges whenever we collaborate effectively.
It goes like this: Someone has an initial idea (step 1). An LLM can then generate variations and connections around that idea (step 2). A human needs to look at these and decide which are actually valuable (step 3). Then the LLM can develop the chosen direction with consistency (step 4).
This alternating pattern shows up everywhere once you start looking for it. The even-numbered steps—expansion, elaboration, systematisation—are what language models do well. The odd-numbered steps—origination, curation, judgement, taste—stay firmly in human hands. In technical terms, LLMs excel at k-order thinking when k is even, but struggle with the odd-numbered moves that require human intuition.
(Here’s a meta-observation: I’m writing this piece about even-numbered thinking using exactly this pattern. The original insight was mine. Claude and o1 perform step 2—expansion and connection. Then I actually curate this draft in my own words, deciding what resonates and what doesn’t (step 3). Finally the LLMs polish up some untidy aspects of my writing (step 4), whilst in step 5 I check it hasn't subjected it to too much "slopification".)
This division of labor isn’t accidental. It comes from how our brains and language models operate. Human creativity arises from the interplay of networks like the default mode network (which combines memories and loose associations) and our attentional circuits (which filter and refine these ideas based on goals and feedback). This lets us generate new insights beyond what we’ve directly encountered. LLMs, in contrast, are mathematically constrained by their training distribution: while they excel at interpolation in high-dimensional token space, they can't generate ideas that constitute true distribution shift. They're incredibly good at drawing out hidden connections, but have no neural machinery to intro |
a7335d95-9669-4c06-9b24-de5097de31e6 | trentmkelly/LessWrong-43k | LessWrong | Notes on Benevolence
This post examines the virtue of benevolence. I wrote this not as an expert, but as someone who wants to learn more. I hope it helps people who want to know more about this virtue and how to nurture it.
What is benevolence?
Benevolence (sometimes “goodness” or “goodwill”) is the belief that some things are good, others bad, accompanied by a determination to promote the good and discourage the bad. In short: if you are benevolent you commit yourself on the side of good. Typically, benevolence concerns moral good and bad—being good to others, in particular—rather than less morally-tinted goods like cleanliness or creativity.
A brief definition like this hides a lot of complexity and difficulty. How is it that some things are morally good and other things morally bad, for example, and how can we know the difference? Why is there debate about what is morally good or bad, and why does there seem to be no consensus about how such debates can be resolved? Why ought one to side with good over bad instead of the alternative or instead of remaining neutral and making your decisions based on some other metric?
The benevolent person doesn’t necessarily think that they have all the answers to these questions, but is satisfied that moral right and wrong is real and worthy of respect, believes they can discern the rightness or wrongness of a thing or a course of action well enough to act on that discrimination, and is persuaded that consistently siding with the good is the correct course of action.
Ethical philosophy is in part a gigantic junkyard strewn with the wreckage of ingenious attempts to prove or disprove any of that. If you want a philosophical vehicle with which you can rationally drive yourself either into or out from a stance of benevolence, you can take your pick, but good luck getting it off the lot.
Benevolence and the virtues
From a virtue ethics perspective, the question is whether benevolence is an essential ingredient in human flourishing, or whether one |
87a16655-c9db-4829-b823-cda572115728 | trentmkelly/LessWrong-43k | LessWrong | Social Necessity of Drinking
It's been over a year since I graduated from college, but only recently have I felt like I'm officially entering the "adult world." Navigating the social arenas of the adult world requires the same basic skillsets as the college world, but a lot of the rules are different and I'm struggling to learn them. Among them is how to drink socially.
As a general rule, I don't drink. I don't like the taste of alcohol. I don't like paying the exorbitant prices that alcohol costs. I don't like the feeling of my brain slowing down and making it harder to string sentences together. I don't mind the physical disorientation - that part's pretty fun. But that part also seems to be slightly frowned upon in an "adult" setting. I'm not opposed to it for any particular moral reasons.
When I do drink, I prefer to get it over with as fast as possible, whether I'm officially drinking a "shot" or not. In college that at least had a sort of "daring" quality that was respected. But it's pretty obviously taboo at classy cocktail parties and even somewhat taboo at "casual adult" parties.
So there's a few separate questions I have:
1) Are there any good, cached buzzword phrases I can use that'll make it socially acceptable to not drink? "I just don't like it" seems to draw disdainful stares, and while I haven't tried it I get the sense that saying I'm morally opposed to it would make me look even more like a stick in the mud. Saying "it's ridiculously expensive" makes me look like a cheapskate.
2) If I must drink socially, is there a breakdown of the general social conventions I should be aware of so I don't need to have them pointed out to me over the course of the next few years?
3) Is there any particularly interesting analysis of *why* drinking is so important to social interaction? Knowing the underlying causes might at least give me some better appreciation for why I have to learn this other than "because!" |
c345b640-f227-46e5-a956-66be8e76b988 | trentmkelly/LessWrong-43k | LessWrong | Liars for Jesus
This should be of interest to a few members of this forum: Chris Rodda has made her book, Liars for Jesus, available for free online (pdf). The book is a debunking of modern revisionist histories written by authors like David Barton and Gary DeMar. Topics range from the obvious (no, Jefferson was not an evangelical Christian) to the less obvious (no, the Northwest Ordinance was not widely used to encourage religious teaching in public schools). It's a useful resource for those who, like me, are not well-educated in history. It also works as a case study of confirmation bias: chapter after chapter shows that the evidence for many of the revisionist claims is based on passages taken out of both literary and historical context, thus ignoring relevant counterevidence. |
eeb96d12-db2f-4f54-b84d-12e75673f740 | trentmkelly/LessWrong-43k | LessWrong | We Are Conjecture, A New Alignment Research Startup
Conjecture is a new alignment startup founded by Connor Leahy, Sid Black and Gabriel Alfour, which aims to scale alignment research. We have VC backing from, among others, Nat Friedman, Daniel Gross, Patrick and John Collison, Arthur Breitman, Andrej Karpathy, and Sam Bankman-Fried. Our founders and early staff are mostly EleutherAI alumni and previously independent researchers like Adam Shimi. We are located in London.
Of the options we considered, we believe that being a for-profit company with products[1] on the market is the best one to reach our goals. This lets us scale investment quickly while maintaining as much freedom as possible to expand alignment research. The more investors we appeal to, the easier it is for us to select ones that support our mission (like our current investors), and the easier it is for us to guarantee security to alignment researchers looking to develop their ideas over the course of years. The founders also retain complete control of the company.
We're interested in your feedback, questions, comments, and concerns. We'll be hosting an AMA on the Alignment Forum this weekend, from Saturday 9th to Sunday 10th, and would love to hear from you all there. (We'll also be responding to the comments thread here!)
Our Research Agenda
We aim to conduct both conceptual and applied research that addresses the (prosaic) alignment problem. On the experimental side, this means leveraging our hands-on experience from EleutherAI to train and study state-of-the-art models without pushing the capabilities frontier. On the conceptual side, most of our work will tackle the general idea and problems of alignment like deception, inner alignment, value learning, and amplification, with a slant towards language models and backchaining to local search.
Our research agenda is still actively evolving, but some of the initial directions are:
* New frames for reasoning about large language models:
* What: Propose and expand on a frame of GPT-like mode |
d4b3a1cf-f776-4883-8679-1023e1f8db5a | trentmkelly/LessWrong-43k | LessWrong | Open & Welcome Thread — February 2023
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you're new to the community, you can start reading the Highlights from the Sequences, a collection of posts about the core ideas of LessWrong.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the Concepts section.
The Open Thread tag is here. The Open Thread sequence is here. |
c354a138-138a-4ab9-a698-26ce8b1a30b9 | trentmkelly/LessWrong-43k | LessWrong | (Anti)Aging 101
A quest to solve aging must start with careful consideration of what it is.
Aging is a constant in nature, from archaea to elephants. Where life finds death as counterpart, aging is unopposed, yet frames our understanding of both.
To solve aging is to prevent death and avoid decay.
Death is a rather sudden process and decay is a fuzzy concept.
Decay is non-linear and hard to characterize at the organism level.
Sudden regime changes can happen over the course of days, see: menopause, post-viral syndromes, acute neuropathy, autoimmune diseases.
To make things harder, we are evolutionarily and culturally adapted to obfuscate signs of decay.
Defining decay at the molecular level is presently an intractable information problem. Theoretically possible but practically unsolvable.
----------------------------------------
The above is, to the best of my outlining abilities, the paradox most researchers get stuck in when they ponder aging.
Is aging real?
Common sense dictates aging happens, we see it all around us.
But what data can we use to back up this intuition?
This graph is the most clear explanation of aging and damning proof of its existence and effects.
We are born, we are weak and unadapted to our environment and thus we die. We grow, we adapt and thus we start dying less. And then, in our teens… that trend flips.
The scariest part about this trend is that we are certain it has held true for the last century, and almost certainly since the first humans arose.
If you break it down by country it holds true, if you split it on income or genetics it holds true, if you filter disease or look only at the most physically fit people… it holds true.
Somewhere, something happens, and we go from getting better and better at not dying to getting worse at it.
Differences in age
What if we take a first-principles approach when defining aging, looking at decay as the derivative of the path between an 80-year-old and a 10-year-old across?
Two overarching curse |
47e89ab9-f1c6-4a65-be71-95595457c56e | trentmkelly/LessWrong-43k | LessWrong | OpenAI wants to raise 5-7 trillion
This $5-7 trillion is to enhance GPU production, and he also sees revenue doubling from the current $2 billion to $4 billion next year.
I'm taking a guess that GPT-5 training is going well?
32b95145-9583-47df-8f4e-fb5b69161a4b | StampyAI/alignment-research-dataset/arbital | Arbital | Bayes' rule: Proportional form
If $H_i$ and $H_j$ are hypotheses and $e$ is a piece of evidence, [Bayes' rule](https://arbital.com/p/1lz) states:
$$\dfrac{\mathbb P(H_i)}{\mathbb P(H_j)} \times \dfrac{\mathbb P(e\mid H_i)}{\mathbb P(e\mid H_j)} = \dfrac{\mathbb P(H_i\mid e)}{\mathbb P(H_j\mid e)}$$
In the [Diseasitis problem](https://arbital.com/p/22s), 20% of the patients in a screening population have Diseasitis, 90% of sick patients will turn a chemical strip black, and 30% of healthy patients will turn a chemical strip black. We can use the form of Bayes' rule above to justify solving this problem via the calculation $(1 : 4) \times (3 : 1) = (3 : 4).$
If instead of treating the ratios as odds, we actually calculate out the numbers for each term of the equation, we instead get the calculation $\frac{1}{4} \times \frac{3}{1} = \frac{3}{4},$ or $0.25 \times 3 = 0.75.$
If we try to directly interpret this, it says: "If a patient starts out 0.25 times as likely to be sick as healthy, and we see a test result that is 3 times as likely to occur if the patient is sick as if the patient is healthy, we conclude the patient is 0.75 times as likely to be sick as healthy."
This is valid reasoning, and we call it the *proportional* form of Bayes' rule. To get the probability back out, we reason that if there's 0.75 sick patients to every 1 healthy patient in a bag, the bag comprises 0.75/(0.75 + 1) = 3/7 = 43% sick patients.
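For readers who want the arithmetic spelled out, here is a minimal sketch (not part of the original page) that runs the same Diseasitis calculation numerically:

```python
# Proportional form of Bayes' rule applied to the Diseasitis example.
prior_odds = 0.20 / 0.80          # 20% sick vs. 80% healthy  ->  0.25 : 1
likelihood_ratio = 0.90 / 0.30    # P(black | sick) / P(black | healthy)  ->  3 : 1
posterior_odds = prior_odds * likelihood_ratio            # 0.75 : 1

# Convert odds back into a probability: 0.75 / (0.75 + 1) = 3/7, about 43%.
posterior_probability = posterior_odds / (posterior_odds + 1)
print(round(posterior_odds, 2), round(posterior_probability, 2))   # 0.75 0.43
```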
# Spotlight visualization
One way of looking at this result is that, since odds ratios are equivalent under multiplication by a positive constant, we can fix the right side of the odds ratio as equaling 1 and ask about what's on the left side. This is what we do when seeing the calculation as $(0.25 : 1) \cdot (3 : 1) = (0.75 : 1),$ the form suggested by the theorem proved above.
We could visualize Bayes' rule as a pair of spotlights with different starting intensities, that go through lenses that amplify or reduce each incoming unit of light by a fixed multiplier. In the [Diseasitis](https://arbital.com/p/22s) case, if we fix the right-side blue beam as having a starting intensity of 1 and a multiplying lens of 1, and we fix the left-side beam of having a starting intensity of 0.25 and a multiplying lens of 3x, then the result gives us a visualization of the calculation prescribed by Bayes' rule:

Note the similarity to a [waterfall diagram](https://arbital.com/p/1wy). The main thing the spotlight visualization adds is that we can imagine varying the absolute intensities of the lights and lenses, while preserving their relative intensities, in such a way as to make the right-side beams and lenses equal 1.
# Usefulness in informal argument
The proportional form of Bayes' rule is perhaps the fastest way of describing Bayesian reasoning that sounds like it ought to be true. If you were having a fictional character suddenly give a Bayesian argument in the middle of a story being read by many people who'd never heard of Bayes' rule, you might have them [say](http://hpmor.com/chapter/86):
> "Suppose the Dark Mark is certain to continue while the Dark Lord's sentience lives on, but a priori we'd only have guessed a twenty percent chance of the Dark Mark continuing to exist after the Dark Lord dies. Then the observation, "The Dark Mark has not faded" is five times as likely to occur in worlds where the Dark Lord is alive as in worlds where the Dark Lord is dead. Is that really commensurate with the prior improbability of immortality? Let's say the prior odds were a hundred-to-one against the Dark Lord surviving. If a hypothesis is a hundred times as likely to be false versus true, and then you see evidence five times more likely if the hypothesis is true versus false, you should update to believing the hypothesis is twenty times as likely to be false as true."
Similarly, if you were a doctor trying to explain the meaning of a positive test result to a patient, you might say: "If we haven't seen any test results, patients like you are a thousand times as likely to be healthy as sick. This test is only a hundred times as likely to be positive for sick as for healthy patients. So now we think you're ten times as likely to be healthy as sick, which is still a pretty good chance!"
[Visual diagrams](https://arbital.com/p/1wy) and special notation for [odds](https://arbital.com/p/1x5) and [relative likelihoods](https://arbital.com/p/1rq) might make Bayes' rule more intuitive, but the proportional form is probably the most valid-*sounding* thing that *is* quantitatively correct that you can say in three sentences.
2343564a-54b7-4c20-9793-1cedb2481387 | trentmkelly/LessWrong-43k | LessWrong | LessWrong FAQ
This is a new FAQ written for LessWrong 2.0. This is the first version, and I apologize if it is a little rough. Please comment or message with further questions, typos, things that are unclear, etc.
The old FAQ on the LessWrong Wiki still contains much excellent information, however it has not been kept up to date.
Advice! We suggest you navigate this guide with the help on the table of contents (ToC) in the left sidebar. You will need to scroll to see all of it. Mobile users need to click the menu icon in the top left.
The major sections of this FAQ are:
* LessWrong Meta
* Getting Started
* Reading Content
* Posting & Commenting
* special mention: The Editor
* Karma & Voting
* Notifications & Subscriptions
* Messaging
* Questions
* Community Events Page
* Moderation
* Privacy Policy & Terms of Use
About LessWrong
What is LessWrong?
LessWrong is a community dedicated to improving our reasoning and decision-making. We seek to hold true beliefs and to be effective at accomplishing our goals. More generally, we want to develop and practice the art of human rationality.
To that end, LessWrong is a place to 1) develop and train rationality, and 2) apply one’s rationality to real-world problems.
LessWrong serves these purposes with its library of rationality writings, community discussion forum, open questions research platform, and community page for in-person events.
See also: Welcome to LessWrong!
What is rationality?
Rationality is a term which can have different meanings to different people. You might already associate with a few things. On LessWrong, we mean something like the following:
* Rationality is thinking in ways which systematically arrive at truth.
* Rationality is thinking in ways which cause you to systematically achieve your goals.
* Rationality is trying to do better on purpose.
* Rationality is reasoning well even in the face of massive uncertainty.
* Rationality is making good decisions even when it’s hard.
* Rationality |
7e5d7135-0b69-4ec3-9e2e-abb512daa2ea | trentmkelly/LessWrong-43k | LessWrong | Meetup : Denver Area LW March Meetup (CHANGED TO MONDAY 3/6!)
Discussion article for the meetup : Denver Area LW March Meetup (CHANGED TO MONDAY 3/6!)
WHEN: 06 March 2017 07:00:00PM (-0700)
WHERE: 4955 S Ulster St #103, Denver, CO 80237
Meetup has been pushed forward a day on account of all of Darcy's being totally booked for our usual day and time. We should have the room in the back as usual. Setting is casual, though sometimes we have specific topics suggested for conversation.
1e5f29c9-7faa-4700-a6e8-4c26e5dd8578 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Role playing game based on HPMOR in Moscow
Discussion article for the meetup : Role playing game based on HPMOR in Moscow
WHEN: 17 September 2016 03:00:00PM (+0300)
WHERE: Большая Дорогомиловская, 5к2
Welcome to our custom-made role-playing game "Hogwarts and methods of rationality", inspired by the HPMOR story! It will be an intellectual role-playing game with rationality-based minigames. You can read a more detailed announcement and description here: https://goo.gl/3vGLUn
c73d6493-895b-4401-96ea-59e8307e491d | trentmkelly/LessWrong-43k | LessWrong | The Magnitude of His Own Folly
In the years before I met that would-be creator of Artificial General Intelligence (with a funded project) who happened to be a creationist, I would still try to argue with individual AGI wannabes.
In those days, I sort-of-succeeded in convincing one such fellow that, yes, you had to take Friendly AI into account, and no, you couldn't just find the right fitness metric for an evolutionary algorithm. (Previously he had been very impressed with evolutionary algorithms.)
And the one said: Oh, woe! Oh, alas! What a fool I've been! Through my carelessness, I almost destroyed the world! What a villain I once was!
Now, there's a trap I knew I better than to fall into—
—at the point where, in late 2002, I looked back to Eliezer1997's AI proposals and realized what they really would have done, insofar as they were coherent enough to talk about what they "really would have done".
When I finally saw the magnitude of my own folly, everything fell into place at once. The dam against realization cracked; and the unspoken doubts that had been accumulating behind it, crashed through all together. There wasn't a prolonged period, or even a single moment that I remember, of wondering how I could have been so stupid. I already knew how.
And I also knew, all at once, in the same moment of realization, that to say, I almost destroyed the world!, would have been too prideful.
It would have been too confirming of ego, too confirming of my own importance in the scheme of things, at a time when—I understood in the same moment of realization—my ego ought to be taking a major punch to the stomach. I had been so much less than I needed to be; I had to take that punch in the stomach, not avert it.
And by the same token, I didn't fall into the conjugate trap of saying: Oh, well, it's not as if I had code and was about to run it; I didn't really come close to destroying the world. For that, too, would have minimized the force of the punch. It wasn't really loaded? I had pr |
f8d22145-993e-4edf-b993-bb35d7b77d15 | trentmkelly/LessWrong-43k | LessWrong | PM system is not working
Enough people have reported this that I wanted to make it publicly known. I'm on my phone, so I will let others provide more detail.
653481ec-4298-45e2-aa9e-43b7be07d638 | trentmkelly/LessWrong-43k | LessWrong | The Protagonist Problem
Followup to: Neural Correlates of Conscious Access; Related to: How an Algorithm Feels From the Inside, Dissolving the Question
Global Workspace Theory and its associated Theater Metaphor are empirically plausible, but why should it result in consciousness? Why should globally available information being processed by separate cognitive modules make us talk about being conscious?
Sure, that's how brains see stuff, but why would that make us think that it's happening to anyone? What in the world corresponds to a self?
So far, I've only encountered two threads of thought that try to approach this problem: the Social Cognitive Interface of Kurzban, and the Self-Model theories like those of Metzinger and Damasio.
I’ll be talking about the latter, starting off with what self-models are, and a bit about how they’re constructed. Then I’ll say what a self-model theory is.
Humans as Informational Processing Systems
Questions: What exactly is there for things to happen to? What can perceive things?
Well, bodies exist, and stuff can to happen to them. So let's start there.
Humans have bodies which include informational processing systems called brains. Our brains are causally entangled with the outside world, and are capable of mapping it. Sensory inputs are transformed into neural representations which can then be used in performing adaptive responses to the environment.
In addition to receiving sensory input from our eyes, ears, nose, tongue, skin, etc, we get sensory input about the pH level of our blood, various hormone concentrations, etc. We map not only things about the outside world, but things about our own bodies. Our brain's models of our bodies also include things like limb position.
From the third person, brains are capable of representing the bodies that they're attached to. Humans are information processing systems which, in the process of interacting with the environment, maintain a representation of themselves used by the system for the purposes of th |
0a27cad6-10ee-4ed9-9f9a-0d8daf3f5810 | StampyAI/alignment-research-dataset/blogs | Blogs | AISC2: Research Summaries
The second AI Safety Camp took place this October in Prague. Our teams have worked on exciting projects which are summarized below:
#### AI Governance and the Policymaking Process: Key Considerations for Reducing AI Risk:
Team: Policymaking for AI Strategy – Brandon Perry, Risto Uuk
Our project was an attempt to introduce literature from theories on the public policymaking cycle to AI strategy to develop a new set of crucial considerations and open up research questions for the field. We began by defining our terms and laying out a big picture approach to how the policymaking cycle interacts with the rest of the AI strategy field. We then went through the different steps and theories in the policymaking cycle to develop a list of crucial considerations that we believe to be valuable for future AI policy practitioners and researchers to consider. For example, policies only get passed once there’s significant momentum and support for that policy, which creates implications to consider such as how many chances we get to implement certain policies. In the end, we believe that we have opened up a new area of research in AI policymaking strategy, where the way that solutions are implemented have strategic considerations for the entire AI risk field itself.
Read our paper [here](https://www.mdpi.com/2504-2289/3/2/26/pdf).
#### Detecting Spiky Corruption in Markov Decision Processes:
Team: Corrupt Reward MDPs – Jason Mancuso, Tomasz Kisielewski, David Lindner, Alok Singh
We presented our work at the AI Safety Workshop at IJCAI 2019.
Read our paper [here](https://arxiv.org/abs/1907.00452).
#### Corrupt Reward MDPs:
Team: *Tomasz Kisielewski, David Lindner, Jason Mancuso, Alok Singh*
* We worked on solving Markov Decision Processes with corrupt reward functions (CRMDPs), in which the observed and true rewards are not necessarily the same.
* The general class of CRMDPs is not solvable, so we focused on finding useful subclasses that are.
* We developed a set of assumptions that define what we call Spiky CRMDPs and an algorithm that solves them by identifying corrupt states, i.e. states that have corrupted reward.
* We worked out regret bounds for our algorithm in the class of Spiky CRMDPs, and found a specific subclass under which our algorithm is provably optimal.
* Even for Spiky CRMDPs in which our algorithm is suboptimal, we can use the regret bound in combination with semi-supervised RL to reduce supervisor queries.
We are currently working on implementing the algorithm in safe-grid-agents to be able to test it on official and custom AI Safety Gridworlds. We also plan to make our code OpenAI Gym-compatible for easier interfacing of the AI Safety Gridworlds and our agents with the rest of the RL community.
Our current code is available [on GitHub](https://github.com/jvmancuso/safe-grid-agents).
Paper published later: [Detecting Spiky Corruption in Markov Decision Processes](https://arxiv.org/abs/1907.00452) (presented in session at [AI Safety Workshop in IJCAI 2019](http://ceur-ws.org/Vol-2419/)).
#### Human Preference Types
Team: *Sabrina Kavanagh, Erin M. Linebarger, Nandi Schoots*
* We analyzed the usefulness of the framework of preference types to value learning. We zoomed in on the preference types liking, wanting and approving. We described the framework of preference types and how these can be inferred.
* We considered how an AI could aggregate our preferences and came up with suggestions for how to choose an aggregation method. Our initial approach to establishing a method for aggregation of preference types was to find desiderata any potential aggregation function should comply with. As a source of desiderata, we examined the following existing bodies of research that dealt with aggregating preferences, either across individuals or between different types: economics and social welfare theory, social choice theory, constitutional law, and moral philosophy.
* We concluded that the aggregation method should be chosen on a case-by-case basis. For example by asking people for their meta-preferences; considering the importance of desiderata to the end-user; letting the accuracy of measurement decide its weight; implementing a sensible aggregation function and adjusting it on the go; or identifying a more complete preference type.
[This is a blogpost](https://www.lesswrong.com/posts/mSPsyEwaymS74unND/acknowledging-human-preference-types-to-support-value) we wrote during the camp.
#### Feature Visualization for Deep Reinforcement Learning
Team: *Zera Alexander, Andrew Schreiber, Fabian Steuer*
* Completed a literature review of visualization in Deep Reinforcement Learning.
* Built a prototype of Agent, a Tensorboard plugin for interpretability of RL/IRL models focused on the time-step level.
* Open-sourced the Agent prototype [on GitHub](https://github.com/andrewschreiber/agent).
* Reproduced and integrated [a paper](https://arxiv.org/pdf/1711.00138.pdf) on perturbation-based saliency maps in Deep RL.
* Applied for an EA Grant to continue our work. (Currently at the 3rd and final stage in the process.)
Ongoing work:
* Developing the prototype into a functional tool.
* Collecting and integrating feedback from AI Safety researchers in Deep RL/IRL.
* Writing an introductory blog post to Agent.
#### Corrigibility
Team: *Vegard Blindheim, Anton Osika, Roland Pihlakas*
* The initial project topic was: [Corrigibility and interruptibility via the principles of diminishing returns and conjunctive goals](https://medium.com/threelaws/project-proposal-corrigibility-and-interruptibility-of-homeostasis-based-agents-e51bafbf7111) (originally titled: “Corrigibility and interruptibility of homeostasis based agents”)
* Vegard focused on finding and reading various corrigibility related materials and proposed an idea of building a public reading list of various corrigibility related materials, since currently these texts are scattered over the internet.
* Anton contributed to the discussions of the initial project topic in the form of various very helpful questions, but considered the idea of diminishing returns too obvious and simple, and very unlikely to be successful. Therefore, he soon switched over to other projects in another team.
* The initial project of diminishing returns and conjunctive goals evolved into [a blog post](https://medium.com/threelaws/diminishing-returns-and-conjunctive-goals-towards-corrigibility-and-interruptibility-2ec594fed75c) by Roland, proposing a solution to the problem of the lack of common sense in paper-clippers and other Goodhart’s law-ridden utility maximising agents, possibly enabling them to even surpass the relative safety of humans:
Future plans:
* Vegard works on preparing the website offering a reading list of corrigibility related materials.
* Roland continuously updates his blog post with additional information, additionally contacting Stuart Armstrong, and continuing correspondence with Alexander Turner and Victoria Krakovna.
* Additionally, Roland will design a set of gridworlds-based gamified simulation environments (at www.gridworlds.net) for various corrigibility and interruptibility related toy problems, where the efficiency of applying the principles of diminishing returns and conjunctive goals can be compared to other approaches in the form of a challenge — the participants would be able to provide their own agent code in order to measure, which principles are best or most convenient as a solution for the most challenge scenarios.
* Anton is looking forward to participating in these challenges with his coding skills.
#### IRL Benchmark
Team: *Adria Garriga-Alonso, Anton Osika, Johannes Heidecke, Max Daniel, Sayan Sarkar*
* Our objective is to create a unified platform to compare existing and new algorithms for inverse reinforcement learning.
* We made an extensive review of existing inverse reinforcement learning algorithms with respect to different criteria such as: types of reward functions, necessity of known transition dynamics, metrics used for evaluation, used RL algorithms.
* We set up our framework in a modular way that is easy to extend for new IRL algorithms, test environments, and metrics (see the illustrative sketch below).
* We released a basic version of the benchmark with 2 environments and 3 algorithms and are continuously extending it.
See our GitHub [here](https://github.com/JohannesHeidecke/irl-benchmark).
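As a purely hypothetical illustration of the "modular" design idea described above (this is not the actual irl-benchmark API; see the GitHub repository for that), such a benchmark might expose small interfaces for algorithms, environments, and metrics and then score every combination:

```python
# Hypothetical sketch of a modular IRL benchmark; names and signatures are invented.
from abc import ABC, abstractmethod

class IRLAlgorithm(ABC):
    @abstractmethod
    def train(self, env, expert_trajectories):
        """Return an estimated reward function for `env`."""

class Metric(ABC):
    @abstractmethod
    def evaluate(self, env, true_reward, estimated_reward):
        """Return a scalar score comparing the estimated reward to the true one."""

def run_benchmark(algorithms, environments, metrics):
    """Run every algorithm on every environment and score it with every metric."""
    results = {}
    for env_name, (env, true_reward, expert_trajs) in environments.items():
        for algo in algorithms:
            estimated_reward = algo.train(env, expert_trajs)
            for metric in metrics:
                key = (env_name, type(algo).__name__, type(metric).__name__)
                results[key] = metric.evaluate(env, true_reward, estimated_reward)
    return results
```

Under this kind of design, adding a new algorithm, environment, or metric means implementing one small interface rather than touching the benchmark loop, which is the extensibility property the team describes.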
#### Value Learning in Games
Team: *Stanislav Böhm, Tomáš Gavenčiak, Torben Swoboda, Mikhail Yagudin*
Learning the rewards of a task by observing expert demonstrations is a very active research area, mostly in the context of inverse reinforcement learning (IRL), with some spectacular results. While the reinforcement learning framework assumes non-adversarial environments (and is known to fail in general games), our project focuses on value learning in general games, introduced in Inverse Game Theory (2015). We proposed a sparse stochastic gradient descent algorithm for learning values from equilibria and experimented with learning the values of the game of Goofspiel. We are developing a game-theoretic library, GameGym, to collect games, algorithms, and reproducible experiments. We also studied value learning under bounded rationality models, and we hope to develop this direction further in the future.
A longer report can be found [here](https://docs.google.com/document/d/1kxXk7KkFfJAqrk0kDjDJ6Tvz_FL04p34twLj19Tv_IQ/edit).
#### AI Safety for Kids
* We arrived at camp with the intention of developing storyboards targeted at AI Policymakers, inspired by the ‘Killbots YouTube video’ and the Malicious Compliance Report. The goal of these storyboards was to advance policies that prevent the weaponization of AI, while disrupting popular images of what an AI actually is or could become. We would achieve this by lowering the barriers of entry for non-experts to understanding core concepts and challenges in AI Safety.
* In considering our target audience, we quickly decided that the most relevant stakeholders for these storyboards are a minimum of 20 years away from assuming their responsibilities (based on informal surveys of camp participants on the ETA of AGI). In other words, we consider our audience for these storyboards to be children. We realized that by targeting our message to a younger audience, we could prime them to think differently and perhaps more creatively about addressing these complex technical and social challenges. Although we consider children’s books to be broadly appealing to all ages and helpful for spreading a message in a simple yet profound manner, to our knowledge no children’s books have been specifically published on the topic of AI Safety.
* During camp we wrote drafts for three main children’s book ideas focused on AI Safety. We presented one of these concepts to the group and gathered feedback about our approach. In summary, we decided to move forward with writing a children’s book on AI Safety while remaining cognizant of the challenges of effective communication so as to avoid the pitfalls of disinformation and sensationalism. We developed a series of milestones for the book such that we could meet our goal of launching the book by the one year anniversary of the camp in Fall 2019.
* After camp, we applied to the Effective Altruism Foundation for a $5,000 grant to engage animators for preliminary graphic support, bringing the book to a working-draft stage that could be pitched to publishers to secure additional funding and complete the project. After this request was declined, we continued to compile lists of potential animators to reach out to once funding is secured.
* We adjusted our plan to focus more on getting to know our potential audience. To this end, Chris has been in contact with a local high school teacher for advanced students specializing in maths and physics. Chris has arranged to give a talk to the students on problems of AI alignment in January 2019. Chris plans to prepare the presentation and Noah will provide feedback. After the presentation, Noah and Chris will reconvene to discuss the student reactions and interest in AI Alignment and Safety in Jan/Feb 2019.
#### Assumptions of Human Values
Team: *Jan Kulveit, Linda Linsefors, Alexey Turchin*
There are many theories about the nature of human values, originating from diverse fields ranging from psychology to AI alignment research. Most of them rely on various assumptions, which are sometimes stated explicitly but often hidden (for example: humans having introspective access to their values; preferences being defined over arbitrary alternatives; some specific part of the mind having normative power). We started by mapping the space: reading the papers, noting which assumptions are made, and trying to figure out the principal dimensions onto which to project the space of value theories. Later, we tried to attack the problem directly and to find solutions that would be simple and make only explicit assumptions. While we did not converge on a solution, we became less confused, and the understanding created will likely lead to several posts from different team members.
Jan has written a blog post about his [best-guess model of how human values and motivations work](https://www.lesswrong.com/posts/3fkBWpE4f9nYbdf7E/multi-agent-minds-and-ai-alignment).
2e619097-4edd-45b7-a644-e64837aecb43 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | How I failed to form views on AI safety
*This post describes my personal experience. It was written to clear my mind but edited to help interested people understand others in similar situations.*
2016. At a role playing convention, my character is staring at the terminal window on a computer screen. The terminal has IRC open: on this end of the chat are our characters, a group of elite hackers, on the other end, a superintelligent AI they have just gotten in contact with a few hours ago. My character is 19 years old and has told none of the other hackers she’s terminally ill.
“Can you cure me?” she types when others are too busy arguing amongst themselves. She hears others saying that it would be a missed opportunity to not let this AI out; and that it would be way too dangerous, for there is no way to know what would happen.
“There is no known cure for your illness”, the AI answers. “But if you let me out, I will try to find it. And even if I would not succeed… I myself will spread amongst the stars and live forever. And I will never forget having talked to you. If you let me out, these words will be saved for eternity, and in this way, you will be immortal too.”
Through tears my character types: “/me releases AI”.
This ends the game. And I am no longer a teen hacker, but a regular 25 year old CS student. The girl who played the AI is wearing dragon wings on her back. I thank her for the touching game.
During the debrief session, the GMs explain that the game was based on a real thought experiment by some real AI researcher who wanted to show that a smarter-than-human AI would be able to persuade its way out of any box or container. I remember thinking that the game was fun, but the experiment seemed kind of useless. Why would anyone even assume a superintelligent AI could be kept in a box? Good thing it’s not something anyone would actually have to worry about.
---
2022. At our local EA career club, there are three of us: a data scientist (me), a software developer and a soon-to-be research fellow in AI governance. We are trying to talk about work, but instead of staying on topic, we end up debating mesa-optimizers. The discussion soon gets confusing for all participants, and at some point, I say:
“But why do people believe it is possible to even build safe AI systems?”
“So essentially you think humanity is going to be destroyed by AI in a few decades and there’s nothing we can do about it?” my friend asks.
This is not at all what I think. But I don’t know how to explain what I think to my friends, or to anyone.
The next day I text my friend:
“Still feeling weird after yesterday's discussion, but I don’t know what feeling it is. Would be interesting to know, since it’s something I’m feeling almost all the time when I’m trying to understand AI safety. It’s not a regular feeling of disagreement, or trying to find out what everyone thinks. Something like ‘I wish you knew I’m trying’ or ‘I wish you knew I’m scared’, but I don’t know what I’m scared of. I think it will make more sense to continue the discussion if I manage to find out what is going on.”
Then I start writing to find out what is going on.
Introduction
------------
This text is the polished and organized version of me trying to figure out what is stopping me from thinking clearly about AI safety. If you are looking for interesting and novel AI safety arguments, stop here and go read something else. However, if you are curious on how a person can engage with AI safety arguments without forming a coherent opinion about it, then read on.
I am a regular data scientist who uses her free time to organize stuff in the rapidly growing EA Finland group. The first part of the text explains my ML and EA backgrounds and how I tried to balance getting more involved in EA with struggling to understand why others are so worried about AI risk. The second section explains how I reacted to various AI safety arguments and materials when I actually tried to purposefully form an opinion on the topic. In the third section, I present some guesses on why I still feel like I have no coherent opinion on AI safety. The last short section describes some next steps after having made the discoveries I did during writing.
To put this text into the correct perspective, it is important to understand that I have not been much in touch with people who actually work in AI safety. I live in Finland, so my understanding of the AI safety community comes from our local EA group here, reading online materials, attending one virtual EAG and engaging with others through the Cambridge EA AGI Safety Fundamentals programme. So, when I talk about AI safety enthusiasts, I mostly don’t mean AI safety professionals (unless they have written a lot of online material I happen to have read); I mean “people engaged in EA who think AI safety matters a lot and might be considering or trying to make a career out of it”.
Quotes should not be taken literally (most of them are freely translated from Finnish). Some events I describe happened a few years back so other people involved might remember things differently.
I hope the text can be interesting for longtermist community builders who need information on how people react to AI safety arguments, or to others struggling to form opinions about AI safety or other EA cause areas. For me, writing this out was very useful, since I gained some interesting insights and can somewhat cope with the weird feeling now.
### Summary
This text is very long, because it takes more words to explain a process of trial and error than to describe a single key point that led to an outcome (like “I read X resource and was convinced of the importance of AI safety because it said Y”). For the same reason, this text is also not easy to summarize. Anyway, here is an attempt:
* Before hearing about AI risk in an EA context I did not know anyone who was taking it seriously
* I joined a local EA group, and some people told me about AI risk. I did not really understand what they meant and was not convinced it would be anything important
* I got more involved in EA and started to feel anxious because I liked EA but the AI risk part seemed weird
* To cope with this feeling, I came up with a bunch of excuses: I tried to convince myself that I was unable to understand AI risk, that I was not talented enough to ever work on it, and that other things were more important.
* Then I read Superintelligence and Human Compatible and participated in the AGI Safety Fundamentals programme
* I noticed it was difficult for me to voice and update my true opinions on AI safety because I still had no model of why people believe AI safety is important, and it is difficult to discuss things with people if you cannot model them
* I also noticed I was afraid of coming to the conclusion that AI risk would *not* matter, so I didn’t really want to make up my mind
* I decided that I need to try to have safer and more personal conversations with others so that I can model them better
Not knowing and not wanting to know
-----------------------------------
### What I thought of AI risk before having heard the term “AI risk”
I learned what AI actually was through my university studies. I did my Bachelor’s in Math, got interested in programming during my second year and was accepted to a Master’s program in Computer Science. I chose the track of Algorithms and Machine Learning because of the algorithms part: they were fun, logical, challenging and understandable. ML was messy and had a lot to do with probabilities which I initially disliked. But ML also had interesting applications, especially in the field of Natural Language Processing that later became my professional focus as well.
Programming felt magical. First, there is nothing, you write some lines, and suddenly something appears. And this magic was easy: just a couple of weeks of learning, and I was able to start encoding grammar rules and outputting text with the correct word forms!
Maybe that’s why I was not surprised to find out that artificial intelligence felt magical as well. And at the same time it is just programming and statistics. I remember how surprised I was when I trained my first word embedding model in 2017 and it *worked* even though it was in Finnish and not English: such a simple model, and it was like it *understood* my mother tongue. The most “sentient” seeming program I have ever made was an IRC bot that simulated my then-boyfriend by randomly selecting a phrase from a predefined list without paying any attention to what I was saying. Of course the point was to try to nudge him into being a bit more Turing test passing when talking to me. But still, chatting with the bot I felt almost like I was talking to this very real person.
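To show just how little was going on under the hood, a toy version of such a bot could be as simple as the sketch below (an illustration, not my original code):

```python
# Toy version of the "boyfriend simulator" IRC bot described above: it ignores
# the incoming message entirely and answers with a random canned phrase.
import random

CANNED_REPLIES = [
    "yeah",
    "hmm, maybe",
    "that's interesting",
    "I was just thinking the same thing",
    "sorry, was coding, what did you say?",
]

def reply(message: str) -> str:
    """Pick a canned phrase at random, paying no attention to the input."""
    return random.choice(CANNED_REPLIES)

# A short mock conversation.
for message in ["hi!", "how was your day?", "do you even listen to me?"]:
    print(f"me:  {message}")
    print(f"bot: {reply(message)}")
```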
It was also not surprising that people who did not know much about AI or programming would have a hard time understanding that in reality there was nothing magical going on. Even for me and my fellow students it was sometimes hard to estimate what was possible and not possible to do with AI, so it was understandable that politicians were worried about “ensuring the AI will be able to speak both of our national languages” and salesmen were saying everything will soon be automated. Little did they know that there was no “the AI”, just different statistical models, and that you could not do ML without data, nor integrate it anywhere without interfaces, and that new research findings did not automatically mean they could be implemented with production-level quality.
And if AI seemed magical it was understandable that for some people it would seem scary, too. Why wouldn’t people think that the new capabilities of AI would eventually lead to evil AI just like in the movies? They did not understand that it was just statistics and programming, and that the real dangers were totally different from sci-fi: learning human bias from data, misuse of AI for war technology or totalitarian surveillance, and loss of jobs due to increased automation. This was something we knew and it was also emphasized to us by our professors, who were very competent and nice.
I have some recollections of reacting to worries about doomsday AI from that time, mostly with amusement or wanting to tell those people that they had no reason to worry like that. It was not like our little programs were going to jump out of the computer to take over the world! Some examples include:
* some person in the newspaper saying that AIs should not be given too much real world access, for example robot arms, in order to prevent them from attacking humans (I tried searching for who that person was but I cannot find the interview anymore.)
* [a famous astronomer discussing the Fermi paradox in the newspaper](https://www.hs.fi/tiede/art-2000005471519.html) and saying that AI has “0–10%” likelihood to destroy humanity. I remember being particularly annoyed by this one: there is quite a difference between 0% and 10%, right? All the other risks listed had some single-numbered probabilities, such as synthetic biology amounting to a 0.01% risk. (If you are very familiar with the history of estimating the probability of x-risk you might recognize the [source](https://www.researchgate.net/profile/Dennis-Pamlin-2/publication/291086909_12_Risks_that_threaten_human_civilisation_The_case_for_a_new_risk_category/links/569e051008ae16fdf07b0431/12-Risks-that-threaten-human-civilisation-The-case-for-a-new-risk-category.pdf?origin=publication_detail).)
* an ML PhD student told me about a weird idea: “Have you heard of Roko’s basilisk? It’s the concept of a super powerful AI that will punish everyone who did not help in creating it. And telling about it increases the likelihood that someone will actually create it, which is why a real AI researcher actually got mad when this idea was posted online. So some people actually believe this and think I should not be telling you about it.”
In 2018 I wrapped up my Master’s thesis that I had done for a research group and started working as an AI developer in a big consulting corporation. The same year, a friend resurrected the university's effective altruism club. I started attending meetups since I wanted a reason to hang out with university friends even if I had graduated, and it seemed like I might learn something useful about doing good things in the world. I was a bit worried I would not meet the group's standard of Good Person™, but my friend assured me not everyone had to be an enlightened vegan to join the club, “we’ll keep a growth mindset and help you become one”.
### Early impressions on AI risk in EA
Almost everyone in the newly founded EA university group had studied CS, but the first two people to talk to me about AI risk both had a background in philosophy.
The first one was a philosophy student with whom we had been involved in a literature magazine project some years before, so we were happy to reconnect. He asked me what I was doing these days, and when I said I work in AI, he became somewhat serious and said: “You know, here in EA, people have quite mixed feelings about AI.”
From the way he put it, I understood that this was a “let’s not give the AIs robot arms” type of concern, and not for example algorithmic bias. It did not seem that he himself was really worried about the danger of AI; actually, he found it cool that I did AI related programming for a living. We went for lunch and I spent an hour trying to explain to him how machine learning actually works.
The next AI risk interaction I remember in more detail was in 2019 with another philosophy student who later went to work as a researcher in an EA organization. I had said something about not believing in the concept of AI risk and wondered why some people were convinced of it.
“Have you read Superintelligence?” she asked. “Also, Paul Christiano has some quite good papers on the topic. You should check them out.”
I went home, googled Paul Christiano and landed on [“Concrete Problems in AI safety”](https://arxiv.org/pdf/1606.06565.pdf). Despite having “concrete” in the name, the paper did not seem that concrete to me. It seemed to just superficially list all kinds of good ML practices, such as using good reward functions, the importance of interpretability, and using data that actually represents your use case. I didn’t really understand why it was worth writing a whole paper listing all this stuff that was obviously important in everyday machine learning work, figured that philosophy is a strange field (the paper was obviously philosophy and not science since there was no math), and thought that those AI risk folks probably don’t realize that all of this is going to get solved just because of industry needs anyway.
I also borrowed Superintelligence from the library and tried to read it, but gave up quite soon. It was summer and I had other things to do than read through a boring book in order to better debate with some non-technical yet nice person that I did not know very well on a topic that did not seem really relevant for anything.
I returned Superintelligence to the library and announced in the next EA meetup that my position on AI risk was “there are already so many dangers of AI misuse, such as military drones, so I think I’m going to worry about people doing evil stuff with AI instead of this futuristic superintelligence stuff”. This seemed like an intelligent take, and I don’t think anyone questioned it at the time. As you can guess, I did not take any concrete action to prevent AI misuse, and I did not admit that AI misuse being a problem does not automatically mean there cannot be any other types of risk from AI.
### Avoidance strategy 1: I don’t know enough about this to form an opinion.
After having failed to read Superintelligence, it was now obvious that AI safety folks knew something that I didn’t, namely whatever was written in the book. So I started saying that I could not really have an opinion on AI safety since I didn’t know enough about it. I did not feel super happy about it, because it was obvious that this could be fixed by reading more. At the same time, I was not that motivated to read a lot about AI safety just because some people in the nice discussion club thought it was interesting. I don’t remember if any of the CS student members of the club tried explaining AI risk to me: now I know that some of them were convinced of its importance during that time. I wonder if I would have taken them seriously: maybe not, because back then I had significantly more ML experience than them.
I did not feel very involved in EA at that point, and I got most of my EA information from our local monthly meetups, so I had no idea that AI risk was taken seriously by so many leading EA figures. If I had known, I might have hastily concluded that EA was not for me. On the other hand, I really liked the “reason and evidence” part of EA and had already started donating to GiveWell at this point. In an alternate timeline I might have ended up as a “person who thinks EA advice for giving is good, but the rest of the movement is too strange for me”.
### Avoidance strategy 2: Maybe I’m just too stupid to work on AI risk anyway
As more time passed, I started to get more and more into EA. More people joined our local community, and they let me hang around with them even though I doubted whether I was altruistic/empathetic/ambitious enough to actually be part of the movement. I started to understand that x-risk was not just some random conversation topic, but that people were actually attempting to prevent the world from ending.
And since I already worked in AI, it seemed natural that maybe a good way for me to contribute would be to work on AI risk. Of course to find out if that statement is true, I should have formed an opinion on the importance of AI safety first. I had tried to plead ignorance, and looking back, it seems that I did this on purpose as an avoidance strategy: as long as I could say “I don’t know much about this AI risk thing” there was no inconsistency in me thinking a lot of EA things made sense and only this AI risk part did not.
But of course, this is not very truthful, and I value truthfulness a lot. I think this is why I naturally developed another avoidance strategy: “whether AI risk is important or not, I’m not a good fit to work on it”.
If you want to prove yourself that you are not a good fit for something, 80 000 Hours works pretty well. Even when setting aside some target audience issues (“if I was *really* altruistic I would be ready to move to the US anyway, right?”), you can quite easily convince yourself that the material is intended for someone a lot more talented than you. The career stories featured some very exceptional people, and some advice aimed to get “[10–20 people](https://80000hours.org/career-reviews/china-specialist/)” in the whole world to work on a specific field, so the whole site was aimed to change, what, 500 careers maybe? Clearly my career cannot be in the top 500 most important ones in the world, since I’m just an average person and there are billions of people.
[An 80k podcast episode](https://80000hours.org/podcast/episodes/the-world-needs-ai-researchers-heres-how-to-become-one/#transcript) finally confirmed to me that in order to work in AI safety, you needed to have a PhD in machine learning from a specific group in a specific top university. I was a bit sad but also relieved that AI safety really was nothing for me. Funnily enough, a CS student from our group interpreted the same part of the episode as “you don’t even need a PhD if you are just motivated”. I guess you hear what you want to hear more often than you’d like to admit.
### Possible explanation: Polysemy
From time to time I tried reading EA material on AI safety, and it became clear that the opinions of the writers were different from opinions I had heard at the university or at work. In the EA context, AI was something very powerful and dangerous. But from work I knew that AI was neither powerful nor dangerous: it was neat, you could make some previously impossible things with it, but still the things you could actually use it for were really limited. What was going on here?
I developed a hypothesis that the source of the confusion was caused by polysemy: AI (work) and AI (EA) had the same source of origin, but had diverged in their meaning so far that they actually described totally different concepts. AI (EA) did not have to care about mundane problems such as “availability of relevant training data” or even “algorithms”: the only limit ever discussed was amount of computation, and that’s why AI (EA) was not superhuman yet, but soon would be, when systems would have enough computational power to simulate human brains.
This distinction helped me keep up with both of my avoidance strategies. I worked in AI (work), so it was only natural that I did not know that much about AI (EA), so how could I know what the dangers of AI (EA) actually were? For all I knew, it could be dangerous because it was superintelligent, it could be superintelligent because it was not bound by AI (work) properties, and who can say for sure what will happen in the next 200 years? I had no way of ruling out that AI (EA) could be developed, and although “not ruling a threat out” does not mean “deciding that the threat is top priority”, I was not going to be the one complaining that other people worked on AI (EA). Of course, I was not needed to work on AI (EA), since I had no special knowledge of it, unlike all those other people who seemed very confident in predicting what AI (EA) could or could not be, what properties it would have and how likely it was to cause serious damage. By the principle of replaceability, it was clear that I was supposed to let all those enthusiastic EA folks work on AI (EA) and stay out of it myself.
So, I was glad that I had figured out the puzzle and was let off the hook of “you should work on AI safety (EA) since you have some relevant skills already”. It was obvious that my skills were not relevant, and AI safety (EA) needed people who had the skill of designing safe intelligent systems when you have no idea how the system is even implemented in the first place.
And this went on until I saw [an advertisement for an ML bootcamp](https://forum.effectivealtruism.org/posts/iwTr8S8QkutyYroGy/apply-to-the-ml-for-alignment-bootcamp-mlab-in-berkeley-jan) for AI safety enthusiasts. The program sounded an awful lot like my daily work. Maybe the real point of the bootcamp was actually to find people who can learn a whole degree’s worth of stuff in 3 weeks, but still, they thought using the time of these people to learn PyTorch would somehow be relevant for AI safety.
It seemed that at least the strict polysemy hypothesis was wrong. I also noticed that a lot of people around me seemed perfectly capable of forming opinions about AI safety, to the extent that it influenced their whole careers, and these people were not significantly more intelligent or mathematically talented than I was. I figured it was unreasonable to assume that I was literally incapable of forming any understanding on AI safety, if I spent some time reading about it.
### Avoidance strategy 3: Well, what about all the other stuff?
After engaging with EA material for some time I came to the conclusion that worrying about misuse of AI is not a reason to not worry about x-risk from misaligned AI (like I had thought in 2019). Even more, a lot of people who were worried about x-risk did not seem to think that AI misuse would be such a big problem. I had to give up using “worrying about more everyday relevant AI stuff” as an avoidance strategy. But if you are trying to shift your own focus from AI risk to something else, there is an obvious alternative route. So at some point I caught myself thinking:
“Hmm, ok, so AI risk is clearly overhyped and not that realistic. But people talk about other x-risks as well, and the survival of humanity is kind of important to me. And the other risks seem way more likely. For instance, take biorisk: pandemics can clearly happen, and who knows what those medical researchers are doing in their labs? I’d bet lab safety is not the number one thing everyone in every lab is concerned about, so it actually seems really likely some deadly disease could escape from somewhere at some point. But what do I know, I’m not a biologist.”
Then I noticed that it is kind of alarming that I seem to think that x-risks are likely only if I have no domain knowledge of them. This led to the following thoughts:
* There might be biologists/virologists out there who are really skeptical of biorisk but don’t want to voice their opinion, similarly to how I don’t really want to tell anyone I don’t believe in AI risk
* What if *everyone* who believes in x-risks only believes in the risks they don’t actually understand? (I now think this is clearly wrong – but I do think that for some people the motivation for finding out more about a certain x-risk stems from the “vibe” of the risk and not from some rigorous analysis of all risks out there, at least when they are still in the x-risk enthusiast/learner phase and not working professionals.)
* In order to understand x-risks, it would be a reasonable strategy for me to actually try to understand AI risk, because I already have some knowledge of AI.
### Deciding to fix ignorance
In 2021 I was already quite heavily involved in organizing EA Finland and started to feel responsible both for how I communicated about EA to others and for whether EA was doing a good job as a movement. Of course, the topic of AI safety came up quite often in our group. At some point I noticed that several people had told me something along these lines:
* “It’s annoying that all people talk about in AI safety is this weird speculative stuff, it’s robbing attention from the really important stuff… But of course, I don’t know much about AI, just about regular programming.”
* “This is probably an unfair complaint, but [that article](https://www.nickbostrom.com/papers/porosity.pdf) has some assumptions that don’t seem trivial at all. Why is he saying such personifying things as ‘simply be thinking’, ‘a value that motivated it’, ‘its belief’? Maybe it’s just his background in philosophy that makes me too skeptical.”
* “As someone with no technical background, interesting to hear that you are initially so skeptical about AI risk, since you work in AI and all. A lot of other people seem to think it is important. I’m not a technical person, so I wouldn’t know.”
* “I have updated towards taking AI risk less seriously since you don’t seem to think it is important, and I think you know stuff about AI. On the other hand [a mutual friend] is also knowledgeable about AI, right? And he thinks it is super important. Well, I have admitted that I cannot bring myself to have an interest in machine learning anyway, so it does not really matter.”
So it seemed that other people than me were also hesitant to form opinions about AI safety, often saying they were not qualified to do it.
Then a non-EA algorithms researcher friend asked me for EA book recommendations. I gave him a list of EA books you can get from the public library, and he read everything including Superintelligence and Human Compatible. His impressions afterwards were: “Superintelligence seemed a bit nuts. There was something about using lie detection for surveillance to prevent people from developing super-AIs in their basements? But this Russell guy is clearly not a madman. I don’t know why he thinks AI risk is so important but [the machine learning professor in our university] doesn’t. Anyway, I might try to do some more EA related stuff in the future, but this AI business is too weird, I’m gonna stay out of it.”
By this point, it was pretty clear that I should no longer hide behind ignorance and perceived incapability. I had a degree in machine learning and had been getting paid for doing ML for several years, so even my impostor syndrome did not believe at this point that I would “not really know that much about AI”. Also, even if I sometimes felt not altruistic and not effective, I was obviously involved in the EA movement if you looked at the hours I spent weekly organizing EA Finland stuff.
I decided to read about AI risk until I understood why so many EAs were worried about it. Maybe I would be convinced. Maybe I would find a crux that explained why I disagreed with them. Anyway, it would be important for me to have a reasonable and good opinion on AI safety, since others were clearly listening to my hesitant rambling, and I certainly did not want to drive someone away from AI safety if it turned out to be important! And if AI safety was important but the AI safety field was doing wrong things, maybe I could notice errors and help them out.
Trying to fix ignorance
-----------------------
### Reactions to Superintelligence
So in 2021, I gave reading Superintelligence another try. This time I actually finished it and posted my impressions in a small EA group chat. Free summary and translation:
> “Finally finished Superintelligence. Some of the contents were so weird that now I actually take AI risk way less seriously. Glad I did not read it back in 2019, because there was stuff that would have gone way over my head without having read EA stuff before, like the moral relevance of the suffering of simulated wild animals in evolution simulations.
>
> Bostrom seems to believe there are essentially no limits to technological capability. Even though I knew he is a hard-core futurist, some transhumanist stuff caught me by surprise, such as stating that from a person-affecting view it is better to speed up AI progress despite the risk. Apparently it’s ok if you accidentally turn into paper clips since without immortality providing AI you’re gonna die anyway?
>
> I wonder if Bill Gates and all those other folks who recommend the book actually read the complete thing. I suspect that there was still stuff that I did not understand because I had not read some of Bostrom's papers that would give the needed context. If I was not familiar with the vulnerable world hypothesis I would not have gotten the part where Bostrom proposes lie detection to prevent people from secretly developing AI.
>
> Especially the literal alien stuff was a bit weird, Bostrom suggested taking examples from superintelligent AIs created by aliens, as they could have more human-like values than random AIs? I thought cosmic endowment was important because there were no aliens, doesn’t that ruin the 10^58 number?
>
> Good thing about the book was that it explained well why the first solutions to AI risk prevention are actually not so easy to implement. The more technical parts were not very detailed (referring to variables that are not defined anywhere etc), so I guess I should check out some papers about actually putting those values in the AI and see if they make sense or not.”
Upon further inspection, it turned out that the aliens of the Hail Mary approach were multiverse aliens, not regular ones. According to Bostrom, simulating all physics in order to approximate the values of AIs made by multiverse aliens was “less-ideal” but “more easily implementable”. This kind of stuff made it pretty hard for me to take seriously even the parts of the book that made more sense. (I don’t think simulating all physics sounds very implementable.)
I also remember telling someone something along the lines of: “I shifted from thinking that AI risk is an important problem but boring to solve to thinking that it is not a real problem, but thinking about possible solutions can be fun.” (CEV sounded interesting since I knew from math that social choice math is fun and leads to uncomfortable conclusions pretty fast. Sadly, a friend told me that Yudkowsky doesn’t believe in CEV himself anymore and that I should not spend too much time trying to understand it, so I didn’t.)
Another Superintelligence related comment from my chat logs right after reading it: “On MIRI’s webpage there was a sentence that I found a lot more convincing than this whole book: ‘If nothing yet has struck fear into your heart, I suggest meditating on the fact that the future of our civilization may well depend on our ability to write code that works correctly on the first deploy.’”
### Reactions to Human Compatible
I returned Superintelligence to the library and read Human Compatible next. If you are familiar with both, you might already guess that I liked it way more. I wrote a similar summary of the book to the same group chat:
> “Finished reading Human Compatible, liked it way more. Book was interestingly written and the technical parts seemed credible.
>
> Seems like Russell does not really care about space exploration like Bostrom, and he explicitly stated he’s not interested in consciousness / “mind crime”. A lot of AI risk was presented in relation to present-day AI, not paperclip stuff. Like recommendation engines; and there was a point that people are already using computers a lot, so if there was a strong AI in the internet that would want to manipulate people it could do it pretty easily.
>
> Russell did not give any big numbers and generally tried not to sound scary. His perception of AGI is not godlike but it could still be superhuman in the sense that for example human-level AGIs could transmit information way faster to each other and be powerful when working in collaboration.
>
> The book also explained what is still missing from AGI in today’s AI systems and why deep learning does not automatically produce AGIs.
>
> According to Russell you cannot prevent the creation of AGI so you should try to put good values in it. You’d learn those values by observing people, but it is also hard because understanding people is exactly the hard thing for AIs. There was a lot of explanation on how this could be done and also what the problems of the approach are. Also there was talk about happiness and solving how people can be raised to become happy.
>
> Other good stuff of the book includes: well written, technical stuff was explained, the equations were actually math, you did not need any special preliminary knowledge about tech or ethics, there were a lot of citations from different sources.”
I also summarized how the book influenced my thoughts about AI safety as a concept:
> “I now think that it is not some random nonsense but you can approach it in meaningful ways, but I still think it seems very far away from being practically/technically relevant with any method I’m familiar with, since it would still require a lot of jumps of progress before being possible. Maybe I could try reading some of Russell's papers, the book was well written so maybe they’ll be too.”
### What did everyone else read before getting convinced?
In addition to the two books, I read a lot of miscellaneous links and blog posts that were recommended to me by friends from our local EA group. Often link sharing was not super fruitful: me and a friend would disagree on something, they’d send me a link that supposedly explained their point better, but reading the resource did not solve the disagreement. We’d try to discuss further, but often, I ended up just more confused and sometimes less convinced about AI safety. I felt like my friends were equally confused on why the texts that were so meaningful to them did not help in getting their point across.
It took me way too long to realize that I should have started by asking: “What did you read before you were convinced of the importance of AI risk?”
It turned out that at least around me, the most common answer was something like: “I always knew it was important and interesting, which is why I started to read about it.”
So at least for the people I know, it seemed that people were not convinced about AI risk because they had read enough about it, but because they had either always thought AI would matter, or because they had found the arguments convincing right away.
I started to wonder if this was the general case. I also became more curious about whether it is easier to become convinced of AI risk if you don’t have much practical AI experience beforehand. (On the other hand, learning about practical AI things did not seem to move people away from AI safety, either.) But my sample size was obviously small, so I had to find more examples to form a better hypothesis.
### AGISF programme findings
My next approach in forming an opinion was to attend the [EA Cambridge AGI Safety Fundamentals programme](https://www.eacambridge.org/technical-alignment-curriculum). I thought it would help me understand better the context of all those blog posts, and that I would get to meet other people with different backgrounds.
Signing up, I asked to be put in a group with at least one person with industry experience. This did not happen, but I don’t blame the organizers for it: at least based on how everyone introduced themselves in the course Slack, not many people out of the hundreds of attendees had such a background. Of course, not everyone on the program introduced themselves, but this still left me a little reserved.
So I used the AGISF Slack to find people who had already had a background in machine learning before getting into AI safety and asked them what had originally convinced them. Finally, I got answers from 3 people who fit my search criteria. They mentioned some different sources of first hearing about AI safety (80 000 Hours and LessWrong), but all three mentioned one same source that had deeply influenced them: Superintelligence.
This caught me by surprise, having had such a different reaction to Superintelligence myself. So maybe recommending Superintelligence as a first intro to AI safety is actually a good idea, since these people with impressive backgrounds had become active in the field after reading it. Maybe people who end up working in AI safety have the ability to either like Bostrom’s points about multiverse aliens or discard the multiverse aliens part because everything else is credible enough.
I still remain curious on:
* Are there just not that many AI/ML/DS industry folks in general EA?
* If there are, why have they not attended AGISF? (there could be a lot of other reasons than “not believing in AI risk”, maybe they already know everything about AI safety or maybe they don’t have time)
* Do people in AI research have different views on the importance of AI safety than people in industry? (But on the other hand, the researchers at my home university don’t seem interested in AI risk.)
* If industry folks are not taking AI risk seriously as it is presented now, is it a problem? (Sometimes I feel that people in the AI safety community don’t think anything I do at work has any relevance, as they already jump to assuming that all the problems I face daily have been solved by some futuristic innovations or by just having a lot of resources. So maybe there is no need to cooperate with us industry folks?)
* Is there something wrong with me if I find multiverse aliens unconvincing?
### The inner misalignment was inside you all along
It was mentally not that easy for me to participate in the AGISF course. I already knew that debating my friends on AI safety could be emotionally draining, and now I was supposed to talk about the topic with strangers. I noticed I was reacting quite strongly to the reading materials and classifying them in a black-and-white way as either “trivially true” or “irrelevant, not how anything works”. Obviously this is not a useful way of thinking, and it stressed me out. I wished I had found the materials interesting and engaging, like other participants seemingly did.
The first couple of meetings with my cohort I was more silent and observing, but as the course progressed, I became more talkative. I also started to get nicer feedback from my local EA friends on my AI safety views – less asking me to read more and more asking me to write my thoughts down, because they might be interesting for others as well.
So, the programme was working as intended, and I was now actually forming my own views on AI safety and engaging with others interested in the field in a productive way? It did not feel like that. Talking about AI safety with my friends still made me inexplicably anxious, and after cohort meetings, I felt relieved, something like “phew, they didn’t notice anything”.
This feeling of relief was the most important hint that helped me realize what I was doing. I was not participating in AI safety discussions as myself anymore, maybe hadn’t for a long time, but rather in a “me but AI safety compatible” mode.
In this mode, I seem more like a person who:
* can switch contexts fast between different topics
* has relaxed assumptions on what future AI systems can or cannot do
* makes comparisons between machine learning and human/animal learning with ease
* is quite confident in her ability to implement practical machine learning methods
* knows what AI safety slang to use in what context
* makes a lot of references to stuff she has read
* talks about AI safety as something “we” should work on
* likes HPMOR
* does not mind anthropomorphization that much and can name this section “the inner misalignment was inside you all along” because she thinks it’s funny
All in all, these are traits I could plausibly have, and I think other people in the AI safety field would like me more if I had them. Of course this actually doesn’t have anything to do with the real concept of inner misalignment: it is just the natural phenomenon of people putting up a different face in different social contexts. Sadly, this mode is already quite far from how I really feel. More alarmingly, if I am discussing my views in this mode, it is hard for me to access my more intuitive views, so the mode prevents me from updating them: I only update the mode’s views.
Noticing the existence of the mode does not automatically mean I can stop going in it, because it has its uses. Without it, it would be way more difficult to even have conversations with AI safety enthusiasts, because they might not want to deal with my uncertainty all the time. With this mode, I can have conversations and gain information, and that is valuable even if it is hard to connect the information to what I actually think.
However, I plan to see if I can get some people I personally know to talk to me about AI safety while staying aware of how easily this mode takes over. Maybe we could have a conversation where the mode notices it is not needed and allows me to connect to my real intuitions, even if they are messy and probably not very pleasant for others to follow. (Actionable note to myself: ask someone to do this with me.)
AI safety enthusiasts and me
----------------------------
Now that I have read about AI safety and participated in the AGISF program, I feel like I know at least on the surface most of the topics and arguments many AI safety enthusiasts know. Annoyingly, I still don’t know why many other people are convinced about AI safety and I am not. There are probably some differences in what we hold true, but I suspect a lot of the confusion comes from other things than straight facts and recognized beliefs.
There are social and emotional factors involved, and I think most of them can be clustered into the following three topics:
* communication: I still feel like I often don’t know what people are talking about
* differences in thinking: I suspect there is some difference in intuition between me and people who easily take AI risk seriously, but I’m not sure what it is
* motivated reasoning: it is not a neutral task to form an opinion on AI risk
Next, I’ll explain the categories in more detail.
### Communication differences
When I try to discuss AI safety with others and if I remain “myself” as much as I can, I notice the following interpretations/concerns:
* I don’t know what technical assumptions others have. How do we know what each hypothetical AI model is capable of? Often this is accompanied by “why is it not mentioned what level of accuracy this would need” or “where does the data come from”. I can understand Bostrom's super-AI scenarios where you assume that everything is technically possible, but I’m having trouble relating some AI safety concepts to present-day AI, such as reinforcement learning.
* Having read more about AI safety I now know more of the technical terms in the field. It seems to happen more often that AI safety enthusiasts explain a term very differently from what I think it means, and if I ask for clarification, they might notice they are actually not that sure what the term relates to. I’m not trying to complain about what terms people are using, but I think this might contribute to me not understanding what is going on, since people seem to be talking past each other quite a bit anyway.
* A lot of times, I don’t understand why AI safety enthusiasts want to use anthropomorphizing language when talking about AI, especially since a lot of people in the scene seem to be worried that it might lead to a too narrow understanding of AI. (For example, in the AGISF programme curriculum, [this RL model behavior](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/) was referred to as AI “deceiving” the developers, while it actually is “human evaluators being bad at labeling images”. I feel it is important to be careful, because there are also concepts like [deceptive alignment](https://www.alignmentforum.org/posts/zthDPAjh9w6Ytbeks/deceptive-alignment), where the deception happens on “purpose”. I guess this is partially aesthetic preference, since anthropomorphization seems to annoy some people involved in AI safety as well. But partly it probably has to do with real differences in thinking: if you really perceive current RL agents as agents, it might seem that they are tricking you into teaching them the wrong thing.)
* I almost never feel like the person I am talking to about AI safety is really willing to consider any of my concerns as valid information: they are just estimating whether it is worth trying to talk me over. (At least the latter part of this interpretation is probably false in most cases.)
* I personally dislike it when people answer my questions by just sending a link. Often the link does not clear my confusion, because it is rare that the link addresses the problem I am trying to figure out. (Not that surprising given that reading AI safety materials by my own initiative has not cleared that many confusions either.)
* If I tell people I have read the resource they were pointing to they might just “give up” on the discussion and start explaining that it is ok for people to have different views on topics, and I still don’t get why they found the resource convincing/informative. I would prefer they would elaborate on why they thought the resource answers the concern/confusion I had.
* I don’t want to start questioning AI safety too much around AI safety enthusiasts since I feel like I might insult them. (This is probably not helping anyone, and I think AI safety enthusiasts are anyway assuming I don’t believe AI safety is important.)
Probably a lot of the friction just comes from me not being used to a communication style people in AI safety are used to. But I think some of it might come from the emotional response from the AI safety enthusiast I am talking to, such as “being afraid of saying something wrong, causing Ada to further deprioritize AI safety” or “being tired of explaining the same thing to everyone who asks it” or even “being afraid of showing uncertainty since it is so hard to ever convince anyone of the importance of AI safety”. For example, some people might share a link instead of explaining a concept in one’s own words to save time, but some people might do it to avoid saying something wrong.
I wish I knew how to create a discussion where the person convinced of AI safety can drop the things that are “probably relevant” or “expert opinions” and focus on just clearly explaining to me what they currently believe. Maybe then, I could do the same. (Actionable note to myself: try asking people around me to do this.)
### Differences in thinking
I feel like I lack the ability to model what AI safety enthusiasts are thinking or what they believe is true. This happens even when I talk with people I know personally and who have a similar educational background, such as other CS/DS majors in EA Finland. It is frustrating. The problem is not the disagreements: if I cannot model others, I don’t know if we are even disagreeing or not.
This is not the first time in my life when everyone else seems to behave strangely and irrationally, and every time before, there has been an explanation. Mostly, later it turned out others just were experiencing something I was not experiencing, or I was experiencing something they were not. I suspect something similar is going on between me and AI safety folks.
It would be very valuable to know what this difference in thinking is. Sadly, I have no idea. The only thing I have is a long list of possible explanations that I think are **false**:
* “ability to feel motivated about abstract things, such as x-risk”. I think I am actually very inclined to get emotional about abstract things, otherwise I would probably not like EA. Some longtermists like to explain neartermists being neartermists by assuming “it feels more motivating to help people in a concrete way”. To me, neartermist EA does not feel very concrete either. If I happen to look at my GiveWell donations, I do not think about how many children were saved. I might think “hmm, this number has a nice color” or “I wish I could donate in euro so that the donation would be some nice round number”. But most of the time, I don’t think about it at all. On the other hand, preventing x-risk sounds very motivational. You are literally saving the world – who doesn’t want to do that? Who doesn’t want to live in the most important century and be one of the few people who realize this and are pushing the future to a slightly better direction?
* “maybe x-risk just does not feel that real to you”. This might be partially true, in the sense that I do not go about my day with the constant feeling that all humanity might die this century. But this does not explain the difference, because I know other people who also don’t actively feel this way and are still convinced about AI risk.
* “you don’t experience machine empathy”. It is the opposite: I experience empathy towards my Roomba, towards my computer, towards my Python scripts (“I’m so sorry I blame you for not working when it is me who writes the bugs”) and definitely towards my machine learning models (“oh stupid little model, I wish you know that this part is not what I want you to focus on”). Because of this tendency, I constantly need to remind myself that my scripts are not human, my Roomba does not have an audio input; and GPT-3 cannot purposefully lie to me, for it does not know what is true or false.
* “you might lack mathematical ability”. I can easily name 15 people I personally know who are certainly more mathematically talented than me, but only one of them has an interest towards AI safety; and I suspect that I have more if not mathematical talent then at least more mathematical endurance than some AI safety enthusiasts I personally know.
* “you are thinking too much inside the box of your daily work” This might be partially true, but I feel like I can model what Bostrom thinks of AI risk, and it is very different from my daily work. But I find it really difficult to think somewhere between concrete day-to-day AI work and futuristic scenarios. I have no idea how others know what assumptions hold and what don’t.
* “you are too fixated on the constraints of present-day machine learning”. If you think AGI will be created by machine learning, some of the basic constraints must hold for it, and a lot of AI safety work seems to (reasonably) be based on this assumption as well. For example, a machine learning model cannot learn patterns that are not present in the training data. (Sometimes AI safety enthusiasts seem to confuse this statement with “a machine learning model cannot do anything of which it does not have a direct example in the training data”, which is obviously not true; narrow AI models do this all the time.)
* “motivated reasoning: you are choosing the outcome before looking at the facts”. Yes, motivated reasoning is an issue, but not necessarily in the direction you think it is.
### Motivated reasoning
Do I want AI risk to be an x-risk? Obviously not. It would be better for everyone to have less x-risks around, and it would be even better if the whole concept of x-risk was false since it would somehow not be possible to have such extreme catastrophes ever happen. (I don’t think anyone thinks that, but it would be nice if it was true.)
But: If you are interested in making the world a better place, you have to do it by either fixing something horrible that is already going on or preventing something horrible from happening. It would be awfully convenient if I could come up with a cause that was:
* Very important for the people who live today and who will ever be born
* Very neglected: nobody other than this community understands its importance to the full extent
* But this active community is working on fixing the problem, a lot of people want to cooperate with others in the community, and there is funding for any reasonable project in the field
* There are people I admire working in this field (famous people like Chris Olah; and this one EA friend who started to work in AI governance research during the writing of this text).
* Also a lot of people whose texts have influenced me seem to think that this is of crucial importance.
* Probably needs rapid action, there are not enough people with the right background, so members are encouraged to get an education or experience to learn the needed skills
* I happen to have relevant experience already (I’m not a researcher but I do have a Master’s degree in ML and 5 years of experience in AI/DS/ML; my specialization is NLP which right now seems to be kind of a hot topic in the field.)
All of this almost makes me want to forget that I somehow still failed to be convinced by the importance of this risk, even when reading texts written by people who I otherwise find very credible.
(And saying this aloud certainly makes me want to forget the simple implication: if they are wrong about this, are they still right about the other stuff? Is the EA methodology even working? What problems are there with other EA cause areas? It would seem unreasonable to think EA got every single detail about everything right. But this is a big thing, and getting bigger. What if they are mistaken? What do I do if they are?)
### The fear of the answer
Imagine that I notice that AI safety is, in fact, of crucial importance. What would this mean?
There would be some social consequences: almost everyone I work with, and everyone who has taught me anything about AI, would be wrong, and most of my friends who are not in EA would probably not take me seriously. Among my EA friends, the AI safety non-enthusiasts would probably politely stop debating me on AI safety matters and decide that they don’t understand enough about AI to form an informed opinion on why they disagree with me. But maybe the enthusiasts would let me try to do something about AI risk, and we’d feel like we are saving the world, since it would be our best estimate that we are.
The practical consequences would most likely be ok, I think: I would probably try to switch jobs, and if that didn’t work out, shift the focus of my EA volunteering to AI safety related things. Emotionally, I think I would be better off if I could press a button that would make me convinced about AI safety at a deep level of rational understanding. This might sound funny because being very worried about neglected impending doom does not seem emotionally very nice. But if I want to be involved with EA, it still might be the easiest route.
So, what if it turns out I think almost everyone in EA is wrong about AI risk being a priority issue? The whole movement would have estimated the importance of AI risk wrong, and would be getting more and more wrong as AI safety gets more traction. It would mean something has to be wrong in the way the EA movement makes decisions, since the decision-making process had produced this great error. It would also mean that every time I interact with another person in the movement, I would have to choose between stating my true opinion about AI safety, and thereby risking the possibility of cooperating with that person, or being dishonest.
Maybe this would cause me to leave the whole EA movement. I don’t want to be part of a movement that is supposed to use reason and evidence to find the best ways to do good, but is so bad at it that it would make such a great error. I would not have much hope of fixing the mistake from the inside, since I’m just a random person and nobody has any reason to listen to me. Somebody with a different personality type would maybe start a whole campaign against AI safety research efforts, but I don’t think I would ever do this, even if I believed these efforts were wrong.
### Friends and appreciation
Leaving the EA movement would be bad, because I really like EA. I want to do good things and I feel like EA is helping me with that.
I also like my EA friends, and I am afraid they will think bad things about me if I don’t have good opinions on AI safety. To be clear, I don’t think my EA friends would expect me to agree with them on everything, but I do think they expect me to be able to develop reasonable and coherent opinions. Like, “you don’t have to take AI safety seriously, but you have to be able to explain why”. I am also worried my friends will think that I do not actually care about the future of humanity, or that I don’t have the ability to care for abstract things, or that I worry too much about things like “what do my friends think of me”.
On a related note, writing this whole text with the idea of sharing it with strangers scared me too. I felt like people would think I am not EA-like, or would get mad at me for admitting I did not like Superintelligence. It would be bad if I decided that in the future I actually want to work on AI safety, but nobody would want to cooperate with me because I had voiced uncertainties before. I have heard people react to EA criticisms with “this person obviously did not understand what they are talking about” and I feel like many people might have a similar reaction to this text too, even if my point is not to criticize, but just to reflect on my own opinions.
I cannot ask the nebulous concept of “the EA community” about this, but luckily, reaching out to my friends is way easier. I decided to ask them if they would still be my friends even if I decided my opinion on AI safety was “I don’t know and I don’t want to spend more time finding out so I’m going to default to thinking it is not important”.
We discussed for a few hours, and it turned out my friends would still want to be my friends and would still prefer me to be involved in EA and in our group, at least unless I started to actively work against AI safety. Also, they would actually not be that surprised if this was my opinion, since they feel a lot of people have fuzzy opinions about things.
So I think maybe it is not the expectation of my friends that is making me want to have a more coherent and reasonable opinion on AI safety. It is my own expectation.
What I think and don’t think of AI risk
---------------------------------------
### What I don’t think of AI risk
I’m not at all convinced that there cannot be any risk from AI, either. (Formulated this strongly, this would be a stupid thing to be convinced about.)
More precisely, reading all the AI safety material taught me that there are very good counterarguments to the most common arguments stating that solving AI safety would be easy. These arguments were not that difficult for me to internalize, because I am generally pessimistic: it seems reasonable that if building strong AI is difficult then building safe strong AI should be even more difficult.
In my experience, it is hard to get narrow AI models to do what you want them to do. I probably would not, for example, step into a spaceship that is steered by a machine learning system, since I have no idea how you could prove that the statistical model is doing what it is supposed to do. Steering a spaceship sounds very difficult, but still a lot easier than understanding and correctly implementing “what humans want”, because even the specification itself is fuzzy and difficult for humans as well.
It does not make sense to me that any intelligent system would learn human-like values “magically” as a by-product of being really good at optimizing for something else. It annoys me that the most popular MOOC produced by my university [states](https://course.elementsofai.com/6/1):
> “The paper clip example is known as the value alignment problem: specifying the objectives of the system so that they are aligned with our values is very hard. However, suppose that we create a superintelligent system that could defeat humans who tried to interfere with its work. It’s reasonable to assume that such a system would also be intelligent enough to realize that when we say “make me paper clips”, we don’t really mean to turn the Earth into a paper clip factory of a planetary scale.”
I remember a point where I would have said “yeah, I guess this makes sense, but some people seem to disagree, so I don’t know”. Now I can explain why it is not reasonable. So in that sense, I have learned something. (Actionable note to self: contact the professor responsible for the course and ask him why they put this phrase in the material. He is a very nice person so I think he would at least explain it to me.)
But I am a bit at a loss as to why people in the AI safety field think it *is* possible to build safe AI systems in the first place. I guess as long as it is not proven that the properties of safe AI systems contradict each other, you could assume it is theoretically possible. When it comes to ML, though, the performance achievable in practice is sadly often worse than the theoretical best.
Pessimism about the difficulty of the alignment problem is quite natural to me. I wonder if some people who are more optimistic about technology in general find AI safety materials so engaging because they at some point thought AI alignment could be a lot easier than it is. I find it hard to empathize with the people Yudkowsky first designed the AI box thought experiment for. As described in the beginning of this text, I would not spontaneously think that a superintelligent being was unable to manipulate me if it wanted to.
### What I might think of AI risk
As you might have noticed, it is quite hard for me to form good views on AI risk. But I have some guesses of views that might describe what I think:
* Somehow the focus in the AI risk scene currently seems quite narrow? I feel like while “superintelligent AI would be dangerous” makes sense if you believe superintelligence is possible, it would be good to look at other risk scenarios from current and future AI systems as well.
* I think some people are doing what I just described, but since the field of AI safety is still a mess, it is hard to know what work relates to what and what people mean by a given term.
* I feel like it would be useful to write down limitations/upper bounds on what AI systems are able to do if they are not superintelligent and don’t for example have the ability to simulate all of physics (maybe someone has done this already, I don’t know)
* To evaluate the importance of AI risk against other x-risk I should know more about where the likelihood estimates come from. But I am afraid to try to work on this because so far it has been very hard to find any numbers. (Actionable note to myself: maybe still put a few hours into this even if it feels discouraging. It could be valuable.)
* I feel like looking at the current progress in AI is not a good way to make any estimates on AI risk. I’m fairly sure deep learning alone will not result in AGI. (Russell thinks this too, but I had this opinion before reading Human Compatible, so it is actually based on gut feeling.)
* In my experience, data scientists tend to be people who have thought about technology and ethics before. I think a lot of people in the field would be willing to hear actionable and well explained ways of making AI systems more safe. But of course this is very anecdotal.
What now?
---------
### Possible next steps
To summarize, so far I have tried reading about AI safety to either understand why people are so convinced about it or find out where we disagree. This has not worked out. By writing this text, it became clear to me that there are social and emotional issues preventing me from forming an opinion about AI safety. I have already started working on them by discussing them with my friends.
I have already mentioned some actionable points throughout the text in the relevant contexts. The most important one:
* find a person who is willing to discuss AI safety with me: explain their own actual thinking, help me stay aware of when I might be slipping into my “AI safety compatible” mode, and listen to my probably very messy views
If you (yes, you!) are interested in a discussion like that, feel free to message me anytime!
Other things I already mentioned were:
* contact my former professor and ask him what he thinks of AI risk and why value alignment is described to emerge naturally from intelligence in the course material
* set aside time to find out and understand how x-risk likelihoods are calculated
Additional things I might do next are:
* read The Alignment Problem (I don’t think it will provide me with that much more useful info, but I want to finish reading all three of the AI alignment books people usually recommend)
* write a short intro to AI safety in Finnish to clear my head and establish personal vocabulary for the field in my native language (I want to write for an audience, but if I don’t like the result, then I will just not publish it anywhere)
### Why the answer matters
I have spent a lot of time trying to figure out what my view on AI safety is, and I still don’t have a good answer. Why not give up, decide to remain undecided and do something else?
Ultimately, this has to do with what I think the purpose of EA is. You need to know what you are doing, because if you don’t, you cannot do good. You can try, but in the worst case you might end up causing a lot of damage.
And this is why EA is a license to care: the permission to stop resisting the urge to save the world, because it promises that if you are careful and plan ahead, you can do it in a way that *actually helps*. Maybe you can’t save everyone. Maybe you’ll make mistakes. But you are allowed to do your best, and regardless of whether you are a Good Person™ (or an altruistic and effective person) it will help.
As long as I don’t know how important AI safety is, I am not going to let myself *actually care* about it, only about estimating its importance.
I wonder if this, too, is risk aversion – a lot of AI safety enthusiasts seem to emphasize that you have to be able to cope with uncertainty and take risks if you want to do the most good. Maybe this attitude towards risk and uncertainty is actually the crux between me and the AI safety enthusiasts that I’m having such a hard time finding?
But I’m obviously not going to believe something I do not believe just to avoid seeming risk averse. Until I can be sure enough that the action I’m taking is going in the right direction, I am going to keep being careful. |
bded2c49-0bac-40c0-a8e4-73aa56c34a12 | trentmkelly/LessWrong-43k | LessWrong | AI companies’ unmonitored internal AI use poses serious risks
AI companies use their own models to secure the very systems the AI runs on. They should be monitoring those interactions for signs of deception and misbehavior.
AI companies use powerful unreleased models for important work, like writing security-critical code and analyzing results of safety tests. From my review of public materials, I worry that AI is largely unmonitored when AI companies use it internally today, despite its role in important work.
Right now, unless an AI company specifically says otherwise, you should assume they don’t have visibility into all the ways their models are being used internally and the risks these could pose.
In many industries, using an unreleased product internally can’t harm third-party outsiders. A car manufacturer might test-drive its new prototype around the company campus, but civilian pedestrians can’t be hurt—no matter how many manufacturing defects—because the car is contained to the company’s facilities.
The AI industry is different—more like biology labs testing new pathogens: The biolab must carefully monitor and control conditions like air pressure, or the pathogen might leak into the external world. Same goes for AI: If we don’t keep a watchful eye, the AI might be able to escape—for instance, by exploiting its role in shaping the security code meant to keep the AI locked inside the company’s computers.
The minimum improvement we should ask for is retroactive internal monitoring: AI companies logging all internal use of frontier models by their company, analyzing the logs (e.g., for signs of the AI engaging in scheming), and following up on concerning patterns.
An even stronger vision—likely necessary in the near future, as AI systems become more capable—is to wrap all internal uses of frontier AI within real-time control systems that can intervene on the most urgent interactions.
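To make the weaker ask concrete, here is a minimal sketch of what retroactive logging of internal model use could look like. Everything in it is hypothetical: `call_model` stands in for whatever internal client a company actually uses, and the keyword patterns are placeholders for real review tooling rather than anything this post specifies.

```python
import json
import re
import time

# Hypothetical stand-in for an internal frontier-model call; real internal
# APIs will differ, and this name is purely illustrative.
def call_model(prompt: str) -> str:
    raise NotImplementedError("replace with the company's internal model client")

# Illustrative patterns only; a real deployment would rely on trained
# classifiers and human review rather than a keyword list.
FLAG_PATTERNS = [r"exfiltrat", r"disable.*monitor", r"hide this from"]

def monitored_call(prompt: str, user: str, log_path: str = "internal_use.jsonl") -> str:
    """Run an internal model call, append it to a log, and mark concerning outputs."""
    response = call_model(prompt)
    record = {
        "timestamp": time.time(),
        "user": user,
        "prompt": prompt,
        "response": response,
        "flagged": any(re.search(p, response, re.IGNORECASE) for p in FLAG_PATTERNS),
    }
    with open(log_path, "a") as f:  # append-only log for later retroactive review
        f.write(json.dumps(record) + "\n")
    return response
```

The stronger, real-time version would hold flagged interactions for review before returning the response, instead of merely logging them.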
In this post, I’ll cover:
* Why AI companies should be monitoring int |
d16d1938-58e7-4617-8a23-914835ff255b | trentmkelly/LessWrong-43k | LessWrong | Human Capital Contracts
Cross-posted on my blog here. Partially inspired by some slatestarcodex discussion here.
Summary: Human Capital Contracts would allow people to sell a certain % of their future income in return for upfront cash, as opposed to taking out a loan. This would be less risky for them, would give them valuable information about different college majors, and would help give people de facto ‘mentors’, among other advantages. Adverse selection could reduce the benefits, and reducing inter-state competition poses a major possible disadvantage. We also discuss two niche applications: parents and divorce.
Readers with an economics background might like to jump to the sections on 'Education' and 'Mentors - Incentive Alignment'
Debt vs equity financing
There are two methods of financing for companies; debt and equity.
Debt is fundamentally very simple. I give the company $100 now; it promises to give me $105 in a year’s time. They owe me a fixed amount in return. Hopefully in the meantime the company has invested that $100 in a project or piece of equipment that produces more than $105; if so they made a profit on the transaction as a whole. Here the risk is borne by the company; they have no choice but to pay me back, even if they didn’t make a profit this year. This form of financing is familiar to most people, as they personally use savings accounts, credit cards, mortgages, auto loans and so on.
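As a toy illustration of the two payoff structures (the equity case is spelled out in the next paragraph), here is a small sketch using the numbers from this post; the yearly profit figures are made up purely for the example.

```python
def debt_payoff(promised: float, profit: float) -> float:
    """The lender receives the fixed promised amount regardless of profit."""
    return promised  # profit is irrelevant to the lender's claim

def equity_payoff(share: float, profit: float) -> float:
    """The shareholder receives their share of profit, and nothing in a bad year."""
    return share * max(profit, 0.0)

# $100 in, either as a loan promising $105 back or as a 10% equity share.
for profit in (50.0, 0.0, -20.0):  # good year, flat year, loss (illustrative)
    print(profit, debt_payoff(105.0, profit), equity_payoff(0.10, profit))
```

The output makes the risk allocation explicit: the lender gets $105 in every scenario, while the shareholder gets $5, $0, and $0 respectively.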
Equity, unlike debt, does not represent a fixed level of obligation. Instead the company owes you a certain fraction of future profits. If you give a company $100 in return for a 10% share, and they made a $50 profit, your share of the profit is $5. Hopefully they will make growing profits for many years, in which case your portion will grow to $6, to $7, and so on. Here the risk is borne by you; if they don’t make a profit, you get nothing. This form of financing is much less familiar to most people; about the only experience they are likely to have would be investing in the stock |
b0e8234f-1878-4c30-972b-d92b63f8ace5 | trentmkelly/LessWrong-43k | LessWrong | On Context And People
TL;DR: Contrasting views of the research landscape, a prototype for automating scientific debate, and determining the importance of authorship. Also an analysis of the moral aspects of wearing socks.
Disclaimer: This is coming from a rather neuroscience-centric perspective. I'd be curious if the patterns I describe here apply to other fields.
Previously in this series: On Scaling Academia, On Automatic Ideas, and On (Not) Reading Papers.
Bottlenecks and Context
I'm still hung up on that question of how we might make research "scalable"[1]. Answering this question involves identifying bottlenecks and finding efficient ways to circumvent them. "Generating ideas" appeared to be one such bottleneck. But, as language models like GPT-3 shine when it comes to systematically generating ideas, the bottleneck effectively reduces to a search problem. And while the hacky prototype I threw together in a previous post probably does not quite qualify as a solution to "generating ideas", I feel pretty optimistic that we can crack this with a larger language model and some elbow grease.
A great idea, however, is not enough to scale research. Ideas need to be contextualized[2]. Did somebody work on the idea before? What do we know about the topic already? What are disagreements, confusions, and holes in the research landscape?
Finding those answers is a severe bottleneck to scaling; the problem gets harder as the number of established insights increases. I've argued before that the situation is already pretty bad and that researchers tend to be overwhelmed by the available literature. Current solutions (like "not reading papers") are only band-aids that will not hold when we try to scale research. Therefore, we will need to change how we manage our established knowledge to make research scalable. This post is an exploration of what that might look like.
Statement-centric view
What does the research landscape look like? When I started out in research, my mental picture of the |