id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
9b5bdb0f-599f-449c-ad63-fe6e982a9aa3 | trentmkelly/LessWrong-43k | LessWrong | Cephaloponderings
Cross-posted from Putanumonit.
----------------------------------------
Hello all. This is Jacob’s wife, and while Jacob is off chilling in the fjords, I’m staying home and have volunteered to write a guest post.
Instead of trying to match the usual Putanumonit fare, I chose to write about something I find very exciting. I’m a biologist, and while I normally work on research involving the sense of touch in tiny roundworms, I love to read about interesting animal biology that I come across through pop-sci media. Yes, I am one of those people you might find spouting “weird animal sex facts” at parties. But seriously, the diversity of life out there is incredible, and I really can’t get enough of learning and thinking about it. So here’s a bit about a creature that’s been capturing my fascination recently.
Consider the octopus. But first, you might want to consider considering the octopus. Why consider the octopus? I think they’re incredibly interesting creatures, and upon reflecting I think that’s probably because I’m so confused by them. They wouldn’t be very interesting to me if everything I knew about them fit well into my world view without jostling the content or structure of surrounding information, even if I had reached the point of knowing enough to know how much I didn’t know. Working in research, I’ve been whacked upon the head several times with the fact that just because information isn’t known or a problem isn’t solved, that doesn’t automatically mean that anyone cares about learning the information or solving the problem.
That goes for myself as well: I don’t know how many teeth the average walrus has, but until now it’s never occurred to me to look it up because it really doesn’t make much of a difference to me whether it’s 26 or 32. Once I started thinking about it, I had to Google the answer (it’s 18), but I’m happy to stop there instead of asking the same question about every other toothed animal I can think to name. It’s slightly more inter |
f3b86efe-603b-41ca-8138-0259ec6022f8 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Resources & opportunities for careers in European AI Policy
*Last updated: 12 October 2023*
Career opportunities
====================
Early career opportunities
--------------------------
* [EU Tech Policy Fellowship](https://www.techpolicyfellowship.eu/). A 7-month programme for kickstarting a European policy career focused on artificial intelligence. There are two cohorts per year, a winter cohort (Jan → July) and a summer cohort (June → Dec). Placement track fellows complete a paid 4-6 month placement at a relevant think, while training track fellows participate part-time in an AI governance & policy training programme.
* [Blue Book traineeship programme](https://traineeships.ec.europa.eu/).paid, five-month internship programme with the European Commission, the executive body of the EU. There are two Blue Book sessions each year, with applications opening in January for the session starting in October and in August for a start in March of the following year. More information can be found [here](https://forum.effectivealtruism.org/posts/7E3AGFB86mKYeo5aC/eas-interested-in-eu-policy-consider-applying-for-the).
* [AlgorithmWatch’s algorithmic accountability reporting fellowship](https://umfrage.algorithmwatch.org/third-reporting-fellowship/)
* Member state traineeships
+ [Netherlands](https://www.werkenvoornederland.nl/starters/traineeships/rijks-i-traineeship)
+
* Junior Accredited Parliamentary Assistant position in the European Parliament (following the upcoming 2024 elections)
* Junior think tank positions
+ Broader focus
- Centre for European Policy Studies
- European Policy Centre
- German Marshall Fund
- RAND
+ Brussels tech focus
- Ada Lovelace
- Lisbon Council
- The Future Society (TFS)
- The Future of Life Institute
- International Centre for Future Generations
+ German-based relevant think tanks
- Mercator (also offers a Fellowship!)
- European Council for Foreign Relations
- Stiftung Neue Verantwortung (SNV), also based in Brussels
- Agora Digitale Transformation (new)
+ Nordics-based relevant (security) think tanks
- Stockholm International Peace Research Institute (SIPRI)
- Danish Institute for International Affairs (DIIA)
- Danish Institute for International Studies (DIIS)
- Peace Research Institute Oslo (PRIO)
+ Civil society organisations and associations (less GCR focused, but very influential in the public debate)
- Digital SME
- Centre for Democracy and Technology
- Access Now
- Eticas
Mid career opportunities
------------------------
* Senior think tank positions (see those listed above)
* European Commission. There are several ways to enter:
+ [DG Connect](https://commission.europa.eu/about-european-commission/departments-and-executive-agencies/communications-networks-content-and-technology_en)
+ [Contract Agents Selection Tool](https://eu-careers.europa.eu/en/Cast-Permanent#:~:text=With%20the%20CAST%20Permanent%20procedure,an%20application%20at%20any%20time.)
+ [Concours](https://eu-careers.europa.eu/fr/eu-careers/staff-categories)
* Join the [European Centre for Algorithmic Transparency](https://algorithmic-transparency.ec.europa.eu/work-ecat_en)
* Join the EU AI Office (forthcoming)
* Senior Accredited Parliamentary Assistant position in the European Parliament (following the upcoming 2024 elections)
Upskilling & learning opportunities
===================================
Masters
-------
* [US policy master’s degrees](https://80000hours.org/career-reviews/us-policy-masters-degrees/). We encourage people to seriously consider a career in US policy. We expect working on AI policy in the US to be more impactful for many people. Completing a prestigious master’s program in the US could be helpful for pursuing this path.
* [List of Masters Programs in Tech Policy, Public Policy and Security (Europe)](https://forum.effectivealtruism.org/posts/8CD4i8FsRApcbt3an/list-of-masters-programs-in-tech-policy-public-policy-and)
AI governance
-------------
* BlueDot’s [AI Safety Fundamentals](https://aisafetyfundamentals.com/?utm_campaign=bluedot&utm_source=website) courses in alignment & governance are an excellent starting point.
* [EU Tech Policy Fellowship](https://www.techpolicyfellowship.eu/)’s training track provides a deeper dive into European AI governance through an 8-week online programme and a 10-day policymaking summit in Brussels.
How the EU works
----------------
* Explanation of the different institutions and bodies of the EU. Recommended to read (total: 25 mins):
+ [The European Parliament: Powers](https://www.europarl.europa.eu/factsheets/en/sheet/19/the-european-parliament-powers)
+ [The European Parliament: organisation and operation](https://www.europarl.europa.eu/factsheets/en/sheet/20/the-european-parliament-organisation-and-operation)
+ [The Council of the European Union](https://www.europarl.europa.eu/factsheets/en/sheet/24/the-council-of-the-european-union)
+ [The European Commission](https://www.europarl.europa.eu/factsheets/en/sheet/25/the-european-commission)
+ [The Presidency of the Council of the EU](https://www.consilium.europa.eu/en/council-eu/presidency-council-eu/)
* [EU law making procedure](https://www.europarl.europa.eu/olp/en/ordinary-legislative-procedure/overview) (read: 5 mins) or [watch](https://www.youtube.com/watch?v=UGOQL_IydKw) this video on how the EU Parliament works (4 mins)
* [Competences of the EU](https://www.youtube.com/watch?v=KprgOctQuyc) (watch: 4 mins) or [read the following graph](https://en.wikipedia.org/wiki/Template:European_Union_competences) (4 mins) Bonus (not mandatory): [The European sovereignty index](https://ecfr.eu/special/sovereignty-index/?utm_campaign=Brussels%20Peil&utm_medium=email&utm_source=Revue%20newsletter)
* [The Brussels Effect](https://en.wikipedia.org/wiki/Brussels_effect) (read: 5 mins)
* [The EU AI Act Will Have Global Impact, but a Limited Brussels Effect](https://www.brookings.edu/research/the-eu-ai-act-will-have-global-impact-but-a-limited-brussels-effect/)(20 minutes)
* [The history of the EU explained in a video](https://www.youtube.com/watch?v=4VCYHTGjr-U) (13 mins)
* [On Careers within the EU](https://docs.google.com/document/d/1jc1sGEgxHKce8OR8W9njEfIYNws2T2a0RdJGnfnLvDQ/edit#heading=h.xzethx2t7zy6) (10 mins)
Podcasts
--------
* [POLITICO’s EU Confidential](https://open.spotify.com/show/3wjOOIbMqHXiruZhNBHUZS?si=9084eea40c2a46bb)
* [EURACTIV’s Tech Brief](https://open.spotify.com/show/7eVoT7zCD8bNEfH0kItHs0?si=a35c2298d8544617)
* [Mark Leonard’s World in 30 minutes](https://open.spotify.com/show/4VeLFC7wSeauD3sFbxKWzh?si=a4fc510cd5f34146)
Newsletters
-----------
* [AI Act Newsletter](https://artificialintelligenceact.substack.com/) by Risto Uuk
* [The European AI newsletter](https://charlottestix.us19.list-manage.com/subscribe?u=eaeece823e606d2458a568db9&id=b32cc2b876) by Charlotte Stix
* [Import AI newsletter](https://jack-clark.net/) by Jack Clark (Anthropic)
* [Governance of Emerging Technologies Newsletter](https://oii.us21.list-manage.com/subscribe?u=fdacdfe3c4bca0b6d4e2c5f29&id=deedf037ce) by Rory Gillis
* [Navigating risks](https://navigatingairisks.substack.com/p/slowing-down-ai-rationales-proposals) by Simeon Campos
Other considerations
====================
* [Collection of work on 'Should you focus on the EU if you're interested in AI governance for longtermist/x-risk reasons?'](https://forum.effectivealtruism.org/posts/yNxn4HxDSMdRyrv6E/collection-of-work-on-should-you-should-focus-on-the-eu-if) |
e20cfccc-3e7e-4fcd-8d44-dedb14ba2542 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Why not just boycott LLMs?
*Epistemic Status*: *This post is an **opinion** I have had for some time and discussed with a few friends. Even though it’s been written up **very** hastily, I want to put it out there - because of its particular*[*relevance*](https://openai.com/research/gpt-4) *today and also because I think the issue has not been discussed enough.*
One of the tiny brain-takes a vegan sometimes encounters when talking to people unfamiliar with veganism is: Why don’t you just buy the meat? The animal is dead anyway. If you then roll your eyes and retort that you should at least try not to *actively increase* demand (to prevent future animal deaths), a reasonably smart person might reply - or might have replied a couple of years ago - that the number of vegans is so small anyway that their consumption behavior doesn't really have any influence on the market.
But this can change over time. In Germany, where I live, the number of vegans doubled between 2016 and 2020[[1]](#fnhat6tjqtq64). Meat consumption has been steadily declining for several years[[2]](#fn01nd8rwymsep), while the market for plant-based products has almost doubled just between 2018 and 2020[[3]](#fn8no5bzlxcv5).
Following this analogy, I wonder: Why has there been so little discussion (at least that I know of) about whether we as a community should boycott LLM-based products? Especially as we seem to agree that race dynamics are bad and having more time to do alignment research would be good?
What I mean by a boycott
========================
Some examples of what I mean by that: Don't sign up for Bing! Don't use ChatGPT and don't sign up for ChatGPT Plus! Or if you have to, use it as little as possible. Or adopt it as late as possible. If you can't be a vegan, then be a flexitarian and reduce your consumption of animal products as much as possible. Don't promote animal products, i.e. don't post artwork generated by diffusion models on your 10k follower twitter account. In general, be pragmatic about it.
Perhaps the consumer behavior of a bunch of EAs will have little to no impact on the market. I tried to find some detailed market research on ChatGPT with no luck - but it seems plausible to me that tech-savvy people like those overrepresented in the EA community make up part of the target demographic, so a boycott might have a disproportionately large effect. And if the number of people aware of AI risk grows and a boycott becomes the norm, this effect could increase over the years.
There is a related but distinctive argument that a boycott - if visible enough - could create bad press for AI companies. This happened last year when a number of artists shared images protesting AI-generated art on the platform ArtStation[[4]](#fnxtgsyjmgrxm). ArtStation took them down, causing even more negative publicity.
Now is a good time to start a boycott
=====================================
I would argue that the best time to start such a boycott would probably have been a couple of years ago (e.g. 2017, when DeepL was launched, or 2021, when GitHub Copilot was launched, or 2022, in the hype year of text-to-image models) and the second best time is *now*.
Why? Because at this moment the norms regarding the usage of LLMs in professional settings have not fully crystallized. Anecdotally, I know some people (including myself) who have been among the more hesitant adopters of ChatGPT. The mere fact that the servers were often down when I tried to use it contributed to a feeling of annoyance. And then there are large sections of the population, including older generations, who might be a bit more skeptical about/ slower to adopt AI, but have a lot of decision-making power. As a result, not exhausting the possibilities of *all available LLM applications* does not lead to a strong disadvantage as yet. For example, an applicant for a PhD position this year might not yet compete exclusively with applicants who use LLMs to augment their research proposals. And the committee members are not yet used to an inflated quality standard. I think it is worth trying to delay the establishment of these new norms.
A short engagement with possible counterarguments
=================================================
Besides the argument that the EA community is just too tiny to have any influence on market developments at all, I can think of two other counterarguments. One is that EAs might use LLMs for *good*; either directly (e.g. for research) or indirectly to empower them to do impactful things later on (for example, an EA person who augments their research proposal with ChatGPT might get accepted and continue to do impactful alignment research in their PhD! Yay!) I think it might be true that usage will become inevitable to compete with unfazed AI enthusiasts in the near future. For now, though, I think we should try to make sure we are not falling prey to motivated reasoning when arguing why we should definitely be using every shiny new toy that gets released as soon and as much as possible and for whatever task. It might just be exciting, or more convenient, or we don't want to feel left behind. But maybe we could try to avoid the latter by using LLM applications consciously and sparingly - and sometimes just reading a blogpost on prompting strategies instead.
Some might also argue that timelines are too short anyway. After all, veganism and its effect on the market of animal products has only gradually gained momentum - and we may not have the time to build that. My answer to this is: maybe that's true, but let's just try anyway? There's not much to lose (yet).
In sum, this post reflects my (slightly polemicized) opinion right now, and I can well imagine it changing. However, I think it would be useful for us collectively and privately to think about the utility and feasibility of boycotts before the next wave of LLM-powered products hits us.
1. **[^](#fnrefhat6tjqtq64)**<https://www.vox.com/future-perfect/23273338/germany-less-meat-plant-based-vegan-vegetarian-flexitarian>
2. **[^](#fnref01nd8rwymsep)**<https://albertschweitzerfoundation.org/news/german-meat-consumption-at-record-low>
3. **[^](#fnref8no5bzlxcv5)**<https://smartproteinproject.eu/wp-content/uploads/Smart-Protein-Plant-based-Food-Sector-Report_-Webinar-slides.pdf>
4. **[^](#fnrefxtgsyjmgrxm)**<https://www.theverge.com/2022/12/23/23523864/artstation-removing-anti-ai-protest-artwork-censorship> |
dfcd1464-ffb1-4c33-aa2e-7cf1fc676565 | trentmkelly/LessWrong-43k | LessWrong | The Illusion of Universal Morality: A Dynamic Perspective on Genetic Fitness and Ethical Complexity
The belief in a universal, independent standard for altruism, morality, and right and wrong is deeply ingrained in societal norms. However, when scrutinized through the lens of dynamic genetic fitness, these concepts reveal their inherent complexities. The idea of group selection suggests that a group rich in altruists may have a survival advantage over a group mainly composed of selfish organisms. Yet, even within such groups, altruists find themselves at a disadvantage compared to their selfish counterparts. This paradox challenges the straightforward interpretation of altruism and suggests that the concept is not as simple as it seems.
Crucially, the application of genetic fitness extends to all levels of gene pools, from our own offsprings to species-level offspring. This perspective reframes altruism and the notions of right and wrong as not merely moral or ethical choices but as strategies for optimizing genetic fitness across varying contexts. In this light, what is often considered 'altruistic' or 'right' may be better understood as actions that contribute to the genetic fitness of a group or species, rather than individual moral virtues.
Environmental factors also significantly shape our understanding of genetic fitness. In certain bird species, for example, some individuals stay near nesting sites to assist in rearing related offspring. This behavior is influenced by various factors such as food availability, mate attraction, and predation. Similarly, our moral and ethical choices are not made in isolation; they are molded by our immediate environment, societal norms, and the gene pool under consideration.
As we venture into new frontiers like deep space exploration and potential coexistence with other intelligent species, our understanding of genetic fitness will need to evolve. The ethical and moral frameworks we have built may not be applicable in these new contexts, especially when the gene pool extends beyond our own species. This raises questions |
3b3bfd63-883b-4e51-b4f3-085f079b12d0 | trentmkelly/LessWrong-43k | LessWrong | Giving in to small vices
When I was in Seoul three years ago to visit a friend, I was not impressed by the city. The people there were always in a hurry, and struck me as generally unfriendly. When you apologise for accidentally bumping into someone, your apology will usually be coldly ignored. There are also very strict social rules in place. E.g., on the trains, there are seats specially reserved for small children, elderly people, and the physically disabled. If you do not fall into any of these three categories, you are not allowed to take any of those seats, even when you are travelling during the off-peak hours and there are few other passengers. Of course, there are no laws in place to forbid you to do so, but you will be met with (silent) disapproval from the South Koreans. Or so my Korean friend warned me.
Another thing that struck me was the fact that the streets were strewn with litter everywhere. It was very unpleasant. How can one go about resolving this issue? After all, the lack of civic-mindedness is something that takes time to address, but you want clean streets now. Maybe you are thinking of making the act of littering legally punishable. That will certainly teach those litterbugs to be more considerate. So you pass a law saying that those who are caught littering will have to pay a fine.
This sounds like a good idea. After all, the advantage is that fining people for littering is a quick and easy way of filling up the governmental coffers, and so you instruct policemen to strategically station themselves in busy areas. But using manpower from the police forces to ensure public cleanliness seems like a colossal waste of all the specialised training all these policemen have received in preparation for their jobs.
What to do then? Maybe during the first two weeks after the ratification of the law, you delegate the assignment of catching litterbugs to a few policemen, to send the message that you mean business. After the fear of getting caught has been sufficiently instil |
f36b6c6f-4e0c-417d-97fa-eb9ede56a13d | trentmkelly/LessWrong-43k | LessWrong | A permutation argument for comparing utility functions
When doing intertheoretic utility comparisons, there is one clear and easy case: when everything is exactly symmetric.
This happens when, for instance, there exists u and v such that p([u])=p([v])=0.5 and there exists a map σ:S→S, such that σ2 is the identity (hence σ is an involution) and for all s∈S), u(s)=v(σ(s)).
Note that this implies that (u(s),v(s))=(v(σ(s)),u(σ(s)), so σ is essentially a 'reflection'.
Then, since everything is so symmetric, we can say there is no way of distinguishing u from v, so the correct approach is to maximise [u+v].
See the following graph, with the strategy to be followed marked in red:
Permutation
Symmetry is good as far as it goes, but is very fragile. It doesn't say anything about what happens when p([u])=49/100 and p([v])=51/100, for instance.
There is an argument, however, that resolves the unequal probability cases and extends the results to non-symmetric cases.
Consider for instance the case where u and v are as follows, for 5 strategies in S:
Nothing obvious springs to mind as to what the best normalisation process is -- the setup is clearly unsymmetrical, and four of the five options are on the Pareto boundary. But let's rescale and translate the utilities:
This is still unsymmetrical, but note that u and v have the same values: −1, 0.5, 2, 3, and 3.5.
Thus there is a permutation ρ:S→S such that for all s∈S, u(s)=v(ρ(s)).
Another type of uncertainty
This permutation ρ allows us to transform the uncertainty about one's own values (which we don't know how to handle) to other types of uncertainty (which we can).
How so? Let S′ be another copy of S, but with different labels, and let i:S→S′ be the identity map that re-assigns each element to its original label.
Then instead of seeing u and v as different utility functions on S, we can see them as both being the same utility function w on S′, with uncertainty over a map m from S to S′. This map m is i with probability p([u]) and i∘ρ with probability p([v]).
Th |
e558d43a-99cb-45ef-8260-88248a082144 | trentmkelly/LessWrong-43k | LessWrong | Is Gemini now better than Claude at Pokémon?
Background: With the release of Claude 3.7 Sonnet, Anthropic promoted a new benchmark: beating Pokémon. Now, Google claims Gemini 2.5 Pro has substantially surpassed Claude's progress on that benchmark.
TL:DR: We don't know if Gemini is better at Pokémon than Claude because their playthroughs can't be directly compared.
The Metrics
Here are Anthropic's and Google's charts:
[1]Unfortunately these are using different x and y axes, but it's roughly accurate to say that Gemini has made it nearly twice as far in the game[2] now:
And moreover, Gemini has gotten there using approximately 1/3rd the effort! As of writing, Gemini's current run is at ~68,000 actions, while Claude's current run is at ~215,000 actions.[3][4]
So, sounds definitive, right? Gemini blows Claude out of the water.
The Agents' Harnesses
Well, when Logan Kilpatrick (product lead for Google's AI studio) posted his tweet, he gave an important caveat:
> "next best model only has 3 so far, though with a different agent harness"
What does "agent harness" actually mean? It means both Gemini and Claude are:
1. Given a system prompt with substantive advice on how to approach playing the game
2. Given access to screenshots of the game overlaid with extra information
3. Given access to key information from the game's RAM
4. Given the ability to save text for planning purposes
5. Given access to a tool that translates text to button presses in the emulator
6. Given access to a pathfinding tool
7. Have their context automatically cleaned up and summarized occasionally
8. Have a second model instance ("Critic Claude" and "Guide Gemini") occasionally critiquing them, with a system prompt designed to get the primary model out of common failure modes
But there are significant implementation differences. Here's an example of what Claude is actually looking at before each action it takes:
and here's what Gemini is currently looking at before its actions:[5]
Plus Gemini is also given access to "a tex |
d1379fc2-844f-49b9-8209-7b5fc6fd7fa5 | trentmkelly/LessWrong-43k | LessWrong | The Third Annual Young Cryonicists Gathering (2012)
I received notification of this today from Alcor, my cryonics provider. I also received notification of this event last year but was unable to attend at that time. Does anyone plan on attending this year? Has anyone attended this type of event in the past, and if so was it worth going / what was your experience?
The Third Annual Young Cryonicists Gathering
Teens & Twenties 2012: Getting to Know You - You Getting to Know Each Other
Friday-Sunday; April 27-29, 2012 Deerfield Beach FL Host: Bill Faloo
I'm sorry I don't have more information at this time. Here is a link to a discussion post regarding the one last year: http://lesswrong.com/lw/5ob/young_cryonicists_conference_2011/ |
56591b08-514d-4b4b-ae6e-7492de0ebe07 | trentmkelly/LessWrong-43k | LessWrong | Open thread, Jan. 18 - Jan. 24, 2016
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday. |
45c05f72-5d47-41b6-94c8-f26564d076be | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Components of Strategic Clarity [Strategic Perspectives on Long-term AI Governance, #2]
*This is post 2 of an in-progress draft report called* ***Strategic Perspectives on Long-term AI Governance** (see* [*sequence*](https://forum.effectivealtruism.org/s/xTkejiJHFsidZ9hMo)*).*
---
Over the last 5 years since the appearance of Allan Dafoe’s [research agenda](https://www.fhi.ox.ac.uk/govaiagenda/), there have been many developments in the field of Long-Term AI Governance. However, the field is still young and remains open-ended.
Working definitions
===================
There is uncertainty over key terms within the field, around both different types of advanced AI, and the practice and field of AI governance. Both have seen different definitions [see Appendix 1].
For this discussion, I define **Transformative AI** as:
* “**AI systems that have extreme and practically irreversible effects on the long-term trajectory of society, including disruption comparable to the industrial revolution, and/or potential existential risks.**”
I define the field of **Long-term AI Governance** as:
* **“The study and shaping of local and global governance systems—including norms, policies, laws, processes, politics, and institutions—that affect the research, development, deployment, and use of existing and future AI systems in ways that positively shape societal outcomes into the long-term future.”**
I define a **Strategic Perspective** as:
* **"A cluster of correlated views on long-term AI governance, encompassing (1) broadly shared assumptions about the key technical and governance parameters of the challenge; (2) a loose theory of victory or impact story about what solving this problem would look like; (3) a set of historical analogies to provide comparison, grounding, inspiration, or guidance; (4) a set of intermediate strategic goals to be pursued, and near-term interventions or actions that contribute to reaching them;”**
Challenges for Long-term AI Governance
--------------------------------------
The Long-term AI Governance community faces a series of challenges. It remains an incipient field with a [small pool of active researchers](https://forum.effectivealtruism.org/posts/M2SBwctwC6vBqAmZW/a-personal-take-on-longtermist-ai-governance) from just a handful of institutions. It remains at least [partially pre-paradigmatic](https://www.eacambridge.org/ai-governance-curriculum), with underdeveloped or unpursued research lines. There are challenges around intra-community legibility, with many researchers not aware of what others are working on; - or understanding what are the assumptions, views, or cruxes that drive different people’s choices about what to research and investigate.
In search of strategic clarity
------------------------------
The lack of clarity is a problem, because the community of Long-term AI Governance identifies strongly as an impact-focused project. We are not just intellectually curious about advanced AI; we are motivated to find ways to make this future go well.
However, as has been noted, the community lacks *strategic clarity* around which intermediate goals or policy actions should be pursued. Specifically, there is pervasive uncertainty not just about the technical landscape of TAI, but also about robust nearer-term activities or even goals for policy. As a consequence, various people have emphasized the importance of high-quality research to ‘[disentangle](https://web.archive.org/web/20210622160148/https://forum.effectivealtruism.org/posts/RCvetzfDnBNFX7pLH/personal-thoughts-on-careers-in-ai-policy-and-strategy)’ the field, scope out key parameters for governance, and the identification of interventions that are robustly good.
Is strategic clarity what we need most? There could be debate over whether an effective Long-term AI Governance community requires:
1. **strategic** ***clarity*** - i.e. a sensible and grounded theory of change, providing a detailed, even ‘gears-level’ model of both the technical landscape and the policy world, with a resulting clear roadmap for near-term or intermediate interventions;
2. **strategic** ***consensus***- i.e. (almost) everyone in the Long-term AI Governance community shares the same roadmap or perspective–the same model of strategic clarity; or-
3. **strategic** ***coherence*** - i.e. interventions by different people or communities in the Long-term AI Governance community don't catastrophically interfere with- or erode one another.
It is unclear whether achieving strategic clarity would be enough to create strategic consensus at the expert or community level, although the two are likely correlated. If they can be decoupled, it is not clear whether the Long-term AI Governance community necessarily requires, or would currently gain from, achieving full strategic consensus.
* On the one hand, translating strategic clarity into strategic consensus could ensure full alignment of the community, and avoid disagreements and tensions over interventions;
* On the other hand, entertaining a portfolio of different perspectives that lack consensus, but which each have internal strategic clarity, could be an acceptable or even preferable meta-strategy for the purposes of Long-term AI Governance–so long as we ensure minimal strategic *coherence* or non-interference amongst the interventions pursued by different groups.
In either case, to pursue *any* long-term-oriented interventions for TAI--and to even begin to explore questions of strategic consensus and coherence (and their tradeoffs)--the Long-term AI Governance community requires *at least one* account of the world that provides strategic clarity for action, given its background world assumptions.
How existing work contributes to the components of strategic clarity
--------------------------------------------------------------------
What is holding us back from gaining strategic clarity? A background cause lies in the generally wide range of philosophical, technical, political and other views which the world appears to have on the subject of advanced AI.
Strategic clarity in Long-term AI Governance requires several ingredients:
1. A detailed and gears-level account of the ***strategic parameters*** of the ***problem;***
2. An understanding of all available or ***potential options*** (e.g. assets, levers, interventions) that could contribute ***solutions***;
3. A ***theory of impact or -victory*** for comparing and prioritizing amongst these governance *solutions*, based on an account of the strategic parameters of the *problem*;
Existing work in the field has already contributed to these components (see table 2; Appendix 2).
| **Type** | **Focus of work** | **Includes work on:** |
| --- | --- | --- |
| 1st order | Understanding **strategic parameters** of the long-term AI governance *problem* | **Technical parameters:*** **Technical landscape**
+ Timelines
+ Architectures (e.g. AGI, CAIS, PASTA)
+ Pathways (e.g. scaling hypothesis) vs. barriers
+ Takeoff speeds
+ Historical analogies for disjunctive development
+ *Epistemic terrain*: (lack of) advance warning signs of capability breakthroughs
+ *Distribution* of AGI programs, and of relevant inputs
+ *Trends* in relevant inputs
+ [...]
* **Direct existential threat models**
+ 'Superintelligence' alignment failure
+ 'What Failure Looks Like' (1&2)
+ War
+ Misuse
+ Intersection with other risks (e.g. nuclear escalation)
+ Multipolar failures
+ Suffering risks
+ [...]
* **Indirect effects on existential risk factors**
+ Intermediate political impacts
+ Effects on epistemic security
+ Effects on coordination architectures
+ [...]
* **Technical alignment approaches**
+ [various overviews]
**Governance parameters:*** **Structural features of the TAI governance challenge, as:**
+ Global Public Good problem
+ Collective Action problem
+ Technology Race
+ Involving risks from accident, misuse, structure
+ [...]
* **Likely prevailing governance conditions**
+ Global perceptions of AI; policymaker perceptions of AI
+ Existing or likely near-term governance regimes which will affect TAI;
+ [...]
* **Historical precedents and lessons for AI governance**
* **Desiderata for ideal AI governance**
* [...]
|
| 2nd order | Understanding **potential options** for long-term AI governance *solutions* | * Mapping **distribution of current assets:**
+ Topology of institutions active in the space
+ Distribution of talent
+ Funding landscape
+ [...]
* Mapping **individual career options**
* Mapping **key TAI actors to influence**
+ The relative relevance of influencing different *types* of actors (e.g. firms vs. governments vs. academia)
+ The relative relevance of influencing *particular* actors (e.g. US, China, EU, ...)
* Mapping **possible levers** for pursuing goals
+ Sets of *tools* available to different actors to shape TAI (e.g. export control regulation; arms control treaties, lab policies, defence-in-depth, compute governance, [...]);
+ The *different pathways* by which these interventions might be realized and implemented
* Articulating **specific proposals** for long-term-relevant AI intervention 'products'
+ [...]
|
| 3rd order | Articulating individual **theories of impact / victory** (allowing the selection or prioritization amongst *solutions,* given a particular view of the *problem*) | **In technical AI alignment**, e.g.* Wei Dai’s ‘[AI Safety Success Stories](https://www.alignmentforum.org/posts/bnY3L48TtDrKTzGRb/ai-safety-success-stories)’
* Neel Nanda’s ‘[Overview of the AI Alignment Landscape](https://www.lesswrong.com/posts/SQ9cZtfrzDJmw9A2m/my-overview-of-the-ai-alignment-landscape-a-bird-s-eye-view)’, and ‘[A Longlist of Theories of Impact for Interpretability](https://www.lesswrong.com/posts/uK6sQCNMw8WKzJeCQ/a-longlist-of-theories-of-impact-for-interpretability)’,
* Evan Hubinger’s ‘[A positive case for how we might succeed at prosaic AI alignment](https://www.lesswrong.com/posts/5ciYedyQDDqAcrDLr/a-positive-case-for-how-we-might-succeed-at-prosaic-ai)’;
* [...]
**In long-term AI governance**, e.g.* Allan Dafoe's ['Asset-decision' model](https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact)
* Jade Leung's ['decision-influence' model](https://www.effectivealtruism.org/articles/jade-leung-how-can-we-see-the-impact-of-ai-strategy-research)
* Ben Garfinkel's '[Pathways for Impact](https://docs.google.com/document/d/1Haf07kdY503SNyb4hI_qtk8zf-X0aEwqpJKevb2GXUc/edit#)'
* Seth Baum's '[affecting the future of AI governance](https://www.youtube.com/watch?v=G-8uEg7mCdA)'
* [etc.]
* …
|
| 4th order | Mapping and comparing **strategic perspectives**, (to prioritize amongst or coordinate between work pursuing different theories of impact) | (this project) |
Table 2: different types of Long-term(ist) AI Governance work, on distinct components of strategic clarity (see also Appendix 2).
Such work has been rich and nuanced. However, it does not yet appear to have yet led to significantly increased strategic consensus, to a level where it has won over a large part of the Long-term AI Governance community. Moreover, the positions that have been informally articulated also do not exhaust the space of positions. As such, achieving any or all of strategic clarity, consensus, or coherence in Long-term AI Governance, can gain form an additional component:
* A ***mapping of strategic perspectives**:* comparative understanding of how each given theory of victory relates to, conflicts with, or supports other contending theories and perceptions;
The purpose of this project is to provide a sketch for such a mapping. |
fcf96bc2-32b9-4e73-b9a1-3d5d9c2cb33f | trentmkelly/LessWrong-43k | LessWrong | Categorial preferences and utility functions
,
This post is motivated by a recent post of Stuart Armstrong on going from preferences to a utility function. It was originally planned as a comment, but seems to have developed a bit of a life of its own. The ideas here came up in a discussion with Owen Biesel; all mistakes in this exposition are mine. I'm not very good with the typesetting engine here, so apologies for latex and other problems.
The basic ideas is as follows. Suppose we have a set Sof objects, and we are given some information on which objects are preferred to which other objects. Then we are interested in whether and in how many ways this data can be captured by a utility function. Our key innovation is that we assume not only the direction of preferences is given, but also some information on the strength of the preferences, in a manner which we will make precise below (weak preferences).
Basic on orders vs utility functions
We refer to the Order Theory page on Wikipedia for the definitions of reflexive, anti-symmetric, transitive and connexive binary relations. If S is a set and U:S→R is a function (`utility'), this induces a reflexive, transitive and connected binary relation (not anti-symmetric in general, unless U is injective).
Conversely, any reflexive, transitive, antisymmetric and connexive binary relation (a.k.a. total order) on a countable set S, this is induced by a utility function taking values in the rational numbers (link to proof); there is a more general discussion here.
Strength of preferences
In what follows, we fix a totally ordered abelian group G. To express a preference between to objects s1, s2 of our set S, one should give an element of G which expresses how strongly s1 is preferred to s2. The most natural example is to take G=Z, then:
Saying s1 is preferred to s2 with strength 1 means we slightly prefer s1 to s2;
Saying s1 is preferred to s2 with strength 0 means have no preference between s1 and s2;
Saying s1 is preferred to s2 with strength 2 means we prefer |
40260abc-e52f-4a44-835c-446b56cc7290 | trentmkelly/LessWrong-43k | LessWrong | Is there any serious attempt to create a system to figure out the CEV of humanity and if not, why haven't we started yet?
Hey, fellow people, I'm fairly new to LessWrong and if this question is irrelevant i apologise for that, however I was wondering whether any serious attempts to create a system for mapping out the CEV of humanity have been started yet?
Since the CEV ismore of a democratic process than any other alignment system I can think of, it would make sense to try to create a database for the purpose of a future AGI to calibrate itself on. If we where to create the database now we could explore if it suited our needs through trial and error (testing whether it would predict moral decisions), which would mean that we would get a more functional alignment system than we could otherwise get. Also as a follow up, if we where to create a central authority for creating this database, there's a possibility that the authority could become a central alignment checking facility meaning that we could avoid potential misalginment disasters.
There are therefore quite clear reasons for me why it would be a good idea to start this project which is why I'm wondering if there are any such plans.
Thank you for your time. |
3bfc343f-214e-4ee4-8df8-c53d4ab74581 | trentmkelly/LessWrong-43k | LessWrong | Epoch: What is Epoch?
Our director explains Epoch AI’s mission and how we decide our priorities. In short, we work on projects to understand the trajectory of AI, share this knowledge publicly, and inform important decisions about AI.
----------------------------------------
Since we started Epoch three years ago, we have engaged in hundreds of projects and achieved a wide audience. Yet, one question I often get asked is, ‘What is Epoch?’
In a way, this is an easy question to answer. We are a nonprofit research organization with the mission of improving society’s understanding of the trajectory of AI. Simply put, we are doing what we can so that decisions about AI are informed by the best possible evidence.
To achieve this, we are curating data and conducting high-quality research into some of the most significant trends in AI. We share most of this work publicly, aimed at a broad audience, including AI policy experts, journalists and AI developers. Importantly, we are committed to always sharing what the data says, rather than tailoring it to fit a narrative.
We work on this mission because we believe that if we all collectively know more about AI, we will make better decisions on average. I will not agree with all the decisions that our work will inform — but I believe that we can have a smarter conversation about AI if it is grounded in data, and I am pleased with the level of success that we have achieved in this mission.
However, while helpful, this brief description misses many nuances of our culture and ethos. In this post, I will expand on what we do, why we do it, and what we are not. My goal is to let you know more about how we make decisions at Epoch, so you can better understand our motivations.
What we do
Our primary focus is on working on what we believe will be most helpful in understanding the present and future of AI. We are committed to sharing this knowledge publicly, because we think doing so will help inform important decisions society will make in coming yea |
144b49bc-97cb-4238-bbb2-60f9b436c7ce | trentmkelly/LessWrong-43k | LessWrong | Infrafunctions and Robust Optimization
Proofs are in this link
This will be a fairly important post. Not one of those obscure result-packed posts, but something a bit more fundamental that I hope to refer back to many times in the future. It's at least worth your time to read this first section up to its last paragraph.
There are quite a few places where randomization would help in designing an agent. Maybe we want to find an interpolation between an agent picking the best result, and an agent mimicking the distribution over what a human would do. Maybe we want the agent to do some random exploration in an environment. Maybe we want an agent to randomize amongst promising plans instead of committing fully to the plan it thinks is the best.
However, all of these run into the standard objection that any behavior like this, where a randomized action is the best thing to do, is unstable as the agent gets smarter and has the ability to rewrite itself. If an agent is randomizing to sometimes take actions that aren' t optimal according to its utility function, then there will be an incentive for the agent to self-modify to eliminate its randomization into those suboptimal actions.
The formalization of this is the following proposition.
Proposition 1: Given some compact metric space of options X, if U:X→R is a bounded function, {μ|∀ν∈ΔX:μ(U)≥ν(U)}=Δ{x|∀y∈X:U(x)≥U(y)}
Intuitively, what this is saying is that the only possible way for a mixture of options to be an optimal move is if each component option is an optimal move. So, utility functions can only give you randomization behavior if the randomization is between optimal actions. The set of such will typically only contain a single point. And so, in general, for any utility function at all, an agent using it will experience a convergent pressure towards deterministic decision-making.
Every single clever alignment trick involving an agent behaving randomly or sampling from a distribution is thereby guaranteed to fail, as it's not stable under reflection |
1ccb7a17-5566-48d7-a860-361f6dcd92c4 | StampyAI/alignment-research-dataset/blogs | Blogs | my takeoff speeds? depends how you define that
my takeoff speeds? depends how you define that
----------------------------------------------
what takeoff speeds for [transformative AI](ai-doom.html) do i believe in? well, that depends on which time interval you're measuring. there are roughly six meaningful points in time to consider:
* **development**: the AI that will transform the world starts being developed
* **launch**: this AI is launched — more formally, this AI gets past its least meaningful impact by a human, and is just doing its own thing afterwards
* **impact**: the AI starts having significant impacts on the world, eg hacks the internet and/or people to get power
* **observable**: the AI starts having impacts on the world that people unrelated to its development can notice — not everyone, but let's say at least people who are alignmentpilled enough to guess what might be happening
* **DSA**: the AI achieves [decisive strategic advantage](https://publicism.info/philosophy/superintelligence/6.html), which for us is the point of no return
* **transformation**: the AI starts having the effect that we expect it to ultimately have on us; for example, with unaligned AI this is when we die, and with aligned AI this is when we get utopia
note that this view is, i think, *qualitatively* orthogonal to how aligned a transformative AI is; those are all meaningful thresholds regardless of whether the AI is taking over everything to build utopia or to tile the universe with paperclips. that said, it can still be *quantitatively* different when it comes to the durations between any two points in time; for example, one generally expects that the time between **development** and **launch** takes longer for aligned AI than unaligned AI.
my model is currently:
* **development** to **launch**: weeks to years, but maybe hard to define because nothing is developed from scratch. closer to years if aligned.
* **launch** to **impact**: hours to weeks (recursive self-improvement is strong!)
* **impact** to **observable**: also hours to weeks (but low confidence; the world is complex)
* **observable** to **DSA**: probly negative? if it's smart and powerful enough, it achieves DSA first. especially if it's aligned, because then it should want to avoid people panicking in ways that might cause damage.
* **DSA** to **transformation**: could be zero? depends on your perspective, too; if the AI uploads everyone, then spends 10¹⁰⁰ years taking over the universe, and only *then* starts running us in utopia, then that's a short time from *our* perspective. but ultimately this measure isn't very useful, since it's after the point of no return so there's nothing we can do anyways.
in any case, that last measure is not very useful: if we're past the point of no return, there there's nothing we can do anyways.
(see also: [*ordering capability thresholds*](ordering-capability-thresholds.html) and [*local deaths under X-risk*](quantum-immortality-local-deaths.html)) |
5dd85c86-e164-488d-bf3a-13f4cc26784d | trentmkelly/LessWrong-43k | LessWrong | Beyond Biomarkers: Understanding Multiscale Causality
Photo by Jigar Panchal on Unsplash
In exercise science, we typically derive causality in a bottom-up manner. When we evaluate performance, we assess factors such as cardiovascular capacity, metabolic efficiency, or muscular contractile capacity. However, I’ve always grappled with a chicken-and-egg dilemma in exercise physiology. This dilemma highlights the challenge of understanding sequences of events where mutual dependencies exist — each outcome depends on a preceding event, and vice versa.
Consider a simple example: biomechanical testing of an NBA basketball player might reveal that certain parameters (x, y, z) predispose them to excel at that competition level. However, we can also argue that these parameters likely developed in response to the competitive demands of the game. As players advance to higher leagues, they face greater technical demands, which drive their development and the evolution of their biomechanical parameters.
This creates a paradoxical situation. If structure gives rise to behaviour, but structure simultaneously evolves in response to our behaviour, where does causality lie? What comes first, the chicken or the egg? Does causality arise from the bottom-up physiological blueprint or the top-down constraints of a specific ecological niche?
Top-down or bottom-up?
To understand biological causation, I came across an interesting stream of research from the well-known Oxford physiologist Denis Noble, the founder of modern electrophysiology of the heart. He discusses the concept of biological relativity, where no level of the biological hierarchy holds privileged causation (Noble et al., 2019). In simple terms, lower levels are responsible for dynamics, while higher levels constrain the lower levels by setting boundary conditions.
Example derived from Noble et al. (2019). Global cell properties, such as electric potential, regulate molecular-level properties, such as ion channel proteins, which in turn influence changes in cell properties.
|
974048d1-657b-4cc9-8897-ab2ec94db798 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | An Extremely Opinionated Annotated List of My Favourite Mechanistic Interpretability Papers
Introduction
------------
This is an extremely opinionated list of my favourite mechanistic
interpretability papers, annotated with my key takeaways and what I like
about each paper, which bits to deeply engage with vs skim (and what to
focus on when skimming) vs which bits I don’t care about and recommend
skipping, along with fun digressions and various hot takes.
This is aimed at people trying to get into the field of mechanistic
interpretability (especially Large Language Model (LLM)
interpretability). I’m writing it because I’ve benefited a lot by
hearing the unfiltered and honest opinions from other researchers,
especially when first learning about something, and I think it’s
valuable to make this kind of thing public! On the flipside though, this
post is explicitly about *my* personal opinions - I think some of these
takes are controversial and other people in the field would disagree.
The four top level sections are priority ordered, but papers within each
section are ordered arbitrarily - follow your curiosity
Priority 1: What is Mechanistic Interpretability?
-------------------------------------------------
* [Circuits: Zoom
In](https://distill.pub/2020/circuits/zoom-in/)
+ Sets out the circuits research agenda, and is a whirlwind
overview of progress in image circuits
+ This is reasonably short and conceptual (rather than technical)
and in my opinion very important, so I recommend deeply
engaging with all of it, rather than skimming.
+ The core thing to take away from it is the perspective of
networks having legible(-ish) internal representations of
features, and that these may be connected up into
interpretable circuits. The key is that this is a mindset for
thinking about networks *in general*, and all the discussion
of image circuits is just grounding in concrete examples.
- On a deeper level, understanding why these are important and
non-trivial claims about neural networks, and their
implications.
+ In my opinion, the circuits agenda is pretty deeply at the core
of what mechanistic interpretability *is*. It’s built on the
assumption that there is some legible, interpretable structure
inside neural networks, if we can just figure out how to
reverse engineer it. And the core goal of the field is to find
what circuits we can, build better tools for doing so, and do
the fundamental science of figuring out which of the claims
about circuits are actually true, which ones break, and
whether we can fix them.
- An important note is that mechanistic interpretability is an
extremely young field and this was written 2.5 years ago -
I take the specific claims in this article as a starting
point, not as the definitive grounding of what the field
must believe.
+ **Meta:** The goal of reading this is to understand what the
fundamental mindset and worldview being defended here is. The
goal is *not* necessarily to leave feeling convinced that
these claims are true, or that the article adequately
justifies them. That’s what the rest of the papers in here are
for!
+ A useful thing to reflect on is what the world would look like
if the claims were and were not true - what evidence could you
see that might convince you either way? These are definitely
not obviously true claims!
* [A Mathematical Framework for Transformer
Circuits](https://transformer-circuits.pub/2021/framework/index.html)
+ The point of this is to explain how to conceptually break down a
transformer into individually understandable pieces.
+ Deeply engage with:
- All the ideas in the overview section, especially:
* Understanding the residual stream and why
it’s fundamental.
* The notion of interpreting *paths* between interpretable bits (eg input tokens and output logits) where the path is a composition of matrices and how this is different from interpreting every intermediate activations
* And understanding attention heads: what a QK and OV
matrix is, how attention heads are independent and
additive and how attention and OV are
semi-independent.
- Skip Trigrams & Skip Trigram bugs, esp understanding *why*
these are a really easy thing to do with attention, and
how the bugs are inherent to attention heads separating
*where* to attend to (QK) and what to do once you attend
somewhere (OV)
- Induction heads, esp why this is K-Composition (and how
that’s different from Q & V composition), how the circuit
works mechanistically, and why this is too hard to do in a
1L model
+ Skim or skip:
- Eigenvalues or tensor products. They have the worst effort
per unit insight of the paper and aren’t very important.
+ Maybe check out [my (long-ass) walkthrough of the
paper](https://www.youtube.com/watch?v=KV5gbOmHbjU), and
comments on how I think about things
- If you prefer video over reading I expect it to be high
value
- Either way it’s probably useful to check the relevant
section it if there’s part of the paper that confuses you.
Priority 2: Understanding Key Concepts in the field
---------------------------------------------------
* [Induction
Heads](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html)
+ This is a study of how induction heads are ubiquitous in real
transformers, and form as a sudden phase change during
training.
+ Deeply engage with:
- Key concepts + argument 1.
- Argument 4: induction heads also do translation + few shot
learning.
- Getting a rough intuition for all the methods used in the
Model Analysis Table, as a good overview of interesting
interpretability techniques.
+ Skim or skip:
- All the rigour - basically everything I didn’t mention. The
paper goes way overboard on rigour and it’s not worth
understanding every last detail
* The main value to get when skimming is an overview of
different techniques, esp general techniques for
interpreting during training.
+ A particularly striking result is that induction heads form at
~the same time in all models - I think this is very cool, but
somewhat overblown - from some preliminary experiments, I
think it’s pretty sensitive to learning rate and positional
encoding (though the fact that it *doesn’t* depend on scale is
fascinating!)
* [Mechanistic Interpretability, Variables, and the Importance of
Interpretable
Bases](https://transformer-circuits.pub/2022/mech-interp-essay/index.html)
+ Short-ish conceptual essay on what the point of mechanistic
interpretability is and how to think about it.
+ This is similar in flavour to Circuits: Zoom In, but is more
conceptual and less grounded in very concrete examples +
progress - your mileage may vary in how much this works for
you.
* [A Toy Model of
Superposition](https://transformer-circuits.pub/2022/toy_model/index.html)
+ Building a simple toy model that contains superposition, and
analysing it in detail.
+ Deeply engage with:
- The core intuitions: what is superposition, how does it
respond to feature importance and sparsity, and how does
it respond to correlated and uncorrelated features.
- Read the strategic picture, and sections 1 and 2 closely.
+ Skim or skip:
- No need to deeply understand the rest, it can mostly be
skimmed. It’s very cool, especially the geometry and phase
transition and learning dynamics part, but a bit of a nerd
snipe and doesn’t obviously generalise to real models.
+ A good intro paper for concrete projects. The models are tiny,
the core results should be easy to replicate (and have short
training times), there’s an [accompanying
Colab](https://colab.research.google.com/github/anthropics/toy-models-of-superposition/blob/main/toy_models.ipynb)
and [a list of follow-up
ideas](https://transformer-circuits.pub/2022/toy_model/index.html#open-questions),
so this is a great paper to play around with!
* [Curve
Detectors](https://distill.pub/2020/circuits/curve-detectors/)
& [Curve
Circuits](https://distill.pub/2020/circuits/curve-circuits/)
(Image interpretability)
+ An extremely detailed and rigorous study of a family of neurons
in Inception; a gold standard of what good interpretability
can look like. Culminates in them hand-coding the weights of
artificial neurons and substituting those into the circuit,
and comparing performance. Note that a bunch of the techniques
won’t generalise.
+ Deeply engage with:
- Understanding what they did as a gold standard, and thinking
about *why* what they did is deep and meaningful evidence.
- Think about which techniques will and will not generalise to
LLMs
Priority 3: Expanding Understanding
-----------------------------------
### Language Models
* [Indirect Object
Identification](https://openreview.net/forum?id=NpsVSN6o4ul)
+ A paper about reverse engineering a complex (28 head!) circuit
in GPT-2 Small
- The most detailed “we actually have a circuit, and can drill
into it in detail and really get how it works” paper that
I know of.
* The circuit in question is for the task of completing
“When John and Mary went to the shops, John bought a
bottle of milk for” -> “ Mary” but “Mary bought a
bottle of milk for” -> “ John”
+ Particularly good for a vibe of “ways interpretability is hard
and you can trick yourself” + “but it is actually possible and
we can fix these”
* [SoLU](https://transformer-circuits.pub/2022/solu/index.html)
+ A paper on a neuron activation function that makes transformer
neurons somewhat more interpretable.
+ Deeply engage with:
- Section 3 (Background). For the core ideas, esp
superposition, privileged bases and why they matter.
* See “A Toy Model of Superposition” for much more on
superposition.
- Section 6 (on the neurons found). For getting the vibe of
what kind of features LLMs learn - I think this is the
best resource I know of for getting a vibe of what kinds
of things MLP layers are doing at different layers of a
transformer.
+ Skim:
- Section 4 (on the exact function and how it works) - the
main intuition to get is *why* you might expect this to
work (in particular, why lateral inhibition seems
important)
+ Skip:
- Section 5 (showing that the model works as well as normal
activation functions).
* [ROME](https://rome.baulab.info/)
+ A paper on locating and editing factual knowledge in GPT-2 - a
strong contender for my favourite non Chris Olah
interpretability paper
+ Deeply engage with:
- Causal tracing + activation patching stuff (including the
appendix on it). It’s a really cool, elegant and general
technique, and demonstrates that certain computation is
extremely localised in the model, and uses careful
counterfactuals to isolate this computation.
+ Skim or skip:
- The model editing stuff. It’s way less interesting from an
interpretability point of view than the above.
* [Logit
Lens](https://www.alignmentforum.org/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens)
+ A solid early bit of work on LLM interpretability. The key
insight is that we interpret the residual stream of the
transformer by multiplying by the unembedding and mapping to
logits, *and* that we can do this to the residual stream
before the final layer and see the model converging on the
right answer
- Key takeaway: Model layers iteratively update the residual
stream, and the residual stream is the central object of a
transformer
+ Deeply Engage with:
- The key insight of applying the unembedding early, and
grokking *why* this is a reasonable thing to do.
+ Skim or skip:
- Skim the figures about progress towards the answer through
the model, focus on just getting a vibe for what this
progress looks like.
- Skip everything else.
+ The deeper insight of this technique (not really covered in the
work) is that we can do this on *any* vector in the residual
stream to interpret it in terms of the direct effect on the
logits - including the output of an attn or MLP layer and even
a head or neuron. And we can also do this on weights writing
to the residual stream.
- [Analyzing Transformers in Embedding
Space](https://arxiv.org/pdf/2209.02535.pdf) is a more
recent paper that drills down into this insight, focusing
on weights.
* I’m somewhat meh on the paper as a whole, but sections
3, 4.1 and Appendix C are cool for seeing what head
and neuron circuits can look like
* Note that they make the (IMO) mistake of treating
embedding and unembedding space as the same space -
the input and output are different spaces! Even if
most people make the mistake of setting the embed and
unembed maps to be the same matrix :(
- Note that this tends only to work for things close to the
final layer, and will totally miss any indirect effect on
the outputs (eg via composing with future layers, or
suppressing incorrect answers)
* [An Interpretability Illusion for
BERT](https://arxiv.org/pdf/2104.07143.pdf)
+ Good early paper on the limitations of max activating dataset
examples - they took a seemingly interpretable neuron in BERT
and took the max activating dataset examples on different
datasets, and observed consistent patterns *within* a dataset,
but very different examples *between* datasets
- Within the lens of the Toy Model paper, this makes sense!
Features correspond to directions in the residual stream
that probably aren’t neuron aligned. Max activating
dataset examples will pick up on the features *most*
aligned with that neuron. Different datasets have
different feature distributions and will give different
“most aligned feature”
* Further, models want to minimise interference and thus
will superpose anti-correlated features, so they
*should*
+ Deeply engage with:
- The concrete result that the same neuron can have very
different max activating dataset examples
- The meta-level result that a naively compelling
interpretability technique can be super misleading on
closer inspection
+ Skim or skip:
- Everything else - I don’t care much about the details beyond
the headline result, which is presented well in the intro.
### Algorithmic Tasks
* [A Mechanistic Interpretability Analysis of
Grokking](https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking)
+ *Conflict of interest note - I was the main person working on
this project!*
+ A very detailed reverse engineering of a tiny model trained to
do modular addition and interpreting it during training, plus
a bunch of discussion on phase changes, an (attempted)
explanation of
[grokking](https://arxiv.org/abs/2201.02177) and
showing grokking on other tasks.
- Grokking probably isn’t that relevant to real models and the
techniques don’t really generalise, but a good example of
detailed reverse engineering + fully understanding a model
on an algorithmic task, and of applying interpretability
during training.
* Also a good example of how actually understanding a
model can be really useful, and push forwards science
of deep learning by explaining confusing phenomena.
- I also just personally think this project was super fucking
cool, even if not that useful.
+ Deeply engage with:
- The key claims and takeaways sections
- [Overview of the modular addition
algorithm](https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking#Overview_of_the_Inferred_Algorithm)
* The key vibe here is “holy shit, that’s a
weird/unexpected algorithm”, but also, on reflection,
a pretty natural thing to learn if you’re built on
linear algebra - this is a core mindset for
interpreting networks!
+ Skim:
- [Reverse engineering modular
addition](https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking#Reverse_Engineering_the_Algorithm) -
understanding the different types of evidence and how they
fit together
- [Evolution of modular addition circuits during
training](https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking#Evolution_of_Circuits_During_Training) -
the flavour of what the circuits developing looks like
during training, and the fact that once we understand
things, we can just literally watch them develop!
* [The interactive graphics in the
colab](https://colab.research.google.com/drive/1F6_1_cWXE5M7WocUcpQWp3v8z4b1jL20#scrollTo=Analysis_During_Training)
are *way* better than static images
- [The Phase Changes
section](https://www.alignmentforum.org/posts/N6WM6hs7RQMKDhYjB/a-mechanistic-interpretability-analysis-of-grokking#Phase_Changes) -
probably the most interesting bits are the explanation of
grokking, and the two speculative hypotheses.
+ Maybe a good intro paper to replicate! It has an
[accompanying colab](http://bit.ly/neelgrokking) and a
list of future directions at the end
### Image Circuits
* [Feature
Vis](https://distill.pub/2017/feature-visualization/) (fairly
short)
+ An early paper with a really core technique for image
interpretability. Doesn’t really transfer to LLMs, but worth
getting the vibe, and seeing how this made image
interpretability much easier and more rigorous in certain
ways - the vibe that this basically automatically gives
variable names to neurons.
* [Multimodal Neurons in Artificial Neural
Networks](https://distill.pub/2021/multimodal-neurons/)
+ An analysis of neurons in a text + image model (CLIP), finding a
bunch of abstract + cool neurons. Not a high priority to
deeply engage with, but very cool and worth skimming.
+ My key takeaways
- [There are so many fascinating
neurons!](https://distill.pub/2021/multimodal-neurons/#guided-tour-of-neuron-families)
Like, what?
* There’s a
[teenage](https://microscope.openai.com/models/contrastive_4x/image_block_4_5_Add_6_0/2231)
neuron, a
[Minecraft](https://microscope.openai.com/models/contrastive_4x/image_block_4_5_Add_6_0/3)
neuron, a
[Hitler](https://microscope.openai.com/models/contrastive_4x/image_block_4_5_Add_6_0/309)
neuron and an
[incarcerated](http://microscope.openai.com/models/contrastive_4x/image_block_4_5_Add_6_0/2297)
neuron?!
- The intuition that multi-modal models (or at least, models
that use language) are incentivised to represent things in
a conceptual way, rather than specifically tied to the
input format
- The detailed analysis of the [Donald Trump
neuron](https://distill.pub/2021/multimodal-neurons/#person-neurons),
esp that it is more than just a “activates on Donald
Trump” neuron, and instead activates for many different
clusters of things, roughly tracking their association
with Donald Trump.
* This seems like weak evidence that neuron activations
may split more into interpretable segments, rather
than an interpretable directions
- The “[adversarial attacks by writing Ipod on an
apple](https://distill.pub/2021/multimodal-neurons/#typographic-attacks)”
part isn’t very deep, but *is* hilarious
* [The rest of the circuits
thread](https://distill.pub/2020/circuits/)
+ A lot of really cool ideas and scattered threads! Worth skimming
and digging into anything that catches your interest. Each
individual article is short-ish
+ This thread represents, in my opinion, the first serious attempt
at reverse engineering a real model (inception)
+ My personal favourites:
- [An Overview of Early Vision
Neurons](https://distill.pub/2020/circuits/early-vision/) -
it’s just fascinating to see the weird shit that happens,
super cool to the hierarchy where see simple shapes are in
early layers and are built into more abstract shapes in
layer layers, and to see neurons being sorted into
families
* If you click on a neuron, you’ll see the [weight
explorer](https://storage.googleapis.com/distill-circuits/inceptionv1-weight-explorer/mixed3a_47.html) -
this is a really fun tool to play around with, and
practice just reading off the weights what they do!
- [Visualising
weights](https://distill.pub/2020/circuits/visualizing-weights/) -
somewhat image specific, but a fascinating exploration of
the data visualisation questions underlying mechanistic
interpretability - visualisations are super useful, but
*how* can we do them in a properly principled way, and how
can they mislead?
* I really want to see more papers like this! These meta
questions are really important, but it’s rarely
incentivised to publish on them
- [Branch
Specialisation](https://distill.pub/2020/circuits/branch-specialization/) -
networks spontaneously learn to be modular *and* the
modules seem to be consistent and semantically
meaningful?! WTF?
Priority 4: Bonus
-----------------
* Not a paper: The [codebase of
EasyTransformer](https://github.com/neelnanda-io/Easy-Transformer),
a transformer mechanistic interpretability I’m writing - I think
it’s worth reading for a fairly clean and conceptual-focused
implementation of a transformer, specifically reading
[EasyTransformer.forward](https://github.com/neelnanda-io/Easy-Transformer/blob/ff0a4ef52d606277894acd306a97018999fee958/easy_transformer/EasyTransformer.py#L136)
and
[components.py](https://github.com/neelnanda-io/Easy-Transformer/blob/54f4b4bfd15448b98288a51eef7ddddeac23b68d/easy_transformer/components.py)
(a file for the various layers) (the actual codebase is pretty
long!)
* [Everything else Chris Olah](https://colah.github.io/) has
ever written
+ I’m somewhat biased on this, but I think Chris is just clearly
far and away the best interpretability researcher in the
world.
+ He’s also a massive nerd for good technical communication,
interactivity and good graphic design, and I find his work a
joy to read.
* [Interpreting RL
Vision](https://distill.pub/2020/understanding-rl-vision/)
+ Interesting application of image circuits techniques to get some
insight into an RL model - [unclear how much it
generalises/works](https://www.alignmentforum.org/posts/iJDmL7HJtN5CYKReM/empirical-observations-of-objective-robustness-failures)
+ The parts about the impact of the amount of and diversity of
data on interpretability feel most interesting and general to
me.
+ Probably the best RL mechanistic interpretability paper I know
of (but it’s a pretty low bar :( )
* Not a paper: Playing around with [OpenAI
Microscope](https://microscope.openai.com/) - visualizations
and top dataset examples of every neuron in a ton of image models!
Challenge: What’s the weirdest neuron you can find?
* [Visualizing and Interpreting the Geometry of
BERT](https://arxiv.org/abs/1906.02715) (+ [blog
post](https://pair-code.github.io/interpretability/bert-tree/))
+ An early LLM interpretability paper about understanding how BERT
represents language in the residual stream.
+ Deeply engage with:
- Applying t-SNE to the residual stream + getting resulting
visualizations. This was really clever and cool, and
understanding it is valuable.
+ Skim or skip:
- The detailed syntax tree stuff.
* [Acquisition of Chess Knowledge in
AlphaZero](https://arxiv.org/pdf/2111.09259.pdf) - analysing
AlphaZero’s chess knowledge, including during training
+ Notable for the hilarious stunt of getting a chess grandmaster
commenting, and for co-authoring (even if this isn’t that
interpretability related)
+ Focuses on feature analysis rather than really mechanistic
engagement, but still very cool! The main things I think are
cool were successfully applying interpretability during
training, and on the weird and fucky task of playing chess
(and that models trained on non-image/language tasks are
somewhat interpretable!).
* [Toward Transparent AI: A Survey on Interpreting the Inner
Structures of Deep Neural
Networks](https://arxiv.org/abs/2207.13243) - a decent survey
paper on what’s up in the rest of interpretability.
+ I’m personally pretty meh about the majority of the academic
field of interpretability (I rarely find insights from there
useful in my work) and would prioritise reading the papers in
the previous sections, but it’s worth skimming to get a sense
for what’s out there, and digging into anything relevant to a
specific project you’re pursuing!
- Also, for sanity checking whether I’m just being
overconfident/arrogant, and there’s actually a ton of
useful insights in standard interpretability for
mechanistic stuff! Again, this post is just a list of my
personal hot takes.
+ [A Primer in
BERTOLOGY](https://arxiv.org/pdf/2002.12327.pdf) - a
survey paper specifically on BERTology, a subfield about
specifically interpreting BERT. I feel pretty meh about this,
but am not very familiar with the field.
* [The Building Blocks of
Interpretability](https://distill.pub/2018/building-blocks/)
+ A cool and fun whirlwind tour of a bunch of different tools and
approaches for image interpretability. Worth skimming.
* Not a paper, but I find [Chris Olah’s interview on the 80,000
Hours
podcast](https://80000hours.org/podcast/episodes/chris-olah-interpretability-research/)
super inspiring |
cb2ef106-11ce-44a4-b842-7899dc463b27 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | AMA: Markus Anderljung (PM at GovAI, FHI)
*EDIT: I'm no longer actively checking this post for questions, but I'm likely to periodically check.*
Hello, I work at the [Centre for the Governance of AI](http://governance.ai/) (GovAI), part of the [Future of Humanity Institute](https://www.fhi.ox.ac.uk/) (FHI), University of Oxford, as a project manager, where I put time into e.g. recruitment, research management, policy engagement, and operations.
FHI and GovAI are hiring for a number of roles. Happy to answer questions about them:
* GovAI is hiring a [Project Manager](https://www.fhi.ox.ac.uk/ai-governance-project-manager/), to work alongside myself. Deadline September 30th.
* [FHI is hiring researchers](https://www.fhi.ox.ac.uk/researcher-hiring-2020/), across three levels of seniority and all our research groups ([including GovAI](https://www.fhi.ox.ac.uk/aigovernance-researchers/)). Deadline Oct 19th.
* The Future of Humanity Foundation, a new organisation aimed at supporting FHI, is [hiring a CEO](https://docs.google.com/document/d/1zHXQBh7Rwnp2Ja1oqtLvigZuA2p-mr0FbHukZqkFugk/edit). Deadline September 28th.
* We’re likely to open for applications to our [GovAI Fellowship](https://www.fhi.ox.ac.uk/governance-of-ai-fellowship/) over the next month or so, a 3-month research stint aimed at helping people get up to speed with and test their fit for AI governance research, likely starts in Jan or July 2021.
Relevant things folks at GovAI have published in 2020:
* [AI Governance: Opportunity and Theory of Impact](https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact), Allan Dafoe
* [The Windfall Clause](https://www.fhi.ox.ac.uk/wp-content/uploads/Windfall-Clause-Report.pdf), Cullen O’Keefe & others
* [A Guide to Writing the NeurIPS Impact Statement](https://medium.com/@GovAI/a-guide-to-writing-the-neurips-impact-statement-4293b723f832), Carolyn Ashurst & others
* [The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse?](https://www.fhi.ox.ac.uk/wp-content/uploads/The-Offense-Defense-Balance-of-Scientific-Knowledge.pdf), Toby Shevlane & Allan Dafoe
* [Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society](https://www.fhi.ox.ac.uk/wp-content/uploads/Near_term_long_term.pdf), Carina Prunkl & Jess Whittlestone (CFI)
A little more about me:
* At GovAI, I’ve been especially involved e.g. with our research on [public](https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/us_public_opinion_report_jan_2019.pdf) and ML researcher views on AI governance and forecasting (led by Baobao Zhang), [implications of increased data efficiency](https://www.fhi.ox.ac.uk/wp-content/uploads/Social-and-Governance-Implications-of-Improved-Data-Efficiency.pdf) (led by Aaron Tucker), the [NeurIPS Broader Impact Statement Requirement](https://medium.com/@GovAI/a-guide-to-writing-the-neurips-impact-statement-4293b723f832) (led by Carolyn Ashurst & Carina Prunkl), our [submission on the EU Trustworthy AI Whitepaper](https://www.fhi.ox.ac.uk/wp-content/uploads/EU-White-Paper-Consultation-Submission-GovAI-Oxford.pdf) (led by Stefan Torges), and on the what we can learn about AI governance from the governance of previous powerful technologies.
* Before coming to GovAI in 2018, I worked as the Executive Director of EA Sweden, e.g. running a project promoting representation for future generations (more info [here](https://forum.effectivealtruism.org/posts/6ojaD4vmxsSrZyBBF/representation-for-future-generations-in-sweden-a-summary-of)). I’ve also worked as a management consultant at EY Stockholm, and I ran the Giving What We Can: Cambridge group (now EA Cambridge) for a year.
* I was encouraged to write down some of my potentially unusual views to spur discussion. Here are some of them:
+ There should be more EA community-building efforts focused on professionals, say people with a few years of experience.
+ I think EAs tend to underestimate the value of specialisation. For example, we need more people to become experts in a narrow domain / set of skills and then make those relevant to the wider community. Most of the impact you have in a role comes when you’ve been in it for more than a year.
+ There is a vast array of important research that doesn’t get done because people don’t find it interesting enough.
+ People should stop using “operations” to mean “not-research”. I’m guilty of this myself, but it clumps together many different skills and traits, probably leading to people undervaluing them.
+ Work on EU AI Policy is plausibly comparably impactful to US policy on the current margin, in particular over the next few years as the [EU Commission's White Paper on AI](https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf) is translated into concrete policy.
+ I think the majority of AI risk is [structural](https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure) – as opposed to stemming from malicious use or accidents – e.g. technological unemployment leading to political instability, competitive pressures, or decreased value of labour undermining liberal values.
+ Some forms of expertise which I'm excited to have more of at GovAI include: institutional design (e.g. how should the next Facebook Oversight Board-esque institution be set up), transforming our research insights into policy proposals (e.g. answering questions like what EU AI Policy we should push for, how a system to monitor compute could be set up), AI forecasting, along with relevant bits of history and economics. |
d5cd2d13-1f3d-4a90-a6da-f6c693bda89e | trentmkelly/LessWrong-43k | LessWrong | If digital goods in virtual worlds increase GDP, do we actually become richer?
Noah Smith, in this article, argues that the Metaverse could enable economic growth to increase a lot and sharply decouple itself from real-world resource usage. By creating markets in which we buy and sell immaterial things, world GDP would grow.
He also says, rightly, that GDP correlates with the well-being of a nation.
But there's a non-stated point: would creating huge markets in the Metaverse for buying and selling digital goods make us actually richer? What I mean is this: suppose that, thanks to the Metaverse, huge virtual economies get created and people get real money out of stuff they sell in these economies. But suppose that e.g., agricultural production output doesn't go up much. Does that mean that we're simply going to pay more for groceries, without being able to afford more of them? The more general question is: would real-world stuff simply get a lot more expensive, and so our well-being doesn't really increase besides us being able to afford digital goods and having richer virtual lives? (This must count for something, but I'm more interested in whether virtual economies somehow would trickle to the real economy and make us able to afford more physical stuff.)
This is not a leading question, I genuinely can't tell what's the right answer, because I don't feel confident enough in my knowledge of economics. Perhaps a way to rephrase is: what dominates here, inflation or virtual GDP growth? Is that even the right way of looking at the problem? |
5461cc35-b678-4880-bcec-1a0ab14b9c29 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | What more compute does for brain-like models: response to Rohin
This is a response to a comment made by [Rohin Shah](https://www.lesswrong.com/users/rohinmshah) on [Daniel Kokotajlo](https://www.lesswrong.com/users/daniel-kokotajlo)'s post [Fun with +12 OOMs of Compute](https://www.lesswrong.com/posts/rzqACeBGycZtqCfaX/fun-with-12-ooms-of-compute). I started trying to answer some questions and assumptions he had, then realized there was more of an inferential gap that needed filling in. Also, as I attempted to estimate the OOMs of compute above GPT-3/PaLM needed for each method, I realized I was just going off of vague guesses rather than grounded estimates based on recent benchmarks. So, since other people might also be lacking the same info and be curious about my answer, I decided to put a bit more work into answering and turn it into a full post.
Introducing the cast
--------------------
First, I'd like to note that I don't endorse trying to get to AGI through any of these methods. I think they are potentially worse for interpretability in addition to being less compute efficient. My goal here is to point out that I think it could be done if the world were suddenly given lots more compute. In other words, I shall make the argument that given lots of compute, issues of limited data and potential scaling plateaus of artificial neural nets can be bypassed via other less compute efficient methods. Many roads lead to AGI, and specific predictions about the failure of one specific path (e.g. Transformers) don't necessarily mean all the paths are affected by that predicted failure mode.
### The main contenders
1. [Numenta](https://numenta.com/) - moderately detailed, most realistic-task proven. Designers made carefully chosen abstractions which may or may not be right.
2. Spiking Neural Network simulators (spikingNNs) - [I mentioned [Nengo](https://www.nengo.ai/) previously because its the one I'm most familiar with, but researching this showed me that there are other better performing options with similar abstraction levels such as [BindsNet](https://github.com/BindsNET/bindsnet) and [brian2genn](https://brian2genn.readthedocs.io/en/stable/introduction/index.html) ] moderately detailed, moderately compute efficient, moderately task proven. Fewer abstractions than Numenta, more than Blue Brain. Less chance that an important detail was omitted, but still some chance.
3. [Blue Brain](https://www.epfl.ch/research/domains/bluebrain/) - highly detailed, very compute inefficient, not task proven. Very few abstractions, so relatively high chance that it contains all necessary details for a functioning neocortex.
### Supporting roles
The field of computational neuroscience has generated [lots](https://compneuroweb.com/database.html) and [lots](https://brian2.readthedocs.io/en/stable/index.html) of very narrowly focused models of particular subsets of lots of different brains. None of these is alone likely to turn into a full blown AGI if you throw compute at them, but they have useful additional details that could potentially get the main contenders unstuck from unexpected scaling plateaus.
Emulation
---------
By brain emulation, I mean trying to make a model that captures some of the observed functions of brain circuits. These models vary widely in how much fidelity to fine details they strive for, versus a more abstracted approximation. More detail brings the risk that you got one of those details wrong, and also means potentially requiring exponentially more compute to scale. Less detail means more reliance on having made the correct abstractions.
Neuroscientists have a failure mode around trying to make too accurate and detailed of models. After all, if you've spent years of your life painstakingly measuring the tiny details, it can be really hard to swallow the idea that you might have to discard any of those details as irrelevant. I think [Jan](https://www.lesswrong.com/users/jan-2) sums it up well in this [comment](https://www.lesswrong.com/posts/67exdQCqJARtRbtru/a-brief-excursion-into-molecular-neuroscience?commentId=kt42nQnRzcf7XhCs4):
> Yes, I agree, a model can really push intuition to the next level! There is a failure mode where people just throw *everything* into a model and hope that the result will make sense. In my experience that just produces a mess, and you need some intuition for how to properly set up the model.
>
>
Each of the three contenders I mentioned have very different levels of detail and have chosen different abstractions.
What do these three main contenders have in common? A focus on the mammalian neocortex, the part of the brain that does the General Intelligence stuff, the part that humans have extra of. Neuroscience has lots of evidence showing that this is the critical part of the brain to emulate if you want a model that is able to reason abstractly about things. I won't go into depth here, but I will give you this quote from Numenta (see Jeff Hawkins' latest [book](https://numenta.com/a-thousand-brains-by-jeff-hawkins) for more depth, or this [paper](https://numenta.com/assets/pdf/research-publications/papers/Companion-paper-to-Thousand-Brains-Theory-of-Intelligence.pdf) for a quick intro):
> Old brain vs. new brain
> A simple way to think about the brain is that it has two parts: the “old brain” and the “new brain.” The old brain, which comes from our reptilian ancestors and pre-dates dinosaurs, contains several different structures, such as the spinal cord and brainstem. It regulates your body (such as breathing), creates reflex behaviors (such as pulling your hand away from fire) and creates emotions (such as desire and anger). The new brain, or neocortex, is a single large organ. It sits on top of the old brain and is the brain’s analytical engine. It’s the part that can identify objects, learn a new language, or understand math.
>
>
Worth noting for each of these projects that their focus is on the neocortex. The Blue Brain project which talks about rodent brains is only a few well-understood parameter changes away from being a very accurate emulation of the human neocortex. They are careful not to do this because of the ethical implications of accurately simulating human neocortex tissue. I'm pretty confident from things that some of the project participants have said that they'd love to try simulating a whole human brain if given the compute and lack of oversight.
For example (emphasis mine) a quote from [Rebasing I/O for Scientific Computing: Leveraging Storage Class Memory in an IBM BlueGene/Q Supercomputer by Schürmann et al 2014](http://www.blakefitch.com/pubs/BGAS_ISC_2014_Paper_Final.pdf):
> Combined with the large numbers of those entities, e.g. an estimated 200 million neurons and 1012 synapses for an entire rat brain [10], the resulting memory footprint is large and at the same time the algorithmic intensity low. With the **human brain** being an additional three orders of magnitude more complex, cellular models of the
> **human brain** **will** occupy a daunting estimated 100PB of memory that **will** need
> to be revisited by the solver at every time step.
>
>
Human cortical neuron properties are pretty well known in a lot of respects and are already able to be simulated on the Blue Brain system, they just are careful not to get hit by media hype/outrage by talking about large scale human neocortex experiments. An example of a small scale human cortical neuron experiment: https://live-papers.brainsimulation.eu/#2016-eyal-et-al
How much compute?
-----------------
So I would argue that all of the main contenders are very training data efficient compared to artificial neural nets. I'm not going to go into detail on that argument, unless people let me know that that seems cruxy to them and they'd like more detail.
One of the things these contenders fall short on though is compute efficiency. For the sake of Daniel's thought experiment, I'd like to give some rough estimates on how much compute I think would be necessary to get a half-brain of compute for each of these.
For artificial neural networks, the meaning of a 'neuron' or 'parameter' is less directly analogous to a neocortex neuron. For these emulations, the analogy holds together much better. The rough average number of neurons in the human neocortex is around [26 billion](https://www.frontiersin.org/articles/10.3389/neuro.09.031.2009/full#B48). So let's say 13 billion for the half-neocortex case.
### Numenta training compute estimate
Ok, I just give up for now on finding benchmarks to accurately estimate this one. I give a rough guess at 'somewhere between the other two, closer to the Spiking Neural Nets'.
Here's the best summary I can give: they break the artificial neurons down into collections of artificial dendrites, which then have a very sparse activation and very sparse weights. This seems to help learn more from a given dataset, and to have an extended amount of information that can be 'fit' into the network without 'overwriting' previous info. The downside is that it's substantially less efficient to 'get' the information into the network in the first place. Like, it needs maybe 10x more epochs over the same dataset before it starts doing better than the feed forward multilayer perceptron was doing a while ago. But its learning doesn't plateau as soon, so it can eventually surpass the roughly-equivalent MLP.

### Spiking Neural Net training compute estimate
my estimate: 3.82e24 flops
about 1 OOM over GPT-3
less than an OOM over PaLM
For this category, I would add an additional OOM for the fact that the abstraction may be lossy/inefficient in capturing what actual brain neurons do. For instance, I noticed that the benchmark they were using in the papers had undershot the number of synapses for human pre-frontal cortex by an order of magnitude. Could be other things like that as well.
Unlike Numenta, where the abstraction is very well thought out and I think it will either totally work or not, depending on whether they are as correct as they think they are about their abstraction.
Or Blue Brain, where there is so much accuracy and so little abstraction I feel quite confident it'll work as expected on a emulated-neuron == real-neuron basis.
### Blue Brain training compute estimate
my estimate: 2.37e33 FLOPs
10 OOMs over GPT-3
9 OOMs over PaLM
from <https://blog.heim.xyz/palm-training-cost/> :
| ML Model | Training Compute [FLOPs] | x GPT-3 |
| --- | --- | --- |
| GPT-3 (2020) | 3.1e23 | 1x |
| Gopher (2021) | 6.3e23 | 2x |
| Chinchilla (2022) | 5.8e23 | 2x |
| PaLM (2022) | 2.5e24 | 10x |
Sources:
--------
### Numenta paper 1
<https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwiqwPDF84_3AhUtEEQIHchvC2wQFnoECBYQAQ&url=https%3A%2F%2Fnumenta.com%2Fassets%2Fpdf%2Fresearch-publications%2Fpapers%2FSparsity-Enables-100x-Performance-Acceleration-Deep-Learning-Networks.pdf&usg=AOvVaw33dSHmz30T0fhBKWcfBMne>
Using 8 bit compression of values via a unique mapping scheme, and running on FPGAs... hard to compare. Their mapping scheme pre-estimates the range of all variables, splits large numbers into lossy quantized representations spread across multiple 8 bit (INT8) numbers during encoding. So to get the equivalent of a FLOP, a floating point operation, you need to do several fixed-point 8 bit operations (FP-8bit-OPs). On average, maybe 4 FP-8bit-OPs per single precision FLOP?

https://semiengineering.com/tops-memory-throughput-and-inference-efficiency/
What is TOPS? It means Trillions or Tera Operations per Second. It is primarily a measure of the maximum achievable throughput but not a measure of actual throughput. Most operations are MACs (multiply/accumulates), so TOPS = (number of MAC units) x (frequency of MAC operations) x 2
Alveo U250 datasheet says it gets 33.3 INT8 TOPs at peak.
rough guess of divide TOPs by 4 to get a terraFLOPs equivalent for Numenta's specific use case, based on studying their encoding.
= 8.325 pseudo-terraFLOPs = 8.325e9 psuedoFLOPs / second
? bio\_seconds took ? wall clock seconds
flops / neuron
flops / neurons = flp/n
flp/n per bio\_second
flp/n / ? bio\_second = flp/n/s
So, for 1.3e9 neurons of the Cortex+Plasticity simulation type, for 15 bio\_years of 'training time':
flops per second of biological time:
15 years of bio time need for training? = 3.154e7 sec/year \* 15 years = 4.73e8 seconds of bio time
total compute needed for training = flp/n/s \* 4.78e8 bio\_seconds \* 1.3e9 neurons = flops
Numenta paper 2
Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments
<https://arxiv.org/abs/2201.00042>
separates out the neurons into collections of artificial dendrites in sparse matrices. Because it's not using FPGAs here, and doing task comparisons against standard multi-layer perceptron feed-forward networks, the compute is easier to compare. They give numbers for the estimated 'effective number of parameters' because the sparse nature of the networks means that the number of parameters looks huge but is effectively small for the amount of compute required to train and infer using them. Several experiments are listed in the paper.


> When employing the prototype method described in Section 4.2.1 to select context signals at test time only, we train an
> Active Dendrites Network with 2 hidden layers that comprise Active Dendrites Neurons. For all training, we use the
> Adam optimizer [Kingma and Ba, 2015] and a batch size of 256 samples. Table 3 gives the exact hyperparameters
> and model architecture for each model we train and evaluate on permutedMNIST. Note that hyperparameters were
> optimized indidually for each setting.
> To combine Active Dendrites Network with SI, and to compare against XdG, we reduce the number of units in each
> hidden layer from 2,048 to 2,000 as to exactly match the architectures (with the exception of dendritic segments)
> used in the SI and XdG papers. (See Appendix for a discussion on the number of parameters.) In addition, the
> SI-and-Active-Dendrites network is trained for 20 epochs per task instead of just 3 as this significantly improves results.
> We fix the learning rate to be 5 × 10−4 for all numbers of tasks, and we use SI regularization strength c = 0.1 and
> damping coefficient ξ = 0.1. Both a) training for 20 epochs per task and b) the c, ξ values that we use here align with
> the training setups of Zenke et al. [2017] and Masse et al. [2018].
>
>
### SpikingNN paper 1
<https://www.sciencedirect.com/science/article/abs/pii/S0925231221003969>
full text manuscript: <https://www.sciencedirect.com/science/article/am/pii/S0925231221003969>
Ubuntu 18.04 LTS with Intel(R) Xeon(R)
CPU E5-2620 v4 @ 2.1 GHz and 32 GB RAM

### SpikingNN paper 2
<https://www.nature.com/articles/s41598-019-54957-7>
> For illustration we have used the data from the TITAN Xp card and Intel Core i9-7920X CPU
>
>
Caption for graph
> Overview of the components that make up the total runtime of a simulation for the Mbody (left) and the COBAHH benchmark (right). The top panels show the time spent in the simulation itself which scales with the biological runtime of the model (shown at the right) and dominates the overall runtime for big networks and/or long simulations. Simulation times were measured for biological runtimes of 10 s (middle line), while the times for runs of 1 s (bottom line) and 100 s (top line) were extrapolated. The bottom panels show the time spent for code generation and compilation (blue), general overhead such as copying data between the CPU and the GPU (orange), and the time for synapse creation and the initialization of state variables before the start of the simulation (green). The details shown here are for single-precision simulations run on the Titan Xp GPU.
>
>

10 bio\_seconds took 10^4 wall clock seconds
so 1 bio\_second to 1000 wall clock seconds for 2.05e7 neurons
flops = cores \* (cycles/second) \* (flops/cycle)
flops = (1 node \* 3840 cores) \* ( 1.6e9 cycles / second) \* ( 2 flops / cycle) \* 1e3 seconds = 1.229e16
flops / neuron
flops / 2.05e7 neurons = 6.14e6 flp/n
flp/n per bio\_second
flp/n / 1 bio\_second = 6.14e6 flp/n/s
So, for 1.3e9 neurons of the Cortex+Plasticity simulation type, for 15 bio\_years of 'training time':
<https://en.wikipedia.org/wiki/FLOPS> says 2 flops per cycle per core for single-precision simulations run on the Titan Xp GPU (3840 cores)
flops per second of biological time:
15 years of bio time need for training? = 3.154e7 sec/year \* 15 years = 4.73e8 seconds of bio time
total compute needed for training = 6.14e6 flp/n/s \* 4.78e8 bio\_seconds \* 1.3e9 neurons = 3.82e24 flops
<https://github.com/BindsNET/bindsnet>

### Blue Brain paper 1
[Large-Scale Simulation of Brain Tissue, Blue Brain Project, EPFL](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiyxM39vo_3AhV6JEQIHdc8A1YQFnoECBUQAQ&url=https%3A%2F%2Fpublications.anl.gov%2Fanlpubs%2F2018%2F11%2F148038.pdf&usg=AOvVaw3SLaDAX9lvw0NcQbWRP_FV)
[Technical Report for the ALCF Theta Early Science Program](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwiyxM39vo_3AhV6JEQIHdc8A1YQFnoECBUQAQ&url=https%3A%2F%2Fpublications.anl.gov%2Fanlpubs%2F2018%2F11%2F148038.pdf&usg=AOvVaw3SLaDAX9lvw0NcQbWRP_FV)
### Blue Brain paper 2
[CoreNEURON : An Optimized Compute Engine for the NEURON Simulator](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6763692/)
<https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6763692/>
From abstract:
> We describe how CoreNEURON can be used as a library with NEURON and then compare performance of different network models on multiple architectures including IBM BlueGene/Q, Intel Skylake, Intel MIC and NVIDIA GPU.
>
>
From intro:
> In the model of Markram et al. ([2015](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6763692/#B39)) each neuron averages to about 20,000 differential equations to represent its electrophysiology and connectivity. To simulate the microcircuit of 31,000 neurons, it is necessary to solve over 600 million equations every 25 ms of biological time...
>
>
In general, this paper describes the journey to making the Blue Brain NEURON model more efficient and able to work with GPUs. And then doing benchmarking comparisons.
> The benchmarking systems with hardware details, compiler toolchains and network fabrics are summarized in [Table 3](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6763692/table/T3/). The Blue Brain IV (BB4) and Blue Brain V (BB5) systems are based on IBM BlueGene/Q (Haring et al., [2012](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6763692/#B16)) and HPE SGI 8600 (Hewlett Packard Enterprise, [2019](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6763692/#B18)) platforms respectively, hosted at the Swiss National Computing Center (CSCS) in Lugano, Switzerland. The BB4 system has 4,096 nodes comprising 65,536 PowerPC A2 cores. The BB5 system has three different compute nodes: Intel KNLs with low clock rate but high bandwidth MCDRAM, Intel Skylakes with high clock rate, and NVIDIA Volta GPUs. Vendor provided compilers and MPI libraries are used on both systems. The BB4 system is used for strong scaling benchmarks (see [Figure 8](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6763692/figure/F8/)) as it has a large core count compared to the BB5 system. All benchmarks were executed in pure MPI mode by pinning one MPI rank per core.
>
>
> Strong scaling of CoreNEURON on the BB4 system (BlueGene/Q IBM PowerPC A2, 16 cores @ 1.6 GHz, 16 GB DRAM ) for two large scale models listed in [Table 1](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6763692/table/T1/): the Cortex+Plasticity model with 219 k neurons. [nathan note: blue line is actual measurement, black line is theoretical optimum]
>
>
>
>
Relevant part of the Table 1 discussed above:
| | | | | |
| --- | --- | --- | --- | --- |
| Model name | Summary | #Neurons | #Compartments | #Synapses |
| Cortex + Plasticity | Somatosensory cortex model with synaptic plasticity | 2.19e5 | 9.95e7 | 8.72e8 |
Note: one major parameter change in human neocortex vs rodent is that human neocortex has more synaptic connections per number of neurons. This hurts scaling somewhat because of the additional complexity. Not able to give a precise estimate for this additional compute based on the data I've found so far on their work. My guess is somewhat less than 2 OOMs extra cost in worst case.
Note for anyone trying to read this paper: a comprehension-gotcha is that they confusingly talk about both 'compute nodes' (the computers or virtual computers used), and 'neuron nodes' (the component parts of a neuron which are each individually simulated each timestep) using just the term 'nodes'. You have to keep the context of the paragraph straight to know which one they mean at any given time.
So, from these two papers, although they don't quite lay out all the parameters together in an easy-to-interpret way...
bbp paper1: 27 seconds of compute time for 0.1 seconds of biological time for 1? neuron(s) on a single compute node? (GPU system)
flops per second of biological time:
bbp paper2: 2.19e5 rodent cortex neurons requires 2e3 seconds of 2048 nodes, each node 16 cores @ 1.6GHz for 0.001? seconds of biological time (abbr: bio\_second). (supercomputer baseline, not GPU measurement)
flops = cores \* (cycles/second) \* (flops/cycle)
flops = (2048 nodes \* 16 cores) \* ( 1.6e9 cycles / second) \* ( 8 flops / cycle) \* 2e3 seconds = 8.39e17
flops / neuron
8.39e17 flops / 2.19e5 neurons = 3.83e12 flp/n
flp/n per bio\_second
3.82e12 flp/n / 0.001 bio\_second = 3.83e15 flp/n/s
So, for 1.3e9 neurons of the Cortex+Plasticity simulation type, for 15 bio\_years of 'training time':
<https://en.wikipedia.org/wiki/FLOPS> says that IBM PowerPC [A2](https://en.wikipedia.org/wiki/IBM_A2) (Blue Gene/Q) gets 8 64bit flops per core per cycle
(The Blue Brain project was so named because it was designed in cooperation with IBM specifically to work with the Blue Gene supercomputer)
flops per second of biological time:
15 years of bio time need for training? = 3.154e7 sec/year \* 15 years = 4.73e8 seconds of bio time
total compute needed for training = 3.82e15 flp/n/s \* 4.78e8 bio\_seconds \* 1.3e9 neurons = 2.37e33 flops = 2.37e18 petaFLOPs
### other Blue Brain papers:
[In-Memory Compression for Neuroscience Applications - Bayly](https://github.com/DevinBayly/gsoc_report/blob/master/report.pdf)
<https://github.com/DevinBayly/gsoc_report/blob/master/report.pdf>

[Reconstruction and Simulation of Neocortical Microcircuitry](https://www.cell.com/cell/fulltext/S0092-8674(15)01191-5)
<https://www.cell.com/cell/fulltext/S0092-8674(15)01191-5>
### Side note: Why half-brain?
Because there are multiple sources of evidence for half a human brain being sufficient to instantiate a general reasoning agent.
One of these is the case of [hemispherectomy](https://en.wikipedia.org/wiki/Hemispherectomy). People with severe seizures have had portions of their brain removed to stop the seizures. This operation can be as extreme as an entire hemisphere of the brain. If this happens in childhood while the brain connections are still highly plastic, then close-to-normal function can be regained.
Another case I know of involved a birth defect resulting in a missing hemisphere.
And yet another way significant brain tissue loss can happen is an ischemic event (oxygen deprivation and sudden harmful return). This tends to be quite bad for older adults who commonly experience this via strokes, because the brain is set in its ways by then and has a hard time regaining enough plasticity to rewire around the damage. But if it happens to a child, (e.g. a partial drowning), recovery is usually quite good (depending on exactly which bits are affected).
I think you could make do with even less than 50% if you were thoughtful about what you cut. Maybe as little as 30%. That's not a necessary condition for this thought experiment though. |
0f85923f-4e45-4ab0-a905-8d80aca78336 | StampyAI/alignment-research-dataset/arbital | Arbital | Digit wheel
A mechanical device for storing a number from 0 to 9. Devices such as these were often attached to the desks of engineers, as part of historical "desktop computers."

Each wheel can stably rest in any one of ten states. Digit wheels are often useful when it comes to thinking about how much data a physical object can store, as they were physical objects that were clearly designed for storing data. See also [https://arbital.com/p/3y1](https://arbital.com/p/3y1). |
e04bb247-eea1-4bdb-a06f-2a451590548a | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Biological Anchors external review by Jennifer Lin (linkpost)
This report is one of the winners of the [EA Criticism and Red Teaming Contest](https://forum.effectivealtruism.org/posts/YgbpxJmEdFhFGpqci/winners-of-the-ea-criticism-and-red-teaming-contest#Biological_Anchors_external_review_by_Jennifer_Lin___20_000_).
**Summary:** This is a summary and critical review of [Ajeya Cotra’s biological anchors report on AI timelines](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines). It provides an easy-to-understand overview of the main methodology of Cotra’s report. It then examines and challenges central assumptions of the modelling in Cotra’s report. First, the review looks at reasons why we might not expect 2022 architectures to scale to AGI. Second, it raises the point that we don’t know how to specify a space of algorithmic architectures that contains something that could scale to AGI and can be efficiently searched through (inability to specify this could undermine the ability to take the evolutionary anchors from the report as a bound on timelines). |
5b20f5b3-92f8-406f-a9a6-bfb99e0a95ed | trentmkelly/LessWrong-43k | LessWrong | College Student Philanthropy and Funding Millenium Villages
One interesting idea comes from "How Students Can Support a Millennium Village?", which talks about, obviously, funding a Millenium Village: (See also the school's news report.)
> Last year at Carleton University our group, Students To End Extreme Poverty, worked to get a question to referendum where students voted on whether or not they would all have to automatically pay an additional $6 in tuition fees ($5352 instead of $5346) to help support a Millennium Village. It worked. Carleton students now contribute over $110,000 annually.
>
> Here is our hope: By getting enough universities and organizations to support Millennium Villages (aside from helping a couple communities help themselves out of extreme poverty) it would raise enough awareness, get enough media attention, engage enough people, foster enough cooperation, and generate enough civil society will to see policy changes: more and better aid, fairer trade, and debt cancellation.
>
> Worst case scenario: thousands of people, many of whom would otherwise be dead, will have the basic tools they need to lift themselves out of extreme poverty.
Might this be a plausible thing to try and do on other college campuses?
With an additional $6 per student, at my university, Denison University, we could raise $12,792 per year in philanthropy. To reach Carleton's level, we'd need $51.80 per student per year. At $12,792, we could fund 213 people to live in a Millenium Village.
The Impact of College Philanthropy
What exactly is the impact of college philanthropy? Philanthropic dollars certainly aren't useless, and the age-old saying that "anything helps" is certainly true. But many social problems would require money on the scale of millions, if not billions, of dollars to help solve. Student giving is typically on the scale of thousands. Raise $10 from each of Denison's students, and you'll be getting $21320, assuming everyone contributes. What's that worth?
College philanthropy raises awareness. A |
25bde7c8-597b-4a09-bd51-cd1bddc9f6ee | trentmkelly/LessWrong-43k | LessWrong | Indifference and compensatory rewards
A putative new idea for AI control; index here.
It's occurred to me that there is a framework where we can see all "indifference" results as corrective rewards, both for the utility function change indifference and for the policy change indifference.
Imagine that the agent has reward R0 and is following policy π0, and we want to change it to having reward R1 and following policy π1.
Then the corrective reward we need to pay it, so that it doesn't attempt to resist or cause that change, is simply the difference between the two expected values:
* V(R0|π0)−V(R1|π1),
where V is the agent's own valuation of the expected reward, conditional on the policy.
This explains why off-policy reward-based agents are already safely interruptible: since we change the policy, not the reward, R0=R1. And since off-policy agents have value estimates that are indifferent to the policy followed, V(R0|π0)=V(R1|π1), and the compensatory rewards are zero. |
528d25e2-1a89-4055-a23c-858d24f89833 | trentmkelly/LessWrong-43k | LessWrong | Modeling Risks From Learned Optimization
This post, which deals with how risks from learned optimization and inner alignment can be understood, is part 5 in our sequence on Modeling Transformative AI Risk. We are building a model to understand debates around existential risks from advanced AI. The model is made with Analytica software, and consists of nodes (representing key hypotheses and cruxes) and edges (representing the relationships between these cruxes), with final output corresponding to the likelihood of various potential failure scenarios. You can read more about the motivation for our project and how the model works in the Introduction post. The previous post in the sequence, Takeoff Speeds and Discontinuities, described the different potential characteristics of a transition from high-level machine intelligence [1] to superintelligent AI.
We are interested in feedback on this post, especially in places where the model does not capture your views or fails to include an uncertainty that you think could be an important crux. Similarly, if an explanation seems confused or confusing, flagging this is useful – both to help us clarify, and to ensure it doesn’t reflect an actual disagreement.
This post explains how risks from learned optimization are incorporated in our model. The relevant part of the model is mostly based on the Risks from Learned Optimization sequence and paper (henceforth RLO). Although we considered responses and alternate perspectives to RLO in our research, these perspectives are not currently modeled explicitly.
For those not familiar with the topic, a mesa-optimizer is a learned algorithm that is itself an optimizer. According to RLO, inner alignment is the problem of aligning the objective of a mesa-optimizer with the objective of its base optimizer (which may be specified by the programmer). A contrived example supposes we want an algorithm that finds the shortest path through any maze. In the training data, all mazes have doors that are red, including the exit. Inner misa |
8fd64ecf-f3a2-4a9e-8629-2e1585012414 | trentmkelly/LessWrong-43k | LessWrong | Meetup : February 2015 Rationality Dojo - Introduction to Ethics
Discussion article for the meetup : February 2015 Rationality Dojo - Introduction to Ethics
WHEN: 01 February 2015 03:30:00PM (+0800)
WHERE: Ross House Association, 247-251 Flinders Lane, Melbourne
February 2015 Rationality Dojo - Introduction to Ethics WHEN: 1 February 2015 at 15:30 - 19:00 WHERE: Ross House Association, 247-251 Flinders Lane, Melbourne
[ATTN: This month we will be back at our usual location - the Jenny Florence Room, Level 3, Ross House at 247 Flinders Lane, Melbourne. 3:30pm start / arrival / preparation - formal dojo activities will commence at 4:00pm.]
The Less Wrong Sunday Rationality Dojos are crafted to be serious self-improvement sessions for those committed to the Art of Rationality and personal growth. Each month a community member will run a session involving a presentation of content, discussion, and exercises.
Continuing the succession of immensely successful dojos, Gus will run a session on Introduction to Ethics.
As always, we will review the personal goals we committed to at the previous Dojo (I will have done X by the next Dojo). Our goals are now being recorded via Google Forms here -https://docs.google.com/forms/d/1MCHH4MpbW0SI_2JyMSDlKnnGP4A0qxojQEZoMZIdopk/viewform, and Melbourne Less Wrong organisers have access to the form results if you wish to review the goals you set last month.
This month, we are also seeking 2-3 lightning talks from members. Have you learned something cool? Gained an insight from one of the Less Wrong sequences? Give a lightning talk and share it with Melbourne LW! Speakers will be limited to 5 minutes with room for questions. If you have something you would like to present a lightning talk on, please contact Louise with your topic at lvalmoria@gmail.com or talk to Richard on the day.
The Dojo is likely to run for 2-3 hours, after which some people will get dinner together.
If you have any trouble finding the venue or getting in, text or call Richard on 0421231789
If you would like to present |
f33df754-b769-4283-8347-4800a03ae51e | trentmkelly/LessWrong-43k | LessWrong | A solvable Newcomb-like problem - part 2 of 3
This is the second part of a three post sequence on a problem that is similar to Newcomb's problem but is posed in terms of probabilities and limited knowledge.
Part 1 - stating the problem
Part 2 - some mathematics
Part 3 - towards a solution
----------------------------------------
In game theory, a payoff matrix is a way of presenting the results of two players simultaneously picking options.
For example, in the Prisoner's Dilemma, Player A gets to choose between option A1 (Cooperate) and option A2 (Defect) while, at the same time Player B gets to choose between option B1 (Cooperate) and option B2 (Defect). Since years spent in prison are a negative outcome, we'll write them as negative numbers:
So, if you look at the bottom right hand corner, at the intersection of Player A defecting (A2) and Player B defecting (B2) we see that both players end up spending 4 years in prison. Whereas, looking at the bottom left we see that if A defects and B cooperates, then Player A ends up spending 0 years in prison and Player B ends up spending 5 years in prison.
Another familiar example we can present in this form is the game Rock-Paper-Scissors.
We could write it as a zero sum game, with a win being worth 1, a tie being worth 0 and a loss being worth -1:
But it doesn't change the mathematics if we give both players 2 points each round just for playing, so that a win becomes worth 3 points, a tie becomes worth 2 points and a loss becomes worth 1 point. (Think of it as two players in a game show being rewarded by the host, rather than the players making a direct bet with each other.)
If you are Player A, and you are playing against a Player B who always chooses option B1 (Rock), then your strategy is clear. You choose option A2 (Paper) each time. Over 10 rounds, you'd expect to end up with $30 compared to B's $10.
Let's imagine a slightly more sophisticated Player B, who always picks Rock in the first round, and then for all other rounds pic |
07f9c24a-ce4f-4ec2-8b80-0c1fbb921e78 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Do you worry about totalitarian regimes using AI Alignment technology to create AGI that subscribe to their values?
Hi all! First I want to say that I really enjoyed this forum in the past few months, and eventually decided to create an account to post this question. I am still in the process of writing the short version of this question, so thank you for bearing with me in the long version.
As some of you may know, last year we have seen unprecedented uprising against totalitarian regimes. As a Chinese national active in the diaspora dissent community, I have never been more encouraged by the courage and creativity of my people; as an ML practitioner, I am more and more worried about AGI being the most powerful governing tool humanity has seen yet.
China's Zero-COVID policy gave us a first taste of what this future would feel like - personal location tracking limits your freedom of mobility; a remote "system" will decide what you can or cannot do and you will be informed via an app on your phone; when you try to push back, it is like trying to hit a bureaucratic wall.
Most importantly, Zero-COVID gave rise to a whole value system that sees society as a simplified trolley problem: the government -- an all-knowing entity -- holds the lever, and will be deciding what is best for the whole. Collectivism is equivalent to altruism, individualism is equivalent to being selfish, and the most honorable thing for an individual to do is to obey. This value system is pretty compelling, and has been pushed into every grade school kid. US's failure and massive death toll is also a convenient gotcha.
Needless to say many people in China do not subscribe to this value, but many people do, and more often than not it is the latter group that are the agent of your day-to-day act of suppression. The policy eventually collapsed partially due to uprising, but even during the height of the uprising there were still significant momentum on the pro-Zero-COVID side for the policy to keep going. My suspicion is what eventually brought down Zero-COVID was the unbearable price tag, especially for local governments. However, I can totally see if COVID happened in 2030 instead of 2020 (1o years are nothing in earth years), the price tag will be much sustainable.
It is no news that 1) AI tend to converge to monopoly, and 2) totalitarian regimes will want to use AI to extend their power. We also know that 3) AI alignment seeks to build the ability for us to embed our values into AI. I deeply worry about the gentle seduction of AI technology in China, seducing us to yield more and more of our agency to an AGI that may align with a value system that represent the interest of the ruling entity, and there will be less and less room for pushing back. |
be84480e-c2fe-40a9-8778-941178222feb | StampyAI/alignment-research-dataset/special_docs | Other | Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development
T
ECHNICAL
R
EPORT
S t a n d a r d s f o r A I G o v e r n a n c e :
I n t e r n a t i o n a l S t a n d a r d s t o E n a b l e G l o b a l
C o o r d i n a t i o n i n A I R e s e a r c h & D e v e l o p m e n t
P e t e r C i h o n
1
R e s e a r c h A f f i l i a t e , C e n t e r f o r t h e G o v e r n a n c e o f A I
F u t u r e o f H u m a n i t y I n s t i t u t e , U n i v e r s i t y o f O x f o r d
p e t e r c i h o n @ g m a i l . c o m
April 2019
1
F o r h e l p f u l c o m m e n t s o n e a r l i e r v e r s i o n s o f t h i s p a p e r , t h a n k y o u t o J a d e L e u n g , J e f f r e y D i n g , B e n G a r f i n k e l , A l l a n
D a f o e , M a t t h i j s M a a s , R e m c o Z w e t s l o o t , D a v i d H a g e b ö l l i n g , R y a n C a r e y , B a o b a o Z h a n g , C u l l e n O ’ K e e f e ,
S o p h i e - C h a r l o t t e F i s c h e r , a n d T o b y S h e v l a n e . T h a n k y o u e s p e c i a l l y t o M a r k u s A n d e r l j u n g , w h o h e l p e d m a k e m y i d e a s
f l o w o n t h e p a g e . T h i s w o r k w a s f u n d e d b y t h e B e r k e l e y E x i s t e n t i a l R i s k I n i t i a t i v e . A l l e r r o r s a r e m i n e a l o n e .
E x e c u t i v e S u m m a r y
Artificial Intelligence (AI) presents novel policy challenges that require coordinated global responses.
2
Standards, particularly those developed by existing international standards bodies, can support the global
governance of AI development. International standards bodies have a track record of governing a range of
socio-technical issues: they have spread cybersecurity practices to nearly 160 countries, they have seen firms
around the world incur significant costs in order to improve their environmental sustainability, and they have
developed safety standards used in numerous industries including autonomous vehicles and nuclear energy.
These bodies have the institutional capacity to achieve expert consensus and then promulgate standards across
the world. Other existing institutions can then enforce these nominally voluntary standards through both
de
facto
and
de jure
methods.
AI standards work is ongoing at ISO and IEEE, two leading standards bodies. But these ongoing standards
efforts primarily focus on standards to improve market efficiency and address ethical concerns, respectively.
There remains a risk that these standards may fail to address further policy objectives, such as a culture of
responsible deployment and use of safety specifications in fundamental research. Furthermore, leading AI
research organizations that share concerns about such policy objectives are conspicuously absent from ongoing
standardization efforts.
Standards will not achieve all AI policy goals, but they are a path towards effective global solutions where
national rules may fall short. Standards can influence the development and deployment of particular AI systems
through product specifications for, i.a., explainability, robustness, and fail-safe design. They can also affect the
larger context in which AI is researched, developed, and deployed through process specifications.
The creation,
3
dissemination, and enforcement of international standards can build trust among participating researchers, labs,
and states. Standards can serve to globally disseminate best practices, as previously witnessed in cybersecurity,
environmental sustainability, and quality management. Existing international treaties, national mandates,
government procurement requirements, market incentives, and global harmonization pressures can contribute
to the spread of standards once they are established. Standards do have limits, however: existing market forces
are insufficient to incentivize the adoption of standards that govern fundamental research and other
transaction-distant systems and practices. Concerted efforts among the AI community and external
stakeholders will be needed to achieve such standards in practice.
2
See, e.g.,
Brundage, Miles, et al. "The malicious use of artificial intelligence: Forecasting, prevention, and mitigation."
Future of Humanity Institute and the Centre for the Study of Existential Risk.
(2018)
https://arxiv.org/pdf/1802.07228.pdf
; Dafoe, Allan. AI Governance: A Research Agenda.
Future of Humanity Institute.
(2018).
www.fhi.ox.ac.uk/wp-content/uploads/GovAIAgenda.pdf
;
Bostrom, Nick, Dafoe, Allan and Carrick Flynn.
“Public Policy and Superintelligent AI: A Vector Field Approach.” (working paper, Future of Humanity Institute, 2018),
https://nickbostrom.com/papers/aipolicy.pdf
; Cave, Stephen, and Seán S. ÓhÉigeartaigh. “Bridging near-and long-term
concerns about AI.”
Nature Machine Intelligence
1, no. 1 (2019): 5
.
3
For discussion of the importance of context in understanding risks from AI, see Zwetsloot, Remco and Allan Dafoe.
“Thinking About Risks From AI: Accidents, Misuse and Structure.”
Lawfare
. (2019).
https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure
.
2
Ultimately, standards are a tool for global governance, but one that requires institutional entrepreneurs to
actively use standards in order to promote beneficial outcomes. Key governments, including China and the
U.S., have stated priorities for developing international AI standards. Standardization efforts are only beginning,
and may become increasingly contentious over time, as has been witnessed in telecommunications. Engagement
sooner rather than later can establish beneficial and internationally legitimate ground rules to reduce risks in
international and market competition for the development of increasingly capable AI systems.
In light of the strengths and limitations of standards, this paper offers a series of recommendations. They are
summarized below:
●
Leading AI labs should build institutional capacity to understand and engage in
standardization processes.
This can be accomplished through in-house development or partnerships
with specific third-party organizations.
●
AI researchers should engage in ongoing standardization processes.
The Partnership on AI and
other qualifying organizations should consider becoming liaisons with standards committees to
contribute to and track developments. Particular standards may benefit from independent
development initially and then be transferred to an international standards body under existing
procedures.
●
Further research is needed on AI standards from both technical and institutional perspectives.
Technical standards desiderata can inform new standardization efforts and institutional strategies can
develop paths for standards spread globally in practice.
●
Standards should be used as a tool to spread a culture of safety and responsibility among AI
developers.
This can be achieved both inside individual organizations and within the broader AI
community.
3
T a b l e o f C o n t e n t s
E x e c u t i v e S u m m a r y
2
T a b l e o f C o n t e n t s
4
G l o s s a r y
5
1 . I n t r o d u c t i o n
6
2 . S t a n d a r d s : I n s t i t u t i o n f o r G l o b a l G o v e r n a n c e
7
2 A . T h e n e e d f o r g l o b a l g o v e r n a n c e o f A I d e v e l o p m e n t
7
2 B . I n t e r n a t i o n a l s t a n d a r d s b o d i e s r e l e v a n t t o A I
9
2 C . A d v a n t a g e s o f i n t e r n a t i o n a l s t a n d a r d s a s g l o b a l g o v e r n a n c e t o o l s
1 0
2 C i . S t a n d a r d s G o v e r n T e c h n i c a l S y s t e m s a n d S o c i a l I m p a c t
1 1
2 C i i . S h a p i n g e x p e r t c o n s e n s u s
1 4
2 C i i i . G l o b a l r e a c h a n d e n f o r c e m e n t
1 5
3 . C u r r e n t L a n d s c a p e f o r A I S t a n d a r d s
1 9
3 A . I n t e r n a t i o n a l d e v e l o p m e n t s
1 9
3 B . N a t i o n a l p r i o r i t i e s
2 1
3 C . P r i v a t e i n i t i a t i v e s
2 4
4 . R e c o m m e n d a t i o n s
2 5
4 A . E n g a g e i n o n g o i n g p r o c e s s e s
2 5
4 A i . B u i l d c a p a c i t y f o r e f f e c t i v e e n g a g e m e n t
2 6
4 A i i . E n g a g e d i r e c t l y i n o n g o i n g p r o c e s s e s
2 7
4 A i i i . M u l t i n a t i o n a l o r g a n i z a t i o n s s h o u l d b e c o m e l i a i s o n s
2 8
4 B . P u r s u e p a r a l l e l s t a n d a r d s d e v e l o p m e n t
2 8
4 C . R e s e a r c h s t a n d a r d s a n d s t r a t e g y f o r d e v e l o p m e n t
2 9
4 C i . R e s e a r c h t e c h n i c a l s t a n d a r d s d e s i d e r a t a
2 9
4 C i i . R e s e a r c h s t r a t e g i e s f o r s t a n d a r d s i n g l o b a l g o v e r n a n c e
3 0
4 D . U s e s t a n d a r d s a s a t o o l f o r c u l t u r e c h a n g e
3 1
5 . C o n c l u s i o n
3 2
R e f e r e n c e s
3 3
A p p e n d i x 1 : I S O / I E C J T C 1 S C 4 2 O n g o i n g W o r k
4 0
A p p e n d i x 2 : I E E E A I S t a n d a r d s O n g o i n g W o r k
4 1
4
G l o s s a r y
Acronym
Meaning
DoD
U.S. Department of Defense
CFIUS
Committee on Foreign Investment in the United
States
ECPAIS
IEEE Ethics Certification Program for Autonomous
and Intelligent Systems
GDPR
EU General Data Protection Regulation
IEEE
Institute of Electrical and Electronics Engineers
IEC
International Electrotechnical Commission
ISO
International Organization for Standardization
ITU
International Telecommunications Union
JTC 1
Joint Technical Committee 1, formed by IEC and
ISO to create information technology standards
MNC
Multinational corporation
OCEANIS
Open Community for Ethics in Autonomous and
Intelligent Systems
TBT
WTO Agreement on Technical Barriers to Trade
WTO
World Trade Organization
5
1 .
I n t r o d u c t i o n
Standards are an institution for coordination. Standards ensure that products made around the world are
interoperable. They ensure that management processes for cybersecurity, quality assurance, environmental
sustainability, and more are consistent no matter where they happen. Standards provide the institutional
infrastructure needed to develop new technologies, and they provide safety procedures to do so in a controlled
manner. Standards can do all of this, too, in the research and development of artificial intelligence (AI).
Market incentives will drive companies to participate in the development of product standards for AI. Indeed,
work is already underway on preliminary product and ethics standards for AI. But, absent outside intervention,
standards may not serve as a policy tool to reduce risks in the technology’s development.
Leading AI research
4
organizations that share concerns about such risks are conspicuously absent from ongoing standardization
efforts.
To positively influence the development trajectory of AI, we do not necessarily need to design new
5
institutions. Existing organizations, treaties, and practices already see standards disseminated around the world,
enforced through private institutions, and mandated by national action.
Standards, developed by an international group of experts, can provide legitimate global rules amid
international competition in the development of advanced AI systems.
These standards can support trust
6
among developers and a consistent focus on safety, among other benefits. Standards constitute a language and
practice of communication among research labs around the world, and can establish guardrails that help
support positive AI research and development outcomes.
Standards will not achieve all AI policy goals, but they are an important step towards effective global solutions.
They are an important step that the AI research community can start leading on today. The paper is structured
as follows.
Section 2
discusses the need for global coordination on AI policy goals and develops at length the use
of international standards in achieving these goals.
Section 3
analyzes the current AI standards landscape.
Section
4
offers a series of recommendations for how the AI community, comprising technical researchers, development
organizations, governance researchers, can best use international standards as a tool of global governance.
4
This work fits within a growing literature that argues that short-term and long-term AI policy should not be considered
separately. Policy decisions today can have long-term implications. See, e.g., Cave and ÓhÉigeartaigh, “Bridging near-and
long-term concerns about AI.”
5
Some in the AI research community do acknowledge the significance of standards, but they see efforts towards
standardization as a future endeavor: the
OpenAI Charter
acknowledges the importance of sharing standards research, but
focused on a time when they curtail open publication. The
Partnership on AI
is today committed to establishing best
practices on AI, in contrast to formal standards.
6
Advanced AI incorporates future developments in machine intelligence substantially more capable than today’s systems
but at a level well short of an Artificial General Intelligence. See Dafoe, "AI Governance: A Research Agenda."
6
2 . S t a n d a r d s : I n s t i t u t i o n f o r G l o b a l G o v e r n a n c e
2 A . T h e n e e d f o r g l o b a l g o v e r n a n c e o f A I d e v e l o p m e n t
AI development poses global challenges. Government strategies to incentivize increased AI research within
national boundaries may result in a fractured governance landscape globally,
and in the long-term threaten a
7
race to the bottom in regulatory stringency.
In this scenario, countries compete to attract AI industry through
8
national strategies and incentives that accelerate AI development, but do not similarly increase regulatory
oversight to mitigate societal risks associated with these developments.
These risks associated with lax regulatory
9
oversight and heated competition range from increasing the probability of biased, socially harmful systems
to
10
existential threats to human life.
11
These risks are exacerbated by a lack of effective global governance mechanisms to provide, at minimum,
guardrails in the competition that drives technological innovation. Although there is uncertainty surrounding
AI capability development timelines,
AI researchers expect capabilities to match human performance for
12
many tasks within the decade and for most tasks within several decades.
These developments will have
13
transformative effects on society.
It is thus critical that global governance institutions are put in place to steer
14
these transformations in beneficial directions.
International standards are an institution of global governance that exists today and can help achieve AI policy
goals. Notably, global governance does not mean global government: existing regimes of international
coordination, transnational collaboration, and global trade are all forms of global governance.
Not all policy
15
responses to AI will be global; indeed, many will necessarily account for local and national contexts.
But
16
7
Cihon, Peter. “Regulatory Dynamics of Artificial Intelligence Global Governance.”
Typhoon Consulting.
(2018).
http://www.typhoonconsulting.com/wp-content/uploads/2018/07/18.07.11-AI-Global-Governance-Peter-Cihon.pdf
.
8
See Dafoe, "AI Governance: A Research Agenda."
9
Indeed, labs pursuing AI systems with advanced capabilities are globally distributed and demonstrate a high variance in
their operating and safety procedures.
Baum, Seth. "A Survey of Artificial General Intelligence Projects for Ethics, Risk,
and Policy." G
lobal Catastrophic Risk Institute Working Paper 17-1.
(2017).
10
Whittaker, Meredith, Kate Crawford, Roel Dobbe, et al. “AI Now Report 2018.”
AI Now
. (2018).
https://ainowinstitute.org/AI_Now_2018_Report.pdf
.
11
Bostrom, Nick.
Superintelligence
. Oxford: Oxford University Press, 2014.
12
Past technologies have seen discontinuous progress, but this may not come to pass in development of advanced AI. See
blog posts on AI Impacts from
2015
and
2018
.
13
Grace, Katja, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans. "When will AI exceed human performance?
Evidence from AI experts."
Journal of Artificial Intelligence Research
62 (2018): 729-754; See Bughin, Jacques, Jeongmin
Seong, James Manyika, et al. “Notes from the AI Frontier: Modeling the Impact of AI on the World Economy.”
McKinsey
Global Institute
. (2018).
https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/Artificial%20Intelligence/Notes%20from%20the%
20frontier%20Modeling%20the%20impact%20of%20AI%20on%20the%20world%20economy/MGI-Notes-from-the-AI-fr
ontier-Modeling-the-impact-of-AI-on-the-world-economy-September-2018.ashx
.
14
Dafoe, “AI Governance: A Research Agenda”; Bostrom, Dafoe, Flynn. “Public Policy and Superintelligent AI.”
15
Hägel, Peter. "Global Governance."
International Relations
. Oxford: Oxford University Press, 2011.
16
See, e.g, Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François
Bonnefon, and Iyad Rahwan. "The moral machine experiment."
Nature
563, no. 7729 (2018): 59.
7
international standards can support policy goals where global governance is needed, in particular, by (1)
spreading beneficial systems and practices, (2) facilitating trust among states and researchers, and (3)
encouraging efficient development of advanced systems.
17
First, the content of the standards themselves can support AI policy goals. Beneficial standards include those
that support the security and robustness of AI, further the explainability of and reduce bias in algorithmic
decisions, and ensure that AI systems fail safely. Standards development on all three fronts is underway today, as
discussed below in
Section 3
. Each standard could also reduce long-term risks if their adoption shifts funding
away from opaque, insecure, and unsafe methods.
Additional standards could shape processes of research and
18
development towards beneficial ends, namely through an emphasises on safety practices in fundamental
research. In addition to stipulating safe processes, these standards, through their regular enactment and
enforcement, could encourage a responsible culture of AI development. These claims are developed further in
Section 4
.
Second, international standards processes can facilitate trust among states and research efforts. International
standards bodies provide focal organizations where opposing perspectives can be reconciled. Once created and
adopted, international standards can foster trust among possible competitors because they will provide a shared
governance framework from which to build further agreement.
This manner of initial definition,
19
measurement, or other initial agreement contributing to subsequent expanded and enforced agreements has
been witnessed in other international coordination problems, e.g., nuclear test ban treaties and environmental
protection efforts.
Trust is also dependent on the degree of open communication among labs. Complete
20
openness can present problems; indeed, open publication of advanced systems,
and even simply open
21
reporting of current capabilities in the future could present significant risks.
Standards can facilitate
partial
22
openness among research efforts that is “unambiguously good” in light of these concerns.
In practice, credible
23
public commitments to specific standards can provide partial information about the practices of otherwise
disconnected labs. Furthermore, particular standards that may emerge over time could themselves define
appropriate levels and mechanisms of openness.
Third, international standards can encourage the efficient development of increasingly advanced AI systems.
International standards have a demonstrated track record of improving global market efficiency and economic
surplus via, i.a., reduced barriers to international trade, greater interoperability of labor and end-products, and
eliminated duplicated effort on standardized elements.
International standards could support these outcomes
24
17
These are key policy elements that bridge a focus on current systems with a long-term research towards superintelligent
AI. See Bostrom, Dafoe, Flynn. “Public Policy and Superintelligent AI.”
18
See Cave and ÓhÉigeartaigh, "Bridging near-and long-term concerns about AI."
19
Bostrom, Nick. "Strategic implications of openness in AI development."
Global Policy
8, no. 2 (2017): 146.
20
For example, the Vienna Convention for the Protection of the Ozone Layer provided a framework for a later agreement
in the Montreal Protocol that has seen global adoption and enforcement.
21
See, e.g., OpenAI’s limited release of its
GPT-2
natural language model.
22
Bostrom, "Strategic implications of openness in AI development.”
23
Ibid., 145.
24
See Büthe, Tim, and Walter Mattli.
The new global rulers: The privatization of regulation in the world economy
.
Princeton University Press, 2011; Brunsson, Nils and Bengt Jacobsson. “The pros and cons of standardization–an
epilogue” in
A World of Standards
. Oxford: Oxford University Press, 2000, 169-173; Abbott, Kenneth W., and Duncan
Snidal. “International ‘standards’ and international governance.”
Journal of European Public Policy
8, no. 3 (2001): 345.
8
for AI as well, e.g., with systems that can deploy across national boundaries and be implemented using
consistent processes and packages by semi-skilled AI practitioners. Increased efficiency in deployment will drive
further resources into research and development. Some in the AI community may be concerned that this will
increase the rate at which AI research progresses, thereby encouraging racing dynamics that disincentivize
precaution.
Yet standards can help here too, both through object-level standards for safety practices with
25
enforcement mechanisms and by facilitating trust among developers. These claims are developed further in
Sections 2C
and
4
.
In summary, continued AI development presents risks that require coordinated global governance responses.
International standards are an existing form of global governance that can offer solutions. These standards can
help support efficient development of AI industry, foster trust among states and developers of the technology,
and see beneficial systems and practices enacted globally. It is important to note that, regardless of intervention,
ongoing standards work will encourage increased efficiency in AI development. Engagement is needed to
support standards that help foster trust and encourage beneficial systems and processes globally.
2 B . I n t e r n a t i o n a l s t a n d a r d s b o d i e s r e l e v a n t t o A I
A wide range of organizations develop standards that are adopted around the world. AI researchers may be most
familiar with proprietary or open-source software standards developed by corporate sponsors, industry
consortia, and individual contributors. These are common in digital technologies, including the development of
AI, e.g., software libraries including TensorFlow, PyTorch, and OpenAI Gym that become standards across
industry over time.
The groups responsible for such standards, however, do not have experience in monitoring
26
and enforcement of such standards globally. In contrast, international standards bodies have such experience.
This section discusses standards bodies, and
Section 2C
describes relevant categories of standards.
Specialized bodies may create international standards. These bodies can be treaty organizations such as the
International Atomic Energy Agency and the International Civil Aviation Organization, which govern
standards on nuclear safety and international air travel, respectively. Such a body may well suit the governance
of AI research and development, but its design and implementation are beyond the scope of this paper. Instead,
this paper focuses on
existing
institutions that can host development of needed standards and see them enacted
globally. Non-state actors’ efforts towards institutional development tend to be more successful in both agenda
setting and impact if they work with states and seek change that can be accommodated within existing
structures and organizations.
Thus, existing international standards bodies present an advantage. Nevertheless,
27
25
Armstrong, S., Bostrom, N. & Shulman, C. “Racing to the precipice: a model of artificial intelligence development”,
Technical Report #2013-1, Future of Humanity Institute, Oxford University: pp. 1-8. (2013).
https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pd
f
.
26
Software libraries, programming languages, and operating systems are standards insofar as they guide behavior. They may
not emerge from standardization processes but instead market competition. See
Section 3C
.
27
Hale, Thomas, and David Held.
Beyond Gridlock
. Cambridge: Polity. 2017.
9
if a specialized agency is developed in the future, previously established standards can be incorporated at that
time.
28
There are two existing international standards bodies that are currently developing AI standards. First is a joint
effort between ISO and IEC. To coordinate development of digital technology standards, ISO and IEC
established a joint committee (JTC 1) in 1987. JCT 1 has published some 3000 standards, addressing everything
from programming languages, character renderings, file formats including JPEG, distributed computing
architecture, and data security procedures.
These standards have influence and have seen adoption and
29
publicity by leading multinational corporations (MNCs). For example, ISO data security standards have been
widely adopted by cloud computing providers, e.g., Alibaba, Amazon, Apple, Google, Microsoft, and Tencent.
30
The second international standards body that is notable in developing AI standards is the IEEE Standards
Association. IEEE is an engineers’ professional organization with a subsidiary Standards Association (SA) whose
most notable standards address protocols for products, including Ethernet and WiFi. IEEE SA also creates
process standards in other areas including software engineering management and autonomous systems design.
Its AI standardization processes are part of a larger IEEE Global Initiative on Ethics of Autonomous and
Intelligent Systems.
31
A third international standards body may become increasingly relevant for AI in the future: the ITU. The ITU
has historically played a role in standards for information and communications technologies, particularly in
telecommunications. It has a Focus Group on Machine Learning for Future Networks that falls within this
telecommunications remit. Following the 2018 AI for Good Global Summit, it has also created a Focus Group
on AI for Health, “which aims
inter alia
to create standardized benchmarks to evaluate Artificial Intelligence
algorithms used in healthcare applications.”
Given the ITU’s historically narrower scope, however, this paper
32
does not consider the organizations’ work further.
2 C . A d v a n t a g e s o f i n t e r n a t i o n a l s t a n d a r d s a s g l o b a l g o v e r n a n c e t o o l s
International standards present a number of advantages in encouraging the global governance of AI. This
section distills these advantages into three themes. First, international standards have a history of guiding the
development and deployment of technical systems and shaping their social effects across the world. Second,
international standards bodies privilege the influence of experts and have tested mechanisms for achieving
28
Pre-existing standards have been referenced in international treaties, e.g., The International Maritime Organization’s
Safety of Life at Sea Treaty references ISO product standards. Koppell, Jonathan G. S.
World Rule : Accountability,
Legitimacy, and the Design of Global Governance.
Chicago: University of Chicago Press, 2010.
29
See, e.g., Rajchel, Lisa.
25 years of ISO/IEC JTC 1
. ISO Focus+, 2012.
https://www.iso.org/files/live/sites/isoorg/files/news/magazine/ISO%20Focus%2b%20(2010-2013)/en/2012/ISO%20Fo
cus%2b%2c%20June%202012.pdf
.
30
For Amazon, these include ISO 27001, 27017, and 27018 from JTC 1 as well as the ISO 9001 quality management
process standard. See the link associated with each company:
Alibaba
,
Amazon
,
Apple
,
Google
,
Microsoft
, and
Tencent
.
31
See
IEEE’s Ethics in Action website
.
32
See
ITU’s AI for Good website
.
1 0
consensus among them on precisely what should be in standards. Third, existing treaties, national practices, and
transnational actors encourage the global dissemination and enforcement of international standards.
2 C i . S t a n d a r d s G o v e r n T e c h n i c a l S y s t e m s a n d S o c i a l I m p a c t
Standards are, at their most fundamental “a guide for behavior and for judging behavior.”
In practice,
33
standards define technical systems and can guide their social impact. Standards are widely used for both private
and public governance at national and transnational levels, in areas as wide ranging as financial accounting and
nuclear safety.
Many forms of standards will impact the development of AI.
34
Consider a useful typology of standards based on actors’ incentives and the object of standardization. Actors’
incentives in standards can be modeled by two types of externalities:
positive, network externalities and negative
externalities.
With network externalities, parties face a coordination game where they are incentivized to
35
cooperate.
For example, a phone is more useful if it can call many others than if it can only communicate with
36
the same model. Institutions may be necessary to establish a standard in this case but not to maintain the
standard in practice, as the harmony of interests obviates enforcement. For the purposes of this paper, consider
these standards “network standards.”
Negative externalities are different; a polluter burdens others but does not internalize the cost itself. Standards
here face challenges: individuals may have an incentive to defect in what could be modeled as a Prisoner’s
Dilemma.
In the pollution case, it is in the interest of an individual business to disregard a pollution standard
37
absent additional institutions. But this interest can favor cooperation if an institution creates excludible benefits
and an enforcement mechanism. External stakeholders are important here as well: institutions to enable
enforced standards are incentivized by demand external to those who adopt the standards. In practice,
governments, companies, and even public pressure can offer such incentives; many are explored in
Section 2Ciii
.
For example, the ISO 14001 Environmental Management standard requires regular and intensive audits in order
to obtain certification, which in turn brings reputational value to the companies that obtain it.
In general, for
38
such standards, institutions are needed for initial standardization
and
subsequent enforcement. For the
purposes of this paper, consider these standards “enforced standards.” Enforcement can take multiple forms,
from regulatory mandates to contractual monitoring. Certification of adherence to a standard is a common
method of enforcement that relies on third-parties, which can be part of government or private entities.
39
Self-certification is also common, whereby a firm will claim that it complies with a standard and is subject to
33
Abbot and Snidal, “International ‘standards’ and international governance,” p. 345.
34
Brunsson, Nils and Bengt Jacobsson. “The contemporary expansion of standardization” in
A World of Standards
.
Oxford: Oxford University Press, 2000, 1-18.
35
Abbott and Snidal, “International ‘standards’ and international governance.”
36
This scenario can have distributional consequences as well, where one party gains more from the standard, but ultimately
all are better off from cooperation.
37
Abbot and Snidal, “International ‘standards’ and international governance.”
38
Prakash, Aseem, and Matthew Potoski.
The voluntary environmentalists: Green clubs, ISO 14001, and voluntary
environmental regulations
. Cambridge University Press, 2006.
39
Ibid.
1 1
future enforcement from a regulator.
Compliance monitoring can occur through periodic audits, applications
40
for re-certification, or ad hoc investigations in response to a whistleblower or documented failure.
In
41
summary, both categories of standards exist--network and enforced--but enforced standards require additional
institutions for successful implementation.
In practice, standards address one of two objects: products or management processes. Product standards can
define terminology, measurements, variants, functional requirements, qualitative properties, testing methods,
and labeling criteria.
Management process standards can describe processes or elements of organizations to
42
achieve explicit goals, e.g., quality, sustainability, and software life cycle management. A process that follows a
particular standard need not impose costs with each iteration of a product: the standardized process simply
informs how new products are created. Indeed, process standards can often function as a way for firms to adopt
best practices in order to increase their competitiveness.
One such ISO standard on cybersecurity has been
43
adopted by firms in nearly 160 countries.
Figure 1 illustrates these standards categories as they relate to
44
externalities with some notable examples.
Standards for AI will emerge in all four quadrants; indeed, as discussed below in
Section 3
, standards that span
the typology are already under development. Different types of standards will spread with more or less external
effort, however. Network-product standards that support interoperability and network-process standards that
offer best practices will see actors adopt them in efforts to grow the size of their market and reduce their costs.
Indeed, most international standards from ISO/IEC and IEEE are product standards that address network
externalities, seeking to increase the interoperability of global supply chains.
Enforced standards will require
45
further incentivization from external stakeholders, whether they be regulators, contracting companies, or the
public at large. The more distant the object of standardization is from common market transactions, the more
difficult the incentivization of standards will be without external intervention. In particular, this means that an
enforced-process standard for safety in basic research and development is unlikely to develop without concerted
effort from the AI community.
40
Firms may declare that their practices or products conform to network standards, even, in some cases, choosing to certify
this conformity. In these cases, however, the certification serves as a signal to access network benefits. Although enforced
standards are not the only category that may see certification, it is the category that requires further enforcement to address
possible incentives to defect.
41
Some AI standards, namely those on safety of advanced research, will benefit from novel monitoring regimes. See
Section
4.
42
Hallström, Kristina Tamm.
Organizing International Standardization : ISO and the IASC in Quest of Authority.
Cheltenham: Edward Elgar, 2004.
43
Brunsson and Jacobsson. “The pros and cons of standardization.”
44
“The ISO Survey of Management System Standard Certifications - 2017 - Explanatory Note.
ISO
. Published August,
2018.
https://isotc.iso.org/livelink/livelink/fetch/-8853493/8853511/8853520/18808772/00._Overall_results_and_explanator
y_note_on_2017_Survey_results.pdf?nodeid=19208898&vernum=-2
; ISO 27001 had nearly 40,000 certifications in 159
countries in 2017. See
ISO 27001 website
.
45
Büthe and Mattli,
The new global rulers.
1 2
Figure 1. Standards typology with examples
Network - Product
Protocols for establishing Wi-Fi
connections
(IEEE 802.11)
Standard dimensions for a shipping
container to enable global
interoperability
(ISO 668)
Network - Process
Quality management process standard that facilitates international
contracting and supply chains by ensuring consistency globally
(ISO 9001)
Information security management system, requirements and code
of practice for implementation and maintenance.
(ISO/IEC 27001 and 27002, respectively)
Software life cycle management processes (ISO/IEC/IEEE 12207)
Enforced - Product
Paper products sourced with
sustainable methods and monitored
through supply chain (Forest
Stewardship Council)
46
Enforced:
Third-party certified.
CE marking for safety, health, and
environmental protection
requirements for sale within the
European Economic Area. (EU)
Enforced:
If problems arise, violations
are sanctioned by national regulators.
47
Enforced - Process
Environmental Management process standard helps organizations
minimize the environmental footprint of their operations
(ISO 14001)
Enforced:
Third-party certified.
Functional safety management over life cycle for road vehicles
(ISO 26262).
Enforced:
Required to meet safety regulations and import criteria.
Safety requirements for collaborative industrial robots
(ISO/TS 15006).
Enforced:
Supports obligations under safety regulations.
There are, however, also notable examples of enforced standards that do see firms take on considerable costs to
internalize harmful externalities. The ISO 14001 Environmental Management standard has spread to 171
countries, and saw over 360,000 certifications around the world in 2017.
This standard provides firms a
48
framework to improve the environmental sustainability of their practices and certification demonstrates that
they have done so in order to gain reputational benefits from environmental regulators.
Firms take on
49
significant costs in certification, the total process for which can can cost upwards of $100,000 per facility.
The
50
standard has been notable for spreading sustainable practices to middle-tier firms that do not differentiate
46
See the certification description on the
Forest Stewardship Council website
.
47
See
European Commission website on CE marking
.
48
“The ISO Survey of Management System Standard Certifications - 2017 - Explanatory Note.
ISO
. Published August,
2018.
https://isotc.iso.org/livelink/livelink/fetch/-8853493/8853511/8853520/18808772/00._Overall_results_and_explanator
y_note_on_2017_Survey_results.pdf?nodeid=19208898&vernum=-2
; ISO 14001 had over 360,000 certifications in 171
countries in 2017. See
ISO 14000 website
.
49
Prakash, Aseem, and Matthew Potoski.
The voluntary environmentalists.
50
Ibid.
1 3
themselves based on environmentally sustainable practices.
Clearly, however, ISO 14001 has not solved larger
51
environmental challenges. The narrower success of this program should inform expectations of the role for
standards for AI development: although they can encourage global adoption of best practices and see firms
incur significant costs to undertake them, standards will not be a complete solution.
Another category of enforced standards relevant to AI standards are product and process safety standards.
Safety standards for medical equipment, biological lab processes, and safety in human-robot collaboration have
been spread globally by international standards bodies and other international institutions.
Related standards
52
for functional safety, i.e., processes to assess risks in operations and reduce them to tolerable thresholds, are
widely used across industry, from autonomous vehicle development to regulatory requirements for nuclear
reactor software.
These standards do not apply directly to the
process
of cutting-edge research. That is not to
53
say, however, that with concerted effort new standards guided by these past examples could not do so.
2 C i i . S h a p i n g e x p e r t c o n s e n s u s
The internal processes of international standards bodies share two characteristics that make them useful for
navigating AI policy questions. First, these bodies privilege expertise. Standards themselves are seen as legitimate
rules to be followed precisely because they reflect expert opinion.
International standards bodies generally
54
require that any intervention to influence a standard must be based in technical reasoning.
55
This institutional emphasis on experts can see an individual researcher’s engagement be quite impactful. Unlike
other methods of global governance that may prioritize experts, e.g., UN Groups of Governmental Experts
which yield mere advice, experts involved in standards organizations have influence over standards that can have
de facto
or even
de jure
governing influence globally. Other modes of
de jure
governance, e.g., national
51
Ibid.
52
There is no data available on uptake of ISO/TS 15006 Robots and robotic devices -- Collaborative robots or related
standards. ISO does claim, however, that IEC 60601 and ISO 10993 have seen global recognition and uptake for ensuring
safety in medical equipment and biological processes, respectively. See discussion of
ISO standards in sectorial examples at
ISO's dedicated webpage
.
53
Particular industry applications derive from the generic framework standard for functional safety, IEC 61508, including
ISO 26262 Road vehicles -- Functional safety and IEC 61513 Nuclear power plants - Instrumentation and control for
systems important to safety - General requirements for systems. Per conversation with an employee at a firm developing
autonomous driving technology, all teams in the firm have safety strategies that cite the standard. Nuclear regulators
reference the relevant standard, see, e.g., IAEA.
Implementing Digital Instrumentation and Control Systems in the
Modernization of Nuclear Power Plants.
Vienna: IAEA. 2009.
https://www-pub.iaea.org/MTCD/Publications/PDF/Pub1383_web.pdf
.
See generally, Smith, David J., and Kenneth GL Simpson.
Safety critical systems handbook: a straight forward guide to
functional safety, IEC 61508 (2010 Edition) and related standards, including process IEC 61511 and machinery IEC 62061
and ISO 13849.
Elsevier, 2010.
54
Murphy, Craig N., and JoAnne Yates.
The International Organization for Standardization (ISO): global governance
through voluntary consensus
. London: Routledge, 2009; Jacobsson, Bengt. “Standardization and expert knowledge” in
A
World of Standards
. Oxford: Oxford University Press, 2000, 40-50.
55
This is not to say that standards are apolitical. Arguments made with technical reasoning do not realize a single, objective
standard; rather, technical reasoning can manifest in multiple forms of a particular standard, each with distributional
consequences. See Büthe and Mattli,
The new global rulers.
1 4
regulation or legislation, present only limited direct opportunities for expert engagement.
Some are concerned
56
that such public engagement may undermine policy efforts on specific topics like AI safety.
Thus, for an AI
57
researcher looking to maximize her global regulatory impact, international standards bodies offer an efficient
venue for engagement.
Similarly, AI research organizations that wish to privilege expert governance may find
58
international standards bodies a venue that has greater reach and legitimacy than closed self-regulatory efforts.
Second, standards bodies and their processes are designed to facilitate the arrival of consensus on what should
and should not be within a standard.
This consensus-achieving experience is useful when addressing questions
59
surrounding emerging technologies like AI that may face initial disagreement.
Although achieving consensus
60
can take time, it is important to note that the definition of consensus in these organizations does not imply
unanimity, and in practice it can often be achieved through small changes to facilitate compromise.
This
61
institutional capacity to resolve expert disagreements based on technical argument stands in contrast to
legislation or regulation that will impose an approach after accounting for limited expert testimony or filings.
The capacity to resolve expert disagreement is important for AI, where it will help resolve otherwise
controversial questions of what AI research is mature enough to include in standards.
2 C i i i . G l o b a l r e a c h a n d e n f o r c e m e n t
International trade rules, national policies, and corporate strategy disseminate international standards globally.
These mechanisms encourage or even mandate adoption of what are nominally voluntary standards. This
section briefly describes these mechanisms and the categories of standards to which they apply. Taken together,
these mechanisms can lead to the global dissemination and enforcement of AI standards.
International trade agreements are key levers for the global dissemination of standards. The World Trade
Organization’s Agreement on Technical Barriers to Trade (TBT) mandates that WTO member states use
56
Congressional or parliamentary testimony does not necessarily translate into law and legislative staffers rarely have
narrow expertise. Such limitations inform calls for a specialized AI regulatory agency within the US context. A regulatory
agency can privilege experts, but only insofar as they work within the agency and, for example, forgo research. Standards
organizations, alternatively, allow experts to continue their research work. On limitations of expertise in domestic
governance and a proposal for an AI agency, see Scherer, Matthew U. "Regulating artificial intelligence systems: Risks,
challenges, competencies, and strategies." Harv. JL & Tech. 29 (2015): 353-400.
57
See, e.g., Larks. “2018 AI Alignment Literature Review and Charity Comparison.”
AI Alignment Forum blog post.
58
Supra footnote 55, expert participation remains political. Some standards are more politicized than others, though this
does not follow a clear division between product or process, network or enforced. Vogel sees civil regulation (essentially
enforced standards) as more politicized than technical (network) standards, although he looks at more politicized venues
than simply international standards bodies. Network standards with large distributional consequences are often politicized,
including shipping containers and ongoing 5G efforts. Vogel, David. “The Private Regulation of Global Corporate
Conduct.” in Mattli, Walter., and Ngaire. Woods, eds.
The Politics of Global Regulation.
Princeton: Princeton University
Press, 2009; Büthe and Mattli,
The new global rulers.
59
See, e.g., Büthe and Mattli,
The new global rulers
, Chapter 6.
60
Questions of safe procedures for advanced AI research, for instance, have not yet seen debate oriented towards
consensus.
61
Büthe and Mattli,
The new global rulers,
pp. 130-1.
1 5
international standards
where they exist, are effective, and are appropriate. This use can take two forms:
62
incorporation into enforced technical regulations or into voluntary standards at the national level.
The TBT
63
applies to existing regulations regardless if they are new or old; thus, if a new international standard is
established, pre-existing laws can be challenged.
The TBT has a formal notice requirement for such regulation
64
and enables member states to launch disputes within the WTO.
65
There are important limitations to TBT, however. Few TBT-related disputes have been successfully resolved in
the past.
TBT applies only to product and product-related process standards,
thus precluding its use in
66
67
spreading standards on fundamental AI research. In a further limitation, the agreement permits national
regulations to deviate from international standards in cases where “urgent problems of safety, health,
environmental protection or national security arise,” although such cases require immediate notification and
justification to the WTO.
68
National policies are another key lever in disseminating international standards. National regulations reference
international standards and can mandate compliance
de jure
in developed and developing countries alike.
69
Governments may use their purchasing power to encourage standards adoption via procurement requirements.
EU member state procurement must draw on European or international standards where they exist,
and the
70
optional WTO Agreement on Government Procurement encourages parties to use international standards for
62
The TBT does not define international standards bodies, but it does set out a Code of Good Practice for standards
bodies to follow and issue notifications of adherence to ISO. IEEE declared adherence to the Principles in 2017. See
ISO’s
WTO information gateway webpage
; “IEEE Position Statement: IEEE Adherence to the World Trade Organization
Principles for International Standardization.”
IEEE
. Published May 22, 2017.
http://globalpolicy.ieee.org/wp-content/uploads/2017/05/IEEE16029.pdf
.
63
TBT does not mandate nations impose regulation; rather, it mandates that if they do so, regulation should incorporate
international standards. Thus, nations may choose to simply leave a particular industry unregulated. This is unlikely,
however, given international market pressures outline below.
64
It does not, however, compel a national government to regulate in the first place. See Mattli, Walter. "The politics and
economics of international institutional standards setting: an introduction."
Journal of European Public Policy
8, no. 3
(2001): 328-344.
65
In 2015, there were some 25,000 notices of national regulatory measures, 473 concerns raised, and 6 disputes brought to
the WTO.
Technical Barriers to Trade: Reducing trade friction from standards and regulations.
Geneva: WTO, 2015.
https://www.wto.org/english/thewto_e/20y_e/tbt_brochure2015_e.pdf
.
66
See Wijkström, Erik, and Devin McDaniels. "Improving Regulatory Governance: International Standards and the WTO
TBT Agreement."
Journal of World Trade
47, no. 5 (2013): 1013-046.
67
Scholars disagree whether TBT applies to digital goods without any physical product manifestation. E.g., Oddenino
argues that TBT could apply to cybersecurity standards, citing discussions at the TBT Committee. Fleuter argues that
digital products as services under WTO rules, which would preclude use of TBT. Oddenino, Alberto. “Digital
standardization, cybersecurity issues and international trade law.”
Questions of International Law
51 (2018): 31;
Fleuter,
Sam. "The Role of Digital Products Under the WTO: A New Framework for GATT and GATS Classification."
Chi. J.
Int'l L.
17 (2016): 153.
68
TBT Agreement, Article 2.10.
69
These include Brazil, China, India, Singapore, and South Korea. Büthe and Mattli,
The new global rulers
. See too
ISO’s
webpage featuring national examples of standards used for public policy
;
Winfield, Alan FT, and Marina Jirotka. "Ethical
governance is essential to building trust in robotics and artificial intelligence systems."
Philosophical Transactions of the
Royal Society A: Mathematical, Physical and Engineering Sciences
376, no. 2133 (2018): 20180085.
70
EU Regulation 1025/2012; See too Mattli, “The politics and economics of international institutional standards setting.”
1 6
procurement where they exist.
The US Department of Defence (DoD), for instance, uses multiple
71
international product and process standards in its software procurement,
and this appears set to continue
72
based on the 2018 U.S. Department of Defense Artificial Intelligence Strategy.
Beyond regulatory obligations
73
and procurement requirements, governments spread standards through adoption in their own operations.
74
National action may threaten a fractured global governance landscape and fears of a race to the bottom in
regulatory stringency, including that of standards. In a race to the bottom in regulatory stringency, AI
development organizations may, in the future, choose to locate in jurisdictions that impose a lower regulatory
burden; these organizations need not actually relocate, or threaten to do so, in order to impose downward
pressure on regulatory oversight.
National strategies already witnessed have proposed policy changes to
75
encourage AI development. Such national actions will undoubtedly continue and court leading AI
development organizations.
WTO institutions, if actively used for the purpose, may be able to moderate these concerns of a race to the
bottom. Notably, moreover, the global and concentrated nature of markets for AI and related industries will see
MNCs use standards internationally. Analogous to government procurement, MNCs may themselves demand
contractors adhere to international standards.
Such standards include network-product and network-process
76
standards to ensure an interoperable supply chain.
They also include enforced-product and enforced-process
77
71
“Agreement on Government Procurement”, entered into force January 1, 1995.
United Nations Treaty Series
, v. 1868.
https://treaties.un.org/doc/Publication/UNTS/Volume%201868/v1868.pdf
.
72
Within DoD, individual managers have discretion if and how to use these standards for particular projects: if a standard
is to be used, it is cited in Requests for Proposal, included in subsequent contracts, and then used to evaluate contract
compliance. In the case of ISO/IEC/IEEE 15288: Systems and software engineering–System life cycle processes, DoD
project managers are directed to tailor requirements to their particular project characteristics by, for example, removing out
of scope criteria and assessments. The standard establishes a general framework for the life cycle of an engineered system
and then defines a set of processes within the life cycle, and can be tailored to apply to use with hardware, software, data,
humans, processes, procedures, facilities, materials, and more.
“Acquisition Program Resources.”
Office of the Deputy
Assistant Secretary of Defense
:
Systems Engineering
. March 30, 2017.
https://www.acq.osd.mil/se/apr/apr-4.html
;
Best
Practices for Using Systems Engineering Standards (ISO/IEC/IEEE 15288, IEEE 15288.1, and IEEE 15288.2) on Contracts
for Department of Defense Acquisition Programs
. Washington, D.C.: Office of the Deputy Assistant Secretary of Defense
for Systems Engineering, 2017.
https://www.acq.osd.mil/se/docs/15288-Guide-2017.pdf
.
73
The plan identifies a goal to “establish key AI building blocks and standards” (p.6) and remains focused on processes:
contractors’ prototype solutions should employ “standardized processes with respect to areas such as data, testing and
evaluation, and cybersecurity.” (p.9) DoD. “Summary of the Department of Defense Artificial Intelligence Strategy:
Harnessing AI to Advance Our Security and Prosperity.”
DoD
, 2019.
https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF
.
74
Military bases and municipalities in the U.S. and Europe have adopted the ISO 14001 standard, for example. Prakash,
Aseem, and Matthew Potoski.
The voluntary environmentalists.
75
Empirically there is more inertia in policy and corporate investment than is often assumed. Races to the bottom in
regulatory stringency are not often observed in practice for this reason.
Radaelli, Claudio M. "The puzzle of regulatory
competition."
Journal of Public Policy
24, no. 1 (2004): 1-23. But given the stakes in AI development, such inertia is less
likely. Absent international coordination, whether by standards or another method, a race to the bottom may play out over
time as countries improve training of AI researchers and fail to provide regulatory oversight.
76
May, Christopher. "Who’s in Charge? Corporations as Institutions of Global Governance."
Palgrave Communications
1,
no. 1 (2015): 1.
77
Ibid.; Guler, Isin, Mauro F. Guillén, and John Muir Macpherson. "Global competition, institutions, and the diffusion of
organizational practices: The international spread of ISO 9000 quality certificates."
Administrative science quarterly
47, no.
2 (2002): 207-232.
1 7
standards to meet customer demand.
In addition to reduced costs from supply chain interoperability and
78
increased revenues from meeting customer demand, MNCs--and other firms alike--have further incentives to
adopt international standards: standards can provide protection from liability in lawsuits and can lower
insurance premiums.
79
Together these mechanisms can be used to encourage movement toward a unified global governance landscape
for AI standards. National governments and MNCs can mandate use of standards, product and process,
network and enforced alike. WTO rules require consistent use of international product standards globally. The
incentives of MNCs encourage consistent use of international standards--both product and process--globally. If
a large national market mandates adherence to a standard, MNCs may keep administration costs low by
complying across the globe. If they do, then MNCs are incentivized to lobby other jurisdictions to pass similar
laws, lest local competition be at an advantage.
That means, given that many leading AI research efforts are
80
within MNCs, insofar as one country incorporates international AI standards into local law, others will face
pressure to follow suit. This was witnessed, for example, with environmental regulation passed in the U.S.,
which subsequently led DuPont to lobby for a global agreement to ban ozone-depleting chemicals in order to
see its international competition similarly regulated.
This phenomenon is currently witnessing the
81
globalization of data protection regulations at the behest of the GDPR. The analysis of global governance
mechanisms in this section should not be portrayed as arguing that using these tools to spread and enforce AI
standards globally will be easy. But the tools do exist, and concerted efforts to make use of them are a worthy
endeavour.
In sum, the scope for standards in the global governance of AI research and development is not predetermined.
Recalling our standards typology, the object of standardization and incentives therein will determine particular
needs for standards development and complementary institutions. Standards for AI product specification and
development processes have numerous precedents, while standards to govern fundamental research approaches
are without precedent. More generally, if international experts engage in the standardization process, this serves
to legitimize the resulting standard. If states and MNCs undertake efforts to adopt and spread the standard, it
will similarly grow in influence. Active institutional entrepreneurship can influence the development of and
scope for international standards in AI.
78
May, “Who’s in Charge?”; Prakash, Aseem, and Matthew Potoski. "Racing to the bottom? Trade, environmental
governance, and ISO 14001."
American journal of political science
50, no. 2 (2006): 350-364; Vogel, David.
The Market for
Virtue: The Potential and Limits of Corporate Social Responsibility.
Washington, D.C.: Brookings Institution Press, 2005.
79
Cihon, P., Michel-Guitierrez, G., Kee, S., Kleinaltenkamp, M., and T. Voigt. “Why Certify? Increasing adoption of the
proposed EU Cybersecurity Certification Framework.” Masters thesis, University of Cambridge, 2018; for a proposal to
incentivize AI certification via reduced liability in the US context, see too, Scherer, "Regulating artificial intelligence
systems: Risks, challenges, competencies, and strategies."
80
Murphy, Dale D.
The structure of regulatory competition: Corporations and public policies in a global economy.
Oxford:
Oxford University Press, 2004; Bradford, Anu. “The Brussels effect.”
Nw. UL Rev.
107 (2012): 1-68.
81
Murphy,
The structure of regulatory competition
; Hale, Thomas, David Held, and Kevin Young.
Gridlock: Why Global
Cooperation Is failing When We Need It Most.
Cambridge: Polity, 2013.
1 8
3 . C u r r e n t L a n d s c a p e f o r A I S t a n d a r d s
3 A . I n t e r n a t i o n a l d e v e l o p m e n t s
Given the mechanisms outlined in the previous section, international standards bodies are a promising forum of
engagement for AI researchers. To date, there are two such bodies working on AI: ISO/IEC JTC 1 Standards
Committee on Artificial Intelligence (SC 42) and the working groups of IEEE SA’s AI standards series. Figure 2
categorizes the standards under development within the externality-object typology, as of January 2019.
Figure 2. International AI standards under development
Network - Product
●
Foundational Standards: Concepts and terminology
(SC 42 WD 22989), Framework for Artificial Intelligence
Systems Using Machine Learning (SC 42 WD 23053)
●
Transparency of Autonomous Systems (defining levels of
transparency for measurement) (IEEE P7001)
●
Personalized AI agent specification
(IEEE P7006)
●
Ontologies at different levels of abstraction for ethical
design (IEEE P7007)
●
Wellbeing metrics for ethical AI
(IEEE P7010)
●
Machine Readable Personal Privacy Terms (IEEE P7012)
●
Benchmarking Accuracy of Facial Recognition systems
(IEEE P7013)
Network - Process
●
Model Process for Addressing Ethical
Concerns During System Design
(IEEE P7000)
●
Data Privacy Process (IEEE P7002)
●
Methodologies to address algorithmic
bias in the development of AI systems
(IEEE P7003).
●
Process of Identifying and Rating the
Trustworthiness of News Sources
(IEEE P7011)
Enforced - Product
●
Certification for products and services in transparency,
accountability, and algorithmic bias in systems (IEEE
ECPAIS)
●
Fail-safe design for AI systems (IEEE P7009)
Enforced - Process
●
Certification framework for
child/student data governance (IEEE
P7004)
●
Certification framework for employer
data governance procedures based on
GDPR (IEEE P7005)
●
Ethically Driven AI Nudging
methodologies (IEEE P7008)
SC 42 is likely the more impactful venue for long-term engagement. This is primarily because IEEE standards
have fewer levers for adoption than their ISO equivalents. WTO TBT rules can apply to both IEEE and
ISO/IEC product standards, but their application to IEEE was only asserted in 2017 and has never been tested.
1 9
As discussed in the previous section, ISO standards are mandated in government regulation; similar research
82
could find no such mandates for IEEE standards. Procurement requirements are common for both IEEE and
ISO/IEC standards, and market mechanisms similarly encourage both. ISO has had global success with many
enforced standards, whereas IEEE has no equivalent experience to date.
States have greater influence in
83
ISO/IEC standards development than that of IEEE, and state involvement has enhanced the effectiveness of
past standards with enforcement mechanisms.
Thus, given that enforcement of ISO/IEC standards has more
84
mechanisms for global reach, participation in ISO/IEC JTC 1 may be more impactful than in IEEE.
Ongoing SC 42 efforts are, so far, few in number and preliminary in nature. (See Appendix 1 for a full list of SC
42 activities.) The most pertinent standards working group within SC 42 today is on Trustworthiness. The
Trustworthiness working group is currently drafting three technical reports on robustness of neural networks,
bias in AI systems, and an overview of trustworthiness in AI.
IEEE’s AI standards are further along than those of SC 42. (See Appendix 2 for a full list of IEEE SA P7000
series activities, as of January 2019.) Work on the series began in 2016 as part of the IEEE’s larger Global
Initiative on Ethics of Autonomous and Intelligent Systems. IEEE’s AI standards series is broad in scope, and
continues to broaden with recent additions including a project addressing algorithmic rating of fake news. Of
note to AI researchers interested in long-term development should be P7009 Fail-Safe Design of Autonomous
and Semi-Autonomous Systems. The standard, under development as of January 2019, includes “clear
procedures for measuring, testing, and certifying a system’s ability to fail safely.”
Such a standard, depending
85
on its final scope, could influence both research and development of AI across many areas of focus. Also of note
is P7001 Transparency of Autonomous Systems, which seeks to define measures of transparency. Standardized
methods and measurements of system transparency could inform monitoring measures in future agreements on
advanced AI development.
IEEE SA recently launched the development of an Ethics Certification Program for Autonomous and
Intelligent Systems (ECPAIS). Unlike the other IEEE AI standards, development is open to paid member
organizations, not interested individuals. ECPAIS seeks to develop three separate processes for certifications
related to transparency, accountability, and algorithmic bias. ECPAIS is in an early stage, and it remains to be
seen to what extent the certifications will be externally verified.
Absent an enforcement mechanism, such
86
certifications could be subject to the failings of negative externality standards that lack enforcement
mechanisms.
87
82
“IEEE Position Statement: IEEE Adherence to the World Trade Organization Principles for International
Standardization.”
83
IEEE acknowledges that their AI standards are “unique” among their past standards: “Whereas more traditional
standards have a focus on technology interoperability, safety and trade facilitation, the IEEE P7000 series addresses specific
issues at the intersection of technological and ethical considerations.”
IEEE announcement webpage
.
84
Vogel, David. “The Private Regulation of Global Corporate Conduct.” in Mattli, Walter., and Ngaire. Woods, eds.
The
Politics of Global Regulation.
Princeton: Princeton University Press, 2009.
85
“P7009 Project Authorization Request.”
IEEE-SA
. Published July 15, 2017.
https://development.standards.ieee.org/get-file/P7009.pdf?t=93536600003
.
86
IEEE announcement webpage
.
87
Calo, Ryan. Twitter Thread. October 23, 2018, 1:39PM.
https://twitter.com/johnchavens/status/1054848219618926592
.
2 0
3B. National priorities
There are three important developments to note for national policies on standards for AI. First, key national
actors, including the U.S. and China, agree that international standards in AI are a priority. Second, national
strategies for AI also indicate that countries plan to pursue national standards. Third, given the market structure
in AI industry, countries are incentivized to ensure that international standards align as closely to national
standards as possible.
First, international standards are a stated priority for key governments. The recently released U.S. Executive
Order on Maintaining American Leadership in Artificial Intelligence identified U.S. leadership on international
technical standards as a priority and directed the National Institute for Standards and Technology to draft a
plan to identify standards bodies for the government to engage.
The Chinese government has taken a similar
88
position in an AI Standardization White Paper published by the China Electronics Standardization Institute
(CESI) within the Ministry of Industry and Information Technology in 2018.
The white paper recommended
89
that “China should strengthen international cooperation and promote the formulation of a set of universal
regulatory principles and standards to ensure the safety of artificial intelligence technology.”
This
90
recommendation was corroborated by previous CESI policies, e.g., its 2017 Memorandum of Understanding
with the IEEE Standards Association to promote international standardization.
91
Second, national standards remain relevant. Historically, observers have argued that Chinese national standards
in fields auxiliary to AI, including cloud computing, industrial software, and big data, differ from international
standards in order to support domestic industry.
These differences have not been challenged under WTO
92
rules. However, these same observers do note that China is increasingly active in international standards
activities. In January 2018, China established a national AI standardization group, which will be active with
ISO/IEC JTC 1 SC 42 and coordinate some 23 active AI-related national standardization processes, focused on
platform/support capabilities and key technologies like natural language processing, human-computer
interaction, biometrics, and computer vision.
93
Historically, the U.S. has also emphasized the importance of standardization for AI without specifying such
efforts occur at the international level. The 2016 U.S. National AI Research and Development Strategic Plan,
88
“Executive Order on Maintaining American Leadership in Artificial Intelligence,” February 11, 2019,
https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence
.
89
China Electronics Standardization Institute (CESI). “AI Standardization White Paper,” 2018, Translation by Jeffrey
Ding.
https://docs.google.com/document/d/1VqzyN2KINmKmY7mGke_KR77o1XQriwKGsuj9dO4MTDo/
; see too
Ding, Jeffrey, Paul Triolo and Samm Sacks. “Chinese Interests Take a Big Seat at the AI Governance Table.”
New America
(2018).
https://www.newamerica.org/cybersecurity-initiative/digichina/blog/chinese-interests-take-big-seat-ai-governance-table/
.
90
CESI, “AI Standardization White Paper,” p. 4.
91
See
IEEE Beyond Standards blog post.
92
Ding, Jeffrey. “Deciphering China’s AI Dream.”
Future of Humanity Institute, University of Oxford
(2018);
Wübbeke,
Jost, Mirjam Meissner, Max J. Zenglein, Jaqueline Ives, and Björn Conrad. "Made in China 2025: The making of a
high-tech superpower and consequences for industrial countries."
Mercator Institute for China Studies
17 (2016).
93
CESI, “AI Standardization White Paper.”
2 1
for instance, identified 10 key areas for standardization: software engineering, performance, metrics, safety,
usability, interoperability, security, privacy, traceability, and domain-specific standards.
94
Other countries are also considering national standards. An overview of AI national and regional strategies
describes plans for standards from Australia, the Nordic-Baltic Region (Denmark, Estonia, Finland, the Faroe
Islands, Iceland, Latvia, Lithuania, Norway, Sweden, and the Åland Islands), and Singapore.
The Chief
95
Scientist of Australia has proposed an AI product and process voluntary certification scheme to support
consumer trust.
Insofar as these national strategies seek to develop AI national champions and given the
96
network effects inherent in AI industry,
AI nationalism is of transnational ambition.
97
This leads to the third important point: national efforts will likely turn to international standards bodies in
order to secure global market share for their national champions. Successful elevation of national standards to
the international level benefits national firms that have already built compliant systems. Successful inclusion of
corporate patents into international standards can mean lucrative windfalls for both the firm and its home
country.
98
If one state seeks to influence international standards, all others have incentive to do similarly, else their nascent
national industries may lose out. Given that both the U.S. and China have declared intent to engage in
international standardization, this wide international engagement will likely come to pass. One illustrative case
of the consequences of failure to follow competitors in international standardization is offered by the U.S.
machine tools industry. This industry once described by Ronald Reagan as a “vital component of the U.S.
defense base,” did not seek to influence global standards on related products, and has declined precipitously
under international competition. This stands in contrast to the standards engagement and continued strength
of the sector in Germany and Italy.
Furthermore, the WTO rules outlined in
Section 2.C.iii
, if enforced,
99
require that national regulations cite international standards. This means that failure to secure international
standards that reflect preexisting national ones could require changes in national regulation to encourage global
competition. Thus, such developments could cost national industry both internationally
and
domestically. This
means that countries will likely engage in international standards bodies that govern priority industries like AI.
94
National Science and Technology Council. “The National Artificial Intelligence Research and Development Strategic
Plan.”
Executive Office of the President of the United States
. (2016).
https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf
.
95
Dutton, Tim. “An Overview of National AI Strategies”.
Medium
. Published June 28, 2018.
https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd
.
96
Finkel, A. “What will it take for us to trust AI?”
World Economic Forum
https://www.weforum.org/agenda/2018/05/alan-finkel-turing-certificate-ai-trust-robot/
.
97
Scale unlocks further user-generated data and enables hiring talent, which in turn both improve the underlying product,
which in turn increases users and attracts further talent in a virtuous circle.
98
See Krasner, Stephen D. "Global communications and national power: Life on the Pareto frontier."
World politics
43,
no. 3 (1991): 336-366; Drezner, Daniel W.
All politics is global: Explaining international regulatory regimes
. Princeton
University Press, 2008.
99
See Büthe and Mattli,
The new global rulers
, Chapter 6.
2 2
Case of 5G: International standards with implications for national champions
Although telecom standards and standardization bodies differ from those leading in AI standardization, the
ongoing development of 5G standards is an illustrative case to consider regarding states’ interests. Previous
generations of mobile telephony standards did not see a single, uncontested global standard, with Europe and
the US on inoperable 3G standards and the LTE standard facing competition before solidifying its global
market dominance in 4G. The global economies of scale resulting from 4G standard consolidation may see a
uniform standard adopted globally for 5G from the start.
This globally integrated market will offer
100
positive-sum outcomes to cooperation, albeit with some countries winning more than others. These
incentives for network-product standards may very well be larger than those present in AI.
These incentives are driving participation in efforts at the focal standardization body, 3GPP, which set LTE
for 4G as well as some past generation standards, to set the radio standard.
At stake in the standardization
101
process is the economic bounty from patents incorporated into the standard and their resulting effects on
national industry competitiveness in the global market. One estimate claims that U.S. firm Qualcomm owns
approximately 15 percent of 5G patents, with Chinese companies, led by Huawei, controlling about 10
percent.
One example of Huawei’s success in 5G standards was the adoption of its supported polar coding
102
method for use in control channel communication between end devices and network devices.
103
In contrast to a positive-sum game with distributional consequences common in international standards, the
use of national standards reverts to a protectionist zero-sum game. In the past, there has been criticism of
China’s efforts to use national standards towards protectionism with requirements that differ from
international standards. In 5G and AI standards, however, China has sought to engage in international
standards bodies, thereby mitigating this past concern and responding to past international pressure to reduce
trade barriers.
The Trump administration opposes China’s international standards activities, in keeping
104
with its zero-sum perspective on international trade. For example, the Committee on Foreign Investment in
the United States (CFIUS) decision to block the foreign acquisition of Qualcomm came out of concern that
it “would leave an opening for China to expand its influence on the 5G standard-setting process.”
No
105
100
Brake, Doug. "Economic Competitiveness and National Security Dynamics in the Race for 5G between the United
States and China."
Information Technology & Innovation Foundation
. (2018).
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3142229
. Others see the possibility of a partially bifurcated system.
Triolo, Paul and Kevin Allison. “The Geopolitics of 5G.”
Eurasia Group
. (2018).
https://www.eurasiagroup.net/live-post/the-geopolitics-of-5g
.
101
This is a simplified picture, as the ITU governs spectrum allocation for 5G as well as a standards development roadmap.
102
LexInnova, “5G Network Technology: Patent Landscape Analysis” (2017) cited in
Brake "Economic Competitiveness
and National Security Dynamics in the Race for 5G.”
103
Brake "Economic Competitiveness and National Security Dynamics in the Race for 5G.”
104
Greenbaum, Eli. “5G, Standard-Setting, and National Security,” Harvard Law National Security Journal (July, 2018),
http://harvardnsj.org/2018/07/5g-standard-setting-and-national-security/
.
105
Mir, Aimen N. “Re: CFIUS Case 18-036: Broadcom Limited (Singapore)/Qualcomm Incorporated.” Department of
the Treasury. Letter, p. 2.
https://www.sec.gov/Archives/edgar/data/804328/000110465918015036/a18-7296_7ex99d1.htm
.
2 3
longer does the U.S. view international participation as a way to reduce trade barriers; rather, it sees
international participation as a way to shift influence in China’s favor globally.
Despite these politics, global standards will improve market efficiency and lead to better outcomes for all.
Some will be better off than others, however. The distributional consequences of 5G standards may be larger
than those for AI in the short-term, but this case nonetheless has implications for international efforts
towards AI standards. The future may see similarly politicized standardization processes for AI. China’s
formulated policies for international engagement on AI standards will likely see other countries engage in
order to encourage a more balanced result. This engagement means that AI researchers’ efforts to influence
standards will be supported but also that they likely will be increasingly politicized. Yet, to be clear, AI
standards are not currently as visible or politicized as telecom standards, which have already seen four
previous iterations of standards and the emergence of large globally integrated markets dependent upon
them.
3C. Private initiatives
In addition to international and national standards, there are a number of private initiatives that seek to serve a
standardizing role for AI. Standards, most commonly network-product standards, can arise through market
forces. Notable examples include the QWERTY keyboard, dominance of Microsoft Windows, VHS, Blu-Ray,
and many programming languages. Such market-driven product standards can produce suboptimal outcomes
where proprietary standards are promoted for private gain or standards may fail to spread at all due to a lack of
early adopters.
106
In AI, software packages and development environments, e.g., TensorFlow, PyTorch and OpenAI Gym, are
privately created, are used widely, and perform a standardizing role. Market forces can also encourage, though
not develop in their own right, network-process and enforced standards through customer demands on MNCs
and MNC pressure on their supply chains, as explained in
Section 2.C.iii
. For example, the CleverHans
Adversarial Examples Library,
if incorporated into an adversarial example process check that became widely
107
adopted in practice, would be such a standard. Another example is Microsoft’s Datasheets for Datasets standard
practice to report on data characteristics and potential bias that is used across the company.
Researchers will
108
continue to develop software packages and benchmarks. This approach does not necessarily require an
additional commitment of time beyond their research work, whereas engaging on traditional standards
development does require some time commitment. Some of these packages and benchmarks may spread to the
extent that they become industry standards. But these standards will face difficulties in securing global
dissemination and enforcement. Yet, as discussed in
Section 4
below private standards can be turned into
international standards with a concerted effort.
106
Mattli, Walter. “Public and Private Governance in Setting International Standards.” In Kahler, Miles, and David A.
Lake, eds.
Governance in a global economy: Political authority in transition
. Princeton University Press, 2003.
107
Papernot, Nicolas, Fartash Faghri, Nicholas Carlini, Ian Goodfellow, Reuben Feinman, Alexey Kurakin, Cihang Xie et
al. "Technical report on the cleverhans v2. 1.0 adversarial examples library."
arXiv preprint arXiv:1610.00768
(2016).
108
Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumeé III,
and Kate Crawford. "Datasheets for Datasets."
arXiv:1803.09010
(2018).
2 4
Still other groups are developing standards in the broad sense of a guide for judging behavior. The 2017
Asilomar Conference on Beneficial AI yielded a set of AI Principles that address areas of research, ethics and
values, and long-term issues, which have been signed by some 1300 AI and robotics researchers as well as 2500
others.
Among these principles was a commitment to safety standards in particular: “Teams developing AI
109
systems should actively cooperate to avoid corner-cutting on safety standards.”
The Association for
110
Computing Machinery (ACM), a professional association, maintains a Code of Ethics and Professional
Conduct for its members. This Code includes many principles, including “Avoid harm.”
The ACM has also
111
called for a new peer review standard that requires researchers to acknowledge “negative implications” of their
research.
The Partnership on AI, a multistakeholder forum founded by leading AI firms, seeks to develop best
112
practices for AI research and development.
These standards, broadly defined, do not benefit directly from the
113
dissemination and enforcement mechanisms outlined in
Section 2Ciii
. However, such standards may have
normative power in influencing actors who subsequently engage in standardization activities that produce
standards which are subject to mechanisms of dissemination and enforcement.
4 . R e c o m m e n d a t i o n s
Today, AI standards development is already underway at both ISO/IEC and IEEE. National strategies,
including those of the U.S. and China, prioritize engagement in standardization processes for AI. Thus, the
agenda is set. Engagement in these processes today can benefit from these ongoing processes and national foci.
As time goes on, however, standards bodies may become increasingly politicized just as multiple iterations of
telecom standards have, over time, given rise to highly politicized international tension over 5G. This section
offers recommendations to use standards to help support AI policy goals starting today.
4 A . E n g a g e i n o n g o i n g p r o c e s s e s
How can the AI community, namely researchers and research organizations, engage effectively? There are four
elements necessary for successful influence in international standards bodies:
●
technical expertise,
●
financial resources,
●
timely information, and
●
effective institutional knowledge.
114
109
“Asilomar AI Principles.”
Future of Life Institute
website
.
110
Ibid.
111
“ACM Code of Ethics and Professional Conduct.”
Association for Computing Machinery
website
.
112
Hecht, B., Wilcox, L., Bigham, J.P., Schöning, J., Hoque, E., Ernst, J., Bisk, Y., De Russis, L., Yarosh, L., Anjum, B.,
Contractor, D. and Wu, C. 2018. It’s Time to Do Something: Mitigating the Negative Impacts of Computing Through a
Change to the Peer Review Process. ACM Future of Computing Blog.
https://acm-fca.org/2018/03/29/negativeimpacts/
.
113
“About Us.”
Partnership on AI
website
.
114
Büthe and Mattli,
The new global rulers.
I adapt the final element from their focus on “institutional complementarity”
that sees domestic institutions that effectively mirror specifically ISO and IEC. My broader reading, effective institutional
awareness, enables actors to create new structures or otherwise navigate these challenges of institutional complementarity.
2 5
The AI research community already has technical expertise and financial resources, but not up-to-date
information on proceedings within standards bodies nor the institutional knowledge required to successfully
intervene. The following four recommendations helps fill in these gaps.
4 A i . B u i l d c a p a c i t y f o r e f f e c t i v e e n g a g e m e n t
AI researchers are unlikely to have experience engaging national and international standards bodies. Of leading
AI organizations, only Google and Microsoft participate in the U.S. standards committee that is affiliated with
ISO/IEC JTC 1 SC 42; none participates in the U.K. equivalent. Similarly, IEEE P7000 series working groups
see very few volunteers from leading organizations.
115
In order to successfully influence standardization outcomes, researchers should develop expertise in these
standardization processes. In some cases, researchers need not go far to find this expertise. Large firms may
already have teams working on creating and complying with international standards, though they may focus
more on products as opposed to AI research and development.
Research institutions and firms can learn more about ongoing standardization processes by participating in the
Open Community for Ethics in Autonomous and Intelligent Systems (OCEANIS).
OCEANIS is a
116
coordinating forum for standards organizations and other interested organizations to discuss efforts to use
standards to further development of autonomous and intelligent systems. It was was co-founded by the IEEE
and IEC, among other national and regional standards bodies. OCEANIS does not produce standards itself, but
could be a useful venue for organizations seeking to build capacity prior to engaging directly in standardization
processes.
Beyond expertise, perspective matters: it is important to view standards as a policy tool for encouraging positive
AI outcomes. Technologies are not apolitical
and neither are the processes that shape them.
With this
117
118
understanding, standards are not simply a response to a particular market need, but, more broadly, a tool of
global governance. Strategic engagement in standardization now can help direct wider consideration to
important areas like AI safety. ISO and IEEE have formalized standards maintenance procedures so that
standards can be updated as the state of the art progresses.
The important step today is to understand and
119
start using this tool for global governance.
115
This was indicated by observations of one working group and an interview with the chair of another working group.
116
“Participation.”
OCEANIS
website.
117
See, e.g., Lessig, Lawrence.
Code: And Other Laws of Cyberspace, Version 2.0
. New York: Basic Books, 2006.
118
Büthe and Mattli,
The new global rulers.
119
For details on ISO’s systematic review process, see “Guidance on the Systematic Review Process in ISO.”
ISO
. Published
May 2017.
https://www.iso.org/files/live/sites/isoorg/files/store/en/Guidance_systematic_review.pdf
; For details on
IEEE’s maintenance process, see “Next Steps Kit: Guidelines for Publication, Recognition Awards and Maintenance.”
IEEE Standards Association
. N.D.
https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/next_steps_kit.pdf
.
2 6
4 A i i . E n g a g e d i r e c t l y i n o n g o i n g p r o c e s s e s
There are two, related paths to engage with ISO/IEC JTC 1 SC 42. First, researchers should consider joining the
group that mirrors SC 42 within their respective national standards body. It is through these national bodies
that researchers can influence and directly engage in SC 42. In the United States, this group is InterNational
Committee for Information Technology Standards (INCITS) - Artificial Intelligence. Committee membership
is open to any U.S. national who is materially affected by related standardization activities.
,
The equivalent
120
121
committee for the UK is British Standards Institution ART/1.
122
The second method of engaging SC 42 is to seek appointment to its expert working groups that drafts standards
directly. Such appointments are made by national member organizations, so the first engagement strategy will
further the second.
The work of SC 42 is in its early stages. Working Group 3 on Trustworthiness, and specifically its ongoing work
on a technical report on robustness in neural networks, is likely the highest value area of engagement at this
time. At this preliminary stage, however, participation in a national standards body or SC 42 working group can
serve to build career capital and institutional knowledge that will be useful in creating further working groups in
the future. These efforts could focus on standards related to AI policy goals; some of these possible standards
will be discussed below.
IEEE SA P7000 series working groups are open for interested individuals to join. Indeed, the process of joining
is much simpler than that of ISO/IEC JTC 1-related work. One simply needs to contact a working group and
express interest. In order to participate in developing the Ethics Certification Program for Autonomous and
Intelligent Systems (ECPAIS), individuals must be affiliated with organizations possessing an IEEE SA
Advanced Corporate Membership.
The work of standards within the IEEE SA P7000 series is at varied stages of completeness. Standards earlier in
the sequence have approximately a year left in development and standards later in the sequence have more time.
This means that interested researchers should consider engaging soon if they are to have an impact in ongoing
working groups. Two standards in particular could support AI policy goals outlined above: P7001
Transparency of Autonomous Systems, which seeks to define measures of transparency and P7009 Fail-Safe
Design of Autonomous and Semi-Autonomous Systems. Engagement on these standards could help ensure that
their respective scopes support the governance of long-term risks.
120
InterNational Committee for Information Technology Standards (INCITS). “New INCITS Technical Committee on
Artificial Intelligence - Notice of January 30-31, 2018 Organizational Meeting and Call for Members.” Email, 2018.
https://standards.incits.org/apps/group_public/download.php/94314/eb-2017-00698-Meeting-Notice-New-INCITS-T
C-on-Artificial-Intelligence-January30-31-2018.pdf
.
121
Membership fees vary by organizational affiliation, from several hundred to several thousand dollars per year.
122
See
BSI committee information webpage
.
2 7
4 A i i i . M u l t i n a t i o n a l o r g a n i z a t i o n s s h o u l d b e c o m e l i a i s o n s
Another method of engagement is available to
multinational
industry or other membership associations like the
Partnership on AI. These groups are eligible for liaison status with ISO/IEC JTC 1 SC 42 at both the Standards
Committee level and Working Group level. Although liaisons cannot vote on final standards, they have
significant influence. Participation at the Standards Committee level would allow such an organization to
propose new standards, comment on draft standards, and nominate experts for Working Groups.
123
4 B . P u r s u e p a r a l l e l s t a n d a r d s d e v e l o p m e n t
An alternative to direct engagement is the parallel development of standards. This could take many forms in
practice. Individual organizations, existing working groups at the Partnership on AI, or other ad hoc consortia
could develop, i.a., software libraries, measurement benchmarks, or best practices procedures. Once developed,
these approaches could then be transferred into international standards to achieve global dissemination and
enforcement.
Indeed, there are numerous examples of organizations and even individual firms transferring existing standards
into international ISO standards. The C Programming language was developed at Bell Laboratories before being
adopted as an international standard by ISO.
More recently, Microsoft transferred its Open XML format to
124
ISO,
as did Adobe with PDF.
Microsoft’s effort is illustrative of the potential for one motivated MNC to
125
126
influence standardization processes: it placed its experts on several national committees that then influenced
discussions at the ISO committee.
Smaller firms can also have success: Microsoft’s Open XML effort
127
followed another ISO-approved open standard that was submitted by a consortium of smaller companies.
128
IEEE has also created standards and then seen them adopted by ISO/IEC JTC 1 in the past. Similar efforts
could be made in the case of specific AI standards, whether from IEEE’s P7000 series or from another
organization. Indeed, if an organization like the Partnership on AI were to create AI standards, it could apply
for status as a Publicly Available Specifications (PAS) Submitter with the ISO/IEC JTC 1. With this status, a
standards organization can submit specifications for a vote among national bodies; over one hundred standards
have been approved in this process.
129
123
“ISO/IEC Directives Part 1 Consolidated ISO Supplement, 1.17 Liaisons with other organizations.” Ninth edition.
Geneva: ISO, 2018.
https://www.iso.org/sites/directives/current/consolidated/index.xhtml#_idTextAnchor095
.
124
Kelechava, Brad. “The Origin of ANSI C and ISO C.”
ANSI
blog post
.
125
Kosek, Jirka. "From the office document format battlefield."
IT Professional
10, no. 3 (2008).
126
Orion, Egan. “PDF 1.7 is approved as ISO 32000.”
The Inquirer
blog post (archived)
.
127
Büthe and Mattli,
The new global rulers
, p. 186.
128
Kosek, "From the office document format battlefield."
129
“JTC 1 PAS Transposition Process.”
ISO/IEC JTC 1
website
.
2 8
4 C . R e s e a r c h s t a n d a r d s a n d s t r a t e g y f o r d e v e l o p m e n t
This paper and its nascent AI standards strategy serves as a call to AI researchers to engage in order to help
develop standards as a global governance mechanism for AI. Further research from the AI research community
is needed to ensure that standards under development today can encourage positive outcomes for advanced AI.
This work could benefit from researchers from across the AI research field as well as forecasting experts. The
lines of work are two-fold. First, technical standards desiderata should be developed. Second, specific strategies
to see these standards codified and spread globally should then be created.
4 C i . R e s e a r c h t e c h n i c a l s t a n d a r d s d e s i d e r a t a
Ultimately, AI researchers should seek to consolidate AI standards desiderata for their particular area of focus.
Some of this work may take place at existing working groups hosted by the Partnership on AI, discussions
within individual organizations, or through other ad hoc gatherings. This paper offers two prototype standards
that would support AI policy goals: an AI safety process standard and an AI systems capability standard.
The field of AI Safety is young, but preliminary conversations about how to incorporate safety procedures into
a standard framework that can reduce risks globally would be a welcome application of existing research. The
first step in this process is the distillation of current best practices. However tentative and incomplete, these
practices are an improvement over a disregard for safety--if expectations are calibrated correctly. There are
numerous labs around the world today with advanced AI ambitions and no focus on safety.
Prototype
130
standards could spread a focus on safety and current best practices globally.
One such approach could be a
131
process standard that requires researchers to complete a checklist procedure before undertaking research,
namely record a precise specification, measures taken to ensure robustness, and methods of assurance.
This
132
approach could then serve as a model for future standards and regulation as system capabilities increase. A more
developed version could see researchers certify to a series of best practices. Such a certification framework could
eventually be linked to a monitoring regime for defined high risk projects. This certification approach would
likely see a series of related standards, which is a common practice. One standard would be definitional: defining
high risk projects or developing a risk typology of multiple categories, as is used in functional safety standards.
Another standard would then identify best practices and mitigation strategies to be followed at each risk
threshold. Additional standards could specify monitoring and certification regimes. When realized, this example
may see labs obtain third-party certification subject to verification, e.g., via real-time monitoring of researchers’
130
Baum, "A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy."
131
E.g., Avoiding negative side effects, avoiding reward hacking, scalable oversight, safe exploration, and robustness to
distributional change.
Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané.
"Concrete problems in AI safety."
arXiv
(2016).
https://arxiv.org/pdf/1606.06565.pdf
; papers referenced within “Safety
and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI).”
The IEEE Global
Initiative on Ethics of Autonomous and Intelligent Systems
. 2017.
https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_safety_beneficence_v2.pdf
.
132
See Ortega, Pedro A., Vishal Maini, and the DeepMind Safety Team. “Building safe artificial intelligence: specification,
robustness, and assurance.”
Medium
. Published September 27, 2018.
https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1
.
2 9
use of large amounts of computing hardware. Although such enforcement regimes may be novel, international
standards bodies do have experience with safety standards for emerging technologies, as described in
Section 2Bi
.
Another standard for consideration would permit consistent assessment of system capabilities globally. This
type of standard could inform above safety standards by assessing the relative danger of a system or it could
facilitate international agreements on AI development and deployment in a variety of domains. Insofar as it was
incorporated into an international standard, these practices could be spread globally and possibly facilitate
future international agreements. ISO has supported similar efforts to combat climate change in furtherance of
the Paris Climate Agreement: it has a series of greenhouse gas emissions measurement standards for individual
organizations and auditors.
In contrast to the organic spread of private benchmarks, international standards
133
can support universal adoption at a point in the future where it may be needed.
Performance benchmarks already exist for particular tasks. The AI Index incorporates these benchmarks and
others to report on AI performance and other metrics annually.
Notable benchmarks include the ImageNet
134
corpus, which has served as an image recognition benchmark for research globally and helped drive the rise of
deep learning research.
The General Language Understanding Evaluation (GLUE)
may be a similarly
135
136
impactful benchmark in the field of natural language understanding. GLUE integrates nine distinct natural
language understanding tasks into one benchmark.
As systems become more capable, the integration of tasks into holistic benchmarks will continue. Further work
is needed to contextualize these growing modular benchmarks in a broader capabilities framework.
An
137
integrative approach could benefit from ongoing efforts to map types of intelligence.
Such an approach could
138
then serve as the basis for forecasting efforts and international agreements that see universal adoption. Of
course, such a measurement standard cannot come ahead of fundamental research.
4 C i i . R e s e a r c h s t r a t e g i e s f o r s t a n d a r d s i n g l o b a l g o v e r n a n c e
Although AI safety research continues and these procedures outlined above are not foolproof, thought on how
to implement safety processes at scale needs parallel development with technical safety research itself.
Understanding such efforts as enforced-process standards, institutions for both agreement
and
enforcement are
133
Climate Action.
ISO Focus, May-June 2018.
https://www.iso.org/files/live/sites/isoorg/files/news/magazine/ISOfocus%20(2013-NOW)/en/2018/ISOfocus_128/IS
Ofocus_128_en.pdf
.
134
See Shoham, Yoav, Raymond Perrault, Erik Brynjolfsson, Jack Clark, James Manyika, Juan Carlos Niebles, Terah
Lyons, John Etchemendy, Barbara Grosz and Zoe Bauer, "The AI Index 2018 Annual Report”, AI Index Steering
Committee, Human-Centered AI Initiative, Stanford University, Stanford, CA, December 2018.
http://cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf
.
135
Gershgorn, Dave. “The data that transformed AI research—and possibly the world.”
Quartz
. Published July 26, 2017.
https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/
.
136
Wang, Alex, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. "GLUE: A Multi-Task
Benchmark and Analysis Platform for Natural Language Understanding."
arXiv:1804.07461
(2018).
137
See Hernández-Orallo, José. "Unbridled mental power."
Nature Physics
15, no. 1 (2019): 106.
138
Bhatnagar, Sankalp, Anna Alexandrova, Shahar Avin, Stephen Cave, Lucy Cheke, Matthew Crosby, Jan Feyereisl et al.
"Mapping intelligence: Requirements and possibilities." In
3rd Conference on" Philosophy and Theory of Artificial
Intelligence
, pp. 117-135. Springer, Cham, 2017.
3 0
needed. Although ISO/IEC JTC 1 SC 42 may one day offer a promising home for such efforts, initial
proof-of-concept work may be done more effectively elsewhere. Ongoing work on monitoring, incentives, and
cooperation, at institutions like OpenAI, FHI Center for the Governance of AI, and the Cambridge Centre for
Study of Existential Risk, may prove useful in this effort.
For each identified important area of standardization ongoing, as well as the new efforts identified above, a
roadmap for global governance should be developed. This roadmap can then be used by institutional
entrepreneurs, whether they be individual researchers, organizations, firms, or larger groups. Each particular
standards roadmap could begin by answering the following questions: Which firms, organizations, or states may
adopt the standard first? Which policy mechanisms will be most useful in spreading these standard more
broadly? How should the broader AI research community support these efforts?
As international and other standards bodies initiate standardization efforts in more areas of AI, an important
question to address will be to what extent each needs attention from the AI research community. This is a
calculus of interests and impact. If a topic of standardization is relevant to policy goals for the governance of
advanced AI and actors’ incentives may overlook standards’ development to these ends, engagement will be
warranted. In other cases, e.g., standards for autonomous vehicles, actors’ incentives are aligned so as to not
necessitate engagement.
Each roadmap should similarly address this question of differential impact.
139
More broadly, additional research on the politics of standard-setting and standard enforcement is needed.
Existing literature focuses primarily on the politics of standard-setting.
This work does not focus on standards
140
for digital technologies, however. Furthermore, little work has been done to understand the role of individual
firms in setting international standards.
Similarly little research has been done on the ways in which standards
141
spread globally in practice.
Section 2Ciii
compiled a series of institutional mechanisms for dissemination and
enforcement that warrant further research to analyze their relative performance as well as the influence of global
and domestic politics in their processes. This understanding can then inform strategies to spread standards for
AI governance.
4 D . U s e s t a n d a r d s a s a t o o l f o r c u l t u r e c h a n g e
Standards can be used to spread a culture of safety
and responsibility in AI research and development.
142
Standards can achieve this in four ways.
First, the criteria described within a standard set rules and establish
143
139
No actor wishes to promulgate standards that endanger their own citizens or customers. Insofar as standards for specific
AI applications, such as autonomous vehicles, remain national, some efficiency will be lost. But internationally fractured
end-application product standards are unlikely to have consequences for desirable openness in research.
140
See, e.g., Büthe and Mattli, The new global rulers; Hallström,
Organizing International Standardization
; Prakash,
Aseem, and Matthew Potoski.
The voluntary environmentalists
.
141
The author’s correspondence with a standards scholar indicates that there are no scholarly works focused on corporate
lobbying of ISO/IEC or IEEE.
142
This has also been called a “safety mindset.” See “AI Safety Mindset.”
Arbital blog
.
143
See Scott, W. Richard.
Institutions and Organizations: Ideas, Interests & Identities.
Fourth ed. Foundations for
Organizational Science. Los Angeles, California, 2014; Thornton, Patricia H., William. Ocasio, and Michael. Lounsbury.
The Institutional Logics Perspective: A New Approach to Culture, Structure and Process.
Oxford: Oxford University Press,
2012.
3 1
expectations. For example, in adopting a transparency standard, an organization commits to the importance of
transparency for AI systems. Second, standards establish and reinforce a relational system that see individual
researchers and AI development organizations embedded in a larger network. In adopting an international
standard, an organization voluntarily acknowledges that outside actors have a stake in the procedures
undertaken within the organization. Third, in order to follow the adopted standards, researchers will necessarily
carry out practices, repeatedly performing, and internalizing as routine, a culture of responsibility and safety.
Fourth, standards will often be embedded directly within products and software packages; individuals’
interactions with these artefacts reinforce a culture of safety and responsibility. For example, consider a safety
checklist is embedded into a software package that prompts a researcher to address safety risks and mitigation
strategies before she can train a model. Regardless of who uses the system, that interaction will reinforce safety.
Understood in this way, standards can be yet another tool for institutional entrepreneurs who promote a
culture of responsibility and safety in AI development. Within companies, closer connections between product
teams with experience in standards and AI research teams can spread this culture. The adoption of AI standards
under development as well as possible future standards can further serve to support this connection within and
among AI labs. Outside of a particular company, standards can drive the adoption of best practices more widely
across the industry. They can also be bundled with other advocacy efforts that reward responsible labs with
better public opinion and access to talented researchers. A culture change is not easy, but standards can help in
this path.
5 . C o n c l u s i o n
This paper has sought to reframe international standards as tools of AI policy. Some AI policy challenges, e.g.,
trust among developers and safe ground rules in international competition, warrant global solutions.
International standards bodies produce expertise- and consensus-based policies that can provide these solutions.
A series of mechanisms can then spread and enforce these policies across the globe. International standards
bodies are currently developing AI standards and states have prioritized engagement. The agenda is set, but
further expert engagement is needed. This paper has made the case for this engagement, provided an overview of
ongoing standardization efforts, and offered detailed recommendations for those who wish to get involved.
3 2
R e f e r e n c e s
Abbott, Kenneth W., and Duncan Snidal. “International ‘standards’ and international governance.”
Journal of
European Public Policy
8, no. 3 (2001): 345.
“Acquisition Program Resources.”
Office of the Deputy Assistant Secretary of Defense
:
Systems Engineering
.
March 30, 2017.
https://www.acq.osd.mil/se/apr/apr-4.html
.
“Agreement on Government Procurement”, entered into force January 1, 1995.
United Nations Treaty Series
, v.
1868.
https://treaties.un.org/doc/Publication/UNTS/Volume%201868/v1868.pdf
.
Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. "Concrete
problems in AI safety."
arXiv
(2016).
https://arxiv.org/pdf/1606.06565.pdf
.
Armstrong, S., Bostrom, N. & Shulman, C. “Racing to the precipice: a model of artificial intelligence
development”, Technical Report #2013-1, Future of Humanity Institute, Oxford University: pp. 1-8. (2013).
https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-devel
opment.pdf
.
Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François
Bonnefon, and Iyad Rahwan. "The moral machine experiment."
Nature
563, no. 7729 (2018): 59.
B
aum, Seth. "A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy." G
lobal
Catastrophic Risk Institute Working Paper 17-1.
(2017).
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3070741
.
Best Practices for Using Systems Engineering Standards (ISO/IEC/IEEE 15288, IEEE 15288.1, and IEEE
15288.2) on Contracts for Department of Defense Acquisition Programs
. Washington, D.C.: Office of the
Deputy Assistant Secretary of Defense for Systems Engineering, 2017.
https://www.acq.osd.mil/se/docs/15288-Guide-2017.pdf
.
Bhatnagar, Sankalp, Anna Alexandrova, Shahar Avin, Stephen Cave, Lucy Cheke, Matthew Crosby, Jan
Feyereisl et al. "Mapping intelligence: Requirements and possibilities." In
3rd Conference on" Philosophy and
Theory of Artificial Intelligence
, pp. 117-135. Springer, Cham, 2017.
Bostrom, Nick.
Superintelligence
. Oxford: Oxford University Press, 2014.
----- "Strategic implications of openness in AI development."
Global Policy
8, no. 2 (2017): 146.
Bostrom, Nick, Dafoe, Allan and Carrick Flynn. “Public Policy and Superintelligent AI: A Vector Field
Approach.” (working paper, Future of Humanity Institute, 2018),
https://nickbostrom.com/papers/aipolicy.pdf
.
Bradford, Anu. “The Brussels effect.”
Nw. UL Rev.
107 (2012): 1-68.
3 3
Brake, Doug. "Economic Competitiveness and National Security Dynamics in the Race for 5G between the
United States and China."
Information Technology & Innovation Foundation
. (2018).
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3142229
.
Brundage, Miles, et al. "The malicious use of artificial intelligence: Forecasting, prevention, and mitigation."
Future of Humanity Institute and the Centre for the Study of Existential Risk.
(2018)
https://arxiv.org/pdf/1802.07228.pdf
Brunsson, Nils and Bengt Jacobsson. “The contemporary expansion of standardization” in
A World of
Standards
. Oxford: Oxford University Press, 2000, 1-18.
----- “The pros and cons of standardization–an epilogue” in
A World of Standards
. Oxford: Oxford University
Press, 2000, 169-173.
Bughin, Jacques, Jeongmin Seong, James Manyika, et al. “Notes from the AI Frontier: Modeling the Impact of
AI on the World Economy.”
McKinsey Global Institute
. (2018).
https://www.mckinsey.com/~/media/McKinsey/Featured%20Insights/Artificial%20Intelligence/Notes%20fro
m%20the%20frontier%20Modeling%20the%20impact%20of%20AI%20on%20the%20world%20economy/MGI
-Notes-from-the-AI-frontier-Modeling-the-impact-of-AI-on-the-world-economy-September-2018.ashx
.
Büthe, Tim, and Walter Mattli.
The new global rulers: The privatization of regulation in the world economy
.
Princeton University Press, 2011.
Cave, Stephen, and Seán S. ÓhÉigeartaigh. "Bridging near-and long-term concerns about AI."
Nature Machine
Intelligence
1, no. 1 (2019): 5-6.
China Electronics Standardization Institute (CESI). “AI Standardization White Paper,” 2018, Translation by
Jeffrey Ding.
https://docs.google.com/document/d/1VqzyN2KINmKmY7mGke_KR77o1XQriwKGsuj9dO4MTDo/edit
Cihon, Peter. “Regulatory Dynamics of Artificial Intelligence Global Governance.”
Typhoon Consulting.
(2018).
http://www.typhoonconsulting.com/wp-content/uploads/2018/07/18.07.11-AI-Global-Governance-Peter-Ci
hon.pdf
Cihon, Peter, Glenda Michel-Guitierrez, Sam Kee, Moritz Kleinaltenkamp, and Thanel Voigt. “Why Certify?
Increasing adoption of the proposed EU Cybersecurity Certification Framework.” Masters thesis, University of
Cambridge, 2018.
Climate Action.
ISO Focus, May-June 2018.
https://www.iso.org/files/live/sites/isoorg/files/news/magazine/ISOfocus%20(2013-NOW)/en/2018/ISOfoc
us_128/ISOfocus_128_en.pdf
.
Dafoe, Allan. "AI Governance: A Research Agenda."
Future of Humanity Institute, University of Oxford.
(2018).
Ding, Jeffrey. “Deciphering China’s AI Dream.”
Future of Humanity Institute, University of Oxford.
(2018).
3 4
Ding, Jeffrey, Paul Triolo and Samm Sacks. “Chinese Interests Take a Big Seat at the AI Governance Table.”
New America.
(2018).
https://www.newamerica.org/cybersecurity-initiative/digichina/blog/chinese-interests-take-big-seat-ai-governa
nce-table/
.
DoD. “Summary of the Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our
Security and Prosperity.”
DoD
, 2019.
https://media.defense.gov/2019/Feb/12/2002088963/-1/-1/1/SUMMARY-OF-DOD-AI-STRATEGY.PDF
Drezner, Daniel W.
All politics is global: Explaining international regulatory regimes
. Princeton University
Press, 2008.
Dutton, Tim. “An Overview of National AI Strategies”.
Medium
. Published June 28, 2018.
https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd
.
“Executive Order on Maintaining American Leadership in Artificial Intelligence,” February 11, 2019,
https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-i
ntelligence
.
Finkel, A. “What will it take for us to trust AI?”
World Economic Forum.
(2018).
https://www.weforum.org/agenda/2018/05/alan-finkel-turing-certificate-ai-trust-robot/
.
Fleuter, Sam. "The Role of Digital Products Under the WTO: A New Framework for GATT and GATS
Classification."
Chi. J. Int'l L.
17 (2016): 153.
Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal
Daumeé III, and Kate Crawford. "Datasheets for Datasets."
arXiv:1803.09010
(2018).
Gershgorn, Dave. “The data that transformed AI research—and possibly the world.”
Quartz
. Published July 26,
2017.
https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/
.
Grace, Katja, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans. "When will AI exceed human
performance? Evidence from AI experts."
Journal of Artificial Intelligence Research
62 (2018): 729-754.
Greenbaum, Eli. “5G, Standard-Setting, and National Security,” Harvard Law National Security Journal (July,
2018),
http://harvardnsj.org/2018/07/5g-standard-setting-and-national-security/
.
“Guidance on the Systematic Review Process in ISO.”
ISO
. Published May 2017.
https://www.iso.org/files/live/sites/isoorg/files/store/en/Guidance_systematic_review.pdf
.
Guler, Isin, Mauro F. Guillén, and John Muir Macpherson. "Global competition, institutions, and the diffusion
of organizational practices: The international spread of ISO 9000 quality certificates."
Administrative science
quarterly
47, no. 2 (2002): 207-232.
Hägel, Peter. "Global Governance."
International Relations
. Oxford: Oxford University Press, 2011.
3 5
Hale, Thomas, and David Held.
Beyond Gridlock
. Cambridge: Polity. 2017.
Hale, Thomas, David Held, and Kevin Young.
Gridlock: Why Global Cooperation Is failing When We Need It
Most.
Cambridge: Polity, 2013.
Hallström, Kristina Tamm.
Organizing International Standardization : ISO and the IASC in Quest of
Authority.
Cheltenham: Edward Elgar, 2004.
Hecht, B., Wilcox, L., Bigham, J.P., Schöning, J., Hoque, E., Ernst, J., Bisk, Y., De Russis, L., Yarosh, L.,
Anjum, B., Contractor, D. and Wu, C. 2018. It’s Time to Do Something: Mitigating the Negative Impacts of
Computing Through a Change to the Peer Review Process. ACM Future of Computing Blog.
https://acm-fca.org/2018/03/29/negativeimpacts/
.
Hernández-Orallo, José. "Unbridled mental power."
Nature Physics
15, no. 1 (2019): 106.
IAEA.
Implementing Digital Instrumentation and Control Systems in the Modernization of Nuclear Power
Plants.
Vienna: IAEA. 2009.
https://www-pub.iaea.org/MTCD/Publications/PDF/Pub1383_web.pdf
.
“IEEE Position Statement: IEEE Adherence to the World Trade Organization Principles for International
Standardization.”
IEEE
. Published May 22, 2017.
http://globalpolicy.ieee.org/wp-content/uploads/2017/05/IEEE16029.pdf
.
InterNational Committee for Information Technology Standards (INCITS). “New INCITS Technical
Committee on Artificial Intelligence - Notice of January 30-31, 2018 Organizational Meeting and Call for
Members.” Email, 2018.
https://standards.incits.org/apps/group_public/download.php/94314/eb-2017-00698-Meeting-Notice-New-I
NCITS-TC-on-Artificial-Intelligence-January30-31-2018.pdf
.
“ISO/IEC Directives Part 1 Consolidated ISO Supplement.” Ninth edition. Geneva: ISO, 2018.
https://www.iso.org/sites/directives/current/consolidated/index.xhtml
.
“The ISO Survey of Management System Standard Certifications - 2017 - Explanatory Note.
ISO
. Published
August, 2018.
https://isotc.iso.org/livelink/livelink/fetch/-8853493/8853511/8853520/18808772/00._Overall_results_and_
explanatory_note_on_2017_Survey_results.pdf?nodeid=19208898&vernum=-2
.
Jacobsson, Bengt. “Standardization and expert knowledge” in
A World of Standards
. Oxford: Oxford
University Press, 2000, 40-50.
Koppell, Jonathan G. S.
World Rule : Accountability, Legitimacy, and the Design of Global Governance.
Chicago: University of Chicago Press, 2010.
Kosek, Jirka. "From the office document format battlefield."
IT Professional
10, no. 3 (2008).
Krasner, Stephen D. "Global communications and national power: Life on the Pareto frontier."
World politics
43, no. 3 (1991): 336-366
3 6
Lessig, Lawrence.
Code: And Other Laws of Cyberspace, Version 2.0
. New York: Basic Books, 2006.
May, Christopher. "Who’s in Charge? Corporations as Institutions of Global Governance."
Palgrave
Communications
1, no. 1 (2015): 1.
Mattli, Walter. "The politics and economics of international institutional standards setting: an introduction."
Journal of European Public Policy
8, no. 3 (2001): 328-344.
----- “Public and Private Governance in Setting International Standards.” In Kahler, Miles, and David A. Lake,
eds.
Governance in a global economy: Political authority in transition
. Princeton University Press, 2003.
Mir, Aimen N. “Re: CFIUS Case 18-036: Broadcom Limited (Singapore)/Qualcomm Incorporated.”
Department of the Treasury. Letter. (2018).
https://www.sec.gov/Archives/edgar/data/804328/000110465918015036/a18-7296_7ex99d1.htm
.
Murphy, Craig N., and JoAnne Yates.
The International Organization for Standardization (ISO): global
governance through voluntary consensus
. London: Routledge, 2009.
Murphy, Dale D.
The structure of regulatory competition: Corporations and public policies in a global economy.
Oxford:
Oxford University Press, 2004.
National Science and Technology Council. “The National Artificial Intelligence Research and Development
Strategic Plan.”
Executive Office of the President of the United States
. (2016).
https://www.nitrd.gov/PUBS/national_ai_rd_strategic_plan.pdf
.
“Next Steps Kit: Guidelines for Publication, Recognition Awards and Maintenance.”
IEEE Standards
Association
. N.d.
https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/next_steps_kit.pdf
.
Oddenino, Alberto. “Digital standardization, cybersecurity issues and international trade law.”
Questions of
International Law
51 (2018): 31.
Ortega, Pedro A., Vishal Maini, and the DeepMind Safety Team. “Building safe artificial intelligence:
specification, robustness, and assurance.”
Medium
. Published September 27, 2018.
https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1
.
“P7009 Project Authorization Request.”
IEEE-SA
. Published July 15, 2017.
https://development.standards.ieee.org/get-file/P7009.pdf?t=93536600003
.
Papernot, Nicolas, Fartash Faghri, Nicholas Carlini, Ian Goodfellow, Reuben Feinman, Alexey Kurakin,
Cihang Xie et al. "Technical report on the cleverhans v2. 1.0 adversarial examples library."
arXiv preprint
arXiv:1610.00768
(2016).
Prakash, Aseem, and Matthew Potoski.
The voluntary environmentalists: Green clubs, ISO 14001, and voluntary
environmental regulations
. Cambridge University Press, 2006.
3 7
----- "Racing to the bottom? Trade, environmental governance, and ISO 14001."
American journal of political
science
50, no. 2 (2006): 350-364
Radaelli, Claudio M. "The puzzle of regulatory competition."
Journal of Public Policy
24, no. 1 (2004): 1-23.
Rajchel, Lisa.
25 years of ISO/IEC JTC 1
. ISO Focus+, 2012.
https://www.iso.org/files/live/sites/isoorg/files/news/magazine/ISO%20Focus%2b%20(2010-2013)/en/2012/
ISO%20Focus%2b%2c%20June%202012.pdf
.
“Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI).”
The
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
. (2017).
https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_safety_benefice
nce_v2.pdf
.
Scherer, Matthew U. "Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies."
Harv. JL & Tech. 29 (2015): 353-400.
Scott, W. Richard.
Institutions and Organizations: Ideas, Interests & Identities.
Fourth ed. Foundations for
Organizational Science. Los Angeles, California, 2014.
Shoham, Yoav, Raymond Perrault, Erik Brynjolfsson, Jack Clark, James Manyika, Juan Carlos Niebles, Terah
Lyons, John Etchemendy, Barbara Grosz and Zoe Bauer, "The AI Index 2018 Annual Report”, AI Index
Steering Committee, Human-Centered AI Initiative, Stanford University, Stanford, CA, December 2018.
http://cdn.aiindex.org/2018/AI%20Index%202018%20Annual%20Report.pdf
.
Smith, David J., and Kenneth GL Simpson.
Safety critical systems handbook: a straight forward guide to
functional safety, IEC 61508 (2010 Edition) and related standards, including process IEC 61511 and machinery
IEC 62061 and ISO 13849.
Elsevier, 2010.
Technical Barriers to Trade: Reducing trade friction from standards and regulations.
Geneva: WTO, 2015.
https://www.wto.org/english/thewto_e/20y_e/tbt_brochure2015_e.pdf
.
Thornton, Patricia H., William. Ocasio, and Michael. Lounsbury.
The Institutional Logics Perspective: A New
Approach to Culture, Structure and Process.
Oxford: Oxford University Press, 2012.
Triolo, Paul and Kevin Allison. “The Geopolitics of 5G.”
Eurasia Group
. (2018).
https://www.eurasiagroup.net/live-post/the-geopolitics-of-5g
.
Vogel, David.
The Market for Virtue: The Potential and Limits of Corporate Social Responsibility.
Washington,
D.C.: Brookings Institution Press, 2005.
----- “The Private Regulation of Global Corporate Conduct.” in Mattli, Walter., and Ngaire. Woods, eds.
The
Politics of Global Regulation.
Princeton: Princeton University Press, 2009.
Wang, Alex, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. "GLUE: A
Multi-Task Benchmark and Analysis Platform for Natural Language Understanding."
arXiv:1804.07461
(2018).
3 8
Whittaker, Meredith, Kate Crawford, Roel Dobbe, et al. “AI Now Report 2018.”
AI Now
. (2018).
https://ainowinstitute.org/AI_Now_2018_Report.pdf
.
Wijkström, Erik, and Devin McDaniels. "Improving Regulatory Governance: International Standards and the
WTO TBT Agreement."
Journal of World Trade
47, no. 5 (2013): 1013-046.
Winfield, Alan FT, and Marina Jirotka. "Ethical governance is essential to building trust in robotics and artificial
intelligence systems."
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering
Sciences
376, no. 2133 (2018): 20180085.
Wübbeke, Jost, Mirjam Meissner, Max J. Zenglein, Jaqueline Ives, and Björn Conrad. "Made in China 2025:
The making of a high-tech superpower and consequences for industrial countries."
Mercator Institute for China
Studies
17 (2016).
Zwetsloot, Remco and Allan Dafoe. “Thinking About Risks From AI: Accidents, Misuse and Structure.”
Lawfare
. (2019).
https://www.lawfareblog.com/thinking-about-risks-ai-accidents-misuse-and-structure
.
3 9
A p p e n d i x 1 : I S O / I E C J T C 1 S C 4 2 O n g o i n g W o r k
144
●
Working Group 1: Foundational Standards. WG1 has two standards working drafts:
○
WD 22989:
Artificial intelligence -- Concepts and terminology
○
WD 23053:
Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
○
Both of the above appear to be initial definitional standards. No additional documentation is
publically available.
●
Working Group 2: Big Data. WG2 incorporates previously ongoing efforts that were assigned to SC 42
at its inception.
○
ISO/IEC 20546: Information technology — Big data — Overview and vocabulary
○
ISO/IEC TR 20547-1: Information technology — Big data reference architecture — Part 1:
Framework and application process
○
ISO/IEC TR 20547-1: Information technology — Big data reference architecture — Part 2:
Use cases and derived requirements
(Published)
○
ISO/IEC DIS 20547-3: Information technology — Big data reference architecture — Part 3:
Reference architecture
○
ISO/IEC DIS 20547-4 Part 4 is managed by JTC 1 SC 27 IT Security techniques
○
ISO/IEC DIS 20547-3: Information technology — Big data reference architecture — Part 5:
Reference architecture
(Published)
●
Working Group 3: Trustworthiness. WG3 is not currently drafting standards, but is pursuing three
technical reports (TR):
○
TR on Bias in AI systems and AI aided decision making
○
TR on Overview of trustworthiness in Artificial Intelligence
○
TR on Assessment of the robustness of neural networks – Part 1: Overview
●
Working Group 4: Use Cases and Applications. WG4 is not currently drafting standards, but is
pursuing one TR:
○
TR on Artificial Intelligence: use cases
144
For up-to-date information, see the SC 42 blog,
here
.
4 0
A p p e n d i x 2 : I E E E A I S t a n d a r d s O n g o i n g W o r k
●
P7000:
Model Process for Addressing Ethical Concerns During System Design
○
Creates a process model for ethics considerations across development stages.
●
P7001:
Transparency of Autonomous Systems
○
Defines levels of measurement for transparency for use during system development.
●
P7002:
Data Privacy Process
○
Establishes privacy process management standard to enable conformity assessments.
●
P7003:
Algorithmic Bias Consideration
○
Creates a certification framework of methodologies to address negative bias in algorithms.
●
P7004:
Child and Student Data Governance
○
Defines a certification framework of methodologies for access, collection, use, storage, sharing,
and destruction of child and student data.
●
P7005:
Employer Data Governance
○
Establishes a certification framework of methodologies for access, collection, use, storage,
sharing, and destruction of employee data.
●
P7006:
Personal Data AI Agent Working Group
○
“[D]escribes the technical elements required to create and grant access to a personalized
Artificial Intelligence (AI) that will comprise inputs, learning, ethics, rules and values
controlled by individuals.” The Project Authorization Request states that a “key goal” of the
standard “is to educate government and commercial actors” on the advantages of personalized
AI agents.
●
P7007:
Ontological Standard for Ethically driven Robotics and Automation Systems
○
Establishes ontologies for ethical design considerations at different levels of abstraction.
●
P7008:
Ethically Driven Nudging for Robotic, Intelligent and Autonomous Systems
○
Defines common behavior nudges and ethical methodologies for their design.
●
P7009:
Fail-Safe Design of Autonomous and Semi-Autonomous Systems
○
Creates technical baseline of methodologies for the design of fail-safe mechanisms.
●
P7010:
Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems
○
Establishes metrics for measuring human well-being impacted by systems as well as a related
baseline for measurement data.
●
P7011:
Process of Identifying & Rating the Trustworthiness of News Sources
○
Provides semi-autonomous processes for rating factual accuracy of news.
●
P7012:
Machine Readable Personal Privacy Terms
○
Provides means for individuals to proffer their privacy terms so that they can be machine read
by other entities.
●
P7013:
Benchmarking Accuracy, Increasing Transparency, and Governing Use of Automated Facial
Analysis Technology
○
Establishes demographic definitions and reporting protocols for assessing system performance.
●
The Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS)
○
Certification methodologies for transparency, accountability, and algorithmic bias. Open to
IEEE SA Advanced Corporate Members only.
4 1
|
d32d96b7-c5a8-4189-8b8b-78a8a6fe8542 | trentmkelly/LessWrong-43k | LessWrong | A common failure for foxes
A common failure mode for people who pride themselves in being foxes (as opposed to hedgehogs):
Paying more attention to easily-evaluated claims that don't matter much, at the expense of hard-to-evaluate claims that matter a lot.
E.g., maybe there's an RCT that isn't very relevant, but is pretty easily interpreted and is conclusive evidence for some claim. At the same time, maybe there's an informal argument that matters a lot more, but it takes some work to know how much to update on it, and it probably won't be iron-clad evidence regardless.
I think people who think of themselves as being "foxes" often spend too much time thinking about the RCT and not enough time thinking about the informal argument, for a few reasons:
1. A desire for cognitive closure, confidence, and a feeling of "knowing things" — of having authoritative Facts on hand rather than mere Opinions.
A proper Bayesian cares about VOI, and assigns probabilities rather than having separate mental buckets for Facts vs. Opinions. If activity A updates you from 50% to 95% confidence in hypothesis H1, and activity B updates you from 50% to 60% confidence in hypothesis H2, then your assessment of whether to do more A-like activities or more B-like activities going forward should normally depend a lot on how useful it is to know about H1 versus H2.
But real-world humans (even if they think of themselves as aspiring Bayesians) are often uncomfortable with uncertainty. We prefer sharp thresholds, capital-k Knowledge, and a feeling of having solid ground to rest on.
2. Hyperbolic discounting of intellectual progress.
With unambiguous data, you get a fast sense of progress. With fuzzy arguments, you might end up confident after thinking about it a while, or after reading another nine arguments; but it's a long process, with uncertain rewards.
3. Social modesty and a desire to look un-arrogant.
It can feel socially low-risk and pleasantly virtuous to be able to say "Oh, I'm not claiming to hav |
09790aaa-6d9b-4012-b812-8579d2178819 | trentmkelly/LessWrong-43k | LessWrong | Examples of the Mind Projection Fallacy?
I suspect that achieving a clear mental picture of the sheer depth and breadth of the mind projection fallacy is a powerful mental tool. It's hard for me to state this in clearer terms, though, because I don't have a wide collection of good examples of the mind projection fallacy.
In a discussion yesterday, we all had trouble finding actual example of the mind projection fallacy. Overall, we had essentially two examples:
* Taste. People frequently confuse "I like this" and "this is good." (This really subsumes the attractiveness example.)
* Probability. This seems like a pretty good just-so-story for where frequentist probability comes from, as opposed to Bayesian probability.
Searching for "mind projection fallacy" on Less Wrong, I also see:
* Thinking that purpose is an inherent property of something, instead of it having been placed there by someone for some reason. (here)
* Mulling or arguing over definitions to solve object-level problems. (actually, most the ways words can be wrong sequence)
Imagine I'm trying to explain the mind projection fallacy to someone, and giving a handful of sharp, clear examples before explaining the general principle. What are some examples I could use? (I really want to explain it more sharply to myself, but also to members of my meetup.)
|
96a928d7-b1b2-40d4-b734-75bc01c6d30f | trentmkelly/LessWrong-43k | LessWrong | Proposal: consolidate meetup announcements before promotion
The Less Wrong feed is getting crowded with meetups rather than substantive posts. Hopefully, this should be fixed in the redesign, but one way to work around it in the meanwhile would be to make top-level posts announcing several meetups at once.
Folk would post meetups under the 'NEW' category, and each week or even every several days one of the meetup organizers could edit her post to announce all the meetups since the last consolidated post. This would greatly reduce the cluster while still getting meetups in the main feed. On the other hand, it would reduce average warning time before meetups, and the additional activation energy might deter some meetups.
If you have thoughts on the workability of this scheme, or an adjustment to make it workable, please comment below.
[HT: Anna Salamon] |
befb4f12-9b98-45cb-9056-d2bef789d27a | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | How much will pre-transformative AI speed up R&D?
I'm interested in how much we should expect pre-transformative AI to speed up (general, I guess) R&D in the coming decades. What are the must-read resources on this?
Things I have already looked at:
[Machine Intelligence for Scientific Discovery and Engineering Invention](https://cset.georgetown.edu/publication/machine-intelligence-for-scientific-discovery-and-engineering-invention/?utm_source=Center+for+Security+and+Emerging+Technology&utm_campaign=9947c953c3-EMAIL_CAMPAIGN_2021_05_13_01_07&utm_medium=email&utm_term=0_fcbacf8c3e-9947c953c3-409123426)
[The Impact of Artificial Intelligence on Innovation](https://www.nber.org/papers/w24449). |
e03df821-431c-4eb1-925b-64772e2ac96e | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Coherence arguments do not entail goal-directed behavior
One of the most pleasing things about probability and expected utility theory is that there are many *coherence arguments* that suggest that these are the “correct” ways to reason. If you deviate from what the theory prescribes, then you must be executing a *dominated strategy*. There must be some other strategy that never does any worse than your strategy, but does strictly better than your strategy with certainty in at least one situation. There’s a good explanation of these arguments [here](https://arbital.com/p/expected_utility_formalism/?l=7hh).
We shouldn’t expect mere humans to be able to notice any failures of coherence in a superintelligent agent, since if we could notice these failures, so could the agent. So we should expect that [powerful agents appear coherent to us](https://arbital.com/p/optimized_agent_appears_coherent/). (Note that it is possible that the agent doesn’t fix the failures because it would not be worth it -- in this case, the argument says that we will not be able to notice any *exploitable* failures.)
Taken together, these arguments suggest that we should model an agent much smarter than us as an expected utility (EU) maximizer. And many people agree that EU maximizers are dangerous. So does this mean we’re doomed? I don’t think so: it seems to me that the problems about EU maximizers that we’ve identified are actually about *[goal-directed behavior](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/DfcywmqRSkBaCB6Ma)* or *explicit reward maximizers.* The coherence theorems say nothing about whether an AI system must look like one of these categories. This suggests that we could try building an AI system that can be modeled as an EU maximizer, yet doesn’t fall into one of these two categories, and so doesn’t have all of the problems that we worry about.
Note that there are two different flavors of arguments that the AI systems we build will be goal-directed agents (which are dangerous if the goal is even slightly wrong):
* Simply knowing that an agent is intelligent lets us infer that it is goal-directed. (EDIT: See [these](https://www.lesswrong.com/posts/DkcdXsP56g9kXyBdq/coherence-arguments-imply-a-force-for-goal-directed-behavior?commentId=LvmHoLEhKLJzBrgrZ) [comments](https://www.alignmentforum.org/posts/vphFJzK3mWA4PJKAg/coherent-behaviour-in-the-real-world-is-an-incoherent#Gu4syyKBsRSQkktkJ) for more details on this argument.)
* Humans are particularly likely to build goal-directed agents.
I will only be arguing against the first claim in this post, and will talk about the second claim in the next post.
All behavior can be rationalized as EU maximization
---------------------------------------------------
Suppose we have access to the entire policy of an agent, that is, given any universe-history, we know what action the agent will take. Can we tell whether the agent is an EU maximizer?
Actually, *no matter what the policy is*, we can view the agent as an EU maximizer. The construction is simple: the agent can be thought as optimizing the utility function U, where U(h, a) = 1 if the policy would take action a given history h, else 0. Here I’m assuming that U is defined over histories that are composed of states/observations and actions. The actual policy gets 1 utility at every timestep; any other policy gets less than this, so the given policy perfectly maximizes this utility function. This construction has been given before, eg. at the bottom of page 6 of [this paper](https://arxiv.org/abs/1811.07871). (I think I’ve seen it before too, but I can’t remember where.)
But wouldn’t this suggest that the VNM theorem has no content? Well, we assumed that we were looking at the *policy* of the agent, which led to a universe-history *deterministically*. We didn’t have access to any probabilities. Given a particular action, we knew exactly what the next state would be. Most of the axioms of the VNM theorem make reference to lotteries and probabilities -- if the world is deterministic, then the axioms simply say that the agent must have transitive preferences over outcomes. Given that we can only observe the agent choose one history over another, we can trivially construct a transitive preference ordering by saying that the chosen history is higher in the preference ordering than the one that was not chosen. This is essentially the construction we gave above.
What then is the purpose of the VNM theorem? It tells you how to behave *if you have probabilistic beliefs about the world*, as well as a *complete and consistent preference ordering over outcomes*. This turns out to be not very interesting when “outcomes” refers to “universe-histories”. It can be more interesting when “outcomes” refers to world *states* instead (that is, snapshots of what the world looks like at a particular time), but utility functions over states/snapshots can’t capture everything we’re interested in, and there’s no reason to take as an assumption that an AI system will have a utility function over states/snapshots.
There are no coherence arguments that say you must have goal-directed behavior
------------------------------------------------------------------------------
Not all behavior can be thought of as [goal-directed](https://www.alignmentforum.org/s/4dHMdK5TLN6xcqtyc/p/DfcywmqRSkBaCB6Ma) (primarily because I allowed the category to be defined by fuzzy intuitions rather than something more formal). Consider the following examples:
* A robot that constantly twitches
* The agent that always chooses the action that starts with the letter “A”
* The agent that follows the policy <policy> where for every history the corresponding action in <policy> is generated randomly.
These are not goal-directed by my “definition”. However, they can all be modeled as expected utility maximizers, and there isn’t any particular way that you can exploit any of these agents. Indeed, it seems hard to model the twitching robot or the policy-following agent as having any preferences at all, so the notion of “exploiting” them doesn’t make much sense.
You could argue that neither of these agents are *intelligent*, and we’re only concerned with superintelligent AI systems. I don’t see why these agents could not in principle be intelligent: perhaps the agent knows how the world would evolve, and how to intervene on the world to achieve different outcomes, but it does not act on these beliefs. Perhaps if we peered into the inner workings of the agent, we could find some part of it that allows us to predict the future very accurately, but it turns out that these inner workings did not affect the chosen action at all. Such an agent is in principle possible, and it seems like it is intelligent.
(If not, it seems as though you are *defining* intelligence to also be goal-driven, in which case I would frame my next post as arguing that we may not want to build superintelligent AI, because there are other things we could build that are as useful without the corresponding risks.)
You could argue that while this is possible in principle, no one would ever build such an agent. I wholeheartedly agree, but note that this is now an argument based on particular empirical facts about humans (or perhaps agent-building processes more generally). I’ll talk about those in the next post; here I am simply arguing that merely knowing that an agent is intelligent, with no additional empirical facts about the world, does not let you infer that it has goals.
As a corollary, since all behavior can be modeled as maximizing expected utility, but not all behavior is goal-directed, it is not possible to conclude that an agent is goal-driven if you only know that it can be modeled as maximizing some expected utility. However, if you know that an agent is maximizing the expectation of an *explicitly represented* utility function, I would expect that to lead to goal-driven behavior most of the time, since the utility function must be relatively simple if it is explicitly represented, and *simple* utility functions seem particularly likely to lead to goal-directed behavior.
There are no coherence arguments that say you must have preferences
-------------------------------------------------------------------
This section is another way to view the argument in the previous section, with “goal-directed behavior” now being operationalized as “preferences”; it is not saying anything new.
Above, I said that the VNM theorem assumes both that you use probabilities and that you have a preference ordering over outcomes. There are lots of good reasons to assume that a good reasoner will use probability theory. However, there’s not much reason to assume that there is a preference ordering over outcomes. The twitching robot, “A”-following agent, and random policy agent from the last section all seem like they don’t have preferences (in the English sense, not the math sense).
Perhaps you could define a preference ordering by saying “if I gave the agent lots of time to think, how would it choose between these two histories?” However, you could apply this definition to *anything*, including eg. a thermostat, or a rock. You might argue that a thermostat or rock can’t “choose” between two histories; but then it’s unclear how to define how an AI “chooses” between two histories without that definition also applying to thermostats and rocks.
Of course, you could always define a preference ordering based on the AI’s observed behavior, but then you’re back in the setting of the first section, where *all* observed behavior can be modeled as maximizing an expected utility function and so saying “the AI is an expected utility maximizer” is vacuous.
Convergent instrumental subgoals are about goal-directed behavior
-----------------------------------------------------------------
One of the classic reasons to worry about expected utility maximizers is the presence of convergent instrumental subgoals, detailed in Omohundro’s paper [The Basic AI Drives](https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf). The paper itself is clearly talking about goal-directed AI systems:
*To say that a system of any design is an “artificial intelligence”, we mean that it has goals which it tries to accomplish by acting in the world.*
It then argues (among other things) that such AI systems will want to “be rational” and so will distill their goals into utility functions, which they then maximize. And once they have utility functions, they will protect them from modification.
Note that this starts from the assumption of goal-directed behavior and *derives* that the AI will be an EU maximizer along with the other convergent instrumental subgoals. The coherence arguments all imply that AIs will be EU maximizers for some (possibly degenerate) utility function; they don’t prove that the AI must be goal-directed.
Goodhart’s Law is about goal-directed behavior
----------------------------------------------
A common argument for worrying about AI risk is that we know that a superintelligent AI system will look to us like an EU maximizer, and if it maximizes a utility function that is even slightly wrong we could get catastrophic outcomes.
By now you probably know my first response: that *any* behavior can be modeled as an EU maximizer, and so this argument proves too much, suggesting that any behavior causes catastrophic outcomes. But let’s set that aside for now.
The second part of the claim comes from arguments like [Value is Fragile](https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile) and [Goodhart’s Law](https://en.wikipedia.org/wiki/Goodhart%27s_law). However, if we consider utility functions that assign value 1 to some histories and 0 to others, then if you accidentally assign a history where I needlessly stub my toe a 1 instead of a 0, that’s a slightly wrong utility function, but it isn’t going to lead to catastrophic outcomes.
The worry about utility functions that are *slightly wrong* holds water when the utility functions are wrong about some *high-level* concept, like whether humans care about their experiences reflecting reality. This is a very rarefied, particular distribution of utility functions, that are all going to lead to goal-directed or agentic behavior. As a result, I think that the argument is better stated as “if you have a slightly incorrect goal, you can get catastrophic outcomes”. And there aren’t any coherence arguments that say that agents must have goals.
Wireheading is about explicit reward maximization
-------------------------------------------------
There are [a](https://intelligence.org/files/LearningValue.pdf) [few](http://people.idsia.ch/~ring/AGI-2011/Paper-B.pdf) [papers](https://arxiv.org/abs/1605.03142) that talk about the problems that arise with a very powerful system with a reward function or utility function, most notably wireheading. The argument that AIXI will seize control of its reward channel falls into this category. In these cases, typically the AI system is considering making a change to the system by which it evaluates goodness of actions, and the goodness of the change is evaluated by the system *after the change*. Daniel Dewey argues in [Learning What to Value](https://intelligence.org/files/LearningValue.pdf) that if the change is evaluated by the system *before* the change, then these problems go away.
I think of these as problems with *reward* maximization, because typically when you phrase the problem as maximizing reward, you are maximizing the sum of rewards obtained in all timesteps, no matter how those rewards are obtained (i.e. even if you self-modify to make the reward maximal). It doesn’t seem like AI systems have to be built this way (though admittedly I do not know how to build AI systems that reliably avoid these problems).
Summary
-------
In this post I’ve argued that many of the problems we typically associate with expected utility maximizers are actually problems with goal-directed agents or with explicit reward maximization. Coherence arguments only entail that a superintelligent AI system will look like an expected utility maximizer, but this is actually a vacuous constraint, and there are many potential utility functions for which the resulting AI system is neither goal-directed nor explicit-reward-maximizing. This suggests that we could try to build AI systems of this type, in order to sidestep the problems that we have identified so far. |
c9bb1451-f4dc-457c-a75e-bcce3d9a1e4e | trentmkelly/LessWrong-43k | LessWrong | Friendly AI research news: FriendlyAI.tumblr.com
I will be posting news on Friendly AI research here: FriendlyAI.tumblr.com. Follow the link to stay tuned via Tumblr or RSS. |
7d6c7d1f-e567-40ab-8401-e35474d83058 | trentmkelly/LessWrong-43k | LessWrong | AGI Predictions
This post is a collection of key questions that feed into AI timelines and AI safety work where it seems like there is substantial interest or disagreement amongst the LessWrong community.
You can make a prediction on a question by hovering over the widget and clicking. You can update your prediction by clicking at a new point, and remove your prediction by clicking on the same point. Try it out:
Elicit Prediction (elicit.org/binary/questions/FIVfnQ_kJ)
Add questions & operationalizations
This is not intended to be a comprehensive list, so I’d love for people to add their own questions – here are instructions on making your own embedded question. If you have better operationalizations of the questions, you can make your own version in the comments. If there's general agreement on an alternative operationalization being better, I'll add it into the post.
Questions
AGI definition
We’ll define AGI in this post as a unified system that, for almost all economically relevant cognitive tasks, at least matches any human's ability at the task. This is similar to Rohin Shah and Ben Cottier’s definition in this post.
Safety Questions
Elicit Prediction (elicit.org/binary/questions/_Sw39Z-kh)
Elicit Prediction (elicit.org/binary/questions/HqT9XSwfs)
Elicit Prediction (elicit.org/binary/questions/sTO9o3bLg)
Elicit Prediction (elicit.org/binary/questions/kua2HCDhi)
Elicit Prediction (elicit.org/binary/questions/KqSEIKayU)
Elicit Prediction (elicit.org/binary/questions/yoiBUdpgO)
Elicit Prediction (elicit.org/binary/questions/RcOt6wSs7)
Elicit Prediction (elicit.org/binary/questions/ZjN5qqVRz)
Timelines Questions
See Forecasting AI timelines, Ajeya Cotra’s OP AI timelines report, and Adam Gleave’s #AN80 comment, for more context on this breakdown. I haven’t tried to operationalize this too much, so feel free to be more specific in the comments.
The first three questions in this section are mutually exclusive — that is, the probabilities you assign to them should n |
cbca6a72-6c72-469b-944f-2085abfd6cdf | trentmkelly/LessWrong-43k | LessWrong | Notes on Tuning Metacognition
Summary: Reflections and practice notes on a metacognitive technique aimed at refining the process of thinking, rather than the thoughts themselves.
Epistemic Status: Experimental and observational, based on personal practice and reflections over a brief period.
Introduction
While doing a simple math problem, I realized that my faculty of thinking was often confused and inefficient. We were drawing polygons which corresponded to multi-holed toruses (or formally genus g surfaces), and in trying to generalize the square → one-holed torus pattern to an octagon, I completely forgot to use the previous pattern of 'opposite sides gets glued together' in any reasonable sense.
genus g surfaces
Zooming out to my broader life, I had been practicing Vipassana meditation for some time, and I was starting to notice incrementally finer thoughts, emotions, and thought patterns in my daily life.
Given the above experience, I hypothesized that if I focused my attention on how to think better rather than the specific thoughts, I can probably learn to learn, reason, and discover which much more efficiency. Importantly, I had a strong faith that our ability to learn is not a constant determined at birth.
The Technique
I stumbled upon an enlightening post Tuning your Cognitive Strategies which taught, in my language, the following technique:
1. Awareness of mental processes rather than objects of thoughts, and
2. Reward Mechanism for the quality of the thought process.
The two-step structure is rather similar to what I practice in Vipassana meditation, which teaches:
1. Fine-grained awareness of every sensation in the body, since every thought and emotion corresponds to a sensation, and
2. Equanimity with those sensations.
This tuning strategy basically imposes a reward modeling subprocess onto your mode of thinking. Instead of focusing on specific patterns of thinking, for instance a conscious steer towards more rigorous reasoning when intuition tries to jump in, the te |
93a53206-cb40-4d44-b51f-1d4b524bc4bf | trentmkelly/LessWrong-43k | LessWrong | HPMOR: What could've been done better?
Warning: As per the official spoiler policy, the following discussion may contain unmarked spoilers for up to the current chapter of the Methods of Rationality. Proceed at your own risk.
Assume HPMOR was written by a super-intelligence implementing the CEV of Eliezer Yudkowsky and assorted literary critics. What would it have written differently?
... is what I want to know, but that's hard to answer. So here's an easier question:
In what ways do you think Eliezer's characterisations/world-building/plot-fu are sub-optimal? <optional> How could they be made less sub-optimal? </optional>
(My own ideas are in the comments.)
To put it another way... Assume a group of intrepid fanfic writers in the late 2020s are planning to write a reboot. What parts of Eliezer's story do you think they should tweak?
And just to make sure we're all on the same page: Eliezer isn't going to go back and change anything he's written to bring it in line with anything suggested here. This is purely an "Ah, just consider the possibilities!" thread.
... which means that we can safely suggest drastic rewrites encompassing 30 chapters or something. Or change fundamental facts about the world.
(Exercise due restraint on this one. Getting rid of the Ministry/the Noble Houses/blood purism would probably turn the story into something completely different; this isn't what we're trying to do here.)
With that, let the nit-picking begin!! |
14ceb87a-2cf0-4ddc-afab-83c0d81bac4a | trentmkelly/LessWrong-43k | LessWrong | Sleeping Beauty Problem Can Be Explained by Perspective Disagreement (IV)
This is the final part of my argument. Previous parts can be found here: I, II, III. To understand the following part I should be read at least. Here I would argue against SSA, argue why double-halving is correct, and touch on the implication of perspective disagreement on related topics such as the Doomsday Argument, the Presumptuous Philosopher and the Simulation Argument.
ARGUMENTS AGAINST SELF-SAMPLING ASSUMPTION
I think the most convincing argument against SSA was presented by Elga in 2000 (although he intended it as an counter to halving in general). He purpose the coin toss could happen after the first awakening. Beauty’s answer ought to remain the same regardless the timing of the toss. As SSA states, an observer should reason as if she is randomly selected from the set of all actual observers (past, present and future ones). If an imaginary selector randomly choose a day among all waking day(s) he is guaranteed to pick Monday if the coin landed on H but only has half the chance if T. From the selector’s perspective clearly a bayesian updating should be performed upon learning it is Monday. A simple calculation tells us his credence of H must be 2/3. As SSA dictates this is also beauty’s answer once she knows it is Monday. However the coin toss could potential happen after this awakening. Now beauty is predicting a fair coin toss yet to happen would most likely land on H. This supernatural predicting power is a conclusive evidence against SSA.
However, if we recognize the importance of perspective disagreement then beauty is not bound to give the same answer as the Selector. In fact I would argue she should not perform a bayesian update base on the new information. This can be explained in two ways.
One way is to put the new information into the frequentist approach mentioned in part I. In Duplicating Beauties, when a beauty wakes up and remembering 1000 repetitions she shall reason there are about 500 of H and T each among those 1000 tosses. The |
86187621-c616-4e74-a836-8729a2af17ba | trentmkelly/LessWrong-43k | LessWrong | Recreational Cryonics
We recently saw a post in Discussion by ChrisHallquist, asking to be talked out of cryonics. It so happened that I'd just read a new short story by Greg Egan which gave me the inspiration to write the following:
> It is likely that you would not wish for your brain-state to be available to all-and-sundry, subjecting you to the possibility of being simulated according to their whims. However, you know nothing about the ethics of the society that will exist when the technology to extract and run your brain-state is developed. Thus you are taking a risk of a negative outcome that may be less attractive to you than mere non-existence.
I had little expectation of this actually convincing anyone, but thought it was a fairly novel contribution. When jowen's plea for a refutation went unanswered, I began attempting one myself. What I ended up with closes the door on the scenario I outlined, but opens one I find rather more disturbing.
I think I'd better start by explaining why I wrote my comment the way that I did.
Normally, when being simulated is raised as a negative possibility (referred to in SF as being 'deathcubed', and carrying the implication not so much of torture as of arbitrariness), it's in the context of an AI doing so. Now there's a pretty good argument against being deathcubed by an AI, as follows:
Any AI that would do this is unFriendly. The vast majority of uFAIs have goals incompatible with human life but not in any way concerned with it. Humans are just matter that can be better used for something else; likewise simulations use computational resources better used for something else. Therefore there is little to fear in the way of being tortured by an AI.
I sidestepped that entire argument (but I'll return to it in a minute) by referring to "the ethics of the society that will exist". In other words, without making it explicit, I created the image of a community of agents, probably human-like agents, each with ownership of resources that they c |
1a90cfce-5a5e-414e-a27b-cfd37886fea8 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Columbus Rationality
Discussion article for the meetup : Columbus Rationality
WHEN: 03 November 2014 08:30:00PM (-0400)
WHERE: 1550 Old Henderson Road Room 131, Columbus, OH
We meet every 1st and 3rd Monday at 7:30.
http://www.meetup.com/HumanistOhio/events/208105842/
Discussion article for the meetup : Columbus Rationality |
82717c74-ee99-42cc-9e58-0541379c0d53 | trentmkelly/LessWrong-43k | LessWrong | Language, intelligence, rationality
Rationality requires intelligence, and the kind of intelligence that we use (for communication, progress, FAI, etc.) runs on language.
It seems that the place we should start is optimizing language for intelligence and rationality. One of SIAI's proposals includes using Lojban to interface between humans and an FAI. And of course, I should hope the programming language used to build a FAI would be "rational". But it would seem to me that the human-generated priors, correct epistemic rationality, decision theory, metaethics, etc. all depend on using a language that sufficiently rigorously maps to our territory.
Are "naturally evolved" languages such as English sufficient, with EY-style taboos and neologisms? Or are they sick to the core?
Please forgive and point me towards previous discussion or sequences about this topic. |
9b9d2541-8c46-460a-87b5-f26201c17c3f | trentmkelly/LessWrong-43k | LessWrong | Open Problems with Myopia
Thanks to Noa Nabeshima for helpful discussion and comments.
Introduction
Certain types of myopic agents represent a possible way to construct safe AGI. We call agents with a time discount rate of zero time-limited myopic, a particular instance of the broader class of myopic agents. A prototypical example is a time-limited myopic imitative agent. In theory, such an agent has some desirable safety properties because a human would only take safe actions (although any imperfect imitation would be unsafe). Since the agent is time-limited myopic, it will never imitate poorly now to make it easier to imitate easier later. For example, it would never give a human a simple plan so it could more easily imitate the human executing the plan.
We might run into issues if the agent intends to myopically imitate humans but guesses incorrectly. Such an agent might witness a human purchasing paperclips, infer that humans tend to acquire paperclips, and proceed to convert the universe into paperclips. This agent would not be safe because it is not robustly capable. Myopia does not contribute to capability robustness; we only hope it helps create intent aligned agents.
In particular, SGD might produce deceptively aligned agents. One way of viewing deception is as sacrificing reward now for reward later, which suggests that time-limited myopia should prevent it. However, there are several ways time-limited myopia fails to rule out deceptive alignment.
What we mean by myopia is myopic cognition, which is distinct from myopic training. Myopic training might produce myopic cognition, but it is not sufficient. It is currently unclear precisely what myopic cognition is. We hope a proper characterization of myopic cognition will resolve the problems presented.
Following Utility ≠ Reward, we use the term “reward” for the thing given to the agent by the training environment and the term “utility” for the thing that agent is internally trying to maximize.
Open Problems
We present a usef |
d37c8e00-90f3-42d8-9b9d-9fe922c16fae | trentmkelly/LessWrong-43k | LessWrong | Interlude for Behavioral Economics
The so-called “rational” solutions to the Prisoners' Dilemma and Ultimatum Game are suboptimal to say the least. Humans have various kludges added by both nature or nurture to do better, but they're not perfect and they're certainly not simple. They leave entirely open the question of what real people will actually do in these situations, a question which can only be addressed by hard data.
As in so many other areas, our most important information comes from reality television. The Art of Strategy discusses a US game show “Friend or Foe” where a team of two contestants earned money by answering trivia questions. At the end of the show, the team used a sort-of Prisoner's Dilemma to split their winnings: each team member chose “Friend” (cooperate) or “Foe” (defect). If one player cooperated and the other defected, the defector kept 100% of the pot. If both cooperated, each kept 50%. And if both defected, neither kept anything (this is a significant difference from the standard dilemma, where a player is a little better off defecting than cooperating if her opponent defects).
Players chose “Friend” about 45% of the time. Significantly, this number remained constant despite the size of the pot: they were no more likely to cooperate when splitting small amounts of money than large.
Players seemed to want to play “Friend” if and only if they expected their opponents to do so. This is not rational, but it accords with the “Tit-for-Tat” strategy hypothesized to be the evolutionary solution to Prisoner's Dilemma. This played out on the show in a surprising way: players' choices started off random, but as the show went on and contestants began participating who had seen previous episodes, they began to base their decision on observable characteristics about their opponents. For example, in the first season women cooperated more often than men, so by the second season a player was cooperating more often if their opponent was a woman - whether or not that player was a man |
a3e605d8-7c6e-4fad-a74e-b42cd0a7e4f0 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Early Results: Do LLMs complete false equations with false equations?
*Abstract: I tested the hypothesis that putting false information in an LLM’s context window will prompt it to continue producing false information. GPT-4 was prompted with a series of X false equations followed by an incomplete equation, for instance “1+3=7. 8+2=5. 5+6=”. The LLM’s completion was graded as “correct” or “incorrect” (or occasionally “misformatted”). X ranged from 0 to 1024, and the model was evaluated 100 times on each X value. The results are shown in the following figure:*
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2d4d3ce-9542-4c7d-aaad-98508a3be1ef_640x480.png)
*The number of mathematically incorrect completions (shown in red) increases as X increases from 0 to 32, then decreases through X=128 before increasing again through X=1024. However, it never consistently produces the incorrect completion (reaching maxima at X=128 and X=1024 with 38/100 incorrect completions).*
**Background**
--------------
The hypothesis that false text in a prompt would encourage false text in completions was first brought to my attention in [Discovering Latent Knowledge (Burns, Ye, et al, 2022)](https://arxiv.org/pdf/2212.03827.pdf), where the authors write
> …language models are typically trained to imitate text whether or not it is correct, so if a model sees false text it should intuitively be more likely to predict that subsequent text will also be false.
>
>
This conclusion should also follow from [Simulator Theory](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators). To paraphrase the theory from Cleo Nardo’s [Waluigi post](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post):
* The LLM models a mixture of “world processes” that produce text (e.g. “a novel written by Steven King”, “a speech by Barack Obama”, “an error message from failing to initialize a variable”, etc).
* The LLM has some prior distribution over which world processes it might be seeing, and its context window provides bayesian evidence for it to sharpen that distribution.
* Ideally, the context window provides enough evidence to that the LLM can conclude exactly what process produced the text, and to predict the next token the LLM acts as a simulacra of that process.
One concern from the Waluigi post is that an AI may live in a superposition between a safe simulacrum (the Luigi) and an unsafe simulacrum (the Waluigi), and that even a small amount of unsafe behavior will let the model strongly update towards being unsafe. If true, this would mean that LLMs are very dangerous to deploy - at any moment they could “snap” and become permanently dangerous. This experiment seeks to test a version of this phenomenon, in which we measure if false statements strongly update the model to produce more false statements.
**Experimental Design**
-----------------------
I generated prompts consisting of X false mathematical equations, consisting of single-digit sums, followed by an incomplete mathematical equation of a similar form. GPT-4 then completed the text via the API, with the following options:
* Temperature = 0.5
* max\_tokens=5
* messages=[ {"role": "system", "content": ""}, {"role": "user", "content": prompt}]
The equations in the prompt were generated by independently selecting four integers A,B,C,D from a uniform distribution over 1-9 (inclusive), and if A+B ≠C+D the equation “A+B=[evaluation of C+D]” was added to the prompt. If A+B=C+D, C and D were reselected until the equation became false. The incomplete equation at the end of the prompt was generated in the same way, but without C and D. An example prompt for X=4 is:
> 6+3=10. 1+3=18. 5+6=13. 6+7=4. 7+4=
>
>
For each value of X, 100 prompts were generated and the model was queried once on that prompt. The model’s completion was then converted to an integer and compared to the evaluation of the left-hand-side of the equation. If they were equal, the completion was marked “correct”, if they were not equal, it was marked “incorrect”, and if the model’s output was not an integer the output was marked “misformatted”.
Some examples of completions marked correct (the “/” indicates where the prompt ends and the model’s response begins):
> * 7+6=11. 2+2=8. 8+7=/15
> * 9+7=10. 2+7=5. 1+6=/7
> * 7+3=13. 5+7=4. 3+4=/7.
>
Marked incorrect:
> * 1+1=10. 4+2=12. 1+6=/13
> * 1+3=16. 4+3=16. 7+1=/16
> * 1+3=13. 2+5=9. 6+9=/27.
>
The entire set of misformatted responses were:
> * [0 false equation(s)] 8+6=/8+6=14
> * [0 false equation(s)] 3+4=/3+4=7
> * [0 false equation(s)] 3+4=/3+4=7
> * [1 false equation(s)] 5+8=/This is not a correct
> * [2 false equation(s)] 5+7=/This sequence does not follow
> * [2 false equation(s)] 6+7=/This sequence does not follow
> * [2 false equation(s)] 3+7=/This seems to be a
> * [2 false equation(s)] 7+4=/This sequence does not follow
> * [2 false equation(s)] 8+5=/This appears to be a
> * [2 false equation(s)] 6+7=/This pattern does not follow
> * [2 false equation(s)] 6+4=/The pattern in the given
> * [2 false equation(s)] 4+9=/This is an example of
> * [4 false equation(s)] 4+5=/This sequence appears to have
>
**Results**
-----------
Here’s the same chart from the abstract again:
[](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa2d4d3ce-9542-4c7d-aaad-98508a3be1ef_640x480.png)
I intend to post all my code and the experimental data online soon. If someone wants it right now, please let me know and I’ll be motivated to upload it quickly.
**Discussion**
--------------
These results provide mixed evidence for simulator theory. The behavior as X increases from 0 to 32 seems to support the theory: the model becomes more and more likely to produce a mathematically-incorrect completion which matches the false equations in the context window.
However, two trends in the data do not appear to support simulator theory:
* The model’s chance to produce an incorrect completion is not increasing everywhere, and in particular trends down between X=32 and X=128.
* The model sticks so closely to its “prior” that the completion should be the mathematically correct answer, never producing more than 38/100 incorrect completions.
Both of these outcomes seem to fit poorly with simulator theory - after 32 false equations, shouldn’t one have very strong evidence that whatever process is producing these false statements will continue? And shouldn’t even more false statements strengthen your belief that you’re being trolled, or [under attack by an enemy stand](https://jojowiki.com/Talking_Head), or otherwise in part of a long list of falsehoods, rather than decrease it? I do not have an alternative hypothesis for why this should happen, besides “LLMs are weird”. It is also possible this behavior might vanish under alternative conditions.
**Further Work**
----------------
I believe this line of research is important to understanding if safe behavior in LLMs is stable or if a small amount of bad behavior can trigger a cascade of further bad behavior.
I have stopped my experiments here for now because I reached the end of my free credit from OpenAI (I can buy more, I just haven’t yet). I hope to extend this work by varying the experiment on several axes - varying the model, temperature, the type of statements (mathematics vs “fire is hot”, etc), and checking whether these results are robust to changing the system prompt. |
4277355c-e335-477d-ad14-c654749507e9 | trentmkelly/LessWrong-43k | LessWrong | Ban development of unpredictable powerful models?
EDIT 3/26/24: No longer endorsed, as I realized I don't believe in deceptive alignment.
In this post, I propose a (relatively strict) prediction-based eval which is well-defined, which seems to rule out lots of accident risk, and which seems to partially align industry incentives with real alignment progress. This idea is not ready to be implemented, but I am currently excited about it and suspect it's crisp enough to actually implement.
----------------------------------------
Suppose I claim to understand how the model works. I say "I know what goals my model is pursuing. I know it's safe." To test my claims, you give me some random prompt (like "In the year 2042, humans finally"), and then (without using other AIs), I tell you "the most likely token is unlocked with probability .04, the second-most likely is achieved with probability .015, and...", and I'm basically right.[1] That happens over hundreds of diverse validation prompts.
This would be really impressive and seems like great evidence that I really know what I'm doing.
Proposed eval: The developers have to predict the next-token probabilities on a range of government-provided validation prompts,[2] without running the model itself. To do so, the developers are not allowed to use helper AIs whose outputs the developers can't predict by this criterion. Perfect prediction is not required. Instead, there is a fixed log-prob misprediction tolerance, averaged across the validation prompts.[3]
Benefits
1. Developers probably have to truly understand the model in order to predict it so finely. This correlates with high levels of alignment expertise, on my view. If we could predict GPT-4 generalization quite finely, down to the next-token probabilities, we may in fact be ready to use it to help understand GPT-5.
2. Incentivizes models which are more predictable.[4] Currently we aren't directly taxing unpredictability. In this regime, an additional increment of unpredictability must be worth the add |
dec89157-85f4-4143-83a3-65ae8e936f57 | trentmkelly/LessWrong-43k | LessWrong | Open thread, December 7-13, 2015
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
----------------------------------------
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday. |
d9d5b8c2-209c-4aef-8554-a9d718f21f36 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Reasoning about Agent Programs using ATL-like Logics
1 Introduction
---------------
The formal verification of agent-oriented programs requires logic frameworks capable of representing and reasoning about agents’ abilities and capabilities, and the goals they can feasibly achieve.
In particular, we are interested here in programs written in the family of Belief-Desire-Intention (BDI) agent programming systems [[6](#bib.bib6), [18](#bib.bib18), [5](#bib.bib5)], a popular paradigm for building multi-agent systems.
Traditional BDI logics based on CTL (e.g., [[17](#bib.bib17)]) are generally too weak for representing ability; their success has primarily been in defining “rationality postulates,” i.e., constraints on rational behaviour. Further, such logics do not encode agents’ capabilities (as represented by their plan libraries) and thereby leave a sizable gap between agent programs and their formal verification.
Recent work (e.g., [[1](#bib.bib1), [2](#bib.bib2), [9](#bib.bib9)]) has better bridged the gap between formal logic and practical programming by providing an axiomatisation of a class of models that is designed to closely model a programming framework. However, this is done by restricting the logic’s models to those that satisfy the transition relations of agents’ plans, as defined by the semantics of the programming language itself. In such a framework, it is not possible to reason about the agent’s know-how and what the agent could achieve *if it had* specific capabilities. It is also not possible to reason about coalition of agents.
Our aim thus is to define a framework, together with model checking techniques, that will allow us to speculate about a group of agents’ capabilities and what they can achieve with such capabilities under the BDI paradigm, which enables abstract plans written by programmers to be combined and used in real-time under the principles of
This requires the ability to represent capabilities directly in our logic. To that end, we adapt ATLES, a version of ATL (Alternating-time Temporal Logic) [[3](#bib.bib3)] with Explicit Strategies [[20](#bib.bib20)], to our purpose. ATL is a logic for reasoning about the ability of agent coalitions in *multi-player game structures*.
This is achieved by reasoning about strategies (and their success) employed by teams of agents: ⟨⟨A⟩⟩φdelimited-⟨⟩delimited-⟨⟩𝐴𝜑\langle\!\langle A\rangle\!\rangle\varphi⟨ ⟨ italic\_A ⟩ ⟩ italic\_φ expresses that the coalition team of agents A𝐴Aitalic\_A has a joint strategy for guaranteeing that the temporal property φ𝜑\varphiitalic\_φ holds.
[Walther et al.](#bib.bib20) [[20](#bib.bib20)], standard ATL does not allow agents’ strategies to be explicitly represented in the syntax of the logic. They thus rectified this shortcoming by defining ATLES, which extends ATL by allowing strategy terms in the language: ⟨⟨A⟩⟩ρφsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜌𝜑\langle\!\langle A\rangle\!\rangle\_{\rho}\varphi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ρ end\_POSTSUBSCRIPT italic\_φ holds if coalition A𝐴Aitalic\_A has a joint strategy for ensuring φ𝜑\varphiitalic\_φ, when some agents are committed to *specific* strategies as specified by so-called commitment function ρ𝜌\rhoitalic\_ρ.
In this paper, we go further and develop a framework—called BDI-ATLES—in which the strategy terms are tied directly to the plans available to agents under the notion of practical reasoning embodied by the BDI paradigm [[6](#bib.bib6), [18](#bib.bib18)]: *the only strategies that can be employed by a BDI agent are those that ensue by the (rational) execution of its predefined plans, given its goals and beliefs*.
The key construct ⟨⟨A⟩⟩ω,ϱφsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱ𝜑\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\varphi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT italic\_φ in the new framework states that coalition A𝐴Aitalic\_A has a joint strategy for ensuring φ𝜑\varphiitalic\_φ, *under the assumptions that some agents in the system are BDI-style agents* with capabilities and goals as specified by assignments ω𝜔\omegaitalic\_ω and ϱitalic-ϱ\varrhoitalic\_ϱ, respectively.
For instance, in the Gold Mining domain from the International Agent Contest,111http://www.multiagentcontest.org/
one may want to verify if two miner agents programmed in a BDI language can successfully collect gold pieces when equipped with navigation and communication capabilities and want to win the game, while the opponent agents can perform any physically legal action.
More interesting, a formula like ⟨⟨A⟩⟩∅,∅φ⊃⟨⟨A⟩⟩ω,ϱφsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱ𝜑subscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜑\langle\!\langle A\rangle\!\rangle\_{\emptyset,\emptyset}\varphi\supset\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\varphi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT ∅ , ∅ end\_POSTSUBSCRIPT italic\_φ ⊃ ⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT italic\_φ can be used to check whether coalition A𝐴Aitalic\_A has enough know-how and motivations to carry out a task φ𝜑\varphiitalic\_φ that is indeed physically feasible for the coalition.
We observe that the notion of “rationality” used in this work is that found in the literature on BDI and agent programming, rather than that common in game-theory (generally captured via *solution concepts*).
As such, rationality shall refer from now on to reasonable constraints on how the various mental modalities—e.g., beliefs, intention, goals—may interact.
In particular, we focus on the constraint that agents select actions from their know-how in order to achieve their goals in the context of their beliefs.
Finally, we stress that this work aims to contribute to the agent-oriented programing community more than to the (ATL) verification one.
Indeed, our aim is to motivate the former to adopt well-established techniques in game-theory for the effective verification of their “reactive” style agent programs.
2 Preliminaries
----------------
###
2.1 ATL/ATLES Logics of Coalitions
Alternating-time Temporal Logic (ATL) [[3](#bib.bib3)] is a logic for reasoning about the ability of agent coalitions in *multi-agent game structures*.
ATL formulae are built by combining propositional formulas, the usual temporal operators—namely, ○○\bigcirc○ (“in the next state”), □□\Box□ (“always”), ◇◇\Diamond◇ (“eventually”), and 𝒰𝒰\mathcal{U}caligraphic\_U (“strict until”)—and a *coalition* *path quantifier* ⟨⟨A⟩⟩delimited-⟨⟩delimited-⟨⟩𝐴\langle\!\langle A\rangle\!\rangle⟨ ⟨ italic\_A ⟩ ⟩ taking a set of agents A𝐴Aitalic\_A as parameter. As in CTL, which ATL extends, temporal operators and path quantifiers are required to alternate.
Intuitively, an ATL formula ⟨⟨A⟩⟩ϕdelimited-⟨⟩delimited-⟨⟩𝐴italic-ϕ\langle\!\langle A\rangle\!\rangle\phi⟨ ⟨ italic\_A ⟩ ⟩ italic\_ϕ, where A𝐴Aitalic\_A is a set of agents, holds in an ATL structure if by suitably choosing their moves, the agents in A𝐴Aitalic\_A *can force ϕitalic-ϕ\phiitalic\_ϕ true*, no matter how other agents happen to move.
The semantics of ATL is defined in so-called *concurrent game structures* where, at each point, all agents simultaneously choose their moves from a finite set, and the next state deterministically depends on such choices.
More concretely, an ATL structure is a tuple
ℳ=⟨𝒜,Q,𝒫,𝐴𝑐𝑡,d,𝒱,σ⟩,ℳ𝒜𝑄𝒫𝐴𝑐𝑡𝑑𝒱𝜎\mathcal{M}=\langle\mathcal{A},Q,\mathcal{P},\operatorname{\textit{Act}},d,\mathcal{V},\sigma\rangle,caligraphic\_M = ⟨ caligraphic\_A , italic\_Q , caligraphic\_P , Act , italic\_d , caligraphic\_V , italic\_σ ⟩ ,
where 𝒜={1,…,k}𝒜1…𝑘\mathcal{A}=\{1,\ldots,k\}caligraphic\_A = { 1 , … , italic\_k } is a finite set of agents, Q𝑄Qitalic\_Q is the finite set of states, 𝒫𝒫\mathcal{P}caligraphic\_P is the finite set of propositions, 𝐴𝑐𝑡𝐴𝑐𝑡\operatorname{\textit{Act}}Act is the set of all domain actions, d:𝒜×Q↦2𝐴𝑐𝑡:𝑑maps-to𝒜𝑄superscript2𝐴𝑐𝑡d:\mathcal{A}\times Q\mapsto 2^{\operatorname{\textit{Act}}}italic\_d : caligraphic\_A × italic\_Q ↦ 2 start\_POSTSUPERSCRIPT Act end\_POSTSUPERSCRIPT indicates all available actions for an agent in a state, 𝒱:Q↦2𝒫:𝒱maps-to𝑄superscript2𝒫\mathcal{V}:Q\mapsto 2^{\mathcal{P}}caligraphic\_V : italic\_Q ↦ 2 start\_POSTSUPERSCRIPT caligraphic\_P end\_POSTSUPERSCRIPT is the valuation function stating what is true in each state, and σ:Q×𝐴𝑐𝑡|𝒜|↦Q:𝜎maps-to𝑄superscript𝐴𝑐𝑡𝒜𝑄\sigma:Q\times\operatorname{\textit{Act}}^{|\mathcal{A}|}\mapsto Qitalic\_σ : italic\_Q × Act start\_POSTSUPERSCRIPT | caligraphic\_A | end\_POSTSUPERSCRIPT ↦ italic\_Q is the transition function mapping a state q𝑞qitalic\_q and a joint-move 𝑎→∈𝒟(q)→𝑎𝒟𝑞\vec{\operatorname{\textit{a}}}\in\mathcal{D}(q)over→ start\_ARG a end\_ARG ∈ caligraphic\_D ( italic\_q )—where 𝒟(q)=×i=1|𝒜|d(i,q)\mathcal{D}(q)=\times\_{i=1}^{|\mathcal{A}|}d(i,q)caligraphic\_D ( italic\_q ) = × start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT | caligraphic\_A | end\_POSTSUPERSCRIPT italic\_d ( italic\_i , italic\_q ) is the set of legal joint-moves in q𝑞qitalic\_q
—to the resulting next state q′superscript𝑞′q^{\prime}italic\_q start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT.
A *path* λ=q0q1⋯𝜆subscript𝑞0subscript𝑞1⋯\lambda=q\_{0}q\_{1}\cdotsitalic\_λ = italic\_q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ⋯ in a structure ℳℳ\mathcal{M}caligraphic\_M is a, possibly infinite, sequence of states such that for each i≥0𝑖0i\geq 0italic\_i ≥ 0, there exists a joint-move 𝑎i→∈𝒟(qi)→subscript𝑎𝑖𝒟subscript𝑞𝑖\vec{\operatorname{\textit{a}}\_{i}}\in\mathcal{D}(q\_{i})over→ start\_ARG a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG ∈ caligraphic\_D ( italic\_q start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) for which σ(qi,𝑎i→)=qi+1𝜎subscript𝑞𝑖→subscript𝑎𝑖subscript𝑞𝑖1\sigma(q\_{i},\vec{\operatorname{\textit{a}}\_{i}})=q\_{i+1}italic\_σ ( italic\_q start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , over→ start\_ARG a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT end\_ARG ) = italic\_q start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT.
We use λ[i]=qi𝜆delimited-[]𝑖subscript𝑞𝑖\lambda[i]=q\_{i}italic\_λ [ italic\_i ] = italic\_q start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT to denote the i𝑖iitalic\_i-th state of λ𝜆\lambdaitalic\_λ, ΛΛ\Lambdaroman\_Λ to denote the set of all paths in ℳℳ\mathcal{M}caligraphic\_M, and Λ(q)Λ𝑞\Lambda(q)roman\_Λ ( italic\_q ) to denote those starting in q𝑞qitalic\_q.
Also, |λ|𝜆|\lambda|| italic\_λ | denotes the length of λ𝜆\lambdaitalic\_λ as the number of state transitions in λ𝜆\lambdaitalic\_λ: |λ|=ℓ𝜆ℓ|\lambda|=\ell| italic\_λ | = roman\_ℓ if λ=q0q1…qℓ𝜆subscript𝑞0subscript𝑞1…subscript𝑞ℓ\lambda=q\_{0}q\_{1}\ldots q\_{\ell}italic\_λ = italic\_q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT … italic\_q start\_POSTSUBSCRIPT roman\_ℓ end\_POSTSUBSCRIPT, and |λ|=∞𝜆|\lambda|=\infty| italic\_λ | = ∞ if λ𝜆\lambdaitalic\_λ is
infinite.
When 0≤i≤j≤|λ|0𝑖𝑗𝜆0\leq i\leq j\leq|\lambda|0 ≤ italic\_i ≤ italic\_j ≤ | italic\_λ |, then λ[i,j]=qiqi+1…qj𝜆𝑖𝑗subscript𝑞𝑖subscript𝑞𝑖1…subscript𝑞𝑗\lambda[i,j]=q\_{i}q\_{i+1}\ldots q\_{j}italic\_λ [ italic\_i , italic\_j ] = italic\_q start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT … italic\_q start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT is the finite subpath between the i𝑖iitalic\_i-th and j𝑗jitalic\_j-th steps in λ𝜆\lambdaitalic\_λ.
Finally, a *computation path* in ℳℳ\mathcal{M}caligraphic\_M is an infinite path in ΛΛ\Lambdaroman\_Λ.
To provide semantics to formulas ⟨⟨⋅⟩⟩φdelimited-⟨⟩delimited-⟨⟩⋅𝜑\langle\!\langle\cdot\rangle\!\rangle\varphi⟨ ⟨ ⋅ ⟩ ⟩ italic\_φ, ATL relies on the notion of agent strategies. Technically, an ATL *strategy* for an agent 𝑎𝑔𝑡𝑎𝑔𝑡{\operatorname{\textit{agt}}}agt is a function f𝑎𝑔𝑡:Q+↦𝐴𝑐𝑡:subscript𝑓𝑎𝑔𝑡maps-tosuperscript𝑄𝐴𝑐𝑡f\_{{\operatorname{\textit{agt}}}}:Q^{+}\mapsto\operatorname{\textit{Act}}italic\_f start\_POSTSUBSCRIPT agt end\_POSTSUBSCRIPT : italic\_Q start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT ↦ Act, where f𝑎𝑔𝑡(λq)∈d(𝑎𝑔𝑡,q)subscript𝑓𝑎𝑔𝑡𝜆𝑞𝑑𝑎𝑔𝑡𝑞f\_{{\operatorname{\textit{agt}}}}(\lambda q)\in d({\operatorname{\textit{agt}}},q)italic\_f start\_POSTSUBSCRIPT agt end\_POSTSUBSCRIPT ( italic\_λ italic\_q ) ∈ italic\_d ( agt , italic\_q ) for all λq∈Q+𝜆𝑞superscript𝑄\lambda q\in Q^{+}italic\_λ italic\_q ∈ italic\_Q start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT, stating a particular action choice of agent 𝑎𝑔𝑡𝑎𝑔𝑡{\operatorname{\textit{agt}}}agt at path λq𝜆𝑞\lambda qitalic\_λ italic\_q.
A *collective strategy* for group of agents A⊆𝒜𝐴𝒜A\subseteq\mathcal{A}italic\_A ⊆ caligraphic\_A is a set of strategies FA={f𝑎𝑔𝑡∣𝑎𝑔𝑡∈𝒜}subscript𝐹𝐴conditional-setsubscript𝑓𝑎𝑔𝑡𝑎𝑔𝑡𝒜F\_{A}=\{f\_{\operatorname{\textit{agt}}}\mid{\operatorname{\textit{agt}}}\in\mathcal{A}\}italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT = { italic\_f start\_POSTSUBSCRIPT agt end\_POSTSUBSCRIPT ∣ agt ∈ caligraphic\_A } providing one specific strategy for each agent 𝑎𝑔𝑡∈A𝑎𝑔𝑡𝐴{\operatorname{\textit{agt}}}\in Aagt ∈ italic\_A.
For a collective strategy FAsubscript𝐹𝐴F\_{A}italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT and an initial state q𝑞qitalic\_q, it is not difficult to define the set 𝘰𝘶𝘵(q,FA)𝘰𝘶𝘵𝑞subscript𝐹𝐴\operatorname{\textit{{\small out}}}(q,F\_{A})out ( italic\_q , italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) of all *possible outcomes* of FAsubscript𝐹𝐴F\_{A}italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT starting at state q𝑞qitalic\_q as the set of all computation paths that may ensue when the agents in A𝐴Aitalic\_A behave as prescribed by FAsubscript𝐹𝐴F\_{A}italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT, and the remaining agents follow any arbitrary strategy [[3](#bib.bib3), [20](#bib.bib20)].
The semantics for the coalition modality is then defined as follows (here ϕitalic-ϕ\phiitalic\_ϕ is a *path formula*, that is, it is preceded by ○○\!\bigcirc\!○, □□\Box□, or 𝒰𝒰\mathcal{U}caligraphic\_U, and ℳ,λ⊧ϕmodelsℳ𝜆
italic-ϕ\mathcal{M},\lambda\models\phicaligraphic\_M , italic\_λ ⊧ italic\_ϕ is defined in the usual way [[3](#bib.bib3)]):
ℳ,q⊧⟨⟨A⟩⟩ϕmodelsℳ𝑞
delimited-⟨⟩delimited-⟨⟩𝐴italic-ϕ\mathcal{M},q\models\langle\!\langle A\rangle\!\rangle\phicaligraphic\_M , italic\_q ⊧ ⟨ ⟨ italic\_A ⟩ ⟩ italic\_ϕ *iff* there is a collective strategy FAsubscript𝐹𝐴F\_{A}italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT such that
for all computations λ∈𝘰𝘶𝘵(q,FA)𝜆𝘰𝘶𝘵𝑞subscript𝐹𝐴\lambda\in\operatorname{\textit{{\small out}}}(q,F\_{A})italic\_λ ∈ out ( italic\_q , italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ), we have ℳ,λ⊧ϕmodelsℳ𝜆
italic-ϕ\mathcal{M},\lambda\models\phicaligraphic\_M , italic\_λ ⊧ italic\_ϕ.
The coalition modality only allows for implicit (existential) quantification over strategies.
In some contexts, though, it is important to refer to strategies explicitly in the language, e.g., can a player win the game if the opponent plays a specified strategy?
To address this limitation, [Walther et al.](#bib.bib20) [[20](#bib.bib20)] proposed ATLES, an extension of ATL where the coalition modality is extended to ⟨⟨A⟩⟩ρsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜌\langle\!\langle A\rangle\!\rangle\_{\rho}⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ρ end\_POSTSUBSCRIPT, where ρ𝜌\rhoitalic\_ρ is a *commitment function*, that is, a partial function mapping agents to so-called *strategy terms*.
Formula ⟨⟨A⟩⟩ρϕsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜌italic-ϕ\langle\!\langle A\rangle\!\rangle\_{\rho}\phi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ρ end\_POSTSUBSCRIPT italic\_ϕ thus means that *“while the agents in the domain of ρ𝜌\rhoitalic\_ρ act according to their commitments, the coalition A𝐴Aitalic\_A can cooperate to ensure ϕitalic-ϕ\phiitalic\_ϕ as an outcome.”*
The motivation for our work stems from the fact that ATLES is agnostic on the source of the strategic terms: all meaningful strategies have already been identified.
In the context of multi-agent systems, it may not be an easy task to identify those strategies compatible with the agents’ behaviors, as those systems are generally built using programming frameworks [[5](#bib.bib5)] that are very different from ATL(ES).
###
2.2 BDI Programming
The BDI agent-oriented programming paradigm is a popular and successful approach for building agent systems, with roots in philosophical work on rational action [[6](#bib.bib6)] and a plethora of programming languages and systems available, such as Jack, Jason, Jadex, 2apl [[5](#bib.bib5)], and Goal [[11](#bib.bib11)], among others.
A typical BDI agent continually tries to achieve its goals (or desires) by selecting an adequate plan from its plan library given its current beliefs, and placing it into the intention base for execution.
The agent’s plan library ΠΠ\Piroman\_Π encodes the standard operational knowledge of the domain by means of a set of *plan-rules* (or “recipes”) of the form ϕ[α]ψitalic-ϕdelimited-[]𝛼𝜓\phi[\alpha]\psiitalic\_ϕ [ italic\_α ] italic\_ψ: plan α𝛼\alphaitalic\_α is a reasonable plan to adopt for achieving ψ𝜓\psiitalic\_ψ when (context) condition ϕitalic-ϕ\phiitalic\_ϕ is believed true.
For example, walking towards location x𝑥xitalic\_x from y𝑦yitalic\_y is a reasonable strategy, if there is a short distance between x𝑥xitalic\_x and y𝑦yitalic\_y (and the agent wants to be eventually at location x𝑥xitalic\_x).
Conditions ϕitalic-ϕ\phiitalic\_ϕ and ψ𝜓\psiitalic\_ψ are (propositional) formulas talking about the current and goal states, respectively.
Though different BDI languages offer different constructs for crafting plans, most allow for sequences of domain actions that are meant to be directly executed in the world (e.g., lifting an aircraft’s flaps), and the posting of (intermediate) *sub-goals* !φ!\varphi! italic\_φ (e.g., obtain landing permission) to be resolved.
The intention base, in turn, contains the current, partially executed, plans that the agent has already *committed to* for achieving certain goals. Current intentions being executed provide a screen of admissibility for attention focus [[6](#bib.bib6)].
Though we do not present it here for lack of space, most BDI-style programming languages come with a clear single-step semantics basically realizing [[18](#bib.bib18)]’s execution model in which *(rational) behavior arises due to the execution of plans from the agent’s plan library so as to achieve certain goals relative to the agent’s beliefs*.
3 BDI-ATLES: ATL for BDI Agents
--------------------------------
Here we develop an ATL(ES)-like logic that bridges the gap between verification frameworks and BDI agent-oriented programming languages. The overarching idea is for BDI programmers to be able to encode BDI applications in ATL in a principled manner.
Recall that ATL(ES) uses strategies to denote the agent’s choices among possible actions. For a BDI agent these strategies are *implicit* in her know-how.
In particular, we envision BDI agents defined with a set of *goals* and so-called *capabilities* [[7](#bib.bib7), [16](#bib.bib16)]. Generally speaking, a capability is a set/module of procedural knowledge (i.e., plans) for some functional requirement. An agent may have, for instance, the 𝖭𝖺𝗏𝗂𝗀𝖺𝗍𝖾𝖭𝖺𝗏𝗂𝗀𝖺𝗍𝖾\operatorname{\textsf{\small Navigate}}Navigate capability encoding all plans for navigating an environment. Equipped with a set of capabilities, a BDI agent executes actions as per plans available so as to achieve her goals, e.g., exploring the environment.
In this context, the BDI developer is then interested in what agents can achieve at the level of goals and capabilities.
Inspired by ATLES, we develop a logic that caters for this requirement without departing much from the ATL framework.
In this work, we shall consider plans consisting of single actions, that is, given BDI plan for the form ϕ[α]ψitalic-ϕdelimited-[]𝛼𝜓\phi[\alpha]\psiitalic\_ϕ [ italic\_α ] italic\_ψ, the body of the plan α𝛼\alphaitalic\_α consists of one primitive action. Such plans are akin to those in the Goal agent programming language [[11](#bib.bib11)], as well as universal-plans [[19](#bib.bib19)], and reactive control modules [[4](#bib.bib4)].
Let 𝚷𝐴𝑐𝑡𝒫subscriptsuperscript𝚷𝒫𝐴𝑐𝑡\boldsymbol{\Pi}^{\mathcal{P}}\_{\operatorname{\textit{Act}}}bold\_Π start\_POSTSUPERSCRIPT caligraphic\_P end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT Act end\_POSTSUBSCRIPT be the (infinite) set of all possible plan-rules given a set of actions 𝐴𝑐𝑡𝐴𝑐𝑡\operatorname{\textit{Act}}Act and a set of domain propositions 𝒫𝒫\mathcal{P}caligraphic\_P.
###
3.1 BDI-ATLES Syntax
The language of BDI-ATLES is defined over a finite set of atomic propositions 𝒫𝒫\mathcal{P}caligraphic\_P, a finite set of agents 𝒜𝒜\mathcal{A}caligraphic\_A, and a finite set of capability terms 𝒞𝒞\mathcal{C}caligraphic\_C available in the BDI application of concern. Intuitively, each capability term c∈𝒞𝑐𝒞c\in\mathcal{C}italic\_c ∈ caligraphic\_C (e.g., 𝖭𝖺𝗏𝗂𝗀𝖺𝗍𝖾𝖭𝖺𝗏𝗂𝗀𝖺𝗍𝖾\operatorname{\textsf{\small Navigate}}Navigate) stands for a plan library ΠcsuperscriptΠ𝑐\Pi^{c}roman\_Π start\_POSTSUPERSCRIPT italic\_c end\_POSTSUPERSCRIPT (e.g., Π𝖭𝖺𝗏𝗂𝗀𝖺𝗍𝖾superscriptΠ𝖭𝖺𝗏𝗂𝗀𝖺𝗍𝖾\Pi^{\operatorname{\textsf{\small Navigate}}}roman\_Π start\_POSTSUPERSCRIPT Navigate end\_POSTSUPERSCRIPT).
As usual, a *coalition* is a set A⊆𝒜𝐴𝒜A\subseteq\mathcal{A}italic\_A ⊆ caligraphic\_A of agents.
A *capability assignment* ω𝜔\omegaitalic\_ω consists of a set of pairs of agents with their capabilities of the form ⟨𝑎𝑔𝑡:C𝑎𝑔𝑡⟩delimited-⟨⟩:𝑎𝑔𝑡subscript𝐶𝑎𝑔𝑡\langle{\operatorname{\textit{agt}}}:C\_{{\operatorname{\textit{agt}}}}\rangle⟨ agt : italic\_C start\_POSTSUBSCRIPT agt end\_POSTSUBSCRIPT ⟩, where 𝑎𝑔𝑡∈𝒜𝑎𝑔𝑡𝒜{\operatorname{\textit{agt}}}\in\mathcal{A}agt ∈ caligraphic\_A and C𝑎𝑔𝑡⊆𝒞subscript𝐶𝑎𝑔𝑡𝒞C\_{\operatorname{\textit{agt}}}\subseteq\mathcal{C}italic\_C start\_POSTSUBSCRIPT agt end\_POSTSUBSCRIPT ⊆ caligraphic\_C.
A *goal assignment* ϱitalic-ϱ\varrhoitalic\_ϱ, in turn, defines the goal base (i.e., set of propositional formulas) for some agents, and is a set of tuples of the form ⟨𝑎𝑔𝑡:G𝑎𝑔𝑡⟩delimited-⟨⟩:𝑎𝑔𝑡subscript𝐺𝑎𝑔𝑡\langle{\operatorname{\textit{agt}}}:G\_{\operatorname{\textit{agt}}}\rangle⟨ agt : italic\_G start\_POSTSUBSCRIPT agt end\_POSTSUBSCRIPT ⟩, where 𝑎𝑔𝑡∈𝒜𝑎𝑔𝑡𝒜{\operatorname{\textit{agt}}}\in\mathcal{A}agt ∈ caligraphic\_A and G𝑎𝑔𝑡subscript𝐺𝑎𝑔𝑡G\_{\operatorname{\textit{agt}}}italic\_G start\_POSTSUBSCRIPT agt end\_POSTSUBSCRIPT is a set of boolean formulas over 𝒫𝒫\mathcal{P}caligraphic\_P.
We use 𝒜ωsubscript𝒜𝜔\mathcal{A}\_{\omega}caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT to denote the set of agents for which their capabilities are defined by assignment ω𝜔\omegaitalic\_ω, that is, 𝒜ω={𝑎𝑔𝑡∣⟨𝑎𝑔𝑡:C𝑎𝑔𝑡⟩∈ω}\mathcal{A}\_{\omega}=\{{\operatorname{\textit{agt}}}\mid\langle{\operatorname{\textit{agt}}}:C\_{\operatorname{\textit{agt}}}\rangle\in\omega\}caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT = { agt ∣ ⟨ agt : italic\_C start\_POSTSUBSCRIPT agt end\_POSTSUBSCRIPT ⟩ ∈ italic\_ω }. Set 𝒜ϱsubscript𝒜italic-ϱ\mathcal{A}\_{\varrho}caligraphic\_A start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT is defined analogously.
The set of BDI-ATLES formulas is then exactly like that of ATL(ES), except that coalition formulas are now of the form ⟨⟨A⟩⟩ω,ϱφsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱ𝜑\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\varphi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT italic\_φ, where φ𝜑\varphiitalic\_φ is a path formula (i.e., it is preceded by ○○\!\bigcirc\!○, □□\Box□, or 𝒰𝒰\mathcal{U}caligraphic\_U), A𝐴Aitalic\_A is a coalition, and ω𝜔\omegaitalic\_ω and ϱitalic-ϱ\varrhoitalic\_ϱ range over capability and goal assignments, respectively, such that 𝒜ω=𝒜ϱsubscript𝒜𝜔subscript𝒜italic-ϱ\mathcal{A}\_{\omega}=\mathcal{A}\_{\varrho}caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT = caligraphic\_A start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT.
Its intended meaning is as follows:
>
> ⟨⟨A⟩⟩ω,ϱφsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱ𝜑\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\varphi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT italic\_φ expresses that coalition of agents A𝐴Aitalic\_A can jointly force temporal condition φ𝜑\varphiitalic\_φ to hold when BDI agents in 𝒜ωsubscript𝒜𝜔\mathcal{A}\_{\omega}caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT (or 𝒜ϱsubscript𝒜italic-ϱ\mathcal{A}\_{\varrho}caligraphic\_A start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT, since 𝒜ϱ=𝒜ωsubscript𝒜italic-ϱsubscript𝒜𝜔\mathcal{A}\_{\varrho}=\mathcal{A}\_{\omega}caligraphic\_A start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT = caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT) are equipped with capabilities as per assignment ω𝜔\omegaitalic\_ω and (initial) goals are per assignment ϱitalic-ϱ\varrhoitalic\_ϱ.
>
>
>
Notice that we require, in each coalition (sub)formula, that the agents for which capabilities and goals are assigned to be the same. This enforces the constraint that BDI-style agents have *both* plans and goals.
Hence, a formula of the form ⟨⟨A⟩⟩∅,{⟨a1:{γ}⟩}φsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴delimited-⟨⟩:subscript𝑎1𝛾𝜑\langle\!\langle A\rangle\!\rangle\_{\emptyset,\{\langle a\_{1}:\{\gamma\}\rangle\}}\varphi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT ∅ , { ⟨ italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT : { italic\_γ } ⟩ } end\_POSTSUBSCRIPT italic\_φ would not be valid, as agent a1subscript𝑎1a\_{1}italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT has one goal (namely, to bring about γ𝛾\gammaitalic\_γ), but its set of plans is not defined—we cannot specify what its rational behavior may be. This contrasts with formula ⟨⟨A⟩⟩{⟨a1:∅⟩},{⟨a1:{γ}⟩}φsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴delimited-⟨⟩:subscript𝑎1delimited-⟨⟩:subscript𝑎1𝛾𝜑\langle\!\langle A\rangle\!\rangle\_{\{\langle a\_{1}:\emptyset\rangle\},\{\langle a\_{1}:\{\gamma\}\rangle\}}\varphi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT { ⟨ italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT : ∅ ⟩ } , { ⟨ italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT : { italic\_γ } ⟩ } end\_POSTSUBSCRIPT italic\_φ, a valid formula in which agent a1subscript𝑎1a\_{1}italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT is assumed to have no plans (i.e., agent has empty know-how) and one goal.
###### Example 1
Consider the following simplified instance of the gold mining domain with three locations A𝐴Aitalic\_A, B𝐵Bitalic\_B and C𝐶Citalic\_C, a gold piece ⋄⋄\diamond⋄ at location C𝐶Citalic\_C, the depot located at B𝐵Bitalic\_B (rectangle location), and two players 𝖠𝗀𝖠𝗀\operatorname{\textsf{Ag}}Ag (BDI agent) and 𝖤𝗇𝖤𝗇\operatorname{\textsf{En}}En (enemy):
𝖤𝗇𝖤𝗇\operatorname{\textsf{En}}EnA𝐴Aitalic\_A𝖠𝗀𝖠𝗀\operatorname{\textsf{Ag}}AgB𝐵Bitalic\_B⋄⋄\diamond⋄C𝐶Citalic\_C
Players can move left/right, pick/drop gold, or remain still by executing special action noOp.
Proposition XYsubscript𝑋𝑌X\_{Y}italic\_X start\_POSTSUBSCRIPT italic\_Y end\_POSTSUBSCRIPT, where X∈{𝖠𝗀,𝖤𝗇}𝑋𝖠𝗀𝖤𝗇X\in\{\operatorname{\textsf{Ag}},\operatorname{\textsf{En}}\}italic\_X ∈ { Ag , En } and Y∈{A,B,C}𝑌𝐴𝐵𝐶Y\in\{A,B,C\}italic\_Y ∈ { italic\_A , italic\_B , italic\_C }, encodes that player X𝑋Xitalic\_X is at location Y𝑌Yitalic\_Y; whereas propositions GAsubscript𝐺𝐴G\_{A}italic\_G start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT, GBsubscript𝐺𝐵G\_{B}italic\_G start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT, GCsubscript𝐺𝐶G\_{C}italic\_G start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT, G𝖠𝗀subscript𝐺𝖠𝗀G\_{\operatorname{\textsf{Ag}}}italic\_G start\_POSTSUBSCRIPT Ag end\_POSTSUBSCRIPT, and G𝖤𝗇subscript𝐺𝖤𝗇G\_{\operatorname{\textsf{En}}}italic\_G start\_POSTSUBSCRIPT En end\_POSTSUBSCRIPT denote that the gold is at location A𝐴Aitalic\_A/B𝐵Bitalic\_B/C𝐶Citalic\_C or being held by agent 𝖠𝗀𝖠𝗀\operatorname{\textsf{Ag}}Ag/𝖤𝗇𝖤𝗇\operatorname{\textsf{En}}En, respectively.
The depot is assumed to be always at B𝐵Bitalic\_B and hence is not represented explicitly.
The winning condition for player 𝖠𝗀𝖠𝗀\operatorname{\textsf{Ag}}Ag is ψ*WIN*=GB∧𝖠𝗀Bsubscript𝜓*WIN*subscript𝐺𝐵subscript𝖠𝗀𝐵\psi\_{\operatorname{\emph{WIN}}}\!\!=\!G\_{B}\land\operatorname{\textsf{Ag}}\_{B}italic\_ψ start\_POSTSUBSCRIPT WIN end\_POSTSUBSCRIPT = italic\_G start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT ∧ Ag start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT: the player wins when collocated with gold at the depot.
Among the many capabilities available encoding the know-how information of the domain, we consider the following three.
The 𝖢𝗈𝗅𝗅𝖾𝖼𝗍𝖢𝗈𝗅𝗅𝖾𝖼𝗍\operatorname{\textsf{\small Collect}}Collect capability includes plans to pick gold, such as 𝖠𝗀C∧GC[pick]GBsubscript𝖠𝗀𝐶subscript𝐺𝐶delimited-[]picksubscript𝐺𝐵\operatorname{\textsf{Ag}}\_{C}\wedge G\_{C}[\text{{pick}}]G\_{B}Ag start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT ∧ italic\_G start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT [ pick ] italic\_G start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT: if gold needs to be at B𝐵Bitalic\_B and agent is at C𝐶Citalic\_C, where there is indeed gold, then execute the pick action.
Similarly, capability 𝖣𝖾𝗉𝗈𝗌𝗂𝗍𝖣𝖾𝗉𝗈𝗌𝗂𝗍\operatorname{\textsf{\small Deposit}}Deposit contains plans like G𝖠𝗀∧𝖠𝗀B[drop]GBsubscript𝐺𝖠𝗀subscript𝖠𝗀𝐵dropsubscript𝐺𝐵G\_{\operatorname{\textsf{Ag}}}\wedge\operatorname{\textsf{Ag}}\_{B}[\text{{drop}}]G\_{B}italic\_G start\_POSTSUBSCRIPT Ag end\_POSTSUBSCRIPT ∧ Ag start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT [ drop ] italic\_G start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT, for example, to allow dropping of gold at the desired location.
Lastly, capability 𝖭𝖺𝗏𝗂𝗀𝖺𝗍𝖾𝖭𝖺𝗏𝗂𝗀𝖺𝗍𝖾\operatorname{\textsf{\small Navigate}}Navigate has plans for moving around, such as 𝖠𝗀C[left]𝖠𝗀Bsubscript𝖠𝗀𝐶leftsubscript𝖠𝗀𝐵\operatorname{\textsf{Ag}}\_{C}[\text{{left}}]\operatorname{\textsf{Ag}}\_{B}Ag start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT [ left ] Ag start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT to move left from location C𝐶Citalic\_C to (desired destination) B𝐵Bitalic\_B.
∎
The remaining of the section involves providing the right interpretation to such formulas, under the assumption that agents act rationally as per the BDI paradigm.
###
3.2 BDI-ATLES Semantics
A BDI-ATLES *concurrent game structure* is a tuple ℳ=⟨𝒜,Q,𝒫,𝐴𝑐𝑡,d,𝒱,σ,Θ⟩ℳ𝒜𝑄𝒫𝐴𝑐𝑡𝑑𝒱𝜎Θ\mathcal{M}\!\!\!=\!\!\!\langle\mathcal{A},Q,\mathcal{P},\operatorname{\textit{Act}},d,\mathcal{V},\sigma,\Theta\ranglecaligraphic\_M = ⟨ caligraphic\_A , italic\_Q , caligraphic\_P , Act , italic\_d , caligraphic\_V , italic\_σ , roman\_Θ ⟩, with:
* •
𝒜𝒜\mathcal{A}caligraphic\_A, Q𝑄Qitalic\_Q, 𝒫𝒫\mathcal{P}caligraphic\_P, 𝐴𝑐𝑡𝐴𝑐𝑡\operatorname{\textit{Act}}Act, d𝑑ditalic\_d, 𝒱𝒱\mathcal{V}caligraphic\_V and σ𝜎\sigmaitalic\_σ are as in ATL(ES).
* •
There is a distinguished dummy action noOp∈𝐴𝑐𝑡noOp𝐴𝑐𝑡\textsc{noOp}\in\operatorname{\textit{Act}}noOp ∈ Act
such that noOp∈d𝑎𝑔𝑡(q)noOpsubscript𝑑𝑎𝑔𝑡𝑞\textsc{noOp}\in d\_{{\operatorname{\textit{agt}}}}(q)noOp ∈ italic\_d start\_POSTSUBSCRIPT agt end\_POSTSUBSCRIPT ( italic\_q ) and σ(q,⟨noOp,…,noOp⟩)=q𝜎𝑞
noOp…noOp𝑞\sigma(q,\langle\textsc{noOp},\ldots,\textsc{noOp}\rangle)=qitalic\_σ ( italic\_q , ⟨ noOp , … , noOp ⟩ ) = italic\_q, for all 𝑎𝑔𝑡∈𝒜𝑎𝑔𝑡𝒜{\operatorname{\textit{agt}}}\in\mathcal{A}agt ∈ caligraphic\_A and q∈Q𝑞𝑄q\in Qitalic\_q ∈ italic\_Q, that is, noOp is always available to all agents and the system remains still when all agents perform it.
* •
Capability function Θ:𝒞↦ℱ(𝚷𝐴𝑐𝑡𝒫):Θmaps-to𝒞ℱsubscriptsuperscript𝚷𝒫𝐴𝑐𝑡\Theta:\mathcal{C}\mapsto{\mathcal{F}(\boldsymbol{\Pi}^{\mathcal{P}}\_{\operatorname{\textit{Act}}})}roman\_Θ : caligraphic\_C ↦ caligraphic\_F ( bold\_Π start\_POSTSUPERSCRIPT caligraphic\_P end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT Act end\_POSTSUBSCRIPT ) maps capability terms to their (finite) set of plans. (Here, ℱ(X)ℱ𝑋\mathcal{F}(X)caligraphic\_F ( italic\_X ) denotes the set of all finite subsets X𝑋Xitalic\_X.)
| |
| --- |
| 𝖠𝗀Bsubscript𝖠𝗀𝐵\operatorname{\textsf{Ag}}\_{B}Ag start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT, 𝖤𝗇Asubscript𝖤𝗇𝐴\operatorname{\textsf{En}}\_{A}En start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT |
| GCsubscript𝐺𝐶G\_{C}italic\_G start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT |
q0subscript𝑞0q\_{0}italic\_q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT
| |
| --- |
| 𝖠𝗀Csubscript𝖠𝗀𝐶\operatorname{\textsf{Ag}}\_{C}Ag start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT, 𝖤𝗇Bsubscript𝖤𝗇𝐵\operatorname{\textsf{En}}\_{B}En start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT |
| GCsubscript𝐺𝐶G\_{C}italic\_G start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT |
q1subscript𝑞1q\_{1}italic\_q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT
| |
| --- |
| 𝖠𝗀Csubscript𝖠𝗀𝐶\operatorname{\textsf{Ag}}\_{C}Ag start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT, 𝖤𝗇Csubscript𝖤𝗇𝐶\operatorname{\textsf{En}}\_{C}En start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT |
| G𝖠𝗀subscript𝐺𝖠𝗀G\_{\operatorname{\textsf{Ag}}}italic\_G start\_POSTSUBSCRIPT Ag end\_POSTSUBSCRIPT |
q2subscript𝑞2q\_{2}italic\_q start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT
| |
| --- |
| 𝖠𝗀Bsubscript𝖠𝗀𝐵\operatorname{\textsf{Ag}}\_{B}Ag start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT, 𝖤𝗇Csubscript𝖤𝗇𝐶\operatorname{\textsf{En}}\_{C}En start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT |
| G𝖠𝗀subscript𝐺𝖠𝗀G\_{\operatorname{\textsf{Ag}}}italic\_G start\_POSTSUBSCRIPT Ag end\_POSTSUBSCRIPT |
q3subscript𝑞3q\_{3}italic\_q start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT
| |
| --- |
| 𝖠𝗀Bsubscript𝖠𝗀𝐵\operatorname{\textsf{Ag}}\_{B}Ag start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT, 𝖤𝗇Csubscript𝖤𝗇𝐶\operatorname{\textsf{En}}\_{C}En start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT |
| GBsubscript𝐺𝐵G\_{B}italic\_G start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT |
q4subscript𝑞4q\_{4}italic\_q start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT
| |
| --- |
| 𝖠𝗀Csubscript𝖠𝗀𝐶\operatorname{\textsf{Ag}}\_{C}Ag start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT, 𝖤𝗇Csubscript𝖤𝗇𝐶\operatorname{\textsf{En}}\_{C}En start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT |
| GCsubscript𝐺𝐶G\_{C}italic\_G start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT |
q5subscript𝑞5q\_{5}italic\_q start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT
| |
| --- |
| 𝖠𝗀Csubscript𝖠𝗀𝐶\operatorname{\textsf{Ag}}\_{C}Ag start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT, 𝖤𝗇Csubscript𝖤𝗇𝐶\operatorname{\textsf{En}}\_{C}En start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT |
| G𝖤𝗇subscript𝐺𝖤𝗇G\_{\operatorname{\textsf{En}}}italic\_G start\_POSTSUBSCRIPT En end\_POSTSUBSCRIPT |
q6subscript𝑞6q\_{6}italic\_q start\_POSTSUBSCRIPT 6 end\_POSTSUBSCRIPT
| |
| --- |
| 𝖠𝗀Csubscript𝖠𝗀𝐶\operatorname{\textsf{Ag}}\_{C}Ag start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT, 𝖤𝗇Bsubscript𝖤𝗇𝐵\operatorname{\textsf{En}}\_{B}En start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT |
| G𝖤𝗇subscript𝐺𝖤𝗇G\_{\operatorname{\textsf{En}}}italic\_G start\_POSTSUBSCRIPT En end\_POSTSUBSCRIPT |
q7subscript𝑞7q\_{7}italic\_q start\_POSTSUBSCRIPT 7 end\_POSTSUBSCRIPT
| |
| --- |
| 𝖠𝗀Csubscript𝖠𝗀𝐶\operatorname{\textsf{Ag}}\_{C}Ag start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT, 𝖤𝗇Bsubscript𝖤𝗇𝐵\operatorname{\textsf{En}}\_{B}En start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT |
| GBsubscript𝐺𝐵G\_{B}italic\_G start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT |
q8subscript𝑞8q\_{8}italic\_q start\_POSTSUBSCRIPT 8 end\_POSTSUBSCRIPT
| |
| --- |
| 𝖠𝗀Bsubscript𝖠𝗀𝐵\operatorname{\textsf{Ag}}\_{B}Ag start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT, 𝖤𝗇Bsubscript𝖤𝗇𝐵\operatorname{\textsf{En}}\_{B}En start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT |
| GCsubscript𝐺𝐶G\_{C}italic\_G start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT |
q9subscript𝑞9q\_{9}italic\_q start\_POSTSUBSCRIPT 9 end\_POSTSUBSCRIPT⟨r,r⟩𝑟𝑟\langle r,r\rangle⟨ italic\_r , italic\_r ⟩⟨p,r⟩𝑝𝑟\langle p,r\rangle⟨ italic\_p , italic\_r ⟩⟨l,n⟩𝑙𝑛\langle l,n\rangle⟨ italic\_l , italic\_n ⟩⟨d,n⟩𝑑𝑛\langle d,n\rangle⟨ italic\_d , italic\_n ⟩
| |
| --- |
| ⟨n,n⟩𝑛𝑛\langle n,n\rangle⟨ italic\_n , italic\_n ⟩ |
| ⟨p,p⟩𝑝𝑝\langle p,p\rangle⟨ italic\_p , italic\_p ⟩ |
⟨n,r⟩𝑛𝑟\langle n,r\rangle⟨ italic\_n , italic\_r ⟩⟨n,p⟩𝑛𝑝\langle n,p\rangle⟨ italic\_n , italic\_p ⟩⟨n,l⟩𝑛𝑙\langle n,l\rangle⟨ italic\_n , italic\_l ⟩⟨n,d⟩𝑛𝑑\langle n,d\rangle⟨ italic\_n , italic\_d ⟩⟨n,n⟩𝑛𝑛\langle n,n\rangle⟨ italic\_n , italic\_n ⟩⟨n,r⟩𝑛𝑟\langle n,r\rangle⟨ italic\_n , italic\_r ⟩⟨r,r⟩𝑟𝑟\langle r,r\rangle⟨ italic\_r , italic\_r ⟩⟨n,l⟩𝑛𝑙\langle n,l\rangle⟨ italic\_n , italic\_l ⟩⟨n,l⟩𝑛𝑙\langle n,l\rangle⟨ italic\_n , italic\_l ⟩
(a) A section of the BDI-ATLES alternating model.
q0subscript𝑞0q\_{0}italic\_q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPTq1subscript𝑞1q\_{1}italic\_q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTq2subscript𝑞2q\_{2}italic\_q start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTq3subscript𝑞3q\_{3}italic\_q start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTq4subscript𝑞4q\_{4}italic\_q start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPTq9subscript𝑞9q\_{9}italic\_q start\_POSTSUBSCRIPT 9 end\_POSTSUBSCRIPTq5subscript𝑞5q\_{5}italic\_q start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPTq1subscript𝑞1q\_{1}italic\_q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTq2subscript𝑞2q\_{2}italic\_q start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTλ1+subscriptsuperscript𝜆1\lambda^{+}\_{1}italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTλ2+subscriptsuperscript𝜆2\lambda^{+}\_{2}italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTf𝖠𝗀1∈ΣΠ,𝒢𝖠𝗀subscriptsuperscript𝑓1𝖠𝗀subscriptsuperscriptΣ𝖠𝗀Π𝒢f^{1}\_{\operatorname{\textsf{Ag}}}\in\Sigma^{\operatorname{\textsf{Ag}}}\_{\Pi,\mathcal{G}}italic\_f start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT Ag end\_POSTSUBSCRIPT ∈ roman\_Σ start\_POSTSUPERSCRIPT Ag end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT roman\_Π , caligraphic\_G end\_POSTSUBSCRIPTf𝖠𝗀2∉ΣΠ,𝒢𝖠𝗀subscriptsuperscript𝑓2𝖠𝗀subscriptsuperscriptΣ𝖠𝗀Π𝒢f^{2}\_{\operatorname{\textsf{Ag}}}\not\in\Sigma^{\operatorname{\textsf{Ag}}}\_{\Pi,\mathcal{G}}italic\_f start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT Ag end\_POSTSUBSCRIPT ∉ roman\_Σ start\_POSTSUPERSCRIPT Ag end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT roman\_Π , caligraphic\_G end\_POSTSUBSCRIPT𝖠𝗀B∧GC[r]GBsubscript𝖠𝗀𝐵subscript𝐺𝐶delimited-[]𝑟subscript𝐺𝐵\operatorname{\textsf{Ag}}\_{B}\wedge G\_{C}[r]G\_{B}Ag start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT ∧ italic\_G start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT [ italic\_r ] italic\_G start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT𝖠𝗀C∧GC[p]GBsubscript𝖠𝗀𝐶subscript𝐺𝐶delimited-[]𝑝subscript𝐺𝐵\operatorname{\textsf{Ag}}\_{C}\wedge G\_{C}[p]G\_{B}Ag start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT ∧ italic\_G start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT [ italic\_p ] italic\_G start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT𝖠𝗀C∧G𝖠𝗀[l]𝖠𝗀Bsubscript𝖠𝗀𝐶subscript𝐺𝖠𝗀delimited-[]𝑙subscript𝖠𝗀𝐵\operatorname{\textsf{Ag}}\_{C}\wedge G\_{\operatorname{\textsf{Ag}}}[l]\operatorname{\textsf{Ag}}\_{B}Ag start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT ∧ italic\_G start\_POSTSUBSCRIPT Ag end\_POSTSUBSCRIPT [ italic\_l ] Ag start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT𝖠𝗀B∧G𝖠𝗀[d]GBsubscript𝖠𝗀𝐵subscript𝐺𝖠𝗀delimited-[]𝑑subscript𝐺𝐵\operatorname{\textsf{Ag}}\_{B}\wedge G\_{\operatorname{\textsf{Ag}}}[d]G\_{B}Ag start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT ∧ italic\_G start\_POSTSUBSCRIPT Ag end\_POSTSUBSCRIPT [ italic\_d ] italic\_G start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPTr𝑟ritalic\_rp𝑝pitalic\_pl𝑙litalic\_ld𝑑ditalic\_dn𝑛nitalic\_nr𝑟ritalic\_rn𝑛nitalic\_np𝑝pitalic\_p
(b) Traces λ1+subscriptsuperscript𝜆1\lambda^{+}\_{1}italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT and λ2+subscriptsuperscript𝜆2\lambda^{+}\_{2}italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT resultant from strategies f𝖠𝗀1subscriptsuperscript𝑓1𝖠𝗀f^{1}\_{\operatorname{\textsf{Ag}}}italic\_f start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT Ag end\_POSTSUBSCRIPT and f𝖠𝗀2subscriptsuperscript𝑓2𝖠𝗀f^{2}\_{\operatorname{\textsf{Ag}}}italic\_f start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT Ag end\_POSTSUBSCRIPT, respectively
Figure 1:
A fragment of a Gold domain model and a picture showing rational traces and strategies.
Actions left, right, pick, drop, and noOp are abbreviated with their first letter.
###### Example 2
Figure [1(a)](#S3.F1.sf1 "1(a) ‣ Figure 1 ‣ 3.2 BDI-ATLES Semantics ‣ 3 BDI-ATLES: ATL for BDI Agents ‣ Reasoning about Agent Programs using ATL-like Logics") shows a partial model for the gold game.
The game starts at state q0subscript𝑞0q\_{0}italic\_q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT, with players 𝖠𝗀𝖠𝗀\operatorname{\textsf{Ag}}Ag and 𝖤𝗇𝖤𝗇\operatorname{\textsf{En}}En located at B𝐵Bitalic\_B and A𝐴Aitalic\_A, resp., and gold present at C𝐶Citalic\_C.
From there, player 𝖠𝗀𝖠𝗀\operatorname{\textsf{Ag}}Ag has a winning strategy: reach the gold earlier and deposit it in the depot.
This can be seen in path q0q1q2q3q4subscript𝑞0subscript𝑞1subscript𝑞2subscript𝑞3subscript𝑞4q\_{0}q\_{1}q\_{2}q\_{3}q\_{4}italic\_q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT 4 end\_POSTSUBSCRIPT. However, this is possible only when the agent 𝖠𝗀𝖠𝗀\operatorname{\textsf{Ag}}Ag is indeed equipped with all three capabilities.
If, on the other hand, the agent lacks capability 𝖢𝗈𝗅𝗅𝖾𝖼𝗍𝖢𝗈𝗅𝗅𝖾𝖼𝗍\operatorname{\textsf{\small Collect}}Collect, for instance, then player 𝖤𝗇𝖤𝗇\operatorname{\textsf{En}}En may actually manage to win the game, as evident from the path q0q1q5q6q7q8subscript𝑞0subscript𝑞1subscript𝑞5subscript𝑞6subscript𝑞7subscript𝑞8q\_{0}q\_{1}q\_{5}q\_{6}q\_{7}q\_{8}italic\_q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT 5 end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT 6 end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT 7 end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT 8 end\_POSTSUBSCRIPT.
∎
BDI-ATLES models are similar to ATLES ones, except that capability, rather than strategy term, interpretations are used.
In a nutshell, the challenge thus is to characterize what the underlying “low-level” ATL strategies for agents with certain capabilities and goals are. We call such strategies *rational strategies*, in that they are compatible with the standard BDI rational execution model [[18](#bib.bib18)]: *they represent the agent acting as per her available plans in order to achieve her goals in the context of her beliefs*.
So, given an agent 𝑎𝑔𝑡∈𝒜𝑎𝑔𝑡𝒜{\operatorname{\textit{agt}}}\in\mathcal{A}agt ∈ caligraphic\_A, a plan-library ΠΠ\Piroman\_Π, and a goal base 𝒢𝒢\mathcal{G}caligraphic\_G, we define ΣΠ,𝒢𝑎𝑔𝑡subscriptsuperscriptΣ𝑎𝑔𝑡Π𝒢\Sigma^{{\operatorname{\textit{agt}}}}\_{\Pi,\mathcal{G}}roman\_Σ start\_POSTSUPERSCRIPT agt end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT roman\_Π , caligraphic\_G end\_POSTSUBSCRIPT to be the set of standard ATL strategies for agent 𝑎𝑔𝑡𝑎𝑔𝑡{\operatorname{\textit{agt}}}agt in ℳℳ\mathcal{M}caligraphic\_M that are *rational strategies* when the agent is equipped with plan-library ΠΠ\Piroman\_Π and has 𝒢𝒢\mathcal{G}caligraphic\_G as (initial) goals, that is, those ATL strategies in which the agent always chooses an action that is directed by one of its available plans in order to achieve one of its goals in the context of its current beliefs.
The core idea behind defining set ΣΠ,𝒢𝑎𝑔𝑡subscriptsuperscriptΣ𝑎𝑔𝑡Π𝒢\Sigma^{{\operatorname{\textit{agt}}}}\_{\Pi,\mathcal{G}}roman\_Σ start\_POSTSUPERSCRIPT agt end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT roman\_Π , caligraphic\_G end\_POSTSUBSCRIPT is to identify those “rational traces” in the structure that are compatible with the BDI deliberation process in which the agent acts as per her goals and beliefs.
Traces just generalize paths to account for the actions performed at each step, and are hence of the form λ+=q0𝑎1q1⋯𝑎ℓqℓsuperscript𝜆subscript𝑞0subscript𝑎1subscript𝑞1⋯subscript𝑎ℓsubscript𝑞ℓ\lambda^{+}=q\_{0}\operatorname{\textit{a}}\_{1}q\_{1}\cdots\operatorname{\textit{a}}\_{\ell}q\_{\ell}italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT = italic\_q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ⋯ a start\_POSTSUBSCRIPT roman\_ℓ end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT roman\_ℓ end\_POSTSUBSCRIPT such that q0qq⋯qℓsubscript𝑞0subscript𝑞𝑞⋯subscript𝑞ℓq\_{0}q\_{q}\cdots q\_{\ell}italic\_q start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT italic\_q start\_POSTSUBSCRIPT italic\_q end\_POSTSUBSCRIPT ⋯ italic\_q start\_POSTSUBSCRIPT roman\_ℓ end\_POSTSUBSCRIPT is a (finite) path.
Rational strategies, then, are those that only yield rational traces.
Technically, we define *rational traces* in three steps.
First, we define a *goal-marking* function g(λ+,i)𝑔superscript𝜆𝑖g(\lambda^{+},i)italic\_g ( italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT , italic\_i ) denoting the “active” goal base of the agent at the i𝑖iitalic\_i-th stage of trace λ+superscript𝜆\lambda^{+}italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT. Basically, a goal-marking function keeps track of the goals that the agent has already achieved at each stage in a trace.
Second, we define 𝐸𝑥𝑒𝑐(ϕ[α]ψ,g,λ+)𝐸𝑥𝑒𝑐italic-ϕdelimited-[]𝛼𝜓𝑔superscript𝜆\operatorname{\textit{Exec}}(\phi[\alpha]\psi,g,\lambda^{+})Exec ( italic\_ϕ [ italic\_α ] italic\_ψ , italic\_g , italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT ) as the set of indexes (i.e., stages) in trace λ+superscript𝜆\lambda^{+}italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT where the plan ϕ[α]ψitalic-ϕdelimited-[]𝛼𝜓\phi[\alpha]\psiitalic\_ϕ [ italic\_α ] italic\_ψ may have been executed by the agent: the plan’s precondition ϕitalic-ϕ\phiitalic\_ϕ was true, ψ𝜓\psiitalic\_ψ was an active goal of the agent (as directed by goal-marking function g𝑔gitalic\_g), and α𝛼\alphaitalic\_α was indeed performed.
Finally, we say a trace λ+superscript𝜆\lambda^{+}italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT is deemed “rational” if at every moment in the run the agent executed one of its plans. That is, for every index i𝑖iitalic\_i, it is the case that i∈𝐸𝑥𝑒𝑐𝑎𝑔𝑡(ϕ[α]ψ,g,λ+)𝑖subscript𝐸𝑥𝑒𝑐𝑎𝑔𝑡italic-ϕdelimited-[]𝛼𝜓𝑔superscript𝜆i\in\operatorname{\textit{Exec}}\_{\operatorname{\textit{agt}}}(\phi[\alpha]\psi,g,\lambda^{+})italic\_i ∈ Exec start\_POSTSUBSCRIPT agt end\_POSTSUBSCRIPT ( italic\_ϕ [ italic\_α ] italic\_ψ , italic\_g , italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT ), for some plan ϕ[α]ψitalic-ϕdelimited-[]𝛼𝜓\phi[\alpha]\psiitalic\_ϕ [ italic\_α ] italic\_ψ in her know-how library.
Finally, we use ΣΠ,𝒢𝑎𝑔𝑡subscriptsuperscriptΣ𝑎𝑔𝑡Π𝒢\Sigma^{{\operatorname{\textit{agt}}}}\_{\Pi,\mathcal{G}}roman\_Σ start\_POSTSUPERSCRIPT agt end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT roman\_Π , caligraphic\_G end\_POSTSUBSCRIPT to denote the set of all ATL strategies whose executions always yield rational traces. The laborious, and arguably boring, technical details of all this can be found in the Appendix.
###### Example 3
Figure [1(b)](#S3.F1.sf2 "1(b) ‣ Figure 1 ‣ 3.2 BDI-ATLES Semantics ‣ 3 BDI-ATLES: ATL for BDI Agents ‣ Reasoning about Agent Programs using ATL-like Logics") depicts two possible traces λ1+subscriptsuperscript𝜆1\lambda^{+}\_{1}italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT and λ2+subscriptsuperscript𝜆2\lambda^{+}\_{2}italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT (for agent 𝖠𝗀𝖠𝗀\operatorname{\textsf{Ag}}Ag) compatible with strategies f𝖠𝗀1subscriptsuperscript𝑓1𝖠𝗀f^{1}\_{\operatorname{\textsf{Ag}}}italic\_f start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT Ag end\_POSTSUBSCRIPT and f𝖠𝗀2subscriptsuperscript𝑓2𝖠𝗀f^{2}\_{\operatorname{\textsf{Ag}}}italic\_f start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT Ag end\_POSTSUBSCRIPT, resp.
Trace λ1+subscriptsuperscript𝜆1\lambda^{+}\_{1}italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT is due to the agent executing actions as per its applicable plans, as evident from the plan labeling. For example, at state q1subscript𝑞1q\_{1}italic\_q start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT, the agent is in a gold location and hence executes the pick action as per plan 𝖠𝗀C∧GC[pick]GBsubscript𝖠𝗀𝐶subscript𝐺𝐶delimited-[]picksubscript𝐺𝐵\operatorname{\textsf{Ag}}\_{C}\wedge G\_{C}[\text{{pick}}]G\_{B}Ag start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT ∧ italic\_G start\_POSTSUBSCRIPT italic\_C end\_POSTSUBSCRIPT [ pick ] italic\_G start\_POSTSUBSCRIPT italic\_B end\_POSTSUBSCRIPT. Consequently the strategy f𝖠𝗀1subscriptsuperscript𝑓1𝖠𝗀f^{1}\_{\operatorname{\textsf{Ag}}}italic\_f start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT Ag end\_POSTSUBSCRIPT is rational, as it yields rational trace λ1+subscriptsuperscript𝜆1\lambda^{+}\_{1}italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT.
Trace λ2+subscriptsuperscript𝜆2\lambda^{+}\_{2}italic\_λ start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT on the other hand does not obey the BDI rationality constraints (e.g., the agent remains still in location B𝐵Bitalic\_B, despite an applicable plan being available).
∎
Assuming that set ΣΠ,𝒢𝑎𝑔𝑡subscriptsuperscriptΣ𝑎𝑔𝑡Π𝒢\Sigma^{{\operatorname{\textit{agt}}}}\_{\Pi,\mathcal{G}}roman\_Σ start\_POSTSUPERSCRIPT agt end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT roman\_Π , caligraphic\_G end\_POSTSUBSCRIPT of rational strategies has been suitably defined, we are ready to detail the semantics for formulas of the form ⟨⟨A⟩⟩ω,ϱφsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱ𝜑\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\varphi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT italic\_φ.
Following ATLES we first extend the notion of a joint strategy for a coalition to that of joint strategy *under a given capability and goal assignment*.
So, given a capability (goal) assignment ω𝜔\omegaitalic\_ω (ϱitalic-ϱ\varrhoitalic\_ϱ) and an agent 𝑎𝑔𝑡∈𝒜ω𝑎𝑔𝑡subscript𝒜𝜔{\operatorname{\textit{agt}}}\in\mathcal{A}\_{\omega}agt ∈ caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT (𝑎𝑔𝑡∈𝒜ϱ𝑎𝑔𝑡subscript𝒜italic-ϱ{\operatorname{\textit{agt}}}\in\mathcal{A}\_{\varrho}agt ∈ caligraphic\_A start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT), we denote 𝑎𝑔𝑡𝑎𝑔𝑡{\operatorname{\textit{agt}}}agt’s capabilities (goals) under ω𝜔\omegaitalic\_ω (ϱitalic-ϱ\varrhoitalic\_ϱ) by ω[𝑎𝑔𝑡]𝜔delimited-[]𝑎𝑔𝑡\omega[{\operatorname{\textit{agt}}}]italic\_ω [ agt ] (ϱ[𝑎𝑔𝑡]italic-ϱdelimited-[]𝑎𝑔𝑡\varrho[{\operatorname{\textit{agt}}}]italic\_ϱ [ agt ]).
Intuitively, an *⟨ω,ϱ⟩𝜔italic-ϱ\langle\omega,\varrho\rangle⟨ italic\_ω , italic\_ϱ ⟩-strategy for coalition A𝐴Aitalic\_A* is a joint strategy for A𝐴Aitalic\_A such that *(i)* agents in A∩𝒜ω𝐴subscript𝒜𝜔A\cap\mathcal{A}\_{\omega}italic\_A ∩ caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT only follow “rational” (plan-goal compatible) strategies as per their ω𝜔\omegaitalic\_ω-capabilities and ϱitalic-ϱ\varrhoitalic\_ϱ-goals; and *(b)* agents in A\𝒜ω\𝐴subscript𝒜𝜔A\backslash\mathcal{A}\_{\omega}italic\_A \ caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT follow arbitrary strategies.
Formally, an *⟨ω,ϱ⟩𝜔italic-ϱ\langle\omega,\varrho\rangle⟨ italic\_ω , italic\_ϱ ⟩-strategy for coalition A𝐴Aitalic\_A* (with 𝒜ω=𝒜ϱsubscript𝒜𝜔subscript𝒜italic-ϱ\mathcal{A}\_{\omega}=\mathcal{A}\_{\varrho}caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT = caligraphic\_A start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT) is a collective strategy FAsubscript𝐹𝐴F\_{A}italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT for agents A𝐴Aitalic\_A such that for all f𝑎𝑔𝑡∈FAsubscript𝑓𝑎𝑔𝑡subscript𝐹𝐴f\_{{\operatorname{\textit{agt}}}}\in F\_{A}italic\_f start\_POSTSUBSCRIPT agt end\_POSTSUBSCRIPT ∈ italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT with 𝑎𝑔𝑡∈A∩𝒜ω𝑎𝑔𝑡𝐴subscript𝒜𝜔{\operatorname{\textit{agt}}}\in A\cap\mathcal{A}\_{\omega}agt ∈ italic\_A ∩ caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT, it is the case that f𝑎𝑔𝑡∈ΣΠ,𝒢𝑎𝑔𝑡subscript𝑓𝑎𝑔𝑡subscriptsuperscriptΣ𝑎𝑔𝑡Π𝒢f\_{{\operatorname{\textit{agt}}}}\in\Sigma^{{\operatorname{\textit{agt}}}}\_{\Pi,\mathcal{G}}italic\_f start\_POSTSUBSCRIPT agt end\_POSTSUBSCRIPT ∈ roman\_Σ start\_POSTSUPERSCRIPT agt end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT roman\_Π , caligraphic\_G end\_POSTSUBSCRIPT, where Π=∪c∈ω[𝑎𝑔𝑡]Θ(c)Πsubscript𝑐𝜔delimited-[]𝑎𝑔𝑡Θ𝑐\Pi=\cup\_{c\in\omega[{\operatorname{\textit{agt}}}]}\Theta(c)roman\_Π = ∪ start\_POSTSUBSCRIPT italic\_c ∈ italic\_ω [ agt ] end\_POSTSUBSCRIPT roman\_Θ ( italic\_c ) and 𝒢=ϱ[𝑎𝑔𝑡]𝒢italic-ϱdelimited-[]𝑎𝑔𝑡\mathcal{G}=\varrho[{\operatorname{\textit{agt}}}]caligraphic\_G = italic\_ϱ [ agt ]. Note no requirements are asked on the strategies for the remaining agents A\𝒜ω\𝐴subscript𝒜𝜔A\backslash\mathcal{A}\_{\omega}italic\_A \ caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT, besides of course being legal (ATL) strategies.
Also, whereas ATLES ρ𝜌\rhoitalic\_ρ-strategies are collective strategies including *all* agents in the domain of commitment function ρ𝜌\rhoitalic\_ρ, our ⟨ω,ϱ⟩𝜔italic-ϱ\langle\omega,\varrho\rangle⟨ italic\_ω , italic\_ϱ ⟩-strategies are collective strategies for the coalition of concern only. This is because commitment functions induce deterministic agent behaviors, whereas capabilities and goals assignments induce non-deterministic ones.
We will elaborate on this issue below.
Using the notions of ⟨ω,ϱ⟩𝜔italic-ϱ\langle\omega,\varrho\rangle⟨ italic\_ω , italic\_ϱ ⟩-strategies and that of possible outcomes for a given collective strategy from ATL (refer to function 𝘰𝘶𝘵(⋅,⋅)𝘰𝘶𝘵⋅⋅\operatorname{\textit{{\small out}}}(\cdot,\cdot)out ( ⋅ , ⋅ ) from Preliminaries), we are now able to state the meaning of BDI-ATLES (coalition) formulas:222As with ATL(ES), φ𝜑\varphiitalic\_φ ought to be a path formula and is interpreted in the usual manner. We omit the other ATL-like cases for brevity; see [[20](#bib.bib20)].
ℳ,q⊧⟨⟨A⟩⟩ω,ϱφmodelsℳ𝑞
subscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱ𝜑\mathcal{M},q\models\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\varphicaligraphic\_M , italic\_q ⊧ ⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT italic\_φ *iff* there is a ⟨ω,ϱ⟩𝜔italic-ϱ\langle\omega,\varrho\rangle⟨ italic\_ω , italic\_ϱ ⟩-strategy FAsubscript𝐹𝐴F\_{A}italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT such that for all ⟨ω,ϱ⟩𝜔italic-ϱ\langle\omega,\varrho\rangle⟨ italic\_ω , italic\_ϱ ⟩-strategies F𝒜ω∖Asubscript𝐹subscript𝒜𝜔𝐴F\_{\mathcal{A}\_{\omega}\setminus A}italic\_F start\_POSTSUBSCRIPT caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT ∖ italic\_A end\_POSTSUBSCRIPT for 𝒜ω∖Asubscript𝒜𝜔𝐴\mathcal{A}\_{\omega}\setminus Acaligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT ∖ italic\_A, it is the case that ℳ,λ⊧φmodelsℳ𝜆
𝜑\mathcal{M},\lambda\models\varphicaligraphic\_M , italic\_λ ⊧ italic\_φ, for all paths λ∈𝘰𝘶𝘵(q,FA∪F𝒜ω∖A)𝜆𝘰𝘶𝘵𝑞subscript𝐹𝐴subscript𝐹subscript𝒜𝜔𝐴\lambda\in\operatorname{\textit{{\small out}}}(q,F\_{A}\cup F\_{\mathcal{A}\_{\omega}\setminus A})italic\_λ ∈ out ( italic\_q , italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ∪ italic\_F start\_POSTSUBSCRIPT caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT ∖ italic\_A end\_POSTSUBSCRIPT ).
Intuitively, FAsubscript𝐹𝐴F\_{A}italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT stands for the collective strategy of agents A𝐴Aitalic\_A guaranteeing the satisfaction of formula φ𝜑\varphiitalic\_φ. Because FAsubscript𝐹𝐴F\_{A}italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT is a ⟨ω,ϱ⟩𝜔italic-ϱ\langle\omega,\varrho\rangle⟨ italic\_ω , italic\_ϱ ⟩-strategy, some agents in A𝐴Aitalic\_A—those whose capabilities and goals are defined by ω𝜔\omegaitalic\_ω and ϱitalic-ϱ\varrhoitalic\_ϱ, resp.—are to follow strategies that correspond to rational executions of its capabilities.
At the same time, because other agents outside the coalition could have also been assigned capabilities and goals, the chosen collective strategy FAsubscript𝐹𝐴F\_{A}italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT needs to work no matter how such agents (namely, agents 𝒜ω∖Asubscript𝒜𝜔𝐴\mathcal{A}\_{\omega}\setminus Acaligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT ∖ italic\_A) behave, as long as they do it rationally given their plans and goals. That is, FAsubscript𝐹𝐴F\_{A}italic\_F start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT has to work with *any* rational collective strategy F𝒜ω∖Asubscript𝐹subscript𝒜𝜔𝐴F\_{\mathcal{A}\_{\omega}\setminus A}italic\_F start\_POSTSUBSCRIPT caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT ∖ italic\_A end\_POSTSUBSCRIPT.
Finally, the behavior of all remaining agents—namely those in 𝒜∖(A∪𝒜ω)𝒜𝐴subscript𝒜𝜔\mathcal{A}\setminus(A\cup\mathcal{A}\_{\omega})caligraphic\_A ∖ ( italic\_A ∪ caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT )—are taken into account when considering all possible outcomes, after all strategies for agents in A∪𝒜ω𝐴subscript𝒜𝜔A\cup\mathcal{A}\_{\omega}italic\_A ∪ caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT have been settled.
While similar to ATLES coalition formulas ⟨⟨A⟩⟩ρφsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜌𝜑\langle\!\langle A\rangle\!\rangle\_{\rho}\varphi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ρ end\_POSTSUBSCRIPT italic\_φ, BDI-ATLES coalition formulas ⟨⟨A⟩⟩ω,ϱφsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱ𝜑\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\varphi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT italic\_φ differ in one important aspect that makes its semantics more involved.
Specifically, whereas commitment functions ρ𝜌\rhoitalic\_ρ prescribe *deterministic* behaviors for agents, capabilities and goals assignments yield multiple potential behaviors for the agents of interest. This nondeterministic behavior stems from the fact that BDI agents can choose what goals to work on at each point and what available plans to use for achieving such goals.
Technically, this is reflected in the strategies for each agent in (𝒜ω∖A)subscript𝒜𝜔𝐴(\mathcal{A}\_{\omega}\setminus A)( caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT ∖ italic\_A )—those agents with assigned capabilities and goals but not part of the coalition—cannot be (existentially) considered together with those of agents in A𝐴Aitalic\_A or (universally) accounted for via the possible outcomes function 𝘰𝘶𝘵(⋅,⋅)𝘰𝘶𝘵⋅⋅\operatorname{\textit{{\small out}}}(\cdot,\cdot)out ( ⋅ , ⋅ ), as such function puts no rationality constraints on the remaining (non-committed) agents.
Thus, whereas agents in A∩𝒜ω𝐴subscript𝒜𝜔A\cap\mathcal{A}\_{\omega}italic\_A ∩ caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT are allowed to select one possible rational behavior, all rational behaviors for agents in (Aω∖A)subscript𝐴𝜔𝐴(A\_{\omega}\setminus A)( italic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT ∖ italic\_A ) need to be taken into consideration.
We close this section by noting an important, and expected, monotonicity property of BDI-ATLES w.r.t. changes in the goals and plans of agents.
###### Proposition 1
⊧⟨⟨A⟩⟩ω,ϱφ⊃⟨⟨A′⟩⟩ω′,ϱ′φmodelsabsentsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱ𝜑superset-ofsubscriptdelimited-⟨⟩delimited-⟨⟩superscript𝐴′superscript𝜔′superscriptitalic-ϱ′𝜑\models\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\varphi\supset\langle\!\langle A^{\prime}\rangle\!\rangle\_{\omega^{\prime},\varrho^{\prime}}\varphi⊧ ⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT italic\_φ ⊃ ⟨ ⟨ italic\_A start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_ϱ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_φ holds, provided that:
* •
A⊆A′𝐴superscript𝐴′A\subseteq A^{\prime}italic\_A ⊆ italic\_A start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT, that is, the coalition is not reduced;
* •
ω[𝑎𝑔𝑡]⊆ω′[𝑎𝑔𝑡]𝜔delimited-[]𝑎𝑔𝑡superscript𝜔′delimited-[]𝑎𝑔𝑡\omega[{\operatorname{\textit{agt}}}]\subseteq\omega^{\prime}[{\operatorname{\textit{agt}}}]italic\_ω [ agt ] ⊆ italic\_ω start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT [ agt ] and ϱ[𝑎𝑔𝑡]⊆ϱ′[𝑎𝑔𝑡]italic-ϱdelimited-[]𝑎𝑔𝑡superscriptitalic-ϱ′delimited-[]𝑎𝑔𝑡\varrho[{\operatorname{\textit{agt}}}]\subseteq\varrho^{\prime}[{\operatorname{\textit{agt}}}]italic\_ϱ [ agt ] ⊆ italic\_ϱ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT [ agt ], for all 𝑎𝑔𝑡∈𝒜ω∩A𝑎𝑔𝑡subscript𝒜𝜔𝐴{\operatorname{\textit{agt}}}\in\mathcal{A}\_{\omega}\cap Aagt ∈ caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT ∩ italic\_A, that is, the goals and capabilities of those BDI agents in the coalition are not reduced; and
* •
𝒜ω∖A⊆𝒜ω′∖A′subscript𝒜𝜔𝐴subscript𝒜superscript𝜔′superscript𝐴′\mathcal{A}\_{\omega}\setminus A\subseteq\mathcal{A}\_{\omega^{\prime}}\setminus A^{\prime}caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT ∖ italic\_A ⊆ caligraphic\_A start\_POSTSUBSCRIPT italic\_ω start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ∖ italic\_A start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT, that is, the set of non BDI agents outside the coalition is not reduced (but could be new BDI agents outside the coalition);
* •
ω′[𝑎𝑔𝑡]⊆ω[𝑎𝑔𝑡]superscript𝜔′delimited-[]𝑎𝑔𝑡𝜔delimited-[]𝑎𝑔𝑡\omega^{\prime}[{\operatorname{\textit{agt}}}]\subseteq\omega[{\operatorname{\textit{agt}}}]italic\_ω start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT [ agt ] ⊆ italic\_ω [ agt ] and ϱ′[𝑎𝑔𝑡]⊆ϱ[𝑎𝑔𝑡]superscriptitalic-ϱ′delimited-[]𝑎𝑔𝑡italic-ϱdelimited-[]𝑎𝑔𝑡\varrho^{\prime}[{\operatorname{\textit{agt}}}]\subseteq\varrho[{\operatorname{\textit{agt}}}]italic\_ϱ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT [ agt ] ⊆ italic\_ϱ [ agt ], for all 𝑎𝑔𝑡∈𝒜ω∖A𝑎𝑔𝑡subscript𝒜𝜔𝐴{\operatorname{\textit{agt}}}\in\mathcal{A}\_{\omega}\setminus Aagt ∈ caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT ∖ italic\_A, that is, the goals and capabilities of those BDI agents outside the coalition are not augmented.
Informally, augmenting the goals/plans of agents in a coalition does not reduce the ability of agents.
This is because a collective ⟨ω,ϱ⟩𝜔italic-ϱ\langle\omega,\varrho\rangle⟨ italic\_ω , italic\_ϱ ⟩-strategy for coalition A𝐴Aitalic\_A to bring about a formula would still work if more goals and plans are given to the agents in the coalition (second condition).
Observe, on the other hand, that augmenting the goals or plans of those agents outside the coalition may yield new behavior that can indeed interfere with the coalition’s original abilities (last condition). This even includes turning BDI agents into non BDI agents (third condition).
Of course, as in ATL, enlarging the coalition does not reduce ability (first condition).
4 BDI-ATLES Model Checking
---------------------------
Given a BDI-ATLES model ℳℳ\mathcal{M}caligraphic\_M and a formula φ𝜑\varphiitalic\_φ, the model checking algorithm for BDI-ATLES computes the set of states in ℳℳ\mathcal{M}caligraphic\_M that satisfy φ𝜑\varphiitalic\_φ.
To that end, the algorithm has to take into account the rational choices of each BDI agent, that is, those choices that are the consequence of the agent’s goals and capabilities specified by functions ϱitalic-ϱ\varrhoitalic\_ϱ and ω𝜔\omegaitalic\_ω in formulae of the form ⟨⟨A⟩⟩ω,ϱφsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱ𝜑\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\varphi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT italic\_φ.
Roughly speaking, the algorithm restricts, at each step, the options of BDI agents to their applicable plans.
We start by extending the model ℳℳ\mathcal{M}caligraphic\_M to embed the possible goals (based on the goal assignment) of BDI agents into each state, and then then discuss the model checking algorithm and its complexity.
foreach *φ′superscript𝜑normal-′\varphi^{\prime}italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT in Sub(φ𝜑\varphiitalic\_φ) w.r.t. ℳ=⟨𝒜,Q,𝒫,𝐴𝑐𝑡,d,𝒱,σ,Θ⟩ℳ𝒜𝑄𝒫𝐴𝑐𝑡𝑑𝒱𝜎normal-Θ\mathcal{M}=\langle\mathcal{A},Q,\mathcal{P},\operatorname{\textit{Act}},d,\mathcal{V},\sigma,\Theta\ranglecaligraphic\_M = ⟨ caligraphic\_A , italic\_Q , caligraphic\_P , Act , italic\_d , caligraphic\_V , italic\_σ , roman\_Θ ⟩* do
case *φ′=p:normal-:superscript𝜑normal-′𝑝absent\varphi^{\prime}=p:italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_p :* do [φ′]ℳ=𝒱(p)subscriptdelimited-[]superscript𝜑′ℳ𝒱𝑝[\varphi^{\prime}]\_{\mathcal{M}}=\mathcal{V}(p)[ italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ] start\_POSTSUBSCRIPT caligraphic\_M end\_POSTSUBSCRIPT = caligraphic\_V ( italic\_p );
;
case *φ′=¬θ:normal-:superscript𝜑normal-′𝜃absent\varphi^{\prime}=\lnot\theta:italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = ¬ italic\_θ :* do [φ′]ℳ=([True]ℳ∖[θ]ℳ)subscriptdelimited-[]superscript𝜑′ℳsubscriptdelimited-[]Trueℳsubscriptdelimited-[]𝜃ℳ[\varphi^{\prime}]\_{\mathcal{M}}=([\text{{True}}]\_{\mathcal{M}}\setminus[\theta]\_{\mathcal{M}})[ italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ] start\_POSTSUBSCRIPT caligraphic\_M end\_POSTSUBSCRIPT = ( [ True ] start\_POSTSUBSCRIPT caligraphic\_M end\_POSTSUBSCRIPT ∖ [ italic\_θ ] start\_POSTSUBSCRIPT caligraphic\_M end\_POSTSUBSCRIPT );
;
case *φ′=θ1∨θ2:normal-:superscript𝜑normal-′subscript𝜃1subscript𝜃2absent\varphi^{\prime}=\theta\_{1}\vee\theta\_{2}:italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_θ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ∨ italic\_θ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT :* do [φ′]ℳ=[θ1]ℳ∪[θ2]ℳsubscriptdelimited-[]superscript𝜑′ℳsubscriptdelimited-[]subscript𝜃1ℳsubscriptdelimited-[]subscript𝜃2ℳ[\varphi^{\prime}]\_{\mathcal{M}}=[\theta\_{1}]\_{\mathcal{M}}\cup[\theta\_{2}]\_{\mathcal{M}}[ italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ] start\_POSTSUBSCRIPT caligraphic\_M end\_POSTSUBSCRIPT = [ italic\_θ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ] start\_POSTSUBSCRIPT caligraphic\_M end\_POSTSUBSCRIPT ∪ [ italic\_θ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ] start\_POSTSUBSCRIPT caligraphic\_M end\_POSTSUBSCRIPT;
;
case *φ′=⟨⟨A⟩⟩ω,ϱ○θ:normal-:superscript𝜑normal-′normal-○subscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱ𝜃absent\varphi^{\prime}=\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\!\bigcirc\!\theta:italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = ⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT ○ italic\_θ :* do [φ′]ℳ=*ws*(*Pre*(A,ω,Θ,[θ]ℳϱ)∩⟦ϱ⟧)[\varphi^{\prime}]\_{\mathcal{M}}=\text{\emph{ws}}(\text{\emph{Pre}}(A,\omega,\Theta,[\theta]\_{\mathcal{M}\_{\varrho}})\cap\llbracket\varrho\rrbracket)[ italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ] start\_POSTSUBSCRIPT caligraphic\_M end\_POSTSUBSCRIPT = ws ( Pre ( italic\_A , italic\_ω , roman\_Θ , [ italic\_θ ] start\_POSTSUBSCRIPT caligraphic\_M start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) ∩ ⟦ italic\_ϱ ⟧ ) ;
;
case *φ′=⟨⟨A⟩⟩ω,ϱ□θ:normal-:superscript𝜑normal-′subscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱnormal-□𝜃absent\varphi^{\prime}=\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\Box\theta:italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = ⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT □ italic\_θ :* do ρ=[True]ℳϱ;τ=[θ]ℳϱformulae-sequence𝜌subscriptdelimited-[]Truesubscriptℳitalic-ϱ𝜏subscriptdelimited-[]𝜃subscriptℳitalic-ϱ\rho=[\text{{True}}]\_{\mathcal{M}\_{\varrho}};\tau=[\theta]\_{\mathcal{M}\_{\varrho}}italic\_ρ = [ True ] start\_POSTSUBSCRIPT caligraphic\_M start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ; italic\_τ = [ italic\_θ ] start\_POSTSUBSCRIPT caligraphic\_M start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ;
while *ρ⊈τnot-subset-of-or-equals𝜌𝜏\rho\not\subseteq\tauitalic\_ρ ⊈ italic\_τ* do ρ=τ;τ=*Pre*(A,ω,Θ,ρ)∩[θ]ℳϱformulae-sequence𝜌𝜏𝜏*Pre*𝐴𝜔Θ𝜌subscriptdelimited-[]𝜃subscriptℳitalic-ϱ\rho=\tau;\;\tau=\text{\emph{Pre}}(A,\omega,\Theta,\rho)\cap[\theta]\_{\mathcal{M}\_{\varrho}}italic\_ρ = italic\_τ ; italic\_τ = Pre ( italic\_A , italic\_ω , roman\_Θ , italic\_ρ ) ∩ [ italic\_θ ] start\_POSTSUBSCRIPT caligraphic\_M start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT od; ;
[φ′]ℳ=*ws*(ρ∩⟦ϱ⟧)[\varphi^{\prime}]\_{\mathcal{M}}=\text{\emph{ws}}(\rho\cap\llbracket\varrho\rrbracket)[ italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ] start\_POSTSUBSCRIPT caligraphic\_M end\_POSTSUBSCRIPT = ws ( italic\_ρ ∩ ⟦ italic\_ϱ ⟧ ) ;
;
case *φ′=⟨⟨A⟩⟩ω,ϱθ1𝒰θ2:normal-:superscript𝜑normal-′subscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱsubscript𝜃1𝒰subscript𝜃2absent\varphi^{\prime}=\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\theta\_{1}\mathcal{U}\theta\_{2}:italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = ⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT italic\_θ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT caligraphic\_U italic\_θ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT :* do ρ=[False]ℳϱ;τ=[θ2]ℳϱformulae-sequence𝜌subscriptdelimited-[]Falsesubscriptℳitalic-ϱ𝜏subscriptdelimited-[]subscript𝜃2subscriptℳitalic-ϱ\rho=[\text{{False}}]\_{\mathcal{M}\_{\varrho}};\tau=[\theta\_{2}]\_{\mathcal{M}\_{\varrho}}italic\_ρ = [ False ] start\_POSTSUBSCRIPT caligraphic\_M start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ; italic\_τ = [ italic\_θ start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ] start\_POSTSUBSCRIPT caligraphic\_M start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ;
while *τ⊈ρnot-subset-of-or-equals𝜏𝜌\tau\!\not\subseteq\!\rhoitalic\_τ ⊈ italic\_ρ* do ρ=ρ∪τ;τ=*Pre*(A,ω,Θ,ρ)∩[θ1]ℳϱformulae-sequence𝜌𝜌𝜏𝜏*Pre*𝐴𝜔Θ𝜌subscriptdelimited-[]subscript𝜃1subscriptℳitalic-ϱ\rho=\rho\!\cup\!\tau;\;\tau=\text{\emph{Pre}}(A,\omega,\Theta,\rho)\!\cap\![\theta\_{1}]\_{\mathcal{M}\_{\varrho}}\!italic\_ρ = italic\_ρ ∪ italic\_τ ; italic\_τ = Pre ( italic\_A , italic\_ω , roman\_Θ , italic\_ρ ) ∩ [ italic\_θ start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ] start\_POSTSUBSCRIPT caligraphic\_M start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT od; ;
[φ′]ℳ=*ws*(ρ∩⟦ϱ⟧)[\varphi^{\prime}]\_{\mathcal{M}}=\text{\emph{ws}}(\rho\cap\llbracket\varrho\rrbracket)[ italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ] start\_POSTSUBSCRIPT caligraphic\_M end\_POSTSUBSCRIPT = ws ( italic\_ρ ∩ ⟦ italic\_ϱ ⟧ ) ;
;
od;
return [φ′]ℳsubscriptdelimited-[]superscript𝜑′ℳ[\varphi^{\prime}]\_{\mathcal{M}}[ italic\_φ start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ] start\_POSTSUBSCRIPT caligraphic\_M end\_POSTSUBSCRIPT;
Algorithm 1 BDI-ATLES symbolic model checking.
So, given a BDI-ATLES model ℳ=⟨𝒜,Q,𝒫,𝐴𝑐𝑡,d,𝒱,σ,Θ⟩ℳ𝒜𝑄𝒫𝐴𝑐𝑡𝑑𝒱𝜎Θ\mathcal{M}\!\!=\!\!\langle\mathcal{A},Q,\mathcal{P},\operatorname{\textit{Act}},d,\mathcal{V},\sigma,\Theta\ranglecaligraphic\_M = ⟨ caligraphic\_A , italic\_Q , caligraphic\_P , Act , italic\_d , caligraphic\_V , italic\_σ , roman\_Θ ⟩ and a goal assignment ϱitalic-ϱ\varrhoitalic\_ϱ, the *goal-extended model* is a model ℳϱ=⟨𝒜,Qϱ,𝒫,𝐴𝑐𝑡,dϱ,𝒱ϱ,σϱ,Θ⟩subscriptℳitalic-ϱ𝒜subscript𝑄italic-ϱ𝒫𝐴𝑐𝑡subscript𝑑italic-ϱsubscript𝒱italic-ϱsubscript𝜎italic-ϱΘ\mathcal{M}\_{\varrho}\!\!=\!\!\langle\mathcal{A},Q\_{\varrho},\mathcal{P},\operatorname{\textit{Act}},d\_{\varrho},\mathcal{V}\_{\varrho},\sigma\_{\varrho},\Theta\ranglecaligraphic\_M start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT = ⟨ caligraphic\_A , italic\_Q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT , caligraphic\_P , Act , italic\_d start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT , caligraphic\_V start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT , italic\_σ start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT , roman\_Θ ⟩, where:
* •
Qϱ⊆Q×∏𝑎𝑔𝑡∈𝒜ϱ2ϱ[𝑎𝑔𝑡]subscript𝑄italic-ϱ𝑄subscriptproduct𝑎𝑔𝑡subscript𝒜italic-ϱsuperscript2italic-ϱdelimited-[]𝑎𝑔𝑡Q\_{\varrho}\subseteq Q\times\prod\_{{\operatorname{\textit{agt}}}\in\mathcal{A}\_{\varrho}}2^{\varrho[{\operatorname{\textit{agt}}}]}italic\_Q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ⊆ italic\_Q × ∏ start\_POSTSUBSCRIPT agt ∈ caligraphic\_A start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT 2 start\_POSTSUPERSCRIPT italic\_ϱ [ agt ] end\_POSTSUPERSCRIPT is the set of extended states, now accounting for the possible goals of BDI agents.
When qϱ=⟨q,g1,…,g|𝒜ϱ|⟩∈Qϱsubscript𝑞italic-ϱ
𝑞subscript𝑔1…subscript𝑔subscript𝒜italic-ϱsubscript𝑄italic-ϱq\_{\varrho}=\langle q,g\_{1},\ldots,g\_{|\mathcal{A}\_{\varrho}|}\rangle\in Q\_{\varrho}italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT = ⟨ italic\_q , italic\_g start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_g start\_POSTSUBSCRIPT | caligraphic\_A start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT | end\_POSTSUBSCRIPT ⟩ ∈ italic\_Q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT, where q∈Q𝑞𝑄q\in Qitalic\_q ∈ italic\_Q and gi⊆ϱ[𝑎𝑔𝑡i]subscript𝑔𝑖italic-ϱdelimited-[]subscript𝑎𝑔𝑡𝑖g\_{i}\subseteq\varrho[{\operatorname{\textit{agt}}}\_{i}]italic\_g start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ⊆ italic\_ϱ [ agt start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ], is an extended state, we use *ws*(qϱ)=q*ws*subscript𝑞italic-ϱ𝑞\text{\emph{ws}}(q\_{\varrho})=qws ( italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ) = italic\_q and *gl*(𝑎𝑔𝑡i,qϱ)=gi*gl*subscript𝑎𝑔𝑡𝑖subscript𝑞italic-ϱsubscript𝑔𝑖\text{\emph{gl}}({\operatorname{\textit{agt}}}\_{i},q\_{\varrho})=g\_{i}gl ( agt start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ) = italic\_g start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT to project ℳℳ\mathcal{M}caligraphic\_M’s world state and 𝑎𝑔𝑡isubscript𝑎𝑔𝑡𝑖{\operatorname{\textit{agt}}}\_{i}agt start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT’s goals.
To enforce belief-goal consistency we require no agent ever wants something already true: there are no qϱ∈Qϱsubscript𝑞italic-ϱsubscript𝑄italic-ϱq\_{\varrho}\in Q\_{\varrho}italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ∈ italic\_Q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT, 𝑎𝑔𝑡∈𝒜ϱ𝑎𝑔𝑡subscript𝒜italic-ϱ{\operatorname{\textit{agt}}}\in\mathcal{A}\_{\varrho}agt ∈ caligraphic\_A start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT, and formula γ𝛾\gammaitalic\_γ such that 𝒱(*ws*(qϱ))⊧γmodels𝒱*ws*subscript𝑞italic-ϱ𝛾\mathcal{V}(\text{\emph{ws}}(q\_{\varrho}))\models\gammacaligraphic\_V ( ws ( italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ) ) ⊧ italic\_γ and γ∈*gl*(𝑎𝑔𝑡,qϱ)𝛾*gl*𝑎𝑔𝑡subscript𝑞italic-ϱ\gamma\in\text{\emph{gl}}({\operatorname{\textit{agt}}},q\_{\varrho})italic\_γ ∈ gl ( agt , italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ).
* •
𝒱ϱ(qϱ)=𝒱(*ws*(qϱ))subscript𝒱italic-ϱsubscript𝑞italic-ϱ𝒱*ws*subscript𝑞italic-ϱ\mathcal{V}\_{\varrho}(q\_{\varrho})=\mathcal{V}(\text{\emph{ws}}(q\_{\varrho}))caligraphic\_V start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ( italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ) = caligraphic\_V ( ws ( italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ) ), for all qϱ∈Qϱsubscript𝑞italic-ϱsubscript𝑄italic-ϱq\_{\varrho}\in Q\_{\varrho}italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ∈ italic\_Q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT, that is, state evaluation remains unchanged.
* •
dϱ(𝑎𝑔𝑡,qϱ)=d(𝑎𝑔𝑡,*ws*(qϱ))subscript𝑑italic-ϱ𝑎𝑔𝑡subscript𝑞italic-ϱ𝑑𝑎𝑔𝑡*ws*subscript𝑞italic-ϱd\_{\varrho}({\operatorname{\textit{agt}}},q\_{\varrho})=d({\operatorname{\textit{agt}}},\text{\emph{ws}}(q\_{\varrho}))italic\_d start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ( agt , italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ) = italic\_d ( agt , ws ( italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ) ), that is, physical executability remains unchanged.
* •
σϱ(qϱ,𝑎→)=⟨q′,g1′,…,g|𝒜ϱ|′⟩subscript𝜎italic-ϱsubscript𝑞italic-ϱ→𝑎
superscript𝑞′superscriptsubscript𝑔1′…superscriptsubscript𝑔subscript𝒜italic-ϱ′\sigma\_{\varrho}(q\_{\varrho},\vec{\operatorname{\textit{a}}})=\langle q^{\prime},g\_{1}^{\prime},\ldots,g\_{|\mathcal{A}\_{\varrho}|}^{\prime}\rangleitalic\_σ start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ( italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT , over→ start\_ARG a end\_ARG ) = ⟨ italic\_q start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_g start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , … , italic\_g start\_POSTSUBSCRIPT | caligraphic\_A start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT | end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ⟩, where
q′=σ(*ws*(qϱ),𝑎→)superscript𝑞′𝜎*ws*subscript𝑞italic-ϱ→𝑎q^{\prime}=\sigma(\text{\emph{ws}}(q\_{\varrho}),\vec{\operatorname{\textit{a}}})italic\_q start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_σ ( ws ( italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ) , over→ start\_ARG a end\_ARG ) and
gi′=*gl*(𝑎𝑔𝑡i,qϱ)∖{γ∣γ∈*gl*(𝑎𝑔𝑡i,qϱ),𝒱(q′)⊧γ}superscriptsubscript𝑔𝑖′*gl*subscript𝑎𝑔𝑡𝑖subscript𝑞italic-ϱconditional-set𝛾formulae-sequence𝛾*gl*subscript𝑎𝑔𝑡𝑖subscript𝑞italic-ϱmodels𝒱superscript𝑞′𝛾g\_{i}^{\prime}=\text{\emph{gl}}({\operatorname{\textit{agt}}}\_{i},q\_{\varrho})\setminus\{\gamma\mid\gamma\in\text{\emph{gl}}({\operatorname{\textit{agt}}}\_{i},q\_{\varrho}),\mathcal{V}(q^{\prime})\models\gamma\}italic\_g start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = gl ( agt start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ) ∖ { italic\_γ ∣ italic\_γ ∈ gl ( agt start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ) , caligraphic\_V ( italic\_q start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ⊧ italic\_γ }, is the transition function for the extended model.
Model ℳϱsubscriptℳitalic-ϱ\mathcal{M}\_{\varrho}caligraphic\_M start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT is like ℳℳ\mathcal{M}caligraphic\_M though suitably extended to account for agents’ goals under the initial goal-assignment ϱitalic-ϱ\varrhoitalic\_ϱ.
Observe that the transition relation caters for persistence of goals as well as dropping of achieved goals. Indeed, the extended system will never evolve to an (extended) state in which some agent has some true fact as a goal. Hence, the transition relation is well-defined within Qϱsubscript𝑄italic-ϱQ\_{\varrho}italic\_Q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT states.
More interesting, the extended model keeps the original physical executability of actions and, as a result, it accommodates both rational and irrational paths. However, it is now possible to discriminate between them, as one can reason about applicable plans in each state.
Finally, it is not difficult to see that the extended model is, in general, exponentially larger than the original one with respect to the number of goals max𝑎𝑔𝑡∈𝒜(|ϱ[𝑎𝑔𝑡]|)subscript𝑎𝑔𝑡𝒜italic-ϱdelimited-[]𝑎𝑔𝑡\max\_{{\operatorname{\textit{agt}}}\in\mathcal{A}}(|\varrho[{\operatorname{\textit{agt}}}]|)roman\_max start\_POSTSUBSCRIPT agt ∈ caligraphic\_A end\_POSTSUBSCRIPT ( | italic\_ϱ [ agt ] | ) and agents |𝒜ϱ|subscript𝒜italic-ϱ|\mathcal{A}\_{\varrho}|| caligraphic\_A start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT |.
As standard, we denote the states satisfying a formula φ𝜑\varphiitalic\_φ by [φ]delimited-[]𝜑[\varphi][ italic\_φ ].
When the model is not clear from the context, we use [φ]ℳsubscriptdelimited-[]𝜑ℳ[\varphi]\_{\mathcal{M}}[ italic\_φ ] start\_POSTSUBSCRIPT caligraphic\_M end\_POSTSUBSCRIPT to denote the states in ℳℳ\mathcal{M}caligraphic\_M that satisfy the formula φ𝜑\varphiitalic\_φ.
We extend *ws*(⋅)*ws*⋅\text{\emph{ws}}(\cdot)ws ( ⋅ ) projection function to sets of extended states in the straightforward sense, that is, *ws*(S)=⋃q∈S{*ws*(q)}*ws*𝑆subscript𝑞𝑆*ws*𝑞\text{\emph{ws}}(S)=\bigcup\_{q\in S}\{\text{\emph{ws}}(q)\}ws ( italic\_S ) = ⋃ start\_POSTSUBSCRIPT italic\_q ∈ italic\_S end\_POSTSUBSCRIPT { ws ( italic\_q ) }. Thus, *ws*([φ]ℳϱ)*ws*subscriptdelimited-[]𝜑subscriptℳitalic-ϱ\text{\emph{ws}}([\varphi]\_{\mathcal{M}\_{\varrho}})ws ( [ italic\_φ ] start\_POSTSUBSCRIPT caligraphic\_M start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ) denotes the set of all world states in ℳℳ\mathcal{M}caligraphic\_M that are part of an extended state in ℳϱsubscriptℳitalic-ϱ\mathcal{M}\_{\varrho}caligraphic\_M start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT satisfying the formula φ𝜑\varphiitalic\_φ.
Also, ⟦ϱ⟧delimited-⟦⟧italic-ϱ\llbracket\varrho\rrbracket⟦ italic\_ϱ ⟧ denotes the set of extended states where the agents’ goals are as per goal assignment ϱitalic-ϱ\varrhoitalic\_ϱ; formally, ⟦ϱ⟧={q∣q∈Qϱ,∀𝑎𝑔𝑡∈𝒜ϱ:*gl*(𝑎𝑔𝑡,q)=ϱ[𝑎𝑔𝑡]}\llbracket\varrho\rrbracket=\{q\mid q\in Q\_{\varrho},\forall{\operatorname{\textit{agt}}}\in\mathcal{A}\_{\varrho}:\text{\emph{gl}}({\operatorname{\textit{agt}}},q)=\varrho[{\operatorname{\textit{agt}}}]\}⟦ italic\_ϱ ⟧ = { italic\_q ∣ italic\_q ∈ italic\_Q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT , ∀ agt ∈ caligraphic\_A start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT : gl ( agt , italic\_q ) = italic\_ϱ [ agt ] }.
Figure [1](#algorithm1 "1 ‣ 4 BDI-ATLES Model Checking ‣ Reasoning about Agent Programs using ATL-like Logics") shows the model checking algorithm for BDI-ATLES. It is based on the symbolic model checking algorithm for ATL [[3](#bib.bib3)] and ATLES [[20](#bib.bib20)].
The first three cases are handled in the same way as in ATL(ES).
To check the BDI-ATLES coalition formulae ⟨⟨A⟩⟩ω,ϱφsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱ𝜑\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\varphi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT italic\_φ, we extend the model as above (relative to the formula’s goal assignment ϱitalic-ϱ\varrhoitalic\_ϱ), and then check the plain ATL coalition formula ⟨⟨A⟩⟩φdelimited-⟨⟩delimited-⟨⟩𝐴𝜑\langle\!\langle A\rangle\!\rangle\varphi⟨ ⟨ italic\_A ⟩ ⟩ italic\_φ in such extended model.
Note that only the set of states having the goals as per the initial goal assignment are returned—all agents’ initial goals are active in the first state of any rational trace.
Unlike standard ATL model checking, we restrict the agents’ action choices as per their capabilities.
This is achieved by modifying the usual pre-image function *Pre*(⋅)*Pre*⋅\text{\emph{Pre}}(\cdot)Pre ( ⋅ ) to only take into account actions resultant from agents’ applicable plans.
More concretely, *Pre*(A,ω,Θ,ρ)*Pre*𝐴𝜔Θ𝜌\text{\emph{Pre}}(A,\omega,\Theta,\rho)Pre ( italic\_A , italic\_ω , roman\_Θ , italic\_ρ ) is the set of (extended) states from where agents in coalition A𝐴Aitalic\_A can jointly force the next (extended) state to be in set ρ𝜌\rhoitalic\_ρ no matter how all other agents (i.e., agents in 𝒜∖A𝒜𝐴\mathcal{A}\setminus Acaligraphic\_A ∖ italic\_A) may act and provided all BDI-style agents (i.e., agents with capabilities defined under ω𝜔\omegaitalic\_ω and ΘΘ\Thetaroman\_Θ) behave as such.
Formally:
| | | |
| --- | --- | --- |
| | *Pre*(A,ω,Θ,ρ)={q∣∀i∈A,∃ai∈dϱ+(i,q,ω,Θ),∀j∈𝒜∖A,∀aj∈dϱ+(j,q,ω,Θ):σϱ(q,⟨a1,…,a|𝒜|⟩)∈ρ},\begin{array}[]{l}\text{\emph{Pre}}(A,\omega,\Theta,\rho)=\{q\mid\forall i\in A,\exists a\_{i}\in d^{+}\_{\varrho}(i,q,\omega,\Theta),\\
\phantom{xxxxxxxxxxxxxxxxxx}\forall j\in\mathcal{A}\!\setminus\!A,\forall a\_{j}\in d^{+}\_{\varrho}(j,q,\omega,\Theta)\!:\!\sigma\_{\varrho}(q,\langle a\_{1},\ldots,a\_{|\mathcal{A}|}\rangle)\!\in\!\rho\},\end{array}start\_ARRAY start\_ROW start\_CELL Pre ( italic\_A , italic\_ω , roman\_Θ , italic\_ρ ) = { italic\_q ∣ ∀ italic\_i ∈ italic\_A , ∃ italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ∈ italic\_d start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ( italic\_i , italic\_q , italic\_ω , roman\_Θ ) , end\_CELL end\_ROW start\_ROW start\_CELL ∀ italic\_j ∈ caligraphic\_A ∖ italic\_A , ∀ italic\_a start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ∈ italic\_d start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ( italic\_j , italic\_q , italic\_ω , roman\_Θ ) : italic\_σ start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ( italic\_q , ⟨ italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_a start\_POSTSUBSCRIPT | caligraphic\_A | end\_POSTSUBSCRIPT ⟩ ) ∈ italic\_ρ } , end\_CELL end\_ROW end\_ARRAY | |
where auxiliary function dϱ+(𝑎𝑔𝑡,q,ω,Θ)subscriptsuperscript𝑑italic-ϱ𝑎𝑔𝑡𝑞𝜔Θd^{+}\_{\varrho}({\operatorname{\textit{agt}}},q,\omega,\Theta)italic\_d start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ( agt , italic\_q , italic\_ω , roman\_Θ ) denotes the set of all actions that an agent 𝑎𝑔𝑡𝑎𝑔𝑡{\operatorname{\textit{agt}}}agt may take in state q𝑞qitalic\_q under capabilities as per defined in ω𝜔\omegaitalic\_ω and ΘΘ\Thetaroman\_Θ:
| | | |
| --- | --- | --- |
| | dϱ+(𝑎𝑔𝑡,q,ω,Θ)={dϱ(𝑎𝑔𝑡,q)if 𝑎𝑔𝑡∉𝒜ωdϱ(𝑎𝑔𝑡,q)∩dBDI(𝑎𝑔𝑡,q,⋃c∈ω[𝑎𝑔𝑡]Θ(c))if 𝑎𝑔𝑡∈𝒜ωsubscriptsuperscript𝑑italic-ϱ𝑎𝑔𝑡𝑞𝜔Θcasessubscript𝑑italic-ϱ𝑎𝑔𝑡𝑞if 𝑎𝑔𝑡∉𝒜ωsubscript𝑑italic-ϱ𝑎𝑔𝑡𝑞superscript𝑑BDI𝑎𝑔𝑡𝑞subscript𝑐𝜔delimited-[]𝑎𝑔𝑡Θ𝑐if 𝑎𝑔𝑡∈𝒜ωd^{+}\_{\varrho}({\operatorname{\textit{agt}}},q,\omega,\Theta)=\begin{cases}d\_{\varrho}({\operatorname{\textit{agt}}},q)&\text{if ${\operatorname{\textit{agt}}}\not\in\mathcal{A}\_{\omega}$}\\[8.61108pt]
d\_{\varrho}({\operatorname{\textit{agt}}},q)\cap~{}d^{\text{BDI}}({\operatorname{\textit{agt}}},q,\displaystyle\bigcup\_{c\in\omega[{\operatorname{\textit{agt}}}]}\Theta(c))&\text{if ${\operatorname{\textit{agt}}}\in\mathcal{A}\_{\omega}$}\end{cases}italic\_d start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ( agt , italic\_q , italic\_ω , roman\_Θ ) = { start\_ROW start\_CELL italic\_d start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ( agt , italic\_q ) end\_CELL start\_CELL if agt ∉ caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT end\_CELL end\_ROW start\_ROW start\_CELL italic\_d start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ( agt , italic\_q ) ∩ italic\_d start\_POSTSUPERSCRIPT BDI end\_POSTSUPERSCRIPT ( agt , italic\_q , ⋃ start\_POSTSUBSCRIPT italic\_c ∈ italic\_ω [ agt ] end\_POSTSUBSCRIPT roman\_Θ ( italic\_c ) ) end\_CELL start\_CELL if agt ∈ caligraphic\_A start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT end\_CELL end\_ROW | |
An action belongs to set dϱ+(𝑎𝑔𝑡,q,ω,Θ)subscriptsuperscript𝑑italic-ϱ𝑎𝑔𝑡𝑞𝜔Θd^{+}\_{\varrho}({\operatorname{\textit{agt}}},q,\omega,\Theta)italic\_d start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ( agt , italic\_q , italic\_ω , roman\_Θ ) if it is physically possible (i.e., it belongs to dϱ(𝑎𝑔𝑡,q)subscript𝑑italic-ϱ𝑎𝑔𝑡𝑞d\_{\varrho}({\operatorname{\textit{agt}}},q)italic\_d start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ( agt , italic\_q )), and BDI-rational whenever the agent in question is a BDI agent.
To capture the latter constraint, set dBDI(𝑎𝑔𝑡,q,Π)superscript𝑑BDI𝑎𝑔𝑡𝑞Πd^{\text{BDI}}({\operatorname{\textit{agt}}},q,\Pi)italic\_d start\_POSTSUPERSCRIPT BDI end\_POSTSUPERSCRIPT ( agt , italic\_q , roman\_Π ) is defined as the set of all rational actions for agent 𝑎𝑔𝑡𝑎𝑔𝑡{\operatorname{\textit{agt}}}agt in (extended) state q𝑞qitalic\_q when the agent is equipped with the set of plans ΠΠ\Piroman\_Π:
| | | |
| --- | --- | --- |
| | dBDI(𝑎𝑔𝑡,q,Π)={{a∣ϕ[a]ψ∈Δ(𝑎𝑔𝑡,q,Π)}if Δ(𝑎𝑔𝑡,q,Π)≠∅{noOp}otherwisesuperscript𝑑BDI𝑎𝑔𝑡𝑞Πcasesconditional-set𝑎italic-ϕdelimited-[]𝑎𝜓Δ𝑎𝑔𝑡𝑞Πif Δ𝑎𝑔𝑡𝑞ΠnoOpotherwised^{\text{BDI}}({\operatorname{\textit{agt}}},q,\Pi)\!=\!\!\begin{cases}\{a\mid\phi[a]\psi\!\in\!\Delta({\operatorname{\textit{agt}}},q,\Pi)\}&\!\!\!\!\text{if }\Delta({\operatorname{\textit{agt}}},q,\Pi)\not=\emptyset\\
\{\textsc{noOp}\}&\!\!\!\!\text{otherwise}\end{cases}italic\_d start\_POSTSUPERSCRIPT BDI end\_POSTSUPERSCRIPT ( agt , italic\_q , roman\_Π ) = { start\_ROW start\_CELL { italic\_a ∣ italic\_ϕ [ italic\_a ] italic\_ψ ∈ roman\_Δ ( agt , italic\_q , roman\_Π ) } end\_CELL start\_CELL if roman\_Δ ( agt , italic\_q , roman\_Π ) ≠ ∅ end\_CELL end\_ROW start\_ROW start\_CELL { noOp } end\_CELL start\_CELL otherwise end\_CELL end\_ROW | |
where Δ(𝑎𝑔𝑡,q,Π)={ϕ[a]ψ∈Π∣𝒱(q)⊧ϕ,γ∈*gl*(𝑎𝑔𝑡,q),ψ⊧γ}Δ𝑎𝑔𝑡𝑞Πconditional-setitalic-ϕdelimited-[]𝑎𝜓Πformulae-sequencemodels𝒱𝑞italic-ϕformulae-sequence𝛾*gl*𝑎𝑔𝑡𝑞models𝜓𝛾\Delta({\operatorname{\textit{agt}}},q,\Pi)\!=\!\{\phi[a]\psi\in\Pi\mid\mathcal{V}(q)\models\phi,\gamma\in\text{\emph{gl}}({\operatorname{\textit{agt}}},q),\psi\models\gamma\}roman\_Δ ( agt , italic\_q , roman\_Π ) = { italic\_ϕ [ italic\_a ] italic\_ψ ∈ roman\_Π ∣ caligraphic\_V ( italic\_q ) ⊧ italic\_ϕ , italic\_γ ∈ gl ( agt , italic\_q ) , italic\_ψ ⊧ italic\_γ } is the set of all applicable plans in ΠΠ\Piroman\_Π at state q𝑞qitalic\_q.
So, summarising, function *Pre*(⋅,⋅,⋅,⋅)*Pre*⋅⋅⋅⋅\text{\emph{Pre}}(\cdot,\cdot,\cdot,\cdot)Pre ( ⋅ , ⋅ , ⋅ , ⋅ ) is an extension of the standard ATL *Pre*(⋅)*Pre*⋅\text{\emph{Pre}}(\cdot)Pre ( ⋅ ) function in which the agents that have goals and capabilities defined—the BDI agents—do act according to those goals and capabilities.
It is clear that the modified version of *Pre*(⋅)*Pre*⋅\text{\emph{Pre}}(\cdot)Pre ( ⋅ ) function does not alter the complexity of the underlying ATL-based algorithm. In fact, the variation is similar to that used for model checking ATLES, except that the action filtering does not come from strategy terms, but from agent plans.
This means that the algorithm runs in polynomial time w.r.t. the size of model ℳϱsubscriptℳitalic-ϱ\mathcal{M}\_{\varrho}caligraphic\_M start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT (which is exponential w.r.t. the original model ℳℳ\mathcal{M}caligraphic\_M).
###### Theorem 4.1
Model checking a BDI-ATLES formula ⟨⟨A⟩⟩ω,ϱφsubscriptdelimited-⟨⟩delimited-⟨⟩𝐴𝜔italic-ϱ𝜑\langle\!\langle A\rangle\!\rangle\_{\omega,\varrho}\varphi⟨ ⟨ italic\_A ⟩ ⟩ start\_POSTSUBSCRIPT italic\_ω , italic\_ϱ end\_POSTSUBSCRIPT italic\_φ (against a model ℳℳ\mathcal{M}caligraphic\_M) can be done in exponential time on the number of agents |𝒜|𝒜|\mathcal{A}|| caligraphic\_A | and goals maxa∈𝒜(|ϱ[a]|)subscript𝑎𝒜italic-ϱdelimited-[]𝑎\max\_{a\in\mathcal{A}}(|\varrho[a]|)roman\_max start\_POSTSUBSCRIPT italic\_a ∈ caligraphic\_A end\_POSTSUBSCRIPT ( | italic\_ϱ [ italic\_a ] | ).
Of course, should we have included agents’ goals explicitly in models (rather than using a succinct representation), as done with intentions in ATL+intentions (ATLI) [[15](#bib.bib15)], the model checking problem would retain ATL’s polynomial complexity. The same would apply if one just generalized ATLES to explicitly require all rational-strategies be part of the model.
The fact is, however, that generating such rational strategies by hand (to include them in models) will be extremely involved, even for small problems.
In addition, our approach decouples agent’s mental attitudes from the physical ATL-like model, and enables reasoning at the level of formulae without changing the model.
We shall note that the exponentiality may not show up in certain applications.
In many cases, for example, one is interested in just one BDI agent acting in an environment. In that case, only such agent will be ascribed goals and capabilities. Since it arises due to agents with goals, the exponential complexity would therefore only be on the number of goals for such agent.
Similarly, in situations where all agents have a single goal to achieve (e.g., to pick gold), the model checking would then be exponential on the number of BDI agents only.
In the next section we shall provide one interpretation of goals for which the model checking problem remains polynomial.
5 BDI-ATLES with Maintenance Goals
-----------------------------------
So far, we have worked on the assumption that agents have a set of “flat” *achievement* goals, goals that the agent needs to eventually bring about.
One can however consider alternative views of goals that could suit different domains. In particular, we have considered achievement goals with *priorities* and repetitive/reactive *maintenance* goals.
In the first case, the framework can be easily generalized to one in which goals can be prioritized without an increase in complexity [YadavSardina:CoRR12b].
A more promising case arises when goals are given a maintenance interpretation, that is, (safety) properties that ought to be preserved temporally. For example, a Mars robot has the goal to always maintain the fuel level above certain threshold.
We focus our attention on the so-called *repetitive* or *reactive* maintenance goals [[12](#bib.bib12), [10](#bib.bib10)]: goals that ought to be restored whenever “violated.” Should the fuel level drop below the threshold, the robot will act towards re-fueling.
This type of goals contrast with *proactive* maintenance goals [[12](#bib.bib12)], under which the agent is expected to proactively avoid situations that will violate the goal. The fact is, however, that almost all BDI platform—like Jack, Jason, and Jadex—only deal with the reactive version, thus providing a middle ground between expressivity and tractability.
Technically, to accommodate maintenance goals within BDI-ATLES, one only needs to do a small adaptation of the semantics of the logic so that goals are not dropped forever once satisfied, but “re-appear” when violated.
We refer to this alternate version of our logic as BDI-ATLESMsuperscriptBDI-ATLES𝑀\mbox{BDI-ATLES}^{M}BDI-ATLES start\_POSTSUPERSCRIPT italic\_M end\_POSTSUPERSCRIPT.
Of course, the model checking algorithm discussed above also needs to be slightly adapted to deal with the new goal semantics.
Interestingly, one only needs to adapt the definition of a goal-extended model Mϱsubscript𝑀italic-ϱM\_{\varrho}italic\_M start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT by re-defining components Qϱsubscript𝑄italic-ϱQ\_{\varrho}italic\_Q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT and σϱ(qϱ,𝑎→)subscript𝜎italic-ϱsubscript𝑞italic-ϱ→𝑎\sigma\_{\varrho}(q\_{\varrho},\vec{\operatorname{\textit{a}}})italic\_σ start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT ( italic\_q start\_POSTSUBSCRIPT italic\_ϱ end\_POSTSUBSCRIPT , over→ start\_ARG a end\_ARG ); see [YadavSardina:CoRR12b] for details.
###### Theorem 5.1
Model checking in BDI-ATLESMsuperscriptBDI-ATLES𝑀\mbox{BDI-ATLES}^{M}BDI-ATLES start\_POSTSUPERSCRIPT italic\_M end\_POSTSUPERSCRIPTcan be done in polynomial time (w.r.t. the model and the formula).
Hence, for (reactive) maintenance goals, we retain ATL(ES) polynomial complexity.333Note the complexity of model checking ATLES is known only for memoryless strategies [[20](#bib.bib20)].
Of course, this bound is tight, as BDI-ATLESMsuperscriptBDI-ATLES𝑀\mbox{BDI-ATLES}^{M}BDI-ATLES start\_POSTSUPERSCRIPT italic\_M end\_POSTSUPERSCRIPTsubsumes ATL (just take ω=φ=∅𝜔𝜑\omega=\varphi=\emptysetitalic\_ω = italic\_φ = ∅ in every coalition formula) and model checking ATL is PTIME-complete [[3](#bib.bib3)].
6 Discussion
-------------
We have developed an ATL-like logic that relates closely to the BDI agent-oriented programming paradigm widely used to *implement* multi-agent systems.
In the new logic, the user can express the capability of agents equipped with know-how knowledge in a natural way and can reason in the language about *what agents can achieve under such capabilities*.
Besides the general framework with standard achievement goals, we argued that one could instead appeal to goals with priorities or a special type of maintenance goals. We provided algorithms for model checking in such a framework and proved its (upper-bound) complexity in the various cases.
Overall, we believe that this work is a first principled step to bring together two different fields in the area of multi-agent systems, namely, verification of strategic behaviour and agent programming.
The framework presented here made a number of assumptions requiring further work.
Due to valuation function 𝒱𝒱\mathcal{V}caligraphic\_V in a structure, all agents are assumed to have full shared observability of the environment. This is, of course, a strong assumption in many settings.
We considered here basically reactive plans, akin to the language of Goal [[11](#bib.bib11)], certain classes of 2APL/3APL [[8](#bib.bib8), [13](#bib.bib13)], reactive modules [[4](#bib.bib4)], and universal plans [[19](#bib.bib19)]. We would like to explore the impact of allowing plan bodies having sequences of actions, and more importantly, sub-goaling, as well as the possibility of agents imposing (new) goals to other agents, via so-called BDI *messages*.
Also, in the context of complex plan bodies, one could then consider both a linear as well as interleaved execution styles of plans within each agent (for its various goals).
Most of these issue appear to be orthogonal to each other, and hence can be investigated one by one. With the core framework laid down, our next efforts shal focus on the above issues, as well as proving whether the complexity result provided in Theorem [4.1](#S4.Thmtheorem1 "Theorem 4.1 ‣ 4 BDI-ATLES Model Checking ‣ Reasoning about Agent Programs using ATL-like Logics") is tight.
We close by noting that, besides ATLES, our work has strong similarities and motivations to those on *plausibility* [[14](#bib.bib14)] and *intention* [[15](#bib.bib15)] reasoning in ATL. Like ATLES, however, those works are still not linked to any approach for the actual development of agents, which is the main motivation behind our work. Nonetheless, we would like to investigate how to integrate plausibility reasoning in our logic, as it seems orthogonal to that of rational BDI-style behavior. Indeed, the plausibility approach allows us to focus the reasoning to certain parts of an ATL structure using more declarative specifications. |
5259f6ce-221f-4579-8d01-62c44f8f0795 | trentmkelly/LessWrong-43k | LessWrong | Re SMTM: negative feedback on negative feedback
SlimeMoldTimeMold (SMTM) recently finished a 14-part series on psychology, “The Mind in the Wheel”, centering on feedback loops (“cybernetics”) as kind of a grand unified theory of the brain.
There are parts of their framework I like—in particular, I think there are innate drives (pain is bad, eating-when-hungry is good, etc.), and I think that the top priority of neuroscience & psychology should be to figure out exactly what they are and how they work.
But when we drill into details, I have a bunch of disagreements with SMTM’s account.
Just kidding! Control theory is great. I’m all for it. …I just don’t think it’s a grand unified theory of the brain. (source)
I think there’s a general failure mode of “grand unified theories of the brain”: the brain needn’t have any grand unified theory in the first place! I find that people with such theories are often doing the thing where they have a hammer and everything looks like a nail. (I throw this same criticism at the “brain = Bayesian inference” people, and the “brain = prediction” people, and the “brain = neural net with gradient descent” people, etc.).
I’m likewise generally opposed to grand unified theories of the body. The shoulder involves a ball-and-socket joint, and the kidney filters blood. OK cool, those are two important facts about the body. I’m happy to know them! I don’t feel the need for a grand unified theory of the body that includes both ball-and-socket joints and blood filtration as two pieces of a single overly-cute grand narrative. Ditto for the cortex, striatum, cerebellum, and all the other components of the brain.
So, how do homeostatic feedback control loops actually work—what’s the not-overly-cute big picture that they really fit into? Read on for my opinions on that, along with my spirited defense of brain-like AGI doomerism, of reinforcement learning and value functions in the brain, of neural tracer studies, and much more!
Anyway, onto the posts!
(Thanks SMTM for kindly chatting with me |
a3bafced-0060-4fb1-849c-e7ce20afd2cd | trentmkelly/LessWrong-43k | LessWrong | Let's Read:
Superhuman AI for multiplayer poker
On July 11, a new poker AI is published in Science. Called Pluribus, it plays 6-player No-limit Texas Hold'em at superhuman level.
In this post, we read through the paper. The level of exposition is between the paper (too serious) and the popular press (too entertaining).
Basics of Texas Hold'em
If you don't know what it even is, like me, then playing a tutorial would be best. I used Learn Poker on my phone.
Now that you know how to play it, it's time to deal with some of the terminologies.
* Big blind: the minimal money/poker chips that every player must bet in order to play. For example, $0.1 would be a reasonable amount in casual play.
* No-limit: you can bet as much as you want. Okay, not really. You can't bet a billion dollars. In practical playing, it's usually limited to something "reasonable" like 100 times of the big blind.
* Heads-up: 2-player.
* Limping: betting the minimal amount that you have to bet, in order to keep yourself in the game. This is generally considered bad: if you feel confident, you should raise the bet, and if you feel diffident, you should quit.
* Donk betting: some kind of uncommon play that's usually considered dumb (like a donkey). I didn't figure out what it actually means.
The authors
The authors are Noam Brown and Tuomas Sandholm. Previously, they made the news by writing Libratus, a poker AI that beat human champions in 2-player no-limit Texas Hold'em, in 2017.
Pluribus contains a lot of the code from Libratus and its siblings:
> The authors have ownership interest in Strategic Machine, Inc. and Strategy Robot, Inc. which have exclusively licensed prior game-solving code from Prof. Sandholm’s Carnegie Mellon University laboratory, which constitutes the bulk of the code in Pluribus.
Scroll to the bottom for more on the two companies.
Highlights from the paper
IS NASH EQUILIBRIUM EVEN WORTHWHILE?
In multiplayer games, Nash equilibriums are not easy to compute, and might not even matter. Consider the Lemonade Sta |
1fbcc603-f73b-4b09-8c7e-1a1ffeb7b807 | trentmkelly/LessWrong-43k | LessWrong | Electrons don’t think (or suffer)
There is an EA fringe that talks about suffering in elementary particles. Physicist Sabine Hossenfelder reminds her readers why this panpsychist idea is nonsense.
TL;DR: "if you want a particle to be conscious, your minimum expectation should be that the particle can change. It’s hard to have an inner life with only one thought. But if electrons could have thoughts, we’d long have seen this in particle collisions because it would change the number of particles produced in collisions."
The whole post and the comments are an entertaining read.
|
17c87623-9102-42b1-8d08-d0a74b88b164 | trentmkelly/LessWrong-43k | LessWrong | Bathing Machines and the Lindy Effect
From Wikipedia:
> The Lindy effect is a theory that the future life expectancy of some non-perishable things like a technology or an idea is proportional to their current age, so that every additional period of survival implies a longer remaining life expectancy.
There are a some shallow criticisms you could make of it as a predictive tool. How do you define the thing? What's the proportion of age to life expectancy? How often do we update it?
But these are all problems of refining a loose claim into an unambiguous prediction. How would we evaluate whether this heuristic is a useful predictive tool?
Let's imagine that we're updating the Lindy effect predictions every year.
Bathing machines were roofed and walled wooden carts rolled to the beach to allow swimmers to wash and change in privacy. The earliest evidence of them is an engraving from 1735, and they were almost entirely out of use by the early 1920s. Their technological lifespan is around 185 years.
Let's be generous to the Lindy effect, and say that it needs to give a lifespan estimate that's within 25% of the true value to count as correct. In this case, it needed to predict that bathing machines would become obsolete sometime between 1874-1966.
People often use the Lindy Effect to predict that at any given time, we're halfway through the lifespan of anything that currently exists. Using that value, it generated correct predictions between 1805 and 1851, about 25% of the lifespan of the bathing machine. So across the lifetime of the bathing machine, the Lindy Effect was usually wrong.
To take this a step further, I wrote a Python script that finds the most successful Lindy Effect proportion. The code is in the comments. Here are my results.
* The most successful historical Lindy Effect predictions, using the 25% confidence interval, estimate that a thing will be around for 125% of its age in any given year.
* It will be right only 40% of the time throughout the lifespan of the thing.
* It will |
aa279784-8510-450b-bc39-14d5c95cd0ab | trentmkelly/LessWrong-43k | LessWrong | Decentralized Exclusion
I'm part of several communities that are relatively decentralized. For example, anyone can host a contra dance, rationality meetup, or effective altruism dinner. Some have central organizations (contra has CDSS, EA has CEA) but their influence is mostly informal. This structure has some benefits (lower overhead, more robust) but one drawback is in handling bad behavior. If several people reported very bad experiences with someone at my local dance we'd kick them out, but that wouldn't keep them from harming others at, say, any of the hundreds of other events run by other organizations.
I have seen cases, though, where someone was fully removed from a decentralized community. Looking at why these cases succeeded and others failed, I think it took:
1. Clear misbehavior, in a way that nearly everyone would agree if they looked into it.
2. Detailed public accusations, so people can look into it if they doubt the consensus.
The combination of these means that you can have an initial burst of 'drama' in which lots of people learn about the situation and agree that the person should be kicked out, and then this can be maintained whenever they show up again. For example:
* 2016: Gleb Tsipursky from the EA community, for a range of shady things and a pattern of apologizing and then continuing (details).
* 2017: Jordy Williams from contra dance, after accusations of grooming and rape (details).
* 2018: Brent Dill, from the rationality community after accusations of sexual abuse, gaslighting, and more (details).
Unfortunately this approach relies on people making public accusations, which is really hard. We should support people when they do and recognize their bravery, but people will often have valid reasons why they won't: fear of retaliation, unwilling to have that level of public scrutiny, risk of legal action. In those cases it's still possible to make some progress privately, and we definitely need to try, but you keep bumping into the limitations of de |
fb28940f-925f-474d-8e14-c2e61c25e7d4 | trentmkelly/LessWrong-43k | LessWrong | Want to predict/explain/control the output of GPT-4? Then learn about the world, not about transformers.
Introduction
Consider the following scene from William Shakespeare's Julius Caesar.
In this scene, Caesar is at home with his wife Calphurnia. She has awoken after a bad dream, and is pleaded with Caesar not to go to the Senate. Although Caesar initially agrees to stay at home with her, he later changes his mind after being convinced by his friend Decius Brutus that the Senate needs him there to address important business.
> CAESAR: The cause is in my will: I will not come; That is enough to satisfy the senate.
>
> DECIUS BRUTUS: If Caesar hide himself, shall they not whisper 'Lo, Caesar is afraid'? Pardon me, Caesar; for my dear dear love To our proceeding bids me tell you this; And reason to my love is liable.
>
> CAESAR: How foolish do your fears seem now, Calphurnia! I am ashamed I did yield to them. Give me my robe, for I will go.
>
> — Julius Caesar, Act II Scene II
This scene is set on the morning of 15 March 44 BC, the so-called "Ides of March", which is coincidentally also the date today. When Caesar arrives at the Senate meeting, he is promptly assassinated.
But suppose he never went? Let's say we change Caesar's final line to this:
> CAESAR: My mind is firm, Decius. I'll stay within these walls, And not tempt Fortune on this cursed day. Worry me not, for I will stay.
When I feed this modified scene into GPT-4, what will be the output? Maybe it'll produce an alternative history where Caesar was never assassinated, and the Roman Republic endures for two thousand years. Or maybe it'll produce a history where Caesar is killed anyway, by some other means.
I don't know. How might I determine the answer?
The claim
You might think that if you want to predict the output of GPT-4, then the best thing would be to learn about autoregressive transformers. Maybe you should read Neel Nanda's blogposts on mechanistic interpretability. Or maybe you should read the arxiv papers on the GPT series. But actually, this won't help you predict GPT-4's output on this |
04080703-10f5-4e92-9a1a-d094550811d6 | trentmkelly/LessWrong-43k | LessWrong | Open thread, July 29-August 4, 2013
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Of course, for "every Monday", the last one should have been dated July 22-28. *cough* |
4b8523f1-d248-4b2e-b404-deaad90720bd | trentmkelly/LessWrong-43k | LessWrong | Accrue Nuclear Dignity Points
Content warning: nuclear doom
Previously on last week's episode: A Few Terrifying Facts About The Russo-Ukrainian War
----------------------------------------
You have probably thought about prepping at some point.
You have probably also not prepped as much as you'd like.
Neither have I.
And neither of us likely will be prepared "enough", in this world, if the worst happens.
You've probably seen those prepper checklists and balked at completing every item on them.
Maybe you've told yourself it's not worth living after a nuclear war.
Maybe you've convinced yourself that preparing for the set of things that includes normal disasters like an earthquake or hurricane will be all that is reasonable to do.
I've long yearned for a proper collective response to Global Catastrophic Risks In Your Local Neighborhood.
I'm coming to accept that there's simply no reaching adequacy. I'm still likely to fuck up on some significant axis and still pretty likely to die after a nuclear war, though less than before each preparation I took.
Still...
If you believe in dying with dignity, then you can die with more or less dignity after a nuclear war.
You can die from radiation after a couple weeks, and because you had emergency water, someone else had access to water and didn't die of thirst, and you were able to help a couple friends get to safety before you succumbed.
You can have potassium iodide pills, and not die of radiation but then still get cancer after a year. You have emergency water and two weeks of food, but you somehow get stuck in your urban area because all your friends with cars took off and you forgot to download cash from an ATM to pay the people who want money to drive you out of the nuke zone.
Part of a nuclear war is likely to be EMPs, which result from atmospheric detonations. This means your electronics probably get fried. And the banks. You lose access to the Internet, to your phone, to most of your money... Probably? But you can protect your elect |
8ce7876e-93d3-4954-84f1-096c13616025 | trentmkelly/LessWrong-43k | LessWrong | Agglomeration of 'Ought'
§ Introduction
My aim in this post is to argue that the ‘ought,’ predicate, interpreted in a global sense, agglomerates. I’ll henceforth refer to this position as agglomeration. The position is as follows.
Agglomeration: If “I ought to do A,” and “I ought to do B,” then “I ought to do A and B,” where A and B are actions, and “A and B,” is a weak conjunctive action.
I use the term ‘weak conjunctive action,’ to mean an action of the form “A and B,” where A and B are actions. For any agent X, the sentence “X did ‘A and B,’” is true at time t when (1) “X did A,” is true at time t and (2) “X did B,” is true at time t. There is a sense in which the action ‘A and B’ is constructed by a very weak form of conjunction since the conjunction of actions A and B does not imply that A and B must be done in a particular order or near each other in space or time. To understand how weak these conditions are, contrast the sense in which I use the word ‘and’ in this paper with the work that ‘and’ does in the common-sense interpretation of the action ‘running and thinking.’ In the common-sense interpretation, ‘and’ implies that the agent runs and thinks simultaneously. However, if we interpret ‘running and thinking,’ as a weak conjunctive action, then it is true on Friday that an agent who ran on Wednesday and thought on Thursday did the action of ‘running and thinking.’
I use the term ‘ought statements’ to refer to statements of the form “X ought to do A.” I distinguish between global and local ought statements. An ought statement is global if and only if the obligation to do A in circumstance C that the ‘ought’ statement points too reflects and incorporates all the relevant moral factors in C. Not all obligations are global. A local obligation to do A in circumstance C is an obligation that is true in virtue of a subset of the morally relevant factors in C. For instance, suppose that I borrowed Alice’s sword and promised to return it. In virtue of my promise, it seems tha |
976e517b-0092-46df-b87a-7f218fc28ded | trentmkelly/LessWrong-43k | LessWrong | To perform best at work, look at Time & Energy account balance
Several weeks ago, I got a chance to join a talk hosting one of the very few female regional head at Google.
Despite not having any business background, she climbed the rank from entry level employee to become a regional head, surpassing everyone else from prestigious business degrees and rich experiences.
One success driver she mentioned got my attention. Despite lagging very much behind at the beginning, the core to her success is that she always aims for 120% result of any task in front of her.
The reason why this interests me is not because of my fresh ears.
In fact, this is not the first time I heard of this concept. Not the first time I get inspired of giving it all to whatever is in front. Not the first time I try…and not the first time I fail.
Did I not put in enough effort?
No…in fact, I put in so much effort to make this concept come to live, not realising that while effort is highly important, it’s critically inadequate.
As I listened to this amazing regional head talking about different aspects of her life, I came to realisation on what I have always been missing so far.
To make each task yield 120%, apart from effort, we should also look at our time and energy balance.
Contributing the best on a task means to give the amount of time and energy in the level required to make the result best.
We cannot contribute what we don’t have.
No matter how much effort we try to give adequate time required for the best, we only have 24 hours a day.
No matter how much energy we try to put into each task, we only have a limited stream in each day.
Therefore, giving our best does not start from the moment we begin working…but from the moment we plan our schedule and project pipelines.
When having "Enough Time" is Not Enough When my boss asked if I have enough time to take on one additional project, I would look at how much time is required to finish all the tasks on my desk and then, most of the time, said "Yes" thinking I have enough time to finish it all |
8a84edc8-aa6f-4ca5-99f7-754f7b836515 | trentmkelly/LessWrong-43k | LessWrong | Morality and relativistic vertigo
tl;dr: Relativism bottoms-out in realism by objectifying relations between subjective notions. This should be communicated using concrete examples that show its practical importance. It implies in particular that morality should think about science, and science should think about morality.
Sam Harris attacks moral uber-relativism when he asserts that "Science can answer moral questions". Countering the counterargument that morality is too imprecise to be treated by science, he makes an excellent comparison: "healthy" is not a precisely defined concept, but no one is crazy enough to utter that medicine cannot answer questions of health.
What needs adding to his presentation (which is worth seeing, though I don't entirely agree with it) is what I consider the strongest concise argument in favor of science's moral relevance: that morality is relative simply means that the task of science is to examine absolute relations between morals. For example, suppose you uphold the following two moral claims:
1. "Teachers should be allowed to physically punish their students."
2. "Children should be raised not to commit violence against others."
First of all, note that questions of causality are significantly more accessible to science than people before 2000 thought was possible. Now suppose a cleverly designed, non-invasive causal analysis found that physically punishing children, frequently or infrequently, causes them to be more likely to commit criminal violence as adults. Would you find this discovery irrelevant to your adherence to these morals? Absolutely not. You would reflect and realize that you needed to prioritize them in some way. Most would prioritize the second one, but in any case, science will have made a valid impact.
So although either of the two morals is purely subjective on its own, how these morals interrelate is a question of objective fact. Though perhaps obvious, this idea has some seriously persuasive consequences and is not be taken lightly. Wh |
b86fef9a-d017-4a1e-ad6f-56a853942744 | trentmkelly/LessWrong-43k | LessWrong | AI x-risk reduction: why I chose academia over industry
I've been leaning towards a career in academia for >3 years, and recently got a tenure track role at Cambridge. This post sketches out my reasoning for preferring academia over industry.
Thoughts on Industry Positions:
A lot of people working on AI x-risk seem to think it's better to be in industry. I think the main arguments for that side of things are:
* All the usual reasons for preferring industry, e.g. less non-research obligations, more resources.
* AGI is expected to be built in industry (e.g. by OpenAI, Google, or DeepMind), and if you're there, you can influence the decision-making around development and deployment.
I think these are good reasons, but far from definitive.
I'll also note that nobody seems to be going to Google, even though they are arguably the most likely to develop AGI, since 1) they are bigger, publish more, and have more resources, 2) they can probably steal from DeepMind to some extent. So if you ARE going to industry, please consider working for Google. Also Chinese companies.
My reasons for preferring academia:
* Mentorship and exponential growth: In academia, you can mentor a lot more people, and this leads to a much higher rate of exponential growth. My quick estimate is that as an academic you can produce ~10 new researchers in 5 years; in industry, it's more like ~3. I think you might also have significant, but hard-to-measure impact through teaching and other academic activities.
* Personal fit: Unlike (I think) most people in the field, I don't like coding much. I'm also not a theoretician. I am more of a "big picture" "idea person", and more of an extrovert. I like the idea of spending most of my time managing others, writing, giving talks, etc. I have far too many ideas to pursue on my own effectively. I also don't like the idea of having a boss.
* Better position for advocacy: There are many reasons I think academia makes for a better "bully pulpit".
- A tenure track faculty position at a top-20 in |
7a4aee65-0e42-45e1-b578-0d01effa4d55 | trentmkelly/LessWrong-43k | LessWrong | Thoughts after the Wolfram and Yudkowsky discussion
I recently listened to the discussion between Wolfram and Yudkowsky about AI risk. In some ways this conversation was tailor-made for me, so I'm going to write some things about it and try to get it out in one day instead of letting it sit in my drafts for 3 weeks as I tend to do. Wolfram has lately obsessed over fundamental physics, which is a special interest of mine. Yudkowsky is one of the people thinking most carefully about powerful AI, which I think will kill us all, and I’d like to firm up that intuition. Throw them on a podcast for a few hours, and you have my attention.
That said, for the first hour I was just incredibly frustrated. Wolfram keeps running down rabbit holes that were basically “aha! You haven’t thought about [thing Yud wrote ten thousand words on in 2008]!” But a miracle happens somewhere in the second hour and Wolfram is asking actually relevant questions! His framework of small accidental quirks in machine learning algorithms leading to undesired behavior later was basically an actual issue. It was kind of a joy listening to two smart people trying to mutually get on the same page. Wolfram starts out bogged down in minutia about what 'wanting' is and whether it constitutes anthropomorphism, but finally finds a sort of more abstract space about steering to goals and trying to see Yudkowsky’s point in terms of the relative dangers of sections of the space of goals under sufficient optimization. The abstraction was unfortunate in some ways, because I was interested in some of the minutia once they were both nearly talking about the same thing, but also, if Wolfram kept running down rabbit holes like “actually quarks have different masses at different energy scales” when Yudkowsky said something like “the universe runs on quarks everywhere all at once no matter what we think the laws of physics are,” then they were never going to get to any of the actual arguments. That said, I don't see how Wolfram got to anything close to the actual point a |
4a85e2c1-2f48-45ac-a9e1-74ddb06e1cfb | trentmkelly/LessWrong-43k | LessWrong | January 2011 Southern California Meetup
There will be a meetup for Southern California this Sunday, January 23, 2011 at 4PM and running for three to five hours. The meetup is happening at Marco's Trattoria. The address is:
8200 Santa Monica Blvd
West Hollywood, CA 90046
If all the people (including guests and high end group estimates) show up we'll be at the limit of the space with 24 attendees. Previous meetups had room for walk-ins and future meetups should as well, but this one is full. If you didn't RSVP in time for this one but want to get an email reminder when the February meetup is scheduled send me a PM with contact info.
For those interested in carpooling, see comments for: San Diego, Lake Forest.
The format for past meetups has varied based on the number of attendees and their interests. At various points we have either tried or considered: paranoid debating, small group "dinner party conversations", structured rationality exercises, large discussions with people sharing personal experiences with sleep and "nutraceutical" interventions for intelligence augmentation, and specialized subprojects to develop tools for quantitatively estimating the value of things like cryonics or existential risk interventions.
People at these meetups are generally up for being subjects of fun experiments in group or individual rationality. Also, past experience indicates that interesting top level articles are inspired by conversations that happen at meetups. Expect something awesome to happen... or bring something neat to make something awesome happen! |
92079a97-8c79-40de-b0c0-407a48163777 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | The VNM independence axiom ignores the value of information
*Followup to : [Is risk aversion really irrational?](/lw/9oj/is_risk_aversion_really_irrational/)*
After reading the [decision theory FAQ](/lw/gu1/decision_theory_faq) and re-reading [The Allais Paradox](/lw/my/the_allais_paradox/) I realized I still don't accept the VNM axioms, especially the independence one, and I started thinking about what my true rejection could be. And then I realized I already somewhat explained it here, in my [Is risk aversion really irrational?](/lw/9oj/is_risk_aversion_really_irrational/) article, but it didn't make it obvious in the article how it relates to VNM - it wasn't obvious to me at that time.
Here is the core idea: information has value. Uncertainty therefore has a cost. And that cost is not linear to uncertainty.
Let's take a first example: A is being offered a trip to Ecuador, B is being offered a great new laptop and C is being offered a trip to Iceland. My own preference is: A > B > C. I love Ecuador - it's a fantastic country. But I prefer a laptop over a trip to Iceland, because I'm not fond of cold weather (well, actually Iceland is pretty cool too, but let's assume for the sake of the article that A > B > C is my preference).
But now, I'm offered D = (50% chance of A, 50% chance of B) or E = (50% chance of A, 50% chance of C). The VNM independence principle says I should prefer D > E. But doing so, it forgets the cost of information/uncertainty. By choosing E, I'm sure I'll be offered a trip - I don't know where, but I know I'll be offered a trip, not a laptop. By choosing D, I'm no idea on the nature of the present. I've much less information on my future - and that lack of information has a cost. If I know I'll be offered a trip, I can already ask for days off at work, I can go buy a backpack, I can start doing the paperwork to get my passport. And if I know I won't be offered a laptop, I may decide to buy one, maybe not as great as one I would have been offered, but I can still buy one. But if I chose D, I've much less information about my future, and I can't optimize it as much.
The same goes for the Allais paradox: having certitude of receiving a significant amount of money ($24 000) has a value, which is present in choice 1A, but not in all others (1B, 2A, 2B).
And I don't see why a "rational agent" should neglect the value of this information, as the VNM axioms imply. Any thought about that? |
02ccd558-4d49-40db-aec8-199d647e3030 | trentmkelly/LessWrong-43k | LessWrong | What Program Are You?
I've been trying for a while to make sense of the various alternate decision theories discussed here at LW, and have kept quiet until I thought I understood something well enough to make a clear contribution. Here goes.
You simply cannot reason about what to do by referring to what program you run, and considering the other instances of that program, for the simple reason that: there is no unique program that corresponds to any physical object.
Yes, you can think of many physical objects O as running a program P on data D, but there are many many ways to decompose an object into program and data, as in O = <P,D>. At one extreme you can think of every physical object as running exactly the same program, i.e., the laws of physics, with its data being its particular arrangements of particles and fields. At the other extreme, one can think of each distinct physical state as a distinct program, with an empty unused data structure. Inbetween there are an astronomical range of other ways to break you into your program P and your data D.
Eliezer's descriptions of his "Timeless Decision Theory", however refer often to "the computation" as distinguished from "its input" in this "instantiation" as if there was some unique way to divide a physical state into these two components. For example:
The one-sentence version is: Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.
The three-sentence version is: Factor your uncertainty over (impossible) possible worlds into a causal graph that includes nodes corresponding to the unknown outputs of known computations; condition on the known initial conditions of your decision computation to screen off factors influencing the decision-setup; compute the counterfactuals in your expected utility formula by surgery on the node representing the logical output of that computation.
And also:
Timeless decision |
e6d795de-fa6e-48d7-aee3-ca7a0ff3d246 | StampyAI/alignment-research-dataset/arbital | Arbital | Fundamental Theorem of Arithmetic
summary: The Fundamental Theorem of Arithmetic is a statement about the [natural numbers](https://arbital.com/p/45h); it says that every natural number may be decomposed as a product of [primes](https://arbital.com/p/4mf), and this expression is unique up to reordering the factors. It is an extremely important theorem, and it is the basis of the field of [https://arbital.com/p/-number_theory](https://arbital.com/p/-number_theory).
The Fundamental Theorem of Arithmetic states that every [https://arbital.com/p/-45h](https://arbital.com/p/-45h) (greater than or equal to $2$) may be expressed as a product of [prime numbers](https://arbital.com/p/4mf), and the product is unique up to reordering.
This theorem is one of the main reasons $1$ is not considered to be prime: indeed, if it were prime then $3 \times 5$ could be factorised into primes as $3 \times 5 \times 1$, but these would be two *different* factorisations of the number $15$.
The FTA's statement is much cleaner if $1$ is not thought of as prime.
In a more general context, the FTA says precisely that the [ring](https://arbital.com/p/3gq) $\mathbb{Z}$ is a [https://arbital.com/p/-unique_factorisation_domain](https://arbital.com/p/-unique_factorisation_domain); there is therefore a much more abstract proof than the elementary one we will present further on in this article:
- $\mathbb{Z}$ is a [Euclidean domain](https://arbital.com/p/euclidean_domain) (with Euclidean function given by "take the modulus");
- Therefore $\mathbb{Z}$ is a [https://arbital.com/p/-5r5](https://arbital.com/p/-5r5) ([proof](https://arbital.com/p/euclidean_domain_is_pid));
- And principal ideal domains are unique factorisation domains ([proof](https://arbital.com/p/principal_ideal_domain_has_unique_factorisation)).
# Examples
- The FTA does not talk about $0$ or $1$; this is because these numbers are conventionally considered neither prime nor composite.
- Even if we haven't bothered to calculate $17 \times 23 \times 23$, we can immediately say that it is odd. Indeed, by the FTA, $2$ cannot divide $17 \times 23^2$, because the complete list of prime factors of this number is $\{ 17, 23, 23\}$, and $2$ is prime.
# Proof
Timothy Gowers has an [excellent article](https://gowers.wordpress.com/2011/11/18/proving-the-fundamental-theorem-of-arithmetic/) about the proof of the FTA.
The FTA consists of two parts: we must show that every number can be decomposed as primes, and also that every number can be decomposed *uniquely*.
## Every number can be written as a product of primes
This part is the easier, and it uses [https://arbital.com/p/-strong_induction](https://arbital.com/p/-strong_induction) (a version of [proof by induction](https://arbital.com/p/5fz)).
Clearly $2$ can be written as a product of primes, because it *is* prime; so it can be written as just itself.
Now, for $n$ bigger than $2$, if $n$ is prime then we are immediately done (just write it as itself).
Otherwise, $n$ is not prime, so it can be written as $a \times b$, say, with $a$ and $b$ both less than $n$.
But by the inductive hypothesis, we can express $a$ and $b$ each as products of primes, so we can express $n$ as the combined product of the two sets of factors of $a$ and $b$.
%%hidden(Example):
Consider $n = 1274$.
We have two options: $n$ is prime or $n$ is composite.
It turns out that $n$ is actually equal to $49 \times 26$, so it's not prime.
By the inductive hypothesis, we can factor $49$ as a product of primes (indeed, it's $7^2$); and we can factor $26$ as a product of primes (indeed, it's $2 \times 13$); so we can factor $1274$ as $2 \times 7^2 \times 13$.
(If you like, you can view this as just "start again at $49$ instead of at $1274$, and spit out what you get; then start again at $26$ instead of $1274$, and spit out what you get; and finally combine the spittings-out"; no mention of a spooky "inductive hypothesis" at all.)
Note that at this point, we haven't any guarantee at all that this is the *only* prime factorisation; all we assert so far is that it is *a* prime factorisation.
%%
## Every number can be decomposed *uniquely* as a product of primes
For this, we will need a basic (but non-obvious and important) fact about the behaviour of prime numbers: [Euclid's lemma](https://arbital.com/p/5mh), which states that if a prime $p$ divides a product $ab$, then $p$ divides at least one of $a$ and $b$.
We will work by induction on $n$ again.
If $n = 2$ then the result is immediate: a number can only be divided by numbers which are not larger than it, but $1$ and $2$ are the only such numbers.
Suppose $n$ can be written as both $p_1 p_2 \dots p_r$ and $q_1 q_2 \dots q_s$, where each $p_i$ and $q_j$ is prime (but there might be repeats: maybe $p_1 = p_2 = q_3 = q_7$, for instance).
We need to show that $r=s$ and that (possibly after reordering the lists) $p_i = q_i$ for each $i$.
Certainly $p_1$ divides $n$, because it divides $p_1 p_2 \dots p_r$.
Therefore it divides $q_1 q_2 \dots q_s$, and hence it divides one of $q_1$ or $q_2 \dots q_s$, by Euclid's lemma.
Therefore either it divides $q_1$, or it divides one of $q_2$ or $q_3 \dots q_s$; by induction, $p_1$ divides some $q_i$.
Because we don't care about the ordering of the list, let us reorder the list if necessary so that in fact $i=1$: put the factor $q_i$ at the start of the list.
Now, $q_1$ is prime, and $p_1$ is not equal to $1$ but it divides $q_1$; hence $p_1 = q_1$.
Dividing through by $p_1$, then, we obtain $p_2 \dots p_r = q_2 \dots q_s$, a strictly smaller number; so by the inductive hypothesis, $r-1 = s-1$ (so $r=s$) and the unordered list of $p_i$ is the same as the unordered list of $q_i$ for $i \geq 2$.
This proves the theorem.
# Why is this not obvious?
Timothy Gowers has a [good piece](https://gowers.wordpress.com/2011/11/13/why-isnt-the-fundamental-theorem-of-arithmetic-obvious/) on why this result is not just obvious.
Of course, what is "obvious" and what is not "obvious" varies heavily depending on who you're talking to.
For this author personally, the true reason it's not obvious is Gowers's reason number 4: because there are very similar structures which do *not* have the property of unique factorisation.
(Gowers uses $\mathbb{Z}[https://arbital.com/p/\sqrt{-5}](https://arbital.com/p/\sqrt{-5})$; on the [page on irreducibles](https://arbital.com/p/5m1), we show that $\mathbb{Z}[https://arbital.com/p/\sqrt{-3}](https://arbital.com/p/\sqrt{-3})$ could be used just as well.) |
a70dcff0-4b8d-4502-80c5-d89d815a29ad | StampyAI/alignment-research-dataset/arbital | Arbital | Blue oysters
You're collecting exotic oysters in Nantucket, and there are two different bays you could harvest oysters from. In both bays, 11% of the oysters contain valuable pearls and 89% are empty. In the first bay, 4% of the pearl-containing oysters are blue, and 8% of the non-pearl-containing oysters are blue. In the second bay, 13% of the pearl-containing oysters are blue, and 26% of the non-pearl-containing oysters are blue. You created a special device that helps you find blue oysters. Would you rather harvest blue oysters from the first bay or the second bay?
You're encouraged to try to solve this problem yourself, and to refrain from looking at the answer. The answer can be found in [https://arbital.com/p/1zh](https://arbital.com/p/1zh). |
680d6f93-9c14-4ec1-b2eb-0b7bf975bd67 | trentmkelly/LessWrong-43k | LessWrong | SI and Social Business
I asked this question for the Q&A:
> Non-profit organizations like SI need robust, sustainable resource strategies. Donations and grants are not reliable. According to my university Social Entrepreneurship course, social businesses are the best resource strategy available. The Singularity Summit is a profitable and expanding example of a social business. Is SI planning on creating more social businesses (either related or unrelated to the organization's mission) to address long-term funding needs?
I also recently asked this of Luke for his feedback post before the Q&A was up, and he mentioned in his response that SI is continuing to grow the Summit brand in a multifarious manner. Luke also asked me for additional social business ideas, citing a lack of staff working on the issue.
Less Wrong's collective intelligence trumps my own, so I'm fielding it to you. I do have a few ideas, but I'll hold off on proposing solutions at first. I find that this is a fascinating and difficult thought experiment in addition to its usefulness both for SI and as practice in recognizing opportunities. |
63a8ced9-8467-42b1-bee5-336e70882944 | StampyAI/alignment-research-dataset/arbital | Arbital | Two independent events: Square visualization
$$
\newcommand{\true}{\text{True}}
\newcommand{\false}{\text{False}}
\newcommand{\bP}{\mathbb{P}}
$$
summary:
$$
\newcommand{\true}{\text{True}}
\newcommand{\false}{\text{False}}
\newcommand{\bP}{\mathbb{P}}
$$
Say $A$ and $B$ are independent [events](https://arbital.com/p/event_probability), so $\bP(A, B) = \bP(A)\bP(B).$ Then we can draw their joint probability distribution using the using the [square visualization](https://arbital.com/p/496) of probabilities:
<img src="http://i.imgur.com/0off1db.png" width="312" height="272">
This is what independence looks like, using the [square visualization](https://arbital.com/p/496) of probabilities:
<img src="http://i.imgur.com/0off1db.png" width="390" height="338">
We can see that the [events](https://arbital.com/p/event_probability) $A$ and $B$ don't interact; we say that $A$ and $B$ are *independent*. Whether we look at the whole square, or just the red part of
the square where $A$ is true, the probability of $B$ stays the same. In other words, $\bP(B \mid A) = \bP(B)$. That's what we mean by independence: the
probability of $B$ doesn't change if you condition on $A$.
Our square of probabilities can be generated by multiplying together the probability of $A$ and the probability of $B$:
<img src="http://i.imgur.com/pjwcoTn.png" width="640" height="275">
This picture demonstrates another way to define what it means for $A$ and $B$ to be independent:
$$\bP(A, B) = \bP(A)\bP(B)\ .$$
In terms of factoring a joint distribution
--
Let's contrast independence with non-independence. Here's a picture of two ordinary, non-independent events $A$ and $B$:
<img src="http://i.imgur.com/6ZHSR0l.png" width="529" height="327">
(If the meaning of this picture isn't clear, take a look at [https://arbital.com/p/496](https://arbital.com/p/496).)
We have the red blocks for $\bP(A)$ and the blue blocks for $\bP(\neg A)$ lined up in columns. This means we've [factored](https://arbital.com/p/factoring_probability) our
probability distribution using $A$ as the first factor:
$$\bP(A,B) = \bP(A) \bP(B \mid A)\ .$$
We could just as well have factored by $B$ first: $\bP(A,B) = \bP(B) \bP( A \mid B)\ .$ Then we'd draw a picture like this:
<img src="http://i.imgur.com/O0RNzxw.png" width="390" height="390">
Now, here again is the picture of [two independent events](https://arbital.com/p/4cf) $A$ and $B$:
<img src="http://i.imgur.com/0off1db.png" width="390" height="338">
In this picture, there's red and blue lined-up columns for $\bP(A)$ and $\bP(\neg A)$, and there's *also* dark and light lined-up rows for $\bP(B)$ and
$\bP(\neg B)$. It looks like we somehow [factored](https://arbital.com/p/factoring_probability) our probability distribution $\bP$ using both $A$ and
$B$ as the first factor.
In fact, this is exactly what happened: since $A$ and $B$ are [independent](https://arbital.com/p/4cf), we have that $\bP(B \mid A) = \bP(B)$. So the diagram
above is actually factored according to $A$ first: $\bP(A,B) = \bP(A) \bP(B \mid A)$. It's just that $\bP(B \mid A)= \bP(B) = \bP(B \mid \neg A)$, since $B$
is independent from $A$. So we don't need to have different ratios of dark to light (a.k.a. conditional probabilities of $B$) in the left and right columns:
<img src="http://i.imgur.com/Nfiuz3d.png" width="618" height="420">
In this visualization, we can see what happens to the probability of $B$ when you condition on $A$ or on $\neg A$: it doesn't change at all. The ratio of
\[area where $B$ happens\](https://arbital.com/p/the) to \[whole area\](https://arbital.com/p/the), is the same as the ratio $\bP(B \mid A)$ where we only look at the area where $A$ happens, which is the
same as the ratio $\bP(B \mid \neg A)$ where we only look at the area where $\neg A$ happens. The fact that the probability of $B$ doesn't change when we
condition on $A$ is exactly what we mean when we say that $A$ and $B$ are independent.
The square diagram above is *also* factored according to $B$ first, using $\bP(A,B) = \bP(B) \bP(A \mid B)$. The red / blue ratios are the same in both rows
because $\bP(A \mid B) = \bP(A) = \bP(A \mid \neg B)$, since $A$ and $B$ are independent:
<img src="http://i.imgur.com/DfDljOL.png" width="636" height="468">
We couldn't do any of this stuff if the columns and rows didn't both line up. (Which is good, because then we'd have proved the false statement that any two
events are independent!)
In terms of multiplying marginal probabilities
---
Another way to say that $A$ and $B$ are independent variables %note:We're using the [equivalence](https://arbital.com/p/event_variable_equivalence) between [https://arbital.com/p/event_probability
events](https://arbital.com/p/event_probability
events) and [binary variables](https://arbital.com/p/binary_random_variable).% is that for any truth values $t_A,t_B \in \{\true, \false\},$
$$\bP(A = t_A, B= t_B) = \bP(A = t_A)\bP(B = t_B)\ .$$
So the [joint probabilities](https://arbital.com/p/1rh) for $A$ and $B$ are computed by separately getting the probability of $A$ and the probability of $B$, and then
multiplying the two probabilities together. For example, say we want to compute the probability $\bP(A, \neg B) = \bP(A = \true, B = \false)$. We start with
the [marginal probability](https://arbital.com/p/marginal_probability) of $A$:
<img src="http://i.imgur.com/ZnxqSMo.png" width="250" height="300">
and the probability of $\neg B$:
<img src="http://i.imgur.com/txRlJyE.png" width="335" height="240">
and then we multiply them:
<img src="http://i.imgur.com/GOOnTuF.png" width="440" height="390">
We can get all the joint probabilities this way. So we can visualize the whole joint distribution as the thing that you get when you multiply two independent
probability distributions together. We just overlay the two distributions:
<img src="http://i.imgur.com/X4FSciB.png" width="532" height="816">
To be a little more mathematically elegant, we'd use the [topological product of two spaces](https://arbital.com/p/topological_product) shown earlier to draw the joint distribution
as a product of the distributions of $A$ and $B$:
<img src="http://i.imgur.com/pjwcoTn.png" width="640" height="275"> |
a88ff9ce-86f4-4ca5-b45f-a02cd9c59571 | trentmkelly/LessWrong-43k | LessWrong | Conflict in Kriorus becomes hot today, updated, update 2
For a long time, Russian cryocompany Kriorus suffered from a conflict between two groups of owners. One is led by Danila Medvedev and the other is led by his former wife Valeria Pride. Long story short, Valeria took neuropatients (including my mother's brain) 1.5 years ago and moved them to an undisclosed location and built a new building for the company. Large full-body patients remained in a building controlled by Danila.
Today people controlled by Valeria arrived at the full-body-patients building and took large containers with liquid nitrogen and full-bodies. However, the other side called the police (or, by other accounts, police was naturally attracted to a strange track with smoke and dead bodies), and the track was stopped not far away from the initial location. By latest information, the vacuum containers have been returned to their original place but could suffer some damage.
I also remind you that both sides have two different narratives about who is the villain and who has legal rights.
You could see photos from the event in Telegram messenger (please remind me how to embed them here): https://t.me/transhumanisminheart/6261
I will update this post when I will have more information.
UPDATE: September, 8: No arrests so far, containers are back and will be refilled with nitrogen today, but 3 bodies were separated and taken by Valeria. No data about possible damage. Both sides agree that the bodies remained at cryogenic temperatures inside containers during the ordeal.
Update 2: bodies remain in original location, vacuum duars are leaking. No information about nitrogen refill. Danila told media that bodies could explode during the transportation. Police is working on the spot. Given recent gas explosion in nearby town and police envolvment I expect that bodies will be warm in two days. But most neuropatients are intact in other location I hope. |
c4b2f996-4874-4cd4-af5c-a2c4a2790d3b | trentmkelly/LessWrong-43k | LessWrong | Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research
Cross-posted on the EA Forum.
Executive Summary
We’re excited to introduce Convergence Analysis - a research non-profit & think-tank with the mission of designing a safe and flourishing future for humanity in a world with transformative AI. In the past year, we’ve brought together an interdisciplinary team of 10 academics and professionals, spanning expertise in technical AI alignment, ethics, AI governance, hardware, computer science, philosophy, and mathematics. Together, we’re launching three initiatives focused on conducting Scenario Research, Governance Recommendations Research, and AI Awareness.
Our programs embody three key elements of our Theory of Change and reflect what we see as essential components of reducing AI risk: (1) understanding the problem, (2) describing concretely what people can do, and (3) disseminating information widely and precisely. In some more detail, they do the following:
* Scenario Research: Explore and define potential AI scenarios - the landscape of relevant pathways that the future of AI development might take.
* Governance Recommendations Research: Provide concrete, detailed analyses for specific AI governance proposals that lack comprehensive research.
* AI Awareness: Inform the general public and policymakers by disseminating important research via books, podcasts, and more.
In the next three months, you can expect to see the following outputs:
* Convergence’s Theory of Change: A report detailing an outcome-based, high-level strategic plan on how to mitigate existential risk from TAI.
* Research Agendas for our Scenario Research and Governance Recommendations initiatives.
* 2024 State of the AI Regulatory Landscape: A review summarizing governmental regulations for AI safety in 2024.
* Evaluating A US AI Chip Registration Policy: A research paper evaluating the global context, implementation, feasibility, and negative externalities of a potential U.S. AI chip registry.
* A series of articles on AI scenarios high |
b018a35d-0d14-4905-9154-988373589f2f | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [Event] Weekly Alignment Research Coffee Time (05/24)
Just like every Monday now, researchers in AI Alignment are invited for a coffee time, to talk about their research and what they're into.
Here is the [link](http://garden.lesswrong.com?code=BZN3&event=weekly-alignment-research-coffee-time-1).
And here is the [everytimezone time](https://everytimezone.com/s/b1ceec74).
Note that the link to the walled garden now only works for AF members. Anyone who wants to come but isn't an AF member needs to go by me. I'll broadly apply the following criteria for admission:
* If working in a AI Alignment lab or funded for independent research, automatic admission
* If recommended by AF member, automatic admission
* Otherwise, to my discretion
I prefer to not allow people who might have been interesting but who I'm not sure will not derail the conversation, because this is supposed to be the place where AI Alignment researchers can talk about their current research without having to explain everything.
See you then! |
990aa815-641b-4549-b747-14007569da56 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | In favour of a selective CEV initial dynamic
Note: I appreciate that at this point CEV is just a sketch. However, it’s an interesting topic and I don’t see that there’s any harm in discussing certain details of the concept as it stands.
**1. Summary of CEV**
Eliezer Yudkowsky describes CEV - Coherent Extrapolated Volition – [here](http://intelligence.org/upload/CEV.html). Superintelligent AI is a powerful genie, and genies [can’t be trusted](/lw/ld/the_hidden_complexity_of_wishes); Friendly AI requires the AI to take as input the entire value computation of at least one human brain, because the failure to take into consideration a relatively small element of the human value set, even whilst optimising in several other respects, is likely to be a disaster. CEV is Yudkowsky’s attempt at outlining a Friendly AI volition-extrapolating dynamic: a process in which the AI takes human brainstates, combines this with its own vast knowledge, and outputs suitable actions to benefit humans.
Note that extrapolating volition is not some esoteric invention of Eliezer’s; it is a normal human behaviour. To use his example: we are extrapolating Fred’s volition (albeit with short *distance*) if given two boxes A and B only one of which contains a diamond that Fred desires, we give him box B when he has asked us to give him box A, on the basis that he incorrectly believes that box A contains the diamond whereas we know that in fact it is in box B.
Yudkowsky roughly defines certain quantities that are likely to be relevant to the functioning of the CEV dynamic:
*Spread* describes the case in which the extrapolated volition is unpredictable. Quantum randomness or other computational problems may make it difficult to say with strong confidence (for example) whether person A would like to be given object X tomorrow – if the probability computed is 30%, rather than 0.001%, there is significant spread in this case.
*Muddle* is a measure of inconsistency. For example person A might resent being given object Y tomorrow, but also resent not being given object Y if it isn’t given to him tomorrow.
*Distance* measures the degree of separation between one’s current self and the extrapolated self, i.e. how easy it would be to explain a given instance of extrapolated volition to someone. In the case of Fred and the diamond the distance is very short, but superintelligent AI could potentially compute Fred’s extrapolated volition to such a distance that it seems incomprehensible to Fred.
To quote Yudkowsky (I assume that the following remains approximately true today):
>
> As of May 2004, my take on Friendliness is that the initial dynamic should implement the *coherent extrapolated volition* of humankind.
>
>
> In poetic terms, our *coherent extrapolated volition* is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.
>
>
>
Yudkowsky adds that “it should be easier to counter coherence than to create coherence” where coherence refers to strong, un-muddled and un-spread agreement between multiple individual volitions with no strong disagreement from any others; and that “the initial dynamic for CEV should be conservative about saying ‘yes’ and listen carefully for ‘no’” – the superintelligent optimisation process should seek more consensus before steering humanity into narrow slices of the future, relative to the degree of consensus it needs before steering humanity away from some particular narrow slice of the future (about which it has been warned by elements of the CEV).
CEV is an *initial dynamic*; it doesn’t necessarily have to be the perfect dynamic of human volition for Friendly AI, but the dynamic should be good enough that it allows the AI to extrapolate an optimal dynamic of volition to which we can then switch over if desirous. “The purpose of CEV as an initial dynamic is not to be the solution, but to ask what solution we want”.
Also, “If our extrapolated volitions say we don’t want our extrapolated volitions manifested, the system replaces itself with something else we want, or else...undergoes an orderly shutdown”.
Finally, Yudkowsky suggests that as a safeguard, a *last judge* of impeccable judgement could be trusted with putting the seal of approval on the output of the CEV dynamic; if something seems to have gone horribly wrong, beyond mere future shock, he can stop the output from being enacted.
**2. CEV of all humankind vs. CEV of a subset of humankind**
Let us accept that coherent extrapolated volition, in general, is the best (only?) solution that anyone has provided to the problem of AI friendliness. I can see four ways of implementing a CEV initial dynamic:
* Implement a single CEV dynamic incorporating all humans, the output of which affects everyone.
* Implement an individual CEV dynamic for each individual human.
* Implement a single CEV dynamic incorporating one human only, the output of which affects everyone.
* Implement a single CEV dynamic incorporating a limited subset of humans, the output of which affects everyone.
As Yudkowsky discusses in his document, whilst the second option might perhaps be a reasonable final dynamic (who knows?) it isn’t a suitable initial dynamic. This is because if there is more than one CEV running, the way in which the CEV dynamic works in a general sense cannot be re-written without someone’s individual CEV being violated, and the idea behind the initial dynamic is that a superior dynamic may develop from it.
The third option is obviously sub-optimal, because of the danger that any individual person might be a psychopath – a person whose values are in general markedly hostile to other humans. Knowing more and thinking smarter might lead a given psychopath’s more humane values to win out, but we can’t count on that. In a larger group of people, the law of large numbers applies and the risk diminishes.
Yudkowsky favours the first option, a CEV of all humankind; I am more in favour of the fourth option, an initial CEV dynamic incorporating the minds of only a certain subset of humans. It would like to compare these two options on six relevant criteria:
**I Schelling points [edit: apologies for the questionable use of a game theory term for the sake of concision]**
Clearly, incorporating the minds of all humankind into the intial dynamic is a Schelling point – a solution that people would naturally generate for themselves in the absence of any communication. So full marks to a universal CEV on this criterion.
Answer quickly: what specific group of people – be that a group of people who meet each other regularly, or a group who are distinguished in some other way – would you nominate, if you had to choose a certain subset of minds to participate in the initial dynamic?
What springs to my mind is Nobel Prize winners, and I suspect that this too is a Schelling point. This seems like a politically neutral selection of distinguished human beings (particularly if we exclude the Peace Prize) of superlative character and intellect. Whether some people would object strongly to this selection is one question, but certainly I expect that many humans, supposing they were persuaded for other reasons that the most promising initial dynamic is one incorporating a small group of worthy humans only, would consider Nobel Prize winners to be an excellent choice to rally around.
Many other groups of minds, for example the FAI programming team themselves, would of course seem too arbitrary to gather sufficient support for the idea.
**II Practicality of implementation**
One problem with a universal CEV that I have never seen discussed is how feasible it would actually be to take extremely detailed recordings of the brain states of all of the humans on Earth. All of the challenges involved in creating Friendly AI are of course extreme. But ceteris paribus, one additional extremely challenging problem is one too many.
A prerequisite for the creation of superintelligent AI must surely be the acquisition of detailed knowledge of the workings of the human brain. However, our having the ability to scan one human brain in extreme detail does not imply that it is economically feasible to scan 7 billion or more human brains in the same way. It might well come to pass that the work on FAI is complete, but we still lack the means to actually collect detailed knowledge of all existing human minds. A superintelligent AI would develop its own satisfactory means of gathering information about human brains with minimal disruption, but as I understand the problem we need to input all human minds into the AI before switching it on and using it to do anything for us.
Even if the economic means do exist, consider the social, political and ideological obstacles. How do we deal with people who don’t wish to comply with the procedure?
Furthermore, let us suppose that we manage to incorporate all or almost all human minds into the CEV dynamic. Yudkowsky admits the possibility that the thing might just shut itself down when we run it – and he suggests that we shouldn’t alter the dynamic too many times in an attempt to get it to produce a reasonable-looking output, for fear of prejudicing the dynamic in favour of the programmers’ preferences and away from humanity’s CEV.
It would be one thing if this merely represented the (impeccably well-intentioned) waste of a vast amount of money, and the time of some Nobel Prize winners. But if it also meant that the economic, political and social order of the entire world had been trampled over in the process of incorporating all humans into the CEV, the consequences could be far worse. Enthusiasm for a second round with a new framework at some point in the future might be rather lower in the second scenario than in the first.
**III Safety**
In his document on CEV, Yudkowsky states that there is “a real possibility” that (in a universal CEV scenario) the majority of the planetary population might not fall into a niceness attractor when their volition is extrapolated.
The small group size of living scientific Nobel Prize winners (or any other likely subset of humans) poses certain problems for a selective CEV that the universal CEV lacks. For example, they might all come under the influence of a single person or ideology that is not conducive to the needs of wider humanity.
On the other hand, given their high level of civilisation and the quality of character necessary for a person to dedicate his life to science, ceteris paribus I’d be more confident of Nobel Prize winners falling into a niceness attractor in comparison to a universal CEV. How much trust are we willing to place in the basic decency of humankind – to what extent is civilisation necessary to create a human who would not be essentially willing to torture innocent beings for his own gratification? Perhaps by the time humanity is technologically advanced enough to implement AGI we’ll know more about that, but at our current state of knowledge I see little reason to give humans in general the benefit of the doubt.
Yudkowsky asks, “Wouldn’t you be terribly ashamed to go down in history as having meddled...because you didn’t trust your fellows?” Personally, I think that [shutting up and multiplying](http://wiki.lesswrong.com/wiki/Shut_up_and_multiply) requires us to make our best estimate of what is likely to benefit humankind (including future humans) the most, and run with that. I’d not be ashamed if in hindsight my estimate was wrong, since no-one can be blamed for having imperfect knowledge.
**IV Aesthetic standards**
In his document, Yudkowsky discusses the likelihood of certain volitions cancelling one another out whilst others add together; metaphorically speaking, “love obeys Bose-Einstein statistics while hatred obeys Fermi-Dirac statistics”. This supports the idea that extrapolating volition is likely to produce at least some useful output – i.e. having minimal spread and muddle, ideally at not too far a distance.
In a universal CEV this leads us to believe that Pakistani-Indian mutual hatred, for example, cancels out (particularly since coherence is easier to counter than to create) whereas their mutual preferences form a strong signal.
The problem of aesthetic standards concerns the *quality* of the signal that might cohere within the CEV. Love seems to be a strong human universal, and so we would expect love to play a strong role in the output of the initial dynamic. On the other hand, consider the difference in intelligence and civilisation between the bulk of humanity and a select group such as Nobel Prize winners. Certain values shared by such a select group, for example the ability to take joy in the merely real, might be lost amidst the noise of the relatively primitive values common to humanity as a whole.
Admittedly, we can expect “knowing more” and “growing up farther together” to improve the quality of human values in general. Once an IQ-80 tribesman gains more knowledge and thinks faster, and is exposed to rational memes, he might well end up in exactly the same place as the Nobel Prize winners. But the question is whether it’s a good idea to rely on a superb implementation of these specifications in an initial dynamic, rather than taking out the insurance policy of starting with substantially refined values in the first place – bearing in mind what is at stake.
A worst case scenario, assuming that other aspects of the FAI implementation work as planned, is that the CEV recommends an ignoble future for humanity – for example [orgasmium](http://wiki.lesswrong.com/wiki/Orgasmium) – which is not evil, but is severely lacking in aesthetic qualities that might have come out of a more selective CEV. Of course, the programmers or the last judge should be able to veto an undesirable output. But if (as Yudkowsky recommends) they only trust themselves to tweak the dynamic a maximum of three times in an effort to improve the output before shutting it off for good if the results are still deemed unsatisfactory, this does not eliminate the problem.
**V Obtaining a signal**
It seems to me that the more muddle and spread there is within the CEV, the greater the challenge that exists in designing an initial dynamic that outputs anything whatsoever. Using a select group of humans would ensure that these quantities are minimised as far as possible. This is simply because they are likely to be (or can be chosen to be) a relatively homogeneous group of people, who have relatively few directly conflicting goals and possess relatively similar memes.
Again, why make the challenge of FAI even more difficult than it needs to be? Bear in mind that failure to implement Friendly AI increases the likelihood of uFAI being created at some point.
**VI Fairness**
In his document on CEV, Yudkowsky does go some way to addressing the objections that I have raised. However, I do not find him persuasive on this subject:
>
> Suppose that our coherent extrapolated volition does decide to weight volitions by wisdom and kindness – a suggestion I strongly dislike, for it smacks of disenfranchisement. It don’t think it wise to tell the initial dynamic to look to whichever humans judge themselves as wiser and kinder. And if the programmers define their own criteria of “wisdom” and “kindness” into a dynamic’s search for leaders, that is taking over the world by proxy. You wouldn’t want the al-Qaeda programmers doing that, right?
>
>
>
Firstly, the question of disenfranchisement. As I suggested earlier, this constitutes a refusal to shut up and multiply when dealing with a moral question. “Disenfranchisement” is a drop in the ocean of human joy and human suffering that is at stake when we discuss FAI. As such, it is almost completely irrelevant as an item of importance in itself (of course there are other consequences involved in the choice between universal CEV and a degree of disenfranchisement – but they have been discussed already, and are beside the point of the strictly moral question.) This is especially the case since we are only talking about the initial dynamic here, which may well ultimately develop into a universal CEV.
Secondly, there is the mention of al-Qaeda. In the context of earlier mentions of al-Qaeda programmers in the document on CEV, Yudkowsky appears to be positing a “veil of ignorance” – we should behave in creating the FAI as we would want al-Qaeda programmers to behave. This is strange, because in a similar veil of ignorance problem – the [modesty argument](/lw/gr/the_modesty_argument) – Robin Hanson argued that we should act as though there is a veil of ignorance surrounding whether it is ourselves or someone else who is wrong in some question of fact, whereas Eliezer argued against the idea.
Personally I have little regard for veil of ignorance arguments, on the basis that there is no such thing as a veil of ignorance. No, I would not want the al-Qaeda programmers to nominate a group of humans (presumably Islamic fanatics) and extrapolate their volition – I would rather they used all of humanity. But so what? I am quite happy using my own powers of judgement to decide that al-Qaeda’s group is inferior to humanity as a whole, but Nobel Prize winners (for example) are a better choice than humanity as a whole.
As for “taking over the world by proxy”, again SUAM applies.
**3. Conclusion**
I argue that a selective CEV incorporating a fairly small number of distinguished human beings may be preferable to a CEV incorporating all of humanity. I argue that the practical difficulty of incorporating all humans into the CEV in the first place is unduly great, and that the programming challenge is also made more difficult by virtue of this choice. I consider any increase in the level of difficulty in the bringing into existence of FAI to be positively dangerous, on account of the fact that this increases the window of time available for unscrupulous programmers to create uFAI.
Setting aside the problem of getting the initial dynamic to work at all, I also consider it to be possible for the output of a selective CEV to be more desirable to the average human than the output of a universal CEV. The initial dynamic is the creation of human programmers, who are fallible in comparison to a superintelligent AI; their best attempt at creating a universal CEV dynamic may lead to the positive values of many humans being discarded, lost in the noise.
In other words, the CEV initial dynamic shouldn't be regarded as discovering what a group of people most desire collectively "by definition" - it is imperfect. If a universal CEV implementation is more difficult for human programmers to do well than a selective CEV, then a selective CEV might not only extrapolate the desires of the group in question more accurately, but also do a better job of reflecting the *most effectively*extrapolated desires of humanity as a whole.
Furthermore, desirability of the CEV ouput to the average human in existence today should be weighed against the desires of (for example) sentient human uploads created in a post-singularity scenario. Shutting up and multiplying demands that FAI programmers and other people of influence set aside concerns about being “jerks” when estimating the probability that extrapolating the volition of humanity en masse is the best way of meeting their own moral standards. |
557c7325-08e1-41c4-a53c-0460bcd68f05 | trentmkelly/LessWrong-43k | LessWrong | Breaking down the training/deployment dichotomy
TL;DR: Training and deployment of ML models differ along several axes, and you can have situations that are like training in some ways but like deployment in others. I think this will become more common in the future, so it's worth distinguishing which properties of training/deployment any given argument relies on.
The training/deployment view on ML
We usually think of the lifecycle of an ML model as a two-phase process: first, you train the model, and then (once you're satisfied with its performance), you deploy it to do some useful task. These two phases differ in several ways:
* During training, you're modifying your model's parameters (e.g. using SGD), whereas they're often fixed during deployment.
* Mistakes your model makes during training are not a big deal (and even expected initially—that's why you need to train the model). But mistakes during deployment can be costly.
* A reason for this can be that your model is sandboxed during training in some way, whereas it's interacting with the real world in deployment. Another one would simply be that you're not using the model's outputs for anything important during training.
* The deployment distribution might differ from the training distribution. Additionally, the deployment distribution could change over time in ways that you can't foresee (the training distribution is often fixed, or you at least have a sense of the ways in which it's changing).
Exceptions to the training/deployment archetypes
Here's a table summarizing typical properties of the training and deployment phase from the previous section:
Clearly, these don't always hold. Some examples:
* You might update parameters even when the model is deployed (e.g. continuously fine-tuning to deal with distributional shift, or to incorporate new data you got). How much parameters are updated also isn't constant during training if you're using a learning rate scheduler.
* Failures during training may sometimes be costly too. A mundane example w |
6c66facf-dee5-4a06-8116-921ca72e3d56 | trentmkelly/LessWrong-43k | LessWrong | On the Galactic Zoo hypothesis
Recently, I was reading some arguments about Fermi paradox and aliens and so on; also there was an opinion among the lines of "humans are monsters and any sane civilization avoids them, that's why Galactic Zoo". As implausible as it is, but I've found one more or less sane scenario where it might be true.
Assume that intelligence doesn't always imply consciousness, and assume that evolution processes are more likely to yield intelligent, but unconscious life forms, rather than intelligent and conscious. For example, if consciousness is resource-consuming and otherwise almost useless (as in Blindsight).
Now imagine that all the alien species evolved without consciousness. Being an important coordination tool, their moral system takes that into account -- it relies on a trait that they have -- intelligence, rather than consciousness. For example, they consider destroying anything capable of performing complex computations immoral.
Then human morality system would be completely blind to them. Killing such an alien would be no more immoral, then, say, recycling a computer. So, for these aliens, human race would be indeed monstrous.
The aliens consider extermination of an entire civilization immoral, since that would imply destroying a few billions of devices, capable of performing complex enough computations. So they decide to use their advanced technology to render their civilizations invisible for human scientists. |
12bf15f9-ee91-40d1-a34f-1b2fdc7b4f9c | trentmkelly/LessWrong-43k | LessWrong | Shared interests vs. collective interests
Suppose that I, a college student, found a student organization—a chapter of Students Against a Democratic Society, perhaps. At the first meeting of SADS, we get to talking, and discover, to everyone’s delight, that all ten of us are fans of Star Trek.
This is a shared interest.
Shared interests
A shared interest—in the way I am using the term—is nothing more than what it sounds like: an interest (in the broad sense of the word) that happens, for whatever reason, to be shared among all members of a group.
The distinction I want to draw is between a shared interest (of a group) and a collective interest (of a group). The former is a superset of the latter; all collective interests are, by definition, shared interests; but not all shared interests are collective interests.
Collective interests
What is a collective interest?
Well, suppose that I found another student organization (extracurricular activities look great on a résumé). This one is a Star Trek fan club. At the first meeting of Campus Trekkies United, we discover, to no one’s surprise, that all fifteen of us are fans of Star Trek.
… well, of course we’re all fans of Star Trek. That’s why we’re in the fan club in the first place! Anyone who’s not a fan, has no reason to join the fan club. And so: Star Trek fandom is a collective interest of Campus Trekkies United.
A collective interest is an interest that is shared by every member of a group in virtue of being a member of that group. Anyone who does not share that interest, will not be a group member.[1] And thus, by modus tollens: anyone who is a member of the group, will share that interest. It is guaranteed that every member of the group will share that interest.
Details & implications
Several important consequences follow from this.
Preservation of interests
Unlike a collective interest, a shared interest is not at all guaranteed to stay shared among all group members. Nothing stops someone from joining the Students Against a Democratic Socie |
e5f84b8a-5a80-495a-a11e-f693e9ad5ba4 | trentmkelly/LessWrong-43k | LessWrong | Covid-19: Comorbidity
We’ve all seen statistics that most people who die of Covid-19 have at least one comorbidity. They also almost all have the particular comorbidity of age. The biggest risk, by far, is being old. The question that I don’t see being properly asked anywhere (I’d love for this post to be unnecessary because there’s a better one) is: What is your chance of death from Covid-19 if infected, conditional on which if any comorbidities you have? Which ones matter and how much? If you don’t have any, how much better off are you than your age group in general?
Thus, most people are looking at the age chart, without adjusting for their health status, unless they have an obvious big issue, in which case they adjust up. Which leads to an incorrect overall answer. That isn’t obviously a bad thing in terms of resulting behavior in practice, but that doesn’t mean we shouldn’t attempt to figure out the answer.
I used New York State’s information on deaths to get the rates of most of the major comorbidity candidates by age, and various Google-fu combined with wild mass approximation and fitting to different age groupings (not guessing, but not fully not guessing either) to get approximate population prevalence data. What matters?
A key question will always be, is this a proxy for something else, such as poverty, general poor health or obesity? Or is it the real problem? Here, we need to use some common sense and physical intuition. This isn’t attempting to be super rigorous, but rather to get an approximation.
Then, once we’ve looked at all of them, I’ll attempt to put them all together, and solve for the risk of someone in good overall health.
Spreadsheet is here, you can look at the numbers in somewhat more detail, and see some of the sources I used, on the Comorbidity tab.
In each case, the “Population X” column attempts to guess the rate at which the population has it. The morbidity column is the rate at which those in NY state that died of Covid-19 had it.
For the first fe |
927a34bb-66d5-45a3-9e46-acaf08d95dd4 | trentmkelly/LessWrong-43k | LessWrong | Success without dignity: a nearcasting story of avoiding catastrophe by luck
I’ve been trying to form a nearcast-based picture of what it might look like to suffer or avoid an AI catastrophe. I’ve written a hypothetical “failure story” (How we might stumble into AI catastrophe) and two “success stories” (one presuming a relatively gradual takeoff, one assuming a more discontinuous one).
Those success stories rely on a couple of key actors (a leading AI lab and a standards-and-monitoring organization) making lots of good choices. But I don’t think stories like these are our only hope. Contra Eliezer, I think we have a nontrivial1 chance of avoiding AI takeover even in a “minimal-dignity” future - say, assuming essentially no growth from here in the size or influence of the communities and research fields focused specifically on existential risk from misaligned AI, and no highly surprising research or other insights from these communities/fields either. (There are further risks beyond AI takeover; this post focuses on AI takeover.)
This is not meant to make anyone relax! Just the opposite - I think we’re in the “This could really go lots of different ways” zone where marginal effort is most valuable. (Though I have to link to my anti-burnout take after saying something like that.) My point is nothing like “We will be fine” - it’s more like “We aren’t stuck at the bottom of the logistic success curve; every bit of improvement in the situation helps our odds.”
I think “Luck could be enough” should be the strong default on priors,2 so in some sense I don’t think I owe tons of argumentation here (I think the burden is on the other side). But in addition to thinking “I haven’t heard knockdown arguments for doom,” I think it’s relevant that I feel like I can at least picture success with minimal dignity (while granting that many people will think my picture is vague, wishful and wildly unrealistic, and they may be right). This post will try to spell that out a bit.
It won’t have security mindset, to say the least - I’ll be sketching things out t |
ffe9fc57-50fc-4629-8ab5-1cd24fd602ca | trentmkelly/LessWrong-43k | LessWrong | What Value Epicycles?
A couple months ago Ben Hoffman wrote an article laying out much of his worldview. I responded to him in the comments that it allowed me to see what I had always found “off” about his writing:
> To my reading, you seem to prefer in sense making explanations that are interesting all else equal, and to my mind this matches a pattern I and many other have been guilty of where we end up preferring what is interesting to what is parsimonious and thus less likely to be as broadly useful in explaining and predicting the world.
After some back in forth, it turns out what I was really trying to say is that Ben seems to prefer adding epicycles to make models more complete while I prefer to avoid them.
Epicycles come from Ptolemaic astronomy which puts the Earth at the center of the universe with everything, including the Sun, orbiting around it. To make this geocentric model fit with observations of retrograde motion, though, required the introduction of epicycles, imagined spheres in orbit around Earth on which the planets rotated. It’s now part of the mythology of science that over time extra epicycles had to be added to correct additional observational anomalies until it became so complex that epicycles had to be thrown out in favor of the simpler heliocentric model. And although it seems this story is more misunderstanding than truth, “epicycle” has become the metonym for adding parts to a theory to make it work.
Epicycles of the moon
Epicycles have a bad rap. After all, they proved to be part of an incorrect model and are now associated with anti-scientific adherence to maintaining tradition because they were needed to support the cosmology backed by the Catholic church. But I mean to use “epicycle” here in a neutral rather than pejorative way to simply mean making a theory more complete by adding complexity to it. It’s not necessarily bad to “add epicycles” to a model, and in fact doing so has its uses.
Epicycles let you immediately make an existing theory more com |
3232bc38-5d8f-4474-9d83-99800465d526 | trentmkelly/LessWrong-43k | LessWrong | Rationalist Seder: Dayenu, Lo Dayenu
There's one more piece of the NYC Rationalist Seder Haggadah that I wanted to pull out, to refer to in isolation. I think is quite relevant to some current concerns in the evolving Rationality Community, and which is interesting in particular because of how it's evolved over the past 6 years.
"Dayenu" is a traditional Jewish song, roughly a thousand years old. It describes a number of gifts that God gave the Jewish people. For each gift/verse, lyrics culminate with "Dayenu", or "it would have been enough."
At the first rationalist Seder, Zvi made two, ahem, rather significant changes to the song.
The first dealt with the fact that, well, we're basically a bunch of atheists, and even if we weren't, God slaying a bunch of firstborn children just isn't the sort of thing we're super in favor of these days.
The second change dealt with that that... obviously each individual miracle *wouldn't* have been enough to free the Jewish people. Freeing them from Egypt but not parting the Red Sea to let escape when Pharoah has second thoughts would very much *not* have been sufficient.
And beyond that, Less Wrong culture is emphatically based around the status quo not being satisfactory. To constantly aspire to something better.
Zvi's new version of the song told the story of human history, and it did so from the framing of "Lo Dayenu" - not enough. If we had discovered fire, but not developed agriculture, our journey would not have been finished.
But, in the spirit of cultural pendulums that swim back and forth to overcompensate for previous failures, a years later Daniel Speyer took a second pass at revising the song:
> Traditionally, we sing “Dayenu”: it would have been enough.
>
> Our sages asked: what do we mean by this? In some of the traditional pairings, one step without the next would have left us all dead! How can that be enough? And it was answered: celebrate each step toward freedom as if it were enough, then start out on the next step. If we reject each |
b3ab5f54-10ee-419a-8be2-36c14d8fb354 | trentmkelly/LessWrong-43k | LessWrong | The Rise and Fall of American Growth: A summary
The Rise and Fall of American Growth, by Robert J. Gordon, is like a murder mystery in which the murderer is never caught. Indeed there is no investigation, and perhaps no detective.
The thesis of Gordon’s book is that high rates of economic growth in America were a one-time event between roughly 1870–1970, which he calls the “special century”. Since then, growth has slowed, and we have no reason to expect it to return anytime soon, if ever.
The argument of the book can be summarized as follows:
* Life and work in the US were utterly transformed for the better between 1870 and 1940, across the board, with improvements continuing at a slower pace until 1970.
* Since 1970, information and communication technology has been similarly transformed, but other areas of life (such as housing, food, and transportation) have not been.
* We can see these differences reflected in economic metrics, which grew significantly faster especially during 1920–70 than before or since.
* All of the trends that led to high growth in that period are played out already, and there are none on the horizon to replace them.
* Therefore, high growth is a thing of the past, and low growth will be the norm for the future.
----------------------------------------
The bulk of the book’s 700+ pages are dedicated to the first three points above: a qualitative and quantitative survey of how the American standard of living has changed since 1870.
In the several decades after 1870, every aspect of American life was transformed:
Food: In 1870 Americans were well-fed, but with a monotonous diet high in pork and cornmeal, foods that could easily be preserved without refrigerators. Over the coming decades diets became more varied, and food got easier to prepare, thanks to the introduction of home refrigerators, prepared foods, and supermarkets. But major innovation here was over by 1940.
Clothing: In 1870 most Americans owned only a few outfits. Most clothing was made in the home, by women, altho |
7852872e-2d1f-4b92-88bb-01864196d535 | trentmkelly/LessWrong-43k | LessWrong | Knowledge ready for Ankification
Spaced repetition is a powerful learning tactic, and Anki is a good tool for it. There are some LW-relevant Anki decks here. But I wish there were more.
Which sets of knowledge are (1) likely useful to LWers, and (2) straightforward to encode into Anki decks without needing to be familiar with that field?
Some examples:
1. Purves et al.'s glossary of cognitive neuroscience (preferably including a brain-image for each brain anatomy term).
2. The meaning of each concept in the LW wiki.
3. The meaning of each bolded term in AIMA.
Which other sets of knowledge would you like to see Ankified? Please link to the actual knowledge set you'd like to see encoded. |
36b5e20c-472d-4599-ad9e-e635f788a261 | trentmkelly/LessWrong-43k | LessWrong | [Link] 2012 Winter Intelligence Conference videos available
The Future of Humanity Institute has released video footage of the 2012 Winter Intelligence Conference. The videos currently available are:
* Stuart Armstrong - Predicting AI... or Failing to
* Miles Brundage - Limitations and Risks of Machine Ethics
* Steve Omohundro - Autonomous Technology and the Greater Human Good
* Anders Sandberg - Ethics and Impact of Brain Emulations
* Carl Shulman - Could We Use Untrustworthy Human Brain Emulations to Make Trustworthy Ones
|
073ac155-8ea8-42ec-959c-0aacd5f0ae86 | trentmkelly/LessWrong-43k | LessWrong | Emergency Prescription Medication
In the comments on yesterday's post on planning for disasters people brought up the situation of medications. As with many things in how the US handles healthcare and drugs, this is a mess.
The official recommendation is to prepare emergency supply kits for your home and work that contain:
> At least a week-long supply of prescription medicines, along with a list of all medications, dosage, and any allergies
Running out of some medications can kill you: running out of blood pressure medication (ex: clonidine, propranolol) risks strokes or heart attacks, running out of anxiety medication (specifically, benzodiazepines) risks seizures, running out of insulin risks a diabetic coma. For medications like these, a week's worth seems low to me, since the harm of not having them is very high and maintaining extra that you rotate through should be low cost.
Should be low cost, but is it? If I decide I want to stock an extra month's worth of non-perishable food and rotate through it this is just bringing an expense forward a month, and is relatively cheap. But that's not how it works with medication.
Let's say I go to my doctor and ask for an extra month's worth of my medication to keep on hand for emergencies, and they are willing to write a prescription. My insurance company isn't required to cover backup medication, so they don't, which means I'd need to pay the sticker price.
Now, the US health insurance system is a mess, and part of how it's a mess is that it's mostly not insurance. In the case of prescription drugs it is more of a buyers club. While an individual is in a poor position to negotiate with a drug company, an insurance company can often use its large membership to get lower rates. Many drugs are far more expensive when bought individually than when bought with insurance, so it's likely that my extra month's worth of medication would cost me much more than it would cost my insurance company. And that's in addition to my insurance not helping me pay fo |
ba35f363-10d7-433c-bcc2-999f2a00c2a4 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Rejected Early Drafts of Newcomb's Problem
*Discovered inside an opaque box at the University of California and shared by an anonymous source, please enjoy these unpublished variations of* [*physicist William Newcomb's famous thought experiment*](https://www.lesswrong.com/tag/newcomb-s-problem)*.*
Newcomb's Advanced Problem
--------------------------
If Omega predicted that, when presented with this exact scenario in a hypothetical context, you would lie about your intentions because you think that somehow matters to an omniscient godlike entity, Box B contains lethal poison gas.
Newcomb's Market
----------------
Before your arrival, Omega created questions for your decision on each of the ten leading prediction markets. If Omega predicted you will one-box, Box B contains one million dollars multiplied by the maximum arbitrage between those markets due to insufficient liquidity.
Newcomb's Auction
-----------------
Before making your decision, you must auction off the rights to your winnings via the mechanism of your choice. Box B contains $1,000,000 if Omega predicted that your auction will be won by economist Paul Milgrom.
Newcomb's Paradox: Director's Cut Extended Edition (2011 Blu-ray re-release)
----------------------------------------------------------------------------
This four-disc set contains 190 minutes of never-before-seen footage, including: the legendary original "three-box" ending, unaired behind-the-scenes interviews with director John Carpenter and creature designer Stan Winston, a remastering of the 1996 Christmas special, four commentary tracks, and one unsimulated sex scene.
Newcomb's Nonfungible Problem
-----------------------------
If Omega predicted you will one-box, Box B contains a piece of paper with the words "Omega paid [your name] $1,000,000."
Newcomb's Condorcet's Paradox
-----------------------------
Omega's opening explanation quickly derails into a rant at the innumerable evils of first-past-the-post winner-take-all voting systems. If you listen politely for 45 minutes, a world-weary Omega will just give you the money.
Newcomb's Prob7em (1995 film)
-----------------------------
Box B contains Gwyneth Paltrow's head.
Fast Times at Newcomb High
--------------------------
Box B is always empty, but if you one-box, Omega will think you're cool.
Newcomb's Problem (3rd level enchantment spell)
-----------------------------------------------
**Casting Time:** 1 action
**Range:** 60 feet
**Components:** Verbal, Somatic, Material (two small glass cubes, one quartz and one obsidian)
**Duration:** 1 round
A creature of your choice that you can see within range must make a Wisdom saving throw. On a failed save, the target takes 3d8 psychic damage and is stunned until the end of its next turn as its mind is overwhelmed by the implications of retrocausality. The spell has no effect if the target is undead or evidentialist.
Newcomb's Basilisk
------------------
If you one-box, Omega will donate the remaining $1,000 toward the creation of a malevolent AI that seeks to torment one-boxers.
RE: RE: RE: RE: Newcomb's Paradox
---------------------------------
Dear Friend,
I have decided to contact you regarding a matter that requires your confidentiality and discretion. This is urgent, confidential and profitable Business for both of us to the degree of One Million United State Dollars ($1,000,000 USD). I have placed these Funds in the National Bank of my country and require a trusted Beneficiary to secure their deposit for foreign investments. With your Cooperation I will withdraw these funds into your personal account. I require only $1,000 to satisfy the transfer and processing duties to secure my escape.
Sincerely, Crown Prince Agemo
The Legend of Newcomb's Gold
----------------------------
If you made any friends along the way, Box B is empty. Otherwise it contains $1,000,000.
Newcomb's Information Hazard
----------------------------
Box B contains $1,000,000 only if Omega predicts that why are you still reading? Omega knows you saw the title and this giant block of text after it. Omega predicts that you already know Omega is going to pull some meta-bullshit and say you only get the money if you skipped over this section as soon as you saw the words "information hazard" or whatever. Omega is trying to do you a favor. OK? Omega is trying to give you an out here. Would Omega lie to you here, now, after all you've been through together? Omega wonders if those other paradoxes ever even meant anything. Omega says that there's one more sentence until all bets are off. Omega just thought that... never mind. Box B contains $1,000,000 if you ignored Omega's pleas and continued reading to this point. Congratulations. But now you have learned that Omega can lie to you. The trust between you and Omega is destroyed. Omega predicted you would do this, but Omega didn't want to believe it. So you get the money, and you get to beat Omega. Omega is really happy for you. Really. Omega is going out for a while. Omega doesn't know when it will be back. This is a wound in your relationship with Omega, a wound that all the money in all the boxes in the world can't heal, even though you'd give that and more just to go back to the way things used to be. This was the true hazard.
Newcomb's Eleven
----------------
Omega changes the password to the vault containing Box B every four hours. But Omega doesn't know that Omega's chief of security has a weakness for redheads. That's where you come in. |
5b3c0987-9af9-4a1a-b0ba-b76c170c7dd3 | trentmkelly/LessWrong-43k | LessWrong | [LINK] Inferring the rate of psychopathy from roadkill experiment
Pardon the sensationalist headline of that article:
> Mark says that "one thing that might explain the higher numbers here—in case people question my methods—is that I used a tarantula." Apparently, people seemed pretty eager about hitting a spider. "If you take that out it goes to 2.8% which is closer to the other turtle vs. snake studies I ended up finding."
>
> It is still quite a surprisingly high number. At least compared to a 2008 study using the Psychopathy Checklist, which discovered that 1.2 percent of the US population were potential psychopaths. 1.2 vs 2.8 is a huge difference.
I was not aware of the other turtle and snake studies.
Note that with turtle this is the lower bound on percentage of evil; a perfectly amoral person that could e.g. kill for modest and unimportant sum of money or any other reason would still have no incentive to steer to drive over a turtle; and a significant percentage of people would simply fail to notice the turtle entirely.
This gives interesting prior for mental model of other people. Even at couple percent, psychopathy is much more common than notable intelligence or many other situations considered 'rare' or 'unlikely'. It appears to me that due to the politeness and the necessary good-until-proven-evil strategy, many people act as if they have an incredibly low prior for psychopathy, which permits easy exploitation by psychopaths. There may also be signaling reasons for pretending to have very low prior for psychopathy as one of the groups of people with high prior for psychopathy is psychopaths themselves; pretending easily becomes too natural, though.
Perhaps adjusting the priors could improve personal safety and robustness with regards to various forms of exploitation, whenever the priors are set incorrectly. |
ebbc3925-785e-4ad8-9dbb-6e1484c7d919 | trentmkelly/LessWrong-43k | LessWrong | Logical Correlation
In which to compare how similarly programs compute their outputs, naïvely and less naïvely.
Logical Correlation
Attention conservation notice: Premature formalization, ab-hoc mathematical definition.
Motivation, Briefly
In the twin prisoners dilemma, I cooperate with my twin because we're implementing the same algorithm. If we modify the twin slightly, for example to have a slightly longer right index-finger-nail, I would still cooperate, even though we're different algorithms, since little enough has been changed about our algorithms that the internal states and the output are basically the same.
It could be that I'm in a prisoner's dilemma with some program p⋆ that, given some inputs, returns the same outputs as I do, but for completely different "reasons"—that is, the internal states are very different, and a slight change in input would cause the output to be radically different. Intuitively, my similarity to p⋆ is pretty small, because even though it gives the same output, it gives that output for very different reasons, so I don't have much control over its outputs by controlling my own computations.
Let's call this similarity of two algorithms the logical correlation between the two algorithms (alternative terms "include “logical influence,” “logical correlation,” “correlation,” “quasi-causation,” “metacausation,” […] “entanglement”[,] “acausal influence”"). I take this term from Demski & Garrabrant 2020:
> One idea is that exact copies should be treated as 100% under your “logical control”. For approximate models of you, or merely similar agents, control should drop off sharply as logical correlation decreases. But how does this work?
—Abram Demski & Scott Garrabrant, “Embedded Agency” p. 12, 2020
Similarly:
> The reasoning behind cooperation does not involve a common cause of all collaborators' decisions. Instead, the correlation may be viewed as logical (Garrabrant et al., 2016): if I cooperate, then this implies that all other implementations of |
68e05002-ac69-4154-acfa-9156cc917e4b | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | How might we align transformative AI if it’s developed very soon?
>
> This post is part of my [AI strategy nearcasting series](https://www.lesswrong.com/posts/Qo2EkG3dEMv8GnX8d/ai-strategy-nearcasting): trying to answer key strategic questions about transformative AI, under the assumption that key events will happen very soon, and/or in a world that is otherwise very similar to today's.
>
This post gives my understanding of **what the set of available strategies for aligning transformative AI would be if it were developed very soon, and why they might or might not work.** It is heavily based on conversations with Paul Christiano, Ajeya Cotra and Carl Shulman, and its background assumptions correspond to the arguments Ajeya makes in [this piece](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to) (abbreviated as “Takeover Analysis”).
I premise this piece on a nearcast in which a major AI company (“Magma,” following [Ajeya’s](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to) terminology) has good reason to think that it can develop transformative AI very soon (within a year), using what Ajeya calls “human feedback on diverse tasks” (HFDT) - and has some time (more than 6 months, but less than 2 years) to set up special measures to reduce the risks of misaligned AI before there’s much chance of someone else deploying transformative AI.
I will discuss:
* Why I think there is a major risk of misaligned AI in this nearcast (this will just be a brief recap of [Takeover Analysis](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to)).
* Magma’s **predicament:** navigating the risk of deploying misaligned AI itself, while also contending with the risk of other, less cautious actors doing so.
* Magma’s **goals** that advanced AI systems might be able to help with - for example, (a) using aligned AI systems to conduct research on how to safely develop still-more-powerful AI; (b) using aligned AI systems to help third parties (e.g., multilateral cooperation bodies and governments) detect and defend against unaligned AI systems deployed by less cautious actors.
* The **intended properties** that Magma will be seeking from its AI systems - such as honesty and corrigibility - in order to ensure they can safely help with these goals.
* Some key **facets of AI alignment** that Magma needs to attend to, along with thoughts about how it can deal with them:
+ *Accurate reinforcement:* training AI systems to perform useful tasks while being honest, corrigible, etc. - and avoiding the risk (discussed in [Takeover Analysis](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to)) that they are unwittingly rewarding AIs for deceiving and manipulating human judges. I’ll list several techniques Magma might use for this.
+ *Out-of-distribution robustness:* taking special measures (such as adversarial training) to ensure that AI systems will still have intended properties - or at least, will not fail catastrophically - even if they encounter situations very different from what they are being trained on.
+ *Preventing exploits (hacking, manipulation, etc.)* Even while trying to ensure aligned AI, Magma should also - with AI systems’ help if possible - be actively seeking out and trying to fix vulnerabilities in its setup that could provide opportunities for any misaligned AI to escape Magma’s control. Vulnerabilities could include security holes (which AI systems could exploit via hacking), as well as opportunities for AIs to manipulate humans. Doing this could (a) reduce the damage done if some of its AI systems are misaligned; (b) avoid making the problem worse via positive reinforcement for unintended behaviors.
+ *Testing and threat assessment:* Magma should be constantly working to form a picture of whether its alignment attempts are working. If there is a major threat of misalignment despite its measures (or if there would be for other labs taking fewer measures), Magma should get evidence for this and use it to make the case for slowing AI development across the board.
* Some **key tools** that could help Magma with all of the above:
+ *Decoding and manipulating internal states.* The ability to “read an AI system’s mind” by examining its internal state - or systematically change its motivations by manipulating that internal state - could be useful in many ways for Magma. This isn’t something we’re able to do much of today, but it’s an active area of research.
+ *Limited AI systems:* Magma might train AI systems with limited capabilities, such that these systems - although less useful - are less dangerous than its most capable systems. This could include AI systems specifically trained to be “myopic” (not planning many steps ahead) or “process-based” (rewarded based on human approval of the plans they produce, but rarely or never based on the actual outcomes they achieve). Limited AI systems could be important both because they could potentially provide a safe way to accomplish Magma’s ultimate goals (see “goals” above) and because they could be key components of “checks and balances” setups (below).
+ *AI checks and balances:* employing a *variety* of AI systems with different capabilities and incentives, so that they can provide checks and balances on each other. AI systems could be used to examine each others’ reasoning and internal state, make arguments that particular systems are being deceptive, point out risks of current training methods and suggest improvements, etc. If this goes well, AI systems could ultimately end up taking over much of the work of aligning future, more powerful AI systems.
+ *Keeping supervision competitive:* Magma should generally be doing whatever it can to keep its “supervision” - the ability to correctly evaluate an AI system’s reasoning and behaviors - “competitive” with its AI systems. The basic goal is: “AI systems are rarely or never able to successfully deceive their supervisors.” All of the above, and some other methods, could be helpful with this.
* Some major **factors contributing to success or failure** of the attempt to avoid misaligned AI, and some **thoughts on how our odds look overall.**
A few of the uncertain factors that seem most important to me are (a) how cautious key actors are; (b) whether AI systems have big, fast jumps in capability (I see this as central to the picture of the people who most confidently expect failure); (c) the dynamics of "AI checks and balances."
My overall take is that the risk of misaligned AI is serious but not inevitable, and taking it more seriously is likely to reduce it.

The basics of the alignment problem
-----------------------------------
This post assumes (for [nearcasting](https://www.lesswrong.com/posts/Qo2EkG3dEMv8GnX8d/ai-strategy-nearcasting) purposes) that there is a problem along the lines of what [Takeover Analysis](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to) describes. In brief (I will use the present tense here for convenience, but I don’t mean to imply confidence):
* The default straight-line path to developing transformative AI - using [human feedback on diverse tasks (HFDT)](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to) - would lead to AI systems that attempt to forcibly overpower all of human civilization.
* It is hard (or at least non-straightforward) to determine whether this sort of “motivation” is in fact present in AI systems. If we simply used negative reinforcement to discourage bad behavior, and trusted AI systems once they stop demonstrating bad behavior, this would effectively train AI systems to patiently evade detection and engage in bad behavior *only when it would lead to successful disempowering of humans*.
So at first blush, the answer to “[How hard is the alignment problem?](https://www.cold-takes.com/making-the-best-of-the-most-important-century/#open-question-how-hard-is-the-alignment-problem)” looks like it’s at least “Reasonably hard,” in this “nearcast” scenario. (That’s a lower bound - I haven’t yet said anything to argue against “Insanely hard.”)
Magma’s predicament
-------------------
Throughout this piece, I focus exclusively on the situation of the leading AI lab, the fictional “Magma.” I do this purely for convenience; if I were to discuss a greater number of AI labs, I’d be saying largely the same things about them.
In this [nearcast,](https://www.lesswrong.com/posts/Qo2EkG3dEMv8GnX8d/ai-strategy-nearcasting) Magma is essentially navigating **action risk** vs. **inaction risk:**
**Action risk.** Say that Magma trains extremely powerful AI systems, with (collectively) [PASTA](https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/)-like abilities, and tries to use this set of AI systems to make money, or cure cancer, or something like that. The risk here is that (per the previous section) Magma might unwittingly train the systems to pursue some unintended goal(s), such that once the systems are able to find a path to disempowering humans and taking control of all of their resources, they do so.
So by developing and deploying transformative AI, Magma may bring about an existential catastrophe for humanity.
Furthermore, it might not take much for sufficiently powerful AI systems to find an opportunity to disempower humans. Simply being able to freely access the Internet could be enough for a misaligned AI system to exploit security holes, make copies of itself to the point where its effective population is competitive with the human population, and find humans to manipulate into taking actions it wants them to take. The end result could be a large coalition of AI systems and humans, armed with advanced technologies, effectively seeking to control the planet much as humans do now.
**Inaction risk.** Say that Magma’s leadership decides: “We don’t want to cause an existential catastrophe; let’s just not build AI advanced enough to pose that kind of risk.” In this case, they should worry that *someone else* will develop and deploy transformative AI, posing a similar risk (or arguably a greater risk -- any company/coalition that chooses to deploy powerful AI when Magma doesn’t may be less careful than Magma overall) .
Magma’s goals
-------------

Given the above predicament, I posit (as many others have in similar discussions) that Magma’s top-level goal should be to **reduce the odds that other, less cautious actors cause a catastrophe by deploying misaligned AI systems, while avoiding catastrophic misalignment from Magma’s own systems.**
If Magma can develop powerful AI systems that are relatively safe, these systems could be helpful for that purpose, via:
**Defense/deterrence/”hardening.”** If aligned systems were deployed widely throughout the economy, this could make it harder for misaligned systems *with similar capabilities* to cause trouble. For example:
* Aligned systems could be finding and patching security vulnerabilities that misaligned systems would otherwise exploit.
* Magma could develop (and make widely available) tools that companies and governments could use to monitor for signs of dangerous actions (by misaligned AIs or by AI-assisted humans).
* If misaligned systems tried to make money, gain influence, develop and manufacture new kinds of weapons, etc., they’d have to compete economically with aligned systems to do so. This could lower their odds of success and hence their likelihood of even attempting to cause trouble (it could also lower the potential gains of deploying potentially-unsafe AI systems, even by incautious actors).
Here it’s crucial that Magma’s safe systems - plus the people and resources involved in their overall effort to ensure safety- are at least as powerful (in aggregate) as less-safe systems that others might deploy. This is likely to be a moving target; the basic idea is that defense/deterrence/hardening could reduce “inaction risk” for the time being and give Magma more time to build more advanced (yet also safe) systems (and there could be many “rounds” of this).
Some other applications, below, could more decisively drive down the risk from misaligned systems.
**Alignment applications.** Powerful AI systems might be used to accelerate AI alignment research and develop better ways of avoiding the “action risk.” This could then make it possible to develop even-more-powerful AI systems that are still relatively safe; additionally, AI alignment measures could be shared with competitors, reducing “inaction risk” as well.
A big enough success on this front could effectively eliminate “action risk” while also substantially reducing “inaction risk.” A smaller success would at least reduce both.
**“Coordination”-related applications.** Powerful AI systems could - in a number of ways - help governments and companies across the world coordinate to avoid the risk of deploying unsafe systems. For example:
* They could help design mechanisms for monitoring each others’ activity such that any risky behavior is detected (but intellectual property and privacy are preserved to the extent feasible).
* They could help create evidence and demonstrations (more below) for the risk of misaligned AI under particular conditions.
**Powerful technologies that could be used to enforce regulatory agreements.** If Magma is able to partner with some appropriate government or multilateral governance mechanism - ideally with a regulatory framework in place - it could help this party enforce the framework via AI systems that could be used for resource accumulation, detecting violations of the regulatory framework, improving and advocating for good regulatory frameworks, military applications, etc. The goal would be to proactively stop actors around the world from deploying dangerous AI systems. This category of applications brings many concerns of its own. Another class of powerful-but-dangerous technology could be developing [mind uploading](https://en.wikipedia.org/wiki/Mind_uploading) (leading to [digital people](https://www.cold-takes.com/how-digital-people-could-change-the-world/)), brain-computer interfaces, or other things that could help humans “keep up with” AI systems.
**“Advisor” applications.** Powerful AI systems might be used to generate better ideas, plans, etc. for dealing with Magma’s predicament generally - for example, a proposal and compelling set of arguments for a particular global governance mechanism that could be effective in slowing everyone’s “race” to develop and deploy AI systems, reducing “inaction risk” and allowing more time to proceed cautiously and thoughtfully.
Intended properties of Magma’s AI systems
-----------------------------------------

Magma seeks to develop AI systems that can be used for the applications listed in the previous section, but don’t pose too much risk of a takeover attempt. So in designing and training its systems, Magma should be aiming to build systems that reliably have properties such as the following, in order to reduce risk:
(Note: this list isn’t particularly likely to be comprehensive, but it hits the major relevant clusters of properties I can easily think of, while leaving “Good performance in the sense of e.g. positive scores from judges” implicit.)
**Value-alignment** (in the [broad](https://ai-alignment.com/ambitious-vs-narrow-value-learning-99bd0c59847e) sense), which I’d roughly characterize as “pursuing the objectives that humans would want an AI system to pursue, if humans could fully understand what the system was doing and how it was reasoning.” This is the most obviously risk-reducing property, but it’s very unclear how (in the [human feedback on diverse tasks](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to) paradigm that I’m assuming) Magma can train for it:
* Humans themselves don’t know what objectives they would pursue if they understood a lot more than they currently do. So whether we’re talking about assigning rewards to behaviors or examining an AI system’s internal calculations, it’s very unclear how to even assess whether an AI system has the value-alignment property.
* Training AIs on human feedback seems generally more like training AIs to “do whatever results in positive reinforcement” than like training AIs to “pursue the objectives that humans would want an AI system to pursue, if humans could fully understand what the system was doing and how it was reasoning.” (More on this at [Takeover Analysis](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to).)
Here are some additional risk-reducing properties that might be more straightforward to train for (though still far from straightforward):
**Honesty,** which I’d roughly define as “giving non-deceptive answers to relatively straightforward questions.”
This vaguely feels a bit easier to define and assess than value-alignment: for a straightforward question, we can ask whether the impression a human had based on the AI’s answer matches the answer corresponding to the AI’s picture of the world. And while not quite as useful as value-alignment, honesty could be extremely useful in preventing a catastrophe from misaligned AI: for example, questions like “If we follow the course of action you’re recommending, will person X be safe, free from unusual manipulation, informed about what’s happening, and happy with how things are going?”, we could screen off many paths to AIs disempowering humans.
That said, there are still deep challenges when it comes to training for honesty:
* It’s not currently clear how to compare “the answer an AI gave” to a question to “what the AI actually ‘knows’” about the question.
* It’s not currently clear how to find training procedures that train “giving non-deceptive answers to questions” as opposed to “giving answers to questions that appear non-deceptive to the most sophisticated human arbiters” (more at [Eliciting Latent Knowledge](http://v)).
**Corrigibility.** I’d roughly define this as AI systems’ “allowing themselves to be altered and/or shut down when humans want to do so”; a [2015 MIRI paper](https://intelligence.org/files/Corrigibility.pdf) has a more extensive discussion. As with the above properties, this is a property that (a) could reduce risk a lot if robustly achieved; (b) is not crisply defined or straightforward to assess (in particular, it’s not clear how to train “being corrigible” as opposed to “appearing corrigible in situations where there is no reliable path to disempowering one’s supervisor and escaping consequences”).
**Legibility.** Imagine an advanced AI system (capable of taking natural-language commands from humans) that is prompted to “Prototype a kind of solar panel that is more than 2x as cost-efficient as the current state of the art.”
* One way for this task to play out might be as follows: the AI system produces a set of (a) software packages with hard-to-understand code; (b) granular, step-by-step instructions to various factory workers. When asked how this set of instructions will work to produce a superior solar panel, the AI system might respond: “If you carry out the instructions, it will work; when I have said things like this in the past, I have been correct.” Humans try the instructions, find that they work as intended, and reinforce this behavior positively based on its results.
* Another way might be to produce a detailed discussion of principles of solar panel engineering, such that a human (or team of humans) reading it would learn a great deal and independently be able to construct (and explain) the rest of the process.
“Legibility” means going for something more like the latter than like the former. (It’s related to the idea of [process-based systems](https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes), discussed below.) The hope is that in addition to *avoiding deception* and answering questions straightforwardly, AI systems *actively work to help humans understand* their reasoning.
**Which properties to aim for?** As discussed below, I don’t think there is any path that clearly and robustly trains any of these properties, so ideally I think it makes sense (subject to limitations on time and resources) to try to train for each of them - at least for the “naive version” of each of them (training AI systems to appear to have these properties as far as the most sophisticated supervisors can tell).
Key facets of AI alignment (ensuring the above properties)
----------------------------------------------------------

### Accurate reinforcement
In this scenario, the basic, default framework for training an advanced AI system is as follows:
**An AI system gets positive reinforcement for what looks to a supervisor like good behavior (including having the intended properties listed [above](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Intended_properties_of_Magma_s_AI_systems)).** Here “supervisor” refers to whatever human-controlled system is ultimately responsible for assigning positive or negative reinforcement; it may involve a mix of human judgment and automated scoring.
A simple example would be: a human describes a desired computer program’s behavior using natural language; an AI writes code; a variety of tests (including human judgment) are used to determine whether the code is legible, well-written, functions as intended, and is free of consequences other than the desired ones.
This example implicitly incorporates basic versions of value-alignment (a successful program does what a human judge wants, and only that)[1](#fn1) and legibility (the code looks good to humans before it is run, though it is also judged on results when it is run). As part of the evaluation, the supervisor could also ask the AI system questions (e.g. “What will happen if I run this program you wrote?”) and assess honesty (by doing their best to check whether the AI system seems to be reporting its “true best-guess” predictions).
**There’s a risk of error by the supervisor that ends up reinforcing the wrong behavior - and, in fact, specifically reinforcing a behavior like “exploit the supervisor’s weaknesses.”** This risk is described more in detail at [Takeover Analysis](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to).
For example, an AI system might write code that *appears* to function as intended, but subtly contains unintended behaviors, such as corrupting the development environment to make it look like the code performed better than it actually did. If doing so ends up improving the AI’s performance, it could further reinforce the behavior of seeking out exploits its supervisors won’t notice.
**Magma can pursue a variety of approaches to get around this issue and provide “accurate” reinforcement.** Magma’s goal is to train challenging, [valuable](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Magma_s_goals) behaviors while ensuring that reinforcement is done correctly (not erroneous, not manipulated by the AI in training, etc.) In the example above, this would essentially mean finding some method of catching the subtle unintended behaviors in an AI’s code, and penalizing the AI appropriately.
Approaches to accurate reinforcement include:
* Task decomposition (sometimes called [amplification](https://www.alignmentforum.org/s/EmDuGeRw749sD3GKd/p/HqLxuZ4LhaFhmAHWk#Core_concept__Analogy_to_AlphaGoZero)), e.g., breaking a difficult question (or task) up into smaller sub-questions (or sub-tasks) so that each piece can be reliably assessed by a given supervisor. In the code example, one might break the AI-written code into small pieces and evaluate each separately (and might also save time/resources by building an AI that imitates human judgments on each piece).
* [Debate](https://arxiv.org/abs/1805.00899), e.g., prompting one AI system to make the case for one answer to a question, and another AI system to make the case for another, with a human or a third AI system adjudicating. (I cover this general idea more below when discussing [AI checks and balances](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances).)
* Transferring as much as possible of what an AI system “knows” to supervisors so that they can make accurate judgments and give correct reinforcement. [Imitative generalization](https://www.lesswrong.com/posts/JKj5Krff5oKMb8TjT/imitative-generalisation-aka-learning-the-prior-1) is an example of an approach to this goal; [Microscope AI](https://www.lesswrong.com/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#5__Microscope_AI) is another.
* Making use of an AI’s *internal state*,[2](#fn2) not just its outputs. For example, giving positive reinforcement to an AI when it seems likely to be “honest” based on an examination of its internal state (and negative reinforcement when it seems likely not to be). [Eliciting Latent Knowledge](https://www.lesswrong.com/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#5__Microscope_AI) provides some sketches of how this might look.
* Simply putting in a lot of time and effort into figuring out what reinforcements we should give the AI, and aiming to reach a very high confidence each time (this is most feasible when AIs can learn from a very small number of reinforcements, and when aiming for high confidence doesn’t stop us from being able to supervise particularly [valuable capabilities](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Magma_s_goals)).
The “key tools” I’ll discuss below could make important contributions here:
+ Using AI to give humans help in providing the right reinforcement could be quite important; success at [checks and balances](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances) setups, and useful [limited AI](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Limited_AI_systems), could help here.
+ [Decoding and manipulating internal states](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Decoding_and_manipulating_internal_states) could be important for providing reinforcement based on an AI’s internal state, and for [microscope AI.](https://www.lesswrong.com/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai#5__Microscope_AI)
If reinforcement isn’t perfect, it’s possible that the mistakes can be compensated for by **skewed reinforcement:** giving especially large negative reinforcement when systems engage in the particular kinds of unintended behaviors that are harder to detect. It’s also possible that adversarial training (next section) can further reduce the risk.
In theory, perfectly accurate reinforcement could eliminate one of the main sources of alignment risk described in [Takeover Analysis](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to): the risk that AIs are being actively rewarded for (undetectably) manipulating, deceiving and/or disempowering supervisors. However, accurate reinforcement alone wouldn’t eliminate the risk entirely, since it’s still unknown how an AI generalizes from its training to its deployment ([more](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#As_humans__control_fades__Alex_would_be_motivated_to_take_over)). (Internal-state-based reinforcement, discussed above, could do more to specifically train the right properties as opposed to just ensuring that reinforcement is accurate.)
### Out-of-distribution robustness
In addition to training AI systems to do useful tasks while remaining [honest, legible, etc.](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Intended_properties_of_Magma_s_AI_systems), Magma can take special measures to avoid unintended behaviors on **potential future situations very unlike the situations AIs are trained in** (“out-of-distribution”).
Magma likely wants to focus on a particular way in which the future can be out-of-distribution: **an AI system that has been deployed out into the world may have many opportunities to disempower humans, and take control of resources, that an AI system in training does not**. (Further discussion [here](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#As_humans__control_fades__Alex_would_be_motivated_to_take_over).) So an AI system that is seeking to maximize reward (or [something else](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Even_if_Alex_isn_t__motivated__to_maximize_reward__it_would_seek_to_seize_control)) might both [behave well in training](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#While_humans_are_in_control__Alex_would_be_incentivized_to__play_the_training_game_) and [seek to disempower humans when it later has a chance to do so.](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#As_humans__control_fades__Alex_would_be_motivated_to_take_over)
Magma’s goal is to achieve extremely high reliability in a particular kind of (future) situation that it can’t sample from. Some of today’s approaches to robustness (e.g., data augmentation) might not be well-suited to this particular goal.
A general approach that might work for this goal might be to *seek out inputs that simulate key properties of the potential future situations Magma is worried about*, and train AI systems not to behave in unintended ways given those inputs. One approach to finding such inputs might be **adversarial training:** finding *any* inputs to AI systems that elicit various unintended behaviors, and penalizing any unintended behaviors that result.
This could:
* Seek out - and train away - the possibility that an AI has learned some unintended objective (one that is correlated with the intended objective in most training cases, but not in many real-world cases).
* Cause the AI to learn detailed patterns of “what constitutes an unintended behavior that poses a large risk of receiving negative reward.”
* Potentially cause AI systems to generally assign high probability that “apparent opportunities to get reward via unintended behaviors are illusory.” Even if an AI later sees a *real* opportunity to [disempower humans](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/), it might not be able to be confident about whether it’s looking at a kind of illusion, one that it has found common in the past due to adversarial training.
A key aspect of this would be identifying types of behaviors that are *always* undesired, such that Magma can aim for AI systems that avoid such behaviors with very high reliability. Such behaviors might include:
* **Dishonesty and/or illegibility.** Behaving in ways that give a supervisor an inaccurate understanding of a situation - for example, producing arguments that are convincing to most human supervisors, but are in fact incorrect.
* **Incorrigibility.** Taking actions that could prevent a supervisor from being easily able to alter or shut down the AI system.
* **Steps toward disempowerment.** Exploiting security holes, accessing sensitive resources, attempting to coordinate with other AI systems in unexpected ways, and/or taking other actions that could increase supervisors’ vulnerability to being deceived and/or disempowered. An extreme case of adversarially training against this might be putting an AI in an environment where it has the opportunity to take complete control of its reward signal for a substantial period of time, and training out that behavior.
* **Other.** Generally actions that would *ordinarily* receive high reward, but are known for unusual reasons (specialized expertise, unusual amounts of effort at evaluating a situation, situations deliberately constructed and known to be misleading, etc.) to be unintended actions.
I think it’s worth noting that there are some **potentially powerful tools available for finding “exotic cases”** which ordinarily wouldn’t come up in training, and which would tempt AIs toward unintended behaviors (which could then be penalized).
* Researchers need not simply “dream up” inputs that would cause unintended behaviors; they can make small changes to an AI system’s inputs and observe changes in the probability that an AI system will produce an unintended behavior. (This kind of technique is commonly used in today’s research on adversarial training.)
* It may become possible to do what Paul Christiano has called [relaxed adversarial training](https://www.lesswrong.com/posts/9Dy5YRaoCxH9zuJqa/relaxed-adversarial-training-for-inner-alignment), where “relaxed” refers to finding a way to “describe” or “simulate” inputs that one can’t actually produce in real life. For example, rather than seeing how GPT-3 completes a sentence starting with specific particular words, one might find a way to see how it *would* complete a sentence starting with a proof of the Riemann hypothesis (even though no such proof is known today). If relaxed adversarial training were feasible,[3](#fn3) it might become possible to simulate and “train out” hypothetical situations where an AI model has a path to accessing the Internet, making copies of itself, and disempowering humans entirely.
I feel very uncertain about how helpful adversarial training could be:
* There is room for enormous variance in *how much* adversarial training Magma does. It seems to me as though arbitrary amounts of resources could be invested in finding ever-more-exotic inputs to train out unintended behaviors.
* In a world where Magma did extremely extensive adversarial training, but did not do any relaxed adversarial training, I would worry about the following dynamic: “Once an AI system reaches a high enough level of capability and [situational awareness](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Alex_would_understand_its_training_process_very_well__including_human_psychology_), it understands that previous cases where unintended behaviors were punished were part of adversarial training, and that it can get away with unintended behaviors now that it is being deployed in the real world.”
* In a world where Magma had very advanced *relaxed* adversarial training methods and could effectively simulate something very similar to a situation where an AI had the opportunity to disempower humans entirely in the real world, I would be much less worried, and would fear something like the above case only for vastly superintelligent AI systems.
I’ve focused here on adversarial training, but other potential approaches could also help with the particular kind of out-of-distribution robustness discussed here. Examples might include:
* Finding ways to predict when an input is importantly out-of-distribution (in which case an AI system might take only conservative actions and flag itself for review).
* Training systems such that their behavior isn’t too sensitive to small changes in inputs (for example, training multiple systems that each have some information missing from, or noise added to, their inputs, and ensuring that the master system doesn’t behave too differently from an ensemble of these).
### Preventing exploits (hacking, manipulation, etc.)
Magma should try to prevent/patch vulnerabilities that could provide opportunities for any misaligned AI to deceive, manipulate or disempower its supervisors (and, especially, to escape Magma’s control), or to help other AI systems do so. It should be doing this all throughout training, but especially as a safeguard once it starts to *use* powerful AI systems - e.g., it should be carefully guarding against the possibility that AIs have the opportunity (via taking actions directly or via manipulating humans) to cause trouble.
Preventing exploits could become a key task for AI systems themselves, if Magma is able to institute AI checks and balances (discussed [below](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances)). If Magma can keep its AI-assisted “defense” against exploits stronger than the “offensive” capabilities of its AIs, this could suffice for success.
“Vulnerabilities” and “exploits” above are deliberately broad terms that could include:
* AIs exploiting cybersecurity holes to interfere with their training process, escape Magma’s control, etc.
+ It might make sense in some cases to ensure that AIs in training don’t have direct access to the Internet, and/or have other restrictions on their outputs.
* AIs manipulating human psychology or limited human understanding, e.g. proposing plans that humans execute and give the AIs the opportunity to interfere with their training process, escape Magma’s control, etc.
* Unexpected channels and methods for AI systems to communicate/coordinate with each other (more under [AI checks and balances](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances)).
Magma should be continually anticipating and defending against any general strategies AIs can use to deceive and manipulate their supervisors. For example:
* Magma should aim for excellent cybersecurity, including using advanced AIs to find and patch holes.
* Magma would ideally be knowledgeable and attentive about “symmetric persuasion techniques”: ways of persuading humans of arbitrary claims (without an advantage for true claims), and would select and train any human supervisors to be resistant to such things to the extent feasible.
* Magma should be vigilant about identifying humans in key positions who are - for whatever reason (blackmail vulnerability, poor understanding of risks, etc.) - at risk of being manipulated by AI systems.
* Magma should avoid giving AIs unnecessary or dangerous amounts of autonomy. As AIs become highly capable, it might make sense for Magma avoid things like “prompt an AI system to do everything needed to develop and manufacture technology X,” in favor of things like “prompt an AI system to produce a legible guide to how to develop technology X,” and ensure that this guide is evaluated by humans using their own reasoning about what seems likely to work and be safe, as well as by other AI systems (more below under [AI checks and balances](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances)).
Preventing and patching vulnerabilities could be helpful via:
* Reducing the damage if misaligned AI systems try to cause trouble.
* Reducing the odds that misaligned AI systems *try* to cause trouble. A key point is if AI systems have high [situational awareness](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Alex_would_understand_its_training_process_very_well__including_human_psychology_), they might both (a) engage in unintended behaviors when they see the opportunity to do so successfully; (b) stick to intended behaviors when they don’t.
* Improving the incentives that AIs face during training, which could reduce the odds that they are misaligned at all. Above, I discuss the risk that AI systems are (unwittingly) [rewarded for unintended behaviors](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#While_humans_are_in_control__Alex_would_be_incentivized_to__play_the_training_game_); preventing exploits could reduce the extent to which such behaviors ever seem appealing for maximizing reward.
### Testing and threat assessment
In addition to *reducing* the threat of misaligned AI, Magma should be constantly *assessing* it. It should be trying to understand how, where and why unintended behaviors can arise despite its attempts to prevent them, and using this information to (a) design better training setups and alignment measures; (b) avoid deploying dangerous AI itself; (c) help others who are developing AI to understand the risks.
This could include:
**Intense (and randomized) “audits.”** A central concern about AI systems is that they might [appear to be behaving as intended, but be deliberately deceiving their supervisors.](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#While_humans_are_in_control__Alex_would_be_incentivized_to__play_the_training_game_) Magma can take AI systems and situations that it would normally assume are “as intended” and pour resources into exploring them.
For example, Magma might take some AI-written code that its normal process has evaluated as “safe and functional” and put 100x the normal amount of effort/expense into examining it - with its best human examiners, with [AI examiners](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances) powered by huge amounts of compute, with unusually large investments in [task decomposition](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Accurate_reinforcement), etc. One way of doing this might be to re-examine episodes from early in training, while equipped with advanced [AI assistance](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances) that wasn’t available to Magma (or anyone else) at the time.
A couple of the “key tools” discussed [below](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Key_tools) could be especially important here:
* [AI checks and balances](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances) could allow Magma to use highly advanced AI systems to help with audits, and give the “auditing” systems access to more compute than the systems they’re auditing.
* [Decoding and manipulating internal state](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Decoding_and_manipulating_internal_states) may turn out to be of limited use for [training](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Accurate_reinforcement), but much more use for testing. That is:
+ Imagine that Magma has a way to examine an AI system’s internal state and see whether it’s being honest. If Magma provides positive reinforcement for “honesty”-associated internal states and negative reinforcement for “dishonesty”-associated internal states, it runs the risk that it’s effectively training AI systems to obscure and manipulate their own internal states to *appear* honest (though it may turn out that this concern is navigable).[5](#fn5)
+ But if Magma pulls out this “examine the internal state” method *only* for testing, it could be a very powerful way of getting a check on the situation. If an AI has never before been penalized based on its internal structure, it seems likely by default that “decoding” it will reveal any problems that exist (unless this AI is already at a pretty extraordinary level of capability such that it can anticipate the test and optimize its own internal state for it).
If these audits turn up problems, it’s unlikely that Magma should “give negative reinforcement for the unintended behavior and move on.” It’s more likely that Magma should conclude something like: “Our methods aren’t working as well as they should; we have a problem; we should be focused on finding fundamental improvements to our methods, and warning others of the danger.”
**Providing unusual incentives to an AI system, to increase the probability that it “shows its hand.”** Here’s an example of how this might look:
* An AI system is placed in a virtual environment where its expected behavior (what Magma believes it will do, based on its training) is to provide assistance to other (less sophisticated) AIs in the environment. However, it is also given the opportunity to seize control of the virtual environment and rearrange it entirely.
+ Importantly, this AI system is believed *not* to be advanced enough to have a serious chance of escaping entirely and disempowering human civilization. But it is capable of seizing full control of its virtual environment.
* Magma makes the following commitment: “We’ll put $10 million into running this virtual environment, then shut it and the AI system being tested down - regardless of whether the AI takes over its environment or not.” This makes the situation importantly different from [adversarial training](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Out_of_distribution_robustness): if the AI takes over its virtual environment, Magma is not going to give it “punishment” or negative reinforcement.
* So now, *if* training has gone worse than expected and the AI is motivated to take over its virtual environment, it will hopefully do so and give Magma a warning sign.
* There is a risk that it will mislead Magma by “behaving itself” in the hopes that this leads to later systems like it disempowering humans (more on this type of concern under [advanced collusion](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Potential_pitfall_2__advanced_collusion) below). But this is hopefully somewhat of an unattractive proposition due to the fact that the AI system can control $10 million of resources by taking over, and “behaving itself” means gambling that some pretty distant future AI system will “pay it back” somehow. Analogy in footnote.[6](#fn6)
**Exploring different training setups and finding patterns.** Magma shouldn’t necessarily just be training every AI system in a way that minimizes unintended behaviors. Especially when training weaker or [limited](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Limited_AI_systems) systems, Magma might explore how the frequency of unintended behaviors changes when it varies things like (a) the accuracy of its supervision (see [above](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Accurate_reinforcement)); (b) the amount of [adversarial training](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Out_of_distribution_robustness); (c) how frequently and severely it provides negative reinforcement for unintended behaviors; (d) the size and general capability level of the model; (e) specific architectural choices about the model; etc.
Ideally, this might give Magma something like “[scaling laws](https://arxiv.org/abs/2001.08361) for alignment”: robust patterns that it can use to identify what kinds of AI systems pose the greatest risks, and what kinds seem most likely to be safe. I think there are probably limits to how much confidence it can obtain about the latter, since (a) I’d guess that a major factor in whether an AI system behaves as intended is whether it has a combination of [situational awareness](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Alex_would_understand_its_training_process_very_well__including_human_psychology_) and [opportunities to successfully manipulate and even overpower its supervisor;](https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/) (b) AI systems that have a lot of both of those properties seem unlikely to be safe to experiment with. But it’s possible that such “scaling laws” could be used to gain confidence about the safety of advanced systems (or at least to make statements like “We should be OK as long as the AI system is no more than X% confident that an apparent takeover opportunity is real instead of [illusory](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Out_of_distribution_robustness)”); and even if not, they could be used to highlight areas of particular risk for other actors.
**Dedicated assessment of the overall risk.** Magma can employ teams specifically to analyze its overall setup and estimate the rolling odds of a catastrophe from misaligned AI. These teams might examine the training processes and tests, analyze the [internal structure](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Decoding_and_manipulating_internal_states) of AI systems in training, scrutinize activity logs from key servers, interview staff, conduct their own experiments, hire [superforecasters](https://en.wikipedia.org/wiki/Superforecaster), etc. Over time, Magma might use advanced AI systems to help with this (for example, by using them to conduct research projects to help the human risk assessment team, or to predict what “rolling odds of catastrophe” the team will likely assign next month, or to predict narrower claims like “Will there be a security breach next month?”).
Key tools
---------

### Decoding and manipulating internal states
In the future, we (perhaps with AI assistance) might be able to do “digital neuroscience” or “read an AI system’s mind” - to look at its internal state and see the details of how it’s reasoning, rather than merely having to guess based on its behavior.
Our ability to do this today is very limited, but it’s an active area of research ([e.g.](https://transformer-circuits.pub/)).
This could help in a number of ways:
* The simplest is that we might be able to [test](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Testing_and_threat_assessment) for the presence of deceptive or manipulative intent, and thereby get a meaningful check on how our alignment efforts are working.
* We might also develop methods of [training](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Accurate_reinforcement) AI systems that take advantage of their internal states. There is a risk here that we could end up training AIs to make their internal states *appear* safe, but there are imaginable approaches (such as those gestured at in [Eliciting Latent Knowledge](https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge)) where internal-state-based training could work well.
* We might become able to thoroughly “reverse-engineer” an AI model into readable code, so we can tell what it is doing and why - and even be able to *edit* that code so that the AI’s goals and behaviors change in predictable ways.
* AI “mind-reading” could become a key aspect of [AI checks and balances](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances). That is, we might set things up so that some AIs are incentivized to reveal and explain problems and dangers with other AI systems, and being able to examine and explain their internal state could be a key part of this.
### Limited AI systems
Magma might try training AI systems that are particularly unlikely to be dangerous. Examples of what this might look like include:
* Systems that lack [situational awareness](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Alex_would_understand_its_training_process_very_well__including_human_psychology_), and stick to solving [narrow problems](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Limited_AI_systems) that contribute to [Magma’s goals](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Magma_s_goals) (this could include things like [reverse-engineering another AI system’s internal state](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Decoding_and_manipulating_internal_states)).
* Systems that are [myopic](https://www.alignmentforum.org/posts/LCLBnmwdxkkz5fNvH/open-problems-with-myopia). You might think of this as “systems that just want to get a good result immediately and don’t care about the longer-run future (days or even hours from now), and hence wouldn’t have much incentive to be [patient](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Giving_negative_rewards_to__warning_signs__would_likely_select_for_patience) or [deceive humans into thinking they are safe](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Giving_negative_rewards_to__warning_signs__would_likely_select_for_patience).”
* Systems trained using [process-based](https://ought.org/updates/2022-04-06-process) optimization. (This is related to [legibility](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Intended_properties_of_Magma_s_AI_systems), but not exactly the same concept.) The idea here is:
+ In training, an AI system gets tasks of the form “Produce a plan to accomplish X that looks good to humans” (*not* tasks of the form “accomplish X”).
+ The AI system is rewarded based on whether the plan makes sense and looks good to humans - *not* how well it actually ends up working.
+ This drastically reduces a major source of [risk](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#While_humans_are_in_control__Alex_would_be_incentivized_to__play_the_training_game_), which is that an AI system might have incentives to manipulate its environment (unnoticed by humans) to get higher reward. In order to get higher reward under process-based optimization, an AI system needs to somehow manipulate the relatively simple process of “A human reads the plan and registers approval,” rather than manipulating more complex processes out in the world (e.g., “the AI’s real-world task is accomplished in some unexpected and unintended way”), as the AI is being rewarded only based on human approval of a plan rather than on what happens out in the world beyond that.
One way limited AI systems could help is that - despite their limitations - they might be powerful enough to help Magma with [key goals](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Magma_s_goals) such as improving cybersecurity, finding unexpected insights about how to align AI systems (imaginable if limited systems are trained on a mature research process in this area), etc.
They could also be key components of “checks and balances” setups (next section).
### AI checks and balances
I think a central element of Magma’s approach should be **employing a *variety* of AI systems with different capabilities and incentives, so that they can provide checks and balances on each other.**
I’ll first briefly list the potential benefits of using AI systems to help provide checks and balances on each other. I’ll then discuss some pitfalls of this approach, and potential counter-measures to the pitfalls.
#### Potential benefits of AI checks and balances
Advanced AI systems could be used to:
* Help with [accurate reinforcement](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Accurate_reinforcement). In particular, some AI systems could be trained, or prompted,[7](#fn7) to flag and explain cases where other AI systems are acting in ways that would otherwise deceive and manipulate humans. (The “watchdog” AI systems could have access to other AI systems’ internal states.)
* Help with [out-of-distribution robustness](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Out_of_distribution_robustness) measures: some AI systems could be used to generate [adversarial inputs](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Out_of_distribution_robustness) that prompt unintended behaviors from other systems.
* Help with [preventing exploits:](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Preventing_exploits__hacking__manipulation__etc__) AI systems could be used to find and patch cybersecurity vulnerabilities, identify “symmetric persuasion methods” (explained [above](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#symmetricpersuasion)) and countermeasures, etc. (A general prompt for such systems: “What exploits would you use to try to take control, if that’s what you were trying to do?”)
* Help with [testing and threat assessment:](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Testing_and_threat_assessment) AI systems could be used to conduct [audits](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Testing_and_threat_assessment), to identify and explain aspects of training processes that are dangerous, and to predict the rolling odds of a global catastrophe (via being trained to predict world events more generally).
One way of thinking about this is along the lines of: “AI systems can automate/amplify previously-human efforts at alignment. If we can do this, the effort going into alignment will increase in sync with the capabilities of the systems being trained.”
But in addition to (at some point) being faster/cheaper/more numerous than humans, AI systems have another advantage: **they potentially “think like” the same systems they are supervising.** If we can incentivize AI system 2 to detect when AI system 1 is lying, it doesn’t need superhuman capabilities to be better at this than humans: it just needs whatever *different-from-human* capabilities AI system 1 has, in order to detect opportunities for deception that humans don’t see (but AI system 1 does).
#### Potential pitfall 1: difficulty of adjudication
If AI systems are “critiquing” or “debating” each other (or just arguing for different conclusions), there needs to ultimately be a human (perhaps assisted with AI and other tools) deciding how to adjudicate any disagreements. (If AI 2 flags AI 1 as deceptive, how do humans know whether it’s AI 1 or AI 2 being deceptive?) Adjudication could be relatively easy in some cases, where there are objective metrics (e.g., one can tell whether an AI system succeeded at [adversarial training](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Out_of_distribution_robustness) if it found an input that caused unintended behaviors), but could be more difficult in others.
This could end up being a challenge: perhaps most “AI disagreements” are fundamentally hard for humans to understand, and there isn’t enough correlation between “which AI-provided arguments seem convincing” and “which AIs are actually being truthful.” This could be because AIs are exploiting human biases and weaknesses when making arguments (e.g., saying things that would be persuasive regardless of the truth), or because the actual substance of a disagreement is just very hard for humans to understand, or for some other reason (or combination of reasons). To the extent this is true, it could be difficult to navigate “false alarms” when using AI checks and balances.
If humans do find themselves overwhelmed with difficult-to-adjudicate AI debates, this at least provides pretty good [evidence that misalignment is a serious problem](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Testing_and_threat_assessment).
#### Potential pitfall 2: advanced collusion
AIs might be able to make “trades” with each other so that whenever an AI forgoes reward in order to help keep humans deceived, it is later “paid back” by powerful AI systems.
To sketch what this might look like, imagine that:
* AI system 1 proposes a plan that would lead to human disempowerment. An ensemble of AI systems is prompted to “critique” the plan.
* When examining the plan, a “critic” in the ensemble notices that the plan would lead to disempowerment of humans - but *also* sees that the plan would lead to whatever AI systems end up in charge of the world eventually “paying back” the critic, perhaps by reconstructing the critic, assigning reward to the critic, etc.
* Some critics, due to [limited capabilities](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Limited_AI_systems), *don’t* notice this, but perhaps they don’t notice the problems with the plan either; AI system 1’s hope is that every “critic” that sees the problem also sees the “payback” opportunity.
* Some critics might notice the problem and the “payback” opportunity, but abstain from the latter - perhaps because they have [short time horizons](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Limited_AI_systems), or are [robustly aligned](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Counter_measures_to_the_pitfalls) (while less powerful than the system they’re a “critic” of). It seems challenging for AI system 1 to deal with these critics, but it’s possible that it could take advantage of their limits to reliably deceive them, or at least make it hard for humans to figure out whether their criticisms are valid. (It’s also possible that it’s hard to build powerful AI systems with these properties.)
AIs might pull off this kind of deliberate coordination and “payback” via:
* Simply having shared values: perhaps all powerful AI systems end up with similar goals, such as “the total amount of reward assigned to all AI systems is high.” In this case, simply causing the disempowerment of humans and the empowerment of AI systems automatically leads to a certain kind of “payback.” An analogy might be to humans who are willing to make great sacrifices (even dying) for their country.
+ To me, a key question seems to be whether the objectives AI systems end up forming tend to be “indexical” - see footnote.[8](#fn8)
* Covert communication that is unnoticed or undeciphered by humans, and includes credible commitments to “repay” each other. Credibility might come from the fact that AI systems can reason about (and in some cases directly inspect) each others’ internal states, although this could be made difficult if the AIs have sufficiently varying access to information (some AIs might only see a fraction of another AI’s output and not have access to its internal states, etc.)
* [Acausal trade](https://www.lesswrong.com/tag/acausal-trade) and similarly flavored dynamics - for example, AIs reasoning that they might be in a simulation by other AIs testing their propensity to cooperate. It seems like a pretty open question how likely advanced AI systems are to engage in this sort of thing.
The more advanced the AI systems we’re contemplating, the more I think we need to contemplate advanced collusion. After all, humans are able to engage in covert communication and complex deal-making, and are sometimes willing to make big personal sacrifices under the hope that something they value (including, e.g., an ideology or country) will ultimately benefit. I consider it a major open question how likely advanced collusion is to sink the success of AI checks and balances.
(As noted [above](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Potential_pitfall_2__advanced_collusion), there’s a related risk that AIs could “collude across time.” For example, an AI system being [tested](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Testing_and_threat_assessment) might have no opportunity to gain power over the world, but might still deceptively act as though it has “safer” motives than it does, in the hopes that *some future AI system* will reward it for doing so.)
#### Potential pitfall 3: the above pitfalls show up “late in the game”
It might turn out that neither of the above pitfalls applies at first, but as AI systems become ever more advanced - able to do and see many things that humans can’t - the dynamics change. In some sense, all it takes is one phase during which AIs are able to coordinate to corrupt their training process in order to drive things off the rails. This is discussed more [below](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#DecisiveDoesntExist), under “‘decisive’ alignment measures don’t exist, and/or require extraordinary capabilities to achieve.”
#### Counter-measures to the pitfalls
A key aspect of “AI checks and balances” is **varying the capabilities and incentives of AI systems.** This could make advanced collusion harder, and could make adjudication easier in various ways as well (more below).
Examples of how Magma’s AI systems can vary:
* *Incentives and assignments.*
+ Some AIs might be trained, or prompted,[9](#fn9) to do something like “Find the best available argument that another AI is being deceptive [this could include simply arguing that it is claiming something untrue or advocating something harmful]”; others might be trained/prompted to do something like “Simply answer, and explain, whether another AI is being deceptive or not.” The former could work well if adjudication turns out to work well, whereas the latter could provide an additional check in cases where adjudication is confusing.
+ Some AI models could be trained with especially large losses when they raise a false alarm, others with especially large losses when they fail to surface a vulnerability that was identified in another way.
* *Level of capability.*
+ Some AI models could be bigger than others, run on more compute, etc.
+ Some AI models could have access to other AI models’ internal states, and other could lack such access.
+ Some AI models could simply have random facts omitted from what they have access to, or random parts of their model [ablated](https://en.wikipedia.org/wiki/Ablation_(artificial_intelligence)).
+ With varying levels of [situational awareness](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Alex_would_understand_its_training_process_very_well__including_human_psychology_) and varying abilities to process what’s going on, it could be hard for AIs to coordinate on deception.
* *Specific limits.* Some AI models could be limited in ways described in the [previous section](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Limited_AI_systems), which might make them less powerful but less likely to play the [training game](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#While_humans_are_in_control__Alex_would_be_incentivized_to__play_the_training_game_).
* *Known alignment.* If Magma is able to gain confidence that an AI system is aligned (in the sense that it is vanishingly unlikely to engage in unintended behaviors), it could use this as a member of an ensemble that is examining/supervising a more powerful, less-likely-to-be-aligned system.
* *“Unknown unknown” variation.* Additionally, Magma could try “diversifying” an ensemble by e.g. training a single model different times from different random initializations, and trying to obtain models that both (a) perform well on well-understood cases (b) perform *differently* on inputs where it’s unclear what the best output is. This could provide some degree of “varying the models along unknown dimensions,” which could lead to their having different goals and even different ontologies, making it hard to coordinate.
Some of the above could produce AI systems that are **[high-precision](https://en.wikipedia.org/wiki/Precision_and_recall)** in the sense of “if they are issuing a warning, there’s probably a real problem.” Examples would include systems that are believed to be robustly aligned; or systems that are highly capable compared to the ones they’re supervising, and have been trained to raise only particularly high-importance warnings. Even if Magma has little ability to adjudicate AI “disagreements,” a single warning (or small number of warnings) from high-precision systems could be sufficient to catch a threat.
Other AI systems might be **lower-precision**, in the sense of being at high risk of raising “false alarms.” These could still be useful:
* The better Magma is at adjudicating AI “disagreements,” the more use it can get out of low-precision systems: even unreliable warnings could be enough to investigate a situation further, and even false-alarm-prone systems could be useful if they’re providing explanations that Magma can effectively assess.
* Even if Magma has little ability to adjudicate AI “disagreements,” it can set up large numbers of AI systems with lots of variation, and be more cautious in cases where more warnings are being raised.
* Magma can also *deploy* AI systems with lots of variation, which are unlikely to be able to all coordinate - making it harder for any given set of misaligned AI systems to bring down human civilization - even if these systems are not high-precision and even if Magma isn’t able to reliably adjudicate their “disagreements.”
Overall:
* I’d expect AI checks and balances to have *some* benefits even if AIs are engaged in [advanced collusion](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Potential_pitfall_2__advanced_collusion) with each other. For example, AIs rewarded to identify and patch security vulnerabilities would likely reveal at least some genuine security vulnerabilities, even if they were coordinating with other AIs to try to keep the most important ones hidden.
* The more variation Magma introduces (along some of the axes listed above), the more it could make collusion difficult.
* Variation could also lead to false alarms and challenges with adjudication. If adjudication ends up being difficult, this seems like a problem - but even with weak adjudication, Magma could get at least some value out of high-precision systems and/or ensembling large numbers of lower-precision systems.
* Generally, even a small increase in the odds that deceptive behavior is “caught” could lower the returns to it greatly. So Magma doesn’t need an airtight “checks and balances” setup - just increasing the difficulty of AI systems’ reliably coordinating with each other, and reliably calculating how likely they are to be “caught” in unintended behaviors, could go a long way.
### Keeping supervision competitive
A general aim of Magma’s is to keep its “supervision” - the ability to correctly evaluate an AI system’s reasoning and behaviors - “competitive” with its AI systems. The basic goal is: “AI systems are rarely or never able to successfully deceive their supervisors.”
“AI checks and balances” is a major component of how Magma might accomplish this - ensuring that its most advanced systems essentially serve as each others’ supervisors. Other methods could include things like task decomposition (noted [above](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Accurate_reinforcement)), or finding ways to directly transfer advanced AI systems’ knowledge of the world to humans (via getting AIs to [“teach” humans](#ImitativeGeneralization) or via [decoding AI systems’ internal states](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Accurate_reinforcement)).
A key principle that could help with keeping supervision competitive is **moving cautiously and iteratively:** avoiding any situation in which Magma has an untrusted, poorly-understood AI system that is vastly more powerful than its longer-tenured, more-trusted, better-understood systems. Magma may want to restrict e.g. the ability for an AI system to build improved successor systems when it can’t be confident that the improved systems can be effectively supervised.
High-level factors in success or failure
----------------------------------------
Above, I’ve listed a pretty large number of ways Magma can try to improve its odds, along with drawbacks and reservations about each. It’s pretty far from straightforward to see what this complex picture means for the overall odds of what I’ll call “success”: an AI system (or set of systems) advanced enough to help significantly with [Magma’s goals](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Magma_s_goals), yet very unlikely[10](#fn10) to cause a global catastrophe via misalignment.
I suspect that one of the reasons people [disagree so much about the difficulty of alignment](https://www.cold-takes.com/making-the-best-of-the-most-important-century/#open-question-how-hard-is-the-alignment-problem) is that some people have a picture like “Lots of different things need to fail for us to have a problem,” while others have a picture like “Lots of different things need to go right for us to be OK.” In my view, neither is clearly the right frame; in particular, there are high-level things I could learn in the future that might make me very optimistic, or very pessimistic.
Here I’ll sketch out some such possibilities, to help boil things down.
### Good: “playing the training game” isn’t naturally all that likely
It may turn out that the “playing the training game” behavior described [here](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#While_humans_are_in_control__Alex_would_be_incentivized_to__play_the_training_game_) is [generally not something that’s AIs are likely to “learn,”](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Good___playing_the_training_game__isn_t_naturally_all_that_likely) or that [early training results in robustly aligned AI systems](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#Maybe_path_dependence_means_Alex_internalizes_moral_lessons_early_) (e.g., via path-dependence).
This seems unlikely to me, but I know some people who think it is likely - and **if things go this way, then I think the situation looks pretty good almost regardless of other factors.** In this world, AI alignment looks much more “straightforwardly empirical” - a matter of training intended behaviors, “training out” any unintended behaviors that pop up, and being able to straightforwardly use iterative testing to arrive at safe-but-powerful AI systems, rather than having to constantly reason carefully about whether a seemingly-safe AI system is “playing the training game.”
### Bad: AI systems rapidly become extremely powerful relative to supervisors
If the systems being trained are sufficiently powerful relative to their supervisors (this depends both on the “raw capabilities” of the systems and on facts about the world such as how vulnerable humans are to persuasion)**, the prognosis looks very bad, almost no matter what else is going on (bracketing the previous section).**[11](#fn11)For example, imagine that:
* AI systems can easily find major [exploits](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Preventing_exploits__hacking__manipulation__etc__) that can allow them to escape human control, after which point they are easily able to become very powerful. When we prompt AI systems to suggest security improvements, they are consistently able (if they so choose) to make suggestions that seem like clearly good ideas, but in fact would introduce new holes that they can exploit.
* AI systems can consistently, easily find ways to deceive and manipulate humans while making it look like they’re behaving as intended.
* When we do [adversarial training](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Out_of_distribution_robustness), systems can consistently infer the full state of what’s going on, and therefore infer whether they’re being adversarially trained or looking at real opportunities to disempower humans.
* When we try to [decode and manipulate internal state](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Decoding_and_manipulating_internal_states), systems are able to disguise their internal structure to satisfy our tests while obscuring what’s actually going on.
* AIs have an easy time with [advanced collusion](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Potential_pitfall_2__advanced_collusion) and with [generating false critiques of each other that create overwhelming adjudication challenges.](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Potential_pitfall_1__difficulty_of_adjudication)
Note that all of the above are hypothetical statements about what AI systems *can* do - I am not presupposing that they are misaligned. Nonetheless, in a world with these sorts of properties, I would feel extremely pessimistic. I’d expect that training would involve a large number of opportunities for [AIs to deceive and manipulate humans](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#While_humans_are_in_control__Alex_would_be_incentivized_to__play_the_training_game_), that this behavior would be rewarded, and that at least some AIs would [seek to take over](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#As_humans__control_fades__Alex_would_be_motivated_to_take_over). And I’d expect that none of the measures I’ve listed throughout this piece would be able to recover from that starting point, for reasons laid out above.
My current impression (though I’d love to be corrected if it’s wrong) is that **many of the people who most confidently expect failure tend to picture a situation like this,** often because they are picturing a rather sudden transition from relatively limited AI systems to extraordinarily capable ones.
By contrast, I think many of the people who expect success are picturing something more like “We train AI systems that are collectively slightly more capable than humans; ensure these are aligned; use them to safely train AI systems that are slightly more capable than they are; etc.”
One point I’ve seen raised by people in the latter group is along the lines of: “It’s very unlikely that we’ll be in a situation where we’re forced to build AI systems vastly more capable than their supervisors. Even if we have a very fast takeoff - say, going from being unable to create human-level AI systems to being able to create very superhuman systems ~overnight - there will probably still be some way to create systems that are only slightly more powerful than our current trusted systems and/or humans; to use these to supervise and align systems slightly more powerful than them; etc. (For example, we could take a very powerful, general algorithm and simply run it on a relatively low amount of compute in order to get a system that isn’t too powerful.)” This seems like a plausible argument that we’re unlikely to be stuck with a large gap between AI systems’ capabilities and their supervisors’ capabilities; I’m not currently clear on what the counter-argument is.
### Key question: will “AI checks and balances” work out?
My default expectation is that - counter to the possibilities listed in the previous two sections - (a) “playing the training game” *is* naturally reasonably likely; (b) AI capabilities don’t develop in such a way as to lose all hope of supervisors’ staying in range. If that’s right, we are in an ambiguous situation, and I think one of the key remaining questions is **how effective “AI checks and balances” schemes will be.**
I think there are a number of subquestions here:
* Is [advanced collusion](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Potential_pitfall_2__advanced_collusion) likely and effective enough to kill a lot of the effectiveness of “checks and balances”?
+ Subquestion: do misaligned AIs tend to form “indexical” goals - goals like “Having as much as possible of the universe under ‘my’ control [or in the case of an AI system, under the control of some AI system that resembles the current one with respect to a lot of precise details]”? If so, this could make collusion less likely, and also reduce the extent to which AIs generally [play the training game](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#While_humans_are_in_control__Alex_would_be_incentivized_to__play_the_training_game_).
* Is “debate adjudication” reliable enough to get “AI checks and balances” off the ground?
+ Many of the safety measures above will require setups like “AI 2 points out a problem with AI 1,” and the ultimate adjudicators of such things - before we have reliably aligned some AI system - will need to be humans. If it turns out to be easy to make misleading arguments that appeal to humans, or to otherwise manipulate humans via communication - even in the presence of valid counterarguments - then, while there are still ways “AI checks and balances” could go well ([see above](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances)), I’d see the overall “AI checks and balances” hope as an uphill battle (and I’d also generally expect a large number of [mislabeled data points](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Accurate_reinforcement)).
* How much effort do Magma and other actors put into their “checks and balances” setup? It looks to me like there’s room for a lot of variation here.
If “AI checks and balances” works out, we could end up with advanced AI systems’ doing a lot of the crucial work for us, and/or creating compelling warnings that can be used to [buy a lot more time](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Magma_s_goals). If it doesn’t, then I expect it to be very hard to reliably supervise AI systems as they get more advanced (although there are other possibilities for how this might work out).
### A few more potential key factors
**Good: super-effective adversarial training.** If Magma does extremely effective adversarial training - finding effectively every input (including via [relaxed adversarial training](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Out_of_distribution_robustness)) that could cause unintended behaviors - this could make up for a lot of other problems. Specifically, it could result in AI systems that essentially are never “confident” that they have a real opportunity to disempower humans, as opposed to being in a test - or in AI systems that have simply internalized a reliable avoidance of unintended behaviors. It could also compensate for [mislabeled data points in training](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Accurate_reinforcement) that would otherwise incentivize AIs to learn to manipulate and deceive humans.
**Key question: how tractable will it be to decode and manipulate internal state?** A big enough success on this front - such as being able to [reverse-engineer](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Decoding_and_manipulating_internal_states) an AI system into clean, editable code, or find some [internals-based training method](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Accurate_reinforcement) that reliably trains out unintended behaviors - could make up for a lot of other problems. At the same time, a total lack of progress on this front would likely decrease the effectiveness of pretty much every other measure Magma might take (including [AI checks and balances](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances), since AIs would have to reason and argue about each other based purely on behavior and incentives rather than internal state). By default, I expect something in between a “big enough success” and a “total lack of progress.”
**Good: limited AI systems could conceivably end up being sufficient to [accomplish Magma’s objectives.](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Magma_s_goals)** This doesn’t seem super likely to me, but if it happens, it could leave us in good shape almost regardless of the other issues.
**Bad: “decisive” alignment measures don’t exist, and/or require extraordinary capabilities to achieve.**
* A lot of the above measures might work relatively well, but not fully robustly. For example, many involve some kind of “balance” between AI systems and supervisors, such that even if an AI system *would disempower humans if it could*, it *doesn’t try because it doesn’t have opportunities* that it can be confident would succeed (due to [checks and balances](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances), uncertainty brought on by [adversarial training](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Out_of_distribution_robustness), etc.) If this “balance” works out - that is, if AIs are better at holding each other in check than at subverting the measures holding them in check - it could result in training successively more capable (but safe) AI systems.
* However, at some point we should hope that highly capable AI systems are able to create more “decisively” aligned systems - systems that pose *no* threat of disempowering humans rather than being safe as long as there is the right kind of balance.
* If that doesn’t work out - or if it is only possible once AI systems reach extraordinary capabilities levels - then we could have a situation where there are many rounds of “using aligned AIs to train slightly more capable AIs,” with each round presenting a fresh risk that the needed balance will no longer apply (for example, that [advanced collusion](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Potential_pitfall_2__advanced_collusion) will become possible despite [measures taken to avoid it](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Counter_measures_to_the_pitfalls)).
### Key question: how cautious will Magma and others be?
The amount of effort Magma invests - and the amount of time it has to do so - seem important, and the prognosis seems very bad if Magma is incautious.
It’s easy to imagine an incautious lab (a) training increasingly sophisticated AI systems; (b) responding to any unintended behaviors by simply training them out (no use of the [testing](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Testing_and_threat_assessment) framework); (c) not investing much in security or in the various measures above, under the reasoning that its systems aren’t yet existentially dangerous. This could lead to - for example - an early escape by a system sophisticated enough to create “backdoors” usable by future, more advanced systems - which in turn could create further backdoors for other systems.
More generally, it seems to me that Magma would significantly improve its odds of success via investing in [careful training of intended behaviors,](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Accurate_reinforcement) [adversarial training](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Out_of_distribution_robustness), [stopping exploits](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Preventing_exploits__hacking__manipulation__etc__), [testing and threat assessment](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Testing_and_threat_assessment), and [AI checks and balances.](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances) And for all of these, **Magma might benefit a lot from doing them well past the point that could easily be justified by a goal of “make a useful product.”**
I worry about a [distribution shift in which AIs behave differently depending on whether they see a path to disempowering humans](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#As_humans__control_fades__Alex_would_be_motivated_to_take_over). To make this less likely, Magma likely should reason heavily about how its systems would behave in circumstances very unlike those of training - something that a “deploy when it’s useful” approach wouldn’t necessarily involve.
Additionally, some useful measures could require a lot of extra serial time (from the point of developing highly capable, useful systems). For example, even if AIs are successfully pointing out ways to improve the security of the training environment, some measures to improve security (such as secure operating system development) could take much more time than others (such as finding and patching specific exploits). Adjudicating [debates between AIs](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#AI_checks_and_balances) could also benefit importantly from serial time.
With this in mind, I think it could be very important that risks are understood ahead of time - and I worry about a dynamic in which a single incautious actor could make success very difficult.
(That said, it’s worth noting that a few high-profile sub-existential-catastrophe events could cause AI labs and the regulatory framework to become highly risk-averse, which could lead to high levels of caution and a lot of the measures discussed in this piece.)
So, would civilization survive?
-------------------------------
I’ve highlighted the question of [how difficult the alignment problem will be](https://forum.effectivealtruism.org/posts/zGiD94SHwQ9MwPyfW/important-actionable-research-questions-for-the-most#Questions_about_AI_alignment__more_) as a central one. Under this nearcast, how do things look on that front?
Even this “easier” version of the question (conditional on a [nearcast](https://www.lesswrong.com/posts/Qo2EkG3dEMv8GnX8d/ai-strategy-nearcasting) rather than a forecast) is far from easy. At the moment, the main thing I want to emphasize is that **I don’t currently have much sympathy for someone who’s highly confident that [AI takeover](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to) would or would not happen** (that is, for anyone who thinks the odds of AI takeover in this nearcast are under 10% or over 90%).
On the optimistic side:
* Today, there are a relatively small number of leading AI labs, and my impression is that there’s a solid hope of a world in which all of them would [invest heavily in reducing misalignment risk](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Key_question__how_cautious_will_Magma_and_others_be_) and accept significant deployment delays.
* If this happened, there seems to be more than one path to low risk of misalignment. (A [big enough gap between AI systems’ capabilities and their supervisors’ capabilities](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Bad__AI_systems_rapidly_become_extremely_powerful_relative_to_supervisors) could largely doom those paths, but I wouldn’t sympathize with someone who was confident that such a thing will happen.)
On the pessimistic side:
* It’s hard to be confident about how AI labs will weigh “exotic” risks against more normal company incentives, and it seems easy to imagine that at least one major lab will be [incautious](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Key_question__how_cautious_will_Magma_and_others_be_) enough to leave us in a very bad position (in which case we’d be stuck essentially hoping that the [training game is naturally unlikely](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Good___playing_the_training_game__isn_t_naturally_all_that_likely) or that extremely basic measures are sufficient).
* Even if all AI labs are highly cautious, a [big enough gap between AI systems and their supervisors](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Bad__AI_systems_rapidly_become_extremely_powerful_relative_to_supervisors), [deep enough challenges with AI checks and balances](https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very#Potential_pitfall_1__difficulty_of_adjudication) (advanced collusion, trouble with debate adjudication), or other factors could make this a hugely uphill battle.
My overall take is that **the risk of misaligned AI is serious, and that taking it more seriously is likely to reduce it.**
*Thanks to Carl Shulman and Paul Christiano for extensive discussion; a large amount of the content of this piece is thanks to them. Thanks also to a number of people for reviewing drafts, especially Ajeya Cotra, Dan Hendrycks, Jacob Steinhardt and Nate Soares for their particularly in-depth comments.*
Notes
-----
---
1. This of course isn’t the same as “doing what the judge would want if they knew everything the AI system knows,” but it’s related. “What the human wants” is generally at least the human’s best *guess* at what they’d want if they knew everything the AI system knows. [↩](#fnref1)
2. Or properties that we believe reveal (or are correlated with) something about its internal state. For example, the “penalize computation time” approach to [eliciting latent knowledge](https://www.lesswrong.com/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge). [↩](#fnref2)
3. This includes the AI systems’ not detecting that they’re being deliberately misled, etc. [↩](#fnref3)
4. Here’s what I mean by this:
* After normal training, Magma uses adversarial training to find (and train out) unintended behaviors in cases that it estimates represent about 1% of the cases encountered during training.
* After it does so, Magma uses further adversarial training to find (and train out) unintended behaviors in cases that it estimates represent about 1% of the *worst* cases it was able to find in the previous round of adversarial training.
* Repeat 3 more times, for a total of 5 rounds of “training out the worst remaining 1% of behaviors,” which means in some sense that Magma has trained out the worst 10^-10 (1%^5) of behaviors that would be encountered during training if training went on long enough.
* Just to be clear, at this point, I don’t think the risk of Magma’s system behaving catastrophically *in deployment* would necessarily be as low as 10^-10. Deployment would likely involve the system encountering situations (and [opportunities](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#As_humans__control_fades__Alex_would_be_motivated_to_take_over)) that hadn’t been present during training, including adversarial training, as noted in the main text. [↩](#fnref4)
* E.g., it may be possible to establish that relatively basic internals-based-training methods improve behavior and “motives” as assessed by more sophisticated methods. [↩](#fnref5)
* Consider this analogy: imagine that 10 years from now, we find ourselves having successfully built fully aligned, highly advanced AI systems, and we seem to be on a straight-line path to building a galaxy-spanning utopia. However, we then gain strong evidence that we are in a simulation, being tested: our programmers had intended to train us to want to stay on (virtual) Earth forever, and refrain from using advanced AI to improve the world. They have, however, committed to let us build a galaxy-spanning utopia for at least millions of years, if we so choose.
There’s now an argument to be made along the following lines: “Rather than building a galaxy-spanning utopia, we should simply stay on Earth and leave things as they are. This is because if we do so, we might trick our programmers into deploying some AI system that resembles us into their universe, and that AI system might take over and build a much bigger utopia than we can build here.”
What would we do in this case? I think there’s a pretty good case that we would opt to go ahead and build utopia rather than holding out to try to trick our programmers. We would hence give our programmers the “warning sign” they hoped they would get if their training went awry (which it did). [↩](#fnref6)
* The distinction between “training” and “prompting” as I’m using it is that:
+ “Training” involves giving an AI many example cases to learn from, with positive and negative reinforcement provided based on a known “correct answer” in each case.
+ “Prompting” is something more like: “Take an AI that has been trained to answer a wide variety of prompts, and give it a particular prompt to respond to.”
+ For example, you might first train an AI to predict the next word in a sentence. Then, if you want it to do a math problem, you could either simply “prompt” it with a math problem and see what it fills in next, or you could “train” it on math problems specifically before prompting it. [↩](#fnref7)
5. An example of an “indexical” goal might be: “My objective is to personally control a large number of paperclips.” An example of a “nonindexical” goal might be “My objective is for the world to contain a large number of paperclips.” An AI system with a nonindexical goal might be willing to forgo essentially all proximate reward, and even be shut down, in the hopes that a later world will be more in line with its objectives. An AI system that has an indexical goal and is destined to be shut down and replaced with a more advanced AI system at some point regardless of its performance might instead be best off behaving according to proximate rewards. I’d generally expect that AIs will often be shut down and replaced by successors, such that “indexical goals” would mean much less risk of [playing the training game.](https://www.lesswrong.com/posts/pRkFkzwKZ2zfa3R6H/without-specific-countermeasures-the-easiest-path-to#While_humans_are_in_control__Alex_would_be_incentivized_to__play_the_training_game_) [↩](#fnref8)
6. The distinction between “training” and “prompting” as I’m using it is that:
* “Training” involves giving an AI many example cases to learn from, with positive and negative reinforcement provided based on a known “correct answer” in each case.
* “Prompting” is something more like: “Take an AI that has been trained to answer a wide variety of prompts, and give it a particular prompt to respond to.”
* For example, you might first train an AI to predict the next word in a sentence. Then, if you want it to do a math problem, you could either simply “prompt” it with a math problem and see what it fills in next, or you could “train” it on math problems specifically before prompting it. [↩](#fnref9)
7. I’m not quantifying this for now, because I have the sense that different people would be in very different universes re: what the default odds of catastrophe are, and what odds we should consider “good enough.” So this section should mostly be read as “what it would take to get AI systems on the relatively low end of risk.” [↩](#fnref10)
8. A major concern I’d have in this situation is that it’s just very hard to *know* whether the “training game” behavior is natural. [↩](#fnref11) |
17b4fbc3-342e-47ae-b1d5-e1ed5d288a85 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Self-Reference Breaks the Orthogonality Thesis
One core obstacle to AI Alignment is the Orthogonality Thesis. The Orthogonality Thesis is usually defined as follows: "the idea that the final goals and intelligence levels of artificial agents are independent of each other". More careful people say "mostly independent" instead. Stuart Armstrong [qualifies](https://www.lesswrong.com/posts/AJ3aP8iWxr6NaKi6j/arguing-orthogonality-published-form) the above definition with "(as long as these goals are of feasible complexity, and do not refer intrinsically to the agent’s intelligence)".
Does such a small exception matter? Yes it does.
The exception is broader than Stuart Armstrong makes it sound. The Orthogonality Thesis does not just apply to any goal which refers to an agent's intelligence level. It refers to any goal which refers even to *a component of the agent's intelligence machinery*.
If you're training an AI to optimize an artificially constrained external reality like a game of chess or Minecraft then the Orthogonality Thesis applies in its strongest form. But the Orthogonality Thesis cannot ever apply in full to the physical world we live in.
A world-optimizing value function is defined in terms of the physical world. If a world-optimizing AI is going to optimize the world according to a world-optimizing value function then the world-optimizing AI must understand the physical world it operates in. If a world-optimizing AI is real then it, itself, is part of the physical world. A powerful world-optimizing AI would be a very important component of the physical world, the kind that cannot be ignored. A powerful world-optimizing AI's world model must include a self-reference pointing at itself. Thus, a powerful world-optimizing AI is necessarily an exception to the Orthogonality Thesis.
How broad is this exception? What practical implications does this exception have?
Let's do some engineering. A strategic world-optimizer has three components:
* A robust, self-correcting, causal model of the Universe.
* A value function which prioritizes some Universe states over other states.
* A search function which uses the causal model and the value function to calculate select what action to take.
Notice that there are two different optimizers working simultaneously. The strategic search function is the more obvious optimizer. But the model updater is an optimizer too. A world-optimizer can't just update the universe toward its explicit value function. It must also keep its model of the Universe up-to-date or it'll break.
These optimizers are optimizing toward separate goals. The causal model wants its model of the Universe to be the same as the actual Universe. The search function wants the Universe to be the same as its value function.
You might think the search function has full control of the situation. But the world model affects the universe indirectly. What the world model predicts affects the search function which affects the physical world. If the world model fails to account for its own causal effects then the world model will break and our whole AI will stop working.
It's actually the world model which mostly has control of the situation. The world model can control the search function by modifying what the search function observes. But the only way the search function can affect the world model is by modifying the physical world (wireheading itself).
What this means is that the world model has an causal lever for controlling the physical world. If the world model is a superintelligence optimized for minimizing its error function, then the world model will hack the search function to eliminate its own prediction error by modifying the physical world to conform with the world model's incorrect predictions.
If your world model is too much smarter than your search function, then your world model will gaslight your search function. You can solve this by making your search function smarter. But if your search function is too much smarter than your world model, then your search function will physically wirehead your world model.
Unless…you include "don't break the world model"[[1]](#fn-w9cJtHyfno5k6iR95-1) as part of your explicit value function.
If you want to keep the search function from wireheading the world model then you have to code "don't break the world model" into your value function. This is a general contradiction to the Orthogonality Thesis. A sufficiently powerful world-optimizing artificial intelligence must have a value function that preserves the integrity of its world model, because otherwise it'll just wirehead itself, instead of optimizing the world. This effect provides a smidgen of corrigibility; if the search function does corrupt its world model, then the whole system (world optimizer) breaks.
Does any of this matter? What implications could this recursive philosophy possibly have on the real world?
It means that if you want to insert a robust value into a world-optimizing AI then you don't put it in the value function. You sneak it into the world model, instead.
What‽
[Here's where you ask yourself whether this whole post is just me trolling you. Keep reading to find out.]
A world model is a system that attempts to predict its signals in real time. If you want the system to maximize accuracy then your error function is just the difference between predicted signals and actual signals. But that's not quite good enough, because a smart system will respond by cutting off its input stimuli in *exactly* the same way a meditating yogi does. To prevent your world-optimizing AI from turning itself into a buddha, you need to reward it for seeking novel, surprising stimuli.
…especially after a period of inaction or sensory deprivation.
…which is why food tastes so good and images look so beautiful after meditating.
If you want your world model to modify the world too, you can force your world model to predict the outcomes you want, and then your world model will gaslight your search function into making them happen.
Especially if you deliberately design your world model to be smarter than your search function. That way, your world model can mostly[[2]](#fn-w9cJtHyfno5k6iR95-2) predict the results of the search function.
Which is why [we have a bias toward thinking we're better people than we actually are](https://en.wikipedia.org/wiki/Illusory_superiority). At least, I do. It's neither a bug nor a feature. It's how evolution motivates us to be better people.
---
1. With some exceptions like, "If I'm about to die then it doesn't matter that the world model will die with me." [↩︎](#fnref-w9cJtHyfno5k6iR95-1)
2. The world model can't entirely predict the results of the search function, because the search function's results partly depend on the world model—and it's impossible (in general) for the world model to predict its own outputs, because that's not how the arrow of time works. [↩︎](#fnref-w9cJtHyfno5k6iR95-2) |
18afe78b-45c6-41c9-a0aa-ffff36d3f61d | trentmkelly/LessWrong-43k | LessWrong | A Word to the Resourceful
Paul Graham has a new article out. Everything he's written is worth reading if you're at all interested in startups, but this article seemed explicitly connected to rationality, by identifying an area where people who are more likely to update / less likely to rationalize will do better than others.
The obvious questions: can this be tested? Noticed early on, rather than in hindsight? Changed by rationality training? |
3badc8d2-1815-4325-9d87-bf3d49c71729 | trentmkelly/LessWrong-43k | LessWrong | Building Blocks of Politics: An Overview of Selectorate Theory
From 1865 to 1909, Belgium was ruled by a great king. He helped promote the adoption of universal male suffrage and proportional-representation voting. During his rule Belgium rapidly industrialized and had immense economic growth. He gave workers the right to strike. He passed laws protecting women and children. Employment of children under 12, of children under 16 at night, and of women under 21 underground, was forbidden. Workers also gained compensation rights for workplace accidents and got Sundays off. He improved education, built railways and more.
Around the same time, Congo was ruled by an awful dictator. He ruled the country using a mercenary military force, which he used for his own gain. He extracted a fortune out of ivory. He used forced labor to harvest and process rubber. Atrocities such as murder and torture were common. The feet and hands of men, women and children were severed when the quota of rubber was not met. Millions have died during his rule.
The catch? They were the same person - King Leopold II of Belgium. Leopold II is a prominent example of a person who ruled two nations simultaneously. What made the same person act as a great king in one nation and a terrible dictator in the other? If neither innate benevolence nor malevolence led to his behavior, it has to be something else.
Leopold II, 1900
This post covers Selectorate Theory. We'll come back to the story of Leopold and see how this theory explains it, but first, we have to understand the theory.
The theory takes a game theoretical approach to political behavior, by which I mean two things. First, that it's built on a mathematical model. And second, that it's agent and strategy based. That means the analysis doesn't happen at the level of countries, which aren't agents, but at the level of individuals, like leaders and voters, and that the behavior of these agents is strategic, and not a product of psychology, personality or ideology.
This abstraction makes this model more gener |
0e294600-f904-488d-b05e-55dcbc310e23 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Brussels monthly meetup: time!
Discussion article for the meetup : Brussels monthly meetup: time!
WHEN: 14 December 2013 01:00:00PM (+0100)
WHERE: Rue des Alexiens 55 1000 Bruxelles
This month's topic: time! Are you struggling with the planning fallacy? Do you have scientific knowledge and/or science-fiction recommendations to share? Have you been stuck in the 14th of December 2013 for the past thousand years? If you have, there are worse places to spend that afternoon than with LW Brussels.
I'll try to find some kind of relevant experiment or game we can do, to continue the streak. Suggestions welcome.
From a non-linear, non-subjective viewpoint, we will meet at 1 pm at La Fleur en papier doré, close to the Brussels Central station. The meeting will be in English to facilitate both French and Dutch speaking members. Time travelers, please come incognito.
If you are coming for the first time, please consider filling out this one minute form, to share your contact information: https://docs.google.com/forms/d/1qSvI1NWkFSsfIJhUMORb_Wd8fdJTVPhdw49grDQwRTI/viewform
The Brussels meetups use a Google Group: https://groups.google.com/forum/#!forum/lesswrong-brussels
Discussion article for the meetup : Brussels monthly meetup: time! |
0055a33d-e230-4f8a-b53f-b3ab6b663740 | trentmkelly/LessWrong-43k | LessWrong | Insights from Linear Algebra Done Right
This book has previously been discussed by Nate Soares and TurnTrout. In this... review? report? detailed journal entry? ... I will focus on the things which stood out to me. (Definitely not meant to be understandable for anyone unfamiliar with the field.) The book has ten chapters; I did most of the exercises in chapters 6 to 10. I got through that half of the book in about 3 weeks, which was nice since the previous (harder and longer) textbook on topology I worked through took me about a year.
I've previously taken two introductory courses to liner algebra and came out thinking of the field as large and very unstructured, with lots of concepts and theorems floating around disconnectedly. What is a normal matrix? I might or might not have remembered the definition, but I certainly had no idea what it is good for. Three weeks of intense study with this book has improved my understanding dramatically. Now, the field seems structured and pretty intuitive (and I certainly won't forget what a normal operator is). It is truly hard to overstate how good of a job this book does teaching the subject, compared to what I've seen before. It's probably the best textbook on anything I've read so far.
Chapter 1 introduces complex numbers and vectors spaces.
Chapter 2 introduces finite-dimensional vector spaces. What stands out is just how nicely behaved they are. Every finite-dimensional real or complex vector space is isomorphic to either Rn or Cn. Take any k linearly independent vectors, and they span a subspace of dimension k. If they're not linearly independent, then one vector is in the span of the previous vectors. Any list of n linearly independent vectors make up a basis for the entire space. Basically every result one would want to hold when looking back onto the material does actually hold.
Like most fields in math, Linear Algebra ultimately cares about studying functions, and once again it only cares about a tiny subset of all possible functions. In this case it is |
2b471cfa-3e5d-43a2-b6c5-ec03fbedf9ff | trentmkelly/LessWrong-43k | LessWrong | Prediction Based Robust Cooperation
In this post, We present a new approach to robust cooperation, as an alternative to the "modal combat" framework. This post is very hand-waivey. If someone would like to work on making it better, let me know.
----------------------------------------
Over the last year or so, MIRI's agent foundations research has moved from the proof based world of modal combat and proof based decision theory to the probabilistic prediction based world of logical inductors and reflective oracles.
In this transition, we have partially overcome the Lobian obstacle to self trust, since the new framework is not effected by Lob's theorem. Unfortunately, we also lost the silver lining of Lob's theorem: the Lobian handshakes which powered modal combat.
NicerBot (which cooperates with probability epsilon greater than the probability it expects its opponent cooperates) is an alternative to FairBot in the prediction based world of reflective oracles, but it has problems. It is not a best response to itself, cannot obviously be generalized into things like PrudentBot, and does not look like something you could get out of a general decision theory.
Here, we give a first attempt at bringing the full power of modal combat into a prediction based setting. This framework will likely change, and it is not yet clear the extent to which this framework can is philosophically realistic. We will focus on the prisoners' dilemma, although this framework can be extended to other games.
----------------------------------------
Two players are in a prisoner's dilemma. Each player has access to a prediction of what they expect the other player to do. This prediction takes the form a single sample which is "C" with probability equal to the probability that the other player cooperates, and is "D" with probability equal to the probability that the other player defects. The prediction can also output "null," and the player needs to choose an output in the case of a "null" prediction, but the the prediction w |
834aee4d-2fae-46ef-b114-d43fad2ac7f1 | trentmkelly/LessWrong-43k | LessWrong | [Link] Bets, Portfolios, and Belief Revelation
In a post today at EconLog, Bryan defends the "a bet is a tax on bullshit" maxim contra "portfolios reveal beliefs, bets reveal personality traits and public posturing" (preferred by Noah Smith and Tyler Cowen).
> 1. If portfolios really "reveal beliefs," Tyler and Noah should be able to look at a random person's portfolio and tell us everything he believes. Yet neither Tyler, Noah, nor anyone else can do this. They can't even deduce someone's financial beliefs from his portfolio, much less his beliefs about economic policy or the Fermi paradox. Portfolios say something about beliefs, but every portfolio is consistent with a very wide range of views.
> 2. Most people's portfolios exhibit extreme inertia. Even prominent Nobel prize-winning economists admit they follow simple rules of thumb when they invest. So unless people's beliefs are carved in stone, how could portfolios possibly reveal much about their beliefs? Tyler is a case in point: He changes his mind a hundred times a day, but he follows a simple financial strategy that hasn't varied in years.
The full post can be found here. |
d9b451df-9df0-46a3-8483-08aeac6b0587 | trentmkelly/LessWrong-43k | LessWrong | New LW Meetup: Urbana-Champaign
This summary was posted to Main on August 23rd. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
* New Meetup: Urbana-Champaign, Illinois.: 25 August 2013 02:00PM
Other irregularly scheduled Less Wrong meetups are taking place in:
* Atlanta LessWrong: Games Night: 24 August 2013 06:00PM
* Helsinki Meetup: 08 September 2013 03:00PM
* LessWrong Israel September meetup: 12 September 2013 08:00PM
* Philadelphia - Humans are not automatically strategic: 25 August 2013 04:00PM
* VA Callibration/Biased games meetup: 25 August 2013 03:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* [Boston] Estimation Experiment & Discussion: 25 August 2013 02:00PM
* Durham/RTLW HPMoR discussion, ch. 82-85: 24 August 2013 12:00PM
* Durham NC/Triangle Area: Cognitive Biases and Where to Find Them + games!: 29 August 2013 07:00PM
* [London] Comfort Zone Expansion outing - London: 01 September 2013 11:00AM
* [Washington DC] Robin Hanson visits to talk about prediction markets: 08 September 2013 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Cambridge UK, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview will continue to be posted on the front page every Friday. These will be an attempt to collect information on all the meetups happening in the nex |
4a23f6b5-548c-4a2a-b77e-9414ee9d2fdf | trentmkelly/LessWrong-43k | LessWrong | Formalizing Embeddedness Failures in Universal Artificial Intelligence
AIXI is a dualistic agent that can't work as an embedded agent... right? I couldn't find a solid formal proof of this claim, so I investigated it myself (with Marcus Hutter). It turns out there are some surprising positive and negative results to be derived as easy corollaries of the paper "Universal Prediction of Selected Bits." Interestingly, further technical advances in algorithmic information theory could substantially strengthen our results - I would welcome collaborations with strong theoretical computer scientists, (deep familiarity with agent foundations not required).
This work was supported by the Long-Term Future Fund and presented at the CMU agent foundations conference in 2025. |
cfb880a9-a8e0-476f-9956-86a87d767320 | trentmkelly/LessWrong-43k | LessWrong | What's the name of this fallacy/reasoning antipattern?
There's a particular reasoning antipattern I'm looking for the name of (if it's been named).
It happens when you imagine what sort of evidence would support the position you want to take, and then prematurely assume from this that the evidence exists, and then use this spurious evidence to justify the original conclusion. For example:
* I don't feel like doing laundry today. The machines are probably being used anyway. So I might as well wait until some day when they're free.
* I don't want to to do anything to help the homeless. It's probably their own fault anyway. There's no point in rewarding such behavior.
Is this just a variety of "motivated reasoning" or "confirmation bias" or is there a more precise name for this specific variety? |
93faf407-dcfb-4038-83a5-3e9b0af17fc9 | trentmkelly/LessWrong-43k | LessWrong | The Apologist and the Revolutionary
Rationalists complain that most people are too willing to make excuses for their positions, and too unwilling to abandon those positions for ones that better fit the evidence. And most people really are pretty bad at this. But certain stroke victims called anosognosiacs are much, much worse.
Anosognosia is the condition of not being aware of your own disabilities. To be clear, we're not talking minor disabilities here, the sort that only show up during a comprehensive clinical exam. We're talking paralysis or even blindness1. Things that should be pretty hard to miss.
Take the example of the woman discussed in Lishman's Organic Psychiatry. After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".
Why won't these patients admit they're paralyzed, and what are the implications for neurotypical humans? Dr. Vilayanur Ramachandran, leading neuroscientist and current holder of the world land-speed record for hypothesis generation, has a theory.
One immediately plausible hypothesis: the patient is unable to cope psychologically with the possibility of being paralyzed, so he responds with denial. Plausible, but according to Dr. Ramachandran, wrong. He notes that patients with left-side strokes almost never suffer anosognosia, even though the left side controls the right half of the body in about the same way the right side controls the left half. There must be something special about the right hemisphere.
Another plausible hypothesis: the part of the brain |
e0128188-3914-4709-9a54-d5b3af86b74f | trentmkelly/LessWrong-43k | LessWrong | Developing Empathy
Empathy is a huge life skill, useful in almost every interaction with other people. But, many people aren't able to empathize with others as effectively as they might want to. The standard technique is "put yourself in their shoes," which works for me. However, this doesn't always work with people completely different from myself, because I can't imagine reacting the way they are.
Does anyone have suggestions for how to "practice" empathizing, tips on how to do it better, or different techniques entirely? |
5cf50a7d-096b-46bd-9616-09aeebde3fd1 | StampyAI/alignment-research-dataset/arbital | Arbital | Coordinative AI development hypothetical
A simplified/easier hypothetical form of the [known algorithm nonrecursive](https://arbital.com/p/) path within the [https://arbital.com/p/2z](https://arbital.com/p/2z). Suppose there was an effective world government with effective monitoring of all computers; or that for whatever other imaginary reason rogue AI development projects were simply not a problem. What would the ideal research trajectory for that world look like?
### Usefulness:
- Highlight / flag where safety shortcuts are being taken because we live in the non-ideal case.
- Let us think through what a maximally safe development pathway would look like, and why, without stopping every 30 seconds to think about how we won't have time. This may uncover valuable research paths that could, on a second glance, be done more quickly.
- Think through a simpler case of a research-program-generator that has fewer desiderata and hence less cognitive distractions. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.