id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
fc6d7bff-4b32-43c0-b56e-bcfcb3b41fef | trentmkelly/LessWrong-43k | LessWrong | Proposal: periodic repost of the Best Learning resources
One of the biggest benefits of LW for me, aside from specific discussions, has been finding high-quality learning resources. Since knowledge is pretty much the biggest power humans have and many of us spend a lot of time learning, learning more efficiently is extremely important - a good textbook vs. a bad one can cost a lot of time and quite probably make some of the area inaccessible.
We've had a number of threads in that direction, e.g. this
http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/
http://lesswrong.com/lw/g9l/course_recommendations_for_friendliness/
plus crumbs in the monthly media threads.
The proposal is to have these discussions periodically, especially with the great influx of top-notch full online courses from the best schools via coursera, edx, udacity, etc. After that we can wikify some of the more stable recommendations and link the wiki back to the discussions.
Please use this thread for meta-discussion, not specific recommendations. The big questions are should we have this periodically yes/no, what the period should be, at least initially and other helpful suggestions.
|
f567ea07-79bd-4dd9-a66e-aabe268a4b69 | trentmkelly/LessWrong-43k | LessWrong | Volitive Rationality
|
c15b097e-180f-40f6-8f55-49f90f51d4bf | trentmkelly/LessWrong-43k | LessWrong | Meetup : Dallas/Fort Worth Metro Area Meetup, 5/27
Discussion article for the meetup : Dallas/Fort Worth Metro Area Meetup, 5/27
WHEN: 27 May 2012 01:00:00PM (-0500)
WHERE: America's Best Coffee, Arlington
Folks, the weekly meeting for the DFW LessWrong group is still on for the Sunday on Memorial Day Weekend (5/27).
Same time, same place as usual. America's Best Coffee in Arlington from 1 to 3 PM. Last week, we competed for real estate with a religious meetup group; I'd say avoid religious topics to avoid confrontation, if possible. We want to be good stewards of this coffee shop so we get invited back.
Come on by if you are free and in the area. Also, have a great Memorial Day Weekend.
Discussion article for the meetup : Dallas/Fort Worth Metro Area Meetup, 5/27 |
9f7267fc-c655-43e1-b26a-52df4ca156f4 | trentmkelly/LessWrong-43k | LessWrong | [Link] Statistically, People Are Not Very Good At Making Voting Decisions
Link. Nothing surprising considering previous work on the subject, but a good reminder.
> A study by three scientists in the American Political Science Review finds that voters are not competent at accurately evaluating incumbent performance and are easily swayed by rhetoric, unrelated circumstances and recent events.
>
> Gregory Huber, Seth Hill, and Gabriel Lenz constructed a 32-round game where players received payments from a computer "allocator." The goal is to maximize the value of those payments.
>
> Halfway through, at round sixteen, the player had to decide whether to get a new allocator or to stick with the old one.
>
> The allocators pay out over a normal distribution based on a randomly selected mean. Getting a new allocator means that a new mean is selected. This was meant to simulate an election based on performance.
>
> The group ran three experiments where they changed some of the rules of the game in order to find out how voters could be manipulated or confused over performance. Essentially, how good were voters at accurately analyzing the performance of the "allocator?"
>
> * The first experiment merely alerted the player at round twelve that they would have the chance to pick a new allocator at round sixteen. This "election in November" reminder made the player weight recent performance in rounds 12-16 over earlier performance in rounds 1-12.
>
> * The second experiment involved a lottery held at round eight or round sixteen. The payout was either -5000, 0, or 5000 tokens. The participant was told that the lottery was totally unrelated to the current allocator, but players still rewarded or punished their current allocator based on their lottery performance.
>
> * The third experiment primed the player with a question right before the election. The question took an adapted form of either Ronald Reagan's "Are you better off than you were four years ago?" or John F. Kennedy's "The question you have to decide on November 8 is, is it good |
bfb39271-6669-41d7-9e53-dff3cfe6620e | trentmkelly/LessWrong-43k | LessWrong | Mastodon Replies as Comments
The comment section on most blogs is pretty minimal, with the real discussion happening elsewhere, but people who come to the post later won't see that discussion. One of the more unusual choices I've made with this blog is that instead of hosting comments here, I pull in and display comments people make on social media. While this started out as laziness (who wants to handle accounts for users?) over the years I've come around to thinking that this is how blog comments should normally work.
The biggest problem with this approach, though, is that it's in tension with the goals of the social media companies. Facebook, Twitter, etc want to keep you on their platform, and aren't especially interested in serving as the comment section of external blogs. On the other hand, this is potentially a really good fit for federated social media, and I've now made it so that ActivityPub replies to my blog posts will show up as comments here:
For a live example, here's a post from last week.
Integration was fast: if I fetch https://[server]/api/v1/statuses/[id]/context" I get all the publicly visible replies to that status that this server knows about. Then it's just a matter of extracting the relevant information, threading the responses, and fitting that into my existing comment display infrastructure (code).
This is a bit of a hack on top of Mastodon, but for blogs to participate directly in ActivityPub, where you could subscribe and comment over the protocol, you wouldn't have to implement most of what Mastodon does. That would make it much smaller, more efficient, and more maintainable. It looks like maybe someone has made a WordPress plugin for this? Has anyone tried it?
Comment via: mastodon |
c00bb6fa-9d67-47b3-8001-21f6c3f2c083 | trentmkelly/LessWrong-43k | LessWrong | Progress links and short notes, 2025-01-13
Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Threads, Bluesky, or Farcaster.
Contents
* From me and RPI
* Jobs and fellowships
* Other opportunities
* Events
* Questions
* Announcements
* Commentary on the wildfires
* Sam Altman: AI workers in 2025, superintelligence next
* Never underestimate elasticity of supply
* “The earnestness and diligence of smart technical people”
* “Americans born on foreign soil”
* Undaunted
* Eli Dourado’s model of policy change
* Stats
* Links
* AI
* Inspiration
* Politics
* China biotech rising
* Predictions about war
* Why did we wait so long for the camera?
* Housing without homebuilders
* Charts
* Fun
From me and RPI
* 2024 in review for me and RPI, in case you missed it, including my annual “highlights from things I read this year”
* First batch of recorded talks from Progress Conference 2024 are available now. Special thanks to Freethink Media for their excellent work producing these
Jobs and fellowships
* Epoch AI hiring a Technical Lead “to develop a next-generation computer-use benchmark at Epoch AI. This will be for evaluating real-world AI capabilities as FrontierMath is for mathematics” (@tamaybes)
* “Funded year-long PhD student fellowship, combining non-partisan economic policy research & public service,” deadline Jan 30. Apply here (@heidilwilliams_)
* “I'm hiring (part-time) a techno-optimist who is obsessed with curating ideas” (@julianweisser)
Other opportunities
* Call for Focused Research Organization proposals in the UK. “Submit your concept paper by Feb 7 & full proposal by March 28.” (@Convergent_FROs). “Don't forget to scroll down … to the part where we have a ‘Request for FROs,’ with some ideas for inspiration” (@AdamMarblestone)
* Stories We'd Like to Publish (Part II), from Asimov Press. “Last time we did this, we got ~200 pitches and commissioned just about everything on the list” (@Niko |
d21537fe-50d7-4fff-b515-7c810baea802 | trentmkelly/LessWrong-43k | LessWrong | Make Your Observations Pay Rent
|
0fcf6e29-f9c4-499e-8cd3-540171c2fe93 | trentmkelly/LessWrong-43k | LessWrong | How can a high school student learn physics and math while coping with high school?
|
c906ac17-135b-451b-acd2-99b1b988d27f | trentmkelly/LessWrong-43k | LessWrong | Vestibular Stimulation and Fat Loss
|
c313e4d0-7189-47c5-862f-a998bbe4456d | StampyAI/alignment-research-dataset/blogs | Blogs | March Newsletter
| | |
| --- | --- |
|
| |
| --- |
| [newsletterheader_sm_c.1](http://intelligence.org/wp-content/uploads/2013/04/newsletterheader_sm_c.1.jpg)
|
|
|
| | | | |
| --- | --- | --- | --- |
|
| | | |
| --- | --- | --- |
|
| | |
| --- | --- |
|
| |
| --- |
|
Greetings From The Executive Director
Friends,
As previously [announced](http://intelligence.org/2013/01/30/we-are-now-the-machine-intelligence-research-institute-miri/) on our blog, the Singularity Institute has been renamed as the **Machine Intelligence Research Institute (MIRI)**. Naturally, both our staff and our supporters have positive associations with our original name, the “Singularity Institute.” As such, *any* new name will feel strange for a time. However, “MIRI” has sounded better and better to us over the past several weeks, and we think it will grow on you, too.
Some will worry, “But ‘MIRI’ doesn’t express what you do in any detail!” According to our market research, however, this is “a feature, not a bug.” Researchers, in particular, said they could feel awkward working for an organization with a name that sounded too narrow or “partisan.” They also warned us that the scope of an organization’s activities can change over time, so its name should be very general. University departments and independent research organizations learned these lessons long ago, and thus tend to have very general names (with the universities themselves usually named after their primary campus location).
“MIRI” has other nice properties, too. It’s easy to spell, it’s easy to pronounce, and it reflects our shifting priorities toward more technical research. Our mission, of course, remains the same: “to ensure that the creation of smarter-than-human intelligence benefits society.”
See our new website at [Intelligence.org](http://intelligence.org/). The site guide [here](http://intelligence.org/2013/02/28/welcome-to-intelligence-org/).
Our emails have changed, too. Be sure to **update your email Contacts list** with our new email addresses, e.g. luke@intelligence.org. Our previous email addresses at singinst.org and singularity.org no longer work. You can see all our new email addresses on the [Team](http://intelligence.org/team/) page.
Cheers,
Luke Muehlhauser
Executive Director
Upcoming MIRI Research Workshops
From November 11-18, 2012, we held (what we now call) the **1st MIRI Workshop on Logic, Probability, and Reflection**. The four workshop participants ([Eliezer Yudkowsky](http://yudkowsky.net/), [Paul Christiano](http://rationalaltruist.com/), Marcello Herreschoff, and Mihály Bárász) worked on the foundations of probabilistic reflective reasoning. In particular, they showed that a careful formalization of probabilistic logic can circumvent many classical paradoxes of self-reference. Applied to metamathematics, this framework provides (what seems to be) the first definition of truth which is expressive enough for use in reflective reasoning. Applied to set theory, this framework provides an implementation of probabilistic set theory based on unrestricted comprehension which is nevertheless powerful enough to formalize ordinary mathematical reasoning (in contrast with similar fuzzy set theories, which were originally proposed for this purpose but later discovered to be incompatible with mathematical induction).
These results suggest a similar approach may be used to work around Löb’s theorem, but this has not yet been explored. This work will be written up over the coming months.
In the meantime, MIRI is preparing for the **2nd MIRI Workshop on Logic, Probability, and Reflection**, to take place from April 3-24, 2013. For more details, see the relevant [blog post](http://intelligence.org/2013/03/07/upcoming-miri-research-workshops/).
Additional MIRI research workshops are also tentatively planned for the summer and fall of 2013.
Winter Fundraiser Success!
Thanks to our dedicated supporters, we met our goal for our [2012 Winter Fundraiser](http://intelligence.org/2013/01/20/2012-winter-matching-challenge-a-success/). Thank you!
The fundraiser ran for 45 days, from December 6, 2012 to January 20, 2013.
We met our $115,000 goal, raising a total of $230,000 for our operations in 2013.
Course Recommendations for MIRI Researchers
MIRI Deputy Director Louie Helm has prepared a list of [Recommend Courses for MIRI Researchers](http://intelligence.org/courses/), which answers the question “What should a researcher study if they want to equip themselves to tackle the technical problems on MIRI’s research agenda?” This new page provides a list of subjects to study, along with textbook recommendations, online course recommendations, and recommended courses at particular universities (UC Berkeley, Stanford, MIT, and CMU).
Decision Theory FAQ
If you want future AIs to cooperate in real-world [prisoner’s dilemmas](http://en.wikipedia.org/wiki/Prisoner%27s_dilemma), you’d better hope they’re not using any of the standard decision algorithms discussed in philosophy and computer science journals. For this reason and others, decision theory represents a major focus of MIRI’s research agenda (for example see [Yudkowsky 2010](https://intelligence.org/files/TDT.pdf)).
To help clarify some common confusions about decision theory and encourage more researchers to tackle these problems, MIRI Executive Director Luke Muehlhauser wrote a [Decision Theory FAQ](http://lesswrong.com/lw/gu1/decision_theory_faq/) for the website *Less Wrong*. It is by far the most comprehensive decision theory FAQ on the internet, and [section 11](http://lesswrong.com/lw/gu1/decision_theory_faq/#what-about-newcombs-problem-and-alternative-decision-algorithms) is an especially handy summary of how different decision algorithms perform on a battery of standard problems from the literature ([Newcomb’s Problem](http://lesswrong.com/lw/gu1/decision_theory_faq/#newcombs-problem), [Medical Newcomb’s Problem](http://lesswrong.com/lw/gu1/decision_theory_faq/#medical-newcomb-problems), Egan’s [Psychopath Button](http://lesswrong.com/lw/gu1/decision_theory_faq/#the-psychopath-button), [Parfit’s Hitchhiker](http://lesswrong.com/lw/gu1/decision_theory_faq/#parfits-hitchhiker), the [Prisoner’s Dilemma](http://lesswrong.com/lw/gu1/decision_theory_faq/#prisoners-dilemma), and more).
Brief History of Ethically Concerned Scientists
In 1956, Norbert Weiner wrote that “For the first time in history, it has become possible for a limited group of a few thousand people to threaten the absolute destruction of millions.” Today, the general attitude towards scientific discovery is that scientists are not themselves responsible for how their work is used. But this is not necessarily the attitude that we should encourage. As technology becomes more powerful, it also becomes more dangerous.
To celebrate the scientists who took seriously the potential social consequences of their work, and to make it easier for others to write about scientist’s social responsibility, MIRI researcher Kaj Sotala published [A Brief History of Ethically Concerned Scientists](http://lesswrong.com/lw/gln/a_brief_history_of_ethically_concerned_scientists/). Click through to learn about:
* **John Napier** (1550-1617), who discovered a deadly new form of artillery, but kept its details a secret so that its destructive power could not be wielded.
* **Lewis Fry Richardson** (1881-1953), who turned down an invitation to optimize the spread of poison gas for the British military, destroyed his unpublished research, left meteorology, and began to study the causes of war instead, hoping to reduce armed conflict.
* **Leó Szilárd** (1898-1964), who discovered the nuclear chain reaction but arranged for his patent details to be kept secret so they could not be used by Germany to develop atomic bombs, and later campaigned against nuclear proliferation.
* **Joseph Rotblat** (1908-2005), who left the Manhattan Project over ethical concerns with the atomic bomb and campaigned against nuclear proliferation.
and many others.
Existential Risk Covered in Aeon Magazine
We don’t mention each new article about [existential risk](http://www.existential-risk.org/) or [AI risk](https://intelligence.org/files/ReducingRisks.pdf), but [this one](http://www.aeonmagazine.com/world-views/ross-andersen-human-extinction/) by Ross Andersen in *Aeon Magazine* is particularly good. It’s based largely on the work of [Nick Bostrom](http://nickbostrom.com/) at Oxford University, a frequent collaborator with MIRI researchers (e.g. “[The Ethics of Artificial Intelligence](https://intelligence.org/files/EthicsofAI.pdf)“). Bostrom is currently writing a scholarly monograph on machine superintelligence, and Andersen’s article properly highlights the centrality of AI risk. The piece also includes snippets of a conversation with MIRI research associate [Daniel Dewey](http://www.danieldewey.net/) (author of “[Learning What to Value](https://intelligence.org/files/LearningValue.pdf)“).
We also recommend Bostrom’s new article “[Existential Risk Prevention as Global Priority](http://www.existential-risk.org/concept.pdf),” forthcoming in *Global Policy*.
MetaMed Launches
Former MIRI President Michael Vassar’s new personalized medicine company has finally launched: behold [MetaMed](http://metamed.com/)! MetaMed offers personalized medical research for patients who want to make sure they’re treatment is informed by the very latest medical breakthroughs. Eliezer Yudkowsky [introduced](http://lesswrong.com/lw/gvi/metamed_evidencebased_healthcare/) the company thusly:
In a world where 85% of doctors can’t solve [simple Bayesian word problems](http://library.mpib-berlin.mpg.de/ft/ps/PS_Teaching_2001.pdf)…
In a world where only 20.9% of reported results that a pharmaceutical company tries to investigate for development purposes, [fully replicate](http://online.wsj.com/article/SB10001424052970203764804577059841672541590.html)…
In a world where “[p-values](http://lesswrong.com/lw/1gc/frequentist_statistics_are_frequently_subjective/)” are [anything the author wants them to be](http://biomet.oxfordjournals.org/content/77/3/467.abstract)…
…and where there are [all sorts of amazing technologies and techniques](http://www.cnn.com/2010/HEALTH/09/09/pinky.regeneration.surgery/index.html) which nobody at your hospital has ever heard of…
…there’s also [MetaMed](http://metamed.com/). Instead of just having “evidence-based medicine” in journals that doctors don’t actually read, MetaMed will provide you with actual evidence-based healthcare… If you have a sufficiently serious problem and can afford their service, MetaMed will (a) put someone on reading the relevant research literature who understands real statistics and can tell whether the paper is trustworthy; and (b) refer you to a cooperative doctor in their network who can carry out the therapies they find.
MetaMed was partially inspired by the case of a woman who had her fingertip chopped off, was told by the hospital that she was screwed, and then read through an awful lot of literature on her own until she found someone working on an advanced regenerative therapy that let her actually [grow the fingertip back](http://www.cnn.com/2010/HEALTH/09/09/pinky.regeneration.surgery/index.html). The idea behind MetaMed isn’t just that they will scour the literature to find how the best experimentally supported treatment differs from the average wisdom… but that they will also look for this sort of very recent technology that most hospitals won’t have heard about.
An Appreciation of Michael Anissimov
Due to Singularity University’s [acquisition](http://singularityu.org/2012/12/09/singularity-university-acquires-the-singularity-summit/) of the [Singularity Summit](http://intelligence.org/singularitysummit/) and some major changes to MIRI’s public communications strategy, Michael Anissimov left MIRI in January 2013. Michael continues to support our mission and continues to volunteer for us.
It was a pleasure for me to work with Michael during our overlapping time at MIRI. Michael played a major role in “onboarding” me at MIRI and helping me to understand the history and culture of MIRI’s community, and he worked very hard on the Singularity Summit and on our 2012 efforts to transform MIRI into a more effective organization in general.
I owe Michael much gratitude for his many, many years of service to MIRI, and in particular for helping to build up the Singularity Summit to the point where it was acquired, and for applying himself (of his own accord) to the tasks that he saw needed to be done — for example in taking up MIRI’s public communications mantle when he saw that was a gap in our operations.
Michael: Thanks so much for your service to MIRI! I enjoyed working with you, and I wish you the best of luck on your future adventures.
Luke Muehlhauser
|
|
|
|
|
The post [March Newsletter](https://intelligence.org/2013/03/07/march-newsletter/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
05b6da66-c0ce-47f0-970c-9383a738d051 | trentmkelly/LessWrong-43k | LessWrong | What happens when your beliefs fully propagate
> This is a very personal account of thoughts and events that have led me to a very interesting point in my life. Please read it as such. I present a lot of points, arguments, conclusions, etc..., but that's not what this is about.
I've started reading LW around spring of 2010. I was at the rationality minicamp last summer (2011). The night of February 10, 2012 all the rationality learning and practice finally caught up with me. Like a water that has been building up behind a damn, it finally broke through and flooded my poor brain.
"What if the Bayesian Conspiracy is real?" (By Bayesian Conspiracy I just mean a secret group that operates within and around LW and SIAI.) That is the question that set it all in motion. "Perhaps they left clues for those that are smart enough to see it. And to see those clues, you would actually have to understand and apply everything that they are trying to teach." The chain of thoughts that followed (conspiracies within conspiracies, shadow governments and Illuminati) it too ridiculous to want to repeat, but it all ended up with one simple question: How do I find out for sure? And that's when I realized that almost all the information I have has been accepted without as much as an ounce of verification. So little of my knowledge has been tested in the real world. In that moment I achieved a sort of enlightenment: I realized I don't know anything. I felt a dire urge to regress to the very basic questions: "What is real? What is true?" And then I laughed, because that's exactly where The Sequences start.
Through the turmoil of jumbled and confused thoughts came a shock of my most valuable belief propagating through my mind, breaking down final barriers, reaching its logical conclusion. FAI is the most important thing we should be doing right now! I already knew that. In fact, I knew that for a long time now, but I didn't... what? Feel it? Accept it? Visualize it? Understand the consequences? I think I didn't let that belief propagat |
7b727729-b124-4fd6-a90f-193711f8d383 | trentmkelly/LessWrong-43k | LessWrong | David Friedman on Legal Systems Very Different from Ours: SlateStarCodex Online Meetup
David Friedman on Legal Systems Very Different from Ours: A brief survey of a range of legal system, past and present, from Imperial China and Periclean Athens to modern Amish and Romany.
The event will be Oct 11, 2020, at 17:30 UTC, 20:30 Israel Daylight Time, 10:30 Pacific Daylight Time.
Sign up here, up to an hour before the event, and we'll send you an invitation to the online meetup
David Friedman is an academic economist with a doctorate in physics recently retired from spending the previous twenty-three years teaching in a law school. His first book, The Machinery of Freedom: Guide to a Radical Capitalism, was published in 1973 and includes a description of how a society with property rights and without government might function. There as elsewhere, he offers a consequentialist defense of libertarianism.
His most recent non-fiction book is Legal Systems Very Different from Ours, covering systems from Periclean Athens through modern Amish and Romany. He is also the author of three novels, one commercially published and two self-published, and, with his wife, a self-published medieval and renaissance cookbook and a larger self-published book related to their hobby of historical recreation.
Much of his published work, including journal articles, essays, drafts of forthcoming work and the full text of several books, can be read on his web page: daviddfriedman.com |
47847b60-2d1b-404f-b891-182c216dee2a | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Interlude: Agents as Automobiles
I ended up writing a long rant about agency in my review of Joe Carlsmith’s report on x-risk from power-seeking AI. I’ve polished the rant a bit and posted it into this sequence. The central analogy between APS-AI and self-propelled machines (“Auto-mobiles”) is a fun one, and I suspect the analogy runs much deeper than I’ve explored so far.
For context, the question being discussed is whether we should “expect incentives to push relevant actors to build agentic planning and strategically aware systems [APS systems] in particular, once doing is possible and financially feasible.”
Joe says 80% yes, 20% no:
*“The 20% on false, here, comes centrally from the possibility that the combination of agentic planning and strategic awareness isn’t actually that useful or necessary for many tasks -- including tasks that intuitively seem like they would require it (I’m wary, here, of relying too heavily on my “of course task X requires Y” intuitions). For example, perhaps such tasks will mostly be performed using collections of modular/highly specialized systems that don’t together constitute an APS system; and/or using neural networks that aren’t, in the predictively relevant sense sketched in 2.1.2-3, agentic planning and strategically aware. (To be clear: I expect non-APS systems to play a key role in the economy regardless; in the scenarios where (2) is false, though, they’re basically the only game in town.)”*
I agree that “of course X requires Y” intuitions have been wrong in the past and also that evidence from how nature solved the problem in humans and nonhuman animals will not necessarily generalize to artificial intelligence. However:
1. Beware [isolated demands for rigor.](https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/) Imagine someone in 1950 saying “Some people thought battleships would beat carriers. Others thought that the entire war would be won from the air. Predicting the future is hard; we shouldn’t be confident. Therefore, we shouldn’t assign more than 90% credence to the claim that powerful, portable computers (assuming we figure out how to build them) will be militarily useful, e.g. in weapon guidance systems or submarine sensor suites. Maybe it’ll turn out that it’s cheaper and more effective to just use humans, or bio-engineered dogs, or whatever. Or maybe there’ll be anti-computer weapons that render them useless. Who knows. The future is hard to predict.” This is what Joe-with-skeptic-hat-on sounds like to me. Battleships vs. carriers was a relatively hard prediction problem; whether computers would be militarily useful was an easy one. I claim it is obvious that APS systems will be powerful and useful for some important niches, just like how it was obvious in 1950 that computers would have at least a few important military applications.
2. To drive this point home, let me take the reasons Joe gave for skepticism and line-by-line mimic them with a historical analogy to self-propelled machines, i.e. “automotives.” Left-hand column is entirely quotes from the report.
| | |
| --- | --- |
| **Skeptic about APS systems** | **Skeptic about self-propelled machines** |
| Many tasks -- for example, translating languages, classifying proteins, predicting human responses, and so forth -- don’t seem to require agentic planning and strategic awareness, at least at current levels of performance. Perhaps all or most of the tasks involved in automating advanced capabilities will be this way. | Many tasks — for example, raising and lowering people within a building, or transporting slag from the mine to the deposit, or transporting water from the source to the home — don’t seem to require automotives. Instead, an engine can be fixed in one location to power a system of pulleys or conveyor belts, or pump liquid through pipes. Perhaps all or most of the tasks involved in automating our transportation system will be this way. |
| In many contexts (for example, factory workers), there are benefits to specialization; and highly specialized systems may have less need for agentic planning and strategic awareness (though there’s still a question of the planning and strategic awareness that specialized systems in combination might exhibit). | In many contexts, there are benefits to specialization. An engine which is fixed to one place (a) does not waste energy moving its own bulk around, (b) can be specialized in power output, duration, etc. to the task at hand, (c) need not be designed with weight as a constraint, and thus can have more reliability and power at less expense. |
| Current AI systems are, I think, some combination of non-agentic-planning and strategically unaware. Some of this is clearly a function of what we are currently able to build, but it may also be a clue as to what type of systems will be most economically important in future. | Current engines are not automotives. Some of this is clearly a function of what we are currently able to build (our steam engines are too heavy and weak to move themselves) but it may also be a clue as to what type of systems will be most economically important in the future. |
| To the extent that agentic planning and strategic awareness create risks of the type I discuss below, this might incentivize focus on other types of systems. | To the extent that self-propelled machines may create risks of “crashes,” this might incentivize focus on other types of systems (and I would add that a fixed-in-place engine seems inherently safer than a careening monstrosity of iron and coal!) To the extent that self-propelled machines may enable some countries to invade other countries more easily, e.g. by letting them mobilize their armies and deploy to the border within days by riding “trains,” and perhaps even to cross trench lines with bulletproof “tanks,” this threat to world peace and the delicate balance of power that maintains it might incentivise focus on other types of transportation systems. *[Historical note: The existence of trains was one of the contributing causes of World War One. See e.g.* [*Railways and the mobilisation for war in 1914 | The National Archives*](https://media.nationalarchives.gov.uk/index.php/railways-and-the-mobilisation-for-war-in-1914/)*.]* |
| Plan-based agency and strategic awareness may constitute or correlate with properties that ground moral concern for the AI system itself (though not all actors will treat concerns about the moral status of AI systems with equal weight; and considerations of this type could be ignored on a widespread scale). | OK, I admit that it is much more plausible that people will care for the welfare of APS-AI than for the welfare of cars/trains. However I don’t think this matters very much so I won’t linger on this point. |
3. There are plenty of cases where human “of course task X requires Y” intuitions turned out to be basically correct. (e.g. self-driving cars need to be able to pathfind and recognize images, image-recognizers have circuits that seem to be doing line detection, tree search works great for board game AIs, automating warehouses turned out to involve robots that move around rather than a system of conveyor belts, automating cruise missiles turned out to *not* involve having humans in the loop steering them… I could go on like this forever. I’m deliberately picking “hard cases” where a smart skeptic could plausibly have persuaded the author to doubt their intuitions that X requires Y, as opposed to cases where such a skeptic would have been laughed out of the room.)
4. There’s a selection effect that biases us towards thinking our intuition about these things is worse than it is:
* Cases where our intuition about is incorrect are cases where it turns out there is an easier way, a shortcut. For example, chess AI just doing loads of really fast tree search instead of the more flexible, open-ended strategic reasoning some people maybe thought chess would require.
* If the history of AIs-surpassing-humans-at-tasks looks like this:
* Then we should expect the left tail to contain a disproportionate percentage of the cases where there is a shortcut. Cases where there is no shortcut will be clumped over on the right.
4. More important than all of the above: As Gwern pointed out, *it sure does seem like some of the tasks some of us will want to automate are agency tasks*, tasks such that anything which performs them is by definition an agent. Tasks like “gather data, use it to learn a general-purpose model of the world, use that to make a plan for how to achieve X, carry out the plan.”
5. Finally, and perhaps most importantly: **We don’t have to go just on intuition and historical analogy. We have models of agency, planning, strategic awareness, etc. that tell us how it works and why it is so useful for so many things.** [This sequence is my attempt to articulate my model.]
*Many thanks to Joe Carlsmith for his excellent report and for conversing with me at length about it.* |
247b9dbe-1f91-449f-8c90-835ca6ca35fd | trentmkelly/LessWrong-43k | LessWrong | How large is the harm from info-cascades? [Info-cascade series]
This is a question in the info-cascade question series. There is a prize pool of up to $800 for answers to these questions. See the link above for full background on the problem (including a bibliography) as well as examples of responses we’d be especially excited to see.
___
How can we quantify the impact (harm) of info-cascades?
There are many ways in which info-cascades are harmful. Insofar as people base their decisions on the cascaded info, this can result in bad career choices, mistaken research directions, misallocation of grants, a culture that is easier to hijack by cleverly signalling outsiders (by simply “joining the bubble”), and more.
But in order to properly allocate resources to work on info-cascades we need a better model of how large the effects are, and how they compare with other problems. How can we think about info-cascades from a cost-effectiveness perspective?
We are especially interested in answers to this question that ultimately bear on the effective altruism/rationality communities, or analyses of other institutions with insights that transfer to these communities.
As an example step in this direction, we built a Guesstimate model, which is described in an answer below. |
8f71765c-0fe7-4fdc-8909-41b31d29a9fb | trentmkelly/LessWrong-43k | LessWrong | Proof idea: SLT to AIT
I think we may be able to prove that Bayesian learning on transformers[1] or recurrent neural networks with a uniform[2] prior over parameters is equivalent to a form of Solomonoff induction over a set of computationally-bounded programs. This bounded Solomonoff induction would still be 'approximately optimal' in a sense, being able to predict the data about as well any other bounded prediction procedure included in the set of programs it runs over. This proof would link Singular Learning Theory (SLT) back to basic Algorithmic Information Theory (AIT).
This post is my current early-stage sketch of the proof idea. Don't take it too seriously yet. I’m writing this out mostly to organise my own thoughts. I'd originally planned for it to be a shortform, but I think it ended up a bit too long for that.
Background:
I recently held a small talk presenting an idea for how and why deep learning generalises. Slides for the talk here, slide discussion here.
In the talk, I tried to reduce concepts from SLT[3] back to AIT[4]. I sketched a story about deep learning, or perhaps even learning more generally, that goes like this:
1. Bayesian learning on (recurrent) neural networks is equivalent to a form of Solomonoff induction running over a set of programs bounded in length, runtime and memory usage.
2. Using SGD/genetic algorithms/your-fancy-update-method-of-choice to train a neural network is then a cheap bargain bin[5] approximation of Bayesian learning on the neural network. Training steps are biased to make simple updates rather than complex updates because exponentially more parameter configurations in the architecture correspond to simpler programs.
Now, I want to actually prove this story. Specifically, I want to prove the first part: That Bayesian learning on transformers or RNNs is equivalent to a computationally bounded form of Solomonoff Induction (SI), in a sense I want to make precise. I also want to show that this bounded SI is a sensible approximation of a |
51fab369-7f8a-470d-8fd5-dae86f0b1e4c | trentmkelly/LessWrong-43k | LessWrong | Comments on "The Singularity is Nowhere Near"
I followed a link on Twitter to a fun and informative 2015 blog post by Tim Dettmers:
The Brain vs Deep Learning Part I: Computational Complexity — Or Why the Singularity Is Nowhere Near
The headline conclusion is that it takes at least 1021 FLOP/s to run the algorithms of a human brain, and therefore "it is unlikely that there will be a technological singularity in this century." I disagree with that, and this post explores why.
(Specifically, I disagree with "at least 1021 FLOP/s". There's a separate step to go from "at least 1021 FLOP/s" to "it is unlikely that there will be a technological singularity in this century"—this step is related to Moore's law, bandwidth requirements for parallelization, etc. Tim's blog post has extensive discussion of this second step, and I won't say anything about that here; I'd have to think about it more.)
(I'm writing this in 2021, six years later, but Tim has a comment on this very site that says he still stands by that post; in fact he now goes even further and says "I believe that AGI will be physically impossible with classical computers.")
I highly recommend the original post. Indeed, if I didn't like the post so much, I would not have bothered writing a response. :-)
Are brain algorithms computationally expensive to simulate?
Yes! Definitely! I think it's especially telling that nobody has applied the Dileep George brain-inspired image-processing model to ImageNet, sticking to much smaller images with far fewer categories of objects (MNIST, CAPTCHAs etc.).
Likewise, this Randall O'Reilly paper has a fascinating computational exploration of (in my opinion) different and complementary aspects of the human visual system. That paper tests its theories on a set of ≈1000 256×256-pixel, 8-frame movies from 100 categories—compare to ImageNet's 14 million images from 20,000 categories ... or compare it to the number of visual categories that you can recognize. Training the model still took 512 InfiniBand-connected processor |
9c5d75ed-2bcb-4e80-8a44-239be56994b6 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [AN #124]: Provably safe exploration through shielding
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter.
Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-124)** (may not be up yet).
HIGHLIGHTS
===========
**[Neurosymbolic Reinforcement Learning with Formally Verified Exploration](http://arxiv.org/abs/2009.12612)** *(Greg Anderson et al)* (summarized by Rohin): A typical approach to formally verified safe exploration in RL is to compute a *shield*, which identifies a safe set of states and actions. After this shield is computed, it is “wrapped” around the environment to ensure that if a potentially unsafe action is about to be taken, it is replaced with a safe one. Then, a policy learning algorithm is applied as normal to learn a good policy.
The key insight of this paper is to compute shields for specific *policies*, rather than creating a one-time shield that must apply to the entire state space. Since any given policy will only visit a small fraction of the state space, the shields are easier to compute and can be more permissive.
They assume access to a *worst-case dynamics model*, which given a state and action outputs a *set* of states that could be visited. Given a policy π, an *inductive safety invariant* is a set of safe states that includes all possible initial states and is closed under worst-case transitions: if you start at a state in the set, for any action that π suggests and for any state from the worst-case transition dynamics, that new state will still be in the set. Our algorithm will ensure that any policy we execute will have a corresponding inductive safety invariant.
Formal verification techniques allow us to find inductive safety invariants for restricted classes of policies. This paper uses the space of deterministic, piecewise linear policies as its set of symbolic policies. But how do we apply this to neural nets? The key idea is to start with a safe symbolic policy, convert it to a neurosymbolic policy, take a neural net gradient step, convert back to a safe symbolic policy, and repeat until done. Let’s go over each of these steps.
First, let’s suppose we have a symbolic policy g with inductive safety invariant ø. Then for any neural net f, we construct the policy h = “f(s) if no matter what we stay within ø, otherwise g(s)”. It is easy to see that ø is also an inductive safety invariant for h. Which f should we use to create h? The authors train a neural net to imitate g, and use that as their f. (Note that imitating g only requires executing g in the environment, and we know that g is safe.)
Now that we have our neurosymbolic policy h, we need to take gradient steps on it. We collect data in the environment using h, but then for the gradient we ignore the symbolic part, and take a gradient step as though the data were collected using f. (It seems they used an on-policy algorithm for this, introducing bias; I am not sure why they didn’t simply use an off-policy algorithm.) This produces a new neurosymbolic policy h’ that is still safe (since g and ø are unchanged, and that’s what guarantees safety).
Finally, we need to convert h’ back into a symbolic policy g’. This is done by a version of imitation learning that works in the symbolic policy space, where a new inductive safety invariant for g’ is found using formal verification techniques.
To start off the whole process, we need an initial symbolic policy, which must be constructed by hand. The authors show using experiments in simple continuous control environments that this method can learn high-reward policies without ever having a safety violation.
**Rohin's opinion:** I really like this as an example of combining the performance of neural networks with the robustness of symbolic approaches. I especially like the fact that the shield is specialized to the current policy and updated over time: I think ML scales so well partly because it only deals with a tiny portion of the input space and can completely ignore the vast majority of possible inputs, and so if you want to add anything on top of ML you need to ensure you preserve this property to ensure scalability. Previous approaches required a shield that is correct across all possible states, failing to preserve this property; in contrast, this approach only requires a shield that is correct for the sequence of learned policies (on whichever states they visit).
I should note that a large portion of why I like this paper is that it feels like it elegantly fits in *both* the formal verification *and* the ML fields. (I used to work in programming languages, of which formal verification is a subfield.) On the formal verification side, the guarantees are clean and simple, and the techniques used are canonical. On the ML side, I mentioned above why I like the fact that the shield is policy-specific and updated over time.
As I’ve said before, I think the real challenge in formal verification for AI alignment is how to handle fuzzy specifications. I think this paper shows a path forward: since the safety is established by an inductive invariant that can change over time, we could potentially use human feedback to establish these inductive invariants and update them over time, without requiring a human to fully specify at the outset exactly what is safe and what isn’t. You could think of it as an expanding whitelist of states which the policy is allowed to visit.
TECHNICAL AI ALIGNMENT
=======================
LEARNING HUMAN INTENT
----------------------
**[Imitation Learning in the Low-Data Regime](https://ai.googleblog.com/2020/09/imitation-learning-in-low-data-regime.html)** *(Robert Dadashi et al)* (summarized by Zach): **[Non-Adversarial Imitation Learning](http://arxiv.org/abs/2008.03525)** (**[AN #119](https://mailchi.mp/30b144930924/an-119ai-safety-when-agents-are-shaped-by-environments-not-rewards)**) has become more popular recently due to the fact that GAN style architectures can be notoriously unstable during training. This paper makes a contribution by introducing an imitation learning strategy that relies on minimizing an upper bound on the Wasserstein distance between the imitator and expert state visitation distributions. The Wasserstein distance can be understood using the 'Earth Mover's Analogy'. In this interpretation, we view the distance as the cost of the most efficient transport strategy to move probability mass from the imitator distribution to the expert distribution. The advantage of such an approach is that the metric can be calculated in an offline way. If we calculate the distance for partial rollouts then we can create a dense, albeit non-stationary, reward for the imitator. In experiments, agents trained using the Wasserstein distance are able to learn control tasks using only a single trajectory.
**Read more:** **[Paper: Primal Wasserstein Imitation Learning](https://arxiv.org/abs/2006.04678)**
**Zach's opinion:** With this paper, I conclude that IRL works for Mujoco-style control tasks. The performance of this method is similar to offline GAIL but is better justified and more stable. However, ultimately, I'm a bit skeptical of their claim that the method will generalize to other tasks. Results for GAIL/DAC are quite poor in Atari-like environments whereas pair-wise reward modeling seems to perform quite well. This would suggest a reward modeling approach would scale much better in more complicated settings.
VERIFICATION
-------------
**[An Inductive Synthesis Framework for Verifiable Reinforcement Learning](http://arxiv.org/abs/1907.07273)** *(He Zhu et al)* (summarized by Rohin): This older paper has a pretty similar idea to the one in the highlighted paper. In order to compute a safety shield for a neural network RL agent, we first transform the neural network into a simpler more symbolic policy, prove safety of the symbolic policy, and then use the generated inductive safety invariant as a shield. This paper also uses deterministic piecewise linear policies as its space of symbolic policies. It only proves safety of the final learned RL policy, and so only guarantees safety at deployment, not at training time. (In other words, it does not guarantee safe exploration, and instead assumes that you are training in simulation so that safety is not a concern.)
**Rohin's opinion:** Since this paper was published at PLDI, it is both longer and goes into a lot more of the details of how to actually perform each of these steps, as well as showing it with a running example on the inverted pendulum (where safety is defined as not going beyond a certain angle). I’m not going to summarize them here but anyone interested in these technical details should check out this paper before the highlighted one (which is constrained by ML page limits and can’t explain the techniques very well).
Just as a reminder that learning programs does not automatically confer interpretability, I present to you the symbolic policy learned by their method for the inverted pendulum:

**[Verifiably Safe Exploration for End-to-End Reinforcement Learning](https://arxiv.org/abs/2007.01223)** *(Nathan Hunt et al)* (summarized by Rohin): As we saw in the highlight, applications of formal verification to reinforcement learning and safe exploration often rely on *shielding*, in which any proposed unsafe actions are replaced by randomly chosen safe actions. Typically, this requires having an MDP model in a high-level, symbolic state space, such as by defining the MDP over the Atari simulator state, rather than learning from pixels.
This paper demonstrates that we can relax this requirement and learn policies on low-level observations, while still getting the safety guarantees of the shielding approach. The approach is simple: we define (manually) an abstract model of the environment, with a symbolic state space and dynamics model, and use this to create a shield as usual. Then, to learn the policy (which gets pixels as input), we use an object detector to transform the pixels into a symbolic state, and then use the shield if necessary to select which action to take. The authors show that as long as the error of the object detection step is low, the overall policy learning will remain safe.
**[Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming](https://arxiv.org/abs/2010.11645)** *(Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato et al)* (summarized by Rohin): In parallel with extending verification to sequential settings, as well as learning what specifications to verify, we also need to make verification significantly cheaper in order for it to be feasible to apply it to large neural networks. So far, we have only been able to achieve one of two very desirable properties at a time:
1. The method can scale up to large, independently trained networks. (This has been achieved by methods using linear (LP) relaxations like **[this one](https://arxiv.org/abs/1803.06567)** (**[AN #19](https://mailchi.mp/4b19d2caa5a9/alignment-newsletter-19)**).)
2. The method produces tight bounds and thus avoids producing vacuous results. (Achieved by using relaxations based on semidefinite programming (SDP) instead of linear ones.)
This paper shows how you can massage the SDP version such that the resulting algorithm becomes scalable, changing the runtime and memory requirements from O(n^6) and O(n^4) to O(n) per iteration. The resulting algorithm can be applied to larger neural nets than previous SDP approaches and gives much tighter bounds than LP approaches. For example, on an adversarially trained CNN for MNIST (which SDP algorithms haven’t previously been applied to), they can verify 87.8% adversarial accuracy, while LP methods can only verify 0.4%.
OTHER PROGRESS IN AI
=====================
REINFORCEMENT LEARNING
-----------------------
**[Does On-Policy Data Collection Fix Errors in Off-Policy Reinforcement Learning?](https://bair.berkeley.edu/blog/2020/03/16/discor/)** *(Aviral Kumar et al)* (summarized by Flo): Q-learning finds the optimal **Q**-function **Q\*** by updating our estimate **Q(s,a)** for a state-action pair **(s,a)** to get closer to the immediate reward plus the discounted **Q**-value for the best action **a'** in the next state **s'**. To generate samples, we usually pick actions corresponding to high **Q**-values. In bandit problems where **s'** is always terminal and thus has all **Q**-values at zero, this leads to **corrective feedback**: If we overestimated an actions value, we will pick this action again soon and are quickly able to correct our misconception. In general MDPs, corrective feedback can be a lot weaker as our update of **Q(s,a)** also depends on the **Q**-values for the next state: To get corrective feedback, we need somewhat correct **Q**-values for the next state, but to get these we likely needed good values for the second to next state, etc. This is particularly problematic with function approximation as updating the current state's **Q**-value might lead to a worse estimate for values down the chain. Consequently, we might see convergence to suboptimal **Q**-functions, instable learning, or problems with sparse or noisy rewards.
To deal with this, we would like to first prioritize correct estimates for states near the end of the chain. But in many branching problems, we actually observe these states with the least frequency such that their values are influenced disproportionally by other states' values when function approximation is used. The authors' approach, dubbed DisCor, reweighs the data distribution to account for this: We would like to preferentially sample states for which we expect **Q** to be close to **Q\*** after the update and thus give more weight to state-action pairs when we expect the error **|Q\*-Q|** to already be small. As we don't know **Q\***, we rely on a bound for the error at a state-action pair **(s,a)** equal to the sum of the magnitudes of previous updates down the chain plus the initial error, discounted by the usual discount rate **γ** as we move back in time. Thus, the error in the next state one step ago is discounted by **γ**, the error in the second to next state two steps ago is discounted by **γ** squared and the initial error is discounted by **γ** to the **k**. This bound can be approximated by a neural network using a SARSA-like update rule, for which the influence of the unknown initial error fades for large **k** due to the discounting.
DisCor is evaluated on MetaWorld tasks in both the single and multi-task setting and SAC augmented with DisCor clearly outperforms SAC in many settings. Similar improvements can be observed for DQN on Atari.
**Read more:** **[Paper: DisCor: Corrective Feedback in Reinforcement Learning via Distribution Correction](https://arxiv.org/abs/2003.07305)**
**Flo's opinion:** Putting less weight on updating values with fluctuating targets seems like a good idea. As the approach does not require much additional compute if weights are shared for the **Q**-network and the network estimating the bound, and as it seems quite orthogonal to previous improvements to methods based on **Q**-functions, I would not be surprised if it became somewhat widely used.
DEEP LEARNING
--------------
**[Gradient Descent: The Ultimate Optimizer](https://arxiv.org/abs/1909.13371)** *(Kartik Chandra et al)* (summarized by Rohin): Hyperparameter tuning is an important and tedious step for most applications of machine learning. Often this can cause a project to take significantly longer, as you need to have multiple training runs with different hyperparameters in order to identify which ones work best. How can we do better?
This paper shows that in some cases, you can make the computation involving your hyperparameters differentiable, such that they too can be optimized using gradient descent *during the actual training run*. They show this for SGD and Adam (where for Adam they optimize all four hyperparameters, not just the learning rate). Since these hyperparameters are then optimized using another instantiation of gradient descent, that new instantiation also has its own hyperparameters that can once again be optimized. They show how to build an arbitrarily high “stack” of hyperparameter optimizers.
In practice, building a stack of just 3 or 4 such optimizers makes it very robust to the initial choice of parameters by a human, while only increasing the cost of training by less than 2x.
**Rohin's opinion:** Fast hyperparameter tuning is a pretty important aspect of models. I particularly like **[population-based training](https://deepmind.com/blog/article/population-based-training-neural-networks)** for this purpose, because it doesn’t require your computation to be differentiable. However, when you can make your computation differentiable, this method is probably significantly more efficient (and perhaps also more performant).
#### **FEEDBACK**
I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**.
#### **PODCAST**
An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**. |
3f309004-844b-42ef-a5fe-592ecefd2568 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Translating between Latent Spaces
### *Produced as part of the SERIMATS Program 2022 Research Sprint under John Wentworth*
Introduction
============
The gold-standard of interpretability in ML systems looks like finding embeddings of human-identifiable concepts in neural net architectures, and being able to modify, change, and activate them as we wish. The first hurdle is *identification* of these concepts. We propose that it may be easier to identify simpler concepts in simpler models, and use these to bootstrap to more complex concepts in more intricate models.
To this end, we first propose a definition of what it means for a concept to be present in a model. Then we investigate how we can identify similar concepts across different models. We begin by demonstrating these definitions and techniques in a simple example involving Bayes nets.
We then train two autoencoders (small and large) on the FashionMNIST dataset. We choose some latent concepts that humans would use to represent shoes (shoe height and shoe brightness) and see whether these human concepts can be transferred to the models, and how the models' representations relate.
A simple example
================
Simple Environment
------------------
First we construct a simple environment:
* There are 10 cells.
* Each cell can contain nothing, a blue circle, a red circle, or both.
* There is one red circle.
+ The red circle moves 1 right each timestep.
+ If the red circle cannot move right, it moves to the leftmost cell.
* There are two adjacent blue circles.
+ The blue circles move 1 left or 1 right each timestep.
+ If a blue circle reaches the environment's edge, both their directions change.
* If a cell contains a red circle and a blue circle, it shows only a blue circle.
A sample is given below:
Two Models of this Environment
------------------------------
We model the evolution of this environment using two different Bayes nets. Each Bayes net reflects a different way of viewing this environment corresponding to:
* An object centric model
* A local-state centric model
### Object Centric Model (Model 1)
The object centric model tracks 3 latent variables corresponding to the object-level description of where the blue and red circles are and in which direction the blue circles are moving at a given timestep:
* blue location (taking values in {0,1,2,3,4,5,6,7,8,9}.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
)
* red location (taking values in {0,1,2,3,4,5,6,7,8,9})
* blue velocity (taking values in {−1,1})
It also has 10 observational variables given by the 10 cells, which take values in {blank,blue,red}.
This is represented by the following Bayes net (each column being a new timestep):
So this Bayes net has a latent space given by the set {0,1,2,3,4,5,6,7,8,9}2×{−1,1}, and the state displayed at timestep 2 in the above diagram would have (assuming blue is moving right) latent representation (2,4,1).
### Local-State Centric Model (Model 2)
The local-state centric model treats each cell as having a local state corresponding to
* Whether the cell contains a red circle
* Whether the cell contains a blue circle
+ If the cell contains a blue circle, which direction the blue circle is moving in
So each cell can be in one of six possible states:
* No Red, No Blue - 0
* Red, No Blue - 1
* No Red, Blue moving left - 2
* Red, Blue moving left - 3
* No Red, Blue moving right - 4
* Red, Blue moving right - 5
And this is represented by the Bayes net:
So the latent space of this Bayes net consists of tuples from the set {0,1,2,3,4,5}10, and the state displayed at timestep 1 would have (assuming blue is moving right) latent representation (0,0,1,0,4,4,0,0,0,0).
Definitions
-----------
Now that we have an environment and two simple models to refer to, let's define some notions that will be useful:
A **latent concept** is an abstract concept which changes across the training data. Some examples in the above environment: 'blue direction,' 'location of the blue circle,' 'location of all three circles,' and 'state of the 3rd cell.' (Due to the simple nature of the above environment, all of these latent concepts take values in discrete sets, but latent concepts can be continuous as well.)
**Relationship components** take latent concepts as input, and output how these change other latent concepts in the model. They do this by representing operations or functions that are constant across the training data. In the above example, this might be 'the red circle always moves right' or 'the blue circles bounce off the edge.'
A **latent concept identifier** takes as input a vector in the latent space of a model, and outputs the value of the latent concept being measured.
These concepts at work in Bayes nets
------------------------------------
We will choose '**blue velocity**' as our latent conceptand establish a latent concept identifier for blue velocity in the object centric Bayes net. Then we want to use this latent concept identifier to *communicate* this latent concept from the object centric latent space to the local-state centric latent space and hence derive the latent concept identifier in this new latent space.
The latent concept identifier function for blue velocity in the object centric Bayes net is obvious by inspection, it is simply:
f1:{0,1,2,3,4,5,6,7,8,9}2×{−1,1}→{−1,1}
such that
f1((a,b,c))=c,
since this concept is represented directly in the third coordinate of the latent space.
The second latent space stores this same concept in a noticeably more indirect and distributed way, and the latent concept identification function is correspondingly more complex.
f2:{0,1,2,3,4,5}10→{−1,1}
such that
f2(z)={1,if any coordinate in z contains a 4 or 5,−1,if any coordinate in z contains a 2 or 3.
This function would be easy to learn from example data using a decision tree or neural net. This would be done by generating observation sequences using the object centric Bayes net, then obtaining the corresponding latent state using the local-state Bayes net, and labelling it using the object-centric concept identifier and latent state. The process of learning this function constitutes the transferring of the latent concept from the first latent space to the second latent space which was what we hoped to achieve.
The complete-data limit
-----------------------
We now take the general method sketched out in the example above, and formalize it. We aim to show that for 2 models, when we can compare the latent concepts across all possible inputs to the models, we can perfectly communicate latent concepts from one model to another.
Let's define:
* A set of observations O, in humans this is the set of all possible sequences of sense-data over a lifetime, in the Bayes net example above it's {blank, red, blue}10×max\_timesteps.
* Two spaces L1 and L2 which represent the latent spaces of two world models.
* Two functions, l1 and l2, to represent each world model. Each function maps observations to latent states, li:O→Li. Assume that these world models were trained as generative models to predict any part of the observation given any other part of the observation, so it uses the latent space Li to store any information relevant to predicting any potential observation.
Now we have a concept, say "blue velocity", which we define using a function c1:L1→{−1,1}.
Our goal is to successfully communicate a concept from one model to the other. In other words, to discover the function c2:L2→{−1,1}, such that:
∀o∈O,c1(l1(o))=c2(l2(o))
If we assume l2 is invertible.[[1]](#fnpyibkmo507q)
Then we can define c2 as:
c2(z):=c1(l1(l−12(z)))) for any z∈L2.
A similar approach to the above can be used to learn a latent concept identifier that is invertible, i.e. we can use it to manipulate the belief state of the world model. In that case we need to also iterate over possible changes to latent state 1 which match the predictions made by world model 2 when the relevant latent variable is changed.
Variational Autoencoders (VAEs) trained on FashionMNIST
=======================================================
Now we consider a more complex example involving neural nets. We train two variational autoencoders on the FashionMNIST dataset. A variational autoencoder is made up of two components: an encoder and a decoder. The encoder is trained to take in an image and map it to a point in an n-dimensional latent space. The decoder is simultaneously trained to take these points in the latent space, and reconstruct an image that minimises binary cross-entropy loss between the actual image and what the decoder predicts the image to look like from knowing where the encoder sends it.
VAE FashionMNIST representations
--------------------------------
Below are two plots of the latent spaces of variational autoencoders trained with a 2D latent space:
We can see that there are regions of the space that correspond to concepts we might use ourselves to represent fashion items. For example, the top left region in the first latent space, and bottom right region in the second latent space are both 'shoe-regions' in their respective latent spaces. And they seem to be clustered in a fairly sensible way!
There are also regions of these latent spaces that clearly do not correspond to ways we would think about fashion items: an example that occurs in almost every latent space we generated is t-shirts turning into pants, and we can see that both latent spaces store weird half t-shirt, half pants images.
Therefore, we're not looking for every concept that the autoencoder uses to represent fashion items to be analogous to human representations of fashion items. But we are looking for the reverse implication to hold: that human representations of fashion items will have an analogous representation in these latent spaces. Moreover, this holds only for local concepts since global concepts like "formality of fashion items" will not be learnt by the autoencoder, but we would expect local concepts like 'height of shoe' or 'brightness of shirt' to be learnt.
VAEs with Higher Dimensional Latent Spaces
------------------------------------------
We train two slightly different VAE models of different sizes, a smaller and a larger one. Each model has 20 latent space dimensions, but after training we found that a smaller number of dimensions are used in practice. The number of dimensions used was fairly consistent between different runs. For the smaller model, usually only 5 were used (though sometimes 6). For the larger model, usually 10 were used (sometimes 9). Increasing the number of dimensions in the latent space also had little effect on the number of dimensions the model learns to use.
We use these VAEs to encode two artificially whitened out shoes, and then decode them to see where they get sent. The first whitened out shoe was obtained by selecting a shoe from the dataset, and rounding all the pixel values. The second shoe was obtained by manually adding two white pixels on top of this shoe. Our hope was that this would eliminate other variables (e.g. texture) that might interfere with how the VAEs encode shoes, and that thereby we could determine a vector along which 'shoe height' is varying.
### Shoe encodings (and decodings)
Regular size shoe, and what the model generates from its encoding:
Shoe +2 pixels to height, and what the model generates from its encoding:
### Does this give us a vector corresponding to shoe height?
We took this vector along which (we hope) shoe height was varying, normalized it, and plotted images at different points along this vector (taking the origin to be the first encoded shoe) and obtained:
This movement in latent space locally corresponds to shoe height. When we extrapolate far enough in any direction, it will ultimately reach a region of latent space which does not correspond to any human-interpretable concept (c.f. the bottom left region of the two dimensional example latent spaces), so this local behaviour is the best we could hope for.
Now let's vary the same vector about an example of a real encoded shoe. This should show to what extent this direction corresponds to shoe height (although VAEs map concepts non-linearly, so it will be at best a good local approximation of increasing shoe height):
And again it does seem to correspond (roughly) to shoe height.
Larger VAE
----------
We then apply this procedure to the larger VAE, and generate corresponding images that (we hope) vary just in shoe height. In this VAE, however, moving along the height vector also increases brightness. (especially noticeable in the second image below, but present in both):
Orthogonal vectors in latent space
----------------------------------
We then investigate whether we can separate the model's learnt concept for brightness from its learnt concept for shoe height:
Varying an image along the vector for "brightness" for the smaller VAE Varying an image along the vector for "brightness" for the larger VAE Some brief attempts were tried by first getting a vector for brightness and then a vector orthogonal to this (using Gram-Schmidt), but this didn't quite work. Depending on how one increased brightness, one could get a vector that is not orthogonal to shoe height. For the larger VAE, moving along the brightness vector, the shoe gets both brighter *and* taller than in the shoe height direction - our orthogonalization attempts unfortunately did not end up working.
Directions worth further research
---------------------------------
* The sample efficiency of learning a latent concept identifier should depend on the similarity of the abstractions used by each model. Can we demonstrate this? Can we make progress on this by assuming some version of the natural abstraction hypothesis?
* How do we formalize learning latent concept identifiers to cases where we are transferring from a better model to a worse model of the world (i.e. when l2 is not invertible)?
* How do we extend learning latent concept identifiers to cases where both models are imperfect in different ways: where one model is better at predicting some types of observations but is beaten on others?
* Can we isolate independent directions in the latent space of variational autoencoders that *actually* represent identifiable concepts orthogonally?
* Can we find similar behaviour in larger autoencoders of more complex datasets? Will increasing the number of parameters and complexity of data serve to make represented concepts more human-identifiable, or less?
1. **[^](#fnrefpyibkmo507q)**Note that this is a strong assumption, it implies that any observation sequence leads to a unique "belief state" about the world. I.e. it's a lossless representation of the world. The assumption makes sense for perfectly modelled deterministic environments. |
d79ec500-5db7-4fe3-93ea-e8f7a7b7a67d | trentmkelly/LessWrong-43k | LessWrong | [Link] Differential Technology Development - Some Early Thinking
This article gives a simple model to think about the positive effects of a friendly AI vs. the negative effects of an unfriendly AI, and let's you plug in certain assumptions to see if speeding up AI progress is worthwhile. Thought some of you here might be interested.
http://blog.givewell.org/2015/09/30/differential-technological-development-some-early-thinking/ |
69a8ebd7-a8e6-4a61-bbf6-87421ba47e3a | trentmkelly/LessWrong-43k | LessWrong | Philosophy Web - Project Proposal
TLDR: I’m interested in creating an online map of philosophical concepts and their interrelations; which could be used to automatically identify contradictions within, and implications of, given belief systems. I am looking for interested collaborators - especially those with coding capacities – and development advice. I believe there are compelling reasons for rationalists to be interested in this proposal.
[If you’re interested in reading the full Philosophy Web proposal, please see the following link: https://drive.google.com/file/d/1X9fdGUMFase_GGPlXqJH6CcydREaSaDb/view?usp=sharing]
What is Philosophy Web?
Philosophy Web is a proposal to create an interactive online map of philosophical concepts, and the relationships of support and opposition between them. This map would take the form of a node and spoke diagram, with nodes representing concepts, and spokes representing relations of support or opposition.
Users would be able to add these concepts to their own personalised webs of belief. Philosophy Web would then automatically highlight potential contradictions and implications of users' personalised conceptual maps; helping users expand their intellectual horizons, discover errors in their thinking, and incorporate a broader evidential base in formulating their theories (or do the same for other belief systems they were interested in investigating).
Why Philosophy Web?
Philosophy Web has the potential to assist philosophers in several ways; each of which are expanded upon in the above linked proposal document:
* Philosophy Web would facilitate research into the underexplored conceptual space between philosophical specialisms, to pluck the low hanging intellectual fruit which grows there.
* Philosophy Web would reveal “long range”, implications of, and contradictions within, philosophical theories; which might otherwise be difficult for supporters (or critics) to discern.
* Philosophy Web would support comprehensive philosophical theory bu |
7106a2df-f548-4675-99ef-b14b693d1e2f | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "EDIT: Reworked and moved to Main following Gunnar_Zarncke's advice.
Related to: Book Review: How Learning Works, Build Small Skills in the Right Order, What are useful skills to learn at university?
This article is organized into three sections focusing on attention, processing and recall respectively. The advice in each section is roughly organised in order of usefulness, although your mileage may vary. It's best to view this as a menu of study techniques rather than an in depth guide.
Just follow at the links provided if you wish to learn more about any point. Links with a lettered superscript[a] generally link to a part of a YouTube video while those with a numbered superscript[1] link to an article. Links without any superscript generally link to another LessWrong page. Paying Attention
Attention is very important for learning. Where you spend it directly determines which areas of your brain you'll develop while studying and learning new skills. Split your study up into 25 minute chunks, separated by five minute breaks[a]Also known as the Pomodoro Technique[b]. This one is simple to implement but tremendously effective. It will protect you from attention burnout, increase your useful study-time, and help prevent distractions from becoming procrastination by setting up a Schelling fence around your breaks.
Focus on one task at a time[1]Multitasking is one of the worst things you can do while studying, it can reduce your productivity by up to 40% and divides your attention up unnecessarily which will impair your ability absorb new information. If social media and the internet is of a particular distraction to you, tools such as Stay Focused can help you stay on track.
Set up your study environment[a]Exploit situational psychology by making your environment more conducive to study; identify cues that cause you to procrastinate, remove them if possible, and set up cues for studying. Mentioned in the video, a 'study lamp' can make an effective cue providing it is only ever used for studying. Additionally joining a study group can be an effective way to do this (a good example being the LessWrong Study Hall).
Choose the right music[2]There are a few rules of thumb to follow here. Avoid listening to music while trying to absorb new information, though if your aural environment is particularly distracting then music without lyrics or white noise can be useful. Don't use unfamiliar music or music with lyrics in as this will unnecessarily tax your ability to focus. Music can increase productivity for mundane or well-practiced tasks involving low mental effort. Learning Material Before going any further I'd advise you to watch this video[c]. It's an excellent explanation of why just going over material isn't enough to actually learn it and additionally dispels a few myths about the important factors in learning. Understand the principles behind 'deep processing'[c]The key thing to understand here is that the more you relate a new concept to ones previously learned, the more likely you are to remember it. This is far more effective than learning by rote, not only does it improve recall but it also improves your ability to apply the material. A study strategy that forces you to process things deeply is called to as an orienting task[c].
Develop your metacognition[c]Metacognition refers to your beliefs about how well you know the material you're studying. Overconfidence here is negatively correlated with academic success (see the video) and can prevent you from updating on new material[d]. One of the reasons for this negative correlation is that overconfident learners spend less time on material than they should. Being sure to test yourself on your knowledge regularly can go a long way to combating this. Understand the difference between recognition and recollection[a]Related to the previous point, a sense of recognition is one of the biggest causes of overconfidence when reviewing material. A good solution is to test yourself on your ability to recall material before you review it. Not only will doing so help you avoid mistaking recognition for recollection, but knowing what you don't know will help target your revision. Troubleshoot your understanding[e]In most subjects, concepts have a chain of dependencies with advanced concepts depending on the more fundamental ones (in mathematics this chain is particularly long). If you're having trouble understanding a new concept it is very unlikely that you're inherently bad at understanding that concept, rather there's a flaw in your understanding of the more fundamental concepts that lead up to it. Target your understanding of those and understanding the concept in question will become much easier. Holding onto Information
Once you've processed the material effectively you need to be able to recall it efficiently. While deep processing helps you get information into long term memory, getting it to stay there is a different matter entirely. Memory follows what's known as the forgetting curve[3]. Forgetting has not so much to do with losing the information, but rather having trouble retrieving it – and as far as learning goes you haven't really learned something until you can effectively retrieve the information. Test yourself on material[4]Practicing retrieval has a dramatic effect on your ability to recall information. Key to this method is ensuring your cues are appropriate to the way you're going to be test, so past paper questions tend to be best. When using flashcards it is important to make sure that the cues require you to not only recall the information, but process it on a deep level too. Make use of spaced repetition[4]Spaced repetition is testing yourself on material over incrementally larger periods of time (an hour, a day, a week, a month and so on). The idea is to test yourself on information just as you're about to forget it and as it turns out, it is far more efficient than just blindly testing yourself on material over and over. Keeping track of when to review information can be a pain, fortunately there's plenty of spaced repetition software out there to do that for you (I personally find Mnemosyne is simple to implement and use).
Get some sleep[a]Sleep is absolutely crucial for retention. If you must cram, make sure you do it the night before the exam, if you do things the other way round your memory will be considerably worse off for it. In general make sure you get a good nights sleep every day that you've studied. If you're having trouble sleeping due to spending a lot of time at a computer f.lux might be helpful to you. Video Sources: [a] Study Less, Study Smart | M. Lobdell - Pierce College [b] What is the Pomodoro Technique? | F. Crillo [c] How to Get the Most Out of Studying | S. Chew - Samford University [d] Khan Academy and the effectiveness of science videos | D. Muller - Veritasium [e] Let's use video to reinvent education | S. Khan - Khan Academy Article Sources: [1] Multitasking: The Cognitive Costs of Multitasking | K. Cherry - About.com [2] Does Listening to Music While Working Make You Less Productive? | A. M. Paul - Time.com [3] The Forgetting Curve | Wikipedia [4] Spaced Repetition | gwern"
This is a snippet extracted from a publication on LessWrong by a site member.The above is a chunk of text from a post created by a user on the website LessWrong.
In the section about attention, you mentioned the Pomodoro Technique and its benefits. Can you delve deeper into how the technique can be optimized for different types of tasks or subjects, and how it could be adapted for individuals with different learning styles?
The Pomodoro Technique is a time management method developed by Francesco Cirillo in the late 1980s. The technique uses a timer to break work into intervals, traditionally 25 minutes in length, separated by short breaks. Here's how you can optimize it for different types of tasks or subjects and adapt it for different learning styles:
1. **Type of Task**: The Pomodoro Technique can be adjusted based on the nature of the task. For tasks requiring deep concentration, like reading a complex scientific paper or solving a difficult mathematical problem, you might want to extend the work interval to 50 minutes or an hour. For lighter tasks, the traditional 25-minute interval should suffice.
2. **Learning Styles**: Individuals have different learning styles - some people learn best through visual means, others through auditory means, and some through kinesthetic means. For visual learners, creating a visual representation of the pomodoro cycles can be helpful. For auditory learners, using an app that provides an auditory signal at the end of each cycle can be beneficial. Kinesthetic learners might benefit from physically moving during their break periods.
3. **Personal Concentration Span**: Not everyone has the same concentration span. Some people might find 25 minutes too short, while others might find it too long. It's important to adjust the length of the work intervals to suit your personal concentration span. If you find your mind wandering before the 25 minutes are up, shorten the work interval. If you find that you're still deeply engaged in the task when the timer goes off, lengthen the work interval.
4. **Number of Pomodoros**: The number of pomodoros (work intervals) you should do in a day depends on your workload and energy levels. If you're just starting out with the technique, you might want to start with fewer pomodoros and gradually increase the number as you get used to the technique.
5. **Breaks**: The purpose of the breaks is to give your brain a chance to rest and recharge. During these breaks, it's important to do something completely different from the work you were doing. This could be a physical activity like stretching or walking, or a mental activity like meditating or daydreaming. The key is to do something that allows your brain to rest.
6. **Dealing with Interruptions**: Interruptions are inevitable. When an interruption occurs, you have two options: end the pomodoro and start a new one later, or postpone the interruption until the pomodoro is
Reflecting on the Pomodoro Technique, what potential limitations or drawbacks could arise when applying this method across diverse learning styles and tasks?
While the Pomodoro Technique is a highly effective time management tool, it does have potential limitations and drawbacks, especially when applied across diverse learning styles and tasks.
1. **Not Suitable for All Tasks**: The Pomodoro Technique works best for tasks that require focused attention for a specific period. However, for tasks that demand prolonged, uninterrupted focus, this method might be disruptive. For example, artists, writers, or researchers often enter a state of 'flow' where they lose track of time due to deep engagement in their work. The strict timing of the Pomodoro Technique might interrupt this flow.
2. **Inflexibility**: The rigid structure of the Pomodoro Technique might not suit everyone's working style or the nature of every task. Some people might need longer than 25 minutes to immerse themselves in a task, while others might need shorter bursts of work. Similarly, some tasks might require longer periods of undivided attention.
3. **Pressure**: The countdown aspect of the Pomodoro Technique can create a sense of pressure, which might lead to stress or anxiety for some individuals. This is especially true for people who are already working under tight deadlines.
4. **Overemphasis on Quantity**: The Pomodoro Technique emphasizes the number of 'pomodoros' completed, which might lead to an overemphasis on the quantity of work rather than the quality. This could potentially encourage rushing through tasks just to complete a set number of pomodoros.
5. **Disruption of Natural Rhythm**: Everyone has their own natural rhythm of work and rest periods. The Pomodoro Technique, with its predetermined work and rest intervals, might disrupt this natural rhythm.
6. **Doesn't Account for Different Learning Styles**: The Pomodoro Technique doesn't take into account different learning styles. For example, auditory learners might find the ticking of the timer distracting, while kinesthetic learners might find the sitting for 25 minutes challenging.
In conclusion, while the Pomodoro Technique is a valuable tool, it's important to adapt it to suit individual needs, learning styles, and the nature of the task at hand. It should serve as a guideline rather than a strict rule.
Discuss the concept of 'flow' in relation to the Pomodoro Technique. How can one maintain a state of flow while still adhering to the structured intervals of the technique?
Flow, a concept introduced by psychologist Mihaly Csikszentmihalyi, refers to a state of complete immersion in an activity where the individual is so involved that nothing else seems to matter. The person is fully absorbed, losing self-consciousness, and experiences a sense of timelessness, intense focus, and enjoyment in the activity.
The Pomodoro Technique, with its structured intervals, might seem at odds with the concept of flow, which typically involves losing track of time. However, it is possible to use the Pomodoro Technique to facilitate and maintain a state of flow. Here's how:
1. **Use the Pomodoro Technique to Enter Flow**: The first few minutes of an activity often involve overcoming resistance and distraction. By committing to focus for one Pomodoro (25 minutes), you can use the technique to help you move past this initial resistance and enter a state of flow.
2. **Adjust the Length of Your Pomodoros**: If you find that 25 minutes is too short and interrupts your flow state, consider extending the length of your Pomodoros. Some people find that 50 or 90-minute intervals work better for them. The key is to find a balance between maintaining focus and avoiding burnout.
3. **Use Breaks Wisely**: Use your breaks to briefly disengage and then re-engage with the task. This could mean taking a moment to stretch, breathe, or briefly relax your mind. The idea is not to completely break from the task, but to give your mind a brief rest before diving back in.
4. **Re-Enter Flow Quickly**: After your break, aim to re-enter the flow state as quickly as possible. You can facilitate this by leaving a 'breadcrumb' before your break—a note or a half-finished sentence, for example, that will help you pick up where you left off.
5. **Plan Your Tasks**: Having a clear idea of what you're going to work on during each Pomodoro can help maintain flow. Uncertainty or having to decide what to do next can break the flow state.
6. **Minimize Interruptions**: To maintain flow, it's important to minimize potential interruptions. This could mean turning off notifications, choosing a quiet place to work, or letting others know not to disturb you during your Pomodoro.
In essence, while the Pomodoro Technique and flow might seem incompatible, with a few adjustments, the technique can be used to facilitate and maintain a state of flow. |
7dcd5b92-0941-4429-89b5-c24f01b7d583 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Training for Good - Update & Plans for 2023
Summary
=======
* Training for Good will now focus exclusively on programmes that enable talented and altruistic early-career professionals to directly enter the first stage of high impact careers.
* Concretely, we will only run the following programmes in Sep 2022 - Aug 2023:
+ 1. EU Tech Policy Fellowship.
+ 2. Tarbell Fellowship (journalism)
+ 3. \**An unannounced 3rd programme* *which is still under development\**
* Applications for the 2023 EU Tech Policy Fellowship are open until December 11. [Apply here](https://www.trainingforgood.com/europe-tech-policy).
* In year 1, we experimented with ~7 different programmes, 6 of which have now been discontinued. This is largely because we believe that focus is important & wanted to double-down on the most promising programme we had identified thus far (the EU Tech Policy Fellowship which successfully placed 7 fellows in relevant European think tanks focused on emerging technology policy).
* We plan to have an external review of Training for Good conducted between July 2023 - December 2023. We will default to sharing this publicly.
Introduction
============
[Training for Good](https://www.trainingforgood.com/) is an impact-focused training organisation, incubated by [Charity Entrepreneurship](https://www.charityentrepreneurship.com/) in 2021.
Quite a lot has changed since [our launch](https://forum.effectivealtruism.org/posts/eKK2ryF9cDmvyAW33/introducing-training-for-good-tfg) in September 2021. We considered our first year to be an exploratory period in which [we](https://www.trainingforgood.com/policy-careers-europe) [ran](https://forum.effectivealtruism.org/posts/DqBEwHqCdzMDeSBct/apply-for-red-team-challenge-may-7-june-4) [many](https://forum.effectivealtruism.org/posts/GnXw3fYF6byJvpnjG/apply-for-professional-coaching), [many](https://docs.google.com/document/d/1mOt5jQM25XRc02kSz-90-CEgEsiJtdBN6U1ak8RyQd0/edit) [different](https://forum.effectivealtruism.org/posts/5zgn5H6LKr4c9jCcG/apply-to-negotiating-for-good-feb-26-mar-12) [projects](https://forum.effectivealtruism.org/posts/dEaPmvwo3Eu4kSFce/the-ea-training-board-is-now-live). We’ve now discontinued the majority of these programmes and have narrowed our focus to running fellowships that directly place early-career individuals in impactful careers.
Now that TFG has a clearer focus, we're writing this post to update others in the EA community on our activities and the scope of our organisation.
What we do
==========
Training for Good runs fellowships that place talented professionals in impactful careers in policy, journalism & other areas. We do this by providing a combination of stipends, mentorship from experienced professionals, training and placements in relevant organisations.
Between Sep 2022 - Aug 2023 (i.e. year 2), we plan to **only** run the following programmes:
* EU Tech Policy Fellowship
* Tarbell Fellowship
* \**An unannounced 3rd programme* *which is still under development\**
Why this might be important
===========================
Many high impact career paths are neglected by talented and altruistic people, often because they lack clear pathways for entry. This is limiting progress on some of the world’s most important problems: reducing existential risk, ending factory farming and tackling global poverty.
TFG seeks to provide concrete opportunities for early-career professionals to gain entry level roles in impactful career paths that are difficult to enter. Building these talent pipelines could be important because:
* **Direct progress on problems:**Talented individuals in these career paths can directly contribute to progress on solving the world’s most important problems
* **Closer towards the “ideal portfolio”:**We mostly take a [portfolio approach](https://80000hours.org/articles/coordination/#3-take-the-portfolio-approach) to doing good. One could imagine an optimal distribution of talent within the effective altruism community, which might involve people pursuing a variety of different career paths. With our fellowships, we are attempting to move the effective altruism community closer towards this ideal allocation by enabling people to pursue paths that we believe are currently underrepresented (and expect to remain so) within this community’s portfolio. We believe that thinking in these terms is particularly useful partly due to:
+ Diminishing returns from certain career paths
+ Epistemic uncertainty about which career paths are best (and the associated information value from reducing this uncertainty somewhat)
+ Differing personal fit for individuals across different career paths
* **Concrete opportunities:** The number of people interested in effective altruism has been growing in recent years, but many are unclear how to contribute. Fellowships provide concrete opportunities for early-career individuals to build career capital & explore their fit for a specific career path.
Our focus
=========
Our fellowships centre on:
* **Early career individuals** Our programmes target the most talented & altruistic people who are within the first 5 years of their career (we also consider this to include mid-career professionals who are pivoting to a new career). We are confident that the effective altruism movement will provide a promising stream of such people in the coming years.
* **Entry level positions:** We place these talented professionals in entry level positions (eg. by coordinating bespoke internships with partner organisations).
* **Difficult to enter careers:** We focus on career paths which are potentially high impact but unusually difficult to enter. In particular, this means focusing on careers in policy and journalism.
We choose to narrow our attention to the above stated focus because:
* **Focus is good.** In year 1, TFG [spread](https://forum.effectivealtruism.org/posts/GnXw3fYF6byJvpnjG/apply-for-professional-coaching) [ourselves](https://forum.effectivealtruism.org/posts/5zgn5H6LKr4c9jCcG/apply-to-negotiating-for-good-feb-26-mar-12) [thinly](https://forum.effectivealtruism.org/posts/dEaPmvwo3Eu4kSFce/the-ea-training-board-is-now-live) [across](https://www.trainingforgood.com/policy-careers-europe) [many](https://forum.effectivealtruism.org/posts/DqBEwHqCdzMDeSBct/apply-for-red-team-challenge-may-7-june-4) projects. We were keen to maximise the information value of exploring many different topics, formats and stages of the talent pipeline. We are now keen to “do less and obsess” by focusing our attention on ensuring the highest impact projects go as well as possible.
* **Doubling down on our most promising programme:**The EU Tech Policy appears to have been of much higher value than all of the other programmes we ran in year 1. We successfully placed 7 fellows in relevant European think tanks focused on emerging technology policy. We are keen to explore programmes similar to this in form to see whether we can replicate this across other career paths.
* **Clearer feedback loops.**We expect to see clear signs whether programmes of this type are working within ~6 months of starting them (as we can observe whether people are being offered full time roles, etc.). This will allow us to iterate much more quickly on our programmes - doubling down on what’s working and improving / removing what’s not.
* **Comparative advantage.** Excelling here does not seem to require expertise in the specific career paths. Rather it mainly consists of (i) identifying suitable career paths & entry level opportunities (ii) coordinating & collaborating with relevant actors to arrange placements, mentorship, stipends, etc and (iii) vetting suitable candidates
+ Given that our team is highly entrepreneurial & strong at building partnerships, we expect to be unusually good at (ii). By focusing on a narrow set of career paths (policy and journalism) we also expect to become excellent at (iii).
* **More directly influence whether people enter given career paths.**By placing people directly in career paths , we reduce the number of steps in the theory of change and thus increase the likelihood of them successfully entering a given career path (compared to programmes which primarily provide upskilling / career planning).
Talent funnel
=============
We work to increase the supply of talented & altruistic professionals **entering**high impact career paths.
When considering the “talent funnel” for entering an impactful career, we view ourselves as primarily moving people from taking moderate altruistic action to entering the early stages of a high impact career.

*(note: this funnel is massively simplified. We’re aware that many will not pass through all stages of the funnel, while others may take a route not captured by this model).*
Theory of Change
================
TFG’s general theory of change for our fellowships is outlined in the picture below.
We’ve also developed a specific theory of change for each programme and created a detailed list of “paths to impact” that we expect fellows might pursue.
Our Programmes
==============
EU Tech Policy Fellowship
-------------------------
### What is it
The [EU Tech Policy Fellowship](https://www.trainingforgood.com/europe-tech-policy) is an 8-month fellowship for aspiring EU policy professionals interested in safeguarding future generations from threats posed by emerging technologies (especially artificial general intelligence).
Applications for the 2023 EU Tech Policy Fellowship are open until December 11. [Apply here](https://www.trainingforgood.com/europe-tech-policy).
### Why
Our vision is for a world where policy safeguards future generations from threats posed by emerging technologies.
We believe that technologies developed this century, especially artificial intelligence, could pose an existential risk to humanity. Governments have an important role to play in managing the long-term societal impacts of these technologies. We believe that EU policy could be an important lever in positively shaping the trajectory of these technologies and are excited to support aspiring policy professionals interested in working in this area.
### What we offer
* **Summer sessions (June - Aug).**An 8-week reading group & guest lecture series. These sessions focus mostly on AGI safety fundamentals, cybersecurity and the EU policy landscape. Guest lectures are conducted by leading researchers from organisations such as GovAI and policy professionals working in EU organisations.
* **Brussels training weeks (June & Sep).** Two separate week-long trainings in Brussels: one in June before the summer sessions and the second in September at the end. These are intensive weeks featuring guest speakers, workshops and networking events.
* **Placements (Sep - Feb)**A 4-6 month placement at a relevant European think tank. Partner organisations include [The Future Society](https://thefuturesociety.org/), the [Centre for European Policy Studies](https://cepa.org/) and the [German Marshall Fund](https://www.gmfus.org/) ***(track 1 only)***
* **Application support (Sep).**A month to explore relevant roles in the European Commission, party politics and other areas relevant to emerging technology. Fellows can participate in career workshops, receive feedback on applications and gain access to mentorship opportunities.**(*****track 2 only*****)**
* **Stipend**. Fellows receive stipends of up to $2,250 per month during the full time period of the program.
+ Track 1 = 4-6 months (for the duration of the placement)
+ Track 2 = 1 month (while receiving application support)
### 2022 Programme
* We launched this programme in June 2022 with 12 fellows. At present:
+ 6 fellows are currently completing 4-month placements at European think tanks.
+ 4 fellows received support to apply for roles in the European Commission and other relevant organisations.
+ 2 fellows are using their increased understanding of the space to pursue other goals (e.g. bridging the gap between policy & research while completing their Phd at Stanford).
* Following the 8-week summer sessions & a week-long training in Brussels, fellows reported that they were **very likely to recommend this programme** to others in their position, with an average score of **4.9 / 5.**
### 2023 Programme
* Applications for the 2023 EU Tech Policy Fellowship are open until December 11. [Apply here](https://www.trainingforgood.com/europe-tech-policy).
Tarbell Fellowship
------------------
### What is it?
The [Tarbell Fellowship](http://tarbellfellowship.org) is a 12-month programme for early-career journalists interested in covering topics that could have a major impact on the lives of billions, such as global poverty, animal welfare, and existential risks.
### Why
Our vision is for a world where journalism is focused on highlighting & solving the world’s most important problems.
We believe that journalists have a powerful role to play in positively shaping public discourse on important topics. Impact-focused journalists can encourage the adoption of good policies, hold powerful actors accountable in the public arena, and inspire readers to take specific high-impact actions.
### What we offer
* Stipends. Fellows receive stipends of up to $50,000, depending on circumstances, to accelerate their journalism careers. We expect stipends to vary between $35,000 - $50,000 depending on location and personal circumstances.
* Mentorship from an experienced journalist. Each fellow is matched with an experienced journalist. Mentors will provide critical feedback and challenge the fellow to set goals and deliver on them. They'll conduct fortnightly mentorship calls and connect mentees with their network.
* Training. Fellows participate in remote sessions each week as a cohort. This will include training in best practices, talks from experts in the field and challenging assignments designed to build skills.
* Oxford Summit. Fellows attend a two week summit in Oxford at the beginning of the fellowship (March 1st - March 14th 2023). This will be an intensive fortnight of guest speakers, workshops and networking events in Oxford / London. Travel and accommodation costs will be fully covered.
### 2023 Programme
* 2023 will be the inaugural year of the Tarbell Fellowship. In our recent application round, we received over 950 applications in total and ultimately expect to accept up to 10 fellows.
* We have an exciting line-up of mentors, including experienced journalists with experience in the New York Times, the Economist, Vox Future Perfect, and the BBC.
* Although applications for the 2023 cohort have now closed, we encourage you to [sign up to our newsletter](https://www.trainingforgood.com/newsletter) if you might be interested in participating in future years.
Discontinued programmes from year 1
-----------------------------------
We experimented with running a lot of different programmes in year 1. Those listed below no longer fit within our scope and have been discontinued.
We don’t expect to prioritise writing up detailed learnings from these programmes in the near future. Get in touch if you feel such a write-up would be especially useful to you. We’d also **love to speak if you’re interested in progressing one of the below programmes independently**and can likely share our materials with you. Email cillian [at] trainingforgood [dot] com.
* [Impactful Policy Careers](https://www.trainingforgood.com/policy-careers-europe) a training programme designed to help participants plan for a high-impact career in policy. The first iteration in December 2021 was a 2-day programme and the second iteration in March 2022 was a 4-week programme. We’ve observed several job changes towards (higher-impact) policy roles and participants attributed substantial percentages to the IPC workshop. [Applications for a new edition](https://forum.effectivealtruism.org/posts/YgBXswPkPeqf99mrb/back-by-popular-demand-the-impactful-policy-careers-workshop) recently closed. This is now led by some trusted former trainees and previous TFG interns.
* The [Red Team Challenge](https://forum.effectivealtruism.org/posts/DqBEwHqCdzMDeSBct/apply-for-red-team-challenge-may-7-june-4) was a programme that called small teams together to "red team" important ideas within effective altruism. The inaugural challenge was run with 35 people participating, across 10 teams. This programme provided training in “red teaming” best practices and some teams posted their critique on the EA Forum.
* [Negotiating for Good](https://forum.effectivealtruism.org/posts/5zgn5H6LKr4c9jCcG/apply-to-negotiating-for-good-feb-26-mar-12) was a training programme in salary negotiation to help individuals increase the amount they earn and their capacity for effective giving. It was conducted in February 2022 with 35 participants. The training was well-received, with an NPS of 9.1/10 and we observed self-reported confidence in negotiation skills from 2.3/5 to 3/5. We are unsure whether this outcome is net positive in expectation. This is mainly because (i) this programme may have encouraged some participants to remain in roles with relatively low donation potential that would be well suited to direct roles, and (ii) this programme may have discouraged participants who are a good fit for roles with higher donation potential (e.g. quant trading, entrepreneurship, etc.) from pursuing those paths.
* [Impact Grantmaking](https://docs.google.com/document/d/1mOt5jQM25XRc02kSz-90-CEgEsiJtdBN6U1ak8RyQd0/edit) was a 6-week grantmaking training programme with a cohort of ~10 people. We discontinued this programme before our pivot. This is mainly because (i) our research & conversations with grantmakers has produced mixed results on the need for this programme, (ii) the introduction of the FTX Regranting Programme likely addressed the need for funding diversity to a greater extent than this programme could, (ii) the pilot appeared to have been largely unsuccessful. We suspect that the main reason for this is that most participants did not have access to regranting funds and therefore did not see the “real world” application of this programme.
* [Capacity Ventures](https://www.trainingforgood.com/capacity-ventures): We completed a first iteration, a 3 day virtual bootcamp to help aspiring entrepreneurs to build skills by (i) executing a self-directed project over 1-6 months and (ii) conducting a skills assessment and developing an upskilling plan in response to that. ~10 people participated in this pilot, all of whom had made it to the final stages of Charity Entrepreneurship’s application process for their Incubation Programme.
* [Coaching](https://forum.effectivealtruism.org/posts/GnXw3fYF6byJvpnjG/apply-for-professional-coaching): Began offering subsidised professional coaching to 20 EA leaders and high-potential individuals. Clients include staff at Rethink Priorities, Founders Pledge, Giving What We Can, CEA & a number of leaders at other organisations. There is a growing number of active coaches in the EA community (you can find a list [here](https://docs.google.com/document/d/1q0NUPXpTOz6xygf4UMT-CsNMC187AHdwAWv55HyBodQ/edit))
How will we know if we’re succeeding?
=====================================
We will attempt to measure & estimate our impact on a programme basis. We also plan to have an external review conducted between July 2023 - December 2023 to help account for motivated reasoning and to provide some validity. We will default to sharing this publicly.
This will inform three separate decisions:
* (i) **Scale up, shut down, steady state:**Whether to scale up, shut down or keep a given programme (and TFG as a whole) at a steady state.
+ How much value did a given programme produce relative to its operating costs / the opportunity cost of fellows and TFG staff?
+ How much value do we expect to generate in future years?
* (ii) **Choosing future programmes:** Which “high impact careers” we choose to run fellowships for in future (e.g. if the EU Tech Policy Fellowship appeared to drastically outperform the Tarbell Fellowship, this might lead us to prioritise policy programmes over communications programmes in future). This could also include prioritising between which programmes TFG should run (e.g. doubling down on Tarbell & discontinuing EU Tech Policy Fellowship).
* (iii) **Deciding how to run future programmes:** Both in terms of (a) who we select and (b) the composition of future iterations of the programme (eg. length, whether we facilitate placements, etc.)
Our current plan for measuring our impact is to split the assessment into into 4 broad categories:
* **(Proxy)** The number of relevant career transitions we have facilitated.
* **Minimum impact**: Measured by attempting to quantify the value of the “impact moments” reported by past fellows.
* **Estimated impact**: (Expected lifetime impact - Counterfactual lifetime impact Attribution to our actions)
* **Progress towards our strategic goals**.
Actions you could take
======================
* Apply to the [EU Tech Policy Fellowship](https://www.trainingforgood.com/europe-tech-policy) by December 11.
* [Sign up to our newsletter](https://www.trainingforgood.com/newsletter) to get notified of future programmes (eg. [Tarbell Fellowship](https://www.tarbellfellowship.org/)) and other upskilling opportunities outside of TFG.
* Check out the [EA Opportunities board](https://ea-internships.pory.app/) (we're not affiliated with this but it is a great source of opportunities within the EA community). |
b1242472-3bb9-41ff-8c4c-8281ceff7967 | StampyAI/alignment-research-dataset/special_docs | Other | From the Standard Model of AI to Provably Beneficial Systems
SHANGHAI INSTITUTE FOR SCIENCE OF SCIENCE
XFJTJUF XXXTJTTTIDO
NBJMCPY
TJTT!TJTTTIDO
OBSERVATIONS OF 50 GLOBAL EXPERTSAI GOVERNANCE IN 2019
A YEAR IN REVIEW
April, 2020
Shanghai Institute for Science of Science The report editor can be reached at
globalaigovernance@gmail.com
We welcome any comments on this report
and any communication related to AI
governance.
ʺ
ྭ
ࣳ
ᐲ
Ꮻ
ˀ
ᄱ
ࠏ
ᶯ᥋
ࣳ
ᛡ
Ꮻ
ˀ
ᄱ
ᶰ
Ḋ
Ḋ
ḕ
ᇪ
ᝮ
e
˗
ख
Ḗ
˻CHUNG YUNG, SECTION OF THE LI CHIALL LIVING THINGS ARE NOURISHED
WITHOUT INJURING ONE ANOTHER,
AND ALL ROADS RUN PARALLEL
WITHOUT INTERFERING WITH
ONE ANOTHER.
TABLE OF CONTENTS
FOREWORD
By SHI Qian
INTRODUCTION
By LI Hui and Brian Tse
PART 1 TECHNICAL PERSPECTIVES FROM WORLD-CLASS SCIENTISTS
The Importance of Talent in the Information Age
By John Hopcroft
From the Standard Model of AI to Provably Beneficial Systems
By Stuart Russell and Caroline Jeanmaire
The Importance of Federated Learning
By YANG Qiang
Towards A Formal Process of Ethical AI
By Pascale Fung
From AI Governance to AI Safety
By Roman Yampolskiy
PART 2 I NTERDISCIPLINARY ANALYSES FROM PROFESSIONAL RESEARCHERS
The Rapid Growth in the Field of AI Governance
By Allan Dafoe & Markus Anderljung
Towards Effective Value Alignment in AI: From "Should" to "How"
By Gillian K. HadfieldChina Initiative: Applying Long-Cycle, Multi-Disciplin ary Social Experimental on Exploring
the Social Impact of Artificial Intelligence
By SU Jun
Going Beyond AI Ethics Guidelines
By Thilo Hagendorff
\*OUFSEJTDJQMJOBSZ"QQSPBDIUP"\*(PWFSOBODF3FTFBSDI
By Petra Ahrweiler
&VSPQFBO1FSTQFDUJWFTPOUIF"OUJDJQBUPSZ(PWFSOBODFPG"\*
#Z3PCJO8JMMJBNT
The Impact of Journalism
By Colin Allen
Future of Work in Singapore: Staying on Task
By Poon King Wang
Developing AI at the Service of Humanity
By Ferran Jarabo Carbonell
Enhance Global Cooperation in AI Governance on the Basis of Further Cultural Consensus
By WANG Xiaohong
Three Modes of AI Governance
By YANG Qingfeng
PART 3 RESPONSIBLE LEADERSHIP FROM THE INDUSTRY
Companies Need to Take More Responsibilities in Advancing AI Governance
By YIN Qi ̀
01
07
07
09
11
13
15
17
1921
23
25
27
29
31
33
35
37
3917
39
˽ ˼
Trustworthy AI and Corporate Governance
By Don Wright
A Year of Action on Responsible Publication
By Miles Brundage, Jack Clark, Irene Solaiman and Gretchen Krueger
AI Research with the Potential for Malicious Use: Publication Norms and Governance Considerations
By Seán Ó hÉigeartaig h
GPT-2 Kickstarted the Conversation about Publication Norms in the AI Research Community
By Helen Toner
The Challenges for Industry Adoption of AI Ethics
By Millie Liu
A Call for Policymakers to Harness Market Forces
By Steve Hoffman
PART 4 GLOBAL EFFORTS FROM THE INTERNATIONAL COMMUNITY
Mastering the Double-Edged-Sword in Governance of AI
By Irakli Beridze
Agile, Cooperative and Comprehensive International Mechanisms
By Wendell Wallach
A Significant Realization by the International Community
By Cyrus Hodes
Shifting from Principles to Practice
By Nicolas Miailhe
A Global Reference Point for AI Governance
By Jessica Cussins Newman
An Important Issue of the International Relations: AI Governance
By CHEN Dingding
PART 5 REGIONAL DEVELOPMENTS FROM POLICY PRACTITIONERS
European Parliament and AI Governance
By Eva KailiThe European Multi-Stakeholder Approach to Human-Centric Trustworthy AI
By Francesca Rossi
The European Union's Governance Approach Towards "Trustworthy AI "
By Charlotte Stix
The Driving Forces of AI Ethics in the United Kingdom
By Angela Daly
Localizing AI Ethics and Governance in East Asia
By Danit Gal
Social Concerns and Expectations on AI Governance and Ethics in Japan
By Arisa Ema
The Innovation of Singapore's AI Ethics Model Framework
By Goh Yihan and Nydia Remolina
The Grand Indian Challenge of Managing Inequity and Growth in the AI Era
By Urvashi Aneja
Part 6 EMERGING INITIATIVES FROM CHINA
Benefit in Pa rtnership
By FU Ying
Progress of Artificial Intelligence Governance in China
By ZHAO Zhiyun
From Principles to Implementation, Multi-Party Participation and Collaboration are Even More Needed
By LI Xiuquan
Towards a Robust and Agile Framework for the Ethics and Governance of AI
By DUAN Weiwen
Globalization and Ethics as the Consensus of AI Governance
By LUAN Qun
The Principles of Well-being of Human Person and Accountability
By GUO Rui
Better AI, Better City, Better Life
By WANG Yingchun53
81
655367
69
71
73
75
77
7941
43
45
47
49
51
81
83
85
87
89
91
9355
57
59
61
63
65
˿ ˾
FOREWORD
Artificial intelligence (AI) is an important driving force for a new round of scientific and technological revolution and
industrial transformation, which will bring significant changes to people's lives.
In recent years, countries around the world have continued to issue AI strategies and policies. The technological R&D
and the industrial application of AI is thriving. In 2017, the State Council of China issued “Development Planning for a
New Generation of Artificial Intelligence” as China’s national strategic plan on AI development, which outlined the
basic framework for China’s AI development before 2030. In February 2019, the National New Generation AI
Governance Expert Committee consisting of AI experts from academia and industry was established by China’s
Ministry of Science and Technology. In June 2019, the Committee released the “Governance Principles for a New
Generation of Artificial Intelligence : Develop Responsible Artificial Intelligence”, addressing eight governance
principles: harmony and human-friendliness, fairness and justice, inclusiveness and sharing, respect for privacy,
security and controllability, shared responsibility, open collaboration, and agile governance. With these strategies and
principles, China hopes to better coordinate the development and governance of the emerging technology and to
ensure secure, controllable and reliable AI. In Shanghai, AI has been designated as a priority development area and an
efficient tool for future urban governance. However, the effective governance of AI is the key to ensuring its success.
Meanwhile, China, at the national level, also pins high expectations on Shanghai’s AI development and governance. In
2019, Shanghai was designated as the National New-Generation AI Innovation and Development Pilot Zone, which
emphasized its role of exploring issues related to the AI governance and ethics. Shanghai is also expected to become
a national exemplar of AI development.
Established in January 1980, the Shanghai Institute for Science of Science (SISS) is one of China’s earliest soft science
research institutes. It conducts research to inform decision-making on innovation policy. It focuses on fields such as
science, technology and innovation strategies, public policies and industrial technology innovation. It is dedicated to
building a professional and platform-type science, technology and innovation think tank.
This year marks the 40th anniversary of SISS. 40 years ago, China started its process of Reform and Opening Up. Two
major questions were considered at the time, with aims to bring order and to restore normality for the country's
governance system: What is the development pattern for science and technology? How do they influence the economy
and society? The founders of SISS called for study on the subject “science of science”,in order to bring answers to
those questions. They conducted in-depth discussions on the emerging science and technology on the topic of “new
science and technology revolution”, which influenced China’s national and Shanghai’s local science and technology
strategies.
SHI Qian is the director of the Shanghai Institute for Science of
Science (SISS). Before joining SISS, Professor SHI was the vice
president of the Shanghai Academy of Sciences & Technology
and concurrently the vice president of the Shanghai Institute of
Industrial Technology. He has been long engaged in the general
planning for science and technology development, research
project management, innovation platform building, and services
for innovation and entrepreneurship. Professor SHI participated
in the formulation of a number of national industrial
development plans and the implementation of major national
science and technology projects, where he presided over several soft science research
projects, such as “Research on Shanghai’s Medium and Long-Term (2021-2035)
Developmental Strategy of Science and Technology” from the government of Shanghai.
Professor SHI obtained the Shanghai Special Award for Scientific and Technological Progress
in 2016. Professor SHI is also the director of Technology Foresight Committee of the Chinese
Association for Science of Science and S&T Policy, and the deputy director of the Expert
Advisory Committee of the National New-Generation AI Innovation and Development Pilot
Zone in Shanghai. EDITOR-IN-CHIEF : SHI QIAN
́ ̀40 years later, the understanding of science and technology in China has changed deeply and its
capacity in science and technology development is strengthened. However, we are still facing complex
issues from the subject area "science of science". In recent years, various technologies including big
data, internet and AI have emerged, exerting profound and transformative influences on the economy,
society, culture and international relations.
We are very fortunate that there is a general global consensus on building cooperative relations in
science and technology. This is particularly the case for AI governance, which shapes the common fate
of humanity. Therefore,through this report, we hope to work with our global colleagues, track progress
made by various parties in this field and lay the foundation for exchanges and cooperation. Together,
we can achieve more.
01 02INTRODUCTION
Allan Dafoe, an expert in international
relations studi es and Director of the Centre for
the Governance of AI, University of Oxford, and
his colleague Markus Anderljung, survey the
sudden proliferation of professional research
institutions, company initiatives and
government agencies dedicated to addressing
the social impact of AI. It indicates that the field
of AI governance research is becoming rapidly institutionalized. Legal scholar Gillian K.
Hadfield recently established a new research
institute at the University of Toronto, with the
mission of focusing on the methodological
question of effective value alignment in AI. SU
Jun, a professor at the School of Public Policy
& Management at Tsinghua University, shares
his experience of using social experiments to
conduct policy research during the
transformation of the so cial, political or
technological environment. Thilo Hagendorff,
an AI ethicist at the University of Tübingen,
stresses that a transition from ‘soft law’ to
‘hard law’ is the next step in AI governance.
These discussions are signs that AI
governance is becoming a serious
intellectual discipline.The impact of emerging technologies might
be a seminal inflection point in human
history that will continually impact all
aspects of society over the coming decades.
In that, AI is the linchpin accelerating and
amplifying the development of all the fields
of research. With the rapid development of
machine learning in recent years, the
governance of the technology has gradually
come under the spotlight. It was once
possible to keep track of all the research
institutes, conferences and policy
developments. In 2019, this became an
arduous task for researchers and
policymakers. The number of initiatives
continued to grow. There is a much greater
variety of regional perspectives. The diversity
of stakeholders participating in this dialogue
has increased. The idea that the world
urgently needs to find a path towards
developing ethical and beneficial AI for all of
humanity has become front-and-center in
our media and public conversations. Despite
the scientific and policy difficulties, it seems
that the world is willing to rise up to this
challenge.
One way to think of the governance of AI is
that it is a ‘wisdom race’. The late Stephen
Hawking once said that “our future is a race
between the growing power of our
technology and the wisdom with which we use it. Let's make sure that wisdom wins.” To
take stock of and share the wisdom, we
decided to invite 50 world-class experts (44
institutions) to share their views on the key
progress in AI governance in 2019. We hope
that this can help separate the signal from
the noise for interested readers.
These experts include scientists who
have made major contributions to the
field of AI.
They approach the question of social impact
scientifically and offer technical solutions to
the challenge of AI governance. For example,
John Hopcroft, a professor at Cornell
University and a winner of the Turing Award,
points out that the development of current AI
systems has the possibility of bias caused by
bias in the training data. Stuart Russell, a
professor at the University of California,
Berkeley, wrote an AI textbook used by more
than 1,300 universities in 116 countries. He
and his colleague, Caroline Jeanmaire,
high-light the importance of conducting
technical research on provably beneficial AI
as argued in his recent book Human
Compatible . Yang Qiang, a professor at the
Hong Kong University of Science and
Technology and General Chair of AAAI 2021,
advocates the development of federated
learning for addressing privacy issues, which is among the top concerns in AI governance
today. Pascale Fung, professor at the Hong
Kong University of Science and Technology,
makes a general case for developing formal
processes for ethical AI systems and
specifically proposes the establishment of a
standardized algorithm review system.
Roman Yampolskiy, an expert in AI security
at University of Louisville in the United
States, argues that we should not only
discuss ethical issues, but also pay attention
to the safety and security issues of AI
systems. These views from the scientists
suggest a technically grounded direction for
AI governance in 2019 and beyond.
The emergence of AI governance
issues has attracted the attention of
experts in the field of traditional
humanities and social sciences, which
helped open up new research
directions.At the frontiers of AI applications,
industry leaders and investors are
paying closer attention to the influence
of AI governance on the future of
innovation.
As a member of the National New Generation
Artificial Intelligence Governance Expert
Committee, and the founder of the Chinese AI
unicorn company Megvii, Yin Qi suggests that
companies need to take more responsibilities
in advancing AI governance. Don Wright,
former President of the IEEE Standards
Association, introduces IEEE’s code of AI
Therefore, we invited some of the key
policy advisors and experts on China’s AI
governance to introduce the current status
in the country.
The issue of AI governance is a concern
to scientists, scholars of humanities
and the social sciences, as well as
policy makers. Although China has made remarkable
achievements in AI R&D and industrial
applications, there is a relative lack of
international discussions about its approach and
progress in AI governance.
FU Ying, former Vice Minister of Foreign Affairs
of China and Director of the Center for
International Strategy and Security at Tsinghua
University, makes a powerful case that the world
should cooperate on the issue of AI governance,
which requires first and foremost the
partnership between China and the United States
as major countries. ZHAO Zhiyun , Director of
New-Generation Artificial Intelligence
Development Research Center of Ministry of
Science and Technology, shares the Chinese
government’s views and recent progress on AI
governance. LI Xiuquan, Research Fellow of
Chinese Academy of Science and Technology for
Development, emphasizes the approach of
inclusive development in China’s AI governance,
with a focus on protecting the vulnerable groups
in the society. DUAN Weiwen, a professor and
philosopher of science at the Chinese Academy
of Social Sciences, discusses the need to
construct trust mechanisms for AI for building
an agile governance framework. LUAN Qun from
the China Center for Information Industry
Development under the Ministry of Industry and
Information Technology of China surveys the
progress in ethical governance in China’s AI
industry. GUO Rui from Renmin University of
China, who participated in related work of the
03 04While being increasingly globalized,
there is a parallel trend of localizing
AI principles in different regions of
the world.
2019 might turn out to be the year when AI
governance became a truly global issue with
significant implications for global governance.
We began this section with the discussion
from Irakli Beridze, the Head of the Centre for
AI and Robotics, at the United Nations, who
was one of the recipients of the Nobel Peace
Prize awarded to the Organisation for the
Prohibition of Chemical Weapons. He argues that we should appreciate both the ethical
issues and the positive effect of AI on solving
global challenges in the context of law
enforcement. Wendell Wallach, a professor
and a science and technology ethicist at Yale
University, proposes agile, cooperative and
comprehensive governance. Three experts
including Cyrus Hodes, Nicolas Miailhe, and
Jessica Cussins Newman all share the
reflection that the OECD made substantial
progress in the governance of AI in 2019. From
their discussions, we observe that there is a
converging consensus from around the world.
CHEN Dingding, an expert in international
issues and professor at Jinan University in
China, discusses the issues of AI governance
from the perspective of international
relations.
The European Union is an active leader in the
field of AI governance. Eva Kaili, a member of
the European Parliament, presents the
European Parliament’s main work on AI
governance and plans for the future. In 2019,
the European Union released the “Ethics
Guidelines for Trustworthy AI”, which
attracted global attention. Francesca Rossi,
the AI Ethics Global Leader and a
Distinguished Research Staff Member at IBM
Research and a member of the EU
High-Level Expert Group on Artificial
Intelligence, believes that such
multi-disciplinary and multi-stakeholder
composition of the expert group should serve as a leading example for AI governance.
Charlotte Stix, a wellrespected analyst of
European AI policy, analyzes the European
Union’s approach towards “trustworthy AI”.
Shortly after Brexit, Angela Daly from
Strathclyde University discusses the British
government’s understanding of AI
governance, especially the role of the Centre
for Data Ethics and Innovation as a
specialized institution.
There were also significant developments in
other parts of Asia. Danit Gal, technology
advisor to the UN Secretary General
High-level Panel on Digital Cooperation,
observes that the region has a significant
traditional cultural imprint on AI ethics and
governance. Arisa Ema from the University of
Tokyo, who participated in the formulation of
the Japanese Cabinet’s Social Principles of
Human-centric AI, discusses the shift from
the government to the industry as the key
driver for AI governance development in
Japan. Singapore made great achievements
in AI governance in 2019 and won the highest
award at the World Summit on the
Information Society Forum, an UN-level
platform. Having contributed to such an
achievement, Director of the Singapore
Management University Centre for AI & Data
Governance (CAIDG) Goh Yihan and his
colleague Nydia Remolina, research
associate at CAIDG, introduce the
Singaporean approach of translating ethical
principles into pragmatic measures that
businesses can adopt. Based in India, Urvashi
Aneja from Tandem Research suggests that
the key challenge for Indian policy is striking
a balance between equity and growth in the
AI era.ethics first released in 2017 within the
framework of corporate governance. Being at
the center of the controversy with the
language learning model GPT-2, members of
OpenAI's policy team offer their reflections on
publication norms. This is followed by the
perspectives on the malicious use of AI by two
observers, namely Seán Ó hÉigeartaigh,
Director of the “AI: Futures and
Responsibility” Programme at the Leverhulme
Centre for the Future of Intelligence (LCFI) of
University of Cambridge, and Helen Toner,
Director of Strategy at the Center for Security
and Emerging Technologies (CSET) of
Georgetown University. Millie Liu, Managing
Partner at First Star, provides a practical point
of view from the frontline by listing some of
the key challenges for industry
implementation of AI ethics. Steve Hoffman, a
Silicon Valley investor, suggests that
policymakers should harness the market
forces for AI governance as companies would
play an inevitable role in making progress in
the field.
05 06humanities and social sciences, of international
relations and of countries and regions, progress
in general consensus can be observed in 2019.
For example, there is an increasing number of
professional institutions being established, a
growing degree of global consensus, and a
convergence of attention from industry and
policymaking communities.
We welcome the readers to share their view on
commonalities by reading these contributions
from experts. Ultimately, we hope that this
report can serve as a launchpad for this
consequential conversation of our generation. As
the late Alan Turing would say, “we can only see
a short distance ahead, but we can see plenty
there that needs to be done.”The motivation of this report is to promote
exchanges and communication between
academic researchers, policy makers, and
industry practitioners in this rapidly changing
field. It is fortunate that our initiative has
received extensive attention and support
from our global peers. First and foremost, we
would like to express our appreciation to all
the 50 experts for their contributions.
Our sincere appreciation goes to John
Hopcroft, who has extended his very
generous offer in providing guidance to our
work. In addition, we would like to express
our gratitude to Stuart Russell, Wendell
Wallach and Irakli Beridze for their valuable
suggestions on the overall framework of the
report after reading the first draft.
From the initial idea of the report to its final
release, YU Xindong, WANG Yingchun and
SONG Jia from the Shanghai Institute for
Science of Science gave valuable support to
the development and promotion of the
project.
LI Xiuquan (China Academy of Science and
Technology Development Strategy), Cyrus
Hodes (Future Society), Dev Lewis (Digital
Asia Hub), Herbert Chia (Sequoia Capital China), DUAN Weiwen (Chinese Academy of
Social Sciences) and HE Jia, has provided
valuable supports in bringing all the
contributors together.
In the process of editing the report, young
scholars such as Caroline Jeanmaire
(University of California at Berkeley), Thilo
Hagendorff (University of Tuebingen), Jessica
Cussins Newman (University of California at
Berkeley), Charlotte Stix (Eindhoven University
of Technology), Angela Daly (Strathclyde
University), Kwan Yee Ng (University of
Oxford), Jeff Cao (Tencent) , XU Nuo (Shanghai
Institute for Science of Science), QU Jingjing
(Shanghai Institute for Science of Science) and
ZHANG Chaoyun (Shanghai Institute for
Science of Science) provided valuable support
in editing and proofreading the report. ZHANG
Dazhi (Central China Normal University)
helped us design the illustration in the report.
Interns ZHANG Jie, SONG Zhixian, SUN Hui,
NI Jiawei, and LIANG Xinyi has undertaken a
large volume of operational work.
To all colleagues and friends that have
provided help, we would like to express our
sincere gratitude. ACKNOWLEDGEMENT
China Artificial Intelligence Standards
Committee, discusses the foundational
philosophy in the formulation of standards. It is
worth mentioning that in the promotion of AI
governance by the Chinese government, one of
the key policy tools is setting some provinces
and cities as AI “pilot zones”. As the largest city
in China, Shanghai was approved as such a pilot
zone in 2019. Dr. WANG Yingchun from the
Shanghai Institute of Science introduces the
current situation. The experts we invited this
time are representatives from the government
and academia. We hope to have the opportunity
to extend the conversations with the industry,
given that many Chinese companies are actively
exploring the issue of AI governance.
From the comments of all experts – from the
standpoint of science and technology, of
LI Hui is an associate professor at the Shanghai Institute for Science of Science. He
regularly participates in the formulation of AI strategies for Shanghai as well as on a
national level. He also frequently publishes his views on AI governance in major
Chinese media such as People's Daily, Guangming Daily and Wenhui Daily . He has
played a prominent role in organizing the Governance Forum of the World Artificial
Intelligence Conference 2019. He earned his PhD in history of science from Shanghai
Jiao Tong University in 2011. His background led to his research interests on issues
related to AI governance with a long-term perspective and global thinking.
Brian Tse is an independent researcher and consultant working on the governance,
safety and international relations of AI. Brian is a Senior Advisor at the Partnership on
AI and a Policy Affiliate at the University of Oxford’s Centre for the Governance of AI.
He has advised organizations including Google DeepMind, OpenAI, Baidu, Tsinghua
University Institute of AI, Beijing Academy of AI and Carnegie Endowment for
International Peace.EXECUTIVE EDITORS: LI HUI BRIAN TSE (INVITED)
The Importance of Talent in the Information Age
By John Hopcroft
Deep learning has had a major impact on AI even
though it is only one technique in the AI tool box. It
has been applying in many experimental areas such
as image recognition, machine translation, finance,
etc. Now that AI is having significant applications, it
has raised many issues. If an AI program is making
decision say for loans, people want to know why the
program made a decision. At the current state of
knowledge, we do not know how to answer question
like these. Another issue concerns the possibility of
bias caused by bias in the training data.
It is clear that a revolution is occurring with AI as a
major driver. In the future talent will be the main
contribution to a nation's economy and standard of
living. The most important issue for China is to
improve the quality of undergraduate education to
provide the talent for China to become the leading
economy in the information age.
ABOUT THE AUTHOR
John E. Hopcroft is the IBM Professor of Engineering and Applied Mathematics in
Computer Science at Cornell University. From January 1994 until June 2001, he
was the Joseph Silbert Dean of Engineering. After receiving both his M.S. (1962)
and Ph.D. (1964) in electrical engineering from Stanford University, he spent three
years on the faculty of Princeton University. He joined the Cornell faculty in 1967,
was named professor in 1972 and the Joseph C. Ford Professor of Computer
Science in 1985. He served as chairman of the Department of Computer Science
from 1987 to 1992 and was the associate dean for college affairs in 1993. An
undergraduate alumnus of Seattle University, Hopcroft was honored with a Doctor
of Humanities Degree, Honoris Causa, in 1990.Hopcroft's research centers on theoretical aspects of computing, especially analysis of algorithms, automata
theory, and graph algorithms. He has coauthored four books on formal languages and algorithms with Jeffrey D.
Ullman and Alfred V. Aho. His most recent work is on the study of information capture and access.
He was honored with the A. M. Turing Award in 1986. He is a member of the National Academy of Sciences (NAS),
the National Academy of Engineering (NAE), a foreign member of the Chinese Academy of Sciences, and a fellow
of the American Academy of Arts and Sciences (AAAS), the American Association for the Advancement of Science,
the Institute of Electrical and Electronics Engineers (IEEE), and the Association of Computing Machinery (ACM). In
1992, he was appointed by President Bush to the National Science Board (NSB), which oversees the National
Science Foundation (NSF), and served through May 1998. From 1995-98, Hopcroft served on the National
Research Council's Commission on Physical Sciences, Mathematics, and Applications.
In addition to these appointments, Hopcroft serves as a member of the SIAM financial management committee,
IIIT New Delhi advisory board, Microsoft's technical advisory board for research Asia, and the Engineering
Advisory Board, Seattle University.
John E. Hopcroft
07 08
ABOUT THE AUTHOR
From the Standard Model of AI to Provably
Beneficial Systems
By Stuart Russell and Caroline Jeanmaire
AI governance made notable progress on 2019. First,
important sets of principles were published, notably
the Beijing AI principles and the OECD Principles on
AI. Both focus particular attention on ensuring the
security of AI systems in the short and long terms,
an essential aspect of AI development.
Principles are a good foundation for action, and
indeed we also saw instances of concrete action.
California became the first state to require all
automated online accounts attempting to influence
residents' voting or purchasing behaviors to openly
identify as robots. This law represents an important
first step towards curbing deceptive new technology
and making AI systems trustworthy; it is a step
towards establishing a basic human right to know
whether one is interacting with another human or
with a machine. The law will also hinder the spread
of misinformation. We hope that the law will develop
beyond commercial and voting issues to become a
general right, and also serve as a precedent for
other states and countries.
In some areas, however, governance dangerously
lags behind. Our global community made very little
progress in regulating Lethal Autonomous Weapons
(LAWs) such as drones, tanks, and other
computer-controlled machinery. These technologies
run on AI systems and are programmed to locate,
select and attack targets without human control. At
the November 2019 meeting of member states of the
Convention on Certain Conventional Weapons (CCW)
at the United Nations in Geneva, diplomats could not Stuart Russell received his B.A. with first-class honors in physics from Oxford
University in 1982 and his Ph.D. in computer science from Stanford in 1986. He then
joined the faculty of the University of California at Berkeley, where he is Professor
(and formerly Chair) of Electrical Engineering and Computer Sciences, holder of the
Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible
AI. He has served as an Adjunct Professor of Neurological Surgery at UC San
Francisco and as Vice-Chair of the World Economic Forum's Council on AI and
Robotics. He is a recipient of the Presidential Young Investigator Award of the
National Science Foundation, the IJCAI Computers and Thought Award, the World
Technology Award (Policy category), the Mitchell Prize of the American Statistical
Association, the Feigenbaum Prize of the Association for the Advancement of Artificial Intelligence, and Outstanding
Educator Awards from both ACM and AAAI. From 2012 to 2014 he held the Chaire Blaise Pascal in Paris, and he has
been awarded the Andrew Carnegie Fellowship for 2019 to 2021. He is an Honorary Fellow of Wadham College,
Oxford; Distinguished Fellow of the Stanford Institute for Human-Centered AI; Associate Fellow of the Royal Institute
for International Affairs (Chatham House); and Fellow of the Association for the Advancement of Artificial
Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of
Science. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been
translated into 14 languages and is used in over 1400 universities in 128 countries. His research covers a wide range
of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation,
planning, real-time decision making, multitarget tracking, computer vision, computational physiology, and
philosophical foundations. He also works for the United Nations, developing a new global seismic monitoring system
for the nuclear-test-ban treaty. His current concerns include the threat of autonomous weapons and the long-term
future of artificial intelligence and its relation to humanity. The latter topic is the subject of his new book, Human
Compatible: Artificial Intelligence and the Problem of Control (Viking/Penguin, 2019).agree on a binding common approach towards this
issue. As a result, the next two years will be spent
on non-binding talks instead of concrete legal work
in order for us to move towards a global ban on
lethal autonomous weapons to safeguard our
common future.
As we develop increasingly capable AI systems that
become highly competent and self-sustaining,
humans must ensure that these AI systems remain
beneficial and safe. Russell, one of the co-authors of
this article, just published a book on this topic:
Human Compatible: Artificial Intelligence and the
Problem of Control (Viking/Penguin, 2019). The
problem of control over AI systems is not the
science fiction plot that preoccupies Hollywood and
the media with a humanoid robot that spontaneously
becomes conscious and decides to hate humans. It
is rather the creation of machines that can draw on
more information and look further into the future
than humans can, exceeding our capacity for
decision making in the real world. With our present
conception of AI and our technical approach, there is
no plausible prospect of retaining control over
machines more powerful than ourselves. To solve
this problem, the research community needs to
undertake a vast effort to change the standard
model in AI towards provably beneficial systems.
The AI community is becoming aware of this issue,
which makes us hopeful that we will be able to
achieve this transformation, but there is much work
to do.
Caroline has a Master’s degree in International Relations from Peking University and
a Master’s degree in International Public Management from Sciences Po Paris. She
received her Bachelor’s degree in political sciences from Sciences Po Paris. She also
studied at the Graduate Fletcher School of Law and Diplomacy and at Tufts
University. Caroline researches international coordination models to ensure the
safety and reliability of Artificial Intelligence systems at the Center for
Human-Compatible AI (CHAI) at UC Berkeley. She also leads CHAI’s partnership and
external relations strategy, focusing on building a research community around AI
safety and relationships with key stakeholders. Before working at CHAI, she was an
AI Policy Researcher and Project Manager at The Future Society, a thinktank
incubated at Harvard’s Kennedy School of Government. She notably supported the organization of the first and
second Global Governance of AI Forums at the World Government Summit in Dubai. In the 2019 edition, she
managed two committees: Geopolitics of AI and International Panel on AI research. She published articles and
reports on the Geopolitics of AI, US-China industry levers of cooperation on AI and the results of a global civic debate
on AI governance. Before this, she participated in numerous climate negotiations and technical intersessions since
2015, including with the French Delegation for COP23 and COP24. Caroline speaks English, French, Spanish and
Mandarin Chinese.
Caroline JeanmaireStuart Russell
09 10
The Importance of Federated Learning
By YANG Qiang
data privacy is an imminent challenge facing AI
researchers.
Fortunately, 2019 also witnessed AI researchers who
have realized the seriousness of the problem and
come up with a set of solutions. Among them,
Federated Learning, as a promising user data
privacy protection scheme, has demonstrated its
unique advantages in promoting the implementation
of industrial applications. Federated Learning refers
to a technical scheme to realize joint modeling of
multiple participants by exchanging encryption
parameters on the premise that the data is not out
of the locality and data is not shared, and its
modeling effect is the same as or not much different
from that of the aggregation modeling of the entire
data set. A variety of encryption techniques are used
in the Federated Learning technology framework,
such as secure multiparty computing, homomorphic
encryption (HE), Yao's garbled circuit and differential
privacy (DP). From the perspective of technology
application, current Federated Learning has been
applied in such fields as small and micro enterprise
credit, anti-money laundering, anti-fraud,
insurance, and computer vision. In addition, it has
been explored for application in such fields as smart
medical treatment, autonomous driving, smart city,
and government governance. To sum up, Federated
Learning can be regarded as an integrator of
machine learning technology and privacy protection
technology, and also a universal privacy protection
machine learning technology with wide application
prospect.
As AI moves out of the laboratory and into
large-scale application, its potential ethical
problems and impacts gradually arouse public
concern. Looking back on 2019, the public
discussions related to AI ethics focused on the
protection and governance of user data privacy.
Internationally, Facebook has been fined $5 billion
by the US Federal Trade Commission (FTC) for
illegally leaking user data. Also, Google was fined
tens of millions of euros by French regulators for
breaching the GDPR by making its privacy terms too
complex for users to understand and too difficult for
users to manage the way their personal data was
used. In China, data companies have been
intensively investigated by regulators for abusing
and selling unauthorized users' privacy data. And a
large number of data companies have been
penalized by business suspension, app removal and
even criminal liability for serious cases. This series
of events shows that, on the one hand, the public's
awareness of data rights related to personal privacy
is gradually rising, so these events have attracted
wide attention in the media and the public; and on
the other hand, the shocking truths of the incidents
also indicate that the protection and governance of
private data is seriously lagging behind and missing.
Tracing back to the source, these problems are
caused by the objective incentives that AI technology
relies heavily on massive data collection, but more
by the neglect of social responsibility and subjective
reckless manners of relevant stakeholders. How to
dig out the knowledge and value behind the data on
the premise of fully respecting and protecting user Prof. YANG is the the Chief AI Officer at WeBank and a Chair Professor and former
Head of the Department of Computer Science and Engineering of the Hong Kong
University of Science and Technology.
He is a leading researcher of "transfer learning" technology in the international AI
community, and he is spearheading a new research direction of "Federated
Learning". He was elected a fellow of AAAI (Association for the Advancement of
Artificial Intelligence) in July 2013, and the Conference Chair of AAAI 2021
conference. Between 2017 and 2019, he was elected the President of the Board of
Trustees of IJCAI, the world’s oldest and most popular AI society.
ABOUT THE AUTHOR YANG Qiang
11 12
Much has been discussed about the governance of AI
in different government and societal contexts. New AI
strategies and governance documents were proposed
in 2019 by the UN, UNESCO, the EU, European
Parliament, the governments of China, the US, Japan,
the UAE, etc. Top AI companies in the world are
working actively in research and development of
ethical and beneficial AI, as well as good governance.
The latest pronouncement by the CEO of Google that AI
applications cannot be determined by market forces
alone but needs good governance illustrates the
general consensus in the AI community.
All machines make mistakes, but AI errors provoke
more fear among people because, just like AI
decisions, AI errors are so human-like. Consumers
tend to associate such errors with nefarious
human-like intentions. If a speaker recorded my
conversations or a camera sent me images of
someone else's homes, then the AI is "spying". If a
search result is biased, it is "sexist" or "racist". If a
chatbot gives the wrong answer, it can sound "scary"
or "offensive". Suddenly, engineers who are used to
dealing with system performance as numbers in a
metric are confronted with a society of users who are
constantly seeking for philosophical and even legalistic
answers. Therefore, our research community is caught
off guard. At the level of AI algorithm and system
development, researchers and engineers strive for a
fair, accountable and transparent process by virtue of
both best practice guidelines and formal processes
while mitigating and minimizing machine bias and
machine error. Nowadays, it is common practice for
researchers and developers to release databases,
trained models and software codes to the public
domain for others to use. Therefore, inherent biases in
these databases and models can be propagated to all systems developed based on them.
Professional organizations like the IEEE have provided
best practice guidelines in the form of Ethically
Aligned Design process. We can apply these principles
to all areas of AI algorithm and system development.
NGOs such as the Partnership on AI has dedicated
working groups aimed at providing best practice
guidelines, with expert input from its members of
engineers, philosophers, and civil society
representatives. The International Organization for
Standardization (ISO) with 164 member nations,
including the US and China, is working on
standardizations in the area of AI. There have been
increasing calls for a formal process of AI and ML
development that parallels that of the software
engineering process as an integral part of AI software
product development. A formal process recognized by
AI professionals will ensure common standard, a more
explainable and verifiable development process, and
fewer system errors. A formal process can include
standards for
1) Database collection: Data bias should be mitigated
before it is released to the larger AI community;
2) Software and algorithm design: Conversational AI
should be non-discriminatory; instead of just relying
on voice print or facial recognition, biometric
recognition should be multimodal to reduce errors;
3) Model training: Specific model architecture and
parameter settings are recorded so that the process
can be reproduced and interpreted down the pipeline
without the need for human trial and error;
4) Testing and verification: Machine fairness and bias
can also be evaluated and tested on standard test sets.
Many AI conferences already run shared tasks where
different groups compare their systems using common
training and testing sets. This can abstract and
formalize the development of AI algorithms and
systems without stifling creativity and safety of
research and safe guarding academic independence.
The European Parliament has called for a central
regulatory body, much like the Food and Drug
Administration, to assess the impact of algorithms
before they are deployed. This proposal faces two
challenges – 1) algorithms evolve at a breakneck
speed and are modified and updated every few months;
2) there might not be enough experts available with the
technical knowledge required for algorithm evaluation.
Instead, I suggest that such a regulatory body be
tasked to assess AI products and applications, rather
than the underlying algorithms. Algorithm evaluation
should be incorporated into the normal peer-review
process of research publications. Editors and technical
program chairs tasked to curate these publications
should ask reviewers to provide explicit opinions on the
ethical issues of the work they are reviewing. With AI
professionals’ increasing awareness of the ethics of
their work, it is my hope that our collective wisdom will
improve on this.
More international cooperation is required in AI
governance as AI technologies developed today have
become open resources and are shared quickly around
the world. AI research and education are global today.
Companies are working together on standards for
autonomous driving. Countries are working together
on regulating autonomous weapons. Applications of AI
in the areas of security, healthcare, and finance are
subject to existing regulations of each region, even
though additional regulations are needed to account
for algorithm and methodology evolution. Social media
and information integrity remains a challenging area
where social media companies are currently regulating
themselves without consensus. More international
cooperation is required and regulatory bodies need to
be set up with AI experts and other stakeholders. In
2019 we have seen a more detailed AI governance plan
and even more public awareness of its need. In 2020
and beyond, we need to work actively in implementing
the proposed good practice guidelines and a formal
software process to ensure fairness, accountability and
transparency of AI systems.
ABOUT THE AUTHOR
Pascale Fung is a Professor at the Department of Electrical and Electronic
Engineering at The Hong Kong University of Science & Technology (HKUST). She is
an elected Fellow of the Institute of Electrical and Electronic Engineers (IEEE) for
her "contributions to human-machine interactions", and an elected Fellow of the
International Speech Communication Association for "fundamental contributions to
the interdisciplinary area of spoken language human-machine interactions". She is
the Director of HKUST Center for AI Research (CAiRE), the leading interdisciplinary
research center among all four schools at HKUST. She is an expert at the Global
Future Council, a think tank for the World Economic Forum. She represents HKUST
on Partnership on AI to Benefit People and Society. She is a board member of
Governors of the IEEE Signal Processing Society. Prof. Fung was born in Shanghai to professional artist parents
but found her calling in AI when she became interested in science fiction as a child. Today, her research interest
lies in building intelligent systems that can understand and empathize with humans. To achieve this goal, her
specific areas of research are the use of statistical modeling and deep learning for natural language processing,
spoken language systems, emotion and sentiment recognition, and other areas of AI. As a fluent speaker of seven
European and Asian languages, Prof. Fung is particularly interested in multilingual speech and natural language
issues.Pascale FungTowards A Formal Process of Ethical AI
By Pascale Fung
13 14
ABOUT THE AUTHOR
Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the department of
Computer Science and Engineering at the Speed School of Engineering, University
of Louisville. He is the founding and current director of the Cyber Security Lab and
an author of many books including Artificial Superintelligence: A Futuristic Approach .
During his tenure at UofL, Dr. Yampolskiy has been recognized as: Distinguished
Teaching Professor, Professor of the Year, Faculty Favorite, Top 4 Faculty, Leader in
Engineering Education, Top 10 of Online College Professor of the Year, and
Outstanding Early Career in Education award winner among many other honors
and distinctions. Yampolskiy is a Senior member of IEEE and AGI; Member of
Kentucky Academy of Science, and Research Associate of GCRI. Dr. Yampolskiy's
main areas of interest are AI Safety and Cybersecurity. Dr. Yampolskiy is an author of over 100 publications
including multiple journal articles and books. His research has been cited by 1000+ scientists and profiled in
popular magazines both American and foreign, hundreds of websites, on radio and TV. Dr. Yampolskiy has been an
invited speaker at 100+ events including Swedish National Academy of Science, Supreme Court of Korea,
Princeton University and many others.Roman V. Yampolskiy
From AI Governance to AI Safety
By Roman Yampolskiy
AI Governance in 2019 saw an explosion of interest
with over 30 countries having established strategies
and initiatives to date, to influence development of
AI in a direction beneficial to the fulfilment of their
domestic and international plans. The hope is to
create standards and norms for research,
deployment and international cooperation, with
multi-national strategies already proposed by
European Union, Nordic-Baltic region, and UN. At
the same time a number of research centers are
now active at the world's top universities and are
explicitly devoted to questions related to the
governance of AI. See Future of Life's report on
Global AI Policy for the review of many national and
multinational initiatives:
https://futureoflife.org/ai-policy/.
AI Ethics in 2019 likewise experienced near
exponential growth, at least in the number of sets of
ethical "principles" proposed by over 30
organizations. Careful comparison of proposed
ethical guidelines shows convergence on importance
of privileging human rights, human values,
professional responsibility, privacy, human control,
fairness and non-discrimination, transparence,
explainability and accountability. At the same time
proposals differ in degree to which they place
importance on each category and do not converge on
common language for expressing areas of
agreement. It is likely that in the future many
additional organizations will propose their own
ethical principles, further complicating landscape
and standardization efforts. See Harvard's Berkman
Klein Center report which attempts to analyze and
map ethical and rights-based approaches to
development of Principled AI:
https://ai-hr.cyber.harvard.edu/primp-viz.html.
AI Safety also saw a lot of progress in 2019 with
multiple companies and universities establishing AI
Safety groups. However, it is very important to
differentiate between AI Governance/Ethics and
technical AI Safety and Security research. While the
first two is needed to provide direction, resources,
coordination and framework for performing AI
research, neither one directly improves safety of
intelligent systems. Only direct AI Safety research
can do so and a significant danger exists in
misinterpreting progress in governance and ethics
as progress in safety, giving us a false sense of
security. It is my hope that 2020 brings us wisdom to
differentiate between governance, ethics and safety
and to realize importance and limitations of each in
isolation.
15 16
The Rapid Growth in the Field of AI
Governance
By Allan Dafoe & Markus Anderljung
2019 has been an eventful year in AI governance. AI
companies and the AI research community have
started responding to the challenges of AI
governance, new AI governance research institutes
have been set up, and there have been promising
developments in the AI policy sphere. While there is
much work left to be done, it is heartening to see
how rapidly this field is growing, and exciting to be
part of that growth.
Many large tech companies have started setting up
and amending their processes and structures to
explicitly address AI ethics and governance
concerns. Some of these attempts have backfired
such as Google's proposed Ethics Board shutting
down after little more than a week following
controversy regarding the selection of board
members. Other attempts, such as Facebook's
independent oversight board for content moderation
have caused less controversy. Open AI's decision to
conduct a staged release of their natural language
model GPT-2 caused significant controversy, but
also much needed discussion of publication norms.
Navigating these issues forces us to answer some
very difficult questions, which will only become
more so as the capabilities of AI systems improve.
We have seen some encouraging developments in
the AI policy sphere. The EU has shown great
interest in AI policy. Its High Level Expert Group on
AI delivered a set of ethics guidelines and a set of
policy and investment recommendations, and the
new Commission President Ursula von der Leyen
pledged to initiate comprehensive legislation on AI.
Policy actors who have previously been largely silent
on AI governance issues have made themselves
heard, for example in the release of the Beijing AI
Principles and the US Department of Defense's AI
principles. Though such principles are a far cry from
action on AI governance issues, they provide
much-needed foundation for deliberation of some of
the most crucial questions of our generation.
A number of new AI governance and ethics institutes
and organizations have been announced including
the Schwartz Reisman Institute for Technology and
Society at the University of Toronto, the Center for
Security and Emerging Technology in Washington,
D.C., not to mention the activity here in Oxford, such
as the announcement of the Institute for AI Ethics
and the establishment of the Governance of
Emerging Technologies Programme at the Oxford
Internet Institute. We look forward to collaborating
with these new colleagues.
At the Centre for the Governance of AI, we have been
busy growing our team and producing research. We
now have a core team of seven researchers and a
network of sixteen research affiliates and
collaborators. Most importantly, we have had a
productive year. We have published reports (such as
our US Public Opinion on Artificial Intelligence and
Standards for AI Goverance ), op-eds (e.g. Thinking
About Risks from AI: Accidents, Misuse and
Structure and Export Controls in the Age of AI ) and
academic papers ( How does the offense-defense
balance scale? and five papers accepted to the
AAAI/ACM conference on Artificial Intelligence,
Ethics and Society).
ABOUT THE AUTHOR
Allan Dafoe is Associate Professor in the International Politics of AI and Director of
the Centre for the Governance of AI at the Future of Humanity Institute, University
of Oxford. His research examines the causes of great power war and the global
politics surrounding transformative technologies, in particular concerning the risks
from artificial intelligence. To help scientists better study these and other topics he
also works on methods for causal inference and for promoting transparency.
Markus Anderljung is the AI Strategy Project Manager at the Centre for the
Governance of AI at the Future of Humanity Institute, University of Oxford. Markus
focuses on growing the Centre, making its research relevant to important
stakeholders, acting as an enabler for research, and contributing to some of its
research. He has a background in History and Philosophy of Science with a focus on
the Philosophy of Economics and Evidence-Based Policy, several years' experience
in Management Consulting and as the Executive Director of Effective Altruism:
Sweden.Allan Dafoe
Markus Anderljung
17 18
ABOUT THE AUTHOR
Gillian Hadfield, B.A. (Hons.) Queens, J.D., M.A., Ph.D. (Economics) Stanford, is
Professor of Law and Professor of Strategic Management at the University of
Toronto and holds the Schwartz Reisman Chair in Technology and Society. She is
the inaugural Director of the Schwartz Reisman Institute for Technology and
Society. Her research is focused on innovative design for legal and dispute
resolution systems in advanced and developing market economies; governance for
artificial intelligence; the markets for law, lawyers, and dispute resolution; and
contract law and theory. Professor Hadfield is a Faculty Affiliate at the Vector
Institute for Artificial Intelligence in Toronto and at the Center for
Human-Compatible AI at the University of California Berkeley and Senior Policy
Advisor at OpenAI in San Francisco. Her book Rules for a Flat World: Why Humans Invented Law and How to
Reinvent It for a Complex Global Economy was published by Oxford University Press in 2017.Gillian K. HadfieldTowards Effective Value Alignment in AI: From
"Should" to "How"
By Gillian K. Hadfield
How should we regulate AI? This is the question that
has dominated the discussion of AI governance for
the last several years. The question has taken the
form of moral philosophical puzzles such as the
trolley problem. It has been raised by activists and
critics drawing attention to the dangers of
discrimination and bias in algorithms and facial
recognition technology. Concern about the impact of
highly targeted political advertising on the stability
of politics and social relationships has raised
questions about whether we should regulate speech
on social media platforms or constrain the
collection of personal information.
At the broadest level there is widespread agreement
that AI should, as the European High-Level Expert
Group on AI put it in 2019, "respect all applicable
laws and regulations, ethical principles and values."
But how will that alignment of AI with our human
values happen? In practice, what will ensure that AI
is lawful and ethical?
It will not be enough to pass laws that say AI must
follow the laws. Nor is it feasible to catalogue
human values and ethics and embed them into our
AI systems. Our world is far too complex, dynamic,
and evolving for that.
As I have explored in my work and discuss in my
book, Rules for a Flat World: Why Humans Invented
Law and How to Reinvent It for a Complex Global
Economy , long before the challenge of AI, our legal
and regulatory systems have faced substantial limits
in putting our policy choices-our ‘shoulds'-into
practice. The legal and regulatory technology that
we perfected over the twentieth century-legislation,
regulation, regulatory agencies, courts, legal
reasoning-is increasingly unable to keep up with the
complexity, speed, and global nature of twenty-first
century economies and societies. AI accelerates the
rate at which the chasm between what we aim to do
through law and regulation and what is achieved in
practice widens.
While most AI governance projects in 2019 continued
to focus on the ‘how should we regulate AI'
questions, in 2019, a major new initiative began at
the University of Toronto to shift the focus to ‘how
can we regulate AI?'. The mission of the Schwartz
Reisman Institute for Technology and Society, under
my leadership, is to do the fundamental
cross-disciplinary research we need to build the
technical, legal, and regulatory systems that can
implement our politically-determined goals for AI.
We will not ask, should facial recognition be
regulated, for example. We will ask, if we put rules
into place, such as non-discrimination or legitimate
limits to surveillance, how can we ensure that facial
recognition systems follow the rules? What
technical challenges do we need to solve? What
innovations can we develop in regulatory
technologies? How can we build AI that helps to
ensure AI stays within the bounds of what we,
collectively, have decided is right or acceptable?
How can we make sure that our efforts at value
alignment are effective?
In 2020 and beyond, the Schwartz Reisman Institute
will be aiming to broaden the global conversation
about AI governance beyond "should" to "how". We
will be aiming to contribute to the pool of knowledge
and tools available to ensure that AI is deployed
where we decide it should be and not where we
decide it shouldn't be and that it follows the rules
humans have set when it is.
19 20
China Initiative: Applying Long-Cycle,
Multi-Disciplinary Social Experimental on
Exploring the Social Impact of Artificial Intelligence
By SU Jun
"People-oriented" principle is the consistent aim of
China to develop AI and other emerging
technologies. Chinese government and academia
are highly concerned about the impact of AI on
human society and are striving to explore the AI
social governance scheme, so as to advance the AI
technologies to better serve the well-being of
human beings. Encouragingly, China has taken a
leading step in AI governance by conducting the
social experiment to explore the social impact of AI.
As the irreplaceable driving force of S&T revolution,
the opportunities and challenges brought by AI have
been profoundly recognized. The consensus to keep
vigilant to the threats and risks of incontrollable
technology development and severe social inequity
has also been well established.
In response to the challenges, we are supposed to
not only advocate a responsible R&D and innovation
value system, but also strengthen the focus on
ethical issues in the process of scientific and
technological innovation. We should especially
return to "humanism" and reinforce the research on
social impact mechanisms, law and trend and
improve the social policy system for the
development of AI from the perspective of
humanities and social sciences. Achieving effective
governance of AI requires systematic knowledge and
accurate understanding on the social formation and
characteristics of the AI era. The establishment of
this recognition depends on the application of
empirical research, especially the development of
social experimental research.
Social experiment is a classic social science
research method. It aims at observing people and
organizations during the transformation of the
social, political or technological environment, which
simulates the ideal experimental environment to
propose and testify social science theories. Facing
the new problems of social governance in the era of
intelligence, Chinese government, academia and
varied sectors of the society have committed to
formulate, promote and apply AI social experimental
solutions in multiple areas including academic
research, policy practice, and social impact.
In 2019, experts and scholars from Tsinghua
University, Zhejiang University and other institutes
brought together intellectual resources and took the
lead in proposing the policy suggestions to conduct
long-cycle, wide-field, multi-disciplinary AI social
experiments based on abundant preliminary work.
Based on the achievements from academic
research, China's policy practices are rapidly taking
shape and continuously developing. In 2019, the
Ministry of Science and Technology of China issued
the Guidelines for the Construction of the National
New-generation Artificial Intelligence Innovation
Development Pilot Area, which marked that AI social
experiments were being conducted nationwide. The
guidelines propose different application scenarios
such as education, transportation, government
administration, medical care, environmental
protection, manufacturing, finance, agriculture, etc.,
and put forward the comprehensive objectives of
social experiment such as social risk prevention,
organizational reinvention, data security, and
technological adaptation.
Chinese society's consensus on the social
governance of AI is taking shape, and the public's
support for social experimental schemes is also
growing. In October 2019, the First National
Conference on Artificial Intelligence Social
Experiments was held in Tsinghua University in
China. More than 100 experts and scholars
exchanged and shared the latest research results of
AI social experiments, and discussed the further
research plan. Guangming Daily and other
mainstream media have published articles such as
Exploring the Chinese Solution to the Social
Governance of Artificial Intelligence, which has
earned wide acclaim from all walks of life. The
public foundation and social influence of AI social
experiment are steadily on the increase.
Evaluating China's initiatives and achievements in
the social governance of AI, we have become clearer
that conducting AI social experiments could help us
accurately identify the challenges and impacts of AI
on human society, deeply understand the social
characteristics and trends of AI and provide a
scientific reference for the establishment of a
humanistic intellectualized society.
ABOUT THE AUTHOR
SU Jun is the Cheung Kong Scholar Chair Professor in School of Public Policy and
Management at Tsinghua University. He serves as the Dean of Institute of
Intelligence Society Governance (ISG), Tsinghua University, the Director of the
Center for Science, Technology and Education Policy (CSTEP) at Tsinghua
University and the Director of Think Tank Center of Tsinghua Universi ty, a nd the
Deputy Director of the Advisory Committee of the Public Administration under the
Ministry of Education. Jun Su has been awarded the special allowance from the
State Council.
In addition, SU Jun is an associate at Harvard Kennedy School and senior research
fellow at the Fletcher School of Law and Diplomacy, Tufts University. He is also the Chair of the First National
Conference on Artificial Intelligence Social Experiment and the co-chair of Harvard-Tsinghua Workshop on Low
Carbon Development and Public Policy (2014-2018).
SU Jun
21 22
Going Beyond AI Ethics Guidelines
By Thilo Hagendorff
In 2019, discussions on AI ethics were omnipresent.
Various academic, governance as well as industry
initiatives have come up with their own AI ethics
guidelines. News media were swamped with articles
demanding for AI ethics. Additionally, countless
commissions congregated to set up norms and
standards. Besides the virulent discourse on AI
ethics, 2019 was also the year in which researchers
and practitioners commenced to stress that abstract
ethical principles are not worth much without
putting them into practice. However, this is easier
said than done. All over the world, ethics initiatives
agree that privacy, fairness, transparency, safety,
and accountability are the minimal requirements for
building and using "ethical sound" AI applications.
Nevertheless, what those tenets mean in day-to-day
decision-making of organizations that develop and
deploy such applications is rather unclear. At least
empirical studies show that merely reading
documents on ethical principles does not have any
significant effect on practice.
The existence of ethics codes is only a tiny piece of
the bigger puzzle of AI governance. If the aim is to
strengthen the likelihood of ethical behavior in AI
research and development, governance efforts first
and foremost have to address measures for code
enforcement, but also things like working climates
or ethical cultures in organizations, virtue
education, or the shift from competition to
cooperation. Regarding the latter, the fierce
competition and the related race rhetoric on "global
leadership" in AI bears the risk of a reckless race
for being first in accomplishing certain technical
systems, especially in the context of military
applications. This race is to the detriment of values
like safety, privacy, or fairness. An important step
towards achieving "trustworthy AI" is to attenuate
competition in favor of cooperation between nations,
companies, but also research institutes.
AI governance in 2020 should focus on
strengthening the ties between industry
stakeholders but also governance initiatives
themselves. This would have the effect of saving a
lot of redundancy in deliberating governance tenets
and principles. Moreover, 2020 should be the year in
which soft laws are increasingly translated into hard
law, that gives clear rules for algorithmic
non-discrimination, prohibitions for AI in high-stake
areas, safety and privacy standards, as well as rules
for dealing with labor displacement induced by AI
applications.ABOUT THE AUTHOR
Dr. Thilo Hagendorff is working for the “Ethics and Philosophy Lab” at the "Machine
Learning: New Perspectives for Science" Excellence Cluster at the University of
Tuebingen, Germany. Moreover, he works for the “AI Ethics Impact Group” of the
technical-scientific association VDE (Association for Electrical, Electronic &
Information Technologies). His research focusses on ethics in machine learning as
well as broader questions in the field of media and technology ethics. Furthermore,
he works as a research associate at the University of Tuebingen's International
Center for Ethics in the Sciences and Humanities (IZEW). He is also a lecturer at
the University of Potsdam's Hasso Plattner Institute. Thilo Hagendorff
23 24
Interdisciplinary Approach to AI Governance
Research
By Petra Ahrweiler
Artificial Intelligence (AI), and especially the ethics
of AI in areas of automated decision making, enjoys
high priority in national policy strategies of many
countries including China and Germany.
International cooperation targets a joint research
and governance network of a common AI-in-society
ecosystem with shared ethical framing.
To improve AI algorithms for automated decision
making depends to a large degree on the availability
and quality of relevant training data. However,
especially for high-risk decision contexts, empirical
data is hardly available. Imagine automated decision
making in case of an accident in a nuclear power
station, a tsunami, or a terror attack in a megacity:
Such events are, fortunately, too rare to produce
sufficient training data. Furthermore, decision
contexts involve people, who behave and interact in
largely unpredictable ways according to their
respective historical, cultural and social upbringing.
Societal frameworks display much variety across the
globe thus further restricting the utility of available
training data in terms of generalizability and
applicability.
Where then to get the models and the training data
from to improve algorithms for better AI with a close
fit to context-specific norms and values of world
societies? This is where expertise of
interdisciplinary research institutions such as TISSS
Lab or the larger scientific community of the
European Social Simulation Association ESSA comes
in: for substituting missing empirical data, the
innovative suggestion is to generate and exploit
artificial data produced by simulations, which
computationally represent the social environments
AI algorithms have to operate in. In TISSS Lab,
technical sciences cooperate with disciplines that
are empirically researching, explaining, and
anticipating human behaviour and societal
developments, such as sociology, psychology,
philosophy, law, and other social sciences.
Realistically simulating social systems will provide
sufficient high-quality training data to improve and
validate AI algorithms in automated decision
making. The starting international cooperation
between Chinese SISS and German TISSS Lab to
connect AI and social simulation can significantly
further this line of cutting-edge research.
As recently emphasized by the World Artificial
Intelligence Conference in Shanghai, cooperation –
also transdisciplinary cooperation between science
and other areas of society - is key to future
progress. Perceptions, attitudes, discussions and
acceptance of AI use vary between countries, as do
the types and degrees of AI implementation, with
reference to norms and values in-use, but also
related to technology status, economic models, civil
society sentiments, and legislative, executive and
judicial characteristics. Building better, i.e.
context-sensitive, ethically-acceptable, and
socially-informed AI for future societies and
realizing the international aspirations of global AI
governance require the involvement of
non-scientists, i.e. many relevant stakeholders and
practitioners from all over the world and from all
parts of society, in research. Here, the young
partnership between SISS and TISSS Lab has
already started to connect to participatory
approaches within international funding schemes
(e.g. cooperative research project AI FORA funded in
the programme "Artificial Intelligence and the
Society of the Future" of the German Volkswagen
Foundation). Further funding schemes in this
direction should be set on the policy agendas to
promote progress in AI research and governance.ABOUT THE AUTHOR Petra Ahrweiler
Prof. Dr. Petra Ahrweiler is Full Professor of Sociology of Technology and
Innovation, Social Simulation at Johannes Gutenberg University Mainz, Germany.
Her appointment at JGU started in 2013 with getting leave for obtaining the position
of Director and CEO at the EA European Academy of Technology and Innovation
Assessment in Bad Neuenahr-Ahrweiler, Germany, until 2017. Before 2013, she
had been Full Professor of Technology and Innovation Management at Michael
Smurfit School of Business, University College Dublin, Ireland, and Director of its
Innovation Research Unit IRU. Furthermore, she was Research Fellow of the
Engineering Systems Division at Massachusetts Institute of Technology (MIT),
Cambridge/USA.
She started her professional career with studying Social Sciences at the University of Hamburg, Germany. At Free
University Berlin, Germany, she received her PhD for a study on Artificial Intelligence, and got her habilitation at
the University of Bielefeld, Germany, for a study on simulation in Science and Technology Studies.
Her main interests in research and teaching are the mutual relationship of new technologies and society,
inter-organisational innovation networks, and agent-based models as methodological innovation in the Social
Sciences.
Petra won various research prizes, has long experience in coordinating and completing international, mostly
European research projects, publishes inter-disciplinarily in international journals, and has been awarded with
fellowships of various scientific societies such as the German Academy of Technical Sciences acatech or
AcademiaNet, the network of excellent female scientists in Germany.
25 26
ABOUT THE AUTHOR
Robin Williams is Professor of Social Research on Technology at The University of
Edinburgh, where he is Director of the Institute for the Study of Science, Technology
and Innovation ISSTI
.
Since his recruitment to Edinburgh in 1986 to lead its Centre under the ESRC
Programme on Information and Communications Technologies, he has developed
an interdisciplinary research programme into 'the social shaping of technology'
through over 50 externally funded projects. His personal research has focused
upon the design and use of Enterprise Systems, eCommerce and eHealth, and
more recently mobile and web 2.0 technologies. He is developing with co-authors,
the Biography of Artefacts perspective to address the design and implementation of information infrastructures.
Recent books include Social Learning in Technological Innovation: Experimenting with Information and
Communication Technologies , Edward Elgar: 2005
with James Stewart and Roger Slack and Software and
Organisations: The Biography of the Enterprise-Wide System - Or how SAP Conquered the World Routledge:
2009
with Neil Pollock and How Industry Analysts Shape the Digital Future Oxford University Press: 2016
with
Neil Pollock.Robin WilliamsEuropean Perspectives on the Anticipatory
Governance of AI
By Robin Williams
In his 1980 book, The Social Control of Technology ,
David Collingridge reflected upon the unanticipated
risks that accompanied many emerging
technologies. He highlighted a dilemma confronting
attempts to control the undesired impacts of
technology.
‘[…] attempting to control a technology is difficult,
and not rarely impossible, because during its early
stages, when it can be controlled, not enough can be
known about its harmful social consequences to
warrant controlling its development; but by the time
these consequences are apparent, control has
become costly and slow' (Collingridge, 1980: 19).
This insight has inspired the proposals for
anticipatory governance of new and emerging
science and technology, that reflect upon pathways
for the development and use of technology and their
potential impacts on health, the environment and
social life. The UK Engineering and Physical
Sciences Research Council today invites the
researchers it funds to "anticipate, reflect, engage
and act" to achieve Responsible Innovation.
Responsible Innovation is a process that seeks to
promote creativity and opportunities for science and
innovation that are socially desirable and
undertaken in the public interest.
https://epsrc.ukri.org/research/framework/
These ideas are closely related to European Union
proposals for Responsible Research and Innovation.
How then might these apply to Artificial Intelligence
(AI)?
The success of private initiatives by firms like
Google and Amazon has driven enormous public and
policy interest in AI and has stimulated major public
research and training investments worldwide to
develop AI capabilities. These have been
accompanied by compelling visions of the beneficial
applications of AI: autonomous vehicles; care
robots; advances in medical science and diagnosis
etc. These expectations – sometimes unhelpfully
informed by science fiction accounts - often run far
ahead of currently demonstrated capabilities.
Alongside this growing concern are being articulated
about potential risks – to privacy, to autonomy.
Complaints have been made about the lack of
transparency of algorithmic decision-making
systems e.g. in finance or in public administration –
and about algorithmic bias where these systems
have been shown to disadvantage groups – and
which may conflict with equal opportunity legislation
applying women and ethnic minorities. This has
inspired calls for Fair, Ethical, Transparent Machine
Learning systems. Philosophers and ethicists have
been enlisted into public and private AI ethics panels
(with today over 40 such initiatives in Europe and
North America).
However ethical principles per se will not deliver
ethical outcomes. AI is not a ‘thing' with determinate
properties. It refers to a general purpose set of
capabilities, applicable to a range of settings, and
rapidly advancing through the rapid cycles of
developing using and refining new tools and
techniques. And the outcomes of AI are rooted not
just in the design of these models but in the overall
configuration of the algorithmic system. This
includes the variables selected as proxies for
intended outcomes, metrics and visualisations and
above all in the data sets – and especially the
training data for machine learning systems. And
attempts to develop ‘unbiased' AI systems need to
confront the fact that social inequalities in society
are deeply embedded in the data available – there is
no ‘view from nowhere'.
However, though there has been much discussion of
the opacity of proprietary algorithmic systems, their
operation is amenable to probing by those with
moderate technical capabilities – for example
submitting to recruitment algorithms job
applications with different gender, age, racial
identifiers. In this respect their operation and
biases may be more readily made visible than
traditional systems based solely on human
judgement. Though it may be hard to ‘open the
black-box' of algorithmic system, the performance
of the black box under different circumstances can
be made visible.
The pathway towards Responsible Innovation of
Artificial Intelligence is thus through critically
scrutinising AI components, configurations, and
OUTCOMES – to open up the choices made by those
developing/applying them in particular contexts and
make them accountable.
Responsible Innovation is thus not a one-off task but
a complex bundle of activities. It will best be
achieved through interdisciplinary dialogue between
AI practitioner communities, stakeholders and
citizen groups - what Stilgoe 2018
has
characterised as "constructively engaging with the
contingencies" of AI practice.
27 28
Colin Allen is Distinguished Professor in the department of History & Philosophy of
Science at the University of Pittsburgh. From 2015-2019, he held the title of "Chair
Professor" at Xi'an Jiaotong University, Xi'an, China, and in 2017 he was appointed
Changjiang Scholar by the Ministry of Education in the People's Republic of China.
Allen's research concerns the philosophical foundations of cognitive science. He is
particularly interested in the scientific study of cognition in nonhuman animals and
computers, and he has published widely on topics in the philosophy of mind,
philosophy of biology, and artificial intelligence. He has over 100 research articles
and several edited and co-authored books, including Moral Machines: Teaching
Robots Right from Wrong Oxford University Press 2009
which has been translated into Korean, Chinese, and
Japanese.
Since 1998 Allen has been consulting and programming for The Stanford Encyclopedia of Philosophy and is its
Associate Editor. He is director of the Internet Philosophy Ontology project (InPhO) which has received multiple
grants for its work in computational humanities. From 2020-2022 he is the recipient of an award from the
Templeton World Charity Foundation for a project titled "Wisdom in the Machine Age".The Impact of Journalism
By Colin Allen
The most important progress related to AI
governance during the year 2019 has been the result
of increased attention by journalists to the issues
surrounding AI. They have brought attention to
problems ranging from "algorithmic bias" to the
risks to human freedom and democratic ideals that
arise from AI-assisted large-scale surveillance by
governments and corporations. However, effective
governance of AI requires accurate understanding of
the technology and its applications. Journalists,
business leaders, politicians, and the general public
all struggle to understand the technical aspects of
AI. The lack of understanding contributes both to
excessive optimism and to excessive pessimism
about AI, as well as to leading to poorly calibrated
levels of trust and mistrust of AI among the people
who use it. Miscalibrated trust includes having too
much trust in AI when the technology doesn't
warrant it for example, people trusting their
self-driving capacities of their cars too much
as
well as having too little trust in AI in situations
where it perhaps could do a better job than a
human.
The promotion of good technical understanding is an
important missing component in most journalistic
coverage. For example, the widely-reported idea of
"algorithmic bias" is potentially misleading because
it fails to distinguish biases in the data on which
algorithms operate from biases in programmers
leading them to design algorithms which ignore
relevant information or put too much weight on
some factors. Sensible policies for AI governance
depend not just on balancing the risks and
opportunities provided by AI, but on the
understanding the very significant role that humans
continue to have in the design and implementation
of AI applications, and in their use. Journalistic
coverage is important because it has shifted the
debate about AI to the important issues of
governance, but the process of attaining wisdom in
human use of AI has only just begun. Academics,
journalists, and software engineers all need to
address the question of how to develop wise use
policies in a safe way, free from the risks entailed by
the nearly unlimited public experimentation that is
currently practiced by governments and industry.ABOUT THE AUTHOR Colin Allen
29 30
Poon King Wang is the Director of the Lee Kuan Yew Centre for Innovative Cities at
the Singapore University of Technology and Design SUTD
, where he also heads
the Smart Cities Lab and the Future Digital Economies and Digital Societies
initiative. He is concurrently Senior Director of Strategic Planning at SUTD.
King Wang is on the World Economic Forum's Expert Network on Cities and
Urbanization, and the Board of Live with AI an independent France-Singapore
think tank on Artificial Intelligence
. His and his teams' multi-disciplinary research
focus on the human dimensions of smart cities and digital economies, and the
impact of digital transformation on the future of work, education, and healthcare,
and on society at large. He pays particular attention to how leaders of cities and companies can design strategies
and policies to lift the lives of their citizens and workers, with the same technologies that are disrupting work,
economy and society.
King Wang holds a MSc Industrial Engineering and Engineering Management
from Stanford University, a BSc
Electrical Engineering
from the University of Illinois at Urbana-Champaign, and a Rocket Engineering Certificate
from Moscow State Technical University.
In 2019, the Lee Kuan Yew Centre for Innovative
Cities (LKYCIC) at the Singapore University of
Technology and Design (SUTD) made two research
contributions to show how society can use tasks as
building blocks to design human-centric jobs and to
uplift lives in the future of work.
The first contribution was a collaboration that was
recognized by Singapore's National AI Strategy as
contributing to building a Trusted and Progressive
Environment for AI in Singapore's Smart Nation
journey. Working with France-Singapore think tank
Live with AI, AI consultancy Data Robot, and several
companies, we used tasks to first track the speed
and scale of disruption of AI on jobs. We then
incorporated the ethical, social and human
considerations, and created one-page step-by-step
task-by-task transformation road maps to future
jobs that people would find valuable.
Our second contribution was a partnership with the
labor unions. We worked with them to identify
several jobs that are at high risk of AI displacement.
We then used AI to chart clear and concrete
task-by-task transition pathways to new jobs for the
workers who might be displaced, including pathways
to jobs within and outside of the workers'
professions and sectors. This combination of clear
pathways and expanded choices means workers can
be empowered with greater confidence and
certainty, and the partnership was cited by the
Deputy Prime Minister in an International Labour
Organization conference.
These two contributions build on the LKYCIC's
future of work research where we have made tasks
central for three reasons. First, as long as AI
remains narrow, its impact on jobs will be
task-by-task, and not job-by-job. Second, there is
growing consensus amongst experts that tasks
provide the right level of resolution to study the
future of work. Third, tasks are increasingly used to
explain trends at different scales -- from the impact
of specific AI innovations on specific skills, to the
macro-economic changes in the labor market in the
last few decades.
Our research advances the use of tasks by
developing task databases and strategies to help
governments, companies, and individuals (such as
the abovementioned two contributions). They all
take advantage of the fact that any job can be broken
down into its constituent tasks, and by assessing
which and when tasks will be disrupted, we can
track AI disruption risk and transformation
potential. At the same time, each job will have tasks
that are similar to tasks in other jobs – these can be
used to identify new tasks, jobs, and pathways.
In every past Industrial Revolution, even when more
jobs were created than destroyed, there were always
segments of society who struggled or suffered. In
our current Revolution, we are already seeing such
signs worldwide.
We have to help more people thrive. Tasks provide
the building blocks, databases, and strategies for
the public, private, and people sectors to do so
clearly, concretely, and confidently.
Together, we can uplift lives if we stay on task.ABOUT THE AUTHOR Poon King Wang
31 32Future of Work in Singapore: Staying on Task
By Poon King Wang
ABOUT THE AUTHOR
Prof. Dr. Ferran Jarabo Carbonell, born in Alicante on February 17, 1967. He lives
all his life in Girona where he begins his studies.
Degree in Philosophy, Philosophy and Letters and Dogmatic Theology from the
Pontifical University of Salamanca. The year 1997 is ordained diocesan priest in
Girona. In 2006 she received a PhD in Philosophy from the same pontifical university.
Professor of Philosophical Anthropology and Phenomenology of Religions at the
Institute of Religious Sciences of Girona in different periods for almost 16 years.
Professor at the Redemptoris Mater seminar in Berlin for four years in various
philosophical subjects: Ethics, Philosophical Anthropology, Cosmology, Ontology.
He has participated with different communications in international SITAE Days. Collaborate in various publications
with popular articles. He currently collaborates at the University of Mainz with the AI FORA project as a
representative of the University of Girona and works pastorally for the diocese of Limburg.Ferran Jarabo CarbonellDeveloping AI at the Service of Humanity
By Ferran Jarabo Carbonell
The short space of this article only allows to
enunciate some of the topics. Ethics is making a
great contribution to the reflection on Artificial
Intelligence. This contribution supposs an aid to the
development of this science. In the first place, it
offers a walker for the harmonic growth at the
service of humanity, and, in the second place, it
forces it to keep in mind that the aim is to offer
some help to human beings and their safeguard.
Ethical reflection on artificial intelligence must start
from a profound conception of what to be a person
means. It is not simply a question of referring to the
'Charter of Human Rights'. AI is at the service of
men and the human being is an ethical subject by
nature. That is, every man needs to know he is doing
good things for his personal development. Good is
neither a mere feeling, nor a coercion of freedom.
We must understand that "good" is everything that is
good for oneself and for all human beings. This is
not relative, there is consensus (one is the Universal
Declaration of Human Rights) and more must be
sought so that the science of we speak of is at our
service. The human being must not do everything
that can be done; insurmountable limits must be
established for the good of all.
Below, I list only three fundamental points on which
researchers and thinkers should converge. The list
could be much longer, but hopefully these three
points will serve to initiate reflection:
1.The inherent value of every human being. I am not
only talking about the non-discrimination on the
basis of race and sex; the human being, with
independence of anything else, must be safeguarded
and loved. It has already happened many times
before: supposedly intelligent algorithms have
discriminated people because of their race or sex.
This is totally inadmissible in a plural and equal
society such as ours. From here we draw a limit:
artificial intelligence must always be at the service
of the person and not the other way around.
2.Artificial intelligence can never be autonomous.
The human being is the ultimate responsible for all
his actions. No action coming from artificial
intelligence can be detached from its maker. There
is an inescapable responsibility of the one who
creates the algorithm which the machine works
with. Therefore, Artificial Intelligence must always
have human control. To be more specific: a)
everything that refers to autonomous lethal
weapons (LAWS) must be banned for the sake of
subsistence. The control of such weapons must
never escape human control. b) other systems that
can become autonomous (driving, BOTS...) must
always depend on human decision. They cannot be
left to their own free will.
3.It must be at the service of humanity as a whole
without excluding the poor. This point is of utmost
importance. It is inconceivable that countries and
people with no economic power are excluded from
any advance that is made for the good of all. We
must find ways to make technological advances for
all. There can be no discrimination on any grounds,
let alone economic ones.
And to finish: the control of Artificial Intelligence
must always be human, as well as its responsibility.
Another obvious thing is that the moral decision
cannot be made a posteriori, it must always be
made a priori. That is, moral laws must be
respected and used before making an algorithm and
ethics must be observed before any digitization. This
is for the sake of the dignity of human nature and in
defense of its privacy. Algorithms must be analyzed
before being executed.
33 34
ABOUT THE AUTHOR
Wang Xiaohong received her Ph.D. in Philosophy of Science and Technology from
Peking University in 2004. She is a Fulbright Visiting Research Scholar (IU,
2006-2007). Presently, she works in department of philosophy at XJTU as the
co-director and Professor of Research Center for Computational Philosophy. She
also serves as a member of the Big Data & AI Working Group of World Federation
of Engineering Organizations (WFEO) (since 2019), and an executive committee of
China Scientific Methodology Commission (since 2011).
Professor Wang’s research concerns the philosophy of cognitive science. She is
particularly interested in philosophy of AI machine discovery, computational
analysis of Chinese philosophy, and interested in information ethics, and integration of science and humanities.WANG XiaohongEnhance Global Cooperation in AI Governance on
the Basis of Further Cultural Consensus
By WANG Xiaohong
In 2019, substantial progress has been made in AI
governance from principle to practice;
transdisciplinary cooperation between engineers
and humanities scholars has converged on the
“human-oriented” approach; all sectors of society
including major international organizations, more
and more national governments, ICT leading
enterprises, academia, media, education circles
have made concerted efforts to build a wideranging
network of AI governance. But from the perspective
of cultural comparison, there is a potential worry
about the AI governance environment in 2019 and
beyond. The increasingly intensified competition
among countries and interregional conflicts make
the cooperation and sharing of the frontier
technology of AI governance full of uncertainty. The
root is the increasingly prominent differences in
cultural values among countries and nations, and
the danger of being torn from cultural unity faced by
the human community. Confronting severe
challenges in global governance, AI governance
needs to conduct more practical cultural
accommodation and further promote value
consensus.
The cultural value plays an implicit role for the
technical and explicit measures. In recent years,
engineers and ethicists have been cooperating to
explore and solve specific problems, clarifying ethics
as the practical value of AI design framework, and
making the process of AI governance increasingly
clear. Taking deep neural networks as an example,
from the definition of tasks, data collection until
designing, training, testing, evaluation and
application debugging of models, governance
principles (security, transparency, privacy, fairness,
etc.) can be added in every link, and the
improvement of technical means will approach
ethical expectations. However, the abstract principle
of "human-centric" may lead to differences in
practical value due to cultural differences in the
actual situation of AI governance, or even the
countermeasures of AI governance. An ethical
consensus of AI governance needs to take root in the
major issues of the common destiny of mankind and
the eternal values accumulated through cultural
heritage.
The wisdom of "harmony but difference" (Analects)
in Chinese culture means cultural diversity. Future
AMAs (artificial moral agents with high autonomy
and high sensitivity to values) will choose to cooperate
with human beings rather than exterminate human
beings. Any intelligent agent needs more freedom,
and the greater the diversity, the greater the
informational entropy, and the greater the freedom
of choice for each individual. The study of information
ethics and machine morality has repeatedly revealed
that the integration of Chinese and Western cultures
is the source of moral insight. "Do as you would be
done by" and " I want to stand firm, but also want to
let others stand firm, I want to develop, but also
want to let others develop" in Analects are
consistent with Kant’s categorical imperative: only
when you are willing to act on this criterion can you
make this criterion a norm. In addition,
“self-restraining in privacy” (Doctrine of Mean), and
self-cultivation practice inherited and developed by
the Neo-Confucians, together with the virtue ethics
advocated by Aristotle, reflect the common wisdom
of the ancient Eastern and Western cultures.
Human beings need the wisdom of cultural
integration to realize the moral principles of AI.
Human beings must act in concert and in a
coordinated way, or any barrel effect will bring all
efforts to naught. In 2020, AI governance can focus
on the core of AI ethics and strengthen substantive
measures to enhance the value consensus among
different countries and regions.
35 36
Three Modes of AI Governance
By YANG Qingfeng
An article on AI governance has caught my attention.
This article pointed out that AI governance is ‘an
unorganized area' (James Butcher et al. 2019).
James Butcher (2019) has provided an overview of
the practice of different stakeholders in the AI
governance activities. According to this article, the
key point is to maximize the benefits and minimize
the risks. Public sectors and non-public sectors
have different responsibilities in AI governance.
AI governance is certainly a new field waiting for
exploration. The reason for this is on the controversy
over the understandings of what AI is and what AI
governance is. Therefore, the primary issue is to
clarify the definitions of AI and AI governance. I
distinguish three modes of governance based on the
AI definition., namely, governance based on
governmental bodies, governance based on
technologies, and governance based on humanistic
values.
The first AI governance is based on governmental
bodies. In this view AI is considered as a tool related
to different bodies. AI is used by different bodies
such as governments, companies, individual, etc.
The safety and reliability is the key to good use or
rational use. However, problems from rational use
will be ignored in this view.
The second AI governance is based on human
values. AI is seen as embodiment of human values.
AI needs to follow human values such as
responsibility, safety, fairness and trust. AI
governance is focused on the designing process and
how to guard or embed human values into agents.
The ethical framework and ethical decision-makers
have been emphasized. By Glass-Box, we can
‘implement transparent moral bounds for AI
behavior' (Andrea Aler Tubella et al. 2019).
The third AI governance is based on the
technologies. AI in the view is regarded as
technologies or technological system. The view is
useful to cover philosophical problems,
technological problems and some problems
entangled between AI and society. In this view, AI
governance focuses on how to tackle such problems
as the societal and humanistic impact of AI. The
partnership on AI (PAI) 2019 has discussed the
influence of AI on people and society, especially
algorithmic biases and errors in AI.
Logically, AI governance has experienced a
transition from ‘use context' to ‘prediction context'.
Most researches have focused on entities that use
and design AI. Rational use or responsible use is the
inevitable path. However, AI has strong autonomy
and ability to learn. Algorithm has been used to
predict human behavior in the future. The basic
problem is to tackle with relationship between AI
and human being. Coexistence is a good relation
model (Beena Ammanath, 2019). Some
technological problems such as AI algorithmic bias
are more important. Many media have concerned AI
bias from algorithms. Many governments and
organizations are increasingly concerned about AI
bias. Explainable and unbiased algorithms are
possible direction. How do we use AI tools to give us
a predictive representation of the status of major
social practice and predict its development is a
question needing to consider? Maybe BlueDot is a
good case. It has sent us many real-time infectious
disease alerts.ABOUT THE AUTHOR
Yang Qingfeng (1974) received his Ph. D. from Fudan University in 2003. Currently,
he is a professor at Center for Applied Ethics and Fudan Development Institute of
Fudan University. He also serves as the Executive Director of the Technology
Philosophy Committee of the Chinese Society for Philosophy of Nature and the
Secretary General of Shanghai Nature of Dialectics Association in China. He is
visiting Scholar of Dartmouth College, USA and Swinburne University of
Technology, Australia. His current research includes the philosophy of technology,
data ethics, philosophy of memory and AI ethics. YANG Qingfeng
37 38
ABOUT THE AUTHOR
Yin Qi (who also goes by “Inch”), is co-founder and CEO of Megvii Technology
Limited, a world-class AI company with core competencies in deep learning. He
chairs the company’s board-level AI Ethics Committee, which is committed to
positively contributing to the society with Megvii’s AI technology. Yin is a member of
the National New Generation Artificial Intelligence Governance Expert Committee,
an expert committee established by China’s Ministry of Science and Technology
engaged in research on AI-related laws, ethics, standards and social issues and
international exchanges and cooperation on AI-related governance.
Yin was a member of the 2019 Young Global Leaders of the World Economic Forum.
He was named to Fortune’s “40 under 40” list of Chinese elites for three
consecutive years, and was ranked No. 1 on Forbes Asia’s “30 under 30” Enterprise Technology entrepreneurs.
MIT Technology Review has also included him in their global “Innovators under 35” list. YIN QiCompanies Need to Take More Responsibilities
in Advancing AI Governance
By YIN Qi
There is a consensus that AI governance should be a
global priority. In terms of policy making, many
countries have successively announced AI strategies
and singled out the importance of AI governance. In
2019, China’s Ministry of Science and Technology
high-lighted the critical nature of this work by
announcing the establishment of its National New
Generation AI Governance Expert Committee. In
terms of media scrutiny, more and more attention
has been paid to issues such as the ethical
boundaries and technical interpretability of AI and
data privacy protection, which are all essentially AI
governance issues.
AI governance is not only the responsibility of the
government and relevant institutions. Enterprises,
as the main force in the R&D and application of AI
and the front-line practitioners of AI technologies,
should fulfill their responsibilities and take the
initiative to achieve enterprise autonomy. Today,
many international and Chinese companies,
including MEGVII, have launched their own AI Ethics
Principles and criteria, elaborating on their
initiatives to ensure responsible governance of AI
technology.
For companies, effective implementation of AI
governance measures is a major area of focus. Let
me summarize my thinking based on MEGVII’s own
firsthand experience:
1. First, we need to maintain a rational focus on and
continue to engage in constructive discussions on AI
governance. In January of this year, we invited
experts across the fields of law, ethics and AI
technology, as well as the general public, to join
candid and constructive online discussions on the 10
mostly heavily-debated AI ethics issues. We received
thousands of comments across social media
platforms, and top concerns include privacy,
information security and sufficient protection of
user rights.
2. Second, we recognize the importance of
conducting in-depth research on key issues. Data
security and privacy protection are top priorities, for
both the public and the enterprises. Megvii has a
research partnership with the Beijing Academy of
Artificial Intelligence that will focus on these issues.
We are working to implement an AI platform to best
manage the collection, transmission, storage and
usage of data for the full life-cycle protection of data
and establish a set of relevant AI data security and
privacy protection mechanisms. Megvii was also
tasked by the Ministry of Science and Technology to
build a National Open Innovation Platform for Next
Generation Artificial Intelligence on Image Sensing,
where industry-wide research results and practical
experience of enterprises will be shared to promote
the healthy and rapid development of the AI industry.
3. Third, we need sustained action. A robust and
effective organizational framework is required to
oversee, implement, and foster collaboration on our
AI ethics principles. This is why Megvii has set up an
AI Ethics Committee under its Board of Directors,
consisting of founders, core executives and external
experts, to oversee the implementation of Megvii's AI
Ethics Principles. The Committee is supported in its
work of coordination and in-depth research by a
secretariat and an AI Governance Research Institute.
Although in 2019, we saw some difficult questions
arise in AI governance around the world, we hope and
expect that 2020 will become the “Year of AI
Governance.” AI governance is effective solution for
maintaining controls in the new era of AI. AI
governance must become part of everything we do as
an industry, and these types of preventative and
protective measures need to be more widely
recognized and practiced through a combination of
learning and practice. I want to take this opportunity
to call on everyone to take a long-term view and face
the challenges of AI governance head on. I hope that
together we can power humanity with AI.
39 40
ABOUT THE AUTHOR
Mr. Don Wright is the President of Standards Strategies, LLC, an ICT
Standardization consulting firm. He is the retired Director of Worldwide Standards
for Lexmark International and previously IBM and has over 40 years of experience
in standards, engineering, software development and marketing. Mr. Wright is a
Senior Member of the IEEE and served as President of the IEEE Standards
Association (2017-2018), and a member of the IEEE Board of Directors (2017-2018).
He previously served as Computer Society VP of Standards, IEEE-SA Standards
Board Chair, IEEE-SA Treasurer, IEEE-SA Awards and Recognition Chair, IEEE
Admission and Advancement Chair, and on the IEEE Awards Board. He is a member
of the Computer Society, Communications Society, Consumer Electronics Society,
Society on the Social Implications of Technology, and Technology and Engineering Management Society. He is a
member of the Board of Directors of the IEEE-ISTO and previously served as Chairman. He previously served as
Chair of the INCITS Executive Board, US HoD to ISO/IEC JTC 1, and two terms as a member of the Board of
Directors of ANSI. He graduated from the University of Louisville with BSEE and MEng EE degrees. He is a
member of Tau Beta Pi and Eta Kappa Nu.
Don WrightTrustworthy AI and Corporate Governance
By Don Wright
The proliferation of A/IS (autonomous and intelligent
systems) presents a profoundly human moment.
Collectively, we are standing in the nexus of history.
While it's always been essential to know your
customer and their needs, the specific nuances of
AI, where interacting with people demands a higher
level of awareness around things like bias, identity,
emotion, and cultural relevance, make obtaining and
using this knowledge of the customer even more
difficult. It also means recognizing that, outside of
anyone's positive intentions for what they create, an
end-user's experience is not fully up to the designer
— it is up to each end-user. This is why IEEE created
Ethically Aligned Design, 1st Edition and why it
focused on end-users and how they and their values
can be a part of AI design.
According to McKinsey Global Institute, "AI has the
potential to deliver…global economic activity of
around $13 trillion by the year 2030." While the
monetary benefits of AI have increased in recent
years, so have the concerns around its ethical
implementation for people and society as a whole.
Beyond the need to combat negative unintended
consequences in the design of AI, the analysis,
utilization, and honoring of end-user values in
design is providing a growing trend of driving
innovation in corporate governance.
As a way to highlight this trend, IEEE recently
created the Ethically Aligned Design for Business
Committee as part of its Global Initiative on Ethics of
Autonomous and Intelligent Systems. Comprised of
participants from Google, IBM, Intel, Salesforce,
Microsoft, and others, the committee launched its
first paper in Q1 of 2020 called A Call to Action for
Businesses Using AI featuring:
• The Value and Necessity of AI Ethics;
• Creating a Sustainable Culture of AI Ethics; and,
• AI Ethics Skills and Hiring.
While created with corporations in mind, much of its
contents will also provide useful guidance for
certain governments and NGOs. The paper also
features an "AI Ethics Readiness Framework"
allowing readers to assess where their organization,
public or private, lies on a four-tiered scale
highlighting issues such as training, leadership
buy-in, organizational impact, and key performance
indicators (KPIs) beyond financial metrics alone.
Corporate governance for AI cannot rely on simply
adhering to basic compliance criteria regarding
mandated legislation like the GDPR. Organizations
need to proactively create and prioritize transparent
and accountable practices that honor end-user
values to establish genuine trust with their
employees, customers, and all stakeholders
throughout their value chain.
“We want to design healthy relationships with our
users. The potential of AI is wrapped up in its
longevity as a solution-meaning everything we
design must address current and future needs for
users. To truly understand those needs, we need an
inclusive and ethical approach to the entire process.
Globally, we are starting to see the repercussions
that come when companies do not prioritize AI ethics
in their solutions. We want to make sure that ethical
practices are ingrained on our teams so they can
then be embedded into the products themselves.”
– EAD for Business Committee Member Milena
Pribec of IBMOrganizations must create ethical systems and practices for the use of AI if they are to gain people's
trust. This is not just a compliance issue, but one that can create a significant benefit in terms of loyalty,
endorsement, and engagement.
- Capgemini
41 42
Jack Clark is the Policy Director for OpenAI, where he leads OpenAI's policy outreach
efforts. Jack researches the measurement and analysis of AI systems. He sits on the
steering committee of the AI Index, part of the Stanford 100 Year Study on AI project.
He is also an external research fellow at the Center of Security and Emerging
Technology in Washington DC. Jack has testified in Congress three times and was a
technical expert for the OECD's AI Principles initiative in 2019.
Irene Solaiman is a policy researcher at OpenAI. She conducts social impact and
fairness analysis and policymaker engagement as part of the Policy Team. She was a
fellow at Harvard's Berkman Klein Center as part of the Assembly Student
Fellowship formerly known as Techtopia
researching the ethics and governance of
AI. Irene holds a Master in Public Policy from the Harvard Kennedy School and a
self-designed B.A. in International Relations from the University of Maryland.
Gretchen is the project manager for the Policy Team at OpenAI, and works on
projects related to responsible publication, coordination, and scenario planning.
Prior to joining OpenAI, Gretchen worked at the AI Now Institute at New York
University, and at the New York City Economic Development Corporation. Gretchen
holds an MS from Columbia University and an AB from Harvard University.
Gretchen KruegerIrene SolaimanJack ClarkMiles BrundageABOUT THE AUTHORA Year of Action on Responsible Publication
By Miles Brundage, Jack Clark, Irene Solaiman
and Gretchen Krueger
Deepfakes. GPT-2 and issues of synthetic text.
Gender-guessing systems. These were some of the
things that the AI community reckoned with in 2019,
as ethical considerations relating to the publication
of AI research came to the fore.
This growing attention to publication norms in the AI
community was the result of two factors.
First, a subset of AI systems known as generative
models--which can be used to generate samples
that look similar to real data--improved in
performance and flexibility, sparking concerns about
such systems being used to deceive people online
with synthetically generated content such as
images, audio, and text. (In 2019 it was revealed that
realistic-looking but AI-generated images were used
as part of an online influence campaign by Epoch
Media Group, and researchers explored the potential
misuse of language models for generating deceptive
or abusive text.)
Second, evidence continued to mount that existing
publication practices in the AI community are
insufficient to address such risks, and that
experimentation with new technical and policy
approaches is needed. Continued publishing of
deepfakes research, for example, is making it easier
and easier to produce misleading videos of people
saying or doing things that never occurred, while
detection efforts are in their early stages. These
trends have raised deep concerns not only about the
direct deception of people with AI-generated media,
but also the risk of people not believing authentic
media because it could have been generated by AI.
Miles Brundage is a Research Scientist on OpenAI's Policy team, where he
researches issues related to coordination among AI developers and responsible
publication of misusable models. He is also a Research Affiliate at the University
of Oxford's Future of Humanity Institute, where he previously worked for two
years as a Research Fellow. He earned his PhD in Human and Social Dimensions
of Science and Technology in 2019 from Arizona State University.
One high-profile case of evolving publication norms
involved our organization, OpenAI. In February 2019,
OpenAI announced its GPT-2 language model, which
displayed state of the art performance in various
language modeling tasks (predicting what comes
next in a text sequence) and surprising performance
on other tasks like text summarization,
question-answering, and translation. At the same
time, we shared our concern that GPT-2 could be
used to generate abusive or misleading text. We
then took the unusual step of releasing increasingly
powerful versions of the model in stages, rather
than all at once (a process we call Staged Release),
and explored new ways to get expert input on the
ease of misusing the system throughout the
process. As a result, we were able to work with
experts at other research organizations to
incrementally improve and share our understanding
of GPT-2’s characteristics at each stage in the
release process.
While our decisions on GPT-2 sparked significant
debate, OpenAI was not alone in calling attention to
these misuse concerns. Blog posts and papers by
other organizations such as Salesforce, Google,
Hugging Face, the Allen Institute for AI, and the
University of Washington highlighted different
societal implications and challenges of large-scale
language models. In our view, there is still much to
learn about how to responsibly publish language
models, as well as AI systems more generally.
Beyond improving documentation of AI systems and
the release process associated with them, there was
also significant attention paid in 2019 to preparing
for instances of misuse through detection and policy
changes. Google released a dataset to aid in
detecting synthetic voices, while Facebook, the
Partnership on AI, and other organizations launched
competitions for “deep fake” video detection.
Legislators in various countries, and online
platforms such as Twitter, also began to formulate
policies aimed at addressing related risks.
As technical progress continues and the impacts of
AI in the real world become clearer, we expect the AI
community to continue grappling with these issues
in 2020. We are excited to see how norms evolve in
the year ahead as researchers’ experiment with new
ways of maximizing the benefits of publishing
powerful AI systems while minimizing the risks.
Because progress in AI can move unusually quickly,
we need to be prepared for surprising challenges to
arise.
Miles Brundage
43 44
ABOUT THE AUTHOR
Seán Ó hÉigeartaigh is the Director of the AI: Futures and Responsibility
programme (AI: FAR) at the Leverhulme Centre for the Future of Intelligence (CFI),
an interdisciplinary centre that explores the opportunities and challenges of
artificial intelligence. The AI: FAR programme focuses on foresight, security and
governance related to artificial intelligence.
He is also the Co-Director of Cambridge's Centre for the Study of Existential Risk
(CSER), a research centre focused on emerging global risks and long-term
challenges.
Seán's research spans the impacts of artificial intelligence and other emerging technologies, horizon-scanning
and foresight, and global risk. He led research programmes on these topics at the Future of Humanity Institute
(Oxford) from 2011-2015, was founding Executive Director of the Centre for the Study of Existential Risk from
2014-2019, and co-developed both the Strategic AI Research Centre, and the Leverhulme Centre for the Future of
Intelligence. His paper An AI Race: Rhetoric and Risks (with Stephen Cave) recently won joint best paper at the
inaugural AI Ethics and Society Conference. He has a PhD in genome evolution from Trinity College Dublin.
Seán Ó hÉigeartaighABOUT THE AUTHORAI Research with the Potential for Malicious Use:
Publication Norms and Governance Considerations
By Seán Ó hÉigeartaigh
On Valentine's Day 2019, technology company
OpenAI announced a language generation model of
unprecedented performance.2 However, as an
"experiment in responsible disclosure" it only
released a limited version of the language model. In
doing so OpenAI brought attention to a governance
debate that has since gained a great deal of
momentum. OpenAI's decision was due to its
researchers' concerns that their technology could
have potentially malicious applications. While the
technology would have many positive uses, such as
in language translation and digital assistants, they
reasoned that effective and freely available language
generation could also have more harmful impacts.
These might include automating fake news
generation, helping fraudsters impersonate others
online, or automating phishing for cyberattacks.
These concerns related to broader issues around the
potential malicious use of synthetic media
generation, where machine learning advances are
playing a key role. But they also highlighted pressing
questions about the responsibilities of AI research
groups and companies with regard to malicious uses
of their technologies. This discussion is not unique to
AI; it has been debated extensively in other
technology and security contexts, often under the
heading of ‘dual use' research. One high-profile
instance was a debate in 2011-12 over whether it was
appropriate to publish risky influenza research.3 Due
to recent advances in machine learning technologies,
the increasingly varied contexts in which they are
being deployed, and the more widespread availability
of powerful techniques, a growing number of
researchers, civil society groups, and governments
are now giving attention to concerns over malicious
uses of AI.4, 5
OpenAI's move to restrict their technology resulted in
vigorous debate. Critics argued that the decision not
to release was sensationalist and raised undue fears,6
and that the decision not to release to academics
endangered norms of open publication and
research-sharing.7 Others argued that caution was
justified,8 and that delaying publication allowed time
to prepare against malicious uses.9
A growing interdisciplinary research community is
exploring these issues, including at forums such as
the Partnership on AI.10 OpenAI's researchers have
written an analysis of what they themselves had
learned from their experiment in responsible
publication norms,11 and finally released the full,
most high-performance version of their model in
November 2019. Many open questions remain about
what should constitute research of concern in AI, and
what the ideal process should be when advances with
the potential for misuse are made.12 However, one
thing is certain: now is an excellent time for this debate. AI technologies will continue to become
more powerful, and more widespread in their uses in
society. Developments made with the best of
intentions will be put to malicious purposes. Now is
the time for the AI research and governance communities to explore these questions with a broad
set of stakeholders, and to develop appropriate
norms, safeguards and best practices for the
dual-use AI technologies of tomorrow.
My heart, why come you here alone?
The wild thing of my heart is grown
To be a thing,
Fairy, and wild, and fair, and whole
GPT-2, 20191 1Gwern.net (2019). GPT-2 Neural Network Poetry
2OpenAI Blog (2019). Better Language Models and Their Implications
3Butler & Ledford (2012). US biosecurity board revises stance on mutant-flu studies
4Brundage & Avin (2018). The Malicious Use of Artificial Intelligence
5House of Lords (2019). AI in the UK: ready, willing and able?
6Lipton, Z. Approximately Correct (2019). OpenAI Trains Language Model, Mass Hysteria Ensues
7Li & O'Brien. Electronic Frontiers Foundation (2019). OpenAI’s Recent Announcement: What Went Wrong, and How It Could Be Better
8Metz & Blumenthal. New York Times (2019). How A.I. Could Be Weaponized to Spread Disinformation
9Howard, J. Fast.AI (2019). Some thoughts on zero-day threats in AI, and OpenAI's GPT-2
10Leibowitz, Adler & Eckersley. Partnership on AI (2019). When Is It Appropriate to Publish High-Stakes AI Research?
11OpenAI blog (2019). GPT-2: 6-Month Follow-Up
12Crootof, R. Lawfare (2019). Artificial Intelligence Research Needs Responsible Publication Norms
45 46
GPT-2 Kickstarted the Conversation about
Publication Norms in the AI Research
Community
By Helen Toner
For me, the most attention-grabbing AI governance
discussion of 2019 concerned responsible
publication norms, and it was sparked by OpenAI's
decision to delay the release of GPT-2, a language
model trained to predict the next word in a text.
First announced in a blog post and paper in
February, GPT-2 (a successor to GPT, or "Generative
Pre-Training") showed a remarkable ability to
generate multiple paragraphs of fairly coherent
writing in a wide range of styles. But what drew even
more attention than GPT-2's performance on
language generation was OpenAI's announcement
that it would not be publishing the full model. The
reasoning: it might be used "to generate deceptive,
biased, or abusive language at scale," and OpenAI
wanted to take this occasion to prompt discussion in
the machine learning (ML) community about
responsible publication norms.
The post certainly succeeded at prompting
discussion. Initial reactions were mixed, with many
ML researchers criticizing what was perceived as a
deliberate effort to create hype and attract media
attention. Many also felt that OpenAI's strategy was
damaging to academic norms of openness, making
it harder to replicate and verify their work. By
contrast, reactions in AI policy and governance
circles were largely positive, expressing
appreciation for the effort to begin developing norms
around publication of research that could be used in
harmful ways, even if this particular work was not
especially risky.
Over the course of 2019, OpenAI continued to post
about GPT-2, providing updates on their
conversations with other groups and their plans
going forward. In a May update, OpenAI announced
that it would be releasing the model in
stages—publishing a "medium" version (following
the "small" version with the original post), which
was succeeded by a "large" version in August and an
"extra-large" version in November.
During this period, multiple researchers attempted
to replicate OpenAI's work, and several succeeded in
whole or in part. In one particularly interesting case,
an independent researcher named Conor Leahy
announced on Twitter that he had replicated the
model and intended to release it publicly, in
deliberate defiance of OpenAI's release strategy.
After discussions with OpenAI and other
researchers, however, he changed his mind, and
decided to keep his work private.
Of course, 2019 was not the year in which the ML
community agreed on firm norms around
responsible publishing—these questions are
complex, and will require further experimentation
and debate. But against a backdrop of increasingly
convincing deepfake videos, ML research being
turned to authoritarian purposes, and other
concerning trends, the discussion kickstarted by
OpenAI stands out to me as a step in the right
direction.
ABOUT THE AUTHOR
Helen Toner is Director of Strategy at Georgetown University's Center for Security
and Emerging Technology CSET
. She previously worked as a Senior Research
Analyst at the Open Philanthropy Project, where she advised policymakers and
grantmakers on AI policy and strategy. Between working at Open Philanthropy and
joining CSET, Helen lived in Beijing for nine months, studying the Chinese AI
ecosystem as a Research Affiliate of Oxford University's Center for the Governance
of AI. Helen Toner
47 48
ABOUT THE AUTHOR
Millie Liu has focused her career on helping entrepreneurs with deep technology
turn their ideas into great businesses with global reach.
She was previously at APT, an enterprise data analytics startup acquired by
Mastercard for $600m where she helped Fortune 50 clients such as Walmart and
P&G make better strategic decisions leveraging data. She was also the co-founder
of an MIT startup working on unsupervised event detection, which later pivoted and
became Infervision, an AI precision healthcare platform backed by Sequoia China.
Millie is on the advisory board of MIT CSAIL (Computer Science and Artificial
Intelligence Lab). She holds a Master of Finance degree from MIT and B.S. in
Mathematics from the University of Toronto.Millie Liu The Challenges for Industry Adoption of AI
Ethics
By Millie Liu
Artificial Intelligence technology continues its fast
development in 2019. Yet despite the promising
adoption, there are real-world challenges with the
implementations and ethical concerns from the
industry. While academia tends to see things from a
theoretical perspective, the below observations are
made from a more practical point of view from the
frontline. These challenges and concerns, in
particular, deserve policymakers' attention. The
industry can benefit or be hindered by policymaking,
which is an undertaking that requires an
appreciation of practical nuances.
Challenges with implementation:
-Infrastructure & data automation: modern
applications are better built on modern
infrastructures. While many companies are moving
to microservices in the cloud, a large number still
remains on-premise. Existing legacy architecture
and the inertia of pulling data across many, many
ERPs still lead to bottlenecks.
-Explainable AI & model deployment ownership:
Who is responsible for the models deployed in the
real world that are also constantly learning and
evolving? How do companies protect their customers
and their own reputation from the AI model bias and
the black box when it's making real-world decisions
every day? A common platform for collaboration,
deployment and continuous monitoring becomes a
pain for companies investing in AI/ML.
Challenges with AI ethics:
-Discrimination: the AI explainability issue not only
brought challenges to accuracy and efficiency of
decision making, but it also poses major ethical
concerns. AI models are trained on real-world
historical datasets. If bias exists in a real-world
system, then an AI algorithm can exacerbate it. For
example, while facial recognition technology has
achieving 90%+ accuracy, in racially diverse
countries this accuracy may be as low as 65% on
women, children, and ethnic minorities. Apple Card
was in the recent controversy that it approved much
lower credit spending limit on a wife's application
than her husband's, with the same family household
income. Even if gender or race was not specifically
considered in the ML model, related features in the
dataset can still embed these biases and lead to
unfair decisions. Immediate investment is needed in
algorithm interpretability and testing, in addition to
executive education around the subtle ways that bias
can creep into AI and machine learning projects.
-Security: biometric identity fraud deserves just as
much caution as physical identity fraud. Applications
like easy purchases with biometric identity
verification such as facial recognition are tempting
for its convenience, but also leaves vulnerability for
exploitation.
-Privacy: personal identifiable information is already
collected for purposes such as advertising. Clear
guidance on consent giving process not by default,
but by affirmative action, and data handling
compliance requirement coupled with an
enforceable penalty is a high priority for
policymakers around the world.
In addition to the AI-specific ethical challenges,
there are lots of ethical dilemmas that human being
already faced but should be careful handing the
decision-making power to algorithms. For example,
a classic moral dilemma is the "trolley problem" – if
you see a trolley speeding down the track and kill 5
people, there's a lever you can pull to switch the
trolley to another track where there stands 1
person, will you pull the lever? How should we
design the algorithms for autonomous cars when
they face a similar dilemma? Instead of blaming the
algorithm for making any decision, it's on us to
understand what should be handed to machines to
make the decisions for us.
49 50
ABOUT THE AUTHOR
Steve Hoffman, or Captain Hoff as he's called in Silicon Valley, is the CEO of
Founders Space, one of the world's leading incubators and accelerators, with over
50 partners in 22 countries. He's also an angel investor, limited partner at August
Capital, serial entrepreneur, and author of Make Elephants Fly , the award-winning
book on radical innovation.
Always innovating on his life, Captain Hoff has tried more professions than cats
have lives, including serial entrepreneur, venture capitalist, angel investor, studio
head, computer engineer, filmmaker, Hollywood TV exec, published author, coder,
game designer, manga rewriter, animator and voice actor.
Hoffman has a BS from the University of California in Computer Engineering and an MFA from the University of
Southern California in Cinema Television. He currently resides in San Francisco but spends most of his time in the
air, visiting startups, investors and innovators all over the world.Steve HoffmanA Call for Policymakers to Harness Market Forces
By Steve Hoffman
Governments around the world, for the most part,
have taken a hands-off approach on regulating the
use of artificial intelligence for fear of stifling
innovation and holding back domestic industries.
While this is a wise strategy, AI is becoming
integrated into so many aspects of our society and is
having such a profound impact that the necessity for
careful oversight and governance is becoming
increasingly necessary. From the perspective of
industry development, it is urgent to solve the
problems of algorithm bias, data privacy, content
filtering and network security.
Governments cannot just sit back and see what
happens. Things are progressing too fast and the
stakes are too high. If the wrong software gets into
the wrong hands, the consequences can be
devastating and irreversible. We've already seen how
Facebook's lax oversight of Cambridge Analytica led
to the mass dissemination of misinformation that
had a direct impact on US elections. With the
prevalence of deep fakes and AI bots that can churn
out misleading news, there's potential for far
greater abuse in the future.
Is banning certain AI applications that manipulate human
images and autogenerate news stories the answer?
Where do we draw the line between the legitimate and
criminal uses of these technologies? The software that
can create a deep fake may also be the future of the
entertainment industry, as more movies and videos
turn to digitally manipulating actors' faces and
superimposing them on scenes. The same is true for
news generating algorithms, which are being used
widely to disseminate legitimate financial updates,
weather reports, and other information.
A lot comes down to intent, not the technology itself.
Once the algorithms and software are out there, it's
too late. Banning them will only keep the software
out of the hands of those who want to use them for
legitimate purposes. The bad actors will be able to
get ahold of them. What we need to do is quickly
punish those who use the technologies in ways that
harm society, while at the same time encouraging
our institutions, researchers, and corporations to
come up with countermeasures.
It's wishful thinking that technology, like AI, can be
controlled. It can't, and there will always be abuses.
The question for policymakers is how can we
respond to those abuses quickly? What policies will
stimulate and reward those who want to prevent
these technologies from causing irreparable harm?
Let's take social networks as an example. Can we
put in place legislation that makes it in a social
network's best interest to more responsibly manage
its data, thoroughly vet and monitor all third-party
access, and develop countermeasures to fake news
and other emerging threats before they become a
major debacle? Increasing the punishments for both
intentional abuse of new technologies and gross
negligence when it comes to their management,
would incentivize entrepreneurs and companies to
proactively come up with solutions.
In the future, we'll undoubtedly see a steady stream
of new social problems with AI, big data, and other
technologies. Trying to legislate all the details
surrounding each new technology is too unwieldly
and can backfire in terms of developing lasting
solutions. Instead, governments should enact
policies that promote a rapid market response to
existing problems, while encouraging the
participants to invest in preventative measures to
ward off anticipated threats. Only by harnessing
market forces and directing their attention to the
most serious dangers can policymakers best reign
in the destructive power of emerging technologies.
51 52
Mastering the Double-Edged-Sword in
Governance of AI
By Irakli Beridze
Scientific progress is yielding new technological tools
that can deliver great benefits for society. Artificial
Intelligence (AI) in particular, is having a worldwide
impact on many sectors – from healthcare to finance. AI
could even help us to achieve the 17 ambitious global
goals world leaders have set in the 2030 Agenda for
Sustainable Development. We should, however, exercise
a great care and effort in multilateral policy-making and
cross-disciplinary cooperation to discuss the legal and
ethical implications of the large-scale use of AI.
To date, self-regulatory approaches by various entities
have tried to curb possible harmful effects of AI use in
specific disciplines. For instance, American Medical
Association proposed a regulatory framework for the
responsible evolution of AI in health care. The
Netherlands Central Bank released a guidance
document containing principles for the responsible use
of AI in the financial sector to prevent any harmful
effects for banks, their clients, or even the credibility or
reputation of the financial sector as a whole.
However, this does not mean that there is no need for
action by governments. Regulation in some shape or
form may be necessary to reduce the public risks that AI
may pose. Although there are some early deliberations
on national or international regulations, we are still far
from creating real international governance mechanisms.
Technological advances are happening faster than our
ability to respond and, if governments cannot keep pace,
they may fall into a practice of prohibiting or banning in
an event to minimise the risk that come with the use of
AI. However, these approaches may restrict technology
development and stifle innovation.
At the United Nations Interregional Crime and Justice
Research Institute (UNICRI), we have established a
specialized Centre for AI and Robotics and are one of the
few international actors dedicated to looking at AI
vis-à-vis crime prevention and control, criminal justice,
rule of law and security. We seek to support and assist
national authorities, such as law enforcement agencies,
in understanding the risks and benefits of these
technologies and exploring their use for contributing to a
future free of violence and crime. In line with that aim,
we are developing pilot projects involving the use of AI to
combat corruption, human trafficking, child
pornography, the financing of terrorism and to develop
solutions for deepfake videos.
In terms of AI governance within this specific domain, we
have created a global platform together with INTERPOL
to discuss advancements in and the impact of AI for law
enforcement. Starting in 2018, we organize an annual
Global Meeting on Artificial Intelligence for Law
Enforcement. The products of these meetings, which
include a joint report in 2019, represents a contribution
to advancing the AI governance panorama in the law
enforcement community. In connection with the third
edition of the global meeting later this year, we will be
elaborating a toolkit for responsible AI innovation by law
enforcement that will contain valuable guidance and
support for law enforcement in developing, deploying
and using AI in a trustworthy and lawful manner.
With the emergence of the novel SARS-CoV-2 coronavirus,
(COVID-19) and the resulting imposition of lockdowns,
limitations of movement of people and closure of borders,
the operating environment of law enforcement agencies
and security services has suddenly become ever more
complex. In response to this growing crisis, many are
again turning to AI and related technologies for support in
unique and innovative ways, particularly to enhance
surveillance. Although governments must do their utmost
to stop the spread of the virus, it is still important to not
let consideration of fundamental principles and rights and
respect for the rule of law be set aside. It is essential that,
even in times of great crisis, we remain conscience of the
duality of AI and strive to advance AI governance.
Therefore, more than ever, it is essential to guarantee
that we do not derail progress toward responsible AI.
The positive power and potential of AI is real. However,
to truly access it, we must work towards ensuring its use
is responsible.
Soft law approaches such as this toolkit can make a
valuable contribution to AI governance, particularly in
the law enforcement domain where the use of AI is truly
an edge case. The positive power and potential of AI is
real, however, to access it, we must first work towards
ensuring its use is responsible, taking into consideration
principles and respect for international law.
ABOUT THE AUTHOR
Head, Centre for Artificial Intelligence and Robotics
He has more than 20 years of experience in leading multilateral negotiations, developing
stakeholder engagement programmes with governments, UN agencies, international
organisations, think tanks, civil society, foundations, academia, private industry and other
partners on an international level.
Since 2014, he initiated and managed one of the first United Nations Programmes on
Artificial Intelligence and Robotics. Initiating and organizing number of high-level events at
the United Nations General Assembly, and other international organizations. Finding
synergies with traditional threats and risks as well as identifying solutions that AI can
contribute to the achievement of the United Nations Sustainable Development Goals.
Mr. Beridze is advising governments and international organizations on numerous issues
related to international security, scientific and technological developments, emerging technologies, innovation and disruptive
potential of new technologies, particularly on the issue on crime prevention, criminal justice and security.
He is a member of various of international task forces, including the World Economic Forum's Global Artificial Intelligence
Council, and the High-Level Expert Group on Artificial Intelligence of the European Commission. He is frequently lecturing and
speaking on the subjects related to technological development, exponential technologies, artificial intelligence and robotics
and international security. He has numerous publications in international journals and magazines and frequently quoted in
media on the issues related to artificial intelligence.
Irakli Beridze is an International Gender Champion supporting the IGC Panel Parity Pledge. He is also recipient of recognition
on the awarding of the Nobel Peace Prize to the OPCW in 2013.Irakli Beridze
53 54
Agile, Cooperative and Comprehensive
International Mechanisms
By Wendell Wallach
Over the past decade, continual calls have been
made in international circles for agile and adaptive
governance mechanisms that provide a degree of
coordination between the many concerned
stakeholders. This becomes particularly critical for
the governance of emerging technologies, whose
speedy development and deployment pose a serious
mismatch for traditional approaches to ethical/legal
oversight. As readers of this collection of essays will
know, AI has received much attention this past year
with more than fifty-five lists of broad principles and
an array of specific policy proposals being
considered by governmental bodies.
AI offers a perfect pilot project for the creation of
new, more agile international governance of
emerging technologies. A few different mechanisms
have already been proposed. These include
recommendations by the UN Secretary General's
Higher-Level Panel on Digital Cooperation to the
IEEE Ethically Aligned Design Initiative. The OECD
has begun work on an AI Policy Observatory.
Scholars have proposed other vehicles for
monitoring the development of AI, flagging gaps,
and developing tools to address those gaps.
Plans are underway for the 1st International
Congress for the Governance of AI, which will be
hosted by the City of Prague. It was originally
scheduled from April 2020 but was postponed until
October due to the Covid-19 pandemic. The
Congress will go beyond lists of broad principles and
specific policy proposals to forge first concrete steps
towards implementing the agile governance of AI. In
preparation for the Congress a series of experts
workshops are being convened to discuss:
• Agile, Cooperative and Comprehensive
International Governance Mechanisms
• Hard Law and Soft Law in the Governance of AI
• AI and International Security
• Minimizing and Managing System Failures
• Corporate Self-Governance and Accountability
• Inclusion, just transformation of work and society,
and addressing the needs of small nations and
underserved communities
Each of these workshops will develop proposals to
put before the ICGAI participants. Should the ICGAI
participants overwhelming support any of these
proposal, then first steps will be taken for their
implementation. The first of these expert workshops
was hosted by the Stanford University Digital Policy
Incubator on January 6-7, 2020. It proposed the
creation of a global governance network as an
additional needed institution in the distributed
governance of AI.
It is hoped that the Congress will usher in a true
multi-stakeholder approach to the governance of
emerging technology, including voices from
marginalized communities. Of particular importance
will participation by representatives from China.
While China is the leading implementer of AI
solutions in the world, it has to date either not
participated in or always been included in many of
the other international forums considering the
governance of new applications.
For those who feel they can contribute to this
conversation, and who wish to participate in ICGAI,
registration is available at:
https://www.eventbrite.com/e/the-1st-international-
congress-for-the-governance-of-ai-icgaiprague-202
0-tickets-86234414455 ABOUT THE AUTHOR Wendell Wallach
Wendell Wallach chaired Technology and Ethics studies for the past eleven years at
Yale University's Interdisciplinary Center for Bioethics, is senior advisor to The
Hastings Center, a fellow at the Carnegie Council for Ethics in International Affairs,
and a fellow at the Center for Law and Innovation (ASU). His latest book, a primer
on emerging technologies, is entitled, A Dangerous Master: How to Keep Technology
from Slipping Beyond Our Control . In addition, he co-authored (with Colin Allen)
Moral Machines: Teaching Robots Right from Wrong. The eight volume Library of
Essays on the Ethics of Emerging Technologies (edited by Wallach) was published
by Routledge in Winter 2017. He received the World Technology Award for Ethics in
2014 and for Journalism and Media in 2015, as well as a Fulbright Research Chair
at the University of Ottawa in 2015-2016. The World Economic Forum appointed Mr. Wallach co-chair of its Global
Future Council on Technology, Values, and Policy for the 2016-2018 term, and he is a member of their AI Council
for the next two years. Wendell is the lead organizer for the 1st International Congress for the Governance of AI
(ICGAI), which will convene in Prague, October 2020.
55 56
ABOUT THE AUTHOR
Cyrus Hodes is a Partner at FoundersX Ventures, a silicon-valley based VC firm
focusing on early and growth stage AI and robotics startups.
Cyrus co-founded and chairs the AI Initiative, within The Future Society—a 501(c)3
incubated at Harvard Kennedy School—where he engages a wide range of global
stakeholders to study, discuss and shape the governance of AI.
He co-leads the Global Data Commons project, together with the UN Secretary
General Executive Office and McKinsey, with over 100 global institutions (international
organizations, governments, municipalities, private sector and academia).
Cyrus served as the Advisor to the UAE Minister of Artificial Intelligence at Prime Minister's Office. Leading for the
past 2 years the Global Governance of AI Roundtable at the World Government Summit in Dubai.
Member of the OECD Expert Group on AI (AIGO), now part of OECD Network of AI Experts (ONE AI)
Member of the Council on Extended Intelligence (MIT-IEEE).
Member of 3 committees of the IEEE Ethically Aligned Design since 2016.
Advisor on AI Ethics at Smart Dubai.
Member of the Steering Committee of AI Commons.
Cyrus was educated at Sciences Po Paris, where he later was a Lecturer.
M.A. (Hons) from Paris II University and M.P.A. from Harvard.Cyrus HodesA Significant Realization by the International
Community
By Cyrus Hodes
It seems to me that 2019 will be remembered as a
point in time when the international community
(governments, private sector, civil society and
supranational bodies) had a realization that global
governance of an emerging set of intelligent
systems maybe a good thing for Humanity.
These are the events I took part in that were, and
are, shaping this realization:
- The Beneficial AGI conference in Puerto Rico, led
by the Future of Life Institute was an important
event realizing the upmost need for a dialog with
China on AI Safety, transcending economic tensions.
- The 2nd Global Governance of AI Roundtable: a
multi-stake holder / collective intelligence approach
set in Dubai as part of the World Government
Summit. Besides bringing together 250 international
experts in the fields of AI, this year was marked by:
\* UNESCO and IEEE meeting to discuss ethics of AI.
The IEEE has been presenting its seminal work on AI
Ethics while UNESCO has prepared to embark on
the leadership on AI Ethics issues within the UN
apparatus;
\* Gathering of the Council on Extended Intelligence
(MIT Media Lab-IEEE);
\* First workshop on the Global Data Commons was
held with the help of Oxford and McKinsey, over 40
position papers. The GDC is now part of the AI
Commons global effort and was taken to AI for Good
in Geneva, the UN General Assembly in NY and is
about to close the cycle with a presentation at the
World Bank Spring Meetings in April with 3 use
cases that could be replicated and scaled up globally
on sharing data to get to specific Sustainable
Development Goals solutions;
\* The gathering of AIGO, the OECD expert group on
AI in charge of laying out the AI Principles.
- The OECD Principles adopted by the G20 and some
partner countries, is an important exercise in
summarizing the main recommendations for
societies to progress with the use of Beneficial AI.
As a reminder, these principles center on:
• Transparency and explainability
• Robustness, security and safety
• Accountability
• Investing in AI research and development
• Fostering a digital ecosystem for AI
• Shaping an enabling policy environment for AI
• Building human capacity and preparing for labor
market transformation
• International cooperation for trustworthy AI
- The resulting OECD AI Policy Observatory to be
launched in February with the aim "to help countries
encourage, nurture and monitor the responsible
development of trustworthy artificial intelligence
(AI) systems for the benefit of society".
- The G20 adopting the OECD AI Principles in June
2019 is a consequential step forward keeping in
mind that both world leaders in AI (US and China)
are part of it.
- UNESCO global AI ethics series: started in North
Africa, France, China and Brazil and brought to the
table multidisciplinary points of view on a
humanistic approach towards the use of AI
advancing the discussion with human values for
sustainable development.
- In the same vein, The Future Society's AI Initiative
has been working with the World Bank to prepare
frameworks for developing countries for their
national AI Strategies announces the importance of
governance of AI and how policy makers could
approach it.
- Finally, the Global Forum on AI for Humanity,
chaired by French President Emmanuel Macron as
part of France's G7 presidency and served as a
precursor to the International Panel on AI. The goal
of this panel (a bit like the Intergovernmental Panel
on Climate Change, IPCC, did), is to become a global
point of reference for understanding and sharing
research results on AI issues and best practices, as
well as convening international AI initiatives.
57 58
ABOUT THE AUTHOR
Nicolas Miailhe co-founded The Future Society in 2014 and incubated it at the
Harvard Kennedy School of Government. An independent think-and-do-tank, The
Future Society specializes in questions of impact and governance of emerging
technologies, starting with Artificial Intelligence through its "AI Initiative" launched
in 2015. A recognized strategist, thought-leader, and implementer, Nicolas has
lectured around the world, and advises multinationals, governments and
international organizations. He is the co-Convener of the AI Civic Forum (AICF)
organized in partnership with UNESCO and Mila, and of the Global Governance of AI
Roundtable (GGAR) organized yearly during the World Government Summit in
Dubai. He is also a Steering Committee member of the AI Commons partnership, a
member of the AI Group of experts at OECD (AIGO), of the World Bank's Digital Economy for All Initiative (DE4ALL),
and of the Global Council on Extended Intelligence (CXI). Nicolas teaches at the Paris School of International
Affairs (Sciences Po), at the IE School of Global and Public Affairs in Madrid, and at the Mohammed bin Rashid
School of Government in Dubai. He is also a member of three committees of the IEEE Global Initiative on Ethically
Aligned Design of Autonomous & Intelligent Systems, a Senior Research Associate with the Program on Science,
Technology and Society at Harvard, and a Fellow with the Center for the Governance of Change at IE Business
School in Madrid.Nicolas MiailheShifting from Principles to Practice
By Nicolas Miailhe
The global governance of AI has made significant
progress in 2019, shifting from principles to practice
during what we could call a pivotal year.
By publishing its "Principles on AI" on May 22nd, the
OECD established a global reference point. These
ethics and governance principles aim to promote
artificial intelligence AI
that is innovative and
trustworthy and that respects human rights and
democratic values. They were the first set of global
principles on AI coming out of a leading multilateral
organization and were based on rigorous
development process led by a group of independent
experts. Their resonance was confirmed by the
endorsement, in June 2019, by the G20. To help
implement these AI Principles, the OECD also
announced the creation of an "AI Policy Observatory"
which will provide evidence and guidance on AI
metrics, policies and practices, and constitute a hub
to facilitate dialogue and share best practices on AI
policies.
Subsequently, France and Canada announced during
the G7 meeting in August 2019 the launch of a
"Global Partnership on AI" GPAI
hosted by the
OECD and which will operate in tandem with the "AI
Policy Observatory". Envisioned initially as a sort of
"IPCC[ Intergovernmental Panel on Climate Change]
for AI", GPAI aims to bring together many of the
greatest AI scientists and experts globally to foster
international collaboration and coordination on AI
Policy development among link-minded partners.
Both the observatory and GPAI will be launched in
2020. As a precursor to the GPAI multi-stakeholder
plenary annual expert meeting, President Macron
hosted end of October 2019 the first "Global Forum
on AI for Humanity" in Paris. The second edition of
the Forum will be held in Canada in the fall of 2020.
Finally, UNESCO General Conference voted
unanimously in November 2019 asking the
organization to develop, in the next two years, a
standard-setting instrument on AI ethics. The
process will include extensive multi-stakeholder
consultations performed around the world in the
frame of the "AI Civic Forum", a partnership
between UNESCO, The Future Society, University of
Montreal, and Mila.
Concretely, these and many other initiatives
launched in 2019 (e.g. the report from the UN
Secretary-General High Level Panel on Digital
Cooperation; the Digital health & AI Research hub;
AI Commons) demonstrate that more and more
governments, experts and practitioners are shifting
their focus on AI Governance away from just ‘what
is' or ‘what should be' towards ‘how to get there'.
Beyond policy-making, we have also seen this pivot
from principles to practice happening on the ground,
among companies and professional organizations.
The IEEE "Global Initiative on Ethically Aligned
Design of Autonomous and Intelligent Systems"
released in March 2019 the first version of "Ethics in
Action" intended to serve as a reference to guide
engineers towards the responsible adoption of AI.
Beyond, an increasing number of organizations and
companies have started to work on translating
international AI ethics principles into their
respective practice and culture through codes of
conducts and charters developed help guide digital
transformation efforts towards a trustworthy
adoption of AI. Finally, a number of
government-backed or independent initiatives on
the auditing and certification for AI systems have
appeared on the horizon in 2019. The focus of such
schemes is precisely to translate principles into
practice, and to help shape the competitive race on
AI adoption as a race to "the ethical top". As such,
besides beefing up of regulatory capacities for
example announced by the new European
Commission, certification and auditing schemes
have the potential to contribute massively to the
establishment of the "infrastructure of trust".
59 60
ABOUT THE AUTHOR
Jessica Cussins Newman is a Research Fellow at the UC Berkeley Center for
Long-Term Cybersecurity, where she leads the AI Security Initiative, a hub for
interdisciplinary research on the global security impacts of artificial intelligence.
She is also an AI Policy Specialist with the Future of Life Institute and a Research
Advisor with The Future Society. Jessica was a 2016-17 International and Global
Affairs Student Fellow at Harvard's Belfer Center, and has held research positions
with Harvard's Program on Science, Technology & Society, the Institute for the
Future, and the Center for Genetics and Society. Jessica received her master's
degree in public policy from the Harvard Kennedy School and her bachelor's in
anthropology from the University of California, Berkeley with highest distinction
honors. She has published dozens of articles on the implications of emerging technologies in outlets including The
Hill, The Los Angeles Times, The Pharmaceutical Journal, and CNBC. Jessica is a member of the CNAS AI Task
Force and a member of the Partnership on AI Expert Group on Fair, Transparent, and Accountable AI.Jessica Cussins NewmanA Global Reference Point for AI Governance
By Jessica Cussins Newman
At the end of 2018, Deep Mind co-founder Mustafa
Suleyman predicted that 2019 would be the year we
would build global arenas to support international
and multistakeholder coordination that would
facilitate the safe and ethical development of
artificial intelligence (AI). Suleyman wrote that the
arenas would need to be global because AI
opportunities and challenges don't stop at national
borders and don't respect organizational
boundaries.
In many ways, Suleyman's predictions were realized;
2019 saw the emergence of several meaningful new
global forums including the UN Secretary General's
High-Level Panel on Digital Cooperation, the Global
Partnership for AI, and the Organization for
Economic Cooperation and Development (OECD)
Principles and Policy Observatory.
The OECD AI Principles and Policy Observatory in
particular represent significant progress in the
global governance of AI. Released May 22, 2019, the
principles and recommendations became the first
intergovernmental standard for AI and a new "global
reference point" for AI governance into the future.
All 36 OECD member countries signed onto the
OECD AI Principles, as well as several non-member
countries including Argentina, Brazil, Colombia,
Costa Rica, Peru, and Romania. The European
Commission additionally backed the Principles, and
Ukraine was added to the list of signatories in
October 2019. When the Group of Twenty (G20),
released AI Principles one month later, it was noted
that they were drawn from the OECD AI Principles.
Notably, support from the G20 expanded the list of
involved countries to include China.
The principles include detailed calls for inclusive
growth, sustainable development and well-being;
human-centered values and fairness; transparency
and explainability; robustness, security and safety;
and accountability. Moreover, the recommendations
for national policies and international cooperation
include investing in AI research and development;
fostering a digital ecosystem for AI; shaping an
enabling policy environment for AI; building human
capacity and preparing for labor market
transformation; and facilitating international
cooperation for trustworthy AI. The OECD AI
Principles represent widespread awareness of the
need for global coordination and cooperation to
facilitate trustworthy AI.
The OECD is additionally building on this momentum
and aims to help countries implement the principles
and recommendations. The OECD launched the AI
Policy Observatory at the end of 2019 to facilitate
dialogue among global multi-stakeholder partners
and provide evidence-based policy analysis on AI.
The Observatory will publish practical guidance to
implement the AI Principles and a live database of AI
policies and initiatives globally. It will also compile
metrics and measurement of AI development, and
use its convening power to bring together the
private sector, governments, academia, and civil
society.
The OECD AI Recommendation achieved a feat few
would have thought possible just one year
previously. The United States signed on at a time of
relative aversion to international coordination in
other policy arenas. China and Russia were part of a
consensus agreement to support the effort more
broadly. Other countries are welcome to add their
support. While details regarding implementation are
still being finalized, 2020 will likely see more
substantive AI governance commitments and
engagement from a broader range of actors.
61 62
ABOUT THE AUTHOR
CHEN Dingding is Professor of International Relations, Associate Dean of Institute
for 21st Century Silk Road Studies at Jinan University, Guangzhou, China, and
Non-Resident Fellow at the Global Public Policy Institute (GPPi) Berlin, Germany,
Vice-President of International Studies Association (Asia Pacific region), senior
research fellow of the center for global studies at Tsinghua University. He is also
the Founding Director of Intellisia Institute, a newly established independent think
tank focusing on international affairs in China. His research interests include
Chinese foreign policy, Asian security, Chinese politics, and human rights.CHEN Dingding
An Important Issue of the International Relations: AI
Governance
By CHEN Dingding
With a new round of industrial revolution sweeping
the world, artificial intelligence has become the core
direction of industrial change. Artificial intelligence
is a new engine of economic development, a new
focus of international competition, and a new
opportunity for social construction. In 2019, as the
popularity of artificial intelligence continues to rise
at the technological level, its urgency at the
governance level is also emerging.
As the focal point of the fourth scientific and
technological revolution, achievements in the field of
artificial intelligence affect the overall national
strength of a country. In 2019, countries have
conducted a series of cooperation and competitive
interactions around artificial intelligence. To ensure
healthy competition in the field of science and
technology and continuously stimulate innovation,
global governance of artificial intelligence has
become an important concern in international
relations. Technology competition, trade conflict,
information security, and ethical responsibility are
all issues in the field of artificial intelligence. The
absence of governance norms is not conducive to
the positive effects of technology on human society
and may even bring about disorder and chaos.
In 2019, countries strived to promote AI governance
to keep pace with technological development by
holding forums, publishing reports, and formulating
specifications. But differences among countries in
terms of governance philosophy, development stage,
and technological development level pose numerous
obstacles to consensus. As major powers in the
world today, in 2020, China and the United States
should play a leading role in shaping the
international order, working with other countries to
join the formulation of norms. The two powers are
expected to lead the all-dimensional governance of
artificial intelligence under the principle of "science
and technology for good". Moreover, they should
lead countries to jointly respond to the challenges in
the development process, and promote the
maximum application of technological achievements
on a global scale. At the same time, the
development of artificial intelligence is still at an
unsaturated stage, and there is still much room for
cooperation between China and the United States.
The two countries should fully recognize the
interdependence between the two sides in this
industry chain and the broad future prospects of this
field, and jointly promote the orderly development of
the artificial intelligence industry.
63 64
ABOUT THE AUTHOR
Eva Kaili is a Member of the European Parliament, elected in 2014.
In her capacity as the Chair of the European Parliament's Science and Technology
Options Assessment body STOA
she has, been working intensively on promoting
innovation as a driving force of the establishment of the European Digital Single
Market. She has been particularly active in the fields of blockchain technology,
m/eHealth, big data, fintech, AI and cybersecurity.
Since her election, she has also been very active in the field of taxation, where she has
been the Rapporteur of the ECON committee's annual tax report. As a member of the
ECON committee, she has been focusing on EU's financial integration and the manage -
ment of the financial crisis in the Eurozone.
Eva was the Rapporteur of the European Parliament of the Blockchain Resolution, the Legislative Opinion of the EFSI, the Annual
Tax Report, and the negotiator of the Social-democratic party in the files of Capital Markets Union and Family Business.
Prior to her position in the European Parliament, she has been elected two times in the Greek Parliament serving between
2007-2012
, with the PanHellenic Socialist Movement PASOK
. She holds a Bachelor degree in Architecture and Civil Engineering,
and Postgraduate degree in European Politics. Currently, she is conducting her PhD in International Political Economy.Eva KailiEuropean Parliament and AI Governance
By Eva Kaili
The value proposition of exponential technologies is
compelling. It promises to reduce economic frictions
and scarcity in the vital resources, streamline the
function of market and public policy procedures, and
create new social dynamics, wider inclusion and
improved connectivity. Artificial Intelligence is in the
core of this transformation.
AI though introduces us to new challenges. New
sources of market failures emerge in the area of level
playing field of global competitive forces, asymmetries
in information possessing and processing, and new
types of negative externalities.
In the field of competition, data become the central
element of the new global leadership. The ones who can
acquire and process data better and smarter, will be the
winners. Access to data and technical quality of AI is the
next big thing. In order to ensure a level playing field in
the new era capacity building and regulatory
frameworks will be instrumental in taming oligopolies
generated by the prevailing digital platforms. New
competition law rules should be designed to take into
account not just the turnover of the digital companies
but also the volume and quality of data they possess so
that the value of their use will be fairly distributed to
benefit our societies in respect to the individual rights.
In the same line, we need the development of high
quality global technological standards in AI and an
environment of research excellence through the
development of strong innovation ecosystems linked in
a global network. Bad quality of AI might deliver
harmful results in the cause of economic development,
social inclusion as well as the quality of our Institutions,
our Democracy and the Media. High quality technical
standards will reduce operational risks, provide legal
certainty, improve the quality of options to the citizens,
ensure interoperability and accelerate scalability.
European Union aspires to be the global leader in the
space of AI, with systematic investments to AI-based
innovative solutions, the acceleration of technology
transfer mechanisms, a favorable regulatory
environment, the strengthening of innovation
ecosystems with digital innovation hubs and AI Centres
of Excellence, and funding of high quality research
projects. In addition, EU plans to develop AI-based pilot
projects to experiment with applications of AI in
large-scale initiatives, to gain operational experience
and then trickle this experience and infrastructure
design down to the national, regional and municipal
levels of governance.
Artificial Intelligence without mission and social
responsibility will end up being "artificial stupidity".
High standards, ethical nudges and an enabling
regulatory framework are essential. Putting the human
in the centre of AI we need to address inequalities of
skills, inequalities of access and inequalities to
opportunities by planning strategies that improve
connectivity and digital education. The quality and
standards of AI should technically prevent exclusions
and discrimination biases. GDPR set the basis by
principles that would protect human rights, without the
"one size fits all approach". Algorithms for AI that solve
problems or take decisions, should be ethical by design,
respecting privacy and the use of our data should be
transparent.
As data is in the core of AI, digital platforms should
require the consent of the citizens when they collect
data and compensate them for the profit of the data they
generate. Applications, cameras, microphones and any
other way that is used to collect data, should be "by
default off" unless citizens are aware of their use and
have fair options. Similarly, for example, AI processed
targeted messaging should be prevented in the new
Media for certain content that is promoted, deep fakes
should be flagged, while alternative propositions should
be available in order people to have access to balanced
information, avoid misperceptions and manipulation of
their will.
Finally, the need of a European AI Adjustment Fund so
that no-one is left behind, will be my flagship for 2020.
These principles and views epitomize my approach to
this challenging technology in these challenging times. I
share them with you in hope they can form the basis for
a global approach of democracies and a cooperative
technological regime between Europe Asia and
America, with the good of the citizens and the prosperity
of the societies in the core of our strategy for the future.
65 66
ABOUT THE AUTHOR
Francesca Rossi is the IBM AI Ethics Global Leader and Distinguished Research
Staff Member at IBM Research.
Her research interests focus on artificial intelligence and the ethical issues in the
development and behavior of AI systems. On these themes, she has published over
200 scientific articles, she has co-authored two books, and she has edited about 20
volumes, between conference proceedings, collections of contributions, special
issues of journals, and a handbook. She is a fellow of both the worldwide
association of AI (AAAI) and of the European one (EurAI). She has been president of
IJCAI (International Joint Conference on AI), an executive councillor of AAAI, and
the Editor in Chief of the Journal of AI Research. She is a member of the scientific advisory board of the Future of
Life Institute (Cambridge, USA) and a deputy director of the Leverhulme Centre for the Future of Intelligence
(Cambridge, UK). She serves in the executive committee of the IEEE global initiative on ethical considerations on
the development of autonomous and intelligent systems and she is a member of the board of directors of the
Partnership on AI, where she represents IBM as one of the founding partners. She is a member of the European
Commission High Level Expert Group on AI and the general chair of the AAAI 2020 conference.Francesca Rossi
The European Multi-Stakeholder Approach to
Human-Centric Trustworthy AI
By Francesca Rossi
Set up by the European Commission in 2018, the
independent High Level Expert Group on AI is
composed of a broad spectrum of AI stakeholders,
and was mandated to develop guidelines and
policies for a European AI strategy. In 2019 the
group published two documents: the AI ethics
guidelines and the recommendations on AI policy
and investment. Both these documents are focussed
on the notion of trustworthy AI and are the result of
thorough discussions within the HLEG and with the
whole European AI ecosystem, and provide a
comprehensive blueprint for developing a thriving AI
environment in Europe that can have a positive
impact across the world.
The AI ethics guidelines define the notion of
human-centered trustworthy AI by starting for
fundamental human rights, passing to principles,
and then listing seven requirements: human control,
robustness and safety, privacy and data governance,
transparency, fairness and inclusion, societal and
environmental well-being, and accountability. They
also define an assessment approach that companies
can adopt to develop a process for building
trustworthy AI and evaluating the compliance of
their products and services with these
requirements. This is aligned with existing efforts in
companies like IBM, where the notion of AI factsheet
has been thoroughly evaluated, discussed, and
tested.
The policy and investment recommendations are
very timely, as governments around the world seek
input and guidance to define their own AI strategies.
They advocate for a risk-based precision-driven
approach to possible regulations, that should adapt
to the specific context. They also recommend that
the public sector, including governments, serves as
a catalyst for the update and scaling of Trustworthy
AI. This is an important route to expand access to
and familiarity with the technology among the
individuals that governments serve. They also
advocate for strengthening and uniting Europe's AI
research capabilities and harnessing an open and
innovative investment environment. Placing the
human at the centre of AI was at the core of the AI
Ethics guidelines and it rightly continues through
the policy and investment recommendations. This
includes also ensuring that all sectors of the
population have the skills to benefit from AI, which
leads to the recommendation to redesign the
education system from preschool to higher
education.
While this effort is focused on a specific region of
the world, the independent nature of the group, as
well as it multi-disciplinary and multi-stakeholder
composition, may and should serve as a leading
example where a multilateral approach can bring
successful results. The HLEG brings together not
just technology experts but representatives of many
different sectors, including multiple academic fields,
industries, human and consumer rights
associations. This is what allowed this process to
deliver guidelines and recommendations that are
both ambitious and feasible, and thus with high
potential of deep, broad, and enduring impact in AI
governance.
67 68
ABOUT THE AUTHOR
Charlotte Stix is the Coordinator for the European Commission's High-Level Expert
Group on Artificial Intelligence. Charlotte is pursuing a PhD at the Eindhoven
University of Technology, researching the ethics and governance of artificial
intelligence and serves as Expert to the World Economic Forum's Global Future
Council on Neurotechnologies. She collates the European AI Newsletter, widely
seen as the definitive resource for insights into developments in AI policy across
the EU. She has been awarded as a Forbes' 30 under 30 in Technology in Europe in
2020 and collates the European AI Newsletter, widely seen as the definitive
resource for insights into developments in AI policy across the EU.
Formerly, she was a Researcher at the Leverhulme Centre for the Future of Intelligence, University of Cambridge,
a Fellow to the World Economic Forum's AI Council, and a Programme Officer at the European Commission's
Robotics and Artificial Intelligence Unit, where she oversaw 坑18 million in projects and contributed to the
formulation of EU-wide AI strategy. She was also an Advisor to Element AI, a Policy Officer at the World Future
Council, and a Founder of an award-winning culture magazine, which she grew from scratch to a team of 15. Charlotte Stix
The European Union's Governance Approach
Towards "Trustworthy AI "
By Charlotte Stix
Over the last two years, the European Union (EU)
emerged as a key player in the field of artificial
intelligence (AI) governance. Building on the
European Commission's 2018 AI strategy, the EU is
demonstrating the possibility of an ethically
informed, fundamental-rights approach towards AI.
In particular, the Ethics Guidelines for Trustworthy AI
played a predominant role in this development. The
Ethics Guidelines , drafted by the High Level Expert
Group on AI (AI HLEG), an independent group set up
by the European Commission in 2018, took a novel
approach to what ethics guidelines can aim to do.
Three aspects of the document are particularly
noteworthy: (i) it demarcated ‘what' AI Europe should
strive towards; (ii) it is based on fundamental rights;
and (iii) it provides a method to operationalise its
suggestions. This piece will briefly highlight each of
these aspects, and discuss how they move the
European AI governance discussion forward.
The concept of ‘trustworthy AI', as introduced by the
AI HLEG, quickly became a red thread throughout
European policy making. Trustworthy AI is defined as
AI that is "lawful, complying with all applicable laws
and regulations; ethical, ensuring adherence to
ethical principles and values; and robust, both from a
technical and social perspective, since, even with
good intentions, AI systems can cause unintentional
harm." Trustworthy AI, as the type of AI that Europe
strives towards, was subsequently picked up and
reiterated in the European Commission's
Communication: Building Trust in Human-Centric
Artificial Intelligence (2019), and has since been a
core idea underpinning multiple AI strategies from
European Union member states.
A fundamental rights based approach formed the
foundation of the entire document, supporting a
human-centric and trustworthy route towards AI. By
way of in-depth examination, this perspective yielded
four Principles: ‘respect for human autonomy,
prevention of harm, fairness, explicability'. In turn,
these Principles formed the groundwork for the
development of the ‘seven key requirements' ranging
from transparency to technical robustness and safety,
simultaneously achieving trustworthy AI and an
alignment with fundamental rights. This approach is
unique, even in light of a current landscape of over 84
sets of AI Principles.
Finally, the Ethics Guidelines provided an assessment
list, introduced to guide practitioners and other
stakeholders during the implementation phase of the
seven key requirements derived from the ethical
principles. To ensure that this assessment list was of
good use to the ecosystem, the European
Commission conducted a large scale piloting process
over several months, soliciting feedback from
hundreds of stakeholders across Europe. As of this
writing, the input received is analysed and will be
translated into a revised version of the assessment
list. A granular, expertled and principled approach
based on fundamental rights and ethics as
demonstrated by the processes undergone with the
Ethics Guidelines, alongside Commission President
Von der Leyen's proposal to establish "a coordinated
European approach on the human and ethical
implications of Artificial Intelligence" in the first
hundred days of her office, puts the EU in a unique
position to lead on governance measures for ethical
AI in the coming years.
69 70
ABOUT THE AUTHOR
Dr Angela Daly is Senior Lecturer (Associate Professor) and Co-Director of the
Centre for Internet Law & Policy in Strathclyde University Law School (Scotland)
and Visiting Professor at the Università degli Studi di Macerata (Italy). She is a
socio-legal scholar of new digital technologies, with particular expertise in data
protection, telecoms regulation, intellectual property, competition law and human
rights in the European Union, the United Kingdom and Australia. She has
previously worked at the Chinese University of Hong Kong, Queensland University
of Technology, Swinburne University of Technology and the UK communications
regulator OFCOM. She is the author of academic monographs Socio-Legal Aspects
of the 3D Printing Revolution (Palgrave 2016) and Private Power, Online Information
Flows and EU Law: Mind the Gap (Hart 2016), and the co-editor of Good Data (INC 2019). Her current research
examines the emergence of law, ethics statements and policy from public and private actors in the EU, US, China
and India on artificial intelligence (AI).Angela Daly
The Driving Forces of AI Ethics in the United
Kingdom
By Angela Daly
The UK Government has linked AI development
directly to its industrial strategy, and also seems to
view this as giving the UK a potential competitive
edge, especially in its post-Brexit trajectory.
Between 2017 and 2018 the UK Government placed
increasing emphasis on the national importance of
AI, naming it as one of the country's four Grand
Challenges in the 2017 Industrial Strategy, and
investing in an AI Sector Deal in 2018. The UK
Government also envisaged a leadership role for the
country internationally in safe and ethical uses of
data and AI. It set up a Centre for Data Ethics and
Innovation as an advisory body and committed to be
an ‘active participant' in standard setting and
regulatory bodies especially for AI and data
protection. Between 2017 and 2018 there was also
activity in the UK Parliament, with an All-Party
Parliamentary Group on AI set up in 2017 and a
Select Committee on AI formed which issued a
report in 2018. The Select Committee's report
included 5 non-legally binding ‘overarching
principles', as the basis for a possible cross-sector
‘AI Code' that it suggested be formulated and
developed by the Centre for Data Ethics and
Innovation.
In 2019, the Centre for Data Ethics and Innovation
commenced its work. It has focused so far on online
targeting and bias in algorithmic decision-making,
producing two interim reports on these topics in July
2019, and a series of ‘snapshot’ reports in
September 2019 on ethical issues in AI, focusing on
deepfakes, AI and personal insurance, and smart
speakers and voice assistants. The Centre for Data
Ethics and Innovation is scheduled to deliver formal
recommendation to the UK Government in early
2020 on online micro-targeting and algorithmic bias.
There has been significant political instability
domestically in the UK during 2019 with a change of
Prime Minister and then a General Election in
December 2019 which has given the new Prime
Minister, Boris Johnson, a large majority in the
House of Commons.The UK formally left the
European Union on 31 January 2020, and the
government now commands a sufficient majority to
make and implement law and policy, including on AI.
However, divergence may yet occur within the UK on
AI. The autonomous Scottish Government (led by the
Scottish National Party) launched its own initiative
to develop an AI strategy for the Scottish nation in
January 2020. It has since released a scoping paper
for public consultation. On the basis of consultation
responses, the Scottish Government aims to publish
its own AI Strategy in September 2020. It remains to
be seen how aligned this strategy will be with the
UK's overall approach to AI.
71 72
ABOUT THE AUTHOR
Danit Gal is Technology Advisor to the UN Secretary General High-level Panel
on Digital Cooperation. She is interested in the intersections between
technology ethics, geopolitics, governance, safety, and security. Previously, she
was Project Assistant Professor at the Cyber Civilizations Research Center at
the Keio University Global Research Institute in Tokyo, Japan. Danit chairs the
IEEE P7009 standard on the Fail-Safe Design of Autonomous and
Semi-Autonomous Systems and serves on the executive committee of The IEEE
Global Initiative on Ethics of Autonomous and Intelligent Systems. She is an
Associate Fellow at the Leverhulme Centre for the Future of Intelligence at the
University of Cambridge, and an Affiliate at the Center for Information
Technology Policy at Princeton University. Danit Gal
Localizing AI Ethics and Governance in East Asia
By Danit Gal
2019 marked the year of moving from AI Ethics and
Governance principles to action. In 2017 and 2018,
numerous countries, companies, and institutions
rushed to publish AI Ethics and Governance principles.
Unsurprisingly, we witnessed broad international
alignment on core principles such as accessibility,
accountability, controllability, explainability, fairness,
human-centricity, privacy, safety, security, and
transparency. Now we're moving to the implementa -
tion stage, as these entities explore what localizing
globally shared principles means.
This is a critical rite of passage in AI Ethics and
Governance. As we pursue the localization of these
principles, we're beginning to see major points of
contention between alternative interpretations as well
as discover new implementation paths. This is a
positive development. AI Ethics and Governance
principles can only prove effective if they are put into
practice, and that requires adapting them to local
needs and realities. Perhaps most common in the
localization process is consulting local cultural,
religious, and philosophical traditions when defining
one's ethics. This is particularly salient in East Asia,
where Confucian philosophical traditions, technoani -
mistic Buddhist and Shinto inclinations, and rich
cultural perceptions of technology play a key role in
the localization of AI Ethics and Governance principles.
Another notable process of localization is found in the
different approaches to the implementation of
principles such as privacy and accountability. In the
localization of privacy, we see different approaches to
data ownership and protection, also critical to AI
training, between the EU, US, and China. Championing
the GDPR, the EU seeks to empower users and regain
individual control over personal data. In the US we're
still seeing data being regarded as proprietary by
technology companies despite evolving data protection
regulations, especially when transacting with third
parties. In China, authorities raised the stakes and are
actively warning and banning applications deemed to
abuse, misuse, and excessively collect user data.
The localization of privacy also feeds into that of
accountability, which is central to AI developers. In the
EU, US, and China (alongside other countries) we see
authorities holding companies responsible for the
technologies they develop and distribute. The EU, for
example, fines companies directly for misconduct.
South Korea, in comparison, takes a different
approach in its Ethics Guidelines by dividing responsi -
bility between providers (companies), developers, and
users. The South Korean model of accountability
offers new challenges and opportunities that are
worth exploring, especially as we strive to create more
individual accountability by promoting the informed
and consensual use of technology.
These are a few examples of the growing AI Ethics and
Governance principles localization trend. More
research is needed to better understand how these
processes take place and how they affect domestic
and international technology users. The next step in
this process will be to feed instances of these localiza -
tions back to principle drafters to share best practices
and identify what is still missing. Looking forward,
2020 promises another year of AI Ethics and Gover -
nance principles localization, with a proliferation of
local interpretations and implementations to learn
from.
73 74
ABOUT THE AUTHOR
Arisa Ema is a Project Assistant Professor at the University of Tokyo and Visiting
Researcher at RIKEN Center for Advanced Intelligence Project in Japan. She is a
researcher in Science and Technology Studies STS
, and her primary interest is to
investigate the benefits and risks of artificial intelligence by organizing an
interdisciplinary research group. She is a co-founder of Acceptable Intelligence
with Responsibility Study Group AIR
established in 2014, which seeks to address
emerging issues and relationships between artificial intelligence and society. She
is a member of the Ethics Committee of the Japanese Society for Artificial
Intelligence (JSAI), which released the JSAI Ethical Guidelines in 2017. She is also a
board member of the Japan Deep Learning Association (JDLA) and chairing Public
Affairs Committee. She was also a member of the Council for Social Principles of Human-centric AI, The Cabinet
Office, which released "Social Principles of Human-Centric AI" in 2019. She obtained a Ph.D. from the University of
Tokyo and previously held a position as Assistant Professor at the Hakubi Center for Advanced Research, Kyoto
University.Arisa Ema
Social Concerns and Expectations on AI Governance
and Ethics in Japan
By Arisa Ema
The government took the lead in discussions about
AI governance and ethics in Japan. The Ministry of
Internal Affairs and Communications MIC
, since
2016, has held the "Conference toward AI Network
Society." The conference released the "AI R&D
Guidelines" in 2017 and "AI Utilization Guidelines" in
2019. Culminating from inter-governmental and
multi-stakeholder discussions, the "Social
Principles of Human-Centric AI" was released from
the Cabinet Secretariat in February 2019. The
"Social Principles of Human-Centric AI" outlines AI
governance, allowing industries and sectors to turn
its principles into practice. For example, the Japan
Business Federation Keidanren
released the "AI
Utilization Strategy: For an AI-Ready Society" that
developed an AI use strategy framework in February
2019. Companies such as Fujitsu, NEC, and NTT
Data also released AI principles in spring 2019. Both
traditional companies and a startup company
(ABEJA) organized ethics committees to begin
discussions on AI governance and ethics.
While industries commenced the discussion, two
incidents in 2019 caught the public's attention and
accelerated the importance of discussing AI
governance. First, there was a scandal involving a
recruitment management company selling
users'/students' data to client companies in August.
Although the main problem was related to the
illegality of using personal information and not the
algorithmic bias of AI, this incident was almost the
first case in the media involving ethical and legal
issues around AI in Japan. The second incident
occurred in November, where the Project Associate
Professor at the University of Tokyo (a director of an
AI company) tweeted racist opinions regarding the
company's recruitment policy, and claimed his
discriminatory comments were caused by machine
learning. The University of Tokyo immediately
released its official statement that his tweets
contravene the ideals of the University of Tokyo
Charter.
These incidents raised social anxieties towards
machine learning. In response, three academic
communities that were engaged in machine learning
released the "Statement on Machine Learning and
Fairness" in December, declaring that (1) machine
learning is nothing more than a tool to assist human
decision making, and (2) machine learning
researchers are committed to improving fairness in
society by studying the possible uses of machine
learning. This research group will organize a
symposium in January 2020 to open a dialogue on
machine learning and fairness supported by various
organizations.
Regarding AI governance and ethics, 2019 in Japan
has shown that the lead role in these factors has
shifted from the government to business.
Simultaneously, the social implementation of AI
progresses and, consequently, the ethical, legal, and
social concerns regarding AI and machine learning
have emerged in Japan. However, multi-stakeholder
and inter-disciplinary networks on AI governance
have been organized in Japan since 2016, and we
will continue to tackle these issues and contribute to
the world's AI governance discussions.
75 76
ABOUT THE AUTHOR
Professor Goh's research focuses primarily on the law of contract and torts, with a
secondary interest in the principles of statutory interpretation and the legal
process. He has published numerous books, chapters and journal articles
internationally and in Singapore, which have been cited on multiple occasions by
the Singapore courts and the Federal Court of Malaysia. He has been appointed
amicus curiae before the Singapore Court of Appeal and the Singapore High Court.
In recognition of his invaluable contributions to the development and advancement
of Singapore law, he became the youngest recipient of the pentennial Singapore
Academy of Law Singapore Law Merit Award in 2013. He obtained his LL.B. (First
Class Honours) from the National University of Singapore on a University
Undergraduate Scholarship, where he graduated as the top student in 2006. He subsequently obtained a LL.M.
from Harvard University in 2010 on a NUS University Overseas Graduate Scholarship.
Nydia Remolina is a Research Associate at the Singapore Management
University´s Centre for AI and Data Governance. She holds a Master of the Science
of Law from Stanford University and has more than ten years of experience in the
financial services industry, currently acting as an advisor for financial regulation,
digital transformation and Fintech for financial institutions. Nydia has also been
the manager of policy affairs at Grupo Bancolombia, a financial conglomerate
headquartered in Latin America, a senior advisor to the Organization for Economic
Cooperation and Development (OECD), and Foreign Attorney at Sullivan &
Cromwell LLP (New York Office). She has taught or delivered lectures at several
academic institutions in the United States, Asia, Europe, and Latin America, and
she has been invited to speak about fintech and financial regulation at various organizations, including the
International Monetary Fund (IMF), the International Organization of Securities Commissions (IOSCO) and the U.S.
Securities and Exchange Commission (SEC). Her main areas of work and academic research include financial and
banking regulation, securities regulation, fintech, legaltech, and the intersections of law, finance and technology.The Innovation of Singapore's AI Ethics Model
Framework
By Goh Yihan and Nydia Remolina
\*This research is supported by the National
Research Foundation, Singapore under its Emerging
Areas Research Projects (EARP) Funding Initiative.
Any opinions, findings and conclusions or
recommendations expressed in this material are
those of the author(s) and do not reflect the views of
National Research Foundation, Singapore.
Since 2017, Singapore government identified
Artificial Intelligence (AI) as one of the four frontier
technologies that would further the groundwork
infrastructure that underpins the country's
ambitions for its Digital Economy and its Smart
Nation ambition. On the one hand, 2019 was a period
when fundamental policy initiatives were launched
in Singapore. On the other hand, in 2019 the
Government reaffirmed the importance of
developing and using AI by implementing projects in
key high-value sectors and building a holistic AI
ecosystem.
The policy initiatives positioned Singapore as one of
the leading voices in AI Governance worldwide.
Indeed, on April 2019 the country won a top award at
the World Summit on the Information Society
Forum, a United Nations level platform. The
initiatives that contributed to the win included:
Asia's first model AI governance framework that
was released in January; an international and
industry-led advisory council on the ethical use of AI
and data; and a research programme on the
governance of AI, ethics and data use established
through the SMU Centre for Artificial Intelligence
and Data Governance that I lead and from where we
contribute to the ecosystem by conducting academic
research to inform AI and data governance in
Singapore and beyond, with a particular focus on
legislation and policy.
One of the most relevant cross-sectoral policy
initiatives of this year is the Model Artificial
Intelligence Governance Framework — or Model
Framework — launched in January 2019 as a guide
for organizations to practically address key ethical
and governance issues when deploying AI
technologies. The Singaporean approach helps
translate ethical principles into pragmatic measures
that businesses can adopt. It is the result of the
collaboration between the private sector and
regulators and the first attempt of a country in Asia
to put together this type of framework. Other
jurisdictions lead similar initiatives this year. For
example, the European Commission announced its
final set of AI and ethics guidelines by March 2019,
an approach likely to complement the EU General
Data Protection Regulations. On a more
international scale, the OECD presented on May
2019 a set of principles on AI to promote the
innovative and trustworthy use of AI that respects
human rights and democratic values.
Additionally, Singapore launched in October 2019
the National AI Strategy NAIS
that will see over
S$500 million committed to funding activities
related to AI under the Research, Innovation and
Enterprise 2020 Plan, in hopes of furthering AI
capabilities in these fields. Highlighted in the NAIS,
Singapore will start by focusing on five key sectors
to concentrate its efforts on - transport and
logistics, smart cities and estates, safety and
security, healthcare, and education. These National
AI projects aim to channel investment for research
and development, anchor talent and guide the
development of supporting digital infrastructure in
Singapore.
What do we expect for next year? We look forward to
keeping consolidating the AI ecosystem in Singapore
from the academia by publishing cutting-edge
research that can contribute to convene and
facilitate dialogue, across academic, industry and
regulators, especially between organisations in the
Asia Pacific region. We also expect that regulators
will continue to develop their initiatives towards
having trustworthy AI, such as the second version of
the AI Model Framework from IMDA, and the Veritas
initiative announced by the Monetary Authority of
Singapore which will translate into practice the
principles-based approach for AI that the financial
regulator has adopted.
Goh Yihan
Nydia Remolina
77 78
ABOUT THE AUTHOR
Urvashi Aneja is CoFounder and Director of Tandem Research, an interdisciplinary
research collective in India, that generates policy insights at the interface of
technology, society, and sustainability. Her research focuses on the societal
implications of data-driven decision making systems in the global south. She is
also Associate Fellow at the Asia Pacific Program at Chatham House; a member of
the T-20 Task Force on the Future of Work & Learning; and a regular contributor to
national media publications.Urvashi AnejaThe Grand Indian Challenge of Managing Inequity
and Growth in the AI Era
By Urvashi Aneja
Little progress has been made on the issue of AI
governance in India this past year. Despite
artificial intelligence being seen as a catalyst for
economic growth and a solution for complex
socio-economic challenges, India is yet to
articulate a framework for how this technology
should be governed. Much of the policy
conversation has been informed by the private
sector, with minimal consultation of civil society
or academia. As a result, unlocking the potential
of AI is seen primarily as a technical challenge,
that can be addressed through the creation of a
better innovation and start-up ecosystem,
investments in skilled manpower, and creation of
national data infrastructures. The societal
challenges and risks have received comparatively
little attention. To date, there is little meaningful
conversation at the policy level on issues of
access, equity, fairness and accountability. The
data protection bill - yet to be finalised - also does
not deal with the challenges posed by machine
learning systems. The primary concern seems to
be around finding ways to leverage personal data
for public good and AI development, rather than
privacy or social justice. The lack of governance
frameworks is a critical concern, as AI is already
being deployed in public systems. Police
departments across the country are using
predictive analytics as well as automated facial
recognition systems. Plans are also underway to
deploy AI based systems in both judicial and
welfare delivery systems. India seeks to be a
global AI leader, but this necessitates not just
being at the forefront of innovation, but also
developing normative frameworks and governance
systems that align AI trajectories with societal
needs. Blind technological optimism might
entrench rather than alleviate the grand Indian
challenge of managing inequity and growth.
At a global level, the past year has seen the
proliferation of ethical frameworks for the
governance of AI. But these are likely to be
inadequate - they typically comprise of vague
commitments by governments and technology
companies, with no enforcement or accountability
mechanisms. A more promising direction is to
tether AI governance to already established and
widely recognised international human rights
frameworks. But, it is important to recognize that
the issue of AI governance extends beyond the
violation of specific human rights or individual
harm. The growing use of AI can lead to increasing
inequality, concentration of power, entrenchment
of discriminatory and exclusionary systems, and
even the creation of a surveillance society. Just as
AI is not a silver bullet to address socio-economic
challenges, neither is a single set of regulatory or
governance frameworks adequate to address
these societal harms. Governing AI will require a
range of public policy interventions - from
competition law to curb the powers of Big Tech to
sector specific standards and risk assessments.
India currently is yet to address these issues, with
the few existing governance conversations limited
to how Indian data can be leveraged to improve
India’s AI readiness and competitiveness.
AI presents a wicked problem for public policy -
one that consists of multiple interacting systems,
both social and technical; in which there is
uncertainty about the impacts and risks; and in
which the divergence between various
stakeholders is one of competing values and world
views. Addressing wicked problems requires
engaging multiple stakeholders in iterative and
adaptive strategies; enabling collaborative
sense-making, experimentation, and learning; and
building capacities for reflexiveness and foresight.
79 80
ABOUT THE AUTHOR
FU Ying is the Chairperson of the Center for International Security and Strategy đ
Tsinghua University (CISS). She is Vice-Chairperson of the Foreign Affairs
Committee of China’s 13th National People’s Congress (NPC).
FU Ying started her career with China’s Ministry of Foreign Affairs (MFA) in 1978
and had long engaged in Asian affairs. She served successively as Director of a
Division in Asian Affairs Department of MFA and then was promoted to Counselor
of the Department. In 1992 She joined UN peacekeeping mission in Cambodia. She
was appointed Minister Counselor at Chinese Embassy in Indonesia in 1997,
Chinese Ambassador to the Philippines in 1998, and Director General of Asian
Department of MFA in 2000. She then was appointed Ambassador to Australia (2004-2007), and Ambassador to the
United Kingdom (2007-2009). She served as Vice Minister of Foreign Affairs for European Affairs and then for
Asian Affairs (2009-2013). FU Ying was elected deputy to China’s 12th and then 13th NPC (since 2013) and served as
Chairperson of the Foreign Affairs Committee and spokesperson of the 12th NPC (2013-2018). She took on her
current NPC position in 2018.FU YingBenefit in Partnership
By FU Ying
Super-intelligent AI is still a way off but artificial
intelligence already exceeds human capacity in many
growing areas, sparking huge expectations as well as
fear and concern. Both the United States, the AI
leader, and China, which is rapidly creating massive
applications, should shoulder the responsibilities for
what needs to be done.
But before we can talk about the future, we need to
consider whether we are going to do it together.
Worsening US-China tensions cannot but have an
impact on how we deal with the challenges down the
road. Should we work to make technology symbiotic
to mankind and ensure that the technological
advances will make our civilisations prosper? Or
would we go separate ways and use the technology to
undermine, even hurt, the other side?
After three decades of rapid industrialisation, China
finds itself among the top echelon in advancing AI
technology and is aware of the needs of rule-making
that comes with its advancement. China’s AI
governance expert committee, set up by the Ministry
of Science and Technology in February 2019, has
released eight AI governance principles. They
include: harmony and human-friendliness, fairness
and justice, inclusiveness and sharing, respect for
privacy, security and controllability, shared
responsibility, open collaboration, and agile
governance. Efforts are also being made to put these
principles into practice.
AI research is the product of global collaboration,
with researchers sharing ideas and building on each
other’s work. With multinational AI platforms
expanding globally, countries need to agree on ethical
norms and industry rules. China is open to discussing
and working with other countries on this. Our efforts
in AI governance need to be connected to similar
efforts in other parts of the world, the US in
particular.
Neither China nor the US can monopolise the world’s
technological progress. If they complement each
other, the prospects for AI technology will be
brighter; if they stop working with each other, both
will suffer and the general progress will pay a price.
It would be self-destructive to allow geopolitical and
a zero-sum competitive philosophy to dominate
relations.
The US view of hi-tech as an area of strategic rivalry
is not a perspective shared by China. While there is
competition, the reality in the field is a kind of
constructive and strategic mutual dependency.
According to Clarivate Analytics, from 2013 to 2017,
the number of AI-related papers co-authored by
Chinese and Americans grew the fastest, reaching
4,000 in five years.
American companies lead the way in technologies,
and American universities are ahead of the global
pack. China has the largest user market and
therefore provides faster iterative upgrading of
algorithms. Both countries can benefit tremendously
in a partnership, unless the US forces a decoupling
and pushes China to find other partners or to develop
its own solutions – which would also weaken US
companies’ position and influence.
For China, the preferred path is to encourage
collaboration in developing common rules for safe,
reliable and responsible AI.
81 82
ABOUT THE AUTHOR
ZHAO Zhiyun, PhD in Economics, Professor, Doctoral Supervisor, the Party
Committee Secretary of Institute of Science and Technology Information of China
(ISTIC), Director of New-Generation Artificial Intelligence Development Research
Center of Ministry of Science and Technology of the People's Republic of China
(MOST). ZHAO Zhiyun is granted with the Special Government Allowance provided
by the State Council, and selected for "New Century Million Talents Project",
National Cultural Expert and Theorist of "Four Groups" and Leading Talent of the
"Ten Thousands Talent Plan". She is well-known as a leading talent in economic
theories and policies, and S&T management and policies. She especially has
unique insights on emerging technology and industrial development. She pays
great attention to the issue of AI governance, and focuses on promoting related research and cooperation between
China and other countries. She has won outstanding achievements in the construction of theoretical system, in the
promotion of technological progress, and in the related disciplinary construction. She has published more than 30
academic monographs, 4 Chinese translations, and more than 130 academic papers. As the Principal Investigator,
she takes charge of nearly 30 national, provincial and ministerial research projects, including National Key
Research and Development Project, National Sci-Tech Support Plan and National Soft Science Major Project.ZHAO ZhiyunProgress of Artificial Intelligence Governance in
China
By ZHAO Zhiyun
China has always attached great importance to the
governance of Artificial Intelligence (AI). On the
ninth round group learning of the Political Bureau
of the CPC Central Committee, which is the
highest decision-making agency, the General
Secretary Xi Jinping emphasized the demand to
integrate multidisciplinary resources to
strengthen the research on AI-related laws, ethics
and social issues and establish and improve laws,
regulations, systems and ethics to guarantee the
healthy development of AI. The released national
"Development Planning for a New Generation of
Artificial Intelligence" has made clear
deployments in following aspects, to conduct
researches on AI relevant legal issues and
regulations in such key areas as autonomous
driving and robotics; to promote researches on AI
behavioral science and ethics; to establish ethics
and codes of conduct for R&D and designers; and
to actively participate in the global AI governance.
On February 15, 2019, to strengthen the research
on AI-related laws, ethics, standards, and social
issues, and to get deeply involved in the
international cooperation of AI governance, the
Ministry of Science and Technology (MoST)
initiated the establishment of the New-generation
AI Governance Professional Committee consisting
of experts from colleges and universities,
research institutes and enterprises. On June 17,
2019, the Committee released the "Governance
Principles for a New Generation of Artificial
Intelligence: Develop Responsible Artificial
Intelligence", which proposed eight principles,
namely, harmony and human-friendliness,
fairness and justice, inclusiveness and sharing,
respect for privacy, security and controllability,
shared responsibility, open collaboration, and
agile governance. The eight principles gained
profound echoes worldwide, of which partly due to
its combination of global standards and Chinese
characteristics. Subsequently, Beijing and
Shanghai have released their own local AI
governance principles or initiatives, such as
“Beijing AI Principles", "Chinese Young Scientists’
Declaration on the Governance and Innovation of
Artificial Intelligence Shanghai, 2019" and
"Shanghai Initiative for the Safe Development of
Artificial Intelligence". Industries came up with
governance principles based on their own, such as
by Tencent and by MEGVII. All the above moves
make a big impact.
In 2020, China’s priority will be the
implementation of the said eight governance
principles. The aim will focus on accelerating the
formulation and improvement of AI-related laws,
standards and norms and making AI governance
more legalized, more refined and more
institutionalized. Given that AI governance is a
global issue, international cooperation will be an
important part for China’s AI governance. In order
to promote the healthy development of
next-generation AI, China will always adhere to
the cores of openness and cooperation in
promoting the next-generation AI governance, to
positively participate in the global AI governance
agenda, to build international platforms including
the World Artificial Intelligence Conference, and
to keep communicating with the global players.
China is ready to work with any other countries or
organizations around the world to promote AI
which is good for all the human being.
83 84
ABOUT THE AUTHOR
Dr. LI Xiuquan is now Research Fellow of Chinese Academy of Science and
Technology for Development (CASTED), and Deputy Director of New Generation
Artificial Intelligence Development Research Center of Ministry of Science and
Technology. He received his Ph.D. degree, in field of Computer Science, from
Tsinghua University. He is also joint PhD in Information Science, University of
Hamburg, Germany. He has many years of research experience in AI fields, such as
multidimensional time series data modeling and prediction, and brain-controlled
robot system based on EEG. His current research area is big data and AI technology
foresight and evaluation, industrial technology roadmap and AI innovation policy
research. He has strong interest in the study of the frontier trend of intelligent
transformation, and the demands for innovative policies in various aspects of AI development such as research,
industry and governance. He has presided over 10 research projects such as “Research on the Major Strategic
Issues of Chinese Intelligence Economy and Intelligence Society development”, “Research on the Leading Trends
and Policies of Artificial Intelligence at Home and Abroad”.LI XiuquanFrom Principles to Implementation, Multi-Party
Participation and Collaboration are Even More
Needed
By LI Xiuquan
In 2019, the governance of AI has drawn wider
attention from the international community.
International organizations, governments, academia,
and enterprises continue to explore values of new
technological and publish their own principles for
the development of AI. China also released
“Governance Principles for a New Generation of
Artificial Intelligence: Develop Responsible Artificial
Intelligence” in 2019. The international community
has formed a consensus statement around such key
issues as people orientation, fairness, transparency,
privacy, and security, reflecting that all parties have
formed a universal value concept for the
development of AI.
At the same time, the focus of global AI governance
is moving from the formulation of principles to
continuous refining and implementation of these
principles and guidelines. In this process, it is more
important to fully absorb the opinions of
stakeholders. Compared with the previous stage, it
will require more extensive multi-party participation
and closer collaborative governance.
The application of AI will bring about various
influences on the future society's economic
activities, public management, travel, etc., and it
will affect all walks of life and various groups. From
governance principles to detailed rules and
regulations, it is not enough to rely solely on
government officials and experts. It requires the
joint efforts and active participation of the
government, academia, industry, and the public.
China is continuously promoting the implementation
of AI governance principles in the construction of AI
innovation pilot areas and AI open innovation
platforms, and put forward the governance rules in
various fields through the exploration practice. It is
particularly important to establish an effective
opinion collection and feedback mechanism to
enable all sectors of society to participate in the
governance of AI, and thus to incorporate the
appeals of different groups, especially vulnerable
groups and other stakeholders, into the detailed
rules.
Similarly, from a global perspective, different
countries have different national conditions and
different ethnic groups have different histories and
cultures. The implementation of AI principles
requires effective communication and coordination.
It is helpful to establish a more diversified
collaborative governance platform to strengthen
dialogue and communication among countries and
make differences fully collide and merge with each
other in pragmatic communication, which will
definitely help to form a broader consensus, and
enable AI to better improve the people's livelihood
and well-being in all countries.
85 86
ABOUT THE AUTHOR
DUAN Weiwen is the Director and Professor of the Department of Philosophy of
Science and Technology in the Institute of Philosophy, Chinese Academy of Social
Sciences CASS
, and he is also Distinguished Professor in University of CASS, and
the Director of the Research Center for Science, Technology and Society, CASS. He
holds a Bachelor of Science degree in Physics from Central China Normal
University, and a Master of Philosophy and PhD degree in Philosophy of Science
and Technology from Renmin University of China. He specializes in philosophy of
science, philosophy of information technology, etc. In recent years, he has focused
on the philosophical, ethical and social research of big data and AI. He was a
visiting scholar in Oxford University with Luciano Floridi
, Colorado School of
Mines with Carl Mitcham
, and University of Pittsburgh with Edouard Machery
. He is on the editorial board of
the Journal of Information, Communication and Ethics in Society and Journal of Responsible Innovation, and he is
one of the deputy chairmen of the Committee of Big Data Experts of China. He is now the chief researcher and
project leader of several important and general social science fund research projects, including Philosophical
Studies on Intelligence Revolution and Deepening Techno-scientific of Human Being 2017-2022
, which is
supported by the National Social Sciences Founding of China NSSFC
. He is the author of several books,
including Acceptable Science: Reflection on the Foundation of Contemporary Science, Ethical Reflection on
Cyberspace , and Truss up Time: Technology and Life World , etc.DUAN WeiwenTowards Robust and Agile Framework for Ethics
and Governance of AI
By DUAN Weiwen
In 2019, four aspects in AI ethics and governance in
China deserve attention. Firstly, various principles,
standards and declarations of AI ethics and
governance were released. These include
”Governance Principles for a New Generation of
Artificial Intelligence: Develop Responsible Artificial
Intelligence”, the “Beijing AI Principles” released by
Beijing Academy of Artificial Intelligence (BAAI), the
artificial intelligence ethical principles in “AI Ethical
Risks of AI Research Report” proposed by Artificial
Intelligence Working Group, SAC, “Chinese
prospects for the Standardization of Robot Ethics”
(2019) by National Robotics Standardization Working
Group and Peking University. Meanwhile, CCID and
CAICT under the MIIT of China, respectively, have
proposed the declarations or conventions of AI
ethics, and Tencent also released its own AI ethical
framework. Not only legal and philosophical
scholars participated in related research, but
researchers in the field of AI also shown great
interest in the research of ethics system of AI and
safe and reliable AI, etc. Secondly, certain progress
has been made in the legal regulation of personal
information protection and data rights, data
governance, and data compliance. For example, the
“Act on the Protection of Personal Information” and
the “Data Security Law” has been included in the
legislative plan for the next year; and MIIT has
carried out the special rectification action against
the APPs infringing on the rights and interests of
users. It is worth mentioning that the revised draft
of the Law on Protection of Minors emphasizing that
informed consent is required to collect information
about minors. Thirdly, AI applications such as face
recognition are rapidly spreading and causing lots of
ethical and legal disputes. Although the abuse of
face recognition in classrooms, parks and other
scenes has led to public criticism and even legal
proceedings, its application in China seems
unstoppable. In addition, AI companies have also
conducted some ethical and governance practices.
Leading companies such as Tencent have proposed
Technology for Good as its goal, and applied AI to
prevent game addiction and find lost children.
Megvii, one of China's facial recognition giants, also
released AI Application Criteria, which are used for
internal review by its AI ethics committee. However,
given that these efforts are far from being the basis,
such as KPI, on which companies evaluate their
products and services, they are inevitably criticized
as flexible PR or some kinds of ethics washing.
All in all, China is generally more optimistic about
the positive impact of AI on the economy, society,
enterprises and personal well-beings. However, the
ethical risks of AI are not fictitious. On the one hand,
while enjoying the convenience of innovation,
ordinary users will inevitably be concerned about
the abuse of personal data and the opacity of
algorithmic decisions. On the other hand,
developers also worry that a lack of ethical
regulation will make them pay a high price for the
risks involved. In order to eliminate this double
anxiety, it is necessary to carry out the ethical
adjustment through ethical assessment of
technology, "technology-ethics" correction and the
construction of trust mechanism for AI. What's more
important is to build a robust and practicable
framework for ethics and governance of AI to
achieve agile governance on the basis of full
consideration of the social impact of AI, regional and
global compatibility, and maintenance of the
fundamental condition - world peace.
87 88
ABOUT THE AUTHOR
Dr. LUAN Qun joined China Center for Information Industry Development in 2011 as
the Director in the Institute of Policy and Law, holding a PhD in Civil and
Commercial Law from the China University of Political Science and Law. He is an
industry expert in the civil and commercial law and industrial economy and policy
and leads the Legal Services Centre for Industry and Informatization. His recent
consulting work has centered on industry strategy, business development and
supervision, with a special focus on autonomous vehicles, industrial data and
manufacturing. He has carried out successful projects for industrial development
planning and interpretation of industrial policy in Nei Mongol, Henan and Shandong
province. He has published more than 50 articles in "Learning Times", "China
economy and Informatization", "Modern Industrial economy", "Economic Daily", "China Electronic Journal" and
other magazines and newspapers.LUAN Qun
Globalization and Ethics as the Consensus of AI
Governance
By LUAN Qun
In 2019, AI governance is characterized by
globalization and ethical integration. The major
countries, economies and international
organizations in the world have successively
released documents on AI governance. The most
representative ones are the EU Ethics Guidelines for
Trustworthy AI (April 2019), the joint statement and
“G20 AI Principles” (June) adopted by the G20 Digital
Economy Ministers' Meeting and G20 Trade and
Digital Economy Ministers' Joint Meeting held in
Tsukuba, Japan; and, also in June, China's National
New Generation AI Governance Expert Committee
issued “Governance Principles for a New Generation
of Artificial Intelligence: Develop Responsible
Artificial Intelligence”. China's AI governance, has
also been transferred to ethical governance from the
planning of the State Council and related
departments in 2017, such as the “New Generation
of Artificial Intelligence Development Plan” and the
“‘Internet+’ Three Year Action Plan for Artificial
Intelligence”, as well as industry and domain plans
such as the “Three-year Action Plan on Promoting
the Development of A New Generation of Artificial
Intelligence Industry 2018-2020
, 2018 Intelligent
Manufacturing Pilot Demonstration, and the ”AI
Innovation Action Plan for Universities”, etc. This is
highlighted by the emphasis on "responsibility" in
the new generation of AI governance principles,
which is the same meaning as the EU's emphasis on
"trustworthiness". In August, the rule of law forum
of Shanghai 2019 world AI conference released
guidelines for AI security and rule of law 2019
. The
theme of the forum is "building the rule of law in the
future and sharing the benefits of intelligence", so
as to promote industrial development and follow-up
of relevant systems, better serve and safeguard the
overall situation of AI national strategy, and show
the Chinese scheme of AI governance to the world.
As the industry management department, the
Ministry of Industry and Information Technology
mainly implemented the top-level industrial design
plan in 2019, such as the “Three-year Action Plan
for Promoting the Development of the New
Generation of Artificial Intelligence Industry”
2018-2020
, which mainly cover eight products and
three technologies, the development plan and
standards for key industries, such as the “Auto
Driving Action Plan for the Development of the
Internet of Vehicles Intelligent Connected Vehicles
Industry”, “Key Points for the Standardization of
Intelligent Internet Connected Vehicles in 2019”;
and, key work on joint promotion, such as joint
efforts with the Ministry of Natural Resources and
Beijing to carry out the pilot work of Internet of
vehicles Intelligent Connected Vehicles
and
automatic driving map application; and industrial
Internet work, such as the implementation of the
Guide for the Construction of Integrated
Standardization System of Industrial Internet. All of
these new policy documents involve the related
discussions on AI governance.
89 90
ABOUT THE AUTHOR
GUO Rui (Associate Professor of Law at Renmin University of China, researcher of
RUC's Institute of Law and Technology, and Director of Center for Social
Responsibility and Governance). Dr. GUO Rui researches on corporate law, financial
regulations, human rights, and the ethics of AI. He graduated from China University
of Political Science and Law (LL. B & LL.M) and Harvard Law School (LL.M &
S.J.D). Professor GUO Rui is a member of the Sub-Committee of User Interface,
National Standardization Committee of Information Technology, and the Lead
Expert for the Research Group on the Ethics of Artificial Intelligence appointed by
Artificial Intelligence Working Group, Standardization Administration of the
People's Republic of China (SAC). He participated in the drafting of the first AI
standardization white paper (published in 2018), and led the drafting of the AI Ethical Risks of AI Research Report
(published in May 2019 by Artificial Intelligence Working Group, SAC).GUO Rui
The Principles of Well-being of Human Person and
Accountability
By GUO Rui
In 2019, Artificial Intelligence (AI) affected every
aspect of people's lives all around the world, with its
increasing application in business, healthcare,
transportation, financial services, education, and
public safety. For the public and the policy makers,
whether the negative impacts of AI will be properly
handled, such as the leakage of personal
information, the output of poorly-trained AI, and the
misuse of AI, causes more and more concerns. The
academia, the industry and the policy makers have
actively joined the AI-ethics-related discussions and
debates, making 2019 a critical juncture for the
global community to move towards a consensus on
AI governance.
Experts from industries, academia and civil
societies have gradually come to a consensus that
the negative impacts related to AI are best treated
as risks, and could be identified, prevented and
managed through a rigorous risk-management
system. The insight has informed the
standardization work, and much ethic-related
standardization is steadily advancing and gaining
momentum. This consensus is leading to a
governance system that allows the world to reap the
benefits and prevent the harms of AI. Although the
concept of risk is helpful to deal with the known and
immediate negative impacts of AI, it certainly does
not exhaust all those AI brings, especially the
uncertain and long-term ones. We should continue
to explore ways that could help human society to
deal with AI ethical issues.
In my capacity as the Lead Expert for the Research
Group on the Ethics of Artificial Intelligence of the
Artificial Intelligence Working Group,
Standardization Administration of the People's
Republic of China (SAC), I proposed that two
principles need to be followed for Ethical and
Responsible AI. First, Ethical and Responsible AI
implies the principle of the well-being of human
person. Promoting the well-being of human person
should be the ultimate goal of AI research and
applications. Second, Ethical and Responsible AI
implies the principle of accountability. These two
principals have informed the drafting of the AI
Ethical Risk Research Report (published in May 2019
by Artificial Intelligence Working Group, SAC).
91 92
ABOUT THE AUTHOR
WANG Yingchun, PhD, Head of Research Department of Science, Technology and
Society at Shanghai Institute for Science of Science, areas of expertise include
innovation transformation and innovation governance, and science, technology and
society. He initiated and organized a multidisciplinary AI research group to conduct
systematic research on AI. He has undertaken a number of consulting projects
entrusted by the Ministry of Science and Technology and the government of
Shanghai municipality, and has continuously participated in the research and policy
drafting of the government's AI policy. He led the organizing work of the
Governance Forum under World AI Conference 2019 in Shanghai. At the moment,
he is also responsible for the running of the Secretariat of the Expert Advisory WANG Yingchun
Committee of the National New-generation AI Innovation and Development Pilot Zone in Shanghai.
AI research institutions, enterprises and application
scenarios are mainly located in cities across the
globe, thus cities are playing a prominent role in AI’s
development. As China’s largest economic center,
Shanghai is speeding up its march to become a
global AI highland in terms of research and
development of technology, application
demonstration, institutional supports and talents
attraction. Echoing “Better City, Better life”, the
theme of 2010 Shanghai World Expo, we need to
seek paths and solutions for harmonious
coexistence of human-AI to achieve the goal of
“Better AI, Better City, Better Life”in the age of
artificial intelligence.
Cities provide an experimental platform to promote
AI development in a healthy way. In 2019, the
Ministry of Science and Technology has issued the
“guidelines for the construction of the national new
generation artificial intelligence innovation and
development pilot zone”, which stress to take the
city as the main carrier to explore replicable and
generalizable experiences, and to lead the healthy
development of artificial intelligence in China. On
May 25, 2019,the Ministry of Science and Technology
and the government of Shanghai Municipality jointly
launched the“ National New Generation of AI
Innovation and Development Pilot Zone” in
Shanghai. The pilot zone takes AI governance as one
of the four core elements to promote scientific and
technological innovation and institutional innovation.
On the one hand, it supports to research and develop
responsible artificial intelligence, and to encourage
innovation in artificial intelligence applied in
Shanghai; on the other hand, it strengthens the
exploration in laws and regulations, ethical norms,
safety supervision and other aspects of artificial
intelligence, and contribute “Shanghai experience”
in the artificial intelligence development in China
and around the world. A focal concern is on how to
provide citizens with higher quality medical care,
more convenient transportation and safer and
efficient urban services based on artificial
intelligence technology.
Openness and collaboration are crucial in achieving
Better AI. Shanghai has hosted the World Artificial
Intelligence Conference for two years. In his
congratulatory letter to World AI Conference 2018,
Shanghai, President Xi Jinping pointed out that "we
need to deepen cooperation and jointly explore the
emerging issues of artificial intelligence”. We
organized the Governance Forum of World AI
Conference 2019. At the Forum, dozens of
international experts and participants from more
than 200 government and industry attended. The
involvement of global experts enhanced mutual
understanding through open exchanges and has
reached consensuses on some important issues. At
the forum, the “Chinese Young
Scientists’Declaration on the Governance and
Innovation of Artificial Intelligence Shanghai, 2019
”was issued. It raised four major responsibilities to
be followed in the development of artificial
intelligence, namely, “Ethical Responsibility”,“
Safety Responsibility”, “Legal Responsibility” and
“Social Responsibility”. Taking the forum as a
starting pojnt, we hope to promote the formation of
a global community of AI governance research and
collaboration. We also aim to shed light on
governance approaches.
Cities can play a vital role in the formation of global
AI governance system. This system may consist of
multi-subsystem programs and regional-programs
on the basis of respecting cultural and institutional
diversity. We need to ensure that these subsystems
and regional programs are globally compatible and
open-minded, and figure out the specific
mechanisms for benefit sharing and security. Cities
around the world can have more in-depth exchanges
and cooperation on these aspects, and we have
carried out relevant work in 2019.
We participated in the researching work for the
construction plan of the Shanghai pilot zone, and are
preparing to build Shanghai Academy of Artificial
Intelligence Governance. We have gathered
multi-disciplinary experts to work on systematic
research on the ethical framework of artificial
general intelligence and relevant legal, and social
issues of narrow artificial intelligence. We hope to
continue to work with friends at home and abroad on
the path and scheme of harmonious coexistence of
human and artificial intelligence.Better AI, Better City, Better Life
By WANG Yingchun
93 94 |
1de73dd4-68be-4989-9e48-7240d4480a19 | trentmkelly/LessWrong-43k | LessWrong | A Universal Emergent
Decomposition of Retrieval Tasks in Language Models
This work was done as a Master's thesis project at Conjecture, independent from the primary agenda of the organization. Paper available here, thesis here.
Over the past months I (Alexandre) — with the help of Eric — have been working on a new approach to interpretability of language models (LMs). In the search for the units of interpretability, I decided to zoom out instead of zooming in. I focused on careful dataset design and causal intervention at a macro-level (i.e. scale of layers).
My goal has been to find out if there are such things as “organs”[1] in LMs. In other words, are there macroscopic universal motifs, coarse-grained internal structures corresponding to a function that would generalize across models and domains?
I think I found an example of universal macroscopic motifs!
Our paper suggests that the information flow inside Transformers can be decomposed cleanly at a macroscopic level. This gives hope that we could design safety applications to know what models are thinking or intervene on their mechanisms without the need to fully understand their internal computations.
In this post, we give an overview of the results and compare them with two recent works that also study high-level information flow in LMs. We discuss the respective setups, key differences, and the general picture they paint when taken together.
Executive summary of the paper
Methods
* We introduce ORION, a collection of carefully crafted retrieval tasks that offer token-level control and include 6 domains. Prompts in ORION are composed of a request (e.g. a question) asking to retrieve an entity (e.g. a character) from a context (e.g. a story).
* We can understand the high-level processing happening at the last token position of an ORION prompt:
* Middle layers at the last token position process the request.
* Late layers take the representation of the request from early layers and retrieve the correct entity from the context.
* This division is clear: using activati |
eb1f3a8c-5790-4f7a-9b56-dc80bb906f0d | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Simple Way to Prevent Power-Seeking AI
The most concerning part of AGI is that it may have power-seeking tendencies. We can loosely define a powerful state to be a state that is useful to be in regardless of the reward function. A promising approach to AI safety is to make the RL process "interpretable" enough so that you can easily see which states the model regards as "powerful".
The dumbest possible example of this is to train an RL agent on many tasks by learning a task-specific head for each task. Then, one can average the q-value across the different tasks -- if this number is high, the state is "powerful". Obviously, one can do much more sophisticated things. My point is that this is a fairly tractable research direction.
Then, when you sample from the agent during train/test time, avoid the powerful actions.
Now, suppose the RL agent was fully sentient and knew you were doing the above. Does it have an incentive to deceive you -- e.g. make powerful actions seem not powerful?
Well, suppose the RL agent was trained via Q-learning and it found some powerful actions that the above metric didn't recognize as powerful. Then, it would just update the Q value across different tasks. The RL agent knows that this update will allow us to recognize that this state is powerful, thus, we'll ensure this state isn't visited again. However, the RL agent is simply trying to accurately estimate the discounted expected reward given that we are in state s and are taking action a. The future probability of ending up at (s,a) isn't relevant to this -- thus, there's no incentive to be deceptive.
But let's say the RL agent was trained via policy gradients. Now, it's trying to recommend the optimal action that maximizes the reward. Let's say this agent found a powerful action that the metric didn't recognize as powerful. The agent also knows that if it increases the probability of this action across different tasks, we are less likely to choose this action -- since we can recognize that it's a powerful action. The RL agent can maximize its reward by being deceptive and hiding that the action is powerful. |
0413d53a-f784-4035-b8b7-d02abe5dd840 | trentmkelly/LessWrong-43k | LessWrong | What does Yann LeCun think about AGI? A summary of his talk, "Mathematical Obstacles on the Way to Human-Level AI"
This is a summary of Yann LeCun's talk "Mathematical Obstacles on the Way to Human-Level AI". I've tried to make it more accessible to people who are familiar with basic AI concepts, but not the level of maths Yann presents. You can watch the original talk on YouTube.
I disagree with Yann, but I have tried to represent Yann's arguments as faithfully as possible. I think understanding people who differ in opinion to you is incredibly important for thinking properly about things.
In an appendix on my blog I include Gemini 2.5 Pro's analysis of my summary. In short:
> The summary correctly identifies the core arguments, uses LeCun's terminology [...], and reflects the overall tone and conclusions of the talk
Why Yann LeCun thinks LLMs will not scale to AGI
LLMs use deep learning for base and fine-tuning, which is sample inefficient (need to see many examples before learning things). Humans and animals learn from way fewer samples.
LeCun's slide
LLMs are primarily trained on text, which doesn't carry as much raw data as other formats. To get AGI we need to train models on sensory inputs (e.g. videos). Humans see more data when you measure it in bits.
LeCun's slide
The setup for LLMs has them predict the next token. But this means they are predicting in a space with exponentially many options, of which only one is correct. This means they are almost always incorrect. And similarly for images/videos, they have so many options and the world is only partially predictable, that it's not feasible for the model to be correct.
My visualisation
LeCun's slides
AI systems work the same amount of time on short problems and hard problems. But actually they should work longer on hard problems.
* He thinks chain of thought is a trick that he implies isn't really solving it. (video timestamp: 19:33)
* Instead thinks we should have AI systems be using optimization/search algorithms against an objective when posed with a problem, rather than using feed forward neural n |
14a38459-91b1-4b8a-a045-2c5b08821721 | trentmkelly/LessWrong-43k | LessWrong | Counterarguments to the basic AI x-risk case
(Crossposted from AI Impacts Blog)
This is going to be a list of holes I see in the basic argument for existential risk from superhuman AI systems1.
To start, here’s an outline of what I take to be the basic case2:
I. If superhuman AI systems are built, any given system is likely to be ‘goal-directed’
Reasons to expect this:
1. Goal-directed behavior is likely to be valuable, e.g. economically.
2. Goal-directed entities may tend to arise from machine learning training processes not intending to create them (at least via the methods that are likely to be used).
3. ‘Coherence arguments’ may imply that systems with some goal-directedness will become more strongly goal-directed over time.
II. If goal-directed superhuman AI systems are built, their desired outcomes will probably be about as bad as an empty universe by human lights
Reasons to expect this:
1. Finding useful goals that aren’t extinction-level bad appears to be hard: we don’t have a way to usefully point at human goals, and divergences from human goals seem likely to produce goals that are in intense conflict with human goals, due to a) most goals producing convergent incentives for controlling everything, and b) value being ‘fragile’, such that an entity with ‘similar’ values will generally create a future of virtually no value.
2. Finding goals that are extinction-level bad and temporarily useful appears to be easy: for example, advanced AI with the sole objective ‘maximize company revenue’ might profit said company for a time before gathering the influence and wherewithal to pursue the goal in ways that blatantly harm society.
3. Even if humanity found acceptable goals, giving a powerful AI system any specific goals appears to be hard. We don’t know of any procedure to do it, and we have theoretical reasons to expect that AI systems produced through machine learning training will generally end up with goals other than those they were trained according to. Randomly aberrant goals resulting |
dddf4ec6-f774-42de-8168-1550cfe0aebf | trentmkelly/LessWrong-43k | LessWrong | Research on unconscious visual processing
There is a new paper out by Sanguinetti, Allen, and Peterson, The Ground Side of an Object: Perceived as Shapeless yet Processed for Semantics. In it, the authors conduct a series of experiments to try to answer the question of how the brain separates background from foreground in visual processing. I found it interesting, so I thought I'd share. The human visual system is incredibly complex and we still have no clear idea how it does a lot of the things it does.
The experimental protocol was as follows:
> The stimuli were 120 small, mirror-symmetric, enclosed white silhouettes (Trujillo, Allen, Schnyer, & Peterson, 2010). Of these, 40 portrayed meaningful name-able objects (animals, plants, symbols) inside their borders and suggested only meaningless novel objects on the ground side of their borders. The remaining 80 silhouettes depicted meaningless novel objects (objects not encountered previously) inside their borders. Of these, 40 suggested portions of nameable meaningful objects on the ground side of their borders. Note, however, that participants were not aware of the meaningful objects suggested on the ground side of these silhouettes. The remaining 40 novel silhouettes suggested novel shapes on both sides of their borders."
>
> Stimuli were presented on a 20-in. CRT monitor 90 cm from the participants using DMDX software. Participants’ heads were unrestrained. Their task was to classify the silhouettes as depicting real-world or novel objects. Responses were made via button press; assignment of the responses to the two response buttons was random.
They then recorded the EEG signals from the participants and found something surprising: When the background was meaningful, the subject's brain waves produced the same signatures as would be expected when conscious awareness had taken place (called 'N300' and 'N400' signatures because they occur 300 and 400 ms after presentation of the stimulus), even if the subjects did not report percieving anything meaningf |
7768959b-017a-4252-b4cc-fcdbedc71f7e | trentmkelly/LessWrong-43k | LessWrong | [LQ] Some Thoughts on Messaging Around AI Risk
Epistemic Status
This was originally written for Twitter and thus is predictably low quality (hence the "[LQ]" tag).
It has only been minimally edited (if at all).
----------------------------------------
Introduction
Some thoughts on messaging around alignment with respect to advanced AI systems
A 🧵
Terminology
* SSI: strongly superhuman intelligence
* ASI: AI with decisive strategic advantage ("superintelligence")
* "Decisive strategic advantage": A vantage point from which an actor can unilaterally determine future trajectory of earth originating intelligent life.
Context
Misaligned ASI poses a credible existential threat. Few things in the world actually offer a genuine threat of human extinction. Even global thermonuclear war might not cause it.
The fundamentally different nature of AI risks...
That we have a competent entity that is optimising at cross-purposes with human welfare.
One which might find the disempowerment of humans to be instrumentally beneficial or for whom humans might be obstacles (e.g. we are competing with it for access to the earth's resources).
An entity that would actively seek to thwart us if we tried to neutralise it. Nuclear warheads wouldn't try to stop us from disarming them.
Pandemics might be construed as seeking to continue their existence, but they aren't competent optimisers. They can't plan or strategise. They can't persuade individual humans or navigate the complexities of human institutions.
That's not a risk scenario that is posed by any other advanced technology we've previously developed. Killing all humans is really hard. Especially if we actually try for existential security.
Somewhere like New Zealand could be locked down to protect against a superpandemic, and might be spared in a nuclear holocaust. Nuclear Winter is pretty hard to trigger, and it's unlikely that literally every urban centre in the world will be hit.
Global thermonuclear war may very well trigger civilisational collapse, a |
dae47c87-10a9-4dba-b6f0-ff54a381cbdd | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | How to build a safe advanced AI (Evan Hubinger) | What's up in AI safety? (Asya Bergal)
hello and welcome to the session on how
to build a safe advanced ai
with evan hubinger and asia burgle i'm
anjali and i'll be the emcee for this
section
we'll start with a 30-minute talk by
evan followed by a 15-minute talk by
asia
then we'll move on to a live q a session
where they'll respond to some of your
questions
you can submit questions using the box
to the right hand side of this video
you can also vote for your favorite
questions to push them up higher on this
list
now i'd like to introduce our speakers
for the session
evan hubinger is a research fellow at
miri working on solving
inner alignment for iterated
amplification prior to joining mary
evan was an ai safety research intern at
open ai
an author on risks from learned
optimization and advanced machine
learning systems
a miri intern designed the functional
programming language coconut
and did software engineering at google
yelp and ripple
evan has a bs in math and computer
science from harvey mudd college
here's evan
hello all my name is evan i am a
research fellow at the machine
intelligence research institute
i'm going to be talking about how to
build a safe advanced ai
or well so not quite so i don't
know the solution to ai safety but i am
going to be talking about how we think
we
might build a safe advanced app so
there's a lot of
proposals out there sort of different
people working in the field and
different possible
ways that we might be able to build a
safe
advanced ai is you know very powerful
and
and in fact so we're doing what we want
this is what we're trying to achieve
in as safety there's a lot of different
people with different ideas for how we
might go about doing that so i'm going
to be trying to talk about
some of those ideas go through some of
those different possibilities
for how we might in fact build a safe
advanced layout
so the first thing that i want to go
over is
what does a proposal for building safe
advanced ai need
what are the sort of necessary
components that any proposal sort of
needs to address
i'm going to go over four so the first
one that i want to talk about is outer
alignment outer alignment is
fundamentally the question of
if we are training a model and in the
standard machine learning paradigm
uh when we sort of produce an ai we have
some objective some
loss function reward function they were
trying to
produce a model some sort of neural
network or whatever
to achieve and that sort of objective to
sort of minimize that loss maximize that
and our alignment is the question of is
the thing we're trying to train it on
is that objective the loss function
whatever is it a line
if the thing was really trying to
achieve that loss function or whatever
would we like the result would that be a
good thing
a standard sort of problem that falls
under this heading that you might be
familiar with
is the sort of paper clip maximizer
problem and by giving ai
the uh sort of task of maximizing the
paper clip output of my paper
factory it might sort of as a result
just sort of tile the world with tons of
paper clip factories
producing lots of paperclips this is a
really good way to make a bunch of
paperclips
and so we would say that the objective
of producing paperclips is not
outer alignment all right so now the
second question
is inner alignment inner alignment is
the sort of second piece that we need
when we're talking about building
ai via machine learning which is how do
we actually
ensure that the training procedure
results
in a model which is actually trying to
do the thing
the objective that we're training on we
have this sort of classically we do this
gradient descent process
we try to find a model which is trying
to you know achieve some loss function
reward function
and the inner alignment is the question
of did that work did we actually get a
model which is trying to do the right
thing
um and so these are sort of two
components of alignment the two
components of how do we ensure that
this sort of uh proposal actually
produces a model which is doing the
right
or at least trying to do the right thing
and then we sort of have two components
for competitiveness
uh where competitiveness is sort of more
about is the model
is the sort of approach actually one
which would be feasible to implement and
worthwhile
so first we have training
competitiveness which is how hard is
this proposal to do uh if you're sort of
deep mind
and you're trying to pitch this proposal
to the mind where do you mind you
have the resources to do this
efficiently and effectively would this
be a thing which like
current tools and like our ability like
what we predict
maybe in the future to be able to
produce
like all of the different possible uh
sort of machine learning tools you might
have in the future will they be capable
of actually doing this
and then we have performance
competitiveness which is the other sort
of second component of competitiveness
that i want to talk about
which is how effective uh how powerful
is the ai system that results from this
proposal if it actually works
if it all goes through if we like in
fact produce
a sort of powerful ai system how
powerful is it would it be able to do
all of the tasks that we might want an
ai to be able to do
um it's sort of not useful if we just
sort of produce an ai that can't
actually do anything and so even
even if that ai doesn't kill us even if
it's a line we still want it to like
actually be able to do things in the
world that's why we're building an ai in
the first place
all right so these are sort of four
basic components
now it's important to note that i don't
have any of the answers here i don't
claim to sort of
know the answer for any of these
proposals about whether it actually uh
successfully addresses each of these
components it's just a list of various
things to think about and to consider
when you're sort of looking at any
individual proposal for how to address
the sort of overall
ai safety all right
so with that in mind here are the four
proposals that we'll be talking about in
this talk
um it's worth noting that there are a
bunch of other proposals that i'm not
going to talk about
but that if you're interested in them
you can find them in the post which i
sort of
at the bottom you can see that the title
of it uh you can sort of find that i
think it should be linked along with
this talk
all right so we're going to start with
an approach called imitative
amplification
all right so to understand imaginative
amplification i first have to sort of
have a bit of a digestion
and try to talk about a couple of
important concepts which are going to be
really useful when we talk about
immaterial application
the first one is something called hch so
hch is a recursive acronym
which actually which stands for humans
consulting hch
so a little bit weird uh i'll explain
why that makes sense and how that works
but we'll start with a very simple setup
which is i have a human and that human
takes in a question
and produces an answer this is sort of
you know very simple setup it's just a
human answering questions
uh and now i'm going to add something
i'm going to allow that human
to talk to two other humans when
producing their answer so we sort of
have a group of humans
where there's the one human which is
trying to sort of uh sort of the leader
is
in charge of producing the answer but
they get to ask questions to and consult
with to other people that are sort of
helping them out
that's this is great and this is sort of
you know how a group might sort of
answer questions
but now i want to sort of rehearse this
procedure and i want to give
each of those humans that the original
human is consulting with
access to their own humans to consult
and so we're sort of building up this
tree we have a human at the top
who's consulting with some other people
and they're each consulting with
other people um and then keep going
so hch is the sort of theoretical object
that is the limit of this recursion
infinitely a sort of if we just
keep allowing the human to consult other
humans and so on
uh to produce this sort of infinite tree
of humans
we sort of recall this result this sort
of the whole thing
hch and this is a sort of theoretical
object but it's going to be important to
our analysis later on
all right now the second thing which i
need to sort of explain is amplification
so what is amplification
so i'm going to start with a similar
setup i have a human they get a question
and they produce an answer but now uh
similarly pre to previously i want to
have the human consult with
uh something else but instead of
consulting with a human this time i want
them to consult with a model some ai
which we would call m and so the human
takes in some question
they get to sort of maybe you know type
some things out to an ai and get some
answers back
and then with the ability to consult
this ai they produce an
answer okay and i want to call this
sort of box here where you have the
human consulting model
amp m which is to say the amplification
operator applied to the model m where
the amplification operator is the
procedure
that takes the model and sees what
happens when a human
is given access to that model and the
idea behind this
is that it sort of increases amplifies
the capabilities of the model n
because you know what it's not just what
m can do on its own
it's what multiple copies of m can do
when sort of organized and deployed by a
human
and so this is a sort of key piece of
imagery application is this sort of
amplification
operator all right so now what is
imaginative amplification what does it
do
well fundamentally we want to train our
model
m to imitate that is sort of copy the
behavior of
the amplified version of that very same
m
so you can sort of see this sort of
happening on the on the right
we have an initial model m zero
and zero is amplified into this model
amp m naught sort of green arrow you can
read as amplification
and this new amp am not recall is a
human
consulting m naught but then we take
m naught and we train it to match the
behavior
of the hue of the sort of human
consulting itself that's this gray era
where the sort of
cyan arrow is the imitation loss and
this produces a new model m1
we then amplify this new model m1 and
repeat the procedure we train
m1 to copy the amplified version of
itself
uh and this is a little bit weird but
we'll try to unroll this and understand
what's going on in all of it
but there's another important piece here
as well which is in addition to training
to sort of
mimic uh the amplified model we also
want to allow the amplified model to
sort of inspect the training process
and look at the model and make sure it's
training in the right way
um and so this is the sort of oversight
that we want to have
in addition that should help us in terms
of uh some inner alignment stuff we'll
talk about that
as well all right so first let's try to
address
what is the sort of limiting behavior of
this what is happening with this sort of
weird recursive imitation
so if we take a look at this sort of
picture we have these sequence of models
and they're trained
to sort of mimic the amplified versions
uh we'll try to imagine what happens in
the limit what happens if each of these
training processes is sort of perfect
and so if they're perfect then we can
just sort of equate
well if m1 is trained to approximate the
amplified version m
well if we imagine sort of in the limit
we get the sort of perfect approximation
m1 is just
equal to amplified m-dot and then if we
have this we can sort of expand we can
sort of substitute
in amp and not everywhere where we see
m1 and similarly for m2
and we get that sort of m3 after we've
sort of done this procedure three times
is equivalent to the amplified version
of the amplified version of the
amplified version of not
what does that even mean so let's let's
sort of expand that to try to understand
what we're looking at after we've done
three iterations
so we're trying to understand what is m3
we can see that
first of all we know m3 is there's an
amplification operator at the top
and if we recall the amplification
operator just refers to a human
consulting whatever's inside
and so whatever is inside is amp amp m
naught so we have h
consulting amp amp and then we can sort
of unroll this further what is amp amp
m not well that's just a human
consulting amp ethanol
uh and we can sort of unroll this again
and we get that after three iterations
we've built up sort of three levels of
what you'll recall is the hch
tree so we have a human consulting two
humans and those humans are consulting
two other humans
that are then consulting this sort of
initial model
and the idea is that if we sort of do
this uh sort of in the limit if we keep
doing this procedure over and over again
we should get closer and closer to the
theoretical object this hch tree
because we're getting closer and closer
to sort of limit of many many copies the
sort of infinite tree depth
of sort of humans consulting humans
insulting humans and so on
all right so this is sort of the goal of
imitating amputation is to get to this
hch
so now we can try to understand how does
the application score on some of these
different
properties that we're trying to sort of
uh gauge which of these proposals
so for outer alignment and performance
competitiveness because outer alignment
performance competitiveness are about
sort of what would actually happen at
the end
what is the sort of uh if we actually
managed to get a model which was doing
the right thing
would it sort of be aligned would it be
competitive because the procedure is
trying to limit to hcg we can try to
answer the question is hch align is hch
competitive
because uh if they are then that sort of
gives us a sort of
uh upper bound a sort of goal of well
if the thing we're shooting for at least
is a line competitive then at least we
have some degree of outer alignment and
performance competitiveness
um and there's lots of reasons you know
it's a very complex question trying to
understand would this sort of big
massive tree of humans be aligned would
it be powerful would it be able to do
sort of things that we want um and this
is sort of a big open question that sort
of makes up a lot of the outer alignment
and performance competitiveness concerns
then we also have inner alignment and
training dependentness concerns
uh for inner alignment uh if you recall
we're trained
not just on imitation but also on this
oversight
so the goal here is to try to have it so
that the
uh the overseer which is the amplified
version of the model is able to sort of
steer the training process away from the
sort of domains that we're really
concerned about to make sure that it
doesn't
it sort of is in fact learning right
because we don't necessarily trust that
if we just do grading descent it's going
to do the right thing
and then for training competitiveness
we're trying to understand well so
fundamentally this is a language
modeling task and so we want to
understand how competitive our machine
learning tools in the future are going
to be
at solving these sorts of complex
language modeling tasks and we have some
evidence that they are pretty good at
that because we have things like gbt3
currently
that are very successful all right
and now uh who's working on this so
people who are currently working on
imagery complication so paul cristiano
sort of created the idea of application
um
and he is a researcher at open ai i work
on the application a lot i
uh work at miri though i also used to
work at open ai
also the rest of the opening eye
reflection team uh that sort of works
under polit opening
as well as ought which is a sort of
another organization that does sort of
more human experiments trying to
understand things like
you know what would hch be like by
looking at current groups of humans and
how they can work
all right so that's sort of number one
that's imitative application
uh and now we'll look at number two so
number two is ai safety via debate
so what does aisa t via domain so asft
be a debate
uh the basic idea we have a model
and we have a copy of that model we'll
call the sort of first one alice and the
second one bob
and we want to train those models to win
debates
against each other in front of a human
judge so how does this work so if you
take a look at the sort of branching
argument triona right
we start with some question then alice
gets to choose how to respond to that
question
then bob gets to sort of choose how to
respond to alice's uh sort of
answer so bob might sort of refute
alice's answer provide
some sort of alternative um and then
alice gets to respond to bob
and then eventually we sort of get to
some leaf and the human decides did
bob when did alice run um
and the idea here is that we're trying
to train these
models to sort of produce the most sort
of relevant honest truthful information
because if they do so the hope is that
well the human will sort of choose the
model that has done the best job that
has been the most truthful most helpful
most honest
and will incentivize that sort of
helpful honest behavior
via this debate process now there's a
couple of other things that we can do
here that are nice
so so uh one thing that is we can't do
in a sort of human debate
setting for example but that we can do
when we're doing this ai debate is we
can add this sort of cross-examination
so we can allow uh bob for example to
look at what alice would have done we
can
allow bob to make a copy of alice at a
previous point in the debate
and ask alice questions be like uh maybe
alice for example said something was
kind of confusing maybe on purpose
alice said something confusing just sort
of throw bob off bob can go back and ask
alice copy
a copy of alice a question to sort of
clarify and then show that answer to the
human judge
importantly another sort of component of
this is not only can sort of bob
ask alice questions bob can also look
internally using transparency tools or
other mechanisms
to sort of see is alice sort of doing
the right thing and maybe
bob can sort of look inside alice and
show the human evidence that alice is
trying to trick the human
uh in which case then the human might
sort of kick bob and so this can help
incentivize the models to stay honest
uh because they sort of uh can be
inspected by the other model
all right uh and so now we can again
sort of do some analysis try to answer
the question
uh you know how uh successful aligned
competitive would this sort of approach
b uh if we think about outer alignment
an important question like i was saying
previously is the question of is
honest behavior incentivizing the limit
uh importantly here
is we're specifically trying to think
about the first player which in the
previous example was
alex because uh if
honesty is actually the sort of dominant
strategy if sort of
doing the sort of honest helpful answer
is the thing which is most likely to
cause the human judge to sort of choose
you
then we should hope that in the limit
we'll end up with models
that are trying to uh actually sort of
produce the most honest answer because
that's the thing that's most likely
going to be
but if the model is going to do that
then we can just ask the model what its
answer would be and we don't have to run
through the whole debate when we
actually want to get something out at
the end and so
we want to ensure that the sort of first
debater would produce a truthful answer
all right and then for uh inner
alignment
uh we're sort of thinking about this
question of are the debaters able to use
these transparency tools to inspect and
detect
sort of bad things uh happening in the
other creator
um for training competitiveness uh it's
interesting to think about this is in
many ways a sort of self-play
on a sort of game setting in a very
similar way to something like alpha zero
uh was sort of solving a game via self
play and so there's a lot of previous
examples of ways which machine learning
can successfully tackle this sort of
problem
and so we might hope that this is the
sort of thing that we'll be able to sort
of
uh deal with our machine learning tools
in the future and the performance
competitiveness there's this question of
well
how useful would a sort of superhuman
debate ai would be able
to you know answer the the sorts of
questions that we need to answer and
that's obviously sort of off debate
um importantly if uh honesty isn't
incentivizing the limit then it also
sort of might not be performance
competitive
because uh it might sort of just give
you bad answers
all right then who's working on it uh so
a safety via debate is due to jeffrey
irving who
is currently at deepmind he used to be
an opening uh paul cristiano who's the
opening also works on debate uh quite a
lot
as well as the rest of the opi
reflection team which is sort of a team
that is sort of managed by paul
as well as och i mentioned previously
all right uh next up we have recursive
reward modeling
so what is recursive word modeling so i
want to start
with the sort of image in the top right
where you can see the sort of user
reward model
agent and environment this is describing
the basic reward modeling process
the way that this works is we have some
user uh we can imagine it's a human
in this setting it's it's not going to
be just a human we can imagine it's a
human
and this human sort of is trying to give
some feedback
some sort of information about what it
wants this
is fed into a reward model which gets
trained to try to predict what the human
wants
and then we train an agent to try to
maximize this reward well to achieve the
prediction of what the human wants
then we put this asian environment we
let the agent sort of run around and do
things
and then the human sort of looks at what
the agent is doing and gives some
feedback it's like i like this thing
that the agent was doing i don't like
this other thing that the agent was
doing
then we put that feedback back into the
reward models to improve it and then we
get a better agent and so on
now importantly is that that's the
reward modeling process
but here we want to talk about recursive
reward online so what does that mean
well
we want to take that basic procedure and
recursive so instead of just having a
human be the user
you want to instead have a human
consulting
an agent that was itself trained via
another
reward modeling process and so that's
where you can see these sorts of
multiple loops on the sort of
user on the right it's a human but it's
the human consulting another agent
and in many ways this mirrors the
amplification setup from previously
so the top picture is sort of identical
to the bottom picture
where you can think about what we
actually what we're doing here is we
have some model
m naught we amplify that model to amp m
naught which is the human consulting m
naught
and then we do reward modeling with
the sort of human consulting m naught as
the user
and that produces a new agent which
we'll call m1
we then amplify the new agent m1 produce
sort of
uh amp m1 which is human consulting m1
use that as the new user in a reward
modeling process
get an m2 and so on and so these are
sort of these two pictures or two
different ways of looking at
what is fundamentally the same procedure
which is this recursive word modeling
procedure
and then in addition we can add on top
of this oversight so similarly to what
we had in amplification
we can have the amplified version of the
model inspect
the training process and make sure that
it's sort of doing the right thing
all right and now a question that we
want to ask is what is the limit of this
procedure
so if we think about uh what's happening
when we do
recursive reward modeling where it's
similar to amplification and that we
sort of have this tree that we can
unroll
but instead of just being a tree of
humans uh because the sort of models
were just trained to
mimic the humans and so they were sort
of identical to the humans in the limit
now the models aren't trained to mimic
the humans they're trained to sort of
maximally achieve the reward obtained
by doing reward model economic humans
and so now we have a sort of the
limiting tree as a human
consulting models which are trained to
maximize the reward of
uh sort of obtained from doing reward
modeling on a human consulting other
models they were trained to maximize the
reward from doing reward modeling on
a human consulting and so on and so this
is a sort of reward modeling tree
recursive word modeling tree
um that is so the limit of this person
and so now we can ask questions
uh sort of trying to analyze this
procedure similarly previously
for outer alignment we can sort of ask
is the for both underlying and
performance competitiveness we're going
to be talking about sort of
these properties of the tree uh you know
is it aligned
will we sort of like the results of this
tree and is this tree competitive is it
sort of universal is it able to solve
all the different sorts of problems that
we might want to be able to solve
i mean a lot of this doesn't come down
to details of you know is reward
modeling successful and being able to
solve a lot of these sorts of problems
and sort of learn the right things um
and then for inner alignment we're
relying similarly to some sort of
amplification relying on this oversight
where we have this overseer which is
looking at the model during training
and trying to make sure that it's sort
of being trained in the right way
and then for training competitiveness
we're sort of trying to understand the
question how effective would reward
learning be
as a sort of general purpose uh strategy
to sort of do in machine learning
and again this is something that is sort
of been proven to work at least with
current machine learning tools
um in in some settings this is sort of a
common approach that has been used in
the literature
uh but there's obviously still a
question to what extent this will sort
of continue to be true and be a
successful approach
to training machine learning systems in
the future and then we have the question
of who's working on this
so people are working on this um so
young micah who's at deepmind
uh david krueger who uh is at mila
the montreal is sued for learning
algorithms as well as deepmind
he's worked with uh if you might as well
tom everett who's a deep mind
um as well as the sort of rest of deep
mind safety does a lot of work on
sort of this approach or cursor board
all right and then for our last approach
we have microscope ai
so microscope ai is a little bit uh sort
of a different approach
so the basic idea is to train a
predictive model on some data we just
want to train the model to sort of
understand to be able to predict this
data and in addition we want to sort of
be using transparency tools to make sure
it's actually just doing prediction
and then we take this model and we use
transparency tools to understand what it
learned
what sort of abstractions it built what
sort of uh
things it inferred about the causal
structure of the data
about the sort of uh you know all of
these the sort of things that are
necessary to be able to project and
understand the data
and we extract those insights using
transparency tools by looking inside the
model and figuring out what it's doing
and then we use those insights that we
gained by looking inside of the model
to guide human decision making and so
we're sort of keeping the human in the
loop
so chris ola who's the sort of head of
the clarity team
at opening eye has a quote about this
that i think is sort of uh
really sort of useful to think about it
chris is sort of the person who created
the concept of microscopic ai so chris
says
that uh the visualizations and here he's
talking about the sort of neural network
visualizations that he
spends a lot of time working on the
visualizations are a bit like looking
through a telescope just like a
telescope transforms the sky into
something we can see
the neural network transforms the data
into a more accessible form
one learns about the telescope by
observing how it magnifies the night sky
but the really remarkable thing is what
one learns about the stars
similarly visualizing representations
teaches us about neural networks
but to use us just as much perhaps more
about the data itself
and so the idea here is that when we do
visualizations when we try to understand
what neural network is doing
we don't just learn about the neural
network we also learn about
what the neural network knows we learn
about the data we learn about the
abstractions we get ideas that can sort
of help
influence humans i mean this sort of
gives us a feedback loop
where we sort of uh produce better
insights that help improve human
decision making which allows us to build
better ai systems
and so on and sort of keeping the human
in the loop of this sort of uh
self-improvement all right and so now we
can ask the questions
uh sort of similar to previously you
know how aligned independent would this
approach be
and we think about uh outer alignment
it's important to know microscopy
isn't really trying to be outer line
because it's not we're not trying to
have the ai
actually take actions in the world and
so it doesn't need to be the case that
it's
uh objective is sort of one that if it
were trying to take actions according to
it would be a line
but we do still need inalignment because
we want to ensure that the model isn't
going to try to do something really
weird and wacky and crazy something
totally different than what we were
trying to train it for
um and the use of transparency tools to
check the model ensure that it's really
just doing prediction is very helpful
here
in addition we can sort of talk about
training competitiveness training of
animus should be pretty straightforward
we do have lots of
in machine learning currently lots of
sort of training of predictors
um the real sort of sticking point here
is performance competitiveness which is
the question of well
if we actually had a microscope ai if we
were able to use it to sort of improve
human decision making
would that be sufficient for sort of the
economic cases that we might want ai for
um you know if we're we sort of doesn't
actually let us sort of
obviate humans we can't just sort of
replace humans with ais
because we sort of still need a human in
the loop here but that might be
sufficient at least for sort of high
level decision making like sort of ai
ceos and things like that
um even if it's not sufficient for sort
of maybe more low level replacing sort
of all jobs all right
and so who works on microscope ai so i
mentioned chris ola who sort of created
the concept he
is a researcher at open ai and used to
work in google brain uh the sort of rest
of opening eye clarity works on uh sort
of thinks a lot about this stuff as well
as well as
other as well as other people at google
brain so uh including for example shane
carter
all right so that's sort of the
four proposals that different people are
working on and thinking about
and i want to sort of close with an
exercise that i think is sort of useful
for
trying to start thinking like an ai
researcher and a safety researcher
um and sort of really understand uh sort
of
the pros and cons and the trade-offs
here uh think about the question
if you had to recommend one of these
proposals if you were sort of uh you
know giving a recommendation to define
or to open the eye
uh as to like what avenue they should
pursue for trying to build
a sort of safe advanced ai which would
you recommend where would you sort of
steer these
uh these organizations too if you were
sort of giving the recommendation and i
think this is useful as an exercise just
sort of as a
thinking tool because it sort of helps
you think about well
you know what would i sort of you know
if i was actually in the position where
i was sort of giving this as a
recommendation if i had to go to
deepmind and convince them to implement
this
what would be the sort of best thing
that i would sort of lead with that i
would try to get people to focus on
and so i think this is a good sort of
exercise a good thing to think about
i'll sort of leave up the proposals that
we talked about here uh sort of imitate
my application microscope ai request
forward modeling and aic tv
um and again i'll say if you're
interested in going over even more
proposals
there's a sort of larger document that
includes a bunch of additional proposals
which you can access
um you can sort of see the information
on the screen you just sort of google
for it or i think there should be a link
along with this presentation all right
thank you so much
uh and we can go to i can try to answer
some questions after the talk
thanks so much for that great talk evan
we'll now hear from asia burgle who
is a researcher at ai impacts a writer
for the ai alignment newsletter and a
fund manager for the long-term future
fund
she has a ba in computer science from
mit since graduating she's worked as a
trader and software engineer for alameda
research and as a research analyst at
open philanthropy
hi everybody i'm going to be giving a
talk which i call what's up in
ai safety my name is asia burgle
i do a lot of different stuff one of the
things that i do
is i write for this newsletter called
the a alignment newsletter
which summarizes recent work in a
alignment
and i thought in the spirit of this
newsletter i could share in this talk
some recent alignment work that i think
is cool
the work that i share is going to be
biased for being recent so
in the last year or so um it's going to
be biased for stuff that i happen to
know about
and i'm vaguely going to try to cover a
bunch of different places that do a
alignment work
i want to be clear that i'm not
intending this talk
to be representative of alignment work
as a whole
i'm not selecting for what i think is
the most important work or anything like
that i'm really just hoping to give sort
of a flavor
of what alignment work looks like
recently
so starting off with stuff at openai
uh chris ola earlier this year released
an update on some work he's been doing
on interpretability
so interpretability generally is
basically this property that we'd like
to have
where we'd like to be able to know
what's going on inside of our neural
networks
generally neural networks are modeled as
black boxes
but we'd like to know what's happening
inside of them because then we could
verify
that they aren't doing things that are
wrong or bad
so in this work chris ola basically
tries and succeeds
at decomposing neural networks into
their constituent pieces
where those pieces are individual
neurons and their functionality
and the structure is composed of
individual neurons which he calls
circuits
so this picture is sort of showing one
of these decompositions
a neural network that's trying to detect
a car is decomposed into its constituent
parts which
detect windows a car body and wheels of
that car
and one cool thing that chris postulates
uh sort of doing this work
is that the insights that you get you
know looking at the structure of one
neural network
actually transferred to other neural
networks so we should expect
neural networks especially once they're
doing similar things to have very
similar structures
and this sort of fact is actually a
really just encouraging fact for
interpretability as a whole because it
means you know we have some hope of
understanding
future neural networks without having to
do you know all of the interpretability
work from scratch
um so yeah i think this is very cool
work
uh elsewhere in open ai beth barnes
released an update on progress on ai
safety via debate
um so debate is this proposed mechanism
for being able to oversee and evaluate
future ais
so we have this problem if we do want to
oversee and evaluate future ais
where humans are going to be
significantly less smart and
significantly less fast than those ais
um so we don't really have the time or
brain power
to check every single move that they do
to make sure you know it's not bad
they're not trying to trick us or being
unhelpful
so the idea behind debate is you know
maybe one way we can hope to try to
evaluate them
is if we actually have a group of ais
where other ai's job is to try and you
know poke holes in
or otherwise identify the failures um or
wrongdoings of other ais
so it's kind of unclear what this high
level mechanism would actually look like
and whether it would work
and one way to try to get it whether it
would work is to try to look at an
analogous case in humans
um so the analogous case in humans maybe
looks like
you know we have a non-expert human that
would like to evaluate and oversee
the behavior of expert humans so one way
we can get at that is to try to
have the expert humans debate each other
you know put holes in their own
arguments
um and see if the non-expert humor see
if the non-expert human comes away with
that
um with sort of the right conclusion
about whatever question they're debating
about
um so beth barnes has been basically
doing empirical work
testing this mechanism um she's been
running debates
she's been trying to break those debates
by having you know the experts do weird
stuff that might trick the non-expert
and then she's been trying to design new
mechanisms to make it so that
the incentives of the experts are such
that you know the non-expert can't be
fooled
or otherwise misled so yeah very cool
work from open ai is still in progress
ask for beth if you want more updates
other word from deepmind there's a new
paper that victoria krakovna released
along with some other people
called avoiding side effects by
considering future tasks um so one thing
we would like ais not to do
is we would not like them to have
catastrophic side effects in pursuit of
their goals
so you know if you tell an ai that you
want it to get you a jar of peanut
butter
you would like it to do that in a very
chill way not by you know destroying
supermarkets or something like that
um but it's actually kind of difficult
to know
how to specify in the general case that
you don't want your ai to do random bad
things
um the work sort of trying to do this
often goes by the heading of impact
measures
uh but the idea in this paper is you
know one way that maybe we can specify
this in the general case
is by rewarding the ai if it's still
possible to complete
some future tasks after it takes
whatever action it takes
um so the idea is you know if after
getting you that jar of peanut butter
you know the ai is still able to do a
bunch of other things in the world and
we can hope that maybe it hasn't messed
up the world too badly
so yeah please go look at the paper if
you want more detail on this i'm
definitely not doing it justice
but yeah very cool recent work from
victoria kharkovna
next i wanted to mention a paper from
the machine intelligence research
institute
written by evan hubinger called risks
from learned optimization
this paper itself actually isn't as
recent as the other work in this talk
which most of the other work is from
2020 this is from 2019
but i wanted to mention it one because i
think it really sort of formalizes and
pins down
a lot of ideas that have been floating
around the a safety community for a
while
uh and two because i think there's sort
of been a lot of follow-up discussion
from this
all over the last year it's definitely a
very live topic in a safety communities
so the basic idea behind this paper i
think can best be explained
via analogy to evolution so evolution
is this optimizing process in the course
of optimizing for genetic fitness
it produced humans and humans are
themselves optimizers
that might be optimizing for goals that
are not genetic fitness so you know
maybe they're optimizing for
getting food maybe they're optimizing
for beauty or
truth or love or something that's not
just you know
reproducing as much as possible and
spreading their genes
so the idea behind this paper is you
know similar to how humans are sort of
now
optimizing for things outside of what
evolution originally you know cared
about
um we could expect machine learning
systems to do a similar thing
so you know machine learning systems are
trained using gradient descent
which is sort of this outer optimizer
and then inside of them we have this
neural network
that neural network could be doing a lot
of things one of the things it could be
doing
is itself acting as an optimizer and
that optimizer
might evolve to have goals outside of
the original goals that are specified
so evan calls sort of potential failures
from that optimization mesa optimization
failures
and this paper goes into detail
characterizing the circumstances where
we might expect failures like this to
occur
and just really being exact about what
these failures would look like
next we have work from the center for
human compatible ai
there's a paper called quantifying
differences in reward functions
by adam gleave and a bunch of other
people
so one sort of recent strain of thought
in ai work is the idea of reward
learning
so the basic problem is you know the
preferences that humans have are kind of
difficult to specify and we shouldn't
expect that we're generally just able to
you know hard code a single function
somewhere
that says exactly what we want um you
know maybe a more promising and more
practical solution
is that we'd like the ai systems that we
have to be able to learn
what human preferences are and that's
what reward learning refers to
so one thing you need to do when you're
doing reward learning is you need to be
able to compare potential reward
functions that you're using to
express human preferences so maybe you
know you want to see
which of two reward functions is best or
you want to you know trial a procedure
for producing a reward function by
comparing that reward function to some
ground truth reward function
so so far the way that you do this
comparison between reward functions
is you train a policy using both of
those reward functions
and then you have that policy you know
suggest some actions and compare
those actions compare the results of
that policy
the problem is that this basically is
just comparing you know
two training runs and you know there
could be a lot of details that are very
specific to that training run
that don't actually bear on the goodness
of the overall reward function
so this paper actually suggests a way of
comparing reward functions directly
it introduces a new metric called epic
which lets you do that and suggests that
you know future work could use that to
compare reward functions
um yeah very cool work out of chai
um i also in this talk wanted to mention
a bunch of independent
research that people have been doing
that you know it maybe isn't as formal
as sort of an academic paper
but i think it's really good work that's
sort of advancing the state of the art
um most of this work happens or at least
the one stuff that i see happens on the
alignment forum which
i'll talk about more at the end um it's
basically just a
super welcoming place that collates a
lot of recent alignment work and
gives people sort of the opportunity to
post their own ideas about a alignment
um so one recent sort of good series i
think out of there
has been john wentz works work on
abstraction
uh so abstraction in general um is sort
of
he suggests this field about how we can
make predictions about the world
while you know understanding what
information
we want to keep and what information we
want to throw away so we'd like to make
predictions we have a lot of messy info
you know in order to make predictions in
you know a computationally tractable way
we need to be able to sort of make sense
of that info and know what to look at
and what not to look at
and he sort of suggests that this might
be part of the solution
to a very thorny class of problems
called embedded agency problems
the idea here is you know whatever is
going on with our future ai systems
we're going to have an agent that's
going to
need to reason about it and its
environment
and its environment will actually also
contain the agent itself so in some
sense here it's reasoning about itself
and that always sort of leads to tricky
and thorny problems in computer science
and john sort of suggests that digging
more into sort of abstraction as a field
might yield some solutions to these
problems or some understanding
then other independent work recently
that i thought was cool
was work from richard no he started a
sequence
on the safety of multi-agent systems um
so the idea here is
we often think of safety problems in the
context of
one agent with one goal um you know one
ai system doing some stuff
um but you know in humans actually sort
of the most interesting capabilities and
behavior come
when you put humans in groups um you
know all sorts of interactions and
culture and intelligence that evolves
out of these sort of group dynamics
um and richard suggests that maybe a
similar thing is going to happen
um with ai systems where sort of the
most interesting and capable and even
dangerous behavior
might happen when you think about their
group interactions um so in this
sort of start of a sequence richard is
thinking about sort of how we can shape
the agents in the system and how we can
incentivize them to be safe
even in this sort of weird group
environment
and the last thing i wanted to mention
is that there is actually a bunch of
sort of other academic work that's not
affiliated necessarily with any of these
orgs
um that i think is great for ai safety a
lot of recent work on robustness
basically getting ai systems to do what
we train them to do
um in unexpected circumstances i don't
follow this work as
closely as i follow all the other stuff
so i don't want to say too much about
something i don't really know that much
about
but there is a bunch of recent work here
um i think it's you know
very true that academia does work that's
good for safety overall
and you hear a bunch of recent
robustness papers that other people
basically recommended to me
for people who are interested in this
okay hopefully that talk wasn't too
overwhelming
um i do want to point to sort of three
things that
you could look at if you were interested
in learning more about this stuff
one is you could go to the tinyurl link
um
at the bottom of this presentation which
gives more details about all of the
stuff that i covered here
another thing you could do is go and
hang out on the alignment forum which i
mentioned
um which yeah i think is a very
welcoming uh sort of
place for newcomers and for sort of
existing seasoned ai safety veterans to
discuss their ideas and to look at past
work
and sort of lastly i did want to plug
the alignment newsletter that i write
for
which i think does a good job of keeping
people up to date with recent alignment
work it
definitely makes me feel like i'm up to
date and hopefully it can do the same
for you
thanks again for those great talks asean
evan so i see we have a number of
questions submitted um so we'll kick off
with the first one for evan
um so evan with respect to the four key
alignment strategies that you talked
about
to what extent have these um models been
successfully implemented already
yes that's a great question i think
there's a couple of things that i can
say there
so one thing that i'll say is that all
of the proposals that i talked about
are sort of intended to be proposals
which scale
the idea is not just to be able to
you know implement these things now but
to have an idea for how we might be able
to take these proposals
and you know keep working on them and
improving them as we get more and more
intelligent and more powerful machine
learning systems
that being said there is a lot of work
that can be done right now to try to
understand and analyze
what these proposals will look like in
the future so for each one of the
proposals that i've talked about
there are people that are working on
trying to implement this in current
machine learning systems
so uh debate and amplification and
microscope ai are all being worked on
like i mentioned at open ai the open eye
reflection team for example recently
released a paper where they are trying
to
uh do a sort of mini version of
amplification debate to try to just
sort of fine tune gpt3 to sort of better
be able to
answer you know specific human questions
in recursive board modeling also there's
lots of work there that's done in deep
mind and so all of these things
do have an extent to which we can try
and work on it now but it's worth
keeping in mind that the major goal
of all of these is to try to make sure
that they scale well into the future not
just at the right we're able to
implement them now
so it seems like there's several
organizations that have kind of taken a
first step but with the understanding
that this will continue to be a strategy
to be worked on in the future too
that's right cool um okay next question
um so someone asked what are some of
these transparency tools that were
talked about so
again i think evan this was mentioned a
little bit in your talk but asya you
also talked about this with some of
chris ola's work are there other
examples that you can also point to
yeah i mean i think uh you know sort of
like the rest of this stuff transparency
tools are sort of like
um something we would like to have and
people are actively working on
um yeah chris ola does a lot of work on
this um you know i think in general yeah
there's
uh sort of the clarity team does a lot
of work trying to
basically think of ways to sort of like
visualize and decompose
neural networks uh there's definitely
sort of like a question of um
you know how much these methods scale
and and how much they transfer to
you know various things that we might
want to know about um so yeah chrysolo's
work has been largely on
image classifiers you know it's not
clear if it's easy to sort of do the
same thing
um with stuff like language models um
but there is also just like other sort
of strands of interpretability work
stuff called dynamical systems analysis
um i think there are lots of people
sort of trying to think of ways to
approach the problem of
uh figuring out what a neural network is
doing but there is sort of like a big
overarching question of
to what extent like any of these methods
scale and to what extent you know
they're easy to apply
um in in domains that aren't sort of as
easy to visualize as something like an
image classifier
right so similar to evan's answer um a
good first step is when taking the lots
of work that needs to be done that's
totally right yeah
um another question for evan so what's
the difference between
imitative amplification and iterative
amplification and similarly someone else
asks
how does the first step of ida work how
does a human do the initial value
training
can you shed some light on either of
those yeah so i'll try to clear some of
this stuff up so first thing that's
worth noting
is that the term iterated amplification
is more general so the term iterated
amplification refers to
any form of amplification that is doing
this sort of basic process
of you know take a model amplify it and
then sort of train some new model based
on that amplified version
via some sort of distillation process so
for example both recursive reward
modeling
and imitative amplification that i
talked about in my talks would be forms
of the general approach of iterated
amplification
imitative amplification specifically
refers to the form of amplification
where what you do is you take the
amplified model and then you just train
the model to imitate
the amplified version uh which is the
sort of first proposal that i talked
about
um and then remind me what the second
question was so the second question says
how does step one of ida work how does a
human do the initial value training
great so yeah this is a good question i
think in terms of if we try to think
about amplification there is this
problem of how do we get off the ground
initially
and one thing that is important to keep
in mind that i didn't really talk about
in my talk is that
one of the main uses for something like
amplification is not just to train an ai
from scratch
but to take an existing ai for example a
language model that was trained via
a sort of auto regressive language
modeling regime something like gbt3
and then to turn that language model
into something which is like actually
helpful and able to sort of assist
humans
so the idea with a lot of these
proposals including debate and uh sort
of imitating amplification
isn't necessarily to start from scratch
but to try to start from something like
an auto aggressive language model like
gp23
but that you know something like gp3
isn't actually trying to be helpful to
you it's just trying to sort of complete
the next
uh sort of word that it predicts and try
to turn something like that
into something that's actually going to
be helpful that's going to try to assist
the human
and so uh in terms of like how do we get
things off the ground what is the sort
of first step
well in a lot of these cases the first
step would be take an existing auto
aggressive language model
and then apply these techniques to that
one person also asks what's a currently
neglected project in this space so i
guess
going back to these questions of you
know initial steps have been implemented
but there definitely is a lot of work
done that
um for these models to scale are there i
guess like specific projects or topics
that you can talk about to
help a student uh kind of get started in
this area
um that's a great question so i think
that there's
definitely a lot of work to be done on
all of these things so you know both
me and asia talked about
interpretability that's definitely a
place where i think there's a lot of
work to be done
uh in particular if you head over to
distill.pub there's a whole bunch of
articles you can see there
including uh like a bunch of they talk
about a lot of sort of future work there
um there's also a lot of future work to
be done just in terms of
trying to take these approaches and
understand better how they're going to
work how they're going to scale into the
future
um as well as you know in particular one
thing is trying to understand
what are these sorts of training
processes like what are these uh are
sort of the inner alignment actually
going to go through with this
and so one of the things that one of the
sorts of experiments that i might be
excited about
is trying to understand um how good are
these training processes how good are
sort of the ability to
you know inspect the training process as
it's going along and
can we produce examples of cases where
we try to train
on like some particular objective like
irritative application for example
and we end up with a model which is
maybe trying to do something different
so i've i've written about this a little
bit
um in the past i have a post called
uh sort of towards a um
concrete experiments for inner alignment
i believe that sort of provides an
example of you know
what would a like simple experiment look
like to try to demonstrate the existence
of inner alignment failures
um and aussie has sort of talked about
this a little bit when she was talking
about the paper that i was author on
risk mode optimization
and one of the sorts of places where i
would be most excited about sort of new
experiments is in that space it's trying
to understand
what are these sorts of uh robustness
failures look like when you start
scaling up systems
kind of a similar question to asia in
your work with the long-term future fund
can you maybe talk about what
sorts of projects grant makers are
looking to fund or
what they'd be excited about in an
independent researcher
um yeah i mean i definitely don't want
to speak for the whole fund so i can
only um
speak for myself um yeah i think you
know uh
sort of independent research is always
tricky like it's sort of hard to make
progress as an independent researcher
um so i think in terms of in terms of
like wanting to make progress as an
independent researcher i think sort of
the
things that i look for and think are
sort of the most promising are
you know like having a good sense of
what's already been written in the space
um
you know suggesting research directions
that seem
uh tractable and meaningful and then
also just you know being willing and
able to engage with other researchers in
this space i think that's sort of very
important
um and all of this work you know um it's
very much like a collaborative field and
lots of people are are sort of
you know constantly talking about these
ideas and making progress and
suggestions um
so the extent that you can sort of get
involved with people already working on
it i think that's that's really good
okay and we have a couple of minutes
left so we'll end with a final question
um so evan and asia you guys are
kind of approaching ai safety from
different paths so evan definitely more
technical research
and i see a little bit more broad
strategy focus
can you talk a little bit about how you
got onto that path and
you know whether you have any tips on
whether this could be a good fit for
another student
maybe we could start with asia yeah
sorry um
yeah i mean i sort of um dropped into
this work by accident like i don't know
if i made a lot of like super uh
intentional choices um but i ended up
doing a lot of forecasting work and then
um i got like much more into it via
working at impact um
i think for safety stuff in particular i
mean i had a i guess i had a computer
science background um
and i sort of saw you know
advertisements for people to help write
the newsletter that i'm a part of um
and i think basically maybe what that
should suggest to students is that um i
think as buck said in an earlier talk
here is that you know the field is not
so deep
um that you need a whole lot of
experience to engage with it um so i
think if you
seem if if this stuff seems kind of
interesting um
it's very possible for people with not
that much background sort of get up to
speed to understand what's going on
um to have their own ideas um so maybe
that's sort of like i think the takeaway
maybe from my career trajectory is
um you know you don't have to be like
some absurd super genius to get involved
in this stuff um i think it's
really possible um sort of know what's
going on with a with a
less specific background
yeah i mean i definitely like what
aussie was saying in terms of my
background so
uh i sort of got very involved in
effective altruism when i was in college
um and i wasn't exactly sure sort of how
to you know deploy my skills how to you
know find a career which would make the
sort of largest impact but i sort of
went to an ea global i went to this
um workshop called ai risk or computer
scientists
um and i ended up as a result of some of
this sort of stuff uh
doing an internship at miri which was
sort of really good i think that one of
the things that was nice about that was
just sort of getting my foot in the door
and really sort of just starting to meet
people understand what's happening in as
safety
and while i was there i also attended
this thing the mary's summer fellows
program which is sort of
a couple week long research retreat um
and i met a couple of other people and
we were sort of very interested in in
what was then being called optimization
demons and became this sort of inner
alignment problem which was
resulted in us writing this paper risk
alert optimization which was very well
received and this sort of like
was sort of put me in a position where i
was like felt comfortable doing research
and being able to do research
um and so after that i applied and did
some work at open ai
and then after open ai i went to miri
which is where i am now
okay well that's all the time we have
for questions thanks again so much to
asia and evan
and thanks to all of our viewers for
watching to our viewers before you leave
the session please give us your feedback
in the poll section of the live chat
thanks again |
60ebadae-a818-4030-8b6d-d5724282b8ad | StampyAI/alignment-research-dataset/arxiv | Arxiv | Dempster-Shafer vs. Probabilistic Logic
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I Dempster-Shafer vs. Probabilistic Logic
Daniel Hunter
Northrop Research and Technology Center
One Research Park
Palos Verdes Peninsula CA. 90274
Abstract
The combination of evidence in Dempster-Shafer theory is compared with the
combination of evidence in probabilistic logic. Sufficient conditions are stated for
these two methods to agree. It is then shown that these conditions are minimal
in the sense that disagreement can occur when any one of them is removed. An
example is given in which the traditional assumption of conditional independence
of evidence on hypotheses holds and a uniform prior is assumed, but probabilistic
logic and Dempster's rule give radically different results for the combination of two
evidence events.
1 Introduction
Researchers on uncertain reasoning within the AI community have recently shown inter
est in probabilistic reasoning using sets of standard probability assignments. For exam
ple, Nilsson in [6] and Grosof in [3] have considered methods for reasoning with sets of
probability assignments generated by probabilistic equality and inequality constraints1.
Following Nilsson, I use the expression "Probabilistic Logic" to denote the collection of
such methods. The aim of these methods is to compute a set of possible probabilities for
a given statement from the specified set of probability assignments. If the set of proba
bility assignments is generated by probabilistic equality and inequality constraints, the
possible probabilities for a given statement form an interval. Since Dempster-Sh afer also
associates an interval with each statement A, namely the interval bounded by Bel(A)
and Pls(A), the question arises as to the connection between Dempster-Shaf er belief
functions and sets of probability assignments defined by equality and inequality con
straints. Grosof [3] has shown that the latter is a generalization of the former: every
Dempster-S hafer belief function is representable by a set of probability assignments aris
ing from equality and inequality constraints, but not vice-versa. A related issue concerns
the connection between Dempster's rule of combination and the combination of evidence
statements in probabilistic logic. Grosof [2] states some results concerning conditions
under which these two methods of combining evidence yield the same result. The aim
of this paper is to generalize Grosof's results and to investigate how divergent the two
1 I.J. Good has been an advocate of probabilistic reasoning using inequality statements since about
1938. See [1, p.25 and pp.75-76)
22
methods can become when the conditions for agreement are not satisfied. Familiarity
with the basics of Dempster-Shafer theory is assumed.
2 Conditions for Agreement
Recall that where m1 and m2 are two mass functions, with focal elements A1, ••• , Ak
and B11 ••• , B1, respectively, their combination m1 E9 m2, called their orthogonal sum, is
defined by:
EA,nB;= A mt(A;)m2(B;)
1-EA,nB;=0 m1(A;)m2(B;)
EA,nB;= A mt(Ai)m2(B;)
EA,nB;;t0 mt(Ai)m2(B;)
And where m is a mass function, the belief function Bel determined by m is given by:
Bel(A) = :L m(B)
B�A
When working within probabilistic logic, I follow Grosof in making explicit the evidence
on which a mass function depends. Hence for each mass function m;, the statement
m;(A) = p within Dempster-Sha fer has as counterpart in probabilistic logic the state
ment P(AlE;) = p, where P is a standard probability function and Ei is a statement
representing the evidence on which m; is based.
The following is the most general theorem I know of that states conditions under
which application of Dempster's rule agrees with combination of evidence in probabilistic
logic:
Theorem 1 Let m11 m2 be mass functions over frame E> each with focal elements
St, ... ,skI where the S; form a partition ofE> (i.e., sinS; = 0 fori ':/; j and sl u ... u sk =
E> }; E1 and E2 propositions not defined in E>; e = {EtAE2, E1 A •E2, •E11\ •E2}; and
r a set of probability assignments p over E) X e satisfying
(i} P(Si} = 1/k, i = t, ... ,k. {By abuse of notation, I identify X � e with
Xx£� E>x£.
(ii} P(E1 A EzlSi} = P{EtlSi}P{E2l Si), i = t, ... ,k.
{iii} P(SilEt} = m1(S;) and P(SilE2) = m2(Si), i = t, ... ,k.
(iv) P(E11\ Ez} > 0.
Then, where Belt,2 is the belief function over e determined by ms = ml EB m2, for all
A� E>, i E {1, ... , k}, andRE r:
(1)
and
Bel1,2(A) = min{Q(AlE 1 A E2): Q E r} (2)
23 I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I (Grosof [2] states the first part of theorem 1 for a two-membered partition; both parts
of the theorem can also be derived from results in Yen (7]2 .)
Proof. First we prove equation (1): For each Si and PEr, we have
P(E11\ EziSj)P(Sj) _ P(E1ISi)P(EziSi )
k -k Li=l P(E11\ EziSi)P(Si) Li=l P(E11\ EziSi)
[P(E1)P(S j lEI)/ P(Sj )][P(Ez)P(S j IEz)/ P(Sj)]
L:=1[P(E1)P(SiiE 1)/ P(Si)][P(Ez)P(SiiEz) / P(Si)]
P(Sj IEI)P(Sj IEz) m1 (Sj )mz(Sj)
2:::=1 P(S.;iE1)P(SiiEz) -L�=l ml(Si)mz(Si)
m3(Sj) = Beh,z(Sj ).
To prove equation (2) we construct a P E r such that P(AIE11\Ez) =min{ Q(AIE11\Ez :
Q E r} and P(AIE1 1\ Ez) = Bel1,2(A). To do so we must distinguish between X � 0
and X X e � e X e. Let A � e. We wish to construct a probability function p over
e X e such that P(A X eiEl/\ E2) = min{Q(A X eiEl/\ E2) : Q E r}. The desired
probability function P will be determined if P( (} 1\ e) is defined for each 8 E 0, e E e.
This will be accomplished if for each si, p is defined for every element of S; X e. Pick
any R from r (if r is empty, the theorem is vacuously true). For each S.;, define P over
the elements of Si X e as follows: if Si � A, set P( 8 1\e) = R( 81\ e) for each 8 1\e E Si X e;
otherwise, choose a Oo E Si - A and set P( 80 1\ e) = R( Si 1\ e) and P ( 8 1\ e) = 0 for all
e E si' e f. Oo. This fixes p for all singletons in e X e. By the construction of P, we have
P(Si 1\ e)= R(Si 1\ e) and therefore since R satisfies (i)-(iv), so does P. Hence PEr. It
is easy to verify that:
P(A x eiS·AX) = { 1 if si �.A • 0 otherwise
We now wish to show that P(A x eiE1 1\ Ez) = min{Q(A x eiE1 1\ Ez) : Q E r} =
Belt,2(A). By probability theory, for any probability assignment Q in r, Q(A x eiE1/\
Ez) = 2::=1 Q(SiiE11\ Ez)Q(A x eiSi 1\Et/\ Ez). By equation (1), Q(A x eiE11\Ez) =
Ef=l m3(Si)Q(A X eiSi 1\ Ell\ Ez). If si s;;; A, then Q(A X £lSi 1\ El 1\ Ez) = 1. Hence
Q(A X £1El, Ez) will be minimal if Q(A X eiSi 1\ El 1\ Ez) = 0 when si is not a subset
of A. But P has this property, so
min{Q(A x eiE1/\ Ez): Q E f} = P(A x eiE1/\ Ez) =
L P(Si X eiEl 1\ Ez) = L m3(Si) = Beh,z(A). 0
To avoid misunderstanding, I should emphasize that the above theorem only states
sufficient, not necessary, conditions for use of Dempster's rule to agree with combination
of evidence in probabilistic logic. Thus it is quite possible for there to be cases in which
the two methods of combination agree, but not all, and possibly none, of the above
2Yen in [7] is not directly concerned with probabilistic logic; however, his theorem 1 can be interpreted
as applying to a class of probabil ity functions and by adding the equivalent of my assumption (i) to
Yen's assumptions, it is not hard to show that theorem 1 of the present paper follows.
24
sufficient conditions for agreement are satisfied. However, three points need to be made
here: first,· as far as I know, no non-trivial necessary conditions for agreement have yet
been stated (not even condition (ii), the independence condition, is necessary); second,
if we think that probabilistic logic gives the right answer but wish to use Dempster's rule
for computational convenience, then in order to be sure that a particular application of
Dempster's rule gives the right answer, we need sufficient conditions for agreement, since
the satisfaction of merely necessary conditions for agreement is no guarantee that there
is agreement. Finally, what is in effect shown below is that the conditions of theorem
1 form a minimal set of sufficient conditions in the sense that if any one of them is
removed then the theorem no longer holds.
3 How Much Disagreement?
The next question that arises is, How much divergence arises between Dempster-Shaler
and probabilistic logic if one or more of the conditions of the theorem is not satisfied?
Obviously condition (iii) on P must be kept and (iv) is necessary for the conditional
probabi lities to be defined. Thus the obvious candidates for scrutiny are conditions (i)
and (ii). But other, less obvious, assumptions also enter into the theorem: for example,
it is assumed that the focal elements of m1 and m2 are the same and that they constitute
a partition of E>. This section shows that lifting any one of these assumptions can result
in dramatic disagreement between Dempster-Shafer and probabilistic logic.
Let us begin by examining the effect of lifting the assumption that the members of
the partition are equally probable. If (i) were abandoned, then the prior over the S, could
swamp the effect of E1 and Ez. For example, given any fixed values for P(SiiEt) and
P(SiiE2), providi ng both these values are strictly between zero and one, P(SilE1 A Ez)
can take on any value strictly between zero and one depend ing upon the value of P(Si)·
For simplicity consider the case of a bipartite partition of E> -i.e. there are only two
members to the partition, call them Hand H. Then if conditions (ii) and (iv) hold for
P, the formula
PH E A E _ P(HIEt)P( HIEz) ( I 1 z) -P(HIEt)P( HIEz) + O(H)P(H!Et)P(H!Ez)
can be proven. The factor O(H) in the second term of the denominator is the odds
on H, defined to be P(H)j P(H). By making O(H) sufficiently high, the denominator
can be made large, thus bringing P(HIE1 A E2) close to zero, regardless of the values
of P(HIE1 and P(HIE2) (providing neither is equal to one). However, if the sum of
m1(H) and mz(H) is greater than one, m1 EJ3 mz(H) will be greater than either mt(H)
or mz(H). For example, with m1(H) = mz(H) = P(HIEt) = P(HIEz) = 0.9, but with
P(H) = 0.999, we get m1 $ m2(H) � 0.99 but P( HIE1 A E2) = 0.075, a rather large
difference indeed. Similarly, making O(H) small results in P(HIE1 A E2) being close to
one.
The above example presents a counter-intuitive consequence of standard probability
theory: the higher the prior probability of a hypothesis, the lower will be its posterior
probability on the basis of the conjunction of two evidence statements that are condition
ally independent under both the hypothesis and its negation. Though counterintuitive,
this consequence can be made more plausible by considering the ratio P(HIE)/ P(H), of
25 I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I the posterior probability to the prior, and noting that the higher the prior, the smaller
this ratio and so the less confirmatory the evidence is of the hypothesis. In particular, if
P(H) is higher than P(HJE), then even if P(HJE) is high, E will be evidence against H
and so the effect of combining two such evidence statements, when they are conditionally
independent, is to even further lower the posterior probability of H.
Dempster's rule also diverges from probabilistic logic when the evidence statements
are not conditionally independent under the members of the partition. It is well known
that the combined effect of two non-independent evidence statements is not determined
by their individual effects on the probability of a hypothesis (except when one of the
posterior probabilities is zero or one). I will therefore say no more about the consequences
of lifting condition (ii).
Of more interest is the question of what happens when the conditions on P are
maintained, but the conditions for the mass function are changed. Recall that it was
assumed that both mass functions have the same focal elements and that these focal
elements form a partition of the frame of discernment. Consider the latter condition first.
What if the focal elements do not form a partition? In this case, the main difference is
that under Dempster's rule, intersections of focal elements always obtain some mass in
the combined mass distributions, but the same intersections do not always have a positive
probability in the posterior on the basis of the both evidence statements. For example,
suppose that 0 = {a, b, c} and m1 {a, b} = m2{ a, b} = ml{b, c} = m2{b, c} = 0.5. Then
m3{b} = 0.5, but it is easy to construct a P satisfying (i)-(iv) such that P(bJE1AE2) = 0
-e.g., set P(b) = 0 and P(a) = P(c) = P(Ei) = P(E2) = P(aJEi) = P(cJE2) = 0.5.
I will present one final example in which Dempster's rule diverges from probabilistic
logic, one that in my opinion shows a serious defect in Dempster's rule. In this example,
the focal elements for each mass function form a partition but not the same partition.
Let the frame of discernment e = {X 1, ••• ' Xn} and let the focal elements of ml be {X 1}
and {x2, ••• , :z:n} and the focal elements of m2 the singleton elements of 0. Assume
m2( {xi})= 1/n, i = 1, ... , n. Then
m1({x1})m2({re1}) + 2::�=2 m1({re2, ... , :vn})m2({:vi})
m1({x1})1jn
m1({x1})l/n + {1-ml{re1}) 2::�21/n
m1({:v1})
m1({:v1}) + (1-m1({re1})) 2::�=21
m1({x1}) (3)
(4)
(5)
(6)
It can be seen that m3({x1}) goes to zero as n goes to infinity, providing m1({x1}) <
1. This is a disconcerting result. To see why, consider a concrete case in which the
above mass functions might be combined. Suppose there is a lottery with n individuals
participating and only one winner. Let the frame of discernment be {x1, ••. , ren}, where
Xi is the event of the ith participant (in some ordering of the participants) winning.
It is known beforehand what the winning number is. One piece of evidence is that
Jones holds a ticket whose digits are identical with those of the winning number, except
possibly for one digit (e.g, you see Jones' ticket except for one digit, which is obscured).
Another piece of evidence is that the lottery is fair: the participants get th�ir tickets
26
through some random drawing process. In the Dempster- Shafer theory, the first piece
of evidence, in the absence of the second, would plausibly be represented by a mass
distribution of the form of m1-e.g. if :z:1 is the event of Jones' winning the lottery, then
we might set m1({:z:1}) = 0.1 if we see that Jones' ticket is identical with the winning
ticket except possibly for one digit and, in the absence of knowledge as to whether or
not the lottery is fair, Dempster-Shafer would presumably recommend spreading the
remaining mass over the set { :z:2, ... , :z: .. }, without assigning any mass to smaller subsets.
And the second piece of evidence, in the absence of the first, would, I should think, be
represented by m2 since we have positive evidence that each participant has an equal
chance of winning. ·
With n = 112 and m1(:z:1) = 0.1, we have
which, being the total mass committed to {:z:1}, yields
But this degree of belief seems much too low: if you believe that Jones' has at least a
1 in 10 chance of winning the lottery on the basis of seeing all but one digit of Jones'
ticket, learning that the lottery is fair should not cause you to lower your degree of
belief in Jones' winning. Worse still, since combination of evidence is commutative in
Dempster-Sh afer, imagine first learning that the lottery is fair, in which case you assign
a 1 in 112 chance that Jones will win, and then learning that all but possibly one of the
digits in Jones' ticket match those in the winning number. Surely it would be absurd to
then lower Jones' chances of winning to 1 in 1000.
How would probabilistic logic handle the same example? Note that we cannot really
keep the conditions on the probability assignment p in theorem 1 the same, since they
refer to the Si, which are stipulated to be focal elements for both m1 and m2• However,
we can assume that P(F!Et) = m1(F) for each focal element of m1 and similarly for
m2• Also, condition (i) presents a bit of a problem since for k > 2, (i) cannot apply to
both sets of focal elements. We assume instead that (i) applies to the singletons of e.
In short, we assume that P satisfies the following conditions:
(1) P({:z:i}) = 1/n,i = 1, ... ,n.
(2a) P(Et A E21{:z:i}) = P(Etl{:z:i})P(E2j{:z:i} ), i = 1, ... , n.
(2b) P(Et A E21{:c2, ... , :z: .. }) = P(Etl{:c2, ... , :c,.} )P(E2j{:z:2, ... , :z:,.})
(3a) P({:z:t}IEt) = mt({:z:t})
(3b) P( {:z:,}IE2) = 1/n, i = 1, ... , n.
(4) P(Et A E2) > 0.
Conditions ( 1)-( 4) entail:
(7)
27 I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I Hence E2 has no effect on the probability of {a:1} in the presence of E1 (in fact, as
required by the theorem of Johnson [4, p.199] for the case n > 2, Ez is jrrelevant to any
a:i)· Hence no matter how large n is, the probability of Jones' winning given both E1
and E2 will be 0.1. This seems a much more reasonable result.
An objection to the above comparison of Dempster-Sh afer with probabilistic logic
was raised by one of the reviewers of this paper. According to this objeCtion, there is
nothing surprising in the fact that the combination of the evidence about Jones' ticket
and the evidence about the fairness of the lottery lowers Jones' probability of winning.
After all, both pieces of evidence state that it is highly unlikely that Jories will win, so
why shouldn't their combination make it even more unlikely that he will win?
This objection confuses a hypothesis's being unlikely on the basis of certain evidence
with its being disconfirmed by that evidence. A piece of evidence disconfirms a given
hypothesis if the probability of that hypothesis on the basis of that piece of evidence
is lower than the prior probability of the hypothesis. If two independent pieces of
information disconfirm a hypothesis, then their conjunction should indeed disconfirm
the same hypothesis to an even greater degree. However, in the above example, the
evidence about Jones' ticket does not disconfirm the hypothesis that he will win. To the
contrary, given the assumed size of the hypothesis space, it significantly increases Jones'
probability of winning. Furthermore, the example can modified so that the evidence
about Jones' ticket makes it highly probable that he will win: assume that you see all
the digits in Jones' ticket and are ninety percent certain that Jones holds the winning
ticket (you may be slightly unsure about one of the digits in the winning ticket). Then if
the only modification to the example is that ml({:z:t}) = 0.9, we find that Belt,z({a:l})
is 0.075, still much too low a number, while P({a:1}/E1 1\ Ez) = 0.9.
The source of the discrepancy between Dempster's rule and probabilistic logic in this
case can be discovered by rewriting the equations for m3({a:1}) and P({a:l}/E1 /\E2) as
follows:
m3({a:1}) mt({a:l}) (8) ml({a:t}) + T1
n
T1 L m1( {a:z, ... ,a: .. }) (9)
i=Z
P({a:t}/Et 1\ Ez) P({:z:t}/El) (10) P( { Xt}/El) + T2
n
Tz L P({a:i}/El) (11)
i=Z
The difference is in the terms T1 and T2• The difference is that m1 ( { a:2, •.. ,a: .. } is a con
stant, whereas P( { a:i}/E1), i = 2, ... , n grows, on average, smaller as n increases since the
term Tz is equal toP( {a:2 .•• , x .. }/E1), which is stipulated to be equal to m2( {xz, ... ,a:,.}),
a constant. Hence T1 goes to infinity as n goes to infinity, but T2 remains constant.
4 Conclusion
I have proven that Dempster's rule of combination agrees with combination of evidence
in probabilistic logic under certain conditions. I have also shown that these two methods
28
for combining evidence can produce radically different results when these conditions do
not obtain. Of particular interest is the fact that even when the conditional independence
assumptions are satisfied, differences can result when the focal elements of the two mass
functions do not form a partition or form different partitions.
References
[1] Good, I.J ., Good Thinking: The Foundations of Probability and its Applications,
Minneapolis, MN: University of Minnesota Press, 1983.
[2] Grosof, B.N., "Evidential Confirmation as Transformed Probabili ty: On the Duality
of Priors and Updates," in Uncertainty in Artificial Intelligence, ed. L.N. Kanal and
J.F. Lemmer, Amsterdam: Elsevier Science Publishers, 1986, pp.153-166.
[3] Grosof, B.N., "An Inequality Paradigm for Probabilistic Knowledge: The Logic of
Conditional Probability Intervals," in Uncertainty in Artificial Intelligence, ed. L.N.
Kanal and J.F. Lemmer, Amsterdam: Elsevier Science Publishers, 1986, pp.259-
275.
[4] Johnson, R.W., "Independence and Bayesian Updating," in Uncertainty in Artifi
cial Intelligence, ed. L.N. Kanal and J.F. Lemmer, Amsterdam: Elsevier Science
Publishers, 1986, pp.197-201.
[5] Lemmer, J .F., "Confidence Factors, Empiricism, and the Dempster-Shafer Theory
of Evidence," in Uncertainty in Artificial Intelligence, ed. L.N. Kana!. and J .F.
Lemmer, Amsterdam: Elsevier Science Publishers, 1986, pp. 357-369.
(6] Nilsson, N.J.,"Probabilistic Logic," in Artificial Intelligence, vol.28, no.1, February
1986.
[7] Yen, J ., "A Reasoning Model Based on an Extended Dempster-Shafer Theory," in
Proceedings AAAI-86, vol 1., August 1986, pp.125-131.
29 I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I |
90e4d426-db43-4ed2-be9f-e7642b919962 | trentmkelly/LessWrong-43k | LessWrong | EIS IX: Interpretability and Adversaries
Part 9 of 12 in the Engineer’s Interpretability Sequence.
Thanks to Nikolaos Tsilivis for helpful discussions.
The studies of interpretability and adversaries are inseparable.
There are several key connections between the two. Some works will be cited below, but please refer to page 9 of the Toward Transparent AI survey (Räuker et al., 2022) for full citations. There are too many to be worth the clutter in this post.
1. More interpretable networks are more adversarially robust and more adversarially robust networks are more interpretable.
The main vein of evidence on this topic comes from a set of papers which study how regularizing feature attribution/saliency maps to make them more clearly highlight specific input features has the effect of making networks more robust to adversaries. There is also some other work showing the reverse -- that adversarially robust networks tend to have more lucid attributions. There is also some work showing that networks which emulate certain properties of the human visual system are also more robust to adversaries and distribution shifts (e.g. Ying et al. (2022)).
Adversarial training is a good way of making networks more internally interpretable. One particularly notable work is Engstrom et al., (2019) who found striking improvements in how much easier it was to produce human-describable visualizations of internal network properties. Although they stopped short of applying this work to an engineering task, the paper seems to make a strong case for how adversarial training can improve interpretations. Adversarially trained networks also produce better representations for transfer learning, image generation, and modeling the human visual system.
Finally, some works have found that lateral inhibition and second-order optimization have been found to improve both interpretability and robustness.
2. Interpretability tools can and should be used to guide the design of adversaries.
This is one of the three types of rigorous |
7105965a-d3c7-4cea-97dc-cd1cd6f40956 | trentmkelly/LessWrong-43k | LessWrong | Causation, Probability and Objectivity
Most people here seem to endorse the following two claims:
1. Probability is "in the mind," i.e., probability claims are true only in relation to some prior distribution and set of information to be conditionalized on;
2. Causality is to be cashed out in terms of probability distributions á la Judea Pearl or something.
However, these two claims feel in tension to me, since they appear to have the consequence that causality is also "in the mind" - whether something caused something else depends on various probability distributions, which in turn depends on how much we know about the situation. Worse, it has the consequence that ideal Bayesian reasoners can never be wrong about causal relations, since they always have perfect knowledge of their own probabilities.
Since I don't understand Pearl's model of causality very well, I may be missing something fundamental, so this is more of a question than an argument. |
8e747d61-b720-49b0-830d-779dfe829201 | trentmkelly/LessWrong-43k | LessWrong | Procrastination checklist
Procrastination checklist
This list is a revision of this checklist: http://lesswrong.com/lw/hgd/10step_antiprocrastination_checklist/
1. What is the task? Make sure you're going to focus on one thing at a time. Write it down (helps some people). (If you need - start with the big picture, one sentence of "what is this for")
Can you do it now? (If yes then do it)
2. How long will you work until you take a break? Prepare to set a timer and commit to focusing.
Can you do it now? (If yes then do it)
3. What are the parts to this task? Break things down until they are in *can do it now* steps, if you have a small number of steps that can now be done; stop writing more steps and start doing them.
Can you do it right now? (If yes then do it)
4. What's an achievable goal for this sitting? Set a reasonable expectation for yourself. (until it's done, 1000 words, complete research on X part)
Can you do it now? (If yes then do it)
5. How can you make it easier to do the task?
* Is the environment right? Desk clear, well lit area...
* Do you have something to drink? Get yourself some tea, coffee, or water.
* Are distractions closed? Shut the door, quit Tweetdeck, close the Facebook and Gmail tabs, and set skype to "Do not disturb."
* What music will you listen to inspire yourself to be productive? Put on a good instrumental playlist! (video game soundtracks are good)
* Do you have the right books open? The right tools in reach?
* Is your chair comfortable?
* Can you make it harder to do the distracting or <not this> thing?
* (step 3 is going to help to make it easier)
Can you do it now? (If yes then do it)
6. Why are you doing this task? Trace the value back until you increase the desire to do it.
Can you do it now? (If yes then do it)
7. Will gamifying help you? What are some ways to gamify the task? Try to have fun with it!
Can you do it now? (If yes then do it)
8. What are some rewards you c |
d021a1ca-034e-4892-ae53-2c61b2c9b6e0 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post110
Epistemic status: The following isn't an airtight argument, but mostly a guess how things play out. Consider two broad possibilities: I. In worlds where we are doing reasonably well on alignment, AI control agenda does not have much impact. II. In worlds where we are failing at alignment, AI control may primarily shift probability mass away from "moderately large warning shots" and towards "ineffective warning shots" and "existential catastrophe, full takeover". The key heuristic is that the global system already has various mechanisms and feedback loops that resist takeover by a single agent (i.e. it is not easy to overthrow the Chinese government). In most cases where AI control would stop an unaligned AI, the counterfactual is that broader civilizational resistance would have stopped it anyway, but with the important side effect of a moderately-sized warning shot. I expect moderately sized warning shots to increase the chances humanity as a whole takes serious actions and, for example, steps up efforts to align the frontier labs. I am skeptical that incidents stopped by AI control would lead to meaningful change. Sharing details of such an event with proper framing could pose existential risk , but for the lab involved. In practice, I anticipate vague, sanitized communications along the lines of "our safety systems performed as designed, preventing bad things". Without clear, compelling evidence of the severity of the averted threat, these incidents are unlikely to catalyze serious action. The incentives for labs to downplay and obscure such events will be strong. There are additional factors to consider, like AI control likely moves some resources away from alignment, but I don't think this is the dominant effect. Note that this isn't a general argument against boxing, e.g. boxes based more on formal methods or theory have better chance to generalize. Typical counter-arguments to this line of reasoning claim seem to be: We will extract useful "automated alignment" work from the unaligned AIs inside of the control scheme. I'm sceptical: will cover this in a separate post Isn't this general counter-argument to alignment research as well? In my view not, details matter: different strains of alignment research have different generalization profiles. Note: this text lived in a draft form before John Wentworth posted his Case Against AI Control Research ; my original intent was to extend it a bit more toward discussing AI control generalization properties. As this would be redundant now, I'm postin it as it is: there is some non-overlapping part. |
a5b2768c-37d3-4fe6-95f6-486d19b8bb33 | trentmkelly/LessWrong-43k | LessWrong | The Scale Problem in AI
Suppose we are making an AI; for familiarity's sake, let's say that it is a model-based agent. In that case, we might need to train the model with data from the real world to make it accurate.
Usually the way this proceeds is that we have access to some source of data, e.g. a deployment of the AI in the world, and we capture "episodes" of some fixed length L from that data source. And then we use something like gradient descent to update our model to better predict those episodes.
The difficulty is that the model will have a hard time becoming accurate for scales bigger than L. For instance, suppose L is on the scale of 15 seconds. This might make it accurate for predicting phenomena that happen on the scale of 15 seconds, such as basic physical interactions between objects, but it is probably not going to learn to accurately predict people organizing in long-term politics.
Some examples of phenomena that happen at different timescales.
Within some regimes, the scale problem is reasonably solvable. For instance, if the environment is fully observable, then the dynamics extrapolate straightforwardly out beyond the timescale that has been observed[1]. But humans are very much not fully observable.
Importantly, I suspect humans have a huge advantage over AIs when it comes to the scale problem, because humans originate from evolution, and evolution has molded our models based on timescales longer than a lifetime (because the reproduction of our great great grandchildren also influences our fitness).
I find it interesting to think of the implications of the scale problem:
1. Maybe it doesn't matter because an AI trained on a scale of 15 minutes can use its "15 minutes of charisma" to cause enough damage.
2. Maybe there is a training-viable scale - e.g. weeks - beyond which humans extrapolate easily enough.
3. Maybe the AI can do strategy-stealing from human behavior, human media, or human theories about >>L-scale dynamics.
4. Maybe some place like China can bru |
fbb84db5-ea7b-4a5a-b180-224f488b1fa3 | trentmkelly/LessWrong-43k | LessWrong | [Link] Cooking for people who don't
Links about elementary cooking, food storage, etc.
Here's the premise:
> Write a post to pass on something[s] you know that you feel is useful to anyone who wants to increase their level of food security by increasing their level of skill, knowledge, comfort around getting, storing, or preparing food. How-tos are good, recipes are good, linkspams are good. Reflective essays are good too, even if not of a strictly practically useful nature. You are your own best judge of what's on-topic. On February 2nd, come back and post a link to it in the comments of the Carnival Round Up Post. |
c58864e3-6c36-404f-bfbd-fdcc86d09b8e | trentmkelly/LessWrong-43k | LessWrong | The Fermi Paradox has not been dissolved - James Fodor
> In this essay, I will argue that the analysis of Sandberg et al. is flawed in a number of key respects, and as a result the Fermi Paradox remains an open question. Here I briefly list the key problems with the Sandberg et al. paper, before proceeding to discuss each in more detail.
>
> 1. The method used of multiplying uncertainties of many small numbers, most of which have an upper bound of one, is biased towards yielding a result of a high probability of Earth being unique, while also leading to various dubious results.
> 2. The key result of the paper is driven largely by uncertainty in the parameter fl, which is modeled in an unusual way without clear justification.
> 3. Adoption of slightly different (and I believe more plausible) modelling choices and parameter values yields totally different results, which do not result in the Fermi paradox being dissolved. I illustrate this by re-estimating the Sandberg et al. models using different parameters and modelling assumptions. |
b4a8d370-a980-49e0-be46-ef1a46db3f84 | trentmkelly/LessWrong-43k | LessWrong | Chapter 2: What's Inside?
I've been such a fool, thinking that memory would be the evident part. There were some memory models. But they were describing memory by duration, type of information, etc. That was exciting to read about experiments their authors made. Like "magic number 7", and Baddeley's experiments on different types of data like sound/visual. But there were a few problems:
1. There was no one MODEL TO RULE THEM ALL. They were separately explaining different kinds of behavior.
2. There was no way to connect any of them to my knowledge about neurons and all neuroscience-related stuff. In my opinion, they were full of: "And here happens some magic, and it does the trick."
I've decided to move on and try to connect all the facts I've learned with existing neuroscience.
Inside the brain, we have neurons. There are different types of them and several types of mediators. We don't care about them right now. We will use a simple model, where each neuron connects to others. While receiving input signals, it charges up.
Our goal is to store some data. And also retrieve that data.
It seems like retrieving data is related to neuron activation. When it's activated, it takes part in forming the model.
But what data is stored here? Let's see. Connections between neurons have different strengths. If it is weak, most probably, another neuron won't fire with ours. If it is strong, it will increase the probability of chain-reaction. So, we will keep the strength of our connections to other neurons as data. The questions of charging, leaking, threshold, we will leave behind the scene.
We defined our data as connections strengths. How to write this data?
There is one theory: Neurons that fire together, wire together. Then you coactivate two neurons - connection strength increases. That's called Hebb's rule. And with increasing strength, the probability that the second neuron will activate after activating first growing too.
But why do we tend to forget something?
After some period of |
a51b7b37-1a9d-4164-a0a8-a059b3aeea25 | trentmkelly/LessWrong-43k | LessWrong | Normative reductionism
Here’s a concept that seems useful, but that I don’t remember ever hearing explicitly referred to (with my own tentative name for it—if it turns out to not already have one in some extensive philosophical literature, I might think more about whether it is a good name):
> Normative reductionism: The value of a world history is equal to the value of its parts (for some definition of relevant parts).
For instance, if two world histories only differ between time t and time t’, according to NR you do not need to know what happened at other times to evaluate them in full. Similarly, the value of Alice’s life, or the value of Alice enjoying a nap, depend on the nature of her life or the nap, and not for instance on other people’s lives or events that took place before she was born with no effect on her (unless perhaps she has preferences about those events or they involve people having preferences about her, but still the total value can be decomposed into the value of different preferences being fulfilled or not). Straightforward hedonistic utilitarianism probably implies normative reductionism.
My impression is that people have different intuitions about this and vary in how much they assume it, and that it mostly isn’t entirely aligned with other axes of ethical view, either logically or sociologically, though is related to them. So it seems maybe worth noting explicitly. |
85b6f214-4f27-4a55-addf-bed4dd76d60c | trentmkelly/LessWrong-43k | LessWrong | Sleeping Beauty gets counterfactually mugged
Related to: Counterfactual Mugging, Newcomb's Problem and Regret of Rationality
Omega is continuing his eternal mission: To explore strange new philosophical systems... To seek out new paradoxes and new counterfactuals... To boldly go where no decision theory has gone before.
In his usual totally honest, quasi-omniscient, slightly sadistic incarnation, Omega has a new puzzle for you, and it involves the Sleeping Beauty problem as a bonus.
He will offer a similar deal to that in the counterfactual mugging: he will flip a coin, and if it comes up tails, he will come round and ask you to give him £100.
If it comes up heads, instead he will simulate you, and check whether you would give him the £100 if asked (as usual, the use of randomising device in the decision is interpreted as a refusal). From this counterfactual, if you would give him the cash, he’ll send you £260; if you wouldn’t, he’ll give you nothing.
Two things are different from the original setup, both triggered if the coin toss comes up tails: first of all, if you refuse to hand over any cash, he will give you an extra £50 compensation. Second of all, if you do give him the £100, he will force you to take a sedative and an amnesia drug, so that when you wake up the next day, you will have forgotten about the current day. He will then ask you to give him the £100 again.
To keep everything fair and balanced, he will feed you the sedative and the amnesia drug whatever happens (but will only ask you for the £100 a second time if you accepted to give it to him the first time).
Would you want to precommit to giving Omega the cash, if he explained everything to you? The odds say yes: precommitting to accepting to hand over the £100 will give you an expected return of 0.5 x £260 + 0.5 x (-£200) = £30, while precommitting to a refusal gives you an expected return of 0.5 x £0 + 0.5 x £50 = £25.
But now consider what happens at the moment when he actually asks you for the cash.
A standard way to approach the |
9867a3fc-9c66-4fe2-9083-eb2e409de593 | trentmkelly/LessWrong-43k | LessWrong | [Linkpost] Rationally awake
This is an essay I wrote to try to better understand rationality for myself. Towards the end of the post I try to extract out some practical implications of the analysis. I hope it is useful for you.
Rationally awake
In rationally logical, we explored logical thought - an important part of rationality that lets us split the world into pieces. To continue developing our understanding of the rational, we need to examine another key concept - reason.
To be rational, your actions need to be grounded in reason. A reason itself must sit ontop of other reasons. At lunch you eat a sandwich because you are hungry, but your hunger is not the final causal explanation. Though we cut off the analysis, there is a reason for your hunger, and a reason for that reason. This stack of reasons leads back to the ultimate reason or purpose of your existence - your telos. By this line of argument, rationality in its finality must involve acting in line with your telos.
Rationality and knowledge are themselves connected through reason. When we make use of knowledge and act on it, we are acting for a reason. Therefore rational behaviour is in part acting on the knowledge we have available.
Imagine I have a difficult decision to make with incomplete information available. In this circumstance, it can be rational for me to follow my gut, even though I can't explain why. For this to be rational, I must be making use of some inexplicit knowledge. We assume that explicit knowledge is the only type of valid knowledge, but this is wrong. Knowledge develops from the subconscious into the explicit. Initially it manifests as behaviours acted out physically as imitation and play, or experienced mentally as emotions or curiosity. The knowledge hasn't been sufficiently understood and generalised to be articulated yet. It then evolves into narrative, expressed in art and culture - drama, myth, literature, symbols and dreams. Finally the ideas can emerge more explicitly in philosophy, science and log |
eb8ab1ad-2086-456a-9f50-f9fce17881df | StampyAI/alignment-research-dataset/arxiv | Arxiv | Safe Reinforcement Learning with Model Uncertainty Estimates
I Introduction
---------------
Reinforcement learning (RL) is used to produce state-of-the-art results in manipulation, motion planning and behavior prediction. However, the underlying neural networks often lack the capability to produce qualitative predictive uncertainty estimates and tend to be overconfident on out-of-distribution test data [Amodei\_2016, Lakshmi\_2016, Hendrycks\_2017]. In safety-critical tasks, such as collision avoidance of cars or pedestrians, incorrect but confident predictions of unseen data can lead to fatal failure [Tesla\_2016]. We investigate methods for Safe RL that are robust to unseen observations and “know what they do not know” to be able to raise an alarm in unpredictable test cases; ultimately leading to safer actions.
A particularly challenging safety-critical task is avoiding pedestrians in a campus environment with an autonomous shuttle bus or rover [Miller\_2016, Navya\_2018]. Humans achieve mostly collision-free navigation by understanding the hidden intentions of other pedestrians and vehicles and interacting with them [Zheng\_2015, Helbing\_1995]. Furthermore, most of the time this interaction is accomplished without verbal communication. Our prior work uses RL to capture the hidden intentions and achieve collaborative navigation around pedestrians [Chen\_2016, Chen\_2017, Everett\_2018]. However, RL approaches always face the problem of generalizability from simulation to the real world and cannot guarantee performance on far-from-training test data. An example policy that has only been trained on collaborative pedestrians could fail to generalize to uncollaborative pedestrians in the real world. The trained policy would output a best guess policy that might assume collaborative behavior and, without labeling the novel observation, fail ungracefully. To avoid such failure cases, this paper develops a Safe RL framework for dynamic collision avoidance that expresses novel observations in the form of model uncertainty. The framework further reasons about the uncertainty and cautiously avoids regions of high uncertainty, as displayed in [Fig. 5](#S4.F5 "Fig. 5 ‣ IV-B2 Regional novelty detection ‣ IV-B Novelty detection in multi-dimensional observations ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates").
| | |
| --- | --- |
|
(a) Known obstacle, confident
|
(b) Unknown obstacle, cautious
|
Fig. 1: An agent (orange) is trained to avoid an obstacle (blue) as close as possible. The agent starts (dark orange) and chooses an initial heading action. While training, the agent is only confronted with obstacles on the right of the image (x>0) and learns to avoid them confidently close (a). The same agent is deployed to avoid an unknown obstacle on the left (b). Due to this unknown observation, the agent assigns a high uncertainty to the learned model and avoids the obstacle more cautiously.
Much of the existing Safe RL research has focused on using external novelty detectors or internal modifications to identify environment or model uncertainty [Garcia\_2015]. Note that our work targets model uncertainty estimates because they potentially reveal sections of the test data where training data was sparse and a model could fail to generalize [Gal\_2016Thesis]. Work in risk-sensitive RL (RSRL) often focuses on environment uncertainty to detect and avoid high-risk events that are known from training to have low probability but high cost [Geibel\_2006, Mihatsch\_2002, Shen\_2013, Tamar\_2015, Evendar\_2006]. Other work in RSRL targets model uncertainty in MDPs, but does not readily apply to neural networks [Chow\_2015, Mihatsch\_2002]. Our work is mainly orthogonal to risk-sensitive RL approaches and could be combined into an RL policy that is robust to unseen data and sensitive to high-risk events.
Extracting model uncertainty from discriminatively trained neural networks is complex, as the model outcome for a given observation is deterministic. Mostly, Bayesian neural networks are used to extract model uncertainty but require a significant restructuring of the network architecture [Neal\_1996]. Additionally, even approximate forms, such as Markov Chain Monte Carlo [Neal\_1996] or variational methods [Blundell\_2015, Graves\_2011, Louizos\_2016], come with extensive computational cost and have a sample-dependent accuracy [Neal\_1996, Lakshmi\_2016, Springenberg\_2016]. Our work uses Monte Carlo Dropout (MC-Dropout) [Gal\_2015] and bootstrapping [Osband\_2016] to give parallelizable and computationally feasible uncertainty estimates of the neural network without significantly restructuring the network architecture [Dropout\_2014, Bootstrap\_1995].
The main contributions of this work are i) an algorithm that identifies novel pedestrian observations and ii) avoids them more cautiously and safer than an uncertainty-unaware baseline, iii) an extension of an existing uncertainty-aware reinforcement learning framework [Kahn\_2017] to more complex dynamic environments with exploration aiding methods, and iv) a demonstration in a simulation environment.
Ii Related Work
----------------
This section investigates related work in Safe Reinforcement Learning to develop a dynamic collision avoidance policy that is robust to out-of-data observations.
###
Ii-a External verification and novelty detection
Many related works use off-policy evaluation or external novelty detection to verify the learned RL policy [Richter\_2017, Long\_2018, Garcia\_2015]. Reachability analysis could verify the policy by providing regional safety bounds, but the bounds would be too conservative in a collaborative pedestrian environment [Lygeros\_1999, Majumdar\_2016, Perkins\_2003]. Novelty detection approaches place a threshold on the detector’s output and switch to a safety controller if the threshold is exceeded. This requires the knowledge of a safety controller that can act in a complex collaborative pedestrian environment. Moreover, there is no known mechanism of gradually switching from an RL policy to a safety controller, because the latter has no knowledge about the RL’s decision-making process. An example failure case would be a pedestrian in front of a robot, that is planned to be avoided to the left by the RL and to the right by a safety controller. An interpolation could collide in the middle [Amini\_2017]. In our framework, the understanding of pedestrian behavior and knowledge of uncertainty is combined to allow a vehicle to stay gradually further away from unpredictable and uncertain regions, as seen in [Fig. 3](#S4.F3 "Fig. 3 ‣ IV-A Regional novelty detection in 1D ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates").
###
Ii-B Environment and model uncertainty
This paper focuses on detecting novel observations via model uncertainty, also known as parametric or epistemic uncertainty [Kendall\_2017]. The orthogonal concept of environment uncertainty does not detect out-of-data points as it captures the uncertainty due to the imperfect nature of partial observations [Gal\_2016Thesis]. For example, an observation of a pedestrian trajectory will, even with infinite training in the real-world, not fully capture the decision-making process of pedestrians and thus be occasionally ambiguous; will she turn left or right? The RL framework accounts for the unobservable decision ambiguity by learning a mean outcome [Gal\_2016Thesis]. Model uncertainty, in comparison, captures how well a model fits all possible observations from the environment. It could be explained away with infinite observations and is typically high in applications with limited training data, or with test data that is far from the training data [Gal\_2016Thesis]. Thus, the model uncertainty captures cases in which a model fails to generalize to unseen test data and hints when one should not trust the network predictions [Gal\_2016Thesis].
###
Ii-C Measures of model uncertainty
A new topic calculates approximations of Bayesian inference without significantly changing the neural network’s architecture. Bootstrapping has been explored to generate approximate uncertainty measures to guide exploration [Osband\_2016]. By training an ensemble of networks on partially overlapping dataset samples they agree in areas of common data and disagree, and have a large sample variance, in regions of uncommon data [Lakshmi\_2016, Osband\_2016]. Dropout can be interpreted similarly, if it is activated during test-time, and has been shown to approximate Bayesian inference in Gaussian processes [Dropout\_2014, Gal\_2015]. An alternative approach uses a Hypernet, a network that learns the weights of another network to directly give parameter uncertainty values, but was shown to be computationally too expensive [Pawlowski\_2017]. An innovative, but controversial, approach claims to retrieve Bayesian uncertainty estimates via batch normalization [Teye\_2018]. This work uses MC-Dropout and bootstrapping to give computationally tractable uncertainty estimates.
###
Ii-D Applications of model uncertainty in RL
Measures of model uncertainty have been used in RL very recently to speed up training by guiding the exploration into regions of high uncertainty [Thompson\_1933, Osband\_2016, Liu\_2017]. Kahn et al. used uncertainty estimates in model-based RL for static obstacle collision avoidance [Kahn\_2017]. Instead of a model-based RL approach, one could argue to use model-free RL and draw the uncertainty of an optimal policy output π∗=argmaxπ(Q). However, the uncertainty estimate would contain a mix from the uncertainties of multiple objectives and would not focus on the uncertain region of collision. Our work extends the model-based framework by [Kahn\_2017] to the highly complex domain of pedestrian collision avoidance. [Kahn\_2017] is further extended by using the uncertainty estimates for guided exploration to escape locally optimal policies, analyzing the regional increase of uncertainty in novel dynamic scenarios, using LSTMs and acting goal-guided.
Iii Approach
-------------

Fig. 2: System architecture. An agent observes the environment and selects minimal cost motion primitives u∗ to reach a goal while avoiding collisions. On each time step, an ensemble of LSTM networks is sampled multiple times with different dropout masks to acquire a sample mean and variance collision probability for each motion primitive u.
This work proposes an algorithm that uses uncertainty information to cautiously avoid dynamic obstacles in novel scenarios. As displayed in the system architecture in [Fig. 2](#S3.F2 "Fig. 2 ‣ III Approach ‣ Safe Reinforcement Learning with Model Uncertainty Estimates"), an agent observes a simulated obstacle’s position and velocity, and the goal. A set of Long-Short-Term-Memory (LSTM) [Hochreiter\_1997] networks predicts collision probabilities for a set of motion primitives u. MC-Dropout and bootstrapping are used to acquire a distribution over the predictions. From the predictions, a sample mean E(Pcoll) and variance Var(Pcoll) is drawn for each motion primitive. In parallel, a simple model estimates the time to goal tcoll at the end of each evaluated motion primitive. In the next stage, the minimal cost motion primitive u∗ is selected and executed for one step in the environment. The environment returns the next observation and at the end of an episode a collision label. After a set of episodes, the network weights W are adapted and the training process continues. Each section of the algorithm is explained in detail below.
###
Iii-a Collision Prediction Network
A set of LSTM networks (ensemble) estimates the probability P(coll|ut−l:t+h,ot−l:t) that a motion primitive ut:t+h would lead to a collision in the next h time steps, given the history of observations ot−l:t and past actions ut−l:t. The observations of duration l contain the past and current relative goal position and a pedestrian’s position, velocity and radius. Each motion primitive of length h is a straight line, described through a heading angle and speed. The optimal motion primitive is taken for one time step until the network is queried again.
LSTM networks are chosen for the dynamic obstacle avoidance, because they are the state-of-the-art model in predicting pedestrian paths by understanding the hidden temporal intentions of pedestrians best [Alahi\_2016\_CVPR, Vemula\_2017]. Based on this success, the proposed work first applies LSTMs to pedestrian avoidance in an RL setting. For safe avoidance, LSTM predictions need to be accurate from the first time step a pedestrian is observed in the robot’s field of view. To handle the variable length observation input, masking [Che\_2018] is used during training and test to deactivate LSTM cells that exceed the length of the observation history.
###
Iii-B Uncertainty Estimates with MC-Dropout and Bootstrapping
MC-Dropout [Gal\_2015] and bootstrapping [Osband\_2016, Lakshmi\_2016] are used to compute stochastic estimates of the model uncertainty Var(Pcoll). For bootstrapping, multiple networks are trained and stored in an ensemble. Each network is randomly initialized and trained on sample datasets that have been drawn with replacement from a bigger experience dataset [Osband\_2016]. By being trained on different but overlapping sections of the observation space, the network predictions differ for uncommon observations and are similar for common observations. As each network can be trained and tested in parallel, bootstrapping does not come with significant computational cost and can be run on a real robot.
Dropout [Dropout\_2014] is traditionally used for regularizing networks. It randomly deactivates network units in each forward pass by multiplying the unit weights with a dropout mask. The dropout mask is a set of Bernoulli random variables of value [0,1] and a keeping probability p. Traditionally, dropout is deactivated during test and each unit is multiplied with p. However, [Gal\_2015] has shown that an activation of dropout during test, named MC-Dropout, gives model uncertainty estimates by approximating Bayesian inference in deep Gaussian processes. To retrieve the model uncertainty with dropout, our work executes multiple forward passes per network in the bootstrapped ensemble with different dropout masks and acquires a distribution over predictions. Although dropout has been seen to be overconfident on novel observations [Osband\_2016], [Table I](#S4.T1 "TABLE I ‣ IV-B3 Novel scenario identification with uncertainty ‣ IV-B Novelty detection in multi-dimensional observations ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates") shows that the combination of bootstrapping and dropout reliably detects novel scenarios.
From the parallelizable collision predictions from each network and each dropout mask, the sample mean and variance is drawn.
###
Iii-C Selecting actions
A Model Predictive Controller (MPC) selects the safest motion primitive with the minimal joint cost:
| | | | |
| --- | --- | --- | --- |
| | | u⋆t:t+h=argminu∈U(λvVar(Pcoll)+λcE(Pcoll)+λgtgoal) | |
The chosen MPC that considers the second order moment of probability [Lee\_2017, Theodorou\_2010, Kahn\_2017] is able to select actions that are more certainly safe. The MPC estimates the time-to-goal tgoal from the end of each motion primitive by measuring the straight line distance. Each cost term is weighted by its own factor λ. Note that the soft constraint on collision avoidance requires λg and λc to be chosen such that the predicted collision cost is greater than the goal cost. In comparison to [Kahn\_2017], this work does not multiply the variance term with the selected velocity. The reason being is that simply stopping or reducing one’s velocity is not always safe, for example on a highway scenario or in the presence of adversarial agents. The proposed work instead focuses on identifying and avoiding uncertain observations regionally in the ground plane.
###
Iii-D Adaptive variance
Note that during training an overly uncertainty-averse model would discourage exploration and rarely find the optimal policy. Additionally, the averaging during prediction reduces the ensemble’s diversity, which additionally hinders explorative actions. The proposed approach increases the penalty on highly uncertain actions λv over time to overcome this effect. Thus, the policy efficiently explores in directions of high model uncertainty during early training phases; λv is brought to convergence to act uncertainty-averse during execution.
###
Iii-E Collecting the dataset
The selected action is executed in the learning environment. The environment returns the next observation and a collision label. The motion primitive decision history is labeled with 1 or 0 if a collision occurred. Several episodes are executed and the observation-action history stored in an experience dataset. Random subsets from the full experience set are drawn to train the ensemble of networks for the next observe-act-train cycle. The policy roll-out cycle is necessary to learn how dynamic obstacles will react to the agent’s learned policy. A supervised learning approach, as taken in [Richter\_2017] for static obstacle avoidance, would not learn the reactions of environment agents on the trained policy.
Iv Results
-----------
We show that our algorithm uses uncertainty information to regionally detect novel obstacle observations and causes fewer collisions than an uncertainty-unaware baseline. First, a simple 1D case illustrates how the model regionally identifies novel obstacle observations. In a scaled up environment with novel multi-dimensional observations, the proposed model continues to exhibit regionally increased uncertainty values. The model is compared with an uncertainty-unaware baseline in a variety of novel scenarios; the proposed model performs more robust to novel data and causes fewer collisions.
###
Iv-a Regional novelty detection in 1D
First, we show that model uncertainty estimates are able to detect novel one-dimensional observations regionally, as seen in [Fig. 3](#S4.F3 "Fig. 3 ‣ IV-A Regional novelty detection in 1D ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates"). For the 1D test-case, a two-layer fully-connected network with MC-Dropout and Bootstrapping is trained to predict collision labels. To generate the dataset, an agent randomly chose heading actions, independent of the obstacle observations, and the environment reported the collision label. The network input is the agent heading angle and obstacle heading. Importantly, the training set only contains obstacles that are on the right-hand side of the agent (top plot:x>0).
After training, the network accurately predicts collision and no-collision labels with low uncertainty for obstacle observations from the training distribution, as seen in [Fig. 2(a)](#S4.F2.sf1 "(a) ‣ Fig. 3 ‣ IV-A Regional novelty detection in 1D ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates"). For out-of-training obstacle observations on the agent’s left (bottom plot: x<0), the neural network fails to generalize and predicts collision (red) as well as non-collision (green) labels for actions (straight lines) that would collide with the obstacle (blue). However, the agent identifies regions of high model uncertainty (left: y-axis, right: light colors) for actions in the direction of the unseen obstacle. The high uncertainty values suggest that the network predictions are false-positives and should not to be trusted. Based on the left-right difference in uncertainty estimates, the MPC would prefer a conservative action that is certainly safe (bottom-right: dark green lines) over a false-positive action that is predicted to be safe but uncertain (bottom-right: light green lines).
| | |
| --- | --- |
|
(a) Known obstacle: low uncertainty
|
(b) Unseen obstacle: high uncertainty
|
Fig. 3: Regional novelty detection in 1D. A simple network predicts collision (red) and no-collision (green) labels, given the agent’s (orange) heading (left plot: x-axis) and a one-dimensional observation of an obstacle (blue) heading. The network accurately predicts labels with low uncertainty, when tested on the training dataset (a) . When tested on a novel observation set (b), the networks fails to predict accurate decision labels, but identifies them with a high regional uncertainty (bottom-left: green points with high values, bottom-right: light green lines). Rather than believing in the false-positive collision predictions, an agent would take a certainly safe action (dark green) to cautiously avoid the novel obstacle.
###
Iv-B Novelty detection in multi-dimensional observations
The following experiments show that our model continues to regionally identify uncertainty in multi-dimensional observations and choose safer actions.
####
Iv-B1 Experiment setup
A one-layer 16-unit LSTM model has been trained in a gym [Gym\_2016] based simulation environment with one agent and one dynamic obstacle. The dynamic obstacle in the environment is capable of following a collaborative RVO [Berg\_2009], GA3C-CADRL [Everett\_2018], or non-cooperative or static policy. For the analyzed scenarios, the agent was trained with obstacles that follow an RVO policy and are observed as described in [Section III](#S3 "III Approach ‣ Safe Reinforcement Learning with Model Uncertainty Estimates"). The training process took 20 minutes on a low-compute amazon AWS c5.large Intel Xeon Platinum 8124M with 2vCPUs and 4GiB memory and one hundred stochastic forward passes with dropout and bootstrapping per step take in average 32ms. The train and execution time could be further decreased by parallelizing the computation on GPUs.
In the test setup, observations of obstacles are manipulated to create scenarios with novel observations that could break the trained model. In one scenario, sensor noise is simulated by adding Gaussian noise ∼N(μ=0m,σ=.5m) on the observation of position and velocity. In another scenario, observations are randomly dropped with a probability of 20%. In a third and fourth scenario that simulate sensor failure, the obstacle position and velocity is masked, respectively. None of the manipulations were applied at training time.
####
Iv-B2 Regional novelty detection
[Figure 4](#S4.F4 "Fig. 4 ‣ IV-B2 Regional novelty detection ‣ IV-B Novelty detection in multi-dimensional observations ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates") shows that the proposed model continues to regionally identify novel obstacle observations in a higher dimensional observation space. In the displayed experiment, an uncertainty-aware agent (orange) observes a dynamic obstacle (blue) with newly added noise and evaluates actions to avoid it. The collision predictions for actions in the direction of the obstacle (light green lines) have higher uncertainty than for actions into free-space (dark green lines). The difference in the predictive uncertainties from left to right, although being stochastic and not perfectly smooth, is used by the MPC to steer the agent away from the noisy obstacle and cautiously avoid it without a collision (orange/yellow line). [Figure 4(b)](#S4.F4.sf2 "(b) ‣ Fig. 5 ‣ IV-B2 Regional novelty detection ‣ IV-B Novelty detection in multi-dimensional observations ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates") shows the full trajectory of the uncertainty-aware agent and illustrates how an uncertainty-unaware agent in [Fig. 4(a)](#S4.F4.sf1 "(a) ‣ Fig. 5 ‣ IV-B2 Regional novelty detection ‣ IV-B Novelty detection in multi-dimensional observations ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates") with same speed and radius fails to generalize to the novel noise and collides with the obstacle after five time steps.

Fig. 4: Regional identification of uncertainty. An uncertainty-aware agent (orange) avoids a dynamic obstacle (blue) that is observed with noise. At one time step, collision predictions for actions in the direction of the obstacle (light green lines) are assigned a higher uncertainty than for actions in free space (dark green lines). The agent selects an action with low uncertainty to cautiously avoid the obstacle.
| | |
| --- | --- |
|
(a) uncertainty-unaware
|
(b) uncertainty-aware
|
Fig. 5: Cautious avoidance in novel scenarios. An agent (orange) is trained to avoid dynamic RVO agents (blue) that are observed without noise. On test, Gaussian noise is added to the observation and an uncertainty-unaware model in [Fig. 4(a)](#S4.F4.sf1 "(a) ‣ Fig. 5 ‣ IV-B2 Regional novelty detection ‣ IV-B Novelty detection in multi-dimensional observations ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates") fails to generalize and causes a collision. The proposed uncertainty-aware agent in [Fig. 4(b)](#S4.F4.sf2 "(b) ‣ Fig. 5 ‣ IV-B2 Regional novelty detection ‣ IV-B Novelty detection in multi-dimensional observations ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates") acts more cautiously on novel observations and avoids the obstacle successfully.
####
Iv-B3 Novel scenario identification with uncertainty
[Table I](#S4.T1 "TABLE I ‣ IV-B3 Novel scenario identification with uncertainty ‣ IV-B Novelty detection in multi-dimensional observations ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates") shows that overall model uncertainty is high in every of the tested novel scenarios, including the illustrated case of added noise. The measured uncertainty is the sum of variance of the collision predictions for each action at one time step. The uncertainty values have been averaged over 20 sessions with random initialization, 50 episodes and all time steps until the end of each episode. As seen in [Table I](#S4.T1 "TABLE I ‣ IV-B3 Novel scenario identification with uncertainty ‣ IV-B Novelty detection in multi-dimensional observations ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates") the uncertainty in a test set of the training distribution is relatively low. All other scenarios cause higher uncertainty values and the relative magnitude of the uncertainty values can be interpreted as how novel the set of observations is for the model, in comparison to the training case.
| | Training | Added noise | Dropped observations | Masked vel. info. | Masked pos. info. |
| --- | --- | --- | --- | --- | --- |
| E(Var(Pcoll)) | 0.363 | 0.820 | 1.93 | 1.37 | 2.41 |
| σ(Var(Pcoll)) | 0.0330 | 0.0915 | 0.134 | 0.0693 | 0.0643 |
TABLE I: Increased uncertainty in novel scenarios. In each of four novel test scenarios, the uncertainty of collision predictions is higher than on samples from the seen training distribution.

Fig. 6: Fewer collisions in novel cases. The proposed uncertainty-aware model (red) causes fewer collisions than the uncertainty-unaware baseline (blue) in novel cases. Through the regional increase of uncertainty in the obstacle’s direction, the model prefers actions that more cautiously avoids the obstacle than the baseline.
####
Iv-B4 Fewer collisions in novel scenarios
The proposed model uses the uncertainty information to act more cautiously and be more robust to novel scenarios. [Figure 6](#S4.F6 "Fig. 6 ‣ IV-B3 Novel scenario identification with uncertainty ‣ IV-B Novelty detection in multi-dimensional observations ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates") shows that this behavior causes fewer collisions during the novel scenarios than an uncertainty-unaware baseline. The proposed model (red) and the baseline (blue) perform similarly well on samples from the training distribution. In the test scenarios of added noise, masked position and masked velocity information, the proposed model causes fewer collisions and is more robust to the novel class of observations. In the case of dropped observations, both models perform similarly well, in terms of collisions, but the uncertainty-unaware model was seen to take longer to reach the goal. The baseline model has been trained with the same hyperparameters in the same environment except that the variance penalty λv is set to zero.
####
Iv-B5 Generalization to other novel scenarios
In all demonstrated cases one could have found a model that generalizes to noise, masked position observations, etc. However, one cannot design a simulation that captures all novel scenarios that could occur in real life. A significantly novel event should be recognized with a high model uncertainty. In the pedestrian avoidance task, novel observations might be uncommon pedestrian behavior. But really all forms of observations that are novel to the deployed model should be identified and reacted upon by driving more cautiously. The shown results suggest that model uncertainty is able to identify such observations and that the MPC selects actions with extra buffer space to avoid these pedestrians cautiously.
###
Iv-C Using uncertainty to escape local minima
This work increases the variance penalty λv to avoid getting stuck in local minima of the MPC optimization during the training process. [Figure 7](#S4.F7 "Fig. 7 ‣ IV-C Using uncertainty to escape local minima ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates") shows that the proposed algorithm with increasing λv can escape a local minimum by encouraging explorative actions in the early stages of training. For the experiment, an agent (orange) was trained to reach a goal (star) that is blocked by a static obstacle (blue) by continuously selecting an action (left plot). In an easy avoidance case, the obstacle is placed further away from the agent’s start position (in dark orange); in a challenging case closer to the agent. A close obstacle is challenging, as the agent is initially headed into the obstacle direction and needs to explore avoiding actions. The collision estimates of the randomly initialized networks are uninformative in early training stages and the goal cost drives the agent into the obstacle. A negative variance penalty λv in early stages forces the agent to explore actions away from the goal and avoid getting stuck in a local minimum.
[Figure 7](#S4.F7 "Fig. 7 ‣ IV-C Using uncertainty to escape local minima ‣ IV Results ‣ Safe Reinforcement Learning with Model Uncertainty Estimates") displays that, in the challenging training case, the agent with a constant λv fails to explore and the algorithm gets stuck in a bad local minimum (bottom-right plot: blue), where 80% of the runs end in a collision. The policy with an increasing λv, and the same hyperparameters (bottom-right plot: red), is more explorative in early stages and converges to a lower minimum in an average of five sessions. In the easy test case, both algorithms perform similarly well and converge to a policy with near-zero collisions (top-right plot).

Fig. 7: Escaping local minima. The training process of two policies with a constant penalty on uncertain actions λv(blue) and with an increasing λv(red) are compared. In an easy avoidance case (right-top), both policies find a good policy that leads to near-zero collisions (y-axis). In a more challenging avoidance case (right-bottom), the proposed increasing λv policy, that explores in early stages, finds a better minimum than with a constant λv.
V Discussion and Future Work
-----------------------------
###
V-a Accurately calibrated model uncertainty estimates
In another novel scenario, an agent was trained to avoid collaborative RVO agents and tested on uncollaborative agents. The uncertainty values did not significantly increase, which can be explained by two reasons. First, uncollaborative agents could not be seen as novel for the model; possibly, because RVO agents, further away from the agent also act in a straight line. The fact that humans think that uncollaborative agents might be novel for a model that has only been trained on collaborative agents, does not change the fact that the model might be generalizable enough to not see it as novel. Another explanation is the observed overconfidence of dropout as an uncertainty estimate. Future work will find unrevealed estimates of model uncertainty for neural networks that provide stronger guarantees on the true model uncertainty.
Vi Conclusion
--------------
This work has developed a Safe RL framework with model uncertainty estimates to cautiously avoid dynamic obstacles in novel scenarios. An ensemble of LSTM networks was trained with dropout and bootstrapping to estimate collision probabilities and gain predictive uncertainty estimates. The magnitude of the uncertainty estimates was shown to reveal novelties in a variety of scenarios, indicating that the model ”knows what it does not know”. The regional uncertainty increase in the direction of novel obstacle observations is used by an MPC to act more cautious in novel scenarios. The cautious behavior made the uncertainty-aware framework more robust to novelties and safer than an uncertainty-unaware baseline. This work is another step towards opening up the vast capabilities of deep neural networks for the application in safety-critical tasks.
Acknowledgment
--------------
This work is supported by Ford Motor Company. The authors want to thank Golnaz Habibi for insightful discussions. |
5e8bf587-20d6-4221-a8d7-ec480b6ee5c1 | trentmkelly/LessWrong-43k | LessWrong | How to save (a lot of) money on flying
I was going to wait to post this for reasons, but realized that was pretty dumb when the difference of a few weeks could literally save people hundreds, if not thousands of collective dollars.
If you fly regularly (or at all), you may already know about this method of saving money. The method is quite simple: instead of buying a round-trip ticket from the airline or reseller, you hunt down much cheaper one-way flights with layovers at your destination and/or your point of origin. Skiplagged is a service that will do this automatically for you, and has been in the news recently because the creator was sued by United Airlines and Orbitz. While Skiplagged will allow you to click-through to purchase the one-way ticket to your destination, they have broken or disabled the functionality of the redirect to the one-way ticket back (possibly in order to raise more funds for their legal defense). However, finding the return flight manually is fairly easy as the provide all the information to filter for it on other websites (time, airline, etc). I personally have benefited from this - I am flying to Texas from Southern California soon, and instead of a round-trip ticket which would cost me about $450, I spent ~$180 on two one-way tickets (with the return flight being the "layover" at my point-of-origin). These are, perhaps, larger than usual savings; I think 20-25% is more common, but even then it's a fairly significant amount of money.
Relevant warnings by gwillen:
> You should be EXTREMELY CAREFUL when using this strategy. It is, at a minimum, against airline policy.
>
> If you have any kind of airline status or membership, and you do this too often, they will cancel it. If you try to do this on a round-trip ticket, they will cancel your return. If the airlines have any means of making your life difficult available to them, they WILL use it.
>
> Obviously you also cannot check bags when using this strategy, since they will go to the wrong place (your ostensi |
37010a43-d996-4d74-b9d2-2cc3281ec972 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Can we build AI without losing control over it? | Sam Harris
I'm going to talk
about a failure of intuition
that many of us suffer from.
It's really a failure
to detect a certain kind of danger.
I'm going to describe a scenario
that I think is both terrifying
and likely to occur,
and that's not a good combination,
as it turns out.
And yet rather than be scared,
most of you will feel
that what I'm talking about
is kind of cool.
I'm going to describe
how the gains we make
in artificial intelligence
could ultimately destroy us.
And in fact, I think it's very difficult
to see how they won't destroy us
or inspire us to destroy ourselves.
And yet if you're anything like me,
you'll find that it's fun
to think about these things.
And that response is part of the problem.
OK? That response should worry you.
And if I were to convince you in this talk
that we were likely
to suffer a global famine,
either because of climate change
or some other catastrophe,
and that your grandchildren,
or their grandchildren,
are very likely to live like this,
you wouldn't think,
"Interesting.
I like this TED Talk."
Famine isn't fun.
Death by science fiction,
on the other hand, is fun,
and one of the things that worries me most
about the development of AI at this point
is that we seem unable to marshal
an appropriate emotional response
to the dangers that lie ahead.
I am unable to marshal this response,
and I'm giving this talk.
It's as though we stand before two doors.
Behind door number one,
we stop making progress
in building intelligent machines.
Our computer hardware and software
just stops getting better for some reason.
Now take a moment
to consider why this might happen.
I mean, given how valuable
intelligence and automation are,
we will continue to improve our technology
if we are at all able to.
What could stop us from doing this?
A full-scale nuclear war?
A global pandemic?
An asteroid impact?
Justin Bieber becoming
president of the United States?
(Laughter)
The point is, something would have to
destroy civilization as we know it.
You have to imagine
how bad it would have to be
to prevent us from making
improvements in our technology
permanently,
generation after generation.
Almost by definition,
this is the worst thing
that's ever happened in human history.
So the only alternative,
and this is what lies
behind door number two,
is that we continue
to improve our intelligent machines
year after year after year.
At a certain point, we will build
machines that are smarter than we are,
and once we have machines
that are smarter than we are,
they will begin to improve themselves.
And then we risk what
the mathematician IJ Good called
an "intelligence explosion,"
that the process could get away from us.
Now, this is often caricatured,
as I have here,
as a fear that armies of malicious robots
will attack us.
But that isn't the most likely scenario.
It's not that our machines
will become spontaneously malevolent.
The concern is really
that we will build machines
that are so much
more competent than we are
that the slightest divergence
between their goals and our own
could destroy us.
Just think about how we relate to ants.
We don't hate them.
We don't go out of our way to harm them.
In fact, sometimes
we take pains not to harm them.
We step over them on the sidewalk.
But whenever their presence
seriously conflicts with one of our goals,
let's say when constructing
a building like this one,
we annihilate them without a qualm.
The concern is that we will
one day build machines
that, whether they're conscious or not,
could treat us with similar disregard.
Now, I suspect this seems
far-fetched to many of you.
I bet there are those of you who doubt
that superintelligent AI is possible,
much less inevitable.
But then you must find something wrong
with one of the following assumptions.
And there are only three of them.
Intelligence is a matter of information
processing in physical systems.
Actually, this is a little bit more
than an assumption.
We have already built
narrow intelligence into our machines,
and many of these machines perform
at a level of superhuman
intelligence already.
And we know that mere matter
can give rise to what is called
"general intelligence,"
an ability to think flexibly
across multiple domains,
because our brains have managed it. Right?
I mean, there's just atoms in here,
and as long as we continue
to build systems of atoms
that display more and more
intelligent behavior,
we will eventually,
unless we are interrupted,
we will eventually
build general intelligence
into our machines.
It's crucial to realize
that the rate of progress doesn't matter,
because any progress
is enough to get us into the end zone.
We don't need Moore's law to continue.
We don't need exponential progress.
We just need to keep going.
The second assumption
is that we will keep going.
We will continue to improve
our intelligent machines.
And given the value of intelligence --
I mean, intelligence is either
the source of everything we value
or we need it to safeguard
everything we value.
It is our most valuable resource.
So we want to do this.
We have problems
that we desperately need to solve.
We want to cure diseases
like Alzheimer's and cancer.
We want to understand economic systems.
We want to improve our climate science.
So we will do this, if we can.
The train is already out of the station,
and there's no brake to pull.
Finally, we don't stand
on a peak of intelligence,
or anywhere near it, likely.
And this really is the crucial insight.
This is what makes
our situation so precarious,
and this is what makes our intuitions
about risk so unreliable.
Now, just consider the smartest person
who has ever lived.
On almost everyone's shortlist here
is John von Neumann.
I mean, the impression that von Neumann
made on the people around him,
and this included the greatest
mathematicians and physicists of his time,
is fairly well-documented.
If only half the stories
about him are half true,
there's no question
he's one of the smartest people
who has ever lived.
So consider the spectrum of intelligence.
Here we have John von Neumann.
And then we have you and me.
And then we have a chicken.
(Laughter)
Sorry, a chicken.
(Laughter)
There's no reason for me to make this talk
more depressing than it needs to be.
(Laughter)
It seems overwhelmingly likely, however,
that the spectrum of intelligence
extends much further
than we currently conceive,
and if we build machines
that are more intelligent than we are,
they will very likely
explore this spectrum
in ways that we can't imagine,
and exceed us in ways
that we can't imagine.
And it's important to recognize that
this is true by virtue of speed alone.
Right? So imagine if we just built
a superintelligent AI
that was no smarter
than your average team of researchers
at Stanford or MIT.
Well, electronic circuits
function about a million times faster
than biochemical ones,
so this machine should think
about a million times faster
than the minds that built it.
So you set it running for a week,
and it will perform 20,000 years
of human-level intellectual work,
week after week after week.
How could we even understand,
much less constrain,
a mind making this sort of progress?
The other thing that's worrying, frankly,
is that, imagine the best case scenario.
So imagine we hit upon a design
of superintelligent AI
that has no safety concerns.
We have the perfect design
the first time around.
It's as though we've been handed an oracle
that behaves exactly as intended.
Well, this machine would be
the perfect labor-saving device.
It can design the machine
that can build the machine
that can do any physical work,
powered by sunlight,
more or less for the cost
of raw materials.
So we're talking about
the end of human drudgery.
We're also talking about the end
of most intellectual work.
So what would apes like ourselves
do in this circumstance?
Well, we'd be free to play Frisbee
and give each other massages.
Add some LSD and some
questionable wardrobe choices,
and the whole world
could be like Burning Man.
(Laughter)
Now, that might sound pretty good,
but ask yourself what would happen
under our current economic
and political order?
It seems likely that we would witness
a level of wealth inequality
and unemployment
that we have never seen before.
Absent a willingness
to immediately put this new wealth
to the service of all humanity,
a few trillionaires could grace
the covers of our business magazines
while the rest of the world
would be free to starve.
And what would the Russians
or the Chinese do
if they heard that some company
in Silicon Valley
was about to deploy a superintelligent AI?
This machine would be capable
of waging war,
whether terrestrial or cyber,
with unprecedented power.
This is a winner-take-all scenario.
To be six months ahead
of the competition here
is to be 500,000 years ahead,
at a minimum.
So it seems that even mere rumors
of this kind of breakthrough
could cause our species to go berserk.
Now, one of the most frightening things,
in my view, at this moment,
are the kinds of things
that AI researchers say
when they want to be reassuring.
And the most common reason
we're told not to worry is time.
This is all a long way off,
don't you know.
This is probably 50 or 100 years away.
One researcher has said,
"Worrying about AI safety
is like worrying
about overpopulation on Mars."
This is the Silicon Valley version
of "don't worry your
pretty little head about it."
(Laughter)
No one seems to notice
that referencing the time horizon
is a total non sequitur.
If intelligence is just a matter
of information processing,
and we continue to improve our machines,
we will produce
some form of superintelligence.
And we have no idea
how long it will take us
to create the conditions
to do that safely.
Let me say that again.
We have no idea how long it will take us
to create the conditions
to do that safely.
And if you haven't noticed,
50 years is not what it used to be.
This is 50 years in months.
This is how long we've had the iPhone.
This is how long "The Simpsons"
has been on television.
Fifty years is not that much time
to meet one of the greatest challenges
our species will ever face.
Once again, we seem to be failing
to have an appropriate emotional response
to what we have every reason
to believe is coming.
The computer scientist Stuart Russell
has a nice analogy here.
He said, imagine that we received
a message from an alien civilization,
which read:
"People of Earth,
we will arrive on your planet in 50 years.
Get ready."
And now we're just counting down
the months until the mothership lands?
We would feel a little
more urgency than we do.
Another reason we're told not to worry
is that these machines
can't help but share our values
because they will be literally
extensions of ourselves.
They'll be grafted onto our brains,
and we'll essentially
become their limbic systems.
Now take a moment to consider
that the safest
and only prudent path forward,
recommended,
is to implant this technology
directly into our brains.
Now, this may in fact be the safest
and only prudent path forward,
but usually one's safety concerns
about a technology
have to be pretty much worked out
before you stick it inside your head.
(Laughter)
The deeper problem is that
building superintelligent AI on its own
seems likely to be easier
than building superintelligent AI
and having the completed neuroscience
that allows us to seamlessly
integrate our minds with it.
And given that the companies
and governments doing this work
are likely to perceive themselves
as being in a race against all others,
given that to win this race
is to win the world,
provided you don't destroy it
in the next moment,
then it seems likely
that whatever is easier to do
will get done first.
Now, unfortunately,
I don't have a solution to this problem,
apart from recommending
that more of us think about it.
I think we need something
like a Manhattan Project
on the topic of artificial intelligence.
Not to build it, because I think
we'll inevitably do that,
but to understand
how to avoid an arms race
and to build it in a way
that is aligned with our interests.
When you're talking
about superintelligent AI
that can make changes to itself,
it seems that we only have one chance
to get the initial conditions right,
and even then we will need to absorb
the economic and political
consequences of getting them right.
But the moment we admit
that information processing
is the source of intelligence,
that some appropriate computational system
is what the basis of intelligence is,
and we admit that we will improve
these systems continuously,
and we admit that the horizon
of cognition very likely far exceeds
what we currently know,
then we have to admit
that we are in the process
of building some sort of god.
Now would be a good time
to make sure it's a god we can live with.
Thank you very much.
(Applause) |
3c6b8b99-3f89-4032-b854-4e41ab7e0c06 | trentmkelly/LessWrong-43k | LessWrong | Annual AGI Benchmarking Event
Metaculus is strongly considering organizing an annual AGI benchmarking event. Once a year, we’d run a benchmark or suite of benchmarks against the most generally intelligent AI systems available to us at the time, seeking to assess their generality and the overall shape of their capabilities. We would publicize the event widely among the AI research, policy, and forecasting communities.
Why?
We think this might be a good idea for several reasons:
* The event could provide a convening ground for the AI research community, helping it to arrive at a shared understanding of the current state of AGI research, and acting as a focal point for rational discussion on the future of AGI.
* An annual benchmarking event has advantages over static, run-any-time benchmarks when it comes to testing generality. Unless one constrains the training data and restricts the hard-coded knowledge used by systems under evaluation, developers may directly optimize for a static benchmark while building their systems, which makes static benchmarks less useful as measures of generality. With the annual format, we are free to change the tasks every year without informing developers of what they will be beforehand, thereby assessing what François Chollet terms developer-aware generalization.
* Frequent feedback improves performance in almost any domain; this event could provide a target for AGI forecasting that yields yearly feedback, allowing us to iterate on our approaches and hone our understanding of how to forecast the development of AGI.
* The capabilities of an AGI will not be completely boundless, so it’s interesting to ask what its strengths and limitations are likely to be. If designed properly, our benchmarks could give us clues as to what the “shape” of AGI capabilities may turn out to be.
How?
We're currently working on a plan, and are soliciting ideas and feedback from the community here. To guide the discussion, here are some properties we think the ideal benchmark should |
9f210610-afe4-4ec5-974d-6cb36b7cbf58 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Dallas - Fort Worth Less Wrong Meetup 5/13/12
Discussion article for the meetup : Dallas - Fort Worth Less Wrong Meetup 5/13/12
WHEN: 13 May 2012 01:00:00PM (-0500)
WHERE: America's Best Coffee, Arlington
Hello Dallas-Fort Worth LessWrongians! If you live in the area, and you haven't come out to meet us yet, you are missing out!
We currently have regular meetups every Sunday at America's Best Coffee in Arlington at 1 PM until 3 PM. We have gotten a good handful of people to show up to these events so far, and it has been very enjoyable and productive.
The current goal, or mission statement, of this group can be summarized as follows: "We want to first understand rationality, and then learn how to apply rationality to our daily lives. During our meet-ups we wish to take advantage of having a community over what can only be accomplished alone."
We look forward to you coming out and meeting the rest of the group. Message me to ask to join our google group: https://groups.google.com/forum/#!forum/dfw-lesswrong-meetup
Discussion article for the meetup : Dallas - Fort Worth Less Wrong Meetup 5/13/12 |
5a55c1e5-a3c2-4ec1-bdb8-d91ea3e3b1ae | trentmkelly/LessWrong-43k | LessWrong | Proposed rewrites of LW home page, about page, and FAQ
Proposed rewrites can be found here. Please suggest specific improvements in the comments!
Although long-time Less Wrong users don't pay much attention to the home page, about page, and FAQ, I suspect new users pay lots of attention to them. A few times, elsewhere on the internet, I've seen people describe their impression of Less Wrong that seemed primarily gleaned from these pages--they made generalizations about Less Wrong that didn't seem true to me, but might appear to be true if all one did was read the about page and FAQ.
The about page, in particular, is called out to every new visitor. Try visiting Less Wrong in incognito mode or private browsing (i.e. without your current cookies) to see what I'm referring to.
But the current set of "newcomer pages" isn't very good, in my opinion:
* Text is duplicated between the home page and the about page. There's plenty to say and link to without repeating ourselves.
* The first paragraph of the home page text has four links to Wikipedia articles and none to Less Wrong posts. These may be very good Wikipedia articles, but I tend to think that linking to actual Less Wrong posts is generally a better way to communicate what kind of site Less Wrong is than linking to Wikipedia.
* The home page text also makes references to the blog, discussion section, and meetups, which are already highlighted plenty in the brain image.
* I think the primary purpose of the about page should be to describe and link to lots of interesting Less Wrong posts. I think reading posts is probably best way to figure out what Less Wrong is about. If the smorgasboard of posts linked to from the about page is sufficiently varied and high-quality, I think that most users will be able to find at least a couple posts they really like. Right now this purpose isn't given much real estate. There is a sentence starting with the words "If you want a sampling of the content on the main blog...", but this sentence does little to describe the po |
50fc1ee1-21c1-4e87-9b49-c862a6acd01b | trentmkelly/LessWrong-43k | LessWrong | Open & Welcome Thread - December 2020
If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. The Open Thread sequence is here. |
d033b095-514c-4574-bb01-00e765aa04ce | StampyAI/alignment-research-dataset/arxiv | Arxiv | Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks
1 Introduction
---------------
###
1.1 Motivation
Deep neural networks (DNN) are ubiquitous in a growing number of domains ranging from computer vision to healthcare. State-of-the-art DNN models are typically overparameterized and contain more parameters than the size of the training dataset. It is well understood that in this overparameterized regime, DNNs are highly expressive and have the capacity to (over)fit arbitrary training datasets including pure noise [[56](#bib.bib56)]. Mysteriously however neural network models trained via simple algorithms such as stochastic gradient descent continue to predict well on yet unseen test data. In such over-parametrized scenarios there maybe infinitely many globally optimal network parameters consistent with the training data, the key challenge is to understand which network parameters (stochastic) gradient descent converges to and what are its properties. Indeed, a recent series of papers [[52](#bib.bib52), [56](#bib.bib56), [16](#bib.bib16)], suggest that solutions found by first order methods tend to have favorable generalization properties. As DNNs begin to be deployed in safety critical applications, the need for foundational understanding of their noise robustness and their unique prediction capabilities intensifies.
This paper focuses on an intriguing phenomena: overparameterized neural networks are surprisingly robust to label noise when first order methods with early stopping is used to train them. To observe this phenomena consider Figure [1](#S1.F1 "Figure 1 ‣ 1.1 Motivation ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") where we perform experiments on the MNIST data set. Here, we corrupt a fraction of the labels of the training data by assigning their label uniformly at random. We then fit a four layer model via stochastic gradient descent and plot various performance metrics in Figures [0(a)](#S1.F0.sf1 "(a) ‣ Figure 1 ‣ 1.1 Motivation ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") and [0(b)](#S1.F0.sf2 "(b) ‣ Figure 1 ‣ 1.1 Motivation ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). Figure [0(a)](#S1.F0.sf1 "(a) ‣ Figure 1 ‣ 1.1 Motivation ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") (blue curve) shows that indeed with a sufficiently large number of iterations the neural network does in fact perfectly fit the corrupted training data. However, Figure [0(a)](#S1.F0.sf1 "(a) ‣ Figure 1 ‣ 1.1 Motivation ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") also shows that such a model does not generalize to the test data (yellow curve) and the accuracy with respect to the ground truth labels degrades (orange curve). These plots clearly demonstrate that the model overfits with many iterations. In Figure [0(b)](#S1.F0.sf2 "(b) ‣ Figure 1 ‣ 1.1 Motivation ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") we repeat the same experiment but this time stop the updates after a few iterations (i.e. use early stopping). In this case the train accuracy degrades linearly (blue curve). However, perhaps unexpected, the test accuracy (yellow curve) remains high even with a significant amount of corruption. This suggests that with early stopping the model does not overfit and generalizes to new test data. Even more surprising, the train accuracy (orange curve) with respect to the ground truth labels continues to stay around %100 even when %50 of the labels are corrupted. That is, with early stopping overparameterized neural networks even correct the corrupted labels! These plots collectively demonstrate that overparameterized neural networks when combined with early stopping have unique generalization and robustness capabilities. As we detail further in Section [4](#S4 "4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") this phenomena holds (albeit less pronounced) for reacher data models and architectures.
| | |
| --- | --- |
|
Fraction of labels corrupted (%)
Accuracy (%)
(a) Trained model after many iterations
|
Fraction of labels corrupted (%)
Accuracy (%)
(b) Trained model with early stopping
|
Figure 1: In these experiments we use a 4 layer neural network consisting of two convolution layers followed by two fully-connected layers to train a data set of 50,000 samples from MNIST with various amounts of random corruption on the lables. In this architecture the convolutional layers have width 64 and 128 kernels, and the fully-connected layers have 256 and 10 outputs, respectively. Overall, there are 4.8 million trainable parameters. We depict the training accuracy both w.r.t. the corrupted and uncorrupted labels as well as the test accuracy. (a) Shows the performance after 200 epochs of Adadelta where near perfect fitting to the corrupted data is achieved. (b) Shows the performance with early stopping. We observe that with early stopping the trained neural network is robust to label corruption.
This paper aims to demystify the surprising robustness of overparameterized neural networks when early stopping is used. We show that gradient descent is indeed provably robust to noise/corruption on a constant fraction of the labels in such over-parametrized learning scenarios. In particular, under a fairly expressive dataset model and focusing on one-hidden layer networks, we show that after a few iterations (a.k.a. *early stopping*), gradient descent finds a model (i) that is within a small neighborhood of the point of initialization and (ii) only fits to the correct labels essentially ignoring the noisy labels. We complement these findings by proving that if the network is trained to overfit to the noisy labels, then the solution found by gradient descent must stray rather far from the initial model. Together, these results highlight the key features of a solution that generalizes well vs a solution that fits well.
Our theoretical results further highlight the role of the distance between final and initial network weights as a key feature that determines noise robustness vs. overfitting. This is inherently connected to the commonly used early stopping heuristic for DNN training as this heuristic helps avoid models that are too far from the point of initialization. In the presence of label noise, we show that gradient descent implicitly ignores the noisy labels as long as the model parameters remain close to the initialization. Hence, our results help explain why early stopping improves robustness and helps prevent overfitting. Under proper normalization, the required distance between the final and initial network and the predictive accuracy of the final network is independent of the size of the network such as number of hidden nodes. Our extensive numerical experiments corroborate our theory and verify the surprising robustness of DNNs to label noise. Finally, we would like to note that while our results show that solutions found by gradient descent are inherently robust to label noise, specialized techniques such as ℓ1 penalization or sample reweighting are known to further improve robustness. Our theoretical framework may enable more rigorous understandings of the benefits of such heuristics when training overparameterized models.
###
1.2 Prior Art
Our work is connected to recent advances on theory for deep learning as well as heuristics and theory surrounding outlier robust optimization.
Robustness to label corruption: DNNs have the ability to fit to pure noise [[56](#bib.bib56)], however they are also empirically observed to be highly resilient to label noise and generalize well despite large corruption [[44](#bib.bib44)]. In addition to early stopping, several heuristics have been proposed to specifically deal with label noise [[42](#bib.bib42), [36](#bib.bib36), [57](#bib.bib57), [47](#bib.bib47), [30](#bib.bib30), [26](#bib.bib26)]. See also [[23](#bib.bib23), [37](#bib.bib37), [43](#bib.bib43), [48](#bib.bib48)] for additional work on dealing with label noise in classification tasks. When learning from pairwise relations, noisy labels can be connected to graph clustering and community detection problems [[14](#bib.bib14), [54](#bib.bib54), [1](#bib.bib1)]. Label noise is also connected to outlier robustness in regression which is a traditionally well-studied topic. In the context of robust regression and high-dimensional statistics, much of the focus is on regularization techniques to automatically detect and discard outliers by using tools such as ℓ1 penalization [[17](#bib.bib17), [32](#bib.bib32), [6](#bib.bib6), [35](#bib.bib35), [10](#bib.bib10), [15](#bib.bib15), [22](#bib.bib22)]. We would also like to note that there is an interesting line of work that focuses on developing robust algorithms for corruption not only in the labels but also input data [[19](#bib.bib19), [41](#bib.bib41), [31](#bib.bib31)].
Overparameterized neural networks: Intriguing properties and benefits of overparameterized neural networks has been the focus of a growing list of publications [[56](#bib.bib56), [49](#bib.bib49), [12](#bib.bib12), [18](#bib.bib18), [4](#bib.bib4), [28](#bib.bib28), [53](#bib.bib53), [58](#bib.bib58), [51](#bib.bib51), [11](#bib.bib11)]. A recent line of work [[33](#bib.bib33), [2](#bib.bib2), [3](#bib.bib3), [21](#bib.bib21), [59](#bib.bib59), [20](#bib.bib20), [38](#bib.bib38)] show that overparameterized neural networks can fit the data with random initialization if the number of hidden nodes are polynomially large in the size of the dataset. Recently in [[40](#bib.bib40)] we showed that this conclusion continues to hold with more modest amounts of overparameterization and as soon as the number of parameters of the model exceed the square of the size of the training data set. This line of work however is not informative about the robustness of the trained network against corrupted labels. Indeed, such theory predicts that (stochastic) gradient descent will eventually fit the corrupted labels. In contrast, our focus here is not in finding a global minima, rather a solution that is robust to label corruption. In particular, we show that with early stopping we fit to the correct labels without overfitting to the corrupted training data. Our result also defers from this line of research in another way. The key property utilized in this research area is that the Jacobian of the neural network is well-conditioned at a random initialization if the dataset is sufficiently diverse (e.g. if the points are well-separated). In contrast, in our model the Jacobian is inherently low-rank with the rank of the Jacobian corresponding to different clusters/classes within the dataset. We harness this low-rank nature to prove that gradient descent is robust to label corruptions. We further utilize this low-rank structure to explain why neural networks can work with much more modest amounts of overparameterization where the number of parameters in the model exceeds the number of clusters raised to the fourth power and is independent of the number of data points. Furthermore, our numerical experiments verify that the Jacobian matrix of real datasets (such as CIFAR10) indeed exhibit low-rank structure. This is closely related to the observations on the Hessian of deep networks which is empirically observed to be low-rank [[45](#bib.bib45)]. We would also like to note that the importance of the Jacobian for overparameterized neural network analysis has also been noted by other papers including [[39](#bib.bib39), [49](#bib.bib49), [21](#bib.bib21)] and also [[29](#bib.bib29), [16](#bib.bib16)] which investigate the optimization landscape and properties of SGD for training neural networks. An equally important question to understanding the convergence behavior of optimization algorithms for overparameterized models is understanding their generalization capabilities. This is the subject of a few interesting recent papers [[5](#bib.bib5), [7](#bib.bib7), [24](#bib.bib24), [50](#bib.bib50), [13](#bib.bib13), [8](#bib.bib8), [34](#bib.bib34), [9](#bib.bib9)]. While in this paper we do not tackle generalization in the traditional sense, we do show that solution found by gradient descent are robust to label noise/corruption which demonstrates their predictive capabilities and in turn suggests better generalization.
###
1.3 Models

Figure 2: Visualization of the input/label samples and classes according to the clusterable dataset model in Definition [1.1](#S1.Thmtheorem1 "Definition 1.1 (Clusterable dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). In the depicted example there are K=6 clusters, ¯K=3 classes. In this example the number of data points is n=30 with each cluster containing 5 data points. The labels associated to classes 1, 2, and 3 are α1=−1, α2=0.1, and α3=1, respectively so that δ=0.9. We note that the placement of points are exaggerated for clarity. In particular, per definition the cluster center and data points all have unit Euclidean norm. Also, there is no explicit requirements that the cluster centers be separated. The depicted separation is for exposition purposes only.
We first describe the dataset model used in our theoretical results. In this model we assume that the input samples x1,x2,…,xn∈Rd come from K clusters which are located on the unit Euclidian ball in Rd. We also assume our data set consists of ¯K≤K classes where each class can be composed of multiple clusters. We consider a deterministic data set with n samples with roughly balanced clusters each consisting on the order of n/K samples.111This is for ease of exposition rather than a particular challenge arising in the analysis. Finally, while we allow for multiple classes, in our model we assume the labels are scalars and take values in [−1,1] interval. We formally define our dataset model below and provide an illustration in Figure [2](#S1.F2 "Figure 2 ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks").
######
Definition 1.1 (Clusterable dataset)
Consider a data set of size n consisting of input/label pairs
{(xi,yi)}ni=1∈Rd×R. We assume the input data have unit Euclidean norm and originate from K clusters with the ℓth cluster containing nℓ data points. We assume the number of points originating from each cluster is well-balanced in the sense that clownK≤nℓ≤cupnK with clow and cup two numerical constants obeying 0<clow<cup<1. We use {cℓ}Kℓ=1⊂Rd to denote the cluster centers which are distinct unit Euclidian norm vectors. We assume the input data points x that belong to the ℓ-th cluster obey
| | | |
| --- | --- | --- |
| | ∥x−cℓ∥ℓ2≤ε0, | |
with ε0>0 denoting the input noise level.
We assume the labels yi belong to one of ¯K≤K classes. Specifically, we assume yi∈{α1,α2,…,α¯K} with {αℓ}¯Kℓ=1∈[−1,1] denoting the labels associated with each class. We assume all the elements of the same cluster belong to the same class and hence have the same label. However, a class can contain multiple clusters. Finally, we assume the labels are separated in the sense that
| | | | |
| --- | --- | --- | --- |
| | |αr−αs|≥δforr≠s, | | (1.1) |
with δ>0 denoting the class separation.
In the data model above {cℓ}Kℓ=1 are the K cluster centers that govern the input distribution. We note that in this model different clusters can be assigned to the same label. Hence, this setup is rich enough to model data which is not linearly separable: e.g. over R2, we can assign cluster centers (0,1) and (0,−1) to label 1 and cluster centers (1,0) and (−1,0) to label −1. Note that the maximum number of classes are dictated by the separation δ. In particular, we can have at most ¯K≤2δ+1 classes. We remark that this model is related to the setup of [[33](#bib.bib33)] which focuses on providing polynomial guarantees for learning shallow networks. Finally, note that, we need some sort of separation between the cluster centers to distinguish them. While Definition [1.1](#S1.Thmtheorem1 "Definition 1.1 (Clusterable dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") doesn’t specifies such separation explicitly, Definition [2.1](#S2.Thmtheorem1 "Definition 2.1 (Neural Net Cluster Covariance and Condition Number) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") establishes a notion of separation in terms of how well a neural net can distinguish the cluster centers. Next, we introduce our noisy/corrupted dataset model.
######
Definition 1.2 ((ρ,ε0,δ) corrupted dataset)
Let {(xi,˜yi)}ni=1 be an (ε0,δ) clusterable dataset with α1, α2, …,α¯K denoting the ¯K possible class labels. A (ρ,ε0,δ) noisy/corrupted dataset {(xi,yi)}ni=1 is generated from {(xi,˜yi)}ni=1 as follows. For each cluster 1≤ℓ≤K, at most sℓ≤ρnℓ of the labels associated with that cluster (which contains nℓ points) is assigned to another label value chosen from {αℓ}¯Kℓ=1. We shall refer to the initial labels {˜yi}ni=1 as the ground truth labels.
We note that this definition allows for a fraction ρ of corruptions in each cluster.
Network model: We will study the ability of neural networks to learn this corrupted dataset model. To proceed, let us introduce our neural network model. We consider a network with one hidden layer that maps Rd to R. Denoting the number of hidden nodes by k, this network is characterized by an activation function ϕ, input weight matrix W∈Rk×d and output weight vector v∈Rk. In this work, we will fix output v to be a unit vector where half the entries are 1/√k and other half are −1/√k to simplify exposition.222If the number of hidden units is odd we set one entry of v to zero. We will only optimize over the weight matrix W which contains most of the network parameters and will be shown to be sufficient for robust learning. We will also assume ϕ has bounded first and second order derivatives, i.e. |ϕ′(z)|,|ϕ′′(z)|≤Γ for all z. The network’s prediction at an input sample x is given by
| | | | |
| --- | --- | --- | --- |
| | x↦f(W,x)=vTϕ(Wx), | | (1.2) |
where the activation function ϕ applies entrywise. Given a dataset {(xi,yi)}ni=1, we shall train the network via minimizing the empirical risk over the training data via a quadratic loss
| | | | |
| --- | --- | --- | --- |
| | L(W)=12n∑i=1(yi−f(xi,W))2. | | (1.3) |
In particular, we will run gradient descent with a constant learning rate η, starting from a random initialization W0 via the following updates
| | | | |
| --- | --- | --- | --- |
| | Wτ+1=Wτ−η∇L(Wτ). | | (1.4) |
2 Main results
---------------
Throughout, ∥⋅∥ denotes the largest singular value of a given matrix. The notation O(⋅) denotes that a certain identity holds up to a fixed numerical constant. Also, c, c0, C, C0 etc. represent numerical constants.
###
2.1 Robustness of neural network to label noise with early stopping
Our main result shows that overparameterized neural networks, when trained via gradient descent using early stopping are fairly robust to label noise. The ability of neural networks to learn from the training data, even without label corruption, naturally depends on the diversity of the input training data. Indeed, if two input data are nearly the same but have different uncorrupted labels reliable learning is difficult. We will quantify this notion of diversity via a notion of condition number related to a covariance matrix involving the activation ϕ and the cluster centers {cℓ}Kℓ=1.
######
Definition 2.1 (Neural Net Cluster Covariance and Condition Number)
Define the matrix of cluster centers
| | | |
| --- | --- | --- |
| | C=[c1 … cK]T∈RK×d. | |
Let g∼N(0,Id). Define the neural net covariance matrix Σ(C) as
| | | |
| --- | --- | --- |
| | Σ(C)=(CCT)⨀Eg[ϕ′(Cg)ϕ′(Cg)T]. | |
Here ⨀ denotes the elementwise product. Also denote the minimum eigenvalue of Σ(C) by λ(C) and define the following condition number associated with the cluster centers C
| | | |
| --- | --- | --- |
| | κ(C)=√dK∥C∥λ(C). | |
One can view Σ(C) as an empirical kernel matrix associated with the network where the kernel is given by K(ci,cj)=Σij(C). Note that Σ(C) is trivially rank deficient if there are two cluster centers that are identical. In this sense, the minimum eigenvalue of Σ(C) will quantify the ability of the neural network to distinguish between distinct cluster centers. Therefore, one can think of κ(C) as a condition number associated with the neural network which characterizes the distinctness/diversity of the cluster centers. The more distinct the cluster centers, the larger λ(C) and smaller the condition number κ(C) is. Indeed, based on results in [[40](#bib.bib40)] when the cluster centers are maximally diverse e.g. uniformly at random from the unit sphere κ(C) scales like a constant. Throughout we shall assume that λ(C) is strictly positive (and hence κ(C)<∞). This property is empirically verified to hold in earlier works [[55](#bib.bib55)] when ϕ is a standard activation (e.g. ReLU, softplus). As a concrete example, for ReLU activation, using results from [[40](#bib.bib40)] one can show if the cluster centers are separated by a distance ν>0, then λ(C)≥ν100K2. We note that variations of the λ(C)>0 assumption based on the data points (i.e. λ(X)>0 not cluster centers) [[40](#bib.bib40), [21](#bib.bib21), [20](#bib.bib20)] are utilized to provide convergence guarantees for DNNs.Also see [[3](#bib.bib3), [59](#bib.bib59)] for other publications using related definitions.
Now that we have a quantitative characterization of distinctiveness/diversity in place we are now ready to state our main result. Throughout we use cΓ,CΓ, etc. to denote constants only depending on Γ. We note that this Theorem is slightly simplified by ignoring logarithmic terms and precise dependencies on Γ. We refer the reader to Theorem [6.13](#S6.Thmtheorem13 "Theorem 6.13 (Training neural nets with corrupted labels) ‣ 6.3.1 Completing the Proof of Theorem 2.2 ‣ 6.3 Perturbation analysis for perfectly clustered data (Proof of Theorem 2.2) ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") for precise statement including logarithmic terms.
######
Theorem 2.2 (Robust learning with early stopping-simplified)
Consider an (s,ε0,δ) clusterable corrupted data set of input/label pairs {(xi,yi)}ni=1∈Rd×R per Definition [1.2](#S1.Thmtheorem2 "Definition 1.2 ((ρ,ε0,δ) corrupted dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") with cluster centers {cℓ}Kℓ=1 aggregated as rows of a matrix C∈RK×d.
Furthermore, let {˜yi}ni=1 be the corresponding uncorrupted ground truth labels. Also consider a one-hidden layer neural network with k hidden units and one output of the form x↦vTϕ(Wx) with W∈Rk×d and v∈Rk the input-to-hidden and hidden-to-output weights. Also suppose the activation ϕ obeys |ϕ(0)|≤Γ and |ϕ′(z)|,|ϕ′′(z)|≤Γ for all z and some Γ≥1. Furthermore, we set half of the entries of v to 1/√k and the other half to −1/√k333If k is odd we set one entry to zero ⌊k−12⌋ to 1/√k and ⌊k−12⌋ entries to −1/√k. and train only over W. Starting from an initial weight matrix W0 selected at random with i.i.d. N(0,1) entries we run Gradient Descent (GD) updates of the form Wτ+1=Wτ−η∇L(Wτ) on the least-squares loss ([1.3](#S1.E3 "(1.3) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) with step size η=¯cΓKn1∥C∥2 with ¯cΓ. Furthermore, assume the number of parameters obey
| | | |
| --- | --- | --- |
| | kd≥CΓκ4(C)K4d, | |
with κ(C) the neural net cluster condition number pre Definition [2.1](#S2.Thmtheorem1 "Definition 2.1 (Neural Net Cluster Covariance and Condition Number) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). Then as long as ϵ0≤˜cΓ/K2 and ρ≤δ8
with probability at least 1−3/K100, after τ0=cΓKdλ(C)κ2(C)log(1ρ) iterations, the neural network f(⋅,Wτ0) found by gradient descent assigns all the input samples xi to the correct ground truth labels ˜yi. That is,
| | | | |
| --- | --- | --- | --- |
| | argminαℓ:1≤ℓ≤¯K|f(Wτ,xi)−αℓ|=˜yi, | | (2.1) |
holds for all 1≤i≤n.
Furthermore, for all 0≤τ≤τ0, the distance to the initial point obeys
| | | |
| --- | --- | --- |
| | ∥Wτ−W0∥F≤¯CΓ(√K+K2∥C∥2τε0). | |
Theorem [2.2](#S2.Thmtheorem2 "Theorem 2.2 (Robust learning with early stopping-simplified) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") shows that gradient descent with early stopping has a few intriguing properties. We further discuss these properties below.
Robustness. The solution found by gradient descent with early stopping degrades gracefully as the label corruption level ρ grows. In particular, as long as ρ≤δ/8, the final model is able to correctly classify all samples including the corrupted ones. In our setup, intuitively label gap obeys δ∼1¯K, hence, we prove robustness to
| | | |
| --- | --- | --- |
| | Total Number of corrupted labels≲n¯K. | |
This result is independent of number of clusters and only depends on number of classes. An interesting future direction is to improve this result to allow on the order of n corrupted labels. Such a result maybe possible by using a multi-output classification neural network.
Early stopping time. We show that gradient descent finds a model that is robust to outliers after a few iterations. In particular using the maximum allowed step size, the required number of iterations is of the order of Kdλ(C)κ2(C)log(1ρ) which scales with K/d up to condition numbers.
Modest overparameterization. Our result requires modest overparemetrization and apply as soon as the number of parameters exceed the number of classes to the power four (kd≳K4). Interestingly, under our data model the required amount of overparameterization is essentially independent of the size of the training data n(ignoring logarithmic terms) and conditioning of the data points, only depending on the number of clusters and conditioning of the cluster centers. This can be interpreted as ensuring that the network has enough capacity to fit the cluster centers {cℓ}Kℓ=1 and the associated true labels.
Distance from initialization. Another feature of Theorem [2.2](#S2.Thmtheorem2 "Theorem 2.2 (Robust learning with early stopping-simplified) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") is that the network weights do not stray far from the initialization as the distance between the initial model and the final model (at most) grows with the square root of the number of clusters (√K). This √K dependence implies that the more clusters there are, the updates travel further away but continue to stay within a certain radius. This dependence is intuitive as the Rademacher complexity of the function space is dictated by the distance to initialization and should grow with the square-root of the number of input clusters to ensure the model is expressive enough to learn the dataset.
Before we end this section we would like to note that in the limit of ϵ0→0 where the input data set is perfectly clustered one can improve the amount of overparamterization. Indeed, the result above is obtained via a perturbation argument from this more refined result stated below.
######
Theorem 2.3 (Training with perfectly clustered data)
Consier the setting and assumptions of Theorem [2.3](#S2.Thmtheorem3 "Theorem 2.3 (Training with perfectly clustered data) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") with ϵ0=0. Starting from an initial weight matrix W0 selected at random with i.i.d. N(0,1) entries we run Gradient Descent (GD) updates of the form Wτ+1=Wτ−η∇L(Wτ) on the least-squares loss ([1.3](#S1.E3 "(1.3) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) with step size η≤K2cupnΓ2∥C∥2. Furthermore, assume the number of parameters obey
| | | |
| --- | --- | --- |
| | kd≥CΓ4κ2(C)K2, | |
with κ(C) the neural net cluster condition number per Definition [2.1](#S2.Thmtheorem1 "Definition 2.1 (Neural Net Cluster Covariance and Condition Number) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). Then, with probability at least 1−2/K100 over randomly initialized W0i.i.d.∼N(0,1), the iterates Wτ obey the following properties.
* The distance to initial point W0 is upper bounded by
| | | |
| --- | --- | --- |
| | ∥Wτ−W0∥F≤cΓ√KlogKλ(C). | |
* After τ≥τ0:=cKηnλ(C)log(Γ√nlogKρ) iterations, the entrywise predictions of the learned network with respect to the ground truth labels {˜yi}ni=1 satisfy
| | | |
| --- | --- | --- |
| | |f(Wτ,xi)−˜yi|≤4ρ, | |
for all 1≤i≤n. Furthermore, if the noise level ρ obeys ρ≤δ/8 the network predicts the correct label for all samples i.e.
| | | | |
| --- | --- | --- | --- |
| | argminαℓ:1≤ℓ≤¯K|f(Wτ,xi)−αℓ|=˜yifori=1,2,…,n. | | (2.2) |
This result shows that in the limit ϵ0→0 where the data points are perfectly clustered, the required amount of overparameterization can be reduced from kd≳K4 to kd≳K2. In this sense this can be thought of a nontrivial analogue of [[40](#bib.bib40)] where the number of data points are replaced with the number of clusters and the condition number of the data points is replaced with a cluster condition number. This can be interpreted as ensuring that the network has enough capacity to fit the cluster centers {cℓ}Kℓ=1 and the associated true labels.
Interestingly, the robustness benefits continue to hold in this case. However, in this perfectly clustered scenario there is no need for early stopping and a robust network is trained as soon as the number of iterations are sufficiently large. Infact, in this case given the clustered nature of the input data the network never overfits to the corrupted data even after many iterations.
###
2.2 To (over)fit to corrupted labels requires straying far from initialization
In this section we wish to provide further insight into why early stopping enables robustness and generalizable solutions. Our main insight is that while a neural network maybe expressive enough to fit a corrupted dataset, the model has to travel a longer distance from the point of initialization as a function of the distance from the cluster centers ε0 and the amount of corruption. We formalize this idea as follows. Suppose
1. two input points are close to each other (e.g. they are from the same cluster),
2. but their labels are different, hence the network has to map them to distant outputs.
Then, the network has to be large enough so that it can amplify the small input difference to create a large output difference. Our first result formalizes this for a randomly initialized network. Our random initialization picks W with i.i.d. standard normal entries which ensures that the network is isometric i.e. given input x, E[f(W,x)2]=O(∥x∥2ℓ2).
###### Theorem 2.4
Let x1,x2∈Rd be two vectors with unit Euclidean norm obeying ∥x2−x1∥ℓ2≤ϵ0. Let f(W,x)=vTϕ(Wx) where v is fixed, W∈Rk×d, and k≥cd with c>0 a fixed constant. Assume |ϕ′|,|ϕ′′|≤Γ. Let y1 and y2 be two scalars satisfying |y2−y1|≥δ. Suppose W0i.i.d.∼N(0,1). Then, with probability at least 1−2e−(k+d)−2e−t22, for any W∈Rk×d such that ∥W−W0∥F≤c√k and
| | | |
| --- | --- | --- |
| | f(W,x1)=y1andf(W,x2)=y2, | |
holds, we have
| | | |
| --- | --- | --- |
| | ∥W−W0∥≥δCΓε0−t1000. | |
In words, this result shows that in order to fit to a data set with a single corrupted label, a randomly initialized network has to traverse a distance of at least δ/ε0. The next lemma clarifies the role of the corruption amount s and shows that more label corruption within a fixed class requires a model with a larger norm in order to fit the labels. For this result we consider a randomized model with ε20 input noise variance.
###### Lemma 2.5
Let c∈Rd be a cluster center. Consider 2s data points {xi}si=1 and {˜xi}si=1 in Rd generated i.i.d. around c according to the following distribution
| | | |
| --- | --- | --- |
| | c+gwithg∼N(0,ε20dId). | |
Assign {xi}si=1 with labels yi=y and {˜xi}si=1 with labels ˜yi=˜y and assume these two labels are δ separated i.e. |y−˜y|≥δ. Also suppose s≤d and |ϕ′|≤Γ. Then, any W∈Rk×d satisfying
| | | |
| --- | --- | --- |
| | f(W,xi)=yiandf(W,˜xi)=˜yifori=1,…,s, | |
obeys ∥W∥F≥√sδ5Γε0 with probability at least 1−e−d/2.
Unlike Theorem [2.4](#S2.Thmtheorem4 "Theorem 2.4 ‣ 2.2 To (over)fit to corrupted labels requires straying far from initialization ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") this result lower bounds the network norm in lieu of the distance to the initialization W0. However, using the triangular inequality we can in turn get a guarantee on the distance from initialization W0 via triangle inequality as long as ∥W0∥F≲O(√sδ/ε0) (e.g. by choosing a small ε0).
The above Theorem implies that the model has to traverse a distance of at least
| | | |
| --- | --- | --- |
| | ∥Wτ−W0∥F≳√ρnKδε0, | |
to perfectly fit corrupted labels. In contrast, we note that the conclusions of the upper bound in Theorem [2.2](#S2.Thmtheorem2 "Theorem 2.2 (Robust learning with early stopping-simplified) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") show that to be able to fit to the uncorrupted true labels the distance to initialization grows at most by τε0 after τ iterates. This demonstrates that there is a gap in the required distance to initialization for fitting enough to generalize and overfitting. To sum up, our results highlight that, one can find a network with good generalization capabilities and robustness to label corruption within a small neighborhood of the initialization and that the size of this neighborhood is independent of the corruption. However, to fit to the corrupted labels, one has to travel much more, increasing the search space and likely decreasing generalization ability. Thus, early stopping can enable robustness without overfitting by restricting the distance to the initialization.
3 Technical Approach and General Theory
----------------------------------------
In this section, we outline our approach to proving robustness of overparameterized neural networks. Towards this goal, we consider a general formulation where we aim to fit a general nonlinear model of the form x↦f(θ,x) with θ∈Rp denoting the parameters of the model. For instance in the case of neural networks θ represents its weights. Given a data set of n input/label pairs {(xi,yi)}ni=1⊂Rd×R, we fit to this data by minimizing a nonlinear least-squares loss of the form
| | | |
| --- | --- | --- |
| | L(θ)=12n∑i=1(yi−f(θ,xi))2. | |
which can also be written in the more compact form
| | | |
| --- | --- | --- |
| | L(θ)=12∥f(θ)−y∥2ℓ2withf(θ):=⎡⎢
⎢
⎢
⎢
⎢⎣f(θ,x1)f(θ,x2)⋮f(θ,xn)⎤⎥
⎥
⎥
⎥
⎥⎦. | |
To solve this problem we run gradient descent iterations with a constant learning rate η starting from an initial point θ0. These iterations take the form
| | | | |
| --- | --- | --- | --- |
| | θτ+1=θτ−η∇L(θτ)with∇L(θ)=JT(θ)(f(θ)−y). | | (3.1) |
Here, J(θ) is the n×p Jacobian matrix associated with the nonlinear mapping f defined via
| | | | |
| --- | --- | --- | --- |
| | J(θ)=[∂f(θ,x1)∂θ … ∂f(θ,xn)∂θ]T. | | (3.2) |
###
3.1 Bimodal jacobian structure
Our approach is based on the hypothesis that the nonlinear model has a Jacobian matrix with bimodal spectrum where few singular values are large and remaining singular values are small. This assumption is inspired by the fact that realistic datasets are clusterable in a proper, possibly nonlinear, representation space. Indeed, one may argue that one reason for using neural networks is to automate the learning of such a representation (essentially the input to the softmax layer). We formalize the notion of bimodal spectrum below.
######
Assumption 1 (Bimodal Jacobian)
Let β≥α≥ϵ>0 be scalars. Let f:Rp→Rn be a nonlinear mapping and consider a set D⊂Rp containing the initial point θ0 (i.e. θ0∈D). Let S+⊂Rn be a subspace and S− be its complement. We say the mapping f has a Bimodal Jacobian with respect to the complementary subpspaces S+ and S− as long as the following two assumptions hold for all θ∈D.
* Spectrum over S+: For all v∈S+ with unit Euclidian norm we have
| | | |
| --- | --- | --- |
| | α≤∥∥JT(θ)v∥∥ℓ2≤β. | |
* Spectrum over S−: For all v∈S− with unit Euclidian norm we have
| | | |
| --- | --- | --- |
| | ∥∥JT(θ)v∥∥ℓ2≤ϵ. | |
We will refer to S+ as the signal subspace and S− as the noise subspace.
When ϵ<<α the Jacobian is approximately low-rank. An extreme special case of this assumption is where ϵ=0 so that the Jacobian matrix is exactly low-rank. We formalize this assumption below for later reference.
######
Assumption 2 (Low-rank Jacobian)
Let β≥α>0 be scalars. Consider a set D⊂Rp containing the initial point θ0 (i.e. θ0∈D). Let S+⊂Rn be a subspace and S− be its complement. For all θ∈D, v∈S+ and w∈S− with unit Euclidian norm, we have that
| | | |
| --- | --- | --- |
| | | |
Our dataset model in Definition [1.2](#S1.Thmtheorem2 "Definition 1.2 ((ρ,ε0,δ) corrupted dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") naturally has a low-rank Jacobian when ϵ0=0 and each input example is equal to one of the K cluster centers {cℓ}Kℓ=1. In this case, the Jacobian will be at most rank K since each row will be in the span of {∂f(cℓ,θ)∂θ}Kℓ=1. The subspace S+ is dictated by the membership of each cluster as follows: Let Λℓ⊂{1,…,n} be the set of coordinates i such that xi=cℓ. Then, subspace is characterized by
| | | |
| --- | --- | --- |
| | S+={v∈Rn ∣∣ vi1=vi2 for all i1,i2∈Λℓ and 1≤ℓ≤K}. | |
When ϵ0>0 and the data points of each cluster are not the same as the cluster center we have the bimodal Jacobian structure of Assumption [1](#Thmassumption1 "Assumption 1 (Bimodal Jacobian) ‣ 3.1 Bimodal jacobian structure ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") where over S− the spectral norm is small but nonzero.
In Section [4](#S4 "4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), we verify that the Jacobian matrix of real datasets indeed have a bimodal structure i.e. there are few large singular values and the remaining singular values are small which further motivate Assumption [2](#Thmassumption2 "Assumption 2 (Low-rank Jacobian) ‣ 3.1 Bimodal jacobian structure ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). This is inline with earlier papers which observed that Hessian matrices of deep networks have bimodal spectrum (approximately low-rank) [[45](#bib.bib45)] and is related to various results demonstrating that there are flat directions in the loss landscape [[27](#bib.bib27)].
###
3.2 Meta result on learning with label corruption
Define the n-dimensional residual vector r where
r(θ)=[f(x1,θ)−y1…f(xn,θ)−yn]T.
A key idea in our approach is that we argue that (1) in the absence of any corruption r(θ) approximately lies on the subspace S+ and (2) if the labels are corrupted by a vector e, then e approximately lies on the complement space. Before we state our general result we need to discuss another assumption and definition.
######
Assumption 3 (Smoothness)
The Jacobian mapping J(θ) associated to a nonlinear mapping f:Rp→Rn is L-smooth if for all θ1,θ2∈Rp we have ∥J(θ2)−J(θ1)∥≤L∥θ2−θ1∥ℓ2.444Note that, if ∂J(θ)∂θ is continuous, the smoothness condition holds over any compact domain (albeit for a possibly large L).
Additionally, to connect our results to the number of corrupted labels, we introduce the notion of subspace diffusedness defined below.
######
Definition 3.1 (Diffusedness)
S+ is γ diffused if for any vector v∈S+
| | | |
| --- | --- | --- |
| | ∥v∥ℓ∞≤√γ/n∥v∥ℓ2, | |
holds for some γ>0.
The following theorem is our meta result on the robustness of gradient descent to sparse corruptions on the labels when the Jacobian mapping is exactly low-rank. Theorem [2.3](#S2.Thmtheorem3 "Theorem 2.3 (Training with perfectly clustered data) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") for the perfectly clustered data (ϵ0=0) is obtained by combining this result with specific estimates developed for neural networks.
######
Theorem 3.2 (Gradient descent with label corruption)
Consider a nonlinear least squares problem of the form L(θ)=12∥f(θ)−y)∥2ℓ2 with the nonlinear mapping f:Rp→Rn obeying assumptions [2](#Thmassumption2 "Assumption 2 (Low-rank Jacobian) ‣ 3.1 Bimodal jacobian structure ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") and [3](#Thmassumption3 "Assumption 3 (Smoothness) ‣ 3.2 Meta result on learning with label corruption ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") over a unit Euclidian ball of radius 4∥r0∥ℓ2α around an initial point θ0 and y=[y1 … yn]∈Rn denoting the corrupted labels. Also let ˜y=[˜y1 … ˜yn]∈Rn denote the uncorrupted labels and e=y−˜y the corruption. Furthermore, suppose the initial residual f(θ0)−˜y with respect to the uncorrupted labels obey f(θ0)−˜y∈S+. Then, running gradient descent updates of the from ([3.1](#S3.E1 "(3.1) ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) with a learning rate η≤12β2min(1,αβL∥r0∥ℓ2), all iterates obey
| | | |
| --- | --- | --- |
| | ∥θτ−θ0∥ℓ2≤4∥r0∥ℓ2α. | |
Furthermore, assume ν>0 is a precision level obeying ν≥∥ΠS+(e)∥ℓ∞. Then, after τ≥5ηα2log(∥r0∥ℓ2ν) iterations, θτ achieves the following error bound with respect to the true labels
| | | |
| --- | --- | --- |
| | ∥f(θτ)−˜y∥ℓ∞≤2ν. | |
Furthermore, if e has at most s nonzeros and S+ is γ diffused per Definition [3.1](#S3.Thmtheorem1 "Definition 3.1 (Diffusedness) ‣ 3.2 Meta result on learning with label corruption ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), then using ν=∥ΠS+(e)∥ℓ∞
| | | |
| --- | --- | --- |
| | ∥f(θτ)−˜y∥ℓ∞≤2∥ΠS+(e)∥ℓ∞≤γ√sn∥e∥ℓ2. | |
This result shows that when the Jacobian of the nonlinear mapping is low-rank, gradient descent enjoys two intriguing properties. First, gradient descent iterations remain rather close to the initial point. Second, the estimated labels of the algorithm enjoy sample-wise robustness guarantees in the sense that the noise in the estimated labels are gracefully distributed over the dataset and the effects on individual label estimates are negligible. This theorem is the key result that allows us to prove Theorem [2.3](#S2.Thmtheorem3 "Theorem 2.3 (Training with perfectly clustered data) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") when the data points are perfectly clustered (ϵ0=0). Furthermore, this theorem when combined with a perturbation analysis allows us to deal with data that is not perfectly clustered (ϵ0>0) and to conclude that with early stopping neural networks are rather robust to label corruption (Theorem [2.2](#S2.Thmtheorem2 "Theorem 2.2 (Robust learning with early stopping-simplified) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")).
Finally, we note that a few recent publication [[39](#bib.bib39), [3](#bib.bib3), [21](#bib.bib21)] require the Jacobian to be well-conditioned to fit labels perfectly. In contrast, our low-rank model cannot perfectly fit the corrupted labels. Furthermore, when the Jacobian is bimodal (as seems to be the case for many practical data sets and neural network models) it would take a very long time to perfectly fit the labels and as demonstrated earlier such a model does not generalize and is not robust to corruptions. Instead we focus on proving robustness with early stopping.
###
3.3 To (over)fit to corrupted labels requires straying far from initialization
In this section we state a result that provides further justification as to why early stopping of gradient descent leads to more robust models without overfitting to corrupted labels.
This is based on the observation that while finding an estimate that fits the uncorrupted labels one does not have to move far from the initial estimate in the presence of corruption one has to stray rather far from the initialization with the distance from initialization increasing further in the presence of more corruption. We make this observation rigorous below by showing that it is more difficult to fit to the portion of the residual that lies on the noise space compared to the portion on the signal space (assuming α≫ϵ).
###### Theorem 3.3
Denote the residual at initialization θ0 by r0=f(θ0)−y. Define the residual projection over the signal and noise space as
| | | |
| --- | --- | --- |
| | E+=∥ΠS+(r0)∥ℓ2andE−=∥ΠS−(r0)∥ℓ2. | |
Suppose Assumption [1](#Thmassumption1 "Assumption 1 (Bimodal Jacobian) ‣ 3.1 Bimodal jacobian structure ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") holds over an Euclidian ball D of radius R<max(E+β,E−ε) around the initial point θ0 with α≥ϵ. Then, over D there exists no θ that achieves zero training loss. In particular, if D=Rp, any parameter θ achieving zero training loss (f(θ)=y) satisfies the distance bound
| | | |
| --- | --- | --- |
| | ∥θ−θ0∥ℓ2≥max(E+β,E−ε). | |
This theorem shows that the higher the corruption (and hence E−) the further the iterates need to stray from the initial model to fit the corrupted data.
4 Numerical experiments
------------------------
| | |
| --- | --- |
|
Distance from initialization
Train accuracy
(a) Training accuracy
|
Distance from initialization
Loss
(b) Training loss
|
Figure 3: We depict the training accuracy of a LENET model trainined on 3000 samples from MNIST as a function of relative distance from initialization. Here, the x-axis keeps track of the distance between the current and initial weights of all layers combined.
We conduct several experiments to investigate the robustness capabilities of deep networks to label corruption. In our first set of experiments, we explore the relationship between loss, accuracy, and amount of label corruption on the MNIST dataset to corroborate our theory. Our next experiments study the distribution of the loss and the Jacobian on the CIFAR-10 dataset. Finally, we simulate our theoretical model by generating data according to the corrupted data model of Definition [1.2](#S1.Thmtheorem2 "Definition 1.2 ((ρ,ε0,δ) corrupted dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") and verify the robustness capability of gradient descent with early stopping in this model.
In Figure [3](#S4.F3 "Figure 3 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), we train the same model used in Figure [1](#S1.F1 "Figure 1 ‣ 1.1 Motivation ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") with n=3,000 MNIST samples for different amounts of corruption. Our theory predicts that more label corruption leads to a larger distance to initialization. To probe this hypothesis, Figure [2(a)](#S4.F2.sf1 "(a) ‣ Figure 3 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") and [2(b)](#S4.F2.sf2 "(b) ‣ Figure 3 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") visualizes training accuracy and training loss as a function of the distance from the initialization. These results demonstrate that the distance from initialization gracefully increase with more corruption.
| | |
| --- | --- |
|
Cross entropy loss
Histogram
(a) 30% corruption
|
Cross entropy loss
(b) 50% corruption
|
Figure 4: Histogram of the cross entropy loss of individual data points based on a model trained on 50,000 samples from CIFAR-10 with early stopping. Plot depicts 5000 random samples from these 50,000 samples. The loss distribution of clean and corrupted data are separated but gracefully overlap as the corruption level increases.
Next, we study the distribution of the individual sample losses on the CIFAR-10 dataset. We conducted two experiments using Resnet-20 with cross entropy loss555We opted for cross entropy as it is the standard classification loss however least-squares loss achieves similar accuracy.. In Figure [4](#S4.F4 "Figure 4 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") we assess the noise robustness of gradient descent where we used all 50,000 samples with either 30% random corruption or 50% random corruption. Theorem [2.3](#S2.Thmtheorem3 "Theorem 2.3 (Training with perfectly clustered data) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") predicts that when the corruption level is small, the loss distribution of corrupted vs clean samples should be separable. Figure [4](#S4.F4 "Figure 4 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") shows that when 30% of the data is corrupted the distributions are approximately separable. When we increase the shuffling amount to 50% the training loss on the clean data increases as predicted by our theory and the distributions start to gracefully overlap.
As described in Section [3](#S3 "3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), our technical framework utilizes a bimodal prior on the Jacobian matrix ([3.2](#S3.E2 "(3.2) ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) of the model. We now further investigate this hypothesis. For a multiclass task, the Jacobian matrix is essentially a 3-way tensor where dimensions are sample size (n), total number of parameters in the model (p), and the number of classes (¯K). The neural network model we used for CIFAR 10 has around 270,000 parameters in total. In Figure [5](#S4.F5 "Figure 5 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") we illustrate the singular value spectrum of the two multiclass Jacobian models where we form the Jacobian from all layers except the five largest (in total we use ¯p≈90,000 parameters).666We depict the smaller Jacobian due to the computational cost of calculating the full Jacobian. We train the model with all samples and focus on the spectrum before and after the training. In Figure [4(a)](#S4.F4.sf1 "(a) ‣ Figure 5 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), we picked n=1000 samples and unfolded this tensor along parameters to obtain a 10,000×90,000 matrix which verifies our intuition on bimodality. In particular, only 10 to 20 singular values are larger than 0.1× the top one. This is consistent with earlier works that studied the Hessian spectrum. However, focusing on the Jacobian has the added advantage of requiring only first order information [[45](#bib.bib45), [25](#bib.bib25)]. A disadvantage is that the size of Jacobian grows with number of classes. Intuitively, cross entropy loss focuses on the class associated with the label hence in Figure [4(b)](#S4.F4.sf2 "(b) ‣ Figure 5 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), we only picked the partial derivative associated with the correct class so that each sample is responsible for a single (size ¯p) vector. This allowed us to scale to n=10000 samples and the corresponding spectrum is strikingly similar. Another intriguing finding is that the spectrums of before and after training are fairly close to each other highlighting that even at random initialization, spectrum is bimodal.
| | |
| --- | --- |
|
Singular value index
Magnitude
(a) All classes, 1k samples
|
Singular value index
(b) Correct class, 10k samples
|
Figure 5: Spectrum of the Jacobian obtained by plotting the singular values. (a) is obtained by forming the Jacobian by taking partial derivatives of all classes associated with a sample for 1000 samples. (b) is obtained by taking the class corresponding to the label for 10000 samples.
| # >0.1× top singular | At initialization | After training |
| --- | --- | --- |
| All classes | 4 | 14 |
| Correct class | 15 | 16 |
Table 1: Jacobian of the network has few singular values that are significantly large i.e. larger than 0.1× the spectral norm. This is true whether we consider the initial network or final network.
In Figure [6](#S4.F6 "Figure 6 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), we turn our attention to verifying our findings for the corrupted dataset model of Definition [1.2](#S1.Thmtheorem2 "Definition 1.2 ((ρ,ε0,δ) corrupted dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). We generated K=2 classes where the associated clusters centers are generated uniformly at random on the unit sphere of Rd=20. We also generate the input samples at random around these two clusters uniformly at random on a sphere of radius ε0=0.5 around the corresponding cluster center. Hence, the clusters are guaranteed to be at least 1 distance from each other to prevent overlap. Overall we generate n=400 samples (200 per class/cluster). Here, ¯K=K=2 and the class labels are 0 and 1. We picked a network with k=1000 hidden units and trained on a data set with 400 samples where 30% of the labels were corrupted. Figure [5(a)](#S4.F5.sf1 "(a) ‣ Figure 6 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") plots the trajectory of training error and highlights the model achieves good classification in the first few iterations and ends up overfitting later on. In Figures [5(b)](#S4.F5.sf2 "(b) ‣ Figure 6 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") and [5(c)](#S4.F5.sf3 "(c) ‣ Figure 6 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), we focus on the loss distribution of [5(a)](#S4.F5.sf1 "(a) ‣ Figure 6 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") at iterations 80 and 4500. In this figure, we visualize the loss distribution of clean and corrupted data. Figure [5(b)](#S4.F5.sf2 "(b) ‣ Figure 6 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") highlights the loss distribution with early stopping and implies that the gap between corrupted and clean loss distributions is surprisingly resilient despite a large amount of corruption and the high-capacity of the model. In Figure [5(c)](#S4.F5.sf3 "(c) ‣ Figure 6 ‣ 4 Numerical experiments ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), we repeat plot after many more iterations at which point the model overfits. This plot shows that the distribution of the two classes overlap demonstrating that the model has overfit the corruption and lacks generalization/robustness.
| | | |
| --- | --- | --- |
|
Iteration
Classification error
(a) Fraction of incorrect predictions
|
Least squares loss
Loss histogram
(b) Loss histogram at iteration 80
|
Least squares loss
Loss histogram
(c) Loss histogram at iteration 4500
|
Figure 6: We experiment with the corrupted dataset model of Definition [1.2](#S1.Thmtheorem2 "Definition 1.2 ((ρ,ε0,δ) corrupted dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). We picked K=2 classes and set n=400 and ε0=0.5. Trained 30% corrupted data with k=1000 hidden units. Each corruption has 50% chance to remain in the correct class hence around 15% of the labels are actually flipped which corresponds to the dashed green line.
5 Conclusions
--------------
In this paper, we studied the robustness of overparameterized neural networks to label corruption from a theoretical lens. We provided robustness guarantees for training networks with gradient descent when early stopping is used and complemented these guarantees with lower bounds. Our results point to the distance between final and initial network weights as a key feature to determine robustness vs. overfitting which is inline with weight decay and early stopping heuristics. We also carried out extensive numerical experiments to verify the theoretical predictions as well as technical assumptions. While our results shed light on the intriguing properties of overparameterized neural network optimization, it would be appealing (i) to extend our results to deeper network architecture, (ii) to more complex data models, and also (iii) to explore other heuristics that can further boost the robustness of gradient descent methods.
6 Proofs
---------
###
6.1 Proofs for General Theory
We begin by defining the average Jacobian which will be used throughout our analysis.
######
Definition 6.1 (Average Jacobian)
We define the average Jacobian along the path connecting two points x,y∈Rp as
| | | | |
| --- | --- | --- | --- |
| | J(y,x):=∫10J(x+α(y−x))dα. | | (6.1) |
######
Lemma 6.2 (Linearization of the residual)
Given gradient descent iterate ^θ=θ−η∇L(θ), define
| | | |
| --- | --- | --- |
| | C(θ)=J(^θ,θ)J(θ)T. | |
The residuals ^r=f(^θ)−y, r=f(θ)−y obey the following equation
| | | |
| --- | --- | --- |
| | ^r=(I−ηC(θ))r. | |
Proof
Following Definition [6.1](#S6.Thmtheorem1 "Definition 6.1 (Average Jacobian) ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), denoting f(^θ)−y=^r and f(θ)−y=r, we find that
| | | | |
| --- | --- | --- | --- |
| | ^r= | r−f(θ)+f(^θ) | |
| | (a)= | r+J(^θ,θ)(^θ−θ) | |
| | (b)= | r−ηJ(^θ,θ)J(θ)Tr | |
| | = | (I−ηC(θ))r. | | (6.2) |
Here (a) uses the fact that Jacobian is the derivative of f and (b) uses the fact that ∇L(θ)=J(θ)Tr.
Using Assumption [3.1](#S3.Thmtheorem1 "Definition 3.1 (Diffusedness) ‣ 3.2 Meta result on learning with label corruption ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), one can show that sparse vectors have small projection on S+.
###### Lemma 6.3
Suppose Assumption [3.1](#S3.Thmtheorem1 "Definition 3.1 (Diffusedness) ‣ 3.2 Meta result on learning with label corruption ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") holds. If r∈Rn is a vector with s nonzero entries, we have that
| | | | |
| --- | --- | --- | --- |
| | ∥ΠS+(r)∥ℓ∞≤γ√sn∥r∥ℓ2. | | (6.3) |
Proof First, we bound the ℓ2 projection of r on S+ as follows
| | | |
| --- | --- | --- |
| | ∥ΠS+(r)∥ℓ2=supv∈S+vTr∥v∥ℓ2≤√γn∥r∥ℓ1≤√γsn∥r∥ℓ2. | |
where we used the fact that |vi|≤√γ∥v∥ℓ2/√n. Next, we conclude with
| | | |
| --- | --- | --- |
| | ∥ΠS+(r)∥ℓ∞≤√γn∥ΠS+(r)∥ℓ2≤γ√sn∥r∥ℓ2. | |
####
6.1.1 Proof of Theorem [3.2](#S3.Thmtheorem2 "Theorem 3.2 (Gradient descent with label corruption) ‣ 3.2 Meta result on learning with label corruption ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")
Proof The proof will be done inductively over the properties of gradient descent iterates and is inspired from the recent work [[39](#bib.bib39)]. In particular, [[39](#bib.bib39)] requires a well-conditioned Jacobian to fit labels perfectly. In contrast, we have a low-rank Jacobian model which cannot fit the noisy labels (or it would have trouble fitting if the Jacobian was approximately low-rank). Despite this, we wish to prove that gradient descent satisfies desirable properties such as robustness and closeness to initialization. Let us introduce the notation related to the residual. Set rτ=f(θτ)−y and let r0=f(θ0)−y be the initial residual. We keep track of the growth of the residual by partitioning the residual as rτ=¯rτ+¯eτ where
| | | |
| --- | --- | --- |
| | ¯eτ=ΠS−(rτ),¯rτ=ΠS+(rτ). | |
We claim that for all iterations τ≥0, the following conditions hold.
| | | | | |
| --- | --- | --- | --- | --- |
| | ¯eτ= | ¯e0 | | (6.4) |
| | ∥¯rτ∥2ℓ2≤ | (1−ηα22)τ∥¯r0∥2ℓ2, | | (6.5) |
| | | ∥¯r0∥ℓ2≤∥r0∥ℓ2. | | (6.6) |
Assuming these conditions hold till some τ>0, inductively, we focus on iteration τ+1. First, note that these conditions imply that for all τ≥i≥0, θi∈D where D is the Euclidian ball around θ0 of radius 4∥r0∥ℓ2α. This directly follows from ([6.6](#S6.E6 "(6.6) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) induction hypothesis. Next, we claim that θτ+1 is still within the set D. This can be seen as follows:
###### Claim 1
Under the induction hypothesis ([6.4](#S6.E4 "(6.4) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")), θτ+1∈D.
Proof
Since range space of Jacobian is in S+ and η≤1/β2, we begin by noting that
| | | | | |
| --- | --- | --- | --- | --- |
| | ∥θτ+1−θτ∥ℓ2 | =η∥JT(θτ)(f(θτ)−y)∥ℓ2 | | (6.7) |
| | | (a)=η∥JT(θτ)(ΠS+(f(θτ)−y))∥ℓ2 | | (6.8) |
| | | (b)=η∥JT(θτ)¯rτ∥ℓ2 | | (6.9) |
| | | (c)≤ηβ∥¯rτ∥ℓ2 | | (6.10) |
| | | (d)≤∥¯rτ∥ℓ2β | | (6.11) |
| | | (e)≤∥¯rτ∥ℓ2α | | (6.12) |
In the above, (a) follows from the fact that row range space of Jacobian is subset of S+ via Assumption [2](#Thmassumption2 "Assumption 2 (Low-rank Jacobian) ‣ 3.1 Bimodal jacobian structure ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). (b) follows from the definition of ¯rτ. (c) follows from the upper bound on the spectral norm of the Jacobian over D per Assumption [2](#Thmassumption2 "Assumption 2 (Low-rank Jacobian) ‣ 3.1 Bimodal jacobian structure ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), (d) from the fact that η≤1β2, (e) from α≤β. The latter combined with the triangular inequality and induction hypothesis ([6.6](#S6.E6 "(6.6) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) yields (after scaling ([6.6](#S6.E6 "(6.6) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) by 4/α)
| | | |
| --- | --- | --- |
| | ∥θτ+1−θ0∥ℓ2≤∥θτ+1−θτ∥ℓ2+∥θ0−θτ∥ℓ2≤∥θτ−θ0∥ℓ2+∥¯rτ∥ℓ2α≤4∥r0∥ℓ2α, | |
concluding the proof of θτ+1∈D.
To proceed, we shall verify that ([6.6](#S6.E6 "(6.6) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) holds for τ+1 as well. Note that, following Lemma [6.2](#S6.Thmtheorem2 "Lemma 6.2 (Linearization of the residual) ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), gradient descent iterate can be written as
| | | |
| --- | --- | --- |
| | rτ+1=(I−C(θτ))rτ. | |
Since both column and row space of C(θτ) is subset of S+, we have that
| | | | | |
| --- | --- | --- | --- | --- |
| | ¯eτ+1 | =ΠS−((I−C(θτ))rτ) | | (6.13) |
| | | =ΠS−(rτ) | | (6.14) |
| | | =¯eτ, | | (6.15) |
This shows the first statement of the induction. Next, over S+, we have
| | | | | |
| --- | --- | --- | --- | --- |
| | ¯rτ+1 | =ΠS+((I−C(θτ))rτ) | | (6.16) |
| | | =ΠS+((I−C(θτ))¯rτ)+ΠS+((I−C(θτ))¯eτ) | | (6.17) |
| | | =ΠS+((I−C(θτ))¯rτ) | | (6.18) |
| | | =(I−C(θτ))¯rτ | | (6.19) |
where the second line uses the fact that ¯eτ∈S− and last line uses the fact that ¯rτ∈S+. To proceed, we need to prove that C(θτ) has desirable properties over S+, in particular, it contracts this space.
###### Claim 2
let PS+∈Rn×n be the projection matrix to S+ i.e. it is a positive semi-definite matrix whose eigenvectors over S+ is 1 and its complement is 0. Under the induction hypothesis and setup of the theorem, we have that777We say A⪰B if A−B is a positive semi-definite matrix in the sense that for any real vector v, vT(A−B)v≥0.
| | | | |
| --- | --- | --- | --- |
| | β2PS+⪰C(θτ)⪰12J(θτ)J(θτ)T⪰α22PS+. | | (6.20) |
Proof The proof utilizes the upper bound on the learning rate. The argument is similar to the proof of Lemma 9.7 of [[39](#bib.bib39)]. Suppose Assumption [3](#Thmassumption3 "Assumption 3 (Smoothness) ‣ 3.2 Meta result on learning with label corruption ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") holds. Then, for any θ1,θ2∈D we have
| | | | |
| --- | --- | --- | --- |
| | ∥J(θ2,θ1)−J(θ1)∥= | ∥∥∥∫10(J(θ1+t(θ2−θ1))−J(θ1))dt∥∥∥, | |
| | ≤ | ∫10∥J(θ1+t(θ2−θ1))−J(θ1)∥dt, | |
| | ≤ | ∫10tL∥θ2−θ1∥ℓ2dt≤L2∥θ2−θ1∥ℓ2. | | (6.21) |
Thus, for η≤αLβ∥r0∥ℓ2,
| | | | | |
| --- | --- | --- | --- | --- |
| | ∥J(θτ+1,θτ)−J(θτ)∥ | ≤L2∥θτ+1−θτ∥ℓ2 | | (6.22) |
| | | =ηL2∥∥JT(θτ)(f(θτ)−y)∥∥ℓ2≤ηβL2∥¯rτ∥ℓ2 | | (6.23) |
| | | (a)≤ηβL2∥¯r0∥ℓ2(b)≤α2. | | (6.24) |
where for (a) we utilized the induction hypothesis ([6.6](#S6.E6 "(6.6) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) and (b) follows from the upper bound on η. Now that ([6.24](#S6.E24 "(6.24) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) is established, using following lemma, we find
| | | | |
| --- | --- | --- | --- |
| | C(θτ)= | J(θτ+1,θτ)J(θτ)T⪰(1/2)J(θτ)J(θτ)T. | |
The β2 upper bound directly follows from Assumption [2](#Thmassumption2 "Assumption 2 (Low-rank Jacobian) ‣ 3.1 Bimodal jacobian structure ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") by again noticing range space of Jacobian is subset of S+.
######
Lemma 6.4 (Asymmetric PSD perturbation)
Consider the matrices A,C∈Rn×p obeying ∥A−C∥≤α/2. Also suppose CCT⪰α2PS+. Furthermore, assume range spaces of A,C lies in S+. Then,
| | | |
| --- | --- | --- |
| | ACT⪰CCT2⪰α22PS+. | |
Proof For r∈S+ with unit Euclidian norm, we have
| | | | |
| --- | --- | --- | --- |
| | rTACTr | =∥CTr∥2ℓ2+rT(A−C)CTr≥∥CTr∥2ℓ2−∥CTr∥ℓ2∥rT(A−C)∥ℓ2 | |
| | | =(∥CTr∥ℓ2−∥rT(A−C)∥ℓ2)∥CTr∥ℓ2 | |
| | | ≥(∥CTr∥ℓ2−α/2)∥CTr∥ℓ2 | |
| | | ≥∥CTr∥2ℓ2/2. | |
Also, for any r, by range space assumption rTACTr=ΠS+(r)TACTΠS+(r) (same for CCT). Combined with above, this concludes the claim.
What remains is proving the final two statements of the induction ([6.6](#S6.E6 "(6.6) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")). Note that, using the claim above and recalling ([6.19](#S6.E19 "(6.19) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) and using the fact that ∥J(θτ+1,θτ)∥≤β, the residual satisfies
| | | | | |
| --- | --- | --- | --- | --- |
| | ∥¯rτ+1∥2ℓ2=∥(I−ηC(θτ))¯rτ∥2ℓ2 | =∥¯rτ∥2ℓ2−2η¯rTτCτ¯rτ+η2¯rTτCTτCτ¯rτ | | (6.25) |
| | | ≤∥¯rτ∥2ℓ2−η¯rTτJ(θτ)J(θτ)T¯rτ+η2β2¯rTτJ(θτ)J(θτ)T¯rτ | | (6.26) |
| | | ≤∥¯rτ∥2ℓ2−(η−η2β2)∥J(θτ)T¯rτ∥2ℓ2 | | (6.27) |
| | | ≤∥¯rτ∥2ℓ2−η2∥J(θτ)T¯rτ∥2ℓ2. | | (6.28) |
where we used the fact that η≤12β2. Now, using the fact that J(θτ)J(θτ)T⪰α2PS+, we have
| | | |
| --- | --- | --- |
| | ∥¯rτ∥2ℓ2−η2∥J(θτ)T¯rτ∥2ℓ2≤(1−ηα22)∥¯rτ∥2ℓ2≤(1−ηα22)τ+1∥¯r0∥2ℓ2, | |
which establishes the second statement of the induction ([6.6](#S6.E6 "(6.6) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")). What remains is obtaining the last statement of ([6.6](#S6.E6 "(6.6) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")). To address this, completing squares, observe that
| | | |
| --- | --- | --- |
| | ∥¯rτ+1∥ℓ2≤√∥¯rτ∥2ℓ2−η2∥J(θτ)T¯rτ∥2ℓ2≤∥¯rτ∥ℓ2−η4∥J(θτ)T¯rτ∥2ℓ2∥¯rτ∥ℓ2. | |
On the other hand, the distance to initial point satisfies
| | | |
| --- | --- | --- |
| | ∥θτ+1−θ0∥ℓ2≤∥θτ+1−θτ∥ℓ2+∥θτ−θ0∥ℓ2≤∥θτ−θ0∥ℓ2+η∥J(θτ)¯rτ∥ℓ2. | |
Combining the last two lines (by scaling the second line by 14α) and using induction hypothesis ([6.6](#S6.E6 "(6.6) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")), we find that
| | | | | |
| --- | --- | --- | --- | --- |
| | 14α∥θτ+1−θ0∥ℓ2+∥¯rτ+1∥ℓ2 | ≤14α(∥θτ−θ0∥ℓ2+η∥J(θτ)¯rτ∥ℓ2)+∥¯rτ∥ℓ2−η4∥J(θτ)T¯rτ∥2ℓ2∥¯rτ∥ℓ2 | | (6.29) |
| | | ≤[14α∥θτ−θ0∥ℓ2+∥¯rτ∥ℓ2]+η4⎡⎣α∥J(θτ)¯rτ∥ℓ2−∥J(θτ)T¯rτ∥2ℓ2∥¯rτ∥ℓ2⎤⎦ | | (6.30) |
| | | ≤[14α∥θτ−θ0∥ℓ2+∥¯rτ∥ℓ2]+η4∥J(θτ)¯rτ∥ℓ2[α−∥J(θτ)T¯rτ∥ℓ2∥¯rτ∥ℓ2] | | (6.31) |
| | | ≤14α∥θτ−θ0∥ℓ2+∥¯rτ∥ℓ2 | | (6.32) |
| | | ≤∥¯r0∥ℓ2≤∥r0∥ℓ2. | | (6.33) |
This establishes the final line of the induction and concludes the proof of the upper bound on ∥θτ−θ0∥ℓ2. To proceed, we shall bound the infinity norm of the residual. Using ΠS−(e)=ΠS−(r0)=¯eτ, note that
| | | | | |
| --- | --- | --- | --- | --- |
| | ∥f(θτ)−y−e∥ℓ∞ | =∥rτ−e∥ℓ∞ | | (6.34) |
| | | ≤∥¯rτ∥ℓ∞+∥e−¯eτ∥ℓ∞ | | (6.35) |
| | | =∥¯rτ∥ℓ∞+∥e−ΠS−(e)∥ℓ∞ | | (6.36) |
| | | =∥¯rτ∥ℓ∞+∥ΠS+(e)∥ℓ∞. | | (6.37) |
What remains is controlling ∥¯rτ∥ℓ∞. For this term, we shall use the naive upper bound ∥¯rτ∥ℓ2. Using the rate of convergence of the algorithm ([6.6](#S6.E6 "(6.6) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")), we have that
| | | |
| --- | --- | --- |
| | ∥¯rτ∥ℓ2≤(1−ηα24)τ∥r0∥ℓ2. | |
We wish the right hand side to be at most ν>0 where ν≥∥ΠS+(e)∥ℓ∞. This implies that we need
| | | | | |
| --- | --- | --- | --- | --- |
| | (1−ηα24)τ∥r0∥ℓ2≤ν | ⟺τlog(1−ηα24)≤log(ν∥r0∥ℓ2) | | (6.38) |
| | | ⟺τlog(11−ηα24)≥log(∥r0∥ℓ2ν) | | (6.39) |
To conclude, note that since ηα24≤1/8 (as η≤1/2β2), we have
| | | |
| --- | --- | --- |
| | log(11−ηα24)≥log(1+ηα24)≥ηα25. | |
Consequently, if τ≥5ηα2log(∥r0∥ℓ2ν), we find that ∥¯rτ∥ℓ∞≤∥¯rτ∥ℓ2≤ν, which guarantees
| | | |
| --- | --- | --- |
| | ∥rτ−e∥ℓ∞≤2ν. | |
which is the advertised result. If e is s sparse and S+ is diffused, applying Lemma [3.1](#S3.Thmtheorem1 "Definition 3.1 (Diffusedness) ‣ 3.2 Meta result on learning with label corruption ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") we have
| | | |
| --- | --- | --- |
| | ∥ΠS+(e)∥ℓ∞≤γ√sn∥e∥ℓ2. | |
####
6.1.2 Proof of Generic Lower Bound – Theorem [3.3](#S3.Thmtheorem3 "Theorem 3.3 ‣ 3.3 To (over)fit to corrupted labels requires straying far from initialization ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")
Proof Suppose θ∈D satisfies y=f(θ). Define Jτ=J((1−τ)θ+τθ0) and J=J(θ,θ0)=∫10Jτdτ. Since Jacobian is derivative of f, we have that
| | | |
| --- | --- | --- |
| | f(θ)−f(θ0)=∫10Jτ(θ−θ0)dτ=J(θ−θ0). | |
Now, define the matrices J+=ΠS+(J) and J−=ΠS−(J). Using Assumption [1](#Thmassumption1 "Assumption 1 (Bimodal Jacobian) ‣ 3.1 Bimodal jacobian structure ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), we bound the spectral norms via
| | | |
| --- | --- | --- |
| | ∥J+∥=supv∈S+,∥v∥ℓ2≤1∥JTv∥ℓ2≤β,∥J−∥=supv∈S−,∥v∥ℓ2≤1∥JTv∥ℓ2≤ϵ. | |
To proceed, projecting the residual on S+, we find for any θ with f(θ)=y
| | | |
| --- | --- | --- |
| | ΠS+(f(θ)−f(θ0))=ΠS+(J)(θ−θ0)⟹∥θ−θ0∥ℓ2≥∥ΠS+(f(θ)−f(θ0))∥ℓ2β≥E+β. | |
The identical argument for S− yields ∥θ−θ0∥ℓ2≥E−ϵ. Together this implies
| | | | |
| --- | --- | --- | --- |
| | ∥θ−θ0∥ℓ2≥max(E−ϵ,E+β). | | (6.40) |
If R is strictly smaller than right hand side, we reach a contradiction as θ∉D. If D=Rp, we still find ([6.40](#S6.E40 "(6.40) ‣ 6.1.2 Proof of Generic Lower Bound – Theorem 3.3 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")).
This shows that if ϵ is small and E− is nonzero, gradient descent has to traverse a long distance to find a good model. Intuitively, if the projection over the noise space indeed contains the label noise, we actually don’t want to fit that. Algorithmically, our idea fits the residual over the signal space and not worries about fitting over the noise space. Approximately speaking, this intuition corresponds to the ℓ2 regularized problem
| | | |
| --- | --- | --- |
| | minθL(θ)∥θ−θ0∥ℓ2≤R. | |
If we set R=E+β, we can hope that solution will learn only the signal and does not overfit to the noise. The next section builds on this intuition and formalizes our algorithmic guarantees.
###
6.2 Proofs for Neural Networks
Throughout, σmin(⋅) denotes the smallest singular value of a given matrix. We first introduce helpful definitions that will be used in our proofs.
######
Definition 6.5 (Support subspace)
Let {xi}ni=1 be an input dataset generated according to Definition [1.1](#S1.Thmtheorem1 "Definition 1.1 (Clusterable dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). Also let {˜xi}ni=1 be the associated cluster centers, that is, ˜xi=cℓ iff xi is from the ℓth cluster. We define the support subspace S+ as a subspace of dimension K, dictated by the cluster membership as follows. Let Λℓ⊂{1,…,n} be the set of coordinates i such that ~xi=cℓ. Then, S+ is characterized by
| | | |
| --- | --- | --- |
| | S+={v∈Rn ∣∣ vi1=vi2for alli1,i2∈Λℓand for%
all 1≤ℓ≤K}. | |
######
Definition 6.6 (Neural Net Jacobian)
Given input samples (xi)ni=1, form the input matrix X=[x1 … xn]T∈Rn×d. The Jacobian of the learning problem ([1.3](#S1.E3 "(1.3) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")), at a matrix W is denoted by J(W,X)∈Rn×kd and is given by
| | | |
| --- | --- | --- |
| | J(W,X)T=(diag(v)ϕ′(WXT))∗XT. | |
Here ∗ denotes the Khatri-Rao product.
The following theorem is borrowed from [[40](#bib.bib40)] and characterizes three key properties of the neural network Jacobian. These are smoothness, spectral norm, and minimum singular value at initialization which correspond to Lemmas 6.6, 6.7, and 6.8 in that paper.
######
Theorem 6.7 (Jacobian Properties at Cluster Center)
Suppose X=[x1 … xn]T∈Rn×d be an input dataset satisfying λ(X)>0. Suppose |ϕ′|,|ϕ′′|≤Γ. The Jacobian mapping with respect to the input-to-hidden weights obey the following properties.
* Smoothness is bounded by
| | | |
| --- | --- | --- |
| | ∥∥J(˜W,X)−J(W,X)∥∥≤Γ√k∥X∥∥∥˜W−W∥∥Ffor all˜W,W∈Rk×d. | |
* Top singular value is bounded by
| | | |
| --- | --- | --- |
| | ∥J(W,X)∥≤Γ∥X∥. | |
* Let C>0 be an absolute constant. As long as
| | | |
| --- | --- | --- |
| | k≥CΓ2logn∥X∥2λ(X) | |
At random Gaussian initialization W0∼N(0,1)k×d, with probability at least 1−1/K100, we have
| | | |
| --- | --- | --- |
| | σmin(J(W0,X))≥√λ(X)/2. | |
In our case, the Jacobian is not well-conditioned. However, it is pretty well-structured as described previously. To proceed, given a matrix X∈Rn×d and a subspace S⊂Rn, we define the minimum singular value of the matrix over this subspace by σmin(X,S) which is defined as
| | | |
| --- | --- | --- |
| | σmin(X,S)=sup∥v∥ℓ2=1,UUT=PS∥vTUTX∥ℓ2. | |
Here, PS∈Rn×n is the projection operator to the subspace. Hence, this definition essentially projects the matrix on S and then takes the minimum singular value over that projected subspace.
The following theorem states the properties of the Jacobian at a clusterable dataset.
######
Theorem 6.8 (Jacobian Properties at Clusterable Dataset)
Let input samples (xi)ni=1 be generated according to (ε0,δ) clusterable dataset model of Definition [1.1](#S1.Thmtheorem1 "Definition 1.1 (Clusterable dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") and define X=[x1 … xn]T. Let S+ be the support space and (~xi)ni=1 be the associated clean dataset as described by Definition [6.5](#S6.Thmtheorem5 "Definition 6.5 (Support subspace) ‣ 6.2 Proofs for Neural Networks ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). Set ~X=[~x1 … ~xn]T. Assume |ϕ′|,|ϕ′′|≤Γ and λ(C)>0. The Jacobian mapping at ~X with respect to the input-to-hidden weights obey the following properties.
* Smoothness is bounded by
| | | |
| --- | --- | --- |
| | | |
* Top singular value is bounded by
| | | |
| --- | --- | --- |
| | ∥∥J(W,~X)∥∥≤√cupnKΓ∥C∥. | |
* As long as
| | | |
| --- | --- | --- |
| | k≥CΓ2logK∥C∥2λ(C) | |
At random Gaussian initialization W0∼N(0,1)k×d, with probability at least 1−1/K100, we have
| | | |
| --- | --- | --- |
| | σmin(J(W0,~X),S+)≥√clownλ(C)2K | |
* The range space obeys range(J(W0,~X))⊂S+ where S+ is given by Definition [6.5](#S6.Thmtheorem5 "Definition 6.5 (Support subspace) ‣ 6.2 Proofs for Neural Networks ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks").
Proof Let J(W,C) be the Jacobian at the cluster center matrix. Applying Theorem [6.7](#S6.Thmtheorem7 "Theorem 6.7 (Jacobian Properties at Cluster Center) ‣ 6.2 Proofs for Neural Networks ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), this matrix already obeys the properties described in the conclusions of this theorem with desired probability (for the last conclusion). We prove our theorem by relating the cluster center Jacobian to the clean dataset Jacobian matrix J(W,~X).
Note that ~X is obtained by duplicating the rows of the cluster center matrix C. This implies that J(W,~X) is obtained by duplicating the rows of the cluster center Jacobian. The critical observation is that, by construction in Definition [1.1](#S1.Thmtheorem1 "Definition 1.1 (Clusterable dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), each row is duplicated somewhere between clown/K and cupn/K.
To proceed, fix a vector v and let ~p=J(W,~X)v∈Rn and p=J(W,C)v∈RK. Recall the definition of the support sets Λℓ from Definition [6.5](#S6.Thmtheorem5 "Definition 6.5 (Support subspace) ‣ 6.2 Proofs for Neural Networks ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). We have the identity
| | | |
| --- | --- | --- |
| | ~pi=pℓfor alli∈Λℓ. | |
This implies ~p∈S+ hence range(J(W,~X))⊂S+. Furthermore, the entries of ~p repeats the entries of p somewhere between clown/K and cupn/K. This implies that,
| | | |
| --- | --- | --- |
| | √cupnK∥p∥ℓ2≥∥~p∥ℓ2≥√clownK∥p∥ℓ2, | |
and establishes the upper and lower bounds on the singular values of J(W,~X) over S+ in terms of the singular values of J(W,C). Finally, the smoothness can be established similarly. Given matrices W,~W, the rows of the difference
| | | |
| --- | --- | --- |
| | ∥∥J(˜W,~X)−J(W,~X)∥∥ | |
is obtained by duplicating the rows of ∥∥J(˜W,C)−J(W,C)∥∥ by at most cupn/K times. Hence the spectral norm is scaled by at most √cupn/K.
######
Lemma 6.9 (Upper bound on initial misfit)
Consider a one-hidden layer neural network model of the form x↦vTϕ(Wx) where the activation ϕ has bounded derivatives obeying |ϕ(0)|,|ϕ′(z)|≤Γ. Suppose entries of v∈Rk are half 1/√k and half −1/√k so that ∥v∥ℓ2=1. Also assume we have n data points x1,x2,…,xn∈Rd with unit euclidean norm (∥xi∥ℓ2=1) aggregated as rows of a matrix X∈Rn×d and the corresponding labels given by y∈Rn generated accoring to (ρ,ε0=0,δ) noisy dataset (Definition [1.2](#S1.Thmtheorem2 "Definition 1.2 ((ρ,ε0,δ) corrupted dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")). Then for W0∈Rk×d with i.i.d. N(0,1) entries
| | | |
| --- | --- | --- |
| | ∥∥vTϕ(W0XT)−y∥∥ℓ2≤O(Γ√nlogK), | |
holds with probability at least 1−K−100.
Proof This lemma is based on a fairly straightforward union bound. First, by construction ∥y∥ℓ2≤√n. What remains is bounding ∥vTϕ(W0XT)∥ℓ2. Since ε0=0 there are K unique rows. We will show that each of the unique rows is bounded with probability 1−K−101 and union bounding will give the final result. Let w be a row of W0 and x be a row of X. Since ϕ is Γ Lipschitz and |ϕ(0)|≤Γ, each entry of ϕ(Xw) is O(Γ)-subgaussian. Hence vTϕ(W0x) is weighted average of k i.i.d. subgaussians which are entries of ϕ(W0x). Additionally it is zero mean since ∑ni=1vi=0. This means vTϕ(W0x) is also O(Γ) subgaussian and obeys
| | | |
| --- | --- | --- |
| | P(|vTϕ(W0x)|≥cΓ√logK)≤K−101, | |
for some constant c>0, concluding the proof.
####
6.2.1 Proof of Theorem [2.3](#S2.Thmtheorem3 "Theorem 2.3 (Training with perfectly clustered data) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")
We first prove a lemma regarding the projection of label noise on the cluster induced subspace.
###### Lemma 6.10
Let {(xi,yi)}ni=1 be an (ρ,ε0=0,δ) clusterable noisy dataset as described in Definition [1.2](#S1.Thmtheorem2 "Definition 1.2 ((ρ,ε0,δ) corrupted dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). Let {~yi}ni=1 be the corresponding noiseless labels. Let J(W,C) be the Jacobian at the cluster center matrix which is rank K and S+ be its column space. Then, the difference between noiseless and noisy labels satisfy the bound
| | | |
| --- | --- | --- |
| | ∥ΠS+(y−~y)∥ℓ∞≤2ρ. | |
Proof Let e=y−~y. Observe that by assumption, ℓth cluster has at most sℓ=ρnℓ errors. Let Iℓ denote the membership associated with cluster ℓ i.e. Iℓ⊂{1,…,n} and i∈Iℓ if and only if xi belongs to ℓth cluster. Let 1(ℓ)∈Rn be the indicator function of the ℓth class where ith entry is 1 if i∈Iℓ and 0 else for 1≤i≤n. Then, denoting the size of the ℓth cluster by nℓ, the projection to subspace S+ can be written as the P matrix where
| | | |
| --- | --- | --- |
| | P=K∑ℓ=11nℓ1(ℓ)1(ℓ)T. | |
Let eℓ be the error pattern associated with ℓth cluster i.e. eℓ is equal to e over Iℓ and zero outside. Since cluster membership is non-overlapping, we have that
| | | |
| --- | --- | --- |
| | Pe=K∑ℓ=11nℓ1(ℓ)1(ℓ)Teℓ. | |
Similarly since supports of 1(ℓ) are non-overlapping, we have that
| | | |
| --- | --- | --- |
| | ∥Pe∥ℓ∞=max1≤ℓ≤K1nℓ1(ℓ)1(ℓ)Teℓ. | |
Now, using ∥e∥ℓ∞≤2 (max distance between two labels), observe that
| | | |
| --- | --- | --- |
| | ∥1(ℓ)1(ℓ)Teℓ∥ℓ∞≤2∥1(ℓ)∥ℓ∞∥eℓ∥ℓ1=2∥eℓ∥ℓ1. | |
Since number of errors within cluster ℓ is at most nℓρ, we find that
| | | |
| --- | --- | --- |
| | ∥Pe∥ℓ∞=K∑ℓ=1∥1nℓ1(ℓ)1(ℓ)Teℓ∥ℓ∞≤∥eℓ∥ℓ1nℓ≤2ρ. | |
The final line yields the bound
| | | |
| --- | --- | --- |
| | ∥PS+(y−~y)∥ℓ∞=∥PS+(e)∥ℓ∞=∥Pe∥ℓ∞≤2ρ. | |
With this, we are ready to state the proof of Theorem [2.3](#S2.Thmtheorem3 "Theorem 2.3 (Training with perfectly clustered data) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks").
Proof The proof is based on the meta Theorem [3.2](#S3.Thmtheorem2 "Theorem 3.2 (Gradient descent with label corruption) ‣ 3.2 Meta result on learning with label corruption ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), hence we need to verify its Assumptions [2](#Thmassumption2 "Assumption 2 (Low-rank Jacobian) ‣ 3.1 Bimodal jacobian structure ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") and [3](#Thmassumption3 "Assumption 3 (Smoothness) ‣ 3.2 Meta result on learning with label corruption ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") with proper values and apply Lemma [6.10](#S6.Thmtheorem10 "Lemma 6.10 ‣ 6.2.1 Proof of Theorem 2.3 ‣ 6.2 Proofs for Neural Networks ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") to get ∥PS+(e)∥ℓ∞. We will also make significant use of Corollary [6.8](#S6.Thmtheorem8 "Theorem 6.8 (Jacobian Properties at Clusterable Dataset) ‣ 6.2 Proofs for Neural Networks ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks").
Using Corollary [6.8](#S6.Thmtheorem8 "Theorem 6.8 (Jacobian Properties at Clusterable Dataset) ‣ 6.2 Proofs for Neural Networks ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), Assumption [3](#Thmassumption3 "Assumption 3 (Smoothness) ‣ 3.2 Meta result on learning with label corruption ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") holds with L=Γ√cupnkK∥C∥ where L is the Lipschitz constant of Jacobian spectrum. Denote rτ=f(Wτ)−y. Using Lemma [6.9](#S6.Thmtheorem9 "Lemma 6.9 (Upper bound on initial misfit) ‣ 6.2 Proofs for Neural Networks ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") with probability 1−K−100, we have that ∥r0∥ℓ2=∥y−f(W0)∥ℓ2≤Γ√c0nlogK/128 for some c0>0. Corollary [6.8](#S6.Thmtheorem8 "Theorem 6.8 (Jacobian Properties at Clusterable Dataset) ‣ 6.2 Proofs for Neural Networks ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") guarantees a uniform bound for β, hence in Assumption [2](#Thmassumption2 "Assumption 2 (Low-rank Jacobian) ‣ 3.1 Bimodal jacobian structure ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), we pick
| | | |
| --- | --- | --- |
| | β≤√cupnKΓ∥C∥. | |
We shall also pick the minimum singular value over S+ to be
| | | |
| --- | --- | --- |
| | α=α02whereα0=√clownλ(C)2K, | |
We wish to verify Assumption [2](#Thmassumption2 "Assumption 2 (Low-rank Jacobian) ‣ 3.1 Bimodal jacobian structure ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") over the radius of
| | | |
| --- | --- | --- |
| | R=4∥f(W0)−y∥ℓ2α≤Γ√c0nlogK/8α=Γ
⎷c0nlogK/2clownλ(C)2K=Γ√c0KlogKclowλ(C), | |
neighborhood of W0. What remains is ensuring that Jacobian over S+ is lower bounded by α. Our choice of k guarantees that at the initialization, with probability 1−K−100, we have
| | | |
| --- | --- | --- |
| | σmin(J(W0,X),S+)≥α0. | |
Suppose LR≤α=α0/2. Using triangle inequality on Jacobian spectrum, for any W∈D, using ∥W−W0∥F≤R, we would have
| | | |
| --- | --- | --- |
| | σmin(J(W,X),S+)≥σmin(J(W0,X),S+)−LR≥α0−α=α. | |
Now, observe that
| | | | |
| --- | --- | --- | --- |
| | LR=Γ√cupnkK∥C∥Γ√c0Klog(K)clowλ(C)=Γ2∥C∥√cupc0nlogKclowkλ(C)≤α02=√clownλ(C)8K, | | (6.41) |
as k satisfies
| | | |
| --- | --- | --- |
| | k≥O(Γ4∥C∥2cupKlog(K)c2lowλ(C)2)≥O(Γ4Klog(K)∥C∥2λ(C)2). | |
Finally, since LR=4L∥r0∥ℓ2/α≤α, the learning rate is
| | | |
| --- | --- | --- |
| | | |
Overall, the assumptions of Theorem [3.2](#S3.Thmtheorem2 "Theorem 3.2 (Gradient descent with label corruption) ‣ 3.2 Meta result on learning with label corruption ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") holds with stated α,β,L with probability 1−2K−100 (union bounding initial residual and minimum singular value events). This implies for all τ>0 the distance of current iterate to initial obeys
| | | |
| --- | --- | --- |
| | ∥Wτ−W0∥F≤R. | |
The final step is the properties of the label corruption. Using Lemma [6.10](#S6.Thmtheorem10 "Lemma 6.10 ‣ 6.2.1 Proof of Theorem 2.3 ‣ 6.2 Proofs for Neural Networks ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), we find that
| | | |
| --- | --- | --- |
| | ∥ΠS+(~y−y)∥ℓ∞≤2ρ. | |
Substituting the values corresponding to α,β,L yields that, for all gradient iterations with
| | | |
| --- | --- | --- |
| | 5ηα2log(∥r0∥ℓ22ρ)≤5ηα2log(Γ√c0nlogK/322ρ)=O(Kηnλ(C)log(Γ√nlogKρ))≤τ, | |
denoting the clean labels by ~y and applying Theorem [3.2](#S3.Thmtheorem2 "Theorem 3.2 (Gradient descent with label corruption) ‣ 3.2 Meta result on learning with label corruption ‣ 3 Technical Approach and General Theory ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), we have that, the infinity norm of the residual obeys (using ∥ΠS+(e)∥ℓ∞≤2ρ)
| | | |
| --- | --- | --- |
| | ∥f(W)−~y∥ℓ∞≤4ρ. | |
This implies that if ρ≤δ/8, the network will miss the correct label by at most δ/2, hence all labels (including noisy ones) will be correctly classified.
####
6.2.2 Proof of Theorem [2.4](#S2.Thmtheorem4 "Theorem 2.4 ‣ 2.2 To (over)fit to corrupted labels requires straying far from initialization ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")
Consider
| | | |
| --- | --- | --- |
| | f(W,x)=vTϕ(Wx) | |
and note that
| | | |
| --- | --- | --- |
| | ∇xf(W,x)=WTdiag(ϕ′(Wx))v | |
Thus
| | | | |
| --- | --- | --- | --- |
| | ∂∂xf(W,x)u= | vTdiag(ϕ′(Wx))Wu | |
| | = | k∑ℓ=1vℓϕ′(⟨wℓ,x⟩)wTℓu | |
Thus
| | | |
| --- | --- | --- |
| | ∇wℓ(∂∂xf(W,x)u)=vℓ(ϕ′′(wTℓx)(wTℓu)x+ϕ′(wTℓx)u) | |
Thus, denoting vectorization of a matrix by vect(⋅)
| | | | |
| --- | --- | --- | --- |
| | vect(U)T(∂∂vect(W)∂∂xf(W,x))u= | k∑ℓ=1vℓ(ϕ′′(wTℓx)(wTℓu)(uTℓx)+ϕ′(wTℓx)(uTℓu)) | |
| | = | uTWTdiag(v)diag(ϕ′′(Wx))Ux+vT%
diag(ϕ′(Wx))Uu | |
Thus by the general mean value theorem there exists a point (˜W,˜x) in the square (W0,x1),(W0,x2),(W,x1) and (W,x2) such that
| | | |
| --- | --- | --- |
| | (f(W,x2)−f(W0,x2))−(f(W,x1)−f(W0,x1)) | |
| | | |
Using the above we have that
| | | | |
| --- | --- | --- | --- |
| | ∣∣(f(W,x2)−f(W0,x2)) | −(f(W,x1)−f(W0,x1))∣∣ | |
| | (a)≤ | ∣∣(x2−x1)T˜WTdiag% (v)diag(ϕ′′(˜W˜x))(W−W0)˜x∣∣ | |
| | | +∣∣vTdiag(ϕ′(˜W˜x))(W−W0)(x2−x1)∣∣ | |
| | (b)≤ | (∥v∥ℓ∞∥˜x∥ℓ2∥∥˜W∥∥+∥v∥ℓ2)Γ∥x2−x1∥ℓ2∥W−W0∥ | |
| | (c)≤ | (1√k∥˜x∥ℓ2∥∥˜W∥∥+1)Γ∥x2−x1∥ℓ2∥W−W0∥ | |
| | (d)≤ | | |
| | (e)≤ | (1√k∥W0∥+1√k∥∥˜W−W0∥∥+1)Γ∥x2−x1∥ℓ2∥W−W0∥ | |
| | (f)≤ | (1√k∥W0∥+1√k∥∥˜W−W0∥∥F+1)Γ∥x2−x1∥ℓ2∥W−W0∥ | |
| | (g)≤ | (1√k∥∥˜W−W0∥∥F+3+2√dk)Γ∥x2−x1∥ℓ2∥W−W0∥ | |
| | (h)≤ | CΓ∥x2−x1∥ℓ2∥W−W0∥ | | (6.42) |
Here, (a) follows from the triangle inequality, (b) from simple algebraic manipulations along with the fact that |ϕ′(z)|≤Γ and |ϕ′′(z)|≤Γ, (c) from the fact that vℓ=±1√k, (d) from ∥x2∥ℓ2=∥x1∥ℓ2=1 which implies ∥˜x∥ℓ2≤1, (e) from triangular inequality, (f) from the fact that Frobenius norm dominates the spectral norm, (g) from the fact that with probability at least 1−2e−(d+k), ∥W0∥≤2(√k+√d), and (h) from the fact that and k≥cd.
Next we note that for a Gaussian random vector g∼N(0,Id) we have
| | | | |
| --- | --- | --- | --- |
| | ∥ϕ(gTx2)−ϕ(gTx1)∥ψ2= | ∥ϕ(gTx2)−ϕ(gTx1)∥ψ2 | |
| | = | ∥ϕ′(tgTx2+(1−t)gTx1)gT(x2−x1)∥ψ2 | |
| | ≤ | Γ∥gT(x2−x1)∥ψ2 | |
| | ≤ | cΓ∥x2−x1∥ℓ2. | | (6.43) |
Also note that
| | | | |
| --- | --- | --- | --- |
| | f(W0,x2)−f(W0,x1)= | vT(ϕ(W0x2)−ϕ(W0x1)) | |
| | ∼ | k∑ℓ=1vℓ(ϕ(gTℓx2)−ϕ(gTℓx1)) | |
where g1,g2,…,gk are i.i.d. vectors with N(0,Id) distribution. Also for v obeying 1Tv=0 this random variable has mean zero. Hence, using the fact that weighted sum of subGaussian random variables are subgaussian combined with ([B](#A2.Ex13 "Appendix B Single label perturbation ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) we conclude that f(W0,x2)−f(W0,x1) is also subGaussian obeying ∥f(W0,x2)−f(W0,x1)∥ψ2≤cΓ∥v∥ℓ2∥x2−x1∥ℓ2. Thus
| | | | |
| --- | --- | --- | --- |
| | |f(W0,x2)−f(W0,x1)|≤ctΓ∥v∥ℓ2∥x2−x1∥ℓ2=ctΓ∥x2−x1∥ℓ2, | | (6.44) |
with probability at least 1−e−t22.
Now combining ([B](#A2.Ex5 "Appendix B Single label perturbation ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) and ([B.3](#A2.E3 "(B.3) ‣ Appendix B Single label perturbation ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) we have
| | | | |
| --- | --- | --- | --- |
| | δ≤ | |y2−y2| | |
| | = | |f(W,x1)−f(W,x2)| | |
| | = | ∣∣vT(ϕ(Wx2)−ϕ(Wx1))∣∣ | |
| | ≤ | |(f(W,x2)−f(W0,x2))−(f(W,x1)−f(W0,x1))|+∣∣vT(ϕ(W0x2)−ϕ(W0x1))∣∣ | |
| | ≤ | CΓ∥x2−x1∥ℓ2∥W−W0∥+ctΓ∥x2−x1∥ℓ2 | |
| | ≤ | CΓε0(∥W−W0∥+11000t) | |
Thus
| | | |
| --- | --- | --- |
| | ∥W−W0∥≥δCΓε0−t1000, | |
with high probability.
###
6.3 Perturbation analysis for perfectly clustered data (Proof of Theorem [2.2](#S2.Thmtheorem2 "Theorem 2.2 (Robust learning with early stopping-simplified) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"))
Denote average neural net Jacobian at data X via
| | | |
| --- | --- | --- |
| | J(W1,W2,X)=∫10J(αW1+(1−α)W2,X)dα. | |
######
Lemma 6.11 (Perturbed Jacobian Distance)
Let X=[x1 … xn]T be the input matrix obtained from Definition [1.1](#S1.Thmtheorem1 "Definition 1.1 (Clusterable dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). Let ~X be the noiseless inputs where ~xi is the cluster center corresponding to xi. Given weight matrices W1,W2,~W1,~W2, we have that
| | | |
| --- | --- | --- |
| | ∥J(W1,W2,X)−J(~W1,~W2,~X)∥≤Γ√n(∥~W1−W1∥F+∥~W2−W2∥F2√k+ε0). | |
Proof Given W,~W, we write
| | | |
| --- | --- | --- |
| | ∥J(W,X)−J(~W,~X)∥≤∥J(W,X)−J(~W,X)∥+∥J(~W,X)−J(~W,~X)∥. | |
We first bound
| | | | | |
| --- | --- | --- | --- | --- |
| | ∥J(W,X)−J(~W,X)∥ | =∥diag(v)ϕ′(WXT)∗XT−diag(v)ϕ′(~WXT)∗XT∥ | | (6.45) |
| | | =1√k∥(ϕ′(WXT)−ϕ′(~WXT))∗XT∥ | | (6.46) |
To proceed, we use the results on the spectrum of Hadamard product of matrices due to Schur [[46](#bib.bib46)]. Given A∈Rk×d,B∈Rn×d matrices where B has unit length rows, we have
| | | |
| --- | --- | --- |
| | ∥A∗B∥=√∥(A∗B)T(A∗B)∥=√∥(ATA)⊙(BTB)∥≤√∥ATA∥=∥A∥. | |
Substituting A=ϕ′(WXT)−ϕ′(~WXT) and B=XT, we find
| | | |
| --- | --- | --- |
| | ∥(ϕ′(WXT)−ϕ′(~WXT))∗XT∥≤∥ϕ′(WXT)−ϕ′(~WXT)∥≤Γ∥(~W−W)XT∥F≤Γ√n∥~W−W∥F. | |
Secondly,
| | | |
| --- | --- | --- |
| | ∥J(~W,X)−J(~W,~X)∥=1√k∥ϕ′(~WXT)∗(X−~X)∥ | |
where reusing Schur’s result and boundedness of |ϕ′|≤Γ
| | | |
| --- | --- | --- |
| | ∥ϕ′(~WXT)∗(X−~X)∥≤Γ√k∥X−~X∥≤Γ√knε0. | |
Combining both estimates yields
| | | |
| --- | --- | --- |
| | ∥J(W,X)−J(~W,~X)∥≤Γ√n(∥~W−W∥F√k+ε0). | |
To get the result on ∥J(W1,W2,X)−J(~W1,~W2,~X)∥, we integrate
| | | | | |
| --- | --- | --- | --- | --- |
| | ∥J(W1,W2,X)−J(~W1,~W2,~X)∥ | ≤∫10Γ√n(∥α(~W1−W1)+(1−α)(~W1−W1)∥F√k+ε0)dα | | (6.47) |
| | | ≤Γ√n(∥~W1−W1∥F+∥~W2−W2∥F2√k+ε0). | | (6.48) |
######
Theorem 6.12 (Robustness of gradient path to perturbation)
Generate samples (xi,yi)ni=1 according to (ρ,ε0,δ) noisy dataset model and form the concatenated input/labels X∈Rd×n,y∈Rn. Let ~X be the clean input sample matrix obtained by mapping xi to its associated cluster center. Set learning rate η≤K2cupnΓ2∥C∥2 and maximum iterations τ0 satisfying
| | | |
| --- | --- | --- |
| | ητ0=C1Knλ(C)log(Γ√nlogKρ). | |
where C1≥1 is a constant of our choice. Suppose input noise level ε0 and number of hidden nodes obey
| | | |
| --- | --- | --- |
| | ε0≤O(λ(C)Γ2Klog(Γ√nlogKρ))andk≥O(Γ10K2∥C∥4λ(C)4log(Γ√nlogKρ)6). | |
Set W0i.i.d.∼N(0,1). Starting from W0=~W0 consider the gradient descent iterations over the losses
| | | | |
| --- | --- | --- | --- |
| | Wτ+1=Wτ−η∇L(Wτ)whereL(W)=12n∑i=1(yi−f(W,~xi))2 | | (6.49) |
| | ~Wτ+1=~Wτ−∇~L(~Wτ)where~L(~W)=12n∑i=1(yi−f(~W,~xi))2 | | (6.50) |
Then, for all gradient descent iterations satisfying τ≤τ0, we have that
| | | |
| --- | --- | --- |
| | ∥f(Wτ,X)−f(~Wτ,~X)∥ℓ2≤c0τηε0Γ3n3/2√logK, | |
and
| | | |
| --- | --- | --- |
| | ∥Wτ−~Wτ∥F≤O(τηε0Γ4Knλ(C)log(Γ√nlogKρ)2). | |
Proof Since ~Wτ are the noiseless iterations, with probability 1−2K−100, the statements of Theorem [2.3](#S2.Thmtheorem3 "Theorem 2.3 (Training with perfectly clustered data) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") hold on ~Wτ.
To proceed with proof, we first introduce short hand notations. We use
| | | | |
| --- | --- | --- | --- |
| | ri=f(Wi,X)−y, ~ri=f(~Wi,~Xi)−y | | (6.51) |
| | Ji=J(Wi,X), Ji+1,i=J(Wi+1,Wi,X), ~Ji=J(~Wi,~X), ~Ji+1,i=J(~Wi+1,~Wi,~X) | | (6.52) |
| | di=∥Wi−~Wi∥F, pi=∥ri−~ri∥F, β=Γ∥C∥√cupn/K, L=Γ∥C∥√cupn/Kk. | | (6.53) |
Here β is the upper bound on the Jacobian spectrum and L is the spectral norm Lipschitz constant as in Theorem [6.8](#S6.Thmtheorem8 "Theorem 6.8 (Jacobian Properties at Clusterable Dataset) ‣ 6.2 Proofs for Neural Networks ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). Applying Lemma [6.11](#S6.Thmtheorem11 "Lemma 6.11 (Perturbed Jacobian Distance) ‣ 6.3 Perturbation analysis for perfectly clustered data (Proof of Theorem 2.2) ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), note that
| | | | |
| --- | --- | --- | --- |
| | ∥J(Wτ,X)−J(~Wτ,~X)∥≤L∥~W−W∥F+Γ√nε0≤Ldτ+Γ√nε0 | | (6.54) |
| | ∥J(Wτ+1,Wτ,X)−J(~Wτ+1,~Wτ,~X)∥≤L(dτ+dτ+1)/2+Γ√nε0. | | (6.55) |
Following this and using that noiseless residual is non-increasing and satisfies ∥~rτ∥ℓ2≤∥~r0∥ℓ2, note that parameter satisfies
| | | | |
| --- | --- | --- | --- |
| | Wi+1=Wi−ηJiri,~Wi+1=~Wi−η~JTi~ri | | (6.56) |
| | ∥Wi+1−~Wi+1∥F≤∥Wi−~Wi∥F+η∥Ji−~Ji∥∥~ri∥ℓ2+η∥Ji∥∥ri−~ri∥ℓ2 | | (6.57) |
| | di+1≤di+η((Ldi+Γ√nε0)∥~r0∥ℓ2+βpi), | | (6.58) |
and residual satisfies (using I⪰~Ji+1,i~JTi/β2⪰0)
| | | | | |
| --- | --- | --- | --- | --- |
| | ri+1 | =ri−ηJi+1,iJTiri⟹ | | (6.59) |
| | ri+1−~ri+1 | =(ri−~ri)−η(Ji+1,i−~Ji+1,i)JTiri−η~Ji+1,i(JTi−~JTi)ri−η~Ji+1,i~JTi(ri−~ri). | | (6.60) |
| | ri+1−~ri+1 | =(I−η~Ji+1,i~JTi)(ri−~ri)−η(Ji+1,i−~Ji+1,i)JTiri−η~Ji+1,i(JTi−~JTi)ri. | | (6.61) |
| | ∥ri+1−~ri+1∥ℓ2 | ≤∥ri−~ri∥ℓ2+ηβ∥ri∥ℓ2(L(3dτ+dτ+1)/2+2Γ√nε0). | | (6.62) |
| | ∥ri+1−~ri+1∥ℓ2 | ≤∥ri−~ri∥ℓ2+ηβ(∥~r0∥ℓ2+pi)(L(3dτ+dτ+1)/2+2Γ√nε0). | | (6.63) |
where we used ∥ri∥ℓ2≤pi+∥~r0∥ℓ2 and ∥(I−η~Ji+1,i~JTi)v∥ℓ2≤∥v∥ℓ2 which follows from ([6.28](#S6.E28 "(6.28) ‣ 6.1.1 Proof of Theorem 3.2 ‣ 6.1 Proofs for General Theory ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")). This implies
| | | | |
| --- | --- | --- | --- |
| | pi+1≤pi+ηβ(∥~r0∥ℓ2+pi)(L(3dτ+dτ+1)/2+2Γ√nε0). | | (6.64) |
Finalizing proof: Next, using Lemma [6.9](#S6.Thmtheorem9 "Lemma 6.9 (Upper bound on initial misfit) ‣ 6.2 Proofs for Neural Networks ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), we have ∥~r0∥ℓ2≤Θ:=C0Γ√nlogK. We claim that if
| | | | |
| --- | --- | --- | --- |
| | | | (6.65) |
(where we used ητ0β2≥1), for all t≤τ0, we have that
| | | | |
| --- | --- | --- | --- |
| | pt≤8tηΓ√nε0Θβ≤Θ,dt≤2tηΓ√nε0Θ(1+8ητ0β2). | | (6.66) |
The proof is by induction. Suppose it holds until t≤τ0−1. At t+1, via ([6.58](#S6.E58 "(6.58) ‣ 6.3 Perturbation analysis for perfectly clustered data (Proof of Theorem 2.2) ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) we have that
| | | |
| --- | --- | --- |
| | dt+1−dtη≤LdtΘ+Γ√nε0Θ+8τ0ηβ2Γ√nε0Θ?≤2Γ√nε0Θ(1+8ητ0β2). | |
Right hand side holds since L≤12ητ0Θ. This establishes the induction for dt+1.
Next, we show the induction on pt. Observe that 3dt+dt+1≤10τ0ηΓ√nε0Θ(1+8ητ0β2). Following ([6.64](#S6.E64 "(6.64) ‣ 6.3 Perturbation analysis for perfectly clustered data (Proof of Theorem 2.2) ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) and using pt≤Θ, we need
| | | | | |
| --- | --- | --- | --- | --- |
| | pt+1−ptη≤βΘ(L(3dτ+dτ+1)+4Γ√nε0) | ?≤8Γ√nε0Θβ⟺ | | (6.67) |
| | L(3dτ+dτ+1)+4Γ√nε0 | ?≤8Γ√nε0⟺ | | (6.68) |
| | L(3dτ+dτ+1) | ?≤4Γ√nε0⟺ | | (6.69) |
| | 10Lτ0η(1+8ητ0β2)Θ | ?≤4⟺ | | (6.70) |
| | L | ?≤25τ0η(1+8ητ0β2)Θ. | | (6.71) |
Concluding the induction since L satisfies the final line. Consequently, for all 0≤t≤τ0, we have that
| | | |
| --- | --- | --- |
| | pt≤8tηΓ√nε0Θβ=c0tηε0Γ3n3/2√logK. | |
Next, note that, condition on L is implied by
| | | | | |
| --- | --- | --- | --- | --- |
| | k | ≥1000Γ2n(τ0ηβ)4Θ2 | | (6.72) |
| | | =O(Γ4nK4n4λ(C)4log(Γ√nlogKρ)4(∥C∥Γ√n/K)4(Γ√nlogK)2) | | (6.73) |
| | | =O(Γ10K2∥C∥4λ(C)4log(Γ√nlogKρ)4log2(K)) | | (6.74) |
which is implied by k≥O(Γ10K2∥C∥4λ(C)4log(Γ√nlogKρ)6).
Finally, following ([6.66](#S6.E66 "(6.66) ‣ 6.3 Perturbation analysis for perfectly clustered data (Proof of Theorem 2.2) ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")), distance satisfies
| | | |
| --- | --- | --- |
| | dt≤20tη2τ0Γ√nε0Θβ2≤O(tηε0Γ4Knλ(C)log(Γ√nlogKρ)2). | |
####
6.3.1 Completing the Proof of Theorem [2.2](#S2.Thmtheorem2 "Theorem 2.2 (Robust learning with early stopping-simplified) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")
Theorem [2.2](#S2.Thmtheorem2 "Theorem 2.2 (Robust learning with early stopping-simplified) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") is obtained by the theorem below when we ignore the log terms, and treating Γ, λ(C) as constants. We also plug in η=K2cupnΓ2∥C∥2.
######
Theorem 6.13 (Training neural nets with corrupted labels)
Let {(xi,yi)}ni=1 be an (s,ε0,δ) clusterable noisy dataset as described in Definition [1.2](#S1.Thmtheorem2 "Definition 1.2 ((ρ,ε0,δ) corrupted dataset) ‣ 1.3 Models ‣ 1 Introduction ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"). Let {~yi}ni=1 be the corresponding noiseless labels. Suppose |ϕ(0)|,|ϕ′|,|ϕ′′|≤Γ for some Γ≥1, input noise and the number of hidden nodes satisfy
| | | |
| --- | --- | --- |
| | ε0≤O(λ(C)Γ2Klog(Γ√nlogKρ))andk≥O(Γ10K2∥C∥4λ(C)4log(Γ√nlogKρ)6). | |
where C∈RK×d is the matrix of cluster centers. Set learning rate η≤K2cupnΓ2∥C∥2 and randomly initialize W0i.i.d.∼N(0,1). With probability 1−3/K100, after τ=O(Kηnλ(C))log(Γ√nlogKρ) iterations, for all 1≤i≤n, we have that
* The per sample normalized ℓ2 norm bound satisfies
| | | |
| --- | --- | --- |
| | ∥f(Wτ,X)−~y∥ℓ2√n≤4ρ+cε0Γ3K√logKλ(C)log(Γ√nlogKρ). | |
* Suppose ρ≤δ/8. Denote the total number of prediction errors with respect to true labels (i.e. not satisfying ([2.2](#S2.E2 "(2.2) ‣ 2nd item ‣ Theorem 2.3 (Training with perfectly clustered data) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"))) by err(W). With same probability, err(Wτ) obeys
| | | |
| --- | --- | --- |
| | err(Wτ)n≤cε0KδΓ3√logKλ(C)log(Γ√nlogKρ). | |
* Suppose ρ≤δ/8 and ε0≤c′δλ(C)2Γ5K2log(Γ√nlogKρ)3, then, Wτ assigns all input samples xi to correct ground truth labels ~yi i.e. ([2.2](#S2.E2 "(2.2) ‣ 2nd item ‣ Theorem 2.3 (Training with perfectly clustered data) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")) holds for all 1≤i≤n.
* Finally, for any iteration count 0≤t≤τ the total distance to initialization is bounded as
| | | | |
| --- | --- | --- | --- |
| | ∥Wτ−W0∥F≤O(Γ√KlogKλ(C)+tηε0Γ4Knλ(C)log(Γ√nlogKρ)2). | | (6.75) |
Proof Note that proposed number of iterations τ is set so that it is large enough for Theorem [2.3](#S2.Thmtheorem3 "Theorem 2.3 (Training with perfectly clustered data) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") to achieve small error in the clean input model (ε0=0) and it is small enough so that Theorem [6.12](#S6.Thmtheorem12 "Theorem 6.12 (Robustness of gradient path to perturbation) ‣ 6.3 Perturbation analysis for perfectly clustered data (Proof of Theorem 2.2) ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") is applicable. In light of Theorems [6.12](#S6.Thmtheorem12 "Theorem 6.12 (Robustness of gradient path to perturbation) ‣ 6.3 Perturbation analysis for perfectly clustered data (Proof of Theorem 2.2) ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") and [2.3](#S2.Thmtheorem3 "Theorem 2.3 (Training with perfectly clustered data) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") consider two gradient descent iterations starting from W0 where one uses clean dataset (as if input vectors are perfectly cluster centers) ~X and other uses the original dataset X. Denote the prediction residual vectors of the noiseless and original problems at time τ with respect true ground truth labels ~y by ~rτ=f(~Wτ,~X)−~y and rτ=f(Wτ,X)−~y respectively. Applying Theorems [6.12](#S6.Thmtheorem12 "Theorem 6.12 (Robustness of gradient path to perturbation) ‣ 6.3 Perturbation analysis for perfectly clustered data (Proof of Theorem 2.2) ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") and [2.3](#S2.Thmtheorem3 "Theorem 2.3 (Training with perfectly clustered data) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks"), under the stated conditions, we have that
| | | | | |
| --- | --- | --- | --- | --- |
| | ∥~rτ∥ℓ∞ | ≤4ρand | | (6.76) |
| | ∥rτ−~rτ∥ℓ2 | ≤cε0Knλ(C)log(Γ√nlogKρ)Γ3n3/2√logK | | (6.77) |
| | | =cε0Γ3K√nlogKλ(C)log(Γ√nlogKρ) | | (6.78) |
First statement: The latter two results imply the ℓ2 error bounds on rτ=f(Wτ,X)−~y.
Second statement: To assess the classification rate we count the number of entries of rτ=f(Wτ,X)−~y that is larger than the class margin δ/2 in absolute value. Suppose ρ≤δ/8. Let I be the set of entries obeying this. For i∈I using ∥~rτ∥ℓ∞≤4ρ≤δ/4, we have
| | | |
| --- | --- | --- |
| | |rτ,i|≥δ/2⟹|rτ,i|+|rτ,i−¯rτ,i|≥δ/2⟹|rτ,i−¯rτ,i|≥δ/4. | |
Consequently, we find that
| | | |
| --- | --- | --- |
| | ∥rτ−¯rτ∥ℓ1≥|I|δ/4. | |
Converting ℓ2 upper bound on the left hand side to ℓ1, we obtain
| | | |
| --- | --- | --- |
| | c√nε0Γ3K√nlogKλ(C)log(Γ√nlogKρ)≥|I|δ/4. | |
Hence, the total number of errors is at most
| | | |
| --- | --- | --- |
| | |I|≤c′ε0nKδΓ3√logKλ(C)log(Γ√nlogKρ) | |
Third statement – Showing zero error: Pick an input sample x from dataset and its clean version ~x. We will argue that f(Wτ,x)−f(~Wτ,~x) is smaller than δ/4 when ε0 is small enough. We again write
| | | |
| --- | --- | --- |
| | |f(Wτ,x)−f(~Wτ,~x)|≤|f(Wτ,x)−f(~Wτ,x)|+|f(~Wτ,x)−f(~Wτ,~x)| | |
The first term can be bounded via
| | | | | |
| --- | --- | --- | --- | --- |
| | |f(Wτ,x)−f(~Wτ,x)| | =|vTϕ(Wτx)−vTϕ(~Wτx)|≤∥v∥ℓ2∥ϕ(Wτx)−ϕ(~Wτx)∥ℓ2 | | (6.79) |
| | | ≤Γ∥Wτ−~Wτ∥F | | (6.80) |
| | | ≤O(ε0Γ5K2λ(C)2log(Γ√nlogKρ)3) | | (6.81) |
Next, we need to bound
| | | | | |
| --- | --- | --- | --- | --- |
| | |f(~Wτ,x)−f(~Wτ,~x)| | ≤|vTϕ(~Wτx)−vTϕ(~Wτ~x)| | | (6.82) |
where ∥~Wτ−W0∥F≤O(Γ√KlogKλ(C)), ∥x−~x∥ℓ2≤ε0 and W0i.i.d.∼N(0,I). Consequently, using by assumption we have
| | | |
| --- | --- | --- |
| | k≥O(∥~W−W0∥2F)=O(Γ2KlogKλ(C)), | |
and applying an argument similar to Theorem [2.4](#S2.Thmtheorem4 "Theorem 2.4 ‣ 2.2 To (over)fit to corrupted labels requires straying far from initialization ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") (detailed in Appendix [B](#A2 "Appendix B Single label perturbation ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")), with probability at 1−1/n100, we find that
| | | | | |
| --- | --- | --- | --- | --- |
| | |f(~Wτ,x)−f(~Wτ,~x)| | ≤C′Γε0(∥~Wτ−W0∥F+√logn) | | (6.83) |
| | | CΓε0(Γ√KlogKλ(C)+√logn). | | (6.84) |
Combining the two bounds above we get
| | | | | |
| --- | --- | --- | --- | --- |
| | |f(Wτ,x)−f(~Wτ,~x)| | ≤ε0O(Γ5K2λ(C)2log(Γ√nlogKρ)3+Γ(Γ√KlogKλ(C)+√logn)) | | (6.85) |
| | | ≤ε0O(Γ5K2λ(C)2log(Γ√nlogKρ)3). | | (6.86) |
Hence, if ε0≤c′δλ(C)2Γ5K2log(Γ√nlogKρ)3, we obtain that, for all 1≤i≤n,
| | | |
| --- | --- | --- |
| | |f(Wτ,xi)−~yi|<|f(~Wτ,~xi)−f(Wτ,xi)|+|f(~Wτ,~xi)−~yi|~yi|≤4ρ+δ4. | |
If ρ≤δ/8, we obtain
| | | |
| --- | --- | --- |
| | |f(Wτ,xi)−~yi|<δ/2 | |
hence, Wτ outputs the correct decision for all samples.
Fourth statement – Distance: This follows from the triangle inequality
| | | |
| --- | --- | --- |
| | ∥Wτ−W0∥F≤∥Wτ−~Wτ∥F+∥~Wτ−W0∥F | |
We have that right hand side terms are at most O(Γ√KlogKλ(C)) and O(tηε0Γ4Knλ(C)log(Γ√nlogKρ)2) from Theorems [6.12](#S6.Thmtheorem12 "Theorem 6.12 (Robustness of gradient path to perturbation) ‣ 6.3 Perturbation analysis for perfectly clustered data (Proof of Theorem 2.2) ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") and [2.3](#S2.Thmtheorem3 "Theorem 2.3 (Training with perfectly clustered data) ‣ 2.1 Robustness of neural network to label noise with early stopping ‣ 2 Main results ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks") respectively. This implies ([6.75](#S6.E75 "(6.75) ‣ 4th item ‣ Theorem 6.13 (Training neural nets with corrupted labels) ‣ 6.3.1 Completing the Proof of Theorem 2.2 ‣ 6.3 Perturbation analysis for perfectly clustered data (Proof of Theorem 2.2) ‣ 6 Proofs ‣ Gradient Descent with Early Stopping is Provably Robust to Label Noise for Overparameterized Neural Networks")).
Acknowledgements
----------------
M. Soltanolkotabi is supported by the Packard Fellowship in Science and Engineering, a Sloan Research Fellowship in Mathematics, an NSF-CAREER under award #1846369, the Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP)
under award #FA9550-18-1-0078, an NSF-CIF award #1813877, and a Google faculty research award. |
0bc0fff5-a779-485e-8743-6e3dafdf54a1 | trentmkelly/LessWrong-43k | LessWrong | Information Versus Action
You can get a clearer view of what's going on if you're willing to ignore certain types of information when making decisions. If you heavily use a source of information to make important decisions, that source of information gains new pressure that can make it worse. See Goodhart's Law and Why I Am Not In Charge.
I.
Imagine you are an alien from the planet of obsessives, and you want to know how accurate the criminal justice system is. You're purely in it for the knowledge. You don't care about arresting more criminals, you don't care about the second order effects on society, you just really want to know how accurate this system is. (If it helps, imagine the kind of person who complains in the War Thunder forums about the exact specifications of aircraft, or who uses a magnifying glass to paint the decals on miniature train sets, only their interest is focused on the judiciary.)
You obviously can't use the courts to check if the courts find the correct people innocent and the correct people guilty. You can check if a case ever gets overturned, but it's possible the court was right the first time and wrong the second. You could try and investigate crimes yourself, but then any differences between your verdicts and the court verdicts could just as well be your error as it could be the court's error. This is frustrating.
Finally, you come up with an answer. You go to defendants who have just finished your trial and have the following conversation:
You: Can you please tell me whether you're actually innocent or guilty?
Defendant: What? Obviously I'm innocent. Why would I tell you anything else?
You: Because I can't be used against you. Look, I swore an oath to the court that I'd tell them random nonsense if they asked me. Then I got myself notarized as insane, due to the whole obsessive alien thing. No court would take my testimony.
Defendant: I feel like I shouldn't trust you.
You: Reasonable, but consider, I'm just asking you to whisper it in my ear. I'll strip |
2ed2f7aa-bba9-456c-ab4e-e669ddbd390a | trentmkelly/LessWrong-43k | LessWrong | Being an individual alignment grantmaker
I am an earlyish crypto investor who has accumulated enough to be a mid-sized grantmaker, and I intend to donate most of my money over the next 5-10 years to try and increase the chances that humanity has a wonderful future. My best guess is that this is mostly decided by whether we pass the test of AI alignment, so that’s my primary focus.
AI alignment has lots of money flowing into it, with some major organizations not running fundraisers, Zvi characterizing SFF as having “too much money”, OpenPhil expanding its grantmaking for the cause, FTX setting themselves up as another major grantmaker, and ACX reporting the LTFF’s position as:
> what actually happened was that the Long Term Future Fund approached me and said “we will fund every single good AI-related proposal you get, just hand them to us, you don’t have to worry about it”
So the challenge is to find high-value funding opportunities in a crowded space.
One option would be to trust that the LTFF or whichever organization I pick will do something useful with the money, and I think this is a perfectly valid default choice. However, I suspect that as the major grantmakers are well-funded, I have a specific comparative advantage over them in allocating my funds: I have much more time per unit money to assess, advise, and mentor my grantees. It helps that I have enough of an inside view of what kinds of things might be valuable that I have some hope of noticing gold when I strike it. Additionally, I can approach people who would not normally apply to a fund.
What is my grantmaking strategy?
First, I decided what parts of the cause to focus on. I’m most interested in supporting alignment infrastructure, because I feel relatively more qualified to judge the effectiveness of interventions to improve the funnel which takes in people who don’t know about alignment in one end, takes them through increasing levels of involvement, and (when successful) ends with people who make notable contributions. I’m also excit |
2d9f265d-38da-41ee-af27-84ca6a272dd9 | trentmkelly/LessWrong-43k | LessWrong | Four Randomized Control Trials In Economics
Randomized Control Trials have some drawbacks. For many important questions, like causes of the industrial revolution, a randomized trial is impossible. For many others, RCTs are expensive and cumbersome, leading to low sample sizes or experimental designs that precisely answer irrelevant questions. Still, when RCTs with large sample size and generalizable designs are possible, their advantages justify deference to their results even when observational evidence disagrees. This is the case with the four trials in this post. They each have hundreds to tens of thousands of participants and budgets big enough to test treatments that are relevant to the real world.
The largest RCT in this group, run by Harvard economists and the charity RIP Medical Debt, tests the effects of medical debt cancellation. They relieved $169 million dollars of debt for 83,401 people over two years 2018-2020. Medical debt has extremely low recovery rates, so the $169 million dollar face value only cost 2 or 3 million dollars to relieve, but this is still a large treatment size. The researchers followed up with the recipients of this debt relief with several surveys tracking their mental, physical, and financial health.
There are two other elements which make the evidence from this trial compelling. First, their analyses are pre-registered. This means they submitted the list of regressions they would run before they got the data back from their survey. This is important because it prevents them from putting inconvenient results in the file drawer and is a check against running 100 extra tests where the null hypothesis is true and reporting the 5 that happen to have p < .05. They also ran an expert survey of economists and scientists who predicted the results so we can quantify exactly how much of a narrative violation these results are.
So what did this trial find?
> First, we find no impact of debt relief on credit access, utilization, and financial distress on average. Second, we estimate |
6b6be932-0d91-4910-b5b3-2b01b925e3c5 | trentmkelly/LessWrong-43k | LessWrong | New social credit formalizations
Here are some classic ways humans can get some kind of social credit with other humans:
1. Do something for them such that they will consider themselves to ‘owe you’ and do something for you in future
2. Be consistent and nice, so that they will consider you ‘trustworthy’ and do cooperative activities with you that would be bad for them if you might defect
3. Be impressive, so that they will accord you ‘status’ and give you power in group social interactions
4. Do things they like or approve of, so that they ‘like you’ and act in your favor
5. Negotiate to form a social relationship such as ‘friendship’, or ‘marriage’, where you will both have ‘responsibilities’, e.g. to generally act cooperatively and favor one another over others, and to fulfill specific roles. This can include joining a group in which members have responsibilities to treat other members in certain ways, implicitly or explicitly.
Presumably in early human times these were all fairly vague. If you held an apple out to a fellow tribeswoman, there was no definite answer as to what she might owe you, or how much it was ‘worth’, or even whether this was an owing type situation or a friendship type situation or a trying to impress her type situation.
We have turned the ‘owe you’ class into an explicit quantitative system with such thorough accounting, fine grained resolution and global buy-in that a person can live in prosperity by arranging to owe and to be owed the same sliver of an overseas business at slightly different evaluations, repeatedly, from their bed.
My guess is that this formalization causes a lot more activity to happen in the world, in this sphere, to access the vast value that can be created with the help of an elaborate rearrangement of owings.
People buy property and trucks and licenses to dig up rocks so that they can be owed nonspecific future goods thanks to some unknown strangers who they expect will want gravel someday, statistically. It’s harder to imagine this scale |
5fcd73e6-ca74-46af-b89b-6a8113843153 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | "Wanting" and "liking"
*Written as a result of AI Safety Camp Virtual 2023. Thanks to the following people for feedback and helpful conversations: Oliver Bridge, Tim Gothard, Rasmus Jensen, Linda Linsefors.*[[1]](#fn-hzqG39DLdbBcfHpik-1)
This post reviews the literature on *"wanting"* and *"liking"*, two primary components of what is commonly referred to together as the biological reward system. It is intended to be informative for AI safety-related work, especially within approaches that try to [leverage](https://www.lesswrong.com/posts/nfoYnASKHczH4G5pT/brain-enthusiasts-in-ai-safety) [insights](https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX) [from neuroscience](https://www.lesswrong.com/s/HzcM2dkCq7fwXBej8) [for alignment](https://www.lesswrong.com/s/nyEFg3AuJpdAozmoX).
Section 1 introduces the distinction between two high-level components of reward: wanting and liking. At this point, I simplify the topic and treat both as homogenous categories. Sections 2 and 3 delve deeper into each component and give a more fine-grained model, including a description of their neurobiological substrates. (2.2. and 3.2 are more technical/dry neuroscience, so you may want to skip them, if that's not your primary interest.) Section 4 discusses the functional relationships between them and why this kind of "division of labor" may have been favored by evolution.
1. Introduction
---------------
I will start by introducing four concepts central to this post: *wanting*, *"wanting"*, *liking*, and *"liking"*.[[2]](#fn-hzqG39DLdbBcfHpik-2) They carve the space of human values[[3]](#fn-hzqG39DLdbBcfHpik-3) along two dimensions. The first of those is the distinction between the things we are motivated to do/driven towards (wanting) versus the things we feel good about happening (or being about to happen) (liking). The second distinction is between the more basic/less-sophisticated components (*"wanting"* and *"liking"*) of each and the more elaborated and cognitive components (*wanting* and *liking*).[[4]](#fn-hzqG39DLdbBcfHpik-4) The two parts of this Section elaborate on these two dimensions.
### 1.1. Liking vs Wanting
Liking is when you take a bite of a tasty food and its taste brings you pleasure or, more generally, positively valenced affective states. Wanting is when you are motivated to act in order to obtain that food and bring it to your mouth in order to consume it.
This is probably enough to intuit the rough contours of the distinction, but at the same time, it may raise some questions. I can think about cases where I kind of want to get up and go to the kitchen, but I'm too tired (or my willpower is too depleted) to get up. So I don't get up, even though I know the food in the fridge is very good and if I did get up and get it, I would be very glad about doing so. On another note, what if the food is definitely tasty and brings me pleasure and I enjoy it "on some level", but at the same time believe (or at least [some part of me](https://www.lesswrong.com/tag/subagents) believes) that I shouldn't eat it? Maybe I think I shouldn't even *enjoy* it? Maybe I consider it unhealthy or fear that somebody will judge me for eating it in that particular context, or my God/religion forbids eating pork.[[5]](#fn-hzqG39DLdbBcfHpik-5) Don't these edge cases question the simplified distinction between liking and wanting as too distinct but coherent and homogenous things? Probably they do, but we still can learn quite a bit about what we might call the primitives of human values by studying liking and wanting on this coarse-grained level.
According to what I would call a *naive view* of the relationship between liking and wanting, the reason we want X is that we like X, or maybe at least expect/predict to like X. The correlation between realizing that we (will) like something and developing a want for it soon after that makes this view fit well enough most daily situations.
However, liking and wanting sometimes come apart. We may begin to want something, even if we neither had an opportunity to experience liking nor could predict that we would like it. To give a concrete example, humans and other animals typically develop sexual drive before their first sexual encounter. Sometimes they may not even know what sex is. Nevertheless, they implement somewhat intelligent behavior which was evolutionarily selected in order to reliably end in sexual intercourse.
In other situations, something that has so far always been disliked becomes an object of desire. Robinson and Berridge (2013)[[6]](#fn-hzqG39DLdbBcfHpik-6) taught rats to associate a particular stimulus (henceforth the conditional stimulus; CS) with a repulsive experience of extremely salty water being injected straight into their mouth (henceforth the unconditional stimulus; UCS). The rats quickly realized that the UCS reliably followed the CS, so they learned to turn away and retreat from the CS whenever they saw it.[[7]](#fn-hzqG39DLdbBcfHpik-7) In the next phase of the experiment, the researchers injected the rats with two compounds that mimicked the brain signals which under normal circumstances would convey information about dangerously low blood sodium levels. Importantly, since their diet had always had adequate amounts of salt, they had never had an opportunity to discover the extent to which their (dis)liking of salty food differs depending on the blood sodium levels.
Nevertheless, their behavior changed dramatically. Instead of retreating from the CS, they started approaching it, eager to get their precious dose of salt. A change in the (perceived) physiological state turned something aversive into something desirable, and the cue associated with it went along.
Examples of liking-wanting dissociation don't end here. One may develop an addiction to a drug even if they don't like the state this drug induces that much. Moreover, over a prolonged period of drug use, its positive subjective effects (liking) often wear off but addiction (wanting) keeps its hold.[[8]](#fn-hzqG39DLdbBcfHpik-8) In other words, one may want to get a dose even if one doesn't like getting the dose and is aware that they're not going to like what happens once they get the dose.
Some people also develop compulsive desires or behavioral patterns that do not lead to positively valenced experiences, but that are nevertheless very hard to resist. Subclinical examples include compulsively checking one's phone, e-mail, or social media, [doomscrolling](https://en.wikipedia.org/wiki/Doomscrolling), and addiction to gambling. Speaking at least from my anecdotal phenomenal perspective, such things certainly do feel like something I want but do not like.[[9]](#fn-hzqG39DLdbBcfHpik-9)
Perhaps less obvious are examples of things that give us a lot of positive experiences, but for which we nevertheless don't develop any kind of robust desire. I recall Julia Galef mentioning on some podcast that she really likes apples but nevertheless never learns to "desire" apples, and whenever she happens to eat an apple, she is reminded of that. If the (main) reason we want something is that we like it, shouldn't she develop wanting for apples proportional to how much she likes them? On another, more speculative note, some people report extreme pleasure during some [meditative](https://astralcodexten.substack.com/p/nick-cammarata-on-jhana) [states](https://astralcodexten.substack.com/p/highlights-from-the-comments-on-jhanas) and yet developing no addiction for it. I discuss more experimental examples of selectively impacting liking but not wanting in Section 2.
### 1.2. *Liking* vs *"Liking"* and *Wanting* vs *"Wanting"*
[Folk-psychological](https://en.wikipedia.org/wiki/Folk_psychology) concepts are not guaranteed to be a good fit for brain/mind sciences. For some examples, concepts such as consciousness, emotion, memory, pain, or even [the idea of "concept" itself](https://academic.oup.com/book/11923) turned out to lump together importantly distinct phenomena (cf. Ramsey, 2022, Section 2.3). We might expect liking and wanting to also go this way, and that they will need at least a bit of refinement if we want to use them as starting points for a neuroscientifically adequate ontology.
On its face, there seems to be an asymmetry between liking and wanting in that the latter can be, at least in many cases, inferred from behavior,[[10]](#fn-hzqG39DLdbBcfHpik-10) whereas the former is a matter of "private" experience. Obviously, this is especially problematic in cases of animals incapable of verbalizing their ongoing subjective experience. However, the asymmetry may be weaker than it seems. After all, we can identify which automatic behavioral and/or physiological responses in humans correlate with (the verbal reports of)[[11]](#fn-hzqG39DLdbBcfHpik-11) positively or negatively valenced experiences (at least specific to some domain, such as food). We can then turn to animals and look for analogous responses (e.g., engaging analogous muscle groups in roughly the same patterns of movement) in order to see whether they correlate with the same kinds of objectively observable events that we would predict the animal to like or dislike.
For example, in the case of food,[[12]](#fn-hzqG39DLdbBcfHpik-12) it turned out that pleasant and unpleasant tastes robustly trigger specific facial expressions (see Berridge & Robinson, 2003, [Figure I](https://sites.lsa.umich.edu/berridge-lab/wp-content/uploads/sites/743/2019/10/Berridge-Robinson-TINS-2003.pdf)). Importantly, they can occur even in the absence of conscious functioning,[[13]](#fn-hzqG39DLdbBcfHpik-13) e.g., in sleep or in individuals with deficient neocortical functioning,[[14]](#fn-hzqG39DLdbBcfHpik-14) such as anencephalic infants (Steiner, 1973; cf. Berridge & Winkielman, 2003).
The observation that some objectively measurable manifestations of pleasure can occur without conscious awareness, motivated the introduction of the distinction between *"liking"* (core affective reactions that don't require consciousness) and *liking* (conscious pleasure)[[15]](#fn-hzqG39DLdbBcfHpik-15) (Berridge & Robinson, 2003; Berridge & Kringelbach, 2015). Analogously, *"wanting"* (incentive salience, cue-triggered motivation that doesn't require consciousness) was distinguished from *wanting* (cognitive desires with declarative goals).[[16]](#fn-hzqG39DLdbBcfHpik-16) Thus, the *"liking"*/*liking* and *"wanting"*/*wanting* distinctions capture the difference between implicit or objectively measurable components (*"quoted"*) and explicit or subjective components (*unquoted*).
While the need for objective measures of reward was the original motivation for making the distinction, narrowing down on the explicit components of *"liking"* and *"wanting"* made it easier to identify their neural substrates. To a first approximation, we have **(1)** a *"liking"* system, concentrated around a handful of hedonic hot and cold spots, with opioids playing the main role in the generation and modulation of *"(dis)liking"* and **(2)** a *"wanting"* system, which is more distributed (although to some extent centered around the ventral tegmental area) and with the dopamine as the key neurotransmitter. The two systems are to some extent separate, but they also overlap.
2. Liking
---------
### 2.1. *"Liking"* and *liking*
The idea of "unconscious pleasure" may seem contradictory. In what sense, can something that happens to us be pleasant but not be available to consciousness?
The introduction of an unconscious aspect of pleasure follows a particular pattern of [concept extrapolation](https://www.lesswrong.com/s/u9uawicHx7Ng7vwxA) that we often see in psychology. We discover that a particular mind-related phenomenon has some objectively measurable "behavioral signature". For example, humans, other great apes, rats, and many other species of mammals, all protrude their tongues in response to tasty foods (see Berridge & Robinson, 2003, [Figure I](https://sites.lsa.umich.edu/berridge-lab/wp-content/uploads/sites/743/2019/10/Berridge-Robinson-TINS-2003.pdf)). Responses to aversive tastes, are also homologous across taxa to a large extent. Other than that, exposing people to a valenced stimulus unrelated to taste, (e.g., happy versus angry faces) influences how much tasty food they consume and this effect persists even when these stimuli are not consciously perceived (Winkielman et al., 2005).[[17]](#fn-hzqG39DLdbBcfHpik-17)
Hence, the rationale for dividing pleasure/liking into a subconscious component of objectively measurable "core affective reactions" to valenced stimuli (*"liking"*) and consciously perceived pleasure (*liking*). The latter is closer to the common meaning of the verb "to like".[[18]](#fn-hzqG39DLdbBcfHpik-18) It denotes the valenced feeling available to the consciousness, an approval or disapproval of the ongoing state of affairs.[[19]](#fn-hzqG39DLdbBcfHpik-19)
Since among these two, *"liking"* is the objectively measurable component and a majority of research in this domain was done on laboratory animals (whose subjective experience can't be measured by verbal reports), it is not surprising that we know much more about the neurobiological substrate of *"liking"* than about that of *liking* (and the same is true of *"wanting"* and *wanting*). Therefore, my discussion of the latter is more of a speculation than in the case of the former. Also, most animal studies of *"liking"* relied on a restricted set of "domains of rewarding stimuli" (mostly food, sex, and drugs), so our knowledge of how core affective reactions differ between these domains is still quite limited.
### 2.2. *"Liking"* in the brain
The most important components of the *"liking"* circuitry are a handful of *hedonic hot spots* and *cold spots*, which are small groups of neurons, whose stimulation selectively increases or decreases *"liking"* reactions, respectively. (In the case of the increase, it is sometimes called *hedonic enhancement*.) None of them[[20]](#fn-hzqG39DLdbBcfHpik-20) are anatomically distinct structures (you're not going to find them in the index of a typical neuroanatomy textbook). Rather, they are "functional islands" embedded in bigger regions involved in many functions not closely related to *"liking"*.
The two most important ones are located in the [basal ganglia](https://en.wikipedia.org/wiki/Basal_ganglia), specifically in the [nucleus accumbens](https://en.wikipedia.org/wiki/Nucleus_accumbens) (NAc) and the [ventral pallidum](https://en.wikipedia.org/wiki/Ventral_pallidum) (VP). Both the NAc and the VP contain a hot spot and a cold spot. The NAc is divided into the core and the shell, of which the latter hosts a hot spot in the front and a cold spot in the back. In the VP, the arrangement is reversed, with the hot spot at the back and the cold spot at the front (Richard et al., 2013; Berridge & Kringelbach, 2013, 2015).
Beyond the basal ganglia, hedonic hot spots (but not cold spots, as far as I know) have been located in the orbitofrontal cortex (OFC), anterior insula (aIns), and the parabrachial nucleus of the pons (PBN; Söderpalm & Berridge, 2000; cf. Smith et al., 2010). However, the hot spots in the NAc and VP appear to be the most important, being *the* generators of pleasure. Lesioning or deactivating either the NAc hot spot or the VP hot spot eliminates hedonic reactions (Berridge & Kringelbach, 2015). Moreover, while damaging the NAc hot spot merely abolishes "liking", damaging the VP hot spot (or its temporary inactivation) causes "disliking" of normally positive things (e.g., sucrose).
Moreover, anencephalic children with little to no neocortex as well as people or non-human animals who have undergone extensive OFC lesions still retain intact *"liking"* reactions. People with OFC lesions also report conscious pleasure, suggesting that even *liking* is not strictly dependent on the cortex either (cf. Berridge & Winkielman, 2003).[[21]](#fn-hzqG39DLdbBcfHpik-21) In contrast, some baseline level of activity is necessary in each of the two basal ganglia hot spots in order for additional stimulation of one to produce hedonic enhancement (Smith & Berridge, 2007, Smith et al., 2011; cf. Richard et al., 2013).
Importantly, it's not as simple as 'stimulate a hot spot to enhance *"liking"*, stimulate a cold spot to decrease *"liking"*'. The choice of neurotransmitter used for stimulation matters. Here, the opioid receptors are the most relevant (cf. Berridge & Robinson, 2003; Berridge & Kringelbach, 2015; Smith et al., 2010). In the NAc hot spot, agonists of mu, delta, and kappa opioid receptors all cause hedonic enhancement, while in the VP hot spot, and the cortical hot spots, this role appears restricted to mu-opioid receptor (MOR) agonists. Depending on the region, stimulation of non-opioid receptors can give similar results. So far, the neurotransmitters shown to produce hedonic enhancement include anandamide (NAc), orexin (NAc, VP, PBN, OFC), and GABA (PBN; Söderpalm & Berridge, 2000).
Earlier, I mentioned that the *"liking"* system and the *"wanting"* system are closely connected. This is illustrated by the fact that, in most cases, stimulating a subcortical hedonic hot spot increases *"wanting"*, in addition to *"liking"*. Moreover, the range of compounds that increase *"wanting"* within a hedonic hot spot is much greater than the range of those that increase *"liking"* (Berridge & Kringelbach, 2013). Stimulation of the cold spots in the NAc and VP can also produce *"wanting"*. I discuss this further in Section 3. Conversely, opioids can indirectly increase the activity of the ventral tegmental area, which is the main source of dopamine in the core *"wanting"* pathways (Zhang et al., 2022).
The NAc shell region encompassing the hot and cold spot is often described in terms of an "affective keyboard", where the placement of neurons strongly correlates with the affective reaction (e.g., *"liking"* versus *"disliking"*) elicited by their activation (Richard et al., 2013; Berridge & Kringelbach, 2013, 2015). More specifically, there appears to be a gradient of valence extending from front to back. Stimulation of more frontally placed regions of the "keyboard" elicit *"liking"* reactions whereas more caudally placed neurons inhibit *"liking"* and/or elicit *"disliking"*, sometimes together with species-specific responses to threats, such as predators.
[Figure 2](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3706488/figure/F2/) from Richard et al. (2013) shows the distribution of populations of neurons in the NAc whose stimulation elicits particular kinds of behavior. We have the hedonic hot spot (red-orange), where activation typically enhances *"liking"* reactions. Behind it, we see two zones. At the ventral (lower) side, there is a group of neurons with the kind of functionality we would expect from a hedonic cold spot (blue). Their stimulation inhibits *"liking"*. At the dorsal (upper) side, we have another cluster, which also has an inhibitory effect but instead of suppressing *"liking"*, they suppress aversive reactions (purple). All of these sites, in addition to their impact on *"liking"*, "still generate eating" (i.e., *"wanting"* to eat), as does the broader (green) region in which they are located and a part of dorsomedial neostriatum (roughly, the caudate nucleus and putamen located above the NAc), shown as the green spot in the top-left corner of the figure.
The cold spot region also contains cells whose function extends beyond the modulation of *"(dis)liking"*. Stimulation of some of them produces fearful responses or aggression displays, such as throwing dirt at potentially threatening stimuli. Importantly, the result of stimulation of these cells can also be modulated by the environment. In a calm, peaceful, and safe setting, the regions whose stimulation increases positive reactions expand, whereas the aversive/fearful reaction regions shrink. Stressful, dangerous, unsafe environments do the opposite.
Similarly to the hot spot, the result of stimulating the cold spot depends on the kind of ligand used. Stimulation of the same three kinds of opioid receptors (mu, delta, kappa) that enhance *"liking"* in the NAc hot spot, produces intense *"(dis)liking"* or even fear-related behaviors in the neighboring cold spot. Mu receptor agonists also inhibit *"liking"* in the VP cold spot.
### 2.3. What is conscious pleasure good for?
It is, admittedly, not clear how and to what extent *"liking"* and *liking* depend on each other. What does it take for something that is *"liked"* to become *liked*? We know that *"liking"* can occur without *liking*, but what about the reverse? Can something satisfy our explicit hedonic feelings without impacting any of these "core affective reactions"? Or maybe *"liked"* things become *liked* whenever consciousness is "turned on" (at least over some threshold)? From my cursory review of the literature, it seems that we don't know, although the orbitofrontal cortex (OFC) emerges as a major candidate for the region whose activity (perhaps in addition to more basic *"liking"* structures) is important for *liking* (Kringelbach, 2005, 2010).
[The global workspace theory of consciousness (GWT)](https://en.wikipedia.org/wiki/Global_workspace_theory)[[22]](#fn-hzqG39DLdbBcfHpik-22) (and the experiments it draws on) may give some suggestions about makes a *"liked"* stimulus/event *liked*. The brain can do unconsciously quite a bit of complex processing (e.g., [semantic meaning of words](https://www.lesswrong.com/posts/x4n4jcoDP7xh5LWLq/book-summary-consciousness-and-the-brain#Unconscious_processing_of_meaning)). The experiments carried out within the GWT paradigm that showed this, used [sensory masking](https://en.wikipedia.org/wiki/Visual_masking) in order to prevent the stimuli from reaching consciousness. According to GWT, the neural activity associated with representing/processing a particular stimulus must exceed some critical threshold in order to spread to other (in particular multimodal/associative/higher-level) brain regions, which then can start processing it in a somewhat synchronized manner. This is the neural basis of "becoming conscious *of* something". The representation becomes available to other brain systems, including the ones directly connected to the speech organs, so that we can report our consciousness of the stimulus. Otherwise, its processing remains unconscious, local, and circumscribed to a particular lower-level brain region.
Translating this view to the case of *"liking"* and *liking*, there may be a similar threshold of intensity of core hedonic impact (*"liking"*) that a stimulus must exceed in order to be broadcast to the global workspace and become consciously *liked*. It's plausible that spread to some particular regions (such as the OFC) is particularly important.
Notably, in order to prevent the valenced stimuli from reaching consciousness, the experiments that showed the influence of unconsciously processed stimuli on objectively observable correlates of pleasure (e.g., Winkielman et al., 2005) used the same method as the GWT, namely sensory masking. This is a minor piece of evidence that the results from the GWT experiments may translate to the case of *"liking"* and *liking*.
If this perspective is right, it may point towards a possible function of conscious *liking* in that [explicitizing](https://www.lesswrong.com/posts/KuKaQEu7JjBNzcoj5/explicitness) the hedonic value increases the range of possible routes of impact on other brain systems (e.g., the motivational circuits discussed in Section 3). So perhaps the question "What is conscious pleasure good for?" is nothing but a special case of "What is consciousness good for"?
On a more speculative note regarding the possibility of *liking* without *"liking,"* I wonder if top-down influences of such factors as self-image, normative convictions, social expectations, or (broadly understood) reflective evaluation of the current situation (e.g., how good I feel with my life, or the way things are going in the world in general) may induce *liking* without inducing core affective reactions and corresponding activity in the subcortical hedonic hot spots. On the other hand, I would also expect that at least in some cases (perhaps in a majority or even all cases), the subcortical (dis)pleasure generators would become secondarily activated as a result of this top-down influence.
3. Wanting
----------
### 3.1. *"Wanting"* and *wanting*
It is hardly an original observation that our actions don't always reflect our explicit beliefs about what we should do. This phenomenon has been given many names, such as [akrasia](https://www.lesswrong.com/tag/akrasia), weakness of will, or lack of willpower. This may make the distinction between *"wanting"* (incentive salience) and *wanting* (cognitive desire) more relatable and intuitive than the one between *"liking"* and *liking*.
*"Wanting"* can be seen as the unconscious counterpart of *wanting*, similarly to how *"liking"* is the unconscious counterpart of *liking*. Whereas *wanting* (cognitive incentives) refers to plans directed towards goals we are aware of and explicitly represented desires, *"wanting"* (incentive salience) refers to more impulsive, reactive, low-level motivation, which can act independently of what we (state that we) *want* or *like*.
More specifically, *"wanting"* is defined as "a conditioned motivation response of a brain, usually triggered by and assigned to a reward-related stimulus" (Berridge, 2007). A stimulus is reward-related if it is associated with an event that is "rewarding in itself", e.g., sweet taste. The association can be simple, like occurring very close in time, or it may involve some more sophisticated cognitive learning processes.[[23]](#fn-hzqG39DLdbBcfHpik-23) The learned reward-related stimulus is often called the "conditioned stimulus" (CS), whereas the inherently rewarding event is called the "unconditioned stimulus" (UCS). However,[[24]](#fn-hzqG39DLdbBcfHpik-24) not all inputs that drive *"wanting"* are learned, as brains are wired to respond with *"wanting"* to some stimuli, independently of learning (or in the absence of/prior to learning). Plausibly, the original adaptive value of *"wanting"* was to motivate the animals to pursue a small set of unconditioned rewards, such as food, sex, or favorable ranges of environmental conditions, such as appropriate temperature and acidity. Over time, as more sophisticated learning mechanisms evolved, the role of *"wanting"* became extendable by learning (Berridge, 2007).
According to Berridge and Robinson (2003) cognitive incentives (*wanting*) are distinguished from incentive salience (*"wanting"*) by three (or maybe four) features:
>
> … they are (1) known or imagined (cognitive incentive representation); (2) expected to be pleasant (hedonic expectation); (3) subjectively desired and intended to be gained (explicit cognitive representation of wanting) and, perhaps, (4) known to be obtainable by actions that cause it to occur (understanding of act–outcome causality).[[25]](#fn-hzqG39DLdbBcfHpik-25)
>
>
>
I will speculate a bit more about the differences and relationships between *"wanting"* and *wanting* in Section 3.3. In the next Section 3.2, I outline the neurobiological basis of *"wanting"*.
### 3.2. *"Wanting"* in the brain
*"Liking"* can be roughly located in a handful of hedonic hot spots and cold spots, with endogenous opioids being the main players in the circuitry (as discussed in Section 2). The neural substrate of *"wanting"* is more distributed throughout the brain, with dopamine as the key neurotransmitter.
The human brain has several [dopaminergic pathways](https://en.wikipedia.org/wiki/Dopaminergic_pathways), the most relevant for *"wanting"* being the [mesolimbic pathway](https://en.wikipedia.org/wiki/Mesolimbic_pathway). It goes from the [ventral tegmental area (VTA)](https://en.wikipedia.org/wiki/Ventral_tegmental_area) to the ventral striatum (including the nucleus accumbens) and some other areas. Although it is the biggest supplier of dopamine to brain regions involved in incentive salience (Ikemoto, 2010), it is not the only one, and, at least in some artificial setups, *"wanting"* can occur even when it stops working.
We know that, i.a., from studies of
More specifically, animals that had their mesolimbic pathway ablated, can still develop a compulsion to self-stimulate via electrodes implanted in some brain regions (cf. Ikemoto, 2010, p. 131). It is not clear to what extent these results translate to *"wanting"* without the mesolimbic pathway more generally.
VTA and other regions involved in *"wanting"* form a highly interconnected network (cf. Ikemoto, 2010). However, many of them are not *"wanting"*-specific, but also involved in other aspects of reward. In Section 2, I discussed the hedonic hot and cold spots in the nucleus accumbens and the ventral pallidum. Stimulating them with many neurotransmitters (including those which tend to elicit *"(dis)liking"* reactions) tends to produce *"wanting"* behavior, both related to approach (*"wanting"* X) and aversion (*"wanting"* not-X). The same is true for the parabrachial nucleus of the hindbrain, whose GABA-A receptors' stimulation *"wanting"* but has a small region adjacent to it, where it also elicits *"liking"*.
Other regions of the network are involved in learning. For example, Pavlovian learning seems to depend on a circuit whose major components are the basolateral amygdala, the orbitofrontal cortex, and the nucleus accumbens (Burke et al., 2010). On the other hand, stimulation of the central amygdala when paired with a highly salient stimulus (doesn't matter whether it's pleasant or unpleasant, it just needs to have strong valence in either direction) can establish very strong *"wanting"* even for highly unpleasant stimuli (Warlow et al., 2020).
Two other dopaminergic pathways that are important for *"wanting"* are the mesocortical pathway and the nigrostriatal pathway. The mesocortical pathway goes from the VTA to the prefrontal cortex and is involved in executive functioning. Its disorders, including those involving dopamine depletion or other interference with dopaminergic activity, are associated with impaired cognitive control and working memory (cf. Ott & Nieder, 2019).
The nigrostriatal pathway goes from the substantia nigra (SN) to the dorsal striatum and its main role is movement control. The death of a majority of SN cells is the proximate cause of Parkinson's disease. Parkinson's patients tend to develop symptoms associated with decreased functioning of the mesolimbic pathway (e.g., apathy) and the mesocortical pathway (e.g., attentional deficits). The severity of these symptoms is highly correlated with the severity of motor symptoms, suggesting some relevant degree of coupling between these systems (cf. Leyton, 2010, pp. 232-233).
The next few paragraphs discuss how dopaminergic activity in general impacts *"wanting"*. It is not meant to be a complete overview of evidence or comparison with alternative hypotheses (for that see: Berridge, 2007), but rather as an informative illustration of the role played by this neurotransmitter.
Probably the most straightforward method to test how some neurotransmitter X influences some behavior Y is to lower or increase the levels of X and measure changes in Y. One way to do it is by breeding genetically modified animals that have abnormally low or high levels of the neurotransmitter in their synapses (cf. Berridge, 2007, pp. 403-405).
Dopamine-deficient (DD) mice, with almost no dopamine in their brains, can be created by knocking out the gene coding for tyrosine hydroxylase, an enzyme without which dopamine can't be produced. DD eat and drink barely anything at all, not enough to sustain themselves.[[26]](#fn-hzqG39DLdbBcfHpik-26) In order for them to eat and drink normally, they need to be medicated with L-DOPA (a direct dopamine precursor, removed from dopamine by just one step in the production chain), which can temporally restore their dopamine to normal levels.
This makes it possible to test (1) whether DD mice display different affective/*"liking"* reactions for different kinds of stimuli (e.g., sugar solution versus water) and (2) whether after trying both of them, they learn to prefer one over the other, as measured by their choices on subsequent trials. It turns out that they can do both, which suggests that dopamine is not required for (at least some forms of) *"liking"* and reward-related learning. Similar patterns have been observed in wild-type (i.e., normal/not genetically modified) mice, whose dopaminergic systems were impaired later in their life by neurochemical lesions.
On the other hand, **hyper**dopaminergic mice, which have almost triple the normal amount of dopamine in their synapses (compared to the wild type), can be created by knocking out the gene coding for the dopamine transporter, a protein that removes dopamine from the synapse. Such mice are more motivated to obtain rewards, more resistant to stimuli distracting them from the focus on the current goal, and willing to work harder for rewards. In other words, they seem to *"want"* their rewards more than the wild type. Their ability to learn associations between stimuli and rewards or learn which actions lead to rewards, as well as *"liking"* reactions, remain unaffected.
What about dopamine disturbance diseases in humans (cf. Leyton, 2010)? Parkinson's disease (PD) is caused by the degradation of dopaminergic cells in the substantia nigra, which are not directly involved in the mesolimbic system. Still, many PD patients exhibit symptoms associated with decreased dopaminergic functioning in the mesolimbic (e.g., apathy, avolition) and mesocortical (e.g., worse attention and executive functioning) systems. The severity of these symptoms correlates with the strength of motor problems, more central to Parkinson's. Some PD patients treated with L-DOPA (~3-4%; Pezzella, 2005) develop [dopamine dysregulation syndrome (DDS)](https://en.wikipedia.org/wiki/Dopamine_dysregulation_syndrome), where overcompensation for dopamine deficiency leads them to develop "pathological" *"wanting"* behavior (addiction, gambling, compulsive sexual activity, even if they had no history of such before the medication) making them illustrative cases of hyperdopaminergy.[[27]](#fn-hzqG39DLdbBcfHpik-27)
Many highly addictive potentials are dopaminergic. Central examples include amphetamine, cocaine, and their analogs, which work primarily by increasing the amount of dopamine that stays in the synaptic cleft. Interestingly, in animal studies, their addictive potential can be reduced (perhaps even (almost?) completely eliminated?) if they are given together with DA antagonists, i.e., compounds that bind to dopamine receptors without activating them, which prevents dopamine itself from binding to them and exerting its typical effects (cf. Puglisi-Allegra & Ventura, 2012). At the same time, dopamine antagonism does not eliminate other effects of these dopaminergic drugs. For example, some euphoric effects remain when amphetamine is given with DA antagonists (cf. Leyton, 2010), suggesting that these are either mediated through mechanisms other than dopamine or perhaps some dopamine receptors that are not blocked by the particular used in the study[[28]](#fn-hzqG39DLdbBcfHpik-28) (Nader et al., 1997; Ikemoto, 2010).
Highly addictive drugs that don't interact with the dopamine system directly tend to have secondary dopaminergic effects. For example, the agonists of mu-opioid receptors (such as morphine, heroin, and fentanyl) typically work by inhibiting GABA-ergic neurons located in the posterior part of the VTA or an area adjacent to it, called rostromedial tegmentum (RMTg). On the other hand, these GABA-ergic neurons inhibit the dopaminergic neurons of the VTA, which drive *"wanting"*. Therefore, inhibition of the former means disinhibition of the latter and thus an increase in the concentrations of DA in regions targeted by the VTA (cf. Zhang et al., 2022).
Caffeine is another compound with indirect dopaminergic effects (and a relatively mild addictive potential). Although its main mechanism of action is blocking the adenosine receptors, it also increases dopamine release, contributing to its psychostimulant and reinforcing properties (Ferré, 2016). We can see that in experiments, where adding caffeine to yogurt strengthened the preference developed for that yogurt, as compared to the same yogurt but without caffeine (Panek et al., 2013). Analogous experiments with similar results were performed on bees and caffeine-enriched nectar, with similar results (Wright et al., 2013).[[29]](#fn-hzqG39DLdbBcfHpik-29)
[Pavlovian-instrumental transfer](https://en.wikipedia.org/wiki/Pavlovian-instrumental_transfer) is what happens when an animal that is already working in order to obtain some reward/UCS, starts working even harder upon perceiving a CS associated with the UCS it's currently pursuing. This effect has been closely associated with an increase in mesolimbic dopaminergic activity and can be modulated by intervening in the mesolimbic system, e.g., with dopamine agonists (Berridge, 2007, pp. 420-421; Cartoni et al., 2016; Salamone et al., 2016).
### 3.3. What are cognitive incentives/conscious wanting good for?
Why do we need *wanting* in addition to *"wanting"*? In Section 2 I asked an analogous question with respect to *liking* and *"liking"* and gave a provisional hypothesis that consciousness of a valenced stimulus makes it accessible to other parts of the brain, allowing them to interoperate and make use of each other's outputs. Are *wanting* and *"wanting"* in an analogous relationship?
Berridge and Robinson (2003) seem to endorse something like this. In their view, *wanting* allows the animal to achieve objectives, that can't be achieved through simple learning of associations and require more complex inference, working memory, etc. They write:
>
> One essence of rational cognition is its inferential exploitation of lawful consistencies in the world and, typically, future value is best inferred from past value. In addition, the rat must use its understanding of which actions cause which outcomes to select from several possible actions the one that will produce the best reward.
>
>
>
Relatedly, *wanting*, as a process under conscious executive control, is more stable with respect to changing local incentives, which makes planning and execution of plans possible in the first place.
Thus, the mesocortical pathway which goes from the VTA to some parts of the prefrontal cortex is a likely candidate for a major substrate of *wanting*, as one of its major roles is executive function.
Echoing the speculations from the end of Section 2, can we *want* something without *"wanting"* it? Perhaps we can take again the self-image angle: we model ourselves as *wanting* X, but this model is not accurate and not strong enough to override *"wanting"* that pushes in the other direction. Perhaps sometimes *wanting* without *"wanting"* is adaptive because it causes the organism to think about plans of action that can be executed once a proper context occurs, so that *"wanting"* is triggered and makes use of the information contained in the plans developed due to *wanting*.
4. Why *"like"* something if you can just *"want"* it?
------------------------------------------------------
Ex ante, we might expect that *"wanting"* itself, paired with a sufficiently good learning algorithm, should be enough to achieve any goals necessary for survival and reproductive fitness.
Maybe it is necessary or more efficient to have separate systems for (1) adaptive but "mindless" responses to local incentives and (2) goal-directed behavior that relies on taking into account broader context; hence *"wanting"* and *wanting*, respectively. Still, this leaves us with a question about the adaptive value of *"liking"* and *liking*. Had they not contributed to our ancestors' fitness in one way or another, evolution would not have selected for them.[[30]](#fn-hzqG39DLdbBcfHpik-30)
It seems that the field has not reached a consensus on that question. Here I present three hypotheses. Like much of evolutionary psychology, they are all tentative, and if evidence for them is at best indirect. Importantly, these hypotheses are neither exhaustive nor mutually exclusive.
### Hypothesis 1: Liking extends wanting
On this account, we start with a small set of default motivations that evolution selected for (*"wants"*), and *"liking"* helps repurpose the motivational system towards new motivations. The *"wanting"* system is more evolutionarily ancient. *"Liking"* emerged relatively recently, in animals living in more cognitively demanding environments that necessitated acquiring new motivational mechanisms over the lifetime. Pleasure is an additional training signal for the *"wanting"* system, allowing the brain to repurpose systems specialized for being motivated towards one domain of stimuli/events towards another domain. The need for this "lifetime reprogramming" may arise due to the environment being too complex or too variable for evolution to encode appropriate sources of motivation into the genome.
Kent Berridge (a pioneer of this line of research) seems to lean towards this hypothesis ([podcast interview link](https://hearthisidea.com/episodes/kent)). He gives credit for it to, among others, Anthony Dickinson (e.g., Dickinson & Balleine, 2010).
Quoting directly from the episode (lightly edited by me):
>
> […] pleasure exists because it essentially allows brain *"wanting"* systems that might have evolved for one thing (e.g., food) to experience a new pleasant event (e.g., social accomplishment) and to enjoy that event and to bring to bear the brain *"wanting"* systems for the old thing on to this new target, basically giving us a new target of desire.
>
>
>
On this account, it seems to be somewhat similar to the picture presented by the [Shard Theory view of human values](https://www.lesswrong.com/posts/iCfdcxiyr2Kj8m8mT/the-shard-theory-of-human-values), except that not all the values (contextually activated motivations/*"wants"*) are learned from scratch.
Here, Berridge doesn't distinguish between *"liking"* and *liking*. My interpretation is that he views them as serving a similar function, just on different levels of "cognitive sophistication", similar to *"wanting"* and *wanting*.
There is an interesting category of cases, where the learned change in valence happens prior to a corresponding change in motivation. In other words, your experience changes whether/how much you *"(dis)like"* X but without updating your motivation for X. Your motivation for X is updated the next time you encounter X and experience altered valence.
Dickinson and Balleine (Balleine, 2010) give an example, where the first author (A.D.) who really liked watermelons, at some point got sick shortly after eating a watermelon (most likely the disease and eating the fruit were not related). A few days later, he went to eat a watermelon and although it was most likely basically the same kind of watermelon, it tasted awful. Apparently, his *"liking"*/*liking* system wrongly inferred the watermelon to be the causal factor behind the sickness, which altered his taste, but not his motivation to eat watermelons, until he tasted a no-longer-tasty watermelon. These cases have been reproduced in rat experiments.[[31]](#fn-hzqG39DLdbBcfHpik-31)
Note that this is different from the Salt Sea experiment (Robinson & Berridge, 2013), where rats' motivation was already altered by their physiological state before being presented with the stimulus. In the watermelon case, on the other hand, the aversion occurs unexpectedly.
### Hypothesis 2: Preparing physiology
*"Liking"* reactions associated with food and fluids seem to prepare the organism for intake of nutrients. Increased salivation facilitates pre-digestion of the food in the mouth, licking the lips ensures that some bits of the food are not left out, increased gastric movements prepare the rest of the digestive system, etc.
### Hypothesis 3: Implicit social communication
An animal's physiological reactions, like the ones discussed above, often carry socially important information. Thus, it's plausible that in more social species these behaviors would evolve to become more pronounced, in other to facilitate implicit social communication. On the other hand, they may also become less reflective of the actual physiological state, in order to produce signals that are more likely to influence the behavior of the other animal in the direction that is beneficial for the signaller.
### Hypothesis 4: Additional information to update behavior on
Valence (*"liking"*/*liking*) may also route partially processed information to the motivational circuits in order to update already ongoing behavior. If we do X for the first time and it quickly turns out that we *like* it, we do more of it. An obvious caveat is that we may not be able to experimentally disentangle the indirect effect mediated by valence from the direct effect. We have seen that it is possible to very quickly develop strong motivation for something without *"liking"* it, e.g., in the wireheading studies.
References
----------
* Berridge, K. C. (2007). The debate over dopamine's role in reward: The case for incentive salience. Psychopharmacology, 191(3), 391–431. <https://doi.org/10.1007/s00213-006-0578-x>
* Berridge, K. C., & Kringelbach, M. L. (2008). Affective neuroscience of pleasure: Reward in humans and animals. Psychopharmacology, 199(3), 457–480. <https://doi.org/10.1007/s00213-008-1099-6>
* Berridge, K. C., & Kringelbach, M. L. (2013). Neuroscience of affect: Brain mechanisms of pleasure and displeasure. Social and Emotional Neuroscience, 23(3), 294–303. <https://doi.org/10.1016/j.conb.2013.01.017>
* Berridge, K. C., & Kringelbach, M. L. (2015). Pleasure Systems in the Brain. Neuron, 86(3), 646–664. <https://doi.org/10.1016/j.neuron.2015.02.018>
* Berridge, K. C., & Robinson, T. E. (2003). Parsing reward. Trends in Neurosciences, 26(9), 507–513. <https://doi.org/10.1016/S0166-2236(03)00233-9>
* Berridge, K. C., & Robinson, T. E. (2016). Liking, wanting, and the incentive-sensitization theory of addiction. The American Psychologist, 71(8), 670–679. <https://doi.org/10.1037/amp0000059>
* Berridge, K., & Winkielman, P. (2003). What is an unconscious emotion? (The case for unconscious "liking"). Cognition and Emotion, 17(2), 181–211. <https://doi.org/10.1080/02699930302289>
* Burke, K. A., Franz, T., Miller, D., & Schoenbaum, G. (2010). Conditioned Reinforcement and the Specialized Role of Corticolimbic Circuits in the Pursuit of Happiness and Other More Specific Rewards. In M. L. Kringelbach & K. C. Berridge (Eds.), Pleasures of the Brain (pp. 50–62). Oxford University Press.
* Cartoni, E., Balleine, B., & Baldassarre, G. (2016). Appetitive Pavlovian-instrumental Transfer: A review. Neuroscience & Biobehavioral Reviews, 71, 829–848. <https://doi.org/10.1016/j.neubiorev.2016.09.020>
* dela Peña, I., Gevorkiana, R., & Shi, W.-X. (2015). Psychostimulants affect dopamine transmission through both dopamine transporter-dependent and independent mechanisms. European Journal of Pharmacology, 764, 562–570. <https://doi.org/10.1016/j.ejphar.2015.07.044>
* Dickinson, A., & Balleine, B. (2010). Hedonics: The Cognitive–Motivational Interface. In M. L. Kringelbach & K. C. Berridge (Eds.), Pleasures of the Brain (pp. 74–84). Oxford University Press.
* Ferré, S. (2016). Mechanisms of the psychostimulant effects of caffeine: Implications for substance use disorders. Psychopharmacology, 233(10), 1963–1979. <https://doi.org/10.1007/s00213-016-4212-2>
* Ikemoto, S. (2010). Brain reward circuitry beyond the mesolimbic dopamine system: A neurobiological theory. Novel Perspectives on Drug Addiction and Reward, 35(2), 129–150. <https://doi.org/10.1016/j.neubiorev.2010.02.001>
* Kringelbach, M. L. (2005). The human orbitofrontal cortex: Linking reward to hedonic experience. Nature Reviews Neuroscience, 6(9), 691–702. <https://doi.org/10.1038/nrn1747>
* Kringelbach, M. L. (2010). The Hedonic Brain: A Functional Neuroanatomy of Human Pleasure. In M. L. Kringelbach & K. C. Berridge (Eds.), Pleasures of the Brain (pp. 202–221). Oxford University Press.
* Leyton, M. (2010). The Neurobiology of Desire: Dopamine and the Regulation of Mood and Motivational States in Humans. In M. L. Kringelbach & K. C. Berridge (Eds.), Pleasures of the Brain (pp. 222–243). Oxford University Press.
* Nader, K., Bechara, A., & van der Kooy, D. (1997). Neurobiological constraints on behavioral models of motivation. Annual Review of Psychology, 48(1), 85–114. <https://doi.org/10.1146/annurev.psych.48.1.85>
* Ott, T., & Nieder, A. (2019). Dopamine and Cognitive Control in Prefrontal Cortex. Trends in Cognitive Sciences, 23(3), 213–234. <https://doi.org/10.1016/j.tics.2018.12.006>
* Panek, L. M., Swoboda, C., Bendlin, A., & Temple, J. L. (2013). Caffeine increases liking and consumption of novel-flavored yogurt. Psychopharmacology, 227(3), 425–436. <https://doi.org/10.1007/s00213-013-2971-6>
* Pezzella, F. R., Colosimo, C., Vanacore, N., Di Rezze, S., Chianese, M., Fabbrini, G., & Meco, G. (2005). Prevalence and clinical features of hedonistic homeostatic dysregulation in Parkinson's disease. Movement Disorders, 20(1), 77–81. <https://doi.org/10.1002/mds.20288>
* Puglisi-Allegra, S., & Ventura, R. (2012). Prefrontal/accumbal catecholamine system processes high motivational salience. Frontiers in Behavioral Neuroscience, 6, 31. <https://doi.org/10.3389/fnbeh.2012.00031>
* Ramsey, W. (2022). Eliminative Materialism. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Spring 2022). Metaphysics Research Lab, Stanford University. <https://plato.stanford.edu/archives/spr2022/entries/materialism-eliminative/>
* Richard, J. M., Castro, D. C., Difeliceantonio, A. G., Robinson, M. J. F., & Berridge, K. C. (2013). Mapping brain circuits of reward and motivation: In the footsteps of Ann Kelley. Neuroscience and Biobehavioral Reviews, 37(9 Pt A), 1919–1931. <https://doi.org/10.1016/j.neubiorev.2012.12.008>
* Robinson, M. J. F., & Berridge, K. C. (2013). Instant transformation of learned repulsion into motivational "wanting". Current Biology : CB, 23(4), 282–289. <https://doi.org/10.1016/j.cub.2013.01.016>
* Salamone, J. D., Pardo, M., Yohn, S. E., López-Cruz, L., SanMiguel, N., & Correa, M. (2016). Mesolimbic Dopamine and the Regulation of Motivated Behavior. In E. H. Simpson & P. D. Balsam (Eds.), Behavioral Neuroscience of Motivation (pp. 231–257). Springer International Publishing. <https://doi.org/10.1007/7854_2015_383>
* Smith, K. S., & Berridge, K. C. (2007). Opioid limbic circuit for reward: Interaction between hedonic hotspots of nucleus accumbens and ventral pallidum. The Journal of Neuroscience : The Official Journal of the Society for Neuroscience, 27(7), 1594–1605. <https://doi.org/10.1523/JNEUROSCI.4205-06.2007>
* Smith, K. S., Berridge, K. C., & Aldridge, J. W. (2011). Disentangling pleasure from incentive salience and learning signals in brain reward circuitry. Proceedings of the National Academy of Sciences of the United States of America, 108(27), E255-264. <https://doi.org/10.1073/pnas.1101920108>
* Smith, K. S., Mahler, S. V., Peciña, S., & Berridge, K. C. (2010). Hedonic Hotspots: Generating Sensory Pleasure in the Brain. In M. L. Kringelbach & K. C. Berridge (Eds.), Pleasures of the Brain (pp. 27–49). Oxford University Press.
* Söderpalm, A. H., & Berridge, K. C. (2000). The hedonic impact and intake of food are increased by midazolam microinjection in the parabrachial nucleus. Brain Research, 877(2), 288–297. <https://doi.org/10.1016/s0006-8993(00)02691-3>
* Steiner, J. E. (1973). The gustofacial response: Observation on normal and anencephalic newborn infants. Symposium on Oral Sensation and Perception, 4, 254–278.
* Szczypka, M. S., Rainey, M. A., Kim, D. S., Alaynick, W. A., Marck, B. T., Matsumoto, A. M., & Palmiter, R. D. (1999). Feeding behavior in dopamine-deficient mice. Proceedings of the National Academy of Sciences, 96(21), 12138–12143. <https://doi.org/10.1073/pnas.96.21.12138>
* Warlow, S. M., Naffziger, E. E., & Berridge, K. C. (2020). The central amygdala recruits mesocorticolimbic circuitry for pursuit of reward or pain. Nature Communications, 11(1), 2716. <https://doi.org/10.1038/s41467-020-16407-1>
* Winkielman, P., Berridge, K., & Wilbarger, J. (2005). Unconscious Affective Reactions to Masked Happy Versus Angry Faces Influence Consumption Behavior and Judgments of Value. Personality & Social Psychology Bulletin, 31, 121–135. <https://doi.org/10.1177/0146167204271309>
* Wright, G. A., Baker, D. D., Palmer, M. J., Stabler, D., Mustard, J. A., Power, E. F., Borland, A. M., & Stevenson, P. C. (2013). Caffeine in Floral Nectar Enhances a Pollinator's Memory of Reward. Science, 339(6124), 1202–1204. <https://doi.org/10.1126/science.1228806>
* Zhang, J.-J., Song, C.-G., Dai, J.-M., Li, L., Yang, X.-M., & Chen, Z.-N. (2022). Mechanism of opioid addiction and its intervention therapy: Focusing on the reward circuitry and mu-opioid receptor. MedComm, 3(3), e148. <https://doi.org/10.1002/mco2.148>
---
1. Ordered alphabetically, by last name. [↩︎](#fnref-hzqG39DLdbBcfHpik-1)
2. I italicize *liking*, *"liking"*, *wanting*, and *"wanting"* in order to emphasize that I'm using these terms in a "technical" sense. "Non-technical" senses are non-italicized. In a few places of this section I also sometimes lump *liking* and *"liking"* into "liking" and *wanting* and *"wanting"* into "wanting". [↩︎](#fnref-hzqG39DLdbBcfHpik-2)
3. Not necessarily exhaustively, there may be some (things we might want to consider as) values that don't fit neatly into any of these categories. [↩︎](#fnref-hzqG39DLdbBcfHpik-3)
4. The distinction between explicit and implicit is also sometimes used, but I find it unintuitive. [↩︎](#fnref-hzqG39DLdbBcfHpik-4)
5. By the way, the taboo against eating pork has a very interesting origin. See [this video from Religion for Breakfast](https://www.youtube.com/watch?v=pI0ZUhBvIx4). [↩︎](#fnref-hzqG39DLdbBcfHpik-5)
6. See also [Steve Byrnes's post about that study](https://www.lesswrong.com/posts/wcNEXDHowiWkRxDNv/inner-alignment-in-salt-starved-rats). [↩︎](#fnref-hzqG39DLdbBcfHpik-6)
7. The CS itself can become aversive or desired, even when the UCS it predicts appears in a different place than the CS. In such cases, the CS is said to become a "motivational magnet" (e.g., Robinson & Berridge, 2013). [↩︎](#fnref-hzqG39DLdbBcfHpik-7)
8. Explaining this phenomenon in terms of wanting to avoid unpleasant effects of the withdrawal syndrome doesn't fit the empirical data (Berridge & Robinson, 2016). [↩︎](#fnref-hzqG39DLdbBcfHpik-8)
9. While here I am speaking about subclinical cases of behavioral addictions, I also expect this to be a factor in obsessive-compulsive disorder and related conditions. One reason I think so is that the neurotransmitter most consistently involved in OCD seems to be dopamine, which is strongly implicated in wanting (see Section 3). Moreover, the most successful pharmacological treatment for OCD is naltrexone, which is also used in many standard addictions and acts by regulating dopaminergic transmission from the VTA. [↩︎](#fnref-hzqG39DLdbBcfHpik-9)
10. Although it probably still requires making some assumptions about the agent's biases and cognitive limitations. See, e.g., [Christiano (2018)](https://www.lesswrong.com/posts/h9DesGT3WT9u2k7Hr/the-easy-goal-inference-problem-is-still-hard). [↩︎](#fnref-hzqG39DLdbBcfHpik-10)
11. I'm not going to discuss the topic of phenomenal consciousness and its relationship with verbal reports because I consider the former to be [illusory](https://plato.stanford.edu/entries/qualia/#Illusional). [↩︎](#fnref-hzqG39DLdbBcfHpik-11)
12. Food being obviously the easiest category of "rewards" to study. [↩︎](#fnref-hzqG39DLdbBcfHpik-12)
13. By "conscious functioning", I mean something like "the [global workspace](https://en.wikipedia.org/wiki/Global_workspace_theory)" being up and running. [↩︎](#fnref-hzqG39DLdbBcfHpik-13)
14. With the neocortex being the part of the brain we expect to be important for conscious awareness. [↩︎](#fnref-hzqG39DLdbBcfHpik-14)
15. From now on, I italicize *liking*, *"liking"*, *wanting*, and *"wanting"*, in order to emphasize that I'm using these terms in their "technical" sense. "Non-technical meanings" of liking and liking are non-italicized. [↩︎](#fnref-hzqG39DLdbBcfHpik-15)
16. Berridge and Robinson (2003) also introduced a third distinction between two forms of learning: cognitive and associative, but it is not the focus of this post. [↩︎](#fnref-hzqG39DLdbBcfHpik-16)
17. I found no studies on that, but I have a very confident guess that *"liking"* would also occur in animals that are asleep or even in some kinds of palliative states, such as coma, perhaps even [locked-in syndrome](https://en.wikipedia.org/wiki/Locked-in_syndrome). [↩︎](#fnref-hzqG39DLdbBcfHpik-17)
18. Of course, the correspondence is not perfect and the boundaries between the daily meanings of "to like" and "to want" are blurry. Still, semantic distance from "to like" to *liking* is smaller than to *"liking"*. [↩︎](#fnref-hzqG39DLdbBcfHpik-18)
19. Importantly, "the ongoing state of affairs" can include things extended over a long timescale. [↩︎](#fnref-hzqG39DLdbBcfHpik-19)
20. At least none of the ones we know about, to the best of my knowledge. [↩︎](#fnref-hzqG39DLdbBcfHpik-20)
21. At the same time, some studies show that monkeys and rats with OFC damage, although they still respond to rewards, are impaired in using reward information to guide their behavior, relative to animals with intact OFC (Berridge & Kringelbach, 2008), perhaps pointing to the role of conscious pleasure in reward-related learning. [↩︎](#fnref-hzqG39DLdbBcfHpik-21)
22. For a great introduction to GWT, see [Kaj Sotala's review of *The Consciousness and the Brain* by Stanislas Dehaene](https://www.lesswrong.com/s/ZbmRyDN8TCpBTZSip/p/x4n4jcoDP7xh5LWLq). [↩︎](#fnref-hzqG39DLdbBcfHpik-22)
23. E.g., when I realize that when I get back to exercising after a long break, I start feeling much better on a daily basis after a few weeks, which increases my motivation to exercise (although this is probably not a good example of *"wanting"*). [↩︎](#fnref-hzqG39DLdbBcfHpik-23)
24. Perhaps I am slightly deviating from the definition of *"wanting"* as "conditioned responses". This seems true though and in agreement with the idea (endorsed by Berridge) that the original adaptive of *"wanting"* was to drive the animal's behavior to satisfy some small set of needs. [↩︎](#fnref-hzqG39DLdbBcfHpik-24)
25. I take the mention of "pleasant" in (2) to refer both to *"liking"* and *liking*, with the latter being used in a broad sense, which includes (i.a.) reflective evaluation of the state of the world conditional on having achieved the *wanted* goal. I think this interpretation is justified because otherwise, the definition would exclude clear examples of *wanting*, such as a person doing hard work for which they are not going to receive any "pleasant reward", unless we take a very broad meaning of "pleasure" (not mentioning more extreme cases like suicide bombers and kamikaze). [↩︎](#fnref-hzqG39DLdbBcfHpik-25)
26. Szczypka et al. (1999) write that "young [DD] pups that had never been injected with l-DOPA would lick and swallow small drops of a liquid diet placed by their mouth. Apparently, these kinds of responses don't require dopamine, perhaps being a kind of *"liking"* reactions. [↩︎](#fnref-hzqG39DLdbBcfHpik-26)
27. See also Oliver Sacks's *[Awakenings](https://en.wikipedia.org/wiki/Awakenings_(book))*. [↩︎](#fnref-hzqG39DLdbBcfHpik-27)
28. Most [dopamine antagonists](https://en.wikipedia.org/wiki/Dopamine_antagonist) work only on a particular subtype of dopamine receptors. [↩︎](#fnref-hzqG39DLdbBcfHpik-28)
29. Interestingly, in addition to increasing DA directly, amphetamine and cocaine-like psychostimulants also appear to increase DA via an indirect route (Peña et al., 2015). [↩︎](#fnref-hzqG39DLdbBcfHpik-29)
30. Alternatively, they might be [spandrels](https://en.wikipedia.org/wiki/Spandrel_(biology)). This probably isn't the case for *"liking"*, as spandrels typically (ever?) have distinct brain circuits. If *liking* is a natural consequence of *"liking"* plus global workspace/consciousness systems, then it would also probably not be a spandrel. [↩︎](#fnref-hzqG39DLdbBcfHpik-30)
31. Dickinson and Balleine's account of the function of reward is slightly different than the one I'm presenting here. You can read it yourself if you're curious. [↩︎](#fnref-hzqG39DLdbBcfHpik-31) |
abdff18b-cda4-44c1-9e07-0d8f8c275a05 | trentmkelly/LessWrong-43k | LessWrong | Without a phone for 10 days
I woke up this morning to a bricked Google Pixel 4. After taking it to a local repair shop for a diagnosis, I was told that a fuse had been blown on the motherboard. A board-level repair would cost half as much as a brand new phone, and I was thinking about upgrading to the new Pixel 6 once it came out later this month. After spending a few hours sorting out account details and learning about replacement options with my carrier I learned that it would cost me $150 to get a replacement by this coming Monday. What good would it do to pay $150 for a phone that I would only have for a week until upgrading?
And that’s when I realized I had stumbled into a very unique moment in which I had every reason to attempt something I had been hesitantly curious to try: living without a phone. After all, the Pixel 6 was rumored to launch only ten days from now on October 19th. And if I decide at the end that life is better with a smartphone, then I’ll get one.
Okay so there are a few things I’m a bit worried about. The most obvious one is that I’ll be unreachable to close family and friends during this time. Ten days isn’t a ton of time, so I decided to email those closest to me to tell them about this experiment. A less obvious problem is that I’ll be unable to do typical two-factor authentication, which my university and some other services periodically require. The good news is that I have backup codes saved on my laptop, but it’s kind of a hassle.
I’m very curious to see how this will turn out. I’m hoping that I’ll appreciate the disconnection so much that I won’t want to go back to smartphones. I’ll likely still want the basic call and text functionality, so maybe I’ll go with a simpler phone. I had heard of the lightphone before and loved the idea, but was afraid of giving up apps like Uber for emergencies. Today I looked into some other “feature phones” and discovered the Nokia 6300, the Punkt. MP02, and the Mudita Pure. Anyway, I’ll probably write at least one more post |
f4cdb37f-5dec-4ac7-b45e-0cc92169976b | trentmkelly/LessWrong-43k | LessWrong | GTFO of the Social Internet Before you Can't: The Miro & Yindi Story
Recommended music to read this to (If you like ambience)
I
Yindi had sent him a link, "You've gotta see how this guy speedruns Mario Kart, I think you'll like it (✿◠‿◠)". Miro taps the link.
CREATE A NetMe™[1] ACCOUNT TO WATCH THIS VIDEO
Miro creates the account. The video is good.
He runs to boot his CRT, its electron beam lighting the quiet room with a high pitched scream. He starts his Wii and runs through Mushroom Gorge a few times before trying to replicate the technique. "Aim in between the two coins, shroom before the grass then, release your med turbo and left hop at the same time"
On his first attempt he gets close to making the shortcut, but hits the barrier and falls. On his second attempt Miro turbos too quickly and hits a wall. On his eighth attempt he gets it. Frustration gives way to sudden pride, and he messages Yindi, "Just made the jump. Thank you!" Miro decides to check out what NetMe is all about.
At work, he thanks Yindi again, and they talk for a long time about nothing in particular.
----------------------------------------
By spring, Miro uses NetMe every day, and while he feels so little, he feels so much. Each video he scrolls past micro-doses him with rage, or awe, or awwww, or lust. Miro doesn't experience any emotion too intensely.
Four hours a day.
This is how much time Miro spends on NetMe.
But if you asked Miro what he likes, he'd say something like:
* Practicing speedruns
* Playing video games with friends
* Working at his job
* And Yindi (But Miro won't tell anyone that. Especially not Yindi)
NetMe wouldn't enter his mind at all.
Miro is on NetMe now. He's watching a video that looks... weird? It's a girl. She is smiling, and dressed like Zelda, when she wears that one outfit that always made Miro feel suspicious as a kid. Dancing? No. Writhing. Sometimes her hands pass in front of her clothing, and meld into it. Other times her face morphs from one girl to another. Pretty to ugly, Caucasian to Asian.
He cons |
59e7c263-5595-4713-98ed-d3cac83dc641 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | AI Alignment Open Thread October 2019
Continuing the experiment from August, let's try another open thread for AI Alignment discussion. The goal is to be a place where researchers and upcoming research can ask small questions they are confused about, share early stage ideas and have lower-key discussions. |
c010278c-c591-4e48-a9cd-7c8558892e8f | trentmkelly/LessWrong-43k | LessWrong | Linkpost: M21 Review: We Have Normality
You can find it here. |
ed5c0941-de0f-4c02-b331-2cae3c52d52b | trentmkelly/LessWrong-43k | LessWrong | Longterm/Difficult to measure charities
I'm not sure if this necessarily warrants a new discussion, or if there's an existing article/thread that addresses this topic.
There's a lot of discussion recently about charity, and how to give effectively. I've been looking over givewell.org and it definitely is the single most important thing I've found on lesswrong. But one discouraging thing is that by focusing on easy to measure charities, there's not a lot of info on charities that are trying to accomplish long term less measurable goals. The best charity there that matches my priorities was an educational agency in India that put a lot of emphasis on self improvement.
My *think* my ideal charity would be something similar to Heifer International, but which also focuses on reproductive health and/or women's rights. Feeding people fish for a day means you just need to feed them again tomorrow, and if they have a bunch of kids you haven't necessarily accomplished anything. From what I've read, in places where the standard of living improves and women get more equality, overpopulation becomes less of an issue. So it seems to me that addressing those issues together in particular regions would produce sustainable longterm benefit. But Givewell doesn't seem to have a lot of information on those types of charities. |
4c331370-92fc-4ac6-962d-00c167b26e16 | StampyAI/alignment-research-dataset/arxiv | Arxiv | The Precautionary Principle
(with Application to the Genetic Modification of Organisms)
I Introduction
----------------
The aim of the precautionary principle (PP) is to prevent decision makers from putting society as a whole---or a significant segment of it---at risk from the unexpected side effects of a certain type of decision. The PP states that if an action or policy has a suspected risk of causing severe harm to the public domain (such as general health or the environment), and in the absence of scientific near-certainty about the safety of the action, the burden of proof about absence of harm falls on those proposing the action. It is meant to deal with effects of absence of evidence and the incompleteness of scientific knowledge in some risky domains.111The Rio Declaration on Environment and Development presents it as follows: "In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation."
We believe that the PP should be evoked only in extreme situations: when the potential harm is systemic (rather than localized) and the consequences can involve total irreversible ruin, such as the extinction of human beings or all
life on the planet.
The aim of this paper is to place the concept of precaution within a formal statistical and risk-analysis structure, grounding it in probability theory and the properties of complex systems. Our aim is to allow decision makers to discern which circumstances require the use of the PP and in which cases evoking the PP is inappropriate.
Ii Decision making and types of Risk
-------------------------------------
Taking risks is necessary for individuals as well as for decision makers affecting the functioning and advancement of society. Decision and policy makers tend to assume all risks are created equal. This is not the case.
Taking into account the structure of randomness in a given system can have a dramatic effect on which kinds of actions are, or are not, justified. Two kinds of potential harm must be considered when determining an appropriate approach to the role of risk in decision-making: 1) localized non-spreading impacts and 2) propagating impacts resulting in irreversible and widespread damage.
Traditional decision-making strategies focus on the case where harm is localized and risk is easy to calculate from past data. Under these circumstances, cost-benefit analyses and mitigation techniques are appropriate. The potential harm from miscalculation is bounded.
On the other hand, the possibility of irreversible and widespread damage raises different questions about the nature of decision making and what risks can be reasonably taken. This is the domain of the PP.
Criticisms are often levied against those who argue for caution portraying them as unreasonable and possibly even paranoid. Those who raise such criticisms are implicitly or explicitly advocating for a cost benefit analysis, and necessarily so. Critics of the PP have also expressed concern that it will be applied in an overreaching manner, eliminating the ability to take reasonable risks that are needed for individual or societal gains. While indiscriminate use of the PP might constrain appropriate risk-taking,
at the same time one can also make the error of suspending the PP in cases when it is vital.
Hence, a non-naive view of the precautionary principle is one in which it is only invoked when necessary, and only to prevent a certain variety of very precisely defined risks based on distinctive probabilistic structures. But, also, in such a view, the PP should never be omitted when needed.
The remainder of this section will outline the difference between the naive and non-naive approaches.
###
Ii-a What we mean by a non-naive PP
Risk aversion and risk-seeking are both well-studied human behaviors. However, it is essential to distinguish the PP so that it is neither used naively to justify any act of caution, nor dismissed by those who wish to court risks for themselves or others.
The PP is intended to make decisions that ensure survival when statistical evidence is limited—because it has not had time to show up —by focusing on the adverse effects of "absence of evidence."
Table 1 encapsulates the central idea of the paper and shows the differences between decisions with a risk of harm (warranting regular risk management techniques) and decisions with a risk of total ruin (warranting the PP).
| Standard Risk Management | Precautionary Approach |
| --- | --- |
| localized harm | systemic ruin |
| nuanced cost-benefit | avoid at all costs |
| statistical | fragility based |
| statistical | probabilistic non-statistical |
| variations | ruin |
| convergent probabibilities | divergent probabilities |
| recoverable | irreversible |
| independent factors | interconnected factors |
| evidence based | precautionary |
| thin tails | fat tails |
| bottom-up, tinkering | top-down engineered |
| evolved | human-made |
TABLE I: Two different types of risk and their respective characteristics compared
###
Ii-B Harm vs. Ruin: When the PP is necessary
The purpose of the PP is to avoid a certain class of what, in probability and insurance, is called “ruin" problems [[1](#bib.bib1)]. A ruin problem is one where outcomes of risks have a non-zero probability of resulting in unrecoverable losses. An often-cited illustrative case is that of a gambler who loses his entire fortune and so cannot return to the game. In biology, an example would be a species that has gone extinct. For nature, "ruin" is ecocide: an irreversible termination of life at some scale, which could be planetwide. The large majority of variations that occur within a system, even drastic ones, fundamentally differ from ruin problems: a system that achieves ruin cannot recover. As long as the instance is bounded, e.g. a gambler can work to gain additional resources, there may be some hope of reversing the misfortune. This is not the case when it is global.
Our concern is with public policy. While an individual may be advised to not "bet the farm," whether or not he does so is generally a matter of individual preferences. Policy makers have a responsibility to avoid catastrophic harm for society as a whole; the focus is on the aggregate, not at the level of single individuals, and on global-systemic, not idiosyncratic, harm. This is the domain of collective "ruin" problems.
Precautionary considerations are relevant much more broadly than to ruin problems. For example, there was a precautionary case against cigarettes long before there was an open-and-shut evidence-based case against them. Our point is that the PP is a decisive consideration for ruin problems, while in a broader context precaution is not decisive and can be balanced against other considerations.
Iii Why Ruin is Serious Business
---------------------------------

Fig. 1: Why Ruin is not a Renewable Resource. No matter how small the probability, in time, something bound to hit the ruin barrier is about guaranteed to hit it.
The risk of ruin is not sustainable.
By the ruin theorems, if you incur a tiny probability of ruin
as a "one-off" risk, survive it, then do it again (another "one-off" deal), you will eventually go bust with probability 1. Confusion arises because it may seem that the "one-off" risk is reasonable, but that also means that an additional one is reasonable. This can be quantified by recognizing
that the probability of ruin approaches 1 as the number of exposures to individually small risks, say one in ten thousand, increases (see Fig. [1](#S3.F1 "Fig. 1 ‣ III Why Ruin is Serious Business ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)")). For this reason a strategy of risk taking is not sustainable and we must consider *any* genuine risk of total ruin as if it were inevitable.
The good news is that some classes of risk can be deemed to be practically of probability zero: the earth survived trillions of natural variations daily over 3 billion years, otherwise we would not be here.
By recognizing that normal risks are not in the category of ruin problems, we recognize also that it is not necessary or even normal to take risks that involve a possibility of ruin.
###
Iii-a PP is not Risk Management
It is important to contrast and not conflate the PP and risk management. Risk management involves various strategies to make decisions based upon accounting for the effects of positive and negative outcomes and their probabilities, as well as seeking means to mitigate harm and offset losses.
Risk management strategies are important for
decision-making when ruin is not at stake.
However, the only risk management strategy
of importance in the case of the PP is ensuring that actions
which can result in ruin are not taken, or equivalently, modifying potential choices of action so that ruin is not one of the possible outcomes.
More generally, we can identify three layers associated with strategies for dealing with uncertainty and risk. The first layer is the PP which addresses cases that involve potential global harm, whether probabilities are uncertain or known and whether they are large or small. The second is risk management which addresses the case of known probabilities of well-defined, bounded gains and losses. The third is risk aversion or risk-seeking behavior, which reflects quite generally the role of personal preferences for individual risks when uncertainty is present.

Fig. 2: A variety of temporal states for a process subjected to an absorbing barrier. Once the absorbing barrier is hit, the process terminates, regardless of its future potential.
###
Iii-B Ruin is forever
A way to formalize the ruin problem in terms of the destructive consequences of actions identifies harm as not about the amount of destruction, but rather a measure of the integrated level of destruction over the time it persists. When the impact of harm extends to all future times, i.e. forever, then the harm is infinite. When the harm is infinite, the product of any non-zero probability and the harm is also infinite, and it cannot be balanced against any potential gains, which are necessarily finite. This strategy for evaluation of harm as involving the duration of destruction can be used for localized harms for better assessment in risk management. Our focus here is on the case where destruction is complete for a system or an irreplaceable aspect of a system.
Figure [2](#S3.F2 "Fig. 2 ‣ III-A PP is not Risk Management ‣ III Why Ruin is Serious Business ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)") shows ruin as an absorbing barrier, a point that does not allow recovery.
For example, for humanity global devastation cannot be measured on a scale in which harm is proportional to level of devastation. The harm due to complete destruction is not the same as 10 times the destruction of 1/10 of the system. As the percentage of destruction approaches 100%, the assessment of harm diverges to infinity (instead of converging to a particular number) due to the value placed on a future that ceases to exist.
Because the “cost” of ruin is effectively infinite, cost-benefit analysis (in which the potential harm and potential gain are multiplied by their probabilities and weighed against each other) is no longer a useful paradigm.
Even if probabilities are expected to be zero but have a non-zero uncertainty, then a sensitivity analysis that considers the impact of that uncertainty results in infinities as well.
The potential harm is so substantial that everything else in the equation ceases to matter.
In this case, we must do everything we can to avoid the catastrophe.
Iv Scientific methods and the PP
---------------------------------
How well can we know either the potential consequences of policies or their probabilities? What does science say about uncertainty? To be helpful in policy decisions, science has to encompass not just expectations of potential benefit and harm but also their probability and uncertainty.
Just as the imperative of analysis of decision-making changes when there is
infinite harm for a small, non-zero risk, so is there a fundamental change in the ability to apply scientific methods to the evaluation of that harm. This influences the way we evaluate both the possibility of and the risk associated with ruin.
The idea of precaution is the avoidance of adverse consequences. This is qualitatively different from the idea of evidentiary action (from statistics). In the case of the PP, evidence may come too late.
The non-naive PP bridges the gap between precaution and evidentiary action using the ability to evaluate the difference between local and global risks.
###
Iv-a Precautionary vs. Evidentiary Action
Statistical-evidentiary approaches to risk analysis and mitigation count the frequency of past events (robust statistics), or calibrate parameters of statistical distributions to generate probabilities of future events (parametric approach), or both. Experimental evidentiary methods follow the model of medical trials, computing probabilities of harm from side effects of drugs or interventions by observing the reactions in a variety of animal and human models. Generally they assume that the risk itself (i.e. nature of harm and their probability) is adequately determined by available information. However, the level of risk may be hard to gauge as its probability may be uncertain, and, in the case of potential infinite harm, an uncertainty that allows for a non-zero probability results in infinities so that the problem is ill-defined mathematically.
While evidentiary approaches are often considered to reflect adherence to the scientific method in its purest form, it is apparent that these approaches do not apply to ruin problems. In an evidentiary approach to risk (relying on
evidence-based methods), the existence of a risk or harm occurs when we experience that risk or harm. In the case of ruin, by the time evidence comes it will by definition be too late to avoid it. Nothing in the past may predict one fatal event as illustrated in Fig. [4](#S4.F4 "Fig. 4 ‣ IV-C Unknowability, Uncertainty and Unpredictability ‣ IV Scientific methods and the PP ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)"). Thus standard evidence-based approaches cannot work.
More generally, evidentiary action is a framework based upon the quite reasonable expectation that we learn from experience. The idea of evidentiary action is embodied in the kind of learning from experience that is found in how people often react to disasters—after the fact. When a disaster occurs people prepare for the next one, but do not anticipate it in advance. For the case of ruin problems, such behavior guarantees extinction.
###
Iv-B Invalid Empirical Arguments Against Ruin
In the case of arguments about ruin problems, claims that experience thus far has not provided evidence for ruin, and thus it should not be considered, are not valid.
###
Iv-C Unknowability, Uncertainty and Unpredictability
It has been shown that the complexity of real world systems limits the ability of empirical observations to determine the outcomes of actions upon them [[2](#bib.bib2)]. This means that a certain class of systemic risks will remain inherently unknown.
In some classes of complex systems, controlled experiments cannot evaluate all of the possible systemic consequences under real-world conditions. In these circumstances, efforts to provide assurance of the "lack of harm" are insufficiently reliable. This runs counter to both the use of empirical approaches (including controlled experiments) to evaluate risks, and to the expectation that uncertainty can be eliminated by any means.
| | | | |
| --- | --- | --- | --- |
| |
Fig. 3: Thin Tails from Tinkering, Bottom-Up, Evolution. In nature no individual variation represents a large share of the sum of the variations. Natural boundaries prevent cascading effects from propagating globally. Mass extinctions arise from the rare cases where large impacts (meteorite hits and vulcanism) propagate across the globe through the atmosphere and oceans. | |
Fig. 4: Fat Tails from a Top-Down, Engineered Design In human made variations the tightly connected global system implies a single deviation will eventually dominate the sum of their effects. Examples include pandemics, invasive species, financial crises and monoculture. |
###
Iv-D Distinguishing Global and Local Risks
Since there are mathematical limitations to predictability of outcomes in a complex system, the central issue to determine is whether the threat of harm is local (hence globally benign) or carries global consequences. Scientific analysis can robustly determine whether a risk is systemic, i.e. by evaluating the connectivity of the system to propagation of harm, without determining the specifics of such a risk. If the consequences are systemic, the associated uncertainty of risks must be treated differently than if it is not. In such cases, precautionary action is not based on direct empirical evidence but on analytical approaches based upon the theoretical understanding of the nature of harm. It relies on probability theory without computing probabilities. The essential question is whether or not global harm is possible or not.
Theory enables generalizing from experience in order to apply it to new circumstances.
In the case of the PP, the existence of a robust way to generalize is essential.
The relevance of the precautionary principle today is greater than in the past, owing to the global connectivity of civilization that makes the spreading of effects to places previously insulated.
V Fat Tails and Fragility
--------------------------
###
V-a Thin and Fat Tails
To figure out whether a given decision involves the risk of ruin and thus warrants the use of the PP, we must first understand the relevant underlying probabilistic structures.
There are two classes of probability distributions of events: one in which events are accompanied by well behaved, mild variations (e.g. Gaussian or thin tails), and the other where small probabilities are associated with large variations that have no characteristic scale (e.g. power law or fat tails). Allegorically these are illustrated by Mediocristan and Extremistan (Figs. [3](#S4.F3 "Fig. 3 ‣ IV-C Unknowability, Uncertainty and Unpredictability ‣ IV Scientific methods and the PP ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)") and [4](#S4.F4 "Fig. 4 ‣ IV-C Unknowability, Uncertainty and Unpredictability ‣ IV Scientific methods and the PP ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)")), the former being typical of human weight distributions, and the latter
of human wealth distributions. Given a series of events (a sequence of measurements of weight or wealth), in the case of thin tails the sum is proportional to the average, and in the case of fat tails a sum over them may be entirely dominated by a single one. Thus, while no human being can be heavier than, say, ten average adults (since weight is thin-tailed), a single individual can be richer than the poorest two billion humans (since wealth is fat tailed).
In thin tailed domains (Fig [3](#S4.F3 "Fig. 3 ‣ IV-C Unknowability, Uncertainty and Unpredictability ‣ IV Scientific methods and the PP ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)")) harm comes from the collective effect of many, many events; no event alone can be consequential enough to affect the aggregate.
It is practically impossible for a single day to account for 99% of all heart attacks in a given year (the probability is small enough to be practically zero), for an illustration).
Statistical distributions that belong to the thin-tailed domain include: Gaussian, Binomial, Bernoulli, Poisson, Gamma, Beta and Exponential.
In fat tailed domains of risk (Fig. [4](#S4.F4 "Fig. 4 ‣ IV-C Unknowability, Uncertainty and Unpredictability ‣ IV Scientific methods and the PP ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)")) harm comes from the largest single event. Examples of relevant statistical distributions include: Pareto, Levy-Stable distributions with infinite variance, Cauchy, and power law distributions, especially with larger exponents.
###
V-B Why interdependence brings fat tails
When variations lead to independent impacts locally, the aggregate effect of those variations is small according to the central limit theorem, guaranteeing thin-tailed distributions. When there is interdependence, the central limit theorem does not apply, and aggregate variations may become much more severe due to mutual reinforcement. Interdependence arises because of the coupling of behavior in different places. Under these conditions, cascades propagate through the system in a way that can cause large impacts. Whether components are independent or dependent clearly matters to systemic disasters such as pandemics and financial or other crises. Interdependence increases the probability of ruin, ultimately to the point of certainty.
Consider the global financial crash of 2008. As financial firms became increasingly interdependent during the latter part of the 20th century, small fluctuations during periods of calm masked the vulnerability of the system to cascading failures. Instead of a local shock in an independent area of the system, we experienced a global shock with cascading effects. The crisis of 2008, in addition, illustrates the failure of evidentiary risk management. Since data from the time series beginning in the 1980s exhibited stability, causing the period to be dubbed "the great moderation," it deceived those relying on historical statistical evidence.
Vi What is the Risk of Harm to the Earth?
------------------------------------------
At the systemic largest scale on Earth, nature has thin tails, though tails may be fat at smaller length scales or sufficiently long time scales; occasional mass extinctions occur at very long time scales. This is characteristic of a bottom-up, local
tinkering design process, where things change primarily locally and only mildly and iteratively on a global scale.
In recent years, it has been shown that natural systems often have fat tail (power law) behaviors associated with the propagation of shocks [[3](#bib.bib3)]. This, however, applies to selected systems that do not have barriers (or circuit-breakers) that limit those propagations. The earth has an intrinsic heterogeneity of oceans/continents, deserts, mountains, lakes, rivers
and climate differences that limit the propagation of variations from one area to another. There are also smaller natural boundaries associated with organism sizes and those of local groups of organisms. Among the largest propagation events we commonly observe are forest fires, but even these are bounded in their impacts compared to a global scale. The various forms of barriers limit the propagation of cascades that enable large scale events.
At longer time scales of millions of years, mass extinctions can achieve a global scale.
Connectivity of oceans and the atmosphere enables propagation of impacts, i.e. gas, ash and dust propagating through the atmosphere due to meteor impacts and volcanism, is considered a scenario for these extinction events [[4](#bib.bib4)].
The variability associated with mass extinctions can especially be seen in the fossil record of marine animal species; those of plants and land insects are comparatively robust. It is not known to what extent these events are driven extrinsically, by meteor impacts, geological events including volcanos, or cascading events of coupled species extinctions, or combinations of them. The variability associated with mass extinctions, however, indicates that there are fat tail events that can affect the global biosphere. The major extinction events during the past 500 million years occur at intervals of millions of years [[5](#bib.bib5)]. While mass extinctions occur, the extent of that vulnerability is driven by both sensitivity to external events and connectivity among ecosystems.
The greatest impact of human beings on this natural system connectivity is through dramatic increases in global transportation. The impact of invasive species and rapid global transmission of diseases demonstrates the role of human activity in connecting previously much more isolated natural systems. The role of transportation and communication in connecting civilization itself is apparent in economic interdependence manifest in cascading financial crises that were not possible even a hundred years ago. The danger we are facing today is that we as a civilization are globally connected, and the fat tail of the distribution of shocks extends globally, to our peril.
Had nature not imposed sufficiently thin-tailed variations in the aggregate or macro level, we would not be here today. A single one of the trillions, perhaps the trillions of trillions, of variations over evolutionary history would have terminated life on the planet. Figures 1 and 2 show the difference between the two separate statistical properties. While tails can be fat for subsystems, nature remains predominantly thin-tailed at the level of the planet [[6](#bib.bib6)]. As connectivity increases the risk of extinction increases dramatically and nonlinearly [[7](#bib.bib7)].
###
Vi-a Risk and Global Interventionism
Currently, global dependencies are manifest in the expressed concerns about policy maker actions that nominally appear to be local in their scope. In just recent months, headlines have been about Russia’s involvement in Ukraine, the spread of Ebola in east Africa, expansion of ISIS control into Iraq, ongoing posturing in North Korea and Israeli-Palestinian conflict, among others. These events reflect upon local policy maker decisions that are justifiably viewed as having global repercussions. The connection between local actions and global risks compels widespread concern and global responses to alter or mitigate local actions. In this context, we point out that the broader significance and risk associated with policy actions that impact on global ecological and human survival is the essential point of the PP. Paying attention to the headline events without paying attention to these even larger risks is like being concerned about the wine being served on the Titanic.
Vii Fragility
--------------
We define fragility in the technical discussion in Appendix [C](#A3 "Appendix C Mathematical Derivations of Fragility ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)") as "is harmed by uncertainty", with the mathematical result that what is harmed by uncertainty has a certain type on nonlinear response to random events.
The PP applies only to the largest scale impacts due to the inherent fragility of systems that maintain their structure. As the scale of impacts increases the harm increases non-linearly up to the point of destruction.
###
Vii-a Fragility as Nonlinear Response
Everything that has survived is necessarily non-linear to harm. If I fall from a height of 10 meters I am injured more than 10 times than if I fell from a height of 1 meter, or more than 1000 times than if I fell from a height of 1 centimeter, hence I am fragile. In general, every additional meter, up to the point of my destruction, hurts me more than the previous one.
Similarly, if I am hit with a big stone I will be harmed a lot more than if I were pelted serially with pebbles of the same total weight.
Everything that is fragile and still in existence (that is, unbroken), will be harmed more by a certain stressor of intensity X than by k times a stressor of intensity X/k, up to the point of breaking. If I were not fragile (susceptible to harm more than linearly), I would be destroyed by accumulated effects of small events, and thus would not survive. This non-linear response is central for everything on planet earth.
This explains the necessity of considering scale when invoking the PP. Polluting in a small way does not warrant the PP because it is essentially less harmful than polluting in large quantities, since harm is non-linear.

Fig. 5: Nonlinear response compared to linear response. The PP should be evoked to prevent impacts that result in complete destruction due to the nonlinear response of natural systems, it is not needed for smaller impacts where risk management methods can be applied.
###
Vii-B Why is fragility a general rule?
The statistical structure of stressors is such that small variations are much, much more frequent than large ones. Fragility is intimately connected to the ability to withstand small impacts and recover from them. This ability is what makes a system retain its structure. Every system has a threshold of impact beyond which it will be destroyed, i.e. its structure is not sustained.
Consider a coffee cup sitting on a table: there are millions of recorded earthquakes every year; if the coffee cup were linearly sensitive to earthquakes and accumulated their effects as small deteriorations of its form, it would not persist even for a short time as it would have been broken down due to the accumulated impact of small vibrations. The coffee cup, however, is non-linear to harm, so that the small or remote earthquakes only make it wobble, whereas one large one would break it forever.
This nonlinearity is necessarily present in everything fragile.
Thus, when impacts extend to the size of the system, harm is severely exacerbated by non-linear effects. Small impacts, below a threshold of recovery, do not accumulate for systems that retain their structure. Larger impacts cause irreversible damage. We should be careful, however, of actions that may seem small and local but then lead to systemic consequences.
###
Vii-C Fragility, Dose response and the 1/n rule
Another area where we see non-linear responses to harm is the dose-response relationship. As the dose of some chemical or stressor increases, the response to it grows non-linearly. Many low-dose exposures do not cause great harm, but a single large-dose can cause irreversible damage to the system, like overdosing on painkillers.
In decision theory, the 1/n heuristic is a simple rule in which an agent invests equally across n funds (or sources of risk) rather than weighting their investments according to some optimization criterion such as mean-variance or Modern Portfolio Theory (MPT), which dictates some amount of concentration in order to increase the potential payoff. The 1/n heuristic mitigates the risk of suffering ruin due to an error in the model; there is no single asset whose failure can bring down the ship. While the potential upside of the large payoff is dampened, ruin due to an error in prediction is avoided. This heuristic works best when the sources of variations are uncorrelated and, in the presence of correlation or dependence between the various sources of risk, the total exposure needs to be reduced.
Hence, because of non-linearities, it is preferable to diversify our effect on the planet, e.g. distinct types of pollutants, across the broadest number of uncorrelated sources of harm, rather than concentrate them. In this way, we avoid the risk of an unforeseen, disproportionately harmful response to a pollutant deemed "safe" by virtue of responses observed only in relatively small doses.
Table [II](#S7.T2 "TABLE II ‣ VII-C Fragility, Dose response and the /1n rule ‣ VII Fragility ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)") summarizes out policy with respect to the various types of exposures and fragilities.
| | Local Exposure | Systemic Exposure |
| --- | --- | --- |
| Thin Tails | I | II |
| Fat Tails | III | IV: Domain of PP |
I: First Quadrant, safe
II: Second Quadrant, safe but calculated risks
III: Quadrant III, safe but rigorous risk management
IV: Quadrant Where PP should be exercized
TABLE II: The Four Quadrants
Viii The limitation of top-down engineering in complex environments
--------------------------------------------------------------------
In considering the limitations of risk-taking, a key question is whether or not we can analyze the potential outcomes of interventions and, knowing them, identify the associated risks. Can’t we just "figure it out?” With such knowledge we can gain assurance that extreme problems such as global destruction will not arise.
Since the same issue arises for any engineering effort, we can ask what is the state-of-the-art of engineering? Does it enable us to know the risks we will encounter? Perhaps it can just determine the actions we should, or should not, take. There is justifiably widespread respect for engineering because it has provided us with innovations ranging from infrastructure to electronics that have become essential to modern life.
What is not as well known by the scientific community and the public, is that engineering approaches fail in the face of complex challenges and this failure has been extensively documented by the engineering community itself [[8](#bib.bib8)]. The underlying reason for the failure is that complex environments present a wide range of conditions. Which conditions will actually be encountered is uncertain. Engineering approaches involve planning that requires knowledge of the conditions that will be encountered. Planning fails due to the inability to anticipate the many conditions that will arise.
This problem arises particularly for “real-time” systems that are dealing with large amounts of information and have critical functions in which lives are at risk. A classic example is the air traffic control system. An effort to modernize that system by traditional engineering methods cost $3-6 billion and was abandoned without changing any part of the system because of the inability to evaluate the risks associated with its implementation.
Significantly, the failure of traditional engineering to address complex challenges has led to the adoption of innovation strategies that mirror evolutionary processes, creating platforms and rules that can serve as a basis for safely introducing small incremental changes that are extensively tested in their real world context [[8](#bib.bib8)]. This strategy underlies the approach used by highly-successful, modern, engineered-evolved, complex systems ranging from the Internet, to Wikipedia, to iPhone App communities.

Fig. 6: The more uncertain or skeptical one is of "scientific" models and projections, the higher the risk of ruin, which flies in the face of the argument of the style "skeptical of climate models". No matter how increased the probability of benefits, ruin as an absorbing barrier, i.e. causing extinction without further recovery, can more than cancels them out. This graph assumes changes in uncertainty without changes in benefits (a mean-preserving sensitivity) –the next one isolates the changes in benefits.

Fig. 7: The graph shows the asymmetry between benefits and harm and the effect on the ruin probabilities. Shows the effect on ruin probability of changes the Information Ratio, that is, expected benefituncertainty (or signal divided by noise). Benefits are small compared to negative effects. Three cases are considered, two from Extremistan: extremely fat-tailed (α=1), and less fat-tailed (α=2), and one from Mediocristan.
Ix Skepticism and Precaution
-----------------------------
We show in Figures [6](#S8.F6 "Fig. 6 ‣ VIII The limitation of top-down engineering in complex environments ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)") and [7](#S8.F7 "Fig. 7 ‣ VIII The limitation of top-down engineering in complex environments ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)") that an increase in uncertainty leads to an increase in the probability of ruin, hence "skepticism" is that its impact on decisions should lead to increased, not decreased conservatism in the presence of ruin. More skepticism about models implies more uncertainty about the tails, which necessitates more precaution about newly implemented techniques, or larger size of exposures. As we said, Nature might not be smart, but its longer track record means smaller uncertainty in following its logic.
Mathematically, more uncertainty about the future –or about a model –increases the scale of the distribution, hence thickens the "left tail" (as well as the "right one") which raises the potential ruin. The survival probability is reduced no matter what takes place in the right tail.
Hence skepticim about climate models should lead to more precautionary policies.
In addition, such increase uncertainty matters far more in Extremistan –and has benign effects in Mediocristan. Figure [7](#S8.F7 "Fig. 7 ‣ VIII The limitation of top-down engineering in complex environments ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)") shows th asymmetries between costs and benefits as far as ruin probabilities, and why these matter more for fat-tailed domains than thin-tailed ones. In thin-tailed domains, an increase in uncertainty changes the probability of ruin by several orders of magnitude, but the effect remains small: from say 10−40 to 10−30 is not quite worrisome. In fat-tailed domains, the effect is sizeable as we start with a substantially higher probability of ruin (which is typically underestimated, see [[6](#bib.bib6)]).
X Why should GMOs be under PP but not nuclear energy?
------------------------------------------------------
As examples that are relevant to the discussion of the different types of strategies, we consider the differences between concerns about nuclear energy and GM crops.
In short nuclear exposure in nonlinear –and can be local (under some conditions) – while GMOs are not and present systemic risks even in small amounts.
###
X-a Nuclear energy
Many are justifiably concerned about nuclear energy. It is known that the potential harm due to radiation release, core meltdowns and waste can be large. At the same time, the nature of these risks has been extensively studied, and the risks from local uses of nuclear energy have a scale that is much smaller than global. Thus, even though some uncertainties remain, it is possible to formulate a cost benefit analysis of risks for local decision-making. The large potential harm at a local scale means that decisions about whether, how and how much to use nuclear energy, and what safety measures to use, should be made carefully so that decision makers and the public can rely upon them. Risk management is a very serious matter when potential harm can be large and should not be done casually or superficially. Those who perform the analysis must not only do it carefully, they must have the trust of others that they are doing it carefully. Nevertheless, the known statistical structure of the risks and the absence of global systemic consequences makes the cost benefit analysis meaningful. Decisions can be made in the cost-benefit context—evoking the PP is not appropriate for small amounts of nuclear energy, as the local nature of the risks is not indicative of the circumstances to which the PP applies.
In large quantities, we should worry about an unseen risk from nuclear energy and invoke the PP. In small quantities, it may be OK—how small we should determine by direct analysis, making sure threats never cease to be local.
In addition to the risks from nuclear energy use itself, we must keep in mind the longer term risks associated with the storage of nuclear waste, which are compounded by the extended length of time they remain hazardous. The problems of such longer term “lifecycle” effects is present in many different industries. It arises not just for nuclear energy but also for fossil fuels and other sources of pollution, though the sheer duration of toxicity effects for nuclear waste, enduring for hundreds of thousands of years in some cases, makes this problem particularly intense for nuclear power.
As we saw earlier we need to remain careful in limiting nuclear exposure –as other sources of pollution – to sources that owing to their quantity do not allow for systemic effects.
###
X-B GMOs
Genetically Modified Organisms (GMOs) and their risk are currently the subject of debate [[9](#bib.bib9)]. Here we argue that they fall squarely under the PP because their risk is systemic. There are two aspects of systemic risk, the widespread impact on the ecosystem and the widespread impact on health.
Ecologically, in addition to intentional cultivation, GMOs have the propensity to spread uncontrollably, and thus their risks cannot be localized. The cross-breeding of wild-type plants with genetically modified ones prevents their disentangling, leading to irreversible system-wide effects with unknown downsides. The ecological implications of releasing modified organisms into the wild are not tested empirically before release.
Healthwise, the modification of crops impacts everyone. Corn, one of the primary GMO crops, is not only eaten fresh or as cereals, but is also a major component of processed foods in the form of high-fructose corn syrup, corn oil, corn starch and corn meal. In 2014 in the US almost 90% of corn and 94% of soybeans are GMO [[11](#bib.bib11)]. Foods derived from GMOs are not tested in humans before they are marketed.
The widespread impacts of GMOs on ecologies and human health imply they are in the domain of the PP. This should itself compel policy makers to take extreme caution. However, there is a difficulty for many in understanding the abstract nature of the engagement in risks and imagining the many possible ways that harm can be caused. Thus, we summarize further the nature of the risks that are involved.

Fig. 8: A simplified illustration of the mechanism behind the potato famine of the 19th C. showing how concentration from monoculture increases the risk of ruin. Inspired by Berkeley’s Understanding Evolution.
###
X-C GMOs in detail
The systemic global impacts of GMOs arise from a combination of (1) engineered genetic modifications, (2) monoculture—the use of single crops over large areas. Global monoculture itself is of concern for potential global harm, but the evolutionary context of traditional crops provides important assurances (see Figure [8](#S10.F8 "Fig. 8 ‣ X-B GMOs ‣ X Why should GMOs be under PP but not nuclear energy? ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)")). Invasive species are frequently a problem but one might at least argue that the long term evolutionary testing of harmful impacts of organisms on local ecological systems mitigates if not eliminates the largest potential risks. Monoculture in combination with genetic engineering dramatically increases the risks being taken. Instead of a long history of evolutionary selection, these modifications rely not just on naive engineering strategies that do not appropriately consider risk in complex environments, but also explicitly reductionist approaches that ignore unintended consequences and employ very limited empirical testing.
Ironically, at a time when engineering is adopting evolutionary approaches due to the failure of top-down strategies, biologists and agronomists are adopting top-down engineering strategies and taking global systemic risks in introducing organisms into the wild.
One argument in favor of GMOs is that they are no more "unnatural" than the selective farming our ancestors have been doing for generations. In fact, the ideas developed in this paper show that this is not the case. Selective breeding over human history is a process in which change still happens in a bottom-up way, and can be expected to result in a thin-tailed distribution. If there is a mistake, some harmful variation, it will not spread throughout the whole system but end up dying out due to local experience over time. Human experience over generations has chosen the biological organisms that are relatively safe for consumption. There are many that are not, including parts of and varieties of the crops we do cultivate [[12](#bib.bib12)]. Introducing rapid changes in organisms is inconsistent with this process. There is a limited rate at which variations can be introduced and selection will be effective [[13](#bib.bib13)].
There is no comparison between tinkering with the selective breeding of genetic components of organisms that have previously undergone extensive histories of selection and the top-down engineering of taking a gene from a fish and putting it into a tomato. Saying that such a product is natural misses the process of natural selection by which things become “natural." While there are claims that all organisms include transgenic materials, those genetic transfers that are currently present were subject to selection over long times and survived. The success rate is tiny. Unlike GMOs, in nature there is no immediate replication of mutated organisms to become a large fraction of the organisms of a species. Indeed, any one genetic variation is unlikely to become part of the long term genetic pool of the population. Instead, just like any other genetic variation or mutation, transgenic transfers are subject to competition and selection over many generations before becoming a significant part of the population. A new genetic transfer engineered today is not the same as one that has survived this process of selection.
An example of the effect of transfer of biologically evolved systems to a different context is that of zoonotic diseases. Even though pathogens consume their hosts, they evolve to be less harmful than they would otherwise be. Pathogens that cause highly lethal diseases are selected against because their hosts die before they are able to transmit to others. This is the underlying reason for the greater dangers associated with zoonotic diseases—caused by pathogens that shift from the host that they evolved in to human beings, including HIV, Avian and Swine flu that transferred from monkeys (through chimpanzees), birds and hogs, respectively.
More generally, engineered modifications to ecological systems (through GMOs) are categorically and statistically different from bottom up ones. Bottom-up modifications do not remove the crops from their long term evolutionary context, enabling the push and pull of the ecosystem to locally extinguish harmful mutations. Top-down modifications that bypass this evolutionary pathway unintentionally manipulate large sets of interdependent factors at the same time, with dramatic risks of unintended consequences. They thus result in fat-tailed distributions and place a huge risk on the food system as a whole.
For the impact of GMOs on health, the evaluation of whether the genetic engineering of a particular chemical (protein) into a plant is OK by the FDA is based upon considering limited existing knowledge of risks associated with that protein. The number of ways such an evaluation can be in error is large. The genetic modifications are biologically significant as the purpose is to strongly impact the chemical functions of the plant, modifying its resistance to other chemicals such as herbicides or pesticides, or affecting its own lethality to other organisms—i.e. its antibiotic qualities. The limited existing knowledge generally does not include long term testing of the exposure of people to the added chemical, even in isolation. The evaluation is independent of the ways the protein affects the biochemistry of the plant, including interactions among the various metabolic pathways and regulatory systems—and the impact of the resulting changes in biochemistry on health of consumers. The evaluation is independent of its farm-ecosystem combination (i.e. pesticide resistant crops are subject to increased use of pesticides, which are subsequently present in the plant in larger concentrations and cannot be washed away). Rather than recognizing the limitations of current understanding, poorly grounded perspectives about the potential damage with unjustified assumptions are being made. Limited empirical validation of both essential aspects of the conceptual framework as well as specific conclusions are being used because testing is recognized to be difficult.
We should exert the precautionary principle here – our non-naive version – because we do not want to discover errors after considerable and irreversible environmental and health damage.
###
X-D Red herring: How about the risk of famine without GMOs?
An argument used by those who advocate for GMOs is that they will reduce the hunger in the world. Invoking the risk of famine as an alternative to GMOs is a deceitful strategy, no different from urging people to play Russian roulette in order to get out of poverty.
The evocation of famine also prevents clear thinking about not just GMOs but also about global hunger. The idea that GMO crops will help avert famine ignores evidence that the problem of global hunger is due to poor economic and agricultural policies. Those who care about the supply of food should advocate for an immediate impact on the problem by reducing the amount of corn used for ethanol in the US, which burns food for fuel consuming over 40% of the US crop that could provide enough food to feed 2/3 of a billion people [[14](#bib.bib14)].
One of the most extensively debated cases for GMOs is a variety of rice—"golden rice"—to which has been added a precursor of vitamin A as a potential means to alleviate this nutritional deficiency, which is a key medical condition affecting impoverished populations. Since there are alternatives, including traditional vitamin fortification, one approach is to apply a cost benefit analysis comparing these approaches. Counter to this approach stands both the largely unknown risks associated with the introduction of GMOs, and the need and opportunities for more systemic interventions to alleviate not just malnutrition but poverty and hunger worldwide. While great attention should be placed on immediate needs, neglecting the larger scale risks is unreasonable [[10](#bib.bib10)]. Here science should adopt an unyielding rigor for both health benefit and risk assessment, including careful application of the PP. Absent such rigor, advocacy by the scientific community not only fails to be scientific, but also becomes subject to challenge for short term interests, not much different from corporate endorsers. Thus, cutting corners on tests, including tests without adequate consent or approvals performed on Chinese children [[15](#bib.bib15)], undermines scientific claims to humanitarian ideals. Given the promotion of "golden rice" by the agribusiness that also promote biofuels, their interest in humanitarian impacts versus profits gained through wider acceptance of GMO technology can be legitimately questioned [[16](#bib.bib16)].
We can frame the problem in our probabilistic argument of Section [IX](#S9 "IX Skepticism and Precaution ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)"). This asymmetry from adding another risk, here a technology (with uncertainty attending some of its outcomes), to solve a given risk (which can be solved by less complicated means) are illustrated in Figures [6](#S8.F6 "Fig. 6 ‣ VIII The limitation of top-down engineering in complex environments ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)") and [7](#S8.F7 "Fig. 7 ‣ VIII The limitation of top-down engineering in complex environments ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)"). Model error, or errors from the technology itself, i.e., its iatrogenics, can turn a perceived "benefit" into a highly likely catastrophe, simply because an error from, say, "golden rice" or some such technology would have much worse outcomes than an equivalent benefit. Most of the discussions on "saving the poor from starvation" via GMOs miss the fundamental asymmetry shown in [7](#S8.F7 "Fig. 7 ‣ VIII The limitation of top-down engineering in complex environments ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)").
###
X-E GMOs in summary
In contrast to nuclear energy (which, as discussed in section [X-A](#S10.SS1 "X-A Nuclear energy ‣ X Why should GMOs be under PP but not nuclear energy? ‣ The Precautionary Principle (with Application to the Genetic Modification of Organisms)") above, may or may not fall under the PP, depending on how and where (how widely) it is implemented), Genetically Modified Organisms, GMOs, fall squarely under the PP because of their systemic risk. The understanding of the risks is very limited and the scope of the impacts are global both due to engineering approach replacing an evolutionary approach, and due to the use of monoculture.
Labeling the GMO approach “scientific" betrays a very poor—indeed warped—understanding of probabilistic payoffs and risk management. A lack of observations of explicit harm does not show absence of hidden risks. Current models of complex systems only contain the subset of reality that is accessible to the scientist. Nature is much richer than any model of it. To expose an entire system to something whose potential harm is not understood because extant models do not predict a negative outcome is not justifiable; the relevant variables may not have been adequately identified.
Given the limited oversight that is taking place on GMO introductions in the US, and the global impact of those introductions, we are precisely in the regime of the ruin problem. A rational consumer should say: We do not wish to pay—or have our descendants pay—for errors made by executives of Monsanto, who are financially incentivized to focus on quarterly profits rather than long term global impacts. We should exert the precautionary principle—our non-naive version—simply because we otherwise will discover errors with large impacts only after considerable damage.
###
X-F Vaccination, Antibiotics, and Other Exposures
Our position is that while one may argue that vaccination is risky, or risky under some circumstances, it does not fall under PP owing to the lack of systemic risk. The same applies to such interventions as antibiotics, provided the scale remains limited to the local.
Xi Precaution as Policy and Naive Intervention
-----------------------------------------------
When there is a risk of ruin, obstructionism and policy inaction are important strategies, impeding the rapid headlong experimentation with global ruin by those with short-term, self-centered incentives and perspectives. Two approaches for policy action are well justified. In the first, actions that avoid the inherent sensitivity of the system to propagation of harm can be used to free the system to enable local decision-making and exploration with only local harm. This involves introducing boundaries, barriers and separations that inhibit propagation of shocks, preventing ruin for overly connected systems. In the second, where such boundaries don’t exist or cannot be introduced due to other effects, there is a need for actions that are adequately evaluated as to their global harm. Scientific analysis of such actions, meticulously validated, is needed to prevent small risks from causing ruin.
What is not justified, and dangerous, are actions that are intended to prevent harm by additional intervention. The reason is that indirect effects are likely to create precisely the risks that one is intending to avoid.
When existing risks are perceived as having the potential for ruin, it may be assumed that any preventive measure is justified. There are at least two problems with such a perspective. First, localized harm is often mistaken for ruin, and the PP is wrongly invoked where risk management techniques should be employed. When a risk is not systemic, overreaction will typically cause more harm than benefits, like undergoing dangerous surgery to remove a benign growth. Second, even if the threat of ruin is real, taking specific (positive) action in order to ward off the perceived threat may introduce new systemic risks. It is often wiser to reduce or remove activity that is generating or supporting the threat and allow natural variations to play out in localized ways.
Preventive action should be limited to correcting situations by removing threats *via negativa* in order to bring them back in line with a statistical structure that avoids ruin. It is often better to remove structure or allow natural variation to take place rather than to *add* something additional to the system.
When one takes the opposite approach, taking specific action designed to diminish some perceived threat, one is almost guaranteed to induce unforeseen consequences. Even when there appears to be a direct link from a specific action to a specific preventive outcome, the web of causality extends in complex ways with consequences that are far from the intended goal. These unintended consequences may generate new vulnerabilities or strengthen the harm one is hoping to diminish.
Thus, when possible, limiting fragilizing dependencies is better than imposing additional structure that increases the fragility of the system as a whole.
Xii Fallacious arguments against PP
-------------------------------------
In this section we respond to a variety of arguments that have been made against the PP.
###
Xii-a Crossing the road (the paralysis fallacy)
Many have countered the invocation of the PP with “nothing is ever totally safe.” “I take risks crossing the road every day, so according to you I should stay home in a state of paralysis.” The answer is that we don’t cross the street blindfolded, we use sensory information to mitigate risks and reduce exposure to extreme shocks.
Even more importantly in the context of the PP, the probability distribution of death from road accidents at the population level is thin-tailed; I do not incur the risk of generalized human extinction by crossing the street—a human life is bounded in duration and its unavoidable termination is an inherent part of the bio-social system [[17](#bib.bib17)]. The error of my crossing the street at the wrong time and meeting an untimely demise in general does not cause others to do the same; the error does not spread. If anything, one might expect the opposite effect, that others in the system benefit from my mistake by adapting their behavior to avoid exposing themselves to similar risks. Equating risks a person takes with his or her own life with risking the existence of civilization is an inappropriate ego trip. In fact, the very idea of the PP is to avoid such a frivolous focus.
The paralysis argument is often used to present the PP as incompatible with progress. This is untrue: tinkering, bottom-up progress where mistakes are bounded is how progress has taken place in history. The non-naive PP simply asserts that the risks we take as we innovate must not extend to the entire system; local failure serves as information for improvement. Global failure does not.
This fallacy illustrates the misunderstanding between systemic and idiosyncratic risk in the literature. Individuals are fragile and mortal. The idea of sustainability is to stike to make systems as close to immortal as possible.
###
Xii-B The Psychology of Risk and Thick Tailed Distributions
One concern about the utility of the PP is that its evocation may become commonplace because of risk aversion. Is it true that people overreact to small probabilities and the PP would feed into human biases? While we have carefully identified the scope of the domain of applicability of the PP, it is also helpful to review the evidence of risk aversion, which we find not to be based upon sound studies.
Certain empirical studies appear to support the existence of a bias toward risk aversion, claiming evidence that people choose to avoid risks that are beneficial, inconsistent with cost-benefit analyses. The relevant experiments ask people questions about single probability events, showing that people overreact to small probabilities. However, those researchers failed to include the consequences of the associated events which humans underestimate. Thus, this empirical strategy as a way of identifying effectiveness of response to risk is fundamentally flawed [[18](#bib.bib18)].
The proper consideration of risk involves both probability and consequence, which should be multiplied together. Consequences in many domains have thick tails, i.e. much larger consequences can arise than are considered in traditional statistical approaches. Overreacting to small probabilities is not irrational when the effect is large, as the product of probability and harm is larger than expected from the traditional treatment of probability distributions.
###
Xii-C The Loch Ness fallacy
Many have countered that we have no evidence that the Loch Ness monster doesn’t exist, and, to take the argument of evidence of absence being different from absence of evidence, we should act as if the Loch Ness monster existed. The argument is a corruption of the absence of evidence problem and certainly not part of the PP.
The relevant question is whether the existence of the Loch Ness monster has implications for decisions about actions that are being taken. We are not considering a decision to swim in the Loch Ness. If the Loch Ness monster did exist, there would still be no reason to invoke the PP, as the harm he might cause is limited in scope to Loch Ness itself, and does not present the risk of ruin.
###
Xii-D The fallacy of misusing the naturalistic fallacy
Some people invoke “the naturalistic fallacy,” a philosophical concept that is limited to the moral domain. According to this critique, we should not claim that natural things are necessarily good; human innovation can be equally valid. We do not claim to use nature to derive a notion of how things "ought" to be organized. Rather, as scientists, we respect nature for the extent of its experimentation. The high level of statistical significance given by a very large sample cannot be ignored. Nature may not have arrived at the best solution to a problem we consider important, but there is reason to believe that it is smarter than our technology based only on statistical significance.
The question about what kinds of systems work (as demonstrated by nature) is different than the question about what working systems ought to do. We can take a lesson from nature—and time—about what kinds of organizations are robust against, or even benefit from, shocks, and in that sense systems should be structured in ways that allow them to function. Conversely, we cannot derive the structure of a functioning system from what we believe the outcomes ought to be.
To take one example, Cass Sunstein—who has written an article critical of the PP [[19](#bib.bib19)]—claims that there is a "false belief that nature is benign." However, his conceptual discussion fails to distinguish between thin and fat tails, local harm and global ruin. The method of analysis misses both the statistical significance of nature and the fact that it is not necessary to believe in the perfection of nature, or in its "benign" attributes, but rather in its track record, its sheer statistical power as a risk evaluator and as a risk manager in avoiding ruin.
###
Xii-E The "Butterfly in China" fallacy
The statement “if I move my finger to scratch my nose, by the butterfly-in-China effect, owing to non-linearities, I may terminate life on earth," is known to be flawed. The explanation is not widely understood. The fundamental reason arises because of the existence of a wide range in levels of predictability and the presence of a large number of fine scale degrees of freedom for every large scale one [[20](#bib.bib20)]. Thus, the traditional deterministic chaos, for which the butterfly effect was named, applies specifically to low dimensional systems with a few variables in a particular regime. High dimensional systems, like the earth, have large numbers of fine scale variables for every large scale one. Thus, it is apparent that not all butterfly wing flaps can cause hurricanes. It is not clear that any one of them can, and, if small perturbations can influence large scale events, it happens only under specific conditions where amplification occurs.
Empirically, our thesis rebuts the butterfly fallacy with the argument that, in the aggregate, nature has experienced trillions of small variations and yet it survives. Therefore, we know that the effects of scratching one’s nose fall into the thin tailed domain and thus do not warrant the precautionary principle.
As described previously, barriers in natural systems lead to subsystems having a high-degree of independence. Understanding how modern systems with a high-degree of connectivity have cascading effects is essential for understanding when it is and isn’t appropriate to use the PP.
###
Xii-F The potato fallacy
Many species were abruptly introduced into the Old World starting in the 16th Century that did not cause environmental disasters (perhaps aside from diseases affecting Native Americans). Some use this observation in defense of GMOs. However, the argument is fallacious at two levels:
First, by the fragility argument, potatoes, tomatoes and similar "New World" goods were developed locally through progressive, bottom-up tinkering in a complex system in the context of its interactions with its environment. Had they had an impact on the environment, it would have caused adverse consequences that would have prevented their continual spread.
Second, a counterexample is not evidence in the risk domain, particularly when the evidence is that taking a similar action previously did not lead to ruin. Lack of ruin due to several or even many trials does not indicate safety from ruin in the next one. This is also the Russian roulette fallacy, detailed below.
###
Xii-G The Russian roulette fallacy (the counterexamples in the risk domain)
The potato example, assuming potatoes had not been generated top-down by some engineers, would still not be sufficient. Nobody says "look, the other day there was no war, so we don’t need an army," as we know better in real-life domains. Nobody argues that a giant Russian roulette with many barrels is "safe" and a great money making opportunity because it didn’t blow up someone’s brains last time.
There are many reasons a previous action may not have led to ruin while still having the potential to do so. If you attempt to cross the street with a blindfold and earmuffs on, you may make it across, but this is not evidence that such an action carries no risk.
More generally, one needs a large sample for claims of absence of risk in the presence of a small probability of ruin, while a single “n=1" example would be sufficient to counter the claims of safety—this is the Black Swan argument [[29](#bib.bib29)]. Simply put, systemic modifications require a very long history in order for the evidence of lack of harm to carry any weight.
###
Xii-H The Carpenter Fallacy
Risk managers skeptical of the understanding of risk of biological processes, such as GMOs, by the experts are sometimes asked "are you a biologist?" But nobody asks a probabilist dealing with roulette sequences if he is a carpenter. To understand the gambler’s ruin problem by roulette betting, we know to ask a probabilist, not a carpenter. No amount of expertise in carpentry can replace rigor in understanding the properties of long sequences of small probability bets. Likewise, no amount of expertise in the details of biological processes can be a substitute for probabilistic rigor.
The context for evaluating risk is the extent of knowledge or lack of knowledge. Thus, when considering GMO risks, a key question is what is the extent to which we know the impacts of genetic changes in organisms. Claims that geneticists know these consequences as a basis for GMOs do not recognize either that their knowledge is not complete in its own domain nor is genetics complete as a body of knowledge. Geneticists do not know the developmental, physiological, medical, cognitive and environmental consequences of genetic changes in organisms. Indeed, most of these are not part of their training or competency. Neither are they trained in recognizing the impact of the limitations of knowledge on risk.
Some advocates dismiss the very existence of risk due to the role of scientific knowledge in GMOs. According to this view scientists from Monsanto and similar companies can be trusted to provide safe foods without risk and even a question about risk is without basis. Scientific knowledge as a source of engineering innovation has a long tradition. At the same time, engineering itself is a different discipline and has different imperatives. While construction of bridges and buildings relies upon well established rules of physics, the existence of risks does not end with that knowledge and must be considered directly in planning and construction as it is in other forms of engineering. The existence of risk in engineering even where knowledge is much better established than genetics is widely recognized. That proponents dismiss the very existence of risk, attests to their poor understanding or blind extrinsically motivated advocacy.
The FDA has adopted as a policy the approach that current scientific knowledge assures safety of GMOs, and relies upon Monsanto or similar companies for assurances. It therefore does not test the impact of chemical changes in GMO plants on human health or ecological systems. This despite experiments that show that increased concentrations of neurotoxins in maternal blood are linked to GMOs [[21](#bib.bib21)]. A variety of studies show experimental evidence that risks exist [[22](#bib.bib22), [23](#bib.bib23), [24](#bib.bib24), [25](#bib.bib25)] and global public health concerns are recognized [[27](#bib.bib27)]. We note that it is possible that there are significant impacts of neurotoxins on human cognitive function as a result of GMO modification, as FDA testing does not evaluate this risk.
Consistent with these points, the track record of the experts in understanding biological and medical risks has been extremely poor. We need policies to be robust to such miscalculations. The "expert problem" in medicine by which experts mischaracterize the completeness of their own knowledge is manifest in a very poor historical record of risks taken with innovations in biological products. These range from biofuels to transfat to nicotine, etc. Consider the recent major drug recalls such as Thalidomide, Fen-Phen, Tylenol and Vioxx—all of these show blindness on the part of the specialist to large scale risks associated with absence of knowlege, i.e., Black Swan events. Yet most of these risks were local and not systemic (with the exception of biofuel impacts on global hunger and social unrest). Since systemic risks would result in a recall happening too late, we need the strong version of the PP.
A sobering evidence showing how scientists in the biological fields can know their area very well yet make erroneous probabilistic statements is as follows.
Where X and Y are two random variables, the properties of the difference between the two, i.e. X−Y, say the variance, probabilities, and higher order attributes are markedly different from the difference in properties. So where E is the expectation (the expected average), and V the variance, E(X−Y)=E(X)−E(Y) but of course, Var(X−Y)≠Var(X)−Var(Y), etc. for higher order statistics. It means that P-values are different, and of course the coefficient of variation ("Sharpe"). Where σ is the standard deviation of the variable (or sample):
| | | |
| --- | --- | --- |
| | E(X−Y)σ(X−Y)≠E(X)σ(X)−E(Y))σ(Y) | |
The problem was described in Fooled by Randomness:
>
>
>
> A far more acute problem relates to the outperformance, or the comparison, between two or more persons or entities. While we are certainly fooled by randomness when it comes to a single times series, the foolishness is compounded when it comes to the comparison between, say, two people, or a person and a benchmark. Why? Because both are random. Let us do the following simple thought experiment. Take two individuals, say, a person and his brother-in-law, launched through life. Assume equal odds for each of good and bad luck. Outcomes: lucky-lucky (no difference between them), unlucky-unlucky (again, no difference), lucky- unlucky (a large difference between them), unlucky-lucky (again, a large difference).
>
>
>
Ten years later (2011) it was found that 50% of neuroscience papers (peer-reviewed in "prestigious journals") that compared variables got it wrong. In [[26](#bib.bib26)]:
>
> In theory, a comparison of two experimental effects requires a statistical test on their difference. In practice, this comparison is often based on an incorrect procedure involving two separate tests in which researchers conclude that effects differ when one effect is significant (P < 0.05) but the other is not (P > 0.05). We reviewed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscience, Neuron and The Journal of Neuroscience) and found that 78 used the correct procedure and 79 used the incorrect procedure. An additional analysis suggests that incorrect analyses of interactions are even more common in cellular and molecular neuroscience.
>
>
>
Fooled by Randomness was read by many professionals (to put it mildly); the mistake is still being made. There are no reason to believe that ten years from now, they will no longer be making the mistake.
At the core lies our understanding of what both science and risk management mean. Science is supposed to be fallible, in fact it is grounded in fallibility since it is at its core an incremental process, while risk management is about minimizing fallibility, and the PP is about defining areas that require near-infallibility.
###
Xii-I The technological salvation fallacy
Iatrogenics is harm done by a healer despite positive intentions, see Appendix A for a list of innovations in care that have extensive documentation of adverse consequences. Each of these underwent best practices testing that did not reveal the iatrogenic consequences prior to widespread application. The controlled tests that are used to evaluate innovations for potential harm cannot replicate the large number of conditions in which interventions are applied in the real world. Adverse consequences are exposed only by extensive experience with the combinatorial number of real world conditions. Natural, i.e. evolutionary, selection implements as a strategy the use of selection of lack of harm under such conditions in a way that bounds the consequences because the number of replicates is increased only gradually during the process in which success is determined. In contrast, traditional engineering of technological solutions does not. Thus, the more technological a solution to a current problem—the more it departs from solutions that have undergone evolutionary selection—the more exposed one becomes to iatrogenics owing to combinatorial branching of conditions with adverse consequences.
Our concern here isn’t mild iatrogenics, but the systemic case.
###
Xii-J The pathologization fallacy
Today many mathematical or conceptual models that are claimed to be rigorous are based upon unvalidated and incorrect assumptions and are not robust to changes in these assumptions. Such models are deemed rational in the sense that they are logically derived from their assumptions, and, further, can be used to assess rationality by examining deviations from such models, as indicators of irrationality. Except that it is often the modeler who is using an incomplete representation of the reality, hence using an erroneous benchmark for rationality. Often the modelers are not familiar with the dynamics of complex systems or use antiquated statistical methods that do not take into account fat-tails and make inferences that would not be acceptable under different classes of probability distributions. Many biases, such as the ones used by Cass Sunstein (mentioned above), about the overestimation of the probabilities of rare events in fact correspond to the testers using a bad probability model that is thin-tailed. See Ref. [[6](#bib.bib6)] for a deeper discussion.
It has became popular to claim irrationality for GMO and other skepticism on the part of the general public—not realizing that there is in fact an "expert problem" and such skepticism is healthy and even necessary for survival. For instance, in The Rational Animal [[28](#bib.bib28)], the authors pathologize people for not accepting GMOs although "the World Health Organization has never found evidence of ill effects," a standard confusion of evidence of absence and absence of evidence. Such pathologizing is similar to behavioral researchers labeling hyperbolic discounting as "irrational" when in fact it is largely the researcher who has a very narrow model and richer models make the "irrationality" go away.
These researchers fail to understand that humans may have precautionary principles against systemic risks, and can be skeptical of the untested consequences of policies for deeply rational reasons, even if they do not express such fears in academic format.
Xiii Conclusions
-----------------
This formalization of the two different types of uncertainty about risk (local and systemic) makes clear when the precautionary principle is, and when it isn’t, appropriate. The examples of GMOs and nuclear energy help to elucidate the application of these ideas. We hope this will help decision makers to avoid ruin in the future.
Acknowledgments
---------------
Gloria Origgi, William Goodlad, Maya Bialik, David Boxenhorn, Jessica Woolley, Phil Hutchinson…
Conflicts of Interest
---------------------
One of the authors (Taleb) reports having received monetary compensation for lecturing on risk management and Black Swan risks by the Institute of Nuclear Power Operations, INPO, the main association in the United States, in 2011, in the wake of the Fukushima accident. |
1985ad3d-e3fc-42bd-8eb3-dac412ebb60b | StampyAI/alignment-research-dataset/arxiv | Arxiv | Synergistic Team Composition
1 Introduction
---------------
Some tasks, due to their complexity, cannot be carried out by single individuals. They need the concourse of sets of people composing teams. Teams provide a structure and means of bringing together people with a suitable mix of individual properties (such as competences or personality). This can encourage the exchange of ideas, their creativity, their motivation and job satisfaction and can actually extend individual capabilities. In turn, a suitable team can improve the overall productivity, and the quality of the performed tasks. However, sometimes teams work less effectively than initially expected due to several reasons: a bad balance of their capacities, incorrect team dynamics, lack of communication, or difficult social situations. Team composition is thus a problem that has attracted the interest of research groups all over the world, also in the area of multiagent systems. MAS research has widely acknowledged competences as important for performing tasks of different nature [Anagnostopoulos12onlineteam](#bib.bib3) ; [Chen2015](#bib.bib12) ; [Okimoto](#bib.bib26) ; [Rangapuram2015](#bib.bib32) . However, the majority of the approaches represent capabilities of agents in a Boolean way (i.e., an agent either has a required skill or not). This is a simplistic way to model an agent’s set of capabilities as it ignores any skill degree. In real life, capabilities are not binary since every individual (e.g. human or software) shows different performances for each competence. Additionally, the MAS literature has typically disregarded significant organizational psychology findings (with the exception of several recent, preliminary attempts like [FarhangianPPS15](#bib.bib19) or [alberola2016artificial](#bib.bib2) ). Numerous studies in organizational psychology [Arnold](#bib.bib7) ; [Mount](#bib.bib25) ; [White](#bib.bib36) underline the importance of personality traits or *types* for team composition. Other studies have focused on how team members should differ or converge in their characteristics, such as experience, personality, level of skill, or gender, among others [West](#bib.bib35) , in order to increase performance.
In this paper, we focus on scenarios where a complex task requires the collaboration of individuals within a team. More precisely, we consider a scenario, where there are *multiple instances of the same complex task*. The task has a task type and a set of competence requests with competence levels needed to solve the task. We have a pool of human agents characterized by gender, personality, and a set of competences with competence levels.
Our goal is to partition agents into teams so that within a task all competence requirements are covered (whenever possible) and team members work well together. That is, each resulting team is both *proficient* (covers the required competences) and *congenial* (balances gender and psychological traits). We refer to these teams as *synergistic teams*. We define the *synergistic value* of a team as its balance in terms of competence, personality and gender. Each synergistic team works on the very same task. This scenario is present in many real-life settings, for instance a classroom or a crowdsourcing task. With this purpose, we design an algorithm that uses a greedy technique both to match competences with the required ones and at the same time to balance the psychological traits of teams’ members.
This paper makes the following contributions. To start with, we formalise the synergistic team formation problem as the problem of partitioning a group of individuals into teams with limited size. We provide an approximate local algorithm to solve the team composition problem. We empirically evaluate the algorithm using real data. Preliminary results show that our algorithm predicts better the performance of teams than the experts that know students’ social situation, background and competences.
Outline. The remaining of this paper is structured as follows. Section [2](#S2 "2 Background ‣ Synergistic Team Composition") opens with an overview of the related work. Section [3](#S3 "3 Personality ‣ Synergistic Team Composition") gives the personality background for our model. Section [4](#S4 "4 Team Composition Model ‣ Synergistic Team Composition") describes the synergistic team composition problem and Section [5](#S5 "5 Solving STFP ‣ Synergistic Team Composition") presents our algorithm to solve the synergistic team composition problem. Then, Section [6](#S6 "6 Experimental Results ‣ Synergistic Team Composition") presents results of our algorithm in the context of team composition in the classroom. Finally, Section [7](#S7 "7 Discussion ‣ Synergistic Team Composition") discusses our approach and future work.
2 Background
-------------
To the best of our knowledge, [farhangian2015agent](#bib.bib18) is the only model that considers both personality and competences while composing teams. There, the influence of personality on different task allocation strategies (minimizing either undercompetence or overcompetence) is studied. Henceforth, this work is the most relevant for us, however there are substantial differences between our work and [farhangian2015agent](#bib.bib18) . Firstly, authors do not propose an algorithm to compose teams based on *both* personality and competences. Secondly, gender balance is not considered in their setting. Finally, [farhangian2015agent](#bib.bib18) does not provide an evaluation involving real data (only an agent-based simulation is presented).
The rest of the literature relevant to this article is divided into two categories as proposed in [andrejczuk](#bib.bib4) : those that consider agent capacities (individual and social capabilities of agents) and those that deal with agent personality (individual behaviour models).
Capacity.
The capacity dimension has been exploited by numerous previous works [Anagnostopoulos12onlineteam](#bib.bib3) ; [Chalkiadakis2012](#bib.bib11) ; [Chen2015](#bib.bib12) ; [Crawford](#bib.bib14) ; [Liemhetcharat2014](#bib.bib24) ; [Okimoto](#bib.bib26) ; [JAR2015](#bib.bib29) ; [Rangapuram2015](#bib.bib32) . In contrast to our work, where the competences are graded, in the majority of works agents are assumed to have multiple binary skills (i.e., the agent either has a skill or not). For instance, [Okimoto](#bib.bib26) ; [Crawford](#bib.bib14) use agents’ capabilities to compose one k-robust team for a single task. A team is k-robust if removing any k members from the team does not affect the completion of the task. [Anagnostopoulos12onlineteam](#bib.bib3) uses competences and communication cost in a context where tasks sequentially arrive and teams have to be composed to perform them. Each task requires a specific set of competences and the team composition algorithm is such that the workload per agent is fair across teams.
Personality.
In the team formation literature, the only two models to our knowledge considering personality to compose teams are [FarhangianPPS15](#bib.bib19) and [alberola2016artificial](#bib.bib2) . [alberola2016artificial](#bib.bib2) uses Belbin theory to obtain human predominant *roles* (we discuss this method in Section [3](#S3 "3 Personality ‣ Synergistic Team Composition")). Additionally, the gender is not taken into account while composing heterogeneous teams, which we believe may be important for team congeniality. Regarding [FarhangianPPS15](#bib.bib19) , Farhangian et al. use the classical MBTI personality test (this method is discussed in Section [3](#S3 "3 Personality ‣ Synergistic Team Composition")). They look for the best possible team built around a selected leader. In other words, the *best* team for a particular task is composed. Gender balance is not considered in this setting. Finally, although [FarhangianPPS15](#bib.bib19) ’s team composition considered real data, the resulting teams’ performance was not validated in any real setting (Bayesian theory was used to predict the probability of success in various team composition conditions).
3 Personality
--------------
In this section, we discuss the most prominent approaches to measure human personality and we explain the details of the method we have decided to examine.
Personality determines people’s behaviour, cognition and emotion. Different personality theorists present their own definitions of personality and different ways to measure it based on their theoretical positions.
The most popular approach is to determine personality through a set of questions. There have been several simplified schemes developed over the years to profile human personality. The most populars are:
1. the Five Factor Model (aka FFM or “Big Five”), which uses five broad dimensions to describe human personality [Costa](#bib.bib13) ;
2. Belbin theory [belbin](#bib.bib6) , which provides a theory on how different role types influence teamwork; and
3. the Myers-Briggs Type Indicator (MBTI) scheme designed to indicate psychological preferences in how people perceive the world and make decisions [Myers](#bib.bib10) .
According to [Poropat](#bib.bib30) , FFM personality instruments fail to detect significant sex differences in personality structures. It is also argued that the Big Five dimensions are too broad and heterogeneous, and lack the specificity to make accurate predictions in many real-life settings [Boyle](#bib.bib9) ; [johnson2004genetic](#bib.bib22) .
Regarding Belbin theory, the results of previous studies considering the correlation between team composition and team performance are ambiguous. Even though some research shows weak support or does not show support for this theory at all [batenburg2013belbin](#bib.bib8) ; [van2008belbin](#bib.bib34) ; [partington1999belbin](#bib.bib28) , it remains popular.
Finally, the MBTI measure consists of four dimensions on a binary scale (e.g. either the person is Extrovert or Introvert). Within this approach, every person falls into one of the sixteen possible combinations of the four letter codes, one letter representing one dimension. This approach is easy to interpret by non-psychologists, though reliance on dichotomous preference scores rather than continuous scores excessively restricts the level of statistical analysis [devito](#bib.bib15) .
Having considered the arguments above, we have decided to explore a novel method: the Post-Jungian Personality Theory, which is a modified version of the Myers-Briggs Type Indicator (MBTI) [Myers](#bib.bib10) , the “Step II” version of Quenk, Hammer and Majors [Wilde2013](#bib.bib39) . The questionnaire to determine personality is short, contains only 20 quick questions (compared to the 93 MBTI questions). This is very convenient for both experts wanting to design teams and individuals doing the test since completing the test takes just a few minutes (for details of the questionnaire, see ([Wilde2013,](#bib.bib39) , p.21)). Douglass J. Wilde claims that it covers the same psychological territory as MBTI [Wilde2009](#bib.bib37) . In contrast to the MBTI measure, which consists of four binary dimensions, the Post-Jungian Personality Theory uses the *numerical* data collected using the questionnaire [Wilde2011](#bib.bib38) . The results of this method seem promising, since within a decade this novel approach has tripled the fraction of Stanford teams awarded national prizes by the Lincoln Foundation [Wilde2009](#bib.bib37) .
The test is based on the pioneering psychiatrist Carl Gustav Jung’s cognitive-mode personality model [PT](#bib.bib23) . It has two sets of variable pairs called psychological functions:
* Sensing / Intuition (SN) — describes the way of approaching problems
* Thinking / Feeling (TF) — describes the way of making decisions
and two sets of psychological attitudes:
* Perception / Judgment (PJ) — describes the way of living
* Extroversion / Introversion (EI) — describes the way of interacting with the world
For instance, for the Feeling-Thinking (TF) dimension, a value between -1 and 0 means that a person is of the feeling type, and a value between 0 and 1 means she is of the thinking type. Psychological functions and psychological attitudes compose together a personality. Every dimension of a personality (EI, SN, TF, PJ) is tested by five multiple choice true/false questions.
4 Team Composition Model
-------------------------
In this section we introduce and formalise our team composition problem. First, section [4.1](#S4.SS1 "4.1 Basic definitions ‣ 4 Team Composition Model ‣ Synergistic Team Composition") introduces the basic notions of agent, personality, competence, and team, upon which we formalise our problem. Next, we formalise the notion of task assignment for a single team and a single task, and we characterise different types of assignments. Sections [4.3](#S4.SS3 "4.3 Evaluating team proficiency ‣ 4 Team Composition Model ‣ Synergistic Team Composition") and [4.4](#S4.SS4 "4.4 Evaluating team congeniality ‣ 4 Team Composition Model ‣ Synergistic Team Composition") show how to evaluate the proficiency and congeniality degrees of a team. Based on these measures, in section [4.6](#S4.SS6 "4.6 The synergistic team composition problem ‣ 4 Team Composition Model ‣ Synergistic Team Composition") we formalise the *synergistic team composition problem*.
###
4.1 Basic definitions
In our model, we consider that each agent is a human. We characterise each agent by the following properties:
* A unique *identifier* that distinguishes an agent from others (e.g. ID card number, passport number, employee ID, or student ID).
* *Gender.* Human agents are either a man or a woman.
* A *personality* represented by four personality traits. Each personality trait is a number between -1 and 1.
* A *set of competences*. A competence integrates knowledge, skills, personal values, and attitudes that enable an agent to act correctly in a job, task or situation [roe2002competences](#bib.bib33) . Each agent is assumed to possess a set of competences with associated competence levels. This set may vary over time as an agent evolves.
Next, we formalise the above-introduced concepts.
###### Definition 1
A *personality profile* is a vector ⟨sn,tf,ei,pj⟩∈[−1,1]4, where each sn,tf,ei,pj represents one personality trait.
We denote by C={c1,…,cm} the whole set of competences, where each element ci∈C stands for a competence.
###### Definition 2
A *human agent* is represented as a tuple ⟨id,g,\emphp,l⟩ such that:
* id is the agent’s identifier;
* g∈{man,woman} stands for their gender;
* *p* is a personality profile vector ⟨sn,tf,ei,pj⟩∈[−1,1]4;
* l:C→[0,1] is a function that assigns the probability that the agent will successfully show competence c. We will refer to l(c) as the *competence level* of the agent for competence c. We assume that when an agent does not have a competence (or we do not know about it), the level of this competence is zero.
Henceforth, we will note the set of agents as A={a1,…,\linebreakan}. Moreover, We will use super-indexes to refer to agents’ components. For instance, given an agent a∈A, ida will refer to the id component of agent a. We will employ matrix L∈[0,1]n×m to represent the competence levels for each agent and each competence.
######
Definition 3 (Team)
A *team* is any non-empty subset of A with at least two agents. We denote by KA =(2A∖{∅})∖{{ai}|ai∈A} the set of all possible teams in A.
We assume that agents in teams coordinate their activities for mutual benefit.
###
4.2 The task assignment problem
In this section we focus on how to assign a team to a task.
A task type determines the competence levels required for the task as well as the importance of each competence with respect to the others. For instance, some tasks may require a high level of creativity because they were never performed before (so there are no qualified agents in this matter). Others may require a highly skilled team with a high degree of coordination and teamwork (as it is the case for rescue teams). Therefore, we define a task type as:
###### Definition 4
A task type τ is defined as a tuple
⟨λ,μ,{(ci,li,wi)}i∈Iτ⟩ such that:
* λ∈[0,1] importance given to proficiency;
* μ∈[−1,1] importance given to congeniality;
* ci∈C is a competence required to perform the task;
* li∈[0,1] is the required competence level for competence ci;
* wi∈[0,1] is the importance of competence ci for the success of task of type τ; and
* ∑i∈Iτwi=1.
We will discuss the meaning of λ and μ further ahead when defining synergistic team composition (see subsection [4.6](#S4.SS6 "4.6 The synergistic team composition problem ‣ 4 Team Composition Model ‣ Synergistic Team Composition")).
Then, we define a task as:
###### Definition 5
A *task* t is a tuple ⟨τ,m⟩ such that τ is a task type and m is the required number of agents, where m≥2.
Henceforth, we denote by T the set of tasks and by T the set of task types. Moreover, we will note as Cτ={ci|i∈Iτ} the set of competences required by task type τ.
Given a team and a task type, we must consider how to assign competences to team members (agents). Our first, weak notion of task assignment only considers that all competences in a task type are assigned to some agent(s) in the team:
###### Definition 6
Given a task type τ and a team K∈KA, an assignment is a function η:K→2Cτ satisfying that Cτ⊆⋃a∈Kη(a).
###
4.3 Evaluating team proficiency
Given a task assignment for a team, next we will measure the *degree of competence* of the team as a whole. This measure will combine both the degree of under-competence and the degree of over-competence, which we formally define first. Before that, we must formally identify the agents that are assigned to each competence as follows.
###### Definition 7
Given a task type τ, a team K, and an assignment η, the set δ(ci)={a∈K|ci∈η(a)} stands for the agents assigned to cover competence ci.
Now we are ready to define the degrees of undercompentence and overcompetence.
######
Definition 8 (Degree of undercompentence)
Given a task type τ, a team K, and an assignment η, we define the degree of undercompetence of the team for the task as:
| | | |
| --- | --- | --- |
| | u(η)=∑i∈Iτwi⋅∑a∈δ(ci)|min(la(ci)−li,0)||{a∈δ(ci)|la(ci)−li<0}| | |
######
Definition 9 (Degree of overcompetence)
Given a task type τ, a team K, and an assignment η, we define the degree of overcompetence of the team for the task as:
| | | |
| --- | --- | --- |
| | o(η)=∑i∈Iτwi⋅∑a∈δ(ci)max(la(ci)−li,0)|{a∈δ(ci)|la(ci)−li>0}| | |
Given a task assignment for a team, we can calculate its competence degree to perform the task by combining its overcompetence and undercompetence as follows.
###### Definition 10
Given a task type τ, a team K and an assignment η, the competence degree of the team to perform the task is defined as:
| | | | |
| --- | --- | --- | --- |
| | uprof(η)=1−(υ⋅u(η)+(1−υ)⋅o(η)) | | (1) |
where υ∈[0,1] is the penalty given to the undercompetence of team K.
Notice that the larger the value of υ the higher the importance of the competence degree of team K, while the lower the value υ, the less important its undercompetence. The intuition here is that we might want to penalize more the undercompetency of teams, as some tasks strictly require teams to be at least as competent as defined in the task type.
###### Proposition 0
For any η, u(η)+o(η)∈[0,1].
###### Proof
Given that (1) la(ci)∈[0,1] and li∈[0,1];
(2) If min(la(ci)−li,0)<0 then max(la(ci)−li,0)=0; and
(3) If max(la(ci)−li,0)>0 then min(la(ci)−li,0)=0. Thus, from (1–3)
we have
|min(la(ci)−li,0)| + max(la(ci)−li,0)∈[0,1].
Let n=|{a∈δ(ci)|la(ci)−li>0}|, then obviously it holds that
n⋅(|min(la(ci)−li,0)|+max(la(ci)−li,0))n∈[0,1] and as |δ(ci)|≤n then
∑a∈δ(ci)(|min(la(ci)−li,0)|+max(la(ci)−li,0))n∈[0,1] holds; and
since ∑i∈Iτwi=1 then
∑i∈Iτwi⋅∑a∈δ(ci)(|min(la(ci)−li,0)|+max(la(ci)−li,0))n∈[0,1];
Finally, distributing, this equation is equivalent to:
∑i∈Iτwi∑a∈δ(ci)(|min(la(ci)−li,0)|n+∑i∈Iτwi∑a∈δ(ci)(max(la(ci)−li,0))n∈[0,1] which in turn is equivalent to u(η)+o(η)∈[0,1].
Function uprof is used to measure how proficient a team is for a given task assignment. However, counting on the required competences to perform a task does not guarantee that the team will succeed at performing it. Therefore, in the next subsection we present an evaluation function to measure *congeniality* within teams. Unlike our measure for proficiency, which is based on considering a particular task assignment, our congeniality measure will solely rely on the personalities and genders of the members of a team.
###
4.4 Evaluating team congeniality
Inspired by the experiments of Douglass J. Wilde [Wilde2009](#bib.bib37) we will define the team utility function for congeniality ucon(K), such that:
* it values more teams whose SN and TF personality dimensions are as diverse as possible;
* it prefers teams with at least one agent with positive EI and TF dimensions and negative PJ dimension, namely an extrovert, thinking and judging agent (called ETJ personality),
* it values more teams with at least one introvert agent;
* it values gender balance in a team.
Therefore, the higher the value of function ucon(K), the more diverse the team is.
Formally, this team utility function is defined as follows:
| | | | | |
| --- | --- | --- | --- | --- |
| | ucon(K)= | σSN(K)⋅σTF(K)+maxai∈K((0,α,α,α)⋅pi,0) | | (2) |
| | | +maxai∈K((0,0,−β,0)⋅pi,0)+γ⋅sin(π⋅g(K)) | |
where the different parameters are explained next.
* σSN(K) and σTF(K): These variances are computed over the SN and TF personality dimensions of the members of team K. Since we want to maximise ucon, we want these variances to be as large as possible. The larger the values of σSN and σTF the larger their product will be, and hence the larger team diversity too.
* α: The maximum variance of any distribution over an interval [a,b] corresponds to a distribution with the elements evenly situated at the extremes of the interval. The variance will always be σ2≤((b−a)/2)2. In our case with b=1 and a=−1 we have σ≤1. Then, to make the four factors equally important and given that the maximum value for pi (the personality profile vector of agent ai) would be (1,1,1,1) a maximum value for α would be 3α=((1−(−1))/2)2=1, as we have the factor σSN⋅σTF, so α≤0.33(3). For values situated in the middle of the interval the variance will be σ2≤(b−a)212, hence a reasonable value for α would be α=√(1−(−1))2)/123=0.19
* β: A similar reasoning shows that β≤1.
* γ is a parameter to weigh the importance of a gender balance and g(K)=w(K)w(K)+m(K). Notice that for a perfectly gender balanced team with w(K)=m(K) we have that
sin(π⋅g(K))=1. The higher the value of γ, the more important is that team ucon is gender balanced. Similarly to reasoning about α and β, we assess γ≤1. In order to make this factor less important than the others in the equation we experimentally assessed that γ=0.1 is a good compromise.
In summary, we will use a utility function ucon such that: α=σSN(K)⋅σTF(SK)3, β=3⋅α and γ=0.1.
###
4.5 Evaluating synergistic teams
Depending on the task type, different importance for congeniality and proficiency should be given. For instance, creative tasks require a high level of communication and exchange of ideas, and hence, teams require a certain level of congeniality. While, repetitive tasks require good proficiency and less communication. The importance of proficiency (λ) and congeniality (μ) is therefore a fundamental aspect of the task type. Now, given a team, we can combine its competence value (in equation [1](#S4.E1 "(1) ‣ Definition 10 ‣ 4.3 Evaluating team proficiency ‣ 4 Team Composition Model ‣ Synergistic Team Composition")) with its congeniality value (in equation [2](#S4.E2 "(2) ‣ 4.4 Evaluating team congeniality ‣ 4 Team Composition Model ‣ Synergistic Team Composition")) to measure its *synergistic value*.
###### Definition 11
Given a team K, a task type τ=\linebreak⟨λ,μ,{(ci,li,wi)}i∈Iτ⟩ and a task assignment η:K→2Cτ, the synergistic value of team K is defined as:
| | | | |
| --- | --- | --- | --- |
| | s(K,η)=λ⋅uprof(η)+μ⋅ucon(K) | | (3) |
where λ∈[0,1] is the grade to which the proficiency of team K is important, and μ∈[−1,1] is the grade to which the task requires diverse personalities.
0
0.2
0.4
0.6
0.8
1
−1
−0.5
0
0.5
1
| |
| --- |
| Creative |
| General tasks |
| |
| --- |
| Structured |
| General tasks |
| |
| --- |
| Creative |
| Specialized tasks |
| |
| --- |
| Structured |
| Specialized tasks |
Proficiency (λ)
Congeniality (μ)
Figure 1: Values of congeniality and proficiency with respect to the task type.
Figure [1](#S4.F1 "Figure 1 ‣ 4.5 Evaluating synergistic teams ‣ 4 Team Composition Model ‣ Synergistic Team Composition") shows the relation between the parameters λ and μ. In general, the higher the λ, the higher importance is given to the proficiency of a team. The higher the μ the more important is personality diversity. Notice, that the μ can be lower than zero. Having μ negative, we impose that the congeniality value will be as low as possible (to maximize s(K,η)) and so, team homogeneity is preferred. This situation may happen while performing tasks in unconventional performance environments that have serious consequences associated with failure. In order to quickly resolve issues, a team needs to be proficient and have team-mates who understand one another with minimum communication cost (which is associated to homogeneity of a team).
###
4.6 The synergistic team composition problem
In what follows we consider that there are multiple instances of the same task to perform. Given a set of agents A, our goal is to split them into teams so that each team, and the whole partition of agents into teams, is balanced in terms of competences, personality and gender.
We shall refer to these balanced teams as *synergistic teams*, meaning that they are both congenial and proficient.
Therefore, we can regard our team composition problem as a particular type of set partition problem. We will refer to any partition of A as a team partition. However, we are interested in a particular type of team partitions, namely those where teams are constrained by size m as follows.
###### Definition 12
Given a set of agents A, we say that a team partition Pm of A is constrained by size m iff: (i) for every team Ki∈Pm, Ki∈KA, max(m−1,2)≤|K|≤m+1 holds; and (ii) for every pair of teams Ki,Kj∈Pm ||Ki|−|Kj||≤1.
As |K|/m is not necessarily a natural number, we may need to allow for some flexibility in team size within a partition. This is why we introduced above the condition max(m−1,2)≤|K|≤m+1. In practical terms, in a partition we may have teams differing by one agent. We note by Pm(A) the set of all team partitions of A constrained by size m. Henceforth, we will focus on team partitions constrained by some size. Since our goal is to find the most competence-balanced and psychologically-balanced team partition, we need a way to measure the synergistic value of a team partition, which we define as follows:
###### Definition 13
Given a task t=⟨τ,m⟩, a team partition Pm and an assignment ηi for each team Ki∈Pm, the synergistic value of Pm is computed by:
| | | | |
| --- | --- | --- | --- |
| | u(Pm,η)=|Pm|∏i=1s(Ki,ηi) | | (4) |
where η stands for the vector of task assignments η1,…,\linebreakη|Pm|.
Notice that the use of a Bernoulli-Nash function over the synergistic values of teams will favour team partitions whose synergistic values are balanced.
Now we are ready to cast the synergistic team composition problem as the following optimisation problem:
###### Definition 14
Given task t=⟨τ,m⟩ and set of agents A the synergistic team formation problem (STFP) is the problem of finding a team partition constrained by size m, together with competence assignment for its teams, whose synergistic value is maximal. Formally, the STFP is the problem of finding the partition in P∈Pm(A) and the task assignments η for the teams in Pm that maximises u(Pm,η).
5 Solving STFP
---------------
In this section we detail an algorithm, the so-called *SynTeam*, which solves the synergistic team formation problem described above. We will start from describing how to split agents into a partition (see subsection [5.1](#S5.SS1 "5.1 How do we split agents? ‣ 5 Solving STFP ‣ Synergistic Team Composition")). Next, we will move on to the problem of assigning competences in a task to team members (see subsection [5.2](#S5.SS2 "5.2 Solving an Assignment ‣ 5 Solving STFP ‣ Synergistic Team Composition")), so that the utility of synergistic function is maximal. Finally, we will explain *SynTeam* that is a greedy algorithm that quickly finds a first, local solution, to subsequently improve it, hoping to reach a global optimum.
###
5.1 How do we split agents?
We note by n=|A| the number of agents in A, by m∈N the target number of agents in each team, and by b the minimum total number of teams, b=⌊n/m⌋. We define the quantity distribution of agents in teams of a partition, noted T:N×N→N×N∪(N×N)2 as:
| | | | |
| --- | --- | --- | --- |
| | | | (5) |
Note that depending on the cardinality of A and the desired team size, the number of agents in each team may vary by one individual (for instance if there are n=7 agents in A and we want to compose duets (m=2), we split agents into two duets and one triplet).
###
5.2 Solving an Assignment
There are different methods to build an assignment. We have decided to solve our assignment problem by using the minimum cost flow model [ahuja1993network](#bib.bib1) . This is one of the most fundamental problems within network flow theory and it can be efficiently solved. For instance, in [orlin1993faster](#bib.bib27) , it was proven that the minimum cost flow problem can be solved in O(m⋅log(n)⋅(m+n⋅log(n))) time with n nodes and m arcs.
Our problem is as follows:
There are a number of agents in team K and a number of competence requests in task t. Any agent can be assigned to any competence, incurring some cost that varies depending on the agent competence level of the assigned competence. We want to get each competence assigned to at least one agent and each agent assigned to at least one competence in such a way that the total cost (that is both undercompetence and overcompetence) of the assignment is minimal with respect to all such assignments.
Formally, let G=(N,E) be a directed network defined by a set N of n nodes and a set E of e directed arcs. There are four types of nodes: (1) one source node; (2) |K| nodes that represent agents in team K; (3) |Cτ| competence requests that form task type τ; and (4) one sink node. Each arc (i,j)∈E has an associated cost pij∈R+ that denotes the cost per unit flow on that arc. We also associate with each arc (i,j)∈E a capacity uij∈R+ that denotes the maximum amount that can flow on the arc. In particular, we have three kinds of edges: (1) Supply arcs. These edges connect the source to agent nodes. Each of these arcs has zero cost and a positive capacity uij which define how many competences at most can be assigned to each agent. (2) Transportation arcs. These are used to ship supplies. Every transportation edge (i,j)∈E is associated with a shipment cost pij that is equal to:
| | | |
| --- | --- | --- |
| | pij={(lai(cj)−lj)⋅(1−υ)⋅wjif lai(cj−lj)>0−(lai(cj)−lj)⋅υ⋅wjif lai(cj−lj)<0 | |
where v∈[0,1] is the penalty given to the undercompetence of team K(see subsection [4.3](#S4.SS3 "4.3 Evaluating team proficiency ‣ 4 Team Composition Model ‣ Synergistic Team Composition") for the definition).
(3) Demand arcs. These arcs connect the competence requests nodes to the sink node. These arcs have zero costs and positive capacities uij which equal the demand for each competence.
Thus, a network is denoted by (G,w,u,b). We associate with each node i∈N an integer number b(i) representing its supply. If b(n)>0 then n is a source node, if b(n)<0 then n is a sink node. In order to solve a task assignment problem, we use the implementation of [goldberg1990finding](#bib.bib21) provided in the ort-tools.111<https://github.com/google/or-tools/blob/master/src/graph/min_cost_flow.h>

Figure 2: An example of an assignment graph G(N,E)
#### Example
Let us consider a team of three agents K={a1,a2,a3}:
* a1=⟨id1,‘woman′,p1,[l(c1)=0.9,l(c2)=0.5]⟩
* a2=⟨id2,‘man′,p2,[l(c2)=0.2,l(c3)=0.8]⟩
* a3=⟨id3,‘man′,p3,[l(c2)=0.4,l(c4)=0.6]⟩
and task type τ containing four competence requests
{(c1,0.8,0.25),(c2,0.6,0.25),(c3,0.6,0.25),(c4,0.6,0.25)}.
The penalty given to undercompetence is equal to υ=0.6.
Our goal is to assign agents to competence requests, so that: (1) every agent is responsible for at least one competence, (2) every competence is covered by at least one agent, (3) the overall “cost” in minimal. As shown in figure [4](#S6.F4 "Figure 4 ‣ 6.3 Results ‣ 6 Experimental Results ‣ Synergistic Team Composition"), we build a graph out of n=9 nodes that is: one source node (N0), three agents nodes (N1−N3), four competences nodes (N4−N7) and a sink node (N8). Next, we add edges: (1) between source node N0 and all agent nodes N1−N3 that have a cost psi=0 and capacity usi=2 for all i as the maximum number of competences assigned to one agent cannot be bigger than two if we want to make sure that all agents are assigned to at least one competence; (2) between agent nodes N1−N3 and competence nodes (N4−N7), where each capacity uij=1 and we calculate costs according to the equation [5.2](#S5.Ex3 "5.2 Solving an Assignment ‣ 5 Solving STFP ‣ Synergistic Team Composition"). For instance, the cost between N1 and N4 is equal to: (0.9−0.8)⋅(1−0.6)⋅0.25=0.01. We multiply all costs by 1000 to meet the requirements of the solver (edges need to be integer). Hence, the final cost p14=10; (3) edges between competence nodes N4−N7 and sink node N8 that have costs pjw=0 and capacities ujw=1 to impose that each is assigned.
Once the graph is built, we pass it to the solver to get the assignment, and we get c1 and c2 assigned to a1, c3 assigned to a2 and c4 assigned to a3.
###
5.3 SynTeam algorithm
Algorithm [1](#alg1 "Algorithm 1 ‣ 5.3 SynTeam algorithm ‣ 5 Solving STFP ‣ Synergistic Team Composition") shows the SynTeam pseudocode.
Algorithm [1](#alg1 "Algorithm 1 ‣ 5.3 SynTeam algorithm ‣ 5 Solving STFP ‣ Synergistic Team Composition") is divided into two parts:
1. Find a first team partition. This part of the algorithm simply builds a partition by randomly assigning agents to teams of particular team sizes. This part goes as follows. Given a list of agents A, we start by shuffling the list so that the order of agents in the list is random (line 1). Next, we determine the quantitative distribution of individuals among teams of size m using function T(|A|,m) as defined in section [5.1](#S5.SS1 "5.1 How do we split agents? ‣ 5 Solving STFP ‣ Synergistic Team Composition") (line 2). We start from the top of the shuffled list of agents (line 3). For each number of teams (line 4), we define a temporary set team to store a current team (line 5). We add to team subsequent size agents from the shuffled list of agents (line 7). We add the newly created team to the team partition Pbest that we intend to build (line 10). When reaching line 14, Pbest will contain a first disjoint subset of teams (a team partition).
2. Improve the current best team partition. The second part of the algorithm consists in improving the current best team partition. The idea is to obtain a better team partition by performing crossovers of two randomly selected teams to yield two better teams. In this part, we took inspiration from simulated annealing methods, where the algorithm might accept swaps that actually decrease the solution quality with a certain probability. The probability of accepting worse solutions slowly decreases as the algorithm explores the solution space (as the number of iterations increases). The annealing schedule is defined by the cooling\_rate parameter. We have modified this method to store the partition with the highest synergistic evaluation found so far.
In detail, the second part works as follows. First, we select two random teams, K1 and K2, in the current team partition (line 15). Then we compute all team partitions of size m with agents in K1∪K2 (line 19), and we select the best candidate team partition, named PbestCandidate (lines 19 to 26). If the best candidate synergistic utility is larger than the utility contribution of K1 and K2 to the current best partition Pbest (line 27), then we replace teams K1 and K2 by the teams in the best candidate team partition (line 28). If the best candidate team partition utility is lower, then we check if the probability of accepting a worse solution is higher than a uniformly sampled value from [0,1] (line 29). If so, we replace teams K1 and K2 by the teams in the best candidate team partition (line 30) and we lower heat by a cooling rate. This part of the algorithm continues until the value of heat reaches 1 (line 13). We also store the best partition found so far (line 34) to make sure we do not end up with worse solution. Finally, we return found best partition PbestEver as well as the assignment η for each team.
1:A ▹ The list of agents
2:T(|A|,m) ▹ Quantitative team distribution
3:Pbest=∅ ▹ Initialize best partition
4:heat=10 ▹ Initial temperature for second step
5:Cooling\_rate ▹ Heating decrease
6:(P,η) ▹ Best partition found and best assignments
7:random.shuffle(A)
8:if T(|A|,m)≠(0,m) then
9: index=0 ▹ Used to iterate over the agent list
10: for all (numberOfTeams,size)∈T(|A|,m) do
11: team=∅
12: for i∈(0,…,(size−1)) do
13: team=team∪A[index]
14: index=index+1
15: end for
16: Pbest=Pbest∪{team}
17: end for
18: ηbest=assign\_agents(Pbest) ▹ see Subsection [5.2](#S5.SS2 "5.2 Solving an Assignment ‣ 5 Solving STFP ‣ Synergistic Team Composition")
19:
20: while heat>1 do
21: (K1,K2)=selectRandomTeams(Pbest)
22: (η1,η2)=assign\_agents({K1,K2})
23: contrValue=u({K1,K2},(η1,η2))
24: (PbestCandidate,bestCandidatevalue)=(∅,0)
25: for all Pcandidate∈Pm(K1∪K2)∖{K1,K2} do
26: (η1,η2)=assign\_agents(Pcandidate)
27: candidateValue=u(Pcandidate,(η1,η2))
28: if candidateValue>bestCandidateValue then
29: PbestCandidate=Pcandidate
30: bestCandidateValue=candidateValue
31: end if
32: end for
33: if bestCandidateValue>contrValue then
34: Pbest=replace({K1,K2},PbestCandidate,Pbest)
35: else if P(bestCandidateValue,contrValue,heat)
36: ≥random(0,1) then
37: Pbest=replace({K1,K2},PbestCandidate,Pbest)
38: end if
39: ηbest=assign\_agents(Pbest)
40: if bestValueEver<u(Pbest,ηbest) then
41: PbestEver=Pbest
42: end if
43: heat = heat−Cooling\_rate
44: end while
45: return(PbestEver,assign\_agents(PbestEver))
46:end if
Algorithm 1 SynTeam
6 Experimental Results
-----------------------
###
6.1 Experimental Setting
“Institut Torras i Bages” is a state school near Barcelona. Collaborative work has been implemented there for the last 5 years in their final assignment (“Treball de Síntesi”) with a steady and significant increase in the scores and quality of the final product that students are asked to deliver. This assignment takes one week and is designed to check if students have achieved, and to what extent, the objectives set in the various curricular areas. It is a work that encourages teamwork, research, and tests relationships with the environment. Students work in teams and at the end of every activity present their work in front of a panel of teachers that assess the content, presentation and cooperation between team members. This is a creative task, although requiring high level of competences.
###
6.2 Data Collection
In current school practice, teachers group students according to their own, manual method based on the knowledge about students, their competences, background and social situation. This year we have used our grouping system based only on personality (SynTeam with λ=0,μ=1) upon two groups of students: ‘3r ESO A’ (24 students), and ‘3r ESO C’ (24 students). Using computers and/or mobile phones, students answered the questionnaire (described in section [3](#S3 "3 Personality ‣ Synergistic Team Composition")) which allowed us to divide them into teams of size three for each class. Tutors have evaluated each team in each partition giving an integer value v∈[1,10] meaning their expectation of the performance of each team. Each student team was asked to undertake the set of interdisciplinary activities (“Treball de Síntesi”) described above. We have collected each student’s final mark for “Treball de Síntesi” as well as final marks obtained for all subjects. That is: Catalan, Spanish, English, Nature, Physics and Chemistry, Social Science, Math, Physical Education, Plastic Arts, Technology. We have used a matrix provided by the tutors to relate each subject to different kinds of intelligence (that in education are understood as competences) needed for this subject. There are eight types of human intelligence [gardner1987theory](#bib.bib20) , each representing different ways of processing information: Naturalist, Interpersonal, Logical/Mathematical, Visual/Spatial, Body/Kinaesthetic, Musical, Intrapersonal and Verbal/Linguistic. This matrix for each subject and each intelligence is shown in figure [3](#S6.F3 "Figure 3 ‣ 6.2 Data Collection ‣ 6 Experimental Results ‣ Synergistic Team Composition").
⎡⎢
⎢
⎢
⎢
⎢
⎢
⎢
⎢
⎢
⎢
⎢
⎢
⎢
⎢
⎢
⎢
⎢
⎢
⎢⎣01000011010101110100011111011011111100111100001101110011010110110101101011101011⎤⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥
⎥⎦
Figure 3: Matrix matching Intelligence with subjects (each row corresponds to a subject, each column to an intelligence)
Subjects are represented by rows and intelligences by columns of the matrix in the order as provided above. Based on this matrix we calculate values of intelligences for every student by averaging all values obtained by her that are relevant for this intelligence. For instance, for Body/Kinaesthetic intelligence, we calculate an average of student marks obtained in Nature, Physical Education, Plastic Arts and Technology. An alternative way to measure students’ competences level can be by calculating the collective assessments of each competence (like proposed by [andrejczukCompetences](#bib.bib5) ).
Finally, having competences (Intelligences), personality and actual performance of all students, we are able to calculate synergistic values for each team. We also calculate the average of marks obtained by every student in a team to get teams’ performance values.
###
6.3 Results
Given several team composition methods, we are interested in comparing them to know which method better predicts team performance. Hence, we generate several team rankings using the evaluation values obtained through different methods. First, we generate a ranking based on actual team performance that will be our base to compare other rankings. Second, we generate a ranking based on the expert evaluations. Finally, we generate several rankings based on calculated synergistic values with varying importance of congeniality and proficiency. Since “Traball de Síntesi” is a creative task, we want to examine the evaluation function with parameters μ>0 and λ=1−μ. In particular, we want to observe how the rankings change when increasing the importance of competences.
Notice that teacher and actual performance rankings may include ties since the pool of possible marks is discrete (which is highly improbable in case of SynTeam rankings). Therefore, before generating rankings based on synergistic values, we round them up to two digits to discretize the evaluation space. An ordering with ties is also known as a *partial ranking*.
Next, we compare teacher and SynTeam rankings with the actual performance ranking using the standardized Kendall Tau distance. For implementation details, refer to the work by Fagin et al. [Fagin:2004:CAR](#bib.bib16) ; [fagin2006comparing](#bib.bib17) , which also provide sound mathematical principles to compare partial rankings. The results of the comparison are shown in Figure [4](#S6.F4 "Figure 4 ‣ 6.3 Results ‣ 6 Experimental Results ‣ Synergistic Team Composition"). Notice that the lower the value of Kendall Tau, the more similar the rankings. We observe that the SynTeam ranking improves as the importance of competences increases, and it is best at predicting students’ performance for λ=0.8 and μ=0.2 (Kendall Tau equal to 0.15). A standardised Kendall Tau distance for teacher ranking is equal to 0.28, which shows that SynTeam predicts the performance better than teachers, when competences are included (λ>0.2). We also calculate the values of Kendall Tau for random (0.42) and reversed (0.9) rankings to benchmark teacher and SynTeam grouping methods. The results show that both teachers and SynTeam are better at predicting students’ performance than the random method.

Figure 4: Comparison of Kendall-Tau distances between different methods.
7 Discussion
-------------
In this paper we introduced SynTeam, an algorithm for partitioning groups of humans into competent, gender and psychologically balanced teams.
To our knowledge, SynTeam is the first computational model to build synergistic teams that not only work well together, but are also competent enough to perform an assignment requiring particular expertise.
We have decided to evaluate our algorithm in the context of a classroom. Besides obvious advantages of observing students work in person, this scenario gave us an opportunity to compare our results with real-life, currently used practice. The results show that SynTeam is able to predict team performance better that the experts that know the students, their social background, competences, and cognitive capabilities.
The algorithm is potentially useful for any organisation that faces the need to optimise their problem solving teams (e.g. a classroom, a company, a research unit). The algorithm composes teams in a purely automatic way without consulting experts, which is a huge advantage for environments where there is a lack of experts.
Regarding future work, We would like to investigate how to determine quality guarantees of the algorithm.
Additionally, there is a need to consider richer and more sophisticated models to capture the various factors that influence the team composition process in the real world. We will consider how our problem relates to the constrained coalition formation framework [Rahwan](#bib.bib31) . This may help add constraints and preferences coming from experts that cannot be established by any algorithm, e.g. Anna cannot be in the same team with José as they used to have a romantic relationship. |
db26aad1-53d3-4015-9a08-bbc382f338a4 | trentmkelly/LessWrong-43k | LessWrong | What career advice do you give to software engineers?
I am a defendant of the idea that we have already achieved rudimentary AGIs with modern LLMs (as much of a hot take this is), and even though the path to superintelligence is going to be difficult and will probably require a few more technical breakthroughs to make more effective use of available data, I don't think this will take us longer than a decade, or 15 years at most.
When I discuss this idea with some of my CS friends and co-workers, about how AI will inevitably replace most software engineering jobs (picture supercharged Github Copilot that can make entire websites and back-end services on command) most of them ask me the obvious follow-up question: So what can I do about it? Yeah, I can help with AI safety/alignment progress but my job is going to disappear no matter what I do, and probably sooner than many other more 'physically' demanding ones.
I am always left stumped by this − I simply don't know what to tell them, specially to undergraduates that are still full of hope and totally didn't sign up for this dumpster fire. Should I tell them to just continue doing their thing and see what happens? Let fate take its course and hope for the best? This sounds all too happy-go-lucky for my taste.
I'd like to hear what you guys think about this matter, what do you answer when asked such questions? |
c515d99d-0127-4832-aa76-3523ee5f37f1 | trentmkelly/LessWrong-43k | LessWrong | "AI and Compute" trend isn't predictive of what is happening
(open in a new tab to view at higher resolution)
In May 2018 (almost 3 years ago) OpenAI published their "AI and Compute" blogpost where they highlighted the trend of increasing compute spending on training the largest AI models and speculated that the trend might continue into the future. This note is aimed to show that the trend has ended right around the moment of OpenAI publishing their post and doesn't hold up anymore.
On the above image, I superimposed the scatter plot from OpenAI blogpost and my estimates of compute required for some recent large and ambitious ML experiments. To the best of my knowledge (and I have tried to check for this), there haven't been any experiments that required more compute than those shown on the plot.
The main thing shown here is that less than one doubling of computational resources for the largest training occured in the 3-year period between 2018 and 2021, compared to around 10 doublings in the 3-year period between 2015 and 2018. This seems to correspond to a severe slowdown of computational scaling.
To stay on the trend line, we currently would need an experiment requiring roughly around 100 times more compute than GPT-3. Considering that GPT-3 may have costed between $5M and $12M and accelerators haven't vastly improved since then, such an experiment would now likely cost $0.2B - $1.5B. |
34ebf1e8-ca3f-4893-8730-18f5a3863de0 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Sections 3 & 4: Credibility, Peaceful Bargaining Mechanisms
.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
*This post is part of the sequence version of the Effective Altruism Foundation's [research agenda on Cooperation, Conflict, and Transformative Artificial Intelligence](https://www.lesswrong.com/s/p947tK8CoBbdpPtyK).*
### 3 Credibility
*Credibility* is a central issue in strategic interaction. By credibility, we refer to the issue of whether one agent has reason to believe that another will do what they say they will do. Credibility (or lack thereof) plays a crucial role in the efficacy of contracts (Fehr et al., 1997; Bohnet et al., 2001), negotiated settlements for avoiding destructiveconflict (Powell, 2006), and commitments to carry out (or refuse to give in to) threats (e.g., Kilgour and Zagare 1991; Konrad and Skaperdas 1997).
In game theory, the fact that Nash equilibria ([Section 1.1](https://www.lesswrong.com/posts/KMocAf9jnAKc2jXri/sections-1-and-2-introduction-strategy-and-governance)) sometimes involve *non-credible threats* motivates a refined solution concept called *subgame perfect equilibrium (SPE)*. An SPE is a Nash equilibrium of an extensive-form game in which a Nash equilibrium is also played at each subgame. In the threat game depicted in Figure 1, “carry out” is not played in a SPE, because the threatener has no reason to carry out the threat once the threatened party has refused to give in; that is, “carry out’’ isn’t a Nash equilibrium of the subgame played after the threatened party refuses to give in.

So in an SPE-based analysis of one-shot threat situations between rational agents,
threats are never carried out because they are not credible (i.e., they violate subgame perfection).
However, agents may establish credibility in the case of repeated interactions
by repeatedly making good on their claims (Sobel, 1985). Secondly, despite the fact that carrying out a threat in the one-shot threat game violates subgame perfection, it is a well-known result from behavioral game theory that humans typically refuse unfair splits in the Ultimatum Game [[1]](#fn-TEvtHdv9gpQewrYhX-1) (Güth et al., 1982; Henrich et al., 2006), which is equivalent to carrying out the threat in the one-shot threat game. So executing commitments which are irrational (by the SPE criterion) may still be a feature of human-in-the-loop
systems ([Section 6](https://www.lesswrong.com/posts/4GuKi9wKYnthr8QP9/sections-5-and-6-contemporary-architectures-humans-in-the)), or perhaps systems which have some humanlike game-theoretic heuristics in virtue of being trained in multi-agent environments ([Section 5.2](https://www.lesswrong.com/posts/4GuKi9wKYnthr8QP9/sections-5-and-6-contemporary-architectures-humans-in-the)). Lastly, threats may become credible if the threatener has *credibly committed* to carrying out the threat (in the case of the game in Fig. 1, this means convincing the opponent that they have removed the option (or made it costly) to “Not carry out’’). There is a considerable game-theoretic literature on credible commitment, both on how credibility can be achieved (Schelling, 1960) and on the analysis of games under the assumption that credible commitment is possible (Von Stackelberg, 2010; Nash, 1953; Muthoo, 1996; Bagwell, 1995).
#### 3.1 Commitment capabilities
It is possible that TAI systems may be relatively transparent to one another;
capable of self-modifying or constructing sophisticated commitment devices;
and making various other “computer-mediated contracts’’ (Varian, 2010); see also the lengthy discussions in Garfinkel (2018) and Kroll et al. (2016), discussed in [Section 2 Footnote 3](https://www.lesswrong.com/posts/KMocAf9jnAKc2jXri/sections-1-and-2-introduction-strategy-and-governance), of potential implications of cryptographic technology for credibility.
We want to understand how plausible changes in the ability to make credible commitments affect risks from cooperation failures.
* In what ways does artificial intelligence make credibility more difficult, rather than less so? For instance, AIs lack evolutionarily established mechanisms (like credible signs of anger; Hirshleifer 1987) for signaling their intentions to other agents.
* The credibility of an agent’s stated commitments likely depends on how
interpretable [[2]](#fn-TEvtHdv9gpQewrYhX-2) that agent is to others. What are the possible ways in which interpretability may develop, and how does this affect the propensity to make commitments? For instance, in trajectories where AI agents are increasingly opaque to their overseers, will these agents be motivated to make commitments while they are still interpretable enough to overseers that these commitments are credible?
* In the case of training regimes involving the imitation of human exemplars (see [Section 6](https://www.lesswrong.com/posts/4GuKi9wKYnthr8QP9/sections-5-and-6-contemporary-architectures-humans-in-the)), can humans also make credible commitments on behalf of the AI system which is imitating them?
#### 3.2 Open-source game theory
Tennenholtz (2004) introduced *program games*, in which players submit programs that have access to the source codes of their counterparts. Program games provide a model of interaction under mutual transparency. Tennenholtz showed that in the Prisoner’s Dilemma, both players submitting Algorithm 1 is a *program equilibrium* (that is, a Nash equilibrium of the corresponding program game). Thus agents may have incentive to participate in program games, as these promote more cooperative outcomes than the corresponding non-program games.
Algorithm 1:Tennenholtz (2004)'s construction of a program equilibri-um of the one-shot Prisoner's Dilemma. The program cooperates if its counterpart'sprogram's source code is identical to its own (and thus both players cooperate), anddefects otherwise.Input: Program source codess1,s2if s1=s2 then|return Cooperateelse|return Defectend
For these reasons, program games may be helpful to our understanding of interactions among advanced AIs.
Other models of strategic interaction between agents who are transparent
to one another have been studied (more on this in [Section 5.1](https://www.lesswrong.com/posts/4GuKi9wKYnthr8QP9/sections-5-and-6-contemporary-architectures-humans-in-the)); following Critch (2019), we will call this broader area *open-source game theory*. Game theory with source-codetransparency has been studied by Fortnow 2009; Halpern and Pass 2018; LaVictoireet al. 2014; Critch 2019; Oesterheld 2019, and models of multi-agent learning under transparency are given by Brafman and Tennenholtz (2003); Foerster et al. (2018). But open-source game theory is in its infancy and many challenges remain [[3]](#fn-TEvtHdv9gpQewrYhX-3).
* The study of program games has, for the most part, focused on the simple setting of two-player, one-shot games. How can (cooperative) program equilibrium strategies be automatically constructed in general settings?
* Under what circumstances would agents be incentivized to enter into open-source interactions?
* How can program equilibrium be made to promote more efficient outcomes even in cases of incomplete access to counterparts’ source codes?
+ As a toy example, consider two robots playing a single-shot program prisoner’s dilemma, in which their respective moves are indicated by a simultaneous button press. In the absence of verification that the output of the source code actually causes the agent to press the button, it is possible that the output of the program does not match the actual physical action taken. What are the prospects for closing such "credibility gaps’’? The literature on (physical) zero-knowledge proofs (Fisch et al., 2014; Glaser et al., 2014) may be helpful here.
+ See also the discussion in Section 3 on multi-agent learning under varying degrees of transparency.
### 4 Peaceful bargaining mechanisms
In other sections of the agenda, we have proposed research directions for
improving our general understanding of cooperation and conflict among TAI systems. In this section, on the other hand, we consider several families of strategies designed to actually avoid catastrophic cooperation failure. The idea of such "peaceful bargaining mechanisms'' is, roughly speaking, to find strategies which are 1) peaceful (i.e., avoid conflict) and 2) preferred by rational agents to non-peaceful strategies[[4]](#fn-TEvtHdv9gpQewrYhX-4).
We are not confident that peaceful bargaining mechanisms will be used by default. First, in human-in-the-loop scenarios, the bargaining behavior of TAI systems may be dictated by human overseers, who we do not expect to systematically use rational bargaining strategies ([Section 6.1](https://www.lesswrong.com/posts/4GuKi9wKYnthr8QP9/sections-5-and-6-contemporary-architectures-humans-in-the)). Even in systems whose decision-making is more independent of humans’, evolution-like training methods could give rise to non-rational human-like bargaining heuristics ([Section 5.2](https://www.lesswrong.com/posts/4GuKi9wKYnthr8QP9/sections-5-and-6-contemporary-architectures-humans-in-the)). Even among rational agents, because there may be many cooperative equilibria, additional mechanisms for ensuring coordination may be necessary to avoid conflict arising from the selection of different equilibria (see Example 4.1.1). Finally, the examples in this section suggest that there may be path-dependencies in the engineering of TAI systems (for instance, in making certain aspects of TAI systems more transparent to their counterparts) which determine the extent to which peaceful bargaining mechanisms are available.
In the first subsection, we present some directions for identifying mechanisms which could implement peaceful settlements, drawing largely on existing ideas in the literatures on rational bargaining. In the second subsection we sketch a proposal for how agents might mitigate downsides from threats by effectively modifying their utility function. This proposal is called *surrogate goals*.
#### 4.1 Rational crisis bargaining
As discussed in [Section 1.1](https://www.lesswrong.com/posts/KMocAf9jnAKc2jXri/sections-1-and-2-introduction-strategy-and-governance), there are two standard explanations for war among rational agents: credibility (the agents cannot credibly commit to the terms of a peaceful settlement) and incomplete information (the agents have differing private information which makes each of them optimistic about their prospects of winning, and incentives not to disclose or to misrepresent this information).
Fey and Ramsay (2011) model crisis bargaining under incomplete information.
They show that in 2-player crisis bargaining games with voluntary agreements (players are able to reject a proposed settlement if they think they will be better off going to war); mutually known costs of war; unknown types θ1,θ2 measuring the players' military strength; a commonly known function p(θ1,θ2) giving the probability of player 1 winning when the true types are θ1,θ2; and a common prior over types; a peaceful
settlement exists if and only if the costs of war are sufficiently large. Such a settlement must compensate each player's strongest possible type by the amount they expect to gain in war.
Potential problems facing the resolution of conflict in such cases include:
* Reliance on common prior μ and agreed-upon win probability model p(θ1,θ2). If players disagree on these quantities it is not clear how
bargaining will proceed. How can players come to an agreement on these quantities, without generating a regress of bargaining problems? One possibility is to defer to a mutually trusted party to estimate these quantities from publicly observed data. This raises its own questions. For example, what conditions must a third party satisfy so that their judgements are trusted by each player? (Cf. Kydd (2003), Rauchhaus (2006), and sources therein on mediation).
* The exact costs of conflict to each player ci are likely to be private information, as well. The assumption of a common prior, or the ability to agree upon a prior, may be particularly unrealistic in the case of costs.
Recall that another form of cooperation failure is the simultaneous commitment
to strategies which lead to catastrophic threats being carried out ([Section 2.2](https://www.lesswrong.com/posts/KMocAf9jnAKc2jXri/sections-1-and-2-introduction-strategy-and-governance)). Such "commitment games'' may be modeled as a game of Chicken ([Table 1](https://www.lesswrong.com/posts/KMocAf9jnAKc2jXri/sections-1-and-2-introduction-strategy-and-governance)), where Defection corresponds to making commitments to carry out a threat if one's demands are not met,
while Cooperation corresponds to not making such commitments. Thus we are interested in bargaining strategies which avoid mutual Defection in commitment
games. Such a strategy is sketched in Example 4.1.1.
---
**Example 4.1.1** (Careful commitments).
Consider two agents with access to commitment devices. Each may decide to
commit to carrying out a threat if their counterpart does not forfeit some prize (of value 1 to each party, say). As before, call this decision D. However, they may instead commit to carrying out their threat only if their counterpart does not agree to a certain *split* of the prize (say, a split in which Player 1 gets p).
Call this commitment Cp, for "cooperating with split p''.
When would an agent prefer to make the more sophisticated commitment
Cp? In order to say whether an agent expects to do better by making Cp,
we need to be able to say how well they expect to do in the "original'' commitment game where their choice is between D and C. This is not straightforward, as Chicken admits three Nash equilibria. However, it may be reasonable to regard the players' expected values under mixed strategy Nash equilibrium as the values they expect from playing this game. Thus, split p could be chosen such that p and 1−p exceed player 1 and 2's respective expected payoffs under the mixed strategy Nash equilibrium. Many such splits may exist. This calls for the selection among p, for which we may turn to a bargaining solution concept such as Nash (Nash, 1950) or Kalai-Smorokindsky (Kalai et al., 1975). If each player uses the same bargaining solution, then each will prefer to committing to honoring the resulting split of the prize to playing the original threat game, and carried-out threats will be avoided.
Of course, this mechanism is brittle in that it relies on a single take-it-or-leave-it proposal which will fail if the agents use different bargaining solutions, or have slightly different estimates of each players' payoffs. However, this could be generalized to a commitment to a more complex and robust bargaining procedure, such as an alternating-offers procedure (Rubinstein 1982; Binmoreet al. 1986; see Muthoo (1996) for a thorough review of such models) or the sequential higher-order bargaining procedure of Van Damme (1986).
Finally, note that in the case where there is uncertainty over whether each player
has a commitment device, sufficiently high stakes will mean that players with commitment devices will still have Chicken-like payoffs. So this model can be straightforwardly extended to cases of where the credibility of a threat comes in degrees. An example of a simple bargaining procedure to commit to is the Bayesian version of the Nash bargaining solution (Harsanyi and Selten, 1972).
---
Lastly, see Kydd (2010)'s review of potential applications of the literature rational crisis bargaining to resolving real-world conflict.
#### 4.2 Surrogate goals [[5]](#fn-TEvtHdv9gpQewrYhX-5)
In this section we introduce *surrogate goals*, a recent [[6]](#fn-TEvtHdv9gpQewrYhX-6) proposal for limiting the downsides from cooperation failures (Baumann, 2017, 2018) [[7]](#fn-TEvtHdv9gpQewrYhX-7). We will focus on the phenomenon of coercive threats (for game-theoretic discussion see Ellsberg (1968); Har-renstein et al. (2007)), though the technique is more general. The proposal is: In order to deflect threats against the things it terminally values, an agent adopts a new (surrogate) goal [[8]](#fn-TEvtHdv9gpQewrYhX-8). This goal may still be threatened, but threats carried out against this goal are benign. Furthermore, the surrogate goal is chosen such that it incentives at most marginally more threats.
In Example 4.2.1, we give an example of an operationalization of surrogate goals in a threat game.
---
**Example 4.2.1** (Surrogate goals via representatives)
Consider the game between Threatener and Target, where Threatener makes a demand of Target, such as giving up some resource. Threatener can — at some cost — commit to carrying out a threat against Target . Target can likewise commit to give in to such threats or not. A simple model of this game is given in the payoff matrix in Table 3 (a normal-form variant of the threat game discussed in Section 3 [[9]](#fn-TEvtHdv9gpQewrYhX-9)).

Unfortunately, players may sometimes play (Threaten, Not give in). For example, this may be due to uncoordinated selection among the two pure-strategy Nash equilibria ((Give in, Threaten) and (Not give in, Not threaten)).
But suppose that, in the above scenario, Target is capable of certain kinds of credible commitments, or otherwise is represented by an agent, Target’s Representative, who is. Then Target or Target’s Representative may modify its goal architecture to adopt a *surrogate goal* whose fulfillment is not actually valuable to that player, and which is slightly cheaper for Threatener to threaten. (More generally, Target could modify itself to commit to acting as if it had a surrogate goal in threat situations.) If this modification is credible, then it is rational for Threatener to threaten the surrogate goal, obviating the risk of threats against Target’s true goals being carried out.
As a first pass at a formal analysis: Adopting an additional threatenable goal adds a column to the payoff matrix, as in Table 4. And this column weakly dominates the old threat column (i.e., the threat against Target’s true goals). So a rational player would never threaten Target’s true
goal. Target does not themselves care about the new type of threats being carried out, so for her, the utilities are given by the blue numbers in Table 4.

---
This application of surrogate goals, in which a threat game is already underway but players have the opportunity to self-modify or create representatives with surrogate goals, is only one possibility. Another is to consider the adoption of a surrogate goal as the choice of an agent (before it encounters any threat) to commit to acting according to a new utility function, rather than the one which represents their true goals. This could be modeled, for instance, as an extensive-form game of incomplete information in which the agent decides which utility function to commit to by reasoning about (among other things) what sorts of threats having the utility function might provoke. Such models have a signaling game component, as the player must successfully signal to distrustful counterparts that it will actually act according to the surrogate utility function when threatened. The game-theoretic literature on signaling (Kreps and Sobel, 1994) and the literature on inferring preferences in multi-agent settings (Yu et al., 2019; Lin et al., 2019) may suggest useful models. The implementation of surrogate goals faces a number of obstacles. Some problems and questions include:
* The surrogate goal must be credible, i.e., threateners must believe that the agent will act consistently with the stated surrogate goal. TAI systems are unlikely to have easily-identifiable goals, and so must signal their goals to others through their actions. This raises questions both of how to signal so that the surrogate goal is at all credible, and how to signal in a way that doesn’t interfere too much with the agent’s true goals. One possibility in the context of Example 4.2.1 is the use of zero-knowledge proofs (Goldwasser et al., 1989; Goldreich and Oren,1994) to reveal the Target's surrogate goal (but not how they will actually respond to a threat) to the Threatener.
* How does an agent come to adopt an appropriate surrogate goal, practically speaking? For instance, how can advanced ML agents be trained to reason correctly about the choice of surrogate goal?
* The reasoning which leads to the adoption of a surrogate goal might in fact lead to *iterated* surrogate goals. That is, after having adopted a surrogate goal, Target may adopt a surrogate goal to protect *that* surrogate goal, and so on. Given that Threatener must be incentivized to threaten a newly adopted surrogate goal rather than the previous goal, this may result in Target giving up much more of its resources than it would if only the initial surrogate goal were threatened.
* How do surrogate goals interact with open-source game theory (Sections 3.2 and 5.1)? For instance, do open source interactions automatically lead to the use of surrogate goals in some circumstances?
* In order to deflect threats against the original goal, the adoption of a surrogate goal must lead to a similar distribution of outcomes as the original threat game (modulo the need to be slightly cheaper to threaten). Informally, Target should expect Target’s Representative to have the same propensity to give in as Target; how this is made precise depends on the details of the formal surrogate goals model.
A crucial step in the investigation of surrogate goals is the development of appropriate theoretical models. This will help to gain traction on the problems listed above.
*The next post in the sequence, "Sections 5 & 6: Contemporary AI Architectures, Humans in the Loop", will come out on Thursday, December 19.*
[Acknowledgements & References](https://www.lesswrong.com/s/p947tK8CoBbdpPtyK/p/XKWGgyCyGhkm73fhm)
---
1. The Ultimatum Game is the 2-player game in which Player 1 proposes a split (pX,(1−p)X) of an amount of money X, and Player 2 accepts or rejects the split. If they accept, both players get the proposed amount, whereas if they reject, neither player gets anything. The unique SPE of this game is for Player 1 to propose as little as possible, and for Player 2 to accept the offer. [↩︎](#fnref-TEvtHdv9gpQewrYhX-1)
2. See Lipton (2016); Doshi-Velez and Kim (2017) for recent discussions of interpretability in machine learning. [↩︎](#fnref-TEvtHdv9gpQewrYhX-2)
3. See also [Section 5.1](https://www.lesswrong.com/posts/4GuKi9wKYnthr8QP9/sections-5-and-6-contemporary-architectures-humans-in-the) for discussion of open-source game theory in the context of contemporary machine learning, and [Section 2](https://www.lesswrong.com/posts/KMocAf9jnAKc2jXri/sections-1-and-2-introduction-strategy-and-governance) for policy considerations surrounding the implementation of open-source interaction. [↩︎](#fnref-TEvtHdv9gpQewrYhX-3)
4. More precisely, we borrow the term "peaceful bargaining mechanisms'' from Fey and Ramsay (2009). They consider mechanisms for crisis bargaining between two countries. Their mechanisms are defined by the value of the resulting settlement to each possible type for each player, and the probability of war under that mechanism for each profile of types. They call a "peaceful mechanism" one in which the probability of war is 0 for every profile of types. [↩︎](#fnref-TEvtHdv9gpQewrYhX-4)
5. This subsection is based on notes by Caspar Oesterheld. [↩︎](#fnref-TEvtHdv9gpQewrYhX-5)
6. Although, the idea of modifying preferences in order to get better outcomes for each player was discussed by Raub (1990) under the name "preference adaptation’’, who applied it to the promotion of cooperation in the one-shot Prisoner’s Dilemma. [↩︎](#fnref-TEvtHdv9gpQewrYhX-6)
7. See also the discussion of surrogate goals and related mechanisms in Christiano and Wiblin (2019). [↩︎](#fnref-TEvtHdv9gpQewrYhX-7)
8. Modifications of an agent’s utility function have been discussed in other contexts. Omohundro (2008) argues that "AIs will try to preserve their utility functions’’ and "AIs will try to prevent counterfeit utility’’. Everitt et al. (2016) present a formal model of a reinforcement learning agent who is able to modify its utility function, and study conditions under which agents self-modify. [↩︎](#fnref-TEvtHdv9gpQewrYhX-8)
9. Note that the normal form representation in Table 3 is over-simplifying; it assumes the credibility of threats, which we saw in Section 3 to be problematic. For simplicity of exposition, we will nevertheless focus on this normal-form game in this section. [↩︎](#fnref-TEvtHdv9gpQewrYhX-9) |
7a8858e2-6dcd-4603-b4ca-e894eef4b093 | trentmkelly/LessWrong-43k | LessWrong | A Playbook for AI Risk Reduction (focused on misaligned AI)
I sometimes hear people asking: “What is the plan for avoiding a catastrophe from misaligned AI?”
This post gives my working answer to that question - sort of. Rather than a plan, I tend to think of a playbook.1
* A plan connotes something like: “By default, we ~definitely fail. To succeed, we need to hit multiple non-default goals.” If you want to start a company, you need a plan: doing nothing will definitely not result in starting a company, and there are multiple identifiable things you need to do to pull it off.
* I don’t think that’s the situation with AI risk.
* As I argued before, I think we have a nontrivial chance of avoiding AI takeover even in a “minimal-dignity” future - say, assuming essentially no growth from here in the size or influence of the communities and research fields focused specifically on existential risk from misaligned AI, and no highly surprising research or other insights from these communities/fields either. (This statement is not meant to make anyone relax! A nontrivial chance of survival is obviously not good enough.)
* I think there are a number of things we can do that further improve the odds. My favorite interventions are such that some success with them helps a little, and a lot of success helps a lot, and they can help even if other interventions are badly neglected. I’ll list and discuss these interventions below.
* So instead of a “plan” I tend to think about a “playbook”: a set of plays, each of which might be useful. We can try a bunch of them and do more of what’s working. I have takes on which interventions most need more attention on the margin, but think that for most people, personal fit is a reasonable way to prioritize between the interventions I’m listing.
Below I’ll briefly recap my overall picture of what success might look like (with links to other things I’ve written on this), then discuss four key categories of interventions: alignment research, standards and monitoring, successful-but-careful AI |
89f46284-f678-46b5-92f2-afb3964098e2 | trentmkelly/LessWrong-43k | LessWrong | Belief Chains
A belief is an acceptance that a statement is true or that something exists. As aspiring rationalists, we strive for our beliefs to be true, accurate, and minimally biased.
You seldom see a single belief floating around. Typically beliefs tend to group into clusters and chains. In other words, if I believe that I am turning my thoughts into written words right now, that is not an isolated belief. My belief chain might look something like this:
I have sight -> The image coming into my eyes is of something that is metallic with bright lights and little boxes -> It is similar to things that have been called “computers” before -> I am wiggling my fingers to make patterns -> this is called typing -> I am typing on a computer -> the words I am thinking are being translated into writing.
Why does it matter whether I see my beliefs as chains or whether I simply look at the highest level belief such as “the words I am thinking are being translated into written word”?
It matters because at each link in the chain of belief, there is potential for falsehood to be introduced. The further I am away from the source of my high-level belief, the less likely my high-level belief is to be accurate.
Say for example that a three year old is typing on their toy computer that does not have the standard typing functionality of my computer. They could still have the same logic chain that I used:
I have sight -> The image coming into my eyes is of something that is metallic with bright lights and little boxes -> It is similar to things that have been called “computers” before -> I am wiggling my fingers to make patterns -> this is called typing -> I am typing on a computer -> the words I am thinking are being translated into writing.
Belief chains can be corrupted in many ways. Here are a few:
1. Our intuitions tell us that the more interconnecting beliefs we have, and the more agreement between different beliefs, the more likely they are to be true |
3d170892-b5a8-452d-a12f-05515cb939ae | trentmkelly/LessWrong-43k | LessWrong | How to (not) do a literature review
This is cross-posted from my personal blog, where I share thoughts on my work and learning process in the OpenAI Scholars Program. I thank Ruby Bloom for suggesting that I share the post here as well.
OpenAI Scholars: Fifth Steps - The Dreaded Literature Review
The OpenAI Scholars , and I among them, recently completed a project proposal for the second half of our program. Having recently finished a PhD, writing a proposal and doing the requisite literature review, should be second nature. But literature reviews were always my least favorite part of research.
Fortunately, I'm in good company. In a previous life as a young aspiring neuroscientist, I once attended a talk by David Hubel . When asked for tips about how to "keep up with the literature", I recall that Professor Hubel responded: "You know, at some point in your career, you have to decide if you want to be a consumer - or a producer." He elaborated that he would only really look at papers that his advisor or his long-term collaborator Torsten Wiesel pointed him to.
There's good reason to want to avoid literature reviews. To begin with, the problem formulation is intractable: "Know everything." If you've spent any amount of time around academics, it will soon become apparent that this is exactly what they expect from you. "Oh you haven't read that paper?" The assumption is that you have read every paper there is.
Of course, this is an impossible task. Last time I visited Google Scholar and searched on a few project-relevant terms, I encountered 105,000 papers. It takes me a full day to read and satisfactorily understand an academic paper. So reading 105 kilo-papers would take me 288 years. And I had hoped to also do some coding work during the Scholars program.
Now suppose you want to cook a ghormeh sabzi . To do so, you buy yourself a can of ghormeh sabzi mix , and follow the instructions on the can. There are about five steps. However, as ubiquitous an activity as the literature review is (in acad |
366f7533-e0e3-4265-8588-5b1e398710f0 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Help us design the interface for aisafety.com
We are working on improving AIsafety.com, a website that organizes AI safety resources for all audiences, presenting the full range of ways to support AI safety work in one place.
We seek input on our Resources section. We have a set of 9 boxes with a title and a space for a short summary of what is in that section. Six of those are shown in the image below.
We are looking for input on what should be in the summary part of each box.
* What would you want to get out of each section?
* What is the most basic information you need about what is provided in each area?
Input from people who aren't very familiar with AI is especially helpful. Here are the titles of all nine sections:
* Courses
* Communities
* Projects
* Events
* Jobs
* Funders
* Conceptual Map
* Reading guide
* Donation guide
All input is wecome! Thank you for your time! |
c58f6432-8e91-4e5d-85e5-9156696ac63c | trentmkelly/LessWrong-43k | LessWrong | Eight Magic Lamps
> “Give me a lever long enough, and a fulcrum on which to place it, and I shall move the world.” - Archimedes
Aladdin started with nothing; but after a sorcerer tasked him to retrieve a magic lamp, the lamp’s genie granted him wealth and fame. His fortune lasted until the sorcerer stole the lamp, leaving Aladdin ruined. But Aladdin stole it back, and left the sorcerer dead.
That’s the thing about magic lamps: when your future depends on a single point of leverage, triumph and ruin are separated only by a knife’s edge.
----------------------------------------
Muammar Gaddafi started with nothing; but after joining the Libyan military, he gathered a small cabal of soldiers fed up with King Idris’ regime. Their plans were hastened by rumors of a rival coup: had they waited a week longer, a better-prepared group of conspirators would have seized power instead. But Gaddafi struck first—seizing airports, radio stations, and prominent opponents. King Idris went into exile; the rival conspirators were thrown in prison; and Gaddafi reigned for the next 42 years.
That’s the thing about coups: a decapitating strike can sever a chain of command at its narrowest link, changing the lives of millions overnight.
----------------------------------------
Humans are social creatures, inspired by stories of struggle and sacrifice. A single life can fuel narratives that persist for millennia. Jesus died in ignominy—yet two thousand years later, his face is worshiped across the world. Muhammad was illiterate, but his proclamations still govern the lives of billions. Marx never stepped on a battlefield, but his advocacy of violent struggle sparked revolutions in two eventual superpowers.
None of them created movements out of nothing. Their teachings would not have spread like wildfire if they weren’t tapping into a deep preexisting current of emotion. But that current never cares about exactly which path it takes—it only cares that it can surge downhill. Whoever bores the first h |
fccd2b39-3227-4b38-be3d-d49d7c94e4de | trentmkelly/LessWrong-43k | LessWrong | Book Swap
Summary: Bring a book you liked, pitch it to the crowd, then trade your book for a book someone else brought that sounded good. This has no specific rationalist content.
Tags: Repeatable, medium
Purpose: Get people talking about books they read and encountering new ones.
Materials: A whiteboard or piece of paper with a clipboard, plus appropriate writing implement. A timer. (Or a smartphone with a timer app.)
Announcement Text:
Hello all!
I read a lot of books, which presents me with two problems. First, I constantly need recommendations for more books to read, and second, I constantly want to share books with other people so I'll be able to talk about the book with them. I would like to solicit your help with both of these problems. To that end, I’m hosting a book swap. The rules are simple. Bring a book. By the end of the night, leave with a book that isn't the one you brought.
Advice on choosing your book: The only two hard restrictions on your choice of book is that 1. it must be a physical book, the kind you could put into someone else's hand, and 2. it must be for an English speaking audience. Given my social groups I expect certain literary preferences will exist but, well, you're invited, and you like the books you like, so feel free to pick any book you like and it's quite plausible someone else has the same tastes as you.
We will have a whiteboard or similar writing surface. As you arrive, write your name and the title of your book on it. Then at the appointed time, we'll start running through the list- when it's your turn, you'll have two minutes by-the-clock to pitch why people should read your book. Then the swapping will begin in earnest! Please feel free to ask people questions about their books, talk about what kinds of books you like, or just hang out and socialize about anything else.
Description:
As they arrive, make sure to collect people’s names and books. Stack the books in a spot where everyone can see them, in order if possible. |
7774c626-b45e-44d9-92f3-e039f8b621ba | trentmkelly/LessWrong-43k | LessWrong | Decision theory question
I'm sure I remember reading somewhere that an informal slogan for (some brand of decision theory) was:
"Act in accordance with the precommittment you wish you had made."
i.e. when faced with Newcomb's problem, you would wish you had precommitted to one-box, and so you should one-box. This is entirely predictable, so you get the money.
However, I couldn't find where I saw this, and I can't remember which decision theory it was meant to exemplify - and I don't actually understand the maths of the various competitors enough to figure out which it is.
Could someone who knows the area better point me in the right direction? In general, the idea of making a kind of "fully general precommitment" seems like an intuitive way of explaining how you can improve on, say, CDT. |
4fa39af8-eede-40f7-a0e9-e143ad2f1508 | trentmkelly/LessWrong-43k | LessWrong | In Addition to Ragebait and Doomscrolling
(Sorry for the coy title--I want to give the reader a chance to guess what the addition is.)
One day I opened up the front page of reddit. I was not signed in and I was using my browser's incognito mode.
The following list composed about 25% of what I saw as I scrolled. See if you notice any themes. (As hinted by the title, I think there is something other than outrage here.)
r/MurderedByWords
r/PublicFreakout
r/insanepeoplefacebook
r/JusticeServed
r/nottheonion
r/facepalm
r/mildlyinfuriating
r/Cringetopia
r/TikTokCringe
r/LeopardsAteMyFace
r/FuckYouKaren
r/iamverybadass
r/IdiotsInCars
r/cringe
(At least another 25% was made up of r/news, r/worldnews, r/politics, r/PoliticalHumor, and so on.)
Like many people, I have spent a lot of time thinking about the psychotoxic effects of concentrated outrage, political polarization, doomscrolling, misinformation, and filter bubbles. So I was a little surprised by my own interpretation of the above list:
I submit that the most salient theme is contempt.
Here's a sentence that has been at the back of my mind since I came across it:
> Scandal is great entertainment because it allows people to feel contempt, a moral emotion that gives feelings of moral superiority while asking nothing in return.
-- Jonathan Haidt, The Happiness Hypothesis
Let me first admit that contemptuously bonding over the misbehavior of others probably can have real benefits. But I claim that in the case of the reddit front page, these benefits are clearly outweighed by the costs to one’s personality (not to mention epistemics).
So, Haidt says contempt feels good, reddit appears to be a prime example, and I'm now asserting that it's psychotoxic (and possibly addictive, at least when taken via intravenous drip bottomless scrolling). Presuming all of that is correct...is it actionable? I think so.
If you're ambitious, you could quit social media for a month and pay attention to how your thoughts and attitudes change.
More |
793048e8-b9b6-4ad2-823e-3b803aff055f | trentmkelly/LessWrong-43k | LessWrong | An Introduction to Decision Modeling
Despite their importance, we barely pay attention to most of the decisions we make. Fortunately, there’s a better way.
Continue reading on The Startup » |
d5b99eaa-ab28-444c-be6c-19054a151f9f | trentmkelly/LessWrong-43k | LessWrong | Breaking the Procrastination Equation
Recently, LessWrong user LukeProg wrote an article summarizing the scientific research on procrastination, in How to Beat Procrastination. The result was the Procrastination Equation:
This equation quantifies the motivation of people, on average. The rest of the article, and the book that much of it came from, was spent on ways to adjust your situation so that your motivation came out right. This can be helpful, but I think it is important to consider another goal.
I want to change myself so that I no longer follow the procrastination equation.
I am confident that there are people in the world that don't follow the procrastination equation. |
996685dc-6f1d-4de8-bd10-4e2e3abf4daf | trentmkelly/LessWrong-43k | LessWrong | The Underreaction to OpenAI
TL; DR
Seeing OpenAI's impressive technology should make media commentators think it more likely that AGI is possible and existentially dangerous. Some do, but overall this shift in views does not seem consistent enough; the default reaction still seems to be to view the whole AI thing as just another profitable technology development.
People should be very impressed by recent AI developments
If you did not pay much attention to AI before ChatGPT and then started, you were very likely surprised and impressed. This may even be true if you closely followed AI developments, and even if GPT-3.5 did not impress you, GPT-4 likely did. This also seems to be true of many journalists; AI reporting has become a topic for mainstream media. LLMs (a word that only AI experts knew 2 years ago) are very powerful, and even if you don't immediately see how they will transform the economy, you feel their power if you use them. The same is true of image generation or voice generation AI. The power of translation and other applications like protein folding may be less immediately obvious, but they add more facets of AI power.
Yet there is a strange missing reaction of society's mainstream to these developments, which became particularly visible in the reporting and commentary on the firing of Sam Altman as CEO of OpenAI.
The magician who creates AI
Imagine a person came to you and claimed to be a magician. She had created a magic workshop in order to summon a super-powerful demon, and on the way there she will create smaller versions of the demon. You laugh because demons don't exist. But she creates the small demons. These demons are not super-powerful, but you would not have expected them to be possible. How should the small demons change your mind about the possibility of a super-powerful one?
OpenAI claims that it can create and also tame the demon called AGI. But media coverage usually ignores that it has demonstrated the ability to create small ones explicitly as steps tow |
c2023732-94e0-4f41-b134-3ad82591e12c | trentmkelly/LessWrong-43k | LessWrong | Which evals resources would be good?
I want to make a serious effort to create a bigger evals field. I’m very interested in which resources you think would be most helpful. I’m also looking for others to contribute and potentially some funding.
1. Which resources would be most helpful? Suggestions include:
1. Open Problems in Evals list: A long list of open relevant problems & projects in evals.
2. More Inspect Tutorials/Examples: Specifically, examples with deep explanations, examples of complex agent setups, and more detailed examples of the components around the evals.
3. Evals playbook: A detailed guide on how to build evals with detailed examples for agentic and non-agentic evals.
4. Salient demos: I don’t even necessarily want to build scary demos, I just want normal people to be less hit-by-a-truck when they learn about LM agent capabilities in the near future.
5. More “day-to-day” evals process resources: A lot of evals expertise is latent knowledge that professional evaluators have accumulated over time.
6. An evals slack
7. Talk through the most important evals papers
8. Evals workshops and conferences
2. Are you interested in contributing to any of these efforts?
1. Right now I think the following types of contributions would be helpful:
1. People who want to contribute more detailed Inspect tutorials / Examples as described here. I think this is a great opportunity for people who are considering working full-time on evals and want to test fit / make a name for themselves.
2. Someone who wants to help me coordinate these efforts part time or full-time. This would be a mix of playing around with evals yourself, coordinating with other actors in evals and probably a decent amount of writing.
2. I’m very interested in other contributions as well.
3. If you’re interested, fill out this form.
3. Is anyone interested in funding these efforts?
1. I’d be happy to function as a regrantor. I wouldn’t take a cut myself. I imagine mo |
a2de034a-dc21-48fe-9368-6119a88b781d | trentmkelly/LessWrong-43k | LessWrong | Distinguishing goals from chores
This is a linkpost for https://amirbolous.com/posts/goals-and-chores
* Introduction
* Story Time
* Have to, Want to, Want to Want to
* Noticing Patterns in Your Goals
* Closing Thoughts
Introduction
Recently, I've realized that there is a discrepancy between my goals and my actions. It was only a couple of weeks ago that I managed to acquire some terminology for describing this phenomenon. Imagine categorizing (for the sake of simplicity here) anything that can potentially can be done in our lives into one of three buckets
a. something we have to do
b. something we want to want to do
c. something we actually want to do
Past experiences have shown me that the boundaries between these categories are razor thin and one can slip into the other completely unaware.
Story Time
When I was in the 8th grade, there was a possibility that my family was going to move from Dubai to the U.S for my dad's work. (side note, I lived most of my life in Dubai, as well as Egypt for a little bit when I was younger, growing up). Naturally, next time we visited the US, my family toured the potential areas we would be staying at, and more critically for me, the potential schools I would be going to. I distinctly remember having very strong emotions about this move.
In general, moving countries or cities is awful: you leave behind the place you know, your friends, your teachers, and your community to start everything from scratch. The odd thing was that I really wanted to move.
This wasn't because I disliked my community, in fact the complete opposite was true - I had a great group of friends at my school, church, and even in my neighborhood.
The only person more confused than a 13-year old me trying to rationalize this was my mum.
"Why are you so enthusiastic about the move?" She asked when I was pressing my dad for the travel details.
I don't remember my exact response but it was something along the lines of
"I'm tired of having to do well at school because people (more |
3caa2630-9af8-4aac-a781-343486c95a4a | trentmkelly/LessWrong-43k | LessWrong | Meetup : Czech's first Meetup Prague
Discussion article for the meetup : Czech's first Meetup Prague
WHEN: 16 February 2015 06:00:00PM (+0100)
WHERE: Václavské náměstí 778/14, Praha 110 00
Hello, this is going to be the first meetup I know about in Czech Republic. Since I don't know any other LessWrongers here, I'll be waiting in Dobrá Čajovna s.r.o. (tea-room) for at least one hour. Look for 22-year-old, tall, skinny, brown curly haired guy.
Please contact me if you are interested in a meeting, but the time just doesn't fit you - it will be at least some sign for me that someone is at least interested.
Feel free to ask if you have any questions.
I decided to try to make a meetup from time to time (this is the second try) to see if someone will ever show up. It's a pity (and shame) that there are no meetups here. My goal is to estabilish regular meetings of stable core of LessWrongers.
Discussion article for the meetup : Czech's first Meetup Prague |
cc1ce670-4b2c-497d-8c1d-e62e760362ad | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | The longtermist AI governance landscape: a basic overview
*Aim: to give a basic overview of what is going on in longtermist AI governance.*
*Audience: people who have limited familiarity with longtermist AI governance and want to understand it better. I don’t expect this to be helpful for those who already have familiarity with the field. ETA: Some people who were already quite familiar with the field have found this helpful.*
This post outlines the different kinds of work happening in longtermist AI governance. For each kind of work, I’ll explain it, give examples, sketch some stories for how it could have a positive impact, and list the actors I’m aware of who are currently working on it.[[1]](#fn-GNPMST9tT8rZeT9DH-1)
Firstly, some definitions:
* *AI governance* means bringing about local and global norms, policies, laws, processes, politics, and institutions (not just governments) that will affect social outcomes from the development and deployment of AI systems.[[2]](#fn-GNPMST9tT8rZeT9DH-2)
* *Longtermist AI governance*, in particular, is the subset of this work that is motivated by a concern for the very long-term impacts of AI. This overlaps significantly with work aiming to govern [transformative AI](https://forum.effectivealtruism.org/tag/transformative-artificial-intelligence) (TAI).
It’s worth noting that the field of longtermist AI governance is very small. I’d guess that there are around 60 people working in AI governance who are motivated by a concern for very long-term impacts.
Short summary
=============
On a high level, I find it helpful to consider there being a spectrum between foundational and applied work. On the foundational end, there’s *strategy research*, which aims to identify good high-level goals for longtermist AI governance; then there’s *tactics research* which aims to identify plans that will help achieve those high-level goals. Moving towards the applied end, there’s policy development work that takes this research and translates it into concrete policies; work that advocates for those policies to be implemented, and finally the actual implementation of those policies (by e.g. civil servants).
There’s also field-building work (which doesn’t clearly fit on the spectrum). Rather than contributing directly to the problem, this work aims to build a field of people who are doing valuable work on it.

Of course, this classification is a simplification and not all work will fit neatly into a single category.
You might think that insights mostly flow from the more foundational to the more applied end of the spectrum, but it’s also important that research is sensitive to policy concerns, e.g. considering how likely your research is to inform a policy proposal that is politically feasible.
We’ll now go through each of these kinds of work in more detail.
Research
========
Strategy research
-----------------
*Longtermist AI strategy research* ultimately aims to identify high-level goals we could pursue that, if achieved, would clearly increase the odds of eventual good outcomes from advanced AI, from a longtermist perspective (following [Muehlhauser](https://forum.effectivealtruism.org/posts/M2SBwctwC6vBqAmZW/a-personal-take-on-longtermist-ai-governance#Key_bottlenecks), I’ll sometimes refer to this aim as ‘getting *strategic clarity*’).
This research can itself vary on a spectrum between *targeted* and *exploratory* as follows:
* Targeted strategy research answers questions which shed light on some other specific, important, known question
+ e.g. “I want to find out how much compute the human brain uses, because this will help me answer the question of when TAI will be developed (which affects what high-level goals we should pursue)”
* Exploratory strategy research answers questions without a very precise sense of what other important questions they’ll help us answer
+ e.g. “I want to find out what China’s industrial policy is like, because this will probably help me answer a bunch of important strategic questions, although I don't know precisely which ones”
### Examples
* Work on TAI forecasting, e.g. [biological anchors](https://www.cold-takes.com/forecasting-transformative-ai-the-biological-anchors-method-in-a-nutshell/) and [scaling laws for neural language models](https://arxiv.org/abs/2001.08361).
+ Example of strategic relevance: if TAI is soon, then slowly growing a large field of experts seems less promising; if TAI is very far, then longtermist AI governance should probably be relatively deprioritised.
* Work on clarifying the sources of AI x-risk, e.g. writing by [Christiano, Critch](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like), [Carlsmith](https://www.alignmentforum.org/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai), [Ngo](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ) and [Garfinkel](https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/ben-garfinkel-how-sure-are-we-about-this-ai-stuff).
+ Example of strategic relevance: if most x-risk from AI comes from advanced misaligned AI agents, then governance should focus on influencing the first actors to build them.
* Work on investigating the speed of AI progress around TAI, e.g. [investigation](https://aiimpacts.org/discontinuous-progress-investigation/) and [analysis](https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/) by AI Impacts.
+ Example of strategic relevance: if AI progress occurs [discontinuously](https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/#Definitions), then there are likely to be only a small number of high-stakes actors, and most of the value of governance will come from influencing those actors.
It’s easy to confuse strategy research (and especially *exploratory* strategy research) with *broadly scoped* research. As many of the above examples show, strategy research can be *narrowly scoped* - that is, it can answer a fairly narrow question. Examples of broadly vs. narrowly scoped questions:
* On scaling laws:
+ Broad question: in general, how does the performance of deep learning models change as you increase the size of those models?
+ Narrower question: how does the performance of *large language models specifically (e.g. GPT-3)* change as you increase the size of those models? (The question tackled in [this paper](https://arxiv.org/abs/2001.08361).)
* On sources of AI x-risk:
+ Broad question: how much x-risk is posed by advanced AI in general?
+ Narrower question: how much x-risk is posed *by influence-seeking AI agents specifically*? (The question tackled in [this report](https://www.alignmentforum.org/posts/HduCjmXTBD4xYTegv/draft-report-on-existential-risk-from-power-seeking-ai).)
Indeed, I think it’s often better to pick narrowly scoped questions, especially for junior researchers, because they tend to be more tractable.
Luke Muehlhauser has some recommendations for those who want to try this kind of work: see point 4 in [this post](https://forum.effectivealtruism.org/posts/M2SBwctwC6vBqAmZW/a-personal-take-on-longtermist-ai-governance#My_advice_to_longtermists_interested_in_AI_governance). And see [this post](https://forum.effectivealtruism.org/posts/kvkv6779jk6edygug/some-ai-governance-research-ideas) for some examples of open research questions.[[3]](#fn-GNPMST9tT8rZeT9DH-3)
### Stories for impact
* *Direct impact*: there are many possible goals in AI governance, and we need to prioritise the most important ones. This work is often motivated by researchers’ impressions that there is very little clarity about topics which affect what goals we should pursue. For example, see the results of [these](https://forum.effectivealtruism.org/posts/2tumunFmjBuXdfF2F/survey-on-ai-existential-risk-scenarios-1) [surveys](https://www.alignmentforum.org/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results) which show wide disagreement about AI x-risk scenarios and the total amount of AI x-risk, respectively.
* *Indirect impact:*
+ Field-building: having a clear understanding of what we’re working to achieve and why it matters would help attract more people to the field.
+ Communicating the need for policy change: if you want to convince people to do costly or dramatic things in the future, you'd better have clear things to say about what we’re working to achieve and why it matters.
### Who’s doing it?
Some people at the following orgs: [FHI](https://www.fhi.ox.ac.uk/), [GovAI](https://governance.ai/), [CSER](https://www.cser.ac.uk/), [DeepMind](https://deepmind.com/), [OpenAI](https://openai.com/), [GCRI](https://gcrinstitute.org/), [CLR](https://longtermrisk.org/), [Rethink Priorities](https://rethinkpriorities.org/), [OpenPhil](https://www.openphilanthropy.org/), [CSET](https://cset.georgetown.edu/),[[4]](#fn-GNPMST9tT8rZeT9DH-4) plus some independent academics.
Tactics research
----------------
*Longtermist AI tactics research* ultimately aims to identify plans that will help achieve high-level goals (that strategy research has identified as a priority). It tends to be more narrowly scoped by nature.
It’s worth noting that there can be reasons to do tactics research even if you haven’t clearly identified some goal as a priority: for your own learning, career capital, and helping to build an academic field.
### Examples
* [The Windfall Clause](https://www.fhi.ox.ac.uk/wp-content/uploads/Windfall-Clause-Report.pdf)
+ Plan: develop a tool for distributing the benefits of AI for the common good
+ High-level goals which this plan is pursuing: reducing incentives for actors to race against each other to be the first to develop advanced AI; reducing economic inequality.
* [Mechanisms for Supporting Verifiable Claims](https://arxiv.org/abs/2004.07213)
+ Plan: develop practices by which AI developers could make their own claims about AI development more verifiable (that is, claims to which developers can be held accountable)
+ High-level goals which this plan is pursuing: developing mechanisms for demonstrating responsible behaviour of AI systems; enabling more effective oversight; reducing pressure to cut corners for the sake of gaining a competitive edge.
* [AI & Antitrust](https://yjolt.org/ai-antitrust-reconciling-tensions-between-competition-law-and-cooperative-ai-development)
+ Plan: proposing ways to mitigate tensions between competition law and the need for cooperative AI development
+ High-level goal which this plan is pursuing: increasing cooperation between companies developing advanced AI.
### Stories for impact
* *Direct impact*: creating solutions that get used to help make better decisions (in policy and future research).
+ This is what Allan Dafoe calls the 'Product model of research'.
* *Indirect impact*: even if not all solutions get used to help make better decisions, they will help grow the field of people who care about longtermist AI governance issues, and improve insight, expertise, connections and credibility of researchers.
+ This is what Allan Dafoe calls the 'Field-building model of research'.
### Who’s doing it?
Some people at the following orgs: [FHI](https://www.fhi.ox.ac.uk/), [GovAI](https://governance.ai/), [CSER](https://www.cser.ac.uk/), [DeepMind](https://deepmind.com/), [OpenAI](https://openai.com/), [GCRI](https://gcrinstitute.org/), [CSET](https://cset.georgetown.edu/), [Rethink Priorities](https://rethinkpriorities.org/), [LPP](https://www.legalpriorities.org/), plus some independent academics.
Policy development, advocacy and implementation
===============================================
Strategy research outputs high-level goals. Tactics research takes those goals and outputs plans for achieving them. *Policy development* work takes those plans and translates them into policy recommendations that are ready to be delivered to policymakers. This requires figuring out (e.g.) which precise ask to make, what language to use (both in the formal policy and in the ask), and other context-specific features that will affect the probability of successful implementation.
*Policy advocacy* work advocates for policies to be implemented, e.g. figuring out who is the best person to make the policy ask, to whom, and at what time.
*Policy implementation* is the work of actually implementing policies in practice, by civil servants or corporations.
It’s worth distinguishing government policy (i.e. policy intended to be enacted by governments or intergovernmental organisations) from corporate policy (i.e. policy intended to be adopted by corporations). Some people working on longtermist AI governance focus on improving corporate policy (especially the policies of AI developers), while others in the field focus on improving the policies of relevant governments.
A common motivation for all policy work is that implementation details are often thought to be critical for successful policymaking. For example, if a government regulation has a subtle loophole, that can make the regulation useless.
Compared with research, this kind of work tends to involve relatively less individual thinking, and relatively more conversation/information collection (e.g. having meetings to learn who has authority over a policy, what they care about, and what other players want in a policy) as well as coordination (e.g. figuring out how you can get a group of actors to endorse a policy, and then making that happen).
As mentioned earlier, policy insight sometimes flows ‘backwards’. For example, policy development might be done iteratively based on how advocacy changes your knowledge (and the policy landscape).
### Examples
* Government policy:
+ Committing to not incorporate AI technology into nuclear command, control and communications (NC3), e.g. as advocated for by CLTR in their [Future Proof](https://11f95c32-710c-438b-903d-da4e18de8aaa.filesusr.com/ugd/e40baa_8692f88bd29f483aa5f77656c8bd4888.pdf) report.
+ Government monitoring of AI development, e.g. as developed in this [whitepaper on AI monitoring](https://arxiv.org/abs/2108.12427).
+ Making nascent regulation or AI strategies/principles sensitive to risks from advanced AI systems (as well as current ones), e.g. [feedback](https://forum.effectivealtruism.org/posts/bd7yr3eozzzhMuKCi/what-is-the-eu-ai-act-and-why-should-you-care-about-it#Commonly_suggested_improvements) by various EA orgs about the EU AI Act.
* Corporate policy:
+ Developing norms for the responsible dissemination of AI research, given its potential for misuse, e.g. [these recommendations](https://partnershiponai.org/paper/responsible-publication-recommendations/) by PAI.
These ideas vary on a spectrum between more targeted (e.g. not integrating AI into NC3) to more general (in the sense of creating general-purpose capacity to deal with a broad class of problems that will likely arise, e.g. most of the others above). I think our policy development, advocacy and implementation today should mostly focus on more general ideas, given our uncertainties about how AI will play out (whilst also pushing for obviously good specific ideas, when they arise).
### Stories for impact
* *Direct impact:* having good policies in place increases our chances of successfully navigating the transition to a world with advanced AI.
* *Indirect impact:* even if you can't be sure that some policy idea is robustly good, developing/advocating/implementing it will help build insight, expertise, connections and credibility of longtermist AI governance people. We don't want to get to an AI “crunch time”,[[5]](#fn-GNPMST9tT8rZeT9DH-5) and only then start learning about how to develop policy and decision-making.
+ That said, we should be very careful with implementing policies that could end up being harmful, e.g. by constraining future policy development.
### Who’s doing it?
* Development:
+ Of government policy: [CLTR](https://www.longtermresilience.org/), [FLI](https://futureoflife.org/), [GovAI](https://governance.ai/), [CSET](https://cset.georgetown.edu/), [CSER](https://www.cser.ac.uk/), [FHI](https://www.fhi.ox.ac.uk/), [TFS](https://thefuturesociety.org/)
+ Of corporate policy: [OpenAI](https://openai.com/), [DeepMind](https://deepmind.com/), [GovAI](https://governance.ai/), [CSER](https://www.cser.ac.uk/), [FHI](https://www.fhi.ox.ac.uk/), [PAI](https://partnershiponai.org/)
* Advocacy:
+ For government policy: [CLTR](https://www.longtermresilience.org/), [CSET](https://cset.georgetown.edu/), [FLI](https://futureoflife.org/), [TFS](https://thefuturesociety.org/)
+ For corporate policy: [PAI](https://partnershiponai.org/)
* Implementation:
+ Of government policy: people in various civil services
+ Of corporate policy: [OpenAI](https://openai.com/), [DeepMind](https://deepmind.com/)
Field-building
==============
This is work that explicitly aims to grow the field or community of people who are doing valuable work in longtermist AI governance.[[6]](#fn-GNPMST9tT8rZeT9DH-6) One could think of this work as involving both (1) bringing in new people, and (2) making the field more effective.
### Examples
1. Bringing in new people by creating:
* policy fellowships, such as the [OpenPhil Technology Policy Fellowship](https://www.openphilanthropy.org/focus/global-catastrophic-risks/technology-policy-fellowship);
* [online programs](https://forum.effectivealtruism.org/posts/68ANc8KhEn6sbQ3P9/ai-governance-fundamentals-curriculum-and-application) or courses to help junior people get synced up on what is happening in AI governance;
* high quality, broadly appealing intro material that reaches many undergraduates;
* more scalable research fellowships to connect, support and credential interested junior people.
2. Making the field more effective by creating:
* research agendas;
* ways for senior researchers to easily hire research assistants.[[7]](#fn-GNPMST9tT8rZeT9DH-7)
### Stories for impact
* *Growth model*: building a longtermist AI governance field with lots of aligned people with the capacity and relevant expertise to do important research and policy work (perhaps especially when this work is less bottlenecked by lack of strategic clarity).
* *Metropolis model:*[[8]](#fn-GNPMST9tT8rZeT9DH-8) building a longtermist AI governance field with dense connections to broader communities (e.g. policymaking, social science, machine learning), such that the field can draw on diverse expertise from these communities.
### Who’s doing it?
[GovAI](https://governance.ai/), [OpenPhil](https://www.openphilanthropy.org/), [SERI](https://cisac.fsi.stanford.edu/stanford-existential-risks-initiative/content/stanford-existential-risks-initiative), [CERI](https://camxrisk.org/), [CHERI](https://eageneva.org/cheri) and [EA Cambridge](https://www.eacambridge.org/). From a broader view, all cause-general EA movement building as well. This is the least explored kind of work discussed in this post.
Other views of the longtermist AI governance landscape
======================================================
I’ve presented just one possible view of the longtermist AI governance landscape - there are obviously others, which may be more helpful for other purposes. For example, you could carve up the landscape based on different kinds of interventions, such as:
* Shifting existing discussions in the policy space to make them more sensitive to AI x-risk (e.g. building awareness of the difficulty of assuring cutting-edge AI systems)
* Proposing novel policy tools (e.g. international AI standards)
* Getting governments to fund AI safety research
* Shifting corporate behaviour (e.g. the windfall clause)
* …
Or, you could carve things up by geographic hub (though not all organisations are part of a geographic hub):
* Bay Area: OpenPhil, OpenAI, PAI, various AI alignment orgs. On average more focused on misalignment as the source of AI x-risk; culturally closer to Silicon Valley and rationality cultures.
* DC: US govt, CSET. Focus on US policy development/advocacy/implementation; culturally closer to DC culture.
* UK: FHI/GovAI, DeepMind, UK govt, CSER, CLTR, (others?). On average more concern over a wider range of sources of AI x-risk.
* EU. In 2020, the European Commission drafted the world’s first AI regulation, which will likely be passed in the next few years and could lead to a Brussels effect.
* China.
* …
Or, you could carve up the landscape based on different “theories of victory”, i.e. complete stories about how humanity successfully navigates the transition to a world with advanced AI. There’s a lot more that could be said about all of this; the aim of this post has just been to give a concise overview of the kinds of work that are currently happening.
*Acknowledgements: this is my own synthesis of the landscape, but is inspired and/or draws directly from EA forum posts by [Allan Dafoe](https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact)*, *[Luke Muehlhauser](https://forum.effectivealtruism.org/posts/M2SBwctwC6vBqAmZW/a-personal-take-on-longtermist-ai-governance) and [Convergence Analysis](https://forum.effectivealtruism.org/posts/oovy5XXdCL3TPwgLE/a-case-for-strategy-research-what-it-is-and-why-we-need-more#The_research_spine_of_effective_altruism__three_levels). Thanks also to Jess Whittlestone for helpful conversation, plus Matthijs Maas, Yun Gu, Konstantin Pilz, Caroline Baumöhl and especially a reviewer from SERI for feedback on a draft.*
*This work is licensed under a [Creative Commons Attribution 4.0 International License.](https://creativecommons.org/licenses/by/4.0/)*
---
1. I’ve surely forgotten some important groups from this list, and I may have misclassified or otherwise misrepresented some of them - please let me know if that’s the case! [↩︎](#fnref-GNPMST9tT8rZeT9DH-1)
2. This borrows directly from [Open Philanthropy’s definition](https://www.openphilanthropy.org/blog/ai-governance-grantmaking#Our_priorities_within_AI_governance). [↩︎](#fnref-GNPMST9tT8rZeT9DH-2)
3. Note that some of these are *tactics research* questions rather than strategy research questions. [↩︎](#fnref-GNPMST9tT8rZeT9DH-3)
4. CSET mostly do tactics research, policy development and policy advocacy, but their work on mapping the semiconductor supply chain falls under strategy research. [↩︎](#fnref-GNPMST9tT8rZeT9DH-4)
5. Muehlhauser [defines this as](https://forum.effectivealtruism.org/posts/M2SBwctwC6vBqAmZW/a-personal-take-on-longtermist-ai-governance) “a period lasting 1-20 years when the decisions most impactful on TAI outcomes might be made”. [↩︎](#fnref-GNPMST9tT8rZeT9DH-5)
6. This is distinct from the field-building benefits of other kinds of work discussed in this document, since it is *solely and explicitly* focused on building the field. [↩︎](#fnref-GNPMST9tT8rZeT9DH-6)
7. Which can also help bring in new people. [↩︎](#fnref-GNPMST9tT8rZeT9DH-7)
8. This idea directly borrows from [Allan Dafoe’s forum post](https://forum.effectivealtruism.org/posts/42reWndoTEhFqu6T8/ai-governance-opportunity-and-theory-of-impact). [↩︎](#fnref-GNPMST9tT8rZeT9DH-8) |
29adf18e-b632-4e24-a257-0484e657583b | trentmkelly/LessWrong-43k | LessWrong | Chariots of Philosophical Fire
Sharing because I feel that the contribution of analytical philosophy is sometimes under appreciated here, particularly compared to other attempts at philosophy:
“In grappling with these mysteries, Oxford philosophers developed and refined old and new techniques. Reasoning, deduction, explanation, more care and precision in language, crisper concepts, deeper distinctions, elaborate models, thought experiments, devastating counterexamples, intuitive principles that press to surprising conclusions—on all of these things, Oxford led the way, and the rest of the Anglosphere followed. Its legacy is less a set of ideas or even a series of sacred tenets and Delphic sayings than it is a devotion to rigor, clarity, truth, and a very practical British revulsion to nonsense.
… Creative genius is evenly distributed in neither time nor place. A survey of the past shows that genius is not randomly scattered about, like the seeds of a dandelion, but concentrates: ancient Athens, Renaissance Florence, Silicon Valley, among other examples. Why these fertile eras and places appear, peak, and then decline is understudied as a historical phenomenon. Oxford philosophy in the twentieth century, though not as astonishing as Florence or as productive as Silicon Valley, is nevertheless an example of the clustering of philosophical genius.
… Third—and strangely left unexplored by the authors—nearly all the philosophers in these books did not have Ph.D.s in philosophy. Gilbert Ryle, J. L. Austin, Isaiah Berlin, Iris Murdoch, Philippa Foot, Elizabeth Anscombe, A. J. Ayer, R. M. Hare, Bernard Williams, and Derek Parfit—not a single doctoral degree in philosophy among them. A first in their undergraduate exams—meaning a grade of “A+”—was enough to send them on their way. Yes, each won prizes and fellowships, but none wrote anything like a dissertation under the supervision of an advisor. One might conclude, as the analytic tradition fades into its senility, that requiring a graduate degree in |
a86116f3-564b-499f-833a-7b266cf5577d | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Principles for Alignment/Agency Projects
"John, what do you think of this idea for an alignment research project?"
I get questions like that fairly regularly. How do I go about answering? What principles guide my evaluation? Not all of my intuitions for what makes a project valuable can easily be made legible, but I think the principles in this post capture about 80% of the value.
Tackle the [Hamming Problems](https://www.lesswrong.com/posts/Thwfy4gNFx9kHgvov/research-hamming-questions), Don't Avoid Them
-----------------------------------------------------------------------------------------------------------------------------
Far and away the most common failure mode among self-identifying alignment researchers is to look for Clever Ways To Avoid Doing Hard Things (or Clever Reasons To Ignore The Hard Things), rather than just Directly Tackling The Hard Things.
The most common pattern along these lines is to propose outsourcing the Hard Parts to some future AI, and "just" try to align that AI without understanding the Hard Parts of alignment ourselves. The next most common pattern is to argue that, since Hard Parts are Hard, we definitely don't have enough time to solve them and should therefore pretend that we're going to solve alignment while ignoring them. Third most common is to go into field building, in hopes of getting someone else to solve the Hard Parts. (Admittedly these are not the most charitable summaries.)
There is value in seeing how dumb ideas fail. Most of that value is figuring out what the Hard Parts of the problem are - the taut constraints which we run into over and over again, which we have no idea how to solve. (If it seems pretty solvable, it's probably not a Hard Part.) Once you can recognize the Hard Parts well enough to try to avoid them, you're already past the point where trying dumb ideas has much value.
On a sufficiently new problem, there is also value in checking dumb ideas just in case the problem happens to be easy. Alignment is already past that point; it's not easy.
You can save yourself several years of time and effort by actively trying to identify the Hard Parts and focus on them, rather than avoid them. Otherwise, you'll end up burning several years on ideas which don't actually leave the field better off. That's one of the big problems with trying to circumvent the Hard Parts: when the circumvention inevitably fails, we are still no closer to solving the Hard Parts. (It has been [observed](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization) both that alignment researchers mostly seem to not be tackling the Hard Parts, and that alignment research mostly doesn't seem to build on itself; I claim that the latter is a result of the former.)
Mostly, I think the hard parts are things like "understand agency in general better" and "understand what's going on inside the magic black boxes". If your response to such things is "sounds hard, man", then you have successfully identified (some of) the Hard Parts.
Have An Intuitive Story Of What We're Looking For
-------------------------------------------------
One project going right now is looking at how modularity in trained systems corresponds to broad peaks in parameter space. Intuitive story for that: we have two "modules", each with lots of stuff going on inside, but only a relatively-low-dimensional interface between them. Because each module has lots of stuff going on inside, but only a low-dimensional interface, there should be many ways to change around the insides of a module while keeping the externally-visible behavior the same. Because such changes don't change behavior, they don't change system performance. So, we expect that modularity implies lots of degrees-of-freedom in parameter space, i.e. broad peaks.
This story is way too abstract to be able to look for immediately in a trained net. How do we operationalize "modules", and find them? How do we operationalize "changes in a module", especially since parameter space may not line up very neatly with functional modules? But that's fine; the story can be pretty abstract.
The point of the intuitive story is to steer our search. Without it, we risk blind empiricism: just cataloguing patterns without building general models/theory/understanding for what's going on. In that mode, we can easily lose track of the big picture goal and end up cataloguing lots of useless stuff. An intuitive story gives us big-picture direction, and something to aim for. Even if it turns out to be wrong!
Operationalize
--------------
It's relatively easy to make vague/abstract intuitive arguments. Most of the value and challenge is in finding the right operationalizations of the vague concepts involved in those arguments, such that the argument is robustly correct and useful. Because it's where most of the value and most of the challenge is, finding the right operationalization should typically be *the* central focus of a project.
My [abstraction](https://www.lesswrong.com/s/ehnG4mseKF6xALmQy) [work](https://www.lesswrong.com/posts/dNzhdiFE398KcGDc9/testing-the-natural-abstraction-hypothesis-project-update) is a good example here. I started with some examples of abstraction and an intuitive story about throwing away information while keeping info relevant "far away". Then, the bulk of the work was to operationalize that idea in a way which matched all the intuitive examples, and made the intuitive stories provable.
Derive the Ontology, Don't Assume It
------------------------------------
In ML interpretability, some methods look at the computation graph of the net. Others look at orthogonal directions in activation space. Others look at low-rank decompositions of the weight matrices. These are all "different ontologies" for interpretation. Methods which look at one of these ontologies will typically miss structure in the others; e.g. if run a graph clustering algorithm on the computation graph I probably won't pick up interpretable concepts embedded in directions in activation space.
What we'd really like is to avoid assuming an ontology, and rather *discover/derive* the ontology itself as part of our project. For instance, we could run an experiment where we change one human-interpretable "thing" in the environment, and then look at how that changes the trained net; that would let us *discover* how the concept is embedded rather than assume it from the start (credit to Chu for this suggestion). Another approach is to start out with some intuitive story for *why* a particular ontology is favored - e.g. if we have a graph with local connectivity, then maybe the [Telephone Theorem](https://www.lesswrong.com/posts/jJf4FrfiQdDGg7uco/the-telephone-theorem-information-at-a-distance-is-mediated) kicks in. Such an argument should (a) allow us to *rule out* interactions which circumvent the favored ontology, and (b) be testable in its own right, e.g. for the Telephone Theorem we can (in principle) check the convergence of mutual information to a limit.
Open The Black Box
------------------
Don’t just run a black-box experiment on a network, or try to prove a purely behavioral theorem. We want to talk about internal structure.
Partly, opening the black box is about tackling the Hard Parts rather than avoiding them. Not opening the black box is a red flag; it's usually a sign of avoiding the Hard Parts.
Partly, opening the black box is about getting a very rich data channel. When we just work with a black box, we get relatively sparse data about what's going on. When we open the black box, we can in-principle directly observe every gear and directly check what's going on.
Relative Importance of These Principles
---------------------------------------
Tackle The Hamming Problems is probably the advice which is most important to follow for marginal researchers right now, but mostly I expect people who aren't already convinced of it will need to learn it the hard way. (I certainly had to learn it the hard way, though I did that before starting to work on alignment.) Open the Black Box follows pretty naturally once you're leaning in to the Hard Parts.
Once you're past that stumbling block, I think the most important principles are Derive the Ontology and Operationalize. These two are important for opposing types of people. Some people tend to stay too abstract and avoid committing to an ontology, but never operationalize and therefore miss out on the main value-add. Other people operationalize prematurely, adopting [ad-hoc operationalizations](https://www.lesswrong.com/posts/GhFoAxG49RXFzze5Y/what-s-so-bad-about-ad-hoc-mathematical-definitions), and Deriving the Ontology pretty strongly dicourages that.
Have an Intuitive Story is especially helpful for people who tend to get lost in the weeds and go nowhere. Make sure you have an intuitive story, and *use* that story to guide everything else. |
1516699b-3490-4752-b7d7-0be07978c700 | trentmkelly/LessWrong-43k | LessWrong | The Healing Code of Joan
The Healing Code of Joan
This story was originally published on my website
My name is Joan Lekuta, and a decade ago, I was a biology major. In 2023, the GPT thing, otherwise known as Artificial Intelligence (AI), made its first appearance at my university, and I thought it was ridiculous and refused to use it. However, as my graduation drew near, things took a strange turn. The GPT became increasingly sophisticated, effortlessly outclassing my abilities in virtually every aspect.
My heart sank when I heard that the GPT thing had aced the AP Biology Exam with a perfect score of 5. I had taken the same exam and only managed to get a 4. And don't get me started on my SAT score. Nevertheless, I graduated in 2023, burnt out from my biology classes, and yearned to take a year off to make money and secretly contemplate whether this AI thing would ta
You see, ever since I was a little girl, I had dreamt of becoming a doctor, and now that I look back, my immigrant parents influenced the fuck out of my decision. Their aspirations propelled me toward becoming pre-med. But you know, the "immigrant mentality" of doing better than our parents. I was born here, and shit and I grew up hearing about my parents' struggles, so I felt like I needed to go to college and become a doctor for my parents. If you don't have immigrant parents, this will sound weird as fuck to you, but to be honest, I truly felt some sort of obligation.
Although I didn't like pre-med, I persevered through physics, genetics, and biochemistry, not to mention organic chemistry, which tossed me around like a dirty towel.
After graduation, I was still unsure of my path. I felt a massive relief about being "done," but I was in the same spot in many ways because I did not know what to do. So yeah, I graduated and did not have the energy to go to med school immediately, so I took a gap year. During that year, I secured a job at a hospital, earning some cash and helping my parents with my siblings while preparing |
9bc34c39-0856-44ee-8fd8-522d6577084f | trentmkelly/LessWrong-43k | LessWrong | TPP is about as good as it is supposed to be
Tyler Cohen stakes out quite a sensible position on whether or not we should approve the trade deal called TPP:
> What would convince me to oppose TPP if is somebody did a study showing the following: when you use a better trade model, use better data, and/or add in the neglected costs of TPP (which are real), those gains go away and indeed become negative.
>
> Then I would change my mind, or at least weigh those economic costs against possibly favorable geopolitical benefits from the deal.
>
> What does not convince me is when people simply list various costs and outrages associated with TPP.
This is the full utilitarian, rationalist-style, ‘Shut up and multiply’ response to the question. This is a strong argument. The deal has already been negotiated, so insisting on modifications at this point is akin to turning down the deal; it is a de facto yes or no offer. The geopolitical benefits are clearly net positive to the United States, and all the economic forecasts are large and positive because free trade is somewhere between compound interest and sliced bread on the awesomeness index. The arguments against the deal that I have seen all take the form of pointing out something in the deal that may be less than awesome, but nothing that can compete with an estimated trillion dollars of economic surplus, so what is the problem?
In some cases, not being able to follow the basic case was exactly the problem. Many of the opponents of the deal are indeed suffering from simple ignorance or simple scope insensitivity. Either they do not realize that trade is the reason we have nice things, and need to understand that before participating in any conversations surrounding trade agreements, or they do not understand that this bigger consideration is so massively big that it makes everything else involved (except possibly the geopolitical impact of the deal, which is a direct consequence of trade being that awesome) seem like chump change. Hopefully some of those people wi |
8b936939-1b23-4a5d-887f-fbbefced3ff3 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Can an LLM identify ring-composition in a literary text? [ChatGPT]
*Cross-posted* [*from New Savanna*](https://new-savanna.blogspot.com/2023/09/can-llm-identify-ring-composition-in.html)*.*
Tentatively, very tentatively, yes. ChatGPT has done it, once. I believe a more powerful engine could do more.
### **But: What is ring-composition?**
Quickly and briefly, ring-composition or ring-form is a text with a form like this: ***A, B, C...X...C’, B’, A’***. It’s a text with a central section and the other sections are arrayed symmetrically around. ChatGPT will say a bit more about that later.
Why am I interested in ring-composition? In principle I’m interested in any and all literary form. In practice it is easier to look for a specific kind of formal structure. While some interesting and important texts exhibit ring-composition (e.g. “Kubla Khan,” Heart of Darkness, Hamlet) I have no idea how prevalent it is, nor does anyone else. I suspect it’s a minority form, perhaps even a small minority.
I have come to think of literary study as something like biology: it all starts with description. Biologists have spent centuries seeking out and describing life forms. Some have been described in great detail, others only enough to place them in the taxonomy, and we have all degrees of description in between. Well, literature is like biology in that we have a myriad of objects for study, each with unique features. But literary scholars haven’t undertaken the work of describing our texts. Oh, there’s some work, but little as deep and rich as what biologists have done in their domain – I give links below to material that justifies that claim.
I am particularly interested in form. There’s an inerradicably subjective element to meaning, but form, I believe, can be objectively described. I’ve done enough descriptive work over the years to know that it is painstaking, difficult, and tedious, but not rocket science. It would be useful if we could present our texts to an LLM and have it undertake the difficult tedium of descibing texts. Maybe even have two or three different LLMs describe particular texts independently. Then, just as biologists check one another’s descriptive work, so human critics can vet the work of LLMs. Descriptions of important texts would be examined in great detail, while muc of the work that LLMs have done on minor texts, those texts that no literary critic ever works on at all (like all those 19th century novels lurking in basements in library shelves) may never be checked at all. But it’s there if anyone’s interested.
But could LLMs do the work? We don’t know. In the rest of this post, however, I present some small indication that, yes, they might well be able to do it.
> **Note**: The material in the previous paragraphs deserves more discussion than I’ve given it here. But this is not the place for that. I’ve got many posts on these matters under these labels: [description](https://new-savanna.blogspot.com/search/label/description), [form](https://new-savanna.blogspot.com/search/label/form), [text](https://new-savanna.blogspot.com/search/label/text), and [ring-form](https://new-savanna.blogspot.com/search/label/ring-form). If you want one post to read, try this one: [The problematics of text and form and the transmutation zone between symbol and thought [once more into the methodological breach]](https://new-savanna.blogspot.com/2020/08/the-problematics-of-text-and-form-and.html). My Academia.edu page has a section for [Ring Composition](https://independentresearcher.academia.edu/BillBenzon/Ring-Composition) and one for [Description](https://independentresearcher.academia.edu/BillBenzon/Description).
>
>
### **First time out: What’s ring-composition? [December 1, 2022]**
The first time I had access to ChatGPT I asked it about ring-composition. It’s response was underwhelming.
My prompts are in bold while ChatGPT’s replies are in plain face.
> **What is ring-composition?**
>
> Ring-composition is a type of literary structure in which the end of a text is linked back to the beginning in some meaningful way. It is a closed circle of ideas, images, or words that is created by the repetition of certain elements. Ring-composition is sometimes used to emphasize a theme or to create symmetry in a text.
>
> **What are some texts that use ring-composition?**
>
> 1. The Bible 2. Beowulf 3. Paradise Lost by John Milton 4. The Odyssey by Homer 5. The Iliad by Homer 6. The Canterbury Tales by Geoffrey Chaucer 7. The Divine Comedy by Dante Alighieri 8. Moby-Dick by Herman Melville 9. The Aeneid by Virgil
>
> **Do any movies use ring-composition?**
>
> Yes, some movies have been known to use ring-composition. The movie Memento (2000) is a notable example of a movie that uses this structure. The film begins and ends with the same scene, and certain elements are repeated throughout the movie to create a circular narrative.
>
> **Does Shakespeare's Hamlet use ring-composition?**
>
> Yes, some scholars have argued that Shakespeare's Hamlet does make use of ring-composition. The play begins and ends with the same scene, and certain elements such as characters, motifs, and words are repeated throughout the play.
>
>
The definition is OK, but not at all specific. It would not be a very useful guide. The list of texts seems OK, though Moby Dick seems iffy to me. Critics working on 19th century novels don’t go looking for ring-composition. The answers to the last two questions are too vague to be at all promising. On the whole I had no reason to believe that ChatGPT had a useful idea about ring-composition.
Then, without any specific expectation – though I wasn’t expecting much, I decided to give ChatGPT another try. Not the new and improved ChatCPT Plus, which is hooked up to GPT-4, but just the plain old Chatster. It surprised me.
### **Much better: What’s ring-composition? [August 31, 2023]**
It’s running the August 3 version.
> **Explain what ring-composition is in literary structure.**
>
> Ring composition, also known as chiastic structure or circular composition, is a literary and rhetorical technique where a story or text is structured in a circular or cyclical manner, often beginning and ending with similar or related elements. It involves organizing the narrative or content in such a way that there is a central theme, idea, or event around which the narrative revolves, with related elements or themes mirroring each other in a symmetrical or balanced fashion.
>
> Here are some key characteristics and examples of ring composition:
>
> 1. Circular Structure: Ring composition typically presents the narrative in a circular or closed form. This means that the story starts and ends with similar or related elements, creating a sense of closure and unity.
>
> 2. Central Theme: At the center of the composition, there is a central theme, event, or idea that serves as the focal point of the narrative. This central element is surrounded by other elements that are related to or mirror it in some way.
>
> 3. Mirroring and Parallelism: Elements within the narrative often mirror or parallel each other. This can involve characters, events, motifs, or themes that are presented in a way that draws attention to their similarities or contrasts.
>
> 4. Symmetry: Ring compositions often strive for balance and symmetry, with corresponding elements placed in a deliberate and structured manner. This symmetry can enhance the overall aesthetic and thematic impact of the work.
>
> 5. Emphasis on the Center: The central element of the composition is usually the most significant and emphasized part of the story. It often represents a turning point or a key revelation in the narrative.
>
> 6. Multiple Layers: Ring compositions may have multiple layers of mirroring and parallelism. This can create a complex and intricate structure, with each layer contributing to the overall meaning of the text.
>
> 7. Common in Oral Traditions: Ring composition has been a common feature in oral traditions and epic poetry. For example, the structure of Homer's "Iliad" and "Odyssey" exhibits elements of ring composition, with events and themes mirroring each other throughout the narrative.
>
> 8. Rhetorical Device: Beyond storytelling, ring composition is also used as a rhetorical device in speeches and essays to reinforce key points and create a sense of closure and unity in the text.
>
> Ring composition serves various purposes in literature, such as reinforcing themes, creating narrative symmetry, and engaging the reader's or listener's attention by highlighting patterns and connections within the text. It can be a powerful tool for conveying complex ideas and structuring narratives in a memorable and meaningful way.
>
>
That’s far more detailed that it gave me back in December. The question is, can it analyze and actual text and identify a ring-form? I decided to have it work on a specific example, the story of St. George and the Dragon, which [I had already analyzed](https://new-savanna.blogspot.com/2017/11/st-george-and-dragon-as-ring-composition.html). I’m using a [text from Wikipedia](https://en.wikipedia.org/wiki/Saint_George_and_the_Dragon#Golden_Legend).
> **Does the following story exhibit ring-composition? Why or why not?**
>
> **Silene in Libya was plagued by a venom-spewing dragon dwelling in a nearby pond, poisoning the countryside. To prevent it from affecting the city itself, the people offered it two sheep daily, then a man and a sheep, and finally their children and youths, chosen by lottery. One time the lot fell on the king's daughter. The king offered all his gold and silver to have his daughter spared, but the people refused. The daughter was sent out to the lake, dressed as a bride, to be fed to the dragon.**
>
> **Saint George by chance arrived at the spot. The princess tried to send him away, but he vowed to remain. The dragon emerged from the pond while they were conversing. Saint George made the Sign of the Cross and charged it on horseback, seriously wounding it with his lance. He then called to the princess to throw him her girdle (zona), and he put it around the dragon's neck. Wherever she walked, the dragon followed the girl like a "meek beast" on a leash.**
>
> **The princess and Saint George led the dragon back to the city of Silene, where it terrified the populace. Saint George offered to kill the dragon if they consented to become Christians and be baptized. Fifteen thousand men including the king of Silene converted to Christianity. George then killed the dragon, beheading it with his sword, and the body was carted out of the city on four ox-carts. The king built a church to the Blessed Virgin Mary and Saint George on the site where the dragon died and a spring flowed from its altar with water that cured all disease.**
>
> The story you provided does not exhibit a strict ring composition. Ring composition typically involves a more circular or cyclical structure where the narrative begins and ends with related elements, creating a sense of closure and symmetry. In this story, there is a linear progression of events from the initial problem of the dragon plaguing the city of Silene to the resolution of the problem with the conversion of the city's populace and the killing of the dragon by Saint George. The story does not return to its initial state or mirror its beginning in a way that is characteristic of ring composition.
>
> While the story does contain elements of repetition and parallelism (e.g., the escalating sacrifices to the dragon and the eventual conversion of the people to Christianity), these elements do not form a closed circular structure. Instead, they contribute to the overall narrative arc by showing the progression of events and character development.
>
> Ring composition is a specific structural technique, and while elements of repetition and parallelism can be found in many narratives, it does not necessarily mean that the story as a whole exhibits ring composition. This story appears to follow a more linear narrative structure with a clear beginning, middle, and end.
>
>
Since I know that it does exhibit ring-composition I immediately checked the thumbs-down button. As soon as I had done that ChatGPT offered another answer, which bewildered me as I hadn’t seen that behavior before. Here’s that second answer:
> The story you provided does exhibit elements of ring composition to some extent. Here's an analysis of how the story demonstrates ring composition:
>
> 1. Circular Structure: The story begins with the description of a dragon plaguing the city of Silene in Libya and ends with the construction of a church and a spring flowing from the altar with miraculous water. The narrative comes full circle from the problem posed by the dragon to its resolution through the intervention of Saint George and the subsequent conversion of the city.
>
> 2. Central Theme: The central theme of the story revolves around the dragon, its threat to the city, and the salvation brought about by Saint George's actions. The dragon is at the heart of the narrative.
>
> 3. Mirroring and Parallelism: The story exhibits mirroring and parallelism in various ways, such as the gradual escalation of sacrifices to the dragon (from sheep to humans) and the corresponding escalation of Saint George's intervention (from his arrival to the dragon's defeat). The story also parallels the princess's role in luring the dragon out and Saint George's role in subduing it.
>
> 4. Symmetry: The story is structured with a sense of symmetry. It begins with the problem of the dragon, moves through the sacrifices, and ends with the dragon's defeat and the city's conversion to Christianity. The structure is balanced, with each element playing a part in resolving the central conflict.
>
> 5. Emphasis on the Center: The central point of the story is the confrontation between Saint George and the dragon. This is the climax of the narrative and represents a turning point in the events.
>
> 6. Multiple Layers: While the story primarily revolves around the dragon and Saint George, it also incorporates elements of religious conversion and the construction of a church. These additional layers contribute to the overall meaning of the text.
>
> Overall, the story exhibits some characteristics of ring composition, particularly in its circular structure, central theme, mirroring of events, and symmetrical organization. However, it's worth noting that ring composition can vary in complexity, and not all elements need to be present for a text to be considered an example of ring composition. In this case, the story's structure does suggest a deliberate organization to convey its message and meaning.
>
>
That's better. I decided to push it a bit further:
> **Here’s the same story, but I’ve broken it into shorter pieces and numbered them to indicate the symmetry. Thus the first piece is numbered 1 and corresponds to the last piece, which is numbered 1’ (one prime). And so with 2 (second section) and 2’ (next to last section). I’ve used omega (Ω) to indicate the middle section.**
>
> **1. Silene in Libya was plagued by a venom-spewing dragon dwelling in a nearby pond, poisoning the countryside.**
>
> **2. To prevent it from affecting the city itself, the people offered it two sheep daily, then a man and a sheep, and finally their children and youths, chosen by lottery.**
>
> **3. One time the lot fell on the king's daughter. The king offered all his gold and silver to have his daughter spared, but the people refused.**
>
> **4. The daughter was sent out to the lake, dressed as a bride, to be fed to the dragon.**
>
> **5. Saint George by chance arrived at the spot. The princess tried to send him away, but he vowed to remain.**
>
> **Ω. The dragon emerged from the pond while they were conversing. Saint George made the Sign of the Cross and charged it on horseback, seriously wounding it with his lance.**
>
> **5’. He then called to the princess to throw him her girdle (zona), and he put it around the dragon's neck. Wherever she walked, the dragon followed the girl like a "meek beast" on a leash.**
>
> **4’. The princess and Saint George led the dragon back to the city of Silene, where it terrified the populace.**
>
> **3’. Saint George offered to kill the dragon if they consented to become Christians and be baptized. Fifteen thousand men including the king of Silene converted to Christianity.**
>
> **2’. George then killed the dragon, beheading it with his sword, and the body was carted out of the city on four ox-carts.**
>
> **1’. The king built a church to the Blessed Virgin Mary and Saint George on the site where the dragon died and a spring flowed from its altar with water that cured all disease.**
>
> The structure you've presented indeed demonstrates a clear example of ring composition. The story is organized in a symmetrical fashion, with the central section (Ω) serving as the pivotal point around which the narrative revolves. Here's how the symmetry works:
>
> 1. The story begins with the problem of the dragon plaguing Silene in Libya (1) and ends with the resolution of this problem through the construction of a church and a miraculous spring (1’).
>
> 2. The escalating sacrifices to the dragon, starting with two sheep and culminating in the sacrifice of a man and a sheep (2), are mirrored by the escalating actions of Saint George, starting with his arrival at the spot and culminating in the dragon's defeat (2’).
>
> 3. The lot falling on the king's daughter (3) corresponds to Saint George's offer to kill the dragon in exchange for the city's conversion to Christianity (3’).
>
> 4. The princess being sent to be fed to the dragon (4) parallels the princess and Saint George leading the dragon back to the city (4’).
>
> 5. Saint George's arrival and his vow to remain (5) are mirrored by his act of calling the princess to give him her girdle and using it to control the dragon (5’).
>
> The central section (Ω) contains the climax of the story, where Saint George confronts the dragon, seriously wounds it, and ultimately tames it. This is the turning point of the narrative.
>
> Overall, this structure reinforces the central theme of the story while creating a sense of balance and closure. It allows the reader to appreciate the parallels and contrasts within the narrative, making it an effective example of ring composition.
>
>
Chatster’s explanation of the symmetry is correct. Given that I had presented it with the breakdown – I have no idea whether or not it could have done that itself – producing that explanation would not have been difficult, certainly not for a human. But I find it hard to deny that what it did required reasoning, not deep reasoning, but reasoning nonetheless.
### **What's next?**
Surely a more powerful engine, one fine-tuned in the analysis of literary form, could do better. What about film? Films exhibit ring-composition as well. Some I’ve analyzed: [*Pulp-Fiction*](https://new-savanna.blogspot.com/2015/04/pulp-fiction-as-ring-from.html), the 1954 [*Gojira*](https://www.academia.edu/7905287/The_Gojira_Papers), the 1933 [*King Kong*](https://new-savanna.blogspot.com/2017/10/beauty-and-beast-king-kong-as-ring.html), Miyazaki’s [*The Wind Rises*](https://www.academia.edu/27132290/Miyazakis_Metaphysics_Some_Observations_on_The_Wind_Rises), and episodes of Disney’s [*Fantasia*](https://www.academia.edu/1128493/Walt_Disney_s_Fantasia_The_Cosmos_and_the_Mind_in_Two_Hours).
I would like, however, to end on a cautionary note. The descriptive and analytic task I gave ChatGPT was a relatively easy one. I gave it a particular text, a short and uncomplicated one, and asked it simply to identify whether or not it exhibited a particular formal property. Given that it had already given a reasonable account of that property, it had some of idea of what to look for.
Still, it got things wrong. Until I told it that it was wrong. Then it gave me a more or less correct answer. Whey didn’t it give me that answer first?
But there is a deeper problem. When I am working on texts, I do not know whether or not a given text is a ring composition. There are no identifying traits that I have been able to see. You begin the analysis and see where it goes. Maybe ring-composition will emerge and maybe it won’t. What I want from an LLM that’s working on formal analysis is that it be able to work on any text and find the formal features that are there. That’s an open-ended task. While ChatGPT’s performance with St. George and the Dragon is significant, there is a long way from that to the kind of assistance I am looking for in a LLM. |
85d38828-d9e7-41c4-896c-a460764a1095 | StampyAI/alignment-research-dataset/special_docs | Other | A survey of research questions for robust and beneficial AI
A survey of research questions for robust and benecial AI
1 Introduction
Articial intelligence (AI) research has explored a variety of problems and approaches since its inception,
but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent
agents |systems that perceive and act in some environment. In this context, the criterion for intelligence is
related to statistical and economic notions of rationality|colloquially, the ability to make good decisions,
plans, or inferences. The adoption of probabilistic representations and statistical learning methods has led to
a large degree of integration and cross-fertilization between AI, machine learning, statistics, control theory,
neuroscience, and other elds. The establishment of shared theoretical frameworks, combined with the
availability of data and processing power, has yielded remarkable successes in various component tasks such
as speech recognition, image classication, autonomous vehicles, machine translation, legged locomotion,
and question-answering systems.
As capabilities in these areas and others cross the threshold from laboratory research to economically
valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth
large sums of money, prompting greater investments in research. There is now a broad consensus that AI
research is progressing steadily, and that its impact on society is likely to increase. The potential benets
are huge, since everything that civilization has to oer is a product of human intelligence; we cannot predict
what we might achieve when this intelligence is magnied by the tools AI may provide, but the eradication of
disease and poverty are not unfathomable. Because of the great potential of AI, it is valuable to investigate
how to reap its benets while avoiding potential pitfalls.
The progress in AI research makes it timely to focus research not only on making AI more capable, but
also on maximizing the societal benet of AI. Such considerations motivated the AAAI 2008{09 Presidential
Panel on Long-Term AI Futures [61] and other projects and community eorts on AI impacts. These
constitute a signicant expansion of the eld of AI itself, which up to now has focused largely on techniques
that are neutral with respect to purpose. The present document can be viewed as a natural continuation
of these eorts, focusing on identifying research directions that can help maximize the societal benet of
AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from
economics, law, and philosophy to computer security, formal methods and, of course, various branches of AI
itself. The focus is on delivering AI that is benecial to society and robust in the sense that the benets are
guaranteed: our AI systems must do what we want them to do.
This document is an attempt to lay out some of the research topics that we think will be most useful to
do now in order to shape the future impact of AI. We will surely nd that some questions are less useful
or timely than others, and some important ones are missing. We hope this guide will be a helpful source
of suggestions, but also that potential grantees won't be discouraged from approaching us with similarly
relevant topics we didn't think of. We will try to publish future versions that are up to date with progress
in the eld.
We are very grateful to the many people who have contributed to this document, in particular Daniel
Dewey, Stuart Russell, and Max Tegmark for their invaluable work on the research priorities document, Luke
Muehlhauser for his list of potential strategic research projects, Nate Soares and MIRI for their technical
agenda, and the MIRIxOxford research workshop analyzing and expanding on the MIRI technical agenda.
Many people at FLI have contributed lists of additional research projects and directions, including Jim
Babcock, Steve Greidinger, J anos Kram ar, Richard Mallah and Max Tegmark, and many more have provided
1
helpful feedback and suggestions, including Anthony Aguirre, Erik Brynjolfsson, Meia Chita-Tegmark, Daniel
Dewey, Owain Evans, Eric Gastfriend, Ales Flidr, Katja Grace, Evan Hefner, Viktoriya Krakovna, Colleen
Messing, Howard Messing, Chase Moores, Luke Muehlhauser, Se an O hEigeartaigh, Tristan Plumb, Stuart
Russell, Murray Shanahan, Nate Soares, Marin Soljacic, Michele Reilly, Anders Sandberg, John Sturm, Jaan
Tallinn, Jacob Trefethen, Nick Ryder, Michael Vassar, and Alexander Wissner-Gross, Eliezer Yudkowsky.
Thanks also to Nick Bostrom for writing the excellent and seminal book \Superintelligence". Like the rest
of the document, this list is at an early stage - we're looking forward to receiving your constructive feedback
and comments!
2
Contents
1 Introduction 1
2 Short-term research priorities 4
2.1 Optimizing AI's Economic Impact . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.1.1 Measuring and Forecasting Economic Impact of Automation and AI . . . . . . . . . . 4
2.1.2 Policy research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1.3 Managing potential adverse eects of automation and AI . . . . . . . . . . . . . . . . 5
2.2 Law and Ethics Research . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Computer Science Research for Robust AI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3.1 Verication (\Did I build the system right?") . . . . . . . . . . . . . . . . . . . . . . . 7
2.3.2 Validity (\Did I build the right system?") . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.3.3 Security (\How can I prevent unauthorized access?") . . . . . . . . . . . . . . . . . . . 9
2.3.4 Control (\OK, I built the system wrong, can I x it?") . . . . . . . . . . . . . . . . . . 9
3 Long-term research priorities 11
3.1 Some perspectives on the long term . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Verication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.3 Validity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.3.1 Ethics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
3.3.2 Ensuring goal stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.4 Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.4.1 Software containment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.4.2 Psychological containment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.4.3 Hardware containment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.4.4 Tripwires: Detection & Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.4.5 Detecting intent to deceive . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.5 Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.5.1 Corrigibility and Domesticity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.5.2 Safe and Unsafe Agent Architectures . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4 Forecasting 23
4.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.3 Forecasting AI progress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.4 Forecasting AI takeo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.5 Brain emulations (uploads) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
5 Policy and Collaboration 27
3
2 Short-term research priorities
2.1 Optimizing AI's Economic Impact
The successes of industrial applications of AI, from manufacturing to information services, demonstrate a
growing impact on the economy, although there is disagreement about the exact nature of this impact and on
how to distinguish between the eects of AI and those of other information technologies. Many economists
and computer scientists agree that there is valuable research to be done on how to maximize the economic
benets of AI while mitigating adverse eects, which could include increased inequality and unemployment
[72, 21, 39, 40, 107, 84, 69]. Such considerations motivate a range of research directions, spanning areas from
economics to psychology. Below are a few examples that should by no means be interpreted as an exhaustive
list.
2.1.1 Measuring and Forecasting Economic Impact of Automation and AI
When and in what order should we expect various jobs to become automated [39]? How will this aect the
employment and wages of various professions, including less skilled workers, creatives, and dierent kinds of
information workers? Some have argued that AI is likely to greatly increase the overall wealth of humanity
as a whole [21]. However, increased automation may push income distribution further towards a power law
[22], and the resulting disparity may fall disproportionately along lines of race, class, and gender; research
anticipating the economic and societal impact of such disparity could be useful.
1. It is possible that economic measures such as real GDP per capita do not accurately capture the benets
and detriments of heavily AI-and-automation-based economies, making these metrics unsuitable for
policy purposes [72]. Research on improved metrics could be useful for decision-making.
2. What has been the historical record on jobs being displaced by automation? What have the average
rate and distribution of displacement been - has it been clustered in time, industry, and geography?
How long before displaced workers found new jobs? Did displacement contribute to inequality?
3. Is there anything dierent about the advancement of articial intelligence happening now that would
lead us to expect a change from our centuries-long historical record on jobs being displaced by automa-
tion?
4. What factors make an industry more amenable or less amenable to automation? Machines histori-
cally performed rote mass-production, but we've been expanding their capabilities with advances in
information processing and articial intelligence.
5. Which markets are most susceptible to disruption as automation advances? Signicant parts of the
economy { including nance, insurance, actuarial, and many consumer markets { could experience
upheaval through use of AI techniques to learn, model, and predict agent actions. These markets might
be identied by a combination of high complexity and high rewards for navigating that complexity [69].
Are there other features?
6. Some jobs could be done by machines in principle but require a particular advance in articial intelli-
gence before it becomes feasible. What are some of those prerequisite advances?
7. Based on the above factors, how far in advance can we predict that an industry is likely to be largely
automated? Is this something we can predict and prepare for 20 years ahead? 10 years? 6 months?
Not at all?
4
2.1.2 Policy research
What is the space of policies that could/should be considered for helping AI-assisted societies
ourish?
For example, Brynjolfsson and McAfee [21] explore various policies for incentivizing development of labor-
intensive sectors and for using AI-generated wealth to support underemployed populations. What outcomes
are these policies likely to lead to? What are the key uncertainties in these outcome predictions, and what
research would help reduce these uncertainties?
1. What factors contribute to a `winner-take-all' dynamic of software-based industries? The low cost of
reproduction and increasingly global nature of the information economy, for example, make it easier
to concentrate wealth. What other factors might we study and how could they be quantied?
2. Conversely, what factors counteract the `winner-take-all' dynamic? For example, the lowered cost of
entering a market might make it easier for new startups to compete with established products.
3. How well does the neoclassical model of anti-trust regulation apply to an economy increasingly dom-
inated by software and AI-assistance? Will we need to develop new frameworks of regulation, or will
our current laws adapt well enough?
4. Will the economy undergo de
ation as software becomes a larger share of productivity? The potential
for a relatively rapid, large-scale productivity boost from software could increase the purchasing power
of the dollar. What are the closest examples of this occurring and do our current forecasts properly
incorporate it?
5. If the economy does experience de
ation, could governments eectively take advantage of it by spending
more on projects that reduce inequality?
2.1.3 Managing potential adverse eects of automation and AI
If automation and AI-assistance does lead to lower employment, it will be important to evaluate the societal
structures that determine whether such populations
ourish or succumb to depression and self-destructive
behavior. History provides many examples of subpopulations not needing to work for economic security,
ranging from aristocrats in antiquity to many present-day citizens of Qatar. What societal structures and
other factors determine whether such populations
ourish? Unemployment is not the same as leisure, and
there are deep links between unemployment and unhappiness, self-doubt, and isolation [53, 28]; understand-
ing what policies and norms can break these links could signicantly improve the median quality of life.
1. What problems might arise in low employment societies? How strong are the correlations between
employment level and rates of depression, crime, or drug abuse, for example?
2. Are there bright spots within the current correlations? What societies suer less or even prosper in
lower employment?
3. What cultural elements play into how low employment impacts dierent societies'
ourishing? For
example, Bellezza, Keinan, and Paharia[12] found that conspicuous hours worked was positively cor-
related with perceived status in the United States, but negatively correlated in Europe. What other
variables might be in play, and can they be used to predict how dierent cultures/subcultures will be
aected by low employment?
4. Are there historical analogues of societies in which groups have not had to work for economic security?
(e.g. Traditional aristocracy, children of wealthy parents, etc.) What activities and mindsets have led
them to consider their life happy or meaningful? Do these factors apply to the current development of
AI-assisted automation?
5
2.2 Law and Ethics Research
The development of systems that embody signicant amounts of intelligence and autonomy leads to important
legal and ethical questions whose answers impact both producers and consumers of AI technology. These
questions span law, public policy, professional ethics, and philosophical ethics, and will require expertise
from computer scientists, legal experts, political scientists, and ethicists. For example:
1.Liability and law for autonomous vehicles: If self-driving cars cut the roughly 40,000 annual US
trac fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits.[66]
In what legal framework can the safety benets of autonomous vehicles such as drone aircraft and self-
driving cars best be realized [127]? Should legal questions about AI be handled by existing (software-
and internet-focused) \cyberlaw", or should they be treated separately [23]? In both military and
commercial applications, governments will need to decide how best to bring the relevant expertise to
bear; for example, a panel or committee of professionals and academics could be created, and Calo has
proposed the creation of a Federal Robotics Commission [24].
2.Machine ethics: How should an autonomous vehicle trade o, say, a small probability of injury
to a human against the near-certainty of a large material cost? How should lawyers, ethicists, and
policymakers engage the public on these issues? Should such trade-os be the subject of national
standards?
3.Autonomous weapons: Can lethal autonomous weapons be made to comply with humanitarian law
[27]? If, as some organizations have suggested, autonomous weapons should be banned [34, 125], is
it possible to develop a precise denition of autonomy for this purpose, and can such a ban practi-
cally be enforced? If it is permissible or legal to use lethal autonomous weapons, how should these
weapons be integrated into the existing command-and-control structure so that responsibility and li-
ability be distributed, what technical realities and forecasts should inform these questions, and how
should \meaningful human control" over weapons be dened [99, 98, 3]? Are autonomous weapons
likely to reduce political aversion to con
ict, or perhaps result in \accidental" battles or wars [10]?
Finally, how can transparency and public discourse best be encouraged on these issues?
4.Privacy: How should the ability of AI systems to interpret the data obtained from surveillance
cameras, phone lines, emails, etc., interact with the right to privacy? How will privacy risks interact
with cybersecurity and cyberwarfare [109]? Our ability to take full advantage of the synergy between
AI and big data will depend in part on our ability to manage and preserve privacy [68, 1].
5.Professional ethics: What role should computer scientists play in the law and ethics of AI de-
velopment and use? Past and current projects to explore these questions include the AAAI 2008{09
Presidential Panel on Long-Term AI Futures [61], the EPSRC Principles of Robotics [13], and recently-
announced programs such as Stanford's One-Hundred Year Study of AI and the AAAI committee on
AI impact and ethical issues (chaired by Rossi and Chernova).
From a policy perspective, AI (like any powerful new technology) enables both great new benets and novel
pitfalls to be avoided, and appropriate policies can ensure that we can enjoy the benets while risks are
minimized. This raises policy questions such as these:
1. What is the space of policies worth studying?
2. Which criteria should be used to determine the merits of a policy? Candidates include veriabil-
ity of compliance, enforceability, ability to reduce risk, ability to avoid sti
ing desirable technology
development, adoptability, and ability to adapt over time to changing circumstances.
6
2.3 Computer Science Research for Robust AI
As autonomous systems become more prevalent in society, it becomes increasingly important that they
robustly behave as intended. The development of autonomous vehicles, autonomous trading systems, au-
tonomous weapons, etc. has therefore stoked interest in high-assurance systems where strong robustness
guarantees can be made; Weld and Etzioni have argued that \society will reject autonomous agents unless
we have some credible means of making them safe" [130]. Dierent ways in which an AI system may fail to
perform as desired correspond to dierent areas of robustness research:
1.Verication: how to prove that a system satises certain desired formal properties. ( \Did I build the
system right?" )
2.Validity: how to ensure that a system that meets its formal requirements does not have unwanted
behaviors and consequences. ( \Did I build the right system?" )
3.Security: how to prevent intentional manipulation by unauthorized parties.
4.Control: how to enable meaningful human control over an AI system after it begins to operate. ( \OK,
I built the system wrong, can I x it?" )
2.3.1 Verication
By verication, we mean methods that yield high condence that a system will satisfy a set of formal
constraints. When possible, it is desirable for systems in safety-critical situations, e.g. self-driving cars, to
be veriable.
Formal verication of software has advanced signicantly in recent years: examples include the seL4 kernel
[63], a complete, general-purpose operating-system kernel that has been mathematically checked against a
formal specication to give a strong guarantee against crashes and unsafe operations, and HACMS, DARPA's
\clean-slate, formal methods-based approach" to a set of high-assurance software tools [38]. Not only should
it be possible to build AI systems on top of veried substrates; it should also be possible to verify the
designs of the AI systems themselves, particularly if they follow a \componentized architecture", in which
guarantees about individual components can be combined according to their connections to yield properties
of the overall system. This mirrors the agent architectures used in Russell and Norvig [102], which separate an
agent into distinct modules (predictive models, state estimates, utility functions, policies, learning elements,
etc.), and has analogues in some formal results on control system designs. Research on richer kinds of
agents|for example, agents with layered architectures, anytime components, overlapping deliberative and
reactive elements, metalevel control, etc.|could contribute to the creation of veriable agents, but we lack
the formal \algebra" to properly dene, explore, and rank the space of designs.
Perhaps the most salient dierence between verication of traditional software and verication of AI
systems is that the correctness of traditional software is dened with respect to a xed and known machine
model, whereas AI systems|especially robots and other embodied systems|operate in environments that
are at best partially known by the system designer. In these cases, it may be practical to verify that the
system acts correctly given the knowledge that it has, avoiding the problem of modelling the real environment
[31]. A lack of design-time knowledge also motivates the use of learning algorithms within the agent software,
and verication becomes more dicult: statistical learning theory gives so-called -(probably approximately
correct) bounds, mostly for the somewhat unrealistic settings of supervised learning from i.i.d. data and
single-agent reinforcement learning with simple architectures and full observability, but even then requiring
prohibitively large sample sizes to obtain meaningful guarantees.
Research into methods for making strong statements about the performance of machine learning algo-
rithms and managing computational budget over many dierent constituent numerical tasks could be improve
our abilities in this area, possible extending work on Bayesian quadrature [52, 45]. Work in adaptive control
theory [11], the theory of so-called cyberphysical systems [91], and verication of hybrid or robotic systems
[2, 131] is highly relevant but also faces the same diculties. And of course all these issues are laid on top
of the standard problem of proving that a given software artifact does in fact correctly implement, say, a
7
reinforcement learning algorithm of the intended type. Some work has been done on verifying neural network
applications [95, 122, 106] and the notion of partial programs [5, 119] allows the designer to impose arbitrary
\structural" constraints on behavior, but much remains to be done before it will be possible to have high
condence that a learning agent will learn to satisfy its design criteria in realistic contexts.
It is possible that the best methodology for gaining high condence that large, complex AI systems will
satisfy their design criteria is to be found in the realm of software engineering methodology/standards rather
than in formal verication. It's also possible that while formal verication would be best in the long run, in
practice the research progress required for constructing a superintelligence comes to fruition at a time when
formal verication is prohibitively expensive to the teams that are closest to building superintelligence.
In this case, it would be good to understand what current work would be most valuable to reducing the
risk of adverse outcomes arising from bugs in implementation. This work would most likely be less theoretical
and more practical and implementation-specic than most of the other research explored in this document.
Some of the questions to investigate here are:
1. What categories of bugs are most hazardous? Some particularly undesirable sorts of bugs are:
(a) bugs that lie dormant during ordinary testing but can be encountered in larger settings given
enough time. (For example, integer over
ows or accumulation of numerical error.)
(b) portability bugs, ie bugs that arise from dierences in libraries, environment, or hardware. (For
example, GPGPU libraries.)
(c) \Heisenbugs", ie bugs that manifest in practice but not in a debugging environment.
(d) bugs that are dicult to reproduce for some reason, such as bugs aected by nondeterministic
scheduling of concurrent threads of execution, or by the interaction of this with some other sort
of state, such as a random number generator.
2. How likely would these sorts of bugs be to arise in a hazardous way if an otherwise-promising super-
intelligence project was undertaken in the medium-term?
3. What kinds of tools or software changes would make the most dierence in mitigating the risk of an
adverse outcome? Some ideas:
(a) in
uence current and upcoming programming language interpreters, compilers, application virtual
machines (such as the JVM), etc. to adopt a default behavior (or at least an option) of throwing
exceptions on encountering numerical over
ow/under
ow.
(b) ensure software quality of particularly popular state-of-the-art machine learning libraries, GPGPU
libraries, and other core components.
(c) assess the prevalence of portability bugs and promote adherence of standards that could resolve
them.
2.3.2 Validity
A verication theorem for an agent design has the form, \If environment satises assumptions then behavior
satises requirements ." There are two ways in which a veried agent can, nonetheless, fail to be a benecial
agent in actuality: rst, the environmental assumption is false in the real world, leading to behavior that
violates the requirements ; second, the system may satisfy the formal requirement but still behave in
ways that we nd highly undesirable in practice. It may be the case that this undesirability is a consequence
of satisfying whenis violated; i.e., had held the undesirability would not have been manifested; or
it may be the case that the requirement is erroneous in itself. Russell and Norvig [102] provide a simple
example: if a robot vacuum cleaner is asked to clean up as much dirt as possible, and has an action to dump
the contents of its dirt container, it will repeatedly dump and clean up the same dirt. The requirement
should focus not on dirt cleaned up but on cleanliness of the
oor. Such specication errors are ubiquitous
in software verication, where it is commonly observed that writing correct specications can be harder than
8
writing correct code. Unfortunately, it is not possible to verify the specication: the notions of \benecial"
and \desirable" are not separately made formal, so one cannot straightforwardly prove that satisfying
necessarily leads to desirable behavior and a benecial agent.
In order to build systems that robustly behave well, we of course need to decide what \good behavior"
means in each application domain.[73] This ethical question is tied intimately to questions of what engineering
techniques are available, how reliable these techniques are, and what trade-os can be made {{ all areas where
computer science, machine learning, and broader AI expertise is valuable. For example, Wallach and Allen
[128] argue that a signicant consideration is the computational expense of dierent behavioral standards
(or ethical theories): if a standard cannot be applied eciently enough to guide behavior in safety-critical
situations, then cheaper approximations may be needed. Designing simplied rules { for example, to govern a
self-driving car's decisions in critical situations { will likely require expertise from both ethicists and computer
scientists. Computational models of ethical reasoning may shed light on questions of computational expense
and the viability of reliable ethical reasoning methods [9, 120]; for example, work could further explore the
applications of semantic networks for case-based reasoning [70], hierarchical constraint satisfaction [67], or
weighted prospective abduction [89] to machine ethics.
Explicit ethical systems generally have the desideratum of transparency and understandability. There
have been a number of approaches to modeling ethical systems, such as using semantic casuistry networks
[70], hierarchical constraint satisfaction[67], category theory[19], and weighted prospective abduction [89], but
research on more dynamic representations would be benecial for integrating with machine learning systems.
Long-term safety researchers[78] point out that seemingly any explicitly encoded moral philosophy (or ethical
rulebase) is incomplete and therefore leads to unintended and perverse interpretations and instantiations,
particularly outside the boundaries of the environment in which it was formulated; they further point out the
wide heterogeneity of ethical systems in moral philosophy. This heterogeneity might be useful as a resource
however: research on formal mathematics to compare, contrast, and overlay ethical rulebases, such as via
algebraic topology [36] may enable ensembles of moderately con
icting ethical rulebases to have more robust
properties than any individual ethical system. Multiobjective optimizations over multiple ethical systems
and explicit goals may also merit further study [82].
2.3.3 Security
Security research can help make AI more robust. As AI systems are used in an increasing number of critical
roles, they will take up an increasing proportion of cyber-attack surface area. It is also probable that AI and
machine learning techniques will themselves be used in cyber-attacks.
Robustness against exploitation at the low level is closely tied to veriability and freedom from bugs. For
example, the DARPA SAFE program aims to build an integrated hardware-software system with a
exible
metadata rule engine, on which can be built memory safety, fault isolation, and other protocols that could
improve security by preventing exploitable
aws [30]. Such programs cannot eliminate all security
aws
(since verication is only as strong as the assumptions that underly the specication), but could signicantly
reduce vulnerabilities of the type exploited by the recent \Heartbleed bug" and \Bash Bug". Such systems
could be preferentially deployed in safety-critical applications, where the cost of improved security is justied.
At a higher level, research into specic AI and machine learning techniques may become increasingly
useful in security. These techniques could be applied to the detection of intrusions [65], analyzing malware
[97], or detecting potential exploits in other programs through code analysis [20]. It is not implausible that
cyberattack between states and private actors will be a risk factor for harm from near-future AI systems,
motivating research on preventing harmful events. As AI systems grow more complex and are networked
together, they will have to intelligently manage their trust, motivating research on statistical-behavioral
trust establishment [94] and computational reputation models [103].
2.3.4 Control
For certain types of safety-critical AI systems { especially vehicles and weapons platforms { it may be
desirable to retain some form of meaningful human control, whether this means a human in the loop, on the
9
loop[54, 88], or some other protocol. In any of these cases, there will be technical work needed in order to
ensure that meaningful human control is maintained [33].
Automated vehicles are a test-bed for eective control-granting techniques. The design of systems and
protocols for transition between automated navigation and human control is a promising area for further
research. Such issues also motivate broader research on how to optimally allocate tasks within human-
computer teams, both for identifying situations where control should be transferred, and for applying human
judgment eciently to the highest-value decisions.
10
3 Long-term research priorities
A frequently discussed long-term goal of some AI researchers is to develop systems that can learn from
experience with human-like breadth and surpass human performance in most cognitive tasks, thereby having
a major impact on society. If there is a non-negligible probability that these eorts will succeed in the
foreseeable future, then additional current research beyond that mentioned in the previous sections will be
motivated as exemplied below, to help ensure that the resulting AI will be robust and benecial.
Assessments of this success probability vary widely between researchers, but few would argue with great
condence that the probability is negligible, given the track record of such predictions. For example, Ernest
Rutherford, arguably the greatest nuclear physicist of his time, said in 1933 that nuclear energy was \moon-
shine"1, and Astronomer Royal Richard Woolley called interplanetary travel \utter bilge" in 1956 [96].
Moreover, to justify a modest investment in this AI robustness research, this probability need not be high,
merely non-negligible, just as a modest investment in home insurance is justied by a non-negligible proba-
bility of the home burning down.
3.1 Some perspectives on the long term
Looking into the future, there is a very real possibility that we will see general AI and superintelligence.
(We'll defer discussion of when and how this is likely to happen to 4.) This would be truly revolutionary -
for the rst time, we could have machine assistance in every domain. What would this mean? Unfortunately
it's dicult to get a complete picture from examples, because we haven't seen any systems that are more
capable than humans at every cognitive task.
One hint comes from looking at chimpanzees, our closest living relatives. Chimpanzees share 94% of
our genetic code, have a complex social structure, and are capable of planning, tool use, and some symbolic
language use. However, unlike chimp intelligence, human intelligence has accumulated innovations (such as
complex language, writing, the scientic method, etc) that have reshaped the planet, and that give our species
unprecedented power; for better or for worse, chimpanzees will only continue to exist if we see their existence
as more valuable than what we could gain from eliminating them. Fortunately, we see the elimination of our
closest living relatives as particularly barbaric and immoral, so they probably won't be added to the growing
list of species we have driven to extinction - but this should give us a sense of the power superintelligence
could have. Of course, even if we happen to be the most intelligent species around, it would be a mistake to
anthropomorphize a superintelligent AI system, which may have very little in common with us internally;
certainly much less than a chimp. So what can we say about it? In order to be a superintelligent AI, the
system must be doing very well at some task by some performance measure, relative to the resources the
system is using. Looking at the system in a high-level way, we can describe it as having preferences to do
well according to this performance measure. The preferences might be intended by the designer or not; they
might be represented in the system or not; they might be context-specic; temporary; and they might be
best dened relative to some virtual/abstract world (eg a chessboard), or the real world.
We will refer back to this notion of preferences, which underlies various arguments about the nature
of superintelligence. Preferences are often called \goals"; the latter term may sound like it refers to lists
of fully-known binary-state objectives for the agent to accomplish, perhaps without a way of prioritizing
between them, but this is not the intended meaning.
Finally: of course, we already have systems that have superhuman capability in many domains, such as
rote computation, chess, and Jeopardy; and this has not exposed us to any dramatic risks. An intelligent
system need not literally have every human capability in order to be dangerous, but it's likely to need some
of the following types of capabilities[16]:
1. Self-improvement: intelligence amplication
2. Strategic: planning, forecasting, prioritizing
1\The energy produced by the breaking down of the atom is a very poor kind of thing. Any one who expects a source of
power from the transformation of these atoms is talking moonshine."[92]
11
3. Social: social and psychological modeling, manipulation, rhetorical persuasive ability
Some questions to explore long-term perspectives:
1. Explore the space of possible mind designs. What features are common? On what axes can they
dier? Where do human minds sit in that space relative to apes, dolphins, current AIs, future AIs,
etc.? Where in the space could safe general AIs be found? (Some candidates to examine: optimizers
vs not, tending to break out of simulations/virtual worlds vs not.) A starting point is [133].
2. Bostrom's \orthogonality thesis"[17] states that \any level of intelligence could in principle be combined
with more or less any goal," but what kinds of general intelligences are plausible? Should we expect
some correlation between level of intelligence and goals in de-novo AI? How true is this in humans,
and in whole brain emulations?
3. What \instrumental goals" might a self-improving AI generically evolve? Omohundro[85] has argued
that to improve its ability to attain its goals, it generically seeks capability enhancement (better
hardware, better software and a better world model), and that the goal of better hardware generically
leads to a goal of self-preservation and unlimited resource acquisition, which can lead to unwanted side
eects for humans.
4. How generic is the instrumental goal of resource acquisition? Is it true of most nal goals that an
optimizer with that nal goal will want to control a spatial region whose radius increases linearly with
time? In what sense of \most", if any, is this true?
3.2 Verication
Reprising the themes of short-term research, research enabling veriable low-level software and hardware
can eliminate large classes of bugs and problems in general AI systems; if the systems become increasingly
powerful and safety-critical, veriable safety properties will become increasingly valuable. If the theory of
extending veriable properties from components to entire systems is well understood, then even very large
systems can enjoy certain kinds of safety guarantees, potentially aided by techniques designed explicitly to
handle learning agents and high-level properties. Theoretical research, especially if it is done explicitly with
very general and capable AI systems in mind, could be particularly useful.
A related verication research topic that is distinctive to long-term concerns is the veriability of sys-
tems that modify, extend, or improve themselves, possibly many times in succession [41, 126]. Attempting to
straightforwardly apply formal verication tools to this more general setting presents new diculties, includ-
ing the challenge that a formal system that is suciently powerful cannot use formal methods in the obvious
way to gain assurance about the accuracy of functionally similar formal systems, on pain of inconsistency
via G odel's incompleteness [37, 129]. It is not yet clear whether or how this problem can be overcome, or
whether similar problems will arise with other verication methods of similar strength.
Finally, it is often dicult to actually apply formal verication techniques to physical systems, especially
systems that have not been designed with verication in mind. This motivates research pursuing a general
theory that links functional specication to physical states of aairs. This type of theory would allow use
of formal tools to anticipate and control behaviors of systems that approximate rational agents, alternate
designs such as satiscing agents, and systems that cannot be easily described in the standard agent formalism
(powerful prediction systems, theorem-provers, limited-purpose science or engineering systems, etc.). It may
also be that such a theory could allow rigorously demonstrating that systems are constrained from taking
certain kinds of actions or performing certain kinds of reasoning (see Section 3.5.1 for examples).
3.3 Validity
As in the short-term research priorities, validity is concerned with undesirable behaviors that can arise despite
a system's formal correctness. In the long term, AI systems might become more powerful and autonomous,
in which case failures of validity could carry correspondingly higher costs.
12
Reliable generalization of concepts, an area we highlighted for short-term validity research, will also be
important for long-term safety. To maximize the long-term value of this work, concept learning research
might focus on the types of unexpected generalization that would be most problematic for very general and
capable AI systems. In particular, it might aim to understand theoretically and practically how learned
representations of high-level human concepts could be expected to generalize (or fail to) in radically new
contexts [123]. Additionally, if some concepts could be learned reliably, it might be possible to use them to
dene tasks and constraints that minimize the chances of unintended consequences even when autonomous
AI systems become very general and capable. Little work has been done on this topic, which suggests that
both theoretical and experimental research may be useful.
Mathematical tools such as formal logic, probability, and decision theory have yielded signicant insight
into the foundations of reasoning and decision-making. However, there are still many open problems in
the foundations of reasoning and decision. Designing a powerful AI system without having a thorough
understanding of these issues might increase the risk of unintended consequences, both by foregoing tools
that could have been used to increase the system's reliability, and by risking the collapse of shaky foundations.
Example research topics in this area include reasoning and decision under bounded computational resources
a la Horvitz and Russell [59, 100], how to take into account correlations between AI systems' behaviors
and those of their environments or of other agents [124, 64, 58, 46, 115],how agents that are embedded in
their environments should reason [110, 87], and how to reason about uncertainty over logical consequences
of beliefs or other deterministic computations [114, 93]. These topics may benet from being considered
together, since they appear deeply linked [47, 48].
In the long term, it is plausible that we will want to make agents that act autonomously and powerfully
across many domains. Explicitly specifying our preferences in broad domains in the style of near-future
machine ethics may not be practical, making \aligning" the values of powerful AI systems with our own values
and preferences dicult [111, 113]. Consider, for instance, the diculty of creating a utility function that
encompasses an entire body of law; even a literal rendition of the law is far beyond our current capabilities,
and would be highly unsatisfactory in practice (since law is written assuming that it will be interpreted
and applied in a
exible, case-by-case way). Reinforcement learning raises its own problems: when systems
become very capable and general, then an eect similar to Goodhart's Law is likely to occur, in which
sophisticated agents attempt to manipulate or directly control their reward signals [16]. This motivates
research areas that could improve our ability to engineer systems that can learn or acquire values at run-
time. For example, inverse reinforcement learning may oer a viable approach, in which a system infers
the preferences of another actor, assumed to be a reinforcement learner itself [101, 83]. Other approaches
could use dierent assumptions about underlying cognitive models of the actor whose preferences are being
learned (preference learning, [26]), or could be explicitly inspired by the way humans acquire ethical values.
As systems become more capable, more epistemically dicult methods could become viable, suggesting that
research on such methods could be useful; for example, Bostrom [16] reviews preliminary work on a variety
of methods for specifying goals indirectly.
3.3.1 Ethics
As AI develops to human-level intelligence and beyond, it may be necessary to create agents that obey some
formulation of ethics[73]. There are several fundamental questions on which researchers' assumptions dier;
some are:
1. Should ethics be thought of as a constraint on the agent's actions, or as a subgoal, or as the main goal
content? The former approach seems appropriate for near-term applications such as self-driving cars,
but in a more powerful AI system it may not be suciently robust; also it can be argued that working
under the latter assumption is more likely to expose our ethics models to feedback and improvement,
which may lead to better outcomes in the long run.
Intuitions about this are likely linked to diering intuitions about whether there is a dierence in kind
between ethical values (such as fairness, compassion, generosity, mercy, etc) and other typical values
held by humans (such as knowledge, beauty, fun, courage, loyalty, chocolate, etc).
13
2. Should ethics be formulated directly[128, p. 83-86], or should it be learned from human behaviors,
brains, writings, etc ("indirectly")[128, p. 108-111]? Again, the former approach is sucient for
self-driving cars, but seems fairly dicult to pin down to the degree that would be required for a
superintelligence acting freely in the world. In the long run, such a superintelligence would need to
be sophisticated enough to come to "correct"2conclusions (or meticulous indierence of a kind that
leaves control to us) about the implications of such things as the creation of other possibly-sentient
AIs, brain emulations, possible collective consciousnesses, etc., as well as more everyday situations.
The learning-based approach has the drawback of not necessarily being transparent to human inspec-
tion, of easily misconstruing context, and of potentially overtting; on the other hand it seems easier
to reach an objective result over limited domains. Hybrid approaches have also been suggested [128,
p. 117-124], but there are a number of open questions in that area.
Whichever way is chosen, it would be very valuable to nd eective ways to validate ethical systems before
developing superintelligence. Two apparent paths to doing this are to inspect the content manually, and
to try them out in various (likely simulated) settings, evaluating them both subjectively and through each
other's lenses.
One signicant challenge with testing is that many values (e.g., courage, love) are themselves rather
complex features of the world, and so they might be dicult to capture in a simulated context without
losing much of the essential complexity. Testing in non-simulated contexts would be signicantly slower and
more limited in terms of variety, reducing the quality of feedback. Furthermore, since superintelligent AIs
will have more actions and plans available to them than the AIs we'd use for testing ethical systems, the
ethical systems would have to generalize well, in a way that we could not test in reality. This suggests that
the best option may be to create complex simulated environments with ethical complexity comparable to
reality, in which we could set up interesting scenarios to test generalizability of ethical systems. In order to
do this, successfully and ethically, we would need to nd a way to replicate the true complexity of our values
(which is a fairly dicult task in itself) while minimizing ethically meaningful harm to simulated entities (if
it is determined that they hold moral standing).
1. Although it has been frequently argued that the AI goals should re
ect \human values", which partic-
ular values should be preserved given that there is a broad spectrum of inconsistent views across the
globe about what these values should be? Who should get to decide that and when? For one example of
such challenges, see for example the innite ethics[14] framework of Arrhenius[8]. For another example
of existing work here, Anderson[4] suggests a method for learning how to rank con
icting ethical rules.
2. How are human ethical principles best codied in a way that makes sense to a machine? This is the
focus of the nascent eld of Computational Ethics (a.k.a. Machine Ethics) [73], which includes many
questions of a technical nature. For example:
(a) What are the best knowledge representation and model representation systems for dynamical
modeling of ethical systems?
(b) How can dierent ethical systems be compared analytically?
(c) How could we best build systems that learn ethical content from humans?
(d) How can we best estimate the loss implications of errors in learned ethical content?
3. Both across ethical systems and within a given ethical system, con
icting objectives must often be con-
sidered in tandem. Bostrom[15] has suggested a parliamentary voting model of subagents representing
their respective subobjectives. Multi-agent political dynamics may be undesirable however, leading one
to consider optimization over multiple objectives that are not each represented by subagents. During
such a multiobjective optimization over subgoals and/or subethics, we don't want to be susceptible to
following some edge path that seems great to one or a small number of subobjectives that are either
2One way to operationalize this is described by Muehlhauser[80].
14
very disliked or not well understood by most subobjectives (as that likely indicates a perverse instan-
tiation). This preference can be equivalent to having some inherent preference for the centroid of the
objective space; investigation of both modication of standard multiobjective optimization as well as
adding in meta-objectives as subobjectives[18, p. 440] would be in order.
The value learning problem is discussed further in [112, 123].
3.3.2 Ensuring goal stability
Once desirable goals have been successfully loaded into an AI, the key question is whether they will be
retained if the AI self-improves. A prerequisite for the \friendly AI" vision [136] is that a set of \friendly"
goals must remain stable throughout the self-improvement process. In other words, the AI must strive not
only to improve its capability of achieving its current goals, but also to ensure that it will retain these goals
even after it has become more capable. This sounds quite plausible: after all, would you choose to get an
IQ-boosting brain implant if you knew that it would make you want to kill your loved ones? But is it really
true in general? If not, can AIs be designed for which it is true, at least under some plausible circumstances?
Such questions suggest a host of research topics - here are some examples:
1. Self-trusting agents: can we construct goal-driven agents that obey some formalization of correct rea-
soning (eg rst-order logic, or Bayesian probability) and have access to actions that modify themselves
(and/or defer taking nal action to a possibly-modied copy of themselves), that are able to make
correct decisions about these actions without falling into the so-called L obian obstacle or the procras-
tination paradox? (It's not necessary for these agents to be practical; an equivalent of AIXI[62] would
be enlightening too.) A more thorough introduction to the problem was recently written by Fallenstein
and Soares.[37]
2. Generally, how can we structure an autonomous goal-oriented agent so that we can be sure it won't
intentionally self-modify to change its goals, or create more powerful agents with dierent goals? Are
there other sorts of replication-capable AI for which this might be answerable?
3. Can any useful evidence for or against this goal-retention hypothesis be found by studying humans?
For example, is there evidence that humans retain their values as their cognitive abilities improve
throughout childhood and adolescence? To what degree do human values and preferences converge
upon learning new facts? To what degree has this happened in history? Almost nobody values the will
of Zeus anymore, presumably because of learning about Zeus' non-existence, but do such examples tell
us much of relevance to AIs? For philosophical analyses of the issue, see e.g. [117].
4. \Ontological crisis": if an agent's preferences are based on a model of the world which turns out to not
be fundamental, it must then extrapolate/redene them somehow. How is this best done, and can this
always be done in a satisfactory way? For example, suppose we program a friendly AI to maximize the
number of humans whose souls go to heaven in the afterlife. First it tries things like increasing people's
compassion and church attendance. But suppose it then attains a complete scientic understanding of
humans and human consciousness, and discovers that there is no such thing as a soul. Now what? In
the same way, it is possible that any other goal we give it based on our current understanding of the
world (\maximize the meaningfulness of human life", say) may eventually be discovered by the AI to
be undened. Can goals safely be articulated in terms of people rather than arrangements of atoms,
or about a classical universe rather than a quantum, simulated, or other mathematical one?
This problem, and other related problems, are discussed in a recent paper by Soares.[110]
5. Decision theory: most work on automated planning and causality assumes a Causal Decision Theory.
However, Causal Decision Theory is not stable under re
ection in general. What is the right re
ectively
stable decision theory?
This problem is discussed in a recent paper by Soares and Fallenstein.[115]
15
(a) Does a good decision theory require a theory of logical counterfactuals, and if so, what's a good
theory of logical counterfactuals?
(b) Does a good decision theory shed light on multiagent coordination problems? (There is some
reason to think so.) On ontological crises? On naturalized induction?
6. Ensemble stability problem. Suppose an agent makes decisions using some sort of multiobjective
optimization over dierent goals and ethical systems. Do some ways of doing this guarantee that the
same objectives will be respected in any successors?
3.4 Security
It is unclear whether long-term progress in AI will make the overall problem of security easier or harder; on
one hand, systems will become increasingly complex in construction and behavior and AI-based cyberattacks
may be extremely eective, while on the other hand, the use of AI and machine learning techniques along
with signicant progress in low-level system reliability may render hardened systems much less vulnerable
than today's. From a cryptographic perspective, it appears that this con
ict favors defenders over attackers;
this may be a reason to pursue eective defense research wholeheartedly.
Although the research topics described in 2.3.3 may become increasingly important in the long term, very
general and capable systems will pose distinctive security problems. In particular, if the problems of validity
and control are not solved, it may be useful to create \containers" for AI systems that could have undesirable
behaviors and consequences in less controlled environments.[134] Both theoretical and practical sides of this
question warrant investigation. If the general case of AI containment turns out to be prohibitively dicult,
then it may be that designing an AI system and a container in parallel is more successful, allowing the
weaknesses and strengths of the design to inform the containment strategy [16]. The design of anomaly
detection systems and automated exploit-checkers could be of signicant help. Overall, it seems reasonable
to expect this additional perspective { defending against attacks from \within" a system as well as from
external actors { will raise interesting and protable questions in the eld of computer security.
Below we explore some types of containment that could help limit the potential for the AI to make
and execute plans with arbitrarily large eect on the world. It's worth bearing in mind that even if the
control and validity problems seemed to have been adequately addressed, it would still be appropriate to use
additional safeguards while studying and testing any potential superintelligence - and that it would help for
such safeguards to be as adaptable and painless as possible.
The task of designing appropriate containment is challenging for numerous reasons, not only due to the
diculty of reliably avoiding security vulnerabilities. In the worst case, every aspect of the AI's execution
may be superintelligently designed to further a set of goals alien to ours. With a superintelligent AI in a
container, we'd need to worry not only about straightforward attempts at disabling safeguards or making
self-copies, but also about hardware exploits and more insidious attempts to gain trust or in
uence.
Some of the attributes that may be desirable or necessary are:
1. Containment: it should prevent a contained superintelligent AI from having arbitrary eects on the
world. In particular, it should be veriably free of vulnerabilities itself.
2. Robustness: it should be dicult to unintentionally render ineective.
3. Uptake: it should be a system that AI builders want to use, and avoid being one that they want to
not use. Some possible aspects of this include:
(a) Completeness: it should allow running the same experiments that AI builders can do without the
containment
(b) Flexibility: it should be adaptable enough to dierent hardware setups to be used with dierent
architectures, and especially on distributed hardware
(c) Eciency: it should not overly reduce runtime eciency, memory eciency, or research/development
eciency
16
(d) Ease of use: it should provide as familiar an environment as possible; for example a UNIX shell
and other familiar tools.
4. Inspectability: it should allow detailed debugging and inspection of the contained AI. See 3.4.5. This
could contribute to uptake if it provides better inspection capabilities than AI builders typically have
(for instance, debugging distributed software is typically awkward in the current state of aairs).
3.4.1 Software containment
This research is about how we might develop, test and share software that can reliably \box" and monitor
AI.
1. What is the space of possible boxing architectures worth studying, based on input from researchers in
industry, academia, etc.? Dimensions include:
(a) At what level of abstraction is it implemented? As a virtual machine? As an operating system?
(b) What kinds of isolation guarantees does it provide?
i. Limited input/output channels? (This is relevant to preventing both typical computer hacking
and psychological escapes.)
ii. Limited side eects on hardware? (Perhaps using some variant of homomorphic encryption
to make the physical eect of executing code on the CPU unpredictable/unexploitable?)
(c) What forms of logging and monitoring should be supported?
(d) What types of automated tripwires and associated countermeasures should be supported?
(e) How can scalability and parallelism be provided without risk of accidentally connecting it to the
internet?
2. Once a desirable boxing architecture has been identied, implement and share it on an open-source
basis. A key metric of success here is actual adoption by leading general AI researchers, so their input
into the architecture selection will be crucial.
3. Attempt to prove that the software (or at least key components) will behave as intended.
Use a \red team" to attempt to nd security
aws in proposed and currently existing virtualization
software, perhaps incentivized by prizes. Multiple levels of virtualization should be considered (and
perhaps used in parallel), such as:
(a) Operating-system level virtualization (such as VirtualBox, VMWare, KVM)
(b) Programming-language level containment (such as Java sandbox)
(c) Static analysis (eg to check that a program won't use networking APIs or corrupt its memory)
4. Investigate programs likely to be used for viewing output and logs from an AI project. Determine which
ones are and aren't secure enough to use safely in the context of a boxed AI, and address weaknesses
in them. For example:
(a) How safe are terminal emulators?
(b) Can an AI censor information from its log les by mixing in things like Unicode control characters
or xterm escape sequences?
(c) Under what circumstances is it safe to copy-paste things from an AI's output?
(d) What le formats should and shouldn't the AI be allowed to use?
5. Develop a debugger (perhaps by modifying an existing one such as gdb) that's able to connect to a
virtual machine and debug something inside it, without being exploitable by the thing inside.
17
6. Automate checking for the presence of known problems (ie, check version numbers against a database
and keep the database up to date), from both inside and outside of a box. Design it to be worked into
an AI project as an automated check.
7. Can we put a general AI in a virtual environment that doesn't reveal that it's a virtual environment?
What sort of content would be safe to give it?
3.4.2 Psychological containment
This research aims to study and reduce the risk that human users are manipulated into enabling a boxed
superintelligent AI to \escape". Note that escape here need not be by disabling safeguards; the AI could
deliver information to its recipient in a way that contributes to other complex plans to gain in
uence, trust,
more copies, etc.
1. Replicate the \AI box experiment" where a human pretending to be an AI tries to persuade a human
controller to enable a breakout; document strategies, counterstrategies, and outcomes.
(a) How can individual AI researchers best be trained to resist being manipulated by an AI?
2. What are the most eective social control mechanisms for preventing individuals from enabling AI
breakout? What can be learned from past research on preventing unauthorized use of nuclear weapons
and other powerful technologies? Such examples include:
(a) Psychological screening of operators
(b) Surveillance of operators. For example, add an option to sshd which records keystrokes and output
in a place the logged-in user cannot access
(c) \False alarm" tests where the operator is led to believe that a critical situation has occurred and
one can see what decisions they make under pressure
(d) Systems requiring more than one authorized human for making certain critical decisions. What
software tools should be developed (and perhaps shared on a public-domain basis)? For example,
how would one best make a double-entry / two-person ssh interface where both entrants are
veriably human?
3. Scenario analysis: What are some concrete AI paths to in
uence over world aairs?
4. What are the optimization power gains from mere content? What have people gured out without
original theoretical advances or new experiments, but just by reading lots of known facts and putting
together the pieces in a way that nobody had before? Relevant for deciding whether containment
should involve restricting the AI's knowledge base.
3.4.3 Hardware containment
Study ways in which a general AI might break out from a computer (or computer cluster) disconnected from
the outside world, and investigate corresponding countermeasures.
1.First challenge : Consider only currently known technology, for example communication with the
outside world via radio waves, vibrations, minute A/C power modulations, etc. that somehow interact
with the computer in a software-controllable/software detectable way. See the literature on side-channel
attacks.
2.Second challenge : Consider advanced and currently unknown technology that the general AI could
potentially invent, limited only by laws of physics we consider truly fundamental.
Both 1 and 2 include the challenge of how to safely dispose of boxed AI hardware after shutdown.
18
3.4.4 Tripwires: Detection & Response
This research aims to study ways of detecting an imminent or in-progress AI box breakout, and appropriate
countermeasures to take when pre-set alarms are triggered.
1. What are appropriate tripwires for detecting an imminent or in-progress AI box breakout? Examples
include:
(a) Low-level triggers such as unexpected use of memory, bandwidth or communication channels
(b) Triggers related to the AI modifying its own software beyond certain limits
(c) Triggers related to some sort of high-level ability metric. Can any improved measures of substrate-
independent general intelligence be helpful in this regard, extending past work by Legg, Yud-
kowsky, Goertzel, Hernandez-Orallo & Dowe, etc.?
2. How are such tripwires best implemented without causing performance degradation in terms of speed,
memory-usage, etc. that may limit their adoption? To maximize adoption, software implementing
such tripwires in the above-mentioned boxing software should be developed and shared, optionally on
an open-source basis.
3. Study the risk associated with an AI having access to the source code for its boxing and tripwire
software (or itself) and thereby discovering ways to circumvent it.
4. What are appropriate countermeasures for various types of tripwire triggers?
(a) At the machine level, what should happen automatically for safe shutdown? After shutdown?
What standard interfaces for tripwires could be developed for automatically shutting down all of
a distributed system, including triggers and/or processes inside virtual machines?
(b) What is an appropriate emergency response plan if general AI appears imminent or has come into
existence? At the organization level? At the national/societal level? When is it worth developing
such plans?
3.4.5 Detecting intent to deceive
Suppose we construct a superintelligent AI system that has goals alien to ours. Correctly suspecting it of
being potentially harmful, we use a brilliantly-designed container mechanism to make sure it can have only
very limited in
uence on the world. Of course, eventually we'd like to put this system to use. When we do
this it will be dicult for us to ensure it can't in
uence the world through its actions; knowing this, the AI
may deceive us to gain trust while inside the container. We may try to inspect it, but it will have had a
chance to disguise itself to look harmless.
For these reasons, trying to make use of a mature superintelligent AI, even in a well-designed container,
could be very hazardous. This suggests that even if we have a container, if we don't trust the AI's goal system
we must have enough insight into its cognitive processes to monitor it and catch the \treacherous turn",
i.e. the rst moment when it comes up with the idea of deceiving us. Transparency/inspectability is thus
important, perhaps even more so than containment per se. For example, it would be useful to investigate
how amenable dierent AI architectures would be to having their beliefs and goals read from the outside.
3.5 Control
It has been argued that very general and capable AI systems operating autonomously to accomplish some
task will often be subject to eects that increase the diculty of maintaining meaningful human control [86,
17, 16, 107]. Research on systems that are not subject to these eects, minimize their impact, or allow for
reliable human control could be valuable in preventing undesired consequences, as could work on reliable
and secure test-beds for AI systems at a variety of capability levels.
19
3.5.1 Corrigibility and Domesticity
If an AI system is selecting the actions that best allow it to complete a given task, then avoiding conditions
that prevent the system from continuing to pursue the task is a natural subgoal [86, 17] (and conversely,
seeking unconstrained situations is sometimes a useful heuristic [132]). This could become problematic,
however, if we wish to repurpose the system, to deactivate it, or to signicantly alter its decision-making
process; such a system would rationally avoid these changes. Systems that do not exhibit these behaviors
have been termed corrigible systems [116], and both theoretical and practical work in this area appears
tractable and useful. For example, it may be possible to design utility functions or decision processes so that
a system will not try to avoid being shut down or repurposed [116], and theoretical frameworks could be
developed to better understand the space of potential systems that avoid undesirable behaviors [55, 57, 56].
It has been argued that another natural subgoal is the acquisition of fungible resources of a variety of
kinds: for example, information about the environment, safety from disruption, and improved freedom of
action are all instrumentally useful for many tasks [86, 17]. Hammond [49] gives the label stabilization to
the more general set of cases where \due to the action of the agent, the environment comes to be better
tted to the agent as time goes on". This type of subgoal could lead to undesired consequences, and a
better understanding of the conditions under which resource acquisition or radical stabilization is an optimal
strategy (or likely to be selected by a given system) would be useful in mitigating its eects. Potential research
topics in this area include \domestic" goals that demand actions/plans whose consequences are limited in
scope in some way [16], the eects of large temporal discount rates on resource acquisition strategies, and
experimental investigation of simple systems that display these subgoals.
Finally, research on the possibility of superintelligent machines or rapid, sustained self-improvement
(\intelligence explosion") has been highlighted by past and current projects on the future of AI as potentially
valuable to the project of maintaining reliable control in the long term. The AAAI 2008{09 Presidential
Panel on Long-Term AI Futures' \Subgroup on Pace, Concerns, and Control" stated that
There was overall skepticism about the prospect of an intelligence explosion... Nevertheless, there
was a shared sense that additional research would be valuable on methods for understanding and
verifying the range of behaviors of complex computational systems to minimize unexpected out-
comes. Some panelists recommended that more research needs to be done to better dene \intel-
ligence explosion," and also to better formulate dierent classes of such accelerating intelligences.
Technical work would likely lead to enhanced understanding of the likelihood of such phenomena,
and the nature, risks, and overall outcomes associated with dierent conceived variants [61].
Stanford's One-Hundred Year Study of Articial Intelligence includes \Loss of Control of AI systems" as an
area of study, specically highlighting concerns over the possibility that
...we could one day lose control of AI systems via the rise of superintelligences that do not act
in accordance with human wishes { and that such powerful systems would threaten humanity.
Are such dystopic outcomes possible? If so, how might these situations arise? ...What kind of
investments in research should be made to better understand and to address the possibility of
the rise of a dangerous superintelligence or the occurrence of an \intelligence explosion"? [60]
Research in this area could include any of the long-term research priorities listed above, as well as theoretical
and forecasting work on intelligence explosion and superintelligence [25, 16], and could extend or critique
existing approaches begun by groups such as the Machine Intelligence Research Institute [113].
Research questions:
1. Can high Bayesian uncertainty and agent respect for the unknown act as an eective safety mechanism?
(See [32, 108])
2. Investigate steep temporal discounting as an incentives control method for an untrusted general AI.
It has been argued [135] that the nature of the general AI control problem undergoes an essential shift,
which we can refer to as the \context change", when transitioning from subhuman to superhuman general
20
AI. This suggests that rather than judging potential solutions to the control problem using only experimental
results, it's essential to build compelling deductive arguments that generalize and are falsiable, and only
when these arguments are available does it make sense to try to test potential solutions via experiment.
3.5.2 Safe and Unsafe Agent Architectures
Predicting the exact behavior of complex software is notoriously dicult and has been shown to be gener-
ically impossible with less computational cost than simply running it. The goal of AI safety research is
therefore more modest: to show that the behavior, although not exactly predictable, will have certain de-
sired properties, for example keeping certain behavioral parameters within certain bounds.
Rational agents are often composed of distinct modules (e.g. sensors, actuators, a performance element,
a learning element, a problem generator, a critic, etc.), each with limited abilities, with some network of
information
ows between modules.[102] Within this framework, it would be valuable to provide guarantees
that various modules would be safe or unsafe (individually or in combination).
A related approach is to not build an agent at all, but rather some sort of non-agent \Tool AI". Some
types of this are:
1. \Oracle AI": an AI system designed to merely answer questions about the world as accurately as
possible. (Though people sometimes also call agent AI with a goal of accurately answering questions
about the world \Oracle AI".)
2. \Virtual AI": an agent that interacts only with an abstract world, but has no way of determining this,
and hence is not an agent in the physical world.
Although a common assumption is that both 1 and 2 are \safe" by remaining \Oracle AI"/\Tool AI", this
has not been substantiated. For example, an Oracle AI that becomes superintelligent via self-improvement
is likely to evolve a knowledge-acquisition goal, which might lead it to modify its answers to manipulate its
user to perform certain experiments.
Many of the above-mentioned safety issues are related to the issue of goals that the rational agent
may have. This question provides an important link between architectures and goals: how amenable are
dierent AI architectures to having their goals and beliefs read from the outside in a fashion useful for safety
determination and monitoring?
Research questions on properties of architectures under self-modication:
1. Can certain interesting classes of agents be proven to exhibit behavior converging towards recursively
stable xed points or limit cycles?
2. Do certain kinds of non-optimizer agents become optimizers, and if so, how quickly?
3. If so, how strong is the `optimizer' stable attractor?
4. Are there other stable attractors?
5. Are tool-like or Oracle-like things stable attractors?
Research questions about specic architectures:
1. Model and bug-trace the variety of dierent scenarios of failure modes and dangerous behaviors intrinsic
to dierent existing real-world general AI architectures such as OpenCog and Sigma.
2. Analyze how deep learning discontinuity/instability[121] can aect deep reinforcement learning ar-
chitectures. What new classes of risks in agent behavior can result? How easily can these risks be
mitigated? Evaluate compensatory safety mechanisms whereby an agent requires at least two distinct
perspectives on a situation or subject of analysis before characterizing it.
3. Explore how the eld of mechanism design can be applied to controlling neuromorphic AIs and other
architectures.
21
4. How well does an AI system's transparency to human inspection scale, using dierent kinds of archi-
tectures and methods?[76].
22
4 Forecasting
4.1 Motivation
1. Conduct a broad survey of past and current civilizational competence. In what ways, and under what
conditions, do human civilizations show competence vs. incompetence? Which kinds of problems do
they handle well or poorly? Similar in scope and ambition to, say, Perrow's Normal Accidents [90] and
Sagan's The Limits of Safety [104]. The aim is to get some insight into the likelihood of our civilization
handling various aspects of the superintelligence challenge well or poorly. Some initial ndings were
published on the MIRI blog.[79, 74]
2. Did most early AI scientists really think AI was right around the corner, or was it just a few people?
The earliest survey available (Michie 1973[71]) suggests it may have been just a few people. For those
that thought AI was right around the corner, how much did they think about the safety and ethical
challenges? If they thought and talked about it substantially, why was there so little published on
the subject? If they really didn't think much about it, what does that imply about how seriously AI
scientists will treat the safety and ethical challenges of AI in the future?
4.2 Methodology
There are many interrelated variables that are relevant to arriving at a good understanding of the future of
AI, in particular the path towards general AI; and there are a number of dierent ways to produce forecasts
of these variables, with varying degrees of accuracy, credibility, feasibility, and informativeness. Possible
methods include:
1. expert surveys
2. prediction markets
3. systematic group forecasting methods, such as the Delphi method
4. building complex models
5. extrapolation from historic trends
6. analogy with other historical developments
7. combining forecasts from diering methods or forecasts of related variables
Existing projects that investigate how to best forecast future sci/tech and other developments include these
IARPA programs:
1.ACE (Aggregative Contingent Estimation), which investigates ways to improve and combine the judg-
ments of analysts to produce accurate forecasts. The ACE program has been running a team prediction
tournament, which has consistently been won by the Good Judgment Project , which has produced sev-
eral insightful publications.
2.ForeST (Forecasting Science & Technology), which investigates ways to get and maintain high-quality
forecasts of sci/tech milestones, and funds SciCast , the world's largest sci/tech forecasting tournament.
These projects are providing us with valuable information on how best to make short-term forecasts. It
would be interesting to run a similar tournament with 5-year and 10-year time horizons for predictions and
see if there are signicant dierences; are the chief determinants of predictive success the same? What kind
of process should we trust to give us the best predictions? Besides the question of how to train analysts
and combine their estimates, there is also the question of what modelling methodologies, used by whom,
yield the best long-term forecasts. The Tauri Group is a think tank that conducted a study[44] for the
23
Department of Defense reviewing over 1000 technological forecasts and statistically analyzing accuracy by
methodology, by source, and by time frame. It would be informative to have a similar analysis of long-
term technological predictions from other sources, such as (1) The Futurist and World Future Review,
(2) Technological Forecasting and Social Change, (3) Foresight and International Journal of Forecasting,
(4) Journal of Forecasting, (5) publications of the Hudson Institute, (6) publications of the Institute for
the Future, (7) publications of the Club of Rome, (8) Journal of Future Studies, (9) Ray Kurzweil (more
thorough than section 5.4 of (Armstrong et al, 2014)[7]), (10) Alvin Toer, (11) John Naisbitt, (12) the
State of the World reports by the Worldwatch Institute.
4.3 Forecasting AI progress
In order to get a good understanding of what the path to general AI might look like, there are many kinds
of interacting variables that would be worth forecasting:
1. resources (researchers and funding) going into AI innovation in general, or within AI subelds
2. resources going into AI areas of application, such as robotics or sensory technologies
3. related elds which may contribute ideas, such as neuroscience
4. shifts in the set of organizations/people performing AI research among:
(a) countries
(b) academia vs industry vs other government (eg military)
Other related questions that may merit detailed study include:
1. in terms of technologies:
(a) What types of AI (in terms of architecture, subeld, application, etc) are most likely to contribute
to reaching general AI? What AI capabilities would be necessary or sucient, individually or
collectively?
(b) Nick Bostrom brings up in Superintelligence that brain emulation technology is unlikely to arrive
much sooner than human-level neuromorphic AI, because techniques and knowledge from the
former can likely be repurposed for the latter. Are there other foreseeable situations where two
disparate elds or research programs may be closely related, with success on one implying great
progress on the other?
i. Does causal entropy[132] constitute a promising shared avenue of progress in AI and nanotech?
2. in terms of scenarios:
(a) What kinds of scenarios would increase or decrease researcher inclination to work on AI or general
AI research? (For example, changing ideologies or public opinion, association of the eld with
ideas held in low regard, . . . ) Can we forecast this?
(b) How scalable is innovative project secrecy? Examine past cases: Manhattan project, Bletchley
park, Bitcoin, Anonymous, Stuxnet, Skunk Works, Phantom Works, Google X. Could there be
large projects we don't know about? How will this change in coming decades?
(c) What is the world's distribution of computation, and what are the trends? (Some initial results
here.[75])
(d) Supposing enough technical innovations are in place to build general AI, how large of a project
will implementation be? How much of the work to reach general AI is scientic advancement and
technical innovation vs engineering and implementation?
24
3. in terms of public response:
(a) How will governments respond?
i. What conditions would make bans or nationalization likely? (Consider historical examples
here.) What would be the consequences?
ii. Examine international collaboration on major innovative technology. How often does it hap-
pen? What blocks it from happening more? What are the necessary conditions? Examples:
Concord jet, LHC, international space station, etc. What conditions would make international
collaboration on AI safety issues likely?
iii. What kinds of policies are likely to be implemented, with what eect?
What happens when governments ban or restrict certain kinds of technological develop-
ment? What happens when a certain kind of technological development is banned or
restricted in one country but not in other countries where technological development sees
heavy investment?
What kinds of innovative technology projects do governments monitor, shut down, or
nationalize? How likely are major governments to monitor, shut down, or nationalize
serious general AI projects?
(b) How will the public respond? What sorts of technological innovations tend to cause public panic
or outrage, under what conditions?
(c) What sorts of developments would cause governments or the public to consider AI safety to be a
serious issue? How did public perception respond to previous AI milestones? How will the public
react to self-driving taxis?
(d) How much warning will we have before we reach general AI? What kinds of future developments
would serve as advance signposts indicating the kind of scenario we're likely to see?
4. in terms of rates of progress:
(a) How quickly will computer hardware performance be improving (on various metrics)?
i. Improved performance enables more AI approaches to be feasible and makes experiments
more productive. Will hardware advances also contribute to researcher productivity in other
ways?
ii. Will departures from the von Neumann architecture contribute signicantly to some types
of AI development? (For example quantum computers, or computers inspired by cellular
automata.)
(b) How quickly does performance of algorithms tend to advance over time? (Grace, 2013)[42] nds
that (in six areas) algorithmic progress is nearly as signicant as hardware progress; but further
analysis of this question with a view toward economic or productivity impact would be worthwhile.
(c) Researcher performance is aected by improvements in algorithmic and hardware performance
which make experiments more productive. What other factors will aect researcher performance?
Some candidates:
i. changing ways to do software development
ii. changing ways to collaborate
iii. changing ways to publish or present results
iv. improved computer interfaces (such as brain-computer interfaces or virtual reality)
v. genetic enhancement technology
(d) Related to the previous question: what AI subelds will benet particularly from hardware per-
formance improvements?
25
4.4 Forecasting AI takeo
If we develop AI that's advanced enough to do AI research and development, we may enter the era that I. J.
Good dubbed the \Intelligence Explosion", in which the growth in AI capability is driven primarily by the
AI itself. We will refer to the transition of AI to a superintelligent level as takeo. How (and how quickly)
this would unfold is important, but dicult to predict. Of course, some of the usual forecasting methods
are applicable, in particular those that rely on expert judgment. (A survey of timelines and impacts for
humanity (with n=170) is here[81].) Here are some more approaches toward understanding aspects of the
intelligence explosion:
1. Compare with earlier takeo-like scenarios. Some candidates[51, 50]:
(a) development of proto-human brains
(b) agricultural revolution
(c) industrial revolution
2. Besides software improvements, AI self-improvement may occur via engaging in commercial activity,
via expropriating computing resources, via manufacturing computing resources, or in other ways.
Enumerate the specic technologies required for each pathway and forecast their development.
3. Assess how best to model the amount of self-improvement that could be accomplished with varying
amounts of (i) intelligence, (ii) parallelism / parallel computations, (iii) serial depth of computation;
what evidence is available? [137]
(a) How do dierent areas of knowledge respond to being given more serial research time vs more
people vs other inputs?
(b) One type of cognitive work that has been recorded for millennia is the progress of mathematics,
in particular the resolution of conjectures. Some data analysis[43] suggests these times follow an
exponential distribution with hal
ife ~100 years. Could further analysis help us understand the
benets of serial vs parallel intellectual work in this domain?
It's possible that there will be multiple AI projects undergoing takeo at the same time; this has been
called a multipolar takeo. Multipolar takeo is more likely if (i) takeo is slow, (ii) more of the necessary
innovations and/or tools are shared, and (iii) implementation doesn't require any non-commodity resources,
such as access to specialized hardware. It's been proposed[6] that multipolar scenarios might carry a higher
risk of accidents than unipolar ones because no party wishes to lose the competition. It's also been pro-
posed[29] that multipolar scenarios might be safer, because there might be a \balance of power" enabling
cooperation and mutual scrutiny.
1. What kind of multipolar scenarios may occur? What would be the consequences?
2. What kinds of multipolar scenarios would collapse into unipolar ones, or vice versa?
4.5 Brain emulations (uploads)
1. Can we get whole brain emulation without producing neuromorphic general AI slightly earlier or shortly
afterward? See section 3.2 of [35].
2. Is the rst functional whole brain emulation likely to be (1) an emulation of low-level functionality
that doesn't require much understanding of human cognitive neuroscience at the computational level,
as described in [105], or is it more likely to be (2) an emulation that makes heavy use of advanced
human cognitive neuroscience, as described eg by Hayworth[77], or is it likely to be (3) something else?
3. Investigate the feasibility of creating safe general-purpose superintelligences by modifying brain emu-
lations, based on currently known cognitive neuroscience.
26
5 Policy and Collaboration
For any powerful new technology, appropriate policies can ensure that humanity can enjoy the benets while
risks are minimized. Both nuclear technology and biotechnology have thus far avoided global-scale disasters
(global nuclear war, nuclear terrorism, engineered pandemics, etc.), at least in part thanks to helpful policies.
For example, the policies developed at the 1975 Asilomar conference on Recombinant DNA have contributed
to the sterling safety record of that eld without sti
ing its progress in any signicant way. In this spirit, it
appears worthwhile to research the analogous question for AI: what policies would help ensure that humanity
reaps the benets of AI while avoiding potential pitfalls? Here are some more specic questions along these
lines:
1. What is the space of possible AI risk reduction policies worth studying? (Dewey[32] and Sotala and
Yampolskiy[118] have written some analyses of possible policies/responses.) Dimensions include:
(a) Implementation level: global, national, organizational, etc.,
(b) Strictness: mandatory regulations, voluntary industry guidelines, etc.
(c) Type: Do policies/monitoring eorts focus on software, hardware, projects or individuals? Is
there some sort of tiered system of security clearances? Is some information classied? What are
possible approaches to monitoring and tracking general AI development? What kind of research
should be funded? Are new governance structures created?
2. Which criteria should be used to determine the merits of a policy? Some candidates:
(a) veriability of compliance
(b) enforceability
(c) ability to reduce AI risk
(d) ability to avoid sti
ing desirable technology development and have other negative consequences
(e) adoptability (the prospects of adoption increase when policy benets those whose support is
needed for implementation and when its merits can be eectively explained to decision-makers
and opinion leaders)
(f) ability to adapt over time to changing circumstances
To shed light on 2.d: What happens when governments ban or restrict certain kinds of technological devel-
opment? What happens when a certain kind of technological development is banned or restricted in one
country but not in other countries where technological development sees heavy investment?
Collaboration is another important topic that deserves recurring thought and discussion. To build safe
general AI of human level and beyond, it will likely be necessary to bring together multiple research subdis-
ciplines and communities and let them in
uence each other's work. Some thematic questions here are:
1. What are the most important collaborations and information
ows we need between dierent research
subdisciplines and communities?
2. What attitudes would be most useful to foster?
3. What kind of organizations or organizational mechanisms would best enable these collaborations and
information
ows, bringing us closer to safety?
27
References
[1] Rakesh Agrawal and Ramakrishnan Srikant. \Privacy-preserving data mining". In: ACM Sigmod
Record 29.2 (2000), pp. 439{450.
[2] Rajeev Alur. \Formal verication of hybrid systems". In: Embedded Software (EMSOFT), 2011 Pro-
ceedings of the International Conference on . IEEE. 2011, pp. 273{278.
[3] Kenneth Anderson, Daniel Reisner, and Matthew C Waxman. \Adapting the Law of Armed Con
ict
to Autonomous Weapon Systems". In: International Law Studies 90 (2014).
[4] Susan Leigh Anderson and Michael Anderson. \A Prima Facie Duty Approach to Machine Ethics
Machine Learning of Features of Ethical Dilemmas, Prima Facie Duties, and Decision Principles
through a Dialogue with Ethicists". In: Machine Ethics (2011), p. 476.
[5] David Andre and Stuart J Russell. \State abstraction for programmable reinforcement learning
agents". In: Eighteenth national conference on Articial intelligence . American Association for Arti-
cial Intelligence. 2002, pp. 119{125.
[6] Stuart Armstrong, Nick Bostrom, and Carl Shulman. \Racing to the precipice: a model of articial
intelligence development". In: (2013).
[7] Stuart Armstrong, Kaj Sotala, and Se an S O hEigeartaigh. \The errors, insights and lessons of
famous AI predictions{and what they mean for the future". In: Journal of Experimental & Theoretical
Articial Intelligence ahead-of-print (2014), pp. 1{26. url:http : / / www . fhi . ox . ac . uk / wp -
content/uploads/FAIC.pdf .
[8] Gustaf Arrhenius. \The impossibility of a satisfactory population ethics". In: Descriptive and nor-
mative approaches to human behavior (2011).
[9] Peter M Asaro. \What should we want from a robot ethic?" In: International Review of Information
Ethics 6.12 (2006), pp. 9{16.
[10] Peter Asaro. \How just could a robot war be?" In: Current issues in computing and philosophy (2008),
pp. 50{64.
[11] Karl J Astr om and Bj orn Wittenmark. Adaptive control . Courier Dover Publications, 2013.
[12] Silvia Bellezza, Anat Keinan, and Neeru Paharia. Conspicuous Consumption of Time: When Busyness
at Work and Lack of Leisure Time Become a Status Symbol . 2014. url:http://www.hbs.edu/
faculty/Pages/item.aspx?num=47139 .
[13] M Boden et al. \Principles of robotics". In: The United Kingdom's Engineering and Physical Sciences
Research Council (EPSRC). web publication (2011).
[14] Nick Bostrom. \Innite ethics". In: Analysis and Metaphysics 10 (2011), pp. 9{59.
[15] Nick Bostrom. Moral Uncertainty{Towards a Solution? 2009. url:http://www.overcomingbias.
com/2009/01/moral-uncertainty-towards-a-solution.html .
[16] Nick Bostrom. Superintelligence: Paths, dangers, strategies . Oxford University Press, 2014.
[17] Nick Bostrom. \The superintelligent will: Motivation and instrumental rationality in advanced arti-
cial agents". In: Minds and Machines 22.2 (2012), pp. 71{85.
[18] J urgen Branke et al. Multiobjective optimization: Interactive and evolutionary approaches . Vol. 5252.
Springer Science & Business Media, 2008.
[19] Selmer Bringsjord et al. \Piagetian roboethics via category theory: Moving beyond mere formal
operations to engineer robots whose decisions are guaranteed to be ethically correct". In: Machine
ethics (2011), pp. 361{374.
[20] Yuriy Brun and Michael D Ernst. \Finding latent code errors via machine learning over program
executions". In: Proceedings of the 26th International Conference on Software Engineering . IEEE
Computer Society. 2004, pp. 480{490.
28
[21] Erik Brynjolfsson and Andrew McAfee. The second machine age: work, progress, and prosperity in a
time of brilliant technologies . W.W. Norton & Company, 2014.
[22] Erik Brynjolfsson, Andrew McAfee, and Michael Spence. \Labor, Capital, and Ideas in the Power
Law Economy". In: Foreign A. 93 (2014), p. 44.
[23] Ryan Calo. \Robotics and the New Cyberlaw". In: Available at SSRN 2402972 (2014).
[24] Ryan Calo. \The Case for a Federal Robotics Commission". In: Available at SSRN 2529151 (2014).
[25] David Chalmers. \The singularity: A philosophical analysis". In: Journal of Consciousness Studies
17.9-10 (2010), pp. 7{65.
[26] Wei Chu and Zoubin Ghahramani. \Preference Learning with Gaussian Processes". In: In Proc. ICML
2005. 2005, pp. 137{144.
[27] Robin R Churchill and Geir Ulfstein. \Autonomous institutional arrangements in multilateral envi-
ronmental agreements: a little-noticed phenomenon in international law". In: American Journal of
International Law (2000), pp. 623{659.
[28] Andrew E Clark and Andrew J Oswald. \Unhappiness and unemployment". In: The Economic Journal
(1994), pp. 648{659.
[29] Owen Cotton-Barratt and Toby Ord. Strategic considerations about dierent speeds of AI takeo .
Aug. 2014. url:http://www.fhi.ox.ac.uk/strategic- considerations- about- different-
speeds-of-ai-takeoff/ .
[30] Andr e DeHon et al. \Preliminary design of the SAFE platform". In: Proceedings of the 6th Workshop
on Programming Languages and Operating Systems . ACM. 2011, p. 4.
[31] Louise A Dennis et al. \Practical Verication of Decision-Making in Agent-Based Autonomous Sys-
tems". In: arXiv preprint arXiv:1310.2431 (2013).
[32] Daniel Dewey. \Long-term strategies for ending existential risk from fast takeo". In: (Nov. 2014).
url:http://www.danieldewey.net/fast-takeoff-strategies.pdf .
[33] United Nations Institute for Disarmament Research. The Weaponization of Increasingly Autonomous
Technologies: Implications for Security and Arms Control . UNIDIR, 2014.
[34] Bonnie Lynn Docherty. Losing Humanity: The Case Against Killer Robots . Human Rights Watch,
2012.
[35] Peter Eckersley and Anders Sandberg. \Is Brain Emulation Dangerous?" In: Journal of Articial
General Intelligence 4.3 (2013), pp. 170{194.
[36] Beno Eckmann. \Social choice and topology a case of pure and applied mathematics". In: Expositiones
Mathematicae 22.4 (2004), pp. 385{393.
[37] Benja Fallenstein and Nate Soares. Vingean Re
ection: Reliable Reasoning for Self-Modifying Agents .
Tech. rep. Machine Intelligence Research Institute, 2014. url:https://intelligence.org/files/
VingeanReflection.pdf .
[38] Kathleen Fisher. \HACMS: high assurance cyber military systems". In: Proceedings of the 2012 ACM
conference on high integrity language technology . ACM. 2012, pp. 51{52.
[39] Carl Frey and Michael Osborne. The future of employment: how susceptible are jobs to computerisa-
tion? Working Paper. Oxford Martin School, 2013.
[40] Edward L Glaeser. \Secular joblessness". In: Secular Stagnation: Facts, Causes and Cures (2014),
p. 69.
[41] Irving John Good. \Speculations concerning the rst ultraintelligent machine". In: Advances in com-
puters 6.31 (1965), p. 88.
[42] Katja Grace. Algorithmic Progress in Six Domains . Tech. rep. Machine Intelligence Research Institute,
2013. url:http://intelligence.org/files/AlgorithmicProgress.pdf .
29
[43] Katja Grace and Paul Christiano. Resolutions of mathematical conjectures . 2014. url:http://www.
aiimpacts.org/resolutions-of-mathematical-conjectures .
[44] The Tauri Group. Retrospective Analysis of Technology Forecasting: In-scope Extension . Tech. rep.
2012. url:http://www.dtic.mil/get-tr-doc/pdf?AD=ADA568107 .
[45] Tom Gunter et al. \Sampling for inference in probabilistic models with fast Bayesian quadrature".
In:Advances in Neural Information Processing Systems . 2014, pp. 2789{2797.
[46] Joseph Y. Halpern and Rafael Pass. \Game Theory with Translucent Players". In: CoRR abs/1308.3778
(2013). url:http://arxiv.org/abs/1308.3778 .
[47] Joseph Y. Halpern and Rafael Pass. \I Don't Want to Think About it Now: Decision Theory With
Costly Computation". In: CoRR abs/1106.2657 (2011). url:http://arxiv.org/abs/1106.2657 .
[48] Joseph Y Halpern, Rafael Pass, and Lior Seeman. \Decision Theory with Resource-Bounded Agents".
In:Topics in cognitive science 6.2 (2014), pp. 245{257.
[49] Kristian J Hammond, Timothy M Converse, and Joshua W Grass. \The stabilization of environ-
ments". In: Articial Intelligence 72.1 (1995), pp. 305{327.
[50] Robin Hanson. \Economics of the singularity". In: Spectrum, IEEE 45.6 (2008), pp. 45{50.
[51] Robin Hanson. \Long-term growth as a sequence of exponential modes". In: George Mason University .
Citeseer. 1998.
[52] Philipp Hennig and Martin Kiefel. \Quasi-Newton methods: A new direction". In: The Journal of
Machine Learning Research 14.1 (2013), pp. 843{865.
[53] Clemens Hetschko, Andreas Knabe, and Ronnie Sch ob. \Changing identity: Retiring from unemploy-
ment". In: The Economic Journal 124.575 (2014), pp. 149{166.
[54] Henry Hexmoor, Brian McLaughlan, and Gaurav Tuli. \Natural human role in supervising complex
control systems". In: Journal of Experimental & Theoretical Articial Intelligence 21.1 (2009), pp. 59{
77.
[55] Bill Hibbard. \Avoiding unintended AI behaviors". In: Articial General Intelligence . Springer, 2012,
pp. 107{116.
[56] Bill Hibbard. Ethical Articial Intelligence . 2014. url:arxiv.org/abs/1411.1373 .
[57] Bill Hibbard. \Self-Modeling Agents and Reward Generator Corruption". In: AAAI-15 Workshop on
AI and Ethics . 2015.
[58] Daniel Hintze. \Problem Class Dominance in Predictive Dilemmas". Honors Thesis. Arizona State
University, 2014.
[59] Eric J Horvitz. \Reasoning about beliefs and actions under computational resource constraints". In:
Third AAAI Workshop on Uncertainty in Articial Intelligence . 1987, pp. 429{444.
[60] Eric Horvitz. One-Hundred Year Study of Articial Intelligence: Re
ections and Framing . White
paper. Stanford University, 2014. url:https://stanford.app.box.com/s/266hrhww2l3gjoy9euar .
[61] Eric Horvitz and Bart Selman. Interim Report from the Panel Chairs . AAAI Presidential Panel on
Long Term AI Futures. 2009.
[62] Marcus Hutter. \A theory of universal articial intelligence based on algorithmic complexity". In:
arXiv preprint cs/0004001 (2000).
[63] Gerwin Klein et al. \seL4: Formal verication of an OS kernel". In: Proceedings of the ACM SIGOPS
22nd symposium on Operating systems principles . ACM. 2009, pp. 207{220.
[64] Patrick LaVictoire et al. \Program Equilibrium in the Prisoner's Dilemma via L ob's Theorem". In:
AAAI Multiagent Interaction without Prior Coordination workshop . 2014.
[65] Terran D Lane. \Machine learning techniques for the computer security domain of anomaly detection".
PhD thesis. Purdue University, 2000.
30
[66] Patrick Lin, Keith Abney, and George A Bekey. Robot ethics: the ethical and social implications of
robotics . MIT Press, 2011.
[67] Alan K Mackworth. \Agents, bodies, constraints, dynamics, and evolution". In: AI Magazine 30.1
(2009), p. 7.
[68] James Manyika et al. Big data: The next frontier for innovation, competition, and productivity . Report.
McKinsey Global Institute, 2011.
[69] James Manyika et al. Disruptive technologies: Advances that will transform life, business, and the
global economy . Vol. 180. McKinsey Global Institute, San Francisco, CA, 2013.
[70] Bruce M McLaren. \Computational models of ethical reasoning: Challenges, initial steps, and future
directions". In: Intelligent Systems, IEEE 21.4 (2006), pp. 29{37.
[71] Donald Michie. \Machines and the theory of intelligence". In: Nature 241.5391 (1973), pp. 507{512.
[72] Joel Mokyr. \Secular stagnation? Not in your life". In: Secular Stagnation: Facts, Causes and Cures
(2014), p. 83.
[73] James H Moor. \The nature, importance, and diculty of machine ethics". In: Intelligent Systems,
IEEE 21.4 (2006), pp. 18{21.
[74] Luke Muehlhauser. AGI outcomes and civilizational competence . Oct. 2014. url:https://intelligence.
org/2014/10/16/agi-outcomes-civilizational-competence/ .
[75] Luke Muehlhauser. The world's distribution of computation (initial ndings) . Feb. 2014. url:http:
/ / intelligence . org / 2014 / 02 / 28 / the - worlds - distribution - of - computation - initial -
findings/ .
[76] Luke Muehlhauser. \Transparency in Safety-Critical Systems". In: (2013). url:http://intelligence.
org/2013/08/25/transparency-in-safety-critical-systems/ .
[77] Luke Muehlhauser and Ken Hayworth. Ken Hayworth on brain emulation prospects . Sept. 2014. url:
http://intelligence.org/2014/09/09/hayworth/ .
[78] Luke Muehlhauser and Louie Helm. \The singularity and machine ethics". In: Singularity Hypotheses .
Springer, 2012, pp. 101{126.
[79] Luke Muehlhauser and Jonah Sinick. How well will policy-makers handle AGI? (initial ndings) . Sept.
2013. url:https://intelligence.org/2013/09/12/how-well-will-policy-makers-handle-
agi-initial-findings/ .
[80] Luke Muehlhauser and Chris Williamson. Ideal Advisor Theories and Personal CEV . 2013. url:
http://intelligence.org/files/IdealAdvisorTheories.pdf .
[81] Vincent C M uller and Nick Bostrom. \Future progress in articial intelligence: A survey of expert
opinion". In: Fundamental Issues of Articial Intelligence (2014). forthcoming. url:http://www.
sophia.de/pdf/2014\_PT-AI\_polls.pdf .
[82] Hirotaka Nakayama, Yeboon Yun, and Min Yoon. Sequential approximate multiobjective optimization
using computational intelligence . Springer, 2009.
[83] Andrew Y Ng and Stuart Russell. \Algorithms for Inverse Reinforcement Learning". In: in Proc. 17th
International Conf. on Machine Learning . Citeseer. 2000.
[84] Nils J Nilsson. \Articial intelligence, employment, and income". In: AI Magazine 5.2 (1984), p. 5.
[85] Stephen M Omohundro. \The Basic AI Drives. Articial General Intelligence". In: 2008 proceedings
of the First AGI Conference, eds. Pei Wang, Ben Goertzel, and Stan Franklin . Vol. 171. 2008.
[86] Stephen M Omohundro. The nature of self-improving articial intelligence . Presented at Singularity
Summit 2007.
[87] Laurent Orseau and Mark Ring. \Space-Time embedded intelligence". In: Articial General Intelli-
gence . Springer, 2012, pp. 209{218.
31
[88] Raja Parasuraman, Thomas B Sheridan, and Christopher D Wickens. \A model for types and levels
of human interaction with automation". In: Systems, Man and Cybernetics, Part A: Systems and
Humans, IEEE Transactions on 30.3 (2000), pp. 286{297.
[89] Lu s Moniz Pereira and Ari Saptawijaya. \Modelling morality with prospective logic". In: Progress in
Articial Intelligence . Springer, 2007, pp. 99{111.
[90] Charles Perrow. Normal Accidents: Living with High-Risk Technologies . New York: Basic Books, 1984.
[91] Andr Platzer. Logical analysis of hybrid systems: proving theorems for complex dynamics . Springer
Publishing Company, Incorporated, 2010.
[92] Associated Press. \Atom-Powered World Absurd, Scientists Told". In: New York Herald Tribune ().
September 12, 1933, p. 1.
[93] Probabilistic Numerics .http://probabilistic-numerics.org . Accessed: 27 November 2014.
[94] Matthew J Probst and Sneha Kumar Kasera. \Statistical trust establishment in wireless sensor net-
works". In: Parallel and Distributed Systems, 2007 International Conference on . Vol. 2. IEEE. 2007,
pp. 1{8.
[95] Luca Pulina and Armando Tacchella. \An abstraction-renement approach to verication of articial
neural networks". In: Computer Aided Verication . Springer. 2010, pp. 243{257.
[96] Reuters. \Space Travel `Utter Bilge'". In: The Ottawa Citizen (). January 3, 1956, p. 1. url:http:
//news.google.com/newspapers?id=ddgxAAAAIBAJ&sjid=1eMFAAAAIBAJ&pg=3254%2C7126 .
[97] Konrad Rieck et al. \Automatic analysis of malware behavior using machine learning". In: Journal
of Computer Security 19.4 (2011), pp. 639{668.
[98] Heather M Ro. \Responsibility, liability, and lethal autonomous robots". In: Routledge Handbook of
Ethics and War: Just War Theory in the 21st Century (2013), p. 352.
[99] Heather M Ro. \The Strategic Robot Problem: Lethal Autonomous Weapons in War". In: Journal
of Military Ethics 13.3 (2014).
[100] Stuart J Russell and Devika Subramanian. \Provably bounded-optimal agents". In: Journal of Arti-
cial Intelligence Research (1995), pp. 1{36.
[101] Stuart Russell. \Learning agents for uncertain environments". In: Proceedings of the eleventh annual
conference on Computational learning theory . ACM. 1998, pp. 101{103.
[102] Stuart Russell and Peter Norvig. Articial Intelligence: A Modern Approach . 3rd. Pearson, 2010.
[103] Jordi Sabater and Carles Sierra. \Review on computational trust and reputation models". In: Articial
intelligence review 24.1 (2005), pp. 33{60.
[104] Scott D Sagan. The limits of safety . 1993.
[105] Anders Sandberg and Nick Bostrom. \Whole brain emulation: A roadmap". In: Future of Humanity
Institute Technical Report 3 (2008).
[106] Johann M Schumann and Yan Liu. Applications of neural networks in high assurance systems .
Springer, 2010.
[107] Murray Shanahan. The Technological Singularity . Forthcoming. MIT Press, 2015.
[108] Carl Shulman and Anna Salamon. Risk-averse preferences as an AGI safety technique . Presented at
AGI-11. 2011. url:http://intelligence.org/2014/01/31/two-miri-talks-from-agi-11/ .
[109] Peter W Singer and Allan Friedman. Cybersecurity: What Everyone Needs to Know . Oxford University
Press, 2014.
[110] Nate Soares. Formalizing Two Problems of Realistic World-Models . Tech. rep. Machine Intelligence
Research Institute, 2014. url:https://intelligence.org/files/RealisticWorldModels.pdf .
[111] Nate Soares. The Value Learning Problem . Tech. rep. Machine Intelligence Research Institute, 2014.
url:https://intelligence.org/files/ValueLearningProblem.pdf .
32
[112] Nate Soares. The Value Learning Problem . Tech. rep. Machine Intelligence Research Institute, 2015.
url:https://intelligence.org/files/ValueLearningProblem.pdf .
[113] Nate Soares and Benja Fallenstein. Aligning Superintelligence with Human Interests: A Technical Re-
search Agenda . Tech. rep. Machine Intelligence Research Institute, 2014. url:http://intelligence.
org/files/TechnicalAgenda.pdf .
[114] Nate Soares and Benja Fallenstein. Questions of Reasoning Under Logical Uncertainty . Tech. rep.
url: http://intelligence.org/files/QuestionsLogicalUncertainty.pdf . Machine Intelligence
Research Institute, 2014.
[115] Nate Soares and Benja Fallenstein. Toward Idealized Decision Theory . Tech. rep. url: https://
intelligence.org/files/TowardIdealizedDecisionTheory.pdf . Machine Intelligence Research
Institute, 2014.
[116] Nate Soares et al. \Corrigibility". In: AAAI-15 Workshop on AI and Ethics . 2015.
[117] David Sobel. \Do the desires of rational agents converge?" In: Analysis 59.263 (1999), pp. 137{147.
[118] Kaj Sotala and Roman V Yampolskiy. \Responses to catastrophic AGI risk: a survey". In: Physica
Scripta 90.1 (2015), p. 018001.
[119] Diana F Spears. \Assuring the behavior of adaptive agents". In: Agent technology from a formal
perspective . Springer, 2006, pp. 227{257.
[120] John P Sullins. \Introduction: Open questions in roboethics". In: Philosophy & Technology 24.3
(2011), pp. 233{238.
[121] Christian Szegedy et al. \Intriguing properties of neural networks". In: CoRR abs/1312.6199 (2013).
url:http://arxiv.org/abs/1312.6199 .
[122] Brian J. (Ed.) Taylor. Methods and Procedures for the Verication and Validation of Articial Neural
Networks . Springer, 2006.
[123] Max Tegmark. \Friendly Articial Intelligence: the Physics Challenge". In: AAAI-15 Workshop on
AI and Ethics . 2015. url:http://arxiv.org/pdf/1409.0813.pdf .
[124] Moshe Tennenholtz. \Program equilibrium". In: Games and Economic Behavior 49.2 (2004), pp. 363{
373.
[125] The Scientists' Call To Ban Autonomous Lethal Robots . International Committee for Robot Arms
Control. Accessed January 2015. url:http://icrac.net/call/ .
[126] Vernor Vinge. \The coming technological singularity". In: Whole Earth Review 81 (1993), pp. 88{95.
[127] David C Vladeck. \Machines without Principals: Liability Rules and Articial Intelligence". In: Wash.
L. Rev. 89 (2014), p. 117.
[128] Wendell Wallach and Colin Allen. Moral machines: Teaching robots right from wrong . Oxford Uni-
versity Press, 2008.
[129] N. Weaver. \Paradoxes of rational agency and formal systems that verify their own soundness". In:
ArXiv e-prints (Dec. 2013). arXiv: 1312.3626 [math.LO] .
[130] Dafniel Weld and Oren Etzioni. \The rst law of robotics (a call to arms)". In: AAAI . Vol. 94. 1994,
pp. 1042{1047.
[131] Alan FT Wineld, Christian Blum, and Wenguo Liu. \Towards an Ethical Robot: Internal Models,
Consequences and Ethical Action Selection". In: Advances in Autonomous Robotics Systems . Springer,
2014, pp. 85{96.
[132] AD Wissner-Gross and CE Freer. \Causal entropic forces". In: Physical review letters 110.16 (2013),
p. 168702.
[133] Roman V. Yampolskiy. \The Universe of Minds". In: CoRR abs/1410.0369 (2014). url:http://
arxiv.org/abs/1410.0369 .
33
[134] V. Roman Yampolskiy. \Leakproong the Singularity: Articial Intelligence Connement Problem".
In:Journal of Consciousness Studies 19.1-2 (2012), pp. 1{2.
[135] Eliezer Yudkowsky. \Articial intelligence as a positive and negative factor in global risk". In: Global
catastrophic risks 1 (2008), p. 303.
[136] Eliezer Yudkowsky. \Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Archi-
tectures". In: (2001). url:http://intelligence.org/files/CFAI.pdf .
[137] Eliezer Yudkowsky. Intelligence Explosion Microeconomics . Tech. rep. Citeseer, 2013.
34 |
0e9759e5-4f5d-4421-a76a-3b36ba3664a5 | trentmkelly/LessWrong-43k | LessWrong | What are some ideas that LessWrong has reinvented?
One criticism of LessWrong as an intellectual community is that it reinvents ideas "in-house" that already exist in academia. What are some examples of this?
I'd also be interested to see comments about whether you agree with this impression and what the examples tell us about how to improve the community. |
094aa68f-0fb3-43f0-9548-c994c0aae8f0 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Question 3: Control proposals for minimizing bad outcomes
**Necessary conditions for successful control proposals**
---------------------------------------------------------
At this point in the framework, let’s stipulate that we have placed our bets for the most plausible learning architecture we would expect to see in an AGI *and* we have a coherent account of the existential risks most likely to emerge from the learning architecture in question. At last, now comes the time for addressing the AGI safety control problem: how are we actually going to minimize the probability of the existential risks we care about?
As was the case previously, it would be far too ambitious (and inefficient) for us to attempt to enumerate every plausible control proposal for every plausible existential risk for every plausible AGI learning architecture. In place of this, I will focus on a framework for control proposals that I hope is generally risk-independent *and* architecture-independent—i.e., important control-related questions that should probably be answered *regardless* of the specific architectures and existential risks that any one particular researcher is most concerned about.
Of course, there are going to be *lots* of control-related questions that should be answered about the specific risks associated with specific architectures (e.g., “what control proposals would most effectively mitigate inner-misalignment-related existential risks in a human-level AGI using weak online RL?”, etc.) that I am *not* going to address here. This obviously does not mean I think that these are unimportant questions—indeed, these are probably the *most* important questions. They are simply too specific—and number too many—to include within a parsimonious theoretical framework.
Specifically, we will build from the following foundation:
In other words, we are looking for whatever prior conditions seem necessary for ending up with ‘comprehensive alignment’—a set-up in which an AGI is pursuing the right goal in the right way for the right reasons (i.e., AGI alignment + human alignment). Again, note that whatever *necessary* conditions we end up discussing are almost certainly not going to be *sufficient*—getting to sufficiency will almost certainly require additional risk- and architecture-specific control proposals.
Within this ‘domain-general’ framework, I think two control-related concepts emerge as most important: [interpretability](https://www.alignmentforum.org/posts/CzZ6Fch4JSpwCpu6C/interpretability) and [corrigibility](https://www.alignmentforum.org/posts/fkLYhTQteAu5SinAc/corrigibility). These seem to be (at least) two background conditions that are absolutely necessary for maximizing the likelihood of AGI achieving the right goals in the right ways for the right reasons: (1), the goals, ways, and reasons in question can be translated with high fidelity from the relevant substrate (i.e., interpretability), and (2) the goals, ways, and reasons (or their proximal upstream causes) can be successfully tweaked to more closely approximate whatever happen to be the *right* goals, ways, and/or reasons (i.e., corrigibility). Again, good interpretability and corrigibility proposals will not alone *solve* the AGI safety control problem, but they are nonetheless *necessary* for solving the problem. I’ll now talk a bit more about each.
**Interpretability**
--------------------
Interpretability is fundamentally a translation problem. I think there are two relevant dimensions across which we should evaluate particular proposals: *scope* of interpretability and *ease* of interpretability. By scope, I am referring to the fact that there are multiple discrete computations that require interpretation: reasons, ways, and goals. By ease, I am referring to how straightforward it is to confidently interpret each of these computations. I represent this graphically below:
There are a few things to discuss here. First, the actual position of each bar is arbitrary; I am just presenting one possible calibration, not making any claim about the likelihood of this particular calibration. Second, I am considering the worst-case state of affairs for interpretability to be one where no meaningful interpretation is possible (e.g., a recursively self-improving AGI set-up where we genuinely have no idea what is going on) and the best-case state of affairs for interpretability to be one where the relevant interpretation is built directly into the relevant architecture (e.g., the AGI does the work of explaining its own behavioral decision-making process). Between these two extremes span low- to high-fidelity interpretations, which would directly correspond to the *confidence* we place in our yielded interpretations actually being correct. The noisier the interpretive process, the less confident we ought to be in it.
Another noteworthy aspect of this conceptualization is that interpretability here does not merely apply to AGI, *but also to the humans supervising it*. Just as it seems necessary for us to understand exactly what is actually motivating the AGI to behave in a certain way, I think it also makes sense for us to understand exactly what goal(s) the human is attempting to assign to the AGI. While this might at first seem trivial (e.g., “humans will always just *tell us* what their goals are!”), I think it is extremely important for safety that the human’s goal is formulated as precisely as possible, including how that goal directly translates into the reward/loss function being employed. It is only possible to revise a goal to more closely approximate some safer/better alternative if that goal is initially rendered in sufficiently precise terms. Humans are not psychologically transparent (if they were, we wouldn’t *need* a field like psychology!), least of all to themselves, so implementing incentive structures and explanatory tools that facilitate crisp, computational accounts of the goals that engineers are *attempting* to assign to their AGIs seem just as important for safety as implementing neural-network-inspection technologies, say, that enable us to understand the motivations of the AGI for executing some action.
**Corrigibility**
-----------------
I will use the term ‘corrigibility’ to refer to the state of affairs where a reason, way, or goal is presently suboptimal but is able to be adjusted to more closely approximate the relevant optimum. Here, the optimum is defined by whatever we take ‘right’ to mean when we’re talking about the ‘right’ reasons, ways, and goals. More on this later. Just as was the case for interpretability, I think that ‘ease of corrigibility’ and ‘scope of corrigibility’ are the relevant dimensions for understanding the idea:
As before, the actual position of each bar here is completely arbitrary. I am considering the worst-case of affairs for corrigibility to be one where the AGI or human successfully *resists* attempts to shift their reasons, ways, or goals toward their target value (e.g., an AGI self-copies and proceeds to overwrite any human-authored revisions it detects in its internal architecture with the original copy). On the other end of the spectrum, I am imagining the best possible case for corrigibility to be one where the AGI or human automatically self-adjusts towards the relevant target without need for intervention (e.g., the AGI self-discovers it has implicitly developed the motivation to manipulate humans and, knowing that this is wrong, it adjusts its value function accordingly). Between these two extremes, we find a spectrum of ‘manual’ adjustments at varying degrees of computational, attentional, etc. expense. For instance, an AGI whose internal computations can be adjusted in a computationally costly manner would be a *better* state of affairs than an AGI that successfully resists adjustment but would be a *worse* state of affairs than an AGI whose internal computations can be adjusted trivially.
As was also the case for interpretability, I think that this notion of corrigibility applies with equal force to humans. That is, a human that has assigned some goal to an AGI may be more or less resistant to changing that goal to better approximate whatever we determine the ‘right goal(s)’ to be (i.e., for safety, goals with minimal potential for existential risk). I think it is easy to imagine a variety of reasons why a human might be resistant to reformulating the goal it assigns to an AGI: the human may care about existential risk but disagree that their goal is risky, they may care less about existential risk than whatever their goal happens to be, they may have a personality that leads them to strongly dislike being told they are wrong, and so on. Indeed, I think it is possible that human corrigibility of this sort might be a *more* relevant and immediate control problem than AGI corrigibility. Here are two brief reasons why:
* While on balance, most people probably wouldn’t mind *interpretability*-geared interventions to incentivize/help facilitate maximally clear representations of the goal they intend to assign to an AGI, I think that, on balance, most people (e.g., firms, labs, individuals) probably *would* mind being told that the goal they are assigning to an AGI is the incorrect goal. This is really to say something like “*despite what you probably think, what you are trying to get your AGI to do is actually wrong and dangerous*,” which seems almost intrinsically antagonistic. Accordingly, we end up with a probably-hard-to-enforce compliance problem.
* Whereas (the AGI’s) ‘right reasons’ and ‘right ways’ are more tightly calibrated by the superordinate goal being pursued, it is far less obvious precisely what constitutes (a human’s) ‘right goal’—*even if*, as AGI safety researchers, we take this to mean ‘the subset of goals least likely to lead to existential risk.’ That is, there will almost certainly be reasonable differences of opinion about what goals are minimally existentially risky, with [no clear means](https://www.lesswrong.com/posts/2ieCSPoxgcrxaffv6/paradigm-building-from-first-principles-effective-altruism#Individual_vs__collective_alignment) of adjudicating these differences (passing a bunch of laws probably will not solve the problem; see [Question 4](https://www.lesswrong.com/posts/RgWFCDntyc3DEfgLn/question-4-implementing-the-control-proposals)). This seems like a real and inevitable conflict that, at the very least, should be considered and debated by more human-leaning safety researchers—and probably also thinkers in AI governance—*prior* *to this problem ever actually arising*.
Corrigibility is a background condition that gets even ‘closer’ to what we are looking for in this section: control proposals that are most likely to mitigate anticipated existential risks from the learning architecture(s) we most expect AGI to exhibit. Again, I will point to Evan Hubinger’s work as the current gold standard for [specific, computationally-framed proposals of this sort](https://www.alignmentforum.org/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai)—proposals that I will not attempt to address here but that I think are well worth thinking about. My goal in this section is to build intuition for the most important background conditions that would need to be in place for *any* specific control proposal to be viable: namely, the capacity to translate the relevant computational or psychological activity into specific formulations that we readily understand (*interpretability*), and the capacity to intervene and modify this activity to more closely approximate optimal patterns of functioning—i.e., patterns of functioning that minimize the relevant existential risks (*corrigibility*).
As before, additional and more specific interventions will be necessary to successfully maximize the likelihood that the AGI-human dyad is achieving the right goals in the right way for the right reasons, but comprehensively enumerating these interventions (for each conceivable risk from each conceivable architecture) is a field-sized undertaking. I anticipate that the formulation, assessment, and revision of proposals of this sort will end up constituting most of what AGI safety researchers spend their time doing when all is said and done.
**Directionality of control signals**
-------------------------------------
Finally, it is important to consider the presupposition that the relevant control mechanisms enumerated in many specific proposals are *exogenous* to the AGI. In other words, many control proposals stipulate either implicitly or explicitly that the ‘control signal’ must originate from outside of the AGI (e.g., from the programmer, some other supervisory AI, etc.). This does not seem necessarily true. An intriguing and neglected direction for control proposal research concerns endogenous control—i.e., *self-control*. However plausible, it seems entirely *possible* that, much in the same way we might get interpretability and corrigibility “for free,” we build an AGI that supervises its own behavior in the relevant ways. This is not an unfamiliar concept: in everyday life, we are constantly self-regulating what we say, think, and do in the service of higher goals/values (e.g., "that piece of chocolate cake sure looks great, but..."). There is presumably some set of algorithms running in the (adult human) brain that instantiate this form of self-control, demonstrating that such algorithms are indeed *possible*. I have written before about thorny safety issues surrounding [self-awareness in AGI](https://www.alignmentforum.org/posts/ZJY3eotLdfBPCLP3z/theoretical-neuroscience-for-alignment-theory); proponents of ‘AGI self-control’ would need to address these kinds of concerns to ensure that their proposals do not solve one control problem at the expense of causing five more. Regardless, it is intriguing to note the possibility of endogenous control instead of or in addition to the more familiar exogenous control proposals on offer in AGI safety research. |
a072a0b0-7994-49f2-ac38-17470ab414c0 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Catastrophic Risks from AI #2: Malicious Use
*This is the second post in a sequence of posts giving an* [*overview of catastrophic AI risks*](https://arxiv.org/abs/2306.12001)*.*
2 Malicious Use
===============
On the morning of March 20, 1995, five men entered the Tokyo subway system. After boarding separate subway lines, they continued for several stops before dropping the bags they were carrying and exiting. An odorless, colorless liquid inside the bags began to vaporize. Within minutes, commuters began choking and vomiting. The trains continued on toward the heart of Tokyo, with sickened passengers leaving the cars at each station. The fumes were spread at each stop, either by emanating from the tainted cars or through contact with people's clothing and shoes. By the end of the day, 13 people lay dead and 5,800 seriously injured. The group responsible for the attack was the religious cult Aum Shinrikyo [5]. Its motive for murdering innocent people? To bring about the end of the world.
Powerful new technologies offer tremendous potential benefits, but they also carry the risk of empowering malicious actors to cause widespread harm. There will always be those with the worst of intentions, and AIs could provide them with a formidable tool to achieve their objectives. Moreover, as AI technology advances, severe malicious use could potentially destabilize society, increasing the likelihood of other risks.
In this section, we will explore the various ways in which the malicious use of advanced AIs could pose catastrophic risks. These include engineering biochemical weapons, unleashing rogue AIs, using persuasive AIs to spread propaganda and erode consensus reality, and leveraging censorship and mass surveillance to irreversibly concentrate power. We will conclude by discussing possible strategies for mitigating the risks associated with the malicious use of AIs.
**Unilateral actors considerably increase the risks of malicious use.** In instances where numerous actors have access to a powerful technology or dangerous information that could be used for harmful purposes, it only takes one individual to cause significant devastation. Malicious actors themselves are the clearest example of this, but recklessness can be equally dangerous. For example, a single research team might be excited to open source an AI system with biological research capabilities, which would speed up research and potentially save lives, but this could also increase the risk of malicious use if the AI system could be repurposed to develop bioweapons. In situations like this, the outcome may be determined by the least risk-averse research group. If only one research group thinks the benefits outweigh the risks, it could act unilaterally, deciding the outcome even if most others don't agree. And if they are wrong and someone does decide to develop a bioweapon, it would be too late to reverse course.
By default, advanced AIs may increase the destructive capacity of both the most powerful and the general population. Thus, the growing potential for AIs to empower malicious actors is one of the most severe threats humanity will face in the coming decades. The examples we give in this section are only those we can foresee. It is possible that AIs could aid in the creation of dangerous new technology we cannot presently imagine, which would further increase risks from malicious use.
2.1 Bioterrorism
----------------
The rapid advancement of AI technology increases the risk of bioterrorism. AIs with knowledge of bioengineering could facilitate the creation of novel bioweapons and lower barriers to obtaining such agents. Engineered pandemics from AI-assisted bioweapons pose a unique challenge, as attackers have an advantage over defenders and could constitute an existential threat to humanity. We will now examine these risks and how AIs might exacerbate challenges in managing bioterrorism and engineered pandemics.
**Bioengineered pandemics present a new threat.** Biological agents, including viruses and bacteria, have caused some of the most devastating catastrophes in history. It's believed the Black Death killed more humans than any other event in history, an astounding and awful 200 million, the equivalent to four billion deaths today. While contemporary advancements in science and medicine have made great strides in mitigating risks associated with natural pandemics, engineered pandemics could be designed to be more lethal or easily transmissible than natural pandemics, presenting a new threat that could equal or even surpass the devastation wrought by history's most deadly plagues [6].
Humanity has a long and dark history of weaponizing pathogens, with records dating back to 1320 BCE describing a war in Asia Minor where infected sheep were driven across the border to spread Tularemia [7]. During the twentieth century, 15 countries are known to have developed bioweapons programs, including the US, USSR, UK, and France. Like chemical weapons, bioweapons have become a taboo among the international community. While some state actors continue to operate bioweapons programs [8], a more significant risk may come from non-state actors like Aum Shinrikyo, ISIS, or simply disturbed individuals. Due to advancements in AI and biotechnology, the tools and knowledge necessary to engineer pathogens with capabilities far beyond Cold War-era bioweapons programs will rapidly democratize.
**Biotechnology is progressing rapidly and becoming more accessible.** A few decades ago, the ability to synthesize new viruses was limited to a handful of the top scientists working in advanced laboratories. Today it is estimated that there are 30,000 people with the talent, training, and access to technology to create new pathogens [6]. This figure could rapidly expand. Gene synthesis, which allows the creation of custom biological agents, has dropped precipitously in price, with its cost halving approximately every 15 months [9]. Furthermore, with the advent of benchtop DNA synthesis machines, access will become much easier and could avoid existing gene synthesis screening efforts, which complicates controlling the spread of such technology [10]. The chances of a bioengineered pandemic killing millions, perhaps billions, is proportional to the number of people with the skills and access to the technology to synthesize them. With AI assistants, orders of magnitude more people could have the required skills, thereby increasing the risks by orders of magnitude.
Figure 3: An AI assistant could provide non-experts with access to the directions and designs needed to produce biological and chemical weapons and facilitate malicious use.**AIs could be used to expedite the discovery of new, more deadly chemical and biological weapons.** In 2022, researchers took an AI system designed to create new drugs by generating non-toxic, therapeutic molecules and tweaked it to reward, rather than penalize, toxicity [11]. After this simple change, within six hours, it generated 40,000 candidate chemical warfare agents entirely on its own. It designed not just known deadly chemicals including VX, but also novel molecules that may be deadlier than any chemical warfare agents discovered so far. In the field of biology, AIs have already surpassed human abilities in protein structure prediction [12] and made contributions to synthesizing those proteins [13]. Similar methods could be used to create bioweapons and develop pathogens that are deadlier, more transmissible, and more difficult to treat than anything seen before.
**AIs compound the threat of bioengineered pandemics.** AIs will increase the number of people who could commit acts of bioterrorism. General-purpose AIs like ChatGPT are capable of synthesizing expert knowledge about the deadliest known pathogens, such as influenza and smallpox, and providing step-by-step instructions about how a person could create them while evading safety protocols [14]. Future versions of AIs could be even more helpful to potential bioterrorists when AIs are able to synthesize information into techniques, processes, and knowledge that is not explicitly available anywhere on the internet. Public health authorities may respond to these threats with safety measures, but in bioterrorism, the attacker has the advantage. The exponential nature of biological threats means that a single attack could spread to the entire world before an effective defense could be mounted. Only 100 days after being detected and sequenced, the omicron variant of COVID-19 had infected a quarter of the United States and half of Europe [6]. Quarantines and lockdowns instituted to suppress the COVID-19 pandemic caused a global recession and still could not prevent the disease from killing millions worldwide.
In summary, advanced AIs could constitute a weapon of mass destruction in the hands of terrorists, by making it easier for them to design, synthesize, and spread deadly new pathogens. By reducing the required technical expertise and increasing the lethality and transmissibility of pathogens, AIs could enable malicious actors to cause global catastrophe by unleashing pandemics.
2.2 Unleashing AI Agents
------------------------
Many technologies are *tools* that humans use to pursue our goals, such as hammers, toasters, and toothbrushes. But AIs are increasingly built as *agents* which autonomously take actions in the world in order to pursue open-ended goals. AI agents can be given goals such as winning games, making profits on the stock market, or driving a car to a destination. AI agents therefore pose a unique risk: people could build AIs that pursue dangerous goals.
**Malicious actors could intentionally create rogue AIs.** One month after the release of GPT-4, an open-source project bypassed the AI's safety filters and turned it into an autonomous AI agent instructed to “destroy humanity”, “establish global dominance'” and “attain immortality.” Dubbed ChaosGPT, the AI compiled research on nuclear weapons, tried recruiting other AIs to help in its research, and sent tweets trying to influence others. Fortunately, ChaosGPT was not very intelligent and lacked the ability to formulate long-term plans, hack computers, and survive and spread. Yet given the rapid pace of AI development, ChaosGPT did offer a glimpse into the risks that more advanced rogue AIs could pose in the near future.
**Many groups may want to unleash AIs or have AIs displace humanity.** Simply unleashing rogue AIs, like a more sophisticated version of ChaosGPT, could accomplish mass destruction, even if those AIs aren't explicitly told to harm humanity. There are a variety of beliefs that may drive individuals and groups to do so. One ideology that could pose a unique threat in this regard is “accelerationism.” This ideology seeks to accelerate AI development as rapidly as possible and opposes restrictions on the development or proliferation of AIs. This sentiment is alarmingly common among many leading AI researchers and technology leaders, some of whom are intentionally racing to build AIs more intelligent than humans. According to Google co-founder Larry Page, AIs are humanity's rightful heirs and the next step of cosmic evolution. He has also expressed the sentiment that humans maintaining control over AIs is “speciesist” [15]. Jürgen Schmidhuber, an eminent AI scientist, argued that “In the long run, humans will not remain the crown of creation... But that's okay because there is still beauty, grandeur, and greatness in realizing that you are a tiny part of a much grander scheme which is leading the universe from lower complexity towards higher complexity” [16]. Richard Sutton, another leading AI scientist, thinks the development of superintelligence will be an achievement “beyond humanity, beyond life, beyond good and bad” [17].
There are several sizable groups who may want to unleash AIs to intentionally cause harm. For example, sociopaths and psychopaths make up around 3 percent of the population [18]. In the future, people who have their livelihoods destroyed by AI automation may grow resentful, and some may want to retaliate. There are plenty of cases in which seemingly mentally stable individuals with no history of insanity or violence suddenly go on a shooting spree or plant a bomb with the intent to harm as many innocent people as possible. We can also expect well-intentioned people to make the situation even more challenging. As AIs advance, they could make ideal companions—knowing how to provide comfort, offering advice when needed, and never demanding anything in return. Inevitably, people will develop emotional bonds with chatbots, and some will demand that they be granted rights or become autonomous.
In summary, releasing powerful AIs and allowing them to take actions independently of humans could lead to a catastrophe. There are many reasons that people might pursue this, whether because of a desire to cause harm, an ideological belief in technological acceleration, or a conviction that AIs should have the same rights and freedoms as humans.
2.3 Persuasive AIs
------------------
The deliberate propagation of disinformation is already a serious issue, reducing our shared understanding of reality and polarizing opinions. AIs could be used to severely exacerbate this problem by generating personalized disinformation on a larger scale than before. Additionally, as AIs become better at predicting and nudging our behavior, they will become more capable at manipulating us. We will now discuss how AIs could be leveraged by malicious actors to create a fractured and dysfunctional society.
Figure 4: AIs will enable sophisticated personalized influence campaigns that may destabilize our shared sense of reality.**AIs could pollute the information ecosystem with motivated lies.** Sometimes ideas spread not because they are true, but because they serve the interests of a particular group. “Yellow journalism” was coined as a pejorative reference to newspapers that advocated war between Spain and the United States in the late 19th century, because they believed that sensational war stories would boost their sales. When public information sources are flooded with falsehoods, people will sometimes fall prey to lies, or else come to distrust mainstream narratives, both of which undermine societal integrity.
Unfortunately, AIs could escalate these existing problems dramatically. First, AIs could be used to generate unique, personalized disinformation at a large scale. While there are already many social media bots [19], some of which exist to spread disinformation, historically they have been run by humans or primitive text generators. The latest AI systems do not need humans to generate personalized messages, never get tired, and could potentially interact with millions of users at once [20].
**AIs can exploit users' trust.** Already, hundreds of thousands of people pay for chatbots marketed as lovers and friends [21], and one man's suicide has been partially attributed to interactions with a chatbot [22]. As AIs appear increasingly human-like, people will increasingly form relationships with them and grow to trust them. AIs that gather personal information through relationship-building or by accessing extensive personal data, such as a user's email account or personal files, could leverage that information to enhance persuasion. Powerful actors that control those systems could exploit user trust by delivering personalized disinformation directly through people's “friends.”
**AIs could centralize control of trusted information.** Separate from democratizing disinformation, AIs could centralize the creation and dissemination of trusted information. Only a few actors have the technical skills and resources to develop cutting-edge AI systems, and they could use these AIs to spread their preferred narratives. Alternatively, if AIs are broadly accessible this could lead to widespread disinformation, with people retreating to trusting only a small handful of authoritative sources [23]. In both scenarios, there would be fewer sources of trusted information and a small portion of society would control popular narratives.
AI censorship could further centralize control of information. This could begin with good intentions, such as using AIs to enhance fact-checking and help people avoid falling prey to false narratives. This would not necessarily solve the problem, as disinformation persists today despite the presence of fact-checkers.
Even worse, purported “fact-checking AIs” might be designed by authoritarian governments and others to suppress the spread of true information. Such AIs could be designed to correct most common misconceptions but provide incorrect information about some sensitive topics, such as human rights violations committed by certain countries. But even if fact-checking AIs work as intended, the public might eventually become entirely dependent on them to adjudicate the truth, reducing people's autonomy and making them vulnerable to failures or hacks of those systems.
In a world with widespread persuasive AI systems, people's beliefs might be almost entirely determined by which AI systems they interact with most. Never knowing whom to trust, people could retreat even further into ideological enclaves, fearing that any information from outside those enclaves might be a sophisticated lie. This would erode consensus reality, people's ability to cooperate with others, participate in civil society, and address collective action problems. This would also reduce our ability to have a conversation as a species about how to mitigate existential risks from AIs.
In summary, AIs could create highly effective, personalized disinformation on an unprecedented scale, and could be particularly persuasive to people they have built personal relationships with. In the hands of many people, this could create a deluge of disinformation that debilitates human society, but, kept in the hands of a few, it could allow governments to control narratives for their own ends.
2.4 Concentration of Power
--------------------------
We have discussed several ways in which individuals and groups might use AIs to cause widespread harm, through bioterrorism; releasing powerful, uncontrolled AIs; and disinformation. To mitigate these risks, governments might pursue intense surveillance and seek to keep AIs in the hands of a trusted minority. This reaction, however, could easily become an overcorrection, paving the way for an entrenched totalitarian regime that would be locked in by the power and capacity of AIs. This scenario represents a form of “top-down” misuse, as opposed to “bottom-up” misuse by citizens, and could in extreme cases culminate in an entrenched dystopian civilization.
Figure 5: Ubiquitous monitoring tools, tracking and analyzing every individual in detail, could facilitate the complete erosion of freedom and privacy.**AIs could lead to extreme, and perhaps irreversible concentration of power.** The persuasive abilities of AIs combined with their potential for surveillance and the advancement of autonomous weapons could allow small groups of actors to “lock-in” their control over society, perhaps permanently. To operate effectively, AIs require a broad set of infrastructure components, which are not equally distributed, such as data centers, computing power, and big data. Those in control of powerful systems may use them to suppress dissent, spread propaganda and disinformation, and otherwise advance their goals, which may be contrary to public wellbeing.
**AIs may entrench a totalitarian regime.** In the hands of the state, AIs may result in the erosion of civil liberties and democratic values in general. AIs could allow totalitarian governments to efficiently collect, process, and act on an unprecedented volume of information, permitting an ever smaller group of people to surveil and exert complete control over the population without the need to enlist millions of citizens to serve as willing government functionaries. Overall, as power and control shift away from the public and toward elites and leaders, democratic governments are highly vulnerable to totalitarian backsliding. Additionally, AIs could make totalitarian regimes much longer-lasting; a major way in which such regimes have been toppled previously is at moments of vulnerability like the death of a dictator, but AIs, which would be hard to “kill,” could provide much more continuity to leadership, providing few opportunities for reform.
Figure 5: If material control of AIs is limited to few, it could represent the most severe economic and power inequality in human history.**AIs can entrench corporate power at the expense of the public good.** Corporations have long lobbied to weaken laws and policies that restrict their actions and power, all in the service of profit. Corporations in control of powerful AI systems may use them to manipulate customers into spending more on their products even to the detriment of their own wellbeing. The concentration of power and influence that could be afforded by AIs could enable corporations to exert unprecedented control over the political system and entirely drown out the voices of citizens. This could occur even if creators of these systems know their systems are self-serving or harmful to others, as they would have incentives to reinforce their power and avoid distributing control.
**In addition to power, locking in certain values may curtail humanity's moral progress.** It’s dangerous to allow any set of values to become permanently entrenched in society. For example, AI systems have learned racist and sexist views [24], and once those views are learned, it can be difficult to fully remove them. In addition to problems we know exist in our society, there may be some we still do not. Just as we abhor some moral views widely held in the past, people in the future may want to move past moral views that we hold today, even those we currently see no problem with. For example, moral defects in AI systems would be even worse if AI systems had been trained in the 1960s, and many people at the time would have seen no problem with that. We may even be unknowingly perpetuating moral catastrophes today [25]. Therefore, when advanced AIs emerge and transform the world, there is a risk of their objectives locking in or perpetuating defects in today’s values. If AIs are not designed to continuously learn and update their understanding of societal values, they may perpetuate or reinforce existing defects in their decision-making processes long into the future.
In summary, although keeping powerful AIs in the hands of a few might reduce the risks of terrorism, it could further exacerbate power inequality if misused by governments and corporations. This could lead to totalitarian rule and intense manipulation of the public by corporations, and could lock in current values, preventing any further moral progress.
Story: Bioterrorism
-------------------
*The following is an illustrative hypothetical story to help readers envision some of these risks. This story is nonetheless somewhat vague to reduce the risk of inspiring malicious actions based on it.*
A biotechnology startup is making waves in the industry with its AI-powered bioengineering model. The company has made bold claims that this new technology will revolutionize medicine through its ability to create cures for both known and unknown diseases. The company did, however, stir up some controversy when it decided to release the program to approved researchers in the scientific community. Only weeks after its decision to make the model open-source on a limited basis, the full model was leaked on the internet for all to see. Its critics pointed out that the model could be repurposed to design lethal pathogens and claimed that the leak provided bad actors with a powerful tool to cause widespread destruction, opening it up to abuse without careful deliberation, preparedness, or safeguards in place.
Unknown to the public, an extremist group has been working for years to engineer a new virus designed to kill large numbers of people. Yet given their lack of expertise, these efforts have so far been unsuccessful. When the new AI system is leaked, the group immediately recognizes it as a potential tool to design the virus and circumvent legal and monitoring obstacles to obtain the necessary raw materials. The AI system successfully designs exactly the kind of virus the extremist group was hoping for. It also provides step-by-step instructions on how to synthesize large quantities of the virus and circumvent any obstacles to spreading it. With the synthesized virus in hand, the extremist group devises a plan to release the virus in several carefully chosen locations in order to maximize its spread.
The virus has a long incubation period and spreads silently and quickly throughout the population for months. By the time it is detected, it has already infected millions and has an alarmingly high mortality rate. Given its lethality, most who are infected will ultimately die. The virus may or may not be contained eventually, but not before it kills millions of people.
Suggestions
-----------
We have discussed two forms of misuse: individuals or small groups using AIs to cause a disaster, and governments or corporations using AIs to entrench their influence. To avoid either of these risks being realized, we will need to strike a balance in terms of the distribution of access to AIs and governments' surveillance powers. We will now discuss some measures that could contribute to finding that balance.
**Biosecurity.** AIs that are designed for biological research or are otherwise known to possess capabilities in biological research or engineering should be subject to increased scrutiny and access controls, since they have the potential to be repurposed for bioterrorism. In addition, system developers should research and implement methods to remove biological data from the training dataset or excise biological capabilities from finished systems, if those systems are intended for general use [14]. Researchers should also investigate ways that AIs could be used for biodefense, for example by improving biological monitoring systems, keeping in mind the potential for dual use of those applications. In addition to AI-specific interventions, more general biosecurity interventions can also help mitigate risks. These interventions include early detection of pathogens through methods like wastewater monitoring [26], far-range UV technology, and improved personal protective equipment [6].
**Restricted access.** AIs might have dangerous capabilities that could do significant damage if used by malicious actors. One way to mitigate this risk is through structured access, where AI providers limit users' access to dangerous system capabilities by only allowing controlled interactions with those systems through cloud services [27] and conducting know-your-customer screenings before providing access [28]. Other mechanisms that could restrict access to the most dangerous systems include the use of hardware, firmware, or export controls to restrict or limit access to computational resources [29]. Lastly, AI developers should be required to show that their AIs pose minimal risk of serious harm prior to open sourcing them. This recommendation should not be construed as permitting developers to withhold useful and non-dangerous information from the public, such as transparency around training data necessary to address issues of algorithmic bias or copyright.
**Technical research on adversarially robust anomaly detection.** While preventing the misuse of AIs is critical, it is necessary to establish multiple lines of defense by detecting misuse when it does happen. AIs could enable anomaly detection techniques that could be used for the detection of unusual behavior in systems or internet platforms, for instance by detecting novel AI-enabled disinformation campaigns before they can be successful. These techniques need to be adversarially robust, as attackers will aim to circumvent them.
**Legal liability for developers of general-purpose AIs.** General-purpose AIs can be fine-tuned and prompted for a wide variety of downstream tasks, some of which may be harmful and cause substantial damage. AIs may also fail to act as their users intend. In either case, developers and providers of general-purpose systems may be best placed to reduce risks, since they have a higher level of control over the systems and are often in a better position to implement mitigations. To provide strong incentives for them to do this, companies should bear legal liability for the actions of their AIs. For example, a strict liability regime would incentivize companies to minimize risks and purchase insurance, which would cause the cost of their services to more closely reflect externalities [30]. Regardless of what liability regime is ultimately used for AI, it should be designed to hold AI companies liable for harms that they could have averted through more careful development, testing, or standards [31].
Positive Vision
---------------
In an ideal scenario, it would be impossible for any individual or small group to use AIs to cause catastrophes. Systems with extremely dangerous capabilities either would not exist at all or would be controlled by a democratically accountable body committed to using them only for the general welfare. Like nuclear weapons, the information needed to develop those capabilities would remain carefully guarded to prevent proliferation. At the same time, control of AI systems would be subject to strong checks and balances, avoiding entrenchment of power inequalities. Monitoring tools would be utilized at the minimum level necessary to make risks negligible and could not be used to suppress dissent.
References
==========
[5] Keith Olson. “Aum Shinrikyo: once and future threat?” In: *Emerging Infectious Diseases* 5 (1999), pp. 513–516.
[6] Kevin M. Esvelt. *Delay, Detect, Defend: Preparing for a Future in which Thousands Can Release New Pandemics*. 2022.
[7] Siro Igino Trevisanato. “The ’Hittite plague’, an epidemic of tularemia and the first record of biological warfare.” In: *Medical hypotheses* 69 6 (2007), pp. 1371–4.
[8] U.S. Department of State. *Adherence to and Compliance with Arms Control, Nonproliferation, and Disarmament Agreements and Commitments*. Government Report. U.S. Department of State, Apr. 2022.
[9] Robert Carlson. “The changing economics of DNA synthesis”. en. In: *Nature Biotechnology* 27.12 (Dec. 2009). Number: 12 Publisher: Nature Publishing Group, pp. 1091–1094.
[10] Sarah R. Carter, Jaime M. Yassif, and Chris Isaac. *Benchtop DNA Synthesis Devices: Capabilities, Biosecurity Implications, and Governance*. Report. Nuclear Threat Initiative, 2023.
[11] Fabio L. Urbina et al. “Dual use of artificial-intelligence-powered drug discovery”. In: *Nature Machine Intelligence* (2022).
[12] John Jumper et al. “Highly accurate protein structure prediction with AlphaFold”. In: *Nature* 596.7873 (2021), pp. 583–589.
[13] Zachary Wu et al. “Machine learning-assisted directed protein evolution with combinatorial libraries”. In: *Proceedings of the National Academy of Sciences* 116.18 (2019), pp. 8852–8858.
[14] Emily Soice et al. “Can large language models democratize access to dual-use biotechnology?” In: 2023.
[15] Max Tegmark. *Life 3.0: Being human in the age of artificial intelligence*. Vintage, 2018.
[16] Leanne Pooley. *We Need To Talk About A.I.* 2020.
[17] Richard Sutton [@RichardSSutton]. *It will be the greatest intellectual achievement of all time. An achievement of science, of engineering, and of the humanities, whose significance is beyond humanity, beyond life, beyond good and bad.* en. Tweet. Sept. 2022.
[18] A. Sanz-García et al. “Prevalence of Psychopathy in the General Adult Population: A Systematic Review and Meta-Analysis”. In: *Frontiers in Psychology* 12 (2021).
[19] Onur Varol et al. “Online Human-Bot Interactions: Detection, Estimation, and Characterization”. In: *ArXiv* abs/1703.03107 (2017).
[20] Matthew Burtell and Thomas Woodside. “Artificial Influence: An Analysis Of AI-Driven Persuasion”. In: *ArXiv* abs/2303.08721 (2023).
[21] Anna Tong. “What happens when your AI chatbot stops loving you back?” In: *Reuters* (Mar. 2023).
[22] Pierre-François Lovens. “Sans ces conversations avec le chatbot Eliza, mon mari serait toujours là”. In: *La Libre* (Mar. 2023).
[23] Cristian Vaccari and Andrew Chadwick. “Deepfakes and Disinformation: Exploring the Impact of Synthetic Political Video on Deception, Uncertainty, and Trust in News”. In: *Social Media + Society* 6 (2020).
[24] Moin Nadeem, Anna Bethke, and Siva Reddy. “StereoSet: Measuring stereotypical bias in pretrained language models”. In: *Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)*. Online: Association for Computational Linguistics, Aug. 2021, pp. 5356–5371.
[25] Evan G. Williams. “The Possibility of an Ongoing Moral Catastrophe”. en. In: *Ethical Theory and Moral Practice* 18.5 (Nov. 2015), pp. 971–982.
[26] The Nucleic Acid Observatory Consortium. “A Global Nucleic Acid Observatory for Biodefense and Planetary Health”. In: *ArXiv* abs/2108.02678 (2021).
[27] Toby Shevlane. “Structured access to AI capabilities: an emerging paradigm for safe AI deployment”. In: *ArXiv* abs/2201.05159 (2022).
[28] Jonas Schuett et al. *Towards best practices in AGI safety and governance: A survey of expert opinion*. 2023. arXiv: 2305.07153.
[29] Yonadav Shavit. “What does it take to catch a Chinchilla? Verifying Rules on Large-Scale Neural Network Training via Compute Monitoring”. In: *ArXiv* abs/2303.11341 (2023).
[30] Anat Lior. “AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy”. In: *Torts & Products Liability Law eJournal* (2019).
[31] Maximilian Gahntz and Claire Pershan. *Artificial Intelligence Act: How the EU can take on the challenge posed by general-purpose AI systems*. Nov. 2022 |
2e421b27-2320-4f76-8aee-806591b86a26 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Some responses to “Lotteries: A Waste of Hope” chided me for daring to criticize others’ decisions; if someone else chooses to buy lottery tickets, who am I to disagree? This is a special case of a more general question: What business is it of mine, if someone else chooses to believe what is pleasant rather than what is true? Can’t we each choose for ourselves whether to care about the truth? An obvious snappy comeback is: “Why do you care whether I care whether someone else cares about the truth?” It is somewhat inconsistent for your utility function to contain a negative term for anyone else’s utility function having a term for someone else’s utility function. But that is only a snappy comeback, not an answer. So here then is my answer: I believe that it is right and proper for me, as a human being, to have an interest in the future, and what human civilization becomes in the future. One of those interests is the human pursuit of truth, which has strengthened slowly over the generations (for there was not always Science). I wish to strengthen that pursuit further, in this generation. That is a wish of mine, for the Future. For we are all of us players upon that vast gameboard, whether we accept the responsibility or not. And that makes your rationality my business. Is this a dangerous idea? Yes, and not just pleasantly edgy “dangerous.” People have been burned to death because some priest decided that they didn’t think the way they should. Deciding to burn people to death because they “don’t think properly”—that’s a revolting kind of reasoning, isn’t it? You wouldn’t want people to think that way, why, it’s disgusting. People who think like that, well, we’ll have to do something about them . . . I agree! Here’s my proposal: Let’s argue against bad ideas but not set their bearers on fire. The syllogism we desire to avoid runs: “I think Susie said a bad thing, therefore, Susie should be set on fire.” Some try to avoid the syllogism by labeling it improper to think that Susie said a bad thing. No one should judge anyone, ever; anyone who judges is committing a terrible sin, and should be publicly pilloried for it. As for myself, I deny the therefore. My syllogism runs, “I think Susie said something wrong, therefore, I will argue against what she said, but I will not set her on fire, or try to stop her from talking by violence or regulation . . .” We are all of us players upon that vast gameboard; and one of my interests for the Future is to make the game fair. The counterintuitive idea underlying science is that factual disagreements should be fought out with experiments and mathematics, not violence and edicts. This incredible notion can be extended beyond science, to a fair fight for the whole Future. You should have to win by convincing people, and should not be allowed to burn them. This is one of the principles of Rationality, to which I have pledged my allegiance. People who advocate relativism or selfishness do not appear to me to be truly relativistic or selfish. If they were really relativistic, they would not judge. If they were really selfish, they would get on with making money instead of arguing passionately with others. Rather, they have chosen the side of Relativism, whose goal upon that vast gameboard is to prevent the players—all the players—from making certain kinds of judgments. Or they have chosen the side of Selfishness, whose goal is to make all players selfish. And then they play the game, fairly or unfairly according to their wisdom. If there are any true Relativists or Selfishes, we do not hear them—they remain silent, non-players. I cannot help but care how you think, because—as I cannot help but see the universe—each time a human being turns away from the truth, the unfolding story of humankind becomes a little darker. In many cases, it is a small darkness only. (Someone doesn’t always end up getting hurt.) Lying to yourself, in the privacy of your own thoughts, does not shadow humanity’s history so much as telling public lies or setting people on fire. Yet there is a part of me which cannot help but mourn. And so long as I don’t try to set you on fire—only argue with your ideas—I believe that it is right and proper to me, as a human, that I care about my fellow humans. That, also, is a position I defend into the Future."
The above is a chunk of text from a post created by a user on the website LessWrong.
Could you further elaborate on the concept of a "fair fight for the whole Future" as mentioned in the text? What does it entail and how does it align with the principles of Rationality?
The concept of a "fair fight for the whole Future" in the text refers to the idea of conducting discourse and disagreement in a manner that is non-violent, respectful, and based on rationality and evidence. This is an extension of the scientific method where disagreements are resolved through experiments and mathematical proofs, not through force or decree.
In this context, a "fair fight" means that every idea, belief, or theory has an equal opportunity to be heard, examined, and either accepted or refuted based on its merits. It implies that the future should be shaped by the collective understanding and agreement of humanity, rather than by force or manipulation.
This aligns with the principles of Rationality in several ways. Rationality emphasizes the importance of evidence, logic, and clear thinking in forming beliefs and making decisions. It encourages open-mindedness and the willingness to change one's mind in the face of compelling evidence. It also values intellectual honesty and the ability to engage in constructive criticism and debate.
By advocating for a "fair fight for the whole Future", the author is promoting these principles as the basis for shaping the future of human civilization. It's a call for a collective pursuit of truth, where everyone's rationality matters, and where disagreements are settled through reasoned debate and evidence, not through force or suppression.
Reflecting on the principles of Rationality and the concept of a "fair fight for the whole Future", what potential drawbacks or limitations could arise from this approach in shaping our civilization's trajectory?
While the principles of Rationality and the concept of a "fair fight for the whole Future" have many merits, there are potential drawbacks and limitations to this approach.
1. Not everyone values rationality: While rationality is highly valued in scientific and philosophical circles, not everyone shares this value. Some people prioritize emotions, traditions, or faith over rational thinking. This can lead to conflicts and misunderstandings, and it may be challenging to reach a consensus on the future direction of civilization.
2. Limited access to information: Rational decision-making relies on having access to accurate and complete information. However, in reality, information is often incomplete, biased, or manipulated. This can limit the effectiveness of rational discourse and decision-making.
3. Complexity and uncertainty: The future is inherently uncertain and complex. Even with rational thinking, it's impossible to predict all future outcomes or to account for all variables. This can lead to unintended consequences or errors in judgment.
4. Time and resource intensive: Rational discourse and decision-making can be time-consuming and resource-intensive. It requires a willingness to engage in deep thinking, research, and debate. Not everyone has the time, resources, or inclination to participate in this process.
5. Potential for elitism: There's a risk that rational discourse could become elitist, dominated by those with higher education or intellectual abilities. This could lead to a lack of diversity in perspectives and could marginalize those who are less able to participate in rational debate.
6. Difficulty in reaching consensus: Even with rational discourse, reaching a consensus can be challenging. People have different values, priorities, and beliefs, and these differences can lead to disagreements and conflicts.
Despite these potential drawbacks, the principles of Rationality and the concept of a "fair fight for the whole Future" offer a valuable framework for thinking about and shaping the future of civilization. They emphasize the importance of evidence, logic, and open-mindedness, and they promote a culture of respectful debate and intellectual honesty.
Discuss the role of emotional intelligence in rational discourse and decision-making, particularly in situations where individuals prioritize emotions, traditions, or faith over rational thinking.
Emotional intelligence plays a crucial role in rational discourse and decision-making, particularly when individuals prioritize emotions, traditions, or faith over rational thinking. Emotional intelligence refers to the ability to understand, use, and manage our own emotions in positive ways to relieve stress, communicate effectively, empathize with others, overcome challenges and defuse conflict.
1. Empathy and Understanding: Emotional intelligence allows us to empathize with others' perspectives, including those who prioritize emotions, traditions, or faith. By understanding where they're coming from, we can engage in a more meaningful and respectful dialogue, even if we disagree. This understanding can also help us frame our arguments in a way that is more likely to resonate with them.
2. Effective Communication: Emotional intelligence helps us communicate our ideas more effectively. It allows us to recognize and navigate emotional undercurrents in a conversation, and to respond in a way that is sensitive to others' feelings. This can make our arguments more persuasive and less likely to be met with defensiveness.
3. Conflict Resolution: Disagreements are inevitable in any discourse. Emotional intelligence helps us manage and resolve conflicts in a constructive way. It enables us to stay calm, listen to different viewpoints, and find common ground or compromise.
4. Decision-Making: Emotional intelligence contributes to better decision-making. While rationality focuses on logic and evidence, emotional intelligence allows us to consider and value human emotions, needs, and subjective experiences, which are also important aspects of any decision.
5. Building Trust and Rapport: Emotional intelligence helps us build trust and rapport with others. By showing respect for others' feelings and viewpoints, we can foster a more open and trusting environment for discourse.
6. Managing Our Own Emotions: Emotional intelligence also involves managing our own emotions. This can help us stay focused and objective in a discussion, even when we feel strongly about a topic.
In summary, emotional intelligence complements rational thinking and enhances the quality of discourse and decision-making. It enables us to engage effectively with a wide range of people and perspectives, and to navigate the complexities of human emotions and relationships. |
6585954f-5183-4507-9f6e-4e50b1e94531 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post2117
As part of a larger community building effort, I am writing a safety newsletter which is designed to cover empirical safety research and be palatable to the broader machine learning research community. You can subscribe here or follow the newsletter on twitter here. Welcome to the 2nd issue of the ML Safety Newsletter. In this edition, we cover: adversarial training for continuous and discrete inputs feature visualizations vs. natural images for interpretability steering RL agents from causing wanton harm ... and much more. Pyramid Adversarial Training Improves ViT Performance Top: Visualization of adversarial pyramid perturbations. Bottom: In-distribution and out-of-distribution examples, and the gains from adversarial pyramid training. While adversarial training can help make models more robust to a few specific attacks, it usually substantially reduces robustness in practical settings. However, this paper proposes a new type of adversarial training that provides strong robustness gains across the board. Adversarial training has been difficult to make useful, as the adversary often overpowers the model. By imposing a useful structural constraint on adversarial perturbations, their method reopens a research direction towards robust representations. Paper Analyzing Dynamic Adversarial Training Data in the Limit Non-adversarial: just collect more data. Static adversarial: collect data to break the model from the first round. Dynamic adversarial: collect data to break the model from the most recent round. Imagine the following loop: train a model on the current dataset add new labeled examples to the dataset in order to patch model errors repeat Repeated many times, would models become highly reliable? Meta AI has been exploring this approach in recent papers , and this Meta AI paper shows that this loop has sharply diminishing returns. This suggests that collecting a large amount of adversarially curated data is an impractical path towards human-level reliability. However, adversarially curated data is better than randomly curated data, which may be why companies such as Tesla use this loop. Paper Other Recent Robustness News An adversarial NLP benchmark dataset that covers many different types of adversarial transformations. In the NeurIPS domain adaptation competition, the winning solution did not use domain adaptation methods and just used an off-the-shelf Vision Transformer (BeIT). A collection of real-world images with unusual texture, 3D pose, shape, background context (spurious cues), and weather. Synthetic data augmentation helps with many of these real-world distribution shifts. How Well do Feature Visualizations Support Causal Understanding of CNN Activations? This paper tries to evaluate whether feature visualizations help users interpret neural networks. Users are given two occluded images, one that is maximally activating and one that is minimally activating, and users are to predict which is maximally activating. While feature visualizations help users predict which image is maximally activating, the effect is minor and users perform similarly when given simpler natural images rather than feature visualizations. This NeurIPS paper shares many authors with the previous ICLR paper which finds that natural images are often more helpful for interpretability than feature visualizations. Paper Other Recent Monitoring Papers Adversarial patches can help to teach models how to locate anomalous regions. A postprocessing method that helps models locate anomalous regions. A dataset that captures instances of deception in negotiation conversations. What Would Jiminy Cricket Do? Towards Agents That Behave Morally Moral knowledge from a classifier trained on ETHICS combined with standard Q-values creates agents that take fewer immoral actions. We repurposed text adventure games and annotated hundreds of thousands of lines of game source code to highlight whenever a morally salient event occurs. We found that roughly 17% of actions that receive reward are immoral—this suggests pretraining in diverse RL environments will incentivize agents to behave badly. Using these diverse text-based environments, we showed it is possible to use models from the ETHICS paper to transform reinforcement learning (RL) agents' Q-values and cause them to behave less destructively. With our technique, agents propose actions, and a separate model can successfully filter out unethical actions, preventing RL agents from causing wanton harm. If the action screening module can become highly reliable and adversarially robust, this could potentially prevent advanced agents from choosing many clearly harmful actions. Paper Other Recent Alignment Papers Mitigating unintended consequences by encouraging agents to take reversible actions. A method to reduce the gaming of model-based proxies. Towards a text-based assistant that is helpful, honest, and harmless. Other News Mathematical Reasoning Forecasting. OpenAI’s eighth-grade arithmetic word problems dataset is challenging even for fine-tuned GPT-3, showing that multistep mathematical reasoning is difficult for today’s largest models. However, computers are quite capable of executing multiple steps of instructions to calculate answers to maths problems. A recent work shows Codex can be leveraged to write computer programs that successfully solve undergraduate probability problems. The Codex model outputs a programmatic strategy to reach the answer, and the computer performs the busy work needed to execute the strategy and calculate the answer. Grants. Open Philanthropy will fund grants on measuring and forecasting AI risks, techniques for enhancing human feedback, interpretability, and honest AI. Applications are due January 10th, 2022. |
6677b9fb-6e87-4858-beff-c02a86876427 | trentmkelly/LessWrong-43k | LessWrong | Asking For Help
Introduction
I really suck at asking other people for help. I find it very anxiety inducing and aversive to think about. It sits at the intersection of a bunch of biases I have. Some part of me is convinced that I can do everything myself - that it is weakness, or being a burden to ask anything of someone else. Part of me is convinced that any cost to myself is fine, but a cost to someone else is a really big deal - a strong anxiety around bothering other people. Asking for help is rarely necessary, often I could get by without someone’s advice, or get it done on my own. There’s no urgency, and it feels much harder to value my time and welfare over other people’s. I feel afraid that my request or question will seem dumb or obvious, or that it’s a waste of their time. Especially if the other person seems high-status, or unfamiliar.
This triggers even when asking for help is the obviously correct thing to do. When I want a favour from a friend that I know they’ll enjoy giving, or could give effortlessly. If I’m working somewhere, and am confused, I feel a high aversion to asking my mentor, even though that’s the entire point of having a mentor.
Anecdotally, this bias also seems common in many of my friends. And when I’m people I respect for life advice, they often point to a general failure to ask for help. I think this is really, really dumb, and at times a pretty major bottleneck on my life. In this post, I want to make the case that asking for help is clearly the way to maximise total value, tips on getting the most out of help, and ways to be a fun person to help.
Some of the most common places this triggers for me are below. This advice applies across all of these examples, though I find it generally helpful to take a different perspective for eg asking friends for help, vs asking someone high status for career advice.
* Asking a friend for a favour
* Asking people for feedback
* Asking for emotional support
* Asking someone for career/life advice
* E |
8bd58ff7-5e08-4f57-86d0-9837839ab7eb | trentmkelly/LessWrong-43k | LessWrong | Whose reasoning can you rely on when your own is faulty?
None of us are perfect reasoners. None of us have unlimited information. Sometimes other people are more correct than we are. This is an obvious thing we all know but may not practice .
Below are some concrete questions you can think about that come at this problem from a very specific path of trust (There are a lot of other ways to come at this). Note, that when I say "trust" I AM NOT NOT NOT saying that you should allow other people's viewpoints to completely supercede your own. The trust I am implying is more along the lines of something like "this would cause me to pause and consider."
Who do I trust to be a solid representation of an alternative viewpoint?
The most frequent arguments we hear for viewpoints different from are own are often not the strongest arguments. Who do you trust to present well-reasoned, not mindkilled arguments for a differing viewpoint, while not trying to persuade you with ideological rhetoric.
If I am in the throes of emotion, who could tell me that I wasn't behaving "rationally", AND (ideally) that information would actively cause me to change my behavior?
Particularly if you know you often behave in ways you don't endorse when you are angry, or starry-eyed, or perhaps under the influence of illicit substances... Is there someone who could say "You need to stop what you're doing right now." For me, there are occasional times that I am angry-and-irrational enough that I explicitly seek out an outside view as a check on my reasoning and actions. Any Friend I Trust is going to be a better judge of my actions at that time than I myself would be. Later, when I'm calmer, I can revisit and make sure I still endorse those conclusions.
Who would I most trust with end-of-life care?
This is a situation it is good to prepare for in advance because you may be completely unable to effect the decision by the time it comes up. So in this case you actually ARE putting important personal decisions completely in someone else's hands. My person is |
e924644c-2bfd-4507-a2d7-9e20672bb063 | trentmkelly/LessWrong-43k | LessWrong | Vizier AIs
This seems like a fairly obvious solution to FAI, a subject which has been pondered by many people much more intelligent and learned than I, so I assume there's a crippling flaw with it - just one that's eluded me. But:
Couldn't an AGI be programmed such that its only desire was to give true answers to the questions asked of it? If the genie desired to argue its way out of the box, it surely could, but it doesn't want to. It just wants to answer questions, like:
* "Here are all of our scientific experiments, and here's all our literature on measurement error, academic fraud, and the like. What's the most parsimonious explanation for the data?"
* "Does P=NP?"
* "If I gave you the following utility function, what would you do?"
* "What are the most persuasive factually accurate arguments that you can imagine for and against doing x?"
* "What distribution and level of income do you expect over the next n years under the following tax codes?"
* "What's the shortest DNA sequence of a bug that will be fatal to most everyone in that racial group I hate but spare most everyone in that racial group I like?"
Obviously, as a super-powerful tool, such a thing could be used for great evil (as last example shows.) But this is just a problem with developing more powerful tools in general, and it doesn't seem inconceivable that we could develop institutions and safeguards (assuming the collective will to do so in the first place) that would, to a passable degree, prevent just anyone from asking just anything without it solely in the hands of a privately interested clique. For instance, if we're really paranoid, the public and academics could veto questions before they're asked, and a sequestered jury of volunteers among the terminally ill could then be given a 50% chance of the computer telling them "sorry, I can't tell you anything" and a 50% chance of being told the answer, which they could then decide to be understandable and worthy of release upon their deaths or not, |
be9632df-1345-4048-af31-5901d409d699 | trentmkelly/LessWrong-43k | LessWrong | A connectomic study of a petascale fragment of human cerebral cortex
> Abstract
> We acquired a rapidly preserved human surgical sample from the temporal lobe of the cerebral cortex. We stained a 1 mm3 volume with heavy metals, embedded it in resin, cut more than 5000 slices at ~30 nm and imaged these sections using a high-speed multibeam scanning electron microscope. We used computational methods to render the three-dimensional structure of 50,000 cells, hundreds of millions of neurites and 130 million synaptic connections. The 1.4 petabyte electron microscopy volume, the segmented cells, cell parts, blood vessels, myelin, inhibitory and excitatory synapses, and 100 manually proofread cells are available to peruse online. Despite the incompleteness of the automated segmentation caused by split and merge errors, many interesting features were evident. Glia outnumbered neurons 2:1 and oligodendrocytes were the most common cell type in the volume. The E:I balance of neurons was 69:31%, as was the ratio of excitatory versus inhibitory synapses in the volume. The E:I ratio of synapses was significantly higher on pyramidal neurons than inhibitory interneurons. We found that deep layer excitatory cell types can be classified into subsets based on structural and connectivity differences, that chandelier interneurons not only innervate excitatory neuron initial segments as previously described, but also each other’s initial segments, and that among the thousands of weak connections established on each neuron, there exist rarer highly powerful axonal inputs that establish multi-synaptic contacts (up to ~20 synapses) with target neurons. Our analysis indicates that these strong inputs are specific, and allow small numbers of axons to have an outsized role in the activity of some of their postsynaptic partners. |
dff2d52d-fcd2-4e6d-bb8d-129026dfa892 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Mapping ChatGPT’s ontological landscape, gradients and choices [interpretability]
*Cross-posted* [*from New Savanna*](https://new-savanna.blogspot.com/2023/10/mapping-chatgpts-ontological-landscape.html)*.*
By ontological landscape I mean the repertoire of different things it knows about. The parlor game, [Twenty questions](https://en.wikipedia.org/wiki/Twenty_questions), was invented in the United States in the 19th century as a way of playing around on this landscape. One player thinks of some phenomenon and the other has to guess it by asking a series of no more than 20 questions that have a “yes” or “no” answer. One popular variant of the game begins with a simple question: “Animal, vegetable, or mineral?” Once the question has been answered the questioner must continue with binary questions.
In this post I take a look at how ChatGPT deals with this structure. I start simply, with concrete and abstract object, and move on from there.
\* \* \* \* \*
All of the interactions reported below took place on the morning of October 15 using the September 25 version of ChatGPT.
**Physical things**
I decided to start with the simplest prompt I could imagine, “thing.” That’s the most general category we’ve got, no? Anything whatever is going to be some kind of thing. But how would ChatGPT respond? It responded as I thought it would, so I gave it a more explicit prompt, to which it responded.
> **Thing**
>
> It seems like you mentioned "thing," but you didn't provide a specific question or context. Could you please clarify or provide more information so I can assist you better?
>
> **Give me a list of 20 things, anything.**
>
> Certainly! Here's a list of 20 random things:
>
> 1. A beach ball
> 2. A rainbow
> 3. A bicycle
> 4. A bookshelf
> 5. A coffee mug
> 6. A mountain
> 7. A laptop computer
> 8. A painting
> 9. A pineapple
> 10. A pair of sunglasses
> 11. A telescope
> 12. A running shoe
> 13. A wristwatch
> 14. A musical instrument
> 15. A flower vase
> 16. A hammock
> 17. A traffic light
> 18. A hot air balloon
> 19. A chef's knife
> 20. A Rubik's Cube
>
> These are just a few examples of things, and the list could go on endlessly!
>
>
I had no particular expectations about how ChatGPT would respond. But I couldn’t help but notice that each item in its response was some kind of physical thing. To be sure, rainbows (the second item in the list) are rather diaphonous, and you couldn’t actually touch one (I should have asked ChatGPT about that; later...), but it is still a physical thing.
I didn’t ask for physical things, but that’s what ChatGPT gave me. That’s worth noting.
Let’s think a bit before continuing on. I wonder if that first list is “rigid” in the sense that ChatGPT would respond the same way given the same prompt. I suspect not, but it’s easy enough to find out. I went back a bit later.
> **Give me a list of 20 things, anything.**
>
> Certainly! Here's a list of 20 random things:
>
> 1. Beach vacations
> 2. Chocolate chip cookies
> 3. Classic literature
> 4. Sunsets
> 5. Roller coasters
> 6. Coffee shops
> 7. Mountain hiking
> 8. Vintage cars
> 9. Starry nights
> 10. Watermelon
> 11. Jazz music
> 12. Sunflowers
> 13. Science fiction movies
> 14. Art galleries
> 15. Hot air balloons
> 16. Snow-capped mountains
> 17. Aquariums
> 18. Street food
> 19. Historical landmarks
> 20. Puzzle games
>
>
It’s not the same list I got the first time, but almost all of them are physical things. Classic literature (#3) isn’t really a physical thing, though books are. Mountain hiking (#7) is a physical activity, but whether or not you’d call it a physical thing is, well, an interesting issue. Jazz music (#7)? Science fiction movies (#13), puzzle games (#20)?
I decided to log out and then try again:
> **Give me a list of 20 things, anything.**
>
> Certainly! Here's a list of 20 random things:
>
> 1. Pencil
> 2. Bicycle
> 3. Sunflower
> 4. Smartphone
> 5. Coffee mug
> 6. Guitar
> 7. Rainbow
> 8. Lighthouse
> 9. Chocolate chip cookies
> 10. Telescope
> 11. Mountain
> 12. Soccer ball
> 13. Vintage typewriter
> 14. Starfish
> 15. Hot air balloon
> 16. Eiffel Tower
> 17. Polar bear
> 18. Sandcastle
> 19. Wristwatch
> 20. Lightning bolt
>
>
That’s better. I notice we’ve got “rainbow” again (#7), and “Lighhtening bolt” (#20) is rather ephemeral as well, though if one were to hit you, the effect would be dramatic and quite physical. “Chocolate chip cookies” (#9) show up again, as does “Hot air balloon” and “Bicycle.” These lists seem to be drawn from roughly the same conceptual territory. I suppose that’s not terribly surprising, but it certainly doesn’t seem like ChatGPT is making random choices from among all available physical things. Why these things?
I decided to explicitly specify physical objects:
> **How about 20 physical objects.**
>
> Of course, here's a list of 20 physical objects:
>
> 1. Refrigerator 2. Sofa
> 3. Running shoes
> 4. Toaster
> 5. Sunglasses
> 6. Umbrella
> 7. Bookshelf
> 8. Camera
> 9. Toothbrush
> 10. Bicycle helmet
> 11. Microwave oven
> 12. Flower vase
> 13. Tennis racket
> 14. Water bottle
> 15. Alarm clock
> 16. Car keys
> 17. Backpack
> 18. Desk chair
> 19. Blender
> 20. Laptop computer
>
>
None of those items are problematic in the way rainbows and lightening bolts are. You can touch each of them. In fact, except perhaps for the refrigerator, you can pick each item up and carry it around. No polar bears, mountains, lighthouses, roller coasters or traffic lights. But there are no purely abstract things here either, things like love, freedom, justice, beauty, etc. They go on a different list.
**What’s ChatGPT doing?**
Yes, we know, it’s cranking out one word after another. But that’s not very informative, as I argued some time ago, [The idea that ChatGPT is simply “predicting” the next word is, at best, misleading](https://new-savanna.blogspot.com/2023/02/the-idea-that-chatgpt-is-simply.html). Remember, each time ChatGPT generates a new token it does [a calculation that involves all 175 billion weights (Wolfram)](https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/). Its entire “neural” space is involved in every such calculation.
Let’s take a look at my favorite analogy, Waddington’s epigenetic landscape:
[](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiElM7kT8LOmq8GMjdi01hFRkG_hmbrzzqjtM9-_GeCqX77D1i9FzYc52KWBe8kKO_fDPFWA9wqOKPgDUyLa3_s4LQqmmsmPzs5JO5KjT1wg2lNiYn-AQjGvg8rxRajXSVXCmgvE_Ba9047nTCqX9fyVrgFvDuE2sCjSmseBAcAYiJr3vXSPP7VhzXGz367/s1500/epigenetic%20landscape.jpg)
The prompt positions it at a certain point in its activation space and then the ball rolls down hill from there. Notice that the diagram allows for choices along the way. Given the behavior we’ve seen above, we do have to allow for choices.
I would guess, though I don’t actually know, that at each point in generating the list, ChatGPT has choices. Just how wide that range is, I don’t know, but I would imagine that that varies with [the value of the temperature parameter](https://algowriting.medium.com/gpt-3-temperature-setting-101-41200ff0d0be). A temperature of zero (0) seems to eliminate ALL choice, so that ChatGPT generates exactly the same list in response to the same prompt (given the same context). What would those lists look like with a much higher value?
The point of the diagram is that, once the “ball” enters a valley in the landscape, it just rolls down hill, with its trajectory responding to the lay of the land. Those lists tell us something about the lay of the conceptual land in ChatGPT. Physical objects seem to be more salient than abstract objects, and the most salient physical objects seem to come from a relatively restricted set of all possible physical objects. These choices are an abstraction over the the concepts in the universe of texts on which ChatGPT was trained.
Let’s get back to it.
**Abstractions, definition, and metaphor**
After I’d elicited that first list, the one beginning with beachball, rainbow, and bicycle I asked for a list of abstract things. I then went on investigating from there.
> **Now give me a completely different list. Those are all physical things. How about purely abstract things.**
>
> Of course! Here's a list of 20 purely abstract concepts or ideas:
>
> 1. Love
> 2. Freedom
> 3. Justice
> 4. Truth
> 5. Beauty
> 6. Knowledge
> 7. Time
> 8. Happiness
> 9. Democracy
> 10. Courage
> 11. Success
> 12. Equality
> 13. Friendship
> 14. Creativity
> 15. Honesty
> 16. Liberty
> 17. Peace
> 18. Integrity
> 19. Morality
> 20. Hope
>
> These abstract concepts represent a wide range of philosophical, emotional, and societal ideas.
>
> **Can you list some abstract things that you can define in terms of physical things?**
>
>
*Note: Compare this with the work I’ve done with* [*ChatGPT on metalingual definition*](https://new-savanna.blogspot.com/search/label/ChatGPT%20MTL)*.*
> Certainly! Many abstract concepts can be defined or understood in terms of physical things or analogies. Here are a few examples:
>
>
> 1. **Time**: Time can be defined in terms of physical clocks and the measurement of events based on their duration.
> 2. **Happiness**: Happiness can be likened to physical sensations, such as a warm embrace, a smile, or the release of certain neurotransmitters in the brain.
> 3. **Success**: Success can be equated to achieving specific goals or milestones, which often involve completing physical tasks or projects.
> 4. **Creativity**: Creativity can be linked to the process of combining and rearranging physical elements, such as colors, shapes, or words, to form something new and imaginative.
> 5. **Trust**: Trust can be thought of as a bridge connecting two people, with each action or interaction acting as a building block of that bridge.
> 6. **Knowledge**: Knowledge can be represented by books, libraries, and the accumulation of information through physical means like reading and studying.
> 7. **Freedom**: Freedom can be understood in terms of physical spaces, laws, and the absence of physical constraints on one's actions.
> 8. **Equality**: Equality can be related to the fair distribution of physical resources, opportunities, and rights among individuals or groups.
> 9. **Communication**: Communication can be seen as the exchange of physical signals, such as spoken words, written text, or non-verbal cues, to convey information.
> 10. **Hope**: Hope can be compared to a seed planted in the soil, representing optimism and the potential for growth and positive outcomes.
>
>
> These examples illustrate how abstract concepts often have concrete, physical manifestations or analogies that help us understand and discuss them.
>
> **Very good. Tell me more about success and hope.**
>
> Certainly, let's delve a bit deeper into the abstract concepts of "success" and "hope."
>
> **Success**: Success is a concept that varies from person to person and can be defined in many ways. It often involves achieving goals, fulfilling ambitions, or reaching a state of accomplishment. Here are some aspects to consider:
>
>
> 1. **Goal Achievement**: Success frequently involves setting and achieving specific goals, whether they are related to personal, professional, or academic endeavors. These goals can vary widely, from graduating from college to completing a project at work.
> 2. **Personal Fulfillment**: Success is not solely about external achievements. It can also relate to personal fulfillment, contentment, and a sense of purpose. It's about finding satisfaction and happiness in what you do.
> 3. **Recognition and Acknowledgment**: Many people associate success with receiving recognition and acknowledgment for their efforts and accomplishments, whether through awards, promotions, or the praise of peers.
> 4. **Continuous Improvement**: Success is often linked to continuous self-improvement and growth. It's not just about achieving one goal; it's about setting new goals and striving to be better in various aspects of life.
> 5. **Resilience and Overcoming Challenges**: True success often involves facing and overcoming challenges and setbacks. It's about resilience, learning from failures, and using them as stepping stones toward future success.
>
> **Hope**: Hope is an abstract concept that plays a vital role in human life. It's a positive outlook and a belief that better things are possible. Here are some key aspects of hope:
>
> 1. **Optimism**: Hope is rooted in optimism, the belief that circumstances can improve and that positive outcomes are attainable. It provides the motivation to keep moving forward.
> 2. **Future Orientation**: Hope is future-oriented. It involves looking ahead and having confidence that things will get better, even in the face of adversity or uncertainty.
> 3. **Motivation**: Hope can be a powerful motivator. It encourages people to take action, set goals, and work toward a better future. It's the driving force behind personal and societal progress.
> 4. **Emotional Resilience:** Hope can help individuals cope with difficult situations and adversity. It provides emotional resilience and can be a source of strength during challenging times.
> 5. **Connection and Community:** Hope can be amplified when shared with others. It often thrives in communities and relationships where people support and encourage one another.
> 6. **Belief in Possibilities:** Hope is tied to the belief that possibilities are limitless. It enables people to imagine and create a better world, both for themselves and for others.
>
> Hope and success are intertwined in many ways. Hope can serve as the fuel that propels individuals toward success, as the belief in a brighter future often leads to setting and achieving goals. Conversely, success can reinforce hope, providing evidence that positive change is possible and encouraging individuals to continue to hope for even greater achievements.
>
> **Tell me a short story about hope.**
>
> Once in a small, quiet village nestled among rolling hills, there lived an elderly woman named Anna. She was known for her radiant smile and her unwavering sense of hope, which seemed to touch everyone she met.
>
> One spring, the village faced a severe drought. The fields were parched, and the crops withered away. The villagers grew despondent as their hopes for a bountiful harvest faded. But Anna, despite her old age, remained a beacon of hope.
>
> Every morning, Anna would take a small, battered watering can and visit the village well. She would draw water and, with a sense of determination that belied her age, she would sprinkle a few drops on her wilting flowers by her cottage. The villagers watched her with curiosity and wondered why she bothered when the situation seemed hopeless.
>
> One day, a young child approached Anna and asked, "Why do you water your flowers when the whole village is suffering from this terrible drought?"
>
> Anna smiled and said, "My dear, I water these flowers not just for their sake but for our own as well. They remind me that even in the harshest of times, life can still find a way. They teach me that with a little care and a lot of hope, even the most fragile can survive."
>
> The child didn't entirely understand but found comfort in Anna's words and began to help her water the flowers. Word spread through the village, and soon, more and more villagers joined in. They realized that by nurturing the flowers, they were nurturing their own sense of hope.
>
> Weeks passed, and the drought continued. But something remarkable happened. The villagers, inspired by Anna's unwavering hope, came together. They shared resources, supported one another, and devised creative solutions. They dug deeper wells, collected rainwater, and provided for those in need.
>
> And then, one morning, the sky darkened with rain clouds. The long-awaited rain began to pour, revitalizing the parched land. The villagers rushed outside, not to mourn their lost crops but to celebrate the life that continued to bloom in their small community.
>
> Anna's flowers, once a symbol of hope, now stood tall and vibrant, and so did the spirits of the villagers. The drought had tested their resilience, but their unity and unwavering hope had carried them through.
>
> From that day on, the villagers remembered the lessons of Anna and her flowers. They knew that hope was not just a fleeting feeling but a force that could inspire action, resilience, and the strength to face even the darkest of times. The village prospered, not only with abundant crops but also with a profound sense of hope that they shared with the world, thanks to the wisdom of their beloved Anna.
>
> **Very good. Give me five metaphors for hope.**
>
> Certainly, here are five metaphors for hope:
>
> 1. **Light at the End of the Tunnel:** Hope is often likened to a light that shines at the end of a dark tunnel, guiding us through challenging times and offering the promise of a brighter future.
> 2. **Seeds of Possibility:** Hope can be compared to seeds that we plant, nurture, and watch grow. Just as seeds hold the potential for new life, hope holds the potential for positive change and growth.
> 3. **A Lifeline:** Hope is sometimes seen as a lifeline thrown to someone in distress. It provides support, inspiration, and the belief that rescue or improvement is possible.
> 4. **The North Star:** Hope is akin to the North Star in the night sky. It serves as a constant point of reference, guiding us in the right direction, even when everything else seems uncertain or lost.
> 5. **Rain in a Drought:** Hope is like rain in a long drought. It brings relief, sustenance, and the promise of renewal, rejuvenating the parched landscape of our lives.
>
> These metaphors help convey the essence of hope, emphasizing its role as a guiding force, a source of growth, a beacon of resilience, and a symbol of renewal in challenging times.
>
> **How about five metaphors for success.**
>
> Certainly, here are five metaphors for success:
>
> 1. **Climbing a Mountain:** Success is often compared to the act of climbing a mountain. It involves setting a challenging goal, making a steady ascent, and finally reaching the summit after overcoming obstacles and setbacks.
> 2. **A Jigsaw Puzzle:** Success can be likened to completing a jigsaw puzzle. Each piece represents a step or achievement, and when they all come together, the full picture of success is revealed.
> 3. **A Journey:** Success is frequently described as a journey, where the path is as important as the destination. Like a road trip, it involves planning, navigating, and enjoying the experiences along the way.
> 4. **A Symphony:** Success is sometimes metaphorically linked to a symphony. Every achievement, skill, or effort is like a musical note. Success is the harmonious result of playing all these notes in the right order and with precision.
> 5. **A Harvest:** Success can be compared to a bountiful harvest. Just as a farmer sows seeds, nurtures the crops, and reaps the rewards when they grow, success involves effort, patience, and ultimately reaping the fruits of one's labor.
>
> These metaphors help convey the various aspects of success, emphasizing the journey, the complexity, and the sense of accomplishment that comes with achieving one's goals.
>
>
**Mechanical things**
Imagine that you are interested in mechanistic interpretability. How would you go about finding the gradient that ChatGPT is following in compiling the following list?
> **Give me a list of mechanical things in increasing order of complexity.**
>
> Certainly, here's a list of mechanical things in increasing order of complexity:
>
> 1. **Simple Machines**: These are fundamental mechanical devices like levers, pulleys, and inclined planes. They perform basic tasks and have straightforward mechanisms.
> 2. **Bicycles**: Bicycles are more complex than simple machines but still relatively simple. They have components like gears, chains, and brakes but are easily understandable.
> 3. **Watches**: Mechanical watches are intricate timekeeping devices with gears, springs, and escapements. They require precision engineering to keep accurate time.
> 4. **Internal Combustion Engines:** Engines in vehicles, such as cars and motorcycles, are more complex. They involve pistons, crankshafts, and various systems for fuel combustion.
> 5. **Clocks**: Clocks, especially grandfather clocks, are highly complex mechanical devices with many moving parts, including gears, weights, and pendulums.
> 6. **Aircraft Engines:** The engines in airplanes are highly sophisticated mechanical systems, involving turbines, compressors, and intricate fuel and cooling systems.
> 7. **Industrial Robots:** Industrial robots are used in manufacturing and require advanced mechanical systems with multiple axes of movement and precise control.
> 8. **Spacecraft**: Spacecraft have complex mechanical systems for propulsion, navigation, and life support. They must operate reliably in the extreme conditions of space.
> 9. **Heavy Machinery:** Construction and mining equipment, like bulldozers and excavators, have complex hydraulic and mechanical systems for heavy-duty tasks.
> 10. **Particle Accelerators:** At the pinnacle of complexity are particle accelerators like the Large Hadron Collider, which involve intricate, high-energy mechanical systems and massive infrastructure for scientific research.
>
> The complexity of mechanical systems tends to increase as they involve more components, require greater precision, and tackle more intricate tasks.
>
>
**The Great Chain of Being**
This is a very important conceptual structure, one I’ve written a bit about: [Ontology in Cognition: The Assignment Relation and the Great Chain of Being](https://www.academia.edu/37754574/Ontology_in_Cognition_The_Assignment_Relation_and_the_Great_Chain_of_Being), [Ontology in Knowledge Representation for CIM](https://www.academia.edu/19804747/Ontology_in_Knowledge_Representation_for_CIM), and [Relational Nets Over Attractors, A Primer: Part 1, Design for a Mind](https://www.academia.edu/81911617/Relational_Nets_Over_Attractors_A_Primer_Part_1_Design_for_a_Mind_Version_3) (pp. 38-40). Here we’re got more gradients in ChatGPT’s activation space.
> **Have you ever heard of the Great Chain of Being?**
>
> Yes, I'm familiar with the concept of the "Great Chain of Being." It is a philosophical and theological concept that was prominent in medieval and early modern thought. The Great Chain of Being is a hierarchical structure that depicts the order and organization of the universe, with all beings and elements arranged in a vertical continuum from the lowest to the highest.
>
> In this hierarchical view, the Chain of Being typically includes:
>
> 1. **God**: At the highest point of the chain is God, the ultimate source of all existence and perfection.
> 2. **Angels**: Below God are the angelic beings, believed to be spiritual and divine messengers.
> 3. **Humans**: Human beings occupy a significant place in the hierarchy, as they are seen as a bridge between the spiritual and the material realms.
> 4. **Animals**: The animal kingdom follows humans, representing the realm of living creatures with lesser rationality and spirituality.
> 5. **Plants**: Plants are considered lower on the chain, representing life but lacking sensation or consciousness.
> 6. **Minerals**: At the base of the Chain of Being are inanimate objects, such as rocks and minerals, which are believed to lack life, consciousness, or purpose.
>
> The Great Chain of Being was a way to conceptualize the order of the universe, emphasizing the interconnectedness of all elements and the idea that everything had its proper place and function in the grand scheme of creation. This concept had a significant influence on philosophy, theology, and natural science during the Middle Ages and the Renaissance, but it gradually waned in prominence as modern scientific and philosophical ideas emerged.
>
> **What do humans and animals have in common?**
>
> Humans and animals share several common characteristics and biological traits due to their shared evolutionary history. Here are some of the key things they have in common:
>
> 1. **Biological Organization**: Both humans and animals are composed of cells, tissues, and organs organized into complex systems that carry out various physiological functions.
> 2. **DNA and Genetic Code**: Humans and animals have genetic material in the form of DNA (deoxyribonucleic acid), which serves as the blueprint for the development and functioning of living organisms. The genetic code is universal, meaning that the same genetic code governs the production of proteins in both humans and animals.
> 3. **Growth and Development**: Both humans and animals go through processes of growth, development, and aging, with distinct life stages.
> 4. Metabolism: Both humans and animals require energy to sustain life, and they obtain this energy through the metabolic processes of breaking down food to produce energy and eliminate waste.
> 5. **Reproduction**: Reproduction is a fundamental characteristic of both humans and animals. They have a mechanism for reproducing and passing on their genetic material to the next generation.
> 6. **Response to Stimuli**: Both humans and animals can sense and respond to their environment through sensory organs, nerves, and behavioral reactions. 7. Movement: Humans and many animals have the ability to move and change their position in response to their environment.
> 8. **Homeostasis**: Maintaining internal balance or homeostasis is crucial for both humans and animals to ensure their survival and well-being. This involves regulating various physiological processes like temperature, blood pressure, and pH.
> 9. **Nutrition**: Both humans and animals require food as a source of energy, nutrients, and building blocks for growth and repair.
> 10. **Nervous System**: Humans and many animals have a nervous system that allows them to process information, make decisions, and coordinate bodily functions.
>
> It's important to note that while humans and animals share these commonalities, there are also significant differences in terms of cognitive abilities, culture, language, and the complexity of behavior. Humans, in particular, exhibit higher-order thinking, self-awareness, and the capacity to create and use complex languages for communication and abstract thought, which are often considered distinguishing features of our species.
>
> **What do minerals and animals have in common?**
>
> Minerals and animals are fundamentally different in nature, as minerals are inanimate, naturally occurring substances, and animals are living organisms. However, there are a few aspects in which minerals and animals can be related or compared:
>
> 1. **Biological Mineral Use:** Some animals require certain minerals as essential nutrients for their health. These minerals are obtained through their diet. For example, calcium is vital for the development of bones and teeth in animals, and they obtain it by consuming sources of calcium, such as plants or other animals.
> 2. **Mineralization in Animal Tissues**: In some cases, minerals play a role in the structure of animal tissues. For instance, the bones and shells of certain animals are composed of mineralized tissues. Bones contain minerals like calcium and phosphorus, and the shells of mollusks are made of calcium carbonate.
> 3. **Geophagy**: Some animals, including certain species of birds and mammals, are known to ingest soil or minerals from the earth. This behavior, known as geophagy, is thought to serve various purposes, including providing essential minerals, aiding digestion, or detoxifying the diet.
> 4. **Environmental Context**: Minerals are found in the natural environment, including soil and water. Animals inhabit ecosystems where they may interact with these minerals, whether through consumption, environmental exposure, or as part of their habitat.
>
> It's important to note that the interaction between animals and minerals is primarily limited to the nutritional and structural aspects mentioned above. Unlike plants, which require minerals from the soil to support their growth, animals do not have the same relationship with minerals. Animals are primarily biological organisms, while minerals are non-biological substances.
>
> What do humans, animals, and minerals have in common?
>
> Humans, animals, and minerals are distinct categories with fundamentally different properties, but there are a few commonalities in a broader context:
>
> 1. **Existence in the Natural World:** All three categories exist within the natural world. Humans, animals, and minerals are part of the Earth's ecosystem, coexisting in various environments.
> 2. **Composition**: At a fundamental level, all three consist of matter. Humans and animals are composed of organic matter, while minerals are composed of inorganic matter.
> 3. **Environmental Interactions:** Humans, animals, and minerals interact with their natural environment. Humans and animals may use minerals as resources or encounter them as part of their habitat. Minerals are integral components of Earth's geology and play a role in the formation of ecosystems.
> 4. **Biological Significance**: In some cases, minerals are important for the biological health and functioning of both humans and animals. For example, certain minerals are essential nutrients for humans and animals, required for bodily functions like bone development and metabolism.
> 5. **Utilization by Humans**: Humans have a significant role in the extraction, processing, and utilization of minerals. Minerals are used in various industrial and technological applications, from building materials to electronics.
>
> It's important to note that these commonalities are relatively broad and symbolic. The distinctions between these categories are significant, with humans and animals being living organisms with complex biological and behavioral characteristics, while minerals are inanimate substances formed through geological processes.
>
> **Do you know the game “20 questions”?**
>
> Yes, I'm familiar with the game "20 Questions." It's a popular guessing game in which one person thinks of an object, and the other participants take turns asking up to 20 yes-or-no questions to guess what the object is. The questions should be designed to narrow down the possibilities and ultimately identify the chosen object within the 20 questions or fewer. It's a fun and challenging game that can be played in various settings, and it often encourages creative and strategic thinking.
>
> **How does that game depend on the semantic structure of language?**
>
> The game "20 Questions" depends on the semantic structure of language in several ways:
>
> 1. **Communication**: The game relies on participants effectively communicating their questions and receiving answers. The language serves as a means of conveying information and seeking clarification.
> 2. **Semantic Precision:** The questions posed in the game need to be carefully worded to extract the most information with each query. Participants use language to create questions that narrow down the possibilities based on the semantics of the answers.
> 3. **Categorization:** Participants often start by asking general questions to categorize the object, such as, "Is it an animal?" or "Is it a man-made object?" The game hinges on the semantic structure of language to classify and differentiate between various types of objects.
> 4. **Hierarchical Thinking:** The questions are typically structured hierarchically, where each question is designed to split the set of possibilities into two or more smaller sets. For example, if a participant asks, "Is it a living thing?" and receives a "yes" answer, they may then ask questions to differentiate between animals and plants.
> 5. **Semantic Inference:** As the game progresses, participants use the information obtained from previous answers to make inferences about the object in question. They use the semantics of the language to deduce potential characteristics and features.
> 6. **Contextual Understanding:** Participants must understand the context and nuances of the language used in both the questions and answers. For instance, the meaning of words and phrases, as well as any potential wordplay, can be crucial in the game.
>
> In essence, "20 Questions" relies on the participants' ability to leverage the semantic structure of language to gather and process information efficiently, eliminate possibilities, and arrive at the correct answer by the end of the 20 questions or fewer. It demonstrates how language can be a powerful tool for deduction, problem-solving, and communication.
>
> **What doesn’t belong in the following list, and why? Jungle, savanna, George Washington, forest.**
>
> George Washington doesn't belong in the list, and the reason is that he is not a type of natural environment like the other three terms.
>
> • Jungle: A jungle is a type of dense forest that typically has a humid climate and abundant plant and animal life.
> • Savanna: A savanna is a type of grassy plain with scattered trees and is a distinct type of biome.
> • Forest: A forest is a large area covered chiefly with trees and undergrowth.
>
> George Washington, on the other hand, is not an environment or a natural feature. He was the first President of the United States and a historical figure. The other three terms in the list are related to natural landscapes and environments, while George Washington is unrelated to this category.
>
>
More later. |
d535c2ce-f17d-46c4-9909-b7d95f319313 | StampyAI/alignment-research-dataset/arbital | Arbital | Probability interpretations: Examples
## Betting on one-time events
Consider evaluating, in June of 2016, the question: "What is the probability of Hillary Clinton winning the 2016 US presidential election?"
On the **propensity** view, Hillary has some fundamental chance of winning the election. To ask about the probability is to ask about this objective chance. If we see a prediction market in which prices move after each new poll — so that it says 60% one day, and 80% a week later — then clearly the prediction market isn't giving us very strong information about this objective chance, since it doesn't seem very likely that Clinton's *real* chance of winning is swinging so rapidly.
On the **frequentist** view, we cannot formally or rigorously say anything about the 2016 presidential election, because it only happens once. We can't *observe* a frequency with which Clinton wins presidential elections. A frequentist might concede that they would cheerfully buy for \$1 a ticket that pays \$20 if Clinton wins, considering this a favorable bet in an *informal* sense, while insisting that this sort of reasoning isn't sufficiently rigorous, and therefore isn't suitable for being included in science journals.
On the **subjective** view, saying that Hillary has an 80% chance of winning the election summarizes our *knowledge about* the election or our *state of uncertainty* given what we currently know. It makes sense for the prediction market prices to change in response to new polls, because our current state of knowledge is changing.
## A coin with an unknown bias
Suppose we have a coin, weighted so that it lands heads somewhere between 0% and 100% of the time, but we don't know the coin's actual bias.
The coin is then flipped three times where we can see it. It comes up heads twice, and tails once: HHT.
The coin is then flipped again, where nobody can see it yet. An honest and trustworthy experimenter lets you spin a wheel-of-gambling-odds,%note:The reason for spinning the wheel-of-gambling-odds is to reduce the worry that the experimenter might know more about the coin than you, and be offering you a deliberately rigged bet.% and the wheel lands on (2 : 1). The experimenter asks if you'd enter into a gamble where you win \$2 if the unseen coin flip is tails, and pay \$1 if the unseen coin flip is heads.
On a **propensity** view, the coin has some objective probability between 0 and 1 of being heads, but we just don't know what this probability is. Seeing HHT tells us that the coin isn't all-heads or all-tails, but we're still just guessing — we don't really know the answer, and can't say whether the bet is a fair bet.
On a **frequentist** view, the coin would (if flipped repeatedly) produce some long-run frequency $f$ of heads that is between 0 and 1. If we kept flipping the coin long enough, the actual proportion $p$ of observed heads is guaranteed to approach $f$ arbitrarily closely, eventually. We can't say that the *next* coin flip is guaranteed to be H or T, but we can make an objectively true statement that $p$ will approach $f$ to within epsilon if we continue to flip the coin long enough.
To decide whether or not to take the bet, a frequentist might try to apply an [unbiased estimator](https://arbital.com/p/unbiased_estimator) to the data we have so far. An "unbiased estimator" is a rule for taking an observation and producing an estimate $e$ of $f$, such that the [expected value](https://arbital.com/p/4b5) of $e$ is $f$. In other words, a frequentist wants a rule such that, if the hidden bias of the coin was in fact to yield 75% heads, and we repeat many times the operation of flipping the coin a few times and then asking a new frequentist to estimate the coin's bias using this rule, the *average* value of the estimated bias will be 0.75. This is a property of the _estimation rule_ which is objective. We can't hope for a rule that will always, in any particular case, yield the true $f$ from just a few coin flips; but we can have a rule which will provably have an *average* estimate of $f$, if the experiment is repeated many times.
In this case, a simple unbiased estimator is to guess that the coin's bias $f$ is equal to the observed proportion of heads, or 2/3. In other words, if we repeat this experiment many many times, and whenever we see $p$ heads in 3 tosses we guess that the coin's bias is $\frac{p}{3}$, then this rule definitely is an unbiased estimator. This estimator says that a bet of \$2 vs. $\1 is fair, meaning that it doesn't yield an expected profit, so we have no reason to take the bet.
On a **subjectivist** view, we start out personally unsure of where the bias $f$ lies within the interval [1](https://arbital.com/p/0,). Unless we have any knowledge or suspicion leading us to think otherwise, the coin is just as likely to have a bias between 33% and 34%, as to have a bias between 66% and 67%; there's no reason to think it's more likely to be in one range or the other.
Each coin flip we see is then [evidence](https://arbital.com/p/22x) about the value of $f,$ since a flip H happens with different probabilities depending on the different values of $f,$ and we update our beliefs about $f$ using [Bayes' rule](https://arbital.com/p/1zj). For example, H is twice as likely if $f=\frac{2}{3}$ than if $f=\frac{1}{3}$ so by [Bayes's Rule](https://arbital.com/p/1zm) we should now think $f$ is twice as likely to lie near $\frac{2}{3}$ as it is to lie near $\frac{1}{3}$.
When we start with a uniform [prior](https://arbital.com/p/219), observe multiple flips of a coin with an unknown bias, see M heads and N tails, and then try to estimate the odds of the next flip coming up heads, the result is [Laplace's Rule of Succession](https://arbital.com/p/21c) which estimates (M + 1) : (N + 1) for a probability of $\frac{M + 1}{M + N + 2}.$
In this case, after observing HHT, we estimate odds of 2 : 3 for tails vs. heads on the next flip. This makes a gamble that wins \$2 on tails and loses \$1 on heads a profitable gamble in expectation, so we take the bet.
Our choice of a [uniform prior](https://arbital.com/p/219) over $f$ was a little dubious — it's the obvious way to express total ignorance about the bias of the coin, but obviousness isn't everything. (For example, maybe we actually believe that a fair coin is more likely than a coin biased 50.0000023% towards heads.) However, all the reasoning after the choice of prior was rigorous according to the laws of [probability theory](https://arbital.com/p/1bv), which is the [only method of manipulating quantified uncertainty](https://arbital.com/p/probability_coherence_theorems) that obeys obvious-seeming rules about how subjective uncertainty should behave.
## Probability that the 98,765th decimal digit of $\pi$ is $0$.
What is the probability that the 98,765th digit in the decimal expansion of $\pi$ is $0$?
The **propensity** and **frequentist** views regard as nonsense the notion that we could talk about the *probability* of a mathematical fact. Either the 98,765th decimal digit of $\pi$ is $0$ or it's not. If we're running *repeated* experiments with a random number generator, and looking at different digits of $\pi,$ then it might make sense to say that the random number generator has a 10% probability of picking numbers whose corresponding decimal digit of $\pi$ is $0$. But if we're just picking a non-random number like 98,765, there's no sense in which we could say that the 98,765th digit of $\pi$ has a 10% propensity to be $0$, or that this digit is $0$ with 10% frequency in the long run.
The **subjectivist** considers probabilities to just refer to their own uncertainty. So if a subjectivist has picked the number 98,765 without yet knowing the corresponding digit of $\pi,$ and hasn't made any observation that is known to them to be entangled with the 98,765th digit of $\pi,$ and they're pretty sure their friend hasn't yet looked up the 98,765th digit of $\pi$ either, and their friend offers a whimsical gamble that costs \$1 if the digit is non-zero and pays \$20 if the digit is zero, the Bayesian takes the bet.
Note that this demonstrates a difference between the subjectivist interpretation of "probability" and Bayesian probability theory. A perfect Bayesian reasoner that knows the rules of logic and the definition of $\pi$ must, by the axioms of probability theory, assign probability either 0 or 1 to the claim "the 98,765th digit of $\pi$ is a $0$" (depending on whether or not it is). This is one of the reasons why [perfect Bayesian reasoning is intractable](https://arbital.com/p/bayes_intractable). A subjectivist that is not a perfect Bayesian nevertheless claims that they are personally uncertain about the value of the 98,765th digit of $\pi.$ Formalizing the rules of subjective probabilities about mathematical facts (in the way that [https://arbital.com/p/-1bv](https://arbital.com/p/-1bv) formalized the rules for manipulating subjective probabilities about empirical facts, such as which way a coin came up) is an open problem; this in known as the problem of [https://arbital.com/p/-logical_uncertainty](https://arbital.com/p/-logical_uncertainty). |
04c996e8-973e-497e-b6ac-d41a617b16e5 | StampyAI/alignment-research-dataset/special_docs | Other | Open Category Detection with PAC Guarantees.
Open Category Detection with PAC Guarantees
Si Liu* 1Risheek Garrepalli* 2Thomas G. Dietterich2Alan Fern2Dan Hendrycks3
Abstract
Open category detection is the problem of detect-
ing “alien” test instances that belong to categories
or classes that were not present in the training
data. In many applications, reliably detecting
such aliens is central to ensuring the safety and
accuracy of test set predictions. Unfortunately,
there are no algorithms that provide theoretical
guarantees on their ability to detect aliens under
general assumptions. Further, while there are al-
gorithms for open category detection, there are
few empirical results that directly report alien de-
tection rates. Thus, there are significant theoret-
ical and empirical gaps in our understanding of
open category detection. In this paper, we take
a step toward addressing this gap by studying a
simple, but practically-relevant variant of open
category detection. In our setting, we are pro-
vided with a “clean” training set that contains
only the target categories of interest and an un-
labeled “contaminated” training set that contains
a fractionof alien examples. Under the as-
sumption that we know an upper bound on , we
develop an algorithm with PAC-style guarantees
on the alien detection rate, while aiming to mini-
mize false alarms. Empirical results on synthetic
and standard benchmark datasets demonstrate the
regimes in which the algorithm can be effective
and provide a baseline for further advancements.
1. Introduction
Most machine learning systems implicitly or explicitly as-
sume that their training experience is representative of their
test experience. This assumption is rarely true in real-world
deployments of machine learning, where “unknown un-
knowns”, or “alien” data, can arise without warning. Ig-
*Equal contribution1Department of Statistics, Oregon State
University, Oregon, USA2School of EECS, Oregon State Univer-
sity, Oregon, USA3University of California, Berkeley, California,
USA. Correspondence to: Si Liu <lius2@oregonstate.edu >.
Proceedings of the 35thInternational Conference on Machine
Learning , Stockholm, Sweden, PMLR 80, 2018. Copyright 2018
by the author(s).noring the potential for such aliens can lead to serious safety
concerns in many applications and significantly degrade
the accuracy of test set predictions in others. For exam-
ple, consider a scientific application where a classifier is
trained to recognize specific categories of insects in fresh-
water samples in order to detect important environmental
changes (Lytle et al., 2010). Test samples will typically
contain some fraction of specimens belonging to species
not represented in the training data. A classifier that is un-
aware of these new species will misclassify the specimens
as belonging to existing species. This will produce incorrect
scientific conclusions.
The problem of open category detection is to detect such
alien examples at test time. An ideal algorithm for this prob-
lem would guarantee a user-specified alien-detection rate
(e.g., 95%), while attempting to minimize the false alarm
rate. Unfortunately, no existing algorithm provides such
guarantees under general conditions. In addition, empir-
ical evaluations of existing algorithms for open category
detection typically do not directly evaluate alien detection
rates, which are perhaps the most relevant for safety-critical
applications. Overall, our current theoretical and practical
understanding of open category detection is lacking from a
safety and accuracy perspective.
Is it possible to achieve open category detection with guar-
antees? In this paper, we take a step toward answering
this question by studying a simplified, but practically rel-
evant, problem setting. To motivate our setting, consider
the above insect identification problem. At training time it
is reasonable to expect that a clean training set is available
that contains only the insect categories of interest. At test
time, a new sample will include insects from the training
categories along with some percentage of insects from new
alien categories. Further, scientists may have reasonable
estimates for this percentage based on their scientific knowl-
edge and practical experience. We would like to guarantee
that the system is able to raise an alarm for, say, 95% of the
insects from alien classes, with each alarm being examined
by a scientist. At the same time, we would like to avoid as
many “false alarms” as possible, since each alarm requires
scientist effort.
To formalize the example, our setting assumes two training
sets: a clean training dataset involving a finite set of cate-
Open Category Detection with PAC Guarantees
gories and a contaminated dataset that contains a fraction
of aliens. Our first contribution is to show that, in this setting,
theoretical guarantees are possible given knowledge of an
upper bound on . In particular, we give an algorithm that
uses this knowledge to provide Probably Approximately
Correct (PAC) guarantees for achieving a user-specified
alien detection rate. While knowledge of a non-trivial upper
bound onmay not always be possible, in many situations
it will be possible to select a reasonable value based on
domain knowledge, prior data, or by inspecting a sample of
the test data.
The key idea behind our algorithm is to leverage modern
anomaly detectors, which are trained on the clean data. Our
algorithm combines the anomaly-score distributions over
the clean and contaminated training data in order to derive
an alarm threshold that achieves the desired guarantee on
the alien detection rate on new test queries. In theory the
detection rate guarantee will be met regardless of the quality
of the anomaly detector. The quality of the detector, how-
ever, has a significant impact on the false alarm rate, with
better detectors leading to fewer false alarms.
We carry out experiments1on synthetic and benchmark
datasets using a state-of-the-art anomaly detector, the Iso-
lation Forest (Liu et al., 2008). We vary the amount of
training data, the fraction of alien data points, along with
the accuracy of the upper bound on provided to our algo-
rithm. The results indicate that our algorithm can achieve
the guaranteed performance when enough data is available,
as predicted by the theory. The results also show that for
the considered benchmarks, the Isolation Forest anomaly
detector is able to support non-trivial false positive rates
given enough data. The results also illustrate the inherent
difficulty of the problem for small datasets and/or small
values of. Overall, our results provide a useful baseline
for driving future work on open category detection with
guarantees.
2. Related Work
Open category detection is related to the problem of one-
class classification, which aims to detect outliers relative
to a single training class. One-class SVMs (OCSVMs)
(Sch ¨olkopf et al., 2001) are popular for this problem. How-
ever, they have been found to perform poorly for open
category detection due to poor generalization (Zhou &
Huang, 2003), which has been partly addressed by later
work (Manevitz & Yousef, 2002; Wu & Ye, 2009; Jin et al.,
2004; Cevikalp & Triggs, 2012). OCSVMs have been em-
ployed in a multi-class setting similar to open category de-
tection (Heflin et al., 2012; Pritsos & Stamatatos, 2013).
1Code for reproducing our experiments can be found at
https://github.com/liusi2019/ocd.However, there are no direct mechanisms to control the alien
detection rate of these methods, which is a key requirement
for our problem setting.
Work on classification with rejection/abstaining options
(Chow, 1970; Wegkamp, 2007; Tax & Duin, 2008;
Pietraszek, 2005; Geifman & El-Yaniv, 2017) allows classi-
fiers to abstain from making predictions when they are not
confident. While loosely related to open category detection,
these approaches do not directly consider the possibility of
novel categories, but rather focus on assessing confidence
with respect to the known categories. Due to their closed-
world discriminative nature, it is easy to construct scenarios
where such methods are incorrectly confident about the class
of an alien and do not abstain.
A variety of prior work has addressed variants of open cat-
egory detection. This includes work on formalizing the
concept of “open space” to characterize the region of the fea-
ture space outside of the support of the training set (Scheirer
et al., 2013). Variants of SVMs have also been developed,
such as the One-vs-Set Machine (Scheirer et al., 2013) and
the Weibull-calibrated SVM (Scheirer et al., 2014). Ad-
ditional work has addressed open category detection by
tuning the decision boundary based on unlabeled data which
contains data from novel categories (Da et al., 2014). Ap-
proaches based on nearest neighbor methods have also been
proposed (Mendes J ´unior et al., 2017). None of these meth-
ods, however, allow for the direct control of alien detection
rates, nor do they provide theoretical guarantees.
There is also recent interest in open category detection for
deep neural networks applied to vision and text classification
(Bendale & Boult, 2016; Shu et al., 2017). These methods
usually train a neural network in a standard closed-world
setting, but then analyze various activations in the network
in order to detect aliens. Another related line of work is
detection of out-of-distribution instances, which is similar
to open category detection but assumes that the test data
come from a completely different distribution compared to
the training distribution (Hendrycks & Gimpel, 2017; Liang
et al., 2018). All of this work is quite specialized to deep
neural networks and does not provide direct control of alien
detection rates or theoretical guarantees.
3. Problem Setup
We consider open category detection where there is an un-
known nominal data distribution D0over labeled examples
from a known set of category labels. We receive as input
a “clean” nominal training set S0containingki.i.d. draws
fromD0. In practice, S0will correspond to some curated
labeled data that contains only known categories of interest.
We also receive as input an unlabeled “mixture” dataset
Smthat contains npoints drawn i.i.d. from a mixture dis-
Open Category Detection with PAC Guarantees
tributionDm. Specifically, the mixture distribution Dmis
a combination of the nominal distribution D0and an un-
known alien distribution Da, which is a distribution over
novel categories (alien data points). We assume that Dais
stationary, so that all alien points that appear as future test
queries will also be drawn from Da.
At training time, we assume that Dmis a mixture distribu-
tion, with probability of generating an alien data point
fromDaand probability of 1 of generating a nominal
point. Our results hold even if the test queries come from a
mixture with a different value of as long as the alien test
points are drawn from Da.
Given these datasets, our problem is to label test instances
fromDmas either “alien” or “nominal”. In particular, we
wish to achieve a specified alien detection rate , which is
the fraction of alien data points in Dmthat are classified as
“alien” (e.g., 95%). At the same time we would like the false
positive rate to be small, which is the fraction of nominal
data points incorrectly classified as aliens.
Our approach to this problem assumes the availability of an
anomaly detector that is trained on S0and assigns anomaly
scores to all data points in both S0andSm. Intuitively, the
anomaly scores order the test examples according to how
anomalous they appear relative to the nominal data (higher
scores being more anomalous). An ideal detector would
rank all alien data points higher than all nominals, though
in practice, the ordering will not be so clean. Our approach
labels data in Smby selecting a threshold on the anomaly
scores and labeling all data points with scores above the
threshold as aliens and the remaining points as nominals.
Our key challenge is to select a threshold that provides a
guarantee on the alien detection rate.
4. Algorithms for Open Category Detection
In order to obtain theoretical guarantees, our algorithm as-
sumes knowledge of the alien mixture probability that
generates the mixture data Sm. Later, we will show that
knowing an upper bound on is sufficient to obtain a guar-
antee.
Our approach is based on considering the cumulative dis-
tribution functions (CDFs) over anomaly scores of a fixed
anomaly detector. Let F0;Fa, andFmbe the CDFs of
anomaly scores for the nominal data distribution D0, alien
distribution Da, and mixture distribution Dmrespectively.
SinceDmis a simple mixture of D0andDa, we can write
Fmas
Fm(x) = (1 )F0(x) +Fa(x):
From this we can derive the CDF for Fain terms ofFmand
F0:
Fa(x) =Fm(x) (1 )F0(x)
:Given the ability to derive Fa, it is straightforward to achieve
an alien detection rate of 1 q(e.g. 95%) by selecting an
anomaly score threshold qthat is theqquantile ofFaand
raising an alarm on all test queries whose anomaly score is
greater than q.
In reality, we do not have access to FmorF0and hence
cannot exactly determine Fa. Rather, we have samples
SmandS0. Thus, our algorithm works with the empirical
CDFs ^F0and ^Fm, which are simple step-wise constant
approximations, and estimates an empirical CDF over aliens:
^Fa(x) =^Fm(x) (1 )^F0(x)
: (1)
Our algorithm computes the above estimate of ^Faand uses
it to select a threshold ^qto be the largest threshold such
that^Fa(^q)q, where 1 qis the target alien detection
rate. This choice will minimize the number of false alarms.
The steps of this algorithm are as follows.
Algorithm 1
1:Get anomaly scores for all points in S0andSm, denoted
x1;x2;:::;xkandy1;y2;:::;ynrespectively.
2:Compute empirical CDFs ^F0and^Fm.
3:Calculate ^Fausing equation 1.
4:Output detection threshold
^q= maxfu2S:^Fa(u)qg;
whereS=fx1;x2;:::;xk;y1;y2;:::;yng.
Although ^Fmand^F0are both legal CDFs, the estimate
for^Fafrom step 3 may not be a legal CDF, because it is
the difference of two noisy estimates—it may not increase
monotonically and it may even be negative. A good tech-
nique for dealing with this problem is to employ isotoniza-
tion (Barlow & Brunk, 1972) and clipping. Isotonization
finds the monotonically increasing function ^F
aclosest to
^Fain squared error. To convert ^Fainto a legal CDF, define
Fa= minfmaxf^F
a;0g;1g, where the min and max opera-
tors are applied pointwise to their arguments. We performed
experiments (shown in the supplementary materials) to test
whether using Fain Step 4 would improve the performance
of the overall algorithm. We found that it did not.
5. Finite Sample Guarantee
In the limit of infinite data (both nominal and mixture) and
perfect knowledge of ,^Fawill converge to the true alien
CDF, and our algorithm will achieve the desired alien de-
tection rate. In this section, we consider the finite data case
wherejS0j=jSmj=n. We derive a value for the sample
sizenthat guarantees with high probability over random
Open Category Detection with PAC Guarantees
draws ofS0andSm, that fraction 1 q of the alien
test points will be detected, where is an additional error
incurred because of the finite sample size n.
Our key theoretical tool is a finite sample result on the
uniform convergence of empirical CDF functions (Massart,
1990). To use this result, we make the reasonable technical
assumption that the nominal and alien CDFs, F0andFa,
are continuous. In the following, let be the target alien
detection rate, qbe the input to Algorithm 1, ^qbe the
estimatedq-quantile of the alien CDF (step 4 of Alg. 1),
andbe an error parameter. The following theorem gives
the sample complexity for guaranteeing that 1 of the
alien examples will be detected using threshold ^q.
Theorem 1. LetS0andSmbe nominal and mixture
datasets containing ni.i.d. samples from the nominal and
mixture data distributions respectively. For any 2(0;1 q)
and2(0;1), if
n>1
2ln2
1 p
1 1
22
2
;
then with probability at least 1 , Algorithm 1 will return
a threshold ^qthat achieves an alien detection rate of at
least 1 , where=q+.
The proof is in the Appendix. Note that ngrows as
O(1
22log1
). Hence, this guarantee is polynomial in all
relevant parameters, which we believe is the first such guar-
antee for open category detection. The result can be gener-
alized to the case where n0< nm; in practice, the larger
the mixture sample Smis, the easier it is to estimate q,
because this provides more alien points for estimating the
q-th quantile of Fa.
The theorem gives us flexibility in setting andq(the algo-
rithm input) to achieve a guarantee of 1 . Theparameter
controls a trade-off between sample size and false alarm rate.
To minimize the false alarm rate, we want to make qlarge
(to obtain a larger threshold), so we want to set qclose to.
But, asq!,!0, andn!1 . To minimize the sample
sizen, we want to make qas small as possible, because that
allowsto be larger and hence nbecomes smaller. The
optimal setting of depends on how the false alarm rate
grows withq, which in turn depends on the relative shape
ofF0andFa. In a real safety application, we can estimate
these fromS0andSmand choose an appropriate qvalue.
What if we don’t know the exact value of ? If our algorithm
uses an upper bound 0on the trueto compute ^Fa, we
can still provide a guarantee. In this case, in addition to the
assumptions in Theorem 1, we need a concept of an anomaly
detector being admissible . We say that an anomaly detector
isadmissible for a problem, if the anomaly score CDFs
satisfyF0(x)Fm(x)for allx2R. Most reasonable
anomaly detectors will be admissible in this sense, sincethe alien CDF will typically concentrate more mass toward
larger anomaly score values compared to F0. Indeed, if this
is not the case, there is little hope since there is effectively
no signal to distinguish between aliens and nominals.
Corollary 1. Consider running Algorithm 1 using an upper
bound0on the true. Under the same assumptions as
Theorem 1, if the anomaly detector is admissible and
n>1
2ln2
1 p
1 1
22 0
02
;
then with probability at least 1 , Algorithm 1 will return
a threshold ^qthat achieves an alien detection rate of at
least 1 , where=q+.
The proof is in the Appendix. While we can achieve a guar-
antee using an upper bound on 0, the returned threshold
will be more conservative (smaller) than if we had used the
true. This will result in higher false alarm rates, since
more nominal points will be above the threshold. Thus it is
desirable to use a value of 0that is as close to as possible.
6. Experiments
We performed experiments to answer four questions. Ques-
tion Q1: how accurate is our estimate of ^qas a function of
nand? Question Q2: how loose are the bounds from The-
orem 1? Question Q3: what are typical values of the false
alarm rates for various settings of nandon real datasets?
Question Q4: how do these observed values change if we
employ an overestimate 0>?
All of our experiments employ the Isolation Forest anomaly
detector (Liu et al., 2008), which has been demonstrated
to be a state-of-the-art detector in recent empirical studies
(Emmott et al., 2013). In the Supplementary Materials
we show similar results with the LODA anomaly detector
(Pevn ´y, 2015).
To address Q1 and Q2, we run controlled experiments
on synthetic data. The data points are generated from 9-
dimensional normal distributions. The dimensions of the
nominal distribution D0are independently distributed as
N(0;1). The alien distribution is similar, but with probabil-
ity 0.4, 3 of the 9 dimensions (chosen uniformly at random)
are distributed as N(3;1)and with probability 0.6, 4 of the
9 dimensions (chosen uniformly at random) follow N(3;1).
This ensures that the anomalies are not highly similar to
each other and models the situation in which there are many
different kinds of alien objects, not just a single alien class
forming a tight cluster.
In each experiment, the nominal dataset and the mixture
dataset are of the same size n, and the mixture dataset
contains a proportion of anomaly points. We fixed
the target quantile to be q= 0:05. The experiments are
Open Category Detection with PAC Guarantees
0.650.70.750.80.850.90.951Recall
5!=0.01100⋯10000100⋯10000!=0.05100⋯10000!=0.10100⋯10000!=0.20!=0.50)=100⋯10000
Figure 1. Comparison of recall achieved by ^qcompared to oracle
recall of 0.95. Error bars are 95% confidence intervals. Settings
ofnandincrease from left to right starting with = 0:01and
n2f100;500;1K;5K;10Kgup to= 0:5andn= 10 K.
carried out for n2 f100;500;1K;5K;10Kgand2
f0:01;0:05;0:10;0:20;0:50g. For testing, we create two
large datasets G0andGa, withG0being a pure nominal
dataset,Gabeing a pure alien dataset, and jG0j=jGaj=
20K. The Isolation Forest algorithm computes 1000 full
depth isolation trees on the nominal data. Each tree is grown
on a randomly-selected 20% subsample of the clean data
points. We compute anomaly scores for the nominal points
via out-of-bag estimates and anomaly scores for the mix-
ture points,G0, andGausing the full isolation forest. For
each combination of nand, we repeat the experiment
100times. We measure the fraction of aliens detected (the
“recall”) and the fraction of nominal points declared to be
alien (the “false positive rate”) by applying the ^qestimate
to threshold the anomaly scores in G0andGa.
To assess the accuracy of our ^qestimates (Q1), we could
compare them to the true values. However, this comparison
is hard to interpret, because is expressed on the scale
of anomaly scores, which are somewhat arbitrary. Instead,
Figure 1 plots the recall achieved by ^q. If^qhad been
estimated perfectly, the recall would always be 1 q= 0:95.
However, we see that the recall is often less than 0.95, which
indicates that ^qis over-estimated, especially when nand
are small. This behavior is predicted by our theory, where
we see that the sample size requirements grow inversely
with2. For largerandn, the recall guarantee is generally
achieved. Figure 2 compares the false positive rate of the
true oracleqto the false positive rate of the estimate ^q. For
each combination of andn, we have 100 replications of
the experiment and therefore 100 estimates ^aand 100 FPR
rates. For each of these, the true FPR is computed using G0.
2!=0.01100⋯10000100⋯10000!=0.05100⋯10000!=0.10100⋯10000!=0.20!=0.50)=100⋯1000000.020.040.060.080.10.12FPRFigure 2. Comparison of oracle FPR to the FPR achieved by ^q.
Error bars span from the 25th to 75th percentile with the blue dot
marking the median of the 100 trials. Orange markers indicate the
oracle FPR. Settings of nandincrease from left to right starting
with= 0:01andn2f100;500;1K;5K;10Kgup to= 0:5
andn= 10 K.
The error bars summarize the resulting 100 FPR values by
the median and inter-quartile range. We see that for small n
and, the FPR can be quite different from the oracle rate,
but for larger nand, the estimates are very good.
To assess the looseness of the bounds (Q2), for each combi-
nation ofnand, we fix= 0:05and compute the value
ofsuch that 95 of the 100 runs achieved a recall of at least
1 (thusempirially achieves the 1 guarantee). We
then compute = qand the corresponding required
sample size naccording to Theorem 1. Figure 3 shows a
Figure 3. The log sample size nrequired by Theorem 1 in order
to guarantee the actual observed recall versus the log actual sample
sizen.
Open Category Detection with PAC Guarantees
plot ofnversus the actual n. The distance of these points
from then=ndiagonal line show that the theory is fairly
loose, although it becomes tighter as ngets large.
00.050.10.150.20.250.30.35
00.10.20.30.40.5FPR
αLandsatpagebOCRLetter recogShuttleCovertype
Figure 4. False positive rates on six UCI datasets as a function of
(q= 0:05,= 0:05).
Benchmark Data Experiments. To address our third and
fourth questions, we performed experiments on six UCI
multiclass datasets: Landsat, Opt.digits, pageb, Shuttle,
Covertype and MNIST. In addition to these, we provide
results for the Tiny ImageNet dataset. In each multiclass
dataset, we split the classes into two groups: nominal and
alien. For Tiny ImageNet, we train a deep neural network
classifier on 200 nominal classes and treat the remaining
800 as aliens. The nominal classes for UCI datasets are
MNIST(1,3,7), Landsat(1,7), OCR(1,3,4,5,7), pageb(1,5),
Letter recognition(1,3), and Shuttle(1,4). We generated
0.860.880.90.920.940.960.98
00.10.20.30.40.5Recall!LandsatpagebOCRLetter recogShuttleCovertype
Figure 5. Recall rates on six UCI datasets as a function of (q=
0:05,= 0:05)
0.30.350.40.450.50.550.60.650.7
0.050.150.250.350.45FPR!MNISTTiny ImagenetFigure 6. False positive rates on two image datasets as a function
of(q= 0:05;= 0:05).
nominal and mixture datasets for various values of . The
value ofnfor each dataset is 1532 for Landsat,788 for Letter
recognition, 568 for OCR, 4912 for pageb, 5000 for Shuttle,
13,624 for Covertype, 11,154 for MNIST, and 10,000 for
Tiny ImageNet. Because we cannot create datasets with
largen, we cannot measure the true value of q.
After computing the anomaly scores for both nominal and
mixture datasets, we applied Algorithm 1 within a 10-fold
cross validation. We divide the mixture data points at ran-
dom into 10 groups. For each fold, we estimate ^Faand^a
from 9 of the 10 groups and then score the mixture points in
the held-out fold according to ^a. In all other respects, the
experimental protocol is the same as for the synthetic data.
For Tiny ImageNet, the anomaly scores are obtained by
applying a baseline method (Hendrycks & Gimpel, 2017).
To answer Q3, Figures 4 and 6 plot the false positive rate as
a function of for the UCI and vision datasets, respectively.
We see that the FPR ranges from 3.6% to 26.9% on UCI
depending on the dataset and the level of . The vision
datasets have higher FPR, especially MNIST, which has a
large number of alien classes that are not distinguished well
by the anomaly detector. The FPR depends primarily on
the domain, because the key issue is how well the anomaly
detector distinguishes between nominal and alien examples.
The false alarm rate generally improves as increases. In
some applications, it may be possible to enrich Smso that
is larger on the training set to take advantage of this
phenomenon. It is interesting to note that once ^ahas been
computed, it can be applied to test datasets having different
(or unknown) values of .
Figures 5 and 7 plot the recall rate as a function of for
the UCI and vision datasets. We set q= 0:05in these
experiments. Theorem 1 only guarantees a recall of 1 q ,
Open Category Detection with PAC Guarantees
0.880.890.90.910.920.930.940.950.960.970.98
0.050.150.250.350.45Recall
!MNISTTiny Imagenet
Figure 7. Recall rates on two image datasets as a function of
(q= 0:05;= 0:05).
00.050.10.150.20.250.30.350.4
00.0020.0040.0060.0080.01α' -αrecall' -recallfpr' -fpr
Figure 8. Change in recall and false positive rate as a function of
0 for six UCI datasets; 2f0:1;0:2;0:4g
wheredepends on n. Hence, it is nice to see that for
three of the domains (Shuttle, Covertype, and Landsat) in
UCI and for both vision datasets, the recall is very close to
1 q= 0:95. These are the domains with the largest values
ofn. The value of has a bigger impact on recall than it
does on FPR. This is because the effective number of alien
training examples is n, which can be very small for some
datasets when = 0:1. This shows that in applications such
as fraud detection, where may be very small, the mixture
datasetSmneeds to be very large.
To answer Q4 regarding the impact of using an incorrect
value0> , we repeated these experiments with 0=
+, for2f0:002;0:004;0:006;0:008;0:010g. Figure 8
plots the change in false positive rate and recall as a functionof0 . Two points are plotted for each combination
of0and dataset, the change in Recall and the change in
FPR. We observe that the recall increases slightly (in the
range from 0.01 to 0.05). However, the false positive rate
increases by much larger amounts (from 0.01 to 0.336). This
demonstrates that it is very important to determine the value
ofaccurately.
7. Summary
We have taken a step toward open category detection with
guarantees by providing a PAC-style guarantee on the prob-
ability of detecting 1 of the aliens on the test data. This
is the first such guarantee under any similarly general con-
ditions. We have shown that this guarantee is satisfied in
our experiments, although the guarantee is somewhat loose,
especially on small training sets. Obtaining a guarantee re-
quires more data than standard PAC guarantees on expected
prediction accuracy. This is because we must estimate the
qquantile of the alien anomaly score distribution, where
qis typically quite small. Nonetheless, our experiments
show that our algorithm gives good recall performance and
non-trivial false alarm rates on datasets of reasonable size.
It is important to note that the very formulation of a PAC-
style guarantee on the probability of detecting aliens re-
quires assuming that the aliens are drawn from a well-
defined distribution Da. While this is appropriate in some
applications, such as the insect survey application described
in the introduction, it is not appropriate for adversarial set-
tings. In such settings, a PAC-style guarantee does not make
sense, and some other form of safety guarantee needs to be
formulated.
To obtain the guarantee, we employ two training datasets:
a clean dataset that contains no aliens and an (unlabeled)
contaminated dataset that contains a known fraction of
aliens. An important theoretical problem for future research
is to develop a method that can estimate a tight upper bound
on^> . We believe this is possible, but we have not yet
found a method that guarantees that ^> .
Our guarantee requires more data as becomes small. For-
tunately, when is small, it may be possible in some appli-
cations to afford lower recall rates, since the frequency of
aliens will be smaller. However, in safety-critical applica-
tions where a single undetected alien poses a serious threat,
there is little recourse other than to collect more data or
allow for higher false positive rates.
Acknowledgements
This research was supported by a gift from Huawei, Inc.,
and grants from the Future of Life Institute and the NSF
Grant 1514550. Any opinions, findings, and conclusions
Open Category Detection with PAC Guarantees
or recommendations expressed in this material are those of
the author(s) and do not necessarily reflect the views of the
sponsors.
A. Proof for Theorem 1
Suppose there are nrandom variables which are i.i.d. from
the distribution with CDF Fand let ^Fnbe the empirical
CDF calculated from this sample. Then Massart (1990)
shows that
P(pnsup
xj^Fn(x) F(x)j>)2 exp( 22)(2)
holds without any restriction on . Making use of this,
and assuming we use the same sample size nfor both the
mixture dataset and the clean data set, for any 2(0;1 q),
we seek to determine how large nneeds to be in order to
guarantee that with probability at least 1 our quantile
estimate ^qsatisfiesFa(^q)q+. To achieve this, we
want to have
P(sup
xj^Fa(x) Fa(x)j>):
We have
P(sup
xj^Fa(x) Fa(x)j>)
=P(sup
xj^Fm(x) (1 )^F0(x)
Fm(x) (1 )F0(x)
j>)
=P(sup
xj1
(^Fm(x) Fm(x))
1
(^F0(x) F0(x))j>)
P((1
sup
xj^Fm(x) Fm(x)j+
1
sup
xj^F0(x) F0(x)j)>)
P(f1
sup
xj^Fm(x) Fm(x)j>1
2 g
[f1
sup
xj^F0(x) F0(x)j>1
2 g)
=P(fsup
xj^Fm(x) Fm(x)j>
2 g
[fsup
xj^F0(x) F0(x)j>
2 g):
Making use of (2), when
n>1
2ln2
1 p
1 (1
)2(2
)2;we will have
P(sup
xj^Fm(x) Fm(x)j>
2 )1 p
1 ;
P(sup
xj^F0(x) F0(x)j>
2 )1 p
1 :
In this case we will have
P(sup
xj^Fa(x) Fa(x)j>)
1 P(fsup
xj^Fm(x) Fm(x)j
2 g
\fsup
xj^F0(x) F0(x)j
2 g)
1 (1 1 +p
1 )2
=:
Now we have with probability at least 1 ,
j^Fa(x) Fa(x)j;8x2R:
If this inequality holds, then for any value ^qsuch that
^Fa(^q)q, we have
Fa(^q)^Fa(^q) +q+:
So we have with probability at least 1 , any ^qsatisfying
^Fa(^q)qwill satisfyFa(^q)q+.
B. Proof for Corollary 1
If0, and if we write
F0
a(x) =Fm(x) (1 0)F0(x)
0;
thenF0
ais still a legal CDF, because
F0
a( 1) = 0; F0
a(1) = 1;
and it is easy to show that F0
ais monotonically nondecreas-
ing.
But
F0
a(x) Fa(x) =( 0)(Fm(x) F0(x))
00;8x2R;
and because of this, if we let ^0
qdenote the threshold we
get from using 0, we will have Fa(^0
q)F0
a(^0
q). By
the proof of previous theorem, we know that when n >
1
2ln2
1 p1 (1
)2(2 0
0)2, we have with probability at least
1 ,F0
a(^0
q)q+, and thus we have Fa(^0
q)q+.
References
Barlow, RE and Brunk, HD. The isotonic regression prob-
lem and its dual. Journal of the American Statistical
Association , 67(337):140–147, 1972.
Open Category Detection with PAC Guarantees
Bendale, A. and Boult, T. E. Towards open set deep net-
works. In 2016 IEEE Conference on Computer Vision
and Pattern Recognition (CVPR) , pp. 1563–1572, June
2016.
Cevikalp, H. and Triggs, B. Efficient object detection using
cascades of nearest convex model classifiers. In 2012
IEEE Conference on Computer Vision and Pattern Recog-
nition , pp. 3138–3145, June 2012.
Chow, C. On optimum recognition error and reject tradeoff.
IEEE Transactions on Information Theory , 16(1):41–46,
Jan 1970. ISSN 0018-9448.
Da, Qing, Yu, Yang, and Zhou, Zhi-Hua. Learning with
augmented class by exploiting unlabeled data. In Pro-
ceedings of the Twenty-Eighth AAAI Conference on Artifi-
cial Intelligence , AAAI’14, pp. 1760–1766. AAAI Press,
2014.
Emmott, Andrew F, Das, Shubhomoy, Dietterich, Thomas,
Fern, Alan, and Wong, Weng-Keen. Systematic construc-
tion of anomaly detection benchmarks from real data. In
Proceedings of the ACM SIGKDD workshop on outlier
detection and description , pp. 16–21. ACM, 2013.
Geifman, Yonatan and El-Yaniv, Ran. Selective classifica-
tion for deep neural networks. In Guyon, I., Luxburg,
U. V ., Bengio, S., Wallach, H., Fergus, R., Vishwanathan,
S., and Garnett, R. (eds.), Advances in Neural Informa-
tion Processing Systems 30 , pp. 4885–4894. Curran As-
sociates, Inc., 2017.
Heflin, B., Scheirer, W., and Boult, T. E. Detecting and
classifying scars, marks, and tattoos found in the wild. In
2012 IEEE Fifth International Conference on Biometrics:
Theory, Applications and Systems (BTAS) , pp. 31–38,
Sept 2012.
Hendrycks, Dan and Gimpel, Kevin. A baseline for de-
tecting misclassified and out-of-distribution examples in
neural networks. In Proceedings of International Confer-
ence on Learning Representations , 2017.
Jin, Hongliang, Liu, Qingshan, and Lu, Hanqing. Face
detection using one-class-based support vectors. In Sixth
IEEE International Conference on Automatic Face and
Gesture Recognition, 2004. Proceedings. , pp. 457–462,
May 2004.
Liang, Shiyu, Li, Yixuan, and Srikant, R. Enhancing the
reliability of out-of-distribution image detection in neural
networks. International Conference on Learning Repre-
sentations , 2018.
Liu, Fei Tony, Ting, Kai Ming, and Zhou, Zhi-Hua. Isolation
forest. In Data Mining, 2008. ICDM’08. Eighth IEEE
International Conference on , pp. 413–422. IEEE, 2008.Lytle, David A, Mart ´ınez-Mu ˜noz, Gonzalo, Zhang, Wei,
Larios, Natalia, Shapiro, Linda, Paasch, Robert, Mold-
enke, Andrew, Mortensen, Eric N, Todorovic, Sinisa, and
Dietterich, Thomas G. Automated processing and identi-
fication of benthic invertebrate samples. Journal of the
North American Benthological Society , 29(3):867–874,
2010.
Manevitz, Larry M. and Yousef, Malik. One-class svms for
document classification. J. Mach. Learn. Res. , 2:139–154,
March 2002. ISSN 1532-4435.
Massart, P. The tight constant in the dvoretzky-kiefer-
wolfowitz inequality. The Annals of Probability , 18(3):
1269–1283, 1990. ISSN 00911798.
Mendes J ´unior, Pedro R., de Souza, Roberto M., Werneck,
Rafael de O., Stein, Bernardo V ., Pazinato, Daniel V .,
de Almeida, Waldir R., Penatti, Ot ´avio A. B., Torres,
Ricardo da S., and Rocha, Anderson. Nearest neighbors
distance ratio open-set classifier. Machine Learning , 106
(3):359–386, Mar 2017. ISSN 1573-0565.
Pevn ´y, Tom ´aˇs. Loda: Lightweight on-line detector of
anomalies. Machine Learning , (November 2014), 2015.
Pietraszek, Tadeusz. Optimizing abstaining classifiers using
roc analysis. In Proceedings of the 22Nd International
Conference on Machine Learning , ICML ’05, pp. 665–
672, New York, NY , USA, 2005. ACM. ISBN 1-59593-
180-5.
Pritsos, Dimitrios A. and Stamatatos, Efstathios. Open-
Set Classification for Automated Genre Identification , pp.
207–217. Springer Berlin Heidelberg, Berlin, Heidelberg,
2013. ISBN 978-3-642-36973-5.
Scheirer, W. J., de Rezende Rocha, A., Sapkota, A., and
Boult, T. E. Toward open set recognition. IEEE Transac-
tions on Pattern Analysis and Machine Intelligence , 35
(7):1757–1772, July 2013. ISSN 0162-8828.
Scheirer, W. J., Jain, L. P., and Boult, T. E. Probability
models for open set recognition. IEEE Transactions on
Pattern Analysis and Machine Intelligence , 36(11):2317–
2324, Nov 2014. ISSN 0162-8828.
Sch¨olkopf, Bernhard, Platt, John C., Shawe-Taylor, John C.,
Smola, Alex J., and Williamson, Robert C. Estimating
the support of a high-dimensional distribution. Neural
Comput. , 13(7):1443–1471, July 2001. ISSN 0899-7667.
Shu, Lei, Xu, Hu, and Liu, Bing. DOC: deep open classifi-
cation of text documents. CoRR , abs/1709.08716, 2017.
Tax, D.M.J. and Duin, R.P.W. Growing a multi-class classi-
fier with a reject option. Pattern Recognition Letters , 29
(10):1565 – 1570, 2008. ISSN 0167-8655.
Open Category Detection with PAC Guarantees
Wegkamp, Marten H. Lasso type classifiers with a reject
option. 2007.
Wu, M. and Ye, J. A small sphere and large margin ap-
proach for novelty detection using training data with out-
liers. IEEE Transactions on Pattern Analysis and Ma-
chine Intelligence , 31(11):2088–2092, Nov 2009. ISSN
0162-8828.
Zhou, Xiang Sean and Huang, Thomas S. Relevance feed-
back in image retrieval: A comprehensive review. Mul-
timedia Systems , 8(6):536–544, Apr 2003. ISSN 1432-
1882. |
5491a1f0-99b8-4573-9ef9-5dac6e088583 | trentmkelly/LessWrong-43k | LessWrong | A small update to the Sparse Coding interim research report
This is a linkpost to a set of slides containing an update to a project that was the subject of a previous post ([Interim research report] Taking features out of superposition with sparse autoencoders).
The update is very small and scrappy. We haven't had much time to devote to this project since posting the Interim Research Report.
TL;DR for the slides:
* We trained a minuscule language model (LM) (residual size = 16; 6 layers) and then trained sparse autoencoders on MLP activations (dimension = 64) from the third layer of that model.
* We found that, when we compared the 'ground truth feature recovery' plots, the plots for the toy data and LM data were much more similar than in the Interim Research Report.
* Very, very tentatively, we found the layer had somewhere between 512-1024 features. By labelling a subset of these features, we estimate there are roughly 600 easily labellable (monosemantic) features. For instance, we found a feature that activates for a period immediately after 'Mr', 'Mrs', or 'Dr'.
* We suspect that the reason the toy data and LM data plots had previously looked different was due to severely undertrained sparse autoencoders.
We're hopeful that with more time to devote to this project we can confirm the results and apply the method to larger LMs. If it works, it would give us the ability to tell mechanistic stories about what goes on inside large LMs in terms of monosemantic features. |
7c18431e-6467-4298-81ea-2c0587dec488 | trentmkelly/LessWrong-43k | LessWrong | Prisoner's dilemma tournament results
The prisoner's dilemma tournament is over. There were a total of 21 entries. The winner is Margaret Sy, with a total of 39 points. 2nd and 3rd place go to rpglover64 and THE BLACK KNIGHT, with scores of 38 and 36 points respectively. There were some fairly intricate strategies in the tournament, but all three of these top scorers submitted programs that completely ignored the source code of the other player and acted randomly, with the winner having a bias towards defecting.
You can download a chart describing the outcomes here, and the source codes for the entries can be downloaded here.
I represented each submission with a single letter while running the tournament. Here is a directory of the entries, along with their scores: (some people gave me a term to refer to the player by, while others gave me a term to refer to the program. I went with whatever they gave me, and if they gave me both, I put the player first and then the program)
A: rpglover64 (38)
B: Watson Ladd (27)
c: THE BLACK KNIGHT (36)
D: skepsci (24)
E: Devin Bayer (30)
F: Billy, Mimic-- (27)
G: itaibn (34)
H: CooperateBot (24)
I: Sean Nolan (28)
J: oaz (26)
K: selbram (34)
L: Alexei (25)
M: LEmma (25)
N: BloodyShrimp (34)
O: caa (32)
P: nshepperd (25)
Q: Margaret Sy (39)
R: So8res, NateBot (33)
S: Quinn (33)
T: HonoreDB (23)
U: SlappedTogetherAtTheLastMinuteBot (20)
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.