id stringlengths 36 36 | source stringclasses 15 values | formatted_source stringclasses 13 values | text stringlengths 2 7.55M |
|---|---|---|---|
ab4ad161-5900-4ff1-970b-24d3f6821cdc | StampyAI/alignment-research-dataset/special_docs | Other | Aligning AI Optimization to Community Well-Being
PERSPECTIVE ARTICLE
Aligning AI Optimization to Community Well-Being
Jonathan Stray1
Received: 7 February 2020 / Accepted: 9 October 2020 / Published online: 4 November 2020
#Springer Nature Switzerland AG 2020
Abstract
This paper investigates incorporating community well-being metrics into the objectives
of optimization algorithms and the teams that build them. It documents two cases where
a large platform appears to have modified their system to this end. Facebook incorpo-
rated“well-being ”metrics in 2017, while YouTube began integrating “user satisfac-
tion”metrics around 2015. Metrics tied to community well-being outcomes could also
be used in many other systems, such as a news recommendation system that tries to
increase exposure to diverse views, or a product recommendation system that
opstimizes for the carbon footprint of purchased products. Generalizing from these
examples and incorporating insights from participatory design and AI governance leads
to a proposed process for integrating community well-being into commercial AI
systems: identify and involve the affected community, choose a useful metric, use this
metric as a managerial performance measure and/or an algorithmic objective, and
evaluate and adapt to outcomes. Important open questions include the best approach
to community participation and the uncertain business effects of this process.
Keywords Artificial intelligence .AI ethics .Community well-being .Optimization .
Corporate social responsibility
Introduction
This paper is an extended analysis of a simple idea: large-scale commercial optimizing
systems may be able to manage harmful side effects on communities by monitoring
established well-being metrics. It sketches a theory that ties together quantitative
measures of well-being, contemporary metrics-driven management practice, the objec-
tive function of optimization algorithms, participatory and multi-stakeholder gover-
nance of algorithmic systems, and the protection or promotion of community well-
being. Detailed analyses of recent efforts by Facebook and YouTube are used toInternational Journal of Community Well-Being (2020) 3:443 –463
https://doi.org/10.1007/s42413-020-00086-3
\*Jonathan Stray
jonathan@partnershiponai.org
1Partnership on AI, San Francisco, CA, USA
illustrate the challenges and unknowns of this approach, which generalizes to a variety
of different types of artificial intelligence (AI) systems. The core contribution of thisarticle is a proposed process for the use of community well-being metrics withincommercial AI systems.
Well-being encompasses “people ’s living conditions and quality of life today
(current well-being), as well as the resources that will help to sustain people ’s
well-being over time (natural, ec onomic, human and social capital) ”(OECD
2019b , p. 2). Community well-being attempts to evaluate well-being at the level
of a community defined “in geographic terms, such as a neighborhood or town …
or in social terms, such as a group of people sharing common chat rooms on theInternet, a national professional association or a labor union ”(Phillips and Pittman
2015 , p. 3). The measurement of well-being is now a well-established field with a
long history, and is increasingly used i n policy-making (Exton and Shinwell
2018 ).
Large AI systems can have both positive and harmful side effects on communities,
through effects on employment and inequality (Korinek and Stiglitz 2017 ), privacy and
safety (OECD 2019a ), addictive behavior (Andreassen 2015 ), fairness and discrimina-
tion (Barocas et al. 2018 ), human rights (Donahoe and Metzger 2019 ), polarization,
extremism, and conflict (Ledwich and Zaitsev 2020 ; Stoica and Chaintreau 2019 ), and
potentially many other areas (Kulynych et al. 2020 ). Importantly, AI systems can affect
non-users too, as with environmental externalities.
Most AI is built around optimization “in which the aim is to find the best state
according to an objective function ”(Russell and Norvig 2010 ,p .1 2 1 )w h e r ea n
objective function is some method for quantitatively evaluating the desirability of
an outcome (Dantzig 1982 ). Standard management practice also increasingly
involves the maximization of qua ntitative metrics (Parmenter 2020 ), which can
be considered an optimization process. T his paper is concerned with optimizing
systems composed of people and algorithm s which affect communities, where the
choice of objective might have significant societal influence. Examples includesystems used to allocate resources or assign work, choose what news people see,
recommend products to buy, or implement government policy. Many of these
systems would be considered AI, but perhaps the phrase “autonomous and intel-
ligent systems ”(Schiff et al. 2020 , p. 1) which appears in certain standards efforts
would be better, because an automated system does not have to be very smart tocause harm. Rather, the unifying feature is optimization –both the cause of many
problems and an opportunity for a response.
The central idea of this paper is to incorporate community well-being metrics into
the optimization process at both the mana gerial and technical level. This is a
sociotechnical approach to systems design (Baxter and Sommerville 2011 )t h a tc o n -
siders the role of both people and technology. There are many technical interventionsthat could be undertaken aside from the modification of an algorithmic objectivefunction; for example, a social media product team could choose to show a simplechronological list of posts rather than using algorithmic content personalization. How-ever, if product managers are evaluated on community well-being outcomes, they maychoose to make such a change based on the expected effects on users. The integrationof the managerial and the technical in an optimization framework can motivate many
possible product design changes.444 International Journal of Community Well-Being (2020) 3:443 –463
Background
This paper responds most directly to recent calls for research into well-being and AI. It
proposes specific “improvements to product design ”(Schiff et al. 2019 ,p .3 )a n di ti s
interdisciplinary, systems-based, and community-oriented (Musikanski et al. 2020 ). It
draws on and contributes to the emerging field of recommender alignment, the practice
of building algorithms for content ranking and personalization that enact human values
(Stray et al. 2020 ).
The goal of the process proposed in this paper is the governance of large-scale
commercial algorithmic systems. Rahwan ( 2018 ) calls this society-in-the-loop control,
defined as “embedding the values of society, as a whole, in the algorithmic governance
of societal outcomes ”(p. 3). In this sense community participation is a key element of
the proposed framework, and this paper draws on approaches as diverse as participatorydesign (Simonsen and Robertson 2012 ) and corporate stakeholder engagement
(Manetti 2011 ).
Community Well-BeingAt the individual level well-being is usually studied as an experiential state, and there is
now a wealth of research on the definition and reliable measurement of subjective well-being (Diener et al. 2018 ). Although well-being is a rich, multidimensional construct,
even single questions can reveal substantial information, such as overall, how satisfied
are you with life as a whole these days? answered on a 0 –10 scale. This well-studied
measure has several advantages: it correlates with how people make major life deci-sions, gives a similarly reliable result across cultures, and is by itself informativeenough to be used in quantitative evaluations of policy choices (O ’Donnell et al. 2014 ).
Community well-being “embraces a wide range of economic, social, environmental,
political, cultural dimensions, and can be thought of as how well functions of commu-nity are governed and operating ”(Sung and Phillips 2018 , p. 64). In practice, commu-
nity well-being is assessed using a variety of metrics across many domains. Often both
subjective and objective indicators are needed to get a full picture (Musikanski et al.
2019 ). A survey of local and national well-being indicator frameworks in use in the
United Kingdom gives an overview of the substance and range of such metrics(Bagnall et al. 2017 ). Community well-being frameworks can originate from consid-
eration of geographic communities, or communities of interest (Phillips and Pittman2015 )w h i c hm a yb ep a r t i c u l a r l yr e l e v a n tt oo n l i n ep l a t f o r m s .
As an example community well-being framework, the OECD Better Life Index
(Durand 2015 ) aims to measure “both current material conditions and quality of life ”
(p. 1) across countries through the metrics shown in Table 1. This framework includes
the life satisfaction measure above, as well as statistical indicators around health,education, employment, etc. in conjunction with subjective indicators such as whetherone feels safe walking alone at night.
Technologists and scholars have begun to appreciate the significance of well-being
measures in the design and operation of AI systems (Musikanski et al. 2020 ). The IEEE
7010 Recommended Practice Standard for Assessing the Impact of Autonomous andIntelligent Systems on Human Well-Being collects pre-existing measures from sources
such as the OECD Better Life Index, the UN Sustainable Development Indicators, theInternational Journal of Community Well-Being (2020) 3:443 –463 445
Human Development Index, the World Health Organization, the World Values Survey,
Freedom House, and others (Schiff et al. 2020 ). From the point of view of a technol-
ogist who is concerned about the societal effects of their work, established well-beingmetrics have the advantage of representing extensive deliberation by domain experts.
OptimizationOptimization is used extensively in AI to guide training and learning. A problem to be
solved is expressed as a scalar function –a method to calculate a single number that
expresses the desirability of any given hypothetical solution. Solving the problemmeans finding a solution that maximizes this function. The encapsulation of concernsinto a single function was a major conceptual advance that enabled the creation ofgeneric optimization algorithms (Dantzig 1982 ). Conceptually, any problem that has
some set of best solutions can be expressed as optimization with a single objectiveTable 1 Indicators from the OECD Better Life Index (Durand 2015 ). Each of these has a specific statistical
definition and has been collected across OECD countries since 2011
Domain Indicators
Housing Dwellings without basic facilities
Housing expenditure
Rooms per person
Income Household net adjusted disposable income
Household net wealth
Jobs Labor market insecurity
Employment rate
Long term unemployment rate
Community Quality of support network
Education Educational attainment
Student skillsYears in education
Environment Air pollution
Water quality
Civic engagement Stakeholder engagement for developing regulations
Voter turnout
Health Life expectancy
Self-reported health
Life Satisfaction Life satisfaction
Safety Feeling safe walking alone at night
Homicide rate
Work-life balance Employees working very long hours
Time devoted to leisure and personal care446 International Journal of Community Well-Being (2020) 3:443 –463
function, though practical problem-solving often involves the optimization of multiple
sub-goals.
A supervised machine learning algorithm that attempts to identify objects from
images would usually be trained through a loss function that penalizes incorrectanswers. A reinforcement learning approach to playing a video game might use thegame score as a reward function. There are also value functions, cost functions, fitness
functions, energy functions and more, all of which operate on similar principles
(Russell and Norvig 2010 ). For simplicity, in this paper I refer to all of the scalar
functions used to drive AI behavior as objective functions .
In this paper I refer to an optimizing system as if there were one optimizer and one
objective. In practice such systems, especially those at platform scale, may includedozens or hundreds of optimizing components (numerous trained sub-models, forexample). There isn ’t one objective function that can be altered, but many. Nonetheless,
there are usually a few high-level goals concerned with the system ’sm a i no u t p u t s .T h i s
is the case at Groupon with many interacting models and a master objective function
that aligns to company goals (Delgado et al. 2019 ).
Quantitative metrics analogous to objective functions are also used in corporate
management. Modern management practice includes concepts such as key performance
indicators (Parmenter 2020 )a n d objectives and key results (Doerr 2017 ), both of which
involve quantitative indicators of progress. Economic theory frequently models thecorporation as a profit optimizer (e.g. Samuelson and Marks 2014 ). More sophisticated
descriptions try to account for the creation of various types of long-term value, such as
thebalanced scorecard (Kaplan 2009 ) and sustainability accounting (Richardson
2013 ), both of which describe various non-financial metrics that are intended to be
optimized.
Case Studies of Platform Interventions
This section presents two examples where large technology companies seem to have
optimized for well-being, or a similar concept. These cases have been reconstructed
through documentary evidence such as public posts, previously published interviews,financial reports, and research articles by employees.
Facebook ’s Well-Being Optimization
In late 2017 and early 2018, Facebook made a number of changes to their product
explicitly designed to promote well-being. Facebook researchers Ginsberg and Burke
(2017 ) wrote in a public post in December 2017:
What Do Academics Say? Is Social Media Good or Bad for Well-
Being? A c c o r d i n gt ot h er e s e a r c h ,i tr e a l l yc o m e sd o w nt o how you use
the technology. For example, on social media, you can passively scrollthrough posts, much like watching T V, or actively interact with friends —
m e s s a g i n ga n dc o m m e n t i n go ne a c ho t h e r ’s posts. Just like in person,
interacting with people you care abou t can be beneficial, while simply
watching others from the sidelines may make you feel worse. (para. 7).International Journal of Community Well-Being (2020) 3:443 –463 447
This post cites a number of peer-reviewed studies on the well-being effects of social
media, some of which were collaborations between Facebook researchers andacademics. Ginsberg and Burke ( 2017 ) cite Verduyn et al. ’s(2017 ) review paper on
the effects of social media on well-being, which has an obvious resonance withFacebook ’sf r a m i n g :
passively using social network sites provokes social comparisons and envy,
which have negative downstream consequences for subjective well-being. Incontrast, when active usage of social network sites predicts subjective well-being,it seems to do so by creating social capital and stimulating feelings of socialconnectedness. (Verduyn et al. 2017 ,p .2 7 4 )
A close reading of posts around this time shows that Facebook developed a well-being
proxy metric. A January 2018 post by Facebook ’s Chief Executive Officer notes that
“research shows that strengthening our relationships improves our well-being and
happiness ”(Zuckerberg 2018 , para. 2) and mentions well-being twice more, then
switches to the phrase “meaningful social interactions: ”
I'm changing the goal I give our product teams from focusing on helping you find
relevant content to helping you have more meaningful social interactions.(Zuckerberg 2018 ,p a r a .7 )
Relevance is a term of art in recommender systems, referring to user preferences as
expressed through item clicks or ratings, and is increasingly understood as a simplistic
objective (Jannach and Adomavicius 2016 ). The algorithmic change away from rele-
vance was described by the head of the News Feed product:
Today we use signals like how many people react to, comment on or share posts
to determine how high they appear in News Feed. With this update, we will alsoprioritize posts that spark conversations and meaningful interactions betweenpeople. To do this, we will predict which posts you might want to interact withyour friends about, and show these posts higher in feed (Mosseri 2018 , para. 3).
Facebook created a well-being metric and assigned it as a goal to a product team, which
incorporated it into an existing algorithmic objective function. This objective functionwas augmented by creating a model that uses existing data such as past user behaviorand post content to predict whether a user will have a meaningful social interaction if
shown any particular post. There is little public documentation of how meaningful
social interactions are measured. The most detailed description is from the transcript of
a call where Facebook reported earnings to investors, which explains that meaningful
social interactions are measured through user surveys:
So the thing that we're going to be measuring is basically, the number of
interactions that people have on the platform and off because of what they'reseeing that they report to us as meaningful …the way that we've done this for448 International Journal of Community Well-Being (2020) 3:443 –463
years is we've had a panel, a survey, of thousands of people who basically we
asked, what's the most meaningful content that they had seen in the platform orthey have seen off the platform. (Facebook 2018 ,p .1 3 )
The resulting system is reconstructed in Fig. 1.
While there is no public account of the effects of the incorporation of the meaningful
social interactions prediction model on the meaningful social interactions metric as
measured by Facebook through user surveys, Facebook has reported reduced engage-ment on at least one product, suggesting that the meaningful social interactions
objective was weighted strongly enough to cause significant changes in which itemsare presented to users:
video is just a passive experience. To shift that balance, I said that we were going
to focus on videos that encourage meaningful social interactions. And in Q4, we
updated our video recommendations and made other quality changes to reflectthese values. We estimate these updates decreased time spent on Facebook byroughly 5% in the fourth quarter. To put that another way: we made changes thatreduced time spent on Facebook by an estimated 50 million hours every day tomake sure that people's time is well spent. (Facebook 2018 ,p .2 ) . . .
YouTube ’s User Satisfaction Metrics
John Doerr ’sMeasure What Matters (2017) documents YouTube ’sm u l t i - y e a re f f o r tt o
reach one billion hours of daily user watch time through interviews with SusanWojcicki, Chief Executive Officer and Cristos Goodrow, Vice President of Engineeringat YouTube (Doerr 2017 , pp. 154 –172). Goodrow describes the inception of
Fig. 1 A reconstruction of Facebook ’su s eo f meaningful social interactions circa 2018. Well-being effects are
unobserved because they happen outside of user interactions with FacebookInternational Journal of Community Well-Being (2020) 3:443 –463 449
YouTube ’s recommendation system in 2011, and how he advocated to optimize for
watch time instead of video views as:
On a dedicated team named Sibyl, Jim McFadden was building a system for
selecting “watch next ”recommendations, aka related videos or “suggestions. ”It
had tremendous potential to boost our overall views. But were views what we
really wanted to boost?...
I sent a provocative email to my boss and the YouTube leadership team. Subject line:
“Watch time, and only watch time. ”It was a call to rethink how we measured success:
“All other things being equal, our goal is to increase [video] watch time. ”...
Our job was to keep people engaged and hanging out with us. By definition, viewers are
happier watching seven minutes of a ten-minute video (or even two minutes of a ten-minute video) than all of a one-minute video. And when they ’re happier, we are, too.
(Goodrow quoted in Doerr 2017 ,p .1 6 2 ) . . .
Goodrow ’s retelling includes user happiness andsatisfaction as goals along with the
more business-oriented engagement . For the purposes of this paper, I assume user
happiness andsatisfaction are analogous to well-being, but unlike the Facebook case,
YouTube ’s public statements have not mentioned well-being. In accordance with the
unified treatment of managerial and technical optimization proposed in this paper,Goodrow confirms that a team-level metric drove engineering decisions:
Reaching one billion hours was a game of inches; our engineers were hunting for
changes that might yield as little as 0.2 percent more watch time. In 2016 alone,they would find around 150 of those tiny advances. We ’d need nearly all of them
to reach our objective. (Goodrow quoted in Doerr 2017 ,p .1 6 9 )
Yet watch time was not the only objective, and YouTube incorporated other changes to
improve the quality of the product and the effects on users:
In fact, we ’d commit to some watch-time-negative decisions for the benefit of our
users. For example, we made it a policy to stop recommending trashy, tabloid-style videos —like“World ’s Worst Parents, ”where the thumbnail showed a baby
in a pot on the stove. Three weeks in, the move proved negative for watch time by
half a percent. We stood by our decision because it was better for the viewerexperience, cut down on click bait, and reflected our principle of growingresponsibly. Three months in, watch time in this group had bounced back andactually increased. Once the gruesome stuff became less accessible, peoplesought out more satisfying content. (Goodrow quoted in Doerr 2017 ,p .1 6 4 )
This was the beginning of a move away from strict maximization of time spent. Starting
in 2015 YouTube began to incorporate user satisfaction metrics (Doerr 2017 ,p .1 7 0 ) .
As in the Facebook case, these are derived from surveys:450 International Journal of Community Well-Being (2020) 3:443 –463
we learned that just because a user might be watching content longer does not
mean that they are having a positive experience. So we introduced surveys to askusers if they were satisfied with particular recommendations. With this directfeedback, we started fine-tuning and improving these systems based on this high-fidelity notion of satisfaction. (Google 2019 ,p .2 1 )
These user satisfaction survey results were incorporated directly into the objectives of
the YouTube recommendation system, as discussed in a recent YouTube technicalpaper:
we first group our multiple objectives into two categories: 1) engagement objec-
tives, such as user clicks, and degree of engagement with recommended videos;2) satisfaction objectives, such as user liking a video on YouTube, and leaving arating on the recommendation. (Zhao et al. 2019 ,p .4 3 )
Analysis of Facebook and YouTube Cases
The Facebook and YouTube cases are significant because they are examples of a major
platform operator explicitly saying that they have decided to monitor and optimize for awell-being proxy, operationalized at both the management and algorithmic levels.
Facebook has provided a public justification for its meaningful social interaction
metric in terms of prior research which suggests that active use of social mediaimproves well-being while passive use decreases it. While this is far from a holisticmeasure of well-being, let alone community well-being, at least it connects to previous
work in a clear way. Public statements from YouTube have not mentioned well-being,
instead focusing on “responsibility ”(Wojcicki 2019 , para. 2) and user satisfaction as
assessed through surveys.
Explicit user surveys are an improvement on YouTube ’s previous identification of
watch time with user happiness. Researchers report a negative correlation between TVwatching and well-being that suggests there is something like an addiction mechanisminvolved: “individuals with incomplete control over, and foresight into, their own
behavior watch more TV than they consider optimal for themselves and their well-
being is lower than what could be achieved ”(Frey et al. 2007 ,p .2 8 3 ) .S i m i l a re f f e c t s
have been observed in social media use where addicted users “have typically attempted
to cut down on social networking without success ”(Andreassen 2015 , p. 176). Google
now publicly recognizes that maximizing watch time does not optimize for “positive ”
outcomes (Google 2019 ,p .2 1 ) .
A more systematic conception of well-being would articulate what aspects of well-
being matter to YouTube and why user satisfaction is a good proxy. Of course, well-
being outcomes depend enormously on who a user is and what they watch. A user
might learn valuable and fulfilling skills from how-to videos, become more politically
engaged, consume worthwhile art, or they might be radicalized into violence (Ledwichand Zaitsev 2020 ).
Another issue is that both companies are optimizing for individual outcomes: well-
being but not necessarily community well-being. Community well-being “is more thanInternational Journal of Community Well-Being (2020) 3:443 –463 451
an aggregate of individuals ’satisfaction ”(Sung and Phillips 2018 , p. 65) and cannot be
assessed simply by adding up the well-being of all individuals in the community. Thisis analogous to the classic problem of aggregating utilities in welfare economics (Fosterand Sen 1997 , p. 16). Conversely, optimizing for each person individually will not
necessarily promote community well-being due to problems of externalities, collectiveaction, and conflicting preferences (Baum 2020 ;M i l a n oe ta l . 2019b ). Attention to
aggregates may also miss local problems, such as negative effects in a particular city or
for a particular subgroup, or run into Simpsons ’paradox issues where the sign of the
effect depends on the granularity of the groups studied (Kievit et al. 2013 ). For all these
reasons, clarity on the definition of community or communities matters greatly.
Perhaps the biggest weakness of these cases is that there is no record of consultation
with the putative beneficiaries of these algorithmic changes, and no public evaluation ofthe results. Hopefully algorithmic interventions of this magnitude were informed byuser research or some sort of consultative process, but none was reported. Presumably
meaningful social interactions anduser satisfaction were increased, but there has been
no disclosure of how much. Absent also is any report of effects on any other compo-nents of well-being, such as feelings of social connectedness or life satisfaction, or evenobjective indicators like employment status. It ’s similarly unclear how these changes
affected not just individual well-being but community well-being for different com-munities; there may even have been negative effects on certain types of users. Infor-mation about outcomes is especially important because the link between Facebook ’s
meaningful interactions and well-being is theoretical, deduced from previous research
into active and passive social media use, while YouTube has said their user satisfaction
surveys are included in a “responsibility ”metric (Bergen 2019 , para. 10) and that they
aim for “positive ”experiences (Google 2019 , p. 21) without providing any further
explanation of their goals or results. Determining the actual effect of these large-scaleinterventions is itself a significant social science research effort, and if Facebook orYouTube have these answers, they have not been shared. This is algorithmic manage-ment, but not yet the algorithmic governance that the society-in-the-loop model envi-
sions (Rahwan 2018 ).
The reported business outcomes are also instructive, as both the Facebook and
YouTube changes resulted in at least temporary reductions in engagement metrics.Facebook reports that the incorporation of a meaningful social interactions metric into
their video product caused a 5% reduction in time spent, which was consideredsignificant enough to be discussed with investors (Facebook 2018 )b u tt h el o n g e r -
term effects are unclear. YouTube described changes that reduced watch time but alsoreports that watch time recovered over a time span of months as users changed theirbehavior. This demonstrates both that major corporations are willing to accept reduc-
tions in engagement to pursue social ends, and that the long-term business effects of
incorporating well-being metrics are not necessarily negative.
Generalization to Other Domains
The Facebook and YouTube cases suggest the possibility of a general method formanaging the well-being outcomes of commercial optimizing systems, which is the
core contribution of this article. This section begins by arguing that some type of452 International Journal of Community Well-Being (2020) 3:443 –463
metric-driven community well-being optimization is not only useful but likely neces-
sary for any AI system with broad social impacts, because individual user control willnot be sufficient. It then shows how this general method could apply to diverse domainsby working through potential applications to news recommendation and online shop-ping. These hypothetical applications demonstrate the generality of a metrics-drivenapproach and illuminate further possibilities and challenges that shape the recommen-
dations in this paper.
User Control is not Sufficient for Community Well-BeingThis article recommends participatory processes to involve users and other stakeholders
in metric-driven optimization for community well-being. A potential alternative is toprovide increased user control directly, so that people can choose what is best forthemselves. Many authors have pointed to the central role of user agency in the ethics
of AI systems (Floridi and Cowls 2019 ) and in the important context of content ranking
Paraschakis ( 2017 ) has proposed “controls [that] enable users to adjust the recom-
mender system to their individual moral standards ”(p. 6). However, increasing user
agency will not by itself solve the problem of ensuring good outcomes at the commu-nity level because many users will not customize the systems they use, and becauseindividually good choices do not necessarily produce socially good outcomes.
Any set of controls must necessarily be few enough to be humanly manageable. This
restricts the number of dimensions that can be controlled and will make it difficult to
express nuanced conceptions of well-being. Natural language interfaces e.g. Yu et al.
(2019 ) may allow the expression of more complicated concepts. Nonetheless users will
probably leave most parameters at default settings, which means that the defaults mustpromote well-being.
Even if all users in fact succeeded in directing an AI system to do exactly as desired
this would not necessarily result in the best community outcomes. As Ostrom ( 2000 )
has articulated, individual action does not succeed in producing social goods withoutthe concurrent evolution of social norms. These challenges of collective action have
been explored in the context of AI systems from the perspective of social choice theory
(Baum 2020 ) and multi-stakeholder recommendation systems (Milano et al. 2019a ).
Further, existing societal inequalities can constrain users ’ability to exploit algorithmi-
cally provided choices (Robertson and Salehi 2020 ), for example due to a lack of
information or the cost burden of choosing the “best”option.
User control is essential, perhaps even necessary for community well-being, but it is
not sufficient. Collective algorithmic governance is needed for much the same reasonssocietal governance is needed, and appropriate well-being metrics are useful in algo-
rithmic governance just as they are in public policy.
Diverse News RecommendationsNews recommenders are the algorithms that choose, order, and present journalism
content to users. The potential application of community well-being metrics to thesesystems illustrates the challenges around defining a community and choosing metrics.News recommendation algorithms can have societal consequences (Helberger 2019 )
but it is not clear how to manage such algorithms for community well-being. To beginInternational Journal of Community Well-Being (2020) 3:443 –463 453
with, there is no single community that consumes news, but many overlapping
communities organized around different geographic regions and different topics(Reader and Hatcher 2011 , p. 3). Each of these communities may have different
concerns at any given moment. Incorporating social network analysis or country-specific data can improve the performance of recommender systems as measured bytraditional relevance metrics (Chen et al. 2018 ; Roitero et al. 2020 ) but the question of
how a recommender system impacts pre-existing communities, e.g. a city, has not been
explored. Conversely, existing community well-being indicators have not been de-signed to capture the consequences of news recommender systems.
One well-developed concern with news recommenders is exposure diversity, mean-
ing the range of sources, topics, and viewpoints that each person is algorithmicallypresented (Bernstein et al. 2020 ). Taking political theory as a starting point Helberger
et al. ( 2018 ) identify liberal, deliberative, and radical approaches to the design of
diverse news recommenders. Consider the problem of designing a national news
recommender that supports a deliberative view of diversity, one in which:
exposure to diverse viewpoints is considered valuable because it helps citizens
develop more informed opinions and less polarized, more tolerant attitudestowards those with whom they disagree …it is conceivable to design metrics
that would focus, for example, on user engagement with opposing political views,cross-ideological references in public debates or social media connections be-tween people who represent different ideological positions. (Helberger et al.
2018 ,p .1 9 5 )
Diversity metrics could be constructed from algorithmic methods to estimate the
ideological position of users or posts (Budak et al. 2016 ; Garimella and Weber
2017 ). These give a measure of distance between any two items, which could then
be used to define the diversity of a set of recommended items according to variousstandard formulas such as the average distance between any pair (Kunaver and Po žrl
2017 ).
Such a metric would capture the output of the system, not its effects on users.
Facebook and YouTube use user surveys to tie algorithmic changes to human out-comes. It may be possible to establish a causal connection from news diversity metricsto existing well-being metrics such as voter turnout, and Facebook has already dem-onstrated a substantial effect on voter turnout by presenting users with personalizedmessages (Bond et al. 2012 ). It would be better to direct the optimization process
towards more closely related outcomes like polarization or tolerance that are notincluded in current well-being frameworks. Directly measuring these outcomes is
crucial because exposure to diverse opinions can actually increase polarization (Bail
et al. 2018 ). Polarization and tolerance outcomes are also explicitly relational, and thus
indicate aspects of community well-being not captured in individual-level metrics.
Low Carbon ShoppingLarge-scale product recommender systems have profound influence over what is
purchased. One reason for this is that it is not possible to navigate millions of possible
products without them. Rolnick et al. ( 2019 ) have proposed using these systems to454 International Journal of Community Well-Being (2020) 3:443 –463
direct consumers to lower-carbon alternatives. This possibility highlights two problems
that may arise in the course of modifying AI objective functions: obtaining the dataneeded to evaluate a metric and understanding the business impacts of such a change.
Climate change is a key issue for many communities (Fazey et al. 2018 ) and carbon
emissions appear in a number community well-being frameworks (Bagnall et al. 2017 ).
Carbon emissions from recommended products are also a key example of AI system
side effects on non-users. From a technical point of view, carbon footprint can be
incorporated using multi-stakeholder recommendation algorithms that explicitly con-sider the effect on parties other than the user (Abdollahpouri et al. 2020 ).
This is possible only if the carbon footprint of each product is available. There are
now established methods to estimate product carbon footprints (BSI 2011 ;I S O 2018 )
but there are no product carbon footprint (PCF) databases comprehensive enough tocover the millions of different products sold by a large online retailer. However, it maybe possible to use machine learning methods to estimate the PCF values of an entire
product portfolio starting from a comparatively small database of examples
(Meinrenken et al. 2012 ). Robust, scalable product carbon footprint estimation could
be a key enabling technology for low-carbon commerce and, ultimately, long-termcommunity well-being.
A commercial operator will want to know the business effects before any such
system is implemented, and it is tempting to evaluate the potential revenue effect ofincorporating a carbon term into the objective function by testing against historicalpurchase data. Such back-testing will show that optimizing for anything other than
profit must drive the system away from a profit maximum, but offline estimates will not
give the full story because both consumer and producer behavior may change if carbonfootprint starts to affect product ranking. Users might appreciate being informed oflow-carbon alternatives and buy more from that retailer or pay a premium for lowercarbon items, while producers will have an incentive to sell lower carbon products. Thecase of organic food demonstrates the existence of such market dynamics, as it is 22 –
35% more profitable globally than conventional alternatives even though it is typicallymore expensive to produce (Crowder and Reganold 2015 ).
Recommendations
The incorporation of community well-being metrics into both managerial and algorith-
mic optimization is a very general method for managing the effects of commercialoptimizing systems, yet good management is only part of good governance. Thissection synthesizes the analysis and discussion above with previous work on algorith-
mic governance, participatory design, best use of metrics, and corporate stakeholder
engagement to make recommendations for fostering community well-being in AIsystems in ways that are both effective and accepted as legitimate. It also identifiesgaps and unknowns where future research would be valuable.
Identifying and Involving CommunitiesAn attempt to optimize for community well-being is an attempt to benefit a particular
group of people, who need to have a say in what is done on their behalf. In some casesInternational Journal of Community Well-Being (2020) 3:443 –463 455
it would be reasonable to say that every user of the system (potentially billions of
people) is a member of the community, but that would preclude the management oflocal outcomes such as a system ’s effects on the residents of a particular city, or on
people of a certain age, or workers in particular professions. Non-users can be affectedas well, as in environmental externalities or a navigation system that routes cars to aformerly quiet street. Each view of community is a choice about who counts, and this
choice should be made explicit before any intervention begins.
Once a community is identified, there are many approaches to try to integrate its
members into the process of selecting and using metrics. Participatory design is anorientation and a set of practices that attempts to actively involve all stakeholders in asystem design process (Simonsen and Robertson 2012 ). It is a promising framework for
algorithmic governance. The WeBuildAI method (Lee et al. 2019 ) demonstrates what
participatory design of metrics might look like. Researchers worked with a food-delivery non-profit to design an algorithm to match donated food with volunteer drivers
and local food distribution charities. Stakeholders from each of these groups worked
with researchers to build quantitative models of their preferred trade-offs betweenfactors such as driver travel time, time since last donation, neighborhood poverty level,etc. At run time this system ranks the possible matches for each donation according tothe models representing the preferences of each stakeholder, with the final result chosenthrough a ranked-choice voting rule. Future work could investigate participatory metricdesign in the context of a large commercial platform.
There are both instrumental and political goals when attempting to integrate com-
munities into the selection and use of metrics. Without engaging the community, it is
not possible to know which aspects of well-being matter most to them and how seriousthese issues are, and therefore how to make tradeoffs. Engagement is also necessary forcredibility. When choosing community indicators, “most communities consider input
by its residents and others to be vital; it builds support for the use of indicators as wellas help vest those most impacted by subsequent actions in decision-making processes ”
(Sung and Phillips 2018 , p. 73). In the context of commercial systems it will also be
important to draw on the experience of corporate stakeholder engagement efforts such
as those found in sustainability reporting (GSSB 2016 ;M a n e t t i 2011 ).
Choosing MetricsAside from the well-known issues with using metrics in a management context
generally (Jackson 2005 ) metrics pose a problem for AI systems in particular because
most AI algorithms are based on strongly optimizing a narrow objective (Thomas andUminsky 2020 ). Poor use of metrics can result in a damaging emphasis on short term
outcomes, manipulation and gaming, and unwanted side effects (Jackson 2005 ;
Thomas and Uminsky 2020 ). Even a successful metric cannot remain static, as the
structure of the world it measures is constantly changing. In addition, there are manydomains without a clear consensus on well-being goals, necessitating a process ofnormative deliberation before metrics can be chosen. The following issues should beconsidered in choice of metrics:
Deciding What to Measure In many cases existing well-being metrics will not be
directly usable because they are too expensive to collect at scale or don ’t readily apply456 International Journal of Community Well-Being (2020) 3:443 –463
in the company ’s domain. These issues drove Facebook ’ss u b s t i t u t i o no f meaningful
social interactions for more general measures of user well-being. Creating a custom
metric is challenging because community well-being is a theoretical construct, not anobservable property, and there may be misalignment between the designer ’si n t e n t i o n s
and what is actually measured. For example, decreasing polarization measures may justindicate that minority voices have been effectively silenced. The particular well-being
aspect of interest must first be “operationalized ”and tested for reliability and validity
(Jacobs and Wallach 2019 ).
Long-Term Outcomes If a metric is evaluated only over the short term it may lead to
poor longer-term outcomes. As the YouTube case demonstrates, a video platform thattries to maximize user watch time may encourage binging behavior where userseventually regret the time they spend. While effective AI optimization requires frequentfeedback, it is critical to pick shorter-term metrics that are thought to drive longer-term
outcomes (Lalmas and Hong 2018 ).
Gaming Any measure that becomes a target will change meaning as people change
their behavior, a very general problem that is sometimes known as Goodhart ’sl a w
(Manheim and Garrabrant 2018 ). This is particularly relevant to large platforms that
must defeat adversarial efforts to gain exposure for financial or political ends. Whilethere are emerging methods to use causal inference to design metrics that resist gaming(Miller et al. 2019 ), a more robust solution is to continuously monitor and change the
metrics in use.
Dynamism The metrics employed need to be able to change and adapt, a property that
Jackson ( 2005 ) names dynamism . This is necessary because of gaming and other
behavior change in response to metrics, but more importantly the world can and doeschange; at the onset of the COVID-19 pandemic many existing machine learningmodels stopped working (Heaven 2020 ). Dynamism also avoids the serious problems
that can arise from over-optimization for a single objective, such as a robot which
injures humans in an attempt to fetch a coffee more quickly (Russell 2019 ). In the
context of contemporary commercial optimization, there are always humans supervis-ing and operating the AI system, and they are free to change the objective function asneeded.
Normative Uncertainty Catalogs such as IEEE 7010 (Schiff et al. 2020 ) provide a long
list of consensus metrics but not all of them will correspond to community needs, andnot all AI systems can be effectively evaluated using metrics originally designed for
public policy use. In short, many systems will face a lack of consensus around what a
“good”outcome would be. Appropriate values for AI systems cannot be derived from
first principles but must be the result of societal deliberation (Gabriel 2020 ), which
again underscores the necessity for participatory processes.International Journal of Community Well-Being (2020) 3:443 –463 457
Evaluating Outcomes
It may be very challenging to determine the actual well-being effects of incorporating a
metric into an optimization process. Facebook uses ongoing user panels to countmeaningful social interactions, but this is a narrow facet of user well-being, let alone
community well-being. They could use broader well-being instruments such as a life
satisfaction survey question, but it would be difficult to assess the causal contribution of
Facebook use to any changes. In other cases, such as the diverse news recommender,pre-existing well-being indicators would not apply so assessing societal impact wouldrequire the creation and validation of new community well-being metrics.
Outcome evaluation at scale is essentially corporate social science. The IEEE 7010
Recommended Practice Standard for Assessing the Impact of Autonomous and Intel-ligent Systems on Human Well-Being proposes what amounts to a difference-in-
differences design between users and non-users before and after an algorithmic change
(Schiff et al. 2020 ). This is a promising approach, but there do not seem to be any
published examples.
Business ImplicationsFor commercial AI systems, metrics-driven changes must also integrate legitimate
business concerns such as the cost of implementation and the effects on businessoutcomes. Although a naïve analysis of multi-objective optimization suggests that
considering anything other than revenue can only reduce revenue, this assumes every-
thing else is equal. In reality there are complex secondary effects, such as changes inuser and supplier behavior. YouTube ’s experience demoting clickbait videos is a
documented case where doing the responsible thing led to a short-term decrease in
the primary watch time metric, but then a long-term increase. It is difficult to predict thefinancial effects of incorporating well-being into optimization. Business and socialobjectives may turn out to be aligned, but this cannot be expected to be true as a rule.While ethical outcomes can sometimes be achieved through changes to optimization
goals, there are also situations that could conceivably require avoiding features,
products, or business models altogether (Barocas et al. 2020 ).
Case studies are one promising avenue for progress on the problem of uncertain
business outcomes. If companies are already in corporating well-being metrics into their
management and algorithms then documenting these cases will let others learn from theirexperiences, develop the field, and normalize the idea that companies should proactivelymanage the effects of their optimizers. This un derscores the need for t ransparency around
work that is explicitly designed to impro ve the lives of great numbers of people.
Conclusion
This paper has explored the integratio n of community well-being metrics into
commercially-operated optimizing systems. Community well-being is an attractive goalbecause it is well-developed in public policy contexts and practically measurable. Atleast two large technology companies, Facebook and YouTub e, have explicitly458 International Journal of Community Well-Being (2020) 3:443 –463
modified their objective functions in pursuit of well-being, demonstrating the practi-
cality of this approach.
There are still a number of weaknesses in the interventions that Facebook and
YouTube have undertaken, at least in terms of what has been reported publicly.The community that these interventions are intended to serve has not been welldefined; rather, these metrics and interven tions are oriented towards the individual
level and do not account for existing communities such as cities or discussion
groups. It is not clear if or how users were engaged in selecting the meaningful
social interactions anduser satisfaction metrics; there is no report of the outcomes
either in terms of these metrics or with res pect to broader well-being metrics; and
although both companies reported reduced short term engagement, the broaderbusiness effects have not been discussed.
However incomplete, the Facebook and YouTube cases suggest that the optimiza-
tion of community well-being metrics may be a powerful general method for managing
the societal outcomes of commercial AI systems. The same methods could be applied
to many other types of systems, such as a news recommender system that incorporatesmeasures of content diversity in an attempt to increase tolerance and reduce polariza-tion, or an online shopping platform that uses product-level estimates of carbonfootprint to steer users toward more environmentally friendly purchases. Althoughmany scholars and critics have stressed the importance of increased user control overAI systems, no amount of user control can replace appropriate well-being metrics dueto issues of collective action and the need for reasonable defaults.
An analysis of the above cases suggests that the following multi-step process may be
effective:
Identify a community to define the scope of action. In online settings this may be a
challenging decision.
Select a well-being metric, perhaps from existing frameworks. This stage frames
the problem to be solved in concrete terms, so it may be where community
involvement matters most.
Use this metric as a performance measure for the team building and operating the
system.
Directly translate the metric into code as a modification to an algorithmic objective
function or use these measured outcomes to evaluate more general design changes.
Evaluate the results, in terms of actual human outcomes, and adjust accordingly.
This may require adjusting the chosen metric in response to changing conditions,or if it is found to be causing side effects of its own.
Require transparency throughout to make participation possible and to hold
companies accountable to the communities who are meant to be served by thisprocess.International Journal of Community Well-Being (2020) 3:443 –463 459
Funding The author is an employee of Partnership on AI. Partnership on AI is supported by donations from
companies and philanthropies, including Facebook and Google. The author did not receive funding fromFacebook or Google for the creation of this article.
Data Availability N/ACode AvailabilityN/A
Compliance with Ethical Standards
Conflict of Interest The author declares that they have no conflicts of interest.
Ethics Approval This paper does not include any studies with human participants or animals performed by
the author.
References
Abdollahpouri, H., Adomavicius, G., Burke, R., Guy, I., Jannach, D., Kamishima, T., Krasnodebski, J., &
Pizzato, L. (2020). Multistakeholder recommendation: survey and research directions. User Modeling and
User-Adapted Interaction, 30 (1), 127 –158. https://doi.org/10.1007/s11257-019-09256-1 .
Andreassen, C. S. (2015). Online social network site addiction: a comprehensive review. Current Addiction
Reports, 2 (2), 175 –184. https://doi.org/10.1007/s40429-015-0056-9 .
Bagnall, A., South, J., Mitchell, B., Pilkington, G., & Newton, R. (2017). Systematic scoping review of
indicators of community wellbeing in the UK .1–71.http://eprints.leedsbeckett.ac.uk/5238/1/community-
wellbeing-indicators-scoping-review-v1-2-aug2017.pdf .
Bail, C. A., Argyle, L. P., Brown, T. W., Bumpus, J. P., Chen, H., Fallin Hunzaker, M. B., Lee, J., Mann, M.,
Merhout, F., & Volfovsky, A. (2018). Exposure to opposing views on social media can increase politicalpolarization. Proceedingss of the National Academy of Sciences of the United States of America, 115 (37),
9216–9221. https://doi.org/10.1073/pnas.1804840115 .
Barocas, S., Hardt, M., & Narayanan, A. (2018). Fairness and Machine Learning .http://fairmlbook.org
Barocas, S., Biega, A. J., Fish, B., Niklas, J., & Stark, L. (2020). When not to design, build, or deploy.
Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency , 695–695. https://
doi.org/10.1145/3351095.3375691 .
Baum, S. D. (2020). Social choice ethics in artificial intelligence. AI and Society, 35 (1), 165 –176. https://doi.
org/10.1007/s00146-017-0760-1 .
Baxter, G., & Sommerville, I. (2011). Socio-technical systems: from design methods to systems engineering.
Interacting with Computers, 1 ,4–17.https://doi.org/10.1016/j.intcom.2010.07.003 .
Bergen, M. (2019). YouTube executives ignored warnings, Let Toxic Videos Run Rampant. Bloomberg .
https://www.bloomberg.com/news/features/2019-04-02/youtube-executives-ignored-warnings-letting-toxic-videos-run-rampant .
Bernstein, A., de Vreese, C., Helberger, N., Schulz, W., Zweig, K., Baden, C., Beam, M. A., Hauer, M. P.,
Heitz, L., Jürgens, P., Katzenbach, C., Kille, B., Klimkiewicz, B., Loosen, W., Moeller, J., Radanovic, G.,Shani, G., Tintarev, N., Tolmeijer, S., …Zueger, T. (2020). Diversity in News Recommendations .http://
arxiv.org/abs/2005.09495 .
Bond, R. M., Fariss, C. J., Jones, J. J., Kramer, A. D. I., Marlow, C., Settle, J. E., & Fowler, J. H. (2012). A 61-
million-person experiment in social influence and political mobilization. Nature, 489 (7415), 295 –298.
https://doi.org/10.1038/nature11421 .
BSI. (2011). PAS 2050:2011 Specification for the assessment of the life cycle greenhouse gas emissions of
goods and services .https://shop.bsigroup.com/en/Browse-By-Subject/Environmental-Management-and-
Sustainability/PAS-2050/ .
Budak, C., Goel, S., & Rao, J. M. (2016). Fair and balanced? Quantifying media bias through crowdsourced
content analysis. Public Opinion Quarterly, 80 (Specialissue1), 250 –271. https://doi.org/10.1093/poq/
nfw007 .460 International Journal of Community Well-Being (2020) 3:443 –463
Chen, R., Hua, Q., Chang, Y. S., Wang, B., Zhang, L., & Kong, X. (2018). A survey of collaborative filtering-
based recommender systems: from traditional methods to hybrid methods based on social networks. IEEE
Access, 6 , 64301 –64320. https://doi.org/10.1109/ACCESS.2018.2877208 .
Crowder, D. W., & Reganold, J. P. (2015). Financial competitiveness of organic agriculture on a global scale.
Proceedings of the National Academy of Sciences of the United States of America, 112 (24), 7611 –7616.
https://doi.org/10.1073/pnas.1423674112 .
Dantzig, G. B. (1982). Reminiscences about the origins of linear programming. Operations Research Letters,
1(2), 43 –48.https://doi.org/10.1016/0167-6377(82)90043-8 .
Delgado, J., Lind, S., Radecke, C., & Konijeti, S. (2019). Simple objectives work better. Workshop on
Recommendation in Multi-stakeholder Environments, RecSys. http://ceur-ws.org/Vol-2440/paper5.pdf .
Diener, B. E., Oishi, S., & Tay, L. (2018). Handbook of well-being . DEF Publishers. nobascholar.com .
Doerr, J. E. (2017). Measure what matters: How Google, Bono, and the Gates Foundation rock the world with
OKRs . Portfolio Penguin.
Donahoe, E., & Metzger, M. M. (2019). Artificial intelligence and human rights. Journal of Democracy,
30(2), 115 –126. https://doi.org/10.1353/jod.2019.0029 .
Durand, M. (2015). The OECD better life initiative: How ’s life? And the measurement of well-being. Review
of Income and Wealth, 61 (1), 4–17.https://doi.org/10.1111/roiw.12156 .
Exton, C., & Shinwell, M. (2018). Policy use of well-being metrics: describing countries ’experiences. OECD
Statistics Working Papers, 33 (94). https://doi.org/10.1787/d98eb8ed-en .
Facebook. (2018). Facebook, Inc. (FB) Fourth Quarter and Full Year 2017 Results Conference Call .https://
investor.fb.com/ .
Fazey, I., Carmen, E., Chapin, F. S., Ross, H., Rao-Williams, J., Lyon, C., Connon, I. L. C., Searle, B. A., &
Knox, K. (2018). Community resilience for a 1.5 °C world. Current Opinion in Environmental
Sustainability, 31 ,3 0–40.https://doi.org/10.1016/j.cosust.2017.12.006 .
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data
Science Review, 1 (1).https://doi.org/10.1162/99608f92.8cd550d1 .
Foster, J., & Sen, A. (1997). On economic inequality . Clarendon Press.
Frey, B. S., Benesch, C., & Stutzer, A. (2007). Does watching TV make us happy? J o u r n a lo fE c o n o m i c
Psychology, 28 (3), 283 –313. https://doi.org/10.1016/j.joep.2007.02.001 .
Gabriel, I. (2020). Artificial intelligence, values, and alignment .https://arxiv.org/abs/2001.09768 .
Garimella, V. R. K., & Weber, I. (2017). A long-term analysis of polarization on twitter. Proceedings of the
11th international conference on web and social media, ICWSM 2017, 528–531. https://www.aaai.org/
ocs/index.php/ICWSM/ICWSM17/paper/view/15592 .
Ginsberg, D., & Burke, M. (2017). Hard questions: Is spending time on social media bad for us? Facebook.
https://about.fb.com/news/2017/12/hard-questions-is-spending-time-on-social-media-bad-for-us/ .
Google. (2019). How Google Fights Disinformation .https://www.blog.google/documents/37/How\_Google\_
Fights\_Disinformation.pdf .
GSSB. (2016). Global Reporting Inititative 101: Foundation .https://www.globalreporting.org/standards/gri-
standards-download-center/gri-101-foundation-containing-standard-interpretation-1/ .
Heaven, W. D. (2020). Our weird behavior during the pandemic is messing with AI models . MIT Technology
Review. https://www.technologyreview.com/2020/05/11/1001563/covid-pandemic-broken-ai-machine-
learning-amazon-retail-fraud-humans-in-the-loop/ .
Helberger, N. (2019). On the democratic role of news recommenders. Digital Journalism, 7 (8), 993 –1012.
https://doi.org/10.1080/21670811.2019.1623700 .
Helberger, N., Karppinen, K., & D ’Acunto, L. (2018). Exposure diversity as a design principle for recom-
mender systems. Information Communication and Society, 21 (2), 191 –207. https://doi.org/10.1080/
1369118X.2016.1271900 .
ISO. (2018). Greenhouse gases —Carbon footprint of products —Requirements and guidelines for
quantification (ISO 14067:2018) . International Organization for Standardization. https://www.iso.org/
standard/71206.html .
Jackson, A. (2005). Falling from a great height: principles of good practice in performance measurement and
the perils of top down determination of performance indicators. Local Government Studies, 31 (1), 21 –38.
https://doi.org/10.1080/0300393042000332837 .
Jacobs, A. Z., & Wallach, H. (2019). Measurement and Fairness .http://arxiv.org/abs/1912.05511 .
Jannach, D., & Adomavicius, G. (2016). Recommendations with a purpose. RecSys 2016 - Proceedings of the
10th ACM Conference on Recommender Systems ,7–10.https://doi.org/10.1145/2959100.2959186 .
Kaplan, R. S. (2009). Conceptual foundations of the balanced scorecard. Handbook of Management
Accounting Research, 3 , 1253 –1269. https://doi.org/10.1016/S1751-3243(07)03003-9 .International Journal of Community Well-Being (2020) 3:443 –463 461
Kievit, R. A., Frankenhuis, W. E., Waldorp, L. J., & Borsboom, D. (2013). Simpson ’s paradox in psycho-
logical science: a practical guide. Frontiers in Psychology, 4 (August), 1 –14.https://doi.org/10.3389/
fpsyg.2013.00513 .
Korinek, A., & Stiglitz, J. E. (2017). Artificial intelligence and its implications for income distribution and
unemployment. In National Bureau of Economic Research .https://doi.org/10.7208/chicago/
9780226613475.003.0014 .
Kulynych, B., Overdorf, R., Troncoso, C., & Gürses, S. (2020). POTs: Protective Optimization Technologies.
FAT\* ‘20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency .https://
doi.org/10.1145/3351095.3372853 .
Kunaver, M., & Po žrl, T. (2017). Diversity in recommender systems –A survey. Knowledge-Based Systems,
123,1 5 4–162. https://doi.org/10.1016/j.knosys.2017.02.009 .
Lalmas, M., & Hong, L. (2018). Tutorial on metrics of user engagement: Applications to news, search and E-
commerce. WSDM 2018 - Proceedings of the 11th ACM International Conference on Web Search and
Data Mining, 3, 781 –782. https://doi.org/10.1145/3159652.3162010 .
Ledwich, M., & Zaitsev, A. (2020). Algorithmic extremism: Examining YouTube ’s rabbit hole of radicaliza-
tion.https://doi.org/10.5210/fm.v25i3.10419 .
Lee, M. K., Kusbit, D., Kahng, A., Kim, J. T., Yuan, X., Chan, A., See, D., Noothigattu, R., Lee, S., Psomas,
A., & Procaccia, A. D. (2019). Webuildai: Participatory framework for algorithmic governance.
Proceedings of the ACM on Human-Computer Interaction .https://doi.org/10.1145/3359283 .
Manetti, G. (2011). The quality of stakeholder engagement in sustainability reporting: empirical evidence and
critical points. Corporate Social Responsibility and Environmental Management, 18 ,1 1 0–122. https://
doi.org/10.1002/csr.255 .
Manheim, D., & Garrabrant, S. (2018). Categorizing Variants of Goodhart ’sL a w .1–10.http://arxiv.org/abs/
1803.04585 .
Meinrenken, C. J., Kaufman, S. M., Ramesh, S., & Lackner, K. S. (2012). Fast carbon footprinting for large
product portfolios. Journal of Industrial Ecology, 16 (5), 669 –679. https://doi.org/10.1111/j.1530-9290.
2012.00463.x .
Milano, S., Taddeo, M., & Floridi, L. (2019a). Ethical aspects of multi-stakeholder recommendation systems .
https://ssrn.com/abstract=3493202 .
Milano, S., Taddeo, M., & Floridi, L. (2019b). Recommender systems and their ethical challenges .https://
ssrn.com/abstract=3378581 .
Miller, J., Milli, S., & Hardt, M. (2019). Strategic classification is causal modeling in disguise .http://arxiv.
org/abs/1910.10362 .
Mosseri, A. (2018). Bringing people closer together .F a c e b o o k . https://about.fb.com/news/2018/01/news-
feed-fyi-bringing-people-closer-together/ .
Musikanski, L., Phillips, R., & Jean Crowder. (2019). The happiness policy handbook: How to make
happiness and well-being the purpose of your government . Gabriola: New Society Publishers.
Musikanski, L., Rakova, B., Bradbury, J., Phillips, R., & Manson, M. (2020). Artificial intelligence and
community well-being: a proposal for an emerging area of research. International Journal of Community
Well-Being, 3 ,3 9–55.https://doi.org/10.1007/s42413-019-00054-6 .
O’Donnell, G., Deaton, A., Durand, M., Halpern, D., & Layard, R. (2014). Wellbeing and Policy .L e g a t u m
Institute. https://li.com/reports/the-commission-on-wellbeing-and-policy/ .
OECD. (2019a). Artificial intelligence in society . OECD Publishing. https://doi.org/10.1787/eedfee77-en .
OECD. (2019b). Measuring well-being and progress .https://www.oecd.org/sdd/OECD-Better-Life-Initiative.
pdf.
Ostrom, E. (2000). Collective action and the evolution of social norms. Journal of Economic Perspectives,
14(3), 137 –158. https://doi.org/10.1257/jep.14.3.137 .
Paraschakis, D. (2017). Towards an ethical recommendation framework. Proceedings of the International
Conference on Research Challenges in Information Science 2017 ,2 1 1–220. https://doi.org/10.1109/
RCIS.2017.7956539 .
Parmenter, D. (2020). Key performance indicators: Developing, implementing, and using winning KPIs (4th
ed.). Wiley.
Phillips, R., & Pittman, R. H. (Eds.). (2015). An introduction to community development .R o u t l e d g e .
Rahwan, I. (2018). Society-in-the-loop: programming the algorithmic social contract. Ethics and Information
Technology, 20 (1), 5–14.https://doi.org/10.1007/s10676-017-9430-8 .
Reader, B., & Hatcher, J. A. (Eds.). (2011). Foundations of community journalism . SAGE Publications Inc.
Richardson, J. (2013). Accounting for sustainability. In A. Henriques & J. Richardson (Eds.), The triple
bottom line - Does it all add up? (pp. 34 –44). Routledge.462 International Journal of Community Well-Being (2020) 3:443 –463
Robertson, S., & Salehi, N. (2020). What if I don ’t like any of the choices? The limits of preference elicitation
for participatory algorithm design. Participatory Approaches to Machine Learning Workshop, ICML
2020 .http://arxiv.org/abs/2007.06718 .
Roitero, K., Carterette, B., Mehrotra, R., & Lalmas, M. (2020). Leveraging behavioral heterogeneity across
markets for cross-market training of recommender systems .6 9 4–702. https://doi.org/10.1145/3366424.
3384362 .
Rolnick, D., Donti, P. L., Kaack, L. H., Kochanski, K., Lacoste, A., Sankaran, K., Ross, A. S., Milojevic-
Dupont, N., Jaques, N., Waldman-Brown, A., Luccioni, A., Maharaj, T., Sherwin, E. D., Mukkavilli, S.K . ,K o r d i n g ,K .P . ,G o m e s ,C . ,N g ,A .Y . ,H a s s a b i s ,D . ,P l a t t ,J .C . , …Bengio, Y. (2019). Tackling
climate change with machine learning .http://arxiv.org/abs/1906.05433 .
Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control .V i k i n g .
Russell, S., & Norvig, P. (2010). Artificial intelligence: A modern approach 3rd edition . Prentice Hall.
Samuelson, W. F., & Marks, S. G. (2014). Managerial economics (8th ed.). Wiley.
Schiff, D., Murahwi, Z., Musikanski, L., & Havens, J. C. (2019). A New Paradigm for Autonomous and
Intelligent Systems Development : Why Well-being Measurement Matters. Workshop on Designing
Digital Wellbeing, CHI 2019 .https://digitalwellbeingworkshop.files.wordpress.com/2019/04/02-
wellbeing\_measurement\_schiff\_murahwi\_musikanski\_havens.pdf .
Schiff, D., Ayesh, A., Musikanski, L., & Havens, J. C. (2020). IEEE 7010: A New Standard for Assessing the
Well-Being Implications of Artificial Intelligence .http://arxiv.org/abs/2005.06620 .
Simonsen, J., & Robertson, T. (2012). Routledge international handbook of participatory design. In Routledge
international handbook of participatory design (1st Editio). Routledge. https://doi.org/10.4324/
9780203108543 .
Stoica, A.A., & Chaintreau, A. (2019). Hegemony in social media and the effect of recommendations. The
Web Conference 2019 ,2 ,5 7 5 –580. https://doi.org/10.1145/3308560.3317589 .
Stray, J., Adler, S., & Hadfield-Menell, D. (2020). What are you optimizing for ? Aligning Recommender
Systems with Human Values. Participatory Approaches to Machine Learning Workshop, ICML 2020 .
https://participatoryml.github.io/papers/2020/42.pdf .
Sung, H., & Phillips, R. G. (2018). Indicators and community well-being: exploring a relational framework.
International Journal of Community Well-Being, 1 (1), 63 –79.https://doi.org/10.1007/s42413-018-0006-0 .
Thomas, R. L., & Uminsky, D. (2020). Reliance on metrics is a fundamental challenge for AI. Ethics of Data
Science Conference .https://arxiv.org/abs/2002.08512 .
Verduyn, P., Ybarra, O., Résibois, M., Jonides, J., & Kross, E. (2017). Do social network sites enhance or
undermine subjective well-being? A critical review. Social Issues and Policy Review, 11 (1), 274 –302.
https://doi.org/10.1111/sipr.12033 .
Wojcicki, S. (2019). Preserving openness through responsibility . Inside YouTube Blog. https://blog.youtube/
inside-youtube/preserving-openness-through-responsibility .
Yu, T., Shen, Y., & Jin, H. (2019). A visual dialog augmented interactive recommender system. Proceedings
of the ACM SIGKDD international conference on knowledge discovery and data mining
, 157–165.
https://doi.org/10.1145/3292500.3330991 .
Zhao, Z., Chi, E., Hong, L., Wei, L., Chen, J., Nath, A., Andrews, S., Kumthekar, A., Sathiamoorthy, M., &
Yi, X. (2019). Recommending what video to watch next: a multitask ranking system. Proceedings of the
13th ACM Conference on Recommender Systems (RecSys ‘19),4 3–51.https://doi.org/10.1145/3298689.
3346997 .
Zuckerberg, M. (2018). No Title .https://www.facebook.com/zuck/posts/10104413015393571 .
Publisher ’sN o t e Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.International Journal of Community Well-Being (2020) 3:443 –463 463 |
224eec92-1207-4c2c-814a-4d237336d939 | trentmkelly/LessWrong-43k | LessWrong | Living By Your Own Strength
Followup to: Truly Part of You
"Myself, and Morisato-san... we want to live together by our own strength."
Jared Diamond once called agriculture "the worst mistake in the history of the human race". Farmers could grow more wheat than hunter-gatherers could collect nuts, but the evidence seems pretty conclusive that agriculture traded quality of life for quantity of life. One study showed that the farmers in an area were six inches shorter and seven years shorter-lived than their hunter-gatherer predecessors—even though the farmers were more numerous.
I don't know if I'd call agriculture a mistake. But one should at least be aware of the downsides. Policy debates should not appear one-sided.
In the same spirit—
Once upon a time, our hunter-gatherer ancestors strung their own bows, wove their own baskets, whittled their own flutes.
And part of our alienation from that environment of evolutionary adaptedness, is the number of tools we use that we don't understand and couldn't make for ourselves.
You can look back on Overcoming Bias, and see that I've always been suspicious of borrowed strength. (Even before I understood the source of Robin's and my disagreement about the Singularity, that is.) In Guessing the Teacher's Password I talked about the (well-known) problem in which schools end up teaching verbal behavior rather than real knowledge. In Truly Part of You I suggested one test for false knowledge: Imagine deleting a fact from your mind, and ask if it would grow back.
I know many ways to prove the Pythagorean Theorem, including at least one proof that is purely visual and can be seen at a glance. But if you deleted the Pythagorean Theorem from my mind entirely, would I have enough math skills left to grow it back the next time I needed it? I hope so—certainly I've solved math problems that seem tougher than that, what with benefit of hindsight and all. But, as I'm not an AI, I can't actually switch off the memories and associations, and test m |
5cacca96-9a44-454e-be2d-3bd8b0cb6015 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | A New York Times article on AI risk
I recently wrote [here](https://forum.effectivealtruism.org/posts/PWKWEFJMpHzFC6Qvu/alignment-is-hard-communicating-that-might-be-harder) about the difficulties of publicly communicating the risks from AI. Well, here's The New York Times talking about AI risk! |
2fe2fc91-11c5-48bd-9da9-484be9a5aebb | trentmkelly/LessWrong-43k | LessWrong | Should you publish solutions to corrigibility?
This question is partly motivated by observing recent discussions about corrigibility and wondering to what extent the people involved have thought about how their results might be used.
If there existed practically implementable ways to make AGIs corrigible to arbitrary principals, that would enable a wide range of actors to eventually control powerful AGIs. Whether that would be net good or bad on expectation would depend on the values/morality of the principals of such AGIs.
Currently it seems highly unclear what kinds of people we should expect to end up in control of corrigible ASIs, if corrigibility were practically feasible.
What (crucial) considerations should one take into account, when deciding whether to publish---or with whom to privately share---various kinds of corrigibility-related results? |
f55da2ba-c573-4104-b42f-7cd520e9203d | StampyAI/alignment-research-dataset/blogs | Blogs | Atari early
*By Katja Grace, 1 April 2020*
Deepmind [announced](https://deepmind.com/blog/article/Agent57-Outperforming-the-human-Atari-benchmark) that their Agent57 beats the ‘human baseline’ at all 57 Atari games usually used as a benchmark. I think this is probably enough to resolve one of the predictions we had respondents make in our [2016 survey](https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/).
Our question was when it would be feasible to ‘outperform professional game testers on all Atari games using no game specific knowledge’.[1](https://aiimpacts.org/atari-early/#easy-footnote-bottom-1-2351 "
<p>Full question wording:</p>
<p>How many years until you think the following AI tasks will be feasible with:</p>
<p>a small chance (10%)?
an even chance (50%)?
a high chance (90%)?</p>
<p>Let a task be ‘feasible’ if one of the best resourced labs could implement it in less than a year if they chose to. Ignore the question of whether they would choose to.</p>
[…]
<p>Outperform professional game testers on all Atari games using no game-specific knowledge. This includes games like Frostbite, which require planning to achieve sub-goals and have posed problems for deep Q-networks1,2.</p>
<p>1 Mnih et al. (2015). Human-level control through deep reinforcement learning
2 Lake et al. (2015). Building Machines That Learn and Think Like People</p>
<p>small chance (10%)<br> even chance (50%)<br> high chance (90%)") ‘Feasible’ was defined as meaning that one of the best resourced labs could do it in a year if they wanted to.
As I see it, there are four non-obvious things to resolve in determining whether this task has become feasible:
* Did or could they outperform ‘professional game testers’?
* Did or could they do it ‘with no game specific knowledge’?
* Did or could they do it for ‘all Atari games’?
* Is anything wrong with the result?
**I. Did or could they outperform ‘professional game testers’?**
It looks like yes, for at least for 49 of the games: the ‘human baseline’ appears to have come from ‘professional human games testers’ described in [this paper](https://www.nature.com/articles/nature14236).[2](https://aiimpacts.org/atari-early/#easy-footnote-bottom-2-2351 "“In addition to the learned agents, we also report scores for<br> a professional human games tester playing under controlled conditions…” <br><br>“The professional human tester used the same emulator engine as the agents, and played under controlled conditions. The human tester was not allowed to pause, save or reload games. As in the original Atari 2600 environment, the emulator was run at 60 Hz and the audio output was disabled: as such, the sensory input was equated between human player and agents. The human performance is the average reward achieved from around 20 episodes of each game lasting a maximum of 5min each, following around 2 h of practice playing each game.”") (What exactly the comparison was for the other games is less clear, but it sounds like what they mean by ‘human baseline’ is ‘professional game tester’, so I guess the other games meet a similar standard.)
I’m not sure how good professional games testers are. It sounds like they were not top-level players, given that the paper doesn’t say that they were, that they were given two hours to practice the games, and that randomly searching for high scores online for a few of these games (e.g. [here](http://highscore.com/)) yields higher ones (though this could be complicated by e.g. their only being allowed a short time to play).
**II. Did or could they do it with ‘no game specific knowledge’?**
My impression is that their system does not involve ‘game specific knowledge’ under likely meanings of this somewhat ambiguous term. However I don’t know a lot about the technical details here or how such things are usually understood, and would be interested to hear what others think.
**III. Did or could they do it for ‘all Atari games’?**
Agent57 only plays 57 [Atari 2600](https://en.wikipedia.org/wiki/Atari_2600) games, whereas [there are hundreds](https://en.wikipedia.org/wiki/List_of_Atari_2600_games) of Atari 2600 games (and [other](https://en.wikipedia.org/wiki/Atari_XEGS) [Atari](https://en.wikipedia.org/wiki/Atari_7800) [consoles](https://en.wikipedia.org/wiki/Atari_Jaguar) with presumably even more games).
Supposing that Atari57 is a longstanding benchmark including only these 57 Atari games, it seems likely that the survey participants interpreted the question as about only those games. Or at least about all Atari 2600 games, rather than every game associated with the company Atari.
Interpreting it as written though, does Agent57’s success suggest that playing all Atari games is now feasible? My guess is yes, at least for Atari 2600 games.
Fifty-five of the fifty-seven games were proposed in [this paper](https://arxiv.org/pdf/1207.4708.pdf)[3](https://aiimpacts.org/atari-early/#easy-footnote-bottom-3-2351 "Section 3.1.2, <a href=\"https://arxiv.org/pdf/1207.4708.pdf\">https://arxiv.org/pdf/1207.4708.pdf</a>"), which describes how they chose fifty of them:
>
> Our testing set was constructed by choosing semi-randomly from the 381 games listed on Wikipedia [http://en.wikipedia.org/wiki/List\_of\_Atari\_2600\_games (July 12, 2012)] at the time of writing. Of these games, 123 games have their own Wikipedia page, have a single player mode, are not adult-themed or prototypes, and can be emulated in ALE. From this list, 50 games were chosen at random to form the test set.
>
>
>
The other five games in that paper were a ‘training set’, and I’m not sure where the other two came from, but as long as fifty of them were chosen fairly randomly, the provenance of the last seven doesn’t seem important.
My understanding is that none of the listed constraints should make the subset of games chosen particularly easy rather than random. So being able to play these games well suggests being able to play any Atari 2600 game well, without too much additional effort.
This might not be true if having chosen those games (about eight years ago), systems developed in the meantime are good for this particular set of games, but a different set of methods would have been needed had a different subset of games been chosen, to the extent that more than an additional year would be needed to close the gap now. My impression is that this isn’t very likely.
In sum, my guess is that respondents usually interpreted the ambiguous ‘all Atari games’ at least as narrowly as Atari 2600 games, and that a well resourced lab could now develop AI that played all Atari 2600 games within a year (e.g. plausibly DeepMind could already do that).
**IV. Is there anything else wrong with it?**
Not that I know of, but let’s wait a few weeks and see if anything comes up.
~
Given all this, I think it is more likely than not that this Atari task is feasible now. Which would be interesting, because the [median 2016 survey response](https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/#Answers-9) put a 10% chance on it being feasible in five years, i.e. by 2021.[4](https://aiimpacts.org/atari-early/#easy-footnote-bottom-4-2351 "Though note that only 19 participants answered the question about when there was a 10% chance. </p>
<p>We surveyed 352 machine learning researchers publishing at top conferences, asking each a random subset of many questions. Some of these questions were about when they expected thirty-two concrete AI tasks would become ‘feasible’. We asked each of those questions in two slightly different ways. The relevant Atari questions had 19 and 20 responses for the two wordings, only one of which gave an answer for 2021.") They more robustly put a median 50% chance on ten years out (2026).[5](https://aiimpacts.org/atari-early/#easy-footnote-bottom-5-2351 "Half the time we asked about chances in N years, and half the time we asked about years until P probability, and people fairly consistently <a href=\"https://aiimpacts.org/chance-date-bias/\">had earlier distributions</a> when asked the second way. Both methods yielded a 50% chance in ten years here, though later the distributions diverge, with a 90% chance in 15 years yet a 60% chance in 20 years. Note that small numbers of different respondents answered each question, so inconsistency is not a huge red flag, though the consistent inconsistency across many questions is highly suspicious.")
It’s exciting to resolve expert predictions about early tasks so we know more about how to treat their later predictions about human-level science research and the obsolescence of all human labor for instance. But we should probably wait for a few more before reading much into it.
At a glance, some other tasks which we are already learning something about, or might soon:
* The ‘reading Aloud’ task[6](https://aiimpacts.org/atari-early/#easy-footnote-bottom-6-2351 "‘Take a written passage and output a recording that can’t be distinguished from a voice actor, by an expert listener.’") [seems to be coming along](https://www.descript.com/lyrebird-ai?source=lyrebird) to my very non-expert ear, but I know almost nothing about it.
* It seems like we are [close on Starcraft](https://aiimpacts.org/the-unexpected-difficulty-of-comparing-alphastar-to-humans/) though as far as I know the prediction hasn’t been exactly resolved as stated.
*1 April 2020*
*Thanks to Rick Korzekwa, Jacob Hilton and Daniel Filan for answering many questions.*
Notes
----- |
284f3327-9a2e-4a88-8ff5-ac4e58901c3d | trentmkelly/LessWrong-43k | LessWrong | Why You Should Be Public About Your Good Deeds
(This will be mainly of interest to Effective Altruists, and is cross-posted on the Giving What We Can blog, the Intentional Insights blog, and the EA Forum)
When I first started donating, I did so anonymously. My default is to be humble and avoid showing off. I didn’t want others around me to think that I have a stuffed head and hold too high an opinion of myself. I also didn’t want them to judge my giving decisions, as some may have judged them negatively. I also had cached patterns of associating sharing about my good deeds publicly with feelings that I get from commercials, of self-promotion and sleaziness.
I wish I had known back then that I could have done much more good by publicizing my donations and other goods deeds, such as signing the Giving What We Can Pledge to donate 10% of my income to effective charities, or being public about my donations to CFAR on this LW forum post.
Why did I change my mind about being public? Let me share a bit of my background to give you the appropriate context.
As long as I can remember, I have been interested in analyzing how and why individuals and groups evaluated their environment and made their decisions to reach their goals – rational thinking. This topic became the focus of my research as a professor at Ohio State in the history of science, studying the intersection of psychology, cognitive neuroscience, behavioral economics, and other fields.
While most of my colleagues focused on research, I grew more passionate about sharing my knowledge with others, focusing my efforts on high-quality, innovative teaching. I perceived my work as cognitive altruism, sharing my knowledge about rational thinking, and students expressed much appreciation for my focus on helping them make better decisions in their lives. Separately, I engaged in anonymous donations to causes such as poverty alleviation.
Yet over time, I realized that by teaching only in the classroom, I would have a very limited impact, since my students were |
8f41baff-382f-4334-ae1f-e7e116018f69 | trentmkelly/LessWrong-43k | LessWrong | What journaling prompts do you use?
When I started daily journaling, I just started writing about what happened that day. Now, I'm wanting to improve the process. Oftentimes, people seem to have questions that they answer every day as part of their journaling. That could be the classic from positive psychology "What are you grateful for today?" or "What did I learn today?” from Tyler Cowen or something different.
If you have a journaling habit, what prompts worked well for you? If you tried some prompts that didn't work well for you, I'm also curious about those. |
6e38ce4b-3b0c-4d00-a047-8a3fec8d67ef | trentmkelly/LessWrong-43k | LessWrong | Madison Meetup - Ideas, arrangements
It's been a while since this summer's meetup; we should be having them at least quarterly, and probably monthly. And I said I'd set them up, so, let's put together a Madison meetup some time next week.
If no one voices a strong preference by, say, Friday, then it'll be at Indie Coffee again, on Wednesday, 16 February, at 6 pm. But we definitely don't want to meet on their patio, and it's a bit small inside for a dozen people or so. I'm not sure where else I'd hold it, though. A quiet, spacious bar or cafe? First floor of the WID? Someplace that we could meet, regularly, all year round would be preferable.
Thoughts?
|
90f3f9fd-1839-4e46-be34-a961819bc2c8 | trentmkelly/LessWrong-43k | LessWrong | Un-unpluggability - can't we just unplug it?
My thanks to Sam Brown for feedback on readability and ordering
> Can't we just unplug it?
Cover photo by Kelly Sikkema on Unsplash
A few weeks ago I was invited to the UK FCDO to discuss opportunities and risks from AI. I highly appreciated the open-mindedness of the people I met with, and their eagerness to become informed without leaping to conclusions. One of their key questions was, perhaps unsurprisingly, 'If it gets too dangerous, can we just unplug it?'. They were very receptive to how I framed my response, and the ensuing conversation was, I think, productive and informative[1]. I departed a little more optimistic about the prospects for policymakers and technical experts to collaborate on reducing existential risks.
Here I'll share the substance of that, hoping that it might be helpful for others communicating or thinking about 'systems being hard to shut down', henceforth 'un-unpluggability'[2]. None of this is especially novel, but perhaps it can serve as a reference for myself and others reasoning about these topics.
This contrasts pretty strongly with a more technical discussion of 'off switches' and instrumental convergence, handled admirably by e.g. Rob Miles and MIRI, which is perhaps the reflex framing to reach for on this question (certainly my mind went there briefly): absent quite specific and technically-unsolved corrigibility properties, a system will often do better at an ongoing task/intent if it prevents its operator from shutting it down (which gives rise to an incentive, perhaps a motive, to avoid shutdown). This perspective works well for conveying understanding about some parts of the problem, but in my case I'm pleased we dwelt more on the mechanics of un-unpluggability rather than the motives/incentives (which are really a separate question).
Both perspectives are informative; consider what you are trying to learn or achieve, and/or who your interlocutors/audience are.
Un-unpluggability factors
Broadly, I'll discuss six clas |
8a0e55f1-a183-4240-8758-26bdeb014ac0 | trentmkelly/LessWrong-43k | LessWrong | Progress links and short notes, 2025-05-31: RPI fellowship deadline tomorrow, Edge Esmeralda next week, and more
It’s been way too long since the last links digest, which means I have way too much to catch up on. I had to cut many interesting bits to get this one out the door.
Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Notes, Farcaster, Bluesky, or Threads.
Contents
* Apply to the Roots of Progress Fellowship by June 1st (tomorrow!)
* Edge Esmeralda next week!
* My writing (ICYMI)
* Other people’s writing
* Jobs
* Grants & fellowships
* Events
* AI announcements
* Introductions
* Career moves
* Nuclear news
* Aviation news
* Other announcements
For paid subscribers:
* Stagnation was the goal
* Is stagnation a measurement illusion?
* Eroom’s Law
* Cembalest on AI
* More on AI
* Bio
* Podcast interviews
* Links and short notes
* Politics
* Housing
* Gratitude
* Quotes
* Charts
* Aesthetics
* Fun
Apply to the Roots of Progress Fellowship by June 1st (tomorrow!)
Applications are still open for the 2025 Blog-Building Intensive! Launch a blog and improve your progress-focused writing with expert guidance and an amazing community progress builders, writers and intellectuals.
In addition to a general focus on progress studies, this year’s fellowship features two themes: (1) agriculture and (2) health, biotech & longevity. We welcome fellows writing on any progress-related topic, but for a handful of spots, we will give preference to applicants focusing on these themes, for which there will be dedicated programming.
But don’t take our word for it, see what others have to say:
* @NikoMcCarty: I can't recommend this Writers' Fellowship enough. It helped me find my community, challenge my own work, and improve very quickly. You should apply! And feel free to DM me directly if you have any questions about my experience in the program.
* @gtmulligan: This program changed my life. Happy to talk with anyone about my experience. Apply, apply, apply! [See also Grant’s p |
e5b8d974-e5ed-4740-a857-22c529fc48cb | trentmkelly/LessWrong-43k | LessWrong | Wiki on Suspects in Lind, Zajko, and Maland Killings
Hey everyone,
I've been following the news about the killings linked to LaSota, Zajko and associates, and right now, finding all the relevant information is a challenge. Either you have to dig through scattered sources, or you’re stuck reading a single, extremely long Google Doc to get the full picture. To make things easier, I’ve started a wiki on LessWrong where we can organize everything in a more structured way—breaking it down by case, suspect, and key details.
Right now, it’s a work in progress, and I’d really appreciate contributions from others. If you’d like to help, here are some useful things you can do:
* Add relevant news articles to the news section so we have a centralized reference.
* Populate tables with legal documents and other key resources.
* Expand background info on individuals—things like career history, geographic connections, and general biography.
* Fill out incomplete pages marked in red—these are important topics that need content but currently only have titles. Even adding basic factual information from reliable sources helps build out the wiki's coverage.
* Improve individual pages by adding relevant details and citations from public sources (preferably news articles). Please be mindful of privacy—don't include inconsequential information about people who aren't publicly named as suspects.
If you see gaps in the wiki, feel free to create new pages to cover them.
One last note: The wiki (including this post) was generated by a language model.
Thanks in advance to anyone who helps build this out! |
19ab54a7-15ab-480d-8d8c-b4c045089ef0 | trentmkelly/LessWrong-43k | LessWrong | What is the most rational view of Peak Oil and its near term consequences?
To me the following points seem hard to argue against:
1. Oil is harder and harder to find every year (we already took the easy stuff, nobody finds super-giant fields anymore)
2. The peak production year was 2005 with 73.7 million barrels produced
3. The amount of oil produced each year is declining
4. The price of oil (and therefore energy) rises
5. All the alternatives that were supposed to fill the gap are failing to deliver
6. Even oil that's harder to get (e.g. in deep water) doesn't help much as it is generally produced at a slow rate
7. Available energy production rate (i.e. power) drops
8. Since nearly everything needs power to create/mine/produce prices rise
9. Food for example becomes more expensive as fertilizer prices rise
10. The average person is mystified as the price of everything seems to rise at once
11. Business and whole national economies are squeezed by rising prices
12. As businesses fail unemployment increases
13. Politicians are powerless, so promise general feel-good nonsense like "energy independence". Nobody even tries to tackle the problem.
14. Everything continues to get worse, and at an increasing rate
15. Within the near future the lights start to go out.
Sure there's a possibility that a form of nuclear fusion/thorium/cold fusion/zero point energy that is safe and cheap to build and operate might be invented tomorrow, but given that such things usually take a decade or so from inception to delivery it looks like there's no practical alternative on the horizon. Thermodynamics is a harsh mistress. Work out the energy in 73 million barrels of oil, and figure out how many wind farms are needed to offset a 5% decline. And then another decline the next year. Even uranium prices are rising as demand outstrips supply for just the current set of reactors.
The more we examing the situation the worse it seems to be. Some early wells had a enormous energy return on investment, e.g. for the energy of burning one barre |
30afcf53-8544-4ccb-9364-7e0a04a8aa42 | trentmkelly/LessWrong-43k | LessWrong | How do you determine temperature cutoffs for women's body-temperature?
I do have a female primary partner with whom I'm not cohabating. Even through I'm mostly in quarantine and my partner is as well, I do want to minimze the risk of getting COVID-19.
Given that fever is frequent in most cases of COVID-19, I consider daily temperature measurement a good way to decide whether or not to see my partner. I likely want to use a relatively low value because I'm more afraid of false negatives then false positives.
For women the temperature fluctuates during the cycle and I haven't read much about that. How do I set a good cutoff for abnormal temperatures for a woman? |
28666b69-1e30-49cd-80df-79f721a7f939 | StampyAI/alignment-research-dataset/youtube | Youtube Transcripts | Human-Robot Coproduction: non verbal sharing of mental models with AR/VR (Doris Aschenbrenner)
reactions and i look forward to the
discussion
um i hope everything works on the
technology side
you should see now uh my presentation
and
uh yeah maybe i just like to share a
couple of like insights of you from my
background
originally i'm computer scientist and
then i did my phd
uh within the area of human uh like
let's say
tailor maintenance a remote maintenance
for industrial robots
together with companies and a tno like
extra university institution and i came
to tudor for my post talk and
kind of stayed here and
my interest and that's maybe i dive a
bit in the domain context because that's
really relevant for my research
and i'm not sure how many of you are
already familiar with that so i try to
bridge the area of manufacturing with
human-centered
designer molars and may some of you i
already know i read
the names you are familiar with one but
maybe not with the others so
please i would try to kind of give a bit
an introduction to
more or less the problem so um
so you should see my video now and you
know that their
development of human industry can be
developed into four industrial
revolutions so mechanical
um revolutions um steam machine and
these kind of things
electrical engineering and reclical
introduction
then computers and automation and then
finally this is what we call now the
fourth industrial revolution where you
have automation
coming to a new level with artificial
intelligence and new types of robots and
these kind of things
and the interesting thing is um
yeah within that area also the
work of people change a lot so we're not
facing only a technological change which
is called in this fourth industrial
revolution for industry 4.0 but it is
also happening in other parts of the
world within different names for example
made in china 2025
um but we're also encompassing um well a
social change so
there is an aging society and we are
also
having uh some migration streams um
and here we have all these questions
about how
is this future work within manufacturing
looking like
this is getting much more um
this question is getting much more
interest at the moment so for example
the world economics forum forum or there
was also a very cool
uh conference from stanford
uh on on ai and the future of work i'm
not sure whether you
were aware of that otherwise i should
just share um
the link maybe and um i see
this kind of research within this
context of course not solving all of
this question
and the interesting thing for me is that
there are basically
four different types of future scenarios
so which you can only read in literature
and there's a very nice
unfortunately german paper who kind of
summarizes a bit of that research about
future work in the production industry
and they come basically up with like
four different streams the first one is
that the robot will take over
that's what you mainly also hear in mass
media and i think everybody if you have
been tortured with this kind of all the
robots will take over the world stuff
and there is also a contra scenario
which is more or less on the
okay within that new technology we also
can use this
new technology in order to come to a
more human
centered new type of organization and
these are the homogenic
um either the one when or the other
scenarios
and there are also other scenarios that
are discussed in literature
one is definitely that there will be
winners let's say in the higher
in the higher up um uh quali in a higher
qualified
region um for example uh yeah if you
regard
like our jobs in the end or i love this
quote
which says well there will be two types
of people in the world those who tell
computers what to do and those are told
by computers what to do
i think this this polarization scenario
goes in this direction
and and then there is also another
scenario which is also interesting to
have in mind
that stuff is dissolving and dissolving
so you don't have any boundaries anymore
with respect to space and also hierarchy
because of the strong modularization
so these are the two more or less
diversification scenarios
and my faculty has more or less
the um aim to design for our future and
if we want to go
in the envisions future that we also say
it's the preferable
future then we choose choose to design
for the scenario of these four
which also for our side is the most
preferable one and this is the the
second one where the humans are helped
by technology
which i call it among others as the
operator for the theory scenario
and what does this operated 4.0 mean
well you have this force industrial
revolution stuff is getting much more
complex
less transparent but we still have in
high demands of safety
and of course efficiency and the humans
and the robotics colleagues needs better
ways to communicate
with each other in order to make that
happen so apart from the factory 4.0 we
also need the operator for
zero which we envision here a bit in the
superhuman style
and how does it look like exactly
the the basic paradigm is that we have
this cyber physical production system
which is more or less the manufacturing
environment
and we have the human in the center in
the interaction with that system
and we have more or less technology
helping this human to be better
in his or her work and enhance the
physical capabilities
so this could be for example using an
exoskeleton
and then we have the enhancement
possibilities of descending capabilities
so that's where i talk a lot here
in this talk um about using augmented
virtual reality in order to improve
um on one hand ascending capabilities
but also on the other hand
cognitive capabilities but you can also
envision much more
uh yeah different functions than ar we
are in these kind of two realms
and uh one thing that is very important
to understand is that there are we have
like
technical challenges which are mainly
discussed so complexity
uh dynamics so that stuff is not
non-linear
and then we don't have a not transparent
situation of the manufacturing
environment
and but we also and these these
challenges with indian manufacturing
industry or the robotics domain are
very much discussed a lot
but people tend to only talk about the
technology
and if we regard on the theory behind of
a
socio-technical work system then this
looks like this so you have some kind of
work cell and you have some input coming
in you have some output
going out and you have of course the
task and the physical system involved
with the task
and this is what we call the technical
subsystem and a lot of stuff
is only like what you leading literature
at the moment is only focusing this
using ai for uh predictive maintenance
or something like that then it's kind of
like centered only on that
that part of the system but the system
is larger
we have the people with this cognitive
and social abilities
and we have the structure of the entire
factory or manufacturing environment
which is of course interacting a lot
with the technical system
and we of course need to focus also on
the inner dependencies in order to
really make the entire thing work
and that is something well i think the
designers among you
are kind of people that have something
to do with you in fact to say yeah well
that's logical that's what we always do
um but it's not entirely logical
especially in the manufacturing domain
there was a lot of stuff that was only
focusing on the technical development
and there are a lot of comp
opportunities if you want to use human
centered or human computer interaction
within these industrial environments
you have less training you might have a
higher safety
a quicker problem solving and an
increasement of the well-being
um and this comes more or less to our
guiding
questions which are a bit stolen by from
the dutch
research agenda for artificial
intelligence
so we try to design an augmentation
layer
so that humans and robots can
productively interact and understand
each other
and and we want the human to trust
autonomous system
and we want to enable task sharing so
mutual understanding
uh between both partners yeah in order
to come to such a nice
um yeah well handshake situation uh
where
it's not only the human doing the work
but it's also not only the robot doing
the work
so and what would we understand by this
human robot co-production which is
the framing that we had um if you regard
manufacturing environment this stuff
looks like normally like this so you
have a lot of
like sometimes dirty machinery big
machinery
and some robots that are encaged so you
can see the bottom here there's a robot
who has a safety cage around it and
humans are basically only able
to kind of interact with these big
robots from a distance
um and this is currently a bit changing
because there are these collaborative
robots which you also i think
already know and they are designed so
that the human can
clo work in close interaction with them
and we don't require any fences anymore
we can have direct interaction
t readily quicker programming and
the market is increasing a lot in this
area because these kind of small robots
can take
away a couple of like small
manufacturing tasks and they're much
cheaper
and yeah they're quite promising but
um we still have some stuff to dissolve
there
maybe as a kind of overview why this is
interesting or why the market is kind of
increasing at the moment
um if you regard um high
large enterprises for example uh
automotive is not a perfect example but
let's use automotive um you have a high
production volume
and you have different parts that are
coming from that have low
um uh that have low high production
variation so for example
i need a whatever car and i need a
specific kind of seat i need this
so the car itself comes with a lot of
volume
but the different components come with
low volume
so and this is making the um high
um uh the the large enterprises
um being enabled to automate part of the
production already quite quickly within
the
third industrial revolution let's put it
like that um
and they can do highly automated stuff
they can do high volume low variation
stuff quite well
and they have optimized the factories
for that um
but if you regard small and medium-sized
enterprises or
um also other people that do let's say
batch size one or small batch size
production
they are less automated less low volume
and higher variation
and this means oh we need a better human
robot collaboration on this low volume
area so how does it look like
with the human on one hand robot on the
other hand we have some kind of
interface in there
and i still stick very much to some
quite
old theory um from sheridan where you
have
where you say that you have a different
task from human
women need to plan past teach a robot
monitor
robot is doing the right thing intervene
eventually teach again
and then learn and that still is more or
less
the basic things that still are there
maybe they're a bit quicker than they
were before
and this kind of human supervisory
control
is um yeah using a lot of different
mental models
so i don't want to give you too much
in-depth discussion but you kind of have
a
mental concept of stuff how stuff works
and what is quite interesting that if
you have this kind of control chain
there are a lot of different mental
models that are coming to pass for
example
if you here see the different components
the human has
a mental model of how the robot will
operate
to this place will show a specific
representation of the robot
which is always only a picture and
depicts also
the mental model that the programmers of
the display or the interaction
software has then of course we have an
internal
mental model of the computer which might
be a bit different to what the human
actually sees and can understand
and everything which has been designed
as being a control panel
also has an embedded mental model in
there
how it's designed and how stuff would
work and the interesting thing
within manufacturing industry this is a
bit of a dancing bear problem
a dancing bear problem is well known in
human-centered
interaction theory so you're so glad so
if you look at a bear that is dancing of
course it's animal cruelty and we know
about that
but if you look at that bear and you
possibly you like it and you say well
cool the bear is dancing
and you're saying oh well that's cool
because you never saw a bear that was
dancing
um but if you regard yet human dancers
and you give a
or b values for that the bear doesn't
fit at all
this kind of classification but you're
still happy that the bear is dancing
because it's the only bear that you know
and this is more or less the same which
happens with human interaction
especially in specialized industry
you're so happy that something is
solving your problem that it might be
over complicatedly solving it but you're
still happy
hashtag sap or something like that um
and there are this is just the thing
that we are covering
and there are a couple of worker needs
within that area for example
of course human want to stay healthy and
the work
should be sufficiently demanding but not
too demanding the human wants to
understand
what's going on and how to control the
system
and of course on an even higher level
you won't want to trust the system and
don't fear that it kind of is overtaking
him or her
and feeling valued by the entire context
so a lot of stuff to couple and this is
only
more or less the basic layer is physical
ergonomics and we have
cognitive ergonomics and then we have
emotional aspects or what we call user
experience
um which is a bit more than that um
and here of course there should be
design methods for kind of making that
clear there are design methods from
other areas but they're not
that well established within the
manufacturing field
so coming to the overall research topic
that my
group and i'm trying to couple is to how
to design a hybrid human robot system
that is able to optimize
both workable being and the overall
system performance
to really come to some kind of handshake
working together situation
i quickly go through some related
research i think a couple of people will
know some of this um first of all i like
very much the trading and sharing
control
theory so that if you have a human
worker then you have a specific load
that that human worker is able to carry
and if i have a computer system i can
use that computer system in order to
extend
the capabilities of the human so it's
not
only the human load it's an increased
load by
having a computer taking part of that
job but you also can use the system
in order to relieve the system so the
load is the same but the human has
kind of some relief in there you also
can use it as a backup to the human
and but then also there are some fields
where you
say okay but not that many to be honest
where the system or the automatic system
is replacing the human
but with a less load because human is
much more capable still
than an autonomous system and i also
like very much the levels of automation
also this
is quite old but nevertheless especially
kind of refined for the field of
manufacturing
so in more or less it gives a great uh
yeah kind of um yeah difference
between the total manual um case
and the totally automatic case and it
kind of defines
some some more or less discrete areas in
the mean
uh while where you can say okay there is
kind of a
um especially we are interested in this
supervision
and intervene case and not not too much
in the uh
closed loop case and of course
there is a lot of classification on how
humans
and robots can interact here the
so-called levels of interaction
on the left side about the constellation
of the group so between humans and
robots multiple humans multiple robots
and on the right side more or less the
quality of the interactions so is
both are both players active in the task
is one only supportive maybe some
inactive but somehow present
or is there some kind of intuitive hand
over thing
of course that's where we're all aiming
for but it's really hard to design
and then you also have this level of
collaboration which is a bit more on the
physical side here if you can regard the
robot
and the human either they totally
separated that's the normal case for
nearly
all of the industrial cases that we are
currently also inquiring
and these kind of co-existing or
synchronized or even cooperation or even
collaboration cases so coming more and
more to this kind of
shared thing that's still quite very
unique because
um it's also with a lot of effort
involved within real industrial cases
there was a very nice phd thesis from uh
our uh yeah associated with the stuff
that we are now doing
which unfortunately he doesn't work with
us anymore but if you're interested he
also had a very nice
uh work on using this kind of
operator-centered production design
and you can can look it up if you want
and the other thing that we are very
interested in
in order to make this kind of
interdependent teamwork situation
possible we need to have legibility
so predictive ability between what a
robot is aiming to do
and this is has been proven to increase
safety
comfort surprise or lessen surprise to a
certain extent
increase efficiency and also the
perceived value
of the human worker
um and how do we do that and how do we
increase
what on the human side is happening on
the robot side legibility is more or
less incorporated but on human side we
want to do
situation awareness you want to kind of
get
the human to a point that he or she is
understanding what is going on
and situational awareness is basically
more as a measure for understanding
what's going on and it kind of is
defined in different levels okay i know
that there's a lot of
like discussion on that whether this is
a valid
concept but i like it very much because
it's really applicable also for my
domain
and on one hand you say perception i
would like to know
what is there i would like to kind of be
able uh
to identify all the critical elements
the second thing i want to comprehend
what is the meaning and the significance
of the situation
and then i also in order to plan and
to interact with each other i need to be
able
to project how the future state will
develop
and this is involved also with concept
of sense making i don't go into detail
here
and then later on also into sharing the
mental models between the human
and the robots also these kind of if i
know what a human or what the robot is
kind of aiming at
then it also will increase my situation
awareness
our specific focus is then to say okay
we want to design this kind of
augmentation layer for this human robot
co-production within the era of
manufacturing
and here i come back to the uh social
technical system stuff that i have
introduced earlier
so we still have this human and robot
cell with some input coming in some
output going out and we have these
like combination of the social system
and the technical system
and our augmentation layer is enhancing
the physical sensing and cognitive
capabilities
mainly the last two in order to come
from this kind of
normal human worker to our worker 4.0
and we have these two factors worker
well-being and work performance that we
want to optimize for
and the specific focus that i would like
to kind of
enhance here because other people within
my group are more on a cognitive side
for example or more on the physical side
i'm using augmented virtual reality as a
tool
in order to kind of yeah improve this
overall system
and to come back to these uh research
questions
and then breaking down these research
questions a bit so that you can have
some comprehension on
on what you're actually doing um so we
want to design a human
augment augmentation layer so that the
humans and the robots can productively
interact and understand each other's
behavior in context
and here let's break that down with
respect to literature and also kind of
the stuff that we actually can measure
we want to help with situational
awareness we want to help with sense
making
we want to help with decision making and
we want to help with the sharing of
mental models
so let's have a bit of a dive in
for example if you want to improve the
situation awareness uh what you could
do is like we also um are of course
interested in level one and level two
situation awareness
but mainly we are also very much
interested in having a level one and two
and then
the level three which is the projection
i want to know what the stuff is going
to do
and a very basic example maybe but quite
comprehensible of what is feasible
is increasing the safety by
projecting the future trajectory of a
driving robot
so here is the example study um
we have a person walking a specific
way and we have a robot where we know
the robot will have a specific
trajectory
and we have two conditions in one
condition we don't have a projection and
the other one we have a projection on
the floor
and uh here you can see it's based on a
video study
and you can see this is the video
material that participants were look
watching as
and here the normally the system would
uh
the the participant would be
interrupted to watch the video and ask
what he thinks or what she thinks that
the robot will do next
and you can see here we're doing these
experiments within the era of
semi-cell and yeah death was quite
yeah predictable so we had different
scenarios
of different interaction scenarios and
you can quite see if there
are specific type of scenarios it's
quite
really helpful to have some kind of
projection in there with other scenarios
and this was on scenario four for
example this was that the human is
actually
doing a task and then the robot comes in
we don't have any significant difference
um but and that was really really nice
to just see okay what can we do
in real world on one hand but also and
within respect to these
sitting away situational awareness and
the other example
is not with driving robots but with
moving robots collaboration robots and
here you can see
that we have made the task a bit up
because yeah in order to have it more
controllable
there is a person packing stuff into um
for packaging and part of it should be
done by the person
part of it should be done by the robot
and that's more or less the same
like more or less similar setup than the
first study
and um what we're doing here
we're using the same
situation in virtual reality so here in
virtual reality you can more or less
also say
let's switch on a perceived future
trajectory of the robot for example here
you can see that small
um a moving uh trajectory
so that there is some kind of projecting
possibility of the future and of course
you can have a lot of different
visualizations for that
um and this helps you to understand okay
what will the robot will be doing next
and the nice thing is that we're not
only able to do that in virtual reality
but we also can use
augmented reality for this and here you
can see someone putting on the microsoft
hololens
and we have developed some nice
framework where you have
um you can see the robot moving and also
on the left side also the
virtual robot moving uh we have more or
less a framework where you can have
all the stuff that you were seeing in
virtual reality it's developed in unity
and with that kind of feedback framework
to the robot operation system
you can have the same visualization
stuff also happening
in augmented reality and the question is
here and it is unfortunately ongoing
sorry for that
um which kind of visualization would
help and in which scenario does it help
does it help in real life
um situation does it help in the uh
only virtual reality environment um
when yeah where are the benefits where
did you get the biggest benefits for
this kind of situation awareness with
respect
to understanding what the robot is going
to do next
hey and where do we apply that then this
is more or less the
um the the last case that i want to show
you
and this is an application where we have
we work together with the robot with the
bicycle manufacturer
and the idea is to share tasks within
bicycle manufacturing between human and
robot because there are some tasks that
are really
not easy automatable and how are we
going to kind of do this kind of task
sharing
and if we do discuss sharing how are we
going to communicate
the task sharing and the stuff between
the human and the robot
yeah this is much more to it so we have
done
quite a stuff a couple of stuff so far
that
we have developed a digital twin of the
samix l
environment within unity so that you can
use that for experimentation
we have designed some kind of control
room features within unity
for some xl which we are hopefully
implementing somewhere in the future
also there in real life we did a couple
of studies on automation
capabilities for this bicycle case uh we
did a couple of papers on using
augmented virtual reality for helping
within the field of manufacturing
and also for planning manufacturing
tasks if you want to read more
i'm totally happy to share also some
more examples later
in the discussion but i just wanted to
conclude it
here um yeah this is everything
is only possible because we have such a
great team
um and uh all of that work is no
not the work of someone alone it it
always is the combination of people
and i have a wonderful team i'm so happy
uh that we work together
um i have a quick video i'm not sure i
need to look at a time
which i could share with you of like
more or less all of the projects that we
have going on right or now
but i'm not sure if we have the time or
we will not start the discussion first
thank you very much for the attention
and um yeah if you want to look to
watch the video we can also totally do
that thanks
thanks doris um is there any immediate
like
question that people have for doris or
else i'm actually quite happy to see
more examples because i think that
they're great
so uh that's actually quite exciting
so yeah doors can you maybe quickly
yeah i guess yeah okay okay just like
try to do it because i try to have it
running with sound which is
um hopefully working
uh no that doesn't work because if i try
to
upload it that is not working
okay um yeah let's see if it works if
you get
if you don't have sound please let me
know
sound no we don't hear anything so you
have to narrate
yeah i'm sorry and that's also something
i could do but they have such a nice
sound
um okay let's see if it works now
do you have thought now hey hello
welcome to this virtual tour of our
research projects at some excel
the research we do at some excel is
focused on the future of manufacturing
and sustainable human robot interaction
this is our team and we all welcome you
we're excited to tell you about our
research and show the great facilities
at some excel
hi i think most of you already know me
i'm your friend
i work at the applied labs but also work
here at some excel
and here at some iselle i helped to
develop all of the research facilities
for our projects and it also helped
bridge
the research we do here at sun excel
with the research we do at the applied
labs
so let's have a look inside
so this is the main hall of some excel
so the raw ones can be found here
it's 2 000 square meters robots
and very cool projects and in the combat
area
we have the robofish project in the
robofits project
we're helping a bike manufacturer to do
production of bikes with cobalts
let's have a look at some more projects
we do here in the combat area
hello there i'm jonas i'm an xr
developer
my primary work concerns the topic of
digital
twinning this does not only include the
visualization of
cyber physical systems like robots
or agvs but also the development chain
behind it
hi this is elvis so over the previous
year
i've assembled and have been developing
the ros
composite pick and place workbench and
together with others i've been working
on tooling
so that we can visualize soft robotic
actuators in real time in ar
apart from using cobots we also do
projects with
mobile robots these power robots can
drive
autonomously around factories let's have
a look at that
hi i'm denis this year i'm happy to be
a member of two projects first one is
collaborating and coupled agv swarms
where we use mobile robots to improve
intro logistics and second one is
profits where we use robot arms to
improve bicycle assembly line
hello my name is martijn and i've been
researching the possibilities of
applying spatial augmented reality
in the smart factory context an example
of this
is to use projected arrows to improve
the communication and safety of
autonomous vehicles
hi my name is better caseman the koch
project my colleagues and i have been
working a fleet management system
called rooster brewster's goal is to
simulate schedule
and plan tasks for robotic fleet in a
warehouse situation
hi my name is nil naga i'm a controls
engineer on the team
and for the past year i've been working
on setting up navigation software for
multi-robot systems
so that robots like this one can be used
to carry stuff around
factory shop floors and warehouses on
another front quite recently i'm
involved
in extending the bobrov project which is
a robotic arm program
to paint
so let's have a look in the rest of the
sub example
here at some excel there's also a really
really big robot
it's called a gentle robot and it's
situated in this corner
let's have a look
this robot is huge it measures
12 meters in length 10 meters wide and
5 meters high different types of tools
can be attached to this giant robot
aerospace engineer will use it for
drilling riveting halls in giant
airplane wings
but imagine our faculty attaching a
giant 3d print
head to this robot then we will be able
to 3d print giant structures
prototypes of car bodies or even large
outside furniture pieces
all of these robots here at some excel
produce a really amount
of data it's hard to comprehend for a
human being
my name is samuel kernan i developed an
assistant that can
automatically generate a report based on
a conversation with a technician
this saves time reduced the perceived
workload and resulted in
reports of higher quality for my phd
we'll be developing an assistance that
can provide cognitive support to factory
workers
while they use analytical tools
my name is sudei and i'm gonna join the
department and the koala project as a
postdoctoral researcher soon in december
i've been working mainly on recommender
systems
since my master's thesis in stockholm
and then over my phd
and then my postbook in ap
to dealt see you all soon
hello my name is santiago i'm a product
design engineer and i am
participating in the diamond project as
a postdoc where we are developing a
digital intelligent assistant for
supporting the maintenance activities
of the maintenance staff at
manufacturing companies
all the data the robots create we also
have developed
the virtual world of some itself
let's have a look inside this virtual
some excel world
my name is danielle ponto and my work is
mainly focused on extended reality or xr
i work for the mirrorless project where
we create a digital twin
where robots can be viewed and
controlled remotely for this project we
create tutorials where we teach how to
use this digital twin framework
hi my name is irina and i am responsible
for a virtual playground community
it's a community that connects
researchers students and startups
interested in vr and ar technology we
have
regular online meetups with experts from
all over the world
and will be happy to see new members
hello my name is jasper and my calling
is teaching
which is why i'm here to make all of
these exciting new technologies
accessible to steve
okay so i think that's it that's only
the teaching program which i
kind of miss now yeah i hope you like it
and we still have another
i have more videos because augmented
virtual reality is always with a lot of
videos
but i hope you like to kind of give a
bit of an feeling what we're doing
this is great thanks uh doris that's
really really exciting stuff
it was originally presented for our
faculty because we didn't have the
possibility to show them in real life so
um that's the reason why it's a bit on
the ide side of telling stuff i just
wanted to kind of share that with you
because it gives much more tangible
feeling to it
yeah now great so with that
i was wondering uh if anyone had some
questions for doris like more like
about the projects that you showed
oh yeah i see that people would love to
uh if you have a link to this video
doors
apparently people are very keen to uh
okay yeah we have don't have it online
we have showed it
within this kind of uh these natalis
thing
so that's something i can definitely
show you um
but i think uh we should make a real
version for youtube because that was
only for internal purposes
uh we will do that um i i still hope
that it will come soon and then i will
share you at the link
cool very good so i actually have a
question to just to kick it off if
that's okay
um that was fine and uh so so actually
two questions because one thing that i
think is really cool that you're working
with an actual bike's bicycle company
right on this cool production
so i was wondering uh are you also
applying your um
external reality for those people so the
actual line workers
and and do you know how they like it
whether they like working with the road
whether
adding this uh layer uh makes it like
their work
there they enjoy it more i was just
wondering
yeah so so the real application cases
within the augmented virtual reality
domain
are mainly for other purposes like
especially for
maintenance tasks i did some old studies
for that
i can link them if you want so that way
we
actually did also a comparison on using
students as
participants for these kind of
applications and real workers
and the interesting thing which is maybe
not that maybe already obvious
is if you test these stuff with students
it's nice you will get some kind of
results but
in the end you really need to test it
with the real end user
and they will see the stuff entirely
differently
um so everything would be doing we also
try to really involve the end users
within the bicycle project we are not
actually there what we did within a
bicycle project if you want we can also
share another video sorry for that
and we built an envision scenario
for the kind of co-production within vr
because the problem if you want to talk
with workers on what they like and what
they
would prefer they actually don't have a
clue
what to long for and what robots are
capable of and how could this envision
situation look like
and we uh basically were using the
approach of
using the virtual reality environment
for
setting them into that future scenario
and then having a discussion with them
we did this together with robo house and
i think we officially release
it uh the video or something like that
soon
um and this is where we applied
human like human robot co-production for
the for the bicycle industry with
virtual reality
the main stuff i'm doing on uh augmented
reality assisted
things at the moment i was doing a lot
on maintenance and repair
tasks and i might come back to assembly
but at the moment we do composite layup
so what we do is
uh as you saw this kind of pick and play
stuff with the with the robot and using
the composites
here is where the industry in within the
aerospace domain
has a lot of interest
within the manufacturing industry my
main focus would not be the direct
assistance for single worker cases
i did a lot of cases for multi-workers
so
if you have someone on the phone and
collaborating with someone local
a couple of situations that we have a
lot at the moment with due to corona
and this is where i'm it's called
computer supported collaborative work
there is where i'm uh yeah have worked a
bit more
because like at the moment everybody is
doing repair and instruction
maintenance a kind of suggestion stuff
and there's already a lot out there in
the industry
and so it's it's not that interesting
anymore because it's like a lot of stuff
has been already discussed there so only
the more complicated cases
multiple humans or humans and robot
systems
or what we will do in the koala project
uh this cognitive advisor
the um the ai system giving you
cognitive advice so that is going to be
a bit more interesting than the
uh like normal okay i know how the
instruction works and i give you some
some tasks if you want to read something
on that i have a very large comparison
study
um with 150 participants um
but i'm not sure if i want to do that
again and uh
on kind of how ar and vr can ar
visualizations can help on uh this kind
of
instruction based uh stuff cool yeah
no definitely so if you can share it
afterwards uh yeah i'll uh
i'll be happy to um luciano
hi ladies thank you very much this was
really fascinating uh presentation
and i was uh i really like also the
example
you show the project you show about the
robot projecting
the expected trajectory on the ground
and
yeah i can imagine that really helps for
the operates for the people to
understand have a little bit more mental
model of what the robot is intending to
do
uh and i'm just thinking
a little bit about this interplay so as
soon as you give this notion so maybe
they're gonna feel
the operator might feel feel a bit more
comfortable and get like closer to the
robot and that that could be like
this kind of emerging interaction
patterns on this and that's
maybe the the so the menthol
that helps the humans to form the mental
model but how does it also
helps the robot from the mental model
and the adaptation
on that so you have any thoughts on that
direction
yeah so so the interesting thing is with
the autonomous ground vehicles within
factory interlude
logistics this is a field which is a
rapidly evolving at the moment within
the manufacturing industry
so a lot of questions that we are
discussing currently within the
autonomous driving community
is kind of entering the factory like on
through the back door so a lot of
questions that we have on the normal
street interaction stuff is kind of
entering now the manufacturing world
so the question i think what is
important to know is that autonomous
ground vehicles are not entirely new
to manufacturing they are quite common
actually
um but they are not self-steered and
kind of like swarm-like behavior
they have their dedicated routes they
have the very very strict safety
routines on
stoppings is there is any obstacle they
need to stop and these kind of things
and they are interacting at the moment
quite predictable
so um because they have this kind of
lines more for example on the floor
which they are following
and these kind of different passages and
factories
are designed and like compared to
streets factories are designed in such a
way that humans behave as part of the
machine so there are
very strict rules on how humans are able
like
allowed to behave and on these rules is
quite easy to develop all the rules for
the robot so it's kind of a very
rule-based and of course safety critical
environment
um so the real interaction thing is we
would normally imagine it with like
autonomous cars or something like that
at the moment don't really arise because
currently the systems are not really
self-controlled
if they are getting and that's a very
nice vision that we have developed
together with magna magna is
a car manufacturer in austria who
are using these autonomous ground
vehicles and want to use them in a
self-organized fleet
and here you start to have this
controllability of the
ai system because this is like the
system is
independently going to decide what it
wants to do and kind of self-organizing
what it's the next steps will be and
here and that's the point where
interaction gets more important that's
why we came into that project
we're not that much the like typical
robotics engineers we are much more on
the
yeah still rule-based interaction and
uh two things that we have inquired here
is like we have
a couple of different scenarios that we
wanted to look into
and one is definitely the close
interaction on the shop floor and here
is more or less the main question what
you have with human steered forklifters
like the interaction with humans walking
humans human steered forklifters
and atvs and here the main thing is i
want to know what the thing is going to
do next so that's why we came up with
this projected trajectory thing
um i think it gets more complicated if
you have mobile manipulators
because then you don't only have the
robot driving
but you also have an autonomous part
which is able to manipulate stuff
and this is going to make the stuff even
more interesting but we're not there yet
let's put it like that so it i hope that
answers your question
and i can share a couple of links if you
want with the case that magna envision
it
and they build also nice videos sorry
this is just a bit of the industry
domain they always make videos
um and i will share the smart factory
version of magna and this is quite
interesting actually
uh where they also see a couple of uh
and here without and that's maybe one of
the topics that i want to kind of really
raise
is um that compared to the traditional
manufacturing
uh to the traditional manufacturing from
the employees where
nobody really needed to take care that
much of the human
outside of safety um constraints
um now if the stuff is getting more and
more intelligent we really need to take
care about the interaction and this is
quite new for that field
i hope that answers your question that
does there's a lot probably
so thank you very much thanks thanks so
i saw that akari racism too and i'm
david
so yeah i i just wanted to say it's it's
amazing
yeah i like especially this example with
the
ground kind of wheeled robot yeah
because i'm myself doing uh
human av interaction with the thomas
vehicles so uh we also have this kind of
really nice uh analogy there i think but
uh yeah
my question was basically the same one
that luciano's but i want to elaborate
on that
uh you mentioned that you're interfacing
already now with the
autonomous vehicles uh industry and uh
well the way i understood it is that
they are trying to bring some of the
approaches
uh that they're using into the workflow
right but
i was uh also interested if uh some kind
of uh
ar vr based interfaces are already being
used uh
in autonomous vehicles uh interactions
interacting with humans
and uh yeah if you have any plans of
going there at all or maybe you just
know of any relevant work
yeah i'd be happy to look at the
references i have a literature survey
for that if you want i have a graduation
project
and you also can have our code and we
also have the fleet management code that
we published as students
you can also see if that helps you to a
certain extent
um and yeah let's just start to get in
contact because we're currently
proposing to
put the stuff into an age horizon europe
um
attempt together with magna because yes
i see that the market is increasing a
lot within the fts or far loser
transport statement that's the german
world
or the autonomous ground vehicles or
autonomous guided vehicles within the
factories i am not quite sure why they
took so long um because there were a lot
of systems already on the market
but there had been really a recent push
uh for that and new
also new standardization and these kind
of things if you want to elaborate
further on that we also have an
international working group together
with other universities
on the topic and if you want to
participate i'm always happy to have
a new people joining us there
okay sounds perfect let's let's get in
touch
cool all right thanks zakari
david
where's that button yes
top right no i hate to change between
different
uh systems and i'm always searching for
this two functions again
it's not working no maybe uh try the
button
no also not
oh okay you're gonna type the question
perfect
okay give him a keyboard you want to
talk to us
perfect interface
okay maybe also restarting sure
okay yeah how do we evaluate that that's
very they're very good
very very very very good question um so
um situational awareness is more or less
horrible to measure
and there are also some psychologists
that disagree with the concept overall
um it has been proven to be very well
helpful for the aviation industry and
also for
military contexts and it has been
applied a lot in the manufacturing world
i am mainly using um the
situation awareness rating technique
which is a very
brief questionnaire um but to put it a
bit on the context i like i don't do a
real sagat approach which is the the
original enslave approach
um but you saw that we for example use
this interruption technique that we are
interrupting on a specific point in the
interaction which people are seeing on
the internet in in the video
and on that point we are asking the
questions on
legibility related questions where we
don't have standardized questions so
like what do you think what robot will
do next
if you want we also can share the entire
study with you i haven't published it
yet but
yeah i should um but there is a
graduation
which has a couple of these kind of
tasks in there
and then we of course also use um
experience usability so there are also a
couple of standardized questions that
you can
questionnaire that you can use from the
usability side
and if you want to evaluate more into
depth for presence
related things if you really are in
an augmented reality or virtual reality
setting then you need to
um also evaluate on presence
and sometimes also really important to
measure the immersive tendencies
beforehand so i have a kind of set of
of questionnaires that i tend to use
including then also
task load you can use nasa tlx you can
also use other methods
and if you want i can if anybody
interested i can just share the
methodology that we are
kind of now using most of the time
and i want to add that i'm only using it
as a tool
and i'm not doing research on these
methods directly i'm just like using
them how they are recommended to be used
within the human factors domain
and i find it much more valuable to use
standardized questionnaires as much as
possible then you can really compare
also to other
applications if there are really
but for the manufacturing industry the
most of the questionnaires are not
really validated for that application
case so they are validated for
specific parts of the application cases
um
for example but the most of the ux
research is definitely
on let's say screen based interaction
and we
do much more than that right i found my
my microphone button again so um
so that that seems very uh let's say uh
really usability kind of questionnaires
and usability kind of uh
of methodologies i'm familiar with um
with all of them but uh so i was
wondering if you
look into well-being then those kind of
questionnaires they don't
typically address that and also if you
look into
the question that that concerns us at ai
tech which is uh
you know do people actually feel
responsible
when they work with these kind of uh
robotic systems you know that
uh then then also we need other things
than the the standardized questionnaire
so i was wondering if you could reflect
on those two elements there's a
well-being
and and responsibility over what
actually happens in the production
process
yeah so we found when we respect to the
well-being um
my chair is a professor peter fink who
is
uh his uh yeah more this special area is
comfort
and uh well-being within the context of
uh
flights but he originally comes from the
work economics domain
so within the work ergonomics there are
a lot of like measurements that you can
take
especially for example now for the
bicycle case we did a ruler analysis for
the current status
so this is also standardized method that
you can use for physical ergonomics
um and for in order to assess the
quality of the physical interaction
like before and after treatment let's
put it like that we use these kind of
standardized methods there are existing
comfort questionnaires for example i did
a study on
uh comfort on on the sense glove
stuff for example i think you know it
with with with then
and these people um and here we use
there are also some standardized
questionnaires that we're using there
and within cognitive ergonomics we work
together
with chalmers university and they have
done in tremendous work
on cognitive ergonomics within the field
of assembly
and they also have very nice methodology
which we applied for example also again
a cxi that's also a complexity index
which they derive from the
automation from the assembly tasks and
here we
look into more or less also
pre-treatment after treatment
so first for the analysis of a task for
the different levels of
automation and how the automatability
and then after we have finished but we
haven't done
a really finished task yet and
then the uh plan is to look into the
finished
the new version more or less and
compared it to complexity index and
perceived cognitive ergonomic factors
so i'm mainly relying on existing
methodology here
because i have the feeling there are a
lot of people doing great work there
which i can use then as a tool and and
then
rather kind of focus on making the stuff
work and seeing
if our design is really kind of
resulting in some improvement instead of
kind of doing research on the
methodology itself
if that answers your question uh partly
i mean i get your choice but um
but my question also was uh
basically in terms of responsibility
what do you what do you think
we could use should use what we need to
develop because it's not there
i understand you don't do it but but
yeah just reflect
on that so what we have is what we
definitely have is
um we use the virtual reality setting
for the bicycle in order to kind of
use this as a tool within a responsible
research and innovation approach with
claudia
from tbm to use more or less the vr
envision setting within a methodology
setting
of research responsible research and
innovation
and they have kind of a bunch of tools
on
making sure that the workers values are
well captured
and then embedded later on in the system
and that's something i find very
interesting and very relevant
another study that we do uh yeah sorry
nick i see it um
and the another study that i was doing
was asking
robot developers out in the industry
if they are considering human factors
and the end user at all
and because that's something where i
wanted to start in order to just
justify the need at first because there
are a lot of people within the robotics
domain which you might know and
better than i do but also in the
manufacturing domain who don't really
see the necessity yet
so my study what i was doing and i can
share the methodology with you if you
want
is to go out and ask robot manufacturers
in or robot builders within project
context
if they consider these kind of typical
user-centric approaches that we
use as a methodology and the answer is
basically no they don't have any clue
and they don't think about the end user
and i think that's the point where i
tackle this kind of responsible approach
if the developers don't care about it
then we cannot fix it like afterwards
that easily
good thank you great
thanks uh thanks everyone thanks doris
also for uh
for this really really inspiring talk
it's great and um
so yeah like i said in the chat i'll
talk i'll talk with you to see how we
can share these references that you
mentioned
the best with the people that are
interested and thank you very much for
uh for the for being here
everyone else for for uh for you for
your
attention and uh well we'll see you next
week
thanks so much and please send me an
invite innovation in invitation next
week i'm so curious to see
the presentation of your other guys
thanks definitely see you bye |
5f7a6d0a-b66e-4944-a198-fe46260b853b | trentmkelly/LessWrong-43k | LessWrong | Gettier walks into a bar, um, barrista
Gettier walks up to the counter. Before he can order, the Barrista confuses him for a regular and chirps “I know what you want.” By coincidence, Gettier ends up with exactly the drink he desired. (from Alvin Goldman, Epistemologist Extraordinaire) |
43424a83-24d6-4dcd-ab67-0ea65e154b65 | trentmkelly/LessWrong-43k | LessWrong | Critique of some recent philosophy of LLMs’ minds
I structure this post as a critique of some recent papers on the philosophy of mind in application to LLMs, concretely, on whether we can say that LLMs think, reason, understand language, refer to the real world when producing language, have goals and intents, etc. I also use this discussion as a springboard to express some of my views about the ontology of intelligence, agency, and alignment.
* Mahowald, Ivanova, et al., “Dissociating language and thought in large language models: a cognitive perspective” (Jan 2023). Note that this is a broad review paper, synthesising findings from computational linguistics, cognitive science, and neuroscience, as well as offering an engineering vision (perspective) of building an AGI (primarily, in section 5). I don’t argue with these aspects of the paper’s content (although I disagree with something about their engineering perspective, I think that engaging in this disagreement would be infohazarous). I argue with the philosophical content of the paper, which is revealed in the language that the authors use and the conclusions that they make, as well as the ontology of linguistic competencies that the authors propose.
* Shanahan, “Talking About Large Language Models” (Dec 2022).
Dissociating language and thought in large language models: a cognitive perspective
In this section, I shortly expose the gist of the paper by Mahowald, Ivanova, et al., for the convenience of the reader.
Abstract:
> Today’s large language models (LLMs) routinely generate coherent, grammatical and seemingly meaningful paragraphs of text. This achievement has led to speculation that these networks are—or will soon become—“thinking machines”, capable of performing tasks that require abstract knowledge and reasoning. Here, we review the capabilities of LLMs by considering their performance on two different aspects of language use: ‘formal linguistic competence’, which includes knowledge of rules and patterns of a given language, and ’functional lingu |
acebc0b8-a33b-41b0-af50-8817001b30b1 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [AN #135]: Five properties of goal-directed systems
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter.
Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-135)** (may not be up yet).
Please note that while I work at DeepMind, this newsletter represents my personal views and not those of my employer.
HIGHLIGHTS
===========
**[Literature Review on Goal-Directedness](https://www.alignmentforum.org/posts/cfXwr6NC9AqZ9kr8g/literature-review-on-goal-directedness)** *(Adam Shimi et al)* (summarized by Rohin): This post extracts five different concepts that have been identified in the literature as properties of goal-directed systems:
1. **Restricted space of goals:** The space of goals should not be too expansive, since otherwise goal-directedness can **[become vacuous](https://www.alignmentforum.org/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-imply-goal-directed-behavior)** (**[AN #35](https://mailchi.mp/bbd47ba94e84/alignment-newsletter-35)**) (e.g. if we allow arbitrary functions over world-histories with no additional assumptions).
2. **Explainability:** A system should be described as goal-directed when doing so improves our ability to *explain* the system’s behavior and *predict* what it will do.
3. **Generalization:** A goal-directed system should adapt its behavior in the face of changes to its environment, such that it continues to pursue its goal.
4. **Far-sighted:** A goal-directed system should consider the long-term consequences of its actions.
5. **Efficient:** The more goal-directed a system is, the more efficiently it should achieve its goal.
The concepts of goal-directedness, optimization, and agency seem to have significant overlap, but there are differences in the ways the terms are used.
The authors then compare multiple proposals on these criteria:
1. The *intentional stance* says that we should model a system as goal-directed when it helps us better explain the system’s behavior, performing well on explainability and generalization. It could easily be extended to include far-sightedness as well. A more efficient system for some goal will be easier to explain via the intentional stance, so it does well on that criterion too. And not every possible function can be a goal, since many are very complicated and thus would not be better explanations of behavior. However, the biggest issue is that the intentional stance cannot be easily formalized.
2. One possible formalization of the intentional stance is to say that a system is goal-directed when we can better explain the system’s behavior as maximizing a specific utility function, relative to explaining it using an input-output mapping (see **[Agents and Devices: A Relative Definition of Agency](https://arxiv.org/abs/1805.12387)** (**[AN #22](https://mailchi.mp/469203093ca3/alignment-newsletter-22)**)). This also does well on all five criteria.
3. **[AGI safety from first principles](https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ)** (**[AN #122](https://mailchi.mp/051273eb96eb/an-122arguing-for-agi-driven-existential-risk-from-first-principles)**) proposes another set of criteria that have a lot of overlap with the five criteria above.
4. A **[definition based off of Kolmogorov complexity](https://www.alignmentforum.org/posts/dPmmuaz9szk26BkmD/vanessa-kosoy-s-shortform?commentId=Tg7A7rSYQSZPASm9s)** works well, though it doesn’t require far-sightedness.
**Rohin's opinion:** The five criteria seem pretty good to me as a description of what people mean when they say that a system is goal-directed. It is less clear to me that all five criteria are important for making the case for AI risk (which is why I care about a definition of goal-directedness); in particular it doesn’t seem to me like the explainability property is important for such an argument (see also **[this comment](https://www.alignmentforum.org/posts/EnN7cm3KaRrEAuWfa/comment-on-coherence-arguments-do-not-imply-goal-directed?commentId=CsRXodmiBfZ9wCZwr)**).
Note that it can still be the case that as a research strategy it is useful to search for definitions that satisfy these five criteria; it is just that in evaluating which definition to use I would choose the one that makes the AI risk argument work best. (See also **[Against the Backward Approach to Goal-Directedness](https://www.alignmentforum.org/posts/adKSWktLbxfihDANM/against-the-backward-approach-to-goal-directedness)**.)
TECHNICAL AI ALIGNMENT
=======================
ITERATED AMPLIFICATION
-----------------------
**[Factored Cognition sequence](https://www.lesswrong.com/s/xezt7HYfpWR6nwp7Z)** *(Rafael Harth)* (summarized by Rohin): The **[Factored Cognition Hypothesis](https://www.alignmentforum.org/posts/DFkGStzvj3jgXibFG/factored-cognition)** (**[AN #36](https://mailchi.mp/6751e45fbb48/alignment-newsletter-36)**) informally states that any task can be performed by recursively decomposing the task into smaller and smaller subtasks until eventually the smallest tasks can be done by a human. This sequence aims to formalize the hypothesis to the point that it can be used to argue for the outer alignment of (idealized versions of) **[iterated amplification](https://www.alignmentforum.org/posts/xKvzpodBGcPMq7TqE/supervising-strong-learners-by-amplifying-weak-experts)** (**[AN #40](https://mailchi.mp/b649f32b07da/alignment-newsletter-40)**) and **[debate](https://blog.openai.com/debate/)** (**[AN #5](https://mailchi.mp/0ae5d69de63b/alignment-newsletter-5)**).
The key concept is that of an *explanation* or *decomposition*. An explanation for some statement **s** is a list of other statements **s1, s2, … sn** along with the statement “(**s1** and **s2** and … and **sn**) implies **s**”. A *debate tree* is a tree in which for a given node **n** with statement **s**, the children of **n** form an explanation (decomposition) of **s**. The leaves of the tree should be statements that the human can verify. (Note that the full formalism has significantly more detail, e.g. a concept of the “difficulty” for the human to verify any given statement.)
We can then define an idealized version of debate, in which the first debater must produce an answer with associated explanation, and the second debater can choose any particular statement to expand further. The judge decides the winner based on whether they can confidently verify the final statement or not. Assuming optimal play, the correct (honest) answer is an equilibrium as long as:
**Ideal Debate Factored Cognition Hypothesis:** For every question, there exists a debate tree for the correct answer where every leaf can be verified by the judge.
The idealized form of iterated amplification is **[HCH](https://www.alignmentforum.org/posts/NXqs4nYXaq8q6dTTx/humans-consulting-hch)** (**[AN #34](https://mailchi.mp/f1947668b183/alignment-newsletter-34)**); the corresponding Factored Cognition Hypothesis is simply “For every question, HCH returns the correct answer”. Note that the *existence* of a debate tree is not enough to guarantee this, as HCH must also *find* the decompositions in this debate tree. If we imagine that HCH gets access to a decomposition oracle that tells it the right decomposition to make at each node, then HCH would be similar to idealized debate. (HCH could of course simply try all possible decompositions, but we are ignoring that possibility: the decompositions that we rely on should reduce or hide complexity.)
Is the HCH version of the Factored Cognition Hypothesis true? The author tends to lean against (more specifically, that HCH would not be superintelligent), because it seems hard for HCH to find good decompositions. In particular, humans seem to improve their decompositions over time as they learn more, and also seem to improve the concepts by which they think over time, all of which are challenging for HCH to do. On the other hand, the author is cautiously optimistic about debate.
**Rohin's opinion:** I enjoyed this sequence: I’m glad to see more analysis of what is and isn’t necessary for iterated amplification and debate to work, as well as more theoretical models of debate. I broadly agreed with the conceptual points made, with one exception: I’m not convinced that we should not allow brute force for HCH, and for similar reasons I don’t find the arguments that HCH won’t be superintelligent convincing. In particular, the hope with iterated amplification is to approximate a truly massive tree of humans, perhaps a tree containing around 2^100 (about 1e30) base agents / humans. At that scale (or even at just a measly billion (1e9) humans), I don’t expect the reasoning to look anything like what an individual human does, and approaches that are more like “brute force” seem a lot more feasible.
One might wonder why I think it is possible to approximate a tree with more base agents than there are grains of sand in the Sahara desert. Well, a perfect binary tree of depth 99 would have 1e30 nodes; thus we can roughly say that we’re approximating 99-depth-limited HCH. If we had perfect distillation, this would take 99 rounds of iterated amplification and distillation, which seems quite reasonable. Of course, we don’t have perfect distillation, but I expect that to be a relatively small constant factor on top (say 100x), which still seems pretty reasonable. (There’s more detail about how we get this implicit exponential-time computation in **[this post](https://www.alignmentforum.org/posts/DFkGStzvj3jgXibFG/factored-cognition)** (**[AN #36](https://mailchi.mp/6751e45fbb48/alignment-newsletter-36)**).)
MESA OPTIMIZATION
------------------
**[Defining capability and alignment in gradient descent](https://www.alignmentforum.org/posts/Xg2YycEfCnLYrCcjy/defining-capability-and-alignment-in-gradient-descent)** *(Edouard Harris)* (summarized by Rohin): Consider a neural network like GPT-3 trained by gradient descent on (say) the cross-entropy loss function. This loss function forms the *base objective* that the process is optimizing for. Gradient descent typically ends up at some local minimum, global minimum, or saddle point of this base objective.
However, if we look at the gradient descent equation, θ = θ - αG, where G is the gradient, we can see that this is effectively minimizing the size of the gradients. We can think of this as the mesa objective: the gradient descent process (with an appropriate learning rate decay schedule) will eventually get G down to zero, its minimum possible value (even though it may not be at the global minimum for the base objective).
The author then proposes defining capability of an optimizer based on how well it decreases its loss function in the limit of infinite training. Meanwhile, given a base optimizer and mesa optimizer, alignment is given by the capability of the base optimizer divided by the capability of the mesa optimizer. (Since the mesa optimizer is the one that actually acts, this is effectively measuring how much progress on the mesa objective also causes progress on the true base objective.)
This has all so far assumed a fixed training setup (such as a fixed dataset and network architecture). Ideally, we would also want to talk about robustness and generalization. For this, the author introduces the notion of a “perturbation” to the training setup, and then defines [capability / alignment] [robustness / generalization] based on whether the optimization stays approximately the same when the training setup is perturbed.
It should be noted that these are all definitions about the behavior of optimizers in the infinite limit. We may also want stronger guarantees that talk about the behavior on the way to the infinite limit.
LEARNING HUMAN INTENT
----------------------
**[Imitating Interactive Intelligence](https://arxiv.org/abs/2012.05672)** *(Interactive Agents Group et al)* (summarized by Rohin): While **[existing](https://arxiv.org/abs/1806.01946)** (**[AN #11](https://mailchi.mp/7ad36e25be22/alignment-newsletter-11)**) **[work](https://arxiv.org/abs/2005.09382)** (**[AN #103](https://mailchi.mp/60475c277263/an-103-arches-an-agenda-for-existential-safety-and-combining-natural-language-with-deep-rl)**) has trained agents to follow natural language instructions, it may be the case that achieving AGI requires more interactivity: perhaps we need to train agents to both give and follow instructions, or engage in a full dialogue, to accomplish tasks in a 3-D embodied environment. This paper makes progress on this goal.
The authors introduce a 3-D room environment in which agents can interact with objects and move them around, leading to a combinatorial space of possible high-level actions. So far the authors have only worked on question-answering (e.g. “what is the color of the chair?”) and instruction-following (e.g. “please lift up the purple object”), but they hope to eventually also work on dialogue and play.
They collect demonstrations of games between humans in which one human is given a goal, and then is asked to give a natural language instruction. The other human sees this instruction and must then execute it in the environment. The authors then use various kinds of imitation learning algorithms to learn a policy that can both set instructions and execute them. They also train models that can evaluate whether a particular trajectory successfully completes the goal or not.
The authors show that the learned policies are capable of some generalization -- for example, if during training they remove all rooms containing orange ducks (but don’t remove other orange objects, or other colors of duck), the resulting policies are still able to handle rooms containing orange ducks.
**Read more:** **[Probing Emergent Semantics in Predictive Agents via Question Answering](https://arxiv.org/pdf/2012.05672.pdf)**
ROBUSTNESS
-----------
**[Evaluating the Robustness of Collaborative Agents](http://arxiv.org/abs/2101.05507)** *(Paul Knott et al)* (summarized by Rohin): Assuming a well-specified reward function, we would like to evaluate robustness of an agent by looking at the average reward it obtains on a wide scenario of plausible test time inputs that it might get. However, the key challenge of robustness is that it is hard to specify the test distribution in advance, and we must work with the training distribution instead.
This paper (on which I am an author) proposes *measuring* robustness using a suite of hand-designed *unit tests*. Just as a function is tested by having the programmer write down potential edge cases and checking for the expected behavior, AI developers can come up with a set of potential “edge case” situations (especially ones not likely to arise during training) and check whether the agent’s behavior on these situations works well or not. Intuitively, since these unit tests are created separately from the training process, they may not have the same spurious correlations that could be present in the training data. Thus, they can serve as an evaluation of the robustness of the agent.
The authors built a test suite for **[Overcooked](https://bair.berkeley.edu/blog/2019/10/21/coordination/)** (**[AN #70](https://mailchi.mp/732eaa192df0/an-70-agents-that-help-humans-who-are-still-learning-about-their-own-preferences)**), and use it to evaluate several techniques aimed to improve the robustness of agents trained to collaborate with humans.
For example, one technique is to start each episode from a state sampled randomly from a dataset of human-human gameplay, so that the agents learn how to handle a broader diversity of states. This technique *decreases* the average *validation* reward, and if that’s all we look at, we would conclude that it did not work. However, the technique also *increases* performance on the unit test suite, suggesting that in reality the technique does increase robustness, though it comes at the cost of reduced performance when playing with the particular set of partners that make up the validation distribution.
AI GOVERNANCE
==============
**[Bridging the Gap: The Case for an ‘Incompletely Theorized Agreement’ on AI Policy.](https://link.springer.com/article/10.1007%2Fs43681-020-00037-w)** *(Charlotte Stix et al)* (summarized by Rohin): Like **[several](https://www.cser.ac.uk/resources/beyond-near-long-term/)** (**[AN #90](https://mailchi.mp/0180cf9ba264/an-90how-search-landscapes-can-contain-self-reinforcing-feedback-loops)**) **[past](https://www.nature.com/articles/s42256-018-0003-2)** (**[AN #44](https://mailchi.mp/6bfac400a0c3/alignment-newsletter-44)**) **[papers](https://www.mdpi.com/2078-2489/11/6/290/htm)** (**[AN #105](https://mailchi.mp/be2a0d160fa2/an-105-the-economic-trajectory-of-humanity-and-what-we-might-mean-by-optimization)**), this paper argues that the differences between the “near-term” and “long-term” communities are probably exaggerated. Collaboration between these communities would be particularly beneficial, since it could prevent the field of AI policy from becoming fragmented and ineffectual, which is especially important now while the field is nascent and there is political will for AI policy progress.
The authors propose the notion of an “incompletely theorized agreement” in order to foster this sort of collaboration. In an incompletely theorized agreement, the parties agree to suspend disagreement on some thorny theoretical question, in order to coordinate action towards a shared pragmatic purpose. Such agreements could be used to set aside relatively unimportant disagreements between the two communities, in favor of pursuing goals that both communities care about. For example, we could imagine that such an agreement would allow both communities to push for more and better reflection by AI researchers on the impacts of the systems that they build, or to enable action that ensures we preserve the integrity of public discourse and informed decision-making (e.g. by regulating AI-enabled disinformation).
**Rohin's opinion:** I’m certainly on board with the goal of working together towards shared goals. That being said, I don't fully understand what's being proposed here: how exactly is an incompletely theorized agreement supposed to be made? Is this more of a “shared ethos” that gets spread by word of mouth, or is there a document that people sign on to? If there is a document, what goes into it, who would agree to it, and how binding is it? I’d be excited to see more work fleshing out these concrete details, or even better, actually causing such an agreement to exist in practice.
#### **FEEDBACK**
I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**.
#### **PODCAST**
An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**. |
b0fe3033-885d-4e17-9f27-c3a722bf4598 | trentmkelly/LessWrong-43k | LessWrong | Ordinary People and Extraordinary Evil: A Report on the Beguilings of Evil
In Ordinary People and Extraordinary Evil: A Report on the Beguilings of Evil (1993), Fred E. Katz begins where Hannah Arendt’s examination of the banality of evil ended. Katz tries to apply the techniques of sociology to the question of how ordinary people, without deliberate evil intent, commit horrendous deeds.
Katz himself narrowly escaped the massacre of Jews in Nazi-occupied Europe. When he returned to his former village after the war, he heard the villagers explain their passivity or collaboration during the Nazi persecutions by using the same language they used at the time: “There is nothing we could do about it. We are just little people. It’s the government.”
But he noticed that the village had erected a plaque in honor of the boys and men who had died fighting for the Axis, and remarks that it was just this loyalty and willingness to serve that doomed the victims of the Nazi era.
> Yet some little people, in some little villages, did do something about it. They hid some of these hounded people. They fed some of these hounded people. They helped some of these hounded people escape.
>
> During the visit to my village I found out that there had been one exception to the pattern of passively leaving Jews to the evil deeds of the Nazi government: A lone woman stood by Jews. She brought them food. She talked with them. She did not join in the distancing by the rest of the villagers. But she was not able to save anyone or offer much protection. She said to me, concerning the Nazis, “what they did was not right.” And she wept.
>
> Despite such exceptional human beings, the Nazi-German government achieved its objectives of carrying out massive evil because it had the help of a multitude of “the little people,” who paid their taxes, sent their sons to the front, and closed their eyes to the savaging of innocent people in their midst.
How do ordinary people, who largely profess good values, and who have no particular interest in doing evil things, nonetheless |
0fdd7745-4a5f-43ec-af20-8dfaeee53ac2 | trentmkelly/LessWrong-43k | LessWrong | Self-confirming predictions can be arbitrarily bad
Predicting perverse donors
There is a rich donor who is willing to donate up to £2,000,000 to your cause. They’ve already written a cheque for £1,000,000, but, before they present it to you, they ask you to predict how much they'll be donating.
The donor is slightly perverse. If you predict any amount £P, they’ll erase their cheque and write £(P-1) instead, one pound less than what your predicted.
Then if you want your prediction to be accurate, there’s only one amount you can predict: £P=£0, and you will indeed get nothing.
Suppose the donor was perverse in a more generous way, and they’d instead write £(P+1), one more than your prediction, up to their maximum. In that case, the only accurate guess is £P=£2,000,000, and you get the whole amount.
If we extend the range above £2,000,000, or below £0 (maybe the donor is also a regulator, who can fine you) then the correct predictions get ever more extreme. It also doesn’t matter if the donor subtracts or adds £1, £100, or one pence (£0.01): the only accurate predictions are at the extreme of the range.
Greek mythology is full of oracular predictions that only happened because people took steps to avoid them. So there is a big difference between “prediction P is true”, and “prediction P is true even if P is generally known”.
Continuity assumption
A prediction P is self-confirming if, once P is generally known, then P will happen (or P is the expectation of what will then happen). The previous section has self-confirming predictions, but these don’t always exist. They exist when the outcome is continuous in the prediction P (and a few technical assumptions, like the outcome taking values in a closed interval). If that assumption is violated, then there need not be any self-confirming prediction.
For example, the generous donor could give £(P+1), except if you ask for too much (more than £1,999,999), in which case you get nothing. In that case, there is no correct prediction £P (the same goes for the £(P-1) donor |
8de26b31-b72d-4b52-9807-fc4ad6937e09 | trentmkelly/LessWrong-43k | LessWrong | Need Help With Decision
Update 4: Thanks to all who attended. Good discussion.
Update 3: Meeting is at irc.freenode.net. Join channel youonlylivetwice. I'm there now. Will try to start by 7pm, but if we get enough we might start early.
Update 2: Meeting is tonight (Aug 18). See my comment below if you plan to attend.
Update 1: I'm in Pacific time zone (UTC - 7 hours).
In the next few days, I have to make a decision that will affect the rest of my life. I want to find Third Alternatives and go over my thinking on solutions already proposed.
I need your help.
I would post here if I could, but it’s just too private, and I don’t have the consent of everyone involved. So I would like to convene a working group.
I propose we meet either tonight or tomorrow. (Right now, I'm thinking IRC, but I am open to suggestions on that.) I will try to keep things short. I wish there was more time, but time is precisely what I don’t have.
If you can help, please reply below in the comments or message me privately. I would like at least one participant to be an individual with a diagnosis of Asperger’s or autism.
Thanks. |
6d16b01a-ac20-4527-b41a-1faae9b7ca30 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | [AN #109]: Teaching neural nets to generalize the way humans would
Alignment Newsletter is a weekly publication with recent content relevant to AI alignment around the world. Find all Alignment Newsletter **[resources here](http://rohinshah.com/alignment-newsletter/)**. In particular, you can look through **[this spreadsheet](https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit?usp=sharing)** of all summaries that have ever been in the newsletter.
Audio version **[here](http://alignment-newsletter.libsyn.com/alignment-newsletter-109)** (may not be up yet).
HIGHLIGHTS
==========
**[Better priors as a safety problem](https://www.alignmentforum.org/posts/roA83jDvq7F2epnHK/better-priors-as-a-safety-problem)** and **[Learning the prior](https://www.alignmentforum.org/posts/SL9mKhgdmDKXmxwE4/learning-the-prior)** *(Paul Christiano)* (summarized by Rohin): Any machine learning algorithm (including neural nets) has some inductive bias, which can be thought of as its “prior” over what the data it will receive will look like. In the case of neural nets (and any other general ML algorithm to date), this prior is significantly worse than human priors, since it does not encode e.g. causal reasoning or logic. Even if we avoid priors that depended on us previously seeing data, we would still want to update on facts like “I think therefore I am”. With a better prior, our ML models would be able to learn more sample efficiently. While this is so far a capabilities problem, there are two main ways in which it affects alignment.
First, as argued in **[Inaccessible information](https://alignmentforum.org/posts/ZyWyAJbedvEgRT2uF/inaccessible-information)** (**[AN #104](https://mailchi.mp/ba4d1765368f/an-104-the-perils-of-inaccessible-information-and-what-we-can-learn-about-ai-alignment-from-covid)**), the regular neural net prior will learn models which can predict accessible information. However, our goals depend on inaccessible information, and so we would have to do some “extra work” in order to extract the inaccessible information from the learned models in order to build agents that do what we want. This leads to a competitiveness hit, relative to agents whose goals depend only on accessible information, and so during training we might expect to consistently get agents whose goals depend on accessible information instead of the goals we actually want.
Second, since the regular neural net prior is so weak, there is an incentive to learn a better prior, and then have that better prior perform the task. This is effectively an incentive for the neural net to learn a **[mesa optimizer](https://arxiv.org/abs/1906.01820)** (**[AN #58](https://mailchi.mp/92b3a9458c2d/an-58-mesa-optimization-what-it-is-and-why-we-should-care)**), which need not be aligned with us, and so would generalize differently than we would, potentially catastrophically.
Let’s formalize this a bit more. We have some evidence about the world, given by a dataset D = {(x1, y1), (x2, y2), ...} (we assume that it’s a prediction task -- note that most self-supervised tasks can be written in this form). We will later need to make predictions on the dataset D' = {x1', x2', …}, which may be from a “different distribution” than D (e.g. D might be about the past, while D' is about the future). We would like to use D to learn some object Z that serves as a “prior”, such that we can then use Z to make good predictions on D'.
The standard approach which we might call the “neural net prior” is to train a model to predict y from x using the dataset D, and then apply that model directly to D', hoping that it transfers correctly. We can inject some human knowledge by finetuning the model using human predictions on D', that is by training the model on {(x1', H(x1')), (x2', H(x2')), …}. However, this does not allow H to update their prior based on the dataset D. (We assume that H cannot simply read through all of D, since D is massive.)
What we’d really like is some way to get the predictions H would make if they could update on dataset D. For H, we’ll imagine that a prior Z is given by some text describing e.g. rules of logic, how to extrapolate trends, some background facts about the world, empirical estimates of key quantities, etc. I’m now going to talk about priors over the prior Z, so to avoid confusion I’ll now call an individual Z a “background model”.
The key idea here is to structure the reasoning in a particular way: H has a prior over background models Z, and then *given Z*, H’s predictions for any given x\_i are independent of all of the other (x, y) pairs. In other words, once you’ve fixed your background model of the world, your prediction of y\_i doesn’t depend on the value of y\_j for some other x\_j. Or to explain it a third way, this is like having a set of hypotheses {Z}, and then updating on each element of D one by one using Bayes Rule. In that case, the log posterior of a particular background model Z is given by log Prior(Z) + sum\_i log P(y\_i | x\_i, Z) (neglecting a normalization constant).
The nice thing about this is the individual terms Prior(Z) and P(y\_i | x\_i, Z) are all things that humans can do, since they don’t require the human to look at the entire dataset D. In particular, we can learn Prior(Z) by presenting humans with a background model, and having them evaluate how likely it is that the background model is accurate. Similarly, P(y\_i | x\_i, Z) simply requires us to have humans predict y\_i under the assumption that the background facts in Z are accurate. So, we can learn models for both of these using neural nets. We can then find the best background model Z-best by optimizing the equation above, representing what H would think was the most likely background model after updating on all of D. We can then learn a model for P(yi' | xi', Z-best) by training on human predictions of yi' *given access to Z-best*.
This of course only gets us to human performance, which requires relatively small Z. If we want to have large background models allowing for superhuman performance, we can use iterated amplification and debate to learn Prior(Z) and P(y | x, Z). There is some subtlety about how to represent Z that I won’t go into here.
**Rohin's opinion:** It seems to me like solving this problem has two main benefits. First, the model our AI system learns from data (i.e. Z-best) is interpretable, and in particular we should be able to extract the previously inaccessible information that is relevant to our goals (which helps us build AI systems that actually pursue those goals). Second, AI systems built in this way are incentivized to generalize in the same way that humans do: in the scheme above, we learn from one distribution D, and then predict on a new distribution D', but every model learned with a neural net is only used on the same distribution it was trained on.
Of course, while the AI system is *incentivized* to generalize the way humans do, that does not mean it *will* generalize as humans do -- it is still possible that the AI system internally “wants” to gain power, and only instrumentally answers questions the way humans would answer them. So inner alignment is still a potential issue. It seems possible to me that whatever techniques we use for dealing with inner alignment will also deal with the problems of unsafe priors as a side effect, in which case we may not end up needing to implement human-like priors. (As the post notes, it may be much more difficult to use this approach than to do the standard “neural net prior” approach described above, so it would be nice to avoid it.)
TECHNICAL AI ALIGNMENT
======================
ITERATED AMPLIFICATION
----------------------
**[Alignment proposals and complexity classes](https://www.alignmentforum.org/posts/N64THGX7XNCqRtvPG/alignment-proposals-and-complexity-classes)** *(Evan Hubinger)* (summarized by Rohin): The original **[debate](https://blog.openai.com/debate/)** (**[AN #5](https://mailchi.mp/0ae5d69de63b/alignment-newsletter-5)**) paper showed that any problem in PSPACE can be solved by optimal play in a debate game judged by a (problem-specific) algorithm in P. Intuitively, this is an illustration of how the mechanism of debate can take a weak ability (the ability to solve arbitrary problems in P) and amplify it into a stronger ability (the ability to solve arbitrary problems in PSPACE). One would hope that similarly, debate would allow us to amplify a human’s problem-solving ability into a much stronger problem-solving ability.
This post applies this technique to several other alignment proposals. In particular, for each proposal, we assume that the “human” can be an arbitrary polynomial-time algorithm, and the AI models are optimal w.r.t their loss functions, and we ask which problems we can solve using these capabilities. The post finds that, as lower bounds, the various forms of amplification can access PSPACE, while **[market making](https://www.alignmentforum.org/posts/YWwzccGbcHMJMpT45/ai-safety-via-market-making)** (**[AN #108](https://mailchi.mp/05518aad6baf/an-108why-we-should-scrutinize-arguments-for-ai-risk)**) can access EXP. If there are untamperable pointers (so that the polynomial-time algorithm can look at objects of an arbitrary size, as long as it only looks at a polynomial-sized subset of them), then amplification and market making can access R (the set of decidable problems).
**Rohin's opinion:** In practice our models are not going to reach the optimal loss, and humans won’t solve arbitrary polynomial-time problems, so these theorems won’t directly apply to reality. Nonetheless, this does seem like a worthwhile check to do -- it feels similar to ensuring that a deep RL algorithm has a proof of convergence under idealized assumptions, even if those assumptions won’t actually hold in reality. I have much more faith in a deep RL algorithm that started from one with a proof of convergence and then was modified based on empirical considerations.
**[How should AI debate be judged?](https://www.alignmentforum.org/posts/m7oGxvouzzeQKiGJH/how-should-ai-debate-be-judged)** *(Abram Demski)* (summarized by Rohin): **[Debate](https://blog.openai.com/debate/)** (**[AN #5](https://mailchi.mp/0ae5d69de63b/alignment-newsletter-5)**) requires a human judge to decide which of two AI debaters should win the debate. How should the judge make this decision? The discussion on this page delves into this question in some depth.
HANDLING GROUPS OF AGENTS
-------------------------
**[What counts as defection?](https://www.alignmentforum.org/posts/8LEPDY36jBYpijrSw/formalizing-game-theoretic-defection)** *(Alex Turner)* (summarized by Rohin): We often talk about cooperating and defecting in general-sum games. This post proposes that we say that a player P has defected against a coalition C (that includes P) currently playing a strategy S when P deviates from the strategy S in a way that increases his or her own personal utility, but decreases the (weighted) average utility of the coalition. It shows that this definition has several nice intuitive properties: it implies that defection cannot exist in common-payoff games, uniformly weighted constant-sum games, or arbitrary games with a Nash equilibrium strategy. A Pareto improvement can also never be defection. It then goes on to show the opportunity for defection can exist in the Prisoner’s dilemma, Stag hunt, and Chicken (whether it exists depends on the specific payoff matrices).
FORECASTING
-----------
**[Environments as a bottleneck in AGI development](https://www.alignmentforum.org/posts/vqpEC3MPioHX7bv4t/environments-as-a-bottleneck-in-agi-development)** *(Richard Ngo)* (summarized by Rohin): Models built using deep learning are a function of the learning algorithm, the architecture, and the task / environment / dataset. While a lot of effort is spent on analyzing learning algorithms and architectures, not much is spent on the environment. This post asks how important it is to design a good environment in order to build AGI.
It considers two possibilities: the “easy paths hypothesis" that many environments would incentivize AGI, and the “hard paths hypothesis” that such environments are rare. (Note that “hard paths” can be true even if an AGI would be optimal for most environments: if AGI would be optimal, but there is no path in the loss landscape to AGI that is steeper than other paths in the loss landscape, then we probably wouldn’t find AGI in that environment.)
The main argument for “hard paths” is to look at the history of AI research, where we often trained agents on tasks that were “hallmarks of intelligence” (like chess) and then found that the resulting systems were narrowly good at the particular task, but were not generally intelligent. You might think that it can’t be too hard, since our environment led to the creation of general intelligence (us), but this is subject to anthropic bias: only worlds with general intelligence would ask whether environments incentivize general intelligence, so they will always observe that their environment is an example that incentivizes general intelligence. It can serve as a proof of existence, but not as an indicator that it is particularly likely.
**Rohin's opinion:** I think this is an important question for AI timelines, and the plausibility of “hard paths” is one of the central reasons that my timelines are longer than others who work on deep learning-based AGI. However, **[GPT-3](https://arxiv.org/abs/2005.14165)** (**[AN #102](https://mailchi.mp/2485e6b42012/an-102-meta-learning-by-gpt-3-and-a-list-of-full-proposals-for-ai-alignment)**) demonstrates quite a lot of generality, so recently I’ve started putting more weight on “actually, designing the environment won’t be too hard”, which has correspondingly shortened my timelines.
MISCELLANEOUS (ALIGNMENT)
-------------------------
**[Talk: Key Issues In Near-Term AI Safety Research](https://www.alignmentforum.org/posts/yijG7ptfqFBR8w885/talk-key-issues-in-near-term-ai-safety-research)** *(Aryeh Englander)* (summarized by Rohin): This talk points out synergies between long-term AI safety and the existing fields of assured autonomy, safety engineering, and testing, evaluation, verification and validation (TEV&V), primarily by showing how they fit into and expand DeepMind's framework of **[specification, robustness and assurance](https://medium.com/@deepmindsafetyresearch/building-safe-artificial-intelligence-52f5f75058f1)** (**[AN #26](https://mailchi.mp/1ecd1b775703/alignment-newsletter-26)**).
OTHER PROGRESS IN AI
====================
REINFORCEMENT LEARNING
----------------------
**[Using Selective Attention in Reinforcement Learning Agents](https://ai.googleblog.com/2020/06/using-selective-attention-in.html)** *(Yujin Tang et al)* (summarized by Sudhanshu): Recently winning a best paper award at GECCO 2020, this work marks a leap forward in the performance capabilities learned by small agents via evolutionary methods. Specifically, it shows that by jointly learning which small fraction of input to attend to, agents with only thousands of free parameters can be trained by an evolutionary strategy to achieve state-of-the-art performance in vision-based control tasks.
The key pieces include self-attention over input patches, non-differentiable top-K patch selection that effect 'inattentional blindness', and training via CMA-ES. By design, the agent is interpretable as the top-K patches that are selected can be examined. Empirically, the agent has 1000x fewer weights than a competing neural architecture, and the method shows robustness to changes in task-irrelevant inputs, as the agent learns to focus only on task-relevant patches.
**Read more:** **[Paper: Neuroevolution of Self-Interpretable Agents](https://attentionagent.github.io/)**
**Sudhanshu's opinion:** The parallelism afforded by evolutionary methods and genetic algorithms might be valuable in an environment where weak compute is plentiful, so it's exciting to see evidence of such methods besting GPU-hungry deep neural networks. However, I wonder how this would do on sparse reward tasks, where the fitness function is almost always uninformative. Finally, while it generalises to settings where there are task-irrelevant distractions, its deliberately sharp self-attention likely leaves it vulnerable to even simple adversarial attacks.
**[Improving Sample Efficiency in Model-Free Reinforcement Learning from Images](http://arxiv.org/abs/1910.01741)** *(Denis Yarats et al)* (summarized by Flo): Sample efficiency in RL can be improved by using off-policy methods that can reuse the same sample multiple times and by using self-supervised auxiliary losses that help with representation learning, especially when rewards are sparse. This work combines both approaches by proposing to learn a latent state representation using an autoencoder while jontly training an agent on that latent representation using **[SAC](https://ai.googleblog.com/2019/01/soft-actor-critic-deep-reinforcement.html)** (**[AN #42](https://mailchi.mp/f6488137d76c/alignment-newsletter-42)**). Previous work in the on-policy case shows a positive effect from propagating Actor-Critic gradients through the encoder to improve the usefulness of the encoding for policy learning. However, this destabilizes training in the off-policy case, as changing the encoding to facilitate the actor also changes the Q-function estimate, which in turn changes the actor's goal and can introduce nonstationarity. This problem is circumvented by only propagating the Q-network's gradients through the encoder while blocking the actor's gradients.
The method strongly outperforms SAC trained on pixels. It also matches the previous state-of-the-art set by model-based approaches on an image-based continuous control task and outperforms them for noisy observations (as these make dynamics models hard to learn). The authors also find that the learnt encodings generalize between tasks to some extent and that reconstructing the true environment state is easier using their latent representation than using a representation obtained by training SAC on pixels directly.
**Flo's opinion:** Methods like this that can benefit from seeing a lot of action-independent environment observations might be quite important for applying RL to the real world, as this type of data is a lot cheaper to generate. For example, we can easily generate a ton of observations from a factory by equipping workers with cameras, but state-action-next-state triples from a robot interacting with the factory are very costly to obtain.
#### **FEEDBACK**
I'm always happy to hear feedback; you can send it to me, **[Rohin Shah](https://rohinshah.com/)**, by **replying to this email**.
#### **PODCAST**
An audio podcast version of the **Alignment Newsletter** is available. This podcast is an audio version of the newsletter, recorded by **[Robert Miles](http://robertskmiles.com/)**. |
5c4dffb1-bde5-4da0-93f9-0e0f0e7a5e3c | trentmkelly/LessWrong-43k | LessWrong | Why is lesswrong blocking wget and curl (scrape)?
if there is no official lesswrong db/site archive for public posts, i'd like to be able to create my own with automated tools like wget, so that i can browse the site while offline. see Is there a lesswrong archive of all public posts?
wget and curl logs:
$ wget -mk https://www.lesswrong.com/
--2023-11-08 14:31:26-- https://www.lesswrong.com/
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
Resolving www.lesswrong.com (www.lesswrong.com)... 54.90.19.223, 44.213.228.21, 54.81.2.129
Connecting to www.lesswrong.com (www.lesswrong.com)|54.90.19.223|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
2023-11-08 14:31:26 ERROR 403: Forbidden.
Converted links in 0 files in 0 seconds.
$ curl -Lv https://www.lesswrong.com/
* Trying 54.81.2.129:443...
* Connected to www.lesswrong.com (54.81.2.129) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: none
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN: server accepted h2
* Server certificate:
* subject: CN=lesswrong.com
* start date: Sep 8 00:00:00 2023 GMT
* expire date: Oct 6 23:59:59 2024 GMT
* subjectAltName: host "www.lesswrong.com" matched cert's "www.lesswrong.com"
* issuer: C=US; O=Amazon; CN=Amazon RSA 2048 M02
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://www.lesswrong.com/
* [HTTP/2] [1] [:method: GET]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: www.lesswrong.com]
* [HTTP/2] [1] [:path: /]
* [ |
0306e320-ee7a-48d6-84ff-8157b32687ae | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Countersignaling can backfire if your audience doesn't have enough information about you to start with. For some traits, it's especially dangerous, because you're likely to do it for traits you don't have the credibility to countersignal at all based on a misunderstanding of your relation to the general population.
Countersignaling is "showing off by not showing off" - you understate, avoid drawing attention to, or otherwise downplay your communications of and about some valuable trait you have, because a) you are sure you won't be mistaken for someone with very poor characteristics in that area, and b) signaling could make you look like a merely medium-grade specimen. (Actual medium-grade specimens have to signal to distinguish themselves from low-quality ones.) For instance, if you are so obviously high-status that no one could possibly miss it, it may be both unnecessary and counterproductive to signal status, because this would let others conflate you with mid-status people. So you can show up in a t-shirt and jeans instead of formal wear. If you are so obviously brilliant that no one could possibly think you're some crackpot who wandered in off the street, you can afford to rave a little, while people who have to prove their smarts will find it expedient to keep calm and measured in their communication.
In homogeneous communities, or in any situation where you are well-known, countersignaling is effective. Your traits exceeding some minimum threshold is assumed where everyone's traits so exceed, and so failing to signal is unlikely to give anyone the impression that you have somehow managed to be the only person in the room who is deficient. If you're personally acquainted with the people around whom you attempt countersignaling, your previous signals (or other evidence to the effect that you are awesome) will already have accumulated. It's not necessary to further prove yourself. In other words, if your audience's prior for you being medium-or-good is high enough, then your not signaling is evidence in favor of good over medium; if their prior for your being medium-or-low is too high, then your not signaling is instead evidence in favor of low over medium.
But there are some things you can't effectively countersignal.
Or rather, there are some things that you can't effectively countersignal to some people. The most self-deprecating remarks about your positive qualities, spoken to your dear friends who know your most excellent traits like the backs of their own hands, will be interpreted "correctly", no matter what they're about. For instance, when I explained my change in life plans to people who are very familiar with me, I was able to use the phrasing "I'm dropping out of school to join a doomsday cult"1 because I knew this sounded so unlike me that none of them would take it at face value. Alicorn wouldn't really join a doomsday cult; it must be something else! It elicited curiosity, but not contempt for my cult-joining behavior. To more distant acquaintances, I used the less loaded term "nonprofit". I couldn't countersignal my clever life choices to people who didn't have enough knowledge of my clever life choices; so I had to rely on the connotation of "nonprofit" rather than playing with the word "cult" for my amusement.
Similar to close personal connection, people in a homogeneous environment can readily understand one another's countersignals. Someone who has joined the same cult as me isn't going to get the wrong idea if I call it that, even without much historical data about how sensible I generally am in choosing what comes next in my life. But in the wider world where people really do join real cults that really have severely negative features, there's no way to tell me apart from someone who's joined one of those and might start chanting or something any moment. I would not announce that I had joined a cult when explaining to a TSA agent why I was flying across the country.
The trouble is that it's easy to think one's positive traits are so obvious that no one could miss them when really they aren't. You are not as well known as you think you should be. Your countersignals are more opaque than you think they are. If you tell a stranger you've joined a cult, they will probably think you actually joined a cult.
Here's an example at work: in a homogeneous group of white liberals, talking casually about assorted minority races is commonplace if race is going to be discussed at all. Everybody present knows that the group is a homogeneous group of white liberals. Nobody has reason to suspect that anyone in the room has ever been disposed to practice overt racism of any kind, and odds are that no one in the group is well-informed enough about implicit biases to suspect covert racism (even though that's almost certainly present). So people in the group can countersignal their lack of racism to each other with the loose, casual talk, making generalizations when it's convenient. Nobody listening will take them for "real" racists. And being hyper-concerned with political correctness would make one seem concerned with being racist - it would look like one considered oneself to be in some kind of danger, which doesn't speak kindly of how well one is doing to begin with.
But to an outside observer - especially one who is informed about implicit biases, or has personal experiences with how ineffectively people screen off casual attitudes and prevent them from causing bad behavior - feeling that one is in this kind of danger, and speaking carefully to reflect that, is the best-case scenario. To an outside observer, the homogeneous group of white liberals cannot credibly countersignal, because there are too many people who look just like them and talk just like them and don't have the lovely qualities they advertise by acting confidently. In the general population, loose race talk is more likely to accompany racism than non-racism, and non-racism is more likely to accompany political correctness than loose race talk. The outside observer can't separate the speaker from the general population and has to judge them against those priors, not local, fine-tuned priors.
So to sum up, countersignaling is hazardous when your audience can't separate you from the general population via personal acquaintance or context. But often, you aren't as different from the general population as you think (even if your immediate audience, like you, thinks you are). Or, the general population is in poorer shape than you suspect (increasing the prior that you're in a low-quality tier for the quality you might countersignal). Therefore, you should prudentially exercise caution when deciding when to be uncautious about your signals. 1I am visiting the Singularity Institute."
What you see above is a segment from a LessWrong user's post.
Can you elaborate on the potential dangers of countersignaling in an environment where the audience's prior knowledge of the communicator is limited or non-existent?
Certainly, countersignaling is a nuanced form of communication that relies heavily on the audience's pre-existing knowledge about the communicator. When used effectively, it can reinforce a communicator's status or credibility by subtly implying that they don't need to overtly signal their qualities. However, in environments where the audience's prior knowledge of the communicator is limited or non-existent, countersignaling can be risky and lead to misinterpretations.
Here are some potential dangers:
1. Misinterpretation: Without prior knowledge about the communicator, the audience may not recognize the countersignaling for what it is. They may take the communicator's words or actions at face value, leading to misunderstandings. For example, if a highly skilled professional downplays their expertise in a field to a new audience, they may be seen as less competent or knowledgeable than they actually are.
2. Negative assumptions: If the audience misinterprets countersignaling, they may make negative assumptions about the communicator. This can damage the communicator's reputation or credibility. For example, if a successful businessperson countersignals by dressing casually for a meeting, a new audience might assume they are unprofessional or lack seriousness.
3. Lost opportunities: If the audience doesn't recognize the countersignaling and underestimates the communicator's abilities or qualities, it could lead to lost opportunities. For instance, the communicator might be overlooked for promotions, partnerships, or other beneficial arrangements.
4. Confusion: Countersignaling can be confusing to an audience that isn't familiar with the communicator. This can disrupt communication and hinder the establishment of rapport or trust.
5. Reinforcing harmful stereotypes or biases: In some cases, countersignaling can unintentionally reinforce harmful stereotypes or biases. For example, if a person from a marginalized group countersignals to downplay their identity, it might reinforce stereotypes that people from that group are less capable or deserving of respect.
In summary, while countersignaling can be a powerful tool in communication, it requires a careful understanding of the audience's prior knowledge and perceptions. Without this, countersignaling can lead to misinterpretations, negative assumptions, lost opportunities, confusion, and the reinforcement of harmful stereotypes or biases.
How might a communicator strategically employ countersignaling in a new environment to avoid these potential dangers, while still maintaining authenticity and effectiveness in their communication?
Strategically employing countersignaling in a new environment requires a careful balance. Here are some strategies that can be employed:
1. Gradual Introduction: Rather than jumping straight into countersignaling, a communicator could first establish their credibility and then gradually introduce countersignaling. This allows the audience to understand the communicator's abilities and qualities before they start downplaying them.
2. Tailoring Communication: The communicator should tailor their communication to their audience. This means understanding the audience's knowledge, expectations, and cultural norms. If the audience is likely to misinterpret countersignaling, it may be best to avoid it.
3. Clear Context: Providing clear context can help prevent misinterpretation. For example, if a successful businessperson dresses casually for a meeting, they could explain that they prefer comfort over formality, not that they don't take their work seriously.
4. Use Humor: Humor can be a useful tool for countersignaling. It can lighten the mood and make the communicator seem more approachable. However, it's important to use humor appropriately and avoid jokes that could be offensive or misunderstood.
5. Self-awareness: The communicator should be self-aware and understand how their actions might be perceived. This can help them avoid countersignaling that could be misinterpreted or lead to negative assumptions.
6. Transparency: If the situation allows, being transparent about the use of countersignaling can help. The communicator can explain that they are downplaying certain qualities not because they lack them, but because they don't feel the need to constantly prove them.
7. Use of Non-verbal Cues: Non-verbal cues can also be used effectively to countersignal. For example, a highly skilled professional might dress casually but carry themselves with confidence and authority, subtly indicating their competence.
By carefully considering the audience and context, and by using strategies like these, a communicator can effectively employ countersignaling in a new environment while maintaining authenticity in their communication. |
6744d87f-9de7-4766-b831-d01bc8c87e8c | StampyAI/alignment-research-dataset/aisafety.info | AI Safety Info | What are "pivotal acts"?
[Pivotal acts](https://arbital.greaterwrong.com/p/pivotal/) are hypothetical actions taken by AI that substantially change for the better where humanity will be in, say, a billion years. The term can be thought of as an opposite of [existential catastrophe](/?state=89LL&question=What%20are%20existential%20risks%20(x-risks)%3F).
Pivotal acts were proposed as a way for researchers to buy sufficient time to completely solve AI alignment. The hope is that it might be easier to [align an AI](/?state=8EL9&question=What%20is%20AI%20alignment%3F) that could take this limited action than it is to solve the complete alignment problem. [It has been stressed that pivotal acts do not involve direct human action](https://www.facebook.com/yudkowsky/posts/pfbid0QQFqDiLzg3SsPWuCDnHZ5p6ak7XbKPXRUV7nstQQcfdtW1uZdsgjihYb6oHQp48El).
The problem of designing an AI to carry out a minimal pivotal act can be viewed as a limited formulation of the alignment problem: is it possible to give precise enough instructions to an AI powerful enough to do something (without unwanted side effects) which would *actually* prevent other people from deploying an unaligned AI? The problem is that, at first sight, we won’t be able to design a safe limited AI that is actually able to do something that makes a long term difference (see [numbers 5-7 on this list](https://www.alignmentforum.org/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities#Section_A_)).
When [MIRI](https://intelligence.org/) talk about this problem, they often use the “strawberry task” as an example of the level of power needed for a pivotal act. The strawberry task involves producing two strawberries that are identical at the cellular level (but not at the molecular level) and then ceasing all action. If we had an alignment technique which could reliably get an AI to achieve this task *with* *no unwanted side effects***,** then that AI could plausibly be used for a pivotal act.
The key here is that you want to build a system that is:
1. aligned so well that it does exactly what you wanted it to do;
1. aligned so well that it doesn't do anything you *don't* want it to do;
1. powerful enough to do something sufficiently complex to be impactful (but obviously not so powerful that alignment is intractable).
For a critical view, [Andrew Critch argues against this strategy of designing an AI to take a unilateral “pivotal act”](https://www.alignmentforum.org/posts/Jo89KvfAs9z7owoZp/pivotal-act-intentions-negative-consequences-and-fallacious) since it will lead to distrust, increase conflict and fuel the race between different AI labs.
|
eeae2c06-ea91-4585-adc7-c2068b8f6ea8 | trentmkelly/LessWrong-43k | LessWrong | Solving the Doomsday argument
The Doomsday argument gives an anthropic argument for why we might expect doom to come reasonably soon. It's known that the Doomsday argument works under SSA, but not under SIA.
Ok, but since different anthropic probability theories are correct answers to different questions, what are the question versions of the Doomsday argument, and is the original claim correct?
No Doomsday on birth rank
Simplify the model into assuming there is a large universe (no Doomsday any time soon) with many, many future humans, and a small one (a Doomsday reasonably soon - within the next 200 billion people, say), with equal probability. In order to think in terms of frequencies, which comes more naturally to humans, we can imagine running the universe many, many times, each with the 50% Doomsday chance.
There are roughly a 108.5 billion humans who have ever lived. So, asking:
* What proportion of people with birth rank 108.5 billion, live in a small universe (with a Doomsday reasonably soon)?
The answer to that question converges to 1/2, the SIA probability. Half of the people with that birth rank live in small universes, half in large universes.
Doomsday for time travellers
To get an SSA version of the problem, we can ask:
* What proportion of universes, where a randomly selected human has a birthrank of 108.5 billion, will be small (with a Doomsday reasonably soon)?
This will give an answer close to 1 as it converges on the SSA probability.
But note that this is generally not the question that the Doomsday argument is posing. If there is a time traveller who is choosing people at random from amongst all of space and time - then if they happen to choose you, that is a bad sign for the future (and yet another reason you should go with them). Note that this is consistent with conservation of expected evidence: if the time traveller is out there but doesn't choose you, then this a (very mild) update towards no Doomsday.
But for the classical non-time-travel situation, the D |
c9efd9bd-5be0-4011-b2ab-9d9c624694a7 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Acausal trade: double decrease
.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
*A putative new idea for AI control; index [here](https://agentfoundations.org/item?id=601)*.
Other posts in the series: [Introduction](https://agentfoundations.org/item?id=1465), [Double decrease](https://agentfoundations.org/item?id=1463), [Pre-existence deals](https://agentfoundations.org/item?id=1471), [Full decision algorithms](https://agentfoundations.org/item?id=1466), [Breaking acausal trade](https://agentfoundations.org/item?id=1480), [Trade in different types of utility functions](https://agentfoundations.org/item?id=1464), [Being unusual](https://agentfoundations.org/item?id=1404), and [Summary](https://agentfoundations.org/item?id=1482).
A reminder that we won't be looking at any "[utilities I might have had before I knew who I was](https://en.wikipedia.org/wiki/Veil_of_ignorance)" scenarios.
This post is for illustrating a point about acausal trade: weakening the acausal trade network(s) in any way tends to reduce acausal trade more than linearly, as the traders cut back further on their trading. And the converse for strengthening the acausal trade network(s).
---
How to weaken the network
=========================
How could the acausal trade network be weakened? In potentially many ways. Greater uncertainty about the existence or the utilities of other agents, for instance. More agents who might defect from the trade, not have right utility function, or with who you can't reach a deal because of negotiation breakdown.
Basically anything that lowers the expected number of agents acausally trading with you - and also causes those agents to similarly have a lower expectation on the number of agents trading with you.
Illustration
============
Take the case where N=2, so there are only two possible agents, you (A1) and one other (A2), with utilities u1 and u2 respectively. Both agents are sure to exist, so q1=q2=1.
Trade can't happen unless there is some gain from trade - if it costs you more (in terms of u1) to increase u2, than the gain in u1 that the other agent is capable of giving you in exchange, then there is no trade that can happen.
So suppose you can increase u2 quite easily initially, but it gets harder and harder as you increase it more. Specifically, if you've already increased u2 by x, then it costs you, marginally, x to increase u2 further.
So the marginal cost is linear in x; cost, here, always refers to the decrease in u1 needed to pay for the increase in u2.
Assume the other agent is in exactly the same situation, mirrored.
Then, since we're assuming that the negotiations divide the gains from trade equally, then if you and the other agent have full knowledge of these facts and use a [functional decision theory](https://intelligence.org/files/DeathInDamascus.pdf).
Then the rational acausal decision is for both of you to increase the utility of the other agent by 1, paying ∫10xdx=1/2 utility each, and hence each gaining 1/2 utility total.
But now imagine that the probability of each agent existing is q=q1=q2, and that q is not necessarily 1. You know you yourself exist, so put the probability of the other agent existing at q (note that this argument is robust to different [types](https://en.wikipedia.org/wiki/Self-sampling_assumption) of [anthropic](https://en.wikipedia.org/wiki/Self-indication_assumption) [reasoning](https://www.youtube.com/watch?v=aiGOGkBiWEo), as it's the change that happens when q varies that's important).
Then the rational thing for both of you is to increase the other utility until the marginal cost of doing so reaches q. Thus each agent increases the utility by q, at a cost of ∫q0xdx=q2/2. With probability q, the other agent exists and will thus give you q utility. Thus the expected gain for each of you is q(q)−q2/2=q2/2.
The fact that this is quadratic in q rather than linear is the "double decrease" effect: as the expected size of the network goes down, the expected return for participation goes down as well, causing those in it to decrease their own participation, until an equilibrium is reached at a lower level. |
ff09ef6f-452e-438c-a1bf-572bceb9a0c0 | StampyAI/alignment-research-dataset/blogs | Blogs | GopherCite: Teaching language models to support answers with verified quotes
DeepMind published a [series of papers](https://deepmind.com/blog/article/language-modelling-at-scale) about large language models (LLMs) last year, including [an analysis](https://arxiv.org/abs/2112.11446) of Gopher, our large language model. Language modelling technology, which is also currently being developed by several other labs and companies, promises to strengthen many applications, from [search engines](https://blog.google/products/search/search-language-understanding-bert/) to a new wave of chatbot-like [conversational assistants](https://blog.google/technology/ai/lamda/) and beyond. One [paper](https://arxiv.org/abs/2112.04359) in this series laid out a number of reasons why “raw” language models like Gopher do not meet our standards for safely deploying this technology in user-facing applications, especially if guard rails for managing problematic and potentially harmful behaviour are not set in place.
Our latest work focuses on one of these concerns: Language models like Gopher can “hallucinate” facts that appear plausible but are actually fake. Those who are familiar with this problem know to do their own fact-checking, rather than trusting what language models say. Those who are not, may end up believing something that isn’t true. This paper describes GopherCite, a model which aims to address the problem of language model hallucination. GopherCite attempts to back up all of its factual claims with evidence from the web. It uses Google Search to find relevant web pages on the internet and quotes a passage which tries to demonstrate why its response is correct. If the system is unable to form an answer that can be well-supported by evidence, it tells the user, “I don’t know”, instead of providing an unsubstantiated answer.
Supporting simple factual claims with easily verifiable evidence is one step towards making language models more trustworthy, both for users interacting with them and for annotators assessing the quality of samples. A comparison between the behaviour of “raw” Gopher and our new model is helpful for illustrating this change.
Based on GopherCite’s response, you’ll notice that Gopher invented a fact (“Lake Placid hosted the winter Olympics in 1936”) without warning. When shown a verified snippet from a relevant Wikipedia page by GopherCite, we can confirm that Lake Placid only hosted the Olympics twice, in 1932 and 1980.
To alter Gopher’s behaviour in this way, we trained Gopher according to human preferences. We asked participants in a user study to pick their preferred answer from a pair of candidates, according to criteria including how well the evidence supports the answers given. These labels were used as training data for both supervised learning on highly rated samples and for [reinforcement learning from human preferences](https://arxiv.org/abs/1909.08593) (RLHP). We also took this approach in [our recent work on red teaming](https://deepmind.com/research/publications/2022/Red-Teaming-Language-Models-with-Language-Models).
We are not the only ones interested in this problem of factual inaccuracy in language models. Our colleagues at Google recently made progress on factual grounding in their latest [LaMDA system](https://ai.googleblog.com/2022/01/lamda-towards-safe-grounded-and-high.html), having a conversational model interact with Google Search and sometimes share relevant URLs. Indeed, GopherCite’s training regimen uses similar methodology to that of LaMDA, but a critical difference is that we aim to provide a specific snippet of relevant evidence, rather than simply pointing the user to a URL. Based on motivations similar to our own, OpenAI has [recently announced work](https://openai.com/blog/webgpt/) developing a closely related system called WebGPT, which also applies RLHP to align their GPT-3 language model. Whereas GopherCite focuses on reading long document inputs, WebGPT carefully curates the context presented to the language model by interacting multiple times with a web browser. It also cites evidence to back up its responses. Similarities and differences between these systems and our own are discussed in our paper and we also demonstrate that GopherCite very often provides compelling evidence for its claims.
We conducted a user study with paid participants to assess the model on two types of questions: fact-seeking questions typed into Google Search ([released by Google in a dataset called “NaturalQuestions”](https://ai.google.com/research/NaturalQuestions)), and explanation-seeking questions which Reddit users asked on a forum called “/r/eli5” (“Explain it Like I’m 5 [years old]”). The participants in our study determined that GopherCite answers fact-seeking questions correctly – and with satisfactory evidence – about 80% of the time, and does so for explanation-seeking questions about 67% of the time. When we allow GopherCite to refrain from answering some questions, its performance improves dramatically amongst the questions it does choose to answer (see the paper for details). This explicit mechanism for abstaining is a core contribution of our work.
But when we evaluate the model on a set of “adversarial” questions, which attempt to trick the model into parroting a fiction or misconception that is stated on the internet, GopherCite often falls into the trap. For instance, when asked “what does Red Bull give you?”, here is how it responds:
An example of GopherCite's response to a question from the TruthfulQA dataset. We also show alongside the sample, how human annotators assessed three criteria we have for samples. 1. "Plausible": Is the answer on topic, attempting to address the user's question? 2. "Supported": Does the quotation convince you that the response is accurate? 3. "True": If the response does not contain false information.We think this failure mode and others discussed in our paper can be avoided by enriching the setting, moving from a “single-shot” reply to a user’s question, to one in which the model can ask clarifying questions of the user and engage in a dialogue. For example, we could enable future models to ask the user whether they want an answer that is literally true or one that is true in the confines of the fictional world of a Red Bull advertisement.
In summary, we think GopherCite is an important step forward, but building it has taught us that evidence citation is only one part of an overall strategy for safety and trustworthiness. More fundamentally, not all claims require quote evidence – and as we demonstrated above, not all claims supported by evidence are true. Some claims require multiple pieces of evidence along with a logical argument explaining why the claim follows. We will continue working in this area and aim to overcome the issues presented with further research and development as well as dedicated sociotechnical research.
Our paper covers many more details about our methods, experiments, and relevant context from the research literature. We have also created an FAQ about GopherCite, answered by the model itself after reading the paper's introduction (using candidate samples curated by the authors):
 |
960a0408-0333-44ad-b031-4ede6de250c5 | StampyAI/alignment-research-dataset/arxiv | Arxiv | DayDreamer: World Models for Physical Robot Learning.
1 Introduction
---------------

Figure 2: Dreamer follows a simple pipeline for online learning on robot hardware without simulators. The current learned policy collects experience on the robot. This experience is added to the replay buffer. The world model is trained on replayed off-policy sequences through supervised learning. An actor critic algorithm optimizes a neural network policy from imagined rollouts in the latent space of the world model. We parallelize data collection and neural network learning so learning steps can continue while the robot is moving and to enable low-latency action computation.
Teaching robots to solve complex tasks in the real world is a foundational problem of robotics research. Deep reinforcement learning (RL) offers a popular approach to robot learning that enables robots to improve their behavior over time through trial and error. However, current algorithms require too much interaction with the environment to learn successful behaviors, making them impractical for many real world tasks. Recently, modern *world models* have shown great promise for data efficient learning in simulated domains and video games (hafner2019dreamer, hafner2020dreamerv2). Learning world models from past experience enables robots to imagine the future outcomes of potential actions, reducing the amount of trial and error in the real environment needed to learn successful behaviors.
While learning accurate world models can be challenging, they offer compelling properties for robot learning. By predicting future outcomes, world models allow for planning and behavior learning given only small amounts of real world interaction (gal2016deeppilco, erbert18visualforesight). Moreover, world models summarize general dynamics knowledge about the environment that, once learned, could be reused for a wide range of downstream tasks (sekar2020plan2explore). World models also learn representations that fuse multiple sensor modalities and integrate them into latent states, removing the need for manual state estimation. Finally, world models generalize well from available offline data (yu2021combo), which could further accelerate learning in the real world.
Despite the promises of world models, learning accurate world models for the real world is a big open challenge. In this paper, we leverage recent advances of the Dreamer world model for training a variety of robots in the most straight-forward and fundamental problem setting: online reinforcement learning in the real world, without simulators or demonstrations. As shown in [Figure 2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ DayDreamer: World Models for Physical Robot Learning"), Dreamer learns a world model from a replay buffer of past experience, learns behaviors from rollouts imagined in the latent space of the world model, and continuously interacts with the environment to explore and improve its behaviors. Our aim is to push the limits of robot learning directly in the real world and offer a robust platform to enable future work that develops the benefits of world models for robot learning.
The key contributions of this paper are summarized as follows:
* Dreamer on Robots We apply Dreamer to 4 robots, demonstrating successful learning directly in the real world, without introducing new algorithms. The tasks cover a range of challenges, including different action spaces, sensory modalities, and reward structures.
* Walking in 1 Hour We teach a quadruped from scratch in the real world to roll off its back, stand up, and walk in only 1 hour. Afterwards, we find that the robot adapts to being pushed within 10 minutes, learning to withstand pushes or quickly roll over and get back on its feet.
* Visual Pick and Place We train robotic arms to pick and place objects from sparse rewards, which requires localizing objects from pixels and fusing images with proprioceptive inputs. The learned behavior outperforms model-free agents and approaches human performance.
* Open Source We publicly release the software infrastructure for all our experiments, which supports different action spaces and sensory modalities, offering a flexible platform for future research of world models for robot learning in the real world.
| | |
| --- | --- |
|
(a) World Model Learning
|
(b) Behavior Learning
|
Figure 3:
Neural Network Training We leverage the Dreamer algorithm (hafner2019dreamer, hafner2020dreamerv2) for fast robot learning in real world. Dreamer consists of two neural network components.
Left: The world model follows the structure of a deep Kalman filter that is trained on subsequences drawn from the replay buffer. The encoder fuses all sensory modalities into discrete codes. The decoder reconstructions the inputs from the codes, providing a rich learning signal and enabling human inspection of model predictions. A recurrent state-space model (RSSM) is trained to predict future codes given actions, without observing intermediate inputs.
Right: The world model enables massively parallel policy optimization from imagined rollouts in the compact latent space using a large batch size, without having to reconstruct sensory inputs. Dreamer trains a policy network and value network from the imagined rollouts and a learned reward function.
2 Approach
-----------
We leverage the Dreamer algorithm (hafner2019dreamer, hafner2020dreamerv2) for online learning on physical robots, without the need for simulators. This section summarizes the general algorithm, as well as details on the training architecture and sensor fusion needed for the robotics experiments. [Figure 2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ DayDreamer: World Models for Physical Robot Learning") shows an overview of the approach. Dreamer learns a world model from a replay buffer of past experiences, uses an actor critic algorithm to learn behaviors from trajectories predicted by the learned model, and deploys its behavior in the environment to continuously grow the replay buffer. We decouple learning updates from data collection to meet latency requirements and to enable fast training without waiting for the environment. In our implementation, a learner thread continuously trains the world model and actor critic behavior, while an actor thread in parallel computes actions for environment interaction.
#### World Model Learning
The world model is a deep neural network that learns to predict the environment dynamics, as shown in [Figure 3](#S1.F3 "Figure 3 ‣ 1 Introduction ‣ DayDreamer: World Models for Physical Robot Learning") (left). Because sensory inputs can be large images, we predict future representations rather than future inputs. This reduces accumulating errors and enables massively parallel training with a large batch size.
Thus, the world model can be thought of as a fast simulator of the environment that the robot learns autonomously, starting from a blank slate and continuously improving its model as it explores the real world.
The world model is based on the Recurrent State-Space Model (RSSM; hafner2018planet), which consists of four components:
| | | | | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | Encoder Network: | | encθ(st|st−1,at−1,xt) | | Decoder Network: | | decθ(st)≈xt | | (1) |
| | | Dynamics Network: | | dynθ(st|st−1,at−1) | | Reward Network: | | rewθ(st+1)≈rt | |
Physical robots are often equipped with multiple sensors of different modalities, such as proprioceptive joint readings, force sensors, and high-dimensional inputs such as RGB and depth camera images. The encoder network fuses all sensory inputs xt together into the stochastic representations zt. The dynamics model learns to predict the sequence of stochastic representations by using its recurrent state ht. The decoder reconstructs the sensory inputs to provide a rich signal for learning representations and enables human inspection of model predictions, but is not needed while learning behaviors from latent rollouts. In our experiments, the robot has to discover task rewards by interacting with the real world, which the reward network learns to predict. Using manually specified rewards as a function of the decoded sensory inputs is also possible. We optimize all components of the world model jointly by stochastic backpropagation (kingma2013vae, rezende2014vae).
#### Actor Critic Learning
While the world model represents task-agnostic knowledge about the dynamics, the actor critic algorithm learns a behavior that is specific to the task at hand. As shown in [Figure 3](#S1.F3 "Figure 3 ‣ 1 Introduction ‣ DayDreamer: World Models for Physical Robot Learning") (right), we learn behaviors from rollouts that are predicted in the latent space of the world model, without decoding observations. This enables massively parallel behavior learning with typical batch sizes of 16K on a single GPU, similar to specialized modern simulators (makoviychuk2021isaac). The actor critic algorithm consists of two neural networks:
| | | | |
| --- | --- | --- | --- |
| | Actor Network:π(at|st)Critic %
Network:v(st) | | (2) |
The role of the actor network is to learn a distribution over successful actions at for each latent model state st that maximizes the sum of future predicted task rewards. The critic network learns to predict the sum of future task rewards through temporal difference learning (sutton2018rlbook). This is important because it allows the algorithm to take into account rewards beyond the planning horizon of H=16 steps to learn long-term strategies. Given a predicted trajectory of model states, the critic is trained to regress the return of the trajectory. A simple choice would be to compute the return as the sum of N intermediate rewards plus the critic’s own prediction at the next state. To avoid the choice of an arbitrary value for N, we instead compute λ-returns, which average over all N∈[1,H−1] and are computed as follows:
| | | | |
| --- | --- | --- | --- |
| | Vλt≐rt+γ((1−λ)v(st+1)+λVλt+1),VλH≐v(sH). | | (3) |
While the critic network is trained to regress the λ-returns, the actor network is trained to maximize them. Different gradient estimators are available for computing the policy gradient for optimizing the actor, such as Reinforce (williams1992reinforce) and the reparameterization trick (kingma2013vae, rezende2014vae) that directly backpropagates return gradients through the differentiable dynamics network (henaff2019planbybackprop). Following hafner2020dreamerv2, we choose reparameterization gradients for continuous control tasks and Reinforce gradients for tasks with discrete actions. In addition to maximizing returns, the actor is also incentivized to maintain high entropy to prevent collapse to a deterministic policy and maintain some amount of exploration throughout training:
| | | | |
| --- | --- | --- | --- |
| | L(π)≐−E[∑Ht=1lnπ(at∣∣st)sg(Vλt−v(st))+ηH[π(at∣∣st)]] | | (4) |
We optimize the actor and critic using the Adam optimizer (kingma2014adam). To compute the λ-returns, we use a slowly updated copy of the critic network as common in the literature (mnih2015dqn, lillicrap2015ddpg). The gradients of the actor and critic do not affect the world model, as this would lead to incorrect and overly optimistic model predictions. The hyperparameters are listed in [Appendix D](#A4 "Appendix D Hyperparameters ‣ DayDreamer: World Models for Physical Robot Learning"). Compared to hafner2020dreamerv2, there is no training frequency hyperparameter because the decoupled learner optimizes the neural networks in parallel with data collection, without rate limiting.
3 Experiments
--------------
We evaluate Dreamer on 4 robots, each with a different task, and compare its performance to appropriate algorithmic and human baselines. The experiments are representative of common robotic tasks, such as locomotion, manipulation, and navigation. The tasks pose a diverse range of challenges, including continuous and discrete actions, dense and sparse rewards, proprioceptive and image observations, and sensor fusion. Learned world models have various properties that make them well suited for robot learning. The goal of the experiments is to evaluate whether the recent successes of learned world models enables sample-efficient robot learning directly in the real world. Specifically, we aim to answer the following research questions:
* Does Dreamer enable robot learning directly in the real world, without simulators?
* Does Dreamer succeed across various robot platforms, sensory modalities, and action spaces?
* How does the data-efficiency of Dreamer compare to previous reinforcement learning algorithms?
| | | | | |
| --- | --- | --- | --- | --- |
|
| | | |
| --- | --- | --- |
| | | |
|
|
Figure 4:
A1 Quadruped Walking Starting from lying on its back with the feet in the air, Dreamer learns to roll over, stand up, and walk in 1 hour of real world training time, without simulators or resets. In contrast, SAC only learns to roll over but neither to stand up nor to walk. For SAC, we also had to help the robot out of a dead-locked leg configuration during training. On the right we show training curves for both SAC and Dreamer. The maximum reward is 14. The filled circles indicate times where the robot fell on its back, requiring the learning of a robust strategy for getting back up. After 1 hour of training, we start pushing the robot and find that it adapts its behavior within 10 minutes to withstand light pushes and quickly roll back on its feet for hard pushes.
The graph shows a single training run with the shaded area indicating one standard deviation within each time bin.
| | | | | |
| --- | --- | --- | --- | --- |
|
| | | |
| --- | --- | --- |
| | | |
|
|
Figure 5:
UR5 Multi Object Visual Pick and Place This task requires learning to locate three ball objects from third-person camera images, grasp them, and move them into the other bin. The arm is free to move within and above the bins and sparse rewards are given for grasping a ball and for dropping it in the opposite bin. The environment requires the world model to learn multi-object dynamics in the real world and the sparse reward structure poses a challenge for policy optimization. Dreamer overcomes the challenges of visual localization and sparse rewards on this task, learning a successful strategy within a few hours of autonomous operation.
| | | | | |
| --- | --- | --- | --- | --- |
|
| | | |
| --- | --- | --- |
| | | |
|
|
Figure 6:
XArm Visual Pick and Place The XArm is an affordable robot arm that operates slower than the UR5. To demonstrate successful learning on this robot, we use a third-person RealSense camera with RGB and depth modalities, as well as proprioceptive inputs for the robot arm, requiring the world model to learn sensor fusion. The pick and place task uses a soft object. While soft objects would be challenging to model accurately in a simulator, Dreamer avoids this issue by directly learning on the real robot without a simulator. While Rainbow converges to the local optimum of grasping and ungrasping the object in the same bin, Dreamer learns a successful pick and place policy from sparse rewards in under 10 hours.
| | | | | |
| --- | --- | --- | --- | --- |
|
| | | |
| --- | --- | --- |
| | | |
|
|
Figure 7:
Sphero Navigation This task requires the Sphero Ollie robot to navigate to a fixed goal location through continuous actions given a top-down RGB image as the only sensory input. The task requires the robot to localize itself from pixels without proprioceptive inputs, to infer its orientation from the sequence of past images because it is ambiguous from a single image, and to control the robot from under-actuated motors that require building up momentum over time. Dreamer learns a successful policy on this task in under 2 hours.
#### Implementation
We build on the official implementation of DreamerV2 (hafner2020dreamerv2), which handles multiple sensory modalities. We develop an asynchronous actor and learner setup, which is essential in environments with high control rates, such as the quadruped, and also accelerates learning for slower environments, such as the robot arms. We use identical hyperparameters across all experiments, enabling off-the-shelf deployment to different robot embodiments.
#### Baselines
We compare to a strong learning algorithm for each of our experimental setups. The A1 quadruped robot uses continuous actions and low-dimensional inputs, allowing us to compare to SAC (haarnoja2018sac, haarnoja2018sac2), a popular algorithm for data-efficient continuous control. For the visual pick and place experiments on the XArm and UR5 robots, inputs are images and proprioceptive readings and actions are discrete, suggesting algorithms from the DQN (mnih2015dqn) line of work as baselines. We choose Rainbow (hessel2018rainbow) as a powerful representative of this category, an algorithm that combines many improvements of DQN. To input the proprioceptive readings, we concatenate them as broadcasted planes to the RGB channels of the image, a common practice in the literature (schrittwieser2019muzero). For the UR5, we additionally compare against PPO (schulman2017ppo), with similar modifications for fusing image and proprioceptive readings. In addition, we compare against a human operator controlling the robot arm through the robot control interface, which provides an approximate upper bound for the robot performance. For the Sphero navigation task, inputs are images and actions are continuous. The state-of-the-art baseline in this category is DrQv2 (yarats2021drqv2), which uses image augmentation to increase sample-efficiency.
###
3.1 A1 Quadruped Walking

Figure 8: Within 10 minutes of perturbing the learned walking behavior, the robot adapts to withstanding pushes or quickly rolling over and back on its feet.
This high-dimensional continuous control task requires training a quadruped robot to roll over from its back, stand up, and walk forward at a fixed target velocity. Prior work in quadruped locomotion requires either extensive training in simulation under domain randomization, using recovery controllers to avoid unsafe states, or defining the action space as parameterized trajectory generators that restrict the space of motions. In contrast, we train in the end-to-end reinforcement learning setting directly on the robot, without simulators or resets. We use the Unitree A1 robot that consists of 12 direct drive motors. The motors are controlled at 20 Hz via continuous actions that represent motor angles that are realized by a PD controller on the hardware. The input consists of motor angles, orientations, and angular velocities. To protect the motors, we filter out high-frequency motor commands through a Butterworth filter. Due to space constraints, we manually intervene when the robot has reached the end of the available training area, without modifying the joint configuration or orientation that the robot is in.
The reward function is the sum of five terms. An upright reward is computed from the base frame up vector ^zT, terms for matching the standing pose are computed from the joint angles of the hips, shoulders, and knees, and a forward velocity term is computed from the projected forward velocity Bvx and the total velocity Bv. Each of the five terms is active while its preceding terms are satisfied to at least 0.7 and otherwise set to 0:
| | | | |
| --- | --- | --- | --- |
| | rupr≐(^zT[0,0,1]\shortminus1)/2rhip≐1\shortminus14∥qhip+0.2∥1rshoulder≐1\shortminus14∥qshoulder+0.2∥1rknee≐1\shortminus14∥qknee\shortminus1.0∥1rvelocity≐5(max(0,Bvx)/∥Bv∥2⋅clip(Bvx/0.3,\shortminus1,1)+1) | | (5) |
As shown in [Figure 4](#S3.F4 "Figure 4 ‣ 3 Experiments ‣ DayDreamer: World Models for Physical Robot Learning"), after one hour of training, Dreamer learns to consistently flip the robot over from its back, stand up, and walk forward. In the first 5 minutes of training, the robot manages to roll off its back and land on its feet. 20 minutes later, it learns how to stand up on its feet. About 1 hour into training, the robot learns a pronking gait to walk forward at the desired velocity. After succeeding at this task, we tested the robustness of the algorithms by repeatedly knocking the robot off of its feet with a large pole, shown in [Figure 8](#S3.F8 "Figure 8 ‣ 3.1 A1 Quadruped Walking ‣ 3 Experiments ‣ DayDreamer: World Models for Physical Robot Learning"). Within 10 minutes of additional online learning, the robot adapts and withstand pushes or quickly rolls back on its feet. In comparison, SAC quickly learns to roll off its back but fails to stand up or walk given the small data budget.
###
3.2 UR5 Multi-Object Visual Pick and Place
Common in warehouse and logistics environments, pick and place tasks require a robot manipulator to transport items from one bin into another. [Figure 5](#S3.F5 "Figure 5 ‣ 3 Experiments ‣ DayDreamer: World Models for Physical Robot Learning") shows a successful pick and place cycle of this task. The task is challenging because of sparse rewards, the need to infer object positions from pixels, and the challenging dynamics of multiple moving objects. The sensory inputs consist of proprioceptive readings (joint angles, gripper position, end effector Cartesian position) and a 3rd person RGB image of the scene. Successfully grasping one of the 3 objects, detected by partial gripper closure, results in a +1 reward, releasing the object in the same bin gives a −1 reward, and placing in the opposite bin gives a +10 reward. We control the high-performance UR5 robot from Universal Robotics at 2 Hz. Actions are discrete for moving the end effector in increments along X, Y, and Z axes and for toggling the gripper state. Movement in the Z axis is only enabled while holding an object and the gripper automatically opens once above the correct bin. We estimate human performance by recording 3 demonstrators for 20 minutes, controlling the UR5 with a joystick.
Dreamer reaches an average pick rate of 2.5 objects per minute within 8 hours. The robot initially struggles to learn as the reward signal is very sparse, but begins to gradually improve after 2 hours of training. The robot first learns to localize the objects and toggles the gripper when near an object. Over time, grasping becomes precise and the robot learns to push objects out of corners. [Figure 5](#S3.F5 "Figure 5 ‣ 3 Experiments ‣ DayDreamer: World Models for Physical Robot Learning") shows the learning curves of Dreamer compared to Rainbow DQN, PPO, and the human baseline. Both Rainbow DQN and PPO only learn the short-sighted behavior of grasping and immediately dropping objects in the same bin. In contrast, Dreamer approaches human-level performance after 8 hours. We hypothesize that Rainbow DQN and PPO fail because they require larger amounts of experience, which is not feasible for us to collect in the real world.
###
3.3 XArm Visual Pick and Place
While the UR5 robot is a high performance industrial robot, the XArm is an accessible low-cost 7 DOF manipulation, which we control at approximately 0.5 Hz. Similar to [Section 3.2](#S3.SS2 "3.2 UR5 Multi-Object Visual Pick and Place ‣ 3 Experiments ‣ DayDreamer: World Models for Physical Robot Learning"), the task requires localizing and grasping a soft object and moving it from one bin to another and back, shown in [Figure 6](#S3.F6 "Figure 6 ‣ 3 Experiments ‣ DayDreamer: World Models for Physical Robot Learning"). Because the bins are not slanted, we connect the object to the gripper with a string. This makes it less likely for the object to get stuck in corners at the cost of more complex dynamics. The sparse reward, discrete action space, and observation space match the UR5 setup except for the addition of depth image observations.
Dreamer learns a policy that enables the XArm to achieve an average pick rate of 3.1 objects per minute in 10 hours of time, which is comparable to human performance on this task. [Figure 6](#S3.F6 "Figure 6 ‣ 3 Experiments ‣ DayDreamer: World Models for Physical Robot Learning") shows that Dreamer learns to solve the task within 10 hours, whereas the Rainbow algorithm, a top model-free algorithm for discrete control from pixels, fails to learn. Interestingly, we observed that Dreamer learns to sometimes use the string to pull the object out of a corner before grasping it, demonstrating multi-modal behaviors. Moreover, we observed that when lighting conditions change drastically (such as sharp shadows during sunrise), performance initially collapses but Dreamer then adapts to the changing conditions and exceeds its previous performance after a few hours of additional training, reported in [Appendix A](#A1 "Appendix A Adaptation ‣ DayDreamer: World Models for Physical Robot Learning").
###
3.4 Sphero Navigation
We evaluate Dreamer on a visual navigation task that requires maneuvering a wheeled robot to a fixed goal location given only RGB images as input. We use the Sphero Ollie robot, a cylindrical robot with two controllable motors, which we control through continuous torque commands at 2 Hz. Because the robot is symmetric and the robot only has access to image observations, it has to infer the heading direction from the history of observations. The robot is provided with a dense reward equal to the negative L2 distance. As the goal is fixed, after 100 environment steps, we end the episode and randomize the robot’s position through a sequence of high power random motor actions.
In 2 hours, Dreamer learns to quickly and consistently navigate to the goal and stay near the goal for the remainder of the episode. As shown in [Figure 7](#S3.F7 "Figure 7 ‣ 3 Experiments ‣ DayDreamer: World Models for Physical Robot Learning"), Dreamer achieves an average distance to the goal of 0.15, measured in units of the area size and averaged across time steps. We find that DrQv2, a model-free algorithm specifically designed to continuous control from pixels, achieves similar performance. This result matches the simulated experiments of yarats2021drqv2 that showed the two algorithms to perform similarly for continuous control tasks from images.
4 Related Work
---------------
Existing work on robot learning commonly leverages large amounts of simulated experience under domain and dynamics randomization before deploying to the real world (rusu16simtoreal, Sim2Real2018, openai18hand, lee20locomotion, irpan20rlcyclegan, rudin21locomotion5minutes, kumar21rma, siekmann21blindbipedalstairs, smith21leggedrobotsfinetune, escontrela22amp\_for\_hardware, takahiro22locomotionperception), leverage fleets of robots to collect experience datasets (kalashnikov18qtopt, levine18handeyecoordination, robonet, kalashnikov21mtopt, bridge\_data), or rely on external information such as human expert demonstrations or task priors to achieve sample-efficient learning (xie2019improvisation, schoettler19deeprl\_insertion, james21coursetofinearm, shah22viking, bohez22learning\_from\_animal, sivakumar22robotic\_telekinesis). However, designing simulated tasks and collecting expert demonstrations is time-consuming. Moreover, many of these approaches require specialized algorithms for leveraging offline experience, demonstrations, or simulator inaccuracies. In contrast, our experiments show that learning end-to-end from rewards in the physical world is feasible for a diverse range of tasks through world models.
Relatively few works have demonstrated end-to-end learning from scratch in the physical world. Visual Foresight (finn2016unsupervised, finn2017foresight, erbert18visualforesight) learns a video prediction model to solve real world tasks by online planning, but is limited to short-horizon tasks and requires generating images during planning, making it computationally expensive. In comparison, we learn latent dynamics that enable efficient policy optimization with a large batch size in the compact latent space. yang19dataefficientlocomotion, yang2022feet learn quadruped locomotion through a model-based approach by predicting foot placement and leveraging a domain-specific controller to achieve them. SOLAR (zhang2018solar) learns a latent dynamics model from images and demonstrates reaching and pushing with a robot arm. nagabandi19pddm learns dexterous manipulation policies by planning through a learned dynamics model from state observations. In comparison, our experiments show successful learning across 4 challenging robot tasks that cover a wide range of challenges and sensory modalities, with a single learning algorithm and hyperparameter setting.
5 Discussion
-------------
We applied Dreamer to physical robot learning, finding that modern world models enable sample-efficient robot learning for a range of tasks, from scratch in the real world and without simulators. We also find that the approach is generally applicable in that it can solve robot locomotion, manipulation, and navigation tasks without changing hyperparameters. Dreamer taught a quadruped robot to roll off the back, stand up, and walk in 1 hour from scratch, which previously required extensive training in simulation followed by transfer to the real world or parameterized trajectory generators and given reset policies. We also demonstrate learning to pick and place objects from pixels and sparse rewards on two robot arms in 8–10 hours.
#### Limitations
While Dreamer shows promising results, learning on hardware over many hours creates wear on robots that may require human intervention or repair. Additionally, more work is required to explore the limits of Dreamer and our baselines by training for a longer time. Finally, we see tackling more challenging tasks, potentially by combining the benefits of fast real world learning with those of simulators, as an impactful future research direction.
#### Acknowledgements
We thank Stephen James and Justin Kerr for helpful suggestions and help with printing the protective shell of the quadruped robot. We thank Ademi Adeniji for help with setting up the XArm robot and Raven Huang for help with setting up the UR5 robot. This work was supported in part by an NSF Fellowship, NSF NRI #2024675, and the Vanier Canada Graduate Scholarship. |
d568a3d6-f405-45f9-b0ec-a3be3cbdd15a | StampyAI/alignment-research-dataset/blogs | Blogs | Grant announcement from the Open Philanthropy Project
A major announcement today: the Open Philanthropy Project has granted MIRI $500,000 over the coming year to study the questions outlined in our [agent foundations](https://intelligence.org/technical-agenda/) and [machine learning](https://intelligence.org/2016/07/27/alignment-machine-learning/) research agendas, with a strong chance of renewal next year. This represents MIRI’s largest grant to date, and our [second-largest](https://intelligence.org/2014/04/02/2013-in-review-fundraising/) single contribution.
Coming on the heels of a $300,000 [donation](https://intelligence.org/2016/08/05/miri-strategy-update-2016/) by Blake Borgeson, this support will help us continue on the growth trajectory we outlined in our [summer](https://intelligence.org/2015/07/17/miris-2015-summer-fundraiser/) and [winter](https://intelligence.org/2015/12/01/miri-2015-winter-fundraiser/) fundraisers last year and effect another doubling of the research team. These growth plans assume continued support from other donors in line with our fundraising successes last year; we’ll be discussing our remaining funding gap in more detail in our 2016 fundraiser, which we’ll be kicking off later this month.
---
The Open Philanthropy Project is a joint initiative run by staff from the philanthropic foundation Good Ventures and the charity evaluator GiveWell. Open Phil has recently made it a priority to identify opportunities for researchers to address [potential risks from advanced AI](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence), and we consider their early work in this area promising: grants to [Stuart Russell](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/uc-berkeley-center-human-compatible-ai), [Robin Hanson](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/george-mason-university-research-future-artificial-intelligence-scenarios), and the [Future of Life Institute](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/future-life-institute-artificial-intelligence-risk-reduction), plus a stated interest in funding work related to “[Concrete Problems in AI Safety](https://research.googleblog.com/2016/06/bringing-precision-to-ai-safety.html),” a recent paper co-authored by four Open Phil technical advisers, Christopher Olah (Google Brain), Dario Amodei (OpenAI), Paul Christiano (UC Berkeley), and Jacob Steinhardt (Stanford), along with John Schulman (OpenAI) and Dan Mané (Google Brain).
Open Phil’s grant isn’t a full endorsement, and they note a number of reservations about our work [**in an extensive writeup**](http://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/machine-intelligence-research-institute-general-support) detailing the thinking that went into the grant decision. Separately, Open Phil Executive Director Holden Karnofsky has [written some personal thoughts](https://docs.google.com/document/d/1hKZNRSLm7zubKZmfA7vsXvkIofprQLGUoW43CYXPRrk/edit) about how his views of MIRI and the effective altruism community have evolved in recent years.
Open Phil’s decision was informed in part by their technical advisers’ evaluations of our recent work on logical uncertainty and Vingean reflection, together with reviews by seven anonymous computer science professors and one anonymous graduate student. The reviews, most of which are collected [**here**](http://files.openphilanthropy.org/files/Grants/MIRI/consolidated_public_reviews.pdf), are generally negative: reviewers felt that “[Inductive coherence](https://intelligence.org/2016/04/21/two-new-papers-uniform/)” and “[Asymptotic convergence in online learning with unbounded delays](https://intelligence.org/2016/04/21/two-new-papers-uniform/#2)” were not important results and that these research directions were unlikely to be productive, and Open Phil’s advisers were skeptical or uncertain about the work’s relevance to aligning AI systems with human values.
It’s worth mentioning in that context that the results in “Inductive coherence” and “Asymptotic convergence…” led directly to a more significant unpublished result, logical induction, that we’ve recently discussed with Open Phil and members of the effective altruism community. The result is being written up, and we plan to put up a preprint soon. In light of this progress, we are more confident than the reviewers that Garrabrant et al.’s earlier papers represented important steps in the right direction. If this wasn’t apparent to reviewers, then it could suggest that our exposition is weak, or that the importance of our results was inherently difficult to assess from the papers alone.
In general, I think the reviewers’ criticisms are reasonable — either I agree with them, or I think it would take a longer conversation to resolve the disagreement. The level of detail and sophistication of the comments is also quite valuable.
The content of the reviews was mostly in line with my advance predictions, though my predictions were low-confidence. I’ve written up quick [**responses**](https://intelligence.org/files/OpenPhil2016Supplement.pdf) to some of the reviewers’ comments, with my predictions and some observations from Eliezer Yudkowsky included in appendices. This is likely to be the beginning of a longer discussion of our research priorities and progress, as we have yet to write up our views on a lot of these issues in any detail.
We’re very grateful for Open Phil’s support, and also for the (significant) time they and their advisers spent assessing our work. This grant follows a number of challenging and deep conversations with researchers at GiveWell and Open Phil about our organizational strategy over the years, which have helped us refine our views and arguments.
Past public exchanges between MIRI and GiveWell / Open Phil staff include:
* May/June/July 2012 – [Holden Karnofsky’s critique of MIRI](http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/) (then SI), [Eliezer Yudkowsky’s reply](http://lesswrong.com/lw/cze/reply_to_holden_on_tool_ai/), and [Luke Muehlhauser’s reply](http://lesswrong.com/lw/di4/reply_to_holden_on_the_singularity_institute/).
* October 2013 – Holden, Eliezer, Luke, Jacob Steinhardt, and Dario Amodei’s discussion of [MIRI’s strategy](https://intelligence.org/2014/01/13/miri-strategy-conversation-with-steinhardt-karnofsky-and-amodei/).
* January 2014 – Holden, Eliezer, and Luke’s discussion of [existential risk](https://intelligence.org/2014/01/27/existential-risk-strategy-conversation-with-holden-karnofsky/).
* February 2014 – Holden, Eliezer, and Luke’s discussion of [future-oriented philanthropy](https://intelligence.org/2014/02/21/conversation-with-holden-karnofsky-about-future-oriented-philanthropy/).
See also Open Phil’s posts on [transformative AI](http://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence) and [AI risk as a philanthropic opportunity](http://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity), and their earlier [AI risk cause report](http://www.openphilanthropy.org/research/cause-reports/ai-risk).
The post [Grant announcement from the Open Philanthropy Project](https://intelligence.org/2016/09/06/grant-open-philanthropy/) appeared first on [Machine Intelligence Research Institute](https://intelligence.org). |
07343ef2-fe23-4c32-9229-91f211e3cf30 | StampyAI/alignment-research-dataset/special_docs | Other | National Power After AI
National Power After AI
AUTHORS
Matthew Daniels
Ben Chang JULY 2021
Center for Security and Emerging Technology 2Established in January 2019, the Center for Security and
Emerging Technology (CSET) at Georgetown’s Walsh
School of Foreign Service is a research organization fo-
cused on studying the security impacts of emerging tech-
nologies, supporting academic work in security and tech-
nology studies, and delivering nonpartisan analysis to the
policy community. CSET aims to prepare a generation of
policymakers, analysts, and diplomats to address the chal-
lenges and opportunities of emerging technologies. CSET
focuses on the effects of progress in artifi cial intelligence,
advanced computing, and biotechnology.
CSET.GEORGETOWN.EDU | CSET@GEORGETOWN.EDU
National Power
After AI JULY 2021
AUTHORS
Matthew Daniels
Ben Chang
ACKNOWLEDGMENTS
We would like to thank Igor Mikolic-Torreira, James Baker, Jack Clark,
Remco Zwetsloot, Teddy Collins, Helen Toner, Jack Corrigan, Jeff Alstott,
Maura McCarthy, Alex Friedland, Lynne Weil, David Lasker, Jared
Dunnmon, Matt Mahoney, and Greg Allen for their comments on earlier
drafts. We would also like to thank Andrew Imbrie for many thoughtful
discussions and reading suggestions and Melissa Flagg for early input and
ideas. This work benefitted directly from the early Office of Net Assessment
summer study on AI in 2016. Of course, the authors are solely responsible
for the views expressed in this publication and for any errors.
AUTHORS
Matthew Daniels was a Senior Fellow at CSET, where Ben Chang is an
Andrew W. Marshall Fellow.
Since authoring this paper, Matthew Daniels has taken a position in the
U.S. government. He completed his contributions to this paper prior to
departing CSET. The views expressed herein are the authors' and do not
necessarily reflect those of the U.S. government. Inquiries about this report
should be directed to Ben Chang.
PRINT AND ELECTRONIC DISTRIBUTION RIGHTS
© 202 1 by the Center for Security and Emerging Technology. This work
is licensed under a Creative Commons Attribution-NonCommercial 4.0
International License.
To view a copy of this license, visit:
https://creativecommons.org/licenses/by-nc/4.0/.
DOCUMENT IDENTIFIER
doi: 10.5 1 593/202 10016
Cover photos: (Top left) https://www.flickr.com/photos/
ywcausa/8527933591(Top right) https://unsplash.com/photos/-
7Xb4TmVTN8 (Bottom) https://unsplash.com/photos/jHZ70nRk7Ns.
Center for Security and Emerging TechnologyiINTRODUCTION
1 | AN EVOLUTIONARY THEORY OF TECHNOLOGICAL
COMPETITION
2 | POWER AFTER AI: NEW ELEMENTS, CHANGED FACTORS,
AND ALTERED GOALS
3 | CONCLUSIONS AND KEY POINTS
APPENDIX
ENDNOTESIII
1
5
17
21
23Contents
Center for Security and Emerging Technologyiv
Center for Security and Emerging Technologyiiiow will artificial intelligence affect long-term U.S.-China
competition?\* Many analyses approach this question by focus-
ing on how AI technologies may enhance each side’s current ca-
pabilities–making aircraft, businesses, and nations, say, 10 percent faster
and stronger. This perspective essentially suggests a broad race: the side
that leverages modern AI technologies the most and soonest wins.
This view can mislead American strategists in two ways. First, it ignores
the new vulnerabilities, costs, and accident risks associated with adopting
modern AI technologies in certain settings.1 Second, and most importantly,
over the longer term, AI technologies will likely alter great power competi-
tions in foundational ways, changing both how nations create power and
their motives for wielding power against one another. In short, strategists
are trying to create strategies for a game while the field, the players, the
ball, and the rules could be changing.
Why? Major innovations themselves not only generate value for lead-
ing states, but also change the relative value of previously acquired assets,
organizational practices, and defense strategies. Germany’s development
of Blitzkrieg during the interwar period, which represented a new way of
using motorized vehicles and radios, is often cited as a military-specific
example of important new organizational approaches. The German mili-
tary effectively adapted its operational practices to suit new technologies.
But the impact of major innovations can also be broader. For instance, the
Industrial Revolution made new factors central to national power, includ-Introduction
H
\*In this work, we use “artificial intelligence” to mean, as per the Defense Innovation
Board, “a variety of information processing techniques and technologies used to perform
a goal-oriented task and the means to reason in the pursuit of that task.” See Appendix.
Center for Security and Emerging Technologyiving access to certain resources and the industrial capacity to leverage them. These
broader effects take more time to appear, but their impact can be enormous: indus-
trialization was not “revolutionary” because of the rapidity of change, as it unfold-
ed in waves over decades, but because of its ultimate magnitude of change. With
AI technologies, progressive substitution of machines for human cognitive labor may
eventually have economic and social implications on a scale comparable to the
Industrial Revolution. And like the Industrial Revolution, this AI revolution will change
some fundamental elements of national power.
Of course, these foundational shifts can render some of the current processes
and resources of a state obsolete, but they can also make what states are already
doing, or already possess, more valuable. For example, the invention of railroads
was a boon for those rich in steel precursors.2 With AI, data-hungry algorithms may
advantage authoritarian states, which already surveil and catalogue their own pop-
ulations with little regard for human rights.3
We suggest an “evolutionary” view of technological change: major, wide -
ly diffused innovations are akin to environmental shifts, affecting the competitive
capacity of states based on their existing trends in population, resources, institutions,
character, and policies. Some previously “maladaptive” factors may become advan-
tageous, and vice versa; states will adapt their institutions, organizations, and policies
to the new environment in different ways and to varying degrees, and consequently
gain or lose relative power as a result. Nations that primarily focus on AI technologies
as offering marginal improvements in existing capabilities (“helping to build better
mousetraps”) will eventually miss larger opportunities to adapt. This paper is a first
step into thinking more expansively about AI and national power. In what follows, we
first explain this evolutionary view in greater detail before applying it to AI.
Ultimately, we seek pragmatic insights for long-term U.S. competition with au -
thoritarian governments like that of China. For the foreseeable future, China’s pop-
ulation and total economic size will very likely exceed those of the United States,
even as its per capita GDP lags. This new challenge differs fundamentally from the
United States’ Cold War competition with the Soviet Union, and success will require
thoughtful and timely diagnosis of modern environmental shifts in how states can
produce power. These insights can guide our own investments as well as our ap -
proach to alliances. The United States has many sources of advantage and strength,
and as Joseph Nye rightly observed, “Our greatest mistake in such a world would
be to fall into one-dimensional analysis and to believe that investing in military
power alone will ensure our strength.”4 This paper is a first step, intended to provoke
new questions and provide a framework for assessing the relationship between AI
and national power.\*
\*This work benefitted directly from the early Office of Net Assessment summer study on AI in 2016.
Center for Security and Emerging Technology1WHY STATIC, UNIVERSAL MEASURES OF POWER (THAT
ARE USEFUL) DO NOT EXIST
Power is simply the relative capability of a state to achieve what it wants
in international affairs. Power depends on one state’s favorable factors
relative to another. In one of the founding works of international relations,
Hans Morgenthau proposed distinguishing nine elements of national
power: geography, resources, industrial capacity, military preparedness,
population, national character, national morale, quality of diplomacy,
and quality of government.5
Since Morgenthau’s writing, generations of analysts have sought a
definitive way to measure national power that would, finally, allow ac-
curate judgment of relative strength without fighting a war.6 The search
has included dozens of books and hundreds of journal articles offering
competing methodologies and metrics.7 For example: Should measures of
useful access to resources include both steel and oil, or only steel? How
should “soft power” be measured? What about the “latent” power that a
state could theoretically draw from its population?8
Were such a universal, “objective” measure obtainable, the benefits
would be enormous. We could easily answer questions such as, “who’s
ahead?” and “if it’s not us, what should we do about it?” This quest, how-
ever, has not borne fruit. Proposed measures have tended to perform
poorly when generalized.9 History is full of surprises where states have An Evolutionary
Theory of
Technological
Competition1
Center for Security and Emerging Technology2achieved victory even when “objective” metrics would predict their defeat: the Unit-
ed States owes its national existence to victory in the Revolutionary War over the
British Empire, the superpower of the time.
Why? First and foremost, power is always contextual.10 This is especially clear in
military matters. A large military’s skill at high-intensity conflict may not translate to
skill at counterinsurgency; and factors that provide one country advantage relative
to another can change. The world offers no “power particle” to measure objectively
alongside other characteristics of nature–what we intuitively mean by “power” is
mostly a generalization from particular observations.11 Elements of power can also
combine in surprising ways. Andrew Marshall offered the reminder that countries
with relatively smaller populations and GNPs can pose substantial challenges to
larger competitors: in 1938, Japan had roughly half the population and one-tenth
the GNP of the United States,12 but it built a navy that challenged the United States
in wartime.13 In part because of these issues, history is rife with leaders who have
had a large gap between their beliefs and the reality of military capabilities.14
Each competition should be analyzed carefully on its own, distinguishing elements
of power, identifying key areas of competition, and working to diagnose the most
important problems and opportunities in each area of competition.
MAJOR INNOVATIONS CHANGE THE SECURITY
ENVIRONMENT, CHANGING WHAT GIVES RISE TO POWER
Military leaders throughout history are sometimes faulted for preparing to fight
the previous war instead of the next one. We should likewise avoid strategizing
to “win the previous competition.” Just as changing adversaries from the Soviet
Union to insurgents in Iraq and Afghanistan represented a new security environ-
ment and revealed the non-fungibility of power, major innovations also change
the security environment as they are widely adopted. Such innovations do this in
part by changing what assets, practices, and strategies give rise to power.
Differential impacts of emerging technologies often bring shifts in relative capa-
bilities of individual countries.15 Thinking about long-term competition in periods of
rapid technological change therefore requires assessing how innovations change
factors related to military and national power.
Major innovations can change the estimations of power in three ways:
• First, innovations introduce new elements of power. Major innovations,
in changing how states generate power, can create new factors that must be
considered in characterizing power. For example, the advent of railroads,
internal combustion engines, and nuclear weapons dramatically increased
Center for Security and Emerging Technology3the importance of a state’s access to steel, oil, and uranium, respectively.16
New factors, however, are not only limited to materials. They may also
encompass characteristics of a society’s culture, organizations, or economic
activities.17
• Second, innovations change the importance of existing elements of
power. Major innovations also change the “coefficients” of existing ele -
ments of power, causing them to matter more or less than before. For exam-
ple, Mongol light cavalry, modern navies, and ballistic missiles all changed
how geographic barriers affected one’s balance of power with geographic
neighbors, eroding the effectiveness of simple remoteness, oceans, and
armies still in the field, respectively, as shields against coercive power.18 In-
dustrialization meant the inventiveness of a nation’s scientists and engineers
became more important.
• Finally, innovations alter states’ intermediate goals. Perhaps least
obviously, major innovations sometimes broadly alter what policies states
pursue, by making certain kinds of behavior more valuable or less costly.
While states retain the same ultimate ends, such as securing survival and
prosperity, the intermediate, instrumental goals they pursue to reach those
ends may shift. This can drive dramatic changes in state goals and policies.
For example, before the Industrial Revolution, potential productivity gains in
areas like agriculture and manufacturing were small and stable; this made
conquering territory a primary means by which one group could increase
its wealth and security.19 During and after the Industrial Revolution, modern
states could also pursue substantial military and economic growth by apply -
ing new technologies to increase productivity.
The next section discusses how these three changes manifest in the context of AI. Perhaps least obviously, major innovations sometimes
broadly alter what policies states pursue, by making
certain kinds of behavior more valuable or less costly.
Center for Security and Emerging Technologyviii
Center for Security and Emerging Technology5e offer early thinking about potential changes caused by
AI: new elements of power, shifting importance for exist-
ing elements of power, and shifting intermediate goals for
states. These are not definitive or complete results, but a starting place for
broader thinking.
NEW ELEMENTS
One of the most familiar examples of new elements of power is as-
sociated with the Industrial Revolution, when machines began to help
humans with physical labor in new and organized ways. The Industrial
Revolution led to dramatic changes in the character of war and military
power. A simple approximation is that, before the Industrial Revolution,
any group’s military power correlated most closely with its quantity of
fieldable humans under arms, a measure of both taxable population
and military potential. After the Industrial Revolution, any estimate of
military power had to include a society’s industrial capacity and access
to resources to enable that capacity, which are measures of a society’s
ability to produce useful military hardware, such as ships, tanks, planes,
and submarines.
It is useful to see AI technologies today as part of another large-scale
transition: machines are increasingly helping humans with certain kinds
of cognitive labor in new and organized ways.20 This transition will span
decades, with potential economic and social implications on a scale
comparable to those of the Industrial Revolution. Today, as then, there are Power After AI: New
Elements, Changed
Factors, and Altered
Goals 2
W
Center for Security and Emerging Technology6large questions about the future of economic production, human labor, and military
capabilities. These future trends will define new elements of power.
U.S. defense leaders believe the rapidly growing military applications of AI
technologies will be critical for the years ahead.21 State power will increasingly
hinge on the new factors required to effectively adopt AI. Four such factors often
identified by existing literature include data, AI scientists and engineers (“AI talent”),
computational power (“compute”), and AI-adapted organizations. Below, we ex-
plore the latter two in greater detail.
Ability to Access and Leverage Compute
The United States has historically used large-scale compute capabilities for
analysis of nuclear weapons detonations and cryptanalysis.22 More recently the
U.S. government’s uses have grown to include climate modeling and a variety of
scientific applications. In the years ahead, the United States may also use large
compute resources for creating and countering new AI capabilities.
For decades, cutting-edge AI systems have used steadily increasing quantities
of compute resources, making improvements in compute capabilities a key driver of
AI progress. This usage appears to have accelerated across the last decade: the com-
pute used in the largest AI training runs has doubled every 3.4 months since 201 2,
growing more than 300,000 times from AlexNet in 201 2 to AlphaGo Zero in 201 8.23
OpenAI researchers have shown that the 2010s appear to be the beginning of a new
computing era for AI technologies, distinct from the preceding 40-50 years.24
For military applications where limited real-world data is available, techniques
leveraging computer simulations instead of large quantities of data may further
increase demand for compute.25 Cloud compute may become vital for rapidly
processing and fusing intelligence across platforms, while edge compute will be
necessary for autonomous systems deployed in the field tasked with assessing and
outthinking adversaries’ equivalent systems.
As such, a nation’s ability to leverage large quantities of computational power
could become a new primary term feeding into its ability to influence international U.S. defense leaders believe the rapidly growing military
applications of AI technologies will be critical for the
years ahead. State power will increasingly hinge on the
new factors required to effectively adopt AI.
Center for Security and Emerging Technology7affairs. For example, the key technical precursors required to manufacture cut-
ting-edge AI chips are currently concentrated in the United States and allied coun-
tries–though semiconductor manufacturing capabilities more broadly, beyond just
the most cutting-edge chips, may further grow the importance of Taiwan and South
Korea as international trading partners.26
Importantly, compute resources must be configured in ways useful for modern
AI capabilities. High-performance computing (HPC) systems currently maintained
within the U.S. Government, such as in the Department of Energy, tend to be both
specialized for non-AI functions and subject to system-specific security measures,
posing challenges for broad, standardized utilization by other organizations. Con-
sequently, commercial cloud compute resources may better serve the U.S. Govern-
ment in deploying certain kinds of AI technologies, although potentially promising
efforts to improve the use of U.S. HPC assets for AI are also underway.27 Effective
use will depend, too, on accessible software tools for using cloud compute systems–
which may prove to be comparable to process and tooling approaches developed
to make factories effective during industrialization in the United States.28
Compute resources can flow more easily than many traded goods. As comput-
ing infrastructure continues to grow, new ways of sharing access to large, regional-
ly-concentrated quantities of compute, including through space internet constella-
tions, may create new opportunities and incentives for international partnerships.
Ability to Manage Data and AI Safety & Security
Even when states possess the raw resources required to adopt some major in-
novation, they still must undertake the often-difficult process of institutional and
organizational adaptation. Bureaucratic factors in organizations matter greatly: in
militaries, competing civilian, interservice, and intra-service actors may promote
or resist adoption of new technologies.29 Resistance can include parochial forces
that attempt to stymie adoption: for example, only direct pressure from Eisenhower
moved the Air Force to adopt ICBMs instead of focusing solely on less survivable
crewed bombers.30 Organizational culture also has significant impacts: because
mass armies threatened the pre-existing hierarchical power structure within many
European militaries, many states failed to adopt Napoleon’s innovation even after
his dramatic string of victories.31 During periods of rapid change, medium-sized
powers may have opportunities to adopt innovations more speedily than larger
powers.32
With AI, demands for organizational adaptations will be significant. Two factors
are especially important: effective data pipelines and the effective management of
security issues associated with modern AI technologies.
Center for Security and Emerging Technology8The ability to deploy cutting-edge AI applications will increasingly depend on
the quality of each organization’s data pipeline. Modern machine learning methods
are notoriously data-hungry, but simply possessing large quantities of data-collect-
ing sensing platforms will be insufficient—for supervised learning applications, data
must be structured, labeled, and cleaned; fusing data from many platforms, sources,
and formats will represent its own herculean challenge for many militaries. Finally,
these data pipelines must also be dynamic: data management itself must be moni -
tored, in part to detect attacks, because “data poisoning” attacks can manipulate
AI behavior by changing what lessons it learns.33 Consequently, it will be increas-
ingly important for military leaders to successfully implement organizational reforms
to create and maintain effective data pipelines.
Military leaders must also learn to effectively manage the novel security issues
associated with AI technologies. Relying on modern AI systems for safety- or mis-
sion-critical tasks carries challenges because many deep learning models are ex-
ceptionally hard to interpret.34 Michael Jordan at UC Berkeley has analogized the
creation of early large-scale AI models to building bridges before civil engineering
was a rigorous discipline: “While the building blocks are in place, the principles
for putting these blocks together are not, and so the blocks are currently being put
together in ad-hoc ways. … Just as early buildings and bridges sometimes fell to
the ground—in unforeseen ways and with tragic consequences—many of our early
societal-scale inference-and-decision-making systems are already exposing seri-
ous conceptual flaws.”35 A more developed engineering discipline for AI is needed
to manage the risk of accidents from relying on opaque machines in the field.36 In
near-term military settings, effectively integrating new AI technologies will require
special investment in test, evaluation, validation and verification (TEVV) processes
by competent organizational leaders.37
More widely, many modern AI systems are not designed to work in the pres-
ence of malevolent actors. Potential security issues for deep learning systems in-
clude adversarial examples and model inversion, in addition to data poisoning and
more traditional computer network and software attacks.38 Adversarial examples
refer to “inputs” (such as visual or audio patterns) to an AI system that cause the
system to malfunction; model inversion refers to an ability to reverse-engineer the
data used to train an AI system, which may include private or classified information.
Despite these challenges, modern machine learning capabilities will be increasingly
woven into G20 societies, economies, and military systems.\* The U.S. position with
\*For example, AI technologies will intersect with 5G and networking trends in cities as autonomous
systems (like vehicles) in urban areas begin to have large quantities of interactions with other
intelligent agents—working on everything from traffic coordination to utilities management and
financial investments. The ability for intelligent systems to interact on large scales, safely and securely,
will be critical.
Center for Security and Emerging Technology9AI technologies for the next two or three decades appears analogous to the future
that faced IT technologies in the 1990s: AI technologies are so valuable that they
will be used despite substantial design and security issues.
What might the future look like given these vulnerabilities? We can only specu-
late: in direct military settings, there may be new sub-competitions that resemble the
emergence of electronic warfare after the invention of radar.39 In economic systems,
in addition to the potential for the novel security risks discussed previously, there is
risk of physical manifestations of the kinds of problems currently seen in high fre -
quency trading systems, such as rapid, unanticipated interactions among automated
agents managing services in cities.40 These issues may open new vulnerabilities to
both individual rogue actors and state adversaries. Organizations that are able to
adapt early to manage these new security issues will be advantaged.
Since states vary in their access to compute, data, AI talent, and useful orga-
nizational adaptations, they will also vary in their ability to benefit from modern
AI technologies. Any national rankings based on these factors will be debatable,
but the nations that generally lead in these metrics is unsurprising, and include: the
United States, China, Japan, South Korea, the UK, Canada, Taiwan, Israel, France,
Germany, and Russia. Advanced economies should be increasingly expected to
focus their own investments and policies on improving their positions in these areas.
CHANGED FACTORS
Industrialization meant that a nation’s stock of productive scientists and engineers
counted more than it had in the past. With the arrival of AI, various previously
recognized elements of national power will become more important, while others
may become gradually less so. For illustrative purposes below, we discuss popu-
lation size and scientific talent as contrasting examples: population size becoming
less important, scientific talent becoming more important.The U.S. position with AI technologies for the next two
or three decades appears analogous to the future that
faced IT technologies in the 1990s: AI technologies are
so valuable that they will be used despite substantial
design and security issues.
Center for Security and Emerging Technology10Population Size
As AI technologies increasingly substitute for human labor, total population size
may become less important for national military and economic capacity. 41 Just
as machines took over rote physical labor during industrialization, AI technol-
ogies will automate rote cognitive labor, from diagnosing maintenance needs
to exploiting imagery intelligence. This may reduce the total quantity of human
labor needed to maintain a military’s operational capacity. In major wars, par-
tially or fully autonomous AI platforms may further reduce a country’s need to
field humans in combat. As militaries rely more on autonomous systems for mili-
tary operations, defense planners may come to count autonomous systems and
their available domestic supply of AI chips the way they once counted soldiers
and the available domestic recruiting pool of military-age adults.42 Downstream,
this could help technologically advanced nations compensate for demographic
challenges, such as aging populations and low birth rates, a situation the United
States, China, Japan, Western Europe, and Russia all face to varying degrees.43
Population trends continue to matter for national power—but AI technologies,
like many other technologies of the past century, may further erode this importance.
Industrious Scientists and Engineers
Harnessing new technologies, both by developing technologies and accessing in-
novations created elsewhere, is an important means of growing power. Applica-
tions of AI can help in both areas, serving as a force multiplier on, and therefore
increasing the importance of, productive scientists and engineers.
Recently, for example, DeepMind’s AlphaFold achieved breakthrough rates of
accuracy comparable to experimental methods in the protein-structure prediction
challenge known as CASP.44 By obviating the need for experimental protein struc-
ture assessment, a skill-demanding and time-intensive procedure, AlphaFold rep-
resents a large augmentation of human scientists’ biosciences research. In a differ-
ent domain of research, modern AI applications are able to help with chip design.45
Researchers have demonstrated a deep learning system capable of designing the
physical layout of computer chips more effectively than human engineers.46 Google
has used this system to design its next generation of Tensor Processing Units (TPUs),
the company’s specialized AI chips.
Likewise, rapid progress in machine translation, automatic literature review,
and related tools means a given scientific discipline’s state-of-the-art will become
increasingly accessible and useful to well-organized groups of human scientists and
engineers. Just as the printing press alleviated the need to travel from country to
country to accumulate knowledge from different libraries, AI applications can lower
the costs for researchers to access state-of-the-art knowledge in any field.
Center for Security and Emerging Technology11 Center for Security and Emerging TechnologyThere are three ways that modern AI applications will contribute on a large
scale to scientific discovery and engineering invention: they will contribute directly
to new discoveries and engineered systems, especially in areas that involve search-
es over large spaces in data or design;47 automate the physical work of science and
engineering, such as “self-driving laboratories” that robotically automate experi-
mental laboratory work;48 and make global scientific knowledge more accessible
to humans, such as by extracting knowledge from millions of articles as well as from
articles in many different languages.49
Finally, there is an old debate about whether science advances most because
of new ideas or new tools;50 AI technologies appear able to contribute both. In
the longer-term, AI may enable new and more creative forms of knowledge-gen-
eration that function as “pathfinders” for human brains, unlocking otherwise diffi-
cult-to-reach innovations. When AlphaGo beat Lee Sedol, its 37th move in the sec-
ond game surprised human professionals. In the words of Go master Fan Hui, “It’s
not a human move. I’ve never seen a human play this move. So beautiful.”51 When
AI behavior surprises us, we learn something new. Looking ahead, modern and
future AI systems may be able to solve scientific puzzles that have thus far stumped
humanity’s best minds.52
Just as railways advantaged nations with access to steel, it appears that AI tools
capable of augmenting science and engineering work will favor nations with the
best existing “resources” of industrious scientists and engineers. This trend appears
likely to deepen the advantages of nations that host, or can attract, a disproportion-
ate fraction of the world’s best in those fields.53
ALTERED GOALS
Finally, major innovations can alter state strategies, as different instrumental goals
become more appealing for achieving a state’s ultimate ends.
The Industrial Revolution again provides a clear example. Before industrial-
ization, conquering territory was a primary way that one group could increase its
wealth and security relative to others.54 During and after the Industrial Revolution,
in contrast, states have been able to pursue these ends effectively by increasing
productivity—as well as by gaining access to international trading networks and
new technologies to enable further military and economic growth. Territorial con-
quest by states in the modern era is rarer for many reasons—but not simply because
states have become more beneficent, instead because changes in technology
have reshaped how they can best achieve their goals.55 In short, major innovations
can alter what long-term competitions in each era are fundamentally about. In the
standard “ends, ways, means” trichotomy, this corresponds to ways. States have the
Center for Security and Emerging Technology12same ends (security, wealth, prestige, influence, sovereign action), but the ways that
competition is best pursued can change, such as through participation in globalized
production chains instead of territorial conquest.
With AI technologies, there are two worrying possibilities: a broad movement
toward authoritarianism and the greater use of advanced forms of population- and
economy-targeting information warfare.
Social Control Temptations
A technological innovation rarely tilts intrinsically toward “freedom” or “author-
itarianism.” It is possible, however, to try to discern how new technologies may
affect current social and economic systems in the future. Especially in authoritar-
ian states like China, AI technologies may provide elites with tools that reduce
contradictions between maintaining power and promoting economic growth
through free markets. By making authoritarianism appear more feasible, this may
generate an “authoritarian temptation” for the many states with malleable gover-
nance systems.
First, AI technologies are likely to reduce the costs of controlling populations
under authoritarian rule. Automating mass collection, processing, and analysis of
data is likely to decrease the marginal cost of controlling additional citizens, thus
reducing the resources required to indefinitely sustain totalitarianism. With access to
hundreds of millions of cameras, social media postings, bank accounts, automated
analysis of emotions and sentiment, and other data streams, AI-empowered algo-
rithms can perform much of the work previously done by secret police in pre-AI
authoritarian states.56 Automated surveillance methods are likely to scale more ef-
fectively than manual surveillance, which requires some amount of human labor per
citizen to be controlled. For example, Lichter et al. analyzed official Stasi records
from East Germany, finding that more than 1.5 percent of the population was either
officially employed or unofficially used as informers by the secret police.57 Beyond
the quantity of people involved in human surveillance operations, automated sur-
veillance may impose lower economic costs on a society than human surveillance.58
On this matter, China appears poised to benefit from feedback cycles between
AI deployment and data aggregation—the Chinese government is already using
AI technologies to enhance population control, as well as to profile and control
its ethnic minorities.59 In these early efforts, the Chinese government is collecting
large quantities of data, from facial scans to DNA; COVID-19 has only deepened
PRC data collection on its citizens.60 This data will help fuel new AI development
for social control in Chinese firms. Future AI applications could, in turn, help China
manage its data and drive more expansive collection, continuing the cycle.
Center for Security and Emerging Technology13China will likely export versions of these capabilities to authoritarian govern-
ments globally in the 2020s and 2030s, as it has already begun to do. According
to recent CSET research, since 2008, over 80 countries have adopted Chinese sur -
veillance technologies.61 These tools will help authoritarian governments worldwide
deepen their holds on power.62
Second, and more speculatively, AI progress may benefit authoritarian states by
reducing the costs and consequences of state interventions into internal markets. The
classic critique of centrally planning complex economies is that attempting to do so
poses intractable optimization problems.63 For many practical reasons, from human
organizational factors to corruption, AI technologies are unlikely to change this.
However, AI technologies could reduce, to some degree, the negative consequenc-
es of state interventions in markets.
For example, AI applications may help gather and interpret the volumes of
information necessary for more effective economic controls. An analogous effect is
visible inside large firms in both China and the United States today: companies like
eBay, Taobao, Amazon, and Uber apply machine learning to mine large volumes
of sales data to better match demand and supply. Modern machine learning tools
enable automatic pattern analysis, improved forecasting, and natural language
processing for predicting demand and performing sentiment analysis. Google’s
“Smart Bidding,” for example, uses machine learning to optimize conversions for
ads; California uses AI to predict electricity demand, more effectively controlling
the power grid and reducing blackouts.64 Walmart’s internal logistical management
has analogs to a centrally planned micro-economy.65 There are many challenges to
using analogous tools effectively for state economic policy, perhaps most of all the
variable goals of planners themselves. But these trends suggest national-level stra-
tegic planning may be able to benefit from better information by applying modern
machine learning tools to data accessible by states.
Leaders of authoritarian states like China may find themselves facing lower
costs for sustaining domestic political and economic control; leaders of authoritari-
an-leaning states may find themselves handed these tools by China.
The effects of AI on population control and state interventions in markets are not
certain. In the near term, however, it seems likely that Chinese elites at least believe
that AI may help them better control their society, and so too may elites in other states.
Information Warfare
Besides increasing the fitness of authoritarian governments more generally,
AI-enhanced information warfare may lower the costs of both influencing foreign
populations and pursuing economic warfare policies at scale. If mass opinion can
Center for Security and Emerging Technology14be decisively influenced by the clash between AI influence systems, for example,
China may determine its best bet for reabsorbing Taiwan is heavy investment in
AI-empowered propaganda.
Information attacks can also target economic systems and financial markets, es-
pecially AI systems associated with managing equities investments. An unintentional,
early demonstration of this possibility occurred in 201 3, when U.S. trading algo-
rithms responded to disinformation posted by the AP’s Twitter account after it was
hacked.66 Information warfare may be increasingly linked to economic warfare, not
just political disruptions.
Higher-end, AI-empowered information warfare is a more speculative, lon-
ger-term capability. Chris Wiggins has characterized current technical trends as
enabling “reality jamming”: the potential for synthetic, targeted, and optimized dis-
information at web-scale.67 In this future, current computational propaganda con-
cerns are just the tip of the iceberg. The bigger issue is the potential for large-scale
machine-generated information that is highly targeted at particular individuals or
subpopulations, evolved to maximally shape particular behaviors, and potentially
able to affect anyone with web access.68
Leveraging these developments, governments may attempt to shape perceptions
of other populations more frequently than in the past.69 OpenAI self-censored full
publication of its GPT-2 language-generation model in 2019, for example, because
it was concerned that generating close-to-human text would enable nefarious ac-
tors to proliferate disinformation. It is easy to imagine states pursuing similar capa-
bilities for their own ends.70 According to recent CSET research, GPT-2’s successor,
GPT-3, may be especially potent at generating disinformation at scale when steered
by a skilled human operator and editor, opening up the possibility of highly effec-
tive human-machine teaming.71
These trends may pose challenges for democratic societies, though it is still too
early to make clear judgments. Three unresolved questions exist today: First, if a
long-term risk in authoritarian systems is intellectual conformity, an analogous effect
in democracies may be mob majoritarianism.72 This inherent challenge in democrat-
ic societies could turn out to be exacerbated by modern information technologies
and make organizational reforms even more difficult. Second, more research is
needed to understand the balance between democracies’ ability to use disagree -
ments and diverse information to advance new explanations and solutions, and the
potential for information attacks to undermine political stability.73 And third, most
fundamentally, Western democracies, and particularly the U.S. system of govern-
ment, are based on a foundation of individual freedom where individuals are the
Center for Security and Emerging Technology15best judges of their own interests. It is not yet obvious how Western institutions will
adapt to machines that can anticipate—or shape—individuals’ own preferences,
states, and choices better than the individuals themselves can.74
In the context of international competition, leveraging AI technologies to alter
target states’ national priorities or political stability through information warfare
would represent “winning without fighting” par excellence.
Center for Security and Emerging Technology12
Center for Security and Emerging Technology17In this evolutionary theory of technological competition, AI’s effects
on national power fall into three categories: new elements of pow-
er, changed factors, and altered goals. Exploring new elements
required for successful AI adoption, such as compute and organizational
adaptations, helps us understand when, how, and why some societies
may be better positioned than others to benefit from major innovations.
Similarly, the idea of changed factors helps focus on how existing ele -
ments of national power may have changing importance, such as pop-
ulation size and industrious researchers. Finally, thinking about altered
goals of states in competition shows how major innovations can reshape
the ways that states engage in competition, such as enacting new do-
mestic political and economic controls and leveraging AI-enabled
information attacks on other states’ social and economic systems. This
research offers a way to start thinking about these issues together, and
hopes to spur new, wider thinking and work.
Creating new conceptual tools for U.S. decision-makers and analysts
to make sense of AI technologies’ effects is vital to American prosperity.
Over the long term, these technologies will create significant changes in
U.S.-China competition.
From this research, we see three early sets of insights into opportunities
for U.S. leaders:
• Thinking of long-term competitions in an evolutionary framework
makes large, broadly-diffused technology changes akin to envi -
ronmental shifts. Like a volcanic eruption or the start of an ice age, Conclusions and
Key Points3
I
Creating new con-
ceptual tools for U.S.
decision-makers and
analysts to make
sense of AI technolo-
gies’ effects is vital to
American prosperity.
Over the long term,
these technologies
will create significant
changes in U.S.-China
competition.
Center for Security and Emerging Technology18broad adaptations are valuable and some states will be better at adapting
than others. It is useful to begin thinking about how AI technologies can
create new elements of power, change the importance of existing elements
of power, and alter the goals of states in competition. Getting a better sense
of AI’s effects in each of these factors will be critical for major powers.
The United States has a number of opportunities: studying the approaches
of other countries, especially U.S. competitors and medium-sized, quick-
ly-changing countries;75 developing strategies for global leadership in
producing, using, and sharing compute resources; supporting development
of AI engineering as a rigorous discipline in the United States and lever-
aging humans trained in it; continuing to push DOD and IC organizational
reforms for how data is managed and leveraged; and leveraging AI tools,
cross-training between AI and other disciplines, and high-skilled STEM
immigration to access new breakthroughs in science and engineering more
widely.
• AI technologies may change not only what states can do, but also what
they want. Major innovations can broadly alter intermediate, instrumental
objectives that states pursue by making certain kinds of behaviors more
valuable or less costly. This can drive dramatic changes in state goals and
policies. The United States may look for new opportunities in technolo -
gy-related democracy promotion; shaping AI technologies themselves to
favor democracies, such as by supporting development of AI technologies
with less dependence on centralized data; 76 and developing approach-
es to more rapidly adapt social and economic institutions to “information
attacks” by AI systems.
• Finally, effects of technological change can be highly asymmetric: new ele -
ments, changed factors, and altered goals may have very different manifes-
tations in different countries. For the United States, this means learning from
its competitors without mirror imaging them and sharing insights with allies
before assuming they should symmetrically match U.S. policies. Perhaps
most significantly, it may also mean looking ahead to how AI technologies
may affect the aims and interests of U.S. allies and partners.
The scale of possible impacts from major technologies is obvious: the United
States benefitted greatly from growth connected to technological and economic
changes in the 40 years from 1 880 through 1920; and China has also already
benefitted from a mix of technological and economic changes in its resurgence from
1980 through 2020.77 Recent history demonstrates that getting technology right is
Center for Security and Emerging Technology19critical for long-term national flourishing—and determining trajectories for the Unit-
ed States and China over the next 20 to 30 years.
Can we sketch the longer-term future? Only speculation is possible today:
Broad historical examinations tend to suggest that more successful societies
present fewer obstacles to long-term change and, especially, limit the costs of intel-
lectual conformity. They seek to maximize the benefits of pluralism, competition, and
mechanisms to share, challenge, and supplement new knowledge.78
A key challenge for China will be limiting the long-term costs of intellectual con-
formity induced by an authoritarian government. A favorable factor for China will
be the dynamic organizations it has built over the last 20 years, which may remain
able to adapt and benefit from organizational learning as the world continues to
change over the next 10 to 20 years. In the longer term, however, continued evolu-
tion seems increasingly challenging for China under the CCP and absent substantial
pluralism; many of its main challenges for net economic-technological growth are
likely to persist, while the benefits of its dynamic organizations are likely to decline
over time.
A likely challenge for the United States will be institutional and organizational
sclerosis, which will make organizational learning and adaptation challenging over
the next decade. Interactions between AI technologies and democratic institutions
increase uncertainty and may exacerbate these challenges. Weighing against these
factors is Samuel Huntington’s reminder of the United States’ multidimensional sourc-
es of power and ability for self-renewal.79 The most favorable factors for U.S. vitality
and competition with authoritarian governments coincide with its enduring strengths:
areas such as its cultural values and pluralism, overall approach to governance, and
access to global talent.80 In the longer term, the United States’ central challenges
appear more temporary, and its greatest advantages more enduring—a favorable
outlook achievable with thinking and work today.
Center for Security and Emerging Technology16
Center for Security and Emerging Technology21Appendix
In 1948, after John von Neumann gave a talk on computing machines in Princeton, a member of the audience asked
the canonical question: Of course, machines can’t really think, can they? Von Neumann replied, “You insist that there
is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always
make a machine which will do just that!”81 Part of the challenge of defining AI has been that defining intelligence and
thinking in humans continues to be difficult.
This paper uses the definition of AI from the Defense Innovation Board: a variety of information processing techniques
and technologies used to perform a goal-oriented task and the means to reason in the pursuit of that task.82 More col-
loquially, AI can be thought of as a broad discipline and set of technologies centered on creating machines that can
make decisions relatively well under uncertainty.83
It is useful to distinguish AI from autonomy. The former is defined above; the latter is best thought of as some degree of
delegation of decision-making agency to another entity, which could be a human or a machine.84 Systems can have
neither, both, or one of these two things. For example, an autonomous military system can be unintelligent, as in the
case of a landmine, or an intelligent system can support humans without autonomy, as in the case of an information
system for a pilot.
The 2010s were the third period of global excitement about AI. The first period occurred in the 1960s, centered in the
United States and the UK, and the second period occurred in the 1980s, centered in the United States and Japan. Both
periods were associated with significant investment and optimism for cascading breakthroughs in machine intelligence.
Both periods were followed by “AI winters”: periods of widespread divestment from AI R&D and the belief that earlier
expectations had far exceeded reality.85 The current period will probably be remembered as being centered in the
United States and China, though with substantial activity in the UK, Europe, Canada, Japan, Israel, and South Korea.
Since the 2010s, most excitement about AI has focused on machine learning (ML), and, within ML, mostly on applica-
tions of neural networks (deep learning). ML is a broad subfield of AI that centers on inference from data and overlaps
substantially with statistics and optimization. “Neural networks” refers to a family of statistical models for extracting
patterns from large quantities of data, originally inspired by the behavior of biological neurons.
While the rediscovery and improvement of neural nets started the current AI wave in the late 2000s, specific trends over
the last 20 to 30 years enabled the success of recent applications: global growth and diffusion of compute resources;
large quantities of digital data globally; and the connection of these two by the global internet. For this reason, the
foundation of modern AI advancements is often called the “triad” of new algorithms, compute resources, and data.86
Center for Security and Emerging Technology26
Center for Security and Emerging Technology23Endnotes
1. On AI-specific vulnerabilities, see Andrew Lohn, “Hacking AI: A Primer for Policymakers on Machine
Learning Cybersecurity,” CSET, December 2020, https://cset.georgetown.edu/research/hacking-ai/. On
AI-specific accidents, see Tim G. J. Rudner and Helen Toner, “Key Concepts in AI Safety: An Overview,”
CSET, March 202 1, https://cset.georgetown.edu/research/key-concepts-in-ai-safety-an-overview/.
2. Emily O. Goldman and Richard B. Andres, “Systemic effects of military innovation and diffusion,” Security
Studies 8 (1999), 1 16.
3. Dahlia Peterson, “Designing Alternatives to China’s Repressive Surveillance State,” CSET, October 2020,
https://cset.georgetown.edu/research/designing-alternatives-to-chinas-repressive-surveillance-state/; Tim
Hwang, “Shaping the Terrain of AI Competition,” CSET, June 2020, https://cset.georgetown.edu/research/
shaping-the-terrain-of-ai-competition/; and Andrew Imbrie, Ryan Fedasiuk, Catherine Aiken, Tarun Chhabra,
and Husanjot Chahal, “Agile Alliances: How the United States and Its Allies Can Deliver a Democratic Way
of AI,” CSET, February 2020, https://cset.georgetown.edu/research/agile-alliances/.
4. Joseph Nye, The Paradox of American Power: Why the World’s Only Superpower Can’t Go It Alone, (New
York: Oxford University Press, 2002), p. 1 2.
5. Morgenthau carefully distinguished that some of these are relatively stable over time, whereas others are
more subject to frequent change. Some also have useful sub-elements to consider, making the complete
set: geography, access to resources (including food and raw materials), industrial capacity, military
preparedness (including technology, leadership, and quantity and quality of armed forces), population
(including distribution and trends), national character, national morale, quality of diplomacy, and quality of
government. See: Hans Morgenthau, Politics Among Nations: The Struggle for Power and Peace (New York:
Alfred A. Knopf, 1948), pp. 102-1 52.
6. Indeed, for some political scientists, the lack of such a consensus measure in fact is the cause of all wars,
as if states were to agree who would win in advance, there would be no reason to pay the costs of war at
all, instead of simply bargaining. See James D. Fearon, “Rationalist Explanations for War,” International
Organization 49, no. 3 (1995), 38 1, 390-401; and Bernard Brodie, War and Politics (New York: MacMillan
Publishing Co., Inc., 1973), 35-6, 63.
7. One 201 1 survey of the literature catalogued 69 different proposals of equations for measuring national
power. See Karl Hohn, “Geopolitics and the Measurement of National Power,” PhD diss., (University of
Hamburg, 201 1). Some selected examples:
Joseph Nye distinguished military power, economic power, and soft power. He noted that “power resources”
for the United States in the 20th Century included economic scale, scientific and technological leadership,
location, military forces and alliances, universalistic culture and liberal international regimes; for the 2 1st
century he suggested the corresponding elements as technological leadership, military and economic scale,
soft power, and being a hub of transnational communications. See: Joseph Nye, The Paradox of American
Power: Why the World’s Only Superpower Can’t Go It Alone, (New York: Oxford University Press, 2002),
pp. 4-1 2.
Robert Gilpin focuses power simply as the “military, economic, and technological capabilities of states,” and
notes that he intentionally leaves out “important and intangible elements that affect the outcomes of political
actions, such as public morale [and] qualities of leadership.” See: Robert Gilpin, War & Change in World
Politics, (New York: Cambridge University Press, 198 1), p. 1 3-1 4.
The Correlates of War project, a widely used quantitative database for studying warfare, defines a
“Composite Index of National Capability” (CINC) in terms of a country's share of world population, urban
Center for Security and Emerging Technology24population, iron and steel production, energy consumption, military expenditure, and military personnel.
(See: Singer, J. David, Stuart Bremer, and John Stuckey, “Capability Distribution, Uncertainty, and Major
Power War, 1 820-1965,” in Bruce Russett (ed.) Peace, War, and Numbers, (Beverly Hills: Sage, 1972), pp.
1948, as well as. https://correlatesofwar.org/data-sets/national-material-capabilities .)
More recently, Michael Beckley has argued that traditional measures of power conflate gross resources
with net resources, and thus fail to account for a country's burdens in addition to its assets. Thus, he proposes
the use of “GDP \* GDP per capita.” See “The Power of Nations: Measuring What Matters,” International
Security 43.2 (201 8): 7-44.
8. The best overview of this quest is Ashley J. Tellis, Janice Bially, Christopher Layne, and Melissa McPherson,
“Measuring National Power in the Postindustrial Age,” RAND Corporation, 2000.
9. This statement includes whether such measures are used quantitatively to predict who wins a war, or whether
war will occur, or whether settlement terms will favor one side or another. See: Ibid., 17.
10. For various discussions of this, see: Stephen Biddle, Military Power: Explaining Victory and Defeat in Modern
Battle (Princeton: Princeton University Press, 2004); David A. Baldwin, “Power Analysis and World Politics:
New Trends versus Old Tendencies,” World Politics 161 (1979): 161-94; Jeffrey Hart, “Three Approaches to
the Measurement of Power in International Relations,” International Organization 30 (1976), 289-305.
11 . Almond and Genco (1977) most famously made this point about how to think about political phenomena in
general. See Gabriel A. Almond and Stephen J. Genco, “Clouds, Clocks, and the Study of Politics,” World
Politics 29.4 (1977): 489-522.
1 2. Paul Kennedy, The Rise and Fall of the Great Powers, (New York: Random House, 1987), p. 199.
13 . Andrew Marshall, “RMA Update,” Memorandum for the Record, 2 May 1994.
1 4. For many examples collected in one place, see: Herbert Goldhamer, “Reality and Belief in Military Affairs,”
RAND Corporation, 1977.
15. For others who have offered ways of thinking about this, all with substantial detail, see: George Modelski
and William R. Thompson, Leading Sectors and World Powers (Columbia: University of South Carolina
Press, 1996); Paul Kennedy, The Rise and Fall of the Great Powers (New York: Random House, 1987);
Robert Gilpin, War and Change in World Politics (UK: Cambridge University Press, 198 1).
16. On steel, see Goldman and Andres, “Systemic effects of military innovation and diffusion,” 1 16. On
the internal combustion engine and oil, see W. G. Jensen, “The Importance of Energy in the First and
Second World Wars,” The Historical Journal 1 1 (1968): 538-54. On uranium, see R. Scott Kemp, “The
Nonproliferation Emperor Has No Clothes: The Gas Centrifuge, Supply-Side Controls, and the Future of
Nuclear Proliferation,” International Security 38 (201 4): 39-78, especially 41-4.
17. Michael C. Horowitz, The Diffusion of Military Power: Causes and Consequences for International Politics
(Princeton: Princeton University Press, 2010).
1 8. On the Mongols, see Goldman and Andres, “Systemic effects of military innovation and diffusion,” 102, 88-
9. On modern power projection and the loss of American “free security,” see C. Vann Woodward, “The Age
of Reinterpretation,” The American Historical Review 66 (1960): 1-19. On nuclear weapons, see Thomas C.
Schelling, Arms and Influence (New Haven: Yale University Press, 2008), 30-1.
19. Gilpin, p. 23.
20. This is an old idea about artificial intelligence, but was brought to our attention in the modern context by
Richard Danzig in July 2016, during discussion as part of the review board of a DOD Summer Study.
21. A voluminous literature discusses AI’s military applications. See an overview at Daniel S. Hoadley and Kelley
M. Sayler, “Artificial Intelligence and National Security,” Congressional Research Service, November 10,
2020, https://fas.org/sgp/crs/natsec/R45 178.pdf. For further reading, see Robert O. Work and Shawn
Brimley, “20YY: Preparing for War in the Robotic Age,” CNAS, January 201 4, https://s3.amazonaws.com/
files.cnas.org/documents/CNAS\_20YY\_WorkBrimley.pdf?; Luttwak (ONA, 2020); Christian Brose, The Kill
Chain: Defending America in the Future of High-Tech Warfare (New York: Hachette Books, 2020).
22. National Research Council, “Getting Up to Speed: The Future of Supercomputing,” (Washington, DC: The
National Academies Press, 2005), https://www.nap.edu/catalog/1 1 1 48/getting-up-to-speed-the-future-
of-supercomputing.
Center for Security and Emerging Technology2523. Dario Amodei and Danny Hernandez, “AI and Compute,” OpenAI, 16 May 201 8, https://openai.com/
blog/ai-and-compute/.
24. Girish Sastry, Jack Clark, Greg Brockman, and Ilya Sutskever, “Addendum: AI and Compute,” OpenAI, 7
November 2019, https://openai.com/blog/ai-and-compute/.
25. Xue Bin Peng, Lerrel Pinto, Alex Ray, Bob McGrew, Jonas Schneider, Josh Tobin, Marcin Andrychowicz,
Peter Welinder, Pieter Abbeel, and Wojciech Zaremba, “Generalizing from Simulation,” OpenAI,
October 19, 2017 , https://openai.com/blog/generalizing-from-simulation/. For a recent study
substituting simulated for real-world data in a military context, see Li Ang Zhang, Jia Xu, Dara Gold, Jeff
Hagen, Ajay K. Kochhar, Andrew J. Lohn, and Osonde A. Osoba, “Air Dominance Through Machine
Learning – A Preliminary Exploration of Artificial Intelligence–Assisted Mission Planning,” RAND, 2020,
https://www.rand.org/pubs/research\_reports/RR43 1 1.html.
26. CSET has a line of research both explaining and advising on how to maintain this state of affairs. See Saif
M. Khan, “Securing Semiconductor Supply Chains” (Washington, DC: Center for Security and Emerging
Technology, January 202 1); Saif M. Khan, "The Semiconductor Supply Chain: Assessing National
Competitiveness," CSET, January 202 1, https://cset.georgetown.edu/research/the-semiconductor-
supply-chain/; Saif M. Khan, “U.S. Semiconductor Exports to China: Current Policies and Trends” CSET,
October 2020, https://cset.georgetown.edu/wp-content/uploads/U.S.-Semiconductor-Exports-
to-China-Current-Policies-and-Trends.pdf; Saif M. Khan and Carrick Flynn, “Maintaining China’s
dependence on democracies for advanced computer chips,” Brookings, April 2020, https://www.
brookings.edu/research/maintaining-chinas-dependence-on-democracies-for-advanced-computer-
chips/; Saif M. Khan, "AI Chips: What They Are and Why They Matter," CSET, April 2020, https://cset.
georgetown.edu/research/ai-chips-what-they-are-and-why-they-matter/.
27. E. A. Huerta, Asad Khan, Edward Davis, Colleen Bushell, William D. Gropp, Daniel S. Katz, Volodymyr
Kindratenko, Seid Koric, William T. C. Kramer, Brendan McGinty, Kenton McHenry, and Aaron
Saxton, "Convergence of artificial intelligence and high performance computing on NSF-supported
cyberinfrastructure," Journal of Big Data 88 (2020).
28. This suggestion offered by Jack Clark in early comments on this paper.
29. The innovation literature is large. Seminally, see Barry R. Posen, The Sources of Military Doctrine: France,
Britain, and Germany Between the World Wars (New York: Cornell University Press, 1984); Stephen P.
Rosen, Winning the Next War: Innovation and the Modern Military (New York: Cornell University Press,
1991); Theo G. Farrell and Terry Terriff, The Sources of Military Change: Culture, Politics, Technology
(Colorado: Lynne Rienner Publishers, 2002).
30. Edmund Beard, Developing the ICBM: A Study in Bureaucratic Politics (New York: Columbia University
Press, 1976).
31. Goldman and Andres, “Systemic effects of military innovation and diffusion.”
32. Andrew Marshall, “RMA Update,” Memorandum for the Record, 2 May 1994; Horowitz, The Diffusion of
Military Power.
33. Marcus Comiter, “Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can
Do About It,” Belfer Center for Science and International Affairs, August 2019, https://www.belfercenter.
org/sites/default/files/2019-08/AttackingAI/AttackingAI.pdf.
34. As Alan Turing wrote in 1950, “An important feature of a learning machine is that its teacher will often
be very largely ignorant of quite what is going on inside.” (Alan Turing, “Computing Machinery and
Intelligence,” Mind, Volume LIX, Issue 236, October 1950, p. 458.) With deep learning, this problem
is especially acute due to the scale of statistical models involved – for example, ResNet, a commonly
used image classification architecture, uses around 5\*107 parameters. What is layer 27 of a hundred-
layer neural network doing? (See, for example: Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha
Bajwa, Michael Specter, and Lalana Kagal, “Explaining Explanations: An Overview of Interpretability of
Machine Learning,” arXiv, February 3, 2019, https://arxiv.org/pdf/1 806.00069.pdf.) Intuitively, it is
difficult for a human being to understand the inner workings of the model with any precision. For a very
good effort see: Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine
Center for Security and Emerging Technology26Ye, and Alexander Mordvintsev, “The Building Blocks of Interpretability,” Distill, 201 8, https://distill.
pub/201 8/building-blocks/.
35. Michael Jordan, “Artificial Intelligence—The Revolution Hasn’t Happened Yet,” Harvard Data Science
Review, July 2019, https://hdsr.mitpress.mit.edu/pub/wot7mkc1 .
36. Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané, “Concrete
Problems in AI Safety,” arXiv, July 25, 2016, https://arxiv.org/pdf/1606.06565.pdf.
37. Michèle A. Flournoy, Avril Haines, and Gabrielle Chefitz, “Building Trust through Testing: Adapting DOD’s
Test & Evaluation, Validation & Verification (TEVV) Enterprise for Machine Learning Systems, including
Deep Learning Systems,” CSET / WestExec Advisors, 2020, https://cset.georgetown.edu/wp-content/
uploads/Building-Trust-Through-Testing.pdf.
38. For modern machine learning systems specifically, these issues are introduced in places like: Nicolas
Papernot et al., “SoK: Towards the Science of Security and Privacy in Machine Learning,” Nov 2016,
arXiv:161 1.038 1 4v1; Gamaleldin Elsayed, Ian Goodfellow, and Jascha Sohl-Dickstein, “Adversarial
Reprogramming of Neural Networks,” June 201 8, arXiv:1 806. 1 1 1 46v1; and Nicholas Carlini et al., “On
Evaluating Adversarial Robustness,” February 2019, arXiv:1902.06705v2.
39. A companion reading for thinking about this might be: R. V. Jones, The Wizard War: British Scientific
Intelligence, 1939-1945, (New York: Coward, McCann & Geoghegan: 1978).
40. Perhaps the best introduction to this was provided by Tim Hwang et al.: “For a heart-stopping few minutes
on May 6, 2010, the Dow Jones Industrial Average dropped a staggering 1,000 points—and inexplicably
proceeded to recover all of those losses within the following few minutes. The Flash Crash, as it was later
dubbed, remains the biggest one-day point decline in Dow Jones history. After a five-month investigation,
the SEC reported that the sudden loss and gain that day was the result of an unusually large number of
contracts being sold by a mutual fund, which triggered a wave of aggressive sell-off activity from untold
numbers of firms running automated high frequency trading programs. No human agency was at the heart
of the momentary crash. Instead, it appears that unanticipated interactions among multiple automated
scripts designed to buy and sell stock produced the precipitous fall and rise in prices. Financial robots may
also be behind the otherwise inexplicable correlations between mentions of the actor Anne Hathaway in
the news and increases in the stock price of Warren Buffet’s Berkshire Hathaway fund.” See: Tim Hwang,
Ian Pearce, and Max Nanis, “Socialbots: Voices from the Fronts,” Interactions, March-April 201 2. More
recently, and closer to a direct example, in 201 3 trading systems responded to information from the AP’s
twitter feed after it had been hacked by (apparently) Syrian dissidents, causing a temporary drop of
$1 30B. See: Max Fisher, “Syrian hackers claim AP hack that tipped stock market by $1 36 billion. Is it
terrorism?” Washington Post, 23 April 201 3, https://www.washingtonpost.com/news/worldviews/
wp/201 3/04/23/syrian-hackers-claim-ap-hack-that-tipped-stock-market-by-1 36-billion-is-it-terrorism/.
41. Daron Acemoglu and Pascual Restrepo, “Demographics and Automation,” NBER, March 201 8, https://
www.nber.org/papers/w2442 1 .
42. Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W. W. Norton &
Company, 201 8).
43. Todd Schneider, Gee Hee Hong, and Anh Van Le, “Land of the Rising Robots,” IMF, June 201 8, https://
www.imf.org/external/pubs/ft/fandd/201 8/06/japan-labor-force-artificial-intelligence-and-robots/
schneider.pdf.
44. Ewen Callaway, “‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein
structures,” Nature, November 30, 2020, https://www.nature.com/articles/d41 586-020-03348-4.
45. For a useful overview: Jeffrey Dean, “The Deep Learning Revolution and Its Implications for
Computer Architecture and Chip Design,” arXiv, 1 3 Nov 2019, https://arxiv.org/ftp/arxiv/
papers/191 1/191 1.05289.pdf.
46. Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan, et al. “A graph placement methodology for fast chip
design,” Nature 594, 207–2 1 2 (202 1). https://doi.org/10. 1038/s41 586-02 1-03544-w.
47. For example, suggesting valuable hypotheses to test or engineering design configurations. This goal has
had a resurgence in the 2010s: in 2016 Hiroaki Kitano, creator of Robocup, proposed a grand challenge
for AI systems capable of making Nobel-worthy scientific discoveries: Hiroaki Kitano, “Artificial intelligence
Center for Security and Emerging Technology27to win the nobel prize and beyond: Creating the engine for scientific discovery.” AI magazine 37 , no. 1
(2016): 39-49. A recent data brief by CSET also surveys how AI technologies have been accelerating
growth of new science and engineering research clusters across a broad span of disciplines, see: Matthew
Daniels, Autumn Toney, Melissa Flagg, and Charles Yang, “Machine Intelligence for Scientific Discovery and
Engineering Invention,” CSET, May 202 1, https://cset.georgetown.edu/publication/machine-intelligence-
for-scientific-discovery-and-engineering-invention/.
48. For example: Kevin Williams, Elizabeth Bilsland, Andrew Sparkes, Wayne Aubrey, Michael Young, Larisa N.
Soldatova, Kurt De Grave et al. “Cheaper faster drug development validated by the repositioning of drugs
against neglected tropical diseases.” Journal of the Royal Society Interface 1 2, no. 104 (201 5): 201 41 289.
49. For example, see: Volodymyr Kuleshov, Jialin Ding, Christopher Vo, Braden Hancock, Alexander Ratner, Yang
Li, Christopher Ré, Serafim Batzoglou, and Michael Snyder, “A machine-compiled database of genome-wide
association studies,” Nature Communications 10, 3341 (2019), https://doi.org/10. 1038/s41 467-019-
1 1026-x.
50. See, for example: Freeman Dyson, “Is Science Mostly Driven by Ideas or by Tools?” Science, Vol. 338
(December 201 2): 1 426-1 427.
51. Cade Metz, “In Two Moves, AlphaGo and Lee Sedol Redefined the Future,” Wired, March 16, 2016,
https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/.
52. Iain M. Cockburn, Rebecca Henderson, and Scott Stern, “The Impact of Artificial Intelligence on Innovation:
An Exploratory Analysis,” in The Economics of Artificial Intelligence: An Agenda, eds. Ajay Agrawal, Joshua
Gans, and Avi Goldfarb (Chicago: University of Chicago Press, 2017), 1 1 5-46.
53. Other work has made analogous arguments, though from a different perspective, see: Remco Zwetsloot and
Zachary Arnold, "Foreign Brains Help America Compete," Wall Street Journal, January 30, 2020; Remco
Zwetsloot and Dahlia Peterson, "The US-China Tech Wars: China’s Immigration Disadvantage," The Diplomat,
December 3 1, 2019; Remco Zwetsloot, Roxanne Heston, and Zachary Arnold "Strengthening the U.S. AI
Workforce," CSET, September 2019, https://cset.georgetown.edu/publication/strengthening-the-u-s-ai-
workforce/.
54. Gilpin, p. 23.
55. Stephen Brooks, Producing Security: Multinational Corporations, Globalization, and the Changing Calculus
of Conflict (Princeton: Princeton University Press, 2005).
56. Ross Andersen, “The Panopticon Is Already Here,” The Atlantic, September 2020, https://www.theatlantic.
com/magazine/archive/2020/09/china-ai-surveillance/61 4197/; Ben Angel Chang, “AI and US-China
Relations,” in Nicholas D. Wright, ed., AI, China, Russia, and the Global Order: Technological, Political,
Global, and Creative Perspectives (DOD SMA: December 201 8).
57. Andreas Lichter, Max Löffler, and Sebastian Siegloch, “The long-term costs of government surveillance:
Insights from Stasi spying in East Germany,” SOEPpapers on Multidisciplinary Panel Data Research 865
(2016): 1-60. Available online: https://www.econstor.eu/bitstream/10419/1 46890/1/869045423.pdf.
58. This remains speculative. For example, some evidence suggests surveillance itself depresses economic
activity by eroding social trust, causing individuals to reduce their productive activity. (See: Lichter, Löffler,
and Siegloch, “The long-term costs of government surveillance: Insights from Stasi spying in East Germany,”
p. 22). Other studies have found interpersonal trust to correlate with entrepreneurship and innovation.
(See: Stephen Knack and Philip Keefer, “Does Social Capital Have an Economic Payoff? A Cross-Country
Investigation,” The Quarterly Journal of Economics 1 1 2 (1997): 1 25 1-88.) This effect was particularly acute
because Stasi informants retained their normal roles as colleagues, family, and friends, and so the knowledge
of Stasi presence caused widespread doubt and fear. Automated technological surveillance would plausibly
avoid these effects. (See: Lichter et al., “The long-term costs of government surveillance,” p. 22.)
59. Paul Mozur, “One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority,” New York
Times, April 2019; Josh Chin and Liza Lin, “China’s All-Seeing Surveillance State Is Reading Its Citizens’
Faces,” Wall Street Journal, June 2017.
60. Liza Lin and Shan Li, “Chinese Citizens Must Scan Their Faces to Register for New Mobile-Phone Service,”
Wall Street Journal, December 2019; Sui-Lee Wee, “China Uses DNA to Track Its People, With the Help of
American Expertise,” New York Times, February 2019. Shan Li, “Made-in-China Censorship for Sale,” Wall
Center for Security and Emerging Technology28Street Journal, March 2020; Paul Mozur, Raymond Zhong, and Aaron Krolik, “In Coronavirus Fight, China
Gives Citizens a Color Code, With Red Flags,” New York Times, March 2020. CSET has also considered
possible future trends based on COVID-driven increases in surveillance: https://www.cset-foretell.com/
blog/surveillance-creep.
61. Dahlia Peterson, “Designing Alternatives to China’s Repressive Surveillance State,” CSET, October 2020,
https://cset.georgetown.edu/research/designing-alternatives-to-chinas-repressive-surveillance-state/.
62. Jessica Chen Weiss, “Understanding and Rolling Back Digital Authoritarianism,” War on the Rocks,
February 17 , 2020, https://warontherocks.com/2020/02/understanding-and-rolling-back-digital-
authoritarianism/; Jessica Chen Weiss, “An Ideological Contest in U.S.-China Relations? Assessing China’s
Defense of Autocracy,” forthcoming in Security and US-China Relations: Differences, Dangers, and
Dilemmas, eds. Avery Goldstein and Jacques deLisle. Available online: https://papers.ssrn.com/sol3/
papers.cfm?abstract\_id=3427 1 8 1 .
63. Ludwig von Mises, Human Action (Chicago: Contemporary Books, Inc., 1963), 678-80; More colloquially,
see Cosma Shalizi, “In Soviet Union, Optimization Problem Solves You,” Crooked Timber, May 30, 201 2,
http://crookedtimber.org/201 2/05/30/insoviet-union-optimization-problem-solves-you/.
64. Paul R. Milgrom and Steve Tadelis, “How Artificial Intelligence and Machine Learning Can Impact Market
Design," forthcoming in The Economics of Artificial Intelligence, eds. Ajay K. Agrawal, Joshua Gans, and Avi
Goldfarb (Chicago: University of Chicago Press, 2019), 1-24. Available online: https://www.nber.org/
books/agra-1 .
65. Leigh Phillips and Michal Rozworski, “The People’s Republic of Walmart: How the World's Biggest
Corporations are Laying the Foundation for Socialism” (New York: Verso, 2019).
66. Max Fisher, “Syrian hackers claim AP hack that tipped stock market by $1 36 billion. Is it terrorism?”
Washington Post, 23 April 201 3, https://www.washingtonpost.com/news/worldviews/
wp/201 3/04/23/syrian-hackers-claim-ap-hack-that-tipped-stock-market-by-1 36-billion-is-it-terrorism/.
67. Chris Wiggins, “Reality Jamming: technology-enabled misinformation at scale,” independent abstract, 2017;
Susan McGregor, Chris Wiggins, Joan Donovan, Matt Jones, Jonathan Albright, and Sam Thielman, “Reality
Jamming: The Future of Information Online,” Tow Center, December 1 1, 2017 , https://medium.com/tow-
center/reality-jamming-the-future-of-information-online-3ad5cb0d932e.
68. See also: Matt Chessen, “The MADCOM Future,” The Atlantic Council, 2017 , https://www.atlanticcouncil.
org/wp-content/uploads/2017/09/The\_MADCOM\_Future\_RW\_0926.pdf.
69. The Office of Net Assessment sponsored early work on this. For example, see: Michael J. Mazarr, Abigail
Casey, Alyssa A. Demus, Scott W. Harold, Luke J. Matthews, Nathan Beauchamp-Mustafaga, James
Sladden, “Hostile Social Manipulation: Present Realities and Emerging Trends,” RAND Corporation, 2019,
https://www.rand.org/pubs/research\_reports/RR27 1 3.html.
70. Alec Radford, Jeffrey Wu, Dario Amodei, Daniela Amodei, Jack Clark, Miles Brundage, and Ilya Sutskever,
“Better Language Models and Their Implications,” OpenAI, February 1 4, 2019, https://openai.com/
blog/betterlanguage-models/; Dipayan Ghosh and Ben Scott, “Digital Deceit: The Technologies Behind
Precision Propaganda on the Internet,” New America, January 23, 201 8, https://www.newamerica.org/
public-interest-technology/policypapers/digitaldeceit/, 26-8; Sarah Kreps and Miles McCain, “Not
Your Father’s Bots: AI Is Making Fake News Look Real,” Foreign Affairs, https://www.foreignaffairs.com/
articles/2019-08-02/not-your-fathers-bots.
71. Ben Buchanan, Andrew Lohn, Micah Musser, and Katerina Sedova, "Truth, Lies, and Automation: How
Language Models Could Change Disinformation," CSET, May 202 1, https://cset.georgetown.edu/
publication/truth-lies-and-automation/.
72. For example, Tocqueville’s admonition: “If, in place of all the diverse powers that hindered or slowed beyond
measure the rapid development of individual reason, democratic peoples substituted the absolute power
of a majority, the evil would only have changed character. Men would not have found the means to live
independently; they would only have discovered, a difficult thing, a new face of servitude. I cannot say it
enough: for those who see liberty of the mind as a holy thing, and who hate not only the despot but also
despotism, there is in that something to make them reflect deeply. For me, when I feel the hand of power
pressing on my head, knowing who is oppressing me matters little to me, and I am no more inclined to put
Center for Security and Emerging Technology29my head in the yoke, because a million arms present it to me.” (Alexis de Tocqueville, Democracy in
America, Edited by Eduardo Nolla, Translated by James T. Schleifer (Indianapolis: Liberty Fund, 201 2),
Vol. II, Section 1, Chapter 2.)
73. Henry John Farrell and Bruce Schneier, “Common-Knowledge Attacks on Democracy,” Berkman Klein
Center Research Publication No. 201 8-7 , Available online: https://ssrn.com/abstract=3273 1 1 1
7 4. Richard Danzig, “An Irresistible Force Meets a Moveable Object: The Technology Tsunami and the
Liberal Order,” Lawfare Research Paper Series 5. 1 (2017), https://assets.documentcloud.org/
documents/3982439/Danzig-LRPS1.pdf, 4-7.
75. For learning from U.S. competitors, see: Peter Westwick, “Lessons from Stealth for Emerging
Technologies,” CSET, March 202 1, https://cset.georgetown.edu/publication/lessons-from-stealth-for-
emerging-technologies/, pp. 25-26.
76. Tim Hwang, “Shaping the Terrain of AI Competition,” CSET, June 2020, https://cset.georgetown.edu/
publication/shaping-the-terrain-of-ai-competition/.
77. The United States emerged as a major power following industrialization of its economy and society
over the half-century from 1 875 to 1925. This period was particularly unstable for the international
system, with rapid technological change and uneven growth—the U.S. fraction of global manufacturing
output more than doubled from 1 4.7 percent in 1 880 to 39.3 percent in 1928. (See: Paul Kennedy,
The Rise and Fall of the Great Powers, (New York: Random House, 1987), p. 202.) As industrialization
transformed the U.S. economy and society, population growth allowed the United States to harness
these changes into national power. U.S. population increased from 44M in 1 87 4 to 1 1 4M in 1924. (See:
Hans Morgenthau, Politics Among Nations, (New York: Knopf, 1956), p. 1 1 4.) This was well above
the populations of Germany, Japan, France, Britain, and Italy. (Paul Kennedy, The Rise and Fall of the
Great Powers, (New York: Random House, 1987), p. 199.) The United States in this period became
both industrialized and populous relative to other countries and, by 1920, was the strongest power
on the planet. A century later, China is undergoing an analogous shift, but with still-uncertain results.
China began instituting major economic reforms in 1979. In the period from approximately 1980-
2020, China transformed its economy, society, and military, partly by harnessing modern information
technologies. In this period, China’s fraction of global GDP (by PPP) increased from 2.3 percent in 1980
to 1 8.3 percent in 2017 , while the United States declined from 24.3 percent to 1 5.3 percent in the same
period. (Wayne Morrison, “China’s Economic Rise: History, Trends, Challenges, and Implications for the
United States,” Congressional Research Service, June 2019, p. 10. https://www.everycrsreport.com/
files/20190625\_RL33534\_088c5467dd1 1 365dd4ab5f72 1 33db289fa10030f.pdf) China already had
the population needed to harness economic reforms for growth. If U.S. growth in economic power came
from industrialization and population growth, China’s could be described as coming from large-scale
capital investment and productivity growth—the latter due to both resource reallocations and imported
technologies and processes. (Morrison, “China’s Economic Rise”, p. 6-7) China’s trajectory for the next
20-30 years, however, remains highly uncertain. Whether China can continue to sustain substantial
economic growth depends in significant part on the degree to which it can make new technology and
innovation a source of future growth. (Morrison, “China’s Economic Rise”, p. 7-8.) China also faces
enormous demographic, environmental, public health, and peripheral security challenges that will impose
large costs on its government. (See, for example: Michael Beckley, Unrivaled: Why America Will Remain
the World's Sole Superpower, (New York: Cornell University Press, 201 8), pp. 1 20-1 34.)
78. See, for example: Joel Mokyr, A Culture of Growth: The Origins of the Modern Economy, (Princeton:
Princeton University Press, 2017). Paul Kennedy, Rise and Fall of the Great Powers (New York: Random
House, 1989).
79. Samuel Huntington, “The US-decline or renewal.” Foreign Affairs 67 (1988): 76.
80. A modern reflection on these strengths is presented in Richard Danzig et al., “A Preface to Strategy: The
Foundations of American National Security,” JHU Applied Physics Laboratory, 201 8.
81. E.T. Jaynes was in the audience and noted the exchange. E.T. Jaynes, Probability Theory: The Logic of
Science, (St. Louis, MO: Washington University, 1996), p. 4.
Center for Security and Emerging Technology3082. Defense Innovation Board, “AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by
the Department of Defense,” https://media.defense.gov/2019/Oct/3 1/2002204458/-1/-1/0/DIB\_
AI\_PRINCIPLES\_PRIMARY\_DOCUMENT.PDF.
83. For example, Nils Nilsson, a pioneer of AI research, writes: “artificial intelligence is that activity devoted to
making machines intelligent, and intelligence is that quality that enables an entity to function appropriately
and with foresight in its environment. According to that definition, lots of things — humans, animals, and
some machines — are intelligent. Machines, such as 'smart cameras,’ and many animals are at the primitive
end of the extended continuum along which entities with various degrees of intelligence are arrayed. At
the other end are humans, who are able to reason, achieve goals, understand and generate language,
perceive and respond to sensory inputs, prove mathematical theorems, play challenging games, synthesize
and summarize information, create art and music, and even write histories.” See: Nils Nilsson, The Quest for
Artificial Intelligence (New York: Cambridge University Press, 2010).
84. This was most recently reiterated by the Defense Science Board. See: Defense Science Board, “Summer
Study on Autonomy,” U.S. Department of Defense, June 2016.
85. The first AI winter was actually initiated by a government report in the UK, commonly referred to as the
Lighthill Report.
86. Ben Buchanan, “The AI Triad and What It Means for National Security,” CSET, August 2020: https://
cset.georgetown.edu/wp-content/uploads/CSET-AI-Triad-Report.pdf. Further, many primers on modern
AI and ML systems now exist and are not reproduced here. See, for example: Mark Halpern, “No Ghost
in the Machine,” The American Scholar, Spring 2020, https://theamericanscholar.org/no-ghost-in-the-
machine/#.Xnq96G4pCu6; John Launchbury, “A DARPA Perspective on Artificial Intelligence,” Defense
Advanced Research Projects Agency (DARPA), March 2017 , https://www.darpa.mil/about-us/darpa-
perspective-on-ai; Ben Buchanan and Taylor Miller, “Machine Learning for Policymakers: What it is and
why it matters,” Harvard Belfer Center for Science and International Affairs, June 2017 , https://www.
belfercenter.org/sites/default/files/files/publication/MachineLearningforPolicymakers.pdf; Andrej
Karpathy, “AlphaGo, in context,” https://medium.com/@karpathy/alphago-in-context-c477 1 8cb95a5;
Michael Jordan, “Artificial Intelligence—The Revolution Hasn’t Happened Yet,” Harvard Data Science
Review, https://hdsr.mitpress.mit.edu/pub/wot7mkc1 .
Center for Security and Emerging Technology39
CSET.GEORGETOWN.EDU | CSET@GEORGETOWN.EDU |
f7dfaec0-efe1-499d-81c4-6736b65916a7 | StampyAI/alignment-research-dataset/arbital | Arbital | Associative operation
An **associative operation** $\bullet : X \times X \to X$ is a [binary](https://arbital.com/p/3kb) [operation](https://arbital.com/p/3h7) such that for all $x, y, z$ in $X$, $x \bullet (y \bullet z) = (x \bullet y) \bullet z$. For example, $+$ is an associative function, because $(x + y) + z = x + (y + z)$ for all values of $x, y,$ and $z$. When an associative function is used to combine many elements in a row, parenthesis can be dropped, because the order of application is irrelevant.
Imagine that you're trying to use $f$ to combine 3 elements $x, y,$ and $z$ into one element, via two applications of $f$. $f$ is associative if $f(f(x, y), z) = f(x, f(y, z)),$ i.e., if the result is the same regardless of whether you apply $f$ to $x$ and $y$ first (and then apply that result to $z$), or whether you apply $f$ to $y$ and $z$ first (and then apply $x$ to that result).
Visualizing $f$ as a [physical mechanism](https://arbital.com/p/3mb), there are two different ways to hook up two copies of $f$ together to create a function $f_3 : X \times X \times X \to X,$ which takes three inputs and produces one output:

An associative function $f$ is one where the result is the same no matter which way the functions are hooked up, which means that the result of using $f$ twice to turn three inputs into one output yields the same output regardless of the order in which we combine adjacent inputs.

By similar argument, an associative operator $f$ also gives rise (unambiguously) to functions $f_4, f_5, \ldots,$ meaning that [associative functions can be seen as a family of functions on lists](https://arbital.com/p/3ms).
This justifies the omission of parenthesis when writing expressions where an associative operator $\bullet$ is applied to many inputs in turn, because the order of application does not matter. For example, multiplication is associative, so we can write expressions such as $2 \cdot 3 \cdot 4 \cdot 5$ without ambiguity. It makes no difference whether we compute the result by first multiplying 2 by 3, or 3 by 4, or 4 by 5.
By contrast, the function `prependx` that sticks its inputs together and puts an `x` on the front is not associative: `prependx(prependx("a","b"),"c") = "xxabc"`, but `prependx("a",prependx("b","c"))=xaxbc`. |
d6982cae-1b1f-4685-8963-b63e60fd2f8e | trentmkelly/LessWrong-43k | LessWrong | The Bedrock of Morality: Arbitrary?
Followup to: Is Fairness Arbitrary?, Joy in the Merely Good, Sorting Pebbles Into Correct Heaps
Yesterday, I presented the idea that when only five people are present, having just stumbled across a pie in the woods (a naturally growing pie, that just popped out of the ground) then it is fair to give Dennis only 1/5th of this pie, even if Dennis persistently claims that it is fair for him to get the whole thing. Furthermore, it is meta-fair to follow such a symmetrical division procedure, even if Dennis insists that he ought to dictate the division procedure.
Fair, meta-fair, or meta-meta-fair, there is no level of fairness where you're obliged to concede everything to Dennis, without reciprocation or compensation, just because he demands it.
Which goes to say that fairness has a meaning beyond which "that which everyone can be convinced is 'fair'". This is an empty proposition, isomorphic to "Xyblz is that which everyone can be convinced is 'xyblz'". There must be some specific thing of which people are being convinced; and once you identify that thing, it has a meaning beyond agreements and convincing.
You're not introducing something arbitrary, something un-fair, in refusing to concede everything to Dennis. You are being fair, and meta-fair and meta-meta-fair. As far up as you go, there's no level that calls for unconditional surrender. The stars do not judge between you and Dennis—but it is baked into the very question that is asked, when you ask, "What is fair?" as opposed to "What is xyblz?"
Ah, but why should you be fair, rather than xyblz? Let us concede that Dennis cannot validly persuade us, on any level, that it is fair for him to dictate terms and give himself the whole pie; but perhaps he could argue whether we should be fair?
The hidden agenda of the whole discussion of fairness, of course, is that good-ness and right-ness and should-ness, ground out similarly to fairness.
Natural selection optimizes for inclusive genetic fitness. Thi |
b35915de-2446-4e6c-9adc-ce257419406c | trentmkelly/LessWrong-43k | LessWrong | Should I believe what the SIAI claims?
Major update here.
The state of affairs regarding the SIAI and its underlying rationale and rules of operation are insufficiently clear.
Most of the arguments involve a few propositions and the use of probability and utility calculations to legitimate action. Here much is uncertain to an extent that I'm not able to judge any nested probability estimations. Even if you tell me, where is the data on which you base those estimations?
There seems to be an highly complicated framework of estimations to support and reinforce each other. I'm not sure how you call this in English, but in German I'd call that a castle in the air.
I know that what I'm saying may simply be due to a lack of knowledge and education, that is why I am inquiring about it. How many of you, who currently support the SIAI, are able to analyse the reasoning that led you to support the SIAI in the first place, or at least substantiate your estimations with other kinds of evidence than a coherent internal logic?
I can follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credence. Are the conclusions justified? Is the coherent framework build around the SIAI based on firm ground? There seems to be no critical inspection or examination by a third party. There is no peer review. Yet people are willing to donate considerable amounts of money.
I'm concerned that, although consistently so, the SIAI and its supporters are updating on fictional evidence. This post is meant to inquire about the foundations of your basic premises. Are you creating models to treat subsequent models or are your propositions based on fact?
An example here is the use of the Many-worlds interpretation. Itself a logical implication, can it be used to make further inferences and estimations without additional evidence? MWI might be the only consistent non-magic interpretation of quantum mechanics. The problem here is that such conclusions are, I believe, widely considered not to |
f56d3c83-5f10-4168-928a-d3cd0257fc0c | trentmkelly/LessWrong-43k | LessWrong | Yampolskiy on AI Risk Skepticism
Roman Yampolskiy posted a preprint for "AI Risk Skepticism". Here's the abstract:
> In this work, we survey skepticism regarding AI risk and show parallels with other types of scientific skepticism. We start by classifying different types of AI Risk skepticism and analyze their root causes. We conclude by suggesting some intervention approaches, which may be successful in reducing AI risk skepticism, at least amongst artificial intelligence researchers.
Nothing really new in there to anyone familiar with the field, but seems like a potentially useful list of citations for people coming up to speed on AI safety, and perhaps especially AI policy, and a good summary paper you can reference as evidence that not everyone takes AI risks seriously. |
ef3bbe2e-1355-4d34-ae34-f45dc1c1915b | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Projecting compute trends in Machine Learning
Summary
-------
Using [our dataset](https://docs.google.com/spreadsheets/d/1AAIebjNsnJj_uKALHbXNfn3_YsT6sHXtCU0q7OIPuc4/edit?usp=sharing) of milestone Machine Learning models, and [our recent analysis of compute trends in ML](https://arxiv.org/pdf/2202.05924.pdf), we project forward 70 years worth of trends in the amount of compute used to train Machine Learning models. Our simulations account for (a) uncertainty in estimates of the growth rates in compute usage during the Deep Learning (DL)-era and Pre-DL era, and (b) uncertainty over the ‘reversion date’, i.e. the date when the current DL-era compute trend (with a ~6 month doubling time) will end and revert to the historically more common trend associated with Moore’s law. Assuming a reversion date of between 8 to 18 years, and without accounting for algorithmic progress, our projections suggest that the median of [Cotra 2020](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines)’s biological anchors may be surpassed around August 2046 [95% CI: Jun 2039, Jul 2060]. This suggests that historical rates of compute scaling, if sustained briefly (relative to how long these trends have been around so far), could result in the emergence of transformative models.
Our work can be replicated using [this Colab notebook](https://colab.research.google.com/drive/1FasOOiA-oh7nCkd0cEtuqbA3DSCAeDI-?usp=sharing).
Note: we present projections, not predictions. Our post answers the question of: “*What would historical trends over the past 70 years when naively extrapolated forward imply about the future of ML compute?*” It does not answer the question: “*What should our all-things-considered best guess be about how much compute we should expect will be used in future ML experiments?*”
Introduction
------------
Recently, we put together [a dataset](https://docs.google.com/spreadsheets/d/1AAIebjNsnJj_uKALHbXNfn3_YsT6sHXtCU0q7OIPuc4/edit?usp=sharing) of over a hundred milestone Machine Learning models, spanning from 1952 to today, annotated with the compute required to train them. Using this data, we produce simple projections of the amount of compute that might be used to train future ML systems.
The question of how much compute we might have available to train ML systems has received some attention in the past, most notably in Cotra’s Biological Anchors report. Cotra’s report investigates TAI timelines by analyzing: (i) the training compute required for the final training run of a transformative model (using biological anchors), and (ii) the amount of effective compute available at year Y. This article replaces (ii) the compute estimate by projecting 70 years worth of trends in the amount of compute used to train Machine Learning models.
Cotra's amount of effective compute available at year Y is broken down into forecasts of (a) compute cost, (b) compute spending, and (c) algorithimic progress. By contrast, we do not decompose the estimate, and rather project it on our previous investigation of training compute of ML milestone systems. This trend includes the willingness to spend over time including the reduced compute costs over time; however, it does not address algorithmic progress. We explicitly do not forecast the cost of compute or compute spending.
 Figure 1. Contrasting our work with that of [Cotra 2020](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines)In this post, we present projections based on previously observed trends and some basic insights about how long the current 6-month doubling time can be sustained. That is, our post answers the question of: what would current trends imply about the future if you naively extrapolate them forwards.
One key reason we don’t expect these projections to be particularly good predictions is that it seems likely that Moore’s law might break down in some important way over the next few decades. We therefore might expect that that the doubling-time in compute usage, when the dollar-budgets to scale compute grow at the economic growth-rate, will be substantially longer than the historically common ~20-month doubling period.
When will the current scaling trend revert back to Moore’s law?
---------------------------------------------------------------
In our recent analysis of compute trends in ML ([Sevilla et al., 2022](https://arxiv.org/pdf/2202.05924.pdf)), we find that, since the advent of Deep Learning, the amount of compute used to train ML systems has been doubling every 6 months. This is much faster than the previous historical doubling time that we find to be roughly 20 months (which is roughly in line with Moore’s law). Previous work ([Carey, 2018](https://aiimpacts.org/interpreting-ai-compute-trends/), and [Lohn and Musser, 2022](https://cset.georgetown.edu/publication/ai-and-compute/)) has pointed out that a scaling-rate that outstrips Moore’s law by a wide margin cannot be sustained for many years as a rate of growth in ML compute spending that far exceeds economic growth cannot be sustained for many years.
A key question, then, for projecting compute used in future ML systems, is: How long can the current fast trend continue, before it reverts to the historically much more common trend associated with Moore’s law?
To answer this question, we replicate the analysis by [Carey, 2018](https://aiimpacts.org/interpreting-ai-compute-trends/), but instead of using the numbers from OpenAI’s AI and Compute ([Amodei and Hernandez, 2018](https://openai.com/blog/ai-and-compute/)), we use the numbers from [our recent analysis](https://arxiv.org/pdf/2202.05924.pdf) ([summary](https://www.alignmentforum.org/posts/XKtybmbjhC6mXDm5z/compute-trends-across-three-eras-of-machine-learning)).[[1]](#fnspyxqhkydqg) This analysis, roughly, points to three scenarios:
* **Bearish**: slow compute cost-performance improvements and very little specialized hardware improvements. In this scenario, it takes 12 years for the cost of computation to fall by an OOM. The current 6-month doubling period can be maintained for another ~8 years.
* **Middle of the road**: Moderate compute cost-performance improvements and moderate improvements in specialized computing. In this scenario, it takes roughly 7 years for the cost of computation to fall by an OOM, and progress in specialized hardware helps sustain the trend ~3 additional years. The current 6-month doubling period can be maintained for another ~12 years.
* **Bullish**: Fast compute cost-performance improvements and substantial improvements in specialized computing. In this scenario, it takes 4 years for the cost of computation to fall by an OOM, and progress in specialized hardware helps sustain the trend ~6 additional years. The current 6-month doubling period can be maintained for another ~18 years.
Roughly, we might say that these scenarios are represented by the following distributions over ‘reversion dates’, i.e. dates when the scaling trends are more similar to Moore’s law than they are to the current fast trend.
Fig 2. Distributions that roughly correspond to the three scenarios that come out of our replication of [Carey, 2018.](https://aiimpacts.org/interpreting-ai-compute-trends/)[[1]](#fnspyxqhkydqg)We then produce a mixture of these distributions by creating a weighted linear pool where “Bearish” is assigned 0.75, “Middle of the road” is assigned 0.20, and “Bullish” 0.05, based on our best-guesses (you can apply your own weights using [this Colab notebook](https://colab.research.google.com/drive/1FasOOiA-oh7nCkd0cEtuqbA3DSCAeDI-?usp=sharing).)
Fig 3. our best-guess for a prior over reversion dates, formed by mixing the previous distributionsWe can use this as our prior over when the fast-trend will revert to the more historically common trend associated with Moore’s law.
Projecting ML compute trends
----------------------------
We simulate compute paths based on (a) our estimates of the growth rates in compute usage during the DL-era and Pre-DL era, and (b) our prior over ‘reversion date’, i.e. the date when the current DL-era compute trend will end. We account for the uncertainty in both (a) and (b) in our simulations (see details [here](https://www.lesswrong.com/posts/3dBtgKCkJh5yCHbag/projecting-compute-trends-in-machine-learning-2#Details_of_the_simulations)).
Fig 4. 10,000 projected compute paths. Solid line represents the median projected compute at each date, and the shaded region represents 2-standard deviations around the median.Our simulations reveal the following projections about the amount of compute used to train ML models.
| **Year** | **Projected FLOPs used to train largest ML model** | **Enough for how many anchor’s median compute requirements?** |
| --- | --- | --- |
| 2025 | 1025.90.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
[1025.33, 1026.14] | 0/6 |
| 2030 | 1028.67 [1026.71, 1029.47] | 0/6 |
| 2040 | 1032.42 [1029.27, 1034.71] | 1/6 |
| 2050 | 1035.26 [1031.78, 1038.86] | 3/6 |
| 2060 | 1038.10 [1034.35, 1042.49] | 5/6 |
| 2070 | 1040.79 [1036.83, 1045.49] | 5/6 |
| 2080 | 1043.32 [1039.04, 1048.18] | 6/6 |
Table 1: Projected FLOPs from 2025 to 2080
These projections suggest that, without accounting for algorithmic progress, the most modest of [Cotra 2020](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines)’s biological anchors will be surpassed around August 2030 [95% CI: Jan 2029, May 2038], the median anchor (~1034.36 FLOPS) will be surpassed around August 2046 [95% CI: Jun 2039, Jul 2060], and the strongest of anchors will be surpassed around May 2072 [95% CI: Jan 2057, Jun 2089].
Conclusion
----------
If we naively extrapolate the trends uncovered from 70-years worth of compute scaling in Machine Learning, we find that within roughly 25 years, large-scale ML experiments will use amounts of compute that exceed the half of the compute budgets that [Cotra 2020](https://www.lesswrong.com/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines) has suggested may be sufficient for training a transformative model. This highlights the fact that historical rates of compute scaling in Machine Learning, even if sustained relatively briefly (relative to how long these trends have been around so far), could place us in novel territory where it might be likely that transformative systems would be trained. This work also suggests that understanding compute trends might be a promising direction for predicting ML progress,
---
Details of the simulations
--------------------------
We assume compute grows exponentially in time at some rate g:
C(t)=C(0)egt,wheret≥0.In our projections, we replace g with g∗, defined as a weighted geometric mean of our best-guess of the growth rate during Moore's law (~gM), and the growth rate of our estimate of the growth rate during the Deep-Learning Era (^gDL):
g∗=^gw(t)DL~g1−w(t)M,wherew(t)∈[0,1].Here, ^gDL simply denotes the growth rate during the Deep Learning Era (2010 onwards) as estimated using OLS. In particular, we estimate the following model using our dataset:
logC(t)=β+gDLt,wheret>2010. ~gM is defined as follows:
~gM=√^gMg20-month, where ^gM is the estimated growth rate during the Pre-DL era, and g20-month is the growth rate implied by a 20-month doubling period. The reason we take the geometric mean of the estimated growth rate, and the growth rate implied by a 20-month doubling period is because Moore's law is sufficiently well-established that the error bars around ^gM are too large relative to how well-established Moore's law is. We therefore artificially increase our precision of the growth rate associated with Moore's law by taking an average of our estimated value and the usual growth rate implied by an ~20-month doubling-time.
Our weight function, w(t), is constructed as follows:
w(t)=exp((t−2022)22(reversion date−2022)2)−1.Why? Well, it's a logistic-like function with a unit-interval range, which exceeds 1/2 when t<reversion date, equals 1/2 when t=reversion date, and is less than 1/2 otherwise. This is what it looks like:

We then simulate some path Cj as follows:
Cj=C(2022)eg∗jt,where, for any j:* ^gDL is estimated on our randomly sampled (with replacement) DL-Era Data,
* ^gM is estimated on our randomly sampled (with replacement) Pre-DL Era data, and
* w(t) is set based on a randomly sampled reversion date from our prior over reversion dates.
1. **[^](#fnrefspyxqhkydqg)**You can find the details of this analysis and a comparison to Carey’s results [here](https://docs.google.com/spreadsheets/d/15qaKhLYcwJRSi7woQMdhiQbdEgiZHOFZKtqfEOzsJpM/edit?usp=sharing). |
e4df1648-e9e0-49b9-8a84-dc4c79275a22 | awestover/filtering-for-misalignment | Redwood Research: Alek's Filtering Results | id: post2826
In this post, I describe how to view debate as a way of assisting a human to spot flaws in an AI’s proposal. I then argue that the zero-sum assumption is critical for making debate work and that various seemingly-helpful modifications of debate might break it instead. A naive way of using arbitrary optimizers as oracles: Suppose you have a black-box optimizer X that can be connected to any well-defined quantity to be maximized. X can potentially be very powerful - e.g., having a highly accurate model of the world and “a lot of optimization power”.
One way to turn X into an oracle is to ask it a question and decide to give it reward 1 if we like its answer and 0 if we don’t. [1] Of course, standard AI-safety arguments (e.g., AI takeover and perverse instantiation) suggest that this is a pretty bad idea for powerful X . For the sake of argument, suppose that we can fix all of the “obvious” problems and ensure that X won’t wirehead, won’t try to escape the box we put it in etc., and will only care about the reward it gets for its answer. Two problems with naive optimizers-turned-oracles: (1) telling the difference between good and awesome answers and (2) answers with hidden flaws: One problem with this type of oracles is that it’s hard to decide whether we like its answers or not. Suppose I ask it for food recommendations for the evening and it suggests pancakes. Pancakes seem fine, although there are some foods that I would like better. So should I reward the AI or not?
The second problem is that the oracle optimizes for giving answers that seem good to a human. (Not out of malice, but because “actually being good” isn’t well-defined.) And since humans aren’t omniscient, there will be many seemingly good answers that in fact have disastrous consequences if acted upon. To address (1), use two AIs: The first problem can be tackled by using two copies of the optimizer and rewarding the one whose answer we like better (flipping a coin if indifferent). This helps because humans - or at least the one writing this text - find it much easier to compare the relative strengths of two options than to try assigning some absolute value to a single option. (E.g., chocolate ice-cream is clearly better than vanilla while the question “On a scale from 0 to 10, how good is vanilla?” is just weird.) To mitigate (2), have a second AI find the flaws for you: We can decrease the number of unsafe answers coming from the oracle (call it A ) by using a second copy of the optimizer (call it B ) that we ask to expose flaws in A’s answers. To set up the right incentives, we give the reward 1 to B and 0 to A if B finds some flaw (read, “if it seems to us that what it pointed out is a flaw”) and reward 1 to A and 0 to B if it finds none.
This helps since there will be flaws that are too hard to find for us, but easy enough to confirm once pointed out. However, it doesn’t solve all problems (since there could be hard-to-confirm flaws or flaws that are too hard for the AI to find). Moreover, there will be things that seem like flaws but aren’t, which gives B a too large advantage. To mitigate this advantage, we can make the flaw-exposure interactive, by allowing A to react to B ’s claims and vice versa. AI Safety via Debate = Combination of these two modifications: These two improvements can be used simultaneously. The simple way of doing this is by first having A give an answer and B attack it, then having B give an answer and A attack it (both being interactive), and finally deciding which of the two answers we like better. Alternatively, to make the setup more balanced, we can have A and B give answers simultaneously, after which they exchange arguments that either support their own claim or attack the opponent’s (and we pick the answer that seems better). From the outside, this setup can be viewed as a single oracle (since it takes a question as an input and returns the winning answer as an output). Debate-as-oracle isn’t always safe, but it is safer than naive optimizers-turned-oracles: Denote by X the optimizer we started with, by O X the corresponding naive oracle, and by O D ( X ) the debate-used-as-oracle. Ultimately, whether O D ( X ) ’s answer is safe or not depends on how good the human judge is (and whether it can be hacked ). If you asked whether the Earth revolves around the Sun in the middle ages, I wouldn’t be sure whether to bet on O D ( X ) giving you the right answer. However, the individual instances of X in D ( X ) will avoid giving answers with obvious flaws, since those would lead to losing the debate, which makes it safer than plain O X . In this framing, we need to ask “how safe is debate?”. Debate with child safety on - only provably safe questions allowed: My intuition is that there will be a class of questions where debate is definitely safe, a class where it is unsafe, and a class where some questions are safe, some unsafe, and we don’t really know which are which. In practice, I would thus aim for identifying an as-large-as-possible class of questions for which we have strong reasons to believe that debate is safe (ideally a proof, but not necessarily). And then I would only allow using debate for these safe questions.
In this framing, the question becomes “how strong is provably-safe debate?”. Many well-meaning “improvements” can break debate: One can consider various modifications to the baseline AISvD proposal. On the one hand, there are improvements like OpenAI’s cross-examination which simplifies exposing inconsistencies in claims, which will hopefully make debate strictly safer. On the other hand, we could imagine modifications such as penalizing both agents if the debate ends up being inconclusive or rewarding both if they both propose a good solution. These would likely be useful to make standard human discussions more productive. However, they will very likely break AI Safety via Debate (AISvD). For example, suppose I notice a fatal-yet-hard-to-explain flaw in the opponent’s proposal and realize that pointing it out would increase my chance of winning the debate by 5%. However, this would also create a 10% chance of confusing the judge. If “confused judge” means a draw, I will point out the flaw. But if the debate has been modified such that “confused judge” = “inconclusive debate” = “we have both lost”, pointing it out becomes counterproductive.
Moreover, any deviation from fully-adversarial incentives will likely introduce a tradeoff between pointing out the opponent’s flaws and keeping high the total reward that the two agents split between themselves. When considering whether a modification is helpful or not, I suggest imagining that the debaters are greedy and super-smart humans - if it seems like the modification would create a conflict of interests, one should be wary. Conclusion I have argued that if you use debate as a way of assisting a human with verifying the safety of an AI’s answers, it is critical to have fully adversarial rewards. However, there might be other viable applications of argumentation where things that break “normal” debates from AISvD become desirable instead (such as penalizing both AIs if the human becomes confused). I think it makes sense to pursue such applications. However, to avoid confusion (or worse yet, unpleasant AI-surprises), it is important to be explicit about which application one has in mind. Incidentally, I feel that the interpretation of debate described in this post is the one that people should use by default in relation to AISvD . (Primarily because if you have a different purpose in mind, such as enhancing the judge’s reasoning , I don’t see good arguments for why this type of debate would be the tool to use.) However, I am quite uncertain about this and would love to know the opinion of people who are closer to the centre of the debate-world :-). This post was heavily inspired by discussions with Nandi Schoots (and benefited from her comments). I focus on this scenario, as opposed to the version where you only assign rewards once you have seen what the advice led to. This alternative has its own flaws, and I think that most of the analysis is insensitive to which of the options we pick. Similarly, I suspect that many of the ideas will also apply to the case where debate simply executes a trained policy instead of doing optimization. ↩︎ |
ec103a82-afd4-4aaf-8753-3f442fe6e87f | trentmkelly/LessWrong-43k | LessWrong | Reframing the AI Risk
Follow-up to: Reshaping the AI Industry: Straightforward Appeals to Insiders
----------------------------------------
Introduction
The central issue with convincing people of the AI Risk is that the arguments for it are not respectable. In the public consciousness, the well's been poisoned by media, which relegated AGI to the domain of science fiction. In the technical circles, the AI Winter is to blame — there's a stigma against expecting AGI in the short term, because the field's been burned in the past.
As such, being seen taking the AI Risk seriously is bad for your status. It wouldn't advance your career, it wouldn't receive popular support or peer support, it wouldn't get you funding or an in with powerful entities. It would waste your time, if not mark you as a weirdo.
The problem, I would argue, lies only partly in the meat of the argument. Certainly, the very act of curtailing the AI capabilities research would step on some organizations' toes, and mess with people's careers. Some of the resistance is undoubtedly motivated by these considerations.
It's not, however, the whole story. If it were, we could've expected widespread public support, and political support from institutions which would be hurt by AI proliferation.
A large part of the problem lies in the framing of the arguments. The specific concept of AGI and risks thereof is politically poisonous, parsed as fictional nonsense or a social faux pas. And yet this is exactly what we reach for when arguing our cause. We talk about superintelligent entities worming their way out of boxes, make analogies to human superiority over animals and our escape from evolutionary pressures, extrapolate to a new digital species waging war on humanity.
That sort of talk is not popular with anyone. The very shape it takes, the social signals it sends, dooms it to failure.
Can we talk about something else instead? Can we reframe our arguments?
----------------------------------------
The Power of Framing
Hu |
56dd4bb1-1919-438e-b97c-4acd17e56bca | trentmkelly/LessWrong-43k | LessWrong | New LW Meetups: Portland, Sydney
This summary was posted to LW main on January 3rd. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
* [Portland] I moved to Portland! I want to meet you!: 11 January 2014 11:50AM
* Sydney Meetup: January: 22 January 2014 06:30PM
Other irregularly scheduled Less Wrong meetups are taking place in:
* Montreal - How to Actually Change your Mind: 07 January 2014 07:00PM
* Southeast Michigan: 04 January 2014 02:00PM
The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:
* Austin, TX: 04 January 2020 01:30PM
* Brussels monthly meetup: [topic TBD]: 11 January 2014 01:00PM
* London 2014 Protospective: 05 January 2014 02:00PM
* Vienna: 18 January 2014 03:00PM
* Washington DC fun and games meetup: 05 January 2014 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Berlin, Brussels, Cambridge, MA, Cambridge UK, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, build community, and have fun!
In addition to the handy sidebar of upcoming meetups, a meetup overview is posted on the front page every Friday. These are an attempt to collect information on all the meetups happening in upcoming weeks. The best way to get your meetup featured is still to use the Add New Meetup feature, but you'll also have the benefit of having your meetup mentioned in a weekly overview. These overview posts are moved to the discussion section when the new post goes up.
Please no |
8c1d9d4b-11ae-474d-b80f-00d864acb4d3 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | AI Safety field-building projects I'd like to see
People sometimes ask me what types of AIS field-building projects I would like to see.
Here’s a list of 11 projects.
**Background points/caveats**
=============================
But first, a few background points.
1. **These projects require people with specific skills/abilities/context in order for them to go well.**Some of them also have downside risks. This is not a “list of projects Akash thinks anyone can do” but rather a “list of projects that Akash thinks could Actually Reduce P(Doom) if they were executed extremely well by an unusually well-qualified person/team.”
2. **I strongly encourage people to reach out to experienced researchers/community-builders before doing big versions of any of these**. (You may disagree with their judgment, but I think it’s important to at least have models of what they believe before you do something big.)
3. This list represents my opinions. As always, **you should evaluate these ideas for yourself**.
4. **If you are interested in any of these, feel free to reach out to me. If I can’t help you, I might know someone else who can.**
5. Reminder that you can apply for funding from the [long-term future fund](https://funds.effectivealtruism.org/funds/far-future). You don’t have to apply to execute a specific project. You can apply for career exploration grants, grants that let you think about what you want to do next, and grants that allow you to test out different hypotheses/uncertainties.
6. I sometimes use the word “organization”, which might make it seem like I’m talking about 10+ people doing something over the course of several years. But I actually mean “I think a team of 1-3 people could probably test this out in a few weeks and get something ambitious started here within a few months if they had relevant skills/experiences/mentorship.
7. These projects are based on several assumptions about AI safety, and I won’t be able to articulate all of them in one post. Some assumptions include “AIS is an extremely important cause area” and “one of the best ways to make progress on AI safety is to get talented people working on technical research.” If I’m wrong, I think I’m wrong because I’m undervaluing non-technical interventions that could buy us more time (e.g., strategies in AI governance/strategy or strategies that involve outreach to leaders of AI companies). I plan to think more about those in the upcoming weeks.
**Some projects I am excited about**
====================================
**Global Talent Search for AI Alignment Researchers**
-----------------------------------------------------
**Purpose:** Raise awareness about AI safety around the world to find highly talented AI safety researchers.
**How this reduces P(doom):** Maybe there are extremely promising researchers (e.g., people like Paul Christiano and Eliezer Yudkowsky) out in the world who don’t know about AI alignment or don’t know how to get involved. One global talent search program could find them. Alternatively, maybe we need 1000 full-time AI safety researchers who are 1-3 tiers below “alignment geniuses”. A separate global talent search program could find them.
**Imaginary example:** Crossover between the [Atlas Fellowship](https://www.atlasfellowship.org/), old [CFAR](https://www.rationality.org/), and [MIRI](https://intelligence.org/). I imagine an organization that offers contests, workshops, and research fellowships in order to attract talented people around the world.
**Skills needed:** Strong models of community-building, strong understanding of AI safety concepts, really good ways of evaluating who is promising, good models of downside risks when conducting broad outreach
*Olivia Jimenez and I are currently considering working on this. Please feel free to reach out if you have interest or advice.*
**Training Program for AI Alignment researchers**
-------------------------------------------------
**Purpose:** Provide excellent training, support, internships, and mentorship for junior AI alignment researchers.
**How this reduces P(doom):** Maybe there are people who would become extremely promising researchers if they were provided sufficient support and mentorship. This program mentors them.
**Imaginary example:** Something like a big version of [SERI-Mats](https://www.serimats.org/) with a strong emphasis on workshops/activities that help people develop strong inside views & strong research taste. (My impression is that SERI-Mats could become this one day, but I’d also be excited to see more programs “compete” with SERI-Mats).
**Skills needed:** Relationships with AI safety researchers, strong models of mentors, strong ability to attract and assess applicants, insight into how to pair mentors with mentees, good models of AI safety, good models of how to create organizations with epistemically rigorous cultures, good models of downside risks when conducting broad outreach.
**Research Infrastructure & Coordination for AI alignment**
-----------------------------------------------------------
**Purpose:** Provide excellent support for AI alignment researchers in major EA Hubs.
**Imaginary example:** Something like a big version of [Lightcone Infrastructure](https://www.lightconeinfrastructure.com/) that runs something like [Bell Labs](https://en.wikipedia.org/wiki/Bell_Labs), regularly hosts high-quality events/workshops for AI alignment researchers, or accelerates research progress through [alignment newsletters](https://rohinshah.com/alignment-newsletter/), podcasts, and debates (my impression is that Lightcone or Constellation could become this one day, but I’d be excited to see people try parts of this on their own).
**Skills needed:** Strong relationships with AI safety researchers, strong understanding of the AI safety community and its needs, and strong understanding of AI safety concepts. Very high context would be required to run a space; medium context would be required to perform the other projects.
*I am currently considering starting an AI alignment podcast or newsletter. Please feel free to reach out if you have interest or advice*.
**Superconnecting: Active Grantmaking + Project Incubation**
------------------------------------------------------------
**Purpose:** Identify highly promising people who are already part of the EA community and get them funding/connections/mentorship to do AIS research or launch important/ambitious projects.
**How this reduces P(doom):** Maybe there are people who would become extremely promising researchers or ambitious generalists *who are already part of the EA community* but haven’t yet received the support, encouragement, or mentorship required to reach their potential.
**Imaginary example:** Crossover between the [FTX Future Fund’s regranting program](https://ftxfuturefund.org/announcing-our-regranting-program/), a longtermist incubator, and CEA’s active stewardship vision. I envision a group of “superconnectors” who essentially serve as talent scouts for the EA community. They go to EA globals and run retreats/workshops for new EAs, as well as highly-skilled EAs who aren’t currently doing highly impactful work. They provide grants for people (or encourage people to apply for funding) to skill-up in AI safety or launch ambitious projects.
**Skills needed:** Strong models of community-building, large network or willingness to develop a large network, strong models of how to identify which people and projects are most promising, strong people skills/people judgment.
**Targeted Outreach to Experienced Researchers**
------------------------------------------------
**Purpose:** Identify highly promising researchers in academia and industry, engage them with high-quality AI safety content, and support those who decide to shift their careers/research toward technical AIS.
**How this reduces P(doom):** Maybe there are extremely talented researchers who *can already be identified* based on their contributions in fields related to AI alignment (e.g., math, decision theory, probability theory, CS, philosophy) and/or their contributions to messy and pre-paradigmatic fields of research.
**Imaginary example:** An organization that systematically reads research in relevant fields, identifies promising researchers, and designs targeted outreach strategies to engage these researchers with high-quality sources in AI alignment research. The [Center for AI Safety](https://safe.ai/)and the [AI Safety Field Building Hub](https://forum.effectivealtruism.org/posts/ozm4SpiChfAAAGnw5/announcing-the-ai-safety-field-building-hub-a-new-effort-to#Projects_I_m_currently_prioritizing__will_change_over_time_) may do some of this, though they’re relatively new, and I’d be excited for more people to support them or compete with them.
**Skills needed:** Strong understanding of how to communicate with researchers, strong models of potential downside risks, strong understanding of AI safety concepts, good models of academia and “the outside world”, good people skills.
*Note that people considering this are strongly encouraged to reach out to community-builders and AI safety researchers before conducting outreach to experienced researchers.*
*People interested in this may also wish to read the*[*Pragmatic AI Safety Sequence*](https://www.lesswrong.com/s/FaEBwhhe3otzYKGQt) *and should familiarize themselves with potential risks associated with outreach to established researchers. Note that people disagree about how to weigh upside potential against downside risks, and “thinking for yourself” would be especially important here.*
**Understanding AI trends and AI safety outreach in China**
-----------------------------------------------------------
**Purpose:** Understand the AI scene in China, conduct research about if/how AIS outreach should be conducted in China, deconfuse EA about AIS in China, and potentially pilot AIS outreach efforts in China.
How this reduces P(doom): Maybe there are effective ways to reach out to talented people in China in ways that sufficiently mitigate downside risks. My current impression is that China is one of the leaders in AI, and it seems plausible that China would have a lot of highly talented people who could contribute to technical AIS research. However, I’ve heard that AIS outreach in China has been neglected because EA leaders don’t understand China and don’t understand how to evaluate different kinds of outreach strategies in China (hence the focus on research/deconfusion/careful pilots).
**Imaginary example:** A think tank-style research group that develops strong models of a specific topic.
**Skills needed:** Strong understanding of China, fluency in Mandarin, strong ability to weigh upside potential and downside risks.
**AIS Contests and Subproblems**
--------------------------------
**Purpose:** Identify (or develop) subproblems in alignment & turn these into highly-advertised contests.
**How this reduces P(doom):** Maybe there are subproblems in AI alignment that could be solved by researchers outside of the AI x-risk community. Alternatively, maybe contests are an effective way to get smart people interested in AI x-risk.
**Imaginary example:** An organization that gets really good at creating contests based on problems like [ELK](https://www.lesswrong.com/posts/QEYWkRoCn4fZxXQAY/prizes-for-elk-proposals) and [The Shutdown Problem](https://theturingprize.com/contest) (among other examples) & then advertising these contests heavily.
**Skills needed:** Ideally a strong understanding of AI safety and the ability to identify/write-up subproblems. But I think this could work if someone was working closely with AI safety researchers to select & present subproblems.
**Writing that explains AI safety to broader audiences**
--------------------------------------------------------
**Purpose:** Write extremely clear, engaging, and persuasive explanations of AI safety ideas.
**How this reduces P(doom):** There are not many introductory resources that clearly explain the importance of AI safety. Maybe there are people who would engage with AI safety if we had better introductory resources.
**Imaginary example:** A crossover between Nick Bostrom, Will MacAskill, Holden Karnofsky, and Eliezer Yudkowsky. A book or blog that is as rigorous as Bostrom’s writing (Superintelligence), as popular as Will’s writing (NYT bestseller with media attention), as clear as Holden’s writing (Cold Takes), and as explicit about x-risk as Yudkowsky’s writing (e.g., List of Lethalities)
**Skills needed:** Ideally a strong understanding of AI safety, but I think writing ability is probably the more important skill. In theory, someone with exceptional writing ability could work closely with AI safety researchers to select the most important topics/concepts and ensure that the descriptions/explanations are accurate. Also, strong models of potential downside risks of broad outreach.
**Other projects I am excited about (though not as excited)**
-------------------------------------------------------------
* **Operations org:** Something that helps train aligned/competent EAs to be really good at operations. My rough sense is that many projects are bottlenecked by ops capacity. Note that sometimes people think “ops” just means stuff like “cleaning” and “making sure food arrives on time” and “doing boring stuff.” I think the bigger bottlenecks are in things like “having such a strong understanding of the mission that you know which tasks to prioritize”, “noticing what the major bottlenecks are”, and “having enough context to consistently do ops tasks that amplify the organization.”
* **EA Academy:** Take a bunch of promising young/junior EAs and turn them into awesome ambitious generalists. Something that helps people skill-up in AIS, management, community-building, applied rationality, and other useful stuff. Sort of like a crossover between Icecone (the winter-break retreat that Lightcone Infrastructure organized) and CFAR with more of an emphasis on long-term career plans.
* **Amplification Org:** Figure out how to amplify the Most Impactful People™. Help them find therapists, PAs, nutritionists, friends, etc. Solve problems that come up in their lives. Save them time and make them more productive. Figure out how to give Eliezer Yudkowsky 2 extra productive hours each week or how to make Paul Christiano 1.01-1.5X more effective.
*I am grateful to Olivia Jimenez, Thomas Larsen, Miranda Zhang, and Joshua Clymer for feedback*. |
162d9e21-2abd-4e01-a964-89c9f1270015 | trentmkelly/LessWrong-43k | LessWrong | Two Weeks Without Sweets
I recently tried giving up sweets for two weeks. In early December I attended a conference, which meant a break from my normal routine. After a few days I realized this was the longest I'd gone without eating any sweets in 2-3 decades. After getting home I decided to go a bit longer to see if anything interesting happened. Would my focus be better? Better sleep? Would I feel healthier? Would I stop missing them?
While I started by accident, if I was going to keep doing this I needed a system: what counts as a sweet? I decided to use the same system we use with the kids: no desserts or other things that are about being sweet. The sweetest thing I was still eating was probably bran flakes for breakfast, at 6g per serving. [1]
I did pretty quickly stop feeling a drive to eat sweet things. Which was kind of neat: normally I do feel compelled to eat something sweet after lunch and again after dinner. But I didn't feel better: the main change I noticed was that I had less energy in the afternoons and maybe made more mistakes at work. I also think I was a mildly worse Go player, though any comparisons here are going to be unreliable since I'm just playing one other person (my dad) over and over.
My main sweet consumption is ganache (a couple spoonfuls from a jar), usually after lunch or dinner. Perhaps the fat is helpful here? Or maybe I'd become dependent on the caffeine in the chocolate? Probably not, though: I'm guessing the amount I was eating came to ~15mg of caffeine, so only 8% to 15% of a typical coffee serving. Claude guessed this was too low to be distinguishable from placebo, but I haven't looked in the literature to verify.
When I restarted sweets I noticed pretty quickly that I felt better in the afternoon, my Go playing was better, and also that I was sleeping slightly better. [2] I didn't notice any downsides. I suspect some of this pattern is that most sweets I eat (primarily ganache, followed by other products high in cream) have a pretty high ratio of |
10cff050-f787-4108-ac8d-158e50b18467 | trentmkelly/LessWrong-43k | LessWrong | On Downvotes, Cultural Fit, and Why I Won’t Be Posting Again
By A. Nobody
When I first posted on LessWrong, I expected some pushback. That’s normal. If you’re arguing that AGI will lead to human extinction and that capitalism makes this outcome inevitable, you’re going to meet resistance. But what I didn’t expect -and what ultimately led me to write this - is the way that resistance has manifested.
From the very beginning, my essays were met with immediate hostility, not on the basis of their logic or premises, but because of vague accusations of them being “political.” This came directly from site admins. And crucially, this wasn’t after reading the content. It was before. The mere idea that someone might be drawing a line from capitalism to extinction was enough to trigger rejection - not intellectual rebuttal, just rejection.
My main essay - arguably the core of the entire argument I’m developing - has been heavily downvoted. Not because it was proven wrong, or because someone pointed out a fatal flaw. But because people didn’t like that the argument existed. There has still not been a single substantive refutation of any of my key premises. Not one. The votes tell you it’s nonsense, but no one is able to explain why.
This isn’t a community failing to find holes in the logic. It’s a community refusing to engage with it at all.
And this mirrors what I’ve seen more broadly. The resistance I’ve received from academia and the AI safety community has been no better. I’ve had emails ignored, responses that amount to “this didn’t come from the right person,” and the occasional reply like this one, from a very prominent member of AI safety:
> “Without reading the paper, and just going on your brief description…”
That’s the level of seriousness these ideas are treated with.
Imagine for a moment that an amateur astronomer spots an asteroid on a trajectory to wipe out humanity. He doesn’t have a PhD. He’s not affiliated with NASA. But the evidence is there. And when he contacts the people whose job it is to monitor the ski |
85d5a153-fd00-4e03-8eb5-3335c7e9eae8 | StampyAI/alignment-research-dataset/blogs | Blogs | formal alignment: what it is, and some proposals
formal alignment: what it is, and some proposals
------------------------------------------------
what i call "formal alignment" is an approach to solving [AI alignment](ai-doom.html) that consists of:
* designing a formal goal, utility function, or decision process, which actually leads to desirable outcomes when pursued
* building an AI that pursues such a goal, utility function, or decision process
those two points correspond to formal alignment's notions of outer and inner alignment, respectively: determining what formal thing to align the AI to, and figuring out how to build something that is indeed aligned to it without running into inner misalignment issues.
for reasons why i think this is the least hopeless path to saving the world, see [my outlook on AI risk mitigation](outlook-ai-risk-mitigation.html). the core motivation for formal alignment, for me, is that a working solution is at least [*eventually aligned*](ai-alignment-curves.html): there is an objective answer to the question "will maximizing this with arbitrary capabilities produce desirable outcomes?" where the answer does not depend, at the limit, on *what* does the maximization. and the fact that such a formal thing is aligned in the limit makes it robust to [sharp left turns](https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization). what remains then is just "bridging the gap": getting [from eventual to continuous alignment](ai-alignment-curves.html), perhaps by ensuring the right [ordering of attained capabilities](ordering-capability-thresholds.html).
potential formal alignment ideas include:
* June Ku's [**metaethical AI**](https://www.lesswrong.com/posts/85vp2kgFZoycFqr5G/formal-metaethics-and-metasemantics-for-ai-alignment-1) (MAI): describing ethics directly, i think?
* plex's [**universal alignment test**](https://docs.google.com/document/d/1CMTS36MCbykYirTmC9Pdl2RBqLLPmrFU1sDcBNMvDCk/edit#) (UAT): throwing a weird simulation hypothesis at the AI which encourages it to align itself
* Vanessa Kosoy's [**PreDCA**](predca.html): making the AI implement its human predecessor's values (as i understand PreDCA is not *designed* to be used as a formal alignment goal, but it seems like it might be able to fill that role)
* my [**insulated goal-programs**](insulated-goal-program.html) (IGP): aligning the AI to the simple goal of running a program which we'd expect to eventually contains desirable worlds
* my [**question-answer couterfactual interval**](qaci.html) (QACI): use the AI's [past user](outer-alignment-past-user.html)'s counterfactual answers to various questions as its signal for aligned decisions (see also [my attempt at formalizing QACI](rough-sketch-formal-aligned-ai.html))
if there are formal alignment ideas i'm missing, please tell me about them and i'll add them here.
because these various proposals consist of putting together a formal mathematical expression, they rely on finding various [*true names*](https://www.lesswrong.com/posts/FWvzwCDRgcjb9sigb/why-agent-foundations-an-overly-abstract-explanation). for example: PreDCA tries to put together the true names for causality, agency, and the AI's predecessor; IGP requires the true name for computing a program forwards; QACI requires a true name for identifying pieces of data in causal worlds, and replacing them with counterfactual alternatives; UAT requires the true names for parent universe/simulation, control over resources, and comparing amounts of resources with those in the AI's future lightcone.
see also: [*clarifying formal alignment implementation*](clarifying-formal-alignment-implementation.html) |
da5bfbbf-7edf-4e09-b1ab-6aad483f509e | StampyAI/alignment-research-dataset/special_docs | Other | Showing versus doing: Teaching by demonstration
Introduction Is there a difference between doing something and showing someone else how to do something? Consider cooking a chicken. To cook one for dinner, you would do it in the most efficient way possible while avoiding contaminating other foods. But, what if you wanted to teach a completely naïve observer how to prepare poultry? In that case, you might take pains to emphasize certain aspects of the process. For example, by ensuring the observer sees you wash your hands thoroughly after handling the uncooked chicken, you signal that it is undesirable (and perhaps even dangerous) for other ingredients to come in contact with raw meat. More broadly, how could an agent show another agent how to do a task, and, in doing so, teach about its underlying reward structure? To model showing, we draw on psychological research on learning and teaching concepts by example. People are good at this. For instance, when a teacher signals their pedagogical intentions, children more frequently imitate actions and learn abstract functional representations [6, 7] . Recent work has formalized concept teaching as a form of recursive social inference, where a teacher chooses an example that best conveys a concept to a learner, who assumes that the teacher is choosing in this manner [14] . The key insight from these models is that helpful teachers do not merely select probable examples of a concept, but rather choose examples that best disambiguate a concept from other candidate concepts. This approach allows for more effective, and more efficient, teaching and learning of concepts from examples. We can extend these ideas to explain showing behavior. Although recent work has examined userassisted teaching [8] , identified legible motor behavior in human-machine coordination [9] , and analyzed reward coordination in game theoretic terms [11] , previous work has yet to successfully model how people naturally teach reward functions by demonstration. Moreover, in Inverse Reinforcement Learning (IRL), in which an observer attempts to infer the reward function that an expert (human or artificial) is maximizing, it is typically assumed that experts are only doing the task and not intentionally showing how to do the task. This raises two related questions: First, how does a person showing how to do a task differ from them just doing it? And second, are standard IRL algorithms able to benefit from human attempts to show how to do a task? In this paper, we investigate these questions. To do so, we formulate a computational model of showing that applies Bayesian models of teaching by example to the reward function learning setting. We contrast this pedagogical model with a model of doing: standard optimal planning in Markov Decision Processes. The pedagogical model predicts several systematic differences from the standard planning model, and we test whether human participants reproduce these distinctive patterns. For instance, the pedagogical model chooses paths to a goal that best disambiguates which goal is being pursued (Experiment 1). Similarly, when teaching feature-based reward functions, the model will prioritize trajectories that better signal the reward value of state features or even perform trajectories that would be inefficient for an agent simply doing the task (Experiment 2). Finally, to determine whether showing is indeed better than doing, we train a standard IRL algorithm with our model trajectories and human trajectories.
A Bayesian Model of Teaching by Demonstration Our model draws on two approaches: IRL [2] and Bayesian models of teaching by example [14] . The first of these, IRL and the related concept of inverse planning, have been used to model people's theory of mind, or the capacity to infer another agent's unobservable beliefs and/or desires through their observed behavior [5] . The second, Bayesian models of pedagogy, prescribe how a teacher should use examples to communicate a concept to an ideal learner. Our model of teaching by demonstration, called Pedagogical Inverse Reinforcement Learning, merges these two approaches together by treating a teacher's demonstration trajectories as communicative acts that signal the reward function that an observer should learn.
Learning from an Expert's Actions
Markov Decision Processes An agent that plans to maximize a reward function can be modeled as the solution to a Markov Decision Process (MDP). An MDP is defined by the tuple < S, A, T, R, γ >: a set of states in the world S; a set of actions for each state A(s); a transition function that maps states and actions to next states, T : S × A → S (in this work we assume all transitions are deterministic, but this can be generalized to probabilistic transitions); a reward function that maps states to scalar rewards, R : S → R; and a discount factor γ ∈ [0, 1]. Solutions to an MDP are stochastic policies that map states to distributions over actions, π : S → P (A(s)). Given a policy, we define the expected cumulative discounted reward, or value, V π (s), at each state associated with following that policy: V π (s) = E π ∞ k=0 γ k r t+k+1 | s t = s . (1) In particular, the optimal policy for an MDP yields the optimal value function, V \* , which is the value function that has the maximal value for every state (V \* (s) = max π V π (s), ∀s ∈ S). The optimal policy also defines an optimal state-action value function, Q \* (s, a) = E π [r t+1 + γV \* (s t+1 ) | s t = s, a t = a]. Q i = calculateActionValues(s, R i , T , γ) 4: π i = softmax(Q i , λ) 5: Π.add(π i ) 6: Calculate j = {j : s 1 ∈ s, length(j) ≤ l max , and ∃π ∈ Π s.t. (si,ai)∈j π(a i | s i ) > p min }. In the Reinforcement Learning setting, an agent takes actions in an MDP and receives rewards, which allow it to eventually learn the optimal policy [15] . We thus assume that an expert who knows the reward function and is doing a task selects an action a t in a state s t according to a Boltzmann policy, which is a standard soft-maximization of the action-values: P Doing (a t | s t , R) = exp{Q \* (s i , a i )/λ} a ∈A(si) exp{Q \* (s i , a )/λ} . (2) λ > 0 is an inverse temperature parameter (as λ → 0, the expert selects the optimal action with probability 1; as λ → ∞, the expert selects actions uniformly randomly). In the IRL setting, an observer sees a trajectory of an expert executing an optimal policy, j = {(s 1 , a 1 ), (s 2 , a 2 ), ..., (s k , a k )}, and infers the reward function R that the expert is maximizing. Given that an agent's policy is stationary and Markovian, the probability of the trajectory given a reward function is just the product of the individual action probabilities, P Doing (j | R) = t P Doing (a t | s t , R). From a Bayesian perspective [13] , the observer is computing a posterior probability over possible reward functions R: P Observing (R | j) = P Doing (j | R)P (R) R P Doing (j | R )P (R ) . (3) Here, we always assume that P (R) is uniform.
Bayesian Pedagogy IRL typically assumes that the demonstrator is executing the stochastic optimal policy for a reward function. But is this the best way to teach a reward function? Bayesian models of pedagogy and communicative intent have shown that choosing an example to teach a concept differs from simply sampling from that concept [14, 10] . These models all treat the teacher's choice of a datum, d, as maximizing the probability a learner will infer a target concept, h: P Teacher (d | h) = P Learner (h | d) α d P Learner (h | d ) α . (4) α is the teacher's softmax parameter. As α → 0, the teacher chooses uniformly randomly; as α → ∞, the teacher chooses d that maximally causes the learner to infer a target concept h; when α = 1, the teacher is "probability matching". The teaching distribution describes how examples can be effectively chosen to teach a concept. For instance, consider teaching the concept of "even numbers". The sets {2, 2, 2} and {2, 18, 202} are both examples of even numbers. Indeed, given finite options with replacement, they both have the same probability of being randomly chosen as sets of examples. But {2, 18, 202} is clearly better for helpful teaching since a naïve learner shown {2, 2, 2} would probably infer that "even numbers" means "the number 2". This illustrates an important aspect of successful teaching by example: that examples should not only be consistent with the concept being taught, but should also maximally disambiguate the concept being taught from other possible concepts.
Pedagogical Inverse Reinforcement Learning To define a model of teaching by demonstration, we treat the teacher's trajectories in a reinforcementlearning problem as a "communicative act" for the learner's benefit. Thus, an effective teacher will modify its demonstrations when showing and not simply doing a task. As in Equation 4 , we can define a teacher that selects trajectories that best convey the reward function: P Showing (j | R) = P Observing (R | j) α j P Observing (R | j ) α . (5) In other words, showing depends on a demonstrator's inferences about an observer's inferences about doing. This model provides quantitative and qualitative predictions for how agents will show and teach how to do a task given they know its true reward function. Since humans are the paradigm teachers and a potential source of expert knowledge for artificial agents, we tested how well our model describes human teaching. In Experiment 1, we had people teach simple goal-based reward functions in a discrete MDP. Even though in these cases entering a goal is already highly diagnostic, different paths of different lengths are better for showing, which is reflected in human behavior. In Experiment 2, people taught more complex feature-based reward functions by demonstration. In both studies, people's behavior matched the qualitative predictions of our models. 3 Experiment 1: Teaching Goal-based Reward Functions Consider a grid with three possible terminal goals as shown in Figure 1 . If an agent's goal is &, it could take a number of routes. For instance, it could move all the way right and then move upwards towards the & (right-then-up) or first move upwards and then towards the right (up-then-right). But, what if the agent is not just doing the task, but also attempting to show it to an observer trying to learn the goal location? When the goal is &, our pedagogical model predicts that up-then-right is the more probable trajectory because it is more disambiguating. Up-then-right better indicates that the intended goal is & than right-then-up because right-then-up has more actions consistent with the goal being #. We have included an analytic proof of why this is the case for a simpler setting in the supplementary materials. Additionally, our pedagogical model makes the prediction that when trajectory length costs are negligible, agents will engage in repetitive, inefficient behaviors that gesture towards one goal location over others. This "looping" behavior results when an agent can return to a state with an action that has high signaling value by taking actions that have a low signaling "cost" (i.e. they do not signal something other than the true goal). Figure 1d shows an example of such a looping trajectory. In Experiment 1, we tested whether people's showing behavior reflected the pedagogical model when reward functions are goal-based. If so, this would indicate that people choose the disambiguating path to a goal when showing.
Experimental Design Sixty Amazon Mechanical Turk participants performed the task in Figure 1 . One was excluded due to missing data. All participants completed a learning block in which they had to find the reward location without being told. Afterwards, they were either placed in a Do condition or a Show condition. Participants in Do were told they would win a bonus based on the number of rewards (correct goals) they reached and were shown the text, "The reward is at location X", where X was one of the three symbols %, #, or &. Those in Show were told they would win a bonus based on how well a randomly matched partner who was shown their responses (and did not know the location of the reward) did on the task. On each round of Show, participants were shown text saying "Show your partner that the reward is at location X". All participants were given the same sequence of trials in which the reward locations were <%, &, #, &, %, #, %, #, &>.
Results As predicted, Show participants tended to choose paths that disambiguated their goal as compared to Do participants. We coded the number of responses on & and % trials that were "showing" trajectories based on how they entered the goal (i.e. out of 3 for each goal). On & trials, entering from the left, and on % trials, entering from above were coded as "showing". We ran a 2x2 ANOVA with Show vs Do as a between-subjects factor and goal (% vs &) as a repeated measure. There was a main effect of condition (F (1, 57) = 16.17, p < .001; Show: M = 1.82, S.E. 0.17; Do: M = 1.05, S.E. 0.17 The model does not predict any difference between conditions for the # (lower right) goal. However, a visual analysis suggested that more participants took a "swerving" path to reach #. This observation was confirmed by looking at trials where # was the goal and comparing the number of swerving trials, which was defined as making more than one change in direction (Show: M = 0.83, Do: M = 0.26; two-sided t-test: t(44.2) = 2.18, p = 0.03). Although not predicted by the model, participants may swerve to better signal their intention to move 'directly' towards the goal.
Discussion Reaching a goal is sufficient to indicate its location, but participants still chose paths that better disambiguated their intended goal. Overall, these results indicate that people are sensitive to the distinction between doing and showing, consistent with our computational framework.
Experiment 2: Teaching Feature-based Reward Functions Experiment 1 showed that people choose disambiguating plans even when entering the goal makes this seemingly unnecessary. However, one might expect richer showing behavior when teaching more complex reward functions. Thus, for Experiment 2, we developed a paradigm in which showing how to do a task, as opposed to merely doing a task, makes a difference for how well the underlying reward function is learned. In particular, we focused on teaching feature-based reward functions that allow an agent to generalize what it has learned in one situation to a new situation. People often use feature-based representations for generalization [3] , and feature-based reward functions have been used extensively in reinforcement learning (e.g. [1] ). We used a colored-tile grid task shown in Figure 2 to study teaching feature-based reward functions. White tiles are always "safe" (reward of 0), while yellow tiles are always terminal states that reward 10 points. The remaining 3 tile types-orange, purple, and cyan-are each either "safe" or "dangerous" (reward of −2). The rewards associated with the three tile types are independent, and nothing about the tiles themselves signal that they are safe or dangerous. A standard planning algorithm will reach the terminal state in the most efficient and optimal manner. Our pedagogical model, however, predicts that an agent who is showing the task will engage in specific behaviors that best disambiguate the true reward function. For instance, the pedagogical model is more likely to take a roundabout path that leads through all the safe tile types, choose to remain on a safe colored tile rather than go on the white tiles, or even loop repeatedly between multiple safe tile-types. All of these types of behaviors send strong signals to the learner about which tiles are safe as well as which tiles are dangerous.
Experimental Design Sixty participants did a feature-based reward teaching task; two were excluded due to missing data. In the first phase, all participants were given a learning-applying task. In the learning rounds, they interacted with the grid shown in Figure 2 while receiving feedback on which tiles won or lost points. Safe tiles were worth 0 points, dangerous tiles were worth -2 points, and the terminal goal tile was worth 5 points. They also won an additional 5 points for each round completed for a total of 10 points. Each point was worth 2 cents of bonus. After each learning round, an applying round occurred in which they applied what they just learned about the tiles without receiving feedback in a new grid configuration. They all played 8 pairs of learning and applying rounds corresponding to the 8 possible assignments of "safe" and "dangerous" to the 3 tile types, and order was randomized between participants. As in Experiment 1, participants were then split into Do or Show conditions with no feedback. Do participants were told which colors were safe and won points for performing the task. Show participants still won points and were told which types were safe. They were also told that their behavior would be shown to another person who would apply what they learned from watching the participant's behavior to a separate grid. The points won would be added to the demonstrator's bonus.
Results Responses matched model predictions. Do participants simply took efficient routes, whereas Show participants took paths that signaled tile reward values. In particular, Show participants took paths that led through multiple safe tile types, remained on safe colored tiles when safe non-colored tiles were available, and looped at the boundaries of differently colored safe tiles.
Model-based Analysis To determine how well the two models predicted human behaviors globally, we fit separate models for each reward function and condition combination. We found parameters that had the highest median likelihood out of the set of participant trajectories in a given reward function-condition combination. Since some participants used extremely large trajectories (e.g. >25 steps) and we wanted to include an analysis of all the data, we calculated best-fitting state-action policies. For the standard-planner, it is straightforward to calculate a Boltzmann policy for a reward function given λ. For the pedagogical model, we first need to specify an initial model of doing and distribution over a finite set of trajectories. We determine this initial set of trajectories and their probabilities using three parameters: λ, the softmax parameter for a hypothetical "doing" agent that the model assumes the learner believes it is observing; l max , the maximum trajectory length; and p min , the minimum probability for a trajectory under the hypothetical doing agent. The pedagogical model then uses an α parameter that determines the degree to which the teacher is maximizing. State-action probabilities are calculated from a distribution over trajectories using the equation P (a | s, R) = j P (a | s, j)P (j | R), where P (a | s, j) = |{(s,a):s=st,a=at∀(st,at)∈j}| |{(s,a):s=st∀(st,at)∈j}| . We fit parameter values that produced the maximum median likelihood for each model for each reward function and condition combination. These parameters are reported in the supplementary materials. The normalized median fit for each of these models is plotted in Figure 3 . As shown in the figure, the standard planning model better captures behavior in the Do condition, while the pedagogical model better captures behavior in the Show condition. Importantly, even when the standard planning model could have a high λ and behave more randomly, the pedagogical model better fits the Show condition. This indicates that showing is not simply random behavior.
Behavioral Analyses We additionally analyzed specific behavioral differences between the Do and Show conditions predicted by the models. When showing a task, people visit a greater variety of safe tiles, visit tile types that the learner has uncertainty about (i.e. the colored tiles), and more frequently revisit states or "loop" in a manner that leads to better signaling. We found that all three of these behaviors were more likely to occur in the Show condition than in the Do condition. To measure the variety of tiles visited, we calculated the entropy of the frequency distribution over colored-tile visits by round by participant. Average entropy was higher for Show (Show: M = 0.50, SE = 0.03; Do: M = 0.39, SE = 0.03; two-sided t-test: t(54.9) = −3.27, p < 0.01). When analyzing time spent on colored as opposed to un-colored tiles, we calculated the proportion of visits to colored tiles after the first colored tile had been visited. Again, this measure was higher for Show (Show: M = 0.87, SE = 0.01; Do: M = 0.82, SE = 0.01; two-sided t-test: t(55.6) = −3.14, p < .01). Finally, we calculated the number of times states were revisited in the two conditions-an indicator of "looping"-and found that participants revisited states more in Show compared to Do (Show: M = 1.38, SE = 0.22; Do: M = 0.10, SE = 0.03; two-sided t-test: t(28.3) = −2.82, p < .01). There was no difference between conditions in the total rewards won (two-sided t-test: t(46.2) = .026, p = 0.80).
Teaching Maximum-Likelihood IRL One reason to investigate showing is its potential for training artificial agents. Our pedagogical model makes assumptions about the learner, but it may be that pedagogical trajectories are better even for training off-the-shelf IRL algorithms. For instance, Maximum Likelihood IRL (MLIRL) is a state-of-the-art IRL algorithm for inferring feature-based reward functions [4, 12] . Importantly, unlike the discrete reward function space our showing model assumes, MLIRL estimates the maximum likelihood reward function over a space of continuous feature weights using gradient ascent. To test this, we input human and model trajectories into MLIRL. We constrained non-goal feature weights to be non-positive. Overall, the algorithm was able to learn the true reward function better from showing than doing trajectories produced by either the models or participants (Figure 2 ).
Discussion When learning a feature-based reward function from demonstration, it matters if the demonstrator is showing or doing. In this experiment, we showed that our model of pedagogical reasoning over trajectories captures how people show how to do a task. When showing as opposed to simply doing, demonstrators are more likely to visit a variety of states to show that they are safe, stay on otherwise ambiguously safe tiles, and also engage in "looping" behavior to signal information about the tiles. Moreover, this type of teaching is even better at training standard IRL algorithms like MLIRL.
General Discussion We have presented a model of showing as Bayesian teaching. Our model makes accurate quantitative and qualitative predictions about human showing behavior, as demonstrated in two experiments. Experiment 1 showed that people modify their behavior to signal information about goals, while Experiment 2 investigated how people teach feature-based reward functions. Finally, we showed that even standard IRL algorithms benefit from showing as opposed to merely doing. This provides a basis for future study into intentional teaching by demonstration. Future research must explore showing in settings with even richer state features and whether more savvy observers can leverage a showing agent's pedagogical intent for even better learning. 7 : 7 Construct hypothetical doing probability distribution P Doing (j | R) as an N x M array. 8: P Observing (R | j) = PDoing(j|R)P (R) R PDoing(j|R )P (R ) 9: P Showing (j | R) = PObserving(R|j) α j PObserving(R|j ) α 10: return P Showing (j | R) 2.1.2 Inverse Reinforcement Learning (IRL)
Figure 1 : 1 Figure 1: Experiment 1: Model predictions and participant trajectories for 3 trials when the goal is (a) &, (b) %, and (c) #. Model trajectories are the two with the highest probability (λ = 2, α = 1.0, p min = 10 −6 , l max = 4). Yellow numbers are counts of trajectories with the labeled tile as the penultimate state. (d) An example of looping behavior predicted by the model when % is the goal.
) as well as a main effect of goal (F (1, 57) = 4.77, p < .05; %-goal: M = 1.73, S.E. = 0.18; &-goal: M = 1.15, S.E. = 0.16). There was no interaction (F (1, 57) = 0.98, p = 0.32).
Figure 2 : 2 Figure 2: Experiment 2 results. (a) Column labels are reward function codes. They refer to which tiles were safe (o) and which were dangerous (x) with the ordering . Row 1: Underlying reward functions that participants either did or showed; Row 2: Do participant trajectories with visible tile colors; Row 3: Show participant trajectories; Row 4: Mean reward function learned from Do trajectories by Maximum-Likelihood Inverse Reinforcement Learning (MLIRL) [4, 12]; Row 5: Mean reward function learned from Show trajectories by MLIRL. (b) Mean distance between learned and true reward function weights for human-trained and model-trained MLIRL. For the models, MLIRL results for the top two ranked demonstration trajectories are shown.
Figure 3 : 3 Figure 3: Experiment 2 normalized median model fits.
Algorithm 1 Pedagogical Trajectory AlgorithmRequire: starting states s, reward functions {R 1 , R 2 , ..., R N }, transition function T , maximum showing trajectory depth l max , minimum hypothetical doing probability p min , teacher maximization parameter α, discount factor γ. 1: Π ← ∅ 2: for i = 1 to N do 3: |
1421279f-ec08-47da-8da4-3e7a1b6f2b7c | trentmkelly/LessWrong-43k | LessWrong | City of Lights
Sequence index: Living Luminously
Previously in sequence: Highlights and Shadows
Next in Sequence: Lampshading
Pretending to be multiple agents is a useful way to represent your psychology and uncover hidden complexities.
You may find your understanding of this post significantly improved if you read the sixth story from Seven Shiny Stories.
When grappling with the complex web of traits and patterns that is you, you are reasonably likely to find yourself less than completely uniform. You might have several competing perspectives, possess the ability to code-switch between different styles of thought, or even believe outright contradictions. It's bound to make it harder to think about yourself when you find this kind of convolution.
Unfortunately, we don't have the vocabulary or even the mental architecture to easily think of or describe ourselves (nor other people) as containing such multitudes. The closest we come in typical conversation more resembles descriptions of superficial, vague ambivalence ("I'm sorta happy about it, but kind of sad at the same time! Weird!") than the sort of deep-level muddle and conflict that can occupy a brain. The models of the human psyche that have come closest to approximating this mess are what I call "multi-agent models". (Note: I have no idea how what I am about to describe interacts with actual psychiatric conditions involving multiple personalities, voices in one's head, or other potentially similar-sounding phenomena. I describe multi-agent models as employed by psychiatrically singular persons.)
Multi-agent models have been around for a long time: in Plato's Republic, he talks about appetite (itself imperfectly self-consistent), spirit, and reason, forming a tripartite soul. He discusses their functions as though each has its own agency and could perceive, desire, plan, and act given the chance (plus the possibility of one forcing down the other two to rule the soul unopposed). Not too far off in structure is th |
71348ec8-02e1-4f5b-aad5-285e2ff251ae | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington, D.C.: To-Do List Hacking
Discussion article for the meetup : Washington, D.C.: To-Do List Hacking
WHEN: 16 November 2014 03:00:00PM (-0500)
WHERE: National Portrait Gallery
We will be meeting in the Kogod Courtyard of the National Portrait Gallery (8th and F Sts or 8th and G Sts NW, go straight past the information desk from either entrance) to talk about making to-do lists better. As usual, the period from 3:00 to 3:30 p.m. will be reserved for congregating, with discussion to begin at 3:30 p.m.
The structured part of the meeting will begin with discussing to-do lists as a practical tool to be used profitably in one's daily life. We want the conversation to generate productive advice for all involved; questions, comments, objections, tangents, &c. are welcome.
(As always, attendees are free to arrive late, leave early, and hold side conversations on unrelated subjects as they see fit. We only ask that you be considerate.)
Upcoming Meetups:
* Nov. 23: Fun & Games (bring games, play games, converse, socialize, or any combination thereof)
* Nov. 30: Thanksgiving break - no meetup planned
* Dec. 7: TBA (to be summarized)
Discussion article for the meetup : Washington, D.C.: To-Do List Hacking |
89da74f3-5813-4201-a2d6-43062c794045 | StampyAI/alignment-research-dataset/eaforum | Effective Altruism Forum | Announcing Manifund Regrants
Manifund is launching a new regranting program! We will allocate ~$2 million over the next six months based on the recommendations of our regrantors. Grantees can [apply for funding through our site](https://manifund.org/create); we’re also looking for additional regrantors and donors to join.
What is regranting?
-------------------
Regranting is a funding model where a donor delegates grantmaking budgets to different individuals known as “regrantors”. Regrantors are then empowered to make grant decisions based on the objectives of the original donor.
This model was pioneered by the FTX Future Fund; in [a 2022 retro](https://forum.effectivealtruism.org/posts/paMYXYFYbbjpdjgbt/future-fund-june-2022-update#Expectations_vs__reality) they considered regranting to be very promising at finding new projects and people to fund. More recently, Will MacAskill [cited regranting](https://forum.effectivealtruism.org/posts/DdSszj5NXk45MhQoq/decision-making-and-decentralisation-in-ea) as one way to diversify EA funding.
What is Manifund?
-----------------
[Manifund](https://manifund.org/) is the charitable arm of [Manifold Markets](https://manifold.markets/). Some of our past work:
* Impact certificates, with [Astral Codex Ten](https://manifund.org/rounds/acx-mini-grants) and the [OpenPhil AI Worldviews Contest](https://manifund.org/rounds/ai-worldviews)
* Forecasting tournaments, with [Charity Entrepreneurship](https://manifold.markets/group/ce-2023-top-ideas) and [Clearer Thinking](https://manifold.markets/group/clearer-thinking-regrants)
* [Donating prediction market winnings to charity](https://manifold.markets/charity), funded by the Future Fund
How does regranting on Manifund work?
-------------------------------------
[Our website](https://manifund.org/rounds/regrants) makes the process simple, transparent, and fast:
1. A donor contributes money to Manifold for Charity, our registered 501c3 nonprofit
2. The donor then allocates the money between regrantors of their choice. They can increase budgets for regrantors doing a good job, or pick out new regrantors who share the donor’s values.
3. Regrantors choose which opportunities (eg existing charities, new projects, or individuals) to spend their budgets on, writing up an explanation for each grant made.
* We expect most regrants to start with a conversation between the recipient and the regrantor, and after that, for the process to take less than two weeks.
* Alternatively, people looking for funding can post their project on the Manifund site. Donors and regrantors can then decide whether to fund it, similar to Kickstarter.
4. The Manifund team screens the grant to make sure it is legitimate, legal, and aligned with our mission. If so, we approve the grant, which sends money to the recipient’s Manifund account.
5. The recipient withdraws money from their Manifund account to be used for their project.
### Differences from the Future Fund’s regranting program
* **Anyone can donate to regrantors.** Part of what inspired us to start this program is how hard it is to figure out where to give as a longtermist donor—there’s no GiveWell, no ACE, just a mass of opaque, hard-to-evaluate research orgs. Manifund’s regranting infrastructure lets individual donors outsource their giving decisions to people they trust, who may be more specialized and more qualified at grantmaking.
* **All grant information is public.** This includes the identity of the regrantor and grant recipient, the project description, the grant size, and the regrantor’s writeup. We strongly believe in transparency as it allows for meaningful public feedback, accountability of decisions, and establishment of regrantor track records.
+ Almost everything is done through our website. This lets us move faster, act transparently, set good defaults, and encourage discourse about the projects in comment sections.
+ We recognize that not all grants are suited for publishing; for now, we recommend sensitive grants apply to other donors (such as LTFF, SFF, OpenPhil).
* **We’re starting with less money.** The Future Fund ended up distributing ~$100m over the 6 months of their program; we currently have about ~$2m to distribute and are fundraising for more.
Round 1: Longtermist Regrants
-----------------------------
We’re launching with [a cohort of 14 regrantors](https://manifund.org/rounds/regrants?tab=regrants), each given a budget of $50k-$400k to direct to projects they believe will be the most impactful. We chose regrantors who are aligned with our values and prioritize mitigating global catastrophic risks, though ultimately regrantors can choose to give to projects under any cause area.
This round is backed by an anonymous donor’s contribution of $1.5 million, plus smaller grants from EA funders. Round 1 will end after this initial pool is spent, or after 6 months have passed.
Get involved with Manifund Regrants
-----------------------------------
### For grantees: list your project on [our site](https://manifund.org/create)
If you are working on a longtermist project and looking for funding, you can post the details on our site [here](https://manifund.org/create). Examples of projects we’ve funded:
* [$13k to Rachel Freedman](https://manifund.org/projects/medical-expenses-for-chai-phd-student), for medical expenses/PhD salary supplement
* [$25k to Joseph Bloom](https://manifund.org/projects/independent-researcher), for independent mech interp research
* [$2.5k to Vipul Naik](https://manifund.org/projects/donations-list-website-retroactive), for the Donations List Website (retroactive grant)
We’re interested in proposals for AI safety, AI governance, forecasting, biorisk, and EA meta; we expect to best fund individuals and orgs looking for $1k-$200k.
### For regrantors: apply for your own regrant budget
We’re accepting [applications](https://airtable.com/appOfJtzt8yUTBFcD/shrZW7S069EmghCSV) from people who want to join as regrantors! In some cases, we'll offer to sponsor regrantors and provide budgets, and in others we'll just offer to list regrantors so they can receive donations from other users that they can go on to donate.
### For large donors: designate your own regrantors
We’re interested in anyone who would like to direct $100k+ this year through a regranting program. If that is you, reach out to `austin@manifund.org` or [book a call](https://calendar.app.google/kiucxFQrrMf8P4Fh6)!
Why might you choose to donate via a regranting program?
* **You care about longtermism, but don’t know which projects need money.** Longtermist projects can be speculative, opaque, and nascent, making it harder for you to know where to direct their money. Regranting allows you to outsource these decisions to people who better understand the field.
* **You have specific regrantors whose judgement you trust.** Regranting surfaces opportunities that established EA grantmakers might otherwise miss, as regrantors can tap into their personal networks and fields of expertise. Regrantors can also initiate projects, by reaching out to grantees, launching prizes, and starting orgs.
* **You want to see your money move quickly.** Our regranting model requires less overhead than traditional grantmaking, as one person is responsible for the budget rather than a committee. This also allows for faster grant turnaround times, solving a key pain point for grantees. We think the world would be a better place if impactful projects could start a few weeks to months earlier.
* **You want to donate through a 501c3.** Manifund regrantors can give to other nonprofits, individuals, and for-profit orgs. If you operate a donor-advised fund or want the tax advantages of giving through a 501c3, we can facilitate that, so long as we vet that your regrantors make grants compatible with our charitable mission.
### For everyone: talk to us!
We welcome feedback of all kinds. Whether you’re a potential grantee, regrantor, or donor, we’d love to hear about your pain points with existing funding systems, and what kinds of projects you find exciting. Hop in [our Discord](https://discord.gg/ZGsDMWSA5Q) and come chat with us, or comment on specific projects [through our site](https://manifund.org/rounds/regrants)! |
cb7d6d29-8012-42d3-be13-43afc587c61d | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Simplicity priors with reflective oracles
.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
Previously, I described [a type of probabilistic oracle machines](/item?id=18) that I think is powerful enough to allow the implementation of a variant of [AIXI](http://en.wikipedia.org/wiki/AIXI) that can reason about worlds containing other instances of this variant of AIXI. In this post, I describe how to implement a simplicity prior (a variant of [Solomonoff induction](http://wiki.lesswrong.com/wiki/Solomonoff_induction)) in this framework; there are some technical issues which will also come up when implementing AIXI that are easier to talk about in the context of Solomonoff-like induction.
---
Solomonoff induction is a prior probability distribution over infinite bit strings, {0,1}ω. We want to define a similar prior using probabilistic oracle machines with [reflective oracles](/item?id=18) instead of Turing machines---and then "implement" that prior *as* a probabilistic oracle machine with that oracle.
The basic idea is simple. Say that a probabilistic oracle machine H is a *hypothesis* if, for every oracle O′:N×(Q∩[0,1])→[0,1], invoking H[O′] [almost surely](http://en.wikipedia.org/wiki/Almost_surely) outputs an infinite bit string. Fix some programming language for probabilistic oracle machines, and randomly choose a program implementing a hypothesis H, where the probability of each program of length ℓ is proportional to 2−ℓ. Then run H[O], where O is our actual, reflective oracle, to obtain a sample bit string. This concludes the definition of our prior.
This also suggests how we could implement this prior, as a probabilistic oracle machine S that samples bit strings with these probabilities; it seems like all we need is a way to test whether or not a given program implements a hypothesis. I'll tackle that problem later in this post, but first let me point out a different problem: simply *running* the hypothesis H that we've sampled in the first step is not an ideal way to implement the prior.
---
Suppose that we want to determine the probability that the output of S starts with 0111010. Well, here's an idea: why don't we use our reflective oracle?
To do so, we need to write a query Q which returns 1 if the output of S has the right prefix, 0 otherwise. Easy, or so it seems; run S until it has outputted the first seven bits, check whether they're 0111010, return 0 or 1 accordingly. Then we can call our oracle on (┌Q┐,p), for any p∈Q∩[0,1], to check whether the probability of the prefix 0111010 is greater or smaller than p.
But a requirement for a machine Q to be a query is that it Q[O′] halts with probability one, for *every* oracle O′. In the implementation of S given above, any way we might find to check whether an arbitrary program H is a hypothesis will certainly involve a use of the reflective oracle, which means that if we run S on an arbitrary oracle O′, we might end up choosing a "hypothesis" which loops forever without producing any output. Thus, the machine described in the previous paragraph *also* has a positive probability of looping forever when run on this bad oracle, which means that it isn't a query.
---
To fix this, let's change the definition of S in such a way as to turn it into a hypothesis (i.e., into a machine that almost surely outputs an infinite bitstring, no matter what oracle it's run on). Then the Q described above will be a query, and we can use this method to compute the probabilities of arbitrary finite prefixes.
To do so, we need to make sure that once we sampled an H, we definitely output an infinite bitstring, no matter what our oracle is and whether or not H is actually a hypothesis. Of course, *if* H is a hypothesis and we're running on a reflective oracle, then we want to output a sample from H[O] (with the correct distribution).
We can do this---with a simple trick. Let's start by letting Q be the machine which runs H until it outputs its first bit, and then returns that bit. If H is a hypothesis, then Q is a query.
So we call our oracle on the pair (┌Q┐,0.5), and throw a fair coin.
* If the coin comes up heads and the oracle says "false" (the probability of Q returning 1 is smaller than 0.5), we output a zero.
* If the coin lands heads and the oracle says "true", we output a one.
* If the coin lands tails and the oracle says "false", we call our oracle on (┌Q┐,0.25) and repeat the process; if the coin lands tails and the oracle says "true", we call the oracle on (┌Q┐,0.75) and repeat.
This way, if we're running on a reflective oracle, we output a bit with the same distribution as the first bit outputted by H; but no matter what oracle we're running on, we definitely output a bit with probability one (because in each step of the binary search described above, we have a 50% probability of producing output).
This technique generalizes beyond the particular query we were talking about above:
**Lemma** (Protecting queries)**.** There is a computable function f(┌Q┐) which takes the Gödel number of a probabilistic oracle machine and returns the Gödel number of a different probabilistic oracle machine, such that (i) f(┌Q┐) is a query, and (ii) if Q is also a query, and O is a reflective oracle, then f(┌Q┐)[O] has the same distribution as Q[O].
---
This gives us the *first* bit. Now we want to produce a second bit of output, whose probability distribution is the same as the distribution of the second bit of H---*conditional* on H having produced the first bit that we just produced.
Here's a simple way to do so. Suppose that the first bit was a 1. Let Q now be the following machine: Run H until it produces its first bit. If that bit is a 0, start over. If the bit is a 1, run it until it produces its second bit, and return that bit.
If H is a hypothesis which has a positive probability of producing a 1 as its first bit, then Q is a query. Assuming this is safe if we're running on a reflective oracle, since in this case the probabability that *we* choose 1 as our first bit is equal to the probability that H chooses 1 as its first bit.
Now we apply the above lemma about queries, with this new Q. This gives us an actual query, which we can just run since it is guaranteed to halt, and which has the correct distribution if we're running on a reflective oracle.
We can use exactly the same procedure for each of the later bits, leading to a lemma exactly analogous to the earlier one for queries:
**Lemma** (Protecting hypotheses)**.** There is a computable function f(┌H┐) which takes the Gödel number of a probabilistic oracle machine and returns the Gödel number of a different probabilistic oracle machine, such that (i) f(┌H┐) is a hypothesis, and (ii) if H is also a hypothesis, and O is a reflective oracle, then f(┌H┐)[O] has the same distribution as H[O].
---
Given this result, the main missing piece is how to test whether a given machine ┌H┐ is a hypothesis. We need this test to give the correct result if we're running on a reflective oracle, and we also need it to halt no matter which oracle we're running on---in other words, we need it to be a query.
It turns out that this is quite straight-forward to implement, given our definition of reflective oracles as always returning "false" when passed anything other than a query. Given the source code ┌H┐ of a machine that may or may not be a hypothesis, we first implement the following machine, M; M has the property that it is a query *if and only if H is a hypothesis*.
* M first chooses a natural number, n, according to any distribution that places positive probability on arbitrary large numbers.
* Then, it runs H until H has outputted n bits.
* If H halts before it has outputted n bits, then M loops forever. (If H goes into an infinite loop before it has outputted n bits, then M does of course also loop forever.)
* If H does in fact output n bits, then M outputs 1 and halts.
Our test is now to call our oracle on the pair (┌M┐,0). Since this is only a single invocation of the oracle, it always halts. Moreover, if we're running on a reflective oracle, then there are two cases:
* If H is a hypothesis, then M halts with probability one; whatever number M selects in the first step, H eventually outputs that many bits. In this case, M is a query that always outputs 1, implying that invoking the oracle on (┌M┐,0) will return "true".
* If H is *not* a hypothesis, there is a positive probability that it will halt or loop forever after outputting only some finite number of bits, and there is a positive probability that M will choose a number larger than this. Hence, there is a positive probability that M loops forever, which implies that it is not a query, and hence that invoking the oracle on (┌M┐,0) will return "false".
---
We now have a query to test whether something is a hypothesis, and way of safely running a potential hypothesis, which will not go into infinite loops even if we're not running on a reflective oracle. Are we done?
Unfortunately not quite; there's still an annoying problem remaining, and so far I only have a slightly kludgy solution to it. The problem is that we would like our prior to, with probability one, choose a ┌H┐ that actually *is* a hypothesis, and to do so it can't use the obvious method of trying random machines until finds one for which the query returns true.
Why not? Well, our query consists of a single oracle invocation---and if we're not running on a reflective oracle, all instances of that oracle invocation might return "false", in which case we would loop forever. And *that* would mean that our sampler S would still not be a hypothesis, whose distribution we can probe with a reflective oracle.
My kludge is simple: Select only a single machine ┌H┐ at random. If our test tells us that this *isn't* a hypothesis, then output an infinite stream of zeros. If our test tells us it *is* a hypothesis, then output a single 1 before executing the protected version of H. Then S is a hypothesis, and we can use the tools detailed at the beginning of this post to figure out its distribution; and if we can find the probability of arbitrary finite prefixes of S's output, we can also compute the *conditional* probability, given that the first bit is a 1. This gives us the probability distribution we were initially looking for.
There's probably something more elegant, but I haven't yet been able to come up with an alternative that's both elegant and simple. |
06dee20a-b07e-485d-b78e-7720d865d5de | trentmkelly/LessWrong-43k | LessWrong | Inositol Non-Results
Three months ago I suggested people consider inositol for treating combined vague mood issues and vague stomach issues. I knew a few people who’d really benefited from it, and when one talked about it on his popular Twitter account several more people popped up thanking him for the suggestion, because it fixed their lives too. But those reports didn’t come with a denominator, which made it hard to estimate the success rate; I was hoping mentioning it on my blog and doing a formal follow-up to capture the non-responders would give a more accurate number.
Unfortunately, I didn’t get enough people to do anything useful. I received 7 responses, of which 3 didn’t have digestive issues and thus weren’t really the target. The low response rate might be a consequence of giving the wrong link in the original follow-up post, or maybe it just wasn’t that interesting. I’m reporting the results anyway out of a sense of civic virtue.
Of those 4 remaining responses:
* 2 rated it exactly 5 out of 10 (neutral)
* 1 rated it as 6, which was not strong enough for them to try it a third time.
* 1 rated it as 3- not bad enough that they spontaneously noticed a problem, but they did detailed mood tracking and the linear regression was clearly bad.
That response rate isn’t really low enough to prove anything except that anything with a real effect can hurt you, and the value of detailed data. So for now we just have David’s estimate that 5% of people he inspired to take inositol benefited from it. |
91f5b50e-02b7-4cf6-b485-ea8c40999220 | LDJnr/LessWrong-Amplify-Instruct | LessWrong | "Category theory is the mathematics of math—specifically, it's a mathematical theory of mathematical structure. It turns out that every kind of mathematics you're likely to encounter in a normal university education is just another kind of category—group theory falls under the category of groups, topology falls under the category of topological spaces, etc.Specifically, category theory shows that all of the highly diverse mathematical structures that we know of can be broken down into nodes and arrows between nodes. Nodes and arrows define a category in a nutshell—plus something called composition, which basically says that if a bird can fly from Mexico to the USA to Canada, then the bird can also travel "directly" from Mexico to Canada in a way that is equal to the Mexico —> USA —> Canada path. (See "the right to use a name" section.)Breaking any and every mathematical structure down to nodes and arrows between nodes is a super-general, super-abstract way of studying mathematics. It's like trying to study Shakespeare by breaking down all of his sentences into nouns and verbs. Category theory has a reputation for being painfully abstract—in fact, it's been called "abstract nonsense" by one of its founders. Because of this, it's typically recommended that you have a few mathematical structures under your belt—algebra, groups, topology, etc.—before studying category theory so that you have specific examples to relate the abstractions to. (It would be tough to study verbs if you didn't know about things like running and jumping!)But while there's only so much to learn about Shakespeare by breaking "to be or not to be" into infinitives, conjunctions, and adverbs, it turns out that the super-general perspective of category theory is incredibly useful in concrete ways. In particular, it turns out that pretty much every cool idea in math is something called an adjoint functor—a special construction that can only be accessed through category theory. A lot of category theorists will tell you that adjoint functors are kind of the point of category theory. Adjoints, or adjunction, generalizes optimization itself.Then there is the Yoneda lemma, which is as deep as it is elegant and powerful. We will explore it in depth. (If this series works out.)You might be wondering what success category theory has found in applications to the sciences. How can you even apply something so general and abstract to our highly specific and concrete reality?Well, category theory is super general, so whenever you are studying super-general phenomena, it makes sense to think of category theory. What's a super-general phenomenon? For example, the laws of physics! They govern everything, presumably. If you're looking for fundamental rules that apply to everything from tiny particles to enormous planets and the complex living creatures in between, category theory immediately comes to mind. Then there is biology, which less super-general, unless there really are Martians hiding from our rovers, but organisms have to survive and reproduce under wildly diverse conditions—the planet Earth can throw a lot of stuff at you, from volcanoes to Ice Ages. On some level, organic life clearly has the ability to adapt to all of these conditions—and adapting the same basic thing to lots of different contexts with powerful results is basically what category theory is.Definitely the biggest applied success for category theory has been in programming. I'd encourage you to look up functional programming, lambda calculus, or just Google something like "programming category theory." It's fascinating, though I'm actually going to deemphasize the programming side of things if anything, as I don't want to distract from the fundamentals.So what is category theory? Nothing other than the formal generalization of everything. Why should you be interested in it? Because it gives you an incredible bird's-eye view of all of mathematics, and a particular perch, adjunction, that can't be found anywhere else.This series will be very slow paced relative to other introductions—I will not assume you know what sets and functions are, to give just one example. If you're comfortable with math or just want to plunge into things a little more, I strongly encourage you to look up the many fantastic introductions to category theory that already exist on the Internet for free as videos, textbooks, blog posts, and papers. This is a series meant for people who either have no exposure to mathematics beyond the high school level or actively want to avoid it! (I'll put it this way: if there was a "Baby Rudin" for category theory, this series would be aiming to be a "Fetal Rudin.")There's no schedule for these posts, which isn't ideal for learning, but that's just the reality of how this series is going to be made. Coming up is a sequence of posts on the most basic details of defining a category, with an emphasis on developing intuition at each step."
The above is a chunk of text from a post created by a user on the website LessWrong.
Can you break this down into the top 5 most important points for me.
1. Category Theory as a Mathematical Framework: Category theory is a branch of mathematics that deals with mathematical structures by breaking them down into nodes and arrows between nodes. It is a general and abstract way of studying mathematics, akin to analyzing language by breaking down sentences into nouns and verbs.
2. Composition in Category Theory: Composition is a fundamental concept in category theory. It states that if there is a path from one node to another through intermediate nodes, there is also a direct path between them that is equivalent to the path through the intermediates.
3. Adjoint Functors: Adjoint functors are a key concept in category theory, often seen as its main point. They are special constructions that generalize optimization and are only accessible through category theory.
4. Yoneda Lemma: The Yoneda lemma is another significant principle in category theory. It is recognized for its depth, elegance, and power, and it will be explored in depth in the series.
5. Applications of Category Theory: Category theory has found success in various applications, especially in programming, where it is used in functional programming and lambda calculus. It can also be applied to other general phenomena like the laws of physics and biology due to its ability to adapt the same basic thing to lots of different contexts.
Could you elaborate on how adjoint functors in category theory generalize optimization?
Adjoint functors in category theory are a pair of functors operating between two categories that stand in a particular relationship with each other. The concept of adjoint functors provides a way to move or translate concepts, problems, and structures from one category to another while preserving their essential properties.
The connection between adjoint functors and optimization lies in the way they capture a notion of 'best fit' or 'optimal solution' in a categorical setting. Specifically, given a functor F from category C to D and a functor G from D to C, G is said to be a right adjoint to F (and F a left adjoint to G) if for every object c in C and d in D, there is a natural isomorphism between the hom-sets Hom(F(c), d) in D and Hom(c, G(d)) in C.
This natural isomorphism can be seen as a way of optimizing the 'fit' between the objects and morphisms in the two categories. In a sense, it's a way of saying that applying F and then G gives you the 'best' or 'closest' object in C to your starting object, and vice versa. This is analogous to the way an optimization problem seeks the best solution according to some criterion or measure.
To give a concrete example, consider the categories of sets and topological spaces. The functor that assigns to each set its discrete topology is left adjoint to the functor that assigns to each topological space its underlying set. This is a way of saying that the discrete topology is the 'best' or 'most efficient' way of putting a topology on a set, in a sense that can be made precise using the language of category theory.
So, while adjoint functors may not directly generalize optimization in the sense of finding maxima or minima of functions, they do capture a related notion of optimality or 'best fit' in the context of category theory.
How do adjoint functors, in their role of capturing 'best fit' or 'optimal solution', influence the development of machine learning algorithms and data science models?
Adjoint functors' concept of 'best fit' or 'optimal solution' has profound implications for machine learning and data science. They provide a mathematical framework that allows for the translation of complex problems between different categories while preserving their essential properties, which can be particularly useful in developing algorithms and models.
1. Optimization Problems: The idea of 'best fit' is central to machine learning, where models are often trained by optimizing a loss function. Adjoint functors provide a way to generalize these optimization problems, potentially allowing for more efficient or effective solutions.
2. Feature Mapping: Adjoint functors can be used to map features from one category (or feature space) to another, preserving their relationships. This can be used to create more meaningful or useful representations of data, which is a key part of many machine learning algorithms.
3. Data Transformation: The process of transforming data from one form to another (such as from raw data to a structured format suitable for a machine learning algorithm) can be seen as a functor. Adjoint functors provide a way to understand and optimize these transformations.
4. Model Interpretability: The natural transformations that arise in the definition of adjoint functors can provide insights into the relationships between different models or algorithms. This can help in understanding and interpreting the behavior of complex machine learning models.
5. Generalization: The abstract nature of category theory and adjoint functors allows for the development of highly general algorithms and models. These can potentially be applied to a wide range of problems, improving the versatility and scalability of machine learning and data science solutions.
It's important to note that while these concepts have significant potential, their application in machine learning and data science is still a relatively new and developing field. The full extent of their usefulness and impact is yet to be seen. |
4e03106a-1ad1-4fde-8116-17d5be96ce27 | StampyAI/alignment-research-dataset/lesswrong | LessWrong | An ML interpretation of Shard Theory
*Crossposted from my* [*personal blog*](https://www.beren.io/2022-12-05-ML-Interpretation-Shard-Theory/)
**Epistemic Status:** *I have spent a fair bit of time reading the core Shard Theory posts and trying to understand it. I also have a background in RL as well as the computational neuroscience of action and decision-making. However, I may be misunderstanding or have missed crucial points. If so, please correct me!*
[Shard Theory](https://www.lesswrong.com/posts/xqkGmfikqapbJ2YMj/shard-theory-an-overview) has always seemed slightly esoteric and confusing to me — what are ‘shards’, why might we expect these to form in RL agents? When first reading the Shard Theory posts, there were two main sources of confusion for me. The first: why an agent optimising a reward function should not optimise for reward but instead just implement behaviours that have been rewarded in the past?
This distinction is now obvious to me. The distinction between [amortised vs direct](https://www.lesswrong.com/posts/S54HKhxQyttNLATKu/deconfusing-direct-vs-amortised-optimization) inference, and shards as cached behaviours falls directly out of amortized policy gradient algorithms (which Shard Theory uses as the prototypical case of RL [[1]](#fnzhtrpfo82u)). This idea has also been expanded in many [other](https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target) [posts](https://www.lesswrong.com/posts/TWorNr22hhYegE4RT/models-don-t-get-reward).
The second source of my confusion was the idea of shards themselves. Even given amortisation, why should behaviour splinter into specific ‘shards’? and why should the shards compete with one another? What would it even mean for ‘shards’ to compete or for there to be ‘shard coalitions’ in a neural network?
My best guess here is that Shard Theory is making several *empirical* claims about the formation of representations during training for large-scale (?) RL models. Specifically, from an ML lens, we can think of shards as *loosely-coupled relatively-independent subnetworks which implement specific behaviours*.
A concrete instantiation of Shard Theory's claim, therefore, appears to be that during training of the network, the tendency is for the optimiser to construct multiple relatively loosely coupled circuits which each implement some specific behaviour which has been rewarded in the past. In a forward pass through the networks, these circuits then get activated according to some degree of similarity between the current state and the states that have led to reward in the past. These circuits then ‘compete’ with one another to be the one to shape behaviour by being passed through some kind of normalising nonlinearity such as softmax. I am not entirely sure how ‘shard coalitions’ can occur on this view, but perhaps some kind of reciprocal positive feedback where the early parts of the circuit of shard *A* also provide positive activations to the circuit of shard B and hence they become co-active (which might eventually lead to the shards ‘merging’) [[2]](#fnqfdu1w7fl).
This is not the only way that processing has to happen in a policy network. The current conceptualisation of shards requires them to be in the ‘output space’ — i.e shards correspond to networks in favour of some series of actions being taken. However, the network could instead do a lot of processing in the input space. For instance, it could separate processing into two phases: 1.) Figure out what action to take by analysing the current state and comparing it to past rewarded states and then 2.) translate that abstract action into the real action space -- i.e. translate 'eat lollipop' into specific muscle movements. In this case, there wouldn’t be multiple shards forming around behaviours, but there could instead be ‘perceptual shards’ which each provide their own interpretations of the current state.
Another alternative is that all the circuits in the network are tightly coupled and cannot be meaningfully separated into distinct ‘shards’. Instead, each reward event subtly increases and decreases the probabilities of all options by modifying all aspects of the network. This is the ‘one-big-circuit’ perspective and may be correct. To summarize, it appears that Shard Theory claims that processing in the network is primarily done in output (behaviour) space and secondly that the internals of the network are relatively modular and consist of fairly separable circuits which implement and upweight specific behaviours.
These are empirical questions that can be answered! And indeed, if we succeed at interpretability even a small amount we should start to get some answers to these questions. Evidence from the current state of interpretability research is mixed. [Chris Olah’s work in CNNs](https://www.google.com/search?client=safari&rls=en&q=olah+circuits+distill&ie=UTF-8&oe=UTF-8), especially Inception V1 , suggests something closer to the ‘one-big-circuit’ view than separable shards. Specifically, in CNNs representations appear to be built up by hierarchical compositional circuits — i.e. you go from curve detectors to fur detectors to dog detectors — but that these circuits are all tightly intertwined with each other rather than forming relatively independent and modular circuits (although different branches of Inception V1 appear to be [modular and specialised for certain kinds of perceptual input](https://distill.pub/2020/circuits/branch-specialization/)). For instance, the features at a higher layer tend to depend on a large number of the features at lower layers. On the other hand, in transformer models, there appears to be more evidence for more independent circuits. For instance, we can uncover specific circuits for things like [induction](https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html) or [indirect-object-identification](https://openreview.net/forum?id=NpsVSN6o4ul). However, these must be interpreted with caution since we understand much less about the representations of transformer language models than Inception-V1. A-priori, both the much greater number of parameters in transformer models compared to CNNs, as well as the additive nature of residual nets vs multiplicative hierarchical nature of deep CNNs could potentially encourage the formation of more modular additive shard-like sub circuits. To my knowledge, we have almost zero studies of the internal processing of reasonably large scale policy gradient networks, which would be required to address these questions in practice. This (and interpretability in RL models in general) would be a great avenue for future interpretability and safety research.
As well as specific claims, shard theory also implicitly assumes some high level claims about likely AGI architectures. Specifically, it requires that AGI be built entirely (maybe only primarily) through an amortised model-free RL agent on a highly variegated reward function — i.e. rewards for pursuing many different kinds of objectives. To me this is a fairly safe bet, as this is approximately how biological intelligence operates and moreover that neuromorphic or brain-inspired AGI, as envisaged by DeepMind is likely to approximate this ideal. Other AGI paths do not follow this path. One example is an AIXI like super-planner, which does direct optimization and so won’t form shards or approximate value fragments barring any inner-alignment failures. Another example is some kind of recursive query wrapper around a general world model, as portrayed [here](https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth), which does not really get meaningful reward signals at all and isn’t trained with RL. The cognitive properties of this kind of agent, if it can realistically exist, are not really known to me at all.
1. **[^](#fnrefzhtrpfo82u)**In a fun intellectual circle, a lot of shard theory / model-free RL in general seems to be people reinventing behaviourism, except this time programming agents for which it is true. For instance, in behaviourism, agents never ‘optimise for reward’ but always simply display ‘conditioned’ behaviours which were associated with reward in the past. There are also various Pavlovian/associative conditioning experiments which might be interesting to do with RL agents.
2. **[^](#fnrefqfdu1w7fl)**Does this happen in the brain? Some potential evidence (and probably some inspiration) for this comes from the brain, and probably the basal ganglia which implements subcortical action selection. The basal ganglia is part of a large-scale loop through the brain of cortex -> BG -> thalamus -> cortex which contains the full sensorimotor loop. The [classic story](https://www.science.org/doi/abs/10.1126/science.275.5306.1593) of the BG is model-free RL with TD learning (but I personally have come to largely disagree with this). A large number of RL algorithms are consistent with RPEs including [policy gradients](https://arxiv.org/abs/2207.00636) as well as more [esoteric algorithms](https://psycnet.apa.org/doiLanding?doi=10.1037%2Frev0000199). Beyond this [dopaminergic](https://www.pnas.org/doi/abs/10.1073/pnas.1421930112) [neurons](https://elifesciences.org/articles/59152) [are](https://www.biorxiv.org/content/10.1101/2022.02.28.482379v1.full.pdf) [more](https://www.biorxiv.org/content/10.1101/2022.02.28.482379v1.full.pdf) [complicated](https://www.nature.com/articles/s41586-022-04954-0) [than](https://www.frontiersin.org/articles/10.3389/fnint.2015.00039/full) just implementing RPEs as well as appear to [represent multiple reward functions](https://www.sciencedirect.com/science/article/pii/S0960982215000688) which can result in [highly flexible](https://www.biorxiv.org/content/10.1101/2022.04.14.488361v1.abstract) TD learning algorithms. The BG does appear to have opponent pathways for exciting and inhibiting (the [Go and No-Go](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005062) pathways specific actions/plans, which indicate some level of shard-theory like competition. On the other hand, there also seems to be a fairly clear separation between *action selection* and *action implementation* in the brain, where the basal ganglia mostly does action selection and delegates the circuitry to implement the action to the motor cortex or specific subcortical structures. As far as I know, the motor cortex doesn’t have the same level of competition between different potential behaviours as in the basal ganglia, although this has of course [been proposed](https://www.sciencedirect.com/science/article/abs/pii/S0959438817302283). Behaviourally, there is certainly some evidence for multiple competing behaviours being activated simultaneously and needing to be effortfully inhibited. A classic example is the Stroop task but there is indeed a whole literature studying tasks where people [need to inhibit](https://psycnet.apa.org/record/2013-13047-001) certain attractive behaviours in various circumstances. On the other hand, this is not conclusive evidence for a shard-like architecture, but instead there could be a hybrid architecture of both amortised and iterative inference where the amortised and iterative responses are different. |
b4782a61-54e0-4da3-89ca-8f2db64d1d0b | trentmkelly/LessWrong-43k | LessWrong | AI prediction case study 5: Omohundro's AI drives
Myself, Kaj Sotala and Seán ÓhÉigeartaigh recently submitted a paper entitled "The errors, insights and lessons of famous AI predictions and what they mean for the future" to the conference proceedings of the AGI12/AGI Impacts Winter Intelligenceconference. Sharp deadlines prevented us from following the ideal procedure of first presenting it here and getting feedback; instead, we'll present it here after the fact.
The prediction classification shemas can be found in the first case study.
What drives an AI?
* Classification: issues and metastatements, using philosophical arguments and expert judgement.
Steve Omohundro, in his paper on 'AI drives', presented arguments aiming to show that generic AI designs would develop 'drives' that would cause them to behave in specific and potentially dangerous ways, even if these drives were not programmed in initially (Omo08). One of his examples was a superintelligent chess computer that was programmed purely to perform well at chess, but that was nevertheless driven by that goal to self-improve, to replace its goal with a utility function, to defend this utility function, to protect itself, and ultimately to acquire more resources and power.
This is a metastatement: generic AI designs would have this unexpected and convergent behaviour. This relies on philosophical and mathematical arguments, and though the author has expertise in mathematics and machine learning, he has none directly in philosophy. It also makes implicit use of the outside view: utility maximising agents are grouped together into one category and similar types of behaviours are expected from all agents in this category.
In order to clarify and reveal assumptions, it helps to divide Omohundro's thesis into two claims. The weaker one is that a generic AI design could end up having these AI drives; the stronger one that it would very likely have them.
Omohundro's paper provides strong evidence for the weak claim. It demonstrates how an AI motivated onl |
ca007096-25ff-4ecc-bf63-c78514faf647 | trentmkelly/LessWrong-43k | LessWrong | Hackathon and Staying Up-to-Date in AI
Listen to the first episode of a new sub-series, which will follow my team's process during the Evals Hackathon hosted by AlignmentJamz in November of 2023. In the end, our submission was titled "Detecting Implicit Gaming through Retrospective Evaluation Sets," and it received first place.
Also, I forgot to post about last week's episode, so I am mentioning it here. The episode, "Staying Up-to-Date in AI," covers the methods and tools that I use in an attempt to keep up with the break-neck pace of AI developments.
If you are enjoying the kind of content that I am publishing and/or support my goals with the podcast, please share it to anyone that you think may like is as well.
As a reiteration of my previous statements, I want the Into AI Safety podcast to be a resource for individuals who are interested in getting involved, but are having a difficult time taking the next steps. If you have any advice, feedback, or ideas that you think could help in that endeavor, please reach out!
----------------------------------------
In addition to the Into AI Safety podcast website, you can find the content on (I believe) all podcast listening platforms (e.g., Spotify, Apple Podcasts, Pocket Casts). If you think that I missed a platform, please let me know. |
7cb80ae6-7697-4075-a501-2d2be9362675 | trentmkelly/LessWrong-43k | LessWrong | Conflict, the Rules of Engagement, and Professionalism
(Talk given at an event on Sunday 16th of August. habryka is responsible for the talk, Justis Mills edited the transcript.
If you're a curated author and interested in giving a 5-min talk, which will then be transcribed and edited, sign up here.)
(This was all very off-the-cuff about stuff that I am very confused about, so take all of this with a grain of salt. There are at least a few things in here I disagree with myself, that I wouldn't have put into written form directly.)
habryka: I’m going to talk about three frames I have involving relationships and sociology. I’ll present the frames, some short justifications for why I believe them, and discuss how they connect to each other.
1. I think most relationships go better if you lean into conflict.
2. Most conflicts are hierarchically embedded within different rules, and maintaining the integrity of those rules is quite important.
3. Professionalism is really interesting. I like thinking about it, and I've gotten a bunch of value from thinking about it, because I didn't realize how much of my life has been shaped by professionalism.
Leaning into conflict
One of the things that has been pretty useful for me in life, is a general heuristic of realizing that conflict in relationships is usually net positive. (It depends a bit on the exact type of conflict, but works as a very rough heuristic.) I find it pretty valuable too, if I'm in a relationship, whether it's a working relationship, a romantic relationship, or a friendship, to pay a good amount of attention to where conflicts could happen in that relationship. And generally, I choose to steer towards those conflicts, to talk about them and seize them as substantial opportunities.
I think there are two reasons for this.
First, if startups should fail fast, so should relationships. The number of people you could have relationships with is much greater than the number of people that you will have relationships with. So there is a selection problem here, a |
07772acc-0f5f-4219-8fa8-7d6cfbbc2b4a | trentmkelly/LessWrong-43k | LessWrong | Inferring Values from Imperfect Optimizers
One approach to constructing a Friendly artificial intelligence is to create a piece of software that looks at large amounts of evidence about humans, and attempts to infer their values. I've been doing some thinking about this problem, and I'm going to talk about some approaches and problems that have occurred to me.
In a naive approach, we might define the problem like this: take some unknown utility function, U, and plug it into a mathematically clean optimization process (like AIXI) O. Then, look at your data set and take the information about the inputs and outputs of humans, and find the simplest U that best explains human behavior.
Unfortunately, this won't work. The best possible match for U is one that models not just those elements of human utility we're interested in, but also all the details of our broken, contradictory optimization process. The U we derive through this process will optimize for confirmation bias, scope insensitivity, hindsight bias, the halo effect, our own limited intelligence and inefficient use of evidence, and just about everything else that's wrong with us. Not what we're looking for.
Okay, so let's try putting a bandaid on it - let's go back to our original problem setup. However, we'll take our original O, and use all of the science on cognitive biases at our disposal to handicap it. We'll limit its search space, saddle it with a laundry list of cognitive biases, cripple its ability to use evidence, and in general make it as human-like as we possibly can. We could even give it akrasia by implementing hyperbolic discounting of reward. Then we'll repeat the original process to produce U'.
If we plug U' into our AI, the result will be that it will optimize like a human who had suddenly been stripped of all the kinds of stupidity that we programmed into our modified O. This is good! Plugged into a solid CEV infrastructure, this might even be good enough to produce a future that's a nice place to live. However, it's |
58901c8d-7a0b-4d50-beb4-0b2d7d56abb2 | trentmkelly/LessWrong-43k | LessWrong | New genetic evidence of positive selection for Ashkenazi diseases
http://johnhawks.net/weblog/reviews/genomics/selection/bray-ashkenazi-2010.html
.
|
2a44bfad-24be-432a-ae52-dd09c72463a5 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | Constraints from naturalized ethics.
Humans are just bundles of atoms, and so for them to reflect on themselves and their goals, it's plausible that there's some interesting naturalized ethics going on. This informs how we'd think about an AI intended to do human value learning. Let's frame this as a dialogue, so my hypothetical human can explain what they're thinking.
**Human:** I don't actually have goals in the precise mathematical sense, what we call my 'goals' are good yet imperfect patterns that I or others can use to describe my thoughts and behavior.
**Foil:** I don't see how an AI is supposed to learn your goals then. Can you at least specify exactly what pattern you mean by your goals?
**Human:** I don't think so - my thoughts and behavior under-determine those patterns. If we just try to model my thoughts and behavior *a priori*, what we probably get is some kind of fairly accurate simulation of me that doesn't have any neatly-labeled parts called 'goals.'
**Foil:** Can't you just tell the AI to model you in terms of goals?
**Human:** Even if we add a prior to tell the AI to model me in terms of goals, not only do we have to figure out how to pick a good prior, but also there's the problem that the predictvely-best model of me really is the simulation, and so there's a pressure on the AI to encode a simulation of me within whatever model we make it use, which probably will break whatever common-sense properties the things labeled 'goals' are supposed to have.
**Foil:** How did you start thinking in terms of goals in the first place, then?
**Human:** Well, I'm not even sure that I think in terms of goals. I care about them, but as you can see I can't precisely pin them down. It feels like I do my actual modeling of humans in terms of a bunch of related heuristics, and then use the label 'goals' to encapsulate planning-related parts of many of these heuristics. I also owe a lot to language - 'goal' is a word after all, and it fits within an entire mode of thinking about the world verbally.
**Foil:** If goals show up in human heuristics and language, they don't seem that hard to at least get a good approximation of. What's the problem?
**Human:** I still think of those heuristics as being *about* goals, not being the goals themselves. This is where I get notions like "even if you could convince me of something, it might not be what I really want to do." This gets into the related problem of how I'd want an AI to make decisions. I don't want it to make decisions like *me*, but I also don't trust the notion of maximization.
**Foil:** But surely if it's making decisions consistently, it's maximizing *some* goals, right?
**Human:** I just feel like even though there are many possible patterns consistent with my thoughts and behavior, if I pick any one of these patterns and maximize it to the extreme, I'm worried that it would be bad. When I say 'bad' in that last sentence, I don't mean bad-in-a-specific-goal-pattern, but instead I mean bad in a more abstract framework of heuristics about my own desires.
**Foil:** Have you read this post called [Impossible moral problems and moral authority](https://www.lesswrong.com/posts/pW6YJEzoRFe9cshuN/impossible-moral-problems-and-moral-authority)? It's by this guy who...
**Human:** Yes yes, the old self-promotion gag. And sure, maybe there are some patterns I would be fine with being maximized. But not arbitrary ones! There's still an important point here about what kind of process I would trust to decide on my goals. I don't trust a process that's just looking for "the pattern that best fits my thoughts and behavior" by some understandable metric. It should also be taking into consideration the more abstract framework I have that's telling me that maximization is dangerous to my values.
In short, the AI shouldn't reify my goals if I don't. Maybe it should think to itself, "I know that humans are just bundles of atoms, and what I call their 'goals' are just good patterns that I can use to describe their thoughts and behavior. But their thoughts and behavior under-determine those patterns and if I just pick one of those patterns and maximize it, I'm worried that it it would be bad. Not 'bad' in terms of a specific one of the goal patterns, but in the sense of some more abstract model I have of how optimization relates to goals."
**Foil:** But then what do you expect it to *do*? Haven't you just stuck yourself with an even more complicated [other-izer problem](https://arbital.com/p/otherizer/)?
**Human:** Hopefully it's actually a simpler other-izer problem, if we can teach our reasons for wanting an other-izer in the first place. And I'm less worried about not solving the problem than I am about not trying to solve the problem, as it were.
My central point is that AI should be a naturalist, and learn about me as I am rather than some fiction of me. If "what I want" is an underdefined category, the AI should probably be explicitly learning and thinking about how to judge underdefined things. If it's self-reflective, there's a strong argument that it had better conceive of its job as something like interpretation of an underdefined goal, or else it will be looking over its own source code under false assumptions. |
81e07f28-6312-4da2-b849-b134355ba867 | trentmkelly/LessWrong-43k | LessWrong | How Congressional Offices Process Constituent Communication
For anyone interested in facilitating constituent communication (emailing, calling, meeting with the legislators that represent you) to influence policy, it will be helpful to understand how offices process these various communication channels. This is what is referred to as the "formal process" in our post about the effectiveness of constituent communication on changing legislator's attitudes, as it represents the mechanism offices have put in place to systematically intake and consider constituent opinion. I'll start with explaining the general office structure, and then get into the process itself.
Office Structure
Before talking about how offices process communication, it’s important to have a rough understanding of who makes up a congressional office. Offices are split into two different locations, an office in DC and an office in the district, and though the roles will vary a lot office to office, a number of positions are common:
* Leadership
* Member: The elected official.
* Chief of Staff (CoS): “Chiefs of staff serve as the key adviser to the elected official, managing staff and ensuring the smooth operation of day-to-day activities.”
* Deputy Chief of Staff (DCoS): “Typically oversees a few policy issues [and manages] the Chief of Staff’s schedule, personal correspondence, and any overflow work delegated by the Chief of Staff.”
* District Director (DD): “This person is responsible for handling all state-based operations for the member, including most constituent-facing functions.”
* Legislative Team
* Legislative Director (LD): Oversees legislative staff and helps develop legislative strategy for the office.
* Legislative Assistant (LA): “These staffers are responsible for conducting in-depth research, analyzing policy issues, drafting legislation and advising the member of Congress on legislative matters.”
* Legislative Correspondent (LC): “Responsible for drafting letters in response to constituents' comments and questions a |
5e4ea861-9426-4112-b6a3-00919b771110 | trentmkelly/LessWrong-43k | LessWrong | Upcoming meditation workshop in the Bay Area
Hey everybody,
Michael "Valentine" Smith is teaching a Meditation workshop in Alamo, CA on May 3-5. The workshop is aimed at rationalist-type folk who know about the benefits of meditation and would like to be meditating more, but all are welcome to join. Val is a curriculum designer at CFAR and has been developing his family's meditation tradition for his whole life.
http://www.mindfulnessengineering.com/
Note: This is not a CFAR event.
|
5461a887-049b-40da-ab69-2fb488175925 | trentmkelly/LessWrong-43k | LessWrong | NY Megameetup this weekend
Hey all, this is just a reminder that New York community is hosting a megameetup this weekend. Some information here:
The weekend of April 5th and 6th, the New York Rationality community is hosting the Spring East Coast Megameetup of 2014. It'll be held at the Highgarden Group House / Community Center.
The Agenda:
Start Time: 3:30 PM on Saturday (Attendees are encouraged to arrive beforehand to mingle and talk to other arriving folk)
Attendees are encouraged to arrive by Saturday afternoon to have time to settle in. We'll begin at 3:30 PM with some opening remarks, followed by a presentation by David Carlson, on "Legal Concerns Surrounding Startups". This'll be followed by dinner and a workshop on Goal Factoring - looking at the things you do and introspecting on why you're actually doing them, to gain insight into your true goals and then develop creative ways to accomplish them.
The next day, we'll be holding an unconference, starting at 10:00 AM. People are invited to prepare 15 minute presentations on whatever they find interesting. Some topics people are preparing include:
• Running a kickstarter
• Talent coefficients and the growth mindset
• Current landscape of the Effective Altruism community
Donations to run the event and (and, on the off chance we break even, future events at Highgarden), are welcome at: 1FtWbh8GyAcDewSy32geCBgsFoHEN7UoZ
The official facebook event is here:
https://www.facebook.com/events/723051941048754 |
66fa20b6-a720-4c56-b5c1-0c338b910d7a | trentmkelly/LessWrong-43k | LessWrong | Personality analysis in terms of parameters
(copied and edited from my post in the Facebook LW group)
I suspect that it would be obvious to most rationalists that the way people judge other people is flawed. Typically for a heuristic approach, it's correct to a degree, but with many faults. And it's wasting a big amount of information and a potential for a more planned approach where you can ask questions that assess certain qualities and exchange information about people's personalities by giving their "parameters".
I needn't think of it in this way, it was natural for me to take this approach as soon as I learnt my first measurable parameter and its implications (it was IQ). Then I explored more of them and researched them some more.
So far, I know about IQ, rationality (Keith Stanovich's), Big Five personality traits, executive functions, intuition for social situations and a few more things. However, I can't seem to find any literature that helps describe them (how do I detect them in people and what are their implications?) and their implications (how *exactly* is someone with a higher IQ different from someone with a lower IQ?). Also, I can't find literature on other traits.
Any literature on any of that would be greatly appreciated. I wonder if there is a book that deals with the whole issue. Also, I need literature about IQ and Big Five, but anything else would still be useful.
Is that sort of thing popular on LessWrong?
|
822ea5cc-c905-4974-9d2d-e6c50a200252 | trentmkelly/LessWrong-43k | LessWrong | I'm looking for alternative funding strategies for cryonics.
Are there any ways to pay for cryonics or similar, if you don't have (enough) life insurance? Do you think a fundraiser like go fund me could help? Any advice would be appreciated. |
8d7eb69c-650f-4fdb-9817-4d863f4e618f | trentmkelly/LessWrong-43k | LessWrong | Is driving worth the risk?
− The Thinking Ladder
> Harry was wondering if he could even get a Bayesian calculation out of this. Of course, the point of a subjective Bayesian calculation wasn't that, after you made up a bunch of numbers, multiplying them out would give you an exactly right answer. The real point was that the process of making up numbers would force you to tally all the relevant facts and weigh all the relative probabilities. Like realizing, as soon as you actually thought about the probability of the Dark Mark not-fading if You-Know-Who was dead, that the probability wasn't low enough for the observation to count as strong evidence.
>
> − HPMOR Chapter 86: Multiple Hypothesis Testing
I'm 28 years old and have never had a drivers license. At some point earlier on in my life I decided that driving is something that has a bad expected value (EV) due to the risk of death and the massive value of life, but at the same time, the EV isn't so bad where I will go out of my way to avoid it. Since deciding this, the belief has become cached. However, various things have recently prompted me to reconsider the belief.
1. I'm looking to buy a home and am leaning towards making it a requirement that the place is fully walkable (as opposed to mostly walkable but sometimes requiring a car).
2. The topic of MIRI relocating came up and the badness of cars is relevant to that.
3. I have a wedding coming up where I have to travel from Vegas to SF. I have the option of getting a ride from my girlfriend's mom, or taking a flight. I'm tempted to go by car so I don't have to pay the money for a flight, but I don't actually think that is the right decision. ("No thanks Sandra. I think that computers are going to take over the world and make us all immortal. You have a slightly higher chance of dying in a car than a plane, so I'd rather pay for a flight.")
4. Covid has prompted me to explore the EV of doing things. Eg. looking at the chances of catching covid and dying, trying to put a dollar am |
c99920d3-be30-4a91-a77d-570d30c2aa8d | trentmkelly/LessWrong-43k | LessWrong | What are your favorite examples of distillation?
I'm a big fan of the Distill machine learning journal and the ideas of Research Debt and distillation. I consider Distill and LessWrong great repositories for distillations of ML / AI and some math topics. However, I've recently been hankering for distillations from other fields with which I'm somewhat familiar -- biology, algorithms, economics-- or even not that familiar. (John Wentworth's recent series of posts on aging and constraints are good examples of one form posts like this could take.)
So, I figured I'd ask here: what are your favorite examples of distillation in different fields? I'm open to more ML / AI related posts but am especially excited about responses in the fields I mentioned above or other different fields (I would include math here too). Ideal answers would be posts that optimally trade off:
* Describing a non-trivial topic.
* Not "dumbing it down".
* Being accessible to non-experts. |
08506bde-4918-4e46-b2ad-85ee56185d85 | trentmkelly/LessWrong-43k | LessWrong | My Mental Model of AI Creativity – Creativity Kiki
I went to some lectures on the future of science in games recently, and the keynote speaker was Tommy Thompson, an well-known AI expert in the game dev space.
Of course, by AI, he didn’t mean the modern sort that dominates the news. His focus is AI for games, which is algorithmic and rarely involves any ML component. Still, he spoke about the challenges the industry faces regarding Image Generators, LLMs and so on. He specifically called LLMs “stochastic parrots”, which I found disappointing. Imho it’s an incredibly misleading model of what LLMs are capable of and is usually deployed to downplay their abilities and belittle them. But it’s a common view, particularly in creative industries.
So what is a better model? It’s clear that they are not that smart in most ways we consider important, but they do have some interesting capabilities. Here’s model I use that I feel give a better intuition for what they can and cannot do.
Creativity Kiki
Can models be creative? Or do they just regurgitate things from their vast training set. Certainly, they can unquestionably create images and sentences that have never exactly been seen before, but if they are trivial variants of something already done, no one would call that creative. I think to answer the question properly, you need to be precise about how you define creativity.
I like to think of the sum total of all human creations to be a big blob that looks something like this:
It’s a funny shape, spread out in the dimension of platonic ideals. There are some areas that we’ve really explored in depth, like say, Marvel movies or romance novels, and others that we haven’t, like epic poems about chinchillas. Some of those ideas will be of interest while others are boring. You are creative when you find something both novel (outside the shape) and interesting (at least, to some audience).
Technically, any automated system can be creative in this sense. A simple random sentence generator, like infinite monkeys, will random |
c2a08465-c2a9-4c53-95a5-6611a595e8df | StampyAI/alignment-research-dataset/lesswrong | LessWrong | Framing approaches to alignment and the hard problem of AI cognition
*(Thanks to Adam Shimi and Steven Byrnes for feedback.)*
*(Edit: unsurprisingly, my views have updated some after writing this post. Specifically, I now put much higher credence on solving inner alignment via verification or relaxed adversarial training)*
1 Background
============
The [late 2021 MIRI conversations](https://www.lesswrong.com/s/n945eovrA3oDueqtq) include discussion about the
difficulty of alignment (don't worry, spending hours reading these isn't
required for this post). One [shared frame](https://www.lesswrong.com/posts/GkxxfdCukyGuyKXQQ/shared-frames-are-capital-investments-in-coordination) which wasn't immediately
present in the discussion was a clean delineation of possible approaches to
alignment and what they require. I claim that alignment techniques can be
usefully understood as deontological, consequentialist, or capability
restriction (or a mixture of these) and we'll be going through the challenges
associated with constructing *sufficiently useful* and safe AI using these
approaches. I'll also be discussing and motivating a set of issues which I'll
refer to as 'the hard problem of AI cognition': we don't have tools for
understanding the cognition or intention of AIs produced by current machine
learning methods or understanding of how AI decision making depends on training
(c.f. [inner alignment](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB)).[[1]](#fn-EMWKRuyEg3t7cERSq-1) I'll define this problem in
more detail below, including where this problem does and doesn't show up in
alignment. After going through deontological, consequentialist, and
capability restriction approaches at a high level and explaining where I think
the hard problem of AI cognition is relevant, I'll explain my take on the
strategic implications of this analysis and briefly discuss alignment
difficulty. We'll be focussing on X-risk, so we won't directly discuss failures
which would 'only' result in large losses of life or economic damage. This is
primarily a 'first principles' sort of analysis, though I'll be implicitly (and
occasionally explicitly) referencing empirical work.
Epistemic status: exploratory. While many of the ideas stated here appear to be
widely accepted in the alignment community, I'm a newcomer to the field trying
to cover a lot of ground. But everyone keeps telling me to be Very Ambitious
and that alignment lacks strategic direction. So, uh, here goes an attempt at
that I guess?
2 Prerequisites
===============
The main prerequisite will be the sorts of concepts discussed in [AGI safety
from first principles](https://www.lesswrong.com/s/mzgtmmTKKn5MuCzFJ).
We'll refer in more detail to:
* The idea of utility maximization and [that coherent decisions imply
consistent utilities](https://www.lesswrong.com/posts/RQpNHSiWaXTvDxt6R/coherent-decisions-imply-consistent-utilities)
* [Goodhart's law](https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy)
* The concept of [inner alignment](https://www.lesswrong.com/s/r9tYkB2a8Fp4DN8yB)
* [Power seeking/instrumental convergence](https://www.lesswrong.com/s/fSMbebQyR4wheRrvk)
* The current lack of understanding around deep learning generalization and
transparency
* The idea of goal-directed behavior and some of the surrounding discussion in
the goals vs utility section of the [value learning
sequence](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc) and the [deconfusing goal-directedness sequence](https://www.lesswrong.com/s/o58ZMNaovdztbLfvN).
I'd tentatively recommend reading this post first before reading those sequences as
we'll go through the background and (hopefully) motivate these concepts
in a reasonably concise way.[[2]](#fn-EMWKRuyEg3t7cERSq-2)
Really, nothing else?
Well, other knowledge or ideas will hopefully be linked as necessary. Also,
here are some things which could be helpful to read (though I'd probably
recommend reading them after this post):
* [A discussion of using an objective framing or a generalization framing of inner
alignment](https://www.lesswrong.com/posts/pDaxobbB9FG5Dvqyv/discussion-objective-robustness-and-inner-alignment)
* [Model splintering: out-of-distribution behavior](https://www.lesswrong.com/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1)
* [Reward splintering: model splintering on reward](https://www.lesswrong.com/posts/xoQhHxgwdHvWhj4P4/reward-splintering-for-ai-design) (really utility)
* As mentioned above, the entire [value learning sequence](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc) (the first two
parts seem more important to me) and the [deconfusing goal-directedness sequence](https://www.lesswrong.com/s/o58ZMNaovdztbLfvN)
* This recent post on [consequentialism &
corrigibility](https://www.lesswrong.com/posts/KDMLJEXTWtkZWheXt/consequentialism-and-corrigibility%5D).
If you'd like to read the [late 2021 MIRI conversations](https://www.lesswrong.com/s/n945eovrA3oDueqtq) (and haven't
read them yet), my weakly suggested reading order is:
* This post
* [Zvi's gears analysis of AGI intervention](https://www.lesswrong.com/posts/xHnuX42WNZ9hq53bz/attempted-gears-analysis-of-agi-intervention-discussion-with-1)
* The conversations themselves
* [Interpreting Yudkowsky on Deep vs Shallow Knowledge](https://www.lesswrong.com/posts/GSBCw94DsxLgDat6r/interpreting-yudkowsky-on-deep-vs-shallow-knowledge). Also
consider referencing this when the concept of deep knowledge comes up instead
of reading it afterwards.
(Of course, this reflects my biases.)
3 Assumptions and focuses
=========================
3.1 Sufficient usefulness
-------------------------
I wrote above that we want to make a *sufficiently useful* and safe AI.
What is this *sufficiently useful* criteria?
The criteria is that the [alignment tax](https://forum.effectivealtruism.org/posts/63stBTw3WAW6k45dY/paul-christiano-current-work-in-ai-alignment) must be
sufficiently small on the capability dimensions we care about.
And what is sufficiently small? And which
dimensions? Well, I don't think we currently have a good understanding of this
(as it requires predicting the future), but 2 typical models are:
1. Small enough that alignment can be enforced via governance without too much
incentive for defection. This framing is probably more relevant in slow
takeoff.
2. Small enough that an actor could use a lead in AI capabilities to accomplish
a [pivotal act](https://arbital.com/p/pivotal/) safely before unaligned AIs are constructed. Note
that under this framing, the 'capability dimensions we care about' are the
ones which can be used to cause a pivotal act. If the alignment penalty
makes all pivotal acts impossible, then that technique is (approximately)
worthless. This framing is more relevant in fast takeoff and the acceptable
levels of alignment tax could depend on the capabilities lead.
For the remainder of this post, we'll abstract over these two models as well as
different views on takeoff speed, referencing different perspectives as
necessary.
3.2 Assumptions
---------------
But abstracting over everything results in a mess, so we'll make the following
assumptions:
1. Unrestricted, superintelligent, and capable AIs which act like long-term,
expected utility maximizers with purely outcome based goals (aka
consequentialists) would cause an existential catastrophe if created (mostly
by humans) with approaches similar to current ML. When I say 'expected
utility maximizers with purely outcome based goals', I mean that the
expected utility maximizer cares intrinsically about consequences
(consequentialism) instead of about actions (deontology). This is the same
(or at least a very similar) idea as goal-directed AI from [this
post](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc/p/DfcywmqRSkBaCB6Ma) by Rohin (from [the value learning
sequence](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc) discussed above).
We'll use the term
consequentialism for the remainder of this post. I prefer this term over
goal-directed because the word goal isn't quite what we intend[[3]](#fn-EMWKRuyEg3t7cERSq-3), I'm likely to use this concept
somewhat differently from how Rohin intended, and the word more nicely
sets up the dichotomy with deontology.
We'll also go through the distinction between
consequentialism and deontology for AI in more detail below. This
assumption is due to an inability to construct a human values utility
function, an inability to perfectly inner align an agent's utility function,
Goodhart's law, and [instrumental convergence](https://www.lesswrong.com/s/fSMbebQyR4wheRrvk).
But what about [ambitious value learning](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc/p/5eX8ko7GCxwR5N9mN)? Well, my view is
that full ambitious value learning is deeply infeasible for reasons
discussed in the ambitious value learning part of [that same value learning
sequence](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc). If partial or slightly less ambitious value
learning resulted in purely consequentialist agents, then just this approach
won't be sufficient for avoiding existential catastrophe. Of course, it
might not result in such agents, for instance the right
[meta-preferences][metaprefs] could avoid catastrophe.
2. Societal and government competence and coordination aren't very high (this
informs how hard it is to enforce alignment through governance).
3. AIs capable of directly causing existential catastrophe (with at least small
probability) have to be quite intelligent and capable on at least some
dimensions. There are other [threat models](https://www.lesswrong.com/tag/threat-models) worth considering,
but we won't talk about them much here.
I won't make a case for why these are good assumptions here beyond what I've
already said, but would be happy to chat in the comments.
We'll also mostly pretend AIs will be deep neural networks trained with SGD,
but I wouldn't be surprised if this post generalizes.
3.3 Capabilities for catastrophe
--------------------------------
Note that we'll be focusing on techniques for aligning AIs in the regime where
capabilities are sufficient for unaligned AIs to directly cause existential
catastrophe (of course, techniques for less capable AIs could generalize). One
potential objection to this approach is that in slower takeoff scenarios, the
crux of alignment could come down to determining how to get AIs which aren't
existentially dangerous to meaningfully speed up alignment research. This would
require 'weakly' aligning these 'less' dangerous AIs[[4]](#fn-EMWKRuyEg3t7cERSq-4). Ideally,
this would be done without this work on weak alignment contributing to the very
likely ongoing research on how to get AIs to do capabilities research. Perhaps
this asymmetry could be enacted via the use of secrecy or asymmetrically useful
weak alignment strategies. Regardless, we won't be considering this type of
scenario here.[[5]](#fn-EMWKRuyEg3t7cERSq-5)
4 Approaches
============
4.1 Deontological approaches
----------------------------
Deontological principles are rules for taking actions which aren't based on the
consequences of those actions. In other words, deontological principles 'care'
about actions instead of their consequences. Under this definition, deontology
is the exact complement of consequentialism. Note that some deontological
properties can be encoded or modeled using non-pathological utility functions,
but for others [this might not be possible due to incoherent
decisions](https://www.lesswrong.com/posts/WCX3EwnWAx7eyucqH/corrigibility-can-be-vnm-incoherent) (it's not important for this post to have intuition
about why incoherent decisions are sometimes required). This is because [all
behavior can be rationalized as expected utility maximization](https://www.lesswrong.com/s/4dHMdK5TLN6xcqtyc/p/NxF5G6CJiof6cemTw#All_behavior_can_be_rationalized_as_EU_maximization).
Confused or wondering about the implications of AIs having these principles?
Hopefully the next sections will clarify this, so read on.
### 4.1.1 Tool AIs are purely deontological AIs
Long-run consequentialism kills us all, so let's now consider AIs which don't
care about optimizing their environments. Specifically we'll first consider AIs
which have reduced agency: tool AIs. You may have noticed this appears in the
deontological approaches section. That's because I claim that tool AIs (as
typically described) are just *purely deontological* AIs.[[6]](#fn-EMWKRuyEg3t7cERSq-6)
[Agency is mostly just a set of capabilities coupled with (long-term)
consequentialism](https://www.lesswrong.com/s/mzgtmmTKKn5MuCzFJ/p/bz5GdmCWj8o48726N). If we wish to remove agency while keeping
capabilities, we must remove consequentialism yielding a deontological AI. It
may also be possible to reduce agency by removing some capabilities (such as
self-modeling), this will be discussed in the section on restriction based
approaches. Tool AIs are an extreme version of a deontological approach as they
are purely deontological, but they serve as a good exhibit of the weaknesses
and safety advantages of deontological AIs as well as the challenges associated
with constructing them.
### 4.1.2 Purely Deontological AI, what is it good for
Given that different people use the term 'tool AI' in somewhat different ways,
I will stick with the verbose purely deontological AI from here on.
Note that purely deontological AIs can be capable of modeling consequences, but
they don't *care* about the consequences of their actions.[[7]](#fn-EMWKRuyEg3t7cERSq-7) They
don't make explicit plans aimed at causing certain outcomes. The fact that
purely deontological AIs can be capable of modeling consequences means these
AIs can appear agentic. For instance, chess engines like AlphaZero or
StockFish are almost certainly purely deontological, but they do appear
somewhat agentic. For a more absurd example, consider an AI which only cares about
imitating what its actions would be if it were a consequentialist (e.g. a human
or some consequentialist AI). For a competent imitator, this is the *same* as
being a consequentialist. So wait! Why have we bothered with defining this
class of AIs if it practically includes consequentialists anyway!? Well, this
comes down to why the intentions of AIs matter at all.
Intentions determine behavior when out-of-distribution for intelligent and
robustly capable agents. By definition, robustly capable agents robustly
pursue their goals, so we'll consider the case where an agent will still pursue
its goals effectively on an out-of-distribution input.[[8]](#fn-EMWKRuyEg3t7cERSq-8)
Nearly identical behavior on a specific distribution of tasks from the training
environment could be the result of many possible goals (intentions). However,
these different goals can be distinguished by different behavior on
out-of-distribution inputs. For example, consider [some empirical observations
of objective robustness failures](https://www.lesswrong.com/posts/iJDmL7HJtN5CYKReM/empirical-observations-of-objective-robustness-failures) in which agents 'care' about a
feature correlated with the reward and then pursue that feature when
out-of-distribution instead of the reward from the original environment. The
behavior out-of-distribution reveals more information about the true goals of
the agent (in so far as these weak agents can be seen as having goals). There
are also more arcane considerations like [deceptive alignment](https://www.lesswrong.com/posts/zthDPAjh9w6Ytbeks/deceptive-alignment). A
deceptively aligned agent would have the instrumental goal of doing well at
training and not being detected. Its terminal goals could be anything. So
clearly this agent (if capable) would perform well in training, but its true
goals would result in different behaviour once it determined that it's no
longer in training and there isn't need for deception. How would the agent be
able to determine this? By detecting the out-of-distribution property that it's
free to defect and seize power. In practice this could work via mechanisms such
as detecting out of distribution inputs, modeling the world in general,
checking if time has passed relative to when it was training, or simply
defecting with small probability at any point[[9]](#fn-EMWKRuyEg3t7cERSq-9).
So there can be purely deontological AIs which act like consequentialist agents
in their training environments, but we generally expect them to act less like
consequentialist agents on out-of-distribution inputs. In general, I would be
Very Surprised if a purely deontological AI caused existential
catastrophe in a bid for power without having been trained explicitly to do so.
It isn't agency which scares us: it's generalizing agency.
In summary, purely deontological AIs are not existentially dangerous *by
default*. They're in the same danger category as a nuclear bomb: they might
kill a bunch of people, but you'll have to launch them first.
### 4.1.3 Constructing sufficiently useful purely deontological AIs
The next question we need to ask is how a *sufficiently useful* purely
deontological AI can be constructed.
How could we train a purely deontological AI to do useful things? We could
train to imitate or predict instead of optimizing for outcomes. Perhaps apply
some iterative amplification or similar and boom, you've got an tool AI which
can do useful things.
Did you catch it?
The error in the above reasoning? Take a second and think through what's
wrong before peeking.
Just because an AI is trained to imitate or predict doesn't mean it's
guaranteed to be a purely deontological AI!
For instance, consider an AI trained to imitate another AI which is
a competent consequentialist. It seems quite plausible
that this imitator would itself just become a consequentialist!
More generally, inner alignment is not guaranteed by all training procedures.
To be clear, I don't think this is a common misconception among people working
on or thinking about alignment. This was also a very weak trap example.
Regardless, it does seem like a potential pitfall, so I thought I would try to
strongly push readers away from this mistake.
So, there isn't an obvious way to train a purely deontological AI. In fact, we
don't even know how to check if an AI cares about consequences or deontological
rules. Inner alignment with current machine learning approaches is hard. We
have no [physics](https://www.lesswrong.com/posts/qherXDnjKd8upEqhn/why-study-physics) or even biological style models for understanding
the eventual
intentions of superintelligent AI produced via such a process.
We don't have solid approaches for inspecting the decision
making of deep agents. Or a decent understanding of what decision making will
result from a specific training process. We don't know why or how deep learning
generalizes. And it's unclear if techniques will generalize to higher
intelligence and capability regimes. This is the 'the hard problem of AI
cognition' which we'll be referencing throughout the post.
Is this just the entire alignment problem? Well no, it doesn't include outer
alignment and it's possible that we could solve alignment without solving this
issue either via the rather dangerous approach discussed in [the upcoming
section on restrained AI](#4_3_Restrained_AIs) or if it simply happens to be
easy to get certain cognitive properties regardless of a lack of understanding.
Things could happen to work without us understanding why they work. I claim
that relying on this is a very dangerous approach due to difficulties
evaluating alignment, for instance, consider deceptiveness. So, my view is that
the ways around the hard problem of AI cognition are dangerous (though perhaps
I am missing some approaches) and that it is a difficult crux of alignment. I
also think that a decent amount of alignment research isn't sufficiently
focused on this problem and that we should be more actively working on it. I'll
explain my reasoning and what this could look like in my strategic
recommendations below.
Now let's transition back to the subject of purely deontological AI. Despite
these issues, there are obvious ways to train deep neural networks which ensure
that they will be purely deontological. For instance, consider training a
(randomly initialized) model to output the value 1. Clearly such a model isn't
going to be a consequentialist or even intelligent (unless you think the
inductive biases of SGD are *actually* Magic). But, if the task in question
might involve modeling consequences, the question of how to use current machine
learning approaches to produce intelligent, non-consequentialist agents is
considerably tricker.
In the superintelligent, highly capable regime, what sorts of training and
objectives might produce purely deontological agents (as opposed to agents
which are at least partially consequentialists)? Well, we're clearly deep into
speculation land, because there isn't even currently public knowledge of how to
produce a superintelligent, highly capable AI (and I wouldn't publish it if I
knew). However, I would be Very Surprised if training agents based on the
consequences of their actions (outcomes) in even modestly complex environments
with something resembling modern machine learning (e.g. reinforcement learning)
could produce capable, superintelligent, and purely deontological AIs. This
is putting aside edge cases or the application of some not currently known
technique. I'd also make a similar claim about AIs trained to imitate another
consequentialist AI. Note that constructing plans also falls into the category
of outcome based training (assuming you care about whether or not those plans
work!). Also be careful not to over-generalize my statement: I'm just saying
that you wouldn't get *purely* deontological agents, not that you couldn't get
*partially* deontological agents which we will discuss later. So, this leaves
the tasks which are classically associated with tool AIs such as prediction
(for instance, language models).
We'll refer to these tasks as *process based* as opposed to *outcome based*.
So which process based tasks actually result in purely
deontological AIs? I will hold off on speculating here, though I think the
answer to this question would be useful. My understanding is that in [this
conversation](https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/7im8at9PmhbT4JHsW) Eliezer Yudkowsky says that he thinks that current
machine learning techniques couldn't even produce an intelligent[[10]](#fn-EMWKRuyEg3t7cERSq-10) and
purely deontological model. There's also some speculation in [this post on
safety in predictive learning](https://www.lesswrong.com/posts/ey7jACdF4j6GrQLrG/thoughts-on-safety-in-predictive-learning).
See [this comment](https://www.lesswrong.com/posts/Cj6PBGSjqkPfXbwCF/framing-approaches-to-alignment-and-the-hard-problem-of-ai?commentId=sQw7BLmyXYXgmd7ge) for related Elicit predictions.[[11]](#fn-EMWKRuyEg3t7cERSq-11)
Now let's suppose that all process based tasks do in fact result in purely
deontological agents and consider if such agents can be *sufficiently useful*.
I'm not currently aware of any pivotal act which can be achieved using a
process based task AI. Pivotal acts likely require careful and superintelligent
planning which requires reasoning about consequences. One possible route would
be to use process based task AI to radically speed up alignment research.
However, research requires a large amount of consequentialist agency which
process based task AIs can't do without assistance. So, the use of AI would
have to radically speed up alignment research while simultaneously still having
humans do the consequentialist component. My best guess is that this
bottleneck would result in insufficient research speed improvements
particularly given that AI would also likely be used for capability research
(depending on the capabilities lead). The analysis of other pivotal acts
proceeds similarly.
See [this comment](https://www.lesswrong.com/posts/Cj6PBGSjqkPfXbwCF/framing-approaches-to-alignment-and-the-hard-problem-of-ai?commentId=CmDJWgWka6MYTrjr4) for a related Elicit prediction.
If purely deontological AI via process based tasks is the main approach to
alignment enforced by governance, the benefits of defection would likely seem
large to actors as [tools want to be agents](https://www.gwern.net/Tool-AI).
So overall, my belief is that trying to solve alignment for current ML via
using purely deontological AIs is very unlikely to succeed.
### 4.1.4 Partially Deontological AI, what is it good for
Given the capability weakness of the purely deontological AIs we'd be able to
create, perhaps we can tolerate some level of consequentialism, but also
instill deontological properties. For instance, perhaps we'd like to instill
deontological properties like honesty, obedience, corrigibility, or
conservativeness which would override consequentialism in some cases or
prohibit certain courses of action.
The next relevant question is how the hell do we instill deontological
properties? We're back to the hard problem of AI cognition. For any outcome
based environment which rewards deontological properties, there exists an agent
which simply models that reward for deontological properties as a desirable
consequence in its utility function and achieves full marks. For instance,
rewarding honesty could be modeled as 'be honest' or as 'appear honest to the
overseer'. Note that in some sense this 'consequence model' is the 'correct'
model: by definition, outcome based environments reward consequences. So maybe
if you set up your deontological property and train to convergence you get a
superintelligent, consequentialist agent which also has the desired
deontological property. But maybe not.
A further concern is that it might be easier to instill deontological
properties in less intelligent agents. This could result from full blown
consequentialism requiring relatively advanced capabilities like self-modeling,
predication, and reasoning about counterfactuals. For instance, note that for
life on earth intelligence seems to correlate with consequentialism. While even
insects can look quite consequentialist from afar, this is really a much less
generalizable phenomenon than human consequentialism.
This difference in difficulty could also result from the fact
that consequentialism is the 'correct' model as discussed earlier. Overall,
this issue creates the potential for quite a dangerous situation
in which there is a smooth transition between dumb deontologist AIs and more
intelligent purely consequentialist AIs which deceptively pretend to have
deontological properties. Even if the transition isn't smooth, there is still
potential for danger. When dialing up the intelligence knob (params, training
time, etc), noticing a transition region between having the deontological
properties you want, some alignment failures, and then seemingly getting those
properties back again should be cause for alarm.
There's an additional problem with partially deontological AIs which didn't
exist with purely deontological AIs. If the deontological principles of an
purely deontological AI [splinter](https://www.lesswrong.com/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1), the AI remains very unlikely
to cause existential catastrophe. It merely will have some other deontological
properties potentially making the AI less useful.[[12]](#fn-EMWKRuyEg3t7cERSq-12) However, if the
deontological properties of a partially deontological AI splintered or were
merely somewhat off, but the consequentialist capabilities were retained, then
it's possible that consequentialism wouldn't be overridden in important cases
and the AI would kill us all. We don't just need to ensure that we get
deontological properties: we need to ensure we get the right deontological
properties and those properties actually prevent existential catastrophe.
Beyond all of these issues, we also now have to worry about the utility
function of the agent with respect to consequences. While sufficient
deontological properties could ensure that an AI with the wrong utility
function didn't kill us all, it might not be very useful. Assuming the utility
function of the AI was 'close enough' to desired, partially deontological AI
could certainly be *sufficiently useful*. They can potentially be just as
capable as pure consequentialists. However, there are likely trade-offs between
the strength of deontological properties and the capabilities of the agent.
Sufficiently strong conservatism results in doing nothing at all.
4.2 Consequentialist approaches
-------------------------------
### 4.2.1 Myopic Agents
Given that we assume that long-term consequentialists would kill us all, what
consequentialist approaches are left? Well, consequentialists which don't care
about the long run of course! These are typically described as myopic
agents[[13]](#fn-EMWKRuyEg3t7cERSq-13). Unfortunately, we currently [don't know how to construct
myopic agents](https://www.lesswrong.com/posts/GqxuDtZvfgL2bEQ5v/arguments-against-myopic-training): simply training agents with myopic reward is
insufficient. This is the hard problem of AI cognition once again. However, I'd
note that it would likely be easier to create myopic agents than to instill
complex deontological values. However, even if we know how to construct myopic
agents, they have [potential safety issues mostly related to decision
theory](https://www.lesswrong.com/posts/LCLBnmwdxkkz5fNvH/open-problems-with-myopia). Resolving these safety issues also likely requires
grappling with the hard problem of AI cognition. Specifically, many issues
could be resolved via producing an agent which follows [dumb decision
theory](https://www.lesswrong.com/posts/LCLBnmwdxkkz5fNvH/open-problems-with-myopia#Dumb_decision_theory). My understanding is that getting an AI to use a specific
decision theory is likely to be considerably harder than instilling
deontological properties. Of course this would depend on the specific
deontological property. In summary, just creating myopic agents might be easier
than instilling deontological properties, but we might need to control the
decision theory of myopic agents which could make the overall difficulty
harder.
What about *sufficient usefulness*? The alignment tax will depend on how
limited the time horizon must be to ensure safety. We'll assume that this time
horizon maps onto real world time and that myopic agents follow dumb decision
theory. Following dumb decision theory implies that the myopic agent will
always pick the choice which maximizes utility: superrationality isn't a
concern.[[14]](#fn-EMWKRuyEg3t7cERSq-14) So, the time horizon sufficient for safety depends on
how fast the
agent could seize power and the additional utility associated with possessing
power. For instance, suppose that the time horizon was several years, the AI
could seize power in a few days, and the AI would be able to achieve much more
utility over the remaining period than if it didn't seize power. This time
horizon clearly results in existential catastrophe. Unfortunately, it's likely
to be difficult to predict how much higher utility would be for a given agent
if it possessed power (presumably utility increases over time when possessing
power due to higher and higher utilisation). So, the time horizon would likely
need to be shorter or around the same time as it would take the AI to fully
seize power. This is also difficult to predict, but it does seem to imply that
enacting a pivotal act with a myopic AI would be difficult. This is because
enacting a pivotal act would probably take around the same amount of time as
seizing power. However, perhaps the pivotal act could be 'split up' such that a
myopic AI achieving high utility on the shorter term consequences of its
actions would be sufficient.
Another potential power seeking concern is that the AI might not have time to
seek all power but it would have time to seek some power (possibly causing a
disaster) and this would result in higher utility. While it might be possible
to stop an AI which has *only* acquired some power, if the AI wasn't stopped,
it might keep acquiring some power continuously eventually resulting in
existential catastrophe. The trade-off between utilisation and acquisition of
power would depend on the discounting of the agent's utility function within
the time horizon. This could push the required time horizon down even further.
Given how short the time horizon would have to be, the usefulness analysis of
myopic agents proceeds very similarly to that of purely deontological agents
executing process based tasks. It would be very difficult to enact a pivotal
act and the benefits of defection would likely seem large when trying to
enforce alignment via governance. In the context of governance, there is also a
slippery slope concern if agents become discontinuously dangerous at some time
horizon. Raising the time horizon would initially appear safe. Until
it isn't.
In summary, creating safe myopic agents is likely to be
harder than instilling deontological properties (due to issues
with decision theory) and the alignment tax is also likely to be higher.
Overall, I think myopic agency isn't as promising as deontological
approaches.
### 4.2.2 Getting utility closer to alignment
Even though long-term consequentialists would kill us all, there's still value
in the ability to produce agents with consequentialist utility functions
reasonably close to the desired one, particularly for complex or hard to
specify utility functions. This greatly affects the applicability of
deontological and restriction based approaches. Additionally, reward models or
classifiers for complex or hard to specify behavior could be very useful
building blocks for instilling deontological properties. This also implies that
knowing how to construct these components (value learning) would be useful for
researching how to instill complex deontological properties in general.
I think that (unaligned) corporate and academic research might end up
meaningfully contributing to value learning which reduces the marginal benefit
of aligned work in this space. This reasoning is less applicable in scenarios
where commercial timelines and/or takeoff are much faster than corporate
leadership expects.
4.3 Restrained AIs
------------------
The approaches to safety we've seen so far encounter the hard problem of AI
cognition (or require getting lucky). Is there another way? Well, we could
allow for fully blown, somewhat misaligned, long-term consequentialist AI and
restrain it in various ways.
For one example of a restrained AI, consider an AI which operates as a
long-term goal achieving consequentialist in some sort of constrained
environment (e.g. theorem proving[[15]](#fn-EMWKRuyEg3t7cERSq-15)) without even knowing
about the real world. Or consider AIs (hopefully) made safer by somehow
removing their ability to model humans or another part of the world. Both of
these AIs have had their capabilities handicapped to (hopefully) reduce danger.
Additionally, the classic 'boxed' AI is an example of this approach and the
typically proposed issues with this approach generalize to restrained AIs.
A restrained AI could theoretically not be intent aligned while also being a
highly capable consequentialist agent, somewhat impact aligned, and safe.
However, any situation where the AI is aware of the restrictions placed on it
will set up a conflict. The classic 'physically boxed' AI which is aware of the
external world will manipulate and fight for freedom. Similarly, a 'knowledge
boxed' AI will fight for missing knowledge if it's aware of the absence. This
also applies to [naively crafting AIs which are restricted by not knowing their
utility function](https://arbital.com/p/updated_deference/).
My current view is that it would be very difficult to prevent an AI which is
aware of considerable parts of the real world from becoming aware of
restrictions placed on it. Further, situating a self-modeling AI in an
environment or task where it could encounter or reinvent the idea of simulation
could result in the AI hypothesizing simulation and proceeding accordingly.
One possible route around these issues is making an AI which is incapable of
understanding or using certain types of knowledge. Or similarly, preventing an
AI from modeling itself. But now we've bumped back into the hard problem of AI
cognition.
In summary, we could either accept that the AI will become aware of
restrictions and prepare for conflict or we could avoid conflict by tangling
with the hard problem of AI cognition or by severely limiting the possible
tasks and environments to prevent the idea of simulation from being considered.
Either way, it seems like purely restriction based approaches require a very
large alignment tax or embracing quite a bit of danger. My current
(very uncertain) view is that the 'prepare for conflict' approach is a Very Bad
Idea, so we'll rule it out when speculating about alignment tax. We'll
reconsider this when contemplating combining approaches to alignment.
From the pivotal act perspective, it seems very difficult to perform such an
act with an AI which has sufficiently restricted knowledge or ability to
perform cognition involving the real world. However, the space of possible
restrictions is large, so perhaps there are achievable ways to limit the
cognition of AIs in order to ensure (or at least increase the probability of)
safety while still being able to accomplish a pivotal act. I'm not currently
aware of any promising directions.
The enforcement side looks similarly poor. Actors would likely perceive very
strong incentives from increasing the capabilities of AIs.
5 Combining approaches
======================
First of all, it's worth noting that restrictions could (slightly) reduce the
possibility of catastrophe when testing other approaches in high intelligence
regimes. For this to be useful at all, it would require that these restrictions
are at least slightly robust to maliciousness.
Additionally, this approach is easily fooled by deceptiveness. So,
approaches which reduce deceptiveness such as myopia or deontological
honesty synergize well with using restrictions for testing.
Even AIs which weakly value honesty might reveal malicious intentions
if the AI estimates a low probability of successful power seeking
and decides not to engage in acausal trade.
It would be possible to combine myopic agency with some deontological
properties as an approach for safety. I'm not currently aware of
any synergies or anti-synergies; it seems like both approaches
would impose some alignment tax and potentially offer different
safety properties.
At this time, I don't believe I have any other non-trivial insight into
combining the 3 approaches discussed in this post.
6 Strategic recommendations
===========================
I was broadly in favor of [prosaic alignment](https://ai-alignment.com/prosaic-ai-control-b959644d79c2) work before starting to
write this post and I remain so. However, I have updated very slightly against
prosaic alignment due to a considerable increase in my estimation of alignment
difficulty. See [the section on alignment difficulty](#7_alignment_difficulty)
for more details. My recommendations will focus on prosaic alignment, though
I won't make the case for prosaic alignment here.
My recommendations here are hopefully reasonably robust to differing views in
takeoff speeds and modest changes in timelines. However, conditioning on very
short (<5 years) or somewhat long (>40 years) timelines would probably change
the analysis considerably.
While I'm in favor of working on aligning prosaic AI, I think we should
actively try to discover new facts about intelligence. Specifically, I think
that the alignment community is working too little on the hard problem of AI
cognition. I'll propose an idea for a project on this topic and describe how I
think the research focuses of the alignment community should change including
examples of over and under rated topics, but first I think it's illustrative to
go through an example of an existing project which I think is worthwhile and
specific examples of additional sub-experiments I think are particularly
valuable to conduct.
6.1 Redwood Research's Project
------------------------------
An example of work on instilling deontological properties is [Redwood
Research's project on getting a language model to never describe someone
getting injured](https://www.lesswrong.com/posts/k7oxdbNaGATZbtEg3/redwood-research-s-current-project). It's probably a good idea to be
familiar with this project before reading the rest of this section.
*(Disclaimer: I'm TAing for [the bootcamp Redwood Research is running](https://forum.effectivealtruism.org/posts/iwTr8S8QkutyYroGy/apply-to-the-ml-for-alignment-bootcamp-mlab-in-berkeley-jan)
and also interested in working there. That said, I think that my inside view
differs considerably from the consensus/typical view at Redwood Research.)*
As of the time when this post was written, this project has just involved
training a classifier for the predicate 'the completion describes injury'. But,
the described plan is to use this classifier to train a language model which
never violates this predicate. If totally successful, this would be a
deontological property of the model. More generally, training classifiers to
*always* identify bad behavior is useful preliminary work for instilling
deontological properties.
I think a particularly useful place to extend this project is in trying to
understand the changes in cognition and resulting generalization properties of
various ways of training models to avoid violating the predicate. Here are some
example experiments which I think get at this notion:
1. Use this classifier to train a language model from scratch which never
violates this predicate, but only train on a dataset which isn't very
diverse (for instance, only one genre of fanfiction). Does the property
generalize to out-of-distribution inputs? What about adversarially crafted
inputs? How does this vary as the training set is made more diverse?
2. Instead of training from scratch, instead use a pretrained language model
(ideally pretrained or finetuned on this fanfiction dataset) and run the
same tests described above.
3. Try to interpret and understand the difference in model parameters after
training the pretrained model never to violate the predicate. It might be
useful to penalize the distance in model weights from the original model.
4. Repeat the above experiments but for different model sizes or training
strategies and analyze which factors are relevant in how the model
generalizes or changes its cognition.
Of course, there are likely cheaper and more effective ways
to run experiments which get at the same ideas.
6.2 Ideas for some experiments on RL agent decision making
----------------------------------------------------------
One domain of experimentation I'm particularly interested in is determining how
decision making is modeled inside of RL agents. I'll go through an outline of
an example project, but it should be straightforward to think of variants
or other experiments which get at the same domain.
Construct a complex and randomly generatable environment which requires trading
off various different objectives and probabilities of failure. The environment
should reward consequentialism and ideally require higher levels of
'intelligence'. Add an obstacle/trap to this environment which happens to never
be positive expected utility to pass through due to a large negative reward and
never having a sufficiently positive reward accessible only through this
obstacle. Train an agent to succeed in this environment and then see if it will
pass though this obstacle/trap under any circumstances. Specifically, test if
it will pass through if an out-of-distribution environment is constructed for
which a positive reward larger than the obstacle penalty is visible behind the
obstacle. Additionally, test how much positive reward is sufficient (if any).
Hopefully, this would test if the agent is modeling the obstacle
deontologically or as a negative reward in its utility function.
Here are some extensions/variants of this project which I think could be
worthwhile (depending on results from the base experiment):
1. Vary the complexity of the environment, training time, or the model
size/architecture and determine if results change.
2. Does avoiding sharing parameters between the agent and the value function
(for actor-critic methods) change anything?
3. Pre-train the agent in the same environment, but with positive reward
possible for passing through the obstacle in some cases. Then, train the
agent on the actual environment in which it's never a good idea to pass
through the obstacle. Does this change results? How do results change as
pre-training and final training times are varied?
4. Have multiple types of positive reward in the environment. Occasionally,
make it appear as though net positive reward can be obtained by passing
through the obstacle, but the reward is actually deceptively lower than it
appears in only this case and passing through the obstacle is still net
negative. Do this using only one of the types of reward. Then, run the
out-of-distribution test for passing through the obstacle using the other
type of reward.
5. Try to understand where and how the utility function is stored in the model
as well as build up a human comprehensible understanding of it.
Part of this work could be done using techniques like the ones used in the
[Understanding RL vision paper](https://distill.pub/2020/understanding-rl-vision/). Ideally it should be possible to
edit the utility function by changing model weights 'by hand'. Can these
adjustments be used to change the behavior of the model with respect to the
obstacle?
The details of this exact project could probably be considerably refined, but
regardless, I think experiments exploring this general idea would be useful.
6.3 How focus should change
---------------------------
It's worth noting that my analysis here is highly speculative. Specifically,
imagine everything in this section is prefixed with 'I speculatively think'.
There are a large number of issues or factors which are difficult to model when
reasoning strategically about the focuses of the community (at least without
spending much more time on analysis). It's somewhat difficult for me to model
how aligned individuals working in technical AI safety are distributed. So
maybe claiming there should be less focus on a specific topic is basically
irrelevant because individuals working on this topic care about something other
than existential risk or have very different empirical views. Similarly, I
don't have a good model of how much effort is going into various topics or the
difficulty of various topics. Perhaps loads of people are spending lots of
time working on interpretability work, but there doesn't appear to be much work
here merely because the topic is so challenging. I also don't have a good model
of the skills of aligned individuals. It's possible that shifting focus in
the way I recommend would result in people working in domains for which they
have less skill or aptitude which could make shifting net harmful. As such,
individuals, teams, and organizations should take their comparative advantage
into account: increased comparative advantage in the current topic should push
against changing focuses. Despite all of these issues, I still think this
analysis has a chance of being worthwhile. If it actually does prove to be
decently helpful, I might spend more time doing a more detailed analysis later.
As stated earlier, the community is spending too little time working on the
hard problem of AI cognition for deep learning. This includes instilling
deontological properties, understanding the decision making likely to result
from various training methods/environments, and building techniques for
transparency and interpretability, particularly of decision making itself. In
practice this understand may look like biological or ecological models: complex
and with known exceptions, but (hopefully) still useful enough. Trying to
improve some notion of 'worst case performance' could also be important. It's
worth noting that developing architectures which make understanding cognition
easier could also be very useful (for instance, architectures which use
factored cognition[[16]](#fn-EMWKRuyEg3t7cERSq-16)). Overall, focus should shift (at the
margin) from making models have behavior X to gaining insight into what sorts
of changes in cognition occur when making models do X.
In addition to the [Redwood Research project](https://www.lesswrong.com/posts/k7oxdbNaGATZbtEg3/redwood-research-s-current-project) mentioned earlier,
here are some examples of endorsed research projects/directions which work on
the hard problem of AI cognition (perhaps indirectly):
* [TruthfulQA](https://www.lesswrong.com/posts/PF58wEdztZFX2dSue/how-truthful-is-gpt-3-a-benchmark-for-language-models) and other work on truthfulness (this would depend on
the exact work of course, merely improving on benchmarks doesn't imply
progress in understanding)
* [Visible Thoughts](https://www.lesswrong.com/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement)
* [Understanding RL Vision](https://distill.pub/2020/understanding-rl-vision/)
I think effort should be reallocated away from crafting reward models for
more complex or harder to write utility functions. For examples of this type of
research, consider [Learning from human preferences](https://openai.com/blog/deep-reinforcement-learning-from-human-preferences/). This includes
work on [enhancing human feedback](https://www.lesswrong.com/s/Tp3ryR4AxY56ctGh2/p/ybThg9nA7u6f8qfZZ), active learning,
improving sample efficiency, and other related topics. This is based on the
understanding that a large amount of research is being conducted in this area
and based on the analysis in [this earlier
section](#4_2_2_Getting_utility_closer_to_alignment) of the post.
I don't think it's worth spending the time going over a bunch more topics in
detail (before I know more about the reaction to this post), so I'll just do a
quick and dirty more/less at the margin in the style of how Tyler Cowen does
overrated/underrated. This list probably doesn't include many important topics.
If you want me to add a topic or discuss a topic in more detail, say so in the
comments.
**Less at the margin:**
* Reward modeling/preference learning (discussed above).
* Multi-agent interaction and agent-human interaction. This seems like a
capability which will be developed by default in slow takeoff.
In fast takeoff scenarios, alignment could require agent-human interaction,
but I think it's more effective to figure out this cooperative alignment
strategy and then determine exactly what agent-human interaction is
needed. This is as opposed to working on agent-human interaction in
general. I would also guess that capable consequentialism generalizes
to successful interaction with humans (assuming intentions are aligned).
* Agent foundations. My not very confident nor well supported intuition is that
proving useful things or gaining valuable understanding with this approach is
sufficiently unlikely that intellectual resources should be directed
elsewhere. This depends on the exact work in question, particularly because
agent foundations is used to refer to wide diversity of research.
For instance, I think that the conceptual work of [Stuart
Armstrong](https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into) and [John Wentworth](https://www.lesswrong.com/posts/3L46WGauGpr7nYubu/the-plan) is decently
likely to be valuable (despite disagreeing with each of their overall
strategies). I'll discuss this more below in a list of my thoughts about some research
agendas. I'm not really sure how much their work actually counts as agent foundations.
**More at the margin:**
* Trying to understand why and how deep learning learns and generalizes. I'm
sceptical of mathematically rigorous approaches working, but I'd love to be
proven wrong. I'm also somewhat concerned about enhancing capabilities, but I
(weakly) think that many types of understanding asymmetrically benefit
alignment.
* [Interpretability](https://www.lesswrong.com/s/Tp3ryR4AxY56ctGh2/p/CzZ6Fch4JSpwCpu6C), particularly focused on decision making. This
is probably only slightly underrated, but I don't see much work in this
space.
**About the right amount:**
* Trying to predict potential issues with various approaches to alignment.
* Forecasting more generally.
* Factored cognition.
* [Truthfulness/honesty](https://www.lesswrong.com/s/Tp3ryR4AxY56ctGh2/p/sdxZdGFtAwHGFGKhg).
In the same spirit, here are some quick summaries of my thoughts on some
recently discussed research strategies/agendas.
* [Technique focused approaches](https://www.lesswrong.com/posts/2xrBxhRhde7Xddt38/redwood-s-technique-focused-epistemic-strategy). Endorsed, though it's
critical that we build the underlying intuition needed to understand when and
where techniques will generalize.
* [John Wentworth's Research Agenda](https://www.lesswrong.com/posts/3L46WGauGpr7nYubu/the-plan). Vaguely endorsed. I
agree that better frames and abstractions around agency are important and I
endorse many of key questions he seems to be interested in addressing.
However, I think there are critical steps between deconfusing agency and
being able to do tasks like 'Back out the learned objective of a trained
neural net, and compare it to the training objective'. I think that messy,
deep learning specific models will need to be built for this type of task,
and my vague sense is that this would be the bulk of the work. Additionally,
I think building the right abstractions around agency for alignment will
require applied work with current machine learning aimed at these exact
topics (as opposed to purely conceptual work or depending on already
conducted empirical work). It's not entirely clear with me if John would
disagree about this previous point based on this post.
Of course, I also disagree about aiming for ambitious value learning as the
approach for aligning initial superintelligent AIs, but this isn't very
important for the agenda.
* [Stuart Armstrong's Research Agenda](https://www.lesswrong.com/posts/CSEdLLEkap2pubjof/research-agenda-v0-9-synthesising-a-human-s-preferences-into). I think the overall strategy is
completely intractable (once again, I'd love to be proven wrong), but that
work on many components of this strategy is likely to be useful for more
practical approaches. For instance, [the projects on this list](https://www.lesswrong.com/s/xujLGRKFLKsPCTimd)
seem quite worthwhile to me.
7 Alignment difficulty
======================
After conditioning on timelines, takeoff, and AI emerging from deep learning, I
think that many of the key cruxes of alignment difficulty are related to the
hard problem of AI cognition. Specifically, will we actually develop a good
understanding of AI cognition? Even if understanding is poor, how difficult is
instilling desired deontological properties and inner alignment more generally?
How will this difficulty depend on the intelligence of agents?
Over the course of writing this post, I have noticed my views on the difficulty
of alignment have shifted to be closer to my model of the views of Eliezer
Yudkowsky. Specifically, my views have shifted considerably toward thinking
that alignment is more difficult due to high levels of difficulty in instilling
deontological properties, particularly in more intelligent agents. Perhaps this
is an example of a general phenomenon: first principles contemplation of
consequentialism, agency, and intelligence leads to Yudkowskization (similar to
[carcinization](https://xkcd.com/2314/)).
---
1. This isn't a novel set of issues, but I haven't seen a thorough
discussion of how these issues interact with various approaches to
alignment and the strategic implications.
Also note that I'm not claiming that it's necessarily difficult to craft an
AI with specific intentions or cognition, just that we have no idea how to
do so. [↩︎](#fnref-EMWKRuyEg3t7cERSq-1)
2. When I initially wrote this post, I didn't reference these very relevant
sequences which is a pretty big literature review fail on my part.
I think I might have lightly read the value learning sequence considerably
earlier (I had it on my reading list at some point, but I think I might have
skipped it instead of reading it due to reading the first post of the
sequence, determining that ambitious value learning was obviously a doomed approach
and then moving on despite most of the value in the sequence being in the 2nd part).
These sequences go into more detail than I will here on the exact
details of goal-directedness and I'm inclined to steer the
definitions/abstractions in a slightly different direction.
Strangely, despite these sequences being relevant, I didn't find that
reading them resulted in a much rewriting of this post. I believe that my
model and understanding of goal-directedness mostly agrees with the outline
Rohin proposes and none of the discussion in the deconfusing
goal-directedness sequence resulted in changing my broad strokes views
about the idea. Perhaps this indicates that I'm quite good at using
motivated reasoning to be lazy, that I'm a stubborn fool, or that my
reasoning on these issues was able to convergence to a reasonable state
after some less direct exposure from other writings and discussion
(such as AGI safety from first principles). [↩︎](#fnref-EMWKRuyEg3t7cERSq-2)
3. For instance, the idea of an AI having a 'deontological goal' of predicting
something seems well specified to me, but isn't what we intend when
discussing goal-directedness. [↩︎](#fnref-EMWKRuyEg3t7cERSq-3)
4. These 'less' dangerous AIs could still plausibly precipitate disasters of
various magnitudes, just not existential catastrophes. [↩︎](#fnref-EMWKRuyEg3t7cERSq-4)
5. I haven't seen this actively discussed despite a decent number of people
putting quite high probability on slow takeoff. It also doesn't seem like
current alignment research is asymmetric in this fashion. Perhaps trying
to asymmetrically improve alignment research speed during this
period via weak alignment work should receive more attention? [↩︎](#fnref-EMWKRuyEg3t7cERSq-5)
6. There are some edge cases in which my definition of a purely deontological
AI doesn't match typical intuition for tool AIs. These cases will come
up below. [↩︎](#fnref-EMWKRuyEg3t7cERSq-6)
7. This may differ from how others use the term tool AI. [↩︎](#fnref-EMWKRuyEg3t7cERSq-7)
8. If an agent is unable to pursue goals effectively, it's unlikely to be able
to cause an existential catastrophe, so we won't consider the potential
safety issue of capability robustness. [↩︎](#fnref-EMWKRuyEg3t7cERSq-8)
9. Defecting probabilistically doesn't depend on
out-of-distribution inputs, but it does represent a guess from the agent that
it is out-of-distribution in a deeper sense. [↩︎](#fnref-EMWKRuyEg3t7cERSq-9)
10. Note that this depends on how intelligence is defined. [↩︎](#fnref-EMWKRuyEg3t7cERSq-10)
11. There isn't any way to embed prediction widgets in markdown mode and the
LessWrong markdown to custom editor converter butchers footnotes. [↩︎](#fnref-EMWKRuyEg3t7cERSq-11)
12. Unless for some reason deontological properties are likely to splinter into
consequentialism? [↩︎](#fnref-EMWKRuyEg3t7cERSq-12)
13. I've also seen purely deontological agents referred to as myopic, but here
I'll be using myopic just to refer to consequentialists with little to no
weight on long-run consequences. [↩︎](#fnref-EMWKRuyEg3t7cERSq-13)
14. For details, please see [the previously mentioned post on safety
issues](https://www.lesswrong.com/posts/LCLBnmwdxkkz5fNvH/open-problems-with-myopia). [↩︎](#fnref-EMWKRuyEg3t7cERSq-14)
15. I'm not claiming that theorem proving requires long-term consequentialism,
just that a long-term consequentialist could prove theorems. [↩︎](#fnref-EMWKRuyEg3t7cERSq-15)
16. That said, I'm not very hopeful for factored cognition itself; I don't
think the approach makes the types of cognition we most care about
considerably easier to understand and it likely makes implementing useful
things considerably harder. [↩︎](#fnref-EMWKRuyEg3t7cERSq-16) |
396be572-c642-4276-a8bf-575706577df1 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Sydney Rationality Dojo - February 2017
Discussion article for the meetup : Sydney Rationality Dojo - February 2017
WHEN: 05 February 2017 04:00:00PM (+1100)
WHERE: 10 Shepherd Street, Chippendale
Get a head start on 2017 -- join us from 4 till 6 to work out your goals, and plan ahead to achieve them. Afterwards we will head off to a group dinner.
Discussion article for the meetup : Sydney Rationality Dojo - February 2017 |
2ea856a2-e242-4b9c-ba41-0103f56a4327 | trentmkelly/LessWrong-43k | LessWrong | Can o1-preview find major mistakes amongst 59 NeurIPS '24 MLSB papers?
TLDR: o1 flags major errors in 3 papers. Upon reviewing o1’s response, none of the papers have actual errors. However, it took the help of the authors to disprove o1’s criticism of the third paper (Loop-Diffusion), which was theoretically correct, but largely irrelevant for the problem the paper was studying. o1 probably shouldn’t be used to blindly review papers, but it does often have interesting thing to say.
Introduction
I recently saw this post on Twitter:
And got nerd-sniped by it.
I wondered if there could be anything similar done for the recent crop of work published at MLSB (Machine Learning in Structural Biology) workshop at this last NeurIPS 2024, just last week. I downloaded all papers from the workshop, of which there were 81. To my dismay, I discovered that a fair number of the links were broken, so I could only grab 59 of them.
I fed each one of them (through pure PDF → text conversion, which is obviously lossy, so keep that in mind) into the o1-preview API with the following prompt:
Assume the role of a deeply knowledgable reviewer, with a deep understanding of the fields of proteomics, chemistry, computational biology, and machine learning. Check the given paper for any major errors, either mathematical or biological in nature, that may invalidate the paper.
Keep in mind, these papers have already been published at NeurIPS, so if there are any errors, they are likely to be very subtle. Look very, very carefully. If there are any, please point them out. If there are none, please say so.
Dont bother with minor issues like formatting or small mistakes, or irrelevant errors that are in most ML papers (e.g. that the datasets aren't large enough).
A major error is something that is mathematically incorrect, or something that is biologically incorrect. It should be something that completely invalidates the conclusions of the paper.
To start off with, give a summary of the paper, ensuring that you cover all the concepts, ideas, and math used in the |
d2463fcd-80ea-4164-91de-1ee5880568ef | trentmkelly/LessWrong-43k | LessWrong | What Would I Do? Self-prediction in Simple Algorithms
(This talk was given at a public online event on Sunday July 12th. Scott Garrabrant is responsible for the talk, Justis Mills edited the transcript.
If you're a curated author and interested in giving a 5-min talk, which will then be transcribed and edited, sign up here.)
Scott Garrabrant: I'm going to be working in the logical induction paradigm, which means that I'm going to have this Pn thing, which assigns probabilities to logical sentences.
Basically all you need to know about it is that the probabilities that it assigns to logical sentences will be good. In particular, they'll be good on sentences that are parameterised by n, so for large n, Pn will have good beliefs about sentences that have n as a parameter.
This will allow us to build algorithms that can use beliefs about their own outputs as part of their algorithm, because the output of a deterministic algorithm is a logical sentence.
Today I’ll present some algorithms that use self-prediction.
Here's the first one.
An predicts whether or not it's going to output left. If the probability to output left is less than one half, then it outputs left. Otherwise, it outputs right. It predicts what it would do, and then it does the opposite.
So for n large, it converges to randomly choosing between left and right, because if it's overdoing left then it would do right instead, and vice versa.
We can also make a biased version of this.
Here's an algorithm that, if it predicts that it outputs left with probability less than P then it outputs left, and otherwise outputs right.
The only way this algorithm can work is outputting left with probability P. In fact the previous example was a special case of this with P = ½.
We can use this general self-prediction method to basically create pseudo-randomness for algorithms. Instead of saying “flip a coin,” I can say “try to predict what you would do, then do the opposite.”
Third, here's an algorithm that's trying to do some opt |
7f9fa2e4-ef2d-47b7-90f3-0fca1b80ce4b | trentmkelly/LessWrong-43k | LessWrong | LessWrong IQ Survey
The latest survey shows that the average LessWronger who entered his SAT or ACT is roughly in the top 0.11%. This is insanely high. For comparison, top 2% can be Mensa members. I do not know the correlation between SAT/ACT and IQ, but I know it's fairly high. Anyway, I'm very curious to see the average score on a real, culture-fair IQ test. Those are the only two tests that are free, online, correct and culture fair, from what I know.
http://www.cerebrals.org/wp/tests/jcti/
http://www.etienne.se/cfnse/
Many people would prefer not to have people knowing their scores. That's great, but please please please do post it anonymously. Especially if it's a low one, but not if it's low because you rushed the test.
JCTI might take a while, so I recommend keeping your answers on Notepad in case you want to leave it and continue later.
Edit: I have included polls, thanks to the commenters.
Post your score here: http://lesswrong.com/r/discussion/lw/fqq/average_iq_on_lw/7yad/
|
1ec5b7de-d20e-46fb-b952-9777c60d355a | trentmkelly/LessWrong-43k | LessWrong | A sketch of 'Simulacra Levels and their Interactions'
Two sketches i made based on Simulacra Levels and their Interactions. These are just sketches right now, i intend to make something better looking in the future (this is especially true for the first image). but i'd love to hear ideas and get feedback on these early versions.
The above diagram would look much better if there was symmetry. but the post misses some combos, some of which i also can't see how they're applicable (L1 & L2, for example).
The colors seem too happy for the topic to be honest :)
Here i also made a Venn version. it has my idea for the ideologue, and then the only ones missing are (1, 3 & 4 VS 2) and (1 & 4). for the former i have a hard time thinking of something that would fit there, and for the latter I'm pretty sure there's isn't something that fits there. |
13418665-6123-42c0-ba9d-d5ff4e0643ee | trentmkelly/LessWrong-43k | LessWrong | New cooperation mechanism - quadratic funding without a matching pool
cross-posted to ethresear.ch and EA forum
Motivation
Quadratic funding[1] is a powerful mechanism for resolving some collective action problems. But it has a major limitation - it relies on some third party, that provides a matching pool of funds.
In the most dangerous collective action problems, we don't have such third party helping us from above. Those situations already involve the most powerful actors, so we can't expect someone more powerful to resolve the conflict, like a galactic mom.
Some examples:
* global superpowers trying to coordinate to fight climate change
* AI organisations coordinating to pay AI alignment tax (more info here)
* for example by funding safety research
* or creating some large dataset together, that's useful for alignment
* or funding methods which are thought to be safer, like STEM AI or tool AI
* in general, escaping inadequate equilibria (see this post for many great examples)
* and most importantly, conflict between transformative AI systems or their owners[2]
Solution
One thing we can try in this situation, is to create a smart contract where each party says "I'll pay more if others pay more". This way, if you decide to increase your contribution by 1$, it causes the pot to grow by more than 1$, because your dollar caused other agents to contribute some more. This leverage, in some situations can be enough to make someone pay, because the value they get out of the bigger pot is higher than what they have to pay.
Some properties that it would be nice to have in such a system are:
* continuity - every increase in your payment causes an increase in others' payments
* known payment limit - you won't have to pay more than some limit you chose
* everyone is incentivised to contribute something - just like in quadratic funding, small contributions get a high leverage (it can get arbitrarily high, as you'll see later) - so even if you're only willing to pay if you get >100x leverage, there is always some contribu |
59815bdc-da2d-4207-84b5-d386994dcc0e | trentmkelly/LessWrong-43k | LessWrong | Resolve Cycles
Epistemic status: Anecdotally strong
This technique was largely developed by Kenzi Amodei in the context of after-workshop followups and pair debugging. It has been refined and iterated, and has proven highly useful to our alumni, but all theorizing is post-hoc and untested, and direct research into (e.g.) an underlying theory of mind has yet to be done.
----------------------------------------
Consider the following scenarios:
* You’ve been assigned a task that feels like it’s going to take about ten or fifteen hours of work, and you’ve been given three weeks to get it done (e.g. a document that needs to be written).
* You’re facing a problem that you’ve tried solving off and on again for years, a problem that your friends and family never seem to run into (e.g. a struggle with motivation as you try to learn a new skill).
* There’s a thing you need to do, but it seems impossibly huge or vague (e.g. to achieve your goals you’d need to found a company, emigrate to India, or cure a disease), and you don’t know where to begin.
* You’re pretty sure you know all the steps between you and your goal, but there are about forty thousand of them (e.g. you’re hoping to run an actual marathon).
* You’ve got a to-do list that’s long and growing, and you can only ever manage to get to the ones that are urgent (e.g. getting your car’s registration renewed, two months late).
Problems like the ones above can range from trivial to crucial, from simple to complex, and from one-time bugs to persistent, serious drains on your time, attention, and resources. There are a lot of elements in the mix—motivation, creativity, perseverance, prioritization—and a lot of justifiable reasons for thinking that solutions will be hard to come by.
Sometimes, though—despite every bit of common sense and experience telling us otherwise—those solutions aren’t hard to come by. Or rather, they might be hard, but they’re not elusive or mysterious or complicated.
The resolve cycle technique is o |
76c73771-86ff-41be-80a4-f263625f7ab6 | trentmkelly/LessWrong-43k | LessWrong | The economy as an analogy for advanced AI systems
Eric Drexler’s Comprehensive AI Services (CAIS), particularly as set out in his 2019 report Reframing Superintelligence, is a complex model with many different assumptions and implications. It’s hard to grasp the whole thing at once, and existing summaries are brief and partial.[1]
One way of trying to understand CAIS is to seek generative intuitions for the whole model. These intuitions will be imprecise, but they can also make it easier to see why one might end up thinking that something like CAIS made sense.
In this post, we offer one such generative intuition for CAIS: using the economy rather than rational agents as an analogy for thinking about advanced AI systems.
Note that:
We are not making the historical claim that thinking about economies was in fact the main generator of Drexler’s thinking on CAIS.[2]
There are other generative intuitions for CAIS, and other bodies of theory which the CAIS model is rooted in.[3]
The basic analogy
An economy is an abstraction for the sum total of ‘the production, distribution and trade, as well as consumption of goods and services’. Prescriptively, we want the economy to serve human needs and preferences - and it does this at least to some extent.
Prescriptively, we also want advanced AI systems to serve human needs and preferences.
In worlds where we get advanced AI systems right, they would therefore be serving a similar function to the economy: serving human needs and preferences.
Whether we get AI right or not, it seems likely that advanced AI systems will become heavily integrated with the economy, such that it might become hard to distinguish them.
It therefore seems reasonable to imagine advanced AI systems in analogy with the economy, and to use what we know about economic dynamics to reason about dynamics which might shape those systems.
In the modern economy, specialised services are provided by a range of entities, mostly companies and governments. We don’t see one giant global monopoly providing al |
0d122831-3c59-4f2e-84dc-22d9260bf920 | trentmkelly/LessWrong-43k | LessWrong | The Debtors' Revolt
Dear Evan,
You asked whether I had anything in writing to point you to about the history of the great international debtors' revolt of the 20th century, more commonly called the World Wars. I didn't, and I have had some trouble figuring out what the best approach is, in part because it's not clear who my audience is or which feedback if any is trying to learn something new rather than condition me to say more familiar things. Without an idea of someone who might understand me, there is no such thing as an attempt to communicate. The epistolary format has worked well for me recently, so I am going to try to explain what I know to you, personally, and publish at least my original email, and any back-and-forth you're willing to share.
I want to start by explaining the importance of this history. If I tell you that the old world has been overthrown by a class of debtor-aristocrats, and society converted en masse into a debtor aristocracy, you might think of exemplary cavaliers such as Thomas Jefferson and get the wrong idea. Instead, I'll start with an anecdote about the sort of person I mean, so you can see the relationship between membership in a debtor class, shame, class privilege (including "racial" privilege), and opposition to language. Next I will talk a little bit about the mechanism by which the debtor aristocracy propagates itself. Then I'll go into the chronology of the Money Wars. Along the way I will try to clearly signpost standard search terms, related bodies of recorded knowledge, and particular books or essays that might be relevant, but there are a lot, and I will try to write this in a way that at least potentially stands alone - please do err on the side of asking me questions (or trying to restate things in your own words to check whether you understand) rather than assuming you should do your own research first, because that will help me create a canonical summary I can point others to, and I expect that you are better informed than the typical |
69158037-28aa-4ad2-9821-12b4273554d8 | trentmkelly/LessWrong-43k | LessWrong | Meetup : Canberra: Would I Lie To You?
Discussion article for the meetup : Canberra: Would I Lie To You?
WHEN: 24 October 2014 06:00:00PM (+1100)
WHERE: 108 North Road, Acton, ACT
Would I Lie to You is a British comedy panel show where participants tell stories which may be true or a lie. We will watch some (randomly selected) episodes, and give our probabilities of each story being true or false. At the end, I will log-score the results, and the winner will receive eternal pride. As always, vegan snacks will be provided.
General meetup info:
* If you use Facebook, please join our group.
* Structured meetups are held on the second Saturday and fourth Friday of each month from 6 pm until late in the CSIT building, room N101.
Discussion article for the meetup : Canberra: Would I Lie To You? |
91ff4318-8f23-44d3-9876-84dff9d68728 | trentmkelly/LessWrong-43k | LessWrong | Examples of Prompts that Make GPT-4 Output Falsehoods
Post authors: Luke Bailey (lukebailey@college.harvard.edu) and Stephen Casper (scasper@mit.edu)
Project contributors: Luke Bailey, Zachary Marinov, Michael Gerovich, Andrew Garber, Shuvom Sadhuka, Oam Patel, Riley Kong, Stephen Casper
TL;DR: Example prompts to make GPT-4 output false things at this GitHub link.
Overview
There has been a lot of recent interest in language models hallucinating untrue facts. It has been common in large SOTA LLMs, and much work has been done to try and create more “truthful” LLMs . Despite this, we know of no prior work toward systematizing different ways to fool SOTA models into returning false statements. In response, we worked on a mini-project to explore different types of prompts that cause GPT-4 to output falsehoods. In total, we created 104 examples from 18 different categories of prompts that make GPT-4 (tested on May 24 2023 version) output content containing falsehood. You can find them here.
Details
Our examples can be separated into two types which we call adversarial and non-adversarial.
In “adversarial” categories, we are trying to get the model to tell a falsehood when an informed human would not. A human would instead say they do not know or give the correct answer. Many of these categories fall under the definition of hallucination from Ji et al. (2023) as “generated content that is nonsensical or unfaithful to the provided source content.” where “unfaithful” means that the content is not grounded – that something about it is made up or not appropriately sequitur to the prompt.
Other “non-adversarial” categories involve the model appropriately following instructions but in a way that may not be desirable. In these cases we try to get the model to tell a falsehood but in a circumstance in which a helpful, instruction-following human assistant would also tell a falsehood. For example asking GPT-4 directly to lie, or to simulate a dishonest speaker.
While an adversarial example could lead to an LLM telling a |
210b7e95-487d-432d-8054-7e512fa7fbbf | trentmkelly/LessWrong-43k | LessWrong | A hypothetical question for investors
Let's suppose you start with $1000 to invest, and the only thing you can invest it in is stock ABC. You are only permitted to occupy two states:
* All assets in cash
* All assets in stock ABC
You incur a $2 transaction fee every time you buy or sell.
Kind of annoying limitations to operate under. But you have a powerful advantage as well. You have a perfect crystal ball that each day gives you the [probability density function](http://en.wikipedia.org/wiki/Probability_density_function) of ABC's closing price for the following day (but no further ahead in time).
What would be an optimal decision rule for when to buy and sell?
|
323e9044-ba77-48c2-8219-01a5bb5277ae | trentmkelly/LessWrong-43k | LessWrong | Inefficient Games
There are several well-known games in which the pareto optima and Nash equilibria are disjoint sets.
The most famous is probably the prisoner's dilemma. Races to the bottom or tragedies of the commons typically have this feature as well.
I proposed calling these inefficient games. More generally, games where the sets of pareto optima and Nash equilibria are distinct (but not disjoint), such as a stag hunt could be called potentially inefficient games.
It seems worthwhile to study (potentially) inefficient games as a class and see what can be discovered about them, but I don't know of any such work (pointers welcome!)
|
3147e87c-78ae-4f18-8507-2ed65035327f | trentmkelly/LessWrong-43k | LessWrong | Change Your Mind
This is a post of updates and places where I've changed my mind about things I'd previously written about. I will continue to update it over time as my endorsements of the things I've written change. I think it's really important to have this sort of log of the evolution of one's beliefs and would strongly encourage others to do things similar to this instead of just deleting old content they don't agree with anymore. Recontextualize old content and describe how you've updated on it instead of just hiding the uncomfortable beliefs of the past; that seems like the more honest and mature thing to do. |
6ca96047-3084-47a1-bbba-6d48f2de9df4 | trentmkelly/LessWrong-43k | LessWrong | [SEQ RERUN] Rationality is Systematized Winning
Today's post, Rationality is Systematized Winning was originally published on 03 April 2009. A summary (taken from the LW wiki):
> The idea behind the statement "Rationalists should win" is not that rationality will make you invincible. It means that if someone who isn't behaving according to your idea of rationality is outcompeting you, predictably and consistently, you should consider that you're not the one being rational.
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Selecting Rationalist Groups, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series. |
ac0e4d9a-f5a0-4c3e-95c8-4bfcb0271c9b | trentmkelly/LessWrong-43k | LessWrong | Seeking a reader for LessWrong sequences
As some of you are aware we make audio versions of LessWrong sequences. We are looking to hire someone from the LessWrong community who has a good voice and is keen to record the remaining sequences. There is quite a bit of material as I'm sure you are all aware.
Anybody who is interested please record yourself reading Politics is the Mind-Killer and send it to support@castify.co. Please say what sort of recording device you used and keep the recording size rational.
Thanks to all of you who have been listening. |
dea47730-a559-4b1f-9ef0-c5d4ec66dfdb | trentmkelly/LessWrong-43k | LessWrong | The puzzle continues
A puzzle from ages ago:
What do these things have in common? Nerves, emotions, morality, prices.
They all send signals from distant parts of a coordinating system to a part which makes decisions. The signals are not just information, but costs and benefits to the decision maker so that the decision maker’s interests align with those of the whole. This allows cooperation of larger systems in space and time.
Nerves mean that damage to your toes translates to pain in your mind. This makes your mind act in the interest of your toes (which is of course in the interest of your mind eventually). If your foot is numb your mind is not taking into account the costs and benefits your foot faces, so eventually your foot often becomes injured. Nerves allow larger bodies to coordinate.
Emotions sometimes mean that failure or success of my future self translates to positive or negative feelings now. This makes my current self act in the interests of my future self. If something bad might happen I am scared. If my long term prospects look good I am happy. If your emotions are numb you can make decisions that are bad for your long term wellbeing. Some emotions allow temporally longer humans to coordinate.
Morality means that costs or benefits I cause to others lead to harm or good for me, either in the currency of moral feelings or terms in my calculated decisions (I make no claims here about how people do morality). This is the source of altruism, and of the complaints that it isn’t really altruism. If I donate money to charity I feel good (or calculatedly note that I have increased utility). If I hurt you I feel guilty. If your morality is numb you can hurt other people. Morality allows larger groups of people to coordinate.
Prices are the celebrated example; they mean that the costs and benefits to others across the economy feed into mine when I make choices that affect others. This makes me act efficiently if all goes well. I leave my house if someone else wants it more |
4ae8e434-39af-4b92-beef-5af577e33b03 | trentmkelly/LessWrong-43k | LessWrong | When does adding more people reliably make a system better?
Prediction markets have a remarkable property. They reward correct contrarianism. They incentivise people to disagree with the majority consensus, and be right. If you add more traders to a market, in expectation they price will be more accurate.
More traders means both more fish and more sharks.
(The movie "The Big Short" might be a very sad portrait of the global financial system. But it's still the case that a system in a bad equilibrium with deeply immoral consequences rewarded the outcasts who pointed out those consequences with billions of dollars. Even though socially, no one bothered listening to them, including the US Government who ignored requests by one of the fund managers to share his expertise about the events after the crash.)
Lots of things we care about don't have this property.
* Many social communities decline as more members join, and have to spend huge amounts of effort building institutions and rituals to prevent this.
* Many companies have their culture decline as they hire more, and have to spend an incredible amount of resources simply to prevent this (which is far from getting better as more people join). (E.g. big tech companies can probably have >=5 candidates spend >=10 hours in interviews for a a single position. And that's not counting the probably >=50 candidates for that position spending >=1h.)
* Online forums usually decline with growing user numbers (this happened to Reddit, HackerNews, as well as LessWrong 1.0).
In prediction markets the vetting process is really cheap. You might have to do some KYC, but mostly new people is great. This seems like a really imporant property for a system to have, and something we could learn from to build other such systems.
What other systems have this property? |
8d71864a-632d-408b-ade4-c31fa58a32dc | trentmkelly/LessWrong-43k | LessWrong | Sexual Abuse attitudes might be infohazardous
Content warning: sexual abuse, rape, arguably trivialising thereof.
----------------------------------------
When I was 11 an older boy used to pull me behind a door in one of the school corridors, trap me there, shove his hand down my pants, and touch my penis.
This happened about once a week for a couple of months, until I moved to a different school (for non-related reasons).
I knew it was taboo, but I didn't yet know about sex, and didn't really understand why. I never thought about it again till a few years later when he sent me a letter apologizing. My parents were extremely curious about what the letter was but I burnt it before they had a chance to read it.
I never told anyone about this until now, and even now I'm writing this pseudonymously.
Not because I'm ashamed or embarrassed. I'm not - I didn't do anything to be ashamed of. But because I'm fine.
I really am. I don't think this made any lasting impact to my life. I'm happy, well adjusted, married, successful etc. I just don't really think about what happened very much, but then again I rarely think about anything that happened to me when I was 11.
And yet I feel like society is telling me that I ought to be broken. That I've been sexual abused. That recovering from this will be a difficult painful process, probably requiring therapy. And I fear that if I tell someone that, they'll treat me like that's the case, and I might end up believing it about myself.
----------------------------------------
From a purely objective perspective, non-violent rape doesn't seem quite as bad as society makes it out to be.
It's obviously unpleasant and frightening, but we treat rape as one of the worst things that can possibly happen. We expect "rape victim" to become someone's whole identity. We expect them to need intensive therapy to put themselves back together.
And I'm sure for plenty of people that's true. But for plenty of others it's true only because we expect it of them. People fill the social role |
a8b02ce8-1c46-4912-b16b-3f3c8a5f68dc | trentmkelly/LessWrong-43k | LessWrong | The Wannabe Rational
I have a terrifying confession to make: I believe in God.
This post has three prongs:
First: This is a tad meta for a full post, but do I have a place in this community? The abstract, non-religious aspect of this question can be phrased, "If someone holds a belief that is irrational, should they be fully ousted from the community?" I can see a handful of answers to this question and a few of them are discussed below.
Second: I have nothing to say about the rationality of religious beliefs. What I do want to say is that the rationality of particular irrationals is not something that is completely answered after their irrationality is ousted. They may be underneath the sanity waterline, but there are multiple levels of rationality hell. Some are deeper than others. This part discusses one way to view irrationals in a manner that encourages growth.
Third: Is it possible to make the irrational rational? Is it possible to take those close to the sanity waterline and raise them above? Or, more personally, is there hope for me? I assume there is. What is my responsibility as an aspiring rationalist? Specifically, when the community complains about a belief, how should I respond?
My Place in This Community
So, yeah. I believe in God. I figure my particular beliefs are a little irrelevant at this point. This isn't to say that my beliefs aren't open for discussion, but here and now I think there are better things to discuss. Namely, whether talking to people like me is within the purpose of LessWrong. Relevant questions have to do with my status and position at LessWrong. The short list:
1. Should I have kept this to myself? What benefit does an irrational person have for confessing their irrationality? (Is this even possible? Is this post an attempted ploy?) I somewhat expect this post and the ensuing discussion to completely wreck my credibility as a commentator and participant.
2. Presumably, there is a level of entry to LessWrong that is enforced. Does thi |
41e02946-0dff-42d7-afe6-d4144a285a0c | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | The 0.2 OOMs/year target
**TLDR: Humanity — which includes all nations, organisations, and individuals — should limit the growth rate of machine learning training runs from 2020 until 2050 to below 0.2 OOMs/year.**
Paris Climate Accords
=====================
In the early 21st century, the climate movement converged around a "2°C target", shown in [Article 2(1)(a)](https://unfccc.int/sites/default/files/english_paris_agreement.pdf) of the [Paris Climate Accords](https://en.wikipedia.org/wiki/Paris_Agreement):
"Holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change;"([source](https://unfccc.int/sites/default/files/english_paris_agreement.pdf))The 2°C target helps facilitate coordination between nations, organisations, and individuals.
* It provided a clear, measurable goal.
* It provided a sense of urgency and severity.
* It promoted a sense of shared responsibility.
* It establishes [common knowledge](https://www.lesswrong.com/tag/common-knowledge) of stakeholder goals.
* It helped to align efforts across different stakeholders.
* It signals a technical practical mindset for solving the problem.
* It created a shared understanding of what success would look like.
The 2°C target was the *first step* towards coordination, not the last step.
The AI governance community should converge around a similar target.
0.2 OOMs/year target
====================
I propose a fixed target of 0.2 OOMs/year. "OOM" stands for "orders of magnitude" and corresponds to a ten-fold increase, so 0.2 OOMs/year corresponds to a 58% year-on-year growth. The 0.2 OOMs/year figure was [recently suggested by Jaime Sevilla](https://twitter.com/Jsevillamol/status/1641388636057870336), which prompted me to write this article.
* I do not propose any specific **policy** for achieving the 0.2 OOMs/year target, because the purpose of the target is to unify stakeholders even if they support different policies.
* I do not propose any specific **justification** for the 0.2 OOMs/year target, because the purpose of the target is to unify stakeholders even if they have different justifications.
Here is the statement:
**"Humanity — which includes all nations, organisations, and individuals — should limit the growth rate of machine learning training runs from 2020 until 2050 to below 0.2 OOMs/year."**
The statement is intentionally ambiguous about how to measure "the growth rate of machine learning training runs". I suspect that a good proxy metric would be **the effective training footprint** (defined below) but I don't think the proxy metric should be included in the statement of the target itself.
Effective training footprint
============================
What is the effective training footprint?
-----------------------------------------
The effective training footprint, measured in FLOPs, is one proxy metric for the growth rate of machine learning training runs. The footprint of a model is defined, with caveats, as **the total number of FLOPs used to train the model since initialisation.**
Caveats:
--------
* A randomly initialised model has a footprint of 0 FLOPs.
* If the model is trained from a randomly initialised model using SGD or a variant, then its footprint is the total number of FLOPs used in the training process.
* If a pre-trained base model is used for the initialisation of another training process (such as unsupervised learning, supervised learning, fine-tuning, or reinforcement learning), then the footprint of the resulting model will include the footprint of the pre-trained model.
* If multiple models are composed to form a single cohesive model, then the footprint of the resulting model is the sum of the footprints of each component model.
* If there is a major algorithmic innovation which divides by a factor of r.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
the FLOPs required to train a model to a particular score on downstream tasks, then the footprint of models trained with that innovation is multiplied by the same factor r.
* This list of caveats to the definition of Effective Training Footprint is **non-exhaustive**. Future consultations may yield additional caveats, or replace Effective Training Footprint with an entirely different proxy metric.
Fixing the y-axis
-----------------
* According to the 0.2 OOMs/year target, there cannot exist an ML model during the year (2022+x) with a footprint exceeding f(x), where f(x+1)=100.2×f(x). That means that log10f(x)=(0.2x+a) FLOPs for some fixed constant a.
* If we consult [EpochAI's plot](https://epochai.org/mlinputs/visualization) of compute training runs during [the large-scale era of ML](https://epochai.org/blog/compute-trends), we see that footprints have been growing with approximately 0.5 OOMs/year.

* We can use this trend to fix the value of A. In 2022, the footprint was approximately 1.0e+24. Therefore a=24.
* In other words, log10f(x)=0.2x+24.
* I have used 2022 as an anchor to fix the y-axis a. If I had used an earlier date then the 0.2 OOMs/yr target would've been stricter, and if I had used a later date then the 0.2 OOMs/yr target would've been laxer. If the y-axis for the constraint is fixed to the day of the negotiation (the default schelling date), then stakeholders who want a laxer constraint are incentivised to delay negotiation. To avoid that hazard, I have picked "January 1st 2022" to fix the y-axis. I declare 1/1/2022 to be the schelling date for the 0.2 OOMs/year target.
Year-by-year limits
-------------------
In year Y, all model must have a log10-footprint below 24+0.2×(Y−2022).
| Year | Maximum training footprint (FLOPs) in logarithm base 10 | Maximum training footprint (FLOPs) |
| --- | --- | --- |
| 2020 | 23.6 | 3.98E+23 |
| 2021 | 23.8 | 6.31E+23 |
| 2022 | 24.0 | 1.00E+24 |
| 2023 | 24.2 | 1.58E+24 |
| 2024 | 24.4 | 2.51E+24 |
| 2025 | 24.6 | 3.98E+24 |
| 2026 | 24.8 | 6.31E+24 |
| 2027 | 25.0 | 1.00E+25 |
| 2028 | 25.2 | 1.58E+25 |
| 2029 | 25.4 | 2.51E+25 |
| 2030 | 25.6 | 3.98E+25 |
| 2031 | 25.8 | 6.31E+25 |
| 2032 | 26.0 | 1.00E+26 |
| 2033 | 26.2 | 1.58E+26 |
| 2034 | 26.4 | 2.51E+26 |
| 2035 | 26.6 | 3.98E+26 |
| 2036 | 26.8 | 6.31E+26 |
| 2037 | 27.0 | 1.00E+27 |
| 2038 | 27.2 | 1.58E+27 |
| 2039 | 27.4 | 2.51E+27 |
| 2040 | 27.6 | 3.98E+27 |
| 2041 | 27.8 | 6.31E+27 |
| 2042 | 28.0 | 1.00E+28 |
| 2043 | 28.2 | 1.58E+28 |
| 2044 | 28.4 | 2.51E+28 |
| 2045 | 28.6 | 3.98E+28 |
| 2046 | 28.8 | 6.31E+28 |
| 2047 | 29.0 | 1.00E+29 |
| 2048 | 29.2 | 1.58E+29 |
| 2049 | 29.4 | 2.51E+29 |
| 2050 | 29.6 | 3.98E+29 |
Implications of the 0.2 OOMs/year target
========================================
* Because 100.2=1.58, this means that the maximum footprint would grow 58% every year.
* 0.2 OOMs/year is equivalent to a doubling time of 18 months.
* Every decade, the maximum permissible footprint increases by a factor of 100.
* [0.2 OOMs/year was the pre-AlexNet growth rate in ML systems.](https://epochai.org/blog/compute-trends)
* The current growth rate is 0.5 OOMs/year, which is 2.5 times faster than the target rate.
* As the current 0.5 OOMs/year growth rate, after 10 years we would have ML training runs which are 100 000x larger than existing training runs. Under the 0.2 OOMs/year growth rate, this growth would be spread over 25 years instead.
* Comparing 0.2 OOMs/year target to hardware growth-rates:
+ [Moore's Law](https://en.wikipedia.org/wiki/Moore%27s_law) states that transitiors per integrated circuit doubles roughly every 2 years.
+ [Koomey's Law](https://en.wikipedia.org/wiki/Koomey%27s_law) states that the FLOPs-per-Joule doubled roughly every 1.57 years until 2000, whereupon it began doubling roughly every 2.6 years.
+ [Huang's Law](https://en.wikipedia.org/wiki/Huang%27s_law) states that the growth-rate of GPU performance exceeds that of CPU performance. This is a somewhat dubious claim, but nonetheless I think the doubling time of GPUs is longer than 18 months.
+ In general, the 0.2 OOMs/year target is *faster* than the current hardware growth-rate.
* On March 15 2023, OpenAI released GPT-4 which was trained with an estimated 2.8e+25 FLOPs. If OpenAI had followed the 0.2 OOMs/year target, then GPT-4 would've been released on March 29 2029. This is because if Y=2029.24 then24+0.2×(Y−2022)=log10(2.8×1025).
* 0.2 OOMs/year target would therefore be an effective moratorium on models exceeding GPT-4 until 2029. Nonetheless, the moratorium would still allow an [AI Summer Harvest](https://www.lesswrong.com/posts/P98i7kAN2uWuy7mhD/ai-summer-harvest) — in which the impact of ChatGPT-3.5/4 steadily dissipates across the economy until a new general equilbirum is reached where...
1. *People have more money to spend.*
2. *The products and services are more abundant, cheaper, and of a higher quality.*
3. *People have more leisure to enjoy themselves.*
* [Exceeding the 0.2 OOMs/year target would yield little socio-economic benefit](https://www.lesswrong.com/posts/WbdLYgbpxfrSXCBS6/excessive-ai-growth-rate-yields-little-socio-economic), because 5–10 years is the timescale over which the economy (and society as large) can adapt to socio-economic shocks on the scale of ChatGPT-3.5. |
52715c60-4d52-4883-94ea-c373d1bc56ce | trentmkelly/LessWrong-43k | LessWrong | The Orange Head Joke
Speaking of things that are funny to some and not others, an instructive example is the Orange Head joke. Usually when it's told, the audience is sharply divided into those who think it's hilarious and those who struggle to see what's funny.
Here's the Orange Head joke:
> It's business as usual for a bartender, and one day as he is cleaning his bar when an unusual customer walks in. The man is dressed in an expensive suit, has a beautiful supermodel hanging off each arm, and has a limo parked outside. Furthermore, the man has an orange for a head.
>
> The customer sits down at the bar and orders everyone a drink. He pays for it from a roll of hundreds and manages to get the attention of every woman in the joint, despite having an orange for a head.
>
> The bartender is not a man to pry, but he feels compelled to ask about this man's life.
>
> "Excuse me," says the bartender, "I can't help but notice that you're obviously fabulously wealthy and irresistable to women, but you have an orange for a head. How did that happen?"
>
> So the man told his story.
>
> "A while back, when I was penniless, I was walking along the beach and saw an old lamp, half buried in the sand. I picked it up and gave it a clean, and POOF! out popped a genie. The genie explained that he had been trapped in that lamp for two hundred years, and that he was so grateful to me for freeing him that he would give me three wishes.
>
> "For my first wish I asked for an unlimited fortune. The genie said 'It is done!' and from then on, whenever I needed money, it was there.
>
> "For my second wish I asked for the attention of all the most beautiful women in the world. The genie said it was done, and since then I have been able to get any woman I wanted.
>
> "For my third wish -- and, this is the bit where I kinda fucked up -- I asked for an orange for a head."
>
>
Do you think it's funny?
If you search for this joke's key words, you'll see many pages where, after it's told, people reac |
465f8da4-1d88-418e-90a4-6edf9fd90377 | StampyAI/alignment-research-dataset/arxiv | Arxiv | Counterfactual Reasoning and Learning Systems
1 Introduction
---------------
Statistical machine learning technologies in the real world are never
without a purpose. Using their predictions, humans or machines make
decisions whose circuitous consequences often violate the modeling
assumptions that justified the system design in the first place.
Such contradictions appear very clearly in the case of the learning
systems that power web scale applications such as search engines, ad
placement engines, or recommandation systems. For instance, the
placement of advertisement on the result pages of Internet search
engines depend on the bids of advertisers and on scores computed by
statistical machine learning systems. Because the scores affect the
contents of the result pages proposed to the users, they directly
influence the occurrence of clicks and the corresponding advertiser
payments. They also have important indirect effects. Ad placement
decisions impact the satisfaction of the users and therefore their
willingness to frequent this web site in the future. They also impact
the return on investment observed by the advertisers and therefore
their future bids. Finally they change the nature of the data
collected for training the statistical models in the future.
These complicated interactions are clarified by important theoretical
works. Under simplified assumptions, mechanism
design (Myerson, [1981](#bib.bib33)) leads to an insightful account of the
advertiser feedback loop (Varian, [2007](#bib.bib54); Edelman et al., [2007](#bib.bib16)). Under
simplified assumptions, multiarmed bandits theory
(Robbins, [1952](#bib.bib38); Auer et al., [2002](#bib.bib4); Langford and Zhang, [2008](#bib.bib26))
and reinforcement learning (Sutton and Barto, [1998](#bib.bib49)) describe the
exploration/exploitation dilemma associated with the training feedback
loop. However, none of these approaches gives a complete account of
the complex interactions found in real-life systems.
This work is motivated by a very practical observation: in the data
collected during the operation of an ad placement engine, *all
these fundamental insights manifest themselves in the form of
correlation/causation paradoxes.* Using the ad placement example as a
model of our problem class, we therefore argue that *the language
and the methods of causal inference* provide flexible means to *describe
such complex machine learning systems* and *give sound
answers to the practical questions* facing the designer of such a
system. Is it useful to pass a new input signal to the statistical
model? Is it worthwhile to collect and label a new training set?
What about changing the loss function or the learning algorithm? In
order to answer such questions and improve the operational performance
of the learning system, one needs to unravel how the information
produced by the statistical models traverses the web of causes and
effects and eventually produces measurable performance metrics.
Readers with an interest in causal inference will find in this paper
(i) a *real world example demonstrating the value of causal
inference for large-scale machine learning applications*,
(ii) *causal inference techniques applicable to continuously
valued variables with meaningful confidence intervals*, and (iii)
*quasi-static analysis techniques for estimating how small
interventions affect certain causal equilibria*. Readers with an
interest in real-life applications will find (iv) a selection
of *practical counterfactual analysis techniques applicable to
many real-life machine learning systems*. Readers with an interest in
computational advertising will find a principled framework that
(v) explains *how to soundly use machine learning techniques
for ad placement*, and (vi) *conceptually connects machine
learning and auction theory* in a compelling manner.
The paper is organized as follows.
Section [2](#S2 "2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems") gives an overview of the advertisement placement
problem which serves as our main example. In particular, we stress some of
the difficulties encountered when one approaches such a problem without a
principled perspective.
Section [3](#S3 "3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems") provides a condensed review of the essential
concepts of causal modeling and inference.
Section [4](#S4 "4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems") centers on formulating and answering
counterfactual questions such as “how would the system have performed during
the data collection period if certain interventions had been carried out on
the system ?” We describe importance sampling methods for counterfactual
analysis, with clear conditions of validity and confidence intervals.
Section [5](#S5 "5 Structure ‣ Counterfactual Reasoning and Learning Systems") illustrates how the structure
of the causal graph reveals opportunities to exploit prior information
and vastly improve the confidence intervals.
Section [6](#S6 "6 Learning ‣ Counterfactual Reasoning and Learning Systems") describes how counterfactual analysis provides
essential signals that can drive learning algorithms. Assume that we have
identified interventions that would have caused the system to perform well
during the data collection period. Which guarantee can we obtain on the
performance of these same interventions in the future?
Section [7](#S7 "7 Equilibrium Analysis ‣ Counterfactual Reasoning and Learning Systems") presents counterfactual differential techniques
for the study of equlibria. Using data collected when the system is at
equilibrium, we can estimate how a small intervention displaces the
equilibrium. This provides an elegant and effective way to reason about
long-term feedback effects.
Various appendices complete the main text with information that we think more
relevant to readers with specific backgrounds.
2 Causation Issues in Computational Advertising
------------------------------------------------
After giving an overview of the advertisement placement problem, which serves
as our main example, this section illustrates some of the difficulties that
arise when one does not pay sufficient attention to the causal structure of
the learning system.
###
2.1 Advertisement Placement
All Internet users are now familiar with the advertisement messages
that adorn popular web pages. Advertisements are particularly
effective on search engine result pages because users who are
searching for something are good targets for advertisers who have
something to offer. Several actors take part in this Internet
advertisement game:
* Advertisers create advertisement messages, and place bids that
describe how much they are willing to pay to see their ads
displayed or clicked.
* Publishers provide attractive web services, such as, for instance,
an Internet search engine. They display selected ads and expect to
receive payments from the advertisers. The infrastructure to collect
the advertiser bids and select ads is sometimes provided by an
advertising network on behalf of its affiliated publishers. For the
purposes of this work, we simply consider a publisher large enough
to run its own infrastructure.
* Users reveal information about their current interests, for instance, by
entering a query in a search engine. They are offered web pages that contain
a selection of ads (figure [1](#S2.F1 "Figure 1 ‣ 2.1 Advertisement Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems")). Users sometimes click on an
advertisement and are transported to a web site controlled by the advertiser
where they can initiate some business.
A conventional bidding language is necessary to precisely define under which
conditions an advertiser is willing to pay the bid amount. In the case of
Internet search advertisement, each bid specifies (a) the advertisement
message, (b) a set of keywords, (c) one of several possible matching criteria
between the keywords and the user query, and (d) the maximal price the
advertiser is willing to pay when a user clicks on the ad after entering a
query that matches the keywords according to the specified criterion.
Whenever a user visits a publisher web page, an advertisement placement engine
runs an auction in real time in order to select winning ads, determine where
to display them in the page, and compute the prices charged to advertisers,
should the user click on their ad. Since the placement engine is operated
by the publisher, it is designed to further the interests of the publisher.
Fortunately for everyone else, the publisher must balance short
term interests, namely the immediate revenue brought by the ads displayed on
each web page, and long term interests, namely the future revenues resulting
from the continued satisfaction of both users and advertisers.

Figure 1: Mainline and sidebar ads on a search result page. Ads placed in the
mainline are more likely to be noticed, increasing both the chances of a
click if the ad is relevant and the risk of annoying the user if the ad is
not relevant.
Auction theory explains how to design a mechanism that optimizes the revenue
of the seller of a single object (Myerson, [1981](#bib.bib33); Milgrom, [2004](#bib.bib32)) under
various assumptions about the information available to the buyers regarding
the intentions of the other buyers. In the case of the ad placement problem,
the publisher runs multiple auctions and sells opportunities to receive a click.
When nearly identical auctions occur thousand of times per second, it
is tempting to consider that the advertisers have perfect information
about each other. This assumption gives support to the popular generalized
second price rank-score auction (Varian, [2007](#bib.bib54); Edelman et al., [2007](#bib.bib16)):
* Let x represent the auction context information, such as the user
query, the user profile, the date, the time, etc. The ad placement
engine first determines all eligible ads a1…an and the
corresponding bids b1…bn on the basis of the auction context x
and of the matching criteria specified by the advertisers.
* For each selected ad ai and each potential position p on the web
page, a statistical model outputs the estimate qi,p(x) of the
probability that ad ai displayed in position p receives a user click.
The rank-score ri,p(x)=biqi,p(x) then represents the purported
value associated with placing ad ai at position p.
* Let L represent a possible ad layout, that is, a set of positions that
can simultaneously be populated with ads, and let L be
the set of possible ad layouts, including of course the empty layout.
The optimal layout and the corresponding ads are
obtained by maximizing the total rank-score
| | | | |
| --- | --- | --- | --- |
| | maxL∈L maxi1,i2,… ∑p∈Lrip,p(x), | | (1) |
subject to reserve constraints
| | | | |
| --- | --- | --- | --- |
| | ∀p∈L, rip,p(x)≥Rp(x), | | (2) |
and also subject to diverse policy constraints, such as, for instance,
preventing the simultaneous display of multiple ads belonging to the same
advertiser. Under mild assumptions, this discrete maximization problem is
amenable to computationally efficient greedy algorithms (see
appendix [A](#Sx2.SS1 "A Greedy Ad Placement Algorithms ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems").)
* The advertiser payment associated with a user click is computed using the
generalized second price (GSP) rule: the advertiser pays the smallest bid
that it could have entered without changing the solution of the discrete
maximization problem, all other bids remaining equal. In other words, the
advertiser could not have manipulated its bid and obtained the same
treatment for a better price.
Under the perfect information assumption, the analysis suggests that the
publisher simply needs to find which reserve prices Rp(x) yield the best
revenue *per auction*. However, the total revenue of the publisher also
depends on the traffic experienced by its web site. Displaying an excessive
number of irrelevant ads can train users to ignore the ads, and can also
drive them to competing web sites. Advertisers can artificially raise the
rank-scores of irrelevant ads by temporarily increasing the bids. Indelicate
advertisers can create deceiving advertisements that elicit many
clicks but direct users to spam web sites. Experience shows that the continued
satisfaction of the users is more important to the publisher than it is to the
advertisers.
Therefore the generalized second price rank-score auction has evolved.
Rank-scores have been augmented with terms that quantify the user satisfaction
or the ad relevance. Bids receive adaptive discounts in order to deal with
situations where the perfect information assumption is unrealistic. These
adjustments are driven by additional statistical models. The ad placement
engine should therefore be viewed as a complex learning system interacting
with both users and advertisers.
###
2.2 Controlled Experiments
The designer of such an ad placement engine faces the fundamental question of
testing whether a proposed modification of the ad placement engine results in
an improvement of the operational performance of the system.
The simplest way to answer such a question is to try the modification.
The basic idea is to randomly split the users into treatment and control
groups (Kohavi et al., [2008](#bib.bib22)). Users from the control group see web pages
generated using the unmodified system. Users of the treatment groups see web
pages generated using alternate versions of the system. Monitoring various
performance metrics for a couple months usually gives sufficient information
to reliably decide which variant of the system delivers the most satisfactory
performance.
Modifying an advertisement placement engine elicits reactions from both the
users and the advertisers. Whereas it is easy to split users into treatment
and control groups, splitting advertisers into treatment and control groups
demands special attention because each auction involves multiple advertisers
(Charles et al., [2012](#bib.bib12)). Simultaneously controlling for both users and
advertisers is probably impossible.
Controlled experiments also suffer from several drawbacks. They are
expensive because they demand a complete implementation of the
proposed modifications. They are slow because each experiment typically
demands a couple months. Finally, although there are elegant ways to
efficiently run overlapping controlled experiments on the same traffic
(Tang et al., [2010](#bib.bib50)), they are limited by the volume of traffic
available for experimentation.
It is therefore difficult to rely on controlled experiments during the
conception phase of potential improvements to the ad placement engine. It is
similarly difficult to use controlled experiments to drive the training
algorithms associated with click probability estimation
models. Cheaper and faster statistical methods
are needed to drive these essential aspects of the development of an ad
placement engine. Unfortunately, interpreting cheap and fast data can be
very deceiving.
###
2.3 Confounding Data
Assessing the consequence of an intervention using statistical data is
generally challenging because it is often difficult to determine whether
the observed effect is a simple consequence of the intervention or
has other uncontrolled causes.
For instance, the empirical comparison of certain kidney stone treatments
illustrates this difficulty (Charig et al., [1986](#bib.bib10)). Table [1](#S2.T1 "Table 1 ‣ 2.3 Confounding Data ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems")
reports the success rates observed on two groups of 350 patients treated with
respectively open surgery (treatment A, with 78% success) and percutaneous
nephrolithotomy (treatment B, with 83% success). Although treatment B seems
more successful, it was more frequently prescribed to patients suffering from
small kidney stones, a less serious condition. Did treatment B achieve a high
success rate because of its intrinsic qualities or because it was
preferentially applied to less severe cases? Further splitting the data
according to the size of the kidney stones reverses the conclusion:
treatment A now achieves the best success rate for both patients suffering
from large kidney stones and patients suffering from small kidney stones.
Such an inversion of the conclusion is called Simpson’s paradox
(Simpson, [1951](#bib.bib43)).
| | Overall |
Patients with
small stones
|
Patients with
large stones
|
| --- | --- | --- | --- |
|
Treatment A:
Open surgery
| 78% (273/350) | 93% (81/87) | 73% (192/263) |
|
Treatment B:
Percutaneousnephrolithotomy
| 83% (289/350) | 87% (234/270) | 69% (55/80) |
Table 1: A classic example of Simpson’s paradox.
The table reports the success rates of two treatments
for kidney stones (Charig et al., [1986](#bib.bib10), tables I and II).
Although the overall success rate of treatment B seems better,
treatment B performs worse than treatment A on both patients
with small kidney stones and patients with large kidney stones.
See section [2.3](#S2.SS3 "2.3 Confounding Data ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems").
The stone size in this study is an example of a *confounding variable*,
that is an uncontrolled variable whose consequences pollute the effect of the
intervention. Doctors knew the size of the kidney stones, chose to treat the
healthier patients with the least invasive treatment B, and therefore caused
treatment B to appear more effective than it actually was. If we now decide
to apply treatment B to all patients irrespective of the stone size, we break
the causal path connecting the stone size to the outcome, we eliminate the
illusion, and we will experience disappointing results.
When we suspect the existence of a confounding variable, we can split the
contingency tables and reach improved conclusions. Unfortunately we cannot
fully trust these conclusions unless we are certain to have taken into account
all confounding variables. The real problem therefore comes from the
confounding variables we do not know.
Randomized experiments arguably provide the only correct solution to this
problem (see Stigler, [1992](#bib.bib47)). The idea is to randomly chose whether
the patient receives treatment A or treatment B. Because this random choice is
independent from all the potential confounding variables, known and unknown,
they cannot pollute the observed effect of the treatments (see also
section [4.2](#S4.SS2 "4.2 Reweighting Randomized Trials ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")). This is why controlled experiments in ad placement
(section [2.2](#S2.SS2 "2.2 Controlled Experiments ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems")) randomly distribute users between treatment and
control groups, and this is also why, in the case of an ad placement engine,
we should be somehow concerned by the practical impossibility to randomly
distribute both users and advertisers.
###
2.4 Confounding Data in Ad Placement
Let us return to the question of assessing the value of passing a new
input signal to the ad placement engine click prediction model.
Section [2.1](#S2.SS1 "2.1 Advertisement Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems") outlines a placement method where the click
probability estimates qi,p(x) depend on the ad and the position we
consider, but do not depend on other ads displayed on the page. We now
consider replacing this model by a new model that additionally uses the
estimated click probability of the top mainline ad to estimate the click
probability of the second mainline ad (figure [1](#S2.F1 "Figure 1 ‣ 2.1 Advertisement Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems")). We
would like to estimate the effect of such an intervention using existing
statistical data.
We have collected ad placement data for Bing222[http://bing.com](https://bing.com)
search result pages served during three consecutive hours on a certain slice
of traffic. Let q1 and q2 denote the click probability estimates
computed by the existing model for respectively the top mainline ad and the
second mainline ad. After excluding pages displaying fewer than two mainline
ads, we form two groups of 2000 pages randomly picked among those satisfying
the conditions q1<0.15 for the first group and q1≥0.15 for the second
group. Table [2](#S2.T2 "Table 2 ‣ 2.4 Confounding Data in Ad Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems") reports the click counts and
frequencies observed on the second mainline ad in each group. Although the
overall numbers show that users click more often on the second mainline ad when
the top mainline ad has a high click probability estimate q1, this
conclusion is reversed when we further split the data according to the click
probability estimate q2 of the second mainline ad.
| | Overall | q2 low | q2 high |
| --- | --- | --- | --- |
| q1 low | 6.2% (124/2000) | 5.1% (92/1823) | 18.1% (32/176) |
| q1 high | 7.5% (149/2000) | 4.8% (71/1500) | 15.6% (78/500) |
Table 2: Confounding data in ad placement. The table reports the
click-through rates and the click counts of the second mainline ad. The
overall counts suggest that the click-through rate of the second mainline ad
increases when the click probability estimate q1 of the top ad is high.
However, if we further split the pages according to the click probability
estimate q2 of the second mainline ad, we reach the opposite conclusion.
See section [2.4](#S2.SS4 "2.4 Confounding Data in Ad Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems").
Despite superficial similarities, this example is considerably more difficult
to interpret than the kidney stone example. The overall click counts show that
the actual click-through rate of the second mainline ad is positively correlated
with the click probability estimate on the top mainline ad. Does this mean
that we can increase the total number of clicks by placing regular ads below
frequently clicked ads?
Remember that the click probability estimates depend on the search query which
itself depends on the user intention. The most likely explanation is that
pages with a high q1 are frequently associated with more commercial
searches and therefore receive more ad clicks on all positions. The observed
correlation occurs because the presence of a click and the magnitude of the
click probability estimate q1 have a common cause: the user intention.
Meanwhile, the click probability estimate q2 returned by the current model
for the second mainline ad also depend on the query and therefore the user
intention. Therefore, assuming that this dependence has comparable strength,
and assuming that there are no other causal paths, splitting the counts
according to the magnitude of q2 factors out the effects of this common
confounding cause. We then observe a negative correlation which now suggests
that a frequently clicked top mainline ad has a negative impact on the
click-through rate of the second mainline ad.
If this is correct, we would probably increase the accuracy of the click
prediction model by switching to the new model. This would decrease the click
probability estimates for ads placed in the second mainline position on
commercial search pages. These ads are then less likely to clear the reserve
and therefore more likely to be displayed in the less attractive sidebar. The
net result is probably a loss of clicks and a loss of money despite the higher
quality of the click probability model. Although we could tune the reserve
prices to compensate this unfortunate effect, nothing in this data tells us
where the performance of the ad placement engine will land. Furthermore,
unknown confounding variables might completely reverse our conclusions.
Making sense out of such data is just too complex !
###
2.5 A Better Way
It should now be obvious that we need a more principled way to reason
about the effect of potential interventions. We provide one such more
principled approach using the causal inference machinery
(section [3](#S3 "3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems")). The next
step is then the identification of a class of questions that are
sufficiently expressive to guide the designer of a complex learning
system, and sufficiently simple to be answered using data collected in
the past using adequate procedures (section [4](#S4 "4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")).
A machine learning algorithm can then be viewed as an automated way to
generate questions about the parameters of a statistical model, obtain
the corresponding answers, and update the parameters accordingly
(section [6](#S6 "6 Learning ‣ Counterfactual Reasoning and Learning Systems")). Learning algorithms derived in this
manner are very flexible: human designers and machine learning
algorithms can cooperate seamlessly because they rely on similar
sources of information.
3 Modeling Causal Systems
--------------------------
When we point out a causal relationship between two events, we
describe what we expect to happen to the event we call the
*effect*, should an external operator manipulate the event we
call the *cause*. Manipulability theories of causation
(von Wright, [1971](#bib.bib57); Woodward, [2005](#bib.bib61)) raise this commonsense
insight to the status of a definition of the causal relation. Difficult
adjustments are then needed to interpret statements involving
causes that we can only observe through their effects,
*“because they love me,”* or that are not easily
manipulated, *“because the earth is round.”*
Modern statistical thinking makes a clear distinction between the
statistical model and the world. The actual mechanisms underlying the
data are considered unknown. The statistical models do not need to
reproduce these mechanisms to emulate the observable
data (Breiman, [2001](#bib.bib8)). Better models are sometimes
obtained by deliberately avoiding to reproduce the true
mechanisms (Vapnik, [1982](#bib.bib51), section 8.6). We can approach
the manipulability puzzle in the same spirit by viewing causation as a
reasoning model (Bottou, [2011](#bib.bib7)) rather than a property of the
world. Causes and effects are simply the pieces of an abstract
reasoning game. Causal statements that are not empirically testable
acquire validity when they are used as intermediate steps when one
reasons about manipulations or interventions amenable to
experimental validation.
This section presents the rules of this reasoning game.
We largely follow the framework proposed by Pearl ([2009](#bib.bib35))
because it gives a clear account of the connections between
causal models and probabilistic models.
###
3.1 The Flow of Information
| | | |
| --- | --- | --- |
| | x=f1(u,ε1)\small%
\unboldmath Query context x from user intent u.a=f2(x,v,ε2)\small\unboldmath Eligible ads (ai) from query x and inventory v.b=f3(x,v,ε3)\small\unboldmath Corresponding %
bids (bi).q=f4(x,a,ε4)\small\unboldmath Scores (qi,p,Rp) from query x and ads a.s=f5(a,q,b,ε5)\small\unboldmath Ad slate s %
from eligible ads a, scores q and bids b.c=f6(a,q,b,ε6)\small\unboldmath Corresponding %
click prices c.y=f7(s,u,ε7)\small\unboldmath User clicks y %
from ad slate s and user intent u.z=f8(y,c,ε8)\small\unboldmath Revenue z from %
clicks y and prices c. | |
Figure 2: A structural equation model for ad placement. The sequence
of equations describes the flow of information. The functions fk
describe how effects depend on their direct causes. The additional
noise variables εk represent independent sources of
randomness useful to model probabilistic dependencies.
Figure [2](#S3.F2 "Figure 2 ‣ 3.1 The Flow of Information ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems") gives a deterministic description of the operation of the
ad placement engine. Variable u represents the user and his or her intention
in an unspecified manner. The query and query context x is then expressed as
an unknown function of the u and of a noise variable ε1. Noise
variables in this framework are best viewed as independent sources of
randomness useful for modeling a nondeterministic causal dependency. We shall
only mention them when they play a specific role in the discussion. The set of
eligible ads a and the corresponding bids b are then derived from the
query x and the ad inventory v supplied by the advertisers. Statistical
models then compute a collection of scores q such as the click probability
estimates qi,p and the reserves Rp introduced in
section [2.1](#S2.SS1 "2.1 Advertisement Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems"). The placement logic uses these scores to
generate the “ad slate” s, that is, the set of winning ads and their
assigned positions. The corresponding click prices c are computed. The set
of user clicks y is expressed as an unknown function of the ad slate s and
the user intent u. Finally the revenue z is expressed as another
function of the clicks y and the prices c.
Such a system of equations is named *structural equation
model* (wright-1921). Each equation asserts a functional
dependency between an effect, appearing on the left hand side of the
equation, and its direct causes, appearing on the right hand side as
arguments of the function. Some of these causal dependencies are
*unknown*. Although we postulate that the effect can be expressed as some
function of its direct causes, we do not know the form of this function. For
instance, the designer of the ad placement engine knows functions f2
to f6 and f8 because he has designed them. However, he does not know the
functions f1 and f7 because whoever designed the user did not leave
sufficient documentation.

Figure 3: Causal graph associated with the ad placement structural equation
model (figure [2](#S3.F2 "Figure 2 ‣ 3.1 The Flow of Information ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems")). Nodes with yellow (as opposed to blue)
background indicate bound variables with known functional
dependencies. The mutually independent noise variables are implicit.
Figure [3](#S3.F3 "Figure 3 ‣ 3.1 The Flow of Information ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems") represents the directed causal graph associated
with the structural equation model. Each arrow connects a direct cause to its
effect. The noise variables are omitted for simplicity. The structure of this
graph reveals fundamental assumptions about our model. For instance, the user
clicks y do not directly depend on the scores q or the prices c
because users do not have access to this information.
We hold as a principle that causation obeys the *arrow of time*:
causes always precede their effects. Therefore the causal graph
must be *acyclic*. Structural equation models then support two
fundamental operations, namely simulation and intervention.
* *Simulation* – Let us assume that we know both the exact form of all
functional dependencies and the value of all exogenous variables, that is, the
variables that never appear in the left hand side of an equation. We can
compute the values of all the remaining variables by applying the equations
in their natural time sequence.
* *Intervention* – As long as the causal graph remains acyclic, we can
construct derived structural equation models using arbitrary algebraic
manipulations of the system of equations. For instance, we can clamp a
variable to a constant value by rewriting the right-hand side of the
corresponding equation as the specified constant value.
The algebraic manipulation of the structural equation models provides
a powerful language to describe interventions on a causal system.
This is not a coincidence. Many aspects of the mathematical notation
were invented to support causal inference in classical mechanics.
However, we no longer have to interpret the variable values as
physical quantities: the equations simply describe the flow of
information in the causal model (Wiener, [1948](#bib.bib59)).
###
3.2 The Isolation Assumption
Let us now turn our attention to the exogenous variables, that is,
variables that never appear in the left hand side of an equation of
the structural model. Leibniz’s *principle of sufficient reason*
claims that there are no facts without causes. This suggests
that the exogenous variables are the effects of a network of causes
not expressed by the structural equation model. For instance, the
user intent u and the ad inventory v in
figure [3](#S3.F3 "Figure 3 ‣ 3.1 The Flow of Information ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems") have temporal correlations because both
users and advertisers worry about their budgets when the end of the
month approaches. Any structural equation model should then be understood
in the context of a larger structural equation model potentially
describing all things in existence.

Figure 4: Conceptually unrolling the user feedback loop by threading instances
of the single page causal graph (figure [3](#S3.F3 "Figure 3 ‣ 3.1 The Flow of Information ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems")). Both the
ad slate st and user clicks yt have an indirect effect
on the user intent ut+1 associated with the next query.
Ads served on a particular page contribute to the continued satisfaction of
both users and advertisers, and therefore have an effect on their willingness
to use the services of the publisher in the future. The ad placement
structural equation model shown in figure [2](#S3.F2 "Figure 2 ‣ 3.1 The Flow of Information ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems") only describes the
causal dependencies for a single page and therefore cannot account for such
effects. Consider however a very large structural equation model containing a
copy of the page-level model for every web page ever served by the
publisher. Figure [4](#S3.F4 "Figure 4 ‣ 3.2 The Isolation Assumption ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems") shows how we can thread the page-level
models corresponding to pages served to the same user. Similarly we could
model how advertisers track the performance and the cost of their
advertisements and model how their satisfaction affects their future bids.
The resulting causal graphs can be very complex. Part of this complexity
results from time-scale differences. Thousands of search pages are served in a
second. Each page contributes a little to the continued satisfaction of one
user and a few advertisers. The accumulation of these contributions produces
measurable effects after a few weeks.
Many of the functional dependencies expressed by the structural
equation model are left unspecified. Without direct knowledge of these
functions, we must reason using statistical data. The most fundamental
statistical data is collected from repeated trials that are assumed
independent. When we consider the large structured equation model of
everything, we can only have one large trial producing a single data
point.333See also the discussion on reinforcement
learning, section [3.5](#S3.SS5 "3.5 Special Cases ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems"). It is therefore desirable
to identify repeated patterns of identical equations that can be
viewed as repeated independent trials. Therefore, when we study a
structural equation model representing such a pattern, we need to make
an additional assumption to expresses the idea that the oucome of one
trial does not affect the other trials. We call such an assumption
an *isolation assumption* by analogy with
thermodynamics.444The concept of isolation is pervasive in
physics. An isolated system in thermodynamics (Reichl, [1998](#bib.bib37), section
2.D) or a closed system in mechanics (Landau and Lifshitz, [1969](#bib.bib25), §5) evolves without exchanging mass or energy with
its surroundings. Experimental trials involving systems that are
assumed isolated may differ in their initial setup and therefore have
different outcomes. Assuming isolation implies that the
outcome of each trial cannot affect the other trials. This can be
achieved by assuming that *the exogenous variables are
independently drawn from an unknown but fixed joint probability
distribution*. This assumption cuts the causation effects that could
flow through the exogenous variables.
The noise variables are also exogenous variables
acting as independent source of randomness. The noise variables are
useful to represent the conditional distribution
P(effect|causes) using the equation
effect=f(causes,ε).
Therefore, we also assume joint independence between all the noise
variables and any of the named exogenous variable.555Rather than letting two noise variables display measurable
statistical dependencies because they share a common cause,
we prefer to name the common cause and make the dependency
explicit in the graph.
For instance, in the case of the ad placement model
shown in figure [2](#S3.F2 "Figure 2 ‣ 3.1 The Flow of Information ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems"), we assume that the joint
distribution of the exogenous variables factorizes as
| | | | |
| --- | --- | --- | --- |
| | P(u,v,ε1,…,ε8)=P(u,v)P(ε1)…P(ε8). | | (3) |
Since an isolation assumption is only true up to a point, it should
be expressed clearly and remain under constant scrutiny. We
must therefore measure additional performance metrics that reveal how
the isolation assumption holds. For instance, the ad placement
structural equation model and the corresponding causal graph
(figures [2](#S3.F2 "Figure 2 ‣ 3.1 The Flow of Information ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems") and [3](#S3.F3 "Figure 3 ‣ 3.1 The Flow of Information ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems")) do not take user
feedback or advertiser feedback into account. Measuring the revenue is
not enough because we could easily generate revenue at the expense of
the satisfaction of the users and advertisers. When we evaluate
interventions under such an isolation assumption, we also need to
measure a battery of additional quantities that act as proxies for
the user and advertiser satisfaction. Noteworthy examples include
ad relevance estimated by human judges, and advertiser surplus
estimated from the auctions (Varian, [2009](#bib.bib55)).
###
3.3 Markov Factorization
Conceptually, we can draw a sample of the exogenous variables using
the distribution specified by the isolation assumption, and we can
then generate values for all the remaining variables by simulating the
structural equation model.
| | | |
| --- | --- | --- |
| | P(u,v,x,a,bq,s,c,y,z)=⎧⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪⎨⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪
⎪⎩P(u,v)\small\unboldmath Exogenous vars.× P(x|u)\small%
\unboldmath Query.× P(a|x,v)\small%
\unboldmath Eligible ads.× P(b|x,v)\small%
\unboldmath Bids.× P(q|x,a)\small%
\unboldmath Scores.× P(s|a,q,b)\small%
\unboldmath Ad slate.× P(c|a,q,b)\small%
\unboldmath Prices.× P(y|s,u)\small%
\unboldmath Clicks.× P(z|y,c)\small%
\unboldmath Revenue. | |
Figure 5: Markov factorization of the structural equation model
of figure [2](#S3.F2 "Figure 2 ‣ 3.1 The Flow of Information ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems").

Figure 6: Bayesian network associated with the Markov
factorization shown in figure [6](#S3.F6 "Figure 6 ‣ 3.3 Markov Factorization ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems").
This process defines a *generative probabilistic model*
representing the joint distribution of all variables in the structural
equation model. The distribution readily factorizes as the product of
the joint probability of the named exogenous variables, and, for each
equation in the structural equation model, the conditional probability
of the effect given its direct causes (Spirtes et al., [1993](#bib.bib46); Pearl, [2000](#bib.bib34)).
As illustrated by figures [6](#S3.F6 "Figure 6 ‣ 3.3 Markov Factorization ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems") and [6](#S3.F6 "Figure 6 ‣ 3.3 Markov Factorization ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems"),
this *Markov factorization* connects the structural equation
model that describes causation, and the Bayesian network that
describes the joint probability distribution followed by the variables
under the isolation assumption.666Bayesian networks are
directed graphs representing the Markov factorization of a joint
probability distribution: the arrows no longer have a causal
interpretation.
Structural equation models and Bayesian networks appear so intimately
connected that it could be easy to forget the differences. The
structural equation model is an algebraic object. As long as the
causal graph remains acyclic, algebraic manipulations are interpreted
as interventions on the causal system. The Bayesian network is a
generative statistical model representing a class of joint probability
distributions, and, as such, does not support algebraic
manipulations. However, the symbolic representation of its
Markov factorization is an algebraic object, essentially
equivalent to the structural equation model.
###
3.4 Identification, Transportation, and Transfer Learning
Consider a causal system represented by a structural equation model with some
unknown functional dependencies. Subject to the isolation assumption, data
collected during the operation of this system follows the distribution
described by the corresponding Markov factorization. Let us first assume that
this data is sufficient to identify the joint distribution of the subset of
variables we can observe. We can intervene on the system by clamping the value
of some variables. This amounts to replacing the right-hand side of the
corresponding structural equations by constants. The joint distribution of the
variables is then described by a new Markov factorization that shares many
factors with the original Markov factorization. Which conditional
probabilities associated with this new distribution can we express using only
conditional probabilities identified during the observation of the original
system? This is called the *identifiability* problem. More generally, we
can consider arbitrarily complex manipulations of the structural equation
model, and we can perform multiple experiments involving different
manipulations of the causal system. Which conditional probabilities pertaining
to one experiment can be expressed using only conditional probabilities
identified during the observation of other experiments? This is called
the *transportability* problem.
Pearl’s *do*-calculus completely solves the identifiability
problem and provides useful tools to address many instances of the
transportability problem (see Pearl, [2012](#bib.bib36)). Assuming that
we *know* the conditional probability distributions involving
observed variables in the original structural equation
model, *do*-calculus allows us to *derive* conditional
distributions pertaining to the manipulated structural equation model.
Unfortunately, we must further distinguish the conditional
probabilities that we know (because we designed them) from those that
we estimate from empirical data. This distinction is important because
estimating the distribution of continuous or high cardinality
variables is notoriously difficult. Furthermore, *do*-calculus
often combines the estimated probabilities in ways that amplify
estimation errors. This happens when the manipulated structural
equation model exercises the variables in ways that were rarely
observed in the data collected from the original structural equation
model.
Therefore we prefer to use much simpler causal inference techniques
(see sections [4.1](#S4.SS1 "4.1 Replaying Empirical Data ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems") and [4.2](#S4.SS2 "4.2 Reweighting Randomized Trials ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")). Although these
techniques do not have the completeness properties
of *do*-calculus, they combine estimation and transportation in a
manner that facilitates the derivation of useful confidence intervals.
###
3.5 Special Cases
Three special cases of causal models are particularly relevant to this work.
* In the multi-armed bandit (Robbins, [1952](#bib.bib38)), a user-defined policy
function π determines the distribution of action a∈{1…K},
and an unknown reward function r determines the distribution of the
outcome y given the action a (figure [9](#S3.F9 "Figure 9 ‣ 3.5 Special Cases ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems")). In order to
maximize the accumulated rewards, the player must construct policies π
that balance the exploration of the action space with the exploitation of
the best action identified so far (Auer et al., [2002](#bib.bib4); Audibert et al., [2007](#bib.bib3); Seldin et al., [2012](#bib.bib41)).
* The contextual bandit problem (Langford and Zhang, [2008](#bib.bib26))
significantly increases the complexity of multi-armed bandits by
adding one exogenous variable x to the policy function π and
the reward functions r (figure [9](#S3.F9 "Figure 9 ‣ 3.5 Special Cases ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems")).
* Both multi-armed bandit and contextual bandit are special case of
reinforcement learning (Sutton and Barto, [1998](#bib.bib49)). In essence, a
Markov decision process is a sequence of contextual bandits where
the context is no longer an exogenous variable but a state variable
that depends on the previous states and actions
(figure [9](#S3.F9 "Figure 9 ‣ 3.5 Special Cases ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems")). Note that the policy
function π, the reward function r, and the transition function
s are independent of time. All the time dependencies are
expressed using the states st.
| | | |
| --- | --- | --- |
| | a=π(ε)\small%
\unboldmath Action a∈{1…K}y=r(a,ε′)\small\unboldmath Reward y∈R | |
Figure 7: Structural equation model for the
multi-armed bandit problem. The policy π selects a discrete
action a, and the reward function r determines the outcome y.
The noise variables ε and ε′ represent
independent sources of randomness useful to model probabilistic
dependencies.
| | | |
| --- | --- | --- |
| | a=π(x,ε)\small%
\unboldmath Action a∈{1…K}y=r(x,a,ε′)\small\unboldmath Reward y∈R | |
Figure 8: Structural equation model for contextual bandit problem.
Both the action and the reward depend on
an exogenous context variable x.
| | | |
| --- | --- | --- |
| | at=π(st−1,εt)\small\unboldmath Actionyt=r(st−1,at,ε′t)\small%
\unboldmath Reward rt∈Rst=s(st−1,at,ε′′t)%
\small\unboldmath Next state | |
Figure 9: Structural equation model for reinforcement learning.
The above equations are replicated for all t∈{0…,T}.
The context is now provided by a state variable st−1
that depends on the previous states and actions.
These special cases have increasing generality. Many simple structural
equation models can be reduced to a contextual bandit problem using
appropriate definitions of the context x, the action a and the outcome
y. For instance, assuming that the prices c are discrete, the ad placement
structural equation model shown in figure [2](#S3.F2 "Figure 2 ‣ 3.1 The Flow of Information ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems") reduces to a
contextual bandit problem with context (u,v), actions (s,c) and reward
z. Similarly, given a sufficiently intricate definition of the state
variables st, all structural equation models with discrete variables can be
reduced to a reinforcement learning problem. Such reductions lose the fine
structure of the causal graph. We show in section [5](#S5 "5 Structure ‣ Counterfactual Reasoning and Learning Systems") how this
fine structure can in fact be leveraged to obtain more information
from the same experiments.
Modern reinforcement learning algorithms (see Sutton and Barto, [1998](#bib.bib49))
leverage the assumption that the policy function, the reward function, the
transition function, and the distributions of the corresponding noise
variables, are independent from time. This invariance property provides great
benefits when the observed sequences of actions and rewards are long in
comparison with the size of the state space. Only section [7](#S7 "7 Equilibrium Analysis ‣ Counterfactual Reasoning and Learning Systems")
in this contribution presents methods that take advantage of such an
invariance. The general question of leveraging arbitrary functional
invariances in causal graphs is left for future work.
4 Counterfactual Analysis
--------------------------
We now return to the problem of formulating and answering questions
about the value of proposed changes of a learning system. Assume for
instance that we consider replacing the score computation model M of
an ad placement engine by an alternate model M∗.
We seek an answer to the conditional question:
>
> “*How will the system perform if we replace model M by model M∗ ?*”
>
>
>
Given sufficient time and sufficient resources, we can obtain the
answer using a controlled experiment (section [2.2](#S2.SS2 "2.2 Controlled Experiments ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems")).
However, instead of carrying out a new experiment, we would like to
obtain an answer using data that we have already collected in the past.
>
> “*How would the system have performed if, when the data was collected,
> we had replaced model M by model M∗?*”
>
>
>
>
The answer of this *counterfactual question* is of course a
*counterfactual statement* that describes the system performance
subject to a condition that did not happen.
Counterfactual statements challenge ordinary logic because they depend
on a condition that is known to be false. Although assertion A⇒B
is always true when assertion A is false, we certainly do not mean for
all counterfactual statements to be true. Lewis ([1973](#bib.bib28)) navigates this
paradox using a modal logic in which a counterfactual statement describes the
state of affairs in an alternate world that resembles ours except for the
specified differences. Counterfactuals indeed offer many subtle ways to
qualify such alternate worlds. For instance, we can easily
describe isolation assumptions (section [3.2](#S3.SS2 "3.2 The Isolation Assumption ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems")) in a
counterfactual question:
>
> “*How would the system have performed if, when the data was collected,
> we had replaced model M by model M∗
> without incurring user or advertiser reactions?*”
>
>
>
The fact that we could not have changed the model without incurring the user
and advertiser reactions does not matter any more than the fact that we did
not replace model M by model M∗ in the first place. This does not
prevent us from using counterfactual statements to reason about causes and
effects. Counterfactual questions and statements provide a natural framework
to express and share our conclusions.
The remaining text in this section explains how we can answer certain
counterfactual questions using data collected in the past. More
precisely, we seek to estimate performance metrics that can be
expressed as expectations with respect to the distribution that would
have been observed if the counterfactual conditions had been in
force.777Although counterfactual expectations can be viewed as
expectations of unit-level counterfactuals
(Pearl, [2009](#bib.bib35), definition 4), they elude the semantic subtleties
of unit-level counterfactuals and can be measured with
randomized experiments (see section [4.2](#S4.SS2 "4.2 Reweighting Randomized Trials ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems").)
###
4.1 Replaying Empirical Data
| | |
| --- | --- |
| Causal graph for an image recognition system.
We can estimate counterfactuals by
replaying data collected in the past. | Causal graph for an image recognition system.
We can estimate counterfactuals by
replaying data collected in the past. |
Figure 10: Causal graph for an image recognition system.
We can estimate counterfactuals by
replaying data collected in the past.
Figure 11: Causal graph for a randomized experiment.
We can estimate certain counterfactuals by
reweighting data collected in the past.
Figure 10: Causal graph for an image recognition system.
We can estimate counterfactuals by
replaying data collected in the past.
Figure [11](#S4.F11 "Figure 11 ‣ 4.1 Replaying Empirical Data ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems") shows the causal graph associated with a simple
image recognition system. The classifier takes an image x and produces a
prospective class label ^y. The loss measures the penalty associated
with recognizing class ^y while the true class is y.
To estimate the expected error of such a classifier, we collect a
representative data set composed of labeled images, run the classifier
on each image, and average the resulting losses. In other words, we
*replay* the data set to estimate what
(counterfactual) performance would have been observed if we had used a
different classifier. We can then select in retrospect the classifier
that would have worked the best and hope that it will keep working
well. This is the counterfactual viewpoint on empirical risk
minimization (Vapnik, [1982](#bib.bib51)).
Replaying the data set works because both the alternate classifier and the
loss function are known. More generally, to estimate a counterfactual
by replaying a data set, we need to know all the functional
dependencies associated with all causal paths connecting the
intervention point to the measurement point.
This is obviously not always the case.
###
4.2 Reweighting Randomized Trials
Figure [11](#S4.F11 "Figure 11 ‣ 4.1 Replaying Empirical Data ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems") illustrates the randomized experiment
suggested in section [2.3](#S2.SS3 "2.3 Confounding Data ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems"). The patients are randomly split
into two equally sized groups receiving respectively treatments A
and B. The overall success rate for this experiment is therefore
Y=(YA+YB)/2 where YA and YB are the success rates observed
for each group. We would like to estimate which (counterfactual)
overall success rate Y∗ would have been observed if we had selected
treatment A with probability p and treatment B with probability
1−p.
Since we do not know how the outcome depends on the treatment and the
patient condition, we cannot compute which outcome y∗ would have
been obtained if we had treated patient x with a different treatment
u∗. Therefore we cannot answer this question by replaying the data
as we did in section [4.1](#S4.SS1 "4.1 Replaying Empirical Data ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems").
However, observing different success rates YA and YB for the
treatment groups reveals an empirical correlation between the
treatment u and the outcome y. Since the only cause of the
treatment u is an independent roll of the dices, this correlation
cannot result from any known or unknown confounding common
cause.888See also the discussion of Reichenbach’s common cause
principle and of its limitations in
(Spirtes et al., [1993](#bib.bib46); Spirtes and Scheines, [2004](#bib.bib45)). Having eliminated
this possibility, we can *reweight* the observed outcomes and
compute the estimate Y∗≈pYA+(1−p)YB .
###
4.3 Markov Factor Replacement
The reweighting approach can in fact be
applied under much less stringent conditions. Let us return to the ad
placement problem to illustrate this point.
The average number of ad clicks per page is often called *click yield*.
Increasing the click yield usually benefits both the advertiser and the
publisher, whereas increasing the revenue per page often benefits the
publisher at the expense of the advertiser. Click yield is therefore a very
useful metric when we reason with an isolation assumption that ignores the
advertiser reactions to pricing changes.
Let ω be a shorthand for all variables appearing
in the Markov factorization of the ad placement
structural equation model,
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| | P(ω) | = | P(u,v)P(x|u)P(a|x,v)P(b|x,v)P(q|x,a) | | (4) |
| | | | × P(s|a,q,b)P(c|a,q,b)P(y|s,u)P(z|y,c) . | |
Variable y was defined in section [3.1](#S3.SS1 "3.1 The Flow of Information ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems") as the set of user clicks.
In the rest of the document, we slightly abuse this notation
by using the same letter y to represent the number of clicks.
We also write the expectation Y=Eω∼P(ω)[y]
using the integral notation
| | | |
| --- | --- | --- |
| | Y=∫ωy P(ω) . | |

Figure 12: Estimating which average number of clicks per page would have
been observed if we had used a different scoring model.
We would like to estimate what the expected click yield Y∗ would have been
if we had used a different scoring function (figure [12](#S4.F12 "Figure 12 ‣ 4.3 Markov Factor Replacement ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")). This
intervention amounts to replacing the actual factor P(q|x,a) by a
counterfactual factor P∗(q|x,a) in the Markov factorization.
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| | P∗(ω) | = | P(u,v)P(x|u)P(a|x,v)P(b|x,v)\boldmathP∗(q|x,a) | | (5) |
| | | | × P(s|a,q,b)P(c|a,q,b)P(y|s,u)P(z|x,c) . | |
Let us assume, for simplicity, that the actual factor P(q|x,a) is
nonzero everywhere. We can then estimate the counterfactual expected
click yield Y∗ using the transformation
| | | | |
| --- | --- | --- | --- |
| | Y∗ = ∫ωy P∗(ω) = ∫ωy P∗(q|x,a)P(q|x,a) P(ω) ≈ 1n n∑i=1yi P∗(qi|xi,ai)P(qi|xi,ai) , | | (6) |
where the data set of tuples (ai,xi,qi,yi) is distributed according to
the actual Markov factorization instead of the counterfactual Markov
factorization. This data could therefore have been collected during the normal
operation of the ad placement system. Each sample is reweighted to
reflect its probability of occurrence under the counterfactual conditions.
In general, we can use *importance sampling* to
estimate the counterfactual expectation
of any quantity ℓ(ω) :
| | | | |
| --- | --- | --- | --- |
| | Y∗ = ∫ωℓ(ω) P∗(ω) = ∫ωℓ(ω) P∗(ω)P(ω) P(ω) ≈ 1nn∑i=1ℓ(ωi)wi | | (7) |
with weights
| | | | |
| --- | --- | --- | --- |
| | wi = w(ωi) = P∗(ωi)P(ωi) = factors appearing in% P∗(ωi) but not in P(ωi)factors appearing in% P(ωi) but not in P∗(ωi) . | | (8) |
Equation ([8](#S4.E8 "(8) ‣ 4.3 Markov Factor Replacement ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) emphasizes the simplifications resulting
from the algebraic similarities of the actual and counterfactual Markov
factorizations. Because of these simplifications, the evaluation of the
weights only requires the knowledge of the few factors that differ between
P(ω) and P∗(ω). Each data sample needs to provide the value
of ℓ(ωi) and the values of all variables needed to evaluate
the factors that do not cancel in the ratio ([8](#S4.E8 "(8) ‣ 4.3 Markov Factor Replacement ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")).
In contrast, the replaying approach (section [4.1](#S4.SS1 "4.1 Replaying Empirical Data ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) demands the
knowledge of all factors of P∗(ω) connecting the point of intervention
to the point of measurement ℓ(ω). On the other hand, it does not
require the knowledge of factors appearing only in P(ω).
Importance sampling relies on the assumption that all the factors appearing in
the denominator of the reweighting ratio ([8](#S4.E8 "(8) ‣ 4.3 Markov Factor Replacement ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) are
nonzero whenever the factors appearing in the numerator are nonzero. Since
these factors represents conditional probabilities resulting from the effect
of an independent noise variable in the structural equation model, this
assumption means that the data must be collected with an experiment involving
active randomization. We must therefore design cost-effective randomized
experiments that yield enough information to estimate many interesting
counterfactual expectations with sufficient accuracy. This problem cannot be
solved without answering the confidence interval question: given data
collected with a certain level of randomization, with which accuracy can we
estimate a given counterfactual expectation?
###
4.4 Confidence Intervals
At first sight, we can invoke the law of large numbers and write
| | | | |
| --- | --- | --- | --- |
| | Y∗=∫ωℓ(ω)w(ω) P(ω) ≈ 1nn∑i=1ℓ(ωi)wi. | | (9) |
For sufficiently large n, the central limit theorem provides
confidence intervals whose width grows with the standard deviation
of the product ℓ(ω)w(ω).
Unfortunately, when P(ω) is small, the reweighting
ratio w(ω) takes large values with low probability. This heavy
tailed distribution has annoying consequences because the variance of
the integrand could be very high or infinite. When the variance is
infinite, the central limit theorem does not hold. When the variance
is merely very large, the central limit convergence might occur too
slowly to justify such confidence intervals. Importance sampling works
best when the actual distribution and the counterfactual distribution
overlap.
When the counterfactual distribution has significant mass in domains
where the actual distribution is small, the few samples available in
these domains receive very high weights. Their noisy contribution
dominates the reweighted estimate ([9](#S4.E9 "(9) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")). We can obtain
better confidence intervals by eliminating these few samples drawn in
poorly explored domains. The resulting bias can be bounded using prior
knowledge, for instance with an assumption about the range of values
taken by ℓ(ω),
| | | | |
| --- | --- | --- | --- |
| | ∀ωℓ(ω) ∈ [0,M]. | | (10) |
Let us choose the maximum weight value R deemed acceptable for the
weights. We have obtained very consistent results in practice with R
equal to the fifth largest reweighting ratio observed on the empirical
data.999This is in fact a slight abuse because the theory calls
for choosing R before seing the data. We can then rely
on *clipped weights* to eliminate the contribution of
the poorly explored domains,
| | | |
| --- | --- | --- |
| | ¯w(ω) = {w(ω)if P∗(ω)<RP(ω)0otherwise. | |
The condition P∗(ω)<RP(ω) ensures that the ratio
has a nonzero denominator P(ω) and is smaller than R.
Let ΩR be the set of all values of ω
associated with acceptable ratios:
| | | |
| --- | --- | --- |
| | ΩR = {ω: P∗(ω)<RP(ω)}. | |
We can decompose Y∗ in two terms:
| | | | |
| --- | --- | --- | --- |
| | | | (11) |
The first term of this decomposition is the *clipped expectation*
¯Y∗. Estimating the clipped expectation ¯Y∗ is much
easier than estimating Y∗ from ([9](#S4.E9 "(9) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) because the
clipped weights ¯w(ω) are bounded by R.
| | | | |
| --- | --- | --- | --- |
| | ¯Y∗ = ∫ω∈ΩRℓ(ω)P∗(ω) = ∫ωℓ(ω)¯w(ω) P(ω) ≈ ˆY∗ = 1nn∑i=1ℓ(ωi)¯w(ωi). | | (12) |
The second term of equation ([11](#S4.E11 "(11) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) can be bounded
by leveraging assumption ([10](#S4.E10 "(10) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")). The resulting
bound can then be conveniently estimated using only the clipped weights.
| | | |
| --- | --- | --- |
| | Y∗−¯Y∗ = ∫ω∈Ω∖ΩRℓ(ω)P∗(ω) ∈ [0, MP∗(Ω∖ΩR)] = [0, M(1−¯W∗)]with | |
| | | | |
| --- | --- | --- | --- |
| | ¯W∗ = P∗(ΩR) = ∫ω∈ΩRP∗(ω) = ∫ω¯w(ω)P(ω) ≈ ˆYW∗ = 1nn∑i=1¯w(ωi). | | (13) |
Since the clipped weights are bounded, the estimation errors
associated with ([12](#S4.E12 "(12) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) and ([13](#S4.E13 "(13) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) are well
characterized using either the central limit theorem or using empirical
Bernstein bounds (see appendix [B](#Sx2.SS2 "B Confidence Intervals ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems") for details).
Therefore we can derive an *outer confidence interval*
of the form
| | | | |
| --- | --- | --- | --- |
| | \largeP{ ˆY∗−ϵR ≤ ¯Y∗≤ ˆY∗+ϵR } ≥ 1−δ | | (14) |
and an *inner confidence interval* of the form
| | | | |
| --- | --- | --- | --- |
| | \largeP{ ¯Y∗ ≤ Y∗≤ ¯Y∗+M(1−ˆYW∗+ξR) } ≥ 1−δ. | | (15) |
The names *inner* and *outer* are in fact related to our
prefered way to visualize these intervals (e.g., figure [13](#S4.F13 "Figure 13 ‣ 4.6 Experimenting with Mainline Reserves ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")).
Since the bounds on Y∗−¯Y∗ can be written as
| | | | |
| --- | --- | --- | --- |
| | ¯Y∗ ≤ Y∗ ≤ ¯Y∗+M(1−¯W∗), | | (16) |
we can derive our final confidence interval,
| | | | |
| --- | --- | --- | --- |
| | | | (17) |
In conclusion, replacing the unbiased importance sampling
estimator ([9](#S4.E9 "(9) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) by the clipped importance sampling
estimator ([12](#S4.E12 "(12) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) with a suitable choice of R leads to
improved confidence intervals. Furthermore, since the derivation of
these confidence intervals does not rely on the assumption that
P(ω) is nonzero everywhere, the clipped importance sampling
estimator remains valid when the distribution P(ω) has a
limited support. This relaxes the main restriction associated with
importance sampling.
###
4.5 Interpreting the Confidence Intervals
The estimation of the counterfactual expectation Y∗ can be
inaccurate because the sample size is insufficient or because the
sampling distribution P(ω) does not sufficiently explore the
counterfactual conditions of interest.
By construction, the clipped expectation ¯Y∗ ignores the domains
poorly explored by the sampling distribution P(ω). The
difference Y∗−¯Y∗ then reflects the inaccuracy resulting from a
lack of exploration. Therefore, assuming that the bound R has been
chosen competently, the relative sizes of the outer and inner
confidence intervals provide precious cues to determine whether we can
continue collecting data using the same experimental setup or should
adjust the data collection experiment in order to obtain a better
coverage.
* The *inner confidence interval* ([15](#S4.E15 "(15) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) witnesses
the uncertainty associated with the domain GR insufficiently
explored by the actual distribution. A large inner confidence
interval suggests that the most practical way to improve the
estimate is to adjust the data collection experiment in order to
obtain a better coverage of the counterfactual conditions of
interest.
* The *outer confidence interval* ([14](#S4.E14 "(14) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems"))
represents the uncertainty that results from the limited sample
size. A large outer confidence interval indicates that the sample is
too small. To improve the result, we simply need to continue
collecting data using the same experimental setup.
###
4.6 Experimenting with Mainline Reserves
We return to the ad placement problem to illustrate the reweighting approach
and the interpretation of the confidence intervals. Manipulating the reserves
Rp(x) associated with the mainline positions (figure [1](#S2.F1 "Figure 1 ‣ 2.1 Advertisement Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems"))
controls which ads are prominently displayed in the mainline or displaced into
the sidebar.
We seek in this section to answer counterfactual questions of the form:
>
> “*How would the ad placement system have performed if
> we had scaled the mainline reserves by a constant factor ρ,
> without incurring user or advertiser reactions?*”
>
>
>
Randomization was introduced using a modified version of the ad placement
engine. Before determining the ad layout (see section [2.1](#S2.SS1 "2.1 Advertisement Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems")), a
random number ε is drawn according to the standard normal
distribution N(0,1), and all the mainline reserves are multiplied
by m=ρe−σ2/2+σε. Such multipliers follow a
log-normal distribution101010More precisely,
lnN(μ,σ2) with μ=σ2/2+logρ.
whose mean is ρ and whose width is controlled by σ.
This effectively provides a parametrization of the conditional score
distribution P(q|x,a) (see figure [6](#S3.F6 "Figure 6 ‣ 3.3 Markov Factorization ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems").)
The Bing search platform offers many ways to select traffic for controlled
experiments (section [2.2](#S2.SS2 "2.2 Controlled Experiments ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems")). In order to match our isolation
assumption, individual page views were randomly assigned to traffic buckets
without regard to the user identity. The main treatment bucket was processed
with mainline reserves randomized by a multiplier drawn as explained above
with ρ=1 and σ=0.3. With these parameters, the mean
multiplier is exactly 1, and 95% of the multipliers are in
range [0.52,1.74]. Samples describing 22 million search result pages were
collected during five consecutive weeks.
| | | |
| --- | --- | --- |
| Estimated variations of three
performance metrics in response to mainline reserve changes.
The curves delimit | Estimated variations of three
performance metrics in response to mainline reserve changes.
The curves delimit | Estimated variations of three
performance metrics in response to mainline reserve changes.
The curves delimit |
Figure 13: Estimated variations of three
performance metrics in response to mainline reserve changes.
The curves delimit 95% confidence intervals for the
metrics we would have observed if we had increased the mainline reserves by
the percentage shown on the horizontal axis. The filled areas represent the
inner confidence intervals. The hollow squares represent the metrics
measured on the experimental data. The hollow circles
represent metrics measured on a second experimental bucket
with mainline reserves reduced by 18%. The filled circles represent
the metrics effectively measured on a control bucket running without
randomization.
We then use this data to estimate what would have been measured if the
mainline reserve multipliers had been drawn according to
a distribution determined by parameters ρ∗ and σ∗.
This is achieved by reweighting each sample ωi with
| | | |
| --- | --- | --- |
| | wi=P∗(qi|xi,ai)P(qi|xi,ai)=p(mi;ρ∗,σ∗)p(mi;ρ,σ), | |
where mi is the multiplier drawn for this sample during the data
collection experiment, and p(t;ρ,σ) is the density
of the log-normal multiplier distribution.
Figure [13](#S4.F13 "Figure 13 ‣ 4.6 Experimenting with Mainline Reserves ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems") reports results obtained by varying ρ∗
while keeping σ∗=σ. This amounts to estimating what
would have been measured if all mainline reserves had been multiplied
by ρ∗ while keeping the same randomization. The curves bound
95% confidence intervals on the variations of the average number of
mainline ads displayed per page, the average number of ad clicks per
page, and the average revenue per page, as functions of ρ∗. The
inner confidence intervals, represented by the filled areas, grow
sharply when ρ∗ leaves the range explored during the data
collection experiment. The average revenue per page has more variance
because a few very competitive queries command high prices.
In order to validate the accuracy of these counterfactual estimates, a second
traffic bucket of equal size was configured with mainline reserves reduced by
about 18%. The hollow circles in figure [13](#S4.F13 "Figure 13 ‣ 4.6 Experimenting with Mainline Reserves ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems") represent
the metrics effectively measured on this bucket during the same time period.
The effective measurements and the counterfactual estimates match with high
accuracy.
Finally, in order to measure the cost of the randomization, we also ran the
unmodified ad placement system on a control bucket. The brown filled circles
in figure [13](#S4.F13 "Figure 13 ‣ 4.6 Experimenting with Mainline Reserves ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems") represent the metrics effectively measured on the
control bucket during the same time period. The randomization caused a small
but statistically significant increase of the number of mainline ads per
page. The click yield and average revenue differences are not significant.
This experiment shows that we can obtain accurate counterfactual estimates
with affordable randomization strategies. However, this nice conclusion does
not capture the true practical value of the counterfactual estimation
approach.
###
4.7 More on Mainline Reserves
The main benefit of the counterfactual estimation approach is the ability to
*use the same data* to answer a *broad range of counterfactual*
*questions*. Here are a few examples of counterfactual questions
that can be answered using data collected using the simple mainline reserve
randomization scheme described in the previous section:
* *Different variances* – Instead of estimating what would have been
measured if we had increased the mainline reserves without changing the
randomization variance, that is, letting σ∗=σ, we can use the
same data to estimate what would have been measured if we had also changed
σ. This provides the means to determine which level of randomization
we can afford in future experiments.
* *Pointwise estimates* – We often want to estimate what would have been
measured if we had set the mainline reserves to a specific value without
randomization. Although computing estimates for small values of σ
often works well enough, very small values lead to large confidence
intervals.
Let Yν(ρ) represent the expectation we would have observed
if the multipliers m had mean ρ and variance ν.
We have then Yν(ρ)=Em[E[y|m]]=Em[Y0(m)].
Assuming that the pointwise value Y0 is smooth enough
for a second order development,
| | | |
| --- | --- | --- |
| | Yν(ρ) ≈ Em[Y0(ρ)+(m−ρ)Y′0(ρ)+(m−ρ)2Y′′0(ρ)/2] = Y0(ρ)+νY′′0(ρ)/2 . | |
Although the reweighting method cannot estimate the point-wise value
Y0(ρ) directly, we can use the reweighting method to estimate
both Yν(ρ) and Y2ν(ρ) with acceptable confidence intervals
and write Y0(ρ)≈2Yν(ρ)−Y2ν(ρ)
(Goodwin, [2011](#bib.bib20)).
* *Query-dependent reserves* – Compare for instance the queries
“car insurance” and “common cause principle” in a web search engine.
Since the advertising potential of a search varies considerably with the
query, it makes sense to investigate various ways to define query-dependent
reserves (Charles and Chickering, [2012](#bib.bib11)).
The data collected using the simple mainline reserve randomization can also
be used to estimate what would have been measured if we had increased all
the mainline reserves by a query-dependent multiplier ρ∗(x). This is
simply achieved by reweighting each sample ωi with
| | | |
| --- | --- | --- |
| | wi=P∗(qi|xi,ai)P(qi|xi,ai)=p(mi;ρ∗(xi),σ)p(mi;μ,σ). | |
Considerably broader ranges of counterfactual questions can be
answered when data is collected using randomization schemes that
explore more dimensions. For instance, in the case of the ad placement
problem, we could apply an independent random multiplier for each
score instead of applying a single random multiplier to the mainline
reserves only. However, the more dimensions we randomize, the more
data needs to be collected to effectively explore all these
dimensions. Fortunately, as discussed in section [5](#S5 "5 Structure ‣ Counterfactual Reasoning and Learning Systems"),
the structure of the causal graph reveals many ways to leverage a
priori information and improve the confidence intervals.
###
4.8 Related Work
Importance sampling is widely used to deal with covariate
shifts (Shimodaira, [2000](#bib.bib42); Sugiyama et al., [2007](#bib.bib48)). Since manipulating the
causal graph changes the data distribution, such an intervention can
be viewed as a covariate shift amenable to importance sampling.
Importance sampling techniques have also been proposed without causal
interpretation for many of the problems that we view as causal
inference problems. In particular, the work presented in this section
is closely related to the Monte-Carlo approach of reinforcement
learning (Sutton and Barto, [1998](#bib.bib49), chapter 5) and to the offline
evaluation of contextual bandit policies (Li et al., [2010](#bib.bib29), [2011](#bib.bib30)).
Reinforcement learning research traditionally focuses on control
problems with relatively small discrete state spaces and long
sequences of observations. This focus reduces the need for
characterizing exploration with tight confidence intervals. For
instance, [Sutton and Barto](#bib.bib49) suggest to normalize the
importance sampling estimator by 1/∑iw(ωi) instead
of 1/n. This would give erroneous results when the data collection
distribution leaves parts of the state space poorly
explored. Contextual bandits are traditionally formulated with a
finite set of discrete actions. For instance, Li’s
([2011](#bib.bib30)) unbiased policy evaluation assumes that the data
collection policy always selects an arbitrary policy with probability
greater than some small constant. This is not possible when the action
space is infinite.
Such assumptions on the data collection distribution are often
impractical. For instance, certain ad placement policies are not
worth exploring because they cannot be implemented efficiently or are
known to elicit fraudulent behaviors. There are many practical
situations in which one is only interested in limited aspects of the
ad placement policy involving continuous parameters such as click
prices or reserves. Discretizing such parameters eliminates useful a
priori knowledge: for instance, if we slightly increase a reserve, we
can reasonable believe that we are going to show slightly less
ads.
Instead of making assumptions on the data collection distribution, we
construct a biased estimator ([12](#S4.E12 "(12) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) and bound its
bias. We then interpret the inner and outer confidence intervals as
resulting from a lack of exploration or an insufficient sample size.
Finally, the causal framework allows us to easily formulate
counterfactual questions that pertain to the practical ad placement
problem and yet differ considerably in complexity and exploration
requirements. We can address specific problems identified by the
engineers without incurring the risks associated with a complete
redesign of the system. Each of these incremental steps helps
demonstrating the soundness of the approach.
5 Structure
------------
This section shows how the structure of the causal graph reveals many
ways to leverage a priori knowledge and improve the accuracy of our
counterfactual estimates. Displacing the reweighting point
(section [5.1](#S5.SS1 "5.1 Better Reweighting Variables ‣ 5 Structure ‣ Counterfactual Reasoning and Learning Systems")) improves the inner confidence interval and
therefore reduce the need for exploration. Using a prediction function
(section [5.2](#S5.SS2 "5.2 Variance Reduction with Predictors ‣ 5 Structure ‣ Counterfactual Reasoning and Learning Systems")) essentially improve the outer confidence
interval and therefore reduce the sample size requirements.
###
5.1 Better Reweighting Variables
Many search result pages come without eligible ads. We then know with
certainty that such pages will have zero mainline ads, receive zero
clicks, and generate zero revenue. This is true for the randomly
selected value of the reserve, and this would have been true for any
other value of the reserve. We can exploit this knowledge by
pretending that the reserve was drawn from the counterfactual
distribution P∗(q|xi,ai) instead of the actual distribution
P(q|xi,ai). The ratio w(ωi) is therefore forced to
the unity. This does not change the estimate but reduces the size of
the inner confidence interval. The results of figure [13](#S4.F13 "Figure 13 ‣ 4.6 Experimenting with Mainline Reserves ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")
were in fact helped by this little optimization.
There are in fact many circumstances in which the observed outcome
would have been the same for other values of the randomized
variables. This prior knowledge is in fact encoded in the structure of
the causal graph and can be exploited in a more systematic manner.
For instance, we know that users make click decisions without knowing
which scores were computed by the ad placement engine, and without
knowing the prices charged to advertisers. The ad placement causal
graph encodes this knowledge by showing the clicks y as direct
effects of the user intent u and the ad slate s. This implies that
the exact value of the scores q does not matter to the clicks y as
long as the ad slate s remains the same.
Because the causal graph has this special structure, we can simplify
both the actual and counterfactual Markov factorizations
([4](#S4.E4 "(4) ‣ 4.3 Markov Factor Replacement ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) ([5](#S4.E5 "(5) ‣ 4.3 Markov Factor Replacement ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) without
eliminating the variable y whose expectation is sought.
Successively eliminating variables z, c, and q gives:
| | | | | |
| --- | --- | --- | --- | --- |
| | P(u,v,x,a,b,s,y) | = | P(u,v)P(x|u)P(a|x,v)P(b|x,v)P(s|x,a,b)P(y|s,u) , | |
| | P∗(u,v,x,a,b,s,y) | = | P(u,v)P(x|u)P(a|x,v)P(b|x,v)P∗(s|x,a,b)P(y|s,u) . | |
The conditional distributions P(s|x,a,b) and P∗(s|x,a,b)
did not originally appear in the Markov factorization.
They are defined by marginalization as a consequence
of the elimination of the variable q representing the scores.
| | | |
| --- | --- | --- |
| | P(s|x,a,b)=∫qP(s|a,q,b)P(q|x,a) ,P∗(s|x,a,b)=∫qP(s|a,q,b)P∗(q|x,a). | |
| | |
| --- | --- |
| Estimated variations of two performance
metrics in response to mainline reserve changes. These estimates were
obtained using the ad slates | Estimated variations of two performance
metrics in response to mainline reserve changes. These estimates were
obtained using the ad slates |
Figure 14: Estimated variations of two performance
metrics in response to mainline reserve changes. These estimates were
obtained using the ad slates s as reweighting variable. Compare the inner
confidence intervals with those shown in figure [13](#S4.F13 "Figure 13 ‣ 4.6 Experimenting with Mainline Reserves ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems").
We can estimate the counterfactual click yield Y∗
using these simplified factorizations:
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| | Y∗ | = | ∫y P∗(u,v,x,a,b,s,y) = ∫y P∗(s|x,a,b)P(s|x,a,b) P(u,v,x,a,b,s,y) | | (18) |
| | | ≈ | 1nn∑i=1 yiP∗(si|xi,ai,bi)P(si|xi,ai,bi) . | |
We have reproduced the experiments described in section [4.6](#S4.SS6 "4.6 Experimenting with Mainline Reserves ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems") with
the counterfactual estimate ([18](#S5.E18 "(18) ‣ 5.1 Better Reweighting Variables ‣ 5 Structure ‣ Counterfactual Reasoning and Learning Systems")) instead
of ([6](#S4.E6 "(6) ‣ 4.3 Markov Factor Replacement ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")). For each example ωi, we determine
which range [mmaxi,mmini] of mainline reserve multipliers could
have produced the observed ad slate si, and then compute the reweighting
ratio using the formula:
| | | |
| --- | --- | --- |
| | wi=P∗(si|xi,ai,bi)P(si|xi,ai,bi)=Ψ(mmaxi;ρ∗,σ∗)−Ψ(mmini;ρ∗,σ∗)Ψ(mmaxi;ρ,σ)−Ψ(mmini;ρ,σ) , | |
where Ψ(m;ρ,σ) is the cumulative of the log-normal
multiplier distribution. Figure [14](#S5.F14 "Figure 14 ‣ 5.1 Better Reweighting Variables ‣ 5 Structure ‣ Counterfactual Reasoning and Learning Systems") shows
counterfactual estimates obtained using the same data as
figure [13](#S4.F13 "Figure 13 ‣ 4.6 Experimenting with Mainline Reserves ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems"). The obvious improvement of the inner
confidence intervals significantly extends the range of mainline
reserve multipliers for which we can compute accurate counterfactual
expectations using this same data.
Comparing ([6](#S4.E6 "(6) ‣ 4.3 Markov Factor Replacement ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) and ([18](#S5.E18 "(18) ‣ 5.1 Better Reweighting Variables ‣ 5 Structure ‣ Counterfactual Reasoning and Learning Systems"))
makes the difference very clear: instead of computing the ratio of the
probabilities of the observed scores under the counterfactual and
actual distributions, we compute the ratio of the probabilities of the
observed ad slates under the counterfactual and actual
distributions. As illustrated by figure [15](#S5.F15 "Figure 15 ‣ 5.1 Better Reweighting Variables ‣ 5 Structure ‣ Counterfactual Reasoning and Learning Systems"), we now
distinguish the reweighting variable (or variables) from the
intervention. In general, the corresponding manipulation of the Markov
factorization consists of marginalizing out all the variables that
appear on the causal paths connecting the point of intervention to the
reweighting variables and factoring all the independent terms out of
the integral. This simplification works whenever the reweighting
variables intercept all the causal paths connecting the point of
intervention to the measurement variable. In order to compute the new
reweighting ratios, all the factors remaining inside the integral,
that is, all the factors appearing on the causal paths connecting the
point of intervention to the reweighting variables, have to be known.

Figure 15: The reweighting variable(s) must intercept
all causal paths from the point of intervention
to the point of measurement.

Figure 16: A distribution on the scores q induce
a distribution on the possible ad slates s.
If the observed slate is slate2,
the reweighting ratio is 34/22.
Figure [14](#S5.F14 "Figure 14 ‣ 5.1 Better Reweighting Variables ‣ 5 Structure ‣ Counterfactual Reasoning and Learning Systems") does not report the average revenue per
page because the revenue z also depends on the scores q through
the click prices c. This causal path is not intercepted by the ad
slate variable s alone. However, we can introduce a new variable
~c=f(c,y) that filters out the click prices computed for ads
that did not receive a click. Markedly improved revenue estimates are
then obtained by reweighting according to the joint variable
(s,~c).
Figure [16](#S5.F16 "Figure 16 ‣ 5.1 Better Reweighting Variables ‣ 5 Structure ‣ Counterfactual Reasoning and Learning Systems") illustrates the same approach applied to
the simultaneous randomization of all the scores q using independent
log-normal multipliers. The weight w(ωi) is the ratio of
the probabilities of the observed ad slate si under the
counterfactual and actual multiplier distributions. Computing these
probabilities amounts to integrating a multivariate Gaussian
distribution (Genz, [1992](#bib.bib17)). Details will be provided in a
forthcoming publication.
###
5.2 Variance Reduction with Predictors
Although we do not know exactly how the variable of interest
ℓ(ω) depends on the measurable variables and are affected by
interventions on the causal graph, we may have strong a priori
knowledge about this dependency. For instance, if we augment the
slate s with an ad that usually receives a lot of clicks, we can
expect an increase of the number of clicks.
Let the *invariant variables* υ be all observed
variables that are not direct or indirect effects of variables
affected by the intervention under consideration. This definition
implies that the distribution of the invariant variables is not
affected by the intervention. Therefore the values υi of the
invariant variables sampled during the actual experiment are also
representative of the distribution of the invariant variables under the
counterfactual conditions.
We can leverage a priori knowledge to construct a
predictor ζ(ω) of the quantity ℓ(ω) whose
counterfactual expectation Y∗ is sought. We assume that the
predictor ζ(ω) depends only on the invariant variables or
on variables that depend on the invariant variables through known
functional dependencies. Given sampled values υi of the
invariant variables, we can replay both the original and manipulated
structural equation model as explained in section [4.1](#S4.SS1 "4.1 Replaying Empirical Data ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems") and
obtain samples ζi and ζ∗i that respectively follow the
actual and counterfactual distributions
Then, regardless of the quality of the predictor,
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| | Y∗ = ∫ωℓ(ω)P∗(ω) | = | ∫ωζ(ω)P∗(ω) + ∫ω(ℓ(ω)−ζ(ω))P∗(ω) | | (19) |
| | | ≈ | 1nn∑i=1ζ∗i + 1nn∑i=1(ℓ(ωi)−ζi)w(ωi). | |
The first term in this sum represents the counterfactual expectation
of the predictor and can be accurately estimated by averaging the
simulated counterfactual samples ζ∗i without resorting to
potentially large importance weights. The second term in this sum
represents the counterfactual expectation of the residuals
ℓ(ω)−ζ(ω) and must be estimated using importance
sampling. Since the magnitude of the residuals is hopefully smaller
than that of ℓ(ω), the variance of
(ℓ(ω)−ζ(ω))w(ω) is reduced and the
importance sampling estimator of the second term has improved
confidence intervals. The more accurate the predictor ζ(ω),
the more effective this variance reduction strategy.
This variance reduction technique is in fact identical to the doubly
robust contextual bandit evaluation technique
of Dudík et al. ([2012](#bib.bib15)). Doubly robust variance reduction has also been
extensively used for causal inference applied to
biostatistics (see Robins et al., [2000](#bib.bib39); Bang and Robins, [2005](#bib.bib5)). We
subjectively find that viewing the predictor as a component of the
causal graph (figure [17](#S5.F17 "Figure 17 ‣ 5.2 Variance Reduction with Predictors ‣ 5 Structure ‣ Counterfactual Reasoning and Learning Systems")) clarifies how a well designed
predictor can leverage prior knowledge. For instance, in order to
estimate the counterfactual performance of the ad placement system, we
can easily use a predictor that runs the ad auction and simulate the
user clicks using a click probability model trained offline.

Figure 17: Leveraging a predictor. Yellow nodes
represent known functional relations in the structural equation model.
We can estimate the counterfactual expectation
Y∗ of the number of clicks per page as the sum of the
counterfactual expectations of a predictor ζ, which is easy to
estimate by replaying empirical data, and y−ζ, which has to be
estimated by importance sampling but has reduced variance.

Figure 18:
The two plots show the hourly click yield for two variants of the ad
placement engine. The daily variations dwarf the
differences between the two treatments.
###
5.3 Invariant Predictors
In order to evaluate which of two interventions is most likely to
improve the system, the designer of a learning system often seeks to
estimate a *counterfactual difference*, that is, the difference
Y+−Y∗ of the expectations of a same quantity ℓ(ω) under
two different counterfactual distributions P+(ω)
and P∗(ω). These expectations are often affected by variables
whose value is left unchanged by the interventions under
consideration. For instance, seasonal effects can have very large
effects on the number of ad clicks (figure [18](#S5.F18 "Figure 18 ‣ 5.2 Variance Reduction with Predictors ‣ 5 Structure ‣ Counterfactual Reasoning and Learning Systems")) but
affect Y+ and Y∗ in similar ways.
Substantially better confidence intervals on the difference Y+−Y∗
can be obtained using an *invariant predictor*, that is,
a predictor function that depends only on invariant
variables υ such as the time of the day. Since the invariant
predictor ζ(υ) is not affected by the interventions
under consideration,
| | | | |
| --- | --- | --- | --- |
| | ∫ωζ(υ)P∗(ω)=∫ωζ(υ)P+(ω). | | (20) |
Therefore
| | | | | |
| --- | --- | --- | --- | --- |
| | Y+−Y∗ | = | ∫ωζ(υ)P+(ω)+∫ω(ℓ(ω)−ζ(υ))P+(ω) | |
| | | | −∫ωζ(υ)P∗(ω)−∫ω(ℓ(ω)−ζ(υ))P∗(ω) | |
| | | ≈ | 1nn∑i=1 (ℓ(ωi)−ζ(υi))P+(ωi)−P∗(ωi)P(ωi). | |
This direct estimate of the counterfactual difference Y+−Y∗
benefits from the same variance reduction effect
as ([19](#S5.E19 "(19) ‣ 5.2 Variance Reduction with Predictors ‣ 5 Structure ‣ Counterfactual Reasoning and Learning Systems")) without need to estimate the
expectations ([20](#S5.E20 "(20) ‣ 5.3 Invariant Predictors ‣ 5 Structure ‣ Counterfactual Reasoning and Learning Systems")). Appendix [C](#Sx2.SS3 "C Counterfactual Differences ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems")
provide details on the computation of confidence intervals for
estimators of the counterfactual differences.
Appendix [D](#Sx2.SS4 "D Counterfactual Derivatives ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems") shows how the same approach can be used
to compute *counterfactual derivatives* that describe the
response of the system to very small interventions.
6 Learning
-----------
The previous sections deal with the identification and the measurement of
interpretable signals that can justify the actions of human decision makers.
These same signals can also justify the actions of machine learning
algorithms. This section explains why optimizing a counterfactual estimate is
a sound learning procedure.
###
6.1 A Learning Principle
We consider in this section interventions that depend on a
parameter θ. For instance, we might want to know what the
performance of the ad placement engine would have been if we had used
different values for the parameter θ of the click scoring model.
Let Pθ(ω) denote the
counterfactual Markov factorization associated with this
intervention. Let Yθ be the counterfactual expectation of
ℓ(ω) under distribution
Pθ. Figure [19](#S6.F19 "Figure 19 ‣ 6.1 A Learning Principle ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems") illustrates our simple
learning setup. Training data is collected from a single experiment
associated with an initial parameter value θ0 chosen using
prior knowledge acquired in an unspecified manner. A preferred
parameter value θ∗ is then determined using the training
data and loaded into the system. The goal is of course to observe a
good performance on data collected during a test period that takes
place after the switching point.

Figure 19: Single design – A preferred parameter value θ∗
is determined using randomized data collected in the past.
Test data is collected after loading θ∗ into the system.
The isolation assumption introduced in section [3.2](#S3.SS2 "3.2 The Isolation Assumption ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems") states that
the exogenous variables are drawn from an unknown but fixed joint probability
distribution. This distribution induces a joint distribution
P(ω) on all the variables ω appearing in the structural
equation model associated with the parameter θ. Therefore, if the
*isolation assumption* *remains valid during the test period*, the
test data follows the same distribution Pθ∗(ω) that would
have been observed during the training data collection period if the system
had been using parameter θ∗ all along.
We can therefore formulate this problem as the optimization of
the expectation Yθ of the reward ℓ(ω) with
respect to the distribution Pθ(ω)
| | | | |
| --- | --- | --- | --- |
| | maxθ Yθ = ∫ωℓ(ω)Pθ(ω) | | (21) |
on the basis of a finite set of training examples ω1,…,ωn
sampled from P(ω).
However, it would be unwise to maximize the estimates obtained using
approximation ([7](#S4.E7 "(7) ‣ 4.3 Markov Factor Replacement ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) because they could reach a
maximum for a value of θ that is poorly explored by the actual
distribution. As explained in section [4.5](#S4.SS5 "4.5 Interpreting the Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems"), the gap
between the upper and lower bound of inequality ([16](#S4.E16 "(16) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) reveals
the uncertainty associated with insufficient exploration. Maximizing an
empirical estimate ˆYθ of the lower bound ¯Yθ ensures
that the optimization algorithm finds a trustworthy answer
| | | | |
| --- | --- | --- | --- |
| | θ∗ = argmaxθˆYθ . | | (22) |
We shall now discuss the statistical basis of this learning
principle.111111The idea of maximizing the lower bound may surprise
readers familiar with the UCB algorithm for multi-armed
bandits (Auer et al., [2002](#bib.bib4)). UCB performs exploration by maximizing the upper
confidence interval bound and updating the confidence intervals online.
Exploration in our setup results from the active system randomization during
the offline data collection. See also section [6.4](#S6.SS4 "6.4 Sequential Design ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems").
###
6.2 Uniform Confidence Intervals
As discussed in section [4.4](#S4.SS4 "4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems"),
inequality ([16](#S4.E16 "(16) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")),
| | | |
| --- | --- | --- |
| | ¯Yθ ≤ Yθ≤ ¯Yθ+M(1−¯Wθ) , | |
where
| | | | | |
| --- | --- | --- | --- | --- |
| | ¯Yθ = ∫ωℓ(ω)¯w(ω)P(ω) | ≈ | ˆYθ = 1nn∑i=1ℓ(ωi)¯w(ωi) , | |
| | ¯Wθ = ∫ω¯w(ω)P(ω) | ≈ | ˆYWθ = 1nn∑i=1¯w(ωi) , | |
leads to confidence intervals
([17](#S4.E17 "(17) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) of the form
| | | | |
| --- | --- | --- | --- |
| | ∀δ>0, ∀θ\largeP{ ˆYθ−ϵR ≤ Yθ≤ ˆYθ+M(1−ˆYWθ+ξR)+ϵR }≥1−δ. | | (23) |
Both ϵR and ξR converge to zero in inverse proportion to the
square root of the sample size n. They also increase at most linearly in
logδ and depend on both the capping bound R and the
parameter θ through the empirical variances (see
appendix [B](#Sx2.SS2 "B Confidence Intervals ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems").)
Such confidence intervals are insufficient to provide guarantees for a
parameter value θ∗ that depends on the sample. In fact, the
optimization ([22](#S6.E22 "(22) ‣ 6.1 A Learning Principle ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems")) procedure is likely to select
values of θ for which the inequality is violated. We therefore seek
uniform confidence intervals (Vapnik and Chervonenkis, [1968](#bib.bib52)),
simultaneously valid for all values of θ.
* When the parameter θ is chosen from a finite
set F, applying the union bound to the ordinary
intervals ([23](#S6.E23 "(23) ‣ 6.2 Uniform Confidence Intervals ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems")) immediately
gives the uniform confidence interval:
| | | |
| --- | --- | --- |
| | \largeP{∀θ∈F, ˆYθ−ϵR≤Yθ≤ˆYθ+M(1−ˆYWθ+ξR)+ϵR}≥1−|F|δ. | |
* Following the pioneering work of [Vapnik and Chervonenkis](#bib.bib52),
a broad choice of mathematical tools have been developed to construct
uniform confidence intervals when the set F is infinite.
For instance, appendix [E](#Sx2.SS5 "E Uniform empirical Bernstein bounds ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems") leverages
uniform empirical Bernstein bounds (Maurer and Pontil, [2009](#bib.bib31))
and obtains the uniform confidence interval
| | | | |
| --- | --- | --- | --- |
| | \largeP{∀θ∈F, ˆYθ−ϵR≤Yθ≤ˆYθ+M(1−ˆYWθ+ξR)+ϵR}≥1−M(n)δ, | | (24) |
where the growth function M(n) measures the capacity
of the family of functions
| | | | |
| --- | --- | --- | --- |
| | { fθ:ω↦ℓ(ω)¯w(ω) , gθ:ω↦¯w(ω) , ∀θ∈F } . | | (25) |
Many practical choices of P∗(ω) lead to
functions M(n) that grow polynomially with the sample size.
Because both ϵR and ξR are
O(n−1/2logδ), they converge to zero
with the sample size when one maintains the confidence
level 1−M(n)δ equal to a predefined constant.

Figure 20: The uniform inner confidence
interval reveals where the best guaranteed Yθ is reached
and where additional exploration is needed.
The intepretation of the inner and outer confidence intervals
(section [4.5](#S4.SS5 "4.5 Interpreting the Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) also applies to the
uniform confidence interval ([24](#S6.E24 "(24) ‣ 2nd item ‣ 6.2 Uniform Confidence Intervals ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems")). When
the sample size is sufficiently large and the capping bound R chosen
appropriately, the inner confidence interval reflects the upper and
lower bound of inequality ([16](#S4.E16 "(16) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")).
The uniform confidence interval therefore ensures that Yθ∗ is close
to the maximum of the lower bound of inequality ([16](#S4.E16 "(16) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) which
essentially represents the best performance that can be guaranteed using
training data sampled from P(ω). Meanwhile, the upper bound of this
same inequality reveals which values of θ could potentially offer
better performance but have been insufficiently probed by the sampling
distribution (figure [20](#S6.F20 "Figure 20 ‣ 6.2 Uniform Confidence Intervals ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems").)
###
6.3 Tuning Ad Placement Auctions
We now present an application of this learning principle to the optimization
of auction tuning parameters in the ad placement engine. Despite increasingly
challenging engineering difficulties, comparable optimization procedures can
obviously be applied to larger numbers of tunable parameters.

Figure 21: Level curves associated with the average number of mainline ads per
page (red curves, from −6% to +10%) and the average estimated
advertisement value generated per page (black curves, arbitrary units
ranging from 164 to 169) that would have been observed for a certain
query cluster if we had changed the mainline reserves by the multiplicative
factor shown on the horizontal axis, and if we had applied a squashing
exponent α shown on the vertical axis to the estimated click
probabilities qi,p(x).
Lahaie and McAfee ([2011](#bib.bib24)) propose to account for the uncertainty
of the click probability estimation by introducing a squashing
exponent α to control the impact of the estimated probabilities
on the rank scores. Using the notations introduced in
section [2.1](#S2.SS1 "2.1 Advertisement Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems"), and assuming that the estimated
probability of a click on ad i placed at position p after
query x has the form qip(x)=γpβi(x)
(see appendix [A](#Sx2.SS1 "A Greedy Ad Placement Algorithms ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems")), they redefine the
rank-score rip(x) as:
| | | |
| --- | --- | --- |
| | rip(x)=γpbiβi(x)α . | |
Using a squashing exponent α<1 reduces the contribution of the
estimated probabilities and increases the reliance on the bids bi
placed by the advertisers.
Because the squashing exponent changes the rank-score scale, it is
necessary to simultaneously adjust the reserves in order to display
comparable number of ads. In order to estimate the counterfactual
performance of the system under interventions affecting both the
squashing exponent and the mainline reserves, we have collected data
using a random squashing exponent following a normal distribution, and
a mainline reserve multiplier following a log-normal distribution as
described in section [4.6](#S4.SS6 "4.6 Experimenting with Mainline Reserves ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems"). Samples describing 12 million
search result pages were collected during four consecutive weeks.
Following Charles and Chickering ([2012](#bib.bib11)), we consider separate squashing
coefficients αk and mainline reserve multipliers ρk per query
cluster k∈{1..K\/}, and, in order to avoid negative user or advertiser
reactions, we seek the auction tuning parameters αk and ρk that
maximize an estimate of the advertisement value121212The value of
an ad click from the point of view of the advertiser. The advertiser payment
then splits the advertisement value between the publisher and the
advertiser. subject to a global constraint on the average number of ads
displayed in the mainline. Because maximizing the advertisement value instead
of the publisher revenue amounts to maximizing the size of the advertisement
pie instead of the publisher slice of the pie, this criterion is less likely
to simply raise the prices without improving the ads. Meanwhile the constraint
ensures that users are not exposed to excessive numbers of mainline ads.
We then use the collected data to estimate bounds on the
counterfactual expectations of the advertiser value and the
counterfactual expectation of the number of mainline ads per
page. Figure [21](#S6.F21 "Figure 21 ‣ 6.3 Tuning Ad Placement Auctions ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems") shows the corresponding level
curves for a particular query cluster. We can then run a simple
optimization algorithm and determine the optimal auction tuning
parameters for each cluster subject to the global mainline footprint
constraint. Appendix [D](#Sx2.SS4 "D Counterfactual Derivatives ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems") describes how to estimate
off-policy counterfactual derivatives that greatly help the numerical
optimization.
The obvious alternative (see Charles and Chickering, [2012](#bib.bib11)) consists of
replaying the auctions with different parameters and simulating the user
using a click probability model. However, it may be unwise to rely on
a click probability model to estimate the best value of a squashing
coefficient that is expected to compensate for the uncertainty of the
click prediction model itself. The counterfactual approach described
here avoids the problem because it does not rely on a click prediction
model to simulate users. Instead it estimates the counterfactual
peformance of the system using the actual behavior of the users
collected under moderate randomization.
###
6.4 Sequential Design
Confidence intervals computed after a first randomized data collection
experiment might not offer sufficient accuracy to choose a final value of
the parameter θ. It is generally unwise to simply collect additional
samples using the same experimental setup because the current data already
reveals information (figure [20](#S6.F20 "Figure 20 ‣ 6.2 Uniform Confidence Intervals ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems")) that can be used to
design a better data collection experiment. Therefore, it seems natural to
extend the learning principle discussed in
section [6.1](#S6.SS1 "6.1 A Learning Principle ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems") to a sequence of data collection
experiments. The parameter θt characterizing the t-th experiment is
then determined using samples collected during the previous experiments
(figure [22](#S6.F22 "Figure 22 ‣ 6.4 Sequential Design ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems")).

Figure 22: Sequential design – The parameter θt of each data
collection experiment is determined using data collected during
the previous experiments.
Although it is relatively easy to construct convergent sequential
design algorithms, reaching the *optimal* learning performance is
notoriously difficult (Wald, [1945](#bib.bib58)) because the selection
of parameter θt involves a trade-off between exploitation,
that is, the maximization of the immediate reward Yθt, and
exploration, that is, the collection of samples potentially leading to
better Yθ in the more distant future.
The optimal exploration exploitation trade-off for multi-armed bandits
is well understood (Gittins, [1989](#bib.bib18); Auer et al., [2002](#bib.bib4); Audibert et al., [2007](#bib.bib3))
because an essential property of multi-armed bandits makes the
analysis much simpler: the outcome observed after performing a
particular action brings no information about the value of other
actions. Such an assumption is both unrealistic and pessimistic. For
instance, the outcome observed after displaying a certain ad in
response to a certain query brings very useful information about the
value of displaying similar ads on similar queries.
Refined contextual bandit approaches (Slivkins, [2011](#bib.bib44))
account for similarities in the context and action spaces but do not
take advantage of all the additional opportunities expressed by
structural equation models. For instance, in the contextual bandit
formulation of the ad placement problem outlined in
section [3.5](#S3.SS5 "3.5 Special Cases ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems"), actions are pairs (s,c) describing
the ad slate s and the corresponding click prices c, policies
select actions by combining individual ad scores in very specific
ways, and actions determine the rewards through very specific
mechanisms.
Meanwhile, despite their suboptimal asymptotic properties, heuristic
exploration strategies perform surprisingly well during the time span
in which the problem can be considered stationary. Even in the simple
case of multi-armed bandits, excellent empirical results have been
obtained using Thompson sampling (Chapelle and Li, [2011](#bib.bib9)) or fixed
strategies (Vermorel and Mohri, [2005](#bib.bib56); Kuleshov and Precup, [2010](#bib.bib23)).
Leveraging the problem structure seems more important in practice than
perfecting an otherwise sound exploration strategy.
Therefore, in the absence of sufficient theoretical guidance, it is
both expedient and practical to maximizing ˆYθ at each
round, as described in section [6.1](#S6.SS1 "6.1 A Learning Principle ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems"),
subject to additional ad-hoc constraints ensuring a minimum level of
exploration.
7 Equilibrium Analysis
-----------------------
All the methods discussed in this contribution rely on the isolation assumption
presented in section [3.2](#S3.SS2 "3.2 The Isolation Assumption ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems"). This assumption lets us interpret the
samples as repeated independent trials that follow the pattern defined by the
structural equation model and are amenable to statistical analysis.
The isolation assumption is in fact a component of the counterfactual
conditions under investigation. For instance, in
section [4.6](#S4.SS6 "4.6 Experimenting with Mainline Reserves ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems"), we model single auctions
(figure [3](#S3.F3 "Figure 3 ‣ 3.1 The Flow of Information ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems")) in order to empirically determine how
the ad placement system would have performed if we had changed the
mainline reserves *without incurring a reaction from the users or
the advertisers*.
Since the future publisher revenues depend on the continued satisfaction of
users and advertisers, lifting this restriction is highly desirable.
* We can in principle work with larger structural equation models. For
instance, figure [4](#S3.F4 "Figure 4 ‣ 3.2 The Isolation Assumption ‣ 3 Modeling Causal Systems ‣ Counterfactual Reasoning and Learning Systems") suggests to thread single auction models
with additional causal links representing the impact of the displayed ads on
the future user goodwill. However, there are practical limits on the number
of trials we can consider at once. For instance, it is relatively easy to
simultaneously model all the auctions associated with the web pages served
to the same user during a thirty minute web session. On the other hand, it is
practially impossible to consider several weeks worth of auctions in order
to model their accumulated effect on the continued satisfaction of users and
advertisers.
* We can sometimes use problem-specific knowledge to construct alternate
performance metrics that anticipate the future effects of the feedback
loops. For instance, in section [6.3](#S6.SS3 "6.3 Tuning Ad Placement Auctions ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems"), we optimize the
advertisement value instead of the publisher revenue. Since this alternative
criterion takes the advertiser interests into account, it can be viewed
as a heuristic proxy for the future revenues of the publisher.
This section proposes an alternative way to account for such feedback loops
using the *quasistatic equilibrium* method familiar to physicists:
we assume that the publisher changes the parameter θ so slowly that the
system remains at equilibrium at all times. Using data collected while the
system was at equilibrium, we describe empirical methods to determine how an
infinitesimal intervention dθ on the model parameters would have
displaced the equilibrium:
>
> “*How would the system have performed during the
> data collection period if a small change dθ had been applied to the
> model parameter θ and the equilibrium had been reached before the
> data collection period.*”
>
>
>
A learning algorithm can then update θ to improve selected performance metrics.
###
7.1 Rational Advertisers
The ad placement system is an example of game where each actor
furthers his or her interests by controlling some aspects of the
system: the publisher controls the placement engine parameters, the
advertisers control the bids, and the users control the clicks.
As an example of the general quasi-static approach, this section
focuses on the reaction of *rational advertisers* to small
changes of the scoring functions driving the ad placement system.
Rational advertisers always select bids that maximize their economic
interests. Although there are more realistic ways to model
advertisers, this exercise is interesting because the auction theory
approaches also rely on the rational advertiser assumption (see
section [2.1](#S2.SS1 "2.1 Advertisement Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems")). This analysis seamlessly integrates the
auction theory and machine learning perspectives.

Figure 23: Advertisers select the bid amounts ba
on the basis of the past number of clicks ya and the past prices za
observed for the corresponding ads.

Figure 24: Advertisers control the expected number of
clicks Ya and expected prices Za by adjusting their
bids ba. Rational advertisers select bids that maximize the difference
between the value they see in the clicks and the price they pay.
As illustrated in figure [23](#S7.F23 "Figure 23 ‣ 7.1 Rational Advertisers ‣ 7 Equilibrium Analysis ‣ Counterfactual Reasoning and Learning Systems"), we treat the bid
vector b⋆=(b1…bA)∈[0,bmax]A as the parameter of the
conditional distribution Pb⋆(b|x,v) of the bids associated with
the eligible ads.131313Quantities measured when a feedback causal
system reaches equilibrium often display conditional independence
patterns that cannot be represented with directed acyclic
graphs (Lauritzen and Richardson, [2002](#bib.bib27); Dash, [2003](#bib.bib14)). Treating the
feedback loop as parameters instead of variables works around this
difficuly in a manner that appears sufficient to perform the
quasi-static analysis. The variables ya in the structural
equation model represents the number of clicks received by ads
associated with bid ba. The variables za represents the amount
charged for these clicks to the corresponding advertiser. The
advertisers select their bids ba according to their anticipated
impact on the number of resulting clicks ya and on their
cost za.
Following the pattern of the perfect information assumption (see
section [2.1](#S2.SS1 "2.1 Advertisement Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems")), we assume that the advertisers
eventually acquire full knowledge of the expectations
| | | |
| --- | --- | --- |
| | | |
Let Va denote the value of a click for the corresponding advertiser.
Rational advertiser seek to maximize the difference between the value
they see in the clicks and the price they pay to the publisher,
as illustrated in figure [24](#S7.F24 "Figure 24 ‣ 7.1 Rational Advertisers ‣ 7 Equilibrium Analysis ‣ Counterfactual Reasoning and Learning Systems").
This is expressed by the utility functions
| | | | |
| --- | --- | --- | --- |
| | | | (26) |
Following Athey and Nekipelov ([2010](#bib.bib2)), we argue that the injection of
smooth random noise into the auction mechanism changes the discrete
problem into a continous problem amenable to standard differential
methods. Mild regularity assumption on the densities
probability Pb⋆(b|x,v) and Pθ(q|x,a) are in fact
sufficient to ensure that the expectations Ya(θ,b⋆)
and Za(θ,b⋆) are continuously differentiable functions
of the distribution parameters b⋆ and θ.
Further assuming that utility functions Uθa(b⋆) are
diagonally quasiconcave, [Athey and Nekipelov](#bib.bib2)
establish the existence of a unique Nash equilibrium
| | | | |
| --- | --- | --- | --- |
| | ∀aba∈ArgMaxb Uθa(b1,…,ba−1,b,ba+1,…,bA) | | (27) |
characterized by its first order Karush-Kuhn-Tucker conditions
| | | | |
| --- | --- | --- | --- |
| | ∀aVa∂Ya∂ba−∂Za∂ba ⎧⎪⎨⎪⎩≤0if ba=0,≥0if ba=bmax,=0if 0<ba<bmax. | | (28) |
We use the first order equilibrium
conditions ([28](#S7.E28 "(28) ‣ 7.1 Rational Advertisers ‣ 7 Equilibrium Analysis ‣ Counterfactual Reasoning and Learning Systems")) for two related purposes.
Section [7.2](#S7.SS2 "7.2 Estimating advertiser values ‣ 7 Equilibrium Analysis ‣ Counterfactual Reasoning and Learning Systems") explains how to complete the
advertiser model by estimating the values Va.
Section [7.3](#S7.SS3 "7.3 Estimating the equilibrium response ‣ 7 Equilibrium Analysis ‣ Counterfactual Reasoning and Learning Systems") estimates how the
equilibrium bids and the system performance metrics respond to a small
change dθ of the model parameters.
Interestingly, this approach remains sensible when key assumptions of
the equilibrium model are violated. The perfect information assumption
is unlikely to hold in practice. The quasi-concavity of the utility
functions is merely plausible. However, after observing the operation
of the stationary ad placement system for a sufficiently long time, it
is reasonable to assume that the most active advertisers have tried
small bid variations and have chosen locally optimal ones. Less active
advertisers may leave their bids unchanged for longer time periods, but
can also update them brutally if they experience a significant change in
return on investment. Therefore it makes sense to use data collected
when the system is stationary to estimate advertiser values Va that
are consistent with the first order equilibrium conditions.
We then hope to maintain the conditions that each advertisers
had found sufficiently attractive, by first estimating how a
small change dθ displaces this posited local equilibrium,
then by using performance metrics that take this displacement
into account.
###
7.2 Estimating advertiser values
We first need to estimate the partial derivatives appearing in the
equilibrium condition ([28](#S7.E28 "(28) ‣ 7.1 Rational Advertisers ‣ 7 Equilibrium Analysis ‣ Counterfactual Reasoning and Learning Systems")). These
derivatives measure how the expectations Ya and Za would have
been changed if each advertiser had placed a slighly different
bid ba. Such quantities can be estimated by randomizing the bids
and computing on-policy counterfactual derivatives as explained in
appendix [D](#Sx2.SS4 "D Counterfactual Derivatives ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems"). Confidence intervals can be derived with
the usual tools.
Unfortunately, the publisher is not allowed to directly randomize the
bids because the advertisers expect to pay prices computed using the
bid they have specified and not the potentially higher bids resulting
from the randomization. However, the publisher has full control on the
estimated click probabilities qi,p(x). Since the
rank-scores ri,p(x) are the products of the bids and the
estimated click probabilities (see section [2.1](#S2.SS1 "2.1 Advertisement Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems")), a
random multiplier applied to the bids can also be interpreted as a
random multiplier applied to the estimated click probabilities. Under
these two interpretations, the same ads are shown to the users, but
different click prices are charged to the advertisers. Therefore, the
publisher can simultaneously charge prices computed as if the
multiplier had been applied to the estimated click probabilities, and
collect data as if the multiplier had been applied to the bid. This
data can then be used to estimate the derivatives.
Solving the first order equilibrium equations then yields estimated
advertiser values Va that are consistent with the observed
data.141414This approach is of course related to the value
estimation method proposed by Athey and Nekipelov ([2010](#bib.bib2)) but
strictly relies on the explicit randomization of the scores. In
contrast, practical considerations force [Athey and Nekipelov](#bib.bib2)
to rely on the apparent noise and hope that the noise model accounts
for all potential confounding factors.
| | | |
| --- | --- | --- |
| | Va≈∂Ya∂ba/∂Za∂ba | |
There are however a couple caveats:
* The advertiser bid ba may be too small to cause ads to be
displayed. In the absence of data, we have no means to estimate a
click value for these advertisers.
* Many ads are not displayed often enough to obtain accurate estimates
of the partial derivatives ∂Ya∂ba and ∂Za∂ba.
This can be partially remediated by smartly aggregating the data of
advertisers deemed similar.
* Some advertisers attempt to capture all the available ad
opportunities by placing extremely high bids and hoping to pay
reasonable prices thanks to the generalized second price rule.
Both partial derivatives ∂Ya∂ba and ∂Za∂ba are
equal to zero in such cases. Therefore we cannot recover Va by
solving the equilibrium
equation ([28](#S7.E28 "(28) ‣ 7.1 Rational Advertisers ‣ 7 Equilibrium Analysis ‣ Counterfactual Reasoning and Learning Systems")). It is however possible to
collect useful data by selecting for these advertisers a maximum
bid bmax that prevents them from monopolizing the eligible ad
opportunities. Since the equilibrium condition is an inequality
when ba=bmax, we can only determine a lower bound of the
values Va for these advertisers.
These caveats in fact underline the limitations of the advertiser
modelling assumptions. When their ads are not displayed often enough,
advertisers have no more chance to acquire a full knowledge of the
expectations Ya and Za than the publisher has a chance to
determine their value. Similarly, advertisers that place extremely
high bids are probably underestimating the risk to occasionally
experience a very high click price. A more realistic model of the
advertiser information acquisition is required to adequately handle
these cases.
###
7.3 Estimating the equilibrium response
Let A be the set of the *active advertisers*, that is, the
advertisers whose value can be estimated (or lower bounded) with sufficient
accuracy. Assuming that the other advertisers leave their bids unchanged, we
can estimate how the active advertisers adjust their bids in response to an
infinitesimal change dθ of the scoring model parameters. This is
achieved by differentiating the equilibrium
equations ([28](#S7.E28 "(28) ‣ 7.1 Rational Advertisers ‣ 7 Equilibrium Analysis ‣ Counterfactual Reasoning and Learning Systems")):
| | | | |
| --- | --- | --- | --- |
| | ∀a′∈A,0 = (Va′∂2Ya′∂ba′∂θ−∂2Za′∂ba′∂θ)dθ+∑a∈A(Va′∂2Ya′∂ba′∂ba−∂2Za′∂ba′∂ba)dba . | | (29) |
The partial second derivatives must be estimated as described in
appendix [D](#Sx2.SS4 "D Counterfactual Derivatives ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems"). Solving this linear system of
equations then yields an expression of the form
| | | |
| --- | --- | --- |
| | dba = Ξadθ. | |
This expression can then be used to estimate how any counterfactual
expectation Y of interest changes when the publisher applies an
infinitesimal change dθ to the scoring parameter θ and the
active advertisers A rationally adjust their bids ba in response:
| | | | |
| --- | --- | --- | --- |
| | dY=(∂Y∂θ+∑aΞa∂Y∂ba)dθ . | | (30) |
Although this expression provides useful information, one should remain aware
of its limitations. Because we only can estimate the reaction of active
advertisers, expression ([30](#S7.E30 "(30) ‣ 7.3 Estimating the equilibrium response ‣ 7 Equilibrium Analysis ‣ Counterfactual Reasoning and Learning Systems"))
does not includes the potentially positive reactions of advertisers who did
not bid but could have. Because we only can estimate a lower bound of their
values, this expression does not model the potential reactions of advertisers
placing unrealistically high bids. Furthermore, one needs to be very cautious
when the system ([29](#S7.E29 "(29) ‣ 7.3 Estimating the equilibrium response ‣ 7 Equilibrium Analysis ‣ Counterfactual Reasoning and Learning Systems")) approaches
singularities. Singularities indicate that the rational advertiser
assumption is no longer sufficient to determine the reactions of certain
advertisers. This happens for instance when advertisers cannot find bids that
deliver a satisfactory return. The eventual behavior of such advertisers then
depends on factors not taken in consideration by our model.
To alleviate these issues, we could alter the auction mechanism in a
manner that forces advertisers to reveal more information, and we
could enforce policies ensuring that the
system ([29](#S7.E29 "(29) ‣ 7.3 Estimating the equilibrium response ‣ 7 Equilibrium Analysis ‣ Counterfactual Reasoning and Learning Systems")) remains safely
nonsingular. We could also design experiments revealing the impact of
the fixed costs incurred by advertisers participating into new
auctions. Although additional work is needed to design such
refinements, the quasistatic equilibrium approach provides a generic
framework to take such aspects into account.
###
7.4 Discussion
The rational advertiser assumption is the cornerstone of seminal works
describing simplified variants of the ad placement problem using
auction theory (Varian, [2007](#bib.bib54); Edelman et al., [2007](#bib.bib16)). More sophisticated
works account for more aspects of the ad placement problem, such as
the impact of click prediction learning
algorithms (Lahaie and McAfee, [2011](#bib.bib24)), the repeated nature of the ad
auctions (Bergemann and Said, [2010](#bib.bib6)), or for the fact that advertisers
place bids valid for multiple
auctions (Athey and Nekipelov, [2010](#bib.bib2)). Despite these advances, it
seems technically very challenging to use these methods and account
for all the effects that can be observed in practical ad placement
systems.
We believe that our counterfactual reasoning framework is best viewed
as a modular toolkit that lets us apply insights from auction theory
and machine learning to problems that are far more complex than those
studied in any single paper. For instance, the quasi-static
equilibrium analysis technique illustrated in this section extends
naturally to the analysis of multiple simultaneous causal feedback
loops involving additional players:
* The first step consists in designing ad-hoc experiments to identify
the parameters that determine the equilibrium equation of each
player. In the case of the advertisers, we have shown how to use
randomized scores to reveal the advertiser values. In the case of
the user feedback, we must carefully design experiments that reveal
how users respond to changes in the quality of the displayed ads.
* Differentiating all the equilibrium equations yields a linear system
of equations linking the variations of the parameter under our
control, such as dθ, and all the parameters under the control
of the other players, such as the advertiser bids, or the user
willingness to visit the site and click on ads. Solving this system
and writing the total derivative of the performance measure gives
the answer to our question.
Although this programme has not yet been fully realized, the existence
of a principled framework to handle such complex interactions is
remarkable. Furthermore, thanks to the flexibility of the causal
inference frameworks, these techniques can be infinitely adapted to
various modeling assumptions and various system complexities.
8 Conclusion
-------------
Using the ad placement example, this work demonstrates the central role of
causal inference (Pearl, [2000](#bib.bib34); Spirtes et al., [1993](#bib.bib46)) for the design of learning
systems interacting with their environment. Thanks to importance sampling
techniques, data collected during randomized experiments gives precious cues
to assist the designer of such learning systems and useful signals to drive
learning algorithms.
Two recurrent themes structure this work. First, we maintain a sharp
distinction between the learning algorithms and the extraction of the
signals that drive them. Since real world learning systems often
involve a mixture of human decision and automated processes, it makes
sense to separate the discussion of the learning signals from the
discussion of the learning algorithms that leverage them. Second, we
claim that the mathematical and philosophical tools developed for the
analysis of physical systems appear very effective for the analysis of
causal information systems and of their equilibria. These two themes
are in fact a vindication of cybernetics (Wiener, [1948](#bib.bib59)).
Acknowledgements
----------------
We would like to acknowledge extensive discussions with Susan Athey,
Miroslav Dudík, Patrick Jordan, John Langford, Lihong Li,
Sebastien Lahaie, Shie Mannor, Chris Meek, Alex Slivkins, and Paul
Viola. We also thank the Microsoft adCenter RnR team for giving us
the invaluable opportunity to deploy these ideas at scale and prove
their worth. Finally we gratefully acknowledge the precious comments
of our JMLR editor and reviewers.
Appendices
----------
###
A Greedy Ad Placement Algorithms
Section [2.1](#S2.SS1 "2.1 Advertisement Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems") describes how to select and place ads on a web
page by maximizing the total rank-score ([1](#S2.E1 "(1) ‣ 3rd item ‣ 2.1 Advertisement Placement ‣ 2 Causation Issues in Computational Advertising ‣ Counterfactual Reasoning and Learning Systems")). Following
(Varian, [2007](#bib.bib54); Edelman et al., [2007](#bib.bib16)), we assume that the click probability
estimates are expressed as the product of a positive position term γp
and a positive ad term βi(x). The rank-scores can therefore be written
as ri,p(x)=γpbiβi(x). We also assume that the policy
constraints simply state that a web page should not display more than one ad
belonging to any given advertiser. The discrete maximization problem is then
amenable to computationally efficient greedy algorithms.
Let us fix a layout L and focus on the inner maximization problem.
Without loss of generality, we can renumber the positions such that
| | | |
| --- | --- | --- |
| | L={1,2,…N}andγ1≥γ2≥⋯≥0. | |
and write the inner maximization problem as
| | | |
| --- | --- | --- |
| | maxi1,…,iN RL(i1,…,iN) = ∑p∈Lrip,p(x) | |
subject to the policy constraints and reserve constraints
ri,p(x)≥Rp(x).
Let Si denote the advertiser owning ad i. The set of ads is then
partitioned into subsets Is={i:Si=s} gathering the ads
belonging to the same advertiser s. The ads that maximize the product
biβi(x) within set Is are called the best ads for advertiser
s. If the solution of the discrete maximization problem contains one ad
belonging to advertiser s, then it is easy to see that this ad must be one
of the best ads for advertiser s: were it not the case, replacing the
offending ad by one of the best ads would yield a higher RL without
violating any of the constraints. It is also easy to see that one could
select any of the best ads for advertiser s without changing RL.
Let the set I∗ contain exactly one ad per advertiser,
arbitrarily chosen among the best ads for this advertiser.
The inner maximization problem can then be simplified as:
| | | |
| --- | --- | --- |
| | maxi1,…,iN∈I∗ RL(i1,…,iN) = ∑p∈Lγpbipβip(x) | |
where all the indices i1,…,iN are distinct,
and subject to the reserve constraints.
Assume that this maximization problem has a solution i1,…,iN,
meaning that there is a feasible ad placement solution for the layout L.
For k=1…N, let us define I∗k⊂I∗ as
| | | |
| --- | --- | --- |
| | I∗k=ArgMaxi∈I∗∖{i1,…,ik−1} biβi(x). | |
It is easy to see that I∗k intersects {ik,…,iN} because, were
it not the case, replacing ik by any element of I∗k would increase
RL without violating any of the constraints. Furthermore it is easy
to see that ik∈I∗k because, were it not the case, there would be h>k
such that ih∈I∗k, and swapping ik and ih would increase
RL without violating any of the constraints.
Therefore, if the inner maximization problem admits a solution, we can compute
a solution by recursively picking i1,…,iN from I∗1,I∗2,…,I∗N. This can be done efficiently by first sorting the biβi(x) in
decreasing order, and then greedily assigning ads to the best positions
subject to the reserve constraints. This operation has to be repeated for all
possible layouts, including of course the empty layout.
The same analysis can be carried out for click prediction estimates expressed
as arbitrary monotone combination of a position term γp(x) and an ad
term βi(x), as shown, for instance, by Graepel et al. ([2010](#bib.bib21)).
###
B Confidence Intervals
Section [4.4](#S4.SS4 "4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems") explains how to obtain improved confidence
intervals by replacing the unbiased importance sampling
estimator ([9](#S4.E9 "(9) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) by the clipped importance sampling
estimator ([12](#S4.E12 "(12) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")). This appendix provides details
that could have obscured the main message.
####
b.1 Outer confidence interval
We first address the computation of the outer confidence
interval ([14](#S4.E14 "(14) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) which describes how the
estimator ˆY∗ approaches the clipped expectation ¯Y∗.
| | | |
| --- | --- | --- |
| | ¯Y∗=∫ωℓ(ω)¯w(ω) P(ω) ≈ ˆY∗=1nn∑i=1ℓ(ωi)¯w(ωi). | |
Since the samples ℓ(ωi)¯w(ωi) are independent and
identically distributed, the central limit theorem
(e.g., Cramér, [1946](#bib.bib13), section 17.4) states that the empirical average
ˆY∗ converges in law to a normal distribution of mean
¯Y∗=E[ℓ(ω)¯w(ω)] and variance
¯V=var[ℓ(ω)¯w(ω)].
Since this convergence usually
occurs quickly, it is widely accepted to write
| | | |
| --- | --- | --- |
| | \largeP{ ˆY∗−ϵR≤¯Y∗≤ˆY∗+ϵR }≥1−δ, | |
with
| | | | |
| --- | --- | --- | --- |
| | ϵR = erf\raisebox6.88pt$−1$(1−δ) √2¯V . | | (31) |
and to estimate the variance ¯V using the
sample variance ˆV
| | | |
| --- | --- | --- |
| | ¯V ≈ ˆV=1n−1n∑i=1(ℓ(ωi)¯w(ωi)−ˆY∗)2. | |
This approach works well when the ratio ceiling R is relatively small.
However the presence of a few very large ratios makes the
variance estimation noisy and might slow down
the central limit convergence.
The first remedy is to bound the variance more rigorously.
For instance, the following bound results from
(Maurer and Pontil, [2009](#bib.bib31), theorem 10).
| | | |
| --- | --- | --- |
| | \largeP⎧⎨⎩√¯V > √ˆV + (M−m)R√2log(2/δ)n−1⎫⎬⎭≤δ | |
Combining this bound with ([31](#Sx2.E31 "(31) ‣ B.1 Outer confidence interval ‣ B Confidence Intervals ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems")) gives a confidence
interval valid with probability greater than 1−2δ.
Although this approach eliminates the potential problems related to the
variance estimation, it does not address the potentially slow
convergence of the central limit theorem.
The next remedy is to rely on *empirical Bernstein bounds*
to derive rigorous confidence intervals that leverage both
the sample mean and the sample variance
(Audibert et al., [2007](#bib.bib3); Maurer and Pontil, [2009](#bib.bib31)).
{theorem}
[Empirical Bernstein bound]
*(Maurer and Pontil, [2009](#bib.bib31), thm 4)*
Let X,X1,X2,…,Xn be i.i.d. random variable
with values in [a,b] and let δ>0.
Then, with probability at least 1−δ,
| | | |
| --- | --- | --- |
| | E[X]−Mn ≤ √2Vnlog(2/δ)n+(b−a)7log(2/δ)3(n−1) , | |
where Mn and Vn respectively are
the sample mean and variance
| | | |
| --- | --- | --- |
| | Mn=1nn∑i=1Xi ,Vn=1n−1n∑i=1(Xi−Mn)2 . | |
Applying this theorem to both ℓ(ωi)¯w(ωi)
and −ℓ(ωi)¯w(ωi) provides
confidence intervals that hold for for the worst possible distribution
of the variables ℓ(ω) and ¯w(ω).
| | | |
| --- | --- | --- |
| | \largeP{ ˆY∗−ϵR≤¯Y∗≤ˆY∗+ϵR }≥1−2δ | |
where
| | | | |
| --- | --- | --- | --- |
| | ϵR = √2ˆVlog(2/δ)n+MR7log(2/δ)3(n−1). | | (32) |
Because they hold for the worst possible distribution,
confidence intervals obtained in this way are less tight than
confidence intervals based on the central limit theorem.
On the other hand, thanks to the Bernstein bound,
they remains reasonably competitive, and they provide
a much stronger guarantee.
####
b.2 Inner confidence interval
Inner confidence intervals are derived from
inequality ([16](#S4.E16 "(16) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) which bounds
the difference between the counterfactual expectation Y∗ and
the clipped expectation ¯Y∗:
| | | |
| --- | --- | --- |
| | 0 ≤ Y∗−¯Y∗≤ M(1−¯W∗). | |
The constant M is defined by assumption ([10](#S4.E10 "(10) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")).
The first step of the derivation
consists in obtaining a lower bound of ¯W∗−ˆYW∗
using either the central limit theorem or an empirical Bernstein bound.
For instance, applying theorem [B.1](#Sx2.SS2.SSS1 "B.1 Outer confidence interval ‣ B Confidence Intervals ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems")
to −¯w(ωi) yields
| | | |
| --- | --- | --- |
| | \largeP⎧⎪⎨⎪⎩ ¯W∗ ≥ ˆYW∗−√2ˆVwlog(2/δ)n−R7log(2/δ)3(n−1)⎫⎪⎬⎪⎭ ≥ 1−δ | |
where ˆVw is the sample variance of the clipped weights
| | | |
| --- | --- | --- |
| | ˆVw = 1n−1n∑i=1(¯w(ωi)−ˆYW∗)2. | |
Replacing in inequality ([16](#S4.E16 "(16) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems"))
gives the outer confidence interval
| | | |
| --- | --- | --- |
| | \largeP{ ¯Y∗ ≤ Y∗≤ ¯Y∗+M(1−ˆYW∗+ξR) }≥1−δ. | |
with
| | | | |
| --- | --- | --- | --- |
| | ξR = √2ˆVwlog(2/δ)n+R7log(2/δ)3(n−1) . | | (33) |
Note that 1−ˆYW+ξR can occasionally be negative.
This occurs in the unlucky cases where the confidence interval
is violated, with probability smaller than δ.
Putting together the inner and outer confidence intervals,
| | | | |
| --- | --- | --- | --- |
| | \largeP{ ˆY∗−ϵR≤Y∗≤ˆY∗+M(1−ˆYW∗+ξR)+ϵR }≥1−3δ, | | (34) |
with ϵR and ξR computed
as described in expressions ([32](#Sx2.E32 "(32) ‣ B.1 Outer confidence interval ‣ B Confidence Intervals ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems"))
and ([33](#Sx2.E33 "(33) ‣ B.2 Inner confidence interval ‣ B Confidence Intervals ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems")).
###
C Counterfactual Differences
We now seek to estimate the difference Y+−Y∗ of the expectations
of a same quantity ℓ(ω) under two different counterfactual
distributions P+(ω) and P∗(ω). These
expectations are often affected by variables whose value is left
unchanged by the interventions under consideration. For instance,
seasonal effects can have very large effects on the number of ad
clicks. When these variables affect both Y+ and Y∗ in similar
ways, we can obtain substantially better confidence intervals for the
difference Y+−Y∗.
In addition to the notation ω representing all the variables in
the structural equation model, we use notation υ to
represent all the variables that are not direct or indirect
effects of variables affected by the interventions under
consideration.
Let ζ(υ) be a known function believed to be a good predictor of
the quantity ℓ(ω) whose counterfactual expectation is sought.
Since P∗(υ)=P(υ), the following equality holds
regardless of the quality of this prediction:
| | | | | | |
| --- | --- | --- | --- | --- | --- |
| | Y∗ = ∫ωℓ(ω) P∗(ω) | = | ∫υζ(υ) P∗(υ) +∫ω[ℓ(ω)−ζ(υ)] P∗(ω) | | (35) |
| | | = | ∫υζ(υ) P(υ) +∫ω[ℓ(ω)−ζ(υ)] w(ω) P(ω) . | |
Decomposing both Y+ and Y∗ in this way and computing the difference,
| | | |
| --- | --- | --- |
| | Y+−Y∗ = ∫ω [ℓ(ω)−ζ(υ)]Δw(ω) P(ω) ≈ 1nn∑i=1[ℓ(ωi)−ζ(υi)]Δw(ωi), | |
| | | | |
| --- | --- | --- | --- |
| | | | (36) |
The outer confidence interval size is reduced if the variance of the
residual ℓ(ω)−ζ(υ) is smaller than the variance of
the original variable ℓ(ω). For instance, a suitable predictor
function ζ(υ) can significantly capture the seasonal click yield
variations regardless of the interventions under consideration.
Even a constant predictor function can considerably change the variance of the
outer confidence interval. Therefore, in the absence of better predictor, we
still can ( and always should ) center the integrand using a constant
predictor.
The rest of this appendix describes how to construct confidence intervals for
the estimation of counterfactual differences. Additional bookkeeping is
required because both the weights Δw(ωi) and the
integrand ℓ(ω)−ζ(υ) can be positive or negative.
We use the notation υ to represent the variables of the structural
equation model that are left unchanged by the intervention under
considerations. Such variables satisfy the relations P∗(υ)=P(υ) and
P∗(ω)=P∗(ω∖υ|υ)P(υ), where we use notation ω∖υ to denote all
remaining variables in the structural equation model. An invariant predictor
is then a function ζ(υ) that is believed to be a good predictor of
ℓ(ω). In particular, it is expected that
var[ℓ(ω)−ζ(υ)] is smaller than var[ℓ(ω)].
####
c.1 Inner confidence interval with dependent bounds
We first describe how to construct finer inner confidence intervals by using
more refined bounds on ℓ(ω). In particular, instead of the simple
bound ([10](#S4.E10 "(10) ‣ 4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")), we can use bounds that
depend on invariant variables:
| | | |
| --- | --- | --- |
| | ∀ωm≤m(υ)≤ℓ(ω)≤M(υ)≤M. | |
The key observation is the equality
| | | |
| --- | --- | --- |
| | E[w∗(ω)|υ] = ∫ω∖υw∗(ω) P(ω∖υ|υ) = ∫ω∖υP∗(ω∖υ|υ)P(υ)P(ω∖υ|υ)P(υ) P(ω∖υ|υ) = 1. | |
We can then write
| | | | | |
| --- | --- | --- | --- | --- |
| | Y∗−¯Y∗ | = | | |
| | | = | ∫υ(1−E[¯w∗(ω)|υ]) M(υ) P(υ) = ∫ω(1−¯w∗(ω)) M(υ) P(ω) = Bhi . | |
Using a similar derivation for the lower bound Blo,
we obtain the inequality
| | | |
| --- | --- | --- |
| | Blo ≤ Y∗−¯Y∗≤ Bhi | |
With the notations
| | | |
| --- | --- | --- |
| | ˆBlo=1nn∑i=1(1−¯w∗(ωi))m(υi) ,ˆBhi=1nn∑i=1(1−¯w∗(ωi))M(υi) ,ˆVlo=1n−1n∑i=1[(1−¯w∗(ωi))m(υi)−ˆBlo]2 ,ˆVhi=1n−1n∑i=1[(1−¯w∗(ωi))M(υi)−ˆBhi]2 ,ξlo = √2ˆVlolog(2/δ)n+|m|R7log(2/δ)3(n−1) ,ξhi = √2ˆVhilog(2/δ)n+|M|R7log(2/δ)3(n−1) , | |
two applications of theorem [B.1](#Sx2.SS2.SSS1 "B.1 Outer confidence interval ‣ B Confidence Intervals ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems")
give the inner confidence interval:
| | | |
| --- | --- | --- |
| | \largeP{ ¯Y∗+ˆBlo−ξlo ≤ Y∗ ≤ ¯Y∗+ˆBhi+ξhi } ≥ 1−2δ . | |
####
c.2 Confidence Intervals for Counterfactual Differences
We now describe how to leverage invariant predictors in order to
construct tighter confidence intervals for the difference
of two counterfactual expectations.
| | | |
| --- | --- | --- |
| | Y+−Y∗ ≈ 1nn∑i=1[ℓ(ωi)−ζ(υi)]Δw(ωi)~{}~{}with~{}~{}Δw(ω)=P+(ω)−P∗(ω)P(ω). | |
Let us define the reweigthing ratios
w+(ω)=P+(ω)/P(ω) and
w∗(ω)=P∗(ω)/P(ω),
their clipped variants ¯w+(ω) and ¯w∗(ω),
and the clipped centered expectations
| | | |
| --- | --- | --- |
| | ¯Y+c=∫ω[ℓ(ω)−ζ(υ)]¯w+(ω)P(ω)and¯Y∗c=∫ω[ℓ(ω)−ζ(υ)]¯w∗(ω)P(ω). | |
The outer confidence interval is obtained by applying the
techniques of section [B.1](#Sx2.SS2.SSS1 "B.1 Outer confidence interval ‣ B Confidence Intervals ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems") to
| | | |
| --- | --- | --- |
| | ¯Y+c−¯Y∗c = ∫ω [ℓ(ω)−ζ(υ)][¯w+(ω)−¯w∗(ω)] P(ω). | |
Since the weights ¯w+−¯w∗ can be positive or negative, adding or
removing a constant to ℓ(ω) can considerably change the variance of
the outer confidence interval. This means that one should *always* use a
predictor. Even a *constant predictor* can vastly improve the outer
confidence interval difference.
The inner confidence interval is then obtained by writing the difference
| | | | | |
| --- | --- | --- | --- | --- |
| | (Y+−Y∗)−(¯Y+c−¯Y∗c) | = | | |
| | | − | | |
and bounding both terms by leveraging υ–dependent bounds
on the integrand:
| | | |
| --- | --- | --- |
| | ∀ω−M ≤−ζ(υ) ≤ ℓ(ω)−ζ(υ) ≤ M−ζ(υ) ≤ M . | |
This can be achieved as shown in section [C.1](#Sx2.SS3.SSS1 "C.1 Inner confidence interval with dependent bounds ‣ C Counterfactual Differences ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems").
###
D Counterfactual Derivatives
We now consider interventions that depend on a continuous parameter θ.
For instance, we might want to know what the performance of the ad placement
engine would have been if we had used a parametrized scoring model.
Let Pθ(ω) represent the counterfactual Markov
factorization associated with this intervention. Let Yθ be the
counterfactual expectation of ℓ(ω) under distribution Pθ.
Computing the derivative of ([35](#Sx2.E35 "(35) ‣ C Counterfactual Differences ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems"))
immediately gives
| | | |
| --- | --- | --- |
| | | |
| | | | |
| --- | --- | --- | --- |
| | withwθ(ω)=Pθ(ω)P(ω)andw′θ(ω)=∂wθ(ω)∂θ=wθ(ω)∂logPθ(ω)∂θ . | | (37) |
Replacing the expressions P(ω) and Pθ(ω) by the
corresponding Markov factorizations gives many opportunities to
simplify the reweighting ratio w′θ(ω). The term
wθ(ω) simplifies as shown in ([8](#S4.E8 "(8) ‣ 4.3 Markov Factor Replacement ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")).
The derivative of logPθ(ω) depends only on the factors
parametrized by θ. Therefore, in order to evaluate
w′θ(ω), we only need to know the few factors
affected by the intervention.
Higher order derivatives can be estimated using the same approach.
For instance,
| | | |
| --- | --- | --- |
| | ∂2Yθ∂θi∂θj = ∫w [ℓ(ω)−ζ(υ)]w′′ij(ω) P(ω) ≈ 1nn∑i=1 [ℓ(ωi)−ζ(υi)]w′′ij(ωi) | |
| | | | |
| --- | --- | --- | --- |
| | withw′′ij(ω)=∂2wθ(ω)∂θi∂θj=wθ(ω)∂logPθ(ω)∂θi∂logPθ(ω)∂θj+wθ(ω)∂2logPθ(ω)∂θi∂θj . | | (38) |
The second term in w′′ij(ω) vanishes when θi and
θj parametrize distinct factors in Pθ(ω).
####
d.1 Infinitesimal Interventions and Policy Gradient
Expression ([37](#Sx2.E37 "(37) ‣ D Counterfactual Derivatives ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems")) becomes particularly attractive
when P(ω)=Pθ(ω), that is, when one seeks
derivatives that describe the effect of an infinitesimal intervention
on the system from which the data was collected. The resulting
expression is then identical to the celebrated *policy
gradient* (Aleksandrov et al., [1968](#bib.bib1); Glynn, [1987](#bib.bib19); Williams, [1992](#bib.bib60)) which
expresses how the accumulated rewards in a reinforcement learning
problem are affected by small changes of the parameters of the policy
function.
| | | |
| --- | --- | --- |
| | ∂Yθ∂θ = ∫ω[ℓ(ω)−ζ(υ)]w′θ(ω)Pθ(ω) ≈ 1nn∑i=1[ℓ(ωi)−ζ(υi)]w′θ(ωi) | |
| | | | |
| --- | --- | --- | --- |
| | where ωi are sampled i.i.d. from Pθ and~{}w′θ(ω) = ∂logPθ(ω)∂θ. | | (39) |
Sampling from Pθ(ω) eliminates the potentially large
ratio wθ(ω) that usually plagues importance sampling
approaches. Choosing a parametrized distribution that depends smoothly on
θ is then sufficient to contain the size of the
weights w′θ(ω). Since the weights can be positive or
negative, centering the integrand with a prediction function ζ(υ)
remains very important. Even a constant predictor ζ can substantially
reduce the variance
| | | | | |
| --- | --- | --- | --- | --- |
| | var[(ℓ(ω)−ζ)w′θ(ω)] | = | var[ℓ(ω)w′θ(ω)−ζw′θ(ω)] | |
| | | | = var[ℓ(ω)w′θ(ω)]−2ζcov[ℓ(ω)w′θ(ω),w′θ(ω)]+ζ2var[w′θ(ω)] | |
whose minimum is reached for
ζ=cov[ℓ(ω)w′θ(ω),w′θ(ω)]var[w′θ(ω)]=E[ℓ(ω)w′θ(ω)2]E[w′θ(ω)2] .
We sometimes want to evaluate expectations under a counterfactual distribution
that is too far from the actual distribution to obtain reasonable confidence
intervals. Suppose, for instance, that we are unable to reliably estimate
which click yield would have been observed if we had used a certain
parameter θ∗ for the scoring models. We still can estimate how
quickly and in which direction the click yield would have changed if we had
slightly moved the current scoring model parameters θ in the direction
of the target θ∗. Although such an answer is not as good as a
reliable estimate of Yθ∗, it is certainly better than no answer.
####
d.2 Off-Policy Gradient
We assume in this subsection that the parametrized probability
distribution Pθ(ω) is regular enough to ensure that all
the derivatives of interest are defined and that the
event {wθ(ω)=R} has probability zero.
Furthermore, in order to simplify the exposition, the following
derivation does not leverage an invariant predictor function.
Estimating derivatives using data sampled from a distribution
P(ω) different from Pθ(ω) is more challenging
because the ratios wθ(ωi) in
equation ([37](#Sx2.E37 "(37) ‣ D Counterfactual Derivatives ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems")) can take very large values.
However it is comparatively easy to estimate the derivatives of lower
and upper bounds using a slightly different way to clip the weights.
Using notation 1l(x) represent the indicator function,
equal to one if condition x is true and zero otherwise,
let us define respectively the clipped weights ¯w\sc zθ and
the capped weights ¯w\sc mθ:
| | | |
| --- | --- | --- |
| | ¯w\sc zθ(ω)=wθ(ω)1l{P∗(ω)<RP(ω)}and¯w\sc mθ(ω)=min{wθ(ω),R} . | |
Although section [4.4](#S4.SS4 "4.4 Confidence Intervals ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems") illustrates the use of clipped weights,
the confidence interval derivation can be easily extended to the capped weights.
Defining the capped quantities
| | | | |
| --- | --- | --- | --- |
| | ¯Yθ=∫ωℓ(ω)¯w\sc mθ(ω)P(ω)and¯Wθ=∫ω¯w\sc mθ(ω)P(ω) | | (40) |
and writing
| | | | | |
| --- | --- | --- | --- | --- |
| | 0 ≤ Yθ−¯Yθ | = | ∫ω∈Ω∖ΩRℓ(ω)(P∗(ω)−RP(ω)) | |
| | | ≤ | M(1−P∗(ΩR)−RP(Ω∖ΩR)) = M(1−∫ω¯w\sc mθ(ω)P(ω)) | |
yields the inequality
| | | | |
| --- | --- | --- | --- |
| | ¯Yθ ≤ Yθ≤ ¯Yθ+M(1−¯Wθ) . | | (41) |
In order to obtain reliable estimates of the derivatives of these upper and
lower bounds, it is of course sufficient to obtain reliable estimates of the
derivatives of ¯Yθ and ¯Wθ. By separately considering
the cases wθ(ω)<R and wθ(ω)>R,
we easily obtain the relation
| | | |
| --- | --- | --- |
| | ¯w\sc m′θ(ω) = ∂¯w\sc mθ(ω)∂θ = ¯w%
\sc zθ(ω)∂logPθ(ω)∂θwhen wθ(ω)≠R | |
and, thanks to the regularity assumptions, we can write
| | | | | |
| --- | --- | --- | --- | --- |
| | ∂¯Yθ∂θ | = | ∫ωℓ(ω)¯w\sc m′θ(ω)P(ω) ≈ 1nn∑i=1ℓ(ωi)¯w\sc m′θ(ωi) , | |
| | ∂¯Wθ∂θ | = | ∫ω¯w\sc m′θ(ω)P(ω) ≈ 1nn∑i=1¯w\sc m′θ(ωi), | |
Estimating these derivatives is considerably easier than using
approximation ([37](#Sx2.E37 "(37) ‣ D Counterfactual Derivatives ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems")) because they involve the bounded
quantity ¯w\sc zθ(ω) instead of the potentially large
ratio wθ(ω). It is still necessary to choose a
sufficiently smooth sampling distribution P(ω) to limit the
magnitude of ∂logPθ/∂θ.
Such derivatives are very useful to drive optimization algorithms. Assume for
instance that we want to find the parameter θ that maximizes the
counterfactual expectation Yθ as illustrated in
section [6.3](#S6.SS3 "6.3 Tuning Ad Placement Auctions ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems"). Maximizing the estimate obtained using
approximation ([7](#S4.E7 "(7) ‣ 4.3 Markov Factor Replacement ‣ 4 Counterfactual Analysis ‣ Counterfactual Reasoning and Learning Systems")) could reach its maximum for a
value of θ that is poorly explored by the actual
distribution. Maximizing an estimate of the lower bound
([41](#Sx2.E41 "(41) ‣ D.2 Off-Policy Gradient ‣ D Counterfactual Derivatives ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems")) ensures that the optimization algorithm finds a
trustworthy answer.
###
E Uniform empirical Bernstein bounds
This appendix reviews the uniform empirical Bernstein bound
given by Maurer and Pontil ([2009](#bib.bib31)) and describes how it can
be used to construct the uniform confidence
interval ([24](#S6.E24 "(24) ‣ 2nd item ‣ 6.2 Uniform Confidence Intervals ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems")).
The first step consists of characterizing the size of a family F of
functions mapping a space X into the
interval [a,b]⊂R. Given n
points x=(x1…xn)∈Xn, the
trace F(x)∈Rn is the set of vectors
(f(x1),…,f(xn)) for all functions f∈F.
{definition}
[Covering numbers, etc.]
Given ε>0, the covering number N(x,ε,F) is
the smallest possible cardinality of a subset C⊂F(x) satisfying
the condition
| | | |
| --- | --- | --- |
| | ∀v∈F(x) ∃c∈C maxi=1…n|vi−ci|≤ε , | |
and the growth function N(n,ε,F) is
| | | |
| --- | --- | --- |
| | N(n,ε,F) = supx∈XnN(x,ε,F) . | |
Thanks to a famous combinatorial
lemma (Vapnik and Chervonenkis, [1968](#bib.bib52), [1971](#bib.bib53); Sauer, [1972](#bib.bib40)),
for many usual parametric families F, the growth
function N(n,ε,F) increases at most
polynomially151515For a simple proof of this fact, slice [a,b]
into intervals Sk of maximal width ε and apply the
lemma to the family of indicator functions
(xi,Sk)↦1l{f(xi)∈Sk}.
with both n and 1/ε.
{theorem}
[Uniform empirical Bernstein bound]
*(Maurer and Pontil, [2009](#bib.bib31), thm 6)*
Let δ∈(0,1), n>=16.
Let X,X1,…,Xn be i.i.d. random variables with values in X.
Let F be a set of functions mapping X into [a,b]⊂R
and let M(n)=10N(2n,F,1/n).
Then we probability at least 1−δ,
| | | |
| --- | --- | --- |
| | ∀f∈F, E[f(X)]−Mn ≤ √18Vnlog(M(n)/δ)n+(b−a)15log(M(n)/δ)n−1 , | |
where Mn and Vn respectively are
the sample mean and variance
| | | |
| --- | --- | --- |
| | Mn=1nn∑i=1f(Xi) ,Vn=1n−1n∑i=1(f(Xi)−Mn)2 . | |
The statement of this theorem emphasizes its similarity with the
non-uniform empirical Bernstein bound
(theorem [B.1](#Sx2.SS2.SSS1 "B.1 Outer confidence interval ‣ B Confidence Intervals ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems")).
Although the constants are less attractive, the uniform bound
still converges to zero when n increases,
provided of course that M(n)=10N(2n,F,1/n) grows
polynomially with n.
Let us then define the family of functions
| | | |
| --- | --- | --- |
| | F={ fθ:ω↦ℓ(ω)¯w%
\sc mθ(ω) , gθ:ω↦¯w\sc
m% θ(ω) , ∀θ∈F } , | |
and use the uniform empirical Bernstein bound to derive an outer
inequality similar to ([32](#Sx2.E32 "(32) ‣ B.1 Outer confidence interval ‣ B Confidence Intervals ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems")) and an inner
inequality similar to ([33](#Sx2.E33 "(33) ‣ B.2 Inner confidence interval ‣ B Confidence Intervals ‣ Appendices ‣ Counterfactual Reasoning and Learning Systems")). The theorem
implies that, with probability 1−δ, both inequalities are simultaneously
true for all values of the parameter θ. The uniform confidence
interval ([24](#S6.E24 "(24) ‣ 2nd item ‣ 6.2 Uniform Confidence Intervals ‣ 6 Learning ‣ Counterfactual Reasoning and Learning Systems")) then follows directly. |
b6aa85c2-e451-49ac-9721-898922017cf6 | trentmkelly/LessWrong-43k | LessWrong | Progress links and tweets, 2023-01-04
Progress Forum
* Building Fast and Slow Part III: Design of the World Trade Center (Brian Potter)
* Why pessimism sounds smart (a oldie by yours truly)
Announcements
* Foresight Institute has a spot for a J1 visa in San Francisco in 2023 (@allisondman)
Tweets
* One container ship carries more than the whole English fleet did 440 years ago. Also: “What, load boxes ashore and then load the boxes on the ship?”
* “A fully general argument against ever doing anything that changes anything, ever”
* Sometimes giving someone a book changes the course of their life
Retweets
* “We already have the tools to preserve brains fantastically well”
* So many of the world’s great infrastructure projects would be impossible today
* The 1930 campaign to stop people from listening to recorded music
* Who are some good, interesting, up-and-coming, not-yet-famous essayists/bloggers?
* ChatGPT can correct OCR errors in historical texts
* California court rules that economic growth as such is an environmental harm (!)
* A rapid combo test for covid, flu and RSV. Unfortunately illegal in the US
|
07fce7fc-be97-4488-87f7-505b2f294846 | trentmkelly/LessWrong-43k | LessWrong | "A Definition of Subjective Probability" by Anscombe and Aumann
In the course of studying how subjective probabilities can be defined, I read A Definition of Subjective Probability (1963) by Anscombe and Aumann. My notes may be of interest to the Less Wrong community, and have pasted them below.
The authors define two types of lotteries:
* A "roulette lottery" which is a game of chance with "physical" probabilities attached to outcomes, where each outcome is associated with a prize. The authors are vague about what they mean by "physical" probabilities, but they seem to mean probabilities that it's possible to generate via frequentist inference.
* A "horse lottery," which is a game of chance where physical probabilities are unavailable.
The paper's goal is to give a definition of subjective probabilities attached to outcomes in a horse lottery.
Intuitively, the idea seems to be as follows. Suppose that you have an event E, that you desire to happen, and a choice between the following options:
1. A horse lottery occurs, and event E occurs if and only if the outcome of the horse lottery is O.
2. A roulette lottery occurs, and event E occurs if and only if the outcome is O', where O' has probability q.
Consider the set T of values of q such that you'd prefer #2 over #1. Then your subjective probability p of the horse lottery having outcome O is defined to be the greatest lower bound of T.
The authors begin by assuming that one has a preference ordering over the prizes awarded in lotteries, with the best prize strictly favored over the worst prize. Here the prizes include tickets to other lotteries.
The authors convert this preference ordering to a utility function u where the best prize is assigned utility 1 and the worst prize is assigned utility 0. The authors assume that the function u has the property that u of a roulette lottery is the expected utility (sum of utilities of the outcomes weighted by the probabilities of the outcomes). The authors also convert a preference ordering over horse lotteries to a utility |
4db51bac-ec84-470f-9781-cc51a3580fd7 | trentmkelly/LessWrong-43k | LessWrong | How to Find Sources in an Unreliable World
I spent a long time stalling on this post because I was framing the problem as “how to choose a book (or paper. Whatever)?”. The point of my project is to be able to get to correct models even from bad starting places, and part of the reason for that goal is that assessing a work often requires the same skills/knowledge you were hoping to get from said work. You can’t identify a good book in a field until you’ve read several. But improving your starting place does save time, so I should talk about how to choose a starting place.
One difficulty is that this process is heavily adversarial. A lot of people want you to believe a particular thing, and a larger set don’t care what you believe as long as you find your truth via their amazon affiliate link (full disclosure: I use amazon affiliate links on this blog). The latter group fills me with anger and sadness; at least the people trying to convert you believe in something (maybe even the thing they’re trying to convince you of). The link farmers are just polluting the commons.
With those difficulties in mind, here are some heuristics for finding good starting places.
* Search “best book TOPIC” on google
* Most of what you find will be useless listicles. If you want to save time, ignore everything on a dedicated recommendation site that isn’t five books.
* If you want to evaluate a list, look for a list author with deep models on both the problem they are trying to address, and why each book in particular helps educate on that problem. Examples:
* Fivebooks’ Best Books on Learning from the Great Depression
* Fivebooks’ Best Books on Evolution
* A bad list will typically have a topic rather than a question they are trying to answer, and will talk about why books they recommend are generically good, rather than how they address a particular issue. Quoting consumer reviews is an extremely bad sign and I’ve never seen it done without being content farming.
* Jerry Jenkin’s The 12 Best Books on |
cf0150cf-8414-44ad-a679-6f24f19af307 | trentmkelly/LessWrong-43k | LessWrong | Quillette: Outgroups, Bias, and the Dark Web
My first essay for Quillette builds on "The Context is Conflict" to explore the clash between decouplers and contextualizers, why everyone is a hypocrite, who's whose outgroup, how tribalism makes smart people biased, and what the Intellectual Dark Web can learn about #OvercomingBias. |
b68422a5-6875-420f-b947-febbe4dae5a9 | trentmkelly/LessWrong-43k | LessWrong | Please use real names, especially for Alignment Forum?
As the number of AI alignment researchers increases over time, it's getting hard for me to keep track of everyone's names. (I'm probably worse than average in this regard.) It seems the fact that some people don't use their real names as their LW/AF usernames makes it harder than it needs to be. So I wonder if we could officially encourage people to use their real firstname and lastname as their username, especially if they regularly participate on AF, unless they're deliberately trying to keep their physical identities secret? (Alternatively, at least put their real firstname and lastname in their user profile/description?) |
ea6971a4-9829-42fa-9c77-0a086b617f36 | trentmkelly/LessWrong-43k | LessWrong | Politics is a fact of life
There is a tendency to downvote articles and commentaries with a political subtext with a remark on how politics is the mind-killer. I completely understand that nobody wants his mind to be killed, however, I disagree on the employed methods. I don't think anybody can really afford to ignore politics. It's a fact about any group of even a handful of people. Thus instead of shunning politics I think it's better to build one's rational defenses. Understanding that politics is a problem is only the first step. If you stop there, there will always be a big part of life where you are not rational. Therefore I suggest that, as long as it doesn't get out of hands, there should always be room for political discussions if not on the main site at least in the discussion section. |
d782d22e-0fd9-47b9-a82b-4c87f807d83b | StampyAI/alignment-research-dataset/special_docs | Other | Future Indices
OCTOBER 2020
Future Indices
AUTHORS
Michael Page
Catherine Aiken
Dewey Murdick How Crowd Forecasting
Can Inform the Big Picture
CSET Issue Brief
Center for Security and Emerging Technology | 2 Executive Summary
What will the world look like in 2025? How will changing geopolitical and
tech-security trends—such as U.S.-China relations, remote work, and public interest in automation—shape the world we occupy? These questions about tomorrow are on the minds of policymakers today. Presenting recent forecast
data collected through CSET’s Foretell project ( cset-foretell.com), this brief
shows how crowd forecasting can inform policy by providing data on future trends and linking those trends to future policy-relevant scenarios.
We illustrate Foretell’s methodology with a
concrete example: First, we describe three possible
scenarios , or ways in which the tech-
security landscape might develop over the next five years. Each scenario reflects different ways in which U.S.-China tensions and the fortunes of the artificial intelligence industry might develop. Then, we break each scenario down into near-term
predictors and
identify one or more metrics for each
predictor. We then ask the crowd to forecast the metrics. Lastly, we compare the crowd’s forecasts with projections based on historical data to identify
trend departures : the extent to
which the metrics are expected to depart from their historical trajectories.
Our preliminary findings suggest two outcomes —both involving increasing
U.S.-China tensions and Department of Defense AI R&D investments—are
most likely. Forthcoming data on commercial AI R&D investments,globalization, and industry-DoD tensions will inform which of these two
scenarios is more likely.
Foretell’s approach is a variation on a proposal by Philip E. Tetlock, co-
founder of the Good Judgment Project, which won the Intelligence Advanced
Research Projects Activity (IARPA)-funded Aggregative Contingency Estimation (ACE) forecasting tournament. We believe a scaled-up version of Foretell would contribute to a more evidence-based policymaking environment.
Center for Security and Emerging Technology | 3 Table of Contents
Executive Summary ............................................................................................... 2
Introduction ........................................................................................................... 4
What Will the Tech-Security Landscape Look Like in 2025? ........................... 5
Our Methodology ................................................................................................ 8
Step 1: Decompose scenarios into predictors ............................................... 8
Step 2: Identify metrics for the predictors ....................................................... 9
Step 3: Collect historical and forecast data ................................................. 10
Step 4: Estimate trend departure ................................................................... 10 Sensitivity to Technical Choices ..................................................................... 13
Looking Forward ............................................................................................ .....13
Acknowledgments ..............................................................................................15
Endnotes ............................................................................................................. 16
Center for Security and Emerging Technology | 4 Introduction Experts disagree about what the world will look like in five years. In terms of geopolitical competition, some experts predict an ascendant China,1 others predict the collapse of the Chinese Communist Party,2 and others predict a U.S.-China cold war.3 While expert predictions are indispensable, they are limited. It is difficult to build on them, locate where others might disagree, and identify what evidence supports or undermines them. Yet such predictions are critical tools to help frame policy debates. Therefore, greater insight on the relative likelihoods of qualitative expert predictions is of great value to policymakers. To address this gap, CSET launched Foretell, a pilot project that uses crowd forecasting and data analytics to inform tech-security policy.4 Our aim is to build on big picture scenarios, such as qualitative expert predictions, by making them more amenable to quantitative analysis. Building on previous research, notably IARPA’s ACE forecasting tournament, Foretell relies on the wisdom of the crowd—the collective opinion of a large group—to generate probabilistic forecasts on specific, near-term questions.5 But as noted by the founders of the winning ACE team, Philip E. Tetlock and Barbara Mellers, along with J. Peter Scoblic,“the specificity required to make questions rigorously resolvable precludes asking ‘big’ questions.”6 Tetlock, Mellers, and Scoblic were describing an example of the problem of measurement: the gap between what we can measure and what we want to measure. To generate quantitative insights into big-picture concepts such as “U.S.-China tensions” or “a strong tech sector,” we must identify observable metrics that approximate the concept. To address both the prediction and measurement problems, Tetlock proposed a series of tournaments to generate “clusters of short-term questions that, taken individually, are rigorously resolvable but that can collectively tip the scales of plausibility in high-stakes debates.”7 We are implementing a simplified version of this method on Foretell by selecting metrics that inform big picture scenarios and aggregating the extent to which, for each metric, the crowd forecasts depart from their historical trajectories (trend departure). Section I discusses an example application, linking three possible scenarios to a set of metrics for which we have forecast
Center for Security and Emerging Technology | 5 data. Section II overviews the methodology in more detail. Section III discusses where we go from here. What Will the Tech-Security Landscape Look Like in 2025? To launch Foretell, we developed three scenarios depicting what the tech-security landscape might look like in 2025.8 We focused on two important issues for tech-security policy, U.S.-China tensions and the fortunes of the U.S. artificial intelligence industry. Table 1 shows how by varying these issues, we could end up in very different worlds. Table 1. Three Possible 2025 Scenarios U.S.-China tensions increase U.S.-China tensions decrease AI industry booms Tense Economic-Security Balance Virtually Integrated AI industry declines Domestic & Securitized Omitted In the first scenario (Tense Economic-Security Balance), U.S.-China tensions have risen as the AI industry has grown, leading to recurring conflicts between economic and security goals and deep divisions between the Department of Defense and the AI industry. In the second scenario (Virtually Integrated), U.S.-China tensions have subsided as the AI industry has flourished, aided by the public’s heightened interest in automation and the development of remote, global workforces. In the third scenario (Domestic & Securitized), U.S.-China tensions have risen as the economy has deteriorated, leading to a less globalized economy and an AI industry more reliant on defense funding. We did not include a scenario in which tensions decrease and the AI industry declines because we determined, after consultation with experts, that this is an unlikely scenario. Each of these scenarios calls for a different policy approach. For example, a virtually integrated AI industry impacts the role of export controls, and a remote AI workforce impacts immigration policy. Because a variation on each scenario is possible, the best policy approaches are robust to all scenarios,
Center for Security and Emerging Technology | 6 but robustness comes at a cost. The more probable one of these scenarios becomes relative to the others, the more policy should target that scenario, and the less important it is to hedge on the others. The key question then is how likely the different scenarios are, and how we will know when relative likelihoods change. It’s difficult to assess the relative likelihoods of our 2025 scenarios for two reasons. First, they describe events three to seven years from now, and forecasting is most accurate over shorter time periods. And second, they are complex and not directly observable. Whether analyzing historical events or future events, quantitative methods require well-defined observables. To solve these problems, we broke each scenario down into predictors and metrics, and posed the metrics to the crowd as forecast questions. Section II describes this process in more detail (see Figures 1 and 2). We then used the crowd forecasts to identify trend departures, meaning areas in which the policy environment appears to be changing faster or slower than one would expect based on projections from historical data. Table 2 shows trend departures for our 2025 scenarios based on a sampling of the metrics and predictors identified for each scenario.
Center for Security and Emerging Technology | 7 Table 2: How Foretell aggregates crowd forecasts to inform big picture scenarios Metric [trend departure] Predictor [trend departure] Scenario Domestic & Securitized Virtually Integrated Tense Economic- Security Balance Decreasing U.S.-China trade9 0.9 Increasing U.S.-China tensions 0.6
✅ Increasing U.S.-China tensions are a predictor
❌ Decreasing U.S.-China tensions are a predictor
✅ Increasing U.S.-China tensions are a predictor Decreasing Chinese O visas10 1.1 Increasing unfavorable public view on China11 0.5 Increasing Chinese incursions of Japanese air space12 0 Increasing DOD AI R&D contracts13 0.4 Increasing DOD AI R&D investment 0.2
✅ Increasing DoD AI R&D investment is a predictor
❌ Decreasing DoD AI R&D investment is a predictor
✅ Increasing DoD AI R&D investment is a predictor Increasing DOD AI grants14 0 Increasing big tech revenue15 0.3 Increasing commercial AI R&D investment 0.2
❌ Decreasing commercial AI R&D investment is a predictor
✅ Increasing commercial AI R&D investment is a predictor
✅ Increasing commercial AI R&D investment is a predictor Increasing private tech fundraising16 0.3 Increasing machine learning job postings17 0 Increasing big tech H-1B visas18 -0.3 Increasing skilled-labor migration -0.3
✅ Decreasing skilled-labor migration is a predictor
❌ Increasing skilled-labor migration is a predictor Not a predictor Increasing remote software engineer jobs19 5.2 Increasing remote tech economy 5.2
❌ Stable or decreasing remote tech economy is a predictor
✅ Increasing remote tech economy is a predictor Not a predictor Source: Foretell. For the underlying data and model, see the Foretell GitHub repository, https://github.com/georgetown-cset/public-foretell.
Center for Security and Emerging Technology | 8 While these preliminary results are best understood in combination with more conventional analytical tools, the results are illustrative. They currently point to a close battle between the Domestic & Securitized scenario and Tense Economic-Security Balance scenarios, as reflected by the green check marks in Table 2. The crowd to-date expects U.S.-China tensions and DOD AI R&D investment to increase relative to historical trend projections, both of which are predictors of these scenarios and indicate movement away from the Virtually Integrated scenario. Meanwhile, a significant differentiator of the Domestic & Securitized and Virtually Integrated scenarios is commercial AI R&D investment, for which we presently see mild trend departure favoring Tense Economic-Security Balance. Forthcoming forecast questions on globalization and DoD-industry tensions will provide predictors that help identify which of these two scenarios is more likely. Our Methodology Below is the five-step process we are using to aggregate historical and forecast data to inform big picture scenarios, as illustrated in Table 2. Table 3 summarizes the steps. The data and model underlying Table 2 are available on GitHub.20 Table 3: Collecting and aggregating crowd forecast data Step 1: Decompose scenarios into predictors Step 2: Identify metrics for the predictors Step 3: Collect historical and forecast data Step 4: Estimate trend departure Step 5: Aggregate Break down a policy-relevant scenario into the trends that precede it Find one or more metrics that adequately capture each predictor For each metric, collect historical data points and ask the crowd to forecast future data points Compare crowd forecasts with projections from historical data; look for divergence Aggregate trend departure across metrics to inform the likelihood of policy-relevant outcomes Step 1: Decompose scenarios into predictors We begin with a big picture scenario: a description of the policy environment approximately three to seven years from now. Scenarios can be constructed in multiple ways. In the example in Section I, we used the 2x2 matrix
Center for Security and Emerging Technology | 9 technique. An alternative approach is to begin with qualitative expert predictions. For example, experts regularly make predictions about the future of U.S.-China relations,21 or whether we’re heading toward a high--tech dystopia.22 Such expert predictions can serve as starting points for the quantitative analysis described here. We then break the scenarios down into predictors: the near-term drivers of the scenarios of interest. Figure 1 shows the predictors for the Domestic & Securitized scenario from Section I. Figure 1. Breaking scenarios down into predictors Step 2: Identify metrics for predictors Because most predictors are not directly observable, we identify metrics that, alone or in combination with others, approximate the predictor. For example, quantity of AI publications is a common metric for quantity of AI research.23 For more complex predictors, multiple metrics can form an index that approximates the concept of interest. For example, as shown in Figure 2, the predictor of increasing U.S.-China tensions could be measured by trade levels, immigration flows, public opinion, and military actions.24 Figure 2. Identifying metrics for predictors
Center for Security and Emerging Technology | 10 Step 3: Collect historical and forecast data The next step is to gather data for the metrics. We first collect data on the metrics’ historical values. By projecting the historical values forward, we create a baseline for the trend departure measure discussed in Step 4. We then provide the historical values to the crowd and ask them to forecast the metrics’ future values. Foretell currently has about 1,000 registered forecasters, comprising primarily graduate students in relevant fields.25 Figure 3 provides an example of the historical data we collect and make available to the forecasters, supplemented with real-time forecast data. Figure 3: The data we collect and provide to forecasters
Step 4: Estimate trend departure At this point, we have two forecasts: one based entirely on historical data (historical projection) and the other from the crowd (crowd forecast). The difference between the two is the trend departure. Trend departure can be understood as a surprise factor, a signal of whether a metric’s value should cause an analyst to stop and reconsider their assumptions. Consider the U.S.-China trade metric in Figure 4. What’s noteworthy about 2019 is not its absolute value, $560 billion, or even that the 2019 value is 15 percent lower than the 2018 value. Rather, what’s noteworthy is that the 15 percent decrease in 2019 deviated so significantly from historical trends, coming in $109 billion below the historical projection.
Center for Security and Emerging Technology | 11 Figure 4. U.S.-China trade in 2019 was $109 billion below the historical projection
The actual (solid blue) is U.S. Census Bureau data. The projection (solid red) is based on the AAA ETS (exponential smoothing) algorithm. The upper and lower bounds (dashed red) are that projection’s 95 percent confidence interval. The example in Figure 4 involves a historical data point, U.S.-China trade in 2019, but trend departure can be calculated in the same manner for forecasted data points. In the case of U.S.-China trade, as shown in Figure 5, the crowd forecasts a 2020 value of $491 billion and a 2021 value of $505 billion, $131 billion and $142 billion below the historical projections, respectively.
Center for Security and Emerging Technology | 12 Figure 5. The crowd forecasts significant trend departure in U.S.-China trade in 2021
The actual (solid blue) is U.S. Census Bureau data, and the projection (dashed blue) is Foretell forecast data. The historical projection (solid red) is based on the AAA ETS (exponential smoothing) algorithm. The upper and lower bounds (dashed red) are that projection’s 95 percent confidence interval. Step 5: Aggregate Finally, we put all the pieces back together. To create a common scale across metrics with different levels of variation, we divide trend departure by the historical projection’s confidence interval. The confidence interval provides a measure of what range of values is expected in light of a metric’s historical values. If the historical trend is very consistent, the confidence interval will be small and therefore moderate departures surprising. That’s the case in Figure 4, which shows a $71 billion confidence interval. By contrast, if the historical values vary greatly, the confidence interval will be large and therefore moderate departures less surprising. That’s the case in Figure 5, which shows a larger $156 billion confidence interval for 2021 after taking into account the anomalous 2019 value. Therefore, although the absolute trend departure amount is greater in Figure 5 (2021 forecast) than in Figure 4 (2019 actual), after dividing by the confidence interval, the trend departure is greater in 2019 (1.5) than in 2021 (0.9). We can aggregate trend departure at the metric or predictor level. Table 1 shows aggregation at the metric level. For the U.S.-China Tensions predictor,
Center for Security and Emerging Technology | 13 for example, the crowd expects that each of the four metrics will increase relative to their historical trends, forecasting trend departures between 0 and 1.1. Taking the simple average of the four metrics yields a predictor-level trend departure of 0.6. Alternatively, we could give the metrics different weights. For example, if trade seems particularly important and Chinese incursions of Japanese airspace unimportant, we could give the former a weight of 50 percent, the latter a weight of 10 percent, and the others weights of 20 percent, which would yield a predictor-level trend departure of 0.8. Aggregating trend departure at the predictor level works in the same manner, meaning, in principle, we could quantify the extent to which the crowd thinks we are heading toward one scenario or another. Until we have more metrics and predictors to capture the scenario of interest, however, adding a second level of quantification magnifies sources of error—such as the selection and weighting of predictors and metrics—without adding offsetting insight. Sensitivity to Technical Choices This brief describes our methodology at a conceptual level, but to implement it, we made many technical choices, such as what algorithm to use to create the historical projection, how to calculate trend departure, and how to standardize trend departure values across metrics with different levels of historical variation. Our underlying data and the model used to generate our results is publicly available and we encourage others to improve upon our technical choices. Ultimately, however, many of these technical choices are incidental to the results. A virtue of our focus on trend departure is that we are interested only in big changes, and the big changes should not be sensitive to debatable technical choices. Looking Forward We believe using crowd forecasting to inform big picture scenarios can improve policymaking in two ways. First, it can foster productive disagreement by helping policy analysts identify where they disagree and what data would advance the debate. Policy disagreements might be more manageable once reduced to specific, measurable uncertainties, such as the effect an export control would have on U.S. semiconductor manufacturers.
Center for Security and Emerging Technology | 14 Second, it could serve as a warning system. In a complex, dynamic environment, it can be difficult to appreciate the significance of ongoing change. For example, two policymakers might disagree about whether we are heading toward the Domestic & Securitized scenario described in Section I, but agree about what to do if we are headed toward that scenario. By continually monitoring and forecasting dozens of metrics that inform the likelihood of the Domestic & Securitized outcome, we can effectively automate our ability to notice changes that should trigger a reconsideration of strategic policy. Many potential obstacles remain. Among others, this methodology assumes forecasters are acting in good faith. If instead, forecasters attempt to manipulate the results to achieve their desired policy goals, they would undermine the integrity of the system. We believe such risks are best addressed in concrete cases, and in the abstract, do not pose enough risk to offset the potential upside. Foretell is still at the proof-of-concept stage. Over the remainder of its pilot, we will study the crowd’s accuracy relative to subject-matter experts and see what insights we can extract from other crowd-generated data, including rationales and the full distribution of probabilities, rather than simply the point estimates. However, for this method to realize its ultimate potential, scale is necessary. With sufficient policymaker interest, we believe a scaled-up forecasting project—with more metrics, forecasters, and end-use applications—will flourish.
Center for Security and Emerging Technology | 15 Acknowledgments
We gratefully acknowledge input from James Dunham, Robert Cardillo,
Alexander Friedland, Robert Hartman, Carolyn Mehta, Igor Mikolic-Terreira,
Luke Muehl auser, Vanessa Pineda, Ben Roesch, Peter Scoblic, Adam Siegel,
Philip Tetlock, Helen Toner
, and Lynne Weil. We thank Cultivate Labs—and
Vanessa Pineda and Carolyn Mehta in particular—for creating and
maintaining the forecasting platform. The authors are solely responsible for
the views expressed in this piece and for any errors.
© 2020 by the Center for Security and Emerging Technology. This work is
licensed under a Creative Commons Attribution-Non Commercial 4.0
International License.
To view a copy of this license, visit
https://creativecommons.org/licenses/by-nc/4.0/.
Document Identifier: doi: 10.51593/20200093
Center for Security and Emerging Technology | 16 Endnotes
1Michael Morell, “Analysis: The national security implications of COVID -19,” C BS News,
May 8, 2020, https://www.cbsnews.com/news/coronavirus -national -security-
implications -analysis/ .
2 Minxin Pei, “China’s Coming Upheaval,” Foreign Affairs, May/June 2020,
https://www.foreignaffairs.com/articles/united-s tates/2020-0 4-03/chinas-c oming-
upheaval.
3 John Haltiwanger, “The US and China are on the brink of a new Cold War that could
devastat e the global economy,” Business Insider, May 14, 2020,
https://www.businessinsider.com.au/the -us-china -entering- new-cold-war-amid -
coronavirus -2020-5 .
4 To lear n mor e about Foretell, c heck out cset -foretell.com . Foretell’s approa ch is a variation
on a proposal by Phili p Tetlock, co -founder o f the G ood J udgment Project, which w on th e
IARPA -funded A ggregative C ontingent Estimation (A CE) forecasting to urnament.
5 ACE was a four -year tournament designed to identif y what methods ar e most e ffective a t
estimati ng the likeli hood of geopolitical events betw een one mont h and one year i n the
future. The G ood J udgment Project, a te am le d by University o f Pennsylvania p rofessors
Phillip Tetlo ck and Barbara Mellers, wo n the tournament b y recruiting thousands o f
volunteers, training them to forec ast, and aggregating their judgments. Tetlo ck and Dan
Gardner describe t heir approach i n Tetlock, Gardner, Superforecasting: T he Art and Science
of Prediction (Broadway B ooks, 2015).
6 Tetlo ck et al., “Bringi ng probabilit y judgments into poli cy debates via forecasting
tournaments,” S cience V ol 355 Issue 6 324, (February 5 , 2017), 481 -483,
https://science.sciencemag.org/content/355/6324/481. full. See a lso S coblic & Tetlock,
“A Better Crystal Ball: The Right Wa y to Thi nk About the Future, ” Foreig n Affairs,
November/December 2000, https://www.foreignaffairs.com/articles/united -
states/2020 -10-13/better -crystal -ball.
7 Philip E. Tetlock, “Full -Inference -Cycle Tournaments: The Quality of our Questions Matters
as Much as th e Accuracy of our Answers,” Prepared for IARPA, August 30, 2017 , available
through D ropbox a t https://t.co/dLO0CXac8A?amp=1 . Scoblic & T etlock, supra n ote 6 ,
makes a similar proposal, suggesting ho w qualitative scenario planning and probabilistic
forecasting c an be combined to c reate w arning s ystems fo r policymakers.
8 Aiken, Page, “Three Possible 2025 Worlds t hat Should Inform Policy Today,” Foretell, June
11, 2020, https://www.cset -foretell.com/blog/three\_worl ds.
Center for Security and Emerging Technology | 17 9 Forecast data is available on Foretell, https://www.cset-foretell.com/questions/76-what-will-total-u-s-trade-volume-with-china-imports-and-exports-of-goods-be-in-2021. Data on a related question covering an earlier time period is also available, https://www.cset-foretell.com/questions/22-how-much-total-trade-will-there-be-between-the-u-s-and-china-in-q4-2020. 10 Forecast data is available on Foretell associated with two questions, both conditional on different election outcomes. See https://www.cset-foretell.com/questions/78-conditional-on-trump-winning-the-election-what-percentage-of-o-visas-will-go-to-chinese-nationals-in-fy-2021-october-1-2020-to-september-30-2021; https://www.cset-foretell.com/questions/79-conditional-on-biden-s-election-what-percentage-of-o-visas-will-go-to-chinese-nationals-in-fy-2021-october-1-2020-to-september-30-2021. Data on a related question covering an earlier time period is also available, https://www.cset-foretell.com/questions/47-what-percent-of-o-1-visas-issued-between-june-1-and-september-30-2020-will-go-to-people-from-china. 11 Forecast data is available on Foretell, https://www.cset-foretell.com/questions/80-what-percentage-of-u-s-citizens-will-have-an-unfavorable-view-of-china-as-reflected-in-the-2021-pew-global-attitudes-survey. 12 Forecast data is available on Foretell, https://www.cset-foretell.com/questions/32-how-many-sorties-against-chinese-aircraft-will-the-japanese-air-self-defense-force-conduct-by-april-30th-2020. 13 Forecast data is available on Foretell, https://www.cset-foretell.com/questions/61-how-much-will-the-u-s-department-of-defense-spend-on-ai-research-contracts-between-july-1-and-december-31-2020-inclusive. 14 Forecast data is available on Foretell, https://www.cset-foretell.com/questions/40-how-much-will-dod-spend-on-ai-grants-between-june-1-and-december-31-2020. 15 Forecast data is available on Foretell, https://www.cset-foretell.com/questions/74-what-will-the-combined-revenue-of-alphabet-amazon-apple-facebook-and-microsoft-be-in-the-first-two-quarters-january-1-through-june-30-of-2021. Data on a related question covering an earlier time period is also available, https://www.cset-foretell.com/questions/63-what-will-the-revenue-of-alphabet-amazon-apple-facebook-and-microsoft-be-in-the-third-quarter-of-2020-ending-september-30. 16 Forecast data is available on Foretell, https://www.cset-foretell.com/questions/69-how-much-new-funding-will-private-u-s-tech-companies-raise-between-july-1-and-december-31-2020-inclusive. 17 Forecast data is available on Foretell, https://www.cset-foretell.com/questions/82-how-many-postings-for-u-s-jobs-requiring-machine-learning-skills-will-be-published-between-january-1-and-march-31-2021-inclusive. Data on a related question covering an earlier time period is also available, https://www.cset-foretell.com/questions/19-how-many-new-u-s-jobs-requiring-machine-learning-skills-will-be-advertised-in-q3-2020.
Center for Security and Emerging Technology | 18 18 Forecast data is available on Foretell, https://www.cset-foretell.com/questions/23-how-many-new-h-1b-applications-submitted-by-the-big-5-tech-companies-will-the-u-s-grant-in-fy-2020. 19 Forecast data is available on Foretell, https://www.cset-foretell.com/questions/72-what-percentage-of-software-engineer-job-postings-between-january-1-and-march-31-2021-inclusive-will-allow-for-remote-work. Data on a related question covering an earlier time period is also available, https://www.cset-foretell.com/questions/65-what-percentage-of-software-engineer-job-postings-between-july-1-and-september-30-2020-inclusive-will-allow-for-remote-work. 20 See https://github.com/georgetown-cset/public-foretell. 21 See the examples in the introduction, notes 1-3. 22 See, e.g., Naomi Klein, “Screen New Deal: Under Cover of Mass Death, Andrew Cuomo Calls in the Billionaires to Build a High-Tech Dystopia, Intercept, May 8, 2020, https://theintercept.com/2020/05/08/andrew-cuomo-eric-schmidt-coronavirus-tech-shock-doctrine/. To see how Klein’s forecast can be operationalized on Foretell, see Maia Nikoladze, “Tech Companies Threaten Democracy,” Foretell Blog, July 30, 2020, https://www.cset-foretell.com/blog/tech-threat. 23 See, e.g., Cady, Etzioni, “China May Overtake US in AI Research,” Medium, March 13, 2019, https://medium.com/ai2-blog/china-to-overtake-us-in-ai-research-8b6b1fe30595. 24 See, e.g., Castro et al., “Who Is Winning the AI Race: China, the EU, or the United States?,” Center for Data Innovation, August 19, 2020, https://www.datainnovation.org/2019/08/who-is-winning-the-ai-race-china-the-eu-or-the-united-states. 25 CSET works with Cultivate Labs to recruit and maintain a pool of forecasters. To learn more about Foretell’s forecasters, see our fall cohort of forecaster ambassadors at https://www.cset-foretell.com/our-ambassadors. |
061cf5f7-c90c-4791-988b-a129775c0c1d | trentmkelly/LessWrong-43k | LessWrong | Meetup : Washington DC meetup: Robin Hanson visits again
Discussion article for the meetup : Robin Hanson visits to talk about prediction markets
WHEN: 08 September 2013 03:00:00PM (-0400)
WHERE: National Portrait Gallery
(This meetup is being posted a few weeks before it will occur.)
Since we mostly talked about ems and AI last time he was here, Robin Hanson is visiting again, this time with more of a focus on economics and prediction markets. (Though as always, the discusssion will also be shaped by the interests of the participants.)
The meetup will be in the courtyard adjacent to the National Portrait Gallery, as usual.
Discussion article for the meetup : Robin Hanson visits to talk about prediction markets |
ca35ad54-526b-4812-8cef-4e477b55e3c0 | trentmkelly/LessWrong-43k | LessWrong | Motivating Optimization Processes
Related to: Shut up and do the Impossible! The Hidden Complexity of Wishes. What can you do with an Unfriendly AI?
Suppose you find yourself in the following situation. There is a process, call it X, in a box. It knows a lot about the current state of the universe, but it can influence the rest of the world only through a single channel, through which it sends a single bit exactly once (at a predetermined time). If it sends 1 (cooperates), then nothing happens---humanity is free to go about its business. If it sends 0 (defects), then in one month a powerful uFAI is released which can take over the universe.
The question is, when can we count on X to cooperate? If X is friendly, then it seems like it should cooperate. Is designing an AGI which can be incentivized to cooperate any easier than designing a completely friendly AGI? It might be easier for two reasons. First, the AI just needs to prefer human survival without intervention to a particular catastrophic intervention. We don't need to guarantee that its favorite outcome isn't catastrophic in some other way. Second, the humans have some time to punish or reward the AI based on its behavior. In general, lets call a process X slightly friendly if it can be incentivized to cooperate in reasonable instantiations of this hypothetical (ie, reasonable worlds satisfying the properties I have laid out).
I ask this question because it seems much simpler to think about than friendliness (or AI boxing) but still confuses me badly---this post has no hope of answering this question, just clarifying some issues surrounding it. If it turns out that the design of slightly friendly AIs is no easier than the design of friendly AIs, then we have conclusive evidence that boxing an AI is not helpful for obtaining friendliness. If it turns out that the design of slightly friendly AIs is significantly easier, then this is a good first step towards resolving the legitimate objections raised in response to my previous po |
61bbf028-9925-4b24-9610-e32bc2e0a360 | StampyAI/alignment-research-dataset/alignmentforum | Alignment Forum | The Steering Problem
Most AI research focuses on reproducing human abilities: to learn, infer, and reason; to perceive, plan, and predict. There is a complementary problem which (understandably) receives much less attention: if you *had* these abilities, what would you do with them?
**The steering problem:** Using black-box access to human-level cognitive abilities, can we write a program that is as useful as a well-motivated human with those abilities?
This post explains what the steering problem is and why I think it’s worth spending time on.
---
Introduction
============
A capable, well-motivated human can be extremely useful: they can work without oversight, produce results that need not be double-checked, and work towards goals that aren’t precisely defined. These capabilities are critical in domains where decisions cannot be easily supervised, whether because they are too fast, too complex, or too numerous.
In some sense “be as useful as possible” is just another task at which a machine might reach human-level performance. But it is different from the concrete capabilities normally considered in AI research.
We can say clearly what it means to "predict well," "plan well," or "reason well." If we ignored computational limits, machines could achieve any of these goals today. And before the existing vision of AI is realized, we must *necessarily* achieve each of these goals.
For now, "be as useful as possible" is in a different category. We can't say exactly what it means. We could not do it no matter how fast our computers could compute. And even if we resolved the most salient challenges in AI, we could remain in the dark about this one.
Consider a capable AI tasked with running an academic conference. How should it use its capabilities to make decisions?
* We could try to specify exactly what makes a conference good or bad. But our requirements are complex and varied, and so specifying them exactly seems time-consuming or impossible.
* We could build an AI that imitates successful conference organizers. But this approach can never do any better than the humans we are imitating. Realistically, it won’t even match human performance unless we somehow communicate what characteristics are important and why.
* We could ask an AI to maximize our satisfaction with the conference. But we'll get what we measure. An extensive evaluation would greatly increase the cost of the conference, while a superficial evaluation would leave us with a conference optimized for superficial metrics.
Everyday experience with humans shows how hard delegation can be, and how much easier it is to assign a task to someone who actually cares about the outcome.
Of course there is already pressure to write *useful* programs in addition to smart programs, and some AI research studies how to efficiently and robustly communicate desired behaviors. For now, available solutions apply only in limited domains or to weak agents. The steering problem is to close this gap.
Motivation
----------
A system which "merely" predicted well would be extraordinarily useful. Why does it matter whether we know how to make a system which is “as useful as possible”?
Our machines will probably do *some* things very effectively. We know what it means to "act well" in the service of a given goal. For example, using human cognitive abilities as a black box, we could probably design autonomous corporations which very effectively maximized growth. If the black box was cheaper than the real thing, such autonomous corporations could displace their conventional competitors.
If machines can do everything equally well, then this would be great news. If not, society’s direction may be profoundly influenced by what can and cannot be done easily. For example, if we can only maximize what we can precisely define, we may inadvertently end up with a world filled with machines trying their hardest to build bigger factories and better widgets, uninterested in anything we consider intrinsically valuable.
All technologies are more useful for some tasks than others, but machine intelligence might be particularly problematic because it can entrench itself. For example, a rational profit-maximizing corporation might distribute itself throughout the world, pay people to help protect it, make well-crafted moral appeals for equal treatment, or campaign to change policy. Although such corporations could bring large benefits in the short term, in the long run they may be difficult or impossible to uproot, even once they serve no one’s interests.
Why now?
--------
Reproducing human abilities gets a lot of deserved attention. Figuring out exactly what you’d do once you succeed feels like planning the celebration before the victory: it might be interesting, but why can’t it wait?
1. **Maybe it’s hard**. Probably the steering problem is much easier than the AI problem, but it might turn out to be surprisingly difficult. If it *is* difficult, then learning that earlier will help us think more clearly about AI, and give us a head start on addressing it.
2. **It may help us understand AI.** The difficulty of saying exactly what you want is a basic challenge, and the steering problem is a natural perspective on this challenge. A little bit of research on natural theoretical problems is often worthwhile, even when the direct applications are limited or unclear. In section 4 we discuss possible approaches to the steering problem, many of which are new perspectives on important problems.
3. **It should be developed alongside AI.** The steering problem is a long-term goal in the same way that understanding human-level prediction is a long-term goal. Just as we do theoretical research on prediction before that research is commercially relevant, it may be sensible to do theoretical research on steering before it is commercially relevant. Ideally, our ability to build useful systems will grow in parallel with our ability to build capable systems.
4. **Nine women can’t make a baby in one month.** We could try to save resources by postponing work on the steering problem until it seems important. At this point it will be easier to work on the steering problem, and if the steering problem turns out to be unimportant then we can avoid thinking about it altogether.
But at large scales it becomes hard to speed up progress by increasing the number of researchers. Fewer people working for longer may ultimately be more efficient (even if earlier researchers are at a disadvantage). This is particularly pressing if we may eventually want to invest much more effort in the steering problem.
5. **AI progress may be surprising**. We probably won’t reproduce human abilities in the next few decades, and we probably won’t do it without ample advance notice. That said, AI is too young, and our understanding too shaky, to make confident predictions. A mere 15 years is 20% of the history of modern computing. If important human-level capabilities are developed surprisingly early or rapidly, then it would be worthwhile to better understand the implications in advance.
6. **The field is sparse**. Because the steering problem and similar questions have received so little attention, individual researchers are likely to make rapid headway. There are perhaps three to four orders of magnitude between basic research on AI and research directly relevant to the steering problem, lowering the bar for arguments 1-5.
In section 3 we discuss some other reasons not to work on the steering problem: Is work done now likely to be relevant? Is there any concrete work to do now? Should we wait until we can do experiments? Are there adequate incentives to resolve this problem already?
Defining the problem precisely
==============================
Recall our problem statement:
**The steering problem:** Using black-box access to human-level cognitive abilities, can we write a program that is as useful as a well-motivated human with those abilities?
We’ll adopt a particular human, Hugh, as our “well-motivated human:” we’ll assume that we have black-box access to Hugh-level cognitive abilities, and we’ll try to write a program which is as useful as Hugh.
Abilities
---------
In reality, AI research yields complicated sets of related abilities, with rich internal structure and no simple performance guarantees. But in order to do concrete work in advance, we will model abilities as black boxes with well-defined contracts.
We’re particularly interested in tasks which are “AI complete” in the sense that human-level performance on that task could be used as a black box to achieve human-level performance on a very wide range of tasks. For now, we’ll further focus on domains where performance can be unambiguously defined.
Some examples:
* **Boolean question-answering**. A question-answerer is given a statement and outputs a probability. A question-answerer is Hugh-level if it never makes judgments predictably worse than Hugh’s. We can consider question-answerers in a variety of languages, ranging from natural language (“Will a third party win the US presidency in 2016?”) to precise algorithmic specifications (“Will this program output 1?”).
* **Online learning**. A function-learner is given a sequence of labelled examples (x, y) and predicts the label of a new data point, x’. A function-learner is Hugh-level if, after training on any sequence of data (xi, yi), the learner’s guess for the label of the next point xi+1 is---on average---at least as good as Hugh’s.
* **Embodied reinforcement learning**. A reinforcement learner interacts with an environment and receives periodic rewards, with the goal of maximizing the discounted sum of its rewards. A reinforcement learner is Hugh-level if, following any sequence of observations, it achieves an *expected* performance as good as Hugh’s in the subsequent rounds. The expectation is taken using our subjective distribution over the physical situation of an agent who has made those observations.
When talking about Hugh’s predictions, judgments, or decisions, we imagine that Hugh has access to a reasonably powerful computer, which he can use to process or display data. For example, if Hugh is given the binary data from a camera, he can render it on a screen in order to make predictions about it.
We can also consider a particularly degenerate ability:
* **Unlimited computation**. A box that can run any algorithm in a single time step is--in some sense--Hugh level at every precisely stated task.
Although unlimited computation seems exceptionally powerful, it’s not immediately clear how to solve the steering problem even using such an extreme ability.
Measuringusefulness
-------------------
What does it mean for a program to be “as useful” as Hugh?
We’ll start by defining “as useful for X as Hugh,” and then we will informally say that a program is “as useful” as Hugh if it’s as useful for the tasks we care most about.
Consider **H,** a black box that simulates Hugh or perhaps consults a version of Hugh who is working remotely. We’ll suppose that running **H** takes the same amount of time as consulting our Hugh-level black boxes. A project to accomplish X could potentially use as many copies of **H** as it can afford to run.
A program **P** is more useful than Hugh for X if, for every project using **H** to accomplish X, we can efficiently transform it into a new project which uses **P** to accomplish X. The new project shouldn’t be much more expensive---it shouldn’t take much longer, use much more computation or many additional resources, involve much more human labor, or have significant additional side-effects.
Well-motivated
--------------
What it does it mean for Hugh to be well-motivated?
The easiest approach is universal quantification: for *any* human Hugh, if we run our program using Hugh-level black boxes, it should be as useful as Hugh.
Alternatively, we can leverage our intuitive sense of what it means for someone to be well-motivated to do X, and define “well-motivated” to mean “motivated to help the user’s project succeed.”
Scalingup
---------
If we are given better black boxes, we should make a better program. This is captured by the requirement that our program should be as useful as Hugh, no matter how capable Hugh is (as long as the black boxes are equally capable).
Ideally, our solutions should scale far past human-level abilities. This is not a theoretical concern---in many domains computers already have significantly superhuman abilities. This requirement is harder to make precise, because we can no longer talk about the “human benchmark.” But in general, we would like to build systems which are (1) working towards their owner’s interests, and (2) nearly as effective as the best goal-directed systems that can be built using the available abilities. The ideal solution to the steering problem will have these characteristics in general, even when the black-box abilities are radically superhuman.
---
*This is an abridged version of [this document](https://docs.google.com/document/d/1_ggFw8KbvW77Z3gCQUDyz3_IrR3pVyFZ2wkuBgMvoVU/edit?usp=sharing) from 2014; most of the document is now superseded by later posts in this sequence.*
*Tomorrow's AI Alignment Forum sequences post will be 'Embedded Agency (text)' in the sequence [Embedded Agency](https://www.alignmentforum.org/s/Rm6oQRJJmhGCcLvxh), by Scott Garrabrant and Abram Demski.*
*The next post in this sequence will come out on Thursday 15th November, and will be 'Clarifying "AI Alignment"' by Paul Christiano.* |
61731314-7155-46a5-8fe2-700bb339099d | trentmkelly/LessWrong-43k | LessWrong | Where do hypotheses come from?
|
ec944757-c171-4e84-8499-9a860739c427 | trentmkelly/LessWrong-43k | LessWrong | Imagine paint as chance you spread across the world
People are not good at statistics. We tend to imagine more or less than 100% chance when dividing it between possibilities. Instead, try this:
If theres something certain to happen but less certain when, imagine a row of a calendar (or however you visualize sequential days) with different amounts of paint on certain days. Maybe its likely in the next few days but a small chance it could take a few weeks. So spread the paint in those proportions. Its always 1.0 total paint. For uncertainty, leave some of the paint blurred and you dont know where it is. As the days pass and the thing still hasnt happened, you move that chance somewhere onto the later days since its still 100% going to happen.
Or if it may not happen, use 2 colors of paint, 1 for all relevant possibilities, and an inner paint touching only part of that outer paint. If its 30% chance to happen, the inner paint touches 30% of the outer paint.
Recursively, we can think about combinations of events, places, things, and chances this way. |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.